Some time ago there was a philosopher who was obsessed with a problem: how people deal with freedom. He argued that human beings are not born with a fixed essence or built-in script. We exist first, and then through our choices, habits, commitments, and evasions, we become something. There’s also a darker side: if there is no ready-made script, then responsibility does not go away; it lands on us.
That philosopher was Jean-Paul Sartre, and his ideas are increasingly relevant in the age of AI.
We are building systems that can help us do almost anything. They can write the draft, suggest the plan, generate the code, summarize the paper, rank the options, and increasingly execute parts of the workflow. But this abundance also makes one human capacity much more important: agency. Once intelligence becomes abundant, what matters more is direction: which goals are worth pursuing, which trade-offs are acceptable, which outputs actually matter. Sartre is useful here because he understood that the hardest part of freedom was never coming up with more options. It was living as someone who has to choose.
AI changes the economics of intelligence, but not the burden of choosing
A small app or research prototype that used to take me weeks can now often be done in a day. In 2023 I had to settle for writing Project Idea - Where your tax money go, because it felt like a big project that would require many people to complete; now I’ve built a first version of Berlin Bill in one day. I wrote Is it easy to build a Flutter Android app? in 2024, and in 2026 the answer is simply yes. Vibecoded projects overwhelm Reddit and Hacker News, and now everybody can create a prototype. But none of that removes the main question: where to commit most of your time.
Sartre matters because he strips away the comforting fiction that we are merely following roles
You can now look like a founder, a researcher, a designer, a developer, a writer. A lot of the surface area of competence is now cheap. This is useful, and I use it all the time, but it also creates a trap: you can start inhabiting a role without ever really choosing what you are trying to do.
Sartre is good on exactly this point. He keeps attacking the idea that we are just the roles we perform. The waiter is not “just a waiter”. The bureaucrat is not “just doing his job”. These identities feel solid because they are socially legible and psychologically convenient. For Sartre, there is still a person there, choosing, even if the choice is to drift.
I think AI makes this more relevant, not less.
It is now possible to build an entire mini-economy around role performance. You can generate the aesthetics of seriousness before you have done serious work. You can sound informed before you have formed a view. You can produce “founder output” without having picked a problem you are willing to care about for more than three days. The issue is that the tools make it easier to avoid the uncomfortable part where you ask: what am I actually committing to here?
AI makes Sartre’s idea of bad faith more relevant
Sartre’s term for one common escape hatch is bad faith. Roughly: it is what happens when I pretend I am less free than I really am because the alternative would require responsibility. I tell myself a story where I am trapped, or defined, or simply following necessity, when in reality I am also participating in the situation.
What changed with LLMs is not that they suddenly created bad faith. People were already doing that just fine. The more interesting change is that they make some old forms of bad faith harder to defend.
A lot of “I can’t do this” used to hide inside a large blob of unknowns. How do I register a business? What kind of company should I open? How do taxes work? What does the first landing page look like? But AI is very good at turning a vague “I have no idea how to begin” into “okay, these are probably the first five steps.” It can help with all of the above, and generally reduce the amount of confusion between idea and action.
This matters because bad faith often survives on vagueness. “I’d need a team.” “I’d need funding.” “I’d need to understand too many things first.” Sometimes that is true, but often it is half-true. LLMs are useful because they attack the half-true part. They do not magically make someone brave or disciplined, but they do make it easier to separate real constraints from ambient confusion.
That is why Sartre feels current to me here. It is harder to say “I had no way to start” when a pretty decent first pass is sitting one prompt away.
In a world of abundant intelligence, the scarce resources are initiative, taste, courage, and responsibility
LLMs mostly make it easier to express agency. They lower the activation energy for doing things. That alone should be enough to push more people into starting side businesses, trying weird tools, building niche products, or leaving jobs they were only staying in because the alternative felt too operationally messy.
This is also why the “just build things” crowd in SF is not completely wrong, even if they are occasionally unbearable online.
The new, harder questions are:
- what are you choosing to work on?
- is it actually useful to someone?
- do you have an edge here, or are you just enjoying the feeling of motion?
- are you building something real, or just another polished artifact of indecision?
So I think the optimistic case for AI is real, but it is not the usual one. It is not just “everyone becomes more productive.” It is that more people can now get through the fog that used to stop them from even trying. They can spend less energy on unfamiliar bureaucracy and generic execution, and more on the part that is actually theirs: domain knowledge, judgment, relationships, taste, persistence, whatever their real edge is.
That only sharpens Sartre’s problem, because once the unknown becomes less unknown, the old excuse of “I had no real option” starts sounding weaker. What remains is the harder and more honest question: what am I actually going to commit to?