AI Can't Grant You Serenity

Why spell-checking is fine but career advice isn't

This essay chronicles a philosophical investigation that started with an absurd experiment: what happens when you pray to an AI?

The author uses this as a clear boundary marker—obviously we shouldn’t do that—then works backward to find the principle that explains why. The journey moves through several attempts:

First attempt: Cataloging what makes humans irreplaceable (embodied experience, mortality, genuine desire). But this proves insufficient because the issue isn’t just about AI’s technical limitations.

Second attempt: The “bread-making metaphor”—humans should set goals and judge outputs while AI executes. This division of labor works until edge cases blur the line between executing and deciding.

The real problem emerges: It’s not about what AI can’t do, but about what happens when we reduce meaningful choices to metrics. “Select a career with good economic prospects” isn’t wrong because AI lacks consciousness—it’s wrong because it treats your life’s direction as an optimization problem.

Several candidate principles are tested and discarded; each captures something true, but none fully satisfies.

The final principle: “Don’t automate authority.”

This works because it doesn’t require proving AI lacks inner experience or that humans are metaphysically special. It addresses the actual danger: treating AI outputs as authoritative rather than as information requiring your evaluation.

Authority means the right to make binding decisions, to determine what’s true or what matters. You can use AI extensively—for ideas, execution, novel connections—but you retain final authority over whether the output serves your actual values.

The essay concludes that AI is a “pattern machine, not a meaning machine.” It can surface genuine novelty for you by connecting patterns you haven’t encountered, but it operates within the possibility space humans have already created. Only humans, through lived experience, can expand that space.

The principle isn’t a rule that removes judgment—it’s a touchstone for exercising judgment well. And appropriately, it emerged not from clever reasoning but from sustained attention to what we’re actually doing and why.

The full essay explores these ideas with more nuance, including reflections on metrics culture, the illusion of certainty, and whether embodied AI would be meaningfully different from humans.
