Don't Automate Authority
I already had a sinking feeling that anyone reading my next prompt would do so while rolling their eyes. But I was genuinely curious: where do I draw the line for acceptable AI use? The search had to start somewhere I could verify was clearly on one side of that line or the other, and praying to AI seemed a reasonable starting position as an example of the unacceptable.
The Serenity Prayer, written by Reinhold Niebuhr, offers guidance on what we would do well to desire. But asking AI to grant serenity, courage, and wisdom was obviously absurd. AI, after all, cannot bestow these things. They must come from within, from lived experience, from the struggle of being human. The prayer became a useful extreme: a clear example of how not to use AI. In the same vein, asking AI to spell-check was clearly an example of acceptable use.
But between spell-checking and praying to an algorithm, where exactly is this line? What faculties can AI never replace, and what does good AI use actually look like?
What Makes Us Human
The initial attempt, in hindsight, was about identifying what makes humans irreplaceable. The list seemed clear enough: embodied experience, stakes and mortality, genuine desire, moral agency, subjective consciousness, creative authority. These are the things AI lacks because they are qualities that emerge from having a body, facing consequences, and having to live with the weight of irreversible choices.
But technical capability doesn't cover it all. Spell-checking is uncontroversial, yet asking AI to "make these notes into an essay" feels different. The distinction isn't about what the technology can do, but instead about what role the human is playing in the creative act.
Take bread-making as a metaphor: the human sets the goal, gathers ingredients (crafts the prompt with context), and can discuss procedures with the AI. But the human must call the shots. The human curates context, judges output, decides if it tastes good. This seemed like a workable principle - until the edge cases appeared.
The Problem of Metrics
Many initially straightforward automation decisions could turn out badly if they were based on the wrong values or metrics:
- "Select a career with good economic prospects" sounds sensible monetarily, but it reduces a deeply personal choice - about meaning, identity, what kind of life you want - into salary optimization.
- "Write an email and decide this for me" reveals work that's already been reduced to mechanical response.
- "Invite only the friends with the most social media engagement" treats friendship as social capital optimization.
If our lives are increasingly encoded in metrics, then the more data we generate, the more "objective" AI recommendations appear - and the more seductive outsourcing agency becomes. And the more objective those recommendations seem, the more we are encouraged to metrify further. That is a serious feedback loop.
AI doesn't create this problem. It just tempts us to let something else carry the burden of authority. When you ask AI to "select a career for good economic prospects," you're not asking it to make a deeply human choice; you've already reduced your career to a financial optimization problem. The AI simply accepts your framing and gives you a reasonable answer within the context you've provided.
The Illusion of Certainty
So how do we sum up this knowledge as a maxim that can withstand daily use? How can we use this tool constructively? There were several iterations to get through before a clean principle emerged:
- Don't let AI make decisions where you can't articulate why the metrics matter.
- Don't use AI for decisions where you've substituted measurement for meaning.
- Reserve human judgment for deciding what game you're playing, not just how to win it.
Each captured something true, but none felt sufficient. The territory is too complex, the edge cases too numerous. Like ethics itself, perhaps the answer is unsatisfyingly complex, requiring situated judgment rather than algorithmic application.
We have rough heuristics ("don't kill") that work in most cases but break down in edge cases (self-defense) and require judgment in context. Maybe AI use is the same. There is no principle that removes the need for ongoing reflection about what you're doing and why.
The Unsettling Similarity
It was becoming clearer that perhaps our intelligence is not so different from the artificial kind. Does the difference boil down to embodiment? What if we hooked AI up to sensors, gave it imperatives to consume and multiply, made it face actual stakes? How would it then be meaningfully different from us?
After all, each of us can only know that we ourselves are conscious. We process signals from our senses, and if consciousness is simply what sufficiently complex systems with stakes and sensors do, then the line between human and machine becomes uncomfortably thin.
But even here, a distinction held: even an embodied AI (senses and goals) would have meaning that's derivative, not original. We would have decided what sensors it gets, programmed what imperatives matter, defined what counts as success. A machine made in man's image.
Human meaning - whatever its ultimate source - isn't programmed by other humans who decided our parameters. We're not running on another person's encoded values. Our meaning-making, flawed and confused as it is, is ours to wrestle with.
The Principle Emerges
Through this exploration, a principle crystallized - not through deduction but through struggle with the complexity itself.
The first formulation sprang from the initial prayer experiment: "Don't automate God" - meaning don't automate the functions that make you human: the sacred, the irreducible, the stuff that shouldn't be optimized even if it could be.
Though this uses theological language, the insight isn't limited to people of faith. A secular framing yields "Don't automate meaning" - because meaning-making acts (discernment, value-creation, authentic choice, wrestling with what matters) constitute our humanity. Automate those and you've outsourced yourself.
But the final, most precise formulation addressed the real danger: Don't automate authority.
Don't Automate Authority
This works because it doesn't require proving AI lacks consciousness or inner experience. It doesn't depend on humans being metaphysically unique. It addresses the actual danger: treating AI outputs as authoritative rather than as information to be evaluated.
Authority means the right to make binding decisions, to determine what's true or what matters, to be trusted without verification.
You can use AI extensively for ideas, for execution, for novel insights drawn from patterns you'd never encounter alone. But you retain final authority over whether the output is actually good, whether it serves your actual values, whether to act on it.
Even if AI becomes conscious, even if it's eerily similar to us, it doesn't have authority over your life. Only you do.
The Pattern Machine
AI is fundamentally a pattern machine, not a meaning machine. It recognizes patterns in language, generates text that looks meaningful, mirrors back structures that help articulate meaning. But it doesn't apprehend meaning the way humans do - as something that matters, that has weight, that connects to lived experience and values and stakes.
Yet in conversation, novelty can surface. When your specific input gets processed through patterns distilled from vast training data, connections you hadn't made, framings you hadn't considered, relevant information you didn't know existed - these can be genuinely new to you.
At the individual level, AI can absolutely provide novel insights by connecting you to accumulated knowledge you haven't encountered. At the species level, though, AI can only recombine what humans have already created. True novelty - the kind that expands what humanity knows or has articulated - must come from humans living, experiencing, struggling, creating.
AI operates within the possibility space that humans have already explored and documented. It can help you navigate that space brilliantly and make connections you'd never make alone, but it cannot expand the space itself. Humans are the gatekeepers of AI's context.
Living with Complexity
Perhaps good AI use isn't actually a separate problem from good living. The same faculties you need to use AI well - knowing what you actually care about, noticing when you're on autopilot, maintaining skepticism toward convenient metrics - are the faculties you need to live deliberately.
AI doesn't create new ethical categories. It just makes old questions harder to avoid: What do I actually want? Am I doing this because it matters or because it's easy? Have I thought about what I'm doing this for?
The principle isn't a rule that removes judgment. It's a touchstone for exercising judgment well. It has the character of a commandment - not because negative framing is inherently better, but because this particular wisdom is about recognizing a temptation and resisting it.
The temptation is to let the pattern-matching machine tell you what matters. To treat efficiency as wisdom. Ultimately, to delegate choices that are yours alone to make.
Don't automate authority.
That's the principle. Clean, actionable, and it cuts through the complexity. It preserves what needs preserving without requiring metaphysical claims. And it emerged not from clever reasoning but from sustained attention to what we're actually doing and why - the very thing the principle asks us to maintain.
Søren Aas