The Noise Tax
When We Judge Meaning By Its Form
I dabble as a software developer. I treat my consciousness like code — full of insightful but unoptimised meaning — and I use AI to refactor the signal into something a reader can actually parse. According to a growing number of platforms, institutions, and social norms, this makes my output suspect. If my prose is too clean, my soul must be a fraud.
Let me show you how this works in practice, using a platform that has made the contradiction visible.
Medium's official position is elegant in its simplicity: "Medium is for human storytelling, not AI generated writing." They have built an enforcement apparatus around this principle — detection tools, manual reviewers, distribution penalties, Partner Program revocations. The machine must not speak. Only the human may speak. This sounds noble until you look at what we humans are writing about.
Here is what Medium's algorithm promoted to the front page on a single day in February 2026: "The 7 Symptoms That You Have A High IQ, Even If You Don't Feel Like It," "Forget ChatGPT & Gemini — Here Are New AI Tools That Will Blow Your Mind," "Psychologists Say This Small Boundary Habit Makes People Respect You Instantly," and "There Are Only Five Rules In Life (According to Carl Jung)." The algorithm is saying "This is important. Engage," not unlike the way tabloids used to say "This is trash. Enjoy."
These are not human stories. They are engagement algorithms wearing a human skin — the "Forget X, Do Y" template, the "Psychologists Say" appeal to borrowed authority, the listicle numerology of "7 Symptoms" and "5 Rules." A language model could generate a thousand of these per minute. A language model did generate a thousand of these per minute, which is precisely why human writers learned to mimic the pattern. It works. The algorithm rewards the template. The template gets the claps. The claps get the paycheck.
Now consider my process. I sit with the friction of being alive, just like you: the confusion, the contradiction, the half-formed metaphors that surface when consciousness rubs against something it cannot resolve. I capture that friction in a stream-of-consciousness draft. Raw signal. High entropy. The kind of writing that contains meaning the way a seismograph contains an earthquake: the pattern is there, but you need interpretation to read it, a guiding hand. Then I use an AI to iterate over that draft. The AI tightens the prose, clarifies the syntax, ensures the reader can follow the argument without losing the thread. I make the ninth draft. At every stage, I hold the gavel. The meaning is mine. The authority is mine. The AI touches nothing that I have not approved, revised, or rejected. It amplifies my signal.
The result is an essay called "The Hotdog, the Void, and the Will to Want." A modern LLM could technically generate a title like that — the capability exists. But no spam farm prompts for it, because it doesn't optimize for clicks. The difference between my work and AI slop isn't what the tool can produce; it's that I chose to write it. That's not a question of authorship. It's a question of taste.
And here is the structural problem, not just for Medium but for every institution now grappling with AI: there are three things being conflated that should be kept separate.
- Authorship — who generated the tokens.
- Agency — who decided what should be said.
- Taste — who judged whether it was worth saying.
Medium, like most systems, can only weakly observe authorship. What actually matters is agency and taste, and those are invisible to any detection tool. So the platform ends up rewarding human labor without human taste and punishing human taste expressed through machine labor.
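Since I am going to call this a design bug report, let me sketch the bug the way a developer would. What follows is a purely hypothetical illustration in Python; the `Essay` fields and `platform_detector` are my inventions, not Medium's actual system. The point is structural: the only field a detector can even attempt to read is the one that matters least.

```python
from dataclasses import dataclass

@dataclass
class Essay:
    machine_token_share: float  # authorship: fraction of text generated by AI (the only weakly observable field)
    human_decided: bool         # agency: a human decided what should be said (invisible to any detector)
    human_judged_worth: bool    # taste: a human judged whether it was worth saying (also invisible)

def platform_detector(essay: Essay) -> str:
    """A caricature of any detection tool: it can only act on authorship."""
    # Agency and taste never reach the classifier; they are simply not in its input.
    return "penalize" if essay.machine_token_share > 0.5 else "promote"

ninth_draft = Essay(machine_token_share=0.7, human_decided=True, human_judged_worth=True)
listicle_farm = Essay(machine_token_share=0.0, human_decided=True, human_judged_worth=False)

print(platform_detector(ninth_draft))    # "penalize": human taste expressed through machine labor
print(platform_detector(listicle_farm))  # "promote": human labor without human taste
```

The ninth-draft essay gets penalized; the taste-free listicle sails through. That is the bug.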
I should be fair. Medium is not trying to punish clarity. They are trying to stop their platform from drowning in zero-effort AI sludge, and that is a legitimate operational problem under genuinely adversarial conditions. Spam farms also produce clean, structured, confident prose. They also add disclosures. They also claim editorial oversight. At scale, a ninth-draft essay refined through genuine intellectual struggle looks identical to content milled out by a bot in four seconds. Medium overcorrects because they have no other mechanism available to them. I understand why they built these rules. And still those rules end up arrayed against the clear expression of human meaning.
Their policies can distinguish how text was produced far better than why it exists, and that gap selects against serious writers who are trying to do the work. This is not an attack. It is a design bug report.
But Medium is only an example of a pattern spreading everywhere. In academia, a student who uses AI to clarify their thinking is treated the same as a student who had AI think for them. In journalism, AI-assisted reporting gets dismissed regardless of whether the investigation was genuine. In workplaces, "that sounds like it was written by AI" is becoming a way to discredit an argument without engaging with its substance. "Used AI" is the new ad hominem — an intellectually lazy attack on the form to dismiss the content.
We have been here before, and that is what makes the irony unbearable. For centuries, people were dismissed because their signal was too noisy — the wrong accent, the misspelt word, the mangled grammar. A working-class person could have the most incisive insight in the room, but if they dropped an 'h' or split an infinitive, the substance was ignored in favour of the surface. We built entire class systems around this: the form of expression as a proxy for the worth of the speaker.
The Noise Tax is the old judging-a-book-by-its-cover reflex, inverted. In the old world, too much noise meant you were uneducated and therefore ignorable. In the new world, too little noise means you used a machine and are therefore also ignorable. In both cases, the manoeuvre is identical: evaluate the form to avoid engaging with the meaning. The person who cannot spell and the person who used AI to spell correctly are both dismissed — and in either case their voice is negated.
This is the meaning-versus-form tension scaled to civilisation. We have built detection systems — social, institutional, algorithmic — that can only measure how something was produced, never why it exists or whether it is true. So we fixate on authorship because agency and taste are invisible, and we end up in an economy where imperfection becomes a hallmark of authenticity. Messiness becomes a luxury good that proves you are real. Clarity becomes evidence of contamination. The signal-to-noise ratio suffers.
I refuse to pay.
My consciousness is messy and hard to convey in an understandable way. The balance between my signal and the machine's structural precision is the best method I have found to transmit meaning from my mind to yours. Asking me to degrade that transmission — to reintroduce noise I spent nine drafts removing — so that a detection algorithm believes I am human is no different from asking me to misspell a word so that a professor believes I am trying. In both cases, the demand is the same: perform inadequacy to earn the right to be heard.
The question facing every platform, every institution, and every person making judgements about AI-assisted work is simple: are we going to evaluate ideas by their origin or by their merit? Because if the answer is origin, then the listicle farmers will be the only ones left standing — human-typed, algorithmically optimised, semantically empty, and completely undetectable. And the writers trying to transmit something real will have been taxed out of existence for the crime of being too clear.
The gavel is mine. The meaning is mine. The machine is an amplifier, not a voice. If the system cannot tell the difference, the system needs debugging — not the signal.
🌊 Follow the current on Medium
Søren Aas