“I think everybody should get rich and famous and do everything they ever dreamed of so they can see that it's not the answer.” — Jim Carrey
As long as a society has fathers who don't hug their sons, that society is going to have a non-zero number of people walking around who don't understand that no amount of money or status will ever make them feel complete; and who will work tirelessly to try and extract as much adoration from strangers as they can until, well, they die. Unless a society offers people whose fathers didn’t hug them enough something to do besides eat us all alive in their Sisyphean quest to feel whole, those people will eat us all alive in their Sisyphean quest to feel whole.
I’m no economist, but there’s a feature of capitalist societies that I do think is very clever. Capitalist societies —the theory goes— come with a sort of pressure-release valve for handling the people whose fathers didn’t hug them enough. You see, there are some nice human things (respect, love, emotional security) that can’t be purchased with money, but that fact doesn’t stop marketers and propagandists from LARPing the illusion that they can. And for some reason, people whose fathers didn’t hug them enough fall for this illusion every single time. So in their aforementioned Sisyphean quests to feel whole, the people whose fathers didn’t hug them enough will often try and fill the un-fillable void in their hearts with money.
There are no billionaires who feel whole inside. Anyone capable of experiencing contentedness would have reached that milestone long before the first billion dollars was poured into the void in their chest.
At some point during their Sisyphean quests (assuming capitalism is working properly) the people whose fathers didn’t hug them enough will pay taxes on the money they collected before tossing it into their chest-voids, and the rest of society benefits by spending that money on stuff like healthcare and bike lanes. By giving the ambitious but insecure people a path to high social status that doesn’t rely on creative talent or conquering anyone, it becomes kinda like the rest of us are exploiting their labor.
If you ask me (a low-IQ internet essayist), this hints at the actual root cause of the collapse of the Soviet Union. Without an option to hoard rubles, the Soviet citizens whose fathers didn’t hug them enough had to pursue social status and a sense of control in other, more socially destructive ways, like politics. It doesn’t take a double polisci/psych major to see that a government operated by the most insecure, fearful and self-loathing among us is ripe for corruption, and a corrupt state can only last so long before imploding. Which, as even a below-average double polisci/psych major can tell you, is exactly what the Soviet Union did.
I have a personal rule when discussing politics. That rule goes like this:
“Any political theory that begins with the phrase ‘if only everyone could just…’ can never succeed in the real world.”
And yet, I can’t help but wonder: without a capitalist pressure-release valve, if only every Soviet parent could have just hugged their sons… maybe their system could have endured?
Value Creation vs Value Extraction
The reason I (did something I swore I’d never do in this newsletter and) brought up capitalism in the opening paragraph is because there is one thing I just can’t get over when it comes to the “uses” of generative AI software. The thing that makes me feel like I’m taking crazy pills when I read about what it can “do”. That thing —zoomed out and looked at from a macroscopic level— is that generative AI really is only useful in specific economic microcosms where something called value extraction is involved.
Brief economics lesson: “value creation” refers to the process of making something more valuable: say, transforming raw cotton into a cool shirt, or a toxic ex-boyfriend into poetry. “Value extraction” refers to the process of making an existing process more “efficient”, like instead of turning cotton into shirts, you buy an existing profitable shirt company and move production to a country with cheaper labor. Or, if you own a search engine company with 90% market share and no more customers to capture, you could make the search quality a little worse, so that people have to use it more and see more ads.
Consider that the slop an “AI” generator outputs is worth (financially, culturally) very, very little, and dear reader, the reason is not “because the technology hasn’t advanced enough”. Slop is slop even when it gets the fingers right and even when it’s made by a human being in possession of a divine soul. But to my knowledge, there exist very few humans who would knowingly spend money on an LLM-generated novel, LLM-generated music, or a print-out of an “AI”-generated image (I like to say that the printer ink is more valuable left unused).
Now, there are other markets for generated content like AI-generated porno, boyfriends1, and chatbot-“therapy”, but dear reader, please allow me to remind you that we have a word for “repeatedly turning to a synthetic means of emotional fulfillment over human connection”, and that word is “addiction”, which I will expand upon later.
The point here is that generative AI isn’t “creating value” in an economic sense, it isn’t saving time for creators who care about what they are creating, and the claims that it can save time in the workplace while maintaining quality are dubious, with one MIT study finding that 95% of businesses that have incorporated generative AI saw “little to no measurable impact on [profit]”, and another finding that AI coding tools slowed down experienced programmers (while, tellingly, fooling the developers into believing they were faster). But generative software can save time for some jobs, specifically those that involve trying to convince people of something (often, something to buy). Consider that there are really only ever two activities that humans really need to be coerced into:
Buying something they don’t need (or a particular brand of a thing they may need)
Buying into a narrative on which they didn’t previously have a position.
We already have words that describe the marketing of those things. For the first, it’s “advertising”, and the second is “propaganda”. Like generative AI, the ability of advertising and propaganda (manipulating people into believing or doing something) to do what it’s made to do does not rely on a relationship with the truth, or an accurate portrayal of reality, but purely on its ability to convince, i.e.: “bullshitting”.
Fooling-complete Von Neumann machines
Technology describes tools. Tools make jobs easier. The thing that an LLM-generator program is making easier, is not the “job”2 of “creating”, but “outputting images/text/sounds that are indistinguishable from human-made images/text/sounds”. Convincing.
Fooling, really.
Scientist/artist/philosopher Jaron Lanier put this idea really well recently on an episode of The Grey Area podcast, it’s a take I rarely see in mainstream conversations about “AI”:
The whole field [of AI] was founded by Alan Turing’s thought experiment called the Turing test, where if you can fool a human into thinking you’ve made a human, then you might as well have made a human because what other tests could there be? Which is fair enough. On the other hand, what other scientific field — other than maybe supporting stage magicians — is entirely based on being able to fool people? I mean, it’s stupid. Fooling people in itself accomplishes nothing. There’s no productivity, there’s no insight unless you’re studying the cognition of being fooled of course.
First of all, loving his comparison to stage magicians. Where have I heard that before? But I also like how he used the capitalist holy word —“productivity”— against generative AI, because it’s a very important talking point I’m often thinking but don’t often see: these machines don’t actually produce anything anybody whose father hugged them enough actually wants to pay for.
Again, the reason LLM technology is worth so much to investors is not because they believe it will create stuff that has value to humans, but because they believe it can make advertising and propaganda faster, cheaper, individually targeted, and difficult or impossible to identify and block. Investors believe that this tool will be a revolution for the ability of corporations (the real artificial intelligences here let’s be honest) to more efficiently manipulate minds and control consumer and citizen behavior with “dark patterns” too slippery and complicated for the human brain to outwit. That’s where they believe the value lies.
…assuming these companies’ chatbots aren’t the only ones bullshitting, of course.
Secret Agents
AI companies claim that their chatbots will soon™ be “agentic”, which implies they will be able to actually do something beyond turning dead dinosaurs into images of Jesus as a shrimp. And for the record, I actually have a high degree of confidence that an agentic “AI” is an entirely possible thing to create out of silicon and electricity3, but I also have a high degree of confidence that LLM technology —whose approach to problem solving, again, is fooling— cannot actually lead to a place where the work of a machine is being done as if by a person.
Even if everyone in the audience is tricked, it doesn’t mean the stage magician actually sawed that woman in half. It just means they’re really good at tricking.
But also, even if agentic AI were possible, does anybody really expect a corporation-created machine to actually find them the best deal on a flight? Such a creation would reduce profit, which is the greatest fear of people whose fathers didn’t hug them enough (not to mention something a corporation is physically incapable of doing without being forced).
Speaking of profit, there’s something else I don’t see mentioned very often; and that is that as far as I can tell, the only company actually profiting off of AI appears to be the chip maker Nvidia. Is this a bubble? Again, I’m no big city economist, but I find it unnerving that thirty percent of the S&P 500’s value lies with seven companies who (Nvidia aside) are collectively investing billions into something that isn’t profitable and plainly just isn’t designed to do the stuff they claim. Is generative AI the corporate personhood version of an emotionally distant father? Where they’ll just keep pouring money into the hole and never, ever, feel better?

And if you haven’t noticed, we’re already beyond “agentic” as the marketing buzzword du jour. As was recently highlighted, companies are waging a branding war, coming up with ever more dramatic and unique names to describe their special little precious and unique chatbots that all largely do the same thing. Apple has “Apple Intelligence” (which still can’t, and probably never will, do the stuff Apple was advertising it could back in 2024), and Meta has taken to describing their chatbot as a “personal superintelligence”. Personally, I find it odd that a “superintelligence” can’t cite its sources, but I’m sure that’s a teensy bug they can iron out, and definitely not inherent to how the technology works.
AI: Addiction Idealised
There is of course, another more obvious economic use-case for really really good fooling. And that use-case is creating addicts.
Something I'm always trying to highlight on this newsletter is that if a product addresses a craving while not meaningfully satisfying that craving, then the company selling that product is trying to addict you. You see it everywhere today, from ultra-processed foods to gambling to certain video games.
For some reason —despite what tech companies themselves have been saying for almost a decade now— there is a hesitancy in the media to describe online services as “addictive”. I have long suspected this hesitancy is because professional media folks are largely still unwilling to take the first step and admit their own problem.
Twitter has always been really good at getting its hooks into the type of person who works in media. This is why Snapchat snaps are rarely ever headline news, but tweets frequently are, despite Twitter having fewer users than Snapchat since 2016.
Twitter to journalists is like junk food: it targets the craving of wanting to “know what’s going on” without ever satisfying the need for quality information about the world that created the craving in the first place. It’s no secret that Twitter does not provide its users with reliable information about the world. But that doesn’t stop users from coming back again and again. Imagine if heroin uniquely appealed to people in the media the way that Twitter does.
But that’s how addiction-as-a-business works. The cartels (sorry, companies) don’t target everybody; they find and target the specific groups of people who are really, really craving what they’re selling. And generative AI is their attempt to personalize and individualize addiction on a wide scale. Each of us fragile, soul-possessing creatures has some kind of specific weakness in our hearts that, when prodded, can cause us to act irrationally. The advertisers’ “hope” is that generative-AI-powered advertising will save them time on finding and targeting (for example) gambling addicts. The idea is that a machine will quickly discern and generate whatever it can to prevent every user from looking away.
That’s the idea at least.
In Conclusion:
As I’m always saying, in the digital era, we need to be wary of “convenience”. We need to look at the services being sold to us and think about what activity that service is replacing. Social media companies pretended to connect us, all the while making it easier to avoid catching up in person. Generative AI similarly claims it can help us create, but a work without work leaves us with nothing but, well, slop.
As I’ve described in this essay, aside from government propaganda and a few fringe cases (like non-consensual porn), generative “AI” services only make practical sense in business situations, where money is involved4. A creator who enjoys spending time on the process of creating is a scary enigma to the tech CEO frantically shoveling more money into the Dad-shaped hole in their chest. A creator whose livelihood doesn’t rely on a need to create faster will not see much value in a machine that offers to “extract value” from their “output” and “allow” them to spend less time creating.
The reason a machine that produces nothing but worthless slop can be worth so much is because the people whose fathers didn’t hug them enough believe they can build a system that will make you feel just as not-enough about yourself as they do all the time. They’re betting billions that they can dig a hole into your chest using computer programs personalized to your specific vulnerabilities, so that they may turn around and addict you to whatever it is they’ll claim is just as good as, but a lot more convenient than, hearing your father say the words “I love you”.
I agree with Farrell McGuire that the moral panic surrounding AI boyfriends (etc) is almost certainly a little overblown, and that a lot of the people involved are probably just role playing, fully aware that their chatbot “boyfriends” aren’t sentient. Whether or not that makes the whole situation any better I’ll leave up to you.
If someone does it for free (i.e., creating), it shouldn’t be considered a “job”. But it’s truly a rare thing for a creator to earn a living creating exactly what they want to, and not using their talents to aid a corporation. But I digress!
Buy me a beer and I’ll let you watch my 5-hour PowerPoint presentation on the topic
Please don’t interpret this to mean that generative AI software cannot be used as a tool in the production of art. Anything can be used in the production of art, though I continue to wait to see generative AI incorporated in an interesting way. Also, people who gatekeep the tools of art are wrong and annoying.