Poison at the Buffet
Nobody is forcing or even asking social media sites to serve up harmful content. They do it anyway.
Imagine you’re a toxicologist who finds yourself at an all-you-can-eat buffet. Your trained eye notices that right next to the mashed potatoes and grilled chicken sits an unlabeled tray of poison. You get upset. Sure, you can recognize it as poison, but not everyone will. Children who happen through would be particularly unaware.
You find the manager. “This is not OK,” you tell them. “People are eating this thinking it’s safe, and getting sick!”
The manager responds, “Ma’am, this is a buffet. We have all kinds of consumable substances here. People get to choose what they want and how much; we won’t decide for them.” Not satisfied with the response, you threaten to call the police. At this, the manager relents somewhat and agrees to move the tray of poison to a station on the other side of the restaurant.
In the wake of the Facebook whistleblower testimony, Instagram recently announced it will adjust its algorithm to display harmful content to teens less often:
“We’re going to introduce something which I think will make a considerable difference, which is where our systems see that a teenager is looking at the same content over and over again, and it’s content which may not be conducive to their well-being, we will nudge them to look at other content,”
When I saw this I was reminded of the praise that Reddit receives every time it bans one of its myriad harmful subreddits. The praise often ignores the fact that the harmful content was given a platform and allowed to grow in the first place. Twitter took years to ban Donald Trump after he broke their own rules. Nobody was forcing them to let him stay. Would you allow a quarrelsome party guest to stay in your house after they ignored repeated requests to be nicer?
It’s “interesting” that Instagram’s response to teen girls looking at harmful content was not to, y’know, actually remove the harmful content, but simply to “nudge” them away from it, as if that’s doing vulnerable people a favor.
Is it literally better than nothing? Probably! But why harm teens at all? Remember: Nobody is forcing them to host this.
Arguments about the ethics of continuing to operate a harm-causing machine that’s ostensibly too big to control aside, I fully understand that these platforms are massive and that it is likely impossible to manage every single tweet, post, story, subreddit, etc. That goes double considering that toxic content is often context-based or cloaked in dog whistles, which makes it much harder to track in an automated fashion. But the thing is, you never see these giant companies put out a statement along these lines:
Hey, we know there’s a lot of garbage here but frankly it’s just too much for us. We can’t afford the resources necessary to look out for everyone. So just know that we DO NOT WANT this stuff on our sites. Just… use at your own risk I guess.
Instead, the usual response is to just… pretend it doesn’t exist. If pushed, they’ll often give a lot of weird arguments about “free speech,” as though these websites are powerless to control what they host. And maybe these companies really are powerless to a degree, in that they are too large to reasonably moderate and still be profitable. But the people in charge, those profiting off the suffering of teenagers, are decidedly not as powerless as the young (and old) minds being pushed into stressful situations by algorithms that hijack their limbic systems.
The free speech argument is a convenient shield. Remember, many of these sites actually have huge influence over what is being discussed online, and the emotions those discussions can elicit. The documentary “The Social Dilemma” imagined a hypothetical knob on Zuckerberg’s desk that could control the collective anxiety level of Facebook users. Each little “nudge” adds up. Facebook does exercise a degree of control over the emotional impact of what its users see. If these platforms truly believed in unrestricted free speech, there would be no need for an algorithm at all.
Nudges, algorithms, and other layers of obfuscation aside, the reality is that these companies have done, and continue to do, two things:
Host content they know to be harmful.
Serve it to people (including children and teenagers).
It took a whistleblower and a congressional panel to convince Instagram that it might want to nudge teens away from the poison at the buffet, but they are still choosing to serve it. The poison is still on the menu. The managers obviously think serving poison is desirable to some degree. Maybe it keeps people talking about the buffet?
My analogy isn’t perfect because it ignores the users who create and post the harmful content in the first place. Meta, Reddit, etc. aren’t creating the toxic content, and they’re not the only ones who want to spread it around. There is a small (but active) contingent of users whose goal is to get the buffet-goers to eat the poisoned dishes.
I assume there is a threshold to how off-putting social media can get before the “normal parts” can no longer outweigh the toxic parts. 4chan is an example of a site that holds little utility for most mainstream audiences. The question is, how “4channy” can Reddit, Twitter, etc. get before they begin to lose the critical mass of “normies” fueling the mainstream attraction to their sites? Customers may overlook a little poison at the buffet here and there, but if a quarter, a third, or half of the dishes being served are poisonous, they may start to question who this restaurant is really catering to.
After all, nobody is forcing them to do this and few people are requesting it. So why do they insist?
Maybe check in with yourself and notice what it is you’re eating, where you’re dining, and how much poison you’re OK with your kids consuming.
Because not everyone is a toxicologist.
Stay Grounded is a reader-supported publication helping people get offline. To receive new posts and support my work, consider becoming a free or paid subscriber.