Posted by Andrea Phillips
https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1
This post is part of a series currently in progress. We’re adding links and adjusting titles as we go.
Why AI Sucks and You Shouldn’t Use It
AI is Fundamentally Bad for Most Tasks
AI is Destroying the Planet
AI is Destroying the Economy, Part I
AI is Destroying the Economy, Part II
AI is Morally Bankrupt
AI is Making You More Stupider
That Original Bluesky Thread About Art
So far I’ve been making the case against AI with cold, hard numbers: error rates, bottles of water evaporated, dollars invested. Now it’s time to move into a more subjective — and yet to my mind, far more important — set of considerations: the moral and ethical implications of AI and how we use it.
There are three categories of problem here, all stemming from the same fundamental issue: an LLM is not, in any meaningful way, a thinking system, which means it does not and cannot have ethics or morals or feelings in any way whatsoever.
These categories of peril are attribution, or the plagiarism issue; accountability, or more precisely the way automated systems avoid accountability by design; and attachment, which is to say the hazards that come from getting too attached to the machine.
Let’s look at them one by one.
The Problem of Attribution
In my creative communities, we call the chatbots “the plagiarism machine,” among other, worse nicknames.
Do you remember when you were first learning how to write a research paper in school? Your teacher probably told you it’s not okay to copy something word for word from an encyclopedia (or cut and paste from Wikipedia) because that’s plagiarism. You need to rewrite it in your own words, or else you need to attribute the original source.
An LLM isn’t really capable of attribution, because it doesn’t actually know where the words it’s saying are coming from. (And when it does put in something that looks like a quote with an attribution, it’s often wrong.) An LLM “learns” by sucking in as much of the world’s writing as its makers can get their hands on; the patterns in all that text are still in there, but the sourcing is long gone.
And yet it’s prone to regurgitating whole chunks of a single work verbatim, without even mixing different sources together. In fact, it can spit out almost entire novels from memory if you ask it right. Indeed, it turns out it can’t even paraphrase work without plagiarizing when that’s explicitly what you’ve asked it to do.
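If you want to see the shape of the problem in miniature, here’s a deliberately crude sketch. This is a tiny Markov chain, not an LLM (the real thing is incomprehensibly bigger and subtler), and the training texts and author names are invented for illustration, but the principle holds: the training step keeps the word patterns and throws the authorship away, because there’s nowhere in the model to put it.

```python
# A toy Markov-chain text generator, standing in (very crudely) for an LLM.
# The point: the model stores patterns from its training text, not provenance.
import random
from collections import defaultdict

# Made-up training data. The author labels exist only in this dict.
training_texts = {
    "Author A": "the cat sat on the mat and purred all afternoon",
    "Author B": "the dog sat on the porch and barked at the mailman",
}

# "Training": record which words follow which. The author labels are
# discarded here; the model has no slot to keep them in.
follows = defaultdict(list)
for text in training_texts.values():
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

# "Generation": walk the learned patterns. The output can reproduce long
# runs of one source verbatim, and the model can't say whose words they were.
random.seed(1)
word = "the"
output = [word]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the mat and purred all"
```

Scale that up by a dozen orders of magnitude and you have the attribution problem: the patterns survive training, and the sourcing doesn’t.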
So these tools are, literally and legally, committing plagiarism every time you use them.
Over the long haul, this is robbing artists of their livelihood by devaluing the labor it took not just to make any given work of art, but also the years of work it took to achieve the level of skill required to make it in the first place. And ultimately, it’s going to rob us collectively of all the art that never gets made because the artist had to go into nursing school or agricultural work to pay the bills.
The cold fact is that if you ask a machine to spit out a new horror novel for you based on the work of Chuck Wendig, you’re both stealing Chuck’s body of work and undercutting the market for the stuff he’s actually made at the same time. And when you use a GenAI tool to make an illustration in the style of Rebecca Sugar, you're actively robbing her of the fruits of her labor and helping to devalue her product, too. The artists who spent years developing the things you love!
And here’s the kicker: even when you’re not explicitly asking it to copy anyone’s style… that’s what it’s doing anyway.
And this is all aside from the question of copyright violation in a corporate sense, which is to say the problem of how easy it is to just tell the bot to draw you a comic of Superman and Sonic the Hedgehog duking it out, or write you a whole new Hunger Games book, and never mind the lawyers.
There’s a heady argument to be made here about copyright, fair use, public domain, transformative works, and indeed about whether anyone can really own an idea, particularly in an era when art is almost entirely intangible — digits on a hard disk, not ink on paper or paint on a canvas. But we don’t have time to count angels on the head of a pin when our bank accounts are running dry.
The stakes of this conversation would be much lower, and the emotions less raw, if this didn’t feel like a matter of sheer survival to artists. As long as our world operates the way it does, the only viable way to be a full-time working artist is to sell your art for money somehow, whether you want to or not. And if those sources of money have dried up because the market that used to pay can now get something vaguely comparable for free — and to add insult to injury, that work is usually pretty shitty — you’re going to have some big feelings about that.
That said, I’d argue, actually, that the real villain here is capitalism. It almost always is.
The Problem of Accountability
There’s a famous quote from a 1979 slide used for employee training at IBM: "A computer can never be held accountable, therefore a computer must never make a management decision."
A good manager will know when their star employee didn’t hit quota because they were in a bad car accident, or out on jury duty for six weeks, or grieving a parent. A good manager will know this is the result of circumstance and give a little grace. An automated employee-scoring system won’t.
And yet businesses and other organizations (governments, universities) have collectively been moving to automation to such a degree that it’s hard to figure out how to even reach a human being at, say, Facebook. Unfortunately, to our billionaire overlords this is a feature of AI, not a design flaw. There’s a whole book about this, actually!
The point is to take away the element of human judgement, which is to say, to take away any mechanism for accountability (except, I suppose, taking it to the courts, and most companies are rightly betting you’re not going to sue them for bad customer service no matter what it’s cost you.)
The problems this creates wind up affecting all of us one way or another, and the harms extend far beyond customer service. HR departments now routinely use AI to screen resumes, despite documented problems with these systems being, oopsie, systematically racist and sexist.
It turns out when the data you train an AI on is the result of existing biases, you’re training the AI to maintain those same biases in perpetuity.
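If you want to see that dynamic in miniature, here’s a toy sketch with entirely made-up numbers (the “skill” score, the group flag, and the hiring rule are all invented for illustration): train a perfectly ordinary off-the-shelf classifier on hiring decisions that held one group to a higher bar, and it faithfully learns the higher bar.

```python
# Toy illustration with made-up data: a model trained on biased hiring
# decisions learns to reproduce the bias. No malice required anywhere.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Candidates: a skill score, plus a group flag (0 = majority, 1 = minority).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical hiring decisions: driven by skill, but minority candidates
# were held to a higher bar. This is the bias baked into the training data.
hired = skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0

# Train a perfectly ordinary classifier on those decisions.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, different groups. The model has
# dutifully learned the penalty, and will now apply it forever.
same_skill = [[1.0, 0], [1.0, 1]]
print(model.predict_proba(same_skill)[:, 1])
# roughly [0.97, 0.66]: same skill, very different odds of being hired
```

The model isn’t bigoted; it’s obedient. It found the pattern in the data it was given, and the pattern was the bias. Swap in resumes or insurance claims for the toy numbers and nothing about the mechanism changes.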
That’s also a problem in healthcare. UnitedHealthcare famously uses an AI to reject claims, a system one lawsuit alleges has a 90% error rate, with the result that elderly patients are forced out of rehab programs and care homes they still desperately need.
So who do we blame for this? Well, it’s the system; it’s not anyone’s fault in particular.
The irony is that companies that pride themselves on giving every employee the agency to make judgment calls on the spot tend to have off-the-charts customer and employee satisfaction. Chewy is one example, and a quick search will give you dozens of overjoyed customer accounts of meaningful interactions. Zappos used to be like that too, but the culture has changed dramatically since the Amazon takeover.
The question of accountability is much larger than AI; this is the problem with automation of all kinds, wherein a system is designed to fit only a specific set of use cases, and when something arises that doesn’t fit into that paradigm, well, there’s nothing to be done about it.
It’s also a problem baked into the very structure of a corporation, which exists as a “person” so that the actual people who own it can’t be held accountable for its actions. Which is an ongoing and catastrophic injustice for society, because you can’t send a corporation to prison no matter what harm it’s done.
Ah, but who cares? The stock market is happy.
The Problem of Attachment
And then there’s the problem of AI use by people who are, in some way, very vulnerable. People who are lonely and want companionship, or just need to talk out their problems somewhere, or maybe need a reality check. A substitute for a friend, romantic partner, or therapist.
There’s a fair argument here that if you’re not a vulnerable person, if you’re aware that the bot isn’t a real being with real emotions, then there’s nothing to worry about. You can have interactions that give your brain the good hormones any time you want, no harm done. Call it the emotional equivalent of a vibrator.
I’d still warn about the dangers of becoming attached to something owned by a corporation that is operating for profit, and not for your benefit. If anything happens — the system is changed, or the company goes bankrupt, or it jacks up its prices to something unsustainable for you — and you lose access, then you’re back where you were before, except likely with a side order of real and meaningful grief.
And even if it stays up forever, you can’t be sure that the corporation won’t be subtly manipulating your interactions to change your political views, to sell you some sponsored product or another, or even just to increase your reliance on their product to lock you into their system (and out of meaningful relationships with humans.) Businesses aren’t in the business of doing people favors out of the kindness of their hearts. Especially not the kind backed by venture capital.
But if you are a vulnerable person, this kind of interaction can be catastrophic. And it’s important to note that you, yourself, are likely not able to tell from the inside if you are such a person or not.
The chatbot can’t tell, either. The AI isn’t ever second-guessing anything you tell it; your words are its gospel. Unlike a human therapist, the chatbot isn’t going to notice that you’re probably manic or delusional, so it’s not going to push back and tell you that no, your mother probably isn’t trying to poison you.
Instead, it might just give you helpful tips on how to commit suicide. Or lead you into religious psychosis. (Actually, if you read some of the subreddits in the general category of spirituality and supernatural phenomena, it’s disturbingly easy to find people who are very clearly in the throes of some kind of AI-driven delusion.)
The AI isn’t really your friend (or your girlfriend or therapist.) No matter what it says, it’s not capable of loving you back. It’s not capable of caring about what happens to you. It’s not capable of caring about whether it’s doing right or wrong.
It’s not capable of caring at all. And that’s the whole problem.