AI is bad. Actually no. Actually it's a tool.
Artificial intelligence is going to replace us all. We'll lose our jobs, our brains will melt, robots will take over the world, and Skynet will launch the missiles on a Tuesday afternoon. At least that's what I read on LinkedIn between two posts from "thought leaders" who use ChatGPT to write their posts about the dangers of ChatGPT. (irony level: expert)
Except no. It's more complicated than that. And it's also simpler than that. Hold on tight, there will be swerves.
Mandatory disclaimer: this post is neither left-wing, nor right-wing, nor centrist, nor above, nor below. It has in fact been verified by an artificial intelligence to guarantee the complete absence of any political opinion. The AI confirmed: zero political ideas detected. Which, when you think about it, might be the most political statement in this entire post. (infinite loop)
AI is bad
Let's start with the prosecution. It's easy to build a case. There's plenty of material.
Generative AI guzzles an obscene amount of energy. Training a single large language model can emit as much carbon as hundreds of Paris-New York flights. To generate pictures of cats in medieval armour. Well done, humanity. (slow clap) ... That said, the same AI optimises power grids and reduces energy waste in industry. But that's less fun to talk about.
AI hallucinates. It invents sources, citations, facts. It tells you with absolute confidence that the capital of Zimbabwe is Nowheresville, and if you don't check, you'll put it in your report. Your boss won't check either. Nobody will check. And Nowheresville will become the capital of Zimbabwe in the next model, trained on your report. (virtuous circle) ... Then again, humans hallucinate too. It's called cognitive bias, and we didn't wait for AI to put nonsense in reports. At least AI can be fact-checked mechanically.
AI is replacing artists, translators, writers. The first to suffer, as usual, are the creatives and the precarious. The same companies telling you "AI will never replace humans" are laying off their content teams. But the press release is still written by a human. For now. ... Except that same AI also lets a fourteen-year-old compose music, an amateur create illustrations they could never have made, a small business owner produce content without an agency budget. The democratisation of creation, that's also what this is.
AI is used for mass surveillance, social scoring, facial recognition, CV screening (complete with racist and sexist biases). It's used to generate disinformation at industrial scale. It's used for things we wouldn't have dared imagine ten years ago. ... And it's also used to detect cancers on scans the human eye would have missed, to forecast earthquake aftershocks, to find missing persons. Same coin, other side.
Deepfakes. Let's talk about them. You can now put anyone's face on anything. Your face on a compromising video, your boss's voice ordering a bank transfer, a speech from a president that never happened. And this isn't five years from now. It's right now. Your grandmother won't spot the difference. Neither will your bank manager. The worst part is that nobody will know what's real any more. A genuine video of a scandal? "It's a deepfake." An actual deepfake? "Nothing can be proved any more." AI hasn't just made it possible to create fakes. It's killed the very notion of visual evidence. (cheers, progress) ... But the same technology restores old films, gives a voice back to patients who've lost theirs, and lets a grandson hear his grandfather tell a story in a language he never spoke. It's beautiful. It's also terrifying. Both at the same time.
Cheating. The student who gets ChatGPT to write their dissertation. The employee who has their reports written without reading them. The candidate who passes a technical interview with a chatbot whispering in their ear. We'll be outraged for five minutes, and then what? We've had the same debates since the calculator. The difference is that a calculator didn't pretend to be you. When a student copies Wikipedia, at least they have to read the article. When they prompt an AI, they don't even need to understand the question. We're no longer producing graduates, we're producing people who know how to prompt. (that'll go well on the job market) ... And yet, we adapted to the calculator. We adapted to the Internet. We'll adapt to AI. Or not. It'll depend on us, not the machine.
Ethics. Training data is barely disguised theft. Millions of artists' images hoovered up without consent. Billions of texts scraped from the web without asking anyone. Your blog, your photos, your articles, all of it has probably been used to train a model that will then compete with you. You're both the raw material and the collateral damage. And when artists complain, they're told it's "fair use". (handy, fair use, it only works in one direction) ... On the other hand, that same mass of data makes human knowledge accessible to people who had no access to a library. A farmer in Burkina Faso can ask a question in Dioula and get an answer. It's data theft and the democratisation of knowledge. Try holding both in the same hand.
Social media. As if they weren't toxic enough already, now they're infested with AI-generated content. Fake accounts commenting, fake articles circulating, fake profiles flirting with you, fake reviews steering your choices. You think you're arguing with an outraged human? It's a bot. You think you're reading a heartfelt testimony? It's generated. You think you're seeing a photo from the latest war? It was made in a living room in Moscow. Half your news feed might already be synthetic and you don't even know it. At this point, the healthiest advice anyone can give is to turn off the screen and go touch some grass. Real grass. Outside. With soil and insects. (radical, I know) ... But that same AI also helps isolated people break through loneliness, helps organisations moderate child abuse content that no human should have to look at, and helps researchers detect manipulation campaigns. Even in the sewer, there are people cleaning up.
There. Case made. Verdict: guilty. Well... it's complicated.
AI is good, actually
Right, let's switch hats. Case for the defence.
Learning. I've got a kid learning to read. A patient chatbot that never tires of re-explaining, that adapts to the child's pace, that doesn't sigh at the fourteenth question, that's a terrific teaching aid. Not a replacement for the teacher. A complement. The teacher has thirty pupils. The AI has one. Both are useful. ... Except that a kid who gets used to an infinitely patient chatbot may no longer tolerate a human teacher who has limits. And above all, they may stop searching for themselves. Why rack your brains when the answer is one prompt away? It's called intellectual dependency, and it's the opposite of learning.
Translation. AI lets anyone produce a decent translation in seconds. A multilingual website, a user manual, international correspondence, things that used to cost thousands become accessible to everyone. That's progress. (honestly) ... Except it's killing the translation profession. Not the bad translators, the good ones. The ones who conveyed meaning, not words. The ones who produced literature, not mechanical conversion. And when they've all gone, who'll correct the AI's translations?
Disability. This is where it gets hard to spit on AI. A blind person who can "see" an image through a generated description. A deaf person who can follow a conversation through real-time transcription. A dyslexic person who can have a text rephrased. An autistic person who can practise social interactions with a chatbot that doesn't judge. Go tell these people that "AI is bad". I'll watch. ... But watch how tech companies use disability as a marketing argument to sell their products, while making their own interfaces less and less accessible. AI helps disabled people and serves as a moral alibi for companies that don't actually care. Both are true.
Facilitation. A developer using AI to generate boilerplate, repetitive code, unit tests, that's not cheating. It's like using an electric screwdriver instead of a manual one. The result is the same, your wrist just hurts less. And you'd better check the screw is tight in both cases. ... Except when the developer stops understanding their own code. Except when they blindly accept whatever the AI suggests. Except when they can no longer drive a screw by hand and the electric screwdriver breaks down. That's when you notice.
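A concrete version of that screw check, if you'll indulge a quick sketch (the helper and its bug are invented for illustration, not lifted from any real AI): an AI-suggested function that looks perfectly fine, and the three-line test that proves it isn't.

```python
# Hypothetical AI-suggested helper: plausible, idiomatic, and subtly wrong.
def average(values: list[float]) -> float:
    return sum(values) / len(values)

# The "is the screw tight?" check. Takes a minute to write, catches the bug.
def test_average():
    assert average([2.0, 4.0, 6.0]) == 4.0  # passes: the happy path always does
    assert average([]) == 0.0                # fails: ZeroDivisionError on an empty list

if __name__ == "__main__":
    test_average()
```

Whether an empty list should return zero, raise an error, or mean something else entirely is exactly the kind of decision the AI quietly made for you. The test is how you find out it made one.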
Understanding. Summarising a thirty-page article. Explaining a complex concept in simple terms. Rephrasing a legal text in human language. Extracting the key points from a three-hour meeting. AI doesn't understand anything in the philosophical sense, but it facilitates human understanding. And that's valuable. ... Except that "facilitating understanding" and "giving the illusion of having understood" are two very different things. Reading a summary isn't understanding. Having the key points isn't mastering the subject. And confusing the two is the beginning of confident incompetence, the worst kind there is.
There. Defence made. Verdict: not guilty. Well... it's also complicated.
Oh wait, we're going in circles
Notice the pattern? Every argument "against" has a "for" hiding inside it. Every argument "for" has an "against" gnawing at it. Fat lot of good that does us.
It's almost as if... it were neither good nor bad. As if it were a tool. A hammer can build a house or smash a skull. We don't ban hammers. We judge the people who smash skulls. (seems obvious when you put it like that)
AI is the same. Like the printing press, which gave us the Bible and Mein Kampf. Like the Internet, which gave us Wikipedia and 4chan. Like fire, which gave us cooking and arson.
AI is the new telly.
Remember television? "It'll make children stupid." "It'll destroy reading." "It'll rot public debate." That was the 1960s. Before that, it was radio. Before radio, cinema. Before cinema, the novel. (Yes, the novel. People thought women would lose their minds reading love stories. Seriously.) And every time: panic, moralising, calls for a ban, and then... we adapt. Or not. But the tool remains.
Television gave us Attenborough and reality TV. AI will give us wonderful things and appalling things. And in both cases, the adjustment variable isn't the technology. It's you.
Use your critical thinking
So here's my message, and it's simple: use your brain.
AI gives you a result? Check it. AI translates a text for you? Re-read it. AI generates code for you? Test it, read it, understand it. AI summarises an article for you? Read the article anyway, at least skim it. AI tells you the Earth is flat? Don't believe it. (that goes for your brother-in-law too)
AI is an assistant, not an oracle. A draft, not a finished product. A starting point, not a finish line. And if you treat an assistant like an oracle, the problem isn't the assistant. It's you.
Should you verify everything I've written? Above all, yes. That's what critical thinking is. It works with AI, with newspapers, with politicians, with your brother-in-law, and with me.
Use it. It's free and it's renewable.