I Know A.I. is Supposed to Be Amazing, but What Does it Actually Do?
Explain it to me like I'm a five-year-old
I’ve been reading Yuval Noah Harari’s new book, Nexus, a sprawling behemoth of over 500 pages in which the author of Sapiens tries to present a history of the flow of information and sounds the alarm on artificial intelligence. He’s one of the A.I. doomsayers, predicting that it will enslave us all, with cute formulas like “tools should serve us, not the other way around” and demands that we be told whether we’re talking to an A.I. or a human.
I’m not an expert in the field, so I have no idea if his telling is correct. However, if a friend asked me whether I recommend the book, I’d tell them to just listen to this podcast instead. You’ll get all the necessary information in less than two hours, without dedicating a significant amount of time to the text (I’m still not sure it’s worth it).
Now, if Harari is good at something, it’s storytelling. Here, he makes a compelling case full of hyperbole about our future A.I. overlords. According to him, people in Silicon Valley are shaking in their boots thinking about A.I.’s capabilities, but they just can’t stop developing the technology for fear of losing the market.
It’s true that A.I. is being used in many industries to startling effect. We even use it where I work to improve document fraud prevention, and Russia is using A.I. drones to bomb Ukraine. That’s why new organizations like Stop Killer Robots have sprung up, highlighting how A.I. is used in Gaza and Ukraine to estimate how many civilian casualties an attack will cause and to make decisions based on its own calculations.
This is very worrisome. I can think of at least three episodes of Black Mirror exploring the concept of robots deciding whether we live or die.
However, this is very far from Terminator’s Skynet growing a consciousness and deciding to enslave all humanity. Harari seems to jump from premise A, “A.I. is being used to accelerate many processes,” straight to conclusion B, “A.I. will decide and determine how we live.”
Maybe I’m an old Luddite who writes physical books and doesn’t get it. But my experience with generative A.I. built on language models has left me far from enthusiastic. As recently as last week, I was looking for a funny, catchy title for some corporate communications thing, so I asked ChatGPT to come up with variations on the acronym EBITDA. “Something like EBIT-DUH,” I prompted it; “I came up with that one myself; give me others.”
For some reason, the machine just couldn’t do it. It spat out “EBITDUH,” and when I told it I’d already come up with that, it offered “How about EBIT-DAH?” and I gave up. Which raises the question: this is the super-intelligent machine that’s going to enslave us all?
Let’s be honest: apart from vomiting word salads based on language probabilities, making horrible pictures and worse music, A.I. doesn’t really do much. Yes, I can have it write an email, but if I use it for my job, I have to rewrite 80% of the stuff it produces.
And things are not going to get better, at least for generative language models. These have pretty much plateaued, as Erik Hoel explains in his Substack. GPT-5 is barely better than the previous version, despite the immense amount of cash thrown at the model to “make it better.” Researchers like Emily Bender argue that Large Language Models are nothing more than stochastic parrots that will neither improve nor stop hallucinating.
Is A.I. anything more than the latest trinket fueling a financial bubble? Of course Sam Altman is going to make outrageous predictions to pump up OpenAI’s valuation, just like Elon Musk promises dancing robots and then presents people dancing in robot suits to make his point, all so the stock goes up (where’s my “hyperloop,” Elon?).
Our economy now runs on huge pump-and-dump schemes where the only thing that matters is the stock’s perceived value, and to me, A.I. looks like the next NFT gimmick.
Am I missing something here? Like, what does A.I. actually do, and how is it going to take over the world? Harari argues that the impact of A.I. is yet to come, and he compares A.I. to the steam engine. But the two aren’t comparable: the steam engine was a new source of energy; ChatGPT is a new source of internet gobbledygook, making surfing the web worse, not better.
My last gripe here is economic and environmental: OpenAI is burning cash like a poet on payday, subsidizing their little machine. We went through this with Uber: remember when they burned billions to subsidize our rides, cornered the market, and then screwed drivers and customers? The same applies here: OpenAI isn’t making any money—far from it. And let’s not even talk about the environmental impact and all the other externalities.
So, to sum up: is it really a good idea to develop this less-than-perfect technology at a huge financial, environmental, and social cost? Because yes, some jobs will be lost; some people will be replaced by A.I., like accountants or translators (no, A.I. won’t be running the military or social security, Mister Harari). This will of course create enormous social strife and political tension, all because… what? Because companies want to save a couple thousand on accountants’ salaries? Are we really going to inject billions into A.I., accelerating global warming and leaving accountants and translators destitute—all so we can get suggestions like “EBIT-DAH”?
We’ll talk once the A.I. financial bubble bursts.
Yep. That's my experience with ChatGPT. I can have it create activities for ESL/EFL tests, but I have to check, rearrange, and rephrase more than half of whatever it produces, even though I feed it well-detailed prompts. Don't get me wrong, it saves me so much time, like a lot, but the thing is far from intelligent. I see it as an OK assistant.
Thanks! Love your approach 😉