Hollywood Miscasts AI As The Terminator

Banal AI bureaucrats, not rogue robots, are the real danger, says Yuval Noah Harari.

Yuval Noah Harari speaks with Joseph Gordon-Levitt at a Berggruen Institute event on Sept. 26, 2024 in Los Angeles. (Credit: Marco Gallico)

Nathan Gardels is the editor-in-chief of Noema Magazine.

As part of its project to bring together the storytellers of Hollywood with historians, philosophers, scientists and technologists, the Berggruen Institute’s Studio B recently hosted a conversation between Yuval Noah Harari and Joseph Gordon-Levitt.

The exchange between the Israeli historian and the actor, director and producer focused on how information networks that convey the narratives which bind humans in collective action have shaped history — the theme of Harari’s new book, “Nexus.” You can view the whole conversation held at the former Hearst Estate in Beverly Hills here.

In a further interview with Noema, Harari delved more deeply into this topic and other issues, from the role of AI in Israel’s war against Hamas to the need for checks and balances in democracies where the power of perception is so distributed through social media networks that the public sphere is disempowered, undermining social cohesion.

In particular, we discussed the meta-narrative about AI planted in the popular mind by Hollywood films. Hollywood has cast the AI story as one of rogue robots revolting against their masters in films like “The Terminator” and “Blade Runner.” Is that the right image of AI to hold in our mind’s eye, I asked.

“No, it’s completely misleading!” Harari rejoined. “Hollywood did a huge service in focusing people’s attention on the AI problem long before anybody else really thought about it. But the actual scenario is misleading because the big robot rebellion is nowhere in sight. And this, unfortunately, makes a lot of people who grew up on the ‘Terminator’ image complacent. They look around, they don’t see any kind of Terminator scenario as really being feasible. So they say everything is okay.”

He went on: “These films portray a kind of general-purpose AI that you just throw into the world, and it takes over. These machines can mine metals in the ground, build factories and assemble hordes of other robots.

However, the AIs of today are not like this. They are not general intelligence. They are idiots. They are extremely intelligent in a very narrow field. AlphaGo knows how to play Go, but it can’t bake a cookie. And the AIs used by the military can identify Hamas headquarters, but they cannot build weapons.

What the cinematic image so far misses is that the AIs don’t need to start from scratch. They are inserted into our own systems, and they can take our systems over from within.”

Harari offered a parallel example. “Think about lawyers. If you think about the best lawyer in the United States, this person is, in a way, an idiot savant. This person can be extremely knowledgeable and intelligent in a very, very narrow field, like corporate tax law, but can’t bake a cookie and can’t produce shoes. If you take this lawyer and drop him or her in the savannah, they are helpless. They are weaker than any elephant or any lion.

But they are not in the savannah. They are inside the American legal and bureaucratic system. And inside that system, they are more powerful than all the lions in the world put together because that single lawyer knows how to press the levers of the information network and leverage the immense power of our bureaucracies and our systems.

This is the power that AI is gaining. … It’s the AI bureaucrats, not the Terminators, which will be calling the shots. That is what we need to worry about. Even in warfare, people may be pressing the trigger, but the orders will come from AI bureaucrats.”

The Banality Of Thoughtless AI Agents

In other words, the multitude of AIs acting in their own narrow domain like “idiot savants,” unaware of any larger context, including moral and ethical dimensions, will become the normalized decision agents at key points within the encompassing technocratic administration of daily life. Disembodied, inorganic intelligences acting without understanding will become integral to the functioning of human civilization.

Perhaps because Harari is Israeli, what sprang to my mind as he spoke was the famous comment by Hannah Arendt about “the banality of evil” at the 1961 trial of the Nazi war criminal Adolf Eichmann.

As the philosopher Judith Butler has noted, Arendt did not mean by this comment that the evil was banal, but that the kind of unaware, unconcerned and unreflective mind that committed such evil had become an ordinary, or banal, feature of a system capable of efficiently carrying out the task of exterminating the Jews.

“[Arendt’s] argument was that Eichmann may well have lacked ‘intentions’ insofar as he failed to think about the crime he was committing,” Butler wrote. “She did not think he acted without conscious activity, but she insisted that the term ‘thinking’ had to be reserved for a more reflective mode of rationality. … [Arendt saw that] a new kind of historical subject had become possible with national socialism, one in which humans implemented policy, but no longer had ‘intentions’ in any usual sense.

To have ‘intentions,’ in her view, was to think reflectively about one’s own action as a political being, whose own life and thinking is bound up with the life and thinking of others. So, in this first instance, she feared that what had become ‘banal’ was non-thinking itself. This fact was not banal at all, but unprecedented, shocking and wrong.”

This description of Eichmann as “a new kind of historical subject” fits Harari’s characterization of AI bureaucrats like a glove. That is not to declare AI as somehow intrinsically evil, but to point out the capacity of thoughtless agents to effect bad consequences.

Can You Pull The Plug On A Whole System?

The misplaced Terminator scenario has even framed how Big Tech thinks about its own role and ability to control its creation. If AI models get too powerful and threaten to make their own decisions, Big Tech seems to think they can just pull the plug or hit a kill switch. I asked Harari if he had any faith in that perspective.

“Only on a very small scale,” he replied. “It’s like the Industrial Revolution. Imagine a century ago, if all these coal and oil giants told us, ‘You know, if industry will cause this pollution and the ecological system will be in danger, we’ll just pull the plug.’ How do you pull the plug on the Industrial Revolution?

They have in mind some small malfunction in one confined company or location or model. If some unauthorized agent tries to launch nuclear missiles, the process can be shut down. That can be done. But that is not where the danger mostly lies.

There will be countless AI bureaucrats, billions globally, in all the systems. They are in the healthcare system, the education system, in the military. If after a couple of years, we discover we made a big mistake somewhere, and things are getting out of control, what do you do? You can’t just shut down all the militaries and all the healthcare systems and all the education systems of the world. It is completely unrealistic and misleading to think so.”

The Sorcerer’s Apprentice

For all the techno-optimism inspired by the promising leaps of AI, from profound scientific discoveries to elimination of the drudgery of rote labor, the dystopian possibilities Harari has envisioned must always be borne in mind.

Looking back at the long past and ahead toward the long future, Harari invokes the wisdom of Goethe’s poem, “The Sorcerer’s Apprentice” (popularized in 1940 as a Disney animation starring Mickey Mouse).

In that tale, written at the outset of the Industrial Revolution, the apprentice appropriates one of the absent sorcerer’s spells to lessen the burden of his chores. It all goes wrong when he enchants a broom to fetch water and then can’t stop it, flooding the workshop. When the sorcerer returns, the apprentice pleads for help: “The spirits I summoned, I cannot rid myself of again.” The sorcerer then breaks the spell and saves the day.

“The lesson to the apprentice — and humanity — is clear: never summon powers you cannot control,” Harari writes in “Nexus.” It is the summoning of algorithmic spirits “we cannot rid ourselves of” that worries Harari. It should worry all the rest of us too. Unlike in the Disney-fied Goethe story, there is no transcendent magician to fix things once we have handed over the keys of the human kingdom to our inorganic offspring.