AI Will Take Over Human Systems From Within

It’s not a tool, but an alien agent, says Yuval Noah Harari in an interview.


Yuval Noah Harari is an Israeli historian and the author of “Sapiens,” “Homo Deus” and “Nexus.”

Yuval Noah Harari sat down with Noema Editor-in-Chief Nathan Gardels to discuss the themes of his new book, “Nexus.”

Nathan Gardels: The premise of your work is that what distinguishes sapiens is their ability to tell stories people believe in, stories that connect them and enable collective action. As the French philosopher Régis Debray has written in his reflections on how de Gaulle revived his country after its World War II defeat: “The myth makes the people, not the people the myth.”

What matters in the march of history, you say, is the information networks that convey those narratives. Can you elaborate on this point with some historical examples?

Yuval Harari: Our human superpower is the ability to cooperate in very large numbers. And for that, you need a lot of individuals to agree on laws, norms, values and plans of action. So how do you connect a lot of individuals into a network? You do it with information, of course, and most importantly, with mythologies, narratives and stories. We are a storytelling animal.

You can compare it to how an organism or a body functions. Originally there were only single-celled organisms. It took hundreds of millions, even billions, of years to create multicellular organisms like humans or elephants or whales. The big question for the multicellular organism is: How do you connect all these billions of cells into a functioning human being so that the liver, heart, muscles and brain all work together toward common goals?

In the body, you do that by transferring information, whether through the nervous system or through hormones and biochemicals. It is not just a single information network. There are actually several information networks combined to hold the body together.

It’s the same with the state, with a church, with an army, with a corporation. The big question is, how do you make all these billions of cells of individual humans cooperate as part of a single organism? The most important way you do it in humans is with stories.

Think about religions and visual information, images and icons. The most common portrait in history, the most famous face in history, is the face of Jesus. Over 2,000 years, billions of portraits of Jesus have been created, and you find them everywhere: in churches, in cathedrals, in private homes, in government offices. The amazing thing about all these portraits is that not a single one of them is true.

Not a single one of them is authentic because nobody has any idea what Jesus actually looked like. We don’t know of any portraits that were drawn of him during his lifetime. He was a very, very minor figure working in a province of the Roman Empire, known perhaps to a few thousand people who met him personally or heard rumors about him. The actual person of Jesus had a very, very small impact on history.

Yet the story of Jesus and the image of Jesus, most of it created after he was long dead, had a tremendous impact on history. Even in the Bible, there is not a single word about what Jesus looked like. We have one sentence in the Bible about the clothes he wore at a certain point, but no information about whether this man was tall or short, fat or thin, blonde or black-haired. Nothing.

Over the centuries, you have had millions of people closing their eyes and visualizing Christ because of this image created of him. Still, his story has united billions of people for close to 2,000 years now, with both good and bad consequences: from charity and hospitals and relief for the poor to crusades and inquisitions and holy wars. It’s all, in the end, based on a story.

The network of cathedrals is kind of the nerve center of the whole thing. The question is, what do you preach to people in the cathedral? Do you preach that they should give some of their money and their time to help the poor, to heal the sick? Or do you tell them to wage war against the infidels and against the heretics?

Gardels: Networks built on stories bring people together through the information they make available. But what you call “a naïve view of information” can make things worse, not better. Can you explain what you mean by this?

“We are a storytelling animal.”

Harari: The naïve view of information, which is very common in places like Silicon Valley, thinks that information equals truth. If information is truth, the more information you have in the world, the more knowledge you have and the more wisdom you have; the answer to any problem is just more information.

People do acknowledge that there are lies and propaganda and misinformation and disinformation, but they say, “OK, the answer to all these problems with information is more information and more freedom of information. If we just flood the world with information, truth and knowledge and wisdom will kind of float to the surface on this ocean of information.” This is a complete mistake because the truth is a very rare and costly kind of information.

Most information in the world is not truth. Most information is junk. Most information is fiction and fantasies, delusions, illusions and lies. While truth is costly, fiction is cheap.

If you want to write a truthful account of things that happened in the Roman Empire, you need to invest so much time and energy and effort. Experts go to universities and spend 10 years just learning Latin and Greek and how to read ancient inscriptions. Then, just because you found an inscription by Augustus Caesar saying something doesn’t mean it’s true. Maybe it’s propaganda, maybe it was a mistake. How do you tell the difference between reliable and unreliable information? So, the truth is costly to find.

In contrast, if you want to write a fictional story about the Roman Empire, it’s very easy. You just write the first thing that comes to your mind. You don’t need to fact-check. You don’t need to know Latin or Greek or to go do archeological excavations and find ancient pottery sherds and try to interpret what they mean.

The truth is also complicated, while fiction can be made as simple as you would like it to be. What is the truth about why the Roman Republic fell, or why the Roman Empire fell? Is it because of loose sexual morals, as so many believe? The whole truth is very, very complicated and involves many factors.

Gardels: It is the very naïve simplicity of fiction that makes it easier for so many to grasp and for such narratives to capture so much attention.

Harari: Exactly. And finally, the truth is often painful to take in, even at the level of individuals. It’s difficult to acknowledge the truth about how we behave, how we treat the people we love, how we treat ourselves. This is why people go to therapy for years to understand themselves.

That applies as well at the level of nations and cultures. I look at my country, Israel.  If you have an Israeli politician who will tell people the truth, the whole truth and nothing but the truth about the Israeli-Palestinian conflict, that person will not win the election — guaranteed. People do not want to hear it; they do not want to acknowledge it.

That is true in the U.S. It’s true in India, Italy, in all the nations of the world. It’s also true for religions.

The truth can be unattractive. Fiction can make one’s image of reality as pleasing and attractive as you would like. So, in a competition between information that is costly, complicated and unattractive, and information that is cheap and simple and pleasing, it’s obvious which one will win.

If you just flood the world with information, truth is bound to lose. If you want truth to win and to acquire knowledge and wisdom, you must tilt the playing field. How? By building institutions that do the difficult work of investing the time, resources and effort to find the truth and to explain it and promote it.

These institutions can run the gamut from research institutions and universities to newspapers, to courts — though in the judicial system, it’s also often not easy to know what the truth is. Only if we invest in these kinds of institutions that preserve the hope of reaching the truth and acquiring knowledge and developing wisdom, can we tilt the balance.

Gardels: In other words, since the fictions or delusions you’ve described are what secure social cohesion, the prevailing logic of information networks is to privilege order over truth, which is disruptive.

“The truth is a very rare and costly kind of information.”

Harari: Yes. For an information network to function, you need two things. You need to know some truth. If you ignore reality completely, you will not be able to function in the universe and you will collapse. But at the same time, just knowing the truth is not enough. You also need to preserve order. You need to preserve cohesion.

For the human body to function, it needs to know some truth about the world: How to get water, how to get food, how to avoid predators. But the body also needs to preserve all these billions of cells working together. This is also true of armies and churches and states.

The key thing to understand is that order, in most cases, is more important than truth for societies to cohere and work together collectively.

If you think, for instance, about a country trying to develop nuclear weapons, what do you need to do to build an atom bomb? You obviously must know some facts about physics. If you ignore all the facts of physics, your bomb will not explode. But just knowing the facts of physics is not enough.

If you have a lone physicist, the most brilliant physicist in the world, and she knows that E=mc² and she’s an expert on quantum mechanics, she can’t build an atom bomb by herself. Impossible. Just knowing the truth is not enough. She needs help from millions of other people. She needs miners in some distant land to mine the uranium. She needs people to design and build the reactor and centrifuges to enrich the uranium to bomb-grade. And she needs people, of course, to farm food so that she and the miners and engineers and construction workers will have something to eat. You need all of them.

So, to motivate them collectively and bind them to the project, you need a story. You need a mythology. You need an ideology. And when it comes to building the mythology that will inspire these millions of people, the facts are not so crucial.  

Now, most of the time the people who understand nuclear physics get their orders from experts in mythology or ideology. If you go to Iran these days, you have experts in nuclear physics getting orders from experts in Shiite theology. If you go to Israel, the experts in nuclear physics are getting orders from experts in Jewish theology. If you were in the Soviet Union, the orders came from Communist ideologues. This is usually how it works in history: The people who understand order are giving the orders to the people who merely know the truth.

Narrative Warfare

Gardels: Networks of connectivity are a dual-use technology. They can foster social cohesion and collective action, but they can also divide. Particularly now with peer-to-peer social media, you have every group with its own identity, believing its own truth and spinning its own narrative.

This creates a kind of archipelago of subcultures, a fragmented sense of reality, which actually subverts cohesion.

As the Korean-German philosopher Byung-Chul Han puts it, peer-to-peer connectivity flows from private space to private space without creating a public sphere. Without a public sphere, there cannot be social cohesion. So you have this dual dynamic of cohesion — whether it’s for good or bad purposes — and then you have complete fragmentation. Order breaks down.

Harari: Absolutely. Stories unite, but stories also divide. Because a binding narrative is so important to keeping the order, to keeping things together, “narrative warfare” is the most potent type of warfare in the world: it can cause the disintegration of the entire network. Yes, it goes both ways, absolutely.

Empowerment & Control

Gardels: There is another dual aspect of information networks: they both concentrate and disperse power at the same time.

This quote from DeepMind’s co-founder Mustafa Suleyman captures the contradictory nature of that duality:

“The internet centralizes in a few hubs while also empowering billions of people. It creates behemoths and yet gives everyone the opportunity to join in. Social media created a few giants and a million tribes. Everyone can build a website, but there is only one Google. Everyone can sell their niche products, but there is only one Amazon. The disruption of the internet is largely explained by this tension, this potent, combustible brew of empowerment and control.”

In other words, network connectivity tends to centralize in order to be more efficient, but it also creates billions of possibilities.

“Now, most of the time the people who understand nuclear physics get their orders from experts in mythology or ideology.”

Harari: Yes, but it’s not deterministic. You can build different kinds of information networks. One of the things that I try to do in my book, “Nexus,” is look again at the whole of human history from this viewpoint of information networks, to understand institutions like the Catholic Church, the Soviet Union or the Roman Empire as information networks. I’ve studied how information flows differently with different models.  If you do this, you see that many of the conflicts and wars that shaped history are actually the result of different models of information networks.

Maybe the best example is the tension between democracy and dictatorship. We tend to think of democracy and dictatorship as different ethical systems committed to different political ideologies. That is true. But at a more fundamental level, they are simply different models for how information flows in the world.

A dictatorship is a centralized information network. All the decisions are supposed to be made in just one place. There is one person who dictates everything. So all the information must flow to a central hub where all the decisions are being made and from where all the orders are sent.

A democracy, in contrast, is a distributed information network. It is decentralized. Most decisions are not made in the center but in other, more peripheral places.  In a democracy you will see that, yes, a lot of information is flowing to the center, let’s say to Washington in the United States. But not all of it.

You have lots of organizations, corporations, private individuals or voluntary associations that make decisions by themselves, without any kind of guidance or permission from Washington. Much of the information just flows between private companies and voluntary associations and individuals without ever passing through the Washington center, through the government.

The other thing that distinguishes these two models is that democracies retain strong self-correcting mechanisms that can identify and correct mistaken decisions made at the center.

The danger in a democracy, always, is that the center might use its power to accumulate more and more power until you get a dictatorship.

At the simplest level, in a democracy, you give power to a person or a party for four years or some limited term, on the condition that they give it back so the people can make a different choice. What happens if they don’t give back the power now that they have it? What can compel them to give it back?

That has been the big problem of democracy from ancient Greece until modern America, and this is also the issue at the center of the present election in America. You have a person, Donald Trump, with a proven track record of not being keen to give up power after you give it to him. That is what makes the upcoming election such a huge gamble.

In places like Russia, or now also in Venezuela, the public gave power through elections to somebody who now doesn’t want to give it up. Here, it is clear that the self-correcting mechanism of elections alone is not enough. If all the other distributed powers of self-correction, such as courts or free media, are in the hands of a government that suppresses any active opposition, it is very easy to manipulate electoral outcomes.

We see this again and again from ancient history, from the Roman Republic to the present day. Dictators don’t abolish elections; they just use them as a kind of facade to hide their power and as a kind of authoritarian ritual. You hold an election every four years in which you win every time by a 90% majority, and you say, “Look, the people love me.”

So elections by themselves are not enough. You need the entire range of self-correcting mechanisms which are known as the checks and balances of democracy to make sure that the distributed information network remains distributed and not overly centralized.

Leninist AI

Gardels: How does the advent of artificial intelligence amplify these models of information networks?

Harari: We don’t know yet. One prominent hypothesis is that AI could tilt the balance decisively in favor of centralized information networks, in favor of dictatorships. Why?

Let’s look back again at the 20th century. The 20th century ended with people convinced that democracy won, that democracy is simply more efficient than dictatorship. And again, the easiest way to understand it is not in ethical terms, but in terms of information.

“One prominent hypothesis is that AI could tilt the balance decisively in favor of centralized information networks, in favor of dictatorships.”

The argument then was that when you try to concentrate all the information of a country like the Soviet Union in one place, it is just extremely inefficient. Humans at the center are just unable to process so much information fast enough, so they make bad decisions, first and foremost, bad economic decisions. There is no mechanism to correct their mistakes, and the economy goes from bad to worse, until you have a collapse, which is what happened to the Soviet Union.

In contrast, what made the West successful was a distributed information system like that of the United States, which allowed information to go to many different places. You didn’t just rely on a few bureaucrats in Washington to make all the important economic decisions. And if somebody in Washington made the wrong decision, you could replace him. You could correct a mistake. So in the end, it was an economic competition in which the distributed system won, because it was far, far more efficient.

Now, enter AI and people say, “Ah, well, when you concentrate all the information in one place, humans are unable to process it, and they make really bad decisions. But AI is different. When you flood humans with information, they are overwhelmed. When you flood AI with information, it becomes better. Data is the food, the fuel for the growth, of AI. So the more of it the better.  What couldn’t work in the 20th century, because you had humans at the center, might work in the 21st century when you put AI systems at the center.”

What you see today, therefore, even in capitalist societies, is that one area after another is being monopolized by a single behemoth, as Suleyman pointed out. What we are seeing is the creation of extremely centralized information networks, because AI algorithms just make centralization far more efficient.

Not everybody agrees with this analysis. One weakness of the argument is that it fails to account for the absence of self-correcting mechanisms. Yes, even if you put all the information in one place and the AI can process that information in a way that humans can’t, it still makes mistakes. Like humans, AI is fallible, very, very fallible. So, this is just a recipe for disaster. Sooner or later, this kind of Leninist AI will make some terrible mistakes, and there will be no mechanism to correct them.

Another thing worth pointing out, if you think about the impact on human dictators, is the threat AI poses to them. The biggest fear of every human dictator in history was not a democratic revolution. That was a very rare occurrence in history. Not a single Roman emperor was toppled by a democratic revolution.

The biggest fear of every human autocrat is a subordinate who becomes more powerful than him and who he doesn’t know how to control. Whereas no Roman emperor was ever toppled by a democratic revolution, dozens of Roman emperors were assassinated, overthrown or manipulated by powerful subordinates — by some army general, provincial governor, their wife, cousin. This was always the biggest danger.

If I’m a human dictator, I should be terrified by AI because I’m bringing into the palace a subordinate that will be far more powerful than me and that I have no chance of controlling.

What we know from the history of dictatorships is that when you concentrate all power in the hands of one person, whoever controls that person controls the empire. What we also know is that it’s relatively easy to manipulate autocrats. They tend to be extremely paranoid individuals. Every sultanate or Chinese empire has always had concubines, eunuchs and counselors who knew how to manipulate the paranoid person at the top.

For an AI to learn how to manipulate a paranoid Putin or a paranoid Maduro is like stealing candy from a baby. That would be the easiest thing in the world. And so, if you think about human dictatorships, AI poses an enormous danger. For the Putins and Maduros of the world, I would tell them, “Don’t rush to embrace AI.”

Alien Intelligence

Gardels: The most worrying thing about AI is how it hacks the master key of human civilization by appropriating what you’ve called the superpower of sapiens — language and the ability to construct stories that bind societies together.

In this context, you see AI as an alien force that is a threat to our species.

“For an AI to learn how to manipulate a paranoid Putin or a paranoid Maduro is like stealing candy from a baby.”

Harari: I think of AI as an acronym not for artificial intelligence, but for alien intelligence. I mean alien not in the sense that it’s coming from outer space, but alien in the sense that it thinks, makes decisions and processes information in a fundamentally different way than humans. It’s not even organic.

The most important thing to realize about AI is that it is not a tool. It’s an agent. Every previous technology in history was a tool in our hands. You invent a printing press, you decide what to print. You invent an atom bomb, you decide which cities to bomb. But you invent an AI, and the AI starts to make the decisions. It starts to decide which books to print and which cities to bomb, and eventually even which new AIs to develop. So don’t think about it like the previous technologies we’ve had in history. This is completely new.

For the first time, we have to contend with a very intelligent agent here on the planet. And it’s not one agent. It’s not like one big supercomputer. It’s potentially millions and billions of AI agents that are everywhere. You have AIs in the banks deciding whether to give us a loan. You have AIs in companies deciding whether to give us jobs. You have AIs at universities deciding whether to accept you and what grades to give you. AIs will be in armies, deciding whether to bomb our houses or to target us and kill us.

We haven’t seen anything yet. Let’s remember that the AIs of today, like ChatGPT, are extremely primitive. These AIs are likely to continue developing for decades, centuries, millennia and millions of years ahead.

I talked in the beginning about organic evolution from single-celled organisms like amoebas to multicellular organisms like dinosaurs, mammals and humans. It took billions of years of evolution. AI is now at its amoeba stage, basically. But it won’t take it billions of years to get to the dinosaur stage. It may take just 20 years, because digital evolution is far, far faster than organic evolution.

If ChatGPT is the amoeba, what do you think an AI T. rex would look like? Think about it very, very seriously, because we are likely to encounter AI T. rexes in 2040 or 2050, within the lifetime of most people reading this.

By definition, AI is not something we can plan for in advance, anticipating everything it will do. If you can anticipate everything it will do, then it is not AI.

An AI learns and changes by itself, and this is why the challenge is so big. The idea that, “Oh, we can just build some safety mechanisms into it, and we can just have these regulations,” completely misunderstands that what we are contending with is an alien agent that can act on its own and is not a tool like all previous technologies.

Gardels: Even now, in the primitive stages of AIs, aren’t we seeing the scale and scope of its impact?

Harari: Definitely. We are not talking only about the future, but what AI has already wrought. We’ve already seen at least one big catastrophe with the way that algorithm-driven social media destabilizes democracies and societies all over the world. This was kind of a first taste of what happens when you release an agent into the world that makes decisions by itself.

The algorithms of social media used by Twitter/X, YouTube or Facebook are extremely primitive. Though only first generation, these AIs have had a huge impact on history. These social media algorithms were given the task of increasing user engagement: to make more people spend more time on Facebook, more time on YouTube, more time on Twitter. What could go wrong? Engagement is a good thing, right?

Wrong, because the AIs discovered that the easiest way to increase user engagement is to spread hate and fear and greed, since that is what most readily catches human attention. You press the hate button in people’s minds, and they are glued to the screen. They stay longer on the social media platforms, and the algorithms can place ads while they have people’s attention.

Nobody instructed the AIs to spread hatred and outrage. Mark Zuckerberg and the other people who run Facebook or YouTube did not set out to deliberately spread hate. They gave power to these algorithms, and the algorithms did something unexpected and unanticipated because they are AI. This is what AIs do.

“The most important thing to realize about AI is that it is not a tool. It’s an agent.”

The damage is not in the future. It is already in the past. American democracy is now in danger because of how these extremely primitive AIs have fragmented our societies.

Just imagine what more sophisticated AI models will wreak 10 or 20 years from now, let alone when the amoebas become dinosaurs.

We Need Checks & Balances On Distributed Power

Gardels: To go back to this point about the self-correcting mechanisms of republics and democracies: In the past, they have defended themselves by putting in place checks and balances whenever too much power is concentrated in one place. The social splintering you’ve described suggests we now need another set of checks and balances for when power is so distributed into tribes by social media that the public sphere is disempowered, social cohesion can’t hold, and what binds societies together disintegrates.

Harari: Yes, you need checks and balances on the other side as well. Absolutely. Democracy always needs to find the middle path between dictatorship on one side, and anarchy on the other side. Anarchy is not democracy. If you lose all cohesion, that is not democracy.

Democracy, at least in the modern world of large-scale societies, requires patriotism and nationalism. It is very hard to maintain a democratic system without a cohesive national community. Lots of people get this wrong, especially on the left. They think that nationalism and patriotism are forces of evil, that they are negative; that the world would be such a wonderful place without patriotism. It won’t. It will fall into tribal anarchy.

The nation is a good thing when it’s built and maintained properly. Nationalism should not be about hate. If you’re a patriot, it doesn’t mean you hate anybody. It’s not about hating foreigners. Nationalism is about love. It’s about loving your compatriots. It’s about being part of a network of millions of people, most of whom you’ve never met in your life, but whom you care about enough that, for instance, you’re willing to give 20%, 40%, even 60% of your income so that these strangers on the other side of the country can enjoy good health care, education, a working sewage system and drinking water. This is patriotism.

If this community of belonging disintegrates, then democracies fall apart. What is so worrying these days is that it is often those leaders who portray themselves as nationalists who are the most responsible for destroying the national community.

Look at my own country. Prime Minister Benjamin Netanyahu built his political career for years by destroying the Israeli nation and breaking it up into hostile tribes. He deliberately spreads hate, not just against foreigners, but between Israelis, dividing the nation against itself. For one group of Israelis, he’s the Messiah, the greatest person who ever lived.

For another major part of Israeli society, he’s the most hated person in the history of the country. One thing is clear, he’s the last person on Earth who can unite the Israeli nation. If you were to pick a random person here on the streets of Los Angeles, that person has a much better chance of uniting the Israeli nation than Netanyahu.

It’s the oldest trick in the book: Divide and rule. It destroys nations, and it also destroys democracy, because once the nation is split into rival tribes, democracy is unsustainable.

In a democracy, you see other people, not as your enemies, but as your political rivals. You say, “They are not here to destroy me and my way of life. I think they are wrong in the policies they suggest, but I don’t think they hate me. I don’t think they try to harm me. So OK, I was in power for four years or eight years, or whatever, and I tried a bunch of policies, and now they want to try different policies. I think they are wrong. But let’s try and see, and after a few years, if their policies actually turn out to be good, I’ll say I was wrong. They were right.”

If you start thinking of other people not as political rivals, but as enemies — a different tribe out to destroy my tribe — then every election turns into a war of survival. If the other tribe wins, that’s the end of us. So, we must do everything, anything legal or illegal, to win the war, because it is a war. And if we lose the elections, there is no reason to accept the verdict. And if we win the elections, we only take care of our own tribe.

“If you were to pick a random person here on the streets of Los Angeles, that person has a much better chance of uniting the Israeli nation than Netanyahu.”

Gardels: We see this same phenomenon, in varying degrees, across most Western democracies today.

You don’t see this in China. President Xi Jinping’s uniting narrative is the rejuvenation of Chinese civilization. Unabashedly, his regime privileges order over freedom in the name of social cohesion — and over truth.

Is it a foregone conclusion that China will be on the losing end of history? Perhaps it is democratic societies — so fragmented that they can’t hang together — that will be on the wrong side of history?

Harari: It’s not a foregone conclusion. Nothing is deterministic. We don’t know.  Absolutely, they could be on the right side of history, at least for a while. We could also have a split world in which you have different models, different systems, in different parts of the world, competing for quite a long time, like during the Cold War. That is, of course, very bad news, because then it’s going to be very, very difficult to have any kind of joint human action on the most important existential threats facing us, from climate change to the rise of AI.

Gardels: There is another area where the AI future is demonstrably already here.  Whatever one thinks about the war in Gaza, Israel has been very efficient in rooting out Hamas without massive casualties of its own. Some say the reason is its widespread use of AI in sifting through data and other intelligence and in identifying targets. What do you know about this?

Harari: This is something I’ve been working hard to understand over the last few months. I haven’t written anything about it because I still am not sure of the facts. So, I will be very, very careful about what I say.

What everybody I’ve talked with agrees on is that AI has been a major game changer in the war, not in terms of autonomous weapon systems, which everybody focuses on, but in terms of intelligence, and especially in choosing targets where the actual shooting will be done by humans.

The big question is, who is giving the orders? And here, there is a big debate. One camp argues that, increasingly, the AI is giving the orders, in the sense of selecting the targets for bombing. They say that Israel has deployed several very powerful AI tools that collect enormous amounts of data and information to discern patterns in the data that are often hidden from human eyes, patterns that analysts would take weeks to discover, but that AI can discover in minutes. Based on that, the AI identifies that X building is a Hamas headquarters to bomb or X person is a Hamas activist to kill.

Here is the big disagreement. One camp says, basically, the Israelis just do what the AI tells them to do: Bomb those buildings. They kill people based on this AI analysis, with very little oversight by humans who don’t have the time, or maybe the willingness, to review all the information and make sure that if the AI told us to bomb that building, it’s really a Hamas headquarters — that it’s not a false positive and the AI did not make a mistake.

The other camp says, yes, the AI is now central in choosing targets, but there are always humans in the loop for ethical reasons, because the Israeli army and security forces are committed to certain ethical standards. They don’t just go blowing up buildings and killing people because some algorithm told them to. They check it all very thoroughly. They say: “The AI is crucial because it suddenly brings to our attention a building we never even thought of checking. We thought it was a completely innocent building, but the AI said, ‘No, this is Hamas’ headquarters,’ and then we have human analysts review all the relevant data.

“We couldn’t do it without the AI. But now that the AI has pointed out the target, it is much faster than in the past to review all the data, and if we have compelling evidence that this is Hamas’ headquarters, then we bomb.”

I’m not sure which camp to believe. Very likely, in different phases of the war, it worked differently. At certain times, less care was taken in making sure that the AI got it right and there was more tolerance for false positives and less tolerance for false negatives than at other times or other places.

What everybody agrees upon is that the AI definitely sped up the process of finding targets, which goes at least some way toward explaining Israel’s military success.

“AI has been a major game changer in the war … in terms of intelligence, and especially in choosing targets where the actual shooting will be done by humans.”

Again, there is a huge ethical and political debate to be had here. But if we just put on the cold glasses of military analysis without any ethics, then the Israeli security forces have had tremendous success.

People thought that Hamas had built what was perhaps one of the biggest underground fortresses in the world in Gaza. According to Hamas sources, 900 kilometers [559 miles] of underground tunnels, fortified and stocked with missiles, were constructed beneath houses. The expectation was that the Israelis would never be able to take over the Gaza Strip, or that it would cost the Israelis many thousands of soldiers’ lives in very difficult street-to-street, house-to-house combat.

This turned out not to be the case because of AI.

Hollywood Miscasts AI As The Terminator

Gardels: Hollywood has cast the AI story as “rogue robots revolting against their masters,” in films like “The Terminator” and “Blade Runner.” Is that the right image of AI we should have in our mind’s eye?

Harari: No, it’s completely misleading.  Hollywood did a huge service in focusing people’s attention on the AI problem long before anybody else really thought about it. But the actual scenario is misleading because the big robot rebellion is nowhere in sight. And this unfortunately makes a lot of people who grew up on “The Terminator” image complacent.  They look around, they don’t see any kind of Terminator or Skynet scenario as really being feasible. So, they say everything is okay.

These films portray a kind of general-purpose AI that you just throw into the world and it takes over. It can mine metals from the ground, build factories and assemble hordes of other robots.

However, the AIs of today are not like this. They don’t have general intelligence. They are idiot savants: extremely intelligent in a very narrow field. AlphaGo knows how to play Go, but it can’t bake a cookie. And the AIs used by the military can identify a Hamas headquarters, but they cannot build weapons.

What the cinematic image so far misses is that the AIs don’t need to start from scratch. They are inserted into our own systems, and they can take our systems over from within.

I’ll give a parallel example. Think about lawyers. If you think about the best lawyer in the United States, this person is, in a way, an idiot savant. This person can be extremely knowledgeable and intelligent in a very, very narrow field like corporate tax law, but can’t bake a cookie and can’t produce shoes.  If you take this lawyer and drop him or her in the savannah, they are helpless — weaker than any elephant or any lion.

But they are not in the savannah. They are inside the American legal and bureaucratic system. And inside that system, they are more powerful than all the lions in the world put together because that single lawyer knows how to press the levers of the information network and can leverage the immense power of our bureaucracies and our systems.

This is the power that AI is gaining. It’s not that you take ChatGPT and throw it in the savannah and it builds an army. But if you throw it into the banking system, into the media system, it has immense power.

Gardels: Is the unleashing of AI’s algorithmic spirits into the bureaucracies of our information networks, rather than some unit of general artificial intelligence, where we ought to focus our anxieties about these alien agents? Is it the presence of AIs in the banal tasks of large systems that is most worrisome?

Harari: Yes, it’s the AI bureaucrats, not the Terminators, that will be calling the shots. That is what we need to worry about. Even in warfare, people may be pulling the trigger, but the orders will come from the AI bureaucrats.

Gardels: Big Tech says that because they are such huge players in society, they are bound to be responsible. If AI models get too powerful and threaten to make their own decisions, they can pull the plug or hit the kill switch. Do you have any faith in that perspective?

Harari: Only on a very small scale. It’s like the Industrial Revolution. Imagine if, a century ago, all these coal and oil giants had told us, “You know, if industry causes this pollution and the ecological system is in danger, we’ll just pull the plug.” How do you pull the plug on the Industrial Revolution? How do you pull the plug on the internet?

“It’s the AI bureaucrats, not the Terminators, that will be calling the shots. That is what we need to worry about.”

They have in mind some small malfunction in one confined company or location or model.  If some unauthorized agent tries to launch nuclear missiles, the process can be shut down. That can be done. But that is not where the danger lies.

There will be countless AI bureaucrats, billions globally, in all the systems. They are in the healthcare system, the education system, in the military. If, after a couple of years, we discover we made a big mistake somewhere and things are getting out of control, what do you do? You can’t just shut down all the militaries and all the healthcare systems and all the education systems of the world. It is completely unrealistic and misleading to think so.

Gardels: What can be done preemptively to retard the proliferation of algorithmic spirits throughout all human-designed systems?

Harari: First, understand the problem and conceptualize it properly: not as rogue robots, but as AIs taking over from within, as we just discussed. We humans rush to solve problems and then end up solving the wrong problems. Stay with the problem a little; really understand what the problem is before you rush to offer solutions.

Information Fasting

Gardels: The knowledge contained in your books from “Sapiens” to “Homo Deus” to “Nexus” is encyclopedic. You’re like a Large Language Model. Ask a question, push the button and it all spews out. Where do you get your information? What do you read?

Harari: First of all, I have four people on my research team. So, if I want to know something about Neanderthals or about AIs, I get help from other people who delve deeply into the matter. Personally, I have an information diet the same way that people have food diets.

I think it is good advice for everybody to go on an information diet because we are flooded with far too much information, and most of it is junk. So, in the same way people are very careful about what they eat, they should be very careful about how much and what they consume in terms of information.

I tend to read long books and not short tweets. If I really want to understand what’s happening in Ukraine, what’s happening in Lebanon, whatever, I go and read several books, depending on what the issue is — if it’s about LLMs or the Roman Empire, history or biology or computer science.

The other thing I do is go on information fasts. Since most information is junk, and information needs processing, just putting more information in your head doesn’t make you smarter or wiser. It just fills your mind with junk.

So, as important as it is to consume information, we also need time off to digest it and to detoxify our minds. To do that, I meditate for two hours daily. Every year I go on a long retreat of between 30 and 60 days, during which I don’t consume any new information. These are silent retreats. You don’t even talk to the other people in the meditation center; you just process, you just digest, you just detoxify everything you accumulated during the year.

I know that for most people this is going to extremes. Most people just can’t afford the time and resources to do it. But still, I think it is a good idea for everybody to think more carefully about their information diet, and also to take at least short information fasts, maybe a day each week or a few hours each day, when you don’t consume more information.

 This interview was edited for clarity and length.