Nathan Gardels is the editor-in-chief of Noema Magazine.
The stories we tell ourselves and the images we inhabit determine human actions. Despite mounting challenges, Hollywood remains the global epicenter of that métier. In the age of AI, synthetic biology and climate change, nothing is more important than getting the story right. What is fact, what is fiction? Where can fiction light the way ahead, or preemptively warn of dangerous paths not to be taken?
Think tanks in Washington churn out white papers and policy studies aimed at influencing Congress and the White House. Since the Berggruen Institute is situated in Los Angeles, it has made more sense to engage with the entertainment industry, which inordinately shapes the meta-narrative in the minds of the multitude.
To that end, we recently established Studio B as a way to bring scientists, technologists, historians and philosophers together with storytellers and imagemakers through a series of interactive salons.
A forthcoming event, for example, will pair Lisa Joy, creator of the sci-fi series ‘Westworld,’ with Fei-Fei Li, often referred to as the godmother of AI, who heads Stanford’s Institute for Human-Centered AI.
The latest discussion brought together one of Silicon Valley’s leading venture capitalists, Reid Hoffman, and Kevin Scott, the chief technology officer of Microsoft, with J.J. Abrams, the famed director and producer whose films include the most recent ‘Star Wars’ and ‘Star Trek’ installments.
In that discussion, Hoffman extolled generative AI, arguing it would amplify human creativity on a par with the Renaissance. “Technology is not just person versus machine,” he argued. “It’s not just: ‘The machines are coming for us.’ But it’s kind of: How do we shape this?” For Hoffman, the answer is as co-creators with powerful new tools for innovation.
Scott emphasized that we are only at the beginning, the infant stage, of AI capability. One’s optimism or pessimism should not be based on small increments of improvement because, suddenly, they can leap to a breakthrough when it all comes together, as was the case with OpenAI’s ChatGPT.
As he put it, “It is the structure of the ecosystem that the frontier advances almost in bursts.” Above all, this means that as billions are poured into AI development, compute power will grow exponentially, and the use of AI will be simplified by advances such as text- and speech-to-video. Inevitably, the barriers to entry will fall and the technology will become widely accessible.
From the Hollywood perspective, as Abrams sees it, this very availability and accessibility of AI, both for the production and consumption of content, will “fundamentally shift everything” in the way the industry works. “You’re going to be able to ask for anything and get it, obviating the need for studios.” That threat of redundancy applies to directors and actors as well as to all the grips and cinematographers who will become less necessary with inventions like Sora, where you can describe a scene in speech and have it instantly visualized on screen. AI celebrities will appear where human actors once trod.
“I think it is vaguely hilarious when it’s referred to as a tool,” Abrams quipped. “Like — it’s not a hammer. It is a fucking like fully formed robot, sometimes indistinguishable from humans, holding a hammer.” In other words, while generative AI may more widely empower creativity and lower the barriers to entry, it is a Terminator of business as usual on the backlot.
Even as more and more people are eliminated from the process, says Abrams, the training of AIs and the making of films will “require fairly in-depth human feedback to align them to the kinds of things we want to see.” That element of co-creation will inexorably remain. The creative class won’t just become “passive couchers.” But the transformation does threaten to “leave us in the dust intellectually, and certainly, just in terms of energy to make things, it’s ceaseless.”
In this, Abrams was in essential agreement with what the actor/director/tech investor Ashton Kutcher said in an earlier dialogue with Eric Schmidt, the former CEO of Google. Kutcher argued that, for all its disruption, Hollywood must bite the bullet and embrace AI as a co-creator, not resist it. For him the choice is clear: “Either AI will serve us, or we will serve AI.”
If only glancingly, all these discussions hinted at the larger existential issue raised by Yuval Noah Harari that generative AI has hacked the “master key” of human civilization — language — and thus the capacity to manipulate and command the stories, myths or religions that are the basis of social cohesion.
AI Beyond The Big Screen
When it comes to sorting fact from fiction in crafting a narrative, few are more knowledgeable than Schmidt, who has become something of an elder statesman for all things AI. In an interview with Noema before his discussion with Kutcher, Schmidt walked me through the “capability ladder” of AI progress: where it is now, where it is headed, how fast and when to “pull the plug.”
Schmidt sees the advancing capabilities of AIs converging rapidly, in the next five to 10 years, into ever-more powerful systems, or “agents,” that can suck up all available information and process it through “chain-of-thought reasoning” to come up with solutions to challenges in medicine, materials science and climate change that are beyond the capacity of any human working alone.
He warns:
What happens then poses a lot of issues. Here we get into the questions raised by science fiction. What I’ve described is what is happening already. But at some point, these systems will get powerful enough that the agents will start to work together. …
Some believe that these agents will develop their own language to communicate with each other. And that’s the point when we won’t understand what the models are doing. What should we do? Pull the plug? Literally unplug the computer? It will really be a problem when agents start to communicate and do things in ways that we as humans do not understand. That’s the limit, in my view.
It is worth noting in this context that DeepMind just released its first Frontier Safety Framework, which identifies capabilities a model may have with the potential for severe harm and outlines a mitigation plan for when models trip early-warning evaluations.
China: Chips And The Risks Of Open Source
Schmidt also discussed how to cope with the other AI superpower, China. He accompanied Henry Kissinger on his last visit to China to meet President Xi Jinping, with the mission of establishing a high-level group from both East and West to discuss, on an ongoing basis, both “the potential as well as catastrophic possibilities of AI.” Notably, Schmidt served as chairman of the U.S. National Security Commission on Artificial Intelligence.
He observes:
In the first place, the Chinese should be pretty worried about generative AI. And the reason is that they don’t have free speech. And so, what do you do when the system generates something that’s not permitted under the censorship regime?
Who or what gets punished for crossing the line? The computer, the user, the developer, the training data? It’s not at all obvious. What is obvious is that the spread of generative AI will be highly restricted in China because it fundamentally challenges the information monopoly of the Party-State. That makes sense from their standpoint.
There is also the critical issue of automated warfare or AI integration into nuclear command and control systems, as Dr. Kissinger and I warned about in our book, ‘The Age of AI.’ And China faces the same concerns that we’ve been discussing as we move closer to general artificial intelligence. It is for these reasons that Dr. Kissinger, who has since passed away, wanted Xi’s agreement to set up a high-level group. Subsequent meetings have now taken place and will continue as a result of his inspiration.
Not unlike the Open Skies monitoring of nuclear missile sites to provide transparency of capabilities, Schmidt believes that “both sides should agree on … a simple requirement that, if you’re going to do training for something that’s completely new on the AI frontier, you have to tell the other side that you’re doing it. In other words, a no-surprise rule.”
He adds:
If you’re doing powerful training, there needs to be some agreements around safety. In biology, there’s a broadly accepted set of threat layers, biosafety levels 1 to 4, for containment of contagion. That makes perfect sense because these things are dangerous.
Eventually, in both the U.S. and China, I suspect there will be a small number of extremely powerful computers with the capability for autonomous invention that will exceed what we want to give either to our own citizens without permission or to our competitors. They will be housed in an army base, powered by some nuclear power source and surrounded by barbed wire and machine guns. It makes sense to me that there will be a few of those amid lots of other systems that are far less powerful and more broadly available.
The point of these protective outposts, according to Schmidt, is:
You want to avoid a situation where a runaway agent in China ultimately gets access to a weapon and launches it foolishly, thinking that it is some game. Remember, these systems are not human; they don’t necessarily understand the consequences of their actions. They [large language models] are all based on a simple principle of predicting the next word. So, we’re not talking about high intelligence here. We’re certainly not talking about the kind of emotional understanding in history we humans have.
So, when you’re dealing with non-human intelligence that does not have the benefit of human experience, what bounds do you put on it? That is a challenge for both the West and China. Maybe we can come to some agreements on what those are?
What most concerns Schmidt, more than China’s access to the most powerful computing chips manufactured in the West, is the misuse of open-source models. He says:
The chips are important because they enable the kind of learning required for the largest models. It’s always possible to do it with slower chips; you just need more of them. And so, it’s effectively a cost tax on Chinese development. That’s the way to think about it. Is it ultimately dispositive? Does it mean that China can’t get there? No. But it makes it harder and means that it takes them longer to do so.
I don’t disagree with this strategy by the West. But I’m much more concerned about the proliferation of open source. And I’m sure the Chinese share the same concern about how it can be misused against their government as well as ours.
We need to make sure that open-source models are made safe with guardrails in the first place through what we call ‘reinforcement learning from human feedback’ (RLHF) that is fine-tuned so those guardrails cannot be ‘backed out’ by evil people. It has to not be easy to make open-source models unsafe once they have been made safe.
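For readers who want the mechanics behind Schmidt’s prescription, here is a deliberately tiny sketch of the two ideas he invokes: a language model as next-word prediction (his earlier point) and human feedback as a tilt applied to what the model prefers to say. The corpus, the ratings and every name in it are invented for illustration; this is a conceptual toy, not how any production system is built.

```python
# Conceptual toy only: a bigram "language model" (next-word prediction)
# plus a crude human-feedback signal that shifts its choices, in the
# spirit of RLHF. All data and names here are invented for illustration.
import math
from collections import defaultdict

corpus = "the agent follows the rule the agent follows the plan".split()

# (1) Next-word prediction: count which word follows which in the corpus.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev):
    """The 'simple principle' Schmidt cites: P(next word | previous word)."""
    following = counts[prev]
    total = sum(following.values())
    return {word: n / total for word, n in following.items()}

# (2) Feedback: imagined human raters reward one continuation and
# penalize another; we tilt the model's log-probabilities accordingly.
feedback = {"rule": +1.0, "plan": -1.0}  # hypothetical ratings

def tuned_probs(prev, strength=2.0):
    scores = {w: math.log(p) + strength * feedback.get(w, 0.0)
              for w, p in next_word_probs(prev).items()}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

print(next_word_probs("the"))  # raw model: 'rule' and 'plan' equally likely
print(tuned_probs("the"))      # after feedback: 'plan' suppressed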
From those who spin the narratives in Hollywood to those who rule over more than a billion people in the Middle Kingdom, advancing AI is shaking things up everywhere. More than a technological revolution, we are in many ways undergoing a phase transition in how humans order their affairs and frame their meaning.