The Five Stages Of AI Grief

Grief-laden vitriol directed at AI fails to help us understand paths to better futures that are neither utopian nor dystopian, but open to radically weird possibilities.

Benjamin Bratton is the director of the Antikythera program at the Berggruen Institute and a professor at the University of California, San Diego.

At an OpenAI retreat not long ago, Ilya Sutskever, until recently the company’s chief scientist, commissioned a local artist to build a wooden effigy representing “unaligned” AI. He then set it on fire to symbolize “OpenAI’s commitment to its founding principles.” This curious ceremony was perhaps meant to preemptively cleanse the company’s work of the specter of artificial intelligence that is not directly expressive of “human values.” Just a few months later, the topic became an existential crisis for the company and its board when CEO Sam Altman was betrayed by one of his disciples, crucified and then resurrected three days later. Was this “alignment” with “human values”? If not, what was going on?

At the end of last year, Fei-Fei Li, the director of the Stanford Human-Centered AI Institute, published “The Worlds I See,” a book the Financial Times called “a powerful plea for keeping humanity at the center of our latest technological transformation.” To her credit, she did not ritualistically immolate any symbols of non-anthropocentric technologies, but taken together with Sutskever’s odd ritual, these two events are notable milestones in the wider human reaction to a technology that is upsetting to our self-image.

“Alignment” and “human-centered AI” are just words representing our hopes and fears — the sense that AI is out of control, but also the idea that complex technologies were never under human control to begin with. For reasons more political than perceptive, some insist that “AI” is not even “real,” that it is just math or just an ideological construction of capitalism turning itself into a naturalized fact. Some critics are clearly very angry at the all-too-real prospects of pervasive machine intelligence. Others recognize the reality of AI but are convinced it is something that can be controlled by legislative sessions, policy papers and community workshops. This does not ameliorate the depression felt by still others, who foresee existential catastrophe.

All these reactions may confuse those who see the evolution of machine intelligence, and the artificialization of intelligence itself, as an overdetermined consequence of deeper developments. What to make of these responses?

Sigmund Freud used the term “Copernican” to describe modern decenterings of the human from a place of intuitive privilege. After Nicolaus Copernicus and Charles Darwin, he nominated psychoanalysis as the third such revolution. He also characterized the response to such decenterings as “traumas.”

Trauma brings grief. This is normal. In her 1969 book, “On Death and Dying,” the Swiss psychiatrist Elisabeth Kübler-Ross identified the “five stages of grief”: denial, anger, bargaining, depression and acceptance. Perhaps Copernican Traumas are no different.

We should add to Freud’s list. Neuroscience has demystified the mind, pushing dualism into increasingly exotic corners. Biotechnology turns artificial material into life. These insights don’t change the fundamental realities of the natural world — they reveal that world to be something very different from what our intuitions and cultural cosmologies previously taught us. That revealing is the crux of the trauma. All the stages of grief are in response to the slow and then sudden fragmentation of previously foundational cultural beliefs. Like the death of a loved one, the death of a belief is profoundly painful.

What is today called “artificial intelligence” should be counted as a Copernican Trauma in the making. It reveals that intelligence, cognition, even mind (definitions of these historical terms are clearly up for debate) are not what they seem to be, not what they feel like, and not unique to the human condition. Obviously, the creative and technological sapience necessary to artificialize intelligence is a human accomplishment, but now, that sapience is remaking itself. Since the paleolithic cognitive revolution, human intelligence has artificialized many things — shelter, heat, food, energy, images, sounds, even life itself — but now, that intelligence itself is artificializable.

“What is today called ‘artificial intelligence’ reveals that intelligence, cognition and even mind are not what they seem to be, not what they feel like and not unique to the human condition.”

Kübler-Ross’s stages of grief provide a useful typology of the Western theory of AI: AI Denial, AI Anger, AI Bargaining, AI Depression and AI Acceptance. These genres of “grief” derive from the real and imagined implications of AI for institutional politics, the division of economic labor and many philosophical and religious traditions. They are variously profound, pathetic and predictable. They reflect responses that feel right for different people, that are the most politically expedient, most resonant with cultural dynamics, most consonant with previous intellectual commitments, most compatible with general mainstream consensus, most expressive of a humanist identity and self-image, and/or the most flattering to the griever. Each contains a kernel of truth and wisdom as well as neurosis and self-deception.

Each of these forms of grief is ultimately inadequate in addressing the most serious challenges posed by AI, most of which cut obliquely across all of them and their competing claims for short-term advantage. The conclusion to be drawn, however, is not that there are no real risks to be identified and mitigated, or that net positive outcomes from AI as presently developed and monetized are inevitable. Looking back from the near future, we may well wonder how it was possible that the conversations about early AI were so puerile.

The stages of AI grief do not go in any order. This is not a psychological diagnosis; it is mere typology. The positions of real people in the real world don’t stay put inside simple categories. For example, AI Denial and AI Anger can overlap, as they often do for critics who claim in the same sentence that AI is not real and yet must be stopped at all costs.

My focus is on Western responses to AI, which have their own quirks and obsessions and are less universal than they imagine. Alternatives abound.

First, of course, is AI Denial: How can we debate AI if AI isn’t real?

Denial

Symptomatic statements: AI is not real; it does not exist; it’s not really artificial; it’s not really intelligent; it’s not important; it’s all hype; it’s irrelevant; it’s a power play; it’s a passing fad. AI cannot write a good song or a good movie script. AI has no emotions. AI is just an illusion of anthropomorphism. AI is just statistics, just math, just gradient descent. AI is glorified autocomplete. AI is not embodied and therefore not meaningfully intelligent. This or that technique won’t work, is not working — and when it is working, it’s not what it seems.

Denial is predictable. When confronted with something unusual, disturbing, life-threatening or that undermines previously held beliefs, it is understandable that people would question the validity of that anomaly. The initial hypothesis for collective adjudication should be that something apparently unprecedented may not be what it seems.

To be sure, many forms of denial are and have been crucial in honing an understanding of what machine intelligence is and is not, can be and cannot be. For example, the paradigmatic shift from logical expert systems to deep learning is due to precise and relentless refutations of the propositional claims of some earlier approaches.

Today, there are diverse forms of AI Denial. Most are different from climate change denialism — in which alternative “facts” are obstinately invented to suit a preferred cosmology — but more than a few resemble it. AI Denialists will cherry-pick examples, move goalposts and do anything to avoid accepting that their perceived enemies may actually be right.

Types of AI Denial might be roughly categorized as: phenomenological, political and procedural.

The philosopher Hubert Dreyfus argued that intelligence can only be understood through the lens of embodied experience, and the phenomenological denial of AI builds upon this in many ways: “AI can’t really be intelligent because it doesn’t have a body.” This critique is often explicitly anthropocentric. As a kind of populist variation of the Turing Test, it compares human experience to a machine’s and concludes that the obvious differences between them are the precise measure of how unintelligent AI is. “AI cannot write a great opera, paint a great painting, create beautiful Japanese poetry, etc.” Usually, the person offering this slam-dunk critique cannot do any of those things either and yet would probably consider themselves intelligent.

Perhaps the most directly expressed denial is offered by the concise tagline “AI is neither artificial nor intelligent.” Catchy. Strangely, this critic makes their case by saying that AI has been deliberately fabricated from tangible mineral sources (a good definition of “artificial”) and exhibits primarily goal-directed behavior based on stochastic prediction and modeling (a significant part of any definition of “intelligence,” from William James to Karl Friston). 

“As a kind of populist variation of the Turing Test, AI Denial compares human experience to a machine’s and concludes that the obvious differences between them are the precise measure of how unintelligent AI is.”

“It’s just stochastic reasoning, not real thinking” is also the conclusion of the infamous paper that compared AI with parrots — remarkably, in order to suggest that AI is therefore not intelligent. Along the way, those authors include a brisk dismissal of computational neuroscience as merely ideological paradigm inflation that sees everything as “computation.” This gesture is radicalized by another writer who even concludes that neural network-based models of natural and artificial intelligence are themselves a ruse perpetrated by neoliberalism.

Such critics quickly switch back and forth between AI is not real and AI is illegitimate because it is made by capitalist corporations, and clearly the former claim is made on behalf of the latter. To insist that AI is not real is often thereby a political statement, appropriate to an epistemology for which such questions are intrinsically negotiations of power. For them, there is no practical contradiction in saying that AI is at once “not real” and also that it is “real but dangerous” because “what AI actually is” is irrelevant in comparison with “what AI actually does,” and what AI does is restricted to a highly filtered set of negative examples that supposedly stands in for the whole. 

Put differently, AI is said to be “not real” because to say so signals counter-hegemonic politics. At worst this line of thinking devolves into AI Lysenkoism, a militant disavowal of something quite real on behalf of anti-capitalist commitments.

Other AI critics who made high-stakes intellectual bets against deep learning, transformer architectures, self-attention or “scale is all you need” approaches have a parallel but more personal motivation. This is exemplified by what Blaise Aguera y Arcas calls “The Marcus Loop” after the deep learning skeptic Gary Marcus.

The cycle goes like this: First you say that X is impossible, then X happens; then you say X doesn’t really count because Y; then you say X is going to crash or fail any day now, but when it doesn’t and rather is widely adopted, you say that X is actually really bad for society. Then you exaggerate and argue online and under no circumstances admit that you got it wrong.

For Marcus, deep learning has been six months away from exhaustion as a foundational method since 2015, but the targets of his many invectives sleep easy knowing that, every day, millions of people use AI in all sorts of ways that at one time or another Gary Marcus said would be impossible.

Anger

Symptomatic statements: AI is a political, cultural, economic and/or existential threat; it threatens the future of humanity; it must be collectively, individually, actively and sometimes violently resisted; the “spark of humanity” must be defended from the tangible harms from above and outside; AI is essentially understandable from a handful of negative recent examples; AI is a symbol of control and hierarchy and thus opposes the struggle for freedom and autonomy.

Anger in response to AI is based on fear both warranted and unwarranted. That is, anger may be focused less on what AI does than on what AI means, and often the two get mixed up.

Sometimes, AI is addressed as a monolithic entity, a singular symbol of power as much as real technology; other times, it is framed as the culmination of historical sins. Often, therefore, the political mandate of AI Denial can overlap with AI Anger even as they contradict one another.

Given recent history, there are plenty of reasons to be wary of AI as it is presently configured and deployed. Looking back, the 2010s were an especially fertile era for political populisms of many persuasions. Douglas Rushkoff captured (and celebrated) populist anger against the social changes brought by the digitalization of society in his 2016 book “Throwing Rocks at the Google Bus.” “Fuck Off Google!” was the Kreuzberg-based activist group/meme that tried to channel its inchoate rage at the entity that would disturb an idyllic (for some) Berlin lifestyle predicated on cheap rent and cheap music.

In those years, a script was clarified that lives on today. In San Francisco on Lunar New Year, a mob set a driverless car on fire, an act both symbolic and super literal. While the script was aimed not specifically at AI but at Big Tech in general, by now the distinction may be moot. For these conflicts a battleground is drawn in the mind of only one of the combatants, and “AI” is the name given to the Oedipalized superego against which the plucky sovereign human may do battle: David attacks Goliath so that he may be David.

AI Anger may be ideologically themed but it is agnostic as to which ideology, so long as certain anti-establishment terms and conditions are met. Ideologues of the reactionary right find common cause with those of the progressive left and mainstream center as they all stand firm against the rising tide challenging their favored status quo.

“AI anger may be focused less on what AI does than on what AI means, and often the two get mixed up.”

For the reactionaries, what is at stake in the fight against AI is nothing less than the literal soul of humanity, a precious spark that is being wiped out by waves of computational secularization and for which spiritual battle must be waged. Their arguments against advanced AI encroaching on the human self-image are copied from those against heliocentrism, evolution, abortion, cloning, vaccines, transgenderism, in vitro fertilization, etc. Their watchword is less sovereignty or agency than dignity. That human spark — flickering in the image of an Abrahamic God — is being snuffed out by modern technology, and so the battle itself is not only sacred but divine.

By contrast, for the left, that human spark is vitalist (always political, often abolitionist in vocation, sometimes incoherently paranoid), whereas for the center it is Historical (and usually imagined as under temporary siege or nearing some “end”).

They all share at least three things: a common cause in defending their preferred version of human exceptionalism, a belief that their side must “win AI” as a battle for societal self-representation, and the fact that, as cultural positions, they are honed and amplified by the algorithmic processes against which they define themselves.

Bargaining

Symptomatic statements: AI is a powerful force that can, should and will be controlled through human-centric design ethics and democratic and technocratic alignment with self-evidently consensual shared values, realized through policymaking and sovereign legislation. Its obvious challenges to the informational, technological and epistemic foundations of modern political and legal institutions are a temporary anomaly that can be mitigated through cultural intervention, targeted through legacy cultural platforms against those who make AI.

If one insists that machine intelligence is simply the latest type of digital tool, then governing it through policy is straightforward. However, if it is something more fundamental than that, akin to the development of the internet or the first computers — or deeper yet, a phase in the artificial evolution of intelligence as such — then taming AI through “policy” may be, at best, aspirational.

Even when successful in the short term, keeping AI companies under state control is not the same as controlling AI itself in the long term. Whereas nuclear weapons were a known entity and could be governed by international treaties because their destructive effects were clearly understood (even if their geopolitical ones were not), AI is not a known entity. It is not understood what its impacts will be or even, in the deepest sense, what AI is. Therefore, interventionist policy, however well-meaning and well-conceived, will have unintended consequences, ones that cut both ways.

AI Bargaining is the preferred posture of the political and legal establishment, for whom complex issues can be reduced to rights, liabilities, case law, policy white papers and advisory boards. But it is also the public consensus of the tech world’s own “sensible center.” The approach is couched in the language of “ethics,” “human-centeredness” and “alignment.” Stanford’s premier AI policy and research institute is literally called Human-Centered Artificial Intelligence. Beyond salutes to milquetoast humanism and default anthropocentrism, the approach relies on fragile presumptions about the relationship between immediate political processes and long-term technological evolution.

For AI Bargaining, Western “ethics,” a framework based on legal individualism and the philosophical secularization of European Christianity, is posed as both a necessary and sufficient means to steer AI toward the social good. In practice, AI Ethics encompasses both sensible and senseless insights but is limited by its presumption that bad outcomes are the result of miscalibrated intentions on the part of clearly defined actors. Its intentionality-first view of history is convenient but superficial. Core to its remedial methodology is “working with communities” or conducting citizens’ assemblies to poll “wants” and “don’t wants” and to index and feed these into the process, as if control mechanisms over the future of AI are linear and all that needs correcting is the democratic quality of inputs.

“AI Bargaining clings to the hope that if we start negotiating with the future then the future will have no choice but to meet us halfway. If only.”

There are many criticisms of “techno-solutionism” — some are well posed and others not at all. However, political solutionism — the presumption that to “politicize” something not amenable to the temporal cycle of current events, and to subordinate it to available or imaginary political decisions, is to address it — is just as bad, if not worse. Watching Congress or the vice president, one is not overwhelmed with confidence that these are truly the pilots of the future they presume we want them to be. As Congress convenes for the cameras, generating footage of its members taking AI very seriously, the meta-message is that these elected avatars actually are in charge of AI — and perhaps the performance serves to convince themselves that they are. The premise is that modern governments as we know them are the executives of the transformations to come and not an institutional form that will be overhauled if not absorbed by them. For better or worse, the latter scenario may be more plausible.

Beyond law-passing, AI Bargaining also means the “alignment” of AI with “human values,” an objective I have questioned. The presumption is that the evolution of machine intelligence will be guided by ensuring that it is as anthropomorphic and sociomorphic as possible, a technology that convincingly performs as an obsequious mirror version of its user.

The leap of faith that human values are self-evident, methodologically discoverable and actionable, constructive, and universal is the fragile foundation of the alignment project. It balances on the idea that it will be possible to identify common concerns, to poll communities about their values and conduct studies about the ethics of possible consumer products, that it will be possible and desirable to ensure that the intelligence earthquake is as comfortable as possible for as many people as possible in as many ways as possible.

Its underlying belief is that AI is remotely amenable to this kind of approach. This stage of grief clings to the hope that if we start bargaining with the future then the future will have no choice but to meet us halfway. If only.

Depression

Symptomatic statements: It may already be too late to save humanity from an existential crisis up to and including extinction due to the intrinsically voracious nature of AI, the competitive nature of human societies amplified by it, the underlying challenges of a manifold polycrisis (of which contemporary AI is a symptom), and/or the immediate political and economic contradictions of AI’s own means of production, which are legible through well-established terms of political economy. The present moment precedes inevitable and catastrophic outcomes according to the laws of history.

Perhaps by even speaking the name of “AI,” humans have already guaranteed their extinction. Kiss your loved ones and hold them tight, stock your rations and wait for the inevitable superintelligence, malevolent and human-obsessed, to confirm whether you do or do not carry the mark of the beast and to decide your fate accordingly.

According to this fear, it may be that AI will eventually be responsible for millions or even billions of deaths. It’s also possible that it will be responsible for billions of future humans never being born at all, as the global replacement birth rate in a “fully automated luxury” whateverism society drops well below one, leaving a planet full of empty houses for the 2 billion or so human Earthlings who populate a quieter, greener, more geriatric and robotic world. Contrary to Malthusianism, this population drop scenario is due to generic affluence, not widespread poverty. Maybe this ends up being one of AI’s main future contributions to mitigating climate change? Utopia or dystopia is in the eye of the beholder.

For AI Doomers — a term used sometimes with pride and sometimes pejoratively for those whose focus is defending the future against imminent, probable and/or inevitable AI catastrophes — there is a certain satisfaction in the competitive articulation of extreme and depressing outcomes. To entertain hope is for dupes.

This movement of elite preppers jokes about Roko’s Basilisk and new variational motifs of rarified wankery: eschatological, moralizing, self-congratulatory. The Doomer discourse attracts many who are deeply tied into the AI industry because it implies that if AI is truly bringing humanity to the edge of extinction, then those in charge of it must be Very Important People. Our collective future is in the hands of these final protagonists. Who wouldn’t be seduced by such an accusation?

“For AI Doomers, to entertain hope is for dupes.”

On the other side of the tech culture war, a different genre of AI Depression is the orthodox discourse for a scholastic establishment spanning law, government and liberal arts that sees the technology as a delinquent threat to its own natural duty to supervise and narrate society. From The Atlantic to LOGIC(S), from the Berkman Klein Center at Harvard Law School to RAND, they imagine themselves as the democratic underdog fighting the Power without ever wondering if their cultural and institutional incumbency, more than California’s precocious usurpation, actually is the Power.

For other camps, the basic tenets of High Doomerism might be associated with Nick Bostrom and the late Future of Humanity Institute at the University of Oxford — but where their original research on existential risk explicitly focused on low-probability catastrophes, the low-probability part got sidelined in favor of not just high-probability runaway AI but inevitable runaway superintelligent AI.

Why this slippage? Perhaps it’s because predestined runaway superintelligent AI was already a big character in the pop discourse, and so to summon its name meant to signal not its remoteness but its inescapability. For this, Bostrom can thank Ray Kurzweil, who blended the observation of mutually reinforcing technological convergence with evangelical transhumanist transcendence for years before most people took AI seriously as a real thing. Depression (or elation) is a rational response to a predetermined reality even if predetermination is not a rational interpretation of that reality.

It is this oscillation between the inevitable and the evitable that may be the key to understanding the Depression form of AI Grief. Recall that another type of depression is manic depression, which manifests as a tendency to flip between polar extremes of euphoria and despair. Horseshoe theory in politics refers to the tendency of extreme left and extreme right political positions to converge in ways both predictable and startling. A horseshoe theory of AI Depression sees the fluctuation between messianic grief and solemn ecstasy for what is to come, often manifesting in the same person, the same blog, the same subculture, where audiences who applaud the message that AI transcendence is nigh will clap even harder when the promise of salvation turns to one of apocalypse.

Acceptance

Symptomatic statements: The eventual emergence of machine intelligence may be an outcome of deeper evolutionary forces that exceed conventional historical frames of reference; its long-term implications for planetary intelligence may supersede our available vocabulary. Acceptance is in a rush to abdicate. Acceptance recognizes the future in the present. Where others see chaos, it sees inevitability.

The last but not necessarily final stage is AI Acceptance, a posture not necessarily better or worse than any of the others. Acceptance of what? From the perspective of the other stages, it may mean the acceptance of something that is not real, something that is dehumanizing, that dominates, that portends doom, that is a gimmick, that needs a good finger-wagging. Or Acceptance may mean an understanding that the evolution of machine intelligence is no more or less under political control than the evolution of natural intelligence. Its “artificiality” is real, essential, polymorphous and also part of a long arc of the complexification of intelligence, from “bacteria to Bach and back” in the words of the late Daniel Dennett, one that drives human societies more than it is driven by them.

Acceptance asks: Is AI inside human history or is human history inside of a bio-technological evolutionary process that exceeds the boundaries of our traditional, parochial cosmologies? Are our cultures a cause or an effect of the material world? To what extent is the human artificialization of intelligence via language (as for an LLM) a new technique for making machine intelligence, and to what extent is it a discovery of a generic quality of intelligence, one that was going to work eventually, whenever somebody somewhere got around to figuring it out?

If the latter, then AI is a lot less contingent, less sociomorphic, than it appears. Great minds are necessary to stitch the pieces, but eventually somebody was going to do it. Its inventors are less Promethean super-geniuses than just the people who happened to be there when some intrinsic aspect of intelligence was functionally demystified.

Acceptance is haunted by these questions, about its own agency and the illusions it implies. How far back do we have to go in the history of technology and global science and society before the path dependencies outweigh all the contingencies?

Like all complex technologies, AI is built of many simpler previous technologies, from calculus to data centers. A lot is contingent. Decisions of convenience, like clock hands moving “clockwise,” are reinforced through positive feedback, get locked in and become, over time, components of larger platforms that seem natural but are arbitrary. Chance is on rails: It all comes together at a point where its extreme contingency becomes unavoidable.

“Acceptance asks: Is AI inside human history or is human history inside of a bio-technological evolutionary process that exceeds the boundaries of our traditional, parochial cosmologies?”

If you get digital computers plus an understanding of biological neural networks plus enough data to tokenize linguistic morphemes plus cheap and fast hardware to run self-recursive models at a scale where any number of developers can work on it and so on, then is some real form of artificialized intelligence running on an abiotic substrate eventually going to appear? Not necessarily the AI we have now — such as it is — but, eventually, something?

Once any intelligent species develops the faculties of abstraction and communication that humans associate with the prefrontal cortex, is something like writing a foregone conclusion? Once writing emerged in Sumer, inscribing first quantitative and then qualitative abstractions, then the printing press appeared, and then much later electricity was harnessed — is some momentum set in motion that operates on profoundly inhuman scales?

Once calculus was formalized and industrial machinery assembled at mass scale, was the modern computer going to come together once somebody applied Leibnizian binary logic to them both? Once people started hooking up computers to each other and settled on a single unwieldy but workable networking protocol, something like “the internet” was going to happen. It could have gone differently, but it was going to go; the names and dates are coincidental, almost arbitrary.

For the AI Acceptance stage of grief, the key term of comfort is inevitability. It lifts a weight. For this, the world could be no other way than how it is. Sweet release. Is this acceptance or acquiescence? Is this a Copernican inversion of the cause-and-effect relation between intentional human agency (now effect) and planetary processes (now cause) — or is it an all-too-human naturalization of those outcomes as theodically fixed? In terms of Kübler-Ross’s stages, is this the acceptance of someone who is grieving? Grieving for what exactly? Their own existential purpose?

In grief, the trauma response is to believe that which is must be so, and thus there is no guilt because there’s no freedom, no disappointment because there’s no alternative. But this is not the only way to recognize that the present is not autonomous from the past and future, and that even what seem like very powerful decisions are made within determining constraints, whether those who make them realize it or not.

We can call this “Non-Grief.” The conclusion it draws is very different. It’s not that the form of AI we have now is inevitable, but rather that the AI we have now is very certainly not the form of AI to come. The lesson is to not reify the present, neither as outcome nor as cause.

Non-Grief

Every stage of grief expresses not just apprehension but also insight, even when its claims are off the mark. Looking back on these years from the near future, we may see different things. “If only we had listened to the Cassandras!” Or, “What the hell were they thinking and why were they talking such nonsense?” There are “non-grief” ways of thinking through a philosophy of artificialized intelligence that are neither optimistic nor pessimistic, utopian nor dystopian. They emphasize the reconciliation of the Copernican Trauma of what AI means with new understandings of “life,” “technology” and “intelligence.”

Exploring the collapsing boundaries between these terms is part of the work of the Antikythera research program that I direct, incubated by the Berggruen Institute (Noema’s publisher), and especially of our collaboration with the astrobiologist and theoretical physicist Sara Walker, who wrote about this in an extraordinary piece in Noema called “AI Is Life.” “Life” is understood not as the unique quality of a single organism but as the process of evolutionary lineages over billions of years. But “technology” also evolves, and is not ontologically separate from biological evolution but rather part of it, from ribosomes to robotics.

Any technology only exists because the form of life necessary to make it possible exists — but at the same time, technologies make certain things exist that could not without them. In Walker’s view, it is all “selection” — and that very much includes humans, what humans make and certainly what makes humans. “Just as we outsource some of our sensory perceptions to technologies we built over centuries,” she wrote, “we are now outsourcing some of the functioning of our own minds.”

James Lovelock knew he was dying when he wrote his last book, “Novacene: The Coming Age of Hyperintelligence,” and he concludes his life’s work with a chapter that must startle some of the more mystically minded admirers of Gaia theory. He calmly reports that Earth life as we know it may be giving way to abiotic forms of life/intelligence, and that as far as he is concerned, that’s just fine. He tells us quite directly that he is happy to sign off from this mortal coil knowing that the era of the human substrate for complex intelligence is giving way to something else — not as transcendence, not as magic, not as leveling up, but simply a phase shift in the very same ongoing process of selection, complexification and aggregation that is “life,” that is us.

“There are ‘non-grief’ ways of thinking through a philosophy of artificialized intelligence that are neither optimistic nor pessimistic, utopian nor dystopian.”

Part of what made Lovelock at peace with his conclusion is, I think, that whatever the AI Copernican Trauma means, it does not mean that humans are irrelevant, are replaceable or are at war with their own creations. Advanced machine intelligence does not suggest our extinction, whether as noble abdication or as bugs screaming into the void.

It does mean, however, that human intelligence is not what human intelligence thought it was all this time. It is something we possess, but which possesses us even more. It exists not just in individual brains, but even more so in the durable structures of communication between them, for example, in the form of language.

Like “life,” intelligence is modular, flexible and scalar, extending to the ingenious work of subcellular living machines and through the depths of evolutionary time. It also extends to much larger aggregations, of which each of us is a part, and also an instance. There is no reason to believe that the story would or should end with us; eschatology is useless. The evolution of intelligence does not peak with one terraforming species of nomadic primates.

This is the happiest news possible. Like Lovelock, I do not feel grief.