Not My Problem

I asked an AI to predict the near future of art and creative work. This is the story it told me.


Tim Maughan is a writer of fiction and nonfiction. His debut novel “Infinite Detail” (FSG, 2019) was The Guardian’s science fiction and fantasy book of the year and was shortlisted for the Locus Award for best first novel.

Back in early 2023, I was approached by Lathe — a machine-learning and AI startup based in Vancouver — about an experimental project involving a new large language model the company was building.

It’ll come as no surprise to anyone familiar with my work that I was deeply skeptical at first, as I am about any technology that likes to label itself with AI buzzwords. In fact, it took all my effort to not just delete that first email as soon as I opened and read it. But as my cursor floated over the “mark as spam” button, a single sentence on the screen gave me pause, as though the pixels themselves were reaching out of the screen and gripping my hand, stopping my finger from clicking.

It said they would give me money.

Perhaps not the most ethical reason to take part, granted — but the economic reality of being a writer in 2024 is stark and depressing, as I’m sure many of you reading this will understand. Last year, I formally gave up pitching nonfiction articles to editors; most of the good ones I’d worked with in the past (and they were very good) were gone, the outlets they worked for vanishing one after another as the slow collapse of the ad-tech model denied them revenue that was probably never there, and impatient corporate investors got tired of waiting for the growth they were promised by overpaid, lying CEOs. At the same time, my other main income stream — writing design fiction copy for futurists and corporate foresight agencies — had all but evaporated. I started to hear that many of my clients were cutting costs by simply feeding the prompts they would typically give me, alongside examples of my previous work for them, into ChatGPT.

So I took the call from Lathe, and their money. But on one condition: Before I started on the company’s project, I wanted access to the LLM. I wanted to talk to it about itself. As someone known for writing critiques of technology, I wanted to see how good it really was. After all, the company planned to market it as a tool to help me and other writers with our craft. But could the software critique itself? Lathe said OK. And so I decided to task the LLM with mimicking my voice and style to generate a speculative vision of how so-called AI and machine-learning systems like it would impact and influence art over the next 10 years.

What follows is the result, written mainly by the LLM (“Lathie”), with a little editing and direction from me. It’s an AI’s vision of the future of AI art in the form of a timeline of the next decade.

2025

Art galleries, museums and festivals worldwide are starting to ban the use of cameras and smartphones in new and major exhibitions. The move comes mainly at the request of artists, agents and collectors, with some even threatening legal action if visitors are caught using cameras.

“It’s an unfortunate decision,” the senior curator of Toronto’s AGO tells CBC News in an interview. “But we had little choice. It was only a matter of time before somebody sued us.”

The drive for this comes from the fear that visitors are taking photos of new art to feed into AI image generators such as Midjourney and Stable Diffusion so they can produce their own variations or remixes. At first, some parts of both the art and tech communities push back at the bans, calling them “gatekeeping,” “paranoid,” and “anti-art.” Social media erupts into its usual outrage cycles, with many seeing it as an overhyped non-issue. Still, over the following months, attitudes start to change as a string of high-profile cases come to light.

  • A renowned Korean watercolor artist returns from her first show at MoMA to find t-shirts and prints of inescapably similar art being sold on Instagram just three days after the opening.
  • An obscure Indian graphic novel illustrator with a modest following on TikTok is confused to wake up to comments congratulating him “on the Coke thing” before discovering that a single image of his has been used to generate a full two minutes of animation for a Coca-Cola advert. “Sure, this has been going on for decades, this stealing of art for advertising, of course,” he tells 404 Media in an interview, “but it was never this quick, this bare-faced or this cheap for them to do. It was never at this scale or this disturbingly accurate.” After a successful GoFundMe campaign, he raises enough money to hire a media rights lawyer and file suits against Coca-Cola and the London-based agency that produced the ad, both of whom deny any knowledge of his work’s existence.

By the end of 2025, most galleries worldwide have banned cameras and recording equipment. Many have started to shut down their free Wi-Fi services, citing contractual obligations to artists and collectors. This poses a huge problem for galleries, which have relied heavily on social media and sharing culture as their primary strategy for attracting visitors. Unable to bring in influencers for show openings, or even to rely on teenagers taking selfies in front of their favorite art, they watch their whole approach to marketing vanish overnight. Visitor numbers start to drop drastically, and by the end of the year, a record number of small galleries have permanently closed.

“I fully understand artists’ concerns and legitimate fears about this issue,” the AGO curator tells CBC. “But with some of the demands they’re making of us, it’s starting to feel like they’re cutting off their nose to spite their face.”

2026

A video named NOT MY PROBLEM.VHS is posted to YouTube by an unknown user. It appears to be a recreation of the fictitious TV show “It’s Not My Problem!” from the 1987 sci-fi action-satire movie “RoboCop.” Despite the original film featuring less than a minute of the show in the background of a handful of scenes, the YouTube clip — very obviously AI-generated — presents a full 23-minute episode of “It’s Not My Problem!,” including opening and end credits and even a theme song. What it doesn’t have much of is a plot; instead, it just lurches from chaotic scene to chaotic scene, following its lead star — the grotesque, grinning Bixby Snyder — as he performs the most basic of slapstick humor, makes cruel jokes about his off-screen wife, takes pies to the face and paws misogynistically at women half his age, only pausing every few minutes to look directly into the camera and scream, wide-eyed and sweating, “I’d buy that for a dollar!” at the audience.

Included in the original movie as a satirical warning about where cheap and trashy generic American TV was heading in the next century, it feels disturbingly accurate in the YouTube clip’s longer form. Some commentators point out how perfectly it now captures the unimaginative, formulaic, populist aesthetic of most AI-generated entertainment.

After being picked up by a few high-profile influencers, the clip quickly goes viral, racking up over 6 million views in just a few days. While for many of its viewers, it’s a nostalgic reminder and knowing nod to a scarily prophetic movie, it becomes clear from posts, comments and hundreds of reaction videos that millions more are not in on the joke. Instead, to them it’s just another dumb, funny meme — this year’s “Skibidi Toilet,” just another What the fuck? moment to find unironically funny. Almost instantly, TikTok becomes swamped with endless remixes, mash-ups and whole new episodes — all also generated by AI, and most of which seemingly miss the point of the original satire, unintentionally making its predictions and parodies all the more accurate.


Within a week, the original clip and most of its imitators have been yanked from YouTube following a copyright infringement claim from Amazon, which has owned the rights to the “RoboCop” franchise since its acquisition of MGM Studios in 2022. Unsurprisingly, this just makes the whole situation worse, as angry fans flood every video-sharing platform they can with new clips. Sensing a PR disaster incoming, Amazon reverses its stance and drops all claims. The tech giant also announces that a full-length, official “It’s Not My Problem!” TV series will be coming to Amazon Prime Video.

Meanwhile, a German arts magazine manages to track down the original clip’s creator, who agrees to an interview as long as their anonymity is maintained. They talk about how the clip was created — “It was just three simple prompts in Sora AI and some light editing” — and about how, yes, it was meant to be a parody of AI-generated video content, and, yes, they were surprised at how popular it became. When asked what comes next, they laugh and say, “Well, I guess that’s up to the internet; it belongs to them now,” before mentioning that in an unfilmed 1988 screenplay for “RoboCop 2,” set some years later, Bixby Snyder becomes president of the United States.

2027

Pushing back against the proliferation of bland generative work, a so-called “organic art” movement starts to gain traction. It promotes a “back to basics, human-made” approach to art creation across all media and genres. While some critics welcome this attempt to resist AI art, many accuse it of being the wrong kind of knee-jerk reaction, pointing to its overtly traditional aesthetics and focus on unchallenging — or even comforting — subjects. According to an editorial in The New Yorker, “Most of it looks like a rich white person’s idea of Good Art, if their sole experience of art had been ‘Eat, Pray, Love’ and the postcard section of the Met’s gift shop.”

This sense of bland elitism is compounded by most Organic Art being visible only in real-life galleries — where cameras and social media are still banned, and entrance prices have soared — as its creators and dealers continue to demand it not be shared online. Similar to the organic food movement after which it’s named, its high cost and limited accessibility make it an aspirational art form only affordable to the rich. As such, it becomes trendy among lifestyle influencers, spawning the suddenly viral ASMR-like micro-trend of “word paintings,” where immaculately dressed and made-up TikTok and Instagram influencers breathily read long, wordy and overwrought descriptions of art that they own but contractually can’t show.


Even in its early days, it’s evident that this new scene only benefits a handful of already high-profile and established artists by allowing them to market themselves as even more exclusive and aspirational. Even its perceived pushback against generative art is seen by some as ineffectual: It doesn’t really matter if AI produces horribly generic work or sees less take-up than predicted by its corporate evangelists; just the hype around its existence and potential has devalued the work of professional artists, animators, designers and writers. Many of them take to social media to describe how they’ve seen freelance rates collapse, contracts and clients disappear, and now find themselves looking for alternative employment or switching careers.

In an attempt to counter this perception — and while facing a swelling cohort of class-action suits over unauthorized use — a group of tech companies led by OpenAI, Microsoft and Meta launch Inspired and Acknowledged, a nonprofit rights-payment management trust that aims to make sure artists whose work has contributed to AI art and training datasets receive fair compensation. At launch, it’s unclear how it will achieve this, but the press release mentions something vague about “automated authorship tracking via machine-learning agents.”

2028

The publication of an extensive, two-year-long joint investigation by The Guardian and The Washington Post reveals a massive explosion in CO2 output from the tech industry over the last five years, with AI companies the major contributors. In an open letter to governments and CEOs, a group of over 150 leading climate scientists call it the “final nail in the coffin of our hopes of limiting global warming to 1.5°C” and demand action be taken. The same investigation shows that over the same period, the tech industry has almost doubled its water consumption as it buys up dwindling supplies to cool its ever-growing army of data centers, often starving small communities and farmers of the water they need to survive.

Meanwhile, a separate investigation by 404 Media reveals that 75% of the most popular AI-generated content on Spotify is owned by Spotify itself, which is intentionally flooding its own platform to avoid paying artists. Most of the AI music falls into ambient and similar background-style genres — chill piano, lo-fi beats, meditation music, cyberpunk vibes for insomniacs, etc. But the company claims these are some of the most popular streams on the platform, second only to a steadily shrinking number of ultra-famous popstars.

Human musicians working in those genres are deeply upset, pointing out that even if they use AI tools to create music, they can’t compete with Spotify’s bots, which clearly have complete access to the platform’s data and visibility into how its recommendation algorithm works. This gives the bots an incalculable advantage over human musicians, as they can see micro-trends before they even emerge, predict what users will want before they even know it themselves, and create whole playlists of tailored music almost instantly.

Spotify itself seems unconcerned by the outrage over this, with the CEO boasting to happy shareholders on an earnings call that their AI technology “is revolutionizing the music industry in the same way automated high-frequency trading revolutionized the financial industry.”


The same investigation claims that Inspired and Acknowledged, the Silicon Valley rights tracking body, is doing basically nothing and has not paid a single artist beyond a handful of celebrities and sponsored influencers.

Meanwhile, after years of delays, Amazon’s promised “It’s Not My Problem!” TV project eventually launches. Now more of a platform than a conventional TV series, it allows users to prompt a text-to-video model and create and share clips and mini-episodes of the show. Suddenly, Bixby Snyder is back, his drunken, grin-split, ham-like, mocking face splattered across screens again, once more spawning hot-take op-eds and an avalanche of cheap, opportunistic merchandise and advertising campaigns.

Astute critics point out how much of the new Bixby content — especially the stuff riding high on Amazon’s recommendation algorithm — has a clear right-wing or “anti-woke” bias, with some accusing Amazon of using the show and character to deflect attention away from its environmental record by incessantly mocking well-known climate activists. The tech and plastic junk distribution mega-corporation denies any of this, pointing at how the “show” is made up almost entirely of user-prompted content.

“As the first genuinely 100% AI-generated TV experience controlled by its viewers,” an Amazon press release claims, “‘It’s Not My Problem!’ represents the true values and standards of its fans.”

2030

Two years later, the Organic Art movement has become an industry in itself, albeit affordable and accessible only to a select, elite demographic. This year sees its first biennale, held at the prestigious Museum of the Future in Dubai; within its elegantly curved walls, attendees take air-conditioned shelter from both the 120°F heatwave outside and the rest of the world’s prying eyes. The secrecy and high cost of entry protect intellectual property while also giving the whole scene a sense of mystique, exclusivity and aspiration. Being invited to exhibit there is marketed as the pinnacle of artistic and commercial success, with few young or emerging artists making it inside.

Following the success of “It’s Not My Problem!” and the persistent popularity of Bixby Snyder, Netflix, Spotify and YouTube all launch apps aimed at echoing Amazon’s success. These basic AI text-to-video and audio-prompting services allow their subscribers to create content using a range of licensed IPs, franchises and even celebrities.


But despite an early rush of excitement — especially from fan communities — within just a few months, the novelty wears off, and it’s clear the new features are struggling to muster up that most important of metrics: engagement. Users just don’t seem to want to use any of the services; research shows that people view them as too much work. “The whole point of watching shit TV and listening to corpo pop music is that I don’t have to think about it,” complains a TikTok user in a car seat rant that goes viral. The hashtag #2tired2prompt starts trending across social media.

The ineffectual and largely forgotten Inspired and Acknowledged trust lays off 70% of its human staff, drops its nonprofit status and changes its business model. Instead of tracking down derivative works and compensating artists, it now uses AI agents — trained on years of data from the art and entertainment industries — to identify and rank emerging artists and creators. The company then offers to buy the rights for their work to be used to train generative systems, with payment coming as multiyear advances based on how lucrative its LLMs predict their careers will be. It’s similar to how hedge fund-backed streaming rights agencies have been investing in popular musicians since the late 2010s.

2031: The Summer of Hell

A perfect jackpot of rolling climate emergencies erupts across the globe as temperatures break new records; the media quickly starts calling it the “summer of hell.” For those old enough to remember, it feels disturbingly reminiscent of the chaos of 2020, but worse — more heightened, more accelerated. It would be impossible to list everything that happened, but here’s a selection of highlights:

  • An estimated half a million elderly and vulnerable people across the world die under suffocating heatwaves as hospitals and emergency services are stressed to the point of collapse. In Phoenix, Arizona, an estimated 10,000 die as temperatures push past 125°F, and a four-day power cut paralyzes infrastructure and shuts down air-conditioning citywide.
  • An unprecedentedly aggressive hurricane season batters the Americas, flooding coastal cities.
  • After the destruction of multiple oil refineries and processing plants, which leaves a large tract of the Texas coastline under several feet of heavily polluted floodwater, the area is declared a no-go “Uninhabitable Zone.” Oil company scientists claim that a successful clean-up would be impossible, and the area could remain inhospitable to life for centuries due to the massive quantities of toxins and corrosive chemicals leaking into the groundwater.
  • Supply chains and global shipping grind to a halt as vital container ports in East and South Asia are left unusable by storms. A bridge in Ningbo, China, collapses, forcing one of the country’s most vital export routes to close for three months, sending ripples of logistical chaos around the planet and causing prices for everything from everyday household items to crucial medical supplies to skyrocket.
  • An estimated 4 million people worldwide are displaced from their homes, forced to evacuate in the face of storms, floods and forest fires. Most of them find themselves in hastily erected climate refugee camps facing disease, hunger and shortages of drinkable water — plus the realization that they might never return to their homes.
  • A significant cholera outbreak erupts in Brooklyn, New York, after the city’s aging sewage system is overwhelmed by a storm surge.
  • Thousands of businesses disappear and millions of people lose their jobs in an economic crisis magnified by ruthless and erratic AI trading algorithms that maximize profit even amid global disaster.

2033

Delayed by a year, the second Organic Art Biennale closes its doors just days after opening. The organizers publicly blame the unmitigated disaster on the city of Dubai’s evident failure to rebuild its infrastructure after the summer of hell. However, according to some attendees willing to break their NDAs, the reality is much more interesting — a number of high-profile stunts by artists and protesters successfully disrupted the show. Most notable of these was an invited artist who managed to smuggle in a crate filled with 150 micro-drones, which he released to swarm the halls of the Museum of the Future, livestreaming the secret art to the outside world before splattering much of it with precision-guided paintball pellets.

Elsewhere, the “new normal” world feels dangerous and confusing to many, a lot of whom find themselves still living in ever-growing city-sized refugee camps, unsure if they will ever be able to return home. Looking for a little comfort and distraction at a time when the traditional media and entertainment industries have all but collapsed, they find themselves turning to the abandoned generative art platforms and prompted content. Bixby Snyder rides again, his infamous catchphrase “I’d buy that for a dollar!” repurposed as a darkly humorous, self-deprecating refrain for the millions who find themselves falling into poverty and displacement.

Despite having to shutter many of its theme parks, Disney has actually seen profits rise and starts aggressively buying up smaller studios, platforms and IP that were struggling to survive in the economic and climate chaos. It announces that it wants to “give something back to our passionate fans and supporters” and reveals that it will start distributing free VR headsets to climate refugees in Florida, Texas and California. Just in time for the holiday season.

2034

A strong and highly organized protest movement emerges from climate refugee camps around the world, led mainly by teenagers. It brings together climate activists and artists, demanding action to slow global warming as well as immediate and just reparations for the displaced. Alongside fossil fuel companies, governments, banks and hedge funds, the movement targets data centers, internet infrastructure and tech company campuses with protest camps. Over the coming months, these protests start to swell as refugees with nowhere else to be and nothing left to lose pour in. Many are surprised to find better living conditions than in the climate camps and vow to stay.

Several attempts by tech companies to use police and private security to clear the protesters (who have spent the last few years of displacement learning valuable lessons about self-organizing and building community infrastructure) end in chaos. But despite the violence being directed against them, the protesters hold the line and suffer few casualties.

That tragically changes in July at Google’s HQ in Mountain View, California. Amid yet another heatwave, the frustrated commander of the private military company hired by Google to monitor the site allows a so-called “counter-protest” to enter the camp. A mob of right-wing thugs, all wearing 3D-printed Bixby Snyder masks and chanting “You’re not my problem,” rains down violence on the camp’s inhabitants. Before long, half of the camp is ablaze, and dozens of protesters are missing or dead.

The violence continues for another two days as angry and traumatized protesters retaliate against the security forces and local police. Rumors and conspiracy theories circulate — allegations of security guards opening fire on protesters, hints that Google deployed a new technology that blocked cellphone and Wi-Fi signals, even that a drone strike was called in, the first on American soil. The protests continue, the death toll rises and the heatwave grinds on, but what’s left of both social and traditional media moves quickly on.

But one video clip, supposedly shot during the chaos at Google, goes viral. It shows a young woman, barely out of her teens, her face both scared and defiant as she stares directly into the camera. Behind her flash glimpses of bodies moving through smoke and flames licking up the walls of an unknown structure. Her voice trembles as she speaks, the noise-canceling algorithm on her phone struggling to keep her voice focused against the constant, near-deafening drone of shouting voices and low-altitude rotor blades.

“I hope you can see this, all of this,” she says, her head turning to glance behind her. “Because we see you. The people responsible for this, we see you all. Not just Google — the cops, everyone that wants to drive us out of here —”

She flinches as something explodes nearby; for a second, the video is gone, replaced by a technicolor checkerboard of compression artifacts, and then she’s back, shaken, but her voice louder, more defiant, a single tear glinting on her cheek. Her mouth moves before the dropped-out audio can cut back in.

“— the oil companies, the banks, the tech companies — all of you. We see you. We see you selling out our future from under us. The climate, our jobs, even art itself. We see you all, every one of you who thought you could use AI to make a quick buck, every one of you who thought you could get an advantage by using it to take shortcuts, every one of you who chose a quick fix over solidarity. We see you all. Remember that. Even if I — even if we don’t make it out of here, remember that there are others, many others — we all see you. And we are coming to —”

And with that, the screen goes black, the clip ends.

Within days, millions have viewed it, and with every new share, the discourse around it grows. But little of it is actually focused on the still-unknown woman’s words. Battered and drowned by a decade of the fake and artificial, the message finally suffocated by the medium, the internet instead argues over just one thing, like it always does now: whether the clip is real or not.

Elsewhere: business as usual. Disney teams up with Inspired and Acknowledged, now a hedge fund, to launch a “lifetime fellowship” scheme for artists. It promises those accepted into the program an above-average living wage, plus bonuses for highly popular work. In return, they must give Disney the rights to anything they create and agree that their work will be used to train AI. The scheme is compared to indentured labor by a few critics, but that doesn’t stop over a million desperate artists from applying, many of them refugees. Less than a week after its launch, Disney closes the application portal while a fleet of its AI agents ranks and chooses those who will be accepted.

And in Washington DC, a legion of “It’s Not My Problem!” fans descend on the National Mall to hold what soon appears to be a political rally presided over by none other than Bixby Snyder himself. It’s unclear who or what the figure on stage really is — a convincing lookalike actor, a hologram, maybe a sophisticated robot. But one thing is clear: It calls itself Bixby Snyder, and in 2036 it plans to run for president of the United States.

Author’s note: This story is a work of solely human-crafted speculative fiction. No artificial intelligences, generative processes or large language models were involved in writing it.