We Need An FDA For Artificial Intelligence

What AI regulators can learn from the history of the FDA.


Emilia Javorsky is a physician-scientist working at the nexus of emerging technology and society at The Future of Life Institute.

At the turn of the 20th century, the commercial medical landscape was littered with self-appointed medicine men who traveled with carnivals, toting supposed cure-all brews. It was the era of the “snake-oil salesman” — the seller of quack remedies.

One of these men, a cowboy of the American West named Clark Stanley, claimed he learned about the remarkable medicinal benefits of rattlesnake oil from Hopi medicine men. Such lore was the basis for his “Clark Stanley’s Snake Oil Liniment” origin story. Stanley began selling the liniment at shows in the American West and secured a spot at the 1893 World’s Fair in Chicago — the ultimate demo day of the time.

To the crowd’s delight, Stanley supposedly “cut open a live rattlesnake, boiled it in water, skimmed off the fat that arose from the fat sacks of the snake and mixed it into his supposedly medicinal concoction,” as NYU dentists described in a paper on old chemical bottles at the College of Dentistry. Given the surge in interest, he scaled the enterprise, teaming up with a chemist to develop production facilities in Massachusetts and Rhode Island.

Countless other players in the space also hawked “patent medicines” — proprietary blends sold directly to consumers that disregarded safety, or even claimed it in marketing without proof — like “Warner’s Safe Diabetes Cure” and “Hamlin’s Wizard Oil Liniment” to cure all aches and pains.

This recklessness did not go unnoticed. In 1882, Dr. Harvey Wiley, chief chemist at the U.S. Department of Agriculture, began studying the effects on humans of ingesting adulterated food. In 1902, he received federal funds to research the impact of preservatives like formaldehyde added to foods, testing them on volunteers working at the USDA.

The now notorious “poison squad” experiment became a rallying cry for better federal laws around what constituted human-grade ingredients. Wiley became convinced that manufacturers should have the burden of proof in demonstrating the safety of their products and that the public had the right to know, through labeling, what they were consuming.

Wiley brought the science to the public, speaking to women’s clubs, consumer advocacy groups and journalists, all of whom became key parts of a coalition driving a movement to demand federal action.

In 1906 the Food and Drugs Act, commonly known as the Wiley Act, was passed, mandating labeling requirements on food and drug products. And in a precedent-setting move for scientific governance, the Bureau of Chemistry, the FDA’s predecessor, was tasked with its enforcement. In 1916, the bureau tested Stanley’s liniment and found it contained primarily mineral oil rather than snake oil. He was fined $20 and did not contest the charges.

About a decade after the public learned they had been duped, the poet Stephen Vincent Benét memorialized snake oil as synonymous with fraud, writing “Crooked creatures of a thousand dubious trades … sellers of snake-oil balm and lucky rings.”

Today, frontier artificial intelligence is in its own Clark Stanley era. While narrow AI systems are designed to solve specific problems, frontier AI is about developing ever larger, more powerful general-purpose systems, which now cost hundreds of millions of dollars and may soon cost billions to produce. Private actors build increasingly powerful AI systems and deploy them for public use without independent oversight. Vague promises accompany the release of these systems; for example, that in the future they will be societal panaceas capable of curing cancer, eradicating poverty and solving climate change. These promises are notably light on specifics.

At the same time, an increasing body of data shows such systems carry societal risks ranging from unprecedented concentration of power by a few private companies to disinformation, cyberattacks, deepfakes, bias and discrimination, manipulation, labor disruption and even existential risks.

In 2023, many top AI researchers and executives signed onto the statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

So far, there is no comprehensive federal law on the books, only self-declared principles and voluntary commitments put forward by private companies. Such professed commitments have already been violated, prompting Andrew Strait, an associate director at the Ada Lovelace Institute, which does AI research, to note, “It should now be abundantly clear that we cannot rely on the goodwill of AI companies. Voluntary agreements are no substitute for legal mandates.” Meanwhile, the technology continues to be deployed and integrated into society faster than existing laws and enforcement can keep up.

“Today, frontier artificial intelligence is in its own Clark Stanley snake-oil era.”

This is especially salient with AI given the scale and pace of adoption. ChatGPT had the fastest-growing user base in technology history, amassing an estimated 100 million users within two months of launch. Moreover, OpenAI, the architect of ChatGPT, has already solidified corporate partnerships across media, biotechnology and incumbent technology giants such as Microsoft and Apple.

As society grapples with the novel challenges inherent to artificial intelligence, how the FDA came into being and evolved can provide us with crucial lessons on mitigating technology’s risks while maximizing its benefits. Creating similar regulatory oversight mechanisms and guardrails for AI could also help build public trust and prevent a techlash.

Scandal Converts Science Into Policy

In 1937, an improperly prepared antibiotic named Elixir Sulfanilamide fatally poisoned over 100 people in the United States, many of them children. A mother grieving the death of her child wrote to President Franklin Roosevelt, “It is my plea that you will take steps to prevent such sales of drugs that will take little lives and leave such suffering behind and such a bleak outlook on the future as I have tonight.”

At the time, no law governed the safety or efficacy of such pharmaceutical medications. News reports blasted the lack of basic safety testing, which would have revealed the formulation’s toxicity and prevented the physicians who prescribed the medication from becoming unwitting participants in the deaths of their patients.

It was this public scandal that converted science into policy and was the catalyst for the creation of the modern FDA. One year later, Congress passed the Federal Food, Drug, and Cosmetic Act. The new law empowered the U.S. Food and Drug Administration (FDA) to review drugs before they could be marketed. It also built public trust by ensuring pharmaceutical companies were incentivized to develop new medications with the public’s safety in mind.

Such regulatory policies and the politics surrounding them frequently determine which new technologies succeed and fail. As the field of biology has advanced, regulators and scientists have drawn red lines; for example, prohibiting the creation of biological weapons or heritable human genome editing. Globally, almost all countries have some variant of the FDA.

In contrast, nuclear technology was first applied to weaponry, with no initial regulatory limitations; this has resulted in concerns over radiation, nuclear waste and war. After poor safety engineering resulted in accidents such as the Chernobyl disaster of 1986, regulators developed excessively strict rules even around peaceful nuclear energy. This has, in part, limited our ability to live in a world with an abundant clean energy source that is roughly 600 times less deadly than oil, per unit of electricity produced, according to the International Atomic Energy Agency.

Today AI poses potentially significant ethical, safety and societal risks, but it also has the potential to benefit humanity and solve some of the most pressing problems of our time. It is therefore imperative that regulators urgently develop robust governance mechanisms to mitigate its risks so we can effectively realize its benefits. The nearly 100-year-old FDA has adapted to remarkable leaps in medical innovation and should serve as a model for efforts to govern this new category of emerging technology.

The impact of a properly run regulatory organization speaks for itself. The United States has been the undisputed leader in biomedical innovation over the past century, accounting for nearly 59% of the total global biotech sector’s value in 2021. Beyond curing and mitigating diseases, leadership in biotechnology has provided the country with political authority, diplomatic leverage, economic growth and enhanced national security.

This dominant position is precisely because of the incentive structures and safeguards developed by the FDA. AI is a budding technology with the potential to join biotechnology at the frontier of science, providing innovation, prosperity, security and U.S. leadership. As with biotechnology, realizing that potential will come from enforcing smart guardrails, rather than solely enabling a Wild West-style free market.

As historical lessons from the FDA show, once something is already out on the market, it becomes vastly harder to safeguard the public from harm and to determine whether risks outweigh the benefits. Drugs can be recalled, but how do you recall bytes when something goes wrong? We now stand at a crossroads. Will we allow frontier AI to develop unchecked? Or will we decide to build a governance structure — much like the FDA did in 1938 — to align incentives to promote innovation while safeguarding the public?

“How the FDA came into being and evolved can provide us with crucial lessons on mitigating technology’s risks while maximizing its benefits.”

Early Safety Successes

The creation of the modern FDA marked a critical transition from a landscape marred by unregulated testing and hazardous products to one that legally mandated safety, quality, efficacy and ethical testing. Thalidomide was produced and marketed in Germany in 1957 as a sleeping pill as well as a safe and effective treatment for morning sickness during pregnancy.

The drug was already being marketed in dozens of countries by the time the U.S. application for approval came across FDA reviewer Dr. Frances Kelsey’s desk in 1960. It was her first month on the job. Kelsey was aware of reports of neurological side effects in patients who had been taking the drug for more than 18 months. In reviewing the application, she noted that despite the company’s claims, it did not provide adequate scientific evidence and experimental data to show that the drug was safe. So she rejected it.

But since the drug was a commercial hit — the second-best selling drug in Germany after aspirin, with immense potential for commercial upside — corporate pressure for FDA approval was intense. The American licensee of the drug made dozens of visits to Kelsey, and both the company and her FDA colleagues ruthlessly criticized her as a “bureaucratic nitpicker” and “unreasonable” for her reluctance to push the drug application through, as described by The Washington Post. She did not capitulate.

By 1961 reports of severe birth defects in children of women who had taken thalidomide started coming in from Europe. It is estimated at least 10,000 children suffered severe physical defects, many being born without arms or legs, in addition to many stillbirths and miscarriages.

In 1962 President John F. Kennedy awarded Kelsey the President’s Award for Distinguished Federal Civilian Service, whose citation read: “Her exceptional judgment in evaluating a new drug for safety for human use has prevented a major tragedy of birth deformities in the United States. Through high ability and steadfast confidence in her professional decision, she has made an outstanding contribution to the protection of the health of the American people.”

Her work led to the 1962 Kefauver-Harris Drug Amendments to the Federal Food, Drug and Cosmetic Act, which instituted requirements on manufacturers not only to prove a drug’s safety but also its efficacy, through well-controlled scientific studies.

These amendments required reporting a drug’s adverse effects to the FDA. They required obtaining informed consent, a process by which a clinician running a trial must explain the risks, benefits and alternatives as well as other relevant information to any subjects participating in studies before obtaining their written permission to proceed. They also required Good Manufacturing Practice, a set of standards ensuring quality control of drugs.

Finally, Congress transferred regulation of pharmaceutical marketing from the Federal Trade Commission to the FDA, moving away from standard commercial practices to a more rigorous standard of consumer protection with scientific oversight.

Just as the FDA mandates safety testing before a drug can reach the market, future frontier AI systems could be made subject to similar regulatory authority before deployment. And much like with pharmaceuticals, the burden of proof to show that the system is safe must be on the developer. Such a regulatory agency would have full transparency into the model and would, crucially, have the authority to audit, just as the FDA can audit data or manufacturing facilities.

The United Kingdom recently created an AI Safety Institute to independently evaluate frontier AI systems across four domains: societal impact, dual-use capabilities, system safety and security, and loss of control. But because the institute lacks the authority to conduct audits, it has been forced to rely on voluntary commitments by AI companies to submit to them.

As of April, no major players in the AI world, with the sole exception of Google DeepMind, a UK-headquartered company, have agreed to an evaluation of their AI tech, according to Politico. The UK’s effort highlights the need for legally binding rules to ensure industry cooperation. Even OpenAI has noted: “At some point, it may be important to get independent review before starting to train future systems.”

Audits and pre-deployment testing to ensure safety are necessary but, in and of themselves, insufficient. A controlled research environment is not representative of the real world. Just as pharmaceutical companies must monitor for and report side effects to the FDA after approval, AI developers should be required to do the same for safety concerns that arise in real-world settings but were not detected during testing.

“As of April, no major players in the AI world, with the sole exception of Google DeepMind … have agreed to an evaluation of their AI tech.”

We have already seen the potential for unforeseen risks in real-world settings with current AI systems. AI researchers conducting experiments on publicly released AI models have shown that their safeguards can easily be overcome.

Unexpected adverse events may even happen during the development of such systems that warrant reporting. In drug clinical trials companies are mandated to report severe adverse effects to the FDA. There have recently been calls for a “right to warn” — policies that would enable current and former AI company employees to notify a company’s board, regulators, independent oversight groups or even the public directly, of potential risks they encounter in development or deployment — without retaliation.

One aspect of AI that differs from pharmaceutical manufacturing is the concept of “open” AI, or systems that are openly available to the public to tune or use. This would be nearly impossible to regulate. Still, this impossibility argument may be a bit of a red herring. The current generation of frontier AI systems is only being developed by a handful of companies with access to the substantial amounts of capital and computing power needed to create these giant new models.

The remarkable breadth of the FDA’s regulatory mandate across foods, drugs and cosmetics also means it needs to be highly adaptive to technological progress made across a range of industries. This requires an inherently flexible regulatory structure. Since its founding, the FDA has taken on new classes of technology, including medical devices, biologics and cell therapies as well as digital health applications such as mobile medical apps. Such flexibility should serve as a model for creating institutions capable of meeting the demands and pace of technological progress. 

Incentive Structures

Scott Alexander’s essay, “Meditations on Moloch,” highlights the problem of multipolar traps, or situations where no one wins and there is a collective race to the bottom. One classic example is the risk to public health created when companies pollute in pursuit of higher profits; in this case, it is the technological race to build ever more capable AI systems. But wisely deployed regulation can help break and reverse these dynamics by creating new incentive structures.

In pre-FDA days, the pharmaceutical field was easily a race to the bottom, with no incentives for companies to conduct costly safety and efficacy testing because doing so would put them at a competitive disadvantage. The FDA ushered in a new era of incentives — such as mandating rigorous safety testing, independent audits, reporting of side effects, quality manufacturing, and requiring new drugs to be as good or better than the current standard of care in order to access the market — that evened the playing field and reset market dynamics so that it became a race to the top.

Companies now receive FDA approval by demonstrating their drug is equivalent or superior in safety and efficacy to other medications on the market. This positive feedback loop helped increase consumer trust and led to more investment, better products and a flourishing pharmaceutical industry.

Today, frontier AI companies are locked in an arms race toward ever greater AI capabilities, inherently prioritizing speed over safety. In March 2023, the nonprofit Future of Life Institute organized an open letter calling for a six-month pause on the development and deployment of larger frontier systems to break free from this dynamic. The pause was intended to give companies time to put safeguards in place. The letter has since gathered nearly 34,000 signatures, including those of leading AI experts.

In a 2023 interview on the New York Times’ podcast “Hard Fork,” when the AI arms race was already well underway, Google CEO Sundar Pichai noted the challenge of such a unilateral pause: “To me, at least, there is no way to do this effectively without getting governments involved.”

Just over a year later, in May 2024, several members of OpenAI’s superalignment safety team resigned due to safety concerns. Although OpenAI is governed by a nonprofit board with the stated mission to “directly build safe and beneficial AGI,” one resignee told Vox about “very, very strong incentives to maximize profit” and criticized leadership for succumbing to such incentives at the cost of the mission.

More members of OpenAI’s safety team then quit amid similar concerns, with one researcher claiming on X that “safety culture and processes have taken a backseat to shiny products.” Exiting OpenAI employees had risked losing potentially millions of dollars of their vested equity in the company unless they signed lifelong non-disparagement agreements. When these NDA details became public, OpenAI leadership apologized on X, saying the provision had never been enforced and would be removed.

“The FDA ushered in a new era of incentives … that evened the playing field and reset market dynamics so that it became a race to the top.”

Several former employees, current employees and leading AI luminaries have since banded together to call for a “right to warn” the public about safety concerns, noting that “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”  Further, some OpenAI whistleblowers have reportedly filed a complaint with the U.S. Securities and Exchange Commission, “calling for an investigation over the artificial intelligence company’s allegedly restrictive non-disclosure agreements.”

AI that’s designed and developed wisely has the potential to advance science, education and innovation, and even help solve many of the problems plaguing American society; it could improve the coordination of aligned stakeholders and citizens, enable better democratic processes, promote understanding between groups and challenge us to grow into the best versions of ourselves.

Realizing such upside requires new incentive structures that reward safe AI built to solve problems over risky AI built to consolidate power and profit. The history of the FDA shows that such structures can promote innovation rather than inhibit it, and generate plenty of profit along the way, by rewarding companies for pushing the frontier of science to solve societal problems.

Challenging Lessons

The FDA also provides its share of cautionary tales and bureaucratic failings. While digital innovation has become more efficient, drug development is growing less so — and its costs are skyrocketing. This decline in cost-efficient innovation in the pharmaceutical sector is even more marked considering the incredible developments made in biotechnology tools such as DNA sequencing, multiomics and imaging.

In fact, from 1950 to 2010 the number of newly approved drugs per billion dollars of inflation-adjusted R&D fell roughly eightyfold. While we have experienced great innovation in drug development over the decades, statistics suggest we are realizing a fraction of what may be possible. Given the stakes for patients, it is important to examine the potential regulatory factors that have led to such a decline. 

Although the creation of the modern FDA reset market dynamics by mandating comparative safety and efficacy, perhaps it did not set the bar high enough. As in any new industry, the low-hanging fruit is usually picked first. If the threshold for FDA approval is set at a relatively low baseline, many companies will strive merely to hit that adequate threshold.

This is especially true in high-risk and high-cost drug development. As a result, companies are incentivized to produce relatively lower-risk, lower-cost copycat drugs rather than breakthrough medical technologies. These copycats provide no major advantages over existing standards of care but make up the bulk of drug approvals — more than 60% of the World Health Organization’s Essential Medicines. 

Part of the problem is that federal regulators themselves are not incentivized to take risks. Their main jobs are to ensure safety and avoid the associated political fallout of a bad drug call. Decreased regulator tolerance for risk over the decades is a key driver of increased cost and decreased innovation. 

This is why the regulatory burden is very costly to pharmaceutical companies and effectively prices out many small and medium-sized companies from the drug development process. These smaller companies, frequently the source of breakthrough innovations, are then reliant on acquisition or strategic financial investment by the big pharma players to advance their work. 

This is why we must be wary of creating systems that make smaller players dependent on large ones for development as we look at how to regulate the AI space. Most small AI companies are already tethered to Big Tech to access the capital and computing power required to train their models. There is an urgent need for a societal conversation on where we want to set our collective risk threshold.

Like with drugs, there is unlikely to be a risk-free frontier AI system. We must collectively prioritize what risks are most important to mitigate to safeguard the public and what risks are acceptable for society to take on so that we can fully realize the benefits of the technology. 

Risks Of Regulatory Capture

In June 2021 the FDA announced the approval of the drug Aduhelm (aducanumab) for the treatment of Alzheimer’s disease. It was the first new approval for this indication since 2003 and the first to treat the underlying pathology of the disease rather than its symptoms. What should have been a celebratory moment in medical history quickly became a historic FDA debacle.

“Like with drugs, there is unlikely to be a risk-free frontier AI system.”

The FDA’s independent Peripheral and Central Nervous System Drugs Advisory Committee had convened in November 2020 to evaluate the clinical data submitted to support the application. In response to the question of whether the study provided supporting evidence for the drug’s effective treatment of Alzheimer’s, not one of the 11 voting members answered yes. Instead, seven voted no and four were “uncertain.” 

The incident led to understandable outrage from public watchdog groups and politicians who demanded answers as to why there was such a misalignment between the scientific analysis of its independent advisory committee and the agency’s own regulatory decision-making. Three of the independent scientific reviewers resigned from the advisory committee in protest.

In his resignation letter, Dr. Aaron Kesselheim called the decision “probably the worst drug approval decision in recent U.S. history” and warned that such approvals by the FDA “will undermine the care of these patients, public trust in the FDA, the pursuit of useful therapeutic innovation and the affordability of the health care system.”

A congressional investigation concluded that the FDA did not follow its own protocols and had an inappropriately close collaboration with Biogen, the pharmaceutical company submitting the drug for approval. On January 31, 2024, Biogen announced it would discontinue Aduhelm and terminate ongoing studies.

While a lack of incentives for breakthrough innovation and public outcry over the erroneous approval of unsafe drugs can explain part of the innovation problem, these explanations operate under the assumption that federal regulators and the pharmaceutical industry are entirely independent entities.

The Aduhelm incident prompted new reflections on the question of regulatory capture, which has long plagued the modern FDA: the idea that regulators are, in practice, controlled by the industries they are meant to regulate. As Upton Sinclair said, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

While one source of funding for the FDA comes from federal annual appropriations, its other source comes from user fees paid by the pharma industry when companies submit their materials for review. These fees make up an estimated 47% of the FDA budget, prompting some academics to ask: “Why is the FDA funded in part by the companies it regulates?”

FDA policy and federal appropriation decisions are arguably not entirely immune from outside influence by industry either. The pharmaceutical industry leads lobbying across all industries, with a whopping $382 million spent last year alone. These financial structures do not necessarily mean undue influence is occurring, but they do create that appearance. The fact that FDA reviewers frequently leave their jobs to work as lobbyists doesn’t help either. Might reviewers — enticed by the prospect of future lucrative employment — be influenced to grant approval of some pharma drug more readily?

The young AI industry is already catching on. From 2022 to 2023 the number of groups lobbying on AI issues in Washington, D.C. grew from 158 to 451, according to data from Open Secrets; that includes Big Tech incumbents like Microsoft, Meta and Google. While many companies welcomed government regulation in their public statements, the AI industry lobbied heavily against proposed outside regulation and liability provisions in the EU AI Act, arguing for self-governance.

In the same vein, capital-rich AI companies can also lure talent away from future regulatory roles in government by offering plum jobs. While it remains to be seen whether there will be a porous boundary between AI companies and the government officials tasked with regulating them, employees are already moving from industry to the public sector: the UK AI Safety Institute has onboarded former employees of OpenAI and Google DeepMind.

As we think about designing new institutions to regulate AI, we must be sure these institutions are independent, so they engender public trust. We also need to be sure that actual, not illusory, guardrails are put into place, ones that are accessible to smaller companies and do not create barriers to entry. As with other government positions, ethics guidelines that place waiting periods on future employment must be created to ensure bureaucrats aren’t incentivized to sell their decisions for a future cushy job in Big Tech.

Unintended Consequences

While designed to be flexible and responsive to technological shifts and real-world evidence, the FDA has not been adaptive enough to address some of the unintended consequences it has created.

“AI that’s designed and developed wisely has the potential to advance science, education and innovation, and even help solve many of the problems plaguing American society.”

In 1988, the FDA decided not to regulate homeopathic products listed in the Homeopathic Pharmacopeia of the United States — at the time a small industry of pseudoscience dating back to 1796. That exclusion formed the basis of what is today a multibillion-dollar industry of ineffective, often adulterated products that make health claims to consumers.

In 2019, the FDA withdrew that policy, but given the size and ubiquity of the market, it is nearly impossible to put that genie back in the bottle. Similarly, in 1994 the FDA initially adopted a fairly lax policy on nutritional supplements, allowing them to be sold to consumers without FDA review and approval as long as they did not make any claims to treat, cure or mitigate diseases.

Today, supplements are a $40 billion market in the United States, and some 80% of Americans take them. Many companies blatantly violate the health claims restrictions or employ savvy marketing departments that suggest health benefits to consumers — like a supplement that allegedly “improves memory,” targeted at adults worried about dementia — while still technically coloring inside the lines.

As a result, unsafe and unregulated products often end up on the market, while potentially safe treatments for patients go unexplored and unrealized. Because securing FDA approval is so costly, biotechnology investors generally only want to invest in creating new molecular structures that can be patented, but this doesn’t mean these are the only molecules that could help patients.

To date, 11% of medications considered “basic” and “essential” by the World Health Organization originated in flowering plants. There is immense potential in naturally occurring compounds and in old molecules whose patents have expired, but because ideas that are not novel, or that are obvious, receive no intellectual property protection, there is no financially viable regulatory path to research these compounds and obtain approval for them as treatments for diseases.

The stakes for AI are even higher. That’s because once systems are released into the wild, control becomes difficult; and once systems are open-sourced, containment becomes nearly impossible. Any new agency must build in the flexibility to identify and adapt — in as close to real-time as possible — to real-world data that provides unintended results or consequences. Given the incredible benefits AI has to offer, this flexibility should also be extended to ensure that beneficial applications of the technology are not needlessly limited.

Paradigms Of Human Agency

The FDA regulatory model, prevalent in most industrialized countries worldwide, has become the dominant paradigm through which we think about human agency and our health. A government agency makes decisions about the risks, benefits and sufficiency of evidence of new medications for its general population, and those treatments are then accessible only through a prescription from an expert. Each citizen effectively outsources judgments about risk tolerance and what is best for their body to the government and to experts trained and licensed under its requirements.

In this paradigm, other viewpoints about human agency are illegal. Informed patients who would like to try a prescription or investigational medication for their illness have no way of accessing drugs except through a clinician or FDA-approved clinical trial. “Right to Try” laws do enable terminally ill patients to legally try investigational medicines that aren’t approved by the FDA.

But this change to such a paternalistic paradigm was only passed in 2018, after much heated debate, and it allows the FDA to make investigational medicines available under only a very narrow set of circumstances. Patients with debilitating or severe illnesses not managed by FDA-approved options are ineligible.

There are strong arguments that we should have a healthcare system that respects and accommodates differing views on individual autonomy. Some may draw the line on the “right to try” at terminal illness, others at serious illness. But there is also something to be said about radical autonomy — that patients should be allowed to try whatever they want for their bodies, provided they are made aware of the risks.

In author Jef Akst’s book, “Personal Trials: How Terminally Ill ALS Patients Took Medical Treatment Into Their Own Hands,” PatientsLikeMe co-founder Jamie Heywood, whose online community enabled patients to share their experiences, summed up the idea as follows: “I don’t believe that in America, where the Declaration of Independence has life, liberty, and pursuit of happiness as a fundamental right, that regulatory authorities or medical authorities should deny any consenting, understanding patients the ability to do anything that can help them.”

“Any new agency must build in the flexibility to identify and adapt — in as close to real-time as possible — to real-world data that provides unintended results or consequences.”

The FDA has a prescribed and locked-in paternalistic ideology about health decision-making in the U.S. The FDA is effectively a government monopoly that curtails individual choice. The absence of AI governance is, ironically, likely to deliver a similar result: individual choice will be limited by a single entity. And without effective regulation, there is potential for a winner-take-all AI monopoly to emerge: A future where a single system becomes critical to every aspect of society. This would not only lead to unprecedented power concentration, but it would also present a unique risk of locking in an ideological and cultural paradigm reflected in the values of that system.

Because AI systems inherently reflect the values, aesthetics and preferences of their architects, those values are unlikely to be representative of the population and its spectrum of views. The output of an AI system is whatever answer is most probable given its data, which functionally narrows the cultural landscape further: the system shows us the average of the data, not its diversity or extremes. The failure to preserve a competitive AI landscape risks losing the diversity of ideas, values and opinions that exist within a single society — not to mention globally.

Charlie Munger, the former vice-chair of Berkshire Hathaway, observed the following about economic behavior: “Show me the incentive and I’ll show you the outcome.” Regulators play a powerful role in structuring and shaping the incentive structures of industry.

The story of the FDA’s legacy follows this adage. As we think about AI governance, we must tame the reckless arms race underway, but we also must be mindful of how governance can be structured to promote the maximal financial and social benefits AI has to offer.