Jordan Schneider is the founder of the ChinaTalk podcast and newsletter.
Matthew Mittelsteadt is a research fellow and technologist for the AI and Progress Project at the Mercatus Center at George Mason University.
In times of technological tumult, great powers rise and fall. Historically, it was not simply leading in cutting-edge technologies that proved critical for national advancement; the nations that spread the benefits of new technologies most effectively, rather than those that innovated first, grew faster over time and ultimately defined the trajectory of their era.
The age of AI may prove to be another of these moments. Alongside dramatic societal and economic implications, AI could drive differences in national trajectories. Facilitating AI’s diffusion will be key to national competitiveness for the coming decades.
Alongside supporting frontier research, American policymakers should do their utmost to ensure that not just leading-edge labs, but also firms, schools and government bureaucracies themselves, are able to make the most of AI.
America Shouldn’t Bank On Its Frontier Tech Lead
The AI triad, the set of building blocks for developing AI, comprises computing power, algorithms and data. Today, OpenAI, Google, Anthropic and Meta all have models that significantly outperform those of their Chinese rivals, U.S. tech firms’ main competitors in AI. For both algorithms and hardware, however, it’s not difficult to imagine this competitive edge fading.
Thanks to talent migration, industrial espionage, and open-source advancements, the algorithmic gap between Western and Chinese firms is likely to narrow. With China’s proven tech strengths and the global momentum of open-source AI, a more level playing field in AI innovation may well emerge.
Unlike past technological advances, which favored government and firm control, today’s AI gives individual developers unusual competitive autonomy. Researchers can read up on computer science discoveries and get close to the frontier of knowledge through open-access publications alone. But the global shortage of talent means top AI researchers can be lured across the Pacific by top Chinese firms offering salaries in the millions, or simply hired into Chinese firms’ Bay Area research labs, which is where Anthropic’s Dario Amodei got his start in the field.
Industrial espionage adds competitive pressure and may further erode the research gap. In China, where Xi Jinping has personally emphasized that the state must give “attention to the development of general artificial intelligence,” espionage incentives are particularly high. Models will be stolen. Meanwhile, Western defenses are tenuous. Amodei, perhaps the most outwardly security-conscious leader of a top AI lab, admitted as much: despite his firm’s commitment to keeping its models safe, “Could we resist if it was a state actor’s top priority to steal our model weights? No. They would succeed.”
The espionage threat to AI’s algorithmic advances is elevated compared to previous competitive technologies, which were often manufacturing-dependent. While there certainly is a fair amount of “secret sauce” involved in training frontier models, manufacturing at the technological frontier is fundamentally much harder to grok than software. Chinese firms, even with the benefit of sustained and generous state support, have struggled to compete in industries that require precision manufacturing, like aerospace and internal combustion engine vehicles. Unlike physically engineered systems, where China has struggled to move from blueprints to, say, a working commercial jet, learning from algorithms is simple. In software-dominant industries like chip design and social media platforms, where practitioners can directly piggyback off global innovation, Chinese computer scientists have shown they have the talent and drive to compete with anyone on the planet.
These industrial espionage and talent transfer possibilities only matter, however, if firms and labs lead the way. In recent years, the Western AI edge has faced a potent third threat: open source. Today, the AI open-source hive mind is collectively far larger than any single AI company, rapidly developing innovations that are increasingly competitive with the latest and greatest from the top private AI labs. If the best models come from open-source development, they will be equally accessible in China and the West. The success of open source may mean the elimination of any software-related geopolitical technology gaps.
The potential impact of open-source innovation on the future of AI cannot be overstated. A now-famous leaked Google memo suggests that the rapid crumbling of barriers to entry in AI technology has unleashed a “flurry of ideas and iteration” from “ordinary people,” potentially rendering even the best IP protections moot.
Compute shortages add further pressure to innovate. Both open-source researchers and large Chinese firms are bottlenecked by a lack of computing power, driving innovations that reduce the cost of training and deploying models and, in turn, further democratize access to top-tier AI across borders.
Meanwhile, firms like Meta have added fuel to the open-source fire by openly licensing their cutting-edge models to Chinese players. So long as this backing continues, the open-source community won’t lack access to frontier tech. An open-source-fueled future is indeed possible; combined with espionage and talent transfer, it means that any algorithmic edge in the coming years will be fleeting at best.
The Hardware Lead Also Shouldn’t Be Relied On
If the Western algorithmic lead erodes, can we still succeed by leading in AI chips and hardware? AI algorithms are only useful once they’re trained into models and deployed at a large scale. To do so, firms around the world are investing billions in acquiring the computing power necessary for training, banking on a hardware-derived AI competitive edge. Currently, as demonstrated through last year’s export controls, the U.S. has a seemingly dominant position in this critical AI hardware space.
This position, however, may be less sustainable or relevant to long-term national competitiveness than these policies presume. Restrictions on Chinese players’ ability to procure cutting-edge AI chips are actually quite porous. For instance, NVIDIA has intentionally designed a chip to skirt export control restrictions, freeing it to reap $5 billion in orders from Chinese firms so far this year.
These sales are augmented by a range of additional workarounds. With hundreds of thousands of banned AI chips manufactured each year, smuggling them is a relatively trivial task. And while illicitly acquiring enough chips to graduate from “GPU-poor” status may be difficult, Chinese companies can use global cloud service providers, or even Chinese cloud providers’ overseas server farms, to access top-of-the-line chips that can’t be imported into China. What’s more, there’s nothing on the books in the U.S. stopping the likes of Google or Anthropic from selling API access in China.
Beyond the reach of most Western export controls are Chinese firms trying to develop leading-edge domestic semiconductor capabilities. The recent release of Huawei’s Kirin 9000S chip manufactured by Chinese semiconductor champion SMIC shows how, absent tighter export controls, China is perhaps a year or two off from making chips roughly competitive with NVIDIA’s.
In the medium term, as Moore’s Law slows down, it will be increasingly difficult for the G7 to push their chips forward in capability relative to the advances China can make along an already de-risked technological trajectory. There are also paradigm shifts on the horizon, like quantum or even biological computing, that may end up leveling the global playing field rather than widening the gap between China and the West.
Hardware is certainly less prone than software to a leveling of the playing field; even so, our current lead is not as stable as many presume and should not be relied on as the basis of a long-term, sustainable competitive advantage.
A Diffusion-Centric Strategy
The challenges of maintaining long-term technical leadership in AI demand policymakers consider alternatives to policies focused on maintaining, as National Security Advisor Jake Sullivan put it, “as large of a lead as possible” in frontier AI technology. The most promising path forward is instead structural: a diffusion-centric AI policy laying the groundwork for long-term productivity growth.
AI success requires breaking the tech out of the lab and putting it into people’s hands. This is true no matter what future path the technology takes: for AI to provide real economic and political advantages, it must be used. Emerging evidence suggests today’s frontier AI technologies have great productive potential as a catalyst for productivity, a key driver of both general prosperity and geopolitical strategic success, if put into practical use and diffused. The rapid diffusion and impact of this technology, however, is no guarantee.
Diffusion, not development, is the bottleneck for success and today’s true public policy challenge. Policymakers must shift strategic thinking away from R&D and toward policies that can help AI avoid the slow, decades-long diffusion process that delayed and inhibited electricity — a similar general-purpose technology — in the 20th century.
While technological diffusion may sound amorphous from a policy perspective, the process is significantly influenced by regulatory and institutional design factors, as the Princeton scholar Helen Milner argues and the OECD has since empirically validated. The implication is that policymakers hold the tools and levers needed to speed up the process and come out ahead.
Start With A Light Regulatory Touch
Alongside technical advances, impactful diffusion will require inspired organizational leadership, not just the release of increasingly powerful models. As the George Washington University scholar Jeff Ding has written: “More than five decades passed before key innovations in electricity, the quintessential GPT [general purpose technology], significantly transformed manufacturing productivity. … Like other GPT trajectories, electrification required a protracted process of workforce skill adjustments, organizational adaptations, such as changes in factory layout, and complementary innovations like the steam turbine, which enabled central power generation in the form of utilities.”
Like innovation itself, diffusion can be a protracted creative process, characterized by trial and error and experimentation. For this process to play out, regulation cannot act as an excessive burden. As a first step in any diffusion-centric strategy, regulators should aim to do no harm. That requires analyzing regulations and determining what rules and processes may inhibit AI success for little societal gain.
One piece of this task will be regulatory clarity. Today, the American AI regulatory environment remains hazy: Agencies have yet to analyze their existing statutes and determine how emerging AI products and technologies might interact with standing law. To provide clarity, the Biden administration should begin identifying the relevant statutes and compiling them into a comprehensive “AI regulatory map.” For the private sector, such clarity will promote risk-taking and confident innovation. For U.S. AI strategy, this effort would help Congress and regulators easily identify needless regulatory barriers, overlaps and contradictions that may forestall competitive success.
Successful AI regulation must also be adaptable enough to serve the unique needs of AI technology. Traditional Food and Drug Administration (FDA) approval processes for medical devices, for example, were designed around assumptions of a slow pace of innovation and static device design; AI innovation, on the other hand, is proceeding at breakneck speed with dynamic devices whose designs improve and change over time.
To accommodate AI in the medical industry, the FDA has proposed statutory changes to create a software pre-certification program, which approves organizations that build medical software rather than discrete devices. Such changes would allow regulatory room for the continuous updates, security patches and changes safe and effective AI systems demand, rather than a cumbersome case-by-case review.
Across the government, similar changes are no doubt needed; in addition to mapping relevant AI statutes, the Biden administration should also identify which statutes may not match the needs and dynamics of AI technology. Agencies and research organizations can then consider alternatives that may ease device approval and deployment.
In China, where many observers initially figured that large language models would be unwelcome in a meticulously controlled information space, Beijing seems to have internalized that AI is too important to be strangled in the cradle. Baidu’s ERNIE release has proven it’s feasible for a large language model to be good enough at filtering sensitive content to appease regulators.
When the Cyberspace Administration of China released this year’s draft regulations for generative AI, regulators initially drew a hard line that would have significantly hampered innovation. What followed was a lively public debate in which both firms and academics complained that mandates requiring all training data and AI outputs to be “true and accurate” were unrealistic. In the final rules, regulators loosened their demands so that the requirements would not apply to internal company R&D and only asked that firms “take effective measures to increase the truth and accuracy of the training data and the outputs.”
Beijing is embracing an unexpectedly light regulatory touch; if Western regulators fail to do the same, our AI diffusion may well fall behind.
Soften The Inevitable Backlash
The best forcing function for AI diffusion will be the American capitalist system’s tolerance for creative destruction. Firms that come up with the best ways to improve their productivity should be empowered to outcompete their rivals. With disruption, however, comes backlash, and change-resistant incumbents often try to use existing power to slow, stop or even reverse change.
Policymakers should be wary of attempts by incumbent firms and workers who won in the pre-AI paradigm to shield themselves from the knock-on changes necessary to diffuse this technology and reap AI’s productivity gains.
AI licensure is one prominent regulatory idea to avoid and offers an illustration of the growing backlash. According to an Obama-era Treasury Department report, “most research does not find that licensing improves quality or public health and safety.” Licensure discourages and limits market participation, decreasing the very competition needed to ensure AI safety and innovation. In exchange for questionable safety benefits, such policies reduce competitiveness at a national level.
Mercatus Center economists have also found that licensure regulations tend to both decrease labor supply and increase prices. OECD data from this year already suggests that high product costs and a limited pool of IT talent are the primary factors limiting AI adoption. Exacerbating these adoption chokepoints through licensure would slow growth and likely cede global market share to less hidebound Chinese competitors.
Beyond the industry-by-industry regulatory risk of lobbying leading to protectionism, there is the possibility of a broader societal AI backlash that could choke off an even wider swath of potential productivity gains. Recent AI Policy Institute polling shows that 86% of voters believe AI could cause a catastrophic event, and as a result, 72% favor slowing down AI progress.
The public is increasingly wary of AI tech, and not entirely without cause: If AI does end up delivering substantial productivity gains, that will, like every past industrial revolution, inevitably come alongside social disruption and a new risk matrix. While the juice will be worth the squeeze (we doubt many readers would like to exchange lives with those born before past industrial revolutions, and the same will almost certainly be the case going forward), governments would be wise to invest in certain technical and societal interventions that could provide downside risk protection.
In the U.S., to prevent anti-AI sentiments from translating into net-negative regulation, regulators should invest in AI safety efforts. Transparency offers a good starting point.
Clarity about training data sets, ethical principles, foundation models used and intended applications would help consumers navigate opaque AI markets while allowing agencies like the Cybersecurity and Infrastructure Security Agency to flag models or data sets that carry known security vulnerabilities. The government could pair these reforms with other efforts to shore up institutions likely to be impacted by certain AI risks.
With the rise of AI-generated election disinformation, Congress should consider restrictions on AI-generated content in political ads. In the face of novel AI cyber threats, the government should invest in much-needed critical infrastructure security. Finally, governmental AI R&D support should target research that prizes AI detection, including efforts to watermark AI outputs.
While this list is by no means comprehensive, and novel risks are sure to emerge in the future, modest steps can help tame some of the most probable AI risks and build the social trust needed for the technology to be developed and diffused unimpeded.
More speculatively, if AI does perform surprisingly well, officials should perhaps begin laying the groundwork for an AI equivalent of Trade Adjustment Assistance: a federal program that would help workers in the professions most adversely impacted by AI transition to new industries, so that society doesn’t throw the baby out with the bathwater in a regulatory overcorrection.
How To Speed Up Diffusion
Beyond measures to defend AI diffusion from those seeking to slow down technological change, the government can also be proactive in laying the groundwork for America to take advantage of the opportunities AI opens for productivity growth.
Education offers the most promising path toward catalyzing technical diffusion. Today, we cannot say with confidence what skills will be key to future national competitiveness. What we can guarantee is continued change. Diffusing the technological advancements of past industrial revolutions required entirely new disciplines to develop. The same will likely be the case for AI.
Historically, as Jeffrey Ding illustrates in his forthcoming book, America has shined in adapting education to new technological paradigms and professional disciplines. In the first industrial revolution, American home-taught mechanical engineers and innovative institutions like MIT helped the U.S. take advantage of British breakthroughs. In the late 19th century, the U.S. was able to overcome an invention deficit relative to Germany because of its ability to produce more chemical engineers to apply European breakthroughs to different industries.
Most recently, the U.S. was able to outcompete Japan in the information and communication technologies revolution thanks to its ability to develop human capital. The U.S. trained hardware and software engineers and attracted top global talent, enabling a diffusion across the economy that reaped the benefits of digitization.
As Alexandr Wang, the CEO of Scale AI, put it, “Software engineering as a job was invented in the 1960s with the Apollo program. Fast forward to today, it’s viewed as the best job in America, with 1.6 million jobs.” While it’s too early to say exactly which professions will be key to unlocking AI’s productive potential, the government should be on the lookout for emerging disciplines and incentivize universities to experiment, continuing this historical trend of educational adaptation.
Beyond identifying new directions, how might education itself adapt? Programs that promote technical adoption and diffusion may hinge on educational flexibility and experimentation, qualities that match the guarantee of change in the coming years. Moving forward, education should deprioritize predicting which technical skills the future will demand in favor of building broad-based adaptability and general technical ability.
Through proper curricular design, the workforce can be trained to flexibly adapt to uncertain technical needs and prepare to continuously pivot and adopt cutting-edge systems throughout their careers. The flexibility demands of today’s computer science programs offer a promising conceptual direction.
In CS, technical change is constant. From semester to semester, students must learn new programming languages, techniques and architectural paradigms from the ground up. The technical skills required are ever-shifting, forcing students to avoid over-investing in one technical skill in favor of the ability to quickly upskill in technologies they’ve never seen before whenever necessary. Policymakers should prioritize a similar model across education, avoiding overinvestments in technology-specific trade programs while promoting the very confidence and comfort with technical change needed for fast diffusion and adoption.
Simultaneously, policymakers should resist calls to unduly restrict the use of AI in classrooms. The sort of teacher who can’t accept that phonics is a superior way to teach reading because of misplaced romanticism about the classroom can’t be expected to embrace AI out of the gate. But a decade from now, we’ll look back at the curriculum of 2023 and see modules as outdated as penmanship and calls to block ChatGPT as akin to the push to ban calculators.
AI holds tremendous promise not only to address a national teacher shortage currently numbering in the hundreds of thousands, but also to scale the sort of one-on-one tutoring-style education that has proven effective yet remains far too expensive to roll out at a societal level. Truly personalized education has the pleasant side effect of being more likely to produce Einsteins. Local and national governments should view AI not as a threat to today’s education system, which fails so many, but rather as a historic opportunity to provide every student with superhuman levels of attention and training.
Education, however, is not the only possibility: AI also presents the U.S. government with a tremendous opportunity to improve the way it does business. Without action, government “waste, fraud and abuse” may end up consisting primarily of tasks public-sector workers still do manually well after the private sector has figured out how to automate or accelerate them with AI. Already we can predict what some of these tasks may be. From a public services perspective, perhaps AI agents can process tax returns and conduct consular interviews. Meanwhile, painful backend tasks, from supply-chain monitoring to Department of Defense auditing, could be accelerated. Finally, perhaps the Census Bureau could supplement its forms with open-ended conversations between AI agents and citizens to paint a richer, more nuanced portrait of the nation.
Unfortunately, new approaches to applying technology within the government often “die in the iron cage of outdated bureaucracy.” According to Code for America founder Jennifer Pahlka, no matter the strength or quality of policy decisions, “culture eats policy.” Today’s bureaucratic culture is often structured like a waterfall: Policymakers at the top of the falls make one-way decisions that flow down onto developers and project managers who, operating under rigid strictures, struggle to build successful systems. The result is a culture of IT inflexibility, in which developer ingenuity is bound by rules written by often inaccessible decision-makers. Rather than being designed to achieve policy outcomes, systems are designed to follow rules.
While there is no easy fix to such cultural challenges, small steps toward improvement can be taken. At the State Department, leaders are considering “designated technology tours” where diplomats are assigned to engage and study critical technologies for several years in exchange for employment record credits and possible preference toward advancement. While hardly a cure-all, favoring the advancement of staff with technical competencies would counteract waterfall structures. Managers who understand technology and developer needs may be more likely to engage with implementation and build policies that respect innovation. Across agencies, a version of this model could sow the seeds of a technically inclined culture, one that not only has greater IT development success but which also encourages AI adoption.
It’s Alright To Drive In The Dark
The former Secretary of the Navy Richard Danzig was right to note that unpredictable technological futures mean “policymakers will always drive in the dark.” Indeed, we don’t know what AI will look like in 2030, and we don’t know what future innovations may come. Strategies focused on maintaining an often-tenuous technical lead are therefore insufficient. Uncertainty about America’s long-term frontier AI advantage, however, needn’t be paralyzing. A better bet is to focus on policies that enable the uncertain AI innovations of the coming years to shift quickly from development into broad application.
If we take care to design today’s AI policy decisions around the need for technical diffusion and the assumption of unpredictable change, we can ensure those decisions flex and adapt to uncertain technical tides while setting the table to maximize productivity, growth and competitive advantage. The result will be confident, effective policy well matched to ensure American competitive success while managing whatever changes the AI of the future may bring.