Articles by Adam Thierer

Adam Thierer is a Senior Fellow in Technology & Innovation at the R Street Institute in Washington, DC. He was formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and a Fellow in Economic Policy at the Heritage Foundation.


In my latest column for The Hill, I explore how “State and Local Meddling Threatens to Undermine the AI Revolution” in America as mountains of parochial tech mandates accumulate. We need a federal response, but we’re not likely to get the right one, I argue.

I specifically highlight the danger of new measures from big states like New York and California, but it’s the patchwork of all the state and local regulations that will result in a sort of ‘death-by-a-thousand-cuts’ for AI innovation as the red tape grows and hinders capital formation.

What we need is the same sort of principled, pro-innovation federal framework for AI that we adopted for the Internet a generation ago. Specifically, we need some sort of preemption of most state and local constraints on what is inherently national (and even global) commerce and speech.

Alas, Congress appears incapable of getting even basic things done on tech policy these days.

Here’s a new DC EKG podcast I recently appeared on to discuss the current state of policy development surrounding artificial intelligence. Our wide-ranging chat covered:

* why a sectoral approach to AI policy is superior to general purpose licensing
* why comprehensive AI legislation will not pass in Congress
* the best way to deal with algorithmic deception
* why Europe lost its tech sector
* how a global AI regulator threatens our safety
* the problem with Biden’s AI executive order
* will AI policy follow the same path as nuclear policy?
* global innovation arbitrage & the innovation cage
* AI, health care & FDA regulation
* AI regulation vs trade secrets
* is AI transparency / auditing the solution?

Listen to the full show here or here. To read more about current AI policy developments, check out my “Running List of My Research on AI, ML & Robotics Policy.”


My latest dispatch from the frontlines of the artificial intelligence policy wars in Washington looks at the major proposals to regulate AI. In my new essay, “Artificial Intelligence Legislative Outlook: Fall 2023 Update,” I argue that there are three major impediments to getting major AI legislation over the finish line in Congress: (1) the breadth and complexity of the issue; (2) the multiplicity of concerns and special interests; and (3) extreme rhetoric and proposals dominating the discussion.

If Congress wants to get something done this session, it will need to do two things: (1) set aside the most radical regulatory proposals (like big new AI agencies or licensing schemes); and (2) break AI policy down into its smaller subcomponents and then prioritize among them where policy gaps might exist.

Prediction: Congress will not pass any AI-related legislation this session due to the factors identified in my essay. The temptation to “go big” with everything-and-the-kitchen-sink approaches to AI regulation (especially extreme ideas like new agencies and licensing schemes) will doom AI legislation. It’s also worth noting that Washington’s swelling interest in AI policy is having a crowding-out effect on other important legislative proposals that might otherwise have advanced, such as the baseline privacy bill (the ADPPA) and driverless car legislation. Many want to advance those efforts first, but the AI focus makes that hard.

Read the entire essay here.

The Brookings Institution hosted this excellent event on frontier AI regulation this week, featuring a panel discussion I joined following opening remarks from Rep. Ted Lieu (D-CA). I come in around the 51-minute mark of the event video and explain why I worry that AI policy now threatens to devolve into an all-out war on computation, and on open source innovation in particular.

I argue that some pundits and policymakers appear to be on the way to substituting a very real existential risk (authoritarian government control over computation and science) for the hypothetical existential risk of powerful AGI. I explain that there are better, less destructive ways to address frontier AI concerns than the highly repressive approaches currently being considered.

I have developed these themes and arguments at much greater length in a series of essays over on Medium over the past few months. If you care to read more, the four key articles to begin with are:

In June, I also released this longer R Street Institute report on “Existential Risks & Global Governance Issues around AI & Robotics,” and then spent an hour talking about these issues on the TechPolicyPodcast episode “Who’s Afraid of Artificial Intelligence?” All of my past writing and speaking on AI, ML, and robotics policy can be found here, and that list is updated every month.

As always, I’ll have much more to say on this topic as the war on computation expands. This is quickly becoming the most epic technology policy battle of modern times.

It was my pleasure to participate in this Cato Institute event today on “Who’s Leading on AI Policy? Examining EU and U.S. Policy Proposals and the Future of AI.” Cato’s Jennifer Huddleston hosted, and Boniface de Champris, Policy Manager with the Computer and Communications Industry Association, also participated. Here’s a brief outline of some of the issues we discussed:

  • What are the 7 leading concerns driving AI policy today?
  • What is the difference between horizontal vs. vertical AI regulation?
  • Which agencies are moving currently to extend their reach and regulate AI tech?
  • What’s going on at the state, local, and municipal level in the US on AI policy?
  • How will the so-called “Brussels Effect” influence the course of AI policy in the US?
  • What have the results been of the EU’s experience with the GDPR?
  • How will the EU AI Act work in practice?
  • Can we make algorithmic systems perfectly transparent / “explainable”?
  • Should AI innovators be treated as ‘guilty until proven innocent’ of certain risks?
  • How will existing legal concepts and standards (like civil rights law and unfair and deceptive practices regulation) be applied to algorithmic technologies?
  • Do we have a fear-based model of AI governance currently? What role has science fiction played in fueling that?
  • What role will open source AI play going forward?
  • Is AI licensing a good idea? How would it even work?
  • Can AI help us identify and address societal bias and discrimination?

Again, you can watch the entire video here and, as always, here’s my “Running List of My Research on AI, ML & Robotics Policy.”

The New York Times today published my response to an op-ed by Senators Lindsey Graham and Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:

Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.

The first thing I try to remind everyone is that the U.S. federal government is absolutely massive: 2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to ignore all that regulatory capacity while casually tossing out proposals to add more and more layers of regulation and bureaucracy on top of it. Well, I say: why not first see whether the existing regulations and bureaucracies are working, and then we can have a chat about what more is needed to fill gaps.

And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts.

Can we advance AI safety without new international regulatory bureaucracies, licensing schemes or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics” (31 pages). My report rejects extremist thinking about AI arms control and stresses the “realpolitik” of international AI governance: these problems cannot and must not be solved through silver-bullet gimmicks and grandiose global government regulatory regimes.

The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks have started having real-world influence with extreme regulatory proposals now being floated. My report also does a deep dive into the debate about a proposed global ban on “killer robots” and looks at how past treaties and arms control efforts might apply, or what we can learn from them about what won’t work.

I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI development are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk can sometimes give rise to others.

A culture of AI safety by design is critical. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can help advance AI safety; the final third of my study is devoted to a discussion of them. Continuous communication, coordination, and cooperation—among countries, developers, professional bodies and other stakeholders—will be essential.

This week, I appeared on the TechFreedom Tech Policy Podcast to discuss “Who’s Afraid of Artificial Intelligence?” It’s an in-depth, wide-ranging conversation about all things AI. Here’s a summary of what host Corbin Barthold and I discussed:

1. The “little miracles happening every day” thanks to AI

2. Is AI a “born free” technology?

3. Potential anti-competitive effects of AI regulation

4. The flurry of joint letters

5. The political realities of a new AI regulatory agency

6. The EU’s Precautionary Principle tech policy disaster

7. The looming “war on computation” & open source

8. The role of common law for AI

9. Is Sam Altman breaking the very laws he proposes?

10. Do we need an IAEA for AI, or an “AI Island”?

11. Nick Bostrom’s global control & surveillance model

12. Why “doom porn” dominates in academic circles

13. Will AI take all the jobs?

14. Smart regulation of algorithmic technology

15. How the “pacing problem” is sometimes the “pacing benefit”


It was my pleasure to recently appear on the Independent Women’s Forum’s “She Thinks” podcast to discuss “Artificial Intelligence for Dummies.” In this 24-minute conversation with host Beverly Hallberg, I outline basic definitions, identify potential benefits, and then consider some of the risks associated with AI, machine learning, and algorithmic systems.

Reminder: you can find all my relevant past work on these issues via my “Running List of My Research on AI, ML & Robotics Policy.”