Congress ponders regulation of powerful emergent A.I. platforms

ARI SHAPIRO, HOST:

Generally speaking, businesses don't ask Congress to regulate them, so it was striking yesterday to hear one of the most prominent executives in artificial intelligence say this.

(SOUNDBITE OF ARCHIVED RECORDING)

SAM ALTMAN: We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. I think, if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.

SHAPIRO: That was Sam Altman, CEO of OpenAI, testifying before a Senate committee yesterday.

Artificial intelligence is global, and Congress does not exactly have a reputation for being ahead of the technology curve. So what are the chances lawmakers could get their arms around this, and what would effective regulation look like? Paul Scharre studies those exact questions. He is vice president at the Center for a New American Security. Welcome back to ALL THINGS CONSIDERED.

PAUL SCHARRE: Thanks. Thanks for having me.

SHAPIRO: Before we look globally, let's talk about what is happening here in the United States. Here's what Democratic Senator Peter Welch of Vermont said yesterday.

(SOUNDBITE OF ARCHIVED RECORDING)

PETER WELCH: I've come to the conclusion that it's impossible for Congress to keep up with the speed of technology.

SHAPIRO: Paul, is that true, or do you see a valuable role for Congress to play here?

SCHARRE: Well, I think both those things can be true. There is definitely a valuable role for Congress, but there's a huge disconnect between the pace of the technology, especially in AI, and the pace of lawmaking. So there's a real incentive for Congress to move faster, and that's what I think we see members of Congress trying to do with these hearings - figure out what's going on with AI, and then what role government needs to play in regulating it.

SHAPIRO: How would you answer that question? What role should government play? I mean, regulation is such a broad and general term. Is there a consensus even among experts here?

SCHARRE: Well, there's certainly not a consensus. And I think part of it is that AI can mean so many different things. It can be facial recognition or AI used in finance or medicine, and there's going to be a lot of industry-specific regulation. One of the topics at the hearing, and one that some experts are starting to focus on, is regulating the most powerful AI models - models like ChatGPT or the newest version, GPT-4. They're in a different class because these are very general-purpose systems that can do a wide variety of tasks. And one of the things Sam Altman, the CEO of OpenAI, the company behind ChatGPT and GPT-4, is calling for is regulation of the very technology his company is building, which is surprising. Other experts are calling for that as well, and I think that's an area where some special regulatory attention is probably needed.

SHAPIRO: Do you think we're more likely to see Congress do something narrow and specific, like, say, anything that's fake must be labeled as such, or do something broad, like create a body that will, at some point in the future, issue regulations?

SCHARRE: I mean, pessimistic answer - we're probably likely to see not very much.

SHAPIRO: (Laughter) OK.

SCHARRE: But I would - I mean, if we're being honest, that's been the story so far with social media, for example. But I think, you know, if we can get just a couple specific kinds of narrow regulation - there was some talk about a licensing regime for training these very powerful models. That probably makes some sense at this point. And then things like requirements to label AI-generated media, like you mentioned. California has passed a law like this called a "Blade Runner" law - I love this term - that basically says, if you're talking to a bot, it has to disclose that it's a bot. That's a pretty sensible regulation.

SHAPIRO: Artificial intelligence is a technology that exists all over the world, and many countries are pursuing it with wild abandon. If the U.S. imposes limitations, is that just going to hamstring the U.S. without actually eliminating the potential for harm?

SCHARRE: Well, we're going to need to get other countries on board with some kind of global AI governance over time. We're going to have to get other countries to adopt a similar approach.

SHAPIRO: That seems even less likely...

SCHARRE: Now, the good thing...

SHAPIRO: ...Than getting Congress to agree on something. You're talking about Russia, China, like, the U.S., all saying, here's the rules were collectively going to agree to. I mean, Russia is pulling out of nuclear treaties. You think they're going to sign onto an AI treaty?

SCHARRE: Well, so here's the thing. The U.S. has leverage over what other countries can do with these very powerful AI systems, because training the most powerful models requires very specialized chips, and those chips depend on U.S. technology. We've already seen the U.S. put export controls on these chips, and that's a point of leverage the U.S. has over other countries' access to this technology: hey, if you don't agree to these rules and safety standards, you can't get the hardware you need to actually build and train these systems.

SHAPIRO: Hmm. During the hearings yesterday, Missouri Republican Josh Hawley asked whether AI is more like a printing press or an atom bomb. Is it a useful or a deadly technology? Your research focuses on AI in the military. Do you think it's hyperbolic to compare something like - I don't know - ChatGPT to a weapon that could kill huge numbers of people immediately?

SCHARRE: Well, I don't think ChatGPT is there, but one of the fears is what's coming next and the pace of AI development. And we've seen really astonishing gains in just the last 12 months or so. I don't think anybody has a good sense of what's possible even 12 months from now, much less a few years from now.

SHAPIRO: And so would you choose printing press or atom bomb? Which do you think it's more like?

SCHARRE: Ooh. Well, I mean, maybe nuclear technology is not a bad comparison 'cause there are good uses, like nuclear energy, but also bad uses, like atom bombs.

SHAPIRO: So what do you think a scenario without guardrails looks like a decade or so down the road? I mean, what's your nightmare - whether it's - I don't know - killer robots coming for us all or something totally different from that?

SCHARRE: I think one of the risks is that we see wide proliferation of very powerful, general-purpose AI systems that can do lots of good things and lots of bad things, and that we see some bad actors use them for things like helping to design better chemical or biological weapons or cyberattacks. It's really hard to defend against that if there aren't guardrails in place and if anyone can access these systems as easily as anyone can hop on the internet today. So thinking about how we control proliferation, and how we ensure the systems being built are safe, is really essential.

SHAPIRO: Paul Scharre's latest book is called "Four Battlegrounds: Power In The Age Of Artificial Intelligence." Thank you so much.

SCHARRE: Thank you. Thanks for having me. Transcript provided by NPR, Copyright NPR.
