The San Juan Daily Star

The US regulates cars, radio and TV. When will it regulate AI?


By Ian Prasad Philbrick


As increasingly sophisticated artificial intelligence systems with the potential to reshape society come online, many experts, lawmakers and even executives of top AI companies want the U.S. government to regulate the technology, and fast.


“We should move quickly,” Brad Smith, the president of Microsoft, which launched an AI-powered version of its search engine this year, said in May. “There’s no time for waste or delay,” Chuck Schumer, the Senate majority leader, has said. “Let’s get ahead of this,” said Sen. Mike Rounds, R-S.D.


Yet history suggests that comprehensive federal regulation of advanced AI systems probably won’t happen soon. Congress and federal agencies have often taken decades to enact rules governing revolutionary technologies, from electricity to cars. “The general pattern is, it takes a while,” said Matthew Mittelsteadt, a technologist who studies AI at George Mason University’s Mercatus Center.


In the 1800s, it took Congress more than half a century after the introduction of the first public, steam-powered train to give the government the power to set price rules for railroads, the first U.S. industry subject to federal regulation. In the 20th century, the bureaucracy slowly expanded to regulate radio, television and other technologies. And in the 21st century, lawmakers have struggled to safeguard digital data privacy.


It’s possible that policymakers will defy history. Members of Congress have worked furiously in recent months to understand and imagine ways to regulate AI, holding hearings and meeting privately with industry leaders and experts. Last month, President Joe Biden announced voluntary safeguards agreed to by seven leading AI companies.


But AI also presents challenges that could make it even harder — and slower — to regulate than past technologies.


The hurdles


To regulate a new technology, Washington first has to try to understand it. “We need to get up to speed very quickly,” Sen. Martin Heinrich, D-N.M., who is part of a bipartisan working group on AI, said in a statement.


That typically happens faster when new technologies resemble older ones. Congress created the Federal Communications Commission in 1934, when television was still a nascent industry, and the FCC regulated it based on earlier rules for radio and telephones.


But AI, some advocates for regulation argue, combines the potential for privacy invasion, misinformation, hiring discrimination, labor disruptions, copyright infringement, electoral manipulation and weaponization by unfriendly governments in ways that have little precedent. That’s on top of some AI experts’ fears that a superintelligent machine might one day end humanity.


While many want fast action, it’s hard to regulate technology that’s evolving as quickly as AI. “I have no idea where we’ll be in two years,” said Dewey Murdick, who leads Georgetown University’s Center for Security and Emerging Technology.


Regulation also means minimizing potential risks while harnessing potential benefits, which for AI can range from drafting emails to advancing medicine. That’s a tricky balance to strike with a new technology. “Often, the benefits are just unanticipated,” said Susan Dudley, who directs George Washington University’s Regulatory Studies Center. “And, of course, risks also can be unanticipated.”


Overregulation can quash innovation, Dudley added, driving industries overseas. It can also become a means for larger companies with the resources to lobby Congress to squeeze out less-established competitors.


Historically, regulation often happens gradually as a technology improves or an industry grows, as with cars and television. Sometimes it happens only after tragedy. When Congress passed, in 1906, the law that led to the creation of the Food and Drug Administration, it didn’t require safety studies before companies marketed new drugs. In 1937, an untested and poisonous liquid version of sulfanilamide, meant to treat bacterial infections, killed more than 100 people across 15 states. Congress strengthened the FDA’s regulatory powers the following year.


“Generally speaking, Congress is a more reactive institution,” said Jonathan Lewallen, a University of Tampa political scientist. The counterexamples tend to involve technologies that the government effectively built itself, like nuclear power development, which Congress regulated in 1946, one year after the first atomic bombs were detonated.


“Before we seek to regulate, we have to understand why we are regulating,” said Rep. Jay Obernolte, R-Calif., who has a master’s degree in AI. “Only when you understand that purpose can you craft a regulatory framework that achieves that purpose.”


Brain drain


Even so, lawmakers say they’re making strides. “I actually have been very impressed with my colleagues’ efforts to educate themselves,” Obernolte said. “Things are moving, by congressional standards, extremely quickly.”


Regulation advocates broadly agree. “Congress is taking the issue really seriously,” said Camille Carlton of the Center for Humane Technology, a nonprofit that regularly meets with lawmakers.


If federal regulation of AI did emerge, what might it look like?


Some experts say a range of federal agencies already have regulatory powers that cover aspects of AI. The Federal Trade Commission could use its existing antitrust powers to prevent larger AI companies from dominating smaller ones. The FDA has already authorized hundreds of AI-enabled medical devices. And piecemeal, AI-specific regulations could trickle out from such agencies within a year or two, experts said.


Still, drawing up rules agency by agency has downsides. Mittelsteadt called it “the too-many-cooks-in-the-kitchen problem, where every regulator is trying to regulate the same thing.” Similarly, state and local governments sometimes regulate technologies before the federal government, such as with cars and digital privacy. The result can be contradictions for companies and headaches for courts.


But some aspects of AI may not fall under any existing federal agency’s jurisdiction — so some advocates want Congress to create a new one. One possibility is an FDA-like agency: Outside experts would test AI models under development, and companies would need federal approval before releasing them. Call it a “Department of Information,” Murdick said.


But creating a new agency would take time — perhaps a decade or more, experts guessed. And there’s no guarantee it would work. Miserly funding could render it toothless. AI companies could claim its powers were unconstitutionally overbroad, or consumer advocates could deem them insufficient. The result could be a prolonged court fight or even a push to deregulate the industry.


Rather than a one-agency-fits-all approach, Obernolte envisions rules that accrete as Congress enacts successive laws in coming years. “It would be naive to believe that Congress is going to be able to pass one bill — the AI Act, or whatever you want to call it — and have the problem be completely solved,” he said.


Heinrich said in his statement, “This will need to be a continuous process as these technologies evolve.” Last month, the House and Senate separately passed several provisions about how the Defense Department should approach AI technology. But it is not yet clear which provisions will become law, and none would regulate the industry itself.

