The San Juan Daily Star

With executive order, White House tries to balance AI’s potential and peril

The Biden administration has been under pressure to do something about AI since late last year.

By Kevin Roose

How do you regulate something that has the potential to both help and harm people, that touches every sector of the economy and that is changing so quickly even the experts can’t keep up?

That has been the main challenge for governments when it comes to artificial intelligence.

Regulate AI too slowly and you might miss out on the chance to prevent potential hazards and dangerous misuses of the technology.

React too quickly and you risk writing bad or harmful rules, stifling innovation or ending up in a position like the European Union’s. It first released its AI Act in 2021, just before a wave of new generative AI tools arrived, rendering much of the act obsolete. (The proposal, which has not yet been made law, was subsequently rewritten to shoehorn in some of the new tech, but it’s still a bit awkward.)

On Monday, the White House announced its own attempt to govern the fast-moving world of AI with a sweeping executive order that imposes new rules on companies and directs a host of federal agencies to begin putting guardrails around the technology.

The Biden administration, like other governments, has been under pressure to do something about the technology since late last year, when ChatGPT and other generative AI apps burst into public consciousness. AI companies have been sending executives to testify in front of Congress and briefing lawmakers on the technology’s promise and pitfalls, while activist groups have urged the federal government to crack down on AI’s dangerous uses, such as making new cyberweapons and creating misleading deepfakes.

In addition, a cultural battle has broken out in Silicon Valley, as some researchers and experts urge the AI industry to slow down, and others push for its full-throttle acceleration.

President Joe Biden’s executive order tries to chart a middle path, allowing AI development to continue largely undisturbed while putting some modest rules in place and signaling that the federal government intends to keep a close eye on the AI industry in the coming years. The contrast with social media, a technology that was allowed to grow unimpeded for more than a decade before regulators showed any interest in it, makes clear that the Biden administration has no intention of letting AI fly under the radar.

The full executive order, which is more than 100 pages, appears to have a little something in it for almost everyone.

The most worried AI safety advocates — like those who signed an open letter this year claiming that AI poses a “risk of extinction” akin to pandemics and nuclear weapons — will be happy that the order imposes new requirements on the companies that build powerful AI systems.

In particular, companies that make the largest AI systems will be required to notify the government and share the results of their safety testing before releasing their models to the public.

These reporting requirements will apply to models above a certain threshold of computing power — more than 100 septillion integer or floating-point operations, if you’re curious — that will most likely include next-generation models developed by OpenAI, Google and other large companies developing AI technology.

These requirements will be enforced through the Defense Production Act, a 1950 law that gives the president broad authority to compel U.S. companies to support efforts deemed important for national security. That could give the rules teeth that the administration’s earlier, voluntary AI commitments lacked.

In addition, the order will require cloud providers that rent computers to AI developers — a list that includes Microsoft, Google and Amazon — to tell the government about their foreign customers. And it instructs the National Institute of Standards and Technology to come up with standardized tests to measure the performance and safety of AI models.

The executive order also contains some provisions that will please the AI ethics crowd — a group of activists and researchers who worry about near-term harms from AI, such as bias and discrimination, and who think that long-term fears of AI extinction are overblown.

In particular, the order directs federal agencies to take steps to prevent AI algorithms from being used to exacerbate discrimination in housing, federal benefits programs and the criminal justice system. And it directs the Commerce Department to come up with guidance for watermarking AI-generated content, which could help crack down on the spread of AI-generated misinformation.

And what do AI companies, the targets of these rules, think of them? Several executives I spoke to Monday seemed relieved that the White House’s order stopped short of requiring them to register for a license in order to train large AI models, a proposed move that some in the industry had criticized as draconian. It will also not require them to pull any of their current products off the market, or force them to disclose the kinds of information they have been seeking to keep private, such as the size of their models and the methods used to train them.

It also doesn’t try to curb the use of copyrighted data in training AI models — a common practice that has come under attack from artists and other creative workers in recent months and is being litigated in the courts.

And tech companies will benefit from the order’s attempts to loosen immigration restrictions and streamline the visa process for workers with specialized expertise in AI as part of a national “AI talent surge.”

Not everyone will be thrilled, of course. Hard-line safety activists may wish that the White House had placed stricter limits around the use of large AI models, or that it had blocked the development of open-source models, whose code can be freely downloaded and used by anyone. And some gung-ho AI boosters may be upset that the government is doing anything at all to limit the development of a technology they consider mostly good.

But the executive order seems to strike a careful balance between pragmatism and caution, and in the absence of congressional action to pass comprehensive AI regulations into law, it seems like the clearest guardrails we’re likely to get for the foreseeable future.

There will be other attempts to regulate AI — most notably in the European Union, where the AI Act could become law as soon as next year, and in Britain, where a summit of global leaders this week is expected to produce new efforts to rein in AI development.

The White House’s executive order is a signal that it intends to move fast. The question, as always, is whether AI itself will move faster.
