The San Juan Daily Star

Five ways AI could be regulated



The bipartisan A.I. Insight Forum, organized by Senate Majority Leader Chuck Schumer (D-N.Y.) and attended by labor union leaders and civil society groups, at the Capitol in Washington on Wednesday, Sept. 13, 2023.

By Cecilia Kang and Adam Satariano


Although their attempts to keep up with developments in artificial intelligence have mostly fallen short, regulators around the world are taking vastly different approaches to policing the technology. The result is a highly fragmented and confusing global regulatory landscape for a borderless technology that promises to transform job markets but could also contribute to the spread of disinformation, or even present a risk to humanity.


The major frameworks for regulating AI include:


Europe’s Risk-Based Law: The European Union’s AI Act, which is being negotiated Wednesday, assigns regulations proportionate to the level of risk posed by an AI tool. The idea is to create a sliding scale of regulations aimed at putting the heaviest restrictions on the riskiest AI systems. The law would categorize AI tools based on four designations: unacceptable, high, limited and minimal risk.


Unacceptable risks include AI systems that perform social scoring of individuals or real-time facial recognition in public places. They would be banned. Other tools carrying less risk, such as software that generates manipulated videos and deepfake images, must disclose that people are seeing AI-generated content. Violators could be fined 6% of their global sales. Minimal-risk systems include spam filters and AI-generated video games.


U.S. Voluntary Codes of Conduct: The Biden administration has given companies leeway to voluntarily police themselves for safety and security risks. In July, the White House announced that several AI makers, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, had agreed to self-regulate their systems.


The voluntary commitments included third-party security testing of tools, known as red-teaming; research on bias and privacy concerns; information sharing about risks with governments and other organizations; the development of tools to fight societal challenges such as climate change; and transparency measures to identify AI-generated material. The companies were already meeting many of those commitments.


U.S. Tech-Based Law: Any substantive regulation of AI will have to come from Congress. Senate Majority Leader Chuck Schumer (D-N.Y.) has promised a comprehensive AI bill, possibly by next year.


But so far, lawmakers have introduced bills focused on the production and deployment of AI systems. The proposals include the creation of an agency, akin to the Food and Drug Administration, that could write regulations for AI providers, approve licenses for new systems and establish standards. Sam Altman, the CEO of OpenAI, has supported the idea. Google, however, has proposed that the National Institute of Standards and Technology, founded more than a century ago and lacking regulatory powers, serve as the hub of government oversight.


Other bills are focused on copyright violations by AI systems that gobble up intellectual property to train their models. Proposals on election security and limiting the use of deepfakes have also been put forward.


China Moves Fast on Regulating Speech: Since 2021, China has moved swiftly to roll out regulations on recommendation algorithms, synthetic content such as deepfakes, and generative AI. The rules ban price discrimination by recommendation algorithms on social media, for instance. AI makers must label synthetic AI-generated content. And draft rules for generative AI systems, such as OpenAI’s chatbot, would require both the training data and the content the technology creates to be “true and accurate,” which many view as an attempt to censor what the systems say.


Global Cooperation: Many experts have said that effective AI regulation will require global collaboration. So far, such diplomatic efforts have produced few concrete results. One idea that has been floated is the creation of an international agency, akin to the International Atomic Energy Agency, which was created to limit the spread of nuclear weapons. A challenge will be overcoming the geopolitical distrust, economic competition and nationalistic impulses that have become so intertwined with the development of AI.

