The San Juan Daily Star

In show of force, Silicon Valley titans pledge ‘getting this right’ with AI


The bipartisan AI Insight Forum, organized by Senate Majority Leader Chuck Schumer (D-N.Y.) along with labor union leaders and civil society groups, at the Capitol in Washington on Wednesday, Sept. 13, 2023.

By Cecilia Kang


Elon Musk warned of civilizational risks posed by artificial intelligence. Sundar Pichai of Google highlighted the technology’s potential to solve health and energy problems. And Mark Zuckerberg of Meta stressed the importance of open and transparent AI systems.


The tech titans held forth earlier this week in a three-hour meeting with lawmakers in Washington about AI and future regulations. The gathering, known as the AI Insight Forum, was part of a crash course for Congress on the technology and organized by Sen. Chuck Schumer, D-N.Y., the majority leader.


The meeting — also attended by Bill Gates, a founder of Microsoft; Sam Altman of OpenAI; Satya Nadella of Microsoft; and Jensen Huang of Nvidia — was a rare congregation of more than a dozen top tech executives in the same room. It amounted to one of the industry’s most proactive shows of force in the nation’s capital as companies race to be at the forefront of AI and to be seen to influence its direction.


“We all share the same incentives of getting this right,” Altman said after the meeting, which was held in the Senate building’s Kennedy Caucus Room.


Pichai called the event productive, and he stressed the need for the government to balance the “innovation side and building the right safeguards.”


The gathering punctuated a year of rapid developments in AI. Ever since ChatGPT, an AI-powered chatbot, exploded in popularity last year, lawmakers and regulators have grappled with how the technology might alter jobs, spread disinformation and potentially develop its own kind of intelligence.


While Europe has been in the throes of drafting laws to regulate AI, the United States has lagged. But the frenzy has prompted the White House, Congress and regulatory agencies to start responding in recent months with AI safeguards and other measures.


The White House is expected to release an executive order on AI this year and has held multiple meetings with tech executives. This week, it announced that a total of 15 companies had agreed to voluntary safety and security standards for their AI tools, including third-party security testing.


On Tuesday, a Senate Judiciary subcommittee held a hearing on AI legislation with Microsoft’s president and Nvidia’s chief scientist. And last week, Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., announced a framework for AI legislation that calls for an independent office to oversee AI, as well as licensing requirements and safety standards for the technology.


“This is the most difficult issue that Congress is facing because AI is so complex and technical,” Schumer said in an interview.


On Wednesday, Schumer invited 22 guests, who appeared before dozens of lawmakers in the Kennedy Caucus Room, where hearings on the sinking of the Titanic, the bombing of Pearl Harbor and the Watergate scandal unfolded.


At a crescent-shaped table extending nearly the length of the room, Schumer was flanked by Sens. Mike Rounds of South Dakota and Todd Young of Indiana, both Republicans. Huang was seated next to Nadella. Alex Karp, the CEO of Palantir, was next to Musk, who gave the media a thumbs-up and held his hands up in a heart sign.


In the closed-door session, the tech chiefs delivered opening statements and joined a discussion moderated by Schumer. He has acknowledged a tech-knowledge deficit within Congress and said he would lean on Silicon Valley leaders, academics and public interest groups to teach members about the technology.


Most of the executives agreed on the need to regulate AI, which has come under scrutiny for its transformative and risky effects.


But there was still disagreement, attendees said. Zuckerberg highlighted open-source research and development of AI, which means that the source code of the underlying AI systems is available to the public.


“Open source democratizes access to these tools, and that helps level the playing field and foster innovation for people and businesses,” he said.


Others, like Jack Clark of AI startup Anthropic and Gates, raised concerns that open-source AI could lead to security risks, attendees said. Anthropic, Google and OpenAI have said open source can allow outsiders to get past safety guardrails and spread misinformation and other toxic material.


Musk, who has called for a moratorium on the development of some AI systems even as he has pushed forward with his own AI initiatives, was among the most vocal about the risks. He painted a picture of an existential crisis posed by the technology.


“If someone takes us out as a civilization, all bets are off,” he said, according to a person who was in the room. Musk said he had told Chinese authorities, “If you have exceptionally smart AI, the Communist Party will no longer be in charge of China.”


Deborah Raji, a researcher at the University of California, Berkeley, responded to Musk by questioning the safety of self-driving cars, which are powered by AI, according to a person who was in the room. She specifically noted the Autopilot technology of Tesla, the electric carmaker, which Musk leads and which has been under scrutiny after the deaths of some drivers.


Musk didn’t respond, according to a person who was in the room.


Schumer said that future meetings were likely to be public and noted that he had asked several critics of the tech companies from labor unions and civil society groups to attend. The first meeting was closed to encourage debate that was “unvarnished” and so no one would “play to the press,” he said.
