By Steven Lee Myers
Two years ago, at a virtual gathering organized by the Nobel Foundation, Sheldon Himelfarb outlined the idea that the world’s leading scholars should join forces to study misinformation the way that scientists of the Intergovernmental Panel on Climate Change documented the global effect of carbon emissions.
That new group gathered for its official introduction in Washington on Wednesday, uniting more than 200 researchers from 55 countries with a sense of urgency and alarm similar to that surrounding the threat of global warming. In the group’s first report, the researchers questioned the effectiveness of fighting falsehoods online with content moderation, one of the most common strategies for combating misinformation, saying other tactics had more scientific evidence behind them.
“You have to approach the information environment in the same way scientists approached the environment,” said Himelfarb, the group’s executive director and the chief executive of PeaceTech Lab, an advocacy organization affiliated with the United States Institute of Peace in Washington.
The group, the International Panel on the Information Environment, has registered as a nongovernmental organization in Zurich at a time when the fight against misinformation has become increasingly mired in a broader erosion of trust in government, news organizations and other public institutions.
“Algorithmic bias, manipulation and misinformation has become a global and existential threat that exacerbates existing social problems, degrades public life, cripples humanitarian initiatives and prevents progress on other serious threats,” the panel wrote in its inaugural announcement.
The panel was introduced during a three-day meeting, organized by the Nobel Foundation and the National Academy of Sciences, devoted to the erosion of public understanding and trust in science.
Speaker after speaker at the meeting described an onslaught of disinformation that has become a dispiriting fact of public life across the globe and that, with the recent explosion of artificial intelligence, could soon become even worse.
Maria Ressa of the Philippines, a winner of the Nobel Peace Prize in 2021, issued a manifesto demanding that democratic governments and Big Tech companies become more transparent, do more to protect personal data and privacy and end practices that contribute to disinformation and other threats against independent journalism. It has 276 signatories representing more than 140 organizations.
One challenge facing these efforts is overcoming the increasingly fierce arguments over what exactly constitutes misinformation. In the United States, efforts to combat it have run aground on First Amendment protections of free speech. The biggest companies have now shifted focus and resources away from the fight against misinformation, even as new platforms have emerged promising to forgo policies that moderate content.
On Wednesday, the panel’s researchers presented a summary of its first two studies, which reviewed 4,798 peer-reviewed publications examining misleading information on social media and aggregated the findings on the effectiveness of countermeasures to it.
The findings suggest that the most effective responses to false information online are labeling content as “disputed” or flagging sources of state media and publishing corrective information, typically in the form of debunking rumors and disinformation.
Far less certain, the report argues, is the effectiveness of public and government efforts to pressure social media giants like Facebook and Twitter to take down content, as well as internal company algorithms that suspend or play down offending accounts. The same is true of media literacy programs that train people to identify sources of misinformation.
“We’re not saying that information literacy programs don’t work,” said Sebastián Valenzuela, a professor at the Pontifical Catholic University of Chile who oversaw the study. “What we’re saying is that we need more evidence that they work.”
The panel’s inspirational model, the Intergovernmental Panel on Climate Change, was founded in 1988, a time when climate change was equally contested. Its scientists, working under the auspices of the United Nations, toiled for decades before its assessments and recommendations came to be recognized as scientific consensus.
When it comes to the digital landscape, and the societal impact of its abuses, the science of disinformation could prove even harder to measure in concrete terms. Climate change is “hard science,” said Young Mie Kim, a professor at the University of Wisconsin-Madison who serves as vice chair of a committee focused on research methodology.
“So, relatively speaking, it’s easier to develop some common concepts and tool kits,” Kim said. “It’s hard to do that in social science or humanities.”
The new panel eschews a governmental role — at least for now. It plans to issue regular reports, not fact-checking individual falsehoods but rather looking for deeper forces behind the spread of disinformation as a way to guide government policy.
“It’d be too hard to put a bunch of scientists on evaluating the truth claims in any particular piece of junk,” said Philip N. Howard, director of Oxford University’s Program on Democracy and Technology and chairman of the new panel.
“What we can do is look for infrastructural interference,” he went on. “What we can do is audit an algorithmic system to see if it’s got lousy or unintended outcomes. It’s still hard, but I think that’s within reach as a research objective.”