San Francisco considers allowing use of deadly robots by police
By Michael Levenson
San Francisco police could use robots to deploy lethal force under a policy advanced by city supervisors this week that thrust the city into the forefront of a national debate about the use of weaponized robots in U.S. cities.
The possibility is not merely hypothetical. In 2016, the Dallas Police Department ended a standoff with a gunman suspected of killing five officers by blowing him up with a bomb attached to a robot in what was believed to be the first lethal use of the technology by a U.S. law enforcement agency.
Supporters of the policy, advanced by the San Francisco Board of Supervisors by an 8-3 vote, said it would allow police to deploy a robot with deadly force in extraordinary circumstances, such as when a mass shooter or a terrorist is threatening the lives of officers or civilians.
David Lazar, assistant chief of the San Francisco Police Department, cited as an example the gunman who opened fire from his Las Vegas high-rise hotel room in 2017, killing 60 people in the deadliest mass shooting in modern U.S. history.
“He’s shooting, people are pinned down, the police are pinned down,” Lazar told the board during a contentious debate about the policy. “We would then think to ourselves, ‘OK, this is a possible option.’ ”
To become law, the policy must be approved again by the board — which is slated to consider it Dec. 6 — and be signed by Mayor London Breed, a Democrat who has expressed support for the proposal.
“If the police are called to serve in a situation where someone intends to do harm or is already doing harm to innocent people, and there is technology that can help to end the violence and save lives, we need to allow police to use these tools to save lives,” Breed’s office said in a statement.
Paul Scharre, the vice president and director of studies at the Center for a New American Security and the author of “Army of None: Autonomous Weapons and the Future of War,” said he was not aware of another U.S. city that had approved such a policy.
He said that using robots to deliver deadly force was “the exact opposite of what we should be using robots for.”
The advantage of robots, Scharre said, is that they can create distance between police and a potential threat, giving officers more time to make decisions without putting themselves in harm’s way.
“Precisely because a police officer is no longer at risk, you don’t have to use lethal force,” he said. “You can use nonlethal options such as tear gas or flash bangs to incapacitate someone.”
He said the concern was that other cities would follow San Francisco’s example, eventually leading to the broader use of deadly robots by U.S. law enforcement.
“It becomes normalized,” Scharre said. “It becomes a tool that police departments turn to in situations where they really don’t have to.”
Aaron Peskin, a member of the San Francisco Board of Supervisors, said the policy was developed in response to a state law enacted last year that required law enforcement agencies across California to seek the approval of their local governing bodies to use military-style equipment.
Responding to the law, the San Francisco Police Department proposed a policy governing a range of equipment in its possession, including a long-range acoustic device, sometimes referred to as a sound cannon, a BearCat armored vehicle and a 40 mm launcher that can deploy chemical agents.
Also on the list: 17 robots, five of which are out of commission, according to the Police Department. The robots were acquired between 2010 and 2017, the police said, and include heavy-duty models that can climb stairs, robots with tank treads that can defuse bombs and a small robot that can deliver an instantaneous video and audio feed.
The Police Department said none of its robots were “outfitted with lethal force options and the department has no plans to outfit robots with any type of firearm.”
But “robots could potentially be equipped with explosive charges to breach fortified structures containing violent, armed or dangerous subjects or used to contact, incapacitate or disorient violent, armed or dangerous suspects who pose a risk of loss of life to law enforcement or other first responders,” the department said in a statement.
The department added, “Robots equipped in this manner would only be used in extreme circumstances to save or prevent further loss of innocent lives.”
The policy advanced by the board states that robots “will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or de-escalation tactics options, or conclude that they will not be able to subdue the threat after evaluating alternative force options and de-escalation tactics.”
Only the police chief, assistant chief of operations or deputy chief of special operations would be able to authorize the use of deadly force by robots, the policy says.
Opponents said the policy was dangerous and could lead to more police violence.
Robots create a “false distance that makes killing the individual easier,” said Hillary Ronen, a city supervisor who voted against the policy. “We don’t want it to be easy. We don’t want to create that distance and that removal from the emotional impact of killing, of taking an individual’s life.”
Elizabeth E. Joh, a professor at the University of California, Davis School of Law, who focuses on policing and technology, said the policy would erode public trust in law enforcement, more than two years after the murder of George Floyd led to global demonstrations against police brutality and systemic racism.
“I suppose all of this can be summarized as whether we want to live in a world in which police can kill people remotely with robots,” Joh said. “I’m not sure we do.”