UMBC’s Professor Marie desJardins was quoted recently in a TechRepublic article on the possible risks of adding more autonomy to weapons used by police and the military. The article focused on the novel use of a remotely controlled bomb-disposal robot by Dallas police to kill the suspect in the shooting of police officers. Although the robot was manually controlled by police officers, its use raised concerns about future devices expected to have the capacity for independent decision making and action.

Marie desJardins, AI professor at the University of Maryland, Baltimore County, agrees with Yampolskiy. “The real challenge will come when we start to put more autonomy into drones and assault robots. I don’t think that we should be building weapons that can, of their own accord, decide who to kill, and how to kill,” said desJardins.

“I think those decisions always need to be made by people—not just by individual people, but by processes in military organizations that have safeguards and accountability measures built into the process,” she said.

These issues were also addressed in a recent series of workshops sponsored by the White House Office of Science and Technology Policy to learn more about the benefits and risks of artificial intelligence.