Robotics and artificial intelligence seem poised to raise us up with intoxicating possibilities, only to drop us suddenly into unfamiliar—and potentially dangerous—territory. From driverless cars and drones to robots in our workplace, hospitals and homes, advances in technology are bringing the future forward at quite a clip.
Few doubt that robotics and AI will create significant benefits, but most experts also believe that these technologies will bring pitfalls around which we’ll need to tread carefully.
Speaking at an MIT aeronautics symposium, SpaceX CEO/CTO and Tesla Motors chief product architect Elon Musk cautioned: “I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it’s probably that.”
Ryan Calo, assistant professor at the University of Washington School of Law and an expert in law and emerging technology, doesn’t think AI will come close to human intelligence in the foreseeable future. However, he says, even the technology we have today is creating difficult challenges for government regulators.
One example is private drone use. The FAA issued an “interim policy” in 2005, but there are still no official rules. In 2012, a frustrated Congress passed a law requiring the FAA to devise a “comprehensive plan to safely accelerate the integration of civil unmanned aircraft systems into the national airspace” by September 2015.
The interim policy states that private individuals may fly drones, but only for fun, not for profit. Showing they mean business, the FAA issued subpoenas in July to several New York City realtors who had used drones to shoot aerial photos of their listings.
The FAA says it’s concerned about safety, but the current policy may be holding back vital business initiatives.
“Realtors, farmers and others using drones for business will think carefully about what exposes them to unnecessary business risks or lawsuits,” says Gregory S. McNeal, an associate professor of law at Pepperdine University. “In short, commercial users will be at least as careful as a hobbyist, but the FAA is keeping them grounded. This is completely backwards and it seems that it isn’t about safety, it’s about bureaucrats flexing their muscles as they struggle to deal with new technologies.”
Calo agrees that currently the government lacks the expertise to integrate robotics and AI into society safely and efficiently without hampering innovation. Yet that’s the balancing act that needs to be achieved if these technologies are to reach their full potential and the United States is to stay competitive with other nations.
So, what makes robotics so tricky to integrate into our existing laws, policies and institutions?
In his Brookings Institution report, The Case for a Federal Robotics Commission, Calo lists three attributes that create many of the challenges: Robots accomplish tasks in ways that cannot always be anticipated in advance; increasingly they are blurring the line between person and instrument; and they combine “promiscuity of data with physical embodiment—robots are software that can touch you.”
The latter is a minefield.
“You can’t sue Facebook because someone defames you,” says Calo. “And you can’t sue Microsoft if Word eats your manuscript. Courts have made these decisions. But it will be different when bones, not bits, are on the line.”
Increasingly, robots display behavior that is useful, but cannot be anticipated by operators. This creates a challenge in terms of assigning responsibility for unpredictable outcomes.
“Someone could put a system into play without being able to foresee the result,” says Calo. “Especially when one system interacts with another, like two high-speed trading algorithms. Criminal and tort law look for intent and foreseeability. Those could be missing, but you still might have harm.”
“When you have risk of autonomous and semi-autonomous systems being able to make decisions that affect not only users but others around them, you have robotic products that have potentially lethal implications,” says Evan Selinger, associate professor of philosophy at Rochester Institute of Technology and a fellow at the Institute for Ethics and Emerging Technology.
As Calo envisions it, a Federal Robotics Commission would provide expertise to other government entities as they look to protect safety and privacy, sort through legal and ethical issues, and invest research dollars wisely.
Having a single agency involved in all robotics issues would make it possible to examine and treat distinct but related challenges together. Current regulatory activity is hopelessly piecemeal, Calo says.
“Agencies, states, courts and others are not in conversation with one another. Even the same government entities fail to draw links across similar technologies; drones come up little in discussions of driverless cars despite presenting similar issues of safety, privacy and psychological unease.”
Selinger agrees the challenges are interdisciplinary. “You need lots of people involved to put things in the big picture,” he says. “The core problems of robotics are more complex than meets the eye and more complex than any particular discipline can lay claim to.”
With highly regarded scientists—nationally and internationally—now calling for cohesive and careful oversight of robotics and AI, the time seems right to begin considering the options.
“The less we do anticipatory thinking about this—ethics on the offense, so to speak—the harder it will be to minimize risks. And it’s harder to peel back and take away permissions that have been granted,” Selinger concludes.