
VOA Standard English: How Safe Is Artificial Intelligence?

2015-08-13 | Source: VOA

The notion of machines rebelling against humans has long been a theme of science fiction books and films.  But as advances in computer processors and memory chips rapidly usher us into the world of artificial intelligence, some scientists and technological entrepreneurs are urging us to get ready for machines becoming more independent.
 
For instance, the artificial brains called autopilots that have been flying airplanes for decades require the presence but not the focused attention of the pilot.  But smart cars are going to be the first machines making independent decisions in close interaction with humans.
 
Many feel uneasy about it.  But Jerry Kaplan, a computer scientist and author of the book "Humans Need Not Apply," says smart cars will be much more cautious than humans and their style of driving will save many lives.  He spoke to VOA via Skype.
 
“I think when you take a broader picture, you have to recognize that we may be able to reduce the amount of carnage on our highways by 80, 90 percent” by using smart cars, he said.
 
Advanced technologies have always been of great interest to the military, which is developing drones capable of flying without remote control. Some say the possibility of such weapons systems making decisions about life and death is much harder to accept than a smart car.
 
“Who will be responsible for one of these weapons systems killing an innocent person?" asked Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Sydney. "It’s very unclear; we don’t have the framework in place to understand. Is it the person who built the autonomous robot? Is it the person who built the program? Is it the person who turned it on?”
 
While some are calling for a worldwide ban on smart weapons, others point out that the components for building them already exist — creating the danger that a rogue nation or terrorist group could gain the upper hand.  There are other reasons, too, for moving forward.
 
“There are very strong moral arguments that we need to develop these technologies because they reduce things like the so-called collateral damage,” Kaplan said.
 
As the technology races ahead, artificial intelligence may bring both good and bad things. And scientists warn that we have to do more to adjust to changing times.
 
Kaplan said there is an interim period "in which people are put out of work and people need to learn new skills, and unfortunately, because this is an accelerated pace of this kind of automation, those problems in the short run are going to get significantly worse.”
 
While thinking about the changes artificial intelligence will create in our everyday lives, it may be worthwhile to remind ourselves of the First Law of Robotics, which famed science fiction writer Isaac Asimov suggested in 1942: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”