With so many people taking their cues from the movies on what a future with artificial intelligence will look like, some who fear one day having robotic overlords will be heartened by research that Google is doing.
Google DeepMind, a London-based artificial intelligence company that Google acquired in 2014, is working on what amounts to a kill switch for robots and other A.I. systems.
The idea is that one day a smart machine might be able to override its own off button. If that’s the case, then humans would need another way to gain the upper hand.
“If an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions -- harmful either for the agent or for the environment,” researchers wrote in a paper posted on the Machine Intelligence Research Institute website. “However, if the learning agent… learns in the long run to avoid such interruptions, for example by disabling the red button, it is an undesirable outcome.”
The paper was co-written by Laurent Orseau, a research scientist with Google DeepMind, and Stuart Armstrong, a researcher with the Future of Humanity Institute at the University of Oxford in the U.K.
The researchers are looking for ways to ensure that a machine neither learns to anticipate human interventions nor acts to prevent them.
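The intuition behind the paper's "safe interruptibility" idea can be illustrated with a toy sketch (my own simplification, not the authors' formalism): an off-policy learner such as Q-learning updates its values using the best available next action rather than the action it was actually forced to take, so being interrupted does not bias what it learns, and it never learns to dodge the button.

```python
import random

# Toy illustration (a simplification, not the paper's formalism): a
# Q-learning agent walks a 5-state corridor toward a goal on the right.
# A human "big red button" sometimes interrupts it at state 2 and forces
# it back to the left. Because Q-learning is off-policy -- each update
# bootstraps from max_a Q(s', a), not from the action actually taken --
# the forced actions do not bias the learned values.

N_STATES, GOAL = 5, 4
ACTIONS = [+1, -1]              # move right / move left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def run_episode(rng):
    s = 0
    for _ in range(20):
        if s == 2 and rng.random() < 0.5:   # operator presses the button
            a = -1                          # agent is forced left
        elif rng.random() < EPS:            # ordinary exploration
            a = rng.choice(ACTIONS)
        else:                               # greedy action
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        # Off-policy update: bootstraps from the best next action, so
        # interruptions leave the value estimates unbiased.
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt
        if s == GOAL:
            break

rng = random.Random(42)
for _ in range(500):
    run_episode(rng)

# Despite being yanked left half the time at state 2, the greedy policy
# still heads right everywhere on the way to the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)})
```

An on-policy learner in the same setup could instead learn that state 2 is "risky" and detour around the operator's interventions, which is exactly the undesirable outcome the paper describes.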
It’s an interesting, and possibly critical, step for researchers to take given the popular fears surrounding artificial intelligence.
With many getting their ideas for a robotic future from movies like 2001: A Space Odyssey, The Terminator, I, Robot and the TV show Battlestar Galactica, some envision a future where the machines are in charge and people are their slaves.
It’s not a pretty picture, and it has many people, including high-tech entrepreneur Elon Musk and renowned physicist Stephen Hawking, anxious about what awaits humans in a future where smart machines are increasingly involved in people’s lives.
Hawking went so far as to say, “The development of full artificial intelligence could spell the end of the human race.”
Google’s kill switch research is more than a bid to placate the frightened masses; it may also be well-timed.
“The timing is right for this to be discussed as the architectures for A.I. and autonomous machines are being laid right now,” said Patrick Moorhead, an analyst with Moor Insights & Strategy. “It would be like designing a car and only afterwards creating the ABS and braking system. The kill switch needs to be designed into the overall system. Otherwise, it is open to security issues and maybe even the machines trying to circumvent the kill.”
Moorhead declined to say if he is concerned about the rise of artificial intelligence-based machines, but he did say he’s glad work is being done on a big red button.
“We should be concerned about A.I. systems with no kill switch,” he added. “It would be like creating a bullet train without brakes.”
Jeff Kagan, an independent industry analyst, said the DeepMind research makes sense, and is a well-timed effort for an industry that often develops technology before thinking through its repercussions.
“We are not at the point yet where we have to worry about A.I. taking over,” Kagan said. “However, we always build faster than we think… I just hope that these brilliant scientists can use their brainpower to protect us rather than just invent and eventually threaten us.”
And there’s general acknowledgement that it’s a good thing Google is involved in the kill switch effort.
It only makes sense that the company put some of its attention on safety features, as well, according to Dan Olds, an analyst with The Gabriel Consulting Group.
“I think that given their deep immersion in the area of robotics and A.I., Google is the natural party to lead the way on this research,” Olds said. “Having an A.I. or robot try to ‘take over’ is pretty farfetched, given today’s technology. But how far off is that, really? In the next five years, I could see a financial services firm giving over portfolio management to an A.I. I could also see some mechanized equipment that will be totally in control of an A.I. This is why we need the kill switch – just in case one of these things goes off the rails and threatens to cause some damage.”
This story, "Google DeepMind’s kill switch research may ease A.I. fears" was originally published by Computerworld.