Tuesday, February 3, 2015

Bill Gates thinks we should all worry about the threats from super-intelligent AI

Although Bill Gates expressed 'concern' about super-intelligent AI, he praised Microsoft's Personal Agent, 'which will remember everything' you do.



During his latest Reddit “Ask Me Anything” session, Bill Gates was asked, “How much of an existential threat do you think machine superintelligence will be?” Gates said:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

However, Gates also admitted that had he not started Microsoft, he “would probably be a researcher on AI. When I started Microsoft I was worried I would miss the chance to do basic work in that field.”

Nevertheless, Gates has aligned his view of the threat of AI superintelligence with those of Stephen Hawking and Elon Musk. Hawking said, “The development of full artificial intelligence could spell the end of the human race.” Musk has voiced many opinions about AI, including that “with artificial intelligence we are summoning the demon” and that AI could potentially be “more dangerous than nukes.”

Ironically, however, Gates then jumped into praising Microsoft’s Personal Agent. It’s unclear whether Gates was referring to Cortana, Microsoft’s digital assistant, though it seems likely he would have called Cortana by name if he were. Cortana, by the way, which uses the Bing Predicts algorithm, predicted the New England Patriots would win the Super Bowl.

Back to Gates, who, asked what technology will be like in the next 30 years, pointed to Microsoft’s Personal Agent:

There will be more progress in the next 30 years than ever. Even in the next 10 problems like vision and speech understanding and translation will be very good. Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.

One project I am working on with Microsoft is the Personal Agent which will remember everything and help you go back and find things and help you pick what things to pay attention to. The idea that you have to find applications and pick them and they each are trying to tell you what is new is just not the efficient model - the agent will help solve this. It will work across all your devices.

As the BBC pointed out, Gates contradicted Microsoft Research scientist and managing director Eric Horvitz, who stated that he does not believe humans will “lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen.” Horvitz clearly isn’t worried, considering that “over a quarter of all attention and resources” at his research unit are focused on AI-related activities.

Horvitz previously served as president of the Association for the Advancement of Artificial Intelligence (AAAI) and is now co-chair of the AAAI presidential panel on long-term AI futures. It’s worth noting, however, that Horvitz is among a long list of experts like physicist Stephen Hawking and Tesla’s Elon Musk who signed an open letter calling for AI safety measures to be established. These great minds believe that “artificial intelligence has the potential to bring unprecedented benefits to humanity,” but call for AI research safety measures that align with human interests.

The open letter points to a paper outlining AI research priorities (pdf), which, under “control,” references a Stanford study.

Stanford’s One Hundred Year Study on Artificial Intelligence includes “Loss of Control of AI systems” as an area of study, specifically highlighting concerns over the possibility that “we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an ‘intelligence explosion’?”

Then, on the other side of the fence, there are AI researchers who are openly annoyed by talk of fearing super-intelligent AI as if it were Skynet. Baidu chief scientist Andrew Ng, for example, called such discussions “a distraction from the conversation about…serious issues.” Rather than dwelling on AI’s potential long-term dangers, Ng believes, as Wired reported, that we should be more worried “about robot truck drivers than the Terminator.”

Some people claim robots won’t be taking jobs from Americans, but look how quickly companies outsourced once doing so increased their profit margins. Last year Bill Gates talked about “software substitution,” basically meaning that workers with only high school educations would see their jobs taken over by software automation. Gates added, “The quality of automation, software artificial intelligence, is improving fast enough that you can start to worry about middle class jobs.”

Whether or not you believe robots will take regular Joe and Jane’s jobs, or that AI will bring about Skynet, AI advancements march on. For example, DARPA-funded research has made it possible for autonomous robots to learn and perform complex actions via observation.

Put more simply…robots can now learn how to cook by watching YouTube videos. That might be all kinds of cool, but what’s to stop the next “smarter” version of such a robot from watching Terminator to learn how robots can take over?
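As a loose illustration of the learning-by-observation idea (this is not the actual DARPA-funded system, and every name below is a hypothetical stand-in), a toy pipeline might map actions recognized in a demonstration video onto motor primitives the robot already knows how to execute:

# Toy sketch of "learning by observation": translate actions recognized
# in a demonstration video into a plan of primitives the robot can run.
# Purely illustrative; the real research involves vision models and
# action grammars far beyond this.

# Hypothetical output of a video-recognition step: (action, object) pairs.
observed_steps = [
    ("grasp", "knife"),
    ("cut", "tomato"),
    ("pour", "bowl"),
    ("juggle", "eggs"),  # not something this robot knows how to do
]

# The robot's known motor primitives; unknown actions are dropped.
KNOWN_PRIMITIVES = {"grasp", "cut", "pour", "stir"}

def build_plan(steps):
    """Keep only the observed actions the robot can actually execute."""
    return [(action, obj) for action, obj in steps if action in KNOWN_PRIMITIVES]

for action, obj in build_plan(observed_steps):
    print(f"execute: {action}({obj})")

The filter is the point of the sketch: a real system constrains what it will imitate, which is precisely the control question the researchers above are debating.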
