
News Analysis: After AlphaGo's achievement, do humans need to fear artificial intelligence?

Xinhua, March 20, 2016

Google's computer program AlphaGo stunned the world this week by defeating a top professional player at Go, a board game that has trillions of possible moves and often requires players' intuition.

"The AlphaGo success shows again that this is a golden era for the development of intelligent software that can be as intelligent or more intelligent than humans in narrow domains of intelligence," Tomaso Poggio, professor of Department of Brain and Cognitive Sciences at Massachusetts Institute of Technology (MIT), told Xinhua.

While many people around the world watched the achievement of artificial intelligence (AI) with excitement, some also expressed renewed wariness about the technology and wondered: Is it a boon or a bane for mankind?

BENEFITS OUTWEIGH RISKS

AI, a field that has existed for more than 50 years, has grown along with the development of cognitive science and neuroscience.

Poggio regards intelligence as the greatest subject in science, because it bears on the questions of how the human brain works and what the mind is.

"This quest to understand intelligence promises -- beyond its scientific aspects -- to provide powerful technology for helping solve all other problems of our society," Poggio said. "Therefore it can be very good for mankind."

But he also cautioned that AI, like every powerful technology, can be put to evil ends, and called on researchers to make sure that does not happen.

The most likely medium-term risk of AI is machines taking human jobs, said Poggio.

However, Poggio said, that problem can be solved by political means, since productivity will rise and society as a whole will be better off.

"The potential benefits of it so much overweight the risks that we cannot stop research on intelligence," he said.

Poggio's opinion is echoed by his colleague Professor Feng Guoping, who believes that, with proper human control, AI could make a great contribution to mankind, as happened with DNA research.

When the study of DNA began to develop, Feng said, many people feared it would bring disastrous consequences to the world. But over the past 50 years, DNA research has greatly helped people diagnose and treat diseases, and such fears have proved unjustified.

As a neuroscientist, Feng is particularly interested in how AI could improve the precision of neurological disorder diagnosis by drawing on large amounts of data from around the world.

"AI is a promising field of study for young researchers," said Feng, who places high hopes on the potential power of AI to improve people's lives.

AI TAKEOVER?

As the study of AI began to take shape in the 1960s and 1970s, the fear that computers would pose a threat to humanity or eventually take over the world became one of the themes in science fiction.

Some experts, such as physicist Stephen Hawking and SpaceX founder Elon Musk, have expressed concerns about the possibility that AI could develop to the point that human beings could not control it. Hawking even said he felt that machines with AI could "spell the end of the human race."

In 2015, Musk donated 10 million dollars to the Future of Life Institute to run a global research program aimed at keeping AI "beneficial to humanity."

Experts told Xinhua that an evil superintelligent machine is highly unlikely to emerge and take over in the near or medium term.

"That's low down on my list of worries," Josh Tenenbaum, professor of MIT's Department of Brain and Cognitive Sciences, said. "I think we have other worries that are much higher, like, for example, climate change."

Tenenbaum said that people should be concerned about AI the same way they are about encryption, biotechnology or other kinds of computing power, and should focus on making sure these technologies are used in good and safe ways.

AI is "not something that is fundamentally different because somehow the technology itself is super intelligent," Tenenbaum said.

Poggio pointed out that the science of intelligence will not be solved by a single breakthrough, so it is very unlikely that all the necessary breakthroughs will happen at once and suddenly produce human-like intelligence.

"It is obvious to knowledgeable researchers that nuclear weapons are much more dangerous than AI," Poggio said. Endi