
Roundup: Google algorithm beats Go champion, signifying major AI breakthrough

Xinhua, January 29, 2016

By mastering the classic strategy game Go, the deep-learning computer program developed by Google's London-based subsidiary, DeepMind, has demonstrated a major step forward in artificial intelligence (AI).

Some experts had predicted that it would take at least another decade before AI programs could beat human professionals at the full-sized game of Go with no handicap. The program, called AlphaGo, has now changed that course.

It defeated Fan Hui, the European Go champion, five games out of five under tournament conditions, DeepMind researchers revealed in a study published in the journal Nature. The program also achieved a 99.8 percent winning rate against other Go programs.

The DeepMind team will further test the program in a five-game challenge match in Seoul against South Korean professional Lee Sedol, considered by many to be the world's top Go player. The match is scheduled for March.

"I heard Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time," said Lee Sedol.

AlphaGo combines an advanced tree search with deep neural networks, which "take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections", said Demis Hassabis, DeepMind's chief executive, in an official Google blog post.

One neural network, the "policy network," selects the next move to play. The other neural network, the "value network," predicts the winner of the game, according to Hassabis. "We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time."
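For readers who want a concrete picture of that two-network design, the sketch below defines a toy policy network and value network in PyTorch and runs one supervised training step that tries to predict an expert's move from a board position. The layer sizes, names such as PolicyNet and ValueNet, and the randomly generated "expert" data are illustrative assumptions only; they are not AlphaGo's 12-layer architecture or DeepMind's training code.

```python
# Illustrative sketch only: a toy "policy network" and "value network",
# trained with one supervised step on made-up human-game data.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19  # a Go board is 19x19, i.e. 361 points

class PolicyNet(nn.Module):
    """Maps a board encoding to logits over the 361 possible moves."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * BOARD * BOARD, BOARD * BOARD)

    def forward(self, board):                  # board: (batch, 1, 19, 19)
        x = self.conv(board).flatten(1)
        return self.head(x)                    # logits over moves

class ValueNet(nn.Module):
    """Maps a board encoding to a single win-probability estimate."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(32 * BOARD * BOARD, 1)

    def forward(self, board):
        x = self.conv(board).flatten(1)
        return torch.sigmoid(self.head(x))     # estimated probability of winning

# One supervised step: predict the expert's move from the position
# (random stand-in data in place of real human games).
policy = PolicyNet()
optimizer = torch.optim.SGD(policy.parameters(), lr=0.01)
positions = torch.randn(8, 1, BOARD, BOARD)          # 8 example positions
expert_moves = torch.randint(0, BOARD * BOARD, (8,))  # moves humans actually played
loss = F.cross_entropy(policy(positions), expert_moves)
optimizer.zero_grad()
loss.backward()
optimizer.step()

win_prob = ValueNet()(positions)   # value network's win estimate for each position
```

In a full system, the policy network's move probabilities would guide the tree search Hassabis mentions, while the value network would score the positions that search reaches.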

"But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning," Hassabis said in the post.

Computer programs have a long history of competing with human rivals in various games. The first game mastered by a computer program was noughts and crosses (also known as tic-tac-toe), in 1952. Checkers fell in 1994. In 1997, IBM's Deep Blue famously beat Garry Kasparov at chess, and the same company later developed Watson, which bested two champions at Jeopardy in 2011.

But until DeepMind's breakthrough, Go had thwarted AI researchers, and computers could play the game only at an amateur level. Another internet giant, Facebook, has also developed software that uses machine learning to play Go, but it still trails state-of-the-art commercial Go programs, according to Nature.

The game of Go originated in China more than 2,500 years ago and has long been regarded as an outstanding "grand challenge" for artificial intelligence, owing to its enormous search space and the difficulty of evaluating board positions and moves. But the challenge is irresistible to AI researchers, who use games as a testing ground for inventing smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.

Many challenges remain in DeepMind's pursuit of a generalized AI system. In particular, its programs cannot yet usefully transfer what they learn about one system, such as Go, to new tasks, a feat that humans perform seamlessly, according to Hassabis.

"While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems," he said. Endit