
UC Berkeley's new center to make artificial intelligence human-compatible

Xinhua, August 31, 2016

The University of California, Berkeley, has launched a Center for Human-Compatible Artificial Intelligence to work on ways to guarantee that the most sophisticated artificial intelligence (AI) systems of the future will act in a manner that is aligned with human values.

The new center, launched this week, is headed by Stuart Russell, a UC Berkeley professor of electrical engineering and computer sciences and co-author of Artificial Intelligence: A Modern Approach, widely considered the standard text in the field. It has received a grant of 5.5 million U.S. dollars from the Open Philanthropy Project, plus additional grants for the center's research from the Leverhulme Trust and the Future of Life Institute.

An advocate for incorporating human values into the design of AI, Russell dismisses the science-fiction threat of sentient, evil robots. The real concern, he argues, is that machines as we currently design them, in fields such as AI, robotics, control theory and operations research, take the objectives that we humans give them entirely literally.

"AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own," Russell was quoted as saying in a news release. "This means we need cast-iron formal proofs, not just good intentions."

One approach researchers are exploring is called inverse reinforcement learning, through which a robot can learn about human values by observing human behavior. "Rather than have robot designers specify the values, which would probably be a disaster," Russell said, "instead the robots will observe and learn from people. Not just by watching, but also by reading. Almost everything ever written down is about people doing things, and other people having opinions about it. All of that is useful evidence."
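
To make the idea concrete, the toy sketch below shows one textbook formulation of inverse reinforcement learning: an observer watches demonstrations, assumes the demonstrator acts approximately rationally with respect to some unknown reward, and fits the reward values that make the observed behavior most likely. The five-state chain world, the softmax-rationality assumption and every name in the code are illustrative inventions for this sketch, not the center's actual methods.

```python
# A minimal inverse reinforcement learning (IRL) sketch: infer per-state
# rewards that best explain demonstrations on a toy 5-state chain MDP.
# Assumptions (all illustrative): a softmax-rational demonstrator and a
# linear reward with one weight per state.
import numpy as np

N_STATES, GAMMA, BETA = 5, 0.9, 5.0   # chain length, discount, rationality
ACTIONS = (-1, +1)                    # move left / move right

def step(s, a):
    """Deterministic chain dynamics, clipped at both ends."""
    return min(max(s + a, 0), N_STATES - 1)

def q_values(theta, iters=100):
    """Value iteration: Q(s,a) = theta[s'] + gamma * V(s')."""
    v = np.zeros(N_STATES)
    q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(iters):
        q = np.array([[theta[step(s, a)] + GAMMA * v[step(s, a)]
                       for a in ACTIONS] for s in range(N_STATES)])
        v = q.max(axis=1)
    return q

def log_likelihood(theta, demos):
    """Log-probability of observed (state, action-index) pairs under a
    Boltzmann policy pi(a|s) proportional to exp(beta * Q(s,a))."""
    q = BETA * q_values(theta)
    q -= q.max(axis=1, keepdims=True)             # numerical stability
    logp = q - np.log(np.exp(q).sum(axis=1, keepdims=True))
    return sum(logp[s, a] for s, a in demos)

# Demonstrations: the "human" repeatedly walks right, toward state 4.
demos = [(s, 1) for s in range(N_STATES - 1)] * 10   # index 1 = move right

# Fit the rewards by gradient ascent on the demonstration likelihood,
# using finite-difference gradients to keep the sketch dependency-free.
theta, lr, eps = np.zeros(N_STATES), 0.1, 1e-4
for _ in range(200):
    grad = np.zeros(N_STATES)
    for i in range(N_STATES):
        bump = np.eye(N_STATES)[i] * eps
        grad[i] = (log_likelihood(theta + bump, demos)
                   - log_likelihood(theta - bump, demos)) / (2 * eps)
    theta += lr * grad

print("inferred rewards:", np.round(theta, 2))   # highest near state 4
```

Run on these demonstrations, the fitted rewards peak at the right-hand end of the chain, recovering the preference implicit in the observed behavior rather than one specified by the designer.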

"People are highly varied in their values and far from perfect in putting them into practice," he acknowledged. "These aspects cause problems for a robot trying to learn what it is that we want and to navigate the often conflicting desires of different individuals."

AI systems, widely seen as the way of the future, may be entrusted with control of critical infrastructure and may provide essential services to billions of people.

In a recent article titled "Will They Make Us Better People?," Russell was optimistic: "In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think."

He said the new center expects to add collaborators with related expertise in economics, philosophy and other social sciences.