U.S. researchers set new mark for 'deep learning'
Xinhua, December 18, 2016
U.S. researchers have taken inspiration from the human brain in creating a new "deep learning" method that enables computers to teach themselves about the visual world largely on their own, much as human babies do.
In tests, the new image-processing system, the "deep rendering mixture model," developed by neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine, learned largely on its own how to distinguish handwritten digits, using a standard dataset of 10,000 digits written by federal employees and high school students.
The study, presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, was funded by the Director of National Intelligence's Intelligence Advanced Research Projects Activity, the U.S. National Science Foundation, the Air Force Office of Scientific Research, the Army Research Office and the Office of Naval Research.
The researchers said they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine, and then presenting it with several thousand more unlabeled examples that it used to teach itself further, Rice University reported on Friday.
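To make the training setup concrete, here is a minimal sketch of how such a data split might be constructed: only 10 labeled examples per digit class, with the rest kept unlabeled. This is an illustration only, not the authors' code; it uses scikit-learn's small built-in digits dataset as a stand-in for the MNIST-style data the article describes.

```python
# Sketch (not the authors' code): build a semisupervised split in the spirit
# described above -- 10 labeled examples per digit, the rest left unlabeled.
import numpy as np
from sklearn.datasets import load_digits

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)            # small 8x8 digit images, labels 0-9

labeled_idx = []
for digit in range(10):
    candidates = np.flatnonzero(y == digit)
    labeled_idx.extend(rng.choice(candidates, size=10, replace=False))
labeled_idx = np.array(labeled_idx)

unlabeled_idx = np.setdiff1d(np.arange(len(y)), labeled_idx)

X_labeled, y_labeled = X[labeled_idx], y[labeled_idx]   # 100 labeled images in total
X_unlabeled = X[unlabeled_idx]                          # remaining images, labels withheld
print(len(X_labeled), "labeled /", len(X_unlabeled), "unlabeled")
```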
In tests, the semisupervised Rice-Baylor algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms, which were trained with thousands of correct examples of each digit.
The semisupervised algorithm, described as "essentially a very simple visual cortex," is a convolutional neural network, a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons, the researchers said.
In deep-learning parlance, the most successful technique is called supervised learning, in which the machine is trained with thousands of labeled examples: This is a one. This is a two. The new system, by contrast, uses a method known as semisupervised learning, according to the researchers.
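One common way to exploit unlabeled data is self-training, sketched below: a classifier is fit on the few labeled examples, then repeatedly assigns "pseudo-labels" to the unlabeled images it is most confident about and retrains on the enlarged set. This is a generic illustration under stated assumptions, not the Rice-Baylor "deep rendering mixture model"; the simple logistic-regression classifier and the variables from the split sketch above (X_labeled, y_labeled, X_unlabeled) are placeholders.

```python
# Sketch (assumption: generic self-training, not the published model):
# the classifier teaches itself from unlabeled data via pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=5, threshold=0.95):
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        probs = clf.predict_proba(pool)
        confident = probs.max(axis=1) >= threshold        # keep only confident guesses
        if not confident.any():
            break
        pseudo_labels = clf.classes_[probs[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, pool[confident]])   # add pseudo-labeled images
        y_train = np.concatenate([y_train, pseudo_labels])
        pool = pool[~confident]                           # shrink the unlabeled pool
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf

# Example usage, continuing from the split above:
# clf = self_train(X_labeled, y_labeled, X_unlabeled)
```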
"Humans don't learn that way," lead researcher Ankit Patel said in a statement on Friday. "When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: 'Bottle. Chair. Momma.' But the baby can't even understand spoken words at that point. It's learning mostly unsupervised via some interaction with the world." Endit