AI-powered robot learns new skills in the same way as humans!
BERKELEY, California (PNN) - July 5, 2018 - A new breed of artificial intelligence-powered robots could soon mimic almost any action after watching a human perform it just once.
Scientists have developed a clawed machine that can learn new tasks, such as dropping a ball into a bowl or picking up a cup, simply by viewing a person perform them first.
Researchers said the trick allows the android to master new skills much faster than other robots, and could one day lead to machines capable of learning complex tasks purely through observation - much like humans and animals do.
Project lead scientist Tianhe Yu wrote in a blog post, “Learning a new skill by observing another individual, the ability to imitate, is a key part of intelligence in humans and animals. Such a capability would make it dramatically easier for us to communicate new goals to robots - we could simply show robots what we want them to do.”
Developed by engineers at the University of California, Berkeley, the robot quickly learns new actions by watching a person perform them on video.
Clips of the android show it picking up fruit and putting it into a bowl, as well as carefully moving around an obstacle following the same path demonstrated by a scientist.
Most machines, such as the robots in car factories, are programmed to complete tasks via computer code - a rigid and often time-consuming process.
More recently, androids have been developed that can learn by watching another robot complete the action, though they typically need to mimic the task thousands of times before perfecting it.
In the new paper, the UC team outlines the technique that allowed them to teach a robot actions with just one demonstration - vastly speeding up the learning process.
They combined two different learning algorithms into a single system.
One of these - a meta-learning algorithm - helps a robot learn by incorporating the movements used in prior tasks rather than mastering each skill from scratch.
The other, an imitation algorithm, allows the machine to pick up a new skill by watching something else perform it.
Combining the two allowed the scientists to create an AI that draws on both prior experience and mimicry to acquire new skills, in a process the researchers call model-agnostic meta-learning (MAML).
This means it can learn to manipulate an object it has never seen before by watching a single video – a breakthrough that could accelerate machine learning.
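The paper's actual system meta-trains a vision-based neural-network control policy on demonstration videos, which is far beyond the scope of a news article. The toy sketch below is only meant to illustrate the nested structure of first-order MAML: a fast inner adaptation step on a single “demonstration”, and an outer update that improves the shared starting parameters. The task family, learning rates, and sizes are all made up for illustration.

```python
# Minimal first-order MAML sketch on a toy 1-D regression family.
# Illustrative only: the real system adapts a vision-based robot policy
# from demonstration videos, not a single scalar parameter.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is a different line y = w * x; w plays the role of a new skill."""
    return rng.uniform(-2.0, 2.0)

def sample_data(w, n=10):
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, w * x

def grad(theta, x, y):
    """Gradient of the mean-squared error of the model y_hat = theta * x."""
    return 2.0 * np.mean((theta * x - y) * x)

theta = 0.0                                   # meta-learned initial parameter
inner_lr, outer_lr, task_batch = 0.5, 0.05, 8

for step in range(3000):
    meta_grad = 0.0
    for _ in range(task_batch):
        w = sample_task()
        x_demo, y_demo = sample_data(w)       # the single "demonstration"
        x_test, y_test = sample_data(w)       # held-out data from the same task
        adapted = theta - inner_lr * grad(theta, x_demo, y_demo)   # inner step
        meta_grad += grad(adapted, x_test, y_test)   # first-order outer gradient
    theta -= outer_lr * meta_grad / task_batch

# At test time, one gradient step on a single new demonstration nudges the
# shared parameter toward the unseen task's solution.
w_new = sample_task()
x_demo, y_demo = sample_data(w_new)
adapted = theta - inner_lr * grad(theta, x_demo, y_demo)
print(f"new task slope {w_new:+.2f}, model slope after one step {adapted:+.2f}")
```

The point of the nested loops is that the outer update is judged by how well the model performs *after* its one-step adaptation, so meta-training optimizes for fast learning rather than for any single task.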
Researchers wrote, “Our method enables a PR2 robot to effectively learn to push many different objects that are unseen during meta-training toward target positions.”
The robot can also “pick up many objects and place them onto target containers by watching a human manipulate each object,” they said.
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognize patterns in information - including speech, text data, or visual images - and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to “teach” an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google's language translation services, Facebook's facial recognition software, and Snapchat's image-altering live filters.
The process of inputting this data can be extremely time-consuming and is limited to one type of knowledge.
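As a deliberately simplified illustration of that data-hungry approach, the sketch below trains a single logistic-regression “neuron” on a few thousand labelled points. The task, the numbers, and the names are all hypothetical; the only point is the pattern of feeding labelled data to an algorithm until it fits.

```python
# Conventional supervised learning in miniature: one logistic-regression
# "neuron" is taught a single narrow concept (is a 2-D point above the
# line y = x?) by repeatedly showing it thousands of labelled examples.
import numpy as np

rng = np.random.default_rng(0)

# Thousands of labelled examples: 2-D features and a 0/1 label.
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5

for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of label 1
    grad_w = X.T @ (p - y) / len(y)           # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy after seeing {len(y)} examples: {accuracy:.3f}")
```

A model trained this way knows only the one concept its labels describe, which is why gathering and labelling the data dominates the effort.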
A newer approach, known as generative adversarial networks (GANs), pits two neural networks against each other, allowing each to learn from the other.
This approach is designed to speed up the learning process as well as refine the output created by AI systems.
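As a sketch of that adversarial idea, the toy example below pits a two-parameter generator against a logistic-regression discriminator on one-dimensional data. It is illustrative only, with hand-written gradients and made-up numbers, and is not how production GANs are built.

```python
# Tiny 1-D GAN sketch: the generator tries to produce numbers that look
# like the "real" data, while the discriminator tries to tell them apart.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

real_mean, real_std = 3.0, 0.5     # the "real" data distribution
mu, log_sigma = 0.0, 0.0           # generator parameters: x = mu + exp(log_sigma) * z
a, b = 0.1, 0.0                    # discriminator parameters: D(x) = sigmoid(a*x + b)
lr, batch = 0.05, 64

for step in range(4000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(real_mean, real_std, size=batch)
    z = rng.standard_normal(batch)
    fake = mu + np.exp(log_sigma) * z
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    grad_a = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    a -= lr * grad_a
    b -= lr * grad_b

    # Generator update: produce samples the discriminator scores as "real".
    z = rng.standard_normal(batch)
    fake = mu + np.exp(log_sigma) * z
    d_fake = sigmoid(a * fake + b)
    dloss_dx = -(1 - d_fake) * a               # non-saturating generator loss
    mu -= lr * np.mean(dloss_dx)
    log_sigma -= lr * np.mean(dloss_dx * z * np.exp(log_sigma))

print(f"generator mean drifted from 0.0 to {mu:.2f} (real data mean is {real_mean})")
```

Because each network's training signal comes from the other, neither needs hand-labelled examples: the competition itself supplies the feedback.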
In the future, the team said, they plan to expand the range of tasks that the robot can learn from humans.
The eventual goal is to develop machines that can “quickly develop strategies for new situations,” they said.
The study was posted to the preprint server arXiv and has not yet undergone peer review.