Robots Learn to Perform Tasks by “Watching” YouTube Videos

DARPA program advances robots’ ability to sense visual information and turn it into action

Robots can learn to recognize objects and patterns fairly well, but interpreting visual input and acting on it is much more difficult. Researchers at the University of Maryland, funded by DARPA's Mathematics of Sensing, Exploitation and Execution (MSEE) program, recently developed a system that enabled robots to process visual data from a series of "how to" cooking videos on YouTube. Based on what a video showed, the robots were able to recognize, grab and manipulate the correct kitchen utensil or object and perform the demonstrated task with high accuracy, without additional human input or programming.

"The MSEE program initially focused on sensing, which involves perception and understanding of what's happening in a visual scene, not simply recognizing and identifying objects," said Reza Ghanadan, program manager in DARPA's Defense Sciences Office. "We've now taken the next step to execution, where a robot processes visual cues through a manipulation action-grammar module and translates them into actions."

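To make the execution step concrete, here is a minimal Python sketch of the action-grammar idea: visual classifications (a grasp type, an object, an action) are combined by a simple rule into an executable command. The names and the single rule are illustrative assumptions, not the grammar the Maryland team actually published.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """Hypothetical output of the visual front end for one video clip."""
        grasp: str   # e.g. "power-grasp" or "precision-grasp"
        obj: str     # e.g. "knife", "bowl"
        action: str  # e.g. "cut", "pour"

    def to_command(d: Detection) -> str:
        # One illustrative grammar rule:
        #   ManipulationAction -> Grasp(object) then Action(object)
        return f"GRASP[{d.grasp}]({d.obj}); EXECUTE[{d.action}]({d.obj})"

    # A clip classified as a power grasp on a knife, performing a cut:
    print(to_command(Detection("power-grasp", "knife", "cut")))
    # GRASP[power-grasp](knife); EXECUTE[cut](knife)
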
Another significant advance to come out of the research is the robots' ability to accumulate knowledge and share it with other robots. Current sensor systems typically view the world anew in each moment, without the ability to apply prior knowledge.

"This system allows robots to continuously build on previous learning—such as types of objects and grasps associated with them—which could have a huge impact on teaching and training," Ghanadan said. "Instead of the long and expensive process of programming code to teach robots to do tasks, this research opens the potential for robots to learn much faster, at much lower cost and, to the extent they are authorized to do so, share that knowledge with other robots. This learning-based approach is a significant step towards developing technologies that could have benefits in areas such as military repair and logistics."

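What cumulative, shareable knowledge might look like in code is sketched below; the object-to-grasp store and the merge step are illustrative assumptions, not the researchers' implementation.

    from collections import defaultdict

    class GraspKnowledge:
        """Hypothetical store of learned object-to-grasp associations."""
        def __init__(self):
            self.grasps = defaultdict(set)  # object type -> grasp types seen

        def learn(self, obj, grasp):
            self.grasps[obj].add(grasp)

        def merge(self, other):
            # "Sharing": fold another robot's associations into this store.
            for obj, grasps in other.grasps.items():
                self.grasps[obj] |= grasps

    robot_a, robot_b = GraspKnowledge(), GraspKnowledge()
    robot_a.learn("knife", "power-grasp")
    robot_b.learn("bowl", "precision-grasp")
    robot_a.merge(robot_b)  # robot_a now knows bowls without re-learning
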
The DARPA-funded researchers presented their work today at the 29th AAAI Conference on Artificial Intelligence, organized by the Association for the Advancement of Artificial Intelligence. The University of Maryland paper is available here: http://ow.ly/I30im

Featured Product

BitFlow is the leader in CoaXPress

With the introduction of its Cyton and Karbon CXP frame grabbers, BitFlow has established itself as the leader in CoaXPress (CXP), a simple yet powerful standard for moving high-speed serial data from a camera to a frame grabber. With CXP, video is captured at speeds of up to 6.25 gigabits per second (Gb/s). Simultaneously, control commands and triggers can be sent to the camera at 20 Mb/s (with a trigger accuracy of +/- 2 nanoseconds), and up to 13 W of power can also be supplied to the camera. All of this happens over a single piece of industry-standard 75 Ohm coaxial cable. Multiple CXP links can be aggregated to support higher data rates (e.g., four links provide 25 Gb/s of data).

BitFlow CXP frame grabbers open the door to applications where cable cost, routing requirements and long distances have prevented the move to high-resolution, high-speed digital cameras. In many cases, existing coaxial infrastructure can be repurposed for CXP with very low installation costs.
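
As a quick sanity check on the aggregation figure above, the combined rates follow directly from the 6.25 Gb/s per-link rate quoted in the text (a sketch only; usable payload throughput is somewhat lower once protocol overhead is accounted for):

    link_rate_gbps = 6.25  # per-link CXP line rate quoted above
    for links in (1, 2, 4):
        print(f"{links} link(s): {links * link_rate_gbps:g} Gb/s")
    # 1 link(s): 6.25 Gb/s
    # 2 link(s): 12.5 Gb/s
    # 4 link(s): 25 Gb/s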