Meet ‘Brett,’ the robot that learns the way your toddler does—only faster

Researchers built Brett’s software on a neural network that operates like a human brain.

Dylan Love

Brett is like most small children, playing with Lego, developing thinking skills, and generally figuring out how the world works. There’s just one small difference: Brett’s not “alive” in any sense of the word.

That name is actually an acronym for Berkeley Robot for the Elimination of Tedious Tasks. Brett is a robotics project at the University of California, Berkeley, built on the well-known PR2 platform. It can interact with the world on a level somewhere between that of an infant and a toddler, but it stands to improve rapidly.

Researchers built Brett’s software on a neural network, a computerized approximation of how the human brain operates. They are training the robot to operate itself with a combination of deep learning and reinforcement learning. Deep learning is a branch of artificial intelligence that helps computers understand abstract human concepts (a door is a thing that you open and close, for example). Reinforcement learning is pretty much how one might train a dog: rewarding progress and penalizing undesirable behavior.

In practice, this means Brett is given a basic task to complete but no instructions for how to do it, receiving a computerized dog treat of sorts once its neural network figures out how to complete the task.

For example, Brett might be told to screw the lid onto a water bottle. Deep learning technology will help it identify the lid and the bottle, and reinforcement learning will help the robot get better at screwing on the lid with every attempt. Because the robot isn’t beholden to human qualities like impatience and frustration, it has no problem taking all the time it needs to figure out how to execute the task, however basic it might be.
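That trial-and-error loop can be sketched in a few lines of code. The toy lid-screwing task below is hypothetical and is not Berkeley’s actual software: the agent is never told the target angle, it only receives a reward when an attempt happens to work, and rewarded attempts pull its future guesses toward them.

```
# A minimal sketch (not Brett's real code) of reward-driven trial and error.
# The target angle, tolerance, and learning rate are invented for illustration.
import random

TARGET_ANGLE = 270.0   # degrees needed to seat the lid (unknown to the agent)
TOLERANCE = 10.0       # how close an attempt must be to count as a success

def attempt(angle):
    """The environment: hand out a 'dog treat' (reward 1.0) only on success."""
    return 1.0 if abs(angle - TARGET_ANGLE) <= TOLERANCE else 0.0

guess, spread = 180.0, 60.0   # the agent's policy: a guess plus exploration noise
learning_rate = 0.3

for trial in range(500):
    tried = random.gauss(guess, spread)            # explore around the current guess
    if attempt(tried) > 0:
        guess += learning_rate * (tried - guess)   # reinforce: move toward what worked
        spread = max(spread * 0.8, 2.0)            # explore less as performance improves

print(f"Learned angle: {guess:.0f} deg (target was {TARGET_ANGLE:.0f})")
```

Nothing in that loop spells out what a lid or a bottle is; the reward signal alone is enough to steer the agent toward the attempts that succeeded.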

This isn’t RoboCop, but never before have deep learning and reinforcement learning come together so strongly in a single robot, according to Bloomberg. Neural network technology likely holds the strongest promise for bringing those oft-hypothesized autonomous science-fiction-style robots into the real world, allowing them to decide for themselves how to go about a task.

The following GIF shows a visual example of how software “neurons” might direct a robot to try several incorrect approaches before eventually arriving at the correct one.

Earlier this year, Google created an artificially intelligent system to learn and master video games. After making many weak, exploratory attempts at classic Atari titles, Google’s software quickly ramped up to outdo even professional human video game testers at certain games for which it could derive a mathematical model for success. Its weapon of choice? A neural network.
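Google’s Atari system is widely described as a deep Q-network: a neural network trained with Q-learning, a reinforcement learning method that estimates how much future score each action is worth. The toy “game” below is hypothetical and uses a simple lookup table in place of a network, but the update line in the middle of the loop shows the same basic idea: nudge each estimate toward the reward just received plus the best score expected afterward.

```
# A minimal sketch of Q-learning on an invented toy game: walk along
# positions 0..4, and reaching position 4 scores a point.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate
ACTIONS = ["left", "right"]

def step(state, action):
    nxt = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == 4 else 0.0
    return nxt, reward, nxt == 4         # next state, reward, game over?

Q = defaultdict(float)                   # Q[(state, action)] -> expected future score

for episode in range(200):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:    # explore occasionally...
            action = random.choice(ACTIONS)
        else:                            # ...otherwise take the action the table favors
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)})   # learned policy
```

After a couple hundred of these invented games, the table alone steers the agent to “right” at every position; Google’s version swaps the table for a deep neural network that reads the game’s pixels.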

Chelsea Finn, a Ph.D. student at Berkeley, cautioned Bloomberg that such technology doesn’t come close to justifying concern over a machine uprising. To worry about neural-network-enabled robots taking over the world is akin to “worrying about overpopulation on Mars,” she said. There are simply too many critical advancements to be made before there’s a credible threat there.

Until then, Brett will be biding its time in its lab, stacking Lego and putting lids on bottles.

Screengrab via Bloomberg Business 
