Robot trained in a game-like simulation performs better in real life
Training a robot in a simulation that allows it to remember how to get out of sticky situations lets it traverse difficult terrain more smoothly in real life.
Joonho Lee at ETH Zurich in Switzerland and his colleagues trained a neural network algorithm, designed to control a four-legged robot, in a video-game-like simulated environment full of hills, steps and stairs.
The researchers told the algorithm which direction it should be trying to move in, as well as limiting how quickly it could turn, reflecting the capabilities of the real robot. They then let the algorithm make random movements in the simulation, rewarding it for moving in the right way and penalising it otherwise. By accumulating rewards, the neural network learned how to move over a variety of terrain.
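The reward-and-penalty loop described above can be sketched in a few lines. This is a deliberately simplified illustration of the idea, not the authors' training code: the actions, target direction and reward values are all assumptions, and the "agent" here is a trivial reward-accumulating chooser rather than a neural network.

```python
import random

ACTIONS = ["forward", "backward", "left", "right"]
TARGET = "forward"  # the direction the algorithm is told to move in

def reward(action: str) -> float:
    # Reward moving the right way, penalise everything else.
    return 1.0 if action == TARGET else -1.0

def train(steps: int = 2000, epsilon: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    totals = {a: 0.0 for a in ACTIONS}  # accumulated reward per action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        # Start with random movements; over time, mostly exploit
        # whichever action has earned the best average reward so far.
        if rng.random() < epsilon or all(c == 0 for c in counts.values()):
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: totals[a] / max(counts[a], 1))
        totals[action] += reward(action)
        counts[action] += 1
    # Return the action the agent has learned to prefer.
    return max(ACTIONS, key=lambda a: totals[a] / max(counts[a], 1))

best = train()  # the accumulated rewards steer the agent toward TARGET
```

The real system rewards continuous motions over simulated terrain rather than discrete choices, but the principle is the same: behaviour that earns reward is reinforced, behaviour that is penalised fades away.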
Currently, most robots respond in real time to measurements of their surroundings using preprogrammed reactions, encountering every problem for the first time. A neural network allows the robot to learn from its mistakes, and by first training the algorithm in a simulation, the team was able to limit risks and reduce costs.
“We can’t bring it to the real terrains we want to train it on because it might damage the robot, which is very expensive,” says Lee.
The researchers initially used a small neural network that was preprogrammed with knowledge about the simulated environment, enabling the algorithm to learn quickly by taking inputs from virtual sensors and remembering them. They then transferred this knowledge to a large network used to control the real robot.
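The transfer from a small privileged network to a larger deployable one can be illustrated with a toy teacher-student setup. Everything here is an assumed stand-in, not the paper's architecture: the "networks" are single linear weights, the privileged input is a ground-truth terrain height only available in simulation, and the student sees a noisy sensor reading instead.

```python
import random

def teacher_action(terrain_height: float) -> float:
    # The teacher exploits privileged, exact knowledge of the simulated
    # terrain (an assumed toy policy: respond in proportion to height).
    return 2.0 * terrain_height

def train_student(steps: int = 5000, lr: float = 0.01, seed: int = 0) -> float:
    rng = random.Random(seed)
    w = 0.0  # student's single weight: action = w * sensor_reading
    for _ in range(steps):
        terrain = rng.uniform(0.0, 1.0)
        sensor = terrain + rng.gauss(0.0, 0.01)  # student only sees noisy sensors
        target = teacher_action(terrain)         # teacher provides the label
        pred = w * sensor
        w -= lr * (pred - target) * sensor       # squared-error gradient step
    return w  # converges toward the teacher's behaviour (w ≈ 2.0)

student_weight = train_student()
```

The student never sees the privileged terrain information directly; it only learns to reproduce the teacher's outputs from realistic inputs, which is what allows the learned behaviour to transfer to the real robot.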
Using this experience, the robot was able to move 0.452 metres per second on mossy land – more than twice as fast as it can move with its default programming.
Journal reference: Science Robotics, DOI: 10.1126/scirobotics.abc5986