Dreamer uses past experience to build a model of the surrounding world. That model lets the robot carry out trial-and-error calculations in software rather than in the real world, predicting the outcomes of candidate actions before trying them. As a result, the robot learns far faster than it could through physical trial and error alone, and once it had learned to walk, it was able to adapt to unexpected situations.
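The idea can be illustrated with a toy sketch. This is not the Dreamer implementation: the one-dimensional environment, the linear world model, and the random-shooting planner below are simplified stand-ins chosen only to show the pattern of learning a model from past experience and then doing trial and error inside that model instead of in the real world.

```python
import random

# Toy "real world": a 1-D position that an action shifts directly.
# The agent never gets to read this function; it only sees transitions.
def real_step(state, action):
    return state + action

# 1) Gather past experience by acting randomly in the real world.
random.seed(0)
experience = []
state = 0.0
for _ in range(50):
    action = random.uniform(-1, 1)
    next_state = real_step(state, action)
    experience.append((state, action, next_state))
    state = next_state

# 2) Learn a world model from that experience: a least-squares
#    estimate k of how far one unit of action moves the state.
num = sum((ns - s) * a for s, a, ns in experience)
den = sum(a * a for _, a, _ in experience)
k = num / den  # learned action effect (true value is 1.0)

def model_step(state, action):
    return state + k * action  # imagined dynamics, no real-world step

# 3) Trial and error "in imagination": roll out many random action
#    sequences inside the model and keep the one whose imagined
#    outcome lands closest to the goal.
def plan(state, goal, horizon=5, candidates=200):
    best_seq, best_err = None, float("inf")
    for _ in range(candidates):
        seq = [random.uniform(-1, 1) for _ in range(horizon)]
        s = state
        for a in seq:
            s = model_step(s, a)
        err = abs(s - goal)
        if err < best_err:
            best_seq, best_err = seq, err
    return best_seq

# Only now execute the chosen plan in the real world.
state, goal = 0.0, 2.0
for a in plan(state, goal):
    state = real_step(state, a)
```

The real robot pays only for the final executed plan; the hundreds of imagined rollouts cost nothing but computation, which is the source of the sample-efficiency gain.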

Lerrel Pinto, an assistant professor of computer science at New York University who specializes in robotics and machine learning, says Dreamer shows that deep reinforcement learning combined with world models can teach a robot new skills in a very short period of time.

The findings, which have not yet been peer-reviewed, suggest that reinforcement learning could become a cornerstone of future robot control.

The approach also removes the need for a simulator in robot training. For example, a robot taught with the algorithm could learn to walk even with a malfunctioning motor in one leg.

An assistant professor of artificial intelligence at the University of Edinburgh says the approach could have huge potential for more complicated applications, such as self-driving cars. A new generation of reinforcement-learning software could pick up on how the environment works in a matter of seconds.