Robots could become much better learners thanks to an innovative method devised by Dyson-backed researchers: by stripping away the traditional complexities of teaching robots to perform tasks, the technique could make their learning more human-like.

One of the biggest hurdles in teaching robots new skills is converting complex, high-dimensional data, such as images from onboard RGB cameras, into actions that achieve specific goals. Existing methods typically rely on 3D representations that require precise depth information, or use hierarchical predictions that depend on motion planners or separate low-level policies.
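To make that mapping concrete, here is a minimal sketch of the kind of end-to-end policy such methods compete with: a network that regresses low-level actions directly from RGB pixels. This is an illustrative example only, not the researchers' code; the class name, the 7-DoF action size, and the 64x64 input resolution are all assumptions.

```python
# Illustrative sketch (not the paper's code): a minimal vision-to-action
# policy that maps a raw RGB observation straight to a low-level action.
import torch
import torch.nn as nn

class ImageToActionPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):  # hypothetical 7-DoF action
        super().__init__()
        # Small CNN encoder for 64x64 RGB observations.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 64, 64)).shape[1]
        # MLP head regresses a low-level action (e.g. end-effector pose + gripper).
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(rgb))

policy = ImageToActionPolicy()
obs = torch.rand(1, 3, 64, 64)   # one RGB camera frame
action = policy(obs)             # predicted low-level action
print(action.shape)              # torch.Size([1, 7])
```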

Researchers from Imperial College London and the Dyson Robot Learning Lab have unveiled a novel approach that could address this problem. Their “Render and Diffuse” (R&D) method aims to bridge the gap between high-dimensional observations and low-level robotic actions, especially when demonstration data is scarce.

R&D, detailed in a paper published on the arXiv preprint server, tackles the problem by using virtual renderings of a 3D model of the robot. By representing low-level actions within the observation space, the researchers were able to simplify the learning process.
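The paper's exact architecture isn't reproduced here, but the hedged sketch below captures the loop the description implies: render a virtual model of the robot at a candidate action into the camera view, then iteratively refine the action with a learned denoising network. The helpers render_robot and denoiser are hypothetical stand-ins, not the authors' API.

```python
# Hedged sketch of the Render & Diffuse idea, assuming hypothetical helpers.
import torch

def render_robot(rgb: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
    # Stand-in for a real renderer: a real implementation would project the
    # robot's 3D model, posed at `action`, into the camera frame using known
    # intrinsics/extrinsics. Here we just attach a dummy render channel.
    overlay = action.mean() * torch.ones_like(rgb[:, :1])
    return torch.cat([rgb, overlay], dim=1)   # RGB + rendered-action channel

def denoiser(rendered: torch.Tensor, action: torch.Tensor, t: int) -> torch.Tensor:
    # Stand-in for the learned network that predicts an action refinement.
    return torch.zeros_like(action)

def render_and_diffuse(rgb: torch.Tensor, action_dim: int = 7, steps: int = 10):
    action = torch.randn(1, action_dim)          # start from pure noise
    for t in reversed(range(steps)):
        rendered = render_robot(rgb, action)     # "imagine" the action in the image
        action = action + denoiser(rendered, action, t)  # refine toward a valid action
    return action

rgb = torch.rand(1, 3, 64, 64)
print(render_and_diffuse(rgb).shape)  # torch.Size([1, 7])
```

The key design point, as the article describes it, is that actions and observations live in the same image space, so the network never has to learn a separate mapping between the two.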

(Image credit: Vosylius et al.)

Imagining their actions within an image