Russ Tedrake discussed the progress robots have made in manual dexterity and social intelligence, and how engineers are using visuomotor policies and pre-trained perceptual networks to build a ‘learned state representation’ for planning actions. He also described a shift in thinking from traditional reinforcement learning toward ‘behavioral cloning’, and the use of a diffusion policy to learn a distribution over possible actions rather than a single averaged action. As an example, he showed a robot making a pizza, demonstrating its dexterity and precision.
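The idea of a diffusion policy learning a distribution over actions can be made concrete with a short sketch. The following is a minimal, hypothetical behavioral-cloning setup in PyTorch: a small noise-prediction network is conditioned on an observation and a diffusion timestep and trained on synthetic "expert" actions. The network sizes, noise schedule, and data here are illustrative assumptions, not the system described in the talk.

```python
# Illustrative sketch (not Tedrake's implementation): a diffusion-policy-style
# behavioral-cloning setup. An MLP is trained to predict the noise added to
# expert actions, conditioned on the observation and diffusion timestep, so
# sampling can recover a distribution over actions rather than a single mean.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, T = 16, 4, 50  # hypothetical observation/action sizes, diffusion steps

class NoisePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ACT_DIM),
        )

    def forward(self, obs, noisy_action, t):
        # Condition on observation, noisy action, and normalized timestep.
        t_feat = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([obs, noisy_action, t_feat], dim=-1))

# Linear DDPM-style noise schedule (purely illustrative values).
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for teleoperated demonstration data.
obs = torch.randn(256, OBS_DIM)
expert_actions = torch.randn(256, ACT_DIM)

for step in range(100):
    t = torch.randint(0, T, (obs.shape[0],))
    noise = torch.randn_like(expert_actions)
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    # Forward-diffuse the expert action, then train the network to recover the noise.
    noisy = a_bar.sqrt() * expert_actions + (1 - a_bar).sqrt() * noise
    loss = nn.functional.mse_loss(model(obs, noisy, t), noise)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference time, actions would be sampled by iteratively denoising from Gaussian noise while conditioning on the current observation, which is what lets such a policy represent several valid behaviors instead of averaging them together.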