Researchers have released OpenVLA, an open-source visual-language-action AI model for guiding robots based on prompts. It sets a new state of the art for generalist robot manipulation policies and can be quickly adapted to new robot setups.