Researchers have released OpenVLA, an open-source vision-language-action (VLA) AI model for controlling robots from natural-language prompts. It sets a new state of the art for generalist robot manipulation policies and can be quickly fine-tuned for new robot setups.
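The model is distributed via Hugging Face and can be queried like any other Transformers model: given a camera image and an instruction, it predicts a low-level robot action. The sketch below is a minimal, hedged illustration of that flow; the model ID `openvla/openvla-7b`, the prompt template, the `predict_action` helper, and the `unnorm_key` argument follow the project's published quickstart, and the camera and robot objects are hypothetical stand-ins for a real setup.

```python
# Minimal sketch of querying OpenVLA for a single robot action.
# Assumes the openvla/openvla-7b checkpoint on Hugging Face and its
# custom predict_action helper (loaded via trust_remote_code=True).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# Hypothetical camera frame standing in for a real robot observation.
image = Image.open("camera_frame.png")
prompt = "In: What action should the robot take to pick up the red cup?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
# Returns a 7-DoF end-effector action, un-normalized with dataset statistics
# selected by unnorm_key (here the BridgeData setup, as in the quickstart).
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)
```

In practice the predicted action would be passed to the robot's own control stack; adapting the model to a new robot amounts to fine-tuning on a small set of demonstrations rather than training a policy from scratch.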
