Google Equips Robots with Version 2.0 of Gemini Technology

DeepMind Unveils Gemini Robotics: A New Model Crafted for Agile, Dexterous Robotic Hands in Real-World Scenarios

Google Integrates Gemini 2.0 into Robotic Applications

Google has announced Gemini Robotics-ER, an advanced AI model designed to bring embodied reasoning to real-life robots. Built on the foundation of Gemini 2.0, the model marks a significant step for the robotics industry.

Google is collaborating with experts in the field to ensure the responsible development of AI applications. The safety of its robots in real-world scenarios is a top priority. To achieve this, Google is using the ASIMOV dataset and DeepMind’s Genie 3 world model to train its robots in high-fidelity, physics-aware simulation environments. This simulation-based training exposes robots to diverse physical situations in a controlled manner, reducing risks before real-world deployment.

Google's operational AI safety strategy also includes human supervision and intervention mechanisms. Autonomous actions proposed by AI systems are verified against predefined safety constraints before implementation, allowing human operators to override decisions if necessary. This approach aligns with Google's AI applications in areas like datacenter management, where human oversight complements AI decision-making to ensure safety and reliability.
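The check-then-execute pattern described above can be sketched in a few lines. This is a minimal illustration only, not Google's implementation; every name, constraint, and threshold here is hypothetical.

```python
# Hypothetical sketch of the pre-execution safety check described above:
# a proposed action is validated against predefined constraints, and a
# human operator can veto even a verified action. All names/limits are
# illustrative, not Google's actual system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    force_newtons: float
    speed_mps: float

# Predefined safety constraints: each returns True when the action is safe.
CONSTRAINTS: list[Callable[[Action], bool]] = [
    lambda a: a.force_newtons <= 40.0,  # cap contact force
    lambda a: a.speed_mps <= 1.0,       # cap end-effector speed
]

def verify(action: Action) -> bool:
    """Check a proposed action against every predefined constraint."""
    return all(check(action) for check in CONSTRAINTS)

def execute(action: Action, operator_override: bool = False) -> str:
    """Run the action only if it passes verification and is not vetoed."""
    if operator_override:
        return f"{action.name}: blocked by operator"
    if not verify(action):
        return f"{action.name}: rejected by safety check"
    return f"{action.name}: executed"
```

For example, a gentle pick would execute, while an over-speed motion would be rejected before it reaches the robot's actuators.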

The development of Gemini Robotics-ER involves collaborations that prioritise transparency, ethical compliance, and rigorous pilot testing in controlled environments. These frameworks advocate for continuous logging and monitoring of AI agent behaviour during task execution to enable system optimisation and incident analysis.
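The continuous logging described above amounts to recording each step of a task with a timestamp so the run can be replayed later for optimisation or incident analysis. A minimal sketch, with all class and task names assumed for illustration:

```python
# Illustrative sketch of per-step task logging for later replay and
# incident analysis. All names (TaskLogger, "pack_lunch_bag") are
# hypothetical, not part of any Google API.
import json
import time

class TaskLogger:
    def __init__(self, task: str):
        self.task = task
        self.events: list[dict] = []

    def log(self, step: str, **detail) -> None:
        """Record one step of task execution with a timestamp."""
        self.events.append(
            {"task": self.task, "step": step, "t": time.time(), **detail}
        )

    def export(self) -> str:
        """Serialise the run as newline-delimited JSON for analysis."""
        return "\n".join(json.dumps(e) for e in self.events)

log = TaskLogger("pack_lunch_bag")
log.log("grasp", object="apple")
log.log("place", container="lunch_bag")
```

Newline-delimited JSON keeps each event independently parseable, which suits append-only monitoring pipelines.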

Gemini Robotics-ER is equipped with embodied reasoning techniques, enabling the AI to navigate its environment in real-time. It can understand the context of tasks, such as packing items in a lunch bag, and discern specific objects within containers. The AI can also distinguish between objects of varying finishes, colours, and shapes, such as bowls and fruits.

Google is partnering with companies like Apptronik to build the next generation of humanoid robots. While the exact release date of these robots is yet to be announced, they will be powered by Google's Gemini AI technology. Embedded in a robot, Gemini responds to commands with physical actions rather than performing them on a phone screen.

The AI's ability to understand the safety implications of its actions in real-world scenarios, coupled with its advanced spatial understanding and embodied reasoning, makes Gemini Robotics-ER a promising step towards safe and intelligent robots in our daily lives. The model will be available to partners for testing, including Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools.

As Google fields questions about Gemini's safeguards, the company has made clear its commitment to the safe and ethical use of its AI technology in the development of real-life robots.

  1. Google's new AI model, Gemini Robotics-ER, is built on Gemini 2.0 and aimed at transforming the robotics industry.
  2. To ensure the safety of its robots, Google is using the ASIMOV dataset and DeepMind’s Genie 3 world model, training its robots in high-fidelity, physics-aware simulation environments.
  3. Google is collaborating with companies like Apptronik to build the next generation of humanoid robots, which will be powered by Google's Gemini AI technology, enabling them to physically respond to commands.
  4. The company is committed to ensuring the safety and ethical use of its AI technology, especially in the development of real-life robots, addressing potential questions regarding Gemini safeguards.
