Patents by Inventor Sergey Levine

Sergey Levine has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12240113
    Abstract: Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
    Type: Grant
    Filed: December 1, 2023
    Date of Patent: March 4, 2025
    Assignee: GOOGLE LLC
    Inventors: Sergey Levine, Ethan Holly, Shixiang Gu, Timothy Lillicrap
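The collect-and-train loop this abstract describes (many robots generating episodes in parallel, each fetching the latest policy parameters before its episode, with a shared buffer feeding batched updates) can be sketched as a toy Python loop. All names (`ReplayBuffer`, `run_episode`, `update_policy`) are illustrative stand-ins, not from the patent.

```python
import random

class ReplayBuffer:
    """Shared store of experience data collected across all robots."""
    def __init__(self):
        self.data = []

    def add(self, transition):
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))

def run_episode(robot_id, policy_params, steps=5):
    """Each robot runs one episode guided by the *current* policy parameters."""
    return [(robot_id, policy_params["version"], step) for step in range(steps)]

def update_policy(policy_params, batch):
    """Placeholder gradient step: just bump the parameter version."""
    policy_params["version"] += 1
    return policy_params

policy_params = {"version": 0}
buffer = ReplayBuffer()
for episode in range(3):
    for robot_id in range(4):  # robots operate "simultaneously"
        # Before each episode, every robot retrieves the latest parameters.
        for transition in run_episode(robot_id, dict(policy_params)):
            buffer.add(transition)
    policy_params = update_policy(policy_params, buffer.sample(32))

print(policy_params["version"])  # 3
```

The key design point the abstract emphasizes is that parameter updates and experience collection are decoupled: robots always act under the newest available parameters, while the trainer consumes batches from the shared buffer.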
  • Patent number: 12226920
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
    Type: Grant
    Filed: August 11, 2023
    Date of Patent: February 18, 2025
    Assignee: GOOGLE LLC
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
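The intervene-and-refine idea above (predict an impending failure, prompt the human, and fold the correction back into the training data) can be sketched as follows. The failure predictor and function names here are hypothetical stand-ins for the learned components the abstract describes.

```python
def likely_to_fail(state, threshold=0.5):
    """Stand-in failure predictor; a real policy would score its own confidence."""
    return state["confidence"] < threshold

def refine(dataset, human_action, state):
    """Record the human's corrective action so the policy can be retrained on it."""
    dataset.append((state["obs"], human_action))
    return dataset

dataset = []
states = [
    {"obs": "s0", "confidence": 0.9},  # policy is confident, no prompt
    {"obs": "s1", "confidence": 0.2},  # low confidence -> prompt intervention
]
for state in states:
    if likely_to_fail(state):
        human_action = "corrective_action"  # supplied by the human
        dataset = refine(dataset, human_action, state)

print(dataset)  # [('s1', 'corrective_action')]
```

Collecting corrections only where the policy expects to fail is what distinguishes this from plain behavioral cloning: the human's time is spent on the states the policy handles worst.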
  • Patent number: 12103564
    Abstract: A method of generating an output trajectory of an ego vehicle includes recording trajectory data of the ego vehicle and pedestrian agents from a scene of a training environment of the ego vehicle. The method includes identifying at least one pedestrian agent from the pedestrian agents within the scene of the training environment of the ego vehicle causing a prediction-discrepancy by the ego vehicle greater than that caused by the other pedestrian agents within the scene. The method includes updating parameters of a motion prediction model of the ego vehicle based on a magnitude of the prediction-discrepancy caused by the at least one pedestrian agent on the ego vehicle to form a trained, control-aware prediction objective model. The method includes selecting a vehicle control action of the ego vehicle in response to a predicted motion from the trained, control-aware prediction objective model regarding detected pedestrian agents within a traffic environment of the ego vehicle.
    Type: Grant
    Filed: January 6, 2022
    Date of Patent: October 1, 2024
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Rowan Thomas McAllister, Blake Warren Wulfe, Jean Mercat, Logan Michael Ellis, Sergey Levine, Adrien David Gaidon
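A minimal sketch of the control-aware objective described above: each pedestrian's prediction error is weighted by how strongly a misprediction of that agent perturbs the ego vehicle's plan, so training effort concentrates on the agents that matter for control. The 1-D quantities and function names are illustrative assumptions.

```python
def prediction_discrepancy(predicted, actual):
    """Error between the model's predicted motion and the recorded motion."""
    return abs(predicted - actual)

def control_aware_loss(agents):
    """Up-weight agents whose mispredictions most affect the ego plan."""
    return sum(a["control_impact"] * prediction_discrepancy(a["pred"], a["true"])
               for a in agents)

agents = [
    {"pred": 1.0, "true": 1.1, "control_impact": 0.1},  # far from ego path
    {"pred": 2.0, "true": 3.0, "control_impact": 5.0},  # forces a hard brake
]
loss = control_aware_loss(agents)
print(round(loss, 2))  # 5.01
```

Under a plain (control-agnostic) objective both agents' errors would count equally; here the second agent dominates the loss, which is the "control-aware" distinction the abstract draws.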
  • Publication number: 20240308068
    Abstract: Training and/or utilizing a hierarchical reinforcement learning (HRL) model for robotic control. The HRL model can include at least a higher-level policy model and a lower-level policy model. Some implementations relate to technique(s) that enable more efficient off-policy training to be utilized in training of the higher-level policy model and/or the lower-level policy model. Some of those implementations utilize off-policy correction, which re-labels higher-level actions of experience data, generated in the past utilizing a previously trained version of the HRL model, with modified higher-level actions. The modified higher-level actions are then utilized to off-policy train the higher-level policy model. This can enable effective off-policy training despite the lower-level policy model being a different version at training time (relative to the version when the experience data was collected).
    Type: Application
    Filed: May 24, 2024
    Publication date: September 19, 2024
    Inventors: Honglak Lee, Shixiang Gu, Sergey Levine
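The off-policy correction described in this abstract re-labels a stored high-level action (a goal for the lower-level policy) with whichever candidate goal best explains the logged low-level actions under the *current* lower-level policy. A toy 1-D version, with illustrative names, might look like:

```python
def low_level_action(state, goal):
    """Stand-in lower-level policy: move one step toward the goal."""
    return 1 if goal > state else -1

def relabel_goal(states, observed_actions, candidate_goals):
    """Pick the candidate goal under which the current lower-level policy
    reproduces the most of the logged low-level actions."""
    def agreement(goal):
        return sum(low_level_action(s, goal) == a
                   for s, a in zip(states, observed_actions))
    return max(candidate_goals, key=agreement)

states = [0, 1, 2]
observed_actions = [1, 1, 1]       # the robot kept moving "up"
candidate_goals = [-5, 0, 5]
print(relabel_goal(states, observed_actions, candidate_goals))  # 5
```

The re-labeled goal, rather than the originally issued one, is then used to train the higher-level policy, which is what keeps off-policy training valid even though the lower-level policy has changed since the experience was collected.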
  • Patent number: 12083678
    Abstract: Techniques are disclosed that enable training a meta-learning model, for use in causing a robot to perform a task, using imitation learning as well as reinforcement learning. Some implementations relate to training the meta-learning model using imitation learning based on one or more human-guided demonstrations of the task. Additional or alternative implementations relate to training the meta-learning model using reinforcement learning based on trials of the robot attempting to perform the task. Further implementations relate to using the trained meta-learning model to few-shot (or one-shot) learn a new task based on a human-guided demonstration of the new task.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: September 10, 2024
    Assignee: GOOGLE LLC
    Inventors: Mrinal Kalakrishnan, Yunfei Bai, Paul Wohlhart, Eric Jang, Chelsea Finn, Seyed Mohammad Khansari Zadeh, Sergey Levine, Allan Zhou, Alexander Herzog, Daniel Kappler
  • Patent number: 11992945
    Abstract: Techniques are disclosed that enable training a plurality of policy networks, each policy network corresponding to a disparate robotic training task, using a mobile robot in a real world workspace. Various implementations include selecting a training task based on comparing a pose of the mobile robot to at least one parameter of a real world training workspace. For example, the training task can be selected based on the position of a landmark, within the workspace, relative to the pose. For instance, the training task can be selected such that the selected training task moves the mobile robot towards the landmark.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: May 28, 2024
    Assignee: GOOGLE LLC
    Inventors: Jie Tan, Sehoon Ha, Peng Xu, Sergey Levine, Zhenyu Tan
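The pose-based task selection above can be sketched in a few lines: among the candidate training tasks, choose the one whose typical motion drives the mobile robot back toward a workspace landmark, keeping the robot inside the real-world training workspace. The 1-D pose and task names are illustrative assumptions.

```python
def select_task(robot_pose, landmark, tasks):
    """Choose the task whose typical displacement moves the robot
    toward the landmark (keeps the robot inside the workspace)."""
    def distance_after(task):
        new_pose = robot_pose + task["displacement"]
        return abs(landmark - new_pose)
    return min(tasks, key=distance_after)["name"]

tasks = [
    {"name": "walk_forward", "displacement": +1.0},
    {"name": "walk_backward", "displacement": -1.0},
]
# Robot is at x=3 and the landmark (workspace center) is at x=0,
# so the selected task should move it back toward the landmark.
print(select_task(3.0, 0.0, tasks))  # walk_backward
```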
  • Patent number: 11992944
    Abstract: Training and/or utilizing a hierarchical reinforcement learning (HRL) model for robotic control. The HRL model can include at least a higher-level policy model and a lower-level policy model. Some implementations relate to technique(s) that enable more efficient off-policy training to be utilized in training of the higher-level policy model and/or the lower-level policy model. Some of those implementations utilize off-policy correction, which re-labels higher-level actions of experience data, generated in the past utilizing a previously trained version of the HRL model, with modified higher-level actions. The modified higher-level actions are then utilized to off-policy train the higher-level policy model. This can enable effective off-policy training despite the lower-level policy model being a different version at training time (relative to the version when the experience data was collected).
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: May 28, 2024
    Assignee: GOOGLE LLC
    Inventors: Honglak Lee, Shixiang Gu, Sergey Levine
  • Publication number: 20240131695
    Abstract: Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
    Type: Application
    Filed: December 1, 2023
    Publication date: April 25, 2024
    Inventors: Sergey Levine, Ethan Holly, Shixiang Gu, Timothy Lillicrap
  • Publication number: 20240118667
    Abstract: Implementations disclosed herein relate to mitigating the reality gap through training a simulation-to-real machine learning model (“Sim2Real” model) using a vision-based robot task machine learning model. The vision-based robot task machine learning model can be, for example, a reinforcement learning (“RL”) neural network model (RL-network), such as an RL-network that represents a Q-function.
    Type: Application
    Filed: May 15, 2020
    Publication date: April 11, 2024
    Inventors: Kanishka Rao, Chris Harris, Julian Ibarz, Alexander Irpan, Seyed Mohammad Khansari Zadeh, Sergey Levine
  • Patent number: 11897133
    Abstract: Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: February 13, 2024
    Assignee: GOOGLE LLC
    Inventors: Sergey Levine, Ethan Holly, Shixiang Gu, Timothy Lillicrap
  • Publication number: 20240017405
    Abstract: Training and/or using a recurrent neural network model for visual servoing of an end effector of a robot. In visual servoing, the model can be utilized to generate, at each of a plurality of time steps, an action prediction that represents a prediction of how the end effector should be moved to cause the end effector to move toward a target object. The model can be viewpoint invariant in that it can be utilized across a variety of robots having vision components at a variety of viewpoints and/or can be utilized for a single robot even when a viewpoint, of a vision component of the robot, is drastically altered. Moreover, the model can be trained based on a large quantity of simulated data that is based on simulator(s) performing simulated episode(s) in view of the model. One or more portions of the model can be further trained based on a relatively smaller quantity of real training data.
    Type: Application
    Filed: July 17, 2023
    Publication date: January 18, 2024
    Inventors: Alexander Toshev, Fereshteh Sadeghi, Sergey Levine
  • Patent number: 11845183
    Abstract: Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: December 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Sergey Levine, Ethan Holly, Shixiang Gu, Timothy Lillicrap
  • Publication number: 20230381970
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
    Type: Application
    Filed: August 11, 2023
    Publication date: November 30, 2023
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
  • Publication number: 20230367996
    Abstract: A method includes determining a first state associated with a particular task, and determining, by a task policy model, a latent space representation of the first state. The task policy model may have been trained to define, for each respective state of a plurality of possible states associated with the particular task, a corresponding latent space representation of the respective state. The method also includes determining, by a primitive policy model and based on the first state and the latent space representation of the first state, an action to take as part of the particular task. The primitive policy model may have been trained to define a space of primitive policies for the plurality of possible states associated with the particular task and a plurality of possible latent space representations. The method further includes executing the action to reach a second state associated with the particular task.
    Type: Application
    Filed: September 23, 2021
    Publication date: November 16, 2023
    Inventors: Anurag Ajay, Ofir Nachum, Aviral Kumar, Sergey Levine
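The two-level scheme in this abstract (a task policy mapping state to a latent code, and a primitive policy decoding a concrete action from the state and that code) can be sketched as below. The latent encoding and decision rule are toy stand-ins for the trained models.

```python
def task_policy(state):
    """Map the task state to a latent-space representation."""
    return {"z": state["progress"] * 2.0}

def primitive_policy(state, latent):
    """Decode a concrete action from the state and its latent code."""
    return "advance" if latent["z"] < 1.0 else "finish"

def step(state):
    """One control step: encode the state, then decode an action,
    which is executed to reach the next state of the task."""
    latent = task_policy(state)
    return primitive_policy(state, latent)

print(step({"progress": 0.2}))  # advance
print(step({"progress": 0.9}))  # finish
```

The division of labor is the point: the task policy only has to choose *where in latent space* to operate, while the primitive policy owns the mapping from latent codes to low-level actions.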
  • Publication number: 20230311335
    Abstract: Implementations process, using a large language model, a free-form natural language (NL) instruction to generate LLM output. Those implementations generate, based on the LLM output and an NL skill description of a robotic skill, a task-grounding measure that reflects the probability of the skill description in the probability distribution of the LLM output. Those implementations further generate, based on the robotic skill and current environmental state data, a world-grounding measure that reflects a probability of the robotic skill being successful given the current environmental state data. Those implementations further determine, based on both the task-grounding measure and the world-grounding measure, whether to implement the robotic skill.
    Type: Application
    Filed: March 30, 2023
    Publication date: October 5, 2023
    Inventors: Karol Hausman, Brian Ichter, Sergey Levine, Alexander Toshev, Fei Xia, Carolina Parada
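One natural way to combine the two measures in this abstract is to score each skill by the product of its task-grounding (how likely the LLM finds the skill description given the instruction) and its world-grounding (how likely the skill is to succeed in the current state), then pick the highest-scoring skill. The numeric values and names below are illustrative, not from the patent.

```python
def select_skill(skills):
    """Pick the skill with the highest combined grounding score."""
    return max(skills, key=lambda s: s["task_grounding"] * s["world_grounding"])

skills = [
    # "pick up the sponge": highly relevant, and a sponge is visible.
    {"name": "pick_sponge", "task_grounding": 0.8, "world_grounding": 0.9},
    # "open the fridge": plausible for many kitchen requests, but no fridge here.
    {"name": "open_fridge", "task_grounding": 0.7, "world_grounding": 0.1},
]
print(select_skill(skills)["name"])  # pick_sponge
```

The product form means a skill the LLM loves but the robot cannot execute (or vice versa) scores low, which is exactly the failure mode that combining the two groundings is meant to prevent.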
  • Patent number: 11772272
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: October 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
  • Patent number: 11717959
    Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over a grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: August 8, 2023
    Assignee: GOOGLE LLC
    Inventors: Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor Sampedro, Julian Ibarz, Sergey Levine
  • Patent number: 11701773
    Abstract: Training and/or using a recurrent neural network model for visual servoing of an end effector of a robot. In visual servoing, the model can be utilized to generate, at each of a plurality of time steps, an action prediction that represents a prediction of how the end effector should be moved to cause the end effector to move toward a target object. The model can be viewpoint invariant in that it can be utilized across a variety of robots having vision components at a variety of viewpoints and/or can be utilized for a single robot even when a viewpoint, of a vision component of the robot, is drastically altered. Moreover, the model can be trained based on a large quantity of simulated data that is based on simulator(s) performing simulated episode(s) in view of the model. One or more portions of the model can be further trained based on a relatively smaller quantity of real training data.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: July 18, 2023
    Assignee: GOOGLE LLC
    Inventors: Alexander Toshev, Fereshteh Sadeghi, Sergey Levine
  • Patent number: 11548145
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: January 10, 2023
    Assignee: GOOGLE LLC
    Inventors: Sergey Levine, Peter Pastor Sampedro, Alex Krizhevsky
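The servoing loop described in this abstract (score candidate end-effector motions with a learned success predictor, keep moving while the score is low, and issue grasping commands once success looks likely) can be sketched as follows. The scoring function here is a hand-written stand-in for the trained deep network.

```python
def grasp_success_probability(pose, candidate_motion, object_pos=0.0):
    """Stand-in for the trained network: motions ending nearer the
    object score higher."""
    return 1.0 / (1.0 + abs((pose + candidate_motion) - object_pos))

def servo_step(pose, candidate_motions, grasp_threshold=0.9):
    """One iteration of servoing: pick the best-scoring candidate motion,
    and trigger the grasp once its predicted success is high enough."""
    scored = {m: grasp_success_probability(pose, m) for m in candidate_motions}
    best_motion = max(scored, key=scored.get)
    if scored[best_motion] >= grasp_threshold:
        return ("grasp", best_motion)   # issue grasping control commands
    return ("move", best_motion)        # keep servoing toward the object

print(servo_step(2.0, [-1.0, 0.0, 1.0]))  # ('move', -1.0)
print(servo_step(1.0, [-1.0, 0.0, 1.0]))  # ('grasp', -1.0)
```

Iterating this step is what the abstract calls "iterative updating of motion control commands": each cycle re-scores fresh candidates from the new pose until the grasp is triggered.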
  • Publication number: 20230001953
    Abstract: A method of generating an output trajectory of an ego vehicle includes recording trajectory data of the ego vehicle and pedestrian agents from a scene of a training environment of the ego vehicle. The method includes identifying at least one pedestrian agent from the pedestrian agents within the scene of the training environment of the ego vehicle causing a prediction-discrepancy by the ego vehicle greater than that caused by the other pedestrian agents within the scene. The method includes updating parameters of a motion prediction model of the ego vehicle based on a magnitude of the prediction-discrepancy caused by the at least one pedestrian agent on the ego vehicle to form a trained, control-aware prediction objective model. The method includes selecting a vehicle control action of the ego vehicle in response to a predicted motion from the trained, control-aware prediction objective model regarding detected pedestrian agents within a traffic environment of the ego vehicle.
    Type: Application
    Filed: January 6, 2022
    Publication date: January 5, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Rowan Thomas McAllister, Blake Warren Wulfe, Jean Mercat, Logan Michael Ellis, Sergey Levine, Adrien David Gaidon