Patents by Inventor Daniel Kappler

Daniel Kappler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
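Brief illustrative code sketches of the core techniques described in these abstracts (intervention-based policy refinement, critic-gated pose selection, and meta-learning for few-shot task learning) appear after the listing.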

  • Publication number: 20230381970
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, such that the human can proactively intervene in performance of the robotic task.
    Type: Application
    Filed: August 11, 2023
    Publication date: November 30, 2023
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
  • Patent number: 11772272
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, such that the human can proactively intervene in performance of the robotic task.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: October 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
  • Patent number: 11607802
    Abstract: Generating and utilizing action image(s) that represent a candidate pose (e.g., a candidate end effector pose), in determining whether to utilize the candidate pose in performance of a robotic task. The action image(s) and corresponding current image(s) can be processed, using a trained critic network, to generate a value that indicates a probability of success of the robotic task if component(s) of the robot are traversed to the particular pose. When the value satisfies one or more conditions (e.g., satisfies a threshold), the robot can be controlled to cause the component(s) to traverse to the particular pose in performing the robotic task.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: March 21, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Seyed Mohammad Khansari Zadeh, Daniel Kappler, Jianlan Luo, Jeffrey Bingham, Mrinal Kalakrishnan
  • Publication number: 20220297303
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, such that the human can proactively intervene in performance of the robotic task.
    Type: Application
    Filed: March 16, 2021
    Publication date: September 22, 2022
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
  • Publication number: 20220105624
    Abstract: Techniques are disclosed that enable training a meta-learning model, for use in causing a robot to perform a task, using imitation learning as well as reinforcement learning. Some implementations relate to training the meta-learning model using imitation learning based on one or more human guided demonstrations of the task. Additional or alternative implementations relate to training the meta-learning model using reinforcement learning based on trials of the robot attempting to perform the task. Further implementations relate to using the trained meta-learning model to few-shot (or one-shot) learn a new task based on a human guided demonstration of the new task.
    Type: Application
    Filed: January 23, 2020
    Publication date: April 7, 2022
    Inventors: Mrinal Kalakrishnan, Yunfei Bai, Paul Wohlhart, Eric Jang, Chelsea Finn, Seyed Mohammad Khansari Zadeh, Sergey Levine, Allan Zhou, Alexander Herzog, Daniel Kappler
  • Publication number: 20210078167
    Abstract: Generating and utilizing action image(s) that represent a candidate pose (e.g., a candidate end effector pose), in determining whether to utilize the candidate pose in performance of a robotic task. The action image(s) and corresponding current image(s) can be processed, using a trained critic network, to generate a value that indicates a probability of success of the robotic task if component(s) of the robot are traversed to the particular pose. When the value satisfies one or more conditions (e.g., satisfies a threshold), the robot can be controlled to cause the component(s) to traverse to the particular pose in performing the robotic task.
    Type: Application
    Filed: May 28, 2020
    Publication date: March 18, 2021
    Inventors: Seyed Mohammad Khansari Zadeh, Daniel Kappler, Jianlan Luo, Jeffrey Bingham, Mrinal Kalakrishnan
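
Illustrative sketches of the described techniques

The abstracts for publication 20230381970, patent 11772272, and publication 20220297303 describe refining an imitation-learned control policy with human interventions that are triggered when the policy expects to fail at the task. The sketch below is a minimal illustration of that idea only, not the patented implementation: the linear policy, the placeholder failure-probability estimate, and helper names such as `refine_with_interventions`, `env_step`, and `human_action` are assumptions introduced here for clarity.

```python
"""Minimal sketch of intervention-based policy refinement.

All classes and functions here are hypothetical illustrations, not the
implementation described in the patent documents.
"""

from dataclasses import dataclass, field
from typing import Callable, List

import numpy as np


@dataclass
class Dataset:
    """Stores (observation, action) pairs used for imitation learning."""
    observations: List[np.ndarray] = field(default_factory=list)
    actions: List[np.ndarray] = field(default_factory=list)

    def add(self, obs: np.ndarray, action: np.ndarray) -> None:
        self.observations.append(obs)
        self.actions.append(action)


class LinearPolicy:
    """Toy linear policy standing in for a learned robotic control policy."""

    def __init__(self, obs_dim: int, act_dim: int) -> None:
        self.weights = np.zeros((obs_dim, act_dim))

    def act(self, obs: np.ndarray) -> np.ndarray:
        return obs @ self.weights

    def predicted_failure_prob(self, obs: np.ndarray) -> float:
        # Placeholder confidence signal; a real policy might use an ensemble
        # or a learned value head to estimate the chance of task failure.
        return float(1.0 / (1.0 + np.exp(np.linalg.norm(obs @ self.weights))))

    def fit(self, data: Dataset) -> None:
        # Behavioral cloning via least squares over all collected pairs.
        X = np.stack(data.observations)
        Y = np.stack(data.actions)
        self.weights, *_ = np.linalg.lstsq(X, Y, rcond=None)


def refine_with_interventions(
    policy: LinearPolicy,
    data: Dataset,
    env_step: Callable[[np.ndarray], np.ndarray],
    human_action: Callable[[np.ndarray], np.ndarray],
    initial_obs: np.ndarray,
    failure_threshold: float = 0.5,
    horizon: int = 50,
) -> None:
    """Run one episode; hand control to the human when failure looks likely,
    log the human's corrective actions, then retrain the policy on them."""
    obs = initial_obs
    for _ in range(horizon):
        if policy.predicted_failure_prob(obs) > failure_threshold:
            action = human_action(obs)      # human intervenes
            data.add(obs, action)           # corrective data for refinement
        else:
            action = policy.act(obs)        # policy stays in control
        obs = env_step(action)
    if data.observations:
        policy.fit(data)                    # refine on demonstrations + corrections
```

The point the abstracts emphasize is the gating decision: control passes to the human only when the predicted failure probability crosses a threshold, and the resulting corrective actions are folded back into the imitation dataset for refinement.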
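
Patent 11607802 and publication 20210078167 describe rendering a candidate pose as an action image, scoring it together with the current camera image using a trained critic network, and moving the robot to that pose only when the predicted success probability satisfies a condition such as a threshold. The following sketch shows that gating structure under stated assumptions: the pixel-marker action-image encoding, the toy critic, and names like `should_execute` are hypothetical stand-ins, not the rendering or network described in the patent.

```python
"""Minimal sketch of gating a candidate end-effector pose with a critic.

The action-image encoding and the critic below are illustrative stand-ins.
"""

from typing import Callable

import numpy as np


def render_action_image(candidate_pose: np.ndarray, image_shape=(64, 64)) -> np.ndarray:
    """Encode a candidate pose as an image-shaped array so it can be paired
    with the camera image and fed to an image-based critic."""
    img = np.zeros(image_shape, dtype=np.float32)
    # Project the (x, y) components of the pose onto pixel coordinates.
    x = int(np.clip(candidate_pose[0], 0.0, 1.0) * (image_shape[1] - 1))
    y = int(np.clip(candidate_pose[1], 0.0, 1.0) * (image_shape[0] - 1))
    img[y, x] = 1.0
    return img


def should_execute(
    critic: Callable[[np.ndarray, np.ndarray], float],
    current_image: np.ndarray,
    candidate_pose: np.ndarray,
    threshold: float = 0.7,
) -> bool:
    """Return True if the critic's predicted success probability for moving
    to the candidate pose satisfies the threshold condition."""
    action_image = render_action_image(candidate_pose, current_image.shape)
    success_prob = critic(current_image, action_image)
    return success_prob >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    critic_weights = rng.standard_normal(64 * 64 * 2) * 0.01

    def toy_critic(current_image: np.ndarray, action_image: np.ndarray) -> float:
        # Stand-in for a trained network: a fixed random projection of the
        # stacked images squashed to a probability with a sigmoid.
        stacked = np.stack([current_image, action_image]).ravel()
        return float(1.0 / (1.0 + np.exp(-stacked @ critic_weights)))

    camera_image = rng.random((64, 64)).astype(np.float32)
    pose = np.array([0.4, 0.6, 0.1])
    print("execute pose:", should_execute(toy_critic, camera_image, pose))
```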
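
Publication 20220105624 describes meta-training a model with both imitation learning (from human-guided demonstrations) and reinforcement learning (from the robot's own trials), so that a new task can then be learned from one or a few demonstrations. The sketch below is a loose, first-order illustration of combining the two signals; the linear model, the perturbation-based trial update, and functions such as `adapt` and `meta_train` are assumptions made for this example and do not reflect the claimed method.

```python
"""Minimal sketch of meta-training with imitation and trial (RL) signals,
followed by few-shot adaptation from a single demonstration.

The model, losses, and update rules are illustrative assumptions only.
"""

from typing import Callable, List, Tuple

import numpy as np

ObsAct = Tuple[np.ndarray, np.ndarray]  # (observation, expert action)


def imitation_grad(w: np.ndarray, demo: List[ObsAct]) -> np.ndarray:
    """Gradient of the mean squared imitation loss for a linear policy a = w @ obs."""
    grad = np.zeros_like(w)
    for obs, act in demo:
        grad += np.outer(w @ obs - act, obs)
    return grad / len(demo)


def adapt(w: np.ndarray, demo: List[ObsAct], lr: float = 0.1, steps: int = 5) -> np.ndarray:
    """Inner-loop adaptation: a few imitation-gradient steps on one demonstration."""
    for _ in range(steps):
        w = w - lr * imitation_grad(w, demo)
    return w


def meta_train(
    w: np.ndarray,
    tasks: List[Tuple[List[ObsAct], Callable[[np.ndarray], float]]],
    meta_lr: float = 0.05,
    trial_noise: float = 0.1,
    iters: int = 100,
    rng: np.random.Generator = np.random.default_rng(0),
) -> np.ndarray:
    """Each task provides a demonstration and a trial reward function.
    Imitation shapes the adapted policy; trial rewards decide whether a
    random perturbation of the meta-parameters is kept (a crude RL surrogate)."""
    for _ in range(iters):
        for demo, reward_fn in tasks:
            adapted = adapt(w, demo)
            # Trial phase: keep the perturbation direction if it improves reward.
            noise = rng.standard_normal(w.shape) * trial_noise
            if reward_fn(adapted + noise) > reward_fn(adapted):
                w = w + meta_lr * noise
            # Imitation phase: first-order meta-update toward the adapted weights.
            w = w + meta_lr * (adapted - w)
    return w
```

At test time, the same `adapt` routine would be applied to the meta-trained weights with a single demonstration of the new task, which corresponds to the one-shot (or few-shot) use the abstract mentions.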