Patents by Inventor Paul Wohlhart

Paul Wohlhart has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11951622
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
  • Patent number: 11727593
    Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene. The camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of an object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images. (A code sketch of this projection step follows this entry.)
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: August 15, 2023
    Assignee: Google LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
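Below is a minimal sketch of the propagation step this entry's abstract describes: a 3D bounding region is defined once for the object of interest, and the known camera pose of each other frame is used to project that region into those frames to obtain 2D annotations. The pinhole/world-to-camera conventions, the example intrinsics, and all function names (box_corners, project, annotate_frame) are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: propagate a 3D bounding box to other frames using known
# camera poses. Assumes a pinhole camera with intrinsics K and world-to-camera
# extrinsics (R, t); these conventions are assumptions for illustration.
import numpy as np

def box_corners(center, size):
    """Eight corners of an axis-aligned 3D bounding region in world coordinates."""
    offsets = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
    return center + 0.5 * size * offsets          # shape (8, 3)

def project(points_world, K, R, t):
    """Pinhole projection with world-to-camera extrinsics: x_cam = R @ X_world + t."""
    cam = points_world @ R.T + t                  # (8, 3) camera-frame points
    uv = cam @ K.T                                # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]                 # (8, 2) pixel coordinates

def annotate_frame(center, size, K, R, t):
    """2D annotation for one frame: the tight box around the projected 3D corners."""
    px = project(box_corners(center, size), K, R, t)
    return px.min(axis=0), px.max(axis=0)         # (u_min, v_min), (u_max, v_max)

# Example: a box defined once, annotated in every frame with a known camera pose
# (a single identity pose stands in for the sequence here).
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
center, size = np.array([0.1, 0.0, 0.8]), np.array([0.2, 0.2, 0.3])
poses = [(np.eye(3), np.zeros(3))]                # (R, t) per frame
for R, t in poses:
    print(annotate_frame(center, size, K, R, t))
```

Because the box is defined only once, per-frame annotation reduces to a handful of matrix products and a min/max, which is what makes this style of labeling attractive for large image sets.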
  • Patent number: 11607807
    Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action. (A code sketch of this selection loop follows this entry.)
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: March 21, 2023
    Assignee: X Development LLC
    Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
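As referenced in the entry above, the abstract describes scoring candidate end-effector actions with the trained model and executing the highest-probability one at each iteration. The sketch below is a hypothetical version of that loop; placement_model, get_camera_image, execute_motion, the 4-D action parameterization, and the stopping threshold are all assumptions.

```python
# Hypothetical control loop around a trained placement model. placement_model,
# get_camera_image, and execute_motion are stand-ins for model/robot plumbing
# that the listing does not specify.
import numpy as np

def select_action(placement_model, image, target, num_candidates=64, rng=None):
    """Score sampled candidate end-effector motions; return the most promising one."""
    if rng is None:
        rng = np.random.default_rng()
    # Candidate motions: normalized 4-D commands (dx, dy, dz, gripper), an assumption.
    candidates = rng.uniform(-1.0, 1.0, size=(num_candidates, 4))
    # Predicted probability of successful placement for each candidate motion.
    scores = np.array([placement_model(image, action, target) for action in candidates])
    best = scores.argmax()
    return candidates[best], float(scores[best])

def placement_loop(placement_model, get_camera_image, execute_motion, target,
                   max_steps=20, stop_threshold=0.9):
    """At each iteration, execute the highest-probability candidate end effector action."""
    for _ in range(max_steps):
        image = get_camera_image()
        action, p_success = select_action(placement_model, image, target)
        execute_motion(action)
        if p_success >= stop_threshold:           # confident the placement will succeed
            break
```

A production system would likely sample candidates with a smarter optimizer than the uniform draw used here; the argmax-over-scored-candidates structure is the part the abstract describes.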
  • Publication number: 20220215208
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
    Type: Application
    Filed: March 23, 2022
    Publication date: July 7, 2022
    Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
  • Patent number: 11341364
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network that is used to control a robotic agent interacting with a real-world environment.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: May 24, 2022
    Assignee: Google LLC
    Inventors: Konstantinos Bousmalis, Alexander Irpan, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Julian Ibarz, Sergey Vladimir Levine, Kurt Konolige, Vincent O. Vanhoucke, Matthew Laurance Kelcey
  • Patent number: 11314987
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 26, 2022
    Assignee: X Development LLC
    Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
  • Publication number: 20220105624
    Abstract: Techniques are disclosed that enable training a meta-learning model, for use in causing a robot to perform a task, using imitation learning as well as reinforcement learning. Some implementations relate to training the meta-learning model using imitation learning based on one or more human-guided demonstrations of the task. Additional or alternative implementations relate to training the meta-learning model using reinforcement learning based on trials of the robot attempting to perform the task. Further implementations relate to using the trained meta-learning model to few-shot (or one-shot) learn a new task based on a human-guided demonstration of the new task. (A code sketch of this adaptation scheme follows this entry.)
    Type: Application
    Filed: January 23, 2020
    Publication date: April 7, 2022
    Inventors: Mrinal Kalakrishnan, Yunfei Bai, Paul Wohlhart, Eric Jang, Chelsea Finn, Seyed Mohammad Khansari Zadeh, Sergey Levine, Allan Zhou, Alexander Herzog, Daniel Kappler
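Below is a first-order sketch of the recipe in the entry above: adapt a policy to a single human-guided demonstration with imitation learning (the inner loop), refine it on return-weighted robot trials, and nudge the shared initialization toward the result as a Reptile-style stand-in for the meta-update. The linear policy, mean-squared losses, step counts, and all function names are simplifying assumptions, not the claimed training procedure.

```python
# Hypothetical first-order meta-learning sketch (imitation inner loop + trial-based
# refinement + Reptile-style outer update). Everything here is illustrative.
import numpy as np

def predict(params, obs):
    """Linear policy: actions = obs @ W + b."""
    W, b = params
    return obs @ W + b

def mse_grads(params, obs, acts, weights=None):
    """Analytic gradients of a (optionally return-weighted) mean-squared imitation loss."""
    err = predict(params, obs) - acts             # (N, act_dim) residuals
    if weights is not None:                       # weight each step by its trial return
        err = weights[:, None] * err
    n = err.size
    return [2.0 * obs.T @ err / n, 2.0 * err.sum(axis=0) / n]

def sgd(params, grads, lr):
    return [p - lr * g for p, g in zip(params, grads)]

def adapt_on_demo(params, demo_obs, demo_acts, steps=5, lr=0.1):
    """Inner loop: few-shot imitation of a single human-guided demonstration."""
    for _ in range(steps):
        params = sgd(params, mse_grads(params, demo_obs, demo_acts), lr)
    return params

def meta_step(params, task, meta_lr=0.05):
    """Outer step (Reptile-style): adapt on the demo, refine on return-weighted robot
    trials, then move the shared initialization toward the adapted parameters."""
    demo_obs, demo_acts, trial_obs, trial_acts, returns = task
    adapted = adapt_on_demo(params, demo_obs, demo_acts)
    for _ in range(5):
        adapted = sgd(adapted, mse_grads(adapted, trial_obs, trial_acts, returns), 0.1)
    return [p + meta_lr * (a - p) for p, a in zip(params, adapted)]

# Example shapes: 12-D observations, 4-D actions, one 50-step demonstration.
rng = np.random.default_rng(0)
params = [np.zeros((12, 4)), np.zeros(4)]
demo_obs, demo_acts = rng.normal(size=(50, 12)), rng.normal(size=(50, 4))
new_task_policy = adapt_on_demo(params, demo_obs, demo_acts)
```

At test time only adapt_on_demo is needed: a single demonstration of a new task produces a task-specific policy from the meta-trained initialization, mirroring the few-shot use described in the abstract.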
  • Patent number: 11151744
    Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene. The camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of an object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: October 19, 2021
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
  • Publication number: 20210229276
    Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action.
    Type: Application
    Filed: April 14, 2021
    Publication date: July 29, 2021
    Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
  • Patent number: 11007642
    Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: May 18, 2021
    Assignee: X Development LLC
    Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
  • Publication number: 20200279134
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network that is used to control a robotic agent interacting with a real-world environment.
    Type: Application
    Filed: September 20, 2018
    Publication date: September 3, 2020
    Inventors: Konstantinos Bousmalis, Alexander Irpan, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Julian Ibarz, Sergey Vladimir Levine, Kurt Konolige, Vincent O. Vanhoucke, Matthew Laurance Kelcey
  • Publication number: 20200167606
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 28, 2020
    Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
  • Publication number: 20200122321
    Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action.
    Type: Application
    Filed: October 23, 2018
    Publication date: April 23, 2020
    Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
  • Patent number: 10417781
    Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene. The camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of an object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: September 17, 2019
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
  • Patent number: 9996936
    Abstract: A computer-implemented method, apparatus, computer-readable medium, and mobile device for determining a 6DOF pose from an input image. The process of determining the 6DOF pose may include processing an input image to create one or more static representations of the input image, creating a dynamic representation of the input image from an estimated 6DOF pose and a 2.5D reference map, and measuring correlation between the dynamic representation and the one or more static representations of the input image. The estimated 6DOF pose may be iteratively adjusted according to the measured correlation error until a final adjusted dynamic representation meets an output threshold. (A code sketch of this refinement loop follows this entry.)
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: June 12, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Clemens Arth, Paul Wohlhart, Vincent Lepetit
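A minimal sketch of the iterative refinement this entry's abstract describes: render a dynamic representation from the current 6DOF estimate and the 2.5D reference map, correlate it with a static representation of the input image, and keep pose perturbations that increase the correlation until a threshold is met. The renderer (render_dynamic), the encoder (encode_static), the random-search update, and the (tx, ty, tz, rx, ry, rz) parameterization are assumptions for illustration.

```python
# Hypothetical 6DOF refinement loop; render_dynamic and encode_static are stand-ins
# for components the listing does not specify.
import numpy as np

def correlation(a, b):
    """Normalized cross-correlation between two flattened representations."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def refine_pose(pose, image, ref_map, render_dynamic, encode_static,
                iters=200, step=0.05, stop_corr=0.95, rng=None):
    """Keep random pose perturbations that raise the correlation between the rendered
    dynamic representation and the static representation of the input image."""
    if rng is None:
        rng = np.random.default_rng()
    static = encode_static(image)                       # fixed representation of the input
    best = correlation(render_dynamic(pose, ref_map), static)
    for _ in range(iters):
        if best >= stop_corr:                           # output threshold reached
            break
        candidate = pose + step * rng.normal(size=6)    # perturb (tx, ty, tz, rx, ry, rz)
        score = correlation(render_dynamic(candidate, ref_map), static)
        if score > best:
            pose, best = candidate, score
    return pose, best
```

Gradient-based or coarse-to-fine search would normally replace the random perturbations; the render-correlate-adjust-repeat structure is the part the abstract describes.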
  • Publication number: 20170337690
    Abstract: A computer-implemented method, apparatus, computer-readable medium, and mobile device for determining a 6DOF pose from an input image. The process of determining the 6DOF pose may include processing an input image to create one or more static representations of the input image, creating a dynamic representation of the input image from an estimated 6DOF pose and a 2.5D reference map, and measuring correlation between the dynamic representation and the one or more static representations of the input image. The estimated 6DOF pose may be iteratively adjusted according to the measured correlation error until a final adjusted dynamic representation meets an output threshold.
    Type: Application
    Filed: May 20, 2016
    Publication date: November 23, 2017
    Inventors: Clemens Arth, Paul Wohlhart, Vincent Lepetit