Patents by Inventor Paul WOHLHART
Paul WOHLHART has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11951622
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
Type: Grant
Filed: March 23, 2022
Date of Patent: April 9, 2024
Assignee: Google LLC
Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
-
Patent number: 11727593
Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene. The camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of the object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
Type: Grant
Filed: August 9, 2021
Date of Patent: August 15, 2023
Assignee: Google LLC
Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
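The annotation-propagation idea in this abstract can be sketched in a few lines: once a 3D bounding region is defined for the object in the first image, the known camera pose for every other image lets the same box be projected into each view, so it only has to be annotated once. Everything below is an illustrative assumption, not the patented method: a simple pinhole camera, an axis-aligned box, and made-up intrinsics and poses.

```python
import numpy as np

def project_box(corners_world, R, t, K):
    """Project 3D box corners (world frame) into an image given a camera pose.

    corners_world: (8, 3) box corner positions.
    R, t: world-to-camera rotation (3x3) and translation (3,).
    K: camera intrinsics (3x3).
    Returns (8, 2) pixel coordinates.
    """
    cam = corners_world @ R.T + t        # world -> camera frame
    uvw = cam @ K.T                      # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide

def box_corners(center, size):
    """Corners of an axis-aligned box from its center and (w, h, d) extents."""
    offs = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
    return center + 0.5 * offs * size

# Annotate the box once in the scene, then reuse it in a second view.
corners = box_corners(np.array([0.0, 0.0, 2.0]), np.array([0.4, 0.4, 0.4]))
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
first_view = project_box(corners, np.eye(3), np.zeros(3), K)
# A second camera shifted 10 cm to the right sees the same box further left.
second_view = project_box(corners, np.eye(3), np.array([-0.1, 0.0, 0.0]), K)
```

The 2D annotation for each extra view is then just the convex hull (or axis-aligned bounds) of the projected corners.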
-
Patent number: 11607807
Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At each of many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action.
Type: Grant
Filed: April 14, 2021
Date of Patent: March 21, 2023
Assignee: X DEVELOPMENT LLC
Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
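The per-iteration control loop this abstract describes — score a set of candidate end effector motions against a target placement and execute the highest-probability one — can be sketched as follows. The scorer here is a hypothetical geometric heuristic (prefer motions pointing toward the target), purely a stand-in: the patented predictor is a trained network that also consumes the current image.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_success(image, candidate_action, target):
    """Stand-in for the trained model: returns a probability of successful
    placement for one candidate motion. Hypothetical heuristic only; the
    real predictor is learned and conditions on the image as well."""
    to_target = target - candidate_action["start"]
    cosine = np.dot(candidate_action["motion"], to_target) / (
        np.linalg.norm(candidate_action["motion"]) * np.linalg.norm(to_target) + 1e-9)
    return 1.0 / (1.0 + np.exp(-4.0 * cosine))   # squash to (0, 1)

def select_action(image, candidates, target):
    """One control iteration: score every candidate, keep the most promising."""
    scores = [predict_success(image, c, target) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Sample candidate motions around the current pose and pick the best one;
# a controller would then command the end effector to follow that motion.
start = np.zeros(3)
target = np.array([0.5, 0.2, 0.0])
candidates = [{"start": start, "motion": rng.normal(size=3) * 0.05}
              for _ in range(16)]
best_action, success_prob = select_action(None, candidates, target)
```

Repeating this at every control step yields a closed loop: the camera image changes as the arm moves, so the scores (and the selected motion) are recomputed each iteration.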
-
Publication number: 20220215208
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
Type: Application
Filed: March 23, 2022
Publication date: July 7, 2022
Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
-
Patent number: 11341364
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network that is used to control a robotic agent interacting with a real-world environment.
Type: Grant
Filed: September 20, 2018
Date of Patent: May 24, 2022
Assignee: Google LLC
Inventors: Konstantinos Bousmalis, Alexander Irpan, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Julian Ibarz, Sergey Vladimir Levine, Kurt Konolige, Vincent O. Vanhoucke, Matthew Laurance Kelcey
-
Patent number: 11314987
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
Type: Grant
Filed: November 22, 2019
Date of Patent: April 26, 2022
Assignee: X Development LLC
Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
-
Publication number: 20220105624
Abstract: Techniques are disclosed that enable training a meta-learning model, for use in causing a robot to perform a task, using imitation learning as well as reinforcement learning. Some implementations relate to training the meta-learning model using imitation learning based on one or more human guided demonstrations of the task. Additional or alternative implementations relate to training the meta-learning model using reinforcement learning based on trials of the robot attempting to perform the task. Further implementations relate to using the trained meta-learning model to few-shot (or one-shot) learn a new task based on a human guided demonstration of the new task.
Type: Application
Filed: January 23, 2020
Publication date: April 7, 2022
Inventors: Mrinal Kalakrishnan, Yunfei Bai, Paul Wohlhart, Eric Jang, Chelsea Finn, Seyed Mohammad Khansari Zadeh, Sergey Levine, Allan Zhou, Alexander Herzog, Daniel Kappler
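The inner/outer structure behind "adapt from a demonstration, then meta-train on trials" can be illustrated with a deliberately tiny first-order MAML-style loop. This is a toy stand-in under strong assumptions (a scalar linear model, synthetic tasks defined by a slope, supervised losses in place of both the imitation and reinforcement objectives); the publication's model is a neural network trained on real demonstrations and robot trials.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(w, x, y):
    """Squared error and its gradient for a scalar linear model y ≈ w * x."""
    err = w * x - y
    return np.mean(err ** 2), np.mean(2.0 * err * x)

# First-order MAML-style meta-training: each "task" is a slope a, its
# "demonstration" is supervised data. The inner step mimics adapting from
# one demonstration; the outer step mimics meta-training on new trials.
w, inner_lr, outer_lr = 0.0, 0.1, 0.05
for _ in range(2000):
    a = rng.uniform(0.5, 2.0)                   # sample a task
    x = rng.normal(size=10)
    _, g = loss_grad(w, x, a * x)               # loss on the demonstration
    w_adapted = w - inner_lr * g                # inner step: adapt to the task
    x_trial = rng.normal(size=10)
    _, g_trial = loss_grad(w_adapted, x_trial, a * x_trial)
    w = w - outer_lr * g_trial                  # outer step (first-order)

# Few-shot use: a single inner step from one demo adapts to an unseen task.
a_new = 1.7
x = rng.normal(size=10)
_, g = loss_grad(w, x, a_new * x)
w_adapted_new = w - inner_lr * g
```

The meta-trained initialization `w` sits near the middle of the task family, so one inner gradient step from a single demonstration moves it measurably closer to any specific new task.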
-
Patent number: 11151744
Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene. The camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of the object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
Type: Grant
Filed: September 16, 2019
Date of Patent: October 19, 2021
Assignee: X Development LLC
Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
-
Publication number: 20210229276
Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At each of many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action.
Type: Application
Filed: April 14, 2021
Publication date: July 29, 2021
Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
-
Patent number: 11007642
Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At each of many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action.
Type: Grant
Filed: October 23, 2018
Date of Patent: May 18, 2021
Assignee: X DEVELOPMENT LLC
Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
-
Publication number: 20200279134
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network that is used to control a robotic agent interacting with a real-world environment.
Type: Application
Filed: September 20, 2018
Publication date: September 3, 2020
Inventors: Konstantinos Bousmalis, Alexander Irpan, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Julian Ibarz, Sergey Vladimir Levine, Kurt Konolige, Vincent O. Vanhoucke, Matthew Laurance Kelcey
-
Publication number: 20200167606
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
Type: Application
Filed: November 22, 2019
Publication date: May 28, 2020
Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
-
Publication number: 20200122321
Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At each of many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action.
Type: Application
Filed: October 23, 2018
Publication date: April 23, 2020
Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
-
Patent number: 10417781
Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene. The camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of the object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
Type: Grant
Filed: December 30, 2016
Date of Patent: September 17, 2019
Assignee: X Development LLC
Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
-
Patent number: 9996936
Abstract: A computer-implemented method, apparatus, computer readable medium and mobile device for determining a 6DOF pose from an input image. The process of determining the 6DOF pose may include processing an input image to create one or more static representations of the input image, creating a dynamic representation of the input image from an estimated 6DOF pose and a 2.5D reference map, and measuring correlation between the dynamic representation and the one or more static representations of the input image. The estimated 6DOF pose may be iteratively adjusted according to the measured correlation error until a final adjusted dynamic representation meets an output threshold.
Type: Grant
Filed: May 20, 2016
Date of Patent: June 12, 2018
Assignee: QUALCOMM Incorporated
Inventors: Clemens Arth, Paul Wohlhart, Vincent Lepetit
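The render-correlate-adjust loop in this abstract can be illustrated with a deliberately reduced toy: the "static representation" is the observed image, the "dynamic representation" is rendered from a reference map under the current pose estimate, and the estimate is greedily adjusted to increase correlation. To stay runnable, the "pose" here is only a 2D integer shift and the map is a synthetic blob, both illustrative assumptions; the patent operates on full 6DOF poses rendered from a 2.5D reference map.

```python
import numpy as np

# Synthetic stand-in for the reference map: a smooth blob on a 32x32 grid.
yy, xx = np.mgrid[0:32, 0:32]
reference = np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / 50.0)

def render(pose):
    """Dynamic representation: the reference map shifted by pose = (dx, dy)."""
    dx, dy = pose
    return np.roll(np.roll(reference, dy, axis=0), dx, axis=1)

def correlation(a, b):
    """Normalized cross-correlation between two representations."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

observed = render((5, -3))          # static representation of the input image

pose = np.array([0, 0])             # initial pose estimate
for _ in range(50):                 # iterative adjustment of the estimate
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    best_score, best_move = max(
        (correlation(render(tuple(pose + m)), observed), m) for m in moves)
    if best_move == (0, 0):         # no neighbor improves: estimate converged
        break
    pose = pose + best_move
```

Because the blob makes the correlation landscape smooth and unimodal, greedy neighborhood search recovers the true shift; a real system would refine continuous 6DOF parameters against a learned or rendered correlation measure instead.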
-
Publication number: 20170337690
Abstract: A computer-implemented method, apparatus, computer readable medium and mobile device for determining a 6DOF pose from an input image. The process of determining the 6DOF pose may include processing an input image to create one or more static representations of the input image, creating a dynamic representation of the input image from an estimated 6DOF pose and a 2.5D reference map, and measuring correlation between the dynamic representation and the one or more static representations of the input image. The estimated 6DOF pose may be iteratively adjusted according to the measured correlation error until a final adjusted dynamic representation meets an output threshold.
Type: Application
Filed: May 20, 2016
Publication date: November 23, 2017
Inventors: Clemens Arth, Paul Wohlhart, Vincent Lepetit