Patents by Inventor Stefan Hinterstoisser

Stefan Hinterstoisser has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230398683
    Abstract: Methods and apparatus are described for generating a model of an object that a robot encounters in its environment but cannot recognize using its existing models. The model is generated from vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or in estimating the pose of the object.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 14, 2023
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser
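As an illustrative sketch only (not from the patent; the function names and the planar-pose simplification are hypothetical), merging observations captured from multiple vantages into a single world-frame object model might look like:

```python
import math

def transform(pose, point):
    """Apply a planar rig pose (x, y, theta) to a point observed in the sensor frame."""
    x, y, th = pose
    px, py = point
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

def build_object_model(captures):
    """Merge sensor points captured from multiple vantages into one
    world-frame point set that can serve as a simple object model."""
    model = set()
    for pose, points in captures:
        for p in points:
            wx, wy = transform(pose, p)
            model.add((round(wx, 3), round(wy, 3)))  # de-duplicate on a coarse grid
    return model

# Two vantages observing the same object corner from opposite sides.
captures = [
    ((0.0, 0.0, 0.0), [(1.0, 0.0)]),
    ((2.0, 0.0, math.pi), [(1.0, 0.0)]),
]
print(build_object_model(captures))  # both observations land on (1.0, 0.0)
```

A real system would fuse dense depth data and estimate 6-DoF poses; the sketch only shows the vantage-to-world accumulation step the abstract describes.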
  • Patent number: 11741666
    Abstract: Particular techniques are described for generating synthetic images and/or for training machine learning model(s) based on the generated synthetic images. For example, a machine learning model can be trained on training instances that each include a generated synthetic image and ground truth label(s) for that image. After training of the machine learning model is complete, the trained machine learning model can be deployed on one or more robots and/or one or more computing devices.
    Type: Grant
    Filed: October 26, 2022
    Date of Patent: August 29, 2023
    Assignee: Google LLC
    Inventors: Stefan Hinterstoisser, Hauke Heibel
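A deliberately minimal sketch of the train-on-synthetic-data loop this abstract describes (the renderer, labels, and nearest-centroid "model" are all hypothetical stand-ins, not from the patent):

```python
import random

def render_synthetic(label, rng):
    """Hypothetical renderer: emit a tiny 'image' (a feature pair) whose
    distribution depends on the ground-truth label it was rendered with."""
    base = (0.0, 0.0) if label == "bolt" else (5.0, 5.0)
    return (base[0] + rng.gauss(0, 0.5), base[1] + rng.gauss(0, 0.5))

rng = random.Random(0)
# Each training instance pairs a generated synthetic image with its label.
train = [(render_synthetic(lbl, rng), lbl) for lbl in ("bolt", "nut") * 50]

def fit(instances):
    """Minimal 'machine learning model': one centroid per class."""
    sums = {}
    for (x, y), lbl in instances:
        sx, sy, n = sums.get(lbl, (0.0, 0.0, 0))
        sums[lbl] = (sx + x, sy + y, n + 1)
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def predict(model, img):
    """Deployment step: classify a new image with the trained model."""
    return min(model, key=lambda lbl: (img[0] - model[lbl][0]) ** 2
                                      + (img[1] - model[lbl][1]) ** 2)

model = fit(train)
print(predict(model, (0.2, -0.1)))  # -> 'bolt'
```

The point is the data flow: synthetic images come paired with free ground-truth labels, so no manual annotation is needed before training and deployment.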
  • Patent number: 11727593
    Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene; the camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of the object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are then determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: August 15, 2023
    Assignee: Google LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
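The core idea, annotate once and propagate via known camera poses, can be sketched in a planar toy form (all names hypothetical; a real system would use full 6-DoF poses and camera intrinsics):

```python
import math

def world_to_camera(pose, wp):
    """Express world point wp in the frame of a camera at planar pose (x, y, theta)."""
    x, y, th = pose
    dx, dy = wp[0] - x, wp[1] - y
    return (dx * math.cos(th) + dy * math.sin(th),
            -dx * math.sin(th) + dy * math.cos(th))

def propagate_box(box_world, poses):
    """Carry one bounding region (here: 2D corners in the world frame,
    defined from the first annotated image) into every other camera frame,
    instead of re-annotating each image by hand."""
    return [[world_to_camera(p, c) for c in box_world] for p in poses]

box_world = [(1.0, 1.0), (2.0, 1.0)]              # annotated once, first frame
other_poses = [(0.0, 0.0, 0.0), (3.0, 1.0, math.pi)]
for corners in propagate_box(box_world, other_poses):
    print([(round(cx, 2), round(cy, 2)) for cx, cy in corners])
```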
  • Patent number: 11691273
    Abstract: Methods and apparatus are described for generating a model of an object that a robot encounters in its environment but cannot recognize using its existing models. The model is generated from vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or in estimating the pose of the object.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: July 4, 2023
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser
  • Patent number: 11607809
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for planning robotic movements to perform a given task while satisfying object pose estimation accuracy requirements. One of the methods includes generating a plurality of candidate measurement configurations for measuring an object to be manipulated by a robot; determining respective measurement accuracies for the plurality of candidate measurement configurations; determining a measurement accuracy landscape for the object including defining a high measurement accuracy region based on the respective measurement accuracies for the plurality of candidate measurement configurations; and generating a motion plan for manipulating the object in the robotic process that moves the robot, a sensor, or both, through the high measurement accuracy region when performing pose estimation for the object.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: March 21, 2023
    Assignee: Intrinsic Innovation LLC
    Inventors: Martin Bokeloh, Stefan Hinterstoisser, Olivier Pauly, Hauke Heibel, Martina Marek
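The claimed sequence (score candidate configurations, carve out a high-accuracy region, route the plan through it) can be sketched with a toy one-dimensional accuracy model (the model and all names are hypothetical, not from the patent):

```python
def accuracy(config):
    """Hypothetical accuracy model: pose estimates are best at a
    sweet-spot sensing distance of 0.5 m from the object."""
    distance = config
    return 1.0 / (1.0 + abs(distance - 0.5))

def plan_through_high_accuracy(candidates, threshold=0.8):
    """Score candidate measurement configurations, keep the high-accuracy
    region, and route the motion plan through its best configuration."""
    landscape = {c: accuracy(c) for c in candidates}          # accuracy landscape
    region = [c for c, a in landscape.items() if a >= threshold]
    if not region:
        raise ValueError("no configuration meets the accuracy requirement")
    waypoint = max(region, key=landscape.get)
    return ["approach", f"measure_at_{waypoint}", "manipulate"]

print(plan_through_high_accuracy([0.2, 0.5, 0.9, 1.5]))
```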
  • Publication number: 20230046655
    Abstract: Particular techniques are described for generating synthetic images and/or for training machine learning model(s) based on the generated synthetic images. For example, a machine learning model can be trained on training instances that each include a generated synthetic image and ground truth label(s) for that image. After training of the machine learning model is complete, the trained machine learning model can be deployed on one or more robots and/or one or more computing devices.
    Type: Application
    Filed: October 26, 2022
    Publication date: February 16, 2023
    Inventors: Stefan Hinterstoisser, Hauke Heibel
  • Patent number: 11488351
    Abstract: Particular techniques are described for generating synthetic images and/or for training machine learning model(s) based on the generated synthetic images. For example, a machine learning model can be trained on training instances that each include a generated synthetic image and ground truth label(s) for that image. After training of the machine learning model is complete, the trained machine learning model can be deployed on one or more robots and/or one or more computing devices.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: November 1, 2022
    Assignee: Google LLC
    Inventors: Stefan Hinterstoisser, Hauke Heibel
  • Patent number: 11383380
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: July 12, 2022
    Assignee: Intrinsic Innovation LLC
    Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
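The selection step, choosing among potential grasp points based on the resulting motion path, reduces to a small optimization. A toy sketch (the straight-line cost model and names are hypothetical):

```python
def path_cost(grasp, dropoff):
    """Hypothetical motion cost: straight-line travel from grasp to drop-off."""
    return ((grasp[0] - dropoff[0]) ** 2 + (grasp[1] - dropoff[1]) ** 2) ** 0.5

def select_grasp(potential_grasps, dropoff):
    """Pick the grasp point whose gripper motion path to the drop-off
    location is cheapest, mirroring the abstract's final selection step."""
    return min(potential_grasps, key=lambda g: path_cost(g, dropoff))

grasps = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
print(select_grasp(grasps, dropoff=(2.0, 0.0)))  # -> (2.0, 0.5)
```

A real planner would also check reachability, collisions, and gripper feasibility at each candidate grasp before costing the path.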
  • Publication number: 20220193901
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for planning robotic movements to perform a given task while satisfying object pose estimation accuracy requirements. One of the methods includes generating a plurality of candidate measurement configurations for measuring an object to be manipulated by a robot; determining respective measurement accuracies for the plurality of candidate measurement configurations; determining a measurement accuracy landscape for the object including defining a high measurement accuracy region based on the respective measurement accuracies for the plurality of candidate measurement configurations; and generating a motion plan for manipulating the object in the robotic process that moves the robot, a sensor, or both, through the high measurement accuracy region when performing pose estimation for the object.
    Type: Application
    Filed: December 22, 2020
    Publication date: June 23, 2022
    Inventors: Martin Bokeloh, Stefan Hinterstoisser, Olivier Pauly, Hauke Heibel, Martina Marek
  • Publication number: 20220138535
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing image data. One of the methods includes receiving an input image from a source domain, the input image showing an object to be manipulated by a robot in a robotic process; processing the input image to generate an intermediate representation of the input image, comprising: generating a gradient orientation representation and a gradient magnitude representation of the input image, and generating the intermediate representation of the input image from the gradient orientation representation and the gradient magnitude representation; and processing the intermediate representation of the input image using a neural network trained to make predictions about objects in images to generate a network output that represents a prediction about physical characteristics of the object in the input image.
    Type: Application
    Filed: November 4, 2020
    Publication date: May 5, 2022
    Inventors: Olivier Pauly, Stefan Hinterstoisser, Hauke Heibel, Martina Marek, Martin Bokeloh
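The two-step intermediate representation (gradient orientation plus gradient magnitude, then fused into one network input) can be sketched with central differences on a tiny grayscale grid; function names are illustrative, not from the patent:

```python
import math

def gradients(image):
    """Central-difference gradients of a 2D grayscale image (list of rows)."""
    h, w = len(image), len(image[0])
    orient = [[0.0] * w for _ in range(h)]
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            mag[y][x] = math.hypot(gx, gy)        # gradient magnitude
            orient[y][x] = math.atan2(gy, gx)     # gradient orientation
    return orient, mag

def intermediate_representation(image):
    """Fuse orientation and magnitude into one representation, here by
    simply stacking the two channels per pixel."""
    orient, mag = gradients(image)
    return [[(o, m) for o, m in zip(orow, mrow)]
            for orow, mrow in zip(orient, mag)]

img = [[0, 0, 0, 0],
       [0, 0, 10, 10],
       [0, 10, 10, 10],
       [0, 0, 0, 0]]
rep = intermediate_representation(img)
print(rep[1][1], rep[2][2])
```

Gradient-based representations like this tend to look similar for rendered and real images of the same shape, which is why they help bridge the source-domain gap the abstract targets.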
  • Publication number: 20220058419
    Abstract: Methods and apparatus are described for generating a model of an object that a robot encounters in its environment but cannot recognize using its existing models. The model is generated from vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or in estimating the pose of the object.
    Type: Application
    Filed: November 5, 2021
    Publication date: February 24, 2022
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser
  • Patent number: 11195041
    Abstract: Methods and apparatus are described for generating a model of an object that a robot encounters in its environment but cannot recognize using its existing models. The model is generated from vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or in estimating the pose of the object.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: December 7, 2021
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser
  • Patent number: 11192250
    Abstract: Methods, apparatus, and computer readable media are described that relate to 3D object detection and pose determination and that may increase the robustness and/or efficiency of that detection and pose determination. Some implementations are generally directed to techniques for generating an object model of an object based on model point cloud data of the object. Some implementations are additionally and/or alternatively directed to techniques for applying acquired 3D scene point cloud data to a stored object model of an object to detect the object and/or determine its pose.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: December 7, 2021
    Assignee: X Development LLC
    Inventor: Stefan Hinterstoisser
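Matching scene point cloud data against a stored object model can be sketched with a pose-invariant pair signature in 2D (a heavily simplified, hypothetical stand-in for real point pair feature matching; names are not from the patent):

```python
def pair_signatures(points):
    """Rotation/translation-invariant signature: sorted pairwise distances."""
    sig = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            sig.append(round((dx * dx + dy * dy) ** 0.5, 3))
    return sorted(sig)

def detect(model_points, scene_points, tol=1e-3):
    """Declare a detection when the scene's pair-distance signature
    matches the stored object model's signature."""
    m, s = pair_signatures(model_points), pair_signatures(scene_points)
    return len(m) == len(s) and all(abs(a - b) < tol for a, b in zip(m, s))

model = [(0, 0), (1, 0), (0, 2)]
scene = [(5, 5), (5, 6), (3, 5)]   # same object, rotated 90 degrees and shifted
print(detect(model, scene))  # True
```

Because the signature ignores rotation and translation, the same comparison that detects the object is also the starting point for recovering its pose.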
  • Patent number: 11170581
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a feature extraction neural network to generate domain-invariant feature representations from domain-varying input images. In one aspect, the method includes obtaining a training dataset comprising a first set of target domain images and a second set of real domain images that each have pixel-level alignment with a corresponding target domain image, and training the feature extraction neural network on the training dataset by optimizing an objective function that includes a term that depends on the similarity between the feature representations generated by the network for a pair of aligned target and real domain images.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: November 9, 2021
    Assignee: Intrinsic Innovation LLC
    Inventors: Martina Marek, Stefan Hinterstoisser, Olivier Pauly, Hauke Heibel, Martin Bokeloh
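The similarity term in the objective can be sketched with a toy linear "extractor" applied to a pixel-aligned synthetic/real pair (everything here, weights, images, and names, is a hypothetical illustration):

```python
def extract(image, weights):
    """Toy 'feature extractor': weighted sums over the image's pixels."""
    return [sum(w * p for w, p in zip(wrow, image)) for wrow in weights]

def similarity_loss(feat_a, feat_b):
    """Penalize differing features for a pixel-aligned target/real pair,
    pushing the extractor toward domain-invariant representations."""
    return sum((a - b) ** 2 for a, b in zip(feat_a, feat_b))

weights = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]]
synthetic = [1.0, 2.0, 3.0]          # target-domain rendering
real      = [1.1, 1.9, 3.2]          # pixel-aligned real photo of the same scene
loss = similarity_loss(extract(synthetic, weights), extract(real, weights))
print(round(loss, 4))  # -> 0.0025
```

During training this term is minimized alongside the task loss, so the extractor learns features that cannot tell the two domains apart.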
  • Patent number: 11170220
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for delegating object type and/or pose detection to a plurality of “targeted object recognition modules.” In some implementations, a method may be provided that includes: operating an object recognition client to facilitate object recognition for a robot; receiving, by the object recognition client, sensor data indicative of an observed object in an environment; providing, by the object recognition client, to each of a plurality of remotely-hosted targeted object recognition modules, data indicative of the observed object; receiving, by the object recognition client, from one or more of the plurality of targeted object recognition modules, one or more inferences about an object type or pose of the observed object; and determining, by the object recognition client, information about the observed object, such as its object type and/or pose, based on the one or more inferences.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: November 9, 2021
    Assignee: X Development LLC
    Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser
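The client-side fan-out-and-aggregate flow the abstract describes can be sketched as follows (the module functions and confidence scheme are hypothetical illustrations, not from the patent):

```python
def cup_module(data):
    """Hypothetical remotely hosted module targeted at cups."""
    return {"type": "cup", "confidence": 0.9} if "round" in data else None

def box_module(data):
    """Hypothetical remotely hosted module targeted at boxes."""
    return {"type": "box", "confidence": 0.7} if "flat" in data else None

def recognize(observed, modules):
    """Object recognition client: send the observation to every targeted
    module, collect their inferences, keep the most confident one."""
    inferences = [m(observed) for m in modules]
    inferences = [i for i in inferences if i is not None]
    return max(inferences, key=lambda i: i["confidence"], default=None)

print(recognize({"round", "ceramic"}, [cup_module, box_module]))
```

Real modules would run remotely and receive sensor data rather than tag sets, but the delegation pattern, one client and many specialized recognizers, is the same.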
  • Publication number: 20210327127
    Abstract: Particular techniques are described for generating synthetic images and/or for training machine learning model(s) based on the generated synthetic images. For example, a machine learning model can be trained on training instances that each include a generated synthetic image and ground truth label(s) for that image. After training of the machine learning model is complete, the trained machine learning model can be deployed on one or more robots and/or one or more computing devices.
    Type: Application
    Filed: November 15, 2019
    Publication date: October 21, 2021
    Inventors: Stefan Hinterstoisser, Hauke Heibel
  • Patent number: 11151744
    Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene; the camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, captured from different angles by the camera relative to the scene. A 3D pose of the object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest are then determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: October 19, 2021
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
  • Patent number: 10891484
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: January 12, 2021
    Assignee: X Development LLC
    Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
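Selecting which targeted modules to download, based on signals such as the robot's current task, is a small ranking problem. A toy sketch (library contents and names are hypothetical):

```python
# Hypothetical library of candidate targeted object recognition modules.
LIBRARY = {
    "grasp_mugs":  {"recognizes": {"mug", "cup"}},
    "sort_mail":   {"recognizes": {"envelope", "package"}},
    "clear_table": {"recognizes": {"mug", "plate", "bowl"}},
}

def select_modules(task_objects, library, limit=2):
    """Rank candidate modules by overlap with the objects involved in the
    robot's current task, and 'download' the best-matching ones."""
    scored = sorted(
        library,
        key=lambda name: len(library[name]["recognizes"] & task_objects),
        reverse=True,
    )
    return scored[:limit]

print(select_modules({"mug", "plate"}, LIBRARY))  # clear_table ranks first
```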
  • Publication number: 20200361082
    Abstract: Implementations are directed to training a machine learning model that, once trained, is used in performance of robotic grasping and/or other manipulation task(s) by a robot. The model can be trained using simulated training examples that are based on simulated data that is based on simulated robot(s) attempting simulated manipulations of various simulated objects. At least portions of the model can also be trained based on real training examples that are based on data from real-world physical robots attempting manipulations of various objects. The simulated training examples can be utilized to train the model to predict an output that can be utilized in a particular task—and the real training examples used to adapt at least a portion of the model to the real-world domain can be tailored to a distinct task. In some implementations, domain-adversarial similarity losses are determined during training, and utilized to regularize at least portion(s) of the model.
    Type: Application
    Filed: August 7, 2020
    Publication date: November 19, 2020
    Inventors: Yunfei Bai, Kuan Fang, Stefan Hinterstoisser, Mrinal Kalakrishnan
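The pre-train-on-simulation, adapt-on-real-data pattern can be sketched with a one-parameter model (the data, learning rate, and coefficients are hypothetical; real training would use deep networks and the domain-adversarial losses the abstract mentions):

```python
def train_step(w, x, y, lr):
    """One gradient step on squared error for a one-parameter model y = w * x."""
    return w - lr * 2 * (w * x - y) * x

# Plentiful simulated grasps: the simulator says success scales as 2.0 * feature.
sim_data = [(x / 10, 2.0 * x / 10) for x in range(1, 11)]
# A few real-robot grasps: reality scales slightly differently (2.4 * feature).
real_data = [(0.3, 0.72), (0.8, 1.92)]

w = 0.0
for _ in range(200):                 # pre-train on abundant simulated examples
    for x, y in sim_data:
        w = train_step(w, x, y, lr=0.1)
for _ in range(200):                 # adapt the model on scarce real examples
    for x, y in real_data:
        w = train_step(w, x, y, lr=0.1)
print(round(w, 2))  # converges to the real-world coefficient, 2.4
```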
  • Patent number: 10773382
    Abstract: Implementations are directed to training a machine learning model that, once trained, is used in performance of robotic grasping and/or other manipulation task(s) by a robot. The model can be trained using simulated training examples that are based on simulated data that is based on simulated robot(s) attempting simulated manipulations of various simulated objects. At least portions of the model can also be trained based on real training examples that are based on data from real-world physical robots attempting manipulations of various objects. The simulated training examples can be utilized to train the model to predict an output that can be utilized in a particular task—and the real training examples used to adapt at least a portion of the model to the real-world domain can be tailored to a distinct task. In some implementations, domain-adversarial similarity losses are determined during training, and utilized to regularize at least portion(s) of the model.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: September 15, 2020
    Assignee: X Development LLC
    Inventors: Yunfei Bai, Kuan Fang, Stefan Hinterstoisser, Mrinal Kalakrishnan