Patents by Inventor Kurt Konolige

Kurt Konolige has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200025561
    Abstract: Methods and systems for depth sensing are provided. A system includes a first and second optical sensor each including a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band. The system also includes a computing device configured to (i) identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor; (ii) identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second infrared light image captured by the second optical sensor; and (iii) determine a depth estimate for at least one surface in the environment based on the first corresponding features and the second corresponding features.
    Type: Application
    Filed: September 26, 2019
    Publication date: January 23, 2020
    Inventor: Kurt Konolige
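A minimal sketch of the triangulation step this abstract describes: feature correspondences from the visible-light pair and from the infrared pair are pooled into one disparity estimate, which is converted to depth. The function names, the averaging step, and the numeric parameters are illustrative assumptions, not the patented method.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

def estimate_depth(visible_matches, infrared_matches, focal_px, baseline_m):
    """Pool disparities from both spectral bands, then triangulate.

    Each match is (x_left, x_right): pixel columns of the same feature
    in the first and second sensor's image.
    """
    disparities = [xl - xr for xl, xr in visible_matches + infrared_matches]
    mean_d = sum(disparities) / len(disparities)
    return disparity_to_depth(mean_d, focal_px, baseline_m)

# A surface 2 m away, seen by a 500 px focal-length, 0.1 m baseline rig,
# yields a 25 px disparity in both the visible and infrared bands:
depth = estimate_depth([(120, 95)], [(220, 195)], focal_px=500, baseline_m=0.1)
```

Using both bands lets textured surfaces match in visible light while low-texture surfaces can still match under projected infrared illumination.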
  • Patent number: 10518410
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: December 31, 2019
    Assignee: X Development LLC
    Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
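The selection logic in the abstract above can be sketched as: enumerate candidate grasp points, evaluate a motion path from each to the drop-off location, and pick the grasp whose path scores best. The straight-line "planner" and shortest-path criterion below are illustrative assumptions, not the patented method.

```python
import math

def path_length(grasp_point, drop_off):
    # Placeholder planner: straight-line distance from grasp to drop-off.
    # A real planner would account for obstacles and arm kinematics.
    return math.dist(grasp_point, drop_off)

def select_grasp(candidates, drop_off):
    """Return the candidate grasp point with the shortest motion path."""
    return min(candidates, key=lambda p: path_length(p, drop_off))

# Three candidate grasp points (x, y, z) on an object, in meters:
candidates = [(0.2, 0.1, 0.3), (0.5, 0.4, 0.3), (0.1, 0.0, 0.3)]
best = select_grasp(candidates, drop_off=(0.0, 0.0, 0.3))
```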
  • Patent number: 10488523
    Abstract: An example system includes one or more laser sensors on a robotic device, where the one or more laser sensors are configured to produce laser sensor data indicative of a first area within a first distance in front of the robotic device. The system further includes one or more stereo sensors on the robotic device, where the stereo sensors on the robotic device are configured to produce stereo sensor data indicative of a second area past a second distance in front of the robotic device. The system also includes a controller configured to receive the laser sensor data, receive the stereo sensor data, detect one or more objects in front of the robotic device based on at least one of the laser sensor data and the stereo sensor data, and provide instructions for the robotic device to navigate based on the one or more detected objects.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: November 26, 2019
    Assignee: X Development LLC
    Inventors: Kevin William Watts, Kurt Konolige
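A hedged sketch of the sensor-fusion idea above: the laser covers the near field (within a first distance), stereo covers the far field (past a second distance), and a controller merges the detections to decide whether to proceed. The range representation and stop rule are assumptions for illustration.

```python
def detect_obstacles(laser_ranges, stereo_ranges, near_limit, far_start):
    """Near-field detections come from the laser, far-field from stereo."""
    near = [r for r in laser_ranges if r <= near_limit]
    far = [r for r in stereo_ranges if r >= far_start]
    return sorted(near + far)

def navigation_command(obstacle_ranges, stop_distance):
    """Stop if any detected obstacle is closer than the safety distance."""
    return "stop" if any(r < stop_distance for r in obstacle_ranges) else "go"

# Laser sees returns at 0.8 m and 5.0 m (only the first is in its zone);
# stereo sees obstacles at 3.5 m and 12.0 m:
obstacles = detect_obstacles([0.8, 5.0], [3.5, 12.0],
                             near_limit=2.0, far_start=3.0)
command = navigation_command(obstacles, stop_distance=1.0)
```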
  • Patent number: 10466043
    Abstract: Methods and systems for depth sensing are provided. A system includes a first and second optical sensor each including a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band. The system also includes a computing device configured to (i) identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor; (ii) identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second infrared light image captured by the second optical sensor; and (iii) determine a depth estimate for at least one surface in the environment based on the first corresponding features and the second corresponding features.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: November 5, 2019
    Assignee: X Development LLC
    Inventor: Kurt Konolige
  • Patent number: 10455212
    Abstract: Example implementations relate to determining depth information using stereo sensor data. An example system may include at least one projector coupled to a robotic manipulator and configured to project a texture pattern onto an environment. The system may further include a displacer coupled to the at least one texture projector and configured to repeatedly change a position of the texture pattern within the environment. The system may also include at least two optical sensors configured to capture stereo sensor data for the environment. And the system may include a computing device configured to determine, using the stereo sensor data, an output including a virtual representation of the environment.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: October 22, 2019
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Ethan Rublee
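The benefit of repeatedly displacing the projected texture, as described above, is that textureless surfaces receive different features in each capture, so more pixels end up with a stereo correspondence overall. A toy illustration, with per-pixel disparity maps as dicts (an assumed representation, not from the patent):

```python
import statistics

def fuse_disparities(captures):
    """Per-pixel median over disparity maps from displaced-pattern captures.

    `captures` is a list of {pixel: disparity} dicts; a pixel is absent
    from a capture when the pattern gave it no usable texture there.
    """
    per_pixel = {}
    for disp_map in captures:
        for pixel, d in disp_map.items():
            per_pixel.setdefault(pixel, []).append(d)
    return {pix: statistics.median(ds) for pix, ds in per_pixel.items()}

# Three captures with the pattern at different positions; no single
# capture covers all three pixels, but the fused map does:
captures = [{(0, 0): 10.0, (0, 1): 12.0},
            {(0, 0): 10.2},
            {(0, 0): 9.8, (0, 2): 15.0}]
fused = fuse_disparities(captures)
```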
  • Patent number: 10427296
    Abstract: Methods and apparatus related to receiving a request that includes robot instructions and/or environmental parameters, operating each of a plurality of robots based on the robot instructions and/or in an environment configured based on the environmental parameters, and storing data generated by the robots during the operating. In some implementations, at least part of the stored data that is generated by the robots is provided in response to the request and/or additional data that is generated based on the stored data is provided in response to the request.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: October 1, 2019
    Assignee: X Development LLC
    Inventors: Peter Pastor Sampedro, Mrinal Kalakrishnan, Ali Yahya Valdovinos, Adrian Li, Kurt Konolige, Vincent Dureau
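The request/operate/store flow described above can be sketched as follows. The `Robot` class and its `run()` method are hypothetical stand-ins; how real robots execute instructions and what data they log is not specified by the abstract.

```python
class Robot:
    """Hypothetical stand-in for one robot in the fleet."""
    def __init__(self, robot_id):
        self.robot_id = robot_id

    def run(self, instructions, environment):
        # Execute the requested instructions in the configured environment
        # and return a record of the data generated while operating.
        return {"robot": self.robot_id,
                "instructions": instructions,
                "environment": environment}

def handle_request(robots, instructions, environment):
    """Operate each robot per the request and store the generated data.

    The stored records (or data derived from them) are what gets
    provided back in response to the request.
    """
    return [robot.run(instructions, environment) for robot in robots]

logs = handle_request([Robot(i) for i in range(3)],
                      instructions="grasp-trial",
                      environment={"bins": 4})
```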
  • Patent number: 10417781
    Abstract: Methods for annotating objects within image frames are disclosed. Information is obtained that represents a camera pose relative to a scene. The camera pose includes a position and an orientation of the camera relative to the scene. Data is obtained that represents multiple images, including a first image and a plurality of other images, being captured from different angles by the camera relative to the scene. A 3D pose of the object of interest is identified with respect to the camera pose in at least the first image. A 3D bounding region for the object of interest in the first image is defined, which indicates a volume that includes the object of interest. A location and orientation of the object of interest is determined in the other images based on the defined 3D bounding region of the object of interest and the camera pose in the other images.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: September 17, 2019
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser, Paul Wohlhart
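An illustrative version of the propagation step above: once the object's 3D bounding region is defined in one frame, its coordinates in every other frame follow from the known camera poses. To keep the sketch short, the camera poses here are pure translations (no rotation); that simplification is an assumption, not part of the patent.

```python
def to_camera_frame(point_world, camera_position):
    """Express a world-frame point in a (translation-only) camera frame."""
    return tuple(p - c for p, c in zip(point_world, camera_position))

def annotate_frames(box_corners_world, camera_positions):
    """Per-frame coordinates of the same 3D bounding box corners.

    Annotating the box once in world coordinates yields annotations in
    every frame for free, given each frame's camera pose.
    """
    return [[to_camera_frame(corner, cam) for corner in box_corners_world]
            for cam in camera_positions]

# Two opposite box corners, projected into two camera frames:
corners = [(1.0, 0.0, 4.0), (1.5, 0.5, 4.5)]
frames = annotate_frames(corners, [(0.0, 0.0, 0.0), (0.5, 0.0, 1.0)])
```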
  • Publication number: 20180349725
    Abstract: Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
    Type: Application
    Filed: July 23, 2018
    Publication date: December 6, 2018
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser
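A sketch of the flow above: when no existing model matches an observed object, a new model is built by merging vision-sensor observations captured from multiple vantages. The naive centroid-distance matcher below is an illustrative assumption; the abstract does not specify the recognition method.

```python
def centroid(points):
    """Mean of a set of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def build_model(observations_by_vantage):
    """Merge per-vantage point sets into one model point cloud."""
    return [p for obs in observations_by_vantage for p in obs]

def recognize(points, models, tol=0.1):
    """Return the index of a model whose centroid matches, else None."""
    c = centroid(points)
    for i, m in enumerate(models):
        if all(abs(a - b) <= tol for a, b in zip(c, centroid(m))):
            return i
    return None

# Observations of an unknown object from two vantages:
observations = [[(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], [(1.0, 3.0, 0.0)]]
model = build_model(observations)
```

Once stored, the new model lets the robot recognize the object on later encounters and serves as a reference for pose estimation.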
  • Publication number: 20180243904
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Application
    Filed: May 1, 2018
    Publication date: August 30, 2018
    Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
  • Patent number: 10058995
    Abstract: Methods and apparatus related to receiving a request that includes robot instructions and/or environmental parameters, operating each of a plurality of robots based on the robot instructions and/or in an environment configured based on the environmental parameters, and storing data generated by the robots during the operating. In some implementations, at least part of the stored data that is generated by the robots is provided in response to the request and/or additional data that is generated based on the stored data is provided in response to the request.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: August 28, 2018
    Assignee: X Development LLC
    Inventors: Peter Pastor Sampedro, Mrinal Kalakrishnan, Ali Yahya Valdovinos, Adrian Li, Kurt Konolige, Vincent Dureau
  • Patent number: 10055667
    Abstract: Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: August 21, 2018
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser
  • Patent number: 9987746
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: June 5, 2018
    Assignee: X Development LLC
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee, Troy Straszheim, Hauke Strasdat, Stefan Hinterstoisser, Steve Croft, John Zevenbergen
  • Publication number: 20180107218
    Abstract: An example method includes determining a target area of a ground plane in an environment of a mobile robotic device, where the target area of the ground plane is in front of the mobile robotic device in a direction of travel of the mobile robotic device. The method further includes receiving depth data from a depth sensor on the mobile robotic device. The method also includes identifying a portion of the depth data representative of the target area. The method additionally includes determining that the portion of the depth data lacks information representing at least one section of the target area. The method further includes providing an output signal identifying at least one zone of non-traversable space for the mobile robotic device in the environment, where the at least one zone of non-traversable space corresponds to the at least one section of the target area.
    Type: Application
    Filed: December 14, 2017
    Publication date: April 19, 2018
    Inventors: Kevin William Watts, Kurt Konolige
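The core test in the method above is negative evidence: a section of the target ground area for which the depth sensor returns no data is flagged as non-traversable. A minimal sketch, with section identifiers and hit counts as assumed representations:

```python
def non_traversable_zones(target_sections, depth_hits):
    """Sections of the target ground area with no supporting depth returns.

    `depth_hits` maps section id -> number of depth points observed there.
    A section with no depth data is flagged non-traversable (e.g. a
    drop-off, or a surface that returned no measurements).
    """
    return [s for s in target_sections if depth_hits.get(s, 0) == 0]

# The right-hand section of the target area produced no depth returns:
zones = non_traversable_zones(["left", "center", "right"],
                              {"left": 240, "center": 198})
```

Treating missing data as a hazard is conservative: the robot avoids regions it cannot confirm are solid ground rather than assuming they are safe.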
  • Publication number: 20180093377
    Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
    Type: Application
    Filed: November 30, 2017
    Publication date: April 5, 2018
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
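The two-wavelength design above lets both projectors illuminate the scene simultaneously: a sensor filtered to one band sees only its own random pattern, with no interference from the other. A toy sketch with 1-D patterns; the wavelength values (850/940 nm) and seeded generator are illustrative assumptions.

```python
import random

def random_texture(seed, size=16):
    """Reproducible pseudo-random texture pattern for one projector."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(size)]

def band_filter(scene_by_wavelength, wavelength_nm):
    """A band-filtered optical sensor recovers only its own pattern."""
    return scene_by_wavelength[wavelength_nm]

# Both projectors light the scene at once, at different wavelengths:
scene = {850: random_texture(seed=1), 940: random_texture(seed=2)}
seen_850 = band_filter(scene, 850)
seen_940 = band_filter(scene, 940)
```

Each sensor pair then matches features within its own band, and the computing device combines the correspondences into one depth map.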
  • Publication number: 20180039848
    Abstract: Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
    Type: Application
    Filed: August 3, 2016
    Publication date: February 8, 2018
    Inventors: Kurt Konolige, Nareshkumar Rajkumar, Stefan Hinterstoisser
  • Patent number: 9886035
    Abstract: An example method includes determining a target area of a ground plane in an environment of a mobile robotic device, where the target area of the ground plane is in front of the mobile robotic device in a direction of travel of the mobile robotic device. The method further includes receiving depth data from a depth sensor on the mobile robotic device. The method also includes identifying a portion of the depth data representative of the target area. The method additionally includes determining that the portion of the depth data lacks information representing at least one section of the target area. The method further includes providing an output signal identifying at least one zone of non-traversable space for the mobile robotic device in the environment, where the at least one zone of non-traversable space corresponds to the at least one section of the target area.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: February 6, 2018
    Assignee: X Development LLC
    Inventors: Kevin William Watts, Kurt Konolige
  • Patent number: 9862093
    Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: January 9, 2018
    Assignee: X Development LLC
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
  • Publication number: 20170308086
    Abstract: An example system includes one or more laser sensors on a robotic device, where the one or more laser sensors are configured to produce laser sensor data indicative of a first area within a first distance in front of the robotic device. The system further includes one or more stereo sensors on the robotic device, where the stereo sensors on the robotic device are configured to produce stereo sensor data indicative of a second area past a second distance in front of the robotic device. The system also includes a controller configured to receive the laser sensor data, receive the stereo sensor data, detect one or more objects in front of the robotic device based on at least one of the laser sensor data and the stereo sensor data, and provide instructions for the robotic device to navigate based on the one or more detected objects.
    Type: Application
    Filed: July 6, 2017
    Publication date: October 26, 2017
    Inventors: Kevin William Watts, Kurt Konolige
  • Publication number: 20170261314
    Abstract: Methods and systems for depth sensing are provided. A system includes a first and second optical sensor each including a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band. The system also includes a computing device configured to (i) identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor; (ii) identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second infrared light image captured by the second optical sensor; and (iii) determine a depth estimate for at least one surface in the environment based on the first corresponding features and the second corresponding features.
    Type: Application
    Filed: May 30, 2017
    Publication date: September 14, 2017
    Inventor: Kurt Konolige
  • Patent number: 9746852
    Abstract: An example system includes one or more laser sensors on a robotic device, where the one or more laser sensors are configured to produce laser sensor data indicative of a first area within a first distance in front of the robotic device. The system further includes one or more stereo sensors on the robotic device, where the stereo sensors on the robotic device are configured to produce stereo sensor data indicative of a second area past a second distance in front of the robotic device. The system also includes a controller configured to receive the laser sensor data, receive the stereo sensor data, detect one or more objects in front of the robotic device based on at least one of the laser sensor data and the stereo sensor data, and provide instructions for the robotic device to navigate based on the one or more detected objects.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: August 29, 2017
    Assignee: X Development LLC
    Inventors: Kevin William Watts, Kurt Konolige