Patents by Inventor Gary Bradski

Gary Bradski has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10573042
    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: February 25, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
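None of these filings publish source code, so where an abstract describes a concrete algorithm, a short illustrative sketch follows the entry. Below is a minimal sketch of the fit-compensation idea in patent 10573042: estimate how far the device has slipped from its calibrated resting position by comparing periocular landmark positions, then shift the rendering location to compensate. The landmark inputs, threshold, and compensation rule are assumptions for illustration, not the patent's method.

```python
import numpy as np

def device_slip(ref_landmarks, cur_landmarks):
    """Approximate device slip as the centroid displacement of periocular
    landmarks relative to a calibrated 'normal resting position' fit."""
    return cur_landmarks.mean(axis=0) - ref_landmarks.mean(axis=0)

def render_offset(ref_landmarks, cur_landmarks, threshold_px=5.0):
    """Return a 2D correction for the virtual-object rendering location,
    or zero if the device has not moved enough to matter (threshold assumed)."""
    slip = device_slip(ref_landmarks, cur_landmarks)
    return -slip if np.linalg.norm(slip) > threshold_px else np.zeros(2)

# Example: periocular landmarks from the inward-facing camera, in pixels.
ref = np.array([[100.0, 120.0], [180.0, 118.0], [140.0, 150.0]])
cur = ref + np.array([6.0, -2.0])   # device slipped down the nose
print(render_offset(ref, cur))      # -> [-6.  2.]
```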
  • Patent number: 10518410
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: December 31, 2019
    Assignee: X Development LLC
    Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
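A minimal sketch of the selection step in patent 10518410: score each candidate grasp point by the motion path the gripper would have to follow to the drop-off location, then grip at the cheapest one. The straight-line path model and obstacle penalty are simplifying assumptions for illustration.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the line segment a->b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def path_cost(grasp, drop_off, obstacles, clearance=0.10):
    """Cost of moving from a grasp point to the drop-off location:
    path length plus a penalty for skirting obstacles (assumed model)."""
    cost = np.linalg.norm(drop_off - grasp)
    for obs in obstacles:
        d = point_to_segment(obs, grasp, drop_off)
        if d < clearance:
            cost += (clearance - d) * 100.0  # heavily penalize near-collisions
    return cost

def select_grasp_point(candidates, drop_off, obstacles):
    """Pick the candidate grasp point with the cheapest motion path."""
    return min(candidates, key=lambda g: path_cost(g, drop_off, obstacles))
```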
  • Publication number: 20190316753
    Abstract: A texture projecting light bulb includes an extended light source located within an integrator. The integrator includes at least one aperture configured to allow light to travel out of the interior of the integrator. In various embodiments, the interior of the integrator may be a diffusely reflective surface and the integrator may be configured to produce a uniform light distribution at the aperture to approximate a point source. The integrator may be surrounded by a light bulb enclosure. In various embodiments, the light bulb enclosure may include transparent and opaque regions configured to project a structured pattern of visible and/or infrared light.
    Type: Application
    Filed: May 8, 2019
    Publication date: October 17, 2019
    Inventors: Adrian Kaehler, Gary Bradski
  • Patent number: 10337691
    Abstract: A texture projecting light bulb includes an extended light source located within an integrator. The integrator includes at least one aperture configured to allow light to travel out of the interior of the integrator. In various embodiments, the interior of the integrator may be a diffusely reflective surface and the integrator may be configured to produce a uniform light distribution at the aperture to approximate a point source. The integrator may be surrounded by a light bulb enclosure. In various embodiments, the light bulb enclosure may include transparent and opaque regions configured to project a structured pattern of visible and/or infrared light.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: July 2, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski
  • Publication number: 20190087659
    Abstract: Methods and devices for estimating position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received and estimating a position of the device in the environment based on the identified one or more sparse points.
    Type: Application
    Filed: November 19, 2018
    Publication date: March 21, 2019
    Inventors: Adrian Kaehler, Gary Bradski
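A minimal sketch of the early pose estimate in this family of filings (also patent 10163011 below): as image segments stream in, a sparse point becomes identifiable once the rows covering it have arrived, and the device pose can be solved as soon as enough points are in hand, before the full frame completes. The landmark table, camera intrinsics, and use of OpenCV's EPnP solver are illustrative assumptions.

```python
import numpy as np
import cv2

# Hypothetical map landmarks: (3D position, 2D observation, image row at
# which the landmark's pixels arrive in the segment stream).
LANDMARKS = [
    (np.array([0.0, 0.0, 5.0]), (322.0, 140.0), 140),
    (np.array([1.0, 1.0, 5.0]), (424.0, 138.0), 138),
    (np.array([-1.0, 0.5, 6.0]), (220.0, 190.0), 190),
    (np.array([0.0, 1.0, 5.0]), (320.0, 240.0), 240),
    (np.array([1.0, 0.0, 5.5]), (420.0, 238.0), 238),
]
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])

def pose_from_segment_stream(rows_per_segment=60, image_height=480):
    obj_pts, img_pts = [], []
    for top in range(0, image_height, rows_per_segment):
        # A new segment of rows [top, top + rows_per_segment) just arrived.
        for xyz, uv, row in LANDMARKS:
            if top <= row < top + rows_per_segment:
                obj_pts.append(xyz)     # sparse point now identifiable
                img_pts.append(uv)
        if len(obj_pts) >= 4:           # enough points: solve early
            ok, rvec, tvec = cv2.solvePnP(
                np.asarray(obj_pts, dtype=np.float64),
                np.asarray(img_pts, dtype=np.float64),
                K, None, flags=cv2.SOLVEPNP_EPNP)
            if ok:
                return rvec, tvec       # pose before the full frame arrives
    return None
```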
  • Publication number: 20190034765
    Abstract: Disclosed herein are examples of a wearable display system capable of determining a user interface (UI) event with respect to a virtual UI device (e.g., a button) and a pointer (e.g., a finger or a stylus) using a neural network. The wearable display system can render a representation of the UI device onto an image of the pointer captured when the virtual UI device is shown to the user and the user uses the pointer to interact with the virtual UI device. The representation of the UI device can include concentric shapes (or shapes with similar or the same centers of gravity) of high contrast. The neural network can be trained using training images with representations of virtual UI devices and pointers.
    Type: Application
    Filed: May 31, 2018
    Publication date: January 31, 2019
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
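A minimal sketch of the training-image synthesis step described in the abstract above: render a high-contrast concentric-shape stand-in for the virtual UI device onto the captured pointer image. The ring radii and colors are illustrative choices.

```python
import numpy as np
import cv2

def render_ui_marker(pointer_image, center, radii=(24, 16, 8)):
    """Overlay concentric high-contrast rings (sharing one center) at the
    virtual UI device's location on a captured pointer image, yielding a
    training image for the UI-event network."""
    out = pointer_image.copy()
    for i, r in enumerate(radii):
        color = (255, 255, 255) if i % 2 == 0 else (0, 0, 0)  # alternate
        cv2.circle(out, center, r, color, thickness=-1)       # filled disc
    return out

# Example: one synthetic training frame from a dummy camera image.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
training_image = render_ui_marker(frame, center=(320, 240))
```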
  • Publication number: 20190014310
    Abstract: This disclosure is directed to a hardware system for inverse graphics capture. An inverse graphics capture system (IGCS) captures data regarding a physical space that can be used to generate a photorealistic graphical model of that physical space. In certain approaches, the system includes hardware and accompanying software used to create a photorealistic six degree of freedom (6DOF) graphical model of the physical space. In certain approaches, the system includes hardware and accompanying software used for projection mapping onto the physical space. In certain approaches, the model produced by the IGCS is built using data regarding the geometry, lighting, surfaces, and environment of the physical space. In certain approaches, the model produced by the IGCS is both photorealistic and fully modifiable.
    Type: Application
    Filed: August 6, 2018
    Publication date: January 10, 2019
    Applicant: Arraiy, Inc.
    Inventors: Gary Bradski, Moshe Benezra, Daniel A. Aden, Ethan Rublee
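The abstract names the data an IGCS model is built from (geometry, lighting, surfaces, environment). Below is a minimal sketch of how one capture sample might be bundled; the field names and shapes are assumptions, not the patent's schema.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class CaptureSample:
    """One capture: everything measured from a single 6DOF camera pose."""
    pose: np.ndarray        # 4x4 camera-to-world transform (geometry)
    depth: np.ndarray       # per-pixel range image (geometry/surfaces)
    hdr_image: np.ndarray   # radiance measurement (lighting)
    normals: np.ndarray     # per-pixel surface orientation (surfaces)

@dataclass
class SceneModel:
    """Accumulates samples from which the photorealistic model is built."""
    samples: list = field(default_factory=list)

    def add(self, sample: CaptureSample) -> None:
        self.samples.append(sample)
```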
  • Patent number: 10163011
    Abstract: Methods and devices for estimating position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received and estimating a position of the device in the environment based on the identified one or more sparse points.
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: December 25, 2018
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski
  • Publication number: 20180243904
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Application
    Filed: May 1, 2018
    Publication date: August 30, 2018
    Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
  • Patent number: 10044922
    Abstract: This disclosure is directed to a hardware system for inverse graphics capture. An inverse graphics capture system (IGCS) captures data regarding a physical space that can be used to generate a photorealistic graphical model of that physical space. In certain approaches, the system includes hardware and accompanying software used to create a photorealistic six degree of freedom (6DOF) graphical model of the physical space. In certain approaches, the system includes hardware and accompanying software used for projection mapping onto the physical space. In certain approaches, the model produced by the IGCS is built using data regarding the geometry, lighting, surfaces, and environment of the physical space. In certain approaches, the model produced by the IGCS is both photorealistic and fully modifiable.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: August 7, 2018
    Assignee: Arraiy, Inc.
    Inventors: Gary Bradski, Moshe Benezra, Daniel A. Aden, Ethan Rublee
  • Patent number: 9987746
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: June 5, 2018
    Assignee: X Development LLC
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee, Troy Straszheim, Hauke Strasdat, Stefan Hinterstoisser, Steve Croft, John Zevenbergen
  • Publication number: 20180093377
    Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
    Type: Application
    Filed: November 30, 2017
    Publication date: April 5, 2018
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
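A minimal sketch of the depth-recovery step in this filing (granted as patent 9862093 below): the projected random texture guarantees local contrast, so standard block matching on a stereo pair yields dense disparity, converted to depth with the usual focal-length/baseline relation. This simplification collapses the two wavelength-separated projectors into a single band and uses OpenCV's StereoBM as a stand-in matcher.

```python
import numpy as np
import cv2

def depth_from_textured_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Block-match a stereo pair of a texture-lit scene and convert
    disparity to metric depth: depth = f * B / disparity."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparity scaled by 16.
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return np.where(disp > 0,
                    focal_px * baseline_m / np.maximum(disp, 1e-6), 0.0)

# The projected random texture gives featureless surfaces matchable detail.
texture = (np.random.rand(480, 640) * 255).astype(np.uint8)
left = texture
right = np.roll(texture, -8, axis=1)   # fake an 8-pixel disparity
depth = depth_from_textured_stereo(left, right, focal_px=525.0, baseline_m=0.06)
```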
  • Publication number: 20180096503
    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
    Type: Application
    Filed: September 27, 2017
    Publication date: April 5, 2018
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
  • Publication number: 20180018451
    Abstract: Systems and methods for iris authentication are disclosed. In one aspect, a deep neural network (DNN) with a triplet network architecture can be trained to learn an embedding (e.g., another DNN) that maps from the higher dimensional eye image space to a lower dimensional embedding space. The DNN can be trained with segmented iris images or images of the periocular region of the eye (including the eye and portions around the eye such as eyelids, eyebrows, eyelashes, and skin surrounding the eye). With the triplet network architecture, an embedding space representation (ESR) of a person's eye image can be closer to the ESRs of the person's other eye images than it is to the ESR of another person's eye image. In another aspect, to authenticate a user as an authorized user, an ESR of the user's eye image can be sufficiently close to an ESR of the authorized user's eye image.
    Type: Application
    Filed: April 26, 2017
    Publication date: January 18, 2018
    Inventors: Alexey Spizhevoy, Adrian Kaehler, Gary Bradski
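A minimal sketch of the triplet training and verification idea described above, using PyTorch; the network architecture, margin, and acceptance threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IrisEmbedder(nn.Module):
    """Tiny stand-in for the DNN mapping eye images to the embedding space."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)  # unit norm

model = IrisEmbedder()
loss_fn = nn.TripletMarginLoss(margin=0.2)

# anchor/positive: eye images of the same person; negative: someone else.
anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()  # pulls same-person embeddings together, pushes others apart

def is_authorized(probe, enrolled, threshold=0.6):
    """Accept if the probe embedding is sufficiently close to the
    authorized user's enrolled embedding (threshold assumed)."""
    with torch.no_grad():
        return torch.dist(model(probe), model(enrolled)).item() < threshold
```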
  • Patent number: 9862093
    Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: January 9, 2018
    Assignee: X Development LLC
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
  • Publication number: 20180005034
    Abstract: Methods and devices for estimating position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received and estimating a position of the device in the environment based on the identified one or more sparse points.
    Type: Application
    Filed: May 17, 2017
    Publication date: January 4, 2018
    Inventors: Adrian Kaehler, Gary Bradski
  • Publication number: 20170356620
    Abstract: A texture projecting light bulb includes an extended light source located within an integrator. The integrator includes at least one aperture configured to allow light to travel out of the interior of the integrator. In various embodiments, the interior of the integrator may be a diffusely reflective surface and the integrator may be configured to produce a uniform light distribution at the aperture to approximate a point source. The integrator may be surrounded by a light bulb enclosure. In various embodiments, the light bulb enclosure may include transparent and opaque regions configured to project a structured pattern of visible and/or infrared light.
    Type: Application
    Filed: April 17, 2017
    Publication date: December 14, 2017
    Inventors: Adrian Kaehler, Gary Bradski
  • Patent number: 9707682
    Abstract: Methods and systems for recognizing machine-readable information on three-dimensional (3D) objects are described. A robotic manipulator may move at least one physical object through a designated area in space. As the at least one physical object is being moved through the designated area, one or more optical sensors may determine a location of a machine-readable code on the at least one physical object and, based on the determined location, scan the machine-readable code so as to determine information associated with the at least one physical object encoded in the machine-readable code. Based on the information associated with the at least one physical object, a computing device may then determine a respective location in a physical environment of the robotic manipulator at which to place the at least one physical object. The robotic manipulator may then be directed to place the at least one physical object at the respective location.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: July 18, 2017
    Assignee: X Development LLC
    Inventors: Kurt Konolige, Ethan Rublee, Gary Bradski
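A minimal sketch of the scan-and-route step in patent 9707682: decode a machine-readable code from frames captured while the manipulator sweeps the object through the scan zone, then look up where to place it. The decoder (pyzbar) and the routing table are stand-ins for illustration.

```python
from pyzbar.pyzbar import decode   # off-the-shelf 1D/2D barcode decoder

# Hypothetical routing table: encoded identifier -> placement location (m).
DROP_LOCATIONS = {b"SKU-1042": (1.2, 0.4, 0.0),
                  b"SKU-7731": (0.3, 1.1, 0.0)}

def route_object(frames):
    """Scan frames captured as the object moves through the designated
    area; return the placement location encoded by the first code read."""
    for frame in frames:
        for code in decode(frame):
            if code.data in DROP_LOCATIONS:
                return DROP_LOCATIONS[code.data]
    return None   # nothing decoded: divert to manual handling
```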
  • Patent number: 9630320
    Abstract: Methods and systems for detecting and reconstructing environments to facilitate robotic interaction with such environments are described. An example method may involve determining a three-dimensional (3D) virtual environment representative of a physical environment of the robotic manipulator including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment. The method may then involve determining two-dimensional (2D) images of the virtual environment including 2D depth maps. The method may then involve determining portions of the 2D images that correspond to a given one or more physical objects. The method may then involve determining, based on the portions and the 2D depth maps, 3D models corresponding to the portions. The method may then involve, based on the 3D models, selecting a physical object from the given one or more physical objects. The method may then involve providing an instruction to the robotic manipulator to move that object.
    Type: Grant
    Filed: July 7, 2015
    Date of Patent: April 25, 2017
    Assignee: Industrial Perception, Inc.
    Inventors: Kurt Konolige, Ethan Rublee, Stefan Hinterstoisser, Troy Straszheim, Gary Bradski, Hauke Malte Strasdat
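A minimal sketch of the final selection step in patent 9630320: given a 2D depth map rendered from the virtual environment and a pixel mask per candidate object, pick the object nearest the camera as the one to move. Nearest-first is an assumed criterion, a common proxy for "on top of the pile".

```python
import numpy as np

def select_object_to_move(depth_map, object_masks):
    """Return the id of the candidate object with the smallest mean depth
    over its pixels in the rendered 2D depth map."""
    best_id, best_depth = None, np.inf
    for obj_id, mask in object_masks.items():
        mean_d = depth_map[mask].mean()
        if mean_d < best_depth:
            best_id, best_depth = obj_id, mean_d
    return best_id

# Example: object 'b' sits closer to the camera than 'a'.
depth = np.full((4, 4), 2.0)
depth[0:2, 0:2] = 1.2
masks = {"a": depth > 1.5, "b": depth <= 1.5}
print(select_object_to_move(depth, masks))  # -> 'b'
```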
  • Patent number: 9630321
    Abstract: Example systems and methods allow for dynamic updating of a plan to move objects using a robotic device. One example method includes determining a virtual environment by one or more processors based on sensor data received from one or more sensors, the virtual environment representing a physical environment containing a plurality of physical objects, developing a plan, based on the virtual environment, to cause a robotic manipulator to move one or more of the physical objects in the physical environment, causing the robotic manipulator to perform a first action according to the plan, receiving updated sensor data from the one or more sensors after the robotic manipulator performs the first action, modifying the virtual environment based on the updated sensor data, determining one or more modifications to the plan based on the modified virtual environment, and causing the robotic manipulator to perform a second action according to the modified plan.
    Type: Grant
    Filed: December 10, 2015
    Date of Patent: April 25, 2017
    Assignee: Industrial Perception, Inc.
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee, Troy Straszheim, Hauke Strasdat, Stefan Hinterstoisser
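A minimal sketch of the sense-plan-act-replan loop in patent 9630321; `sensors`, `robot`, and `planner` are hypothetical interfaces standing in for the systems the abstract names.

```python
def move_objects(sensors, robot, planner):
    """Develop a plan from a virtual environment, execute one action at a
    time, and revise the plan as updated sensor data changes the model."""
    env = planner.build_virtual_environment(sensors.read())
    plan = planner.make_plan(env)
    while plan.has_actions():
        robot.execute(plan.next_action())                 # perform one action
        env = planner.update_environment(env, sensors.read())
        plan = planner.revise_plan(plan, env)             # adapt the plan
    return env
```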