Patents by Inventor Gary Bradski

Gary Bradski has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210187736
    Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
    Type: Application
    Filed: March 9, 2021
    Publication date: June 24, 2021
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
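
The entry above (and its granted counterpart, patent 10967506 below) recovers depth by matching features between two views of a scene onto which random texture has been projected. Here is a minimal sketch of the correspondence-and-depth step using OpenCV's semi-global block matcher; the file names and calibration numbers are placeholders, not values from the patent.

```python
import cv2
import numpy as np

# Rectified grayscale views from the two optical sensors. The projected
# random texture gives otherwise featureless surfaces detail to match on.
left = cv2.imread("left_textured.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_textured.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range; must be divisible by 16
    blockSize=7,
    P1=8 * 7 * 7,         # smoothness penalty, small disparity changes
    P2=32 * 7 * 7,        # smoothness penalty, large disparity changes
)
# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Convert disparity to depth: Z = f * B / d, with focal length f (pixels)
# and baseline B (meters) from the stereo calibration (placeholders here).
f, B = 700.0, 0.06
with np.errstate(divide="ignore"):
    depth = np.where(disparity > 0, f * B / disparity, 0.0)
```
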
  • Patent number: 11004203
    Abstract: Systems and methods for user guided iterative frame and scene segmentation are disclosed herein. The systems and methods can rely on overtraining a segmentation network on a frame. A disclosed method includes selecting a frame from a scene and generating a frame segmentation using the frame and a segmentation network. The method also includes displaying the frame and frame segmentation overlaid on the frame, receiving a correction input on the frame, and training the segmentation network using the correction input. The method includes overtraining the segmentation network for the scene by iterating the above steps on the same frame or a series of frames from the scene.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: May 11, 2021
    Assignee: Matterport, Inc.
    Inventor: Gary Bradski
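
A hedged sketch of the overtraining loop this abstract describes: the same frame is segmented, displayed with the segmentation overlaid, corrected by the user, and used to update the network until it converges on that scene. `get_correction` and the tensor shapes are illustrative stand-ins, not the patent's interface.

```python
import torch
import torch.nn.functional as F

def overtrain_on_frame(model, optimizer, frame, n_iters=20):
    """frame: (1, 3, H, W) image tensor; model emits per-pixel class logits."""
    for _ in range(n_iters):
        logits = model(frame)                   # (1, C, H, W)
        # Display logits.argmax(1) overlaid on the frame, then collect a
        # sparse user correction: a (1, H, W) long tensor, -1 = "no input".
        labels = get_correction(frame, logits)  # hypothetical UI call
        loss = F.cross_entropy(logits, labels, ignore_index=-1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```
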
  • Patent number: 10997448
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-collinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also includes providing a set of measures to each of the points in the set of at least four non-collinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: May 4, 2021
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
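
A rough sketch of the registration idea from this entry (which also appears as publication 20200364482 below), under the assumption that the geometric reference object is a square of known side length lying in the same plane as the visual feature. The square's corners fix a metric coordinate system, and detected feature points are then measured in it; all coordinates below are made up.

```python
import cv2
import numpy as np

SIDE = 0.10  # known side length of the reference square, meters

# Pixel corners of the reference square, ordered to match `world`.
pix_corners = np.array([[412, 300], [640, 297], [645, 522], [409, 527]],
                       dtype=np.float32)
world = np.array([[0, 0], [SIDE, 0], [SIDE, SIDE], [0, SIDE]],
                 dtype=np.float32)
H, _ = cv2.findHomography(pix_corners, world)

# At least four non-collinear feature points from the visual feature,
# e.g. ORB keypoint locations (hypothetical detections).
feat_pix = np.array([[[500, 340]], [[530, 410]], [[470, 450]], [[555, 360]]],
                    dtype=np.float32)
# Measures to save in memory: feature points in the derived coordinates.
feat_world = cv2.perspectiveTransform(feat_pix, H)
```
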
  • Publication number: 20210110020
    Abstract: Systems and methods for iris authentication are disclosed. In one aspect, a deep neural network (DNN) with a triplet network architecture can be trained to learn an embedding (e.g., another DNN) that maps from the higher dimensional eye image space to a lower dimensional embedding space. The DNN can be trained with segmented iris images or images of the periocular region of the eye (including the eye and portions around the eye such as eyelids, eyebrows, eyelashes, and skin surrounding the eye). With the triplet network architecture, an embedding space representation (ESR) of a person's eye image can be closer to the ESRs of the person's other eye images than it is to the ESR of another person's eye image. In another aspect, to authenticate a user as an authorized user, an ESR of the user's eye image can be sufficiently close to an ESR of the authorized user's eye image.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Alexey Spizhevoy, Adrian Kaehler, Gary Bradski
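
A minimal sketch of the triplet training objective described above (the invention also appears as granted patent 10922393 below): an embedding network is pushed to place two images of the same eye closer together than images of different eyes. The tiny CNN here is a toy placeholder, not the disclosed architecture.

```python
import torch
import torch.nn as nn

embed = nn.Sequential(               # eye image -> low-dimensional embedding
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)
loss_fn = nn.TripletMarginLoss(margin=0.2)

# anchor/positive: same person's eye; negative: a different person's eye.
anchor, positive, negative = (torch.randn(8, 1, 128, 128) for _ in range(3))
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()
# At authentication time, a user is accepted when the distance between the
# probe embedding and the enrolled user's embedding falls below a threshold.
```
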
  • Patent number: 10967506
    Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: April 6, 2021
    Assignee: X Development LLC
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
  • Patent number: 10922393
    Abstract: Systems and methods for iris authentication are disclosed. In one aspect, a deep neural network (DNN) with a triplet network architecture can be trained to learn an embedding (e.g., another DNN) that maps from the higher dimensional eye image space to a lower dimensional embedding space. The DNN can be trained with segmented iris images or images of the periocular region of the eye (including the eye and portions around the eye such as eyelids, eyebrows, eyelashes, and skin surrounding the eye). With the triplet network architecture, an embedding space representation (ESR) of a person's eye image can be closer to the ESRs of the person's other eye images than it is to the ESR of another person's eye image. In another aspect, to authenticate a user as an authorized user, an ESR of the user's eye image can be sufficiently close to an ESR of the authorized user's eye image.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: February 16, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Alexey Spizhevoy, Adrian Kaehler, Gary Bradski
  • Publication number: 20200364482
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-collinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also includes providing a set of measures to each of the points in the set of at least four non-collinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
  • Publication number: 20200364521
    Abstract: Trained networks configured to detect fiducial elements in encodings of images and associated methods are disclosed. One method includes instantiating a trained network with a set of internal weights which encode information regarding a class of fiducial elements, applying an encoding of an image to the trained network where the image includes a fiducial element from the class of fiducial elements, generating an output of the trained network based on the set of internal weights of the network and the encoding of the image, and providing a position for at least one fiducial element in the image based on the output. Methods of training such networks are also disclosed.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
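
A speculative sketch of the detection step in the entry above: a trained network maps an image encoding to a heatmap whose peak gives the fiducial's position. The heatmap output format is my assumption; the patent only requires that the output layer provide a position based on the internal weights.

```python
import torch

def locate_fiducial(model, image):
    """image: (1, 3, H, W) tensor; model returns a (1, 1, H, W) heatmap."""
    with torch.no_grad():
        heat = model(image)
    flat_idx = torch.argmax(heat)        # peak response over all pixels
    h, w = heat.shape[-2:]
    y, x = divmod(flat_idx.item(), w)
    return x, y                          # pixel position of the fiducial
```
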
  • Publication number: 20200364900
    Abstract: Systems and methods for point marking using virtual fiducial elements are disclosed. An example method includes placing a set of fiducial elements in a locale or on an object and capturing a set of calibration images using an imager. The set of fiducial elements is fully represented in the set of calibration images. The method also includes generating a three-dimensional geometric model of the set of fiducial elements using the set of calibration images. The method also includes capturing a run time image of the locale or object. The run time image does not include a selected fiducial element, from the set of fiducial elements, which was removed from a location in the locale or on the object prior to capturing the run time image. The method concludes with identifying the location relative to the run time image using the run time image and the three-dimensional geometric model.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
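
A hedged illustration of the run-time step above: the removed fiducial's 3D position survives in the calibration-time model, so once the camera pose is recovered from the fiducials that remain visible, the missing one can be re-projected into the run-time image. Intrinsics and coordinates below are placeholders.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # intrinsics

# 3D model of the fiducials still visible at run time (planar layout),
# and their detected pixel locations in the run-time image.
model_pts = np.array([[0, 0, 0], [0.5, 0, 0], [0.5, 0.4, 0],
                      [0, 0.4, 0]], dtype=np.float64)
image_pts = np.array([[100, 90], [520, 95], [515, 400],
                      [105, 395]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)

# 3D location (from the calibration model) of the fiducial that was removed.
removed = np.array([[0.25, 0.2, 0.0]])
proj, _ = cv2.projectPoints(removed, rvec, tvec, K, None)
print("virtual fiducial at pixel", proj.ravel())
```
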
  • Publication number: 20200364877
    Abstract: Systems and methods for frame and scene segmentation are disclosed herein. A disclosed method includes providing a frame of a scene. The scene includes a scene background. The method also includes providing a model of the scene background. The method also includes determining a frame background using the model and subtracting the frame background from the frame to obtain an approximate segmentation. The method also includes training a segmentation network using the approximate segmentation.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Ethan Rublee
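
A minimal sketch of the approach above, assuming the scene-background model is a per-pixel statistical model like OpenCV's MOG2 subtractor (the patent does not name one). The approximate masks it emits then serve as noisy training targets for the segmentation network.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
cap = cv2.VideoCapture("scene.mp4")      # placeholder clip of the scene

approx_masks = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)       # frame minus modeled background
    approx_masks.append(mask)            # later: train the network on these
cap.release()
```
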
  • Publication number: 20200364871
    Abstract: Systems and methods for user guided iterative frame and scene segmentation are disclosed herein. The systems and methods can rely on overtraining a segmentation network on a frame. A disclosed method includes selecting a frame from a scene and generating a frame segmentation using the frame and a segmentation network. The method also includes displaying the frame and frame segmentation overlaid on the frame, receiving a correction input on the frame, and training the segmentation network using the correction input. The method includes overtraining the segmentation network for the scene by iterating the above steps on the same frame or a series of frames from the scene.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventor: Gary Bradski
  • Publication number: 20200364878
    Abstract: Systems and methods for frame and scene segmentation are disclosed herein. One method includes associating a first primary element from a first frame with a background tag, associating a second primary element from the first frame with a subject tag, generating a background texture using the first primary element, generating a foreground texture using the second primary element, and combining the background texture and the foreground texture into a synthesized frame. The method also includes training a segmentation network using the background tag, the subject tag, and the synthesized frame.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Prasanna Krishnasamy, Mona Fathollahi, Michael Tetelman
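
An illustrative take on the frame synthesis above: the tagged subject pixels are composited over a texture built from the tagged background pixels, and the alpha mask doubles as the training label. The shapes and helper signature are assumptions, not the patent's interface.

```python
import numpy as np

def synthesize(subject_rgba, background_texture):
    """subject_rgba: (H, W, 4) floats in [0, 1]; texture: (H, W, 3)."""
    alpha = subject_rgba[..., 3:4]
    frame = alpha * subject_rgba[..., :3] + (1.0 - alpha) * background_texture
    label = (alpha[..., 0] > 0.5).astype(np.uint8)   # 1 = subject pixel
    return frame, label   # synthesized frame + per-pixel supervision
```
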
  • Publication number: 20200364895
    Abstract: A trained network for point tracking includes an input layer configured to receive an encoding of an image. The image is of a locale or object on which the network has been trained. The network also includes a set of internal weights which encode information associated with the locale or object, and a tracked point therein or thereon. The network also includes an output layer configured to provide an output based on the image as received at the input layer and the set of internal weights. The output layer includes a point tracking node that tracks the tracked point in the image. The point tracking node can track the point by generating coordinates for the tracked point in an input image of the locale or object. Methods of specifying and training the network using a three-dimensional model of the locale or object are also disclosed.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
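
A skeletal version of the network described above: a backbone whose output layer includes a node (here a two-unit head) that emits coordinates for the tracked point. The backbone is a stand-in; the patent specifies training from a three-dimensional model of the locale or object, which is not shown here.

```python
import torch
import torch.nn as nn

class PointTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(      # encodes the input image
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.point_head = nn.Linear(64, 2)  # (x, y) of the tracked point

    def forward(self, image):               # image: (N, 3, H, W)
        return self.point_head(self.backbone(image))
```
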
  • Publication number: 20200364913
    Abstract: Systems and methods for user guided iterative frame segmentation are disclosed herein. A disclosed method includes providing a ground truth segmentation, synthesizing a failed segmentation from the ground truth segmentation, synthesizing a correction input for the failed segmentation using the ground truth segmentation, and conducting a supervised training routine for the segmentation network. The routine uses the failed segmentation and correction input as a segmentation network input and the ground truth segmentation as a supervisory output.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventor: Gary Bradski
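
One plausible reading of the synthesis step above: erode the ground truth to fake a failed segmentation, then place a correction "click" where the two masks disagree. The morphology-based failure model is my choice for illustration, not the patent's.

```python
import cv2
import numpy as np

def synthesize_training_pair(gt_mask):
    """gt_mask: (H, W) uint8 in {0, 1} ground-truth segmentation."""
    kernel = np.ones((15, 15), np.uint8)
    failed = cv2.erode(gt_mask, kernel)       # simulated under-segmentation
    error = (gt_mask != failed).astype(np.uint8)
    ys, xs = np.nonzero(error)
    click = (int(xs.mean()), int(ys.mean())) if len(xs) else None
    # (failed, click) become the network input; gt_mask stays the target.
    return failed, click
```
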
  • Publication number: 20200364873
    Abstract: Methods and systems are disclosed herein for using importance sampling to modify the training procedure of a segmentation network. A disclosed method includes segmenting an image using a trainable directed graph to generate a segmentation, displaying the segmentation, receiving a first selection directed to the segmentation, and modifying a training procedure for the trainable directed graph using the first selection. In a more specific method, the training procedure alters a set of trainable values associated with the trainable directed graph based on a delta between the segmentation and a ground truth segmentation, the first selection is spatially indicative with respect to the segmentation, and the delta is calculated based on the first selection.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Ethan Rublee, Mona Fathollahi, Michael Tetelman, Ian Meeder, Varsha Vivek, William Nguyen
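
A hedged sketch of how the spatially indicative selection could reweight the delta: pixels inside the user's selection count more when computing the loss between the segmentation and the ground truth. The 10x boost is an arbitrary illustration.

```python
import torch
import torch.nn.functional as F

def weighted_seg_loss(logits, target, selection_mask, boost=10.0):
    """logits: (N, C, H, W); target: (N, H, W) long;
    selection_mask: (N, H, W), 1 inside the user's selection."""
    per_pixel = F.cross_entropy(logits, target, reduction="none")
    weights = 1.0 + (boost - 1.0) * selection_mask.float()
    return (per_pixel * weights).mean()
```
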
  • Publication number: 20200250872
    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to the rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
    Type: Application
    Filed: February 3, 2020
    Publication date: August 6, 2020
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
  • Publication number: 20200232622
    Abstract: A texture projecting light bulb includes an extended light source located within an integrator. The integrator includes at least one aperture configured to allow light to travel out of the interior of the integrator. In various embodiments, the interior of the integrator may be a diffusely reflective surface and the integrator may be configured to produce a uniform light distribution at the aperture to approximate a point source. The integrator may be surrounded by a light bulb enclosure. In various embodiments, the light bulb enclosure may include transparent and opaque regions configured to project a structured pattern of visible and/or infrared light.
    Type: Application
    Filed: April 6, 2020
    Publication date: July 23, 2020
    Inventors: Adrian Kaehler, Gary Bradski
  • Patent number: 10666929
    Abstract: This disclosure is directed to a hardware system for inverse graphics capture. An inverse graphics capture system (IGCS) captures data regarding a physical space that can be used to generate a photorealistic graphical model of that physical space. In certain approaches, the system includes hardware and accompanying software used to create a photorealistic six degree of freedom (6DOF) graphical model of the physical space. In certain approaches, the system includes hardware and accompanying software used for projection mapping onto the physical space. In certain approaches, the model produced by the IGCS is built using data regarding the geometry, lighting, surfaces, and environment of the physical space. In certain approaches, the model produced by the IGCS is both photorealistic and fully modifiable.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: May 26, 2020
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Moshe Benezra, Daniel A. Aden, Ethan Rublee
  • Patent number: 10612749
    Abstract: A texture projecting light bulb includes an extended light source located within an integrator. The integrator includes at least one aperture configured to allow light to travel out of the interior of the integrator. In various embodiments, the interior of the integrator may be a diffusely reflective surface and the integrator may be configured to produce a uniform light distribution at the aperture to approximate a point source. The integrator may be surrounded by a light bulb enclosure. In various embodiments, the light bulb enclosure may include transparent and opaque regions configured to project a structured pattern of visible and/or infrared light.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: April 7, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski
  • Publication number: 20200078938
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Application
    Filed: November 18, 2019
    Publication date: March 12, 2020
    Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
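
A toy version of the selection logic in the entry above: candidate grasp points are scored by the cost of the motion path they induce, and the cheapest reachable one wins. `plan_path` and `path_cost` are hypothetical stand-ins for the robot's motion planner, not an API from the patent.

```python
def select_grasp_point(candidates, drop_off, plan_path, path_cost):
    """Pick the grasp point whose path to the drop-off location is cheapest."""
    best_point, best_cost = None, float("inf")
    for point in candidates:
        path = plan_path(start=point, goal=drop_off)  # may return None
        if path is None:
            continue                                  # unreachable grasp
        cost = path_cost(path)
        if cost < best_cost:
            best_point, best_cost = point, cost
    return best_point
```
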