Patents by Inventor Gary Bradski

Gary Bradski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134200
    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
    Type: Application
    Filed: December 29, 2023
    Publication date: April 25, 2024
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
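
The relative-position idea in the entry above (shared by publication 20240134200 and the related grants below) can be illustrated with a small sketch. This is not Magic Leap's implementation; it assumes a periocular landmark detector is available and only shows how calibrated versus current landmark positions might yield a compensating rendering offset.

```python
# Minimal sketch (not the patented method) of estimating how far a headset
# has shifted from its calibrated resting position using periocular
# landmarks from an inward-facing camera. The landmark detector itself is
# assumed; its output is taken as input here.
import numpy as np

def relative_position(calib_pts, current_pts):
    """Estimate 2D translation and scale between calibrated and current
    periocular landmarks (N x 2 arrays in image coordinates)."""
    calib_c, cur_c = calib_pts.mean(axis=0), current_pts.mean(axis=0)
    translation = cur_c - calib_c
    scale = (np.linalg.norm(current_pts - cur_c, axis=1).mean()
             / np.linalg.norm(calib_pts - calib_c, axis=1).mean())
    return translation, scale

def render_offset(translation, threshold_px=3.0):
    """Shift virtual content to compensate when the deviation is large."""
    if np.linalg.norm(translation) < threshold_px:
        return np.zeros(2)          # device close to its resting position
    return -translation             # counter-shift the rendering location

calib = np.array([[100, 120], [160, 118], [130, 150]], dtype=float)
current = calib + np.array([4.0, -2.0])          # simulated slippage
t, s = relative_position(calib, current)
print("translation:", t, "scale:", s, "render offset:", render_offset(t))
```
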
  • Patent number: 11906742
    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: February 20, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
  • Patent number: 11775788
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also includes providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: October 3, 2023
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
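
A hedged sketch of the registration flow described in patent 11775788, using OpenCV. The reference-object size, corner locations, and feature points below are invented for illustration; the actual method is defined by the claims, not by this code.

```python
# Derive a coordinate system from a geometric reference object of known
# size, express arbitrary feature points in that system, and save the
# resulting measures so the feature can later serve as a fiducial.
import numpy as np
import cv2

# Known geometry of the reference object (a 10 cm square) in its own frame.
object_corners = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=np.float32)
# Where those corners were observed in the captured image (pixels, illustrative).
image_corners = np.array([[210, 305], [410, 300], [415, 500], [205, 505]], dtype=np.float32)

# Homography mapping image pixels into the object's coordinate system.
H, _ = cv2.findHomography(image_corners, object_corners)

# At least four non-colinear feature points detected on the visual feature.
feature_pts_img = np.array([[250, 350], [380, 340], [300, 460], [360, 430]], dtype=np.float32)
feature_pts_obj = cv2.perspectiveTransform(feature_pts_img.reshape(-1, 1, 2), H).reshape(-1, 2)

# The registered measures: feature point locations in object units (cm).
np.save("registered_feature.npy", feature_pts_obj)
print(feature_pts_obj)
```
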
  • Patent number: 11765339
    Abstract: Methods and devices for estimating position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received and estimating a position of the device in the environment based on the identified one or more sparse points.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: September 19, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski
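
One plausible, simplified reading of the sparse-point approach in patent 11765339 (and the related entries below) is sketched here: detect sparse points inside each image segment as it arrives, rather than waiting for the full frame, then estimate pose with PnP once enough correspondences exist. The data-association step and camera intrinsics are placeholders.

```python
# Illustrative sketch, not the patented pipeline: incremental sparse-point
# detection per image segment, followed by PnP pose estimation.
import numpy as np
import cv2

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # placeholder intrinsics

map_points_3d = []    # known 3D positions of recognized sparse points
image_points_2d = []  # their 2D observations, accumulated per segment

def on_segment(segment, row_offset, associate):
    """Detect sparse points in one horizontal image segment as soon as it
    arrives; `associate` maps a descriptor to a known 3D map point or None."""
    orb = cv2.ORB_create(nfeatures=50)
    kps, descs = orb.detectAndCompute(segment, None)
    for kp, d in zip(kps or [], descs if descs is not None else []):
        pt3d = associate(d)                      # placeholder data association
        if pt3d is not None:
            image_points_2d.append([kp.pt[0], kp.pt[1] + row_offset])
            map_points_3d.append(pt3d)

def estimate_pose():
    """Run PnP once enough sparse points have been identified."""
    if len(map_points_3d) < 4:
        return None
    ok, rvec, tvec = cv2.solvePnP(np.array(map_points_3d, dtype=np.float32),
                                  np.array(image_points_2d, dtype=np.float32),
                                  K, None)
    return (rvec, tvec) if ok else None
```
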
  • Patent number: 11734827
    Abstract: Systems and methods for user guided iterative frame and scene segmentation are disclosed herein. The systems and methods can rely on overtraining a segmentation network on a frame. A disclosed method includes selecting a frame from a scene and generating a frame segmentation using the frame and a segmentation network. The method also includes displaying the frame and frame segmentation overlain on the frame, receiving a correction input on the frame, and training the segmentation network using the correction input. The method includes overtraining the segmentation network for the scene by iterating the above steps on the same frame or a series of frames from the scene.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: August 22, 2023
    Assignee: Matterport, Inc.
    Inventor: Gary Bradski
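
The interactive overtraining loop in patent 11734827 might look roughly like the following PyTorch sketch. The segmentation model, optimizer, and the get_user_correction UI callback are assumptions, not part of the patent text.

```python
# Sketch: segment one frame, show the overlay, take the user's correction,
# train on it, and repeat, deliberately overfitting to this scene.
import torch
import torch.nn.functional as F

def overtrain_on_frame(model, optimizer, frame, get_user_correction, iters=10):
    for _ in range(iters):
        logits = model(frame)                              # 1 x C x H x W
        prediction = logits.argmax(dim=1)                  # overlay this on the frame
        corrected = get_user_correction(frame, prediction) # 1 x H x W label map
        if corrected is None:                              # user accepted the mask
            break
        loss = F.cross_entropy(logits, corrected)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```
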
  • Patent number: 11568035
    Abstract: Systems and methods for iris authentication are disclosed. In one aspect, a deep neural network (DNN) with a triplet network architecture can be trained to learn an embedding (e.g., another DNN) that maps from the higher dimensional eye image space to a lower dimensional embedding space. The DNN can be trained with segmented iris images or images of the periocular region of the eye (including the eye and portions around the eye such as eyelids, eyebrows, eyelashes, and skin surrounding the eye). With the triplet network architecture, an embedding space representation (ESR) of a person's eye image can be closer to the ESRs of the person's other eye images than it is to the ESR of another person's eye image. In another aspect, to authenticate a user as an authorized user, an ESR of the user's eye image can be sufficiently close to an ESR of the authorized user's eye image.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: January 31, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Alexey Spizhevoy, Adrian Kaehler, Gary Bradski
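
A minimal sketch of the triplet-embedding idea in patent 11568035, assuming a toy PyTorch embedder; the actual DNN architecture, image sizes, and distance threshold are assumptions.

```python
# Train an embedding with a triplet loss so images of the same eye land
# closer together than images of different eyes, then authenticate by
# thresholding embedding distance.
import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))  # toy stand-in for the DNN
triplet = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(embedder.parameters(), lr=1e-3)

def train_step(anchor, positive, negative):
    """anchor/positive are the same person's eye; negative is someone else's."""
    loss = triplet(embedder(anchor), embedder(positive), embedder(negative))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def authenticate(probe_img, enrolled_embedding, threshold=0.5):
    """Accept if the probe embedding is sufficiently close to the enrolled one."""
    d = torch.norm(embedder(probe_img) - enrolled_embedding)
    return d.item() < threshold
```
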
  • Patent number: 11383380
    Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: July 12, 2022
    Assignee: Intrinsic Innovation LLC
    Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
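
An illustrative sketch of motion-path-based grasp selection as described in patent 11383380. The straight-line planner and length-based cost are stand-ins; a real system would also score reachability, collisions, and gripper constraints.

```python
# Score candidate grasp points by the motion path they imply to the
# drop-off location and choose the best-scoring one.
import numpy as np

def plan_path(grasp_pt, dropoff):
    """Stand-in planner: a straight-line path sampled at 10 waypoints."""
    return np.linspace(grasp_pt, dropoff, num=10)

def path_cost(path):
    """Stand-in cost: total path length."""
    return np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

def select_grasp(candidate_grasps, dropoff):
    paths = [plan_path(g, dropoff) for g in candidate_grasps]
    costs = [path_cost(p) for p in paths]
    best = int(np.argmin(costs))
    return candidate_grasps[best], paths[best]

candidates = np.array([[0.4, 0.1, 0.2], [0.5, 0.0, 0.25], [0.45, 0.2, 0.18]])
grasp, path = select_grasp(candidates, dropoff=np.array([0.0, 0.6, 0.3]))
print("chosen grasp point:", grasp)
```
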
  • Patent number: 11379992
    Abstract: Systems and methods for frame and scene segmentation are disclosed herein. One method includes associating a first primary element from a first frame with a background tag, associating a second primary element from the first frame with a subject tag, generating a background texture using the first primary element, generating a foreground texture using the second primary element, and combining the background texture and the foreground texture into a synthesized frame. The method also includes training a segmentation network using the background tag, the subject tag, and the synthesized frame.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: July 5, 2022
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Prasanna Krishnasamy, Mona Fathollahi, Michael Tetelman
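
A simplified sketch of the frame-synthesis idea in patent 11379992: composite a foreground (subject-tagged) texture over a background texture and keep the compositing mask as the training label. The flat textures below are placeholders.

```python
# Paste a foreground patch onto a background texture and return the
# synthesized frame together with its ground-truth foreground mask,
# which together form a training pair for a segmentation network.
import numpy as np

def synthesize(background_texture, foreground_patch, top_left):
    frame = background_texture.copy()
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    y, x = top_left
    h, w = foreground_patch.shape[:2]
    frame[y:y + h, x:x + w] = foreground_patch
    mask[y:y + h, x:x + w] = 1           # 1 = subject / foreground
    return frame, mask

bg = np.zeros((240, 320, 3), dtype=np.uint8) + 40    # flat stand-in background texture
fg = np.full((60, 40, 3), 200, dtype=np.uint8)       # stand-in subject patch
frame, mask = synthesize(bg, fg, top_left=(90, 140))
print(frame.shape, int(mask.sum()), "foreground pixels")
```
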
  • Publication number: 20220101004
    Abstract: Methods and devices for estimating position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received and estimating a position of the device in the environment based on the identified one or more sparse points.
    Type: Application
    Filed: December 10, 2021
    Publication date: March 31, 2022
    Inventors: Adrian Kaehler, Gary Bradski
  • Patent number: 11274807
    Abstract: A texture projecting light bulb includes an extended light source located within an integrator. The integrator includes at least one aperture configured to allow light to travel out of the interior of the integrator. In various embodiments, the interior of the integrator may be a diffusely reflective surface and the integrator may be configured to produce a uniform light distribution at the aperture to approximate a point source. The integrator may be surrounded by a light bulb enclosure. In various embodiments, the light bulb enclosure may include transparent and opaque regions configured to project a structured pattern of visible and/or infrared light.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: March 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski
  • Publication number: 20220058414
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also includes providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Application
    Filed: April 30, 2021
    Publication date: February 24, 2022
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
  • Publication number: 20220020192
    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
    Type: Application
    Filed: July 26, 2021
    Publication date: January 20, 2022
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
  • Publication number: 20220005095
    Abstract: Disclosed herein is an augmented reality (AR) system that provides information about purchasing alternatives to a user who is about to purchase an item or product (e.g., a target product) in a physical retail location. In some variations, offers to purchase the product and/or an alternative product are provided by the merchant and/or competitors via the AR system. An offer negotiation server (ONS) aggregates offer data provided by various external parties (EPs) and displays these offers to the user as the user is considering the purchase of a target product. In some variations, an AR system may be configured to facilitate the process of purchasing items at a retail location.
    Type: Application
    Filed: September 21, 2021
    Publication date: January 6, 2022
    Inventors: Adrian Kaehler, Gary Bradski, Prasanna Krishnasamy, Doug Lee
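
The offer-aggregation step described in the entry above might be sketched as follows; the Offer data model and the external-party callables are invented for illustration only.

```python
# An offer negotiation server collects offers on a target product from
# several external parties and returns them best-price-first for display.
from dataclasses import dataclass

@dataclass
class Offer:
    party: str
    product_id: str
    price: float

def aggregate_offers(target_product_id, external_parties):
    """Each external party is a callable returning a list of Offer objects."""
    offers = []
    for fetch in external_parties:
        offers.extend(o for o in fetch(target_product_id)
                      if o.product_id == target_product_id)
    return sorted(offers, key=lambda o: o.price)

merchant = lambda pid: [Offer("merchant", pid, 19.99)]
competitor = lambda pid: [Offer("competitor", pid, 17.49)]
for offer in aggregate_offers("sku-123", [merchant, competitor]):
    print(offer)
```
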
  • Patent number: 11200420
    Abstract: Methods and devices for estimating position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received and estimating a position of the device in the environment based on the identified one or more sparse points.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: December 14, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski
  • Patent number: 11189031
    Abstract: Methods and systems regarding importance sampling for the modification of a training procedure used to train a segmentation network are disclosed herein. A disclosed method includes segmenting an image using a trainable directed graph to generate a segmentation, displaying the segmentation, receiving a first selection directed to the segmentation, and modifying a training procedure for the trainable directed graph using the first selection. In a more specific method, the training procedure alters a set of trainable values associated with the trainable directed graph based on a delta between the segmentation and a ground truth segmentation, the first selection is spatially indicative with respect to the segmentation, and the delta is calculated based on the first selection.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: November 30, 2021
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Ethan Rublee, Mona Fathollahi, Michael Tetelman, Ian Meeder, Varsha Vivek, William Nguyen
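
One plausible reading of how a spatially indicative user selection could modify the training delta in patent 11189031 is a per-pixel loss weighting, sketched below in PyTorch; the boost factor and the mask format are assumptions.

```python
# Per-pixel cross-entropy where pixels inside the user's selection
# contribute more strongly to the delta than unselected pixels.
import torch
import torch.nn.functional as F

def weighted_segmentation_loss(logits, ground_truth, selection_mask, boost=5.0):
    per_pixel = F.cross_entropy(logits, ground_truth, reduction="none")  # B x H x W
    weights = 1.0 + (boost - 1.0) * selection_mask.float()
    return (per_pixel * weights).mean()
```
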
  • Patent number: 11164227
    Abstract: Disclosed herein is an augmented reality (AR) system that provides information about purchasing alternatives to a user who is about to purchase an item or product (e.g., a target product) in a physical retail location. In some variations, offers to purchase the product and/or an alternative product are provided by the merchant and/or competitors via the AR system. An offer negotiation server (ONS) aggregates offer data provided by various external parties (EPs) and displays these offers to the user as the user is considering the purchase of a target product. In some variations, an AR system may be configured to facilitate the process of purchasing items at a retail location.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: November 2, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski, Prasanna Krishnasamy, Doug Lee
  • Publication number: 20210264609
    Abstract: Systems and methods for user guided iterative frame and scene segmentation are disclosed herein. The systems and methods can rely on overtraining a segmentation network on a frame. A disclosed method includes selecting a frame from a scene and generating a frame segmentation using the frame and a segmentation network. The method also includes displaying the frame and frame segmentation overlain on the frame, receiving a correction input on the frame, and training the segmentation network using the correction input. The method includes overtraining the segmentation network for the scene by iterating the above steps on the same frame or a series of frames from the scene.
    Type: Application
    Filed: May 11, 2021
    Publication date: August 26, 2021
    Applicant: Matterport, Inc.
    Inventor: Gary Bradski
  • Patent number: 11100692
    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: August 24, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
  • Patent number: 11080884
    Abstract: A trained network for point tracking includes an input layer configured to receive an encoding of an image. The image is of a locale or object on which the network has been trained. The network also includes a set of internal weights which encode information associated with the locale or object, and a tracked point therein or thereon. The network also includes an output layer configured to provide an output based on the image as received at the input layer and the set of internal weights. The output layer includes a point tracking node that tracks the tracked point in the image. The point tracking node can track the point by generating coordinates for the tracked point in an input image of the locale or object. Methods of specifying and training the network using a three-dimensional model of the locale or object are also disclosed.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: August 3, 2021
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
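
A toy sketch (not Matterport's network) of an output layer containing a point-tracking node that regresses the image coordinates of a single tracked point, as described in patent 11080884. The architecture and image size are assumptions.

```python
# Small CNN whose output head produces (x, y) coordinates for one
# tracked point on a known locale or object.
import torch
import torch.nn as nn

class PointTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.point_head = nn.Linear(32, 2)   # the point-tracking output node

    def forward(self, image):
        return self.point_head(self.features(image))   # predicted (x, y)

model = PointTracker()
coords = model(torch.rand(1, 3, 128, 128))
print("tracked point at:", coords.detach().numpy())
```
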
  • Patent number: 11080861
    Abstract: Systems and methods for frame and scene segmentation are disclosed herein. A disclosed method includes providing a frame of a scene. The scene includes a scene background. The method also includes providing a model of the scene background. The method also includes determining a frame background using the model and subtracting the frame background from the frame to obtain an approximate segmentation. The method also includes training a segmentation network using the approximate segmentation.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: August 3, 2021
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Ethan Rublee
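
The bootstrap step in patent 11080861 can be sketched as background subtraction producing an approximate mask that then serves as a pseudo-label; the background model and threshold below are simplifying assumptions, and the training call is only indicated, not implemented.

```python
# Subtract a scene-background model from a frame to obtain an approximate
# segmentation, then use that mask as a pseudo-label for training.
import numpy as np

def approximate_segmentation(frame, background_model, threshold=30):
    """Pixels that differ enough from the background model are foreground."""
    diff = np.abs(frame.astype(np.int16) - background_model.astype(np.int16))
    return (diff.max(axis=-1) > threshold).astype(np.uint8)

background = np.full((240, 320, 3), 50, dtype=np.uint8)   # stand-in background model
frame = background.copy()
frame[100:160, 120:200] = 180                              # a "subject" region
mask = approximate_segmentation(frame, background)
print("approximate foreground pixels:", int(mask.sum()))
# train_segmentation_network(frame, mask)   # hypothetical training step
```
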