Patents by Inventor Andrew Rabinovich

Andrew Rabinovich has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10769858
    Abstract: A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret sign language and present the translated information to the user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing its content or display characteristics), and render the modified text so that it occludes the original text. (A minimal overlay sketch follows this entry.)
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: September 8, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Eric C. Browy, Michael Janusz Woods, Andrew Rabinovich
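The text-modification step in the abstract above amounts to detecting text regions, covering them, and rendering replacement text in their place. The filing does not disclose an implementation; below is a minimal OpenCV sketch in which the `regions` list of boxes and replacement strings is a hypothetical stand-in for whatever upstream recognizer produces them.

```python
import cv2
import numpy as np

def occlude_and_replace(frame, regions):
    """Cover each detected text box and draw replacement text over it.

    `regions` is a hypothetical list of (x, y, w, h, new_text) tuples produced by
    some upstream text recognizer; the abstract does not specify how the regions
    or the replacement strings are obtained.
    """
    out = frame.copy()
    for x, y, w, h, new_text in regions:
        # Paint over the original text with a solid patch.
        cv2.rectangle(out, (x, y), (x + w, y + h), (255, 255, 255), thickness=-1)
        # Render the modified text inside the same box.
        cv2.putText(out, new_text, (x, y + h - 4),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 1, cv2.LINE_AA)
    return out

# Example: replace one detected word in a blank test image.
frame = np.full((240, 320, 3), 200, dtype=np.uint8)
result = occlude_and_replace(frame, [(40, 100, 120, 24, "translated")])
```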
  • Patent number: 10733447
    Abstract: A head-mounted augmented reality (AR) device can include a hardware processor programmed to receive different types of sensor data from a plurality of sensors (e.g., an inertial measurement unit, an outward-facing camera, a depth-sensing camera, an eye-imaging camera, or a microphone) and to determine an event of a plurality of events (e.g., face recognition, visual search, gesture identification, semantic segmentation, object detection, lighting detection, simultaneous localization and mapping, or relocalization) using the different types of sensor data and a hydra neural network. (A minimal multi-head sketch follows this entry.)
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: August 4, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Tomasz Jan Malisiewicz, Daniel DeTone
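The "hydra" network named above shares computation across several perception tasks. A minimal PyTorch sketch of a shared trunk with per-task heads is shown below; the layer sizes and the two illustrative heads are assumptions, not details from the filing, which covers many more tasks.

```python
import torch
import torch.nn as nn

class HydraNet(nn.Module):
    """Shared backbone with per-task heads (illustrative sizes only)."""
    def __init__(self, num_gestures=8, num_classes=21):
        super().__init__()
        # Common trunk that all tasks reuse.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads branch off the shared features.
        self.gesture_head = nn.Linear(64, num_gestures)    # e.g. gesture identification
        self.detection_head = nn.Linear(64, num_classes)   # e.g. object classification scores

    def forward(self, image):
        features = self.backbone(image)
        return {
            "gesture": self.gesture_head(features),
            "objects": self.detection_head(features),
        }

outputs = HydraNet()(torch.randn(1, 3, 128, 128))  # dict of per-task predictions
```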
  • Patent number: 10726570
    Abstract: Augmented reality devices and methods are disclosed for computing a homography based on two images. One method may include receiving a first image based on a first camera pose and a second image based on a second camera pose, generating a first point cloud based on the first image and a second point cloud based on the second image, providing the first point cloud and the second point cloud to a neural network, and generating, by the neural network, the homography based on the first point cloud and the second point cloud. The neural network may be trained by generating a plurality of points, determining a 3D trajectory, sampling the 3D trajectory to obtain camera poses viewing the points, projecting the points onto 2D planes, comparing a homography generated using the projected points to the ground-truth homography, and modifying the neural network based on the comparison. (A NumPy sketch of computing a homography from point correspondences follows this entry.)
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: July 28, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Jan Malisiewicz, Andrew Rabinovich
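Supervising the network above requires a ground-truth homography relating the two sets of projected 2D points. The abstract does not say how that homography is obtained (it could come directly from the known camera poses); one standard way to compute it from point correspondences is the direct linear transform (DLT), sketched here in NumPy.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H such that dst ~ H @ src for Nx2 arrays of 2D points (N >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1]                      # null-space vector = flattened homography
    return (h / h[-1]).reshape(3, 3)

# Sanity check: recover a known homography from four projected corners.
H_true = np.array([[1.0, 0.1, 5.0], [0.0, 1.2, -3.0], [0.001, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
dst_h = (H_true @ np.c_[src, np.ones(4)].T).T
dst = dst_h[:, :2] / dst_h[:, 2:]
print(np.allclose(dlt_homography(src, dst), H_true, atol=1e-6))
```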
  • Publication number: 20200234051
    Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the locations of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc. (An illustrative keypoint-decoding sketch follows this entry.)
    Type: Application
    Filed: April 9, 2020
    Publication date: July 23, 2020
    Inventors: Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Jan Malisiewicz, Andrew Rabinovich
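One small but concrete piece of such a pipeline is turning network output into the ordered 2D keypoints mentioned above. The abstract does not commit to a heatmap representation, so the following NumPy sketch is only an assumption about how the keypoints might be decoded, with one heatmap channel per keypoint.

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Convert a (K, H, W) stack of per-keypoint heatmaps into K ordered (x, y) points.

    Assumes one channel per room-layout keypoint, with the channel order fixed by
    the predicted room type; the peak of each channel is taken as that keypoint.
    """
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat, (H, W))
    return np.stack([xs, ys], axis=1)  # (K, 2) ordered keypoints in image coordinates

# Example with random scores standing in for network output.
keypoints = heatmaps_to_keypoints(np.random.rand(8, 64, 64))
```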
  • Publication number: 20200226785
    Abstract: Systems and methods are disclosed for reducing error in noisy data from a high-frequency sensor by fusing it with data from a low-frequency sensor. The methods collect a first set of dynamic inputs from the high-frequency sensor, collect a correction input point from the low-frequency sensor, and adjust the propagation path of a second set of dynamic inputs from the high-frequency sensor based on the correction input point, either by a full translation to the correction input point or by a dampened approach toward it. (A one-dimensional sketch of both correction modes follows this entry.)
    Type: Application
    Filed: March 27, 2020
    Publication date: July 16, 2020
    Applicant: Magic Leap, Inc.
    Inventors: Michael Janusz Woods, Andrew Rabinovich
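The two correction modes named above, a full translation versus a dampened approach toward the low-frequency correction point, can be illustrated in one dimension. The additive offset model and the blending factor `alpha` below are assumptions for illustration, not details from the filing.

```python
def correct_propagation(high_freq_samples, correction_point, mode="dampened", alpha=0.2):
    """Shift a stream of high-frequency samples toward a low-frequency correction point.

    mode="full"     -> translate the whole path so it passes through the correction point.
    mode="dampened" -> apply only a fraction `alpha` of the remaining offset per sample.
    """
    offset = correction_point - high_freq_samples[0]
    corrected = []
    applied = 0.0
    for sample in high_freq_samples:
        if mode == "full":
            applied = offset                       # jump the entire path at once
        else:
            applied += alpha * (offset - applied)  # approach the correction gradually
        corrected.append(sample + applied)
    return corrected

print(correct_propagation([10.0, 10.1, 10.2, 10.3], correction_point=9.0, mode="full"))
print(correct_propagation([10.0, 10.1, 10.2, 10.3], correction_point=9.0))
```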
  • Publication number: 20200202554
    Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolutional neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid. (A box-refinement sketch follows this entry.)
    Type: Application
    Filed: March 5, 2020
    Publication date: June 25, 2020
    Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
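Iterative feature pooling, as described above, repeatedly re-pools features inside the current box and regresses a refinement. The sketch below shows only the box-update loop; the hypothetical `regress_offsets` callable stands in for the pooling and regressor layers, which are not reproduced here.

```python
import numpy as np

def refine_box(box, regress_offsets, iterations=2):
    """Iteratively refine an (x1, y1, x2, y2) box using a regressor's offsets.

    `regress_offsets(box)` stands in for pooling features inside `box` and running
    a regressor head; here it is a caller-supplied function, since the actual
    network is not reproduced from the filing.
    """
    box = np.asarray(box, dtype=float)
    for _ in range(iterations):
        dx1, dy1, dx2, dy2 = regress_offsets(box)   # re-pool features, predict deltas
        box = box + np.array([dx1, dy1, dx2, dy2])  # shift/shrink toward the cuboid
    return box

# Toy regressor that nudges the box toward a fixed "true" box, just for demonstration.
true_box = np.array([30.0, 40.0, 120.0, 160.0])
toy_regressor = lambda b: 0.5 * (true_box - b)
print(refine_box([0, 0, 200, 200], toy_regressor))
```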
  • Publication number: 20200193714
    Abstract: A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret sign language and present the translated information to the user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing its content or display characteristics), and render the modified text so that it occludes the original text.
    Type: Application
    Filed: February 26, 2020
    Publication date: June 18, 2020
    Inventors: Eric C. Browy, Michael Janusz Woods, Andrew Rabinovich
  • Patent number: 10657376
    Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: May 19, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Malisiewicz, Andrew Rabinovich
  • Patent number: 10650552
    Abstract: Systems and methods are disclosed for reducing error in noisy data from a high-frequency sensor by fusing it with data from a low-frequency sensor. The methods collect a first set of dynamic inputs from the high-frequency sensor, collect a correction input point from the low-frequency sensor, and adjust the propagation path of a second set of dynamic inputs from the high-frequency sensor based on the correction input point, either by a full translation to the correction input point or by a dampened approach toward it.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: May 12, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Michael Janusz Woods, Andrew Rabinovich
  • Patent number: 10621747
    Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolutional neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: April 14, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
  • Publication number: 20200111262
    Abstract: Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor on a head-wearable device. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor on the head-wearable device, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a see-through display of the head-wearable device. A stimulus is presented at the second time via a virtual companion displayed via the see-through display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event. (A minimal bookkeeping sketch of the association step follows this entry.)
    Type: Application
    Filed: October 8, 2019
    Publication date: April 9, 2020
    Inventors: Andrew RABINOVICH, John MONOS
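The data flow above reduces to recording which emotional reaction followed which event and later using that association to choose a stimulus. A minimal bookkeeping sketch, with hypothetical event and emotion labels and an illustrative reaction-to-behavior mapping:

```python
from collections import defaultdict

class CompanionMemory:
    """Stores which emotional reactions followed which environmental events."""
    def __init__(self):
        self.associations = defaultdict(list)

    def record(self, event, emotion):
        # e.g. record("dog_appeared", "joy") once the two sensor inputs are classified.
        self.associations[event].append(emotion)

    def stimulus_for(self, event):
        """Pick a stimulus for a recurring event based on the dominant past reaction."""
        reactions = self.associations.get(event)
        if not reactions:
            return "neutral_idle"
        dominant = max(set(reactions), key=reactions.count)
        # The mapping from reaction to companion behavior is illustrative only.
        return {"joy": "approach_playfully", "fear": "offer_reassurance"}.get(dominant, "neutral_idle")

memory = CompanionMemory()
memory.record("dog_appeared", "joy")
print(memory.stimulus_for("dog_appeared"))  # -> approach_playfully
```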
  • Publication number: 20200097819
    Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison. (A minimal training-step sketch follows this entry.)
    Type: Application
    Filed: September 30, 2019
    Publication date: March 26, 2020
    Applicant: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
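The training loop above comes down to estimating a homography from a patch pair, comparing it with the ground-truth homography, and updating the network. A minimal PyTorch sketch follows; the small CNN and the 4-point (corner-offset) parameterization are common choices assumed here, not details quoted from the abstract.

```python
import torch
import torch.nn as nn

# Stand-in homography network: input is the patch pair stacked on the channel axis,
# output is the common 4-point parameterization (8 corner offsets).
net = nn.Sequential(
    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 8),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(patch, warped_patch, true_offsets):
    """One update: estimate the homography, compare it to ground truth, modify the net."""
    pred_offsets = net(torch.cat([patch, warped_patch], dim=1))
    loss = nn.functional.mse_loss(pred_offsets, true_offsets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy grayscale 128x128 patches and random target offsets, just to show the call.
print(train_step(torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128), torch.randn(4, 8)))
```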
  • Patent number: 10580213
    Abstract: A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret sign language and present the translated information to the user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing its content or display characteristics), and render the modified text so that it occludes the original text.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: March 3, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Eric Browy, Michael Janusz Woods, Andrew Rabinovich
  • Patent number: 10515114
    Abstract: A facial recognition search system identifies one or more likely names (or other personal identifiers) corresponding to the facial image(s) in a query as follows. After receiving the visual query with one or more facial images, the system identifies images that potentially match the respective facial image in accordance with visual similarity criteria. Then one or more persons associated with the potential images are identified. For each identified person, person-specific data comprising metrics of social connectivity to the requester are retrieved from a plurality of applications such as communications applications, social networking applications, calendar applications, and collaborative applications. An ordered list of persons is then generated by ranking the identified persons in accordance with at least metrics of visual similarity between the respective facial image and the potential image matches and with the social connection metrics. (A minimal ranking sketch follows this entry.)
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: December 24, 2019
    Assignee: Google LLC
    Inventors: David Petrou, Andrew Rabinovich, Hartwig Adam
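The final ordering described above blends visual-similarity scores with social-connection metrics. A minimal sketch of that scoring step is below; the candidate fields, the linear weighting, and the weight values are all hypothetical.

```python
def rank_candidates(candidates, visual_weight=0.7, social_weight=0.3):
    """Order candidate persons by a weighted blend of visual and social scores.

    Each candidate is a dict with a visual-similarity score in [0, 1] and a
    social-connectivity score in [0, 1] aggregated from the requester's
    communications, social-network, calendar, and collaboration data.
    """
    def score(c):
        return visual_weight * c["visual_similarity"] + social_weight * c["social_connectivity"]
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"name": "Alice", "visual_similarity": 0.82, "social_connectivity": 0.10},
    {"name": "Bob",   "visual_similarity": 0.78, "social_connectivity": 0.90},
]
print([c["name"] for c in rank_candidates(candidates)])  # Bob edges out Alice here
```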
  • Patent number: 10489708
    Abstract: A method for generating inputs for a neural network based on an image includes receiving the image, identifying a position within the image, and identifying a subset of the image at the position. The subset of the image is defined by a first set of corners. The method also includes perturbing at least one of the first set of corners to form a second set of corners. The second set of corners defines a modified subset of the image. The method further includes determining a homography based on a comparison between the subset of the image and the modified subset of the image, generating a transformed image by applying the homography to the image, and identifying a subset of the transformed image at the position. (An OpenCV sketch of this data-generation recipe follows this entry.)
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: November 26, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
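The recipe above (crop a patch, perturb its corners, compute the homography between the two corner sets, warp the image, and re-crop at the same position) maps naturally onto OpenCV calls. The patch size, perturbation range, and warp direction in this sketch are illustrative assumptions.

```python
import cv2
import numpy as np

def make_training_pair(image, x, y, size=128, max_shift=32, rng=np.random):
    """Return (patch, warped_patch, H) for one training example.

    The patch at (x, y) is defined by four corners; each corner is perturbed by up
    to `max_shift` pixels, the homography between the two corner sets is computed,
    the whole image is warped by its inverse, and the patch is re-cropped at (x, y).
    """
    corners = np.float32([[x, y], [x + size, y], [x + size, y + size], [x, y + size]])
    perturbed = corners + rng.uniform(-max_shift, max_shift, corners.shape).astype(np.float32)
    H = cv2.getPerspectiveTransform(corners, perturbed)
    warped = cv2.warpPerspective(image, np.linalg.inv(H), (image.shape[1], image.shape[0]))
    patch = image[y:y + size, x:x + size]
    warped_patch = warped[y:y + size, x:x + size]
    return patch, warped_patch, H

image = (np.random.rand(240, 320) * 255).astype(np.uint8)
patch, warped_patch, H = make_training_pair(image, x=60, y=50)
```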
  • Publication number: 20190340435
    Abstract: A head-mounted augmented reality (AR) device can include a hardware processor programmed to receive different types of sensor data from a plurality of sensors (e.g., an inertial measurement unit, an outward-facing camera, a depth-sensing camera, an eye-imaging camera, or a microphone) and to determine an event of a plurality of events (e.g., face recognition, visual search, gesture identification, semantic segmentation, object detection, lighting detection, simultaneous localization and mapping, or relocalization) using the different types of sensor data and a hydra neural network.
    Type: Application
    Filed: July 18, 2019
    Publication date: November 7, 2019
    Inventors: Andrew Rabinovich, Tomasz Jan Malisiewicz, Daniel DeTone
  • Publication number: 20190289281
    Abstract: Systems and methods are disclosed for computing depth maps. One method includes capturing, using a camera, a camera image of a runtime scene. The method may also include analyzing the camera image of the runtime scene to determine a plurality of target sampling points at which to capture depth of the runtime scene. The method may further include adjusting a setting associated with a low-density depth sensor based on the plurality of target sampling points. The method may further include capturing, using the low-density depth sensor, a low-density depth map of the runtime scene at the plurality of target sampling points. The method may further include generating a computed depth map of the runtime scene based on the camera image of the runtime scene and the low-density depth map of the runtime scene. (A sketch of one possible sampling-point selection follows this entry.)
    Type: Application
    Filed: March 13, 2019
    Publication date: September 19, 2019
    Applicant: Magic Leap, Inc.
    Inventors: Vijay BADRINARAYANAN, Zhao CHEN, Andrew RABINOVICH, Elad JOSEPH
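The abstract above does not say how the camera image is analyzed to pick target sampling points. One plausible criterion, assumed purely for illustration, is to place the sparse depth samples where the image gradient is strongest, since depth discontinuities often coincide with intensity edges.

```python
import numpy as np

def pick_sampling_points(gray_image, num_points=64):
    """Choose pixel locations with the largest image-gradient magnitude.

    This is only one plausible way to "analyze the camera image to determine
    target sampling points"; the filing does not commit to a specific criterion.
    """
    gy, gx = np.gradient(gray_image.astype(float))
    magnitude = np.hypot(gx, gy)
    flat_idx = np.argsort(magnitude.ravel())[-num_points:]
    ys, xs = np.unravel_index(flat_idx, gray_image.shape)
    return np.stack([xs, ys], axis=1)  # (num_points, 2) pixel targets for the depth sensor

points = pick_sampling_points(np.random.rand(120, 160), num_points=16)
```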
  • Publication number: 20190286951
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data or problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where they are required. (A toy greedy-growth sketch follows this entry.)
    Type: Application
    Filed: March 27, 2019
    Publication date: September 19, 2019
    Applicant: MAGIC LEAP, INC.
    Inventors: Andrew RABINOVICH, Vijay BADRINARAYANAN, Daniel DETONE, Srivignesh RAJENDRAN, Douglas Bertram LEE, Tomasz MALISIEWICZ
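The greedy procedure above can be caricatured as: measure where extra capacity helps most, grow only that layer, and repeat. In the sketch below the `evaluate` callable stands in for training and validating a candidate architecture; the filing's information-gain measure is not reproduced.

```python
def grow_greedily(layer_widths, evaluate, candidates=None, steps=3, grow_by=16):
    """Greedy structure search: at each step, widen the single layer whose widening
    most improves `evaluate(layer_widths)` (higher is better).

    `evaluate` is a caller-supplied stand-in for training/validating a network with
    the given widths; the bottleneck criterion from the filing is not reproduced.
    """
    candidates = candidates or list(range(len(layer_widths)))
    widths = list(layer_widths)
    for _ in range(steps):
        best_idx, best_score = None, evaluate(widths)
        for i in candidates:
            trial = widths[:i] + [widths[i] + grow_by] + widths[i + 1:]
            score = evaluate(trial)
            if score > best_score:
                best_idx, best_score = i, score
        if best_idx is None:          # no layer is a bottleneck any more; stop
            break
        widths[best_idx] += grow_by
    return widths

# Toy score that rewards widening layer 1 up to 96 channels, just to exercise the loop.
toy_score = lambda w: -abs(w[1] - 96) - 0.01 * sum(w)
print(grow_greedily([32, 48, 64], toy_score))
```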
  • Patent number: 10402649
    Abstract: A head-mounted augmented reality (AR) device can include a hardware processor programmed to receive different types of sensor data from a plurality of sensors (e.g., an inertial measurement unit, an outward-facing camera, a depth-sensing camera, an eye-imaging camera, or a microphone) and to determine an event of a plurality of events (e.g., face recognition, visual search, gesture identification, semantic segmentation, object detection, lighting detection, simultaneous localization and mapping, or relocalization) using the different types of sensor data and a hydra neural network.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: September 3, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Tomasz Jan Malisiewicz, Daniel DeTone
  • Publication number: 20190147341
    Abstract: Systems, devices, and methods are disclosed for training a neural network and performing image interest point detection and description using the neural network. The neural network may include an interest point detector subnetwork and a descriptor subnetwork. An optical device may include at least one camera for capturing a first image and a second image. A first set of interest points and a first descriptor may be calculated using the neural network based on the first image, and a second set of interest points and a second descriptor may be calculated using the neural network based on the second image. A homography between the first image and the second image may be determined based on the first and second sets of interest points and the first and second descriptors. The optical device may adjust virtual image light being projected onto an eyepiece based on the homography. (A matching-and-fitting sketch follows this entry.)
    Type: Application
    Filed: November 14, 2018
    Publication date: May 16, 2019
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Daniel DeTone, Tomasz Jan Malisiewicz
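Downstream of the network described above, the two sets of interest points and descriptors are matched and a homography is fit to the matches. The sketch below uses OpenCV's ORB as a stand-in for the learned detector and descriptor; with the filing's network, its points and descriptors would simply replace ORB's outputs.

```python
import cv2
import numpy as np

def homography_between(img_a, img_b, min_matches=8):
    """Match keypoints between two grayscale images and fit a homography with RANSAC."""
    orb = cv2.ORB_create()  # stand-in for the learned interest-point detector/descriptor
    kps_a, desc_a = orb.detectAndCompute(img_a, None)
    kps_b, desc_b = orb.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desc_a, desc_b)
    if len(matches) < min_matches:
        return None
    pts_a = np.float32([kps_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kps_b[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    return H  # used, e.g., to adjust the virtual image light projected onto the eyepiece
```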