Patents by Inventor Andrew Rabinovich

Andrew Rabinovich has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220365351
    Abstract: Methods and systems for triggering presentation of virtual content based on sensor information. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergences. The system may monitor information detected via the sensors, and based on the monitored information, trigger access to virtual content identified in the sensor information. Virtual content can be obtained, and presented as augmented reality content via the display system. The system may monitor information detected via the sensors to identify a QR code, or a presence of a wireless beacon. The QR code or wireless beacon can trigger the display system to obtain virtual content for presentation.
    Type: Application
    Filed: May 26, 2022
    Publication date: November 17, 2022
    Inventors: Michael Janusz Woods, Andrew Rabinovich, Richard Leslie Taylor
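
For the entry above (Publication No. 20220365351), the abstract describes fetching and presenting virtual content when the system's sensors detect a QR code or wireless beacon. The Python sketch below illustrates that trigger flow in a deliberately simplified form; the class and function names (SensorReading, fetch_virtual_content, monitor_and_trigger) are hypothetical and are not taken from the patent.

```python
# Simplified illustration of sensor-triggered content retrieval
# (hypothetical names; not the patented implementation).
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional


@dataclass
class SensorReading:
    qr_payload: Optional[str] = None   # decoded QR code, if one was seen
    beacon_id: Optional[str] = None    # detected wireless beacon, if any


def fetch_virtual_content(content_key: str) -> dict:
    """Stand-in for a content-store lookup keyed by the QR payload or beacon id."""
    return {"id": content_key, "asset": f"asset-for-{content_key}"}


def monitor_and_trigger(readings: Iterable[SensorReading]) -> Iterator[dict]:
    """Scan sensor readings; when a QR code or beacon appears, obtain content to present."""
    for reading in readings:
        key = reading.qr_payload or reading.beacon_id
        if key:                                    # trigger condition described in the abstract
            yield fetch_virtual_content(key)       # would then be rendered as AR content


if __name__ == "__main__":
    frames = [SensorReading(), SensorReading(qr_payload="exhibit-42")]
    for item in monitor_and_trigger(frames):
        print("present as AR content:", item)
```
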
  • Publication number: 20220337899
    Abstract: The invention provides a content provisioning system. A mobile device has a mobile device processor. The mobile device has a communication interface connected to the mobile device processor and to a first resource device communication interface, under the control of the mobile device processor, to receive first content transmitted by the first resource device transmitter. The mobile device has a mobile device output device connected to the mobile device processor and under control of the mobile device processor, capable of providing an output that can be sensed by a user.
    Type: Application
    Filed: June 17, 2022
    Publication date: October 20, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Eric C. BROWY, Andrew RABINOVICH, David C. LUNDMARK
  • Publication number: 20220327663
    Abstract: Methods and systems for obtaining an input video sequence comprising a plurality of input video frames; determining i) an input resolution of the plurality of input video frames and ii) a target output resolution of the plurality of input video frames, wherein the target output resolution is higher than the input resolution; and processing the input video sequence using a neural network to generate an output video sequence, comprising, for each of the plurality of input video frames: processing the input video frame to generate an output video frame having the target output resolution, comprising processing the input video frame using a subnetwork of the neural network corresponding to the input resolution of the plurality of input video frames, the neural network configured to process input video frames having one of a set of possible input resolutions and to generate output video frames having one of a set of possible output resolutions.
    Type: Application
    Filed: April 11, 2022
    Publication date: October 13, 2022
    Inventors: Maruan Al-Shedivat, Yihui He, Megan Hardy, Andrew Rabinovich
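
The video super-resolution entry above (Publication No. 20220327663) describes routing each input frame through a subnetwork selected by the frame's input resolution, so that one model covers a set of possible input and output resolutions. The PyTorch sketch below shows that dispatch pattern; the layer sizes and the pixel-shuffle upscaler are illustrative assumptions, not the claimed architecture.

```python
# Minimal sketch: a super-resolution model that picks a subnetwork by input resolution.
# Layer sizes and the pixel-shuffle upscaler are illustrative assumptions.
import torch
import torch.nn as nn


class ResolutionDispatchSR(nn.Module):
    def __init__(self, supported: dict):
        """`supported` maps input height -> upscale factor, e.g. {360: 2, 720: 2}."""
        super().__init__()
        self.subnets = nn.ModuleDict({
            str(height): nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),   # rearranges channels into a spatially larger frame
            )
            for height, scale in supported.items()
        })

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        height = frame.shape[-2]          # choose the subnetwork matching the input resolution
        return self.subnets[str(height)](frame)


model = ResolutionDispatchSR({360: 2, 720: 2})
low_res = torch.randn(1, 3, 360, 640)
print(model(low_res).shape)               # torch.Size([1, 3, 720, 1280])
```
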
  • Patent number: 11445232
    Abstract: The invention provides a content provisioning system. A mobile device has a mobile device processor. The mobile device has a communication interface connected to the mobile device processor and to a first resource device communication interface, under the control of the mobile device processor, to receive first content transmitted by the first resource device transmitter. The mobile device has a mobile device output device connected to the mobile device processor and under control of the mobile device processor, capable of providing an output that can be sensed by a user.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: September 13, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Eric C. Browy, Andrew Rabinovich, David C. Lundmark
  • Patent number: 11410392
    Abstract: A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: August 9, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Eric C. Browy, Michael Janusz Woods, Andrew Rabinovich
  • Publication number: 20220244781
    Abstract: Techniques related to the computation of gaze vectors of users of wearable devices are disclosed. A neural network may be trained through first and second training steps. The neural network may include a set of feature encoding layers and a plurality of sets of task-specific layers that each operate on an output of the set of feature encoding layers. During the first training step, a first image of a first eye may be provided to the neural network, eye segmentation data may be generated using the neural network, and the set of feature encoding layers may be trained. During the second training step, a second image of a second eye may be provided to the neural network, network output data may be generated using the neural network, and the plurality of sets of task-specific layers may be trained.
    Type: Application
    Filed: February 17, 2022
    Publication date: August 4, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew Rabinovich
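
The gaze-estimation entry above (Publication No. 20220244781) describes a two-step schedule: first train the shared feature-encoding layers using eye-segmentation data, then train the task-specific layers that sit on top of them. The PyTorch sketch below shows one plausible reading of that schedule, freezing the shared encoder for the second step; the layer shapes, losses, and the collapsed per-image segmentation label are simplifying assumptions.

```python
# Illustrative two-step training: train a shared encoder on eye-segmentation data first,
# then freeze it and train a task-specific head (here, a gaze-vector head).
# Shapes, losses, and the collapsed per-image segmentation label are simplifications.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(8), nn.Flatten())
seg_head = nn.Linear(16 * 8 * 8, 4)    # stand-in for a segmentation output (4 eye-region classes)
gaze_head = nn.Linear(16 * 8 * 8, 3)   # task-specific head: 3D gaze vector

eye_imgs = torch.randn(8, 1, 64, 64)   # random stand-ins for eye images
seg_labels = torch.randint(0, 4, (8,))
gaze_targets = torch.randn(8, 3)

# Step 1: train the shared feature-encoding layers (plus the segmentation head).
opt1 = torch.optim.Adam(list(encoder.parameters()) + list(seg_head.parameters()), lr=1e-3)
loss1 = nn.CrossEntropyLoss()(seg_head(encoder(eye_imgs)), seg_labels)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Step 2: freeze the shared layers and train only the task-specific layers.
for p in encoder.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.Adam(gaze_head.parameters(), lr=1e-3)
loss2 = nn.MSELoss()(gaze_head(encoder(eye_imgs)), gaze_targets)
opt2.zero_grad(); loss2.backward(); opt2.step()
print(float(loss1), float(loss2))
```
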
  • Publication number: 20220237815
    Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolution neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
    Type: Application
    Filed: April 11, 2022
    Publication date: July 28, 2022
    Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
  • Publication number: 20220215640
    Abstract: Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor on a head-wearable device. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor on the head-wearable device, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a see-through display of the head-wearable device. A stimulus is presented at the second time via a virtual companion displayed via the see-through display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
    Type: Application
    Filed: March 25, 2022
    Publication date: July 7, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, John Monos
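
The virtual-companion entry above (Publication No. 20220215640) describes detecting an event, sensing the user's emotional reaction to it, storing the association, and later having the displayed companion present a stimulus chosen from that association. The short Python sketch below illustrates the store-then-recall flow; the class, the reaction labels, and the stimulus choices are all hypothetical.

```python
# Hypothetical sketch of the event / emotional-reaction association described above.
from collections import defaultdict


class CompanionMemory:
    def __init__(self):
        self.reactions = defaultdict(list)         # event -> observed emotional reactions

    def record(self, event: str, reaction: str):
        """First sensor detects the event; second sensor yields the user's reaction."""
        self.reactions[event].append(reaction)

    def stimulus_for(self, event: str) -> str:
        """Later, choose a stimulus for the virtual companion based on past reactions."""
        history = self.reactions.get(event, [])
        if history.count("distress") > history.count("joy"):
            return "companion offers a calming animation"
        return "companion reacts playfully"


memory = CompanionMemory()
memory.record("dog barking", "distress")
print(memory.stimulus_for("dog barking"))
```
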
  • Patent number: 11347054
    Abstract: Methods and systems for triggering presentation of virtual content based on sensor information. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergences. The system may monitor information detected via the sensors, and based on the monitored information, trigger access to virtual content identified in the sensor information. Virtual content can be obtained, and presented as augmented reality content via the display system. The system may monitor information detected via the sensors to identify a QR code, or a presence of a wireless beacon. The QR code or wireless beacon can trigger the display system to obtain virtual content for presentation.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: May 31, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Michael Janusz Woods, Andrew Rabinovich, Richard Leslie Taylor
  • Patent number: 11328443
    Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolution neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: May 10, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
  • Patent number: 11315325
    Abstract: Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor on a head-wearable device. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor on the head-wearable device, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a see-through display of the head-wearable device. A stimulus is presented at the second time via a virtual companion displayed via the see-through display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: April 26, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, John Monos
  • Patent number: 11288832
    Abstract: A method of determining a pose of an image capture device includes capturing an image using an image capture device. The method also includes generating a data structure corresponding to the captured image. The method further includes comparing the data structure with a plurality of known data structures to identify a most similar known data structure. Moreover, the method includes reading metadata corresponding to the most similar known data structure to determine a pose of the image capture device.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: March 29, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Brigit Schroeder, Tomasz Jan Malisiewicz, Andrew Rabinovich
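
The pose-estimation entry above (Patent No. 11288832) describes generating a data structure for a captured image, comparing it against a library of known data structures, and reading the metadata of the most similar one to obtain the capture pose. The numpy sketch below illustrates that lookup as a nearest-neighbour search over descriptor vectors; the descriptor length and the pose metadata format are assumptions.

```python
# Illustrative nearest-neighbour pose lookup; descriptor length and pose metadata are assumed.
import numpy as np

known_descriptors = np.random.rand(100, 128)                # data structures for known images
known_poses = [{"position": np.random.rand(3), "yaw": float(i)} for i in range(100)]  # metadata


def estimate_pose(query_descriptor: np.ndarray) -> dict:
    """Compare the query against all known data structures and return the closest one's pose."""
    distances = np.linalg.norm(known_descriptors - query_descriptor, axis=1)
    best = int(np.argmin(distances))                        # most similar known data structure
    return known_poses[best]                                # read its pose metadata


print(estimate_pose(np.random.rand(128)))
```
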
  • Publication number: 20220067965
    Abstract: Systems and methods for reducing error from noisy data received from a high frequency sensor by fusing received input with data received from a low frequency sensor by collecting a first set of dynamic inputs from the high frequency sensor, collecting a correction input point from the low frequency sensor, and adjusting a propagation path of a second set of dynamic inputs from the high frequency sensor based on the correction input point, either by full translation to the correction input point or by a dampened approach towards the correction input point.
    Type: Application
    Filed: November 9, 2021
    Publication date: March 3, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Michael Janusz Woods, Andrew Rabinovich
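
The sensor-fusion entry above (Publication No. 20220067965, granted later in this list as Patent No. 11210808) corrects a high-frequency sensor's propagation path with occasional low-frequency correction points, either by translating fully to the correction point or by a dampened approach toward it. The numpy sketch below shows both options on a toy 2D path; the damping factor alpha is an illustrative assumption.

```python
# Sketch of the two correction strategies described above: full translation to the
# low-frequency correction point, or a dampened approach toward it.
# The damping factor alpha is an illustrative assumption.
import numpy as np


def correct_path(high_freq_points, correction_point, mode="dampened", alpha=0.3):
    """Shift a propagation path of high-frequency samples toward a low-frequency correction."""
    high_freq_points = np.asarray(high_freq_points, dtype=float)
    error = correction_point - high_freq_points[0]          # offset at the correction instant
    if mode == "full":
        return high_freq_points + error                     # translate the whole path
    return high_freq_points + alpha * error                 # move only part way (dampened)


path = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.1]]
print(correct_path(path, np.array([0.05, -0.02]), mode="full"))
print(correct_path(path, np.array([0.05, -0.02]), mode="dampened"))
```
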
  • Publication number: 20220067378
    Abstract: A head-mounted augmented reality (AR) device can include a hardware processor programmed to receive different types of sensor data from a plurality of sensors (e.g., an inertial measurement unit, an outward-facing camera, a depth sensing camera, an eye imaging camera, or a microphone) and to determine an event of a plurality of events using the different types of sensor data and a hydra neural network (e.g., face recognition, visual search, gesture identification, semantic segmentation, object detection, lighting detection, simultaneous localization and mapping, relocalization).
    Type: Application
    Filed: September 10, 2021
    Publication date: March 3, 2022
    Inventors: Andrew Rabinovich, Tomasz Jan Malisiewicz, Daniel DeTone
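
The entry above (Publication No. 20220067378) refers to a "hydra" neural network that consumes several kinds of sensor data and determines events across many tasks (face recognition, visual search, semantic segmentation, and so on). The PyTorch sketch below shows the shared-trunk, multiple-heads pattern that the name suggests; the trunk, feature size, and task heads are placeholders rather than the patented design.

```python
# Shared-trunk, multi-head ("hydra") network sketch; dimensions and heads are placeholders.
import torch
import torch.nn as nn


class HydraNet(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, feat_dim), nn.ReLU())
        self.heads = nn.ModuleDict({
            "face_recognition": nn.Linear(feat_dim, 128),   # embedding vector
            "gesture": nn.Linear(feat_dim, 10),             # gesture class scores
            "object_detection": nn.Linear(feat_dim, 20),    # toy stand-in for detector outputs
        })

    def forward(self, image: torch.Tensor) -> dict:
        shared = self.trunk(image)                          # one pass over the shared layers
        return {task: head(shared) for task, head in self.heads.items()}


outputs = HydraNet()(torch.randn(1, 3, 224, 224))
print({task: out.shape for task, out in outputs.items()})
```
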
  • Patent number: 11238606
    Abstract: Augmented reality devices and methods for computing a homography based on two images. One method may include receiving a first image based on a first camera pose and a second image based on a second camera pose, generating a first point cloud based on the first image and a second point cloud based on the second image, providing the first point cloud and the second point cloud to a neural network, and generating, by the neural network, the homography based on the first point cloud and the second point cloud. The neural network may be trained by generating a plurality of points, determining a 3D trajectory, sampling the 3D trajectory to obtain camera poses viewing the points, projecting the points onto 2D planes, comparing a generated homography using the projected points to the ground-truth homography and modifying the neural network based on the comparison.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: February 1, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Jan Malisiewicz, Andrew Rabinovich
  • Publication number: 20220028110
    Abstract: In an example method of training a neural network for performing visual odometry, the neural network receives a plurality of images of an environment, determines, for each image, a respective set of interest points and a respective descriptor, and determines a correspondence between the plurality of images. Determining the correspondence includes determining one or more point correspondences between the sets of interest points, and determining a set of candidate interest points based on the one or more point correspondences, each candidate interest point indicating a respective feature in the environment in three-dimensional space. The neural network determines, for each candidate interest point, a respective stability metric. The neural network is modified based on the one or more candidate interest points.
    Type: Application
    Filed: November 13, 2019
    Publication date: January 27, 2022
    Inventors: Daniel DETONE, Tomasz Jan MALISIEWICZ, Andrew RABINOVICH
  • Publication number: 20210406609
    Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 30, 2021
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
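
The meta-learning entry above (Publication No. 20210406609) describes adapting per-task loss-balancing weights during multi-task training by monitoring the trajectory of the task losses. The numpy sketch below shows one simple dynamic-weighting heuristic in that spirit (weights derived from each task's recent improvement ratio); it illustrates the idea only and is not the patented meta-learning procedure.

```python
# Illustrative dynamic task-loss balancing: up-weight tasks whose losses are improving
# more slowly, judged from the recent loss trajectory. Not the patented method.
import numpy as np


def dynamic_weights(loss_history: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """loss_history has shape (steps, num_tasks); returns weights summing to num_tasks."""
    recent, earlier = loss_history[-1], loss_history[-2]
    improvement_ratio = recent / (earlier + 1e-8)            # near 1.0 means the task has stalled
    scores = improvement_ratio / temperature
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights * loss_history.shape[1]                   # keep the total loss scale comparable


history = np.array([[1.0, 2.0, 0.5],
                    [0.9, 1.9, 0.2]])                        # task 3 improved the fastest
print(dynamic_weights(history))                              # task 3 receives the smallest weight
```
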
  • Patent number: 11210808
    Abstract: Systems and methods for reducing error from noisy data received from a high frequency sensor by fusing received input with data received from a low frequency sensor by collecting a first set of dynamic inputs from the high frequency sensor, collecting a correction input point from the low frequency sensor, and adjusting a propagation path of a second set of dynamic inputs from the high frequency sensor based on the correction input point, either by full translation to the correction input point or by a dampened approach towards the correction input point.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: December 28, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Michael Janusz Woods, Andrew Rabinovich
  • Publication number: 20210365785
    Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
    Type: Application
    Filed: June 7, 2021
    Publication date: November 25, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
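
The homography-training entry above (Publication No. 20210365785) builds training triplets from an image: a patch, a transformed copy of that patch, and the homography relating them, against which the network's estimate is later compared. The numpy sketch below generates such a triplet by perturbing the patch corners and solving a direct linear transform; the patch size, the perturbation range, and the omission of the actual pixel warp are simplifications.

```python
# Sketch: generate (patch, warped corners, ground-truth homography) training triplets by
# perturbing patch corners, in the spirit of the abstract above. Patch size and perturbation
# range are assumptions; warping the pixels themselves is omitted for brevity.
import numpy as np


def homography_from_points(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Direct Linear Transform: solve for H mapping the 4 src corners to the 4 dst corners."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]


def make_training_triplet(image: np.ndarray, patch_size: int = 128, max_shift: int = 16):
    h, w = image.shape[:2]
    x0 = np.random.randint(max_shift, w - patch_size - max_shift)
    y0 = np.random.randint(max_shift, h - patch_size - max_shift)
    corners = np.array([[x0, y0], [x0 + patch_size, y0],
                        [x0 + patch_size, y0 + patch_size], [x0, y0 + patch_size]], float)
    perturbed = corners + np.random.uniform(-max_shift, max_shift, corners.shape)
    H = homography_from_points(corners, perturbed)           # ground-truth label for the network
    patch = image[y0:y0 + patch_size, x0:x0 + patch_size]
    return patch, perturbed, H


patch, warped_corners, H = make_training_triplet(np.random.rand(240, 320))
print(H.round(3))
```
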
  • Patent number: 11128854
    Abstract: Systems and methods are disclosed for computing depth maps. One method includes capturing, using a camera, a camera image of a runtime scene. The method may also include analyzing the camera image of the runtime scene to determine a plurality of target sampling points at which to capture depth of the runtime scene. The method may further include adjusting a setting associated with a low-density depth sensor based on the plurality of target sampling points. The method may further include capturing, using the low-density depth sensor, a low-density depth map of the runtime scene at the plurality of target sampling points. The method may further include generating a computed depth map of the runtime scene based on the camera image of the runtime scene and the low-density depth map of the runtime scene.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: September 21, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Vijay Badrinarayanan, Zhao Chen, Andrew Rabinovich, Elad Joseph
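
The final entry (Patent No. 11128854) describes analyzing a camera image to choose target sampling points, steering a low-density depth sensor to sample depth at those points, and fusing the sparse samples with the image into a computed depth map. The numpy sketch below uses much simpler stand-ins for each stage (gradient-based point selection and nearest-sample densification); the patented system relies on learned components, so this is only an illustration of the data flow.

```python
# Simplified stand-ins for the stages described above: choose sampling points from image
# structure, sample sparse depth there, and densify by nearest-sample assignment.
import numpy as np


def target_sampling_points(image: np.ndarray, num_points: int = 64) -> np.ndarray:
    """Pick the pixels with the strongest gradients as depth sampling targets."""
    gy, gx = np.gradient(image)
    strength = np.hypot(gx, gy).ravel()
    idx = np.argsort(strength)[-num_points:]                 # strongest-edge pixels
    return np.stack(np.unravel_index(idx, image.shape), axis=1)


def densify(points: np.ndarray, sparse_depth: np.ndarray, shape: tuple) -> np.ndarray:
    """Assign each pixel the depth of its nearest sampled point (toy fusion step)."""
    ys, xs = np.indices(shape)
    pixels = np.stack([ys.ravel(), xs.ravel()], axis=1)
    d2 = ((pixels[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return sparse_depth[np.argmin(d2, axis=1)].reshape(shape)


image = np.random.rand(60, 80)
points = target_sampling_points(image)
sparse = np.random.uniform(0.5, 3.0, len(points))            # stand-in for sensor readings
depth_map = densify(points, sparse, image.shape)
print(depth_map.shape)
```
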