Patents by Inventor Andrew Rabinovich
Andrew Rabinovich has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11120266
Abstract: A head-mounted augmented reality (AR) device can include a hardware processor programmed to receive different types of sensor data from a plurality of sensors (e.g., an inertial measurement unit, an outward-facing camera, a depth sensing camera, an eye imaging camera, or a microphone) and to determine an event of a plurality of events (e.g., face recognition, visual search, gesture identification, semantic segmentation, object detection, lighting detection, simultaneous localization and mapping, relocalization) using the different types of sensor data and a hydra neural network.
Type: Grant
Filed: June 30, 2020
Date of Patent: September 14, 2021
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Tomasz Jan Malisiewicz, Daniel DeTone
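The "hydra" idea described above — one shared backbone feeding many lightweight event heads — can be sketched in a few lines. This is a minimal numpy illustration, not the patented architecture; the dimensions, head names, and linear layers are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a fused multi-sensor input vector, a shared
# backbone, and one small head per event type (the "hydra" heads).
SENSOR_DIM, SHARED_DIM = 32, 16
EVENT_HEADS = {"face_recognition": 2, "gesture_identification": 5, "object_detection": 10}

W_shared = rng.standard_normal((SHARED_DIM, SENSOR_DIM)) * 0.1
heads = {name: rng.standard_normal((n_out, SHARED_DIM)) * 0.1
         for name, n_out in EVENT_HEADS.items()}

def hydra_forward(sensor_data):
    """Run the shared backbone once, then every event head on the result."""
    shared = np.tanh(W_shared @ sensor_data)   # shared representation, computed once
    return {name: W @ shared for name, W in heads.items()}

outputs = hydra_forward(rng.standard_normal(SENSOR_DIM))
```

The design point is that the expensive shared computation happens once per sensor frame, while each event adds only a cheap head.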
-
Publication number: 20210241114
Abstract: Systems, devices, and methods for training a neural network and performing image interest point detection and description using the neural network. The neural network may include an interest point detector subnetwork and a descriptor subnetwork. An optical device may include at least one camera for capturing a first image and a second image. A first set of interest points and a first descriptor may be calculated using the neural network based on the first image, and a second set of interest points and a second descriptor may be calculated using the neural network based on the second image. A homography between the first image and the second image may be determined based on the first and second sets of interest points and the first and second descriptors. The optical device may adjust virtual image light being projected onto an eyepiece based on the homography.
Type: Application
Filed: February 18, 2021
Publication date: August 5, 2021
Applicant: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Daniel DeTone, Tomasz Jan Malisiewicz
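Before a homography can be computed from the two sets of interest points and descriptors, the descriptors must be put in correspondence. A common, simple way to do that (a sketch, not the method claimed in the application) is mutual nearest-neighbour matching on cosine similarity:

```python
import numpy as np

def mutual_nn_matches(desc1, desc2):
    """Match two descriptor sets by mutual nearest neighbour (cosine similarity)."""
    d1 = desc1 / np.linalg.norm(desc1, axis=1, keepdims=True)
    d2 = desc2 / np.linalg.norm(desc2, axis=1, keepdims=True)
    sim = d1 @ d2.T
    nn12 = sim.argmax(axis=1)   # best match in image 2 for each point in image 1
    nn21 = sim.argmax(axis=0)   # best match in image 1 for each point in image 2
    # Keep only pairs that choose each other.
    return [(i, int(j)) for i, j in enumerate(nn12) if nn21[j] == i]

rng = np.random.default_rng(1)
desc2 = rng.standard_normal((5, 8))
desc1 = desc2[[3, 0, 4]] + 0.01 * rng.standard_normal((3, 8))  # noisy copies
matches = mutual_nn_matches(desc1, desc2)
```

The matched point pairs can then be fed to a homography solver (e.g. DLT with RANSAC).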
-
Patent number: 11062209
Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
Type: Grant
Filed: September 30, 2019
Date of Patent: July 13, 2021
Assignee: Magic Leap, Inc.
Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
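The ground-truth homography in such a training triplet can be produced by randomly perturbing the corners of an image patch and solving the 4-point direct linear transform (DLT). The sketch below shows that step under illustrative assumptions (patch size, perturbation range); it is not the patented pipeline itself.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: solve for H mapping 4 src corners to 4 dst corners."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)        # null-space vector = homography up to scale
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply homography H to a 2D point."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical training pair: a 128x128 patch whose corners are
# randomly perturbed, giving the ground-truth homography.
rng = np.random.default_rng(2)
src = np.array([[0, 0], [128, 0], [128, 128], [0, 128]], float)
dst = src + rng.uniform(-16, 16, size=(4, 2))
H = homography_from_points(src, dst)
```

The network's estimated homography would then be compared against `H` to produce the training loss.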
-
Patent number: 11048978
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
Type: Grant
Filed: November 9, 2018
Date of Patent: June 29, 2021
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
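The core idea — adapting task loss weights from the trajectory of the losses themselves — can be illustrated with a hand-rolled update rule. This is a simplified sketch in that spirit, not the patented meta-learning procedure: the inverse training rate (current loss over previous loss) serves as a difficulty signal, and weights are renormalised to keep their sum fixed.

```python
import numpy as np

def update_task_weights(weights, prev_losses, curr_losses, lr=0.5):
    """Shift loss weight toward tasks whose loss is decreasing slowly.

    A toy illustration of dynamic loss balancing: tasks with a high
    loss ratio (slow or no improvement) get more weight; weights keep
    summing to the number of tasks.
    """
    rate = np.asarray(curr_losses) / np.asarray(prev_losses)  # >1 means getting worse
    target = rate / rate.mean()                               # relative difficulty
    new = (1 - lr) * np.asarray(weights) + lr * target        # smooth update
    return new * len(new) / new.sum()                         # renormalise

w = update_task_weights([1.0, 1.0, 1.0],
                        prev_losses=[2.0, 2.0, 2.0],
                        curr_losses=[1.0, 1.9, 0.5])
```

Here the second task barely improved, so its weight rises; the third task improved fastest, so its weight falls.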
-
Publication number: 20210192357
Abstract: Systems and methods for gradient adversarial training of a neural network are disclosed. In one aspect of gradient adversarial training, an auxiliary neural network can be trained to classify a gradient tensor that is evaluated during backpropagation in a main neural network that provides a desired task output. The main neural network can serve as an adversary to the auxiliary network in addition to a standard task-based training procedure. The auxiliary neural network can pass an adversarial gradient signal back to the main neural network, which can use this signal to regularize the weight tensors in the main neural network. Gradient adversarial training of the neural network can provide improved gradient tensors in the main network. Gradient adversarial techniques can be used to train multitask networks, knowledge distillation networks, and adversarial defense networks.
Type: Application
Filed: May 15, 2019
Publication date: June 24, 2021
Inventors: Ayan Tuhinendu Sinha, Andrew Rabinovich, Zhao Chen, Vijay Badrinarayanan
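The mechanism can be made concrete with a toy setup (not the patented procedure): a logistic auxiliary classifier tries to predict which task a flattened gradient tensor came from, and the signal passed back to the main network is that classifier's input-gradient, reversed. All names and dimensions below are hypothetical.

```python
import numpy as np

def grad_reverse(grad, lam=1.0):
    """Gradient reversal: identity on the forward pass, -lam * grad on the backward pass."""
    return -lam * grad

def aux_loss_and_input_grad(g, y, w):
    """Logistic loss of the auxiliary classifier and its gradient w.r.t. the input g."""
    p = 1.0 / (1.0 + np.exp(-(g @ w)))
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return loss, (p - y) * w            # d(loss)/d(g) for logistic regression

rng = np.random.default_rng(3)
gradient_tensor = rng.standard_normal(4)     # a flattened gradient from the main net
task_label = 1                               # which task produced this gradient
w_aux = np.array([0.5, -0.2, 0.1, 0.3])      # auxiliary classifier weights

loss, dg = aux_loss_and_input_grad(gradient_tensor, task_label, w_aux)
adversarial_signal = grad_reverse(dg)        # regularising signal for the main network
```

Training the main network against this reversed signal pushes its gradient tensors toward being indistinguishable across tasks.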
-
Publication number: 20210182636
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity is added only where it is required.
Type: Application
Filed: February 23, 2021
Publication date: June 17, 2021
Applicant: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
-
Publication number: 20210182554
Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
Type: Application
Filed: December 21, 2020
Publication date: June 17, 2021
Applicant: Magic Leap, Inc.
Inventors: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
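One simple geometric reading of "computing the gaze vector from pupil and cornea-center outputs" is the unit vector from the cornea centre through the pupil centre. The sketch below shows that reading; the actual claimed computation may differ, and the coordinates are illustrative.

```python
import numpy as np

def gaze_vector(cornea_center, pupil_center):
    """Unit gaze direction from the cornea centre through the pupil centre.

    A simplifying geometric assumption, not necessarily the patented
    computation: the optical axis passes through both centres.
    """
    v = np.asarray(pupil_center, float) - np.asarray(cornea_center, float)
    return v / np.linalg.norm(v)

# Illustrative 3D positions in an eye-centred frame (metres).
g = gaze_vector(cornea_center=[0.0, 0.0, 0.0], pupil_center=[0.1, 0.05, 0.6])
```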
-
Publication number: 20210150252
Abstract: This description relates to feature matching. The approach establishes pointwise correspondences between challenging image pairs: it takes off-the-shelf local features as input and uses an attentional graph neural network to solve an assignment optimization problem. The deep middle-end matcher handles partial point visibility and occlusion elegantly, producing a partial assignment matrix.
Type: Application
Filed: November 13, 2020
Publication date: May 20, 2021
Applicant: Magic Leap, Inc.
Inventors: Paul-Edouard Sarlin, Daniel DeTone, Tomasz Jan Malisiewicz, Andrew Rabinovich
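The assignment-optimization step can be illustrated with Sinkhorn-style normalisation, which turns a matrix of pairwise match scores into a soft assignment. This is a minimal sketch of that normalisation only; the matcher described above additionally handles unmatched points (partial assignment, e.g. via a dustbin row/column), which is omitted here.

```python
import numpy as np

def logsumexp(a, axis):
    """Numerically stable log-sum-exp along an axis, keeping dims."""
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

def sinkhorn(scores, n_iters=100):
    """Normalise a score matrix into a soft, near-doubly-stochastic assignment."""
    log_p = np.asarray(scores, float).copy()
    for _ in range(n_iters):
        log_p = log_p - logsumexp(log_p, axis=1)   # rows sum to 1
        log_p = log_p - logsumexp(log_p, axis=0)   # columns sum to 1
    return np.exp(log_p)

# Two points per image; high scores on the diagonal mean point i in
# image A matches point i in image B.
P = sinkhorn(np.array([[10.0, 0.0], [0.0, 10.0]]))
```

Working in log space keeps the iteration stable for large score magnitudes.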
-
Publication number: 20210134000
Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolution neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
Type: Application
Filed: January 12, 2021
Publication date: May 6, 2021
Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
-
Patent number: 10977554
Abstract: Systems, devices, and methods for training a neural network and performing image interest point detection and description using the neural network. The neural network may include an interest point detector subnetwork and a descriptor subnetwork. An optical device may include at least one camera for capturing a first image and a second image. A first set of interest points and a first descriptor may be calculated using the neural network based on the first image, and a second set of interest points and a second descriptor may be calculated using the neural network based on the second image. A homography between the first image and the second image may be determined based on the first and second sets of interest points and the first and second descriptors. The optical device may adjust virtual image light being projected onto an eyepiece based on the homography.
Type: Grant
Filed: November 14, 2018
Date of Patent: April 13, 2021
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Daniel DeTone, Tomasz Jan Malisiewicz
-
Patent number: 10963758
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity is added only where it is required.
Type: Grant
Filed: March 27, 2019
Date of Patent: March 30, 2021
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
-
Patent number: 10937188
Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolution neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
Type: Grant
Filed: March 5, 2020
Date of Patent: March 2, 2021
Assignee: Magic Leap, Inc.
Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
-
Patent number: 10909711
Abstract: A method of determining a pose of an image capture device includes capturing an image using an image capture device. The method also includes generating a data structure corresponding to the captured image. The method further includes comparing the data structure with a plurality of known data structures to identify a most similar known data structure. Moreover, the method includes reading metadata corresponding to the most similar known data structure to determine a pose of the image capture device.
Type: Grant
Filed: December 5, 2016
Date of Patent: February 2, 2021
Assignee: Magic Leap, Inc.
Inventors: Brigit Schroeder, Tomasz J. Malisiewicz, Andrew Rabinovich
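The compare-and-read-metadata loop described above amounts to nearest-neighbour retrieval over stored descriptors with pose metadata attached. A minimal sketch, with a hypothetical database of per-image descriptors and placeholder pose labels (the patent does not specify this representation):

```python
import numpy as np

# Hypothetical database: one descriptor per known image, with the
# capture pose stored as metadata alongside it.
db_descriptors = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
db_poses = ["pose_A", "pose_B", "pose_C"]   # placeholder pose metadata

def lookup_pose(query_descriptor):
    """Find the most similar known descriptor and return its pose metadata."""
    dists = np.linalg.norm(db_descriptors - query_descriptor, axis=1)
    return db_poses[int(dists.argmin())]

pose = lookup_pose(np.array([0.1, 0.95]))
```

At scale, the linear scan would be replaced by an approximate nearest-neighbour index, but the retrieve-then-read-metadata structure is the same.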
-
Publication number: 20200410699
Abstract: Systems and methods are disclosed for training and using neural networks for computing depth maps. One method for training the neural network includes providing an image input to the neural network. The image input may include a camera image of a training scene. The method may also include providing a depth input to the neural network. The depth input may be based on a high-density depth map of the training scene and a sampling mask. The method may further include generating, using the neural network, a computed depth map of the training scene based on the image input and the depth input. The method may further include modifying the neural network based on an error between the computed depth map and the high-density depth map.
Type: Application
Filed: September 11, 2020
Publication date: December 31, 2020
Applicant: Magic Leap, Inc.
Inventors: Vijay Badrinarayanan, Zhao Chen, Andrew Rabinovich
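The depth input described above — a high-density depth map combined with a sampling mask — can be sketched directly: mask out most of the dense map, keeping a sparse subset of ground-truth depths as the network's second input. Sizes and the sampling fraction below are illustrative.

```python
import numpy as np

def make_depth_input(dense_depth, sample_fraction=0.05, seed=0):
    """Build a sparse depth input: dense ground truth times a random sampling mask.

    Mirrors the abstract's depth input derived from a high-density depth
    map and a sampling mask; the real sampling pattern might instead
    come from a sparse depth sensor or SLAM keypoints.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(dense_depth.shape) < sample_fraction
    return dense_depth * mask, mask

dense = np.full((64, 64), 2.5)        # toy scene: flat wall at 2.5 m
sparse, mask = make_depth_input(dense)
```

During training, the loss compares the network's output against the full `dense` map, so the network learns to densify the sparse input.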
-
Publication number: 20200380793
Abstract: A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
Type: Application
Filed: August 6, 2020
Publication date: December 3, 2020
Inventors: Eric C. Browy, Michael Janusz Woods, Andrew Rabinovich
-
Publication number: 20200372246
Abstract: A neural network in a multi-task deep learning paradigm for machine vision includes an encoder comprising a first, a second, and a third tier. The first tier comprises a first-tier unit having one or more first-tier blocks. The second tier receives a first-tier output from the first tier at one or more second-tier units, each comprising one or more second-tier blocks. The third tier receives a second-tier output from the second tier at one or more third-tier units, each comprising one or more third-tier blocks. The neural network further comprises a decoder operatively coupled to the encoder to receive an encoder output, as well as one or more loss function layers configured to backpropagate one or more losses for training at least the encoder of the neural network in a deep learning paradigm.
Type: Application
Filed: May 20, 2020
Publication date: November 26, 2020
Applicant: Magic Leap, Inc.
Inventors: Prajwal Chidananda, Ayan Tuhinendu Sinha, Adithya Shricharan Srinivasa Rao, Douglas Bertram Lee, Andrew Rabinovich
-
Publication number: 20200351537
Abstract: The invention provides a content provisioning system. A mobile device has a mobile device processor and a communication interface connected to the mobile device processor and to a first resource device communication interface, operating under the control of the mobile device processor to receive first content transmitted by the first resource device transmitter. The mobile device also has a mobile device output device connected to the mobile device processor and operating under the control of the mobile device processor, capable of providing an output that can be sensed by a user.
Type: Application
Filed: May 1, 2020
Publication date: November 5, 2020
Applicant: Magic Leap, Inc.
Inventors: Eric C. Browy, Andrew Rabinovich, David C. Lundmark
-
Publication number: 20200334849
Abstract: A method of determining a pose of an image capture device includes capturing an image using an image capture device. The method also includes generating a data structure corresponding to the captured image. The method further includes comparing the data structure with a plurality of known data structures to identify a most similar known data structure. Moreover, the method includes reading metadata corresponding to the most similar known data structure to determine a pose of the image capture device.
Type: Application
Filed: July 7, 2020
Publication date: October 22, 2020
Applicant: Magic Leap, Inc.
Inventors: Brigit Schroeder, Tomasz Jan Malisiewicz, Andrew Rabinovich
-
Publication number: 20200334461
Abstract: A head-mounted augmented reality (AR) device can include a hardware processor programmed to receive different types of sensor data from a plurality of sensors (e.g., an inertial measurement unit, an outward-facing camera, a depth sensing camera, an eye imaging camera, or a microphone) and to determine an event of a plurality of events (e.g., face recognition, visual search, gesture identification, semantic segmentation, object detection, lighting detection, simultaneous localization and mapping, relocalization) using the different types of sensor data and a hydra neural network.
Type: Application
Filed: June 30, 2020
Publication date: October 22, 2020
Inventors: Andrew Rabinovich, Tomasz Jan Malisiewicz, Daniel DeTone
-
Publication number: 20200302628
Abstract: Augmented reality devices and methods for computing a homography based on two images. One method may include receiving a first image based on a first camera pose and a second image based on a second camera pose, generating a first point cloud based on the first image and a second point cloud based on the second image, providing the first point cloud and the second point cloud to a neural network, and generating, by the neural network, the homography based on the first point cloud and the second point cloud. The neural network may be trained by generating a plurality of points, determining a 3D trajectory, sampling the 3D trajectory to obtain camera poses viewing the points, projecting the points onto 2D planes, comparing a generated homography using the projected points to the ground-truth homography, and modifying the neural network based on the comparison.
Type: Application
Filed: June 8, 2020
Publication date: September 24, 2020
Applicant: Magic Leap, Inc.
Inventors: Daniel DeTone, Tomasz Jan Malisiewicz, Andrew Rabinovich
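The synthetic-data steps in the training description (generate 3D points, sample camera poses along a trajectory, project onto 2D planes) can be sketched with a pinhole projection. This toy version assumes identity camera rotation and purely translational poses, which the actual training pipeline need not do; all sizes are illustrative.

```python
import numpy as np

def project(points_3d, camera_pos, f=1.0):
    """Pinhole projection of world points for a camera at camera_pos.

    Simplifying assumption: the camera looks down +z with identity
    rotation, so projection is just (x/z, y/z) of the relative points.
    """
    rel = points_3d - camera_pos
    return f * rel[:, :2] / rel[:, 2:3]

# Synthetic training sample: random 3D points in front of the cameras,
# viewed from two poses sampled along a (here, linear) trajectory.
rng = np.random.default_rng(4)
points = rng.uniform([-1, -1, 4], [1, 1, 6], size=(100, 3))
pose_a = np.array([0.0, 0.0, 0.0])
pose_b = np.array([0.2, 0.0, 0.0])       # second pose further along the trajectory
uv_a, uv_b = project(points, pose_a), project(points, pose_b)
```

The two projected point sets play the role of the point clouds fed to the network, and the known relative pose yields the ground-truth homography for the loss.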