Patents by Inventor Vijay Badrinarayanan
Vijay Badrinarayanan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240134200
Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
Type: Application. Filed: December 29, 2023. Publication date: April 25, 2024. Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
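The rendering-location compensation described above can be illustrated with a toy sketch. This is not the patented method; the coordinates and the estimated device offset are hypothetical, and a real system would work in three dimensions with calibrated optics:

```python
def adjust_render_location(nominal_xy, device_offset_xy):
    """Shift a virtual object's render location to compensate for the
    device's deviation from its normal resting position: if the headset
    has slipped one way, render the object shifted the other way."""
    return (nominal_xy[0] - device_offset_xy[0],
            nominal_xy[1] - device_offset_xy[1])

# hypothetical estimate: headset slipped 3 px right and 2 px up
corrected = adjust_render_location((100, 50), (3, -2))
```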
-
Patent number: 11906742
Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
Type: Grant. Filed: July 26, 2021. Date of Patent: February 20, 2024. Assignee: Magic Leap, Inc. Inventors: Adrian Kaehler, Gary Bradski, Vijay Badrinarayanan
-
Publication number: 20240029269
Abstract: Systems and methods for eye image segmentation and image quality estimation are disclosed. In one aspect, after receiving an eye image, a device such as an augmented reality device can process the eye image using a convolutional neural network with a merged architecture to generate both a segmented eye image and a quality estimation of the eye image. The segmented eye image can include a background region, a sclera region, an iris region, or a pupil region. In another aspect, a convolutional neural network with a merged architecture can be trained for eye image segmentation and image quality estimation. In yet another aspect, the device can use the segmented eye image to determine eye contours such as a pupil contour and an iris contour. The device can use the eye contours to create a polar image of the iris region for computing an iris code or for biometric authentication.
Type: Application. Filed: August 24, 2023. Publication date: January 25, 2024. Inventors: Alexey Spizhevoy, Adrian Kaehler, Vijay Badrinarayanan
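The "merged architecture" idea — one shared trunk whose features feed both a per-pixel segmentation head and a scalar quality head — can be sketched with toy numpy stand-ins. Everything here (the downsampling "encoder", the head weights, the class count) is illustrative plumbing, not the patented network:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(image):
    """Toy stand-in for the shared convolutional trunk: downsample the
    eye image and apply a nonlinearity to get a feature map."""
    return np.tanh(image[::4, ::4])

def segmentation_head(feat, num_classes=4):
    """Per-pixel logits for background / sclera / iris / pupil."""
    w = rng.standard_normal((1, num_classes))
    return feat[..., None] * w              # (H, W, num_classes) logits

def quality_head(feat):
    """Scalar quality estimate pooled from the same shared features."""
    return float(1.0 / (1.0 + np.exp(-feat.mean())))

eye = rng.random((32, 32))
feat = shared_encoder(eye)
seg_mask = segmentation_head(feat).argmax(axis=-1)   # segmented eye image
quality = quality_head(feat)                         # quality estimate
```

The point of the merged design is that both outputs are computed from one forward pass through the shared encoder, rather than running two separate networks.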
-
Patent number: 11853894
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
Type: Grant. Filed: June 10, 2021. Date of Patent: December 26, 2023. Assignee: Magic Leap, Inc. Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
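The core idea — adapting task loss weights by monitoring loss trajectories, instead of fixing them by random search — can be sketched with a simple heuristic. This is an illustrative stand-in, not the claimed meta-learning procedure; the task names and loss values are made up:

```python
import numpy as np

def update_task_weights(loss_history, temperature=1.0):
    """Given per-task loss trajectories (task name -> list of losses),
    up-weight tasks whose losses are decreasing slowly, so training
    focus shifts dynamically rather than using static weights."""
    # relative progress: current loss over initial loss (higher = slower)
    ratios = {t: losses[-1] / losses[0] for t, losses in loss_history.items()}
    tasks = sorted(ratios)
    vals = np.array([ratios[t] for t in tasks]) / temperature
    exp = np.exp(vals - vals.max())         # softmax: slow tasks get more weight
    return dict(zip(tasks, exp / exp.sum()))

history = {"depth": [1.0, 0.4], "seg": [1.0, 0.9], "normals": [1.0, 0.7]}
weights = update_task_weights(history)
# "seg" has made the least progress, so it receives the largest weight
```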
-
Publication number: 20230394315
Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
Type: Application. Filed: August 23, 2023. Publication date: December 7, 2023. Inventors: Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Jan Malisiewicz, Andrew Rabinovich
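One step in this kind of pipeline — decoding the two-dimensional ordered keypoints from per-keypoint heatmaps, taken in the fixed order defined by the predicted room type — can be sketched as follows (toy data, not the patented network):

```python
import numpy as np

def decode_keypoints(heatmaps):
    """One heatmap channel per keypoint; each keypoint is the argmax
    location of its channel, and channel order encodes keypoint order."""
    return [np.unravel_index(np.argmax(hm), hm.shape) for hm in heatmaps]

# three hypothetical heatmap channels with a single hot pixel each
hms = np.zeros((3, 8, 8))
hms[0, 2, 3] = 1.0
hms[1, 5, 1] = 1.0
hms[2, 7, 7] = 1.0
keypoints = decode_keypoints(hms)   # ordered (row, col) keypoints
```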
-
Patent number: 11797860
Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolutional neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
Type: Grant. Filed: April 11, 2022. Date of Patent: October 24, 2023. Assignee: Magic Leap, Inc. Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
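The control flow of iterative feature pooling — re-pool features inside the current box, regress a correction, repeat — can be sketched in miniature. The regressor here is a hypothetical stand-in (it just pulls the box halfway toward a fixed target); in the detector it would be the learned regressor layers operating on re-pooled features:

```python
import numpy as np

def iterative_refine(initial_box, regress_fn, iters=2):
    """Iterative refinement: each round, apply the regressor to the
    current box (x1, y1, x2, y2) and add the predicted correction."""
    box = np.asarray(initial_box, float)
    for _ in range(iters):
        box = box + regress_fn(box)
    return box

# hypothetical regressor that nudges a box halfway toward a target box
target = np.array([10.0, 10.0, 50.0, 50.0])
refined = iterative_refine([0, 0, 40, 40], lambda b: 0.5 * (target - b))
```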
-
Patent number: 11775058
Abstract: Systems and methods are disclosed for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
Type: Grant. Filed: December 21, 2020. Date of Patent: October 3, 2023. Assignee: Magic Leap, Inc. Inventors: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
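One simple geometric reading of such network outputs — not necessarily the patented computation — is that the optical axis points from the cornea (eyeball) center through the pupil center, so a gaze vector falls out of the two predicted 3D points:

```python
import numpy as np

def gaze_from_outputs(pupil_center_3d, cornea_center_3d):
    """Unit gaze vector from cornea center through pupil center
    (a common geometric model; coordinates here are hypothetical)."""
    v = np.asarray(pupil_center_3d, float) - np.asarray(cornea_center_3d, float)
    return v / np.linalg.norm(v)

# toy outputs: pupil directly in front of the cornea center along +z
gaze = gaze_from_outputs([0.0, 0.0, 1.0], [0.0, 0.0, 0.0])
```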
-
Patent number: 11776131
Abstract: Systems and methods for eye image segmentation and image quality estimation are disclosed. In one aspect, after receiving an eye image, a device such as an augmented reality device can process the eye image using a convolutional neural network with a merged architecture to generate both a segmented eye image and a quality estimation of the eye image. The segmented eye image can include a background region, a sclera region, an iris region, or a pupil region. In another aspect, a convolutional neural network with a merged architecture can be trained for eye image segmentation and image quality estimation. In yet another aspect, the device can use the segmented eye image to determine eye contours such as a pupil contour and an iris contour. The device can use the eye contours to create a polar image of the iris region for computing an iris code or for biometric authentication.
Type: Grant. Filed: August 20, 2021. Date of Patent: October 3, 2023. Assignee: Magic Leap, Inc. Inventors: Alexey Spizhevoy, Adrian Kaehler, Vijay Badrinarayanan
-
Patent number: 11775835
Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
Type: Grant. Filed: April 9, 2020. Date of Patent: October 3, 2023. Assignee: Magic Leap, Inc. Inventors: Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Jan Malisiewicz, Andrew Rabinovich
-
Patent number: 11682127
Abstract: Systems and methods are disclosed for training and using neural networks for computing depth maps. One method for training the neural network includes providing an image input to the neural network. The image input may include a camera image of a training scene. The method may also include providing a depth input to the neural network. The depth input may be based on a high-density depth map of the training scene and a sampling mask. The method may further include generating, using the neural network, a computed depth map of the training scene based on the image input and the depth input. The method may further include modifying the neural network based on an error between the computed depth map and the high-density depth map.
Type: Grant. Filed: September 11, 2020. Date of Patent: June 20, 2023. Assignee: Magic Leap, Inc. Inventors: Vijay Badrinarayanan, Zhao Chen, Andrew Rabinovich
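Constructing the depth input from a high-density depth map and a sampling mask can be sketched as follows. The random mask, the keep fraction, and the trivial "network" (which just predicts the mean of the known samples) are illustrative assumptions, not the patented training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_depth_input(high_density_depth, keep_fraction=0.05):
    """Sparse depth input: a random sampling mask keeps a small fraction
    of the dense depth values; all other pixels are zeroed out."""
    mask = rng.random(high_density_depth.shape) < keep_fraction
    return np.where(mask, high_density_depth, 0.0), mask

dense = rng.uniform(0.5, 5.0, size=(48, 64))   # hypothetical dense depth (m)
sparse, mask = make_depth_input(dense)

# trivial stand-in "network": propagate the mean of the known samples;
# the training error compares the prediction against the dense map
predicted = np.full_like(dense, sparse[mask].mean())
error = np.abs(predicted - dense).mean()
```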
-
Patent number: 11657286
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where required.
Type: Grant. Filed: February 23, 2021. Date of Patent: May 23, 2023. Assignee: Magic Leap, Inc. Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
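One way to read the greedy procedure is: estimate, per layer, how much is gained by adding capacity there, then spend a fixed budget only on the layers where the gain is largest. This is a heavily simplified sketch under that assumption; the gain values, the budget, and the selection rule are all hypothetical:

```python
def greedy_grow(layer_gains, budget=2):
    """Greedy structure learning in miniature: given an estimated
    information gain for widening each layer (bottom convolutional
    layers first, fully connected layers last), add capacity only at
    the `budget` layers with the largest gains."""
    ranked = sorted(range(len(layer_gains)),
                    key=lambda i: layer_gains[i], reverse=True)
    return sorted(ranked[:budget])          # indices of layers to widen

# hypothetical per-layer gains measured during one greedy pass
grown = greedy_grow([0.02, 0.30, 0.05, 0.25], budget=2)
```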
-
Patent number: 11630314
Abstract: An example wearable display system can determine a user interface (UI) event with respect to a virtual UI device (e.g., a button) and a pointer (e.g., a finger or a stylus) using a neural network. The wearable display system can render a representation of the UI device onto an image of the pointer captured when the virtual UI device is shown to the user and the user uses the pointer to interact with the virtual UI device. The representation of the UI device can include concentric shapes (or shapes with similar or the same centers of gravity) of high contrast. The neural network can be trained using training images with representations of virtual UI devices and pointers.
Type: Grant. Filed: April 13, 2022. Date of Patent: April 18, 2023. Assignee: Magic Leap, Inc. Inventors: Adrian Kaehler, Gary R. Bradski, Vijay Badrinarayanan
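Rendering a high-contrast concentric representation of a UI device onto the pointer image can be sketched as painting alternating bright/dark rings at the device's location. The image size, radii, and intensity values are all hypothetical:

```python
import numpy as np

def render_ui_marker(image, center, radii, values=(1.0, 0.0)):
    """Paint concentric high-contrast rings (largest radius first) at
    the virtual UI device's location on the captured pointer image."""
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - center[0], xx - center[1])
    out = image.copy()
    for i, radius in enumerate(sorted(radii, reverse=True)):
        out[r <= radius] = values[i % len(values)]   # alternate intensities
    return out

img = np.full((32, 32), 0.5)                 # toy grayscale pointer image
marked = render_ui_marker(img, center=(16, 16), radii=[10, 6, 3])
```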
-
Patent number: 11600049
Abstract: Techniques for estimating a perimeter of a room environment at least partially enclosed by a set of adjoining walls using posed images are disclosed. A set of images and a set of poses are obtained. A depth map is generated based on the set of images and the set of poses. A set of wall segmentation maps are generated based on the set of images, each of the set of wall segmentation maps indicating a target region of a corresponding image that contains the set of adjoining walls. A point cloud is generated based on the depth map and the set of wall segmentation maps, the point cloud including a plurality of points that are sampled along portions of the depth map that align with the target region. The perimeter of the environment along the set of adjoining walls is estimated based on the point cloud.
Type: Grant. Filed: April 23, 2020. Date of Patent: March 7, 2023. Assignee: Magic Leap, Inc. Inventors: Zhao Chen, Ameya Pramod Phalak, Vijay Badrinarayanan
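Sampling the point cloud only where the depth map aligns with the wall segmentation can be sketched with a toy pinhole back-projection. The camera intrinsics, the flat synthetic wall, and the extent-based "perimeter proxy" are assumptions for illustration, not the patented estimator:

```python
import numpy as np

def wall_point_cloud(depth, wall_mask, fx=60.0, cx=32.0):
    """Back-project depth pixels that fall inside the wall segmentation
    into a top-down (x, z) point cloud (height dropped for simplicity)."""
    v, u = np.nonzero(wall_mask)       # pixel coordinates inside the mask
    z = depth[v, u]
    x = (u - cx) * z / fx              # pinhole back-projection, x only
    return np.stack([x, z], axis=1)

depth = np.full((48, 64), 3.0)         # synthetic wall 3 m away
wall_mask = np.zeros((48, 64), bool)
wall_mask[:, 5:60] = True              # segmentation: one wall span
cloud = wall_point_cloud(depth, wall_mask)

# crude proxy for the wall's contribution to the perimeter: x extent
width = cloud[:, 0].max() - cloud[:, 0].min()
```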
-
Patent number: 11537895
Abstract: Systems and methods for training a multitask network are disclosed. In one aspect, training the multitask network includes determining, for each task, a gradient norm of the single-task loss adjusted by a task weight, with respect to network weights of the multitask network, and a relative training rate for the task based on the single-task loss. Subsequently, a gradient loss function, comprising a difference between (1) the determined gradient norm for each task and (2) a corresponding target gradient norm, can be determined. An updated task weight for each task can be determined using a gradient of the gradient loss function with respect to that task weight, and used in the next iteration of training the multitask network.
Type: Grant. Filed: October 24, 2018. Date of Patent: December 27, 2022. Assignee: Magic Leap, Inc. Inventors: Zhao Chen, Vijay Badrinarayanan, Andrew Rabinovich
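One iteration of the described update can be sketched with scalar gradient norms. This is a simplified numerical illustration (the losses, norms, and hyperparameters are made up, and a real implementation would differentiate through the network):

```python
import numpy as np

def gradnorm_step(grad_norms, losses, initial_losses, task_weights,
                  alpha=1.5, lr=0.1):
    """One task-weight update: compare each task's weighted gradient
    norm to a target derived from its relative training rate, form an
    L1 gradient loss, and step the weights to close the gap (slower
    tasks end up with larger weights)."""
    grad_norms = np.asarray(grad_norms, float)
    w = np.asarray(task_weights, float)
    g = w * grad_norms                              # weighted gradient norms
    rates = np.asarray(losses, float) / np.asarray(initial_losses, float)
    inv_rate = rates / rates.mean()                 # relative training rate
    target = g.mean() * inv_rate ** alpha           # target gradient norm per task
    grad_loss = np.abs(g - target).sum()            # L1 gradient loss
    # d|g_i - target_i| / dw_i = sign(g_i - target_i) * grad_norms_i
    w = w - lr * np.sign(g - target) * grad_norms
    w = np.clip(w, 1e-3, None)
    w *= len(w) / w.sum()                           # renormalize: sum equals T
    return w, grad_loss

new_w, gloss = gradnorm_step(grad_norms=[2.0, 0.5], losses=[0.9, 0.3],
                             initial_losses=[1.0, 1.0], task_weights=[1.0, 1.0])
```

Task 0 has made less progress (its loss ratio is higher), so after the step it carries the larger weight.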
-
Publication number: 20220244781
Abstract: Techniques related to the computation of gaze vectors of users of wearable devices are disclosed. A neural network may be trained through first and second training steps. The neural network may include a set of feature encoding layers and a plurality of sets of task-specific layers that each operate on an output of the set of feature encoding layers. During the first training step, a first image of a first eye may be provided to the neural network, eye segmentation data may be generated using the neural network, and the set of feature encoding layers may be trained. During the second training step, a second image of a second eye may be provided to the neural network, network output data may be generated using the neural network, and the plurality of sets of task-specific layers may be trained.
Type: Application. Filed: February 17, 2022. Publication date: August 4, 2022. Applicant: Magic Leap, Inc. Inventors: Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew Rabinovich
-
Publication number: 20220245404
Abstract: Disclosed herein are examples of a wearable display system capable of determining a user interface (UI) event with respect to a virtual UI device (e.g., a button) and a pointer (e.g., a finger or a stylus) using a neural network. The wearable display system can render a representation of the UI device onto an image of the pointer captured when the virtual UI device is shown to the user and the user uses the pointer to interact with the virtual UI device. The representation of the UI device can include concentric shapes (or shapes with similar or the same centers of gravity) of high contrast. The neural network can be trained using training images with representations of virtual UI devices and pointers.
Type: Application. Filed: April 13, 2022. Publication date: August 4, 2022. Inventors: Adrian Kaehler, Gary R. Bradski, Vijay Badrinarayanan
-
Publication number: 20220237815
Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolutional neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
Type: Application. Filed: April 11, 2022. Publication date: July 28, 2022. Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
-
Patent number: 11334765
Abstract: A wearable display system can determine a user interface (UI) event with respect to a virtual UI device (e.g., a button) and a pointer (e.g., a finger or a stylus) using a neural network. The wearable display system can render a representation of the UI device onto an image of the pointer captured when the virtual UI device is shown to the user and the user uses the pointer to interact with the virtual UI device. The representation of the UI device can include concentric shapes (or shapes with similar or the same centers of gravity) of high contrast. The neural network can be trained using training images with representations of virtual UI devices and pointers.
Type: Grant. Filed: January 13, 2021. Date of Patent: May 17, 2022. Assignee: Magic Leap, Inc. Inventors: Adrian Kaehler, Gary R. Bradski, Vijay Badrinarayanan
-
Patent number: 11328443
Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolutional neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
Type: Grant. Filed: January 12, 2021. Date of Patent: May 10, 2022. Assignee: Magic Leap, Inc. Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
-
Publication number: 20220044406
Abstract: Systems and methods for eye image segmentation and image quality estimation are disclosed. In one aspect, after receiving an eye image, a device such as an augmented reality device can process the eye image using a convolutional neural network with a merged architecture to generate both a segmented eye image and a quality estimation of the eye image. The segmented eye image can include a background region, a sclera region, an iris region, or a pupil region. In another aspect, a convolutional neural network with a merged architecture can be trained for eye image segmentation and image quality estimation. In yet another aspect, the device can use the segmented eye image to determine eye contours such as a pupil contour and an iris contour. The device can use the eye contours to create a polar image of the iris region for computing an iris code or for biometric authentication.
Type: Application. Filed: August 20, 2021. Publication date: February 10, 2022. Inventors: Alexey Spizhevoy, Adrian Kaehler, Vijay Badrinarayanan