Patents by Inventor Srivignesh Rajendran
Srivignesh Rajendran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11853894
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
Type: Grant
Filed: June 10, 2021
Date of Patent: December 26, 2023
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
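The core idea in this abstract, adjusting task-loss weights during training based on the trajectory of each task's loss, can be illustrated with a minimal sketch. The ratio heuristic, the softmax with a temperature, and all function names below are assumptions made for illustration; they are not the patented method.

```python
import math

def update_task_weights(loss_history, temperature=1.0):
    """Given per-task loss trajectories (each a list of recent loss
    values), assign larger weights to tasks whose loss is shrinking
    slowly. This is a simple stand-in for the learned, dynamic
    balancing the abstract describes."""
    # Relative progress per task: current loss / previous loss.
    # A ratio near 1.0 means the task is barely improving.
    ratios = [hist[-1] / hist[-2] for hist in loss_history]
    # Softmax over the ratios: slow-improving tasks get more weight.
    exps = [math.exp(r / temperature) for r in ratios]
    total = sum(exps)
    return [e / total for e in exps]

def combined_loss(task_losses, weights):
    """Weighted sum of per-task losses used as the training objective."""
    return sum(w * l for w, l in zip(weights, task_losses))
```

For example, if task A's loss fell from 1.0 to 0.5 while task B's only fell from 1.0 to 0.9, task B receives the larger weight on the next step, which is the dynamic behavior that static, hand-tuned weights cannot provide.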
-
Patent number: 11803231
Abstract: Techniques are disclosed for training a machine learning model to predict user expression. A plurality of images are received, each of the plurality of images containing at least a portion of a user's face. A plurality of values for a movement metric are calculated based on the plurality of images, each of the plurality of values for the movement metric being indicative of movement of the user's face. A plurality of values for an expression unit are calculated based on the plurality of values for the movement metric, each of the plurality of values for the expression unit corresponding to an extent to which the user's face is producing the expression unit. The machine learning model is trained using the plurality of images and the plurality of values for the expression unit.
Type: Grant
Filed: April 19, 2021
Date of Patent: October 31, 2023
Assignee: Magic Leap, Inc.
Inventors: Daniel Jürg Donatsch, Srivignesh Rajendran
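The pipeline this abstract describes (images, then movement-metric values, then expression-unit values used as training labels) can be sketched as follows. The specific movement metric (mean absolute frame difference) and the linear mapping to a [0, 1] extent are illustrative assumptions; the abstract does not fix either formula.

```python
def movement_metric(prev_frame, frame):
    """Mean absolute pixel difference between consecutive face crops,
    one plausible movement metric (an assumption, not the patent's)."""
    n = len(frame)
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / n

def expression_unit_values(frames, max_movement):
    """Map per-frame movement to an extent in [0, 1] for a single
    expression unit; these values would serve as training labels."""
    values = [0.0]  # no movement is attributed to the first frame
    for prev, cur in zip(frames, frames[1:]):
        m = movement_metric(prev, cur)
        values.append(min(m / max_movement, 1.0))
    return values
```

A model would then be fit on (frame, extent) pairs, which is the final training step the abstract describes.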
-
Patent number: 11775058
Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
Type: Grant
Filed: December 21, 2020
Date of Patent: October 3, 2023
Assignee: Magic Leap, Inc.
Inventors: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
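One simple way to turn the network outputs named in this abstract (cornea center and pupil data) into a gaze direction is the unit vector from the cornea center through the pupil center. The abstract leaves the exact geometry to the implementation, so treat this as a hedged sketch with assumed 3D inputs rather than the patented computation.

```python
import math

def gaze_vector(cornea_center, pupil_center):
    """Unit vector pointing from the cornea center through the pupil
    center, one plausible geometric reading of 'computing the gaze
    vector based on the network output data'."""
    d = [p - c for p, c in zip(pupil_center, cornea_center)]
    norm = math.sqrt(sum(x * x for x in d))
    return [x / norm for x in d]
```

The training procedure the abstract outlines is a standard supervised loop: compare training outputs against ground-truth (GT) data and update the network from the error.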
-
Patent number: 11657286
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
Type: Grant
Filed: February 23, 2021
Date of Patent: May 23, 2023
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
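The greedy step this abstract describes, locating the layer where information gain bottlenecks and adding capacity only there rather than deepening the whole network, can be sketched as below. How the per-layer gains are measured is the substance of the patent; here they are simply assumed to be given, and the widening factor is an illustrative assumption.

```python
def widen_bottleneck(layer_widths, layer_gains, factor=2):
    """Greedy structure-learning step: find the layer with the lowest
    information gain (the bottleneck) and widen only that layer.
    layer_gains are assumed given; the disclosure derives them from
    correlations in the data/problem."""
    bottleneck = min(range(len(layer_gains)), key=lambda i: layer_gains[i])
    widths = list(layer_widths)
    widths[bottleneck] *= factor
    return widths, bottleneck
```

Repeating this step until no layer's gain falls below a threshold grows capacity only where it is required, which is the contrast the abstract draws with uniformly deepening the architecture.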
-
Publication number: 20220244781
Abstract: Techniques related to the computation of gaze vectors of users of wearable devices are disclosed. A neural network may be trained through first and second training steps. The neural network may include a set of feature encoding layers and a plurality of sets of task-specific layers that each operate on an output of the set of feature encoding layers. During the first training step, a first image of a first eye may be provided to the neural network, eye segmentation data may be generated using the neural network, and the set of feature encoding layers may be trained. During the second training step, a second image of a second eye may be provided to the neural network, network output data may be generated using the neural network, and the plurality of sets of task-specific layers may be trained.
Type: Application
Filed: February 17, 2022
Publication date: August 4, 2022
Applicant: Magic Leap, Inc.
Inventors: Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew Rabinovich
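The two-step scheme in this abstract partitions the network into a shared feature encoder (trained first, on eye segmentation) and several task-specific heads (trained second). A minimal sketch of that parameter partition is below; whether the encoder stays frozen during the second step is an assumption here, and all names are hypothetical.

```python
def trainable_params(step, encoder_params, head_params):
    """Select which parameter groups a training step updates, following
    the abstract's two-step scheme: step 1 fits the shared feature
    encoding layers, step 2 fits the task-specific layers (the encoder
    is assumed frozen in step 2, which the abstract does not specify)."""
    if step == 1:
        return list(encoder_params)
    # Flatten the per-task head parameters into one list for step 2.
    return [p for head in head_params.values() for p in head]
```

An optimizer would be built over `trainable_params(1, ...)` for the segmentation phase, then rebuilt over `trainable_params(2, ...)` for the multi-task phase.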
-
Publication number: 20210406609
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
Type: Application
Filed: June 10, 2021
Publication date: December 30, 2021
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
-
Publication number: 20210326583
Abstract: Techniques are disclosed for training a machine learning model to predict user expression. A plurality of images are received, each of the plurality of images containing at least a portion of a user's face. A plurality of values for a movement metric are calculated based on the plurality of images, each of the plurality of values for the movement metric being indicative of movement of the user's face. A plurality of values for an expression unit are calculated based on the plurality of values for the movement metric, each of the plurality of values for the expression unit corresponding to an extent to which the user's face is producing the expression unit. The machine learning model is trained using the plurality of images and the plurality of values for the expression unit.
Type: Application
Filed: April 19, 2021
Publication date: October 21, 2021
Applicant: Magic Leap, Inc.
Inventors: Daniel Jürg Donatsch, Srivignesh Rajendran
-
Patent number: 11048978
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
Type: Grant
Filed: November 9, 2018
Date of Patent: June 29, 2021
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
-
Publication number: 20210182636
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
Type: Application
Filed: February 23, 2021
Publication date: June 17, 2021
Applicant: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
-
Publication number: 20210182554
Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
Type: Application
Filed: December 21, 2020
Publication date: June 17, 2021
Applicant: Magic Leap, Inc.
Inventors: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
-
Patent number: 10963758
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
Type: Grant
Filed: March 27, 2019
Date of Patent: March 30, 2021
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
-
Publication number: 20190286951
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
Type: Application
Filed: March 27, 2019
Publication date: September 19, 2019
Applicant: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
-
Publication number: 20190147298
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
Type: Application
Filed: November 9, 2018
Publication date: May 16, 2019
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
-
Patent number: 10255529
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
Type: Grant
Filed: March 13, 2017
Date of Patent: April 9, 2019
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
-
Publication number: 20170262737
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
Type: Application
Filed: March 13, 2017
Publication date: September 14, 2017
Applicant: Magic Leap, Inc.
Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz