Patents by Inventor Srivignesh Rajendran

Srivignesh Rajendran has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11853894
    Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: December 26, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
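The abstract above describes learning to re-balance per-task loss weights during multi-task training by watching the loss trajectories. As a rough illustration only (this is a generic dynamic-balancing heuristic, not the patented algorithm; the function name and the softmax-over-progress rule are my own assumptions), one might weight tasks by how much loss each has yet to shed:

```python
import numpy as np

def update_task_weights(loss_history, temperature=1.0):
    """Given per-task loss trajectories (shape: steps x tasks), return
    softmax weights that favor tasks whose losses have decreased the
    least so far -- a simple dynamic-balancing heuristic."""
    loss_history = np.asarray(loss_history, dtype=float)
    # Remaining-loss ratio per task: current loss relative to initial loss.
    progress = loss_history[-1] / loss_history[0]
    # Softmax over the ratios: slower-progressing tasks get more weight.
    logits = progress / temperature
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()
```

Calling this every few training steps and multiplying each task loss by its weight would shift gradient budget toward lagging tasks, in the spirit of the dynamic balancing the abstract contrasts with static, hand-tuned weights.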
  • Patent number: 11803231
    Abstract: Techniques are disclosed for training a machine learning model to predict user expression. A plurality of images are received, each of the plurality of images containing at least a portion of a user's face. A plurality of values for a movement metric are calculated based on the plurality of images, each of the plurality of values for the movement metric being indicative of movement of the user's face. A plurality of values for an expression unit are calculated based on the plurality of values for the movement metric, each of the plurality of values for the expression unit corresponding to an extent to which the user's face is producing the expression unit. The machine learning model is trained using the plurality of images and the plurality of values for the expression unit.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: October 31, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Daniel Jürg Donatsch, Srivignesh Rajendran
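The pipeline above derives expression-unit labels from a movement metric computed over face images, then trains on those labels. A minimal sketch of the labeling half, under my own assumptions (mean absolute pixel difference as the movement metric, max-normalization to [0, 1] as the mapping; the patent does not specify these choices):

```python
import numpy as np

def movement_metric(prev_frame, frame):
    """Mean absolute pixel difference between consecutive face crops --
    an illustrative stand-in for the patent's movement metric."""
    return float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))))

def expression_unit_values(frames, scale=None):
    """Map per-frame movement to [0, 1] expression-unit activations by
    normalizing against the largest observed movement in the clip."""
    moves = [0.0] + [movement_metric(a, b) for a, b in zip(frames, frames[1:])]
    scale = scale or (max(moves) or 1.0)
    return [min(m / scale, 1.0) for m in moves]
```

The resulting (image, value) pairs would then serve as the training set for the expression-prediction model the abstract describes.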
  • Patent number: 11775058
    Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: October 3, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
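Given the network outputs the abstract lists (pupil data, eye segmentation, cornea center), the final gaze computation reduces to a direction between two estimated points. A sketch under the assumption that both the pupil center and cornea center have already been lifted into 3D camera coordinates (the patent's actual geometry may differ):

```python
import numpy as np

def gaze_vector(pupil_center_3d, cornea_center_3d):
    """Unit gaze direction from the cornea center through the pupil
    center, both assumed to be in 3D camera coordinates."""
    v = np.asarray(pupil_center_3d, float) - np.asarray(cornea_center_3d, float)
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("pupil and cornea centers coincide")
    return v / n
```

The training loop the abstract outlines (forward pass, error against ground truth, weight update) is standard supervised learning over these network outputs.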
  • Patent number: 11657286
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: May 23, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
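The greedy structure-learning idea above, adding capacity only where it pays off, can be caricatured as a widen-where-it-helps loop. This is a toy sketch, not the patented information-gain procedure; `eval_fn`, the per-round widening step, and the stopping threshold are all illustrative assumptions:

```python
def grow_network(layers, eval_fn, widen_step=16, max_rounds=3, min_gain=1e-3):
    """Greedy widening sketch: each round, trial-widen every layer by
    `widen_step` units and keep only the single change with the best
    improvement in eval_fn(layers) (higher is better)."""
    best = eval_fn(layers)
    for _ in range(max_rounds):
        gains = []
        for i in range(len(layers)):
            trial = list(layers)
            trial[i] += widen_step
            gains.append((eval_fn(trial) - best, i))
        gain, i = max(gains)
        if gain < min_gain:  # no layer is a worthwhile bottleneck; stop
            break
        layers[i] += widen_step
        best += gain
    return layers
```

The loop stops as soon as no widening clears the gain threshold, mirroring the abstract's point that depth and width are added only where required rather than uniformly.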
  • Publication number: 20220244781
    Abstract: Techniques related to the computation of gaze vectors of users of wearable devices are disclosed. A neural network may be trained through first and second training steps. The neural network may include a set of feature encoding layers and a plurality of sets of task-specific layers that each operate on an output of the set of feature encoding layers. During the first training step, a first image of a first eye may be provided to the neural network, eye segmentation data may be generated using the neural network, and the set of feature encoding layers may be trained. During the second training step, a second image of a second eye may be provided to the neural network, network output data may be generated using the neural network, and the plurality of sets of task-specific layers may be trained.
    Type: Application
    Filed: February 17, 2022
    Publication date: August 4, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew Rabinovich
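The two training steps above, first fitting the shared feature encoder via eye segmentation, then fitting the task-specific heads on top of it, amount to selecting different parameter groups per stage. A minimal sketch assuming a plain dict-of-parameter-lists model representation (the key names are my own, not the patent's):

```python
def trainable_params(model, stage):
    """Select which parameter groups the optimizer updates per stage:
    stage 1 trains the shared encoder (driven by eye segmentation),
    stage 2 freezes it and trains only the task-specific heads."""
    if stage == 1:
        return model["encoder"] + model["segmentation_head"]
    if stage == 2:
        return [p for head in model["task_heads"] for p in head]
    raise ValueError("stage must be 1 or 2")
```

In a framework like PyTorch the same split is usually expressed by toggling `requires_grad` on the encoder before building the stage-2 optimizer.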
  • Publication number: 20210406609
    Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 30, 2021
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
  • Publication number: 20210326583
    Abstract: Techniques are disclosed for training a machine learning model to predict user expression. A plurality of images are received, each of the plurality of images containing at least a portion of a user's face. A plurality of values for a movement metric are calculated based on the plurality of images, each of the plurality of values for the movement metric being indicative of movement of the user's face. A plurality of values for an expression unit are calculated based on the plurality of values for the movement metric, each of the plurality of values for the expression unit corresponding to an extent to which the user's face is producing the expression unit. The machine learning model is trained using the plurality of images and the plurality of values for the expression unit.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 21, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Daniel Jürg Donatsch, Srivignesh Rajendran
  • Patent number: 11048978
    Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: June 29, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
  • Publication number: 20210182636
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
    Type: Application
    Filed: February 23, 2021
    Publication date: June 17, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Publication number: 20210182554
    Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
    Type: Application
    Filed: December 21, 2020
    Publication date: June 17, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
  • Patent number: 10963758
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: March 30, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Publication number: 20190286951
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
    Type: Application
    Filed: March 27, 2019
    Publication date: September 19, 2019
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Publication number: 20190147298
    Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
    Type: Application
    Filed: November 9, 2018
    Publication date: May 16, 2019
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
  • Patent number: 10255529
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: April 9, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Publication number: 20170262737
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance is only added where it is required.
    Type: Application
    Filed: March 13, 2017
    Publication date: September 14, 2017
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz