Patents by Inventor Siddharth Mahendran

Siddharth Mahendran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127538
    Abstract: This document describes scene understanding for cross reality systems using occupancy grids. In one aspect, a method includes recognizing one or more objects in a model of a physical environment generated using images of the physical environment. For each object, a bounding box is fit around the object. An occupancy grid that includes multiple cells is generated within the bounding box around the object. A value is assigned to each cell of the occupancy grid based on whether the cell includes a portion of the object. An object representation that includes information describing the occupancy grid for the object is generated. The object representations are sent to one or more devices.
    Type: Application
    Filed: February 3, 2022
    Publication date: April 18, 2024
    Inventors: Divya Ramnath, Shiyu Dong, Siddharth Choudhary, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta
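    The occupancy-grid construction the abstract describes can be sketched as follows. This is an illustrative stand-in, not the patented implementation: the point representation, grid resolution, and function names are all assumptions.

    ```python
    # Hypothetical sketch: fit an axis-aligned bounding box around an
    # object's points, then mark each grid cell inside the box as occupied
    # (1) if any object point falls in it, else leave it unoccupied.

    def bounding_box(points):
        """Axis-aligned 3-D bounding box (min corner, max corner)."""
        xs, ys, zs = zip(*points)
        return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

    def occupancy_grid(points, resolution=4):
        """Divide the bounding box into resolution^3 cells and record
        which cells contain at least one object point."""
        lo, hi = bounding_box(points)
        size = [max(hi[d] - lo[d], 1e-9) for d in range(3)]
        grid = {}
        for p in points:
            cell = tuple(
                min(int((p[d] - lo[d]) / size[d] * resolution), resolution - 1)
                for d in range(3)
            )
            grid[cell] = 1
        return grid  # sparse: only occupied cells are stored

    # Example: opposite corners of a unit cube land in opposite cells.
    pts = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
    g = occupancy_grid(pts, resolution=2)
    ```

    A sparse representation like this is one natural way to make the per-object grid compact enough to send to other devices, as the abstract's final step requires.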
  • Publication number: 20230290132
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object recognition neural network using multiple data sources. One of the methods includes receiving training data that includes a plurality of training images from a first source and images from a second source. A set of training images are obtained from the training data. For each training image in the set of training images, contrast equalization is applied to the training image to generate a modified image. The modified image is processed using the neural network to generate an object recognition output for the modified image. A loss is determined based on errors between, for each training image in the set, the object recognition output for the modified image generated from the training image and ground-truth annotation for the training image. Parameters of the neural network are updated based on the determined loss.
    Type: Application
    Filed: July 28, 2021
    Publication date: September 14, 2023
    Inventors: Siddharth Mahendran, Nitin Bansal, Nitesh Sekhar, Manushree Gangwar, Khushi Gupta, Prateek Singhal, Tarrence Van As, Adithya Shricharan Srinivasa Rao
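    The contrast-equalization step in this abstract corresponds to standard histogram equalization. A minimal pure-Python version is sketched below; the real system would apply this to full training images before the network's forward pass, and the names here are illustrative only.

    ```python
    # Illustrative sketch (not the patent's code): histogram-equalize a
    # grayscale image so a low-contrast training image is stretched over
    # the full intensity range before being fed to the network.

    def equalize(image, levels=256):
        """image: 2-D grayscale image as a list of rows of ints."""
        flat = [v for row in image for v in row]
        hist = [0] * levels
        for v in flat:
            hist[v] += 1
        # Cumulative distribution, mapped onto the output range.
        cdf, total = [], 0
        for count in hist:
            total += count
            cdf.append(total)
        n = len(flat)
        cdf_min = next(c for c in cdf if c > 0)
        scale = (levels - 1) / max(n - cdf_min, 1)
        lut = [round((c - cdf_min) * scale) for c in cdf]
        return [[lut[v] for v in row] for row in image]

    # A low-contrast image (values 100..103) spreads across 0..255.
    img = [[100, 101], [102, 103]]
    out = equalize(img)
    ```

    After equalization, the modified image is what gets processed by the network, and the loss compares its recognition output against the original image's ground-truth annotation.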
  • Patent number: 11704806
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: July 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
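    The final stage of this pipeline, going from an object mask plus the synchronized depth image to a 3-D bounding box, can be sketched with a pinhole camera model. Everything below (intrinsics, the dict-based depth image, function names) is an assumption for illustration, not the patented system.

    ```python
    # Minimal sketch: back-project each mask pixel through a pinhole
    # camera using its synchronized depth value, then take the
    # axis-aligned extent of those 3-D points as the bounding box.

    def backproject(u, v, depth, fx, fy, cx, cy):
        """Pinhole back-projection of pixel (u, v) at the given depth."""
        return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

    def bbox_3d(mask_pixels, depth_image, fx=500.0, fy=500.0, cx=2.0, cy=2.0):
        """mask_pixels: (u, v) pairs flagged by the object's mask.
        depth_image: mapping (u, v) -> depth in meters (toy stand-in)."""
        points = [backproject(u, v, depth_image[(u, v)], fx, fy, cx, cy)
                  for (u, v) in mask_pixels]
        xs, ys, zs = zip(*points)
        return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

    # Two mask pixels at 1 m and 2 m depth span a small 3-D box.
    depth = {(0, 0): 1.0, (4, 4): 2.0}
    box = bbox_3d([(0, 0), (4, 4)], depth)
    ```

    The synchronization system's job in the abstract is exactly to supply the `depth_image` that corresponds in time to the color image the mask came from.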
  • Publication number: 20230222332
    Abstract: Disclosed are systems, apparatuses, methods, and computer-readable media to train a neural network model implemented into a perception stack in an autonomous vehicle (AV) for detecting objects. A method includes pretraining an uninitialized ML model to yield a first ML model; training the first ML model with a first testing dataset for a first number of iterations based on a first configuration; analyzing the first ML model based on a convergence of the first ML model and a previous iteration of training; generating a report based on the analysis of the first ML model; and after generating the report, training the first ML model to yield a second ML model.
    Type: Application
    Filed: December 17, 2021
    Publication date: July 13, 2023
    Inventors: Siddharth Mahendran, Teng Liu, Yong Jae Lee, Marzieh Parandehgheibi
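    The train/analyze/report/retrain loop in this abstract can be sketched with a toy model. Every name here is an assumption; a real pipeline would train a detection network, and convergence analysis would be far richer than a single loss comparison.

    ```python
    # Hedged sketch of the described workflow: train for a fixed number of
    # iterations, judge convergence against the previous round, emit a
    # report, then continue training to produce the second model.

    def train(model, data, iterations):
        """Toy training step: pull a scalar 'weight' toward the data mean."""
        target = sum(data) / len(data)
        for _ in range(iterations):
            model["weight"] += 0.5 * (target - model["weight"])
        model["loss"] = abs(target - model["weight"])
        return model

    def analyze(model, previous_loss):
        """Compare this round's loss with the previous iteration of training."""
        return {"loss": model["loss"], "converged": model["loss"] < previous_loss}

    data = [1.0, 3.0]  # stands in for the evaluation dataset
    first = train({"weight": 0.0, "loss": float("inf")}, data, iterations=3)
    report = analyze(first, previous_loss=2.0)
    second = train(first, data, iterations=3)  # resumed after the report
    ```

    The key ordering from the abstract is preserved: the report is generated from the first model's analysis before training resumes to yield the second model.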
  • Publication number: 20230194715
    Abstract: The subject disclosure relates to techniques for tracking keypoints on an object represented in a Light Detection and Ranging (LiDAR) point cloud. A process of the disclosed technology can include receiving, for each of a plurality of frames in a series, an identification of at least one keypoint on the object represented in LiDAR point clouds and a confidence score for the respective keypoint, wherein each of the plurality of frames includes LiDAR point clouds including the object at different times represented in the series, and determining kinematics for the object from a determined movement of the keypoint across the plurality of frames.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 22, 2023
    Inventors: Abdelrahman Elogeel, Alexander Pon, Debanjan Nandi, Andres Hasfura, Carden Bagwell, Marzieh Parandehgheibi, Siddharth Mahendran, Teng Liu
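    The kinematics step of this abstract, deriving motion from a keypoint's movement across frames, reduces to finite differences over timestamped detections. The track format, confidence threshold, and names below are illustrative assumptions.

    ```python
    # Illustrative sketch: estimate a tracked keypoint's velocity from its
    # positions across a series of LiDAR frames, keeping only detections
    # whose confidence score clears a threshold.

    def keypoint_velocity(track, min_confidence=0.5):
        """track: time-ordered list of (t, (x, y, z), confidence) entries.
        Returns the average velocity (vx, vy, vz), or None if the track
        has fewer than two confident detections."""
        pts = [(t, p) for (t, p, c) in track if c >= min_confidence]
        if len(pts) < 2:
            return None
        (t0, p0), (t1, p1) = pts[0], pts[-1]
        dt = t1 - t0
        return tuple((b - a) / dt for a, b in zip(p0, p1))

    track = [
        (0.0, (0.0, 0.0, 0.0), 0.9),
        (0.1, (1.0, 0.0, 0.0), 0.2),  # low confidence: ignored
        (0.2, (2.0, 0.0, 0.0), 0.8),
    ]
    v = keypoint_velocity(track)  # roughly 10 m/s along x
    ```

    Filtering by the per-frame confidence score is what lets the tracker ignore unreliable detections when computing the object's kinematics.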
  • Publication number: 20230196749
    Abstract: Disclosed are systems, apparatuses, methods, and computer-readable media to train a neural network model implemented into a perception stack in an autonomous vehicle (AV) for detecting objects. A method includes receiving three-dimensional (3D) light detection and ranging (LIDAR) data to train a neural network model having residual connections for detecting objects in LIDAR data; converting each frame of the LIDAR data into a voxelized frame to yield a training dataset of voxelized frames; and training the neural network model based on the training dataset of voxelized frames and a feedback control to control input from the training dataset of voxelized frames into the neural network model.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 22, 2023
    Inventors: Siddharth Mahendran, Yong Jae Lee, Teng Liu, Marzieh Parandehgheibi, Bo Tian, Aleksandr Sidorov
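    The voxelization step in this abstract can be sketched in a few lines: snap each LiDAR point to a fixed-size voxel and aggregate per voxel. The voxel size and the count-per-voxel aggregation below are assumptions; real pipelines often store per-voxel feature vectors instead of counts.

    ```python
    # Minimal sketch (assumed names, not the patent's code): convert one
    # LiDAR frame into a sparse voxelized frame by bucketing each point
    # into a cubic voxel and counting points per voxel.

    def voxelize(points, voxel_size=0.5):
        """points: (x, y, z) tuples. Returns dict voxel_index -> count."""
        frame = {}
        for x, y, z in points:
            key = (int(x // voxel_size),
                   int(y // voxel_size),
                   int(z // voxel_size))
            frame[key] = frame.get(key, 0) + 1
        return frame

    cloud = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.4), (1.2, 0.0, 0.0)]
    voxels = voxelize(cloud)
    # First two points share voxel (0, 0, 0); the third falls in (2, 0, 0).
    ```

    Each frame of the LiDAR stream would be converted this way to build the training dataset of voxelized frames the abstract describes.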
  • Publication number: 20230196180
    Abstract: The subject disclosure relates to techniques for identifying keypoints associated with an object based on LiDAR point cloud data. A process of the disclosed technology can include inputting the LiDAR point cloud data representing an object as perceived by a LiDAR sensor into an algorithm trained to identify the keypoints associated with the object, and identifying, by the algorithm, at least one keypoint associated with the object and a respective confidence score for the at least one keypoint.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 22, 2023
    Inventors: Abdelrahman Elogeel, Alexander Pon, Debanjan Nandi, Andres Hasfura, Carden Bagwell, Marzieh Parandehgheibi, Siddharth Mahendran, Teng Liu
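    The interface this abstract describes, a point cloud in, keypoints with confidence scores out, is sketched below with a stand-in "algorithm". A real system would use a trained network; the extreme-point heuristic and support-based confidence here are purely illustrative.

    ```python
    # Toy stand-in for the trained keypoint algorithm: treat the extreme
    # points along x as 'front'/'rear' keypoints, and score each by how
    # much of the cloud lies within 1 m of it.

    def identify_keypoints(cloud):
        """Returns a list of (name, (x, y, z), confidence) tuples."""
        front = max(cloud, key=lambda p: p[0])
        rear = min(cloud, key=lambda p: p[0])

        def support(kp):  # fraction of points within 1 m of the keypoint
            near = sum(1 for p in cloud
                       if sum((a - b) ** 2 for a, b in zip(p, kp)) <= 1.0)
            return near / len(cloud)

        return [("front", front, support(front)),
                ("rear", rear, support(rear))]

    cloud = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (4.0, 0.0, 0.0)]
    keypoints = identify_keypoints(cloud)
    confident = [(name, kp) for name, kp, score in keypoints if score >= 0.5]
    ```

    Downstream consumers (such as the keypoint-tracking method in the related publication above) would use the confidence score to decide which detections to trust.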
  • Publication number: 20220139057
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Application
    Filed: January 12, 2022
    Publication date: May 5, 2022
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
  • Patent number: 11257300
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: February 22, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
  • Publication number: 20210407125
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for object recognition neural network for amodal center prediction. One of the methods includes receiving an image of an object captured by a camera. The image of the object is processed using an object recognition neural network that is configured to generate an object recognition output. The object recognition output includes data defining a predicted two-dimensional amodal center of the object, wherein the predicted two-dimensional amodal center of the object is a projection of a predicted three-dimensional center of the object under a camera pose of the camera that captured the image.
    Type: Application
    Filed: June 24, 2021
    Publication date: December 30, 2021
    Inventors: Siddharth Mahendran, Nitin Bansal, Nitesh Sekhar, Manushree Gangwar, Khushi Gupta, Prateek Singhal
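    The geometric relationship in this abstract, that the 2-D amodal center is the projection of the predicted 3-D center under the capturing camera's pose, can be written out with a pinhole model. The intrinsics, pose representation, and names below are illustrative assumptions.

    ```python
    # Sketch of the amodal-center geometry: transform the predicted 3-D
    # object center into the camera frame (rotation rows + translation),
    # then project it with pinhole intrinsics to get the 2-D amodal center.

    def project_amodal_center(center_3d, rotation, translation, fx, fy, cx, cy):
        """rotation: 3 row tuples; translation: (tx, ty, tz)."""
        x, y, z = (sum(r[i] * center_3d[i] for i in range(3)) + t
                   for r, t in zip(rotation, translation))
        return (fx * x / z + cx, fy * y / z + cy)

    identity = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
    # An object center 4 m straight ahead projects to the principal point,
    # even when the object's visible pixels are off-center or occluded --
    # which is what makes the predicted center "amodal".
    uv = project_amodal_center((0.0, 0.0, 4.0), identity, (0.0, 0.0, 0.0),
                               fx=500.0, fy=500.0, cx=320.0, cy=240.0)
    ```

    The network in the abstract predicts this projected point directly from the image, rather than computing it from known 3-D geometry as this sketch does.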