Patents by Inventor Prateek Singhal

Prateek Singhal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11978159
    Abstract: A cross reality system provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps, reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: May 7, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
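
The persistent-coordinate-frame (PCF) mechanism described in the entry above boils down to a stored transform plus image-derived data for re-recognizing a location. The sketch below is a minimal, hypothetical illustration of that idea, not Magic Leap's implementation: the class name `PersistentCoordinateFrame`, its fields, and `place_virtual_object` are invented for this example, and the 4x4 homogeneous-transform convention is an assumption.

```python
# Hypothetical sketch of a persistent coordinate frame (PCF): a stored
# transform relative to a reference orientation, plus image-derived feature
# descriptors used to re-find the frame when a device localizes into a map.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PersistentCoordinateFrame:
    pcf_id: str
    transform_to_reference: np.ndarray  # 4x4: PCF-local coords -> reference frame
    # Descriptors derived from images near this location (assumed format).
    descriptors: np.ndarray = field(default_factory=lambda: np.empty((0, 32)))

def place_virtual_object(pcf: PersistentCoordinateFrame,
                         object_in_pcf: np.ndarray) -> np.ndarray:
    """Express a virtual object's pose, given relative to the PCF, in the
    shared reference frame so every device sees it in the same place."""
    return pcf.transform_to_reference @ object_in_pcf

# Usage: an application anchors an object 1 m in front of a stored PCF.
pcf = PersistentCoordinateFrame("pcf-0", np.eye(4))
object_pose = np.eye(4)
object_pose[2, 3] = 1.0                 # 1 m along the PCF's z-axis
print(place_virtual_object(pcf, object_pose))
```
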
  • Publication number: 20240135707
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: October 8, 2023
    Publication date: April 25, 2024
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
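
The entry above describes matching patch-based salient points against descriptor-based map points and recovering the imaging device's orientation from the matches. Below is a minimal sketch of one way to carry out that pose step with OpenCV's perspective-n-point solver; the publication does not prescribe this solver, and the intrinsic matrix and variable names here are assumptions.

```python
# Illustrative only: recover the camera's orientation from 2-D salient points
# that have been matched to known 3-D map points, then use it as the head pose.
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

def estimate_orientation(map_points_3d: np.ndarray,
                         image_points_2d: np.ndarray) -> np.ndarray:
    """map_points_3d: (N, 3) matched real-world points from the map.
    image_points_2d: (N, 2) matched salient points in the current image.
    Returns a 3x3 rotation matrix describing the imaging device's orientation."""
    ok, rvec, tvec = cv2.solvePnP(map_points_3d.astype(np.float64),
                                  image_points_2d.astype(np.float64),
                                  K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> rotation matrix
    return R
```
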
  • Publication number: 20240127538
    Abstract: This document describes scene understanding for cross reality systems using occupancy grids. In one aspect, a method includes recognizing one or more objects in a model of a physical environment generated using images of the physical environment. For each object, a bounding box is fit around the object. An occupancy grid that includes multiple cells is generated within the bounding box around the object. A value is assigned to each cell of the occupancy grid based on whether the cell includes a portion of the object. An object representation that includes information describing the occupancy grid for the object is generated. The object representations are sent to one or more devices.
    Type: Application
    Filed: February 3, 2022
    Publication date: April 18, 2024
    Inventors: Divya Ramnath, Shiyu Dong, Siddharth Choudhary, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta
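
The occupancy-grid construction in the entry above can be pictured with a short sketch: fit a box around an object's 3-D points, partition the box into cells, and assign each cell a value based on whether it contains part of the object. The grid resolution and the axis-aligned box below are assumptions made for illustration; the application does not fix them.

```python
# Hypothetical sketch: build an occupancy grid inside an object's bounding box
# and mark each cell that contains a portion of the object.
import numpy as np

def occupancy_grid(object_points: np.ndarray, cells_per_axis: int = 8) -> np.ndarray:
    """object_points: (N, 3) points belonging to one recognized object.
    Returns a (cells_per_axis, cells_per_axis, cells_per_axis) array of 0/1 values."""
    box_min = object_points.min(axis=0)
    box_max = object_points.max(axis=0)
    extent = np.maximum(box_max - box_min, 1e-9)       # avoid division by zero
    # Index of the cell each point falls into, clipped to the grid bounds.
    idx = ((object_points - box_min) / extent * cells_per_axis).astype(int)
    idx = np.clip(idx, 0, cells_per_axis - 1)
    grid = np.zeros((cells_per_axis,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1          # cell contains the object
    return grid
```

An object representation would then bundle the bounding box and this grid before being sent to devices.
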
  • Patent number: 11947887
    Abstract: A system includes a memory that stores instructions and receives a circuit netlist, and includes a processing unit that accesses the memory and executes the instructions. The instructions include an EDA application that includes a test-point flop allocation module that is configured to evaluate the circuit netlist to determine compatibility of the test-point nodes in the circuit netlist. The test-point flop allocation module can further allocate each of the test-point flops to a test-point sharing group comprising a plurality of compatible test-point nodes. The EDA application also includes a circuit layout module configured to generate a circuit layout associated with the circuit design, the circuit layout comprising the functional logic and scan-chains comprising the test-point flops allocated to the test-point sharing groups in response to the circuit netlist. The circuit layout is employable to fabricate an integrated circuit (IC) chip.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: April 2, 2024
    Assignee: Cadence Design Systems, Inc.
    Inventors: Krishna Chakravadhanula, Brian Foutz, Prateek Kumar Rai, Sarthak Singhal, Christos Papameletis, Vivek Chickermane
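
The allocation described in the entry above groups compatible test-point nodes so that a single test-point flop can be shared among them. The greedy grouping below is only a sketch of that idea under assumptions of my own (a caller-supplied compatibility predicate and a group-size cap); it is not Cadence's patented algorithm.

```python
# Illustrative greedy allocation of test-point nodes to sharing groups.
# "Compatibility" is whatever the EDA tool's analysis decides; here it is an
# injected predicate. A sketch of the idea, not the patented method.
from typing import Callable, List

def allocate_sharing_groups(nodes: List[str],
                            compatible: Callable[[str, str], bool],
                            max_group_size: int = 4) -> List[List[str]]:
    groups: List[List[str]] = []
    for node in nodes:
        for group in groups:
            # Place the node in the first group whose members are all
            # compatible with it and that still has room.
            if len(group) < max_group_size and all(compatible(node, m) for m in group):
                group.append(node)
                break
        else:
            groups.append([node])        # start a new sharing group
    return groups

# Example: nodes in the same clock domain are considered compatible.
domain = {"tp1": "clkA", "tp2": "clkA", "tp3": "clkB", "tp4": "clkA"}
print(allocate_sharing_groups(list(domain),
                              lambda a, b: domain[a] == domain[b]))
# [['tp1', 'tp2', 'tp4'], ['tp3']]
```
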
  • Patent number: 11935071
    Abstract: A computing system that determines employee and organizational compliance includes one or more databases that store organizational data and one or more processors to execute instructions to perform various operations.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: March 19, 2024
    Assignee: PEOPLE CENTER, INC.
    Inventors: Sachin Aralasurali Suryanarayana, Tomer Schwartz, Parthasarathy Jeyaram, Prateek Agarwal, Shubham Choudhary, Sanket Singhal, Sanjay Lal Bhavnani
  • Patent number: 11823450
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Grant
    Filed: October 14, 2022
    Date of Patent: November 21, 2023
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Publication number: 20230290132
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object recognition neural network using multiple data sources. One of the methods includes receiving training data that includes a plurality of training images from a first source and images from a second source. A set of training images is obtained from the training data. For each training image in the set of training images, contrast equalization is applied to the training image to generate a modified image. The modified image is processed using the neural network to generate an object recognition output for the modified image. A loss is determined based on errors between the object recognition output for each modified image and the ground-truth annotation for the corresponding training image. Parameters of the neural network are updated based on the determined loss.
    Type: Application
    Filed: July 28, 2021
    Publication date: September 14, 2023
    Inventors: Siddharth Mahendran, Nitin Bansal, Nitesh Sekhar, Manushree Gangwar, Khushi Gupta, Prateek Singhal, Tarrence Van As, Adithya Shricharan Srinivasa Rao
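
The training procedure in the entry above applies contrast equalization to every training image before the forward pass, which helps images drawn from different sources look statistically similar to the network. The sketch below shows that preprocessing using global histogram equalization; the publication does not specify the equalization method, network, or loss, so those parts are placeholders.

```python
# Sketch of the per-image preprocessing step: equalize contrast, then feed the
# modified image to the recognition network. Placeholder model and loss; the
# publication does not fix the exact equalization method.
import numpy as np

def contrast_equalize(image: np.ndarray) -> np.ndarray:
    """Global histogram equalization for a single-channel uint8 image."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                # normalize the CDF to [0, 1]
    lut = np.round(cdf * 255.0).astype(np.uint8)  # map old levels to new levels
    return lut[image]

def training_step(images, annotations, network, loss_fn):
    """Equalize each image, run the network on the modified image, and return
    a loss against the ground-truth annotations; the caller then updates the
    network parameters on that loss (framework-specific, omitted here)."""
    modified = [contrast_equalize(img) for img in images]
    outputs = [network(img) for img in modified]
    return sum(loss_fn(out, ann) for out, ann in zip(outputs, annotations))
```
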
  • Publication number: 20230273676
    Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success in localizing the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric.
    Type: Application
    Filed: May 9, 2023
    Publication date: August 31, 2023
    Applicant: Magic Leap, Inc.
    Inventors: Divya Sharma, Ali Shahrokni, Anush Mohan, Prateek Singhal, Xuan Zhao, Sergiu Sima, Benjamin Langmann
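
The "metric indicating a likelihood of success" in the entry above is left abstract in the publication. The sketch below uses one plausible proxy, the fraction of the device's current keypoint descriptors that find a close match in the stored map, purely as an illustration; the actual formula is not disclosed.

```python
# Illustrative only: one plausible localization-success metric is the share of
# the device's current descriptors that match stored map descriptors closely.
import numpy as np

def localization_metric(current_descriptors: np.ndarray,
                        map_descriptors: np.ndarray,
                        max_distance: float = 0.7) -> float:
    """Descriptors are (N, D) and (M, D) arrays. Returns a value in [0, 1]:
    higher means localizing against this map is more likely to succeed."""
    if len(current_descriptors) == 0 or len(map_descriptors) == 0:
        return 0.0
    # Distance from every current descriptor to its nearest map descriptor.
    dists = np.linalg.norm(
        current_descriptors[:, None, :] - map_descriptors[None, :, :], axis=-1)
    nearest = dists.min(axis=1)
    return float((nearest < max_distance).mean())
```
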
  • Patent number: 11704806
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: July 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
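
The 3-D bounding box step in the entry above can be pictured as: take the depth pixels inside an object's 2-D mask, back-project them with the camera intrinsics, and bound the resulting points. The axis-aligned box and pinhole-intrinsics layout below are assumptions; the patent does not limit box generation to this method.

```python
# Sketch: back-project an object's masked depth pixels into 3-D camera
# coordinates and take the min/max as an axis-aligned bounding box.
import numpy as np

def bounding_box_3d(depth: np.ndarray, mask: np.ndarray,
                    fx: float, fy: float, cx: float, cy: float):
    """depth: (H, W) metric depth image; mask: (H, W) boolean object mask.
    Returns (min_xyz, max_xyz) of the object's points in camera coordinates."""
    v, u = np.nonzero(mask)                   # pixel rows/cols inside the mask
    z = depth[v, u]
    valid = z > 0                             # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                     # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    return points.min(axis=0), points.max(axis=0)
```
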
  • Patent number: 11687151
    Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success in localizing the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: June 27, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Divya Sharma, Ali Shahrokni, Anush Mohan, Prateek Singhal, Xuan Zhao, Sergiu Sima, Benjamin Langmann
  • Publication number: 20230034363
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: October 14, 2022
    Publication date: February 2, 2023
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Patent number: 11501529
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: November 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Patent number: 11386629
    Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame, then to a head coordinate frame, and then to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact in separate sessions with a viewing system. If a canonical map is available, it is downloaded onto a viewing device of a user. The viewing device then generates another map and localizes the subsequent map to the canonical map.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: July 12, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow, Christian Ivan Robert Moore
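
The chain of transformations in the entry above, from a local content frame to a world frame, then a head frame, then a camera frame, is a composition of rigid transforms. The sketch below shows that composition with 4x4 homogeneous matrices; the matrix values are placeholders, not anything from the patent.

```python
# Sketch of the coordinate-frame chain: local content -> world -> head ->
# camera, composed as 4x4 homogeneous transforms. Placeholder values only.
import numpy as np

def compose(*transforms: np.ndarray) -> np.ndarray:
    """Compose transforms left to right: the first maps out of the source
    frame, the last maps into the destination frame."""
    out = np.eye(4)
    for t in transforms:
        out = t @ out
    return out

local_to_world = np.eye(4)      # placement of the local content frame
world_to_head = np.eye(4)       # from head tracking (placeholder)
head_to_camera = np.eye(4)      # per-eye camera frame, covers pupil positions

local_to_camera = compose(local_to_world, world_to_head, head_to_camera)
point_local = np.array([0.0, 0.0, 1.0, 1.0])      # a point of local content
print(local_to_camera @ point_local)              # the point in camera coords
```
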
  • Publication number: 20220139057
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Application
    Filed: January 12, 2022
    Publication date: May 5, 2022
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
  • Publication number: 20220092852
    Abstract: A cross reality system provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps, reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Application
    Filed: December 3, 2021
    Publication date: March 24, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
  • Patent number: 11257300
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: February 22, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
  • Publication number: 20220036078
    Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success in localizing the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric.
    Type: Application
    Filed: October 13, 2021
    Publication date: February 3, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Divya Sharma, Ali Shahrokni, Anush Mohan, Prateek Singhal, Xuan Zhao, Sergiu Sima, Benjamin Langmann
  • Patent number: 11227435
    Abstract: A cross reality system provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps, reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: January 18, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
  • Publication number: 20210407125
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for an object recognition neural network for amodal center prediction. One of the methods includes receiving an image of an object captured by a camera. The image of the object is processed using an object recognition neural network that is configured to generate an object recognition output. The object recognition output includes data defining a predicted two-dimensional amodal center of the object, wherein the predicted two-dimensional amodal center of the object is a projection of a predicted three-dimensional center of the object under a camera pose of the camera that captured the image.
    Type: Application
    Filed: June 24, 2021
    Publication date: December 30, 2021
    Inventors: Siddharth Mahendran, Nitin Bansal, Nitesh Sekhar, Manushree Gangwar, Khushi Gupta, Prateek Singhal
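
The two-dimensional amodal center in the entry above is defined as the projection of a predicted three-dimensional object center under the camera pose, so it can land on an occluded part of the object. The sketch below shows that projection; the intrinsic matrix and world-to-camera pose are assumed values, and the network that predicts the 3-D center is out of scope here.

```python
# Sketch: the 2-D amodal center is the predicted 3-D object center projected
# into the image under the camera pose. Intrinsics and pose below are assumed.
import numpy as np

def project_amodal_center(center_3d: np.ndarray,
                          R: np.ndarray, t: np.ndarray,
                          K: np.ndarray) -> np.ndarray:
    """center_3d: (3,) predicted object center in world coordinates.
    R, t: world-to-camera rotation (3x3) and translation (3,).
    K: (3x3) pinhole intrinsics. Returns (u, v) pixel coordinates."""
    cam = R @ center_3d + t                 # world -> camera coordinates
    uvw = K @ cam                           # camera -> homogeneous pixel coords
    return uvw[:2] / uvw[2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
print(project_amodal_center(np.array([0.2, -0.1, 2.0]), np.eye(3), np.zeros(3), K))
```
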
  • Patent number: 11182614
    Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success in localizing the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: November 23, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Divya Sharma, Ali Shahrokni, Anush Mohan, Prateek Singhal, Xuan Zhao, Sergiu Sima, Benjamin Langmann