Patents by Inventor Prateek Singhal

Prateek Singhal has filed for patents covering the inventions listed below. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220092852
    Abstract: A cross reality system provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps and reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, each of which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Application
    Filed: December 3, 2021
    Publication date: March 24, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
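The persistent-coordinate-frame concept above lends itself to a small data-structure sketch: each frame carries a rigid transform relative to a reference orientation, plus image-derived information, and virtual content is anchored to a frame rather than to raw world coordinates. This is an illustrative sketch only, not Magic Leap's implementation; every name in it is hypothetical.

```python
import numpy as np

class PersistentCoordinateFrame:
    """A hypothetical persistent coordinate frame (PCF): a rigid transform
    relative to a reference orientation, plus image-derived descriptors
    that let a device re-find the frame in a previously stored map."""

    def __init__(self, frame_id, transform, descriptors):
        self.frame_id = frame_id
        self.transform = np.asarray(transform, dtype=float)  # 4x4 pose w.r.t. reference
        self.descriptors = descriptors  # features derived from images at this location

    def to_world(self, local_point):
        """Map a point expressed in this PCF into reference/world coordinates."""
        p = np.append(np.asarray(local_point, dtype=float), 1.0)  # homogeneous
        return (self.transform @ p)[:3]

# A PCF translated 2 m along x relative to the reference orientation.
T = np.eye(4)
T[0, 3] = 2.0
pcf = PersistentCoordinateFrame("pcf_0", T, descriptors=None)

# A virtual object anchored 1 m in front of the PCF stays consistent across
# sessions and devices because the PCF itself persists in the stored map.
world_pos = pcf.to_world([0.0, 0.0, 1.0])  # -> [2.0, 0.0, 1.0]
```

Because multiple devices can localize into the same stored map, anchoring content to a shared PCF is what gives each user the same placement of the virtual object.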
  • Patent number: 11257300
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: February 22, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
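The recognition pipeline in this abstract (synchronize a depth frame to each color frame, apply each object's mask to the depth, emit a 3-D bounding box per object) can be sketched as follows. The nearest-timestamp synchronization rule and the axis-aligned min/max box over back-projected pixels are illustrative assumptions, not the patented method.

```python
import numpy as np

def nearest_depth(color_ts, depth_stream):
    """Pick the depth frame whose timestamp is closest to the color frame's
    (a simple stand-in for the synchronization system)."""
    return min(depth_stream, key=lambda d: abs(d["ts"] - color_ts))

def bbox_3d(mask, depth, fx, fy, cx, cy):
    """Axis-aligned 3-D bounding box from the depth pixels under an object
    mask, back-projected with a pinhole camera model."""
    vs, us = np.nonzero(mask)
    z = depth[vs, us]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts.min(axis=0), pts.max(axis=0)

# Toy scene: a 2x2 object mask at a constant 2 m depth in a 4x4 image.
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
lo, hi = bbox_3d(mask, depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

Maintaining the "object data" the abstract mentions would then amount to storing these boxes per recognized object and updating them as new synchronized frames arrive.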
  • Publication number: 20220036078
    Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit is also configured to obtain a metric indicating the likelihood of successfully localizing the user using the map, and wherein the processing unit obtains the metric either by computing it or by receiving it.
    Type: Application
    Filed: October 13, 2021
    Publication date: February 3, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Divya Sharma, Ali Shahrokni, Anush Mohan, Prateek Singhal, Xuan Zhao, Sergiu Sima, Benjamin Langmann
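The abstract leaves the localization metric itself unspecified; one plausible, purely illustrative reading scores the fraction of stored map keypoints the camera currently re-observes. All names and thresholds below are hypothetical.

```python
def localization_metric(observed_ids, map_ids):
    """Hypothetical localization-confidence metric: the fraction of map
    keypoints in the current region that the camera system re-observed.
    As the abstract notes, such a metric could equally be computed on the
    device or received from elsewhere."""
    map_ids = set(map_ids)
    if not map_ids:
        return 0.0
    hits = sum(1 for k in observed_ids if k in map_ids)
    return hits / len(map_ids)

# A device re-observing 3 of 4 map keypoints scores 0.75, which an
# application might treat as "localization likely to succeed".
score = localization_metric({"a", "b", "c"}, {"a", "b", "c", "d"})
```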
  • Patent number: 11227435
    Abstract: A cross reality system provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps and reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, each of which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: January 18, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
  • Publication number: 20210407125
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for object recognition neural network for amodal center prediction. One of the methods includes receiving an image of an object captured by a camera. The image of the object is processed using an object recognition neural network that is configured to generate an object recognition output. The object recognition output includes data defining a predicted two-dimensional amodal center of the object, wherein the predicted two-dimensional amodal center of the object is a projection of a predicted three-dimensional center of the object under a camera pose of the camera that captured the image.
    Type: Application
    Filed: June 24, 2021
    Publication date: December 30, 2021
    Inventors: Siddharth Mahendran, Nitin Bansal, Nitesh Sekhar, Manushree Gangwar, Khushi Gupta, Prateek Singhal
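The abstract defines the 2-D amodal center as the projection of a predicted 3-D object center under the camera pose. Under a standard pinhole model that projection looks as follows; the intrinsics and pose here are made-up values for illustration.

```python
import numpy as np

def project_amodal_center(center_3d_world, world_to_camera, fx, fy, cx, cy):
    """Project a predicted 3-D object center into the image. The resulting
    2-D point is the amodal center: it is well defined even when the true
    center is occluded or lies outside the object's visible extent."""
    p = world_to_camera @ np.append(np.asarray(center_3d_world, dtype=float), 1.0)
    x, y, z = p[:3]
    return np.array([fx * x / z + cx, fy * y / z + cy])

# Identity camera pose, object center 4 m straight ahead along +z:
uv = project_amodal_center([0.0, 0.0, 4.0], np.eye(4),
                           fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# a centered point projects to the principal point (320, 240)
```

The neural network in the abstract predicts this 2-D point directly from the image; the projection above is what that prediction is trained to match.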
  • Patent number: 11182614
    Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit is also configured to obtain a metric indicating the likelihood of successfully localizing the user using the map, and wherein the processing unit obtains the metric either by computing it or by receiving it.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: November 23, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Divya Sharma, Ali Shahrokni, Anush Mohan, Prateek Singhal, Xuan Zhao, Sergiu Sima, Benjamin Langmann
  • Publication number: 20210334537
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment containing salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image, and a second salient point extracted from the current image itself. Each salient point is then matched with real-world points based on descriptor-based map information indicating the locations of salient points in the real-world environment. The orientation of the imaging device is determined from the matching and from the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: March 5, 2021
    Publication date: October 28, 2021
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
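The descriptor-based matching step in this abstract can be sketched as nearest-neighbour matching in descriptor space; the distance threshold and descriptor vectors below are illustrative, and the downstream orientation solve is omitted.

```python
import numpy as np

def match_salient_points(image_descriptors, map_descriptors, max_dist=0.5):
    """Match each patch-based salient point in the current image to the
    closest map descriptor (nearest neighbour in descriptor space).
    The resulting correspondences anchor the orientation estimate
    described in the abstract."""
    matches = []
    for i, d in enumerate(image_descriptors):
        dists = np.linalg.norm(map_descriptors - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:           # reject weak matches
            matches.append((i, j))
    return matches

# Two image descriptors: the first matches map point 1; the second is too
# far from every map descriptor and is discarded.
img = np.array([[1.0, 0.0], [0.0, 5.0]])
mp = np.array([[0.0, 0.0], [1.1, 0.0]])
pairs = match_salient_points(img, mp)  # -> [(0, 1)]
```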
  • Publication number: 20210209859
    Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame, then to a head coordinate frame, and then to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact with the viewing system in separate sessions. If a canonical map is available from an earlier session, it is downloaded onto a user's viewing device. The viewing device then generates its own map and localizes that map to the canonical map.
    Type: Application
    Filed: March 22, 2021
    Publication date: July 8, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow, Christian Ivan Robert Moore
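The chain of transformations described here (local content frame to world frame to head frame to camera frame) composes as a product of rigid transforms. A minimal sketch with hypothetical 4x4 matrices:

```python
import numpy as np

def translation(tx, ty, tz):
    """Homogeneous 4x4 translation matrix (a simple rigid transform)."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# Each stage of the abstract's chain as an illustrative rigid transform:
local_to_world = translation(1.0, 0.0, 0.0)   # place local content in the world
world_to_head = translation(0.0, -1.5, 0.0)   # world expressed in the head frame
head_to_camera = translation(0.0, 0.0, 0.05)  # head expressed in the camera frame

# Composing the chain maps a point in the local content frame all the way
# into the camera coordinate frame.
local_to_camera = head_to_camera @ world_to_head @ local_to_world
p_cam = (local_to_camera @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
```

In practice each stage would be a full rotation-plus-translation estimated by tracking, but the composition order is the point of the sketch.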
  • Patent number: 10957112
    Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame, then to a head coordinate frame, and then to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact with the viewing system in separate sessions. If a canonical map is available from an earlier session, it is downloaded onto a user's viewing device. The viewing device then generates its own map and localizes that map to the canonical map.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: March 23, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow, Christian Ivan Robert Moore
  • Patent number: 10943120
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment containing salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image, and a second salient point extracted from the current image itself. Each salient point is then matched with real-world points based on descriptor-based map information indicating the locations of salient points in the real-world environment. The orientation of the imaging device is determined from the matching and from the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: March 9, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Publication number: 20200394848
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 17, 2020
    Inventors: Siddharth Choudhary, Divya Ramnath, Shiyu Dong, Siddharth Mahendran, Arumugam Kalai Kannan, Prateek Singhal, Khushi Gupta, Nitesh Sekhar, Manushree Gangwar
  • Publication number: 20200090407
    Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame, then to a head coordinate frame, and then to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact with the viewing system in separate sessions. If a canonical map is available from an earlier session, it is downloaded onto a user's viewing device. The viewing device then generates its own map and localizes that map to the canonical map.
    Type: Application
    Filed: August 12, 2019
    Publication date: March 19, 2020
    Applicant: Magic Leap, Inc.
    Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow
  • Publication number: 20200051328
    Abstract: A cross reality system provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps and reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, each of which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Application
    Filed: October 4, 2019
    Publication date: February 13, 2020
    Applicant: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nick Kramer, Ken Tossell
  • Publication number: 20200034624
    Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit is also configured to obtain a metric indicating the likelihood of successfully localizing the user using the map, and wherein the processing unit obtains the metric either by computing it or by receiving it.
    Type: Application
    Filed: July 24, 2019
    Publication date: January 30, 2020
    Applicant: Magic Leap, Inc.
    Inventors: Divya Sharma, Ali Shahrokni, Anush Mohan, Prateek Singhal, Xuan Zhao, Sergiu Sima, Benjamin Langmann
  • Publication number: 20190188474
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment containing salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image, and a second salient point extracted from the current image itself. Each salient point is then matched with real-world points based on descriptor-based map information indicating the locations of salient points in the real-world environment. The orientation of the imaging device is determined from the matching and from the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: December 14, 2018
    Publication date: June 20, 2019
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal