Patents by Inventor Ashwin Swaminathan

Ashwin Swaminathan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11557099
    Abstract: A system for localizing an electronic device with dynamic buffering identifies, from the buffer, a first set of features that is extracted from a first image captured by the electronic device and receives, at the system, a second set of features that is extracted from a second image captured by the electronic device. The system further determines a first characteristic for the first set of features and a second characteristic for the second set of features and determines whether a triggering condition for dynamically changing a size of the buffer is satisfied based at least in part upon the first characteristic for the first set of features and the second characteristic for the second set of features.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: January 17, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Ali Shahrokni, Keng-Sheng Lin, Xuan Zhao, Christian Ivan Robert Moore, Ashwin Swaminathan
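
The dynamic-buffering idea in the entry above can be illustrated with a minimal Python sketch. The `FeatureBuffer` class, the use of feature count as the "characteristic", and the doubling/halving trigger are illustrative assumptions, not the patented implementation.

```python
from collections import deque

class FeatureBuffer:
    """Toy buffer of per-image feature sets whose capacity grows or shrinks
    when a triggering condition on feature-set characteristics is met."""

    def __init__(self, initial_size=4, min_size=2, max_size=16):
        self.size = initial_size
        self.min_size = min_size
        self.max_size = max_size
        self._frames = deque(maxlen=initial_size)

    def _resize(self, new_size):
        new_size = max(self.min_size, min(self.max_size, new_size))
        self._frames = deque(self._frames, maxlen=new_size)
        self.size = new_size

    def push(self, feature_set):
        """Compare the newest buffered feature set with the incoming one and
        resize the buffer if the triggering condition is satisfied."""
        if self._frames:
            c1, c2 = len(self._frames[-1]), len(feature_set)  # characteristic: feature count
            if c2 < 0.5 * c1:       # feature-poor frame: keep more history
                self._resize(self.size * 2)
            elif c2 > 2.0 * c1:     # feature-rich frame: shrink the buffer
                self._resize(self.size // 2)
        self._frames.append(feature_set)
        return self.size

buf = FeatureBuffer()
buf.push([(0, 0)] * 100)
print(buf.push([(0, 0)] * 30))   # sparse frame triggers a larger buffer (8)
```
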
  • Patent number: 11551430
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: January 10, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Danying Hu, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan, Mukta Prasad
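
A rough sketch of the batch-localization flow above, assuming hypothetical stand-ins (`rough_localize`, `refined_localize`, an inlier-count consistency test) for the actual coarse and refined localizers:

```python
import random

def rough_localize(image):
    """Stand-in for coarse localization (e.g. retrieving a candidate map area)."""
    return {"map_area": hash(image) % 3}

def refined_localize(image, rough_hint):
    """Stand-in for refined localization seeded by a rough result; returns a
    pose plus an inlier count used to judge how trustworthy the result is."""
    return {"map_area": rough_hint["map_area"],
            "pose": (random.random(), random.random()),
            "inliers": random.randint(0, 100)}

def localize_batch(images, min_inliers=30, min_consistent=2):
    # Rough localization runs on one image from the batch ...
    rough = rough_localize(images[0])
    # ... and is selectively propagated to the refined step for every image,
    # so the rough step is skipped for the rest of the batch.
    results = [refined_localize(img, rough) for img in images]
    # Accept the batch only if enough refined results are consistent, here
    # meaning enough of them clear the inlier threshold in the same map area.
    good = [r for r in results if r["inliers"] >= min_inliers]
    return results if len(good) >= min_consistent else None

print(localize_batch(["frame_0", "frame_1", "frame_2"]))
```
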
  • Publication number: 20220406024
    Abstract: A system for localizing an electronic device with dynamic buffering identifies, from the buffer, a first set of features that is extracted from a first image captured by the electronic device and receives, at the system, a second set of features that is extracted from a second image captured by the electronic device. The system further determines a first characteristic for the first set of features and a second characteristic for the second set of features and determines whether a triggering condition for dynamically changing a size of the buffer is satisfied based at least in part upon the first characteristic for the first set of features and the second characteristic for the second set of features.
    Type: Application
    Filed: August 24, 2022
    Publication date: December 22, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Ali Shahrokni, Keng-Sheng Lin, Xuan Zhao, Christian Ivan Robert Moore, Ashwin Swaminathan
  • Patent number: 11532124
    Abstract: A cross reality system receives tracking information in a tracking map and first location metadata associated with at least a portion of the tracking map. A sub-portion of a canonical map is determined based at least in part on a correspondence between the first location metadata associated with the at least the portion of the tracking map and second location metadata associated with the sub-portion of the canonical map. The sub-portion of the canonical map may be merged with the at least the portion of the tracking map into a merged map. The cross reality system may further generate the tracking map by using at least the pose information from one or more images and localize the tracking map to the canonical map at least by using a persistent coordinate frame in the canonical map and the location metadata associated with the location represented in the tracking map.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: December 20, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Moshe Bouhnik, Ben Weisbih, Miguel Andres Granados Velasquez, Ali Shahrokni, Ashwin Swaminathan
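
The metadata-driven selection and merge described above might look roughly like the sketch below; the geohash-like keys used as "location metadata" and the dictionary-based tile maps are assumptions made purely for illustration.

```python
def select_subportion(canonical_map, tracking_metadata):
    """Pick the canonical-map tiles whose location metadata corresponds to the
    tracking map's metadata (shared geohash-like keys, an assumed encoding)."""
    return {key: tile for key, tile in canonical_map.items()
            if key in tracking_metadata["keys"]}

def merge_maps(canonical_sub, tracking_map):
    """Merge tracking-map content into the matching canonical sub-portion."""
    merged = {key: list(tile) for key, tile in canonical_sub.items()}
    for key, features in tracking_map.items():
        merged.setdefault(key, []).extend(features)
    return merged

canonical = {"geo:a1": ["pcf_1"], "geo:b2": ["pcf_2"], "geo:c3": ["pcf_3"]}
tracking = {"geo:a1": ["feat_x"], "geo:b2": ["feat_y"]}
sub = select_subportion(canonical, {"keys": {"geo:a1", "geo:b2"}})
print(merge_maps(sub, tracking))   # only the corresponding sub-portion is merged
```
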
  • Patent number: 11501529
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points which will be used to determine the head pose. The salient points are patch-based and include: a first salient point projected onto the current image from a previous image, and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and based on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: November 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
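
As a rough illustration of the matching-then-pose step described above, the sketch below matches patch descriptors by nearest neighbour and hands the resulting 2D-3D correspondences to OpenCV's generic `cv2.solvePnP` as a stand-in pose solver; the descriptor format, the matching rule, and the solver choice are assumptions, not the patented pipeline.

```python
import numpy as np
import cv2  # solvePnP serves only as a generic stand-in pose solver

def match_by_descriptor(image_desc, map_desc):
    """Nearest-neighbour matching of per-point descriptors by L2 distance."""
    dists = np.linalg.norm(image_desc[:, None, :] - map_desc[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def estimate_head_pose(image_pts, image_desc, map_pts3d, map_desc, K):
    """Match the image's salient points (projected from a previous frame or
    freshly extracted) to mapped 3D points, then solve for the camera pose.
    Requires at least four correspondences."""
    idx = match_by_descriptor(image_desc, map_desc)
    ok, rvec, tvec = cv2.solvePnP(map_pts3d[idx].astype(np.float64),
                                  image_pts.astype(np.float64), K, None)
    return rvec, tvec  # camera orientation, from which the head pose is extrapolated
```
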
  • Publication number: 20220358733
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines mergibility of a tracking map with a canonical map and merges a tracking map with a canonical map in accordance with mergibility criteria, such as when a gravity direction of the tracking map aligns with a gravity direction of the canonical map. Refraining from merging maps if the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
    Type: Application
    Filed: July 1, 2022
    Publication date: November 10, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
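
The gravity-based mergibility criterion can be sketched directly; the 5-degree tolerance and the dictionary map representation below are assumed for illustration only.

```python
import numpy as np

def gravity_aligned(tracking_gravity, canonical_gravity, max_angle_deg=5.0):
    """Mergibility check: the tracking map's gravity direction must align with
    the canonical map's within a small tolerance (threshold assumed)."""
    a = np.asarray(tracking_gravity, dtype=float)
    b = np.asarray(canonical_gravity, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg

def maybe_merge(tracking_map, canonical_map):
    """Refrain from merging when orientation with respect to gravity is not
    preserved, so the persisted map is not distorted."""
    if not gravity_aligned(tracking_map["gravity"], canonical_map["gravity"]):
        return canonical_map              # skip the merge
    merged = dict(canonical_map)
    merged["features"] = canonical_map["features"] + tracking_map["features"]
    return merged

canonical = {"gravity": [0, -1, 0], "features": ["pcf_1"]}
tilted = {"gravity": [0.3, -0.95, 0], "features": ["feat_a"]}
print(maybe_merge(tilted, canonical)["features"])   # tilted map is not merged
```
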
  • Patent number: 11410395
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines mergibility of a tracking map with a canonical map and merges a tracking map with a canonical map in accordance with mergibility criteria, such as when a gravity direction of the tracking map aligns with a gravity direction of the canonical map. Refraining from merging maps if the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: August 9, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
  • Publication number: 20220230382
    Abstract: Examples of the disclosure describe systems and methods for presenting virtual content on a wearable head device. In some embodiments, a state of a wearable head device is determined by minimizing a total error based on a reduced weight associated with a reprojection error. A view reflecting the determined state of the wearable head device is presented via a display of the wearable head device. In some embodiments, a wearable head device calculates a preintegration term based on the image data received via a sensor of the wearable head device and the inertial data received via a first IMU and a second IMU of the wearable head device. The wearable head device estimates a position of the device based on the preintegration term, and the wearable head device presents the virtual content based on the position of the device.
    Type: Application
    Filed: April 7, 2022
    Publication date: July 21, 2022
    Inventors: Yu-Hsiang Huang, Evan Gregory Levine, Igor Napolskikh, Dominik Michael Kasper, Manel Quim Sanchez Nicuesa, Sergiu Sima, Benjamin Langmann, Ashwin Swaminathan, Martin Georg Zahnert, Blazej Marek Czuprynski, Joao Antonio Pereira Faro, Christoph Tobler, Omid Ghasemalizadeh
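
A simplified sketch of the two-IMU preintegration idea, ignoring gravity compensation, biases, and rotation (all of which a real visual-inertial pipeline must handle); the plain averaging of the two IMUs' terms is an assumption made for illustration.

```python
import numpy as np

def preintegrate(accel_samples, dt):
    """Accumulate delta-velocity and delta-position from accelerometer samples
    taken between two image frames (gravity and bias handling omitted)."""
    dv, dp = np.zeros(3), np.zeros(3)
    for a in accel_samples:
        a = np.asarray(a, dtype=float)
        dp += dv * dt + 0.5 * a * dt**2
        dv += a * dt
    return dp, dv

def estimate_position(p0, v0, imu1_samples, imu2_samples, dt):
    """Fuse preintegration terms from a first and a second IMU to propagate
    the headset position between visual updates."""
    dp1, dv1 = preintegrate(imu1_samples, dt)
    dp2, dv2 = preintegrate(imu2_samples, dt)
    dp, dv = (dp1 + dp2) / 2.0, (dv1 + dv2) / 2.0
    span = len(imu1_samples) * dt
    return np.asarray(p0) + np.asarray(v0) * span + dp, np.asarray(v0) + dv

samples = [[0.0, 0.0, 1.0]] * 10          # ten samples of 1 m/s^2 along z
print(estimate_position([0, 0, 0], [0, 0, 0], samples, samples, dt=0.01))
```
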
  • Patent number: 11386629
    Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame. A further transformation is made to a head coordinate frame and a further transformation is made to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact in separate sessions with a viewing system. If a canonical map is available, the earlier map is downloaded onto a viewing device of a user. The viewing device then generates another map and localizes the subsequent map to the canonical map.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: July 12, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow, Christian Ivan Robert Moore
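
The chain of coordinate-frame transformations described above reduces to composing homogeneous transforms; the sketch below uses plain 4x4 matrices, with identity poses standing in for the tracked transforms.

```python
import numpy as np

def transform(T, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homogeneous.T).T[:, :3]

def local_to_camera(points_local, T_world_local, T_head_world, T_camera_head):
    """Chain the transforms: local content frame -> world -> head -> camera."""
    p = transform(T_world_local, points_local)
    p = transform(T_head_world, p)
    return transform(T_camera_head, p)

I = np.eye(4)   # identity matrices stand in for tracked poses
print(local_to_camera(np.array([[0.0, 1.0, 2.0]]), I, I, I))
```
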
  • Patent number: 11328475
    Abstract: Examples of the disclosure describe systems and methods for presenting virtual content on a wearable head device. In some embodiments, a state of a wearable head device is determined by minimizing a total error based on a reduced weight associated with a reprojection error. A view reflecting the determined state of the wearable head device is presented via a display of the wearable head device. In some embodiments, a wearable head device calculates a first preintegration term and second preintegration term based on the image data received via a sensor of the wearable head device and the inertial data received via a first IMU and a second IMU of the wearable head device. The wearable head device estimates a position of the device based on the first and second preintegration terms, and the wearable head device presents the virtual content based on the position of the device.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: May 10, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Yu-Hsiang Huang, Evan Gregory Levine, Igor Napolskikh, Dominik Michael Kasper, Manel Quim Sanchez Nicuesa, Sergiu Sima, Benjamin Langmann, Ashwin Swaminathan, Martin Georg Zahnert, Blazej Marek Czuprynski, Joao Antonio Pereira Faro, Christoph Tobler, Omid Ghasemalizadeh
  • Publication number: 20220092852
    Abstract: A cross reality system that provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps, reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Application
    Filed: December 3, 2021
    Publication date: March 24, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
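
A persistent coordinate frame, as described above, can be modeled minimally as an identifier, a transform relative to a reference orientation, and image-derived descriptors; the field names and the 32-element descriptor below are illustrative assumptions, not Magic Leap's data model.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class PersistentCoordinateFrame:
    """A persisted anchor: a transform relative to a reference orientation plus
    descriptors derived from images at the corresponding location."""
    pcf_id: str
    T_ref_pcf: np.ndarray                               # 4x4 homogeneous transform
    descriptors: np.ndarray = field(default_factory=lambda: np.zeros(32))

def place_virtual_object(pcf, T_pcf_object):
    """Resolve an object's pose, specified relative to a PCF, into the shared
    reference frame so every localized device renders it consistently."""
    return pcf.T_ref_pcf @ T_pcf_object

pcf = PersistentCoordinateFrame("kitchen_wall", np.eye(4))
T_object = np.eye(4)
T_object[:3, 3] = [0.5, 1.2, 0.0]   # 0.5 m right and 1.2 m up from the anchor
print(place_virtual_object(pcf, T_object))
```
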
  • Patent number: 11244649
    Abstract: Techniques are described for calibrating a device having a first sensor and a second sensor. Techniques include capturing sensor data using the first sensor and the second sensor. The device maintains a calibration profile including a translation parameter and a rotation parameter to model a spatial relationship between the first sensor and the second sensor. Techniques include determining a calibration level associated with the calibration profile at a first time. Techniques include determining, based on the calibration level, to perform a calibration process. Techniques include performing the calibration process at the first time by generating one or both of a calibrated translation parameter and a calibrated rotation parameter and replacing one or both of the translation parameter and the rotation parameter with one or both of the calibrated translation parameter and the calibrated rotation parameter.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: February 8, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Yu-Tseh Chi, Jean-Yves Bouguet, Divya Sharma, Lei Huang, Dennis William Strelow, Etienne Gregoire Grossmann, Evan Gregory Levine, Adam Harmat, Ashwin Swaminathan
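
A minimal sketch of the calibration-profile idea, assuming a residual-based score as the "calibration level" and a standard Kabsch/Procrustes alignment as the recalibration step; the patent entry above specifies neither, so both are placeholders.

```python
import numpy as np

class CalibrationProfile:
    """Models the spatial relationship between two sensors as a translation
    vector and a rotation matrix."""

    def __init__(self, translation, rotation):
        self.translation = np.asarray(translation, dtype=float)
        self.rotation = np.asarray(rotation, dtype=float)

    def calibration_level(self, pts_sensor1, pts_sensor2):
        """Mean alignment error of corresponding points seen by both sensors."""
        predicted = (self.rotation @ pts_sensor1.T).T + self.translation
        return float(np.mean(np.linalg.norm(predicted - pts_sensor2, axis=1)))

    def maybe_recalibrate(self, pts_sensor1, pts_sensor2, threshold=0.01):
        """Recompute the rigid transform only when the calibration level says so."""
        if self.calibration_level(pts_sensor1, pts_sensor2) <= threshold:
            return False                          # still well calibrated
        c1, c2 = pts_sensor1.mean(0), pts_sensor2.mean(0)
        H = (pts_sensor1 - c1).T @ (pts_sensor2 - c2)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # fix a possible reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        self.rotation, self.translation = R, c2 - R @ c1
        return True
```

When the threshold is exceeded, both the translation and rotation parameters are replaced with calibrated values in one shot, mirroring the replace step in the abstract; a production system would likely refine them jointly with other state.
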
  • Patent number: 11238659
    Abstract: A method to efficiently update and manage outputs of real-time or offline 3D reconstruction and scanning in a mobile device having limited resources and limited connectivity to the Internet is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications, in either single-user applications or multi-user applications sharing and updating the same 3D reconstruction data. The method includes a block-based 3D data representation that allows local updates while maintaining neighbor consistency, and a multi-layer caching mechanism that retrieves, prefetches, and stores 3D data efficiently for XR applications. Altitude information, which may be expressed as a building floor for indoor environments, may be associated with sparse and/or dense representations of the physical world to increase the accuracy of localization results and/or to render virtual content more realistically.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: February 1, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Ashwin Swaminathan
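
The block-based representation and multi-layer caching might be sketched as an in-memory LRU cache in front of a slower backing store; the block-key format (including a floor label for altitude) and the cache size are illustrative assumptions.

```python
from collections import OrderedDict

class BlockCache:
    """Multi-layer cache for block-based 3D reconstruction data: a small
    in-memory LRU layer backed by a slower store (standing in for disk or a
    cloud service). Keys could be (x, y, z, floor) tuples so altitude/floor
    information is part of the lookup."""

    def __init__(self, backing_store, capacity=64):
        self.backing_store = backing_store
        self.capacity = capacity
        self._lru = OrderedDict()

    def get(self, block_key):
        if block_key in self._lru:
            self._lru.move_to_end(block_key)       # mark as recently used
            return self._lru[block_key]
        block = self.backing_store.get(block_key)  # cache miss: fetch from below
        if block is not None:
            self.put(block_key, block)
        return block

    def put(self, block_key, block):
        self._lru[block_key] = block
        self._lru.move_to_end(block_key)
        if len(self._lru) > self.capacity:
            self._lru.popitem(last=False)          # evict least recently used

    def prefetch(self, block_keys):
        """Warm the cache with blocks neighbouring the user's position."""
        for key in block_keys:
            self.get(key)

store = {(0, 0, 0, "floor_2"): b"tsdf-block-bytes"}
cache = BlockCache(store)
print(cache.get((0, 0, 0, "floor_2")))
```
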
  • Patent number: 11227435
    Abstract: A cross reality system that provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps, reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: January 18, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
  • Publication number: 20210358173
    Abstract: Methods and apparatus for providing a representation of an environment, for example, in an XR system, and any suitable computer vision and robotics applications. A representation of an environment may include one or more planar features. The representation of the environment may be provided by jointly optimizing plane parameters of the planar features and sensor poses that the planar features are observed at. The joint optimization may be based on a reduced matrix and a reduced residual vector in lieu of the Jacobian matrix and the original residual vector.
    Type: Application
    Filed: May 10, 2021
    Publication date: November 18, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Lipu Zhou, Frank Thomas Steinbruecker, Ashwin Swaminathan, Hui Ju, Daniel Esteban Koppel, Konstantinos Zampogiannis, Pooja Piyush Mehta, Vinayram Balakumar
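
The reduced matrix and reduced residual vector mentioned above are in the spirit of a Schur-complement-style elimination of the plane parameters from the Gauss-Newton normal equations; the sketch below shows that generic reduction under this assumption, not the publication's specific construction.

```python
import numpy as np

def reduced_system(J_pose, J_plane, residual):
    """Form Gauss-Newton normal equations and eliminate the plane parameters,
    leaving a reduced matrix and reduced residual vector over poses only."""
    A = J_pose.T @ J_pose
    B = J_pose.T @ J_plane
    C = J_plane.T @ J_plane
    b_pose = -J_pose.T @ residual
    b_plane = -J_plane.T @ residual
    C_inv = np.linalg.inv(C)
    A_red = A - B @ C_inv @ B.T               # reduced matrix
    b_red = b_pose - B @ C_inv @ b_plane      # reduced residual vector
    return A_red, b_red

# Tiny example: 2 pose parameters, 1 plane parameter, 5 residuals.
rng = np.random.default_rng(0)
A_red, b_red = reduced_system(rng.normal(size=(5, 2)),
                              rng.normal(size=(5, 1)),
                              rng.normal(size=5))
print(np.linalg.solve(A_red, b_red))          # pose update for one GN step
```
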
  • Publication number: 20210334537
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points which will be used to determine the head pose. The salient points are patch-based and include: a first salient point projected onto the current image from a previous image, and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and based on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: March 5, 2021
    Publication date: October 28, 2021
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Publication number: 20210279953
    Abstract: A cross reality system receives tracking information in a tracking map and first location metadata associated with at least a portion of the tracking map. A sub-portion of a canonical map is determined based at least in part on a correspondence between the first location metadata associated with the at least the portion of the tracking map and second location metadata associated with the sub-portion of the canonical map. The sub-portion of the canonical map may be merged with the at least the portion of the tracking map into a merged map. The cross reality system may further generate the tracking map by using at least the pose information from one or more images and localize the tracking map to the canonical map at least by using a persistent coordinate frame in the canonical map and the location metadata associated with the location represented in the tracking map.
    Type: Application
    Filed: February 19, 2021
    Publication date: September 9, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Moshe Bouhnik, Ben Weisbih, Miguel Andres Granados Velasquez, Ali Shahrokni, Ashwin Swaminathan
  • Publication number: 20210279909
    Abstract: A method of efficiently and accurately computing a pose of an image with respect to other image information. The image may be acquired with a camera on a portable device and the other information may be a map, such that the computation of pose localizes the device relative to the map. Such a technique may be applied in a cross reality system to enable devices to efficiently and accurately access previously persisted maps. Localizing with respect to a map may enable multiple cross reality devices to render virtual content at locations specified in relation to those maps, providing an enhanced experience for users of the system. The method may be used in other devices and for other purposes, such as for navigation of autonomous vehicles.
    Type: Application
    Filed: March 2, 2021
    Publication date: September 9, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Lipu Zhou, Ashwin Swaminathan, Frank Thomas Steinbruecker, Daniel Esteban Koppel
  • Publication number: 20210264674
    Abstract: A system for localizing an electronic device with dynamic buffering identifies, from the buffer, a first set of features that is extracted from a first image captured by the electronic device and receives, at the system, a second set of features that is extracted from a second image captured by the electronic device. The system further determines a first characteristic for the first set of features and a second characteristic for the second set of features and determines whether a triggering condition for dynamically changing a size of the buffer is satisfied based at least in part upon the first characteristic for the first set of features and the second characteristic for the second set of features.
    Type: Application
    Filed: February 25, 2021
    Publication date: August 26, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Ali Shahrokni, Keng-Sheng Lin, Xuan Zhao, Christian Ivan Robert Moore, Ashwin Swaminathan
  • Publication number: 20210264685
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
    Type: Application
    Filed: February 25, 2021
    Publication date: August 26, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Danying Hu, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan, Mukta Prasad