Patents by Inventor Miguel Andres Granados Velasquez

Miguel Andres Granados Velasquez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135707
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: October 8, 2023
    Publication date: April 25, 2024
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
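The localization step described in the entry above, matching patch-based salient points against a descriptor map and recovering the camera orientation from the matches, can be illustrated with a small sketch. This is not the patented implementation: the nearest-neighbour matcher and its threshold, the Kabsch-based orientation solve, and the simplifying assumption that the camera position is already known are all hypothetical choices made for brevity.

```python
# Illustrative sketch only; not the patented method. Assumes the camera
# position is known, so only the orientation is recovered from the matches.
import numpy as np

def match_to_map(descriptors, map_descriptors, max_dist=0.5):
    """Nearest-neighbour descriptor matching (hypothetical distance threshold)."""
    matches = []
    for i, d in enumerate(descriptors):
        dists = np.linalg.norm(map_descriptors - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

def orientation_from_matches(bearings_cam, map_points, cam_position):
    """Kabsch alignment of camera-frame bearing vectors with the directions
    from the (assumed known) camera position to the matched map points."""
    b_world = map_points - cam_position
    b_world /= np.linalg.norm(b_world, axis=1, keepdims=True)
    b_cam = bearings_cam / np.linalg.norm(bearings_cam, axis=1, keepdims=True)
    H = b_world.T @ b_cam                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation: world -> camera
```

In a fuller pipeline the same matches would feed a 6-DoF pose solve, and the salient points projected from the previous frame would seed the match set alongside the newly extracted ones.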
  • Publication number: 20240135656
    Abstract: A cross reality system enables portable devices to access stored maps and efficiently and accurately render virtual content specified in relation to those maps. The system may process images acquired with a portable device to quickly and accurately localize the portable device to the persisted maps by constraining the result of localization based on the estimated direction of gravity of a persisted map and the coordinate frame in which data in a localization request is posed. The system may actively align the data in the localization request with an estimated direction of gravity during localization processing. Alternatively or additionally, a portable device may establish a coordinate frame that is aligned with an estimated direction of gravity and in which the data in the localization request is posed, so that data subsequently acquired for inclusion in a localization request is passively aligned with the estimated direction of gravity when posed in that coordinate frame.
    Type: Application
    Filed: December 26, 2023
    Publication date: April 25, 2024
    Applicant: Magic Leap, Inc.
    Inventors: Javier Victorio Gomez Gonzalez, Miguel Andres Granados Velasquez, Mukta Prasad, Dominik Michael Kasper, Eran Guendelman, Keng-Sheng Lin
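The gravity-alignment idea in the entry above, posing localization data in a frame whose vertical axis matches the estimated direction of gravity so the remaining pose search is constrained, can be sketched as below. The Rodrigues-style alignment helper and the choice of -Z as the map's gravity direction are assumptions for illustration, not details taken from the patent.

```python
# Minimal sketch: pre-align request data with an estimated gravity direction
# so a localization search only has to recover yaw and translation.
import numpy as np

def rotation_aligning(a, b):
    """Smallest rotation taking unit vector a onto unit vector b (Rodrigues)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):                      # opposite vectors: 180-degree turn
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def gravity_align_points(points, gravity_estimate, map_gravity=(0.0, 0.0, -1.0)):
    """Rotate request points so the device's gravity estimate matches the map's
    gravity direction; the pose search is then reduced to yaw plus translation."""
    R = rotation_aligning(np.asarray(gravity_estimate, float),
                          np.asarray(map_gravity, float))
    return points @ R.T
```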
  • Publication number: 20240137665
    Abstract: A wearable display system including multiple cameras and a processor is disclosed. A greyscale camera and a color camera can be arranged to provide a central view field associated with both cameras and a peripheral view field associated with one of the two cameras. One or more of the two cameras may be a plenoptic camera. The wearable display system may acquire light field information using the at least one plenoptic camera and create a world model using the first light field information and first depth information stereoscopically determined from images acquired by the greyscale camera and the color camera. The wearable display system can track head pose using the at least one plenoptic camera and the world model. The wearable display system can track objects in the central view field and the peripheral view fields using the one or two plenoptic cameras, when the objects satisfy a depth criterion.
    Type: Application
    Filed: December 15, 2023
    Publication date: April 25, 2024
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez
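The entry above combines stereoscopic depth from the overlapping central field of the greyscale and color cameras with light-field information from a plenoptic camera. The sketch below illustrates only the stereo half and the central/peripheral routing decision; the field-of-view half-angle and the pinhole-stereo assumptions are illustrative, not taken from the patent.

```python
# Sketch: pinhole stereo depth for the central overlap, plus a simple angular
# test deciding whether a viewing direction is covered by both cameras.
import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: Z = f * B / d (disparity in pixels, baseline in metres)."""
    return focal_px * baseline_m / disparity_px

def in_central_field(direction, half_angle_central_deg=30.0):
    """True if a unit viewing direction falls inside the (assumed) central
    overlap of the two cameras, where stereoscopic depth is available."""
    direction = direction / np.linalg.norm(direction)
    angle = np.degrees(np.arccos(np.clip(direction[2], -1.0, 1.0)))  # off +Z axis
    return angle <= half_angle_central_deg

# Example: 40 px disparity, 600 px focal length, 6.5 cm baseline -> ~0.98 m,
# usable for the stereoscopic part of the world model.
print(stereo_depth(40.0, 600.0, 0.065))
```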
  • Publication number: 20240062491
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines the mergibility of a tracking map with a canonical map and merges the two in accordance with mergibility criteria, such as when the gravity direction of the tracking map aligns with the gravity direction of the canonical map. Refraining from merging maps when the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables the multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
    Type: Application
    Filed: August 28, 2023
    Publication date: February 22, 2024
    Applicant: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
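The mergibility criterion named in the entry above, merging a tracking map into a canonical map only when their gravity directions agree, reduces to a simple angular check. The threshold, the point-cloud map layout, and the merge-by-concatenation shortcut below are assumptions for illustration; a real merge would also estimate an alignment transform between the maps.

```python
# Sketch of a gravity-based mergibility check; threshold and map layout are
# hypothetical, not taken from the patent.
import numpy as np

def gravity_mergible(g_tracking, g_canonical, max_angle_deg=2.0):
    """Allow a merge only if the two maps' gravity directions agree to within
    a small angle, so orientation with respect to gravity is preserved."""
    g_t = np.asarray(g_tracking, float)
    g_c = np.asarray(g_canonical, float)
    cos = np.dot(g_t, g_c) / (np.linalg.norm(g_t) * np.linalg.norm(g_c))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) <= max_angle_deg

def merge_maps(canonical_points, tracking_points, g_tracking, g_canonical):
    """Merge when mergible; otherwise refrain and keep the canonical map
    unchanged, as the abstract describes."""
    if not gravity_mergible(g_tracking, g_canonical):
        return canonical_points
    return np.vstack([canonical_points, tracking_points])
```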
  • Patent number: 11900547
    Abstract: A cross reality system enables portable devices to access stored maps and efficiently and accurately render virtual content specified in relation to those maps. The system may process images acquired with a portable device to quickly and accurately localize the portable device to the persisted maps by constraining the result of localization based on the estimated direction of gravity of a persisted map and the coordinate frame in which data in a localization request is posed. The system may actively align the data in the localization request with an estimated direction of gravity during localization processing. Alternatively or additionally, a portable device may establish a coordinate frame that is aligned with an estimated direction of gravity and in which the data in the localization request is posed, so that data subsequently acquired for inclusion in a localization request is passively aligned with the estimated direction of gravity when posed in that coordinate frame.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: February 13, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Javier Victorio Gomez Gonzalez, Miguel Andres Granados Velasquez, Mukta Prasad, Dominik Michael Kasper, Eran Guendelman, Keng-Sheng Lin
  • Patent number: 11889209
    Abstract: A wearable display system including multiple cameras and a processor is disclosed. A greyscale camera and a color camera can be arranged to provide a central view field associated with both cameras and a peripheral view field associated with one of the two cameras. One or more of the two cameras may be a plenoptic camera. The wearable display system may acquire light field information using the at least one plenoptic camera and create a world model using the first light field information and first depth information stereoscopically determined from images acquired by the greyscale camera and the color camera. The wearable display system can track head pose using the at least one plenoptic camera and the world model. The wearable display system can track objects in the central view field and the peripheral view fields using the one or two plenoptic cameras, when the objects satisfy a depth criterion.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: January 30, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez
  • Patent number: 11823450
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Grant
    Filed: October 14, 2022
    Date of Patent: November 21, 2023
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Patent number: 11790619
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines the mergibility of a tracking map with a canonical map and merges the two in accordance with mergibility criteria, such as when the gravity direction of the tracking map aligns with the gravity direction of the canonical map. Refraining from merging maps when the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables the multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: October 17, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
  • Publication number: 20230119217
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
    Type: Application
    Filed: December 7, 2022
    Publication date: April 20, 2023
    Applicant: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Danying Hu, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan, Mukta Prasad
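The batch strategy in the entry above, computing a rough localization for one image, reusing it to seed refined localization of the others, and accepting the batch only if the results are consistent, can be sketched as follows. The solver callables, the pose representation (a 3-vector camera centre for brevity), and the consistency tolerance are hypothetical; the patented system also propagates the rough result selectively, whereas this sketch always propagates it.

```python
# Sketch of rough-then-refined batch localization with a consistency check.
# `rough_localize` and `refine_localize` are stand-ins for the real solvers.
import numpy as np

def localize_batch(images, rough_localize, refine_localize, tol_m=0.05):
    """Rough-localize one frame, propagate that result as the seed when
    refining the rest, and accept the batch only if the refined camera
    centres agree to within `tol_m` metres."""
    rough_pose = rough_localize(images[0])                  # expensive step, run once
    refined = [refine_localize(img, seed=rough_pose) for img in images]
    centres = np.asarray(refined, float)
    spread = np.max(np.linalg.norm(centres - centres.mean(axis=0), axis=1))
    return refined if spread <= tol_m else None             # None: inconsistent batch
```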
  • Publication number: 20230108794
    Abstract: A portable electronic system receives a set of one or more canonical maps and determines the sparse map based at least in part upon one or more anchors pertaining to the physical environment. The sparse map is localized to at least one canonical map in the set of one or more canonical maps, and a new canonical map is created at least by merging sparse map data of the sparse map into the at least one canonical map. The set of one or more canonical maps may be determined from a universe of canonical maps comprising a plurality of canonical maps by applying a hierarchical filtering scheme to the universe. The sparse map may be localized to the at least one canonical map at least by splitting the sparse map into a plurality of connected components and by one or more merger operations.
    Type: Application
    Filed: November 17, 2022
    Publication date: April 6, 2023
    Applicant: Magic Leap, Inc.
    Inventors: Moshe Bouhnik, Ben Weisbih, Miguel Andres Granados Velasquez, Ali Shahrokni, Ashwin Swaminathan
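The hierarchical filtering scheme mentioned in the entry above, narrowing a universe of canonical maps with cheap filters before running more expensive comparisons, can be sketched as below. The filter stages (area identifier, Wi-Fi fingerprint overlap, descriptor distance), the map record layout, and the thresholds are illustrative assumptions rather than details from the patent.

```python
# Sketch of hierarchical candidate-map filtering: coarse, cheap filters first,
# expensive descriptor comparison only on what survives. Fields are hypothetical.
import numpy as np

def wifi_overlap(fingerprint_a, fingerprint_b):
    """Jaccard overlap of observed access-point identifiers."""
    a, b = set(fingerprint_a), set(fingerprint_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def select_candidate_maps(universe, area_id, wifi, descriptor, keep=3):
    """Stage 1: same area. Stage 2: sufficient Wi-Fi overlap.
    Stage 3: rank the survivors by descriptor distance and keep the best few."""
    stage1 = [m for m in universe if m["area_id"] == area_id]
    stage2 = [m for m in stage1 if wifi_overlap(m["wifi"], wifi) >= 0.3]
    ranked = sorted(
        stage2,
        key=lambda m: float(np.linalg.norm(np.asarray(m["descriptor"]) - descriptor)),
    )
    return ranked[:keep]
```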
  • Publication number: 20230034363
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: October 14, 2022
    Publication date: February 2, 2023
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Patent number: 11551430
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: January 10, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Danying Hu, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan, Mukta Prasad
  • Patent number: 11532124
    Abstract: A cross reality system receives tracking information in a tracking map and first location metadata associated with at least a portion of the tracking map. A sub-portion of a canonical map is determined based at least in part on a correspondence between the first location metadata associated with the at least the portion of the tracking map and second location metadata associated with the sub-portion of the canonical map. The sub-portion of the canonical map may be merged with the at least the portion of the tracking map into a merged map. The cross reality system may further generate the tracking map by using at least the pose information from one or more images and localize the tracking map to the canonical map at least by using a persistent coordinate frame in the canonical map and the location metadata associated with the location represented in the tracking map.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: December 20, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Moshe Bouhnik, Ben Weisbih, Miguel Andres Granados Velasquez, Ali Shahrokni, Ashwin Swaminathan
  • Patent number: 11501529
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: November 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
  • Publication number: 20220358733
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines the mergibility of a tracking map with a canonical map and merges the two in accordance with mergibility criteria, such as when the gravity direction of the tracking map aligns with the gravity direction of the canonical map. Refraining from merging maps when the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables the multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
    Type: Application
    Filed: July 1, 2022
    Publication date: November 10, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
  • Patent number: 11410395
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines the mergibility of a tracking map with a canonical map and merges the two in accordance with mergibility criteria, such as when the gravity direction of the tracking map aligns with the gravity direction of the canonical map. Refraining from merging maps when the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables the multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: August 9, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
  • Publication number: 20220132056
    Abstract: A wearable display system including multiple cameras and a processor is disclosed. A greyscale camera and a color camera can be arranged to provide a central view field associated with both cameras and a peripheral view field associated with one of the two cameras. One or more of the two cameras may be a plenoptic camera. The wearable display system may acquire light field information using the at least one plenoptic camera and create a world model using the first light field information and first depth information stereoscopically determined from images acquired by the greyscale camera and the color camera. The wearable display system can track head pose using the at least one plenoptic camera and the world model. The wearable display system can track objects in the central view field and the peripheral view fields using the one or two plenoptic cameras, when the objects satisfy a depth criterion.
    Type: Application
    Filed: February 7, 2020
    Publication date: April 28, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez
  • Publication number: 20220051441
    Abstract: A wearable display system with a limited number of cameras. Two cameras can be arranged to provide an overlapping central view field and a peripheral view field associated with one of the two cameras. A third camera can be arranged to provide a color view field overlapping the central view field. The wearable display system may be coupled to a processor configured to generate a world model and track hand motion in the central view field using the two cameras. The processor may be configured to perform a calibration routine to compensate for distortions during use of the wearable display system. The processor may be configured to identify and address portions of the world model including incomplete depth information by obtaining additional depth information, such as by enabling emitters, detecting planar surfaces in the physical world, or identifying relevant object templates in the world model.
    Type: Application
    Filed: December 19, 2019
    Publication date: February 17, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez
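The entry above describes identifying portions of the world model with incomplete depth information and obtaining additional depth through fallbacks such as enabling emitters, detecting planar surfaces, or matching object templates. A rough sketch of that decision flow is below; the hole-fraction threshold and the fallback callbacks are assumptions introduced only for illustration.

```python
# Sketch: detect depth holes in a region of the world model and try the
# fallbacks the abstract lists. Thresholds and callbacks are hypothetical.
import numpy as np

def _missing_mask(depth):
    """Pixels with no usable depth: NaN, inf, zero, or negative values."""
    return np.nan_to_num(depth, nan=0.0, posinf=0.0, neginf=0.0) <= 0.0

def fill_missing_depth(depth, enable_emitters, fit_planes, match_templates,
                       hole_fraction_trigger=0.2):
    """If too large a fraction of the region lacks depth, try active
    illumination first, then planar-surface fitting, then object-template
    matching, stopping once the region is dense enough."""
    missing = _missing_mask(depth)
    if missing.mean() < hole_fraction_trigger:
        return depth                                    # dense enough already
    for fallback in (enable_emitters, fit_planes, match_templates):
        depth = fallback(depth, missing)                # each returns an updated depth map
        missing = _missing_mask(depth)
        if missing.mean() < hole_fraction_trigger:
            break
    return depth
```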
  • Publication number: 20210343087
    Abstract: A cross reality system enables portable devices to access stored maps and efficiently and accurately render virtual content specified in relation to those maps. The system may process images acquired with a portable device to quickly and accurately localize the portable device to the persisted maps by constraining the result of localization based on the estimated direction of gravity of a persisted map and the coordinate frame in which data in a localization request is posed. The system may actively align the data in the localization request with an estimated direction of gravity during localization processing. Alternatively or additionally, a portable device may establish a coordinate frame that is aligned with an estimated direction of gravity and in which the data in the localization request is posed, so that data subsequently acquired for inclusion in a localization request is passively aligned with the estimated direction of gravity when posed in that coordinate frame.
    Type: Application
    Filed: April 28, 2021
    Publication date: November 4, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Javier Victorio Gomez Gonzalez, Miguel Andres Granados Velasquez, Mukta Prasad, Dominik Michael Kasper, Eran Guendelman, Keng-Sheng Lin
  • Publication number: 20210334537
    Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
    Type: Application
    Filed: March 5, 2021
    Publication date: October 28, 2021
    Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal