Patents by Inventor Ashwin Swaminathan
Ashwin Swaminathan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210256768
Abstract: A cross reality system enables any of multiple devices to efficiently access previously stored maps. Both stored maps and tracking maps used by portable devices may have any of multiple types of location metadata associated with them. The location metadata may be used to select a set of candidate maps for operations, such as localization or map merge, that involve finding a match between a location defined by location information from a portable device and any of a number of previously stored maps. The types of location metadata may be prioritized for use in selecting this subset. To aid in selection of candidate maps, a universe of stored maps may be indexed based on geo-location information. A cross reality platform may update that index as it interacts with devices that supply geo-location information in connection with location information and may propagate that geo-location information to devices that do not supply it.
Type: Application
Filed: February 11, 2021
Publication date: August 19, 2021
Applicant: Magic Leap, Inc.
Inventors: Xuan Zhao, Christian Ivan Robert Moore, Sen Lin, Ali Shahrokni, Ashwin Swaminathan
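The abstract leaves the selection logic unspecified; as a minimal sketch, candidate maps could be chosen by the highest-priority type of location metadata the device supplies, falling back to the full universe of stored maps. The metadata types, field names, and priority order below are hypothetical, not taken from the patent.

```python
# Hypothetical metadata types, highest priority first.
METADATA_PRIORITY = ["geo_location", "wifi_fingerprint", "persistent_frame_id"]

def select_candidates(device_meta, stored_maps):
    """Return the stored maps matching the best metadata type the device has."""
    for kind in METADATA_PRIORITY:
        if kind not in device_meta:
            continue
        hits = [m for m in stored_maps if m.get(kind) == device_meta[kind]]
        if hits:
            return hits
    return list(stored_maps)  # no metadata matched: fall back to all maps

maps = [{"id": 1, "geo_location": "tile_42"},
        {"id": 2, "wifi_fingerprint": "bssid_a"},
        {"id": 3, "geo_location": "tile_99"}]
picked = select_candidates({"geo_location": "tile_42"}, maps)
```

A real index would bucket maps by geo-location tile rather than scan linearly; the priority loop is the point of the sketch.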
-
Publication number: 20210256767
Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines mergeability of a tracking map with a canonical map and merges a tracking map with a canonical map in accordance with mergeability criteria, such as when a gravity direction of the tracking map aligns with a gravity direction of the canonical map. Refraining from merging maps if the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
Type: Application
Filed: February 11, 2021
Publication date: August 19, 2021
Applicant: Magic Leap, Inc.
Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
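The gravity-alignment criterion can be illustrated as an angle test between the two maps' gravity vectors, refusing the merge when they diverge. The angular threshold and the map fields are assumptions for illustration only.

```python
import math

def gravity_aligned(g_tracking, g_canonical, max_angle_deg=5.0):
    """Mergeability sketch: the angle between gravity vectors must be small."""
    dot = sum(a * b for a, b in zip(g_tracking, g_canonical))
    na = math.sqrt(sum(a * a for a in g_tracking))
    nb = math.sqrt(sum(b * b for b in g_canonical))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return angle <= max_angle_deg

def try_merge(tracking, canonical):
    """Refrain from merging when gravity orientation is not preserved."""
    if not gravity_aligned(tracking["gravity"], canonical["gravity"]):
        return None
    return {"gravity": canonical["gravity"],
            "points": canonical["points"] + tracking["points"]}
```

Returning None models the "refrain from merging" branch that the abstract says avoids distortions in persisted maps.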
-
Patent number: 11079841
Abstract: Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying an image to a user of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
Type: Grant
Filed: October 2, 2019
Date of Patent: August 3, 2021
Assignee: QUALCOMM Incorporated
Inventors: Ashwin Swaminathan, Mahesh Ramachandran
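The claimed gaze-to-segment step can be pictured as cropping a region around the gaze point and handing only that crop to recognition, which is cheaper than recognizing the full frame. The fixed-size square crop and edge clamping below are assumptions, not the patent's method.

```python
def gaze_segment(image_w, image_h, gaze_x, gaze_y, roi=64):
    """Area-of-interest sketch: a roi x roi square centered on the gaze
    point, clamped so the crop stays inside the image bounds."""
    half = roi // 2
    x0 = max(0, min(gaze_x - half, image_w - roi))
    y0 = max(0, min(gaze_y - half, image_h - roi))
    return (x0, y0, x0 + roi, y0 + roi)  # (left, top, right, bottom)
```

The returned rectangle is what an object recognition process would receive instead of the whole image.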
-
Publication number: 20210209859
Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame. A further transformation is made to a head coordinate frame and a further transformation is made to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact in separate sessions with a viewing system. If a canonical map is available, the earlier map is downloaded onto a viewing device of a user. The viewing device then generates another map and localizes the subsequent map to the canonical map.
Type: Application
Filed: March 22, 2021
Publication date: July 8, 2021
Applicant: Magic Leap, Inc.
Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow, Christian Ivan Robert Moore
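The chain of coordinate-frame transformations (local → world → head → camera) amounts to composing rigid transforms; a sketch with homogeneous 4x4 matrices follows. The pure-translation transforms are placeholders, since real head and camera transforms include rotation.

```python
def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous 4x4 translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Placeholder transforms for the chain described in the abstract.
local_to_world = translation(1, 0, 0)
world_to_head = translation(0, 2, 0)
head_to_camera = translation(0, 0, 3)
local_to_camera = matmul(head_to_camera, matmul(world_to_head, local_to_world))

def apply(m, p):
    """Transform a 3D point by a homogeneous matrix."""
    v = [p[0], p[1], p[2], 1]
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))
```

Composing once and applying the combined matrix is how content authored in the local frame ends up posed for the camera.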
-
Publication number: 20210118401
Abstract: Techniques are described for calibrating a device having a first sensor and a second sensor. Techniques include capturing sensor data using the first sensor and the second sensor. The device maintains a calibration profile including a translation parameter and a rotation parameter to model a spatial relationship between the first sensor and the second sensor. Techniques include determining a calibration level associated with the calibration profile at a first time. Techniques include determining, based on the calibration level, to perform a calibration process. Techniques include performing the calibration process at the first time by generating one or both of a calibrated translation parameter and a calibrated rotation parameter and replacing one or both of the translation parameter and the rotation parameter with one or both of the calibrated translation parameter and the calibrated rotation parameter.
Type: Application
Filed: November 3, 2020
Publication date: April 22, 2021
Applicant: Magic Leap, Inc.
Inventors: Yu-Tseh Chi, Jean-Yves Bouguet, Divya Sharma, Lei Huang, Dennis William Strelow, Etienne Gregoire Grossmann, Evan Gregory Levine, Adam Harmat, Ashwin Swaminathan
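The described flow, check a calibration level, and only then run the calibration process and replace the parameters, can be sketched in a few lines. A scalar level, the threshold, and the profile fields are hypothetical.

```python
def maybe_recalibrate(profile, level, threshold=0.8, estimate=None):
    """If the calibration level has degraded below the threshold, run the
    calibration process and replace the translation/rotation parameters."""
    if level >= threshold:
        return profile  # still well calibrated; keep the existing profile
    new_t, new_r = estimate()  # the (expensive) calibration process
    return {"translation": new_t, "rotation": new_r}

profile = {"translation": (0.0, 0.0, 0.0), "rotation": (0.0, 0.0, 0.0)}
updated = maybe_recalibrate(
    profile, level=0.5,
    estimate=lambda: ((0.1, 0.0, 0.0), (0.0, 0.0, 0.01)))
```

Gating the calibration process on the level is the point: the device avoids re-estimating the sensor-to-sensor transform when the profile is still trustworthy.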
-
Publication number: 20210118218
Abstract: Examples of the disclosure describe systems and methods for presenting virtual content on a wearable head device. In some embodiments, a state of a wearable head device is determined by minimizing a total error based on a reduced weight associated with a reprojection error. A view reflecting the determined state of the wearable head device is presented via a display of the wearable head device. In some embodiments, a wearable head device calculates a first preintegration term and second preintegration term based on the image data received via a sensor of the wearable head device and the inertial data received via a first IMU and a second IMU of the wearable head device. The wearable head device estimates a position of the device based on the first and second preintegration terms, and the wearable head device presents the virtual content based on the position of the device.
Type: Application
Filed: October 16, 2020
Publication date: April 22, 2021
Inventors: Yu-Hsiang Huang, Evan Gregory Levine, Igor Napolskikh, Dominik Michael Kasper, Manel Quim Sanchez Nicuesa, Sergiu Sima, Benjamin Langmann, Ashwin Swaminathan, Martin Georg Zahnert, Blazej Marek Czuprynski, Joao Antonio Pereira Faro, Christoph Tobler, Omid Ghasemalizadeh
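Two ideas from the abstract can be loosely illustrated: down-weighting the reprojection term in the total error, and combining two IMU preintegration terms into one position estimate. The weights and the simple averaging below are assumptions; real preintegration fuses rotation, velocity, and bias terms, not a bare position delta.

```python
def total_error(reprojection_err, inertial_err, w_reproj=0.3, w_inertial=1.0):
    """Total error sketch with a reduced weight on the reprojection term,
    e.g. when visual tracking is less reliable (weights hypothetical)."""
    return w_reproj * reprojection_err + w_inertial * inertial_err

def fuse_preintegration(pos0, delta_imu1, delta_imu2):
    """Average the position deltas from two IMU preintegration terms to
    estimate the device's new position (a deliberate oversimplification)."""
    return tuple(p + (d1 + d2) / 2.0
                 for p, d1, d2 in zip(pos0, delta_imu1, delta_imu2))
```

The asymmetry in `total_error` is the described mechanism: the inertial term dominates the state estimate when reprojection residuals are discounted.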
-
Publication number: 20210110614
Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. The cross reality system may include a cloud-based localization service that responds to requests from devices to localize with respect to a stored map. The request may include one or more sets of feature descriptors extracted from an image of the physical world around the device. Those features may be posed relative to a coordinate frame used by the local device. The localization service may identify one or more stored maps with a matching set of features. Based on a transformation required to align the features from the device with the matching set of features, the localization service may compute and return to the device a transformation to relate its local coordinate frame to a coordinate frame of the stored map.
Type: Application
Filed: October 15, 2020
Publication date: April 15, 2021
Applicant: Magic Leap, Inc.
Inventors: Ali Shahrokni, Daniel Olshansky, Xuan Zhao, Rafael Domingos Torres, Joel David Holder, Keng-Sheng Lin, Ashwin Swaminathan, Anush Mohan
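A toy version of the final step, computing the transformation that aligns device features with the matched map features, is a least-squares fit over corresponding positions. The sketch estimates translation only; a real service would estimate a full rigid transform, typically with outlier rejection such as RANSAC.

```python
def estimate_transform(device_points, map_points):
    """Translation-only alignment sketch: the mean offset between matched
    feature positions relates the device's local frame to the map frame."""
    n = len(device_points)
    dx = sum(m[0] - d[0] for d, m in zip(device_points, map_points)) / n
    dy = sum(m[1] - d[1] for d, m in zip(device_points, map_points)) / n
    return (dx, dy)  # returned to the device by the localization service

# Two matched 2D features, observed locally and stored in the map.
transform = estimate_transform([(0, 0), (1, 0)], [(2, 3), (3, 3)])
```

Applying `transform` to any point in the device's local frame expresses it in the stored map's coordinate frame, which is exactly what the service returns.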
-
Publication number: 20210109940
Abstract: A confidentiality preserving system and method for performing a rank-ordered search and retrieval of contents of a data collection. The system includes at least one computer system including a search and retrieval algorithm using term frequency and/or similar features for rank-ordering selective contents of the data collection, and enabling secure retrieval of the selective contents based on the rank-order. The search and retrieval algorithm includes a baseline algorithm, a partially server oriented algorithm, and/or a fully server oriented algorithm. The partially and/or fully server oriented algorithms use homomorphic and/or order preserving encryption for enabling search capability from a user other than an owner of the contents of the data collection. The confidentiality preserving method includes using term frequency for rank-ordering selective contents of the data collection, and retrieving the selective contents based on the rank-order.
Type: Application
Filed: December 4, 2020
Publication date: April 15, 2021
Inventors: Ashwin Swaminathan, Yinian Mao, Guan-Ming Su, Hongmei Gou, Avinash Varna, Shan He, Min Wu, Douglas W. Oard
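The plaintext core of the method, rank-ordering documents by term frequency, is easy to sketch; the confidentiality layer (computing or comparing these scores under homomorphic or order-preserving encryption in the server-oriented variants) is omitted here.

```python
def rank_documents(query_terms, documents):
    """Rank document ids by term frequency of the query terms (plaintext
    sketch of the baseline; the encrypted variants are not shown)."""
    scores = []
    for doc_id, text in documents.items():
        words = text.lower().split()
        tf = sum(words.count(t.lower()) for t in query_terms) / max(len(words), 1)
        scores.append((doc_id, tf))
    # Retrieval then proceeds in rank order, highest score first.
    return [d for d, _ in sorted(scores, key=lambda p: p[1], reverse=True)]

docs = {"a": "secure search search", "b": "secure data search", "c": "other text"}
ranked = rank_documents(["search"], docs)
```

Order-preserving encryption is what lets a server produce this same ordering without seeing the plaintext scores.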
-
Patent number: 10957112
Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame. A further transformation is made to a head coordinate frame and a further transformation is made to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact in separate sessions with a viewing system. If a canonical map is available, the earlier map is downloaded onto a viewing device of a user. The viewing device then generates another map and localizes the subsequent map to the canonical map.
Type: Grant
Filed: August 12, 2019
Date of Patent: March 23, 2021
Assignee: Magic Leap, Inc.
Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow, Christian Ivan Robert Moore
-
Patent number: 10943120
Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points which will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and based on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
Type: Grant
Filed: December 14, 2018
Date of Patent: March 9, 2021
Assignee: Magic Leap, Inc.
Inventors: Martin Georg Zahnert, Joao Antonio Pereira Faro, Miguel Andres Granados Velasquez, Dominik Michael Kasper, Ashwin Swaminathan, Anush Mohan, Prateek Singhal
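The descriptor-based matching step can be pictured as a nearest-neighbour search: each observed salient point's descriptor is compared against the map's descriptors and matched to the closest one within a threshold. The descriptor vectors and the distance threshold here are hypothetical; real systems use high-dimensional patch descriptors.

```python
def match_salient_points(observed, map_points, max_dist=0.5):
    """Match each observed salient point to the closest map point by
    Euclidean distance between descriptors (nearest-neighbour sketch)."""
    matches = {}
    for name, desc in observed.items():
        best, best_d = None, max_dist
        for map_name, map_desc in map_points.items():
            d = sum((a - b) ** 2 for a, b in zip(desc, map_desc)) ** 0.5
            if d < best_d:
                best, best_d = map_name, d
        if best is not None:
            matches[name] = best  # unmatched points are simply dropped
    return matches
```

From such matches, plus the points' positions in the current image, the system can solve for the camera orientation and hence the head pose.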
-
Patent number: 10893148
Abstract: Systems and methods provide a unified call log for user devices in a device group. A user device receives an invite message for an incoming call and stores first call log information including a caller identifier based on the invite message. The user device receives a status message indicating that another user device has answered the incoming call and stores second call log information including a timestamp for the status message. The user device receives a call information message for a new line between the user device and a device associated with the caller telephone number, determines that the new line is a handover call associated with the incoming call, and stores third call log information for the new line. The user device determines that the handover call on the new line is ended and generates a call log entry including information from the incoming call and the handover call.
Type: Grant
Filed: September 11, 2019
Date of Patent: January 12, 2021
Assignee: Verizon Patent and Licensing Inc.
Inventors: Rafael Andres Gaviria Velez, Ashwin Swaminathan
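One way to picture the unified log is as a fold over the per-message records (invite, answered elsewhere, handover, end) into a single entry. The message names and fields below are invented for illustration; the patent describes SIP-style signaling, not this data model.

```python
def build_call_log_entry(events):
    """Fold per-message call records into one unified call log entry
    (hypothetical event schema)."""
    entry = {}
    for ev in events:
        if ev["type"] == "invite":              # first call log information
            entry["caller"] = ev["caller_id"]
        elif ev["type"] == "answered_elsewhere":  # second call log information
            entry["answered_at"] = ev["timestamp"]
            entry["answered_by"] = ev["device"]
        elif ev["type"] == "handover":          # third call log information
            entry["handover_line"] = ev["line_id"]
        elif ev["type"] == "end":
            entry["ended_at"] = ev["timestamp"]
    return entry

entry = build_call_log_entry([
    {"type": "invite", "caller_id": "+15551234"},
    {"type": "answered_elsewhere", "timestamp": 100, "device": "tablet"},
    {"type": "handover", "line_id": "line-2"},
    {"type": "end", "timestamp": 160},
])
```

The payoff is a single entry per call even when the call was answered on one device and handed over to another.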
-
Publication number: 20200410766
Abstract: A method to efficiently update and manage outputs of real time or offline 3D reconstruction and scanning in a mobile device having limited resources and Internet connectivity is provided. The method makes available to a wide variety of mobile XR applications fresh, accurate and comprehensive 3D reconstruction data, in either single user applications or multi-user applications sharing and updating the same 3D reconstruction data. The method includes a block-based 3D data representation that allows local update and maintains neighbor consistency at the same time, and a multi-layer caching mechanism that retrieves, prefetches, and stores 3D data efficiently for XR applications. Altitude information, which may be expressed as a building floor for indoor environments, may be associated with sparse and/or dense representations of the physical world, to increase the accuracy of localization results and/or in rendering virtual content more realistically.
Type: Application
Filed: May 21, 2020
Publication date: December 31, 2020
Applicant: Magic Leap, Inc.
Inventor: Ashwin Swaminathan
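The block-based representation plus multi-layer caching can be sketched as a small in-memory layer over a slower backing store, with neighbouring blocks prefetched on access. The 1D block keying, capacity, and FIFO eviction below are all assumptions; real systems key blocks in 3D and use smarter eviction.

```python
class BlockCache:
    """Multi-layer cache sketch for block-based 3D reconstruction data."""

    def __init__(self, backing, capacity=4):
        self.backing = backing      # slower layer, e.g. disk or cloud
        self.capacity = capacity    # blocks kept in the fast memory layer
        self.memory = {}            # insertion-ordered fast layer

    def get(self, block):
        if block not in self.memory:
            self._load(block)
            for nb in (block - 1, block + 1):  # prefetch neighbour blocks
                if nb in self.backing and nb not in self.memory:
                    self._load(nb)
        return self.memory[block]

    def _load(self, block):
        if len(self.memory) >= self.capacity:
            self.memory.pop(next(iter(self.memory)))  # evict oldest block
        self.memory[block] = self.backing[block]

backing = {i: "mesh%d" % i for i in range(10)}
cache = BlockCache(backing)
```

Prefetching neighbours is what keeps block boundaries consistent and cheap to traverse as the user moves through the reconstructed space.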
-
Patent number: 10854165
Abstract: A method for calibrating a device having a first sensor and a second sensor. The method includes capturing sensor data using the first sensor and the second sensor. The device maintains a calibration profile including a translation parameter and a rotation parameter to model a spatial relationship between the first sensor and the second sensor. The method also includes determining a calibration level associated with the calibration profile at a first time. The method further includes determining, based on the calibration level, to perform a calibration process. The method further includes performing the calibration process at the first time by generating one or both of a calibrated translation parameter and a calibrated rotation parameter and replacing one or both of the translation parameter and the rotation parameter with one or both of the calibrated translation parameter and the calibrated rotation parameter.
Type: Grant
Filed: December 21, 2018
Date of Patent: December 1, 2020
Assignee: Magic Leap, Inc.
Inventors: Yu-Tseh Chi, Jean-Yves Bouguet, Divya Sharma, Lei Huang, Dennis William Strelow, Etienne Gregoire Grossmann, Evan Gregory Levine, Adam Harmat, Ashwin Swaminathan
-
Patent number: 10694106
Abstract: Methods, systems, and techniques to enhance computer vision application processing are disclosed. In particular, the methods, systems, and techniques may reduce power consumption for computer vision applications and improve processing efficiency for computer vision applications.
Type: Grant
Filed: June 12, 2014
Date of Patent: June 23, 2020
Assignee: QUALCOMM Incorporated
Inventors: Fitzgerald John Archibald, Khosro Mohammad Rabii, Hima Bindu Damecharla, Tadeusz Jarosinski, Ashwin Swaminathan
-
Publication number: 20200106886
Abstract: Systems and methods provide a unified call log for user devices in a device group. A user device receives an invite message for an incoming call and stores first call log information including a caller identifier based on the invite message. The user device receives a status message indicating that another user device has answered the incoming call and stores second call log information including a timestamp for the status message. The user device receives a call information message for a new line between the user device and a device associated with the caller telephone number, determines that the new line is a handover call associated with the incoming call, and stores third call log information for the new line. The user device determines that the handover call on the new line is ended and generates a call log entry including information from the incoming call and the handover call.
Type: Application
Filed: September 11, 2019
Publication date: April 2, 2020
Inventors: Rafael Andres Gaviria Velez, Ashwin Swaminathan
-
Publication number: 20200090407
Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame. A further transformation is made to a head coordinate frame and a further transformation is made to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact in separate sessions with a viewing system. If a canonical map is available, the earlier map is downloaded onto a viewing device of a user. The viewing device then generates another map and localizes the subsequent map to the canonical map.
Type: Application
Filed: August 12, 2019
Publication date: March 19, 2020
Applicant: Magic Leap, Inc.
Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hiral Honar Barot, Eran Guendelman, Michael Harold Liebenow
-
Publication number: 20200051328
Abstract: A cross reality system that provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps, reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
Type: Application
Filed: October 4, 2019
Publication date: February 13, 2020
Applicant: Magic Leap, Inc.
Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nick Kramer, Ken Tossell
-
Publication number: 20200033943
Abstract: Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying an image to a user of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
Type: Application
Filed: October 2, 2019
Publication date: January 30, 2020
Inventors: Ashwin Swaminathan, Mahesh Ramachandran
-
Patent number: 10474233
Abstract: Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying an image to a user of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
Type: Grant
Filed: May 2, 2018
Date of Patent: November 12, 2019
Assignee: QUALCOMM Incorporated
Inventors: Ashwin Swaminathan, Mahesh Ramachandran
-
Patent number: 10455093
Abstract: Systems and methods provide a unified call log for user devices in a device group. A user device receives an invite message for an incoming call and stores first call log information including a caller identifier based on the invite message. The user device receives a status message indicating that another user device has answered the incoming call and stores second call log information including a timestamp for the status message. The user device receives a call information message for a new line between the user device and a device associated with the caller telephone number, determines that the new line is a handover call associated with the incoming call, and stores third call log information for the new line. The user device determines that the handover call on the new line is ended and generates a call log entry including information from the incoming call and the handover call.
Type: Grant
Filed: September 27, 2018
Date of Patent: October 22, 2019
Assignee: Verizon Patent and Licensing Inc.
Inventors: Rafael Andres Gaviria Velez, Ashwin Swaminathan