Patents by Inventor Gerhard Reitmayr
Gerhard Reitmayr has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12100107
Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
Type: Grant
Filed: July 17, 2023
Date of Patent: September 24, 2024
Assignee: QUALCOMM Incorporated
Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Gerhard Reitmayr, Ning Bi
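
Illustrative aside, not part of the USPTO record: the abstract names the steps (detect the supporting plane, segment the object, refine the mesh) without detail. The sketch below shows only the plane-detection and segmentation idea on a toy point cloud, assuming a simple RANSAC fit and a signed-distance threshold; the function names, thresholds, and use of NumPy are assumptions for illustration, not the claimed implementation.

    import numpy as np

    def fit_plane_ransac(points, iters=200, inlier_thresh=0.01, seed=0):
        """Fit a plane n.x + d = 0 to an (N, 3) point cloud with a basic RANSAC loop."""
        rng = np.random.default_rng(seed)
        best_inliers, best_model = None, None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                      # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal.dot(p0)
            inliers = np.abs(points @ normal + d) < inlier_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (normal, d)
        return best_model, best_inliers

    def segment_object_above_plane(points, normal, d, margin=0.01):
        """Keep the points lying off the plane on its positive side, beyond a margin."""
        signed_dist = points @ normal + d
        if signed_dist.sum() < 0:                # orient the normal so off-plane points are positive
            signed_dist = -signed_dist
        return points[signed_dist > margin]

    # Toy scene: a flat "table" plus a small box resting on it.
    rng = np.random.default_rng(1)
    table = np.c_[rng.uniform(-1, 1, (500, 2)), np.zeros(500)]
    box = rng.uniform([0.1, 0.1, 0.02], [0.3, 0.3, 0.2], (200, 3))
    cloud = np.vstack([table, box])

    (normal, d), inliers = fit_plane_ransac(cloud)
    object_points = segment_object_above_plane(cloud, normal, d)
    print(f"plane inliers: {inliers.sum()}, segmented object points: {len(object_points)}")

The segmented points would then feed whatever meshing step the patent's later stages describe; that part is not sketched here.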
-
Publication number: 20240296576
Abstract: This disclosure provides systems, methods, and devices for image signal processing that support artificial intelligence (AI)-based processing of image data for reconstructing 3D worlds. In a first aspect, a method of image processing includes receiving a plurality of image frames representing a scene; determining a first depth prediction for the scene based on the plurality of image frames; determining a reconstructed mesh from the plurality of image frames; determining a second depth prediction for the scene based on the reconstructed mesh; and determining a third depth prediction based on the first depth prediction and the second depth prediction. Other aspects and features are also claimed and described.
Type: Application
Filed: August 10, 2023
Publication date: September 5, 2024
Inventors: Mohsen Ghafoorian, Georgi Dikov, Xuepeng Shi, Jihong Ju, Gerhard Reitmayr
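
Illustrative aside, not part of the publication: the abstract combines an image-based depth prediction with a depth derived from a reconstructed mesh into a third prediction, without saying how. One plausible (assumed) fusion rule is a per-pixel confidence-weighted average with a fallback where the mesh has no coverage, sketched below; all names and weights are invented for illustration.

    import numpy as np

    def fuse_depths(depth_from_images, depth_from_mesh, conf_images, conf_mesh, eps=1e-6):
        """Per-pixel confidence-weighted fusion of two dense depth predictions.
        Pixels without mesh coverage (depth <= 0) fall back to the image-based depth."""
        mesh_valid = depth_from_mesh > 0
        w_img = conf_images
        w_mesh = np.where(mesh_valid, conf_mesh, 0.0)
        return (w_img * depth_from_images + w_mesh * depth_from_mesh) / (w_img + w_mesh + eps)

    # Toy 4x4 depth maps; the mesh-based prediction has no coverage at one pixel.
    d_img = np.full((4, 4), 2.0)
    d_mesh = np.full((4, 4), 2.2)
    d_mesh[0, 0] = 0.0
    fused = fuse_depths(d_img, d_mesh,
                        conf_images=np.full((4, 4), 0.5), conf_mesh=np.full((4, 4), 1.0))
    print(fused[0, 0], fused[1, 1])   # ~2.0 where the mesh is missing, ~2.13 elsewhere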
-
Publication number: 20240104869
Abstract: Techniques and systems are provided for providing recommendations for extended reality systems. In some examples, a system determines one or more environmental features associated with a real-world environment of an extended reality system. The system determines one or more user features associated with a user of the extended reality system. The system also outputs, based on the one or more environmental features and the one or more user features, a notification associated with at least one application supported by the extended reality system.
Type: Application
Filed: December 6, 2023
Publication date: March 28, 2024
Inventors: Mehrad TAVAKOLI, Robert TARTZ, Scott BEITH, Gerhard REITMAYR
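
Illustrative aside, not part of the publication: the abstract only states that environmental features and user features drive a notification about a supported application. The toy rule-based sketch below shows that flow; every feature name, rule, and threshold is a made-up assumption, and the actual system may well use a learned scoring model instead.

    def recommend_applications(env_features, user_features):
        """Return notification strings for apps that match the current context.
        The feature names and rules are invented purely for illustration."""
        notifications = []
        if env_features.get("detected_surface") == "table" and user_features.get("likes_games"):
            notifications.append("A tabletop game app is available for this surface.")
        if env_features.get("ambient_light", 1.0) < 0.2:
            notifications.append("Low light detected: a passthrough-assist app is recommended.")
        return notifications

    print(recommend_applications({"detected_surface": "table", "ambient_light": 0.1},
                                 {"likes_games": True}))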
-
Patent number: 11887262
Abstract: Techniques and systems are provided for providing recommendations for extended reality systems. In some examples, a system determines one or more environmental features associated with a real-world environment of an extended reality system. The system determines one or more user features associated with a user of the extended reality system. The system also outputs, based on the one or more environmental features and the one or more user features, a notification associated with at least one application supported by the extended reality system.
Type: Grant
Filed: December 22, 2021
Date of Patent: January 30, 2024
Assignee: QUALCOMM Incorporated
Inventors: Mehrad Tavakoli, Robert Tartz, Scott Beith, Gerhard Reitmayr
-
Publication number: 20240005607
Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
Type: Application
Filed: July 17, 2023
Publication date: January 4, 2024
Inventors: Ke-Li CHENG, Kuang-Man HUANG, Michel Adib SARKIS, Gerhard REITMAYR, Ning BI
-
Publication number: 20230342943
Abstract: Examples are described for processing images to mask dynamic objects out of images to improve feature tracking between images. A device receives an image of an environment captured by an image sensor. The image depicts at least a static portion of the environment and a dynamic object in the environment. The device identifies a portion of the image that includes a depiction of the dynamic object. For example, the device can detect a bounding box around the dynamic object, or can detect which pixels in the image correspond to the dynamic object. The device generates a masked image at least by masking the portion of the image. The device identifies features in the masked image, and uses the features from the masked image for feature tracking from other images of the environment, masked or otherwise. The device can use this feature tracking for mapping, localization, and/or relocation.
Type: Application
Filed: June 28, 2023
Publication date: October 26, 2023
Inventors: Abhijeet BISAIN, Gerhard REITMAYR
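
Illustrative aside, not part of the publication: the core idea is to mask out dynamic objects before extracting features so that tracking relies only on the static scene. The toy sketch below zeroes the detection score inside assumed bounding boxes; the crude gradient-based "detector" and all names are placeholders, not the claimed method.

    import numpy as np

    def mask_dynamic_objects(image, boxes):
        """Boolean mask that is False inside each dynamic-object bounding box."""
        mask = np.ones(image.shape[:2], dtype=bool)
        for x0, y0, x1, y1 in boxes:
            mask[y0:y1, x0:x1] = False
        return mask

    def detect_static_features(image, mask, num_features=100):
        """Crude corner score (squared gradient magnitude); a real system would run
        a proper detector (Harris, FAST, ...) with the same mask applied."""
        gy, gx = np.gradient(image.astype(np.float32))
        score = gx ** 2 + gy ** 2
        score[~mask] = 0.0                        # suppress the masked (dynamic) regions
        flat = np.argsort(score, axis=None)[::-1][:num_features]
        ys, xs = np.unravel_index(flat, score.shape)
        return list(zip(xs.tolist(), ys.tolist()))

    # Toy image with one "dynamic object" box; no features should fall inside it.
    img = (np.random.default_rng(0).random((120, 160)) * 255).astype(np.uint8)
    mask = mask_dynamic_objects(img, boxes=[(40, 30, 90, 80)])
    features = detect_static_features(img, mask)
    assert all(not (40 <= x < 90 and 30 <= y < 80) for x, y in features)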
-
Patent number: 11769258
Abstract: Systems and techniques are described herein for processing images. The systems and techniques can be implemented by various types of systems, such as by an extended reality (XR) system or device. In some cases, a first processor receives an image of an environment captured by an image sensor, identifies features depicted in the image, and generates descriptors for the features. The first processor sends the descriptors to a second processor, which may be more powerful than the first processor. The second processor receives the descriptors. The second processor associates the plurality of features with a map of the environment based on at least a subset of the plurality of descriptors. For example, the second processor can track at least a subset of the features based on at least a subset of the descriptors and based on feature information from one or more additional images of the environment.
Type: Grant
Filed: February 3, 2021
Date of Patent: September 26, 2023
Assignee: QUALCOMM Incorporated
Inventors: Ajit Deepak Gupte, Gerhard Reitmayr, Abhijeet Bisain, Pushkar Gorur Sheshagiri, Chayan Sharma, Ajit Venkat Rao
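
Illustrative aside, not part of the USPTO record: the abstract describes a split pipeline in which a first (smaller) processor computes descriptors and a second processor associates them with a map. The sketch below mimics that split with a placeholder histogram descriptor and nearest-neighbour association; the descriptor, the distance threshold, and all names are assumptions for illustration only.

    import numpy as np

    def first_processor_extract(image_patches):
        """Stand-in for the low-power descriptor step: a normalized intensity
        histogram per patch (a placeholder, not a real feature descriptor)."""
        descriptors = []
        for patch in image_patches:
            hist, _ = np.histogram(patch, bins=16, range=(0, 256))
            descriptors.append(hist / max(hist.sum(), 1))
        return np.array(descriptors)

    def second_processor_associate(descriptors, map_descriptors, max_dist=0.5):
        """Associate incoming descriptors with map landmarks by nearest neighbour."""
        matches = []
        for i, desc in enumerate(descriptors):
            dists = np.linalg.norm(map_descriptors - desc, axis=1)
            j = int(np.argmin(dists))
            if dists[j] < max_dist:
                matches.append((i, j))            # (feature index, map landmark index)
        return matches

    rng = np.random.default_rng(0)
    # Five toy patches with distinct brightness levels, so their histograms differ.
    patches = [np.clip(rng.normal(40 * i + 20, 10, (16, 16)), 0, 255) for i in range(5)]
    descriptors = first_processor_extract(patches)            # "sent" to the second processor
    map_descriptors = np.vstack([descriptors[2], rng.random((3, 16))])
    print(second_processor_associate(descriptors, map_descriptors))   # [(2, 0)]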
-
Patent number: 11756227
Abstract: Systems and techniques are provided for determining and applying corrected poses in digital content experiences. An example method can include receiving, from one or more sensors associated with an apparatus, inertial measurements and one or more frames of a scene; based on the one or more frames and the inertial measurements, determining, via a first filter, an angular and linear motion of the apparatus and a gravity vector indicating a direction of gravitational force interacting with the apparatus; when a motion of the apparatus is below a threshold, determining, via a second filter, an updated gravity vector indicating a direction of gravitational force interacting with the apparatus; determining, based on the updated gravity vector, parameters for aligning an axis of the scene with a gravity direction in a real-world spatial frame; and aligning, using the parameters, the axis of the scene with the gravity direction in the real-world spatial frame.
Type: Grant
Filed: May 4, 2021
Date of Patent: September 12, 2023
Assignee: QUALCOMM Incorporated
Inventors: Srujan Babu Nandipati, Pushkar Gorur Sheshagiri, Chiranjib Choudhuri, Ajit Deepak Gupte, Gerhard Reitmayr
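
Illustrative aside, not part of the USPTO record: once an updated gravity vector is available, aligning a scene axis with gravity reduces to a rotation that maps the scene's up axis onto the measured world-up direction. The sketch below uses the standard Rodrigues construction for that rotation; the example gravity values and names are assumptions, and the patent's two-filter estimation of gravity is not reproduced here.

    import numpy as np

    def rotation_between(a, b):
        """Rotation matrix mapping unit vector a onto unit vector b (Rodrigues formula)."""
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        v, c = np.cross(a, b), float(a.dot(b))
        if np.isclose(c, -1.0):                  # opposite vectors: 180-degree rotation
            axis = np.cross(a, [1.0, 0.0, 0.0])
            if np.linalg.norm(axis) < 1e-8:
                axis = np.cross(a, [0.0, 1.0, 0.0])
            axis /= np.linalg.norm(axis)
            return 2.0 * np.outer(axis, axis) - np.eye(3)
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    scene_up = np.array([0.0, 1.0, 0.0])             # the scene axis to be aligned
    gravity = np.array([0.05, -0.99, 0.10])          # filtered gravity estimate (points down)
    world_up = -gravity / np.linalg.norm(gravity)
    R = rotation_between(scene_up, world_up)         # apply R to scene content to align it
    print(np.allclose(R @ scene_up, world_up))       # True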
-
Patent number: 11748949
Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
Type: Grant
Filed: May 13, 2022
Date of Patent: September 5, 2023
Assignee: QUALCOMM Incorporated
Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Gerhard Reitmayr, Ning Bi
-
Patent number: 11748913
Abstract: Systems and techniques are provided for modeling three-dimensional (3D) meshes using images. An example method can include receiving, via a neural network system, an image of a target and metadata associated with the image and/or a device that captured the image; determining, based on the image and metadata, first 3D mesh parameters of a first 3D mesh of the target, the first 3D mesh parameters and first 3D mesh corresponding to a first reference frame associated with the image and/or the device; and determining, based on the first 3D mesh parameters, second 3D mesh parameters for a second 3D mesh of the target, the second 3D mesh parameters and second 3D mesh corresponding to a second reference frame, the second reference frame including a 3D coordinate system of a real-world scene where the target is located.
Type: Grant
Filed: March 1, 2021
Date of Patent: September 5, 2023
Assignee: QUALCOMM Incorporated
Inventors: Ashar Ali, Gokce Dane, Upal Mahbub, Samuel Sunarjo, Gerhard Reitmayr
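
Illustrative aside, not part of the USPTO record: moving mesh parameters from an image/device-centric reference frame into the real-world scene's coordinate system is, at its geometric core, a rigid transform by the camera pose. The sketch below shows only that transform on a toy triangle; the pose values are invented, and the neural-network prediction of the first-frame parameters is not modelled.

    import numpy as np

    def camera_to_world(vertices_cam, R_wc, t_wc):
        """Map (N, 3) mesh vertices from the camera frame into the world frame,
        given the camera-to-world rotation R_wc and translation t_wc."""
        return vertices_cam @ R_wc.T + t_wc

    # A toy mesh (one triangle) predicted one metre in front of the camera.
    verts_cam = np.array([[0.0, 0.0, 1.0],
                          [0.1, 0.0, 1.0],
                          [0.0, 0.1, 1.0]])
    yaw = np.deg2rad(90)                             # camera yawed 90 degrees in the world
    R_wc = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                     [0, 1, 0],
                     [-np.sin(yaw), 0, np.cos(yaw)]])
    t_wc = np.array([2.0, 0.0, 0.5])                 # camera position in the world
    print(camera_to_world(verts_cam, R_wc, t_wc))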
-
Publication number: 20230274778
Abstract: Systems, methods, and computer-readable media are provided for providing pose estimation in extended reality systems. An example method can include tracking, in a lower-power processing mode using a set of lower-power circuit elements on an integrated circuit, a position and orientation of a computing device during a lower-power processing period, the set of lower-power circuit elements including a static random-access memory (SRAM); suspending, based on a triggering event, the tracking in the lower-power processing mode; initiating a higher-power processing mode for tracking the position and orientation of the computing device during a higher-power processing period; and tracking, in the higher-power processing mode using a set of higher-power circuit elements on the integrated circuit and a dynamic random-access memory (DRAM), the position and orientation of the computing device during the higher-power processing period.
Type: Application
Filed: May 8, 2023
Publication date: August 31, 2023
Inventors: Wesley James HOLLAND, Mehrad TAVAKOLI, Injoon HONG, Huang HUANG, Simon Peter William BOOTH, Gerhard REITMAYR
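
Illustrative aside, not part of the publication: the abstract describes suspending a lower-power tracking mode on a triggering event and continuing in a higher-power mode. The toy state machine below illustrates only that control flow; the trigger names are assumptions, and the SRAM/DRAM circuit selection is hardware-level detail that a software sketch cannot capture.

    class PoseTracker:
        """Toy control flow only: models the suspend/resume mode switch."""

        def __init__(self):
            self.mode = "low_power"

        def on_trigger(self, event):
            # A triggering event escalates or relaxes the processing mode.
            if self.mode == "low_power" and event in {"fast_motion", "render_request"}:
                self.mode = "high_power"
            elif self.mode == "high_power" and event == "idle_timeout":
                self.mode = "low_power"

        def track(self, imu_sample):
            # Both modes would update position and orientation; only the compute
            # budget (and the memory used) differs between them.
            return {"mode": self.mode, "pose": imu_sample}

    tracker = PoseTracker()
    tracker.on_trigger("fast_motion")
    print(tracker.track(imu_sample=(0.0, 0.1, 9.8))["mode"])   # high_power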
-
Patent number: 11727576
Abstract: Examples are described for processing images to mask dynamic objects out of images to improve feature tracking between images. A device receives an image of an environment captured by an image sensor. The image depicts at least a static portion of the environment and a dynamic object in the environment. The device identifies a portion of the image that includes a depiction of the dynamic object. For example, the device can detect a bounding box around the dynamic object, or can detect which pixels in the image correspond to the dynamic object. The device generates a masked image at least by masking the portion of the image. The device identifies features in the masked image, and uses the features from the masked image for feature tracking from other images of the environment, masked or otherwise. The device can use this feature tracking for mapping, localization, and/or relocation.
Type: Grant
Filed: December 18, 2020
Date of Patent: August 15, 2023
Assignee: QUALCOMM Incorporated
Inventors: Abhijeet Bisain, Gerhard Reitmayr
-
Patent number: 11682454
Abstract: Systems, methods, and computer-readable media are provided for providing pose estimation in extended reality systems. An example method can include tracking, in a lower-power processing mode using a set of lower-power circuit elements on an integrated circuit, a position and orientation of a computing device during a lower-power processing period, the set of lower-power circuit elements including a static random-access memory (SRAM); suspending, based on a triggering event, the tracking in the lower-power processing mode; initiating a higher-power processing mode for tracking the position and orientation of the computing device during a higher-power processing period; and tracking, in the higher-power processing mode using a set of higher-power circuit elements on the integrated circuit and a dynamic random-access memory (DRAM), the position and orientation of the computing device during the higher-power processing period.
Type: Grant
Filed: November 3, 2021
Date of Patent: June 20, 2023
Assignee: QUALCOMM Incorporated
Inventors: Wesley James Holland, Mehrad Tavakoli, Injoon Hong, Huang Huang, Simon Peter William Booth, Gerhard Reitmayr
-
Patent number: 11632162
Abstract: This disclosure provides systems, methods, and devices for wireless communication that support enhanced beam management using extended reality (XR) perception data. In a first aspect, a method of wireless communication includes establishing a communication connection between a user equipment (UE) and a serving base station using a current serving beam selected by the UE from a plurality of available beams paired with a serving base station beam. The method further includes obtaining perception information from one or more extended reality sensors associated with the UE and determining, in response to detection of UE movement, a transpositional representation of the movement using the perception information. The UE may then select a new serving beam in accordance with the transpositional representation. Other aspects and features are also claimed and described.
Type: Grant
Filed: September 30, 2021
Date of Patent: April 18, 2023
Assignee: QUALCOMM Incorporated
Inventors: Hussein Metwaly Saad, Peerapol Tinnakornsrisuphap, Prashanth Haridas Hande, Gerhard Reitmayr, Abhijeet Bisain, Chih-Ping Li
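
Illustrative aside, not part of the USPTO record: one way to read the abstract is that the pose change estimated by XR tracking predicts where the serving beam direction moves, so the UE can re-select a beam without a full sweep. The sketch below applies an estimated rotation to the current beam direction and picks the closest candidate boresight; the beam grid, the sign convention of the rotation, and all names are assumptions.

    import numpy as np

    def select_beam(current_beam_dir, rotation_delta, candidate_beam_dirs):
        """Rotate the current beam direction by the XR-estimated rotation and return
        the index of the candidate beam whose boresight is closest to it."""
        predicted = rotation_delta @ current_beam_dir
        predicted /= np.linalg.norm(predicted)
        return int(np.argmax(candidate_beam_dirs @ predicted))   # cosine similarity

    # Twelve candidate beams spaced 30 degrees apart in the horizontal plane.
    angles = np.deg2rad(np.arange(0, 360, 30))
    beams = np.c_[np.cos(angles), np.sin(angles), np.zeros_like(angles)]
    yaw = np.deg2rad(28)                                         # device turned ~28 degrees
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
    print(select_beam(beams[0], Rz, beams))                      # 1: the 30-degree beam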
-
Publication number: 20230096553
Abstract: This disclosure provides systems, methods, and devices for wireless communication that support enhanced beam management using extended reality (XR) perception data. In a first aspect, a method of wireless communication includes establishing a communication connection between a user equipment (UE) and a serving base station using a current serving beam selected by the UE from a plurality of available beams paired with a serving base station beam. The method further includes obtaining perception information from one or more extended reality sensors associated with the UE and determining, in response to detection of UE movement, a transpositional representation of the movement using the perception information. The UE may then select a new serving beam in accordance with the transpositional representation. Other aspects and features are also claimed and described.
Type: Application
Filed: September 30, 2021
Publication date: March 30, 2023
Inventors: Hussein Metwaly Saad, Peerapol Tinnakornsrisuphap, Prashanth Haridas Hande, Gerhard Reitmayr, Abhijeet Bisain, Chih-Ping Li
-
Publication number: 20230095621
Abstract: Systems and techniques are described herein for processing frames. The systems and techniques can be implemented by various types of systems, such as by an extended reality (XR) system or device. In some cases, a process can include obtaining feature information associated with a feature in a current frame, wherein the feature information is based on one or more previous frames; determining an estimated pose of the apparatus associated with the current frame; obtaining a distance associated with the feature in the current frame; and determining an estimated scale of the feature in the current frame based on the feature information associated with the feature, the estimated pose, and the distance associated with the feature.
Type: Application
Filed: September 24, 2021
Publication date: March 30, 2023
Inventors: Pushkar GORUR SHESHAGIRI, Ajit Deepak GUPTE, Chiranjib CHOUDHURI, Gerhard REITMAYR, Youngmin PARK
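
Illustrative aside, not part of the publication: estimating a feature's scale from its earlier observation, the pose, and a distance follows the pinhole relation that apparent size is inversely proportional to depth. The snippet below states just that relation; the numbers are made up and the full pose-based distance computation is not modelled.

    def propagate_feature_scale(scale_prev, depth_prev, depth_curr):
        """Pinhole relation: apparent size scales as 1/depth, so
        scale_curr = scale_prev * depth_prev / depth_curr."""
        return scale_prev * depth_prev / depth_curr

    # A patch that appeared 12 px wide at 2.0 m appears ~24 px wide at 1.0 m.
    print(propagate_feature_scale(scale_prev=12.0, depth_prev=2.0, depth_curr=1.0))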
-
Patent number: 11514658
Abstract: Systems and techniques are provided for modeling three-dimensional (3D) meshes using multi-view image data. An example method can include determining, based on a first image of a target, first 3D mesh parameters for the target corresponding to a first coordinate frame; determining, based on a second image of the target, second 3D mesh parameters for the target corresponding to a second coordinate frame; determining third 3D mesh parameters for the target in a third coordinate frame, the third 3D mesh parameters being based on the first and second 3D mesh parameters and relative rotation and translation parameters of image sensors that captured the first and second images; determining a loss associated with the third 3D mesh parameters, the loss being based on the first and second 3D mesh parameters and the relative rotation and translation parameters; and determining 3D mesh parameters based on the loss and the third 3D mesh parameters.
Type: Grant
Filed: January 20, 2021
Date of Patent: November 29, 2022
Assignee: QUALCOMM Incorporated
Inventors: Ashar Ali, Gokce Dane, Gerhard Reitmayr
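
Illustrative aside, not part of the USPTO record: one plausible reading of the loss is a consistency term that maps each per-view mesh into a common frame using the known relative rotation and translation and penalizes vertex-wise disagreement. The sketch below implements that reading with a mean-squared-distance loss; the L2 choice, the poses, and all names are assumptions.

    import numpy as np

    def to_common_frame(vertices, R, t):
        """Express (N, 3) mesh vertices, given in one view's frame, in the common frame."""
        return vertices @ R.T + t

    def multiview_consistency_loss(verts_v1, verts_v2, R1, t1, R2, t2):
        """Mean squared vertex distance between two per-view meshes after both are
        mapped into the same coordinate frame with the known relative poses."""
        v1 = to_common_frame(verts_v1, R1, t1)
        v2 = to_common_frame(verts_v2, R2, t2)
        return float(np.mean(np.sum((v1 - v2) ** 2, axis=1)))

    rng = np.random.default_rng(0)
    mesh_common = rng.random((50, 3))                # the same mesh, in the common frame
    R1, t1 = np.eye(3), np.zeros(3)                  # view 1 coincides with the common frame
    theta = np.deg2rad(45)
    R2 = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    t2 = np.array([0.5, -0.2, 0.0])
    mesh_view2 = (mesh_common - t2) @ R2             # the mesh expressed in view 2's frame
    print(multiview_consistency_loss(mesh_common, mesh_view2, R1, t1, R2, t2))   # ~0.0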
-
Publication number: 20220366597
Abstract: Systems and techniques are provided for determining and applying corrected poses in digital content experiences. An example method can include receiving, from one or more sensors associated with an apparatus, inertial measurements and one or more frames of a scene; based on the one or more frames and the inertial measurements, determining, via a first filter, an angular and linear motion of the apparatus and a gravity vector indicating a direction of gravitational force interacting with the apparatus; when a motion of the apparatus is below a threshold, determining, via a second filter, an updated gravity vector indicating a direction of gravitational force interacting with the apparatus; determining, based on the updated gravity vector, parameters for aligning an axis of the scene with a gravity direction in a real-world spatial frame; and aligning, using the parameters, the axis of the scene with the gravity direction in the real-world spatial frame.
Type: Application
Filed: May 4, 2021
Publication date: November 17, 2022
Inventors: Srujan Babu NANDIPATI, Pushkar GORUR SHESHAGIRI, Chiranjib CHOUDHURI, Ajit Deepak GUPTE, Gerhard REITMAYR
-
Publication number: 20220343602
Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
Type: Application
Filed: May 13, 2022
Publication date: October 27, 2022
Inventors: Ke-Li CHENG, Kuang-Man HUANG, Michel Adib SARKIS, Gerhard REITMAYR, Ning BI
-
Patent number: 11481931
Abstract: Systems, methods, and non-transitory media are provided for generating virtual private spaces for extended reality (XR) experiences. An example method can include initiating a virtual session for presenting virtual content and identifying, for the virtual session, a portion of a physical space for use as a virtual private space for presenting at least a portion of the virtual content. The method can include outputting boundary information defining a boundary of the virtual private space, and generating at least the portion of the virtual content for the virtual private space. At least the portion of the virtual content is viewable in the virtual private space by one or more authorized users of the virtual session and is not viewable by one or more unauthorized users.
Type: Grant
Filed: July 7, 2020
Date of Patent: October 25, 2022
Assignee: QUALCOMM Incorporated
Inventors: Scott Beith, Robert Tartz, Ananthapadmanabhan Arasanipalai Kandhadai, Gerhard Reitmayr, Mehrad Tavakoli
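
Illustrative aside, not part of the USPTO record: the visibility rule the abstract implies is that private content is rendered only for authorized users of the session, within the bounded space. The toy check below illustrates that gating; the spherical boundary, the user names, and the data layout are all assumptions.

    import numpy as np

    def is_content_visible(user_id, user_position, private_space, authorized_users):
        """Private content is shown only to authorized session users located inside
        the boundary (modelled here as a sphere: centre plus radius)."""
        offset = np.asarray(user_position) - private_space["center"]
        inside = np.linalg.norm(offset) <= private_space["radius"]
        return user_id in authorized_users and inside

    space = {"center": np.array([0.0, 0.0, 0.0]), "radius": 2.0}
    print(is_content_visible("alice", (0.5, 0.0, 0.3), space, {"alice"}))   # True
    print(is_content_visible("bob", (0.5, 0.0, 0.3), space, {"alice"}))     # False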