Patents by Inventor Peter Meier

Peter Meier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210166489
    Abstract: A method of displaying virtual information in a view of a real environment comprising the following steps: determining a current pose of a system relative to at least one part of the real environment and providing accuracy information of the current pose, providing multiple pieces of virtual information, assigning a respective one of the pieces of virtual information to one of different parameters indicative of different pose accuracy information, and displaying at least one of the pieces of virtual information in the view of the real environment according to the accuracy information of the current pose in relation to the assigned parameter.
    Type: Application
    Filed: February 8, 2021
    Publication date: June 3, 2021
    Inventor: Peter Meier
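    A minimal sketch of the idea in the abstract above, assuming (as a simplification) that the parameter assigned to each piece of virtual information is a single tolerable pose error in metres; the names and the scalar error model are illustrative, not taken from the patent.

    ```python
    # Hypothetical sketch: show only those annotations whose accuracy requirement
    # is satisfied by the accuracy of the current pose estimate.
    from dataclasses import dataclass

    @dataclass
    class VirtualInfo:
        label: str
        max_pose_error_m: float  # assumed parameter: worst tolerable pose error (metres)

    def select_visible(items: list[VirtualInfo], current_pose_error_m: float) -> list[VirtualInfo]:
        """Return the pieces of virtual information to display for the current pose accuracy."""
        return [it for it in items if current_pose_error_m <= it.max_pose_error_m]

    if __name__ == "__main__":
        items = [
            VirtualInfo("door label (needs precise registration)", max_pose_error_m=0.05),
            VirtualInfo("district name (coarse pose is fine)", max_pose_error_m=50.0),
        ]
        # With a 2 m pose uncertainty, only the coarse annotation is displayed.
        for it in select_visible(items, current_pose_error_m=2.0):
            print("display:", it.label)
    ```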
  • Publication number: 20210074031
    Abstract: In one implementation, a method includes: while causing presentation of video content having a current plot setting, receiving a user input indicating a request to explore the current plot setting; obtaining synthesized reality (SR) content associated with the current plot setting in response to receiving the user input; causing presentation of the SR content associated with the current plot setting; receiving one or more user interactions with the SR content; and adjusting the presentation of the SR content in response to receiving the one or more user interactions with the SR content.
    Type: Application
    Filed: January 18, 2019
    Publication date: March 11, 2021
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Publication number: 20210076014
    Abstract: A method of projecting digital information on a real object in a real environment includes the steps of projecting digital information on a real object or part of a real object with a visible light projector, capturing at least one image of the real object with the projected digital information using a camera, providing a depth sensor registered with the camera, the depth sensor capturing depth data of the real object or part of the real object, and calculating a spatial transformation between the visible light projector and the real object based on the at least one image and the depth data. The invention is also concerned with a corresponding system.
    Type: Application
    Filed: September 22, 2020
    Publication date: March 11, 2021
    Inventors: Peter Meier, Mohamed Selim Ben Himane, Daniel Kurz
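    The central computation is registering the visible-light projector to the real object using the camera image and the registered depth data. A hedged sketch, treating the projector as an inverse camera and solving a PnP problem; the use of OpenCV's solvePnP, the assumption of known projector intrinsics, and the function names are illustrative choices rather than the patent's method.

    ```python
    # Hypothetical sketch: each projected pattern feature is detected in the camera image;
    # the registered depth sensor gives its 3D position on the object, and its 2D position
    # in projector pixels is known because we projected it. Solving PnP over these
    # correspondences yields the object-to-projector spatial transformation.
    import numpy as np
    import cv2

    def projector_pose(object_points_3d, projector_points_2d, projector_intrinsics):
        """Return (R, t) mapping object coordinates into the projector frame."""
        obj = np.asarray(object_points_3d, dtype=np.float64)     # Nx3, from depth data
        img = np.asarray(projector_points_2d, dtype=np.float64)  # Nx2, projector pixels
        ok, rvec, tvec = cv2.solvePnP(obj, img, projector_intrinsics, None)
        if not ok:
            raise RuntimeError("pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)                               # rotation vector -> matrix
        return R, tvec
    ```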
  • Publication number: 20210064910
    Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
    Type: Application
    Filed: August 25, 2020
    Publication date: March 4, 2021
    Inventors: Peter Meier, Daniel Kurz, Brian Chris Clark, Mohamed Selim Ben Himane
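    One way to place replacement content in a detected reflection is to mirror the device's pose across the reflecting plane and render the virtual stand-in (an avatar, a different device model, a wand) at the mirrored pose. The sketch below covers only that geometric step and assumes the mirror plane and the device pose in world coordinates are already known; detection and rendering are out of scope.

    ```python
    # Hypothetical sketch: householder reflection of a pose across a detected mirror plane.
    import numpy as np

    def reflection_matrix(plane_point: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
        """4x4 reflection across the plane through plane_point with the given normal."""
        n = plane_normal / np.linalg.norm(plane_normal)
        d = -float(n @ plane_point)          # plane: n.x + d = 0
        M = np.eye(4)
        M[:3, :3] -= 2.0 * np.outer(n, n)    # linear part: I - 2 n n^T
        M[:3, 3] = -2.0 * d * n              # translation part
        return M

    def reflected_pose(device_pose_world: np.ndarray, plane_point, plane_normal) -> np.ndarray:
        """Pose (4x4) at which replacement content should be rendered in the reflection."""
        M = reflection_matrix(np.asarray(plane_point, float), np.asarray(plane_normal, float))
        return M @ device_pose_world
    ```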
  • Patent number: 10922886
    Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: February 16, 2021
    Assignee: Apple Inc.
    Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
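    A rough sketch of the tile-fetching side of such a system: pre-generated 3D tiles around the vehicle's position are pulled from remote storage and cached, so their meshes and textures can augment the locally sensed point cloud, including occluded or distant regions. The tile size, the indexing scheme, and the fetch_tile placeholder are assumptions made for illustration.

    ```python
    # Hypothetical sketch: fetch and cache pre-generated 3D tiles around a position.
    import math

    TILE_SIZE_M = 100.0                                   # assumed tile edge length
    _tile_cache: dict[tuple[int, int], dict] = {}

    def tile_index(x: float, y: float) -> tuple[int, int]:
        return (math.floor(x / TILE_SIZE_M), math.floor(y / TILE_SIZE_M))

    def fetch_tile(idx: tuple[int, int]) -> dict:
        # Placeholder for a request to cloud-based tile storage (mesh, textures, geometry).
        return {"index": idx, "mesh": None, "textures": None}

    def tiles_around(x: float, y: float, radius_m: float) -> list[dict]:
        """Return cached or newly fetched tiles covering a square region around the vehicle."""
        r = math.ceil(radius_m / TILE_SIZE_M)
        cx, cy = tile_index(x, y)
        tiles = []
        for ix in range(cx - r, cx + r + 1):
            for iy in range(cy - r, cy + r + 1):
                tiles.append(_tile_cache.setdefault((ix, iy), fetch_tile((ix, iy))))
        return tiles
    ```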
  • Patent number: 10916056
    Abstract: A method of displaying virtual information in a view of a real environment comprising the following steps: determining a current pose of a system relative to at least one part of the real environment and providing accuracy information of the current pose, providing multiple pieces of virtual information, assigning a respective one of the pieces of virtual information to one of different parameters indicative of different pose accuracy information, and displaying at least one of the pieces of virtual information in the view of the real environment according to the accuracy information of the current pose in relation to the assigned parameter.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: February 9, 2021
    Assignee: Apple Inc.
    Inventor: Peter Meier
  • Patent number: 10878617
    Abstract: A method for representing virtual information in a view of a real environment is provided that includes: providing a system setup including at least one display device, wherein the system setup is adapted for blending in virtual information on the display device in at least part of the view, determining a position and orientation of a viewing point relative to at least one component of the real environment, providing a geometry model of the real environment, providing at least one item of virtual information and a position of the at least one item of virtual information, determining whether the position of the item of virtual information is inside a 2D or 3D geometrical shape associated with a built-in real object of the real environment, determining a criterion which is indicative of whether the built-in real object is at least partially visible or non-visible in the view of the real environment, and blending in the at least one item of virtual information on the display device in at least part of the view of the real environment.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: December 29, 2020
    Assignee: Apple Inc.
    Inventors: Lejing Wang, Peter Meier, Stefan Misslinger
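    The decision can be pictured as: test whether the item's position lies inside the geometrical shape associated with a built-in real object, then blend the item in differently depending on whether that object is visible in the view. A toy sketch with a 2D footprint polygon; the polygon representation and the rendering modes are assumptions.

    ```python
    # Hypothetical sketch: point-in-footprint test plus a visibility-dependent blending mode.
    def point_in_polygon(p, polygon) -> bool:
        """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
        x, y = p
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):                     # edge crosses the horizontal ray
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def blend_mode(item_pos_2d, object_footprint, object_visible: bool) -> str:
        """How to draw an item relative to a built-in real object (modes are illustrative)."""
        if not point_in_polygon(item_pos_2d, object_footprint):
            return "normal"                              # item lies outside the object's shape
        return "normal" if object_visible else "occluded"  # e.g. semi-transparent overlay
    ```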
  • Publication number: 20200387712
    Abstract: In one implementation, a method includes: identifying a first plot-effectuator within a scene associated with a portion of video content; synthesizing a scene description for the scene that corresponds to a trajectory of the first plot-effectuator within a setting associated with the scene and actions performed by the first plot-effectuator; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a first digital asset associated with the first plot-effectuator according to the scene description for the scene.
    Type: Application
    Filed: January 18, 2019
    Publication date: December 10, 2020
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Patent number: 10845601
    Abstract: In one implementation, a method involves obtaining light intensity data from a stream of pixel events output by an event camera of a head-mounted device (“HMD”). Each pixel event is generated in response to a pixel sensor of the event camera detecting a change in light intensity that exceeds a comparator threshold. A set of optical sources disposed on a secondary device that are visible to the event camera are identified by recognizing defined illumination parameters associated with the optical sources using the light intensity data. Location data is generated for the optical sources in an HMD reference frame using the light intensity data. A correspondence between the secondary device and the HMD is determined by mapping the location data in the HMD reference frame to respective known locations of the optical sources relative to the secondary device reference frame.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: November 24, 2020
    Assignee: Apple Inc.
    Inventor: Peter Meier
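    A simplified sketch of the identification step: cluster the pixel events, match a cluster's observed blink period against each optical source's defined illumination parameters, and take the cluster centroid as that source's 2D location in the HMD frame. The event format, the period tolerance, and the LED table below are invented for illustration, and the subsequent pose solve from the known 3D layout on the secondary device is omitted.

    ```python
    # Hypothetical sketch: identify an optical source from event timing, locate it from event pixels.
    from typing import Optional
    import numpy as np

    KNOWN_LEDS = {"led_a": 0.010, "led_b": 0.015, "led_c": 0.020}  # assumed blink periods (s)

    def identify_led(event_timestamps: np.ndarray, tol: float = 1e-3) -> Optional[str]:
        """Match the median interval between intensity-change events to a known blink period."""
        if len(event_timestamps) < 3:
            return None
        period = float(np.median(np.diff(np.sort(event_timestamps))))
        best = min(KNOWN_LEDS, key=lambda name: abs(KNOWN_LEDS[name] - period))
        return best if abs(KNOWN_LEDS[best] - period) <= tol else None

    def led_location(event_pixels: np.ndarray) -> np.ndarray:
        """2D location of the source in the HMD reference frame: centroid of its event pixels."""
        return np.asarray(event_pixels, dtype=float).mean(axis=0)
    ```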
  • Publication number: 20200364939
    Abstract: The invention relates to a method of representing a virtual object in a view of a real environment which comprises the steps of providing image information of a first image of at least part of a human face captured by a first camera, providing at least one human face specific characteristic, determining at least part of an image area of the face in the first image as a face region of the first image, determining at least one first light falling on the face according to the face region of the first image and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to the at least one first light. The invention also relates to a system for representing a virtual object in a view of a real environment.
    Type: Application
    Filed: August 3, 2020
    Publication date: November 19, 2020
    Inventors: Sebastian Knorr, Peter Meier
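    A greatly simplified illustration of treating the face region as a light probe: estimate an ambient level and a left/right light bias from the face pixels and use them to shade the blended-in virtual object. A real system would fit a richer illumination model using the face-specific characteristics the abstract mentions; the bounding-box format and the two-parameter output here are assumptions.

    ```python
    # Hypothetical sketch: crude light estimate from the detected face region of a grayscale image.
    import numpy as np

    def estimate_light_from_face(gray_image: np.ndarray, face_box) -> dict:
        """face_box = (x, y, w, h) of the face region detected in the first image."""
        x, y, w, h = face_box
        face = gray_image[y:y + h, x:x + w].astype(float) / 255.0
        left, right = face[:, : w // 2], face[:, w // 2:]
        return {
            "ambient": float(face.mean()),                         # overall light level
            "left_right_bias": float(left.mean() - right.mean()),  # > 0: light from viewer's left
        }
    # The returned parameters would then drive the shading of the virtual object so that it
    # appears lit consistently with the real environment in the view.
    ```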
  • Publication number: 20200355504
    Abstract: An apparatus, method, and computer readable medium related to determining the position of a device, for example determining the pose of a camera. Varying embodiments discuss the use of sensors and captured images to construct an environment property map, which provides reference information in the form of environment properties that are associated with positions, such as camera poses. Embodiments of the disclosure discuss using the environment property map online (e.g. in real time) in order to determine the position of a device. In some embodiments, the environment property map provides a coarse position that is used to refine or limit the necessary work for determining a more precise position.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventors: Peter Meier, Christian Lipski
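    A compact sketch of an environment property map: properties measured during mapping (for example magnetic field magnitude or Wi-Fi signal strength) are stored against the poses at which they were measured, and at run time the current measurement selects a few coarse pose candidates for a precise localizer to refine. The data layout and the nearest-neighbour matching are illustrative assumptions.

    ```python
    # Hypothetical sketch: coarse localization by matching environment properties to a map.
    import numpy as np

    class EnvironmentPropertyMap:
        def __init__(self):
            self.poses = []       # recorded positions/poses
            self.properties = []  # property vector measured at each pose

        def add(self, pose, prop):
            self.poses.append(np.asarray(pose, dtype=float))
            self.properties.append(np.asarray(prop, dtype=float))

        def coarse_candidates(self, current_prop, k: int = 5):
            """Return the k recorded poses whose properties best match the current measurement."""
            cur = np.asarray(current_prop, dtype=float)
            dists = [float(np.linalg.norm(p - cur)) for p in self.properties]
            return [self.poses[i] for i in np.argsort(dists)[:k]]
    ```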
  • Publication number: 20200342231
    Abstract: In one implementation, a method includes: obtaining image data from an image sensor; recognizing a portion of an object within the image data; obtaining synthesized reality (SR) content—such as mixed reality, augmented reality, augmented virtuality, or virtual reality content—associated with the portion of the object; and displaying the SR content in association with the portion of the object. In some implementations, the SR content is dependent on the orientation of an electronic device or the user relative to the object. In some implementations, the SR content is generated based on sensor data associated with the object.
    Type: Application
    Filed: January 18, 2019
    Publication date: October 29, 2020
    Inventors: Ian M. Richter, Mohamed Selim Ben Himane, Peter Meier
  • Patent number: 10819962
    Abstract: A method of projecting digital information on a real object in a real environment includes the steps of projecting digital information on a real object or part of a real object with a visible light projector, capturing at least one image of the real object with the projected digital information using a camera, providing a depth sensor registered with the camera, the depth sensor capturing depth data of the real object or part of the real object, and calculating a spatial transformation between the visible light projector and the real object based on the at least one image and the depth data. The invention is also concerned with a corresponding system.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: October 27, 2020
    Assignee: Apple Inc.
    Inventors: Peter Meier, Mohamed Selim Ben Himane, Daniel Kurz
  • Patent number: 10739142
    Abstract: An apparatus, method, and computer readable medium related to determining the position of a device, for example determining the pose of a camera. Varying embodiments discuss the use of sensors and captured images to construct an environment property map, which provides reference information in the form of environment properties that are associated with positions, such as camera poses. Embodiments of the disclosure discuss using the environment property map online (e.g. in real time) in order to determine the position of a device. In some embodiments, the environment property map provides a coarse position that is used to refine or limit the necessary work for determining a more precise position.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: August 11, 2020
    Assignee: Apple Inc.
    Inventors: Peter Meier, Christian Lipski
  • Patent number: 10733804
    Abstract: The invention relates to a method of representing a virtual object in a view of a real environment which comprises the steps of providing image information of a first image of at least part of a human face captured by a first camera, providing at least one human face specific characteristic, determining at least part of an image area of the face in the first image as a face region of the first image, determining at least one first light falling on the face according to the face region of the first image and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to the at least one first light. The invention also relates to a system for representing a virtual object in a view of a real environment.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: August 4, 2020
    Assignee: Apple Inc.
    Inventors: Sebastian Knorr, Peter Meier
  • Publication number: 20200242846
    Abstract: The invention relates to a method for ergonomically representing virtual information in a real environment, including the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup.
    Type: Application
    Filed: April 8, 2020
    Publication date: July 30, 2020
    Inventors: Peter Meier, Stefan Misslinger, Anton Fedosov
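    The region subdivision can be sketched as a near/far split by distance from the system setup, with a different blending style per region. The 50 m threshold and the per-region presentation below are placeholders; the method only requires that the regions be treated differently.

    ```python
    # Hypothetical sketch: assign points of interest to a near or far region and blend differently.
    from dataclasses import dataclass

    @dataclass
    class POI:
        name: str
        distance_m: float  # distance from the system setup

    def assign_regions(pois: list[POI], near_limit_m: float = 50.0):
        near = [p for p in pois if p.distance_m <= near_limit_m]
        far = [p for p in pois if p.distance_m > near_limit_m]
        return near, far

    def blend(pois: list[POI]) -> None:
        near, far = assign_regions(pois)
        for p in near:
            print(f"{p.name}: draw as a world-anchored label at full size")   # near region
        for p in far:
            print(f"{p.name}: draw condensed on a band near the horizon")     # far region
    ```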
  • Publication number: 20200226383
    Abstract: In an exemplary process for providing content in an augmented reality environment, image data corresponding to a physical environment are obtained. Based on the image data, one or more predefined entities of a plurality of predefined entities in the physical environment are identified using classifiers corresponding to the predefined entities. Based on one or more of the identified predefined entities, a geometric layout of the physical environment is determined. Based on the geometric layout, an area corresponding to a particular entity is determined. The particular entity corresponds to one or more identified predefined entities. Based on the area corresponding to the particular entity, the particular entity in the physical environment is identified using classifiers corresponding to the determined area. Based on the identified particular entity, a type of the physical environment is determined.
    Type: Application
    Filed: March 27, 2020
    Publication date: July 16, 2020
    Inventors: Peter Meier, Daniel Ulbricht
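    A toy-level sketch of the staged pipeline the abstract describes: detected entities suggest a geometric layout, the layout narrows attention to an area, an area-specific pass identifies a particular entity there, and the result decides the type of environment. All "classifiers" below are hard-coded stand-ins operating on pre-labelled detections, shown purely to make the control flow concrete.

    ```python
    # Hypothetical sketch: staged scene classification over toy detections.
    def classify_scene(detections: list[dict]) -> str:
        """detections: [{'label': str, 'box': (x, y, w, h)}, ...] from a generic first pass."""
        labels = {d["label"] for d in detections}
        # Geometric layout: here, just the boxes of counter-height surfaces.
        counter_band = [d["box"] for d in detections if d["label"] in {"counter", "table"}]
        # Area-specific pass: within that area, disambiguate a particular entity.
        particular = "stove" if counter_band and "appliance" in labels else None
        # Environment type from the identified entities.
        if particular == "stove" or {"sink", "refrigerator"} & labels:
            return "kitchen"
        if {"desk", "monitor"} & labels:
            return "office"
        return "unknown"

    print(classify_scene([{"label": "counter", "box": (0, 0, 10, 2)},
                          {"label": "appliance", "box": (3, 0, 2, 2)}]))  # -> "kitchen"
    ```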
  • Publication number: 20200193712
    Abstract: The invention relates to a method for ergonomically representing virtual information in a real environment, including the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup.
    Type: Application
    Filed: December 16, 2019
    Publication date: June 18, 2020
    Inventors: Peter Meier, Frank A. Angermann
  • Patent number: 10671662
    Abstract: A method for analyzing an image of a real object generated by at least one camera includes the following steps: generating at least a first image by the camera capturing at least one real object, defining a first search domain comprising multiple data sets of the real object, each of the data sets being indicative of a respective portion of the real object, and analyzing at least one characteristic property of the first image with respect to the first search domain, in order to determine whether the at least one characteristic property corresponds to information of at least a particular one of the data sets of the first search domain. If it is determined that the at least one characteristic property corresponds to information of at least a particular one of the data sets, a second search domain comprising only the particular one of the data sets is defined and the second search domain is used for analyzing the first image and/or at least a second image.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: June 2, 2020
    Assignee: Apple Inc.
    Inventors: Sebastian Lieberknecht, Peter Meier
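    The coarse-to-fine narrowing can be sketched as: match a characteristic property of the first image against every data set in the first search domain, and once a particular data set matches, restrict the search domain to that data set alone for subsequent images. The descriptor-distance "characteristic property" below is a stand-in for whatever the real system measures.

    ```python
    # Hypothetical sketch: narrow a search domain of per-portion reference descriptors.
    import numpy as np

    def best_match(descriptor: np.ndarray, domain: dict, thresh: float = 0.5):
        """Return the name of the best-matching data set in the domain, or None."""
        best, best_d = None, float("inf")
        for name, ref in domain.items():
            d = float(np.linalg.norm(descriptor - ref))
            if d < best_d:
                best, best_d = name, d
        return best if best_d <= thresh else None

    def analyse_images(descriptors, first_domain: dict):
        """Yield the matched portion per image, shrinking the domain after the first hit."""
        domain = dict(first_domain)                    # first search domain: all portions
        for desc in descriptors:
            hit = best_match(np.asarray(desc, dtype=float), domain)
            if hit is not None:
                domain = {hit: first_domain[hit]}      # second search domain: matched portion only
            yield hit
    ```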
  • Patent number: 10665025
    Abstract: The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional representation of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment in the two-dimensional image on the basis of non-manually generated 3D information for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the two-dimensional image of the real environment with the virtual object or, by means of an optical, semitransparent element directly with reality with consideration of the segmentation data. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: May 26, 2020
    Assignee: Apple Inc.
    Inventors: Peter Meier, Stefan Holzer
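    The merge step reduces to an occlusion-aware composite: wherever the segmentation (derived from non-manually generated 3D information such as depth) marks real foreground in front of the virtual object, the camera pixels win. A minimal sketch with numpy boolean masks; the array shapes and mask semantics are assumptions.

    ```python
    # Hypothetical sketch: composite a rendered virtual object behind a segmented real occluder.
    import numpy as np

    def merge(camera_rgb: np.ndarray, virtual_rgb: np.ndarray,
              virtual_mask: np.ndarray, occluder_mask: np.ndarray) -> np.ndarray:
        """camera_rgb/virtual_rgb: HxWx3; virtual_mask/occluder_mask: HxW boolean."""
        show_virtual = virtual_mask & ~occluder_mask   # virtual pixels not hidden by the occluder
        out = camera_rgb.copy()
        out[show_virtual] = virtual_rgb[show_virtual]
        return out
    ```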