Patents by Inventor Peter Meier

Peter Meier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230131109
    Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 27, 2023
    Inventors: Peter Meier, Daniel Kurz, Brian Chris Clark, Mohamed Selim Ben Himane
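
The compositing step this abstract (and the related grant 11462000 below) describes can be illustrated with a minimal sketch. The segmentation masks and the helper name replace_in_reflection are assumptions standing in for the surface-detection and segmentation pipeline the patent does not spell out:

```python
import numpy as np

def replace_in_reflection(frame: np.ndarray, reflection_mask: np.ndarray,
                          device_mask: np.ndarray, virtual_content: np.ndarray) -> np.ndarray:
    """Composite virtual content over the pixels where the user's device appears
    inside a detected mirror reflection (masks assumed to come from upstream
    surface-detection and segmentation steps, which are not shown here)."""
    out = frame.copy()
    target = reflection_mask & device_mask        # device pixels that lie within the reflection
    out[target] = virtual_content[target]         # e.g. an avatar or a different device model
    return out

# Tiny synthetic example: a 4x4 "frame" whose lower-right block is a mirror containing the device.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
reflection = np.zeros((4, 4), dtype=bool); reflection[2:, 2:] = True
device = np.zeros((4, 4), dtype=bool); device[3, 3] = True
virtual = np.full((4, 4, 3), 255, dtype=np.uint8)   # stand-in virtual content
print(replace_in_reflection(frame, reflection, device, virtual)[3, 3])
```
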
  • Patent number: 11568760
    Abstract: Detecting a chewing noise from a user during a chewing session, triggering operation of a camera, obtaining image data capturing a food product, identifying the food product based on image data, determining a measurement of the chewing session, determining a volume of the food product based on the measurement of the chewing session, and determining a calorie intake based on the food product, the volume of the food product, and the measurement of the chewing session.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: January 31, 2023
    Assignee: Apple Inc.
    Inventor: Peter Meier
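
A rough sketch of the pipeline the 11568760 abstract outlines, with the audio and vision models abstracted away; ChewingSession, ml_per_chew, and the CALORIES_PER_ML table are illustrative assumptions, not values from the patent:

```python
from dataclasses import dataclass

# Hypothetical calorie densities (kcal per ml) for a few foods; illustrative only.
CALORIES_PER_ML = {"apple": 0.52, "bread": 2.65, "cheese": 3.9}

@dataclass
class ChewingSession:
    chew_count: int          # number of chews detected from the audio stream
    duration_s: float        # length of the chewing session in seconds

def estimate_volume_ml(session: ChewingSession, ml_per_chew: float = 1.5) -> float:
    """Approximate the ingested volume from the chewing measurement."""
    return session.chew_count * ml_per_chew

def estimate_calories(food: str, session: ChewingSession) -> float:
    """Combine the identified food and the chewing-derived volume into a calorie estimate."""
    volume = estimate_volume_ml(session)
    return volume * CALORIES_PER_ML.get(food, 1.0)

# Example: audio analysis detected 40 chews over 30 s and the camera frame was classified as "apple".
session = ChewingSession(chew_count=40, duration_s=30.0)
print(f"Estimated intake: {estimate_calories('apple', session):.0f} kcal")
```
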
  • Patent number: 11562540
    Abstract: The invention relates to a method for ergonomically representing virtual information in a real environment, including the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: January 24, 2023
    Assignee: Apple Inc.
    Inventors: Peter Meier, Frank A. Angermann
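
One possible reading of the near/far-region blending described in 11562540, sketched with assumed helper names (assign_region, blend) and an arbitrary 20 m region boundary:

```python
from dataclasses import dataclass

@dataclass
class VirtualItem:
    label: str
    distance_m: float   # distance from the system setup to the item's real-world anchor

def assign_region(item: VirtualItem, near_limit_m: float = 20.0) -> str:
    """Split the view into a near (first) and far (second) region by distance to the viewer."""
    return "near" if item.distance_m <= near_limit_m else "far"

def blend(item: VirtualItem) -> str:
    """Render near items as full annotations and far items as compact markers
    (one possible reading of 'blending in differently per region')."""
    if assign_region(item) == "near":
        return f"[{item.label}] detailed overlay at {item.distance_m:.0f} m"
    return f"({item.label}) small icon on the horizon"

for item in [VirtualItem("cafe", 12.0), VirtualItem("museum", 350.0)]:
    print(blend(item))
```
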
  • Publication number: 20220375123
    Abstract: A method for representing virtual information in a view of a real environment comprises providing a virtual object having a global position and orientation with respect to a geographic global coordinate system, with first pose data on the global position and orientation of the virtual object, in a database of a server, taking an image of a real environment by a mobile device and providing second pose data as to at which position and with which orientation with respect to the geographic global coordinate system the image was taken. The method further includes displaying the image on a display of the mobile device, accessing the virtual object in the database and positioning the virtual object in the image on the basis of the first and second pose data, manipulating the virtual object or adding a further virtual object, and providing the manipulated virtual object with modified first pose data or the further virtual object with third pose data in the database.
    Type: Application
    Filed: August 8, 2022
    Publication date: November 24, 2022
    Inventors: Peter Meier, Michael Kuhn, Frank Angermann
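
A simplified sketch of the pose-based placement step described in 20220375123 (and the related grant 11410391 below), using a flat-earth bearing approximation; GlobalPose, the field of view, and the image width are assumptions for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class GlobalPose:
    lat: float
    lon: float
    heading_deg: float   # orientation about the vertical axis

def bearing_deg(a: GlobalPose, b: GlobalPose) -> float:
    """Approximate bearing from pose a to pose b (flat-earth approximation for short distances)."""
    d_east = (b.lon - a.lon) * math.cos(math.radians(a.lat))
    d_north = b.lat - a.lat
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

def project_to_image(camera: GlobalPose, obj: GlobalPose,
                     image_width_px: int = 1920, h_fov_deg: float = 60.0):
    """Place the virtual object horizontally in the captured image using the two poses."""
    rel = (bearing_deg(camera, obj) - camera.heading_deg + 540.0) % 360.0 - 180.0
    if abs(rel) > h_fov_deg / 2:
        return None   # object is outside the camera's field of view
    return int((rel / h_fov_deg + 0.5) * image_width_px)

camera = GlobalPose(lat=48.137, lon=11.575, heading_deg=90.0)   # second pose data (device)
virtual = GlobalPose(lat=48.137, lon=11.580, heading_deg=0.0)   # first pose data (server object)
print("pixel column:", project_to_image(camera, virtual))
```
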
  • Patent number: 11462000
    Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: October 4, 2022
    Assignee: Apple Inc.
    Inventors: Peter Meier, Daniel Kurz, Brian Chris Clark, Mohamed Selim Ben Himane
  • Publication number: 20220279147
    Abstract: A method of tracking a mobile device comprising at least one camera in a real environment comprises the steps of receiving image information associated with at least one image captured by the at least one camera, generating a first geometrical model of at least part of the real environment based on environmental data or mobile system state data acquired in an acquisition process by at least one sensor of a mobile system, which is different from the mobile device, and performing a tracking process based on the image information associated with the at least one image and at least partially according to the first geometrical model, wherein the tracking process determines at least one parameter of a pose of the mobile device relative to the real environment.
    Type: Application
    Filed: May 16, 2022
    Publication date: September 1, 2022
    Inventors: Peter Meier, Sebastian Lieberknecht
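
The tracking step in 20220279147 (and the related grant 11336867 below) amounts to estimating the device pose against a geometrical model acquired by a different mobile system. As a stand-in for the full feature-based tracker, here is a minimal Kabsch alignment on already-matched point correspondences; the estimate_pose helper and the synthetic points are assumptions:

```python
import numpy as np

def estimate_pose(model_points: np.ndarray, observed_points: np.ndarray):
    """Kabsch alignment: recover rotation R and translation t mapping points measured
    by the mobile device onto the geometrical model acquired by the other system.
    Correspondences are assumed given; a real tracker would establish them from image features."""
    mc, oc = model_points.mean(axis=0), observed_points.mean(axis=0)
    H = (observed_points - oc).T @ (model_points - mc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mc - R @ oc
    return R, t   # pose of the device's measurements in the model's coordinate frame

# Synthetic check: model points are the observed points rotated 90 degrees about z and shifted.
obs = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
model = obs @ Rz.T + np.array([2.0, 0, 0])
R, t = estimate_pose(model, obs)
print(np.allclose(R, Rz), np.round(t, 3))
```
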
  • Publication number: 20220270335
    Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
    Type: Application
    Filed: May 9, 2022
    Publication date: August 25, 2022
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
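
A toy sketch of the timescale-dependent model selection described in 20220270335 (and the related grant 11373377 below); the one-hour threshold, the 60 Hz frame rate, and the fine_step/coarse_step models are illustrative assumptions:

```python
def fine_step(state: dict, dt_s: float) -> dict:
    """Detailed per-frame model used for short timescales."""
    return {**state, "age_s": state["age_s"] + dt_s}

def coarse_step(state: dict, dt_s: float) -> dict:
    """Cheaper aggregate model that jumps the asset forward in one shot for long timescales."""
    return {**state, "age_s": state["age_s"] + dt_s, "detail": "approximated"}

def advance(state: dict, dt_s: float, threshold_s: float = 3600.0, frame_dt: float = 1 / 60) -> dict:
    """Pick the environment model from the requested timescale, as the abstract suggests."""
    if dt_s <= threshold_s:
        for _ in range(round(dt_s / frame_dt)):   # simulate frame by frame
            state = fine_step(state, frame_dt)
        return state
    return coarse_step(state, dt_s)

print(advance({"age_s": 0.0}, 2.0))       # two seconds -> detailed model, many small steps
print(advance({"age_s": 0.0}, 86400.0))   # one day -> coarse model, single step
```
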
  • Patent number: 11410391
    Abstract: A method for representing virtual information in a view of a real environment comprises providing a virtual object having a global position and orientation with respect to a geographic global coordinate system, with first pose data on the global position and orientation of the virtual object, in a database of a server, taking an image of a real environment by a mobile device and providing second pose data as to at which position and with which orientation with respect to the geographic global coordinate system the image was taken. The method further includes displaying the image on a display of the mobile device, accessing the virtual object in the database and positioning the virtual object in the image on the basis of the first and second pose data, manipulating the virtual object or adding a further virtual object, and providing the manipulated virtual object with modified first pose data or the further virtual object with third pose data in the database.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: August 9, 2022
    Assignee: Apple Inc.
    Inventors: Peter Meier, Michael Kuhn, Frank Angermann
  • Patent number: 11403511
    Abstract: In some implementations at an electronic device, training a dual EDNN includes defining a data structure of attributes corresponding to defined parts of a task, processing a first instance of an input using a first EDNN to produce a first output while encoding a first set of the attributes in a first latent space, and processing a second instance of the input using a second EDNN to produce a second output while encoding attribute differences from attribute averages in a second latent space. The device then determines a second set of the attributes based on the attribute differences and the attribute averages. The device then adjusts parameters of the first and second EDNNs based on comparing the first instance of the input to the first output, the second instance of the input to the second output, and the first set of attributes to the second set of attributes.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: August 2, 2022
    Assignee: Apple Inc.
    Inventors: Peter Meier, Tanmay Batra
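
A structural sketch of the dual-EDNN training step described in 11403511, with linear toy encoder-decoders and the gradient update omitted; ednn, training_step, and the equal loss weighting are assumptions, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ednn(x: np.ndarray, w: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Toy encoder-decoder stand-in: the latent is a linear encoding of the input,
    the output is its linear reconstruction. Real EDNNs would be deep networks."""
    latent = x @ w
    return latent, latent @ w.T

def training_step(x: np.ndarray, w1: np.ndarray, w2: np.ndarray,
                  attribute_avg: np.ndarray) -> float:
    """One step mirroring the abstract: the first EDNN encodes the attributes themselves,
    the second encodes differences from the attribute averages, and the loss compares the
    reconstructions and the two attribute sets."""
    attrs1, out1 = ednn(x, w1)        # first instance -> first latent space (attributes)
    diffs, out2 = ednn(x, w2)         # second instance -> attribute differences
    attrs2 = attribute_avg + diffs    # recover the second attribute set
    loss = (np.mean((x - out1) ** 2) + np.mean((x - out2) ** 2)
            + np.mean((attrs1 - attrs2) ** 2))   # consistency between attribute sets
    # (parameter update omitted; a real implementation would backpropagate this loss)
    return loss

x = rng.normal(size=(8, 4))
w1 = rng.normal(size=(4, 3)); w2 = rng.normal(size=(4, 3))
print(f"loss: {training_step(x, w1, w2, np.zeros(3)):.3f}")
```
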
  • Patent number: 11391952
    Abstract: In one implementation, a method involves obtaining light intensity data from a stream of pixel events output by an event camera of a head-mounted device (“HMD”). Each pixel event is generated in response to a pixel sensor of the event camera detecting a change in light intensity that exceeds a comparator threshold. A set of optical sources disposed on a secondary device that are visible to the event camera are identified by recognizing defined illumination parameters associated with the optical sources using the light intensity data. Location data is generated for the optical sources in an HMD reference frame using the light intensity data. A correspondence between the secondary device and the HMD is determined by mapping the location data in the HMD reference frame to respective known locations of the optical sources relative to the secondary device reference frame.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: July 19, 2022
    Assignee: Apple Inc.
    Inventor: Peter Meier
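
A minimal sketch of the source-identification step in 11391952: grouping pixel events, estimating each pixel's blink frequency, and matching it against the LEDs' defined illumination parameters; the KNOWN_SOURCES table and the frequency-tolerance matching are assumptions:

```python
from collections import defaultdict

# Assumed blink frequencies (Hz) of the LEDs on the secondary device and their positions
# in the device's own reference frame. Values are illustrative only.
KNOWN_SOURCES = {100.0: (0.00, 0.02, 0.0), 125.0: (0.05, 0.02, 0.0), 166.7: (0.10, 0.02, 0.0)}

def identify_sources(pixel_events, tolerance_hz: float = 5.0) -> dict:
    """Group event-camera events by pixel, estimate each pixel's blink frequency from the
    event timestamps, and match it to a known optical source on the secondary device."""
    by_pixel = defaultdict(list)
    for x, y, t_s in pixel_events:
        by_pixel[(x, y)].append(t_s)
    matches = {}
    for pixel, times in by_pixel.items():
        if len(times) < 3:
            continue
        period = (max(times) - min(times)) / (len(times) - 1)
        freq = 1.0 / period
        for known_freq, device_xyz in KNOWN_SOURCES.items():
            if abs(freq - known_freq) <= tolerance_hz:
                matches[pixel] = device_xyz   # 2D image location <-> 3D location on the device
    return matches

# Synthetic events: one pixel blinking at roughly 125 Hz.
events = [(320, 240, i * 0.008) for i in range(10)]
print(identify_sources(events))
```
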
  • Publication number: 20220222869
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Application
    Filed: March 29, 2022
    Publication date: July 14, 2022
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
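
The scene-dependent branching in 20220222869 can be sketched as a simple dispatch on playback time; the SR_CONTENT and SCENE_TASKS tables and the scene boundary are hypothetical:

```python
# Hypothetical lookup tables keyed by scene id; a shipping system would fetch these
# from a content service rather than hard-code them.
SR_CONTENT = {"scene_1": "castle interior model", "scene_2": "space station model"}
SCENE_TASKS = {"scene_1": "find the hidden key", "scene_2": "repair the airlock"}

def present_sr_for(video_time_s: float) -> tuple[str, str]:
    """On a 'view SR content' input, pick the scene containing the current playback time,
    then return its SR content and an indication of its associated task."""
    scene = "scene_1" if video_time_s < 600.0 else "scene_2"   # toy scene boundary at 10 min
    return SR_CONTENT[scene], f"Task: {SCENE_TASKS[scene]}"

print(present_sr_for(45.0))     # user input detected during the first scene
print(present_sr_for(1200.0))   # user input detected during the second scene
```
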
  • Patent number: 11386653
    Abstract: In one implementation, a method includes: identifying a first plot-effectuator within a scene associated with a portion of video content; synthesizing a scene description for the scene that corresponds to a trajectory of the first plot-effectuator within a setting associated with the scene and actions performed by the first plot-effectuator; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a first digital asset associated with the first plot-effectuator according to the scene description for the scene.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: July 12, 2022
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Patent number: 11373377
    Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: June 28, 2022
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
  • Publication number: 20220177238
    Abstract: A device (1) for feeding a processing device with powdery material (2) comprises a first chamber (3) having at least one fluidizing device (5) configured to fluidize and/or to potentially fluidize the powdery material (2), at least one second chamber (6) being in connection with the first chamber (3) such that fluidized and/or potentially fluidized powdery material (2) is transportable from the first chamber (3) into the second chamber (6), and at least one third chamber (9) being in connection with the second chamber (6) such, that the potentially fluidized powdery material (2) is transportable from the second chamber (6) into the third chamber (9). The device (1) is configured to defluidize the powdery material (2) such that it is present as defluidized powdery material (2) in the third chamber (9). The third chamber (9) has a discharge element (10) configured to discharge the defluidized powdery material (2).
    Type: Application
    Filed: March 26, 2020
    Publication date: June 9, 2022
    Applicant: REEL Alesa AG
    Inventors: Damian Stauffer, Peter Meier
  • Patent number: 11336867
    Abstract: A method of tracking a mobile device comprising at least one camera in a real environment comprises the steps of receiving image information associated with at least one image captured by the at least one camera, generating a first geometrical model of at least part of the real environment based on environmental data or mobile system state data acquired in an acquisition process by at least one sensor of a mobile system, which is different from the mobile device, and performing a tracking process based on the image information associated with the at least one image and at least partially according to the first geometrical model, wherein the tracking process determines at least one parameter of a pose of the mobile device relative to the real environment. The invention is also related to a method of generating a geometrical model of at least part of a real environment using image information from at least one camera of a mobile device.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: May 17, 2022
    Assignee: Apple Inc.
    Inventors: Peter Meier, Sebastian Lieberknecht
  • Patent number: 11328456
    Abstract: In one implementation, a method includes: while causing presentation of video content having a current plot setting, receiving a user input indicating a request to explore the current plot setting; obtaining synthesized reality (SR) content associated with the current plot setting in response to receiving the user input; causing presentation of the SR content associated with the current plot setting; receiving one or more user interactions with the SR content; and adjusting the presentation of the SR content in response to receiving the one or more user interactions with the SR content.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: May 10, 2022
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Patent number: 11315308
    Abstract: A method for representing virtual information in a view of a real environment is provided that includes: providing a system setup including at least one display device, wherein the system setup is adapted for blending in virtual information on the display device in at least part of the view, determining a position and orientation of a viewing point relative to at least one component of the real environment, providing a geometry model of the real environment, providing at least one item of virtual information and a position of the at least one item of virtual information, determining whether the position of the item of virtual information is inside a 2D or 3D geometrical shape, determining a criterion which is indicative of whether the built-in real object is at least partially visible or non-visible in the view of the real environment, and blending in the at least one item of virtual information on the display device in at least part of the view of the real environment.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: April 26, 2022
    Assignee: Apple Inc.
    Inventors: Lejing Wang, Peter Meier, Stefan Misslinger
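
A compact sketch of the visibility test described in 11315308: checking whether a virtual item falls inside a built-in real object from the geometry model and choosing the blending style accordingly; Box, blend_mode, and the rendering styles are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned 3D shape standing in for a built-in real object (e.g. a building)
    taken from the geometry model of the real environment."""
    min_xyz: tuple
    max_xyz: tuple

    def contains(self, p: tuple) -> bool:
        return all(lo <= v <= hi for lo, v, hi in zip(self.min_xyz, p, self.max_xyz))

def blend_mode(item_pos: tuple, building: Box, building_visible: bool) -> str:
    """Choose how to blend an item of virtual information: items inside a non-visible
    built-in object are shown in an occluded style rather than drawn on top of everything."""
    if building.contains(item_pos) and not building_visible:
        return "occluded style (dashed / semi-transparent)"
    return "normal overlay"

building = Box(min_xyz=(0, 0, 0), max_xyz=(10, 10, 30))
print(blend_mode((5, 5, 3), building, building_visible=False))
print(blend_mode((50, 5, 3), building, building_visible=True))
```
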
  • Publication number: 20220046222
    Abstract: Various implementations disclosed are for detecting moving objects that are in a field of view of a head-mountable device (HMD). In various implementations, the HMD includes a display, an event camera, a non-transitory memory, and a processor coupled with the display, the event camera and the non-transitory memory. In some implementations, the method includes synthesizing a first optical flow characterizing one or more objects in a field of view of the event camera based on depth data associated with the one or more objects. In some implementations, the method includes determining a second optical flow characterizing the one or more objects in the field of view of the event camera based on event image data provided by the event camera. In some implementations, the method includes determining that a first object of the one or more objects is moving based on the first optical flow and the second optical flow.
    Type: Application
    Filed: October 25, 2021
    Publication date: February 10, 2022
    Inventor: Peter Meier
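
A minimal sketch of the two-flow comparison in 20220046222: a depth-synthesized flow for a static scene versus the measured event-based flow, with large residuals flagged as independently moving objects; the simplified translational flow model and the 1 px threshold are assumptions:

```python
import numpy as np

def synthesize_flow(depth_m: np.ndarray, camera_velocity_px: tuple) -> np.ndarray:
    """First optical flow: the image motion that static geometry at the measured depth
    would show under the camera's own translation (a deliberately simplified motion model)."""
    vx, vy = camera_velocity_px
    flow = np.empty(depth_m.shape + (2,))
    flow[..., 0] = vx / depth_m   # nearer pixels move farther in the image
    flow[..., 1] = vy / depth_m
    return flow

def moving_object_mask(depth_m, measured_flow, camera_velocity_px, threshold_px: float = 1.0):
    """Flag pixels whose measured (event-based) flow disagrees with the depth-synthesized
    flow; a large residual suggests an object that moves on its own."""
    residual = np.linalg.norm(
        measured_flow - synthesize_flow(depth_m, camera_velocity_px), axis=-1)
    return residual > threshold_px

depth = np.full((2, 2), 2.0)                 # 2 m everywhere
measured = np.tile([2.0, 0.0], (2, 2, 1))    # matches the synthesized flow...
measured[0, 0] = (6.0, 0.0)                  # ...except one pixel that moves too fast
print(moving_object_mask(depth, measured, camera_velocity_px=(4.0, 0.0)))
```
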
  • Patent number: D942577
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: February 1, 2022
    Assignee: Inventionland, LLC
    Inventors: George M. Davison, Peter A. Meier, Nathan R. Field, Jeffrey Clayton Carlino, Jarrod L. Campbell
  • Patent number: D952759
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: May 24, 2022
    Assignee: Inventionland, LLC
    Inventors: George M. Davison, Peter A. Meier, Nathan R. Field, Jeffrey Clayton Carlino, Jarrod L. Campbell