Patents by Inventor Peter Meier

Peter Meier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200160605
    Abstract: A method for representing virtual information in a view of a real environment comprises providing a virtual object having a global position and orientation with respect to a geographic global coordinate system, with first pose data on the global position and orientation of the virtual object, in a database of a server, taking an image of a real environment by a mobile device and providing second pose data as to at which position and with which orientation with respect to the geographic global coordinate system the image was taken. The method further includes displaying the image on a display of the mobile device, accessing the virtual object in the database and positioning the virtual object in the image on the basis of the first and second pose data, manipulating the virtual object or adding a further virtual object, and providing the manipulated virtual object with modified first pose data or the further virtual object with third pose data in the database.
    Type: Application
    Filed: October 21, 2019
    Publication date: May 21, 2020
    Inventors: Peter Meier, Michael Kuhn, Frank Angermann
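
A minimal sketch of the placement step described in this publication, under simplifying assumptions: a local east/north/up approximation of the geographic coordinate system, invented camera intrinsics and poses, and no handling of the virtual object's own orientation.

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0  # spherical-Earth approximation

def geo_to_local_enu(lat_deg, lon_deg, alt_m, ref_lat_deg, ref_lon_deg, ref_alt_m):
    """Approximate east/north/up offset (metres) of a geo point from a reference point."""
    d_lat = np.radians(lat_deg - ref_lat_deg)
    d_lon = np.radians(lon_deg - ref_lon_deg)
    east = d_lon * np.cos(np.radians(ref_lat_deg)) * EARTH_RADIUS_M
    north = d_lat * EARTH_RADIUS_M
    return np.array([east, north, alt_m - ref_alt_m])

def project_to_image(p_enu, R_enu_to_cam, cam_pos_enu, K):
    """Pinhole projection of a world (ENU) point into pixels, or None if behind the camera."""
    p_cam = R_enu_to_cam @ (p_enu - cam_pos_enu)
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Hypothetical values: first pose data (virtual object) from the server database,
# second pose data (device) attached to the captured image.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R_enu_to_cam = np.array([[1.0, 0.0, 0.0],    # camera x = east
                         [0.0, 0.0, -1.0],   # camera y = down
                         [0.0, 1.0, 0.0]])   # camera z = north (device looks north)
obj_enu = geo_to_local_enu(48.1392, 11.5752, 521.0, 48.1374, 11.5750, 519.0)
print("virtual object drawn at pixel:", project_to_image(obj_enu, R_enu_to_cam, np.zeros(3), K))
```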
  • Publication number: 20200162703
    Abstract: A method of tracking a mobile device comprising at least one camera in a real environment comprises the steps of receiving image information associated with at least one image captured by the at least one camera, generating a first geometrical model of at least part of the real environment based on environmental data or mobile system state data acquired in an acquisition process by at least one sensor of a mobile system, which is different from the mobile device, and performing a tracking process based on the image information associated with the at least one image and at least partially according to the first geometrical model, wherein the tracking process determines at least one parameter of a pose of the mobile device relative to the real environment.
    Type: Application
    Filed: January 22, 2020
    Publication date: May 21, 2020
    Inventors: Peter Meier, Sebastian Lieberknecht
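
The tracking step reads like a model-based pose estimation problem. The sketch below shows one conventional way to realise it (not necessarily the patented one): match 2D image features against 3D points of the externally acquired geometrical model and solve a PnP problem with OpenCV. The model points, intrinsics, and simulated detections are invented for the example.

```python
import numpy as np
import cv2  # OpenCV

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

# 3D points of the first geometrical model (hypothetical, in metres), as if
# reconstructed from the other mobile system's sensor data.
model_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2],
                      [0.5, 0.5, 1.0], [0.2, 0.8, 0.5]], dtype=np.float64)

# Simulate 2D detections in the current camera image for a known pose, then
# recover that pose -- in practice the 2D points come from feature matching.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.3], [-0.1], [4.0]])
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None)
print("recovered rotation (Rodrigues):", rvec.ravel())
print("recovered translation:", tvec.ravel())
```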
  • Patent number: 10659750
    Abstract: The disclosure relates to a method and system for presenting at least part of an image of a real object in a view of a real environment, comprising providing a first image of at least part of a real object captured by a first camera, determining at least part of the real object in the first image as an object image area, determining a first 3D plane relative to the first camera, the first camera being at a position where the first image is captured, providing at least one image feature related to the real object in the first image, providing at least one first ray passing an optical center of the first camera being at a position where the first image is captured and the at least one image feature, determining, according to a first plane normal direction of the first 3D plane, at least one first angle between the first 3D plane and the at least one first ray, providing a second image of a real environment captured by a second camera, determining a second 3D plane relative to the second camera, the second camera …
    Type: Grant
    Filed: July 23, 2014
    Date of Patent: May 19, 2020
    Assignee: Apple Inc.
    Inventors: Peter Meier, Lejing Wang, Stefan Misslinger
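
One geometric building block named in the claim language, the angle between a viewing ray and a 3D plane derived from the plane normal, can be sketched as follows; the intrinsics, feature location, and plane normal are made-up values.

```python
import numpy as np

def ray_through_pixel(pixel_uv, K):
    """Back-project a pixel into a unit viewing ray in camera coordinates."""
    uv1 = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    ray = np.linalg.inv(K) @ uv1
    return ray / np.linalg.norm(ray)

def angle_ray_to_plane(ray, plane_normal):
    """Angle (radians) between a ray and a plane, given the plane normal direction."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # The angle to the plane is the complement of the angle to its normal.
    return np.arcsin(np.clip(abs(np.dot(ray, n)), 0.0, 1.0))

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])  # hypothetical intrinsics
feature_px = (400.0, 260.0)                    # detected image feature
plane_normal_cam = np.array([0.0, -1.0, 0.2])  # first 3D plane, expressed in the camera frame

ray = ray_through_pixel(feature_px, K)
print("angle between ray and plane (deg):", np.degrees(angle_ray_to_plane(ray, plane_normal_cam)))
```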
  • Patent number: 10607350
    Abstract: The invention provides methods of detecting and describing features from an intensity image. In one of several aspects, the method comprises the steps of providing an intensity image captured by a capturing device, providing a method for determining a depth of at least one element in the intensity image, in a feature detection process detecting at least one feature in the intensity image, wherein the feature detection is performed by processing image intensity information of the intensity image at a scale which depends on the depth of at least one element in the intensity image, and providing a feature descriptor of the at least one detected feature. For example, the feature descriptor contains at least one first parameter based on information provided by the intensity image and at least one second parameter which is indicative of the scale.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: March 31, 2020
    Assignee: Apple Inc.
    Inventors: Daniel Kurz, Peter Meier
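
A rough illustration of the core idea as I read it: the detection scale is tied to the depth of the underlying scene element so that features correspond to a fixed physical size, and the descriptor carries both intensity-based parameters and a scale-indicative parameter. The patch size, physical radius, and descriptor layout below are assumptions.

```python
import numpy as np

def detection_scale_px(depth_m, focal_px, physical_radius_m=0.05):
    """Pixels covered by a patch of fixed physical radius at the given depth."""
    return focal_px * physical_radius_m / depth_m

def describe_feature(intensity_patch, depth_m, focal_px):
    """Toy descriptor: intensity statistics plus a scale-indicative parameter."""
    scale = detection_scale_px(depth_m, focal_px)
    return {
        "intensity_params": (float(intensity_patch.mean()), float(intensity_patch.std())),
        "scale_param": float(scale),  # second parameter, indicative of the scale
    }

focal_px = 800.0
patch = np.random.default_rng(0).integers(0, 255, size=(9, 9)).astype(float)
print(describe_feature(patch, depth_m=0.5, focal_px=focal_px))  # near element -> large scale
print(describe_feature(patch, depth_m=4.0, focal_px=focal_px))  # far element  -> small scale
```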
  • Publication number: 20200074280
    Abstract: In some implementations a neural network is trained to perform a main task using a clustering constraint, for example, using both a main task training loss and a clustering training loss. Training inputs are inputted into a main task neural network to produce output labels predicting locations of the parts of the objects in the training inputs. Data from pooled layers of the main task neural network is inputted into a clustering neural network. The main task neural network and the clustering neural network are trained based on a main task loss from the main task neural network and a clustering loss from the clustering neural network. The main task loss is determined by comparing differences between the output labels and the training labels. The clustering loss encourages the clustering network to learn to label the parts of the objects individually, e.g., to learn groups corresponding to the object parts.
    Type: Application
    Filed: July 17, 2019
    Publication date: March 5, 2020
    Inventors: Peter Meier, Tanmay Batra
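
A compact PyTorch sketch of the described training setup (an interpretation, not the filed implementation; network sizes, label shapes, and the clustering targets are invented): the main-task network predicts part locations, a clustering head consumes a pooled layer, and both losses are combined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MainTaskNet(nn.Module):
    def __init__(self, num_parts=4):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(4)               # "pooled layer" fed to the clustering head
        self.head = nn.Linear(16 * 4 * 4, num_parts * 2)  # (x, y) per part

    def forward(self, x):
        feats = F.relu(self.conv(x))
        pooled = self.pool(feats)
        return self.head(pooled.flatten(1)), pooled

class ClusteringHead(nn.Module):
    def __init__(self, num_clusters=4):
        super().__init__()
        self.fc = nn.Linear(16 * 4 * 4, num_clusters)

    def forward(self, pooled):
        return self.fc(pooled.flatten(1))                 # soft cluster assignments (logits)

main_net, cluster_net = MainTaskNet(), ClusteringHead()
opt = torch.optim.Adam(list(main_net.parameters()) + list(cluster_net.parameters()), lr=1e-3)

images = torch.randn(8, 3, 32, 32)           # hypothetical training inputs
part_labels = torch.rand(8, 8)               # training labels: part locations
cluster_targets = torch.randint(0, 4, (8,))  # stand-in clustering constraint

coords, pooled = main_net(images)
main_loss = F.mse_loss(coords, part_labels)  # compare output labels with training labels
cluster_loss = F.cross_entropy(cluster_net(pooled), cluster_targets)
(main_loss + cluster_loss).backward()        # joint training signal
opt.step()
```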
  • Patent number: 10580162
    Abstract: A method for determining the pose of a camera relative to a real environment includes the following steps: taking at least one image of a real environment by means of a camera, the image containing at least part of a real object, performing a tracking method that evaluates information with respect to correspondences between features associated with the real object and corresponding features of the real object as it is contained in the image of the real environment, so as to obtain conclusions about the pose of the camera, determining at least one parameter of an environmental situation, and performing the tracking method in accordance with the at least one parameter. Analogously, the method can also be utilized in a method for recognizing an object of a real environment in an image taken by a camera.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: March 3, 2020
    Assignee: Apple Inc.
    Inventor: Peter Meier
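
Illustrative only: the sketch below shows the shape of such an adaptive tracker, measuring one environmental-situation parameter (here mean image brightness, a stand-in choice) and selecting tracking settings from it.

```python
import numpy as np

def environmental_parameter(image_gray):
    """A single environment parameter: normalised mean brightness in [0, 1]."""
    return float(image_gray.mean()) / 255.0

def tracking_config(env_param):
    """Pick tracking settings as a function of the environmental situation."""
    if env_param < 0.25:   # dark scene: permissive detector, low-light reference data
        return {"detector_threshold": 5, "reference_set": "low_light"}
    if env_param > 0.75:   # very bright scene: stricter detector, daylight references
        return {"detector_threshold": 25, "reference_set": "daylight"}
    return {"detector_threshold": 15, "reference_set": "default"}

frame = np.random.default_rng(1).integers(0, 60, size=(240, 320))  # a dim test frame
p = environmental_parameter(frame)
print("environment parameter:", round(p, 3), "->", tracking_config(p))
```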
  • Patent number: 10582166
    Abstract: A method of tracking a mobile device comprising at least one camera in a real environment comprises the steps of receiving image information associated with at least one image captured by the at least one camera, generating a first geometrical model of at least part of the real environment based on environmental data or mobile system state data acquired in an acquisition process by at least one sensor of a mobile system, which is different from the mobile device, and performing a tracking process based on the image information associated with the at least one image and at least partially according to the first geometrical model, wherein the tracking process determines at least one parameter of a pose of the mobile device relative to the real environment. The invention is also related to a method of generating a geometrical model of at least part of a real environment using image information from at least one camera of a mobile device.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: March 3, 2020
    Assignee: Apple Inc.
    Inventors: Peter Meier, Sebastian Lieberknecht
  • Publication number: 20200065653
    Abstract: In some implementations at an electronic device, training a dual EDNN includes defining a data structure of attributes corresponding to defined parts of a task, processing a first instance of an input using a first EDNN to produce a first output while encoding a first set of the attributes in a first latent space, and processing a second instance of the input using a second EDNN to produce a second output while encoding attribute differences from attribute averages in a second latent space. The device then determines a second set of the attributes based on the attribute differences and the attribute averages. The device then adjusts parameters of the first and second EDNNs based on comparing the first instance of the input to the first output, the second instance of the input to the second output, and the first set of attributes to the second set of attributes.
    Type: Application
    Filed: July 18, 2019
    Publication date: February 27, 2020
    Inventors: Peter Meier, Tanmay Batra
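
A loose PyTorch reading of the abstract (layer sizes, attribute dimensionality, and loss weighting are assumptions): two encoder-decoder networks (EDNNs) process the same input; one latent code is pushed toward the attributes, the other toward attribute differences from the attribute averages, and reconstruction plus attribute-consistency losses are summed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EDNN(nn.Module):
    def __init__(self, dim_in=32, dim_latent=6):
        super().__init__()
        self.enc = nn.Linear(dim_in, dim_latent)
        self.dec = nn.Linear(dim_latent, dim_in)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

net_a, net_b = EDNN(), EDNN()
opt = torch.optim.Adam(list(net_a.parameters()) + list(net_b.parameters()), lr=1e-3)

x = torch.randn(16, 32)             # two instances of the same input
attrs = torch.randn(16, 6)          # attributes for the defined parts of the task
attr_avg = attrs.mean(dim=0, keepdim=True)

out_a, z_a = net_a(x)
out_b, z_b = net_b(x)
attrs_from_b = z_b + attr_avg       # second attribute set: differences plus averages

loss = (F.mse_loss(out_a, x)        # first instance vs. first output
        + F.mse_loss(out_b, x)      # second instance vs. second output
        + F.mse_loss(z_a, attrs)    # first latent space encodes the attributes
        + F.mse_loss(attrs_from_b, z_a))  # first vs. second attribute set
loss.backward()
opt.step()
```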
  • Publication number: 20200027276
    Abstract: The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional image of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment unmarked in reality in the two-dimensional image for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the virtual object with the two-dimensional image of the real environment with consideration of the segmentation data such that at least one part of the segment of the real environment is removed from the image of the real environment. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.
    Type: Application
    Filed: July 29, 2019
    Publication date: January 23, 2020
    Inventors: Peter Meier, Stefan Holzer
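
A toy compositing sketch of the merging step (assumptions only; a production system would inpaint the removed region rather than fill it with a flat colour).

```python
import numpy as np

def merge(real_img, seg_mask, virtual_img, virtual_mask, bg_color=(128, 128, 128)):
    out = real_img.copy()
    out[seg_mask] = bg_color                        # remove the segmented real object
    out[virtual_mask] = virtual_img[virtual_mask]   # draw the virtual object on top
    return out

h, w = 120, 160
real = np.full((h, w, 3), 200, dtype=np.uint8)                    # hypothetical camera frame
seg = np.zeros((h, w), dtype=bool); seg[40:80, 60:100] = True     # segmented real object
virt = np.zeros((h, w, 3), dtype=np.uint8); virt[:, :, 2] = 255   # rendered virtual object
vmask = np.zeros((h, w), dtype=bool); vmask[50:90, 70:110] = True

composite = merge(real, seg, virt, vmask)
print("real pixels removed and left as background:", int(seg.sum() - (seg & vmask).sum()))
```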
  • Publication number: 20200013223
    Abstract: The invention relates to a method of representing a virtual object in a view of a real environment which comprises the steps of providing image information of a first image of at least part of a human face captured by a first camera, providing at least one human face specific characteristic, determining at least part of an image area of the face in the first image as a face region of the first image, determining at least one first light falling on the face according to the face region of the first image and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to the at least one first light. The invention also relates to a system for representing a virtual object in a view of a real environment.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Inventors: Sebastian Knorr, Peter Meier
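
A crude stand-in for the described light estimation (not the patented estimator): exploit the rough left/right symmetry of a face to guess the dominant light direction from brightness asymmetry in the face region, then shade a virtual surface with it.

```python
import numpy as np

def estimate_light_direction(face_gray):
    """Unit direction from the surface toward the light, guessed from face shading cues."""
    h, w = face_gray.shape
    left, right = face_gray[:, : w // 2].mean(), face_gray[:, w // 2 :].mean()
    top, bottom = face_gray[: h // 2, :].mean(), face_gray[h // 2 :, :].mean()
    d = np.array([right - left, bottom - top, -255.0])  # brighter side points at the light
    return d / np.linalg.norm(d)

def shade(normal, light_dir, albedo=0.8):
    """Lambertian shading; light_dir points from the surface toward the light."""
    return albedo * max(0.0, float(np.dot(normal, light_dir)))

face = np.tile(np.linspace(90, 180, 64), (64, 1))  # synthetic face region lit from the right
L = estimate_light_direction(face)
print("estimated light direction:", np.round(L, 2))
print("shading of a camera-facing virtual surface:", round(shade(np.array([0.0, 0.0, -1.0]), L), 2))
```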
  • Patent number: 10453267
    Abstract: A method for representing virtual information in a view of a real environment comprises providing a virtual object having a global position and orientation with respect to a geographic global coordinate system, with first pose data on the global position and orientation of the virtual object, in a database of a server, taking an image of a real environment by a mobile device and providing second pose data as to at which position and with which orientation with respect to the geographic global coordinate system the image was taken. The method further includes displaying the image on a display of the mobile device, accessing the virtual object in the database and positioning the virtual object in the image on the basis of the first and second pose data, manipulating the virtual object or adding a further virtual object, and providing the manipulated virtual object with modified first pose data or the further virtual object with third pose data in the database.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: October 22, 2019
    Assignee: Apple Inc.
    Inventors: Peter Meier, Michael Kuhn, Frank Angermann
  • Publication number: 20190318168
    Abstract: An exemplary process for identifying a type of a physical environment amongst a plurality of types of physical environments is provided. The process includes obtaining, using the one or more cameras, image data corresponding to a physical environment. The process further includes identifying at least one portion of an entity in the physical environment based on the image data; determining, based on the identified at least one portion of the entity, whether the entity is an entity of a first type; determining a type of the physical environment if the entity is an entity of the first type; and presenting one or more virtual objects and a representation of the entity.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 17, 2019
    Inventors: Peter Meier, Michael J. Rockwell
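
Purely illustrative rules, not the claimed classifier: map a detected entity type to an environment type and to virtual objects worth presenting.

```python
from typing import Optional

# Hypothetical detections a vision model might emit, keyed by the recognised entity.
ENTITY_TO_ENVIRONMENT = {
    "steering wheel": ("car interior", ["speed overlay", "navigation arrow"]),
    "whiteboard": ("meeting room", ["shared notes panel"]),
    "stove": ("kitchen", ["recipe card", "timer"]),
}

def classify_environment(detected_entity: str) -> Optional[str]:
    entry = ENTITY_TO_ENVIRONMENT.get(detected_entity)
    return entry[0] if entry else None

def virtual_objects_for(detected_entity: str) -> list:
    entry = ENTITY_TO_ENVIRONMENT.get(detected_entity)
    return entry[1] if entry else []

entity = "steering wheel"  # e.g. recognised from one visible portion of a car
print("environment type:", classify_environment(entity))
print("virtual objects to present:", virtual_objects_for(entity))
```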
  • Patent number: 10438373
    Abstract: The invention is related to a method and system for determining a pose of a first camera, comprising providing or receiving a spatial relationship (Rvc1) between a visual content displayed on a display device and the first camera, receiving image information associated with an image (B1) of at least part of the displayed visual content captured by a second camera, and determining a pose of the first camera according to the image information associated with the image (B1) and the spatial relationship (Rvc1).
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: October 8, 2019
    Assignee: Apple Inc.
    Inventors: Lejing Wang, Peter Meier
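
The pose chain implied by the abstract can be written as two rigid transforms composed together; the transform names and numbers below are mine.

```python
import numpy as np

def rigid(rz_deg, t):
    """Build a 4x4 rigid transform: rotation about z by rz_deg, then translation t."""
    a = np.radians(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Pose of the displayed content in the second camera's frame (estimated from image B1, hypothetical).
T_content_in_cam2 = rigid(10, [0.05, -0.02, 0.60])
# Fixed spatial relationship between the visual content and the first camera (Rvc1, hypothetical).
T_cam1_in_content = rigid(180, [0.00, 0.07, 0.00])

# Pose of the first camera expressed in the second camera's frame.
T_cam1_in_cam2 = T_content_in_cam2 @ T_cam1_in_content
print(np.round(T_cam1_in_cam2, 3))
```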
  • Patent number: 10417824
    Abstract: The invention relates to a method of representing a virtual object in a view of a real environment which comprises the steps of providing image information of a first image of at least part of a human face captured by a first camera, providing at least one human face specific characteristic, determining at least part of an image area of the face in the first image as a face region of the first image, determining at least one first light falling on the face according to the face region of the first image and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to the at least one first light. The invention also relates to a system for representing a virtual object in a view of a real environment.
    Type: Grant
    Filed: March 25, 2014
    Date of Patent: September 17, 2019
    Assignee: Apple Inc.
    Inventors: Sebastian Knorr, Peter Meier
  • Patent number: 10366538
    Abstract: The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional image of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment unmarked in reality in the two-dimensional image for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the virtual object with the two-dimensional image of the real environment with consideration of the segmentation data such that at least one part of the segment of the real environment is removed from the image of the real environment. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: July 30, 2019
    Assignee: Apple Inc.
    Inventors: Peter Meier, Stefan Holzer
  • Publication number: 20190188915
    Abstract: The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional representation of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment in the two-dimensional image on the basis of non-manually generated 3D information for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the two-dimensional image of the real environment with the virtual object or, by means of an optical, semitransparent element directly with reality with consideration of the segmentation data. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.
    Type: Application
    Filed: August 6, 2018
    Publication date: June 20, 2019
    Inventors: Peter Meier, Stefan Holzer
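
A toy example of segmenting from non-manually generated 3D information, here a per-pixel depth map compared against the depth at which the virtual object will be placed (values invented).

```python
import numpy as np

def occluder_mask(depth_map, virtual_object_depth_m):
    """True where real geometry lies in front of the virtual object."""
    return depth_map < virtual_object_depth_m

depth = np.full((120, 160), 3.0)   # hypothetical depth map (metres)
depth[30:90, 40:100] = 1.2         # a real object close to the camera
mask = occluder_mask(depth, virtual_object_depth_m=2.0)
print("pixels that should occlude the virtual object:", int(mask.sum()))
```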
  • Publication number: 20190180512
    Abstract: There is disclosed a method and mobile device for displaying points of interest in a view of a real environment displayed on a screen of the mobile device with a functionality for interaction with a user, which comprises the steps of: capturing an image of the real environment or a part of the real environment using a camera, determining at least one point of interest related to the real environment, determining an image position of the at least one point of interest in the image, displaying at least part of the image on at least part of the screen, overlaying a computer-generated indicator with the at least part of the image on the screen at a screen position according to the image position of the at least one point of interest, displaying a computer-generated virtual object related to the at least one point of interest on the screen at a screen position determined according to the screen position of the computer-generated indicator and which is adjacent to a bottom edge of the screen, displaying a visually …
    Type: Application
    Filed: February 6, 2019
    Publication date: June 13, 2019
    Inventors: Anton Fedosov, Stefan Misslinger, Peter Meier
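
An assumed simplification of the described screen layout: the indicator sits at the point of interest's projected image position, and the related virtual object is anchored at the same horizontal position adjacent to the bottom screen edge.

```python
def layout_poi(poi_image_pos, screen_w, screen_h, card_h=80, margin=8):
    """Return screen positions for the indicator and the related virtual object."""
    ix, iy = poi_image_pos
    indicator = (min(max(ix, 0), screen_w - 1), min(max(iy, 0), screen_h - 1))
    card = (indicator[0], screen_h - card_h - margin)  # same x, near the bottom edge
    return {"indicator": indicator, "virtual_object": card}

print(layout_poi(poi_image_pos=(412, 150), screen_w=750, screen_h=1334))
```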
  • Publication number: 20190172220
    Abstract: A method for determining the pose of a camera relative to a real environment includes the following steps: taking at least one image of a real environment by means of a camera, the image containing at least part of a real object, performing a tracking method that evaluates information with respect to correspondences between features associated with the real object and corresponding features of the real object as it is contained in the image of the real environment, so as to obtain conclusions about the pose of the camera, determining at least one parameter of an environmental situation, and performing the tracking method in accordance with the at least one parameter. Analogously, the method can also be utilized in a method for recognizing an object of a real environment in an image taken by a camera.
    Type: Application
    Filed: January 25, 2019
    Publication date: June 6, 2019
    Inventor: Peter Meier
  • Patent number: 10229511
    Abstract: A method for determining the pose of a camera relative to a real environment includes the following steps: taking at least one image of a real environment by means of a camera, the image containing at least part of a real object, performing a tracking method that evaluates information with respect to correspondences between features associated with the real object and corresponding features of the real object as it is contained in the image of the real environment, so as to obtain conclusions about the pose of the camera, determining at least one parameter of an environmental situation, and performing the tracking method in accordance with the at least one parameter. Analogously, the method can also be utilized in a method for recognizing an object of a real environment in an image taken by a camera.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: March 12, 2019
    Assignee: Apple Inc.
    Inventor: Peter Meier
  • Publication number: 20190075274
    Abstract: A method of tracking a mobile device comprising at least one camera in a real environment comprises the steps of receiving image information associated with at least one image captured by the at least one camera, generating a first geometrical model of at least part of the real environment based on environmental data or mobile system state data acquired in an acquisition process by at least one sensor of a mobile system, which is different from the mobile device, and performing a tracking process based on the image information associated with the at least one image and at least partially according to the first geometrical model, wherein the tracking process determines at least one parameter of a pose of the mobile device relative to the real environment.
    Type: Application
    Filed: November 5, 2018
    Publication date: March 7, 2019
    Inventors: Peter Meier, Sebastian Lieberknecht