Patents by Inventor Peter Meier

Peter Meier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9888235
    Abstract: An image processing method includes the steps of providing at least one image of at least one object or part of the at least one object, and providing a coordinate system in relation to the image, providing at least one degree of freedom in the coordinate system or at least one sensor data in the coordinate system, and computing image data of the at least one image or at least one part of the at least one image constrained or aligned by the at least one degree of freedom or the at least one sensor data.
    Type: Grant
    Filed: June 5, 2013
    Date of Patent: February 6, 2018
    Assignee: Apple Inc.
    Inventors: Peter Meier, Mohamed Selim Benhimane
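
The abstract above is broad, so the sketch below illustrates only one possible reading of it: constraining an image to a gravity-aligned coordinate system using the roll angle reported by an accelerometer. The helper names and the simple 2D rotation are assumptions for illustration, not the claimed method.

```python
# Illustrative sketch only: aligning an image to one sensed degree of freedom
# (the roll angle from an accelerometer). Helper names are hypothetical.
import numpy as np

def roll_from_accelerometer(accel):
    """Roll angle (radians) of the device from a 3-axis accelerometer reading."""
    ax, ay, _ = accel
    return np.arctan2(ax, ay)

def rotation_for_alignment(accel):
    """2x2 image-plane rotation that counters the measured roll, i.e. aligns
    the image data with a gravity-aligned coordinate system."""
    r = -roll_from_accelerometer(accel)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s], [s, c]])

# Example: a device tilted 10 degrees to the side.
accel = np.array([np.sin(np.radians(10)), np.cos(np.radians(10)), 0.0])
R = rotation_for_alignment(accel)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])))  # ~ -10 degrees of correction
```
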
  • Patent number: 9846942
    Abstract: The invention is related to a method and system for determining a pose of a first camera, comprising providing or receiving a spatial relationship (Rvc1) between a visual content displayed on a display device and the first camera, receiving image information associated with an image (B1) of at least part of the displayed visual content captured by a second camera, and determining a pose of the first camera according to the image information associated with the image (B1) and the spatial relationship (Rvc1).
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: December 19, 2017
    Assignee: Apple Inc.
    Inventors: Lejing Wang, Peter Meier
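
At a high level this claim composes two rigid transforms: the pose of the displayed visual content as seen by the second camera, and the known relationship Rvc1 between that content and the first camera. Below is a minimal sketch of that composition with 4x4 homogeneous matrices; the numeric poses are invented and the step that estimates the content's pose from image B1 is not modelled.

```python
# Sketch, not the patented method: composing homogeneous transforms to obtain
# the first camera's pose from (a) the pose of the displayed visual content as
# seen by a second camera and (b) the known content-to-first-camera relation.
import numpy as np

def compose(T_a_b, T_b_c):
    """Chain two 4x4 rigid transforms: maps frame c into frame a."""
    return T_a_b @ T_b_c

# Pose of the displayed content in the second camera's frame, e.g. estimated by
# detecting the content in image B1 (hypothetical values here).
T_cam2_content = np.eye(4)
T_cam2_content[:3, 3] = [0.0, 0.0, 0.5]    # content 0.5 m in front of camera 2

# The spatial relationship Rvc1 between the displayed content and the first
# camera (known, e.g. from the display/device geometry).
T_content_cam1 = np.eye(4)
T_content_cam1[:3, 3] = [0.0, -0.05, 0.0]  # first camera 5 cm below the content

# Pose of the first camera expressed in the second camera's frame.
T_cam2_cam1 = compose(T_cam2_content, T_content_cam1)
print(T_cam2_cam1[:3, 3])  # -> [ 0.   -0.05  0.5 ]
```
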
  • Publication number: 20170278258
    Abstract: The invention provides methods of detecting and describing features from an intensity image. In one of several aspects, the method comprises the steps of providing an intensity image captured by a capturing device, providing a method for determining a depth of at least one element in the intensity image, in a feature detection process detecting at least one feature in the intensity image, wherein the feature detection is performed by processing image intensity information of the intensity image at a scale which depends on the depth of at least one element in the intensity image, and providing a feature descriptor of the at least one detected feature. For example, the feature descriptor contains at least one first parameter based on information provided by the intensity image and at least one second parameter which is indicative of the scale.
    Type: Application
    Filed: June 9, 2017
    Publication date: September 28, 2017
    Inventors: Daniel Kurz, Peter Meier
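
The central idea of this abstract is that the detection scale follows the depth of the imaged element, so structures of the same physical size are detected at comparable scales regardless of distance. A rough sketch under a plain pinhole model follows (scale in pixels is roughly focal length times physical size over depth); the chosen physical size and focal length are arbitrary assumptions, and the detector itself is only indicated in a comment.

```python
# Sketch only: choosing a feature-detection scale from per-pixel depth under a
# pinhole camera model, so that the scale corresponds to a fixed physical size.

def detection_scale_px(depth_m, focal_length_px, physical_size_m=0.05):
    """Scale (in pixels) at which a structure of `physical_size_m` metres
    appears when observed at `depth_m` metres."""
    return focal_length_px * physical_size_m / depth_m

# Hypothetical numbers: 600 px focal length, candidate points at 1 m and 4 m.
f_px = 600.0
for depth in (1.0, 4.0):
    sigma = detection_scale_px(depth, f_px)
    # A real implementation would now run its detector (e.g. a blob response)
    # at smoothing scale `sigma` around that point and store both an
    # image-based descriptor and a parameter indicative of the scale.
    print(f"depth {depth} m -> detection scale {sigma:.1f} px")
```
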
  • Publication number: 20170213393
    Abstract: There is disclosed a method and mobile device for displaying points of interest in a view of a real environment displayed on a screen of the mobile device with a functionality for interaction with a user, which comprises the steps of: capturing an image of the real environment or a part of the real environment using a camera, determining at least one point of interest related to the real environment, determining an image position of the at least one point of interest in the image, displaying at least part of the image on at least part of the screen, overlaying a computer-generated indicator with the at least part of the image on the screen at a screen position according to the image position of the at least one point of interest, displaying a computer-generated virtual object related to the at least one point of interest on the screen at a screen position determined according to the screen position of the computer-generated indicator and which is adjacent to a bottom edge of the screen, displaying a visually…
    Type: Application
    Filed: April 5, 2017
    Publication date: July 27, 2017
    Inventors: Anton Fedosov, Stefan Misslinger, Peter Meier
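
The layout rule described in the abstract is easy to illustrate: project the point of interest into the image, draw an indicator at that position, and place the related virtual object at the same horizontal position but adjacent to the bottom edge of the screen. The sketch below shows only that layout step with invented intrinsics and coordinates; it is not the patented method.

```python
# Layout sketch only: project a point of interest (POI) into the image, then
# place its annotation at the same x position near the bottom edge of the screen.
import numpy as np

SCREEN_W, SCREEN_H = 1080, 1920     # portrait screen, pixels (assumed)
BOTTOM_MARGIN = 40                  # gap to the bottom edge, pixels (assumed)

def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point given in the camera frame."""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

# Hypothetical POI 2 m ahead and slightly to the right of the camera.
poi_cam = np.array([0.4, 0.1, 2.0])
indicator = project(poi_cam, fx=900, fy=900, cx=SCREEN_W / 2, cy=SCREEN_H / 2)

# The virtual object keeps the indicator's horizontal position but sits
# adjacent to the bottom edge, as described in the abstract.
annotation = np.array([indicator[0], SCREEN_H - BOTTOM_MARGIN])

print("indicator at", indicator.round(1), "annotation at", annotation)
```
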
  • Publication number: 20170214899
    Abstract: The disclosure relates to a method and system for presenting at least part of an image of a real object in a view of a real environment, comprising providing a first image of at least part of a real object captured by a first camera, determining at least part of the real object in the first image as an object image area, determining a first 3D plane relative to the first camera, the first camera being at a position where the first image is captured, providing at least one image feature related to the real object in the first image, providing at least one first ray passing an optical center of the first camera being at a position where the first image is captured and the at least one image feature, determining, according to a first plane normal direction of the first 3D plane, at least one first angle between the first 3D plane and the at least one first ray, providing a second image of a real environment captured by a second camera, determining a second 3D plane relative to the second camera, the second camera…
    Type: Application
    Filed: July 23, 2014
    Publication date: July 27, 2017
    Inventors: Peter Meier, Lejing Wang, Stefan Misslinger
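
One step spelled out in this abstract is computing the angle between a 3D plane, given via its normal direction, and the ray through the camera's optical center and an image feature. The following sketch covers just that geometric step; the plane normal, intrinsics, and pixel coordinates are arbitrary example values.

```python
# Geometric sketch only: angle between a viewing ray (optical center -> image
# feature) and a 3D plane given by its unit normal, in the camera frame.
import numpy as np

def ray_through_pixel(u, v, fx, fy, cx, cy):
    """Direction of the ray from the optical center through pixel (u, v)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def angle_ray_plane(ray_dir, plane_normal):
    """Angle between a ray and a plane: 90 degrees minus the angle to the normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return np.degrees(np.arcsin(abs(np.dot(ray_dir, n))))

# Hypothetical ground plane (normal pointing up in the camera frame) and feature.
ray = ray_through_pixel(700, 900, fx=900, fy=900, cx=640, cy=480)
print(f"{angle_ray_plane(ray, np.array([0.0, -1.0, 0.0])):.1f} degrees")
```
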
  • Patent number: 9679384
    Abstract: The invention provides methods of detecting and describing features from an intensity image. In one of several aspects, the method comprises the steps of providing an intensity image captured by a capturing device, providing a method for determining a depth of at least one element in the intensity image, in a feature detection process detecting at least one feature in the intensity image, wherein the feature detection is performed by processing image intensity information of the intensity image at a scale which depends on the depth of at least one element in the intensity image, and providing a feature descriptor of the at least one detected feature. For example, the feature descriptor contains at least one first parameter based on information provided by the intensity image and at least one second parameter which is indicative of the scale.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: June 13, 2017
    Assignee: Apple Inc.
    Inventors: Daniel Kurz, Peter Meier
  • Patent number: 9646422
    Abstract: There is disclosed a method and mobile device for displaying points of interest in a view of a real environment displayed on a screen of the mobile device with a functionality for interaction with a user, which comprises the steps of: capturing an image of the real environment or a part of the real environment using a camera, determining at least one point of interest related to the real environment, determining an image position of the at least one point of interest in the image, displaying at least part of the image on at least part of the screen, overlaying a computer-generated indicator with the at least part of the image on the screen at a screen position according to the image position of the at least one point of interest, displaying a computer-generated virtual object related to the at least one point of interest on the screen at a screen position determined according to the screen position of the computer-generated indicator and which is adjacent to a bottom edge of the screen, displaying a visually…
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: May 9, 2017
    Assignee: Apple Inc.
    Inventors: Anton Fedosov, Stefan Misslinger, Peter Meier
  • Publication number: 20170109929
    Abstract: The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional image of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment unmarked in reality in the two-dimensional image for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the virtual object with the two-dimensional image of the real environment with consideration of the segmentation data such that at least one part of the segment of the real environment is removed from the image of the real environment. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.
    Type: Application
    Filed: October 19, 2015
    Publication date: April 20, 2017
    Inventors: Peter Meier, Stefan Holzer
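
The final merging step in the abstract, removing part of an identified real segment while blending in the virtual object, amounts to mask-based compositing. The toy sketch below shows such a blend on small arrays; how the segmentation mask is obtained, which is the substantive part of the method, is not modelled, and the alpha-channel convention is an assumption.

```python
# Compositing sketch only: given a segmentation mask of a real object, replace
# the masked pixels when merging a virtual object into the camera image.
import numpy as np

def merge_with_removal(real_img, virtual_img, segment_mask, fill_value=0.0):
    """Remove the segmented real object and draw the virtual object on top.
    `segment_mask` marks the real object to remove; `virtual_img` carries an
    alpha channel in its last plane (toy convention assumed here)."""
    out = real_img.copy()
    out[segment_mask] = fill_value                 # "remove" the real segment
    rgb, alpha = virtual_img[..., :3], virtual_img[..., 3:]
    return out * (1 - alpha) + rgb * alpha         # blend in the virtual object

# Tiny 4x4 toy example with a 2x2 segment and a semi-transparent virtual patch.
real = np.ones((4, 4, 3)) * 0.5
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
virtual = np.zeros((4, 4, 4))
virtual[1:3, 1:3] = [1.0, 0.0, 0.0, 0.8]
print(merge_with_removal(real, virtual, mask)[1, 1])  # -> [0.8 0.  0. ]
```
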
  • Publication number: 20170109931
    Abstract: The invention relates to a method of representing a virtual object in a view of a real environment which comprises the steps of providing image information of a first image of at least part of a human face captured by a first camera, providing at least one human face specific characteristic, determining at least part of an image area of the face in the first image as a face region of the first image, determining at least one first light falling on the face according to the face region of the first image and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to the at least one first light. The invention also relates to a system for representing a virtual object in a view of a real environment.
    Type: Application
    Filed: March 25, 2014
    Publication date: April 20, 2017
    Inventors: Sebastian Knorr, Peter Meier
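
The underlying idea is to use the face, whose reflectance properties are roughly known, as a light probe: the appearance of the face region indicates the first light falling on the scene, and the virtual object is blended in consistently with it. The sketch below is heavily reduced: it estimates only a scalar brightness using a nominal face albedo, whereas the described method concerns light estimation more generally; all constants are assumptions.

```python
# Reduced sketch only: use the face region of an image as a rough light probe
# and shade a virtual object with the estimated light intensity.
import numpy as np

NOMINAL_FACE_ALBEDO = 0.6   # assumed average skin reflectance (rough constant)

def estimate_light_intensity(image, face_mask):
    """Scalar light estimate: observed face brightness divided by the albedo
    the face is assumed to have (a stand-in for a face-specific characteristic)."""
    face_pixels = image[face_mask]
    return float(face_pixels.mean()) / NOMINAL_FACE_ALBEDO

def shade_virtual_object(object_albedo, light_intensity):
    """Shade a virtual object's colour with the estimated light, clipped to [0, 1]."""
    return np.clip(object_albedo * light_intensity, 0.0, 1.0)

# Toy grayscale image with a brighter face region.
img = np.full((6, 6), 0.2)
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 2:5] = True
img[mask] = 0.45

light = estimate_light_intensity(img, mask)
print(shade_virtual_object(np.array([0.3, 0.5, 0.7]), light))
```
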
  • Patent number: 9560273
    Abstract: The invention is related to a wearable information system having at least one camera, the information system operable to have a low-power mode and a high-power mode. The information system is configured such that the high-power mode is activated by a detection of at least one object in at least one field of view of the at least one camera.
    Type: Grant
    Filed: February 21, 2014
    Date of Patent: January 31, 2017
    Assignee: Apple Inc.
    Inventors: Peter Meier, Thomas Severin
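
This claim is essentially a power-state rule: the system idles in the low-power mode and switches to the high-power mode once an object is detected in the camera's field of view. A toy state machine follows; the detector callback is a stand-in and not part of the source.

```python
# Toy sketch of the power-state behaviour: low-power until an object of
# interest is detected in the camera's field of view, then high-power.
LOW, HIGH = "low-power", "high-power"

class WearableInfoSystem:
    def __init__(self, detect_object):
        self.mode = LOW
        self.detect_object = detect_object   # hypothetical detector callback

    def on_frame(self, frame):
        if self.mode == LOW and self.detect_object(frame):
            self.mode = HIGH                 # detection activates high-power mode
        return self.mode

# Example: frames are plain strings; the "detector" looks for a marker object.
system = WearableInfoSystem(detect_object=lambda frame: "marker" in frame)
for frame in ["street scene", "street scene", "marker on a poster"]:
    print(system.on_frame(frame))            # low-power, low-power, high-power
```
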
  • Patent number: 9558581
    Abstract: A method for representing virtual information in a view of a real environment is provided that includes the following steps: providing a system setup comprising at least one display device, determining a position of a viewing point relative to at least one component of the real environment, providing a geometry model of the real environment, providing at least one item of virtual information and its position, determining a visualization mode of blending in the at least one item of virtual information on the display device according to the position of the viewing point and the geometry model, calculating a ray between the viewing point and the item of virtual information, and determining a number of boundary intersections by the ray, wherein if the number of boundary intersections is less than 2, the item of virtual information is blended in a non-occlusion mode, otherwise in an occlusion mode.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: January 31, 2017
    Assignee: Apple Inc.
    Inventors: Lejing Wang, Peter Meier, Stefan Misslinger
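
The decision rule here is concrete: cast a ray from the viewing point to the item of virtual information, count how often it crosses boundaries of the geometry model, and blend the item in non-occlusion mode if there are fewer than two crossings, otherwise in occlusion mode. The 2D sketch below applies that rule with the geometry model reduced to polygon edges; the segment-intersection test is a standard orientation test, not taken from the patent.

```python
# 2D sketch of the rule in the abstract: count how often the ray from the
# viewing point to a virtual item crosses boundaries of the geometry model;
# fewer than 2 crossings -> non-occlusion mode, otherwise occlusion mode.

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly intersects segment q1-q2."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, q1) != orient(p1, p2, q2)
            and orient(q1, q2, p1) != orient(q1, q2, p2))

def visualization_mode(viewpoint, item_pos, boundary_edges):
    crossings = sum(segments_intersect(viewpoint, item_pos, a, b)
                    for a, b in boundary_edges)
    return "non-occlusion" if crossings < 2 else "occlusion"

# Toy geometry model: one square "building" as four boundary edges.
building = [((2, 1), (4, 1)), ((4, 1), (4, 3)), ((4, 3), (2, 3)), ((2, 3), (2, 1))]
viewer = (0, 2)
print(visualization_mode(viewer, (3, 2), building))  # inside: 1 crossing -> non-occlusion
print(visualization_mode(viewer, (6, 2), building))  # behind: 2 crossings -> occlusion
```
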
  • Publication number: 20170013195
    Abstract: The invention is related to a wearable information system having at least one camera, the information system operable to have a low-power mode and a high-power mode. The information system is configured such that the high-power mode is activated by a detection of at least one object in at least one field of view of the at least one camera.
    Type: Application
    Filed: September 22, 2016
    Publication date: January 12, 2017
    Inventors: Peter Meier, Thomas Severin
  • Publication number: 20170011558
    Abstract: The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional representation of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment in the two-dimensional image on the basis of non-manually generated 3D information for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the two-dimensional image of the real environment with the virtual object or, by means of an optical, semitransparent element directly with reality with consideration of the segmentation data. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.
    Type: Application
    Filed: July 8, 2016
    Publication date: January 12, 2017
    Inventors: Peter Meier, Stefan Holzer
  • Publication number: 20160350906
    Abstract: A method of tracking a mobile device comprising at least one camera in a real environment comprises the steps of receiving image information associated with at least one image captured by the at least one camera, generating a first geometrical model of at least part of the real environment based on environmental data or mobile system state data acquired in an acquisition process by at least one sensor of a mobile system, which is different from the mobile device, and performing a tracking process based on the image information associated with the at least one image and at least partially according to the first geometrical model, wherein the tracking process determines at least one parameter of a pose of the mobile device relative to the real environment.
    Type: Application
    Filed: May 27, 2016
    Publication date: December 1, 2016
    Inventors: Peter Meier, Sebastian Lieberknecht
  • Publication number: 20160246818
    Abstract: A method for analyzing an image of a real object, generated by at least one camera, includes the following steps: generating at least a first image by the camera capturing at least one real object, defining a first search domain comprising multiple data sets of the real object, each of the data sets being indicative of a respective portion of the real object, and analyzing at least one characteristic property of the first image with respect to the first search domain, in order to determine whether the at least one characteristic property corresponds to information of at least a particular one of the data sets of the first search domain. If it is determined that the at least one characteristic property corresponds to information of at least a particular one of the data sets, a second search domain comprising only the particular one of the data sets is defined and the second search domain is used for analyzing the first image and/or at least a second image.
    Type: Application
    Filed: February 15, 2016
    Publication date: August 25, 2016
    Inventors: Sebastian Lieberknecht, Peter Meier
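
The two-stage search is straightforward to illustrate: the first image is analyzed against a first search domain covering all data sets (portions of the real object), and once a particular data set matches, a second search domain containing only that data set is used for further analysis. The sketch below uses plain descriptor distances and invented data; it is only an illustration of the narrowing step.

```python
# Sketch of the two-stage search: a broad first search domain, then a narrowed
# second search domain containing only the data set that matched the first image.
import numpy as np

def best_match(image_descriptor, search_domain, max_distance=0.5):
    """Return the name of the data set closest to the image's characteristic
    property (a descriptor vector here), or None if nothing is close enough."""
    best_name, best_dist = None, np.inf
    for name, reference in search_domain.items():
        dist = np.linalg.norm(image_descriptor - reference)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

# First search domain: data sets for different portions of the real object.
first_domain = {
    "front_cover": np.array([0.9, 0.1, 0.2]),
    "back_cover":  np.array([0.1, 0.8, 0.3]),
    "spine":       np.array([0.4, 0.4, 0.9]),
}

first_image_descriptor = np.array([0.85, 0.15, 0.25])
hit = best_match(first_image_descriptor, first_domain)

# Second search domain: only the matched data set, used for further images.
second_domain = {hit: first_domain[hit]} if hit else first_domain
print(hit, "->", list(second_domain))
```
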
  • Publication number: 20160240011
    Abstract: There is disclosed a method and mobile device for displaying points of interest in a view of a real environment displayed on a screen of the mobile device with a functionality for interaction with a user, which comprises the steps of: capturing an image of the real environment or a part of the real environment using a camera, determining at least one point of interest related to the real environment, determining an image position of the at least one point of interest in the image, displaying at least part of the image on at least part of the screen, overlaying a computer-generated indicator with the at least part of the image on the screen at a screen position according to the image position of the at least one point of interest, displaying a computer-generated virtual object related to the at least one point of interest on the screen at a screen position determined according to the screen position of the computer-generated indicator and which is adjacent to a bottom edge of the screen, displaying a visually…
    Type: Application
    Filed: September 24, 2013
    Publication date: August 18, 2016
    Inventors: Anton Fedosov, Stefan Misslinger, Peter Meier
  • Publication number: 20160232657
    Abstract: The invention is related to a method and system for determining a pose of a first camera, comprising providing or receiving a spatial relationship (Rvc1) between a visual content displayed on a display device and the first camera, receiving image information associated with an image (B1) of at least part of the displayed visual content captured by a second camera, and determining a pose of the first camera according to the image information associated with the image (B1) and the spatial relationship (Rvc1).
    Type: Application
    Filed: October 9, 2013
    Publication date: August 11, 2016
    Inventors: Lejing Wang, Peter Meier
  • Patent number: 9400941
    Abstract: A method of matching image features with reference features comprises the steps of providing a current image, providing a set of reference features, wherein each of the reference features comprises at least one first parameter which is at least partially indicative of a position and/or orientation of the reference feature with respect to a global coordinate system, wherein the global coordinate system is an earth coordinate system or an object coordinate system, or at least partially indicative of a position of the reference feature with respect to an altitude, detecting at least one feature in the current image in a feature detection process, associating with the detected feature at least one second parameter which is at least partially indicative of a position and/or orientation of the detected feature, or which is at least partially indicative of a position of the detected feature with respect to an altitude, and matching the detected feature with a reference feature by determining a similarity measure.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: July 26, 2016
    Assignee: Metaio GmbH
    Inventors: Daniel Kurz, Peter Meier
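
The distinguishing step in this abstract is that reference features carry parameters tied to a global coordinate system (or an altitude), and matching takes those into account in addition to appearance. A reduced sketch follows in which candidate reference features are first gated by distance to the detected feature's coarse global position and then matched by descriptor distance; the thresholds, gating strategy, and data are assumptions, not the patented similarity measure.

```python
# Reduced sketch: gate reference features by their global position before
# matching descriptors, so only geo-plausible candidates are compared.
import numpy as np

def match_feature(detected_desc, detected_pos, references,
                  max_position_error_m=50.0, max_desc_distance=0.6):
    """references: list of (descriptor, global_position) pairs.
    Returns the index of the best match, or None."""
    best_idx, best_dist = None, np.inf
    for idx, (ref_desc, ref_pos) in enumerate(references):
        if np.linalg.norm(ref_pos - detected_pos) > max_position_error_m:
            continue                      # geo gate: skip implausible candidates
        dist = np.linalg.norm(ref_desc - detected_desc)   # similarity measure
        if dist < best_dist:
            best_idx, best_dist = idx, dist
    return best_idx if best_dist <= max_desc_distance else None

# Invented data: two visually similar landmarks roughly 2 km apart.
refs = [
    (np.array([0.90, 0.10]), np.array([0.0, 0.0])),
    (np.array([0.88, 0.12]), np.array([2000.0, 0.0])),
]
print(match_feature(np.array([0.9, 0.1]), np.array([1990.0, 5.0]), refs))  # -> 1
```
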
  • Publication number: 20160210786
    Abstract: A method of displaying virtual information in a view of a real environment comprising the following steps: providing a current pose of the system relative to at least one part of the real environment and providing accuracy information of the current pose, providing multiple pieces of virtual information, and assigning a respective one of the pieces of virtual information to one of different parameters indicative of different pose accuracy information, and displaying at least one of the pieces of virtual information in the view of the real environment according to the accuracy information of the current pose in relation to the assigned parameter.
    Type: Application
    Filed: October 26, 2015
    Publication date: July 21, 2016
    Inventor: Peter Meier
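
The selection logic described here is a gating by pose accuracy: each piece of virtual information is assigned an accuracy requirement, and only pieces whose requirement is met by the current pose accuracy are displayed. A minimal sketch with an invented accuracy metric (metres of position uncertainty) follows.

```python
# Minimal sketch: show only the pieces of virtual information whose assigned
# accuracy requirement is met by the current pose accuracy (metres, invented).
def visible_information(pieces, current_accuracy_m):
    """pieces: list of (label, required_accuracy_m). A piece is displayed only
    if the current pose is at least as accurate as the piece requires."""
    return [label for label, required in pieces if current_accuracy_m <= required]

pieces = [
    ("door label (needs precise pose)", 1.0),
    ("room name", 5.0),
    ("building name", 50.0),
]

print(visible_information(pieces, current_accuracy_m=0.5))   # all three
print(visible_information(pieces, current_accuracy_m=20.0))  # only the building name
```
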
  • Patent number: D798695
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: October 3, 2017
    Inventors: George M. Davison, Peter A. Meier, Timothy A. Craig, John M. Margolis, Matthew D. McClatchey, Donald H. Saller