Patents by Inventor Marc Pollefeys

Marc Pollefeys has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220230079
    Abstract: In various examples there is an apparatus with at least one processor and a memory storing instructions that, when executed by the at least one processor, perform a method for recognizing an action of a user. The method comprises accessing at least one stream of pose data derived from captured sensor data depicting the user; sending the pose data to a machine learning system having been trained to recognize actions from pose data; and receiving at least one recognized action from the machine learning system.
    Type: Application
    Filed: January 21, 2021
    Publication date: July 21, 2022
    Inventors: Bugra TEKIN, Marc POLLEFEYS, Federica BOGO
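The pipeline claimed in this abstract (a stream of pose data sent to a trained recognizer, which returns actions) can be sketched as follows. This is a minimal illustration, not the patented implementation: `recognize_actions`, `toy_model`, and the two-frame window are all assumed names and simplifications.

```python
# Hypothetical sketch of the claimed pipeline: buffer a window of pose
# frames, send it to a pre-trained model, collect the recognized actions.
from typing import Callable, Iterable, List, Sequence

Pose = Sequence[float]  # e.g. flattened joint coordinates for one frame

def recognize_actions(pose_stream: Iterable[Pose],
                      model: Callable[[List[Pose]], str],
                      window: int = 2) -> List[str]:
    """Buffer `window` frames of pose data, send each full buffer to the
    trained model, and collect the recognized actions."""
    buffer: List[Pose] = []
    actions: List[str] = []
    for pose in pose_stream:
        buffer.append(pose)
        if len(buffer) == window:
            actions.append(model(buffer))  # model is assumed pre-trained
            buffer.clear()
    return actions

# Toy stand-in "model": labels a window by whether the pose changed.
def toy_model(window_frames):
    moved = window_frames[0] != window_frames[-1]
    return "wave" if moved else "idle"

result = recognize_actions([(0.0,), (1.0,), (2.0,), (2.0,)], toy_model)
```

In the real system the model would be a learned action classifier; here any callable over a window of poses fits the same interface.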
  • Patent number: 11176374
    Abstract: The described implementations relate to images and depth information and generating useful information from the images and depth information. One example can identify planes in a semantically-labeled 3D voxel representation of a scene. The example can infer missing information by extending planes associated with structural elements of the scene. The example can also generate a watertight manifold representation of the scene at least in part from the inferred missing information.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: November 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michelle Brook, William Guyman, Szymon P. Stachniak, Hendrik M. Langerak, Silvano Galliani, Marc Pollefeys
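The plane-extension step in the abstract above can be pictured on a toy 2-D labeled grid: wherever a plane of "wall" labels is observed, unknown cells on that plane are inferred to be wall as well. The function name, the 2-D simplification, and the exact-match plane test are assumptions for illustration only.

```python
def extend_wall_planes(labels, bounds):
    """For each vertical plane x = const containing 'wall' cells, mark
    every 'unknown' cell on that plane (inside `bounds`) as inferred wall.
    labels: dict[(x, y)] -> str; bounds: (width, height)."""
    wall_x = {x for (x, y), lab in labels.items() if lab == "wall"}
    w, h = bounds
    out = dict(labels)
    for x in wall_x:
        for y in range(h):
            if out.get((x, y), "unknown") == "unknown":
                out[(x, y)] = "wall"  # inferred by extending the plane
    return out

# Two observed wall cells at x=2; the gap between them gets filled in.
labels = {(2, 0): "wall", (2, 3): "wall", (0, 0): "floor"}
filled = extend_wall_planes(labels, (4, 4))
```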
  • Publication number: 20200349351
    Abstract: The described implementations relate to images and depth information and generating useful information from the images and depth information. One example can identify planes in a semantically-labeled 3D voxel representation of a scene. The example can infer missing information by extending planes associated with structural elements of the scene. The example can also generate a watertight manifold representation of the scene at least in part from the inferred missing information.
    Type: Application
    Filed: May 1, 2019
    Publication date: November 5, 2020
    Inventors: Michelle BROOK, William GUYMAN, Szymon P. STACHNIAK, Hendrik M. LANGERAK, Silvano GALLIANI, Marc POLLEFEYS
  • Patent number: 10198533
    Abstract: A method of determining a registration between Scanworlds may include determining a first viewpoint of a setting based on first point data of a first Scanworld. The first Scanworld may include information about the setting as taken by a first laser scanner at a first location. The method may further include determining a second viewpoint of the setting based on second point data of a second Scanworld. The second Scanworld may include information about the setting as taken by a second laser scanner at a second location. The method may further include generating a first rectified image based on the first viewpoint and generating a second rectified image based on the second viewpoint. Additionally, the method may include determining a registration between the first Scanworld and the second Scanworld based on the first viewpoint and the second viewpoint.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: February 5, 2019
    Assignee: HEXAGON TECHNOLOGY CENTER GMBH
    Inventors: Bernhard Zeisl, Kevin Köser, Marc Pollefeys, Gregory Walsh
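The registration step amounts to estimating a rigid transform between the two viewpoints. A minimal 2-D least-squares version (Kabsch-style, assuming point correspondences are already known, which the patent obtains via the rectified images) might look like this; `register_2d` is an illustrative name, not from the patent.

```python
import math

def register_2d(src, dst):
    """Least-squares rigid transform (rotation theta, translation t)
    mapping src points onto corresponding dst points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s = [(x - csx, y - csy) for x, y in src]   # centered source points
    d = [(x - cdx, y - cdy) for x, y in dst]   # centered target points
    dot = sum(a[0] * b[0] + a[1] * b[1] for a, b in zip(s, d))
    cross = sum(a[0] * b[1] - a[1] * b[0] for a, b in zip(s, d))
    theta = math.atan2(cross, dot)             # optimal 2-D rotation
    c, si = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - si * csy)            # translation after rotation
    ty = cdy - (si * csx + c * csy)
    return theta, (tx, ty)

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (2, 4), (1, 3)]   # src rotated 90 degrees, shifted by (2, 3)
theta, (tx, ty) = register_2d(src, dst)
```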
  • Patent number: 10057561
    Abstract: Scene reconstruction may be performed using videos that capture the scenes at high resolution and frame rate. Scene reconstruction may be associated with determining camera orientation and/or location (“camera pose”) throughout the video, three-dimensional coordinates of feature points detected in frames of the video, and/or other information. Individual videos may have multiple frames. Feature points may be detected in, and tracked over, the frames. Estimations of camera pose may be made for individual subsets of frames. One or more estimations of camera pose may be determined as fixed estimations. The estimated camera poses for the frames included in the subsets of frames may be updated based on the fixed estimations. Camera pose for frames not included in the subsets of frames may be determined to provide globally consistent camera poses and three-dimensional coordinates for feature points of the video.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: August 21, 2018
    Assignees: Disney Enterprises, Inc., ETH Zurich
    Inventors: Benjamin Resch, Hendrik Lensch, Marc Pollefeys, Oliver Wang, Alexander Sorkine Hornung
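The fixed-estimate update described above can be illustrated in one dimension: dead-reckoned poses are warped so they agree exactly with the subset of pose estimates held fixed. The function name, the 1-D poses, and the linear interpolation of the correction are simplifying assumptions, not the patent's actual optimization.

```python
def propagate_poses(increments, anchors):
    """Integrate relative 1-D pose increments, then correct the trajectory
    so it passes exactly through the fixed anchor poses, interpolating the
    correction linearly between consecutive anchors."""
    poses = [0.0]                       # dead-reckoned trajectory
    for r in increments:
        poses.append(poses[-1] + r)
    keys = sorted(anchors)
    corr = {k: anchors[k] - poses[k] for k in keys}  # residual at anchors
    out = list(poses)
    for a, b in zip(keys, keys[1:]):
        for i in range(a, b + 1):
            w = (i - a) / (b - a)
            out[i] = poses[i] + (1 - w) * corr[a] + w * corr[b]
    return out

# Unit steps drift to 4.0 by frame 4, but frame 4 is fixed at 5.0;
# the correction is spread over the intervening frames.
trajectory = propagate_poses([1.0, 1.0, 1.0, 1.0], {0: 0.0, 4: 5.0})
```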
  • Patent number: 10037615
    Abstract: This disclosure relates to systems and methods of palette-based color editing. A set of principal colors may be identified from among individual colors of individual pixels of an image. Color distributions about the individual principal colors may be determined. An individual color distribution may include a homogeneous range of colors that may vary with respect to an individual principal color. Pixels may be associated with one or more pixel groups based on correspondences between individual colors of the individual pixels and individual colors included in the individual color distributions corresponding to the individual pixel groups. Color editing may be effectuated by modifying colors of pixels in a given group independently from other colors of other pixels in other groups.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: July 31, 2018
    Assignees: Disney Enterprises, Inc., ETH ZURICH
    Inventors: Yagiz Aksoy, Tunc Ozan Aydin, Marc Pollefeys, Aljosa Smolic
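The grouping-and-independent-editing idea above can be sketched with a nearest-palette-color assignment; the real method uses richer color distributions per group, so `edit_palette`, the Euclidean assignment, and the per-channel shift are illustrative assumptions only.

```python
def edit_palette(pixels, palette, target_idx, delta):
    """Assign each pixel to its nearest palette (principal) color, then
    shift only the pixels in the targeted group by `delta` per channel,
    leaving the other groups untouched."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for p in pixels:
        group = min(range(len(palette)), key=lambda i: dist2(p, palette[i]))
        if group == target_idx:
            # clamp edited channels to the valid 8-bit range
            p = tuple(min(255, max(0, c + d)) for c, d in zip(p, delta))
        out.append(p)
    return out

pixels = [(250, 10, 10), (10, 250, 10)]      # one reddish, one greenish pixel
palette = [(255, 0, 0), (0, 255, 0)]         # two principal colors
edited = edit_palette(pixels, palette, target_idx=0, delta=(0, 0, 40))
```

Only the pixel assigned to the red group is shifted toward blue; the green-group pixel is unchanged.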
  • Publication number: 20180005409
    Abstract: This disclosure relates to systems and methods of palette-based color editing. A set of principal colors may be identified from among individual colors of individual pixels of an image. Color distributions about the individual principal colors may be determined. An individual color distribution may include a homogeneous range of colors that may vary with respect to an individual principal color. Pixels may be associated with one or more pixel groups based on correspondences between individual colors of the individual pixels and individual colors included in the individual color distributions corresponding to the individual pixel groups. Color editing may be effectuated by modifying colors of pixels in a given group independently from other colors of other pixels in other groups.
    Type: Application
    Filed: July 1, 2016
    Publication date: January 4, 2018
    Inventors: Yagiz Aksoy, Tunc Ozan Aydin, Marc Pollefeys, Aljosa Smolic
  • Publication number: 20170237968
    Abstract: Scene reconstruction may be performed using videos that capture the scenes at high resolution and frame rate. Scene reconstruction may be associated with determining camera orientation and/or location (“camera pose”) throughout the video, three-dimensional coordinates of feature points detected in frames of the video, and/or other information. Individual videos may have multiple frames. Feature points may be detected in, and tracked over, the frames. Estimations of camera pose may be made for individual subsets of frames. One or more estimations of camera pose may be determined as fixed estimations. The estimated camera poses for the frames included in the subsets of frames may be updated based on the fixed estimations. Camera pose for frames not included in the subsets of frames may be determined to provide globally consistent camera poses and three-dimensional coordinates for feature points of the video.
    Type: Application
    Filed: May 1, 2017
    Publication date: August 17, 2017
    Inventors: Benjamin Resch, Hendrik Lensch, Marc Pollefeys, Oliver Wang, Alexander Sorkine Hornung
  • Patent number: 9648303
    Abstract: Scene reconstruction may be performed using videos that capture the scenes at high resolution and frame rate. Scene reconstruction may be associated with determining camera orientation and/or location (“camera pose”) throughout the video, three-dimensional coordinates of feature points detected in frames of the video, and/or other information. Individual videos may have multiple frames. Feature points may be detected in, and tracked over, the frames. Estimations of camera pose may be made for individual subsets of frames. One or more estimations of camera pose may be determined as fixed estimations. The estimated camera poses for the frames included in the subsets of frames may be updated based on the fixed estimations. Camera pose for frames not included in the subsets of frames may be determined to provide globally consistent camera poses and three-dimensional coordinates for feature points of the video.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: May 9, 2017
    Assignees: Disney Enterprises, Inc., ETH Zurich
    Inventors: Benjamin Resch, Hendrik Lensch, Marc Pollefeys, Oliver Wang, Alexander Sorkine Hornung
  • Patent number: 9424650
    Abstract: To generate a pixel-accurate depth map, data from a range-estimation sensor (e.g., a time-of-flight sensor) is combined with data from multiple cameras to produce a high-quality depth measurement for pixels in an image. To do so, a depth measurement system may use a plurality of cameras mounted on a support structure to perform a depth hypothesis technique to generate a first depth-support value. Furthermore, the apparatus may include a range-estimation sensor which generates a second depth-support value. In addition, the system may project a 3D point onto the auxiliary cameras and compare the color of the associated pixel in the auxiliary camera with the color of the pixel in the reference camera to generate a third depth-support value. The system may combine these support values for each pixel in an image to determine respective depth values. Using these values, the system may generate a depth map for the image.
    Type: Grant
    Filed: June 12, 2013
    Date of Patent: August 23, 2016
    Assignee: Disney Enterprises, Inc.
    Inventors: Jeroen van Baar, Paul A. Beardsley, Marc Pollefeys, Markus Gross
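The combination of the three depth-support values described above reduces, per pixel, to picking the candidate depth with the highest combined score. This toy sketch assumes precomputed per-candidate scores and uniform weights; `fuse_depth` and the dict-based representation are illustrative, not the patented system.

```python
def fuse_depth(support_stereo, support_tof, support_color,
               weights=(1.0, 1.0, 1.0)):
    """Per pixel, pick the candidate depth with the highest combined
    support (stereo hypothesis + ToF sensor + auxiliary-color check).
    Each support_* maps pixel -> {candidate_depth: score}."""
    fused = {}
    for px in support_stereo:
        scores = {}
        for d in support_stereo[px]:
            scores[d] = (weights[0] * support_stereo[px].get(d, 0.0)
                         + weights[1] * support_tof.get(px, {}).get(d, 0.0)
                         + weights[2] * support_color.get(px, {}).get(d, 0.0))
        fused[px] = max(scores, key=scores.get)   # best-supported depth
    return fused

# Stereo alone slightly prefers 2.0; the ToF reading makes it decisive.
stereo = {(0, 0): {1.0: 0.4, 2.0: 0.5}}
tof = {(0, 0): {2.0: 0.9}}
color = {(0, 0): {1.0: 0.1}}
fused = fuse_depth(stereo, tof, color)
```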
  • Publication number: 20160210761
    Abstract: An apparatus determines a set of model data describing an object in three dimensions from two-dimensional image frames of the object. A camera of the apparatus takes the two-dimensional image frames, and a processor of the apparatus is adapted to determine an interim set of model data representing a portion of the object that is derivable from the set of image frames supplied by the camera so far.
    Type: Application
    Filed: September 18, 2014
    Publication date: July 21, 2016
    Applicant: ETH ZURICH
    Inventors: Marc POLLEFEYS, Petri TANSKANEN, Lorenz MEIER, Kalin KOLEV
  • Publication number: 20150112645
    Abstract: A method of determining a registration between Scanworlds may include determining a first viewpoint of a setting based on first point data of a first Scanworld. The first Scanworld may include information about the setting as taken by a first laser scanner at a first location. The method may further include determining a second viewpoint of the setting based on second point data of a second Scanworld. The second Scanworld may include information about the setting as taken by a second laser scanner at a second location. The method may further include generating a first rectified image based on the first viewpoint and generating a second rectified image based on the second viewpoint. Additionally, the method may include determining a registration between the first Scanworld and the second Scanworld based on the first viewpoint and the second viewpoint.
    Type: Application
    Filed: October 22, 2013
    Publication date: April 23, 2015
    Inventors: Bernhard Zeisl, Kevin Köser, Marc Pollefeys, Gregory Walsh
  • Publication number: 20140368615
    Abstract: To generate a pixel-accurate depth map, data from a range-estimation sensor (e.g., a time-of-flight sensor) is combined with data from multiple cameras to produce a high-quality depth measurement for pixels in an image. To do so, a depth measurement system may use a plurality of cameras mounted on a support structure to perform a depth hypothesis technique to generate a first depth-support value. Furthermore, the apparatus may include a range-estimation sensor which generates a second depth-support value. In addition, the system may project a 3D point onto the auxiliary cameras and compare the color of the associated pixel in the auxiliary camera with the color of the pixel in the reference camera to generate a third depth-support value. The system may combine these support values for each pixel in an image to determine respective depth values. Using these values, the system may generate a depth map for the image.
    Type: Application
    Filed: June 12, 2013
    Publication date: December 18, 2014
    Inventors: Jeroen van Baar, Paul A. Beardsley, Marc Pollefeys, Markus Gross
  • Patent number: 8913055
    Abstract: A system and method are disclosed for online mapping of large-scale environments using a hybrid representation of a metric Euclidean environment map and a topological map. The system includes a scene flow module, a location recognition module, a local adjustment module and a global adjustment module. The scene flow module is for detecting and tracking video features of the frames of an input video sequence. The scene flow module is also configured to identify multiple keyframes of the input video sequence and add the identified keyframes into an initial environment map of the input video sequence. The location recognition module is for detecting loop closures in the environment map. The local adjustment module enforces local metric properties of the keyframes in the environment map, and the global adjustment module is for optimizing the entire environment map subject to global metric properties of the keyframes in the keyframe pose graph.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: December 16, 2014
    Assignees: Honda Motor Co., Ltd., The University of North Carolina at Chapel Hill, ETH Zurich
    Inventors: Jongwoo Lim, Jan-Michael Frahm, Marc Pollefeys
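The keyframe pose graph described above combines sequential metric edges with loop-closure edges found by location recognition. A toy sketch, with exact descriptor matching standing in for real place recognition (both the function name and the edge encoding are assumptions):

```python
def build_pose_graph(keyframes):
    """Keyframes are (id, place_descriptor) pairs. Consecutive keyframes
    get a 'metric' edge (local constraint); a 'loop' edge is added when a
    place descriptor has been seen before (toy location recognition by
    exact match)."""
    edges, seen = [], {}
    prev = None
    for kid, desc in keyframes:
        if prev is not None:
            edges.append((prev, kid, "metric"))   # local metric constraint
        if desc in seen:
            edges.append((seen[desc], kid, "loop"))  # loop closure detected
        else:
            seen[desc] = kid
        prev = kid
    return edges

# Keyframe 2 revisits the place first seen at keyframe 0.
edges = build_pose_graph([(0, "a"), (1, "b"), (2, "a")])
```

A real system would then optimize this graph globally, as the global adjustment module does in the abstract.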
  • Publication number: 20130250056
    Abstract: An LRO format encoder provides images for three-dimensional (3D) viewing that are at least the quality of images produced by conventional encoding techniques at equivalent bandwidth, with encoding and decoding requiring less power than conventional formats. The LRO format facilitates encoding of multiple images using an innovative multiview low energy (MLE) CODEC that reduces the power consumption requirements of encoding and decoding 3D content, as compared to conventional techniques. A significant feature of the MLE CODEC is that a decoded view from a lower processing level is used for one of the components of the LRO format for at least one higher processing level.
    Type: Application
    Filed: October 6, 2011
    Publication date: September 26, 2013
    Applicant: NOMAD3D SAS
    Inventors: Alain Fogel, Marc Pollefeys
  • Publication number: 20120306847
    Abstract: A system and method are disclosed for online mapping of large-scale environments using a hybrid representation of a metric Euclidean environment map and a topological map. The system includes a scene flow module, a location recognition module, a local adjustment module and a global adjustment module. The scene flow module is for detecting and tracking video features of the frames of an input video sequence. The scene flow module is also configured to identify multiple keyframes of the input video sequence and add the identified keyframes into an initial environment map of the input video sequence. The location recognition module is for detecting loop closures in the environment map. The local adjustment module enforces local metric properties of the keyframes in the environment map, and the global adjustment module is for optimizing the entire environment map subject to global metric properties of the keyframes in the keyframe pose graph.
    Type: Application
    Filed: May 30, 2012
    Publication date: December 6, 2012
    Applicant: Honda Motor Co., Ltd.
    Inventors: Jongwoo Lim, Jan-Michael Frahm, Marc Pollefeys