Patents by Inventor Stephan Würmlin

Stephan Würmlin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9406131
    Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time. (An illustrative code sketch of this approach appears after the listing.)
    Type: Grant
    Filed: May 24, 2007
    Date of Patent: August 2, 2016
    Assignee: LIBEROVISION AG
    Inventors: Stephan Würmlin, Christoph Niederberger
  • Patent number: 8830236
    Abstract: A computer-implemented method for estimating a pose of an articulated object model that is a computer based 3D model of a real world object observed by one or more source cameras, including the steps of obtaining a source image from a video stream; processing the source image to extract a source image segment; maintaining, in a database, a set of reference silhouettes, each being associated with an articulated object model and a corresponding reference pose; comparing the source image segment to the reference silhouettes and selecting reference silhouettes by taking into account, for each reference silhouette, a matching error that indicates how closely the reference silhouette matches the source image segment; retrieving the corresponding reference poses of the articulated object models; and computing an estimate of the pose of the articulated object model from the reference poses of the selected reference silhouettes. (An illustrative retrieval-and-blending sketch appears after the listing.)
    Type: Grant
    Filed: April 28, 2011
    Date of Patent: September 9, 2014
    Assignee: Liberovision AG
    Inventors: Marcel Germann, Stephan Wuermlin Stadler, Richard Keiser, Remo Ziegler, Christoph Niederberger, Alexander Hornung, Markus Gross
  • Patent number: 8355083
    Abstract: What is disclosed is a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event. The methods can include the steps of: Defining a reference keyframe from a reference view from a source image sequence; From one or more keyframes, automatically computing one or more sets of virtual camera parameters; Generating a virtual camera flight path, which is described by a change of virtual camera parameters over time, and which defines a movement of a virtual camera and a corresponding change of a virtual view; and Rendering and storing a virtual video stream defined by the virtual camera flight path. (A minimal keyframe-interpolation sketch appears after the listing.)
    Type: Grant
    Filed: July 22, 2011
    Date of Patent: January 15, 2013
    Inventors: Richard Keiser, Christoph Niederberger, Stephan Wuermlin Stadler, Remo Ziegler
  • Publication number: 20120188452
    Abstract: What is disclosed is a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event. The methods can include the steps of: Defining a reference keyframe from a reference view from a source image sequence; From one or more keyframes, automatically computing one or more sets of virtual camera parameters; Generating a virtual camera flight path, which is described by a change of virtual camera parameters over time, and which defines a movement of a virtual camera and a corresponding change of a virtual view; and Rendering and storing a virtual video stream defined by the virtual camera flight path.
    Type: Application
    Filed: July 22, 2011
    Publication date: July 26, 2012
    Applicant: LIBEROVISION AG
    Inventors: Richard Keiser, Christoph Niederberger, Stephan Wuermlin Stadler, Remo Ziegler
  • Publication number: 20110267344
    Abstract: A computer-implemented method for estimating a pose of an articulated object model (4), wherein the articulated object model (4) is a computer based 3D model (1) of a real world object (14) observed by one or more source cameras (9), and wherein the pose of the articulated object model (4) is defined by the spatial location of joints (2) of the articulated object model (4), comprises the steps of obtaining a source image (10) from a video stream; processing the source image (10) to extract a source image segment (13); maintaining, in a database, a set of reference silhouettes, each being associated with an articulated object model (4) and a corresponding reference pose; comparing the source image segment (13) to the reference silhouettes and selecting reference silhouettes by taking into account, for each reference silhouette, a matching error that indicates how closely the reference silhouette matches the source image segment (13) and/or a coherence error that indicates how much the reference pose is con
    Type: Application
    Filed: April 28, 2011
    Publication date: November 3, 2011
    Applicant: LIBEROVISION AG
    Inventors: Marcel Germann, Stephan Wuermlin Stadler, Richard Keiser, Remo Ziegler, Christoph Niederberger, Alexander Hornung, Markus Gross
  • Publication number: 20090315978
    Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time.
    Type: Application
    Filed: May 24, 2007
    Publication date: December 24, 2009
    Applicant: EIDGENOSSISCHE TECHNISCHE HOCHSCHULE ZURICH
    Inventors: Stephan Würmlin, Christoph Niederberger
  • Patent number: 7324594
    Abstract: A system encodes videos acquired of a moving object in a scene by multiple fixed cameras. Camera calibration data of each camera are first determined. The camera calibration data of each camera are associated with the corresponding video. A segmentation mask for each frame of each video is determined. The segmentation mask identifies only foreground pixels in the frame associated with the object. A shape encoder then encodes the segmentation masks, a position encoder encodes a position of each pixel, and a color encoder encodes a color of each pixel. The encoded data can be combined into a single bitstream and transferred to a decoder. At the decoder, the bitstream is decoded to an output video having an arbitrary, user-selected viewpoint. A dynamic 3D point model defines a geometry of the moving object. Splat sizes and surface normals used during the rendering can be explicitly determined by the encoder, or implicitly by the decoder. (A toy stream-separation sketch appears after the listing.)
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: January 29, 2008
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Edouard Lamboray, Michael Waschbüsch, Stephan Würmlin, Markus Gross, Hanspeter Pfister
  • Patent number: 7034822
    Abstract: A method and system generate 3D video images from point samples obtained from primary video data in a 3D coordinate system. Each point sample contains 3D coordinates in a 3D coordinate system, as well as colour and/or intensity information. On subsequent rendering, the point samples are modified continuously as the primary 3D video data are updated. The point samples are arranged in a hierarchical data structure such that each point sample is an end point, or leaf node, of a hierarchical tree, wherein the branch points of the tree are average values of the nodes lower in the hierarchy. (A small bottom-up averaging sketch appears after the listing.)
    Type: Grant
    Filed: June 18, 2003
    Date of Patent: April 25, 2006
    Assignee: Swiss Federal Institute of Technology Zurich
    Inventors: Markus Gross, Edouard Lamboray, Stephan Würmlin
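
Illustrative code sketches

The sketches below loosely illustrate the ideas summarised in the abstracts above; they are not the patented implementations, and all function and type names are assumptions made for the examples.

For patent 9406131 (multi-camera tracking of a dynamic 3D scene), this sketch shows one simple way two calibrated viewing rays could be combined into a 3D position, and how a constant-velocity prediction from earlier 3D positions could seed the next 2D tracking step. `Ray`, `triangulate_midpoint`, and `predict_next_position` are illustrative names, not terms from the patent.

```python
# Minimal sketch: combine two camera observations into a 3D position and use
# earlier 3D positions to predict where to look next (illustrative only).
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

def sub(a: Vec3, b: Vec3) -> Vec3: return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a: Vec3, b: Vec3) -> Vec3: return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def scale(a: Vec3, s: float) -> Vec3: return (a[0] * s, a[1] * s, a[2] * s)
def dot(a: Vec3, b: Vec3) -> float: return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class Ray:
    origin: Vec3      # camera centre, from the calibrated camera parameters
    direction: Vec3   # viewing ray through the tracked object's 2D position

def triangulate_midpoint(r1: Ray, r2: Ray) -> Vec3:
    """Midpoint of the shortest segment between two viewing rays: a simple
    stand-in for multi-view triangulation."""
    w0 = sub(r1.origin, r2.origin)
    a, b, c = dot(r1.direction, r1.direction), dot(r1.direction, r2.direction), dot(r2.direction, r2.direction)
    d, e = dot(r1.direction, w0), dot(r2.direction, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:              # near-parallel rays: degenerate case
        t, s = 0.0, e / c
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    p1 = add(r1.origin, scale(r1.direction, t))
    p2 = add(r2.origin, scale(r2.direction, s))
    return scale(add(p1, p2), 0.5)

def predict_next_position(prev: Vec3, prev2: Vec3) -> Vec3:
    """Constant-velocity prediction from the two most recent 3D positions;
    projecting this point into each camera would seed the next 2D tracking step."""
    velocity = sub(prev, prev2)
    return add(prev, velocity)

# Example: two cameras observing a point near (0, 0, 10)
r1 = Ray(origin=(-5.0, 0.0, 0.0), direction=(0.447, 0.0, 0.894))
r2 = Ray(origin=(5.0, 0.0, 0.0), direction=(-0.447, 0.0, 0.894))
print(triangulate_midpoint(r1, r2))   # approximately (0, 0, 10)
```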
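
For patent 8830236 (pose estimation from silhouettes), this sketch caricatures the retrieval step: score reference silhouettes against the source segment with a simple pixel-mismatch matching error, keep the best few, and blend their reference poses weighted by inverse error. The binary-bitmap silhouettes, the mismatch metric, and the blending scheme are assumptions for illustration; the coherence error against a preceding pose described in the related publication is omitted.

```python
# Illustrative silhouette retrieval and pose blending (not the patented algorithm).
from typing import Dict, List, Tuple

Silhouette = List[List[int]]                    # 0/1 pixel grid
Pose = Dict[str, Tuple[float, float, float]]    # joint name -> 3D position

def matching_error(a: Silhouette, b: Silhouette) -> float:
    """Fraction of pixels where two equal-sized silhouettes disagree."""
    mismatches = sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return mismatches / (len(a) * len(a[0]))

def estimate_pose(source: Silhouette,
                  references: List[Tuple[Silhouette, Pose]],
                  k: int = 3) -> Pose:
    """Select the k reference silhouettes with the smallest matching error and
    average their joint positions, weighted by inverse error."""
    scored = sorted(((matching_error(source, sil), pose) for sil, pose in references),
                    key=lambda item: item[0])[:k]
    weights = [1.0 / (err + 1e-6) for err, _ in scored]
    total_w = sum(weights)
    return {joint: tuple(sum(w * pose[joint][i] for w, (_, pose) in zip(weights, scored)) / total_w
                         for i in range(3))
            for joint in scored[0][1]}

# Tiny demo with 2x2 silhouettes and two reference poses
src = [[1, 1], [0, 1]]
refs = [([[1, 1], [0, 1]], {"hip": (0.0, 1.0, 0.0)}),
        ([[0, 0], [1, 0]], {"hip": (5.0, 1.0, 0.0)})]
print(estimate_pose(src, refs, k=2))   # dominated by the closer silhouette
```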
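
For patent 8355083 (virtual camera flight paths), this sketch interpolates virtual camera parameters between two keyframes, yielding one parameter set per output frame that would then drive the renderer of the virtual video stream. Linear interpolation and the `CameraParams` fields are assumptions; a production system would likely use smoother (for example spline or eased) interpolation.

```python
# Illustrative virtual camera flight path between two keyframes (assumptions only).
from dataclasses import dataclass

@dataclass
class CameraParams:
    position: tuple      # virtual camera centre (x, y, z)
    look_at: tuple       # point the camera is aimed at
    fov_deg: float       # field of view, standing in for the zoom setting

def lerp(a, b, t):
    """Component-wise linear interpolation between two tuples."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def flight_path(key_a: CameraParams, key_b: CameraParams, frames: int):
    """Yield one interpolated parameter set per output frame."""
    for i in range(frames):
        t = i / (frames - 1)
        yield CameraParams(
            position=lerp(key_a.position, key_b.position, t),
            look_at=lerp(key_a.look_at, key_b.look_at, t),
            fov_deg=key_a.fov_deg + (key_b.fov_deg - key_a.fov_deg) * t,
        )

# Each yielded parameter set defines one virtual view along the flight path.
start = CameraParams(position=(0.0, 20.0, -40.0), look_at=(0.0, 0.0, 0.0), fov_deg=40.0)
end = CameraParams(position=(30.0, 5.0, -10.0), look_at=(0.0, 1.0, 0.0), fov_deg=25.0)
for params in flight_path(start, end, frames=5):
    print(params)
```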
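
For patent 7324594 (point-sample video coding), this sketch separates a frame into the three streams named in the abstract: a shape stream (here a toy run-length coding of the segmentation mask), a position stream, and a colour stream for foreground pixels only. The run-length coder and the data layout are illustrative stand-ins, not the patented codec.

```python
# Illustrative separation of a frame into shape, position, and colour streams.
from typing import List, Tuple

Pixel = Tuple[int, int, int]          # RGB
Frame = List[List[Pixel]]
Mask = List[List[bool]]               # True = foreground (belongs to the object)

def run_length_encode(mask: Mask) -> List[Tuple[bool, int]]:
    """Toy shape encoder: run-length encode the mask in scanline order."""
    runs: List[Tuple[bool, int]] = []
    for value in (v for row in mask for v in row):
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

def encode_frame(frame: Frame, mask: Mask):
    """Build separate shape, position, and colour streams for foreground pixels;
    a real encoder would compress each stream and multiplex them into one bitstream."""
    shape_stream = run_length_encode(mask)
    position_stream = [(x, y) for y, row in enumerate(mask)
                       for x, fg in enumerate(row) if fg]
    colour_stream = [frame[y][x] for (x, y) in position_stream]
    return shape_stream, position_stream, colour_stream

# Tiny 2x2 example frame and mask
frame = [[(255, 0, 0), (0, 0, 0)],
         [(0, 0, 0), (255, 0, 0)]]
mask = [[True, False],
        [False, True]]
print(encode_frame(frame, mask))
```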
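
For patent 7034822 (hierarchical point samples), this sketch builds a tree bottom-up from point samples so that every branch node stores the average position and colour of its children, giving coarser approximations at higher levels. The binary grouping and the `Node` layout are assumptions for the example.

```python
# Illustrative point-sample hierarchy: leaves are samples, branch nodes are averages.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    position: tuple                            # (x, y, z), averaged for branch nodes
    colour: tuple                              # (r, g, b), averaged for branch nodes
    children: Optional[List["Node"]] = None    # None for leaf point samples

def average(values: List[tuple]) -> tuple:
    """Component-wise average of a list of equal-length tuples."""
    n = len(values)
    return tuple(sum(v[i] for v in values) / n for i in range(len(values[0])))

def build_hierarchy(samples: List[Node], branching: int = 2) -> Node:
    """Group point samples bottom-up; each branch node averages its children."""
    level = samples
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level), branching):
            group = level[i:i + branching]
            next_level.append(Node(position=average([n.position for n in group]),
                                   colour=average([n.colour for n in group]),
                                   children=group))
        level = next_level
    return level[0]

# When the primary video data change a leaf sample, the averages on the path
# from that leaf to the root would be recomputed.
leaves = [Node((float(i), 0.0, 0.0), (i * 10.0, 0.0, 0.0)) for i in range(4)]
root = build_hierarchy(leaves)
print(root.position, root.colour)   # averages over all four samples
```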