Patents by Inventor Stephan Würmlin
Stephan Würmlin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9406131
Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time.
Type: Grant
Filed: May 24, 2007
Date of Patent: August 2, 2016
Assignee: LIBEROVISION AG
Inventors: Stephan Würmlin, Christoph Niederberger
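The key geometric step in this abstract — determining the 3D position of an object by combining its image positions in at least two calibrated views — corresponds to classical two-view triangulation. The sketch below is illustrative, not taken from the patent: the camera matrices, the toy point, and the helper name `triangulate` are all assumptions, using the standard linear (DLT) formulation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views.

    P1, P2 : 3x4 camera projection matrices (intrinsics times extrinsics).
    x1, x2 : (u, v) image coordinates of the same object in each view.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras observing the point (1, 2, 10) from different positions.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-4.0], [0], [0]])])  # shifted 4 units along x

X_true = np.array([1.0, 2.0, 10.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

print(np.round(triangulate(P1, P2, x1, x2), 3))  # recovers [1, 2, 10]
```

The abstract's tracking step then feeds such 3D positions from earlier instants back into the 2D trackers, which is what distinguishes the claimed method from independent per-camera tracking.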
-
Patent number: 8830236
Abstract: A computer-implemented method for estimating a pose of an articulated object model that is a computer based 3D model of a real world object observed by one or more source cameras, including the steps of: obtaining a source image from a video stream; processing the source image to extract a source image segment; maintaining, in a database, a set of reference silhouettes, each being associated with an articulated object model and a corresponding reference pose; comparing the source image segment to the reference silhouettes and selecting reference silhouettes by taking into account, for each reference silhouette, a matching error that indicates how closely the reference silhouette matches the source image segment; retrieving the corresponding reference poses of the articulated object models; and computing an estimate of the pose of the articulated object model from the reference poses of the selected reference silhouettes.
Type: Grant
Filed: April 28, 2011
Date of Patent: September 9, 2014
Assignee: Liberovision AG
Inventors: Marcel Germann, Stephan Wuermlin Stadler, Richard Keiser, Remo Ziegler, Christoph Niederberger, Alexander Hornung, Marcus Gross
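The matching step described above can be illustrated with a minimal sketch: binary silhouettes are compared with a pixel-wise disagreement (XOR) error, the best-matching references are selected, and their associated poses are blended. The error measure, the inverse-error weighting, and all names here are assumptions for illustration; the patent does not prescribe them.

```python
import numpy as np

def matching_error(segment, silhouette):
    """Fraction of pixels where the two binary masks disagree (XOR error)."""
    return np.mean(segment != silhouette)

def estimate_pose(segment, references, k=2):
    """Estimate a pose from the k best-matching reference silhouettes.

    references : list of (silhouette, pose) pairs, where a pose is an
                 (n_joints, 3) array of joint positions.
    The estimate is an inverse-error weighted blend of the selected poses.
    """
    errors = np.array([matching_error(segment, s) for s, _ in references])
    best = np.argsort(errors)[:k]          # indices of the k closest matches
    weights = 1.0 / (errors[best] + 1e-9)  # closer match -> larger weight
    weights /= weights.sum()
    poses = np.stack([references[i][1] for i in best])
    return np.tensordot(weights, poses, axes=1)

# Toy 4x4 masks and single-joint "poses" (purely illustrative).
seg = np.array([[0,1,1,0],[0,1,1,0],[0,1,0,0],[0,1,0,0]], dtype=bool)
refs = [
    (seg.copy(),                  np.array([[0.0, 1.0, 0.0]])),  # exact match
    (np.zeros((4,4), dtype=bool), np.array([[5.0, 5.0, 5.0]])),  # poor match
]
print(estimate_pose(seg, refs, k=1))  # pose of the exact match: [[0. 1. 0.]]
```

In the patented pipeline the silhouettes come from segmenting real camera images, and the database poses come from articulated 3D models; the blend above stands in for the final pose-computation step.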
-
Patent number: 8355083
Abstract: What is disclosed is a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event. The methods can include the steps of: defining a reference keyframe from a reference view from a source image sequence; from one or more keyframes, automatically computing one or more sets of virtual camera parameters; generating a virtual camera flight path, which is described by a change of virtual camera parameters over time, and which defines a movement of a virtual camera and a corresponding change of a virtual view; and rendering and storing a virtual video stream defined by the virtual camera flight path.
Type: Grant
Filed: July 22, 2011
Date of Patent: January 15, 2013
Inventors: Richard Keiser, Christoph Niederberger, Stephan Wuermlin Stadler, Remo Ziegler
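A virtual camera flight path, "described by a change of virtual camera parameters over time," can be sketched as interpolation between two keyframe parameter sets. The patent leaves the interpolation scheme open; the linear blend, the parameter-dictionary layout, and the function name below are illustrative assumptions.

```python
import numpy as np

def flight_path(cam_a, cam_b, n_frames):
    """Linearly interpolate virtual-camera parameters between two keyframes.

    cam_a, cam_b : dicts with 'position' (3-vector) and 'fov' (degrees).
    Yields one parameter set per frame along the virtual flight path.
    A real system would also blend orientation (e.g. quaternion slerp)
    and could use smoother, non-linear easing.
    """
    for t in np.linspace(0.0, 1.0, n_frames):
        yield {
            "position": (1 - t) * np.asarray(cam_a["position"])
                        + t * np.asarray(cam_b["position"]),
            "fov": (1 - t) * cam_a["fov"] + t * cam_b["fov"],
        }

# Two hypothetical keyframe camera parameter sets.
key_a = {"position": [0.0, 10.0, -20.0], "fov": 40.0}
key_b = {"position": [15.0, 10.0, 0.0], "fov": 25.0}

path = list(flight_path(key_a, key_b, 5))
print(path[2]["position"], path[2]["fov"])  # midpoint of the flight
```

Rendering a virtual view for each yielded parameter set then produces the stored virtual video stream the claim describes.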
-
Publication number: 20120188452
Abstract: What is disclosed is a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event. The methods can include the steps of: defining a reference keyframe from a reference view from a source image sequence; from one or more keyframes, automatically computing one or more sets of virtual camera parameters; generating a virtual camera flight path, which is described by a change of virtual camera parameters over time, and which defines a movement of a virtual camera and a corresponding change of a virtual view; and rendering and storing a virtual video stream defined by the virtual camera flight path.
Type: Application
Filed: July 22, 2011
Publication date: July 26, 2012
Applicant: LIBEROVISION AG
Inventors: Richard Keiser, Christoph Niederberger, Stephan Wuermlin Stadler, Remo Ziegler
-
Publication number: 20110267344
Abstract: A computer-implemented method for estimating a pose of an articulated object model (4), wherein the articulated object model (4) is a computer based 3D model (1) of a real world object (14) observed by one or more source cameras (9), and wherein the pose of the articulated object model (4) is defined by the spatial location of joints (2) of the articulated object model (4), comprises the steps of: obtaining a source image (10) from a video stream; processing the source image (10) to extract a source image segment (13); maintaining, in a database, a set of reference silhouettes, each being associated with an articulated object model (4) and a corresponding reference pose; comparing the source image segment (13) to the reference silhouettes and selecting reference silhouettes by taking into account, for each reference silhouette, a matching error that indicates how closely the reference silhouette matches the source image segment (13) and/or a coherence error that indicates how much the reference pose is con…
Type: Application
Filed: April 28, 2011
Publication date: November 3, 2011
Applicant: LIBEROVISION AG
Inventors: Marcel Germann, Stephan Wuermlin Stadler, Richard Keiser, Remo Ziegler, Christoph Niederberger, Alexander Hornung, Marcus Gross
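Beyond the matching error, this application introduces a coherence error that penalizes candidate reference poses that disagree with the pose estimated for the previous frame. A minimal sketch, assuming mean joint displacement as the coherence measure and a simple weighted sum for combining the two errors; neither formula is specified by the publication, and all names are illustrative.

```python
import numpy as np

def coherence_error(candidate_pose, previous_pose):
    """Mean joint displacement between a candidate reference pose and the
    pose estimated for the previous frame; large values mean the candidate
    is temporally implausible. Poses are (n_joints, 3) arrays."""
    return np.mean(np.linalg.norm(candidate_pose - previous_pose, axis=1))

def combined_score(match_err, coh_err, alpha=0.5):
    """Hypothetical weighting of the two errors; lower is better."""
    return alpha * match_err + (1 - alpha) * coh_err

# Two-joint toy poses (purely illustrative).
prev = np.array([[0.0, 1.0, 0.0], [0.0, 0.5, 0.0]])
cand_near = prev + 0.05                        # small motion since last frame
cand_far = prev + np.array([2.0, 0.0, 0.0])    # implausible jump

print(coherence_error(cand_near, prev) < coherence_error(cand_far, prev))  # True
```

Combining both errors lets the selection prefer silhouettes that not only match the current image but also continue the motion smoothly.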
-
Publication number: 20090315978
Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time.
Type: Application
Filed: May 24, 2007
Publication date: December 24, 2009
Applicant: EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH
Inventors: Stephan Würmlin, Christoph Niederberger
-
Patent number: 7324594
Abstract: A system encodes videos acquired of a moving object in a scene by multiple fixed cameras. Camera calibration data of each camera are first determined. The camera calibration data of each camera are associated with the corresponding video. A segmentation mask for each frame of each video is determined. The segmentation mask identifies only foreground pixels in the frame associated with the object. A shape encoder then encodes the segmentation masks, a position encoder encodes a position of each pixel, and a color encoder encodes a color of each pixel. The encoded data can be combined into a single bitstream and transferred to a decoder. At the decoder, the bitstream is decoded to an output video having an arbitrary user-selected viewpoint. A dynamic 3D point model defines a geometry of the moving object. Splat sizes and surface normals used during the rendering can be determined explicitly by the encoder, or implicitly by the decoder.
Type: Grant
Filed: November 26, 2003
Date of Patent: January 29, 2008
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Edouard Lamboray, Michael Waschbüsch, Stephan Würmlin, Markus Gross, Hanspeter Pfister
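The front end of this pipeline — a segmentation mask marking foreground pixels, whose positions and colors then go to the position and color encoders — can be sketched with simple background subtraction. The threshold, the background model, and the helper names are illustrative assumptions, not the patent's method.

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Segmentation mask: True where the frame differs from a static
    background model by more than a per-pixel color-distance threshold."""
    diff = np.linalg.norm(frame.astype(float) - background.astype(float), axis=-1)
    return diff > threshold

def encode_foreground(frame, mask):
    """Keep only foreground pixels: their (row, col) positions and colors.
    These arrays stand in for the inputs to the patent's position and
    color encoders, which would further compress them."""
    positions = np.argwhere(mask)  # (n, 2) array of pixel coordinates
    colors = frame[mask]           # (n, 3) array of RGB values
    return positions, colors

# Toy 4x4 RGB frame: black background with a 2x2 red "object".
background = np.zeros((4, 4, 3), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = [200, 0, 0]

mask = foreground_mask(frame, background)
positions, colors = encode_foreground(frame, mask)
print(len(positions))  # 4 foreground pixels
```

Because only foreground pixels are encoded, the bitstream size scales with the object, not the frame, which is what makes the arbitrary-viewpoint decoding practical.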
-
Patent number: 7034822
Abstract: A method and system generates 3D video images from point samples obtained from primary video data in a 3D coordinate system. Each point sample contains 3D coordinates in a 3D coordinate system, as well as colour and/or intensity information. On subsequent rendering, the point samples are modified continuously according to an updating of the 3D primary video data. The point samples are arranged in a hierarchic data structure in a manner such that each point sample is an end point, or leaf node, in a hierarchical tree, wherein the branch points in the hierarchy tree are average values of the nodes lower in the hierarchy of the tree.
Type: Grant
Filed: June 18, 2003
Date of Patent: April 25, 2006
Assignee: Swiss Federal Institute of Technology Zurich
Inventors: Markus Gross, Edouard Lamboray, Stephan Würmlin
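The hierarchy described here — leaf nodes holding individual point samples, branch nodes holding averages of the nodes below them — can be sketched as a simple recursive build. The binary median split along one axis is an illustrative choice; the patent does not fix a particular partitioning.

```python
import numpy as np

class Node:
    """Node in a point-sample hierarchy. Leaves hold one point sample
    (position + color); branch nodes hold the average of their children,
    so any cut through the tree yields a coarser approximation of the
    point set, useful for level-of-detail rendering."""
    def __init__(self, position, color, children=()):
        self.position = np.asarray(position, dtype=float)
        self.color = np.asarray(color, dtype=float)
        self.children = list(children)

def build(points, colors):
    """Recursively build a binary hierarchy by median split along x."""
    if len(points) == 1:
        return Node(points[0], colors[0])
    order = np.argsort([p[0] for p in points])
    mid = len(points) // 2
    left = build([points[i] for i in order[:mid]],
                 [colors[i] for i in order[:mid]])
    right = build([points[i] for i in order[mid:]],
                  [colors[i] for i in order[mid:]])
    # Branch node = average of its two children, per the abstract.
    return Node((left.position + right.position) / 2,
                (left.color + right.color) / 2,
                [left, right])

pts = [[0, 0, 0], [2, 0, 0], [4, 0, 0], [6, 0, 0]]
cols = [[255, 0, 0]] * 4
root = build(pts, cols)
print(root.position)  # average over all four leaves: [3. 0. 0.]
```

When the primary video data changes, updating a leaf and re-averaging its ancestors keeps every level of the tree consistent, which is what supports the continuous modification of point samples during rendering.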