Patents by Inventor Stephan Wurmlin

Stephan Wurmlin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9406131
    Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time.
    Type: Grant
    Filed: May 24, 2007
    Date of Patent: August 2, 2016
    Assignee: LIBEROVISION AG
    Inventors: Stephan Würmlin, Christoph Niederberger
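The abstract above (patent 9406131) combines 2D observations from calibrated cameras into 3D positions and feeds earlier 3D positions back into the 2D trackers. As a rough illustration only, not the patented method, here is a minimal sketch of two-view linear triangulation plus a constant-velocity 3D predictor; the function names, camera matrices and motion model are invented for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 camera
    projection matrices P1, P2 and 2D observations x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def predict_2d(P, X_prev, X_prev2):
    """Seed the 2D tracker in one view by projecting a constant-velocity
    extrapolation of the object's 3D positions at two earlier instants."""
    X_pred = X_prev + (X_prev - X_prev2)       # constant-velocity model
    x = P @ np.append(X_pred, 1.0)
    return x[:2] / x[2]
```

With noiseless synthetic cameras the triangulated point matches the ground truth exactly; real footage would need the camera parameters (orientation, zoom) mentioned in the abstract, plus a robust 2D tracker.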
  • Publication number: 20140037213
    Abstract: A method of processing image data includes providing an image sequence, such as a video sequence or a camera transition; identifying a region-of-interest in at least one image of the sequence; defining a transition region around the region-of-interest and defining the remaining portion of the image as a default or background region; and applying different image effects to the region-of-interest, the transition region and the background region.
    Type: Application
    Filed: April 2, 2012
    Publication date: February 6, 2014
    Applicant: LIBEROVISION AG
    Inventors: Christoph Niederberger, Stephan Würmlin Stadler, Remo Ziegler, Marco Feriencik, Andreas Burch, Urs Donni, Richard Keiser, Julia Vogel Wenzin
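Publication 20140037213 above splits an image into a region-of-interest, a transition band and a background, each receiving a different effect. A minimal sketch of that idea, not the patented method: the binary ROI mask is softened into a blend weight, so the transition region falls out of the soft band. The effect choices (keep colour inside the ROI, desaturate the background) and all names are invented for the example:

```python
import numpy as np

def soften(mask, iters=3):
    """Spread a binary ROI mask into a weight map in [0, 1] by repeated
    neighbour averaging; the soft band is the transition region."""
    w = mask.astype(float)
    for _ in range(iters):
        w = (w + np.roll(w, 1, 0) + np.roll(w, -1, 0)
               + np.roll(w, 1, 1) + np.roll(w, -1, 1)) / 5.0
    return w

def apply_region_effects(image, roi_mask, iters=3):
    """Keep full colour inside the ROI, desaturate the background, and
    blend linearly across the transition band around the ROI boundary."""
    w = soften(roi_mask, iters)[..., None]
    desaturated = image.mean(axis=-1, keepdims=True) * np.ones(3)
    return w * image + (1.0 - w) * desaturated
```

In a production pipeline the soft weight would typically come from a proper distance transform or Gaussian blur rather than the toy averaging loop above.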
  • Publication number: 20090315978
    Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time.
    Type: Application
    Filed: May 24, 2007
    Publication date: December 24, 2009
    Applicant: EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH
    Inventors: Stephan Würmlin, Christoph Niederberger
  • Patent number: 7324594
    Abstract: A system encodes videos acquired of a moving object in a scene by multiple fixed cameras. Camera calibration data of each camera are first determined. The camera calibration data of each camera are associated with the corresponding video. A segmentation mask for each frame of each video is determined. The segmentation mask identifies only foreground pixels in the frame associated with the object. A shape encoder then encodes the segmentation masks, a position encoder encodes a position of each pixel, and a color encoder encodes a color of each pixel. The encoded data can be combined into a single bitstream and transferred to a decoder. At the decoder, the bitstream is decoded to an output video having an arbitrary user selected viewpoint. A dynamic 3D point model defines a geometry of the moving object. Splat sizes and surface normals used during the rendering can be explicitly determined by the encoder, or implicitly by the decoder.
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: January 29, 2008
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Edouard Lamboray, Michael Waschbüsch, Stephan Würmlin, Markus Gross, Hanspeter Pfister
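Patent 7324594 above describes separate coders for shape (segmentation masks), pixel positions and pixel colors, packed into a single bitstream. A toy per-frame round-trip in that spirit, not the patented codec: each component is compressed independently (here just with `zlib`) and concatenated behind a small length header. All names and the wire layout are invented for the example:

```python
import struct
import zlib
import numpy as np

def encode_frame(mask, positions, colors):
    """Compress mask (bool HxW), positions (Nx2 uint16) and colors
    (Nx3 uint8) with separate coders, then pack one bitstream."""
    shape_bits = zlib.compress(np.packbits(mask).tobytes())
    pos_bits = zlib.compress(positions.astype(np.uint16).tobytes())
    col_bits = zlib.compress(colors.astype(np.uint8).tobytes())
    header = struct.pack("<3I", len(shape_bits), len(pos_bits), len(col_bits))
    return header + shape_bits + pos_bits + col_bits

def decode_frame(stream, mask_shape):
    """Split the bitstream using the header and invert each coder."""
    n1, n2, n3 = struct.unpack_from("<3I", stream)
    off = 12
    shape_bits = stream[off:off + n1]; off += n1
    pos_bits = stream[off:off + n2]; off += n2
    col_bits = stream[off:off + n3]
    mask = np.unpackbits(
        np.frombuffer(zlib.decompress(shape_bits), np.uint8)
    )[:mask_shape[0] * mask_shape[1]].reshape(mask_shape).astype(bool)
    positions = np.frombuffer(zlib.decompress(pos_bits), np.uint16).reshape(-1, 2)
    colors = np.frombuffer(zlib.decompress(col_bits), np.uint8).reshape(-1, 3)
    return mask, positions, colors
```

Separating the coders lets each one exploit its component's statistics (binary masks, spatially coherent positions, correlated colors), which is the motivation the abstract implies.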
  • Patent number: 7034822
    Abstract: A method and system generates 3D video images from point samples obtained from primary video data in a 3D coordinate system. Each point sample contains 3D coordinates in a 3D coordinate system, as well as colour and/or intensity information. On subsequent rendering, the point samples are modified continuously according to an updating of the 3D primary video data. The point samples are arranged in a hierarchical data structure in a manner such that each point sample is an end point, or leaf node, in a hierarchical tree, wherein the branch points in the hierarchy tree are average values of the nodes lower in the hierarchy of the tree.
    Type: Grant
    Filed: June 18, 2003
    Date of Patent: April 25, 2006
    Assignee: Swiss Federal Institute of Technology Zurich
    Inventors: Markus Gross, Edouard Lamboray, Stephan Würmlin
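Patent 7034822 above places point samples at the leaves of a tree whose branch nodes hold averages of the nodes below them, so the cloud can be read at any level of detail. A minimal sketch of that structure, not the patented system; the class and function names are invented for the example:

```python
import numpy as np

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value          # average attributes of the subtree
        self.left = left
        self.right = right

def build_tree(samples):
    """Leaves hold individual point samples; every branch node stores
    the average of its two children, mirroring the abstract's hierarchy."""
    if len(samples) == 1:
        return Node(np.asarray(samples[0], dtype=float))
    mid = len(samples) // 2
    left, right = build_tree(samples[:mid]), build_tree(samples[mid:])
    return Node((left.value + right.value) / 2.0, left, right)

def cut_at_depth(node, depth):
    """Read the tree at a given depth: small depth -> coarse point cloud,
    full depth -> the original samples."""
    if depth == 0 or node.left is None:
        return [node.value]
    return cut_at_depth(node.left, depth - 1) + cut_at_depth(node.right, depth - 1)
```

Because branch values are precomputed averages, a renderer can stop descending wherever a subtree projects to less than a pixel, which is the usual payoff of such hierarchies.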
  • Publication number: 20050117019
    Abstract: A system encodes videos acquired of a moving object in a scene by multiple fixed cameras. Camera calibration data of each camera are first determined. The camera calibration data of each camera are associated with the corresponding video. A segmentation mask for each frame of each video is determined. The segmentation mask identifies only foreground pixels in the frame associated with the object. A shape encoder then encodes the segmentation masks, a position encoder encodes a position of each pixel, and a color encoder encodes a color of each pixel. The encoded data can be combined into a single bitstream and transferred to a decoder. At the decoder, the bitstream is decoded to an output video having an arbitrary user selected viewpoint. A dynamic 3D point model defines a geometry of the moving object. Splat sizes and surface normals used during the rendering can be explicitly determined by the encoder, or implicitly by the decoder.
    Type: Application
    Filed: November 26, 2003
    Publication date: June 2, 2005
    Inventors: Edouard Lamboray, Michael Waschbüsch, Stephan Würmlin, Markus Gross, Hanspeter Pfister
  • Publication number: 20050017968
    Abstract: A method provides a virtual reality environment by acquiring multiple videos of an object such as a person at one location with multiple cameras. The videos are reduced to a differential stream of 3D operators and associated operands. These are used to maintain a 3D model of point samples representing the object. The point samples have 3D coordinates and intensity information derived from the videos. The 3D model of the person can then be rendered from any arbitrary point of view at another remote location while acquiring and reducing the video and maintaining the 3D model in real-time.
    Type: Application
    Filed: July 21, 2003
    Publication date: January 27, 2005
    Inventors: Stephan Würmlin, Markus Gross, Edouard Lamboray
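Publication 20050017968 above reduces the videos to a differential stream of 3D operators and operands that maintain a remote point model. A minimal sketch of replaying such a stream, not the patented protocol; the operator names (`insert`, `update`, `delete`) and the dict-based model are invented for the example:

```python
def apply_ops(model, ops):
    """Replay a differential stream of 3D operators onto a point model,
    here a dict {point_id: (position, colour)}: 'insert' and 'update'
    set a sample, 'delete' removes it."""
    for op, pid, payload in ops:
        if op in ("insert", "update"):
            model[pid] = payload
        elif op == "delete":
            model.pop(pid, None)
        else:
            raise ValueError(f"unknown operator: {op}")
    return model
```

Shipping only the differences between frames, rather than full point clouds, is what makes the real-time remote rendering described in the abstract plausible over a network.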
  • Publication number: 20040046864
    Abstract: A method and system generates 3D video images from point samples obtained from primary video data in a 3D coordinate system. Each point sample contains 3D coordinates in a 3D coordinate system, as well as colour and/or intensity information. On subsequent rendering, the point samples are modified continuously according to an updating of the 3D primary video data. The point samples are arranged in a hierarchical data structure in a manner such that each point sample is an end point, or leaf node, in a hierarchical tree, wherein the branch points in the hierarchy tree are average values of the nodes lower in the hierarchy of the tree.
    Type: Application
    Filed: June 18, 2003
    Publication date: March 11, 2004
    Inventors: Markus Gross, Edouard Lamboray, Stephan Würmlin