Abstract: A method identifies and extracts images of one or more users. The method obtains an array of pixel values that constitutes a scene image and a corresponding array of depth values that constitutes a depth map. The depth map and the image are registered. The method obtains an array containing the 3D positions of the pixel value points in real-world coordinates by coordinate transformation of the depth map and image. The method then clusters the points into groups according to their relative positions so each group contains points in the same region of space and corresponds to a user location. The method defines individual volumes of interest around each user location. The method selects points from the array of 3D positions located in the volumes of interest to obtain segmentation masks for each user. The segmentation masks are then applied to the image to extract images of the users.
Abstract: The invention relates to a method for identifying and extracting images of one or more users in an interactive environment, comprising the steps of: (i) obtaining a depth map (7) of a scene in the form of an array of depth values, and an image (8) of said scene in the form of a corresponding array of pixel values, said depth map (7) and said image (8) being registered; (ii) applying a coordinate transformation to said depth map (7) and said image (8) for obtaining a corresponding array (15) containing the 3D positions, in a real-world coordinate system, of the pixel-value points; (iii) grouping said points according to their relative positions by using a clustering process (18), so that each group contains points that are in the same region of space and corresponds to a user location (19); (iv) defining individual volumes of interest (20), each corresponding to one of said user locations (19); (v) selecting, from said array (15) containing the 3D positions and pixel values, the points located in said volumes of interest for obtaining segmentation masks for each user; and (vi) applying said segmentation masks to said image (8) for extracting the images of the users.
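As a rough illustration of the pipeline described in the two abstracts above, the sketch below back-projects a registered depth map into real-world 3D coordinates, clusters the resulting points into user locations, builds an axis-aligned volume of interest around each location, and applies the resulting masks to the colour image. It is not the patented implementation: the camera intrinsics, the use of DBSCAN as the clustering process, and the volume-of-interest size are assumptions introduced for the example.

```python
# Illustrative sketch only: the abstracts do not name a camera model or a
# specific clustering process. The intrinsics (fx, fy, cx, cy), the choice of
# DBSCAN, the volume-of-interest size and all thresholds are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN


def backproject(depth_map, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Convert a registered depth map (H x W, metres) into an (H*W, 3) array
    of 3D point positions in a camera-centred real-world coordinate system."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)


def extract_users(image, depth_map, eps=0.25, min_samples=200, margin=0.75):
    """Cluster the 3D points into user locations, define a volume of interest
    around each location, and mask the image to extract one image per user."""
    points = backproject(depth_map)
    valid = points[:, 2] > 0                       # ignore pixels without depth
    labels = np.full(points.shape[0], -1, dtype=int)
    # A real-time implementation would subsample or work on a ground-plane
    # projection; clustering every pixel is shown here only for clarity.
    labels[valid] = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[valid])

    user_images = []
    for lab in np.unique(labels[labels >= 0]):
        centre = points[labels == lab].mean(axis=0)           # user location
        lo, hi = centre - margin, centre + margin             # axis-aligned volume of interest
        inside = valid & np.all((points >= lo) & (points <= hi), axis=1)
        mask = inside.reshape(depth_map.shape)                # segmentation mask
        user_images.append(np.where(mask[..., None], image, 0))
    return user_images
```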
Abstract: A device and a method for an interactive multimedia application comprise one or more live media capture devices providing a media stream, an engine comprising a real-time media processing module for processing said media stream, and rendering means connected to multimedia output devices. In addition, said device comprises (i) a virtual scenario description repository adapted for storing a plurality of scenarios expressed in a scenario programming language; (ii) a memory module adapted for storing an internal representation of one of said scenarios and an internal representation of a virtual scene; and (iii) a parser/loader for parsing a selected one of said plurality of scenarios and loading it into said memory module.
Type: Grant
Filed: April 26, 2005
Date of Patent: June 5, 2012
Assignee: Alterface S.A.
Inventors: David Ergo, Xavier Wielemans, Xavier Marichal
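For the interactive multimedia device described in the second abstract, the outline below shows one possible way the named components (scenario repository, parser/loader, memory module, real-time engine) could fit together. All class and method names are hypothetical, and a JSON string stands in for the unspecified scenario programming language; the abstract defines the architecture only at the level of modules.

```python
# Illustrative sketch only: class names, method names and the use of JSON as
# the scenario language are assumptions; the abstract names the modules but
# not their interfaces.
import json
from dataclasses import dataclass, field
from typing import Any, Dict, Optional


@dataclass
class ScenarioRepository:
    """Virtual scenario description repository: stores scenarios expressed in
    a scenario programming language (JSON is assumed here)."""
    scenarios: Dict[str, str] = field(default_factory=dict)

    def get(self, name: str) -> str:
        return self.scenarios[name]


@dataclass
class MemoryModule:
    """Holds the internal representation of the selected scenario and of the
    virtual scene."""
    scenario: Optional[Dict[str, Any]] = None
    scene: Dict[str, Any] = field(default_factory=dict)


class ParserLoader:
    """Parses a selected scenario and loads it into the memory module."""
    def parse_and_load(self, source: str, memory: MemoryModule) -> None:
        memory.scenario = json.loads(source)


class Engine:
    """Real-time loop: process each captured media frame, update the virtual
    scene, and hand the result to the rendering/output stage."""
    def __init__(self, memory: MemoryModule):
        self.memory = memory

    def process_frame(self, frame: Any) -> None:
        # stand-in for the real-time media processing module
        self.memory.scene["last_frame"] = frame

    def render(self) -> Dict[str, Any]:
        # stand-in for the rendering means driving the multimedia output devices
        return dict(self.memory.scene)


# Example wiring, mirroring the flow in the abstract:
repo = ScenarioRepository(scenarios={"demo": '{"actors": []}'})
memory = MemoryModule()
ParserLoader().parse_and_load(repo.get("demo"), memory)
engine = Engine(memory)
engine.process_frame("frame-0")
print(engine.render())
```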