Abstract: A method and apparatus for performing gesture recognition. In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, the one or more raw frames representing a time sequence of images; determining one or more regions of the one or more received raw frames that comprise highly textured regions; segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof; determining one or more regions of the one or more received raw frames that comprise other than highly textured regions; and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
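The two-branch segmentation described in the abstract could be sketched as follows. This is a minimal illustration, not the patented method: the local-variance texture measure, the threshold values, and the label scheme are all assumptions chosen for clarity.

```python
import numpy as np

def local_variance(gray, k=5):
    # Texture measure (an assumption): per-pixel variance over a k x k
    # neighborhood. High variance is taken as "highly textured".
    pad = k // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.var(axis=(-1, -2))

def segment_frame(frame, texture_thresh=50.0):
    # frame: H x W x 3 array (one raw frame from a camera).
    gray = frame.mean(axis=2)
    tex = local_variance(gray)
    textured_mask = tex > texture_thresh

    # Branch 1: highly textured regions segmented by texture strength
    # (a stand-in for the abstract's "textured features").
    tex_labels = np.digitize(tex, [texture_thresh, 4 * texture_thresh])

    # Branch 2: remaining regions segmented by color; here simply the
    # dominant channel, a stand-in for a real color-clustering step.
    color_labels = frame.argmax(axis=2)

    # Offset texture labels so the two label spaces do not collide.
    labels = np.where(textured_mask, 10 + tex_labels, color_labels)
    return labels, textured_mask
```

Tracking the resulting segments through the time sequence (the abstract's final step) would then associate labels across successive frames, e.g. by overlap or centroid distance.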
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
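The coincidence test at the core of this abstract, mapping a user object and a displayed image into a shared volumetric field and firing a programmed function when their positions substantially coincide, could be sketched as below. The distance-based definition of "substantially coincident", the tolerance value, and the callback interface are illustrative assumptions.

```python
import math

def is_coincident(user_pos, image_pos, tolerance=0.02):
    # Positions are (x, y, z) tuples already mapped into the shared
    # interactive volumetric field. "Substantially coincident" is modeled
    # (an assumption) as Euclidean distance within `tolerance`.
    return math.dist(user_pos, image_pos) <= tolerance

def process_sensor_reading(user_pos, images, tolerance=0.02):
    # images: list of (position, callback) pairs, one per displayed image.
    # On coincidence, execute the function programmed for that image and
    # report which images were triggered.
    triggered = []
    for index, (image_pos, action) in enumerate(images):
        if is_coincident(user_pos, image_pos, tolerance):
            action()
            triggered.append(index)
    return triggered
```

In a full system, `user_pos` would come from the time-of-flight sensor each frame, and the image positions from the autostereoscopic display's scene model, both expressed in the same field coordinates.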
Type:
Application
Filed:
December 7, 2006
Publication date:
June 14, 2007
Applicant:
EDGE 3 TECHNOLOGIES LLC
Inventors:
William Glomski, Tarek El Dokor, Joshua King, James Holmes, Maria Ngomba