Patents by Inventor Tarek A. El Dokor

Tarek A. El Dokor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8451220
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Grant
    Filed: August 13, 2012
    Date of Patent: May 28, 2013
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E King, James E Holmes, William E Glomski, Maria N. Ngomba
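    The abstract above hinges on one operation: mapping the sensed user object and the displayed image into a shared interactive volumetric field and testing whether their positions are substantially coincident. The patent does not publish code; the following is a minimal Python sketch of that coincidence test, in which the normalization into a shared unit-cube field, the bounds, and the tolerance value are assumptions made purely for illustration, not the patented method.

      import numpy as np

      def to_field(point, space_min, space_max):
          """Normalize a 3-D point from a device space into the shared unit-cube field."""
          return (np.asarray(point, float) - space_min) / (space_max - space_min)

      def is_coincident(user_obj_sensor, image_display,
                        sensor_bounds, display_bounds, tol=0.05):
          """True when the user object and the displayed image are substantially
          coincident inside the shared interactive field (illustrative criterion)."""
          p_user = to_field(user_obj_sensor, *sensor_bounds)
          p_image = to_field(image_display, *display_bounds)
          return np.linalg.norm(p_user - p_image) < tol

      # Illustrative use: run the function programmed for the image on coincidence.
      sensor_bounds = (np.zeros(3), np.array([640.0, 480.0, 2000.0]))   # hypothetical sensor space
      display_bounds = (np.zeros(3), np.ones(3))                        # hypothetical display space
      if is_coincident([320, 240, 1000], [0.5, 0.5, 0.5], sensor_bounds, display_bounds):
          print("execute function bound to the image")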
  • Patent number: 8405656
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Grant
    Filed: August 28, 2012
    Date of Patent: March 26, 2013
    Assignee: Edge 3 Technologies
    Inventors: Tarek El Dokor, Joshua E King, James E Holmes, Justin R Gigliotti, William E Glomski
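    The tracking abstract above centers on clustering a volumetric three-dimensional representation into motion features and testing their interaction with displayed objects. As an illustration only (the patent discloses no code, and the k-means clustering and distance threshold below are assumptions, not the claimed technique), a sketch over a 3-D point cloud:

      import numpy as np

      def cluster_motion_features(points, k=6, iters=20, seed=0):
          """Tiny k-means over an N x 3 point cloud standing in for the volumetric
          representation; returns cluster centroids (candidate motion features)."""
          rng = np.random.default_rng(seed)
          centroids = points[rng.choice(len(points), k, replace=False)]
          for _ in range(iters):
              labels = np.argmin(np.linalg.norm(points[:, None] - centroids[None], axis=2), axis=1)
              centroids = np.array([points[labels == i].mean(axis=0) if np.any(labels == i)
                                    else centroids[i] for i in range(k)])
          return centroids

      def interacting(feature_centroids, object_positions, radius=0.1):
          """Report which (motion feature, displayed object) pairs are close enough
          to count as an interaction (radius is an illustrative choice)."""
          return [(i, j)
                  for i, f in enumerate(feature_centroids)
                  for j, o in enumerate(object_positions)
                  if np.linalg.norm(f - np.asarray(o)) < radius]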
  • Patent number: 8396252
    Abstract: A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, locating the palm cluster of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
    Type: Grant
    Filed: May 20, 2010
    Date of Patent: March 12, 2013
    Assignee: Edge 3 Technologies
    Inventor: Tarek A. El Dokor
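    The abstract above lays out a pipeline: cluster body parts from TOF frames, locate the palm cluster, find the tip of the hand, classify the gesture as static or dynamic, then look up and execute the matching command. A minimal sketch of that control flow only; every function argument and the command table are caller-supplied placeholders, not the patented algorithms.

      def recognize_gesture(raw_frames, cluster_fn, locate_palm_fn, hand_tip_fn,
                            classify_fn, commands):
          """Illustrative skeleton of the steps named in the abstract."""
          clusters = cluster_fn(raw_frames)        # body-part clusters from the TOF frames
          palm = locate_palm_fn(clusters)          # palm cluster of the occupant
          tip = hand_tip_fn(palm)                  # location of the tip of the hand
          gesture = classify_fn(tip, raw_frames)   # e.g. "swipe_left" (dynamic), "open_palm" (static)
          command = commands.get(gesture)          # command registered for that gesture
          if command is not None:
              command()                            # execute it
          return gesture

      # Hypothetical command table
      commands = {"swipe_left": lambda: print("previous track"),
                  "open_palm": lambda: print("pause media")}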
  • Patent number: 8395620
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Grant
    Filed: July 31, 2012
    Date of Patent: March 12, 2013
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E King, James E Holmes, Justin R Gigliotti, William E Glomski
  • Publication number: 20120319946
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Application
    Filed: August 28, 2012
    Publication date: December 20, 2012
    Applicant: EDGE 3 TECHNOLOGIES, INC.
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Publication number: 20120306795
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Application
    Filed: August 13, 2012
    Publication date: December 6, 2012
    Applicant: EDGE 3 TECHNOLOGIES LLC
    Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
  • Publication number: 20120293412
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Application
    Filed: July 31, 2012
    Publication date: November 22, 2012
    Applicant: EDGE 3 TECHNOLOGIES, INC.
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Patent number: 8279168
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Grant
    Filed: December 7, 2006
    Date of Patent: October 2, 2012
    Assignee: Edge 3 Technologies LLC
    Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
  • Patent number: 8259109
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Grant
    Filed: March 25, 2012
    Date of Patent: September 4, 2012
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E King, James E Holmes, Justin R Gigliotti, William E Glomski
  • Publication number: 20120206573
    Abstract: A method and system to determine the disparity associated with one or more textured regions of a plurality of images is presented. The method comprises the steps of breaking up the texture into its color primitives, further segmenting the textured object into any number of objects comprising such primitives, and then calculating a disparity of these objects. The textured objects emerge in the disparity domain, after having their disparity calculated. Accordingly, the method is further comprised of defining one or more textured regions in a first of a plurality of images, determining a corresponding one or more textured regions in a second of the plurality of images, segmenting the textured regions into their color primitives, and calculating a disparity between the first and second of the plurality of images in accordance with the segmented color primitives.
    Type: Application
    Filed: February 10, 2011
    Publication date: August 16, 2012
    Applicant: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua King, Jordan Cluster, James Edward Holmes
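    This abstract describes breaking a textured region into its color primitives, segmenting on those primitives, and computing disparity per resulting object so that textured surfaces resolve in the disparity domain. A rough sketch follows, under the assumption that color primitives can be approximated by quantized colors and that matching is a brute-force horizontal shift of segment masks; neither detail comes from the application.

      import numpy as np

      def quantize_colors(image, levels=8):
          """Crude stand-in for 'color primitives': quantize each RGB channel."""
          return image.astype(np.int32) * levels // 256

      def segment_disparity(left, right, max_disp=32, levels=8):
          """For each quantized-color segment of the left image, pick the horizontal
          shift that best matches the right image (illustrative only)."""
          ql, qr = quantize_colors(left, levels), quantize_colors(right, levels)
          labels = ql[..., 0] * levels * levels + ql[..., 1] * levels + ql[..., 2]
          disparity = np.zeros(left.shape[:2], dtype=np.int32)
          for lab in np.unique(labels):
              mask = labels == lab
              best_d, best_cost = 0, np.inf
              for d in range(max_disp):
                  cost = np.abs(ql[mask] - np.roll(qr, d, axis=1)[mask]).mean()
                  if cost < best_cost:
                      best_d, best_cost = d, cost
              disparity[mask] = best_d
          return disparity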
  • Publication number: 20120206458
    Abstract: A near-touch interface is provided that utilizes stereo cameras and a series of targeted structured light tessellations, emanating from the screen as a light source and incident on objects in the field-of-view. After radial distortion from a series of wide-angle lenses is mitigated, a surface-based spatio-temporal stereo algorithm is utilized to estimate initial depth values. Once these values are calculated, a subsequent refinement step may be applied in which light source tessellations are used to flash a structure onto targeted components of the scene, where initial near-interaction disparity values have been calculated. The combination of a spherical stereo algorithm, and smoothing with structured light source tessellations, provides for a very reliable and fast near-field depth engine, and resolves issues that are associated with depth estimates for embedded solutions of this approach.
    Type: Application
    Filed: July 24, 2011
    Publication date: August 16, 2012
    Applicant: Edge 3 Technologies, Inc.
    Inventor: Tarek El Dokor
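    The near-touch abstract above describes a two-stage estimate: a spatio-temporal stereo pass produces initial depth values, then structured-light tessellations flashed onto near-interaction regions refine them. A schematic sketch of that two-stage flow only; the stereo and refinement functions are placeholders and the near-field threshold is an assumption, not the patented math.

      import numpy as np

      def near_touch_depth(left, right, initial_stereo_fn, flash_and_refine_fn,
                           near_threshold_mm=300.0):
          """Coarse stereo everywhere, then structured-light refinement only where
          the initial estimate says the scene is close to the screen."""
          depth = initial_stereo_fn(left, right)           # placeholder coarse stereo pass
          near_mask = depth < near_threshold_mm            # near-interaction regions
          if np.any(near_mask):
              refined = flash_and_refine_fn(near_mask)     # placeholder structured-light pass
              depth = np.where(near_mask, refined, depth)  # keep refined values in the near field
          return depth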
  • Publication number: 20120207383
Abstract: A method and system for segmenting a plurality of images. The method comprises the steps of segmenting the image through a novel clustering technique, that is, generating a composite depth map including temporally stable segments of the image as well as segments in subsequent images that have changed. These changes may be determined by determining one or more differences between the temporally stable depth map and segments included in one or more subsequent frames. Thereafter, the portions of the one or more subsequent frames that include segments including changes from their corresponding segments in the temporally stable depth map are processed and are combined with the segments from the temporally stable depth map to compute their associated disparities in one or more subsequent frames. The images may include a pair of stereo images acquired through a stereo camera system at a substantially similar time.
    Type: Application
    Filed: February 10, 2011
    Publication date: August 16, 2012
    Applicant: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua King, Jordan Cluster, James Edward Holmes
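    The abstract above keeps a temporally stable depth map and reprocesses only the segments that changed in a new frame before compositing the two. A compact sketch of that compositing step, with segment change detection reduced to a per-segment mean-difference test; that criterion and the tolerance are assumptions for illustration, not the claimed technique.

      import numpy as np

      def composite_depth(stable_depth, stable_labels, new_frame_depth, change_tol=2.0):
          """Merge a temporally stable depth map with freshly computed depth, replacing
          only segments whose depth statistics changed (illustrative criterion)."""
          out = stable_depth.copy()
          for seg in np.unique(stable_labels):
              mask = stable_labels == seg
              if abs(new_frame_depth[mask].mean() - stable_depth[mask].mean()) > change_tol:
                  out[mask] = new_frame_depth[mask]   # replace only the changed segment
          return out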
  • Publication number: 20120207388
    Abstract: A method and system for generating a disparity map. The method comprises the steps of generating a first disparity map based upon a first image and a second image acquired at a first time, acquiring at least a third image and a fourth image at a second time, and determining one or more portions comprising a difference between one of the first and second images and a corresponding one of the third and fourth images. A disparity map update is generated for the one or more determined portions, and a disparity map is generated based upon the third image and the fourth image by combining the disparity map update and the first disparity map.
    Type: Application
    Filed: February 10, 2011
    Publication date: August 16, 2012
    Applicant: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua King, Jordan Cluster, James Edwards Holmes
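    This abstract updates a disparity map incrementally: compute full disparity once, then for later frames recompute it only in portions where the input images changed, and merge the update with the earlier map. A minimal sketch under the assumption that changed portions can be found by absolute image differencing against a fixed threshold; the stereo matcher itself is a placeholder.

      import numpy as np

      def update_disparity(prev_disp, prev_left, new_left, new_right,
                           stereo_fn, diff_thresh=10):
          """Reuse the previous disparity where the left image is unchanged; take new
          values only in changed regions (illustrative scheme)."""
          changed = np.abs(new_left.astype(np.int32) - prev_left.astype(np.int32)) > diff_thresh
          if changed.ndim == 3:                     # collapse color channels to one mask
              changed = changed.any(axis=2)
          # Placeholder full-frame matcher; a real implementation would confine
          # matching to the changed portions before combining.
          disp_update = stereo_fn(new_left, new_right)
          return np.where(changed, disp_update, prev_disp)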
  • Publication number: 20120196660
Abstract: Method, computer program and system for tracking movement of a subject within a video game. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of movement of the subject relative to the fixed position sensors and the subject, presenting one or more objects as the subject of a video game on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the features of the subject to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
    Type: Application
    Filed: March 5, 2012
    Publication date: August 2, 2012
    Applicant: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Publication number: 20120194422
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Application
    Filed: March 25, 2012
    Publication date: August 2, 2012
    Applicant: EDGE 3 TECHNOLOGIES, INC.
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Patent number: 8223147
Abstract: Method, computer program and system for tracking movement of a subject within a video game. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of movement of the subject relative to the fixed position sensors and the subject, presenting one or more objects as the subject of a video game on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the features of the subject to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
    Type: Grant
    Filed: March 5, 2012
    Date of Patent: July 17, 2012
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E King, James E Holmes, Justin R Gigliotti, William E Glomski
  • Publication number: 20120162378
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of motion of the subject relative to the fixed position sensors and one or more other portions of the subject, and presenting one or more objects on one or more three dimensional display screens. The plurality of fixed position sensors are used to track motion of the features of the subject to manipulate the volumetric three-dimensional representation to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
    Type: Application
    Filed: February 26, 2012
    Publication date: June 28, 2012
    Applicant: EDGE 3 TECHNOLOGIES LLC
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Patent number: 8207967
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of motion of the subject relative to the fixed position sensors and one or more other portions of the subject, and presenting one or more objects on one or more three dimensional display screens. The plurality of fixed position sensors are used to track motion of the features of the subject to manipulate the volumetric three-dimensional representation to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
    Type: Grant
    Filed: February 26, 2012
    Date of Patent: June 26, 2012
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E King, James E Holmes, Justin R Gigliotti, William E Glomski
  • Patent number: 8144148
    Abstract: A method and system for vision-based interaction in a virtual environment is disclosed. According to one embodiment, a computer-implemented method comprises receiving data from a plurality of sensors to generate a meshed volumetric three-dimensional representation of a subject. A plurality of clusters is identified within the meshed volumetric three-dimensional representation that corresponds to motion features. The motion features include hands, feet, knees, elbows, head, and shoulders. The plurality of sensors is used to track motion of the subject and manipulate the motion features of the meshed volumetric three-dimensional representation.
    Type: Grant
    Filed: February 8, 2008
    Date of Patent: March 27, 2012
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
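    Patent 8,144,148 names specific motion features (hands, feet, knees, elbows, head, shoulders) identified as clusters within a meshed volumetric representation. A toy labeling heuristic is sketched below; assigning labels by centroid height alone is an assumption made purely for illustration and is not the patented identification method.

      import numpy as np

      def label_motion_features(centroids):
          """Assign rough body-part labels to cluster centroids (N x 3, y = up)
          by height only; purely illustrative."""
          order = np.argsort(centroids[:, 1])      # lowest to highest centroid
          names = ["feet", "knees", "hands", "elbows", "shoulders", "head"]
          return {names[min(i, len(names) - 1)]: centroids[idx]
                  for i, idx in enumerate(order)}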
  • Publication number: 20120057779
    Abstract: A method and apparatus for processing image data is provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications.
    Type: Application
    Filed: August 31, 2011
    Publication date: March 8, 2012
    Applicant: EDGE 3 TECHNOLOGIES, INC.
    Inventor: Tarek El Dokor
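    The final publication describes a three-network arrangement: a main classifier, a monitor network that flags confusing classifications, and a specialist network spawned to handle the confusing cases. A schematic sketch in which "confusing" is approximated by a low softmax margin between the top two classes; that criterion, the routing by class pair, and all network objects are assumptions for illustration, not taken from the application.

      import numpy as np

      def route_prediction(x, main_net, specialist_nets, margin=0.2):
          """Main/monitor/specialist routing sketch. main_net(x) returns class
          probabilities; the 'monitor' here is just a confidence-margin check."""
          probs = np.asarray(main_net(x), dtype=float)
          top2 = np.sort(probs)[-2:]
          if (top2[1] - top2[0]) < margin:                       # monitor: low margin => confusing
              pair = tuple(sorted(int(i) for i in np.argsort(probs)[-2:]))
              specialist = specialist_nets.get(pair)             # specialist for the confused pair
              if specialist is not None:
                  return int(np.argmax(specialist(x)))           # defer to the specialist
          return int(np.argmax(probs))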