Patents by Inventor William E. Glomski
William E. Glomski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9694283
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Grant
Filed: February 10, 2013
Date of Patent: July 4, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotto, William E. Glomski
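The tracking abstracts in this listing culminate in testing whether tracked motion features of a subject interact with objects shown on a three-dimensional display. A minimal sketch of that final interaction test follows; the function name, data shapes, and distance threshold are illustrative assumptions, since the patents do not disclose source code.

```python
import math

def interactions(features, objects, threshold=0.05):
    """Return (feature, object) pairs whose 3-D positions coincide.

    `features` and `objects` map names to (x, y, z) tuples in a shared
    display-volume coordinate frame; `threshold` is a made-up coincidence
    distance in that frame's units.
    """
    hits = []
    for fname, fpos in features.items():
        for oname, opos in objects.items():
            # Euclidean distance between the tracked feature and the object.
            if math.dist(fpos, opos) <= threshold:
                hits.append((fname, oname))
    return hits
```

In use, a tracked "hand" centroid lying within the threshold of a displayed "button" would be reported as an interaction; upstream, the cluster-identification stage described in the abstract would supply the feature centroids.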
-
Patent number: 9684427
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: July 3, 2014
Date of Patent: June 20, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
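The virtual-touch abstract describes two steps: mapping a user object's sensed position into display space, and executing a programmed function when that position and a displayed image are substantially coincident. A hedged sketch of both steps, with hypothetical function names, bounds format, and tolerance (not the patented implementation):

```python
def sensor_to_display(p, sensor_bounds, display_bounds):
    """Linearly remap a point from the sensor volume into the display volume.

    Each bounds argument is a per-axis list of (lo, hi) pairs; the format is
    an assumption for illustration.
    """
    out = []
    for v, (s0, s1), (d0, d1) in zip(p, sensor_bounds, display_bounds):
        t = (v - s0) / (s1 - s0)          # normalized coordinate in [0, 1]
        out.append(d0 + t * (d1 - d0))
    return tuple(out)

def check_touch(user_pos, images, tolerance=0.02):
    """Run the action of the first image substantially coincident with
    `user_pos` (per-axis tolerance is a made-up stand-in for "substantially
    coincident"); return None if nothing is touched."""
    for image in images:
        if all(abs(a - b) <= tolerance for a, b in zip(user_pos, image["pos"])):
            return image["action"]()
    return None
```

A per-axis tolerance is just one way to model "substantially coincident"; the claims leave the coincidence test unspecified.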
-
Patent number: 9272202
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Grant
Filed: February 10, 2013
Date of Patent: March 1, 2016
Assignee: Edge 3 Technologies, Inc.
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotto, William E. Glomski
-
Publication number: 20150020031
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Application
Filed: July 3, 2014
Publication date: January 15, 2015
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
-
Patent number: 8803801
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: May 7, 2013
Date of Patent: August 12, 2014
Assignee: Edge 3 Technologies, Inc.
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
-
Publication number: 20130241826
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Application
Filed: May 7, 2013
Publication date: September 19, 2013
Applicant: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
-
Patent number: 8451220
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: August 13, 2012
Date of Patent: May 28, 2013
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
-
Patent number: 8405656
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Grant
Filed: August 28, 2012
Date of Patent: March 26, 2013
Assignee: Edge 3 Technologies
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Patent number: 8395620
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Grant
Filed: July 31, 2012
Date of Patent: March 12, 2013
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Publication number: 20120319946
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Application
Filed: August 28, 2012
Publication date: December 20, 2012
Applicant: Edge 3 Technologies, Inc.
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Publication number: 20120306795
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Application
Filed: August 13, 2012
Publication date: December 6, 2012
Applicant: Edge 3 Technologies LLC
Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
-
Publication number: 20120293412
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Application
Filed: July 31, 2012
Publication date: November 22, 2012
Applicant: Edge 3 Technologies, Inc.
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Patent number: 8279168
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: December 7, 2006
Date of Patent: October 2, 2012
Assignee: Edge 3 Technologies LLC
Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
-
Patent number: 8259109
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Grant
Filed: March 25, 2012
Date of Patent: September 4, 2012
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Publication number: 20120194422
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
Type: Application
Filed: March 25, 2012
Publication date: August 2, 2012
Applicant: Edge 3 Technologies, Inc.
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Publication number: 20120196660
Abstract: Method, computer program and system for tracking movement of a subject within a video game. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of movement of the subject relative to the fixed position sensors and the subject, presenting one or more objects as the subject of a video game on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the features of the subject to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
Type: Application
Filed: March 5, 2012
Publication date: August 2, 2012
Applicant: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Patent number: 8223147
Abstract: Method, computer program and system for tracking movement of a subject within a video game. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of movement of the subject relative to the fixed position sensors and the subject, presenting one or more objects as the subject of a video game on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the features of the subject to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
Type: Grant
Filed: March 5, 2012
Date of Patent: July 17, 2012
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Publication number: 20120162378
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of motion of the subject relative to the fixed position sensors and one or more other portions of the subject, and presenting one or more objects on one or more three dimensional display screens. The plurality of fixed position sensors are used to track motion of the features of the subject to manipulate the volumetric three-dimensional representation to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
Type: Application
Filed: February 26, 2012
Publication date: June 28, 2012
Applicant: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Patent number: 8207967
Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of motion of the subject relative to the fixed position sensors and one or more other portions of the subject, and presenting one or more objects on one or more three dimensional display screens. The plurality of fixed position sensors are used to track motion of the features of the subject to manipulate the volumetric three-dimensional representation to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
Type: Grant
Filed: February 26, 2012
Date of Patent: June 26, 2012
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
-
Patent number: 8144148
Abstract: A method and system for vision-based interaction in a virtual environment is disclosed. According to one embodiment, a computer-implemented method comprises receiving data from a plurality of sensors to generate a meshed volumetric three-dimensional representation of a subject. A plurality of clusters is identified within the meshed volumetric three-dimensional representation that corresponds to motion features. The motion features include hands, feet, knees, elbows, head, and shoulders. The plurality of sensors is used to track motion of the subject and manipulate the motion features of the meshed volumetric three-dimensional representation.
Type: Grant
Filed: February 8, 2008
Date of Patent: March 27, 2012
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
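Several of the abstracts in this listing hinge on identifying clusters within a volumetric representation that correspond to motion features such as hands, feet, and head. The patents do not disclose their clustering algorithm; as a toy illustration only, a greedy centroid-based grouping of 3-D points might look like this, with the radius chosen arbitrarily:

```python
import math

def greedy_clusters(points, radius=0.1):
    """Group 3-D points: a point joins the first cluster whose centroid lies
    within `radius` of it, otherwise it starts a new cluster. A downstream
    step could then label each cluster as a candidate motion feature."""
    clusters = []  # each cluster: {"points": [...], "centroid": (x, y, z)}
    for p in points:
        for c in clusters:
            if math.dist(p, c["centroid"]) <= radius:
                c["points"].append(p)
                # Recompute the centroid as the mean of the member points.
                n = len(c["points"])
                c["centroid"] = tuple(
                    sum(q[i] for q in c["points"]) / n for i in range(3)
                )
                break
        else:
            clusters.append({"points": [p], "centroid": p})
    return clusters
```

Real systems would use a more robust method (and operate on meshed or voxelized data rather than raw points), but the sketch shows the shape of the step: many points in, a handful of labeled feature candidates out.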