Patents by Inventor Joshua T. King
Joshua T. King has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12105887
Abstract: A method and apparatus for performing gesture recognition. In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, each of the one or more raw frames representing a time sequence of images, determining one or more regions of the one or more received raw frames that comprise highly textured regions, segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof, determining one or more regions of the one or more received raw frames that comprise other than highly textured regions, and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
Type: Grant
Filed: May 28, 2023
Date of Patent: October 1, 2024
Assignee: Golden Edge Holding Corporation
Inventors: Tarek A. El Dokor, Joshua T. King
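The texture/color split this abstract (and its continuations below) describes maps naturally onto a short sketch. The following is a minimal, hypothetical Python/OpenCV rendering of the idea; the Laplacian-based texture measure, the 15x15 window, the threshold, and the coarse hue quantization are illustrative stand-ins, not details taken from the patent.

```python
# Minimal sketch, assuming OpenCV; the texture measure (local mean of the
# squared Laplacian) and all thresholds are illustrative, not the patented method.
import numpy as np
import cv2

def segment_frame(frame_bgr, texture_thresh=500.0):
    """Split a raw frame into highly textured vs. other regions and segment each."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Texture measure: local mean of the squared Laplacian response.
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    local_energy = cv2.blur(lap * lap, (15, 15))
    textured_mask = (local_energy > texture_thresh).astype(np.uint8)

    # Highly textured regions: segment by texture features (connected components
    # of the texture mask stand in for a real texture-feature clustering).
    _, texture_segments = cv2.connectedComponents(textured_mask)

    # Everything else: segment by color (coarse hue quantization stands in for
    # a real color-space clustering).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue_bins = (hsv[..., 0] // 30).astype(np.int32) + 1  # six coarse hue bands
    color_segments = np.where(textured_mask == 0, hue_bins, 0)

    return texture_segments, color_segments
```

Tracking the resulting segments through the time sequence could then be as simple as matching segment centroids between consecutive frames; the abstract does not commit to any particular tracker.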
-
Patent number: 11703951
Abstract: A method and apparatus for performing gesture recognition. In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, each of the one or more raw frames representing a time sequence of images, determining one or more regions of the one or more received raw frames that comprise highly textured regions, segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof, determining one or more regions of the one or more received raw frames that comprise other than highly textured regions, and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
Type: Grant
Filed: January 30, 2022
Date of Patent: July 18, 2023
Assignee: Edge 3 Technologies
Inventors: Tarek A. El Dokor, Joshua T. King
-
Patent number: 11237637
Abstract: A method and apparatus for performing gesture recognition. In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, each of the one or more raw frames representing a time sequence of images, determining one or more regions of the one or more received raw frames that comprise highly textured regions, segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof, determining one or more regions of the one or more received raw frames that comprise other than highly textured regions, and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
Type: Grant
Filed: August 15, 2016
Date of Patent: February 1, 2022
Assignee: Edge 3 Technologies
Inventors: Tarek A. El Dokor, Joshua T. King
-
Patent number: 9684427
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: July 3, 2014
Date of Patent: June 20, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
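The core mechanism here, mapping a sensed object and a displayed image into one shared volumetric field and firing a programmed function when they coincide, can be sketched compactly. Everything below (the class name, the 4x4 sensor-to-field transform, the millimeter tolerance) is a hypothetical illustration, not the patented implementation.

```python
# A minimal sketch, assuming positions in millimeters and a 4x4 homogeneous
# transform from sensor space into the shared field; all names are hypothetical.
import numpy as np

class VirtualTouchField:
    def __init__(self, tolerance_mm=15.0):
        self.tolerance = tolerance_mm
        self.images = {}  # image id -> (position in field coordinates, callback)

    def register_image(self, image_id, position, on_touch):
        self.images[image_id] = (np.asarray(position, dtype=float), on_touch)

    def update(self, sensor_position, sensor_to_field):
        """Map a sensor-space point into the field and test for coincidence."""
        p = sensor_to_field @ np.append(np.asarray(sensor_position, float), 1.0)
        field_pos = p[:3] / p[3]
        for image_id, (img_pos, on_touch) in self.images.items():
            if np.linalg.norm(field_pos - img_pos) <= self.tolerance:
                on_touch(image_id)  # execute the function programmed for the image

# Usage: a "button" drawn 500 mm into the display space is touched when the
# tracked fingertip comes within 15 mm of it.
field = VirtualTouchField()
field.register_image("button", [0.0, 100.0, 500.0], lambda i: print(i, "pressed"))
field.update([5.0, 100.0, 500.0], np.eye(4))  # prints: button pressed
```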
-
Publication number: 20170045950
Abstract: A method and apparatus for performing gesture recognition. In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, each of the one or more raw frames representing a time sequence of images, determining one or more regions of the one or more received raw frames that comprise highly textured regions, segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof, determining one or more regions of the one or more received raw frames that comprise other than highly textured regions, and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
Type: Application
Filed: August 15, 2016
Publication date: February 16, 2017
Inventors: Tarek A. El Dokor, Joshua T. King
-
Patent number: 9417700
Abstract: In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, each of the one or more raw frames representing a time sequence of images, determining one or more regions of the one or more received raw frames that comprise highly textured regions, segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof, determining one or more regions of the one or more received raw frames that comprise other than highly textured regions, and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
Type: Grant
Filed: May 20, 2010
Date of Patent: August 16, 2016
Assignee: Edge3 Technologies
Inventors: Tarek A. El Dokor, Joshua T. King
-
Publication number: 20160103499
Abstract: A gesture control system includes a processor, the processor in communication with a plurality of sensors. The processor is configured to perform the steps of detecting, using the plurality of sensors, a gesture in a volume occupied by a plurality of occupants, analyzing prior knowledge to associate the gesture with one of the plurality of occupants, and generating an output, the output being determined by the gesture and the one of the plurality of occupants.
Type: Application
Filed: October 2, 2015
Publication date: April 14, 2016
Applicants: Edge3 Technologies, LLC, Honda Motor Co., Ltd.
Inventors: Stuart Yamamoto, Tarek A. El Dokor, Graeme Asher, Jordan Cluster, Matthew Conway, Joshua T. King, James E. Holmes, Matt McElvogue, Churu Yun
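Associating a gesture seen in a shared cabin volume with one occupant via prior knowledge can be illustrated with a toy scoring rule. The Gaussian reach model, the 0.4 m scale, and the per-seat priors below are assumptions; the publication does not specify them.

```python
# A minimal sketch, assuming seat positions and per-occupant priors are known;
# the Gaussian distance likelihood is an illustrative stand-in.
import numpy as np

def associate_gesture(gesture_pos, seat_positions, priors, reach_m=0.4):
    """Return the index of the occupant most likely to have made the gesture."""
    gesture_pos = np.asarray(gesture_pos, dtype=float)
    scores = []
    for seat, prior in zip(seat_positions, priors):
        dist = np.linalg.norm(gesture_pos - np.asarray(seat, dtype=float))
        scores.append(prior * np.exp(-0.5 * (dist / reach_m) ** 2))
    return int(np.argmax(scores))

# Example: a swipe detected between the driver (index 0) and the front
# passenger (index 1) is attributed to the driver, whose seat is closer and
# whose prior is higher.
who = associate_gesture(
    gesture_pos=[0.3, 0.0, 0.9],
    seat_positions=[[0.0, 0.0, 1.0], [0.7, 0.0, 1.0]],
    priors=[0.6, 0.4],
)
output = ("driver" if who == 0 else "passenger", "swipe_left")
```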
-
Publication number: 20150020031
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Application
Filed: July 3, 2014
Publication date: January 15, 2015
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
-
Patent number: 8803801
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: May 7, 2013
Date of Patent: August 12, 2014
Assignee: Edge 3 Technologies, Inc.
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
-
Publication number: 20120306795
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Application
Filed: August 13, 2012
Publication date: December 6, 2012
Applicant: EDGE 3 TECHNOLOGIES LLC
Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
-
Patent number: 8279168
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: December 7, 2006
Date of Patent: October 2, 2012
Assignee: Edge 3 Technologies LLC
Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
-
Patent number: 8144148
Abstract: A method and system for vision-based interaction in a virtual environment is disclosed. According to one embodiment, a computer-implemented method comprises receiving data from a plurality of sensors to generate a meshed volumetric three-dimensional representation of a subject. A plurality of clusters is identified within the meshed volumetric three-dimensional representation that corresponds to motion features. The motion features include hands, feet, knees, elbows, head, and shoulders. The plurality of sensors is used to track motion of the subject and manipulate the motion features of the meshed volumetric three-dimensional representation.
Type: Grant
Filed: February 8, 2008
Date of Patent: March 27, 2012
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
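The clustering step here, carving a meshed volumetric representation into motion features such as hands, head, and shoulders, can be approximated with plain k-means over mesh vertices. This is a rough stand-in; the patent does not say k-means is the clustering actually used, and the cluster count of 10 below is only a guess matched to the named features.

```python
# A rough sketch: plain k-means over (N, 3) mesh vertices as a stand-in for the
# patent's clustering. k=10 loosely matches the named motion features (two
# hands, two feet, two knees, two elbows, head, shoulders) but is an assumption.
import numpy as np

def cluster_motion_features(vertices, k=10, iters=20, seed=0):
    """Cluster mesh vertices into k candidate motion-feature centroids."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(vertices), size=k, replace=False)
    centroids = vertices[idx].astype(float)
    labels = np.zeros(len(vertices), dtype=int)
    for _ in range(iters):
        # Assign each vertex to its nearest centroid.
        dists = np.linalg.norm(vertices[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its members (skip empty clusters).
        for j in range(k):
            members = vertices[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels
```

Tracking would then move these cluster centroids frame to frame, letting the sensor data drive the corresponding features of the meshed representation.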
-
Publication number: 20100295783
Abstract: A method and apparatus for performing gesture recognition. In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, each of the one or more raw frames representing a time sequence of images, determining one or more regions of the one or more received raw frames that comprise highly textured regions, segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof, determining one or more regions of the one or more received raw frames that comprise other than highly textured regions, and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
Type: Application
Filed: May 20, 2010
Publication date: November 25, 2010
Applicant: EDGE3 TECHNOLOGIES LLC
Inventors: Tarek A. El Dokor, Joshua T. King