Patents by Inventor James E. Holmes

James E. Holmes has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150286952
    Abstract: Some embodiments provide systems and methods for enabling a learning implicit gesture control system for use by an occupant of a vehicle. The method includes identifying features received from a plurality of sensors and comparing the features to antecedent knowledge stored in memory. A system output action that corresponds to the features can then be provided in the form of a first vehicle output. The method further includes detecting a second vehicle output from the plurality of sensors and updating the antecedent knowledge to associate the system output action with the second vehicle output.
    Type: Application
    Filed: April 3, 2014
    Publication date: October 8, 2015
    Inventors: Tarek A. El Dokor, Joshua King, Jordan Cluster, James E. Holmes
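
A minimal sketch of the learn-and-update cycle the abstract above describes, assuming feature vectors matched by cosine similarity; the `AntecedentKnowledge` name, the threshold, and the matching rule are illustrative assumptions, not the patented implementation:

```python
# Illustrative-only sketch: match sensor features against stored antecedent
# knowledge, act, then re-associate based on the vehicle output that followed.
import math


class AntecedentKnowledge:
    def __init__(self):
        # stored feature vector -> associated system output action
        self.associations = {}

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def match(self, features, threshold=0.9):
        # Compare identified features to antecedent knowledge and return
        # the best-matching action, or None if nothing is close enough.
        scored = [(self._cosine(features, f), a)
                  for f, a in self.associations.items()]
        if not scored:
            return None
        score, action = max(scored)
        return action if score >= threshold else None

    def update(self, features, observed_action):
        # Associate the features with the output that was actually observed,
        # so the gesture is learned implicitly for next time.
        self.associations[tuple(features)] = observed_action


knowledge = AntecedentKnowledge()
knowledge.update((0.9, 0.1, 0.0), "lower_driver_window")
print(knowledge.match((0.88, 0.12, 0.01)))  # -> lower_driver_window
```
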
  • Publication number: 20150032331
    Abstract: An in-vehicle computing system allows a user to control components of the vehicle by performing gestures. The user provides a selecting input to indicate that he wishes to control one of the components. After the component is identified, the user performs a gesture to control the component. The gesture and the component that was previously selected are analyzed to generate a command for the component. Since the command is based on both the gesture and the identified component, the user can perform the same gesture in the same position within the vehicle to control different components.
    Type: Application
    Filed: October 14, 2014
    Publication date: January 29, 2015
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
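
The two-step interaction in the abstract above (select a component, then gesture) can be pictured with a hypothetical command table; the component names, gestures, and commands below are invented for the example:

```python
# Hypothetical command table: the same gesture yields different commands
# depending on which component the user selected first.
COMMANDS = {
    ("window", "swipe_down"): "lower_window",
    ("window", "swipe_up"): "raise_window",
    ("sunroof", "swipe_down"): "open_sunroof",
    ("sunroof", "swipe_up"): "close_sunroof",
}


class GestureController:
    def __init__(self):
        self.selected = None  # component chosen via the selecting input

    def select(self, component):
        self.selected = component

    def handle_gesture(self, gesture):
        # The command depends on both the gesture and the previously
        # identified component.
        if self.selected is None:
            return None
        return COMMANDS.get((self.selected, gesture))


ctrl = GestureController()
ctrl.select("window")
print(ctrl.handle_gesture("swipe_down"))   # -> lower_window
ctrl.select("sunroof")
print(ctrl.handle_gesture("swipe_down"))   # -> open_sunroof
```
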
  • Publication number: 20150020031
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Application
    Filed: July 3, 2014
    Publication date: January 15, 2015
    Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
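
The coincidence test at the core of the abstract above (executing an image's programmed function when the user object and the image occupy substantially the same position in the shared volumetric field) reduces to a distance check. A minimal sketch, with the radius and the callback wiring as assumptions:

```python
import math


class VirtualTouchField:
    """Toy interactive volumetric field: images are (position, callback)
    pairs, and a touch fires when the tracked user object comes within
    the coincidence radius of an image."""

    def __init__(self, coincidence_radius=0.02):
        self.images = []
        self.radius = coincidence_radius

    def add_image(self, position, on_touch):
        self.images.append((position, on_touch))

    def update(self, user_object_position):
        # Called once per sensor frame with the mapped position of the
        # user object (e.g. a fingertip) in display coordinates.
        for position, on_touch in self.images:
            if math.dist(user_object_position, position) <= self.radius:
                on_touch()  # execute the function programmed for the image


field = VirtualTouchField()
field.add_image((0.10, 0.20, 0.30), lambda: print("button pressed"))
field.update((0.105, 0.195, 0.305))  # substantially coincident -> fires
```
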
  • Patent number: 8886399
    Abstract: An in-vehicle computing system allows a user to control components of the vehicle by performing gestures. The user provides a selecting input to indicate that he wishes to control one of the components. After the component is identified, the user performs a gesture to control the component. The gesture and the component that was previously selected are analyzed to generate a command for the component. Since the command is based on both the gesture and the identified component, the user can perform the same gesture in the same position within the vehicle to control different components.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 11, 2014
    Assignees: Honda Motor Co., Ltd., Edge 3 Technologies LLC
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Publication number: 20140330515
    Abstract: A user, such as the driver of a vehicle, can retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
    Type: Application
    Filed: July 16, 2014
    Publication date: November 6, 2014
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
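
One way to picture the target-region step in the abstract above: the pointing gesture defines an angular window around the vehicle, and any POI whose bearing falls inside it is a candidate. The micromap contents, bearing math, and thresholds below are invented for illustration:

```python
import math

# Hypothetical locally stored micromap: POI position -> description.
MICROMAP = {
    (33.4500, -112.0700): "Corner Cafe, open until 9 pm",
    (33.4600, -112.0500): "Desert Art Museum",
}


def bearing_to(origin, target):
    # Rough planar bearing in degrees; adequate for a short-range sketch.
    dy, dx = target[0] - origin[0], target[1] - origin[1]
    return math.degrees(math.atan2(dx, dy)) % 360


def pois_in_target_region(vehicle_pos, pointing_bearing, spread=2.0):
    # The gesture yields an angular target region centered on the pointing
    # direction; keep every micromap POI inside it. A real system would
    # fall back to a server-based POI service on a miss.
    hits = []
    for pos, info in MICROMAP.items():
        delta = abs((bearing_to(vehicle_pos, pos)
                     - pointing_bearing + 180) % 360 - 180)
        if delta <= spread:
            hits.append(info)
    return hits


print(pois_in_target_region((33.4484, -112.0740), pointing_bearing=68.0))
# -> ['Corner Cafe, open until 9 pm']
```
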
  • Publication number: 20140278068
    Abstract: A user, such as the driver of a vehicle, can retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Publication number: 20140277936
    Abstract: An in-vehicle computing system allows a user to control components of the vehicle by performing gestures. The user provides a selecting input to indicate that he wishes to control one of the components. After the component is identified, the user performs a gesture to control the component. The gesture and the component that was previously selected are analyzed to generate a command for the component. Since the command is based on both the gesture and the identified component, the user can perform the same gesture in the same position within the vehicle to control different components.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Patent number: 8818716
    Abstract: A user, such as the driver of a vehicle, can retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 26, 2014
    Assignees: Honda Motor Co., Ltd., Edge 3 Technologies LLC
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Patent number: 8803801
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Grant
    Filed: May 7, 2013
    Date of Patent: August 12, 2014
    Assignee: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
  • Publication number: 20130241826
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Application
    Filed: May 7, 2013
    Publication date: September 19, 2013
    Applicant: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
  • Patent number: 8451220
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Grant
    Filed: August 13, 2012
    Date of Patent: May 28, 2013
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
  • Patent number: 8405656
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Grant
    Filed: August 28, 2012
    Date of Patent: March 26, 2013
    Assignee: Edge 3 Technologies
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
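
A compressed sketch of the pipeline the abstract above recites: fuse point data from the distributed camera sensors into one volumetric representation, pull out dense clusters as motion features, and flag interactions with displayed objects. Voxel binning stands in for whatever clustering the patent actually uses, and all data below is fabricated:

```python
import math
from collections import defaultdict


def merge_sensor_frames(frames):
    # Fuse per-sensor point lists into a single volumetric
    # three-dimensional representation (here, one point cloud).
    return [p for frame in frames for p in frame]


def motion_feature_clusters(cloud, cell=0.25, min_points=2):
    # Naive stand-in for cluster identification: voxel-bin the cloud and
    # keep the centroid of every sufficiently dense bin.
    bins = defaultdict(list)
    for p in cloud:
        bins[tuple(int(c // cell) for c in p)].append(p)
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in bins.values() if len(pts) >= min_points]


def interactions(clusters, display_objects, radius=0.2):
    # An on-screen object counts as touched when any motion-feature
    # cluster comes within the given radius of its position.
    return [name for name, pos in display_objects.items()
            if any(math.dist(pos, c) <= radius for c in clusters)]


frames = [[(1.00, 1.00, 2.00), (1.05, 1.02, 2.01)],   # sensor 1
          [(1.02, 1.01, 2.02)]]                        # sensor 2
clusters = motion_feature_clusters(merge_sensor_frames(frames))
print(interactions(clusters, {"menu_button": (1.0, 1.0, 2.0)}))
# -> ['menu_button']
```
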
  • Patent number: 8395620
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Grant
    Filed: July 31, 2012
    Date of Patent: March 12, 2013
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Publication number: 20120319946
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Application
    Filed: August 28, 2012
    Publication date: December 20, 2012
    Applicant: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Publication number: 20120306795
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Application
    Filed: August 13, 2012
    Publication date: December 6, 2012
    Applicant: Edge 3 Technologies LLC
    Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
  • Publication number: 20120293412
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Application
    Filed: July 31, 2012
    Publication date: November 22, 2012
    Applicant: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Patent number: 8279168
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Grant
    Filed: December 7, 2006
    Date of Patent: October 2, 2012
    Assignee: Edge 3 Technologies LLC
    Inventors: William E. Glomski, Tarek El Dokor, Joshua T. King, James E. Holmes, Maria N. Ngomba
  • Patent number: 8259109
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Grant
    Filed: March 25, 2012
    Date of Patent: September 4, 2012
    Assignee: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
  • Publication number: 20120196660
    Abstract: Method, computer program and system for tracking movement of a subject within a video game. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of movement of the subject relative to the fixed position sensors and the subject, presenting one or more objects as the subject of a video game on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the features of the subject to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.
    Type: Application
    Filed: March 5, 2012
    Publication date: August 2, 2012
    Applicant: Edge 3 Technologies LLC
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski
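
This publication frames the same tracking pipeline as video-game input: tracked motion features drive interaction with game objects. A toy game-loop sketch, with the frame data and the single tracked feature invented for the example:

```python
import math


def game_loop(feature_frames, target, radius=0.15):
    # Each frame maps tracked motion features (from the time-of-flight
    # sensors) to 3D positions; score a hit whenever the tracked hand
    # reaches the displayed game object.
    score = 0
    for features in feature_frames:
        hand = features.get("right_hand")
        if hand is not None and math.dist(hand, target) <= radius:
            score += 1
    return score


frames = [{"right_hand": (0.50, 0.50, 1.00)},
          {"right_hand": (0.90, 0.50, 1.00)},
          {"right_hand": (1.00, 0.50, 1.00)}]
print(game_loop(frames, target=(1.00, 0.50, 1.00)))  # -> 2
```
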
  • Publication number: 20120194422
    Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.
    Type: Application
    Filed: March 25, 2012
    Publication date: August 2, 2012
    Applicant: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, Justin R. Gigliotti, William E. Glomski