Patents Assigned to Edge 3 Technologies LLC
-
Patent number: 11366513
Abstract: Various methods and apparatuses are provided to determine an occurrence of a first user indication and determine a plurality of Points of Interest (POI) corresponding to the first user indication, and to determine an occurrence of a second user indication and responsively determine a narrowed POI from the plurality of POIs. An action corresponding to the narrowed POI and the first and/or second user indication is determined and effected.
Type: Grant
Filed: March 1, 2021
Date of Patent: June 21, 2022
Assignees: HONDA MOTOR CO., LTD., EDGE 3 TECHNOLOGIES, LLC
Inventors: Stuart Yamamoto, Pedram Vaghefinazari, Tarek A. El Dokor, Jordan Cluster
-
Patent number: 11275447
Abstract: A user, such as the driver of a vehicle, is able to retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
Type: Grant
Filed: August 8, 2018
Date of Patent: March 15, 2022
Assignees: HONDA MOTOR CO., LTD., EDGE 3 TECHNOLOGIES LLC
Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Tarek A. El Dokor, Jordan Cluster, James E. Holmes
-
Patent number: 11243613
Abstract: A method includes monitoring a plurality of system inputs, and detecting a behavioral pattern performed by a user and associated with the plurality of system inputs. When the behavioral pattern is detected, the method includes associating, in a memory, a gesture with at least one action, the at least one action being determined by the plurality of system inputs, and, upon detecting the gesture, executing the action associated with the gesture.
Type: Grant
Filed: July 29, 2019
Date of Patent: February 8, 2022
Assignees: HONDA MOTOR CO., LTD., EDGE 3 TECHNOLOGIES LLC
Inventors: Stuart Masakazu Yamamoto, Tarek A. El Dokor
-
Patent number: 10936050
Abstract: Various methods and apparatuses are provided to determine an occurrence of a first user indication and determine a plurality of Points of Interest (POI) corresponding to the first user indication, and to determine an occurrence of a second user indication and responsively determine a narrowed POI from the plurality of POIs. An action corresponding to the narrowed POI and the first and/or second user indication is determined and effected.
Type: Grant
Filed: June 16, 2014
Date of Patent: March 2, 2021
Assignees: HONDA MOTOR CO., LTD., EDGE3 TECHNOLOGIES, LLC
Inventors: Stuart Yamamoto, Pedram Vaghefinazari, Tarek A. El Dokor, Jordan Cluster
-
Patent number: 10409382
Abstract: A method includes monitoring a plurality of system inputs, and detecting a behavioral pattern performed by a user and associated with the plurality of system inputs. When the behavioral pattern is detected, the method includes associating, in a memory, a gesture with at least one action, the at least one action being determined by the plurality of system inputs, and, upon detecting the gesture, executing the action associated with the gesture.
Type: Grant
Filed: October 2, 2015
Date of Patent: September 10, 2019
Assignees: Honda Motor Co., Ltd., EDGE 3 TECHNOLOGIES LLC
Inventors: Stuart Yamamoto, Tarek A. El Dokor
-
Patent number: 9469251
Abstract: A system for capturing image data for gestures from a passenger or a driver in a vehicle with a dynamic illumination level comprises a low-lux sensor equipped to capture image data in an environment with an illumination level below an illumination threshold, a high-lux sensor equipped to capture image data in the environment with the illumination level above the illumination threshold, and an object recognition module for activating the sensors. The object recognition module determines the illumination level of the environment and activates the low-lux sensor if the illumination level is below the illumination threshold. If the illumination level is above the threshold, the object recognition module activates the high-lux sensor.
Type: Grant
Filed: February 26, 2016
Date of Patent: October 18, 2016
Assignees: Honda Motor Co., Ltd., Edge 3 Technologies, LLC
Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Ritchie Winson Huang, Josh Tyler King, Tarek El Dokor
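The sensor-selection logic this abstract describes can be sketched in a few lines: activate the low-lux sensor when the measured illumination is below the threshold, the high-lux sensor otherwise. This is a minimal illustration only; the names (`Sensor`, `select_sensor`) and the threshold value are assumptions, not taken from the patent.

```python
# Illustrative sketch of threshold-based sensor activation, as summarized
# in the abstract above. The threshold value is an arbitrary assumption.
from dataclasses import dataclass

LUX_THRESHOLD = 400.0  # assumed illumination threshold, in lux


@dataclass
class Sensor:
    name: str
    active: bool = False


def select_sensor(illumination_lux: float, low_lux: Sensor, high_lux: Sensor) -> Sensor:
    """Activate exactly one sensor based on the measured illumination level."""
    chosen = low_lux if illumination_lux < LUX_THRESHOLD else high_lux
    # Only the chosen sensor is left active.
    low_lux.active = chosen is low_lux
    high_lux.active = chosen is high_lux
    return chosen
```

In a real system the illumination measurement and sensor activation would go through the object recognition module's hardware interfaces; the point here is only the threshold comparison.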
-
Publication number: 20160103499
Abstract: A gesture control system includes a processor, the processor in communication with a plurality of sensors. The processor is configured to perform the steps of detecting, using the plurality of sensors, a gesture in a volume occupied by a plurality of occupants, analyzing a prior knowledge to associate the gesture with one of the plurality of occupants, and generating an output, the output being determined by the gesture and the one of the plurality of occupants.
Type: Application
Filed: October 2, 2015
Publication date: April 14, 2016
Applicants: Edge3 Technologies, LLC, Honda Motor Co., Ltd.
Inventors: Stuart Yamamoto, Tarek A. El Dokor, Graeme Asher, Jordan Cluster, Matthew Conway, Joshua T. King, James E. Holmes, Matt McElvogue, Churu Yun
-
Patent number: 9302621
Abstract: A system for capturing image data for gestures from a passenger or a driver in a vehicle with a dynamic illumination level comprises a low-lux sensor equipped to capture image data in an environment with an illumination level below an illumination threshold, a high-lux sensor equipped to capture image data in the environment with the illumination level above the illumination threshold, and an object recognition module for activating the sensors. The object recognition module determines the illumination level of the environment and activates the low-lux sensor if the illumination level is below the illumination threshold. If the illumination level is above the threshold, the object recognition module activates the high-lux sensor.
Type: Grant
Filed: June 10, 2014
Date of Patent: April 5, 2016
Assignees: Honda Motor Co., Ltd., Edge 3 Technologies, LLC
Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Ritchie Winson Huang, Josh Tyler King, Tarek El Dokor
-
Publication number: 20140371955
Abstract: A system and method for combining two separate types of human machine interfaces, e.g., a voice signal and a gesture signal, by performing voice recognition on the voice signal and gesture recognition on the gesture signal. Based on a confidence determination using the voice recognition result and the gesture recognition result, the system can, for example, immediately perform the command/request, request confirmation of the command/request, or determine that the command/request was not identified.
Type: Application
Filed: August 28, 2014
Publication date: December 18, 2014
Applicants: EDGE 3 TECHNOLOGIES LLC, Honda Motor Co., Ltd.
Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Tarek El Dokor, Josh Tyler King
-
Patent number: 8914163
Abstract: A system and method for combining two separate types of human machine interfaces, e.g., a voice signal and a gesture signal, by performing voice recognition on the voice signal and gesture recognition on the gesture signal. Based on a confidence determination using the voice recognition result and the gesture recognition result, the system can, for example, immediately perform the command/request, request confirmation of the command/request, or determine that the command/request was not identified.
Type: Grant
Filed: May 1, 2014
Date of Patent: December 16, 2014
Assignees: Honda Motor Co., Ltd., Edge3 Technologies, LLC
Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Tarek El Dokor, Josh Tyler King
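The three-way outcome this abstract describes (execute, ask for confirmation, or report no identified command) amounts to thresholding a combined confidence. The sketch below is an assumption-laden illustration: the averaging rule and both threshold values are invented here, not taken from the patent.

```python
# Illustrative sketch of confidence-based arbitration between a voice
# recognition result and a gesture recognition result. The fusion rule
# (simple average) and the thresholds are assumptions for illustration.
EXECUTE_THRESHOLD = 0.8  # assumed: above this, act immediately
CONFIRM_THRESHOLD = 0.5  # assumed: above this, ask the user to confirm


def arbitrate(voice_conf: float, gesture_conf: float) -> str:
    """Map two recognizer confidences to one of three outcomes."""
    combined = (voice_conf + gesture_conf) / 2.0  # naive fusion rule
    if combined >= EXECUTE_THRESHOLD:
        return "execute"         # immediately perform the command/request
    if combined >= CONFIRM_THRESHOLD:
        return "confirm"         # request confirmation of the command/request
    return "not_identified"      # command/request was not identified
```

A production system would likely weight the two modalities differently and check that both recognizers agree on *which* command was given, not just how confident they are.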
-
Patent number: 8886399
Abstract: An in-vehicle computing system allows a user to control components of the vehicle by performing gestures. The user provides a selecting input to indicate that he wishes to control one of the components. After the component is identified, the user performs a gesture to control the component. The gesture and the component that was previously selected are analyzed to generate a command for the component. Since the command is based on both the gesture and the identified component, the user can perform the same gesture in the same position within the vehicle to control different components.
Type: Grant
Filed: March 15, 2013
Date of Patent: November 11, 2014
Assignees: Honda Motor Co., Ltd., Edge 3 Technologies LLC
Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
-
Publication number: 20140285664
Abstract: A system for capturing image data for gestures from a passenger or a driver in a vehicle with a dynamic illumination level comprises a low-lux sensor equipped to capture image data in an environment with an illumination level below an illumination threshold, a high-lux sensor equipped to capture image data in the environment with the illumination level above the illumination threshold, and an object recognition module for activating the sensors. The object recognition module determines the illumination level of the environment and activates the low-lux sensor if the illumination level is below the illumination threshold. If the illumination level is above the threshold, the object recognition module activates the high-lux sensor.
Type: Application
Filed: June 10, 2014
Publication date: September 25, 2014
Applicants: EDGE 3 TECHNOLOGIES LLC, HONDA MOTOR CO., LTD.
Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Ritchie Winson Huang, Josh Tyler King, Tarek El Dokor
-
Patent number: 8818716
Abstract: A user, such as the driver of a vehicle, is able to retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
Type: Grant
Filed: March 15, 2013
Date of Patent: August 26, 2014
Assignees: Honda Motor Co., Ltd., Edge 3 Technologies LLC
Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
-
Patent number: 8781171
Abstract: A system for capturing image data for gestures from a passenger or a driver in a vehicle with a dynamic illumination level comprises a low-lux sensor equipped to capture image data in an environment with an illumination level below an illumination threshold, a high-lux sensor equipped to capture image data in the environment with the illumination level above the illumination threshold, and an object recognition module for activating the sensors. The object recognition module determines the illumination level of the environment and activates the low-lux sensor if the illumination level is below the illumination threshold. If the illumination level is above the threshold, the object recognition module activates the high-lux sensor.
Type: Grant
Filed: October 24, 2012
Date of Patent: July 15, 2014
Assignees: Honda Motor Co., Ltd., Edge 3 Technologies LLC
Inventors: Josh Tyler King, Tarek El Dokor, Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Ritchie Winson Huang
-
Publication number: 20140099019
Abstract: A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
Type: Application
Filed: December 2, 2013
Publication date: April 10, 2014
Applicant: Edge3 Technologies LLC
Inventor: Tarek El Dokor
-
Patent number: 8625855
Abstract: A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
Type: Grant
Filed: February 7, 2013
Date of Patent: January 7, 2014
Assignee: Edge 3 Technologies LLC
Inventor: Tarek El Dokor
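The abstract enumerates a pipeline: receive raw TOF frames, cluster body parts, locate the hand tip, classify the gesture as static or dynamic, then retrieve the mapped command. The sketch below mirrors that sequence of steps only; every function body, the static/dynamic test, and the command mapping are illustrative stand-ins, not the patented method.

```python
# Illustrative sketch of the pipeline steps listed in the abstract above.
# A frame is modeled as a dict holding a "hand" cluster of (x, y) points;
# real TOF frames would be depth images processed by a clustering stage.

def hand_tip(cluster):
    """Placeholder hand-tip estimate: the centroid of the hand cluster."""
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))


def classify(tips):
    """Assumed rule: a gesture is 'dynamic' if the tip moves across frames."""
    return "dynamic" if len(set(tips)) > 1 else "static"


# Assumed gesture-to-command mapping, purely for illustration.
COMMANDS = {"static": "select", "dynamic": "scroll"}


def recognize(frames):
    tips = [hand_tip(f["hand"]) for f in frames]  # per-frame hand-tip location
    return COMMANDS[classify(tips)]               # retrieve the mapped command
```

The real clustering and tip-location steps operate on depth data and are far more involved; this only shows how the stages chain together.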
-
Publication number: 20130241826
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Application
Filed: May 7, 2013
Publication date: September 19, 2013
Applicant: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
-
Publication number: 20130156296
Abstract: A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
Type: Application
Filed: February 7, 2013
Publication date: June 20, 2013
Applicant: EDGE3 TECHNOLOGIES LLC
Inventor: Tarek El Dokor
-
Patent number: 8451220
Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
Type: Grant
Filed: August 13, 2012
Date of Patent: May 28, 2013
Assignee: Edge 3 Technologies LLC
Inventors: Tarek El Dokor, Joshua E King, James E Holmes, William E Glomski, Maria N. Ngomba
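The core test this abstract describes, deciding when the user object's position and the displayed image's position are "substantially coincident" within the shared volumetric field, can be sketched as a distance check. The Euclidean metric and the tolerance value below are assumptions for illustration; the patent does not specify them here.

```python
# Illustrative sketch of a coincidence test between a tracked user-object
# position and a displayed image position, both as (x, y, z) tuples in the
# shared volumetric field. Metric and tolerance are assumed, not sourced.
import math

TOLERANCE = 0.02  # assumed coincidence radius, in metres


def coincident(user_obj, image, tol=TOLERANCE):
    """True when the two 3-D positions are within the tolerance radius."""
    return math.dist(user_obj, image) <= tol  # 3-D Euclidean distance
```

On detection of coincidence, the system would then dispatch whatever function is programmed for that image, e.g. a virtual button press.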
-
Patent number: 8395620Abstract: Method, computer program and system for tracking movement of a subject. The method includes receiving data from a distributed network of camera sensors employing one or more emitted light sources associated with one or more of the one or more camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to motion features indicative of movement of the motion features of the subject, presenting one or more objects on one or more three dimensional display screens, and using the plurality of fixed position sensors to track motion of the motion features of the subject and track manipulation of the motion features of the volumetric three-dimensional representation to determine interaction of one or more of the motion features of the subject and one or more of the one or more objects on the three dimensional display.Type: GrantFiled: July 31, 2012Date of Patent: March 12, 2013Assignee: Edge 3 Technologies LLCInventors: Tarek El Dokor, Joshua E King, James E Holmes, Justin R Gigliotti, William E Glomski