Patents by Inventor Tarek El Dokor

Tarek El Dokor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8970589
    Abstract: A near-touch interface is provided that utilizes stereo cameras and a series of targeted structured light tessellations, emanating from the screen as a light source and incident on objects in the field of view. After radial distortion from a series of wide-angle lenses is mitigated, a surface-based spatio-temporal stereo algorithm is utilized to estimate initial depth values. Once these values are calculated, a subsequent refinement step may be applied in which light source tessellations are used to flash a structure onto targeted components of the scene, where initial near-interaction disparity values have been calculated. The combination of a spherical stereo algorithm and smoothing with structured light source tessellations provides for a very reliable and fast near-field depth engine, and resolves issues that are associated with depth estimates for embedded solutions of this approach.
    Type: Grant
    Filed: July 24, 2011
    Date of Patent: March 3, 2015
    Assignee: Edge 3 Technologies, Inc.
    Inventor: Tarek El Dokor
  • Publication number: 20150036922
    Abstract: A method and apparatus for processing image data are provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications.
    Type: Application
    Filed: October 18, 2014
    Publication date: February 5, 2015
    Inventor: Tarek El Dokor
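    The main/monitor/specialist arrangement described in the abstract can be sketched as follows. This is an illustrative toy, not the patented implementation; all function names and the stand-in classifiers are hypothetical:

    ```python
    def classify_with_specialists(sample, main_net, monitor_net, specialists, spawn_specialist):
        """Classify with a main network; if a monitor network flags the prediction
        as a 'confusing' classification, defer to (or spawn) a specialist network."""
        label, confidence = main_net(sample)
        if monitor_net(sample, label):           # monitor deems this prediction confusing
            if label not in specialists:         # lazily spawn a specialist for it
                specialists[label] = spawn_specialist(label)
            label, confidence = specialists[label](sample)
        return label, confidence

    # Toy stand-ins: the main net weakly predicts 'cat'; the monitor flags 'cat'
    # predictions as confusing; the spawned specialist resolves them to 'lynx'.
    main = lambda s: ("cat", 0.55)
    monitor = lambda s, lbl: lbl == "cat"
    spawn = lambda lbl: (lambda s: ("lynx", 0.95))
    specialists = {}
    print(classify_with_specialists("img", main, monitor, specialists, spawn))
    ```

    The specialist dictionary persists across calls, so each confusing class pays the spawning cost only once.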
  • Publication number: 20150032331
    Abstract: An in-vehicle computing system allows a user to control components of the vehicle by performing gestures. The user provides a selecting input to indicate that he wishes to control one of the components. After the component is identified, the user performs a gesture to control the component. The gesture and the component that was previously selected are analyzed to generate a command for the component. Since the command is based on both the gesture and the identified component, the user can perform the same gesture in the same position within the vehicle to control different components.
    Type: Application
    Filed: October 14, 2014
    Publication date: January 29, 2015
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
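    The key idea above, that one gesture maps to different commands depending on the previously selected component, can be sketched with a simple lookup table. The table contents and names are hypothetical examples, not taken from the patent:

    ```python
    # Hypothetical command table: the same gesture produces different commands
    # depending on which vehicle component the user selected beforehand.
    COMMANDS = {
        ("swipe_up", "window"):  "window_raise",
        ("swipe_up", "volume"):  "volume_increase",
        ("swipe_up", "sunroof"): "sunroof_close",
    }

    def generate_command(selected_component, gesture):
        """Combine the previously selected component with the recognized gesture."""
        return COMMANDS.get((gesture, selected_component), "unrecognized")

    print(generate_command("volume", "swipe_up"))  # same gesture...
    print(generate_command("window", "swipe_up"))  # ...different command
    ```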
  • Publication number: 20150020031
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Application
    Filed: July 3, 2014
    Publication date: January 15, 2015
    Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
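    The coincidence test at the heart of the virtual-touch interface, comparing the tracked user object position against displayed image positions in the shared volumetric field, can be sketched as below. The tolerance value and all names are illustrative assumptions:

    ```python
    import math

    def coincident(user_pos, image_pos, tol=0.02):
        """True when the user object and a displayed image are substantially coincident."""
        return math.dist(user_pos, image_pos) <= tol

    def virtual_touch(user_pos, images):
        """images: mapping of 3-D image position -> function programmed for that image.
        Executes and returns the function of the first coincident image, if any."""
        for pos, action in images.items():
            if coincident(user_pos, pos):
                return action()
        return None

    images = {(0.10, 0.20, 0.30): lambda: "play_pressed"}
    print(virtual_touch((0.11, 0.20, 0.30), images))
    ```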
  • Patent number: 8928590
    Abstract: A gesture-enabled keyboard and method are defined. The gesture-enabled keyboard includes a keyboard housing including one or more keyboard keys for typing and a pair of stereo camera sensors mounted within the keyboard housing, a field of view of the pair of stereo camera sensors projecting substantially perpendicularly to the plane of the keyboard housing. A background of the field of view is updated when one or more alternative input devices are in use. A gesture region including a plurality of interaction zones and a virtual membrane defining a region of transition from one of the plurality of interaction zones to another of the plurality of interaction zones is defined within the field of view of the pair of stereo camera sensors. Gesture interaction is enabled when one or more gesture objects are positioned within the gesture region, and when one or more alternative input devices are not in use.
    Type: Grant
    Filed: May 15, 2012
    Date of Patent: January 6, 2015
    Assignee: Edge 3 Technologies, Inc.
    Inventor: Tarek El Dokor
  • Publication number: 20140371955
    Abstract: A system and method for combining two separate types of human machine interfaces, e.g., a voice signal and a gesture signal, by performing voice recognition on the voice signal and gesture recognition on the gesture signal. Based on a confidence determination using the voice recognition result and the gesture recognition result, the system can, for example, immediately perform the command/request, request confirmation of the command/request, or determine that the command/request was not identified.
    Type: Application
    Filed: August 28, 2014
    Publication date: December 18, 2014
    Applicants: EDGE 3 TECHNOLOGIES LLC, Honda Motor Co., Ltd.
    Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Tarek El Dokor, Josh Tyler King
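    One plausible shape for the confidence determination described above is sketched below. The fusion rule (agreement boosts confidence; thresholds select execute/confirm/reject) and all thresholds are assumptions for illustration, not the patented method:

    ```python
    def fuse(voice_result, gesture_result, high=0.85, low=0.5):
        """Combine independent voice and gesture recognition results.
        Agreement with high joint confidence -> execute immediately;
        moderate confidence -> ask for confirmation; otherwise not identified."""
        v_cmd, v_conf = voice_result
        g_cmd, g_conf = gesture_result
        if v_cmd == g_cmd:
            conf = 1 - (1 - v_conf) * (1 - g_conf)  # two agreeing modalities reinforce
            if conf >= high:
                return ("execute", v_cmd)
            if conf >= low:
                return ("confirm", v_cmd)
        return ("not_identified", None)

    print(fuse(("call_home", 0.7), ("call_home", 0.6)))
    ```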
  • Patent number: 8914163
    Abstract: A system and method for combining two separate types of human machine interfaces, e.g., a voice signal and a gesture signal, by performing voice recognition on the voice signal and gesture recognition on the gesture signal. Based on a confidence determination using the voice recognition result and the gesture recognition result, the system can, for example, immediately perform the command/request, request confirmation of the command/request, or determine that the command/request was not identified.
    Type: Grant
    Filed: May 1, 2014
    Date of Patent: December 16, 2014
    Assignees: Honda Motor Co., Ltd., Edge3 Technologies, LLC
    Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Tarek El Dokor, Josh Tyler King
  • Patent number: 8891859
    Abstract: A method and apparatus for processing image data are provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications.
    Type: Grant
    Filed: January 1, 2014
    Date of Patent: November 18, 2014
    Assignee: Edge 3 Technologies, Inc.
    Inventor: Tarek El Dokor
  • Patent number: 8886399
    Abstract: An in-vehicle computing system allows a user to control components of the vehicle by performing gestures. The user provides a selecting input to indicate that he wishes to control one of the components. After the component is identified, the user performs a gesture to control the component. The gesture and the component that was previously selected are analyzed to generate a command for the component. Since the command is based on both the gesture and the identified component, the user can perform the same gesture in the same position within the vehicle to control different components.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 11, 2014
    Assignees: Honda Motor Co., Ltd., Edge 3 Technologies LLC
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Publication number: 20140330515
    Abstract: A user, such as the driver of a vehicle, can retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
    Type: Application
    Filed: July 16, 2014
    Publication date: November 6, 2014
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Publication number: 20140285664
    Abstract: A system for capturing image data for gestures from a passenger or a driver in a vehicle with a dynamic illumination level comprises a low-lux sensor equipped to capture image data in an environment with an illumination level below an illumination threshold, a high-lux sensor equipped to capture image data in the environment with the illumination level above the illumination threshold, and an object recognition module for activating the sensors. The object recognition module determines the illumination level of the environment and activates the low-lux sensor if the illumination level is below the illumination threshold. If the illumination level is above the threshold, the object recognition module activates the high-lux sensor.
    Type: Application
    Filed: June 10, 2014
    Publication date: September 25, 2014
    Applicants: EDGE 3 TECHNOLOGIES LLC, HONDA MOTOR CO., LTD.
    Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Ritchie Winson Huang, Josh Tyler King, Tarek El Dokor
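    The sensor-activation rule in the abstract reduces to a single threshold comparison. A minimal sketch, with the threshold value and names chosen for illustration only:

    ```python
    def select_sensor(illumination, threshold=50.0):
        """Activate the low-lux sensor when the cabin illumination level is below
        the threshold; otherwise activate the high-lux sensor."""
        return "low_lux" if illumination < threshold else "high_lux"

    # Night-time cabin vs. direct daylight (illustrative lux values).
    print(select_sensor(5.0))    # dim interior
    print(select_sensor(800.0))  # bright interior
    ```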
  • Publication number: 20140277936
    Abstract: An in-vehicle computing system allows a user to control components of the vehicle by performing gestures. The user provides a selecting input to indicate that he wishes to control one of the components. After the component is identified, the user performs a gesture to control the component. The gesture and the component that was previously selected are analyzed to generate a command for the component. Since the command is based on both the gesture and the identified component, the user can perform the same gesture in the same position within the vehicle to control different components.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Publication number: 20140278068
    Abstract: A user, such as the driver of a vehicle, to retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Publication number: 20140267612
    Abstract: A method, system and computer program are provided that present a real-time approach to chromaticity maximization for use in image segmentation. The ambient illuminant in a scene may first be approximated. The input image may then be preprocessed to remove the impact of the illuminant and approximate an ambient white light source instead. The resultant image is then chroma-maximized. The result is an adaptive chromaticity maximization algorithm capable of adapting to a wide dynamic range of illuminations. A segmentation algorithm that takes advantage of this approach is also put in place. This approach also has applications in HDR photography and real-time HDR video.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Tarek El Dokor, Jordan Cluster, Joshua King
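    The preprocessing pipeline described above, approximating the illuminant, renormalizing toward white light, then maximizing chroma, can be illustrated with a gray-world illuminant estimate. The gray-world assumption is a stand-in for whatever estimator the patent actually uses:

    ```python
    def gray_world_correct(pixels):
        """Approximate the ambient illuminant by the per-channel means (gray-world
        assumption) and rescale so the scene appears lit by a white light source."""
        n = len(pixels)
        means = [sum(p[c] for p in pixels) / n for c in range(3)]
        gray = sum(means) / 3
        return [tuple(p[c] * gray / means[c] for c in range(3)) for p in pixels]

    def chroma_maximize(pixel):
        """Stretch a pixel to full brightness while preserving its chromaticity."""
        m = max(pixel)
        return tuple(c / m for c in pixel) if m > 0 else pixel

    print(gray_world_correct([(1, 2, 3), (3, 2, 1)]))  # already white-balanced
    print(chroma_maximize((2, 4, 1)))
    ```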
  • Publication number: 20140244072
    Abstract: A system and method for combining two separate types of human machine interfaces, e.g., a voice signal and a gesture signal, by performing voice recognition on the voice signal and gesture recognition on the gesture signal. Based on a confidence determination using the voice recognition result and the gesture recognition result, the system can, for example, immediately perform the command/request, request confirmation of the command/request, or determine that the command/request was not identified.
    Type: Application
    Filed: May 1, 2014
    Publication date: August 28, 2014
    Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Tarek El Dokor, Josh Tyler King
  • Patent number: 8818716
    Abstract: A user, such as the driver of a vehicle, can retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 26, 2014
    Assignees: Honda Motor Co., Ltd., Edge 3 Technologies LLC
    Inventors: Tarek A. El Dokor, Jordan Cluster, James E. Holmes, Pedram Vaghefinazari, Stuart M. Yamamoto
  • Patent number: 8803801
    Abstract: A three-dimensional virtual-touch human-machine interface system (20) and a method (100) of operating the system (20) are presented. The system (20) incorporates a three-dimensional time-of-flight sensor (22), a three-dimensional autostereoscopic display (24), and a computer (26) coupled to the sensor (22) and the display (24). The sensor (22) detects a user object (40) within a three-dimensional sensor space (28). The display (24) displays an image (42) within a three-dimensional display space (32). The computer (26) maps a position of the user object (40) within an interactive volumetric field (36) mutually within the sensor space (28) and the display space (32), and determines when the positions of the user object (40) and the image (42) are substantially coincident. Upon detection of coincidence, the computer (26) executes a function programmed for the image (42).
    Type: Grant
    Filed: May 7, 2013
    Date of Patent: August 12, 2014
    Assignee: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Joshua T. King, James E. Holmes, William E. Glomski, Maria N. Ngomba
  • Publication number: 20140219559
    Abstract: A method and system for segmenting a plurality of images. The method comprises the steps of segmenting the image through a novel clustering technique, that is, generating a composite depth map that includes temporally stable segments of the image as well as segments in subsequent images that have changed. These changes may be determined by determining one or more differences between the temporally stable depth map and segments included in one or more subsequent frames. Thereafter, the portions of the one or more subsequent frames that include segments including changes from their corresponding segments in the temporally stable depth map are processed and are combined with the segments from the temporally stable depth map to compute their associated disparities in one or more subsequent frames. The images may include a pair of stereo images acquired through a stereo camera system at a substantially similar time.
    Type: Application
    Filed: January 7, 2014
    Publication date: August 7, 2014
    Inventors: Tarek El Dokor, Joshua King, Jordan Cluster, James Edward Holmes
  • Patent number: 8798358
    Abstract: A method and system for generating a disparity map. The method comprises the steps of generating a first disparity map based upon a first image and a second image acquired at a first time, acquiring at least a third image and a fourth image at a second time, and determining one or more portions comprising a difference between one of the first and second images and a corresponding one of the third and fourth images. A disparity map update is generated for the one or more determined portions, and a disparity map is generated based upon the third image and the fourth image by combining the disparity map update and the first disparity map.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: August 5, 2014
    Assignee: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Jordan Cluster, Joshua King, James Edward Holmes
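    The incremental-update idea above, recompute disparity only where the new stereo pair differs from the previous one, and reuse the prior disparity elsewhere, can be sketched as follows. The per-pixel `stereo_match` callback is a placeholder for a real matcher:

    ```python
    def update_disparity(prev_left, prev_right, left, right, prev_disparity, stereo_match):
        """Generate a disparity map for the new stereo pair (left, right) by
        recomputing only pixels that changed versus (prev_left, prev_right)
        and carrying over prev_disparity everywhere else.
        Images and maps are 2-D lists of equal shape."""
        h, w = len(left), len(left[0])
        disparity = [row[:] for row in prev_disparity]  # start from the old map
        for y in range(h):
            for x in range(w):
                if left[y][x] != prev_left[y][x] or right[y][x] != prev_right[y][x]:
                    disparity[y][x] = stereo_match(left, right, y, x)
        return disparity

    zeros = [[0, 0], [0, 0]]
    new_left = [[0, 5], [0, 0]]                      # one pixel changed
    match = lambda L, R, y, x: 7                     # placeholder matcher
    print(update_disparity(zeros, zeros, new_left, zeros, zeros, match))
    ```

    Only the changed pixel is re-matched, which is the source of the speedup the abstract implies for video-rate stereo.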
  • Publication number: 20140205183
    Abstract: A method and apparatus for segmenting an image are provided. The method may include the steps of clustering pixels from one of a plurality of images into one or more segments, determining one or more unstable segments changing by more than a predetermined threshold from a prior of the plurality of images, determining one or more segments transitioning from an unstable to a stable segment, determining depth for one or more of the one or more segments that have changed by more than the predetermined threshold, determining depth for one or more of the one or more transitioning segments, and combining the determined depth for the one or more unstable segments and the one or more transitioning segments with a predetermined depth of all segments changing less than the predetermined threshold from the prior of the plurality of images.
    Type: Application
    Filed: March 27, 2014
    Publication date: July 24, 2014
    Applicant: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Jordan Cluster
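    The stable/unstable bookkeeping described in this last abstract can be sketched at segment granularity. Here each segment is reduced to a single scalar feature for comparison, and all names are illustrative assumptions:

    ```python
    def update_segment_depths(segments, prev_segments, prev_depths, compute_depth, threshold=0.1):
        """Reuse the cached depth of segments that changed less than `threshold`
        versus the prior frame (temporally stable); recompute depth only for
        segments that are new or changed by more than the threshold."""
        depths = {}
        for seg_id, feature in segments.items():
            prev = prev_segments.get(seg_id)
            if prev is not None and abs(feature - prev) < threshold and seg_id in prev_depths:
                depths[seg_id] = prev_depths[seg_id]   # stable: keep cached depth
            else:
                depths[seg_id] = compute_depth(seg_id)  # unstable: recompute
        return depths

    calls = []
    compute = lambda sid: calls.append(sid) or 1.5      # records which segments recompute
    prev_seg = {"a": 0.50, "b": 0.20}
    prev_dep = {"a": 2.0, "b": 3.0}
    cur = {"a": 0.52, "b": 0.90}                        # 'a' stable, 'b' changed
    print(update_segment_depths(cur, prev_seg, prev_dep, compute))
    print(calls)                                        # only the unstable segment
    ```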