Patents by Inventor Gila Kamhi

Gila Kamhi has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170316516
    Abstract: Methods and systems are provided for enhancing a user's social experience of a trip in an autonomous vehicle. In one embodiment, a method includes: maintaining information associated with a user's social preferences in a data storage device; and selectively coordinating the trip and riders in the autonomous vehicle based on the information maintained in the data storage device.
    Type: Application
    Filed: April 6, 2017
    Publication date: November 2, 2017
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: CLAUDIA V. GOLDMAN-SHENHAR, GILA KAMHI
  • Patent number: 9791917
    Abstract: Apparatuses, methods, and storage media for modifying augmented reality in response to user interaction are described. In one instance, the apparatus for modifying augmented reality may include a processor, a scene capture camera coupled with the processor to capture a physical scene, and an augmentation management module to be operated by the processor. The augmentation management module may obtain and analyze the physical scene, generate one or more virtual articles to augment a rendering of the physical scene based on a result of the analysis, track user interaction with the rendered augmented scene, and modify or complement the virtual articles in response to the tracked user interaction. Other embodiments may be described and claimed.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: October 17, 2017
    Assignee: Intel Corporation
    Inventors: Gila Kamhi, Amit Moran
  • Publication number: 20170285641
    Abstract: A system including an input group that, when executed by a processing unit, obtains contextual input data for use in determining a contextual mode, wherein the contextual input is based on at least one type of context. Example contexts include an environmental context; a user state; a user driving destination; a profile of the user; and a profile of another user. The system also includes a deliberation group that, when executed by the processing unit, determines, based on the contextual input data, a contextual mode for use in controlling a vehicle function. The system also includes an output group that, when executed by the hardware-based processing unit, initiates implementation, at a vehicle of transportation, of the contextual mode determined to control the vehicle function.
    Type: Application
    Filed: April 3, 2017
    Publication date: October 5, 2017
    Inventors: Claudia V. Goldman-Shenhar, Gila Kamhi, Nadav Baron, Barak Hershkovitz
  • Publication number: 20170285345
    Abstract: Systems, apparatuses, and/or methods to augment reality. An object identifier may identify an object in a field of view of a user that includes a reflection of the user from a reflective surface, such as a surface of a traditional mirror. In addition, a reality augmenter may generate an augmented reality object based on the identification of the object. In one example, eyeglasses including a relatively transparent display screen may be coupled with an image capture device on the user and the augmented reality object may be observable by the user on the transparent display screen when the user wears the eyeglasses. A localizer may position the augmented reality object on the transparent display screen relative to the reflection of the user that passes through the transparent display screen during natural visual perception of the reflection by the user.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Ron Ferens, Barak Hurwitz, Camila Dorin, Gila Kamhi
  • Patent number: 9761059
    Abstract: Computer-readable storage media, computing device and methods associated with dynamic modification of a rendering of a physical scene. In embodiments, one or more computer-readable storage media may have instructions stored thereon which, when executed by a computing device, may provide the computing device with a dynamic augmentation module. The dynamic augmentation module may, in some embodiments, cause the computing device to receive a manipulation of a physical scene. In response to receipt of the manipulation, the dynamic augmentation module may cause the computing device to dynamically modify a rendering of the physical scene. In some embodiments, this may be accomplished through real-time application of one or more virtual articles to the rendering of the physical scene or alteration of one or more virtual articles added to the rendering of the physical scene. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: September 12, 2017
    Assignee: Intel Corporation
    Inventors: Kobi Nistel, Barak Hurwitz, Gila Kamhi, Dror Reif, Vladimir Cooperman
  • Patent number: 9754416
    Abstract: An augmented reality (AR) device includes a 3D video camera to capture video images and corresponding depth information, a display device to display the video data, and an AR module to add a virtual 3D model to the displayed video data. A depth mapping module generates a 3D map based on the depth information, a dynamic scene recognition and tracking module processes the video images and the 3D map to detect and track a target object within a field of view of the 3D video camera, and an augmented video rendering module renders an augmented video of the virtual 3D model dynamically interacting with the target object. The augmented video is displayed on the display device in real time. The AR device may further include a context module to select the virtual 3D model based on context data comprising a current location of the augmented reality device.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: September 5, 2017
    Assignee: INTEL CORPORATION
    Inventors: Gila Kamhi, Barak Hurwitz, Vladimir Kouperman, Kobi Nistel
  • Patent number: 9697867
    Abstract: A narrative presentation system may include at least one optical sensor capable of detecting objects added to the field-of-view of the at least one optical sensor. Using data contained in signals received from the at least one optical sensor, an adaptive narrative presentation circuit identifies an object added to the field-of-view and identifies an aspect of a narrative presentation logically associated with the identified object. The adaptive narrative presentation circuit modifies the aspect of the narrative presentation identified as logically associated with the identified object.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: July 4, 2017
    Assignee: INTEL CORPORATION
    Inventors: Gila Kamhi, Nadav Zamir, Kobi Nistel, Ron Ferens, Amit Moran, Barak Hurwitz, Vladimir Kouperman
  • Publication number: 20170157766
    Abstract: This disclosure pertains to machine object determination based on human interaction. In general, a device such as a robot may be capable of interacting with a person (e.g., user) to select an object. The user may identify the target object for the device, which may determine whether the target object is known. If the device determines that the target object is known, the device may confirm the target object to the user. If the device determines that the target object is not known, the device may then determine a group of characteristics for use in determining the object from potential target objects, and may select a characteristic that most substantially reduces the number of potential target objects. After the characteristic is determined, the device may formulate an inquiry to the user utilizing the characteristic. Characteristics may be selected until the device determines the target object and confirms it to the user.
    Type: Application
    Filed: December 3, 2015
    Publication date: June 8, 2017
    Applicant: Intel Corporation
    Inventors: GILA KAMHI, AMIT MORAN, KOBI NISTEL, DAVID CHETTRIT
  • Patent number: 9633622
    Abstract: Disclosed in some examples are methods, systems, and machine-readable mediums in which actions or states of a first user (e.g., natural interactions) having a first corresponding computing device are observed by a sensor on a second computing device corresponding to a second user. A notification describing the observed actions or states of the first user may be shared across a network with the first corresponding computing device. In this way, the first computing device may be provided with information concerning the state of the user without having to directly sense the user.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: April 25, 2017
    Assignee: Intel Corporation
    Inventors: Gila Kamhi, Ron Ferens, Vladimir Vova Cooperman, Kobi Nistel, Barak Hurwitz
  • Publication number: 20170094259
    Abstract: Techniques related to 3D image capture with dynamic cameras.
    Type: Application
    Filed: August 17, 2016
    Publication date: March 30, 2017
    Inventors: Vladimir Kouperman, Gila Kamhi, Nadav Zamir, Barak Hurwitz
  • Publication number: 20170092322
    Abstract: A narrative presentation system may include at least one optical sensor capable of detecting objects added to the field-of-view of the at least one optical sensor. Using data contained in signals received from the at least one optical sensor, an adaptive narrative presentation circuit identifies an object added to the field-of-view and identifies an aspect of a narrative presentation logically associated with the identified object. The adaptive narrative presentation circuit modifies the aspect of the narrative presentation identified as logically associated with the identified object.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 30, 2017
    Applicant: Intel Corporation
    Inventors: GILA KAMHI, NADAV ZAMIR, KOBI NISTEL, RON FERENS, AMIT MORAN, BARAK HURWITZ, VLADIMIR KOUPERMAN
  • Publication number: 20170061683
    Abstract: A mechanism is described for facilitating smart measurement of body dimensions despite loose clothing and/or other obscurities according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more capturing/sensing components of a computing device, a scan of a body of a user, and computing one or more primary measurements relating to one or more primary areas of the body, where the one or more primary measurements are computed based on depth data of the one or more primary areas of the body, where the depth data is obtained from the scan. The method may further include receiving at least one of secondary measurements and a three-dimensional (3D) avatar of the body based on the primary measurements, and preparing a report including body dimensions of the body based on at least one of the secondary measurements and the 3D avatar, and presenting the report at a display device.
    Type: Application
    Filed: November 30, 2015
    Publication date: March 2, 2017
    Applicant: INTEL CORPORATION
    Inventors: CAMILA DORIN, BARAK HURWITZ, GILA KAMHI
  • Publication number: 20170046965
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for user and environment aware robots for use in educational applications. A system may include a camera to obtain image data and user analysis circuitry to analyze the image data to identify a student and obtain educational history associated with the student. The system may also include environmental analysis circuitry to analyze the image data and identify a projection surface. The system may further include scene augmentation circuitry to generate a scene comprising selected portions of the educational material based on the identified student and the educational history; and an image projector to project the scene onto the projection surface.
    Type: Application
    Filed: August 12, 2015
    Publication date: February 16, 2017
    Applicant: INTEL CORPORATION
    Inventors: GILA KAMHI, AMIT MORAN
  • Patent number: 9478037
    Abstract: Techniques to provide efficient stereo block matching may include receiving an object from a scene. Pixels in the scene may be identified based on the object. Stereo block matching may be performed for only the identified pixels in order to generate a depth map. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: October 25, 2016
    Assignee: INTEL CORPORATION
    Inventors: Dror Reif, Gila Kamhi, Barak Hurwitz
  • Publication number: 20160284134
    Abstract: Apparatuses, methods, and storage media for modifying augmented reality in response to user interaction are described. In one instance, the apparatus for modifying augmented reality may include a processor, a scene capture camera coupled with the processor to capture a physical scene, and an augmentation management module to be operated by the processor. The augmentation management module may obtain and analyze the physical scene, generate one or more virtual articles to augment a rendering of the physical scene based on a result of the analysis, track user interaction with the rendered augmented scene, and modify or complement the virtual articles in response to the tracked user interaction. Other embodiments may be described and claimed.
    Type: Application
    Filed: March 24, 2015
    Publication date: September 29, 2016
    Inventors: Gila Kamhi, Amit Moran
  • Publication number: 20160284135
    Abstract: A method comprising acquiring depth image data, scanning the depth image data to generate a three-dimensional (3D) model of an object included in the depth image data, and animating the object for insertion into an application for interaction with a user.
    Type: Application
    Filed: March 25, 2015
    Publication date: September 29, 2016
    Inventors: Gila Kamhi, Barak Hurwitz, Tal Alexander-Amor
  • Patent number: 9445081
    Abstract: Techniques related to 3D image capture with dynamic cameras.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: September 13, 2016
    Assignee: Intel Corporation
    Inventors: Vladimir Kouperman, Gila Kamhi, Nadav Zamir, Barak Hurwitz
  • Patent number: 9389779
    Abstract: Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: July 12, 2016
    Assignee: Intel Corporation
    Inventors: Glen J. Anderson, Dror Reif, Barak Hurwitz, Gila Kamhi
  • Publication number: 20160180797
    Abstract: Disclosed in some examples are methods, systems, and machine-readable mediums in which actions or states of a first user (e.g., natural interactions) having a first corresponding computing device are observed by a sensor on a second computing device corresponding to a second user. A notification describing the observed actions or states of the first user may be shared across a network with the first corresponding computing device. In this way, the first computing device may be provided with information concerning the state of the user without having to directly sense the user.
    Type: Application
    Filed: December 18, 2014
    Publication date: June 23, 2016
    Inventors: Gila Kamhi, Ron Ferens, Vladimir Vova Cooperman, Kobi Nistel, Barak Hurwitz
  • Publication number: 20160180590
    Abstract: An augmented reality (AR) device includes a 3D video camera to capture video images and corresponding depth information, a display device to display the video data, and an AR module to add a virtual 3D model to the displayed video data. A depth mapping module generates a 3D map based on the depth information, a dynamic scene recognition and tracking module processes the video images and the 3D map to detect and track a target object within a field of view of the 3D video camera, and an augmented video rendering module renders an augmented video of the virtual 3D model dynamically interacting with the target object. The augmented video is displayed on the display device in real time. The AR device may further include a context module to select the virtual 3D model based on context data comprising a current location of the augmented reality device.
    Type: Application
    Filed: December 23, 2014
    Publication date: June 23, 2016
    Applicant: Intel Corporation
    Inventors: GILA KAMHI, BARAK HURWITZ, VLADIMIR COOPERMAN, KOBI NISTEL
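The object-disambiguation behavior described in publication 20170157766 (selecting, at each step, the characteristic that most substantially reduces the number of potential target objects) can be sketched as a greedy question-selection loop. Everything below is an illustrative assumption, not text from the patent: the candidate representation, the worst-case-split criterion, and the `answer` callback standing in for the user dialogue are all hypothetical.

```python
from collections import Counter

def best_characteristic(candidates):
    """Pick the characteristic whose answer splits the candidates most evenly,
    i.e. whose worst-case remaining group is smallest (a greedy heuristic)."""
    characteristics = {k for obj in candidates for k in obj if k != "name"}
    def worst_case(ch):
        groups = Counter(obj.get(ch) for obj in candidates)
        return max(groups.values())
    return min(sorted(characteristics), key=worst_case)

def disambiguate(candidates, answer):
    """Narrow the candidate set by querying answer(characteristic) -> value
    until a single target object remains (assumes consistent answers)."""
    while len(candidates) > 1:
        ch = best_characteristic(candidates)
        value = answer(ch)
        candidates = [obj for obj in candidates if obj.get(ch) == value]
    return candidates[0] if candidates else None
```

In use, `answer` would wrap the spoken inquiry to the user; here a dictionary of the true target's attributes can simulate the dialogue.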
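Patent 9478037 describes performing stereo block matching only for pixels identified as belonging to an object, rather than for the whole frame. A minimal sketch of that idea, assuming a sum-of-absolute-differences (SAD) cost, a boolean object mask, and illustrative window and disparity parameters not taken from the patent:

```python
import numpy as np

def masked_block_match(left, right, mask, max_disp=4, half=1):
    """Return an integer disparity map computed only where mask is True;
    all other pixels are skipped, saving the matching cost for them."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            if not mask[y, x]:
                continue  # pixel not in the identified object: no matching
            block = left[y - half:y + half + 1, x - half:x + half + 1]
            # SAD cost against the right image for each candidate disparity
            costs = [
                np.abs(block - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

A synthetic pair where the left image is the right image shifted by two pixels recovers disparity 2 at masked pixels and leaves unmasked pixels at zero.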
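The depth-plane assignment step in patent 9389779 (measure a gesture's depth relative to the display, bin it into a virtual touch plane, and dispatch a plane-specific command) reduces to a one-dimensional binning problem. The boundary values and command names below are illustrative assumptions, not values from the patent:

```python
from bisect import bisect_right

# Hypothetical boundaries (meters from the display) separating touch planes:
# a primary plane near the screen, a secondary plane, and a hover region.
PLANE_BOUNDARIES = [0.15, 0.40]
PLANE_COMMANDS = ["primary_touch", "secondary_touch", "hover"]

def assign_depth_plane(depth_m):
    """Return the index of the depth plane containing depth_m."""
    return bisect_right(PLANE_BOUNDARIES, depth_m)

def command_for_gesture(depth_m):
    """Map a recognized gesture at the given depth to a UI command name."""
    return PLANE_COMMANDS[assign_depth_plane(depth_m)]
```

Using `bisect_right` keeps the plane lookup O(log n) in the number of boundaries and makes the half-open binning intervals explicit.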