Patents by Inventor Yaron Yanai

Yaron Yanai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230186584
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Applicant: Tahoe Research, Ltd.
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
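The synthetic-variation idea in the abstract above can be illustrated with a minimal sketch. This is not the patented implementation; the renderer stub, parameter names, and example values are all hypothetical, standing in for a real 3D renderer that would emit color/depth image pairs. It only shows the structure of sweeping the adjustment axes the abstract names: pose, background scene, illumination, and simulated camera effects.

```python
# Illustrative sketch (not the patented system): enumerate synthetic
# training variations of an object by sweeping pose, background,
# illumination, and simulated camera-noise parameters.
import itertools

def render_variation(model_id, yaw, background, brightness, noise_sigma):
    """Hypothetical renderer stub: returns a parameter record standing in
    for the (color, depth) image pair a real renderer would produce."""
    return {
        "model": model_id,
        "pose": {"yaw_deg": yaw},
        "background": background,
        "illumination": brightness,
        "camera_noise": noise_sigma,
    }

def generate_variations(model_id, yaws, backgrounds, brightnesses, noise_sigma=0.01):
    # Cartesian product over the adjustment axes named in the abstract.
    for yaw, bg, br in itertools.product(yaws, backgrounds, brightnesses):
        yield render_variation(model_id, yaw, bg, br, noise_sigma)

variations = list(generate_variations(
    "mug", yaws=[0, 45, 90], backgrounds=["office", "kitchen"],
    brightnesses=[0.5, 1.0]))
```

With 3 poses, 2 backgrounds, and 2 illumination levels, the sweep yields 12 labeled variations from a single 3D model.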
  • Patent number: 11574453
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: February 7, 2023
    Assignee: Tahoe Research, Ltd.
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 11048333
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: June 29, 2021
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
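The close-range tracking pipeline in the abstract above can be sketched in miniature. This is an assumption-laden toy, not the patented method: depth frames are small integer grids, segmentation is a plain distance threshold, and "tracking" is just a per-frame centroid, where a real system would match blobs across frames and fit a hand/finger model.

```python
# Minimal sketch: segment a close-range depth frame by a distance
# threshold and follow the centroid of the foreground (e.g. a hand).

def segment_foreground(depth, max_range_mm=600):
    """Return coordinates of pixels closer than max_range_mm (0 = no reading)."""
    return [(r, c)
            for r, row in enumerate(depth)
            for c, d in enumerate(row)
            if 0 < d < max_range_mm]

def centroid(pixels):
    n = len(pixels)
    return (sum(r for r, _ in pixels) / n, sum(c for _, c in pixels) / n)

def track(frames, max_range_mm=600):
    # One centroid per frame; a real tracker would also associate blobs
    # between frames before driving on-screen interaction.
    return [centroid(segment_foreground(f, max_range_mm)) for f in frames]

frame1 = [[0, 500, 500], [0, 500, 0], [0, 0, 0]]   # depth in mm
frame2 = [[0, 0, 500], [0, 0, 500], [0, 0, 0]]
path = track([frame1, frame2])
```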
  • Publication number: 20210056768
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Application
    Filed: September 4, 2020
    Publication date: February 25, 2021
    Applicant: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 10769862
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Publication number: 20200225756
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Application
    Filed: March 6, 2018
    Publication date: July 16, 2020
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Patent number: 10488938
    Abstract: Systems, apparatuses and methods may track air gestures within a bounding box in a field of view (FOV) of a camera. Air gestures made within the bounding box may be translated and mapped to a display screen. Should the hand, or other member making the air gesture, exit the bounds of the bounding box, the box will be dragged along by the member to a new location within the FOV.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: November 26, 2019
    Inventors: Ayelet Hashahar Swissa, Maxim Schwartz, Maoz Madmony, Kfir Viente, Yaron Yanai
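The bounding-box behavior described in the abstract above reduces to two small operations, sketched here under assumed conventions (axis-aligned box in FOV coordinates, linear mapping to the screen). The function names and numbers are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the bounding-box gesture idea: hand coordinates
# inside the box map linearly to the screen; when the hand exits the
# box, the box is dragged along so the hand stays inside.

def update_box(box, hand):
    """box = (x, y, w, h) in FOV coordinates; drag it so hand stays inside."""
    x, y, w, h = box
    hx, hy = hand
    x = min(max(x, hx - w), hx)   # clamp so that x <= hx <= x + w
    y = min(max(y, hy - h), hy)
    return (x, y, w, h)

def map_to_screen(box, hand, screen=(1920, 1080)):
    """Linear mapping of the hand's position within the box to pixels."""
    x, y, w, h = box
    hx, hy = hand
    return ((hx - x) / w * screen[0], (hy - y) / h * screen[1])

box = (100, 100, 200, 150)
box = update_box(box, (350, 120))       # hand exited the right edge
pos = map_to_screen(box, (350, 120))
```

After the drag, the hand sits on the box's right edge, so it maps to the right edge of the screen.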
  • Patent number: 10447998
    Abstract: Systems, apparatuses and methods may provide for detecting a snapshot request to conduct a long range depth capture, wherein the snapshot request is associated with a short range depth capture. Additionally, an infrared (IR) projector may be activated at a first power level for a first duration in response to the snapshot request, wherein the first power level is greater than a second power level corresponding to the short range depth capture and the first duration is less than a second duration corresponding to the short range depth capture.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Yinon Oshrat, Dagan Eshar, Yaron Yanai
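The relationship claimed in the abstract above (snapshot power greater than short-range power, snapshot duration less than short-range duration) can be captured in a few lines. The numeric values and scale factors below are invented placeholders, not figures from the patent.

```python
# Illustrative sketch of the claimed IR-projector profile: a long-range
# snapshot uses higher power for a shorter duration than the ongoing
# short-range capture.

SHORT_RANGE = {"power_mw": 100, "duration_ms": 33}   # hypothetical values

def snapshot_profile(short_range, power_boost=4.0, duration_scale=0.25):
    # Higher power, shorter duration, per the claimed relationship.
    return {
        "power_mw": short_range["power_mw"] * power_boost,
        "duration_ms": short_range["duration_ms"] * duration_scale,
    }

long_range = snapshot_profile(SHORT_RANGE)
```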
  • Publication number: 20190278376
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Application
    Filed: March 6, 2018
    Publication date: September 12, 2019
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Patent number: 10373380
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
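The accumulation step in the abstract above (projecting depth pixels into a global coordinate system using the camera pose) can be sketched with standard pinhole geometry. The intrinsics and pose here are assumed example values, and a real reconstruction would fuse many frames into a volumetric or point-cloud representation.

```python
# Sketch of the reconstruction step: back-project a depth pixel into a
# camera-space 3D point, then transform it by the camera pose into the
# global frame where points from all frames accumulate.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at the given depth."""
    z = depth
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def to_global(point, pose):
    """pose = (R, t): R is a 3x3 row-major rotation, t a translation."""
    R, t = pose
    x, y, z = point
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i]
                 for i in range(3))

# Identity rotation with a 1 m translation along z; principal point (320, 240).
pose = (((1, 0, 0), (0, 1, 0), (0, 0, 1)), (0, 0, 1.0))
p = to_global(backproject(320, 240, 2.0, fx=525, fy=525, cx=320, cy=240), pose)
```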
  • Patent number: 10229542
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Publication number: 20190004609
    Abstract: Systems, apparatuses and methods may track air gestures within a bounding box in a field of view (FOV) of a camera. Air gestures made within the bounding box may be translated and mapped to a display screen. Should the hand, or other member making the air gesture, exit the bounds of the bounding box, the box will be dragged along by the member to a new location within the FOV.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Ayelet Hashahar Swissa, Maxim Schwartz, Maoz Madmony, Kfir Viente, Yaron Yanai
  • Publication number: 20180357834
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Application
    Filed: August 2, 2018
    Publication date: December 13, 2018
    Applicant: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Publication number: 20180284914
    Abstract: A head-mounted display (HMD) device to be worn by a user in a physical environment (PE) is controlled. A 3D virtual environment (VE) is modeled to include a virtual controllable object subject to virtual control input. Motion of the position, head, and hands of the user is monitored in the PE, and a physical surface in the PE is detected. A virtual user interface (vUI) is placed in the VE relative to a virtual perspective of the user. The vUI includes an information display and at least one virtual touch control to produce the virtual control input in response to virtual manipulation of the virtual touch control. The vUI's placement is determined to coincide with the physical surface in the PE relative to the position of the user in the PE.
    Type: Application
    Filed: March 30, 2017
    Publication date: October 4, 2018
    Inventors: Yaron Yanai, Eliyahu Elhadad, Kfir Viente, Amir Rosenberger, Maoz Madmony
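The placement constraint in the abstract above (the virtual UI coincides with a detected physical surface, positioned relative to the user) can be illustrated with a simple geometric stub. The surface representation (an axis-aligned rectangle at a fixed height) and the nearest-point rule are assumptions for illustration, not the patented placement logic.

```python
# Minimal sketch: anchor a virtual UI panel on a detected physical
# surface at the surface point nearest the user, so virtual touch
# controls coincide with a real plane.

def nearest_point_on_surface(surface, user_pos):
    """surface = ((xmin, xmax), (ymin, ymax), z_height); returns the
    point on the surface closest to the user's position (clamped)."""
    (xmin, xmax), (ymin, ymax), z = surface
    ux, uy, _ = user_pos
    return (min(max(ux, xmin), xmax), min(max(uy, ymin), ymax), z)

desk = ((0.0, 1.2), (0.0, 0.6), 0.75)          # hypothetical desk, metres
anchor = nearest_point_on_surface(desk, (1.5, 0.3, 1.7))
```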
  • Patent number: 10068385
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: September 4, 2018
    Assignee: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 9939914
    Abstract: Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display is described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hands and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: April 10, 2018
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Gershom Kutliroff, Yaron Yanai
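The interaction step in the abstract above (tracked hand movements driving an animated representation that interacts with displayed virtual objects) can be reduced to a proximity test in a sketch. The spherical-hit-test rule, object fields, and positions below are illustrative assumptions.

```python
# Hedged sketch: per-frame tracked hand positions are tested against a
# virtual object's bounding sphere; a hit triggers the object's control
# action on the electronic device.

def hand_touches(hand_pos, obj_center, obj_radius):
    dx, dy, dz = (h - o for h, o in zip(hand_pos, obj_center))
    return dx * dx + dy * dy + dz * dz <= obj_radius ** 2

events = []
button = {"center": (0.0, 0.0, 0.5), "radius": 0.1, "action": "toggle_light"}
for hand in [(0.3, 0.0, 0.5), (0.05, 0.0, 0.5)]:   # tracked positions per frame
    if hand_touches(hand, button["center"], button["radius"]):
        events.append(button["action"])
```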
  • Publication number: 20180098050
    Abstract: Systems, apparatuses and methods may provide for detecting a snapshot request to conduct a long range depth capture, wherein the snapshot request is associated with a short range depth capture. Additionally, an infrared (IR) projector may be activated at a first power level for a first duration in response to the snapshot request, wherein the first power level is greater than a second power level corresponding to the short range depth capture and the first duration is less than a second duration corresponding to the short range depth capture.
    Type: Application
    Filed: September 30, 2016
    Publication date: April 5, 2018
    Inventors: Yinon Oshrat, Dagan Eshar, Yaron Yanai
  • Patent number: 9928605
    Abstract: Various systems and methods for real-time cascaded object recognition are described herein. A system for real-time cascaded object recognition comprises a processor; and a memory, including instructions, which when executed on the processor, cause the processor to perform the operations comprising: accessing image data at the system, the image data of an environment around the system, the image data is captured by a camera system; determining a set of regions in the image data, the set of regions including candidate objects; transmitting a subset of the image data corresponding to the set of regions to a remote server, the remote server to analyze the subset of the image data and detect an object in the subset of the image data; and receiving at the system from the remote server, an indication of the object detected in the subset of the image data.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: March 27, 2018
    Assignee: Intel Corporation
    Inventors: Amit Bleiweiss, Yaron Yanai, Yinon Oshrat, Amir Rosenberger
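The cascade in the abstract above splits work between a cheap local stage and a remote server. The sketch below follows that structure with invented details: the tile-sum "detector" and the dictionary-returning server stub are placeholders for a real fast detector and a network round-trip.

```python
# Illustrative sketch of the cascade: a cheap local pass proposes
# candidate regions, and only those regions are sent to a (simulated)
# remote server for full recognition.

def propose_regions(frame, threshold=200):
    """Local stage: flag 2x2 tiles whose summed intensity exceeds a
    threshold. A real system would use a fast detector here."""
    regions = []
    for r in range(0, len(frame), 2):
        for c in range(0, len(frame[0]), 2):
            tile = [frame[r][c], frame[r][c + 1],
                    frame[r + 1][c], frame[r + 1][c + 1]]
            if sum(tile) > threshold:
                regions.append((r, c))
    return regions

def remote_classify(frame, regions):
    """Stand-in for the server round-trip: label each candidate region."""
    return {region: "object" for region in regions}

frame = [[0, 0, 90, 90],
         [0, 0, 90, 90],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
labels = remote_classify(frame, propose_regions(frame))
```

Only one of the four tiles clears the threshold, so only that crop would cross the network, which is the point of the cascade.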
  • Patent number: 9910498
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: March 6, 2018
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Publication number: 20170243352
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Application
    Filed: February 18, 2016
    Publication date: August 24, 2017
    Applicant: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger