Patents by Inventor Gershom Kutliroff

Gershom Kutliroff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11048333
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: June 29, 2021
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
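The close-range tracking idea described in the abstract can be illustrated with a minimal sketch: segment near-range pixels from a depth map by thresholding, track the blob centroid, and map its motion to screen coordinates. The depth units, thresholds, and screen mapping below are illustrative assumptions, not the patented method.

```python
# Illustrative sketch (not the patented method): segment near-range pixels
# from a depth map by depth thresholding, then track the blob centroid so
# its motion can drive an on-screen cursor. Depth units and thresholds are
# assumed values for the example.

def track_near_object(depth_map, near=100, far=400):
    """Return the (row, col) centroid of pixels within [near, far) depth."""
    rows = cols = count = 0
    for r, row in enumerate(depth_map):
        for c, d in enumerate(row):
            if near <= d < far:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None  # no object within the close range
    return (rows / count, cols / count)

def to_screen(centroid, depth_shape, screen_shape):
    """Map a depth-image centroid to screen coordinates by scaling."""
    r, c = centroid
    dh, dw = depth_shape
    sh, sw = screen_shape
    return (r / dh * sh, c / dw * sw)
```

A real system would track many such features (fingertips, hand pose) per frame; this shows only the thresholding-and-centroid core.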
  • Patent number: 10719759
    Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: July 21, 2020
    Assignee: Intel Corporation
    Inventor: Gershom Kutliroff
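The abstract describes a map in which fused camera poses are organized by spatial region. A minimal sketch of that data layout, with a pose reduced to (x, y, heading) and regions as grid cells of an assumed size, might look like the following; none of these structures come from the patent itself.

```python
# Illustrative sketch (assumed structures, not the patent's implementation):
# a map that organizes camera poses by spatial region, plus a brute-force
# localization query against the stored poses.

from math import hypot

CELL = 5.0  # assumed region size in metres

def region_of(pose):
    """Grid cell containing a pose, where a pose is (x, y, heading)."""
    x, y, _ = pose
    return (int(x // CELL), int(y // CELL))

def build_map(poses):
    """Group fused camera poses by the spatial region they fall in."""
    env_map = {}
    for pose in poses:
        env_map.setdefault(region_of(pose), []).append(pose)
    return env_map

def localize(env_map, position):
    """Return the stored pose whose position is closest to `position`."""
    best, best_d = None, float("inf")
    for poses in env_map.values():
        for pose in poses:
            d = hypot(pose[0] - position[0], pose[1] - position[1])
            if d < best_d:
                best, best_d = pose, d
    return best
```

The claimed system replaces this brute-force lookup with an artificial neural network adapted to the environment; the sketch only shows the region-keyed map structure the abstract mentions.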
  • Publication number: 20200225756
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Application
    Filed: March 6, 2018
    Publication date: July 16, 2020
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Patent number: 10573018
    Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger
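The grouping-and-fusion step in the abstract can be sketched as follows: cluster object instances of one class by feature similarity, then pool their point clouds into a fused object. The feature vectors, the distance metric, and the threshold are assumptions for the example, not the patent's specifics.

```python
# Illustrative sketch: greedy grouping of object instances by feature
# similarity, followed by point-cloud fusion. Each instance is a dict with
# 'features' (a vector) and 'points' (a point cloud) — an assumed layout.

from math import dist

def group_instances(instances, threshold=0.5):
    """Group instances whose feature vectors lie within `threshold`
    Euclidean distance of a group's first member."""
    groups = []
    for inst in instances:
        for group in groups:
            if dist(inst["features"], group[0]["features"]) <= threshold:
                group.append(inst)
                break
        else:
            groups.append([inst])
    return groups

def fuse(group):
    """Combine the point clouds of one group into a single fused object."""
    return [p for inst in group for p in inst["points"]]
```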
  • Patent number: 10482681
    Abstract: Techniques are provided for segmentation of objects in a 3D image of a scene. An example method may include receiving 3D image frames of a scene. Each of the frames is associated with a pose of a depth camera that generated the 3D image frames. The method may also include detecting the objects in each of the frames based on object recognition; associating a label with the detected object; calculating a 2D bounding box around the object; and calculating a 3D location of the center of the bounding box. The method may further include matching the detected object to an existing object boundary set, created from a previously received image frame, based on the label and the location of the center of the bounding box, or, if the match fails, creating a new object boundary set associated with the detected object.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: November 19, 2019
    Assignee: Intel Corporation
    Inventor: Gershom Kutliroff
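The match-or-create logic in the abstract above can be sketched directly: a detected object is matched to an existing boundary set by label and bounding-box-center proximity, and a new set is created when no match is close enough. The data layout and the distance threshold are assumptions for the example.

```python
# Illustrative sketch (assumed data layout): each boundary set is a dict
# with 'label', 'center' (3D bounding-box center), and 'points'.

from math import dist

def match_or_create(boundary_sets, label, center, max_distance=0.3):
    """Return the matching boundary set, or append and return a new one."""
    for bset in boundary_sets:
        if bset["label"] == label and dist(bset["center"], center) <= max_distance:
            return bset
    new_set = {"label": label, "center": center, "points": []}
    boundary_sets.append(new_set)
    return new_set
```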
  • Patent number: 10452789
    Abstract: Systems, apparatuses and/or methods may provide for generating a packing order of items within a container that consolidates the items into a reduced space. Items may be scanned with a three-dimensional (3D) imager, and models may be generated of the items based on the data from the 3D imager. The items may be located within minimal-volume enclosing bounding boxes, which may be analyzed to determine whether they may be merged together in one of their bounding boxes, or into a new bounding box that is spatially advantageous in terms of packing. If a combination of items is realizable and is determined to take up less space in a bounding box than the bounding boxes of the items considered separately, then they may be merged into a single bounding box. Thus, a spatially efficient packing sequence for a plurality of real objects may be generated to maximize packing efficiency.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: October 22, 2019
    Assignee: Intel Corporation
    Inventors: Maoz Madmony, Shahar Fleishman, Mark Kliger, Gershom Kutliroff
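The merge test described in the abstract can be sketched with axis-aligned boxes: two items are combined when stacking them along some axis yields an enclosing box no larger than the sum of their separate bounding-box volumes. Axis-aligned boxes and this volume criterion are simplifying assumptions, not the patented packing algorithm.

```python
# Illustrative sketch: decide whether two bounding boxes (w, h, d) should
# be merged into a single enclosing box for packing purposes.

def volume(box):
    w, h, d = box
    return w * h * d

def stacked_box(a, b, axis):
    """Enclosing box of `a` and `b` stacked along `axis` (0, 1, or 2)."""
    return tuple(
        a[i] + b[i] if i == axis else max(a[i], b[i]) for i in range(3)
    )

def try_merge(a, b):
    """Return the best merged bounding box, or None if no stacking
    direction beats keeping the two boxes separate."""
    candidates = [stacked_box(a, b, axis) for axis in range(3)]
    best = min(candidates, key=volume)
    return best if volume(best) <= volume(a) + volume(b) else None
```

For example, two 2x2x1 boxes merge into one box of volume 8 with no wasted space, while a 3x1x1 and a 1x3x1 box waste space in any stacking direction and are left separate.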
  • Publication number: 20190278376
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Application
    Filed: March 6, 2018
    Publication date: September 12, 2019
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Patent number: 10373380
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
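The registration step mentioned in the abstract can be illustrated in miniature: estimate the transform that aligns a segmented object's point cloud with a reference model. Real systems use full rigid registration (e.g. ICP); the translation-only, centroid-matching version below is a simplifying assumption for the sketch.

```python
# Illustrative sketch: translation-only registration of a segmented point
# cloud to a model by centroid alignment. Rotation estimation is omitted.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register_translation(segment, model):
    """Translation that maps the model centroid onto the segment centroid."""
    cs, cm = centroid(segment), centroid(model)
    return tuple(cs[i] - cm[i] for i in range(3))

def apply_translation(points, t):
    return [tuple(p[i] + t[i] for i in range(3)) for p in points]
```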
  • Patent number: 10229542
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Publication number: 20190057299
    Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
    Type: Application
    Filed: August 23, 2018
    Publication date: February 21, 2019
    Applicant: Intel Corporation
    Inventor: Gershom Kutliroff
  • Patent number: 10062010
    Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: August 28, 2018
    Assignee: Intel Corporation
    Inventor: Gershom Kutliroff
  • Patent number: 9939914
    Abstract: Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display are described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hands and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: April 10, 2018
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Gershom Kutliroff, Yaron Yanai
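The interaction described in the abstract, where touching a virtual object controls a device, can be sketched as a simple containment test: the tracked fingertip position is checked against each virtual object's bounds, and a touch yields that object's command. The object layout and command strings are assumptions for the example.

```python
# Illustrative sketch: couple a tracked 3D fingertip position to virtual
# objects. Each virtual object is (box_min, box_max, command) — an assumed
# representation, not the patent's.

def inside(point, box_min, box_max):
    """Axis-aligned containment test for a 3D point."""
    return all(box_min[i] <= point[i] <= box_max[i] for i in range(3))

def dispatch(fingertip, virtual_objects):
    """Return the commands of every virtual object the fingertip touches."""
    return [cmd for lo, hi, cmd in virtual_objects if inside(fingertip, lo, hi)]
```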
  • Patent number: 9910498
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: March 6, 2018
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Publication number: 20180018805
    Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
    Type: Application
    Filed: July 13, 2016
    Publication date: January 18, 2018
    Applicant: Intel Corporation
    Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger
  • Publication number: 20170243352
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Application
    Filed: February 18, 2016
    Publication date: August 24, 2017
    Applicant: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Publication number: 20170228940
    Abstract: Techniques are provided for segmentation of objects in a 3D image of a scene. An example method may include receiving 3D image frames of a scene. Each of the frames is associated with a pose of a depth camera that generated the 3D image frames. The method may also include detecting the objects in each of the frames based on object recognition; associating a label with the detected object; calculating a 2D bounding box around the object; and calculating a 3D location of the center of the bounding box. The method may further include matching the detected object to an existing object boundary set, created from a previously received image frame, based on the label and the location of the center of the bounding box, or, if the match fails, creating a new object boundary set associated with the detected object.
    Type: Application
    Filed: February 9, 2016
    Publication date: August 10, 2017
    Applicant: Intel Corporation
    Inventor: Gershom Kutliroff
  • Publication number: 20170154127
    Abstract: Systems, apparatuses and/or methods may provide for generating a packing order of items within a container that consolidates the items into a reduced space. Items may be scanned with a three-dimensional (3D) imager, and models may be generated of the items based on the data from the 3D imager. The items may be located within minimal-volume enclosing bounding boxes, which may be analyzed to determine whether they may be merged together in one of their bounding boxes, or into a new bounding box that is spatially advantageous in terms of packing. If a combination of items is realizable and is determined to take up less space in a bounding box than the bounding boxes of the items considered separately, then they may be merged into a single bounding box. Thus, a spatially efficient packing sequence for a plurality of real objects may be generated to maximize packing efficiency.
    Type: Application
    Filed: November 30, 2015
    Publication date: June 1, 2017
    Inventors: Maoz Madmony, Shahar Fleishman, Mark Kliger, Gershom Kutliroff
  • Patent number: 9639943
    Abstract: Techniques are provided for generating a 3-Dimensional (3D) reconstruction of a handheld object. An example method may include receiving 3D image frames of the object from a static depth camera, each frame including a color image and a depth map. Each of the frames is associated with an updated pose of the object during scanning. The method may also include extracting a segment of each frame corresponding to the object and the hand; isolating the hand from each extracted segment; and filtering each frame by removing regions of the frame outside of the extracted segment and removing regions of the frame corresponding to the isolated hand. The method may further include calculating the updated object pose in each filtered frame; and calculating a 3D position for each depth pixel from the depth map of each filtered frame based on the updated object pose associated with that filtered frame.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: May 2, 2017
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Maoz Madmony
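The accumulation step at the end of the abstract, transforming each filtered frame's depth pixels by that frame's object pose into a shared coordinate system, can be sketched as follows. The pose representation (rotation matrix plus translation) is an assumption; the hand and background filtering the abstract describes is omitted.

```python
# Illustrative sketch: merge per-frame 3D points into one cloud using each
# frame's object pose (3x3 rotation R, translation t).

def transform(point, R, t):
    """Apply a 3x3 rotation R and translation t to a 3D point."""
    return tuple(
        sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)
    )

def accumulate(frames):
    """Merge per-frame points into one cloud. Each frame is (points, R, t)."""
    cloud = []
    for points, R, t in frames:
        cloud.extend(transform(p, R, t) for p in points)
    return cloud
```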
  • Publication number: 20170038850
    Abstract: Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display are described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hands and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
    Type: Application
    Filed: October 19, 2016
    Publication date: February 9, 2017
    Inventors: Shahar Fleishman, Gershom Kutliroff, Yaron Yanai
  • Publication number: 20160379092
    Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
    Type: Application
    Filed: June 26, 2015
    Publication date: December 29, 2016
    Applicant: Intel Corporation
    Inventor: Gershom Kutliroff