Patents by Inventor Gershom Kutliroff
Gershom Kutliroff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11048333
Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen by using the positions and movements of their hands and fingers or other objects.
Type: Grant
Filed: March 6, 2018
Date of Patent: June 29, 2021
Assignee: Intel Corporation
Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
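As a rough illustration of the kind of processing this abstract describes (segmenting and tracking a close-range object in depth data), here is a minimal Python sketch. It is not the patented method; the depth band, frame size, and centroid-based tracking are assumptions made for the example.

```python
import numpy as np

def track_near_object(depth_map, near_mm=200, far_mm=600):
    """Segment the closest object (e.g., a hand) in a depth frame and
    return its centroid in pixel coordinates, or None if nothing is in range.

    depth_map: 2D array of per-pixel depths in millimeters (0 = no reading).
    """
    # Keep only pixels inside the close-range band of interest.
    mask = (depth_map >= near_mm) & (depth_map <= far_mm)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    # The centroid of the in-range pixels serves as a crude hand position,
    # which a caller could map to an on-screen cursor frame by frame.
    return float(xs.mean()), float(ys.mean())

# Synthetic frame: a "hand" blob at ~300 mm against a distant background.
frame = np.full((240, 320), 2000, dtype=np.uint16)
frame[100:140, 150:200] = 300
print(track_near_object(frame))  # -> (174.5, 119.5)
```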
-
Patent number: 10719759
Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
Type: Grant
Filed: August 23, 2018
Date of Patent: July 21, 2020
Assignee: Intel Corporation
Inventor: Gershom Kutliroff
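One way to picture the abstract's "camera poses organized by spatial region" is a map keyed by quantized world cells. The toy sketch below illustrates that idea under stated assumptions; it is not the patented SLAM pipeline, and all class and field names are hypothetical.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CameraPose:
    position: tuple      # (x, y, z) in world coordinates
    orientation: tuple   # quaternion (w, x, y, z)
    image_id: str        # handle to the image data fused into this pose

class PoseMap:
    """Toy map: camera poses bucketed into cubic spatial regions."""
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.regions = defaultdict(list)

    def region_key(self, position):
        # Quantize a world position to the cubic cell that contains it.
        return tuple(int(c // self.cell_size) for c in position)

    def add(self, pose):
        self.regions[self.region_key(pose.position)].append(pose)

    def poses_near(self, position):
        # Retrieve poses recorded in the same spatial region.
        return self.regions.get(self.region_key(position), [])

m = PoseMap(cell_size=2.0)
m.add(CameraPose((0.5, 0.0, 1.2), (1, 0, 0, 0), "frame_001"))
m.add(CameraPose((0.7, 0.1, 1.1), (1, 0, 0, 0), "frame_002"))
print(len(m.poses_near((0.6, 0.0, 1.0))))  # -> 2
```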
-
Publication number: 20200225756
Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen by using the positions and movements of their hands and fingers or other objects.
Type: Application
Filed: March 6, 2018
Publication date: July 16, 2020
Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
-
Patent number: 10573018
Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
Type: Grant
Filed: July 13, 2016
Date of Patent: February 25, 2020
Assignee: Intel Corporation
Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger
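To make the grouping-and-fusion step concrete, the sketch below greedily groups object instances by cosine similarity of hypothetical feature vectors and merges their point clouds by simple concatenation. A real pipeline would register the clouds before fusing; the threshold and the features here are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_instances(features, threshold=0.9):
    """Greedily group object instances whose feature vectors are similar."""
    groups = []  # each group is a list of instance indices
    for i, f in enumerate(features):
        for g in groups:
            if cosine_similarity(f, features[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def fuse_point_clouds(clouds):
    """Fuse the clouds of one group by centering each and concatenating.
    (A real system would rigidly register the clouds before merging.)"""
    centered = [c - c.mean(axis=0) for c in clouds]
    return np.vstack(centered)

# Two observations of the "same" chair plus one distinct object.
feats = [np.array([1.0, 0.1]), np.array([0.98, 0.12]), np.array([0.0, 1.0])]
clouds = [np.random.rand(50, 3), np.random.rand(50, 3), np.random.rand(30, 3)]
groups = group_instances(feats)
fused = fuse_point_clouds([clouds[i] for i in groups[0]])
print(groups, fused.shape)  # -> [[0, 1], [2]] (100, 3)
```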
-
Patent number: 10482681
Abstract: Techniques are provided for segmentation of objects in a 3D image of a scene. An example method may include receiving 3D image frames of a scene. Each of the frames is associated with a pose of a depth camera that generated the 3D image frames. The method may also include detecting the objects in each of the frames based on object recognition; associating a label with the detected object; calculating a 2D bounding box around the object; and calculating a 3D location of the center of the bounding box. The method may further include matching the detected object to an existing object boundary set, created from a previously received image frame, based on the label and the location of the center of the bounding box, or, if the match fails, creating a new object boundary set associated with the detected object.
Type: Grant
Filed: February 9, 2016
Date of Patent: November 19, 2019
Assignee: Intel Corporation
Inventor: Gershom Kutliroff
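The matching step, in which a detection is associated with an existing object boundary set by label and bounding-box center proximity (with a new set created on failure), can be illustrated in a few lines of Python. The distance threshold and dictionary layout below are assumptions, not the patent's data structures.

```python
import math

def match_detection(label, center, boundary_sets, max_dist=0.5):
    """Match a detected object to an existing boundary set by label and
    3D center proximity; return the match or None (caller creates a new set).

    boundary_sets: list of dicts like {'label': str, 'center': (x, y, z)}.
    """
    best, best_d = None, max_dist
    for bs in boundary_sets:
        if bs['label'] != label:
            continue  # labels must agree before distance is considered
        d = math.dist(center, bs['center'])
        if d <= best_d:
            best, best_d = bs, d
    return best

sets = [{'label': 'chair', 'center': (1.0, 0.0, 2.0)},
        {'label': 'table', 'center': (3.0, 0.0, 1.0)}]
hit = match_detection('chair', (1.1, 0.05, 2.1), sets)
print(hit is sets[0])  # -> True; a failed match would create a new set
```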
-
Patent number: 10452789
Abstract: Systems, apparatuses and/or methods may provide for generating a packing order of items within a container that consolidates the items into a reduced space. Items may be scanned with a three-dimensional (3D) imager, and models may be generated of the items based on the data from the 3D imager. The items may be located within minimal-volume enclosing bounding boxes, which may be analyzed to determine whether they may be merged together in one of their bounding boxes, or into a new bounding box that is spatially advantageous in terms of packing. If a combination of items is realizable and is determined to take up less space in a bounding box than the bounding boxes of the items considered separately, then they may be merged into a single bounding box. Thus, a spatially efficient packing sequence for a plurality of real objects may be generated to maximize packing efficiency.
Type: Grant
Filed: November 30, 2015
Date of Patent: October 22, 2019
Assignee: Intel Corporation
Inventors: Maoz Madmony, Shahar Fleishman, Mark Kliger, Gershom Kutliroff
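The merge test described here, asking whether a single enclosing box takes less space than the items' boxes kept separate, can be sketched for axis-aligned boxes as follows. Restricting to axis-aligned stacking with 90-degree rotations is a simplifying assumption; the patent's realizability checks are not modeled.

```python
from itertools import permutations

def volume(dims):
    w, h, d = dims
    return w * h * d

def best_merge(a, b):
    """Try stacking two axis-aligned boxes along each axis (and each
    axis-aligned orientation of b); return the smallest enclosing box."""
    best = None
    for pb in permutations(b):          # 90-degree rotations of b
        for axis in range(3):           # stack along x, y, or z
            dims = [max(a[i], pb[i]) for i in range(3)]
            dims[axis] = a[axis] + pb[axis]
            if best is None or volume(dims) < volume(best):
                best = tuple(dims)
    return best

a, b = (2.0, 1.0, 1.0), (2.0, 1.0, 0.5)
merged = best_merge(a, b)
separate = volume(a) + volume(b)
# Merge only when the single enclosing box beats the boxes kept apart.
print(merged, volume(merged) <= separate)  # -> (2.0, 1.0, 1.5) True
```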
-
Publication number: 20190278376
Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen by using the positions and movements of their hands and fingers or other objects.
Type: Application
Filed: March 6, 2018
Publication date: September 12, 2019
Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
-
Patent number: 10373380
Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
Type: Grant
Filed: February 18, 2016
Date of Patent: August 6, 2019
Assignee: Intel Corporation
Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
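Registering a segmented object to a 3D model to determine alignment is, at its core, a rigid registration problem. A standard textbook approach when point correspondences are known is the Kabsch algorithm, sketched below; the patent does not necessarily use this method.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid registration: find rotation R and translation t minimizing
    ||R @ P_i + t - Q_i|| for corresponding point sets P, Q of shape (N, 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Model points and the same points after a known rigid motion.
model = np.random.rand(100, 3)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
scene = model @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(model, scene)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -0.2, 1.0]))  # True True
```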
-
Patent number: 10229542
Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
Type: Grant
Filed: February 18, 2016
Date of Patent: March 12, 2019
Assignee: Intel Corporation
Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
-
Publication number: 20190057299
Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
Type: Application
Filed: August 23, 2018
Publication date: February 21, 2019
Applicant: Intel Corporation
Inventor: Gershom Kutliroff
-
Patent number: 10062010
Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
Type: Grant
Filed: June 26, 2015
Date of Patent: August 28, 2018
Assignee: Intel Corporation
Inventor: Gershom Kutliroff
-
Patent number: 9939914
Abstract: Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display are described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hands and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using a three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
Type: Grant
Filed: October 19, 2016
Date of Patent: April 10, 2018
Assignee: Intel Corporation
Inventors: Shahar Fleishman, Gershom Kutliroff, Yaron Yanai
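A minimal picture of depth-tracked fingers driving a three-dimensional interface: detect a pinch from fingertip distance and let a grabbed virtual object follow the hand. The gesture, the threshold, and the class below are illustrative assumptions, not the patented interaction model.

```python
import math

def pinch_distance(thumb_tip, index_tip):
    return math.dist(thumb_tip, index_tip)

class VirtualObject:
    """A displayed object the user can grab with a pinch and drag."""
    def __init__(self, position):
        self.position = list(position)
        self.grabbed = False

    def update(self, thumb_tip, index_tip, pinch_threshold=0.03):
        # A pinch (fingertips nearly touching) grabs the object; while
        # grabbed, the object follows the midpoint of the two fingertips.
        if pinch_distance(thumb_tip, index_tip) < pinch_threshold:
            self.grabbed = True
            self.position = [(a + b) / 2 for a, b in zip(thumb_tip, index_tip)]
        else:
            self.grabbed = False

cube = VirtualObject((0.0, 0.0, 0.5))
cube.update(thumb_tip=(0.10, 0.20, 0.40), index_tip=(0.11, 0.21, 0.41))
print(cube.grabbed, cube.position)  # -> True [0.105, 0.205, 0.405]
```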
-
Patent number: 9910498
Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen by using the positions and movements of their hands and fingers or other objects.
Type: Grant
Filed: June 25, 2012
Date of Patent: March 6, 2018
Assignee: Intel Corporation
Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
-
Publication number: 20180018805
Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
Type: Application
Filed: July 13, 2016
Publication date: January 18, 2018
Applicant: Intel Corporation
Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger
-
Publication number: 20170243352
Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
Type: Application
Filed: February 18, 2016
Publication date: August 24, 2017
Applicant: Intel Corporation
Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
-
Publication number: 20170228940
Abstract: Techniques are provided for segmentation of objects in a 3D image of a scene. An example method may include receiving 3D image frames of a scene. Each of the frames is associated with a pose of a depth camera that generated the 3D image frames. The method may also include detecting the objects in each of the frames based on object recognition; associating a label with the detected object; calculating a 2D bounding box around the object; and calculating a 3D location of the center of the bounding box. The method may further include matching the detected object to an existing object boundary set, created from a previously received image frame, based on the label and the location of the center of the bounding box, or, if the match fails, creating a new object boundary set associated with the detected object.
Type: Application
Filed: February 9, 2016
Publication date: August 10, 2017
Applicant: Intel Corporation
Inventor: Gershom Kutliroff
-
Publication number: 20170154127
Abstract: Systems, apparatuses and/or methods may provide for generating a packing order of items within a container that consolidates the items into a reduced space. Items may be scanned with a three-dimensional (3D) imager, and models may be generated of the items based on the data from the 3D imager. The items may be located within minimal-volume enclosing bounding boxes, which may be analyzed to determine whether they may be merged together in one of their bounding boxes, or into a new bounding box that is spatially advantageous in terms of packing. If a combination of items is realizable and is determined to take up less space in a bounding box than the bounding boxes of the items considered separately, then they may be merged into a single bounding box. Thus, a spatially efficient packing sequence for a plurality of real objects may be generated to maximize packing efficiency.
Type: Application
Filed: November 30, 2015
Publication date: June 1, 2017
Inventors: Maoz Madmony, Shahar Fleishman, Mark Kliger, Gershom Kutliroff
-
Patent number: 9639943
Abstract: Techniques are provided for generating a three-dimensional (3D) reconstruction of a handheld object. An example method may include receiving 3D image frames of the object from a static depth camera, each frame including a color image and a depth map. Each of the frames is associated with an updated pose of the object during scanning. The method may also include extracting a segment of each frame corresponding to the object and the hand; isolating the hand from each extracted segment; and filtering each frame by removing regions of the frame outside of the extracted segment and removing regions of the frame corresponding to the isolated hand. The method may further include calculating the updated object pose in each filtered frame; and calculating a 3D position for each depth pixel from the depth map of each filtered frame based on the updated object pose associated with that filtered frame.
Type: Grant
Filed: December 21, 2015
Date of Patent: May 2, 2017
Assignee: Intel Corporation
Inventors: Gershom Kutliroff, Maoz Madmony
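The final step mentioned in this abstract, computing a 3D position for each depth pixel given a camera model and pose, is classic pinhole back-projection. The sketch below shows that step under assumed intrinsics; removal of hand pixels is represented simply by zeroed-out depth values.

```python
import numpy as np

def backproject(depth_map, fx, fy, cx, cy, R, t):
    """Lift each valid depth pixel to a 3D point in world coordinates.

    depth_map: (H, W) depths in meters (0 = invalid, e.g., hand removed).
    fx, fy, cx, cy: pinhole camera intrinsics.
    R, t: rotation and translation giving the camera pose for this frame.
    """
    ys, xs = np.nonzero(depth_map > 0)
    z = depth_map[ys, xs]
    # Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    pts_cam = np.stack([(xs - cx) * z / fx, (ys - cy) * z / fy, z], axis=1)
    return pts_cam @ R.T + t   # transform camera-frame points to world frame

depth = np.zeros((240, 320))
depth[120, 160] = 0.5                      # one valid pixel at 0.5 m
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
print(backproject(depth, 525.0, 525.0, 160.0, 120.0, R, t))  # -> [[0. 0. 0.5]]
```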
-
Publication number: 20170038850
Abstract: Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display are described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hands and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using a three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
Type: Application
Filed: October 19, 2016
Publication date: February 9, 2017
Inventors: Shahar Fleishman, Gershom Kutliroff, Yaron Yanai
-
Publication number: 20160379092
Abstract: SLAM systems are provided that utilize an artificial neural network to both map environments and locate positions within the environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from the sensor data, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describes the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses. The map of the environment includes camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with camera poses.
Type: Application
Filed: June 26, 2015
Publication date: December 29, 2016
Applicant: Intel Corporation
Inventor: Gershom Kutliroff