Patents by Inventor Shahar Fleishman

Shahar Fleishman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11048333
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: June 29, 2021
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
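    As the abstract above describes the tracking only at a high level, the following is a minimal sketch rather than the patented method: assuming a depth frame is available as a NumPy array in millimeters, it segments pixels inside an assumed close-range band and tracks the centroid of the segmented region from frame to frame.
    ```python
    import numpy as np

    NEAR_MM, FAR_MM = 150, 600  # assumed close-range depth band in millimeters

    def segment_close_range(depth_frame: np.ndarray) -> np.ndarray:
        """Boolean mask of pixels that fall inside the close-range band."""
        return (depth_frame > NEAR_MM) & (depth_frame < FAR_MM)

    def track_centroid(depth_frames):
        """Yield the (row, col) centroid of the close-range region for each frame."""
        for frame in depth_frames:
            mask = segment_close_range(frame)
            if not mask.any():
                yield None  # no hand or object in range this frame
                continue
            rows, cols = np.nonzero(mask)
            yield rows.mean(), cols.mean()

    # Synthetic example: a "hand" patch at ~300 mm in front of a far background.
    frame = np.full((240, 320), 2000, dtype=np.uint16)
    frame[100:140, 150:200] = 300
    print(next(track_centroid([frame])))  # approximate centroid of the patch
    ```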
  • Publication number: 20200225756
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Application
    Filed: March 6, 2018
    Publication date: July 16, 2020
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Patent number: 10685446
    Abstract: A system, article, and method of recurrent semantic segmentation for image processing by factoring historical semantic segmentation.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Naomi Ken Korem, Mark Kliger
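    The abstract above is terse, so the sketch below only illustrates the general idea of factoring historical segmentation into the current result: per-frame class probability maps (assumed to come from some per-frame segmenter and to be already aligned across frames) are blended with a running history before taking the per-pixel argmax. The blending weight is an assumed value, not taken from the patent.
    ```python
    import numpy as np

    def recurrent_segmentation(prob_maps, alpha=0.7):
        """Blend each frame's per-pixel class probabilities with the running history.

        prob_maps: iterable of (H, W, num_classes) arrays from a per-frame segmenter.
        alpha: weight of the current frame (assumed value).
        Yields per-frame label maps that factor in earlier frames.
        """
        history = None
        for probs in prob_maps:
            history = probs if history is None else alpha * probs + (1 - alpha) * history
            yield history.argmax(axis=-1)

    # Tiny example: two noisy 2x2 "probability maps" over 3 classes.
    rng = np.random.default_rng(0)
    frames = [rng.dirichlet(np.ones(3), size=(2, 2)) for _ in range(2)]
    for labels in recurrent_segmentation(frames):
        print(labels)
    ```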
  • Patent number: 10649536
    Abstract: Hand dimensions are determined for hand and gesture recognition with a computing interface. An input sequence of frames is received from a camera. Frames of the sequence are identified in which a hand is recognized. Points are identified in the identified frames corresponding to features of the recognized hand. A value is determined for each of a set of different feature lengths of the recognized hand using the identified points for each identified frame. Each different feature length value is collected for the identified frames independently of each other feature length value. Each different feature length value is analyzed to determine an estimate of each different feature length, and the estimated feature lengths are applied to a hand tracking system, the hand tracking system for applying commands to a computer system.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Alon Lerner, Shahar Fleishman
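    As a rough illustration of collecting each feature length independently across frames and then estimating it, the sketch below measures point-to-point lengths per frame and takes the median per feature. The feature names and input structure are illustrative assumptions; the patent's analysis step is not specified here.
    ```python
    import numpy as np

    def estimate_feature_lengths(per_frame_points):
        """Estimate hand feature lengths from point pairs identified in many frames.

        per_frame_points: list of dicts mapping a feature name (e.g. "index_finger")
        to a (start_xyz, end_xyz) pair for one frame. Each feature length is
        collected independently across frames; the median gives an estimate that
        is robust to frames with poor detections.
        """
        collected = {}
        for frame in per_frame_points:
            for name, (p0, p1) in frame.items():
                collected.setdefault(name, []).append(np.linalg.norm(np.subtract(p1, p0)))
        return {name: float(np.median(values)) for name, values in collected.items()}

    # Example: three frames of one measured feature, including a noisy detection.
    frames = [
        {"index_finger": ((0, 0, 0), (0, 0, 7.1))},
        {"index_finger": ((0, 0, 0), (0, 0, 7.0))},
        {"index_finger": ((0, 0, 0), (0, 0, 9.5))},  # bad detection
    ]
    print(estimate_feature_lengths(frames))  # median is ~7.1
    ```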
  • Patent number: 10643382
    Abstract: Convolutional Neural Networks are applied to object meshes to allow three-dimensional objects to be analyzed. In one example, a method includes performing convolutions on a mesh, wherein the mesh represents a three-dimensional object of an image, the mesh having a plurality of vertices and a plurality of edges between the vertices, performing pooling on the convolutions of an edge of a mesh, and applying fully connected and loss layers to the pooled convolutions to provide metadata about the three-dimensional object.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: May 5, 2020
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger
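    The sketch below is one plausible reading of convolution and pooling over mesh edges, using an assumed edge-neighborhood operator and magnitude-based edge selection; it is not the patented operator, only an illustration of chaining edge convolution, edge pooling, and a fully connected layer.
    ```python
    import numpy as np

    def edge_convolution(edge_features, edge_neighbors, w_self, w_neigh):
        """Mix each edge's feature vector with the mean feature of its neighboring
        edges, followed by a ReLU (an assumed, simplified edge convolution)."""
        out = np.empty_like(edge_features)
        for e, neighbors in enumerate(edge_neighbors):
            neigh = edge_features[neighbors].mean(axis=0) if neighbors else np.zeros(edge_features.shape[1])
            out[e] = edge_features[e] @ w_self + neigh @ w_neigh
        return np.maximum(out, 0.0)

    def edge_pooling(edge_features, keep):
        """Keep the `keep` edges with the largest feature magnitude, a simple
        stand-in for pooling the convolutions of mesh edges."""
        order = np.argsort(-np.linalg.norm(edge_features, axis=1))[:keep]
        return edge_features[order]

    # Toy mesh: 4 edges with 3-dimensional edge features and fixed neighbor lists.
    rng = np.random.default_rng(1)
    feats = rng.normal(size=(4, 3))
    neighbors = [[1, 2], [0, 3], [0, 3], [1, 2]]
    w_self, w_neigh = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

    conv = edge_convolution(feats, neighbors, w_self, w_neigh)
    pooled = edge_pooling(conv, keep=2)
    w_fc = rng.normal(size=(pooled.size, 5))
    print((pooled.reshape(-1) @ w_fc).shape)  # fully connected layer -> 5 metadata scores
    ```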
  • Patent number: 10573018
    Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger
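    The grouping-and-fusion step can be illustrated with a small sketch: object instances of one class, each given as a feature descriptor plus a point cloud already in the global coordinate system, are grouped by cosine similarity of their descriptors, and the point clouds of each group are concatenated into a fused object. The similarity measure and threshold are assumptions for illustration.
    ```python
    import numpy as np

    def fuse_object_instances(instances, similarity_threshold=0.9):
        """Group instances of one object class by descriptor similarity and merge
        the point clouds of each group into a single fused object.

        instances: list of (feature_vector, point_cloud) pairs, point_cloud being
        an (N, 3) array in global coordinates.
        """
        groups = []  # each group: [reference_descriptor, [point_clouds]]
        for feature, cloud in instances:
            feature = feature / np.linalg.norm(feature)
            for group in groups:
                if float(group[0] @ feature) > similarity_threshold:
                    group[1].append(cloud)
                    break
            else:
                groups.append([feature, [cloud]])
        return [np.concatenate(clouds, axis=0) for _, clouds in groups]

    # Two chairs with similar descriptors and one unrelated object.
    rng = np.random.default_rng(2)
    chair_a = (np.array([1.0, 0.10]), rng.normal(size=(50, 3)))
    chair_b = (np.array([1.0, 0.12]), rng.normal(size=(60, 3)) + [2.0, 0.0, 0.0])
    lamp = (np.array([0.1, 1.00]), rng.normal(size=(40, 3)))
    print([cloud.shape for cloud in fuse_object_instances([chair_a, chair_b, lamp])])
    # [(110, 3), (40, 3)]: the two chair instances were fused, the lamp was not
    ```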
  • Patent number: 10475195
    Abstract: Techniques are provided for global (non-rigid) scan point registration between a scanned object and an associated model, from an arbitrary initial starting position, based on a combination of iterative coarse registration and fine registration. A methodology implementing the techniques according to an embodiment includes generating a model transformation based on a coarse registration between the model and the point scan. The method further includes calculating an alignment metric based on a distance measurement between the point scan and the transformed model. If the alignment metric exceeds a selected threshold value, a fine registration is performed between the transformed model and the point scan. Otherwise, the method continues by performing a random rotation of the model; a translation of the rotated model towards a centroid of the point scan; and iterating the coarse registration using the translated model until the alignment metric is achieved, after which the fine registration is performed.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: November 12, 2019
    Assignee: Intel Corporation
    Inventors: Rana Hanocka, Shahar Fleishman, Jackie Assa
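    The coarse/fine loop with random restarts can be sketched as follows, assuming centroid alignment as the coarse step and an assumed inlier-fraction alignment metric; a real implementation would run an ICP-style fine registration where the loop exits.
    ```python
    import numpy as np

    def random_rotation(rng):
        """Random 3D rotation from the QR decomposition of a Gaussian matrix."""
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(q) < 0:
            q[:, 0] *= -1.0  # force a proper rotation (determinant +1)
        return q

    def alignment_metric(model, scan, radius=0.2):
        """Fraction of model points with a scan point within `radius`
        (an assumed goodness-of-alignment measure; higher is better)."""
        dists = np.linalg.norm(model[:, None, :] - scan[None, :, :], axis=-1)
        return float((dists.min(axis=1) < radius).mean())

    def register(model, scan, threshold=0.8, max_restarts=20, seed=0):
        """Coarse registration by centroid alignment; if the alignment metric
        exceeds the threshold, hand off to fine registration, otherwise apply a
        random rotation, re-translate toward the scan centroid, and iterate."""
        rng = np.random.default_rng(seed)
        current = model - model.mean(axis=0) + scan.mean(axis=0)  # coarse step
        for _ in range(max_restarts):
            if alignment_metric(current, scan) >= threshold:
                break  # an ICP-style fine registration would run here
            rotated = (current - current.mean(axis=0)) @ random_rotation(rng).T
            current = rotated + scan.mean(axis=0)
        return current

    # Happy path: a model differing from the scan only by a translation is
    # recovered by the coarse centroid alignment alone.
    rng = np.random.default_rng(3)
    scan = rng.normal(size=(100, 3))
    model = scan + np.array([5.0, -2.0, 1.0])
    print(alignment_metric(register(model, scan), scan))  # 1.0
    ```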
  • Patent number: 10452789
    Abstract: Systems, apparatuses and/or methods may provide for generating a packing order of items within a container that consolidates the items into a reduced space. Items may be scanned with a three-dimensional (3D) imager, and models may be generated of the items based on the data from the 3D imager. The items may be located within minimal-volume enclosing bounding boxes, which may be analyzed to determine whether they may be merged together in one of their bounding boxes, or into a new bounding box that is spatially advantageous in terms of packing. If a combination of items is realizable and is determined to take up less space in a bounding box than the bounding boxes of the items considered separately, then they may be merged into a single bounding box. Thus, a spatially efficient packing sequence for a plurality of real objects may be generated to maximize packing efficiency.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: October 22, 2019
    Assignee: Intel Corporation
    Inventors: Maoz Madmony, Shahar Fleishman, Mark Kliger, Gershom Kutliroff
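    The merge decision can be illustrated by comparing the volume of a joint enclosing box against the sum of the individual boxes. The sketch uses axis-aligned bounding boxes over point samples as a simplification of the patent's minimal-volume enclosing boxes; the items, their shapes, and the candidate placement are made up for the example.
    ```python
    import numpy as np

    def aabb_volume(points):
        """Volume of the axis-aligned bounding box enclosing an (N, 3) point set."""
        extent = points.max(axis=0) - points.min(axis=0)
        return float(np.prod(extent))

    def worth_merging(item_a, item_b):
        """Return whether two scanned items, in a candidate relative placement,
        take less box volume together than apart, plus the merged box volume."""
        merged = aabb_volume(np.vstack([item_a, item_b]))
        return merged < aabb_volume(item_a) + aabb_volume(item_b), merged

    # An L-shaped item (two slabs) and a small box placed into the L's notch.
    slab1 = np.array([[0, 0, 0], [3, 1, 1]], float)  # opposite corners define the AABB
    slab2 = np.array([[0, 0, 0], [1, 3, 1]], float)
    l_item = np.vstack([slab1, slab2])
    small_box = np.array([[1.2, 1.2, 0.0], [2.8, 2.8, 1.0]])
    print(worth_merging(l_item, small_box))  # (True, 9.0): merging saves space
    ```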
  • Publication number: 20190311248
    Abstract: A system and method for random sampled convolutions are disclosed to efficiently boost the expressive power of a convolutional neural network (CNN) without adding computation cost. The method for random sampled convolutions selects a receptive field size and generates filters in which only a subset of the receptive field elements, the learnable parameters, is active, wherein the number of learnable parameters corresponds to computing characteristics, such as SIMD capability, of the processing system upon which the CNN is executed. Several random filters may be generated, with each being run separately on the CNN. The random filter that causes the fastest convergence is selected over the others. The placement of the random filter in the CNN may be per layer, per channel, or per convolution operation. The CNN employing the random sampled convolutions method performs as well as other CNNs utilizing the same receptive field size.
    Type: Application
    Filed: June 21, 2019
    Publication date: October 10, 2019
    Applicant: Intel Corporation
    Inventors: Shahar Fleishman, Raizy Kellermann, Rana Hanocka
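    A minimal sketch of the filter-generation side: random sparse kernels are created with a fixed number of active taps (the count here is an assumed stand-in for matching the SIMD width), and one of them is applied with a plain convolution. The selection-by-fastest-convergence step would require a full training loop and is only noted in a comment.
    ```python
    import numpy as np

    def random_sparse_kernel(rng, receptive_field=5, active_taps=8):
        """Kernel of size receptive_field x receptive_field in which only
        `active_taps` randomly chosen positions carry learnable weights."""
        kernel = np.zeros((receptive_field, receptive_field))
        idx = rng.choice(receptive_field**2, size=active_taps, replace=False)
        kernel.flat[idx] = rng.normal(size=active_taps)
        return kernel

    def conv2d_valid(image, kernel):
        """Plain 'valid' 2D convolution used to apply the sparse kernel."""
        kh, kw = kernel.shape
        h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # Generate a few candidate sparse filters; a training loop (not shown) would
    # keep the candidate whose network converges fastest.
    rng = np.random.default_rng(4)
    candidates = [random_sparse_kernel(rng) for _ in range(3)]
    image = rng.normal(size=(16, 16))
    print(conv2d_valid(image, candidates[0]).shape)  # (12, 12)
    ```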
  • Publication number: 20190278376
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Application
    Filed: March 6, 2018
    Publication date: September 12, 2019
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Patent number: 10373380
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
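    A building block shared by this and the related reconstruction entries is back-projecting depth pixels into a global coordinate system using the camera pose. The sketch below shows that step for an assumed pinhole camera model with made-up intrinsics; the detection, segmentation, and registration stages are not shown.
    ```python
    import numpy as np

    def depth_to_world(depth, fx, fy, cx, cy, pose):
        """Back-project a depth frame into the global coordinate system.

        depth: (H, W) array of depths in meters; fx, fy, cx, cy: pinhole intrinsics;
        pose: 4x4 camera-to-world transform. Returns an (H*W, 3) point cloud.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        points_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
        return (points_cam @ pose.T)[:, :3]

    # Example: a flat wall 2 m in front of a camera at the world origin.
    depth = np.full((120, 160), 2.0)
    cloud = depth_to_world(depth, fx=100.0, fy=100.0, cx=80.0, cy=60.0, pose=np.eye(4))
    print(cloud.shape, cloud[:, 2].min(), cloud[:, 2].max())  # (19200, 3) 2.0 2.0
    ```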
  • Patent number: 10229542
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Publication number: 20190043203
    Abstract: A system, article, and method of recurrent semantic segmentation for image processing by factoring historical semantic segmentation.
    Type: Application
    Filed: January 12, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Shahar Fleishman, Naomi Ken Korem, Mark Kliger
  • Publication number: 20180336439
    Abstract: An example apparatus for detecting novel data includes a discriminator trained using a generator to receive data to be classified. The discriminator may also be trained to classify the received data as novel data in response to detecting that the received data does not correspond to known categories of data.
    Type: Application
    Filed: June 19, 2017
    Publication date: November 22, 2018
    Applicant: Intel Corporation
    Inventors: Mark Kliger, Shahar Fleishman
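    The decision rule implied by the abstract can be sketched as follows: given a discriminator's scores over the known categories (the generator-assisted training itself is not shown), an input is flagged as novel when no known category is confident enough. The softmax scoring and threshold value are illustrative assumptions.
    ```python
    import numpy as np

    def classify_or_flag_novel(class_scores, novelty_threshold=0.5, class_names=None):
        """Return the most likely known category, or "novel" if the discriminator
        is not confident that the input belongs to any known category."""
        probs = np.exp(class_scores - class_scores.max())
        probs /= probs.sum()
        best = int(probs.argmax())
        if probs[best] < novelty_threshold:
            return "novel"
        return class_names[best] if class_names else best

    print(classify_or_flag_novel(np.array([4.0, 0.1, 0.2]), class_names=["cat", "dog", "car"]))  # cat
    print(classify_or_flag_novel(np.array([0.4, 0.5, 0.45])))  # novel: no confident known class
    ```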
  • Publication number: 20180286120
    Abstract: Convolutional Neural Networks are applied to object meshes to allow three-dimensional objects to be analyzed. In one example, a method includes performing convolutions on a mesh, wherein the mesh represents a three-dimensional object of an image, the mesh having a plurality of vertices and a plurality of edges between the vertices, performing pooling on the convolutions of an edge of a mesh, and applying fully connected and loss layers to the pooled convolutions to provide metadata about the three-dimensional object.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger
  • Publication number: 20180260965
    Abstract: Techniques are provided for global (non-rigid) scan point registration between a scanned object and an associated model, from an arbitrary initial starting position, based on a combination of iterative coarse registration and fine registration. A methodology implementing the techniques according to an embodiment includes generating a model transformation based on a coarse registration between the model and the point scan. The method further includes calculating an alignment metric based on a distance measurement between the point scan and the transformed model. If the alignment metric exceeds a selected threshold value, a fine registration is performed between the transformed model and the point scan. Otherwise, the method continues by performing a random rotation of the model; a translation of the rotated model towards a centroid of the point scan; and iterating the coarse registration using the translated model until the alignment metric is achieved, after which the fine registration is performed.
    Type: Application
    Filed: March 9, 2017
    Publication date: September 13, 2018
    Applicant: Intel Corporation
    Inventors: Rana Hanocka, Shahar Fleishman, Jackie Assa
  • Patent number: 9939914
    Abstract: Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display are described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hands and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using a three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: April 10, 2018
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Gershom Kutliroff, Yaron Yanai
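    The interaction step can be illustrated with a simple proximity test between tracked fingertip positions and virtual object positions expressed in the same 3D space. The object records, touch radius, and the idea of dispatching a device command on contact are assumptions for the sketch; the 3D display and hand animation side is not shown.
    ```python
    import numpy as np

    def touched_objects(fingertip_positions, objects, touch_radius=0.03):
        """Return the names of virtual objects that any tracked fingertip is
        currently touching (within `touch_radius`, in meters)."""
        touched = []
        for name, center in objects.items():
            dists = np.linalg.norm(fingertip_positions - np.asarray(center), axis=1)
            if dists.min() < touch_radius:
                touched.append(name)  # a real system would dispatch a device command here
        return touched

    fingers = np.array([[0.10, 0.02, 0.40], [0.12, 0.00, 0.41]])
    virtual_objects = {"volume_knob": (0.12, 0.00, 0.40), "play_button": (0.30, 0.10, 0.50)}
    print(touched_objects(fingers, virtual_objects))  # ['volume_knob']
    ```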
  • Patent number: 9911219
    Abstract: Techniques related to pose estimation for an articulated body are discussed. Such techniques may include extracting, segmenting, classifying, and labeling blobs, generating initial kinematic parameters that provide spatial relationships of elements of a kinematic model representing an articulated body, and refining the kinematic parameters to provide a pose estimation for the articulated body.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: March 6, 2018
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger, Alon Lerner
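    As a rough illustration of generating initial kinematic parameters from labeled blobs, the sketch below derives joint angles and segment lengths of a planar two-segment limb from blob centroids. The blob labels and the planar model are simplifications; the blob extraction, classification, and refinement stages described in the abstract are not shown.
    ```python
    import numpy as np

    def initial_kinematic_parameters(blob_centroids):
        """Initial parameters of a planar shoulder-elbow-wrist limb model,
        derived from the centroids of labeled blobs."""
        shoulder, elbow, wrist = (np.asarray(blob_centroids[k])
                                  for k in ("shoulder", "elbow", "wrist"))
        upper, lower = elbow - shoulder, wrist - elbow
        shoulder_angle = np.arctan2(upper[1], upper[0])
        elbow_angle = np.arctan2(lower[1], lower[0]) - shoulder_angle
        return {"shoulder_angle": float(shoulder_angle),
                "elbow_angle": float(elbow_angle),
                "upper_length": float(np.linalg.norm(upper)),
                "lower_length": float(np.linalg.norm(lower))}

    centroids = {"shoulder": (0.0, 0.0), "elbow": (0.3, 0.0), "wrist": (0.3, 0.25)}
    print(initial_kinematic_parameters(centroids))
    # shoulder_angle 0.0, elbow_angle ~1.57 rad (forearm bent straight upward)
    ```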
  • Patent number: 9910498
    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen, by using the positions and movements of his hands and fingers or other objects.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: March 6, 2018
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Amit Bleiweiss, Shahar Fleishman, Yotam Livny, Jonathan Epstein
  • Publication number: 20180018805
    Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
    Type: Application
    Filed: July 13, 2016
    Publication date: January 18, 2018
    Applicant: Intel Corporation
    Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger