Patents by Inventor Mark Kliger

Mark Kliger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11861944
    Abstract: Video output is generated based on first video data that depicts the user performing an activity. Poses of the user during performance of the activity are compared with second video data that depicts an instructor performing the activity. Corresponding poses of the user's body and the instructor's body may be determined through comparison of the first and second video data. The video data is used to determine the rate of motion of the user and to generate video output in which a visual representation of the instructor moves at a rate similar to that of the user. For example, video output generated based on an instructional fitness video may be synchronized so that movement of the presented instructor matches the rate of movement of the user performing an exercise, improving user comprehension and performance.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 2, 2024
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Ido Yerushalmy, Ianir Ideses, Eli Alshan, Mark Kliger, Liza Potikha, Dotan Kaufman, Sharon Alpert, Eduard Oks, Noam Sorek
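
A minimal sketch of the rate-matching idea described in the abstract above: estimate the user's and instructor's rates of motion from pose tracks and derive a playback speed for the instructor video. The function names, the displacement-based rate metric, and the toy data are illustrative assumptions, not the patented method.

```python
import numpy as np

def estimate_motion_rate(poses: np.ndarray) -> float:
    """Mean per-frame joint displacement for a (frames, joints, 2) pose track."""
    deltas = np.linalg.norm(np.diff(poses, axis=0), axis=-1)
    return float(deltas.mean())

def playback_speed(user_poses: np.ndarray, instructor_poses: np.ndarray) -> float:
    """Speed factor so the instructor's motion rate approximates the user's."""
    user_rate = estimate_motion_rate(user_poses)
    instructor_rate = estimate_motion_rate(instructor_poses)
    return user_rate / max(instructor_rate, 1e-6)

# Toy example: the user moves at roughly half the instructor's rate, so the
# instructor video would be slowed to roughly 0.5x speed.
rng = np.random.default_rng(0)
instructor = np.cumsum(rng.normal(0.0, 1.0, size=(120, 17, 2)), axis=0)
user = np.cumsum(rng.normal(0.0, 0.5, size=(120, 17, 2)), axis=0)
print(f"playback speed ~ {playback_speed(user, instructor):.2f}x")
```
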
  • Patent number: 11783542
    Abstract: Devices and techniques are generally described for three dimensional mesh generation. In various examples, first two-dimensional (2D) image data representing a human body may be received from a first image sensor. Second 2D image data representing the human body may be received from a second image sensor. A first pose parameter and a first shape parameter may be determined using a first three-dimensional (3D) mesh prediction model and the first 2D image data. A second pose parameter and a second shape parameter may be determined using a second 3D mesh prediction model and the second 2D image data. In various examples, an updated 3D mesh prediction model may be generated from the first 3D mesh prediction model based at least in part on a first difference between the first pose parameter and the second pose parameter and a second difference between the first shape parameter and the second shape parameter.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: October 10, 2023
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Matan Goldman, Lior Fritz, Omer Meir, Imry Kissos, Yaar Harari, Eduard Oks, Mark Kliger
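
The abstract above describes updating one mesh-prediction model from the disagreement between pose and shape parameters predicted from two camera views. Below is a hedged sketch of such a cross-view consistency loss; the squared-difference form, the weights, and the SMPL-like parameter sizes are assumptions for illustration only.

```python
import numpy as np

def consistency_loss(pose_a, shape_a, pose_b, shape_b, w_pose=1.0, w_shape=1.0):
    """Penalize disagreement between two views' predicted pose and shape parameters."""
    pose_term = np.sum((pose_a - pose_b) ** 2)
    shape_term = np.sum((shape_a - shape_b) ** 2)
    return w_pose * pose_term + w_shape * shape_term

# Toy parameters with SMPL-like sizes (72 pose values, 10 shape values); the
# two views disagree slightly, producing a non-zero training signal.
rng = np.random.default_rng(1)
pose_cam1, shape_cam1 = rng.normal(size=72), rng.normal(size=10)
pose_cam2, shape_cam2 = pose_cam1 + 0.05, shape_cam1 + 0.1
print(f"consistency loss = {consistency_loss(pose_cam1, shape_cam1, pose_cam2, shape_cam2):.3f}")
```
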
  • Patent number: 11771863
    Abstract: Systems for assisting a user in performance of a meditation activity or another type of activity are described. The systems receive user input and sensor data indicating physiological values associated with the user. These values are used to determine a recommended type of activity and a length of time for the activity. While the user performs the activity, sensors are used to measure physiological values, and an output that is provided to the user is selected based on the measured physiological values. The output may be selected to assist the user in reaching target physiological values, such as a slower respiration rate. After completion of the activity, additional physiological values are used to determine the effectiveness of the activity and the output that was provided. The effectiveness of the activity and the output may be used to determine future recommendations and future output.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: October 3, 2023
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Eli Alshan, Mark Kliger, Ido Yerushalmy, Liza Potikha, Dotan Kaufman, Ianir Ideses, Eduard Oks, Noam Sorek
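
As a hedged illustration of selecting output from measured physiological values to guide the user toward a target, as described above, the sketch below picks a guided breathing pace partway between the measured and target respiration rates. The target value and the fraction are assumptions, not the patented selection logic.

```python
def select_breathing_pace(measured_rpm: float, target_rpm: float = 6.0) -> float:
    """Guidance pace (breaths per minute) a fraction of the way toward the target."""
    # Moving only part of the way keeps the guidance achievable for the user.
    return measured_rpm + 0.3 * (target_rpm - measured_rpm)

for rpm in (18.0, 12.0, 8.0):
    print(f"measured {rpm:.0f} breaths/min -> guide at {select_breathing_pace(rpm):.1f} breaths/min")
```
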
  • Publication number: 20220261574
    Abstract: Characteristics of a user's movement are evaluated based on performance of activities by a user within a field of view of a camera. Video data representing performance of a series of movements by the user is acquired by the camera. Pose data is determined based on the video data, the pose data representing positions of the user's body while performing the movements. The pose data is compared to a set of existing videos that correspond to known errors to identify errors performed by the user. The errors may be used to generate scores for various characteristics of the user's movement. Based on the errors, exercises or other activities to improve the movement of the user may be determined and included in an output presented to the user.
    Type: Application
    Filed: February 16, 2021
    Publication date: August 18, 2022
    Inventors: Eduard Oks, Ridge Carpenter, Lamarr Smith, Claire McGowan, Elizabeth Reisman, Ianir Ideses, Eli Alshan, Mark Kliger, Matan Goldman, Liza Potikha, Ido Yerushalmy, Dotan Kaufman, Guy Adam, Omer Meir, Lior Fritz, Imry Kissos, Georgy Melamed, Eran Borenstein, Sharon Alpert, Noam Sorek
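
A hedged sketch of the comparison step described in the abstract above: a user's pose sequence is matched against reference sequences associated with known form errors. The mean-joint-distance metric, the threshold, and the toy error library are illustrative assumptions.

```python
import numpy as np

def sequence_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean joint distance between two equal-length (frames, joints, 2) sequences."""
    return float(np.linalg.norm(a - b, axis=-1).mean())

def detect_errors(user_seq, error_library, threshold=0.5):
    """Labels of the known-error references the user's sequence resembles."""
    return [label for label, ref in error_library.items()
            if sequence_distance(user_seq, ref) < threshold]

# Toy library: one reference is close to the user's performance, one is not.
rng = np.random.default_rng(2)
user_seq = rng.normal(size=(60, 17, 2))
library = {"knees_cave_in": user_seq + 0.1, "rounded_back": user_seq + 2.0}
print(detect_errors(user_seq, library))  # ['knees_cave_in']
```
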
  • Publication number: 20220167858
    Abstract: The present invention is for a method and system for pain classification and monitoring, optionally in a subject that is awake, semi-awake, or sedated.
    Type: Application
    Filed: February 16, 2022
    Publication date: June 2, 2022
    Inventors: Galit Zuckerman-Stark, Mark Kliger
  • Patent number: 11259708
    Abstract: The present invention is for a method and system for pain classification and monitoring, optionally in a subject that is awake, semi-awake, or sedated.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: March 1, 2022
    Assignee: Medasense Biometrics Ltd.
    Inventors: Galit Zuckerman-Stark, Mark Kliger
  • Publication number: 20200359914
    Abstract: The present invention is for a method and system for pain classification and monitoring, optionally in a subject that is awake, semi-awake, or sedated.
    Type: Application
    Filed: August 3, 2020
    Publication date: November 19, 2020
    Inventors: Galit Zuckerman-Stark, Mark Kliger
  • Patent number: 10743778
    Abstract: The present invention is for a method and system for pain classification and monitoring, optionally in a subject that is awake, semi-awake, or sedated.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: August 18, 2020
    Assignee: Medasense Biometrics Ltd.
    Inventors: Galit Zuckerman-Stark, Mark Kliger
  • Patent number: 10685446
    Abstract: A system, article, and method of recurrent semantic segmentation for image processing by factoring historical semantic segmentation.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Naomi Ken Korem, Mark Kliger
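
The abstract above is terse; as one hedged reading of "factoring historical semantic segmentation", the sketch below blends the previous frame's per-pixel class probabilities with the current frame's prediction. The exponential blend is a stand-in assumption and does not reproduce the patented recurrent formulation.

```python
import numpy as np

def fuse_segmentation(prev_probs: np.ndarray, cur_probs: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Blend per-pixel class probabilities (H, W, C) from consecutive frames."""
    fused = alpha * cur_probs + (1.0 - alpha) * prev_probs
    return fused / fused.sum(axis=-1, keepdims=True)  # renormalize per pixel

rng = np.random.default_rng(3)
prev = rng.dirichlet(np.ones(4), size=(8, 8))  # previous frame, 4 classes
cur = rng.dirichlet(np.ones(4), size=(8, 8))   # current frame
labels = fuse_segmentation(prev, cur).argmax(axis=-1)
print(labels.shape)  # (8, 8) temporally smoothed label map
```
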
  • Patent number: 10643382
    Abstract: Convolutional Neural Networks are applied to object meshes to allow three-dimensional objects to be analyzed. In one example, a method includes performing convolutions on a mesh, wherein the mesh represents a three-dimensional object of an image, the mesh having a plurality of vertices and a plurality of edges between the vertices, performing pooling on the convolutions of an edge of a mesh, and applying fully connected and loss layers to the pooled convolutions to provide metadata about the three-dimensional object.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: May 5, 2020
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger
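
A hedged sketch of the pipeline named in the abstract above: a convolution over mesh edges, pooling over the edge features, and a fully connected layer producing per-object predictions. The neighbor definition, the weights, and the toy ring adjacency are illustrative assumptions, not the patented operator.

```python
import numpy as np

def edge_convolution(edge_feats, edge_neighbors, w_self, w_neigh):
    """Combine each edge's feature with the mean feature of its neighboring edges."""
    out = np.empty((edge_feats.shape[0], w_self.shape[1]))
    for i, neigh in enumerate(edge_neighbors):
        neigh_mean = edge_feats[neigh].mean(axis=0) if neigh else np.zeros(edge_feats.shape[1])
        out[i] = edge_feats[i] @ w_self + neigh_mean @ w_neigh
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(4)
feats = rng.normal(size=(9, 5))                             # 9 mesh edges, 5 features each
neighbors = [[(i - 1) % 9, (i + 1) % 9] for i in range(9)]  # toy ring adjacency
conv = edge_convolution(feats, neighbors, rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
pooled = conv.max(axis=0)                                   # pooling over edges
logits = pooled @ rng.normal(size=(8, 3))                   # fully connected layer, 3 classes
print(logits)                                               # per-class scores for the mesh
```
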
  • Patent number: 10573018
    Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger
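
A hedged sketch of the grouping-and-fusion step described above: detected instances of the same object class are grouped by feature similarity and their point clouds are combined into a fused object. The cosine-similarity rule and threshold are assumptions; the patent's similarity measure and fusion procedure are not reproduced.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def fuse_similar_instances(instances, sim_threshold=0.9):
    """instances: list of (feature_vector, point_cloud). Returns fused point clouds."""
    groups = []  # each group: (representative feature, list of point clouds)
    for feat, cloud in instances:
        for rep, clouds in groups:
            if cosine_similarity(feat, rep) > sim_threshold:
                clouds.append(cloud)
                break
        else:
            groups.append((feat, [cloud]))
    return [np.concatenate(clouds, axis=0) for _, clouds in groups]

# Three detections of the same chair-like object fuse into one point cloud.
rng = np.random.default_rng(5)
chair_feat = rng.normal(size=16)
instances = [(chair_feat + 0.01 * rng.normal(size=16), rng.normal(size=(100, 3))) for _ in range(3)]
fused = fuse_similar_instances(instances)
print(len(fused), fused[0].shape)  # 1 fused object with 300 points
```
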
  • Patent number: 10452789
    Abstract: Systems, apparatuses and/or methods may provide for generating a packing order of items within a container that consolidates the items into a reduced space. Items may be scanned with a three-dimensional (3D) imager, and models may be generated of the items based on the data from the 3D imager. The items may be located within minimal-volume enclosing bounding boxes, which may be analyzed to determine whether they may be merged together in one of their bounding boxes, or into a new bounding box that is spatially advantageous in terms of packing. If a combination of items is realizable and is determined to take up less space in a bounding box than the bounding boxes of the items considered separately, then they may be merged into a single bounding box. Thus, a spatially efficient packing sequence for a plurality of real objects may be generated to maximize packing efficiency.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: October 22, 2019
    Assignee: Intel Corporation
    Inventors: Maoz Madmony, Shahar Fleishman, Mark Kliger, Gershom Kutliroff
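
A hedged sketch of the merge test described above: candidate ways of placing two items' bounding boxes together are evaluated, and the items are merged into a single box when the best combined box wastes little space relative to keeping the boxes separate. The axis-aligned stacking candidates and the waste threshold are illustrative assumptions.

```python
import numpy as np

def combined_box(dims_a: np.ndarray, dims_b: np.ndarray, axis: int) -> np.ndarray:
    """Bounding box of box B stacked on box A along the given axis."""
    out = np.maximum(dims_a, dims_b)
    out[axis] = dims_a[axis] + dims_b[axis]
    return out

def try_merge(dims_a, dims_b, max_waste=0.15):
    """Return the best merged box, or None if too much volume would be wasted."""
    separate = dims_a.prod() + dims_b.prod()
    candidates = [combined_box(dims_a, dims_b, axis) for axis in range(3)]
    best = min(candidates, key=np.prod)
    waste = (np.prod(best) - separate) / np.prod(best)
    return best if waste <= max_waste else None

box_a = np.array([0.30, 0.20, 0.10])   # metres
box_b = np.array([0.30, 0.20, 0.05])   # same footprint, stacks cleanly
print(try_merge(box_a, box_b))                          # [0.3 0.2 0.15] -> merged
print(try_merge(box_a, np.array([0.05, 0.05, 0.40])))   # None -> kept separate
```
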
  • Patent number: 10373380
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
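
A hedged sketch of the registration step described in the abstract above: a segmented object's points are aligned to a reference model with a rigid transform. The standard Kabsch/Procrustes solution with known correspondences is used as a stand-in; the patented registration procedure is not reproduced.

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Least-squares rotation R and translation t mapping source points onto target points."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: rotate and translate a model, then recover the transform.
rng = np.random.default_rng(6)
model = rng.normal(size=(200, 3))                # reference object model
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
scan = model @ R_true.T + np.array([0.5, -0.2, 1.0])  # segmented points from the scene
R, t = rigid_align(model, scan)
print(np.allclose(R, R_true, atol=1e-6))         # True
```
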
  • Patent number: 10229542
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Publication number: 20190043203
    Abstract: A system, article, and method of recurrent semantic segmentation for image processing by factoring historical semantic segmentation.
    Type: Application
    Filed: January 12, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Shahar Fleishman, Naomi Ken Korem, Mark Kliger
  • Publication number: 20180336439
    Abstract: An example apparatus for detecting novel data includes a discriminator trained using a generator to receive data to be classified. The discriminator may also be trained to classify the received data as novel data in response to detecting that the received data does not correspond to known categories of data.
    Type: Application
    Filed: June 19, 2017
    Publication date: November 22, 2018
    Applicant: Intel Corporation
    Inventors: Mark Kliger, Shahar Fleishman
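
A hedged sketch of the decision rule suggested by the abstract above: an input is flagged as novel when no known category is scored confidently. The linear scorer, softmax threshold, and toy features are assumptions, and the adversarial training against a generator described in the publication is not shown.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def is_novel(features, class_weights, threshold=0.6):
    """Flag the input as novel if no known category is scored confidently."""
    probs = softmax(class_weights @ features)
    return bool(probs.max() < threshold)

rng = np.random.default_rng(7)
weights = rng.normal(size=(5, 32))         # scorer for 5 known categories, 32-dim features
known_like = weights[2]                    # strongly aligned with a known category
unknown_like = 0.05 * rng.normal(size=32)  # weak, unfamiliar activation pattern
print(is_novel(known_like, weights), is_novel(unknown_like, weights))  # expected: False True
```
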
  • Publication number: 20180286120
    Abstract: Convolutional Neural Networks are applied to object meshes to allow three-dimensional objects to be analyzed. In one example, a method includes performing convolutions on a mesh, wherein the mesh represents a three-dimensional object of an image, the mesh having a plurality of vertices and a plurality of edges between the vertices, performing pooling on the convolutions of an edge of a mesh, and applying fully connected and loss layers to the pooled convolutions to provide metadata about the three-dimensional object.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger
  • Patent number: 9911219
    Abstract: Techniques related to pose estimation for an articulated body are discussed. Such techniques may include extracting, segmenting, classifying, and labeling blobs, generating initial kinematic parameters that provide spatial relationships of elements of a kinematic model representing an articulated body, and refining the kinematic parameters to provide a pose estimation for the articulated body.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: March 6, 2018
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger, Alon Lerner
  • Publication number: 20180018805
    Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
    Type: Application
    Filed: July 13, 2016
    Publication date: January 18, 2018
    Applicant: Intel Corporation
    Inventors: Gershom Kutliroff, Shahar Fleishman, Mark Kliger
  • Patent number: 9747717
    Abstract: Techniques related to non-rigid transformations for articulated bodies are discussed. Such techniques may include repeatedly selecting target positions for matching a kinematic model of an articulated body, generating virtual end-effectors for the kinematic model and corresponding to the target positions, generating an inverse kinematics problem including a Jacobian matrix, and determining a change in kinematic model parameters based on the inverse kinematics problem until a convergence is attained.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: August 29, 2017
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger, Alon Lerner
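
A hedged sketch in the spirit of the abstract above: an iterative inverse-kinematics solve that builds a Jacobian and updates joint parameters until the end-effector converges to a target. A planar two-joint chain and a damped least-squares update stand in for the patented formulation with virtual end-effectors.

```python
import numpy as np

LINKS = np.array([1.0, 0.8])  # link lengths of a planar two-joint chain

def end_effector(angles: np.ndarray) -> np.ndarray:
    """Forward kinematics: end-effector position for the two joint angles."""
    a1, a2 = angles
    return np.array([LINKS[0] * np.cos(a1) + LINKS[1] * np.cos(a1 + a2),
                     LINKS[0] * np.sin(a1) + LINKS[1] * np.sin(a1 + a2)])

def jacobian(angles: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Numerical Jacobian of the end-effector position w.r.t. the joint angles."""
    J = np.zeros((2, len(angles)))
    for i in range(len(angles)):
        step = np.zeros_like(angles)
        step[i] = eps
        J[:, i] = (end_effector(angles + step) - end_effector(angles - step)) / (2 * eps)
    return J

def solve_ik(target, angles, damping=0.1, iters=100):
    """Damped least-squares updates until the end-effector reaches the target."""
    for _ in range(iters):
        error = target - end_effector(angles)
        if np.linalg.norm(error) < 1e-4:
            break
        J = jacobian(angles)
        update = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), error)
        angles = angles + update
    return angles

target = np.array([1.2, 0.9])
angles = solve_ik(target, np.array([0.3, 0.3]))
print(end_effector(angles), "vs target", target)
```
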