Patents by Inventor Shahar BEN-EZRA

Shahar BEN-EZRA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12093834
    Abstract: A perception system, comprising a set of reference sensors; a set of test sensors; and a computing device, which is configured for receiving first training signals from the set of reference sensors and receiving second training signals from the set of test sensors, the set of reference sensors and the set of test sensors simultaneously exposed to a common scene; processing the first training signals to obtain reference images containing reference depth information associated with the scene; and using the second training signals and the reference images to train a neural network for transforming subsequent test signals from the set of test sensors into test images containing inferred depth information.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: September 17, 2024
    Assignee: Vaya Vision Sensing Ltd.
    Inventors: Youval Nehmadi, Shahar Ben Ezra, Shmuel Mangan, Mark Wagner, Anna Cohen, Itzik Avital
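The training scheme described in the abstract above can be sketched as follows. This is a minimal, hypothetical NumPy illustration: depth derived from the reference sensors serves as the supervision target for a model operating on test-sensor signals. The linear model and gradient-descent loop are illustrative stand-ins for the patented neural network, and all names and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated simultaneous exposure to a common scene: per-pixel test-sensor
# signals, and reference depth obtained from the reference sensors.
n_pixels, n_features = 200, 4
test_signals = rng.normal(size=(n_pixels, n_features))
true_weights = np.array([2.0, -1.0, 0.5, 3.0])
reference_depth = test_signals @ true_weights  # "reference images" with depth info

# Train a tiny linear model (stand-in for the neural network) by gradient
# descent so that it maps test-sensor signals to the reference depth.
weights = np.zeros(n_features)
lr = 0.05
for _ in range(2000):
    pred = test_signals @ weights
    grad = test_signals.T @ (pred - reference_depth) / n_pixels
    weights -= lr * grad

# After training, the model transforms new test signals into inferred depth,
# without needing the reference sensors at inference time.
new_signals = rng.normal(size=(10, n_features))
inferred_depth = new_signals @ weights
```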
  • Publication number: 20230281844
    Abstract: The present disclosure relates to a device for verifying estimated depth information. The device obtains an image of a scene, wherein the image comprises at least one object of interest having a set of points, obtains height information for at least one point from the set of points of the object of interest, estimates first depth information for the at least one point based on the obtained height information, and detects a corresponding position of the at least one point in the obtained image. The device further receives, from another device, second depth information for the at least one point, and determines a validity of the received second depth information based on a measure of dissimilarity between the first depth information and the second depth information for the at least one point.
    Type: Application
    Filed: May 11, 2023
    Publication date: September 7, 2023
    Inventors: Shahar Ben Ezra, Zohar Barzelay, Eli Sason, Yilun Chen
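The height-based estimation step in this abstract can be illustrated with a standard pinhole-camera relation: for an object of known physical height, depth is proportional to focal length times physical height over the object's pixel height. The function names, the relative-difference dissimilarity measure, and the 10% tolerance below are all hypothetical choices, not the patent's specified method.

```python
def depth_from_height(focal_px: float, height_m: float, height_px: float) -> float:
    """Pinhole model: estimated depth = f * H / h (focal length and object
    height in pixels/metres, result in metres)."""
    return focal_px * height_m / height_px

def is_depth_valid(first_depth: float, second_depth: float, tol: float = 0.1) -> bool:
    """Validate a depth received from another device against the
    height-based estimate, using relative difference as dissimilarity."""
    dissimilarity = abs(first_depth - second_depth) / first_depth
    return dissimilarity <= tol

# Example: a 1.5 m-tall object spans 100 px in a camera with f = 1000 px.
first = depth_from_height(1000.0, 1.5, 100.0)   # 15.0 m
print(is_depth_valid(first, 15.4))              # within 10% -> True
print(is_depth_valid(first, 25.0))              # far off    -> False
```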
  • Publication number: 20230237811
    Abstract: A method and a computing device for object detection and tracking from a video input are described. The method and the computing device may be used to, for example, track objects of interest, such as lane markings, in traffic. A plurality of frames corresponding to a video may be analyzed in a spatiotemporal domain by a neural network. The neural network may be trained using data synthesized in the spatiotemporal domain.
    Type: Application
    Filed: March 21, 2023
    Publication date: July 27, 2023
    Inventors: Darya Frolova, Shahar Ben Ezra, Pavel Kisilev, Xiaoli She, Yu Xie
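The "spatiotemporal domain" idea above can be sketched simply: consecutive frames are stacked into one volume with a time axis, so that motion of an object of interest (such as a lane marking) shows up along that axis. The synthesized drifting line below is purely illustrative; the network itself is not shown.

```python
import numpy as np

def to_spatiotemporal_volume(frames: list) -> np.ndarray:
    """Stack T frames of shape (H, W) into one (T, H, W) volume that a
    network could analyze jointly, instead of frame by frame."""
    return np.stack(frames, axis=0)

# Synthesize data in the spatiotemporal domain: a lane-marking-like vertical
# line that drifts one column per frame (hypothetical toy data).
frames = []
for t in range(4):
    frame = np.zeros((8, 8), dtype=np.float32)
    frame[:, 2 + t] = 1.0  # marking shifts one column per frame
    frames.append(frame)

volume = to_spatiotemporal_volume(frames)
print(volume.shape)  # (4, 8, 8)
# The marking's motion is visible along the time axis of the volume:
print([int(volume[t].argmax() % 8) for t in range(4)])  # [2, 3, 4, 5]
```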
  • Publication number: 20230134125
    Abstract: A system in a vehicle includes an image sensor to obtain images in an image sensor coordinate system and a depth sensor to obtain point clouds in a depth sensor coordinate system. Processing circuitry implements a neural network to determine a validation state of a transformation matrix that transforms the point clouds in the depth sensor coordinate system to transformed point clouds in the image sensor coordinate system. The transformation matrix includes rotation parameters and translation parameters.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 4, 2023
    Inventors: Michael Baltaxe, Dan Levi, Noa Garnett, Doron Portnoy, Amit Batikoff, Shahar Ben Ezra, Tal Furman
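The transformation matrix described in this abstract can be sketched as a standard 4x4 homogeneous transform (rotation plus translation) taking depth-sensor points into the image-sensor frame. The patent uses a neural network to determine the validation state; the residual-threshold check below is a crude hand-written stand-in, and all names and tolerances are hypothetical.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous matrix from rotation and translation parameters."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_points(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud with a 4x4 homogeneous matrix."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :3]

def is_transform_valid(T, depth_pts, image_pts, tol=0.05) -> bool:
    """Stand-in for the learned validation state: accept the matrix if the
    mean residual against known correspondences is under tol."""
    residual = np.linalg.norm(transform_points(T, depth_pts) - image_pts, axis=1)
    return float(residual.mean()) < tol

# A 90-degree yaw plus a translation between the two sensor frames.
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.0, 1.2])
T = make_transform(R, t)

depth_pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
image_pts = transform_points(T, depth_pts)  # ground-truth correspondences
print(is_transform_valid(T, depth_pts, image_pts))          # True
print(is_transform_valid(np.eye(4), depth_pts, image_pts))  # False
```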
  • Publication number: 20220335729
    Abstract: A perception system, comprising a set of reference sensors; a set of test sensors; and a computing device, which is configured for receiving first training signals from the set of reference sensors and receiving second training signals from the set of test sensors, the set of reference sensors and the set of test sensors simultaneously exposed to a common scene; processing the first training signals to obtain reference images containing reference depth information associated with the scene; and using the second training signals and the reference images to train a neural network for transforming subsequent test signals from the set of test sensors into test images containing inferred depth information.
    Type: Application
    Filed: September 22, 2020
    Publication date: October 20, 2022
    Inventors: Youval Nehmadi, Shahar Ben Ezra, Shmuel Mangan, Mark Wagner, Anna Cohen, Itzik Avital
  • Patent number: 10445928
    Abstract: A system and method for generating a high-density three-dimensional (3D) map are disclosed. The system comprises acquiring at least one high density image of a scene using at least one passive sensor; acquiring at least one new set of distance measurements of the scene using at least one active sensor; acquiring a previously generated 3D map of the scene comprising a previous set of distance measurements; merging the at least one new set of distance measurements with the previous set of upsampled distance measurements, wherein merging the at least one new set of distance measurements further includes accounting for a motion transformation between a previous high-density image frame and the acquired high density image and the acquired distance measurements; and overlaying the new set of distance measurements on the high-density image via an upsampling interpolation, creating an output 3D map.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: October 15, 2019
    Assignee: VAYAVISION LTD.
    Inventors: Youval Nehmadi, Shmuel Mangan, Shahar Ben-Ezra, Anna Cohen, Ronny Cohen, Lev Goldentouch, Shmuel Ur
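The overlay step in the abstract above — projecting a sparse set of active-sensor distance measurements onto a dense image grid via upsampling interpolation — can be sketched as follows. The patent does not specify the interpolation scheme; the nearest-neighbour fill below, and all names and values, are hypothetical.

```python
import numpy as np

def upsample_sparse_depth(h, w, rows, cols, dists):
    """Fill an (h, w) depth grid from sparse distance samples located at
    pixel coordinates (rows, cols), via nearest-neighbour interpolation."""
    yy, xx = np.mgrid[0:h, 0:w]
    # Squared distance from every grid cell to every sparse sample.
    d2 = (yy[..., None] - rows) ** 2 + (xx[..., None] - cols) ** 2
    nearest = d2.argmin(axis=-1)       # index of closest sample per cell
    return np.asarray(dists)[nearest]  # dense depth layer for the 3D map

# Two active-sensor measurements at opposite corners of an 8x8 image.
rows = np.array([0, 7])
cols = np.array([0, 7])
dists = [5.0, 20.0]  # metres

dense = upsample_sparse_depth(8, 8, rows, cols, dists)
print(dense[0, 0], dense[7, 7])  # 5.0 20.0
print(dense[1, 1], dense[6, 6])  # 5.0 20.0  (nearest measurement wins)
```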
  • Publication number: 20180232947
    Abstract: A system and method for generating a high-density three-dimensional (3D) map are disclosed. The system comprises acquiring at least one high-density image of a scene using at least one passive sensor; acquiring at least one new set of distance measurements of the scene using at least one active sensor; acquiring a previously generated 3D map of the scene comprising a previous set of distance measurements; merging the at least one new set of distance measurements with the previous set of upsampled distance measurements, wherein merging the at least one new set of distance measurements further includes accounting for a motion transformation between a previous high-density image frame and the acquired high-density image and the acquired distance measurements; and overlaying the new set of distance measurements on the high-density image via an upsampling interpolation, creating an output 3D map.
    Type: Application
    Filed: February 8, 2018
    Publication date: August 16, 2018
    Applicant: VayaVision, Ltd.
    Inventors: Youval NEHMADI, Shmuel MANGAN, Shahar BEN-EZRA, Anna COHEN, Ronny COHEN, Lev GOLDENTOUCH, Shmuel UR