Patents by Inventor Eran Asa

Eran Asa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240096014
    Abstract: A computer implemented method of creating data for a host vehicle simulation, comprising: in each of a plurality of iterations of a host vehicle simulation using at least one processor for: obtaining from an environment simulation engine a semantic-data dataset representing a plurality of scene objects in a geographical area, each one of the plurality of scene objects comprises at least object location coordinates and a plurality of values of semantically described parameters; creating a 3D visual realistic scene emulating the geographical area according to the dataset; applying at least one noise pattern associated with at least one sensor of a vehicle simulated by the host vehicle simulation engine on the virtual 3D visual realistic scene to create sensory ranging data simulation of the geographical area; converting the sensory ranging data simulation to an enhanced dataset emulating the geographical area, the enhanced dataset comprises a plurality of enhanced scene objects.
    Type: Application
    Filed: November 24, 2023
    Publication date: March 21, 2024
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Guy TSAFRIR, Eran ASA
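    The abstract above (publication 20240096014) outlines a per-iteration pipeline: a semantic dataset of scene objects is rendered into a 3D scene, a sensor noise pattern is applied to produce simulated ranging data, and the result is converted back into an enhanced dataset. The following is a minimal Python sketch of that data flow; all class and function names are illustrative assumptions, not Cognata's implementation.

    ```python
    # Minimal sketch of one simulation iteration: semantic dataset -> 3D scene ->
    # noisy ranging data -> enhanced dataset. Names are hypothetical illustrations.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class SceneObject:
        # Each scene object carries location coordinates plus semantically
        # described parameters (e.g. {"type": "pedestrian"}).
        location: tuple
        semantics: dict = field(default_factory=dict)

    def render_3d_scene(objects):
        """Stand-in for a renderer building a visually realistic 3D scene;
        here it simply returns per-object ranges from the host vehicle origin."""
        return np.array([np.linalg.norm(obj.location) for obj in objects])

    def apply_sensor_noise(ranges, noise_std=0.05):
        """Apply a noise pattern associated with a simulated ranging sensor
        (e.g. LiDAR range jitter) to the rendered scene."""
        rng = np.random.default_rng(0)
        return ranges + rng.normal(0.0, noise_std, size=ranges.shape)

    def to_enhanced_dataset(objects, noisy_ranges):
        """Convert the simulated sensory ranging data back into an enhanced
        dataset of scene objects, annotated with the simulated measurement."""
        return [
            SceneObject(obj.location, {**obj.semantics, "measured_range": float(r)})
            for obj, r in zip(objects, noisy_ranges)
        ]

    semantic_dataset = [
        SceneObject((12.0, 3.0, 0.0), {"type": "vehicle"}),
        SceneObject((5.0, -1.5, 0.0), {"type": "pedestrian"}),
    ]
    scene_ranges = render_3d_scene(semantic_dataset)
    sensor_sim = apply_sensor_noise(scene_ranges)
    enhanced = to_enhanced_dataset(semantic_dataset, sensor_sim)
    ```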
  • Publication number: 20230306680
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: May 29, 2023
    Publication date: September 28, 2023
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
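    The abstract above (publication 20230306680) centers on a point-of-view transformation: real signals are lifted to 3D using model-computed depth maps and re-projected into the pose of a target sensor. The sketch below illustrates that idea under simplifying assumptions (a pinhole camera model, dense depth, nearest-pixel splatting); it is not the patented method, and all names are hypothetical.

    ```python
    # Sketch of a point-of-view transformation: pixels from a real camera are
    # lifted to 3D with a predicted depth map, then re-projected into the pose
    # of a target sensor to synthesize its view. Assumes a pinhole camera model.
    import numpy as np

    def unproject(depth, K):
        """Lift every pixel of a depth map to 3D camera coordinates."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
        rays = np.linalg.inv(K) @ pix
        return rays * depth.reshape(1, -1)  # 3 x N points

    def reproject(points_src, colors, T_target_from_src, K, out_shape):
        """Transform 3D points into the target sensor frame and splat their colors."""
        pts = T_target_from_src[:3, :3] @ points_src + T_target_from_src[:3, 3:4]
        proj = K @ pts
        u = np.round(proj[0] / proj[2]).astype(int)
        v = np.round(proj[1] / proj[2]).astype(int)
        h, w = out_shape
        synthetic = np.zeros((h, w, 3))
        valid = (pts[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        synthetic[v[valid], u[valid]] = colors[valid]
        return synthetic

    # Toy example: one real image, a depth map qualifying it, and a target
    # sensor in an identified position 0.5 m to the right of the source camera.
    K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
    image = np.random.rand(48, 64, 3)       # real signal from one sensor
    depth = np.full((48, 64), 10.0)          # depth map computed for that signal
    T = np.eye(4)
    T[0, 3] = -0.5                           # source-to-target transform
    points = unproject(depth, K)
    synthetic_view = reproject(points, image.reshape(-1, 3), T, K, depth.shape)
    ```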
  • Patent number: 11694388
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: July 4, 2023
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20220188579
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Application
    Filed: March 7, 2022
    Publication date: June 16, 2022
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
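    The abstract above (publication 20220188579) describes fitting a simulation generation model so that a statistical fingerprint of its simulated driving data matches the fingerprint of real data. The toy loop below sketches that training objective, assuming a speed-histogram fingerprint and a finite-difference parameter update; both choices are illustrative only, not the claimed method.

    ```python
    # Toy training loop: shrink the gap between a simulated and a real
    # "statistical fingerprint" by adjusting the simulation model's parameters.
    import numpy as np

    rng = np.random.default_rng(42)

    def fingerprint(speeds, bins=np.linspace(0, 40, 9)):
        """Fingerprint of a batch of driving data: a normalized speed histogram."""
        hist, _ = np.histogram(speeds, bins=bins)
        return hist / max(hist.sum(), 1)

    def simulate(params, n=2000):
        """Toy simulation generation model: agent speeds drawn from (mean, std)."""
        mean, std = params
        return np.clip(rng.normal(mean, abs(std), n), 0, None)

    real_speeds = np.clip(rng.normal(13.0, 4.0, 5000), 0, None)  # stand-in for real data
    real_fp = fingerprint(real_speeds)

    params = np.array([25.0, 1.0])           # initial model parameters
    lr, eps = 2.0, 0.5
    for step in range(200):                  # training iterations
        loss = np.abs(fingerprint(simulate(params)) - real_fp).sum()
        # Finite-difference estimate of the gradient of the fingerprint gap.
        grad = np.array([
            (np.abs(fingerprint(simulate(params + eps * e)) - real_fp).sum() - loss) / eps
            for e in np.eye(2)
        ])
        params -= lr * grad                  # modify parameters to minimize the difference
    ```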
  • Patent number: 11270165
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: March 8, 2022
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20210350185
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: July 23, 2021
    Publication date: November 11, 2021
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Publication number: 20210312244
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Application
    Filed: October 15, 2019
    Publication date: October 7, 2021
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Patent number: 11100371
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 24, 2021
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20200210779
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: October 7, 2019
    Publication date: July 2, 2020
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Patent number: 10460208
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: producing a plurality of synthetic training signals, each simulating one of a plurality of signals simultaneously captured from a common training scene by a plurality of sensors, and a plurality of training depth maps each qualifying one of the plurality of synthetic training signals according to the common training scene; training a machine learning model based on the plurality of synthetic training signals and the plurality of training depth maps; using the machine learning model to compute a plurality of computed depth maps based on a plurality of real signals, the plurality of real signals are captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of the plurality of sensors, each of the plurality of computed depth maps.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: October 29, 2019
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa
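    Patent 10460208 above describes a two-stage recipe: synthetic training signals are produced together with the depth maps that qualify them, a machine learning model is trained on those pairs, and the trained model is then used to compute depth maps from real signals. The sketch below illustrates only the staging, using a deliberately simple per-pixel linear model; every name and the signal model are assumptions for illustration.

    ```python
    # Stage 1: fit a toy model on synthetic signals with known depth maps.
    # Stage 2: use the fitted model to compute depth from (stand-in) real signals.
    import numpy as np

    rng = np.random.default_rng(7)

    def make_synthetic_pair(n_pixels=1024):
        """Produce one synthetic training signal (two-sensor intensities) and the
        training depth map that qualifies it according to the simulated scene."""
        depth = rng.uniform(2.0, 50.0, n_pixels)
        signals = np.stack([1.0 / depth + rng.normal(0, 0.01, n_pixels),
                            0.8 / depth + rng.normal(0, 0.01, n_pixels)], axis=1)
        return signals, depth

    # Stage 1: train on synthetic training signals and training depth maps.
    pairs = [make_synthetic_pair() for _ in range(20)]
    X = np.concatenate([s for s, _ in pairs])
    y = np.concatenate([d for _, d in pairs])
    features = 1.0 / np.clip(X, 1e-3, None)          # depth ~ linear in inverse intensity
    w, *_ = np.linalg.lstsq(features, y, rcond=None)

    # Stage 2: compute a depth map from signals captured by the (here simulated) real sensors.
    real_signals, _ = make_synthetic_pair()
    computed_depth = (1.0 / np.clip(real_signals, 1e-3, None)) @ w
    ```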
  • Publication number: 20180349526
    Abstract: A computer implemented method of creating data for a host vehicle simulation, comprising: in each of a plurality of iterations of a host vehicle simulation using at least one processor for: obtaining from an environment simulation engine a semantic-data dataset representing a plurality of scene objects in a geographical area, each one of the plurality of scene objects comprises at least object location coordinates and a plurality of values of semantically described parameters; creating a 3D visual realistic scene emulating the geographical area according to the dataset; applying at least one noise pattern associated with at least one sensor of a vehicle simulated by the host vehicle simulation engine on the virtual 3D visual realistic scene to create sensory ranging data simulation of the geographical area; converting the sensory ranging data simulation to an enhanced dataset emulating the geographical area, the enhanced dataset comprises a plurality of enhanced scene objects.
    Type: Application
    Filed: May 29, 2018
    Publication date: December 6, 2018
    Inventors: Dan Atsmon, Guy Tsafrir, Eran Asa