Patents Assigned to Cognata Ltd.
  • Patent number: 12260489
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Grant
    Filed: May 29, 2023
    Date of Patent: March 25, 2025
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
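    The abstract above describes computing per-sensor depth maps and applying a point-of-view transformation to synthesize the signal a target sensor would capture from a different position. Below is a minimal sketch of one way such a depth-based reprojection can be done for a camera image, assuming a pinhole model with shared intrinsics K and a relative pose (R, t) from the source to the target sensor; the function name and the simple nearest-neighbour splatting are illustrative assumptions, not the patented method.
    ```python
    import numpy as np

    def reproject_to_target_view(image, depth, K, R, t):
        """Warp a source image (H x W x C) into a target camera's point of view using
        a per-pixel depth map, a shared pinhole intrinsic matrix K, and the relative
        pose (R, t) of the target sensor. Returns the synthesized target-view image."""
        h, w = depth.shape
        # Pixel grid in homogeneous coordinates.
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

        # Back-project each pixel to a 3D point in the source camera frame.
        pts_src = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

        # Move the points into the target camera frame and project them.
        pts_tgt = R @ pts_src + t.reshape(3, 1)
        proj = K @ pts_tgt
        uv = (proj[:2] / np.clip(proj[2], 1e-6, None)).round().astype(int)

        # Scatter source pixels into the target image (nearest neighbour, no occlusion handling).
        target = np.zeros_like(image)
        valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h) & (pts_tgt[2] > 0)
        target[uv[1, valid], uv[0, valid]] = image.reshape(-1, image.shape[-1])[valid]
        return target
    ```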
  • Publication number: 20250014276
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Application
    Filed: September 17, 2024
    Publication date: January 9, 2025
    Applicant: Cognata Ltd.
    Inventor: Dan ATSMON
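    The abstract above (shared by several related filings in this list) walks through obtaining map data and imagery, classifying static objects into labels, superimposing the labeled objects over the geographic map data, and rendering the resulting 3D model into the driving system's sensor inputs. The sketch below illustrates only the superimposition step on a 2D label grid; the LABELS mapping, the axis-aligned footprints, and the function name are hypothetical simplifications rather than the patented pipeline.
    ```python
    import numpy as np

    # Hypothetical label-to-id mapping for static objects detected in the imagery.
    LABELS = {"road": 1, "building": 2, "vegetation": 3, "traffic_sign": 4}

    def superimpose_labels(map_grid, labeled_objects, cell_size=1.0):
        """Overlay labeled static objects on a 2D geographic map grid.
        `map_grid` is an HxW integer array (0 = unclassified); each labeled object
        is a (label, x, y, width, height) axis-aligned footprint in map metres."""
        grid = map_grid.copy()
        h, w = grid.shape
        for label, x, y, fw, fh in labeled_objects:
            c0, r0 = int(x / cell_size), int(y / cell_size)
            c1, r1 = int((x + fw) / cell_size), int((y + fh) / cell_size)
            r0, r1 = max(r0, 0), min(r1, h)
            c0, c1 = max(c0, 0), min(c1, w)
            grid[r0:r1, c0:c1] = LABELS[label]
        return grid

    # Example: a 100 m x 100 m area at 1 m resolution with two labeled objects.
    area = np.zeros((100, 100), dtype=int)
    objects = [("road", 0.0, 40.0, 100.0, 10.0), ("building", 20.0, 60.0, 15.0, 25.0)]
    label_map = superimpose_labels(area, objects)
    ```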
  • Publication number: 20240403717
    Abstract: A method for training a computer-vision based perception model, comprising: increasing diversity of backgrounds behind objects in synthetic training data by: inserting into a scene in simulation data at least one simulation object distributed around a sensor position in the scene, such that the at least one simulation object is oriented towards the sensor position, to produce new simulation data; and computing at least one simulated sensor signal using the new simulation data, simulating at least one signal captured by a simulated sensor located in the sensor position; and providing the new simulation data and the at least one simulated sensor signal as synthetic training data to at least one computer-vision based perception model for training the model to detect and additionally or alternatively classify one or more objects in one or more sensor signals.
    Type: Application
    Filed: October 25, 2022
    Publication date: December 5, 2024
    Applicant: Cognata Ltd.
    Inventor: Dan ATSMON
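    The abstract above increases background diversity by inserting simulation objects distributed around the sensor position and oriented towards it. A small sketch of that placement idea, assuming a 2D scene and uniform sampling on an annulus around the sensor (the function name, radii, and sampling scheme are illustrative assumptions):
    ```python
    import math
    import random

    def place_background_objects(sensor_xy, n_objects, min_radius=10.0, max_radius=60.0, seed=None):
        """Distribute simulation objects around a sensor position and orient each one
        towards the sensor, so inserted objects appear against varied backgrounds from
        the sensor's point of view. Returns (x, y, yaw) placement tuples."""
        rng = random.Random(seed)
        sx, sy = sensor_xy
        placements = []
        for _ in range(n_objects):
            angle = rng.uniform(0.0, 2.0 * math.pi)          # bearing from the sensor
            radius = rng.uniform(min_radius, max_radius)     # distance from the sensor
            x, y = sx + radius * math.cos(angle), sy + radius * math.sin(angle)
            yaw = math.atan2(sy - y, sx - x)                 # face back towards the sensor
            placements.append((x, y, yaw))
        return placements

    # Example: scatter 20 hypothetical objects around a sensor at the origin.
    objects = place_background_objects((0.0, 0.0), n_objects=20, seed=7)
    ```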
  • Publication number: 20240394518
    Abstract: A method for generating training data for a machine learning model comprising: accessing a plurality of output values of a machine learning model computed in response to a plurality of input data samples; analyzing the plurality of output values and the plurality of input data samples to compute a plurality of required data sample characteristics associated with at least one unsatisfactory output value of the plurality of output values; generating at least one new input data sample by providing a data generator with a plurality of generation constraints comprising the plurality of required data sample characteristics; and adding the at least one new input data sample to a data repository for producing training data for the machine learning model; wherein the at least one new input data sample comprises at least part of a simulated driving environment for training the machine learning model to operate in an autonomous automotive system.
    Type: Application
    Filed: May 28, 2024
    Publication date: November 28, 2024
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Alon Avraham ATSMON
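    The abstract above analyzes model outputs to find unsatisfactory cases, derives the data-sample characteristics associated with them, and asks a generator for new samples under those constraints. The toy sketch below illustrates that loop with tag-based characteristics and a stand-in generator; all names, the error threshold, and the tag scheme are hypothetical, not the patented analysis.
    ```python
    from collections import Counter
    import random

    def required_characteristics(samples, outputs, targets, err_threshold=0.5):
        """Find characteristics (here: tag combinations) shared by input samples whose
        model output was unsatisfactory, i.e. whose error exceeds a threshold."""
        failing = [s for s, o, t in zip(samples, outputs, targets) if abs(o - t) > err_threshold]
        return Counter(tuple(sorted(s["tags"])) for s in failing)

    def generate_with_constraints(characteristics, generator, n_new=10, seed=0):
        """Ask a data generator for new samples matching the most frequent failing
        characteristics; `generator(tags, rng)` is a hypothetical stand-in for a
        simulation-based sample generator."""
        rng = random.Random(seed)
        ranked = [tags for tags, _ in characteristics.most_common()]
        return [generator(ranked[i % len(ranked)], rng) for i in range(n_new)]

    # Toy example: samples tagged with driving-scene attributes, scalar model outputs.
    samples = [{"tags": ["night", "rain"]}, {"tags": ["day", "clear"]}, {"tags": ["night", "rain"]}]
    outputs, targets = [0.2, 0.9, 0.1], [1.0, 1.0, 1.0]
    constraints = required_characteristics(samples, outputs, targets)
    new_samples = generate_with_constraints(constraints, lambda tags, rng: {"tags": list(tags)})
    ```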
  • Patent number: 12112432
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Grant
    Filed: August 11, 2022
    Date of Patent: October 8, 2024
    Assignee: Cognata Ltd.
    Inventor: Dan Atsmon
  • Patent number: 12065149
    Abstract: There is provided a system for adapting parameters of a vehicle for reduction of likelihood of an adverse event, comprising: hardware processor(s) executing a code for: performing, for each respective driver of multiple drivers: obtaining an indication of a vehicle driven by the respective driver, obtaining an indication of a certain advanced driver assistance system (ADAS) selected from multiple ADAS for installation in the vehicle, obtaining an environmental profile indicative of a prediction of an environment in which the vehicle with installed ADAS is predicted for driving therein at a future time interval, defining a simulation model in which the vehicle with installed ADAS is driving according to the environment profile, computing a risk of an adverse event during the future time interval by executing the simulation model, and selecting parameter(s) of the vehicle for adaptation thereof according to a predicted likelihood of reducing the risk of the adverse event.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: August 20, 2024
    Assignee: Cognata Ltd.
    Inventor: Alon Atsmon
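    The abstract above estimates the risk of an adverse event by simulating a vehicle with a chosen ADAS in a predicted environment, then selects vehicle parameters that are predicted to reduce that risk. A minimal sketch of the selection logic, assuming a crude Monte-Carlo stand-in for the simulation model (the probability model, dictionaries, and names are illustrative only):
    ```python
    import random

    def estimate_risk(vehicle_params, adas, environment_profile, n_runs=100, seed=0):
        """Monte-Carlo stand-in for executing a simulation model: returns the fraction
        of simulated runs that end in an adverse event. The probability model here is
        purely illustrative, not the patented simulation."""
        rng = random.Random(seed)
        base = environment_profile["hazard_rate"] * (1.0 - adas["mitigation"])
        p_event = max(0.0, min(1.0, base * vehicle_params["speed_factor"]))
        return sum(rng.random() < p_event for _ in range(n_runs)) / n_runs

    def select_parameters(candidates, adas, environment_profile):
        """Pick the candidate vehicle parameter set predicted to minimise the risk of
        an adverse event over the future time interval described by the profile."""
        return min(candidates, key=lambda p: estimate_risk(p, adas, environment_profile))

    # Example: choose between two hypothetical speed-limiting configurations.
    adas = {"mitigation": 0.4}
    profile = {"hazard_rate": 0.3}
    candidates = [{"speed_factor": 1.0}, {"speed_factor": 0.7}]
    best = select_parameters(candidates, adas, profile)
    ```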
  • Patent number: 12061965
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: August 13, 2024
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
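    The abstract above trains the simulation generator by minimizing the difference between a statistical fingerprint of the simulated driving data and one computed from real data. The sketch below shows one way such fingerprints and their difference could be computed, using a single speed histogram and an L1 distance; the real system would presumably combine many statistics, so all names and choices here are illustrative assumptions.
    ```python
    import numpy as np

    def statistical_fingerprint(trajectories, bins=16, speed_range=(0.0, 40.0)):
        """Summarise driving data as a normalised histogram of instantaneous speeds,
        computed from a list of (T x 2) position trajectories."""
        speeds = np.concatenate([np.linalg.norm(np.diff(t, axis=0), axis=1) for t in trajectories])
        hist, _ = np.histogram(speeds, bins=bins, range=speed_range, density=True)
        return hist / (hist.sum() + 1e-12)

    def fingerprint_distance(sim_fp, real_fp):
        """Difference the simulation generation model would be trained to minimise
        (here a simple L1 distance between the two fingerprints)."""
        return float(np.abs(sim_fp - real_fp).sum())

    # Toy example with random-walk trajectories standing in for real and simulated data.
    rng = np.random.default_rng(0)
    real = [np.cumsum(rng.normal(0, 1.0, size=(200, 2)), axis=0) for _ in range(5)]
    sim = [np.cumsum(rng.normal(0, 1.5, size=(200, 2)), axis=0) for _ in range(5)]
    loss = fingerprint_distance(statistical_fingerprint(sim), statistical_fingerprint(real))
    ```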
  • Publication number: 20240257410
    Abstract: A system for generating synthetic data, comprising at least one processing circuitry adapted for: computing a sequence of partial simulation images, where each of the sequence of partial simulation images is associated with an estimated simulation time and with part of a simulated environment at the respective estimated simulation time thereof; computing at least one simulated point-cloud, each simulating a point-cloud captured in a capture interval by a sensor operated in a scanning pattern from an environment equivalent to a simulated environment, by applying to each partial simulation image of the sequence of partial simulation images a capture mask computed according to the scanning pattern and a relation between the capture interval and an estimated simulation time associated with the partial simulation image; and providing the at least one simulated point-cloud to a training engine to train a perception system comprising the sensor.
    Type: Application
    Filed: February 1, 2024
    Publication date: August 1, 2024
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Guy GOLDNER, Ilan TSAFRIR
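    The abstract above builds a simulated point cloud by masking a sequence of partial simulation images according to the sensor's scanning pattern and each image's position within the capture interval. The sketch below assumes a sensor that sweeps left-to-right across a range image during the interval; the mask construction, angle grid, and names are illustrative assumptions, not the patented capture-mask computation.
    ```python
    import numpy as np

    def capture_masks(n_frames, width, height):
        """Build one boolean mask per partial simulation image for a sensor that sweeps
        left-to-right across the field of view during the capture interval: frame i
        contributes only the columns scanned in its slice of the interval."""
        edges = np.linspace(0, width, n_frames + 1).astype(int)
        masks = np.zeros((n_frames, height, width), dtype=bool)
        for i in range(n_frames):
            masks[i, :, edges[i]:edges[i + 1]] = True
        return masks

    def simulate_point_cloud(range_images, azimuths, elevations):
        """Assemble a simulated LiDAR-like point cloud from a sequence of partial range
        images, each depicting the simulated environment at a slightly later simulation
        time, masked according to the scanning pattern."""
        n, h, w = range_images.shape
        masks = capture_masks(n, w, h)
        points = []
        for rng_img, mask in zip(range_images, masks):
            r = rng_img[mask]
            az, el = azimuths[mask], elevations[mask]
            points.append(np.stack([r * np.cos(el) * np.cos(az),
                                    r * np.cos(el) * np.sin(az),
                                    r * np.sin(el)], axis=1))
        return np.concatenate(points, axis=0)

    # Example: 8 partial frames of a 32x256 range image with a uniform angle grid.
    frames = np.full((8, 32, 256), 20.0)
    az, el = np.meshgrid(np.linspace(-np.pi, np.pi, 256), np.linspace(-0.3, 0.1, 32))
    cloud = simulate_point_cloud(frames, az, el)
    ```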
  • Publication number: 20240199071
    Abstract: A method for generating a driving assistant model, comprising: computing at least one semantic driving scenario by computing at least one permutation of at least one initial semantic driving scenario; providing the at least one semantic driving scenario to a simulation generator to produce simulated driving data describing at least one simulated driving environment; training a driving assistant model using the simulated driving data to produce a trained driving assistant model; and providing by the trained driving assistant model at least one driving instruction to at least one autonomous driving model while the at least one autonomous driving model is operating.
    Type: Application
    Filed: December 18, 2023
    Publication date: June 20, 2024
    Applicant: Cognata Ltd.
    Inventor: Dan ATSMON
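    The abstract above starts from an initial semantic driving scenario and computes permutations of it before handing the scenarios to a simulation generator. The sketch below shows one simple way to permute a scenario represented as a parameter dictionary; the field names and values are illustrative placeholders, not a Cognata scenario schema.
    ```python
    import itertools

    def permute_scenario(initial_scenario, variations):
        """Expand one initial semantic driving scenario into many permutations by
        substituting each varied field with every allowed value."""
        keys = list(variations)
        scenarios = []
        for combo in itertools.product(*(variations[k] for k in keys)):
            scenario = dict(initial_scenario)
            scenario.update(zip(keys, combo))
            scenarios.append(scenario)
        return scenarios

    # Example: permute weather, time of day, and the cut-in distance of a lead vehicle.
    initial = {"map": "urban_4way", "ego_speed_kph": 50, "weather": "clear",
               "time_of_day": "noon", "cut_in_distance_m": 20}
    variations = {"weather": ["clear", "rain", "fog"],
                  "time_of_day": ["noon", "dusk", "night"],
                  "cut_in_distance_m": [10, 20, 30]}
    scenarios = permute_scenario(initial, variations)   # 27 semantic scenarios
    ```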
  • Publication number: 20240096014
    Abstract: A computer implemented method of creating data for a host vehicle simulation, comprising: in each of a plurality of iterations of a host vehicle simulation using at least one processor for: obtaining from an environment simulation engine a semantic-data dataset representing a plurality of scene objects in a geographical area, each one of the plurality of scene objects comprises at least object location coordinates and a plurality of values of semantically described parameters; creating a 3D visual realistic scene emulating the geographical area according to the dataset; applying at least one noise pattern associated with at least one sensor of a vehicle simulated by the host vehicle simulation engine on the virtual 3D visual realistic scene to create sensory ranging data simulation of the geographical area; converting the sensory ranging data simulation to an enhanced dataset emulating the geographical area, the enhanced dataset comprises a plurality of enhanced scene objects.
    Type: Application
    Filed: November 24, 2023
    Publication date: March 21, 2024
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Guy TSAFRIR, Eran ASA
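    The abstract above applies a noise pattern associated with a ranging sensor to the ideal 3D scene to produce sensory ranging data simulation. The sketch below shows a simple sensor-noise pattern applied to rendered ranges: range-proportional Gaussian noise, random dropouts, and a maximum-range cutoff. The noise model and parameter values are illustrative assumptions only.
    ```python
    import numpy as np

    def apply_ranging_noise(clean_ranges, rel_noise=0.01, dropout=0.02, max_range=120.0, seed=0):
        """Apply a simple noise pattern to ideal ranges rendered from the 3D scene:
        range-proportional Gaussian noise, random dropouts, and a max-range cutoff."""
        rng = np.random.default_rng(seed)
        noisy = clean_ranges + rng.normal(0.0, rel_noise * clean_ranges)
        noisy[rng.random(clean_ranges.shape) < dropout] = np.nan   # missing returns
        noisy[noisy > max_range] = np.nan                          # beyond sensor range
        return noisy

    # Example: ideal ranges rendered from the simulated scene for a 32x256 scan.
    ideal = np.full((32, 256), 35.0)
    simulated_scan = apply_ranging_noise(ideal)
    ```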
  • Publication number: 20230316789
    Abstract: There is provided a method for annotating digital images for training a machine learning model, comprising: generating, from digital images and a plurality of dense depth maps, each associated with one of the digital images, an aligned three-dimensional stacked scene representation of a scene, where the digital images are captured by sensor(s) at the scene, and where each point in the three-dimensional stacked scene is associated with a stability score indicative of a likelihood the point is associated with a static object of the scene, removing from the three-dimensional stacked scene unstable points to produce a static three-dimensional stacked scene, detecting in at least one of the digital images static object(s) according to the static three-dimensional stacked scene, and classifying and annotating the static object(s). The machine learning model may be trained on the images annotated with a ground truth of the static object(s).
    Type: Application
    Filed: September 14, 2021
    Publication date: October 5, 2023
    Applicant: Cognata Ltd.
    Inventors: Ilan TSAFRIR, Guy TSAFRIR, Ehud SPIEGEL, Dan ATSMON
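    The abstract above stacks frames into a three-dimensional scene representation, scores each point for stability, and removes unstable points to keep only static structure for annotation. The sketch below works from already-unprojected per-frame point clouds (rather than images plus dense depth maps) and uses voxel re-occurrence as the stability score; the voxel grid, threshold, and names are illustrative assumptions.
    ```python
    import numpy as np

    def voxel_stability(point_clouds, voxel_size=0.5):
        """Stack per-frame point clouds and score each voxel by the fraction of frames
        in which it is occupied; static structure reappears in every frame, moving
        objects do not."""
        counts = {}
        for cloud in point_clouds:
            occupied = {tuple(v) for v in np.floor(cloud / voxel_size).astype(int)}
            for v in occupied:
                counts[v] = counts.get(v, 0) + 1
        n = len(point_clouds)
        return {v: c / n for v, c in counts.items()}

    def static_scene(point_clouds, voxel_size=0.5, min_stability=0.8):
        """Keep only points whose voxel is occupied in at least `min_stability` of the
        frames, i.e. drop the unstable (likely dynamic) points."""
        scores = voxel_stability(point_clouds, voxel_size)
        kept = []
        for cloud in point_clouds:
            vox = [tuple(v) for v in np.floor(cloud / voxel_size).astype(int)]
            kept.append(cloud[[scores[v] >= min_stability for v in vox]])
        return np.concatenate(kept, axis=0)

    # Example: a static wall seen in every frame plus a point that appears only once.
    wall = np.stack([np.linspace(0, 10, 50), np.full(50, 5.0), np.zeros(50)], axis=1)
    frames = [wall.copy() for _ in range(5)]
    frames[0] = np.vstack([frames[0], [[3.0, 1.0, 0.0]]])   # transient (dynamic) point
    static_points = static_scene(frames)
    ```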
  • Publication number: 20230306680
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: May 29, 2023
    Publication date: September 28, 2023
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Patent number: 11694388
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: July 4, 2023
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20230202511
    Abstract: A system for generating simulated driving scenarios, comprising at least one hardware processor adapted for generating a plurality of simulated driving scenarios, each generated by providing a plurality of input driving objects to a machine learning model, where the machine learning model is trained using another machine learning model, trained to compute a classification indicative of a likelihood that a simulated driving scenario produced by the machine learning model comprises an interesting driving scenario.
    Type: Application
    Filed: May 27, 2021
    Publication date: June 29, 2023
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Ehud SPIEGEL
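    The abstract above trains a scenario-generating model with feedback from a second machine learning model that classifies whether a generated driving scenario is "interesting". The sketch below substitutes a hand-written interestingness score and random hill-climbing for the two trained models, purely to illustrate the shape of that feedback loop; it is not the patented adversarial-style training.
    ```python
    import random

    def interestingness(scenario):
        """Stand-in for a trained classifier scoring how likely a simulated driving
        scenario is to be 'interesting' (e.g. close interactions at speed)."""
        return min(1.0, scenario["ego_speed"] / 30.0) * (1.0 / (1.0 + scenario["min_gap"]))

    def train_generator(generate, init_params, steps=200, seed=0):
        """Hill-climb generator parameters so that generated scenarios score higher on
        the interestingness classifier; a crude stand-in for the training loop."""
        rng = random.Random(seed)
        params, best = dict(init_params), 0.0
        for _ in range(steps):
            candidate = {k: v + rng.gauss(0.0, 0.5) for k, v in params.items()}
            score = sum(interestingness(generate(candidate, rng)) for _ in range(16)) / 16
            if score > best:
                params, best = candidate, score
        return params, best

    # Toy generator: turns parameters into a scenario with some randomness.
    def generate(params, rng):
        return {"ego_speed": max(0.0, params["speed"] + rng.gauss(0, 2)),
                "min_gap": max(0.1, params["gap"] + rng.gauss(0, 1))}

    trained_params, score = train_generator(generate, {"speed": 10.0, "gap": 8.0})
    ```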
  • Publication number: 20220383591
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Application
    Filed: August 11, 2022
    Publication date: December 1, 2022
    Applicant: Cognata Ltd.
    Inventor: Dan ATSMON
  • Patent number: 11417057
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: August 16, 2022
    Assignee: Cognata Ltd.
    Inventor: Dan Atsmon
  • Publication number: 20220188579
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Application
    Filed: March 7, 2022
    Publication date: June 16, 2022
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Publication number: 20220153279
    Abstract: There is provided a system for adapting parameters of a vehicle for reduction of likelihood of an adverse event, comprising: hardware processor(s) executing a code for: performing, for each respective driver of multiple drivers: obtaining an indication of a vehicle driven by the respective driver, obtaining an indication of a certain advanced driver assistance system (ADAS) selected from multiple ADAS for installation in the vehicle, obtaining an environmental profile indicative of a prediction of an environment in which the vehicle with installed ADAS is predicted for driving therein at a future time interval, defining a simulation model in which the vehicle with installed ADAS is driving according to the environment profile, computing a risk of an adverse event during the future time interval by executing the simulation model, and selecting parameter(s) of the vehicle for adaptation thereof according to a predicted likelihood of reducing the risk of the adverse event.
    Type: Application
    Filed: March 17, 2020
    Publication date: May 19, 2022
    Applicant: Cognata Ltd.
    Inventor: Alon ATSMON
  • Patent number: 11270165
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: March 8, 2022
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20210350185
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: July 23, 2021
    Publication date: November 11, 2021
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL