Patents Assigned to Cognata Ltd.
  • Publication number: 20240096014
    Abstract: A computer implemented method of creating data for a host vehicle simulation, comprising: in each of a plurality of iterations of a host vehicle simulation using at least one processor for: obtaining from an environment simulation engine a semantic-data dataset representing a plurality of scene objects in a geographical area, each one of the plurality of scene objects comprises at least object location coordinates and a plurality of values of semantically described parameters; creating a 3D visual realistic scene emulating the geographical area according to the dataset; applying at least one noise pattern associated with at least one sensor of a vehicle simulated by the host vehicle simulation engine on the virtual 3D visual realistic scene to create sensory ranging data simulation of the geographical area; converting the sensory ranging data simulation to an enhanced dataset emulating the geographical area, the enhanced dataset comprises a plurality of enhanced scene objects.
    Type: Application
    Filed: November 24, 2023
    Publication date: March 21, 2024
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Guy TSAFRIR, Eran ASA
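The noise-injection step in publication 20240096014 above can be pictured with a minimal sketch: scene objects carrying location coordinates and semantic parameters are turned into noisy ranging measurements and re-emitted as an enhanced dataset. The Gaussian range noise, dropout rate, and field names below are illustrative assumptions, not details from the filing.

```python
# Hypothetical sketch (not Cognata's implementation): apply a simple sensor noise
# pattern to semantically described scene objects and emit an "enhanced" dataset.
import numpy as np

def simulate_ranging(scene_objects, ego_xy, range_sigma=0.05, dropout_p=0.02, rng=None):
    """scene_objects: list of dicts with 'xy' coordinates and semantic 'params'.
    Returns the objects annotated with noisy range and bearing measurements,
    with some detections randomly dropped to mimic sensor misses."""
    rng = rng or np.random.default_rng(0)
    enhanced = []
    for obj in scene_objects:
        offset = np.asarray(obj["xy"], dtype=float) - np.asarray(ego_xy, dtype=float)
        true_range = float(np.linalg.norm(offset))
        if rng.random() < dropout_p:              # simulated sensor miss
            continue
        noisy_range = true_range + rng.normal(0.0, range_sigma)
        bearing = float(np.arctan2(offset[1], offset[0]))
        enhanced.append({**obj, "range": noisy_range, "bearing": bearing})
    return enhanced

scene = [{"xy": (12.0, 3.5), "params": {"class": "car"}},
         {"xy": (40.0, -7.2), "params": {"class": "pedestrian"}}]
print(simulate_ranging(scene, ego_xy=(0.0, 0.0)))
```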
  • Publication number: 20230316789
    Abstract: There is provided a method for annotating digital images for training a machine learning model, comprising: generating, from digital images and a plurality of dense depth maps, each associated with one of the digital images, an aligned three-dimensional stacked scene representation of a scene, where the digital images are captured by sensor(s) at the scene, and where each point in the three-dimensional stacked scene is associated with a stability score indicative of a likelihood the point is associated with a static object of the scene, removing from the three-dimensional stacked scene unstable points to produce a static three-dimensional stacked scene, detecting in at least one of the digital images static object(s) according to the static three-dimensional stacked scene, and classifying and annotating the static object(s). The machine learning model may be trained on the images annotated with a ground truth of the static object(s).
    Type: Application
    Filed: September 14, 2021
    Publication date: October 5, 2023
    Applicant: Cognata Ltd.
    Inventors: Ilan TSAFRIR, Guy TSAFRIR, Ehud SPIEGEL, Dan ATSMON
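A toy illustration of the stability-score idea in publication 20230316789 above: points of an aligned, stacked scene are scored by how consistently their location is occupied across frames, and low-stability points (moving objects) are removed. The voxel size, threshold, and synthetic point clouds are assumptions made for the sketch.

```python
# Illustrative only: per-voxel stability across stacked frames, keeping points
# that are likely to belong to static structure.
import numpy as np

def static_points(frames, voxel=0.5, stability_thresh=0.8):
    """frames: list of (N_i, 3) point clouds already aligned to one world frame.
    A voxel's stability score is the fraction of frames in which it is occupied;
    points falling in low-stability voxels are discarded."""
    counts = {}
    for pts in frames:
        for key in {tuple(v) for v in np.floor(pts / voxel).astype(int)}:
            counts[key] = counts.get(key, 0) + 1
    n = len(frames)
    stacked = np.vstack(frames)
    keys = [tuple(v) for v in np.floor(stacked / voxel).astype(int)]
    keep = np.array([counts[k] / n >= stability_thresh for k in keys])
    return stacked[keep]

rng = np.random.default_rng(1)
wall = rng.uniform([0, 0, 0], [10, 0.2, 3], size=(200, 3))   # static structure
frames = []
for t in range(5):
    mover = rng.uniform([3 + 2 * t, 5, 0], [4 + 2 * t, 6, 2], (20, 3))  # car sliding along x
    frames.append(np.vstack([wall, mover]))
print(static_points(frames).shape)   # only the wall points survive
```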
  • Publication number: 20230306680
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: May 29, 2023
    Publication date: September 28, 2023
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
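The point-of-view transformation described in publication 20230306680 above (and in the related filings below) amounts to back-projecting each pixel with its depth, moving the resulting 3D points into the target sensor's frame, and re-projecting. A minimal pinhole-camera sketch, with intrinsics and poses assumed for illustration rather than taken from the patent:

```python
# Minimal sketch under assumed pinhole-camera geometry: warp a real image to a
# target sensor pose using its per-pixel depth map.
import numpy as np

def reproject(image, depth, K, T_target_from_source):
    """Warp `image` (H, W, 3) with per-pixel `depth` (H, W) from the source
    camera to a target camera given by a 4x4 rigid transform."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N pixels
    rays = np.linalg.inv(K) @ pix                                       # back-project
    pts = rays * depth.reshape(1, -1)                                   # 3D points, source frame
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_t = (T_target_from_source @ pts_h)[:3]                          # target frame
    proj = K @ pts_t
    uv = (proj[:2] / proj[2]).round().astype(int)
    out = np.zeros_like(image)
    ok = (proj[2] > 0) & (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    out[uv[1, ok], uv[0, ok]] = image.reshape(-1, 3)[ok]                # forward splat
    return out

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
img = np.random.default_rng(0).integers(0, 255, (480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 10.0)
shift_right = np.eye(4); shift_right[0, 3] = -0.5   # target camera 0.5 m to the right
novel_view = reproject(img, depth, K, shift_right)
```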
  • Patent number: 11694388
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: July 4, 2023
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20230202511
    Abstract: A system for generating simulated driving scenarios, comprising at least one hardware processor adapted for generating a plurality of simulated driving scenarios, each generated by providing a plurality of input driving objects to a machine learning model, where the machine learning model is trained using another machine learning model, trained to compute a classification indicative of a likelihood that a simulated driving scenario produced by the machine learning model comprises an interesting driving scenario.
    Type: Application
    Filed: May 27, 2021
    Publication date: June 29, 2023
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Ehud SPIEGEL
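Publication 20230202511 above pairs a scenario generator with a second model trained to score how likely a scenario is to be "interesting". The toy sketch below captures only that coupling; the scenario features, the hand-written classifier, and the random-search update are invented stand-ins, not the patented training procedure.

```python
# Toy sketch: tune a scenario generator so that a separately trained
# "interesting scenario" classifier rates its outputs as interesting more often.
import numpy as np

rng = np.random.default_rng(0)

def classifier(scenario):
    """Stand-in for the pre-trained model: a scenario is 'interesting' when the
    gap to the lead vehicle is small relative to the closing speed."""
    gap, closing_speed = scenario
    return 1.0 / (1.0 + np.exp(gap - 2.0 * closing_speed))

def generator(theta, n=64):
    """Maps latent noise to (gap, closing_speed) pairs; theta scales and offsets."""
    z = rng.normal(size=(n, 2))
    return z * theta[:2] + theta[2:]

theta = np.array([1.0, 1.0, 30.0, 0.0])           # starts out generating mostly boring gaps
for step in range(200):                            # crude random-search update
    candidate = theta + rng.normal(scale=0.5, size=4)
    if classifier(generator(candidate).T).mean() > classifier(generator(theta).T).mean():
        theta = candidate
print("tuned generator parameters:", theta)
```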
  • Publication number: 20220383591
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Application
    Filed: August 11, 2022
    Publication date: December 1, 2022
    Applicant: Cognata Ltd.
    Inventor: Dan ATSMON
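A schematic reading of publication 20220383591 above (and the related filings below): labeled static objects are superimposed over geographic map data to form a model, and an emulated imaging sensor is fed a rendering of that model from the vehicle's point of view. Everything in the sketch (textures, the grid representation, the cropping "renderer") is a simplification assumed for illustration.

```python
# Illustrative only: superimpose labeled objects over map data and feed an
# emulated camera a view of the resulting model.
import numpy as np

TEXTURES = {"building": 0.8, "tree": 0.4, "road": 0.1}   # grey-level stand-ins for synthesized textures

def build_model(map_grid, labeled_objects):
    """map_grid: 2D array of base terrain; labeled_objects: (row, col, label) tuples."""
    model = map_grid.astype(float).copy()
    for r, c, label in labeled_objects:
        model[r, c] = TEXTURES[label]                    # paint the labeled object into the model
    return model

def emulated_camera_feed(model, ego_rc, fov=5):
    """Crop the model around the emulated vehicle: a stand-in for rendering the
    scene from the point of view of a sensor mounted on the vehicle."""
    r, c = ego_rc
    return model[max(r - fov, 0):r + fov + 1, max(c - fov, 0):c + fov + 1]

grid = np.zeros((100, 100))
objects = [(10, 12, "building"), (11, 12, "tree"), (10, 13, "road")]
model = build_model(grid, objects)
frame = emulated_camera_feed(model, ego_rc=(10, 10))
print(frame.shape)
```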
  • Patent number: 11417057
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: August 16, 2022
    Assignee: Cognata Ltd.
    Inventor: Dan Atsmon
  • Publication number: 20220188579
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Application
    Filed: March 7, 2022
    Publication date: June 16, 2022
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
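The fingerprint-matching loop of publication 20220188579 above can be sketched as follows; the choice of summary statistics, the toy simulator, and the random-search update are assumptions made for illustration, not the patented training procedure.

```python
# Condensed sketch: adjust simulation parameters until the simulation's
# statistical fingerprint matches the fingerprint of real driving data.
import numpy as np

rng = np.random.default_rng(0)

def fingerprint(speeds, headways):
    """Summary statistics used here as a stand-in statistical fingerprint."""
    return np.array([speeds.mean(), speeds.std(), headways.mean()])

def simulate(params, n=500):
    """Toy simulation generation model: params control speed and headway scales."""
    return rng.normal(params[0], 2.0, n), rng.exponential(params[1], n)

real_fp = fingerprint(rng.normal(13.0, 2.0, 500), rng.exponential(1.8, 500))

params = np.array([8.0, 1.0])
for _ in range(300):                                        # crude random-search minimization
    candidate = np.maximum(params + rng.normal(scale=0.2, size=2), 0.1)  # keep scales positive
    if (np.linalg.norm(fingerprint(*simulate(candidate)) - real_fp)
            < np.linalg.norm(fingerprint(*simulate(params)) - real_fp)):
        params = candidate
print("fitted simulation parameters:", params)
```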
  • Publication number: 20220153279
    Abstract: There is provided a system for adapting parameters of a vehicle for reduction of likelihood of an adverse event, comprising: hardware processor(s) executing a code for: performing, for each respective driver of multiple drivers: obtaining an indication of a vehicle driven by the respective driver, obtaining an indication of a certain advanced driver assistance system (ADAS) selected from multiple ADAS for installation in the vehicle, obtaining an environmental profile indicative of a prediction of an environment in which the vehicle with installed ADAS is predicted for driving therein at a future time interval, defining a simulation model in which the vehicle with installed ADAS is driving according to the environment profile, computing a risk of an adverse event during the future time interval by executing the simulation model, and selecting parameter(s) of the vehicle for adaptation thereof according to a predicted likelihood of reducing the risk of the adverse event.
    Type: Application
    Filed: March 17, 2020
    Publication date: May 19, 2022
    Applicant: Cognata Ltd.
    Inventor: Alon ATSMON
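Publication 20220153279 above selects vehicle parameters by simulating adverse-event risk under a predicted environment profile. The Monte Carlo stand-in below is only meant to show that selection loop; the risk model and the parameter names are invented for the sketch.

```python
# Hedged illustration: estimate adverse-event risk for candidate vehicle/ADAS
# parameter sets under an environment profile, then pick the lowest-risk set.
import numpy as np

rng = np.random.default_rng(0)

def simulate_risk(speed_limit_kph, following_gap_s, env, n_runs=2000):
    """Monte Carlo stand-in for executing the simulation model: an adverse event
    occurs when reaction distance exceeds the available stopping margin."""
    reaction_t = rng.normal(env["mean_reaction_s"], 0.3, n_runs).clip(0.2)
    speed_ms = speed_limit_kph / 3.6
    margin = following_gap_s * speed_ms * rng.uniform(0.7, 1.0, n_runs)  # degraded conditions shrink margin
    return float((reaction_t * speed_ms > margin).mean())

env_profile = {"mean_reaction_s": 1.1}             # predicted environment at a future time interval
candidates = [(110, 1.5), (110, 2.0), (90, 2.0)]   # (speed cap kph, ADAS following gap s)
risks = {p: simulate_risk(*p, env_profile) for p in candidates}
best = min(risks, key=risks.get)
print(risks, "-> adapt vehicle parameters to", best)
```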
  • Patent number: 11270165
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: March 8, 2022
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20210350185
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: July 23, 2021
    Publication date: November 11, 2021
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Publication number: 20210312244
    Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
    Type: Application
    Filed: October 15, 2019
    Publication date: October 7, 2021
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Patent number: 11100371
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 24, 2021
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
  • Publication number: 20200210779
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
    Type: Application
    Filed: October 7, 2019
    Publication date: July 2, 2020
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
  • Publication number: 20200098172
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Application
    Filed: November 25, 2019
    Publication date: March 26, 2020
    Applicant: Cognata Ltd.
    Inventor: Dan ATSMON
  • Patent number: 10489972
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Grant
    Filed: May 29, 2017
    Date of Patent: November 26, 2019
    Assignee: Cognata Ltd.
    Inventor: Dan Atsmon
  • Patent number: 10460208
    Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: producing a plurality of synthetic training signals, each simulating one of a plurality of signals simultaneously captured from a common training scene by a plurality of sensors, and a plurality of training depth maps each qualifying one of the plurality of synthetic training signals according to the common training scene; training a machine learning model based on the plurality of synthetic training signals and the plurality of training depth maps; using the machine learning model to compute a plurality of computed depth maps based on a plurality of real signals, the plurality of real signals are captured simultaneously from a common physical scene, each of the plurality of real signals are captured by one of the plurality of sensors, each of the plurality of computed depth maps qualifying one of the plurality of real signals.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: October 29, 2019
    Assignee: Cognata Ltd.
    Inventors: Dan Atsmon, Eran Asa
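Patent 10460208 above trains a depth-estimation model on synthetic signal/depth-map pairs, where ground truth is known by construction, and then applies it to real signals. The least-squares toy below mirrors that train-on-synthetic, apply-to-real flow; the disparity model and constants are assumptions, not the patented estimator.

```python
# Simplified stand-in: fit a depth estimator on synthetic (signal, depth) pairs,
# then use it to compute depth for signals treated as real captures.
import numpy as np

rng = np.random.default_rng(0)

def synth_pair(n=4000):
    """Synthetic training signal: per-pixel stereo disparity plus noise, with the
    depth map that qualifies it (depth = baseline * focal / disparity)."""
    depth = rng.uniform(2.0, 50.0, n)
    disparity = (0.5 * 700.0) / depth + rng.normal(0.0, 0.1, n)
    return disparity, depth

disp_train, depth_train = synth_pair()
X = np.stack([1.0 / disp_train, np.ones_like(disp_train)], axis=1)
w, *_ = np.linalg.lstsq(X, depth_train, rcond=None)       # "train" the model on synthetic data

disp_real = (0.5 * 700.0) / np.array([70.0, 35.0, 14.0])  # pretend these were captured by real sensors
X_real = np.stack([1.0 / disp_real, np.ones_like(disp_real)], axis=1)
print("computed depth map:", X_real @ w)                   # approximately [70, 35, 14] metres
```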
  • Publication number: 20190228571
    Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
    Type: Application
    Filed: May 29, 2017
    Publication date: July 25, 2019
    Applicant: Cognata Ltd.
    Inventor: Dan ATSMON