Patents by Inventor Dan Atsmon
Dan Atsmon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11959771
Abstract: A method of enhancing positioning of a moving vehicle based on visual identification of visual objects, comprising obtaining from a location sensor a global positioning and a movement vector of a moving vehicle, capturing one or more images using one or more imaging devices mounted on the moving vehicle to depict at least a partial view of the surroundings of the moving vehicle, analyzing the image(s) to identify one or more visual objects having a known geographical position obtained according to the global positioning from a visual data record associated with a navigation map, analyzing the image(s) to calculate a relative positioning of the moving vehicle with respect to the identified visual object(s), calculating an enhanced positioning of the moving vehicle based on the relative positioning, and applying the enhanced positioning to a navigation system of the moving vehicle.
Type: Grant
Filed: March 31, 2021
Date of Patent: April 16, 2024
Assignee: RED BEND LTD.
Inventors: Ohad Akiva, Dan Atsmon
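The correction idea in this abstract can be illustrated with a minimal sketch: a GPS fix is refined using the known map position of a recognized landmark and the camera-derived offset from the vehicle to it. The function name, 2D coordinates, and the simple averaging are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch, assuming planar (x, y) coordinates and equal trust
# in the GPS and vision estimates; a real system would weight by covariance.

def enhance_position(gps_fix, landmark_map_pos, relative_offset):
    """Return a refined (x, y) estimate of the vehicle position.

    gps_fix          -- (x, y) from the location sensor
    landmark_map_pos -- known (x, y) of the visual object from the map record
    relative_offset  -- (dx, dy) of the landmark relative to the vehicle,
                        estimated from the captured image(s)
    """
    # Position implied by the landmark: landmark minus the vehicle->landmark offset.
    vision_fix = (landmark_map_pos[0] - relative_offset[0],
                  landmark_map_pos[1] - relative_offset[1])
    # Blend the two independent estimates of the same position.
    return ((gps_fix[0] + vision_fix[0]) / 2.0,
            (gps_fix[1] + vision_fix[1]) / 2.0)

print(enhance_position((10.0, 5.0), (20.0, 8.0), (9.0, 3.0)))
```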
-
Publication number: 20240096014
Abstract: A computer implemented method of creating data for a host vehicle simulation, comprising: in each of a plurality of iterations of a host vehicle simulation using at least one processor for: obtaining from an environment simulation engine a semantic-data dataset representing a plurality of scene objects in a geographical area, each one of the plurality of scene objects comprises at least object location coordinates and a plurality of values of semantically described parameters; creating a 3D visual realistic scene emulating the geographical area according to the dataset; applying at least one noise pattern associated with at least one sensor of a vehicle simulated by the host vehicle simulation engine on the virtual 3D visual realistic scene to create sensory ranging data simulation of the geographical area; converting the sensory ranging data simulation to an enhanced dataset emulating the geographical area, the enhanced dataset comprises a plurality of enhanced scene objects.
Type: Application
Filed: November 24, 2023
Publication date: March 21, 2024
Applicant: Cognata Ltd.
Inventors: Dan ATSMON, Guy TSAFRIR, Eran ASA
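The noise-pattern step described here can be sketched as perturbing an ideal simulated range scan with a sensor-specific noise model. The Gaussian noise model, dropout rate, and function name below are illustrative assumptions, not the method claimed in the publication.

```python
import random

# A minimal sketch of applying a per-sensor noise pattern to ideal range
# readings (e.g. emulating a ranging sensor such as lidar). Sigma and the
# dropout probability are placeholder assumptions.

def apply_noise_pattern(ideal_ranges, sigma=0.05, dropout=0.02, seed=0):
    """Perturb ideal range readings with per-beam Gaussian noise and
    occasional dropped returns, yielding simulated sensory ranging data."""
    rng = random.Random(seed)
    noisy = []
    for r in ideal_ranges:
        if rng.random() < dropout:
            noisy.append(None)            # beam returned nothing
        else:
            noisy.append(r + rng.gauss(0.0, sigma))
    return noisy

scan = apply_noise_pattern([5.0, 5.1, 5.2, 20.0])
```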
-
Publication number: 20230316789
Abstract: There is provided a method for annotating digital images for training a machine learning model, comprising: generating, from digital images and a plurality of dense depth maps, each associated with one of the digital images, an aligned three-dimensional stacked scene representation of a scene, where the digital images are captured by sensor(s) at the scene, and where each point in the three-dimensional stacked scene is associated with a stability score indicative of a likelihood the point is associated with a static object of the scene, removing from the three-dimensional stacked scene unstable points to produce a static three-dimensional stacked scene, detecting in at least one of the digital images static object(s) according to the static three-dimensional stacked scene, and classifying and annotating the static object(s). The machine learning model may be trained on the images annotated with a ground truth of the static object(s).
Type: Application
Filed: September 14, 2021
Publication date: October 5, 2023
Applicant: Cognata Ltd.
Inventors: Ilan TSAFRIR, Guy TSAFRIR, Ehud SPIEGEL, Dan ATSMON
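The filtering step described in the abstract can be sketched as thresholding points by their stability score so that only likely-static structure survives for annotation. The threshold value, point representation, and example labels are illustrative assumptions.

```python
# Minimal sketch of removing unstable points from a stacked 3D scene.
# A point's "stability" stands in for the score described in the abstract.

def static_scene(points, threshold=0.8):
    """Keep points likely to belong to static objects (e.g. a wall),
    discarding unstable ones (e.g. a passing pedestrian)."""
    return [p for p in points if p["stability"] >= threshold]

points = [{"id": "wall", "stability": 0.97},
          {"id": "pedestrian", "stability": 0.15},
          {"id": "parked_car", "stability": 0.85}]
print([p["id"] for p in static_scene(points)])
```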
-
Publication number: 20230306680
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, where each of the plurality of real signals is captured by one of a plurality of sensors and each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
Type: Application
Filed: May 29, 2023
Publication date: September 28, 2023
Applicant: Cognata Ltd.
Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
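The point-of-view transformation can be illustrated in miniature: a pixel with known depth is re-projected into a target camera displaced sideways from the original one, shifting by a depth-dependent disparity. The pinhole model, pure-translation baseline, and all parameter values are assumptions for illustration only.

```python
# Hypothetical sketch: shift pixel x-coordinates as seen by a target camera
# translated sideways by `baseline` metres (stereo-style disparity).
# Closer points (smaller depth z) shift more than distant ones.

def reproject(pixels, depths, focal, baseline):
    out = []
    for x, z in zip(pixels, depths):
        disparity = focal * baseline / z   # depth-dependent shift
        out.append(x - disparity)
    return out

print(reproject([100.0, 200.0], [2.0, 10.0], focal=100.0, baseline=0.5))
```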
-
Patent number: 11694388
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, where each of the plurality of real signals is captured by one of a plurality of sensors and each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
Type: Grant
Filed: July 23, 2021
Date of Patent: July 4, 2023
Assignee: Cognata Ltd.
Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
-
Publication number: 20230202511
Abstract: A system for generating simulated driving scenarios, comprising at least one hardware processor adapted for generating a plurality of simulated driving scenarios, each generated by providing a plurality of input driving objects to a machine learning model, where the machine learning model is trained using another machine learning model, trained to compute a classification indicative of a likelihood that a simulated driving scenario produced by the machine learning model comprises an interesting driving scenario.
Type: Application
Filed: May 27, 2021
Publication date: June 29, 2023
Applicant: Cognata Ltd.
Inventors: Dan ATSMON, Ehud SPIEGEL
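The training arrangement described here, one model generating scenarios while a second model scores how likely each is to be interesting, can be shown with a toy sketch. The scalar scenario parameter, the stand-in critic, and the hill-climbing update are illustrative assumptions, not the claimed training scheme.

```python
# Toy sketch: a "generator" (here a single scalar parameter) is nudged to
# raise the score assigned by a second, critic-like model.

def critic(scenario):
    # Assumed stand-in: scenarios near 1.0 count as "interesting".
    return 1.0 - abs(scenario - 1.0)

def train_generator(param=0.0, steps=50, lr=0.1):
    for _ in range(steps):
        # Finite-difference estimate of d(score)/d(param).
        grad = (critic(param + 1e-3) - critic(param - 1e-3)) / 2e-3
        param += lr * grad
    return param

print(round(train_generator(), 2))
```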
-
Patent number: 11651337
Abstract: A method, system, and computer program product for analyzing images of visual objects, such as currency and/or payment cards, captured on a mobile device. The analysis allows determining the authenticity and/or total amount of value of the currency and/or payment cards. The system may be used to verify the authenticity of hard currency, to count the total amount of the currency captured in one or more images, and to convert the currency using real time monetary exchange rates. The mobile device may be used to verify the identity of a credit card user by analyzing one or more images of the card holder's face and/or card holder's signature, card holder's name on the card, card number, and/or card security code.
Type: Grant
Filed: December 9, 2019
Date of Patent: May 16, 2023
Inventors: Alon Atsmon, Dan Atsmon
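The counting-and-conversion step can be sketched in a few lines: total the denominations recognized in the captured image(s) and convert with a current exchange rate. The recognition and authenticity checks themselves are outside this sketch, and the function name and rate value are placeholder assumptions.

```python
# Minimal sketch of totalling recognised note values and converting them.

def total_value(detected_notes, rate=1.0):
    """detected_notes: face values recognised in the image(s).
    rate: target-currency units per source unit (placeholder)."""
    return sum(detected_notes) * rate

print(total_value([20, 20, 50, 5], rate=1.1))
```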
-
Publication number: 20220383591
Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects, and injecting synthetic 3D imaging feed of the realistic model into imaging sensor input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model, where the synthetic 3D imaging feed is generated to depict the realistic model from the point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
Type: Application
Filed: August 11, 2022
Publication date: December 1, 2022
Applicant: Cognata Ltd.
Inventor: Dan ATSMON
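The first stages of the pipeline described here, labeling static objects and pinning them onto the map, can be shown schematically. The object kinds, label table, and map identifier below are illustrative assumptions; real classification and texture synthesis are far richer.

```python
# Schematic sketch: label raw detections, then superimpose them on map data
# to form a scene description from which a synthetic feed could be rendered.

def classify(objects):
    # Stand-in labelling: map raw detection kinds to semantic labels.
    labels = {"tr": "tree", "bl": "building", "sg": "sign"}
    return [{"label": labels[o["kind"]], "pos": o["pos"]} for o in objects]

def superimpose(map_data, labeled):
    return {"map": map_data, "objects": labeled}

scene = superimpose("grid_map_v1",
                    classify([{"kind": "tr", "pos": (3, 4)},
                              {"kind": "sg", "pos": (7, 1)}]))
print(scene["objects"][0]["label"])
```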
-
Publication number: 20220337655
Abstract: A method of handling multimedia data in which packets of a multimedia file from a first computer are received by a second computer. In case a sub-portion of the multimedia file, representative of the multimedia file, was received by the second computer before the entire file was received, that sub-portion is handled by the second computer, which transmits the result to the first computer even though the entire file had not been received at the time of the transmission of the result. Additionally, an image processing server is described, comprising: a network interface adapted to receive packets, a communication manager adapted to manage reception of multimedia files through the input interface and to conclude when a sub-portion of a multimedia file, representative of the multimedia file, has been received, and an image handling unit configured to handle said sub-portions.
Type: Application
Filed: July 3, 2022
Publication date: October 20, 2022
Inventors: Dan ATSMON, Alon ATSMON
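The early-handling idea can be sketched as a receiver that processes a representative sub-portion (here, simply a fixed-size prefix) and produces its result before the whole file has arrived. The prefix-size threshold and the hex-digest "handling" are illustrative assumptions standing in for real image processing.

```python
# Minimal sketch: accumulate packets and handle a representative sub-portion
# as soon as it is complete, without waiting for the full file.

class EarlyHandler:
    def __init__(self, total_size, enough=4):
        self.total_size = total_size  # declared full size of the file
        self.enough = enough          # bytes considered representative
        self.buf = b""
        self.result = None

    def on_packet(self, data):
        self.buf += data
        if self.result is None and len(self.buf) >= self.enough:
            # Handle the sub-portion now; a real server would transmit this
            # result back to the sender immediately.
            self.result = self.buf[: self.enough].hex()
        return self.result

h = EarlyHandler(total_size=10)
h.on_packet(b"ab")          # not yet representative, no result
print(h.on_packet(b"cd"))   # result produced before the remaining bytes arrive
```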
-
Patent number: 11417057
Abstract: A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects, and injecting synthetic 3D imaging feed of the realistic model into imaging sensor input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model, where the synthetic 3D imaging feed is generated to depict the realistic model from the point of view of emulated imaging sensor(s) mounted on the emulated vehicle.
Type: Grant
Filed: November 25, 2019
Date of Patent: August 16, 2022
Assignee: Cognata Ltd.
Inventor: Dan Atsmon
-
Patent number: 11381633
Abstract: A method of handling multimedia data in which packets of a multimedia file from a first computer are received by a second computer. In case a sub-portion of the multimedia file, representative of the multimedia file, was received by the second computer before the entire file was received, that sub-portion is handled by the second computer, which transmits the result to the first computer even though the entire file had not been received at the time of the transmission of the result. Additionally, an image processing server is described, comprising: a network interface adapted to receive packets, a communication manager adapted to manage reception of multimedia files through the input interface and to conclude when a sub-portion of a multimedia file, representative of the multimedia file, has been received, and an image handling unit configured to handle said sub-portions.
Type: Grant
Filed: January 4, 2021
Date of Patent: July 5, 2022
Inventors: Dan Atsmon, Alon Atsmon
-
Publication number: 20220188579
Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
Type: Application
Filed: March 7, 2022
Publication date: June 16, 2022
Applicant: Cognata Ltd.
Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
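The fingerprint-matching objective can be shown in miniature: summarize real and simulated driving data into a statistical fingerprint (here just mean agent speed) and tune one simulator parameter to shrink the gap between the two. The single-parameter simulator, grid search, and all values are illustrative assumptions, not the claimed training procedure.

```python
# Toy sketch of minimising the difference between a simulated and a real
# statistical fingerprint by adjusting one simulation parameter.

def fingerprint(speeds):
    return sum(speeds) / len(speeds)

def simulate(speed_scale):
    # Assumed stand-in simulator: agents at base speeds scaled by a parameter.
    base = [8.0, 10.0, 12.0]
    return [s * speed_scale for s in base]

def fit(real_speeds):
    target = fingerprint(real_speeds)
    # Grid search over the scale parameter for the smallest fingerprint gap.
    best = min((abs(fingerprint(simulate(k / 10)) - target), k / 10)
               for k in range(1, 31))
    return best[1]

print(fit([12.0, 15.0, 18.0]))  # scale that best matches the real fingerprint
```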
-
Patent number: 11270165
Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
Type: Grant
Filed: October 15, 2019
Date of Patent: March 8, 2022
Assignee: Cognata Ltd.
Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
-
Publication number: 20210350185
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, where each of the plurality of real signals is captured by one of a plurality of sensors and each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
Type: Application
Filed: July 23, 2021
Publication date: November 11, 2021
Applicant: Cognata Ltd.
Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
-
Publication number: 20210312244
Abstract: A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment, according to one environment class; and in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
Type: Application
Filed: October 15, 2019
Publication date: October 7, 2021
Applicant: Cognata Ltd.
Inventors: Dan ATSMON, Eran ASA, Ehud SPIEGEL
-
Patent number: 11100371
Abstract: A system for creating synthetic data for testing an autonomous system, comprising at least one hardware processor adapted to execute a code for: using a machine learning model to compute a plurality of depth maps based on a plurality of real signals captured simultaneously from a common physical scene, where each of the plurality of real signals is captured by one of a plurality of sensors and each of the plurality of computed depth maps qualifies one of the plurality of real signals; applying a point of view transformation to the plurality of real signals and the plurality of depth maps to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and providing the synthetic data to at least one testing engine to test an autonomous system comprising the target sensor.
Type: Grant
Filed: October 7, 2019
Date of Patent: August 24, 2021
Assignee: Cognata Ltd.
Inventors: Dan Atsmon, Eran Asa, Ehud Spiegel
-
Publication number: 20210247193
Abstract: A method of enhancing positioning of a moving vehicle based on visual identification of visual objects, comprising obtaining from a location sensor a global positioning and a movement vector of a moving vehicle, capturing one or more images using one or more imaging devices mounted on the moving vehicle to depict at least a partial view of the surroundings of the moving vehicle, analyzing the image(s) to identify one or more visual objects having a known geographical position obtained according to the global positioning from a visual data record associated with a navigation map, analyzing the image(s) to calculate a relative positioning of the moving vehicle with respect to the identified visual object(s), calculating an enhanced positioning of the moving vehicle based on the relative positioning, and applying the enhanced positioning to a navigation system of the moving vehicle.
Type: Application
Filed: March 31, 2021
Publication date: August 12, 2021
Inventors: Ohad AKIVA, Dan ATSMON
-
Publication number: 20210126962
Abstract: A method of handling multimedia data in which packets of a multimedia file from a first computer are received by a second computer. In case a sub-portion of the multimedia file, representative of the multimedia file, was received by the second computer before the entire file was received, that sub-portion is handled by the second computer, which transmits the result to the first computer even though the entire file had not been received at the time of the transmission of the result. Additionally, an image processing server is described, comprising: a network interface adapted to receive packets, a communication manager adapted to manage reception of multimedia files through the input interface and to conclude when a sub-portion of a multimedia file, representative of the multimedia file, has been received, and an image handling unit configured to handle said sub-portions.
Type: Application
Filed: January 4, 2021
Publication date: April 29, 2021
Inventors: Dan ATSMON, Alon ATSMON
-
Patent number: 10969229
Abstract: A method of enhancing positioning of a moving vehicle based on visual identification of visual objects, comprising obtaining from a location sensor a global positioning and a movement vector of a moving vehicle, capturing one or more images using one or more imaging devices mounted on the moving vehicle to depict at least a partial view of the surroundings of the moving vehicle, analyzing the image(s) to identify one or more visual objects having a known geographical position obtained according to the global positioning from a visual data record associated with a navigation map, analyzing the image(s) to calculate a relative positioning of the moving vehicle with respect to the identified visual object(s), calculating an enhanced positioning of the moving vehicle based on the relative positioning, and applying the enhanced positioning to a navigation system of the moving vehicle.
Type: Grant
Filed: January 2, 2018
Date of Patent: April 6, 2021
Assignee: RED BEND LTD.
Inventors: Ohad Akiva, Dan Atsmon
-
Patent number: 10887374
Abstract: A method of handling multimedia data in which packets of a multimedia file from a first computer are received by a second computer. In case a sub-portion of the multimedia file, representative of the multimedia file, was received by the second computer before the entire file was received, that sub-portion is handled by the second computer, which transmits the result to the first computer even though the entire file had not been received at the time of the transmission of the result. Additionally, an image processing server is described, comprising: a network interface adapted to receive packets, a communication manager adapted to manage reception of multimedia files through the input interface and to conclude when a sub-portion of a multimedia file, representative of the multimedia file, has been received, and an image handling unit configured to handle said sub-portions.
Type: Grant
Filed: January 28, 2019
Date of Patent: January 5, 2021
Inventors: Dan Atsmon, Alon Atsmon