Patents by Inventor Dzmitry TSISHKOU

Dzmitry TSISHKOU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11281941
    Abstract: A danger ranking training method comprising training a first deep neural network for generic object recognition within generic images, training a second deep neural network for specific object recognition within images of a specific application, training a third deep neural network for specific scene flow prediction within image sequences of the application, training a fourth deep neural network for potential danger area localization within images or image sequences of the application using at least one human-trained danger tagging method, training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application, and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network as a sequence of transfer learning of the five deep neural networks followed by one or several end-to-end top layers.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: March 22, 2022
    Assignee: IMRA EUROPE S.A.S.
    Inventors: Dzmitry Tsishkou, Rémy Bendahan
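    The abstract above describes stacking five transferred feature extractors and feeding them to an end-to-end top layer that scores danger per pixel. A minimal sketch of that wiring, using hypothetical numpy stand-ins for the five pretrained networks (a real implementation would use deep CNNs; all names here are illustrative, not from the patent):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for the five pretrained networks; each maps an
    # H x W x 3 image to an H x W per-pixel feature map. In the patented
    # method these would be CNNs trained for: generic object recognition,
    # application-specific object recognition, scene-flow prediction,
    # danger-area localization, and object anticipation/prediction.
    def make_feature_extractor(seed):
        w = np.random.default_rng(seed).normal(size=(3,))  # per-channel weights
        def extract(image):
            return image @ w  # H x W feature map
        return extract

    extractors = [make_feature_extractor(s) for s in range(5)]

    def danger_map(image, top_weights, top_bias):
        """End-to-end head: stack the five transferred feature maps and apply
        a per-pixel top layer (here a single logistic unit) to score danger."""
        feats = np.stack([f(image) for f in extractors], axis=-1)  # H x W x 5
        logits = feats @ top_weights + top_bias
        return 1.0 / (1.0 + np.exp(-logits))  # H x W danger probabilities

    image = rng.random((4, 4, 3))
    top_w = rng.normal(size=(5,))
    scores = danger_map(image, top_w, 0.0)
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    print("most dangerous pixel:", (y, x))
    ```

    The highest-scoring pixel plays the role of the claimed "danger pixel"; transfer learning would freeze the five extractors and train only the top layer(s) end to end.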
  • Patent number: 10776664
    Abstract: Some embodiments are directed to a method to reinforce deep neural network learning capacity to classify rare cases, which includes the steps of training a first deep neural network used to classify generic cases of original data into specified labels; localizing discriminative class-specific features within the original data processed through the first deep neural network and mapping the discriminative class-specific features as spatial-probabilistic labels; training a second deep neural network used to classify rare cases of the original data into the spatial-probabilistic labels; and training a combined deep neural network used to classify both generic and rare cases of the original data into primary combined specified and spatial-probabilistic labels.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: September 15, 2020
    Assignee: IMRA EUROPE S.A.S.
    Inventors: Dzmitry Tsishkou, Rémy Bendahan
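    The key step in the abstract above is turning localized discriminative features into "spatial-probabilistic labels" and concatenating them with the specified labels for the combined network. A minimal sketch of that label construction, assuming a CAM-style localization (the patent does not prescribe this exact mechanism; all names are illustrative):

    ```python
    import numpy as np

    def class_activation_map(features, class_weights):
        """Localize discriminative class-specific features (CAM-style):
        weight each feature channel by the class's classifier weight,
        sum, and normalize to a spatial probability map."""
        cam = np.tensordot(features, class_weights, axes=([-1], [0]))  # H x W
        cam -= cam.min()
        total = cam.sum()
        return cam / total if total > 0 else np.full_like(cam, 1.0 / cam.size)

    def spatial_probabilistic_label(cam, grid=2):
        """Pool the normalized map into a coarse grid of cell probabilities:
        the 'spatial-probabilistic label' used to train the rare-case network."""
        h, w = cam.shape
        cells = cam.reshape(grid, h // grid, grid, w // grid).sum(axis=(1, 3))
        return cells.flatten()  # length grid*grid, sums to ~1

    rng = np.random.default_rng(1)
    features = rng.random((4, 4, 8))     # H x W x C feature maps from network 1
    weights = rng.normal(size=(8,))      # classifier weights for one label
    cam = class_activation_map(features, weights)
    spatial = spatial_probabilistic_label(cam)

    one_hot = np.array([0.0, 1.0, 0.0])  # specified label (3 generic classes)
    combined = np.concatenate([one_hot, spatial])  # combined training target
    ```

    The combined network would then be trained against targets of this concatenated form, covering both generic and rare cases in one label space.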
  • Patent number: 10663594
    Abstract: Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; transforming the 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space function of the intensity data of each point, a height 2D space function of the elevation data of each point, and a distance 2D space function of the distance data between each point of the 3D point cloud and the view point, defining a single multi-channel 2D space.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: May 26, 2020
    Assignee: IMRA EUROPE S.A.S.
    Inventors: Dzmitry Tsishkou, Frédéric Abad, Rémy Bendahan
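    The abstract above maps each point's intensity, elevation, and distance into three 2D channels of a single image. A minimal sketch, assuming a spherical projection from the view point (one common choice; the patent leaves the exact 2D mapping open, and all names here are illustrative):

    ```python
    import numpy as np

    def point_cloud_to_channels(points, intensity, view_point, H=32, W=32):
        """Project a 3D point cloud onto a single multi-channel 2D space:
        channel 0 = intensity, channel 1 = height (elevation, z),
        channel 2 = distance from each point to the sensor view point."""
        rel = points - view_point
        dist = np.linalg.norm(rel, axis=1)
        # Spherical projection: azimuth -> columns, elevation angle -> rows.
        az = np.arctan2(rel[:, 1], rel[:, 0])
        el = np.arcsin(rel[:, 2] / np.maximum(dist, 1e-9))
        cols = ((az + np.pi) / (2 * np.pi) * (W - 1)).astype(int)
        rows = ((el + np.pi / 2) / np.pi * (H - 1)).astype(int)
        img = np.zeros((H, W, 3))
        img[rows, cols, 0] = intensity      # intensity 2D space
        img[rows, cols, 1] = points[:, 2]   # height 2D space
        img[rows, cols, 2] = dist           # distance 2D space
        return img

    rng = np.random.default_rng(2)
    pts = rng.uniform(-10, 10, size=(500, 3))
    inten = rng.random(500)
    image = point_cloud_to_channels(pts, inten, view_point=np.zeros(3))
    ```

    The resulting H x W x 3 array is the "single multi-channel 2D space", suitable as direct input to a standard 2D convolutional network.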
  • Publication number: 20190180144
    Abstract: A danger ranking training method comprising training a first deep neural network for generic object recognition within generic images, training a second deep neural network for specific object recognition within images of a specific application, training a third deep neural network for specific scene flow prediction within image sequences of the application, training a fourth deep neural network for potential danger area localization within images or image sequences of the application using at least one human-trained danger tagging method, training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application, and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network as a sequence of transfer learning of the five deep neural networks followed by one or several end-to-end top layers.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 13, 2019
    Inventors: Dzmitry TSISHKOU, Rémy BENDAHAN
  • Publication number: 20190122077
    Abstract: Some embodiments are directed to a method to reinforce deep neural network learning capacity to classify rare cases, which includes the steps of training a first deep neural network used to classify generic cases of original data into specified labels; localizing discriminative class-specific features within the original data processed through the first deep neural network and mapping the discriminative class-specific features as spatial-probabilistic labels; training a second deep neural network used to classify rare cases of the original data into the spatial-probabilistic labels; and training a combined deep neural network used to classify both generic and rare cases of the original data into primary combined specified and spatial-probabilistic labels.
    Type: Application
    Filed: March 15, 2017
    Publication date: April 25, 2019
    Applicant: IMRA EUROPE S.A.S.
    Inventors: Dzmitry TSISHKOU, Rémy BENDAHAN
  • Publication number: 20190086546
    Abstract: Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; transforming the 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space function of the intensity data of each point, a height 2D space function of the elevation data of each point, and a distance 2D space function of the distance data between each point of the 3D point cloud and the view point, defining a single multi-channel 2D space.
    Type: Application
    Filed: March 14, 2017
    Publication date: March 21, 2019
    Inventors: Dzmitry TSISHKOU, Frédéric ABAD, Rémy BENDAHAN