Patents by Inventor Rémy BENDAHAN

Rémy BENDAHAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11722892
    Abstract: A visual light communication emitter for a vehicle, arranged to communicate a status of the vehicle, includes a first light emitter arranged to emit flash light which is modulated at a first target frequency in a dedicated non-visible spectrum, and a second light emitter arranged to emit flash light which is modulated at a second target frequency in the dedicated non-visible spectrum. A difference between the first target frequency and the second target frequency is predetermined, so as to authenticate the status of the vehicle.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: August 8, 2023
    Assignee: AISIN CORPORATION
    Inventors: Remy Bendahan, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
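The frequency-difference authentication described in this abstract can be sketched as a simple check; the concrete difference and tolerance values below are assumptions for illustration, not figures from the patent.

```python
# Hypothetical sketch: authenticate a vehicle status from the measured
# modulation frequencies of the two emitters by checking that their
# difference matches a predetermined value, as the abstract describes.

EXPECTED_DIFF_HZ = 200.0   # assumed predetermined frequency difference
TOLERANCE_HZ = 5.0         # assumed measurement tolerance

def authenticate_status(freq1_hz: float, freq2_hz: float) -> bool:
    """Return True if the two emitter frequencies differ by the
    predetermined amount, within tolerance."""
    return abs(abs(freq1_hz - freq2_hz) - EXPECTED_DIFF_HZ) <= TOLERANCE_HZ
```

A receiver that measures the two flash frequencies would accept the status only when the difference check passes, which is what makes the pairing hard to spoof with a single stray light source.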
  • Publication number: 20230154198
    Abstract: A computer-implemented method is provided for multimodal egocentric future prediction in the driving environment of an autonomous vehicle (AV) or a vehicle with an advanced driver assistance system (ADAS), the vehicle being equipped with a camera and comprising a trained reachability prior deep neural network (RPN), a trained reachability transfer deep neural network (RTN), and a trained future localization deep neural network (FLN) and/or a trained future emergence prediction deep neural network (EPN).
    Type: Application
    Filed: May 28, 2021
    Publication date: May 18, 2023
    Inventors: Osama MAKANSI, Cicek ÖZGÜN, Thomas BROX, Kévin BUCHICCHIO, Frédéric ABAD, Rémy BENDAHAN
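The pipeline named in this abstract (RPN → RTN → FLN) can be illustrated with stand-in callables; the functions below are structural placeholders with assumed shapes, not the patented trained models.

```python
import numpy as np

def rpn(image):
    """Stand-in reachability prior: where moving objects can
    generically appear (constant prior here)."""
    return np.full(image.shape[:2], 0.5)

def rtn(prior, scene_features):
    """Stand-in reachability transfer: adapt the generic prior
    to the observed scene."""
    return prior * scene_features

def fln(reachability, past_track):
    """Stand-in future localization: pick the most reachable pixel
    (past_track is unused in this stub)."""
    return np.unravel_index(np.argmax(reachability), reachability.shape)

def predict_future(image, scene_features, past_track):
    # Compose the three stages as the abstract names them
    return fln(rtn(rpn(image), scene_features), past_track)
```

The point of the sketch is only the composition: a generic reachability prior is specialized to the scene, and the localization stage consumes the result.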
  • Publication number: 20230131815
    Abstract: A computer-implemented method is provided for predicting multiple future trajectories of moving objects of interest in the environment of a monitoring device, the device comprising a memory augmented neural network (MANN) that includes at least one trained encoder deep neural network, one trained decoder deep neural network, and a key-value database storing keys corresponding to past trajectory encodings and associated values corresponding to future trajectory encodings.
    Type: Application
    Filed: May 28, 2021
    Publication date: April 27, 2023
    Inventors: Federico BECATTINI, Francesco MARCHETTI, Lorenzo SEIDENARI, Alberto DEL BIMBO, Frédéric ABAD, Kévin BUCHICCHIO, Rémy BENDAHAN
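The key-value retrieval at the heart of this abstract can be sketched as follows; the encoder and decoder networks are omitted, and the cosine-similarity lookup is an assumed choice rather than the patented one.

```python
import numpy as np

# Hypothetical sketch of the memory step: keys are past-trajectory
# encodings, values are the associated future-trajectory encodings.
# Reading k > 1 entries yields multiple future hypotheses.

class TrajectoryMemory:
    def __init__(self):
        self.keys = []    # past-trajectory encodings
        self.values = []  # associated future-trajectory encodings

    def write(self, past_enc, future_enc):
        self.keys.append(np.asarray(past_enc, dtype=float))
        self.values.append(np.asarray(future_enc, dtype=float))

    def read(self, query, k=2):
        """Return the k future encodings whose keys are most similar
        to the query encoding (cosine similarity, an assumption)."""
        q = np.asarray(query, dtype=float)
        sims = [q @ key / (np.linalg.norm(q) * np.linalg.norm(key))
                for key in self.keys]
        order = np.argsort(sims)[::-1][:k]
        return [self.values[i] for i in order]
```

A decoder network would then turn each retrieved future encoding back into a concrete trajectory, one per hypothesis.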
  • Patent number: 11496215
    Abstract: Clothing equipment having a visual light communication emitter arranged to communicate a status of the clothing equipment, includes a light emitter arranged to emit flash light which is modulated at at least one target frequency in a dedicated non-visible spectrum, the light emitter including three fixed emitting portions distant from each other by predetermined distances, so as to authenticate the status of the clothing equipment.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: November 8, 2022
    Assignee: AISIN CORPORATION
    Inventors: Remy Bendahan, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
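The distance-based authentication this abstract describes can be sketched as a pairwise-distance check over the three detected emitting portions; the reference distances and tolerance below are assumed values.

```python
import math

# Hypothetical sketch: authenticate the clothing equipment by verifying
# that the three detected emitting portions sit at the predetermined
# pairwise distances from each other.

REFERENCE_DISTANCES_M = (0.10, 0.15, 0.20)  # assumed predetermined distances
TOLERANCE_M = 0.01                          # assumed detection tolerance

def authenticate_equipment(p1, p2, p3) -> bool:
    """p1..p3 are (x, y) positions of the detected emitting portions."""
    dists = sorted(math.dist(a, b) for a, b in ((p1, p2), (p2, p3), (p1, p3)))
    return all(abs(d - r) <= TOLERANCE_M
               for d, r in zip(dists, sorted(REFERENCE_DISTANCES_M)))
```

Sorting both distance lists makes the check independent of which portion is detected first.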
  • Patent number: 11281941
    Abstract: A danger ranking training method comprising: training a first deep neural network for generic object recognition within generic images; training a second deep neural network for specific object recognition within images of a specific application; training a third deep neural network for specific scene flow prediction within image sequences of the application; training a fourth deep neural network for potential danger area localization within images or image sequences of the application using at least one human-trained danger tagging method; training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application; and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network built as a sequence of transfer learning over the five deep neural networks followed by one or several end-to-end top layers.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: March 22, 2022
    Assignee: IMRA EUROPE S.A.S.
    Inventors: Dzmitry Tsishkou, Rémy Bendahan
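The staged composition in this abstract, five pretrained networks fused by end-to-end top layers, can be illustrated with stand-in feature extractors; everything below is a structural sketch with assumed shapes, not the trained models.

```python
import numpy as np

def make_stub_network(seed):
    """Stand-in for one trained DNN: a fixed per-pixel linear feature."""
    w = np.random.default_rng(seed).standard_normal(3)
    def features(image):
        # (H, W, 3) image -> (H, W) feature map
        return np.tensordot(image, w, axes=([-1], [0]))
    return features

# Five stubs standing in for the five trained networks of the abstract
# (generic objects, specific objects, scene flow, danger areas, anticipation).
networks = [make_stub_network(s) for s in range(5)]

def danger_map(image, top_weights):
    """Fuse the five per-pixel feature maps with end-to-end top layers
    (a single linear layer here, an assumption)."""
    stacked = np.stack([f(image) for f in networks], axis=-1)  # (H, W, 5)
    return stacked @ top_weights                               # (H, W)

def danger_pixel(image, top_weights):
    """The highest-scoring pixel of the fused map is the danger pixel."""
    dm = danger_map(image, top_weights)
    return np.unravel_index(np.argmax(dm), dm.shape)
```

The essential idea carried over from the abstract is that the five networks act as frozen feature stages and only the fused top layers produce the final per-pixel ranking.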
  • Publication number: 20210399802
    Abstract: Clothing equipment having a visual light communication emitter arranged to communicate a status of the clothing equipment, includes a light emitter arranged to emit flash light which is modulated at at least one target frequency in a dedicated non-visible spectrum, the light emitter including three fixed emitting portions distant from each other by predetermined distances, so as to authenticate the status of the clothing equipment.
    Type: Application
    Filed: March 2, 2021
    Publication date: December 23, 2021
    Applicant: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Remy BENDAHAN, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
  • Publication number: 20210400476
    Abstract: A visual light communication emitter for a vehicle, arranged to communicate a status of the vehicle, includes a first light emitter arranged to emit flash light which is modulated at a first target frequency in a dedicated non-visible spectrum, and a second light emitter arranged to emit flash light which is modulated at a second target frequency in the dedicated non-visible spectrum. A difference between the first target frequency and the second target frequency is predetermined, so as to authenticate the status of the vehicle.
    Type: Application
    Filed: March 2, 2021
    Publication date: December 23, 2021
    Applicant: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Remy BENDAHAN, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
  • Patent number: 10776664
    Abstract: Some embodiments are directed to a method to reinforce deep neural network learning capacity to classify rare cases, which includes the steps of training a first deep neural network used to classify generic cases of original data into specified labels; localizing discriminative class-specific features within the original data processed through the first deep neural network and mapping the discriminative class-specific features as spatial-probabilistic labels; training a second deep neural network used to classify rare cases of the original data into the spatial-probabilistic labels; and training a combined deep neural network used to classify both generic and rare cases of the original data into primary combined specified and spatial-probabilistic labels.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: September 15, 2020
    Assignee: IMRA EUROPE S.A.S.
    Inventors: Dzmitry Tsishkou, Rémy Bendahan
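The step that maps discriminative features to spatial-probabilistic labels can be sketched as follows; the coarse grid and the probability normalization are assumptions, since the abstract does not fix them.

```python
import numpy as np

# Hypothetical sketch: take a per-pixel activation map produced by the
# first network, localize its discriminative peak, and express it as a
# spatial-probabilistic label (coarse grid cell plus confidence).

def spatial_probabilistic_label(activation_map, grid=4):
    """Map an HxW activation map to (cell_row, cell_col, probability)."""
    a = np.asarray(activation_map, dtype=float)
    h, w = a.shape
    r, c = np.unravel_index(np.argmax(a), a.shape)
    total = a.sum()
    prob = float(a[r, c] / total) if total > 0 else 0.0
    return (r * grid // h, c * grid // w, prob)
```

Labels of this form give the second network a spatial target ("where the discriminative evidence sits, and how confidently") rather than only a class name.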
  • Patent number: 10663594
    Abstract: Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; transforming 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space function of the intensity data of each point, a height 2D space function of an elevation data of each point, and a distance 2D space function of a distance data between each point of 3D point cloud and the view point, defining a single multi-channel 2D space.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: May 26, 2020
    Assignee: IMRA EUROPE S.A.S.
    Inventors: Dzmitry Tsishkou, Frédéric Abad, Rémy Bendahan
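The three-channel transformation this abstract describes can be sketched directly; the orthographic projection and image size below are assumptions, since the abstract does not fix a particular projection.

```python
import numpy as np

def point_cloud_to_channels(points, intensities, viewpoint, size=64):
    """points: (N, 3) x, y, z coordinates; intensities: (N,); viewpoint: (3,).
    Returns a (size, size, 3) image whose channels are intensity,
    height (elevation), and distance to the viewpoint."""
    pts = np.asarray(points, dtype=float)
    rel = pts - np.asarray(viewpoint, dtype=float)
    dist = np.linalg.norm(rel, axis=1)
    # Orthographic projection onto the x-y plane (an assumption)
    lo = pts[:, :2].min(axis=0)
    span = np.ptp(pts[:, :2], axis=0)
    span[span == 0] = 1.0
    uv = np.rint((pts[:, :2] - lo) / span * (size - 1)).astype(int)
    out = np.zeros((size, size, 3))
    for (u, v), inten, z, d in zip(uv, intensities, pts[:, 2], dist):
        out[v, u] = (inten, z, d)  # one pixel per projected point
    return out
```

Because the three channels are pixel-aligned, the result is the "single multi-channel 2D space" of the abstract and can be fed to an ordinary image-based network.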
  • Publication number: 20190180144
    Abstract: A danger ranking training method comprising: training a first deep neural network for generic object recognition within generic images; training a second deep neural network for specific object recognition within images of a specific application; training a third deep neural network for specific scene flow prediction within image sequences of the application; training a fourth deep neural network for potential danger area localization within images or image sequences of the application using at least one human-trained danger tagging method; training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application; and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network built as a sequence of transfer learning over the five deep neural networks followed by one or several end-to-end top layers.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 13, 2019
    Inventors: Dzmitry TSISHKOU, Rémy BENDAHAN
  • Publication number: 20190122077
    Abstract: Some embodiments are directed to a method to reinforce deep neural network learning capacity to classify rare cases, which includes the steps of training a first deep neural network used to classify generic cases of original data into specified labels; localizing discriminative class-specific features within the original data processed through the first deep neural network and mapping the discriminative class-specific features as spatial-probabilistic labels; training a second deep neural network used to classify rare cases of the original data into the spatial-probabilistic labels; and training a combined deep neural network used to classify both generic and rare cases of the original data into primary combined specified and spatial-probabilistic labels.
    Type: Application
    Filed: March 15, 2017
    Publication date: April 25, 2019
    Applicant: IMRA Europe S.A.S.
    Inventors: Dzmitry TSISHKOU, Rémy BENDAHAN
  • Publication number: 20190086546
    Abstract: Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; transforming 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space function of the intensity data of each point, a height 2D space function of an elevation data of each point, and a distance 2D space function of a distance data between each point of 3D point cloud and the view point, defining a single multi-channel 2D space.
    Type: Application
    Filed: March 14, 2017
    Publication date: March 21, 2019
    Inventors: Dzmitry TSISHKOU, Frédéric ABAD, Rémy BENDAHAN
  • Patent number: 8798376
    Abstract: A method for detecting contour points of an object in an image obtained by a video camera comprising the steps of: (i) selecting a scan line of the image; (ii) identifying minimum intensity differences called transitions between pixels of the selected scan line; (iii) identifying plateaus at both ends of the identified transitions; (iv) determining contour points of the object between the identified plateaus; (v) generating a descriptor of the contour in one dimension; and (vi) beginning again with step (i) by selecting another scan line of the image according to a predefined order.
    Type: Grant
    Filed: September 14, 2010
    Date of Patent: August 5, 2014
    Assignee: IMRA Europe S.A.S.
    Inventors: Remy Bendahan, Sylvain Bougnoux, Frederic Abad, Dzmitry Tsishkou, Christophe Vestri, Sebastien Wybo
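Steps (i) through (iv) of this abstract can be sketched on a single scan line; all thresholds below are assumed values, and the descriptor step (v) is omitted.

```python
# Hypothetical sketch: detect an intensity transition along one scan line,
# require flat plateaus on both sides of it, and keep the transition index
# as a contour point.

MIN_STEP = 30      # assumed minimum intensity difference for a transition
PLATEAU_TOL = 5    # assumed maximum variation inside a plateau
PLATEAU_LEN = 3    # assumed plateau length in pixels

def is_plateau(segment):
    return max(segment) - min(segment) <= PLATEAU_TOL

def contour_points_on_line(row):
    """row: pixel intensities along one scan line; returns the indices
    of contour points found between plateaus."""
    points = []
    for i in range(PLATEAU_LEN, len(row) - PLATEAU_LEN + 1):
        if abs(row[i] - row[i - 1]) < MIN_STEP:
            continue  # no transition between pixels i-1 and i
        left = row[i - PLATEAU_LEN:i]      # plateau before the transition
        right = row[i:i + PLATEAU_LEN]     # plateau after the transition
        if is_plateau(left) and is_plateau(right):
            points.append(i)
    return points
```

Step (vi) of the abstract is then just an outer loop calling this function on each scan line in the predefined order.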
  • Publication number: 20110069887
    Abstract: A method for detecting contour points of an object in an image obtained by a video camera comprising the steps of: (i) selecting a scan line of the image; (ii) identifying minimum intensity differences called transitions between pixels of the selected scan line; (iii) identifying plateaus at both ends of the identified transitions; (iv) determining contour points of the object between the identified plateaus; (v) generating a descriptor of the contour in one dimension; and (vi) beginning again with step (i) by selecting another scan line of the image according to a predefined order.
    Type: Application
    Filed: September 14, 2010
    Publication date: March 24, 2011
    Inventors: Remy BENDAHAN, Sylvain Bougnoux, Frederic Abad, Dzmitry Tsishkou, Christophe Vestri, Sebastien Wybo