Patents by Inventor Rémy BENDAHAN
Rémy BENDAHAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11722892
Abstract: A visual light communication emitter for a vehicle, arranged to communicate a status of the vehicle, includes a first light emitter arranged to emit flash light which is modulated at a first target frequency in a dedicated non-visible spectrum, and a second light emitter arranged to emit flash light which is modulated at a second target frequency in the dedicated non-visible spectrum. A difference between the first target frequency and the second target frequency is predetermined, so as to authenticate the status of the vehicle.
Type: Grant
Filed: March 2, 2021
Date of Patent: August 8, 2023
Assignee: AISIN CORPORATION
Inventors: Remy Bendahan, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
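The frequency-difference check described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, tolerance, and frequency values are all assumptions.

```python
# Hypothetical sketch of the check described in the abstract: the receiver
# authenticates the vehicle's status only when the gap between the two
# measured modulation frequencies matches a predetermined value.

def authenticate_status(f1_hz: float, f2_hz: float,
                        expected_diff_hz: float,
                        tolerance_hz: float = 5.0) -> bool:
    """Return True when |f1 - f2| matches the predetermined difference."""
    return abs(abs(f1_hz - f2_hz) - expected_diff_hz) <= tolerance_hz

# Example: emitters modulated at 1000 Hz and 1200 Hz, expected gap 200 Hz.
print(authenticate_status(1000.0, 1200.0, 200.0))  # True
print(authenticate_status(1000.0, 1300.0, 200.0))  # False
```

Because only the difference between the two frequencies is checked, the scheme tolerates drift that shifts both emitters together.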
-
Publication number: 20230154198
Abstract: A computer-implemented method is provided for multimodal egocentric future prediction in the driving environment of an autonomous vehicle (AV) or an advanced driver assistance system (ADAS). The system is equipped with a camera and comprises a trained reachability prior deep neural network (RPN), a trained reachability transfer deep neural network (RTN), and a trained future localization deep neural network (FLN) and/or a trained future emergence prediction deep neural network (EPN).
Type: Application
Filed: May 28, 2021
Publication date: May 18, 2023
Inventors: Osama MAKANSI, Cicek ÖZGÜN, Thomas BROX, Kévin BUCHICCHIO, Frédéric ABAD, Rémy BENDAHAN
-
Publication number: 20230131815
Abstract: A computer-implemented method is provided for predicting multiple future trajectories of moving objects of interest in the environment of a monitoring device. The device comprises a memory augmented neural network (MANN) comprising at least one trained encoder deep neural network, one trained decoder deep neural network, and a key-value database storing keys corresponding to past trajectory encodings and associated values corresponding to associated future trajectory encodings.
Type: Application
Filed: May 28, 2021
Publication date: April 27, 2023
Inventors: Federico BECATTINI, Francesco MARCHETTI, Lorenzo SEIDENARI, Alberto DEL BIMBO, Frédéric ABAD, Kévin BUCHICCHIO, Rémy BENDAHAN
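The key-value lookup at the heart of this abstract can be sketched as below. The trained encoder and decoder networks of the patent are replaced here by precomputed vectors, and the retrieval rule (k nearest keys by Euclidean distance) is an illustrative assumption.

```python
import numpy as np

# Minimal sketch of the memory-augmented lookup: a key-value store maps
# past-trajectory encodings (keys) to future-trajectory encodings (values).
# Given a new past encoding, the k nearest keys yield k candidate futures,
# which is how multiple future trajectories can be predicted at once.

def retrieve_futures(query_key, keys, values, k=2):
    """Return the k future encodings whose keys are closest to the query."""
    dists = np.linalg.norm(keys - query_key, axis=1)
    nearest = np.argsort(dists)[:k]
    return values[nearest]

keys = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])  # past encodings
values = np.array([[10.0], [11.0], [50.0]])            # future encodings
print(retrieve_futures(np.array([0.9, 0.1]), keys, values, k=2))
```

In the published method a decoder network would turn each retrieved encoding back into a full future trajectory; here the values stand in for those encodings.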
-
Patent number: 11496215
Abstract: Clothing equipment having a visual light communication emitter arranged to communicate a status of the clothing equipment includes a light emitter arranged to emit flash light which is modulated at at least one target frequency in a dedicated non-visible spectrum, the light emitter including three fixed emitting portions distant from each other by predetermined distances, so as to authenticate the status of the clothing equipment.
Type: Grant
Filed: March 2, 2021
Date of Patent: November 8, 2022
Assignee: AISIN CORPORATION
Inventors: Remy Bendahan, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
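The geometric check implied by this abstract, comparing the observed pairwise distances of the three emitting portions against predetermined ones, can be sketched as follows. The names, tolerance, and the 3-4-5 layout are assumptions for illustration, not taken from the patent.

```python
from itertools import combinations
import math

# Illustrative sketch: the three fixed emitting portions sit at
# predetermined mutual distances, so a receiver can authenticate the
# equipment by comparing observed pairwise distances with expected ones.

def pairwise_distances(points):
    """Sorted distances between every pair of detected emitter positions."""
    return sorted(math.dist(a, b) for a, b in combinations(points, 2))

def authenticate(points, expected, tol=0.05):
    observed = pairwise_distances(points)
    return all(abs(o - e) <= tol for o, e in zip(observed, sorted(expected)))

# Emitters at the corners of a 3-4-5 right triangle.
pts = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]
print(authenticate(pts, expected=[3.0, 4.0, 5.0]))  # True
```

Sorting both distance lists makes the check independent of which emitting portion is detected first.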
-
Patent number: 11281941
Abstract: A danger ranking training method comprising: training a first deep neural network for generic object recognition within generic images; training a second deep neural network for specific object recognition within images of a specific application; training a third deep neural network for specific scene flow prediction within image sequences of the application; training a fourth deep neural network for localization of potential danger areas within images or image sequences of the application using at least one human-trained danger tagging method; training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application; and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network built as a sequence of transfer learning of the five deep neural networks followed by one or several end-to-end top layers.
Type: Grant
Filed: December 7, 2018
Date of Patent: March 22, 2022
Assignee: IMRA EUROPE S.A.S.
Inventors: Dzmitry Tsishkou, Rémy Bendahan
-
Publication number: 20210399802
Abstract: Clothing equipment having a visual light communication emitter arranged to communicate a status of the clothing equipment includes a light emitter arranged to emit flash light which is modulated at at least one target frequency in a dedicated non-visible spectrum, the light emitter including three fixed emitting portions distant from each other by predetermined distances, so as to authenticate the status of the clothing equipment.
Type: Application
Filed: March 2, 2021
Publication date: December 23, 2021
Applicant: AISIN SEIKI KABUSHIKI KAISHA
Inventors: Remy BENDAHAN, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
-
Publication number: 20210400476
Abstract: A visual light communication emitter for a vehicle, arranged to communicate a status of the vehicle, includes a first light emitter arranged to emit flash light which is modulated at a first target frequency in a dedicated non-visible spectrum, and a second light emitter arranged to emit flash light which is modulated at a second target frequency in the dedicated non-visible spectrum. A difference between the first target frequency and the second target frequency is predetermined, so as to authenticate the status of the vehicle.
Type: Application
Filed: March 2, 2021
Publication date: December 23, 2021
Applicant: AISIN SEIKI KABUSHIKI KAISHA
Inventors: Remy BENDAHAN, Sylvain Bougnoux, Yuta Nakano, Takeshi Fujita
-
Patent number: 10776664
Abstract: Some embodiments are directed to a method to reinforce deep neural network learning capacity to classify rare cases, which includes the steps of: training a first deep neural network used to classify generic cases of original data into specified labels; localizing discriminative class-specific features within the original data processed through the first deep neural network and mapping the discriminative class-specific features as spatial-probabilistic labels; training a second deep neural network used to classify rare cases of the original data into the spatial-probabilistic labels; and training a combined deep neural network used to classify both generic and rare cases of the original data into primary combined specified and spatial-probabilistic labels.
Type: Grant
Filed: March 15, 2017
Date of Patent: September 15, 2020
Assignee: IMRA EUROPE S.A.S.
Inventors: Dzmitry Tsishkou, Rémy Bendahan
-
Patent number: 10663594
Abstract: Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; and transforming the 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space as a function of the intensity data of each point, a height 2D space as a function of the elevation data of each point, and a distance 2D space as a function of the distance data between each point of the 3D point cloud and the view point, defining a single multi-channel 2D space.
Type: Grant
Filed: March 14, 2017
Date of Patent: May 26, 2020
Assignee: IMRA EUROPE S.A.S.
Inventors: Dzmitry Tsishkou, Frédéric Abad, Rémy Bendahan
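The three-channel transform described in this abstract can be sketched as below. The grid size, the simple top-down binning, and max aggregation are illustrative assumptions; the patent does not prescribe this particular projection.

```python
import numpy as np

# Rough sketch of the transform: each 3D point (x, y, z) with an intensity
# reading is binned into a 2D cell, and three channels store intensity,
# height (elevation, z), and distance to the view point, forming one
# multi-channel 2D space suitable for 2D convolutional processing.

def cloud_to_channels(points, intensities, viewpoint, grid=(4, 4), extent=4.0):
    img = np.zeros((3, *grid))                      # [intensity, height, distance]
    cell = extent / np.array(grid)
    dists = np.linalg.norm(points - viewpoint, axis=1)
    for p, i, d in zip(points, intensities, dists):
        u, v = (p[:2] // cell).astype(int)
        if 0 <= u < grid[0] and 0 <= v < grid[1]:
            img[0, u, v] = max(img[0, u, v], i)     # intensity channel
            img[1, u, v] = max(img[1, u, v], p[2])  # height channel
            img[2, u, v] = max(img[2, u, v], d)     # distance channel
    return img

pts = np.array([[0.5, 0.5, 1.0], [2.5, 1.5, 0.2]])
img = cloud_to_channels(pts, intensities=[0.8, 0.3], viewpoint=np.zeros(3))
print(img.shape)  # (3, 4, 4)
```

Stacking the three 2D spaces as channels is what lets an image-style network consume LIDAR-like data directly.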
-
Publication number: 20190180144
Abstract: A danger ranking training method comprising: training a first deep neural network for generic object recognition within generic images; training a second deep neural network for specific object recognition within images of a specific application; training a third deep neural network for specific scene flow prediction within image sequences of the application; training a fourth deep neural network for localization of potential danger areas within images or image sequences of the application using at least one human-trained danger tagging method; training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application; and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network built as a sequence of transfer learning of the five deep neural networks followed by one or several end-to-end top layers.
Type: Application
Filed: December 7, 2018
Publication date: June 13, 2019
Inventors: Dzmitry TSISHKOU, Rémy BENDAHAN
-
Publication number: 20190122077
Abstract: Some embodiments are directed to a method to reinforce deep neural network learning capacity to classify rare cases, which includes the steps of: training a first deep neural network used to classify generic cases of original data into specified labels; localizing discriminative class-specific features within the original data processed through the first deep neural network and mapping the discriminative class-specific features as spatial-probabilistic labels; training a second deep neural network used to classify rare cases of the original data into the spatial-probabilistic labels; and training a combined deep neural network used to classify both generic and rare cases of the original data into primary combined specified and spatial-probabilistic labels.
Type: Application
Filed: March 15, 2017
Publication date: April 25, 2019
Applicant: IMRA Europe S.A.S.
Inventors: Dzmitry TSISHKOU, Rémy BENDAHAN
-
Publication number: 20190086546
Abstract: Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; and transforming the 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space as a function of the intensity data of each point, a height 2D space as a function of the elevation data of each point, and a distance 2D space as a function of the distance data between each point of the 3D point cloud and the view point, defining a single multi-channel 2D space.
Type: Application
Filed: March 14, 2017
Publication date: March 21, 2019
Inventors: Dzmitry TSISHKOU, Frédéric ABAD, Rémy BENDAHAN
-
Patent number: 8798376
Abstract: A method for detecting contour points of an object in an image obtained by a video camera, comprising the steps of: (i) selecting a scan line of the image; (ii) identifying minimum intensity differences, called transitions, between pixels of the selected scan line; (iii) identifying plateaus at both ends of the identified transitions; (iv) determining contour points of the object between the identified plateaus; (v) generating a descriptor of the contour in one dimension; and (vi) beginning again with step (i) by selecting another scan line of the image according to a predefined order.
Type: Grant
Filed: September 14, 2010
Date of Patent: August 5, 2014
Assignee: IMRA Europe S.A.S.
Inventors: Remy Bendahan, Sylvain Bougnoux, Frederic Abad, Dzimitry Tsishkou, Christophe Vestri, Sebastien Wybo
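One pass of steps (ii)-(iv) over a single scan line can be sketched as follows. This is a heavily simplified, assumption-laden illustration: the threshold, the flatness test for plateaus, and the placement of the contour point at the jump are all illustrative choices, not the patented method.

```python
# Simplified sketch of one scan-line pass: intensity jumps between
# neighbouring pixels are flagged as transitions, flat runs on either
# side act as plateaus, and a contour point is placed between them.

def contour_points_on_line(line, threshold=10):
    """Return indices where an intensity jump sits between two plateaus."""
    points = []
    for i in range(1, len(line)):
        left_flat = i < 2 or abs(line[i - 1] - line[i - 2]) < threshold
        right_flat = i + 1 >= len(line) or abs(line[i + 1] - line[i]) < threshold
        if abs(line[i] - line[i - 1]) >= threshold and left_flat and right_flat:
            points.append(i)
    return points

# Plateau at 20, jump, plateau at 200: one contour point at the jump.
print(contour_points_on_line([20, 20, 20, 200, 200, 200]))  # [3]
```

Repeating this over scan lines in a predefined order, as step (vi) describes, yields the object contour one line at a time, and the per-line point positions form the one-dimensional descriptor of step (v).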
-
Publication number: 20110069887
Abstract: A method for detecting contour points of an object in an image obtained by a video camera, comprising the steps of: (i) selecting a scan line of the image; (ii) identifying minimum intensity differences, called transitions, between pixels of the selected scan line; (iii) identifying plateaus at both ends of the identified transitions; (iv) determining contour points of the object between the identified plateaus; (v) generating a descriptor of the contour in one dimension; and (vi) beginning again with step (i) by selecting another scan line of the image according to a predefined order.
Type: Application
Filed: September 14, 2010
Publication date: March 24, 2011
Inventors: Remy BENDAHAN, Sylvain Bougnoux, Frederic Abad, Dzimitry Tsishkou, Christophe Vestri, Sebastien Wybo