Patents by Inventor Nicolas Cebron
Nicolas Cebron has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11966452
Abstract: Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
Type: Grant
Filed: August 5, 2021
Date of Patent: April 23, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
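The combining step can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes two equally sized feature maps whose trailing and leading columns cover the same region, and averages the overlapping features (the function name, shapes, and averaging rule are all assumptions for illustration).

```python
import numpy as np

def combine_feature_maps(map_a, map_b, overlap):
    """Stitch two (H, W, C) spatial feature maps whose last/first
    `overlap` columns view the same part of the scene, averaging
    features in the overlapping region."""
    h, w, c = map_a.shape
    combined = np.zeros((h, 2 * w - overlap, c), dtype=map_a.dtype)
    combined[:, :w - overlap] = map_a[:, :w - overlap]      # camera A only
    combined[:, w:] = map_b[:, overlap:]                    # camera B only
    # Shared field of view: combine the two predictions.
    combined[:, w - overlap:w] = 0.5 * (map_a[:, w - overlap:] + map_b[:, :overlap])
    return combined
```

A downstream head could then regress cuboid parameters from `combined` instead of from each per-camera map separately.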
-
Publication number: 20230394677
Abstract: This document discloses system, method, and computer program product embodiments for image-based pedestrian speed estimation. For example, the method includes receiving an image of a scene, wherein the image includes a pedestrian, and predicting a speed of the pedestrian by applying a machine-learning model to at least a portion of the image that includes the pedestrian. The machine-learning model is trained using a data set including training images of pedestrians, the training images associated with corresponding known pedestrian speeds. The method further includes providing the predicted speed of the pedestrian to a motion-planning system that is configured to control a trajectory of an autonomous vehicle in the scene.
Type: Application
Filed: June 6, 2022
Publication date: December 7, 2023
Inventors: Harpreet Banvait, Guy Hotson, Nicolas Cebron, Michael Schoenberg
-
Publication number: 20230252638
Abstract: Systems and methods for generating a panoptic segmentation mask for an input image. The methods include receiving the input image comprising a plurality of pixels, generating a semantic mask and an instance mask from the input image, and combining the semantic mask and the instance mask to generate a panoptic mask for the input image. The semantic mask includes a single-channel mask that associates each pixel in the input image with a corresponding one of a plurality of labels. The instance mask includes a plurality of masks, where each of the plurality of masks identifies an instance of a countable object in the input image, and is associated with an indication of whether that instance of the countable object is hidden behind another object in the input image.
Type: Application
Filed: February 4, 2022
Publication date: August 10, 2023
Inventors: Guy Hotson, Nicolas Cebron, John Ryan Peterson, Marius Seritan, Craig Bryan
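The semantic-plus-instance combination described above can be sketched in a few lines. This is a hedged illustration only: the two-channel output encoding, the function name, and the `(instance_id, mask)` input format are assumptions, not taken from the filing.

```python
import numpy as np

def combine_panoptic(semantic, instance_masks):
    """semantic: (H, W) int array, one class label per pixel.
    instance_masks: list of (instance_id, bool mask of shape (H, W)).
    Returns (H, W, 2): channel 0 = class label, channel 1 = instance id
    (0 for pixels that belong to no countable object)."""
    h, w = semantic.shape
    panoptic = np.zeros((h, w, 2), dtype=np.int32)
    panoptic[..., 0] = semantic          # dense per-pixel labels
    for inst_id, mask in instance_masks:
        panoptic[..., 1][mask] = inst_id  # overlay countable instances
    return panoptic
```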
-
Patent number: 11663807
Abstract: Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
Type: Grant
Filed: August 5, 2021
Date of Patent: May 30, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
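One simple way to decide that cuboids from separate cameras "should be associated with a same detected object" is to match their 3D centers. The greedy nearest-center rule below is an assumption chosen for illustration; the patent does not specify this particular association criterion.

```python
import numpy as np

def associate_cuboids(centers_a, centers_b, max_dist=1.0):
    """Greedily pair cuboids predicted from two cameras: two predictions
    are assigned to the same object when their 3D centers lie within
    max_dist (e.g. meters) of each other. Returns (index_a, index_b) pairs."""
    pairs, used = [], set()
    for i, ca in enumerate(centers_a):
        best, best_d = None, max_dist
        for j, cb in enumerate(centers_b):
            if j in used:
                continue
            d = float(np.linalg.norm(np.asarray(ca) - np.asarray(cb)))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```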
-
Publication number: 20230084623
Abstract: Methods and systems for detecting objects near an autonomous vehicle (AV) are disclosed. An AV will capture an image. A trained network will process the image at a lower resolution and generate a first feature map that classifies object(s) within the image. The system will crop the image and use the network to process the cropped section at a higher resolution to generate a second feature map that classifies object(s) that appear within the cropped section. The system will crop the first feature map to match a corresponding region of the cropped section of the image. The system will fuse the cropped first and second feature maps to generate a third feature map. The system may output the object classifications in the third feature map to an AV system, such as a motion planning system that will use the object classifications to plan a trajectory for the AV.
Type: Application
Filed: September 10, 2021
Publication date: March 16, 2023
Inventors: Guy Hotson, Richard L. Kwant, Deva K. Ramanan, Nicolas Cebron, Chao Fang
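The crop-and-fuse flow above can be sketched with a toy "network". Everything here is an assumption for illustration: average pooling stands in for the trained network, the crop box is taken to be stride-aligned, and fusion is done by upsampling the cropped coarse map and averaging with the fine map.

```python
import numpy as np

def avg_pool(img, k):
    # Toy stand-in for the trained network: k x k average pooling.
    h = img.shape[0] // k * k
    w = img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def fuse_multires(image, crop_box, coarse=8, fine=4):
    """Coarse pass over the full frame, fine pass over the crop, then
    fuse: crop the coarse feature map to the same region, upsample it,
    and average it with the fine map (the third feature map)."""
    y0, y1, x0, x1 = crop_box                       # assumed stride-aligned
    coarse_map = avg_pool(image, coarse)            # first feature map
    fine_map = avg_pool(image[y0:y1, x0:x1], fine)  # second feature map
    crop_coarse = coarse_map[y0 // coarse:y1 // coarse, x0 // coarse:x1 // coarse]
    up = np.repeat(np.repeat(crop_coarse, coarse // fine, 0), coarse // fine, 1)
    return 0.5 * (up + fine_map)
```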
-
Publication number: 20230038578
Abstract: Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
Type: Application
Filed: August 5, 2021
Publication date: February 9, 2023
Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
-
Publication number: 20230043716
Abstract: Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
Type: Application
Filed: August 5, 2021
Publication date: February 9, 2023
Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
-
Publication number: 20220301099
Abstract: Systems and methods for processing high resolution images are disclosed. The methods include generating a saliency map of a received high-resolution image using a saliency model. The saliency map includes a saliency value associated with each of a plurality of pixels of the high-resolution image. The method then includes using the saliency map for generating an inverse transformation function that is representative of an inverse mapping of one or more first pixel coordinates in a warped image to one or more second pixel coordinates in the high-resolution image, and implementing an image warp for converting the high-resolution image to the warped image using the inverse transformation function. The warped image is a foveated image that includes at least one region having a higher resolution than one or more other regions of the warped image.
Type: Application
Filed: February 24, 2022
Publication date: September 22, 2022
Inventors: Nicolas Cebron, Deva K. Ramanan, Mengtian Li
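The inverse mapping idea (warped coordinate back to source coordinate, with salient regions receiving more output pixels) can be sketched in one dimension per axis. This is a simplified separable variant chosen for illustration, not the filed method: saliency is reduced to per-row and per-column profiles, and the inverse map is the inverse CDF of the saliency.

```python
import numpy as np

def inverse_map_1d(saliency, out_size):
    """For each output (warped) coordinate, return the source coordinate
    it samples from, so that high-saliency regions get more output pixels."""
    cdf = np.cumsum(saliency.astype(float) + 1e-6)  # epsilon keeps cdf increasing
    cdf /= cdf[-1]
    targets = (np.arange(out_size) + 0.5) / out_size
    # Invert the CDF: this is the inverse transformation function.
    return np.interp(targets, cdf, np.arange(len(saliency)))

def foveate(image, sal_rows, sal_cols, out_h, out_w):
    """Warp via the inverse map: sample source pixels nonuniformly."""
    ys = inverse_map_1d(sal_rows, out_h).round().astype(int)
    xs = inverse_map_1d(sal_cols, out_w).round().astype(int)
    return image[np.ix_(ys, xs)]
```

Rows with high saliency end up sampled by most of the output rows, which is exactly the foveation effect the abstract describes.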
-
Publication number: 20220188695
Abstract: Systems and methods for on-board selection of data logs for training a machine learning model. The methods include, by an autonomous vehicle, receiving sensor data logs corresponding to surroundings of the autonomous vehicle from a plurality of sensors, and identifying one or more events within each sensor data log. The methods also include, for each sensor data log: analyzing features of the identified one or more events within that sensor data log for determining whether that sensor data log satisfies one or more usefulness criteria for training a machine learning model, and transmitting that sensor data log to a remote computing device for training the machine learning model if that sensor data log satisfies one or more usefulness criteria for training the machine learning model. The features can include spatial features, temporal features, bounding box inconsistencies, or map-based features.
Type: Application
Filed: December 16, 2020
Publication date: June 16, 2022
Inventors: Shaojun Zhu, Richard L. Kwant, Nicolas Cebron
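The selection logic can be sketched as a filter over logs with pluggable criteria. The two example criteria below (a rare-class check and a bounding-box-area jump check) are illustrative assumptions; the log schema (`events`, `class`, `track`, `area` keys) is invented for the sketch.

```python
def select_logs(logs, criteria):
    """Keep only sensor data logs whose events satisfy at least one
    usefulness criterion; these would be transmitted for training."""
    return [log for log in logs if any(crit(log["events"]) for crit in criteria)]

def has_rare_class(events, rare=frozenset({"emergency_vehicle", "animal"})):
    # Spatial/semantic criterion: the log contains a rarely seen class.
    return any(e["class"] in rare for e in events)

def has_bbox_inconsistency(events, max_jump=0.5):
    # Temporal criterion: a track's bounding-box area changes abruptly.
    by_track = {}
    for e in events:
        by_track.setdefault(e["track"], []).append(e["area"])
    return any(
        abs(b - a) / max(a, 1e-9) > max_jump
        for areas in by_track.values()
        for a, b in zip(areas, areas[1:])
    )
```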
-
Publication number: 20220122620
Abstract: Systems and methods for siren detection in a vehicle are provided. A method includes recording an audio segment, using a first audio recording device coupled to an autonomous vehicle, separating, using a computing device coupled to the autonomous vehicle, the audio segment into one or more audio clips, generating a spectrogram of the one or more audio clips, and inputting each spectrogram into a Convolutional Neural Network (CNN) run on the computing device. The CNN may be pretrained to detect one or more sirens present in spectrographic data. The method further includes determining, using the CNN, whether a siren is present in the audio segment, and if the siren is determined to be present in the audio segment, determining a course of action of the autonomous vehicle.
Type: Application
Filed: October 19, 2020
Publication date: April 21, 2022
Inventors: Olivia Watkins, Nathan Pendleton, Guy Hotson, Chao Fang, Richard L. Kwant, Weihua Gao, Deva Ramanan, Nicolas Cebron, Brett Browning
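The spectrogram step of that pipeline can be sketched with a plain short-time FFT; the frame and hop sizes are arbitrary assumptions, and the pretrained CNN itself is out of scope here (a call like `siren_cnn(spec)` returning a siren probability would follow).

```python
import numpy as np

def spectrogram(clip, frame=256, hop=128):
    """Magnitude spectrogram of a mono audio clip via a short-time FFT
    with a Hann window; rows are frequency bins, columns are time frames."""
    window = np.hanning(frame)
    frames = np.stack([clip[i:i + frame] * window
                       for i in range(0, len(clip) - frame + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

Each clip's spectrogram would then be fed to the CNN as a single-channel image.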
-
Patent number: 9998656
Abstract: A monitoring system for a monitoring area. The monitoring system includes a monitoring camera, a reflecting device, and an evaluator. The monitoring camera has a field of view for capturing a first partial section of the monitoring area. The reflecting device is positioned in the field of view of the monitoring camera such that the monitoring camera captures a second partial section of the monitoring area, wherein the first and the second partial sections are positioned to overlap in a common partial section of the monitoring area, and wherein a first image area depicts the first partial section and a second image area depicts the second partial section in the monitoring image. The evaluator is configured to identify at least one correspondence object in the first and the second partial sections.
Type: Grant
Filed: July 13, 2015
Date of Patent: June 12, 2018
Assignee: Robert Bosch GmbH
Inventors: Jan Karl Warzelhan, Nicolas Cebron
-
Patent number: 9996751
Abstract: The invention relates to a method for monitoring a monitored region (10) recorded by a camera, wherein a content analysis is performed for a sub-region (20) of the monitored region (10); wherein the sub-region (20) is determined in dependence on one or more parameters; and wherein the determination of the sub-region (20) is performed anew when at least one of the parameters changes during the monitoring.
Type: Grant
Filed: September 25, 2014
Date of Patent: June 12, 2018
Assignee: Robert Bosch GmbH
Inventors: Jan Karl Warzelhan, Nicolas Cebron
-
Publication number: 20170177945
Abstract: The invention relates to a method for monitoring a monitored region (10) recorded by a camera, wherein a content analysis is performed for a sub-region (20) of the monitored region (10); wherein the sub-region (20) is determined in dependence on one or more parameters; and wherein the determination of the sub-region (20) is performed anew when at least one of the parameters changes during the monitoring.
Type: Application
Filed: September 25, 2014
Publication date: June 22, 2017
Inventors: Jan Karl Warzelhan, Nicolas Cebron
-
Patent number: 9495612
Abstract: A method for recognizing an object (120) in an image (100), in which the recognition of the object (120) comprises a first scaling stage of a scaling region of the image (100), and at least a further scaling stage of the scaling region of the image (100); and in which the image (100) is subdivided into at least one image segment (110); a first decision being taken for the at least one image segment (110) on the first scaling stage of the scaling region as to whether the at least one image segment (110) is considered on the at least one further scaling stage of the scaling region for the recognition of the object (120).
Type: Grant
Filed: November 25, 2014
Date of Patent: November 15, 2016
Assignee: Robert Bosch GmbH
Inventors: Nicolas Cebron, Jie Yu
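The early-rejection idea (decide per segment at the first scaling stage whether it is worth evaluating at further stages) can be sketched as a simple cascade. The grid layout, threshold rule, and `score_fn` interface are assumptions for illustration; `score_fn` stands in for a scale-dependent classifier.

```python
import numpy as np

def cascade_detect(image, score_fn, grid=4, scales=(1.0, 0.5), threshold=0.5):
    """First decision at scales[0]: segments scoring below the threshold
    are dropped and never evaluated at the further scaling stages."""
    h, w = image.shape[:2]
    segs = {}
    for r in range(grid):
        for c in range(grid):
            segs[(r, c)] = image[r * h // grid:(r + 1) * h // grid,
                                 c * w // grid:(c + 1) * w // grid]
    # Stage 1: cheap pass over every segment.
    active = [k for k, seg in segs.items() if score_fn(seg, scales[0]) >= threshold]
    # Further stages: only surviving segments are considered.
    detections = []
    for scale in scales[1:]:
        for k in active:
            if score_fn(segs[k], scale) >= threshold:
                detections.append((k, scale))
    return detections
```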
-
Publication number: 20160021307
Abstract: A monitoring system for a monitoring area. The monitoring system includes a monitoring camera, a reflecting device, and an evaluator. The monitoring camera has a field of view for capturing a first partial section of the monitoring area. The reflecting device is positioned in the field of view of the monitoring camera such that the monitoring camera captures a second partial section of the monitoring area, wherein the first and the second partial sections are positioned to overlap in a common partial section of the monitoring area, and wherein a first image area depicts the first partial section and a second image area depicts the second partial section in the monitoring image. The evaluator is configured to identify at least one correspondence object in the first and the second partial sections.
Type: Application
Filed: July 13, 2015
Publication date: January 21, 2016
Inventors: Jan Karl Warzelhan, Nicolas Cebron
-
Publication number: 20150146927
Abstract: A method for recognizing an object (120) in an image (100), in which the recognition of the object (120) comprises a first scaling stage of a scaling region of the image (100), and at least a further scaling stage of the scaling region of the image (100); and in which the image (100) is subdivided into at least one image segment (110); a first decision being taken for the at least one image segment (110) on the first scaling stage of the scaling region as to whether the at least one image segment (110) is considered on the at least one further scaling stage of the scaling region for the recognition of the object (120).
Type: Application
Filed: November 25, 2014
Publication date: May 28, 2015
Inventors: Nicolas Cebron, Jie Yu