Patents by Inventor Nicolas Cebron

Nicolas Cebron has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966452
    Abstract: Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
    Type: Grant
    Filed: August 5, 2021
    Date of Patent: April 23, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
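    Illustrative note: the sketch below is not from the patent; it only shows, under simplifying assumptions, the general idea of combining the overlapping portions of per-camera spatial feature maps into one combined map. The function name combine_overlapping_maps, the fixed overlap width in feature columns, and the equal-weight averaging are assumptions made for illustration.

    ```python
    # Minimal sketch (assumptions noted above): average the overlapping columns
    # of two (H, W, C) feature maps and concatenate the result with the
    # non-overlapping parts to form a single combined spatial feature map.
    import numpy as np

    def combine_overlapping_maps(left_map: np.ndarray,
                                 right_map: np.ndarray,
                                 overlap_cols: int) -> np.ndarray:
        """Fuse two feature maps whose last/first `overlap_cols` columns overlap."""
        left_only = left_map[:, :-overlap_cols, :]
        right_only = right_map[:, overlap_cols:, :]
        fused = 0.5 * (left_map[:, -overlap_cols:, :] + right_map[:, :overlap_cols, :])
        return np.concatenate([left_only, fused, right_only], axis=1)

    if __name__ == "__main__":
        cam_a = np.random.rand(32, 64, 128)   # H x W x C feature map from camera A
        cam_b = np.random.rand(32, 64, 128)   # feature map from camera B
        combined = combine_overlapping_maps(cam_a, cam_b, overlap_cols=16)
        print(combined.shape)                 # (32, 112, 128)
    ```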
  • Publication number: 20230394677
    Abstract: This document discloses system, method, and computer program product embodiments for image-based pedestrian speed estimation. For example, the method includes receiving an image of a scene, wherein the image includes a pedestrian, and predicting a speed of the pedestrian by applying a machine-learning model to at least a portion of the image that includes the pedestrian. The machine-learning model is trained using a data set including training images of pedestrians, the training images associated with corresponding known pedestrian speeds. The method further includes providing the predicted speed of the pedestrian to a motion-planning system that is configured to control a trajectory of an autonomous vehicle in the scene.
    Type: Application
    Filed: June 6, 2022
    Publication date: December 7, 2023
    Inventors: Harpreet Banvait, Guy Hotson, Nicolas Cebron, Michael Schoenberg
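    Illustrative note: the sketch below is not the patented method; it only shows the data flow of cropping a detected pedestrian from an image and handing the crop to a trained regression model. The function estimate_pedestrian_speed, the box format, and the stub model are assumptions for illustration.

    ```python
    # Minimal sketch: crop the pedestrian region and query a speed model.
    # The stub lambda stands in for the trained machine-learning model the
    # abstract describes; it is not a real estimator.
    import numpy as np

    def estimate_pedestrian_speed(image: np.ndarray,
                                  box: tuple[int, int, int, int],
                                  model) -> float:
        """`box` is (x0, y0, x1, y1) in pixel coordinates."""
        x0, y0, x1, y1 = box
        crop = image[y0:y1, x0:x1, :]
        return float(model(crop))

    if __name__ == "__main__":
        frame = np.random.rand(480, 640, 3)
        stub_model = lambda crop: 1.4 * crop.mean()   # placeholder, not a trained CNN
        speed = estimate_pedestrian_speed(frame, (300, 200, 340, 320), stub_model)
        print(f"predicted pedestrian speed: {speed:.2f} m/s")
    ```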
  • Publication number: 20230252638
    Abstract: Systems and methods for generating a panoptic segmentation mask for an input image. The methods include receiving the input image comprising a plurality of pixels, generating a semantic mask and an instance mask from the input image, and combining the semantic mask and the instance mask to generate a panoptic mask for the input image. The semantic mask includes a single-channel mask that associates each pixel in the input image with a corresponding one of a plurality of labels. The instance mask includes a plurality of masks, where each of the plurality of masks identifies an instance of a countable object in the input image, and is associated with an indication of whether that instance of the countable object is hidden behind another object in the input image.
    Type: Application
    Filed: February 4, 2022
    Publication date: August 10, 2023
    Inventors: Guy Hotson, Nicolas Cebron, John Ryan Peterson, Marius Seritan, Craig Bryan
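    Illustrative note: the following sketch is not taken from the application; it demonstrates, under assumed conventions, how a single-channel semantic mask and a set of per-instance binary masks can be merged into a panoptic mask. The two-channel (label, instance id) encoding and the function name build_panoptic_mask are assumptions.

    ```python
    # Minimal sketch: channel 0 carries the semantic label per pixel,
    # channel 1 carries an instance id (0 = no countable instance).
    import numpy as np

    def build_panoptic_mask(semantic: np.ndarray,
                            instance_masks: list[np.ndarray]) -> np.ndarray:
        panoptic = np.zeros((*semantic.shape, 2), dtype=np.int32)
        panoptic[..., 0] = semantic                    # class label per pixel
        for idx, mask in enumerate(instance_masks, start=1):
            panoptic[..., 1][mask.astype(bool)] = idx  # stamp the instance id
        return panoptic

    if __name__ == "__main__":
        sem = np.random.randint(0, 5, size=(4, 6))               # 5 semantic classes
        car = np.zeros((4, 6), dtype=bool);    car[1:3, 1:3] = True
        person = np.zeros((4, 6), dtype=bool); person[2:4, 4:6] = True
        print(build_panoptic_mask(sem, [car, person])[..., 1])   # instance ids
    ```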
  • Patent number: 11663807
    Abstract: Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
    Type: Grant
    Filed: August 5, 2021
    Date of Patent: May 30, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
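    Illustrative note: the sketch below is not the claimed assignment procedure; it shows one simple way two cuboids predicted by different cameras could be associated with the same object, here by greedily matching 3D centers in a shared vehicle frame. The distance threshold and the greedy matching are assumptions.

    ```python
    # Minimal sketch: match cuboid centers across two cameras when they lie
    # within a distance threshold; each row of the inputs is a center (x, y, z).
    import numpy as np

    def associate_cuboids(cuboids_cam_a: np.ndarray,
                          cuboids_cam_b: np.ndarray,
                          max_center_dist: float = 1.0) -> list[tuple[int, int]]:
        pairs, used_b = [], set()
        for i, center_a in enumerate(cuboids_cam_a):
            dists = np.linalg.norm(cuboids_cam_b - center_a, axis=1)
            j = int(np.argmin(dists))
            if dists[j] < max_center_dist and j not in used_b:
                pairs.append((i, j))
                used_b.add(j)
        return pairs

    if __name__ == "__main__":
        a = np.array([[10.0, 2.0, 0.5], [25.0, -3.0, 0.4]])
        b = np.array([[10.3, 2.1, 0.5], [40.0, 0.0, 0.6]])
        print(associate_cuboids(a, b))   # [(0, 0)]
    ```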
  • Publication number: 20230084623
    Abstract: Methods and systems for detecting objects near an autonomous vehicle (AV) are disclosed. An AV will capture an image. A trained network will process the image at a lower resolution and generate a first feature map that classifies object(s) within the image. The system will crop the image and use the network to process the cropped section at a higher resolution to generate a second feature map that classifies object(s) that appear within the cropped section. The system will crop the first feature map to match a corresponding region of the cropped section of the image. The system will fuse the cropped first and second feature maps to generate a third feature map. The system may output the object classifications in the third feature map to an AV system, such as a motion planning system that will use the object classifications to plan a trajectory for the AV.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Inventors: Guy Hotson, Richard L. Kwant, Deva K. Ramanan, Nicolas Cebron, Chao Fang
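    Illustrative note: the sketch below is not the disclosed network; it uses a cheap average-pooling stand-in for the trained model to mimic the described flow of processing the full image at low resolution, processing a crop at higher resolution, cropping the coarse feature map to the same region, and fusing the two. The strides, the crop box format, and the equal-weight fusion are assumptions.

    ```python
    # Minimal sketch: coarse whole-image features + fine crop features, fused.
    import numpy as np

    def avg_pool(img: np.ndarray, k: int) -> np.ndarray:
        h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
        return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    def fuse_coarse_and_crop(image, crop_box, coarse_stride=8, fine_stride=2):
        x0, y0, x1, y1 = crop_box
        coarse = avg_pool(image, coarse_stride)            # whole image, low resolution
        fine = avg_pool(image[y0:y1, x0:x1], fine_stride)  # crop, higher resolution
        coarse_crop = coarse[y0 // coarse_stride:y1 // coarse_stride,
                             x0 // coarse_stride:x1 // coarse_stride]
        # bring the coarse crop onto the fine grid by repetition, then average
        rep = coarse_stride // fine_stride
        upsampled = np.repeat(np.repeat(coarse_crop, rep, axis=0), rep, axis=1)
        return 0.5 * (upsampled[:fine.shape[0], :fine.shape[1]] + fine)

    if __name__ == "__main__":
        img = np.random.rand(256, 256)
        print(fuse_coarse_and_crop(img, (64, 64, 128, 128)).shape)   # (32, 32)
    ```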
  • Publication number: 20230038578
    Abstract: Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
    Type: Application
    Filed: August 5, 2021
    Publication date: February 9, 2023
    Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
  • Publication number: 20230043716
    Abstract: Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
    Type: Application
    Filed: August 5, 2021
    Publication date: February 9, 2023
    Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
  • Publication number: 20220301099
    Abstract: Systems and methods for processing high resolution images are disclosed. The methods include generating a saliency map of a received high-resolution image using a saliency model. The saliency map includes a saliency value associated with each of a plurality of pixels of the high-resolution image. The method then includes using the saliency map for generating an inverse transformation function that is representative of an inverse mapping of one or more first pixel coordinates in a warped image to one or more second pixel coordinates in the high-resolution image, and implementing an image warp for converting the high-resolution image to the warped image using the inverse transformation function. The warped image is a foveated image that includes at least one region having a higher resolution than one or more other regions of the warped image.
    Type: Application
    Filed: February 24, 2022
    Publication date: September 22, 2022
    Inventors: Nicolas Cebron, Deva K. Ramanan, Mengtian Li
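    Illustrative note: the sketch below is not the patented warp; it shows a simplified, separable version of the idea: build an inverse mapping from output (warped) coordinates to input coordinates out of the saliency distribution, so the warped image samples salient rows and columns more densely. The marginal per-axis treatment and nearest-neighbour sampling are assumptions.

    ```python
    # Minimal sketch: saliency-driven foveated resampling along each axis.
    import numpy as np

    def foveated_warp(image: np.ndarray, saliency: np.ndarray,
                      out_h: int, out_w: int) -> np.ndarray:
        def inverse_axis_map(marginal: np.ndarray, out_size: int) -> np.ndarray:
            cdf = np.cumsum(marginal + 1e-6)
            cdf = cdf / cdf[-1]                       # normalized cumulative saliency
            targets = np.linspace(0.0, 1.0, out_size)
            # inverse transformation: warped coordinate -> source coordinate
            return np.interp(targets, cdf, np.arange(len(marginal)))

        rows = inverse_axis_map(saliency.sum(axis=1), out_h).round().astype(int)
        cols = inverse_axis_map(saliency.sum(axis=0), out_w).round().astype(int)
        return image[np.ix_(rows, cols)]

    if __name__ == "__main__":
        img = np.arange(100 * 100).reshape(100, 100)
        sal = np.zeros((100, 100)); sal[40:60, 40:60] = 1.0   # salient centre region
        print(foveated_warp(img, sal, 50, 50).shape)          # (50, 50)
    ```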
  • Publication number: 20220188695
    Abstract: Systems and methods for on-board selection of data logs for training a machine learning model. The methods include, by an autonomous vehicle, receiving sensor data logs corresponding to surroundings of the autonomous vehicle from a plurality of sensors, and identifying one or more events within each sensor data log. The methods also include, for each sensor data log: analyzing features of the identified one or more events within that sensor data log for determining whether that sensor data log satisfies one or more usefulness criteria for training a machine learning model, and transmitting that sensor data log to a remote computing device for training the machine learning model if that sensor data log satisfies one or more usefulness criteria for training the machine learning model. The features can include spatial features, temporal features, bounding box inconsistencies, or map-based features.
    Type: Application
    Filed: December 16, 2020
    Publication date: June 16, 2022
    Inventors: Shaojun Zhu, Richard L. Kwant, Nicolas Cebron
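    Illustrative note: the sketch below does not reproduce the claimed usefulness criteria; it only illustrates the selection pattern with made-up features and weights (rare events, bounding-box inconsistencies, off-map detections) and an arbitrarily chosen threshold.

    ```python
    # Minimal sketch: score each on-board data log and keep the ones that look
    # useful for retraining; the features and weights are placeholders.
    from dataclasses import dataclass

    @dataclass
    class DataLog:
        log_id: str
        num_rare_events: int          # e.g., unusual actors or maneuvers
        bbox_inconsistencies: int     # tracker/detector disagreements
        off_map_detections: int       # detections in unexpected map regions

    def is_useful_for_training(log: DataLog, min_score: float = 2.0) -> bool:
        score = (1.5 * log.num_rare_events
                 + 1.0 * log.bbox_inconsistencies
                 + 0.5 * log.off_map_detections)
        return score >= min_score

    def select_logs_to_transmit(logs: list[DataLog]) -> list[str]:
        return [log.log_id for log in logs if is_useful_for_training(log)]

    if __name__ == "__main__":
        logs = [DataLog("log-001", 0, 0, 0), DataLog("log-002", 1, 2, 3)]
        print(select_logs_to_transmit(logs))   # ['log-002']
    ```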
  • Publication number: 20220122620
    Abstract: Systems and methods for siren detection in a vehicle are provided. A method includes recording an audio segment, using a first audio recording device coupled to an autonomous vehicle, separating, using a computing device coupled to the autonomous vehicle, the audio segment into one or more audio clips, generating a spectrogram of the one or more audio clips, and inputting each spectrogram into a Convolutional Neural Network (CNN) run on the computing device. The CNN may be pretrained to detect one or more sirens present in spectrographic data. The method further includes determining, using the CNN, whether a siren is present in the audio segment, and if the siren is determined to be present in the audio segment, determining a course of action of the autonomous vehicle.
    Type: Application
    Filed: October 19, 2020
    Publication date: April 21, 2022
    Inventors: Olivia Watkins, Nathan Pendleton, Guy Hotson, Chao Fang, Richard L. Kwant, Weihua Gao, Deva Ramanan, Nicolas Cebron, Brett Browning
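    Illustrative note: the sketch below is not the disclosed detector; it mirrors the described pipeline (split the audio segment into clips, compute a spectrogram per clip, classify each spectrogram) with a hand-rolled short-time FFT and a stub in place of the pretrained CNN. The window and hop sizes and the stub classifier are assumptions.

    ```python
    # Minimal sketch: clip -> magnitude spectrogram -> classifier stub.
    import numpy as np

    def spectrogram(clip: np.ndarray, win: int = 256, hop: int = 128) -> np.ndarray:
        frames = [clip[i:i + win] * np.hanning(win)
                  for i in range(0, len(clip) - win, hop)]
        return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T   # (freq, time)

    def siren_present(audio: np.ndarray, clip_len: int, classifier) -> bool:
        clips = [audio[i:i + clip_len]
                 for i in range(0, len(audio) - clip_len + 1, clip_len)]
        return any(classifier(spectrogram(c)) > 0.5 for c in clips)

    if __name__ == "__main__":
        fs = 16000
        t = np.arange(fs * 2) / fs
        inst_freq = 900 + 300 * np.sin(2 * np.pi * 0.5 * t)      # wailing 600-1200 Hz
        audio = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)
        stub_cnn = lambda spec: float(spec[5:30].mean() > spec.mean())  # not a real CNN
        print(siren_present(audio, clip_len=fs, classifier=stub_cnn))
    ```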
  • Patent number: 9998656
    Abstract: A monitoring system for a monitoring area. The monitoring system includes a monitoring camera, a reflecting device, and an evaluator. The monitoring camera has a field of view for capturing a first partial section of the monitoring area. The reflecting device is positioned in the field of view of the monitoring camera such that the monitoring camera captures a second partial section of the monitoring area, wherein the first and the second partial sections are positioned to overlap in a common partial section of the monitoring area, and wherein a first image area depicts the first partial section and a second image area depicts the second partial section in the monitoring image. The evaluator is configured to identify at least one correspondence object in the first and the second partial sections.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: June 12, 2018
    Assignee: Robert Bosch GmbH
    Inventors: Jan Karl Warzelhan, Nicolas Cebron
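    Illustrative note: the sketch below is not the patented evaluator; it only hints at how a correspondence object might be found in both partial sections by flipping the mirrored section and comparing normalized patch correlations. The brute-force search and the correlation measure are assumptions.

    ```python
    # Minimal sketch: undo the mirror reflection, then template-match a patch
    # from the directly viewed section against the mirrored section.
    import numpy as np

    def best_match_score(direct_patch: np.ndarray, mirrored_region: np.ndarray) -> float:
        flipped = np.fliplr(mirrored_region)       # undo the reflection
        h, w = direct_patch.shape
        tpl = (direct_patch - direct_patch.mean()) / (direct_patch.std() + 1e-6)
        best = -1.0
        for y in range(flipped.shape[0] - h + 1):
            for x in range(flipped.shape[1] - w + 1):
                win = flipped[y:y + h, x:x + w]
                win = (win - win.mean()) / (win.std() + 1e-6)
                best = max(best, float((tpl * win).mean()))
        return best

    if __name__ == "__main__":
        scene = np.random.rand(40, 40)
        patch = scene[10:20, 10:20]
        print(best_match_score(patch, np.fliplr(scene)))   # close to 1.0
    ```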
  • Patent number: 9996751
    Abstract: The invention relates to a method for monitoring a monitored region (10) recorded by a camera, wherein a content analysis is performed for a sub-region (20) of the monitored region (10); wherein the sub-region (20) is determined in dependence on one or more parameters; and wherein the determination of the sub-region (20) is performed anew when at least one of the parameters changes during the monitoring.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: June 12, 2018
    Assignee: Robert Bosch GmbH
    Inventors: Jan Karl Warzelhan, Nicolas Cebron
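    Illustrative note: the sketch below is not from the patent; it illustrates the caching pattern implied by the abstract: the analysed sub-region is only re-determined when one of the parameters it depends on changes. The specific parameters (a centre point and a zoom level) and the mean-intensity stand-in analysis are assumptions.

    ```python
    # Minimal sketch: recompute the sub-region only on a parameter change.
    import numpy as np

    class SubRegionAnalyzer:
        def __init__(self):
            self._params = None
            self._box = None   # (x0, y0, x1, y1)

        def _determine_sub_region(self, params: dict) -> tuple[int, int, int, int]:
            cx, cy, zoom = params["cx"], params["cy"], params["zoom"]
            half = int(64 / zoom)
            return (cx - half, cy - half, cx + half, cy + half)

        def analyze(self, frame: np.ndarray, params: dict) -> float:
            if params != self._params:                    # a parameter changed
                self._box = self._determine_sub_region(params)
                self._params = dict(params)
            x0, y0, x1, y1 = self._box
            return float(frame[y0:y1, x0:x1].mean())      # stand-in content analysis

    if __name__ == "__main__":
        analyzer = SubRegionAnalyzer()
        frame = np.random.rand(480, 640)
        print(analyzer.analyze(frame, {"cx": 320, "cy": 240, "zoom": 2.0}))
    ```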
  • Publication number: 20170177945
    Abstract: The invention relates to a method for monitoring a monitored region (10) recorded by a camera, wherein a content analysis is performed for a sub-region (20) of the monitored region (10); wherein the sub-region (20) is determined in dependence on one or more parameters; and wherein the determination of the sub-region (20) is performed anew when at least one of the parameters changes during the monitoring.
    Type: Application
    Filed: September 25, 2014
    Publication date: June 22, 2017
    Inventors: Jan Karl Warzelhan, Nicolas Cebron
  • Patent number: 9495612
    Abstract: A method for recognizing an object (120) in an image (100), in which the recognition of the object (120) comprises a first scaling stage of a scaling region of the image (100), and at least a further scaling stage of the scaling region of the image (100); and in which the image (100) is subdivided into at least one image segment (110); a first decision being taken for the at least one image segment (110) on the first scaling stage of the scaling region as to whether the at least one image segment (110) is considered on the at least one further scaling stage of the scaling region for the recognition of the object (120).
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: November 15, 2016
    Assignee: Robert Bosch GmbH
    Inventors: Nicolas Cebron, Jie Yu
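    Illustrative note: the sketch below is not the patented detector; it only demonstrates the early-exit idea of the abstract: score each image segment cheaply at the first (coarse) scaling stage and spend the finer stage only on segments that pass. The texture-based score and the stand-in fine detector are assumptions.

    ```python
    # Minimal sketch: per-segment coarse gate before the fine-scale check.
    import numpy as np

    def coarse_score(segment: np.ndarray) -> float:
        return float(segment.std())             # cheap proxy for "something is here"

    def detect_multiscale(image: np.ndarray, seg: int = 32, keep_thresh: float = 0.05):
        hits = []
        for y in range(0, image.shape[0], seg):
            for x in range(0, image.shape[1], seg):
                segment = image[y:y + seg, x:x + seg]
                if coarse_score(segment[::4, ::4]) < keep_thresh:
                    continue                    # not considered at further stages
                if segment.mean() > 0.6:        # stand-in fine-scale detector
                    hits.append((x, y))
        return hits

    if __name__ == "__main__":
        img = np.zeros((128, 128))
        img[64:96, 64:96] = np.random.rand(32, 32) * 0.5 + 0.5   # bright textured object
        print(detect_multiscale(img))   # [(64, 64)]
    ```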
  • Publication number: 20160021307
    Abstract: A monitoring system for a monitoring area. The monitoring system includes a monitoring camera, a reflecting device, and an evaluator. The monitoring camera has a field of view for capturing a first partial section of the monitoring area. The reflecting device is positioned in the field of view of the monitoring camera such that the monitoring camera captures a second partial section of the monitoring area, wherein the first and the second partial sections are positioned to overlap in a common partial section of the monitoring area, and wherein a first image area depicts the first partial section and a second image area depicts the second partial section in the monitoring image. The evaluator is configured to identify at least one correspondence object in the first and the second partial sections.
    Type: Application
    Filed: July 13, 2015
    Publication date: January 21, 2016
    Inventors: Jan Karl Warzelhan, Nicolas Cebron
  • Publication number: 20150146927
    Abstract: A method for recognizing an object (120) in an image (100), in which the recognition of the object (120) comprises a first scaling stage of a scaling region of the image (100), and at least a further scaling stage of the scaling region of the image (100); and in which the image (100) is subdivided into at least one image segment (110); a first decision being taken for the at least one image segment (110) on the first scaling stage of the scaling region as to whether the at least one image segment (110) is considered on the at least one further scaling stage of the scaling region for the recognition of the object (120).
    Type: Application
    Filed: November 25, 2014
    Publication date: May 28, 2015
    Inventors: Nicolas Cebron, Jie Yu