Patents by Inventor CLEMENT CREUSOT
Clement Creusot has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220122365
Abstract: A vehicle includes one or more cameras that capture a plurality of two-dimensional images of a three-dimensional object. A light detector and/or a semantic classifier search within those images for lights of the three-dimensional object. A vehicle signal detection module fuses information from the light detector and/or the semantic classifier to produce a semantic meaning for the lights. The vehicle can be controlled based on the semantic meaning. Further, the vehicle can include a depth sensor and an object projector. The object projector can determine regions of interest within the two-dimensional images, based on the depth sensor. The light detector and/or the semantic classifier can use these regions of interest to efficiently perform the search for the lights.
Type: Application
Filed: December 29, 2021
Publication date: April 21, 2022
Applicant: GM Cruise Holdings LLC
Inventors: Clement Creusot, Divya Thuremella, Na Yu, Jia Pu
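As a rough illustration of the fusion step this abstract describes, the sketch below averages per-light confidence scores from two hypothetical sources (a light detector and a semantic classifier) and keeps the lights that clear a threshold. All names, score ranges, and the averaging rule are assumptions for illustration, not taken from the patent.

```python
def fuse_vehicle_signals(detector_scores, classifier_scores, threshold=0.5):
    """Fuse per-light confidence scores from two independent sources.

    detector_scores / classifier_scores map a light name (e.g. "brake",
    "left_turn") to a confidence in [0, 1]. The fused semantic meaning is
    the set of lights whose averaged confidence clears the threshold.
    """
    lights = set(detector_scores) | set(classifier_scores)
    fused = {}
    for light in lights:
        # Average only the sources that actually scored this light.
        scores = [s[light] for s in (detector_scores, classifier_scores) if light in s]
        fused[light] = sum(scores) / len(scores)
    return {light for light, score in fused.items() if score >= threshold}
```

For example, a brake light scored 0.9 by the detector and 0.7 by the classifier survives fusion, while a light seen weakly by only one source does not.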
-
Patent number: 11288527
Abstract: A vehicle includes one or more cameras that capture a plurality of two-dimensional images of a three-dimensional object. A light detector and/or a semantic classifier search within those images for lights of the three-dimensional object. A vehicle signal detection module fuses information from the light detector and/or the semantic classifier to produce a semantic meaning for the lights. The vehicle can be controlled based on the semantic meaning. Further, the vehicle can include a depth sensor and an object projector. The object projector can determine regions of interest within the two-dimensional images, based on the depth sensor. The light detector and/or the semantic classifier can use these regions of interest to efficiently perform the search for the lights.
Type: Grant
Filed: February 28, 2020
Date of Patent: March 29, 2022
Assignee: GM Cruise Holdings LLC
Inventors: Clement Creusot, Divya Thuremella, Na Yu, Jia Pu
-
Publication number: 20220051037
Abstract: A vehicle includes one or more cameras that capture a plurality of two-dimensional images of a three-dimensional object. A light detector and/or a semantic classifier search within those images for lights of the three-dimensional object. A vehicle signal detection module fuses information from the light detector and/or the semantic classifier to produce a semantic meaning for the lights. The vehicle can be controlled based on the semantic meaning. Further, the vehicle can include a depth sensor and an object projector. The object projector can determine regions of interest within the two-dimensional images, based on the depth sensor. The light detector and/or the semantic classifier can use these regions of interest to efficiently perform the search for the lights.
Type: Application
Filed: October 29, 2021
Publication date: February 17, 2022
Applicant: GM Cruise Holdings LLC
Inventors: Clement Creusot, Divya Thuremella, Na Yu, Jia Pu
-
Patent number: 11195033
Abstract: A vehicle includes one or more cameras that capture a plurality of two-dimensional images of a three-dimensional object. A light detector and/or a semantic classifier search within those images for lights of the three-dimensional object. A vehicle signal detection module fuses information from the light detector and/or the semantic classifier to produce a semantic meaning for the lights. The vehicle can be controlled based on the semantic meaning. Further, the vehicle can include a depth sensor and an object projector. The object projector can determine regions of interest within the two-dimensional images, based on the depth sensor. The light detector and/or the semantic classifier can use these regions of interest to efficiently perform the search for the lights.
Type: Grant
Filed: February 27, 2020
Date of Patent: December 7, 2021
Assignee: GM Cruise Holdings LLC
Inventors: Clement Creusot, Divya Thuremella, Na Yu, Jia Pu
-
Publication number: 20210271906
Abstract: A vehicle includes one or more cameras that capture a plurality of two-dimensional images of a three-dimensional object. A light detector and/or a semantic classifier search within those images for lights of the three-dimensional object. A vehicle signal detection module fuses information from the light detector and/or the semantic classifier to produce a semantic meaning for the lights. The vehicle can be controlled based on the semantic meaning. Further, the vehicle can include a depth sensor and an object projector. The object projector can determine regions of interest within the two-dimensional images, based on the depth sensor. The light detector and/or the semantic classifier can use these regions of interest to efficiently perform the search for the lights.
Type: Application
Filed: February 28, 2020
Publication date: September 2, 2021
Applicant: GM Cruise Holdings LLC
Inventors: Clement Creusot, Divya Thuremella, Na Yu, Jia Pu
-
Publication number: 20210271905
Abstract: A vehicle includes one or more cameras that capture a plurality of two-dimensional images of a three-dimensional object. A light detector and/or a semantic classifier search within those images for lights of the three-dimensional object. A vehicle signal detection module fuses information from the light detector and/or the semantic classifier to produce a semantic meaning for the lights. The vehicle can be controlled based on the semantic meaning. Further, the vehicle can include a depth sensor and an object projector. The object projector can determine regions of interest within the two-dimensional images, based on the depth sensor. The light detector and/or the semantic classifier can use these regions of interest to efficiently perform the search for the lights.
Type: Application
Filed: February 27, 2020
Publication date: September 2, 2021
Applicant: GM Cruise Holdings LLC
Inventors: Clement Creusot, Divya Thuremella, Na Yu, Jia Pu
-
Publication number: 20210233390
Abstract: Systems, methods, and computer-readable media are provided for receiving traffic object data, including a geographic location of a traffic object, from a plurality of autonomous vehicles; comparing each vehicle's traffic object data with known traffic object data; determining a discrepancy between each vehicle's traffic object data and the known traffic object data; grouping the traffic object data based on the determined discrepancies; determining whether a group of the grouped traffic object data exceeds a threshold; and updating a traffic object map based on the traffic object data of the group that exceeds the threshold.
Type: Application
Filed: January 28, 2020
Publication date: July 29, 2021
Inventors: Georgios Georgiou, Clement Creusot, Matthias Wisniowski
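The group-and-threshold pipeline in this abstract can be sketched as follows. Reports that disagree with the known map by more than a tolerance are grouped per object, and only a sufficiently large group triggers a map update. The distance tolerance, vote count, and data shapes are all illustrative assumptions, not values from the filing.

```python
def update_traffic_map(known_map, reports, min_reports=3, tolerance_m=2.0):
    """known_map: {object_id: (x, y)}; reports: list of (object_id, (x, y)).

    Reports whose location disagrees with the known map by more than
    tolerance_m are grouped per object; a group of at least min_reports
    updates the map entry to the averaged reported location.
    """
    groups = {}
    for obj_id, (x, y) in reports:
        kx, ky = known_map.get(obj_id, (float("inf"), float("inf")))
        if ((x - kx) ** 2 + (y - ky) ** 2) ** 0.5 > tolerance_m:
            groups.setdefault(obj_id, []).append((x, y))
    updated = dict(known_map)
    for obj_id, pts in groups.items():
        if len(pts) >= min_reports:  # the "exceeds a threshold" step
            updated[obj_id] = (sum(p[0] for p in pts) / len(pts),
                               sum(p[1] for p in pts) / len(pts))
    return updated
```

Requiring multiple agreeing vehicles before updating the map filters out one-off perception errors by a single vehicle.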
-
Publication number: 20210124363
Abstract: An occlusion detection system for an autonomous vehicle is described herein, where a signal conversion system receives a three-dimensional sensor signal from a sensor system and projects the three-dimensional sensor signal into a two-dimensional range image having a plurality of pixel values that include distance information to objects captured in the range image. A localization system detects a first object in the range image, such as a traffic light, having first distance information and a second object in the range image, such as a foreground object, having second distance information. An occlusion polygon is defined around the second object and the range image is provided to an object perception system that excludes information within the occlusion polygon to determine a configuration of the first object. A directive is output by the object perception system to control the autonomous vehicle based upon occlusion detection.
Type: Application
Filed: December 31, 2020
Publication date: April 29, 2021
Inventor: Clement Creusot
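A minimal sketch of the 3-D-to-2-D projection and exclusion steps described here: points are binned into a spherical range image, and pixels inside an occlusion region are dropped before perception. For simplicity the "occlusion polygon" is stood in for by an axis-aligned pixel box, and the image size, field of view, and projection model are all made-up values, not the patented design.

```python
import math

def project_to_range_image(points, width=360, height=64, max_elev=0.4):
    """Project 3-D points (x, y, z) into a 2-D range image.

    Azimuth maps to columns and elevation to rows; each pixel keeps the
    nearest range seen so far.
    """
    image = [[None] * width for _ in range(height)]
    for x, y, z in points:
        rng = math.sqrt(x * x + y * y + z * z)
        az = math.atan2(y, x)                       # [-pi, pi)
        elev = math.asin(z / rng) if rng else 0.0
        col = min(width - 1, int((az + math.pi) / (2 * math.pi) * width))
        row = min(height - 1, int((elev + max_elev) / (2 * max_elev) * height))
        if 0 <= row < height and (image[row][col] is None or rng < image[row][col]):
            image[row][col] = rng
    return image

def visible_range(image, occlusion_box):
    """Return (row, col, range) pixels outside an occlusion region.

    occlusion_box = (row0, row1, col0, col1) stands in for the patent's
    occlusion polygon; pixels inside it are excluded from perception.
    """
    r0, r1, c0, c1 = occlusion_box
    return [(r, c, v)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v is not None and not (r0 <= r <= r1 and c0 <= c <= c1)]
```

Working in the 2-D range image makes the exclusion test a cheap per-pixel bounds check rather than a 3-D geometric query.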
-
Patent number: 10943134
Abstract: A disambiguation system for an autonomous vehicle is described herein that disambiguates a traffic light framed by a plurality of regions of interest. The autonomous vehicle includes a localization system that defines the plurality of regions of interest around traffic lights captured in a sensor signal and provides an input to a disambiguation system. When the captured traffic lights are in close proximity, the plurality of regions of interest overlap each other such that a traffic light disposed in the overlapping region is ambiguous to an object detector because it is framed by more than one region of interest. A disambiguation system associates the traffic light with the correct region of interest to disambiguate a relationship thereof and generates a disambiguated directive for controlling the autonomous vehicle. Disambiguation can be achieved according to any of an edge-based technique, a vertex-based technique, and a region of interest distance-based technique.
Type: Grant
Filed: August 31, 2018
Date of Patent: March 9, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Clement Creusot, Sarthak Sahu
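Of the three techniques the abstract names, the distance-based one is the simplest to sketch: when a detected light falls inside overlapping regions of interest, assign it to the ROI whose center it is closest to. The data shapes and the use of ROI centers are illustrative assumptions.

```python
def disambiguate(light_center, rois):
    """Assign a detected traffic light to the closest region of interest.

    light_center: (x, y) of the detection; rois: {roi_id: (cx, cy)} centers.
    Returns the id of the nearest ROI (squared distance, no sqrt needed).
    """
    lx, ly = light_center
    return min(rois, key=lambda rid: (rois[rid][0] - lx) ** 2 + (rois[rid][1] - ly) ** 2)
```

The edge- and vertex-based techniques mentioned in the abstract would instead compare the detection against ROI boundaries rather than centers.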
-
Publication number: 20210048830
Abstract: An autonomous vehicle incorporating a multimodal multi-technique signal fusion system is described herein. The signal fusion system is configured to receive at least one sensor signal that is output by at least one sensor system (multimodal), such as at least one image sensor signal from at least one camera. The at least one sensor signal is provided to a plurality of object detector modules of different types (multi-technique), such as an absolute detector module and a relative activation detector module, that generate independent directives based on the at least one sensor signal. The independent directives are fused by a signal fusion module to output a fused directive for controlling the autonomous vehicle.
Type: Application
Filed: October 31, 2020
Publication date: February 18, 2021
Inventors: Clement Creusot, Sarthak Sahu, Matthais Wisniowski
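The final fusion step, where independent directives become one, can be sketched as a confidence-weighted vote. The directive names, the voting rule, and the tie-break toward the conservative directive are all assumptions for illustration; the patent does not specify this particular scheme.

```python
def fuse_directives(directives):
    """Fuse independent (directive, confidence) pairs into one directive.

    directives: list of (name, confidence) from independent detector
    modules. Confidences for the same directive are summed; on an exact
    tie the conservative directive "stop" wins.
    """
    totals = {}
    for name, conf in directives:
        totals[name] = totals.get(name, 0.0) + conf
    # Key is (total confidence, is-"stop") so "stop" beats a tied rival.
    return max(sorted(totals), key=lambda n: (totals[n], n == "stop"))
```

Keeping the detector modules independent until this point means one faulty technique cannot silently bias the others, only outvote them.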
-
Patent number: 10884424
Abstract: An occlusion detection system for an autonomous vehicle is described herein, where a signal conversion system receives a three-dimensional sensor signal from a sensor system and projects the three-dimensional sensor signal into a two-dimensional range image having a plurality of pixel values that include distance information to objects captured in the range image. A localization system detects a first object in the range image, such as a traffic light, having first distance information and a second object in the range image, such as a foreground object, having second distance information. An occlusion polygon is defined around the second object and the range image is provided to an object perception system that excludes information within the occlusion polygon to determine a configuration of the first object. A directive is output by the object perception system to control the autonomous vehicle based upon occlusion detection.
Type: Grant
Filed: September 7, 2018
Date of Patent: January 5, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Clement Creusot
-
Patent number: 10852743
Abstract: An autonomous vehicle incorporating a multimodal multi-technique signal fusion system is described herein. The signal fusion system is configured to receive at least one sensor signal that is output by at least one sensor system (multimodal), such as at least one image sensor signal from at least one camera. The at least one sensor signal is provided to a plurality of object detector modules of different types (multi-technique), such as an absolute detector module and a relative activation detector module, that generate independent directives based on the at least one sensor signal. The independent directives are fused by a signal fusion module to output a fused directive for controlling the autonomous vehicle.
Type: Grant
Filed: September 7, 2018
Date of Patent: December 1, 2020
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Clement Creusot, Sarthak Sahu, Matthais Wisniowski
-
Publication number: 20200183386
Abstract: Sun-aware routing and control of an autonomous vehicle is described herein. A location and an orientation of a sensor system are identified within the environment of the autonomous vehicle to determine whether the sun causes a threshold level of perception degradation to the sensor system incident to generating a sensor signal indicative of a traffic light. The determination is based upon perception degradation data that can be precomputed for locations and orientations within the environment for dates and times of day. The perception degradation data is based upon the location of at least one traffic light and positions of the sun relative to the locations and the orientations within the environment. A mechanical system of the autonomous vehicle is controlled to execute a maneuver that reduces perception degradation to the sensor system when the perception degradation is determined to exceed the threshold level of perception degradation.
Type: Application
Filed: December 11, 2018
Publication date: June 11, 2020
Inventor: Clement Creusot
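Because the degradation data is precomputed, the online check reduces to a table lookup against a threshold, roughly as below. The table keys (location, heading, hour), the score scale, and the threshold value are all illustrative assumptions.

```python
def needs_reroute(degradation_table, location, heading, hour, threshold=0.7):
    """Check precomputed sun-glare degradation for a pose and time.

    degradation_table maps (location, heading, hour) to a degradation
    score in [0, 1]; the vehicle should maneuver (e.g. reroute) when the
    score exceeds the threshold. Unknown poses default to no degradation.
    """
    return degradation_table.get((location, heading, hour), 0.0) > threshold
```

Precomputing the table offline is what makes this usable in routing: the planner can query candidate routes cheaply instead of simulating sun geometry per edge.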
-
Publication number: 20200081448
Abstract: An occlusion detection system for an autonomous vehicle is described herein, where a signal conversion system receives a three-dimensional sensor signal from a sensor system and projects the three-dimensional sensor signal into a two-dimensional range image having a plurality of pixel values that include distance information to objects captured in the range image. A localization system detects a first object in the range image, such as a traffic light, having first distance information and a second object in the range image, such as a foreground object, having second distance information. An occlusion polygon is defined around the second object and the range image is provided to an object perception system that excludes information within the occlusion polygon to determine a configuration of the first object. A directive is output by the object perception system to control the autonomous vehicle based upon occlusion detection.
Type: Application
Filed: September 7, 2018
Publication date: March 12, 2020
Inventor: Clement Creusot
-
Publication number: 20200081450
Abstract: An autonomous vehicle incorporating a multimodal multi-technique signal fusion system is described herein. The signal fusion system is configured to receive at least one sensor signal that is output by at least one sensor system (multimodal), such as at least one image sensor signal from at least one camera. The at least one sensor signal is provided to a plurality of object detector modules of different types (multi-technique), such as an absolute detector module and a relative activation detector module, that generate independent directives based on the at least one sensor signal. The independent directives are fused by a signal fusion module to output a fused directive for controlling the autonomous vehicle.
Type: Application
Filed: September 7, 2018
Publication date: March 12, 2020
Inventors: Clement Creusot, Sarthak Sahu, Matthais Wisniowski
-
Publication number: 20200074194
Abstract: A disambiguation system for an autonomous vehicle is described herein that disambiguates a traffic light framed by a plurality of regions of interest. The autonomous vehicle includes a localization system that defines the plurality of regions of interest around traffic lights captured in a sensor signal and provides an input to a disambiguation system. When the captured traffic lights are in close proximity, the plurality of regions of interest overlap each other such that a traffic light disposed in the overlapping region is ambiguous to an object detector because it is framed by more than one region of interest. A disambiguation system associates the traffic light with the correct region of interest to disambiguate a relationship thereof and generates a disambiguated directive for controlling the autonomous vehicle. Disambiguation can be achieved according to any of an edge-based technique, a vertex-based technique, and a region of interest distance-based technique.
Type: Application
Filed: August 31, 2018
Publication date: March 5, 2020
Inventors: Clement Creusot, Sarthak Sahu
-
Patent number: 10489665
Abstract: Systems and methods are provided for controlling a vehicle. In one embodiment, a determination is made that a traffic control person and a traffic control sign are present within the environment of the vehicle based on sensor data, such as optical camera data. The position and orientation of the traffic control sign relative to the traffic control person are determined, e.g., via lidar sensor data, and the validity of the traffic control person and the traffic control sign is determined based on the position and orientation of the traffic control sign.
Type: Grant
Filed: September 7, 2017
Date of Patent: November 26, 2019
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Clement Creusot
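One plausible reading of the validity check, sketched below: the sign is accepted when it is within arm's reach of the detected person and roughly facing the vehicle. Both thresholds and the facing convention are invented for illustration and do not come from the patent.

```python
def is_valid_control_sign(person_pos, sign_pos, sign_facing_deg,
                          max_dist_m=1.5, max_facing_off_deg=45.0):
    """Toy validity check for a traffic-control person holding a sign.

    person_pos / sign_pos: (x, y) in meters; sign_facing_deg: 0 means the
    sign faces the vehicle directly. Valid when the sign is close to the
    person and roughly oriented toward the vehicle.
    """
    dx = sign_pos[0] - person_pos[0]
    dy = sign_pos[1] - person_pos[1]
    close_enough = (dx * dx + dy * dy) ** 0.5 <= max_dist_m
    facing_vehicle = abs(sign_facing_deg) <= max_facing_off_deg
    return close_enough and facing_vehicle
```

Tying the sign's validity to a nearby person filters out, say, a discarded sign lying on the shoulder.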
-
Patent number: 10282999
Abstract: Systems and methods are provided for controlling a vehicle. In one embodiment, a method of detecting road construction includes receiving sensor data relating to an environment associated with a vehicle, determining that construction-related objects are present within the environment based on the sensor data, and determining whether a travel-impacting construction zone is present within the environment based on the presence of the construction-related objects in the environment.
Type: Grant
Filed: March 17, 2017
Date of Patent: May 7, 2019
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Clement Creusot
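The step from "construction objects are present" to "a travel-impacting zone exists" can be sketched as counting construction objects that actually sit in the lane. The object labels, lane half-width, and object count are illustrative assumptions, not criteria from the patent.

```python
def is_travel_impacting(objects, lane_halfwidth_m=2.0, min_objects=2):
    """Decide whether detected objects form a travel-impacting construction zone.

    objects: list of (label, lateral_offset_m) detections, where the offset
    is distance from the lane center. Only construction-related labels that
    fall within the lane's half-width count toward the zone.
    """
    construction = {"cone", "barrel", "construction_sign"}
    in_lane = [o for o in objects
               if o[0] in construction and abs(o[1]) <= lane_halfwidth_m]
    return len(in_lane) >= min_objects
```

A single cone far off the shoulder therefore does not trigger a zone, while two cones in the lane do.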
-
Publication number: 20180095475
Abstract: Systems and methods are provided for controlling a vehicle. In one embodiment, a visual position estimation method includes providing a ground model of an environment, and receiving, at a vehicle, sensor data relating to the environment, the sensor data including optical image data acquired by an optical camera coupled to the vehicle. A virtual camera image is generated based on the ground model, the position of the vehicle, and a position of the optical camera. The position of an object in the environment is determined based on the virtual camera image and the optical image data.
Type: Application
Filed: November 22, 2017
Publication date: April 5, 2018
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Gautier Minster, Clement Creusot
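The core of generating a virtual camera image is projecting ground-model points through a camera model at the vehicle's pose. The toy pinhole projection below assumes a camera looking along +x with no rotation and made-up intrinsics; the patent's actual camera model is not specified in the abstract.

```python
def project_ground_point(point_xyz, cam_pos, focal_px=800.0, cx=640.0, cy=360.0):
    """Project a ground-model point into a virtual pinhole camera.

    point_xyz / cam_pos: (x, y, z) in meters; the camera looks along +x
    from cam_pos with no rotation. Returns (u, v) pixel coordinates, or
    None for points at or behind the camera plane.
    """
    x = point_xyz[0] - cam_pos[0]
    y = point_xyz[1] - cam_pos[1]
    z = point_xyz[2] - cam_pos[2]
    if x <= 0:
        return None
    # Standard pinhole model: lateral/vertical offsets scale with 1/depth.
    return (cx + focal_px * y / x, cy - focal_px * z / x)
```

Comparing where ground-model features land in this virtual image against where they appear in the real camera image is what lets the method localize objects.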
-
Publication number: 20170364759
Abstract: Systems and methods are provided for controlling a vehicle. In one embodiment, a determination is made that a traffic control person and a traffic control sign are present within the environment of the vehicle based on sensor data, such as optical camera data. The position and orientation of the traffic control sign relative to the traffic control person are determined, e.g., via lidar sensor data, and the validity of the traffic control person and the traffic control sign is determined based on the position and orientation of the traffic control sign.
Type: Application
Filed: September 7, 2017
Publication date: December 21, 2017
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Clement Creusot