Vehicle systems and methods utilizing LIDAR data for road condition estimation
A system and method for estimating road conditions ahead of a vehicle, including: a LIDAR sensor operable for generating a LIDAR point cloud; a processor executing a road condition estimation algorithm stored in a memory, the road condition estimation algorithm performing the steps including: detecting a ground plane or drivable surface in the LIDAR point cloud; superimposing an M×N matrix on at least a portion of the LIDAR point cloud; for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position, a feature elevation, and a scaled reflectance index; and, from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability for each patch of the LIDAR point cloud; and a vehicle control system operable for, based on the determined slipperiness probability for each patch of the LIDAR point cloud, affecting an operation of the vehicle.
The present disclosure relates generally to the automotive and vehicle safety fields. More particularly, the present disclosure relates to vehicle systems and methods utilizing LIDAR data for road condition estimation.
BACKGROUND

Some modern vehicles report measured road condition to the cloud for subsequent use by other vehicles. The pre-emptive knowledge or estimation of road condition (i.e., slipperiness or friction) can significantly reduce road condition-related accidents. Often such road condition estimation is local and camera-based, and may include segmenting a camera image of the drivable surface in front of a vehicle into M×N patches and assessing the textural characteristics of each patch for road surface slipperiness, road rutting, etc.
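For illustration only, the following minimal Python sketch shows what such patch-wise texture assessment of a camera image might look like; the 4×5 grid, the intensity and gradient statistics, and the synthetic image are assumptions made for demonstration and are not the camera pipeline referenced above.

```python
# Minimal sketch, not the disclosed camera pipeline: divide a grayscale image
# into M x N patches and compute simple texture statistics per patch.
import numpy as np

def patch_texture_stats(image: np.ndarray, m: int, n: int) -> np.ndarray:
    """Return an (m, n, 2) array of [intensity std, gradient energy] per patch."""
    h, w = image.shape
    stats = np.zeros((m, n, 2))
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    for i in range(m):
        for j in range(n):
            rows = slice(i * h // m, (i + 1) * h // m)
            cols = slice(j * w // n, (j + 1) * w // n)
            stats[i, j, 0] = image[rows, cols].std()      # texture roughness proxy
            stats[i, j, 1] = grad_mag[rows, cols].mean()  # edge/texture energy
    return stats

# Example with a synthetic 480x640 "camera image"
rng = np.random.default_rng(0)
demo = rng.integers(0, 255, size=(480, 640)).astype(np.uint8)
print(patch_texture_stats(demo, m=4, n=5).shape)  # (4, 5, 2)
```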
The pre-emptive knowledge or estimation of road condition is especially important in autonomous driving (AD) settings, where, for example, an AD mode may be turned off to ensure safe operation if the road surface ahead is predicted to be too slippery, or a path with the greatest expected friction may be preferred. Most camera-based systems can detect and analyze the road surface ahead with high confidence and high reliability up to about 50 m under well-lit conditions, with the road condition within several meters of the wheel tracks being the most important for path planning, for example.
LIDAR sensors are generally capable of detecting the road surface in terms of relative road surface location (x and y coordinates), road surface height (z coordinate), and reflectance (r) reliably up to 70-100 m ahead. Thus, this information can be used to predict the road surface condition within several meters of the wheel tracks, assisting in path planning significantly.
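As a point of reference, the sketch below assumes a LIDAR frame delivered as an (N, 4) array of (x, y, z, r) values with x pointing forward from the sensor, and simply selects the forward points of interest; the 100 m range limit and the 3 m corridor half-width are illustrative assumptions rather than disclosed values.

```python
# Minimal sketch, assuming each LIDAR frame is an (N, 4) array of (x, y, z, r).
import numpy as np

def forward_corridor(points: np.ndarray,
                     max_range_m: float = 100.0,
                     half_width_m: float = 3.0) -> np.ndarray:
    """Keep points ahead of the vehicle, within range and a lateral corridor."""
    x, y = points[:, 0], points[:, 1]
    mask = (x > 0.0) & (x <= max_range_m) & (np.abs(y) <= half_width_m)
    return points[mask]

# Usage with a synthetic frame
rng = np.random.default_rng(1)
frame = rng.uniform([-10, -20, -2, 0], [120, 20, 2, 1], size=(50_000, 4))
print(forward_corridor(frame).shape)
```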
When a vehicle is in AD mode, the early detection of detrimental (i.e., slippery) road conditions in the form of ice patches, slush patches, water patches, etc. is necessary to trigger a stop to AD functionalities and guide a human driver to take over for assured safety conditions. Thus, advance warning of road conditions is imperative. Again, relative slipperiness analysis of the road surface ahead can also lend useful insights for path planning.
Because LIDAR sensors can operate at significant distances ahead of a vehicle as compared to conventional front-facing cameras, the relative position of the road surface, the height of accumulation on the road surface, and the scaled reflectance index of each road surface point can be used with a machine learning (ML) model to predict slippery road surfaces ahead, raise alerts, and selectively disable AD mode.
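The disclosure does not tie the prediction to any particular ML model, so the following sketch uses an off-the-shelf logistic regression from scikit-learn over synthetic per-patch features (mean elevation, elevation spread, mean scaled reflectance), purely to illustrate the mapping from features to a slipperiness probability; the feature values, label rule, and classifier choice are all assumptions.

```python
# Hedged sketch: a plain logistic regression stands in for the (unspecified) ML model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
# Synthetic per-patch features: [mean z, std z, mean distance-scaled r]
X = np.column_stack([
    rng.normal(0.00, 0.01, n),   # mean accumulation height (m)
    rng.normal(0.01, 0.005, n),  # elevation spread within the patch
    rng.uniform(0.0, 1.0, n),    # mean scaled reflectance
])
# Synthetic label: high reflectance plus raised accumulation treated as "slippery"
y = ((X[:, 2] > 0.7) & (X[:, 0] > 0.0)).astype(int)

clf = LogisticRegression().fit(X, y)
p_slippery = clf.predict_proba(X[:5])[:, 1]  # per-patch slipperiness probability
print(np.round(p_slippery, 3))
```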
To date, M×N patches have been superimposed on a front-facing camera image of a road surface to predict a patchiness index of the road surface up to 30-50 m ahead. Here, five lateral patches may be used, for example, including a patch under each vehicle wheel, a center patch between the vehicle wheel patches, and two side patches outside of the vehicle wheel patches. Such a front-facing camera image 10 with the superimposed patches, and a bird's eye view (BEV) transformation 12 of the front-facing camera image 10 with the superimposed patches broken into columns and rows, are shown in the accompanying drawings.
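The five-lateral-patch layout described above can be pictured as a set of lateral bins in the BEV frame. The sketch below assumes a track width of roughly 1.8 m and 1 m wide wheel patches; these widths and bin edges are illustrative assumptions, not values from the disclosure.

```python
# Sketch of the five-lateral-patch layout using assumed bin edges (metres).
import numpy as np

# Left to right: side | left wheel | center | right wheel | side
edges = np.array([-4.0, -1.4, -0.4, 0.4, 1.4, 4.0])
labels = ["left side", "left wheel", "center", "right wheel", "right side"]

def lateral_patch(y: np.ndarray) -> np.ndarray:
    """Map lateral offsets (m) to one of the five patch labels (or 'off-road')."""
    idx = np.digitize(y, edges) - 1          # bin index into the edge array
    out = np.full(y.shape, "off-road", dtype=object)
    in_range = (idx >= 0) & (idx < len(labels))
    out[in_range] = np.array(labels, dtype=object)[idx[in_range]]
    return out

print(lateral_patch(np.array([-0.9, 0.0, 0.9, 2.0, 5.0])))
```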
To date, LIDAR sensors have been used to detect the depth of water on the road surface ahead of a vehicle, with the LIDAR sensor mounted close to the ground to enhance visibility. However, this setup is not scalable to long-range object or road surface detection. For such applications, a higher LIDAR sensor mounting position on the vehicle is preferred.
Such higher LIDAR sensor mounting positions have been used, however, typically to track reflectance on a road surface along two straight-line paths, presumably where the wheels will contact the road surface. These one-dimensional (1-D) signals along the wheel paths correspond to the reflectance (r) along the wheel paths and can be used to detect the presence of water or ice along the wheel paths using statistical and/or ML models. However, the 1-D signals are vendor and scenario sensitive and do not provide a view of the entire road condition ahead. Dividing the road surface ahead into patches and using both accumulation height (z) and reflectance (r) parameters would provide a holistic view of road condition and aid motion control and path planning for AD, as well as advanced driver assistance systems (ADAS), modalities.
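A 1-D wheel-path reflectance signal of the kind described above can be sketched as a range-binned average of r along an assumed straight track; the corridor half-width, the 1 m bin size, and the synthetic points are assumptions used only for illustration.

```python
# Sketch: mean reflectance per range bin along an assumed straight wheel path.
import numpy as np

def wheel_path_reflectance(points: np.ndarray, y_track: float,
                           half_width: float = 0.3, bin_m: float = 1.0,
                           max_range: float = 100.0) -> np.ndarray:
    """Mean reflectance per range bin along a wheel path at lateral offset y_track."""
    x, y, r = points[:, 0], points[:, 1], points[:, 3]
    on_track = np.abs(y - y_track) <= half_width
    bins = np.arange(0.0, max_range + bin_m, bin_m)
    sums, _ = np.histogram(x[on_track], bins=bins, weights=r[on_track])
    counts, _ = np.histogram(x[on_track], bins=bins)
    return np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

rng = np.random.default_rng(3)
pts = rng.uniform([0, -2, -0.1, 0], [100, 2, 0.1, 1], size=(20_000, 4))
signal = wheel_path_reflectance(pts, y_track=0.9)
print(signal[:10])  # reflectance profile for the first 10 m of one wheel path
```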
Finally, existing LIDAR sensor systems have been used generally to detect road unevenness, such as road irregularities and potholes, and maneuver a vehicle away from such road unevenness, without maximizing the inputs from the perception sensor to control switching between AD/ADAS modalities and aid in path planning. Such inputs could be better exploited through the superimposition of spatial patches on the road surface to generate pseudo-clusters indicative of the probability, p, of a road condition (slipperiness, ice, snow, slush, or water), providing a holistic view of road condition and aiding motion control and path planning for AD, as well as ADAS, modalities.
SUMMARY

Thus, in various exemplary embodiments, the present disclosure provides systems and methods utilizing LIDAR data for road condition estimation that segment the road surface ahead of a vehicle into M×N patches and, optionally, BEV transform this construct. For each segment, x, y, z, and r are then determined—taking full advantage of position, accumulation height, and reflectance information to aid motion control and path planning for AD, as well as ADAS, modalities.
The LIDAR point clouds are utilized with the M×N patches, as opposed to simply clustering the road surface based on absolute r indices, because relative variations in r are observed for the same road surface and the associated deterministic values are not always consistent. Thus, if there is a uniform stretch of black road surface ahead, ideally all points would have the same r index, but this is not the case in practice, as the distance from the LIDAR sensor, light scattering, and variations in scanning patterns all have significant impacts. The context of r is learnable, but not always the values, so clustering based on the r index alone is unreliable and non-generalizable. Accordingly, the M×N patches on the road surface are designed to find patterns on the road surface that would have been clustered if r were a reliable and repeatable quantity.
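A minimal sketch of the distance-scaled reflectance index follows, computing r/x or r/x² as recited in the claims; the epsilon guard against division by zero is an added assumption.

```python
# Sketch: scale the raw return r by the point's forward distance from the sensor.
import numpy as np

def scaled_reflectance(points: np.ndarray, power: int = 1,
                       eps: float = 1e-3) -> np.ndarray:
    """Return r / x**power for each (x, y, z, r) point (power = 1 or 2)."""
    x, r = points[:, 0], points[:, 3]
    return r / np.maximum(np.abs(x), eps) ** power

rng = np.random.default_rng(4)
pts = rng.uniform([1, -3, -0.1, 0], [100, 3, 0.1, 1], size=(10, 4))
print(scaled_reflectance(pts, power=1))  # r/x
print(scaled_reflectance(pts, power=2))  # r/x^2
```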
In one exemplary embodiment, the present disclosure provides a method for estimating road condition ahead of a vehicle utilizing a LIDAR sensor, the method including: obtaining a LIDAR point cloud from the LIDAR sensor; detecting a ground plane or drivable surface in the LIDAR point cloud; superimposing an M×N matrix on at least a portion of the LIDAR point cloud; for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position, a feature elevation, and a scaled reflectance index; from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability for each patch of the LIDAR point cloud; and, based on the determined slipperiness probability for each patch of the LIDAR point cloud, one or more of alerting a driver of the vehicle to an upcoming slippery road condition, enabling/disabling one of a driver assist and an autonomous driving functionality, providing a display of an estimated road condition ahead to the driver of the vehicle for one or more of motion and path planning purposes, updating a past determined slipperiness probability for each patch of a past LIDAR point cloud, and reporting the determined slipperiness probability for each patch of the LIDAR point cloud to a cloud server. The LIDAR sensor is coupled to the vehicle above the ground plane or drivable surface. The method further includes transforming the LIDAR point cloud from a three-dimensional LIDAR point cloud to a bird's-eye-view LIDAR point cloud. Detecting the ground plane or drivable surface in the LIDAR point cloud includes detecting the ground plane or drivable surface in the LIDAR point cloud using one of an unsupervised iterative algorithm and a supervised deep learning/machine learning algorithm. The scaled reflectance index is scaled by its relative distance from the LIDAR sensor.
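The following end-to-end sketch strings the listed steps together under stated assumptions: a simple RANSAC-style iterative plane fit stands in for one possible "unsupervised iterative algorithm", the 10×5 grid and per-patch feature set are arbitrary choices, and the final probability rule is a toy placeholder for a trained model rather than the disclosed method.

```python
# End-to-end sketch under stated assumptions; not the disclosed implementation.
import numpy as np

def ransac_ground_plane(points, n_iter=100, tol=0.05, rng=None):
    """Return an inlier mask for the dominant plane (assumed to be the ground)."""
    rng = np.random.default_rng() if rng is None else rng
    xyz = points[:, :3]
    best_mask = np.zeros(len(xyz), dtype=bool)
    for _ in range(n_iter):
        sample = xyz[rng.choice(len(xyz), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((xyz - sample[0]) @ normal)
        mask = dist < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def patch_features(ground_points, m=10, n=5, x_range=(0, 100), y_range=(-5, 5)):
    """Per patch of the M x N grid: mean elevation, elevation spread, mean r/x."""
    x, y, z, r = ground_points.T
    scaled_r = r / np.maximum(x, 1e-3)
    xi = np.clip(((x - x_range[0]) / (x_range[1] - x_range[0]) * m).astype(int), 0, m - 1)
    yi = np.clip(((y - y_range[0]) / (y_range[1] - y_range[0]) * n).astype(int), 0, n - 1)
    feats = np.zeros((m, n, 3))
    for i in range(m):
        for j in range(n):
            sel = (xi == i) & (yi == j)
            if sel.any():
                feats[i, j] = [z[sel].mean(), z[sel].std(), scaled_r[sel].mean()]
    return feats

def slipperiness_probability(feats):
    """Toy stand-in for a trained model: higher scaled reflectance -> higher p."""
    return np.clip(feats[..., 2] / (feats[..., 2].max() + 1e-9), 0.0, 1.0)

# Usage with a synthetic cloud of (x, y, z, r) points
rng = np.random.default_rng(5)
cloud = rng.uniform([1, -5, -0.2, 0], [100, 5, 0.2, 1], size=(30_000, 4))
ground = cloud[ransac_ground_plane(cloud, rng=rng)]
p = slipperiness_probability(patch_features(ground))
print(p.shape)  # (10, 5) slipperiness probability per patch
```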
In another exemplary embodiment, the present disclosure provides a non-transitory computer readable medium for estimating road condition ahead of a vehicle stored in a memory and executed by a processor to perform the steps including: obtaining a LIDAR point cloud from a LIDAR sensor; detecting a ground plane or drivable surface in the LIDAR point cloud; superimposing an M×N matrix on at least a portion of the LIDAR point cloud; for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position, a feature elevation, and a scaled reflectance index; from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability for each patch of the LIDAR point cloud; and, based on the determined slipperiness probability for each patch of the LIDAR point cloud, one or more of alerting a driver of the vehicle to an upcoming slippery road condition, enabling/disabling one of a driver assist and an autonomous driving functionality, providing a display of an estimated road condition ahead to the driver of the vehicle for one or more of motion and path planning purposes, updating a past determined slipperiness probability for each patch of a past LIDAR point cloud, and reporting the determined slipperiness probability for each patch of the LIDAR point cloud to a cloud server. The LIDAR sensor is coupled to the vehicle above the ground plane or drivable surface. The steps further include transforming the LIDAR point cloud from a three-dimensional LIDAR point cloud to a bird's-eye-view LIDAR point cloud. Detecting the ground plane or drivable surface in the LIDAR point cloud includes detecting the ground plane or drivable surface in the LIDAR point cloud using one of an unsupervised iterative algorithm and a supervised deep learning/machine learning algorithm. The scaled reflectance index is scaled by its relative distance from the LIDAR sensor.
In a further exemplary embodiment, the present disclosure provides a system for estimating road condition ahead of a vehicle, the system including: a LIDAR sensor operable for generating a LIDAR point cloud; a processor executing a road condition estimation algorithm stored in a memory, the road condition estimation algorithm performing the steps including: detecting a ground plane or drivable surface in the LIDAR point cloud; superimposing an M×N matrix on at least a portion of the LIDAR point cloud; for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position, a feature elevation, and a scaled reflectance index; and, from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability for each patch of the LIDAR point cloud; and a vehicle control system operable for, based on the determined slipperiness probability for each patch of the LIDAR point cloud, one or more of alerting a driver of the vehicle to an upcoming slippery road condition, enabling/disabling one of a driver assist and an autonomous driving functionality, providing a display of an estimated road condition ahead to the driver of the vehicle for one or more of motion and path planning purposes, updating a past determined slipperiness probability for each patch of a past LIDAR point cloud, and reporting the determined slipperiness probability for each patch of the LIDAR point cloud to a cloud server. The LIDAR sensor is coupled to the vehicle above the ground plane or drivable surface. The steps performed by the road condition estimation algorithm further include transforming the LIDAR point cloud from a three-dimensional LIDAR point cloud to a bird's-eye-view LIDAR point cloud. Detecting the ground plane or drivable surface in the LIDAR point cloud includes detecting the ground plane or drivable surface in the LIDAR point cloud using one of an unsupervised iterative algorithm and a supervised deep learning/machine learning algorithm. The scaled reflectance index is scaled by its relative distance from the LIDAR sensor.
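Downstream, a vehicle control system might map the per-patch probability grid to actions such as the alerts and AD enable/disable recited above; the thresholds, wheel-column indices, and action strings in this sketch are assumptions for illustration only.

```python
# Sketch of a controller consuming an (M, N) slipperiness-probability grid.
import numpy as np

def plan_actions(p: np.ndarray, warn_at: float = 0.5, disable_ad_at: float = 0.8,
                 wheel_cols: tuple = (1, 3)) -> list:
    """Map per-patch probabilities to high-level actions (assumed thresholds)."""
    actions = []
    wheel_p = p[:, list(wheel_cols)]          # patches under the wheel tracks
    if wheel_p.max() >= disable_ad_at:
        actions.append("disable AD mode and request driver takeover")
    elif wheel_p.max() >= warn_at:
        actions.append("alert driver to slippery road ahead")
    # Prefer the lateral column with the lowest expected slipperiness for planning
    best_col = int(np.argmin(p.mean(axis=0)))
    actions.append(f"bias path planning toward lateral column {best_col}")
    actions.append("report per-patch probabilities to cloud")
    return actions

grid = np.random.default_rng(6).uniform(0, 1, size=(10, 5))
print(plan_actions(grid))
```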
The present disclosure is illustrated and described herein with reference to the various drawings, in which like reference numbers are used to denote like system components/method steps, as appropriate.
The present disclosure provides systems and methods utilizing LIDAR data for road condition estimation that segment the road surface ahead of a vehicle into M×N patches and, optionally, BEV transform this construct. For each segment, x, y, z, and r are then determined—taking full advantage of position, accumulation height, and reflectance information to aid motion control and path planning for AD, as well as ADAS, modalities.
The LIDAR point clouds are utilized with the M×N patches, as opposed to simply clustering the road surface based on absolute r indices, because relative variations in r are observed for the same road surface and the associated deterministic values are not always consistent. Thus, if there is a uniform stretch of black road surface ahead, ideally all points would have the same r index, but this is not the case in practice, as the distance from the LIDAR sensor, light scattering, and variations in scanning patterns all have significant impacts. The context of r is learnable, but not always the values, so clustering based on the r index alone is unreliable and non-generalizable. Accordingly, the M×N patches on the road surface are designed to find patterns on the road surface that would have been clustered if r were a reliable and repeatable quantity.
Referring now specifically to the drawings, the operation of the road condition estimation system is described in greater detail below.
Based on the predicted slipperiness index, p, for each patch and the overall matrix, alarms can be raised, ADAS functionalities can be implemented, and/or an active AD mode can be disabled 72 by the vehicle control system 56. Alternatively or in addition, the predicted slipperiness indices, p, ahead of the vehicle can be formatted and displayed visually to a driver of the vehicle and/or fed into the AD function to allow for enhanced vehicle motion and trajectory planning 74. Alternatively or in addition, as the vehicle moves and more LIDAR frames 52 are acquired, the relative positions of road patches can be converted to global coordinate positions and the slipperiness probability updated 76 for each global coordinate position with each new LIDAR frame 52. The same patch on the road surface may be visible in multiple LIDAR frames 52, and updating the associated slipperiness probability with respect to global coordinate positions may thus optimize path planning information. All data, of course, may be transmitted to the cloud for use by other vehicles as well.
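One possible way to realize the global-coordinate update 76 is a simple running (exponentially weighted) average per global grid cell, as sketched below; the 1 m cell size and the blending weight are assumptions rather than disclosed values.

```python
# Sketch: fuse per-patch probabilities across LIDAR frames in a global grid.
import numpy as np

class GlobalSlipperinessMap:
    """Keeps a running slipperiness estimate per global (x, y) cell."""

    def __init__(self, cell_m: float = 1.0, alpha: float = 0.3):
        self.cell_m = cell_m      # grid resolution in metres (assumed)
        self.alpha = alpha        # weight of the newest observation (assumed)
        self.cells = {}           # (ix, iy) -> fused probability

    def update(self, global_xy: np.ndarray, p: np.ndarray) -> None:
        """Blend new per-patch probabilities into the global map."""
        for (gx, gy), prob in zip(global_xy, p):
            key = (int(gx // self.cell_m), int(gy // self.cell_m))
            old = self.cells.get(key, prob)
            self.cells[key] = (1 - self.alpha) * old + self.alpha * prob

fused = GlobalSlipperinessMap()
# Two consecutive frames observe the same 10 m stretch of one wheel track
xy = np.column_stack([np.arange(10.0), np.full(10, -0.9)])
fused.update(xy, np.full(10, 0.2))
fused.update(xy, np.full(10, 0.8))
print(fused.cells[(0, -1)])  # fused estimate after two observations of one cell
```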
The LIDAR perception sensor utilized here extends perception beyond camera images to conditions with a low-standing sun, poor lighting, and night-time driving. The framework provided segments the ground plane LIDAR point cloud into several segments and applies statistical features within the ground plane or drivable surface patches to describe a complete road condition in front of the vehicle. Based on the segmented ground plane or drivable surface point clouds, a probability map of slipperiness underneath the vehicle wheels and in the nearby vicinity can be generated, which can be used to warn a driver, turn off AD mode (to ensure safety), and plan the vehicle path/trajectory accordingly to minimize vehicle slippage or hydroplaning. As LIDAR frames continue to be acquired, the probability of slipperiness can be updated for global coordinates to allow for optimal vehicle control and path planning applications involving AD functionalities.
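As a small follow-on, the per-patch probabilities along a wheel-track column can be combined into a single look-ahead risk figure for path planning; treating patch outcomes as independent is a simplifying assumption made only for this example, not a claim of the disclosure.

```python
# Sketch: aggregate per-patch probabilities along a wheel-track column.
import numpy as np

def horizon_slip_risk(p_column: np.ndarray) -> float:
    """P(at least one slippery patch) = 1 - prod(1 - p_i), assuming independence."""
    return float(1.0 - np.prod(1.0 - p_column))

left_wheel = np.array([0.05, 0.10, 0.02, 0.40])   # per-patch p along the left track
right_wheel = np.array([0.05, 0.08, 0.03, 0.10])  # per-patch p along the right track
print(horizon_slip_risk(left_wheel), horizon_slip_risk(right_wheel))
```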
It is to be recognized that, depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) a tangible computer-readable storage medium that is non-transitory or (2) a communication medium, such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can include random-access memory (RAM), read-only memory (ROM), electrically erasable-programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio frequency (RF), and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies, such as IR, RF, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Thus again, the LIDAR perception sensor utilized here extends perception beyond camera images to conditions with a low-standing sun, poor lighting, and night-time driving. The framework provided segments the ground plane or drivable surface LIDAR point cloud into several segments and applies statistical features within the ground plane or drivable surface patches to describe a complete road condition in front of the vehicle. Based on the segmented ground plane or drivable surface point clouds, a probability map of slipperiness underneath the vehicle wheels and in the nearby vicinity can be generated, which can be used to warn a driver, turn off AD mode (to ensure safety), and plan the vehicle path/trajectory accordingly to minimize vehicle slippage or hydroplaning. As LIDAR frames continue to be acquired, the probability of slipperiness can be updated for particular global coordinates to allow for optimal vehicle control and path planning applications involving AD functionalities.
Although the present disclosure is illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to persons of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present invention, are contemplated thereby, and are intended to be covered by the following non-limiting claims for all purposes.
Claims
1. A method for estimating road condition ahead of a vehicle utilizing a LIDAR sensor, the method comprising:
- obtaining a LIDAR point cloud from the LIDAR sensor;
- detecting a ground plane or drivable surface in the LIDAR point cloud;
- superimposing an M×N matrix on at least a portion of the LIDAR point cloud, wherein at least one of M and N has a value greater than 1;
- for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position (x,y), a feature elevation (z), and a scaled reflectance index (r) comprising r/x or r/x²;
- from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability, on a per patch basis, for each patch of the M×N matrix superimposed on the LIDAR point cloud; and
- based on the determined slipperiness probability, on the per patch basis, for each patch of the M×N matrix superimposed on the LIDAR point cloud, one or more of alerting a driver of the vehicle to an upcoming slippery road condition, enabling/disabling one of a driver assist and an autonomous driving functionality, providing a display of an estimated road condition ahead to the driver of the vehicle for one or more of motion and path planning purposes, updating a past determined slipperiness probability for each patch of a past LIDAR point cloud, and reporting the determined slipperiness probability for each patch of the LIDAR point cloud to a cloud server.
2. The method of claim 1, wherein the LIDAR sensor is coupled to the vehicle and disposed above the ground plane or drivable surface.
3. The method of claim 1, further comprising transforming the LIDAR point cloud from a three-dimensional LIDAR point cloud to a bird's-eye-view LIDAR point cloud.
4. The method of claim 1, wherein detecting the ground plane or drivable surface in the LIDAR point cloud comprises detecting the ground plane or drivable surface in the LIDAR point cloud using one of an unsupervised iterative algorithm and a supervised deep learning/machine learning algorithm.
5. A non-transitory computer readable medium for estimating road condition ahead of a vehicle stored in a memory and executed by a processor to perform the steps comprising:
- obtaining a LIDAR point cloud from a LIDAR sensor;
- detecting a ground plane or drivable surface in the LIDAR point cloud;
- superimposing an M×N matrix on at least a portion of the LIDAR point cloud, wherein at least one of M and N has a value greater than 1;
- for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position (x,y), a feature elevation (z), and a scaled reflectance index (r) comprising r/x or r/x²;
- from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability, on a per patch basis, for each patch of the M×N matrix superimposed on the LIDAR point cloud; and
- based on the determined slipperiness probability, on the per patch basis, for each patch of the M×N matrix superimposed on the LIDAR point cloud, one or more of alerting a driver of the vehicle to an upcoming slippery road condition, enabling/disabling one of a driver assist and an autonomous driving functionality, providing a display of an estimated road condition ahead to the driver of the vehicle for one or more of motion and path planning purposes, updating a past determined slipperiness probability for each patch of a past LIDAR point cloud, and reporting the determined slipperiness probability for each patch of the LIDAR point cloud to a cloud server.
6. The computer readable medium of claim 5, wherein the LIDAR sensor is coupled to the vehicle and disposed above the ground plane or drivable surface.
7. The computer readable medium of claim 5, wherein the steps further comprise transforming the LIDAR point cloud from a three-dimensional LIDAR point cloud to a bird's-eye-view LIDAR point cloud.
8. The computer readable medium of claim 5, wherein detecting the ground plane or drivable surface in the LIDAR point cloud comprises detecting the ground plane or drivable surface in the LIDAR point cloud using one of an unsupervised iterative algorithm and a supervised deep learning/machine learning algorithm.
9. A system for estimating road condition ahead of a vehicle, the system comprising:
- a LIDAR sensor operable for generating a LIDAR point cloud;
- a processor executing a road condition estimation algorithm stored in a memory, the road condition estimation algorithm performing the steps comprising: detecting a ground plane or drivable surface in the LIDAR point cloud; superimposing an M×N matrix on at least a portion of the LIDAR point cloud, wherein at least one of M and N has a value greater than 1; for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position (x,y), a feature elevation (z), and a scaled reflectance index (r) comprising r/x or r/x²; and from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability, on a per patch basis, for each patch of the M×N matrix superimposed on the LIDAR point cloud; and
- a vehicle control system operable for, based on the determined slipperiness probability, on the per patch basis, for each patch of the M×N matrix superimposed on the LIDAR point cloud, one or more of alerting a driver of the vehicle to an upcoming slippery road condition, enabling/disabling one of a driver assist and an autonomous driving functionality, providing a display of an estimated road condition ahead to the driver of the vehicle for one or more of motion and path planning purposes, updating a past determined slipperiness probability for each patch of a past LIDAR point cloud, and reporting the determined slipperiness probability for each patch of the LIDAR point cloud to a cloud server.
10. The system of claim 9, wherein the LIDAR sensor is coupled to the vehicle and disposed above the ground plane or drivable surface.
11. The system of claim 9, wherein the steps performed by the road condition estimation algorithm further comprise transforming the LIDAR point cloud from a three-dimensional LIDAR point cloud to a bird's-eye-view LIDAR point cloud.
12. The system of claim 9, wherein detecting the ground plane or drivable surface in the LIDAR point cloud comprises detecting the ground plane or drivable surface in the LIDAR point cloud using one of an unsupervised iterative algorithm and a supervised deep learning/machine learning algorithm.
13. The method of claim 1, wherein the feature elevation and the scaled reflectance index are determined for each relative position within each patch.
14. The computer readable medium of claim 5, wherein the feature elevation and the scaled reflectance index are determined for each relative position within each patch.
15. The system of claim 9, wherein the feature elevation and the scaled reflectance index are determined for each relative position within each patch.
16. The method of claim 1, wherein the slipperiness probability for each patch of the M×N matrix superimposed on the LIDAR point cloud provides a slipperiness probability, on the per patch basis, for each patch as a whole.
17. The computer readable medium of claim 5, wherein the slipperiness probability for each patch of the M×N matrix superimposed on the LIDAR point cloud provides a slipperiness probability, on the per patch basis, for each patch as a whole.
18. The method of claim 1, further comprising updating the determined slipperiness probability for each patch based on a subsequent LIDAR frame for a same patch.
19. The computer readable medium of claim 5, wherein the steps further comprise updating the determined slipperiness probability for each patch based on a subsequent LIDAR frame for a same patch.
20. The system of claim 9, wherein the steps performed by the road condition estimation algorithm further comprise updating the determined slipperiness probability for each patch based on a subsequent LIDAR frame for a same patch.
Type: Grant
Filed: Aug 15, 2019
Date of Patent: Feb 28, 2023
Patent Publication Number: 20210048529
Assignee: Volvo Car Corporation (Gothenburg)
Inventors: Sohini Roy Chowdhury (Santa Clara, CA), Minming Zhao (Mountain View, CA), Srikar Muppirisetty (Mölndal)
Primary Examiner: Christian Chace
Assistant Examiner: Shayne M. Gilbertson
Application Number: 16/541,264
International Classification: G01S 17/89 (20200101); B60W 40/068 (20120101); B60W 50/14 (20200101); G06V 20/56 (20220101);