PRE-TRACKING SENSOR EVENT DETECTION AND FUSION


This application discloses a computing system to implement pre-tracking sensor event detection and fusion in an assisted or automated driving system of a vehicle. The computing system can receive an environmental model including sensor measurement data from different types of sensors in the vehicle. The computing system can identify, on a per-sensor type basis, patterns in the sensor measurement data indicative of possible objects proximate to the vehicle. The computing system can associate the patterns in the sensor measurement data from different types of the sensors to identify detection events corresponding to the possible objects proximate to the vehicle. The computing system also can generate values and confidence levels corresponding to properties of the detection events. The computing system can utilize the detection events and corresponding values and confidence levels to pre-classify, identify, and track objects in the environmental model.

Description
RELATED APPLICATION

This patent application claims priority to U.S. Provisional Patent Application No. 62/385,156, filed Sep. 8, 2016, which is incorporated by reference herein.

TECHNICAL FIELD

This application is generally related to automated driving and assistance systems and, more specifically, to pre-tracking sensor event detection and fusion for automated driving and assistance systems.

BACKGROUND

Many modern vehicles include built-in advanced driver assistance systems (ADAS) to provide automated safety and/or assisted driving functionality. For example, these advanced driver assistance systems can have applications to implement adaptive cruise control, automatic parking, automated braking, blind spot monitoring, collision avoidance, driver drowsiness detection, lane departure warning, or the like. The next generation of vehicles can include autonomous driving (AD) systems to control and navigate the vehicles independent of human interaction.

These vehicles typically include multiple sensors, such as one or more cameras, a Light Detection and Ranging (LIDAR) sensor, a Radio Detection and Ranging (RADAR) system, or the like, to measure different portions of the environment around the vehicles. Each sensor processes its own measurements captured over time to detect objects within its field of view, and then provides a list of detected objects to the application in the advanced driver assistance systems or the autonomous driving systems to which the sensor is dedicated. In some instances, the sensors can also provide a confidence level corresponding to their detection of objects on the list based on their captured measurements.

The applications in the advanced driver assistance systems or the autonomous driving systems can utilize the list of objects received from their corresponding sensors and, in some cases, the associated confidence levels of their detection, to implement automated safety and/or driving functionality. For example, when a RADAR sensor in the front of a vehicle provides the advanced driver assistance system in the vehicle a list having an object in a current path of the vehicle, the application corresponding to front-end collision in the advanced driver assistance system can provide a warning to the driver of the vehicle or control the vehicle in order to avoid a collision with the object.

Because each application has dedicated sensors, the application can receive a list of objects from the dedicated sensors that provides the application a fixed field of view around a portion of the vehicle. When multiple sensors for an application have at least partially overlapping fields of view, the application can integrate object lists from its multiple dedicated sensors for the fixed field of view around the portion of the vehicle for the application. Since the vehicle moves, however, having a narrow field of view provided from the sensors can leave the application blind to potential objects. Conversely, widening the field of view can increase cost, for example, due to additional sensors, and add data processing latency.

SUMMARY

This application discloses a computing system to implement pre-tracking sensor event detection and fusion in an assisted or automated driving system of a vehicle. The computing system can receive an environmental model including sensor measurement data from different types of sensors in the vehicle. The sensor measurement data is spatially and temporally aligned in the environmental model. The computing system can identify, on a per-sensor type basis, patterns in the sensor measurement data indicative of possible objects proximate to the vehicle. The computing system can associate the patterns in the sensor measurement data from different types of the sensors to identify detection events corresponding to the possible objects proximate to the vehicle. The computing system also can generate values and confidence levels corresponding to properties of the detection events. The computing system can utilize the detection events and corresponding values and confidence levels to pre-classify and track objects in the environmental model. Embodiments will be described below in greater detail.

DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example autonomous driving system according to various embodiments.

FIG. 2A illustrates example measurement coordinate fields for a sensor system deployed in a vehicle according to various embodiments.

FIG. 2B illustrates an example environmental coordinate field associated with an environmental model for a vehicle according to various embodiments.

FIG. 3 illustrates an example sensor fusion system according to various examples.

FIG. 4 illustrates an example sensor event detection and fusion system in a sensor fusion system according to various embodiments.

FIG. 5 illustrates an example intra-modality system in a sensor event detection and fusion system according to various embodiments.

FIG. 6 illustrates an example flowchart for pre-tracking sensor event detection and fusion according to various embodiments.

FIGS. 7 and 8 illustrate an example of a computer system of the type that may be used to implement various embodiments of the invention.

DETAILED DESCRIPTION

Sensor Fusion for Autonomous Driving

FIG. 1 illustrates an example autonomous driving system 100 according to various embodiments. Referring to FIG. 1, the autonomous driving system 100, when installed in a vehicle, can sense an environment surrounding the vehicle and control operation of the vehicle based, at least in part, on the sensed environment.

The autonomous driving system 100 can include a sensor system 110 having multiple sensors, each of which can measure different portions of the environment surrounding the vehicle and output the measurements as raw measurement data 115. The raw measurement data 115 can include characteristics of light, electromagnetic waves, or sound captured by the sensors, such as an intensity or a frequency of the light, electromagnetic waves, or the sound, an angle of reception by the sensors, a time delay between a transmission and the corresponding reception of the light, electromagnetic waves, or the sound, a time of capture of the light, electromagnetic waves, or sound, or the like.

The sensor system 110 can include multiple different types of sensors, such as an image capture device 111, a Radio Detection and Ranging (RADAR) device 112, a Light Detection and Ranging (LIDAR) device 113, an ultra-sonic device 114, one or more microphones, infrared or night-vision cameras, time-of-flight cameras, cameras capable of detecting and transmitting differences in pixel intensity, or the like. The image capture device 111, such as one or more cameras or event-based cameras, can capture at least one image of at least a portion of the environment surrounding the vehicle. The image capture device 111 can output the captured image(s) as raw measurement data 115, which, in some embodiments, can be unprocessed and/or uncompressed pixel data corresponding to the captured image(s).

The RADAR device 112 can emit radio signals into the environment surrounding the vehicle. Since the emitted radio signals may reflect off of objects in the environment, the RADAR device 112 can detect the reflected radio signals incoming from the environment. The RADAR device 112 can measure the incoming radio signals by, for example, measuring a signal strength of the radio signals, a reception angle, a frequency, or the like. The RADAR device 112 also can measure a time delay between an emission of a radio signal and a measurement of the incoming radio signals from the environment that corresponds to emitted radio signals reflected off of objects in the environment. The RADAR device 112 can output the measurements of the incoming radio signals as the raw measurement data 115.

The LIDAR device 113 can transmit light, such as from a laser or other optical transmission device, into the environment surrounding the vehicle. The transmitted light, in some embodiments, can be pulses of ultraviolet light, visible light, near infrared light, or the like. Since the transmitted light can reflect off of objects in the environment, the LIDAR device 113 can include a photo detector to measure light incoming from the environment. The LIDAR device 113 can measure the incoming light by, for example, measuring an intensity of the light, a wavelength, or the like. The LIDAR device 113 also can measure a time delay between a transmission of a light pulse and a measurement of the light incoming from the environment that corresponds to the transmitted light having reflected off of objects in the environment. The LIDAR device 113 can output the measurements of the incoming light and the time delay as the raw measurement data 115.

The ultra-sonic device 114 can emit acoustic pulses, for example, generated by transducers or the like, into the environment surrounding the vehicle. The ultra-sonic device 114 can detect ultra-sonic sound incoming from the environment, such as, for example, the emitted acoustic pulses having been reflected off of objects in the environment. The ultra-sonic device 114 also can measure a time delay between emission of the acoustic pulses and reception of the ultra-sonic sound from the environment that corresponds to the emitted acoustic pulses having reflected off of objects in the environment. The ultra-sonic device 114 can output the measurements of the incoming ultra-sonic sound and the time delay as the raw measurement data 115.
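
By way of a non-limiting illustration of the time-of-flight relationship underlying the RADAR, LIDAR, and ultra-sonic measurements described above, the following sketch converts a measured round-trip time delay into a range estimate. The constant names, values, and function name are hypothetical and included only for explanation; production sensors apply calibration and filtering not shown here.

    # Sketch: convert a measured round-trip time delay into a range estimate.
    # Propagation speeds are assumed textbook values, not disclosed parameters.
    SPEED_OF_LIGHT_M_S = 299_792_458.0   # RADAR and LIDAR signals
    SPEED_OF_SOUND_M_S = 343.0           # ultrasonic pulses in air at about 20 C

    def range_from_time_delay(time_delay_s, propagation_speed_m_s):
        """Return the one-way distance for a round-trip time delay."""
        return propagation_speed_m_s * time_delay_s / 2.0

    # Example: a LIDAR return arriving 200 ns after transmission is roughly 30 m away.
    print(range_from_time_delay(200e-9, SPEED_OF_LIGHT_M_S))   # ~29.98
    # Example: an ultrasonic echo arriving 12 ms after emission is roughly 2 m away.
    print(range_from_time_delay(12e-3, SPEED_OF_SOUND_M_S))    # ~2.06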

The different sensors in the sensor system 110 can be mounted in the vehicle to capture measurements for different portions of the environment surrounding the vehicle. FIG. 2A illustrates example measurement coordinate fields for a sensor system deployed in a vehicle 200 according to various embodiments. Referring to FIG. 2A, the vehicle 200 can include multiple different sensors capable of detecting incoming signals, such as light signals, electromagnetic signals, and sound signals. Each of these different sensors can have a different field of view into an environment around the vehicle 200. These fields of view can allow the sensors to measure light and/or sound in different measurement coordinate fields.

The vehicle in this example includes several different measurement coordinate fields, including a front sensor field 211, multiple cross-traffic sensor fields 212A, 212B, 214A, and 214B, a pair of side sensor fields 213A and 213B, and a rear sensor field 215. Each of the measurement coordinate fields can be sensor-centric, meaning that the measurement coordinate fields can describe a coordinate region relative to a location of its corresponding sensor.

Referring back to FIG. 1, the autonomous driving system 100 can include a sensor fusion system 300 to receive the raw measurement data 115 from the sensor system 110 and populate an environmental model 121 associated with the vehicle with the raw measurement data 115. In some embodiments, the environmental model 121 can have an environmental coordinate field corresponding to a physical envelope surrounding the vehicle, and the sensor fusion system 300 can populate the environmental model 121 with the raw measurement data 115 based on the environmental coordinate field. In some embodiments, the environmental coordinate field can be a non-vehicle centric coordinate field, for example, a world coordinate system, a path-centric coordinate field, a coordinate field parallel to a road surface utilized by the vehicle, or the like.

FIG. 2B illustrates an example environmental coordinate field 220 associated with an environmental model for the vehicle 200 according to various embodiments. Referring to FIG. 2B, an environment surrounding the vehicle 200 can correspond to the environmental coordinate field 220 for the environmental model. The environmental coordinate field 220 can be vehicle-centric and provide a 360 degree area around the vehicle 200. The environmental model can be populated and annotated with information detected by the sensor fusion system 300 or inputted from external sources. Embodiments will be described below in greater detail.

Referring back to FIG. 1, to populate the raw measurement data 115 into the environmental model 121 associated with the vehicle, the sensor fusion system 300 can spatially align the raw measurement data 115 to the environmental coordinate field of the environmental model 121. The sensor fusion system 300 also can identify when the sensors captured the raw measurement data 115, for example, by time stamping the raw measurement data 115 when received from the sensor system 110. The sensor fusion system 300 can populate the environmental model 121 with the time stamp or other time-of-capture information, which can be utilized to temporally align the raw measurement data 115 in the environmental model 121. In some embodiments, the sensor fusion system 300 can analyze the raw measurement data 115 from the multiple sensors as populated in the environmental model 121 to detect a sensor event or at least one object in the environmental coordinate field associated with the vehicle. The sensor event can include a sensor measurement event corresponding to a presence of the raw measurement data 115 in the environmental model 121, for example, above a noise threshold. The sensor event can include a sensor detection event corresponding to a spatial and/or temporal grouping of the raw measurement data 115 in the environmental model 121. The object can correspond to a spatial grouping of the raw measurement data 115 having been tracked in the environmental model 121 over a period of time, allowing the sensor fusion system 300 to determine that the raw measurement data 115 corresponds to an object around the vehicle. The sensor fusion system 300 can populate the environmental model 121 with an indication of the detected sensor event or detected object and a confidence level of the detection. Embodiments of sensor fusion and sensor event detection or object detection will be described below in greater detail.

The sensor fusion system 300, in some embodiments, can generate feedback signals 116 to provide to the sensor system 110. The feedback signals 116 can be configured to prompt the sensor system 110 to calibrate one or more of its sensors. For example, the sensor system 110, in response to the feedback signals 116, can re-position at least one of its sensors, expand a field of view of at least one of its sensors, change a refresh rate or exposure time of at least one of its sensors, alter a mode of operation of at least one of its sensors, or the like.

The autonomous driving system 100 can include a driving functionality system 120 to receive at least a portion of the environmental model 121 from the sensor fusion system 300. The driving functionality system 120 can analyze the data included in the environmental model 121 to implement automated driving functionality or automated safety and assisted driving functionality for the vehicle. The driving functionality system 120 can generate control signals 131 based on the analysis of the environmental model 121.

The autonomous driving system 100 can include a vehicle control system 130 to receive the control signals 131 from the driving functionality system 120. The vehicle control system 130 can include mechanisms to control operation of the vehicle, for example by controlling different functions of the vehicle, such as braking, acceleration, steering, parking brake, transmission, user interfaces, warning systems, or the like, in response to the control signals.

FIG. 3 illustrates an example sensor fusion system 300 according to various examples. Referring to FIG. 3, the sensor fusion system 300 can include a measurement integration system 310 to receive raw measurement data 301 from multiple sensors mounted in a vehicle. The measurement integration system 310 can generate an environmental model 315 for the vehicle, which can be populated with the raw measurement data 301.

The measurement integration system 310 can include a spatial alignment unit 311 to correlate measurement coordinate fields of the sensors to an environmental coordinate field for the environmental model 315. The measurement integration system 310 can utilize this correlation to convert or translate locations for the raw measurement data 301 within the measurement coordinate fields into locations within the environmental coordinate field. The measurement integration system 310 can populate the environmental model 315 with the raw measurement data 301 based on the correlation between the measurement coordinate fields of the sensors to the environmental coordinate field for the environmental model 315.
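
As a non-limiting sketch of the correlation performed by the spatial alignment unit 311, the following example rotates and translates a sensor-centric point into an environmental coordinate field using an assumed sensor mounting pose. The mounting values and function name are hypothetical.

    import numpy as np

    def sensor_to_environment(point_sensor_xy, mount_xy, mount_yaw_rad):
        """Rotate and translate a sensor-centric (x, y) point into the
        environmental coordinate field, given the sensor's mounting pose."""
        c, s = np.cos(mount_yaw_rad), np.sin(mount_yaw_rad)
        rotation = np.array([[c, -s], [s, c]])
        return rotation @ np.asarray(point_sensor_xy) + np.asarray(mount_xy)

    # Example: a front RADAR mounted 3.5 m ahead of the vehicle reference point,
    # facing straight ahead, reports a target 20 m in front of it.
    print(sensor_to_environment([20.0, 0.0], mount_xy=[3.5, 0.0], mount_yaw_rad=0.0))
    # -> [23.5  0. ]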

The measurement integration system 310 also can temporally align the raw measurement data 301 from different sensors in the sensor system. In some embodiments, the measurement integration system 310 can include a temporal alignment unit 312 to assign time stamps to the raw measurement data 301 based on when the sensor captured the raw measurement data 301, when the raw measurement data 301 was received by the measurement integration system 310, or the like. In some embodiments, the temporal alignment unit 312 can convert a capture time of the raw measurement data 301 provided by the sensors into a time corresponding to the sensor fusion system 300. The measurement integration system 310 can annotate the raw measurement data 301 populated in the environmental model 315 with the time stamps for the raw measurement data 301. The time stamps for the raw measurement data 301 can be utilized by the sensor fusion system 300 to group the raw measurement data 301 in the environmental model 315 into different time periods or time slices. In some embodiments, a size or duration of the time periods or time slices can be based, at least in part, on a refresh rate of one or more sensors in the sensor system. For example, the sensor fusion system 300 can set a time slice to correspond to the sensor with a fastest rate of providing new raw measurement data 301 to the sensor fusion system 300.
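
The following non-limiting sketch illustrates one way such temporal grouping could work: time-stamped measurements are bucketed into time slices whose duration follows the fastest-refreshing sensor. The refresh rates and names are hypothetical assumptions, not values from the disclosure.

    # Sketch: group time-stamped measurements into slices sized to the
    # fastest sensor refresh rate (illustrative rates only).
    sensor_refresh_hz = {"camera": 30.0, "radar": 20.0, "lidar": 10.0}
    slice_duration_s = 1.0 / max(sensor_refresh_hz.values())   # 1/30 s

    def time_slice_index(timestamp_s, start_s=0.0):
        """Return the index of the time slice a measurement falls into."""
        return int((timestamp_s - start_s) // slice_duration_s)

    measurements = [(0.010, "camera"), (0.020, "radar"), (0.045, "lidar")]
    slices = {}
    for ts, sensor in measurements:
        slices.setdefault(time_slice_index(ts), []).append(sensor)
    print(slices)   # {0: ['camera', 'radar'], 1: ['lidar']}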

The measurement integration system 310 can include an ego motion unit 313 to compensate for movement of at least one sensor capturing the raw measurement data 301, for example, due to the vehicle driving or moving in the environment. The ego motion unit 313 can estimate motion of the sensor capturing the raw measurement data 301, for example, by utilizing tracking functionality to analyze vehicle motion information, such as global positioning system (GPS) data, inertial measurements, vehicle odometer data, video images, or the like. The tracking functionality can implement a Kalman filter, a Particle filter, optical flow-based estimator, or the like, to track motion of the vehicle and its corresponding sensors relative to the environment surrounding the vehicle.

The ego motion unit 313 can utilize the estimated motion of the sensor to modify the correlation between the measurement coordinate field of the sensor to the environmental coordinate field for the environmental model 315. This compensation of the correlation can allow the measurement integration system 310 to populate the environmental model 315 with the raw measurement data 301 at locations of the environmental coordinate field where the raw measurement data 301 was captured as opposed to the current location of the sensor at the end of its measurement capture.
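
As a non-limiting illustration of this compensation, the sketch below shifts a populated point back by the displacement the vehicle traveled during the capture, so the data lands where it was measured rather than where the sensor ended up. The values and function name are hypothetical.

    import numpy as np

    def compensate_ego_motion(point_env_xy, ego_velocity_xy, capture_latency_s):
        """Shift a populated point back by the vehicle motion that occurred
        between the measurement capture and the end of the scan."""
        displacement = np.asarray(ego_velocity_xy) * capture_latency_s
        return np.asarray(point_env_xy) - displacement

    # Example: vehicle moving 20 m/s forward; a LIDAR point captured 50 ms
    # before the scan completed is pulled back 1 m.
    print(compensate_ego_motion([30.0, 2.0], [20.0, 0.0], 0.05))  # [29.  2.]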

In some embodiments, the measurement integration system 310 may receive objects or object lists 302 from a variety of sources. The measurement integration system 310 can receive the object list 302 from sources external to the vehicle, such as in a vehicle-to-vehicle (V2V) communication, a vehicle-to-infrastructure (V2I) communication, a vehicle-to-pedestrian (V2P) communication, a vehicle-to-device (V2D) communication, a vehicle-to-grid (V2G) communication, or generally a vehicle-to-everything (V2X) communication. The measurement integration system 310 also can receive the objects or an object list 302 from other systems internal to the vehicle, such as from a human machine interface, mapping systems, localization system, driving functionality system, vehicle control system, or the like. In some embodiments, the vehicle may be equipped with at least one sensor that outputs the object list 302 rather than the raw measurement data 301.

The measurement integration system 310 can receive the object list 302 and populate one or more objects from the object list 302 into the environmental model 315 along with the raw measurement data 301. The object list 302 may include one or more objects, a time stamp for each object, and optionally spatial metadata associated with a location of the objects in the object list 302. For example, the object list 302 can include speed measurements for the vehicle, which may not include a spatial component to be stored in the object list 302 as the spatial metadata. When the object list 302 includes a confidence level associated with an object in the object list 302, the measurement integration system 310 also can annotate the environmental model 315 with the confidence level for the object from the object list 302.

The sensor fusion system 300 can include an object detection system 320 to receive the environmental model 315 from the measurement integration system 310. In some embodiments, the sensor fusion system 300 can include a memory system 330 to store the environmental model 315 from the measurement integration system 310. The object detection system 320 may access the environmental model 315 from the memory system 330.

The object detection system 320 can analyze data stored in the environmental model 315 to detect at least one object. The sensor fusion system 300 can populate the environmental model 315 with an indication of the detected object at a location in the environmental coordinate field corresponding to the detection. The object detection system 320 can identify confidence levels corresponding to the detected object, which can be based on at least one of a quantity, a quality, or a sensor diversity of raw measurement data 301 utilized in detecting the object. The sensor fusion system 300 can populate or store the confidence levels corresponding to the detected objects with the environmental model 315. For example, the object detection system 320 can annotate the environmental model 315 with object annotations 324, or the object detection system 320 can output the object annotations 324 to the memory system 330, which populates the environmental model 315 with the detected object and corresponding confidence level of the detection in the object annotations 324.
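
The disclosure does not prescribe a particular formula for these confidence levels; the following sketch is only one hypothetical heuristic that weighs the quantity, quality, and sensor diversity of the supporting raw measurement data, with assumed weights and scaling.

    def detection_confidence(num_points, mean_snr_db, num_sensor_types,
                             w_quantity=0.4, w_quality=0.3, w_diversity=0.3):
        """Combine measurement quantity, quality, and sensor diversity into a
        confidence level in [0, 1]. Weights and scaling are illustrative assumptions."""
        quantity = min(num_points / 50.0, 1.0)        # saturate at 50 supporting points
        quality = min(max(mean_snr_db, 0.0) / 30.0, 1.0)
        diversity = min(num_sensor_types / 3.0, 1.0)  # e.g. camera, RADAR, LIDAR
        return w_quantity * quantity + w_quality * quality + w_diversity * diversity

    print(detection_confidence(num_points=42, mean_snr_db=18.0, num_sensor_types=2))
    # ~0.72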

The object detection system 320 can include a sensor event detection and fusion system 400 to identify detection events 325 from the data stored in the environmental model 315. In some embodiments, the sensor event detection and fusion system 400 can identify the detection events 325 by analyzing the data stored in the environmental model 315 on a per-sensor type basis to identify patterns in the data, such as image features or data point clusters. When the sensor event detection and fusion system 400 utilizes patterns from a single sensor modality or type to generate the detection events 325, the detection event 325 may be called a sensor detection event. In some embodiments, the sensor event detection and fusion system 400 also can associate or correlate identified patterns across multiple different sensor modalities or types to generate the detection event 325, which can be called a fused sensor detection event.

The sensor event detection and fusion system 400 also can generate hypothesis information 326 corresponding to the detection events 325. In some embodiments, the hypothesis information 326 can identify confidence levels corresponding to various properties or characteristics associated with the detection events 325. The sensor fusion system 300 can populate or store the detection events 325 and the corresponding hypothesis information 326 with the environmental model 315. For example, the object detection system 320 can annotate the environmental model 315 with the detection events 325 and the corresponding hypothesis information 326, or the object detection system 320 can output the detection events 325 and the corresponding hypothesis information 326 to the memory system 330, which populates the environmental model 315 with the detection events 325 and the corresponding hypothesis information 326. Embodiments of the identification of detection events and generation of the corresponding hypothesis information will be described below in greater detail.

The object detection system 320 can include a pre-classification unit 322 to assign pre-classifications to the detection events 325, which may be based, at least in part, on the hypothesis information 326. In some embodiments, the pre-classification can correspond to a type of object, such as another vehicle, a pedestrian, a cyclist, an animal, a static object, or the like, which may be identified or pointed to by the hypothesis information 326. For example, the pre-classification unit 322 may utilize the confidence levels corresponding to various properties or characteristics in the hypothesis information 326 to assign pre-classifications for the detection events 325. The pre-classification unit 322 can annotate the environmental model 315 with the sensor detection event, the fused sensor detection event and/or the assigned pre-classification.

The object detection system 320 can include a tracking unit 323 to track the detection events 325 in the environmental model 315 over time, for example, by analyzing the annotations in the environmental model 315, and determine whether the detection events 325 correspond to objects in the environmental coordinate field. In some embodiments, the tracking unit 323 can track the detection events 325 utilizing at least one state change prediction model, such as a kinetic model, a probabilistic model, or other state change prediction model.

The tracking unit 323 can select the state change prediction model to utilize to track the detection events 325 based on the assigned pre-classifications of the detection events 325 by the pre-classification unit 322. The state change prediction model may allow the tracking unit 323 to implement a state transition prediction, which can assume or predict future states of the detection events 325, for example, based on a location of the detection events 325 in the environmental model 315, a prior movement of the detection events 325, a classification of the detection events 325, or the like. In some embodiments, the tracking unit 323 implementing the kinetic model can utilize kinetic equations for velocity, acceleration, momentum, or the like, to assume or predict the future states of the detection events 325 based, at least in part, on their prior states.
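
As a non-limiting illustration of a kinetic state change prediction, the sketch below applies a constant-velocity step to predict a detection event's next position and compares it with a measured position. The state layout and numbers are hypothetical.

    import numpy as np

    def predict_constant_velocity(state, dt):
        """state = [x, y, vx, vy]; predict the next state under a
        constant-velocity kinetic model."""
        x, y, vx, vy = state
        return np.array([x + vx * dt, y + vy * dt, vx, vy])

    prior_state = np.array([10.0, 0.0, 5.0, 0.0])     # moving 5 m/s along x
    predicted = predict_constant_velocity(prior_state, dt=0.1)
    measured = np.array([10.6, 0.1])                  # next observed position
    residual = measured - predicted[:2]               # used to judge trackability
    print(predicted[:2], residual)                    # [10.5  0. ] [0.1 0.1]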

The tracking unit 323 may determine a difference between the predicted future states of the detection events 325 and their actual future states, which the tracking unit 323 may utilize to determine whether the detection events 325 correspond to objects proximate to the vehicle. The tracking unit 323 can track the detection events 325 in the environmental coordinate field associated with the environmental model 315, for example, across multiple different sensors and their corresponding measurement coordinate fields.

When the tracking unit 323, based on the tracking of the detection events 325 with the state change prediction model, determines that the detection events 325 are trackable, the tracking unit 323 can annotate the environmental model 315 to indicate the presence of trackable detection events. The tracking unit 323 can continue tracking the trackable detection events over time by implementing the state change prediction models and analyzing the environmental model 315 when updated with additional raw measurement data 301. After annotating the environmental model 315 to indicate the presence of trackable detection events, the tracking unit 323 can continue to track the trackable detection events in the environmental coordinate field associated with the environmental model 315, for example, across multiple different sensors and their corresponding measurement coordinate fields.

The sensor fusion system 300 can include a publish-subscribe system 340 to communicate with devices, components, or applications external to the sensor fusion system 300. In some embodiments, the publish-subscribe system 340 can allow processes or applications in a driving functionality system to receive information from the annotated environmental model 332 or to provide annotations to the annotated environmental model 332.

The publish-subscribe system 340 can receive subscriptions 341, for example, from a driving functionality system, a vehicle control system, or other devices external to the sensor fusion system 300. The subscriptions 341 may identify at least one region of interest in the annotated environmental model 332 for a region of interest management system 350 to monitor for data or events. The region of interest can have a spatial description, for example, a portion of the environmental coordinate field in the annotated environmental model 332. The region of interest can have a temporal description, for example, a time window or a time offset during which the region of interest management system 350 can monitor the annotated environmental model 332. In some embodiments, the subscriptions 341 can include additional monitoring information, such as a type of data to monitor, a type of detection, and whether to enable dynamic adaptation of the identified region of interest.

The publish-subscribe system 340 can provide at least a portion of the subscriptions 341 to the region of interest management system 350 as subscription information 343. The region of interest management system 350 can analyze the subscription information 343 to identify at least one region of interest of the annotated environmental model 332 to monitor for data or events. In some embodiments, the region of interest management system 350 can include a registry of the one or more regions of interest that the region of interest management system 350 monitors based on the subscription information 343 from the publish-subscribe system 340.

The region of interest management system 350 can monitor the environmental model 315, for example, by accessing the memory system 330, to identify data and events corresponding to at least one of the regions of interest in the registry. When the region of interest management system 350 detects data or an event corresponding to a region of interest in the subscription information 343, the region of interest management system 350 can forward detection information to the publish-subscribe system 340. The detection information can identify the region of interest associated with the detection, the detected portion of the annotated environmental model 332, a reference to the detected portion of the annotated environmental model 332 in the memory system 330, a type of detection, or the like. The publish-subscribe system 340 can utilize the detection information to generate and output event data 342 to the devices, systems, applications, or components that provided the publish-subscribe system 340 the subscription 341 describing the region of interest associated with the detection.
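
The following non-limiting sketch illustrates one possible shape of such a subscription and detection check: regions of interest are registered as spatial ranges and event data is published to the subscriber whose region contains a detection. The field names are hypothetical assumptions.

    # Sketch: register spatial regions of interest and publish event data when
    # a detection falls inside one. Field names are illustrative assumptions.
    subscriptions = [
        {"subscriber": "collision_avoidance", "x_range": (0.0, 40.0), "y_range": (-2.0, 2.0)},
        {"subscriber": "blind_spot_monitor", "x_range": (-5.0, 0.0), "y_range": (1.5, 4.0)},
    ]

    def publish_matching_events(detection_xy):
        x, y = detection_xy
        for sub in subscriptions:
            if sub["x_range"][0] <= x <= sub["x_range"][1] and \
               sub["y_range"][0] <= y <= sub["y_range"][1]:
                print(f"event -> {sub['subscriber']}: detection at {detection_xy}")

    publish_matching_events((12.0, 0.5))   # event -> collision_avoidance: detection at (12.0, 0.5)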

The region of interest management system 350 also can dynamically adapt the region of interest for one or more of the subscriptions 341, for example, based on the mode of operation of the vehicle, a planned path or route the vehicle expects to traverse, features in map data 331, or the like. For example, the region of interest management system 350 or other portion of the sensor fusion system 300 can identify locations of upcoming traffic lights or signage and suggest that the process or component in the driving functionality system expand its region of interest or add a new region of interest to include the upcoming traffic lights or signage. In another example, the region of interest management system 350 or other portion of the sensor fusion system 300 can identify that the vehicle plans to make a turn and expand its region of interest to include areas corresponding to the road after making the turn.

FIG. 4 illustrates an example sensor event detection and fusion system 400 in a sensor fusion system according to various embodiments. Referring to FIG. 4, the sensor event detection and fusion system 400 can include an intra-sensor modality system 410 to receive sensor measurement data in an environmental model 401. The sensor measurement data can include measurements or raw data from different types or modalities of sensors in a vehicle. For example, the sensor measurement data can include data measured by at least one image capture device, one or more RADAR sensors, one or more LIDAR sensors, one or more ultrasonic sensors, or the like. The intra-sensor modality system 410 can receive the sensor measurement data in frames or scans over time, such as periodically, intermittently, in response to sensing events, or the like.

The intra-sensor modality system 410 also can receive system-level data 402 from various systems or components in a vehicle. The system-level data 402 can include external vehicle information, such as weather conditions surrounding the vehicle, a visible and/or infrared spectrum brightness level external to the vehicle, an average signal level for a RADAR sensor type, an average brightness level for a LIDAR sensor type, or the like, and include context of the vehicle, such as a speed of the vehicle, or the like.

The intra-sensor modality system 410 can include a pattern detection unit 411 to analyze the sensor measurement data on a per-modality basis or per-sensor-type basis and detect patterns in the sensor measurement data, such as the detected patterns 413. For example, the pattern detection unit 411 can identify the detected patterns 413 by analyzing image data from at least one image capture device to identify and extract features from the image data. In another example, the pattern detection unit 411 can identify the detected patterns 413 by analyzing LIDAR data points from one or more LIDAR sensors to identify clusters of the LIDAR data points. The pattern detection unit 411 can determine that the extracted image features and the identified clusters correspond to the detected patterns 413.

The intra-sensor modality system 410, in some embodiments, may identify the detected patterns 413 based, at least in part, on other data sets or conditions. For example, the intra-sensor modality system 410 can identify differences in the sensor measurement data between frames, and utilize the identified differences in the sensor measurement data to identify the detected patterns 413. The intra-sensor modality system 410 also can utilize the system-level data 402 when analyzing the sensor measurement data. For example, external weather conditions, such as precipitation, can increase error in certain sensor measurements. A speed of the vehicle also can cause distortion in image data, for example, straight lines appearing as curved lines in the image data due to vehicle speed. The intra-sensor modality system 410 can utilize these increased error rates and image distortion in the identification of the detected patterns 413.

The intra-sensor modality system 410 also can access a visibility map, which can identify spatial portions of the environmental model 401 corresponding to a sensor field-of-view based on its line-of-sight. In some embodiments, the intra-sensor modality system 410 can utilize the visibility map to identify where in the environmental model 401 to expect sensor measurement data. For example, when the visibility map identifies a spatial portion of the environmental model 401 that can be populated with sensor measurement data from a LIDAR sensor, the intra-sensor modality system 410 can expect sensor measurement data from the LIDAR sensor to be populated in that spatial portion of the environmental model 401.

In some embodiments, the intra-sensor modality system 410 can utilize the visibility map to identify the spatial portions of the environmental model 401 in which not to expect sensor measurement data, for example, due to a blocked line-of-sight of a sensor or sensor type. For example, when the vehicle does not have a sensor of a certain type to capture measurements of a portion of the environment around the vehicle, the visibility map can identify which portion of the environmental model 401 the intra-sensor modality system 410 should not expect to have sensor measurement data corresponding to that certain type of sensor. In some embodiments, the visibility map can identify which of the sensors in a sensor system captured sensor measurement data corresponding to each detection event 421 or identify which of the sensors in the sensor system had a blocked line-of-sight or did not have a field-of-view corresponding to a spatial location corresponding to the detection event 421. Embodiments of pattern detection in the sensor measurement data will be described below in greater detail.
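
As a non-limiting sketch of a visibility map lookup, the example below marks, per sensor type, which cells of a coarse grid over the environmental model can receive measurements, so that missing data can be interpreted as blocked or out of view rather than as free space. The grid layout is a hypothetical assumption.

    import numpy as np

    # Sketch: a coarse 2-D visibility grid per sensor type (True means the
    # sensor's line of sight covers that cell of the environmental model).
    visibility = {
        "lidar": np.zeros((4, 4), dtype=bool),
        "radar": np.zeros((4, 4), dtype=bool),
    }
    visibility["lidar"][0:2, :] = True    # LIDAR covers the front half only
    visibility["radar"][:, :] = True      # RADAR covers the full grid

    def expect_measurements(sensor_type, cell):
        """Return whether sensor measurement data should be expected in a cell."""
        return bool(visibility[sensor_type][cell])

    print(expect_measurements("lidar", (3, 1)))   # False: no LIDAR coverage there
    print(expect_measurements("radar", (3, 1)))   # True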

The intra-sensor modality system 410 also can identify inter-frame pattern differences 414 corresponding to the detected patterns 413. In some embodiments, after the intra-sensor modality system 410 identifies one of the detected patterns 413, the intra-sensor modality system 410 can analyze the sensor measurement data corresponding to the detected patterns 413 between multiple frames, for example, between adjacently received frames, to identify inter-frame pattern differences 414 of the detected patterns 413.

The sensor event detection and fusion system 400 can include an association system 420 to receive the detected patterns 413 and the inter-frame pattern differences 414 of the detected patterns 413. The association system 420 also can receive the sensor measurement data from the environmental model 401 and the system-level data 402 from various systems or components in a vehicle.

The association system 420 can identify detection events 421 for the sensor event detection and fusion system 400 based, at least in part, on the detected patterns 413. The detection events 421 can identify that the sensor measurement data in the environmental model 401 includes data corresponding to a possible object proximate to the vehicle. A possible object may be sensor measurement data indicative of an object proximate to the vehicle prior to tracking of the object.

When the association system 420 utilizes a single sensor modality or sensor type to generate the detection events 421, the detection event 421 may be called a sensor detection event. In some embodiments, the association system 420 also can associate or correlate the identified intra-sensor modality information across multiple different sensor modalities to generate a detection event 421 called a fused sensor detection event. In some embodiments, the association system 420 can compare detected patterns 413 from multiple different sensor modalities, and identify a detection event 421 from the detected patterns 413 based on a spatial alignment between the detected patterns 413, a temporal alignment between the detected patterns 413, a state of the data between the detected patterns 413, or the like. The state of the data can correspond to a velocity of the data corresponding to the detected patterns 413. In some embodiments, one or more of the sensors can determine the velocity of the data corresponding to the detected patterns 413, which can be received by the association system 420 in the sensor measurement data. The association system 420 also may determine the velocity of the data corresponding to the detected patterns 413 from the inter-frame pattern differences 414.
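
The following non-limiting sketch illustrates one way two detected patterns from different sensor modalities could be associated into a fused sensor detection event based on spatial, temporal, and velocity agreement. The threshold values and field names are hypothetical assumptions.

    import numpy as np

    def patterns_associate(pattern_a, pattern_b,
                           max_distance_m=1.5, max_dt_s=0.05, max_dv_m_s=2.0):
        """Decide whether two detected patterns from different sensor modalities
        describe the same possible object. Thresholds are illustrative assumptions."""
        spatial_ok = np.linalg.norm(np.asarray(pattern_a["center"]) -
                                    np.asarray(pattern_b["center"])) <= max_distance_m
        temporal_ok = abs(pattern_a["timestamp"] - pattern_b["timestamp"]) <= max_dt_s
        velocity_ok = abs(pattern_a["speed"] - pattern_b["speed"]) <= max_dv_m_s
        return spatial_ok and temporal_ok and velocity_ok

    radar_pattern = {"center": (22.0, 1.0), "timestamp": 0.100, "speed": 8.0}
    lidar_pattern = {"center": (22.8, 1.3), "timestamp": 0.120, "speed": 7.2}
    print(patterns_associate(radar_pattern, lidar_pattern))   # True -> fused sensor detection event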

The association system 420 also can generate hypothesis information 422 corresponding to the detection events 421. The hypothesis information 422 can describe various properties or characteristics associated with the detection events 421 and provide confidence levels corresponding to those properties or characteristics. In some embodiments, the properties or characteristics associated with the detection events 421 can include unity, velocity, orientation, measurement-based virtual center of gravity, existence, size, novelty, or the like, of the sensor measurement data corresponding to the detection event 421. The unity characteristic can identify whether the sensor measurement data corresponds to a single possible object or multiple possible objects. For example, the unity characteristic can identify when one possible object in the sensor measurement data occludes or blocks visibility to at least another possible object. The velocity characteristic can identify at least one velocity associated with the sensor measurement data. The orientation characteristic can identify a directionality of the sensor measurement data and/or an angle associated with the possible object relative to the vehicle. The measurement-based virtual center of gravity characteristic can identify a center of the possible object, for example, based on a density of the data points associated with the detection event 421. The existence characteristic can identify whether the possible object identified by the detection event 421 is an actual object proximate to the vehicle. The size characteristic can identify or estimate a real size of the possible object associated with the detection event 421. The novelty characteristic can identify whether the detection event 421 corresponds to a newly detected pattern 413 or corresponds to a previous detection event 421.

In some embodiments, the association system 420 can implement one or more computational networks, for example, each to generate hypothesis information 422 corresponding to a different characteristic associated with the detection events 421. The computational networks may be rule-based networks, expert-based networks, such as Bayesian networks, networks for probabilistic inference, networks for exact and approximate inference, Markovian networks, machine-learning networks, or the like, which can receive data, such as the sensor measurement data from the environmental model 401, the system-level data 402, the detected patterns 413, the inter-frame pattern differences 414, the detection events 421, or the like, and determine the hypothesis information 422 based on the inputted data. The hypothesis information 422 can include estimated values for the various characteristics along with confidence levels of the estimated values. In some embodiments, the hypothesis information 422 can describe an estimated range of multiple outcomes for at least one of the characteristics and provide percent chances of each of the multiple outcomes being correct, such as with one or more probability density functions. For example, the hypothesis information 422 can represent a velocity characteristic as a probability density function, which can describe a range of possible velocities associated with the detection event 421 and estimate a percent chance that each of the possible velocities has occurred.
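
As a non-limiting illustration of hypothesis information expressed as a distribution over outcomes, the sketch below represents a velocity characteristic as a discrete probability distribution over candidate velocities, together with the most likely value, its confidence, and the expected value. The numbers are hypothetical and are not output of the disclosed networks.

    # Sketch: a velocity hypothesis as a discrete distribution over candidate
    # values, plus the most likely estimate and its confidence level.
    velocity_hypothesis = {          # m/s -> probability the value is correct
        6.0: 0.10,
        7.0: 0.25,
        8.0: 0.45,
        9.0: 0.20,
    }
    assert abs(sum(velocity_hypothesis.values()) - 1.0) < 1e-9

    best_velocity = max(velocity_hypothesis, key=velocity_hypothesis.get)
    confidence = velocity_hypothesis[best_velocity]
    expected_velocity = sum(v * p for v, p in velocity_hypothesis.items())
    print(best_velocity, confidence, expected_velocity)   # 8.0 0.45 7.75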

FIG. 5 illustrates an example intra-modality system 500 in a sensor event detection and fusion system according to various embodiments. Referring to FIG. 5, the intra-modality system 500 can include an inter-frame difference unit 510, a pattern detection unit 520, and an inter-frame detection unit 530, each to receive sensor measurement data in an environmental model 501. The sensor measurement data can include measurements or raw data from different types or modalities of sensors in a vehicle. For example, the sensor measurement data can include image data 502-1 measured by at least one image capture device, RADAR measurements 502-2 from one or more RADAR sensors, LIDAR measurements 502-N from one or more LIDAR sensors, or the like.

The image data 502-1 can include pixel or image data of entire image frames or can include pixel or image data corresponding to frame-to-frame changes captured by the image capture device. For example, when the image capture device corresponds to an event-based camera, the event-based camera can initially output an entire image frame of image data 502-1 and subsequently output frame-to-frame pixel or other image data changes. The RADAR measurements 502-2 can include raw signal data in a frequency domain or include pre-processed data, for example, corresponding to untracked targets identified by the RADAR sensors. In some embodiments, the sensors can output sensor measurement data periodically, intermittently, in response to sensing events, or the like, with frame rates, scan rates, or refresh rates that can vary from sensor-to-sensor.

The inter-frame difference unit 510, the pattern detection unit 520, and the inter-frame detection unit 530 also can receive system-level data 503 from various systems or components in a vehicle. The system-level data 503 can include external vehicle information, such as weather conditions surrounding the vehicle, a brightness-level external to the vehicle, or the like, and include context of the vehicle, such as a speed of the vehicle, or the like.

The inter-frame difference unit 510 can determine differences from adjacent frames or scans of the sensor measurement data on a per-sensor-type basis. For example, the inter-frame difference unit 510 can compare the received sensor measurement data from a type of sensor against sensor measurement data from a previously received frame or scan from that type of sensor to determine the differences from adjacent frames or scans of the sensor measurement data. The inter-frame difference unit 510 can perform this inter-frame and intra-modality comparison of the sensor measurement data based, at least in part, on the spatial locations of the sensor measurement data in the environmental model 501. In some embodiments, the inter-frame difference unit 510 also can perform this inter-frame and intra-modality comparison of the sensor measurement data based on the output from the ego motion unit 313 in FIG. 3, for example, utilizing a distance a vehicle traveled between frames from the output from the ego motion unit 313.

In some embodiments, prior to determining the inter-frame differences for the sensor measurement data, the inter-frame difference unit 510 can selectively process the sensor measurement data based on the types of data received from the sensors. For example, when an image capture sensor provides entire image frames as the image data 502-1, the inter-frame difference unit 510 can cache the entire image frames and determine inter-frame differences for the sensor measurement data from a plurality of the cached image frames. In another example, when an image capture sensor provides event-based pixels as the image data 502-1, the inter-frame difference unit 510 can perform pixel caching to generate an entire image from the image data 502-1. The inter-frame difference unit 510 can utilize the event-based pixels as the inter-frame differences in the sensor measurement data. In another example, when one or more of the RADAR sensors provides raw signal data in a frequency domain as RADAR measurements 502-2, the inter-frame difference unit 510 can detect one or more untracked targets from the RADAR measurements 502-2. The inter-frame difference unit 510 can determine differences between the untracked targets in adjacent frames, which can constitute inter-frame differences in the sensor measurement data for the RADAR sensor modality.
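
The following non-limiting sketch illustrates the full-frame caching case: adjacent image frames are cached and differenced, and only pixels whose change exceeds a threshold are reported as inter-frame differences. The threshold value is a hypothetical assumption.

    import numpy as np

    class FrameDifferencer:
        """Cache the previous full image frame and report per-pixel changes
        that exceed a threshold (threshold value is an illustrative assumption)."""
        def __init__(self, threshold=15):
            self.threshold = threshold
            self.previous = None

        def update(self, frame):
            if self.previous is None:
                self.previous = frame
                return np.zeros_like(frame, dtype=bool)
            changed = np.abs(frame.astype(int) - self.previous.astype(int)) > self.threshold
            self.previous = frame
            return changed

    differ = FrameDifferencer()
    frame0 = np.zeros((4, 4), dtype=np.uint8)
    frame1 = frame0.copy(); frame1[1, 2] = 200        # one pixel brightens
    differ.update(frame0)
    print(np.argwhere(differ.update(frame1)))         # [[1 2]]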

The pattern detection unit 520 can analyze, on a per-sensor-type basis, the sensor measurement data from the environmental model 501 and optionally the inter-frame differences to identify or detect patterns 523 in the sensor measurement data. The detected patterns 523 can correspond to image features extracted from the image data 502-1, groups of untracked targets from the RADAR measurements 502-2, groupings of data points from the LIDAR measurements 502-N, or the like.

In some embodiments, the pattern detection unit 520 can adjust the identification and detection of the patterns 523 in the sensor measurement data based on the system-level data 503. For example, when the system-level data 503 indicates there is precipitation external to the vehicle, which can lead to false-positive measurements by LIDAR sensors, the pattern detection unit 520 can adjust the identification of data point clusters from the LIDAR measurements 502-N based on the presence of the precipitation.

The pattern detection unit 520 can include a feature extraction unit 521 to identify features in image frames from the image data 502-1. The features can include corners, circles, boxes, edges, lines, or the like, in the image frames. The feature extraction unit 521, in some embodiments, can identify the features based on color or intensity changes within the image frames or between frames, for example, based on the inter-frame differences. The feature extraction unit 521 also can identify gradients or intensity changes from the image data 502-1 and generate a histogram of oriented gradients (HOG) for utilization in identifying the features in the image frames. In some embodiments, the histogram of oriented gradients can be oriented by directionality associated with the gradients or intensity changes from the image data 502-1. The feature extraction unit 521 can identify features in image frames from the image data 502-1 by performing optical flow calculations on the image data 502-1 in the image frames to identify optical flow-based features in the image frames. The feature extraction unit 521 can identify features in image frames from the image data 502-1 by performing transformations on the image data 502-1, such as rectification transformations, perspective transformations, orthographic transformations, and/or Hough transformations. The feature extraction unit 521 can identify features in image frames from the image data 502-1 by performing ridge detection, for example, to extract features corresponding to lane markings.
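
As a non-limiting sketch of gradient-based feature extraction, the example below builds a coarse histogram of oriented gradients for a grayscale patch; a full HOG pipeline would add cell and block normalization and other steps not shown. The bin count and patch are hypothetical.

    import numpy as np

    def oriented_gradient_histogram(patch, bins=9):
        """Build a coarse histogram of oriented gradients for a grayscale patch.
        A simplified sketch of HOG-style feature extraction, not a full pipeline."""
        gy, gx = np.gradient(patch.astype(float))
        magnitude = np.hypot(gx, gy)
        orientation = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned gradients
        hist, _ = np.histogram(orientation, bins=bins, range=(0.0, 180.0),
                               weights=magnitude)
        return hist / (hist.sum() + 1e-9)                      # normalize

    patch = np.tile(np.arange(8, dtype=float), (8, 1))         # horizontal intensity ramp
    print(oriented_gradient_histogram(patch).round(2))         # energy concentrated in the 0-20 degree bin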

The pattern detection unit 520 can include a plurality of cluster identification units 522-1 to 522-M, for example, at least one cluster identification unit 522 per sensor type. The cluster identification units 522-1 to 522-M can identify groups of data points in the sensor measurement data, for example, based on spatial location or spatial density of the data points in the environmental model 501, a temporal alignment of the data points in the environmental model 501, velocities and/or directionality of movement of the data points from the environmental model 501 and/or the inter-frame differences, or the like. In some embodiments, the cluster identification units 522-1 to 522-M can implement one or more clustering algorithms, such as a density-based spatial clustering of applications with noise (DBSCAN) algorithm, to analyze the sensor measurement data from the environmental model 501 on a per-sensor-type or per-sensor modality basis, and to identify the groups of data points in the sensor measurement data. The cluster identification units 522-1 to 522-M also can identify sub-groups of clusters or sub-clusters within larger clusters, or other information about the groups of the data points, such as identifying an orientation associated with groups or sub-groups of the data points, determining outlying data points not included in groups or sub-groups, or the like.
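
The following non-limiting sketch applies the DBSCAN algorithm, as implemented in scikit-learn, to a small set of LIDAR-style points; the eps and min_samples values are hypothetical assumptions that would be tuned per sensor modality.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Two tight groups of points plus one stray return (illustrative data only).
    points = np.array([
        [10.0, 1.0], [10.1, 1.1], [10.2, 0.9],      # cluster 0
        [25.0, -3.0], [25.1, -3.2], [24.9, -2.9],   # cluster 1
        [40.0, 8.0],                                # outlier / noise
    ])

    labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
    print(labels)   # e.g. [ 0  0  0  1  1  1 -1]; -1 marks an outlying data point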

In some embodiments, the cluster identification units 522-1 to 522-M also can identify portions of the environmental model 501 missing data points from one or more sensor types. For example, when an object having a non-reflective surface material, a light scattering surface material, or both, is proximate to a vehicle, a RADAR sensor may output RADAR measurements indicating the presence of the object, but a LIDAR sensor may not detect the object due, in part, to the material of the object. In this situation, the cluster identification units 522-1 to 522-M can indicate that no pattern was detected due to missing data points.

In some embodiments, the cluster identification units 522-1 to 522-M can identify missing data points within a field of view of the sensor as a negative sensor data cluster. Negative sensor data clusters can correspond to portions of the environmental model 501 without data points from a RADAR or LIDAR sensor, but which the cluster identification units 522-1 to 522-M can identify as indicating the presence of an object. For example, the cluster identification units 522-1 to 522-M can analyze the environmental model 501 to identify holes in sensor measurement data within a field-of-view of one or more sensors capturing the sensor measurement data, and then make a determination as to whether the identified holes in the sensor measurement data correspond to free space or a possible object having a non-reflective surface material, a light scattering surface material, or both.
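
As a non-limiting sketch of identifying negative sensor data cluster candidates, the example below flags grid cells that lie inside a sensor's field of view but received no returns; the subsequent determination between free space and a non-reflective object is not shown. The grid contents are hypothetical.

    import numpy as np

    # Sketch: mark cells inside the LIDAR field of view that received no returns.
    visible = np.array([[True,  True,  True],
                        [True,  True,  True],
                        [False, False, False]])     # rear row not covered by LIDAR
    returns = np.array([[3, 0, 5],
                        [2, 0, 4],
                        [0, 0, 0]])                 # LIDAR hits per cell

    negative_cluster_candidates = visible & (returns == 0)
    print(np.argwhere(negative_cluster_candidates))
    # [[0 1]
    #  [1 1]] -> holes inside the field of view: free space or a non-reflective object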

The inter-frame detection unit 530 can determine frame-to-frame differences in sensor measurement data corresponding to the detected patterns 523 on a per-sensor-type basis. The inter-frame detection unit 530 can generate inter-frame pattern differences 531 indicative of the frame-to-frame differences in sensor measurement data corresponding to the detected patterns 523. In some embodiments, the inter-frame detection unit 530 also can perform this inter-frame and intra-modality comparison of the sensor measurement data based on the output from the ego motion unit 313 in FIG. 3, for example, utilizing a distance a vehicle traveled between frames from the output from the ego motion unit 313.

In some embodiments, the inter-frame detection unit 530 can adjust the determination of the frame-to-frame differences in sensor measurement data corresponding to the detected patterns 523 based on the system-level data 503. For example, when the system-level data 503 indicates there is low-level of external light, such as at night or when the vehicle is located within an enclosure, the inter-frame detection unit 530 can adjust its determination of the frame-to-frame differences in sensor measurement data corresponding to the detected patterns 523 based on the level of external light.

FIG. 6 illustrates an example flowchart for pre-tracking sensor event detection and fusion according to various embodiments. Referring to FIG. 6, in a block 601, a computing system can receive sensor measurement data from different types of sensors in a vehicle. The sensor measurement data can include measurements or raw data from different types or modalities of sensors in a vehicle. For example, the sensor measurement data can include data measured by at least one image capture device, one or more RADAR sensors, one or more LIDAR sensors, one or more ultrasonic sensors, or the like. The computing system can receive the sensor measurement data in frames or scans over time, such as periodically, intermittently, in response to sensing events, or the like.

The computing system also can receive system-level data from various systems or components in the vehicle. The system-level data can include external vehicle information, such as weather conditions surrounding the vehicle, a brightness-level external to the vehicle, or the like, and include context of the vehicle, such as a speed of the vehicle, or the like.

In a block 602, the computing system can detect, on a per-sensor type basis, patterns in the sensor measurement data indicative of possible objects proximate to the vehicle. The computing system may identify groups of data points in the sensor measurement data as the patterns, for example, based on spatial location or spatial density of the data points in the environmental model, a temporal-alignment of the data points in the environmental model, velocities and/or directionality of movement of the data points from the environmental model and/or the inter-frame differences, or the like. The computing system also may detect the patterns by identifying and extracting image features from sensed image data.

The computing system, in some embodiments, may identify differences in the sensor measurement data between frames, and utilize the identified differences in the sensor measurement data to detect the patterns. For example, the identified differences may indicate velocity and/or directionality of movement of the data points, which can allow the computing system to group or cluster data points in the sensor measurement data or identify image features to extract. The computing system, in some embodiments, can detect the patterns by utilizing the system-level data to adjust the sensor measurement data used in the detection of the patterns.

In a block 603, the computing system can identify, on the per-sensor type basis, inter-frame differences in the sensor measurement data corresponding to the detected patterns. For example, after the computing system identifies the pattern in the sensor measurement data, the computing system can compare the data points or image features to previous sensor measurement data associated with the patterns to identify differences in spatial location, intensity, angle, movement, velocity, color, shape, or the like.
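The following sketch illustrates one assumed form of the comparison in block 603, in which a per-pattern summary from the previous frame is differenced against the current frame's summary; the chosen properties are illustrative.

# Illustrative sketch: frame-to-frame differences for one detected pattern,
# computed from assumed per-pattern summaries.
from dataclasses import dataclass

@dataclass
class PatternSummary:
    centroid_x: float
    centroid_y: float
    mean_intensity: float
    extent: float               # rough spatial size of the pattern

def inter_frame_deltas(prev: PatternSummary, curr: PatternSummary) -> dict:
    """Per-property frame-to-frame differences for one detected pattern."""
    return {
        "dx": curr.centroid_x - prev.centroid_x,
        "dy": curr.centroid_y - prev.centroid_y,
        "d_intensity": curr.mean_intensity - prev.mean_intensity,
        "d_extent": curr.extent - prev.extent,
    }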

In a block 604, the computing system can associate the detected patterns in the sensor measurement data from different types of the sensors. The computing system can perform an inter-modality comparison of the sensor measurement data corresponding to the detected patterns, for example, to determine whether the detected patterns were identified by multiple different sensor types. In some embodiments, the computing system can utilize a visibility map to determine whether certain sensors or sensor types have the ability to capture measurements in different spatial locations. The computing system can utilize the determined sensor visibility in the association of the detected patterns in the sensor measurement data.
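A simplified sketch of such an association, using a gating distance between pattern centroids and a hypothetical visibility check for the second modality, is shown below; the gating distance and the form of the visibility map are assumptions.

# Illustrative sketch: associate detected patterns from two sensor types
# when their centroids fall within a gating distance, and use an assumed
# visibility check so that an unmatched pattern is not penalized when the
# other modality cannot measure at that location.
import math

def associate_patterns(patterns_a, patterns_b, visibility_b, gate: float = 1.5):
    """Pair patterns from two modalities whose centroids lie within `gate`.

    patterns_a, patterns_b -- lists of dicts with a "centroid" (x, y) entry
    visibility_b           -- callable(xy) -> True if modality B covers xy
    """
    pairs, unconfirmed = [], []
    for a in patterns_a:
        match = next((b for b in patterns_b
                      if math.dist(a["centroid"], b["centroid"]) <= gate), None)
        if match is not None:
            pairs.append((a, match))
        elif not visibility_b(a["centroid"]):
            # Modality B cannot measure here, so a missing match from B
            # says nothing against pattern A.
            unconfirmed.append(a)
    return pairs, unconfirmed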

In a block 605, the computing system can identify detection events corresponding to the possible objects proximate to the vehicle based on the association of the detected patterns and the inter-frame differences corresponding to the detected patterns. In some embodiments, the computing system can analyze the detected patterns, how they were associated, and possibly the inter-frame differences in the sensor measurement data corresponding to the detected patterns to determine characteristics of the associated patterns. Based on this analysis, the computing system can determine whether the associated patterns or any individual patterns include data corresponding to a possible object proximate to the vehicle. The possible object may correspond to sensor measurement data indicative of an object proximate to the vehicle that has yet to be confirmed by a sensor fusion system. In some embodiments, the computing system can compare detected patterns from multiple different sensor modalities, and identify a detection event from the detected patterns based on a spatial alignment between the detected patterns, a temporal alignment between the detected patterns, a state of the data in the detected patterns, or the like. The state of the data can correspond to a velocity of the data corresponding to the detected patterns, which can be determined by the sensors and included in the sensor measurement data or determined based on the inter-frame differences.

When the computing system utilizes a single sensor modality or sensor type to generate a detection event, the detection event may be called a sensor detection event. When the computing system utilizes associated or correlated patterns from multiple different sensor modalities to generate a detection event, the detection event may be called a fused sensor detection event.
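The sketch below illustrates this naming in simplified form: a detection event built from patterns of a single modality is labeled a sensor detection event, while one built from associated patterns of several modalities is labeled a fused sensor detection event. The event structure and centroid averaging are illustrative assumptions.

# Illustrative sketch: build a detection event from one or more associated
# patterns and label it according to how many modalities contributed.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DetectionEvent:
    kind: str                               # "sensor" or "fused"
    modalities: List[str]
    centroid: Tuple[float, float]
    velocity: Optional[Tuple[float, float]] = None

def make_detection_event(patterns: List[dict]) -> DetectionEvent:
    """Create a detection event from associated patterns.

    Each pattern is assumed to be a dict with "modality" and "centroid" keys.
    """
    modalities = sorted({p["modality"] for p in patterns})
    xs = [p["centroid"][0] for p in patterns]
    ys = [p["centroid"][1] for p in patterns]
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
    kind = "fused" if len(modalities) > 1 else "sensor"
    return DetectionEvent(kind=kind, modalities=modalities, centroid=centroid)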

In a block 606, the computing system can generate hypothesis information having confidence levels corresponding to properties of the detection events. The hypothesis information can describe various properties or characteristics associated with the detection events and provide confidence levels corresponding to those properties or characteristics. In some embodiments, the properties or characteristics associated with the detection events can include unity, velocity, orientation, center of gravity, existence, size, and novelty of the sensor measurement data corresponding to the detection event. The unity characteristic can identify whether the sensor measurement data corresponds to a single possible object or multiple possible objects proximate to each other. The velocity characteristic can identify at least one velocity associated with the sensor measurement data. The orientation characteristic can identify a directionality of the sensor measurement data and/or an angle associated with the possible object relative to the vehicle. The center of gravity characteristic can identify a center of the possible object based on a density of the data points associated with the detection event. The existence characteristic can identify whether the possible object identified by the detection event is an actual object proximate to the vehicle. The size characteristic can identify or estimate a real size of the possible object associated with the detection event. The novelty characteristic can identify whether the detection event corresponds to a newly detected pattern or to a previous detection event.
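One possible, non-limiting representation of such hypothesis information is a record that pairs each property of a detection event with a confidence level, as in the sketch below; the container, property names as dictionary keys, and the example values are assumptions for illustration.

# Illustrative sketch: hypothesis record pairing each detection-event
# property with a confidence level in [0, 1].
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class DetectionHypothesis:
    event_id: int
    # property name -> (value, confidence), e.g. "velocity" -> ((vx, vy), 0.7)
    properties: Dict[str, Tuple[Any, float]] = field(default_factory=dict)

    def set(self, name: str, value, confidence: float) -> None:
        """Store a property value with its confidence, clamped to [0, 1]."""
        self.properties[name] = (value, max(0.0, min(1.0, confidence)))

# Example usage with the properties named in the description above:
hypothesis = DetectionHypothesis(event_id=42)
hypothesis.set("unity", "single_object", 0.8)
hypothesis.set("velocity", (4.2, 0.1), 0.6)
hypothesis.set("existence", True, 0.9)
hypothesis.set("novelty", "new", 0.7)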

Illustrative Operating Environment

The execution of various driving automation processes according to embodiments may be implemented using computer-executable software instructions executed by one or more programmable computing devices. Because these embodiments may be implemented using software instructions, the components and operation of a programmable computer system on which various embodiments of the invention may be employed will be described below.

FIGS. 7 and 8 illustrate an example of a computer system of the type that may be used to implement various embodiments. Referring to FIG. 7, various examples may be implemented through the execution of software instructions by a computing device 701, such as a programmable computer. Accordingly, FIG. 7 shows an illustrative example of a computing device 701. As seen in FIG. 7, the computing device 701 includes a computing unit 703 with a processing unit 705 and a system memory 707. The processing unit 705 may be any type of programmable electronic device for executing software instructions, but will conventionally be a microprocessor. The system memory 707 may include both a read-only memory (ROM) 709 and a random access memory (RAM) 711. As will be appreciated by those of ordinary skill in the art, both the read-only memory (ROM) 709 and the random access memory (RAM) 711 may store software instructions for execution by the processing unit 705.

The processing unit 705 and the system memory 707 are connected, either directly or indirectly, through a bus 713 or alternate communication structure, to one or more peripheral devices 717-723. For example, the processing unit 705 or the system memory 707 may be directly or indirectly connected to one or more additional memory storage devices, such as a hard disk drive 717, which can be magnetic and/or removable, a removable optical disk drive 719, and/or a flash memory card. The processing unit 705 and the system memory 707 also may be directly or indirectly connected to one or more input devices 721 and one or more output devices 723. The input devices 721 may include, for example, a keyboard, a pointing device (such as a mouse, touchpad, stylus, trackball, or joystick), a scanner, a camera, and a microphone. The output devices 723 may include, for example, a monitor display, a printer and speakers. With various examples of the computing device 701, one or more of the peripheral devices 717-723 may be internally housed with the computing unit 703. Alternately, one or more of the peripheral devices 717-723 may be external to the housing for the computing unit 703 and connected to the bus 713 through, for example, a Universal Serial Bus (USB) connection.

With some implementations, the computing unit 703 may be directly or indirectly connected to a network interface 715 for communicating with other devices making up a network. The network interface 715 can translate data and control signals from the computing unit 703 into network messages according to one or more communication protocols, such as the transmission control protocol (TCP) and the Internet protocol (IP). Also, the network interface 715 may employ any suitable connection agent (or combination of agents) for connecting to a network, including, for example, a wireless transceiver, a modem, or an Ethernet connection. Such network interfaces and protocols are well known in the art, and thus will not be discussed here in more detail.

It should be appreciated that the computing device 701 is illustrated as an example only, and is not intended to be limiting. Various embodiments may be implemented using one or more computing devices that include the components of the computing device 701 illustrated in FIG. 7, that include only a subset of the components illustrated in FIG. 7, or that include an alternate combination of components, including components that are not shown in FIG. 7. For example, various embodiments may be implemented using a multi-processor computer, a plurality of single and/or multiprocessor computers arranged into a network, or some combination of both.

With some implementations, the processor unit 705 can have more than one processor core. Accordingly, FIG. 8 illustrates an example of a multi-core processor unit 705 that may be employed with various embodiments. As seen in this figure, the processor unit 705 includes a plurality of processor cores 801A and 801B. Each processor core 801A and 801B includes a computing engine 803A and 803B, respectively, and a memory cache 805A and 805B, respectively. As known to those of ordinary skill in the art, a computing engine 803A and 803B can include logic devices for performing various computing functions, such as fetching software instructions and then performing the actions specified in the fetched instructions. These actions may include, for example, adding, subtracting, multiplying, and comparing numbers, performing logical operations such as AND, OR, NOR and XOR, and retrieving data. Each computing engine 803A and 803B may then use its corresponding memory cache 805A and 805B, respectively, to quickly store and retrieve data and/or instructions for execution.

Each processor core 801A and 801B is connected to an interconnect 807. The particular construction of the interconnect 807 may vary depending upon the architecture of the processor unit 705. With some processor cores 801A and 801B, such as the Cell microprocessor created by Sony Corporation, Toshiba Corporation and IBM Corporation, the interconnect 807 may be implemented as an interconnect bus. With other processor cores 801A and 801B, however, such as the Opteron™ and Athlon™ dual-core processors available from Advanced Micro Devices of Sunnyvale, Calif., the interconnect 807 may be implemented as a system request interface device. In any case, the processor cores 801A and 801B communicate through the interconnect 807 with an input/output interface 809 and a memory controller 810. The input/output interface 809 provides a communication interface between the processor unit 705 and the bus 713. Similarly, the memory controller 810 controls the exchange of information between the processor unit 705 and the system memory 707. With some implementations, the processor unit 705 may include additional components, such as a high-level cache memory shared by the processor cores 801A and 801B. It also should be appreciated that the description of the computer system illustrated in FIG. 7 and FIG. 8 is provided as an example only, and is not intended to suggest any limitation as to the scope of use or functionality of alternate embodiments.

The system and apparatus described above may use dedicated processor systems, micro controllers, programmable logic devices, microprocessors, or any combination thereof, to perform some or all of the operations described herein. Some of the operations described above may be implemented in software and other operations may be implemented in hardware. Any of the operations, processes, and/or methods described herein may be performed by an apparatus, a device, and/or a system substantially similar to those as described herein and with reference to the illustrated figures.

The processing device may execute instructions or “code” stored in a computer-readable memory device. The memory device may store data as well. The processing device may include, but may not be limited to, an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, or the like. The processing device may be part of an integrated control system or system manager, or may be provided as a portable electronic device configured to interface with a networked system either locally or remotely via wireless transmission.

The processor memory may be integrated together with the processing device, for example RAM or FLASH memory disposed within an integrated circuit microprocessor or the like. In other examples, the memory device may comprise an independent device, such as an external disk drive, a storage array, a portable FLASH key fob, or the like. The memory and processing device may be operatively coupled together, or in communication with each other, for example by an I/O port, a network connection, or the like, and the processing device may read a file stored on the memory. Associated memory devices may be “read only” by design (ROM) by virtue of permission settings, or not. Other examples of memory devices may include, but may not be limited to, WORM, EPROM, EEPROM, FLASH, NVRAM, OTP, or the like, which may be implemented in solid state semiconductor devices. Other memory devices may comprise moving parts, such as a known rotating disk drive. All such memory devices may be “machine-readable” and may be readable by a processing device.

Operating instructions or commands may be implemented or embodied in tangible forms of stored computer software (also known as a “computer program” or “code”). Programs, or code, may be stored in a digital memory device and may be read by the processing device. A “computer-readable storage medium” (or alternatively, a “machine-readable storage medium”) may include all of the foregoing types of computer-readable memory devices, as well as new technologies of the future, as long as the memory devices may be capable of storing digital information in the nature of a computer program or other data, at least temporarily, and as long as the stored information may be “read” by an appropriate processing device. The term “computer-readable” may not be limited to the historical usage of “computer” to imply a complete mainframe, mini-computer, desktop, or even laptop computer. Rather, “computer-readable” may comprise a storage medium that may be readable by a processor, a processing device, or any computing system. Such media may be any available media that may be locally and/or remotely accessible by a computer or a processor, and may include volatile and non-volatile media, and removable and non-removable media, or any combination thereof.

A program stored in a computer-readable storage medium may comprise a computer program product. For example, a storage medium may be used as a convenient means to store or transport a computer program. For the sake of convenience, the operations may be described as various interconnected or coupled functional blocks or diagrams. However, there may be cases where these functional blocks or diagrams may be equivalently aggregated into a single logic device, program or operation with unclear boundaries.

CONCLUSION

While the application describes specific examples of carrying out embodiments of the invention, those skilled in the art will appreciate that there are numerous variations and permutations of the above described systems and techniques that fall within the spirit and scope of the invention as set forth in the appended claims. For example, while specific terminology has been employed above to refer to driving automation processes, it should be appreciated that various examples of the invention may be implemented using any desired combination of driving automation processes.

One of skill in the art will also recognize that the concepts taught herein can be tailored to a particular application in many other ways. In particular, those skilled in the art will recognize that the illustrated examples are but one of many alternative implementations that will become apparent upon reading this disclosure.

Although the specification may refer to “an”, “one”, “another”, or “some” example(s) in several locations, this does not necessarily mean that each such reference is to the same example(s), or that the feature only applies to a single example.

Claims

1. A method comprising:

receiving, by a computing system, sensor measurement data from different types of sensors in a vehicle, wherein the sensor measurement data is spatially and temporally aligned in an environmental model associated with the vehicle;
identifying, by the computing system on a per-sensor type basis, patterns in the sensor measurement data indicative of possible objects proximate to the vehicle; and
associating, by the computing system, the patterns in the sensor measurement data from different types of the sensors to identify detection events corresponding to the possible objects proximate to the vehicle, wherein a control system for the vehicle is configured to control operation of the vehicle based, at least in part, on the detection events.

2. The method of claim 1, wherein identifying the patterns in the sensor measurement data further comprises extracting features from image data corresponding to at least one image capture device.

3. The method of claim 1, wherein identifying patterns in the sensor measurement data further comprises identifying, on the per-sensor type basis, one or more clusters of data points in the sensor measurement data.

4. The method of claim 3, wherein identifying the clusters of the data points in the sensor measurement data further comprises:

identifying spatial locations in a particular time period for the data points in the sensor measurement data from the environmental model;
determining a state corresponding to the data points in the sensor measurement data based, at least in part, on inter-frame differences corresponding to the data points in the sensor measurement data; and
grouping a subset of the data points in the sensor measurement data into one of the clusters based on the identified spatial locations in the particular time period and the determined state corresponding to the data points.

5. The method of claim 1, further comprising determining, by the computing system, inter-frame differences in the sensor measurement data corresponding to the patterns in the sensor measurement data, wherein associating the patterns in the sensor measurement data from different types of the sensors to identify the detection events corresponding to the possible objects proximate to the vehicle is based, at least in part, on the inter-frame differences in the sensor measurement data.

6. The method of claim 1, wherein the detection events have properties associated with the possible objects proximate to the vehicle, and wherein associating the patterns in the sensor measurement data from different types of the sensors to identify the detection events further comprises generating confidence levels corresponding to the properties of the detection events.

7. The method of claim 6, wherein the properties include at least one of a unity, a velocity, an orientation, a center of gravity, an existence, a size, or a novelty associated with the detection events.

8. An apparatus comprising at least one memory device storing instructions configured to cause one or more processing devices to perform operations comprising:

receiving sensor measurement data from different types of sensors in a vehicle, wherein the sensor measurement data is spatially and temporally aligned in an environmental model associated with the vehicle;
identifying, on a per-sensor type basis, patterns in the sensor measurement data indicative of possible objects proximate to the vehicle; and
associating the patterns in the sensor measurement data from different types of the sensors to identify detection events corresponding to the possible objects proximate to the vehicle, wherein a control system for the vehicle is configured to control operation of the vehicle based, at least in part, on the detection events.

9. The apparatus of claim 8, wherein identifying the patterns in the sensor measurement data further comprises extracting features from image data corresponding to at least one image capture device.

10. The apparatus of claim 8, wherein identifying patterns in the sensor measurement data further comprises identifying, on the per-sensor type basis, one or more clusters of data points in the sensor measurement data.

11. The apparatus of claim 10, wherein identifying the clusters of the data points in the sensor measurement data further comprises:

identifying spatial locations in a particular time period for the data points in the sensor measurement data from the environmental model;
determining a state corresponding to the data points in the sensor measurement data based, at least in part, on inter-frame differences corresponding to the data points in the sensor measurement data; and
grouping a subset of the data points in the sensor measurement data into one of the clusters based on the identified spatial locations in the particular time period and the determined state corresponding to the data points.

12. The apparatus of claim 8, wherein the instructions are further configured to cause the one or more processing devices to perform operations comprising determining inter-frame differences in the sensor measurement data corresponding to the patterns in the sensor measurement data, wherein associating the patterns in the sensor measurement data from different types of the sensors to identify the detection events corresponding to the possible objects proximate to the vehicle is based, at least in part, on the inter-frame differences in the sensor measurement data.

13. The apparatus of claim 8, wherein the detection events have properties associated with the possible objects proximate to the vehicle, and wherein associating the patterns in the sensor measurement data from different types of the sensors to identify the detection events further comprises generating confidence levels corresponding to the properties of the detection events.

14. The apparatus of claim 13, wherein the properties include at least one of a unity, a velocity, an orientation, a center of gravity, an existence, a size, or a novelty associated with the detection event.

15. A system comprising:

a memory device configured to store machine-readable instructions; and
a computing system including one or more processing devices, in response to executing the machine-readable instructions, configured to: receive sensor measurement data from different types of sensors in a vehicle, wherein the sensor measurement data is spatially and temporally aligned in an environmental model associated with the vehicle; identify, on a per-sensor type basis, patterns in the sensor measurement data indicative of possible objects proximate to the vehicle; and associate the patterns in the sensor measurement data from different types of the sensors to identify detection events corresponding to the possible objects proximate to the vehicle, wherein a control system for the vehicle is configured to control operation of the vehicle based, at least in part, on the detection events.

16. The system of claim 15, wherein the one or more processing devices, in response to executing the machine-readable instructions, are configured to identify patterns in the sensor measurement data by extracting features from image data corresponding to at least one image capture device.

17. The system of claim 15, wherein the one or more processing devices, in response to executing the machine-readable instructions, are configured to identify patterns in the sensor measurement data by identifying, on the per-sensor type basis, one or more clusters of data points in the sensor measurement data.

18. The system of claim 17, wherein the one or more processing devices, in response to executing the machine-readable instructions, are configured to identify the clusters of the data points in the sensor measurement data by:

identifying spatial locations in a particular time period for the data points in the sensor measurement data from the environmental model;
determining a state corresponding to the data points in the sensor measurement data based, at least in part, on inter-frame differences corresponding to the data points in the sensor measurement data; and
grouping a subset of the data points in the sensor measurement data into one of the clusters based on the identified spatial locations in the particular time period and the determined state corresponding to the data points.

19. The system of claim 15, wherein the one or more processing devices, in response to executing the machine-readable instructions, are configured to:

determine inter-frame differences in the sensor measurement data corresponding to the patterns in the sensor measurement data; and
associate the patterns in the sensor measurement data from different types of the sensors to identify the detection events corresponding to the possible objects proximate to the vehicle based, at least in part, on the inter-frame differences in the sensor measurement data.

20. The system of claim 15, wherein the one or more processing devices, in response to executing the machine-readable instructions, are configured to generate confidence levels corresponding to properties of the detection events based, at least in part, on the association of the patterns in the sensor measurement data from different types of the sensors.

Patent History
Publication number: 20180067490
Type: Application
Filed: Jan 20, 2017
Publication Date: Mar 8, 2018
Applicant:
Inventors: Matthias Pollach (Munchen), Ljubo Mercep (Munchen)
Application Number: 15/411,830
Classifications
International Classification: G05D 1/00 (20060101); G05D 1/02 (20060101); G06N 5/04 (20060101); G06N 7/00 (20060101);