SCENE CLASSIFICATION

According to one aspect, scene classification may be provided. An image capture device may capture a series of image frames of an environment from a moving vehicle. A temporal classifier may classify image frames with temporal predictions and generate a series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification of image frames based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a fully connected layer. The scene classifier may classify image frames based on a CNN, global average pooling, and a fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. A controller of a vehicle may activate or deactivate vehicle sensors or vehicle systems of the vehicle based on the scene prediction.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application, Ser. No. 62/731158, filed on Sep. 14, 2018; the entirety of the above-noted application(s) is incorporated by reference herein.

BACKGROUND

In driving scenarios, scene understanding by a human involves answering questions about a place, environmental conditions, and traffic participant behavior. Interestingly, humans are able to perform dynamic scene recognition rapidly and accurately with little attention to objects in the scene. Human drivers have the remarkable ability to classify complex traffic scenes and adapt their driving behavior based on their environment. In this regard, automated human level dynamic scene recognition may thus be an attractive goal to achieve.

BRIEF DESCRIPTION

According to one aspect, a system for scene classification may include an image capture device, an image segmentation module, an image masker, a temporal classifier, and a scene classifier. The image capture device may capture a first series of image frames of an environment from a moving vehicle. The image segmentation module may identify one or more traffic participants within the environment based on a first convolutional neural network (CNN). The image masker may generate a second series of image frames by masking one or more of the traffic participants from the environment. The temporal classifier may classify one or more image frames of the second series of image frames with one of two or more temporal predictions and generate a third series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification based on a second CNN, a long short-term memory (LSTM) network, and a first fully connected layer. The scene classifier may classify one or more image frames of the third series of image frames based on a third CNN, global average pooling, and a second fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions.

The two or more temporal predictions may include an approaching annotation, an entering annotation, and a passing annotation. The first CNN, the second CNN, or the third CNN may be a deepnet CNN or a ResNet 50 CNN. The system for scene classification may be implemented in a vehicle and the vehicle may include a controller activating or deactivating one or more sensors or one or more vehicle systems of the vehicle based on the scene prediction.

The scene classifier may classify one or more image frames of the third series of image frames with a weather classification including clear, sunny, snowy, rainy, overcast, or foggy and the controller may activate or deactivate one or more of the sensors or one or more of the vehicle systems of the vehicle based on the weather classification. The scene classifier may classify one or more image frames of the third series of image frames with a road surface classification including dry, wet, or snow and the controller may activate or deactivate one or more of the sensors or one or more of the vehicle systems of the vehicle based on the road surface classification. The scene classifier may classify one or more image frames of the third series of image frames with an environment classification including urban, ramp, highway, or local and the controller may activate or deactivate one or more of the sensors or one or more of the vehicle systems of the vehicle based on the environment classification.

One or more of the vehicle systems may be a LIDAR system or radar system. The controller may deactivate the LIDAR system or radar system based on the scene prediction being a tunnel. The controller may prioritize searching for traffic lights, stop signs, or stop lines based on the scene prediction being an intersection.

According to one aspect, a vehicle equipped with a system for scene classification may include an image capture device, an image segmentation module, an image masker, a temporal classifier, a scene classifier, and a controller. The image capture device may capture a first series of image frames of an environment from a moving vehicle. The image segmentation module may identify one or more traffic participants within the environment based on a first convolutional neural network (CNN). The image masker may generate a second series of image frames by masking one or more of the traffic participants from the environment. The temporal classifier may classify one or more image frames of the second series of image frames with one of two or more temporal predictions and generate a third series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification based on a second CNN, a long short-term memory (LSTM) network, and a first fully connected layer. The scene classifier may classify one or more image frames of the third series of image frames based on a third CNN, global average pooling, and a second fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. The controller may activate or deactivate one or more sensors or one or more vehicle systems of the vehicle based on the scene prediction.

The two or more temporal predictions may include an approaching annotation, an entering annotation, and a passing annotation. The first CNN, the second CNN, or the third CNN may be a deepnet CNN or a ResNet 50 CNN. One or more of the vehicle systems may be a LIDAR system or radar system and the controller may deactivate the LIDAR system or radar system based on the scene prediction being a tunnel.

According to one aspect, a system for scene classification may include an image capture device, a temporal classifier, and a scene classifier. The image capture device may capture a first series of image frames of an environment from a moving vehicle. The temporal classifier may classify one or more image frames of the first series of image frames with one of two or more temporal predictions and generate a second series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a first fully connected layer. The scene classifier may classify one or more image frames of the second series of image frames based on a second CNN, global average pooling, and a second fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions.

The two or more temporal predictions may include an approaching annotation, an entering annotation, and a passing annotation. The CNN or the second CNN may be a ResNet 50 CNN. The system for scene classification may be implemented in a vehicle and the vehicle may include a controller activating or deactivating one or more sensors or one or more vehicle systems of the vehicle based on the scene prediction.

The scene classifier may classify one or more image frames of the second series of image frames with a weather classification including clear, sunny, snowy, rainy, overcast, or foggy. The controller may activate or deactivate one or more of the sensors or one or more of the vehicle systems of the vehicle based on the weather classification. The scene classifier may classify one or more image frames of the second series of image frames with a road surface classification including dry, wet, or snow. The controller may activate or deactivate one or more of the sensors or one or more of the vehicle systems of the vehicle based on the road surface classification.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a component diagram of a system for scene classification, according to one aspect.

FIG. 2 is a flow diagram of a method for scene classification, according to one aspect.

FIG. 3 is an exemplary diagram of temporal predictions or predictions associated with a scene classification, according to one aspect.

FIGS. 4A-4B are exemplary diagrams of temporal predictions or predictions associated with various scene classifications, according to one aspect.

FIG. 5 is an exemplary diagram of an architecture associated with training the system for scene classification of FIG. 1.

FIG. 6 is an illustration of an example computer-readable medium or computer-readable device including processor-executable instructions configured to embody one or more of the provisions set forth herein, according to one aspect.

FIG. 7 is an illustration of an example computing environment where one or more of the provisions set forth herein are implemented, according to one aspect.

DETAILED DESCRIPTION

The following terms are used throughout the disclosure, the definitions of which are provided herein to assist in understanding one or more aspects of the disclosure.

A “processor”, as used herein, processes signals and performs general computing and arithmetic functions. Signals processed by the processor may include digital signals, data signals, computer instructions, processor instructions, messages, a bit, a bit stream, or other means that may be received, transmitted, and/or detected. Generally, the processor may be a variety of various processors including multiple single and multicore processors and co-processors and other multiple single and multicore processor and co-processor architectures. The processor may include various modules to execute various functions.

A “memory”, as used herein, may include volatile memory and/or non-volatile memory. Non-volatile memory may include, for example, ROM (read only memory), PROM (programmable read only memory), EPROM (erasable PROM), and EEPROM (electrically erasable PROM). Volatile memory may include, for example, RAM (random access memory), synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), and direct RAM bus RAM (DRRAM). The memory may store an operating system that controls or allocates resources of a computing device.

A “disk” or “drive”, as used herein, may be a magnetic disk drive, a solid state disk drive, a floppy disk drive, a tape drive, a Zip drive, a flash memory card, and/or a memory stick. Furthermore, the disk may be a CD-ROM (compact disk ROM), a CD recordable drive (CD-R drive), a CD rewritable drive (CD-RW drive), and/or a digital video ROM drive (DVD-ROM). The disk may store an operating system that controls or allocates resources of a computing device.

A “bus”, as used herein, refers to an interconnected architecture that is operably connected to other computer components inside a computer or between computers. The bus may transfer data between the computer components. The bus may be a memory bus, a memory controller, a peripheral bus, an external bus, a crossbar switch, and/or a local bus, among others. The bus may also be a vehicle bus that interconnects components inside a vehicle using protocols such as Media Oriented Systems Transport (MOST), Controller Area Network (CAN), Local Interconnect Network (LIN), among others.

A “database”, as used herein, may refer to a table, a set of tables, and a set of data stores (e.g., disks) and/or methods for accessing and/or manipulating those data stores.

An “operable connection”, or a connection by which entities are “operably connected”, is one in which signals, physical communications, and/or logical communications may be sent and/or received. An operable connection may include a wireless interface, a physical interface, a data interface, and/or an electrical interface.

A “computer communication”, as used herein, refers to a communication between two or more computing devices (e.g., computer, personal digital assistant, cellular telephone, network device) and may be, for example, a network transfer, a file transfer, an applet transfer, an email, a hypertext transfer protocol (HTTP) transfer, and so on. A computer communication may occur across, for example, a wireless system (e.g., IEEE 802.11), an Ethernet system (e.g., IEEE 802.3), a token ring system (e.g., IEEE 802.5), a local area network (LAN), a wide area network (WAN), a point-to-point system, a circuit switching system, a packet switching system, among others.

A “vehicle”, as used herein, refers to any moving vehicle that is capable of carrying one or more human occupants and is powered by any form of energy. The term “vehicle” includes cars, trucks, vans, minivans, SUVs, motorcycles, scooters, boats, personal watercraft, and aircraft. In some scenarios, a motor vehicle includes one or more engines. Further, the term “vehicle” may refer to an electric vehicle (EV) that is powered entirely or partially by one or more electric motors powered by an electric battery. The EV may include battery electric vehicles (BEV) and plug-in hybrid electric vehicles (PHEV). Additionally, the term “vehicle” may refer to an autonomous vehicle and/or self-driving vehicle powered by any form of energy. The autonomous vehicle may or may not carry one or more human occupants.

A “vehicle system”, as used herein, may be any automatic or manual systems that may be used to enhance the vehicle, driving, and/or safety. Exemplary vehicle systems include an autonomous driving system, an electronic stability control system, an anti-lock brake system, a brake assist system, an automatic brake prefill system, a low speed follow system, a cruise control system, a collision warning system, a collision mitigation braking system, an auto cruise control system, a lane departure warning system, a blind spot indicator system, a lane keep assist system, a navigation system, a transmission system, brake pedal systems, an electronic power steering system, visual devices (e.g., camera systems, proximity sensor systems), a climate control system, an electronic pretensioning system, a monitoring system, a passenger detection system, a vehicle suspension system, a vehicle seat configuration system, a vehicle cabin lighting system, an audio system, a sensory system, among others.

The aspects discussed herein may be described and implemented in the context of non-transitory computer-readable storage medium storing computer-executable instructions. Non-transitory computer-readable storage media include computer storage media and communication media, such as flash memory drives, digital versatile discs (DVDs), compact discs (CDs), floppy disks, and tape cassettes. Non-transitory computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, modules, or other data.

FIG. 1 is a component diagram of a system for scene classification 100, according to one aspect. A vehicle 10 may be equipped with a system for scene classification 100. The system for scene classification 100 may include an image capture device 102, a processor 104, a memory 106, a storage drive 108, a communication interface 110, an image segmentation module 112, an image masker 114, a convolutor 116, a temporal classifier 118, and a scene classifier 120. The vehicle 10 may include a controller, one or more vehicle sensors, and one or more vehicle systems 190. The communication interface 110 may be in communication with a server 130. The server 130 may include a scene classification database which may include a ground truth image sequence 132 and a scene classification model 134 or scene classification policy network. According to one aspect, one or more of the image segmentation module 112, the image masker 114, the convolutor 116, the temporal classifier 118, and/or the scene classifier 120 may be implemented via the processor 104, the memory 106, the storage drive 108, etc.

Ground Truth

According to one aspect, the ground truth image sequence 132 may include a series of one or more image frames which are associated with a moving vehicle and may be collected during a training phase. It will be appreciated that some scenes may be static, while other scenes or places may be dynamic. For example, an intersection may be a static scene or place, while a construction zone may be dynamic in that the construction zone may be defined by traffic cones, which may change size, shape, appearance, and/or location between construction zones and between different days or times.

Each one of the one or more image frames of the ground truth image sequence 132 may be annotated (e.g., manually annotated and be indicative of the ground truth) with one or more labels, such as a temporal classification label, a weather classification label, a road surface classification label, an environment classification label, and a scene classification label. Examples of temporal classification labels may include background, approaching, entering, passing, etc. In other words, the image frames are annotated temporally with fine grained labels such as Approaching (A), Entering (E), and Passing (P), depending on the vantage point and/or the position of the training vehicle relative to the place of interest or scene. The classification labels may be organized in a hierarchical and causal manner. For example, the environment may be annotated at the top, followed by the scene classes at the mid-level, and the fine grained annotations such as approaching, entering, and passing at the bottom level.

Examples of weather classification labels may include clear, sunny, snowy, rainy, overcast, cloudy, foggy, light, dark, etc. Examples of road surface classification labels may include dry, wet, snow, obscured (e.g., some traffic markings not visible), mud, etc. Examples of environment classification labels may include environment types, such as urban, country, suburban, ramp, highway, local (e.g., neighborhood, residential, school), etc. Ramps, for example, may be a connector between two highways or between a highway and another road type. Examples of scene classification labels may include road places, a construction zone, an intersection (e.g., an x-way intersection, such as a three-way, four-way, five-way, etc.), a bridge, an overhead bridge, a railroad crossing, a tunnel, lane merge, lane branch, zebra crossing, etc. Some scene classifications may merely be associated with approaching and passing temporal classification labels, while others may be associated with approaching, entering, and passing labels. The road surface classification and the weather classifications may be mutually exclusive from one another. In other words, it may be wet on the road, but the weather may be sunny, for example.
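By way of a non-limiting illustration, the hierarchical annotation described above (environment at the top, scene classes at the mid-level, and fine grained temporal labels at the bottom) may be sketched as a small data structure. The label vocabularies below follow the examples given in this section; the dictionary layout and function name are hypothetical and provided merely for explanation, not an actual annotation schema.

```python
# Illustrative three-level annotation hierarchy: environment -> scene -> temporal.
# Label sets follow the examples in this section; the structure is hypothetical.
ENVIRONMENTS = {"urban", "country", "suburban", "ramp", "highway", "local"}
SCENES = {"construction zone", "intersection", "bridge", "overhead bridge",
          "railroad crossing", "tunnel", "lane merge", "lane branch",
          "zebra crossing"}
TEMPORAL = {"background", "approaching", "entering", "passing"}

def annotate_frame(environment, scene, temporal):
    """Build one frame-level annotation, validating each level of the hierarchy."""
    if environment not in ENVIRONMENTS:
        raise ValueError(f"unknown environment label: {environment}")
    if scene not in SCENES:
        raise ValueError(f"unknown scene label: {scene}")
    if temporal not in TEMPORAL:
        raise ValueError(f"unknown temporal label: {temporal}")
    return {"environment": environment, "scene": scene, "temporal": temporal}

frame_annotation = annotate_frame("urban", "intersection", "approaching")
# frame_annotation records all three levels for one image frame
```

In such a sketch, a ground truth sequence would simply be a list of these per-frame annotations, one per image frame.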

This annotated ground truth image sequence 132 may be utilized to train a model, which may be stored in the scene classification database as a scene classification model 134 or a scene classification policy network, for example. Because the ground truth image sequence 132 is annotated as desired (e.g., this may be performed manually, by humans), the scene classification model 134 may be trained via machine learning, deep learning, or other type of artificial intelligence technique. In this regard, the system for scene classification 100 may be trained (e.g., via the processor 104) to mimic results from the ground truth image sequence 132 by minimizing losses and by backpropagation.

Image Capture

The image capture device 102 may capture a first series of image frames (e.g., video) of an environment (e.g., operating environment) from the perspective of a moving vehicle. According to one aspect, this first series of image frames or video of the environment may be taken as an input to the system for scene classification 100.

Segmentation

The image segmentation module 112 may identify one or more traffic participants within the environment from the image frames based on a first convolutional neural network (CNN) and the first series of image frames. According to one aspect, the image segmentation module 112 may implement a deeplab CNN. Regardless of implementation, the image segmentation module 112 may provide semantics segmentation as an output when the input of the series of image frames is provided. The image segmentation module 112 may classify objects within each image frame of the first series of image frames. For example, the image segmentation module 112 may identify one or more pedestrians, one or more vehicles (e.g., in traffic), one or more motorists, one or more bystanders, one or more bicyclists, one or more moving objects, etc.

Masking

The image masker 114 may generate a second series of image frames by masking one or more of the traffic participants from the environment. Because traffic participants generally have no bearing on how a scene is defined (e.g., whether the environment is an intersection, a highway, etc.), the image masker 114 may mask all of the traffic participants from the environment from the second series of image frames. According to one aspect, the image masker 114 may utilize semantic segmentation to mask one or more of the traffic participants from the image frame sequence. According to one aspect, the image masker 114 may also mask other unnecessary objects from the environment, such as birds in the sky, etc. In this way, the image masker 114 may provide the system for scene classification 100 with greater spatial hard attention by allowing neural networks of the system for scene classification 100 to focus on the unmasked portions of the image frames, thereby providing greater accuracy during classification. Thus, semantic context may be provided via the image masker 114 and the image segmentation module 112.
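By way of a non-limiting illustration, masking traffic participants using a per-pixel semantic segmentation map may be sketched as follows. The class names, fill value, and use of plain nested lists in place of image tensors are hypothetical simplifications for explanation only.

```python
# Minimal sketch of masking traffic participants from a frame using a per-pixel
# semantic segmentation map. Class names and the zero fill value are
# illustrative assumptions; a real implementation would operate on image tensors.
TRAFFIC_PARTICIPANT_CLASSES = {"pedestrian", "vehicle", "bicyclist", "motorcyclist"}

def mask_traffic_participants(frame, segmentation, fill=0):
    """Return a copy of `frame` with traffic-participant pixels replaced by `fill`.

    `frame` and `segmentation` are equal-sized 2D grids (lists of rows);
    `segmentation` holds a class name per pixel.
    """
    return [
        [fill if seg_label in TRAFFIC_PARTICIPANT_CLASSES else pixel
         for pixel, seg_label in zip(frame_row, seg_row)]
        for frame_row, seg_row in zip(frame, segmentation)
    ]

frame = [[10, 20], [30, 40]]
segmentation = [["road", "vehicle"], ["pedestrian", "road"]]
masked = mask_traffic_participants(frame, segmentation)
# masked == [[10, 0], [0, 40]]: the vehicle and pedestrian pixels are zeroed
```

Applying this per frame over the first series of image frames would yield the second, masked series described above.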

Temporal Classification

The temporal classifier 118 may classify one or more image frames of the second series of image frames (e.g., or from the original set of image frames captured by the image capture device 102) with one of two or more temporal predictions and generate a third series of image frames associated with respective temporal predictions based on a scene classification model 134. Examples of temporal predictions may include background, approaching, entering, passing of a scene or a place, etc. The temporal classifier 118 may learn that approaching is generally followed by entering, and then by passing.

According to one aspect, the temporal classifier 118 may perform classification based on a second CNN, a long short-term memory (LSTM) network, and a first fully connected layer on an input set of image frames, which may be the original input image frames (RGB), image frames concatenated with semantic segmentation (RGBS), image frames with traffic participants masked using semantic segmentation (RGB-masked), or merely using a one channel semantic segmentation image (S). In this way, the temporal classifier 118 may be utilized to determine where within a scene, the vehicle 10 is located (e.g., on a frame by frame basis). According to one aspect, the second CNN may be implemented as ResNet 50, for example. The temporal classifier 118 may determine and assign one or more of the temporal predictions to one or more corresponding image frames of the first series of image frames or one or more corresponding image frames of the second series of image frames prior to any determination by the scene classifier 120 regarding the type of scene or place.
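By way of a non-limiting illustration, the CNN, LSTM, and fully connected layer arrangement described above may be sketched in PyTorch as follows. The small convolutional stack below merely stands in for the second CNN (e.g., ResNet 50); all layer sizes, and the use of four temporal classes (background, approaching, entering, passing), are hypothetical assumptions for explanation only.

```python
import torch
import torch.nn as nn

class TemporalClassifier(nn.Module):
    """Sketch of the CNN -> LSTM -> fully connected temporal classifier.

    The tiny conv stack is a stand-in for the second CNN (e.g., ResNet 50);
    layer sizes and the four temporal classes are illustrative assumptions.
    """

    def __init__(self, in_channels=3, feat_dim=64, hidden_dim=128, num_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool each frame to a feat_dim vector
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)  # first fully connected layer

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)  # temporal context across the frame sequence
        return self.fc(out)        # per-frame temporal logits: (b, t, num_classes)

logits = TemporalClassifier()(torch.zeros(2, 5, 3, 32, 32))
# logits has shape (2, 5, 4): one temporal prediction per frame
```

An RGBS or one channel semantic segmentation input, as described above, would be accommodated by changing `in_channels` accordingly.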

The temporal classifier 118, when performing classification based on any CNN, may implement the convolutor 116, and pass an input through one or more of the CNNs of the convolutor, such as a CNN, a depth CNN, a pose CNN, etc. to generate an output.

In other words, the temporal classifier 118 may determine the beginning, middle, and/or end of a scene before determining what type of scene the scene actually is or prior to determining the associated scene prediction for the scene. Stated yet another way, the temporal classifier 118 may enable the system for scene classification 100 to distinguish between different stages of an event, such as when the vehicle 10 passes through an intersection or a construction zone. Specifically, the temporal classifier 118 may label, assign, or annotate one or more image frames of one or more of the series of images with a temporal prediction from a set of temporal predictions. As previously discussed, examples of these temporal predictions may include background, approaching, entering, or passing of a scene or a place. In this way, fine grain or fine-tuned temporal classification may be provided by the temporal classifier 118 (e.g., to localize the vehicle 10 within a specific, unknown scene or place). It will be appreciated that other temporal predictions may be utilized according to other aspects. For example, the temporal prediction may be numerical and be indicative of progress through a scene (e.g., which may yet to be defined by the scene classifier 120). Regardless, the ground truth image sequence 132 may be utilized to train a classifier, such as the temporal classifier 118, to detect when the vehicle 10 is approaching, entering, or passing a scene, regardless of whether the type of scene is known.

Weather Classification

The scene classifier 120 may utilize the scene classification model 134, which may be trained on a CNN, such as ResNet 50 or a deepnet CNN, to determine the weather classification for the vehicle 10. Similarly to scene classification, weather, road surface, and environment may be classified using an input where the traffic participants are masked (e.g., using the image masker 114 generated series of image frames which mask one or more of the traffic participants from the environment). However, other inputs may be provided, such as the original input image frames (RGB), image frames concatenated with semantic segmentation (RGBS), image frames with traffic participants masked using semantic segmentation (RGB-masked), or merely using a one channel semantic segmentation image (S). The scene classification model 134 may be trained based on the annotated ground truth image sequence 132. Examples of weather classification labels may include lighting conditions and visibility conditions, such as clear, sunny, snowy, rainy, overcast, cloudy, foggy, light, dark, etc.

Road Surface Classification

The scene classifier 120 may utilize the scene classification model 134, which may have been trained on a CNN, such as ResNet 50, to determine the road surface classification for the vehicle 10. The scene classification model 134 may be trained based on the ground truth image sequence 132, which may be annotated with one or more labels for each of the associated image frames, as described above. Examples of road surface classification labels may include dry, wet, snow, obscured (e.g., some traffic markings not visible), mud, etc.

Environment Classification

The scene classifier 120 may determine the environment classification similarly to the other classifications described above. Examples of environment classification labels may include environment types, such as urban, country, suburban, ramp, highway, local (e.g., neighborhood, residential, school), etc.

Scene or Place Classification

The scene classifier 120 may classify one or more image frames of the third series of image frames based on a third CNN, global average pooling, and a second fully connected layer and generate an associated scene prediction based on the scene classification model 134 and respective temporal predictions. The scene classifier 120 may generate a fourth series of image frames associated with respective temporal predictions based on the scene classification model 134 and respective temporal predictions. In this way, the temporal classifier 118 may be utilized to trim image frames from the video or from the image sequences to enable efficient scene classification to occur. Stated another way, the scene classifier 120 may merely consider image frames marked as approaching, entering, and passing of a given place in the environment, while ignoring image frames annotated as background, and thus provide dynamic classification of road scenes, for example. In this way, this two-stage architecture mitigates the unnecessary use of processing power by excluding background image frames from being examined and/or scene classified. Thus, the temporal classifier 118 acts as a coarse separator for the scene classifier 120, mitigating the amount of processing power and resources utilized to classify scenes, and sending merely the candidate frames of approaching, entering, or passing to the scene classifier 120 as an event window to the prediction network.
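By way of a non-limiting illustration, the trimming described above, in which the temporal classifier acts as a coarse separator and only candidate frames are forwarded to the scene classifier, may be sketched as follows. The function and frame names are hypothetical and provided merely for explanation.

```python
# Sketch of the two-stage trimming: only frames predicted as approaching,
# entering, or passing are forwarded to the scene classifier as the event
# window; background frames are dropped. Names are illustrative assumptions.
CANDIDATE_LABELS = {"approaching", "entering", "passing"}

def trim_event_window(frames_with_predictions):
    """Keep only candidate frames, preserving order, as the event window."""
    return [(frame, label) for frame, label in frames_with_predictions
            if label in CANDIDATE_LABELS]

sequence = [("f0", "background"), ("f1", "approaching"), ("f2", "entering"),
            ("f3", "passing"), ("f4", "background")]
event_window = trim_event_window(sequence)
# event_window keeps f1..f3 and drops the two background frames
```

Only the frames in this event window would then be passed through the third CNN, global average pooling, and second fully connected layer for scene prediction.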

The scene classifier 120, similarly to the temporal classifier 118, when performing classification based on any CNN, may implement the convolutor 116, and pass an input through one or more of the CNNs of the convolutor, such as a CNN, a depth CNN, a pose CNN, ResNet 50 CNN, etc. to generate an output.

According to one aspect, the third CNN may be implemented as ResNet 50, for example. Therefore, the scene classifier 120 may utilize one or more of the temporal predictions from one or more of the corresponding image frames to facilitate determination of what type of scene or place is associated with the approaching, entering, and passing of a scene. For example, the temporal classifier 118 may have classified one or more image frames of the series of image frames with temporal predictions. Using these temporal predictions, the scene classifier 120 may determine that a set of image frames associated with approaching, entering, and passing of a scene from the series of image frames is a construction zone, for example. Thus, the temporal classifier 118 may determine that the vehicle 10 is travelling through a beginning, middle, and end of an unknown type of scene, and the scene classifier 120 may determine what type of scene the scene is after the temporal classifier 118 has made or determined its temporal predictions of the image frames.

Examples of scene or place classifications may include road places, such as a construction zone, an intersection (e.g., an x-way intersection, such as a three-way, four-way, five-way, etc.), a bridge, an overhead bridge, a railroad crossing, a tunnel, lane merge, lane branch, zebra crossing, etc. In this way, the scene prediction may be a scene classification indicative of a type of location which the vehicle 10 is approaching, entering, or passing, for example.

According to one aspect, the scene classifier 120 may generate the scene prediction based on the input of the first series of image frames, in real time, such that a complete series of image frames temporally annotated from background, approaching, entering, passing is not necessarily required to generate the scene prediction. In other words, merely a partial series of image frames may be assigned temporal predictions (e.g., background, approaching, . . . , etc.) prior to the scene classifier 120 generating the associated scene prediction based on the CNN, the global average pooling, and respective temporal predictions. Thus, development of machine learning that utilizes the semantic context and temporal nature of the ground truth dataset may improve classification results for the system for scene classification 100.
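By way of a non-limiting illustration, generating a scene prediction from a partial, real-time sequence may be sketched as follows: a prediction is emitted once a minimum number of non-background frames have been observed, without waiting for the full approaching, entering, and passing sequence. The classifier stub, threshold, and function names are hypothetical.

```python
# Sketch of real-time scene prediction from a partial sequence: once enough
# candidate (non-background) frames accumulate, a prediction is emitted without
# waiting for "passing" frames. The stub and threshold are illustrative.
def streaming_scene_prediction(labeled_frames, classify, min_candidates=2):
    """Yield (frame_index, scene_prediction) as soon as enough evidence accrues."""
    window = []
    for i, (frame, temporal_label) in enumerate(labeled_frames):
        if temporal_label != "background":
            window.append(frame)
        if len(window) >= min_candidates:
            yield i, classify(window)

stub = lambda window: "intersection"  # stand-in for the scene classifier
frames = [("f0", "background"), ("f1", "approaching"), ("f2", "entering")]
first = next(streaming_scene_prediction(frames, stub))
# first == (2, "intersection"): predicted before any "passing" frame arrives
```

This mirrors the point above that only a partial series of temporally annotated image frames is needed before the associated scene prediction is generated.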

Vehicle Application

The controller may activate or deactivate one or more sensors or one or more vehicle systems 190 of the vehicle 10 based on the scene prediction and/or one or more of the classifications, such as the weather classification, the road surface classification, the environment classification, etc. For example, because scene context features may serve as a prior for other down-stream tasks such as recognition of objects, behavior, action, intention, navigation, localization, etc., the controller of the vehicle 10 may react based on the scene prediction determined by the scene classifier 120, as well as the other classifications, including the weather classification, the road surface classification, and the environment classification.

For example, if the scene classifier 120 determines the scene prediction to be a crosswalk, the controller of the vehicle 10 may activate additional sensors to detect pedestrians. At other times, such as when the vehicle 10 is on the highway, the pedestrian sensors may be prioritized lower. As another example, if the scene classifier 120 determines the scene prediction to be an intersection, the controller of the vehicle 10 may activate additional sensors or run specific modules to detect traffic lights, stop signs, stop lines, or other intersection related information. In other words, the controller may reprioritize or highly prioritize searching for traffic lights, stop signs, or stop lines based on the scene prediction being an intersection. Conversely, the controller may deactivate a LIDAR system or a radar system based on the scene prediction being a tunnel.
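The controller reactions described above amount to a mapping from scene prediction to sensor adjustments, which can be sketched in a few lines of pure Python. The scene names, sensor names, and adjustment values are hypothetical placeholders, not part of the specification:

```python
# Illustrative sketch of the controller policy: look up which sensors
# or vehicle systems to activate, deactivate, or reprioritize for a
# given scene prediction. All names/values here are assumptions.

SCENE_SENSOR_POLICY = {
    "crosswalk":    {"pedestrian_detector": "activate"},
    "intersection": {"traffic_light_detector": "activate",
                     "stop_sign_detector": "activate"},
    "tunnel":       {"lidar": "deactivate", "radar": "deactivate"},
    "highway":      {"pedestrian_detector": "deprioritize"},
}

def controller_react(scene_prediction):
    """Return the sensor adjustments for a scene prediction,
    defaulting to no change for scenes not in the policy."""
    return SCENE_SENSOR_POLICY.get(scene_prediction, {})

print(controller_react("tunnel"))
# -> {'lidar': 'deactivate', 'radar': 'deactivate'}
```

A table-driven policy like this keeps the scene-to-sensor logic in one place, so adding a new scene type (e.g., a railroad crossing) requires only a new dictionary entry rather than new branching code.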

According to one aspect, if the scene classifier 120 determines the scene prediction to be a construction zone, the controller of the vehicle 10 (e.g., implemented via the processor 104) may warn or provide notifications and/or disable autonomous driving based on the scene prediction being the construction zone, because autonomous vehicles may utilize pre-built, high definition maps of a roadway. If the scene classifier 120 determines that it is foggy or rainy out, the processor 104 may disable the LIDAR of one or more of the vehicle systems 190 to mitigate ghosting effects. When the scene classifier 120 determines that the scene prediction is a tunnel or an overhead bridge, GPS of the vehicle systems 190 may be deprioritized because GPS may lose tracking in the tunnel or under the overhead bridge. Further, cameras may be prepped for extreme exposure when exiting the tunnel or overhead bridge area. Similarly, a lane departure warning system may be implemented with wider tolerances or disabled when the scene classifier 120 determines the scene prediction to be a branch area or near an exit ramp, for example. Therefore, the scene classifier 120 may be utilized to enhance the use of one or more of the vehicle systems 190, such as by activating, deactivating, prioritizing, deprioritizing, etc. one or more of the respective vehicle systems 190. In this way, the scene classifier 120 may provide contextual cues for other vehicle systems 190 of the vehicle 10 to operate efficiently.

FIG. 2 is a flow diagram of a method 200 for scene classification, according to one aspect. The method 200 for scene classification may include capturing 202 a first series of image frames of an environment from a moving vehicle, identifying 204 traffic participants within the environment based on a first CNN, generating 206 a second series of image frames by masking traffic participants from the environment, classifying 208 image frames of the second series of image frames with temporal predictions based on a second CNN, a long short-term memory (LSTM) network, and a first fully connected layer, classifying 210 image frames based on a third CNN, global average pooling, and a second fully connected layer, and generating 212 an associated scene prediction based on the scene classification model 134 and respective temporal predictions.
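The steps of method 200 can be sketched as a chain of stages. Each stage below is a trivial stand-in callable so the data flow is visible end to end; the real stages would be the networks described in the specification, and the stand-in values are hypothetical:

```python
# Sketch of method 200 as a pipeline. The numbered comments mirror the
# flow-diagram steps; every callable here is an illustrative stand-in.

def run_scene_classification(frames, segment, mask, temporal, scene):
    participants = segment(frames)        # 204: identify via first CNN
    masked = mask(frames, participants)   # 206: mask traffic participants
    temporal_preds = temporal(masked)     # 208: second CNN + LSTM + FC
    return scene(masked, temporal_preds)  # 210/212: third CNN + GAP + FC

# Trivial stand-ins to exercise the flow:
result = run_scene_classification(
    frames=["f0", "f1", "f2"],
    segment=lambda fs: [set() for _ in fs],
    mask=lambda fs, ps: fs,
    temporal=lambda fs: ["approaching", "entering", "passing"],
    scene=lambda fs, tp: "intersection",
)
print(result)  # -> intersection
```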

FIG. 3 is an exemplary diagram of temporal predictions associated with a scene classification, according to one aspect. In FIG. 3, different image frames captured by the image capture device 102 may be labelled in association with the ground truth image sequence 132. For example, a first image frame 310 may be labelled as an approaching image frame, a second image frame 320 may be labelled as an entering image frame, and a third image frame 330 may be labelled as a passing image frame. This approaching, entering, and passing may correspond with the vehicle 10 approaching 312, entering 322, and passing 332 an intersection, as seen in FIG. 3.

While FIG. 3 depicts the approaching, entering, and passing for the intersection scene type, other types of scenes may be annotated in a similar fashion (e.g., including temporal predictions of approaching, entering, and passing and also including other annotations, such as scene type annotations of an intersection, a bridge, a tunnel, etc.). It will be appreciated that the ground truth image sequence 132 and the captured series of image frames from the image capture device 102 may be from the perspective of a moving vehicle, and thus, the image frames are not from the perspective of a static or stationary camera. In other words, the ground truth image sequence 132 and the captured series of image frames may include space-time variations in viewpoint and/or scene appearance. As seen in FIG. 3, view variations may be caused by the changing distance to the intersection as the vehicle 10 approaches the scene of interest (i.e. the intersection at the passing 332).

FIGS. 4A-4B are exemplary diagrams of temporal predictions associated with various scene classifications, according to one aspect. In FIGS. 4A-4B, different examples of a variety of annotations are provided. According to one aspect, one or more CNNs or other networks may be implemented to make parameters fed through the architecture of FIGS. 4A-4B tractable.

FIG. 5 is an exemplary diagram of an architecture associated with training the system for scene classification 100 of FIG. 1. The ground truth image sequence 132 may be annotated to include the scene classification label of ‘construction’ and each one of the image frames of the input series of image frames of the construction environment may be annotated with temporal predictions indicative of where the moving vehicle is within the construction zone. In other words, the temporal predictions of the ground truth image sequence 132 may be marked as approaching, entering, or passing, for example.

The image capture device 102 may capture an input series of image frames. The image segmentation module 112 may segment or identify one or more traffic participants using semantic segmentation, such as via a CNN 510 (e.g., a deeplab CNN). The image masker 114 may mask one or more of the traffic participants from the image frames, thereby enabling the system for scene classification 100 to focus merely on the surrounding environment and provide more accurate scene classification accordingly.
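The masking step described above can be illustrated with a short numpy sketch: pixels whose semantic-segmentation label falls in a set of traffic-participant classes are zeroed out, leaving only the surrounding environment. The class ids and the `mask_participants` helper are illustrative assumptions:

```python
import numpy as np

# Sketch of the image masker: remove traffic participants from an RGB
# frame using a per-pixel segmentation map. Class ids are hypothetical.

PARTICIPANT_IDS = {11, 12, 13}  # e.g., person, rider, car (illustrative)

def mask_participants(rgb_frame, seg_map, participant_ids=PARTICIPANT_IDS):
    """Return a copy of the RGB frame with participant pixels set to zero."""
    masked = rgb_frame.copy()
    participant_mask = np.isin(seg_map, list(participant_ids))
    masked[participant_mask] = 0  # zero broadcast across the RGB channels
    return masked

frame = np.full((4, 4, 3), 255, dtype=np.uint8)
seg = np.zeros((4, 4), dtype=np.int64)
seg[1:3, 1:3] = 12  # a "car" occupies the center of the frame
out = mask_participants(frame, seg)
print(out[1, 1], out[0, 0])  # masked center vs. untouched background
```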

As seen in FIG. 5, the temporal classifier 118 may be utilized to trim untrimmed video and aggregate the features to classify the entire trimmed segment. For example, it may be beneficial to analyze or determine a class as a 4-way intersection by looking at or examining a segment (e.g., approaching, entering, and passing) in its entirety rather than on a per frame basis. Here, the temporal classifier 118 may be fed the series of image frames which have the traffic participants masked (e.g., the RGB-masked image frames). According to other aspects or architectures, the temporal classifier 118 may receive other series of image frames, such as the RGB, RGBS, or S image frames. In any event, the temporal classifier 118 may receive the input set of image frames, feed this set through a CNN 520, such as the ResNet 50 CNN, to extract a set of features 522, and feed this set of features through an LSTM 526 and a fully connected layer 528, thereby producing a series of image frames, each annotated with temporal predictions.
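The LSTM-plus-fully-connected stage of the temporal classifier can be sketched in numpy. The per-frame CNN features are stand-ins (random vectors) and the weights are random, so only the shapes and the per-frame nature of the output are meaningful; the layer sizes are assumptions:

```python
import numpy as np

# Sketch of the temporal classifier head: per-frame CNN features run
# through one LSTM layer, then a fully connected layer, yielding one
# set of temporal-label logits per frame. Weights/sizes are illustrative.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_fc(features, hidden=8, num_labels=4):
    """features: (T, D) array. Returns (T, num_labels) per-frame logits."""
    T, D = features.shape
    W = rng.standard_normal((4 * hidden, D)) * 0.1       # input weights
    U = rng.standard_normal((4 * hidden, hidden)) * 0.1  # recurrent weights
    b = np.zeros(4 * hidden)
    Wf = rng.standard_normal((num_labels, hidden)) * 0.1  # FC weights
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    logits = []
    for t in range(T):
        z = W @ features[t] + U @ h + b
        i, f, g, o = np.split(z, 4)            # input/forget/cell/output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        logits.append(Wf @ h)
    return np.stack(logits)

feats = rng.standard_normal((6, 16))  # 6 frames of 16-d stand-in CNN features
out_logits = lstm_fc(feats)
print(out_logits.shape)  # (6, 4): e.g., background/approaching/entering/passing
```

Note the recurrence: each frame's logits depend on the hidden state carried from earlier frames, which is what lets the classifier use temporal context rather than classifying each frame in isolation.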

The series of image frames annotated with temporal predictions may be fed to the scene classifier 120, which may feed them through one or more CNNs 530, such as the ResNet 50 CNN, to extract a set of features 532, perform global average pooling 536, and feed the results through a fully connected layer 538 to generate a scene prediction for the scene (e.g., which may be unknown up to this point) including image frames annotated as approaching, entering, and passing. This model may be trained based on the ground truth image sequence 132. In other words, the temporal classifier 118 and the scene classifier 120 may be trained using machine learning or deep learning to replicate or mimic the annotations of the ground truth image sequence 132, such as when a similar unannotated series of image frames is provided to the system for scene classification 100, thereby building a scene classification model 134 or scene classification policy network stored within the scene classification database on the server 130.
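The global-average-pooling-plus-fully-connected head of the scene classifier can be sketched in numpy. The weights are random and the class list is a hypothetical assumption, so only the structure (pool over the whole event window, then one prediction for the whole segment) is meaningful:

```python
import numpy as np

# Sketch of the scene classifier head: average the per-frame features
# over the event window (global average pooling), then apply a fully
# connected layer to get one scene prediction for the whole segment.
# Class names and weights here are illustrative.

rng = np.random.default_rng(1)
SCENE_CLASSES = ["construction", "intersection", "bridge", "tunnel"]

def scene_head(frame_features):
    """frame_features: (T, D) features for one event window.
    Returns the predicted scene class for the entire window."""
    pooled = frame_features.mean(axis=0)  # global average pooling -> (D,)
    W = rng.standard_normal((len(SCENE_CLASSES), pooled.size)) * 0.1
    logits = W @ pooled                   # fully connected layer
    return SCENE_CLASSES[int(np.argmax(logits))]

window = rng.standard_normal((5, 32))  # 5 frames annotated approaching..passing
pred = scene_head(window)
print(pred)
```

The key design point mirrored here is that pooling collapses the time axis, so the segment receives a single label regardless of how many frames the approaching/entering/passing window spans.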

The scene classifier 120 may aggregate frames within this window through global average pooling and produce a singular class label for the entire event, place, or scene. According to one aspect, one or more of the CNNs described herein may be pre-trained on the ground truth image sequence 132 or another database from the scene classification database. Data augmentation may be performed to reduce over-fitting. Random flips, random resize, and random crop may be employed. As indicated, the processor 104 or the controller of the vehicle 10 may make adjustments for one or more vehicle systems 190 based on the generated scene prediction.
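The augmentation operations mentioned above (random flips and random crops) can be sketched on an image array with numpy. The crop sizes and flip probability are illustrative choices, not values from the specification:

```python
import numpy as np

# Sketch of the data augmentation used to reduce over-fitting: a random
# horizontal flip followed by a random crop. Parameters are illustrative.

rng = np.random.default_rng(2)

def augment(image, crop_h, crop_w):
    """Randomly flip the image left-right, then take a random crop."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]  # horizontal flip along the width axis
    H, W, _ = image.shape
    top = int(rng.integers(0, H - crop_h + 1))
    left = int(rng.integers(0, W - crop_w + 1))
    return image[top:top + crop_h, left:left + crop_w, :]

img = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)
aug = augment(img, crop_h=6, crop_w=6)
print(aug.shape)  # -> (6, 6, 3)
```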

Still another aspect involves a computer-readable medium including processor-executable instructions configured to implement one aspect of the techniques presented herein. An aspect of a computer-readable medium or a computer-readable device devised in these ways is illustrated in FIG. 6, wherein an implementation 600 includes a computer-readable medium 608, such as a CD-R, DVD-R, flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data 606. This encoded computer-readable data 606, such as binary data including a plurality of zeros and ones as shown in 606, in turn includes a set of processor-executable computer instructions 604 configured to operate according to one or more of the principles set forth herein. In this implementation 600, the processor-executable computer instructions 604 may be configured to perform a method 602, such as the method 200 of FIG. 2. In another aspect, the processor-executable computer instructions 604 may be configured to implement a system, such as the system for scene classification 100 of FIG. 1. Many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.

As used in this application, the terms “component”, “module,” “system”, “interface”, and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processing unit, an object, an executable, a thread of execution, a program, or a computer. By way of illustration, both an application running on a controller and the controller may be a component. One or more components may reside within a process or thread of execution, and a component may be localized on one computer or distributed between two or more computers.

Further, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.

FIG. 7 and the following discussion provide a description of a suitable computing environment to implement aspects of one or more of the provisions set forth herein. The operating environment of FIG. 7 is merely one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices, such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like, multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, etc.

Generally, aspects are described in the general context of “computer readable instructions” being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media as will be discussed below. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform one or more tasks or implement one or more abstract data types. Typically, the functionality of the computer readable instructions is combined or distributed as desired in various environments.

FIG. 7 illustrates a system 700 including a computing device 712 configured to implement one aspect provided herein. In one configuration, the computing device 712 includes at least one processing unit 716 and memory 718. Depending on the exact configuration and type of computing device, memory 718 may be volatile, such as RAM, non-volatile, such as ROM, flash memory, etc., or a combination of the two. This configuration is illustrated in FIG. 7 by dashed line 714.

In other aspects, the computing device 712 includes additional features or functionality. For example, the computing device 712 may include additional storage such as removable storage or non-removable storage, including, but not limited to, magnetic storage, optical storage, etc. Such additional storage is illustrated in FIG. 7 by storage 720. In one aspect, computer readable instructions to implement one aspect provided herein are in storage 720. Storage 720 may store other computer readable instructions to implement an operating system, an application program, etc. Computer readable instructions may be loaded in memory 718 for execution by processing unit 716, for example.

The term “computer readable media” as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 718 and storage 720 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by the computing device 712. Any such computer storage media is part of the computing device 712.

The term “computer readable media” includes communication media. Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” includes a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.

The computing device 712 includes input device(s) 724 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, or any other input device. Output device(s) 722 such as one or more displays, speakers, printers, or any other output device may be included with the computing device 712. Input device(s) 724 and output device(s) 722 may be connected to the computing device 712 via a wired connection, wireless connection, or any combination thereof. In one aspect, an input device or an output device from another computing device may be used as input device(s) 724 or output device(s) 722 for the computing device 712. The computing device 712 may include communication connection(s) 726 to facilitate communications with one or more other devices 730, such as through network 728, for example.

Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter of the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example aspects.

Various operations of aspects are provided herein. The order in which one or more or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated based on this description. Further, not all operations may necessarily be present in each aspect provided herein.

As used in this application, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. Further, an inclusive “or” may include any combination thereof (e.g., A, B, or any combination thereof). In addition, “a” and “an” as used in this application are generally construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Additionally, at least one of A and B and/or the like generally means A or B or both A and B. Further, to the extent that “includes”, “having”, “has”, “with”, or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”.

Further, unless specified otherwise, “first”, “second”, or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first channel and a second channel generally correspond to channel A and channel B or two different or two identical channels or the same channel. Additionally, “comprising”, “comprises”, “including”, “includes”, or the like generally means comprising or including, but not limited to.

It will be appreciated that various of the above-disclosed and other features and functions, or alternatives or varieties thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.

Claims

1. A system for scene classification, comprising:

an image capture device capturing a first series of image frames of an environment from a moving vehicle;
an image segmentation module identifying one or more traffic participants within the environment based on a first convolutional neural network (CNN);
an image masker generating a second series of image frames by masking one or more of the traffic participants from the environment;
a temporal classifier classifying one or more image frames of the second series of image frames with one of two or more temporal predictions and generating a third series of image frames associated with respective temporal predictions based on a scene classification model, wherein the classification is based on a second CNN, a long short-term memory (LSTM) network, and a first fully connected layer; and
a scene classifier classifying one or more image frames of the third series of image frames based on a third CNN, global average pooling, and a second fully connected layer and generating an associated scene prediction based on the scene classification model and respective temporal predictions.

2. The system for scene classification of claim 1, wherein the two or more temporal predictions include an approaching annotation, an entering annotation, and a passing annotation.

3. The system for scene classification of claim 1, wherein the first CNN, the second CNN, or the third CNN is a deepnet CNN or a ResNet 50 CNN.

4. The system for scene classification of claim 1, wherein the system for scene classification is implemented in a vehicle and the vehicle includes a controller activating or deactivating one or more sensors or one or more vehicle systems of the vehicle based on the scene prediction.

5. The system for scene classification of claim 4, wherein the scene classifier classifies one or more image frames of the third series of image frames with a weather classification including clear, sunny, snowy, rainy, overcast, or foggy; and

wherein the controller activates or deactivates one or more of the sensors or one or more of the vehicle systems of the vehicle based on the weather classification.

6. The system for scene classification of claim 4, wherein the scene classifier classifies one or more image frames of the third series of image frames with a road surface classification including dry, wet, or snow; and

wherein the controller activates or deactivates one or more of the sensors or one or more of the vehicle systems of the vehicle based on the road surface classification.

7. The system for scene classification of claim 4, wherein the scene classifier classifies one or more image frames of the third series of image frames with an environment classification including urban, ramp, highway, or local; and

wherein the controller activates or deactivates one or more of the sensors or one or more of the vehicle systems of the vehicle based on the environment classification.

8. The system for scene classification of claim 4, wherein one or more of the vehicle systems is a LIDAR system or radar system.

9. The system for scene classification of claim 8, wherein the controller deactivates the LIDAR system or radar system based on the scene prediction being a tunnel.

10. The system for scene classification of claim 4, wherein the controller prioritizes searching for traffic lights, stop signs, stop lines based on the scene prediction being an intersection.

11. A vehicle equipped with a system for scene classification, comprising:

an image capture device capturing a first series of image frames of an environment from a moving vehicle;
an image segmentation module identifying one or more traffic participants within the environment based on a first convolutional neural network (CNN);
an image masker generating a second series of image frames by masking one or more of the traffic participants from the environment;
a temporal classifier classifying one or more image frames of the second series of image frames with one of two or more temporal predictions and generating a third series of image frames associated with respective temporal predictions based on a scene classification model, wherein the classification is based on a second CNN, a long short-term memory (LSTM) network, and a first fully connected layer; and
a scene classifier classifying one or more image frames of the third series of image frames based on a third CNN, global average pooling, and a second fully connected layer and generating an associated scene prediction based on the scene classification model and respective temporal predictions; and
a controller activating or deactivating one or more sensors or one or more vehicle systems of the vehicle based on the scene prediction.

12. The vehicle of claim 11, wherein the two or more temporal predictions include an approaching annotation, an entering annotation, and a passing annotation.

13. The vehicle of claim 11, wherein the first CNN, the second CNN, or the third CNN is a deepnet CNN or a ResNet 50 CNN.

14. The vehicle of claim 11, wherein one or more of the vehicle systems is a LIDAR system or radar system and wherein the controller deactivates the LIDAR system or radar system based on the scene prediction being a tunnel.

15. A system for scene classification, comprising:

an image capture device capturing a first series of image frames of an environment from a moving vehicle;
a temporal classifier classifying one or more image frames of the first series of image frames with one of two or more temporal predictions and generating a second series of image frames associated with respective temporal predictions based on a scene classification model, wherein the classification is based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a first fully connected layer; and
a scene classifier classifying one or more image frames of the second series of image frames based on a second CNN, global average pooling, and a second fully connected layer and generating an associated scene prediction based on the scene classification model and respective temporal predictions.

16. The system for scene classification of claim 15, wherein the two or more temporal predictions include an approaching annotation, an entering annotation, and a passing annotation.

17. The system for scene classification of claim 15, wherein the CNN or the second CNN is a ResNet 50 CNN.

18. The system for scene classification of claim 15, wherein the system for scene classification is implemented in a vehicle and the vehicle includes a controller activating or deactivating one or more sensors or one or more vehicle systems of the vehicle based on the scene prediction.

19. The system for scene classification of claim 18, wherein the scene classifier classifies one or more image frames of the second series of image frames with a weather classification including clear, sunny, snowy, rainy, overcast, or foggy; and

wherein the controller activates or deactivates one or more of the sensors or one or more of the vehicle systems of the vehicle based on the weather classification.

20. The system for scene classification of claim 18, wherein the scene classifier classifies one or more image frames of the second series of image frames with a road surface classification including dry, wet, or snow; and

wherein the controller activates or deactivates one or more of the sensors or one or more of the vehicle systems of the vehicle based on the road surface classification.
Patent History
Publication number: 20200089969
Type: Application
Filed: Apr 3, 2019
Publication Date: Mar 19, 2020
Patent Grant number: 11195030
Inventors: Athmanarayanan Lakshmi Narayanan (Sunnyvale, CA), Isht Dwivedi (New York, NY), Behzad Dariush (San Ramon, CA)
Application Number: 16/374,205
Classifications
International Classification: G06K 9/00 (20060101); G06T 7/174 (20060101);