SYSTEM AND METHOD FOR COMPLETING TRAJECTORY PREDICTION FROM AGENT-AUGMENTED ENVIRONMENTS

A system and method for completing trajectory prediction from agent-augmented environments that include receiving image data associated with a surrounding environment of an ego agent and processing an agent-augmented static representation of the surrounding environment of the ego agent based on the image data. The system and method also include processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation. The system and method further include predicting future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to U.S. Provisional Application Ser. No. 63/113,710 filed on Nov. 13, 2020, which is expressly incorporated herein by reference.

BACKGROUND

Trajectory prediction is an essential basis to deploy advanced autonomous navigation for the safe operation of intelligent machines (i.e., robots and vehicles) in interactive environments. Future trajectory prediction is a challenging problem due to high variability in human behavior during interactions with external entities (i.e., other road agents, environments, etc.). Existing methods are likely to overlook the influence of physical constraints in modeling social interactions which often results in generating implausible predictions from an observed environment.

The importance of modeling social aspects of human behaviors has been broadly highlighted and extensively researched in the past. One line of trajectory prediction research has been directed toward the advancement of human-human interactions with no consideration of an environment. However, the interactive environment is not simply an open space. There exist influences due to static structures and obstacles in a scene. Accordingly, it is difficult to verify the physical plausibility of human-space interaction modeling in current methods because of their weak consideration of the environment.

BRIEF DESCRIPTION

According to one aspect, a computer-implemented method for completing trajectory prediction from agent-augmented environments that includes receiving image data associated with a surrounding environment of an ego agent and processing an agent-augmented static representation of the surrounding environment of the ego agent based on the image data. The computer-implemented method also includes processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation. Nodes in the spatial graphs are simultaneously clustered considering their spatial and temporal interactions. The computer-implemented method further includes predicting future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.

According to another aspect, a system for completing trajectory prediction from agent-augmented environments that includes a memory storing instructions that, when executed by a processor, cause the processor to receive image data associated with a surrounding environment of an ego agent and process an agent-augmented static representation of the surrounding environment of the ego agent based on the image data. The instructions also cause the processor to process a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation. Nodes in the spatial graphs are simultaneously clustered considering their spatial and temporal interactions. The instructions further cause the processor to predict future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.

According to yet another aspect, a non-transitory computer readable storage medium storing instructions that, when executed by a computer, which includes a processor, perform a method that includes receiving image data associated with a surrounding environment of an ego agent and processing an agent-augmented static representation of the surrounding environment of the ego agent based on the image data. The method also includes processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation. Nodes in the spatial graphs are simultaneously clustered considering their spatial and temporal interactions. The method further includes predicting future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.

BRIEF DESCRIPTION OF THE DRAWINGS

The novel features believed to be characteristic of the disclosure are set forth in the appended claims. In the descriptions that follow, like parts are marked throughout the specification and drawings with the same numerals, respectively. The drawing figures are not necessarily drawn to scale and certain figures can be shown in exaggerated or generalized form in the interest of clarity and conciseness. The disclosure itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will be best understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying drawings, wherein:

FIG. 1 is a schematic view of an exemplary operating environment for implementing systems and methods for completing trajectory prediction from agent-augmented environments according to an exemplary embodiment of the present disclosure;

FIG. 2A is an illustrative example of a surrounding environment of an ego agent according to an exemplary embodiment of the present disclosure;

FIG. 2B is an illustrative example of an agent-augmented surrounding environment of the ego agent according to an exemplary embodiment of the present disclosure;

FIG. 3 is a process flow diagram of a method for creating an agent-augmented surrounding environment based on images of the surrounding environment of the ego agent according to an exemplary embodiment of the present disclosure;

FIG. 4 is a process flow diagram of a method for predicting future trajectories associated with each of the dynamic agents located within the surrounding environment of the ego agent according to an exemplary embodiment of the present disclosure;

FIG. 5 is a schematic overview of a methodology executed by a trajectory prediction application according to an exemplary embodiment of the present disclosure;

FIG. 6 is an illustrative clustering scheme executed by a neural network according to an exemplary embodiment of the present disclosure; and

FIG. 7 is a process flow diagram of a method for completing trajectory prediction from agent-augmented environments according to an exemplary embodiment of the present disclosure.

DETAILED DESCRIPTION

The following includes definitions of selected terms employed herein. The definitions include various examples and/or forms of components that fall within the scope of a term and that may be used for implementation. The examples are not intended to be limiting.

A “bus”, as used herein, refers to an interconnected architecture that is operably connected to other computer components inside a computer or between computers. The bus may transfer data between the computer components. The bus may be a memory bus, a memory controller, a peripheral bus, an external bus, a crossbar switch, and/or a local bus, among others. The bus can also be a vehicle bus that interconnects components inside a vehicle using protocols such as Media Oriented Systems Transport (MOST), Controller Area network (CAN), Local Interconnect Network (LIN), among others.

“Computer communication”, as used herein, refers to a communication between two or more computing devices (e.g., computer, personal digital assistant, cellular telephone, network device) and can be, for example, a network transfer, a file transfer, an applet transfer, an email, a hypertext transfer protocol (HTTP) transfer, and so on. A computer communication can occur across, for example, a wireless system (e.g., IEEE 802.11), an Ethernet system (e.g., IEEE 802.3), a token ring system (e.g., IEEE 802.5), a local area network (LAN), a wide area network (WAN), a point-to-point system, a circuit switching system, a packet switching system, among others.

A “disk”, as used herein can be, for example, a magnetic disk drive, a solid state disk drive, a floppy disk drive, a tape drive, a Zip drive, a flash memory card, and/or a memory stick. Furthermore, the disk can be a CD-ROM (compact disk ROM), a CD recordable drive (CD-R drive), a CD rewritable drive (CD-RW drive), and/or a digital video ROM drive (DVD ROM). The disk can store an operating system that controls or allocates resources of a computing device.

A “memory”, as used herein can include volatile memory and/or non-volatile memory. Non-volatile memory can include, for example, ROM (read only memory), PROM (programmable read only memory), EPROM (erasable PROM), and EEPROM (electrically erasable PROM). Volatile memory can include, for example, RAM (random access memory), synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and direct RAM bus RAM (DRRAM). The memory can store an operating system that controls or allocates resources of a computing device.

A “module”, as used herein, includes, but is not limited to, non-transitory computer readable medium that stores instructions, instructions in execution on a machine, hardware, firmware, software in execution on a machine, and/or combinations of each to perform a function(s) or an action(s), and/or to cause a function or action from another module, method, and/or system. A module may also include logic, a software controlled microprocessor, a discrete logic circuit, an analog circuit, a digital circuit, a programmed logic device, a memory device containing executing instructions, logic gates, a combination of gates, and/or other circuit components. Multiple modules may be combined into one module and single modules may be distributed among multiple modules.

An “operable connection”, or a connection by which entities are “operably connected”, is one in which signals, physical communications, and/or logical communications may be sent and/or received. An operable connection may include a wireless interface, a physical interface, a data interface and/or an electrical interface.

A “processor”, as used herein, processes signals and performs general computing and arithmetic functions. Signals processed by the processor may include digital signals, data signals, computer instructions, processor instructions, messages, a bit, a bit stream, or other means that may be received, transmitted and/or detected. Generally, the processor may be a variety of various processors including multiple single and multicore processors and co-processors and other multiple single and multicore processor and co-processor architectures. The processor may include various modules to execute various functions.

A “vehicle”, as used herein, refers to any moving vehicle that is capable of carrying one or more human occupants and is powered by any form of energy. The term “vehicle” includes, but is not limited to: cars, trucks, vans, minivans, SUVs, motorcycles, scooters, boats, go-karts, amusement ride cars, rail transport, personal watercraft, and aircraft. In some cases, a motor vehicle includes one or more engines. Further, the term “vehicle” may refer to an electric vehicle (EV) that is capable of carrying one or more human occupants and is powered entirely or partially by one or more electric motors powered by an electric battery. The EV may include battery electric vehicles (BEV) and plug-in hybrid electric vehicles (PHEV). The term “vehicle” may also refer to an autonomous vehicle and/or self-driving vehicle powered by any form of energy. The autonomous vehicle may or may not carry one or more human occupants. Further, the term “vehicle” may include vehicles that are automated or non-automated with pre-determined paths or free-moving vehicles.

A “value” and “level”, as used herein may include, but is not limited to, a numerical or other kind of value or level such as a percentage, a non-numerical value, a discrete state, a discrete value, a continuous value, among others. The term “value of X” or “level of X” as used throughout this detailed description and in the claims refers to any numerical or other kind of value for distinguishing between two or more states of X. For example, in some cases, the value or level of X may be given as a percentage between 0% and 100%. In other cases, the value or level of X could be a value in the range between 1 and 10. In still other cases, the value or level of X may not be a numerical value, but could be associated with a given discrete state, such as “not X”, “slightly x”, “x”, “very x” and “extremely x”.

I. System Overview

Referring now to the drawings, wherein the showings are for purposes of illustrating one or more exemplary embodiments and not for purposes of limiting same, FIG. 1 is a schematic view of an exemplary operating environment 100 for implementing systems and methods for completing trajectory prediction from agent-augmented environments according to an exemplary embodiment of the present disclosure. The components of the environment 100, as well as the components of other systems, hardware architectures, and software architectures discussed herein, may be combined, omitted, or organized into different architectures for various embodiments.

Generally, the environment 100 includes an ego agent 102. The ego agent 102 may include, but may not be limited to, a transportation vehicle (e.g., car, truck, bus, airplane, etc.), a robot, a motorized bicycle/scooter, an automated shopping cart, an automated suitcase, a motorized wheelchair, and the like. In one embodiment, the ego agent 102 may include components, systems, and sub-systems that may be operably and electronically controlled by an electronic control unit (ECU) 104. The ECU 104 may be configured to execute one or more applications, operating systems, ego agent system and subsystem user interfaces, among others. The ECU 104 may also execute a trajectory prediction application 106 that may be configured to provide trajectory prediction by encoding multi-scale agent interactions from agent-augmented environments.

As shown in the illustrative example of FIG. 2A, the surrounding environment 200 of the ego agent 102 may be captured within a plurality of images. The plurality of images may be captured at a plurality of time steps. The images may capture agents 202, 204 (additional to the ego agent 102) that are located (e.g., traveling) within the surrounding environment of the ego agent 102. The agents 202, 204 may include, but may not be limited to, additional transportation vehicles, pedestrians, robots, motorized bicycles/scooters, automated shopping carts, automated suitcases, motorized wheelchairs, and the like.

In one embodiment, the trajectory prediction application 106 may be configured to execute image logic to classify the agents 202, 204 based on analysis of the plurality of images of the surrounding environment 200. The agents 202, 204 may be classified as dynamic agents 202 that may be dynamically located at different locations of the surrounding environment 200 based on their movement. As discussed below, the trajectory prediction application 106 may be configured to predict future behavior in the form of respective predicted trajectories (as represented by the dashed arrows) for each of the dynamic agents 202. Additionally, the trajectory prediction application 106 may be configured to execute the image logic to analyze the plurality of images of the surrounding environment of the ego agent 102 and to classify one or more of the agents 202, 204 as static agents 204 that are located within respective locations of the surrounding environment 200.

The trajectory prediction application 106 may also be configured to execute the image logic to analyze image data associated with the plurality of images and identify static structures 206 that may be located within the surrounding environment of the ego agent 102. The static structures 206 may include static obstacles such as buildings, traffic infrastructure (e.g., traffic lights, traffic signage, posts, cones, barrels), guard rails, trees, and the like. With continued reference to FIG. 2A, in some instances, implausible future behaviors may be predicted for one or more of the dynamic agents 202. As shown, an implausible future trajectory 208 may be predicted for a dynamic agent 202 as passing through a static structure 206 that may be configured as a building.

As discussed below, as represented in an illustrative example of FIG. 2B, upon identifying the dynamic agent(s) 202, static agent(s) 204, and the existence of any static structures 206 that may be located within the surrounding environment 200 of the ego agent 102 as captured within the plurality of images, the trajectory prediction application 106 may be configured to process an agent-augmented surrounding environment 210. The agent-augmented surrounding environment 210 is an augmented representation of the surrounding environment 200 of the ego agent 102 at a plurality of time steps. In particular, the trajectory prediction application 106 may be configured to graphically augment each of the images of the surrounding environment 200 of the ego agent 102 (as represented in FIG. 2A) with respect to one or more static structures 206 that may occlude one or more prospective paths of respective dynamic agents 202.

As represented in FIG. 2B, respective boundaries of static structures 206 may be augmented with augmented static agents 212 that may represent the occlusion caused by the existence of the respective static structures 206. Accordingly, the agent-augmented surrounding environment 210 may represent the static structures 206 as multiple augmented static agents 212 that enable the application 106 to learn physical constraints of the surrounding environment 200 of the ego agent 102. Additionally, the application 106 may be configured to augment static agents 204 previously identified as being located within the surrounding environment 200 into augmented static agents 212 since they may also occlude the respective prospective paths of one or more dynamic agents 202. Accordingly, the trajectory prediction application 106 may be configured to leverage a similarity between static agents 204 and static structures 206 to represent environmental structures as multiple augmented static agents 212.

As discussed in more detail below, upon outputting the agent-augmented surrounding environment 210 as a static representation of the surrounding environment 200, the trajectory prediction application 106 may be configured to input the static representation to a neural network 108. In one configuration, the trajectory prediction application 106 may utilize the neural network 108 to construct a set of spatial graphs that pertain to the static representation of the surrounding environment 200 of the ego agent 102. In one configuration, the number of spatial graphs may be based on an observation time horizon (e.g., number of time steps) of the surrounding environment 200 of the ego agent 102.

In an exemplary embodiment, the neural network 108 may utilize an encoder 112 of the neural network 108 to simultaneously cluster the nodes of the spatial graphs. As discussed below, simultaneous clustering may be based on spatial and temporal interactions of the dynamic agents 202 and the augmented static agents 212. The encoder 112 may be configured to generate coarsened graphs using a graph coarsening process that is discussed below. Accordingly, the encoder 112 of the neural network 108 encodes behavioral representations in various scales which may thereby enable the neural network 108 to learn fundamental entities of agent interactions.

In one or more embodiments, a decoder 114 of the neural network 108 may be configured to decode a coarsened spatial temporal graph that has been clustered to capture both spatial locality and temporal dependency of the agents 202, 204 within the surrounding environment 200 of the ego agent 102. As discussed in more detail below, the decoder 114 may be configured to generate future trajectories of the dynamic agents 202 using the previously encoded interaction features. In particular, the neural network 108 may predict respective trajectories of each of the dynamic agents 202 that are located within the surrounding environment of the ego agent 102 at one or more future time steps (t+1, t+2, t+n). The neural network 108 may thereby communicate respective data to the trajectory prediction application 106.

In one embodiment, upon receiving the prediction of the respective trajectories of each of the dynamic agents 202 at one or more future time steps, the trajectory prediction application 106 may be configured to output instructions to communicate autonomous control parameters to an autonomous controller 116 of the ego agent 102 to autonomously control the ego agent 102 to avoid overlap with the respective predicted trajectories that are respectively associated with each of the dynamic agents 202.

In an additional embodiment, the trajectory prediction application 106 may be configured to output instructions to agent systems/control units 118. The agent systems/control units 118 may include driver assistance systems that may provide audio and/or visual alerts to an operator (not shown) (e.g., driver, remote operator) of the ego agent 102 in one or more circumstances. The trajectory prediction application 106 may be configured to output instructions to the agent systems/control units 118 to provide one or more alerts to the operator of the ego agent 102 to avoid overlap with the respective predicted trajectories of the dynamic agents 202 that are located within the surrounding environment of the ego agent 102.

With continued reference to FIG. 1, in addition to the ECU 104, the autonomous controller 116 and the agent systems/control units 118, the ego agent 102 may also include a storage unit 120 and a camera system 122. In one or more embodiments, the ECU 104 may include a microprocessor, one or more application-specific integrated circuit(s) (ASIC), or other similar devices. The ECU 104 may also include internal processing memory, an interface circuit, and bus lines for transferring data, sending commands, and communicating with the plurality of components of the ego agent 102.

The ECU 104 may also include a communication device (not shown) for sending data internally within (e.g., between one or more components) the ego agent 102 and communicating with externally hosted computing systems (e.g., external to the ego agent 102). Generally, the ECU 104 may communicate with the storage unit 120 to execute the one or more applications, operating systems, ego agent system and subsystem user interfaces, and the like that are stored within the storage unit 120. In one embodiment, the ECU 104 may communicate with the autonomous controller 116 to execute autonomous driving commands to operate the ego agent 102 to be fully autonomously driven or semi-autonomously driven in a particular manner. As discussed, the autonomous driving commands may be based on commands that may be communicated by the trajectory prediction application 106.

In one or more embodiments, the autonomous controller 116 may autonomously control the operation of the ego agent 102 by providing one or more commands to one or more of the agent systems/control units 118 to provide full autonomous or semi-autonomous control of the ego agent 102. Such autonomous control of the ego agent 102 may be provided by sending one or more commands to control one or more of the agent systems/control units 118 to operate (e.g., drive) the ego agent 102 during one or more circumstances (e.g., when providing driver assist controls) and/or to fully control operation of the ego agent 102 during an entire trip of the ego agent 102. The one or more commands may be provided to one or more agent systems/control units 118 that include, but are not limited to an engine control unit, a braking control unit, a transmission control unit, a steering control unit, driver assistance systems, and the like to control the ego agent 102 to be autonomously driven and/or provide audio and/or visual alerts to an operator of the ego agent 102 in one or more circumstances.

In one or more embodiments, the storage unit 120 of the ego agent 102 may be configured to store one or more executable files associated with one or more operating systems, applications, associated operating system data, application data, ego agent system and subsystem user interface data, and the like that are executed by the ECU 104. The storage unit 120 may also be configured to store computer implemented logic, such as image logic that may be utilized by the trajectory prediction application 106 to analyze image data provided by the camera system 122. In one or more embodiments, the storage unit 120 may be accessed by the trajectory prediction application 106 to store data associated with future trajectories predicted by the neural network 108 for each of the dynamic agents 202 to be further utilized to provide one or more commands to the autonomous controller 116 and/or the agent systems/control units 118.

With continued reference to FIG. 1, the camera system 122 may include one or more cameras (not shown) that may be positioned in one or more directions and at one or more areas to capture the plurality of images of the surrounding environment 200 of the ego agent 102 (e.g., images of the roadway on which the ego agent 102 is traveling). The one or more cameras of the camera system 122 may be disposed at external front portions of the ego agent 102, including, but not limited to, different portions of the ego agent dashboard, bumper, front lighting units, fenders, windshield, and robotic portions. In one embodiment, the one or more cameras may be configured as RGB cameras that may capture RGB bands with rich information about object appearance, as well as relationships and interactions between the ego agent 102, dynamic agents 202, static agents 204, and static structures 206 located within the surrounding environment 200 of the ego agent 102.

In alternate embodiments, the one or more cameras may be configured as stereoscopic cameras that are configured to capture environmental information in the form of three-dimensional images. In one or more configurations, the one or more cameras may be configured to capture a plurality of first person viewpoint RGB images/videos of the surrounding environment 200 of the ego agent 102. The camera system 122 may be configured to convert the plurality of RGB images/videos (e.g., sequences of images) into image data that is communicated to the trajectory prediction application 106 to be further analyzed.

In an exemplary embodiment, the neural network 108 may be configured as a plurality of graph neural networks (GNNs) that may be configured to execute machine learning/deep learning processes to model pair-wise interactions between the ego agent 102, the dynamic agents 202 located within the surrounding environment 200, and the static structures 206 that may be located within the surrounding environment of the ego agent 102. The neural network 108 may be executed by a processing unit 110. The processing unit 110 may be configured to provide processing capabilities to be configured to utilize machine learning/deep learning to provide artificial intelligence capabilities that may be executed to analyze inputted data and to output data to the trajectory prediction application 106.

In one embodiment, the neural network 108 may be configured to utilize the encoder 112 to encode the pair-wise interactions to be shared in fine-scale spatial graphs. As discussed below, the encoder 112 may be configured to dynamically map each of the individual nodes of the spatial graphs to a set of clusters. As a result, each cluster may correspond to a node in a new coarsened graph. Accordingly, while generating the coarsened graph, the neural network 108 may intuitively learn a clustering strategy that operates among the nodes in the spatial and temporal dimensions of the graphs. As such, the encoder 112 encodes behavioral representations at a coarser level as a notion of both spatial locality and temporal dependency. The neural network 108 may utilize the decoder 114 to generate future trajectories respectively associated with each of the dynamic agents 202 using previously encoded interaction features. Accordingly, the neural network 108 may output future predicted trajectories respectively associated with each of the dynamic agents 202 at one or more future time steps to the trajectory prediction application 106. The trajectory prediction application 106 may operably control one or more functions of the ego agent 102 based on the future predicted trajectories of the dynamic agents 202 located within the surrounding environment of the ego agent 102.

II. The Trajectory Prediction Application and Related Methods

Components of the trajectory prediction application 106 will now be described according to an exemplary embodiment and with reference to FIG. 1. In an exemplary embodiment, the trajectory prediction application 106 may be stored on the storage unit 120 and executed by the ECU 104 of the ego agent 102. In another embodiment, the trajectory prediction application 106 may be stored on an externally hosted computing infrastructure (not shown) and may be accessed by a telematics control unit (not shown) of the ego agent 102 to be executed by the ECU 104 of the ego agent 102.

The general functionality of trajectory prediction application 106 will now be discussed with continued reference to FIG. 1. In an exemplary embodiment, the trajectory prediction application 106 may include a plurality of modules 124-128 that may be configured to provide trajectory prediction by encoding multi-scale agent interactions from agent-augmented environments. The plurality of modules 124-128 may include a static representation module 124, a graph coarsening module 126, and an agent control module 128. However, it is appreciated that the trajectory prediction application 106 may include one or more additional modules and/or sub-modules that are included in lieu of the modules 124-128.

In an exemplary embodiment, the plurality of modules 124-128 may utilize the neural network 108 to generate the trajectories for each of the dynamic agents 202 that are located within the surrounding environment 200 of the ego agent 102. The neural network 108 may analyze the motion data of an arbitrary agent $i$ as a sequence that is split into the observation trajectory $X^i=\{(x,y)_t^i \mid t\in\{1,\dots,T_{in}\}\}$ and the future trajectory $Y^i=\{(x,y)_t^i \mid t\in\{T_{in}+1,\dots,T_{in}+T_{out}\}\}$, where $T_{in}$ is the observation horizon, $T_{out}$ is the prediction horizon, and $(x,y)_t^i$ denotes the 2D coordinates of agent $i$ at time $t$. The trajectory prediction application 106 may utilize a ground-truth segmentation map for the static structures 206 that is available from individual scenes, which may be used to generate the agent-augmented surrounding environment 210 that pertains to respective time steps (e.g., t−n, t−3, t−2, t−1, t). Accordingly, as discussed in more detail below, the neural network 108 may predict a set of 2D future locations $\hat{Y}^i$ of the original dynamic agents 202 and static agents 204, $i\in\{1,\dots,L\}$, given their past motion information $X^i$ together with those of the $J$ augmented static agents 212, $i\in\{L+1,\dots,L+J\}$, and may output the prediction of the trajectories of each of the dynamic agents 202 to the trajectory prediction application 106.
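
As a non-limiting illustration, the following Python sketch shows how an agent's 2D track may be split into the observation trajectory $X^i$ and the future trajectory $Y^i$ described above; the horizon lengths used in the example are arbitrary assumptions and are not taken from the present disclosure.

```python
# Illustrative sketch: splitting an agent's 2D track into the observation
# trajectory X_i and the future (ground-truth) trajectory Y_i.
import numpy as np

def split_trajectory(track_xy, t_in, t_out):
    """track_xy: array of shape (t_in + t_out, 2) holding (x, y) per time step."""
    assert track_xy.shape[0] >= t_in + t_out
    x_obs = track_xy[:t_in]                  # X_i, t in {1, ..., T_in}
    y_fut = track_xy[t_in:t_in + t_out]      # Y_i, t in {T_in+1, ..., T_in+T_out}
    return x_obs, y_fut

# Example with assumed horizons: 8 observed steps, 12 predicted steps.
track = np.cumsum(np.random.randn(20, 2), axis=0)
X_i, Y_i = split_trajectory(track, t_in=8, t_out=12)
```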

FIG. 3 is a process flow diagram of a method 300 for creating an agent-augmented surrounding environment 210 based on images of the surrounding environment 200 of the ego agent 102 according to an exemplary embodiment of the present disclosure. FIG. 3 will be described with reference to the components of FIG. 1 though it is to be appreciated that the method 300 of FIG. 3 may be used with other systems/components. The method 300 may begin at block 302, wherein the method 300 may include receiving image data associated with the surrounding environment 200 of the ego agent 102.

In an exemplary embodiment, the static representation module 124 may be configured to receive environmental data that may be associated with the surrounding environment 200 of the ego agent 102 in the form of image data that may be provided by the camera system 122 of the ego agent 102 at a plurality of time steps. The camera system 122 may be configured to receive images captured at the plurality of time steps from its one or more cameras. The camera system 122 may be configured to convert the plurality of images into image data and may communicate the image data associated with the plurality of images of the surrounding environment 200, captured at the plurality of time steps (e.g., t−n, t−3, t−2, t−1, t), to the static representation module 124. Accordingly, past and present observations of the surrounding environment 200 that include the dynamic agents 202, the static agents 204, and the static structures 206 that may be located within the surrounding environment of the ego agent 102 may be included within the image data that is received by the static representation module 124. The motion based locations of the one or more static agents 204 from the perspective of the ego agent 102 as captured at each time step may be included within the image data.

The method 300 may proceed to block 304, wherein the method 300 may include creating an agent-augmented environment by representing interactive structures as static agents. In an exemplary embodiment, upon receiving the image data associated with the plurality of images captured of the surrounding environment 200 at the plurality of time steps, the static representation module 124 may be configured to execute the image logic to identify dynamic agents 202, static agents 204, and static structures 206 that may be located within the surrounding environment of the ego agent 102. In one embodiment, the static representation module 124 may be configured to generate segmentation maps using the image data associated with the plurality of images. The static representation module 124 may be configured to represent spatial boundaries of non-passable areas, which include the locations of the surrounding environment 200 of the ego agent 102 that are occupied by the static agents 204 and the static structures 206.

As discussed above with respect to FIG. 2B, the static representation module 124 may be configured to augment the respective boundaries of static structures 206 with augmented static agents 212 that may represent the occlusion caused by the existence of the respective static structures 206. Accordingly, the agent-augmented surrounding environment 210 may be processed to represent the static structures 206 as multiple augmented static agents 212 that enable the trajectory prediction application 106 and the neural network 108 to learn physical constraints of the surrounding environment 200 of the ego agent 102. Additionally, the static representation module 124 may be configured to augment static agents 204 previously identified as being located within the surrounding environment 200 into augmented static agents 212 since they may also occlude the respective prospective paths of one or more dynamic agents 202. Accordingly, the static representation module 124 may be configured to leverage a similarity between static agents 204 and static structures 206 to represent environmental structures as multiple augmented static agents 212.
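
As a non-limiting illustration, the following Python sketch shows one way boundary points of non-passable areas in a binary segmentation map may be sampled as augmented static agents; the binary map convention, the sampling stride, and the pixel-to-meter scaling are assumptions introduced for illustration only.

```python
# Minimal sketch (assumptions: 1 marks non-passable pixels; fixed sampling stride):
# place augmented static agents along the boundaries of non-passable areas.
import numpy as np

def augmented_static_agents(seg_map, stride=4, pixels_per_meter=10.0):
    """seg_map: (H, W) array of {0, 1}; returns (J, 2) boundary coordinates in meters."""
    occ = seg_map.astype(bool)
    padded = np.pad(occ, 1, constant_values=False)
    # A boundary pixel is non-passable and touches at least one passable 4-neighbor.
    neighbors_free = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                      ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
    boundary = occ & neighbors_free
    ys, xs = np.nonzero(boundary)
    points = np.stack([xs, ys], axis=1)[::stride]   # subsample every `stride`-th boundary point
    return points / pixels_per_meter                # convert pixel indices to metric coordinates

# Each returned point may be treated as one augmented static agent with zero velocity.
```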

The method 300 may proceed to block 306, wherein the method 300 may include inputting a static representation of the surrounding environment 200 to the neural network 108. In an exemplary embodiment, upon processing the agent-augmented surrounding environment 210 as an augmented representation of the surrounding environment 200, the static representation module 124 may be configured to input the agent-augmented surrounding environment 210 as a static representation of the surrounding environment 200 to the neural network 108.

With reference to FIG. 5, which provides a schematic overview of the methodology executed by the trajectory prediction application 106 according to an exemplary embodiment of the present disclosure, the static representation 504 may be included as an aggregated representation of the agent-augmented surrounding environment 210 that is processed based on the plurality of images 502 of the surrounding environment captured at the plurality of time steps and augmented with the augmented static agents 212. The static representation 504 may accordingly be input as a spatial-temporal representation of the surrounding environment 200 as captured within the plurality of images 502 during the plurality of time steps.

In one embodiment, the static representation 504 may provide a representation of the surrounding environment 200 of the ego agent 102 as a scene with the original L agents 202, 204 that is further augmented by J additional augmented static agents 212, resulting in a total of N=L+J agents in the scene as captured at respective locations over the course of the plurality of time steps within the plurality of images 502. From this static representation 504, joint modeling of agent-agent interactions (between dynamic agents 202 and static agents 204) and agent-space interactions (toward the augmented static agents 212) is processed by the neural network 108 using a graph structure, where the graph edges accordingly capture interactions between the nodes of the spatial graphs.

FIG. 4 is a process flow diagram of a method 400 for predicting future trajectories associated with each of the dynamic agents 202 located within the surrounding environment 200 of the ego agent 102 according to an exemplary embodiment of the present disclosure. FIG. 4 will be described with reference to the components of FIG. 1 though it is to be appreciated that the method 400 of FIG. 4 may be used with other systems/components. The method 400 may begin at block 402, wherein the method 400 may include constructing a set of spatial graphs.

In one embodiment, upon inputting the static representation 504 to the neural network 108, the static representation module 124 may communicate respective data to the graph coarsening module 126 of the trajectory prediction application 106. The graph coarsening module 126 may utilize the neural network 108 to execute machine learning/deep learning processes to model pair-wise interactions between the ego agent 102, the dynamic agents 202 located within the surrounding environment 200, and the static structures 206 that may be located within the surrounding environment of the ego agent 102. Accordingly, the neural network 108 may perform machine learning/deep learning processes to complete a graph coarsening process based on the static representation 504.

With reference again to FIG. 5, upon receiving the input of the static representation 504 of the surrounding environment 200 of the ego agent, which includes the total of N=L+J agents in the scene over the course of the plurality of time steps that have been captured within the plurality of images, the neural network 108 may be configured to utilize the encoder 112 to construct a number of spatial graphs 506 that correspond to the observation time horizon. The observation time horizon may be based on the number of time steps that have been captured within the plurality of images as included within the image data communicated to the static representation module 124.

In one embodiment, the encoder 112 of the neural network 108 may be configured to process a set of spatial graphs 506 $G_t=(A_t,H_t)$ that are defined at individual observation time steps $t\in\{1,\dots,T_{in}\}$, where $A_t\in\mathbb{R}^{N\times N}$ is an adjacency matrix and $H_t\in\mathbb{R}^{N\times d}$ are the node embeddings of size $d$ computed from the motion state of the $N$ agents at time step $t$. The message propagation of GNNs is given by:


$H_t^{(k)}=M(A_t,H_t^{(k-1)};W_t^{(k-1)}),$  (1)

where $M$ is a message propagation function that updates node features, $k$ indicates an index of iterations, and $W_t$ are trainable parameters.

As a multi-scale GNN, the neural network 108 may model the interactions of each of the dynamic agents 202 to implement the message propagation function M as follows:


$H_t^{(k)}=\sigma\big(D_t^{-1/2}\hat{A}_t D_t^{-1/2}H_t^{(k-1)}W_t^{(k-1)}\big),$  (2)

where $\sigma$ is a non-linearity function, $\hat{A}_t=A_t+I$ adds self-connections to the graph, $D_t$ is the diagonal degree matrix with $D_{t,ii}=\sum_j\hat{A}_{t,ij}$ (the sum of the non-zero elements in the corresponding row of $\hat{A}_t$), and $W_t$ are trainable parameters. After $K$ iterations, a set of adjacency matrices $\{A_t\mid t\in\{1,\dots,T_{in}\}\}$ and output node embeddings $\{Z_t\mid t\in\{1,\dots,T_{in}\}\}$ are generated, where $Z_t=H_t^{(K)}\in\mathbb{R}^{N\times d}$. For convenience of description, the subscript $t$ is omitted and the function $Z=\text{GNN}(A,H)$ is used to represent the operation of the GNN layer, where $A$ is an adjacency matrix, $H$ is an input node feature matrix, and $Z$ is the output node embedding matrix.
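
As a non-limiting illustration, the following PyTorch sketch implements the propagation rule of Eqn. (2); the layer dimensions and the ReLU non-linearity are illustrative assumptions rather than requirements of the present disclosure.

```python
# Minimal PyTorch sketch of the normalized message propagation in Eqn. (2).
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)   # W_t^(k-1)

    def forward(self, adj, h):
        # adj: (N, N) adjacency A_t; h: (N, d) node features H_t^(k-1)
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)          # add self-connections
        deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        norm_adj = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :] # D^-1/2 Â D^-1/2
        return torch.relu(norm_adj @ self.weight(h))                     # σ(D^-1/2 Â D^-1/2 H W)

def gnn(adj, h, layers):
    """Z = GNN(A, H): run K stacked layers and return the output node embeddings."""
    for layer in layers:
        h = layer(adj, h)
    return h
```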

With continued reference to the method 400 of FIG. 4, upon construction of the set of spatial graphs 506, the method 400 may proceed to block 404, wherein the method 400 may include simultaneously clustering nodes in the spatial graphs 506 considering their spatial and temporal interactions. As represented by the illustrative clustering scheme of FIG. 6, the neural network 108 may be configured to complete spatio-temporal graph clustering to jointly coarsen the input graphs in the spatial and temporal dimensions.

In one configuration, a learnable assignment matrix $U^{(s)}\in\mathbb{R}^{N_s\times N_{s+1}}$ may be defined at an arbitrary scale $s$, where a new coarsened graph at scale $s+1$ contains $N_{s+1}<N_s$ nodes. The assignment matrix $U^{(s)}$ is used to provide a soft assignment of the individual $N_s$ nodes at scale $s$ to one of $N_{s+1}$ clusters at the next scale $s+1$. Given the input node embedding matrix $Z^{(s)}$ and adjacency matrix $A^{(s)}$, the following equations may be computed using the assignment matrix $U^{(s)}$:


$F^{(s+1)}=U^{(s)T}Z^{(s)},$  (3)

$A^{(s+1)}=U^{(s)T}A^{(s)}U^{(s)},$  (4)

where $F^{(s+1)}\in\mathbb{R}^{N_{s+1}\times d}$ is a new node embedding matrix and $A^{(s+1)}\in\mathbb{R}^{N_{s+1}\times N_{s+1}}$ is a new coarsened adjacency matrix at scale $s+1$.
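
As a non-limiting illustration, the following Python sketch applies the coarsening of Eqns. (3) and (4) given a soft assignment matrix; the assumption that $U^{(s)}$ is row-stochastic (e.g., produced by a softmax) is made for illustration.

```python
# Sketch of the coarsening step in Eqns. (3) and (4): a soft assignment matrix U
# maps N_s nodes to N_{s+1} clusters.
import torch

def coarsen(assign, z, adj):
    # assign: (N_s, N_s1) assignment U^(s); z: (N_s, d) embeddings Z^(s); adj: (N_s, N_s) A^(s)
    f_next = assign.t() @ z               # Eqn. (3): F^(s+1) = U^T Z
    adj_next = assign.t() @ adj @ assign  # Eqn. (4): A^(s+1) = U^T A U
    return f_next, adj_next
```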

In an exemplary embodiment, upon the setting of the spatial graphs 506, where the number of spatial graphs 506 corresponds to the observation time horizon, the neural network 108 may apply graph clustering to the spatial-temporal dimensions so that the nodes can be jointly assigned to clusters under the consideration of spatial locality and temporal dependency. In one configuration, the encoder 112 may consider a stack of input matrices as a tensor, which induces third-order tensors $\mathcal{A}^{(s)}\in\mathbb{R}^{N_s\times N_s\times T_s}$ for the stacked adjacency matrices and $\mathcal{Z}^{(s)}\in\mathbb{R}^{N_s\times d\times T_s}$ for the stacked node embeddings, where $T_s$ denotes the temporal dimension at scale $s$. The neural network 108 thereby uses the tensor n-mode product to conduct spatial and temporal graph coarsening.

The n-mode product of a tensor $\mathcal{C}\in\mathbb{R}^{I_1\times\dots\times I_N}$ and a matrix $B\in\mathbb{R}^{J\times I_n}$ is denoted as $\mathcal{C}\times_n B$. Then, the output is computed as a tensor of size $I_1\times I_2\times\dots\times I_{n-1}\times J\times I_{n+1}\times\dots\times I_N$ with entries


$(\mathcal{C}\times_n B)(i_1,\dots,i_{n-1},j,i_{n+1},\dots,i_N)=\sum_{i_n}\mathcal{C}(i_1,\dots,i_N)\,B(j,i_n),$  (5)

which results in a change of basis. In one configuration, a third-order tensor $\mathcal{C}\in\mathbb{R}^{I_1\times I_2\times I_3}$ is considered, where $C^{(i)}$ denotes the $i$-th frontal slice of the tensor $\mathcal{C}$. Then, the transpose operation is defined by


$\mathcal{C}^{tr}\equiv[C^{(1)T}\ C^{(2)T}\ \dots\ C^{(I_3)T}]\in\mathbb{R}^{I_2\times I_1\times I_3}.$  (6)

In another configuration, when $\mathcal{C}\in\mathbb{R}^{I_1\times I_2\times I_3}$ and $\mathcal{D}\in\mathbb{R}^{I_1\times I_4\times I_3}$, the output of the product $\mathcal{C}^{tr}*\mathcal{D}$ is a tensor of size $I_2\times I_4\times I_3$,


$\mathcal{C}^{tr}*\mathcal{D}\equiv[C^{(1)T}\cdot D^{(1)}\ \ C^{(2)T}\cdot D^{(2)}\ \dots\ C^{(I_3)T}\cdot D^{(I_3)}]\in\mathbb{R}^{I_2\times I_4\times I_3},$  (7)

where · denotes the usual matrix-matrix multiplication.
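
As a non-limiting illustration, the following NumPy sketch implements the tensor operations of Eqns. (5)-(7); the function names are introduced only for this example.

```python
# NumPy sketch of the n-mode product, frontal-slice transpose, and slice-wise product.
import numpy as np

def mode_n_product(tensor, mat, n):
    """Eqn. (5): contract mode n of `tensor` (I1 x ... x IN) with `mat` (J x In)."""
    return np.moveaxis(np.tensordot(mat, tensor, axes=(1, n)), 0, n)

def frontal_transpose(tensor):
    """Eqn. (6): transpose every frontal slice, mapping I1 x I2 x I3 to I2 x I1 x I3."""
    return np.transpose(tensor, (1, 0, 2))

def slicewise_product(c_tr, d):
    """Eqn. (7): multiply matching frontal slices, (I2 x I1 x I3) * (I1 x I4 x I3) -> I2 x I4 x I3."""
    return np.einsum('ijk,jlk->ilk', c_tr, d)

# Quick shape check with arbitrary sizes.
C = np.random.rand(3, 4, 5)
B = np.random.rand(7, 4)
assert mode_n_product(C, B, 1).shape == (3, 7, 5)
```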

In an exemplary embodiment, the neural network 108 may create a spatial assignment tensor $\mathcal{U}^{(s)}\in\mathbb{R}^{N_s\times N_{s+1}\times T_s}$ for spatial graph clustering and an additional temporal assignment matrix $V^{(s)}\in\mathbb{R}^{T_s\times T_{s+1}}$ for temporal graph clustering, both at scale $s$. Employing the n-mode product, the new coarsened node embedding tensor $\mathcal{F}^{(s+1)}$ may be computed at scale $s$, similar to Eqn. (3) provided above:


$\mathcal{F}^{(s)\dagger}=\mathcal{U}^{(s)tr}*\mathcal{Z}^{(s)}\in\mathbb{R}^{N_{s+1}\times d\times T_s},$  (8)

$\mathcal{F}^{(s+1)}=\mathcal{F}^{(s)\dagger}\times_3 V^{(s)T}\in\mathbb{R}^{N_{s+1}\times d\times T_{s+1}}.$  (9)

The new adjacency tensor $\mathcal{A}^{(s+1)}$ may be computed similarly, following Eqn. (4) provided above:


$\mathcal{A}^{(s)\dagger}=\mathcal{U}^{(s)tr}*\mathcal{A}^{(s)}*\mathcal{U}^{(s)}\in\mathbb{R}^{N_{s+1}\times N_{s+1}\times T_s},$  (10)

$\mathcal{A}^{(s+1)}=\mathcal{A}^{(s)\dagger}\times_3 V^{(s)T}\in\mathbb{R}^{N_{s+1}\times N_{s+1}\times T_{s+1}},$  (11)

where † in the superscript position indicates a temporary result of the spatial clustering.
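
As a non-limiting illustration, the following NumPy sketch performs the joint spatio-temporal coarsening of Eqns. (8)-(11) with einsum contractions; the tensor shapes follow the notation above, while the function name is introduced only for this example.

```python
# Sketch of the joint spatio-temporal coarsening in Eqns. (8)-(11).
import numpy as np

def st_coarsen(u_spatial, v_temporal, z_tensor, a_tensor):
    """u_spatial: (N_s, N_s1, T_s) assignment tensor; v_temporal: (T_s, T_s1) assignment matrix;
    z_tensor: (N_s, d, T_s) node embeddings; a_tensor: (N_s, N_s, T_s) adjacency."""
    # Eqn. (8): slice-wise product U^tr * Z  ->  (N_s1, d, T_s)
    f_tmp = np.einsum('nmt,ndt->mdt', u_spatial, z_tensor)
    # Eqn. (9): 3-mode product with V^T      ->  (N_s1, d, T_s1)
    f_next = np.einsum('mdt,ts->mds', f_tmp, v_temporal)
    # Eqn. (10): U^tr * A * U                ->  (N_s1, N_s1, T_s)
    a_tmp = np.einsum('nmt,nkt,kjt->mjt', u_spatial, a_tensor, u_spatial)
    # Eqn. (11): 3-mode product with V^T     ->  (N_s1, N_s1, T_s1)
    a_next = np.einsum('mjt,ts->mjs', a_tmp, v_temporal)
    return f_next, a_next
```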

Accordingly, given the motion states, the encoder 112 may provide the decoder 114 with social interactions between the dynamic agents 202 from the agent-augmented surrounding environment 210. The encoder 112 may initialize graphs $G_t=(A_t,F_t)\ \forall t\in\{1,\dots,T_{in}\}$ using the node embeddings of the $N$ agents (the original $L$ agents with the $J$ augmented agents) and the corresponding adjacency matrices. Accordingly, as a multi-scale GNN, the neural network 108 may generate output node embeddings using the input node features at scale $s=1$ such that $F_t^{(1)}=H_t$:


$Z_t^{(1)}=\text{GNN}_{emb}(A_t^{(1)},F_t^{(1)}),\ \forall t\in\{1,\dots,T_{in}\}.$  (12)

$\text{GNN}_{emb}$ takes the $t$-th slice of the adjacency tensor and the corresponding node feature matrix. The resulting node embedding tensor $\mathcal{Z}^{(1)}$ is denoted as $\mathcal{Z}$.

In one configuration, the proposed spatio-temporal graph clustering assigns each individual node in the graphs to one of the clusters under the consideration of spatial locality and temporal dependency. To make spatio-temporal graph clustering end-to-end differentiable, a stack of spatial assignment matrices $U_t^{(s)}$ and the temporal assignment matrix $V^{(s)}$ are generated using the same inputs $A_t^{(s)}$ and $F_t^{(s)}$ at scale $s$:


$U_t^{(s)}=\sigma\big(\text{GNN}_{spa}(A_t^{(s)},F_t^{(s)})\big)\ \forall t\in\{1,\dots,T_s\},$  (13)

where $\sigma$ denotes a softmax operation. The function $\text{GNN}_{spa}$ followed by the softmax enables the probabilistic assignment. To generate the temporal assignment matrix $V^{(s)}$, the neural network 108 uses the temporary result of the spatial clustering in Eqn. (8) provided above by pooling the features along the first dimension, $\text{pool}(\mathcal{F}^{(s)\dagger})\in\mathbb{R}^{d\times T_s}$. The neural network 108 may create an additional adjacency matrix $B^{(s)}\in\mathbb{R}^{T_s\times T_s}$ and may compute:


$V^{(s)}=\sigma\big(\text{GNN}_{tmp}(B^{(s)},\text{pool}(\mathcal{F}^{(s)\dagger})^T)\big).$  (14)

At the first scale $s=1$, each entry of $\mathcal{A}^{(1)}$ may be initialized using the inverse of the Euclidean distance between the two corresponding nodes, and $B^{(1)}$ may be initialized with a matrix of 1's to consider the entire graph.
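
As a non-limiting illustration, the following Python sketch shows one way the first-scale adjacency matrices may be initialized as described above; the small epsilon used to avoid division by zero is an illustrative assumption.

```python
# Sketch of the first-scale initialization: spatial adjacency entries from the inverse
# Euclidean distance between node positions, and a temporal adjacency of all ones.
import numpy as np

def init_adjacency(positions, eps=1e-6):
    """positions: (N, 2) agent coordinates at one time step -> (N, N) adjacency A_t^(1)."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = 1.0 / (dist + eps)
    np.fill_diagonal(adj, 0.0)          # self-connections are added later as Â = A + I
    return adj

def init_temporal_adjacency(t_in):
    """B^(1): all-ones matrix so that every frame attends to the entire observation window."""
    return np.ones((t_in, t_in))
```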

After $S$ rounds of spatio-temporal graph clustering, a single cluster may result that represents the encoding of spatio-temporal social interactions $\mathcal{F}^{(S)}\in\mathbb{R}^{1\times 1\times d}$ abstracted from various scales of graphs. The model may be trained to use $\mathcal{F}^{(S)}$ as an indicator of essential elements in the output node embeddings $\mathcal{Z}$. An attention mechanism given by:


$\mathcal{Z}^{\dagger}=\text{ATT}_{spa}(\mathcal{Z},\text{cat}(\mathcal{Z},\mathcal{F}^{(S)});W_{spa}),$  (15)

$\bar{\mathcal{Z}}=\text{ATT}_{tmp}(\mathcal{Z}^{\dagger},\text{cat}(\mathcal{Z}^{\dagger},\mathcal{F}^{(S)});W_{tmp}),$  (16)

may be used, where $\dagger$ indicates a temporary output and cat denotes the concatenation of features. By adopting two attention mechanisms ($\text{ATT}_{spa}$ and $\text{ATT}_{tmp}$), relevant dynamic agents 202 (spatial dimension) and relevant frames (temporal dimension) are highlighted to decode future motion states of the dynamic agents 202 as discussed below.
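
As a non-limiting illustration, the following PyTorch sketch shows one plausible form of such an attention step, in which scores computed from the concatenation of the node embeddings with the global interaction encoding re-weight the embeddings; the exact structure of $\text{ATT}_{spa}$ and $\text{ATT}_{tmp}$ is not specified here, so the scoring network below is an assumption made for illustration only.

```python
# Hedged sketch of a spatial attention step in the spirit of Eqn. (15); a temporal
# variant in the spirit of Eqn. (16) would take softmax over dim=1 instead of dim=0.
import torch
import torch.nn as nn

class InteractionAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(2 * d, 1)    # assumed scoring weights W over cat(z, f_global)

    def forward(self, z, f_global):
        # z: (N, T, d) node embeddings; f_global: (d,) spatio-temporal interaction encoding
        g = f_global.expand(z.size(0), z.size(1), -1)                    # broadcast to (N, T, d)
        alpha = torch.softmax(self.score(torch.cat([z, g], dim=-1)).squeeze(-1), dim=0)
        return z * alpha.unsqueeze(-1)       # highlight relevant agents (spatial dimension)
```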

Accordingly, using Equations 8-11, as shown above, the neural network 108 may extract spatial-temporal interactions between the dynamic agents 202 and may thereby extend graph clustering to the spatio-temporal domain. As discussed above and as represented in FIG. 5 and FIG. 6, the multiple constructed graphs may be coarsened from a fine scale to a coarse scale using the spatio-temporal graph coarsening described in detail above. The encoder 112 may encode individual nodes that are dynamically mapped to a set of clusters. Each cluster may correspond to a node in a new coarsened graph. Consequently, by using spatio-temporal graph clustering, behavioral representations are interactively encoded at a coarser level as a notion of both spatial locality and temporal dependency.

Referring again to the method 400 of FIG. 4, upon the clustering of nodes in the graphs based on spatio-temporal interactions and generating new coarsened graphs to encode behavioral representations in various scales, the method 400 may proceed to block 406, wherein the method 400 may include decoding previously encoded interaction features. In an exemplary embodiment, the neural network 108 may be configured to employ the decoder 114 of the neural network 108 to complete future trajectory decoding. In one configuration, the decoder 114 may be built using a set of convolutional layers and may be configured to predict future trajectories that are associated with each of the dynamic agents 202 located within the surrounding environment 200 of the ego agent 102 by decoding the interaction features previously encoded by the encoder 112 of the neural network 108.

In particular, the decoder 114 may be configured to decode the interaction-encoded spatio-temporal features to predict the future locations $\hat{Y}_t^i$ of the dynamic agents 202 over the prediction time steps $t\in\{T_{in}+1,\dots,T_{in}+T_{out}\}$. In one configuration, the output of the decoder 114 is the mean $\hat{\mu}_t^i$, standard deviation $\hat{\sigma}_t^i$, and correlation coefficient $\hat{\rho}_t^i$, assuming a bivariate Gaussian distribution. Accordingly, each predicted future location of an arbitrary agent $i$ is sampled by $(\hat{x},\hat{y})_t^i\sim\mathcal{N}(\hat{\mu}_t^i,\hat{\sigma}_t^i,\hat{\rho}_t^i)$. In one embodiment, during training of the neural network 108, the negative log-likelihood given by

$\mathcal{L}=-\frac{1}{L}\,\frac{1}{T_{out}}\sum_{i}^{L}\sum_{t}^{T_{out}}\log\Big(P\big((x,y)_t^i\mid\hat{\mu}_t^i,\hat{\sigma}_t^i,\hat{\rho}_t^i\big)\Big),$  (17)

is minimized, where L is the number of original dynamic agents 202/static agents 204.
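
As a non-limiting illustration, the following PyTorch sketch computes the bivariate-Gaussian negative log-likelihood of Eqn. (17); the clamping and epsilon terms are numerical-stability assumptions introduced for illustration rather than part of the present disclosure.

```python
# Sketch of the bivariate-Gaussian negative log-likelihood in Eqn. (17).
import math
import torch

def bivariate_nll(gt_xy, mu, sigma, rho, eps=1e-6):
    """gt_xy, mu: (L, T_out, 2); sigma: (L, T_out, 2), positive; rho: (L, T_out), in (-1, 1)."""
    dx = (gt_xy[..., 0] - mu[..., 0]) / (sigma[..., 0] + eps)
    dy = (gt_xy[..., 1] - mu[..., 1]) / (sigma[..., 1] + eps)
    rho = rho.clamp(-1 + 1e-4, 1 - 1e-4)            # keep the correlation away from +/-1
    one_minus_rho2 = 1.0 - rho ** 2
    log_p = (-(dx ** 2 + dy ** 2 - 2.0 * rho * dx * dy) / (2.0 * one_minus_rho2)
             - torch.log(2.0 * math.pi * sigma[..., 0] * sigma[..., 1]
                         * torch.sqrt(one_minus_rho2) + eps))
    return -log_p.mean()                            # average over the L agents and T_out steps
```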

As represented in FIG. 5, the trajectory prediction 508 associated with the dynamic agents 202 may be output by the neural network 108. The neural network 108 may be configured to communicate the predicted trajectories that are associated with each of the dynamic agents 202 at a plurality of future time steps $t\in\{T_{in}+1,\dots,T_{in}+T_{out}\}$ to the graph coarsening module 126 of the trajectory prediction application 106. The graph coarsening module 126 may thereby communicate the predicted trajectories associated with each of the dynamic agents 202 that are located within the surrounding environment 200 at each of the future time steps to the agent control module 128 of the trajectory prediction application 106.

With continued reference to FIG. 4, the method 400 may proceed to block 408, wherein the method 400 may include controlling one or more components of the ego agent 102 based on the predicted future trajectories. In one embodiment, upon receipt of the communication of the predicted trajectories of each of the dynamic agents 202 that are located within the surrounding environment 200 of the ego agent 102, the agent control module 128 may be configured to output instructions to communicate the autonomous control parameters to the autonomous controller 116 of the ego agent 102 to autonomously control the ego agent 102 to avoid overlap with the respective predicted trajectories at one or more future time steps $t\in\{T_{in}+1,\dots,T_{in}+T_{out}\}$. In additional embodiments, the agent control module 128 may be configured to output instructions to the agent systems/control units 118 to provide one or more alerts to the operator of the ego agent 102 to avoid overlap with the respective predicted trajectories of the dynamic agents 202 at one or more future time steps.
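
As a non-limiting illustration, the following Python sketch shows one way overlap between a planned ego path and the predicted trajectories may be checked; the planned ego path input and the safety radius are assumptions introduced for illustration only.

```python
# Hedged sketch of an overlap check: flag any future time step at which the ego agent's
# planned position comes within an assumed safety radius of a predicted agent position.
import numpy as np

def overlaps(ego_plan, predicted, safety_radius=2.0):
    """ego_plan: (T_out, 2); predicted: (num_agents, T_out, 2) -> (num_agents, T_out) bool."""
    dist = np.linalg.norm(predicted - ego_plan[None, :, :], axis=-1)
    return dist < safety_radius

# If overlaps(...).any(), the agent control module could issue an alert or adjust the plan.
```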

FIG. 7 is a process flow diagram of a method 700 for completing trajectory prediction from agent-augmented environments according to an exemplary embodiment of the present disclosure. FIG. 7 will be described with reference to the components of FIG. 1 though it is to be appreciated that the method 700 of FIG. 7 may be used with other systems/components. The method 700 may begin at block 702, wherein the method 700 may include receiving image data associated with surrounding environment 200 of an ego agent 102.

The method 700 may proceed to block 704, wherein the method 700 may include processing an agent-augmented static representation of the surrounding environment 200 of the ego agent 102 based on the image data. The method 700 may proceed to block 706, wherein the method 700 may include processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation. In one embodiment, nodes in the spatial graphs are simultaneously clustered considering their spatial and temporal interactions. The method 700 may proceed to block 708, wherein the method 700 may include predicting future trajectories of agents that are located within the surrounding environment of the ego agent 102 based on the spatial graphs.

It should be apparent from the foregoing description that various exemplary embodiments of the disclosure may be implemented in hardware. Furthermore, various exemplary embodiments may be implemented as instructions stored on a non-transitory machine-readable storage medium, such as a volatile or non-volatile memory, which may be read and executed by at least one processor to perform the operations described in detail herein. A machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device. Thus, a non-transitory machine-readable storage medium excludes transitory signals but may include both volatile and non-volatile memories, including but not limited to read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media.

It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in machine readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

It will be appreciated that various implementations of the above-disclosed and other features and functions, or alternatives or varieties thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.

Claims

1. A computer-implemented method for completing trajectory prediction from agent-augmented environments, comprising:

receiving image data associated with surrounding environment of an ego agent;
processing an agent-augmented static representation of the surrounding environment of the ego agent based on the image data;
processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation, wherein nodes in the spatial graphs are simultaneously clustered considering their spatial and temporal interactions; and
predicting future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.

2. The computer-implemented method of claim 1, wherein the image data includes data associated with the surrounding environment of the ego agent based on a plurality of images that are captured at a plurality of time steps.

3. The computer-implemented method of claim 1, wherein the image data includes dynamic agents, static agents, and static structures that are located within the surrounding environment of the ego agent.

4. The computer-implemented method of claim 3, wherein processing the agent-augmented static representation of the surrounding environment of the ego agent includes augmenting the boundaries of the static structures with augmented static agents that represent an occlusion to respective paths of the dynamic agents that are caused by the static structures.

5. The computer-implemented method of claim 3, wherein processing the set of spatial graphs includes constructing the set of spatial graphs that pertain to a static representation of the surrounding environment that is based on the agent-augmented static representation.

6. The computer-implemented method of claim 5, wherein the set of spatial graphs model pair-wise interactions between the ego agent, the dynamic agents located within the surrounding environment, and the static structures that are located within the surrounding environment of the ego agent.

7. The computer-implemented method of claim 6, further including iteratively encoding interaction features in a coarser level as a notion of both spatial locality and temporal dependency, wherein new coarsened graphs include nodes that correspond to clusters that consider the spatial and temporal interactions.

8. The computer-implemented method of claim 7, wherein predicting future trajectories of the agents includes decoding previously encoded interaction features to predict future trajectories that are associated with each of the dynamic agents over a plurality of future time steps.

9. The computer-implemented method of claim 1, further including controlling at least one component of the ego agent based on the predicted future trajectories of agents within the surrounding environment of the ego agent.

10. A system for completing trajectory prediction from agent-augmented environments, comprising:

a memory storing instructions when executed by a processor cause the processor to:
receive image data associated with surrounding environment of an ego agent;
process an agent-augmented static representation of the surrounding environment of the ego agent based on the image data;
process a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation, wherein nodes in the spatial graphs are simultaneously clustered considering their spatial and temporal interactions; and
predict future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.

11. The system of claim 10, wherein the image data includes data associated with the surrounding environment of the ego agent based on a plurality of images that are captured at a plurality of time steps.

12. The system of claim 10, wherein the image data includes dynamic agents, static agents, and static structures that are located within the surrounding environment of the ego agent.

13. The system of claim 12, wherein processing the agent-augmented static representation of the surrounding environment of the ego agent includes augmenting the boundaries of the static structures with augmented static agents that represent an occlusion to respective paths of the dynamic agents that are caused by the static structures.

14. The system of claim 12, wherein processing the set of spatial graphs includes constructing the set of spatial graphs that pertain to a static representation of the surrounding environment that is based on the agent-augmented static representation.

15. The system of claim 14, wherein the set of spatial graphs model pair-wise interactions between the ego agent, the dynamic agents located within the surrounding environment, and the static structures that are located within the surrounding environment of the ego agent.

16. The system of claim 15, further including iteratively encoding interaction features in a coarser level as a notion of both spatial locality and temporal dependency, wherein new coarsened graphs include nodes that correspond to clusters that consider the spatial and temporal interactions.

17. The system of claim 16, wherein predicting future trajectories of the agents includes decoding previously encoded interaction features to predict future trajectories that are associated with each of the dynamic agents over a plurality of future time steps.

18. The system of claim 10, further including controlling at least one component of the ego agent based on the predicted future trajectories of agents within the surrounding environment of the ego agent.

19. A non-transitory computer readable storage medium storing instructions that when executed by a computer, which includes a processor perform a method, the method comprising:

receiving image data associated with surrounding environment of an ego agent;
processing an agent-augmented static representation of the surrounding environment of the ego agent based on the image data;
processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation, wherein nodes in the spatial graphs are simultaneously clustered considering their spatial and temporal interactions; and
predicting future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.

20. The non-transitory computer readable storage medium of claim 19, further including controlling at least one component of the ego agent based on the predicted future trajectories of agents within the surrounding environment of the ego agent.

Patent History
Publication number: 20220153307
Type: Application
Filed: Jan 28, 2021
Publication Date: May 19, 2022
Patent Grant number: 12110041
Inventors: Chiho Choi (San Jose, CA), Srikanth Malla (Sunnyvale, CA), Sangjae Bae (Berkley, CA)
Application Number: 17/161,136
Classifications
International Classification: B60W 60/00 (20060101); G06K 9/00 (20060101);