INDUCING RICH INTERACTION STRUCTURES BETWEEN WORDS FOR DOCUMENT-LEVEL EVENT ARGUMENT EXTRACTION
Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
The following relates generally to natural language processing, and more specifically to event argument extraction.
Natural language processing (NLP) refers to techniques for using computers to interpret natural language. In some cases, NLP tasks involve assigning annotation data such as grammatical information to words or phrases within a natural language expression. Different classes of machine learning algorithms have been applied to NLP tasks.
Event extraction is an NLP task that involves identifying instances of events in text. In some examples, event extraction involves a number of sub-tasks including entity detection, event detection, and event argument extraction. Entity detection refers to identifying entities such as people, objects, and places. Event detection refers to identifying events such as actions or moments referred to within a text. Event argument extraction refers to identifying the relationships between the entity mentions and the events (event participants and spatio-temporal attributes, collectively known as event arguments).
Conventionally, sentence-level event argument extraction is used to determine the relationship between an event trigger word and an argument candidate word in the same sentence. However, systems designed for sentence-level event argument extraction are not scalable to document-level event argument extraction, where an argument candidate word may be located far from an event trigger word. There is a need in the art for improved event argument extraction systems that are scalable and efficient for document-level event argument extraction.
SUMMARY
The present disclosure describes systems and methods for natural language processing. One or more embodiments of the disclosure provide an event argument extraction apparatus trained using machine learning techniques to predict a relationship between an event trigger word and an argument candidate word based on a high-dimensional relationship representation vector. For example, an event argument extraction network may be trained for document-level event argument extraction and role prediction. The event argument extraction network leverages multiple sources of information to generate a set of document structures to provide sufficient knowledge for representation learning. In some examples, the set of document structures involves syntax, discourse, and contextual semantic information. In some examples, a graph-based network is used to produce relatively rich document structures to enable multi-hop and heterogeneous interactions between words in a document.
A method, apparatus, and non-transitory computer readable medium for natural language processing are described. One or more embodiments of the method, apparatus, and non-transitory computer readable medium include receiving a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generating word representation vectors for the words, generating a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generating a relationship representation vector based on the document structures, and predicting a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
An apparatus and method for natural language processing are described. One or more embodiments of the apparatus and method include a document encoder configured to generate word representation vectors for words of a document organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, a structure component configured to generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, a relationship encoder configured to generate a relationship representation vector based on the document structures, and a decoder configured to predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
A method, apparatus, and non-transitory computer readable medium for training a neural network are described. One or more embodiments of the method, apparatus, and non-transitory computer readable medium include receiving training data including documents comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and argument words, and the training data further including ground truth relationships between the event trigger word and the argument words, generating word representation vectors for the words, generating a plurality of document structures including a semantic structure based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information based on the plurality of sentences, generating relationship representation vectors based on the document structures, predicting relationships between the event trigger word and the argument words based on the relationship representation vectors, computing a loss function by comparing the predicted relationships to the ground truth relationships, and updating parameters of a neural network based on the loss function.
The present disclosure describes systems and methods for natural language processing. One or more embodiments of the disclosure provide an event argument extraction apparatus trained using machine learning techniques to predict a relationship between an event trigger word and an argument candidate word based on a high-dimensional relationship representation vector. For example, an event argument extraction network may be trained for document-level event argument extraction and role prediction. The event argument extraction network leverages multiple sources of information to generate a set of document structures to provide sufficient knowledge for representation learning. In some examples, the set of document structures involves syntax, discourse, contextual semantic information, and an external knowledge base (e.g., WordNet). Additionally, a graph-based network such as a graph transformer network (GTN) may be used to produce relatively rich document structures to enable multi-hop and heterogeneous interactions between words in a document.
Recently, event argument extraction systems have been developed that focus on sentence-level event argument extraction, where event trigger words and argument candidate words are presented in the same sentence (i.e., document structures are not considered on a document level in training). However, conventional event argument extraction systems do not incorporate external knowledge when generating document structures, and fail to combine multiple structures from different sources of information for multi-hop heterogeneous reasoning.
One or more embodiments of the present disclosure provide an improved event argument extraction apparatus that can perform document-level event argument extraction tasks, including tasks where event trigger words and argument candidate words belong to different sentences in multiple documents. In some examples, the event argument extraction network includes a deep neural network that generates document structures based on multiple sources of information such as syntax, semantics, and discourse. According to one embodiment, a graph transformer network (GTN) is used to combine these document structures.
By applying the unconventional step of generating multiple document structures, one or more embodiments of the present disclosure provide an event argument extraction network that can perform efficient event argument extraction at a document level. The improved network is scalable to scenarios where an event trigger word and an argument candidate word are located far from each other in different sentences or documents. In some cases, a supervised training approach may be used to train the event argument extraction network. As a result, the improved network can extract arguments of event mentions over one or more documents to provide a complete view of information for events in these documents.
Embodiments of the present disclosure may be used in the context of information extraction, knowledge base construction, and question answering applications. For example, an event argument extraction network based on the present disclosure may be used to predict a relationship between an event trigger word and an argument candidate word. In some examples, the event trigger word and the argument candidate word belong to different sentences in multiple documents (i.e., document-level event argument extraction). An example of an application of the inventive concept in the question answering context is provided with reference to
In the example of
The event argument extraction apparatus 110 includes a trained event argument extraction network having a document encoder, which generates word representation vectors for the words. The event argument extraction network generates a set of document structures using a structure component. The set of document structures include a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the set of sentences. The event argument extraction network generates a relationship representation vector based on the document structures.
Accordingly, the event argument extraction apparatus 110 predicts a relationship between the event trigger word and the argument candidate word based on the relationship representation vector. In the example above, the event argument extraction network identifies an entity mention (i.e., Embassy of Algeria) as an argument candidate word for the event trigger word (i.e., donated) found in the query. The event argument extraction apparatus 110 returns the predicted answer to the user 100, e.g., via the user device 105 and the cloud 115.
The user device 105 may be a personal computer, laptop computer, mainframe computer, palmtop computer, personal assistant, mobile device, or any other suitable processing apparatus. In some examples, the user device 105 includes software that incorporates an event argument extraction or a question answering application (e.g., a dialogue system). The question answering application may either include or communicate with the event argument extraction apparatus 110.
The event argument extraction apparatus 110 includes a computer implemented network comprising a document encoder, a structure component, a relationship encoder, and a decoder. The network generates word representation vectors for words of a document organized into a plurality of sentences using a document encoder, the words comprising an event trigger word and an argument candidate word. The network generates a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences using a structure component. The network generates a relationship representation vector based on the document structures using a relationship encoder. The network predicts a relationship between the event trigger word and the argument candidate word based on the relationship representation vector using a decoder.
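For illustration only, the following Python sketch shows one way the four components described above could be chained for a single trigger/argument prediction. The class and method names are hypothetical assumptions made for the sketch and are not part of the disclosed implementation.

```python
# Hypothetical sketch of chaining the document encoder, structure component,
# relationship encoder, and decoder; all names here are illustrative assumptions.

class EventArgumentExtractionNetwork:
    def __init__(self, document_encoder, structure_component,
                 relationship_encoder, decoder):
        self.document_encoder = document_encoder          # words -> word representation vectors
        self.structure_component = structure_component    # vectors/sentences -> document structures
        self.relationship_encoder = relationship_encoder  # structures -> relationship vector
        self.decoder = decoder                            # relationship vector -> role prediction

    def predict_role(self, sentences, trigger_index, argument_index):
        # 1. Encode every word of the document into a representation vector.
        word_vectors = self.document_encoder.encode(sentences, trigger_index, argument_index)
        # 2. Build semantic, syntax, and discourse structures (adjacency matrices).
        structures = self.structure_component.build(word_vectors, sentences)
        # 3. Combine the structures into a single relationship representation vector.
        relationship_vector = self.relationship_encoder.encode(
            word_vectors, structures, trigger_index, argument_index)
        # 4. Predict the argument role (or "None") for the trigger/candidate pair.
        return self.decoder.predict(relationship_vector)
```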
The event argument extraction apparatus 110 may also include a processor unit, a memory unit, and a training component. The training component is used to train the event argument extraction (EAE) network. Additionally, the event argument extraction apparatus 110 can communicate with the database 120 via the cloud 115. In some cases, the architecture of the event argument extraction network is also referred to as a network model or an EAE network. Further detail regarding the architecture of the event argument extraction apparatus 110 is provided with reference to
In some cases, the event argument extraction apparatus 110 is implemented on a server. A server provides one or more functions to users linked by way of one or more of the various networks. In some cases, the server includes a single microprocessor board, which includes a microprocessor responsible for controlling all aspects of the server. In some cases, a server uses a microprocessor and protocols to exchange data with other devices or users on one or more of the networks via hypertext transfer protocol (HTTP) and simple mail transfer protocol (SMTP), although other protocols such as file transfer protocol (FTP) and simple network management protocol (SNMP) may also be used. In some cases, a server is configured to send and receive hypertext markup language (HTML) formatted files (e.g., for displaying web pages). In various embodiments, a server comprises a general purpose computing device, a personal computer, a laptop computer, a mainframe computer, a supercomputer, or any other suitable processing apparatus.
A cloud 115 is a computer network configured to provide on-demand availability of computer system resources, such as data storage and computing power. In some examples, the cloud 115 provides resources without active management by the user. The term cloud 115 is sometimes used to describe data centers available to many users over the Internet. Some large cloud networks have functions distributed over multiple locations from central servers. A server is designated an edge server if it has a direct or close connection to a user. In some cases, a cloud 115 is limited to a single organization. In other examples, the cloud is available to many organizations. In one example, a cloud 115 includes a multi-layer communications network comprising multiple edge routers and core routers. In another example, a cloud 115 is based on a local collection of switches in a single physical location.
A database 120 is an organized collection of data (e.g., a corpus of documents to be searched). For example, a database 120 stores data in a specified format known as a schema. A database 120 may be structured as a single database, a distributed database, multiple distributed databases, or an emergency backup database. In some cases, a database controller may manage data storage and processing in a database 120. In some cases, a user interacts with the database controller. In other cases, the database controller may operate automatically without user interaction.
At operation 200, the user inputs a query related to an event. In an example, the query is “who donated to the foundation?”. The word “donated” may be recognized by an event argument extraction system as an event trigger word. In some cases, the operations of this step refer to, or may be performed by, a user operating a user device as described with reference to
At operation 205, the system identifies a document corresponding to the event. In some cases, the operations of this step refer to, or may be performed by, a database as described with reference to
According to an example, the text of a document includes “the foundation said that immediately following the Haitian earthquake, the Embassy of Algeria provided an unsolicited lump-sum fund to the foundation's relief plan. This was a one-time, specific donation to help Haiti and it had donated twice to the Clinton Foundation before.”
At operation 210, the system determines a relationship between an entity and the event. In some cases, the operations of this step refer to, or may be performed by, an event argument extraction apparatus as described with reference to
In the above example document, the system recognizes the entity mention (i.e., Embassy of Algeria) as an argument (of role giver) for an event mention associated with the event trigger word (i.e., “donated”). To reach this conclusion, the system utilizes the coreference link between the entity mention (Embassy of Algeria) and the pronoun “it” (i.e., discourse information), which can be directly connected with the trigger word (i.e., “donated”) using an edge in the syntactic dependency tree of the second sentence. Alternatively, if a coreference link is not found or obtained (e.g., due to errors in coreference resolution systems), the system can rely on the close semantic similarity between the event trigger word (“donated”) and the phrase “provided an unsolicited lump-sum fund”, which may be linked to the entity mention or argument candidate word (i.e., Embassy of Algeria) using a dependency edge in the first sentence. The system is configured as a document-level EAE model that can jointly capture information from syntactic, semantic, and discourse structures to sufficiently encode important interactions between words for event argument extraction.
At operation 215, the system responds to the query based on the relationship. According to the example above, the system provides an answer to the query, “Embassy of Algeria”. Therefore, the user would understand that it is the Embassy of Algeria that donated to the foundation. In some cases, the user can modify the query and the system generates an updated response based on the modified query. In some cases, the operations of this step refer to, or may be performed by, a user device as described with reference to
In
In some examples, the document encoder comprises a word encoder, a position embedding table, and an LSTM. In some examples, the structure component comprises a lexical database, a dependency parser, and a coreference network. In some examples, the relationship encoder comprises a GTN and a graph convolution network (GCN). In some examples, the decoder comprises a feed-forward layer and a softmax layer.
A processor unit 300 is an intelligent hardware device, (e.g., a general-purpose processing component, a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor unit 300 is configured to operate a memory array using a memory controller. In other cases, a memory controller is integrated into the processor. In some cases, the processor unit 300 is configured to execute computer-readable instructions stored in a memory to perform various functions. In some embodiments, a processor unit 300 includes special purpose components for modem processing, baseband processing, digital signal processing, or transmission processing.
Examples of a memory unit 305 include random access memory (RAM), read-only memory (ROM), or a hard disk. Examples of memory unit 305 include solid state memory and a hard disk drive. In some examples, a memory unit 305 is used to store computer-readable, computer-executable software including instructions that, when executed, cause a processor to perform various functions described herein. In some cases, the memory unit 305 contains, among other things, a basic input/output system (BIOS) which controls basic hardware or software operation such as the interaction with peripheral components or devices. In some cases, a memory controller operates memory cells. For example, the memory controller can include a row decoder, column decoder, or both. In some cases, memory cells within a memory unit 305 store information in the form of a logical state.
According to some embodiments of the present disclosure, the event argument extraction apparatus includes a computer implemented artificial neural network (ANN) that predicts a relationship between an event trigger word and an argument candidate word based on a relationship representation vector. An ANN is a hardware or a software component that includes a number of connected nodes (i.e., artificial neurons), which loosely correspond to the neurons in a human brain. Each connection, or edge, transmits a signal from one node to another (like the physical synapses in a brain). When a node receives a signal, it processes the signal and then transmits the processed signal to other connected nodes. In some cases, the signals between nodes comprise real numbers, and the output of each node is computed by a function of the sum of its inputs. Each node and edge is associated with one or more node weights that determine how the signal is processed and transmitted.
According to some embodiments, training component 310 receives training data including documents including a set of words organized into a set of sentences, the words including an event trigger word and argument words, and the training data further including ground truth relationships between the event trigger word and the argument words. In some examples, training component 310 computes a loss function by comparing the predicted relationships to the ground truth relationships. In some examples, training component 310 updates parameters of a neural network based on the loss function. In some examples, the updated parameters include parameters of a position embedding table used for generating the word representation vectors. In some examples, the updated parameters include parameters of weight matrices used for generating the semantic structure. In some examples, the updated parameters include weights for weighted linear combinations of the document structures used for generating the relationship representation vectors.
According to some embodiments, event argument extraction network 315 receives a document including a set of words organized into a set of sentences, the words including an event trigger word and an argument candidate word.
According to some embodiments, document encoder 320 generates word representation vectors for the words. In some examples, document encoder 320 encodes the words of the document to produce individual word embeddings. The document encoder 320 then identifies relative position of word pairs of the document to produce position embeddings, where the word representation vectors are based on the word embeddings and the position embeddings. In some examples, document encoder 320 applies a long short-term memory (LSTM) to produce word embeddings and position embeddings. The document encoder 320 then extracts a hidden vector from the LSTM to produce the word representation vectors.
According to some embodiments, document encoder 320 is configured to generate word representation vectors for words of a document organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word. In some examples, the document encoder 320 includes a word encoder, a position embedding table, and an LSTM. Document encoder 320 is an example of, or includes aspects of, the corresponding element described with reference to
According to some embodiments, structure component 325 generates a set of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the set of sentences. In some examples, structure component 325 receives dependency tree information for the sentences of the document. The structure component 325 then connects the dependency tree information for the sentences to create a combined dependency tree. The structure component 325 then identifies a path between a pair of words of the combined dependency tree, where the syntax structure is generated based on the path.
In some examples, structure component 325 applies a weighted matrix to each word representation vector of a pair of the word representation vectors to produce weighted word representation vectors. Next, the structure component 325 computes a semantic similarity score for the pair of the word representation vectors based on the weighted word representation vectors, where the semantic structure is based on the semantic similarity score.
In some examples, structure component 325 maps the words to corresponding nodes of a lexical database. Next, the structure component 325 identifies a glossary for each of the corresponding nodes. The structure component 325 then identifies node embeddings based on the glossary. The structure component 325 then computes a glossary similarity score for a pair of the node embeddings, where the semantic structure is based on the glossary similarity score.
In some examples, structure component 325 maps the words to corresponding nodes of a lexical database. Next, the structure component 325 computes information content of the corresponding nodes, and a least common subsumer for a pair of the corresponding nodes. The structure component 325 then computes a structural similarity score for the pair of the corresponding nodes, where the semantic structure is based on the structural similarity score. In some examples, structure component 325 computes a boundary score for a pair of words based on whether the pair of words are both located in a same sentence, where the discourse structure is based on the boundary score.
In some examples, structure component 325 identifies coreference information for words of the document. Then, the structure component 325 computes a coreference score for a pair of words based on whether the pair of words are located in sentences containing entity mentions that refer to a same entity of the coreference information, where the discourse structure is based on the coreference score. In some examples, the structure component 325 includes a lexical database, a dependency parser, and a coreference network. Structure component 325 is an example of, or includes aspects of, the corresponding element described with reference to
One or more embodiments of the present disclosure combine different information sources to generate effective document structures for event argument extraction tasks. The event argument extraction network 315 produces document structures based on knowledge from syntax (i.e., dependency trees), discourse (i.e., coreference links), and semantic similarity. Semantic similarity depends on contextualized representation vectors to compute interaction scores between nodes and on external knowledge bases that enrich the document structures for event argument extraction. The words in the documents are linked to entries in one or more external knowledge bases, and the similarity of the linked entries is exploited to obtain word similarity scores for the structures. In some examples, a lexical database (e.g., WordNet) is used as the knowledge base and tools (e.g., word sense disambiguation or WSD) are applied to facilitate word-entry linking. The linked entry or node in WordNet can provide expert knowledge on the meanings of the words (e.g., glossary and hierarchy information). Such expert knowledge complements the contextual information of the words and enhances the semantic-based document structures for event argument extraction. In some embodiments, the event argument extraction network 315 uses one or more external knowledge bases to build document structures for information extraction tasks.
A knowledge base refers to structured information, which may be extracted from a vast amount of unstructured or unusable data. A knowledge base may be used for downstream applications such as search, question answering, link prediction, visualization, and modeling.
Knowledge base construction is challenging because it involves dealing with complex input data and tasks such as parsing, extracting, cleaning, linking, and integration. With conventional machine learning techniques, these tasks typically depend on feature engineering (i.e., manually crafting attributes of the input data to feed into a system). In some cases, deep learning models can operate directly over raw input data such as text or images and enable connections between a set of entities, and a user can interpret those connections after a knowledge accumulation phase and make inferences based on prior knowledge.
According to some embodiments, relationship encoder 330 generates a relationship representation vector based on the document structures. In some examples, relationship encoder 330 generates a set of intermediate structures for each of a set of channels of a graph transformer network (GTN), where each of the intermediate structures includes a weighted linear combination of the document structures. The relationship encoder 330 then generates a hidden vector for each of the channels using a graph convolution network (GCN), where the relationship representation vector is generated based on the hidden vector for each of the channels. In some examples, the relationship encoder 330 includes a GTN and a GCN. Relationship encoder 330 is an example of, or includes aspects of, the corresponding element described with reference to
A graph transformer network (GTN) is capable of generating new graph structures, which involves identifying useful connections between unconnected nodes on the original graph, while learning effective node representations on the new graphs in an end-to-end fashion. The graph transformer layer, a core layer of the GTN, learns a soft selection of edge types and composite relations for generating useful multi-hop connections or meta-paths. In some cases, GTNs learn new graph structures based on data and tasks without domain knowledge, and can yield powerful node representations via convolution on the new graphs. Without domain-specific graph preprocessing, GTNs can outperform approaches that require pre-defined meta-paths from domain knowledge.
According to some embodiments, decoder 335 predicts a relationship between the event trigger word and the argument candidate word based on the relationship representation vector. In some examples, decoder 335 applies a feed-forward neural network to the relationship representation vector. In some examples, decoder 335 applies a softmax layer to an output of the feed-forward neural network, where the relationship is predicted based on the output of the softmax layer. In some examples, the decoder 335 includes a feed-forward layer and a softmax layer. Decoder 335 is an example of, or includes aspects of, the corresponding element described with reference to
A softmax function is used as the activation function of the neural network to normalize the output of the network to a probability distribution over predicted output classes. After applying the softmax function, each component of the feature map is in the interval (0, 1) and the components add up to one. These values are interpreted as probabilities.
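As a brief illustration of this normalization, the following NumPy sketch (one possible implementation, with hypothetical role labels) converts raw decoder scores into a probability distribution:

```python
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    # Shift by the maximum score for numerical stability; the result is unchanged.
    exps = np.exp(scores - scores.max())
    return exps / exps.sum()

# Hypothetical raw decoder scores for the roles ["None", "Giver", "Recipient"].
probabilities = softmax(np.array([0.2, 2.1, -0.5]))
print(probabilities, probabilities.sum())  # each value lies in (0, 1); the sum is 1.0
```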
The described methods may be implemented or performed by devices that include a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, a conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). Thus, the functions described herein may be implemented in hardware or software and may be executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored in the form of instructions or code on a computer-readable medium.
Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of code or data. A non-transitory storage medium may be any available medium that can be accessed by a computer. For example, non-transitory computer-readable media can comprise random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk (CD) or other optical disk storage, magnetic disk storage, or any other non-transitory medium for carrying or storing data or code.
Also, connecting components may be properly termed computer-readable media. For example, if code or data is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, or microwave signals, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology are included in the definition of medium. Combinations of media are also included within the scope of computer-readable media.
One or more embodiments of the present disclosure relate to tasks of identifying the roles of entity mentions or arguments in the events evoked by trigger words. An event argument extraction network includes a deep learning model for document-level event argument extraction tasks, where document structures and graphs are utilized to represent input documents and support representation learning. The document structures are used in the event argument extraction network to facilitate direct connections and interactions of important context words, which may be located far from each other in the documents. Additionally, these document structures enhance representation vectors (e.g., for event argument extraction tasks). Multiple sources of information (e.g., syntax structure, semantic structure, and discourse structure) are leveraged and combined to generate document structures for a document-level EAE network model (i.e., the event argument extraction network). Furthermore, the network model incorporates external knowledge for words in these documents (retrieved from a lexical database such as WordNet) to compute document structures and uses graph-based networks (e.g., a graph transformer network or GTN) to induce relatively rich document structures for event argument extraction tasks. The network model leads to increased performance on document-level tasks (e.g., event argument extraction).
From top to bottom as illustrated in
The event argument extraction network is not limited to sentence-level event argument extraction where event triggers and arguments appear in the same sentence. In some cases, arguments of an event may be presented in sentences other than the sentence that hosts the event trigger in a document 400. For example, in the event extraction (EE) dataset of the DARPA AIDA program (phase 1), 38% of arguments appear outside the sentences containing the corresponding triggers, i.e., in the document-level context.
In some examples, document-level event argument extraction involves long document context, so that the network model is configured to identify important context words (among long word sequences) effectively and link them to event triggers and arguments for role prediction.
The word representation vectors 410 are input to a structure component 415 to produce document structures 420. Structure component 415 is an example of, or includes aspects of, the corresponding element described with reference to
Document structures 420 are used to facilitate connection and reasoning between important context words for prediction. In some cases, document structures 420 may refer to interaction graphs, utilizing designated objects in one or more documents (e.g., words, entity mentions, sentences) to form nodes, and different information sources and heuristics to establish the edges. For example, document structures 420 may be represented using adjacency matrices for which the value or score at one cell indicates the importance of a node toward the other one for representation learning and role prediction in event argument extraction. Such direct connections between nodes enable objects, which are sequentially far from each other in the documents, to interact and produce useful information. Document structures 420 and graphs are used to perform document context modeling and representation learning for document-level event argument extraction.
Existing models use specific types of information or heuristics (e.g., only one type of information) to form the edges in document structures. However, the event argument extraction network can leverage a diversity of useful information to enrich document structures 420 for event argument extraction. In an embodiment, the structure component 415 can generate document structures 420 based on syntactic dependency trees of a set of sentences, consider discourse structures (e.g., coreference links) to induce document structures 420, and employ semantic representation-based similarity between tokens to infer document structures 420. The network model considers multiple information sources to capture important interaction information between nodes or words.
The event argument extraction network is configured to produce a combination of document structures 420 for event argument extraction, where the argument reasoning process for the event trigger word and the argument candidate word involves a sequence of interactions with multiple other words, potentially using a different type of information at each interaction step, e.g., syntax, discourse, and/or semantic information (i.e., heterogeneous interaction types). The interaction score for a pair of nodes or words (i, j) in the combined structures is conditioned not only on the two involved words (i.e., i and j) but also on sequences of other words, to enable heterogeneous interaction types between the pairs of words in the sequence.
One or more embodiments of the present disclosure provide graph-based networks (e.g., GTN and GCN) for document structure combination and representation learning in document-level event argument extraction. In an embodiment, the network model uses a graph-based network that learns graphical transformations of the input (e.g., a GTN) to implement a multi-hop heterogeneous reasoning architecture. Weighted sums of the initial document structures serve as intermediate structures that capture information from different perspectives (heterogeneous types). Next, the intermediate structures are multiplied to generate the final combined document structures, which allows multiple other words to contribute to a single structure score (multi-hop reasoning paths).
Next, the document structures 420 are input to the relationship encoder 440 to produce relationship representation vector 445. The relationship encoder 440 is an example of, or includes aspects of, the corresponding element described with reference to
The relationship representation vector 445 is then input to the decoder 450 to produce relationship 455 between the event trigger word and the argument candidate word. Decoder 450 is an example of, or includes aspects of, the corresponding element described with reference to
In accordance with
At operation 500, the system receives a document including a set of words organized into a set of sentences, the words including an event trigger word and an argument candidate word. In some cases, the operations of this step refer to, or may be performed by, an event argument extraction network as described with reference to
One or more embodiments of the present disclosure formulate document-level event argument extraction as a multi-class classification task. The input to the system is a document D=w1, w2, . . . , wN comprising multiple sentences Si. The event argument extraction network takes a golden event trigger, i.e., the t-th word of D (wt), and an argument candidate, i.e., the a-th word of D (wa), as inputs (wt and wa can occur in different sentences). In some embodiments, the system is configured to predict the role of the argument candidate wa in the event mention evoked by wt. In some cases, the role may be None, indicating that wa is not a participant in the event mention of wt. Additionally, the event argument extraction network includes a document encoder to transform the words in D into high-dimensional vectors, a structure generation component (e.g., a structure component) to generate initial document structures for event argument extraction, and a structure combination component (e.g., a relationship encoder) to combine these document structures and learn representation vectors for role prediction.
At operation 505, the system generates word representation vectors for the words. In some cases, the operations of this step refer to, or may be performed by, a document encoder as described with reference to
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include encoding the words of the document to produce individual word embeddings. Some examples further include identifying relative position of word pairs of the document to produce position embeddings, wherein the word representation vectors are based on the word embeddings and the position embeddings.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include applying a long short-term memory (LSTM) to produce word embeddings and position embeddings. Some examples further include extracting a hidden vector from the LSTM to produce the word representation vectors.
In an embodiment, each word wi∈D is transformed into a representation vector xi that is the concatenation of two vectors: a pre-trained word embedding of wi and position embeddings of wi. For the pre-trained word embedding of wi, the network model considers both non-contextualized (i.e., GloVe) and contextualized (i.e., BERT) embeddings. For example, in the BERT model, as wi may be split into multiple word-pieces, the average of the hidden vectors for the word-pieces of wi in the last layer of the BERT model is used as the word embedding vector for wi. In an embodiment, the BERT-base variant is used, and D is divided into segments of 512 word-pieces that are encoded separately.
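For illustration, a minimal sketch of the word-piece averaging step is shown below using the Hugging Face transformers library; the choice of library, the checkpoint name, and the omission of the 512 word-piece segmentation for long documents are assumptions made only to keep the example short.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_embeddings(words):
    """Return one BERT-based vector per word by averaging its word-piece vectors."""
    encoding = tokenizer(words, is_split_into_words=True,
                         return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**encoding).last_hidden_state[0]   # (num_word_pieces, 768)
    vectors = []
    for idx in range(len(words)):
        # word_ids() maps each word piece back to the index of its source word.
        piece_positions = [p for p, w in enumerate(encoding.word_ids()) if w == idx]
        vectors.append(hidden[piece_positions].mean(dim=0))
    return torch.stack(vectors)                           # (num_words, 768)
```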
For the position embeddings of wi, these vectors are obtained by looking up the relative distances between wi and the trigger and argument words (i.e., i-t and i-a, respectively) in a position embedding table. The table is initialized randomly and updated during training. Position embedding vectors are important because they inform the network model of the positions of the trigger and argument words.
The vectors X=x1, x2, . . . , xN represent the words of D, and these vectors are input to a bi-directional long short-term memory network (LSTM) to generate an abstract vector sequence H=h1, h2, . . . , hN, where hi is the hidden vector for wi obtained by concatenating the corresponding forward and backward hidden vectors from the bi-directional LSTM. In some cases, the hidden vectors in H are used as inputs for the subsequent computation. At this step, the sentence boundary information of D is not yet included in the hidden vectors H; it is described in greater detail below.
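A simplified PyTorch sketch of this encoder is provided below for illustration; the dimensions, the maximum relative distance, and the module name are assumptions and do not limit the embodiments described herein.

```python
import torch
import torch.nn as nn

class DocumentEncoder(nn.Module):
    """Sketch: concatenate word embeddings with trigger/argument position embeddings,
    then run a bi-directional LSTM to obtain the hidden vectors H = h_1..h_N."""

    def __init__(self, word_dim=768, pos_dim=50, hidden_dim=200, max_dist=500):
        super().__init__()
        # Position embedding table, indexed by relative distance shifted to be non-negative.
        self.position_table = nn.Embedding(2 * max_dist + 1, pos_dim)
        self.max_dist = max_dist
        self.lstm = nn.LSTM(word_dim + 2 * pos_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, word_vectors, trigger_index, argument_index):
        n = word_vectors.size(0)
        positions = torch.arange(n)
        dist_to_trigger = (positions - trigger_index).clamp(-self.max_dist, self.max_dist) + self.max_dist
        dist_to_argument = (positions - argument_index).clamp(-self.max_dist, self.max_dist) + self.max_dist
        x = torch.cat([word_vectors,
                       self.position_table(dist_to_trigger),
                       self.position_table(dist_to_argument)], dim=-1)
        h, _ = self.lstm(x.unsqueeze(0))   # (1, N, 2 * hidden_dim)
        return h.squeeze(0)                # forward/backward states already concatenated
```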
At operation 510, the system generates a set of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the set of sentences. In some cases, the operations of this step refer to, or may be performed by, a structure component as described with reference to
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include receiving dependency tree information for the sentences of the document. Some examples further include connecting the dependency tree information for the sentences to create combined dependency tree. Some examples further include identifying a path between a pair of words of the combined dependency tree, wherein the syntax structure is generated based on the path.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include applying a weighted matrix to each word representation vector of a pair of the word representation vectors to produce weighted word representation vectors. Some examples further include computing a semantic similarity score for the pair of the word representation vectors based on the weighted word representation vectors, wherein the semantic structure is based on the semantic similarity score.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include mapping the words to corresponding nodes of a lexical database. Some examples further include identifying a glossary for each of the corresponding nodes. Some examples further include identifying node embeddings based on the glossary. Some examples further include computing a glossary similarity score for a pair of the node embeddings, wherein the semantic structure is based on the glossary similarity score.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include mapping the words to corresponding nodes of a lexical database. Some examples further include computing information content of the corresponding nodes, and a least common subsumer for a pair of the corresponding nodes. Some examples further include computing a structural similarity score for the pair of the corresponding nodes, wherein the semantic structure is based on the structural similarity score.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include computing a boundary score for a pair of words based on whether the pair of words are both located in a same sentence, wherein the discourse structure is based on the boundary score.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include identifying coreference information for words of the document. Some examples further include computing a coreference score for a pair of words based on whether the pair of words are located in sentences containing entity mentions that refer to a same entity of the coreference information, wherein the discourse structure is based on the coreference score.
The event argument extraction network generates initial document structures that are subsequently combined to learn representation vectors for document-level event argument extraction. In some embodiments, a document structure involves an interaction graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$ between the words in D, i.e., $\mathcal{V} = \{w_i \mid w_i \in D\}$. The document structure is represented using a real-valued adjacency matrix $A = \{a_{ij}\}_{i,j=1 \ldots N}$, where the value or score $a_{ij}$ reflects the importance (or the level of interaction) of wj for the representation computation of wi for event argument extraction. Accordingly, wi and wj can influence the representation computation of each other even if these words are sequentially far away from each other in D, resulting in a flexible flow of information in representation learning for event argument extraction. Various types of information are simultaneously considered to form the edges $\mathcal{E}$ (or compute the interaction scores $a_{ij}$), including syntax structure, semantic structure, and discourse structure.
In some examples, the syntax-based document structure is based on sentence-level event argument extraction where dependency parsing trees of input sentences reveal important context, i.e., via the shortest dependency paths to connect event triggers and arguments, and guide the interaction modeling between words for argument role prediction by LSTM cells. The dependency trees for the sentences in D are used to provide information for the document structures for event argument extraction.
In some embodiments, the dependency relations or connections between pairs of words in D are leveraged to compute interaction scores $a_{ij}^{dep}$ in the syntax-based document structure $A^{dep} = \{a_{ij}^{dep}\}_{i,j=1 \ldots N}$ for D. Two words are more important to each other for representation learning if they are connected in the dependency trees. In an embodiment, the dependency tree Ti for each sentence Si in D is obtained using a dependency parser. For example, the Stanford CoreNLP Toolkit is used to parse the sentences. Then, the dependency trees Ti for the sentences are connected into a combined tree TD: the event argument extraction network creates a link between the root node of the tree Ti for Si and the root node of the tree Ti+1 for the subsequent sentence Si+1. The network model then retrieves the shortest path PD between the nodes for wt and wa in TD. Next, the network model computes the interaction score $a_{ij}^{dep}$, which is set to 1 if (wi, wj) or (wj, wi) is an edge in PD, and 0 otherwise. Accordingly, the document structure $A^{dep}$ focuses on the syntax-based important context words and their dependency interactions for the role prediction between wt and wa.
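For illustration, the following sketch builds the combined tree and marks the edges on the shortest path between wt and wa. Using the networkx library and representing the parses as head indices are assumptions for the example; the disclosure only requires a dependency parser such as the Stanford CoreNLP Toolkit.

```python
import networkx as nx
import numpy as np

def syntax_structure(heads, sentence_roots, trigger_index, argument_index):
    """Illustrative sketch of the syntax-based structure A_dep.

    heads[i]       : document-level index of word i's dependency head, or -1 for a root
    sentence_roots : document-level index of the root word of each sentence, in order
    """
    n = len(heads)
    graph = nx.Graph()
    graph.add_nodes_from(range(n))
    # Edges from each sentence's dependency tree.
    for child, head in enumerate(heads):
        if head >= 0:
            graph.add_edge(child, head)
    # Link the root of each sentence to the root of the next sentence.
    for r1, r2 in zip(sentence_roots, sentence_roots[1:]):
        graph.add_edge(r1, r2)
    # Shortest path between the trigger word and the argument candidate word.
    path = nx.shortest_path(graph, trigger_index, argument_index)
    a_dep = np.zeros((n, n))
    for i, j in zip(path, path[1:]):
        a_dep[i, j] = a_dep[j, i] = 1.0   # score 1 for edges on the path, 0 otherwise
    return a_dep
```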
In an embodiment, the system is configured to generate a semantic (document) structure. The semantic-based document structures evaluate the above interaction scores in the structures based on semantic similarity of words (i.e., two words are more important for representation learning of each other if they are more semantically related). The event argument extraction network applies two complementary approaches to capture the semantics of the words in D for semantic-based structure generation, i.e., contextual semantics and knowledge-based semantics.
In an embodiment, contextual semantics reveal the semantics of a word via the context in which it appears. This suggests the use of the contextualized representation vectors $h_i \in H$ (obtained from the LSTM model) to capture contextual semantics for the words $w_i \in D$ and produce a contextual semantic-based document structure $A^{context} = \{a_{ij}^{context}\}_{i,j=1 \ldots N}$ for D. The network model computes the semantic-based interaction score $a_{ij}^{context}$ for wi and wj by employing the normalized similarity score between their contextualized representation vectors:
where Uk and Uq are trainable weight matrices, and biases are omitted herein for brevity and convenience of description.
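Because the exact normalization of the score is not reproduced above, the following NumPy sketch assumes a row-wise softmax over key/query projections, which is one common way to realize a normalized similarity score; the matrices u_k and u_q stand in for the trainable weight matrices Uk and Uq mentioned above.

```python
import numpy as np

def contextual_semantic_structure(h, u_k, u_q):
    """Sketch of A_context: pairwise similarity of contextualized vectors h_i after
    projecting them with weight matrices (biases omitted). The row-wise softmax is
    an assumption about the exact form of the normalization."""
    keys = h @ u_k.T                 # (N, d') projections of every word vector
    queries = h @ u_q.T              # (N, d')
    scores = keys @ queries.T        # raw similarity for every pair (w_i, w_j)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)   # each row sums to 1
```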
In an embodiment, knowledge-based semantics employ external knowledge of the words from one or more knowledge bases to capture their semantics. The external knowledge provides a complementary source of information for the contextual semantics of the words (i.e., external knowledge versus internal contexts) to enrich the document structures for D. In some examples, a knowledge base for word meanings such as WordNet is used to obtain external knowledge for the words in D. WordNet involves a network connecting word meanings (i.e., synsets) according to various semantic relations (e.g., synonyms, hyponyms). Each node/synset in WordNet is associated with a textual glossary to provide expert definitions of the corresponding meaning.
In an embodiment, to generate the knowledge-based document structures for D, each word wi∈D is mapped to a synset node Mi in WordNet with a word sense disambiguation (WSD) tool. One embodiment of the present disclosure uses a knowledge base (e.g., WordNet 3.0) and a BERT-based WSD tool to perform the word-synset mapping. Then, the knowledge-based interaction scores between two words wi and wj in D are determined by leveraging the similarity scores between the two linked synset nodes Mi and Mj in WordNet. The information embedded in the synset nodes Mi is leveraged by using two versions of knowledge-based document structures for D, based on the glossaries of the synset nodes and the hierarchy structure in the knowledge base (e.g., WordNet).
The glossary-based document structure is represented as follows:
$A^{gloss} = \{a_{ij}^{gloss}\}_{i,j=1 \ldots N}$   (3)
For each word $w_i \in D$, the network model retrieves the glossary $G_{M_i}$ (a sequence of words) from the corresponding linked node $M_i$ in WordNet. A representation vector $V_{M_i}$ is computed to capture the semantic information in $G_{M_i}$ by applying the max-pooling operation over the non-contextualized (i.e., GloVe-based) pre-trained embeddings of the words in $G_{M_i}$. The glossary-based interaction score $a_{ij}^{gloss}$ for wi and wj is estimated using the cosine similarity between the glossary representations $V_{M_i}$ and $V_{M_j}$, i.e., $a_{ij}^{gloss} = \mathrm{cosine}(V_{M_i}, V_{M_j})$.
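For illustration, the sketch below computes a glossary-based score with max-pooled, non-contextualized word vectors and cosine similarity; the toy embedding table merely stands in for pre-trained GloVe vectors, and the glossary words are hypothetical.

```python
import numpy as np

def glossary_vector(glossary_words, glove):
    """Max-pool the (non-contextualized) word vectors of the glossary words."""
    vectors = [glove[w] for w in glossary_words if w in glove]
    return np.max(np.stack(vectors), axis=0)

def glossary_score(v_i, v_j):
    """Cosine similarity between two glossary representation vectors."""
    return float(v_i @ v_j / (np.linalg.norm(v_i) * np.linalg.norm(v_j) + 1e-8))

# Toy embedding table standing in for GloVe (illustrative values only).
glove = {"give": np.array([0.3, 0.9]), "money": np.array([0.8, 0.2]),
         "transfer": np.array([0.4, 0.7])}
v_donate = glossary_vector(["give", "money"], glove)
v_provide = glossary_vector(["transfer", "money"], glove)
print(glossary_score(v_donate, v_provide))
```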
A knowledge base (e.g., WordNet) hierarchy-based document structure is given as $A^{struct} = \{a_{ij}^{struct}\}_{i,j=1 \ldots N}$. The interaction score $a_{ij}^{struct}$ for wi and wj relies on the structure-based similarity of the linked synset nodes Mi and Mj in the knowledge base (e.g., WordNet). A relevant similarity measure (e.g., the Lin similarity) is applied to the synset nodes in WordNet as follows:
$a_{ij}^{struct} = \dfrac{2 \cdot IC(LCS(M_i, M_j))}{IC(M_i) + IC(M_j)}$   (4)
where IC and LCS represent the information content of a synset node and the least common subsumer of two synsets in the WordNet hierarchy (i.e., the most specific common ancestor node), respectively. In some examples, the NLTK toolkit is used to obtain the Lin similarity. Other WordNet-based similarities may also be considered.
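A sketch of the structural similarity computation using the NLTK WordNet interface is shown below; selecting the first synset instead of running a WSD tool is a simplification made only for the example, and the required NLTK corpora ('wordnet', 'wordnet_ic') are assumed to be installed.

```python
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

# Information content computed over the Brown corpus.
brown_ic = wordnet_ic.ic('ic-brown.dat')

def lin_score(word_i, word_j):
    """Sketch of the structure-based score: Lin similarity of the linked synsets.
    A real system would use a WSD tool to pick the synsets; the first synset is
    taken here purely for illustration."""
    synsets_i = wn.synsets(word_i)
    synsets_j = wn.synsets(word_j)
    if not synsets_i or not synsets_j:
        return 0.0
    try:
        return synsets_i[0].lin_similarity(synsets_j[0], brown_ic)
    except Exception:   # Lin similarity is only defined for comparable synsets
        return 0.0

print(lin_score('donate', 'provide'))
```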
In some cases, the lengths of input texts are different between document-level event argument extraction and sentence-level event argument extraction. One difference between document-level and sentence-level event argument extraction involves the presence of multiple sentences in document-level event argument extraction, where discourse information (i.e., where sentence boundaries lie and how the sentences relate to each other) is used to understand the input documents. The present disclosure leverages discourse structures to provide complementary information for the syntax-based and semantic-based document structures for event argument extraction. The sentence boundary-based and coreference-based discourse information is used to generate discourse-based document structures for event argument extraction.
In an embodiment, the structure component of the network model is configured to generate a sentence boundary-based document structure as follows:
$A^{sent} = \{a_{ij}^{sent}\}_{i,j=1 \ldots N}$   (5)
$A^{sent}$ relates to the same-sentence information of the words in D. Two words in the same sentence provide more useful information for the representation computation of each other than words in different sentences. The event argument extraction network computes $A^{sent}$ by setting the sentence boundary-based score $a_{ij}^{sent}$ to 1 if wi and wj appear in the same sentence of D, and 0 otherwise.
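For illustration, the sentence boundary-based structure of equation (5) can be sketched as follows, assuming the number of words in each sentence is known:

```python
import numpy as np

def sentence_boundary_structure(sentence_lengths):
    """Sketch of A_sent: score 1 for word pairs in the same sentence, 0 otherwise."""
    # Sentence index for every word, e.g. lengths [3, 2] -> [0, 0, 0, 1, 1].
    sentence_of = np.repeat(np.arange(len(sentence_lengths)), sentence_lengths)
    return (sentence_of[:, None] == sentence_of[None, :]).astype(float)

print(sentence_boundary_structure([3, 2]))
```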
In an embodiment, the structure component of the network model is configured to generate a coreference-based document structure as follows:
$A^{coref} = \{a_{ij}^{coref}\}_{i,j=1 \ldots N}$   (6)
The event argument extraction network uses the relations and connections between the sentences (cross-sentence information) in D rather than considering only within-sentence information. Two sentences in D are considered related if they contain entity mentions that refer to the same entity in D (coreference information). For example, the Stanford CoreNLP Toolkit is used to determine the coreference of entity mentions. Based on such a relation between sentences, two words in D are considered more relevant to each other if they appear in related sentences. Therefore, for the coreference-based structure, $a_{ij}^{coref}$ is set to 1 if wi and wj appear in different but related sentences, and 0 otherwise.
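For illustration, the following sketch computes the coreference-based structure of equation (6) from precomputed coreference chains (e.g., produced by an off-the-shelf resolver); the input format is an assumption made for the example.

```python
import numpy as np

def coreference_structure(sentence_of, coref_chains):
    """Sketch of A_coref: score 1 for word pairs in different but related sentences.

    sentence_of  : sentence index (0..S-1) of every word in the document
    coref_chains : list of chains; each chain lists the word indices of mentions
                   that refer to the same entity (assumed precomputed)."""
    sentence_of = np.asarray(sentence_of)
    n = len(sentence_of)
    num_sentences = int(sentence_of.max()) + 1
    related = np.zeros((num_sentences, num_sentences), dtype=bool)
    for chain in coref_chains:
        chain_sentences = sorted({int(sentence_of[m]) for m in chain})
        for s1 in chain_sentences:
            for s2 in chain_sentences:
                related[s1, s2] = True
    a_coref = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if sentence_of[i] != sentence_of[j] and related[sentence_of[i], sentence_of[j]]:
                a_coref[i, j] = 1.0
    return a_coref
```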
At operation 515, the system generates a relationship representation vector based on the document structures. In some cases, the operations of this step refer to, or may be performed by, a relationship encoder.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include generating a plurality of intermediate structures for each of a plurality of channels of a graph transformer network (GTN), wherein each of the intermediate structures comprises a weighted linear combination of the document structures. Some examples further include generating a hidden vector for each of the channels using a graph convolution network (GCN), wherein the relationship representation vector is generated based on the hidden vector for each of the channels.
In some embodiments, six different document structures are generated for D (i.e., Adep, Acontext, Agloss, Astruct, Asent, and Acoref) as described above. Because the document structures are based on complementary types of information (i.e., structure types), the system combines them to generate richer document structures for event argument extraction. The combination starts from the observation that each importance score aijv in one of the structures Av (v∈V={dep, context, gloss, struct, sent, coref}) considers only the direct interaction between the two involved words wi and wj (i.e., not including any other words) according to one specific information type v. In contrast, each importance score in the combined structures is conditioned on the interactions with other important context words in D (i.e., in addition to the two involved words), where each interaction between a pair of words can use any of the six structure types (multi-hop and heterogeneous-type reasoning). Therefore, graph-based networks (e.g., GTN) are used to enable multi-hop and heterogeneous-type reasoning in the structure combination for event argument extraction.
In some examples, multi-hop reasoning paths of different lengths may be implemented by adding the identity matrix I (of size N×N) to the set of initial document structures A=[Adep, Acontext, Agloss, Astruct, Asent, Acoref, I]=[A1, . . . , A7]. The graph-based network model (e.g., GTN) is organized into C channels for structure combination (motivated by multi-head attention in transformers), where the k-th channel contains M intermediate document structures Q1k, Q2k, . . . , QMk of size N×N. Each intermediate structure Qik is computed as a linear combination of the initial structures using learnable weights αijk as follows:
Qik=Σj=1 . . . 7αijkAj (7)
Due to the linear combination, the interaction scores in Qik are able to reason with any of the six initial structure types in V (although such scores still only capture the direct interactions of the involved words). The intermediate structures Q1k, Q2k, . . . , QMk in each channel k are multiplied to generate the final document structure Qk=Q1k×Q2k× . . . ×QMk of size N×N (for the k-th channel). Matrix multiplication enables the importance score between a pair of words wi and wj in Qk to be conditioned on the multi-hop interactions or reasoning between the two words and other words in D (up to M−1 hops due to the inclusion of I in A). The interactions involved in one importance score in Qk may be based on any of the initial structure types in V (heterogeneous reasoning) due to the flexibility of the intermediate structures Qik.
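A non-limiting sketch of this channel-wise structure combination is shown below; the softmax normalization of the learnable weights is an assumption for numerical stability rather than a required design:

```python
import torch

def gtn_channel_structures(A_stack, alpha):
    """A_stack: (7, N, N) tensor holding [A_dep, ..., A_coref, I];
    alpha: (C, M, 7) learnable weights. Returns the C rich structures Q^k."""
    weights = torch.softmax(alpha, dim=-1)                 # normalize per intermediate structure
    Q = torch.einsum('cmk,kij->cmij', weights, A_stack)    # intermediate structures Q_m^k
    Qk = Q[:, 0]
    for m in range(1, Q.size(1)):                          # Q^k = Q_1^k x ... x Q_M^k
        Qk = torch.bmm(Qk, Q[:, m])
    return Qk                                              # (C, N, N)
```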
The GTN model feeds the rich document structures Q1, Q2, . . . , QC from the C channels into C graph convolutional networks (GCNs) to induce document structure-enriched representation vectors for argument role prediction in event argument extraction (one GCN for each Qk={Qijk}i,j=1 . . . N). Each of these GCN models involves G layers that produce the hidden vectors hik,t for wi at the t-th layer:
hik,t=ReLU(Uk,tΣj=1 . . . NQijkhjk,t−1/Σj=1 . . . NQijk) (8)
where Uk,t is the weight matrix for the t-th layer of the k-th GCN model and the input vectors hik,0 are set to the word representation vectors from the document encoder.
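A sketch of one channel's GCN under the normalized update above is given below; the exact normalization and layer sizes used in a given embodiment may differ:

```python
import torch
import torch.nn as nn

class ChannelGCN(nn.Module):
    """One of the C GCN models operating over the rich structure Q^k."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])

    def forward(self, Qk, H):
        # Qk: (N, N) structure for this channel; H: (N, dim) input word vectors.
        denom = Qk.sum(dim=1, keepdim=True).clamp(min=1e-6)
        for layer in self.layers:
            H = torch.relu(layer(Qk @ H / denom))   # aggregate, transform, ReLU
        return H                                    # hidden vectors at the last (G-th) layer
```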
In a neural network, an activation function may be used to transform the summed weighted inputs of a node into the activation or output of the node. A ReLU layer may implement a rectified linear activation function, which comprises a piecewise linear function that outputs the input directly if it is positive and outputs zero otherwise. A rectified linear activation function may be used as a default activation function for many types of neural networks. Using a rectified linear activation function may enable the use of stochastic gradient descent with backpropagation of errors to train deep neural networks. The rectified linear activation function may operate similarly to a linear function, but it may enable complex relationships in the data to be learned. The rectified linear activation function may also provide more sensitivity to the activation sum input to avoid saturation. A node or unit that implements a rectified linear activation function may be referred to as a rectified linear activation unit, or ReLU for short. Networks that use a rectifier function for hidden layers may be referred to as rectified networks.
Next, the hidden vectors in the last layers of all the GCN models (at the G-th layers) for wi (i.e., hi1,G, hi2,G, . . . , hiC,G) are concatenated to produce the structure-enriched representation vector h′i for wi.
The event argument extraction network assembles a representation vector R based on the hidden vectors for wa and wt from the GTN model as follows:
R=[h′a,h′t,MaxPool(h′1,h′2, . . . ,h′N)] (9)
At operation 520, the system predicts a relationship between the event trigger word and the argument candidate word based on the relationship representation vector. In some cases, the operations of this step refer to, or may be performed by, a decoder.
Some examples of the method, apparatus, and non-transitory computer readable medium described above further include applying a feed-forward neural network to the relationship representation vector. Some examples further include applying a softmax layer to an output of the feed-forward neural network, wherein the relationship is predicted based on the output of the softmax layer.
In an embodiment, the representation vector R is used to predict the argument role for wa and wt in D. The representation vector R is then input to a two-layer feed-forward network with softmax to produce a probability distribution P(.|D, a, t) over the possible argument roles.
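As a non-limiting sketch, the assembly of R in Eq. (9) and the two-layer feed-forward prediction head may look as follows; the hidden size and the number of roles are illustrative values:

```python
import torch
import torch.nn as nn

class RolePredictor(nn.Module):
    """Builds R = [h'_a, h'_t, MaxPool(h'_1, ..., h'_N)] and predicts the role."""
    def __init__(self, dim, hidden=200, num_roles=66):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(3 * dim, hidden), nn.ReLU(), nn.Linear(hidden, num_roles)
        )

    def forward(self, h_prime, a, t):
        pooled = h_prime.max(dim=0).values                       # max-pool over all words
        R = torch.cat([h_prime[a], h_prime[t], pooled], dim=-1)  # Eq. (9)
        return torch.log_softmax(self.ffn(R), dim=-1)            # log P(.|D, a, t)
```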
Training and Evaluation
In accordance with one or more embodiments, the event argument extraction network is trained as described in the following operations.
In some examples, the updated parameters comprise parameters of a position embedding table used for generating the word representation vectors. In some examples, the updated parameters comprise parameters of weight matrices used for generating the semantic structure. In some examples, the updated parameters comprise weights for weighted linear combinations of the document structures used for generating the relationship representation vectors.
One or more embodiments of the present disclosure use supervised training techniques. Supervised learning is one of three basic machine learning paradigms, alongside unsupervised learning and reinforcement learning. Supervised learning is a machine learning technique based on learning a function that maps an input to an output based on example input-output pairs. Supervised learning generates a function for predicting labeled data based on labeled training data consisting of a set of training examples. In some cases, each example is a pair consisting of an input object (typically a vector) and a desired output value (i.e., a single value or an output vector). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. In some cases, the learning results in a function that correctly determines the class labels for unseen instances. In other words, the learning algorithm generalizes from the training data to unseen examples.
Accordingly, during the training process, the parameters and weights of an event argument extraction network are adjusted to increase the accuracy of the result (i.e., by minimizing a loss function which corresponds in some way to the difference between the current result and the target result). The weight of an edge increases or decreases the strength of the signal transmitted between nodes. In some cases, nodes have a threshold below which a signal is not transmitted at all. In some examples, the nodes are aggregated into layers. Different layers perform different transformations on their inputs. The initial layer is known as the input layer and the last layer is known as the output layer. In some cases, signals traverse certain layers multiple times.
At operation 600, the system receives training data including documents including a set of words organized into a set of sentences, the words including an event trigger word and argument words, and the training data further including ground truth relationships between the event trigger word and the argument words. In some cases, the operations of this step refer to, or may be performed by, a training component.
At operation 605, the system generates word representation vectors for the words. In some cases, the operations of this step refer to, or may be performed by, a document encoder.
In some embodiments, document-level event argument extraction is formulated as a multi-class classification task. The input to an event argument extraction network of the system is a document comprising multiple sentences. The document includes an event trigger word (e.g., the t-th word of the document) and an argument candidate word (e.g., the a-th word of the document).
In an embodiment, the event argument extraction network is trained to predict the role of an argument candidate word in an event mention evoked by an event trigger word. In some cases, the role may be None, indicating that the argument candidate word is not a participant in the event mention. The network model includes a document encoder to transform the words in the document into high-dimensional vectors. In some cases, the argument candidate word is not limited to a word but can be a phrase or a sentence from a document.
At operation 610, the system generates a set of document structures including a semantic structure based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information based on the set of sentences. In some cases, the operations of this step refer to, or may be performed by, a structure component.
In some embodiments, the system uses document structures for document-level event argument extraction, a setting related to other NLP tasks that also exploit document structures, such as relation extraction, question answering and generation, text summarization, and document classification. Additionally, the system uses external knowledge to generate the structures (e.g., based on WordNet) and combines multiple structures for multi-hop heterogeneous reasoning using a GTN. The system combines multiple document structures to learn representation vectors for document-level event argument extraction.
At operation 615, the system generates relationship representation vectors based on the document structures. In some cases, the operations of this step refer to, or may be performed by, a relationship encoder.
In some embodiments, six different document structures are generated for the documents. Because the document structures are based on complementary types of information (i.e., structure types), the system combines them to generate richer document structures for event argument extraction. The combination starts from the observation that each importance score aijv in one of the structures Av (v∈V={dep, context, gloss, struct, sent, coref}) considers only the direct interaction between the two involved words wi and wj (i.e., not including any other words) according to one specific information type v. In contrast, each importance score in the combined structures is conditioned on the interactions with other important context words in a document (i.e., in addition to the two involved words), where each interaction between a pair of words can use any of the six structure types (multi-hop and heterogeneous-type reasoning). Therefore, graph-based networks (e.g., GTN) are used to enable multi-hop and heterogeneous-type reasoning in the structure combination for event argument extraction.
At operation 620, the system predicts relationships between the event trigger word and the argument candidate words based on the relationship representation vectors. In some cases, the operations of this step refer to, or may be performed by, a decoder.
At operation 625, the system computes a loss function by comparing the predicted relationships to the ground truth relationships. In some cases, the operations of this step refer to, or may be performed by, a training component.
In some examples, a supervised training model may be used that includes a loss function that compares predictions of the event argument extraction network with ground truth training data. The term loss function refers to a function that impacts how a machine learning model is trained in a supervised learning model. Specifically, during each training iteration, the output of the model is compared to the known annotation information in the training data. The loss function provides a value for how close the predicted annotation data is to the actual annotation data. After computing the loss function, the parameters of the model are updated accordingly, and a new set of predictions are made during the next iteration.
In some embodiments, the negative log-likelihood (i.e., Lpred) is optimized to train the network model as follows:
Lpred=−log P(y|D,a,t) (10)
where y is the ground-truth argument role for the input example. The event argument extraction network thus performs multi-hop reasoning for event argument extraction over heterogeneous document structure types.
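A minimal sketch of one training update with the loss in Eq. (10) is shown below; the batch field names for the enriched word vectors, the argument/trigger indices, and the gold role are assumptions:

```python
import torch.nn.functional as F

def training_step(model, optimizer, batch):
    """One gradient update minimizing L_pred = -log P(y | D, a, t)."""
    log_probs = model(batch['h_prime'], batch['a'], batch['t'])      # log P(.|D, a, t)
    loss = F.nll_loss(log_probs.unsqueeze(0), batch['role'].view(1)) # negative log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```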
At operation 630, the system updates parameters of a neural network based on the loss function. In some cases, the operations of this step refer to, or may be performed by, a training component.
Evaluation of document-level event argument extraction models is conducted on the RAMS dataset (i.e., Roles Across Multiple Sentences). In some examples, the RAMS dataset includes 9,124 annotated event mentions across 139 event types for 65 argument roles, which makes it the largest available dataset for document-level event argument extraction. The standard train/dev/test split and the evaluation scripts for RAMS are used herein for comparison. Additionally, the event argument extraction network is evaluated on the BNB dataset for implicit semantic role labelling (iSRL), a closely related task to document-level event argument extraction where models predict the roles of argument candidates for a given predicate (the arguments and predicates can appear in different sentences in iSRL).
One or more embodiments of the present disclosure use a customized dataset based on the existing BNB dataset (with the same data split and pre-processing script) for comparison. In some examples, the dataset annotates 2,603 argument mentions for 12 argument roles (for 1,247 predicates or triggers). The development set of the RAMS dataset is used to fine-tune the hyper-parameters of the network model. After the fine-tuning process, the hyper-parameters used (including for the BNB dataset) are 1e-5 for the learning rate of the Adam optimizer, 32 for the mini-batch size, 30 dimensions for the position embeddings, 200 hidden units for the feed-forward network, the bidirectional LSTM (Bi-LSTM), and the GCN layers, 2 layers for the Bi-LSTM and GCN models (G=2), and C=3 channels for the GTN with M=3 intermediate structures in each channel. One embodiment of the present disclosure considers GloVe (of 300 dimensions) and BERTbase (of 768 dimensions) for the pre-trained word embeddings (updated during training).
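The reported hyper-parameters may be collected into a configuration sketch such as the following; the key names are illustrative rather than tied to any particular implementation:

```python
CONFIG = {
    'learning_rate': 1e-5,          # Adam optimizer
    'batch_size': 32,
    'position_embedding_dim': 30,
    'hidden_units': 200,            # feed-forward, Bi-LSTM, and GCN layers
    'num_layers_G': 2,              # Bi-LSTM and GCN depth
    'gtn_channels_C': 3,
    'intermediate_structures_M': 3,
    'word_embeddings': ['GloVe-300d', 'BERT-base-768d'],
}
```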
The event argument extraction network is compared with structure-free and structure-based baselines on the RAMS dataset. The structure-free baselines do not exploit document structures for event argument extraction. In some examples, the event argument extraction network is compared with a document-level argument linking model (i.e., RAMSmodel) and a head-word based approach (i.e., the Head-based model). The RAMSmodel performs well for document-level event argument extraction on the RAMS dataset.
A structure-based baseline employs document structures to learn representation vectors for input documents. However, existing technology fails to use document structure-based models for document-level event argument extraction. The event argument extraction network is therefore compared with document structure-based models for the related task of document-level relation extraction (DRE) in information extraction. The structure-based baselines include the following four neural network models for DRE. In some examples, an inter-sentential dependency-based neural network (iDepNN) uses the dependency trees of sentences to build document structures.
A graph convolutional neural network (GCNN) baseline generates document structures based on both syntax and discourse information (e.g., dependency trees, coreference links). GCNN may consider more than one source of information for document structures, but it fails to exploit semantic-based document structures (for contextual and knowledge-based semantics) and lacks an effective mechanism for structure combination (i.e., it does not use a GTN). In a latent structure refinement (LSR) baseline, document structures are inferred from the representation vectors of the words (based exclusively on semantic information). A graph-based neural network for relation extraction with edge-oriented graphs (EoG) generates edge representations based on document structures. However, EoG considers only syntax and discourse information for document structures and, similar to GCNN, fails to exploit semantic-based document structures. Some embodiments of the present disclosure adapt these baselines to the datasets for document-level event argument extraction.
The present disclosure considers standard decoding, i.e., using argmax over P(.|D, a, t) to obtain the predicted roles, and a type-constrained decoding setting where the model predictions are constrained to the permissible roles for the event type e evoked by the trigger word wt. The role r* with the highest probability under P(.|D, a, t) is selected as the predicted role. Model performance on the RAMS test set using BERT and GloVe embeddings is recorded and evaluated. The event argument extraction network outperforms all the baselines in both the standard and type-constrained decoding settings regardless of the embeddings used (e.g., BERT or GloVe). The performance improvement with p<0.01 demonstrates the effectiveness of the network model for document-level event argument extraction. The structure-based models (e.g., GCNN, LSR, and EoG) outperform the structure-free baselines, in part because the structure-based models employ document structures for document-level event argument extraction. The event argument extraction network achieves further improvements because it includes contextual and knowledge-based structures with multi-hop heterogeneous reasoning for event argument extraction tasks.
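A minimal sketch of the two decoding settings is shown below, where permissible is an assumed boolean mask over roles allowed for the evoked event type:

```python
import torch

def decode_role(log_probs, permissible=None):
    """Standard decoding (argmax over all roles) or type-constrained decoding
    (argmax over the permissible roles for the evoked event type)."""
    if permissible is not None:
        log_probs = log_probs.masked_fill(~permissible, float('-inf'))
    return int(torch.argmax(log_probs))   # index of the predicted role r*
```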
The performance of the event argument extraction network is also evaluated on the BNB dataset for iSRL. As this dataset involves a different train/dev/test split from the original BNB dataset, RAMSmodel is used as the baseline. The performance of the structure-based baselines (i.e., iDepNN, GCNN, LSR, and EoG) is also recorded and evaluated. Performance of the models on the BNB test set (using BERT embeddings) is recorded and evaluated. The event argument extraction network achieves higher performance than all the baseline models (p<0.01).
The description and drawings described herein represent example configurations and do not represent all the implementations within the scope of the claims. For example, the operations and steps may be rearranged, combined or otherwise modified. Also, structures and devices may be represented in the form of block diagrams to represent the relationship between components and avoid obscuring the described concepts. Similar components or features may have the same name but may have different reference numbers corresponding to different figures.
Some modifications to the disclosure may be readily apparent to those skilled in the art, and the principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
In this disclosure and the following claims, the word “or” indicates an inclusive list such that, for example, the list of X, Y, or Z means X or Y or Z or XY or XZ or YZ or XYZ. Also the phrase “based on” is not used to represent a closed set of conditions. For example, a step that is described as “based on condition A” may be based on both condition A and condition B. In other words, the phrase “based on” shall be construed to mean “based at least in part on.” Also, the words “a” or “an” indicate “at least one.”
Claims
1. A method for natural language processing, comprising:
- receiving a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word;
- generating word representation vectors for the words;
- generating a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences;
- generating a relationship representation vector based on the document structures; and
- predicting a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
2. The method of claim 1, further comprising:
- encoding the words of the document to produce individual word embeddings; and
- identifying relative position of word pairs of the document to produce position embeddings, wherein the word representation vectors are based on the word embeddings and the position embeddings.
3. The method of claim 1, further comprising:
- applying a long short-term memory (LSTM) to produce word embeddings and position embeddings; and
- extracting a hidden vector from the LSTM to produce the word representation vectors.
4. The method of claim 1, further comprising:
- receiving dependency tree information for the sentences of the document;
- connecting the dependency tree information for the sentences to create a combined dependency tree; and
- identifying a path between a pair of words of the combined dependency tree, wherein the syntax structure is generated based on the path.
5. The method of claim 1, further comprising:
- applying a weighted matrix to each word representation vector of a pair of the word representation vectors to produce weighted word representation vectors; and
- computing a semantic similarity score for the pair of the word representation vectors based on the weighted word representation vectors, wherein the semantic structure is based on the semantic similarity score.
6. The method of claim 1, further comprising:
- mapping the words to corresponding nodes of a lexical database;
- identifying a glossary for each of the corresponding nodes;
- identifying node embeddings based on the glossary; and
- computing a glossary similarity score for a pair of the node embeddings, wherein the semantic structure is based on the glossary similarity score.
7. The method of claim 1, further comprising:
- mapping the words to corresponding nodes of a lexical database;
- computing information content of the corresponding nodes, and a least common subsumer for a pair of the corresponding nodes; and
- computing a structural similarity score for the pair of the corresponding nodes, wherein the semantic structure is based on the structural similarity score.
8. The method of claim 1, further comprising:
- computing a boundary score for a pair of words based on whether the pair of words are both located in a same sentence, wherein the discourse structure is based on the boundary score.
9. The method of claim 1, further comprising:
- identifying coreference information for words of the document; and
- computing a coreference score for a pair of words based on whether the pair of words are located in sentences containing entity mentions that refer to a same entity of the coreference information, wherein the discourse structure is based on the coreference score.
10. The method of claim 1, further comprising:
- generating a plurality of intermediate structures for each of a plurality of channels of a graph transformer network (GTN), wherein each of the intermediate structures comprises a weighted linear combination of the document structures; and
- generating a hidden vector for each of the channels using a graph convolution network (GCN), wherein the relationship representation vector is generated based on the hidden vector for each of the channels.
11. The method of claim 1, further comprising:
- applying a feed-forward neural network to the relationship representation vector; and
- applying a softmax layer to an output of the feed-forward neural network, wherein the relationship is predicted based on the output of the softmax layer.
12. An apparatus for natural language processing, comprising:
- a document encoder configured to generate word representation vectors for words of a document organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word;
- a structure component configured to generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences;
- a relationship encoder configured to generate a relationship representation vector based on the document structures; and
- a decoder configured to predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
13. The apparatus of claim 12, wherein:
- the document encoder comprises a word encoder, a position embedding table, and a long short-term memory (LSTM).
14. The apparatus of claim 12, wherein:
- the structure component comprises a lexical database, a dependency parser, and a coreference network.
15. The apparatus of claim 12, wherein:
- the relationship encoder comprises a graph transformer network (GTN) and a graph convolution network (GCN).
16. The apparatus of claim 12, wherein:
- the decoder comprises a feed-forward layer and a softmax layer.
17. A method for training a neural network, comprising:
- receiving training data including documents comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and argument words, and the training data further including ground truth relationships between the event trigger word and the argument words;
- generating word representation vectors for the words;
- generating a plurality of document structures including a semantic structure based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information based on the plurality of sentences;
- generating relationship representation vectors based on the document structures;
- predicting relationships between the event trigger word and the argument candidate words based on the relationship representation vectors;
- computing a loss function by comparing the predicted relationships to the ground truth relationships; and
- updating parameters of a neural network based on the loss function.
18. The method of claim 17, wherein:
- the updated parameters comprise parameters of a position embedding table used for generating the word representation vectors.
19. The method of claim 17, wherein:
- the updated parameters comprise parameters of weight matrices used for generating the semantic structure.
20. The method of claim 17, wherein:
- the updated parameters comprise weights for weighted linear combinations of the document structures used for generating the relationship representation vectors.
Type: Application
Filed: Apr 6, 2021
Publication Date: Oct 6, 2022
Patent Grant number: 11893345
Inventors: Amir Pouran Ben Veyseh (Eugene, OR), Franck Dernoncourt (San Jose, CA), Quan Tran (San Jose, CA), Varun Manjunatha (Newton, MA), Lidan Wang (San Jose, CA), Rajiv Jain (Vienna, VA), Doo Soon Kim (San Jose, CA), Walter Chang (San Jose, CA)
Application Number: 17/223,166