Patents by Inventor Carolin Lawrence

Carolin Lawrence has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240028631
    Abstract: A method of performing a semantic textual similarity search between a target document and a set of source documents includes: selecting target textual data from the target document; identifying and classifying named entities for each textual sequence of the target textual data; generating a semantic textual similarity model and using the semantic textual similarity model to generate textual embeddings of the identified named entities for each textual sequence of the target textual data; aggregating the textual embeddings with the structural features of the target textual data to generate a target document embedding; searching, by a similarity estimator, for similar documents by measuring similarities of the target document embedding with embeddings of each source document of the set of source documents; and computing explanatory information about the degree of similarity between the target document and any document of the set of source documents.
    Type: Application
    Filed: October 5, 2021
    Publication date: January 25, 2024
    Inventors: Na Gong, Carolin Lawrence, Timo Sztyler
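The embed-aggregate-compare pipeline in the abstract above can be sketched in a few lines. This is an illustrative stand-in only: the hashed bag-of-words vectors and mean pooling below are hypothetical substitutes for the patent's learned entity embeddings and structural features, and all function names are invented for this sketch.

```python
import math


def embed(text, dim=64):
    """Toy stand-in for a learned entity embedding: hashed bag of words."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec


def aggregate(entity_embeddings):
    """Aggregate per-entity embeddings into one document embedding (mean pooling)."""
    dim = len(entity_embeddings[0])
    n = len(entity_embeddings)
    return [sum(v[i] for v in entity_embeddings) / n for i in range(dim)]


def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def rank_sources(target_entities, source_docs):
    """Similarity estimator: score every source document against the target
    document embedding and return them ranked, most similar first."""
    target_vec = aggregate([embed(e) for e in target_entities])
    scores = {name: cosine(target_vec, aggregate([embed(e) for e in ents]))
              for name, ents in source_docs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A source document sharing the target's entities scores near 1.0 and ranks first; the per-document scores could also feed the abstract's explanatory-information step.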
  • Publication number: 20230385281
    Abstract: A memory for storing data for access by an application program includes an encoded sequential query representing m×n path queries of a query graph having a plurality of elements including m root nodes and n leaf nodes, and a missing element, wherein m and n are integer values. Each of the path queries includes a root node as a start position and a leaf node as an end position, wherein positional encodings of the elements include a positional order within each path query, wherein the positional encodings of the elements include counter values that are reset at a start position of each of the path queries, and wherein the missing element is masked. The invention can be used for, but is not limited to, medical uses, such as for predicting protein-drug interactions or medical conditions, as well as other applications, and significantly outperforms the optimized model.
    Type: Application
    Filed: July 20, 2023
    Publication date: November 30, 2023
    Inventors: Bhushan Kotnis, Mathias Niepert, Carolin Lawrence
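The encoding described in this abstract (and in the related publications 20230359629 and 20230359630 below) can be illustrated with a short sketch: positional counters restart at the root of every path query, so tokens are ordered within a path but carry no ordering across paths, and missing elements are masked. The token and function names are hypothetical; the real method feeds the resulting sequence to a transformer encoder.

```python
MASK = "[MASK]"


def encode_query_graph(path_queries, missing=frozenset()):
    """Serialize m x n path queries into one token sequence with
    per-path positional counters and masked missing elements."""
    tokens, positions = [], []
    for path in path_queries:                 # each path: root, edge, ..., leaf
        for pos, element in enumerate(path):  # counter resets per path query
            tokens.append(MASK if element in missing else element)
            positions.append(pos)
    return tokens, positions
```

For a drug-interaction-style query with an unknown target `?x`, both paths receive identical position sequences, which is what leaves the paths mutually unordered.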
  • Publication number: 20230359629
    Abstract: A method for encoding a query graph into a sequence representation includes receiving m×n path queries representing a query graph having m root nodes and n leaf nodes, each path query beginning with a root node and ending with a leaf node, and encoding positions of each node and each edge between nodes within each path query independently, wherein the encoded positions include a positional order within each path query. The positional encodings may include no positional ordering between different path queries. The query graph may include one or more missing entities and each missing entity may be masked to produce a masked sequential query, which may be fed to a transformer encoder. The invention can be used for, but is not limited to, medical uses, such as for predicting protein-drug interactions or medical conditions, as well as other applications, and significantly outperforms the optimized model.
    Type: Application
    Filed: July 20, 2023
    Publication date: November 9, 2023
    Inventors: Bhushan Kotnis, Mathias Niepert, Carolin Lawrence
  • Publication number: 20230359630
    Abstract: A method of encoding a query graph includes receiving m×n path queries representing the query graph, each starting with a root node and ending with a leaf node, the query graph including one or more missing nodes and/or missing edges between nodes. Positions of each node and edge are encoded within each of the path queries independently, wherein the encoded positions include a positional order within each path query, and wherein the encoded positions include positional counter values that are reset at a start position of each of the path queries. Each of the missing nodes and/or missing edges are masked to produce a masked query, which is fed to a transformer encoder. The invention can be used for, but is not limited to, medical uses, such as for predicting protein-drug interactions or medical conditions, as well as other applications, and significantly outperforms the optimized model.
    Type: Application
    Filed: July 20, 2023
    Publication date: November 9, 2023
    Inventors: Bhushan Kotnis, Mathias Niepert, Carolin Lawrence
  • Patent number: 11748356
    Abstract: A method for encoding a query graph into a sequence representation includes receiving a set of m×n path queries representing a query graph, the query graph having a plurality of nodes including m root nodes and n leaf nodes, wherein m and n are integer values, each of the m×n path queries beginning with a root node and ending with a leaf node, and encoding positions of each node and each edge between nodes within each of the m×n path queries independently, wherein the encoded positions include a positional order within each path query. The positional encodings may include no positional ordering between different path queries. The query graph may include one or more missing entities and each token representing one of the one or more missing entities may be masked to produce a masked sequential query, which may be fed to a transformer encoder.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: September 5, 2023
    Assignee: NEC CORPORATION
    Inventors: Bhushan Kotnis, Mathias Niepert, Carolin Lawrence
  • Patent number: 11741318
    Abstract: A method is provided for extracting machine readable data structures from unstructured, low-resource language input text. The method includes obtaining a corpus of high-resource language data structures, filtering the corpus of high-resource language data structures to obtain a filtered corpus of high-resource language data structures, obtaining entity types for each entity of each filtered high-resource language data structure, performing type substitution for each obtained entity by replacing each entity with an entity of the same type to generate type-substituted data structures, and replacing each entity with an equivalent entity from a corresponding low-resource language data structure to generate code-switched sentences. The method further includes generating an augmented data structure corpus, training a multi-head self-attention transformer model, and providing the unstructured low-resource language input text to the trained model to extract the machine readable data structures.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: August 29, 2023
    Assignee: NEC CORPORATION
    Inventors: Bhushan Kotnis, Kiril Gashteovski, Carolin Lawrence
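The type-substitution and code-switching step of this abstract (shared with publication 20220309230 below) can be sketched as follows. The entity tags, lexicon, and example data are all invented for illustration; the patented method operates on full data structures and trains a transformer on the augmented corpus.

```python
import random


def code_switch(sentence, entity_types, lexicon, rng=None):
    """Replace each tagged entity with a same-type entity drawn from a
    low-resource-language lexicon, yielding a code-switched sentence.

    `entity_types` maps surface form -> entity type; `lexicon` maps
    entity type -> candidate low-resource-language entities of that type.
    """
    rng = rng or random.Random(0)  # seeded for reproducible augmentation
    out = []
    for token in sentence.split():
        etype = entity_types.get(token)
        if etype and etype in lexicon:
            out.append(rng.choice(lexicon[etype]))  # same-type substitution
        else:
            out.append(token)
    return " ".join(out)
```

Running this over a high-resource corpus with varied lexicons produces the augmented, code-switched training sentences the abstract describes.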
  • Publication number: 20230267338
    Abstract: A method for automated decision making in an artificial intelligence task by fact-relevant open information extraction and knowledge graph generation includes obtaining a keyword query for performing the fact-relevant open information extraction and expanding the keyword query using keyword alias and query generation. The fact-relevant open information extraction is performed to extract triples from a text which contains the keyword or the keyword alias. The knowledge graph is generated using the extracted triples and an open knowledge graph (OpenKG) extractor that has been trained using keywords and aliases. Supervised or unsupervised classification is performed using the generated knowledge graph to make the automated decision in the artificial intelligence task.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 24, 2023
    Inventors: Bhushan Kotnis, Kiril Gashteovski, Carolin Lawrence
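The keyword-expansion and fact-relevant extraction pipeline above can be sketched in miniature. The triple filter below is a hypothetical stand-in for a trained OpenKG extractor, and all names and data are illustrative.

```python
def expand_keywords(keyword, aliases):
    """Expand a keyword query with its known aliases."""
    return {keyword, *aliases.get(keyword, ())}


def extract_triples(candidate_triples, keywords):
    """Stand-in for fact-relevant open information extraction: keep
    (subject, relation, object) triples that mention a keyword or alias."""
    return [(s, r, o) for s, r, o in candidate_triples if {s, o} & keywords]


def build_graph(triples):
    """Knowledge graph as adjacency: subject -> list of (relation, object)."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((rel, obj))
    return graph
```

The resulting graph would then feed the supervised or unsupervised classifier that makes the automated decision.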
  • Publication number: 20230259790
    Abstract: A method of consolidating recommendations based on individual recommendations includes receiving a knowledge graph including source entities, target entities and attribute entities. Each source entity and each target entity is linked to one or more of the attribute entities. Using a trained prediction learning model, a prediction is determined for each source entity based on the knowledge graph. The trained prediction learning model was trained using prediction training data including historical data. The prediction for each source entity includes recommendation data identifying one or multiple target entities. Using a trained consolidation learning model, a consolidated prediction is determined for the source entities based on the prediction for each source entity. The trained consolidation learning model was trained using consolidation training data including the historical data and the recommendation data. The consolidated prediction identifies a target entity that maximizes a joint probability of the source entities.
    Type: Application
    Filed: April 22, 2022
    Publication date: August 17, 2023
    Inventors: Timo Sztyler, Carolin Lawrence
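The final consolidation step above, picking the target that maximizes the joint probability across source entities, can be sketched directly. This replaces the patent's learned consolidation model with a plain product of per-source probabilities, so it is a simplification, not the patented method.

```python
from math import prod


def consolidate(per_source_predictions):
    """Pick the single target entity maximizing the joint probability across
    all source entities' individual recommendation distributions.

    `per_source_predictions` maps source entity -> {target entity: probability}.
    """
    targets = set().union(*(p.keys() for p in per_source_predictions.values()))
    def joint(target):
        # assumes independence across sources; a missing target scores 0
        return prod(p.get(target, 0.0) for p in per_source_predictions.values())
    return max(targets, key=joint)
```

With two sources favoring different targets individually, the joint product can still single out the target that is acceptable to both.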
  • Publication number: 20230142625
    Abstract: A method of providing personalized guideline information for a user in a predetermined domain, in which a set of personality types is defined for users of said predetermined domain, includes: determining, by a personality type recognizer, a personality type for a user in order to assign the personality type to said user; selecting a personality-typed machine learning model from a model pool of personality-typed machine learning models based on the personality type of said user, where the selected personality-typed machine learning model is used to initialize an individual personalized machine learning model of said user; and generating, by the individual personalized machine learning model of said user, a recommendation prediction, which is presented as guideline information to said user.
    Type: Application
    Filed: June 2, 2020
    Publication date: May 11, 2023
    Inventors: Carolin LAWRENCE, Timo SZTYLER
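The recognize-then-initialize flow above can be sketched with toy components. The nearest-prototype recognizer and parameter-dict "models" below are hypothetical placeholders for the patent's personality type recognizer and model pool.

```python
def recognize_personality(answers, prototypes):
    """Toy personality-type recognizer: pick the type whose prototype
    answer profile is closest (squared distance) to the user's answers."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(answers, proto))
    return min(prototypes, key=lambda t: dist(prototypes[t]))


def personalize(user_answers, prototypes, model_pool):
    """Initialize the user's individual model from the pool entry matching
    their recognized personality type (models here are parameter dicts)."""
    ptype = recognize_personality(user_answers, prototypes)
    personal_model = dict(model_pool[ptype])  # copy: later fine-tuned per user
    return ptype, personal_model
```

Copying the pooled model rather than sharing it mirrors the abstract's point that the typed model only *initializes* the individual personalized model.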
  • Publication number: 20220366274
    Abstract: A method for operating a neural link prediction model includes: training a neural link predictor of the neural link prediction model using a knowledge base of facts; estimating an influence of at least one fact, provided to the neural link predictor, on a behavior or prediction of the neural link predictor; and collecting and storing, within a memory, the influence of the at least one fact on at least one parameter of the neural link predictor during training. The influence is expressed by a change in a value of the at least one parameter.
    Type: Application
    Filed: August 4, 2020
    Publication date: November 17, 2022
    Inventors: Carolin LAWRENCE, Timo SZTYLER, Mathias NIEPERT
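The idea of expressing a fact's influence as the parameter change it induces during training can be sketched with a toy gradient-descent loop. The scalar parameters and `grad_fn` interface are invented for illustration; the patented method concerns a neural link predictor trained on a knowledge base.

```python
def train_and_track(params, facts, grad_fn, lr=0.1, epochs=2):
    """Train toy scalar parameters while recording, per fact, the total
    change it induced in each parameter; the accumulated change serves as
    that fact's influence estimate.

    `grad_fn(params, fact)` returns a dict of gradients for one fact.
    """
    influence = {fact: {name: 0.0 for name in params} for fact in facts}
    for _ in range(epochs):
        for fact in facts:
            grads = grad_fn(params, fact)
            for name, g in grads.items():
                delta = -lr * g            # gradient-descent update
                params[name] += delta
                influence[fact][name] += delta  # store per-fact influence
    return params, influence
```

A fact that never moves a parameter accrues zero influence on it, while a fact that repeatedly updates a parameter accumulates the full change, which is the quantity the abstract stores in memory.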
  • Publication number: 20220309254
    Abstract: A method is provided for extracting machine readable data structures from unstructured, low-resource language input text. The method includes obtaining a corpus of high-resource language data structures, filtering the corpus of high-resource language data structures to obtain a filtered corpus of high-resource language data structures, obtaining entity types for each entity of each filtered high-resource language data structure, performing type substitution for each obtained entity by replacing each entity with an entity of the same type to generate type-substituted data structures, and replacing each entity with an equivalent entity from a corresponding low-resource language data structure to generate code-switched sentences. The method further includes generating an augmented data structure corpus, training a multi-head self-attention transformer model, and providing the unstructured low-resource language input text to the trained model to extract the machine readable data structures.
    Type: Application
    Filed: June 9, 2021
    Publication date: September 29, 2022
    Inventors: Bhushan Kotnis, Kiril Gashteovski, Carolin Lawrence
  • Publication number: 20220309230
    Abstract: A method for transforming an input sequence into an output sequence includes obtaining a data set of interest, the data set including input sequences and output sequences, wherein each of the sequences is decomposable into tokens. At a prediction time, the input sequence is concatenated with a sequence of placeholder tokens of a configured maximum length to generate a concatenated sequence. The concatenated sequence is provided as input to a transformer encoder that is learnt at a training time. A prediction strategy is applied to replace the placeholder tokens with real output tokens. The real output tokens are provided as the output sequence.
    Type: Application
    Filed: September 9, 2019
    Publication date: September 29, 2022
    Inventors: Carolin LAWRENCE, Bhushan KOTNIS, Mathias NIEPERT
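The placeholder-token prediction strategy above can be sketched as follows. The `fill_fn` callback stands in for the learned transformer encoder, and the placeholder token is invented for this sketch.

```python
PAD = "[P]"  # hypothetical placeholder token


def predict_with_placeholders(input_tokens, max_out_len, fill_fn):
    """Concatenate the input with placeholder tokens up to a configured
    maximum length, run the (stand-in) encoder once over the concatenated
    sequence, then read the output tokens out of the placeholder slots."""
    concatenated = input_tokens + [PAD] * max_out_len
    filled = fill_fn(concatenated)          # encoder predicts every position
    out = filled[len(input_tokens):]        # keep only the placeholder slots
    return [t for t in out if t != PAD]     # drop unused placeholders
```

Because the encoder sees input and placeholders in one pass, the output length only needs to be bounded by the configured maximum, not known in advance.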
  • Publication number: 20220147819
    Abstract: Iterative artificial-intelligence (AI)-based prediction methods and systems are provided. The method may include receiving a dataset of knowledge, processing the dataset of knowledge to produce one or more predictions, one or more explanations corresponding to the one or more predictions, and one or more output options, selecting, using an AI algorithm, an output option from the one or more output options, and presenting the selected output option to a user, the selected output option including a prediction and an explanation of the prediction.
    Type: Application
    Filed: January 21, 2021
    Publication date: May 12, 2022
    Inventors: Carolin Lawrence, Timo Sztyler
  • Publication number: 20210173841
    Abstract: A method for encoding a query graph into a sequence representation includes receiving a set of m×n path queries representing a query graph, the query graph having a plurality of nodes including m root nodes and n leaf nodes, wherein m and n are integer values, each of the m×n path queries beginning with a root node and ending with a leaf node, and encoding positions of each node and each edge between nodes within each of the m×n path queries independently, wherein the encoded positions include a positional order within each path query. The positional encodings may include no positional ordering between different path queries. The query graph may include one or more missing entities and each token representing one of the one or more missing entities may be masked to produce a masked sequential query, which may be fed to a transformer encoder.
    Type: Application
    Filed: April 24, 2020
    Publication date: June 10, 2021
    Inventors: Bhushan Kotnis, Mathias Niepert, Carolin Lawrence