Patents by Inventor Kenyu Kobayashi

Kenyu Kobayashi has filed for patents on the inventions listed below. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240070156
Abstract: Techniques for propagating scores in subgraphs are provided. In one technique, multiple path scores are stored, each path score associated with a path (or subgraph), of multiple paths, in a graph of nodes. The path scores may be generated by a machine-learned model. For each path score, a path that is associated with that path score is identified and nodes of that path are identified. For each identified node, a node score for that node is computed based on the corresponding path score and the node score is stored in association with that node. Subsequently, for each node in a subset of the graph, multiple node scores that are associated with that node are identified and aggregated to generate a propagated score for that node. In a related technique, a propagated score of a node is used to compute a score for each leaf node of the node.
    Type: Application
    Filed: August 23, 2022
    Publication date: February 29, 2024
    Inventors: Kenyu Kobayashi, Arno Schneuwly, Renata Khasanova, Matteo Casserini, Felix Schmidt
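The propagation step described in this abstract can be illustrated with a minimal sketch. Note that the `propagate_scores` helper, the dictionary-based path encoding, and the use of an arithmetic mean as the aggregation function are illustrative assumptions, not the patented method itself.

```python
from collections import defaultdict

def propagate_scores(path_scores, paths):
    """Derive node scores from path scores, then aggregate per node.

    path_scores: {path_id: score} (e.g. produced by a machine-learned model)
    paths:       {path_id: [node, ...]} listing the nodes on each path
    """
    node_scores = defaultdict(list)
    for path_id, score in path_scores.items():
        # Each node on the path receives a node score based on the path score.
        for node in paths[path_id]:
            node_scores[node].append(score)
    # Aggregate (here: arithmetic mean) into a propagated score per node.
    return {n: sum(s) / len(s) for n, s in node_scores.items()}

paths = {"p1": ["a", "b"], "p2": ["b", "c"]}
path_scores = {"p1": 0.8, "p2": 0.4}
propagated = propagate_scores(path_scores, paths)  # node "b" lies on both paths
```

Node "b" appears on both paths, so its propagated score aggregates both path scores, while "a" and "c" each inherit a single path score.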
  • Publication number: 20240061997
    Abstract: Herein is a machine learning (ML) explainability (MLX) approach in which a natural language explanation is generated based on analysis of a parse tree such as for a suspicious database query or web browser JavaScript. In an embodiment, a computer selects, based on a respective relevance score for each non-leaf node in a parse tree of a statement, a relevant subset of non-leaf nodes. The non-leaf nodes are grouped in the parse tree into groups that represent respective portions of the statement. Based on a relevant subset of the groups that contain at least one non-leaf node in the relevant subset of non-leaf nodes, a natural language explanation of why the statement is anomalous is generated.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 22, 2024
    Inventors: Kenyu Kobayashi, Arno Schneuwly, Renata Khasanova, Matteo Casserini, Felix Schmidt
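The selection-and-grouping flow in this abstract can be sketched as follows. The `explain` function, the flat `(group_label, relevance)` encoding of non-leaf parse-tree nodes, the fixed threshold, and the sentence template are all hypothetical simplifications of the described approach.

```python
def explain(nodes, threshold=0.5):
    """Select relevant non-leaf parse-tree nodes and emit a templated
    natural-language explanation of why the statement is anomalous.

    nodes: list of (group_label, relevance_score) pairs, where the group
    label names the portion of the statement the node belongs to.
    """
    # Keep only groups that contain at least one sufficiently relevant node.
    relevant_groups = sorted({g for g, r in nodes if r >= threshold})
    if not relevant_groups:
        return "The statement does not appear anomalous."
    return ("The statement is anomalous because of its "
            + " and ".join(relevant_groups) + ".")

# Two relevant nodes fall in the same group, so the explanation names it once.
nodes = [("WHERE clause", 0.9), ("SELECT list", 0.2), ("WHERE clause", 0.7)]
explanation = explain(nodes)
```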
  • Publication number: 20240037383
    Abstract: Herein are machine learning (ML) explainability (MLX) techniques for calculating and using a novel fidelity metric for assessing and comparing explainers that are based on feature attribution. In an embodiment, a computer generates many anomalous tuples from many non-anomalous tuples. Each anomalous tuple contains a perturbed value of a respective perturbed feature. For each anomalous tuple, a respective explanation is generated that identifies a respective identified feature as a cause of the anomalous tuple being anomalous. A fidelity metric is calculated by counting correct explanations for the anomalous tuples whose identified feature is the perturbed feature. Tuples may represent entries in an activity log such as structured query language (SQL) statements in a console output log of a database server. This approach herein may gauge the quality of a set of MLX explanations for why log entries or network packets are characterized as anomalous by an intrusion detector or other anomaly detector.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Inventors: Kenyu Kobayashi, Arno Schneuwly, Renata Khasanova, Matteo Casserini, Felix Schmidt
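The counting logic behind the fidelity metric can be sketched as below. The `fidelity` function signature, the callback-based `perturb` and `explain` interfaces, and the toy tag-reading explainer are illustrative assumptions; the patent's actual metric is defined over real explainers and anomaly detectors.

```python
import random

def fidelity(tuples, features, perturb, explain):
    """Fraction of explanations that blame the feature that was actually perturbed."""
    correct = 0
    for t in tuples:
        f = random.choice(features)   # feature whose value we perturb
        anomalous = perturb(t, f)     # anomalous tuple from a non-anomalous one
        if explain(anomalous) == f:   # explanation's identified feature
            correct += 1
    return correct / len(tuples)

# Toy demo with a perfect explainer: the perturbation tags the tuple with the
# perturbed feature and the explainer reads the tag back, so every explanation
# is correct and the fidelity is 1.0 by construction.
tuples = [{"x": 0.0, "y": 0.0} for _ in range(4)]
perturb = lambda t, f: {**t, "_perturbed": f}
explain = lambda t: t["_perturbed"]
score = fidelity(tuples, ["x", "y"], perturb, explain)
```

A real evaluation would plug in an actual anomaly detector and attribution-based explainer; a fidelity below 1.0 then quantifies how often the explainer misattributes the injected anomaly.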
  • Publication number: 20240037372
    Abstract: The present invention relates to machine learning (ML) explainability (MLX). Herein are techniques for a novel relevance propagation rule in layer-wise relevance propagation (LRP) for feature attribution-based explanation (ABX) for a reconstructive autoencoder. In an embodiment, a reconstruction layer of a reconstructive neural network in a computer generates a reconstructed tuple that is based on an original tuple that contains many features. A reconstruction residual cost function calculates a reconstruction error that measures a difference between the original tuple and the reconstructed tuple. Applied to the reconstruction error is a novel reconstruction relevance propagation rule that assigns a respective reconstruction relevance to each reconstruction neuron in the reconstruction layer. Based on the reconstruction relevance of the reconstruction neurons, a respective feature relevance of each feature is determined, from which an ABX explanation may be automatically generated.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
Inventors: Kenyu Kobayashi, Arno Schneuwly, Renata Khasanova, Matteo Casserini, Felix Schmidt
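The idea of assigning relevance from the reconstruction error back to reconstruction-layer outputs can be illustrated with a simplified sketch. Assigning each feature's relevance in proportion to its squared reconstruction error is an illustrative simplification; the patented LRP rule operates on the reconstruction neurons of an actual autoencoder.

```python
def reconstruction_relevance(original, reconstructed):
    """Assign a relevance share to each reconstructed feature in proportion
    to its contribution to the total reconstruction error."""
    per_feature_err = [(o - r) ** 2 for o, r in zip(original, reconstructed)]
    total = sum(per_feature_err) or 1.0  # avoid division by zero
    # Relevance is conserved: the shares sum to 1.0 across all features.
    return [e / total for e in per_feature_err]

# Only the third feature is reconstructed poorly, so it receives all relevance.
relevance = reconstruction_relevance([1.0, 0.0, 2.0], [1.0, 0.0, 0.0])
```

From such per-feature relevances, an attribution-based explanation can name the worst-reconstructed features as the drivers of the anomaly.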
  • Publication number: 20230419169
    Abstract: Herein are machine learning (ML) explainability (MLX) techniques that perturb a non-anomalous tuple to generate an anomalous tuple as adversarial input to any explainer that is based on feature attribution. In an embodiment, a computer generates, from a non-anomalous tuple, an anomalous tuple that contains a perturbed value of a perturbed feature. In the anomalous tuple, the perturbed value of the perturbed feature is modified to cause a change in reconstruction error for the anomalous tuple. The change in reconstruction error includes a decrease in reconstruction error of the perturbed feature and/or an increase in a sum of reconstruction error of all features that are not the perturbed feature. After modifying the perturbed value, an attribution-based explainer automatically generates an explanation that identifies an identified feature as a cause of the anomalous tuple being anomalous. Whether the identified feature of the explanation is or is not the perturbed feature is detected.
    Type: Application
    Filed: June 28, 2022
    Publication date: December 28, 2023
    Inventors: Kenyu Kobayashi, Arno Schneuwly, Renata Khasanova, Matteo Casserini, Felix Schmidt
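The adversarial-perturbation idea can be sketched as a simple candidate search. The `mask_perturbation` and `explainer_fooled` helpers, the candidate grid, and the toy "autoencoder" that reconstructs every feature as zero are hypothetical stand-ins for the gradient-based perturbation the abstract describes.

```python
def mask_perturbation(t, feature, recon_error, candidates):
    """Choose the perturbed value that minimizes the perturbed feature's own
    reconstruction error, so attribution may shift to the other features."""
    best = min(candidates,
               key=lambda v: recon_error({**t, feature: v})[feature])
    return {**t, feature: best}

def explainer_fooled(anomalous, perturbed_feature, explain):
    """Detect whether the explainer fails to identify the perturbed feature."""
    return explain(anomalous) != perturbed_feature

# Toy model: the "autoencoder" reconstructs every feature as 0, so a feature's
# reconstruction error is simply its squared value.
recon_error = lambda t: {f: t[f] ** 2 for f in t}
adversarial = mask_perturbation({"a": 3.0, "b": 0.0}, "a",
                                recon_error, [3.0, 1.0, 5.0])
```

Here the candidate value 1.0 yields the smallest reconstruction error for feature "a", so it is selected; checking `explainer_fooled` on the result reveals whether the attribution-based explainer still identifies the true cause.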