Patents by Inventor Arjun Ravi Kannan

Arjun Ravi Kannan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230214640
    Abstract: An example computing platform is configured to receive configuration data that defines a pipeline for building a deep learning model, the configuration data including data defining an input dataset, data type assignments for a set of input data variables included within the dataset, data transformations that are to be applied to the dataset, and a machine learning process that is to be utilized to train the deep learning model. Based on the received configuration data, the computing platform functions to build the deep learning model by obtaining the input dataset, assigning a data type to data in the dataset, selecting transformation operations for the data in the dataset, splitting the dataset into a sequence of data blocks, applying the transformation operations to each data block to produce a transformed dataset, generating a compressed data structure that includes the transformed datasets, and applying the machine learning process to the transformed datasets.
    Type: Application
    Filed: December 31, 2021
    Publication date: July 6, 2023
    Inventors: Kenrick Fernandes, Ryan Franks, Arjun Ravi Kannan
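    Illustrative sketch: a minimal, hypothetical Python outline of the configuration-driven pipeline described in the abstract above. The config keys, the build_model/train_fn names, and the use of pandas/NumPy are assumptions for illustration, not details from the filing.

      import numpy as np
      import pandas as pd

      def build_model(config, train_fn):
          """Build a model from a declarative pipeline configuration (illustrative only)."""
          df = pd.read_csv(config["input_dataset"])               # obtain the input dataset
          df = df.astype(config["dtype_assignments"])             # assign a data type to each variable
          blocks = np.array_split(df, config["num_blocks"])       # split into a sequence of data blocks
          transformed = [config["transform"](b) for b in blocks]  # apply transformations block by block
          train_df = pd.concat(transformed, ignore_index=True)    # assemble the transformed dataset
          return train_fn(train_df, **config["training_params"])  # run the configured ML process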
  • Publication number: 20220414766
    Abstract: A computing platform may be configured to (i) train an initial model object for a data science model using a machine learning process, (ii) determine that the initial model object exhibits a threshold level of bias, and (iii) thereafter produce an updated version of the initial model object having mitigated bias by (a) identifying a subset of the initial model object's set of input variables that are to be replaced by transformations, (b) producing a post-processed model object by replacing each respective input variable in the identified subset with a respective transformation of the respective input variable that has one or more unknown parameters, (c) producing a parameterized family of the post-processed model object, and (d) selecting, from the parameterized family of the post-processed model object, one given version of the post-processed model object to use as the updated version of the initial model object for the data science model.
    Type: Application
    Filed: August 31, 2022
    Publication date: December 29, 2022
    Inventors: Alexey Miroshnikov, Konstandinos Kotsiopoulos, Arjun Ravi Kannan, Raghu Kulkarni, Steven Dickerson, Ryan Franks
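    Illustrative sketch: a rough Python outline of the post-processing idea in the abstract above, which replaces a chosen subset of input variables with a parameterized transformation and picks the member of the resulting model family with the least bias. The shrink-toward-the-mean transformation and all names are assumptions, not the claimed method.

      import numpy as np

      def select_post_processed_model(model, X, subset, thetas, bias_metric):
          """Scan a parameterized family of post-processed models for minimal bias."""
          best_theta, best_bias = None, np.inf
          for theta in thetas:                                    # one candidate per family member
              X_mod = X.copy()
              for j in subset:                                    # replace selected variables with a
                  col = X_mod[:, j]                               # parameterized transformation
                  X_mod[:, j] = theta * col + (1 - theta) * col.mean()
              bias = bias_metric(model.predict(X_mod))            # measure bias of this version
              if bias < best_bias:
                  best_theta, best_bias = theta, bias
          return best_theta, best_bias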
  • Publication number: 20210383268
    Abstract: A method, system, and computer-readable medium are disclosed for detecting and mitigating bias in a trained machine learning model. The method includes the steps of: training the model based on a training data set; detecting bias in the model relative to a protected class; identifying one or more groups of input variables that contribute to the bias; and mitigating bias in the model. Mitigating the bias is performed by constructing a post-processed score function that either (a) neutralizes or partially neutralizes one or more groups of input variables in the input vector of the model, or (b) utilizes a fair score approximation of the model to project the distributions for the protected class and/or the unprotected class to substantially match. In an embodiment, detecting bias in the trained model is performed by comparing distributions of two or more subpopulations based on a distance metric, such as a Wasserstein distance.
    Type: Application
    Filed: June 3, 2020
    Publication date: December 9, 2021
    Inventors: Alexey Miroshnikov, Kostandinos Kotsiopoulos, Arjun Ravi Kannan, Raghu Kulkarni, Steven Dickerson
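    Illustrative sketch: the distribution-comparison step from the abstract above, expressed in Python with SciPy's Wasserstein distance. The function name and the 0.05 threshold are placeholders; the patent does not prescribe them.

      import numpy as np
      from scipy.stats import wasserstein_distance

      def detect_score_bias(scores, is_protected, threshold=0.05):
          """Flag bias when subpopulation score distributions differ by more than a threshold."""
          dist = wasserstein_distance(scores[is_protected], scores[~is_protected])
          return dist, dist > threshold

      # usage with synthetic scores and a random protected-class indicator
      rng = np.random.default_rng(0)
      scores = rng.uniform(size=1000)
      is_protected = rng.integers(0, 2, size=1000).astype(bool)
      print(detect_score_bias(scores, is_protected))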
  • Publication number: 20210383275
    Abstract: A framework for interpreting machine learning models is proposed that utilizes interpretability methods to determine the contribution of groups of input variables to the output of the model. Input variables are grouped based on dependencies with other input variables. The groups are identified by processing a training data set with a clustering algorithm. Once the groups of input variables are defined, scores related to each group of input variables for a given instance of the input vector processed by the model are calculated according to one or more algorithms. The algorithms can utilize group Partial Dependence Plot (PDP) values, Shapley Additive Explanations (SHAP) values, Banzhaf values, and their extensions, among others, and a score for each group can be calculated for a given instance of an input vector. These scores can then be sorted, ranked, and combined into one hybrid ranking.
    Type: Application
    Filed: May 17, 2021
    Publication date: December 9, 2021
    Inventors: Alexey Miroshnikov, Konstandinos Kotsiopoulos, Arjun Ravi Kannan, Raghu Kulkarni, Steven Dickerson
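    Illustrative sketch: one way to realize two pieces of the framework above in Python: cluster input variables into dependence-based groups, then merge per-group rankings from several attribution methods into a single hybrid ranking by average rank. The correlation-based distance, average-linkage clustering, and rank-averaging rule are assumptions for illustration.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      def group_variables(X, num_groups):
          """Cluster the columns of X into groups using 1 - |correlation| as a distance."""
          dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
          Z = linkage(squareform(dist, checks=False), method="average")
          return fcluster(Z, t=num_groups, criterion="maxclust")    # group label per variable

      def hybrid_ranking(score_tables):
          """Combine per-group scores from several interpretability methods (one array per method)."""
          ranks = [np.argsort(np.argsort(-np.asarray(s))) for s in score_tables]
          return np.argsort(np.mean(ranks, axis=0))                  # groups ordered by average rank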
  • Publication number: 20210350272
    Abstract: A framework for interpreting machine learning models is proposed that utilizes interpretability methods to determine the contribution of groups of input variables to the output of the model. Input variables are grouped based on correlation with other input variables. The groups are identified by processing a training data set with a clustering algorithm. Once the groups of input variables are defined, partial dependence plot (PDP) tables for each group are calculated and stored in a memory, which are used for calculating scores related to each group of input variables for a given instance of the input vector processed by the model. Furthermore, Shapley Additive Explanations (SHAP) values for each group can be calculated by summing the SHAP values of the input variables for a given instance of an input vector per group. These scores can then be sorted, ranked for each interpretability method, and then combined into one hybrid ranking.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 11, 2021
    Inventors: Alexey Miroshnikov, Kostas Kotsiopoulos, Arjun Ravi Kannan, Raghu Kulkarni, Steven Dickerson
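    Illustrative sketch: the group-level SHAP aggregation described in the abstract above, assuming per-variable SHAP values for a single instance are already available from any SHAP explainer. Only the summing-within-groups and ranking step is shown; the PDP-table machinery is omitted.

      import numpy as np

      def group_shap_scores(shap_values, group_labels):
          """Sum per-variable SHAP values within each group and rank groups by |contribution|."""
          groups = np.unique(group_labels)
          scores = np.array([shap_values[group_labels == g].sum() for g in groups])
          order = np.argsort(-np.abs(scores))
          return groups[order], scores[order]

      # usage with made-up SHAP values for a five-variable model split into three groups
      shap_values = np.array([0.40, -0.10, 0.05, 0.30, -0.25])
      group_labels = np.array([1, 1, 2, 3, 3])
      print(group_shap_scores(shap_values, group_labels))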