Patents by Inventor Christopher Côté SRINIVASA

Christopher Côté SRINIVASA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240118929
    Abstract: A system and method for machine learning architecture for prospective resource allocations are described. The method may include: receiving data records representing historical resource allocations from a user account associated with a first identifier to a resource account associated with a second identifier; deriving input features based on the data records; computing, using a trained neural network architecture, a predicted resource allocation amount and a predicted resource allocation date for the predicted resource allocation amount based on the derived input features; determining, using the trained neural network architecture, a first selection score associated with the predicted resource allocation amount and a second selection score associated with the predicted resource allocation date; and when the first or second selection score is above a minimum threshold, causing display, at a display device, of the associated resource allocation amount or date corresponding to the second identifier.
    Type: Application
    Filed: August 25, 2023
    Publication date: April 11, 2024
    Inventors: Lili MENG, Tristan Jean Claude SYLVAIN, Amir Hossein ABDI, Gabriel OLIVEIRA, Yunduz RAKHMANGULOVA, Yongmin YAN, Ella WILSON, Robert David EVANS, Saghar IRANDOUST, Christopher Côté SRINIVASA
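
A minimal, illustrative sketch of the kind of two-output predictor described in the abstract of publication 20240118929: a small network maps features derived from historical transfers to a predicted allocation amount, a predicted date offset, and a selection score for each, and a suggestion is surfaced only when its score clears a threshold. The use of PyTorch, the layer sizes, the sigmoid selection scores, and every name below are assumptions for illustration, not the patented architecture.

```python
# Hypothetical sketch only: predict an allocation amount, a date offset, and
# per-output selection scores from derived features; surface a suggestion only
# when its selection score exceeds a minimum threshold.
import torch
import torch.nn as nn

class AllocationPredictor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.amount_head = nn.Linear(hidden, 1)  # predicted resource allocation amount
        self.date_head = nn.Linear(hidden, 1)    # predicted days until the allocation
        self.score_head = nn.Linear(hidden, 2)   # selection scores for amount and date

    def forward(self, x):
        h = self.backbone(x)
        amount = self.amount_head(h)
        date_offset = self.date_head(h)
        scores = torch.sigmoid(self.score_head(h))  # squashed into [0, 1]
        return amount, date_offset, scores

# Toy usage with random stand-in features (untrained model, shapes only).
model = AllocationPredictor(n_features=8)
features = torch.randn(1, 8)
amount, date_offset, scores = model(features)
THRESHOLD = 0.5
if scores[0, 0] > THRESHOLD:
    print(f"suggested amount: {amount.item():.2f}")
if scores[0, 1] > THRESHOLD:
    print(f"suggested date: in {date_offset.item():.0f} days")
```
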
  • Publication number: 20220382880
    Abstract: A system and method for adversarial vulnerability testing of machine learning models is proposed that receives, as an input, a representation of a non-differentiable machine learning model, transforms the input model into a smoothed model, and conducts an adversarial search against the smoothed model to generate an output data value representative of a potential vulnerability to adversarial examples. Variant embodiments are also proposed, directed to noise injection, hyperparameter control, and exhaustive/sampling-based searches in an effort to balance computational efficiency and accuracy in practical implementation. Flagged vulnerabilities can be used to have models re-validated, re-trained, or removed from use due to an increased cybersecurity risk profile.
    Type: Application
    Filed: May 20, 2022
    Publication date: December 1, 2022
    Inventors: Giuseppe Marcello Antonio CASTIGLIONE, Weiguang DING, Sayedmasoud HASHEMI AMROABADI, Ga WU, Christopher Côté SRINIVASA
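
For publication 20220382880, a hedged sketch of the general idea of replacing a non-differentiable model with a smoothed surrogate and running an adversarial search against it. The Gaussian noise injection, the Monte-Carlo gradient estimator, the sign-step search, and the toy black-box model are stand-ins chosen for illustration; they are not the procedure claimed in the application.

```python
# Illustrative sketch: approximate a non-differentiable scoring function with a
# Gaussian-smoothed surrogate and run a simple gradient-based adversarial
# search against the surrogate, flagging inputs whose decision flips.
import numpy as np

rng = np.random.default_rng(0)

def blackbox_score(x: np.ndarray) -> float:
    # Stand-in for a non-differentiable model (e.g. a tree ensemble): a hard
    # step plus a small linear term.
    return float(x[0] > 0.3) + 0.1 * x[1]

def smoothed_grad(f, x, sigma=0.1, n=256):
    # Monte-Carlo estimate of the gradient of the Gaussian-smoothed function
    # E[f(x + sigma * u)] with respect to x.
    u = rng.standard_normal((n, x.size))
    scores = np.array([f(x + sigma * ui) for ui in u])
    return (scores[:, None] * u).mean(axis=0) / sigma

def adversarial_search(f, x0, eps=0.5, steps=50, lr=0.05):
    x = x0.copy()
    base = f(x0) > 0.5
    for _ in range(steps):
        g = smoothed_grad(f, x)
        # Push the smoothed score down if the base decision is positive,
        # up otherwise, while staying inside the perturbation budget.
        x = x - lr * np.sign(g) if base else x + lr * np.sign(g)
        x = np.clip(x, x0 - eps, x0 + eps)
        if (f(x) > 0.5) != base:
            return x  # decision flipped: potential vulnerability
    return None

x0 = np.array([0.35, 0.0])
adv = adversarial_search(blackbox_score, x0)
print("vulnerable input found:" if adv is not None else "no flip found within budget", adv)
```
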
  • Publication number: 20220245422
    Abstract: Systems and methods for machine learning architecture for out-of-distribution data detection. The system may include a processor and a memory storing processor-executable instructions that may, when executed, configure the processor to: receive an input data set; generate an out-of-distribution prediction based on the input data set and an auto-encoder, the auto-encoder trained on a pretext task that includes a transformation of one or more training data sets for reconstruction, with training directed at reducing a reconstruction error so as to encode the semantic meaning of the training data sets; and generate a signal for providing an indication of whether the input data set is an out-of-distribution data set.
    Type: Application
    Filed: January 27, 2022
    Publication date: August 4, 2022
    Inventors: Ga WU, Anmol Singh JAWANDHA, Christopher Côté SRINIVASA
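
A small sketch of the reconstruction-error test described for publication 20220245422: an auto-encoder is fit on transformed in-distribution data, and an input is flagged as out-of-distribution when its reconstruction error exceeds a threshold calibrated on held-out in-distribution examples. The feature-permutation pretext task, the 95th-percentile threshold, and the PyTorch model below are assumptions for illustration only.

```python
# Hedged sketch: train an auto-encoder to reconstruct inputs from a permuted
# view of their features (a simple pretext transformation), then flag inputs
# whose reconstruction error exceeds a calibrated threshold as OOD.
import torch
import torch.nn as nn

torch.manual_seed(0)
D = 16
train = torch.randn(512, D)                 # stand-in for in-distribution data

perm = torch.randperm(D)                    # pretext task: reconstruct from permuted features
ae = nn.Sequential(nn.Linear(D, 4), nn.ReLU(), nn.Linear(4, D))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(train[:, perm]), train)
    loss.backward()
    opt.step()

def recon_error(x):
    with torch.no_grad():
        return ((ae(x[:, perm]) - x) ** 2).mean(dim=1)

# Calibrate a threshold on held-out in-distribution data (95th percentile here).
threshold = recon_error(torch.randn(256, D)).quantile(0.95)

def is_ood(x):
    return recon_error(x) > threshold

print(is_ood(torch.randn(4, D)))            # typically not flagged (in-distribution)
print(is_ood(torch.randn(4, D) * 5 + 10))   # typically flagged (shifted distribution)
```
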
  • Publication number: 20220114399
    Abstract: Systems and methods for diagnosing and testing fairness of machine learning models based on detecting individual violations of group definitions of fairness, via adversarial attacks that aim to perturb model inputs to generate individual violations. The systems and methods employ an auxiliary machine learning model as a local surrogate for identifying group membership and assess fairness by measuring the transferability of attacks from this model. The systems and methods generate fairness indicator values indicative of discrimination risk due to the target predictions generated by the machine learning model, by comparing gradients of the machine learning model to gradients of the auxiliary machine learning model.
    Type: Application
    Filed: October 8, 2021
    Publication date: April 14, 2022
    Inventors: Giuseppe Marcello Antonio CASTIGLIONE, Simon Jeremy Damion PRINCE, Christopher Côté SRINIVASA
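
An illustrative reading of the gradient-comparison indicator described for publication 20220114399: the input gradient of the target model is compared against the input gradient of an auxiliary surrogate that predicts group membership, and strong alignment is treated as a higher risk that perturbations which change inferred group membership also change the target prediction. The toy models and the cosine-similarity indicator below are assumptions, not the claimed method.

```python
# Illustrative sketch: compare input gradients of a target predictor and an
# auxiliary group-membership surrogate; high alignment suggests attacks on the
# surrogate may transfer to the target, flagging a potential fairness risk.
import torch
import torch.nn as nn

torch.manual_seed(0)
D = 10
target_model = nn.Sequential(nn.Linear(D, 16), nn.ReLU(), nn.Linear(16, 1))
group_model = nn.Sequential(nn.Linear(D, 16), nn.ReLU(), nn.Linear(16, 1))  # local surrogate

def input_gradient(model, x):
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad

def fairness_indicator(x):
    g_target = input_gradient(target_model, x)
    g_group = input_gradient(group_model, x)
    # Cosine similarity per example; values near +/-1 indicate higher risk.
    return nn.functional.cosine_similarity(g_target, g_group, dim=1)

x = torch.randn(5, D)
print(fairness_indicator(x))
```
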
  • Publication number: 20220029987
    Abstract: An approach for increasing security of biometric templates is described. An improved system is adapted to split a full set of features or representations of a trained model into a first partial template and a second partial template, the second partial template being stored on a secure enclave accessible only through zero-knowledge-proof-based interfaces. During verification using the template, a new full set of features is received for comparison, and a model is loaded based on the available portions of the model. Comparison utilizing the second partial template requires the computation of zero-knowledge proofs, as direct access to the underlying second partial template is prohibited by the secure enclave.
    Type: Application
    Filed: July 21, 2021
    Publication date: January 27, 2022
    Inventors: Margaret Inez SALTER, Iustina-Miruna VINTILA, Arya POURTABATABAIE, Edison U. ORTIZ, Sara Zafar JAFARZADEH, Sayedmasoud HASHEMI AMROABADI, Christopher Côté SRINIVASA
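
Finally, a heavily simplified sketch of the template-splitting architecture of publication 20220029987: enrollment splits the biometric feature vector into a locally held partial template and a second partial template held by an enclave object that never exposes its contents and only answers a bounded match query. The distance-threshold query below is a plain stand-in for the zero-knowledge-proof-based interface described in the abstract; no zero-knowledge proof is implemented here, and all names are illustrative.

```python
# Simplified sketch of splitting a biometric template into two partial
# templates, with the second held behind a query-only "enclave" interface.
import numpy as np

rng = np.random.default_rng(0)

class EnclavePartialTemplate:
    def __init__(self, partial: np.ndarray):
        self.__partial = partial.copy()   # never returned to callers

    def matches(self, probe_partial: np.ndarray, tol: float = 0.5) -> bool:
        # Stand-in for a zero-knowledge-proof-based interface: the caller only
        # learns a yes/no answer, never the stored partial template itself.
        return float(np.linalg.norm(self.__partial - probe_partial)) < tol

def enroll(features: np.ndarray):
    half = features.size // 2
    local_part = features[:half].copy()                # first partial template
    enclave = EnclavePartialTemplate(features[half:])  # second partial template
    return local_part, enclave

def verify(probe: np.ndarray, local_part, enclave, tol: float = 0.5) -> bool:
    half = probe.size // 2
    local_ok = np.linalg.norm(probe[:half] - local_part) < tol
    return bool(local_ok) and enclave.matches(probe[half:], tol)

template = rng.normal(size=8)  # stand-in biometric features
local_part, enclave = enroll(template)
print(verify(template + rng.normal(scale=0.01, size=8), local_part, enclave))  # expected: True
print(verify(rng.normal(size=8), local_part, enclave))                         # expected: False
```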