Patents by Inventor Mohammad Samragh Razlighi

Mohammad Samragh Razlighi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11526601
    Abstract: A method for detecting and/or preventing an adversarial attack against a target machine learning model may be provided. The method may include training, based at least on training data, a defender machine learning model to enable the defender machine learning model to identify malicious input samples. The trained defender machine learning model may be deployed at the target machine learning model. The trained defender machine learning model may be coupled with the target machine learning model to at least determine whether an input sample received at the target machine learning model is a malicious input sample and/or a legitimate input sample. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: December 13, 2022
    Assignee: The Regents of the University of California
    Inventors: Bita Darvish Rouhani, Tara Javidi, Farinaz Koushanfar, Mohammad Samragh Razlighi
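The defender-plus-target arrangement described in the abstract above can be illustrated with a minimal sketch. All names, the scoring rule, and the threshold here are illustrative stand-ins, not the patented method: a trained defender would be a learned model, whereas this toy defender simply flags inputs whose norm is far outside the expected range.

```python
import numpy as np

def defender_score(x):
    # Stand-in for a trained defender model: flag inputs whose norm
    # deviates strongly from the training data's typical range.
    return float(np.linalg.norm(x))

def target_model(x):
    # Stand-in for the protected target classifier.
    return int(x.sum() > 0)

def guarded_predict(x, threshold=5.0):
    # The defender is coupled in front of the target model: only samples
    # it deems legitimate are forwarded for prediction.
    if defender_score(x) > threshold:
        return None  # reject: flagged as a likely malicious input sample
    return target_model(x)

legit = np.ones(4)            # norm 2.0  -> forwarded to the target model
malicious = 100 * np.ones(4)  # norm 200  -> rejected by the defender
print(guarded_predict(legit), guarded_predict(malicious))
```

The key design point is that the defender is trained and deployed separately, so the target model itself needs no modification.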
  • Publication number: 20220108194
    Abstract: Certain aspects of the present disclosure provide techniques for inferencing with a split inference model, including: generating an initial feature vector based on a client-side split inference model component; generating a modified feature vector by modifying a null-space component of the initial feature vector; providing the modified feature vector to a server-side split inference model component on a remote server; and receiving an inference from the remote server.
    Type: Application
    Filed: September 30, 2021
    Publication date: April 7, 2022
    Inventors: Mohammad Samragh Razlighi, Hossein Hosseini, Kambiz Azarian Yazdi, Joseph Binamira Soriaga
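The null-space trick in the abstract above can be sketched numerically. Assuming (for illustration only) that the server-side component begins with a linear layer with weight matrix W, any component of the feature vector lying in the null space of W does not affect the server's computation, so the client can randomize it to obscure the raw features without changing the inference result:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # hypothetical server-side first-layer weights
z = rng.standard_normal(8)       # initial feature vector from the client side

# Orthonormal basis of the null space of W via SVD (rows of Vt beyond
# the nonzero singular values span the null space).
_, s, Vt = np.linalg.svd(W)
null_basis = Vt[len(s):]

# Replace the null-space component of z with random noise: the server's
# first-layer output W @ z is unchanged, but the transmitted features differ.
coeffs = null_basis @ z
z_clean = z - null_basis.T @ coeffs
z_mod = z_clean + null_basis.T @ rng.standard_normal(len(null_basis))

print(np.allclose(W @ z, W @ z_mod))  # inference is unaffected
```

This is a sketch under the stated linear-layer assumption; the patent application covers the general technique of modifying the null-space component before sending features to the remote server.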
  • Publication number: 20220083865
    Abstract: A framework is presented that provides a shift in the conceptual and practical realization of privacy-preserving inference on deep neural networks. The framework leverages the concept of binary neural networks (BNNs) in conjunction with the garbled circuits (GC) protocol. In BNNs, the weights and activations are restricted to binary (e.g., ±1) values, substituting the costly multiplications with simple XNOR operations during the inference phase. The XNOR operation is known to be free in the GC protocol; therefore, performing oblivious inference on BNNs using GC removes the costly multiplications. The approach, consistent with implementations of the current subject matter, allows oblivious inference on standard deep learning benchmarks to be performed with minimal, if any, decrease in prediction accuracy.
    Type: Application
    Filed: January 17, 2020
    Publication date: March 17, 2022
    Inventors: Mohammad Sadegh Riazi, Farinaz Koushanfar, Mohammad Samragh Razlighi
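The substitution of multiplications with XNOR operations described in the abstract above rests on a standard identity: when ±1 values are encoded as bits (+1 → 1, −1 → 0), the dot product of two ±1 vectors equals twice the popcount of their XNOR minus the vector length. A minimal sketch (the bit-packing helper is illustrative, not from the patent application):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
a = rng.choice([-1, 1], size=n)  # binarized activations
b = rng.choice([-1, 1], size=n)  # binarized weights

# Pack +/-1 values into an integer bitstring: +1 -> 1, -1 -> 0.
pack = lambda v: int("".join("1" if x == 1 else "0" for x in v), 2)
pa, pb = pack(a), pack(b)

# XNOR then popcount: matching positions contribute +1 to the dot
# product, mismatches contribute -1, so dot = 2 * (#matches) - n.
xnor = ~(pa ^ pb) & ((1 << n) - 1)
dot_xnor = 2 * bin(xnor).count("1") - n

print(dot_xnor == int(np.dot(a, b)))
```

Because XNOR gates are evaluated for free in the GC protocol, expressing the network's inner products this way is what makes oblivious BNN inference cheap.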
  • Publication number: 20210166106
    Abstract: A method may include training, based on a training dataset, a machine learning model. The machine learning model may include a neuron configured to generate an output by applying, to one or more inputs to the neuron, an activation function. The output of the activation function may be subject to a multi-level binarization function configured to generate an estimate of the output. The estimate of the output may include a first bit providing a first binary representation of the output and a second bit providing a second binary representation of a first residual error associated with the first binary representation of the output. In response to determining that the training of the machine learning model is complete, the trained machine learning model may be deployed to perform a cognitive task. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Application
    Filed: December 12, 2018
    Publication date: June 3, 2021
    Inventors: Mohammad Ghasemzadeh, Farinaz Koushanfar, Mohammad Samragh Razlighi
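The two-level scheme in the abstract above can be sketched as residual binarization: the first bit is a scaled sign of the activation, and the second bit binarizes the residual error left by the first. This sketch uses the mean absolute value as the scale factor, which is one common choice and an assumption here, not necessarily the claimed method:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)  # activations to approximate

# First level: scaled sign of the activations.
a1 = np.abs(x).mean()
b1 = a1 * np.sign(x)

# Second level: binarize the residual error of the first level.
r1 = x - b1
a2 = np.abs(r1).mean()
b2 = a2 * np.sign(r1)

estimate = b1 + b2  # two-bit estimate of the original output

err1 = np.abs(x - b1).mean()        # error with one bit
err2 = np.abs(x - estimate).mean()  # error with two bits
print(err2 < err1)
```

Each additional level stores only one extra bit per value plus one scale factor, so the estimate tightens while the representation stays nearly binary.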
  • Publication number: 20200167471
    Abstract: A method for detecting and/or preventing an adversarial attack against a target machine learning model may be provided. The method may include training, based at least on training data, a defender machine learning model to enable the defender machine learning model to identify malicious input samples. The trained defender machine learning model may be deployed at the target machine learning model. The trained defender machine learning model may be coupled with the target machine learning model to at least determine whether an input sample received at the target machine learning model is a malicious input sample and/or a legitimate input sample. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Application
    Filed: July 12, 2018
    Publication date: May 28, 2020
    Inventors: Bita Darvish Rouhani, Tara Javidi, Farinaz Koushanfar, Mohammad Samragh Razlighi