Patents by Inventor Farinaz Koushanfar
Farinaz Koushanfar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250005200
Abstract: In some embodiments, there is provided a system, which comprises a processor and at least one non-transitory computer-readable medium storing instructions.
Type: Application
Filed: November 8, 2022
Publication date: January 2, 2025
Inventors: Mojan Javaheripi, Mohammad Samragh Razlighi, Siam Umar Hussain, Farinaz Koushanfar
-
Publication number: 20240370535
Abstract: In some implementations, a secure setup is performed by a trusted third party to generate, based on an obfuscated watermark extractor, a prover key and a verifier key to prove ownership of a first machine learning model by a model owner without divulging a trigger key or a watermark corresponding to the first machine learning model. Next, the model owner of the first machine learning model generates, based on the prover key, a proof of ownership of a second machine learning model by at least determining that the obfuscated watermark extractor extracts the watermark from the second machine learning model in response to the second machine learning model being provided as a public input to the obfuscated watermark extractor. Then, the proof of ownership is verified, based on the verifier key, to acknowledge that the second machine learning model matches the first machine learning model.
Type: Application
Filed: April 30, 2024
Publication date: November 7, 2024
Inventors: Nojan Sheybani, Farinaz Koushanfar, Ritvik Kapila, Zahra Ghodsi
-
Publication number: 20240346379
Abstract: A computing system determines a median of a first number of mean values received from a first number of clusters, where each of the first number of clusters includes a first plurality of clients. Also, a threshold is determined based on the median, where the threshold applies to model updates. The median and the threshold are broadcast to all clients. Next, one or more clients that fail to provide a proof attesting that their model update is within the threshold of the median are dropped. Then, a second plurality of clients, not including the one or more dropped clients, participate in a final round of secure aggregation. Next, a final aggregate result is obtained, where the final aggregate result is based on the final round of secure aggregation. Then, one or more actions are performed based on the final aggregate result.
Type: Application
Filed: April 12, 2024
Publication date: October 17, 2024
Inventors: Zahra Ghodsi, Mojan Javaheripi, Nojan Sheybani, Xinqiao Zhang, Farinaz Koushanfar
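The median-and-threshold filtering in this abstract can be sketched in a few lines. How the threshold is derived from the median is not fixed by the abstract, so the MAD-based choice below and the plain averaging (standing in for secure aggregation) are illustrative assumptions only:

```python
import statistics

def robust_aggregate(cluster_means, client_updates, threshold_scale=1.0):
    """Sketch of the filtering step described above.

    cluster_means: one mean model-update value per cluster.
    client_updates: {client_id: update_value} for all clients.
    threshold_scale is a hypothetical knob, not from the patent.
    """
    # Step 1: median of the per-cluster means.
    median = statistics.median(cluster_means)
    # Step 2: a threshold derived from the median (illustrative choice:
    # median absolute deviation of the cluster means).
    mad = statistics.median(abs(m - median) for m in cluster_means)
    threshold = threshold_scale * mad
    # Step 3: drop clients whose update is not within threshold of the median.
    kept = {cid: u for cid, u in client_updates.items()
            if abs(u - median) <= threshold}
    # Step 4: final aggregation over the surviving clients
    # (plain averaging stands in for secure aggregation here).
    aggregate = sum(kept.values()) / len(kept)
    return median, threshold, kept, aggregate
```

A client reporting an update far from the broadcast median (here, client "c") is excluded before the final round.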
-
Publication number: 20240242191
Abstract: A method may include embedding, in a hidden layer and/or an output layer of a first machine learning model, a first digital watermark. The first digital watermark may correspond to input samples altering the low probabilistic regions of an activation map associated with the hidden layer of the first machine learning model. Alternatively, the first digital watermark may correspond to input samples rarely encountered by the first machine learning model. The first digital watermark may be embedded in the first machine learning model by at least training, based on training data including the input samples, the first machine learning model. A second machine learning model may be determined to be a duplicate of the first machine learning model based on a comparison of the first digital watermark embedded in the first machine learning model and a second digital watermark extracted from the second machine learning model.
Type: Application
Filed: February 27, 2024
Publication date: July 18, 2024
Inventors: Bita Darvish Rouhani, Huili Chen, Farinaz Koushanfar
-
Patent number: 12014569
Abstract: In some embodiments, there is provided a system for generating synthetic human fingerprints. The system includes at least one processor and at least one memory storing instructions which, when executed by the at least one processor, cause operations, such as receiving, from a database and/or a sensor, at least one real fingerprint; training, based on the at least one real fingerprint, a generative adversarial network to learn a distribution of real fingerprints; training a super-resolution engine to learn to transform low-resolution synthetic fingerprints to high-resolution fingerprints; providing to the trained super-resolution engine at least one low-resolution synthetic fingerprint that is generated as an output by the trained generative adversarial network; and in response to the providing, outputting, by the trained super-resolution engine, at least one high-resolution synthetic fingerprint.
Type: Grant
Filed: January 29, 2021
Date of Patent: June 18, 2024
Assignee: The Regents of the University of California
Inventors: Mohammad Sadegh Riazi, Seyed Mohammad Chavoshian, Farinaz Koushanfar
-
Patent number: 11972408
Abstract: A method may include embedding, in a hidden layer and/or an output layer of a first machine learning model, a first digital watermark. The first digital watermark may correspond to input samples altering the low probabilistic regions of an activation map associated with the hidden layer of the first machine learning model. Alternatively, the first digital watermark may correspond to input samples rarely encountered by the first machine learning model. The first digital watermark may be embedded in the first machine learning model by at least training, based on training data including the input samples, the first machine learning model. A second machine learning model may be determined to be a duplicate of the first machine learning model based on a comparison of the first digital watermark embedded in the first machine learning model and a second digital watermark extracted from the second machine learning model.
Type: Grant
Filed: March 21, 2019
Date of Patent: April 30, 2024
Assignee: The Regents of the University of California
Inventors: Bita Darvish Rouhani, Huili Chen, Farinaz Koushanfar
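The embed-then-compare flow in this abstract can be illustrated abstractly. Everything below is a hypothetical skeleton: `train_fn` and the dict-backed "model" in the usage stand in for real training, and the 90% match threshold is an assumed parameter, not taken from the patent:

```python
def embed_watermark(train_fn, training_data, trigger_samples, trigger_labels):
    """Illustrative embedding: owner-chosen (trigger, label) pairs are
    appended to the training data, so the model learns the watermark
    labels on rarely-encountered inputs."""
    return train_fn(training_data + list(zip(trigger_samples, trigger_labels)))

def verify_watermark(model_fn, trigger_samples, trigger_labels, min_match=0.9):
    """A suspect model is flagged as a duplicate when its predictions on
    the trigger samples agree with the embedded labels often enough."""
    matches = sum(model_fn(x) == y
                  for x, y in zip(trigger_samples, trigger_labels))
    return matches / len(trigger_samples) >= min_match
```

A model that never saw the trigger pairs will almost never reproduce the owner's labels, so a high match rate is strong evidence of duplication.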
-
Patent number: 11922313
Abstract: A system may include a processor and a memory. The memory may include program code that provides operations when executed by the processor. The operations may include: partitioning, based at least on a resource constraint of a platform, a global machine learning model into a plurality of local machine learning models; transforming training data to at least conform to the resource constraint of the platform; and training the global machine learning model by at least processing, at the platform, the transformed training data with a first of the plurality of local machine learning models.
Type: Grant
Filed: February 6, 2017
Date of Patent: March 5, 2024
Assignee: WILLIAM MARSH RICE UNIVERSITY
Inventors: Bita Darvish Rouhani, Azalia Mirhoseini, Farinaz Koushanfar
-
Patent number: 11651260
Abstract: A method for hardware-based machine learning acceleration is provided. The method may include partitioning, into a first batch of data and a second batch of data, an input data received at a hardware accelerator implementing a machine learning model. The input data may be a continuous stream of data samples. The input data may be partitioned based at least on a resource constraint of the hardware accelerator. An update of a probability density function associated with the machine learning model may be performed in real time. The probability density function may be updated by at least processing, by the hardware accelerator, the first batch of data before the second batch of data. An output may be generated based at least on the updated probability density function. The output may include a probability of encountering a data value. Related systems and articles of manufacture, including computer program products, are also provided.
Type: Grant
Filed: January 31, 2018
Date of Patent: May 16, 2023
Assignee: The Regents of the University of California
Inventors: Bita Darvish Rouhani, Mohammad Ghasemzadeh, Farinaz Koushanfar
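The batched, real-time density update described here can be sketched with a streaming histogram: counts are folded in one batch at a time, so the estimate is already refreshed before later batches arrive. The fixed-width binning scheme below is an illustrative assumption; the patent does not prescribe how the density is represented:

```python
from collections import Counter

class StreamingPDF:
    """Minimal sketch of a probability density function updated in
    real time from batches of a continuous data stream."""

    def __init__(self, bin_width=1.0):
        self.bin_width = bin_width
        self.counts = Counter()
        self.total = 0

    def update(self, batch):
        # Process one partition of the input stream; earlier batches
        # are processed before later ones.
        for x in batch:
            self.counts[int(x // self.bin_width)] += 1
            self.total += 1

    def probability(self, value):
        # Probability of encountering a data value in this value's bin.
        if self.total == 0:
            return 0.0
        return self.counts[int(value // self.bin_width)] / self.total
```

On a hardware accelerator the same idea maps naturally to per-bin counters updated as samples stream through.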
-
Patent number: 11625614
Abstract: A method, a system, and a computer program product for fast training and/or execution of neural networks. A description of a neural network architecture is received. Based on the received description, a graph representation of the neural network architecture is generated. The graph representation includes one or more nodes connected by one or more connections. At least one connection is modified. Based on the generated graph representation, a new graph representation is generated using the modified at least one connection. The new graph representation has a small-world property. The new graph representation is transformed into a new neural network architecture.
Type: Grant
Filed: October 23, 2019
Date of Patent: April 11, 2023
Assignee: The Regents of the University of California
Inventors: Mojan Javaheripi, Farinaz Koushanfar, Bita Darvish Rouhani
-
Publication number: 20230075233
Abstract: In some embodiments, there is provided a system for generating synthetic human fingerprints. The system includes at least one processor and at least one memory storing instructions which, when executed by the at least one processor, cause operations, such as receiving, from a database and/or a sensor, at least one real fingerprint; training, based on the at least one real fingerprint, a generative adversarial network to learn a distribution of real fingerprints; training a super-resolution engine to learn to transform low-resolution synthetic fingerprints to high-resolution fingerprints; providing to the trained super-resolution engine at least one low-resolution synthetic fingerprint that is generated as an output by the trained generative adversarial network; and in response to the providing, outputting, by the trained super-resolution engine, at least one high-resolution synthetic fingerprint.
Type: Application
Filed: January 29, 2021
Publication date: March 9, 2023
Inventors: Mohammad Sadegh Riazi, Seyed Mohammad Chavoshian, Farinaz Koushanfar
-
Patent number: 11599832
Abstract: A computing system can include a plurality of clients located outside a cloud-based computing environment, where each of the clients may be configured to encode respective original data with a respective unique secret key to generate data hypervectors that encode the original data. A collaborative machine learning system can operate in the cloud-based computing environment and can be operatively coupled to the plurality of clients, where the collaborative machine learning system can be configured to operate on the data hypervectors that encode the original data to train a machine learning model operated by the collaborative machine learning system or to generate an inference from the machine learning model.
Type: Grant
Filed: June 29, 2020
Date of Patent: March 7, 2023
Assignee: The Regents of the University of California
Inventors: Mohsen Imani, Yeseong Kim, Tajana Rosing, Farinaz Koushanfar, Mohammad Sadegh Riazi
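The keyed hypervector encoding can be illustrated in the style of hyperdimensional computing: each client derives a random bipolar base vector from its secret key and binds features to it before anything leaves the device. The seed-based key, shift-based binding, and small dimension below are all illustrative assumptions, not the patent's construction:

```python
import random

def make_key(dim, seed):
    """Per-client secret key: a random bipolar base hypervector,
    reproducible only by whoever knows the seed."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(dim)]

def encode(record, key):
    """Encode original data into a hypervector by binding each feature
    with a position-dependent cyclic shift of the secret key, then
    summing. The cloud operates only on these encodings."""
    dim = len(key)
    hv = [0.0] * dim
    for i, feature in enumerate(record):
        shifted = key[i % dim:] + key[:i % dim]
        for j in range(dim):
            hv[j] += feature * shifted[j]
    return hv
```

Because encodings are high-dimensional sums of keyed components, the server can still aggregate and learn over them without recovering the raw features.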
-
Patent number: 11526601
Abstract: A method for detecting and/or preventing an adversarial attack against a target machine learning model may be provided. The method may include training, based at least on training data, a defender machine learning model to enable the defender machine learning model to identify malicious input samples. The trained defender machine learning model may be deployed at the target machine learning model. The trained defender machine learning model may be coupled with the target machine learning model to at least determine whether an input sample received at the target machine learning model is a malicious input sample and/or a legitimate input sample. Related systems and articles of manufacture, including computer program products, are also provided.
Type: Grant
Filed: July 12, 2018
Date of Patent: December 13, 2022
Assignee: The Regents of the University of California
Inventors: Bita Darvish Rouhani, Tara Javidi, Farinaz Koushanfar, Mohammad Samragh Razlighi
-
Patent number: 11386326
Abstract: A method may include transforming a trained machine learning model, including by replacing at least one layer of the trained machine learning model with a dictionary matrix and a coefficient matrix. The dictionary matrix and the coefficient matrix may be formed by decomposing a weight matrix associated with the at least one layer of the trained machine learning model. A product of the dictionary matrix and the coefficient matrix may form a reduced-dimension representation of the weight matrix associated with the at least one layer of the trained machine learning model. The transformed machine learning model may be deployed to a client. Related systems and computer program products are also provided.
Type: Grant
Filed: June 11, 2019
Date of Patent: July 12, 2022
Assignee: The Regents of the University of California
Inventors: Fang Lin, Mohammad Ghasemzadeh, Bita Darvish Rouhani, Farinaz Koushanfar
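One natural way to realize the dictionary/coefficient factorization described here is truncated SVD; the patent does not mandate SVD specifically, so treat this as an assumed concrete instance:

```python
import numpy as np

def decompose_layer(weight, rank):
    """Replace a layer's weight matrix W with a dictionary matrix D and
    a coefficient matrix C such that D @ C is a reduced-dimension
    representation of W (illustrative truncated-SVD realization)."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    dictionary = u[:, :rank] * s[:rank]   # D: (out_dim, rank)
    coefficients = vt[:rank, :]           # C: (rank, in_dim)
    return dictionary, coefficients

def compressed_forward(x, dictionary, coefficients):
    # The transformed layer computes D @ (C @ x): fewer
    # multiply-accumulates than W @ x when rank << min(W.shape).
    return dictionary @ (coefficients @ x)
```

For a layer whose weights are close to low-rank, a small `rank` preserves the layer's outputs while shrinking the model deployed to the client.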
-
Publication number: 20220083865
Abstract: A framework is presented that provides a shift in the conceptual and practical realization of privacy-preserving inference on deep neural networks. The framework leverages the concept of binary neural networks (BNNs) in conjunction with the garbled circuits (GC) protocol. In BNNs, the weights and activations are restricted to binary (e.g., ±1) values, substituting the costly multiplications with simple XNOR operations during the inference phase. The XNOR operation is known to be free in the GC protocol; therefore, performing oblivious inference on BNNs using GC removes the costly multiplications. The approach consistent with implementations of the current subject matter allows oblivious inference on standard DL benchmarks to be performed with minimal, if any, decrease in prediction accuracy.
Type: Application
Filed: January 17, 2020
Publication date: March 17, 2022
Inventors: Mohammad Sadegh Riazi, Farinaz Koushanfar, Mohammad Samragh Razlighi
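The XNOR substitution at the heart of this abstract is easy to demonstrate in the clear: with ±1 operands packed as bits, an inner product reduces to XNOR plus popcount, since dot(a, b) = matches − mismatches = 2·popcount(XNOR(a, b)) − n. The bit-packing layout below is an illustrative choice (and of course a real system evaluates this inside garbled circuits, not in Python):

```python
def binarize(values):
    """Pack real weights/activations into BNN-style bits:
    value >= 0 maps to +1 (bit 1), value < 0 maps to -1 (bit 0)."""
    bits = 0
    for i, v in enumerate(values):
        if v >= 0:
            bits |= 1 << i
    return bits

def xnor_dot(a_bits, b_bits, n):
    """Binary inner product of two n-element ±1 vectors via
    XNOR + popcount, replacing n multiplications. XNOR gates are
    free in the garbled circuits protocol."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n
```

Every multiply-accumulate in a BNN layer becomes one such XNOR-popcount step, which is why oblivious inference on BNNs sheds the protocol's dominant cost.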
-
Publication number: 20210295166
Abstract: A system may include a processor and a memory. The memory may include program code that provides operations when executed by the processor. The operations may include: partitioning, based at least on a resource constraint of a platform, a global machine learning model into a plurality of local machine learning models; transforming training data to at least conform to the resource constraint of the platform; and training the global machine learning model by at least processing, at the platform, the transformed training data with a first of the plurality of local machine learning models.
Type: Application
Filed: February 6, 2017
Publication date: September 23, 2021
Applicant: WILLIAM MARSH RICE UNIVERSITY
Inventors: Bita Darvish Rouhani, Azalia Mirhoseini, Farinaz Koushanfar
-
Publication number: 20210166106
Abstract: A method may include training, based on a training dataset, a machine learning model. The machine learning model may include a neuron configured to generate an output by applying, to one or more inputs to the neuron, an activation function. The output of the activation function may be subject to a multi-level binarization function configured to generate an estimate of the output. The estimate of the output may include a first bit providing a first binary representation of the output and a second bit providing a second binary representation of a first residual error associated with the first binary representation of the output. In response to determining that the training of the machine learning model is complete, the trained machine learning model may be deployed to perform a cognitive task. Related systems and articles of manufacture, including computer program products, are also provided.
Type: Application
Filed: December 12, 2018
Publication date: June 3, 2021
Inventors: Mohammad Ghasemzadeh, Farinaz Koushanfar, Mohammad Samragh Razlighi
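The multi-level binarization idea, where each additional bit encodes the residual error left by the previous level, can be sketched directly. Scaling each level by the mean magnitude of its residual is a common choice but an assumption here; the patent's exact scaling may differ:

```python
def residual_binarize(x, levels=2):
    """Multi-level binarization sketch: level 1 takes the sign of the
    activation output; each later level takes the sign of the residual
    error left by the levels before it."""
    residual = list(x)
    bits, scales = [], []
    for _ in range(levels):
        scale = sum(abs(r) for r in residual) / len(residual)
        level_bits = [1 if r >= 0 else -1 for r in residual]
        bits.append(level_bits)
        scales.append(scale)
        # The next level encodes what this level failed to capture.
        residual = [r - scale * b for r, b in zip(residual, level_bits)]
    return bits, scales

def reconstruct(bits, scales):
    """Estimate of the original output from its binary levels."""
    n = len(bits[0])
    est = [0.0] * n
    for level_bits, scale in zip(bits, scales):
        for i in range(n):
            est[i] += scale * level_bits[i]
    return est
```

Each extra level tightens the estimate while keeping every per-level operation binary, which is what makes the scheme hardware-friendly.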
-
Publication number: 20210019605
Abstract: A method may include embedding, in a hidden layer and/or an output layer of a first machine learning model, a first digital watermark. The first digital watermark may correspond to input samples altering the low probabilistic regions of an activation map associated with the hidden layer of the first machine learning model. Alternatively, the first digital watermark may correspond to input samples rarely encountered by the first machine learning model. The first digital watermark may be embedded in the first machine learning model by at least training, based on training data including the input samples, the first machine learning model. A second machine learning model may be determined to be a duplicate of the first machine learning model based on a comparison of the first digital watermark embedded in the first machine learning model and a second digital watermark extracted from the second machine learning model.
Type: Application
Filed: March 21, 2019
Publication date: January 21, 2021
Inventors: Bita Darvish Rouhani, Huili Chen, Farinaz Koushanfar
-
Publication number: 20210012196
Abstract: A method may include training, based on a first training data available at a first node in a network, a first local machine learning model. A first local belief of a parameter set of a global machine learning model may be updated based on the training of the first local machine learning model. A second local belief of the parameter set of the global machine learning model may be received from a second node in the network. The second local belief may have been updated based on the second node training a second local machine learning model. The second local machine learning model may be trained based on a second training data available at the second node. The first local belief may be updated based on the second local belief of the second node. Related systems and articles of manufacture, including computer program products, are also provided.
Type: Application
Filed: July 10, 2020
Publication date: January 14, 2021
Inventors: Anusha Lalitha, Tara Javidi, Farinaz Koushanfar, Osman Cihan Kilinc
-
Publication number: 20200410404
Abstract: A computing system can include a plurality of clients located outside a cloud-based computing environment, where each of the clients may be configured to encode respective original data with a respective unique secret key to generate data hypervectors that encode the original data. A collaborative machine learning system can operate in the cloud-based computing environment and can be operatively coupled to the plurality of clients, where the collaborative machine learning system can be configured to operate on the data hypervectors that encode the original data to train a machine learning model operated by the collaborative machine learning system or to generate an inference from the machine learning model.
Type: Application
Filed: June 29, 2020
Publication date: December 31, 2020
Inventors: Mohsen Imani, Yeseong Kim, Tajana Rosing, Farinaz Koushanfar, Mohammad Sadegh Riazi
-
Publication number: 20200167471
Abstract: A method for detecting and/or preventing an adversarial attack against a target machine learning model may be provided. The method may include training, based at least on training data, a defender machine learning model to enable the defender machine learning model to identify malicious input samples. The trained defender machine learning model may be deployed at the target machine learning model. The trained defender machine learning model may be coupled with the target machine learning model to at least determine whether an input sample received at the target machine learning model is a malicious input sample and/or a legitimate input sample. Related systems and articles of manufacture, including computer program products, are also provided.
Type: Application
Filed: July 12, 2018
Publication date: May 28, 2020
Inventors: Bita Darvish Rouhani, Tara Javidi, Farinaz Koushanfar, Mohammad Samragh Razlighi