Patents by Inventor Virendra J. Marathe

Virendra J. Marathe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250021664
    Abstract: Subject level privacy attack analysis for federated learning may be performed. A request that selects an analysis of one or more inference attacks may be received to determine a presence of data of a subject in a training set of a federated machine learning model. The selected inference attacks may be performed to determine the presence of the data of the subject in the training set of the federated machine learning model. Respective success measurements may be generated for the selected inference attacks based on the performance of the selected inference attacks, which may then be provided.
    Type: Application
    Filed: September 27, 2024
    Publication date: January 16, 2025
    Inventors: Pallika Haridas Kanani, Virendra J. Marathe, Daniel Wyde Peterson, Anshuman Suri
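
A minimal sketch of how such an analysis might score one inference attack at the subject level, assuming a loss-threshold membership test and a toy model_loss stand-in; both are hypothetical illustrations, not the patented system:

```python
# Hypothetical sketch: a loss-threshold membership inference attack evaluated at
# the subject level. `model_loss`, the threshold, and the data are assumptions.
import random
import statistics

def model_loss(record, is_member):
    # Stand-in for querying the trained federated model: members of the
    # training set tend to have lower loss than non-members.
    base = 0.4 if is_member else 0.9
    return max(0.0, random.gauss(base, 0.25))

def subject_score(records, is_member):
    # Aggregate record-level losses into one score per subject.
    return statistics.mean(model_loss(r, is_member) for r in records)

def run_attack(member_subjects, nonmember_subjects, threshold=0.65):
    # Predict "subject's data is in the training set" when the subject's mean
    # loss is low, then report a success measurement (attack accuracy).
    correct = 0
    for recs in member_subjects:
        correct += subject_score(recs, True) < threshold
    for recs in nonmember_subjects:
        correct += subject_score(recs, False) >= threshold
    return correct / (len(member_subjects) + len(nonmember_subjects))

random.seed(0)
members = [[f"m{i}_{j}" for j in range(5)] for i in range(50)]
nonmembers = [[f"n{i}_{j}" for j in range(5)] for i in range(50)]
print(f"attack success (accuracy): {run_attack(members, nonmembers):.2f}")
```

The returned accuracy is the kind of per-attack success measurement the abstract describes; a real analysis would run each selected attack against the actual federated model.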
  • Publication number: 20240394597
    Abstract: Federated training of a machine learning model with enforcement of subject level privacy is implemented. Respective samples of data items from a training data set are generated at multiple nodes of a federated machine learning system. Noise values are determined for individual ones of the sampled data items according to respective counts of data items of particular subjects and the cumulative counts of the items of the subjects. Respective gradients for the data items are then determined. The gradients are then clipped and noise values are applied. Each subject's noisy, clipped gradients in the sample are then aggregated. The aggregated gradients for the entire sample are then used for determining machine learning model updates.
    Type: Application
    Filed: March 6, 2024
    Publication date: November 28, 2024
    Inventors: Virendra J. Marathe, Pallika Haridas Kanani
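
The gradient pipeline in the abstract above can be pictured with a small numpy sketch; the clipping norm, noise scale, toy linear model, and learning rate are assumptions rather than the claimed method:

```python
# Illustrative sketch only: per-item noise scaled by subject counts, gradient
# clipping, per-subject aggregation, then a sample-wide model update.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
CLIP, SIGMA = 1.0, 0.8

def grad(w, x, y):
    # Gradient of a squared-error loss for a linear model (stand-in).
    return 2 * (w @ x - y) * x

def clip(g, c=CLIP):
    n = np.linalg.norm(g)
    return g * min(1.0, c / n) if n > 0 else g

def noisy_update(w, sample):
    # sample: list of (subject_id, x, y) items drawn at one federated node.
    counts = defaultdict(int)
    for s, _, _ in sample:
        counts[s] += 1
    per_subject = defaultdict(lambda: np.zeros_like(w))
    for s, x, y in sample:
        g = clip(grad(w, x, y))
        # Noise for this item depends on how many items its subject
        # contributed, so heavily represented subjects are masked more.
        noise = rng.normal(0, SIGMA * CLIP / counts[s], size=w.shape)
        per_subject[s] += g + noise          # aggregate per subject
    total = sum(per_subject.values())        # aggregate over the whole sample
    return w - 0.1 * total / len(sample)

w = np.zeros(3)
sample = [(i % 4, rng.normal(size=3), rng.normal()) for i in range(20)]
print(noisy_update(w, sample))
```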
  • Patent number: 12130929
    Abstract: Subject level privacy attack analysis for federated learning may be performed. A request that selects an analysis of one or more inference attacks may be received to determine a presence of data of a subject in a training set of a federated machine learning model. The selected inference attacks may be performed to determine the presence of the data of the subject in the training set of the federated machine learning model. Respective success measurements may be generated for the selected inference attacks based on the performance of the selected inference attacks, which may then be provided.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: October 29, 2024
    Assignee: Oracle International Corporation
    Inventors: Pallika Haridas Kanani, Virendra J. Marathe, Daniel Wyde Peterson, Anshuman Suri
  • Patent number: 12099885
    Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
    Type: Grant
    Filed: August 4, 2023
    Date of Patent: September 24, 2024
    Assignee: Oracle International Corporation
    Inventors: David Dice, Virendra J. Marathe
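
A toy Python model of the lock structure described in the abstract above, using ordinary threading primitives in place of the synthetic-level cohort machinery; it keeps a global reader count rather than a separate global reader lock and omits writer hand-off and fairness policies:

```python
# Simplified model, not the patented algorithm: node-level writer locks, a
# global writer lock, and a global reader count built from threading primitives.
import threading

class CohortRWLock:
    def __init__(self, num_nodes=2):
        self.node_writer_locks = [threading.Lock() for _ in range(num_nodes)]
        self.global_writer_lock = threading.Lock()
        self.readers = 0                         # global reader count
        self.readers_cv = threading.Condition()

    def write_acquire(self, node):
        self.node_writer_locks[node].acquire()   # compete within the NUMA node
        self.global_writer_lock.acquire()        # then globally among writers
        with self.readers_cv:                    # finally wait out active readers
            while self.readers > 0:
                self.readers_cv.wait()

    def write_release(self, node):
        self.global_writer_lock.release()
        self.node_writer_locks[node].release()

    def read_acquire(self):
        # Readers hold the writer-side lock only long enough to bump the count.
        self.global_writer_lock.acquire()
        with self.readers_cv:
            self.readers += 1
        self.global_writer_lock.release()

    def read_release(self):
        with self.readers_cv:
            self.readers -= 1
            self.readers_cv.notify_all()

lock = CohortRWLock()
lock.write_acquire(node=0); lock.write_release(node=0)
lock.read_acquire(); lock.read_release()
```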
  • Patent number: 11941429
    Abstract: A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, a list of persistent memory locations of words to be updated, respective expected current values contained in the persistent memory locations and respective new values to write to the persistent memory locations are provided. The operation atomically performs the process of comparing the existing contents of the persistent memory locations to the respective current values and, should they match, updating the persistent memory locations with the new values and returning a successful status. Should any of the contents of the persistent memory locations not match a respective current value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
    Type: Grant
    Filed: April 7, 2022
    Date of Patent: March 26, 2024
    Assignee: Oracle International Corporation
    Inventors: Virendra J. Marathe, Matej Pavlovic, Alex Kogan, Timothy L. Harris
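
The operation's semantics (compare every word, update all or none, report a status, keep enough state to restore the locations) can be sketched in Python; the dict-based "persistent memory", no-op persist barrier, and guard lock are stand-ins, not the failure-atomic, lock-free design the patent covers:

```python
# Semantics-only sketch of a persistent multi-word compare-and-swap.
import threading

memory = {}                    # address -> word value ("persistent" memory)
pmcas_log = []                 # addresses + old values, for crash recovery
_guard = threading.Lock()      # models atomicity; not part of the real design

def persist(*_):
    pass                       # placeholder for cache-line flush + fence

def pmcas(entries):
    """entries: list of (address, expected_value, new_value)."""
    with _guard:
        # Phase 1: compare every location against its expected value.
        if any(memory.get(a) != exp for a, exp, _ in entries):
            return False                       # failed status, nothing changed
        # Phase 2: log old values so recovery can restore the locations.
        pmcas_log[:] = [(a, memory.get(a)) for a, _, _ in entries]
        persist(pmcas_log)
        # Phase 3: install the new values and persist them.
        for a, _, new in entries:
            memory[a] = new
            persist(a)
        pmcas_log.clear()                      # operation committed
        return True                            # successful status

memory.update({0x10: 1, 0x18: 2})
print(pmcas([(0x10, 1, 100), (0x18, 2, 200)]))   # True, both words updated
print(pmcas([(0x10, 1, 5)]))                     # False, expected value stale
```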
  • Publication number: 20240064016
    Abstract: Parameter permutation is performed for federated learning to train a machine learning model. Parameter permutation is performed by client systems of a federated machine learning system on updated parameters of a machine learning model that have been updated as part of training using local training data. An intra-model shuffling technique is performed at the client systems according to a shuffling pattern. Then, the encoded parameters are provided to an aggregation server using Private Information Retrieval (PIR) queries generated according to the shuffling pattern.
    Type: Application
    Filed: August 7, 2023
    Publication date: February 22, 2024
    Inventors: Hamid Mozaffari, Virendra J. Marathe, David Dice
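
A minimal numpy illustration of intra-model shuffling with selection-vector queries; real PIR queries would be encrypted so the server never learns the shuffling pattern, so the plaintext one-hot rows here are only a stand-in:

```python
# Illustration only: shuffle the client's updated parameters and build one
# query per original index from the same private shuffling pattern.
import numpy as np

rng = np.random.default_rng(1)

def client_encode(params):
    pattern = rng.permutation(len(params))     # private shuffling pattern
    shuffled = np.empty_like(params)
    shuffled[pattern] = params                 # position pattern[i] holds params[i]
    queries = np.eye(len(params))[pattern]     # stand-in for encrypted PIR queries
    return shuffled, queries

def server_aggregate(shuffled, queries):
    # The server answers each query against the shuffled vector; with encrypted
    # queries it would do this obliviously, never learning the permutation.
    return queries @ shuffled

params = np.array([0.5, -1.2, 3.3, 0.0])
shuffled, queries = client_encode(params)
print(server_aggregate(shuffled, queries))     # recovers params in order
```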
  • Publication number: 20230401113
    Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
    Type: Application
    Filed: August 4, 2023
    Publication date: December 14, 2023
    Applicant: Oracle International Corporation
    Inventors: David Dice, Virendra J. Marathe
  • Publication number: 20230394374
    Abstract: Hierarchical gradient averaging is performed as part of training a machine learning model to enforce subject level privacy. A sample of data items from a training data set is identified and respective gradients for the data items are determined. The gradients are then clipped. Each subject's clipped gradients in the sample are averaged. A noise value is added to a sum of the averaged gradients of each of the subjects in the sample. An average gradient for the entire sample is determined from the averaged gradients of the individual subjects with the added noise value. This average gradient for the entire sample is used for determining machine learning model updates.
    Type: Application
    Filed: June 6, 2022
    Publication date: December 7, 2023
    Inventors: Virendra J. Marathe, Pallika Haridas Kanani
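
A hedged numpy sketch of that hierarchy (clip per-item gradients, average within each subject, add one noise value to the sum of per-subject averages, then average over subjects); the constants and toy gradients are assumptions:

```python
# Sketch only: hierarchical gradient averaging for subject-level privacy.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
CLIP, SIGMA = 1.0, 1.2

def clip(g, c=CLIP):
    n = np.linalg.norm(g)
    return g * min(1.0, c / n) if n > 0 else g

def sample_gradient(per_item_grads):
    """per_item_grads: list of (subject_id, gradient ndarray)."""
    by_subject = defaultdict(list)
    for s, g in per_item_grads:
        by_subject[s].append(clip(g))
    # Per-subject averages bound each subject's influence to one clipped vector.
    subject_avgs = [np.mean(gs, axis=0) for gs in by_subject.values()]
    noisy_sum = np.sum(subject_avgs, axis=0) + rng.normal(
        0, SIGMA * CLIP, subject_avgs[0].shape)
    return noisy_sum / len(subject_avgs)       # average gradient for the sample

grads = [(i % 3, rng.normal(size=4)) for i in range(12)]
print(sample_gradient(grads))
```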
  • Patent number: 11762711
    Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: September 19, 2023
    Assignee: Oracle International Corporation
    Inventors: David Dice, Virendra J. Marathe
  • Publication number: 20230274004
    Abstract: Subject level privacy attack analysis for federated learning may be performed. A request that selects an analysis of one or more inference attacks may be received to determine a presence of data of a subject in a training set of a federated machine learning model. The selected inference attacks may be performed to determine the presence of the data of the subject in the training set of the federated machine learning model. Respective success measurements may be generated for the selected inference attacks based on the performance of the selected inference attacks, which may then be provided.
    Type: Application
    Filed: February 25, 2022
    Publication date: August 31, 2023
    Inventors: Pallika Haridas Kanani, Virendra J. Marathe, Daniel Wyde Peterson, Anshuman Suri
  • Publication number: 20230052231
    Abstract: Group-level privacy preservation is implemented within federated machine learning. An aggregation server may distribute a machine learning model to multiple users each including respective private datasets. The private datasets may individually include multiple items associated with a single group. Individual users may train the model using their local, private dataset to generate one or more parameter updates and to determine a count of the largest number of items associated with any single group of a number of groups in the dataset. Parameter updates generated by the individual users may be modified by applying respective noise values to individual ones of the parameter updates according to the respective counts to ensure differential privacy for the groups of the dataset. The aggregation server may aggregate the updates into a single set of parameter updates to update the machine learning model.
    Type: Application
    Filed: May 11, 2022
    Publication date: February 16, 2023
    Inventors: Virendra J. Marathe, Pallika Haridas Kanani
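
One way to picture the client-side step, under assumed constants and a plain mean on the server: count the largest number of items any single group contributes and calibrate Gaussian noise on the parameter update to that count:

```python
# Sketch under assumptions: group-count-calibrated noise on client updates.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
SIGMA = 0.5

def privatize_update(update, group_ids):
    max_count = max(Counter(group_ids).values())     # largest single-group count
    # More items from one group means that group could shift the update more,
    # so the noise added to mask it scales up accordingly.
    return update + rng.normal(0, SIGMA * max_count, size=update.shape)

def aggregate(updates):
    return np.mean(updates, axis=0)                  # server-side aggregation

clients = [
    (np.array([0.2, -0.1]), ["g1", "g1", "g2"]),
    (np.array([0.4,  0.3]), ["g3", "g3", "g3", "g4"]),
]
noisy = [privatize_update(u, gids) for u, gids in clients]
print(aggregate(noisy))
```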
  • Patent number: 11487427
    Abstract: Concurrent threads may be synchronized at the level of the memory words they access rather than at the level of the lock that protects the execution of critical sections. Each lock may be associated with an array of flags and each flag may indicate ownership of certain memory words. A pessimistic thread may set flags corresponding to memory words it is accessing in the critical section, while an optimistic thread may read the corresponding flag before any memory access to ensure that the flag is not set and that therefore the associated memory word is not being accessed by the other thread. Thus, optimistic threads that do not have conflicts with the pessimistic thread may not have to wait for the pessimistic thread to release the lock before proceeding.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: November 1, 2022
    Assignee: Oracle International Corporation
    Inventors: Alex Kogan, David Dice, Virendra J. Marathe
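
A simplified Python model of the flag-array idea in the abstract above; the hash-to-flag mapping, retry loop, and lock standing in for atomic flag updates are illustrative assumptions, and the sketch does not reproduce the memory-ordering guarantees a real implementation needs:

```python
# Toy model: a per-lock flag array indicating which memory words the
# pessimistic lock holder currently owns.
import threading

NUM_FLAGS = 64
flags = [0] * NUM_FLAGS                 # per-lock flag array: word ownership
flag_guard = threading.Lock()           # stands in for atomic flag updates
memory = [0] * 1024

def slot(addr):
    return addr % NUM_FLAGS             # map a word address to its flag

def pessimistic_write(addr, value):
    with flag_guard:
        flags[slot(addr)] = 1           # claim the word before touching it
    memory[addr] = value                # critical-section access
    with flag_guard:
        flags[slot(addr)] = 0           # release the word

def optimistic_read(addr):
    # Check the flag before and after the access: if it stays clear, the word
    # was not owned by the pessimistic thread during the read, so there is no
    # need to wait for the full lock. A set flag means conflict, so retry.
    while True:
        if flags[slot(addr)] == 0:
            value = memory[addr]
            if flags[slot(addr)] == 0:
                return value

pessimistic_write(5, 42)
print(optimistic_read(5))               # 42
```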
  • Patent number: 11443240
    Abstract: Herein are techniques for domain adaptation of a machine learning (ML) model. These techniques impose differential privacy onto federated learning by the ML model. In an embodiment, each of many client devices receive, from a server, coefficients of a general ML model. For respective new data point(s), each client device operates as follows. Based on the new data point(s), a respective private ML model is trained. Based on the new data point(s), respective gradients are calculated for the coefficients of the general ML model. Random noise is added to the gradients to generate respective noisy gradients. A combined inference may be generated based on: the private ML model, the general ML model, and one of the new data point(s). The noisy gradients are sent to the server. The server adjusts the general ML model based on the noisy gradients from the client devices. This client/server process may be repeated indefinitely.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 13, 2022
    Assignee: Oracle International Corporation
    Inventors: Daniel Peterson, Pallika Haridas Kanani, Virendra J. Marathe
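
The client/server round can be sketched as follows, assuming linear models, a fixed noise scale, and plain gradient averaging on the server; it mirrors the flow in the abstract rather than the exact claimed method:

```python
# Sketch only: each client trains a private model, computes noisy gradients for
# the shared general model, and the server averages those gradients.
import numpy as np

rng = np.random.default_rng(0)
SIGMA, LR = 0.1, 0.05

def client_round(general_w, points):
    xs = np.array([x for x, _ in points])
    ys = np.array([y for _, y in points])
    private_w = np.linalg.lstsq(xs, ys, rcond=None)[0]        # local private model
    grads = 2 * xs.T @ (xs @ general_w - ys) / len(points)    # grads for general model
    noisy_grads = grads + rng.normal(0, SIGMA, grads.shape)   # differential privacy
    combined = 0.5 * (xs[0] @ private_w + xs[0] @ general_w)  # combined inference
    return noisy_grads, combined

def server_round(general_w, all_noisy_grads):
    return general_w - LR * np.mean(all_noisy_grads, axis=0)

general_w = np.zeros(3)
clients = [[(rng.normal(size=3), rng.normal()) for _ in range(8)] for _ in range(4)]
grads = [client_round(general_w, pts)[0] for pts in clients]
print(server_round(general_w, grads))
```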
  • Publication number: 20220229691
    Abstract: A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, a list of persistent memory locations of words to be updated, respective expected current values contained in the persistent memory locations and respective new values to write to the persistent memory locations are provided. The operation atomically performs the process of comparing the existing contents of the persistent memory locations to the respective current values and, should they match, updating the persistent memory locations with the new values and returning a successful status. Should any of the contents of the persistent memory locations not match a respective current value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
    Type: Application
    Filed: April 7, 2022
    Publication date: July 21, 2022
    Inventors: Virendra J. Marathe, Matej Pavlovic, Alex Kogan, Timothy L. Harris
  • Patent number: 11379324
    Abstract: Undo logging for persistent memory transactions may permit concurrent transactions to write to the same persistent object. After an undo log record has been written, a single persist barrier may be issued. The tail pointer of the undo log may be updated after the persist barrier, and without another persist barrier, so the tail update may be persisted when the next log record is written and persisted. Undo logging for persistent memory transactions may rely on inferring the tail of an undo log after a failure rather than relying on a guaranteed correct tail pointer based on persisting the tail after every append. Additionally, transaction version numbers and checksum information may be stored to the undo log enabling failure recovery.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: July 5, 2022
    Assignee: Oracle International Corporation
    Inventors: Virendra J. Marathe, Margo I. Seltzer, Steve Byan
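
A toy model of the logging and recovery flow described above, with a Python list as the "persistent" log, a no-op persist barrier, and recovery that infers the tail by validating per-record checksums instead of trusting a persisted tail pointer:

```python
# Toy model, not the patented implementation: undo logging with a single
# persist barrier per record and checksum-based tail inference on recovery.
import zlib

log = []                 # "persistent" undo log records
tail = 0                 # cached tail; may be stale after a crash
memory = {}

def persist_barrier():
    pass                 # placeholder for cache-line flush + fence

def record(txn, addr, old_value):
    payload = (txn, addr, old_value)
    log.append(payload + (zlib.crc32(repr(payload).encode()),))
    persist_barrier()    # the single barrier: record is durable before the write
    global tail
    tail = len(log)      # tail update rides along with the next record's persist

def tx_write(txn, addr, value):
    record(txn, addr, memory.get(addr))
    memory[addr] = value

def recover():
    # Infer the tail: stop at the first record whose checksum does not verify.
    valid = 0
    for txn, addr, old, crc in log:
        if zlib.crc32(repr((txn, addr, old)).encode()) != crc:
            break
        valid += 1
    for txn, addr, old, _ in reversed(log[:valid]):   # roll back newest first
        if old is None:
            memory.pop(addr, None)
        else:
            memory[addr] = old

tx_write(1, "x", 10); tx_write(1, "y", 20)
recover()
print(memory)            # {} -- uncommitted writes undone
```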
  • Patent number: 11321117
    Abstract: A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, a list of persistent memory locations of words to be updated, respective expected current values contained in the persistent memory locations and respective new values to write to the persistent memory locations are provided. The operation atomically performs the process of comparing the existing contents of the persistent memory locations to the respective current values and, should they match, updating the persistent memory locations with the new values and returning a successful status. Should any of the contents of the persistent memory locations not match a respective current value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: May 3, 2022
    Assignee: Oracle International Corporation
    Inventors: Virendra J. Marathe, Matej Pavlovic, Alex Kogan, Timothy L. Harris
  • Publication number: 20220100587
    Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
    Type: Application
    Filed: December 10, 2021
    Publication date: March 31, 2022
    Inventors: David Dice, Virendra J. Marathe
  • Patent number: 11226849
    Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: January 18, 2022
    Assignee: Oracle International Corporation
    Inventors: David Dice, Virendra J. Marathe
  • Patent number: 11216274
    Abstract: A computer comprising one or more processors and memory may implement an atomic compare and swap (CAS) operation on multiple data elements. Each data element has a corresponding descriptor which includes a new value and a reference to a controlling descriptor for the CAS operation. The controlling descriptor includes a status value which indicates whether the CAS operation is in progress or has completed. The operation first allocates memory locations of the data elements by writing addresses of respective descriptors to the memory locations using CAS instructions. The operation then writes successful status to the status value of the controlling descriptor to indicate that the respective memory locations are no longer allocated. The operation then returns an indicator of successful completion without atomically updating the memory locations with the new values. Extensions are further described to implement CAS operations in non-volatile random access memories.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: January 4, 2022
    Assignee: Oracle International Corporation
    Inventors: Virendra J. Marathe, Alex Kogan, Mihail-Igor Zablotchi
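
A pedagogical sketch of the descriptor mechanism described above, where a lock-protected dict models word-level CAS and object identity plays the role of descriptor addresses; the helping protocol and non-volatile-memory extensions of a real implementation are omitted:

```python
# Sketch only: descriptor-based multi-word CAS committed by a single status write.
import threading

memory = {}
_cas_lock = threading.Lock()

IN_PROGRESS, SUCCESS, FAILED = "in_progress", "success", "failed"

class Controlling:
    def __init__(self):
        self.status = IN_PROGRESS

class WordDescriptor:
    def __init__(self, ctrl, old, new):
        self.ctrl, self.old, self.new = ctrl, old, new

def word_cas(addr, expected, new):
    with _cas_lock:                       # models a single-word CAS instruction
        if memory.get(addr) is expected or memory.get(addr) == expected:
            memory[addr] = new
            return True
        return False

def mcas(entries):
    """entries: list of (address, expected_value, new_value)."""
    ctrl = Controlling()
    descs = [(a, WordDescriptor(ctrl, exp, new)) for a, exp, new in entries]
    for addr, d in descs:                 # phase 1: install descriptor addresses
        if not word_cas(addr, d.old, d):
            ctrl.status = FAILED
            for a2, d2 in descs:          # roll back any installed descriptors
                word_cas(a2, d2, d2.old)
            return False
    ctrl.status = SUCCESS                 # phase 2: one status write commits all
    return True                           # no per-word write of new values needed

def read(addr):
    v = memory.get(addr)
    if isinstance(v, WordDescriptor):     # resolve through the descriptor
        return v.new if v.ctrl.status == SUCCESS else v.old
    return v

memory.update({0: "a", 1: "b"})
print(mcas([(0, "a", "A"), (1, "b", "B")]), read(0), read(1))   # True A B
```

As in the abstract, a successful operation returns without rewriting the memory locations themselves; readers resolve the installed descriptors through the controlling descriptor's status.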
  • Publication number: 20210073677
    Abstract: Herein are techniques for domain adaptation of a machine learning (ML) model. These techniques impose differential privacy onto federated learning by the ML model. In an embodiment, each of many client devices receive, from a server, coefficients of a general ML model. For respective new data point(s), each client device operates as follows. Based on the new data point(s), a respective private ML model is trained. Based on the new data point(s), respective gradients are calculated for the coefficients of the general ML model. Random noise is added to the gradients to generate respective noisy gradients. A combined inference may be generated based on: the private ML model, the general ML model, and one of the new data point(s). The noisy gradients are sent to the server. The server adjusts the general ML model based on the noisy gradients from the client devices. This client/server process may be repeated indefinitely.
    Type: Application
    Filed: March 25, 2020
    Publication date: March 11, 2021
    Inventors: Daniel Peterson, Pallika Haridas Kanani, Virendra J. Marathe