Patents by Inventor Virendra J. Marathe
Virendra J. Marathe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250021664
Abstract: Subject level privacy attack analysis for federated learning may be performed. A request that selects an analysis of one or more inference attacks may be received to determine a presence of data of a subject in a training set of a federated machine learning model. The selected inference attacks may be performed to determine the presence of the data of the subject in the training set of the federated machine learning model. Respective success measurements may be generated for the selected inference attacks based on the performance of the selected inference attacks, which may then be provided.
Type: Application
Filed: September 27, 2024
Publication date: January 16, 2025
Inventors: Pallika Haridas Kanani, Virendra J. Marathe, Daniel Wyde Peterson, Anshuman Suri
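As a rough illustration of the kind of inference attack and success measurement the abstract describes, here is a minimal sketch of a loss-threshold membership test. All names, thresholds, and data are hypothetical; this is not the patented analysis framework.

```cpp
#include <cstdio>
#include <vector>

// Per-record model loss; low loss is treated as evidence of membership.
struct Record { double loss; bool actually_in_training_set; };

// Attack: predict "member" when the loss falls below a threshold.
static bool predict_member(const Record& r, double threshold) {
    return r.loss < threshold;
}

// Success measurement: fraction of records where the attack's guess matches
// the ground truth (a subject-level analysis would aggregate per subject).
static double attack_accuracy(const std::vector<Record>& records, double threshold) {
    int correct = 0;
    for (const Record& r : records)
        if (predict_member(r, threshold) == r.actually_in_training_set) ++correct;
    return records.empty() ? 0.0 : static_cast<double>(correct) / records.size();
}

int main() {
    std::vector<Record> audit = {
        {0.05, true}, {0.10, true}, {0.90, false}, {1.20, false}, {0.30, true}
    };
    std::printf("attack success: %.2f\n", attack_accuracy(audit, 0.5));
}
```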
-
Publication number: 20240394597
Abstract: Federated training of a machine learning model with enforcement of subject level privacy is implemented. Respective samples of data items from a training data set are generated at multiple nodes of a federated machine learning system. Noise values are determined for individual ones of the sampled data items according to respective counts of data items of particular subjects and the cumulative counts of the items of the subjects. Respective gradients for the data items are then determined. The gradients are then clipped and noise values are applied. Each subject's noisy clipped gradients in the sample are then aggregated. The aggregated gradients for the entire sample are then used for determining machine learning model updates.
Type: Application
Filed: March 6, 2024
Publication date: November 28, 2024
Inventors: Virendra J. Marathe, Pallika Haridas Kanani
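A minimal sketch of the clip-noise-aggregate pipeline mentioned above, assuming a fixed clipping norm and Gaussian noise for illustration. The constants, subject names, and simple count-independent noise scale are illustrative, not the patented scheme.

```cpp
#include <cmath>
#include <cstdio>
#include <map>
#include <random>
#include <string>
#include <vector>

using Grad = std::vector<double>;

static double l2norm(const Grad& g) {
    double s = 0.0;
    for (double v : g) s += v * v;
    return std::sqrt(s);
}

// Clip a per-item gradient to an L2 norm of at most `clip`.
static Grad clip_gradient(Grad g, double clip) {
    double n = l2norm(g);
    if (n > clip) for (double& v : g) v *= clip / n;
    return g;
}

int main() {
    std::mt19937 rng(7);
    const double clip = 1.0, noise_scale = 0.5;   // illustrative constants
    // Sampled items: (subject id, gradient).
    std::vector<std::pair<std::string, Grad>> sample = {
        {"alice", {3.0, 0.0}}, {"alice", {0.2, 0.1}}, {"bob", {0.0, 2.0}}
    };
    // Clip, apply noise, and aggregate noisy clipped gradients per subject.
    std::map<std::string, Grad> per_subject;
    std::normal_distribution<double> gauss(0.0, noise_scale * clip);
    for (auto& [subject, g] : sample) {
        Grad c = clip_gradient(g, clip);
        for (double& v : c) v += gauss(rng);
        Grad& acc = per_subject[subject];
        if (acc.empty()) acc.assign(c.size(), 0.0);
        for (size_t i = 0; i < c.size(); ++i) acc[i] += c[i];
    }
    // Aggregate across subjects for the whole sample (used for the update).
    Grad update(2, 0.0);
    for (auto& [subject, acc] : per_subject)
        for (size_t i = 0; i < acc.size(); ++i) update[i] += acc[i];
    std::printf("update = (%.3f, %.3f)\n", update[0], update[1]);
}
```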
-
Patent number: 12130929
Abstract: Subject level privacy attack analysis for federated learning may be performed. A request that selects an analysis of one or more inference attacks may be received to determine a presence of data of a subject in a training set of a federated machine learning model. The selected inference attacks may be performed to determine the presence of the data of the subject in the training set of the federated machine learning model. Respective success measurements may be generated for the selected inference attacks based on the performance of the selected inference attacks, which may then be provided.
Type: Grant
Filed: February 25, 2022
Date of Patent: October 29, 2024
Assignee: Oracle International Corporation
Inventors: Pallika Haridas Kanani, Virendra J. Marathe, Daniel Wyde Peterson, Anshuman Suri
-
Patent number: 12099885
Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
Type: Grant
Filed: August 4, 2023
Date of Patent: September 24, 2024
Assignee: Oracle International Corporation
Inventors: David Dice, Virendra J. Marathe
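A minimal single-node sketch of the acquisition ordering described above (node-level writer lock, then the synthetic global writer lock, then the top-level lock, with readers holding the top-level lock only long enough to bump a count). The cohort hand-off, fairness policy, and global reader lock are omitted; the type and field names are illustrative, not the patented design.

```cpp
#include <atomic>
#include <cstdio>
#include <mutex>
#include <thread>

struct SyntheticRwLock {
    std::mutex node_writer_lock;            // one per NUMA node in the full scheme
    std::mutex global_writer_lock;          // synthetic level: serializes writers
    std::mutex top_level_lock;              // guards reader count / write entry
    std::atomic<int> active_readers{0};

    void write_lock() {
        node_writer_lock.lock();            // contend locally first
        global_writer_lock.lock();          // then at the synthetic global level
        top_level_lock.lock();              // finally take the top-level lock
        while (active_readers.load() != 0)  // wait for in-flight readers to drain
            std::this_thread::yield();
    }
    void write_unlock() {
        top_level_lock.unlock();
        global_writer_lock.unlock();
        node_writer_lock.unlock();
    }
    void read_lock() {
        // Readers hold the top-level lock only long enough to bump the count.
        std::lock_guard<std::mutex> g(top_level_lock);
        active_readers.fetch_add(1);
    }
    void read_unlock() { active_readers.fetch_sub(1); }
};

int main() {
    SyntheticRwLock lk;
    int shared = 0;
    std::thread w([&] { lk.write_lock(); shared = 42; lk.write_unlock(); });
    std::thread r([&] { lk.read_lock(); std::printf("read %d\n", shared); lk.read_unlock(); });
    w.join();
    r.join();
}
```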
-
Patent number: 11941429
Abstract: A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, a list of persistent memory locations of words to be updated, respective expected current values contained in the persistent memory locations and respective new values to write to the persistent memory locations are provided. The operation atomically performs the process of comparing the existing contents of the persistent memory locations to the respective current values and, should they match, updating the persistent memory locations with the new values and returning a successful status. Should any of the contents of the persistent memory locations not match a respective current value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
Type: Grant
Filed: April 7, 2022
Date of Patent: March 26, 2024
Assignee: Oracle International Corporation
Inventors: Virendra J. Marathe, Matej Pavlovic, Alex Kogan, Timothy L. Harris
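To make the all-or-nothing semantics concrete, here is a minimal sketch of the multi-word compare-and-swap interface (list of locations, expected values, new values). A simple lock stands in for the actual protocol; the persistence and failure-recovery machinery that the patent addresses (persist barriers, recoverable state) is not modeled, and all names are illustrative.

```cpp
#include <cstdio>
#include <mutex>
#include <vector>

struct WordUpdate {
    long* location;    // word-addressable memory location
    long  expected;    // value the location must currently hold
    long  desired;     // value to install if all comparisons succeed
};

static std::mutex mcas_mutex;   // stand-in for the real (persistent) protocol

// Returns true (successful status) only if every location matched its expected
// value, in which case all locations are updated; otherwise nothing changes.
static bool multi_word_cas(const std::vector<WordUpdate>& updates) {
    std::lock_guard<std::mutex> g(mcas_mutex);
    for (const WordUpdate& u : updates)
        if (*u.location != u.expected) return false;   // failed status
    for (const WordUpdate& u : updates)
        *u.location = u.desired;                       // atomic w.r.t. this lock
    return true;                                       // successful status
}

int main() {
    long a = 1, b = 2;
    bool ok = multi_word_cas({{&a, 1, 10}, {&b, 2, 20}});
    std::printf("ok=%d a=%ld b=%ld\n", ok, a, b);
    ok = multi_word_cas({{&a, 1, 99}, {&b, 20, 99}});   // 'a' no longer matches
    std::printf("ok=%d a=%ld b=%ld\n", ok, a, b);
}
```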
-
Publication number: 20240064016
Abstract: Parameter permutation is performed for federated learning to train a machine learning model. Parameter permutation is performed by client systems of a federated machine learning system on updated parameters of a machine learning model that have been updated as part of training using local training data. An intra-model shuffling technique is performed at the client systems according to a shuffling pattern. Then, the encoded parameters are provided to an aggregation server using Private Information Retrieval (PIR) queries generated according to the shuffling pattern.Type: Application
Filed: August 7, 2023
Publication date: February 22, 2024
Inventors: Hamid Mozaffari, Virendra J. Marathe, David Dice
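A minimal sketch of the client-side intra-model shuffling step: updated parameters are permuted by a client-chosen pattern before upload. The PIR query construction that lets the server aggregate without learning the pattern is not shown, and all names are illustrative.

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::vector<double> updated_params = {0.10, -0.25, 0.07, 0.42};

    // Client-private shuffling pattern: a random permutation of the indices.
    std::vector<size_t> pattern(updated_params.size());
    std::iota(pattern.begin(), pattern.end(), 0);
    std::mt19937 rng(std::random_device{}());
    std::shuffle(pattern.begin(), pattern.end(), rng);

    // Apply the pattern: slot i of the upload holds parameter pattern[i].
    std::vector<double> shuffled(updated_params.size());
    for (size_t i = 0; i < pattern.size(); ++i)
        shuffled[i] = updated_params[pattern[i]];

    // PIR queries would be generated from `pattern` so the aggregation server
    // can combine parameters by index without learning the permutation.
    for (size_t i = 0; i < shuffled.size(); ++i)
        std::printf("slot %zu -> param %zu = %+.2f\n", i, pattern[i], shuffled[i]);
}
```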
-
Publication number: 20230401113
Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
Type: Application
Filed: August 4, 2023
Publication date: December 14, 2023
Applicant: Oracle International Corporation
Inventors: David Dice, Virendra J. Marathe
-
Publication number: 20230394374
Abstract: Hierarchical gradient averaging is performed as part of training a machine learning model to enforce subject level privacy. A sample of data items from a training data set is identified and respective gradients for the data items are determined. The gradients are then clipped. Each subject's clipped gradients in the sample are averaged. A noise value is added to a sum of the averaged gradients of each of the subjects in the sample. An average gradient for the entire sample is determined from the averaged gradients of the individual subjects with the added noise value. This average gradient for the entire sample is used for determining machine learning model updates.
Type: Application
Filed: June 6, 2022
Publication date: December 7, 2023
Inventors: Virendra J. Marathe, Pallika Haridas Kanani
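A minimal sketch of the two-level averaging the abstract walks through: clip per-item gradients, average them within each subject, add one noise value to the sum of the per-subject averages, then divide by the number of subjects. Scalar gradients, constants, and subject names are illustrative, not the patented method.

```cpp
#include <algorithm>
#include <cstdio>
#include <map>
#include <random>
#include <string>
#include <vector>

int main() {
    const double clip = 1.0, noise_stddev = 0.3;           // illustrative
    // (subject, scalar gradient) pairs from one sampled mini-batch.
    std::vector<std::pair<std::string, double>> sample = {
        {"alice", 2.5}, {"alice", 0.4}, {"bob", -3.0}, {"carol", 0.9}
    };

    // Level 1: clip each gradient, then average per subject.
    std::map<std::string, std::pair<double, int>> per_subject;  // sum, count
    for (auto& [subject, g] : sample) {
        double clipped = std::max(-clip, std::min(clip, g));
        per_subject[subject].first += clipped;
        per_subject[subject].second += 1;
    }

    // Level 2: sum the per-subject averages, add noise once, then average
    // across subjects to get the sample-wide gradient used for the update.
    double sum_of_averages = 0.0;
    for (auto& [subject, sc] : per_subject)
        sum_of_averages += sc.first / sc.second;
    std::mt19937 rng(11);
    std::normal_distribution<double> gauss(0.0, noise_stddev * clip);
    double sample_average = (sum_of_averages + gauss(rng)) / per_subject.size();

    std::printf("model update gradient = %+.4f\n", sample_average);
}
```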
-
Patent number: 11762711
Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
Type: Grant
Filed: December 10, 2021
Date of Patent: September 19, 2023
Assignee: Oracle International Corporation
Inventors: David Dice, Virendra J. Marathe
-
Publication number: 20230274004
Abstract: Subject level privacy attack analysis for federated learning may be performed. A request that selects an analysis of one or more inference attacks may be received to determine a presence of data of a subject in a training set of a federated machine learning model. The selected inference attacks may be performed to determine the presence of the data of the subject in the training set of the federated machine learning model. Respective success measurements may be generated for the selected inference attacks based on the performance of the selected inference attacks, which may then be provided.
Type: Application
Filed: February 25, 2022
Publication date: August 31, 2023
Inventors: Pallika Haridas Kanani, Virendra J. Marathe, Daniel Wyde Peterson, Anshuman Suri
-
Publication number: 20230052231
Abstract: Group-level privacy preservation is implemented within federated machine learning. An aggregation server may distribute a machine learning model to multiple users, each having a respective private dataset. The private datasets may individually include multiple items associated with a single group. Individual users may train the model using their local, private dataset to generate one or more parameter updates and to determine a count of the largest number of items associated with any single group of a number of groups in the dataset. Parameter updates generated by the individual users may be modified by applying respective noise values to individual ones of the parameter updates according to the respective counts to ensure differential privacy for the groups of the dataset. The aggregation server may aggregate the updates into a single set of parameter updates to update the machine learning model.
Type: Application
Filed: May 11, 2022
Publication date: February 16, 2023
Inventors: Virendra J. Marathe, Pallika Haridas Kanani
-
Patent number: 11487427
Abstract: Concurrent threads may be synchronized at the level of the memory words they access rather than at the level of the lock that protects the execution of critical sections. Each lock may be associated with an array of flags and each flag may indicate ownership of certain memory words. A pessimistic thread may set flags corresponding to memory words it is accessing in the critical section, while an optimistic thread may read the corresponding flag before any memory access to ensure that the flag is not set and that therefore the associated memory word is not being accessed by the other thread. Thus, optimistic threads that do not have conflicts with the pessimistic thread may not have to wait for the pessimistic thread to release the lock before proceeding.
Type: Grant
Filed: January 10, 2020
Date of Patent: November 1, 2022
Assignee: Oracle International Corporation
Inventors: Alex Kogan, David Dice, Virendra J. Marathe
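A minimal sketch of the per-word flag idea: the lock is paired with an ownership flag per word, the pessimistic path flags each word it writes, and the optimistic path checks the flag around its access instead of waiting on the lock. The memory-ordering and validation details that make this safe under real concurrency are elided, and all names and sizes are illustrative.

```cpp
#include <atomic>
#include <cstdio>
#include <mutex>

constexpr int kWords = 4;
long data[kWords] = {1, 2, 3, 4};
std::mutex section_lock;              // protects the critical section
std::atomic<bool> word_owned[kWords]; // per-word ownership flags for the lock

// Pessimistic path: take the lock, flag each word before writing it.
void pessimistic_update(int word, long value) {
    std::lock_guard<std::mutex> g(section_lock);
    word_owned[word].store(true);
    data[word] = value;
    word_owned[word].store(false);
}

// Optimistic path: skip the lock; check the word's flag around the access.
bool optimistic_read(int word, long* out) {
    if (word_owned[word].load()) return false;  // conflict with the lock holder
    *out = data[word];
    return !word_owned[word].load();            // revalidate after reading
}

int main() {
    pessimistic_update(2, 30);
    long v = 0;
    if (optimistic_read(0, &v)) std::printf("word 0 = %ld\n", v);
}
```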
-
Patent number: 11443240
Abstract: Herein are techniques for domain adaptation of a machine learning (ML) model. These techniques impose differential privacy onto federated learning by the ML model. In an embodiment, each of many client devices receives, from a server, coefficients of a general ML model. For respective new data point(s), each client device operates as follows. Based on the new data point(s), a respective private ML model is trained. Based on the new data point(s), respective gradients are calculated for the coefficients of the general ML model. Random noise is added to the gradients to generate respective noisy gradients. A combined inference may be generated based on: the private ML model, the general ML model, and one of the new data point(s). The noisy gradients are sent to the server. The server adjusts the general ML model based on the noisy gradients from the client devices. This client/server process may be repeated indefinitely.
Type: Grant
Filed: March 25, 2020
Date of Patent: September 13, 2022
Assignee: Oracle International Corporation
Inventors: Daniel Peterson, Pallika Haridas Kanani, Virendra J. Marathe
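A minimal sketch of the client-side step described above: compute gradients of the general model's coefficients on a new data point, add random noise, and ship only the noisy gradients. The private per-client model and the combined inference are omitted; the linear model, noise scale, and data are illustrative assumptions.

```cpp
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // General model received from the server: y = w[0]*x[0] + w[1]*x[1].
    std::vector<double> w = {0.5, -0.2};
    // One new data point observed on this client.
    std::vector<double> x = {1.0, 2.0};
    double y = 1.5;

    // Gradient of the squared error with respect to each coefficient.
    double pred = w[0] * x[0] + w[1] * x[1];
    double err = pred - y;
    std::vector<double> grad = {2 * err * x[0], 2 * err * x[1]};

    // Differential privacy: perturb each gradient with random noise before
    // sending it; the server averages noisy gradients from many clients and
    // adjusts the general model.
    std::mt19937 rng(std::random_device{}());
    std::normal_distribution<double> noise(0.0, 0.1);
    for (double& g : grad) g += noise(rng);

    std::printf("noisy gradients to upload: %+.4f %+.4f\n", grad[0], grad[1]);
}
```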
-
Publication number: 20220229691
Abstract: A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, a list of persistent memory locations of words to be updated, respective expected current values contained in the persistent memory locations and respective new values to write to the persistent memory locations are provided. The operation atomically performs the process of comparing the existing contents of the persistent memory locations to the respective current values and, should they match, updating the persistent memory locations with the new values and returning a successful status. Should any of the contents of the persistent memory locations not match a respective current value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
Type: Application
Filed: April 7, 2022
Publication date: July 21, 2022
Inventors: Virendra J. Marathe, Matej Pavlovic, Alex Kogan, Timothy L. Harris
-
Patent number: 11379324
Abstract: Undo logging for persistent memory transactions may permit concurrent transactions to write to the same persistent object. After an undo log record has been written, a single persist barrier may be issued. The tail pointer of the undo log may be updated after the persist barrier, and without another persist barrier, so the tail update may be persisted when the next log record is written and persisted. Undo logging for persistent memory transactions may rely on inferring the tail of an undo log after a failure rather than relying on a guaranteed correct tail pointer based on persisting the tail after every append. Additionally, transaction version numbers and checksum information may be stored to the undo log enabling failure recovery.
Type: Grant
Filed: June 19, 2020
Date of Patent: July 5, 2022
Assignee: Oracle International Corporation
Inventors: Virendra J. Marathe, Margo I. Seltzer, Steve Byan
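A minimal sketch of the append path and tail inference described above: write the undo record (with a checksum and transaction version), issue a single persist barrier, then update the tail without a second barrier so it becomes durable with the next append; recovery infers the tail by scanning for consistent records. persist_barrier() is a placeholder for flush-and-fence on real persistent memory, and the record layout and checksum are illustrative, not the patented format.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct UndoRecord {
    uintptr_t addr;        // persistent location being overwritten
    uint64_t  old_value;   // value to restore on abort/recovery
    uint64_t  txn_version; // transaction version number
    uint64_t  checksum;    // lets recovery detect a torn or garbage record
};

static uint64_t checksum_of(const UndoRecord& r) {
    return r.addr ^ r.old_value ^ r.txn_version ^ 0x9e3779b97f4a7c15ULL;
}

static void persist_barrier() { /* cache-line writeback + fence on real NVM */ }

struct UndoLog {
    std::vector<UndoRecord> records;
    size_t tail = 0;

    void append(uintptr_t addr, uint64_t old_value, uint64_t txn_version) {
        UndoRecord r{addr, old_value, txn_version, 0};
        r.checksum = checksum_of(r);
        records.push_back(r);   // write the record itself
        persist_barrier();      // the only barrier on this path
        tail = records.size();  // tail update persists with the next append
    }

    // Recovery: rather than trusting `tail`, scan forward and keep records
    // whose version and checksum are consistent.
    size_t infer_tail(uint64_t txn_version) const {
        size_t t = 0;
        while (t < records.size() &&
               records[t].txn_version == txn_version &&
               records[t].checksum == checksum_of(records[t])) ++t;
        return t;
    }
};

int main() {
    UndoLog log;
    log.append(0x1000, 7, 1);
    log.append(0x1008, 9, 1);
    std::printf("inferred tail = %zu\n", log.infer_tail(1));
}
```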
-
Patent number: 11321117
Abstract: A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, a list of persistent memory locations of words to be updated, respective expected current values contained in the persistent memory locations and respective new values to write to the persistent memory locations are provided. The operation atomically performs the process of comparing the existing contents of the persistent memory locations to the respective current values and, should they match, updating the persistent memory locations with the new values and returning a successful status. Should any of the contents of the persistent memory locations not match a respective current value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
Type: Grant
Filed: June 5, 2020
Date of Patent: May 3, 2022
Assignee: Oracle International Corporation
Inventors: Virendra J. Marathe, Matej Pavlovic, Alex Kogan, Timothy L. Harris
-
Publication number: 20220100587
Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
Type: Application
Filed: December 10, 2021
Publication date: March 31, 2022
Inventors: David Dice, Virendra J. Marathe
-
Patent number: 11226849
Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
Type: Grant
Filed: March 6, 2020
Date of Patent: January 18, 2022
Assignee: Oracle International Corporation
Inventors: David Dice, Virendra J. Marathe
-
Patent number: 11216274
Abstract: A computer comprising one or more processors and memory may implement an atomic compare and swap (CAS) operation on multiple data elements. Each data element has a corresponding descriptor which includes a new value and a reference to a controlling descriptor for the CAS operation. The controlling descriptor includes a status value which indicates whether the CAS operation is in progress or has completed. The operation first allocates memory locations of the data elements by writing addresses of respective descriptors to the memory locations using CAS instructions. The operation then writes successful status to the status value of the controlling descriptor to indicate that the respective memory locations are no longer allocated. The operation then returns an indicator of successful completion without atomically updating the memory locations with the new values. Extensions are further described to implement CAS operations in non-volatile random access memories.
Type: Grant
Filed: October 30, 2020
Date of Patent: January 4, 2022
Assignee: Oracle International Corporation
Inventors: Virendra J. Marathe, Alex Kogan, Mihail-Igor Zablotchi
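A minimal sketch of the descriptor shapes described above: each target word gets a per-word descriptor holding the new value and a pointer to a shared controlling descriptor whose status says whether the operation is in progress or has completed. The CAS-based install loop, helping protocol, and the non-volatile-memory extensions are omitted; all names are illustrative, not the patented implementation.

```cpp
#include <atomic>
#include <cstdio>
#include <vector>

enum class Status { InProgress, Successful, Failed };

struct ControllingDescriptor {
    std::atomic<Status> status{Status::InProgress};
};

struct WordDescriptor {
    std::atomic<long>* location;    // word targeted by the operation
    long expected;                  // old value
    long desired;                   // new value
    ControllingDescriptor* parent;  // shared status for the whole operation
};

// Readers resolve a word's logical value from the operation status: once the
// controlling descriptor says Successful, the new value counts as the word's
// value even if the location itself has not been rewritten yet.
static long resolve(const WordDescriptor& d) {
    return d.parent->status.load() == Status::Successful ? d.desired : d.expected;
}

int main() {
    std::atomic<long> a{1}, b{2};
    ControllingDescriptor op;
    std::vector<WordDescriptor> words = {{&a, 1, 10, &op}, {&b, 2, 20, &op}};

    // Install phase (simplified): the real operation would CAS each location
    // from its expected value to a pointer to its descriptor; here we only
    // check that every location still holds the expected value.
    bool all_match = true;
    for (auto& d : words)
        if (d.location->load() != d.expected) all_match = false;

    // Writing the final status into the controlling descriptor decides the
    // outcome for all words at once, without rewriting the locations here.
    op.status.store(all_match ? Status::Successful : Status::Failed);

    for (auto& d : words)
        std::printf("logical value = %ld\n", resolve(d));
}
```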
-
Publication number: 20210073677
Abstract: Herein are techniques for domain adaptation of a machine learning (ML) model. These techniques impose differential privacy onto federated learning by the ML model. In an embodiment, each of many client devices receives, from a server, coefficients of a general ML model. For respective new data point(s), each client device operates as follows. Based on the new data point(s), a respective private ML model is trained. Based on the new data point(s), respective gradients are calculated for the coefficients of the general ML model. Random noise is added to the gradients to generate respective noisy gradients. A combined inference may be generated based on: the private ML model, the general ML model, and one of the new data point(s). The noisy gradients are sent to the server. The server adjusts the general ML model based on the noisy gradients from the client devices. This client/server process may be repeated indefinitely.
Type: Application
Filed: March 25, 2020
Publication date: March 11, 2021
Inventors: Daniel Peterson, Pallika Haridas Kanani, Virendra J. Marathe