Patents by Inventor Om Thakkar

Om Thakkar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240158437
    Abstract: The invention provides a process for purifying an antibody or fusion protein from a protein mixture comprising product- and process-related impurities. The process uses hydroxyapatite chromatography to separate low-molecular-weight impurities and basic variants. The invention further provides a scalable purification process to remove product- and process-related impurities.
    Type: Application
    Filed: January 18, 2024
    Publication date: May 16, 2024
    Inventors: Om Narayan, Tarun Kumar Gupta, Mayankkumar Thakkar
  • Patent number: 11955134
    Abstract: A method of phrase extraction for ASR models includes obtaining audio data characterizing an utterance and a corresponding ground-truth transcription of the utterance and modifying the audio data to obfuscate a particular phrase recited in the utterance. The method also includes processing, using a trained ASR model, the modified audio data to generate a predicted transcription of the utterance, and determining whether the predicted transcription includes the particular phrase by comparing the predicted transcription of the utterance to the ground-truth transcription of the utterance. When the predicted transcription includes the particular phrase, the method includes generating an output indicating that the trained ASR model leaked the particular phrase from a training data set used to train the ASR model.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Ehsan Amid, Om Thakkar, Rajiv Mathews, Francoise Beaufays
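    The leakage test described in the abstract can be sketched in a few lines. This is a minimal illustration, not the patented method: `asr_model` is a hypothetical callable mapping a waveform to a transcript, the waveform is a plain list of samples, and obfuscation is simulated by zeroing the sample range where the phrase occurs. The key idea is the same: if the model still emits a phrase that was silenced in its input, the phrase plausibly came from the training data rather than the audio.

    ```python
    def detect_phrase_leakage(asr_model, audio, ground_truth: str, phrase: str,
                              phrase_span: tuple) -> bool:
        """Return True if `asr_model` reproduces `phrase` even though the
        phrase was obfuscated (silenced) in the input audio, suggesting the
        model memorized it from its training set."""
        # Obfuscate the phrase by zeroing its sample range in the waveform.
        start, end = phrase_span
        modified = list(audio)
        for i in range(start, end):
            modified[i] = 0.0
        # Transcribe the modified audio with the trained ASR model.
        predicted = asr_model(modified)
        # The phrase must be in the ground truth; if the prediction still
        # contains it despite the obfuscation, flag it as leaked.
        return phrase in ground_truth and phrase in predicted
    ```

    In practice the obfuscation, span alignment, and transcript comparison would operate on real audio features and normalized text; the sketch only shows the control flow of the check.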
  • Publication number: 20230401452
    Abstract: Systems and methods herein train weight-agnostic networks in a federated learning setting with orthogonal data distributions. Unlike traditional networks, weight-agnostic networks have a small size and can be trained using neural architecture search. The methods and systems described herein include sharing a subset of networks between clients, enabling federated learning of weight-agnostic networks when clients do not have samples from all classes.
    Type: Application
    Filed: June 14, 2023
    Publication date: December 14, 2023
    Inventors: Rida Bazzi, Om Thakkar
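    One round of the client-side network sharing described above might look like the following sketch. It is an assumption-laden illustration, not the claimed system: each client is modeled as a scoring function over candidate architectures (reflecting evaluation on its local, class-limited data), clients share only their top-k candidates, and the server ranks candidates by how many clients endorsed them.

    ```python
    def federated_was_round(clients, candidates, k=2):
        """One illustrative federated round for weight-agnostic networks:
        each client scores every candidate architecture on its local data,
        shares only its top-k picks, and the server ranks candidates by
        endorsement count."""
        votes = {}
        for score_fn in clients:                  # score_fn: candidate -> float
            ranked = sorted(candidates, key=score_fn, reverse=True)
            for cand in ranked[:k]:               # share only a subset of networks
                votes[cand] = votes.get(cand, 0) + 1
        # Server retains candidates ordered by endorsement count.
        return sorted(candidates, key=lambda c: votes.get(c, 0), reverse=True)
    ```

    Because each client votes only with the architectures that perform well on the classes it actually holds, candidates that generalize across clients accumulate the most endorsements.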
  • Publication number: 20230335126
    Abstract: A method includes inserting a set of canary text samples into a corpus of training text samples and training an external language model on the corpus of training text samples and the set of canary text samples inserted into the corpus of training text samples. For each canary text sample, the method also includes generating a corresponding synthetic speech utterance and generating an initial transcription for the corresponding synthetic speech utterance. The method also includes rescoring the initial transcription generated for each corresponding synthetic speech utterance using the external language model. The method also includes determining a word error rate (WER) of the external language model based on the rescored initial transcriptions and the canary text samples and detecting memorization of the canary text samples by the external language model based on the WER of the external language model.
    Type: Application
    Filed: April 19, 2023
    Publication date: October 19, 2023
    Applicant: Google LLC
    Inventors: Ronny Huang, Steve Chien, Om Thakkar, Rajiv Mathews
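    The WER-based memorization check at the heart of the abstract can be sketched as follows. This is a simplified stand-in, not the patented pipeline: it omits the TTS synthesis and LM rescoring stages and assumes the rescored transcriptions are already available as strings; `threshold` is an illustrative cutoff. An unusually low WER on the canary texts indicates the external language model has memorized them.

    ```python
    def word_error_rate(ref: str, hyp: str) -> float:
        """Word-level Levenshtein distance divided by reference length."""
        r, h = ref.split(), hyp.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                cost = 0 if r[i - 1] == h[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,      # deletion
                              d[i][j - 1] + 1,      # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(r)][len(h)] / max(len(r), 1)

    def detect_memorization(rescored, canaries, threshold=0.1):
        """Flag memorization if the LM-rescored transcriptions reproduce
        the canary texts with an average WER below `threshold`."""
        wer = sum(word_error_rate(c, t)
                  for c, t in zip(canaries, rescored)) / len(canaries)
        return wer < threshold, wer
    ```

    Canaries are deliberately out-of-distribution, so a model that has not memorized them should transcribe them poorly; near-zero WER is the signal.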
  • Publication number: 20230223028
    Abstract: Techniques are disclosed that enable training a global model using gradients provided to a remote system by a set of client devices during a reporting window, where each client device randomly determines a reporting time in the reporting window to provide the gradient to the remote system. Various implementations include each client device determining a corresponding gradient by processing data using a local model stored locally at the client device, where the local model corresponds to the global model.
    Type: Application
    Filed: October 16, 2020
    Publication date: July 13, 2023
    Inventors: Om Thakkar, Abhradeep Guha Thakurta, Peter Kairouz, Borja de Balle Pigem, Brendan McMahan
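    The randomized reporting scheme above can be sketched in two small pieces. This is an illustration under simplifying assumptions, not the claimed implementation: gradients are plain lists of floats, and the server simply averages whatever arrives during the window. The point of the uniform random report time is that a gradient's arrival moment carries no information about which client sent it.

    ```python
    import random

    def client_report_time(window_start: float, window_end: float) -> float:
        """Each client independently samples a uniformly random reporting
        time inside the window, decoupling arrival time from identity."""
        return random.uniform(window_start, window_end)

    def aggregate_round(gradients):
        """Server-side: average the gradients received during the window
        to produce the update applied to the global model."""
        n = len(gradients)
        dim = len(gradients[0])
        return [sum(g[i] for g in gradients) / n for i in range(dim)]
    ```

    In the described system each client computes its gradient with a local copy of the global model, waits until its sampled time, and only then uploads; the server folds the averaged gradient into the global model at the end of the window.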
  • Publication number: 20230178094
    Abstract: A method of phrase extraction for ASR models includes obtaining audio data characterizing an utterance and a corresponding ground-truth transcription of the utterance and modifying the audio data to obfuscate a particular phrase recited in the utterance. The method also includes processing, using a trained ASR model, the modified audio data to generate a predicted transcription of the utterance, and determining whether the predicted transcription includes the particular phrase by comparing the predicted transcription of the utterance to the ground-truth transcription of the utterance. When the predicted transcription includes the particular phrase, the method includes generating an output indicating that the trained ASR model leaked the particular phrase from a training data set used to train the ASR model.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 8, 2023
    Applicant: Google LLC
    Inventors: Ehsan Amid, Om Thakkar, Rajiv Mathews, Francoise Beaufays