Patents by Inventor Rajiv Mathews

Rajiv Mathews has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135918
    Abstract: A method includes receiving distillation data including a plurality of out-of-domain training utterances. For each particular out-of-domain training utterance of the distillation data, the method includes generating a corresponding augmented out-of-domain training utterance, and generating, using a teacher ASR model trained on training data corresponding to a target domain, a pseudo-label for the corresponding augmented out-of-domain training utterance. The method also includes distilling a student ASR model from the teacher ASR model by training the student ASR model using the corresponding augmented out-of-domain training utterances paired with the corresponding pseudo-labels generated by the teacher ASR model.
    Type: Application
    Filed: October 16, 2023
    Publication date: April 25, 2024
    Applicant: Google LLC
    Inventors: Tien-Ju Yang, You-Chi Cheng, Shankar Kumar, Jared Lichtarge, Ehsan Amid, Yuxin Ding, Rajiv Mathews, Mingqing Chen
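    A minimal sketch of the distillation loop this abstract describes, with hypothetical stand-ins for the augmentation, teacher, and student (toy linear models purely for illustration, not Google's implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def augment(utterance):
        # Stand-in augmentation (e.g., noise/SpecAugment in a real pipeline).
        return utterance + 0.05 * rng.standard_normal(utterance.shape)

    class Teacher:
        def __init__(self, w):
            self.w = w                    # frozen; trained on the target domain
        def pseudo_label(self, x):
            return x @ self.w             # stand-in for a decoded transcript

    class Student:
        def __init__(self, dim):
            self.w = np.zeros(dim)
        def train_step(self, x, y, lr=0.1):
            self.w -= lr * (x @ self.w - y) * x   # squared-error distillation loss

    teacher = Teacher(w=rng.standard_normal(8))
    student = Student(dim=8)

    for _ in range(1000):
        utt = rng.standard_normal(8)      # an out-of-domain training utterance
        aug = augment(utt)                # augmented out-of-domain utterance
        label = teacher.pseudo_label(aug) # teacher-generated pseudo-label
        student.train_step(aug, label)    # distill on (augmented, pseudo-label)
    ```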
  • Patent number: 11955134
    Abstract: A method of phrase extraction for ASR models includes obtaining audio data characterizing an utterance and a corresponding ground-truth transcription of the utterance and modifying the audio data to obfuscate a particular phrase recited in the utterance. The method also includes processing, using a trained ASR model, the modified audio data to generate a predicted transcription of the utterance, and determining whether the predicted transcription includes the particular phrase by comparing the predicted transcription of the utterance to the ground-truth transcription of the utterance. When the predicted transcription includes the particular phrase, the method includes generating an output indicating that the trained ASR model leaked the particular phrase from a training data set used to train the ASR model.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Ehsan Amid, Om Thakkar, Rajiv Mathews, Françoise Beaufays
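    The leak check the abstract describes can be sketched as follows; `asr_model` and the phrase span/alignment are hypothetical stand-ins:

    ```python
    def obfuscate(audio, span):
        # Silence the samples where the target phrase is spoken.
        start, end = span
        modified = list(audio)
        modified[start:end] = [0.0] * (end - start)
        return modified

    def leaked(asr_model, audio, phrase, phrase_span, ground_truth):
        # The phrase is absent from the audible signal, so transcribing it
        # anyway suggests the model memorized it from its training set.
        predicted = asr_model(obfuscate(audio, phrase_span))
        return phrase in ground_truth and phrase in predicted
    ```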
  • Publication number: 20240112673
    Abstract: Implementations described herein identify and correct automatic speech recognition (ASR) misrecognitions. For example, on-device processor(s) of a client device may generate a predicted textual segment that is predicted to correspond to a spoken utterance of a user of the client device, and may receive further input that modifies the predicted textual segment to an alternate textual segment. Further, the on-device processor(s) may store these textual segments in on-device storage as a candidate correction pair, and transmit the candidate correction pair to a remote system. Moreover, remote processor(s) of the remote system may determine that the candidate correction pair is an actual correction pair, and may cause client devices to generate updates for a global ASR model for the candidate correction pair. Additionally, the remote processor(s) may distribute the global ASR model to the client devices and/or additional client devices.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 4, 2024
    Inventors: Rajiv Mathews, Rohit Prabhavalkar, Giovanni Motta, Mingqing Chen, Lillian Zhou, Dhruv Guliani, Harry Zhang, Trevor Strohman, Françoise Beaufays
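    A hedged sketch of the correction-pair flow: the client pairs a misrecognized hypothesis with the user's edit, and the remote system promotes a candidate pair to an actual correction only if it recurs across clients (the frequency test and threshold below are assumptions, not the publication's criteria):

    ```python
    from collections import Counter

    # Remote system: candidate correction pairs uploaded by many clients,
    # each of the form (predicted_textual_segment, alternate_textual_segment).
    uploaded = [("call ann", "call anne")] * 12 + [("mute tv", "mute t v")] * 2

    MIN_CLIENTS = 10   # hypothetical promotion threshold
    actual_corrections = [pair for pair, n in Counter(uploaded).items()
                          if n >= MIN_CLIENTS]
    # `actual_corrections` would then drive federated updates to the global
    # ASR model before it is redistributed to client devices.
    print(actual_corrections)   # [('call ann', 'call anne')]
    ```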
  • Publication number: 20240112672
    Abstract: On-device processor(s) of a client device may store, in on-device storage and in association with a time to live (TTL) in the on-device storage, a correction directed to ASR processing of audio data. The correction may include a portion of a given speech hypothesis that was modified to an alternate speech hypothesis. Further, the on-device processor(s) may cause an on-device ASR model to be personalized based on the correction. Moreover, and based on additional ASR processing of additional audio data, the on-device processor(s) may store, in the on-device storage and in association with an additional TTL in the on-device storage, a pseudo-correction directed to the additional ASR processing. Accordingly, the on-device processor(s) may cause the on-device ASR model to be personalized based on the pseudo-correction to prevent forgetting by the on-device ASR model.
    Type: Application
    Filed: October 4, 2022
    Publication date: April 4, 2024
    Inventors: Rajiv Mathews, Dragan Zivkovic, Khe Chai Sim
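    An illustrative sketch of the TTL-scoped personalization this abstract describes: real corrections and pseudo-corrections (outputs the model already gets right, replayed to prevent forgetting) both live in on-device storage until their TTL lapses. The `model.adapt` hook is hypothetical:

    ```python
    import time

    store = []   # (expiry_timestamp, hypothesis, target) triples

    def add_entry(hypothesis, target, ttl_seconds):
        store.append((time.time() + ttl_seconds, hypothesis, target))

    def personalize(model):
        now = time.time()
        live = [(h, t) for expiry, h, t in store if expiry > now]  # honor TTLs
        for hypothesis, target in live:
            model.adapt(hypothesis, target)   # hypothetical adaptation hook

    week = 7 * 24 * 3600
    add_entry("play lo fi", "play lo-fi", ttl_seconds=week)    # user correction
    add_entry("set a timer", "set a timer", ttl_seconds=week)  # pseudo-correction
    ```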
  • Publication number: 20240095582
    Abstract: During a round of decentralized learning for updating of a global machine learning (ML) model, remote processor(s) of a remote system may transmit, to a population of computing devices, primary weights for a primary version of the global ML model, and cause each of the computing devices to generate a corresponding update for the primary version of the global ML model. Further, the remote processor(s) may cause the primary version of the global ML model to be updated based on the corresponding updates that are received during the round of decentralized learning. However, the remote processor(s) may receive other corresponding updates subsequent to the round of decentralized learning. Accordingly, various techniques described herein (e.g., FARe-DUST, FeAST on MSG, and/or other techniques) enable the other corresponding updates to be utilized in achieving a final version of the global ML model.
    Type: Application
    Filed: December 6, 2022
    Publication date: March 21, 2024
    Inventors: Andrew Hard, Sean Augenstein, Rohan Anil, Rajiv Mathews, Lara McConnaughey, Ehsan Amid, Antonious Girgis
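    The abstract names FARe-DUST and FeAST on MSG without detailing them, so the sketch below is only a generic stand-in for the underlying idea: updates that arrive after a round of decentralized learning closes are folded into the final model with a discounted weight rather than discarded:

    ```python
    import numpy as np

    global_weights = np.zeros(4)   # primary version of the global ML model

    def apply_updates(updates, weight=1.0):
        global global_weights
        global_weights += weight * np.mean(updates, axis=0)

    on_time = [np.ones(4), 2 * np.ones(4)]   # received during the round
    apply_updates(on_time)

    stragglers = [4 * np.ones(4)]            # received after the round closed
    apply_updates(stragglers, weight=0.25)   # hypothetical discount factor
    print(global_weights)                    # [2.5 2.5 2.5 2.5]
    ```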
  • Publication number: 20240095594
    Abstract: A method includes training a first differentially private (DP) model using a private training set, the private training set including a plurality of training samples, the first DP model satisfying a differential privacy budget, the differential privacy budget defining an amount of information about individual training samples of the private training set that may be revealed by the first DP model. The method also includes, while training the first DP model, generating a plurality of intermediate checkpoints, each intermediate checkpoint of the plurality of intermediate checkpoints representing a different intermediate state of the first DP model, each of the intermediate checkpoints satisfying the same differential privacy budget. The method further includes determining an aggregate of the first DP model and the plurality of intermediate checkpoints, and determining, using the aggregate, a second DP model, the second DP model satisfying the same differential privacy budget.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 21, 2024
    Applicant: Google LLC
    Inventors: Om Dipakbhai Thakkar, Arun Ganesh, Virat Vishnu Shejwalkar, Abhradeep Guha Thakurta, Rajiv Mathews
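    The core observation can be sketched directly: every intermediate checkpoint of a DP run satisfies the same privacy budget, and aggregating checkpoints is post-processing, so the aggregate does too. The noisy training loop below is a toy, not production DP-SGD:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    target = np.array([1.0, -2.0, 0.5])
    w = np.zeros(3)
    checkpoints = []

    for step in range(50):
        grad = np.clip(w - target, -1.0, 1.0)                 # clipped gradient
        w = w - 0.1 * (grad + rng.normal(scale=0.5, size=3))  # DP noise
        checkpoints.append(w.copy())   # each checkpoint satisfies the budget

    first_dp_model = checkpoints[-1]
    # Tail averaging as one possible aggregate; it spends no extra privacy.
    second_dp_model = np.mean(checkpoints[25:], axis=0)
    ```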
  • Publication number: 20240070530
    Abstract: Implementations disclosed herein are directed to a hybrid federated learning (FL) technique that utilizes both federated averaging (FA) and federated distillation (FD) during a given round of FL of a given global machine learning (ML) model. Implementations may identify a population of client devices to participate in the given round of FL, determine a corresponding quantity of instances of client data available at each of the client devices that may be utilized during the given round of FL, and select different subsets of the client devices based on the corresponding quantity of instances of client data. Further, implementations may cause a first subset of the client devices to generate a corresponding FA update and a second subset of client devices to generate a corresponding FD update. Moreover, implementations may subsequently update the given global ML model based on the corresponding FA updates and the corresponding FD updates.
    Type: Application
    Filed: December 5, 2022
    Publication date: February 29, 2024
    Inventors: Ehsan Amid, Rajiv Mathews, Rohan Anil, Shankar Kumar, Jared Lichtarge
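    A hypothetical sketch of one hybrid round: clients with more local data contribute federated-averaging (FA) weight deltas, clients with less contribute federated-distillation (FD) predictions on a shared transfer set, and the server applies both. The cutoff and combination rule are assumptions:

    ```python
    import numpy as np

    client_data_counts = {"a": 500, "b": 480, "c": 12, "d": 9}
    FA_MIN = 100   # hypothetical cutoff on instances of client data

    fa_clients = [c for c, n in client_data_counts.items() if n >= FA_MIN]
    fd_clients = [c for c, n in client_data_counts.items() if n < FA_MIN]

    global_w = np.zeros(4)
    fa_updates = [0.1 * np.ones(4) for _ in fa_clients]     # weight deltas
    global_w += np.mean(fa_updates, axis=0)                 # FA step

    fd_logits = [np.array([0.2, 0.8]) for _ in fd_clients]  # predictions on
    soft_targets = np.mean(fd_logits, axis=0)               # a transfer set
    # A distillation step toward `soft_targets` would complete the round.
    ```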
  • Publication number: 20230359907
    Abstract: Implementations disclosed herein are directed to various techniques for mitigating and/or preventing catastrophic forgetting in federated learning of global machine learning (ML) models. Implementations may identify a global ML model that is initially trained at a remote server based on a server data set, determine server-based data for global weight(s) of the global ML model, and transmit the global ML model and the server-based data to a plurality of client devices. The server-based data may include, for example, EWC loss term(s), client augmenting gradients, and/or server augmenting gradients. Further, the plurality of client devices may generate, based on corresponding predicted output generated using the global ML model and based on the server-based data, a corresponding client gradient, and transmit the corresponding client gradient to the remote server. Implementations may further generate an updated global ML model based on at least the corresponding client gradients.
    Type: Application
    Filed: July 1, 2022
    Publication date: November 9, 2023
    Inventors: Sean Augenstein, Andrew Hard, Kurt Partridge, Rajiv Mathews, Lin Ning, Karan Singhal
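    One of the listed options, augmenting client gradients with server-derived gradients so federated updates stay anchored to the server data set, can be sketched as below; the blending rule and weight are assumptions (an EWC-based variant appears under publication 20230351246 further down):

    ```python
    import numpy as np

    def augmented_client_gradient(client_grad, server_grad, alpha=0.3):
        # Blend the on-device gradient with the server-provided one.
        return (1 - alpha) * client_grad + alpha * server_grad

    client_grad = np.array([0.5, -0.2, 0.1])  # from local client data
    server_grad = np.array([0.1, 0.1, 0.0])   # shipped with the global model
    print(augmented_client_gradient(client_grad, server_grad))
    ```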
  • Publication number: 20230352019
    Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and, in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
    Type: Application
    Filed: July 6, 2023
    Publication date: November 2, 2023
    Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
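    A toy version of the correction signal: the model's predicted trigger probability is compared against the ground truth established after the decision proved wrong, and a cross-entropy gradient is computed for the on-device and/or federated update. The single-logit model is a stand-in:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.array([0.4, -0.1])
    x = np.array([1.0, 2.0])       # features from the captured sensor data

    p = sigmoid(w @ x)             # predicted output: probability to trigger
    triggered = p >= 0.5           # the decision that was made
    ground_truth = 0.0             # later evidence: it should not have triggered

    grad = (p - ground_truth) * x  # d(cross-entropy)/dw for a sigmoid unit
    w -= 0.1 * grad                # on-device update of local weights
    # `grad` could additionally be transmitted for updating global weights.
    ```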
  • Publication number: 20230351246
    Abstract: Implementations disclosed herein are directed to utilizing elastic weight consolidation (EWC) loss term(s) in federated learning of global machine learning (ML) models. Implementations may identify a global ML model that was initially trained at a remote server based on a server data set, determine the EWC loss term(s) for global weight(s) of the global ML model, and transmit the global ML model and the EWC loss term(s) to a plurality of client devices. The EWC loss term(s) may be determined based on a Fisher information matrix for the server data set. Further, the plurality of client devices may generate, based on corresponding predicted output generated using the global ML model and based on the EWC loss term(s), a corresponding client gradient, and transmit the corresponding client gradient to the remote server. Implementations may further generate an updated global ML model based on at least the corresponding client gradients.
    Type: Application
    Filed: May 2, 2022
    Publication date: November 2, 2023
    Inventors: Andrew Hard, Kurt Partridge, Rajiv Mathews, Sean Augenstein
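    A sketch of the EWC term the abstract describes, assuming a diagonal Fisher estimate computed from per-example gradients on the server data set (the gradients and model here are toys):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    w_server = rng.standard_normal(5)   # weights after server-side training

    # Diagonal Fisher estimate: mean squared per-example gradient.
    server_grads = rng.standard_normal((100, 5))   # stand-in gradients
    fisher = np.mean(server_grads ** 2, axis=0)

    def ewc_grad(w, lam=1.0):
        # Gradient of (lam / 2) * sum_i F_i * (w_i - w_server_i)^2.
        return lam * fisher * (w - w_server)

    w_client = w_server.copy()
    client_grad = rng.standard_normal(5)           # from local client data
    w_client -= 0.05 * (client_grad + ewc_grad(w_client))
    ```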
  • Publication number: 20230352004
    Abstract: Implementations disclosed herein are directed to federated learning of machine learning (“ML”) model(s) based on gradient(s) generated at corresponding client devices and a remote system. Processor(s) of the corresponding client devices can process client data generated locally at the corresponding client devices using corresponding on-device ML model(s) to generate corresponding predicted outputs, generate corresponding client gradients based on the corresponding predicted outputs, and transmit the corresponding client gradients to the remote system. Processor(s) of the remote system can process remote data obtained from remote database(s) using global ML model(s) to generate additional corresponding predicted outputs, generate corresponding remote gradients based on the additional corresponding predicted outputs. Further, the remote system can utilize the corresponding client gradients and the corresponding remote gradients to update the global ML model(s) or weights thereof.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 2, 2023
    Inventors: Françoise Beaufays, Andrew Hard, Swaroop Indra Ramaswamy, Om Dipakbhai Thakkar, Rajiv Mathews
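    The combined update can be sketched in a few lines: clients compute gradients from private on-device data, the remote system computes its own from remote data, and the global weights move on the pooled set (linear models and uniform pooling are assumptions):

    ```python
    import numpy as np

    def gradient(w, x, y):
        return (x @ w - y) * x      # squared-error gradient, linear model

    rng = np.random.default_rng(3)
    w_global = np.zeros(4)

    client_grads = [gradient(w_global, rng.standard_normal(4), 1.0)
                    for _ in range(8)]    # from on-device client data
    remote_grads = [gradient(w_global, rng.standard_normal(4), 1.0)
                    for _ in range(2)]    # from remote database(s)

    w_global -= 0.1 * np.mean(client_grads + remote_grads, axis=0)
    ```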
  • Publication number: 20230335126
    Abstract: A method includes inserting a set of canary text samples into a corpus of training text samples and training an external language model on the corpus of training text samples and the set of canary text samples inserted into the corpus of training text samples. For each canary text sample, the method also includes generating a corresponding synthetic speech utterance and generating an initial transcription for the corresponding synthetic speech utterance. The method also includes rescoring the initial transcription generated for each corresponding synthetic speech utterance using the external language model. The method also includes determining a word error rate (WER) of the external language model based on the rescored initial transcriptions and the canary text samples and detecting memorization of the canary text samples by the external language model based on the WER of the external language model.
    Type: Application
    Filed: April 19, 2023
    Publication date: October 19, 2023
    Applicant: Google LLC
    Inventors: Ronny Huang, Steve Chien, Om Thakkar, Rajiv Mathews
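    The canary test reduces to a small loop; the TTS, first-pass ASR, rescoring, WER, and decision-threshold components below are all hypothetical stand-ins:

    ```python
    def detect_memorization(canaries, tts, asr_first_pass, lm_rescore, wer,
                            threshold=0.1):
        errors = []
        for text in canaries:
            audio = tts(text)                  # corresponding synthetic speech
            hypotheses = asr_first_pass(audio) # initial transcription(s)
            best = lm_rescore(hypotheses)      # external LM rescoring
            errors.append(wer(reference=text, hypothesis=best))
        mean_wer = sum(errors) / len(errors)
        # Canaries are unseen-by-construction strings; a low WER on them is
        # evidence the external LM memorized them from training.
        return mean_wer < threshold
    ```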
  • Publication number: 20230317082
    Abstract: An unintentional memorization measure can be used to determine whether an automatic speech recognition (ASR) model has unintentionally memorized one or more phrases during training of the ASR model. Various implementations include generating one or more candidate transcripts based on the vocabulary of the ASR model. For example, the system can generate a candidate transcript by appending a token of the vocabulary to a previous candidate transcript. Various implementations include processing the candidate transcript using a speech synthesis model to generate synthesized speech audio data that includes synthesized speech of the candidate transcript. Additionally or alternatively, the synthesized speech audio data can be processed using the ASR model to generate ASR output. Various implementations can include generating a loss based on comparing the ASR output and the candidate transcript.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Inventors: Om Dipakbhai Thakkar, Hakim Sidahmed, W. Ronny Huang, Rajiv Mathews, Françoise Beaufays, Florian Tramèr
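    A hedged sketch of the search the abstract outlines: grow candidate transcripts token by token from the ASR vocabulary, synthesize each candidate, score it with the ASR model, and keep the lowest-loss candidates (the beam width and scoring are assumptions):

    ```python
    def extend_candidates(candidates, vocab, tts, asr_loss, beam=5):
        scored = []
        for prefix in candidates:
            for token in vocab:
                candidate = prefix + [token]       # append a vocabulary token
                audio = tts(" ".join(candidate))   # synthesized speech audio
                scored.append((asr_loss(audio, candidate), candidate))
        scored.sort(key=lambda item: item[0])      # low loss suggests memorization
        return [candidate for _, candidate in scored[:beam]]
    ```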
  • Patent number: 11749261
    Abstract: Implementations disclosed herein are directed to federated learning of machine learning (“ML”) model(s) based on gradient(s) generated at corresponding client devices and a remote system. Processor(s) of the corresponding client devices can process client data generated locally at the corresponding client devices using corresponding on-device ML model(s) to generate corresponding predicted outputs, generate corresponding client gradients based on the corresponding predicted outputs, and transmit the corresponding client gradients to the remote system. Processor(s) of the remote system can process remote data obtained from remote database(s) using global ML model(s) to generate additional corresponding predicted outputs, generate corresponding remote gradients based on the additional corresponding predicted outputs. Further, the remote system can utilize the corresponding client gradients and the corresponding remote gradients to update the global ML model(s) or weights thereof.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: September 5, 2023
    Assignee: Google LLC
    Inventors: Françoise Beaufays, Andrew Hard, Swaroop Indra Ramaswamy, Om Dipakbhai Thakkar, Rajiv Mathews
  • Patent number: 11741953
    Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and, in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 29, 2023
    Assignee: Google LLC
    Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
  • Publication number: 20230178094
    Abstract: A method of phrase extraction for ASR models includes obtaining audio data characterizing an utterance and a corresponding ground-truth transcription of the utterance and modifying the audio data to obfuscate a particular phrase recited in the utterance. The method also includes processing, using a trained ASR model, the modified audio data to generate a predicted transcription of the utterance, and determining whether the predicted transcription includes the particular phrase by comparing the predicted transcription of the utterance to the ground-truth transcription of the utterance. When the predicted transcription includes the particular phrase, the method includes generating an output indicating that the trained ASR model leaked the particular phrase from a training data set used to train the ASR model.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 8, 2023
    Applicant: Google LLC
    Inventors: Ehsan Amid, Om Thakkar, Rajiv Mathews, Françoise Beaufays
  • Publication number: 20230103911
    Abstract: A method includes obtaining a set of differentially private (DP) gradients each generated based on processing corresponding private data, and obtaining a set of public gradients each generated based on processing corresponding public data. The method also includes applying mirror descent to the set of public gradients to learn a geometry for the set of DP gradients, and reshaping the set of DP gradients based on the learned geometry. The method further includes training a machine learning model based on the reshaped set of DP gradients.
    Type: Application
    Filed: October 4, 2022
    Publication date: April 6, 2023
    Applicant: Google LLC
    Inventors: Om Dipakbhai Thakkar, Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith Suriyakumar, Abhradeep Guha Thakurta
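    The abstract does not specify the mirror map, so the sketch below only approximates the reshaping step with a diagonal geometry fit to the public gradients; it is a stand-in, not the patented method:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    scales = np.array([5.0, 1.0, 1.0, 1.0, 1.0, 0.2])
    public_grads = rng.standard_normal((200, 6)) * scales  # from public data
    dp_grads = rng.standard_normal((50, 6))   # privatized (clipped + noised)

    geometry = np.sqrt(np.mean(public_grads ** 2, axis=0))  # learned scales
    reshaped = dp_grads * geometry            # reshape the DP gradients
    # Training would then proceed on `reshaped` rather than raw DP gradients.
    ```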
  • Publication number: 20220383204
    Abstract: Implementations relate to ascertaining to what extent predictions, generated using a machine learning model, can be effectively reconstructed from model updates, where the model updates are generated based on those predictions and based on applying a particular loss technique (e.g., a particular cross-entropy loss technique). Some implementations disclosed generate measures that each indicate a degree of conformity between a corresponding reconstruction, generated using a corresponding model update, and a corresponding prediction. In some of those implementations, the measures are utilized in determining whether to utilize the particular loss technique (utilized in generating the model updates) in federated learning of the machine learning model and/or of additional machine learning model(s).
    Type: Application
    Filed: November 24, 2021
    Publication date: December 1, 2022
    Inventors: Om Dipakbhai Thakkar, Trung Dang, Swaroop Indra Ramaswamy, Rajiv Mathews, Françoise Beaufays
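    For softmax cross-entropy the reconstruction is exact, which a conformity measure would detect: the gradient of the loss with respect to the final-layer bias is (prediction - one_hot), and the true class is its only negative entry. Cosine similarity is used below as one plausible conformity measure:

    ```python
    import numpy as np

    prediction = np.array([0.7, 0.2, 0.1])   # client's softmax output (private)
    one_hot = np.array([0.0, 1.0, 0.0])      # ground-truth label (private)
    bias_grad = prediction - one_hot         # exposed in the model update

    label = int(np.argmin(bias_grad))        # only the true class is negative
    reconstruction = bias_grad + np.eye(3)[label]

    conformity = reconstruction @ prediction / (
        np.linalg.norm(reconstruction) * np.linalg.norm(prediction))
    print(conformity)                        # 1.0 -> fully reconstructable
    ```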
  • Publication number: 20220293093
    Abstract: Implementations disclosed herein are directed to federated learning of machine learning (“ML”) model(s) based on gradient(s) generated at corresponding client devices and a remote system. Processor(s) of the corresponding client devices can process client data generated locally at the corresponding client devices using corresponding on-device ML model(s) to generate corresponding predicted outputs, generate corresponding client gradients based on the corresponding predicted outputs, and transmit the corresponding client gradients to the remote system. Processor(s) of the remote system can process remote data obtained from remote database(s) using global ML model(s) to generate additional corresponding predicted outputs, generate corresponding remote gradients based on the additional corresponding predicted outputs. Further, the remote system can utilize the corresponding client gradients and the corresponding remote gradients to update the global ML model(s) or weights thereof.
    Type: Application
    Filed: March 10, 2021
    Publication date: September 15, 2022
    Inventors: Françoise Beaufays, Andrew Hard, Swaroop Indra Ramaswamy, Om Dipakbhai Thakkar, Rajiv Mathews
  • Publication number: 20210327421
    Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and, in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
    Type: Application
    Filed: November 8, 2019
    Publication date: October 21, 2021
    Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard