Patents by Inventor Trevor Strohman

Trevor Strohman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250016387
    Abstract: Implementations disclosed herein are directed to ephemeral learning of machine learning (“ML”) model(s) based on gradient(s) generated at a remote system (e.g., remote server(s)). Processor(s) of the remote system can receive stream(s) of audio data capturing spoken utterance(s) from a client device of a user. A fulfillment pipeline can process the stream(s) of audio data to cause certain fulfillment(s) of the spoken utterance(s) to be performed. Meanwhile, a training pipeline can process the stream(s) of audio data to generate gradient(s) using unsupervised learning techniques. Subsequent to the processing by the fulfillment pipeline and/or the training pipeline, the stream(s) of audio data are discarded by the remote system. Accordingly, the ML model(s) can be trained at the remote system without the stream(s) of audio data being stored or logged in non-transient memory of the remote system, thereby providing a more efficient mechanism for training the ML model(s) and also increasing the security of user data.
    Type: Application
    Filed: September 19, 2024
    Publication date: January 9, 2025
    Inventors: Françoise Beaufays, Khe Chai Sim, Trevor Strohman, Oren Litvin
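
A minimal sketch of the ephemeral-learning flow in the entry above, assuming a toy one-parameter model and an invented unsupervised loss; none of the names here come from the patent itself:

```python
# Hypothetical sketch, not Google's implementation: audio is processed by a
# fulfillment pipeline and a training pipeline, only a gradient is kept, and
# the raw audio never touches durable storage.
from dataclasses import dataclass


@dataclass
class ToyModel:
    weight: float = 0.5

    def gradient(self, features: list[float]) -> float:
        # d/dw of an invented unsupervised loss, mean((w * x) ** 2).
        return sum(2 * self.weight * x * x for x in features) / len(features)


def fulfill(chunk: list[float]) -> None:
    print(f"fulfilled utterance of {len(chunk)} frames")


def handle_audio_stream(audio_chunks, model: ToyModel, lr: float = 0.01) -> None:
    for chunk in audio_chunks:
        fulfill(chunk)                 # fulfillment pipeline
        grad = model.gradient(chunk)   # training pipeline: keep gradient only
        model.weight -= lr * grad      # ephemeral update; no audio is logged
        del chunk                      # audio existed only in transient memory


handle_audio_stream([[0.1, 0.4, -0.2], [0.3, -0.1]], ToyModel())
```
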
  • Patent number: 12190869
    Abstract: A computer-implemented method includes receiving a sequence of acoustic frames as input to an automatic speech recognition (ASR) model. Here, the ASR model includes a causal encoder and a decoder. The method also includes generating, by the causal encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by the decoder, a first probability distribution over possible speech recognition hypotheses. Here, the causal encoder includes a stack of causal encoder layers each including a Recurrent Neural Network (RNN) Attention-Performer module that applies linear attention.
    Type: Grant
    Filed: September 29, 2022
    Date of Patent: January 7, 2025
    Assignee: Google LLC
    Inventors: Tara N. Sainath, Rami Botros, Anmol Gulati, Krzysztof Choromanski, Ruoming Pang, Trevor Strohman, Weiran Wang, Jiahui Yu
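
The distinguishing claim above is an encoder module that applies linear attention. A hedged sketch of generic kernelized linear attention (Performer-flavored) follows; the elu-based feature map and all shapes are illustrative assumptions, not the patented formulation:

```python
# Generic kernelized linear attention, cost O(T) in sequence length T because
# keys and values are aggregated once instead of forming a T x T matrix.
import numpy as np


def feature_map(x: np.ndarray) -> np.ndarray:
    # A simple positive feature map, elu(x) + 1; Performers use random features.
    return np.where(x > 0, x + 1.0, np.exp(x))


def linear_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    qf, kf = feature_map(q), feature_map(k)   # (T, d)
    kv = kf.T @ v                             # (d, d_v): single pass over keys
    normalizer = qf @ kf.sum(axis=0)          # (T,)
    # A causal encoder variant would replace the sums with running prefix sums.
    return (qf @ kv) / normalizer[:, None]    # (T, d_v)


rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(6, 4)) for _ in range(3))
print(linear_attention(q, k, v).shape)  # (6, 4)
```
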
  • Patent number: 12183322
    Abstract: A method includes receiving a sequence of acoustic frames characterizing one or more utterances as input to a multilingual automated speech recognition (ASR) model. The method also includes generating, by an encoder, a higher order feature representation for a corresponding acoustic frame. The method also includes generating, by a prediction network, a hidden representation based on a sequence of non-blank symbols output by a final softmax layer. The method also includes generating a probability distribution over possible speech recognition hypotheses based on the hidden representation generated by the prediction network at each of the plurality of output steps and the higher order feature representation generated by the encoder at each of the plurality of output steps. The method also includes predicting an end of utterance (EOU) token at the end of each utterance. The method also includes classifying each acoustic frame as either speech, initial silence, intermediate silence, or final silence.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: December 31, 2024
    Assignee: Google LLC
    Inventors: Bo Li, Tara N. Sainath, Ruoming Pang, Shuo-yiin Chang, Qiumin Xu, Trevor Strohman, Vince Chen, Qiao Liang, Heguang Liu, Yanzhang He, Parisa Haghani, Sameer Bidichandani
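
To illustrate the per-frame classification and EOU prediction named in the entry above, here is a heuristic stand-in; a real system uses a learned classifier, and the energy threshold is invented:

```python
# Heuristic stand-in for the learned frame classifier: energy below an
# invented threshold is silence, and silence is "final" only if no speech
# follows; an <eou> token is emitted after final silence.
def classify_frames(energies: list[float], threshold: float = 0.1):
    labels, seen_speech = [], False
    for i, e in enumerate(energies):
        if e >= threshold:
            seen_speech = True
            labels.append("speech")
        elif not seen_speech:
            labels.append("initial_silence")
        elif any(x >= threshold for x in energies[i + 1:]):
            labels.append("intermediate_silence")
        else:
            labels.append("final_silence")
    eou = ["<eou>"] if labels and labels[-1] == "final_silence" else []
    return labels, eou


labels, eou = classify_frames([0.0, 0.5, 0.6, 0.02, 0.7, 0.01, 0.0])
print(labels)  # initial_silence, speech, speech, intermediate_silence, ...
print(eou)     # ['<eou>']
```
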
  • Publication number: 20240428786
    Abstract: A method includes receiving a sequence of acoustic frames and generating, by a first encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by a first pass transducer decoder, a first pass speech recognition hypothesis for a corresponding first higher order feature representation and generating, by a text encoder, a text encoding for a corresponding first pass speech recognition hypothesis. The method also includes generating, by a second encoder, a second higher order feature representation for a corresponding first higher order feature representation. The method also includes generating, by a second pass transducer decoder, a second pass speech recognition hypothesis using a corresponding second higher order feature representation and a corresponding text encoding.
    Type: Application
    Filed: September 6, 2024
    Publication date: December 26, 2024
    Applicant: Google LLC
    Inventors: Ke Hu, Tara N. Sainath, Arun Narayanan, Ruoming Pang, Trevor Strohman
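
A schematic of the two-pass deliberation flow in the entry above; every stage is a stub standing in for a neural component, and the hypotheses are invented:

```python
# The point is the data flow: acoustics -> first-pass hypothesis -> text
# encoding -> second pass, which can correct first-pass errors.
def first_encoder(frames):            return [f * 2.0 for f in frames]
def first_pass_decoder(feats):        return "recognise speech"   # 1st-pass hypothesis
def text_encoder(hypothesis):         return [float(ord(c)) for c in hypothesis]
def second_encoder(feats):            return [f + 1.0 for f in feats]
def second_pass_decoder(feats, text): return "recognize speech"   # refined hypothesis


h1 = first_encoder([0.1, 0.2, 0.3])
hyp1 = first_pass_decoder(h1)
print(second_pass_decoder(second_encoder(h1), text_encoder(hyp1)))
```
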
  • Patent number: 12154581
    Abstract: An automated speech recognition (ASR) model includes a first encoder, a second encoder, and a decoder. The first encoder receives, as input, a sequence of acoustic frames, and generates, at each of a plurality of output steps, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The second encoder receives, as input, the first higher order feature representation generated by the first encoder at each of the plurality of output steps, and generates, at each of the plurality of output steps, a second higher order feature representation for a corresponding first higher order feature frame. The decoder receives, as input, the second higher order feature representation generated by the second encoder at each of the plurality of output steps, and generates, at each of the plurality of output steps, a first probability distribution over possible speech recognition hypotheses.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: November 26, 2024
    Assignee: Google LLC
    Inventors: Arun Narayanan, Tara Sainath, Chung-Cheng Chiu, Ruoming Pang, Rohit Prabhavalkar, Jiahui Yu, Ehsan Variani, Trevor Strohman
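
The cascaded-encoder structure above can be sketched as two encoders feeding one decoder; the toy vocabulary, shapes, and softmax decoder below are assumptions made purely for illustration:

```python
# Toy cascade: the first encoder's per-step output feeds a second encoder,
# and the decoder maps the result to a distribution over an invented
# three-token vocabulary via a softmax.
import math

VOCAB = ["<blank>", "hello", "world"]


def first_encoder(frame: float) -> list[float]:
    return [frame + 0.1, frame - 0.1]


def second_encoder(feat: list[float]) -> list[float]:
    return [2.0 * x for x in feat]


def decoder(feat: list[float]) -> dict[str, float]:
    logits = [sum(feat) * w for w in (0.1, 0.7, 0.2)]
    z = sum(math.exp(l) for l in logits)
    return {tok: math.exp(l) / z for tok, l in zip(VOCAB, logits)}


for frame in [0.3, 0.9, 0.5]:
    dist = decoder(second_encoder(first_encoder(frame)))
    print(max(dist, key=dist.get), round(max(dist.values()), 3))
```
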
  • Publication number: 20240371363
    Abstract: An ASR model includes a first encoder configured to receive a sequence of acoustic frames and generate a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The ASR model also includes a second encoder configured to receive the first higher order feature representation generated by the first encoder at each of the plurality of output steps and generate a second higher order feature representation for a corresponding first higher order feature frame. The ASR model also includes a decoder configured to receive the second higher order feature representation generated by the second encoder at each of the plurality of output steps and generate a first probability distribution over possible speech recognition hypotheses. The ASR model also includes a language model configured to receive the first probability distribution over possible speech recognition hypotheses and generate a rescored probability distribution.
    Type: Application
    Filed: July 15, 2024
    Publication date: November 7, 2024
    Applicant: Google LLC
    Inventors: Tara Sainath, Arun Narayanan, Rami Botros, Yanzhang He, Ehsan Variani, Cyril Allauzen, David Rybach, Ruoming Pang, Trevor Strohman
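
What this entry adds to the cascade is a language model that rescores the first-pass distribution. A common way to sketch such rescoring is shallow-fusion-style log-linear interpolation; the hypotheses, probabilities, and weight below are invented:

```python
# Shallow-fusion-style rescoring: score(h) = log p_asr(h) + lam * log p_lm(h),
# renormalized over the hypothesis set.
import math


def rescore(asr: dict[str, float], lm: dict[str, float], lam: float = 0.3):
    scores = {h: math.log(asr[h]) + lam * math.log(lm[h]) for h in asr}
    z = sum(math.exp(s) for s in scores.values())
    return {h: math.exp(s) / z for h, s in scores.items()}


asr = {"recognize speech": 0.45, "wreck a nice beach": 0.55}
lm = {"recognize speech": 0.9, "wreck a nice beach": 0.1}
print(rescore(asr, lm))  # the LM flips the ranking toward the fluent hypothesis
```
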
  • Patent number: 12126845
    Abstract: Implementations disclosed herein are directed to ephemeral learning of machine learning (“ML”) model(s) based on gradient(s) generated at a remote system (e.g., remote server(s)). Processor(s) of the remote system can receive stream(s) of audio data capturing spoken utterance(s) from a client device of a user. A fulfillment pipeline can process the stream(s) of audio data to cause certain fulfillment(s) of the spoken utterance(s) to be performed. Meanwhile, a training pipeline can process the stream(s) of audio data to generate gradient(s) using unsupervised learning techniques. Subsequent to the processing by the fulfillment pipeline and/or the training pipeline, the stream(s) of audio data are discarded by the remote system. Accordingly, the ML model(s) can be trained at the remote system without the stream(s) of audio data being stored or logged in non-transient memory of the remote system, thereby providing a more efficient mechanism for training the ML model(s) and also increasing the security of user data.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: October 22, 2024
    Assignee: Google LLC
    Inventors: Françoise Beaufays, Khe Chai Sim, Trevor Strohman, Oren Litvin
  • Patent number: 12118988
    Abstract: A method includes receiving a sequence of acoustic frames and generating, by a first encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by a first pass transducer decoder, a first pass speech recognition hypothesis for a corresponding first higher order feature representation and generating, by a text encoder, a text encoding for a corresponding first pass speech recognition hypothesis. The method also includes generating, by a second encoder, a second higher order feature representation for a corresponding first higher order feature representation. The method also includes generating, by a second pass transducer decoder, a second pass speech recognition hypothesis using a corresponding second higher order feature representation and a corresponding text encoding.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: October 15, 2024
    Assignee: Google LLC
    Inventors: Ke Hu, Tara N. Sainath, Arun Narayanan, Ruoming Pang, Trevor Strohman
  • Publication number: 20240331699
    Abstract: Implementations described herein relate to reducing latency in automated assistant interactions. In some implementations, a client device can receive audio data that captures a spoken utterance of a user. The audio data can be processed to determine an assistant command to be performed by an automated assistant. The assistant command can be processed, using a latency prediction model, to generate a predicted latency to fulfill the assistant command. Further, the client device (or the automated assistant) can determine, based on the predicted latency, whether to audibly render pre-cached content for presentation to the user prior to audibly rendering content that is responsive to the spoken utterance. The pre-cached content can be tailored to the assistant command and audibly rendered for presentation to the user while the content is being obtained, and the content can be audibly rendered for presentation to the user subsequent to the pre-cached content.
    Type: Application
    Filed: June 13, 2024
    Publication date: October 3, 2024
    Inventors: Lior Alon, Rafael Goldfarb, Dekel Auster, Dan Rasin, Michael Andrew Goodman, Trevor Strohman, Nino Tasca, Valerie Nygaard, Jaclyn Konzelmann
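
A hedged sketch of the decision the entry above describes; the latency "model" is a lookup table standing in for the learned predictor, and the commands, latencies, and threshold are invented:

```python
# If predicted fulfillment latency is high, render command-tailored pre-cached
# filler immediately, then render the responsive content when it arrives.
PRECACHED = {
    "smart_home": "Okay, turning that on now.",
    "search": "Let me look that up for you.",
}
PREDICTED_LATENCY_S = {"smart_home": 2.4, "search": 0.3}


def respond(command_type: str, fetch_content, threshold_s: float = 1.0) -> None:
    if PREDICTED_LATENCY_S.get(command_type, 0.0) > threshold_s:
        # Slow fulfillment predicted: play tailored pre-cached content first.
        print(PRECACHED.get(command_type, "One moment..."))
    print(fetch_content())  # responsive content follows the pre-cached content


respond("smart_home", lambda: "The living room lights are on.")
respond("search", lambda: "Here is what I found.")
```
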
  • Publication number: 20240330334
    Abstract: Implementations relate to reducing latency in generating and/or rendering a given stream of natural language (NL) based output generated using a large language model (LLM). Processor(s) of a system can: receive NL based input associated with a client device, generate the stream of NL based output utilizing the LLM that is responsive to the NL based input and that is for a given dialog context of an ongoing dialog, and cause the stream of NL based output to be rendered at the client device. Notably, the processor(s) can employ attribute classifier(s) and a multi-objective scorer to implement a blockwise controlled decoding technique in generating the stream of NL based output utilizing the LLM. By implementing the blockwise controlled decoding technique in generating the stream of NL based output utilizing the LLM, the processor(s) can reduce latency in generating and/or rendering the stream of NL based output generated utilizing the LLM.
    Type: Application
    Filed: July 25, 2023
    Publication date: October 3, 2024
    Inventors: Sidharth Mudgal, Ahmad Beirami, Jilin Chen, Alex Beutel, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Trevor Strohman
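
A rough sketch of blockwise controlled decoding as the entry above describes it; the candidate generator, attribute classifiers, and weights are all stand-ins:

```python
# An "LLM" proposes candidate blocks, attribute classifiers score each block,
# a weighted multi-objective scorer picks the block to commit, and decoding
# proceeds block by block so committed blocks can be streamed early.
import random


def propose_blocks(prefix: str, n: int = 3) -> list[str]:
    return [f" block{random.randint(0, 9)}" for _ in range(n)]  # LLM stand-in


def safety_score(block: str) -> float:
    return 1.0  # stub attribute classifier


def quality_score(block: str) -> float:
    return -abs(len(block) - 7)  # stub attribute classifier


def multi_objective(block: str, w=(0.5, 0.5)) -> float:
    return w[0] * safety_score(block) + w[1] * quality_score(block)


def controlled_decode(prompt: str, num_blocks: int = 4) -> str:
    out = prompt
    for _ in range(num_blocks):
        out += max(propose_blocks(out), key=multi_objective)  # commit best block
    return out


print(controlled_decode("response:"))
```
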
  • Publication number: 20240321263
    Abstract: A method includes receiving a training example that includes audio data representing a spoken utterance and a ground truth transcription. For each word in the spoken utterance, the method also includes inserting a placeholder symbol before the respective word identifying a respective ground truth alignment for a beginning and an end of the respective word, determining a beginning word piece and an ending word piece, and generating a first constrained alignment for the beginning word piece and a second constrained alignment for the ending word piece. The first constrained alignment is aligned with the ground truth alignment for the beginning of the respective word and the second constrained alignment is aligned with the ground truth alignment for the ending of the respective word. The method also includes constraining an attention head of a second pass decoder by applying the first and second constrained alignments.
    Type: Application
    Filed: May 31, 2024
    Publication date: September 26, 2024
    Applicant: Google LLC
    Inventors: Tara N. Sainath, Basilio Garcia Castillo, David Rybach, Trevor Strohman, Ruoming Pang
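
One way to picture the constrained alignments above is as attention masks pinned to ground-truth word-boundary frames; the frame indices and window width below are invented for the example:

```python
# Each word contributes two constrained alignments, one for its beginning word
# piece and one for its ending word piece, rendered here as 0/1 masks that
# restrict where a second-pass decoder attention head may attend.
def constrained_masks(word_alignments, num_frames: int, window: int = 1):
    masks = []
    for begin, end in word_alignments:       # ground-truth (begin, end) frames
        for anchor in (begin, end):          # beginning / ending word piece
            masks.append([1 if abs(f - anchor) <= window else 0
                          for f in range(num_frames)])
    return masks


# Two words aligned to frames [2..5] and [8..11] of a 14-frame utterance.
for row in constrained_masks([(2, 5), (8, 11)], num_frames=14):
    print(row)
```
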
  • Publication number: 20240311405
    Abstract: Implementations disclose selecting, in response to receiving a request and from among multiple candidate generative models (e.g., multiple candidate large language models (LLMs)) with differing computational efficiencies, a particular generative model to utilize in generating a response to the request. Those implementations reduce latency and/or conserve computational resource(s) through selection, for various requests, of a more computationally efficient generative model for utilization in lieu of a less computationally efficient generative model. Further, those implementations seek to achieve such benefits through utilization of more computationally efficient generative models while still selectively utilizing less computationally efficient generative models for certain requests to mitigate occurrences of a generated response being inaccurate and/or under-specified.
    Type: Application
    Filed: June 19, 2023
    Publication date: September 19, 2024
    Inventors: Seungyeon Kim, Ankit Singh Rawat, Wittawat Jitkrittum, Hari Narasimhan, Sashank Reddi, Neha Gupta, Srinadh Bhojanapalli, Aditya Menon, Manzil Zaheer, Tal Schuster, Sanjiv Kumar, Toby Boyd, Zhifeng Chen, Emanuel Taropa, Vikram Kasivajhula, Trevor Strohman, Martin Baeuml, Leif Schelin, Yanping Huang
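
A minimal sketch of routing each request to the cheapest adequate model, per the entry above; the difficulty proxy, threshold, and both "models" are assumptions:

```python
# The routing shape is what matters: escalate to the expensive model only when
# the request looks hard enough that the cheap model risks a poor response.
def difficulty(request: str) -> float:
    return min(len(request.split()) / 20.0, 1.0)  # toy proxy for hardness


def small_llm(request: str) -> str:
    return f"[small model] answer to: {request}"


def large_llm(request: str) -> str:
    return f"[large model] answer to: {request}"


def route(request: str, threshold: float = 0.4) -> str:
    model = large_llm if difficulty(request) > threshold else small_llm
    return model(request)


print(route("what time is it"))  # cheap model suffices
print(route("compare three retrieval architectures and justify a choice "
            "for a latency-sensitive production workload"))  # escalates
```
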
  • Publication number: 20240311402
    Abstract: Implementations relate to reducing latency in generating and/or rendering natural language (NL) output generated using a large language model (LLM). Processor(s) of a system can: receive NL based input associated with a client device, and generate the NL based output utilizing the LLM. The NL based output can be a stream of NL based output in that it includes a plurality of segments, and is generated on a segment-by-segment basis. In some implementations, a first segment of the stream of NL based output is selected for inclusion in the stream of NL based output as a second segment (and any subsequent segment) is being generated to reduce latency in evaluating the NL based output as a whole prior to rendering thereof. In some versions of those implementations, the first segment is rendered as the second segment (and any subsequent segment) is being generated to further reduce latency in rendering thereof.
    Type: Application
    Filed: April 19, 2023
    Publication date: September 19, 2024
    Inventors: Martin Baeuml, Yanping Huang, Wenhao Jia, Chang Lan, Yuanzhong Xu, Junwhan Ahn, Alexander Bailey, Leif Schelin, Trevor Strohman, Emanuel Taropa, Sidharth Mudgal, Yanyan Zheng, Zhifeng Chen, Ahmad Beirami
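
The segment-by-segment streaming above can be sketched with a generator whose completed segments are evaluated and rendered while later segments are still being produced; the "LLM", the per-segment check, and the sleep are placeholders:

```python
# Render segment n while segment n+1 is still being generated, instead of
# evaluating and rendering the whole response at once.
import time


def llm_segments(prompt: str):
    for seg in ["First sentence.", "Second sentence.", "Third sentence."]:
        time.sleep(0.1)  # stands in for per-segment generation latency
        yield seg


def acceptable(segment: str) -> bool:
    return True  # per-segment evaluation instead of whole-response evaluation


def stream_response(prompt: str) -> None:
    for segment in llm_segments(prompt):
        if acceptable(segment):
            print(segment, flush=True)  # rendering overlaps later generation


stream_response("tell me three things")
```
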
  • Publication number: 20240311575
    Abstract: Implementations relate to dialog management of a large language model (LLM) utilized in generating natural language (NL) output during an ongoing dialog. Processor(s) of a system can: receive NL based input as part of the ongoing dialog, generate NL based output utilizing the LLM, and cause the NL based output to be rendered. Further, the processor(s) can receive subsequent NL based input as part of the ongoing dialog. In some implementations, the processor(s) can determine whether to modify a corresponding dialog context in generating subsequent NL based output, and modify the corresponding dialog context accordingly. For example, the processor(s) can restrict the corresponding dialog context, or supplant the corresponding dialog context with a corresponding curated dialog context. In additional or alternative implementations, the processor(s) can modify a corresponding NL based output threshold utilized in generating the subsequent NL based response to ensure the resulting NL based output is desirable.
    Type: Application
    Filed: March 17, 2023
    Publication date: September 19, 2024
    Inventors: Martin Baeuml, Alexander Bailey, Jonas Bragagnolo, Florent D'Halluin, Trevor Strohman
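
A toy rendering of the context-management choices named above (keep, restrict, or supplant the dialog context); the trigger conditions and curated text are invented:

```python
# Before generating the next turn, decide what dialog context the LLM sees.
def manage_context(history: list[str], user_turn: str) -> list[str]:
    if "start over" in user_turn.lower():
        return ["curated: you are a helpful assistant"]  # supplant
    if len(history) > 6:
        return history[-4:]                              # restrict to recent turns
    return history                                       # keep as-is


history = [f"turn {i}" for i in range(8)]
print(manage_context(history, "and what about tomorrow?"))  # restricted
print(manage_context(history, "let's start over"))          # supplanted
```
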
  • Publication number: 20240312460
    Abstract: As part of a dialog session between a user and an automated assistant, implementations can process, using a streaming ASR model, a stream of audio data that captures a portion of a spoken utterance to generate ASR output, process, using an NLU model, the ASR output to generate NLU output, and cause, based on the NLU output, a stream of fulfillment data to be generated. Further, implementations can determine, based on processing the stream of audio data, audio-based characteristics associated with the portion of the spoken utterance captured in the stream of audio data. Based on the audio-based characteristics and/or the stream of NLU output, implementations can determine whether the user has paused in providing the spoken utterance or has completed providing the spoken utterance. If the user has paused, implementations can cause natural conversation output to be provided for presentation to the user.
    Type: Application
    Filed: May 24, 2024
    Publication date: September 19, 2024
    Inventors: Jaclyn Konzelmann, Trevor Strohman, Jonathan Bloom, Johan Schalkwyk, Joseph Smarr
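
A toy version of the pause-versus-completion decision above, combining one audio-based cue with an NLU completeness signal; the thresholds and both signals are illustrative, not the patented models:

```python
# Real systems learn these signals; the branch structure is the point.
def user_state(trailing_silence_ms: int, nlu_complete: bool) -> str:
    if trailing_silence_ms < 400:
        return "still_speaking"
    return "done" if nlu_complete else "paused"


def react(state: str) -> str:
    return {"paused": "Mm-hmm?",            # natural conversation output
            "done": "<begin fulfillment>",
            "still_speaking": "<keep listening>"}[state]


print(react(user_state(600, nlu_complete=False)))  # Mm-hmm?
print(react(user_state(700, nlu_complete=True)))   # <begin fulfillment>
```
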
  • Publication number: 20240289395
    Abstract: Implementations relate to helping a large language model generate factual responses to prompts that request factual content. The large language model may receive a prompt context and a plurality of encoded context passages as input. The large language model is trained to determine whether or not to utilize the encoded context passages in generating the response. Implementations also relate to different methods of fine-tuning the responses generated by the large language model through query refinements, response re-writes, and evaluation of factual accuracy.
    Type: Application
    Filed: December 4, 2023
    Publication date: August 29, 2024
    Inventors: Hao Zhou, Shrestha Basu Mallick, Trevor Strohman, Patricia Luisa Romero Domingo, Amirhossein Kiani, Yu Du, Xinying Song, Heng-Tze Cheng, Quoc V. Le, Ed Huai-Hsin Chi, Christopher Jamie Maclean Hall
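
A hedged sketch of the grounding decision in the abstract above; the word-overlap relevance scorer, threshold, and generate() stub stand in for trained components:

```python
# Score the provided context passages against the prompt and decide whether
# the model should rely on them when generating.
def relevance(prompt: str, passage: str) -> float:
    overlap = set(prompt.lower().split()) & set(passage.lower().split())
    return len(overlap) / max(len(prompt.split()), 1)


def generate(prompt: str, passages: list[str]) -> str:
    if passages:
        return f"grounded answer using {len(passages)} passage(s)"
    return "answer from model parameters alone"


def answer(prompt: str, candidates: list[str], threshold: float = 0.25) -> str:
    useful = [p for p in candidates if relevance(prompt, p) > threshold]
    return generate(prompt, useful)


print(answer("when was the telescope launched",
             ["the telescope was launched in december 2021",
              "recipes for sourdough bread"]))
```
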
  • Patent number: 12051404
    Abstract: An ASR model includes a first encoder configured to receive a sequence of acoustic frames and generate a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The ASR model also includes a second encoder configured to receive the first higher order feature representation generated by the first encoder at each of the plurality of output steps and generate a second higher order feature representation for a corresponding first higher order feature frame. The ASR model also includes a decoder configured to receive the second higher order feature representation generated by the second encoder at each of the plurality of output steps and generate a first probability distribution over possible speech recognition hypotheses. The ASR model also includes a language model configured to receive the first probability distribution over possible speech recognition hypotheses and generate a rescored probability distribution.
    Type: Grant
    Filed: June 16, 2023
    Date of Patent: July 30, 2024
    Assignee: Google LLC
    Inventors: Tara Sainath, Arun Narayanan, Rami Botros, Yanzhang He, Ehsan Variani, Cyril Allauzen, David Rybach, Ruoming Pang, Trevor Strohman
  • Patent number: 12051416
    Abstract: Implementations described herein relate to reducing latency in automated assistant interactions. In some implementations, a client device can receive audio data that captures a spoken utterance of a user. The audio data can be processed to determine an assistant command to be performed by an automated assistant. The assistant command can be processed, using a latency prediction model, to generate a predicted latency to fulfill the assistant command. Further, the client device (or the automated assistant) can determine, based on the predicted latency, whether to audibly render pre-cached content for presentation to the user prior to audibly rendering content that is responsive to the spoken utterance. The pre-cached content can be tailored to the assistant command and audibly rendered for presentation to the user while the content is being obtained, and the content can be audibly rendered for presentation to the user subsequent to the pre-cached content.
    Type: Grant
    Filed: August 1, 2023
    Date of Patent: July 30, 2024
    Assignee: Google LLC
    Inventors: Lior Alon, Rafael Goldfarb, Dekel Auster, Dan Rasin, Michael Andrew Goodman, Trevor Strohman, Nino Tasca, Valerie Nygaard, Jaclyn Konzelmann
  • Patent number: 12027154
    Abstract: A method includes receiving a training example that includes audio data representing a spoken utterance and a ground truth transcription. For each word in the spoken utterance, the method also includes inserting a placeholder symbol before the respective word identifying a respective ground truth alignment for a beginning and an end of the respective word, determining a beginning word piece and an ending word piece, and generating a first constrained alignment for the beginning word piece and a second constrained alignment for the ending word piece. The first constrained alignment is aligned with the ground truth alignment for the beginning of the respective word and the second constrained alignment is aligned with the ground truth alignment for the ending of the respective word. The method also includes constraining an attention head of a second pass decoder by applying the first and second constrained alignments.
    Type: Grant
    Filed: February 9, 2023
    Date of Patent: July 2, 2024
    Assignee: Google LLC
    Inventors: Tara N. Sainath, Basilio Garcia Castillo, David Rybach, Trevor Strohman, Ruoming Pang
  • Patent number: 12020703
    Abstract: As part of a dialog session between a user and an automated assistant, implementations can process, using a streaming ASR model, a stream of audio data that captures a portion of a spoken utterance to generate ASR output, process, using an NLU model, the ASR output to generate NLU output, and cause, based on the NLU output, a stream of fulfillment data to be generated. Further, implementations can determine, based on processing the stream of audio data, audio-based characteristics associated with the portion of the spoken utterance captured in the stream of audio data. Based on the audio-based characteristics and/or the stream of NLU output, implementations can determine whether the user has paused in providing the spoken utterance or has completed providing the spoken utterance. If the user has paused, implementations can cause natural conversation output to be provided for presentation to the user.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: June 25, 2024
    Assignee: Google LLC
    Inventors: Jaclyn Konzelmann, Trevor Strohman, Jonathan Bloom, Johan Schalkwyk, Joseph Smarr