Patents by Inventor Ignacio Lopez

Ignacio Lopez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210343276
    Abstract: In some implementations, an utterance is determined to include a particular user speaking a hotword based at least on a first set of samples of the particular user speaking the hotword. In response to determining that an utterance includes a particular user speaking a hotword based at least on a first set of samples of the particular user speaking the hotword, at least a portion of the utterance is stored as a new sample. A second set of samples of the particular user speaking the utterance is obtained, where the second set of samples includes the new sample and less than all the samples in the first set of samples. A second utterance is determined to include the particular user speaking the hotword based at least on the second set of samples of the user speaking the hotword.
    Type: Application
    Filed: July 14, 2021
    Publication date: November 4, 2021
    Inventors: Ignacio Lopez Moreno, Diego Melendo Casado
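For orientation, a minimal Python sketch of the rolling-sample idea described in publication 20210343276: verify a hotword utterance against an enrolled set of samples, then fold the new utterance in while dropping an older sample, so later verifications use the updated set. The `embed` helper, the cosine threshold, and the update policy are hypothetical stand-ins, not the patented implementation.

```python
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    """Hypothetical speaker-embedding stub; a real system runs a trained encoder."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.standard_normal(16)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def verify_and_update(samples: list[np.ndarray], utterance: np.ndarray,
                      threshold: float = 0.7) -> tuple[bool, list[np.ndarray]]:
    """Check the utterance against the current sample set; on success, return a
    second set that contains the new sample and fewer than all prior samples."""
    new = embed(utterance)
    centroid = np.mean(samples, axis=0)
    if cosine(new, centroid) < threshold:
        return False, samples            # hotword not attributed to this speaker
    return True, samples[1:] + [new]     # drop the oldest, keep the new sample

enrolled = [embed(np.random.randn(16000)) for _ in range(5)]   # first set of samples
spoke, enrolled = verify_and_update(enrolled, np.random.randn(16000))
```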
  • Publication number: 20210312907
    Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing a SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher student learning.
    Type: Application
    Filed: December 4, 2019
    Publication date: October 7, 2021
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
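A rough, interface-level sketch of how a single speaker-dependent (SD) model can be personalized per request by pairing the audio with a given user's speaker embedding, as the abstract above describes; the model below is an untrained placeholder and all names and dimensions are assumptions.

```python
import numpy as np

def sd_speech_model(audio_features: np.ndarray,
                    speaker_embedding: np.ndarray) -> np.ndarray:
    """Toy stand-in for the SD model: conditions every frame on the embedding."""
    frames = audio_features.shape[0]
    conditioned = np.concatenate(
        [audio_features, np.tile(speaker_embedding, (frames, 1))], axis=1)
    rng = np.random.default_rng(0)       # placeholder for trained parameters
    weights = rng.standard_normal((conditioned.shape[1], 8))
    return conditioned @ weights         # per-frame outputs for this speaker

audio = np.random.randn(100, 40)                       # 100 frames, 40-dim features
alice, bob = np.random.randn(16), np.random.randn(16)  # speaker embeddings
out_for_alice = sd_speech_model(audio, alice)          # personalized for Alice
out_for_bob = sd_speech_model(audio, bob)              # same model, now for Bob
```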
  • Publication number: 20210280177
    Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating that a user explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user.
    Type: Application
    Filed: May 24, 2021
    Publication date: September 9, 2021
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, William Zhang
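The profile-driven language selection described above can be illustrated with a small sketch: only a subset of the languages assigned to the user profile, ranked by their probabilities, is used for recognizing a given utterance. The profile data, probability threshold, and subset size are invented for illustration.

```python
USER_PROFILES = {
    # user id -> languages assigned to the profile, with probabilities
    "user_123": {"en-US": 0.7, "es-ES": 0.25, "fr-FR": 0.05},
}

def languages_for_recognition(user_id: str, max_langs: int = 2,
                              min_prob: float = 0.1) -> list[str]:
    """Pick only a subset of the profile's languages for this utterance."""
    langs = USER_PROFILES.get(user_id, {"en-US": 1.0})
    ranked = sorted(langs.items(), key=lambda kv: kv[1], reverse=True)
    return [lang for lang, prob in ranked[:max_langs] if prob >= min_prob]

print(languages_for_recognition("user_123"))   # ['en-US', 'es-ES']
```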
  • Publication number: 20210272562
    Abstract: The technology described in this document can be embodied in a computer-implemented method that includes receiving, at a processing system, a first signal including an output of a speaker device and an additional audio signal. The method also includes determining, by the processing system, based at least in part on a model trained to identify the output of the speaker device, that the additional audio signal corresponds to an utterance of a user. The method further includes initiating a reduction in an audio output level of the speaker device based on determining that the additional audio signal corresponds to the utterance of the user.
    Type: Application
    Filed: May 21, 2021
    Publication date: September 2, 2021
    Applicant: Google LLC
    Inventors: Diego Melendo Casado, Ignacio Lopez Moreno, Javier Gonzalez-Dominguez
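A toy sketch of the "ducking" behavior this abstract describes: when a model attributes the additional audio to a user utterance rather than to the device's own playback, the output level is reduced. The classifier is a stub and the volume values are arbitrary.

```python
def is_user_utterance(mixed_audio: bytes, playback_reference: bytes) -> bool:
    """Stub for a model trained to tell the device's own output from other audio."""
    return True

def process_frame(mixed_audio: bytes, playback_reference: bytes,
                  volume: float) -> float:
    """Duck the speaker output while the user is talking."""
    if is_user_utterance(mixed_audio, playback_reference):
        return min(volume, 0.2)      # reduce the audio output level
    return volume

volume = process_frame(b"\x00" * 320, b"\x00" * 320, volume=0.8)
print(volume)                        # 0.2 while an utterance is detected
```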
  • Patent number: 11107478
    Abstract: This document generally describes systems, methods, devices, and other techniques related to speaker verification, including (i) training a neural network for a speaker verification model, (ii) enrolling users at a client device, and (iii) verifying identities of users based on characteristics of the users' voices. Some implementations include a computer-implemented method. The method can include receiving, at a computing device, data that characterizes an utterance of a user of the computing device. A speaker representation can be generated, at the computing device, for the utterance using a neural network on the computing device. The neural network can be trained based on a plurality of training samples that each: (i) include data that characterizes a first utterance and data that characterizes one or more second utterances, and (ii) are labeled as a matching speakers sample or a non-matching speakers sample.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: August 31, 2021
    Assignee: Google LLC
    Inventors: Georg Heigold, Samuel Bengio, Ignacio Lopez Moreno
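A small sketch of how the labeled training samples described in patent 11107478 could be assembled: each sample pairs one first utterance with one or more second utterances and carries a matching- or non-matching-speakers label. The data layout, speaker IDs, and sampling choices are hypothetical.

```python
import random

def make_training_samples(utts_by_speaker: dict[str, list[str]], n: int) -> list[dict]:
    """Build samples pairing a first utterance with second utterance(s),
    labeled as matching or non-matching speakers."""
    speakers = list(utts_by_speaker)
    samples = []
    for _ in range(n):
        matching = random.random() < 0.5
        first_spk = random.choice(speakers)
        second_spk = first_spk if matching else random.choice(
            [s for s in speakers if s != first_spk])
        first = random.choice(utts_by_speaker[first_spk])
        seconds = random.sample(utts_by_speaker[second_spk],
                                k=min(2, len(utts_by_speaker[second_spk])))
        samples.append({"first": first, "second": seconds,
                        "label": "match" if matching else "non-match"})
    return samples

data = {"spk_a": ["a1", "a2", "a3"], "spk_b": ["b1", "b2"]}
print(make_training_samples(data, 3))
```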
  • Publication number: 20210256981
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, to facilitate language-independent speaker verification. In one aspect, a method includes actions of receiving, by a user device, audio data representing an utterance of a user. Other actions may include providing, to a neural network stored on the user device, input data derived from the audio data and a language identifier. The neural network may be trained using speech data representing speech in different languages or dialects. The method may include additional actions of generating, based on output of the neural network, a speaker representation and determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user. The method may provide the user with access to the user device based on determining that the utterance is an utterance of the user.
    Type: Application
    Filed: May 4, 2021
    Publication date: August 19, 2021
    Applicant: Google LLC
    Inventors: Ignacio Lopez Moreno, Li Wan, Quan Wang
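A minimal sketch of the input construction described above for language-independent speaker verification: features derived from the audio are combined with a language identifier before the network produces a speaker representation, which is then compared against an enrolled one. The one-hot language encoding, random placeholder weights, and threshold are assumptions, not the patented design.

```python
import numpy as np

LANGS = ["en-US", "es-ES", "hi-IN", "pt-BR"]

def language_one_hot(lang: str) -> np.ndarray:
    one_hot = np.zeros(len(LANGS))
    one_hot[LANGS.index(lang)] = 1.0
    return one_hot

def speaker_representation(audio_features: np.ndarray, lang: str) -> np.ndarray:
    """Stand-in network: input is audio-derived features plus a language id."""
    x = np.concatenate([audio_features, language_one_hot(lang)])
    rng = np.random.default_rng(0)                 # placeholder for trained weights
    return np.tanh(rng.standard_normal((32, x.size)) @ x)

def verify(audio_features: np.ndarray, lang: str, enrolled: np.ndarray,
           threshold: float = 0.75) -> bool:
    rep = speaker_representation(audio_features, lang)
    score = np.dot(rep, enrolled) / (np.linalg.norm(rep) * np.linalg.norm(enrolled))
    return bool(score >= threshold)

enrolled = speaker_representation(np.random.randn(40), "es-ES")
print(verify(np.random.randn(40), "es-ES", enrolled))
```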
  • Patent number: 11087743
    Abstract: In some implementations, an utterance is determined to include a particular user speaking a hotword based at least on a first set of samples of the particular user speaking the hotword. In response to determining that an utterance includes a particular user speaking a hotword based at least on a first set of samples of the particular user speaking the hotword, at least a portion of the utterance is stored as a new sample. A second set of samples of the particular user speaking the utterance is obtained, where the second set of samples includes the new sample and less than all the samples in the first set of samples. A second utterance is determined to include the particular user speaking the hotword based at least on the second set of samples of the user speaking the hotword.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: August 10, 2021
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Diego Melendo Casado
  • Publication number: 20210217411
    Abstract: Speaker diarization techniques that enable processing of audio data to generate one or more refined versions of the audio data, where each of the refined versions of the audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by generating a speaker embedding for the single human speaker, and processing the audio data using a trained generative model—and using the speaker embedding in determining activations for hidden layers of the trained generative model during the processing. Output is generated over the trained generative model based on the processing, and the output is the refined version of the audio data.
    Type: Application
    Filed: March 29, 2021
    Publication date: July 15, 2021
    Inventors: Ignacio Lopez Moreno, Luis Carlos Cobo Rus
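A loose sketch of the conditioning idea in this diarization abstract: a speaker embedding modulates the model that produces a refined version of the audio isolating that speaker. The toy gate-and-mask network below only gestures at the patented approach, which conditions hidden-layer activations of a trained generative model; its weights and dimensions are arbitrary.

```python
import numpy as np

def refine_for_speaker(spectrogram: np.ndarray, speaker_emb: np.ndarray) -> np.ndarray:
    """Toy masking model whose activations are gated by the speaker embedding."""
    rng = np.random.default_rng(0)                     # untrained placeholder weights
    w_spec = rng.standard_normal((spectrogram.shape[1], spectrogram.shape[1]))
    w_emb = rng.standard_normal((speaker_emb.size, spectrogram.shape[1]))
    gate = 1.0 / (1.0 + np.exp(-(speaker_emb @ w_emb)))     # embedding-dependent gate
    mask = 1.0 / (1.0 + np.exp(-(spectrogram @ w_spec) * gate))
    return mask * spectrogram            # refined features isolating one speaker

refined = refine_for_speaker(np.abs(np.random.randn(200, 64)),  # 200 frames x 64 bins
                             np.random.randn(16))               # speaker embedding
```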
  • Publication number: 20210217404
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
    Type: Application
    Filed: May 17, 2019
    Publication date: July 15, 2021
    Applicant: Google LLC
    Inventors: Ye Jia, Zhifeng Chen, Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Ignacio Lopez Moreno, Fei Ren, Yu Zhang, Quan Wang, Patrick Nguyen
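A pipeline-level sketch of the flow described in publication 20210217404: encode a reference recording of the target speaker into a speaker vector, condition spectrogram generation on that vector plus the input text, then vocode. Every function here is a hypothetical stub standing in for the corresponding trained component.

```python
import numpy as np

def speaker_encoder(reference_audio: np.ndarray) -> np.ndarray:
    """Stub: trained to distinguish speakers; returns a speaker vector."""
    return np.random.randn(256)

def spectrogram_generator(text: str, speaker_vector: np.ndarray) -> np.ndarray:
    """Stub: trained on reference voices; returns a mel spectrogram."""
    return np.random.randn(10 * len(text), 80)

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Stub: converts the spectrogram to waveform samples."""
    return np.random.randn(mel.shape[0] * 200)

speaker_vec = speaker_encoder(np.random.randn(16000))      # target speaker's audio
mel = spectrogram_generator("Hello from a cloned voice.", speaker_vec)
waveform = vocoder(mel)                                     # synthesized output audio
```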
  • Patent number: 11031002
    Abstract: The technology described in this document can be embodied in a computer-implemented method that includes receiving, at a processing system, a first signal including an output of a speaker device and an additional audio signal. The method also includes determining, by the processing system, based at least in part on a model trained to identify the output of the speaker device, that the additional audio signal corresponds to an utterance of a user. The method further includes initiating a reduction in an audio output level of the speaker device based on determining that the additional audio signal corresponds to the utterance of the user.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: June 8, 2021
    Assignee: Google LLC
    Inventors: Diego Melendo Casado, Ignacio Lopez Moreno, Javier Gonzalez-Dominguez
  • Patent number: 11017766
    Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating that a user explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: May 25, 2021
    Assignee: Google LLC
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, William Zhang
  • Patent number: 11017784
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, to facilitate language-independent speaker verification. In one aspect, a method includes actions of receiving, by a user device, audio data representing an utterance of a user. Other actions may include providing, to a neural network stored on the user device, input data derived from the audio data and a language identifier. The neural network may be trained using speech data representing speech in different languages or dialects. The method may include additional actions of generating, based on output of the neural network, a speaker representation and determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user. The method may provide the user with access to the user device based on determining that the utterance is an utterance of the user.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: May 25, 2021
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Li Wan, Quan Wang
  • Patent number: 10978059
    Abstract: Speaker diarization techniques that enable processing of audio data to generate one or more refined versions of the audio data, where each of the refined versions of the audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by generating a speaker embedding for the single human speaker, and processing the audio data using a trained generative model—and using the speaker embedding in determining activations for hidden layers of the trained generative model during the processing. Output is generated over the trained generative model based on the processing, and the output is the refined version of the audio data.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: April 13, 2021
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Luis Carlos Cobo Rus
  • Publication number: 20210097981
    Abstract: Implementations relate to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating that a user explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
    Type: Application
    Filed: December 14, 2020
    Publication date: April 1, 2021
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno
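A small rule-based sketch of selecting a speech recognition model from interaction characteristics such as anticipated input type and durations, as this abstract describes; the rules, thresholds, and model names below are invented for illustration only.

```python
def select_recognition_model(anticipated_input: str,
                             anticipated_duration_s: float,
                             monitoring_duration_s: float) -> str:
    """Pick a model from interaction characteristics of the dialog session."""
    if anticipated_input == "yes_no" and anticipated_duration_s < 2.0:
        return "short-query-model:en-US"
    if monitoring_duration_s > 10.0 or anticipated_duration_s > 8.0:
        return "long-form-model:multilingual"
    return "default-model:profile-primary-language"

print(select_recognition_model("open_ended", 5.0, 6.0))
```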
  • Publication number: 20210074280
    Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating that a user explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user.
    Type: Application
    Filed: November 16, 2020
    Publication date: March 11, 2021
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno
  • Publication number: 20210074295
    Abstract: Implementations relate to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. In various implementations, audio data indicative of a voice input that includes a natural language request from a user may be applied as input across multiple speech-to-text (“STT”) machine learning models to generate multiple candidate speech recognition outputs. Each STT machine learning model may be trained in a particular language. For each respective STT machine learning model of the multiple STT machine learning models, the multiple candidate speech recognition outputs may be analyzed to determine an entropy score for the respective STT machine learning model. Based on the entropy scores, a target language associated with at least one STT machine learning model of the multiple STT machine learning models may be selected. The automated assistant may respond to the request using the target language.
    Type: Application
    Filed: January 8, 2019
    Publication date: March 11, 2021
    Inventors: Ignacio Lopez Moreno, Lukas Lopatovsky, Ágoston Weisz
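A minimal sketch of the entropy scoring described in publication 20210074295: turn each candidate STT model's hypothesis scores into a distribution and prefer the model whose distribution has the lowest entropy, i.e. the one that is most confident. The scores below are made up.

```python
import math

def entropy(scores: list[float]) -> float:
    """Entropy of the normalized candidate-transcription scores."""
    total = sum(scores)
    probs = [s / total for s in scores]
    return -sum(p * math.log(p) for p in probs if p > 0)

candidates = {
    # language of the STT model -> scores of its top candidate transcriptions
    "en-US": [9.1, 0.4, 0.3],   # sharply peaked -> low entropy (confident)
    "de-DE": [3.2, 3.0, 2.9],   # nearly flat -> high entropy (uncertain)
}
target_language = min(candidates, key=lambda lang: entropy(candidates[lang]))
print(target_language)          # 'en-US'
```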
  • Patent number: 10930271
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using neural networks. A feature vector that models audio characteristics of a portion of an utterance is received. Data indicative of latent variables of multivariate factor analysis is received. The feature vector and the data indicative of the latent variables are provided as input to a neural network. A candidate transcription for the utterance is determined based on at least an output of the neural network.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: February 23, 2021
    Inventors: Andrew W. Senior, Ignacio Lopez Moreno
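A sketch of the input construction described in patent 10930271: per-frame acoustic features are paired with utterance-level latent variables from multivariate factor analysis (an i-vector-like summary) and fed jointly to the network. The dimensions below are invented.

```python
import numpy as np

def network_input(frame_features: np.ndarray, latent_vars: np.ndarray) -> np.ndarray:
    """One row per frame: [acoustic features | utterance-level latent variables]."""
    n_frames = frame_features.shape[0]
    return np.concatenate(
        [frame_features, np.tile(latent_vars, (n_frames, 1))], axis=1)

frames = np.random.randn(300, 40)    # 300 frames of 40-dim acoustic features
latents = np.random.randn(100)       # factor-analysis latent variables (utterance level)
x = network_input(frames, latents)   # shape (300, 140), fed to the neural network
```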
  • Publication number: 20210043191
    Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
    Type: Application
    Filed: December 2, 2019
    Publication date: February 11, 2021
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, Quan Wang
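Two of the ideas in this abstract lend themselves to a short sketch: keeping a speaker embedding fresh by averaging in embeddings from the user's recent utterances, and verifying a speaker by combining text-dependent (TD) and text-independent (TI) scores. The weights, threshold, and scores are assumptions, and both scoring inputs are placeholders.

```python
import numpy as np

def update_speaker_embedding(current: np.ndarray, recent: list[np.ndarray],
                             weight: float = 0.3) -> np.ndarray:
    """Fold embeddings from the user's recent utterances into the stored one."""
    return (1 - weight) * current + weight * np.mean(recent, axis=0)

def verify(td_score: float, ti_score: float,
           td_weight: float = 0.6, threshold: float = 0.7) -> bool:
    """Combine text-dependent and text-independent scores for verification."""
    return td_weight * td_score + (1 - td_weight) * ti_score >= threshold

embedding = update_speaker_embedding(np.random.randn(64),
                                     [np.random.randn(64) for _ in range(3)])
print(verify(td_score=0.82, ti_score=0.64))   # True with these placeholder scores
```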
  • Publication number: 20210043210
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextual hotwords are disclosed. In one aspect, a method includes, during a boot process of a computing device, the actions of determining, by the computing device, a context associated with the computing device. The actions further include, based on the context associated with the computing device, determining a hotword. The actions further include, after determining the hotword, receiving audio data that corresponds to an utterance. The actions further include determining that the audio data includes the hotword. The actions further include, in response to determining that the audio data includes the hotword, performing an operation associated with the hotword.
    Type: Application
    Filed: October 12, 2020
    Publication date: February 11, 2021
    Applicant: Google LLC
    Inventors: Christopher Thaddeus Hughes, Ignacio Lopez Moreno, Aleksandar Kracun
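A compact sketch of the contextual-hotword idea in the entry above: the active context determines which extra hotwords are listened for and which operation each one triggers. The contexts, hotwords, and operations are illustrative only.

```python
CONTEXT_HOTWORDS = {
    # context -> hotword -> operation to perform
    "music_playing": {"next": "skip_track", "stop": "pause_playback"},
    "timer_ringing": {"stop": "dismiss_timer", "snooze": "snooze_timer"},
}

def hotword_operation(context: str, transcript: str):
    """Return the operation for a context-specific hotword found in the audio."""
    for hotword, operation in CONTEXT_HOTWORDS.get(context, {}).items():
        if hotword in transcript.lower().split():
            return operation
    return None

print(hotword_operation("timer_ringing", "ok stop"))   # 'dismiss_timer'
```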
  • Patent number: 10896672
    Abstract: Implementations relate to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating that a user explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: January 19, 2021
    Assignee: Google LLC
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno