Patents by Inventor Ignacio Lopez Moreno
Ignacio Lopez Moreno has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20250131916
  Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
  Type: Application
  Filed: December 2, 2024
  Publication date: April 24, 2025
  Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, Quan Wang
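A minimal Python sketch of the dual-model verification and embedding-update ideas described above, assuming L2-normalized embeddings and cosine scoring; the weighting, threshold, and update rate are illustrative assumptions, not details from the patent:

```python
# Hedged sketch: blend a text-dependent (TD) score with a text-independent (TI)
# score, and refresh a stored speaker embedding from a verified utterance.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(td_utt, td_enrolled, ti_utt, ti_enrolled,
                   td_weight: float = 0.6, threshold: float = 0.75) -> bool:
    """Accept the claimed speaker only if the blended TD/TI score clears a threshold."""
    score = (td_weight * cosine(td_utt, td_enrolled)
             + (1.0 - td_weight) * cosine(ti_utt, ti_enrolled))
    return score >= threshold

def update_embedding(stored: np.ndarray, new: np.ndarray, alpha: float = 0.1):
    """Exponential moving average, so the stored profile tracks recent utterances."""
    updated = (1.0 - alpha) * stored + alpha * new
    return updated / np.linalg.norm(updated)
```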
- Publication number: 20250095630
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
  Type: Application
  Filed: December 2, 2024
  Publication date: March 20, 2025
  Applicant: Google LLC
  Inventors: Ye Jia, Zhifeng Chen, Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Ignacio Lopez Moreno, Fei Ren, Yu Zhang, Quan Wang, Patrick An Phu Nguyen
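The abstract describes a two-stage data flow: a speaker encoder distills reference audio into a speaker vector, and a spectrogram generator conditions on that vector plus the input text. A minimal PyTorch sketch of that flow follows; the layer types and dimensions are invented for illustration, not the patented architecture:

```python
# Hedged sketch: reference audio -> speaker vector, then
# (text embeddings, speaker vector) -> mel spectrogram.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Maps a (batch, frames, mels) reference spectrogram to one speaker vector."""
    def __init__(self, n_mels=80, dim=256):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, dim, batch_first=True)

    def forward(self, mel):
        _, (h, _) = self.rnn(mel)
        return F.normalize(h[-1], dim=-1)  # L2-normalized speaker vector

class SpectrogramGenerator(nn.Module):
    """Conditions every text step on the speaker vector to predict mel frames."""
    def __init__(self, text_dim=128, spk_dim=256, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(text_dim + spk_dim, n_mels)

    def forward(self, text_emb, spk_vec):
        spk = spk_vec.unsqueeze(1).expand(-1, text_emb.size(1), -1)
        return self.proj(torch.cat([text_emb, spk], dim=-1))

ref_mel = torch.randn(1, 120, 80)    # reference audio of the target speaker
text_emb = torch.randn(1, 50, 128)   # encoded input text (assumed upstream step)
spk_vec = SpeakerEncoder()(ref_mel)
mel_out = SpectrogramGenerator()(text_emb, spk_vec)  # audio representation for output
```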
- Patent number: 12254891
  Abstract: Processing of acoustic features of audio data to generate one or more revised versions of the acoustic features, where each of the revised versions of the acoustic features isolates one or more utterances of a single respective human speaker. Various implementations generate the acoustic features by processing audio data using portion(s) of an automatic speech recognition system. Various implementations generate the revised acoustic features by processing the acoustic features using a mask generated by processing the acoustic features and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using the automatic speech recognition system to generate a predicted text representation of the utterance(s) of the single human speaker without reconstructing the audio data.
  Type: Grant
  Filed: October 10, 2019
  Date of Patent: March 18, 2025
  Assignee: GOOGLE LLC
  Inventors: Quan Wang, Ignacio Lopez Moreno, Li Wan
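The core operation here is a speaker-conditioned mask applied directly to ASR acoustic features, so the target speaker can be isolated without reconstructing audio. A minimal PyTorch sketch, with invented layer sizes (the real voice filter model is trained, not hand-built like this):

```python
# Hedged sketch: predict a [0, 1] mask over acoustic features from the features
# plus a speaker embedding, then apply it to isolate the target speaker.
import torch
import torch.nn as nn

class VoiceFilterMask(nn.Module):
    def __init__(self, feat_dim=80, spk_dim=256, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + spk_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim), nn.Sigmoid())  # per-feature mask

    def forward(self, feats, spk_emb):
        spk = spk_emb.unsqueeze(1).expand(-1, feats.size(1), -1)
        mask = self.net(torch.cat([feats, spk], dim=-1))
        return feats * mask  # revised features, fed onward to the ASR system

feats = torch.randn(1, 200, 80)  # (batch, frames, feature dim)
spk = torch.nn.functional.normalize(torch.randn(1, 256), dim=-1)
revised = VoiceFilterMask()(feats, spk)
```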
- Patent number: 12254876
  Abstract: The technology described in this document can be embodied in a computer-implemented method that includes receiving, at a processing system, a first signal including an output of a speaker device and an additional audio signal. The method also includes determining, by the processing system, based at least in part on a model trained to identify the output of the speaker device, that the additional audio signal corresponds to an utterance of a user. The method further includes initiating a reduction in an audio output level of the speaker device based on determining that the additional audio signal corresponds to the utterance of the user.
  Type: Grant
  Filed: March 19, 2024
  Date of Patent: March 18, 2025
  Assignee: Google LLC
  Inventors: Diego Melendo Casado, Ignacio Lopez Moreno, Javier Gonzalez-Dominguez
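In plain terms: once the incoming audio is attributed to the user rather than to the device's own playback, the device "ducks" its output. A toy Python sketch of that control decision; the ducked level and the per-frame classifier interface are assumptions:

```python
# Hedged sketch: lower the playback level for frames classified as user speech.
def next_playback_level(frame_is_user_speech: bool, current_level: float,
                        ducked_level: float = 0.2) -> float:
    """Return the output level to apply while the next audio frame plays."""
    return min(current_level, ducked_level) if frame_is_user_speech else current_level

level = 1.0
for is_user_speech in [False, False, True, True]:  # stand-in classifier outputs
    level = next_playback_level(is_user_speech, level)
print(level)  # -> 0.2 once an utterance of the user is detected
```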
- Patent number: 12249319
  Abstract: Implementations relate to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating a user explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
  Type: Grant
  Filed: November 13, 2023
  Date of Patent: March 11, 2025
  Assignee: GOOGLE LLC
  Inventors: Pu-Sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno
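A toy Python sketch of choosing a recognition model from interaction characteristics; the specific rules, model names, and thresholds are invented to illustrate the idea, not taken from the patent:

```python
# Hedged sketch: pick a speech recognition model from dialog-session signals.
def select_recognition_model(models: dict, anticipated_input: str,
                             anticipated_duration_s: float) -> str:
    # Short, constrained replies (e.g., a confirmation) tend to stay in the
    # dialog's current language; long free-form input may warrant the user's
    # primary language. Both rules here are illustrative.
    if anticipated_input == "confirmation" or anticipated_duration_s < 2.0:
        return models["dialog_language"]
    return models["primary_language"]

models = {"dialog_language": "asr-es-ES", "primary_language": "asr-en-US"}
print(select_recognition_model(models, "free_form", 6.0))  # -> asr-en-US
```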
- Publication number: 20250078840
  Abstract: A method includes receiving audio data corresponding to an utterance spoken by a particular user and captured in streaming audio by a user device. The method also includes performing speaker identification on the audio data to identify an identity of the particular user that spoke the utterance. The method also includes obtaining a keyword detection model personalized for the particular user based on the identity of the particular user that spoke the utterance. The keyword detection model is conditioned on speaker characteristic information associated with the particular user to adapt the keyword detection model to detect a presence of a keyword in audio for the particular user. The method also includes determining that the utterance includes the keyword using the keyword detection model personalized for the particular user.
  Type: Application
  Filed: August 22, 2024
  Publication date: March 6, 2025
  Applicant: Google LLC
  Inventors: Pai Zhu, Beltrán Labrador Serrano, Guanlong Zhao, Angelo Alfredo Scorza Scarpati, Quan Wang, Alex Seungryong Park, Ignacio Lopez Moreno
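One plausible way to condition a keyword detector on speaker characteristic information is a FiLM-style scale-and-shift on the detector's hidden states; the abstract does not specify the mechanism, so the sketch below is an assumption throughout (layer sizes included):

```python
# Hedged sketch: a keyword detector whose hidden states are modulated by a
# speaker embedding, personalizing detection to that speaker.
import torch
import torch.nn as nn

class ConditionedKeywordDetector(nn.Module):
    def __init__(self, feat_dim=40, spk_dim=256, hidden=64):
        super().__init__()
        self.film = nn.Linear(spk_dim, 2 * hidden)  # produces scale and shift
        self.enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, audio_feats, spk_emb):
        h, _ = self.enc(audio_feats)
        scale, shift = self.film(spk_emb).chunk(2, dim=-1)
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return torch.sigmoid(self.out(h[:, -1]))  # keyword probability

feats = torch.randn(1, 80, 40)  # (batch, frames, features)
spk = torch.nn.functional.normalize(torch.randn(1, 256), dim=-1)
prob = ConditionedKeywordDetector()(feats, spk)
```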
- Patent number: 12175963
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
  Type: Grant
  Filed: November 30, 2023
  Date of Patent: December 24, 2024
  Assignee: Google LLC
  Inventors: Ye Jia, Zhifeng Chen, Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Ignacio Lopez Moreno, Fei Ren, Yu Zhang, Quan Wang, Patrick An Phu Nguyen
- Patent number: 12165628
  Abstract: Techniques are disclosed that enable determining and/or utilizing a misrecognition of a spoken utterance, where the misrecognition is generated using an automatic speech recognition (ASR) model. Various implementations include determining a recognition based on the spoken utterance and a previous utterance spoken prior to the spoken utterance. Additionally or alternatively, implementations include personalizing an ASR engine for a user based on the spoken utterance and the previous utterance spoken prior to the spoken utterance (e.g., based on audio data capturing the previous utterance and a text representation of the spoken utterance).
  Type: Grant
  Filed: July 8, 2020
  Date of Patent: December 10, 2024
  Assignee: GOOGLE LLC
  Inventors: Ágoston Weisz, Ignacio Lopez Moreno, Alexandru Dovlecel
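A toy Python sketch of one signal for spotting a misrecognition: a user quickly repeating nearly the same request often means the first transcript was wrong. The similarity test and thresholds below are illustrative stand-ins, not the patented method:

```python
# Hedged sketch: flag a likely misrecognition from two consecutive transcripts.
from difflib import SequenceMatcher

def likely_misrecognition(prev_transcript: str, curr_transcript: str,
                          seconds_between: float) -> bool:
    similar = SequenceMatcher(None, prev_transcript.lower(),
                              curr_transcript.lower()).ratio() > 0.7
    return similar and seconds_between < 10.0  # quick, near-identical retry

# A (previous-utterance audio, corrected transcript) pair could then be used
# to personalize the ASR engine for this user.
print(likely_misrecognition("call ann", "call anne", 4.2))  # -> True
```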
- Patent number: 12159622
  Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
  Type: Grant
  Filed: December 9, 2022
  Date of Patent: December 3, 2024
  Assignee: GOOGLE LLC
  Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, Quan Wang
- Patent number: 12148433
  Abstract: This document generally describes systems, methods, devices, and other techniques related to speaker verification, including (i) training a neural network for a speaker verification model, (ii) enrolling users at a client device, and (iii) verifying identities of users based on characteristics of the users' voices. Some implementations include a computer-implemented method. The method can include receiving, at a computing device, data that characterizes an utterance of a user of the computing device. A speaker representation can be generated, at the computing device, for the utterance using a neural network on the computing device. The neural network can be trained based on a plurality of training samples that each: (i) include data that characterizes a first utterance and data that characterizes one or more second utterances, and (ii) are labeled as a matching speakers sample or a non-matching speakers sample.
  Type: Grant
  Filed: October 11, 2023
  Date of Patent: November 19, 2024
  Assignee: Google LLC
  Inventors: Georg Heigold, Samuel Bengio, Ignacio Lopez Moreno
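A minimal PyTorch sketch of the matching/non-matching training setup described above: average the embeddings of the "second" utterances, score the first utterance against that centroid, and apply a binary label. The cosine scoring, learnable-looking scale/offset, and loss are assumptions:

```python
# Hedged sketch: one training sample = (test utterance, enrollment utterances,
# matching/non-matching label), scored against the enrollment centroid.
import torch
import torch.nn.functional as F

def pair_loss(test_emb, enroll_embs, label_match: bool,
              w: float = 10.0, b: float = -5.0):
    centroid = F.normalize(enroll_embs.mean(dim=0), dim=-1)
    score = w * F.cosine_similarity(test_emb, centroid, dim=-1) + b
    target = torch.ones(()) if label_match else torch.zeros(())
    return F.binary_cross_entropy_with_logits(score, target)

test = F.normalize(torch.randn(256), dim=-1)      # embedding of first utterance
enroll = F.normalize(torch.randn(5, 256), dim=-1)  # embeddings of second utterances
loss = pair_loss(test, enroll, label_match=True)   # backprop through the encoder
```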
- Patent number: 12125476
  Abstract: A method for training a neural network includes receiving a training input audio sequence including a sequence of input frames defining a hotword that initiates a wake-up process on a user device. The method further includes obtaining a first label and a second label for the training input audio sequence. The method includes generating, using a memorized neural network and the training input audio sequence, an output indicating a likelihood the training input audio sequence includes the hotword. The method further includes determining a first loss based on the first label and the output. The method includes determining a second loss based on the second label and the output. The method further includes optimizing the memorized neural network based on the first loss and the second loss associated with the training input audio sequence.
  Type: Grant
  Filed: February 28, 2022
  Date of Patent: October 22, 2024
  Assignee: Google LLC
  Inventors: Hyun Jin Park, Alex Seungryong Park, Ignacio Lopez Moreno
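A minimal PyTorch sketch of optimizing one hotword output against two labels at once; the toy model, the loss choice, and the mixing weight are assumptions (the abstract does not say how the two losses are combined):

```python
# Hedged sketch: one forward pass, two losses, one joint optimization step.
import torch

def train_step(model, optimizer, frames, label_a, label_b, mix=0.5):
    optimizer.zero_grad()
    out = model(frames)  # likelihood that the sequence contains the hotword
    loss_a = torch.nn.functional.binary_cross_entropy(out, label_a)
    loss_b = torch.nn.functional.binary_cross_entropy(out, label_b)
    (mix * loss_a + (1 - mix) * loss_b).backward()  # joint objective
    optimizer.step()

model = torch.nn.Sequential(torch.nn.Linear(40, 1), torch.nn.Sigmoid())
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 40)                               # stand-in frame features
ya, yb = torch.rand(8, 1).round(), torch.rand(8, 1).round()
train_step(model, opt, x, ya, yb)
```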
- Publication number: 20240331700
  Abstract: A method includes receiving a sequence of input audio frames and processing each corresponding input audio frame to determine a language ID event that indicates a predicted language. The method also includes obtaining speech recognition events each including a respective speech recognition result determined by a first language pack. Based on determining that the utterance includes a language switch from the first language to a second language, the method also includes loading a second language pack onto the client device and rewinding the input audio data buffered by an audio buffer to a time of the corresponding input audio frame associated with the language ID event that first indicated the second language as the predicted language. The method also includes emitting a first transcription and processing, using the second language pack loaded onto the client device, the rewound buffered audio data to generate a second transcription.
  Type: Application
  Filed: March 28, 2023
  Publication date: October 3, 2024
  Applicant: Google LLC
  Inventors: Yang Yu, Quan Wang, Ignacio Lopez Moreno
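The rewind mechanic is the interesting part: buffered frames carry timestamps, and when a language ID event pins the switch to time t, everything from t onward is replayed through the newly loaded language pack. A minimal Python sketch, with invented buffer sizes and frame types:

```python
# Hedged sketch: a timestamped audio buffer that can be rewound to the frame
# where a language ID event first predicted the second language.
from collections import deque

class RewindBuffer:
    def __init__(self, max_frames: int = 1000):
        self.frames = deque(maxlen=max_frames)  # (timestamp, frame) pairs

    def push(self, ts: float, frame: bytes):
        self.frames.append((ts, frame))

    def rewind_from(self, ts: float):
        """All buffered frames at or after the language-switch timestamp."""
        return [f for t, f in self.frames if t >= ts]

buf = RewindBuffer()
for i in range(10):
    buf.push(i * 0.1, b"frame%d" % i)
replay = buf.rewind_from(0.5)  # re-decode these with the second language pack
```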
- Patent number: 12046233
  Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating a user explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user.
  Type: Grant
  Filed: July 28, 2023
  Date of Patent: July 23, 2024
  Assignee: GOOGLE LLC
  Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, William Zhang
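A toy Python sketch of selecting the subset of profile languages to run recognition with; the probability values, cutoff, and cap are invented for the example:

```python
# Hedged sketch: keep only the user profile's most probable languages.
def candidate_languages(profile_langs: dict, min_p: float = 0.15,
                        max_n: int = 2) -> list:
    ranked = sorted(profile_langs.items(), key=lambda kv: kv[1], reverse=True)
    return [lang for lang, p in ranked if p >= min_p][:max_n]

profile = {"en-US": 0.6, "es-ES": 0.3, "fr-FR": 0.1}  # illustrative profile
print(candidate_languages(profile))  # -> ['en-US', 'es-ES']
```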
- Publication number: 20240221737
  Abstract: The technology described in this document can be embodied in a computer-implemented method that includes receiving, at a processing system, a first signal including an output of a speaker device and an additional audio signal. The method also includes determining, by the processing system, based at least in part on a model trained to identify the output of the speaker device, that the additional audio signal corresponds to an utterance of a user. The method further includes initiating a reduction in an audio output level of the speaker device based on determining that the additional audio signal corresponds to the utterance of the user.
  Type: Application
  Filed: March 19, 2024
  Publication date: July 4, 2024
  Applicant: Google LLC
  Inventors: Diego Melendo Casado, Ignacio Lopez Moreno, Javier Gonzalez-Dominguez
- Patent number: 12027162
  Abstract: Teacher-student learning can be used to train a keyword spotting (KWS) model using augmented training instance(s). Various implementations include aggressively augmenting (e.g., using spectral augmentation) base audio data to generate augmented audio data, where one or more portions of the base instance of audio data can be masked in the augmented instance of audio data (e.g., one or more time frames can be masked, one or more frequencies can be masked, etc.). Many implementations include processing augmented audio data using a KWS teacher model to generate a soft label, and processing the augmented audio data using a KWS student model to generate predicted output. One or more portions of the KWS student model can be updated based on a comparison of the soft label and the generated predicted output.
  Type: Grant
  Filed: March 3, 2021
  Date of Patent: July 2, 2024
  Assignee: GOOGLE LLC
  Inventors: Hyun Jin Park, Pai Zhu, Ignacio Lopez Moreno, Niranjan Subrahmanya
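A minimal PyTorch sketch of the recipe as the abstract states it: mask time frames and frequency bands of the features, run the teacher on the augmented input to get a soft label, and move the student toward it. Mask sizes and the comparison loss are assumptions:

```python
# Hedged sketch: spectral masking plus teacher-student distillation for KWS.
import torch

def spec_augment(feats, max_t: int = 10, max_f: int = 8):
    # feats: (batch, time, freq); assumes time > max_t and freq > max_f.
    t0 = torch.randint(0, feats.size(1) - max_t, (1,)).item()
    f0 = torch.randint(0, feats.size(2) - max_f, (1,)).item()
    out = feats.clone()
    out[:, t0:t0 + max_t, :] = 0.0  # mask a run of time frames
    out[:, :, f0:f0 + max_f] = 0.0  # mask a band of frequencies
    return out

def distill_step(teacher, student, optimizer, feats):
    aug = spec_augment(feats)
    with torch.no_grad():
        soft_label = teacher(aug)   # teacher also sees the augmented audio
    loss = torch.nn.functional.mse_loss(student(aug), soft_label)
    optimizer.zero_grad()
    loss.backward()                 # updates portions of the student model
    optimizer.step()
```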
- Publication number: 20240203426
  Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolate one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
  Type: Application
  Filed: March 4, 2024
  Publication date: June 20, 2024
  Inventors: Quan Wang, Prashant Sridhar, Ignacio Lopez Moreno, Hannah Muckenhirn
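A minimal PyTorch sketch of the signal path described above: frequency transform (STFT) in, mask on the spectrogram, inverse transform out. The mask below is a trivial placeholder; in the patent it is produced by a trained voice filter model conditioned on a speaker embedding:

```python
# Hedged sketch: STFT -> masked spectrogram -> inverse STFT -> refined audio.
import torch

def filter_waveform(wav, mask_fn, n_fft: int = 512, hop: int = 128):
    window = torch.hann_window(n_fft)
    spec = torch.stft(wav, n_fft, hop, window=window, return_complex=True)
    mask = mask_fn(spec.abs())  # (freq, frames) mask with values in [0, 1]
    return torch.istft(spec * mask, n_fft, hop, window=window)

wav = torch.randn(16000)  # 1 s of audio at 16 kHz
# Placeholder mask; a real system would call the trained voice filter model here.
refined = filter_waveform(wav, lambda mag: (mag > mag.mean()).float())
```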
- Publication number: 20240203400
  Abstract: Implementations relate to an automated assistant that can bypass invocation phrase detection when an estimation of device-to-device distance satisfies a distance threshold. The estimation of distance can be performed for a set of devices, such as a computerized watch and a cellular phone, and/or any other combination of devices. The devices can communicate ultrasonic signals between each other, and the estimated distance can be determined based on when the ultrasonic signals are sent and/or received by each respective device. When an estimated distance satisfies the distance threshold, the automated assistant can operate as if the user is holding onto their cellular phone while wearing their computerized watch. This scenario can indicate that the user may be intending to hold their device to interact with the automated assistant and, based on this indication, the automated assistant can temporarily bypass invocation phrase detection (e.g., invoke the automated assistant).
  Type: Application
  Filed: December 22, 2023
  Publication date: June 20, 2024
  Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
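The distance estimate reduces to ultrasonic time of flight: one device emits a signal at a known time, the other notes when it arrives, and distance is flight time multiplied by the speed of sound. A toy Python sketch, assuming synchronized clocks and a solved detection step, with an invented threshold:

```python
# Hedged sketch: device-to-device distance from ultrasonic time of flight.
SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 C

def estimated_distance_m(t_emitted_s: float, t_received_s: float) -> float:
    return (t_received_s - t_emitted_s) * SPEED_OF_SOUND_M_S

def should_bypass_invocation(distance_m: float, threshold_m: float = 0.5) -> bool:
    # Watch and phone very close together => user likely holding the phone.
    return distance_m <= threshold_m

d = estimated_distance_m(0.0, 0.0009)  # 0.9 ms of flight, about 0.31 m
print(should_bypass_invocation(d))     # -> True
```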
- Publication number: 20240194191
  Abstract: Implementations relate to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating a user explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
  Type: Application
  Filed: November 13, 2023
  Publication date: June 13, 2024
  Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno
- Patent number: 11961525
  Abstract: This document generally describes systems, methods, devices, and other techniques related to speaker verification, including (i) training a neural network for a speaker verification model, (ii) enrolling users at a client device, and (iii) verifying identities of users based on characteristics of the users' voices. Some implementations include a computer-implemented method. The method can include receiving, at a computing device, data that characterizes an utterance of a user of the computing device. A speaker representation can be generated, at the computing device, for the utterance using a neural network on the computing device. The neural network can be trained based on a plurality of training samples that each: (i) include data that characterizes a first utterance and data that characterizes one or more second utterances, and (ii) are labeled as a matching speakers sample or a non-matching speakers sample.
  Type: Grant
  Filed: August 3, 2021
  Date of Patent: April 16, 2024
  Assignee: Google LLC
  Inventors: Georg Heigold, Samuel Bengio, Ignacio Lopez Moreno
- Publication number: 20240112667
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
  Type: Application
  Filed: November 30, 2023
  Publication date: April 4, 2024
  Applicant: Google LLC
  Inventors: Ye Jia, Zhifeng Chen, Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Ignacio Lopez Moreno, Fei Ren, Yu Zhang, Quan Wang, Patrick An Phu Nguyen