Patents by Inventor Ignacio Lopez

Ignacio Lopez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11817085
    Abstract: Implementations relate to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating that a user explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: November 14, 2023
    Assignee: GOOGLE LLC
    Inventors: Pu-Sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno
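The selection logic this abstract describes can be illustrated with a small sketch: pick the language whose anticipated response duration best matches the duration actually observed. Everything below (function names, the duration-matching heuristic, the per-language values) is a hypothetical illustration, not code from the patent.

```python
def select_language(candidates, anticipated_durations, observed_duration):
    """Return the candidate language whose anticipated user-response
    duration (seconds) is closest to the actually observed duration."""
    best, best_gap = None, float("inf")
    for lang in candidates:
        gap = abs(anticipated_durations[lang] - observed_duration)
        if gap < best_gap:
            best, best_gap = lang, gap
    return best

# Hypothetical anticipated response durations per candidate language.
anticipated = {"en-US": 2.0, "es-ES": 4.5}
print(select_language(list(anticipated), anticipated, observed_duration=4.2))  # es-ES
```

In practice the abstract lists several interaction characteristics; a real system would combine them rather than rely on a single duration match.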
  • Patent number: 11817084
    Abstract: The present disclosure relates generally to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. The system can enable multilingual interaction with the automated assistant, without necessitating that a user explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: November 14, 2023
    Assignee: GOOGLE LLC
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno
  • Patent number: 11798541
    Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating a user explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: October 24, 2023
    Assignee: GOOGLE LLC
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno
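The subset selection described in this abstract (keeping only some of a profile's assigned languages, using their probabilities) can be sketched as follows; the threshold, the cap, and the probability values are illustrative assumptions, not values from the patent.

```python
def candidate_languages(profile_langs, min_prob=0.2, max_langs=2):
    """Rank the languages assigned to a user profile by their assigned
    probability and keep only the most likely ones for speech recognition."""
    ranked = sorted(profile_langs.items(), key=lambda kv: kv[1], reverse=True)
    return [lang for lang, prob in ranked if prob >= min_prob][:max_langs]

# Assumed per-profile language probabilities.
profile = {"en-US": 0.6, "es-ES": 0.3, "fr-FR": 0.1}
print(candidate_languages(profile))  # ['en-US', 'es-ES']
```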
  • Patent number: 11798562
    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, the evaluation ad-vector including a number of style classes each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
    Type: Grant
    Filed: May 16, 2021
    Date of Patent: October 24, 2023
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam
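As a rough, heavily simplified sketch of the matching step in this abstract: per-style-class similarities between an evaluation vector and a reference vector are combined with attention weights. The weighting scheme below (softmax over the similarities themselves) is an assumed stand-in for the patent's self-attention and routing mechanism, not its actual formulation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def attention_score(eval_classes, ref_classes):
    """Compare corresponding style-class vectors and combine the
    similarities with softmax attention weights."""
    sims = [cosine(e, r) for e, r in zip(eval_classes, ref_classes)]
    weights = [math.exp(s) for s in sims]
    z = sum(weights)
    return sum((w / z) * s for w, s in zip(weights, sims))

enrolled = [[1.0, 0.0], [0.5, 0.5]]            # reference style-class vectors
matching = attention_score(enrolled, enrolled)  # same speaker: score 1.0
differing = attention_score(enrolled, [[0.0, 1.0], [0.5, -0.5]])
print(matching > differing)  # True
```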
  • Publication number: 20230335116
    Abstract: In some implementations, processor(s) can receive an utterance from a speaker, and determine whether the speaker is a known user of a user device or not a known user of the user device. The user device can be shared by a plurality of known users. Further, the processor(s) can determine whether the utterance corresponds to a personal request or a non-personal request. Moreover, in response to determining that the speaker is not a known user of the user device and in response to determining that the utterance corresponds to a non-personal request, the processor(s) can cause a response to the utterance to be provided for presentation to the speaker at the user device, or can cause an action to be performed by the user device responsive to the utterance.
    Type: Application
    Filed: June 16, 2023
    Publication date: October 19, 2023
    Inventors: Meltem Oktem, Taral Pradeep Joglekar, Fnu Heryandi, Pu-sen Chao, Ignacio Lopez Moreno, Salil Rajadhyaksha, Alexander H. Gruenstein, Diego Melendo Casado
  • Publication number: 20230274731
    Abstract: A method for training a neural network includes receiving a training input audio sequence including a sequence of input frames defining a hotword that initiates a wake-up process on a user device. The method further includes obtaining a first label and a second label for the training input audio sequence. The method includes generating, using a memorized neural network and the training input audio sequence, an output indicating a likelihood the training input audio sequence includes the hotword. The method further includes determining a first loss based on the first label and the output. The method includes determining a second loss based on the second label and the output. The method further includes optimizing the memorized neural network based on the first loss and the second loss associated with the training input audio sequence.
    Type: Application
    Filed: February 28, 2022
    Publication date: August 31, 2023
    Applicant: Google LLC
    Inventors: Hyun Jin Park, Alex Seungryong Park, Ignacio Lopez Moreno
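The two-loss training loop this abstract describes (a first loss against a first label and a second loss against a second label, jointly optimized) can be sketched on a toy one-parameter model. The squared-error losses, the equal weighting, and the learning rate are illustrative assumptions; the patent's losses and network are not specified here.

```python
def combined_loss(w, x, y1, y2, alpha=0.5):
    """Weighted sum of squared-error losses against the first and
    second labels for a single training example."""
    pred = w * x
    return alpha * (pred - y1) ** 2 + (1 - alpha) * (pred - y2) ** 2

def sgd_step(w, x, y1, y2, lr=0.05, alpha=0.5):
    """One gradient step on the combined loss (analytic gradient)."""
    pred = w * x
    grad = 2 * x * (alpha * (pred - y1) + (1 - alpha) * (pred - y2))
    return w - lr * grad

w = 0.0
for _ in range(200):
    w = sgd_step(w, x=1.0, y1=1.0, y2=0.0)
print(round(w, 3))  # 0.5: the parameter settles between the two labels
```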
  • Patent number: 11735173
    Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating a user explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, William Zhang
  • Patent number: 11735176
    Abstract: Speaker diarization techniques that enable processing of audio data to generate one or more refined versions of the audio data, where each of the refined versions of the audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by generating a speaker embedding for the single human speaker, and processing the audio data using a trained generative model—and using the speaker embedding in determining activations for hidden layers of the trained generative model during the processing. Output is generated over the trained generative model based on the processing, and the output is the refined version of the audio data.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Ignacio Lopez Moreno, Luis Carlos Cobo Rus
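The conditioning idea in this abstract (using a speaker embedding to determine activations of the generative model's hidden layers) can be sketched with a single toy layer. The FiLM-style scale-and-shift modulation below is an assumed stand-in for whatever conditioning the patented model uses; all weights and names are illustrative.

```python
def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def conditioned_layer(x, w, speaker_emb, scale_w, shift_w):
    """One hidden layer whose activations are scaled and shifted by
    linear projections of a speaker embedding, so the same audio
    features produce speaker-specific activations."""
    h = matvec(w, x)
    scale = matvec(scale_w, speaker_emb)
    shift = matvec(shift_w, speaker_emb)
    return [max(0.0, s * a + t) for a, s, t in zip(h, scale, shift)]

x = [0.2, -0.4]                   # audio features for one frame
w = [[1.0, 0.0], [0.0, 1.0]]      # layer weights (identity, for clarity)
scale_w = [[1.0, 0.0], [0.0, 1.0]]
shift_w = [[0.0, 0.5], [0.5, 0.0]]
out_a = conditioned_layer(x, w, [1.0, 0.0], scale_w, shift_w)
out_b = conditioned_layer(x, w, [0.0, 1.0], scale_w, shift_w)
print(out_a != out_b)  # True: activations depend on the speaker embedding
```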
  • Patent number: 11727918
    Abstract: In some implementations, a set of audio recordings capturing utterances of a user is received by a first speech-enabled device. Based on the set of audio recordings, the first speech-enabled device generates a first user voice recognition model for use in subsequently recognizing a voice of the user at the first speech-enabled device. Further, a particular user account associated with the first user voice recognition model is determined, and an indication is received that a second speech-enabled device is associated with the particular user account. In response to receiving the indication, the set of audio recordings is provided to the second speech-enabled device. Based on the set of audio recordings, the second speech-enabled device generates a second user voice recognition model for use in subsequently recognizing the voice of the user at the second speech-enabled device.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: August 15, 2023
    Assignee: GOOGLE LLC
    Inventors: Ignacio Lopez Moreno, Diego Melendo Casado
  • Patent number: 11721326
    Abstract: In some implementations, processor(s) can receive an utterance from a speaker, and determine whether the speaker is a known user of a user device or not a known user of the user device. The user device can be shared by a plurality of known users. Further, the processor(s) can determine whether the utterance corresponds to a personal request or a non-personal request. Moreover, in response to determining that the speaker is not a known user of the user device and in response to determining that the utterance corresponds to a non-personal request, the processor(s) can cause a response to the utterance to be provided for presentation to the speaker at the user device, or can cause an action to be performed by the user device responsive to the utterance.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: August 8, 2023
    Assignee: GOOGLE LLC
    Inventors: Meltem Oktem, Taral Pradeep Joglekar, Fnu Heryandi, Pu-sen Chao, Ignacio Lopez Moreno, Salil Rajadhyaksha, Alexander H. Gruenstein, Diego Melendo Casado
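The gating decision this abstract describes reduces to a simple rule: an unknown speaker is served only for non-personal requests. The sketch below captures that rule; the function name and return values are illustrative, not from the patent.

```python
def handle_utterance(speaker_known, is_personal):
    """Gate assistant responses: unknown speakers get responses only
    for non-personal requests (return values are illustrative)."""
    if not speaker_known and is_personal:
        return "decline"   # guest asking for personal data
    return "respond"       # known user, or a non-personal request

print(handle_utterance(speaker_known=False, is_personal=False))  # respond
print(handle_utterance(speaker_known=False, is_personal=True))   # decline
```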
  • Publication number: 20230223891
    Abstract: A solar tracker is disclosed, comprising a plurality of posts and actuator posts connected to a foundation and to a torque tube on which solar panels are mounted. The actuator posts include a hinge next to a radial arm that is rigidly connected to the torque tube, the arm being hinged at its other end to a screw-driven linear actuator whose bottom end is connected to the actuator post by a joint. Each linear actuator is actuated by a gear that engages an endless screw rigidly connected to a Cardan-type drive shared by all the actuators and driven by an electric motor, the Cardan-type drive fitting closely to the torque tube by means of supports.
    Type: Application
    Filed: April 27, 2021
    Publication date: July 13, 2023
    Inventors: Jose Miguel RODRIGUEZ GONZALEZ, Diego LOPEZ ZOZAYA, Jose Ignacio LOPEZ AYARZA, Juan Manuel GOMEZ GARCIA
  • Patent number: 11646011
    Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: May 9, 2023
    Assignee: GOOGLE LLC
    Inventors: Li Wan, Yang Yu, Prashant Sridhar, Ignacio Lopez Moreno, Quan Wang
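The tuple loss this abstract contrasts with cross-entropy can be sketched in a pairwise form, following the published "tuplemax" idea from related work: the target language's logit is compared against each other candidate in a two-way softmax and the binary log-losses are averaged. The exact formulation in the patent may differ; this is an illustrative sketch.

```python
import math

def tuple_loss(logits, target):
    """Average binary log-loss of the target language against each
    other candidate, instead of one softmax over all N languages.
    Uses -log(e^pos / (e^pos + e^neg)) = log(1 + e^(neg - pos))."""
    pos = logits[target]
    pair_losses = [
        math.log(1.0 + math.exp(neg - pos))
        for j, neg in enumerate(logits) if j != target
    ]
    return sum(pair_losses) / len(pair_losses)

logits = [2.0, 0.5, -1.0]  # scores for three candidate languages
# The loss is small when the target has the highest score:
print(tuple_loss(logits, target=0) < tuple_loss(logits, target=2))  # True
```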
  • Publication number: 20230113617
    Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
    Type: Application
    Filed: December 9, 2022
    Publication date: April 13, 2023
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, Quan Wang
  • Patent number: 11620989
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes generating, by a speech recognition system, a matrix from a predetermined quantity of vectors that each represent input for a layer of a neural network, generating a plurality of sub-matrices from the matrix, and using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network to determine whether an utterance encoded in an audio signal comprises a keyword for which the neural network is trained.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: April 4, 2023
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Yu-hsin Joyce Chen
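The matrix construction step in this abstract (stacking a fixed number of input vectors into a matrix and splitting it into sub-matrices, one per node) can be sketched as a non-overlapping tiling. The tile sizes and the example values are illustrative assumptions.

```python
def to_submatrices(vectors, rows, cols):
    """Treat a fixed number of input vectors as the rows of a matrix,
    then split it into non-overlapping rows x cols sub-matrices."""
    matrix = vectors
    subs = []
    for r in range(0, len(matrix), rows):
        for c in range(0, len(matrix[0]), cols):
            subs.append([row[c:c + cols] for row in matrix[r:r + rows]])
    return subs

frames = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12],
          [13, 14, 15, 16]]
subs = to_submatrices(frames, rows=2, cols=2)
print(len(subs))   # 4 sub-matrices, one per node
print(subs[0])     # [[1, 2], [5, 6]]
```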
  • Publication number: 20230089308
    Abstract: A method includes receiving an input audio signal that corresponds to utterances spoken by multiple speakers. The method also includes processing the input audio signal to generate a transcription of the utterances and a sequence of speaker turn tokens each indicating a location of a respective speaker turn. The method also includes segmenting the input audio signal into a plurality of speaker segments based on the sequence of speaker turn tokens. The method also includes extracting a speaker-discriminative embedding from each speaker segment and performing spectral clustering on the speaker-discriminative embeddings to cluster the plurality of speaker segments into k classes. The method also includes assigning, to each speaker segment clustered into a respective class, a respective speaker label that is different from the speaker labels assigned to the segments clustered into each other class of the k classes.
    Type: Application
    Filed: December 14, 2021
    Publication date: March 23, 2023
    Applicant: Google LLC
    Inventors: Quan Wang, Han Lu, Evan Clark, Ignacio Lopez Moreno, Hasim Sak, Wei Xia, Taral Joglekar, Anshuman Tripathi
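The pipeline this abstract describes (segment at speaker-turn tokens, embed each segment, cluster the embeddings) can be sketched end to end. For brevity, a greedy centroid-distance clustering is used as a stand-in for the spectral clustering step, and the `<turn>` token name, the threshold, and the toy embeddings are assumptions.

```python
def segment_by_turns(tokens):
    """Split a token stream into speaker segments at each turn token
    (the '<turn>' token name is illustrative)."""
    segs, cur = [], []
    for t in tokens:
        if t == "<turn>":
            if cur:
                segs.append(cur)
            cur = []
        else:
            cur.append(t)
    if cur:
        segs.append(cur)
    return segs

def cluster(embeddings, threshold=0.5):
    """Greedy agglomeration by Euclidean distance to cluster centroids;
    a simple stand-in for spectral clustering."""
    labels, centroids = [], []
    for e in embeddings:
        best, best_d = None, threshold
        for k, c in enumerate(centroids):
            d = sum((a - b) ** 2 for a, b in zip(e, c)) ** 0.5
            if d < best_d:
                best, best_d = k, d
        if best is None:
            centroids.append(list(e))
            labels.append(len(centroids) - 1)
        else:
            labels.append(best)
    return labels

segs = segment_by_turns(["hi", "there", "<turn>", "hello", "<turn>", "ok"])
print(len(segs))                                       # 3 segments
print(cluster([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0]]))   # [0, 1, 0]
```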
  • Patent number: 11594230
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, to facilitate language-independent speaker verification. In one aspect, a method includes actions of receiving, by a user device, audio data representing an utterance of a user. Other actions may include providing, to a neural network stored on the user device, input data derived from the audio data and a language identifier. The neural network may be trained using speech data representing speech in different languages or dialects. The method may include additional actions of generating, based on output of the neural network, a speaker representation and determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user. The method may provide the user with access to the user device based on determining that the utterance is an utterance of the user.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: February 28, 2023
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Li Wan, Quan Wang
  • Publication number: 20230015169
    Abstract: A method of generating an accurate speaker representation for an audio sample includes receiving a first audio sample from a first speaker and a second audio sample from a second speaker. The method includes dividing a respective audio sample into a plurality of audio slices. The method also includes, based on the plurality of slices, generating a set of candidate acoustic embeddings where each candidate acoustic embedding includes a vector representation of acoustic features. The method further includes removing a subset of the candidate acoustic embeddings from the set of candidate acoustic embeddings. The method additionally includes generating an aggregate acoustic embedding from the remaining candidate acoustic embeddings in the set of candidate acoustic embeddings after removing the subset of the candidate acoustic embeddings.
    Type: Application
    Filed: September 19, 2022
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Yeming Fang, Quan Wang, Pedro Jose Moreno Mengibar, Ignacio Lopez Moreno, Gang Feng, Fang Chu, Jin Shi, Jason William Pelecanos
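The aggregation step this abstract describes (drop a subset of candidate embeddings, then aggregate the rest) can be sketched with a distance-from-the-mean removal criterion. The patent does not fix a criterion here, so the drop fraction and the outlier rule below are assumptions.

```python
def aggregate_embedding(candidates, drop_frac=0.25):
    """Average candidate embeddings after discarding the fraction
    farthest from the mean (the removal rule is an assumption)."""
    dim = len(candidates[0])
    mean = [sum(c[i] for c in candidates) / len(candidates) for i in range(dim)]

    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, mean)) ** 0.5

    keep_n = max(1, int(len(candidates) * (1 - drop_frac)))
    keep = sorted(candidates, key=dist)[:keep_n]
    return [sum(c[i] for c in keep) / len(keep) for i in range(dim)]

# Three consistent candidates and one outlier slice.
cands = [[1.0, 0.0], [1.1, 0.0], [0.9, 0.0], [9.0, 9.0]]
print(aggregate_embedding(cands))  # close to [1.0, 0.0]; outlier removed
```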
  • Patent number: 11545157
    Abstract: Techniques are described for training and/or utilizing an end-to-end speaker diarization model. In various implementations, the model is a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer. Audio features of audio data can be applied as input to an end-to-end speaker diarization model trained according to implementations disclosed herein, and the model utilized to process the audio features to generate, as direct output over the model, speaker diarization results. Further, the end-to-end speaker diarization model can be a sequence-to-sequence model, where the sequence can have variable length. Accordingly, the model can be utilized to generate speaker diarization results for any of various length audio segments.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: January 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Quan Wang, Yash Sheth, Ignacio Lopez Moreno, Li Wan
  • Patent number: 11527235
    Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: December 13, 2022
    Assignee: GOOGLE LLC
    Inventors: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, Quan Wang
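The automatic embedding update this abstract mentions (refreshing a stored speaker embedding from a user's previous utterances) can be sketched as an exponential moving average; the update rule and the weight are illustrative assumptions, not the patent's method.

```python
def update_embedding(stored, fresh, weight=0.05):
    """Blend a stored speaker embedding with an embedding computed
    from a newly verified utterance (exponential moving average)."""
    return [(1 - weight) * s + weight * f for s, f in zip(stored, fresh)]

emb = [1.0, 0.0]                      # enrolled speaker embedding
emb = update_embedding(emb, [0.0, 1.0])  # embedding from a new utterance
print(emb)  # [0.95, 0.05]
```

A small weight keeps the profile stable while letting it track gradual changes in the user's voice.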
  • Publication number: 20220366914
    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, the evaluation ad-vector including a number of style classes each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
    Type: Application
    Filed: May 16, 2021
    Publication date: November 17, 2022
    Applicant: Google LLC
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam