Patents by Inventor Jason Pelecanos

Jason Pelecanos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12205578
    Abstract: Implementations disclosed herein are directed to techniques for selectively enabling and/or disabling non-transient storage of one or more instances of assistant interaction data for turn(s) of a dialog between a user and an automated assistant. Implementations are additionally or alternatively directed to techniques for retroactive wiping of non-transiently stored assistant interaction data from previous assistant interaction(s).
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: January 21, 2025
    Assignee: Google LLC
    Inventors: Fadi Biadsy, Johan Schalkwyk, Jason Pelecanos
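The storage-toggling and retroactive-wipe behavior described in the abstract above can be sketched in a few lines of Python; the class and method names here are illustrative assumptions, not taken from the patent.

```python
class AssistantInteractionLog:
    """Illustrative sketch (names are assumptions, not from the patent):
    selectively store dialog turns and retroactively wipe stored ones."""

    def __init__(self):
        self.storage_enabled = True
        self.turns = []

    def record_turn(self, turn_data):
        # Persist non-transiently only while storage is enabled.
        if self.storage_enabled:
            self.turns.append(turn_data)

    def wipe_last(self, n):
        # Retroactively wipe the n most recent stored interactions.
        if n > 0:
            del self.turns[-n:]

log = AssistantInteractionLog()
log.record_turn("turn 1")
log.storage_enabled = False     # user disables storage mid-dialog
log.record_turn("turn 2")       # this turn is not stored
print(log.turns)  # ['turn 1']
```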
  • Patent number: 12154574
    Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on the number of verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on the number of verification results identified in the second set that include the performance metric. The method further includes determining whether the verification capability of the alternative model is better than that of the primary model based on the first score and the second score.
    Type: Grant
    Filed: November 9, 2023
    Date of Patent: November 26, 2024
    Assignee: Google LLC
    Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
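As a rough sketch of the comparison logic in the abstract above, assuming each verification result is a record with a boolean flag for whether it includes the performance metric (the data shape and function names are assumptions, not from the patent):

```python
def model_score(results):
    """Score = count of verification results that include the metric."""
    return sum(1 for r in results if r["has_metric"])

def alternative_is_better(primary_results, alternative_results):
    """Compare verification capability via the first and second scores."""
    return model_score(alternative_results) > model_score(primary_results)

primary = [{"has_metric": True}, {"has_metric": False}, {"has_metric": True}]
alternative = [{"has_metric": True}, {"has_metric": True}, {"has_metric": True}]
print(alternative_is_better(primary, alternative))  # True (3 > 2)
```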
  • Publication number: 20240203400
    Abstract: Implementations relate to an automated assistant that can bypass invocation phrase detection when an estimation of device-to-device distance satisfies a distance threshold. The estimation of distance can be performed for a set of devices, such as a computerized watch and a cellular phone, and/or any other combination of devices. The devices can communicate ultrasonic signals between each other, and the estimated distance can be determined based on when the ultrasonic signals are sent and/or received by each respective device. When an estimated distance satisfies the distance threshold, the automated assistant can operate as if the user is holding onto their cellular phone while wearing their computerized watch. This scenario can indicate that the user may be intending to hold their device to interact with the automated assistant and, based on this indication, the automated assistant can temporarily bypass invocation phrase detection (e.g., invoke the automated assistant).
    Type: Application
    Filed: December 22, 2023
    Publication date: June 20, 2024
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
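A toy sketch of the distance gate described above, assuming synchronized device clocks and a simple one-way time-of-flight estimate from the ultrasonic send/receive timestamps (function names and the threshold value are illustrative):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C

def estimated_distance_m(sent_time_s, received_time_s):
    """One-way time-of-flight estimate of device-to-device distance."""
    return (received_time_s - sent_time_s) * SPEED_OF_SOUND_M_S

def should_bypass_hotword(distance_m, threshold_m=0.5):
    """Bypass invocation-phrase detection when the devices are close."""
    return distance_m <= threshold_m

d = estimated_distance_m(0.0, 0.001)  # 1 ms of ultrasonic travel time
print(should_bypass_hotword(d))  # True (~0.34 m is within 0.5 m)
```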
  • Publication number: 20240135934
    Abstract: A method includes obtaining a multi-utterance training sample that includes audio data characterizing utterances spoken by two or more different speakers and obtaining ground-truth speaker change intervals indicating time intervals in the audio data where speaker changes among the two or more different speakers occur. The method also includes processing the audio data to generate a sequence of predicted speaker change tokens using a sequence transduction model. For each corresponding predicted speaker change token, the method includes labeling the corresponding predicted speaker change token as correct when the predicted speaker change token overlaps with one of the ground-truth speaker change intervals. The method also includes determining a precision metric of the sequence transduction model based on a number of the predicted speaker change tokens labeled as correct and a total number of the predicted speaker change tokens in the sequence of predicted speaker change tokens.
    Type: Application
    Filed: October 9, 2023
    Publication date: April 25, 2024
    Applicant: Google LLC
    Inventors: Guanlong Zhao, Quan Wang, Han Lu, Yiling Huang, Jason Pelecanos
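The precision metric in the abstract above reduces to a short calculation; this sketch assumes each predicted speaker-change token carries a single timestamp, and counts it correct when it falls inside any ground-truth change interval:

```python
def token_precision(predicted_token_times, ground_truth_intervals):
    """Precision of predicted speaker-change tokens: a token is labeled
    correct when its timestamp falls inside any ground-truth interval."""
    if not predicted_token_times:
        return 0.0
    correct = sum(
        1 for t in predicted_token_times
        if any(start <= t <= end for start, end in ground_truth_intervals)
    )
    return correct / len(predicted_token_times)

intervals = [(1.0, 1.5), (4.0, 4.5)]      # seconds where speakers change
predictions = [1.2, 2.0, 4.3, 6.0]        # predicted change-token times
print(token_precision(predictions, intervals))  # 0.5
```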
  • Patent number: 11942094
    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance, processing a first portion of the audio data that characterizes a predetermined hotword to generate a text-dependent evaluation vector, and generating one or more text-dependent confidence scores. When one of the text-dependent confidence scores satisfies a threshold, the operations include identifying a speaker of the utterance as a respective enrolled user associated with the text-dependent confidence score that satisfies the threshold and initiating performance of an action without performing speaker verification. When none of the text-dependent confidence scores satisfy the threshold, the operations include processing a second portion of the audio data that characterizes a query to generate a text-independent evaluation vector, generating one or more text-independent confidence scores, and determining whether the identity of the speaker of the utterance includes any of the enrolled users.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: March 26, 2024
    Assignee: Google LLC
    Inventors: Roza Chojnacka, Jason Pelecanos, Quan Wang, Ignacio Lopez Moreno
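A minimal sketch of the two-stage decision flow above, with hypothetical score dictionaries standing in for the text-dependent and text-independent models (the threshold and data shapes are assumptions):

```python
def verify_speaker(td_scores, ti_score_fn, threshold=0.8):
    """Two-stage verification: accept on a text-dependent score alone when
    it clears the threshold; otherwise fall back to text-independent
    scoring of the query portion of the utterance."""
    for user, score in td_scores.items():
        if score >= threshold:
            return user, "text-dependent"
    # Fallback: compute text-independent scores only when needed.
    ti_scores = ti_score_fn()
    best_user = max(ti_scores, key=ti_scores.get)
    if ti_scores[best_user] >= threshold:
        return best_user, "text-independent"
    return None, "rejected"

td = {"alice": 0.91, "bob": 0.40}
print(verify_speaker(td, lambda: {"alice": 0.7}))  # ('alice', 'text-dependent')
```

Deferring the text-independent pass behind a callable mirrors the abstract's point that the second, costlier stage runs only when no text-dependent score clears the threshold.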
  • Publication number: 20240079013
    Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on the number of verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on the number of verification results identified in the second set that include the performance metric. The method further includes determining whether the verification capability of the alternative model is better than that of the primary model based on the first score and the second score.
    Type: Application
    Filed: November 9, 2023
    Publication date: March 7, 2024
    Applicant: Google LLC
    Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
  • Publication number: 20240029742
    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes nE style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
    Type: Application
    Filed: October 2, 2023
    Publication date: January 25, 2024
    Applicant: Google LLC
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam
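A toy stand-in for the per-style-class comparison sketched in the abstract above. The real ad-vector scoring uses a learned self-attention mechanism; plain cosine similarity averaged over style classes is an assumption made here purely for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def multi_condition_score(evaluation_classes, reference_classes):
    """Average per-style-class similarity between the evaluation and
    reference ad-vector components (a stand-in for learned attention)."""
    sims = [cosine(e, r) for e, r in zip(evaluation_classes, reference_classes)]
    return sum(sims) / len(sims)

eval_adv = [[1.0, 0.0], [0.5, 0.5]]   # two style classes, 2-dim each
ref_adv = [[1.0, 0.0], [0.5, 0.5]]
print(round(multi_condition_score(eval_adv, ref_adv), 6))  # 1.0
```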
  • Patent number: 11854533
    Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing a SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher student learning.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: December 26, 2023
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
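Conditioning a single shared model on per-user speaker embeddings, plus the teacher-student signal mentioned above, can be sketched as follows (the toy model, the concatenation scheme, and the squared-error loss are illustrative assumptions):

```python
def personalize_inference(sd_model, speaker_embedding, audio_frames):
    """Condition one shared speaker-dependent (SD) model on a per-user
    embedding: each frame is processed jointly with the embedding."""
    return [sd_model(frame + speaker_embedding) for frame in audio_frames]

def distillation_loss(student_out, teacher_out):
    """Teacher-student training signal: mean squared difference between
    the speaker-independent teacher and the SD student outputs."""
    diffs = [(s - t) ** 2 for s, t in zip(student_out, teacher_out)]
    return sum(diffs) / len(diffs)

# Toy "model" that just sums its concatenated (frame + embedding) input.
toy_model = lambda features: sum(features)

alice_embedding = [0.5, 0.5]
frames = [[1.0, 1.0], [0.0, 2.0]]
print(personalize_inference(toy_model, alice_embedding, frames))  # [3.0, 3.0]
```

Swapping in a different user's embedding personalizes the same model without retraining, which is the point of the shared-weights design.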
  • Patent number: 11837238
    Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on the number of verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on the number of verification results identified in the second set that include the performance metric. The method further includes determining whether the verification capability of the alternative model is better than that of the primary model based on the first score and the second score.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: December 5, 2023
    Assignee: Google LLC
    Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
  • Patent number: 11798562
    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes nE style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
    Type: Grant
    Filed: May 16, 2021
    Date of Patent: October 24, 2023
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam
  • Publication number: 20230037085
    Abstract: Implementations disclosed herein are directed to techniques for selectively enabling and/or disabling non-transient storage of one or more instances of assistant interaction data for turn(s) of a dialog between a user and an automated assistant. Implementations are additionally or alternatively directed to techniques for retroactive wiping of non-transiently stored assistant interaction data from previous assistant interaction(s).
    Type: Application
    Filed: January 7, 2021
    Publication date: February 2, 2023
    Inventors: Fadi Biadsy, Johan Schalkwyk, Jason Pelecanos
  • Publication number: 20220366914
    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes nE style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
    Type: Application
    Filed: May 16, 2021
    Publication date: November 17, 2022
    Applicant: Google LLC
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam
  • Publication number: 20220310098
    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance, processing a first portion of the audio data that characterizes a predetermined hotword to generate a text-dependent evaluation vector, and generating one or more text-dependent confidence scores. When one of the text-dependent confidence scores satisfies a threshold, the operations include identifying a speaker of the utterance as a respective enrolled user associated with the text-dependent confidence score that satisfies the threshold and initiating performance of an action without performing speaker verification. When none of the text-dependent confidence scores satisfy the threshold, the operations include processing a second portion of the audio data that characterizes a query to generate a text-independent evaluation vector, generating one or more text-independent confidence scores, and determining whether the identity of the speaker of the utterance includes any of the enrolled users.
    Type: Application
    Filed: March 24, 2021
    Publication date: September 29, 2022
    Applicant: Google LLC
    Inventors: Roza Chojnacka, Jason Pelecanos, Quan Wang, Ignacio Lopez Moreno
  • Publication number: 20220157298
    Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing a SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher student learning.
    Type: Application
    Filed: January 28, 2022
    Publication date: May 19, 2022
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
  • Publication number: 20220122614
    Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on the number of verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on the number of verification results identified in the second set that include the performance metric. The method further includes determining whether the verification capability of the alternative model is better than that of the primary model based on the first score and the second score.
    Type: Application
    Filed: October 21, 2020
    Publication date: April 21, 2022
    Applicant: Google LLC
    Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
  • Patent number: 11238847
    Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing a SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher student learning.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
  • Publication number: 20210312907
    Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing a SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher student learning.
    Type: Application
    Filed: December 4, 2019
    Publication date: October 7, 2021
    Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
  • Publication number: 20080208581
    Abstract: A system and method for speaker recognition and speaker modelling whereby prior speaker information is incorporated into the modelling process, utilising the maximum a posteriori (MAP) algorithm and extending it to contain prior Gaussian component correlation information. First, a background model (10) is estimated: pooled acoustic reference data (11) relating to a specific demographic of speakers (the population of interest) from a given total population is trained via the Expectation Maximization (EM) algorithm (12) to produce the background model (13). The background model (13) is then adapted utilising information from a plurality of reference speakers (21) in accordance with the Maximum A Posteriori (MAP) criterion (22). Utilizing the MAP estimation technique, the reference speaker data and prior information obtained from the background model parameters are combined to produce a library of adapted speaker models, namely Gaussian Mixture Models (23).
    Type: Application
    Filed: December 3, 2004
    Publication date: August 28, 2008
    Inventors: Jason Pelecanos, Subramanian Sridharan, Robert Vogt
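The MAP mean update that this abstract builds on is standard in GMM speaker modelling. The sketch below shows only the classic relevance-MAP interpolation for a single Gaussian mean, without the component-correlation prior that the patent adds (the relevance factor of 16 is a common convention, not a value from the patent):

```python
def map_adapt_mean(prior_mean, speaker_frames, relevance=16.0):
    """Classic relevance-MAP update of one Gaussian mean: interpolate the
    speaker's sample mean with the background (prior) mean, weighted by
    how much speaker data the component saw."""
    n = len(speaker_frames)
    if n == 0:
        return prior_mean  # no data: keep the background prior
    sample_mean = [sum(col) / n for col in zip(*speaker_frames)]
    alpha = n / (n + relevance)
    return [alpha * s + (1 - alpha) * p
            for s, p in zip(sample_mean, prior_mean)]

prior = [0.0, 0.0]
frames = [[1.0, 2.0]] * 16   # 16 frames -> alpha = 16 / (16 + 16) = 0.5
print(map_adapt_mean(prior, frames))  # [0.5, 1.0]
```

With little speaker data the adapted mean stays near the background prior; with lots of data it approaches the speaker's sample mean, which is the behavior the prior information is meant to provide.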
  • Publication number: 20070256499
    Abstract: A method, system and program storage device are provided for machine diagnostics, detection and profiling using pressure waves, the method including profiling known sources, acquiring pressure wave data, analyzing the acquired pressure wave data, and detecting if the analyzed pressure wave data matches a profiled known source; the system including a processor, a pressure wave transducer in signal communication with the processor, a pressure wave analysis unit in signal communication with the processor, and a source or threat detection unit in signal communication with the processor; and the program storage device including program steps for profiling known sources, acquiring pressure wave data, analyzing the acquired pressure wave data, and detecting if the analyzed pressure wave data matches a profiled known source.
    Type: Application
    Filed: April 21, 2006
    Publication date: November 8, 2007
    Inventors: Jason Pelecanos, Douglas Heintzman, Jiri Navratil, Ganesh Ramaswamy
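The profile-then-match loop described above can be sketched with a simple normalized-correlation detector; the spectrum representation, the similarity measure, and the threshold are all assumptions for illustration, not details from the patent:

```python
import math

def _similarity(u, v):
    """Normalized correlation (cosine) between two magnitude spectra."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u)) *
            math.sqrt(sum(b * b for b in v)))
    return dot / norm

def best_matching_source(observed, profiles, min_similarity=0.9):
    """Return the profiled known source that best matches the acquired
    pressure-wave data, or None if nothing clears the threshold."""
    best = max(profiles, key=lambda name: _similarity(observed, profiles[name]))
    return best if _similarity(observed, profiles[best]) >= min_similarity else None

profiles = {"drill": [1.0, 0.0, 0.0], "fan": [0.0, 1.0, 0.0]}
print(best_matching_source([0.9, 0.1, 0.0], profiles))  # drill
```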
  • Publication number: 20070239441
    Abstract: A method and system for speaker recognition and identification includes transforming features of a speaker utterance in a first condition state to match a second condition state and provide a transformed utterance. A discriminative criterion is used to generate a transform that maps an utterance to obtain a computed result. The discriminative criterion is maximized over a plurality of speakers to obtain a best transform for recognizing speech and/or identifying a speaker under the second condition state. Speech recognition and speaker identity may be determined by employing the best transform for decoding speech to reduce channel mismatch.
    Type: Application
    Filed: March 29, 2006
    Publication date: October 11, 2007
    Inventors: Jiri Navratil, Jason Pelecanos, Ganesh Ramaswamy
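At inference time the mapping between condition states reduces to applying a learned transform to the features. This sketch assumes a per-dimension affine transform; learning the transform by maximizing the discriminative criterion over many speakers is omitted, and the weight/bias values are hypothetical:

```python
def apply_channel_transform(frames, weight, bias):
    """Map features recorded under a source channel condition toward the
    target condition with a per-dimension affine transform."""
    return [[w * x + b for x, w, b in zip(frame, weight, bias)]
            for frame in frames]

# Hypothetical transform learned offline: scale then shift each dimension.
weight, bias = [2.0, 1.0], [0.1, -0.1]
print(apply_channel_transform([[1.0, 1.0]], weight, bias))  # [[2.1, 0.9]]
```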