Patents by Inventor Jason Pelecanos
Jason Pelecanos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12205578
Abstract: Implementations disclosed herein are directed to techniques for selectively enabling and/or disabling non-transient storage of one or more instances of assistant interaction data for turn(s) of a dialog between a user and an automated assistant. Implementations are additionally or alternatively directed to techniques for retroactive wiping of non-transiently stored assistant interaction data from previous assistant interaction(s).
Type: Grant
Filed: January 7, 2021
Date of Patent: January 21, 2025
Assignee: GOOGLE LLC
Inventors: Fadi Biadsy, Johan Schalkwyk, Jason Pelecanos
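The retention control this abstract describes can be pictured with a small sketch. This is illustrative only, not the patented implementation; the class and method names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class InteractionLog:
    """Toy sketch of per-turn retention control: a dialog turn is stored
    non-transiently only while storage is enabled, and previously stored
    turns can be retroactively wiped."""
    storage_enabled: bool = True
    _stored: list = field(default_factory=list)

    def record_turn(self, interaction_data):
        if self.storage_enabled:
            self._stored.append(interaction_data)
        # else: the data is used transiently for this turn and discarded

    def wipe(self):
        """Retroactively delete all non-transiently stored interactions."""
        self._stored.clear()

log = InteractionLog()
log.record_turn("turn 1: set a timer")
log.storage_enabled = False
log.record_turn("turn 2: private query")  # not persisted
print(len(log._stored))  # 1
log.wipe()
print(len(log._stored))  # 0
```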
-
Patent number: 12154574
Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on a number of the verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on a number of the verification results identified in the second set that include the performance metric. The method further includes determining whether a verification capability of the alternative model is better than a verification capability of the primary model based on the first score and the second score.
Type: Grant
Filed: November 9, 2023
Date of Patent: November 26, 2024
Assignee: Google LLC
Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
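As an illustration of the scoring scheme this abstract describes (a minimal sketch, not the patented implementation; the metric flag and function names are hypothetical):

```python
# Each model's score is the fraction of its verification results that
# exhibit a chosen performance metric (here a boolean flag); the alternative
# model is judged better when its score exceeds the primary model's.

def score(results, metric="correct_accept"):
    """Fraction of verification results that include the performance metric."""
    if not results:
        return 0.0
    return sum(1 for r in results if r.get(metric)) / len(results)

def alternative_is_better(primary_results, alternative_results):
    return score(alternative_results) > score(primary_results)

primary = [{"correct_accept": True}, {"correct_accept": False}]
alternative = [{"correct_accept": True}, {"correct_accept": True}]
print(alternative_is_better(primary, alternative))  # True
```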
-
Publication number: 20240203400
Abstract: Implementations relate to an automated assistant that can bypass invocation phrase detection when an estimation of device-to-device distance satisfies a distance threshold. The estimation of distance can be performed for a set of devices, such as a computerized watch and a cellular phone, and/or any other combination of devices. The devices can communicate ultrasonic signals between each other, and the estimated distance can be determined based on when the ultrasonic signals are sent and/or received by each respective device. When an estimated distance satisfies the distance threshold, the automated assistant can operate as if the user is holding onto their cellular phone while wearing their computerized watch. This scenario can indicate that the user may be intending to hold their device to interact with the automated assistant and, based on this indication, the automated assistant can temporarily bypass invocation phrase detection (e.g., invoke the automated assistant).
Type: Application
Filed: December 22, 2023
Publication date: June 20, 2024
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
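The distance estimate in this abstract rests on the travel time of the ultrasonic signal between the two devices. A minimal time-of-flight sketch, assuming (as a simplification) that the devices share a synchronized clock; the threshold value and function names are hypothetical:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C

def estimated_distance_m(t_sent, t_received):
    """Estimate device-to-device distance from one-way ultrasonic travel time."""
    return (t_received - t_sent) * SPEED_OF_SOUND_M_S

def should_bypass_hotword(t_sent, t_received, threshold_m=0.5):
    """Bypass invocation-phrase detection when the devices are close enough."""
    return estimated_distance_m(t_sent, t_received) <= threshold_m

# Watch emits at t = 0.000 s, phone receives at t = 0.001 s → about 0.343 m.
print(should_bypass_hotword(0.000, 0.001))  # True
```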
-
Publication number: 20240135934
Abstract: A method includes obtaining a multi-utterance training sample that includes audio data characterizing utterances spoken by two or more different speakers and obtaining ground-truth speaker change intervals indicating time intervals in the audio data where speaker changes among the two or more different speakers occur. The method also includes processing the audio data to generate a sequence of predicted speaker change tokens using a sequence transduction model. For each corresponding predicted speaker change token, the method includes labeling the corresponding predicted speaker change token as correct when the predicted speaker change token overlaps with one of the ground-truth speaker change intervals. The method also includes determining a precision metric of the sequence transduction model based on a number of the predicted speaker change tokens labeled as correct and a total number of the predicted speaker change tokens in the sequence of predicted speaker change tokens.
Type: Application
Filed: October 9, 2023
Publication date: April 25, 2024
Applicant: Google LLC
Inventors: Guanlong Zhao, Quan Wang, Han Lu, Yiling Huang, Jason Pelecanos
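The precision metric in this abstract reduces to a simple computation: count the predicted change tokens that land inside a ground-truth interval, then divide by the total number of predictions. A sketch under the simplifying assumption that each predicted token is a single timestamp:

```python
def precision_of_change_tokens(predicted_times, truth_intervals):
    """Label a predicted speaker-change token correct when it overlaps a
    ground-truth change interval; precision = correct / total predicted."""
    def is_correct(t):
        return any(start <= t <= end for start, end in truth_intervals)
    if not predicted_times:
        return 0.0
    correct = sum(1 for t in predicted_times if is_correct(t))
    return correct / len(predicted_times)

truth = [(4.8, 5.2), (9.9, 10.4)]       # seconds where speakers change
predicted = [5.0, 7.3, 10.0]            # the middle token is a false alarm
print(precision_of_change_tokens(predicted, truth))  # 2/3 ≈ 0.667
```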
-
Patent number: 11942094
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance, processing a first portion of the audio data that characterizes a predetermined hotword to generate a text-dependent evaluation vector, and generating one or more text-dependent confidence scores. When one of the text-dependent confidence scores satisfies a threshold, the operations include identifying a speaker of the utterance as a respective enrolled user associated with the text-dependent confidence score that satisfies the threshold and initiating performance of an action without performing speaker verification. When none of the text-dependent confidence scores satisfy the threshold, the operations include processing a second portion of the audio data that characterizes a query to generate a text-independent evaluation vector, generating one or more text-independent confidence scores, and determining whether the identity of the speaker of the utterance includes any of the enrolled users.
Type: Grant
Filed: March 24, 2021
Date of Patent: March 26, 2024
Assignee: Google LLC
Inventors: Roza Chojnacka, Jason Pelecanos, Quan Wang, Ignacio Lopez Moreno
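The two-stage cascade this abstract describes can be sketched with cosine-scored embedding vectors. A simplified stand-in, not the patented system; the thresholds, vector dimensions, and names are hypothetical:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(td_vector, ti_vector, enrolled,
                     td_threshold=0.8, ti_threshold=0.7):
    """Stage 1: a confident text-dependent (hotword) match identifies the
    speaker immediately. Stage 2: otherwise, score the text-independent
    vector from the query. `enrolled` maps user -> (td_ref, ti_ref)."""
    for user, (td_ref, _) in enrolled.items():
        if cosine(td_vector, td_ref) >= td_threshold:
            return user               # skip the text-independent stage
    for user, (_, ti_ref) in enrolled.items():
        if cosine(ti_vector, ti_ref) >= ti_threshold:
            return user
    return None                       # speaker is not an enrolled user

enrolled = {"alice": (np.array([1.0, 0.0]), np.array([0.6, 0.8]))}
print(identify_speaker(np.array([0.9, 0.1]), np.array([0.0, 1.0]), enrolled))  # alice
```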
-
Publication number: 20240079013
Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on a number of the verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on a number of the verification results identified in the second set that include the performance metric. The method further includes determining whether a verification capability of the alternative model is better than a verification capability of the primary model based on the first score and the second score.
Type: Application
Filed: November 9, 2023
Publication date: March 7, 2024
Applicant: Google LLC
Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
-
Publication number: 20240029742
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, the evaluation ad-vector including n_E style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
Type: Application
Filed: October 2, 2023
Publication date: January 25, 2024
Applicant: Google LLC
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam
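The multi-condition scoring idea can be sketched by treating each ad-vector as a set of per-style-class value vectors and attention-weighting the class-pair similarities. This is a toy stand-in for the self-attention mechanism in the abstract (the real model also carries routing vectors per class); all names are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def multi_condition_attention_score(eval_classes, ref_classes):
    """Attention-weighted cosine similarity over all pairs of style-class
    value vectors between an evaluation and a reference ad-vector."""
    sims = np.array([[float(e @ r) / (np.linalg.norm(e) * np.linalg.norm(r))
                      for r in ref_classes] for e in eval_classes]).ravel()
    return float(softmax(sims) @ sims)

def identify(eval_classes, references):
    """Identify the enrolled user whose reference ad-vector scores highest."""
    return max(references, key=lambda user: multi_condition_attention_score(
        eval_classes, references[user]))

evaluation = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
references = {"alice": [np.array([1.0, 0.1]), np.array([0.1, 1.0])],
              "bob":   [np.array([-1.0, 0.0]), np.array([0.0, -1.0])]}
print(identify(evaluation, references))  # alice
```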
-
Patent number: 11854533
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing an SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher-student learning.
Type: Grant
Filed: January 28, 2022
Date of Patent: December 26, 2023
Assignee: GOOGLE LLC
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
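The key idea in this abstract is that one set of model weights serves every user, and personalization comes entirely from the speaker embedding fed in alongside the audio. A toy linear sketch of that conditioning (the dimensions and names are hypothetical, and a real SD speech model would be a neural network rather than a single matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16 + 4))   # one shared weight matrix (toy model)

def sd_speech_model(audio_features, speaker_embedding):
    """A speaker-dependent model personalizable to any user: the speaker
    embedding is concatenated with the audio features, so the same weights
    produce per-user outputs without per-user retraining."""
    return W @ np.concatenate([audio_features, speaker_embedding])

audio = rng.standard_normal(16)
alice, bob = rng.standard_normal(4), rng.standard_normal(4)
out_alice = sd_speech_model(audio, alice)
out_bob = sd_speech_model(audio, bob)
print(np.allclose(out_alice, out_bob))  # False: same audio, personalized outputs
```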
-
Patent number: 11837238
Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on a number of the verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on a number of the verification results identified in the second set that include the performance metric. The method further includes determining whether a verification capability of the alternative model is better than a verification capability of the primary model based on the first score and the second score.
Type: Grant
Filed: October 21, 2020
Date of Patent: December 5, 2023
Assignee: Google LLC
Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
-
Patent number: 11798562
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, the evaluation ad-vector including n_E style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
Type: Grant
Filed: May 16, 2021
Date of Patent: October 24, 2023
Assignee: Google LLC
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam
-
Publication number: 20230037085
Abstract: Implementations disclosed herein are directed to techniques for selectively enabling and/or disabling non-transient storage of one or more instances of assistant interaction data for turn(s) of a dialog between a user and an automated assistant. Implementations are additionally or alternatively directed to techniques for retroactive wiping of non-transiently stored assistant interaction data from previous assistant interaction(s).
Type: Application
Filed: January 7, 2021
Publication date: February 2, 2023
Inventors: Fadi Biadsy, Johan Schalkwyk, Jason Pelecanos
-
Publication number: 20220366914
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, the evaluation ad-vector including n_E style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
Type: Application
Filed: May 16, 2021
Publication date: November 17, 2022
Applicant: Google LLC
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Yiling Huang, Mert Saglam
-
Publication number: 20220310098
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance, processing a first portion of the audio data that characterizes a predetermined hotword to generate a text-dependent evaluation vector, and generating one or more text-dependent confidence scores. When one of the text-dependent confidence scores satisfies a threshold, the operations include identifying a speaker of the utterance as a respective enrolled user associated with the text-dependent confidence score that satisfies the threshold and initiating performance of an action without performing speaker verification. When none of the text-dependent confidence scores satisfy the threshold, the operations include processing a second portion of the audio data that characterizes a query to generate a text-independent evaluation vector, generating one or more text-independent confidence scores, and determining whether the identity of the speaker of the utterance includes any of the enrolled users.
Type: Application
Filed: March 24, 2021
Publication date: September 29, 2022
Applicant: Google LLC
Inventors: Roza Chojnacka, Jason Pelecanos, Quan Wang, Ignacio Lopez Moreno
-
Publication number: 20220157298
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing an SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher-student learning.
Type: Application
Filed: January 28, 2022
Publication date: May 19, 2022
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
-
Publication number: 20220122614
Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results, where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on a number of the verification results identified in the first set that include the performance metric, and determining a second score of the alternative model based on a number of the verification results identified in the second set that include the performance metric. The method further includes determining whether a verification capability of the alternative model is better than a verification capability of the primary model based on the first score and the second score.
Type: Application
Filed: October 21, 2020
Publication date: April 21, 2022
Applicant: Google LLC
Inventors: Jason Pelecanos, Pu-sen Chao, Yiling Huang, Quan Wang
-
Patent number: 11238847
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing an SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher-student learning.
Type: Grant
Filed: December 4, 2019
Date of Patent: February 1, 2022
Assignee: Google LLC
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
-
Publication number: 20210312907
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing an SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher-student learning.
Type: Application
Filed: December 4, 2019
Publication date: October 7, 2021
Inventors: Ignacio Lopez Moreno, Quan Wang, Jason Pelecanos, Li Wan, Alexander Gruenstein, Hakan Erdogan
-
Publication number: 20080208581
Abstract: A system and method for speaker modelling for speaker recognition, whereby prior speaker information is incorporated into the modelling process by utilising the maximum a posteriori (MAP) algorithm and extending it to contain prior Gaussian component correlation information. First, a background model (10) is estimated: pooled acoustic reference data (11) relating to a specific demographic of speakers (the population of interest) from a given total population is trained via the Expectation-Maximization (EM) algorithm (12) to produce the background model (13). The background model (13) is then adapted utilising information from a plurality of reference speakers (21) in accordance with the maximum a posteriori (MAP) criterion (22). Using the MAP estimation technique, the reference speaker data and prior information obtained from the background model parameters are combined to produce a library of adapted speaker models, namely Gaussian mixture models (23).
Type: Application
Filed: December 3, 2004
Publication date: August 28, 2008
Inventors: Jason Pelecanos, Subramanian Sridharan, Robert Vogt
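The adaptation step this abstract builds on can be sketched with the classic relevance-MAP update of GMM component means. Note this sketch is the standard recipe only; it does not include the prior Gaussian component correlation information that the patent adds:

```python
import numpy as np

def map_adapt_means(ubm_means, responsibilities, features, r=16.0):
    """Relevance-MAP adaptation of GMM component means: each background
    (UBM) mean is interpolated toward the speaker's data in proportion to
    how much data the component explains (r is the relevance factor)."""
    n_k = responsibilities.sum(axis=0)                  # soft frame counts
    first_order = responsibilities.T @ features         # sufficient statistics
    data_means = first_order / np.maximum(n_k, 1e-10)[:, None]
    alpha = (n_k / (n_k + r))[:, None]                  # adaptation weights
    return alpha * data_means + (1 - alpha) * ubm_means

ubm_means = np.zeros((2, 2))                 # 2 components, 2 dimensions
features = np.ones((100, 2))                 # speaker data centred at (1, 1)
resp = np.tile([1.0, 0.0], (100, 1))         # all frames explained by component 0
adapted = map_adapt_means(ubm_means, resp, features)
print(adapted.round(3))  # component 0 moves toward (1, 1); component 1 stays put
```

With 100 soft counts and r = 16, component 0's mean moves to 100/116 of the way toward the data mean, while the unobserved component keeps its prior.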
-
Publication number: 20070256499
Abstract: A method, system and program storage device are provided for machine diagnostics, detection and profiling using pressure waves. The method includes profiling known sources, acquiring pressure wave data, analyzing the acquired pressure wave data, and detecting whether the analyzed pressure wave data matches a profiled known source. The system includes a processor, a pressure wave transducer in signal communication with the processor, a pressure wave analysis unit in signal communication with the processor, and a source or threat detection unit in signal communication with the processor. The program storage device includes program steps for profiling known sources, acquiring pressure wave data, analyzing the acquired pressure wave data, and detecting whether the analyzed pressure wave data matches a profiled known source.
Type: Application
Filed: April 21, 2006
Publication date: November 8, 2007
Inventors: Jason Pelecanos, Douglas Heintzman, Jiri Navratil, Ganesh Ramaswamy
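The profile-then-match loop this abstract describes can be sketched with spectral profiles: reduce each recording to a normalized magnitude spectrum and correlate it against the library of known sources. The analysis method, source names, and threshold here are all illustrative choices, not the patented ones:

```python
import numpy as np

def spectral_profile(signal, n_fft=128):
    """Reduce a pressure-wave recording to a unit-norm magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal[:n_fft]))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def detect_source(signal, known_profiles, threshold=0.9):
    """Return the profiled known source that best matches the acquired
    data, or None if no profile correlates strongly enough."""
    p = spectral_profile(signal)
    best = max(known_profiles, key=lambda name: float(p @ known_profiles[name]))
    return best if float(p @ known_profiles[best]) >= threshold else None

fs = 1024                                    # hypothetical sample rate (Hz)
t = np.arange(fs) / fs
pump = np.sin(2 * np.pi * 48 * t)            # a known machine signature
profiles = {"pump": spectral_profile(pump)}
print(detect_source(np.sin(2 * np.pi * 48 * t), profiles))   # pump
print(detect_source(np.sin(2 * np.pi * 200 * t), profiles))  # None
```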
-
Publication number: 20070239441
Abstract: A method and system for speaker recognition and identification includes transforming features of a speaker utterance in a first condition state to match a second condition state and provide a transformed utterance. A discriminative criterion is used to generate a transform that maps an utterance to obtain a computed result. The discriminative criterion is maximized over a plurality of speakers to obtain a best transform for recognizing speech and/or identifying a speaker under the second condition state. Speech recognition and speaker identity may be determined by employing the best transform for decoding speech to reduce channel mismatch.
Type: Application
Filed: March 29, 2006
Publication date: October 11, 2007
Inventors: Jiri Navratil, Jason Pelecanos, Ganesh Ramaswamy
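The core mapping idea, transforming features observed under one channel condition to match another, can be shown with a toy fit on synthetic data. The patent maximizes a discriminative criterion over a plurality of speakers; least squares is used here purely as a simplified stand-in to illustrate the transform:

```python
import numpy as np

rng = np.random.default_rng(1)
# Features of the same utterances observed under two channel conditions.
X_a = rng.standard_normal((200, 8))                          # condition A
true_map = rng.standard_normal((8, 8))
X_b = X_a @ true_map + 0.01 * rng.standard_normal((200, 8))  # condition B

# Fit a transform T minimizing ||X_a @ T - X_b||^2 (least squares, not the
# discriminative criterion of the patent).
T, *_ = np.linalg.lstsq(X_a, X_b, rcond=None)

transformed = X_a @ T     # condition-A features mapped toward condition B
print(np.allclose(transformed, X_b, atol=0.1))  # True: mismatch is reduced
```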