Patents by Inventor Dragan Zivkovic
Dragan Zivkovic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240296843
Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
Type: Application
Filed: May 7, 2024
Publication date: September 5, 2024
Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
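As an illustrative aside (not part of the filing), the pattern the abstract describes can be sketched in a few lines of plain Python/NumPy: make a trigger decision from a small model, later learn the decision was wrong, and turn that into a gradient that can update local weights or be shipped to a server. All names, the logistic model, and the hyperparameters below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical on-device trigger model: a tiny logistic regression over
# sensor-derived features. The filing describes an arbitrary ML model;
# this is only an illustrative stand-in.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=8)   # local copy of the model weights

def predict(features: np.ndarray) -> float:
    """Predicted probability that dormant assistant functions should activate."""
    return 1.0 / (1.0 + np.exp(-features @ weights))

def gradient(features: np.ndarray, predicted: float, ground_truth: float) -> np.ndarray:
    """Gradient of binary cross-entropy with respect to the weights."""
    return (predicted - ground_truth) * features

# 1) Sensor data arrives and the model makes a trigger decision.
features = rng.normal(size=8)
p = predict(features)
triggered = p > 0.5

# 2) Later evidence (e.g. the user immediately cancels, or explicitly invokes
#    the assistant right after a non-trigger) indicates the decision was wrong.
decision_was_incorrect = True
if decision_was_incorrect:
    ground_truth = 0.0 if triggered else 1.0
    g = gradient(features, p, ground_truth)

    # Option A: update the on-device weights directly.
    learning_rate = 0.01
    weights -= learning_rate * g

    # Option B (additionally or alternatively): ship only the gradient to a
    # remote system, which folds it into the global model's weights.
    # upload_gradient_to_server(g)   # hypothetical transport call
```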
-
Publication number: 20240257799
Abstract: A method includes receiving a biased transcription for a voice command spoken by a user and captured by a user device, the biased transcription biased to include a biasing phrase from a set of biasing phrases specific to the user. The method also includes instructing an application executing on the user device to perform an action specified by the biased transcription for the voice command, and receiving one or more user behavior signals responsive to the application performing the action specified by the biased transcription. The method further includes generating, as output from a confidence model, a confidence score of the biased transcription based on the one or more user behavior signals input to the confidence model and, based on the confidence score output from the confidence model, training a speech recognizer on the biased transcription.
Type: Application
Filed: January 30, 2023
Publication date: August 1, 2024
Applicant: Google LLC
Inventors: Dragan Zivkovic, Agoston Weisz
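A minimal sketch of the idea, purely illustrative and not drawn from the filing: a confidence function (standing in for the learned confidence model) maps post-action user behavior signals to a score, and only high-confidence biased transcriptions are kept as training targets. Every field name, threshold, and example phrase is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BehaviorSignals:
    """User behavior observed after the app performed the biased action.
    Field names are illustrative, not from the filing."""
    action_undone: bool
    command_repeated: bool
    follow_up_within_s: float

def confidence(signals: BehaviorSignals) -> float:
    """Stand-in for the learned confidence model: maps behavior signals to a
    score in [0, 1]. The patent describes a trained model; this is a toy rule."""
    score = 1.0
    if signals.action_undone:
        score -= 0.6
    if signals.command_repeated:
        score -= 0.3
    if signals.follow_up_within_s < 5.0:
        score -= 0.1
    return max(0.0, min(1.0, score))

# Biased transcription produced with a user-specific biasing phrase (hypothetical example).
biased_transcription = "call aunt Mirjana"
signals = BehaviorSignals(action_undone=False,
                          command_repeated=False,
                          follow_up_within_s=30.0)

training_examples = []
if confidence(signals) >= 0.8:
    # High confidence: treat the biased transcription as a training target
    # for the speech recognizer (semi-supervised personalization).
    training_examples.append(biased_transcription)
print(training_examples)
```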
-
Patent number: 12014739
Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
Type: Grant
Filed: July 6, 2023
Date of Patent: June 18, 2024
Assignee: GOOGLE LLC
Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
-
Publication number: 20240112672
Abstract: On-device processor(s) of a client device may store, in on-device storage and in association with a time to live (TTL) in the on-device storage, a correction directed to ASR processing of audio data. The correction may include a portion of a given speech hypothesis that was modified to an alternate speech hypothesis. Further, the on-device processor(s) may cause an on-device ASR model to be personalized based on the correction. Moreover, and based on additional ASR processing of additional audio data, the on-device processor(s) may store, in the on-device storage and in association with an additional TTL in the on-device storage, a pseudo-correction directed to the additional ASR processing. Accordingly, the on-device processor(s) may cause the on-device ASR model to be personalized based on the pseudo-correction to prevent forgetting by the on-device ASR model.
Type: Application
Filed: October 4, 2022
Publication date: April 4, 2024
Inventors: Rajiv Mathews, Dragan Zivkovic, Khe Chai Sim
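A toy sketch of the storage side of this idea, not taken from the filing: corrections and pseudo-corrections are kept on-device with their own TTLs, and only unexpired entries feed the personalization pass. The class, field names, TTL values, and example phrases below are all hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class StoredCorrection:
    """A correction (or pseudo-correction) kept in on-device storage.
    All names and values are illustrative, not taken from the filing."""
    original_hypothesis: str
    corrected_text: str
    stored_at: float
    ttl_seconds: float
    is_pseudo: bool = False   # True for pseudo-corrections that guard against forgetting

    def expired(self, now: float) -> bool:
        return now - self.stored_at > self.ttl_seconds

store: list[StoredCorrection] = []
now = time.time()

# User edits an ASR hypothesis -> a real correction with a TTL.
store.append(StoredCorrection("call jon", "call Jon Soto", now, ttl_seconds=30 * 86400))

# A later utterance the model already got right -> a pseudo-correction with its own TTL,
# kept so personalizing on the real correction does not erase existing behavior.
store.append(StoredCorrection("set a timer", "set a timer", now,
                              ttl_seconds=7 * 86400, is_pseudo=True))

# When personalizing the on-device ASR model, drop expired entries and
# fine-tune on the mix of corrections and pseudo-corrections.
fresh = [c for c in store if not c.expired(time.time())]
fine_tune_targets = [(c.original_hypothesis, c.corrected_text) for c in fresh]
print(fine_tune_targets)
```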
-
Publication number: 20240086063
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Application
Filed: November 22, 2023
Publication date: March 14, 2024
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
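To make the cross-modality flow concrete, here is a small illustrative sketch under assumed roles (speech as the first modality, a graphical keyboard as the second); the data structure, class names, and vocabulary-count update are hypothetical simplifications, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class InputContext:
    """Illustrative stand-in for the 'input context data structure':
    it references the recognized term plus some surrounding context."""
    term: str
    application: str
    preceding_text: str

@dataclass
class KeyboardRecognizer:
    """Second-modality recognizer (e.g. a keyboard decoder) whose model is
    updated from contexts learned by the first modality."""
    vocabulary: dict = field(default_factory=dict)

    def update_model(self, ctx: InputContext) -> None:
        # Toy update: raise the count of the new term so the keyboard starts
        # suggesting/accepting it. A real system would adapt an n-gram or
        # neural language model rather than a plain counter.
        self.vocabulary[ctx.term] = self.vocabulary.get(ctx.term, 0) + 1

# First modality: speech recognition of a dictated message that contains an
# out-of-vocabulary term (hypothetical example).
transcription = "meet me at Kranjska Gora"
particular_term = "Kranjska"

ctx = InputContext(term=particular_term,
                   application="messaging",
                   preceding_text="meet me at")

keyboard = KeyboardRecognizer()
keyboard.update_model(ctx)    # cross-modality transfer of the learned term
print(keyboard.vocabulary)
```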
-
Patent number: 11842045
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Grant
Filed: August 31, 2022
Date of Patent: December 12, 2023
Assignee: Google LLC
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
-
Publication number: 20230352019
Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
Type: Application
Filed: July 6, 2023
Publication date: November 2, 2023
Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
-
Patent number: 11741953
Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
Type: Grant
Filed: November 8, 2019
Date of Patent: August 29, 2023
Assignee: GOOGLE LLC
Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
-
Publication number: 20220413696
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Application
Filed: August 31, 2022
Publication date: December 29, 2022
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
-
Publication number: 20220308975
Abstract: Implementations disclosed herein are directed to systems and methods for evaluating new feature(s) for client device(s) based on performance measure(s) of the client device(s) and/or the new feature(s). The new feature(s) can include, for example, machine learning (ML) model(s), non-ML software-enabled functionality, non-ML hardware-enabled functionality, and/or ML or non-ML software application features for a given software application utilized by the client device(s). The client device(s) can generate the performance measure(s) by processing a plurality of testing instances for the new feature(s). The performance measure(s) can include, for example, latency measure(s), memory consumption measure(s), CPU usage measure(s), precision and/or recall measure(s), and/or other measures. In some implementations, the new feature(s) may be activated for use locally at the client device(s) based on the performance measure(s), and optionally at other client device(s) that share the same device characteristics.
Type: Application
Filed: April 11, 2022
Publication date: September 29, 2022
Inventors: Dragan Zivkovic, Harry Bleyan, Tamar Lucassen, Akash Agrawal
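Illustratively (and not as described in the filing itself), the activation decision reduces to gating a feature on measured performance. The thresholds, measure names, and pass/fail rule below are invented for the sketch.

```python
# Hypothetical thresholds for deciding whether a newly shipped feature should be
# turned on for this device (and, optionally, for devices with matching specs).
THRESHOLDS = {"latency_ms": 40.0, "peak_memory_mb": 150.0, "recall": 0.90}

def feature_passes(measures: dict) -> bool:
    """True if every measured quantity meets its threshold."""
    return (measures["latency_ms"] <= THRESHOLDS["latency_ms"]
            and measures["peak_memory_mb"] <= THRESHOLDS["peak_memory_mb"]
            and measures["recall"] >= THRESHOLDS["recall"])

# Measures generated on-device by running the feature over local testing instances.
measures = {"latency_ms": 31.2, "peak_memory_mb": 118.0, "recall": 0.93}

if feature_passes(measures):
    # Activate locally; a server could also activate the feature for other
    # devices that share this device's characteristics.
    print("feature activated on this device")
```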
-
Publication number: 20220309389
Abstract: Implementations disclosed herein are directed to systems and methods for evaluating on-device machine learning (ML) model(s) based on performance measure(s) of client device(s) and/or the on-device ML model(s). The client device(s) can include on-device memory that stores the on-device ML model(s) and a plurality of testing instances for the on-device ML model(s). When certain condition(s) are satisfied, the client device(s) can process, using the on-device ML model(s), the plurality of testing instances to generate the performance measure(s). The performance measure(s) can include, for example, latency measure(s), memory consumption measure(s), CPU usage measure(s), ML model measure(s) (e.g., precision and/or recall), and/or other measures. In some implementations, the on-device ML model(s) can be activated (or kept active) for use locally at the client device(s) based on the performance measure(s). In other implementations, the on-device ML model(s) can be sparsified based on the performance measure(s).
Type: Application
Filed: March 29, 2021
Publication date: September 29, 2022
Inventors: Dragan Zivkovic, Akash Agrawal, Françoise Beaufays, Tamar Lucassen
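A rough sketch of the measurement side, again illustrative only: when assumed conditions (e.g. charging, screen off) are satisfied, run the on-device model over stored testing instances and record latency and peak memory. The condition check, toy model, and cut-off values are all hypothetical.

```python
import time
import tracemalloc

def conditions_satisfied(charging: bool, screen_off: bool) -> bool:
    """Run the benchmark only when it will not disturb the user (illustrative rule)."""
    return charging and screen_off

def benchmark(model_fn, testing_instances):
    """Run the on-device model over stored testing instances and collect
    latency and memory measures (a toy version of the abstract's measures)."""
    tracemalloc.start()
    start = time.perf_counter()
    for instance in testing_instances:
        model_fn(instance)
    latency_ms = (time.perf_counter() - start) * 1000 / len(testing_instances)
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"latency_ms": latency_ms, "peak_memory_mb": peak_bytes / 1e6}

# Hypothetical on-device model and testing instances bundled with it.
def tiny_model(x):
    return sum(v * v for v in x)

instances = [[float(i)] * 64 for i in range(100)]

if conditions_satisfied(charging=True, screen_off=True):
    measures = benchmark(tiny_model, instances)
    # Keep the model active only if it is fast and small enough; otherwise it
    # could be deactivated or sparsified (as the abstract describes).
    keep_active = measures["latency_ms"] < 5.0 and measures["peak_memory_mb"] < 50.0
    print(measures, keep_active)
```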
-
Patent number: 11435898
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Grant
Filed: October 6, 2020
Date of Patent: September 6, 2022
Assignee: Google LLC
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
-
Publication number: 20220229548
Abstract: A keyboard is described that determines, using a first decoder and based on a selection of keys of a graphical keyboard, text. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
Type: Application
Filed: April 6, 2022
Publication date: July 21, 2022
Applicant: Google LLC
Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
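The control flow in this abstract (threshold check, language identification, decoder switch) can be illustrated with a small sketch; the heuristic language identifier, suggestion table, and threshold value are made up for the example and are not the patented models.

```python
def identify_language(text: str) -> str:
    """Stand-in for the keyboard's language-identification model: a toy
    word-list heuristic, not the model the patent describes."""
    serbian_markers = {"je", "sam", "dobro", "hvala"}
    tokens = set(text.lower().split())
    return "sr" if tokens & serbian_markers else "en"

def decode(text: str, language: str) -> list:
    """Hypothetical decoder: returns candidate words for the typed text."""
    suggestions = {
        ("hvala", "sr"): ["hvala", "hvala lepo"],
        ("hvala", "en"): ["koala", "hall"],
    }
    return suggestions.get((text, language), [text])

active_language = "en"   # language of the first (currently enabled) decoder
typed = "hvala"          # text decoded from the key selections

MIN_CHARS = 4            # characteristic threshold from the abstract (value is illustrative)
if len(typed) >= MIN_CHARS:
    target = identify_language(typed)
    if target != active_language:
        active_language = target   # enable the second decoder matching the target language
candidates = decode(typed, active_language)
print(candidates)                  # ['hvala', 'hvala lepo']
```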
-
Patent number: 11327652
Abstract: A keyboard is described that determines, using a first decoder and based on a selection of keys of a graphical keyboard, text. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
Type: Grant
Filed: August 10, 2020
Date of Patent: May 10, 2022
Assignee: Google LLC
Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
-
Publication number: 20210327421
Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
Type: Application
Filed: November 8, 2019
Publication date: October 21, 2021
Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
-
Publication number: 20210019046
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Application
Filed: October 6, 2020
Publication date: January 21, 2021
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
-
Publication number: 20200371686
Abstract: A keyboard is described that determines, using a first decoder and based on a selection of keys of a graphical keyboard, text. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
Type: Application
Filed: August 10, 2020
Publication date: November 26, 2020
Applicant: Google LLC
Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
-
Patent number: 10831366
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Grant
Filed: December 29, 2016
Date of Patent: November 10, 2020
Assignee: Google LLC
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
-
Patent number: 10747427
Abstract: A keyboard is described that determines, using a first decoder and based on a selection of keys of a graphical keyboard, text. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
Type: Grant
Filed: February 1, 2017
Date of Patent: August 18, 2020
Assignee: Google LLC
Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
-
Patent number: 10210242
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for presenting forked auto-completions. In one aspect, a method includes receiving characters from a user device, obtaining an auto-completion that corresponds to the received characters, obtaining corpora and respective corpus scores associated with the auto-completion, selecting corpora based on the corpus scores, and providing the user device with data identifying the auto-completion and the selected corpora.
Type: Grant
Filed: April 1, 2016
Date of Patent: February 19, 2019
Assignee: Google LLC
Inventors: Dragan Zivkovic, Hidetoshi Tajima, Peter Jin Hong
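A compact illustrative sketch of the forked auto-completion flow described above; the lookup tables, threshold, and example prefix are fabricated for the example (real systems would derive completions and corpus scores from query logs and ranking models).

```python
# Hypothetical completion and per-corpus scores for a typed prefix.
COMPLETIONS = {"belgra": "belgrade"}
CORPUS_SCORES = {
    "belgrade": {"web": 0.91, "images": 0.74, "maps": 0.88, "news": 0.35},
}
SCORE_THRESHOLD = 0.7

def forked_autocomplete(prefix: str):
    """Return the auto-completion plus the corpora whose scores clear the
    threshold, so the client can offer e.g. 'belgrade [web]', 'belgrade [maps]'."""
    completion = COMPLETIONS.get(prefix.lower())
    if completion is None:
        return None, []
    scores = CORPUS_SCORES.get(completion, {})
    selected = [c for c, s in sorted(scores.items(), key=lambda kv: -kv[1])
                if s >= SCORE_THRESHOLD]
    return completion, selected

completion, corpora = forked_autocomplete("belgra")
print(completion, corpora)   # belgrade ['web', 'maps', 'images']
```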