Patents by Inventor Dragan Zivkovic

Dragan Zivkovic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240112672
    Abstract: On-device processor(s) of a client device may store, in on-device storage and in association with a time to live (TTL) in the on-device storage, a correction directed to automatic speech recognition (ASR) processing of audio data. The correction may include a portion of a given speech hypothesis that was modified to an alternate speech hypothesis. Further, the on-device processor(s) may cause an on-device ASR model to be personalized based on the correction. Moreover, and based on additional ASR processing of additional audio data, the on-device processor(s) may store, in the on-device storage and in association with an additional TTL in the on-device storage, a pseudo-correction directed to the additional ASR processing. Accordingly, the on-device processor(s) may cause the on-device ASR model to be personalized based on the pseudo-correction to prevent forgetting by the on-device ASR model.
    Type: Application
    Filed: October 4, 2022
    Publication date: April 4, 2024
    Inventors: Rajiv Mathews, Dragan Zivkovic, Khe Chai Sim
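    Sketch: a minimal, hypothetical Python illustration of the flow this abstract describes; the Correction/CorrectionStore classes, the TTL bookkeeping, and the asr_model.update() call are assumptions for illustration, not an API defined by the filing.
```python
import time
from dataclasses import dataclass


@dataclass
class Correction:
    """A correction (or pseudo-correction) kept in on-device storage with a TTL."""
    audio_id: str
    hypothesis: str      # speech hypothesis originally produced by the ASR model
    target: str          # user-corrected (or simply accepted) transcription
    is_pseudo: bool      # True for pseudo-corrections used to prevent forgetting
    expires_at: float    # wall-clock time at which the entry's TTL lapses


class CorrectionStore:
    """On-device store whose entries expire after their time to live."""

    def __init__(self):
        self._items = []

    def add(self, audio_id, hypothesis, target, is_pseudo, ttl_seconds):
        self._items.append(
            Correction(audio_id, hypothesis, target, is_pseudo,
                       time.time() + ttl_seconds))

    def active(self):
        # Drop entries whose TTL has lapsed before returning the rest.
        now = time.time()
        self._items = [c for c in self._items if c.expires_at > now]
        return list(self._items)


def personalize(asr_model, store):
    """Fine-tune the on-device ASR model on real corrections and on
    pseudo-corrections (hypotheses the model already got right), so that
    personalization does not make it forget what it knew."""
    for c in store.active():
        asr_model.update(audio=c.audio_id, target_text=c.target)  # hypothetical API
```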
  • Publication number: 20240086063
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Application
    Filed: November 22, 2023
    Publication date: March 14, 2024
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
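    Sketch: a rough Python illustration of the cross-modality flow described in this family of filings (e.g. a voice recognizer passing context to a keyboard decoder); the recognizer classes and the InputContext fields are assumptions, not the filing's API.
```python
from dataclasses import dataclass


@dataclass
class InputContext:
    """Context data structure passed from one modality recognizer to another."""
    terms: list          # term(s) recognized in the first modality
    application: str     # application in which the input occurred
    field_type: str      # e.g. "message_body"


class KeyboardRecognizer:
    """Second-modality recognizer (a graphical keyboard decoder)."""

    def __init__(self):
        self.lexicon = set()

    def update_model(self, context):
        # Learn terms first seen in another modality so the keyboard can
        # later predict and autocorrect them.
        self.lexicon.update(context.terms)


class SpeechRecognizer:
    """First-modality recognizer (voice dictation)."""

    def __init__(self, peer):
        self.peer = peer

    def handle_transcription(self, transcription, application, field_type):
        # Share terms the peer recognizer does not yet know about.
        novel = [t for t in transcription.split() if t not in self.peer.lexicon]
        if novel:
            self.peer.update_model(InputContext(novel, application, field_type))


# Example: the keyboard learns "Zivkovic" after the user dictates it once.
keyboard = KeyboardRecognizer()
voice = SpeechRecognizer(keyboard)
voice.handle_transcription("email Dragan Zivkovic about the demo", "mail", "message_body")
assert "Zivkovic" in keyboard.lexicon
```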
  • Patent number: 11842045
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: December 12, 2023
    Assignee: Google LLC
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Publication number: 20230352019
    Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
    Type: Application
    Filed: July 6, 2023
    Publication date: November 2, 2023
    Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
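    Sketch: a simplified Python/NumPy illustration of the correction signal described in this family of filings: when a trigger decision later proves wrong, the ground truth label is flipped and a gradient is computed locally, to be applied on-device and/or sent to a server for federated aggregation. The logistic model, the cross-entropy loss, and the function names are illustrative assumptions.
```python
import numpy as np


def predict(weights, features):
    """Predicted probability that dormant assistant functions should activate."""
    return 1.0 / (1.0 + np.exp(-np.dot(weights, features)))


def gradient_from_incorrect_decision(weights, features, triggered):
    """The device triggered (or suppressed) the assistant and later concluded
    that decision was wrong, so ground truth is the opposite of what it did."""
    ground_truth = 0.0 if triggered else 1.0
    predicted = predict(weights, features)
    # Gradient of the binary cross-entropy loss with respect to the weights.
    return (predicted - ground_truth) * features


def apply_update(weights, grad, lr=0.01):
    """Apply the gradient locally; the same gradient could instead (or also)
    be sent to a remote system to update global model weights."""
    return weights - lr * grad
```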
  • Patent number: 11741953
    Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 29, 2023
    Assignee: Google LLC
    Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
  • Publication number: 20220413696
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Application
    Filed: August 31, 2022
    Publication date: December 29, 2022
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Publication number: 20220309389
    Abstract: Implementations disclosed herein are directed to systems and methods for evaluating on-device machine learning (ML) model(s) based on performance measure(s) of client device(s) and/or the on-device ML model(s). The client device(s) can include on-device memory that stores the on-device ML model(s) and a plurality of testing instances for the on-device ML model(s). When certain condition(s) are satisfied, the client device(s) can process, using the on-device ML model(s), the plurality of testing instances to generate the performance measure(s). The performance measure(s) can include, for example, latency measure(s), memory consumption measure(s), CPU usage measure(s), ML model measure(s) (e.g., precision and/or recall), and/or other measures. In some implementations, the on-device ML model(s) can be activated (or kept active) for use locally at the client device(s) based on the performance measure(s). In other implementations, the on-device ML model(s) can be sparsified based on the performance measure(s).
    Type: Application
    Filed: March 29, 2021
    Publication date: September 29, 2022
    Inventors: Dragan Zivkovic, Akash Agrawal, Françoise Beaufays, Tamar Lucassen
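    Sketch: a hedged Python illustration of benchmarking an on-device model against stored testing instances when device conditions allow; the condition checks, the thresholds, and the model.predict() call are assumptions for illustration, not the filing's API.
```python
import time


def conditions_satisfied(charging, screen_off, unmetered_network):
    """Benchmark only when it will not interfere with normal device use."""
    return charging and screen_off and unmetered_network


def evaluate_on_device(model, testing_instances,
                       latency_budget_ms=50.0, min_accuracy=0.9):
    """Run the stored testing instances through the on-device model and
    derive simple performance measures (mean latency, accuracy)."""
    correct, latencies = 0, []
    for features, label in testing_instances:
        start = time.perf_counter()
        prediction = model.predict(features)          # hypothetical model API
        latencies.append((time.perf_counter() - start) * 1000.0)
        correct += int(prediction == label)
    measures = {
        "mean_latency_ms": sum(latencies) / len(latencies),
        "accuracy": correct / len(testing_instances),
    }
    # Keep the model active only if it meets both budgets on this hardware;
    # otherwise it could be deactivated or sparsified.
    measures["activate"] = (measures["mean_latency_ms"] <= latency_budget_ms
                            and measures["accuracy"] >= min_accuracy)
    return measures
```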
  • Publication number: 20220308975
    Abstract: Implementations disclosed herein are directed to systems and methods for evaluating new feature(s) for client device(s) based on performance measure(s) of the client device(s) and/or the new feature(s). The new feature(s) can include, for example, machine learning (ML) model(s), non-ML software-enabled functionality, non-ML hardware-enabled functionality, and/or ML or non-ML software application features for a given software application utilized by the client device(s). The client device(s) can generate the performance measure(s) by processing a plurality of testing instances for the new feature(s). The performance measure(s) can include, for example, latency measure(s), memory consumption measure(s), CPU usage measure(s), precision and/or recall measure(s), and/or other measures. In some implementations, the new feature(s) may be activated for use locally at the client device(s) based on the performance measure(s), and optionally at other client device(s) that share the same device characteristics.
    Type: Application
    Filed: April 11, 2022
    Publication date: September 29, 2022
    Inventors: Dragan Zivkovic, Harry Bleyan, Tamar Lucassen, Akash Agrawal
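    Sketch: a hypothetical Python illustration of the companion idea of reusing an activation decision across devices that share characteristics; the device-key fields and the cache are assumptions, not the filing's design.
```python
# Cache of activation decisions keyed by (feature, device characteristics).
# The characteristics tuple is illustrative; the filing only says devices
# that share the same device characteristics can benefit from a decision.
_decisions = {}


def device_key(model_name, soc, ram_gb, os_version):
    return (model_name, soc, ram_gb, os_version)


def should_activate(feature_id, key, measure_fn):
    """Reuse a prior decision made on an identical device class; otherwise
    run the local measurements (latency, memory, precision/recall, ...)."""
    cache_key = (feature_id, key)
    if cache_key not in _decisions:
        measures = measure_fn()
        _decisions[cache_key] = measures["activate"]
    return _decisions[cache_key]
```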
  • Patent number: 11435898
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: September 6, 2022
    Assignee: Google LLC
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Publication number: 20220229548
    Abstract: A keyboard is described that determines text using a first decoder, based on a selection of keys of a graphical keyboard. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
    Type: Application
    Filed: April 6, 2022
    Publication date: July 21, 2022
    Applicant: Google LLC
    Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
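    Sketch: a minimal Python illustration of the decoder-switching logic described in this family of filings; the langid_model and decoder interfaces, and the character threshold, are assumptions for illustration.
```python
LANGID_MIN_CHARS = 20   # the "characteristic of the text satisfies a threshold"


def decode_with_language_switch(text, active_decoder, decoders, langid_model):
    """Return the (possibly switched) active decoder and its candidate words."""
    if len(text) >= LANGID_MIN_CHARS:
        target_language = langid_model.identify(text)   # hypothetical language-ID call
        if (target_language != active_decoder.language
                and target_language in decoders):
            # Enable the second decoder whose language matches the text.
            active_decoder = decoders[target_language]
    # Candidate words to output for display (e.g. a suggestion strip).
    return active_decoder, active_decoder.decode(text)
```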
  • Patent number: 11327652
    Abstract: A keyboard is described that determines text using a first decoder, based on a selection of keys of a graphical keyboard. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: May 10, 2022
    Assignee: Google LLC
    Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
  • Publication number: 20210327421
    Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
    Type: Application
    Filed: November 8, 2019
    Publication date: October 21, 2021
    Inventors: Françoise Beaufays, Rajiv Mathews, Dragan Zivkovic, Kurt Partridge, Andrew Hard
  • Publication number: 20210019046
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Application
    Filed: October 6, 2020
    Publication date: January 21, 2021
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Publication number: 20200371686
    Abstract: A keyboard is described that determines text using a first decoder, based on a selection of keys of a graphical keyboard. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
    Type: Application
    Filed: August 10, 2020
    Publication date: November 26, 2020
    Applicant: Google LLC
    Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
  • Patent number: 10831366
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: November 10, 2020
    Assignee: Google LLC
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Patent number: 10747427
    Abstract: A keyboard is described that determines text using a first decoder, based on a selection of keys of a graphical keyboard. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
    Type: Grant
    Filed: February 1, 2017
    Date of Patent: August 18, 2020
    Assignee: Google LLC
    Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
  • Patent number: 10210242
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for presenting forked auto-completions. In one aspect, a method includes receiving characters from a user device, obtaining an auto-completion that corresponds to the received characters, obtaining corpora and respective corpus scores associated with the auto-completion, selecting corpora based on the corpus scores, and providing the user device with data identifying the auto-completion and the selected corpora.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventors: Dragan Zivkovic, Hidetoshi Tajima, Peter Jin Hong
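    Sketch: a hypothetical Python illustration of pairing an auto-completion with its highest-scoring corpora; the completer and corpus-scorer interfaces and thresholds are assumptions, not the patented implementation.
```python
def forked_autocompletions(prefix, completer, corpus_scorers,
                           max_corpora=2, min_score=0.3):
    """Pair an auto-completion with the corpora (e.g. web, images, maps)
    whose scores for it are highest, so the client can present forked
    options such as searching the completion in each selected corpus."""
    completion = completer.complete(prefix)              # hypothetical completer API
    scores = {name: score(completion) for name, score in corpus_scorers.items()}
    selected = [name for name, s in sorted(scores.items(), key=lambda kv: -kv[1])
                if s >= min_score][:max_corpora]
    # Data identifying the auto-completion and the selected corpora.
    return {"completion": completion, "corpora": selected}
```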
  • Publication number: 20180217749
    Abstract: A keyboard is described that determines text using a first decoder, based on a selection of keys of a graphical keyboard. Responsive to determining that a characteristic of the text satisfies a threshold, a model of the keyboard identifies the target language of the text, and determines whether the target language is different than a language associated with the first decoder. If the target language of the text is not different than the language associated with the first decoder, the keyboard outputs, for display, an indication of first candidate words determined by the first decoder from the text. If the target language of the text is different: the keyboard enables a second decoder, where a language associated with the second decoder matches the target language of the text, and outputs, for display, an indication of second candidate words determined by the second decoder from the text.
    Type: Application
    Filed: February 1, 2017
    Publication date: August 2, 2018
    Inventors: Ouais Alsharif, Peter Ciccotto, Francoise Beaufays, Dragan Zivkovic
  • Publication number: 20180188948
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Patent number: 9712618
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing asynchronous and synchronous links to resources. According to one example implementation, a method includes receiving a request for a resource, identifying resources to be referenced by the requested resource, and identifying one or more of the referenced resources that are associated with client-side click tracking, and one or more of the referenced resources that are associated with server-side click tracking. The method also includes providing the requested resource. The provided resource includes one or more client-side click tracking links to the referenced resources that are associated with client-side click tracking, and one or more server-side click tracking links to the referenced resources that are associated with server-side click tracking.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: July 18, 2017
    Assignee: Google Inc.
    Inventors: Zhongli Ding, Dragan Zivkovic
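    Sketch: an illustrative Python helper contrasting server-side (redirect) and client-side (asynchronous beacon) click-tracking links; the /click and /log endpoints and the markup are assumptions, not the patented scheme.
```python
from urllib.parse import quote


def tracked_link(href, server_side):
    """Render an anchor for a referenced resource with either server-side
    (redirect-based) or client-side (asynchronous beacon) click tracking.
    The /click and /log endpoints are illustrative placeholders."""
    if server_side:
        # Synchronous, server-side tracking: the click is routed through a
        # tracking URL that logs it and then redirects to the destination.
        return '<a href="/click?dest={}">{}</a>'.format(quote(href, safe=""), href)
    # Asynchronous, client-side tracking: navigate directly and report the
    # click in the background (navigator.sendBeacon is a standard browser API).
    return ('<a href="{0}" onclick="navigator.sendBeacon(\'/log\', \'{0}\')">{0}</a>'
            .format(href))
```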