Patents by Inventor Thushan Amarasiriwardena

Thushan Amarasiriwardena has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230343324
    Abstract: Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
    Type: Application
    Filed: May 13, 2022
    Publication date: October 26, 2023
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Gianluca Martini
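    The abstract above describes generating an assistant output and adapting it, both its spoken text and its visual cues, to whichever persona is assigned to the assistant. A minimal illustrative sketch of that adaptation step follows; the persona schema, field names, and persona definitions here are hypothetical stand-ins, not taken from the patent.

    ```python
    from dataclasses import dataclass

    # Hypothetical persona definitions; the patent does not specify this schema.
    PERSONAS = {
        "pirate": {"prefix": "Arr! ", "voice": "gravelly", "avatar": "pirate_avatar"},
        "librarian": {"prefix": "Certainly. ", "voice": "soft", "avatar": "librarian_avatar"},
    }

    @dataclass
    class AssistantOutput:
        text: str            # stream of textual content to be synthesized for audible presentation
        visual_cues: list    # cues controlling the display and/or visualized assistant representation

    def adapt_to_persona(base_text: str, persona_id: str) -> AssistantOutput:
        """Adapt an already-generated response to the persona assigned to the assistant."""
        persona = PERSONAS[persona_id]
        return AssistantOutput(
            text=persona["prefix"] + base_text,
            visual_cues=[{"voice": persona["voice"]}, {"avatar": persona["avatar"]}],
        )
    ```

    The abstract also notes an alternative in which output is generated persona-specific from the start (e.g., by conditioning an LLM prompt on the persona) rather than adapted afterward as sketched here.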
  • Publication number: 20230343323
    Abstract: Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 26, 2023
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Gianluca Martini
  • Publication number: 20230343336
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: June 30, 2023
    Publication date: October 26, 2023
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
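    The recurring multi-modal abstract centers on advancing a third-party service's dialog state machine using either verbal or visual/tactile input. A rough sketch of that idea, with illustrative state and event names (nothing here is from the patent itself):

    ```python
    # Transition table for a hypothetical third-party ordering service.
    # Each state can be advanced by a verbal event OR a tactile (tap) event,
    # reflecting the multi-modal input described in the abstract.
    TRANSITIONS = {
        ("choose_size", "verbal:small"): "confirm",
        ("choose_size", "tap:small_button"): "confirm",
        ("confirm", "verbal:yes"): "done",
        ("confirm", "tap:confirm_button"): "done",
    }

    def advance(state: str, event: str) -> str:
        """Advance the dialog state machine on any input modality; stay put on unknown events."""
        return TRANSITIONS.get((state, event), state)
    ```

    The point of the table keyed on (state, event) pairs is that the two modalities are interchangeable at every step: a spoken "small" and a tap on the small-size button reach the same next state.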
  • Patent number: 11735182
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Publication number: 20230074406
    Abstract: As part of a dialog session between a user and an automated assistant, implementations can receive a stream of audio data that captures a spoken utterance including an assistant query, determine, based on processing the stream of audio data, a set of assistant outputs that are each predicted to be responsive to the assistant query, process, using large language model (LLM) output(s), the assistant outputs and context of the dialog session to generate a set of modified assistant outputs, and cause given modified assistant output, from among the set of modified assistant outputs, to be provided for presentation to the user in response to the spoken utterance. In some implementations, the LLM output(s) can be generated in an offline manner for subsequent use in an online manner. In additional or alternative implementations, the LLM output(s) can be generated in an online manner when the spoken utterance is received.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 9, 2023
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Vikram Sridar, Daniel De Freitas Adiwardana, Noam M. Shazeer, Quoc Le
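    This abstract lays out a pipeline: transcribe the spoken utterance, produce a set of candidate assistant outputs, modify them using LLM output(s) plus dialog context, and present one modified output. The sketch below mirrors those steps with hypothetical stand-in functions; the stub return values are placeholders, not the patented implementation.

    ```python
    # Illustrative pipeline following the abstract's steps; every function
    # name and return value here is a hypothetical stand-in.
    def transcribe(audio_stream: bytes) -> str:
        return "what's the weather"  # placeholder for ASR over the audio data stream

    def candidate_outputs(query: str) -> list[str]:
        # A set of outputs, each predicted to be responsive to the assistant query.
        return ["It is 72F and sunny.", "Sunny, with a high of 72."]

    def modify_with_llm(outputs: list[str], context: list[str]) -> list[str]:
        # Per the abstract, LLM outputs may be generated offline and looked up
        # at serving time, or generated online when the utterance arrives.
        # Here a canned rewrite stands in for the LLM-based modification.
        return [o + " Perfect for that walk you mentioned!" for o in outputs]

    def respond(audio_stream: bytes, context: list[str]) -> str:
        query = transcribe(audio_stream)
        modified = modify_with_llm(candidate_outputs(query), context)
        return modified[0]  # a given modified output is presented to the user
    ```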
  • Patent number: 11347801
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: May 31, 2022
    Assignee: GOOGLE LLC
    Inventors: Adam Coimbra, Ulas Kirazci, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Patent number: 11200893
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: December 14, 2021
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena, Yudong Sun, Xiao Gao
  • Patent number: 11170772
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: November 9, 2021
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena, Yudong Sun, Xiao Gao
  • Publication number: 20210193146
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: March 4, 2021
    Publication date: June 24, 2021
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Patent number: 10984786
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: April 20, 2021
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Publication number: 20200294497
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: May 7, 2018
    Publication date: September 17, 2020
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Publication number: 20190340200
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: January 4, 2019
    Publication date: November 7, 2019
    Inventors: Adam Coimbra, Ulas Kirazci, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Publication number: 20190341040
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: February 6, 2019
    Publication date: November 7, 2019
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena, Yudong Sun, Xiao Gao