Patents by Inventor Abraham Lee

Abraham Lee has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966764
    Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation, of an assistant client application on a feature phone, is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: April 23, 2024
    Assignee: GOOGLE LLC
    Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
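    The abstract above turns on routing an assistant invocation on a feature phone either to a plain transcription of the captured voice data or to a full assistant response. A minimal, hypothetical Python sketch of that routing decision follows; every name in it (handle_invocation, transcribe, generate_assistant_response, invoked_from_text_field) is an illustrative assumption and is not taken from the patent.

      def transcribe(voice_data: bytes) -> str:
          # Stand-in speech-to-text step; a real client would call a remote recognizer.
          return voice_data.decode("utf-8", errors="ignore")

      def generate_assistant_response(transcription: str) -> str:
          # Stand-in for remote assistant components producing content that is based
          # on, and goes beyond, the transcription itself.
          return f"Here is what I found for {transcription!r}"

      def handle_invocation(voice_data: bytes, invoked_from_text_field: bool) -> str:
          # If the assistant was invoked from a text-entry context, treat the utterance
          # as a dictation request and return only the transcription.
          transcription = transcribe(voice_data)
          if invoked_from_text_field:
              return transcription
          # Otherwise return assistant content responsive to the transcription.
          return generate_assistant_response(transcription)

      print(handle_invocation(b"set an alarm for 7 am", invoked_from_text_field=False))
      print(handle_invocation(b"meet you at noon", invoked_from_text_field=True))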
  • Patent number: 11959764
    Abstract: Implementations set forth herein relate to interactions, between vehicle computing devices and mobile computing devices, that reduce the occurrence of duplicative processes at either device. Reduction of such processes can be performed, in some instances, via communications between a vehicle computing device and a mobile computing device in order to determine, for example, how to uniquely render content at an interface of each respective computing device while the user is driving the vehicle. These communications can occur before a user has entered a vehicle, while the user is in the vehicle, and/or after a user has left the vehicle. For instance, just before a user enters a vehicle, a vehicle computing device can be primed for certain automated assistant interactions between the user and their mobile computing device. Alternatively, or additionally, the user can authorize the vehicle computing device to perform certain processes immediately after leaving the vehicle.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: April 16, 2024
    Assignee: GOOGLE LLC
    Inventors: Effie Goenawan, Abraham Lee, Arvind Sivaram Sharma, Austin Chang
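    The abstract above describes the two devices coordinating so that only one of them handles a given piece of work. A minimal, hypothetical Python sketch of one such decision (which device renders a piece of content while the user is driving) follows; the content types, device names, and the function name choose_renderer are illustrative assumptions, not taken from the patent.

      def choose_renderer(content_type: str, user_is_driving: bool,
                          vehicle_connected: bool) -> str:
          # Route driving-relevant content to the vehicle display and keep personal
          # content on the phone, so only one device renders (and processes) it.
          if vehicle_connected and user_is_driving and content_type in ("navigation", "media"):
              return "vehicle"
          return "phone"

      # The non-chosen device can skip rendering entirely, avoiding duplicate work.
      print(choose_renderer("navigation", user_is_driving=True, vehicle_connected=True))  # vehicle
      print(choose_renderer("message", user_is_driving=True, vehicle_connected=True))     # phone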
  • Publication number: 20240118910
    Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation, of an assistant client application on a feature phone, is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
    Type: Application
    Filed: December 18, 2023
    Publication date: April 11, 2024
    Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
  • Patent number: 11893402
    Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation, of an assistant client application on a feature phone, is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: February 6, 2024
    Assignee: GOOGLE LLC
    Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
  • Publication number: 20230343336
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: June 30, 2023
    Publication date: October 26, 2023
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
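    The abstract above centers on advancing a third party service's dialog state machine with either spoken or visual/tactile input. A minimal, hypothetical Python sketch of that idea follows; the states, transitions, and the function name advance are invented for illustration and are not taken from the patent.

      # Toy dialog state machine for a third party ordering service.
      TRANSITIONS = {
          ("choose_size", "small"): "choose_topping",
          ("choose_size", "large"): "choose_topping",
          ("choose_topping", "cheese"): "confirm_order",
          ("confirm_order", "yes"): "done",
      }

      def advance(state: str, user_input: str, modality: str) -> str:
          # The modality ("verbal" or "tactile") only records how the input was
          # captured; either one can drive the same transition.
          return TRANSITIONS.get((state, user_input.strip().lower()), state)

      state = "choose_size"
      state = advance(state, "Large", modality="verbal")    # spoken utterance
      state = advance(state, "cheese", modality="tactile")  # tap on a rendered card
      state = advance(state, "yes", modality="verbal")
      print(state)  # -> done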
  • Patent number: 11735182
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Publication number: 20220316904
    Abstract: Implementations set forth herein relate to interactions, between vehicle computing devices and mobile computing devices, that reduce the occurrence of duplicative processes at either device. Reduction of such processes can be performed, in some instances, via communications between a vehicle computing device and a mobile computing device in order to determine, for example, how to uniquely render content at an interface of each respective computing device while the user is driving the vehicle. These communications can occur before a user has entered a vehicle, while the user is in the vehicle, and/or after a user has left the vehicle. For instance, just before a user enters a vehicle, a vehicle computing device can be primed for certain automated assistant interactions between the user and their mobile computing device. Alternatively, or additionally, the user can authorize the vehicle computing device to perform certain processes immediately after leaving the vehicle.
    Type: Application
    Filed: April 2, 2021
    Publication date: October 6, 2022
    Inventors: Effie Goenawan, Abraham Lee, Arvind Sivaram Sharma, Austin Chang
  • Patent number: 11347801
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: May 31, 2022
    Assignee: GOOGLE LLC
    Inventors: Adam Coimbra, Ulas Kirazci, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Publication number: 20220157309
    Abstract: Implementations described herein relate to providing suggestions, via a display modality, for completing a spoken utterance for an automated assistant, in order to reduce a frequency and/or a length of time that the user will participate in a current and/or subsequent dialog session with the automated assistant. A user request can be compiled from content of an ongoing spoken utterance and content of any selected suggestion elements. When a currently compiled portion of the user request (from content of a selected suggestion(s) and an incomplete spoken utterance) is capable of being performed via the automated assistant, any actions corresponding to the currently compiled portion of the user request can be performed via the automated assistant. Furthermore, any further content resulting from performance of the actions, along with any discernible context, can be used for providing further suggestions.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Inventors: Gleb Skobeltsyn, Olga Kapralova, Konstantin Shagin, Vladimir Vuskovic, Yufei Zhao, Bradley Nelson, Alessio Macrì, Abraham Lee
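    The abstract above describes compiling a request from an incomplete spoken utterance plus selected on-screen suggestions, and acting as soon as the compiled request is performable. A minimal, hypothetical Python sketch follows; the action table and all names (suggest_completions, compile_request, KNOWN_ACTIONS) are illustrative assumptions, not taken from the patent.

      # Toy table of requests the assistant knows how to perform.
      KNOWN_ACTIONS = {
          "play music in the kitchen": "play_media(room='kitchen')",
          "play music in the living room": "play_media(room='living_room')",
      }

      def suggest_completions(partial: str) -> list:
          # Offer display suggestions that would complete the partial utterance.
          return [full[len(partial):].strip() for full in KNOWN_ACTIONS if full.startswith(partial)]

      def compile_request(partial: str, selected_suggestion: str) -> str:
          return f"{partial} {selected_suggestion}".strip()

      partial = "play music in the"
      options = suggest_completions(partial)           # e.g. ["kitchen", "living room"]
      request = compile_request(partial, options[0])
      if request in KNOWN_ACTIONS:                     # compiled request is now performable
          print("executing:", KNOWN_ACTIONS[request])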
  • Publication number: 20220107824
    Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation, of an assistant client application on a feature phone, is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
    Type: Application
    Filed: December 16, 2021
    Publication date: April 7, 2022
    Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
  • Patent number: 11238857
    Abstract: Implementations described herein relate to providing suggestions, via a display modality, for completing a spoken utterance for an automated assistant, in order to reduce a frequency and/or a length of time that the user will participate in a current and/or subsequent dialog session with the automated assistant. A user request can be compiled from content of an ongoing spoken utterance and content of any selected suggestion elements. When a currently compiled portion of the user request (from content of a selected suggestion(s) and an incomplete spoken utterance) is capable of being performed via the automated assistant, any actions corresponding to the currently compiled portion of the user request can be performed via the automated assistant. Furthermore, any further content resulting from performance of the actions, along with any discernible context, can be used for providing further suggestions.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: February 1, 2022
    Assignee: GOOGLE LLC
    Inventors: Gleb Skobeltsyn, Olga Kapralova, Konstantin Shagin, Vladimir Vuskovic, Yufei Zhao, Bradley Nelson, Alessio Macrì, Abraham Lee
  • Patent number: 11216292
    Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation, of an assistant client application on a feature phone, is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: January 4, 2022
    Assignee: GOOGLE LLC
    Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
  • Patent number: 11200893
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: December 14, 2021
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena, Yudong Sun, Xiao Gao
  • Patent number: 11170772
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: November 9, 2021
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena, Yudong Sun, Xiao Gao
  • Patent number: 11126555
    Abstract: A system for prefetching data for a processor includes a processor core, a memory configured to store information for use by the processor core, a cache memory configured to fetch and store information from the memory, and a prefetch circuit. The prefetch circuit may be configured to issue a multi-group prefetch request to retrieve information from the memory to store in the cache memory using a predicted address. The multi-group prefetch request may include a depth value indicative of a number of fetch groups to retrieve. The prefetch circuit may also be configured to generate an accuracy value based on a cache hit rate of prefetched information over a particular time interval, and to modify the depth value based on the accuracy value.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: September 21, 2021
    Assignee: Oracle International Corporation
    Inventors: Hyunjin Abraham Lee, Yuan Chou, John Pape
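    The abstract above describes a prefetch circuit that scales how many fetch groups it requests (the depth value) with how accurate recent prefetches turned out to be. A minimal, hypothetical Python model of that feedback loop follows; the thresholds, interval handling, and the class name AdaptivePrefetcher are illustrative assumptions, and real hardware would implement this with counters rather than software.

      class AdaptivePrefetcher:
          def __init__(self, depth=2, min_depth=1, max_depth=8):
              self.depth = depth
              self.min_depth = min_depth
              self.max_depth = max_depth
              self.prefetched = 0   # fetch groups prefetched this interval
              self.hits = 0         # prefetched groups later used by the core

          def issue_prefetch(self, predicted_addr, fetch_group_bytes=64):
              # Multi-group request: `depth` consecutive fetch groups starting at
              # the predicted address.
              self.prefetched += self.depth
              return [predicted_addr + i * fetch_group_bytes for i in range(self.depth)]

          def record_hit(self):
              self.hits += 1

          def end_of_interval(self):
              # Accuracy = fraction of prefetched groups that were actually hit;
              # grow the depth when accuracy is high, shrink it when it is low.
              if self.prefetched:
                  accuracy = self.hits / self.prefetched
                  if accuracy > 0.75:
                      self.depth = min(self.depth + 1, self.max_depth)
                  elif accuracy < 0.25:
                      self.depth = max(self.depth - 1, self.min_depth)
              self.hits = self.prefetched = 0

      pf = AdaptivePrefetcher()
      for addr in pf.issue_prefetch(0x1000):
          pf.record_hit()            # pretend every prefetched group was later used
      pf.end_of_interval()
      print(pf.depth)                # depth grows after an accurate interval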
  • Publication number: 20210280180
    Abstract: Implementations described herein relate to providing suggestions, via a display modality, for completing a spoken utterance for an automated assistant, in order to reduce a frequency and/or a length of time that the user will participate in a current and/or subsequent dialog session with the automated assistant. A user request can be compiled from content of an ongoing spoken utterance and content of any selected suggestion elements. When a currently compiled portion of the user request (from content of a selected suggestion(s) and an incomplete spoken utterance) is capable of being performed via the automated assistant, any actions corresponding to the currently compiled portion of the user request can be performed via the automated assistant. Furthermore, any further content resulting from performance of the actions, along with any discernible context, can be used for providing further suggestions.
    Type: Application
    Filed: February 7, 2019
    Publication date: September 9, 2021
    Inventors: Gleb Skobeltsyn, Olga Kapralova, Konstantin Shagin, Vladimir Vuskovic, Yufei Zhao, Bradley Nelson, Alessio Macrì, Abraham Lee
  • Publication number: 20210193146
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: March 4, 2021
    Publication date: June 24, 2021
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Patent number: 10984786
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: April 20, 2021
    Assignee: GOOGLE LLC
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Publication number: 20200364067
    Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation, of an assistant client application on a feature phone, is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
    Type: Application
    Filed: August 27, 2019
    Publication date: November 19, 2020
    Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
  • Publication number: 20200294497
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Application
    Filed: May 7, 2018
    Publication date: September 17, 2020
    Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena