Patents by Inventor Abraham Lee
Abraham Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11966764
Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation of an assistant client application on a feature phone is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
Type: Grant
Filed: December 16, 2021
Date of Patent: April 23, 2024
Assignee: Google LLC
Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
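The abstract's core distinction — treating an invocation either as a request for the raw transcription or as a request for an assistant response built on that transcription — can be sketched roughly as follows. This is an illustrative toy, not code from the patent; the field names and the routing rule are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class Invocation:
    """A hypothetical assistant-client invocation on a feature phone."""
    context: str     # e.g. the UI context the user invoked the assistant from
    voice_text: str  # transcription of the voice data captured with the invocation

def assistant_response(query: str) -> str:
    # Stand-in for the remote assistant component(s); a real client
    # would send the query over the network and await assistant content.
    return f"[assistant reply to: {query}]"

def handle_invocation(inv: Invocation) -> str:
    # Hypothetical policy: invoked from a text-entry context, the user
    # wants the transcription itself; otherwise the transcription is
    # forwarded and assistant content (which may omit the transcription)
    # is returned instead.
    if inv.context == "text_input":
        return inv.voice_text
    return assistant_response(inv.voice_text)
```

Under this assumed policy, dictating into a message field yields the transcription verbatim, while the same words spoken from the home screen yield an assistant answer.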
-
Patent number: 11959764
Abstract: Implementations set forth herein relate to interactions between vehicle computing devices and mobile computing devices that keep duplicative processes from occurring at either device. Reduction of such processes can be performed, in some instances, via communications between a vehicle computing device and a mobile computing device in order to determine, for example, how to uniquely render content at an interface of each respective computing device while the user is driving the vehicle. These communications can occur before a user has entered a vehicle, while the user is in the vehicle, and/or after a user has left the vehicle. For instance, just before a user enters a vehicle, a vehicle computing device can be primed for certain automated assistant interactions between the user and their mobile computing device. Alternatively, or additionally, the user can authorize the vehicle computing device to perform certain processes immediately after leaving the vehicle.
Type: Grant
Filed: April 2, 2021
Date of Patent: April 16, 2024
Assignee: Google LLC
Inventors: Effie Goenawan, Abraham Lee, Arvind Sivaram Sharma, Austin Chang
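One piece of the idea above — deciding which of the two devices renders a given item so the other can suppress its duplicate — might be sketched as a simple policy function. The policy and field names below are assumptions for illustration, not taken from the patent.

```python
def assign_rendering(item: dict, vehicle_connected: bool) -> dict:
    """Decide which device surfaces an item to avoid duplicate rendering.

    Hypothetical policy: while the phone is paired with the vehicle,
    audio-suited content is rendered by the vehicle and the mobile
    device suppresses its own copy; otherwise the mobile device
    renders everything and the vehicle stays idle.
    """
    if vehicle_connected and item.get("audio_capable"):
        return {"vehicle": "render_audio", "mobile": "suppress"}
    return {"vehicle": "idle", "mobile": "render"}
```

The same function could be evaluated on both devices from shared state, so each arrives at a consistent, non-overlapping rendering decision.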
-
Publication number: 20240118910
Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation of an assistant client application on a feature phone is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
Type: Application
Filed: December 18, 2023
Publication date: April 11, 2024
Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
-
Patent number: 11893402
Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation of an assistant client application on a feature phone is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
Type: Grant
Filed: December 16, 2021
Date of Patent: February 6, 2024
Assignee: Google LLC
Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
-
Publication number: 20230343336
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Application
Filed: June 30, 2023
Publication date: October 26, 2023
Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
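A dialog state machine that accepts transitions from more than one input modality, as this family of abstracts describes, can be sketched as a transition table keyed by modality-tagged events. The states and event names below are invented for illustration and are not from the patent.

```python
# Toy dialog state machine for a third-party ordering flow: each state
# accepts transitions triggered either by a verbal input ("say:...")
# or by a visual/tactile input ("tap:...").
STATES = {
    "menu":    {"say:coffee": "size", "tap:coffee_card": "size"},
    "size":    {"say:large": "confirm", "tap:large_button": "confirm"},
    "confirm": {},  # terminal state in this sketch
}

def advance(state: str, event: str) -> str:
    """Advance to the next dialog state, regardless of input modality.

    Unrecognized events leave the state unchanged.
    """
    return STATES[state].get(event, state)
```

The point of the sketch is that tapping a card and speaking the equivalent phrase drive the machine through the same states, so the service defines one dialog flow rather than one per modality.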
-
Patent number: 11735182
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Grant
Filed: March 4, 2021
Date of Patent: August 22, 2023
Assignee: Google LLC
Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
-
Publication number: 20220316904
Abstract: Implementations set forth herein relate to interactions between vehicle computing devices and mobile computing devices that keep duplicative processes from occurring at either device. Reduction of such processes can be performed, in some instances, via communications between a vehicle computing device and a mobile computing device in order to determine, for example, how to uniquely render content at an interface of each respective computing device while the user is driving the vehicle. These communications can occur before a user has entered a vehicle, while the user is in the vehicle, and/or after a user has left the vehicle. For instance, just before a user enters a vehicle, a vehicle computing device can be primed for certain automated assistant interactions between the user and their mobile computing device. Alternatively, or additionally, the user can authorize the vehicle computing device to perform certain processes immediately after leaving the vehicle.
Type: Application
Filed: April 2, 2021
Publication date: October 6, 2022
Inventors: Effie Goenawan, Abraham Lee, Arvind Sivaram Sharma, Austin Chang
-
Patent number: 11347801
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Grant
Filed: January 4, 2019
Date of Patent: May 31, 2022
Assignee: Google LLC
Inventors: Adam Coimbra, Ulas Kirazci, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
-
Publication number: 20220157309
Abstract: Implementations described herein relate to providing suggestions, via a display modality, for completing a spoken utterance for an automated assistant, in order to reduce a frequency and/or a length of time that the user will participate in a current and/or subsequent dialog session with the automated assistant. A user request can be compiled from content of an ongoing spoken utterance and content of any selected suggestion elements. When a currently compiled portion of the user request (from content of a selected suggestion(s) and an incomplete spoken utterance) is capable of being performed via the automated assistant, any actions corresponding to the currently compiled portion of the user request can be performed via the automated assistant. Furthermore, any further content resulting from performance of the actions, along with any discernible context, can be used for providing further suggestions.
Type: Application
Filed: January 31, 2022
Publication date: May 19, 2022
Inventors: Gleb Skobeltsyn, Olga Kapralova, Konstantin Shagin, Vladimir Vuskovic, Yufei Zhao, Bradley Nelson, Alessio Macrì, Abraham Lee
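The compile-then-perform loop this abstract describes — joining an in-progress utterance with tapped suggestion chips and acting as soon as the compiled request resolves to a known action — might be sketched like this. The helper names and the action table are hypothetical, introduced only for illustration.

```python
from typing import Callable, Optional

def compile_request(spoken_so_far: str, selected: list[str]) -> str:
    """Join the in-progress spoken utterance with any selected suggestions."""
    return " ".join([spoken_so_far, *selected]).strip()

def maybe_perform(request: str,
                  known_actions: dict[str, Callable[[], str]]) -> Optional[str]:
    """Perform an action as soon as the compiled request resolves to one.

    Returns the action's result, or None while the request is still
    incomplete (so the assistant can keep offering suggestions).
    """
    action = known_actions.get(request)
    return action() if action else None
```

For example, with the (assumed) action table `{"set a timer for 5 minutes": ...}`, speaking "set a timer" and tapping a "for 5 minutes" chip compiles to a performable request without the user finishing the sentence aloud.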
-
Publication number: 20220107824
Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation of an assistant client application on a feature phone is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
Type: Application
Filed: December 16, 2021
Publication date: April 7, 2022
Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
-
Patent number: 11238857
Abstract: Implementations described herein relate to providing suggestions, via a display modality, for completing a spoken utterance for an automated assistant, in order to reduce a frequency and/or a length of time that the user will participate in a current and/or subsequent dialog session with the automated assistant. A user request can be compiled from content of an ongoing spoken utterance and content of any selected suggestion elements. When a currently compiled portion of the user request (from content of a selected suggestion(s) and an incomplete spoken utterance) is capable of being performed via the automated assistant, any actions corresponding to the currently compiled portion of the user request can be performed via the automated assistant. Furthermore, any further content resulting from performance of the actions, along with any discernible context, can be used for providing further suggestions.
Type: Grant
Filed: February 7, 2019
Date of Patent: February 1, 2022
Assignee: Google LLC
Inventors: Gleb Skobeltsyn, Olga Kapralova, Konstantin Shagin, Vladimir Vuskovic, Yufei Zhao, Bradley Nelson, Alessio Macrì, Abraham Lee
-
Patent number: 11216292
Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation of an assistant client application on a feature phone is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
Type: Grant
Filed: August 27, 2019
Date of Patent: January 4, 2022
Assignee: Google LLC
Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
-
Patent number: 11200893
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Grant
Filed: February 6, 2019
Date of Patent: December 14, 2021
Assignee: Google LLC
Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena, Yudong Sun, Xiao Gao
-
Patent number: 11170772
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Grant
Filed: February 6, 2019
Date of Patent: November 9, 2021
Assignee: Google LLC
Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena, Yudong Sun, Xiao Gao
-
Patent number: 11126555
Abstract: A system for prefetching data for a processor includes a processor core, a memory configured to store information for use by the processor core, a cache memory configured to fetch and store information from the memory, and a prefetch circuit. The prefetch circuit may be configured to issue a multi-group prefetch request to retrieve information from the memory to store in the cache memory using a predicted address. The multi-group prefetch request may include a depth value indicative of a number of fetch groups to retrieve. The prefetch circuit may also be configured to generate an accuracy value based on a cache hit rate of prefetched information over a particular time interval, and to modify the depth value based on the accuracy value.
Type: Grant
Filed: March 2, 2020
Date of Patent: September 21, 2021
Assignee: Oracle International Corporation
Inventors: Hyunjin Abraham Lee, Yuan Chou, John Pape
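The feedback loop this abstract describes — an accuracy value derived from the hit rate of prefetched lines over an interval, used to modify the prefetch depth — can be modeled in a few lines. The thresholds, step sizes, and clamps below are hypothetical; the patent does not specify them here.

```python
def adjust_depth(depth: int, hits: int, prefetches: int,
                 lo: float = 0.5, hi: float = 0.9,
                 min_depth: int = 1, max_depth: int = 8) -> int:
    """Adjust a multi-group prefetch depth from the observed hit rate.

    Hypothetical policy in the spirit of the abstract: a high accuracy
    value (fraction of prefetched lines that were actually hit during
    the interval) earns a deeper prefetch on the next request; a low
    one throttles it back. Depth stays within [min_depth, max_depth].
    """
    accuracy = hits / prefetches if prefetches else 0.0
    if accuracy >= hi:
        depth += 1
    elif accuracy < lo:
        depth -= 1
    return max(min_depth, min(max_depth, depth))
```

Clamping the depth keeps an unlucky interval from collapsing prefetching entirely, and keeps a lucky streak from flooding the cache with speculative fetch groups.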
-
Publication number: 20210280180
Abstract: Implementations described herein relate to providing suggestions, via a display modality, for completing a spoken utterance for an automated assistant, in order to reduce a frequency and/or a length of time that the user will participate in a current and/or subsequent dialog session with the automated assistant. A user request can be compiled from content of an ongoing spoken utterance and content of any selected suggestion elements. When a currently compiled portion of the user request (from content of a selected suggestion(s) and an incomplete spoken utterance) is capable of being performed via the automated assistant, any actions corresponding to the currently compiled portion of the user request can be performed via the automated assistant. Furthermore, any further content resulting from performance of the actions, along with any discernible context, can be used for providing further suggestions.
Type: Application
Filed: February 7, 2019
Publication date: September 9, 2021
Inventors: Gleb Skobeltsyn, Olga Kapralova, Konstantin Shagin, Vladimir Vuskovic, Yufei Zhao, Bradley Nelson, Alessio Macrì, Abraham Lee
-
Publication number: 20210193146
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Application
Filed: March 4, 2021
Publication date: June 24, 2021
Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
-
Patent number: 10984786
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Grant
Filed: May 7, 2018
Date of Patent: April 20, 2021
Assignee: Google LLC
Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
-
Publication number: 20200364067
Abstract: Some implementations are directed to adapting a client application on a feature phone based on experiment parameters. Some of those implementations are directed to adapting an assistant client application, where the assistant client application interacts with remote assistant component(s) to provide automated assistant functionalities via the assistant client application of the feature phone. Some implementations are additionally or alternatively directed to determining whether an invocation of an assistant client application on a feature phone is a request for transcription of voice data received in conjunction with the invocation, or is instead a request for an assistant response that is responsive to the transcription of the voice data (e.g., includes assistant content that is based on and in addition to the transcription, and that optionally lacks the transcription itself).
Type: Application
Filed: August 27, 2019
Publication date: November 19, 2020
Inventors: Diego Accame, Abraham Lee, Yujie Wan, Shriya Raghunathan, Raymond Carino, Feng Ji, Shashwat Lal Das, Nickolas Westman
-
Publication number: 20200294497
Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with the third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
Type: Application
Filed: May 7, 2018
Publication date: September 17, 2020
Inventors: Ulas Kirazci, Adam Coimbra, Abraham Lee, Wei Dong, Thushan Amarasiriwardena