Patents by Inventor Lisa Takehana
Lisa Takehana has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240184503
  Abstract: A request is received to initiate an operation to share a first document displayed via a first graphical user interface (GUI) on a first client device of a first participant of a conference call with a second participant of the conference call via a second GUI on a second client device. Image data depicting the first participant during the conference call is fed as input to a model. A second document including second content corresponding to first content of the first document is obtained based on an output of the model. One or more regions of the second document satisfy an image placement criterion. The second document and a portion of the image data depicting the first participant are provided during the conference call for presentation in at least one of the one or more regions of the second document via the second GUI on the second client device.
  Type: Application
  Filed: February 9, 2024
  Publication date: June 6, 2024
  Inventors: Alexander H. Rothera, Lisa Takehana
  (A minimal, hypothetical sketch of this region-based image placement appears after this listing.)
- Publication number: 20240013783
  Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
  Type: Application
  Filed: September 11, 2023
  Publication date: January 11, 2024
  Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
  (A minimal, hypothetical sketch of context-based arbitration among assistant devices appears after this listing.)
- Patent number: 11756546
  Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
  Type: Grant
  Filed: June 14, 2021
  Date of Patent: September 12, 2023
  Assignee: GOOGLE LLC
  Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
- Publication number: 20220374190
  Abstract: Systems and methods for overlaying an image of a conference call participant with a shared document are provided. A request is received to initiate a document sharing operation to share a document displayed via a first graphical user interface (GUI) on a first client device associated with a first participant of a conference call with a second participant of the conference call via a second GUI on a second client device. Image data corresponding to a view of the first participant in a surrounding environment is also received. An image depicting the first participant is obtained based on the received image data. One or more regions of the document that satisfy one or more image placement criteria are identified. The document and the image depicting the first participant are provided for presentation via the second GUI on the second client device. The image depicting the first participant is presented at a region of the identified one or more regions of the document.
  Type: Application
  Filed: December 13, 2021
  Publication date: November 24, 2022
  Inventors: Alexander H. Rothera, Lisa Takehana
- Patent number: 11508371
  Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
  Type: Grant
  Filed: May 14, 2020
  Date of Patent: November 22, 2022
  Assignee: GOOGLE LLC
  Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Ruxandra Davies, Lisa Takehana
  (A minimal, hypothetical sketch of a profile stack data structure appears after this listing.)
- Publication number: 20220276722
  Abstract: Implementations provided herein relate to correlating available input gestures to recently created application functions, and adapting available input gestures, and/or user-created input gestures, to be correlated with existing application functions. Available input gestures (e.g., a hand wave) can be those that can be readily performed upon setup of a computing device. When a user installs an application that is not initially configured to handle the available input gestures, the available input gestures can be correlated to certain functions of the application. Furthermore, a user can create new gestures for application actions and/or modify existing gestures according to their own preferences and/or physical capabilities. When multiple users elect to modify an existing gesture in the same way, the modification can be made universal, with permission from the users, in order to eliminate latency when subsequently adapting to preferences of other users.
  Type: Application
  Filed: May 18, 2022
  Publication date: September 1, 2022
  Inventors: Ruxandra Davies, Lisa Takehana
  (A minimal, hypothetical sketch of gesture-to-function mapping appears after this listing.)
- Patent number: 11340705
  Abstract: Implementations provided herein relate to correlating available input gestures to recently created application functions, and adapting available input gestures, and/or user-created input gestures, to be correlated with existing application functions. Available input gestures (e.g., a hand wave) can be those that can be readily performed upon setup of a computing device. When a user installs an application that is not initially configured to handle the available input gestures, the available input gestures can be correlated to certain functions of the application. Furthermore, a user can create new gestures for application actions and/or modify existing gestures according to their own preferences and/or physical capabilities. When multiple users elect to modify an existing gesture in the same way, the modification can be made universal, with permission from the users, in order to eliminate latency when subsequently adapting to preferences of other users.
  Type: Grant
  Filed: March 27, 2019
  Date of Patent: May 24, 2022
  Assignee: GOOGLE LLC
  Inventors: Ruxandra Davies, Lisa Takehana
- Publication number: 20210304764
  Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
  Type: Application
  Filed: June 14, 2021
  Publication date: September 30, 2021
  Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
- Patent number: 11081099
  Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
  Type: Grant
  Filed: December 20, 2019
  Date of Patent: August 3, 2021
  Assignee: GOOGLE LLC
  Inventors: Justin Lewis, Lisa Takehana
  (A minimal, hypothetical sketch of pronunciation attribute selection appears after this listing.)
- Patent number: 11037562
  Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
  Type: Grant
  Filed: August 23, 2018
  Date of Patent: June 15, 2021
  Assignee: GOOGLE LLC
  Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
- Publication number: 20200301512
  Abstract: Implementations provided herein relate to correlating available input gestures to recently created application functions, and adapting available input gestures, and/or user-created input gestures, to be correlated with existing application functions. Available input gestures (e.g., a hand wave) can be those that can be readily performed upon setup of a computing device. When a user installs an application that is not initially configured to handle the available input gestures, the available input gestures can be correlated to certain functions of the application. Furthermore, a user can create new gestures for application actions and/or modify existing gestures according to their own preferences and/or physical capabilities. When multiple users elect to modify an existing gesture in the same way, the modification can be made universal, with permission from the users, in order to eliminate latency when subsequently adapting to preferences of other users.
  Type: Application
  Filed: March 27, 2019
  Publication date: September 24, 2020
  Inventors: Ruxandra Davies, Lisa Takehana
- Publication number: 20200302925
  Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
  Type: Application
  Filed: August 23, 2018
  Publication date: September 24, 2020
  Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
- Publication number: 20200273461
  Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
  Type: Application
  Filed: May 14, 2020
  Publication date: August 27, 2020
  Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Ruxandra Davies, Lisa Takehana
- Publication number: 20200243063
  Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
  Type: Application
  Filed: December 20, 2019
  Publication date: July 30, 2020
  Inventors: Justin Lewis, Lisa Takehana
- Patent number: 10665236
  Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
  Type: Grant
  Filed: April 30, 2018
  Date of Patent: May 26, 2020
  Assignee: Google LLC
  Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Lisa Takehana, Ruxandra Davies
- Patent number: 10559296
  Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
  Type: Grant
  Filed: June 1, 2018
  Date of Patent: February 11, 2020
  Assignee: Google LLC
  Inventors: Justin Lewis, Lisa Takehana
- Publication number: 20190180742
  Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
  Type: Application
  Filed: April 30, 2018
  Publication date: June 13, 2019
  Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Lisa Takehana, Ruxandra Davies
- Publication number: 20180286382
  Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
  Type: Application
  Filed: June 1, 2018
  Publication date: October 4, 2018
  Inventors: Justin Lewis, Lisa Takehana
- Publication number: 20180190262
  Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
  Type: Application
  Filed: December 29, 2016
  Publication date: July 5, 2018
  Inventors: Justin Lewis, Lisa Takehana
- Patent number: 10013971
  Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
  Type: Grant
  Filed: December 29, 2016
  Date of Patent: July 3, 2018
  Assignee: Google LLC
  Inventors: Justin Lewis, Lisa Takehana
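The document-sharing filings above (publications 20240184503 and 20220374190) describe identifying regions of a shared document that satisfy an image placement criterion and presenting the participant's image in one of those regions. Below is a minimal, hypothetical Python sketch of that region-selection idea; the Region fields, the whitespace metric, and all thresholds are illustrative assumptions, not details taken from the filings.

```python
# Hypothetical sketch: pick a document region that satisfies an image placement
# criterion, then report where the participant image would be overlaid.
# Region names, thresholds, and the whitespace metric are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Region:
    name: str                # e.g. "lower-right margin"
    x: int
    y: int
    width: int
    height: int
    whitespace_ratio: float  # fraction of the region not covered by text or graphics

def find_placement_regions(regions: List[Region],
                           min_whitespace: float = 0.8,
                           min_width: int = 160,
                           min_height: int = 120) -> List[Region]:
    """Return regions large and empty enough to host the participant image."""
    return [r for r in regions
            if r.whitespace_ratio >= min_whitespace
            and r.width >= min_width
            and r.height >= min_height]

def place_participant_image(regions: List[Region]) -> Optional[Region]:
    """Pick the emptiest qualifying region; None means fall back to a side panel."""
    candidates = find_placement_regions(regions)
    if not candidates:
        return None
    return max(candidates, key=lambda r: r.whitespace_ratio)

if __name__ == "__main__":
    doc_regions = [
        Region("header", 0, 0, 800, 80, 0.30),
        Region("body text", 0, 80, 800, 600, 0.10),
        Region("lower-right margin", 560, 680, 240, 180, 0.95),
    ]
    chosen = place_participant_image(doc_regions)
    print("Overlay participant video in:", chosen.name if chosen else "side panel")
```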
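Publications 20240013783, 20210304764, and 20200302925 and patents 11756546 and 11037562 describe context-dependent regulation of which assistant device responds to an utterance acknowledged by several devices. The following small sketch shows one way such arbitration could be scored; the device fields, weights, and heuristic are assumptions for illustration, not the patented mechanism.

```python
# Hypothetical sketch of context-dependent arbitration among assistant devices
# that all heard the same utterance (e.g. "stop"). The fields and the scoring
# heuristic below are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class AssistantDevice:
    name: str
    room: str
    is_playing_media: bool   # an active action (music, timer) that "stop" could target
    heard_volume: float      # how clearly this device heard the utterance (0..1)

def choose_responder(devices: List[AssistantDevice], user_room: str) -> AssistantDevice:
    """Score each device that acknowledged the utterance and pick one responder.

    Heuristic: prefer devices with a stoppable action in progress, then devices
    in the user's room, then the device that heard the user most clearly.
    """
    def score(d: AssistantDevice) -> float:
        return (2.0 * d.is_playing_media
                + 1.0 * (d.room == user_room)
                + d.heard_volume)
    return max(devices, key=score)

if __name__ == "__main__":
    devices = [
        AssistantDevice("kitchen display", "kitchen", is_playing_media=True, heard_volume=0.4),
        AssistantDevice("living-room speaker", "living room", is_playing_media=False, heard_volume=0.9),
    ]
    responder = choose_responder(devices, user_room="living room")
    print(f"'{responder.name}' handles the 'stop' utterance")
```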
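Patents 11508371 and 10665236 and publications 20200273461 and 20190180742 describe a profile stack data structure in which a baseline third-party policy layer and a pushed user-account layer jointly govern assistant actions during a session. The sketch below is a hypothetical illustration of such a stack; the policy names, the compatibility check, and the preference lookup are assumptions rather than the claimed implementation.

```python
# Hypothetical sketch of a profile stack: a baseline third-party policy layer with a
# user-account layer pushed on top for the session. Policy names and the checks are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProfileLayer:
    owner: str                                      # "third-party device" or "user account"
    policies: Dict[str, bool]                       # e.g. {"allow_purchases": False}
    preferences: Dict[str, str] = field(default_factory=dict)

class ProfileStack:
    def __init__(self, base_layer: ProfileLayer):
        self._layers: List[ProfileLayer] = [base_layer]

    def push(self, layer: ProfileLayer) -> None:
        self._layers.append(layer)

    def pop(self) -> ProfileLayer:
        return self._layers.pop()

    def is_allowed(self, action: str) -> bool:
        """An action is compatible only if no layer's policy forbids it."""
        return all(layer.policies.get(action, True) for layer in self._layers)

    def preference(self, key: str, default: str = "") -> str:
        """The most recently pushed layer wins for preferences."""
        for layer in reversed(self._layers):
            if key in layer.preferences:
                return layer.preferences[key]
        return default

if __name__ == "__main__":
    hotel_layer = ProfileLayer("third-party device", {"allow_purchases": False})
    guest_layer = ProfileLayer("user account", {}, {"music_service": "example-music"})
    stack = ProfileStack(hotel_layer)
    stack.push(guest_layer)                      # session established for the identified account
    print(stack.is_allowed("allow_purchases"))   # False: the base layer still governs
    print(stack.preference("music_service"))     # "example-music" from the guest layer
    stack.pop()                                  # disassemble: remove the guest layer at session end
```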
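Patent 11340705 and publications 20220276722 and 20200301512 describe correlating a device's available input gestures with the functions of a newly installed application and letting users remap gestures. The following hypothetical sketch shows one way such a mapping with per-user overrides could look; the gesture and action names are invented for the example.

```python
# Hypothetical sketch of correlating available input gestures with the actions of a
# newly installed application, plus a per-user override. Names are illustrative.
from typing import Dict, List, Optional

AVAILABLE_GESTURES: List[str] = ["hand_wave", "swipe_left", "swipe_right", "pinch"]

def correlate_gestures(app_actions: List[str],
                       user_overrides: Optional[Dict[str, str]] = None) -> Dict[str, str]:
    """Assign each application action an available gesture.

    Defaults pair actions with available gestures in order; user overrides
    (gesture -> action) take precedence, modelling user-created or remapped gestures.
    """
    mapping = {gesture: action
               for gesture, action in zip(AVAILABLE_GESTURES, app_actions)}
    if user_overrides:
        mapping.update(user_overrides)
    return mapping

if __name__ == "__main__":
    actions = ["dismiss_notification", "previous_track", "next_track", "zoom"]
    # A user prefers a hand wave to skip tracks instead of dismissing notifications.
    mapping = correlate_gestures(actions, user_overrides={"hand_wave": "next_track"})
    for gesture, action in mapping.items():
        print(f"{gesture:12s} -> {action}")
```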
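Patents 11081099 and 10013971 and publications 20200243063, 20180286382, and 20180190262 describe selecting a pronunciation attribute on a shared device by comparing an attribute derived from a spoken utterance with candidate attributes from user profiles. The sketch below is a hypothetical illustration of that comparison step; the feature-vector representation and the distance metric are assumptions, not the claimed method.

```python
# Hypothetical sketch of selecting a pronunciation attribute on a shared device:
# an attribute derived from the spoken utterance is compared with candidate
# attributes from user profiles, and the closest match is selected. The toy
# feature vectors and Euclidean distance are illustrative assumptions.
import math
from typing import Dict, List, Tuple

# Candidate pronunciation attributes keyed by profile; values are toy feature vectors
# (e.g. statistics of how each user pronounces a given name or word).
CANDIDATES: Dict[str, List[float]] = {
    "profile_a": [0.82, 0.10, 0.35],
    "profile_b": [0.40, 0.55, 0.20],
}

def distance(a: List[float], b: List[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_pronunciation(received: List[float],
                         candidates: Dict[str, List[float]]) -> Tuple[str, List[float]]:
    """Compare the received attribute with each candidate and return the best match."""
    best_profile = min(candidates, key=lambda p: distance(received, candidates[p]))
    return best_profile, candidates[best_profile]

if __name__ == "__main__":
    received_attribute = [0.80, 0.12, 0.30]  # derived from audio of the spoken utterance
    profile, attribute = select_pronunciation(received_attribute, CANDIDATES)
    print(f"Selected pronunciation attribute from {profile}: {attribute}")
```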