Patents by Inventor Lisa Takehana

Lisa Takehana has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240184503
    Abstract: A request is received to initiate an operation to share a first document displayed via a first graphical user interface (GUI) on a first client device of a first participant of a conference call with a second participant of the conference call via a second GUI on a second client device. Image data depicting the first participant during the conference call is fed as input to a model. A second document including second content corresponding to first content of the first document is obtained based on an output of the model. One or more regions of the second document satisfy an image placement criterion. The second document and a portion of the image data depicting the first participant are provided during the conference call for presentation in at least one of the one or more regions of the second document via the second GUI on the second client device.
    Type: Application
    Filed: February 9, 2024
    Publication date: June 6, 2024
    Inventors: Alexander H. Rothera, Lisa Takehana
  • Publication number: 20240013783
    Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
    Type: Application
    Filed: September 11, 2023
    Publication date: January 11, 2024
    Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
  • Patent number: 11756546
    Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: September 12, 2023
    Assignee: GOOGLE LLC
    Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
  • Publication number: 20220374190
    Abstract: Systems and methods for overlaying an image of a conference call participant with a shared document are provided. A request is received to initiate a document sharing operation to share a document displayed via a first graphical user interface (GUI) on a first client device associated with a first participant of a conference call with a second participant of the conference call via a second GUI on a second client device. Image data corresponding to a view of the first participant in a surrounding environment is also received. An image depicting the first participant is obtained based on the received image data. One or more regions of the document that satisfy one or more image placement criteria are identified. The document and the image depicting the first participant are provided for presentation via the second GUI on the second client device. The image depicting the first participant is presented at a region of the identified one or more regions of the document.
    Type: Application
    Filed: December 13, 2021
    Publication date: November 24, 2022
    Inventors: Alexander H. Rothera, Lisa Takehana
  • Patent number: 11508371
    Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: November 22, 2022
    Assignee: GOOGLE LLC
    Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Ruxandra Davies, Lisa Takehana
  • Publication number: 20220276722
    Abstract: Implementations provided herein relate to correlating available input gestures to recently created application functions, and adapting available input gestures, and/or user-created input gestures, to be correlated with existing application functions. Available input gestures (e.g., a hand wave) can be those that can be readily performed upon setup of a computing device. When a user installs an application that is not initially configured to handle the available input gestures, the available input gestures can be correlated to certain functions of the application. Furthermore, a user can create new gestures for application actions and/or modify existing gestures according to their own preferences and/or physical capabilities. When multiple users elect to modify an existing gesture in the same way, the modification can be made universal, with permission from the users, in order to eliminate latency when subsequently adapting to preferences of other users.
    Type: Application
    Filed: May 18, 2022
    Publication date: September 1, 2022
    Inventors: Ruxandra Davies, Lisa Takehana
  • Patent number: 11340705
    Abstract: Implementations provided herein relate to correlating available input gestures to recently created application functions, and adapting available input gestures, and/or user-created input gestures, to be correlated with existing application functions. Available input gestures (e.g., a hand wave) can be those that can be readily performed upon setup of a computing device. When a user installs an application that is not initially configured to handle the available input gestures, the available input gestures can be correlated to certain functions of the application. Furthermore, a user can create new gestures for application actions and/or modify existing gestures according to their own preferences and/or physical capabilities. When multiple users elect to modify an existing gesture in the same way, the modification can be made universal, with permission from the users, in order to eliminate latency when subsequently adapting to preferences of other users.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: May 24, 2022
    Assignee: GOOGLE LLC
    Inventors: Ruxandra Davies, Lisa Takehana
  • Publication number: 20210304764
    Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
    Type: Application
    Filed: June 14, 2021
    Publication date: September 30, 2021
    Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
  • Patent number: 11081099
    Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 3, 2021
    Assignee: GOOGLE LLC
    Inventors: Justin Lewis, Lisa Takehana
  • Patent number: 11037562
    Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: June 15, 2021
    Assignee: GOOGLE LLC
    Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
  • Publication number: 20200301512
    Abstract: Implementations provided herein relate to correlating available input gestures to recently created application functions, and adapting available input gestures, and/or user-created input gestures, to be correlated with existing application functions. Available input gestures (e.g., a hand wave) can be those that can be readily performed upon setup of a computing device. When a user installs an application that is not initially configured to handle the available input gestures, the available input gestures can be correlated to certain functions of the application. Furthermore, a user can create new gestures for application actions and/or modify existing gestures according to their own preferences and/or physical capabilities. When multiple users elect to modify an existing gesture in the same way, the modification can be made universal, with permission from the users, in order to eliminate latency when subsequently adapting to preferences of other users.
    Type: Application
    Filed: March 27, 2019
    Publication date: September 24, 2020
    Inventors: Ruxandra Davies, Lisa Takehana
  • Publication number: 20200302925
    Abstract: Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time in order that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on a context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
    Type: Application
    Filed: August 23, 2018
    Publication date: September 24, 2020
    Inventors: Raunaq Shah, Jaclyn Konzelmann, Lisa Takehana, Ruxandra Davies, Adrian Diaconu
  • Publication number: 20200273461
    Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
    Type: Application
    Filed: May 14, 2020
    Publication date: August 27, 2020
    Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Ruxandra Davies, Lisa Takehana
  • Publication number: 20200243063
    Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
    Type: Application
    Filed: December 20, 2019
    Publication date: July 30, 2020
    Inventors: Justin Lewis, Lisa Takehana
  • Patent number: 10665236
    Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: May 26, 2020
    Assignee: Google LLC
    Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Lisa Takehana, Ruxandra Davies
  • Patent number: 10559296
    Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: February 11, 2020
    Assignee: Google LLC
    Inventors: Justin Lewis, Lisa Takehana
  • Publication number: 20190180742
    Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
    Type: Application
    Filed: April 30, 2018
    Publication date: June 13, 2019
    Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Lisa Takehana, Ruxandra Davies
  • Publication number: 20180286382
    Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
    Type: Application
    Filed: June 1, 2018
    Publication date: October 4, 2018
    Inventors: Justin Lewis, Lisa Takehana
  • Publication number: 20180190262
    Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Justin Lewis, Lisa Takehana
  • Patent number: 10013971
    Abstract: Methods, systems, and apparatus for determining candidate user profiles as being associated with a shared device, and identifying, from the candidate user profiles, candidate pronunciation attributes associated with at least one of the candidate user profiles determined to be associated with the shared device. The methods, systems, and apparatus are also for receiving, at the shared device, a spoken utterance; determining a received pronunciation attribute based on received audio data corresponding to the spoken utterance; comparing the received pronunciation attribute to at least one of the candidate pronunciation attributes; and selecting a particular pronunciation attribute from the candidate pronunciation attributes based on a result of the comparison of the received pronunciation attribute to at least one of the candidate pronunciation attributes.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 3, 2018
    Assignee: Google LLC
    Inventors: Justin Lewis, Lisa Takehana
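
The profile-stack abstracts above (e.g., patent 11508371) describe a layered data structure: a base layer of third-party policies, a personal layer pushed on top for the duration of a session, and disassembly when the session ends. A minimal sketch of that layering idea follows; the class name, method names, and policy keys are illustrative assumptions, not the patented design:

```python
# Illustrative sketch of a layered "profile stack": a base layer of
# third-party policies with a per-session personal layer pushed on top.
# Lookups search top-down, so later layers shadow earlier ones.

class ProfileStack:
    def __init__(self, base_policies):
        # First profile layer: policies configured by a third-party device.
        self.layers = [dict(base_policies)]

    def push(self, layer):
        # Additional profile layer, e.g. retrieved from a user account.
        self.layers.append(dict(layer))

    def lookup(self, key):
        # Search from the most recently pushed layer down to the base.
        for layer in reversed(self.layers):
            if key in layer:
                return layer[key]
        return None

    def disassemble(self):
        # Remove pushed layers when the session terminates.
        while len(self.layers) > 1:
            self.layers.pop()

stack = ProfileStack({"allow_purchases": False, "language": "en"})
stack.push({"language": "fr"})
print(stack.lookup("language"))         # personal layer shadows the base
print(stack.lookup("allow_purchases"))  # falls through to the base layer
stack.disassemble()
print(stack.lookup("language"))         # base layer restored
```

The stack shape matters here: shadowing gives the account layer priority for its own settings, while anything it does not define falls through to the third-party baseline, and popping the layers cleanly restores that baseline.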
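
The pronunciation-attribute abstracts (e.g., patent 10013971) describe comparing an attribute derived from received audio against candidate attributes drawn from user profiles associated with a shared device, then selecting the best-matching candidate. A hedged sketch of that comparison step, assuming attributes are encoded as numeric feature vectors and squared distance as the similarity measure (both illustrative choices, not specified by the abstract):

```python
# Illustrative sketch: select the candidate pronunciation attribute
# closest to the attribute derived from the received spoken utterance.
# Vector encoding and distance metric are assumptions for the example.

def select_pronunciation(received, candidates):
    """Return the candidate profile whose attribute vector best matches."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda c: distance(received, c["vector"]))

profiles = [
    {"user": "alice", "vector": [0.9, 0.1, 0.3]},
    {"user": "bob",   "vector": [0.2, 0.8, 0.5]},
]
best = select_pronunciation([0.85, 0.15, 0.25], profiles)
print(best["user"])  # closest candidate wins
```

In a shared-device setting this selection can do double duty: the winning candidate both identifies the likely speaker's profile and supplies the pronunciation attribute used to interpret or respond to the utterance.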