Patents by Inventor Yuzhao Ni

Yuzhao Ni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200265835
    Abstract: Transferring (e.g., automatically) an automated assistant routine between client devices during execution of the automated assistant routine. The automated assistant routine can correspond to a set of actions to be performed by one or more agents and/or one or more devices. While content, corresponding to an action of the routine, is being rendered at a particular device, the user may walk away from the particular device and toward a separate device. The automated assistant routine can be automatically transferred in response, and the separate device can continue rendering the content for the user.
    Type: Application
    Filed: April 23, 2018
    Publication date: August 20, 2020
    Inventor: Yuzhao Ni
  • Publication number: 20200192630
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for tailoring composite graphical assistant interfaces for interacting with multiple different connected devices. The composite graphical assistant interfaces can be generated in response to a user providing a request for an automated assistant to cause a connected device to perform a particular function. In response to the automated assistant receiving the request, the automated assistant can identify other functions that the connected device is capable of performing. The other functions can then be mapped to various graphical control elements in order to provide a composite graphical assistant interface from which the user can interact with the connected device. Each graphical control element can be arranged according to a status of the connected device, in order to reflect how the connected device is operating simultaneously with the presentation of the composite graphical assistant interface.
    Type: Application
    Filed: May 7, 2018
    Publication date: June 18, 2020
    Inventors: David Roy Schairer, Triona Butler, Cindy Tran, Mark Spates, IV, Di Lin, Yuzhao Ni, Lisa Williams
  • Publication number: 20200117502
    Abstract: Methods, apparatus, systems, and computer-readable media for engaging an automated assistant to perform multiple tasks through a multitask command. The multitask command can be a command that, when provided by a user, causes the automated assistant to invoke multiple different agent modules for performing tasks to complete the multitask command. During execution of the multitask command, a user can provide input that can be used by one or more agent modules to perform their respective tasks. Furthermore, feedback from one or more agent modules can be used by the automated assistant to dynamically alter tasks in order to more effectively use resources available during completion of the multitask command.
    Type: Application
    Filed: December 13, 2019
    Publication date: April 16, 2020
    Inventors: Yuzhao Ni, David Schairer
  • Publication number: 20200112454
    Abstract: Various arrangements for using captured voice to generate a custom interface controller are presented. A vocal recording from a user may be captured in which a spoken command and multiple smart-home devices are indicated. One or more common functions that map to the multiple smart-home devices may be determined. A custom interface controller may be generated that controls the one or more common functions of each smart-home device of the multiple smart-home devices.
    Type: Application
    Filed: October 8, 2018
    Publication date: April 9, 2020
    Applicant: Google LLC
    Inventors: Benjamin Brown, Christopher Conover, Lishan Zhang, Alexander Crettenand, Lantian Zheng, Mafang Yao, Minoo Erfani Joorabchi, Francisco Pedro Lopes Pimenta, Rui Zhao, Yuzhao Ni
  • Publication number: 20200089709
    Abstract: Generating and/or recommending command bundles for a user of an automated assistant. A command bundle comprises a plurality of discrete actions that can be performed by an automated assistant. One or more of the actions of a command bundle can cause transmission of a corresponding command and/or other data to one or more devices and/or agents that are distinct from devices and/or agents to which data is transmitted based on other action(s) of the bundle. Implementations determine command bundles that are likely relevant to a user, and present those command bundles as suggestions to the user. In some of those implementations, a machine learning model is utilized to generate a user action embedding for the user, and a command bundle embedding for each of a plurality of command bundles. Command bundle(s) can be selected for suggestion based on comparison of the user action embedding and the command bundle embeddings.
    Type: Application
    Filed: November 22, 2019
    Publication date: March 19, 2020
    Inventor: Yuzhao Ni
  • Patent number: 10552204
    Abstract: Methods, apparatus, systems, and computer-readable media for engaging an automated assistant to perform multiple tasks through a multitask command. The multitask command can be a command that, when provided by a user, causes the automated assistant to invoke multiple different agent modules for performing tasks to complete the multitask command. During execution of the multitask command, a user can provide input that can be used by one or more agent modules to perform their respective tasks. Furthermore, feedback from one or more agent modules can be used by the automated assistant to dynamically alter tasks in order to more effectively use resources available during completion of the multitask command.
    Type: Grant
    Filed: July 7, 2017
    Date of Patent: February 4, 2020
    Assignee: GOOGLE LLC
    Inventors: Yuzhao Ni, David Schairer
  • Publication number: 20200035239
    Abstract: In one example, a method includes receiving audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance; identifying, based on the audio data, a user that provided the spoken utterance; identifying, based on the audio data, an automation action associated with one or more automation devices, the automation action corresponding to the spoken utterance; determining whether the identified user is authorized to cause performance of the identified automation action; and responsive to determining that the identified user is authorized to cause performance of the identified automation action, causing the one or more automation devices to perform the identified automation action.
    Type: Application
    Filed: October 7, 2019
    Publication date: January 30, 2020
    Inventors: Yuzhao Ni, David Roy Schairer
  • Patent number: 10546023
    Abstract: Generating and/or recommending command bundles for a user of an automated assistant. A command bundle comprises a plurality of discrete actions that can be performed by an automated assistant. One or more of the actions of a command bundle can cause transmission of a corresponding command and/or other data to one or more devices and/or agents that are distinct from devices and/or agents to which data is transmitted based on other action(s) of the bundle. Implementations determine command bundles that are likely relevant to a user, and present those command bundles as suggestions to the user. In some of those implementations, a machine learning model is utilized to generate a user action embedding for the user, and a command bundle embedding for each of a plurality of command bundles. Command bundle(s) can be selected for suggestion based on comparison of the user action embedding and the command bundle embeddings.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: January 28, 2020
    Assignee: GOOGLE LLC
    Inventor: Yuzhao Ni
  • Publication number: 20190361575
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for tailoring composite graphical assistant interfaces for interacting with multiple different connected devices. The composite graphical assistant interfaces can be generated proactively and/or in response to a user providing a request for an automated assistant to cause a connected device to perform a particular function. In response to the automated assistant receiving the request, the automated assistant can identify other connected devices, and other functions capable of being performed by the other connected devices. The other functions can then be mapped to various graphical control elements in order to provide a composite graphical assistant interface from which the user can interact with different connected devices. Each graphical control element can be arranged to reflect how each connected device is operating simultaneously with the presentation of the composite graphical assistant interface.
    Type: Application
    Filed: August 6, 2019
    Publication date: November 28, 2019
    Inventors: Yuzhao Ni, David Roy Schairer
  • Patent number: 10490190
    Abstract: In various implementations, upon receiving a given voice command from a user, a voice-based trigger may be selected from a library of voice-based triggers previously used across a population of users. The library may include association(s) between each voice-based trigger and responsive action(s) previously performed in response to the voice-based trigger. The selecting may be based on a measure of similarity between the given voice command and the selected voice-based trigger. One or more responsive actions associated with the selected voice-based trigger in the library may be determined. Based on the one or more responsive actions, current responsive action(s) may be performed by a target client device selected based on sensor-dependent context. Feedback associated with performance of the current responsive action(s) may be received from the user and used to alter a strength of an association between the selected voice-based trigger and the one or more responsive actions.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: November 26, 2019
    Assignee: GOOGLE LLC
    Inventors: Yuzhao Ni, Bo Wang, Barnaby James, Pravir Gupta, David Schairer
  • Patent number: 10438584
    Abstract: In one example, a method includes receiving audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance; identifying, based on the audio data, a user that provided the spoken utterance; identifying, based on the audio data, an automation action associated with one or more automation devices, the automation action corresponding to the spoken utterance; determining whether the identified user is authorized to cause performance of the identified automation action; and responsive to determining that the identified user is authorized to cause performance of the identified automation action, causing the one or more automation devices to perform the identified automation action.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: October 8, 2019
    Assignee: GOOGLE LLC
    Inventors: Yuzhao Ni, David Roy Schairer
  • Patent number: 10297254
    Abstract: In various implementations, upon receiving a given voice command from a user, a voice-based trigger may be selected from a library of voice-based triggers previously used across a population of users. The library may include association(s) between each voice-based trigger and responsive action(s) previously performed in response to the voice-based trigger. The selecting may be based on a measure of similarity between the given voice command and the selected voice-based trigger. One or more responsive actions associated with the selected voice-based trigger in the library may be determined. Based on the one or more responsive actions, current responsive action(s) may be performed by the client device.
    Type: Grant
    Filed: October 3, 2016
    Date of Patent: May 21, 2019
    Assignee: GOOGLE LLC
    Inventors: Yuzhao Ni, Bo Wang, Barnaby James, Pravir Gupta, David Schairer
  • Publication number: 20190103103
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for using shortcut command phrases to operate an automated assistant. A user of the automated assistant can request that a shortcut command phrase be established for causing the automated assistant to perform a variety of different actions. In this way, the user does not necessarily have to provide an individual command for each action to be performed but, rather, can use a shortcut command phrase to cause the automated assistant to perform the actions. The shortcut command phrases can be used to control peripheral devices, IoT devices, applications, websites, and/or any other apparatuses or processes capable of being controlled through an automated assistant.
    Type: Application
    Filed: October 16, 2017
    Publication date: April 4, 2019
    Inventors: Yuzhao Ni, Lucas Palmer
  • Publication number: 20190102482
    Abstract: Generating and/or recommending command bundles for a user of an automated assistant. A command bundle comprises a plurality of discrete actions that can be performed by an automated assistant. One or more of the actions of a command bundle can cause transmission of a corresponding command and/or other data to one or more devices and/or agents that are distinct from devices and/or agents to which data is transmitted based on other action(s) of the bundle. Implementations determine command bundles that are likely relevant to a user, and present those command bundles as suggestions to the user. In some of those implementations, a machine learning model is utilized to generate a user action embedding for the user, and a command bundle embedding for each of a plurality of command bundles. Command bundle(s) can be selected for suggestion based on comparison of the user action embedding and the command bundle embeddings.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 4, 2019
    Inventor: Yuzhao Ni
  • Publication number: 20190096406
    Abstract: In various implementations, upon receiving a given voice command from a user, a voice-based trigger may be selected from a library of voice-based triggers previously used across a population of users. The library may include association(s) between each voice-based trigger and responsive action(s) previously performed in response to the voice-based trigger. The selecting may be based on a measure of similarity between the given voice command and the selected voice-based trigger. One or more responsive actions associated with the selected voice-based trigger in the library may be determined. Based on the one or more responsive actions, current responsive action(s) may be performed by the client device. Feedback associated with performance of the current responsive action(s) may be received from the user and used to alter a strength of an association between the selected voice-based trigger and the one or more responsive actions.
    Type: Application
    Filed: November 28, 2018
    Publication date: March 28, 2019
    Inventors: Yuzhao Ni, Bo Wang, Barnaby James, Pravir Gupta, David Schairer
  • Publication number: 20190012198
    Abstract: Methods, apparatus, systems, and computer-readable media for engaging an automated assistant to perform multiple tasks through a multitask command. The multitask command can be a command that, when provided by a user, causes the automated assistant to invoke multiple different agent modules for performing tasks to complete the multitask command. During execution of the multitask command, a user can provide input that can be used by one or more agent modules to perform their respective tasks. Furthermore, feedback from one or more agent modules can be used by the automated assistant to dynamically alter tasks in order to more effectively use resources available during completion of the multitask command.
    Type: Application
    Filed: July 7, 2017
    Publication date: January 10, 2019
    Inventors: Yuzhao Ni, David Schairer
  • Publication number: 20180293981
    Abstract: In one example, a method includes receiving audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance; identifying, based on the audio data, a user that provided the spoken utterance; identifying, based on the audio data, an automation action associated with one or more automation devices, the automation action corresponding to the spoken utterance; determining whether the identified user is authorized to cause performance of the identified automation action; and responsive to determining that the identified user is authorized to cause performance of the identified automation action, causing the one or more automation devices to perform the identified automation action.
    Type: Application
    Filed: April 7, 2017
    Publication date: October 11, 2018
    Inventors: Yuzhao Ni, David Roy Schairer
  • Publication number: 20180096681
    Abstract: In various implementations, upon receiving a given voice command from a user, a voice-based trigger may be selected from a library of voice-based triggers previously used across a population of users. The library may include association(s) between each voice-based trigger and responsive action(s) previously performed in response to the voice-based trigger. The selecting may be based on a measure of similarity between the given voice command and the selected voice-based trigger. One or more responsive actions associated with the selected voice-based trigger in the library may be determined. Based on the one or more responsive actions, current responsive action(s) may be performed by the client device. Feedback associated with performance of the current responsive action(s) may be received from the user and used to alter a strength of an association between the selected voice-based trigger and the one or more responsive actions.
    Type: Application
    Filed: October 3, 2016
    Publication date: April 5, 2018
    Inventors: Yuzhao Ni, Bo Wang, Barnaby James, Pravir Gupta, David Schairer
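
Several of the entries above (e.g., publication 20200089709 and patent 10546023) describe recommending command bundles by generating a user action embedding and a per-bundle embedding, then comparing them. The patents do not specify an implementation, and the embeddings there come from a learned machine learning model; purely as an illustration, a minimal sketch of the comparison-and-suggestion step, with made-up bundle names and hand-written vectors standing in for model outputs, might look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def suggest_bundles(user_embedding, bundle_embeddings, top_k=2):
    """Rank command bundles by similarity to the user's action embedding
    and return the names of the top_k most similar bundles."""
    ranked = sorted(
        bundle_embeddings.items(),
        key=lambda item: cosine_similarity(user_embedding, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]

# Hypothetical embeddings; in the patented system these would be
# produced by the machine learning model, not written by hand.
user = [0.9, 0.1, 0.4]
bundles = {
    "good_morning": [0.8, 0.2, 0.5],   # e.g., lights on, news, coffee maker
    "movie_night": [0.1, 0.9, 0.2],    # e.g., dim lights, TV on
    "leaving_home": [0.7, 0.0, 0.6],   # e.g., lock doors, thermostat down
}
print(suggest_bundles(user, bundles))  # → ['good_morning', 'leaving_home']
```

The bundles whose embeddings sit closest to the user's action embedding are surfaced as suggestions; any distance measure over the shared embedding space could play the role cosine similarity plays here.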