Patents by Inventor Zaheed Sabur

Zaheed Sabur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230125662
    Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed, and/or time-limited according to a timer, which can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 27, 2023
    Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
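
The mechanism in publication 20230125662 (choosing a confirmation-timer period from characteristics of the notification and the user, and optionally bypassing confirmation altogether) can be pictured with a short sketch. This is purely illustrative; the names (`NotificationContext`, `pick_timer_period`, `respond_to_notification`) and the timing rules are invented, not taken from the patent or any Google API.

```python
# Hypothetical sketch of the timer-based notification handling described above;
# all identifiers and the period heuristics are illustrative assumptions.
from dataclasses import dataclass
import time

@dataclass
class NotificationContext:
    app: str                 # source application ("second application")
    priority: str            # e.g. "high", "normal"
    user_is_driving: bool    # example characteristic tied to the user

def pick_timer_period(ctx: NotificationContext) -> float:
    """Choose a confirmation-timer period from notification/user traits."""
    base = 2.0 if ctx.priority == "high" else 1.0
    return base * (2.0 if ctx.user_is_driving else 1.0)

def respond_to_notification(ctx: NotificationContext, reply_text: str,
                            confirm_required: bool) -> str:
    """Send a reply dictated while another app stays in the foreground."""
    if not confirm_required:
        return f"Sent to {ctx.app}: {reply_text!r}"       # confirmation bypassed
    deadline = time.monotonic() + pick_timer_period(ctx)  # countdown shown to user
    # In a real assistant the user could cancel before the deadline;
    # here we simply wait it out and then deliver the reply.
    while time.monotonic() < deadline:
        time.sleep(0.1)
    return f"Sent to {ctx.app} after countdown: {reply_text!r}"

if __name__ == "__main__":
    ctx = NotificationContext(app="chat", priority="high", user_is_driving=False)
    print(respond_to_notification(ctx, "On my way!", confirm_required=True))
```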
  • Publication number: 20230119561
    Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
    Type: Application
    Filed: November 21, 2022
    Publication date: April 20, 2023
    Applicant: Google LLC
    Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha Van Scheltinga, Quentin Lascombes De Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
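
As a rough illustration of the flow in publication 20230119561 (derive an information item from media content exchanged during a session and command one or both endpoints to display it), the following sketch uses an invented keyword table in place of whatever content analysis the claimed method actually performs.

```python
# Illustrative-only sketch: map session media content to an information item
# and issue display commands. All names and the keyword table are invented.
from dataclasses import dataclass

@dataclass
class DisplayCommand:
    target_device: str
    information_item: str

KEYWORD_TO_ITEM = {           # toy stand-in for real content analysis
    "weather": "Local forecast card",
    "restaurant": "Nearby restaurants list",
    "flight": "Flight status card",
}

def determine_information_item(media_transcript: str) -> str | None:
    """Pick an information item based on the session's media content."""
    for keyword, item in KEYWORD_TO_ITEM.items():
        if keyword in media_transcript.lower():
            return item
    return None

def handle_session_media(transcript: str, devices: list[str]) -> list[DisplayCommand]:
    item = determine_information_item(transcript)
    if item is None:
        return []
    # Send the display command to at least one endpoint of the session.
    return [DisplayCommand(target_device=d, information_item=item) for d in devices]

if __name__ == "__main__":
    cmds = handle_session_media("Should we check the weather for Saturday?",
                                ["first_device", "second_device"])
    for c in cmds:
        print(f"show {c.information_item!r} on {c.target_device}")
```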
  • Patent number: 11631412
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: April 18, 2023
    Assignee: GOOGLE LLC
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
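
A minimal sketch of the per-device selection that patent 11631412 describes: from a superset of candidate proactive cache entries, a remote system keeps only a scored subset for a given client device. The scoring heuristics and field names here are assumptions for illustration.

```python
# Sketch of choosing a subset of proactive cache entries for one client device.
# CacheEntry fields and the scoring rules are invented for this example.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    query: str
    prerendered_response: str
    locales: set[str]
    base_score: float

def select_entries_for_device(candidates: list[CacheEntry],
                              device_locale: str,
                              recent_queries: list[str],
                              limit: int = 3) -> list[CacheEntry]:
    """Rank candidates for one client device and keep the top `limit`."""
    def score(entry: CacheEntry) -> float:
        s = entry.base_score
        if device_locale in entry.locales:
            s += 1.0                      # locale match
        if any(entry.query in q for q in recent_queries):
            s += 2.0                      # similar to recent on-device usage
        return s
    eligible = [e for e in candidates if device_locale in e.locales]
    return sorted(eligible, key=score, reverse=True)[:limit]

if __name__ == "__main__":
    superset = [
        CacheEntry("weather today", "Sunny, 24C", {"en-US", "en-GB"}, 0.9),
        CacheEntry("set a timer", "Timer UI payload", {"en-US"}, 0.7),
        CacheEntry("wetter heute", "Sonnig, 24C", {"de-DE"}, 0.9),
    ]
    subset = select_entries_for_device(superset, "en-US", ["weather tomorrow"])
    print([e.query for e in subset])
```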
  • Publication number: 20230061929
    Abstract: Implementations described herein relate to configuring a dynamic warm word button, that is associated with a client device, with particular assistant commands based on detected occurrences of warm word activation events at the client device. In response to detecting an occurrence of a given warm word activation event at the client device, implementations can determine whether user verification is required for a user that actuated the warm word button. Further, in response to determining that the user verification is required for the user that actuated the warm word button, the user verification can be performed. Moreover, in response to determining that the user that actuated the warm word button has been verified, implementations can cause an automated assistant to perform the particular assistant command associated with the warm word activation event. Audio-based and/or non-audio-based techniques can be utilized to perform the user verification.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 2, 2023
    Inventors: Victor Carbune, Antonio Gaetani, Bastiaan Van Eeckhoudt, Daniel Valcarce, Michael Golikov, Justin Lu, Ondrej Skopek, Nicolo D'Ercole, Zaheed Sabur, Behshad Behzadi, Luv Kothari
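
Publication 20230061929's flow (configure a warm word button when an activation event is detected, then gate execution of the associated command on user verification) might look roughly like the sketch below. The event names, commands, and the `voice_match_stub` verifier are hypothetical.

```python
# Minimal sketch of a dynamic warm word button with optional user verification.
# Events, commands, and the verification stub are illustrative assumptions.

class WarmWordButton:
    def __init__(self) -> None:
        self.command: str | None = None
        self.requires_verification: bool = False

    def on_activation_event(self, event: str) -> None:
        """Configure the button when a warm word activation event is detected."""
        if event == "incoming_call":
            self.command, self.requires_verification = "answer_call", False
        elif event == "smart_lock_request":
            self.command, self.requires_verification = "unlock_door", True

    def on_press(self, verify_user) -> str:
        if self.command is None:
            return "button not configured"
        if self.requires_verification and not verify_user():
            return "verification failed; command not executed"
        return f"assistant performs: {self.command}"

def voice_match_stub() -> bool:
    """Stand-in for audio-based (e.g. speaker ID) or non-audio verification."""
    return True

if __name__ == "__main__":
    button = WarmWordButton()
    button.on_activation_event("smart_lock_request")
    print(button.on_press(voice_match_stub))
```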
  • Publication number: 20230041517
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant by, for example, obviating the need to provide an explicit invocation to the automated assistant, such as by saying a hot-word/phrase or performing a specific user input, prior to speaking a command or query. In addition, the automated assistant can optionally receive, understand, and/or respond to the command or query without communicating with a server, thereby further reducing the time in which a response can be provided. Implementations only selectively initiate on-device speech recognition responsive to determining one or more condition(s) are satisfied. Further, in some implementations, on-device NLU, on-device fulfillment, and/or resulting execution occur only responsive to determining, based on recognized text from the on-device speech recognition, that such further processing should occur.
    Type: Application
    Filed: October 21, 2022
    Publication date: February 9, 2023
    Inventors: Michael Golikov, Zaheed Sabur, Denis Burakov, Behshad Behzadi, Sergey Nazarov, Daniel Cotting, Mario Bertschler, Lucas Mirelmann, Steve Cheng, Bohdan Vlasyuk, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
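
The gating logic in publication 20230041517 (start on-device speech recognition only when one or more conditions are satisfied, and run NLU/fulfillment only if the recognized text warrants it) can be sketched as follows; the condition signals and the stub recognizer are invented for the example.

```python
# Sketch of selectively initiating on-device ASR and further on-device
# processing. Signals, thresholds, and the recognizer stub are assumptions.

def conditions_satisfied(screen_on: bool, user_gaze_detected: bool,
                         voice_activity: bool) -> bool:
    """Only start on-device ASR when cheap signals suggest directed speech."""
    return screen_on and user_gaze_detected and voice_activity

def on_device_asr(audio: bytes) -> str:
    return "turn on the kitchen lights"   # stub recognizer output

def looks_like_assistant_request(text: str) -> bool:
    """Decide from recognized text whether NLU/fulfillment should run at all."""
    return any(verb in text for verb in ("turn on", "set", "play", "call"))

def handle_audio(audio: bytes, screen_on: bool, gaze: bool, vad: bool) -> str:
    if not conditions_satisfied(screen_on, gaze, vad):
        return "ASR not started"
    text = on_device_asr(audio)
    if not looks_like_assistant_request(text):
        return "recognized text discarded"
    return f"on-device fulfillment of: {text!r}"

if __name__ == "__main__":
    print(handle_audio(b"...", screen_on=True, gaze=True, vad=True))
```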
  • Patent number: 11553051
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pairing a speech-enabled device with a display device. A determination may be made to pair a speech-enabled device with a display device of a particular type. A set of display devices that are associated with the speech-enabled device may be identified in response to determining to pair the speech-enabled device with the display device of the particular type. An instruction may be provided to each of the display devices. The instruction may cause the display device to determine (i) whether the display device is of the particular type and (ii) whether the display device and the speech-enabled device both share a local area network and display on the display device an indication regarding pairing with the speech-enabled device.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: January 10, 2023
    Assignee: GOOGLE LLC
    Inventors: Zaheed Sabur, Andrea Terwisscha van Scheltinga, Mikhail Reutov, Lucas Mirelmann
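
Patent 11553051's pairing instruction, under which each candidate display device checks its own type and whether it shares a local area network with the speech-enabled device before showing a pairing indication, is sketched below with assumed data structures.

```python
# Rough sketch of the pairing check described in the abstract above.
# The data model and the network comparison are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DisplayDevice:
    name: str
    device_type: str      # e.g. "tv", "smart_display"
    network_id: str

@dataclass
class SpeechDevice:
    name: str
    network_id: str

def handle_pairing_instruction(display: DisplayDevice, speaker: SpeechDevice,
                               wanted_type: str) -> bool:
    """Return True if this display should show a pairing indication."""
    same_type = display.device_type == wanted_type
    same_lan = display.network_id == speaker.network_id
    if same_type and same_lan:
        print(f"{display.name}: show 'Pair with {speaker.name}?' prompt")
        return True
    return False

if __name__ == "__main__":
    speaker = SpeechDevice("kitchen speaker", network_id="lan-42")
    displays = [DisplayDevice("living room TV", "tv", "lan-42"),
                DisplayDevice("office monitor", "tv", "lan-7")]
    for d in displays:
        handle_pairing_instruction(d, speaker, wanted_type="tv")
```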
  • Patent number: 11545151
    Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed, and/or time-limited according to a timer, which can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: January 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
  • Patent number: 11509616
    Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: November 22, 2022
    Assignee: Google LLC
    Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha Van Scheltinga, Quentin Lascombes De Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
  • Publication number: 20220366910
    Abstract: Systems and methods described herein relate to determining whether to incorporate recognized text, that corresponds to a spoken utterance of a user of a client device, into a transcription displayed at the client device, or to cause an assistant command, that is associated with the transcription and that is based on the recognized text, to be performed by an automated assistant implemented by the client device. The spoken utterance is received during a dictation session between the user and the automated assistant. Implementations can process, using automatic speech recognition model(s), audio data that captures the spoken utterance to generate the recognized text. Further, implementations can determine whether to incorporate the recognized text into the transcription or cause the assistant command to be performed based on touch input being directed to the transcription, a state of the transcription, and/or audio-based characteristic(s) of the spoken utterance.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 17, 2022
    Inventors: Victor Carbune, Alvin Abdagic, Behshad Behzadi, Jacopo Sannazzaro Natta, Julia Proskurnia, Krzysztof Andrzej Goj, Srikanth Pandiri, Viesturs Zarins, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
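
Publication 20220366910 hinges on a routing decision during dictation: append the recognized text to the transcription, or treat it as an assistant command, based on touch input, the state of the transcription, and audio-based cues. A toy version of that decision, with an invented command-phrase list, is given below.

```python
# Sketch of routing a recognized utterance during dictation; the command list
# and the pause heuristic are assumptions, not the claimed criteria.
from dataclasses import dataclass

@dataclass
class DictationState:
    transcription: str
    cursor_in_text_field: bool    # touch input directed at the transcription

COMMAND_PHRASES = ("send it", "delete the last sentence", "stop dictation")

def route_utterance(state: DictationState, recognized_text: str,
                    long_pause_before: bool) -> str:
    """Return either the updated transcription or the command to perform."""
    is_command_phrase = recognized_text.lower().strip() in COMMAND_PHRASES
    if is_command_phrase and (long_pause_before or not state.cursor_in_text_field):
        return f"perform assistant command: {recognized_text}"
    state.transcription += " " + recognized_text
    return f"transcription now: {state.transcription!r}"

if __name__ == "__main__":
    st = DictationState("Hi Maria, running late.", cursor_in_text_field=True)
    print(route_utterance(st, "I should be there by noon", long_pause_before=False))
    print(route_utterance(st, "send it", long_pause_before=True))
```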
  • Publication number: 20220366911
    Abstract: Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations—without a user having to expressly identify the arrangement operations. In some instances, a user that is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements.
    Type: Application
    Filed: June 3, 2021
    Publication date: November 17, 2022
    Inventors: Victor Carbune, Krishna Sapkota, Behshad Behzadi, Julia Proskurnia, Jacopo Sannazzaro Natta, Justin Lu, Magali Boizot-Roche, Márius Sajgalík, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
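
For publication 20220366911, the idea is that arrangement operations are inferred from the application type and context rather than dictated explicitly. The sketch below encodes two invented rules (a salutation line break for email, bulleting of short enumerations for notes) to make the concept concrete.

```python
# Sketch only: infer arrangement operations for dictated text from the
# application type and simple contextual cues. The rules are invented.

def arrange_dictated_text(text: str, app_type: str) -> str:
    """Apply inferred arrangement operations to dictated text."""
    if app_type == "email":
        # Greeting followed by body: put the salutation on its own line.
        for greeting in ("hi ", "hello ", "dear "):
            if text.lower().startswith(greeting):
                head, _, rest = text.partition(",")
                return head + ",\n\n" + rest.strip()
    if app_type == "notes" and " and " in text:
        # A short enumeration reads better as a bulleted list.
        items = [i.strip() for i in text.replace(" and ", ",").split(",") if i.strip()]
        if len(items) >= 3:
            return "\n".join(f"- {item}" for item in items)
    return text

if __name__ == "__main__":
    print(arrange_dictated_text("Hi Ana, the report is attached", "email"))
    print(arrange_dictated_text("milk, eggs and coffee", "notes"))
```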
  • Patent number: 11482217
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant by, for example, obviating the need to provide an explicit invocation to the automated assistant, such as by saying a hot-word/phrase or performing a specific user input, prior to speaking a command or query. In addition, the automated assistant can optionally receive, understand, and/or respond to the command or query without communicating with a server, thereby further reducing the time in which a response can be provided. Implementations only selectively initiate on-device speech recognition responsive to determining one or more condition(s) are satisfied. Further, in some implementations, on-device NLU, on-device fulfillment, and/or resulting execution occur only responsive to determining, based on recognized text from the on-device speech recognition, that such further processing should occur.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: October 25, 2022
    Assignee: GOOGLE LLC
    Inventors: Michael Golikov, Zaheed Sabur, Denis Burakov, Behshad Behzadi, Sergey Nazarov, Daniel Cotting, Mario Bertschler, Lucas Mirelmann, Steve Cheng, Bohdan Vlasyuk, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
  • Publication number: 20220253277
    Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. Should the user provide a spoken utterance, the user can elect to provide a spoken utterance that embodies the intended input (e.g., an actual address) or a reference to the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
    Type: Application
    Filed: December 13, 2019
    Publication date: August 11, 2022
    Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
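
Publication 20220253277's choice between a verbatim interpretation and synonymous content can be illustrated as follows; the contact table stands in for whatever resolution the assistant actually performs, and all names are hypothetical.

```python
# Hedged sketch: fill an entry field either verbatim or with resolved
# ("synonymous") content. The contact data and parsing are invented.

CONTACTS = {"mom": "12 Baker Street, London"}   # hypothetical reference data

def fill_entry_field(field_name: str, spoken: str) -> str:
    """Resolve a reference (e.g. a name) to the intended value, else use the
    utterance verbatim."""
    if field_name == "address":
        key = spoken.lower().removeprefix("use ").removesuffix("'s address").strip()
        if key in CONTACTS:
            return CONTACTS[key]         # synonymous content resolved for the user
    return spoken                        # verbatim interpretation

if __name__ == "__main__":
    print(fill_entry_field("address", "use mom's address"))
    print(fill_entry_field("address", "42 Galaxy Way, Zurich"))
```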
  • Publication number: 20220157317
    Abstract: Implementations set forth herein relate to a system that employs an automated assistant to further interactions between a user and another application, which can provide the automated assistant with permission to initialize relevant application actions simultaneous to the user interacting with the other application. Furthermore, the system can allow the automated assistant to initialize actions of different applications, despite the user actively operating a particular application. Available actions can be gleaned by the automated assistant using various application-specific schemas, which can be compared with incoming requests from a user to the automated assistant. Additional data, such as context and historical interactions, can also be used to rank and identify a suitable application action to be initialized via the automated assistant.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Inventors: Denis Burakov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Marcin Nowak-Przygodzki, Mugurel Ionut Andreica, Radu Voroneanu
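
The schema matching and ranking described in publication 20220157317 might be sketched like this, with invented `ActionSchema` fields and a toy context bonus for the foreground application.

```python
# Sketch of comparing a spoken request against application-specific action
# schemas and ranking candidates with context. Fields and scoring are invented.
from dataclasses import dataclass

@dataclass
class ActionSchema:
    app: str
    action: str
    keywords: set[str]

SCHEMAS = [
    ActionSchema("music_app", "play_playlist", {"play", "playlist", "music"}),
    ActionSchema("rides_app", "book_ride", {"ride", "taxi", "pickup"}),
]

def rank_actions(request: str, foreground_app: str) -> list[ActionSchema]:
    words = set(request.lower().split())
    def score(schema: ActionSchema) -> float:
        s = float(len(words & schema.keywords))
        if schema.app == foreground_app:
            s += 0.5      # light preference for the app the user is already in
        return s
    return sorted(SCHEMAS, key=score, reverse=True)

if __name__ == "__main__":
    best = rank_actions("book me a ride home", foreground_app="music_app")[0]
    print(f"assistant initializes {best.action} in {best.app}")
```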
  • Publication number: 20220130385
    Abstract: Implementations herein relate to pre-caching data, corresponding to predicted interactions between a user and an automated assistant, using data characterizing previous interactions between the user and the automated assistant. An interaction can be predicted based on details of a current interaction between the user and an automated assistant. One or more predicted interactions can be initialized, and/or any corresponding data pre-cached, prior to the user commanding the automated assistant in furtherance of the predicted interaction. Interaction predictions can be generated using a user-parameterized machine learning model, which can be used when processing input(s) that characterize a recent user interaction with the automated assistant.
    Type: Application
    Filed: January 6, 2022
    Publication date: April 28, 2022
    Inventors: Lucas Mirelmann, Zaheed Sabur, Bohdan Vlasyuk, Marie Patriarche Bledowski, Sergey Nazarov, Denis Burakov, Behshad Behzadi, Michael Golikov, Steve Cheng, Daniel Cotting, Mario Bertschler
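
Publication 20220130385's proactive pre-caching can be pictured with a small sketch in which a per-user transition table plays the role of the user-parameterized machine learning model named in the abstract; everything here is illustrative.

```python
# Illustration only: predict the next interaction from the current one and
# pre-cache its response before the user asks. Data and names are invented.

NEXT_INTERACTION = {                     # learned per user in the real system
    "weather today": "traffic to work",
    "traffic to work": "play morning news",
}

PRECACHE: dict[str, str] = {}

def fulfill(query: str) -> str:
    return f"<response payload for {query!r}>"

def handle_interaction(query: str) -> str:
    response = PRECACHE.pop(query, None) or fulfill(query)
    predicted = NEXT_INTERACTION.get(query)
    if predicted is not None and predicted not in PRECACHE:
        PRECACHE[predicted] = fulfill(predicted)   # warm the cache proactively
    return response

if __name__ == "__main__":
    handle_interaction("weather today")            # caches "traffic to work"
    print("pre-cached:", list(PRECACHE))
    print(handle_interaction("traffic to work"))   # served from the cache
```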
  • Publication number: 20220130386
    Abstract: Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, in an existing human-to-computer dialog session between a user and an automated assistant, it may be determined that the automated assistant has responded to all natural language input received from the user. Based on characteristic(s) of the user, information of potential interest to the user or action(s) of potential interest to the user may be identified. Unsolicited content indicative of the information of potential interest to the user or the action(s) may be generated and incorporated by the automated assistant into the existing human-to-computer dialog session.
    Type: Application
    Filed: January 10, 2022
    Publication date: April 28, 2022
    Inventors: Ibrahim Badr, Zaheed Sabur, Vladimir Vuskovic, Adrian Zumbrunnen, Lucas Mirelmann
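
A hypothetical rendering of publication 20220130386: once every pending user input has been answered, the assistant may append unsolicited content chosen from characteristics of the user. The interest lookup below is invented example data.

```python
# Sketch of incorporating unsolicited content of potential interest into an
# existing dialog session; the interest table is an illustrative assumption.

USER_INTERESTS = {"alice": ["cycling", "jazz"]}    # invented example data

def all_inputs_answered(pending_inputs: list[str]) -> bool:
    return not pending_inputs

def maybe_add_unsolicited_content(user: str, dialog: list[str],
                                  pending_inputs: list[str]) -> list[str]:
    if all_inputs_answered(pending_inputs):
        interests = USER_INTERESTS.get(user, [])
        if interests:
            dialog.append(f"Assistant: By the way, there is a {interests[0]} "
                          "event nearby this weekend. Want details?")
    return dialog

if __name__ == "__main__":
    dialog = ["User: what's the weather?", "Assistant: Sunny, 24C."]
    print("\n".join(maybe_add_unsolicited_content("alice", dialog, pending_inputs=[])))
```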
  • Publication number: 20220108696
    Abstract: Determining whether, upon cessation of a second automated assistant session that interrupted and supplanted a prior first automated assistant session: (1) to automatically resume the prior first automated assistant session, or (2) to transition to an alternative automated assistant state in which the prior first session is not automatically resumed. Implementations further relate to selectively causing, based on the determining and upon cessation of the second automated assistant session, either the automatic resumption of the prior first automated assistant session that was interrupted, or the transition to the state in which the first session is not automatically resumed.
    Type: Application
    Filed: December 16, 2021
    Publication date: April 7, 2022
    Inventors: Andrea Terwisscha van Scheltinga, Nicolo D'Ercole, Zaheed Sabur, Bibo Xu, Megan Knight, Alvin Abdagic, Jan Lamecki, Bo Zhang
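
The resume-or-not decision in publication 20220108696 reduces to a policy applied when the interrupting second session ends. The heuristics below (interactivity of the first session, time since interruption) are assumptions, not the claimed criteria.

```python
# Sketch of deciding whether to auto-resume an interrupted first session once
# the second session ceases. The policy inputs are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Session:
    content: str
    interactive: bool       # e.g. the user was mid-task (recipe step, form)
    seconds_since_interrupt: float

def on_second_session_end(first: Session) -> str:
    """Return the UI state to transition to when the second session ceases."""
    if first.interactive and first.seconds_since_interrupt < 120:
        return f"resume first session: {first.content!r}"
    return "show home/ambient state (first session not auto-resumed)"

if __name__ == "__main__":
    print(on_second_session_end(Session("recipe step 3 of 7", True, 45.0)))
    print(on_second_session_end(Session("weather card", False, 300.0)))
```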
  • Publication number: 20220060804
    Abstract: A first computing device may receive an indication of user input that is at least a part of a conversation between a user and a first assistant executing at the first computing device. The first assistant and/or an assistant executing at a digital assistant system may determine whether to hand off the conversation from the first assistant executing at the first computing device to a second assistant executing at a second computing device. In response to determining to hand off the conversation to the second assistant executing at the second computing device, the first assistant and/or the assistant executing at the digital assistant system may send to the second computing device a request to hand off the conversation which includes at least an indication of the conversation.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 24, 2022
    Inventors: Andrea Terwisscha van Scheltinga, Zaheed Sabur, Michael Reutov, Pratik Gilda
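
Publication 20220060804 describes handing a conversation off to a second assistant along with an indication of the conversation itself. A compact sketch, with assumed device capabilities and an invented decision rule, follows.

```python
# Sketch of a conversation handoff between assistant devices; the capability
# model and the decision rule are assumptions for this example.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    turns: list[str] = field(default_factory=list)

@dataclass
class AssistantDevice:
    name: str
    has_screen: bool

def should_handoff(user_request: str, current: AssistantDevice) -> bool:
    needs_screen = any(w in user_request for w in ("show", "map", "photos"))
    return needs_screen and not current.has_screen

def handoff(conversation: Conversation, target: AssistantDevice) -> str:
    # The request includes an indication of the conversation so the second
    # assistant can continue where the first left off.
    payload = {"target": target.name, "conversation": conversation.turns}
    return f"handoff request -> {payload}"

if __name__ == "__main__":
    convo = Conversation(["User: find a pizzeria", "Assistant: found three nearby"])
    speaker = AssistantDevice("kitchen speaker", has_screen=False)
    display = AssistantDevice("hallway display", has_screen=True)
    if should_handoff("show them on a map", speaker):
        print(handoff(convo, display))
```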
  • Publication number: 20220059093
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Application
    Filed: November 8, 2021
    Publication date: February 24, 2022
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Patent number: 11238868
    Abstract: Implementations set forth herein relate to a system that employs an automated assistant to further interactions between a user and another application, which can provide the automated assistant with permission to initialize relevant application actions simultaneous to the user interacting with the other application. Furthermore, the system can allow the automated assistant to initialize actions of different applications, despite the user actively operating a particular application. Available actions can be gleaned by the automated assistant using various application-specific schemas, which can be compared with incoming requests from a user to the automated assistant. Additional data, such as context and historical interactions, can also be used to rank and identify a suitable application action to be initialized via the automated assistant.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Denis Burakov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Marcin Nowak-Przygodzki, Mugurel Ionut Andreica, Radu Voroneanu
  • Patent number: 11232792
    Abstract: Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, in an existing human-to-computer dialog session between a user and an automated assistant, it may be determined that the automated assistant has responded to all natural language input received from the user. Based on characteristic(s) of the user, information of potential interest to the user or action(s) of potential interest to the user may be identified. Unsolicited content indicative of the information of potential interest to the user or the action(s) may be generated and incorporated by the automated assistant into the existing human-to-computer dialog session.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: January 25, 2022
    Assignee: Google LLC
    Inventors: Ibrahim Badr, Zaheed Sabur, Vladimir Vuskovic, Adrian Zumbrunnen, Lucas Mirelmann