Patents by Inventor Zaheed Sabur
Zaheed Sabur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12243526
Abstract: Determining whether, upon cessation of a second automated assistant session that interrupted and supplanted a prior first automated assistant session: (1) to automatically resume the prior first automated assistant session, or (2) to transition to an alternative automated assistant state in which the prior first session is not automatically resumed. Implementations further relate to selectively causing, based on the determining and upon cessation of the second automated assistant session, either the automatic resumption of the prior first automated assistant session that was interrupted, or the transition to the state in which the first session is not automatically resumed.
Type: Grant
Filed: August 28, 2023
Date of Patent: March 4, 2025
Assignee: GRAY ICE HIGDON
Inventors: Andrea Terwisscha van Scheltinga, Nicolo D'Ercole, Zaheed Sabur, Bibo Xu, Megan Knight, Alvin Abdagic, Jan Lamecki, Bo Zhang
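The abstract above describes a binary decision made when an interrupting second session ends: resume the first session, or move to an alternative state. As a minimal illustrative sketch only (not the patented method), such a policy could weigh the interrupted session's content type and how long it has been suspended; the content types, the 120-second threshold, and the field names below are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Session:
    content_type: str           # e.g. "recipe", "music", "timer"
    seconds_interrupted: float  # how long the session has been supplanted

# Assumed policy knobs: which content survives an interruption, and for how long.
DURABLE_TYPES = {"recipe", "shopping_list", "navigation"}
MAX_RESUME_AGE_S = 120.0

def should_auto_resume(first: Session) -> bool:
    """On cessation of the interrupting second session, decide whether to
    automatically resume the prior first session (True) or transition to
    an alternative state such as a home screen (False)."""
    if first.content_type not in DURABLE_TYPES:
        return False
    return first.seconds_interrupted <= MAX_RESUME_AGE_S
```

Under this toy policy, a recipe interrupted for 30 seconds resumes automatically, while transient content (music) or a long-stale session does not.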
-
Patent number: 12243533
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Grant
Filed: April 1, 2024
Date of Patent: March 4, 2025
Assignee: GOOGLE LLC
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
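The selection step this abstract describes (a remote system picking, from a superset of candidate cache entries, a subset for one client device) can be sketched as a simple scoring pass. This is a toy sketch under stated assumptions: the tag fields, overlap scoring, and top-k cutoff are inventions for illustration, not the patented technique:

```python
def select_cache_entries(candidates, device_context, k=3):
    """Pick, from a superset of candidate proactive cache entries, the
    subset worth pushing to one client device, by scoring each entry's
    tag overlap against that device's context signals."""
    def score(entry):
        return len(set(entry["tags"]) & set(device_context))
    ranked = sorted(candidates, key=score, reverse=True)
    # Keep only the top-k entries that matched the device at all.
    return [e["id"] for e in ranked[:k] if score(e) > 0]
```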
-
Patent number: 12230272
Abstract: Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, in an existing human-to-computer dialog session between a user and an automated assistant, it may be determined that the automated assistant has responded to all natural language input received from the user. Based on characteristic(s) of the user, information of potential interest to the user or action(s) of potential interest to the user may be identified. Unsolicited content indicative of the information of potential interest to the user or the action(s) may be generated and incorporated by the automated assistant into the existing human-to-computer dialog session.
Type: Grant
Filed: December 14, 2023
Date of Patent: February 18, 2025
Assignee: GOOGLE LLC
Inventors: Ibrahim Badr, Zaheed Sabur, Vladimir Vuskovic, Adrian Zumbrunnen, Lucas Mirelmann
-
DYNAMICALLY DELAYING EXECUTION OF AUTOMATED ASSISTANT ACTIONS AND/OR BACKGROUND APPLICATION REQUESTS
Publication number: 20240420697
Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed, and/or time-limited according to a timer, which can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
Type: Application
Filed: August 26, 2024
Publication date: December 19, 2024
Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
-
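The abstract says the confirmation timer's period is set from characteristics of the notification and the user. A hypothetical sketch of such a heuristic follows; the category table, default delay, and cancel-rate scaling are assumptions made up for illustration, not values from the publication:

```python
# Assumed base delays per notification category, in seconds.
BASE_DELAY = {"message": 5.0, "reminder": 3.0, "promo": 10.0}

def confirmation_timer_period(category: str, user_cancel_rate: float) -> float:
    """Length of the countdown shown before the assistant executes a
    responsive action on a notification. Users who historically cancel
    such actions often get a longer window to intervene."""
    base = BASE_DELAY.get(category, 5.0)  # fall back to a default delay
    return base * (1.0 + user_cancel_rate)
```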
Publication number: 20240420699
Abstract: Systems and methods described herein relate to determining whether to incorporate recognized text, that corresponds to a spoken utterance of a user of a client device, into a transcription displayed at the client device, or to cause an assistant command, that is associated with the transcription and that is based on the recognized text, to be performed by an automated assistant implemented by the client device. The spoken utterance is received during a dictation session between the user and the automated assistant. Implementations can process, using automatic speech recognition model(s), audio data that captures the spoken utterance to generate the recognized text. Further, implementations can determine whether to incorporate the recognized text into the transcription or cause the assistant command to be performed based on touch input being directed to the transcription, a state of the transcription, and/or audio-based characteristic(s) of the spoken utterance.
Type: Application
Filed: August 26, 2024
Publication date: December 19, 2024
Inventors: Victor Carbune, Alvin Abdagic, Behshad Behzadi, Jacopo Sannazzaro Natta, Julia Proskurnia, Krzysztof Andrzej Goj, Srikanth Pandiri, Viesturs Zarins, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
-
Publication number: 20240411771
Abstract: Implementations include actions of obtaining a set of entities based on one or more terms of a query, obtaining one or more entities associated with each live event of a plurality of live events, identifying a live event that is responsive to the query based on comparing at least one entity in the set of entities to one or more entities associated with each live event of a plurality of live events, determining that an event search result corresponding to the live event is to be displayed in search results, and in response: providing the event search result for display, the event search result including information associated with the live event, the information including an indicator of an occurrence of the live event.
Type: Application
Filed: June 14, 2024
Publication date: December 12, 2024
Inventors: Tilke Mary Judd, Zaheed Sabur, Eduardo Jodas Samper, Alexandru Ovidiu Dovlecel, Ardan Arac
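The compare-and-match step described above (query entities against each live event's entities) reduces to a set-intersection test. A minimal sketch, assuming a simple first-match policy and made-up entity and event names; the publication's actual matching and ranking is not specified here:

```python
def find_live_event(query_entities, live_events):
    """Return the name of the first live event that shares at least one
    entity with the query's entity set, or None if no event matches."""
    qset = set(query_entities)
    for event in live_events:
        if qset & set(event["entities"]):  # any entity in common
            return event["name"]
    return None
```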
-
Publication number: 20240411513
Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. Should the user provide a spoken utterance, the user can elect to provide a spoken utterance that embodies the intended input (e.g., an actual address) or a reference to the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
Type: Application
Filed: August 19, 2024
Publication date: December 12, 2024
Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
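The verbatim-versus-reference choice in this abstract (fill the field with what was said, or with what the name refers to) can be sketched as a lookup with a verbatim fallback. The reference table and its contents are hypothetical, for illustration only:

```python
# Assumed lookup of references to their intended values (e.g. a contact
# name resolving to an address).
REFERENCES = {"mom": "12 Main Street, Springfield"}

def fill_entry_field(utterance: str) -> str:
    """Fill an entry field with the resolved value when the utterance is
    a known reference; otherwise use the utterance verbatim."""
    return REFERENCES.get(utterance.strip().lower(), utterance)
```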
-
Publication number: 20240355327
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Application
Filed: April 1, 2024
Publication date: October 24, 2024
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
-
Patent number: 12106759
Abstract: Implementations set forth herein relate to a system that employs an automated assistant to further interactions between a user and another application, which can provide the automated assistant with permission to initialize relevant application actions simultaneous to the user interacting with the other application. Furthermore, the system can allow the automated assistant to initialize actions of different applications, despite being actively operating a particular application. Available actions can be gleaned by the automated assistant using various application-specific schemas, which can be compared with incoming requests from a user to the automated assistant. Additional data, such as context and historical interactions, can also be used to rank and identify a suitable application action to be initialized via the automated assistant.
Type: Grant
Filed: January 31, 2022
Date of Patent: October 1, 2024
Assignee: GOOGLE LLC
Inventors: Denis Burakov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Marcin Nowak-Przygodzki, Mugurel Ionut Andreica, Radu Voroneanu
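This abstract describes comparing user requests against application-specific schemas and using context and history to rank candidate actions. A toy sketch of that ranking, with term overlap standing in for request matching and a per-app boost standing in for contextual signals; the schema shape and scoring are assumptions, not the granted claims:

```python
def rank_actions(request_terms, schemas, context_boost):
    """Rank schema-declared application actions against a user request:
    score each action by keyword overlap with the request, plus an
    optional per-app boost derived from context or interaction history,
    and return action names best-first."""
    def score(action):
        overlap = len(set(request_terms) & set(action["keywords"]))
        return overlap + context_boost.get(action["app"], 0.0)
    return [a["action"] for a in sorted(schemas, key=score, reverse=True)]
```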
-
Patent number: 12106758
Abstract: Systems and methods described herein relate to determining whether to incorporate recognized text, that corresponds to a spoken utterance of a user of a client device, into a transcription displayed at the client device, or to cause an assistant command, that is associated with the transcription and that is based on the recognized text, to be performed by an automated assistant implemented by the client device. The spoken utterance is received during a dictation session between the user and the automated assistant. Implementations can process, using automatic speech recognition model(s), audio data that captures the spoken utterance to generate the recognized text. Further, implementations can determine whether to incorporate the recognized text into the transcription or cause the assistant command to be performed based on touch input being directed to the transcription, a state of the transcription, and/or audio-based characteristic(s) of the spoken utterance.
Type: Grant
Filed: May 17, 2021
Date of Patent: October 1, 2024
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Alvin Abdagic, Behshad Behzadi, Jacopo Sannazzaro Natta, Julia Proskurnia, Krzysztof Andrzej Goj, Srikanth Pandiri, Viesturs Zarins, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
-
Publication number: 20240321277
Abstract: Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations—without a user having to expressly identify the arrangement operations. In some instances, a user that is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements.
Type: Application
Filed: May 29, 2024
Publication date: September 26, 2024
Inventors: Victor Carbune, Krishna Sapkota, Behshad Behzadi, Julia Proskurnia, Jacopo Sannazzaro Natta, Justin Lu, Magali Boizot-Roche, Marius Sajgalik, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
-
Publication number: 20240314094
Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
Type: Application
Filed: May 21, 2024
Publication date: September 19, 2024
Applicant: Google LLC
Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha Van Scheltinga, Quentin Lascombes De Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
-
Publication number: 20240312461
Abstract: Implementations relate to receiving natural language input that requests an automated assistant to provide information and processing the natural language input to identify the requested information and to identify one or more predicted actions. Those implementations further cause a computing device, at which the natural language input is received, to render the requested information and the one or more predicted actions in response to the natural language input. Yet further, those implementations, in response to the user confirming a rendered predicted action, cause the automated assistant to initialize the predicted action.
Type: Application
Filed: May 24, 2024
Publication date: September 19, 2024
Inventors: Lucas Mirelmann, Zaheed Sabur, Bohdan Vlasyuk, Marie Patriarche Bledowski, Sergey Nazarov, Denis Burakov, Behshad Behzadi, Michael Golikov, Steve Cheng, Daniel Cotting, Mario Bertschler
-
Patent number: 12093609
Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. Should the user provide a spoken utterance, the user can elect to provide a spoken utterance that embodies the intended input (e.g., an actual address) or a reference to the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
Type: Grant
Filed: November 9, 2023
Date of Patent: September 17, 2024
Assignee: GOOGLE LLC
Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
-
Dynamically delaying execution of automated assistant actions and/or background application requests
Patent number: 12073835
Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed, and/or time-limited according to a timer, which can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
Type: Grant
Filed: September 1, 2023
Date of Patent: August 27, 2024
Assignee: GOOGLE LLC
Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
-
Publication number: 20240272970
Abstract: Implementations set forth herein relate to an automated assistant that can be invoked while a user is interfacing with a foreground application in order to retrieve data from one or more different applications, and then provide the retrieved data to the foreground application. A user can invoke the automated assistant while operating the foreground application by providing a spoken utterance, and the automated assistant can select one or more other applications to query based on content of the spoken utterance. Application data collected by the automated assistant from the one or more other applications can then be used to provide an input to the foreground application. In this way, the user can bypass switching between applications in the foreground in order to retrieve data that has been generated by other applications.
Type: Application
Filed: April 29, 2024
Publication date: August 15, 2024
Inventors: Bohdan Vlasyuk, Behshad Behzadi, Mario Bertschler, Denis Burakov, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
-
Patent number: 12045248
Abstract: Implementations include actions of obtaining a set of entities based on one or more terms of a query, obtaining one or more entities associated with each live event of a plurality of live events, identifying a live event that is responsive to the query based on comparing at least one entity in the set of entities to one or more entities associated with each live event of a plurality of live events, determining that an event search result corresponding to the live event is to be displayed in search results, and in response: providing the event search result for display, the event search result including information associated with the live event, the information including an indicator of an occurrence of the live event.
Type: Grant
Filed: January 8, 2021
Date of Patent: July 23, 2024
Assignee: GOOGLE LLC
Inventors: Tilke Mary Judd, Zaheed Sabur, Eduardo Jodas Samper, Alexandru Ovidiu Dovlecel, Ardan Arac
-
Patent number: 12033637
Abstract: Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations—without a user having to expressly identify the arrangement operations. In some instances, a user that is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements.
Type: Grant
Filed: June 3, 2021
Date of Patent: July 9, 2024
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Krishna Sapkota, Behshad Behzadi, Julia Proskurnia, Jacopo Sannazzaro Natta, Justin Lu, Magali Boizot-Roche, Márius Šajgalík, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
-
Patent number: 12028302
Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
Type: Grant
Filed: June 28, 2023
Date of Patent: July 2, 2024
Assignee: Google LLC
Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha Van Scheltinga, Quentin Lascombes De Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
-
Publication number: 20240185857
Abstract: Implementations set forth herein relate to a system that employs an automated assistant to further interactions between a user and another application, which can provide the automated assistant with permission to initialize relevant application actions simultaneous to the user interacting with the other application. Furthermore, the system can allow the automated assistant to initialize actions of different applications, despite being actively operating a particular application. Available actions can be gleaned by the automated assistant using various application-specific schemas, which can be compared with incoming requests from a user to the automated assistant. Additional data, such as context and historical interactions, can also be used to rank and identify a suitable application action to be initialized via the automated assistant.
Type: Application
Filed: February 12, 2024
Publication date: June 6, 2024
Inventors: Denis Burakov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Marcin Nowak-Przygodzki, Mugurel Ionut Andreica, Radu Voroneanu