Patents by Inventor Behshad Behzadi

Behshad Behzadi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230177272
    Abstract: Implementations set forth herein relate to an automated assistant that operates according to a variety of different location-based biasing modes for rendering responsive content for a user and/or proactively suggesting content for the user. The user can provide condensed spoken utterances to the automated assistant, when the automated assistant is operating according to one or more location-based biasing modes, but nonetheless receive accurate responsive outputs from the automated assistant. A responsive output can be generated by biasing toward a subset of location characteristic data that has been prioritized over other subsets of location characteristic data. The biasing allows the automated assistant to compensate for any details that may be missing from a spoken utterance, while allowing the user to provide shorter spoken utterances, thereby reducing the amount of language processing needed when handling inputs from the user.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 8, 2023
    Inventors: Sharon Stovezky, Yariv Adan, Radu Voroneanu, Behshad Behzadi, Ragnar Groot Koerkamp, Marcin Nowak-Przygodzki
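A minimal sketch of the location-biasing idea described in the entry above, assuming hypothetical names (`Interpretation`, `bias_interpretations`) and an invented weighting scheme; it is illustrative only, not the patented method:

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    text: str            # fully resolved reading of a condensed utterance
    location_tags: set   # location characteristics the reading relies on
    base_score: float    # score from language processing alone

def bias_interpretations(candidates, prioritized_tags, boost=0.5):
    """Re-rank candidate interpretations, boosting those that match the
    prioritized subset of location characteristic data."""
    def biased_score(c):
        overlap = len(c.location_tags & prioritized_tags)
        return c.base_score + boost * overlap
    return sorted(candidates, key=biased_score, reverse=True)

# Example: a condensed utterance like "navigate there" near a transit hub.
candidates = [
    Interpretation("navigate to the airport", {"transit", "airport"}, 0.40),
    Interpretation("navigate to the office", {"work"}, 0.45),
]
ranked = bias_interpretations(candidates, prioritized_tags={"transit", "airport"})
print(ranked[0].text)  # the transit-biased reading wins despite a lower base score
```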
  • Patent number: 11664028
    Abstract: Implementations herein relate to pre-caching data, corresponding to predicted interactions between a user and an automated assistant, using data characterizing previous interactions between the user and the automated assistant. An interaction can be predicted based on details of a current interaction between the user and an automated assistant. One or more predicted interactions can be initialized, and/or any corresponding data pre-cached, prior to the user commanding the automated assistant in furtherance of the predicted interaction. Interaction predictions can be generated using a user-parameterized machine learning model, which can be used when processing input(s) that characterize a recent user interaction with the automated assistant. The predicted interaction(s) can include action(s) to be performed by third-party application(s).
    Type: Grant
    Filed: January 6, 2022
    Date of Patent: May 30, 2023
    Assignee: GOOGLE LLC
    Inventors: Lucas Mirelmann, Zaheed Sabur, Bohdan Vlasyuk, Marie Patriarche Bledowski, Sergey Nazarov, Denis Burakov, Behshad Behzadi, Michael Golikov, Steve Cheng, Daniel Cotting, Mario Bertschler
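A minimal sketch of the pre-caching flow in the entry above, using simple transition counts in place of the user-parameterized machine learning model; all names (`InteractionPredictor`, `precache`) are assumptions for illustration:

```python
from collections import Counter, defaultdict

class InteractionPredictor:
    """Illustrative predictor: count which interaction historically follows
    which, then pre-cache data for the most likely next interaction."""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.cache = {}

    def record(self, previous, current):
        self.transitions[previous][current] += 1

    def predict_next(self, current):
        followers = self.transitions.get(current)
        return followers.most_common(1)[0][0] if followers else None

    def precache(self, current, fetch):
        """Fetch and store data for the predicted next interaction before the
        user asks for it; `fetch` stands in for a third-party application call."""
        predicted = self.predict_next(current)
        if predicted and predicted not in self.cache:
            self.cache[predicted] = fetch(predicted)
        return predicted

predictor = InteractionPredictor()
predictor.record("set_alarm", "check_weather")
predictor.record("set_alarm", "check_weather")
predictor.record("set_alarm", "play_news")
print(predictor.precache("set_alarm", fetch=lambda action: f"cached:{action}"))
```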
  • Publication number: 20230125662
    Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed and/or time-limited according to a timer that can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 27, 2023
    Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
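A minimal sketch of how a timer period could be set from notification characteristics, as the entry above describes; the characteristics and weights below are made up for illustration:

```python
def timer_period_seconds(notification, base=5.0):
    """Illustrative: choose how long the reply window stays open based on
    characteristics of the notification (all weights are assumptions)."""
    period = base
    if notification.get("priority") == "high":
        period += 5.0          # allow more time for important messages
    if notification.get("requires_long_reply"):
        period += 10.0
    if notification.get("sender_is_frequent_contact"):
        period += 3.0
    return period

print(timer_period_seconds({"priority": "high", "sender_is_frequent_contact": True}))
```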
  • Publication number: 20230119561
    Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
    Type: Application
    Filed: November 21, 2022
    Publication date: April 20, 2023
    Applicant: Google LLC
    Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha Van Scheltinga, Quentin Lascombes De Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
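A minimal sketch of the display flow in the entry above, with keyword matching standing in for the unspecified media-content analysis; the catalog and function names are hypothetical:

```python
# Hypothetical catalog of information items keyed by topic keywords.
INFO_ITEMS = {
    "restaurant": "Card: nearby restaurants open now",
    "flight": "Card: flight status lookup",
    "weather": "Card: local weather forecast",
}

def select_information_item(media_text):
    """Pick an information item whose keyword appears in media content
    captured from the communication session."""
    lowered = media_text.lower()
    for keyword, item in INFO_ITEMS.items():
        if keyword in lowered:
            return item
    return None

def send_display_command(device, item):
    # Stand-in for the command sent to one or both devices in the session.
    return {"target": device, "action": "display", "payload": item}

item = select_information_item("Should we book a restaurant after the flight?")
print(send_display_command("first_device", item))
```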
  • Patent number: 11631412
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: April 18, 2023
    Assignee: GOOGLE LLC
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
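A minimal sketch of selecting a per-device subset from a superset of candidate proactive cache entries, as the entry above describes; the scoring terms and profile fields are invented for the sketch:

```python
import heapq

def select_cache_entries(candidates, device_profile, k=3):
    """Illustrative selection: score each candidate entry against the device
    profile and keep the top k for delivery to that client device."""
    def score(entry):
        s = entry.get("global_popularity", 0.0)
        s += 2.0 * len(set(entry.get("topics", [])) & set(device_profile["interests"]))
        if entry.get("requires_screen") and not device_profile["has_screen"]:
            s -= 100.0   # never ship visual-only entries to a screenless device
        return s
    return heapq.nlargest(k, candidates, key=score)

candidates = [
    {"id": "weather_card", "topics": ["weather"], "global_popularity": 0.9},
    {"id": "stock_chart", "topics": ["finance"], "global_popularity": 0.4,
     "requires_screen": True},
    {"id": "commute_eta", "topics": ["commute"], "global_popularity": 0.6},
]
profile = {"interests": ["commute", "weather"], "has_screen": False}
print([e["id"] for e in select_cache_entries(candidates, profile, k=2)])
```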
  • Patent number: 11615124
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating subqueries from a query. In one aspect, a method includes obtaining a query, generating a set of two subqueries from the query, where the set includes a first subquery and a second subquery, determining a quality score for the set of two subqueries, determining whether the quality score for the set of two subqueries satisfies a quality threshold, and in response to determining that the quality score for the set of two subqueries satisfies the quality threshold, providing a first response to the first subquery that is responsive to a first operation that receives the first subquery as input and providing a second response to the second subquery that is responsive to a second operation that receives the second subquery as input.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 28, 2023
    Assignee: Google LLC
    Inventors: Vladimir Vuskovic, Joseph Lange, Behshad Behzadi, Marcin M. Nowak-Przygodzki
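A minimal sketch of the score-then-threshold subquery flow in the entry above; the split strategy and quality heuristic are placeholders, not the patented scoring:

```python
def split_candidates(query):
    """Yield every two-way split of the query's terms (illustrative only)."""
    terms = query.split()
    for i in range(1, len(terms)):
        yield " ".join(terms[:i]), " ".join(terms[i:])

def quality_score(first, second):
    # Made-up heuristic: prefer balanced splits where both halves are substantial.
    return 1.0 - abs(len(first) - len(second)) / max(len(first) + len(second), 1)

def best_subqueries(query, threshold=0.6):
    """Return the best-scoring pair of subqueries if it clears the threshold;
    each subquery would then be passed to its own operation for a response."""
    scored = [(quality_score(a, b), (a, b)) for a, b in split_candidates(query)]
    if not scored:
        return None
    score, pair = max(scored)
    return pair if score >= threshold else None

print(best_subqueries("weather in paris flights to rome"))
```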
  • Publication number: 20230061929
    Abstract: Implementations described herein relate to configuring a dynamic warm word button, that is associated with a client device, with particular assistant commands based on detected occurrences of warm word activation events at the client device. In response to detecting an occurrence of a given warm word activation event at the client device, implementations can determine whether user verification is required for a user that actuated the warm word button. Further, in response to determining that the user verification is required for the user that actuated the warm word button, the user verification can be performed. Moreover, in response to determining that the user that actuated the warm word button has been verified, implementations can cause an automated assistant to perform the particular assistant command associated with the warm word activation event. Audio-based and/or non-audio-based techniques can be utilized to perform the user verification.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 2, 2023
    Inventors: Victor Carbune, Antonio Gaetani, Bastiaan Van Eeckhoudt, Daniel Valcarce, Michael Golikov, Justin Lu, Ondrej Skopek, Nicolo D'Ercole, Zaheed Sabur, Behshad Behzadi, Luv Kothari
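A minimal sketch of the warm-word-button flow in the entry above, with the verification step injected as a callable; the event fields and helper names are assumptions:

```python
def handle_warm_word_button(event, user, run_command, verify_user):
    """Illustrative flow: the button is bound to the assistant command of the
    most recent warm word activation event, and the command only runs after
    any required user verification succeeds."""
    command = event["assistant_command"]          # e.g. "answer_call"
    if event.get("verification_required"):
        if not verify_user(user):                 # audio- or non-audio-based check
            return "verification_failed"
    return run_command(command)

event = {"assistant_command": "answer_call", "verification_required": True}
result = handle_warm_word_button(
    event,
    user="alice",
    run_command=lambda cmd: f"executed:{cmd}",
    verify_user=lambda u: u == "alice",
)
print(result)
```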
  • Publication number: 20230054023
    Abstract: An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; identifying, based on the utterance, a task to be performed by the computational assistant; responsive to determining, by the computational assistant, that complete performance of the task will take more than a threshold amount of time, outputting, for playback by one or more speakers operably connected to the computing device, synthesized voice data that informs a user of the computing device that complete performance of the task will not be immediate; and performing, by the computational assistant, the task.
    Type: Application
    Filed: November 8, 2022
    Publication date: February 23, 2023
    Inventors: Yariv Adan, Vladimir Vuskovic, Behshad Behzadi
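A minimal sketch of the threshold check described in the entry above; the duration estimator and spoken message are placeholders:

```python
def perform_task(task, estimate_seconds, threshold_seconds, speak):
    """Illustrative: if the estimated completion time exceeds the threshold,
    first tell the user completion will not be immediate, then do the task."""
    if estimate_seconds(task) > threshold_seconds:
        speak(f"Working on '{task}'. This will take a little while.")
    return f"done:{task}"

print(perform_task(
    "book a table for four tonight",
    estimate_seconds=lambda t: 45,
    threshold_seconds=10,
    speak=print,
))
```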
  • Publication number: 20230047212
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving a query provided by a user and comprising one or more terms; obtaining context data based on at least a portion of a first resource displayed to the user at a time that the query is received; obtaining a revised query that is based on the query and the context data; receiving a plurality of search results responsive to the revised query; automatically selecting a search result that represents a second resource from the plurality of search results; and providing the second resource for display to the user.
    Type: Application
    Filed: October 31, 2022
    Publication date: February 16, 2023
    Inventors: Gokhan H. Bakir, Behshad Behzadi, Marcin M. Nowak-Przygodzki
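A minimal sketch of the context-based query revision in the entry above; the term-selection heuristic and automatic result pick are naive stand-ins:

```python
def revise_query(query, page_text, max_context_terms=2):
    """Illustrative: append a few salient terms from the displayed resource to
    the user's query (salience here is a simple stop-word filter)."""
    stop = {"the", "a", "of", "and", "in", "to", "is"}
    salient = []
    for w in (w.lower().strip(".,") for w in page_text.split()):
        if w not in stop and w not in query.lower() and w not in salient:
            salient.append(w)
        if len(salient) == max_context_terms:
            break
    return f"{query} {' '.join(salient)}".strip()

def pick_result(results):
    # Stand-in for automatically selecting the result that becomes the
    # second resource; here, simply the top-ranked one.
    return results[0] if results else None

revised = revise_query("opening hours", "Louvre Museum visitor information page")
print(revised)                                   # "opening hours louvre museum"
print(pick_result([revised + " result A", revised + " result B"]))
```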
  • Patent number: 11580181
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for modifying queries based on non-textual content. In one aspect, a method includes receiving, from a user device, a query including a plurality of terms; determining active non-textual data displayed in an application environment on the user device; determining, from the non-textual data, modification data for the query; generating a set of modified queries based on the query and the modification data; scoring the modified queries according to one or more scoring criteria; selecting one of the modified queries based on the scoring; and providing, to the user device, search results responsive to the selected modified query.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: February 14, 2023
    Assignee: GOOGLE LLC
    Inventors: Gokhan H. Bakir, Behshad Behzadi
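A minimal sketch of the modify-score-select loop in the entry above, assuming the non-textual data arrives as image labels; the label source and scoring criterion are invented:

```python
def score_query(q):
    # Made-up criterion: longer, more specific queries score slightly higher.
    return len(q.split())

def modify_queries(query, non_textual_data):
    """Illustrative: derive modification data (labels) from non-textual content
    shown in the app, generate modified queries, score them, and pick one."""
    labels = non_textual_data.get("image_labels", [])   # assumed label source
    modified = [f"{query} {label}" for label in labels] or [query]
    scored = [(score_query(q), q) for q in modified]
    return max(scored)[1]

print(modify_queries("how much does it cost",
                     {"image_labels": ["eiffel tower ticket", "paris"]}))
```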
  • Publication number: 20230041517
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant by, for example, obviating the need to provide an explicit invocation to the automated assistant, such as by saying a hot-word/phrase or performing a specific user input, prior to speaking a command or query. In addition, the automated assistant can optionally receive, understand, and/or respond to the command or query without communicating with a server, thereby further reducing the time in which a response can be provided. Implementations only selectively initiate on-device speech recognition responsive to determining that one or more condition(s) are satisfied. Further, in some implementations, on-device NLU, on-device fulfillment, and/or resulting execution occur only responsive to determining, based on recognized text from the on-device speech recognition, that such further processing should occur.
    Type: Application
    Filed: October 21, 2022
    Publication date: February 9, 2023
    Inventors: Michael Golikov, Zaheed Sabur, Denis Burakov, Behshad Behzadi, Sergey Nazarov, Daniel Cotting, Mario Bertschler, Lucas Mirelmann, Steve Cheng, Bohdan Vlasyuk, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
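A minimal sketch of the two-stage gating described in the entry above: start on-device speech recognition only when conditions hold, and continue to NLU/fulfillment only if the recognized text warrants it. The condition names and the heuristic are assumptions:

```python
def looks_like_assistant_request(text):
    # Stand-in heuristic for the "should further processing occur" decision.
    return text.split()[0] in {"turn", "set", "play", "call"} if text else False

def maybe_run_assistant(signals, recognize, understand, fulfill):
    """Illustrative gating: no hot-word needed, but recognition and further
    processing each run only when their own conditions are satisfied."""
    conditions_met = (
        signals.get("user_facing_device") and
        signals.get("voice_activity") and
        not signals.get("media_playing_loudly")
    )
    if not conditions_met:
        return None
    text = recognize()                       # on-device ASR
    if not looks_like_assistant_request(text):
        return None                          # discard; no NLU or fulfillment
    return fulfill(understand(text))         # on-device NLU + fulfillment

result = maybe_run_assistant(
    {"user_facing_device": True, "voice_activity": True},
    recognize=lambda: "turn on the kitchen lights",
    understand=lambda t: {"intent": "lights_on", "room": "kitchen"},
    fulfill=lambda intent: f"ok:{intent['intent']}",
)
print(result)
```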
  • Patent number: 11574013
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing contextual information to a user. In one aspect, a method includes receiving, from a user device, a query-independent request for contextual information relevant to an active resource displayed in an application environment on the user device, generating multiple queries from displayed content from the resource, determining a quality score for each of the multiple queries, selecting one or more of the multiple queries based on their respective quality scores, and providing, to the user device for each of the selected one or more queries, a respective user interface element for display with the active resource, wherein each user interface element includes contextual information regarding the respective query and includes the respective query.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: February 7, 2023
    Assignee: Google LLC
    Inventors: Michal Jastrzebski, Aurelien Boffy, Gokhan H. Bakir, Behshad Behzadi, Marcin M. Nowak-Przygodzki
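A minimal sketch of the query-independent contextual-information flow in the entry above; the query generator and quality score are naive placeholders:

```python
def contextual_cards(page_text, quality, top_n=2):
    """Illustrative: generate candidate queries from displayed content, score
    them, and build one UI element per selected query."""
    words = page_text.split()
    # Naive candidate queries: consecutive word pairs from the page.
    candidates = [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]
    selected = sorted(candidates, key=quality, reverse=True)[:top_n]
    return [{"query": q, "contextual_info": f"summary for '{q}'"} for q in selected]

cards = contextual_cards(
    "Mount Everest base camp trekking season",
    quality=lambda q: len(q),          # made-up quality score
)
for card in cards:
    print(card)
```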
  • Patent number: 11568146
    Abstract: Implementations set forth herein relate to an automated assistant that operates according to a variety of different location-based biasing modes for rendering responsive content for a user and/or proactively suggesting content for the user. The user can provide condensed spoken utterances to the automated assistant, when the automated assistant is operating according to one or more location-based biasing modes, but nonetheless receive accurate responsive outputs from the automated assistant. A responsive output can be generated by biasing toward a subset of location characteristic data that has been prioritized over other subsets of location characteristic data. The biasing allows the automated assistant to compensate for any details that may be missing from a spoken utterance, while allowing the user to provide shorter spoken utterances, thereby reducing the amount of language processing needed when handling inputs from the user.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: January 31, 2023
    Assignee: GOOGLE LLC
    Inventors: Sharon Stovezky, Yariv Adan, Radu Voroneanu, Behshad Behzadi, Ragnar Groot Koerkamp, Marcin Nowak-Przygodzki
  • Publication number: 20230013581
    Abstract: Techniques are described related to enabling automated assistants to enter into a “conference mode” in which they can “participate” in meetings between multiple human participants and perform various functions described herein. In various implementations, an automated assistant implemented at least in part on conference computing device(s) may be set to a conference mode in which the automated assistant performs speech-to-text processing on multiple distinct spoken utterances, provided by multiple meeting participants, without requiring explicit invocation prior to each utterance. The automated assistant may perform semantic processing on first text generated from the speech-to-text processing of one or more of the spoken utterances, and generate, based on the semantic processing, data that is pertinent to the first text. The data may be output to the participants at conference computing device(s).
    Type: Application
    Filed: September 14, 2022
    Publication date: January 19, 2023
    Inventors: Marcin Nowak-Przygodzki, Jan Lamecki, Behshad Behzadi
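A minimal sketch of the conference-mode loop in the entry above; the helper callables stand in for ASR, semantic processing, and the conference-device output, and are assumptions for the sketch:

```python
def run_conference_mode(utterance_stream, transcribe, find_pertinent_data, display):
    """Illustrative: transcribe every utterance without requiring invocation,
    and surface data pertinent to what was said to the participants."""
    minutes = []
    for audio in utterance_stream:
        text = transcribe(audio)
        minutes.append(text)
        pertinent = find_pertinent_data(text)
        if pertinent:
            display(pertinent)
    return minutes

notes = run_conference_mode(
    utterance_stream=["audio1", "audio2"],
    transcribe=lambda a: {"audio1": "let's meet next Tuesday",
                          "audio2": "action item: send the report"}[a],
    find_pertinent_data=lambda t: "Calendar: Tuesday availability" if "Tuesday" in t else None,
    display=print,
)
print(notes)
```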
  • Patent number: 11545151
    Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed and/or time-limited according to a timer that can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: January 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
  • Publication number: 20220414333
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting a continued conversation are disclosed. In one aspect, a method includes the actions of receiving first audio data of a first utterance. The actions further include obtaining a first transcription of the first utterance. The actions further include receiving second audio data of a second utterance. The actions further include obtaining a second transcription of the second utterance. The actions further include determining whether the second utterance includes a query directed to a query processing system based on analysis of the second transcription and the first transcription or a response to the first query. The actions further include configuring the data routing component to provide the second transcription of the second utterance to the query processing system as a second query or bypass routing the second transcription.
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Inventors: Nathan David Howard, Gabor Simko, Andrei Giurgiu, Behshad Behzadi, Marcin M. Nowak-Przygodzki
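A minimal sketch of the continued-conversation routing decision in the entry above; the overlap heuristic is an assumption, standing in for the analysis of the two transcriptions and the first response:

```python
def route_second_utterance(first_text, first_response, second_text, send_to_query_system):
    """Illustrative: forward the second utterance as a query only if it looks
    directed at the assistant; otherwise bypass routing it."""
    context = set((first_text + " " + first_response).lower().split())
    second = set(second_text.lower().split())
    directed = bool(context & second) or second_text.rstrip().endswith("?")
    if directed:
        return send_to_query_system(second_text)
    return None   # bypass; treat as background speech

print(route_second_utterance(
    first_text="what's the weather tomorrow",
    first_response="Tomorrow will be sunny and 20 degrees",
    second_text="and the day after?",
    send_to_query_system=lambda q: f"query:{q}",
))
```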
  • Patent number: 11521037
    Abstract: An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; identifying, based on the utterance, a task to be performed by the computational assistant; responsive to determining, by the computational assistant, that complete performance of the task will take more than a threshold amount of time, outputting, for playback by one or more speakers operably connected to the computing device, synthesized voice data that informs a user of the computing device that complete performance of the task will not be immediate; and performing, by the computational assistant, the task.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: December 6, 2022
    Assignee: GOOGLE LLC
    Inventors: Yariv Adan, Vladimir Vuskovic, Behshad Behzadi
  • Patent number: 11514035
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining query refinements using search data. In one aspect, a method includes receiving a first query and a second query each comprising one or more n-grams for a user session, determining a first set of query refinements for the first query, determining a second set of query refinements from the first set of query refinements, each query refinement in the second set of query refinements including at least one n-gram that is similar to an n-gram from the first query and at least one n-gram that is similar to an n-gram from the second query, scoring each query refinement in the second set of query refinements, selecting a third query from a group consisting of the second set of query refinements and the second query, and providing the third query as input to a search operation.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: November 29, 2022
    Assignee: GOOGLE LLC
    Inventors: Matthias Heiler, Behshad Behzadi, Evgeny A. Cherepanov, Nils Grimsmo, Aurelien Boffy, Alessandro Agostini, Karoly Csalogany, Fredrik Bergenlid, Marcin M. Nowak-Przygodzki
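A minimal sketch of the two-stage refinement filter in the entry above, with single-word overlap standing in for the n-gram similarity test and a made-up scoring criterion:

```python
def shares_ngram(refinement, query):
    """True if the refinement shares at least one word with the query."""
    return bool(set(refinement.lower().split()) & set(query.lower().split()))

def pick_third_query(first_query, second_query, refinements_for_first, score):
    """Illustrative: keep refinements of the first query that also resemble the
    second query, score them, and fall back to the second query itself."""
    second_set = [r for r in refinements_for_first
                  if shares_ngram(r, first_query) and shares_ngram(r, second_query)]
    candidates = second_set + [second_query]
    return max(candidates, key=score)

print(pick_third_query(
    first_query="hotels in rome",
    second_query="cheap hotels",
    refinements_for_first=["cheap hotels in rome", "rome weather", "hotels near colosseum"],
    score=lambda q: len(q.split()),     # made-up scoring criterion
))
```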
  • Patent number: 11509616
    Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: November 22, 2022
    Assignee: Google LLC
    Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha Van Scheltinga, Quentin Lascombes De Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
  • Publication number: 20220366910
    Abstract: Systems and methods described herein relate to determining whether to incorporate recognized text, that corresponds to a spoken utterance of a user of a client device, into a transcription displayed at the client device, or to cause an assistant command, that is associated with the transcription and that is based on the recognized text, to be performed by an automated assistant implemented by the client device. The spoken utterance is received during a dictation session between the user and the automated assistant. Implementations can process, using automatic speech recognition model(s), audio data that captures the spoken utterance to generate the recognized text. Further, implementations can determine whether to incorporate the recognized text into the transcription or cause the assistant command to be performed based on touch input being directed to the transcription, a state of the transcription, and/or audio-based characteristic(s) of the spoken utterance.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 17, 2022
    Inventors: Victor Carbune, Alvin Abdagic, Behshad Behzadi, Jacopo Sannazzaro Natta, Julia Proskurnia, Krzysztof Andrzej Goj, Srikanth Pandiri, Viesturs Zarins, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
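A minimal sketch of the dictation-versus-command decision in the entry above; the feature names, thresholds, and command phrases are assumptions for illustration:

```python
def handle_dictation_utterance(recognized_text, transcription_state, touch_target,
                               audio_features, run_command):
    """Illustrative: during dictation, either insert the recognized text into
    the transcription or treat it as an assistant command, based on touch
    input, the state of the transcription, and audio-based cues."""
    looks_like_command = (
        touch_target == "send_button" or
        (transcription_state == "complete_sentence" and
         audio_features.get("pause_before_ms", 0) > 500 and
         recognized_text.lower().startswith(("send it", "delete that", "stop dictation")))
    )
    if looks_like_command:
        return run_command(recognized_text)
    return {"action": "insert_text", "text": recognized_text}

print(handle_dictation_utterance(
    recognized_text="send it",
    transcription_state="complete_sentence",
    touch_target=None,
    audio_features={"pause_before_ms": 800},
    run_command=lambda t: {"action": "assistant_command", "command": t},
))
```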