Patents by Inventor Denis Burakov

Denis Burakov has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220059093
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Application
    Filed: November 8, 2021
    Publication date: February 24, 2022
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
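
A minimal Python sketch of the selection step this abstract describes (the same abstract also appears below under patent 11170777 and publication 20210074286): a remote system picks, from a superset of candidate proactive cache entries, the subset to provide to a given client device. The per-entry relevance score, the storage budget, and the greedy policy are illustrative assumptions, not the disclosed method.

```python
from dataclasses import dataclass


@dataclass
class CacheEntry:
    """A candidate proactive assistant cache entry (fields are illustrative)."""
    query: str          # assistant request the entry answers
    response: str       # pre-computed response payload
    size_bytes: int     # storage cost on the client device
    relevance: float    # hypothetical per-device relevance score


def select_entries(candidates: list[CacheEntry], budget_bytes: int) -> list[CacheEntry]:
    """Pick the subset of candidate entries to provide to one client device.

    Greedy sketch: take the highest-relevance entries that still fit within the
    device's storage budget, illustrating only the superset-to-subset step.
    """
    chosen, used = [], 0
    for entry in sorted(candidates, key=lambda e: e.relevance, reverse=True):
        if used + entry.size_bytes <= budget_bytes:
            chosen.append(entry)
            used += entry.size_bytes
    return chosen
```
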
  • Patent number: 11238868
    Abstract: Implementations set forth herein relate to a system that employs an automated assistant to further interactions between a user and another application, which can grant the automated assistant permission to initialize relevant application actions while the user is interacting with that other application. Furthermore, the system can allow the automated assistant to initialize actions of other applications even while a particular application is being actively operated. Available actions can be gleaned by the automated assistant from various application-specific schemas, which can be compared with incoming user requests to the automated assistant. Additional data, such as context and historical interactions, can also be used to rank and identify a suitable application action to initialize via the automated assistant.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Denis Burakov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Marcin Nowak-Przygodzki, Mugurel Ionut Andreica, Radu Voroneanu
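
Patent 11238868 (its publication 20200395018 appears below) describes comparing incoming assistant requests against application-specific schemas and using context and interaction history to rank candidate actions. The Python sketch below illustrates that matching-and-ranking step under stated assumptions; the schema fields, keyword overlap, and scoring weights are invented for illustration, not the claimed method.

```python
from dataclasses import dataclass, field


@dataclass
class ActionSchema:
    """An action exposed by an application through its schema (illustrative fields)."""
    app: str
    action: str
    keywords: set[str] = field(default_factory=set)


def rank_actions(utterance: str, schemas: list[ActionSchema],
                 recent_apps: list[str]) -> list[ActionSchema]:
    """Rank candidate application actions for an incoming assistant request."""
    words = set(utterance.lower().split())

    def score(schema: ActionSchema) -> float:
        overlap = len(words & schema.keywords)               # textual match against the schema
        context = 1.0 if schema.app in recent_apps else 0.0  # boost recently used applications
        return overlap + 0.5 * context

    return sorted(schemas, key=score, reverse=True)
```

For example, the request "play some jazz" would rank a music application's play action first when that application was recently in use.
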
  • Patent number: 11222637
    Abstract: Implementations herein relate to pre-caching data, corresponding to predicted interactions between a user and an automated assistant, using data characterizing previous interactions between the user and the automated assistant. An interaction can be predicted based on details of a current interaction between the user and an automated assistant. One or more predicted interactions can be initialized, and/or any corresponding data pre-cached, prior to the user commanding the automated assistant in furtherance of the predicted interaction. Interaction predictions can be generated using a user-parameterized machine learning model, which can be used when processing input(s) that characterize a recent user interaction with the automated assistant.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: January 11, 2022
    Assignee: Google LLC
    Inventors: Lucas Mirelmann, Zaheed Sabur, Bohdan Vlasyuk, Marie Patriarche Bledowski, Sergey Nazarov, Denis Burakov, Behshad Behzadi, Michael Golikov, Steve Cheng, Daniel Cotting, Mario Bertschler
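
Patent 11222637 (publication 20200357395 below shares this abstract) predicts a likely next interaction from the current one and pre-caches the corresponding data before the user asks. In the sketch below, a trivial per-user lookup table stands in for the user-parameterized machine learning model named in the abstract; every name here is illustrative.

```python
from collections import Counter, defaultdict
from typing import Callable, Optional


class InteractionPredictor:
    """Per-user lookup table standing in for the user-parameterized model."""

    def __init__(self) -> None:
        # (user_id, current_request) -> counts of the request that followed it
        self._history = defaultdict(Counter)

    def record(self, user_id: str, request: str, next_request: str) -> None:
        self._history[(user_id, request)][next_request] += 1

    def predict_next(self, user_id: str, request: str) -> Optional[str]:
        counts = self._history.get((user_id, request))
        if not counts:
            return None
        next_request, _ = counts.most_common(1)[0]
        return next_request


def precache(predictor: InteractionPredictor, cache: dict, user_id: str,
             current_request: str, fulfill: Callable[[str], str]) -> None:
    """Pre-compute and cache the response for the predicted next request, if any."""
    predicted = predictor.predict_next(user_id, current_request)
    if predicted and predicted not in cache:
        cache[predicted] = fulfill(predicted)
```
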
  • Patent number: 11170777
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: November 9, 2021
    Assignee: Google LLC
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Publication number: 20210335356
    Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing intervening steps that can otherwise arise. Such intervening steps can include providing a user confirmation, which can be bypassed or time-limited according to a timer that is displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics associated with the notification, the user, and/or any other information associated with the user receiving the notification.
    Type: Application
    Filed: June 5, 2019
    Publication date: October 28, 2021
    Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
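
Publication 20210335356 sets the period of the confirmation timer from characteristics associated with the notification and the user. A minimal sketch of such a period calculation, with invented characteristics and durations:

```python
def confirmation_timeout_seconds(notification: dict) -> float:
    """Choose how long the cancel window stays open before the reply is sent."""
    timeout = 3.0                                       # baseline cancel window
    if notification.get("category") == "payment":       # higher-stakes actions get more time
        timeout += 4.0
    if notification.get("sender_is_frequent_contact"):  # familiar senders need less review
        timeout -= 1.0
    return max(timeout, 1.0)


# Example: a payment notification from an unknown sender keeps a 7-second window
# open, while a chat message from a frequent contact uses 2 seconds.
```
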
  • Publication number: 20210216384
    Abstract: Implementations set forth herein relate to an automated assistant that can be invoked while a user is interfacing with a foreground application in order to retrieve data from one or more different applications, and then provide the retrieved data to the foreground application. A user can invoke the automated assistant while operating the foreground application by providing a spoken utterance, and the automated assistant can select one or more other applications to query based on content of the spoken utterance. Application data collected by the automated assistant from the one or more other applications can then be used to provide an input to the foreground application. In this way, the user can bypass switching between applications in the foreground in order to retrieve data that has been generated by other applications.
    Type: Application
    Filed: August 6, 2019
    Publication date: July 15, 2021
    Inventors: Bohdan Vlasyuk, Behshad Behzadi, Mario Bertschler, Denis Burakov, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
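
Publication 20210216384 has the assistant choose another application to query based on the content of a spoken utterance and then feed the retrieved data to the foreground application. A hedged sketch of that route-query-and-fill flow; the provider table, keyword routing, and insert_text callback are assumptions made for illustration.

```python
from typing import Callable, Optional

# Hypothetical data providers exposed by other installed applications.
PROVIDERS: dict[str, Callable[[str], str]] = {
    "calendar": lambda query: "Dinner with Alex, 7 pm",
    "weather": lambda query: "Sunny, 22 C",
}


def route_query(utterance: str) -> Optional[str]:
    """Choose which other application to query from the utterance content."""
    text = utterance.lower()
    if "calendar" in text or "appointment" in text:
        return "calendar"
    if "weather" in text or "forecast" in text:
        return "weather"
    return None


def fill_foreground_field(utterance: str, insert_text: Callable[[str], None]) -> bool:
    """Fetch data from the selected application and hand it to the foreground app."""
    app = route_query(utterance)
    if app is None:
        return False
    insert_text(PROVIDERS[app](utterance))
    return True
```
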
  • Publication number: 20210074285
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant by, for example, obviating the need to provide an explicit invocation to the automated assistant, such as by saying a hot-word/phrase or performing a specific user input, prior to speaking a command or query. In addition, the automated assistant can optionally receive, understand, and/or respond to the command or query without communicating with a server, thereby further reducing the time in which a response can be provided. Implementations selectively initiate on-device speech recognition only responsive to determining that one or more condition(s) are satisfied. Further, in some implementations, on-device NLU, on-device fulfillment, and/or resulting execution occur only responsive to determining, based on recognized text from the on-device speech recognition, that such further processing should occur.
    Type: Application
    Filed: May 31, 2019
    Publication date: March 11, 2021
    Inventors: Michael Golikov, Zaheed Sabur, Denis Burakov, Behshad Behzadi, Sergey Nazarov, Daniel Cotting, Mario Bertschler, Lucas Mirelmann, Steve Cheng, Bohdan Vlasyuk, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
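
Publication 20210074285 starts on-device speech recognition only when one or more conditions are satisfied, and runs on-device NLU and fulfillment only when the recognized text warrants further processing. The sketch below shows such staged gating; the specific conditions and the directedness check are assumptions, not the disclosed criteria.

```python
from typing import Any, Callable, Optional


def should_start_recognition(screen_on: bool, user_facing_device: bool,
                             recent_interaction: bool) -> bool:
    """Gate for starting on-device speech recognition (conditions are illustrative)."""
    return screen_on and (user_facing_device or recent_interaction)


def maybe_handle_utterance(audio: bytes,
                           recognize: Callable[[bytes], str],
                           nlu: Callable[[str], Any],
                           fulfill: Callable[[Any], str]) -> Optional[str]:
    """Run the on-device pipeline stage by stage, stopping early when a gate fails."""
    text = recognize(audio)                  # on-device speech recognition
    if not text or not text.lower().startswith(("turn", "set", "what", "play")):
        return None                          # recognized text not assistant-directed; discard
    intent = nlu(text)                       # on-device NLU
    return fulfill(intent)                   # on-device fulfillment and execution
```
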
  • Publication number: 20210074286
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Application
    Filed: May 31, 2019
    Publication date: March 11, 2021
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Patent number: 10893202
    Abstract: The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 12, 2021
    Assignee: Google LLC
    Inventors: Ibrahim Badr, Gökhan Bakir, Daniel Kunkle, Kavin Karthik Ilangovan, Denis Burakov
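
Patent 10893202 (the same abstract appears below under 10469755, 20200021740, and 20180338109) stores user-selected metadata alongside captured images and makes the images searchable by that metadata. A hedged sketch of the store-and-search flow; the "remember ..." request pattern and the in-memory store are assumptions made for illustration.

```python
import re

METADATA_STORE: dict[str, list[str]] = {}   # image path -> user-provided metadata terms


def handle_task_request(free_form_input: str, image_path: str) -> bool:
    """Detect a 'remember ...' style request and store its payload as image metadata."""
    match = re.match(r"remember (?:that )?this is (.+)", free_form_input.lower())
    if not match:
        return False
    METADATA_STORE.setdefault(image_path, []).append(match.group(1))
    return True


def search_images(query: str) -> list[str]:
    """Return images whose stored metadata mentions any of the query terms."""
    terms = query.lower().split()
    return [path for path, metadata in METADATA_STORE.items()
            if any(term in item for item in metadata for term in terms)]


# handle_task_request("Remember this is my rental car", "IMG_0001.jpg")
# search_images("rental car")  ->  ["IMG_0001.jpg"]
```
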
  • Publication number: 20210006523
    Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
    Type: Application
    Filed: September 24, 2020
    Publication date: January 7, 2021
    Applicant: Google LLC
    Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha van Scheltinga, Quentin Lascombes de Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
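
Publication 20210006523 (patent 10791078 and publication 20190036856 below share this abstract) determines an information item from media content of a communication session and sends a display command to a participating device. A minimal sketch of that determine-and-display step; the topic-to-card mapping and the command format are placeholders.

```python
from typing import Callable, Optional


def determine_information_item(transcript_snippet: str) -> Optional[str]:
    """Map words detected in the session's media content to a suggestion card."""
    text = transcript_snippet.lower()
    if "dinner" in text:
        return "card:restaurant_suggestions"
    if "flight" in text:
        return "card:flight_status"
    return None


def on_media_content(transcript_snippet: str,
                     send_command: Callable[[dict], None]) -> None:
    """Send a display command to a participating device when an item is found."""
    item = determine_information_item(transcript_snippet)
    if item is not None:
        send_command({"action": "display", "information_item": item})
```
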
  • Publication number: 20200395018
    Abstract: Implementations set forth herein relate to a system that employs an automated assistant to further interactions between a user and another application, which can grant the automated assistant permission to initialize relevant application actions while the user is interacting with that other application. Furthermore, the system can allow the automated assistant to initialize actions of other applications even while a particular application is being actively operated. Available actions can be gleaned by the automated assistant from various application-specific schemas, which can be compared with incoming user requests to the automated assistant. Additional data, such as context and historical interactions, can also be used to rank and identify a suitable application action to initialize via the automated assistant.
    Type: Application
    Filed: June 13, 2019
    Publication date: December 17, 2020
    Inventors: Denis Burakov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Marcin Nowak-Przygodzki, Mugurel Ionut Andreica, Radu Voroneanu
  • Publication number: 20200357395
    Abstract: Implementations herein relate to pre-caching data, corresponding to predicted interactions between a user and an automated assistant, using data characterizing previous interactions between the user and the automated assistant. An interaction can be predicted based on details of a current interaction between the user and an automated assistant. One or more predicted interactions can be initialized, and/or any corresponding data pre-cached, prior to the user commanding the automated assistant in furtherance of the predicted interaction. Interaction predictions can be generated using a user-parameterized machine learning model, which can be used when processing input(s) that characterize a recent user interaction with the automated assistant.
    Type: Application
    Filed: May 31, 2019
    Publication date: November 12, 2020
    Inventors: Lucas Mirelmann, Zaheed Sabur, Bohdan Vlasyuk, Marie Patriarche Bledowski, Sergey Nazarov, Denis Burakov, Behshad Behzadi, Michael Golikov, Steve Cheng, Daniel Cotting, Mario Bertschler
  • Patent number: 10791078
    Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: September 29, 2020
    Assignee: Google LLC
    Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha van Scheltinga, Quentin Lascombes de Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
  • Publication number: 20200021740
    Abstract: The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
    Type: Application
    Filed: September 27, 2019
    Publication date: January 16, 2020
    Inventors: Ibrahim Badr, Gökhan Bakir, Daniel Kunkle, Kavin Karthik Ilangovan, Denis Burakov
  • Patent number: 10469755
    Abstract: The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: November 5, 2019
    Assignee: Google LLC
    Inventors: Ibrahim Badr, Gökhan Bakir, Daniel Kunkle, Kavin Karthik Ilangovan, Denis Burakov
  • Publication number: 20190036856
    Abstract: Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
    Type: Application
    Filed: April 13, 2018
    Publication date: January 31, 2019
    Applicant: Google LLC
    Inventors: Fredrik Bergenlid, Vladyslav Lysychkin, Denis Burakov, Behshad Behzadi, Andrea Terwisscha van Scheltinga, Quentin Lascombes de Laroussilhe, Mikhail Golikov, Koa Metter, Ibrahim Badr, Zaheed Sabur
  • Publication number: 20180338109
    Abstract: The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 22, 2018
    Inventors: Ibrahim Badr, Gökhan Bakir, Daniel Kunkle, Kavin Karthik Ilangovan, Denis Burakov