Patents by Inventor Gokhan Bakir

Gokhan Bakir has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11967321
    Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 23, 2024
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
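A small sketch can make the biasing mechanism in the abstract above more concrete. The snippet below is a hypothetical illustration only; the synonym table, recognizer, and all names are invented for this listing and are not taken from the patent. The idea shown: labels scraped from an app's interface are expanded into assistant-language synonyms, and recognition hypotheses containing those synonyms are scored higher.

```python
# Hypothetical sketch: bias speech recognition toward terms synonymous with
# labels found on an application's interface (names invented for illustration).

from dataclasses import dataclass, field

# Toy synonym/translation table standing in for the patent's term expansion;
# a real system would use a lexicon or translation model.
SYNONYMS = {
    "enviar": ["send", "submit"],        # Spanish UI label -> assistant-language terms
    "adjuntar": ["attach", "add file"],
}

@dataclass
class BiasedRecognizer:
    """Toy recognizer that boosts hypotheses containing biased phrases."""
    bias_phrases: set = field(default_factory=set)

    def score(self, hypothesis: str) -> float:
        base = 1.0
        # Boost any hypothesis that mentions a biased phrase.
        boost = sum(0.5 for p in self.bias_phrases if p in hypothesis.lower())
        return base + boost

def bias_terms_for_interface(ui_labels: list[str]) -> set:
    """Expand on-screen labels into assistant-language synonyms to bias toward."""
    terms = set()
    for label in ui_labels:
        terms.update(SYNONYMS.get(label.lower(), [label.lower()]))
    return terms

if __name__ == "__main__":
    recognizer = BiasedRecognizer(bias_terms_for_interface(["Enviar", "Adjuntar"]))
    candidates = ["send the message", "lend the message"]
    print(max(candidates, key=recognizer.score))  # "send the message" wins via the bias
```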
  • Patent number: 11908187
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Grant
    Filed: March 6, 2023
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
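The "image shortcut" idea in the abstract above maps naturally onto a trigger registry. The sketch below is a minimal, hypothetical illustration (the class and detector are invented here, not drawn from the patent): a shortcut pairs a visual feature with an action, and each camera frame's detections are checked against the registered shortcuts.

```python
# Hypothetical sketch: run registered actions whenever their visual feature
# appears in the camera feed (all names invented for illustration).

from typing import Callable

class ImageShortcutRegistry:
    def __init__(self):
        self._shortcuts: list[tuple[str, Callable[[], None]]] = []

    def register(self, feature: str, action: Callable[[], None]) -> None:
        """E.g. created from a spoken command like 'when you see the front door, ...'."""
        self._shortcuts.append((feature, action))

    def on_camera_frame(self, detected_features: set[str]) -> None:
        """Called per frame; runs every action whose feature is currently in view."""
        for feature, action in self._shortcuts:
            if feature in detected_features:
                action()

if __name__ == "__main__":
    registry = ImageShortcutRegistry()
    registry.register("front_door", lambda: print("Turning on porch light"))
    # A real detector would supply per-frame labels; here one frame is faked.
    registry.on_camera_frame({"front_door", "car"})
```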
  • Publication number: 20240028822
    Abstract: A method includes receiving, via a user interface of a client device, a request to populate one or more cells of a plurality of cells of a document having a tabular structure, wherein the one or more cells correspond to a first attribute pertaining to a first column header and a first object pertaining to a first row header; analyzing the request and one or more additional cells corresponding to one or more additional attributes and one or more additional objects of the document to obtain contextual information for the request; generating a query based at least in part on the contextual information; initiating an execution of the query to obtain a response using one or more data sources; and causing the user interface to be modified to populate the response in the one or more cells corresponding to the first attribute and the first object.
    Type: Application
    Filed: July 25, 2023
    Publication date: January 25, 2024
    Inventor: Gökhan Bakir
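The cell-population flow in the abstract above can be summarized as: derive a query from the cell's row header (the object) and column header (the attribute), run it against a data source, and write the answer back. The snippet below is a toy, hypothetical sketch with an in-memory lookup standing in for the "one or more data sources"; none of the names come from the patent.

```python
# Hypothetical sketch: fill a tabular cell by querying (row object, column attribute).

# Stand-in for the external data sources mentioned in the abstract.
FAKE_KNOWLEDGE = {("France", "Population"): "68 million",
                  ("France", "Capital"): "Paris"}

def generate_query(obj: str, attribute: str) -> tuple[str, str]:
    # A real system would also fold surrounding cells in as context.
    return (obj, attribute)

def populate_cell(table: dict, row_header: str, col_header: str) -> None:
    query = generate_query(row_header, col_header)
    response = FAKE_KNOWLEDGE.get(query, "")
    table[(row_header, col_header)] = response

if __name__ == "__main__":
    table: dict = {}
    populate_cell(table, "France", "Population")
    print(table)  # {('France', 'Population'): '68 million'}
```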
  • Publication number: 20230394816
    Abstract: Methods, apparatus, and computer readable media are described related to causing processing of sensor data to be performed in response to determining a request related to an environmental object that is likely captured by the sensor data. Some implementations further relate to determining whether the request is resolvable based on the processing of the sensor data. When it is determined that the request is not resolvable, a prompt is determined and provided as user interface output, where the prompt provides guidance on further input that will enable the request to be resolved. In those implementations, the further input (e.g., additional sensor data and/or the user interface input) received in response to the prompt can then be utilized to resolve the request.
    Type: Application
    Filed: August 21, 2023
    Publication date: December 7, 2023
    Inventors: Ibrahim Badr, Nils Grimsmo, Gökhan Bakir
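The resolvability check described above has a simple shape: process the sensor data, and if the request cannot yet be answered, return a prompt that guides the user toward input that would resolve it. The sketch below is purely illustrative, with a placeholder detector and invented names; it is not the patented pipeline.

```python
# Hypothetical sketch: resolve a request from sensor data, or prompt for more input.

def detect_labels(image: bytes) -> list[str]:
    # Placeholder for real vision processing.
    return ["shoe"] if image else []

def handle_request(request: str, image: bytes) -> str:
    labels = detect_labels(image)
    if "brand" in request and "logo" not in labels:
        # Not resolvable yet: ask for further input instead of failing.
        return "I can see a shoe, but not the logo. Can you move closer to the label?"
    return f"Looks like: {', '.join(labels) or 'nothing recognizable'}"

if __name__ == "__main__":
    print(handle_request("what brand is this shoe?", b"\x00"))
```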
  • Patent number: 11734581
    Abstract: Systems and methods provide an application programming interface to offer action suggestions to third-party applications using context data associated with the third-party. An example method includes receiving content information and context information from a source mobile application, the content information representing information to be displayed on a mobile device as part of a source mobile application administered by a third party, the context information being information specific to the third party and unavailable to a screen scraper. The method also includes predicting an action based on the content information and the context information, the action representing a deep link for a target mobile application. The method further includes providing the action to the source mobile application with a title and a thumbnail, the source mobile application using the title and thumbnail to display a selectable control that, when selected, causes the mobile device to initiate the action.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Mauricio Zuluaga, Aneto Okonkwo, Gökhan Bakir
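The API described above takes content plus third-party-private context and returns a suggested action expressed as a deep link with a title and thumbnail. The snippet below is a hypothetical sketch of that request/response shape; the rule, field names, and URLs are invented for illustration and the real interface is not reproduced here.

```python
# Hypothetical sketch: predict a deep-link action from content + third-party context.

from dataclasses import dataclass

@dataclass
class ActionSuggestion:
    deep_link: str       # deep link into a target mobile application
    title: str
    thumbnail_url: str

def predict_action(content: str, context: dict) -> ActionSuggestion:
    # Toy rule standing in for the prediction model: context a screen scraper
    # could not see (e.g. an entity id) drives the suggested deep link.
    if context.get("entity_type") == "restaurant":
        return ActionSuggestion(
            deep_link=f"maps://place?id={context['entity_id']}",
            title=f"Navigate to {content}",
            thumbnail_url="https://example.com/maps-icon.png",
        )
    return ActionSuggestion("search://query?q=" + content, f"Search {content}",
                            "https://example.com/search-icon.png")

if __name__ == "__main__":
    suggestion = predict_action("Luigi's Pizza",
                                {"entity_type": "restaurant", "entity_id": "abc123"})
    print(suggestion.title, "->", suggestion.deep_link)
```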
  • Patent number: 11734926
    Abstract: Methods, apparatus, and computer readable media are described related to causing processing of sensor data to be performed in response to determining a request related to an environmental object that is likely captured by the sensor data. Some implementations further relate to determining whether the request is resolvable based on the processing of the sensor data. When it is determined that the request is not resolvable, a prompt is determined and provided as user interface output, where the prompt provides guidance on further input that will enable the request to be resolved. In those implementations, the further input (e.g., additional sensor data and/or the user interface input) received in response to the prompt can then be utilized to resolve the request.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Nils Grimsmo, Gökhan Bakir
  • Patent number: 11709994
    Abstract: A method includes receiving, via a user interface of a device associated with a user, a request to populate one or more cells of a plurality of cells of a document having a tabular structure, wherein the one or more cells correspond to a first attribute pertaining to a first column and a first object pertaining to a first row; analyzing the request to obtain contextual information indicating the first attribute and the first object; generating a query based at least in part on the contextual information; initiating an execution of the query to obtain a response using one or more data sources; causing the user interface to be modified to populate the response in the one or more cells corresponding to the first attribute and the first object; determining second contextual information based on the response, the second contextual information indicating a second attribute and a second object; generating a second query based at least in part on the second contextual information; initiating an execution of the second query.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: July 25, 2023
    Assignee: Google LLC
    Inventor: Gökhan Bakir
  • Publication number: 20230206628
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Application
    Filed: March 6, 2023
    Publication date: June 29, 2023
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Publication number: 20230103677
    Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
    Type: Application
    Filed: November 30, 2021
    Publication date: April 6, 2023
    Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
  • Patent number: 11600065
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: March 7, 2023
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Publication number: 20230050054
    Abstract: Implementations are described herein for analyzing existing interactive web sites to facilitate automatic engagement with those web sites, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those websites. For example, in various implementations, techniques described herein may be used to abstract, validate, maintain, generalize, extend and/or distribute individual actions and “traces” of actions that are useable to navigate through various interactive websites. Additionally, techniques are described herein for leveraging these actions and/or traces to automate aspects of interaction with a third party website.
    Type: Application
    Filed: October 26, 2022
    Publication date: February 16, 2023
    Inventors: Gökhan Bakir, Andre Elisseeff, Torsten Marek, João Paulo Pagaime da Silva, Mathias Carlen, Dana Ritter, Lukasz Suder, Ernest Galbrun, Matthew Stokes, Marcin Nowak-Przygodzki, Mugurel-Ionut Andreica, Marius Dumitran
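The abstract above talks about abstracting and replaying "traces" of actions on interactive websites. One hypothetical way to picture such a trace is as a parameterized list of typed/clicked steps, as sketched below; the data format, selectors, and the stand-in page driver are invented for illustration and do not come from the patent.

```python
# Hypothetical sketch: a generalized action trace replayed with new parameters.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "type" or "click"
    selector: str      # CSS selector of the target element
    value: str = ""    # parameterizable text for "type" actions

def replay(trace: list[Action], params: dict, page) -> None:
    """Replay a trace, substituting {placeholders} with concrete parameters."""
    for action in trace:
        if action.kind == "type":
            page.fill(action.selector, action.value.format(**params))
        elif action.kind == "click":
            page.click(action.selector)

# Example trace for a hypothetical reservation form.
RESERVE_TABLE = [
    Action("type", "#party-size", "{size}"),
    Action("type", "#date", "{date}"),
    Action("click", "#submit"),
]

class FakePage:
    """Stand-in for a browser driver exposing fill/click (e.g. Playwright-style)."""
    def fill(self, selector, text): print(f"fill {selector} <- {text!r}")
    def click(self, selector): print(f"click {selector}")

if __name__ == "__main__":
    replay(RESERVE_TABLE, {"size": "2", "date": "2024-05-01"}, FakePage())
```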
  • Patent number: 11567642
    Abstract: Contextual paste target prediction is used to predict one or more target applications for a paste action, and do so based upon a context associated with the content that has previously been selected and copied. The results of the prediction may be used to present to a user one or more user controls to enable the user to activate one or more predicted application, and in some instances, additionally configure a state of a predicted application to use the selected and copied content once activated. As such, upon completing a copy action, a user may, in some instances, be provided with an ability to quickly switch to an application into which the user was intending to paste the content. This can provide a simpler user interface in a device such as phones and tablet computers with limited display size and limited input device facilities. It can result in a paste operation into a different application with fewer steps than is possible conventionally.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: January 31, 2023
    Assignee: GOOGLE LLC
    Inventors: Aayush Kumar, Gokhan Bakir
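Contextual paste target prediction, as described above, keys off the copied content and its context to rank likely destination apps. The snippet below is a toy, hypothetical sketch using simple pattern rules in place of the patented prediction model; all names are invented for this listing.

```python
# Hypothetical sketch: rank likely paste targets from copied text + context.

import re

def predict_paste_targets(copied_text: str, context: dict) -> list[str]:
    targets = []
    if re.search(r"https?://", copied_text):
        targets.append("browser")
    if re.fullmatch(r"[\d\s()+-]{7,}", copied_text):
        targets.append("dialer")
    if context.get("last_app") == "email" or "@" in copied_text:
        targets.append("email")
    return targets or ["notes"]

if __name__ == "__main__":
    # A UI layer would render these as one-tap controls after the copy action.
    print(predict_paste_targets("+1 (650) 253-0000", {"last_app": "messages"}))
```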
  • Patent number: 11557119
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: January 17, 2023
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Publication number: 20220392216
    Abstract: Techniques described herein enable a user to interact with an automated assistant and obtain relevant output from the automated assistant without requiring arduous typed input to be provided by the user and/or without requiring the user to provide spoken input that could cause privacy concerns (e.g., if other individuals are nearby). The assistant application can operate in multiple different image conversation modes in which the assistant application is responsive to various objects in a field of view of the camera. The image conversation modes can be suggested to the user when a particular object is detected in the field of view of the camera. When the user selects an image conversation mode, the assistant application can thereafter provide output, for presentation, that is based on the selected image conversation mode and that is based on object(s) captured by image(s) of the camera.
    Type: Application
    Filed: August 15, 2022
    Publication date: December 8, 2022
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
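The "image conversation modes" described above amount to suggesting modes appropriate to whatever the camera sees, then answering within the mode the user selects. The sketch below is a hypothetical illustration with an invented mode table and canned responses; it is not the patented system.

```python
# Hypothetical sketch: suggest conversation modes per detected object, then respond.

MODES_BY_OBJECT = {              # toy mapping of detected objects to modes
    "food": ["calorie", "recipe"],
    "landmark": ["facts", "nearby"],
}

RESPONSES = {                    # canned answers standing in for real lookups
    ("calorie", "food"): "Roughly 250 kcal per serving.",
    ("facts", "landmark"): "Built in 1889; 330 m tall.",
}

def suggest_modes(detected_object: str) -> list[str]:
    return MODES_BY_OBJECT.get(detected_object, [])

def respond(mode: str, detected_object: str) -> str:
    return RESPONSES.get((mode, detected_object), "I don't have that yet.")

if __name__ == "__main__":
    print(suggest_modes("food"))       # modes offered as suggestion chips
    print(respond("calorie", "food"))  # output once the user picks 'calorie'
```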
  • Patent number: 11487832
    Abstract: Implementations are described herein for analyzing existing interactive web sites to facilitate automatic engagement with those web sites, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those websites. For example, in various implementations, techniques described herein may be used to abstract, validate, maintain, generalize, extend and/or distribute individual actions and “traces” of actions that are useable to navigate through various interactive websites. Additionally, techniques are described herein for leveraging these actions and/or traces to automate aspects of interaction with a third party website.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: November 1, 2022
    Assignee: GOOGLE LLC
    Inventors: Gökhan Bakir, Andre Elisseeff, Torsten Marek, João Paulo Pagaime da Silva, Mathias Carlen, Dana Ritter, Lukasz Suder, Ernest Galbrun, Matthew Stokes, Marcin Nowak-Przygodzki, Mugurel-Ionut Andreica, Marius Dumitran
  • Publication number: 20220309788
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Application
    Filed: June 13, 2022
    Publication date: September 29, 2022
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Patent number: 11417092
    Abstract: Techniques described herein enable a user to interact with an automated assistant and obtain relevant output from the automated assistant without requiring arduous typed input to be provided by the user and/or without requiring the user to provide spoken input that could cause privacy concerns (e.g., if other individuals are nearby). The assistant application can operate in multiple different image conversation modes in which the assistant application is responsive to various objects in a field of view of the camera. The image conversation modes can be suggested to the user when a particular object is detected in the field of view of the camera. When the user selects an image conversation mode, the assistant application can thereafter provide output, for presentation, that is based on the selected image conversation mode and that is based on object(s) captured by image(s) of the camera.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: August 16, 2022
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Patent number: 11361539
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: June 14, 2022
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Patent number: 11017037
    Abstract: Techniques are described herein for automated assistants that search various alternative corpora for information. In various implementations, a method may include receiving, by an automated assistant via an input component of a first client device, a free form input, wherein the free form input includes a request for specific information; searching a general purpose corpus of online documents to obtain a first set of candidate response(s) to the request for specific information; searching a user-specific corpus of active document(s) to obtain a second set of candidate response(s) to the request for specific information; comparing the first and second sets of candidate responses; based on the comparing, selecting a given response to the request for specific information from the first or second set; and providing, by the automated assistant, output indicative of the given response.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: May 25, 2021
    Assignee: GOOGLE LLC
    Inventors: Mugurel Ionut Andreica, Vladimir Vuskovic, Gökhan Bakir, Marcin Nowak-Przygodzki
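The dual-corpus search above boils down to: run the request against both a general corpus and a user-specific corpus of active documents, compare the candidate responses, and return the stronger one. The snippet below is a hypothetical sketch with invented corpora and a toy confidence comparison; it is not the patented ranking method.

```python
# Hypothetical sketch: answer from a general corpus or a user-specific corpus,
# whichever yields the higher-confidence candidate response.

GENERAL_CORPUS = {"capital of france": ("Paris", 0.9)}
USER_CORPUS = {"capital of france": ("Paris (from your geography notes)", 0.6),
               "wifi password": ("Rosebud42, per your 'Home setup' doc", 0.95)}

def best_response(query: str) -> str:
    candidates = []
    for corpus in (GENERAL_CORPUS, USER_CORPUS):
        if query in corpus:
            candidates.append(corpus[query])
    if not candidates:
        return "Sorry, I couldn't find that."
    # Compare the candidate sets and keep the highest-confidence answer.
    return max(candidates, key=lambda c: c[1])[0]

if __name__ == "__main__":
    print(best_response("wifi password"))
    print(best_response("capital of france"))
```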
  • Patent number: 11017299
    Abstract: Systems and methods provide an application programming interface to offer action suggestions to third-party applications using context data associated with the third-party. An example method includes receiving content information and context information from a source mobile application, the content information representing information to be displayed on a mobile device as part of a source mobile application administered by a third party, the context information being information specific to the third party and unavailable to a screen scraper. The method also includes predicting an action based on the content information and the context information, the action representing a deep link for a target mobile application. The method further includes providing the action to the source mobile application with a title and a thumbnail, the source mobile application using the title and thumbnail to display a selectable control that, when selected, causes the mobile device to initiate the action.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: May 25, 2021
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Mauricio Zuluaga, Aneto Okonkwo, Gökhan Bakir