Patents by Inventor Gokhan Bakir

Gokhan Bakir has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11017299
    Abstract: Systems and methods provide an application programming interface to offer action suggestions to third-party applications using context data associated with the third party. An example method includes receiving content information and context information from a source mobile application, the content information representing information to be displayed on a mobile device as part of a source mobile application administered by a third party, the context information being information specific to the third party and unavailable to a screen scraper. The method also includes predicting an action based on the content information and the context information, the action representing a deep link for a target mobile application. The method further includes providing the action to the source mobile application with a title and a thumbnail, the source mobile application using the title and thumbnail to display a selectable control that, when selected, causes the mobile device to initiate the action.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: May 25, 2021
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Mauricio Zuluaga, Aneto Okonkwo, Gökhan Bakir
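The API flow in the abstract above (content plus third-party context in, suggested deep-link action with title and thumbnail out) could be sketched roughly as follows. The field names, the `maps://` scheme, and the restaurant rule are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    deep_link: str   # deep link into a target mobile application
    title: str       # shown on the selectable control
    thumbnail: str   # icon shown next to the title

def suggest_action(content: dict, context: dict) -> Optional[Action]:
    """Predict an action from on-screen content plus context supplied by
    the third-party app (information a screen scraper could not see)."""
    # Illustrative rule: if the screen shows an address and the app's own
    # context marks the record as a restaurant, suggest a maps deep link.
    if "address" in content and context.get("category") == "restaurant":
        return Action(
            deep_link=f"maps://search?q={content['address']}",
            title=f"Directions to {content.get('name', 'destination')}",
            thumbnail="ic_map.png",
        )
    return None  # no confident suggestion for this screen
```

The source app would render the returned title and thumbnail as a tappable control whose tap fires the deep link.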
  • Publication number: 20210056310
    Abstract: Methods, apparatus, and computer-readable media are described related to causing processing of sensor data to be performed in response to determining a request related to an environmental object that is likely captured by the sensor data. Some implementations further relate to determining whether the request is resolvable based on the processing of the sensor data. When it is determined that the request is not resolvable, a prompt is determined and provided as user interface output, where the prompt provides guidance on further input that will enable the request to be resolved. In those implementations, the further input (e.g., additional sensor data and/or the user interface input) received in response to the prompt can then be utilized to resolve the request.
    Type: Application
    Filed: November 10, 2020
    Publication date: February 25, 2021
    Inventors: Ibrahim Badr, Nils Grimsmo, Gökhan Bakir
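The resolve-or-prompt loop described in the abstract above could look something like the sketch below. The label vocabulary and the generic/specific split are assumptions for illustration, not details from the publication.

```python
from typing import List, Tuple

def resolve_request(request: str, detected_labels: List[str]) -> Tuple[bool, str]:
    """Try to resolve a request about an environmental object from labels
    detected in sensor (e.g. camera) data; when that fails, return a
    prompt guiding the user toward further input."""
    # In a fuller system the request text would steer which detectors run;
    # this sketch only checks whether detection was specific enough.
    GENERIC = {"plant", "object", "thing"}
    specific = [label for label in detected_labels if label not in GENERIC]
    if specific:
        return True, f"That looks like a {specific[0]}."
    # Not resolvable yet: prompt for additional sensor data or user input.
    return False, "I can't tell yet. Try moving the camera closer."
```

Further input gathered in response to the prompt (a closer photo, a clarifying utterance) would feed back into another call until the request resolves.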
  • Patent number: 10893202
    Abstract: The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 12, 2021
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Gökhan Bakir, Daniel Kunkle, Kavin Karthik Ilangovan, Denis Burakov
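The flow in the abstract above (recognize a metadata task request in free-form input, attach metadata to captured images, search images by that metadata) might be sketched as below. The "remember this as ..." grammar is an illustrative assumption, not the patent's.

```python
import re
from typing import Dict, List

class PhotoMetadataStore:
    """Minimal sketch: store user-selected metadata with captured images
    and make the images searchable by that metadata."""

    def __init__(self) -> None:
        self._tags: Dict[str, List[str]] = {}  # image id -> metadata tags

    def handle_utterance(self, utterance: str, image_id: str) -> bool:
        # Hypothetical task-request grammar: "remember this as <tag>".
        m = re.match(r"remember (?:this|these) as (.+)", utterance.lower())
        if not m:
            return False  # not a metadata task request
        self._tags.setdefault(image_id, []).append(m.group(1))
        return True

    def search(self, tag: str) -> List[str]:
        # Images are searchable by their stored metadata.
        return [img for img, tags in self._tags.items() if tag in tags]
```

A usage example: after "Remember this as my parking spot" is recognized against a just-captured photo, a later search for "my parking spot" returns that photo.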
  • Patent number: 10867180
    Abstract: Methods, apparatus, and computer-readable media are described related to causing processing of sensor data to be performed in response to determining a request related to an environmental object that is likely captured by the sensor data. Some implementations further relate to determining whether the request is resolvable based on the processing of the sensor data. When it is determined that the request is not resolvable, a prompt is determined and provided as user interface output, where the prompt provides guidance on further input that will enable the request to be resolved. In those implementations, the further input (e.g., additional sensor data and/or the user interface input) received in response to the prompt can then be utilized to resolve the request.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: December 15, 2020
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Nils Grimsmo, Gökhan Bakir
  • Publication number: 20200342039
    Abstract: Implementations are described herein for analyzing existing interactive websites to facilitate automatic engagement with those websites, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those websites. For example, in various implementations, techniques described herein may be used to abstract, validate, maintain, generalize, extend, and/or distribute individual actions and “traces” of actions that are useable to navigate through various interactive websites. Additionally, techniques are described herein for leveraging these actions and/or traces to automate aspects of interaction with a third-party website.
    Type: Application
    Filed: May 9, 2019
    Publication date: October 29, 2020
    Inventors: Gökhan Bakir, Andre Elisseeff, Torsten Marek, João Paulo Pagaime da Silva, Mathias Carlen, Dana Ritter, Lukasz Suder, Ernest Galbrun, Matthew Stokes
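A recorded, generalized "trace" of website actions, as described in the abstract above, might be modeled like this. The step vocabulary, selectors, and `{city}` slot syntax are illustrative assumptions, not details from the publication.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Step:
    action: str      # e.g. "fill" or "click"
    selector: str    # element the step targets
    value: str = ""  # recorded value, generalized into a {slot}

@dataclass
class Trace:
    """A recorded action sequence, generalized so parameters can be
    substituted when the trace is replayed against the live site."""
    steps: List[Step] = field(default_factory=list)

    def replay(self, perform: Callable[[str, str, str], None],
               params: Dict[str, str]) -> None:
        for s in self.steps:
            # Substitute this replay's parameters into the recorded slots.
            perform(s.action, s.selector, s.value.format(**params))

# A hypothetical trace for a search form, with the city generalized:
search_trace = Trace(steps=[Step("fill", "#city", "{city}"),
                            Step("click", "#search")])
```

On replay, `perform` would be backed by a real browser driver; here it is left as a callback so the trace abstraction stays independent of any automation backend.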
  • Publication number: 20200250433
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Application
    Filed: April 16, 2020
    Publication date: August 6, 2020
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
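The image-shortcut mechanism in the abstract above (a user-defined mapping from a visual feature to an action, fired automatically when the feature appears in the camera feed) could be sketched as follows. The feature labels and the weather example are illustrative assumptions.

```python
from typing import Callable, Dict, List

class ImageShortcuts:
    """Sketch: an image shortcut maps a visual feature (set up via user
    interface input such as a spoken command) to an action that runs
    automatically whenever the feature appears in a camera frame."""

    def __init__(self) -> None:
        self._shortcuts: Dict[str, Callable[[], str]] = {}

    def create(self, feature: str, action: Callable[[], str]) -> None:
        # e.g. from "when I look at my keys, show me the weather"
        self._shortcuts[feature] = action

    def on_frame(self, detected_features: List[str]) -> List[str]:
        # Fire every shortcut whose feature is present in this frame.
        return [self._shortcuts[f]() for f in detected_features
                if f in self._shortcuts]
```

A real implementation would feed `on_frame` from an on-device detector over the live camera feed; the returned strings stand in for presenting data or controlling a remote device.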
  • Publication number: 20200202130
    Abstract: Techniques described herein enable a user to interact with an automated assistant and obtain relevant output from the automated assistant without requiring arduous typed input to be provided by the user and/or without requiring the user to provide spoken input that could cause privacy concerns (e.g., if other individuals are nearby). The assistant application can operate in multiple different image conversation modes in which the assistant application is responsive to various objects in a field of view of the camera. The image conversation modes can be suggested to the user when a particular object is detected in the field of view of the camera. When the user selects an image conversation mode, the assistant application can thereafter provide output, for presentation, that is based on the selected image conversation mode and that is based on object(s) captured by image(s) of the camera.
    Type: Application
    Filed: March 2, 2020
    Publication date: June 25, 2020
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
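The suggest-then-respond pattern in the abstract above (offer conversation modes when an object class is detected, then generate output from the selected mode plus the object) can be sketched briefly. The object labels and mode names are illustrative assumptions, not from the publication.

```python
from typing import Dict, List

# Hypothetical mapping from a detected object class to image
# conversation modes worth suggesting to the user.
MODE_SUGGESTIONS: Dict[str, List[str]] = {
    "food": ["calories", "recipes"],
    "landmark": ["facts", "nearby"],
}

def suggest_modes(detected_objects: List[str]) -> List[str]:
    """Suggest image conversation modes for objects in the camera view."""
    modes: List[str] = []
    for label in detected_objects:
        modes.extend(MODE_SUGGESTIONS.get(label, []))
    return modes

def respond(mode: str, label: str) -> str:
    """Once a mode is selected, output depends on both the chosen mode
    and the detected object."""
    return f"[{mode}] results for {label}"
```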
  • Patent number: 10657374
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: May 19, 2020
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Patent number: 10607082
    Abstract: Techniques described herein enable a user to interact with an automated assistant and obtain relevant output from the automated assistant without requiring arduous typed input to be provided by the user and/or without requiring the user to provide spoken input that could cause privacy concerns (e.g., if other individuals are nearby). The assistant application can operate in multiple different image conversation modes in which the assistant application is responsive to various objects in a field of view of the camera. The image conversation modes can be suggested to the user when a particular object is detected in the field of view of the camera. When the user selects an image conversation mode, the assistant application can thereafter provide output, for presentation, that is based on the selected image conversation mode and that is based on object(s) captured by image(s) of the camera.
    Type: Grant
    Filed: September 9, 2017
    Date of Patent: March 31, 2020
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Publication number: 20200081609
    Abstract: Contextual paste target prediction is used to predict one or more target applications for a paste action, and does so based upon a context associated with the content that has previously been selected and copied. The results of the prediction may be used to present to a user one or more user controls to enable the user to activate one or more predicted applications, and in some instances, additionally configure a state of a predicted application to use the selected and copied content once activated. As such, upon completing a copy action, a user may, in some instances, be provided with an ability to quickly switch to an application into which the user was intending to paste the content. This can provide a simpler user interface on devices such as phones and tablet computers with limited display size and limited input device facilities. It can result in a paste operation into a different application in fewer steps than is conventionally possible.
    Type: Application
    Filed: November 12, 2019
    Publication date: March 12, 2020
    Inventors: Aayush Kumar, Gokhan Bakir
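The prediction step in the abstract above could look roughly like the sketch below: classify the copied content's context and rank likely paste-target apps. The app names and matching rules are illustrative assumptions, not details from the publication.

```python
import re
from typing import List

def predict_paste_targets(copied: str) -> List[str]:
    """Rank likely paste-target apps from the context of copied content."""
    targets: List[str] = []
    # Looks like an email address -> mail/contacts apps.
    if re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", copied):
        targets += ["Mail", "Contacts"]
    # Looks like a URL -> browser or note-taking apps.
    if copied.startswith(("http://", "https://")):
        targets += ["Browser", "Notes"]
    # Looks like a phone number -> dialer/messaging apps.
    if re.fullmatch(r"\+?[\d\s()-]{7,}", copied):
        targets += ["Phone", "Messages"]
    return targets or ["Notes"]  # fallback when nothing matched
```

The predicted apps would then back the one-tap controls shown after a copy action, optionally pre-configured with the copied content.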
  • Publication number: 20200021740
    Abstract: The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
    Type: Application
    Filed: September 27, 2019
    Publication date: January 16, 2020
    Inventors: Ibrahim Badr, Gökhan Bakir, Daniel Kunkle, Kavin Karthik Ilangovan, Denis Burakov
  • Patent number: 10535005
    Abstract: Systems and methods provide an application programming interface to offer action suggestions to third-party applications using context data associated with the third party. An example method includes receiving content information and context information from a source mobile application, the content information representing information to be displayed on a mobile device as part of a source mobile application administered by a third party, the context information being information specific to the third party and unavailable to a screen scraper. The method also includes predicting an action based on the content information and the context information, the action representing a deep link for a target mobile application. The method further includes providing the action to the source mobile application with a title and a thumbnail, the source mobile application using the title and thumbnail to display a selectable control that, when selected, causes the mobile device to initiate the action.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: January 14, 2020
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Mauricio Zuluaga, Aneto Okonkwo, Gökhan Bakir
  • Patent number: 10514833
    Abstract: Contextual paste target prediction is used to predict one or more target applications for a paste action, and does so based upon a context associated with the content that has previously been selected and copied. The results of the prediction may be used to present to a user one or more user controls to enable the user to activate one or more predicted applications, and in some instances, additionally configure a state of a predicted application to use the selected and copied content once activated. As such, upon completing a copy action, a user may, in some instances, be provided with an ability to quickly switch to an application into which the user was intending to paste the content. This can provide a simpler user interface on devices such as phones and tablet computers with limited display size and limited input device facilities. It can result in a paste operation into a different application in fewer steps than is conventionally possible.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: December 24, 2019
    Assignee: GOOGLE LLC
    Inventors: Aayush Kumar, Gokhan Bakir
  • Patent number: 10469755
    Abstract: The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: November 5, 2019
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Gökhan Bakir, Daniel Kunkle, Kavin Karthik Ilangovan, Denis Burakov
  • Publication number: 20190325222
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Application
    Filed: July 2, 2019
    Publication date: October 24, 2019
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Patent number: 10366291
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Grant
    Filed: September 9, 2017
    Date of Patent: July 30, 2019
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Publication number: 20190220667
    Abstract: Methods, apparatus, and computer-readable media are described related to causing processing of sensor data to be performed in response to determining a request related to an environmental object that is likely captured by the sensor data. Some implementations further relate to determining whether the request is resolvable based on the processing of the sensor data. When it is determined that the request is not resolvable, a prompt is determined and provided as user interface output, where the prompt provides guidance on further input that will enable the request to be resolved. In those implementations, the further input (e.g., additional sensor data and/or the user interface input) received in response to the prompt can then be utilized to resolve the request.
    Type: Application
    Filed: March 21, 2019
    Publication date: July 18, 2019
    Inventors: Ibrahim Badr, Nils Grimsmo, Gökhan Bakir
  • Patent number: 10275651
    Abstract: Methods, apparatus, and computer-readable media are described related to causing processing of sensor data to be performed in response to determining a request related to an environmental object that is likely captured by the sensor data. Some implementations further relate to determining whether the request is resolvable based on the processing of the sensor data. When it is determined that the request is not resolvable, a prompt is determined and provided as user interface output, where the prompt provides guidance on further input that will enable the request to be resolved. In those implementations, the further input (e.g., additional sensor data and/or the user interface input) received in response to the prompt can then be utilized to resolve the request.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: April 30, 2019
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Nils Grimsmo, Gökhan Bakir
  • Publication number: 20190080169
    Abstract: Techniques described herein enable a user to interact with an automated assistant and obtain relevant output from the automated assistant without requiring arduous typed input to be provided by the user and/or without requiring the user to provide spoken input that could cause privacy concerns (e.g., if other individuals are nearby). The assistant application can operate in multiple different image conversation modes in which the assistant application is responsive to various objects in a field of view of the camera. The image conversation modes can be suggested to the user when a particular object is detected in the field of view of the camera. When the user selects an image conversation mode, the assistant application can thereafter provide output, for presentation, that is based on the selected image conversation mode and that is based on object(s) captured by image(s) of the camera.
    Type: Application
    Filed: September 9, 2017
    Publication date: March 14, 2019
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Publication number: 20190080168
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Application
    Filed: September 9, 2017
    Publication date: March 14, 2019
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir