Patents by Inventor Daniel Cotting

Daniel Cotting has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11170777
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: November 9, 2021
    Assignee: Google LLC
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
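The selection step this abstract describes — a remote system choosing, from a superset of candidate proactive cache entries, a subset to send to a given client device — can be illustrated with a minimal sketch. The scoring heuristic, field names, and top-k policy below are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: rank candidate proactive cache entries for one
# client device and keep a top-k subset to push to it. The popularity
# score and history boost are assumed, illustrative signals.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    query: str         # assistant query the entry pre-answers
    response: str      # pre-computed response payload
    base_score: float  # assumed global popularity of the query

def select_entries(superset, device_history, k=3):
    """Score each candidate by popularity plus a boost for queries this
    device has issued before, then keep the k highest-scoring entries."""
    def score(entry):
        boost = 1.0 if entry.query in device_history else 0.0
        return entry.base_score + boost
    return sorted(superset, key=score, reverse=True)[:k]

candidates = [
    CacheEntry("weather today", "Sunny, 22C", 0.9),
    CacheEntry("set alarm", "Alarm set", 0.7),
    CacheEntry("traffic home", "12 min", 0.5),
    CacheEntry("stock prices", "...", 0.2),
]
# A device that has asked "traffic home" before gets that entry boosted.
subset = select_entries(candidates, device_history={"traffic home"}, k=2)
```

The key design point the abstract highlights is per-device subsetting: different devices receive different slices of the same candidate superset, so the device-specific signal (here, `device_history`) is what differentiates the results.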
  • Publication number: 20210335356
    Abstract: Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed, and/or time-limited according to a timer, which can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
    Type: Application
    Filed: June 5, 2019
    Publication date: October 28, 2021
    Inventors: Denis Burakov, Sergey Nazarov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Zaheed Sabur, Okan Kolak, Yan Zhong, Vinh Quoc Ly
  • Publication number: 20210216384
    Abstract: Implementations set forth herein relate to an automated assistant that can be invoked while a user is interfacing with a foreground application in order to retrieve data from one or more different applications, and then provide the retrieved data to the foreground application. A user can invoke the automated assistant while operating the foreground application by providing a spoken utterance, and the automated assistant can select one or more other applications to query based on content of the spoken utterance. Application data collected by the automated assistant from the one or more other applications can then be used to provide an input to the foreground application. In this way, the user can bypass switching between applications in the foreground in order to retrieve data that has been generated by other applications.
    Type: Application
    Filed: August 6, 2019
    Publication date: July 15, 2021
    Inventors: Bohdan Vlasyuk, Behshad Behzadi, Mario Bertschler, Denis Burakov, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
  • Publication number: 20210074285
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant by, for example, obviating the need to provide an explicit invocation to the automated assistant, such as by saying a hot-word/phrase or performing a specific user input, prior to speaking a command or query. In addition, the automated assistant can optionally receive, understand, and/or respond to the command or query without communicating with a server, thereby further reducing the time in which a response can be provided. Implementations only selectively initiate on-device speech recognition responsive to determining that one or more condition(s) are satisfied. Further, in some implementations, on-device NLU, on-device fulfillment, and/or resulting execution occur only responsive to determining, based on recognized text from the on-device speech recognition, that such further processing should occur.
    Type: Application
    Filed: May 31, 2019
    Publication date: March 11, 2021
    Inventors: Michael Golikov, Zaheed Sabur, Denis Burakov, Behshad Behzadi, Sergey Nazarov, Daniel Cotting, Mario Bertschler, Lucas Mirelmann, Steve Cheng, Bohdan Vlasyuk, Jonathan Lee, Lucia Terrenghi, Adrian Zumbrunnen
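The two gates the abstract describes — only start on-device speech recognition when trigger conditions hold, and only proceed to NLU/fulfillment when the recognized text warrants it — can be sketched as follows. The specific conditions and the verb-based command check are illustrative assumptions:

```python
# Hypothetical sketch of the two-stage gating described in the abstract.
# The trigger signals and the assistant-verb list are assumptions for
# illustration, not the conditions used by the actual implementation.
def should_start_recognition(signals):
    """Gate 1: start on-device ASR only when at least one trigger
    condition holds (e.g. user gaze detected, or a very recent
    interaction with the assistant)."""
    return bool(signals.get("gaze_detected") or signals.get("recent_interaction"))

def should_run_nlu(recognized_text, assistant_verbs=("turn", "set", "play", "call")):
    """Gate 2: proceed to on-device NLU/fulfillment only if the
    recognized text plausibly addresses the assistant (here, a naive
    check that it starts with a known command verb)."""
    words = recognized_text.lower().split()
    return bool(words) and words[0] in assistant_verbs
```

Gating in two stages keeps the expensive components (recognition, then understanding and fulfillment) off the hot path unless each cheaper check has already passed.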
  • Publication number: 20210074286
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Application
    Filed: May 31, 2019
    Publication date: March 11, 2021
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Publication number: 20200395018
    Abstract: Implementations set forth herein relate to a system that employs an automated assistant to further interactions between a user and another application, which can provide the automated assistant with permission to initialize relevant application actions simultaneous to the user interacting with the other application. Furthermore, the system can allow the automated assistant to initialize actions of different applications, despite being actively operating a particular application. Available actions can be gleaned by the automated assistant using various application-specific schemas, which can be compared with incoming requests from a user to the automated assistant. Additional data, such as context and historical interactions, can also be used to rank and identify a suitable application action to be initialized via the automated assistant.
    Type: Application
    Filed: June 13, 2019
    Publication date: December 17, 2020
    Inventors: Denis Burakov, Behshad Behzadi, Mario Bertschler, Bohdan Vlasyuk, Daniel Cotting, Michael Golikov, Lucas Mirelmann, Steve Cheng, Sergey Nazarov, Zaheed Sabur, Marcin Nowak-Przygodzki, Mugurel Ionut Andreica, Radu Voroneanu
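The matching step this abstract outlines — comparing an incoming user request against application-specific schemas of available actions, then ranking with additional context — might look like the following minimal sketch. The schema shape, keyword-overlap scoring, and context boost are all illustrative assumptions:

```python
# Hypothetical sketch: score each application's declared actions against
# a user request by keyword overlap, add a context-derived boost, and
# return the best-ranked (app, action) pair. The schema format and
# scoring are assumptions, not the patented ranking.
def rank_actions(request, schemas, context_boost):
    """schemas: {app: {action: [keywords]}}; context_boost: {app: float}
    reflecting e.g. recent usage of that application."""
    req_words = set(request.lower().split())
    best, best_score = None, -1.0
    for app, actions in schemas.items():
        for action, keywords in actions.items():
            score = len(req_words & set(keywords)) + context_boost.get(app, 0.0)
            if score > best_score:
                best, best_score = (app, action), score
    return best

schemas = {
    "music_app": {"play_song": ["play", "song", "music"]},
    "lights_app": {"toggle": ["turn", "lights", "on", "off"]},
}
choice = rank_actions("play some music", schemas, context_boost={})
```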
  • Publication number: 20200357395
    Abstract: Implementations herein relate to pre-caching data, corresponding to predicted interactions between a user and an automated assistant, using data characterizing previous interactions between the user and the automated assistant. An interaction can be predicted based on details of a current interaction between the user and an automated assistant. One or more predicted interactions can be initialized, and/or any corresponding data pre-cached, prior to the user commanding the automated assistant in furtherance of the predicted interaction. Interaction predictions can be generated using a user-parameterized machine learning model, which can be used when processing input(s) that characterize a recent user interaction with the automated assistant.
    Type: Application
    Filed: May 31, 2019
    Publication date: November 12, 2020
    Inventors: Lucas Mirelmann, Zaheed Sabur, Bohdan Vlasyuk, Marie Patriarche Bledowski, Sergey Nazarov, Denis Burakov, Behshad Behzadi, Michael Golikov, Steve Cheng, Daniel Cotting, Mario Bertschler
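The pre-caching loop this abstract describes — predict a likely follow-up interaction from the current one, then fetch its data before the user asks — can be sketched with a simple transition-count predictor. The abstract mentions a user-parameterized machine learning model; the count-based prediction below is a stand-in assumption for illustration only:

```python
# Hypothetical sketch: predict the most likely follow-up query from
# historical transition counts and pre-cache its response. A count
# table stands in for the ML model described in the abstract.
def predict_and_precache(current_query, transition_counts, cache, fetch):
    """transition_counts: {query: {followup_query: count}}.
    Returns the predicted follow-up (or None) after warming the cache."""
    followups = transition_counts.get(current_query)
    if not followups:
        return None
    predicted = max(followups, key=followups.get)
    if predicted not in cache:
        cache[predicted] = fetch(predicted)   # pre-fetch before the user asks
    return predicted

transitions = {"weather today": {"weather tomorrow": 5, "set alarm": 2}}
cache = {}
predicted = predict_and_precache(
    "weather today", transitions, cache, fetch=lambda q: f"response:{q}"
)
```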
  • Patent number: 9842272
    Abstract: A system and computer implemented method for detecting the location of a mobile device using semantic indicators is provided. The method includes receiving, using one or more processors, a plurality of images captured by a mobile device at an area. The area is associated with a set of candidate locations. Using the one or more processors, one or more feature indicators associated with the plurality of images are detected. These feature indicators include semantic features related to the area. The semantic features are compared with a plurality of stored location features for the set of candidate locations. In accordance with the comparison, a location from the set of candidate locations is selected to identify an estimated position of the mobile device.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: December 12, 2017
    Assignee: Google LLC
    Inventors: Daniel Raynaud, Boris Bluntschli, Daniel Cotting
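The comparison step this abstract describes — matching semantic features detected in a device's images against stored feature sets for candidate locations — can be sketched as a set-overlap score. The Jaccard similarity and the example features are illustrative assumptions, not the patented comparison:

```python
# Hypothetical sketch: pick the candidate location whose stored semantic
# features best overlap the features detected in the device's images.
# Jaccard similarity is an assumed, illustrative scoring choice.
def estimate_location(detected_features, candidate_locations):
    """candidate_locations: {name: set_of_stored_features}.
    Returns the name of the best-overlapping candidate."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(
        candidate_locations,
        key=lambda loc: jaccard(detected_features, candidate_locations[loc]),
    )

candidates = {
    "cafe": {"espresso_machine", "menu_board", "tables"},
    "gym": {"treadmill", "weights"},
}
estimate = estimate_location({"menu_board", "tables", "chairs"}, candidates)
```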
  • Patent number: 9792021
    Abstract: Methods, systems, and computer program products for transitioning an interface to a related image are provided. A method for transitioning an interface to a related image may include receiving information describing a homography between a first image and a second image, and adjusting the interface to present the second image at one or more transition intervals in a transition period until the second image is fully displayed and the first image is no longer visible. The interface may be adjusted by determining, based on the homography, a region of the second image to overlay onto a corresponding area of the first image, blending the determined region with the corresponding area to reduce visible seams occurring between the first image and the second image, and updating the interface by gradually decreasing visual intensity of the first image while gradually and proportionally increasing visual intensity of the second image.
    Type: Grant
    Filed: June 13, 2014
    Date of Patent: October 17, 2017
    Assignee: Google Inc.
    Inventors: Daniel Joseph Filip, Daniel Cotting
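The final stage this abstract describes — gradually decreasing the visual intensity of the first image while gradually and proportionally increasing that of the second over a series of transition intervals — is a per-interval cross-fade. A minimal sketch over flat pixel arrays (ignoring the homography-based region overlay and seam blending):

```python
# Hypothetical sketch: cross-fade two equal-size grayscale "images"
# (flat lists of floats in [0, 1]) across a number of transition
# intervals, so the first fades out as the second proportionally
# fades in. Real images and the homography warp are omitted.
def transition_frames(img_a, img_b, steps):
    """Return one blended frame per interval; the last frame is img_b."""
    frames = []
    for i in range(1, steps + 1):
        t = i / steps  # fraction of the transition period elapsed
        frames.append([(1 - t) * a + t * b for a, b in zip(img_a, img_b)])
    return frames

frames = transition_frames([0.0, 1.0], [1.0, 0.0], steps=4)
```

At `t = 1` the first image contributes nothing, matching the abstract's end state in which the second image is fully displayed and the first is no longer visible.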
  • Patent number: 9645981
    Abstract: A system and machine-implemented method for providing image content corresponding to a business establishment is provided. Several webpages corresponding to a business establishment are received, and one or more webpages are selected from the several webpages, based on the content of each of the several webpages. At least one webpage related to the selected one or more webpages is retrieved. Image content is extracted from the retrieved at least one webpage. At least one annotation is generated for the extracted image content based on at least one characteristic of the extracted image content. The image content is filtered based on the generated at least one annotation for the extracted image content. The filtered image content is provided for display.
    Type: Grant
    Filed: January 18, 2013
    Date of Patent: May 9, 2017
    Assignee: Google Inc.
    Inventors: Hylke Niekele Buisman, Daniel Cotting, Avni Shah, Elizabeth Reid
  • Publication number: 20170061606
    Abstract: A system and computer implemented method for detecting the location of a mobile device using semantic indicators is provided. The method includes receiving, using one or more processors, a plurality of images captured by a mobile device at an area. The area is associated with a set of candidate locations. Using the one or more processors, one or more feature indicators associated with the plurality of images are detected. These feature indicators include semantic features related to the area. The semantic features are compared with a plurality of stored location features for the set of candidate locations. In accordance with the comparison, a location from the set of candidate locations is selected to identify an estimated position of the mobile device.
    Type: Application
    Filed: November 16, 2016
    Publication date: March 2, 2017
    Inventors: Daniel Raynaud, Boris Bluntschli, Daniel Cotting
  • Patent number: 9552375
    Abstract: Systems and methods for determining a geocode for an image based on user-provided search queries and corresponding user selections are provided. One example method includes determining a selection value for each of a plurality of search strings associated with an image based at least in part on user selection data. The method includes generating a textual document for the image based at least in part on the selection values. The textual document includes one or more of the plurality of search strings. The method includes identifying a plurality of geographic entities by analyzing the textual document using a textual processor. The method includes selecting one of the plurality of geographic entities as a primary geographic entity and associating, by the one or more computing devices, a geocode associated with the primary geographic entity with the image.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: January 24, 2017
    Assignee: Google Inc.
    Inventors: Wojciech Stanislaw Smietanka, Daniel Cotting, Boris Bluntschli, Nicolas Dumazet
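The pipeline this abstract describes — selection values per search string, a textual document built from them, and a textual processor that surfaces geographic entities — can be sketched as below. The threshold, the gazetteer lookup standing in for the textual processor, and all example strings are illustrative assumptions:

```python
# Hypothetical sketch of the geocoding pipeline in the abstract:
# keep search strings with high selection values, join them into a
# textual document, then pick the most frequent gazetteer entity.
def build_textual_document(search_strings, min_selection=0.1):
    """search_strings: {string: selection_value}. Keeps strings above
    an assumed threshold, ordered by descending selection value."""
    kept = [s for s, v in search_strings.items() if v >= min_selection]
    return " ".join(sorted(kept, key=lambda s: -search_strings[s]))

def primary_geo_entity(document, gazetteer):
    """Naive stand-in for the textual processor: count gazetteer hits
    and return the most frequent geographic entity, or None."""
    counts = {g: document.lower().count(g.lower()) for g in gazetteer}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None

queries = {"eiffel tower at night": 0.6, "paris landmark": 0.3, "blurry": 0.05}
doc = build_textual_document(queries)
entity = primary_geo_entity(doc, gazetteer=["Paris", "London"])
```

The image would then be geocoded with the coordinates associated with the selected primary entity.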
  • Patent number: 9524435
    Abstract: A system and computer implemented method for detecting the location of a mobile device using semantic indicators is provided. The method includes receiving, using one or more processors, a plurality of images captured by a mobile device at an area. The area is associated with a set of candidate locations. Using the one or more processors, one or more feature indicators associated with the plurality of images are detected. These feature indicators include semantic features related to the area. The semantic features are compared with a plurality of stored location features for the set of candidate locations. In accordance with the comparison, a location from the set of candidate locations is selected to identify an estimated position of the mobile device.
    Type: Grant
    Filed: March 20, 2015
    Date of Patent: December 20, 2016
    Assignee: Google Inc.
    Inventors: Daniel Raynaud, Boris Bluntschli, Daniel Cotting
  • Publication number: 20160275350
    Abstract: A system and computer implemented method for detecting the location of a mobile device using semantic indicators is provided. The method includes receiving, using one or more processors, a plurality of images captured by a mobile device at an area. The area is associated with a set of candidate locations. Using the one or more processors, one or more feature indicators associated with the plurality of images are detected. These feature indicators include semantic features related to the area. The semantic features are compared with a plurality of stored location features for the set of candidate locations. In accordance with the comparison, a location from the set of candidate locations is selected to identify an estimated position of the mobile device.
    Type: Application
    Filed: March 20, 2015
    Publication date: September 22, 2016
    Inventors: Daniel Raynaud, Boris Bluntschli, Daniel Cotting
  • Patent number: 9208171
    Abstract: Aspects of the disclosure relate generally to systems and methods for geographically locating images. For example, images from different sources may be associated with different types of location information or simply none at all. In order to reduce inconsistency among images, location information may be gathered for an image using bitmap processing, metadata processing, and information retrieved from where the image was found. This location information can be filtered to remove less reliable or conflicting information. Images may be clustered based on their appearance on an interactive online resource that corresponds to a user-defined event, based on image similarity, and by their appearance in a user photo album. The location information of the images of a cluster is then copied to all of the other images of that cluster.
    Type: Grant
    Filed: September 5, 2013
    Date of Patent: December 8, 2015
    Assignee: Google Inc.
    Inventors: Daniel Cotting, Krzysztof Sikora, Roland Kehl, Boris Bluntschli, Wojciech Stanislaw Smietanka, Martin Stefcek
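The final step this abstract describes — copying the location information of a cluster's images to all other images in that cluster — can be sketched directly. The cluster representation and the first-known-location policy are illustrative assumptions (the patented approach also filters unreliable or conflicting location data first):

```python
# Hypothetical sketch: propagate a known location to every member of
# each image cluster that lacks one. "First known location wins" is an
# assumed simplification of the filtering described in the abstract.
def propagate_locations(clusters):
    """clusters: list of clusters, each a list of {"id": ..., "location": ...}
    dicts where "location" may be absent. Mutates and returns clusters."""
    for cluster in clusters:
        known = next((img["location"] for img in cluster if img.get("location")), None)
        if known is not None:
            for img in cluster:
                if not img.get("location"):
                    img["location"] = known
    return clusters

clusters = [
    [{"id": 1, "location": (46.2, 6.1)}, {"id": 2}],  # one geotagged member
    [{"id": 3}],                                      # no location known
]
out = propagate_locations(clusters)
```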
  • Patent number: 9171352
    Abstract: Systems and methods for the processing of images are provided. In particular, a candidate image can be obtained for processing. The candidate image can have one or more associated image categorization parameters. One or more pixel groups can then be detected in the candidate image and the one or more pixel groups can be associated with semantic data. At least one reference image can then be identified based at least in part on the semantic data of the one or more pixel groups. Once the at least one reference image has been identified, a plurality of adjustment parameters can be determined. One or more pixel groups from the candidate image can then be processed to generate a processed image based at least in part on the plurality of adjustment parameters.
    Type: Grant
    Filed: December 4, 2014
    Date of Patent: October 27, 2015
    Assignee: Google Inc.
    Inventors: Daniel Paul Raynaud, Boris Bluntschli, Daniel Cotting
  • Publication number: 20150178322
    Abstract: Systems and methods for determining a geocode for an image based on user-provided search queries and corresponding user selections are provided. One example method includes determining a selection value for each of a plurality of search strings associated with an image based at least in part on user selection data. The method includes generating a textual document for the image based at least in part on the selection values. The textual document includes one or more of the plurality of search strings. The method includes identifying a plurality of geographic entities by analyzing the textual document using a textual processor. The method includes selecting one of the plurality of geographic entities as a primary geographic entity and associating, by the one or more computing devices, a geocode associated with the primary geographic entity with the image.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Applicant: Google Inc.
    Inventors: Wojciech Stanislaw Smietanka, Daniel Cotting, Boris Bluntschli, Nicolas Dumazet
  • Publication number: 20150153933
    Abstract: Methods and systems for presenting imagery associated with a geographic location to a user include providing at least one geographic map or panoramic imagery to a client for display in an interface configured for interactive navigation of the at least one geographic map or panoramic imagery, receiving a user selection collected by the interface indicating a location corresponding to the at least one geographic map or panoramic imagery, identifying a plurality of images associated with the received user selection, obtaining at least one user preference associated with the identified images, ranking the identified images based on at least one of the retrieved user preferences, and providing at least one ranked image for display in the interface, in accordance with the ranking.
    Type: Application
    Filed: March 16, 2012
    Publication date: June 4, 2015
    Applicant: Google Inc.
    Inventors: Daniel J. Filip, Dennis Tell, Daniel Cotting, Stephane Lafon, Andrew T. Szybalski, Luc Vincent
  • Patent number: 8933929
    Abstract: Systems and methods are disclosed for transferring information metadata from a first digital image to a second digital image. In one embodiment, an assignment module is configured to assign a corresponding portion of the first image to the second image using geolocation data. An extraction module is configured to extract a collection of features associated with the second image and the corresponding portion of the first image. An alignment module is configured to align the second image with a portion of the first image by transforming the second image so that features associated with the second image are geometrically aligned with the corresponding features of the portion of the first image. A metadata module is configured to associate metadata from the portion of the first image with the transformed second image. An annotation module is configured to annotate the second image with the associated metadata to generate an annotated image.
    Type: Grant
    Filed: January 3, 2012
    Date of Patent: January 13, 2015
    Assignee: Google Inc.
    Inventors: Daniel J. Filip, Daniel Cotting
  • Patent number: 8885952
    Abstract: Methods, systems, and articles of manufacture for presenting similar images are disclosed. A method for presenting similar images on a display device is disclosed. The method includes displaying a first image on the display device; determining one or more homographic relationships between the first image and a plurality of images; identifying, using the determined one or more homographic relationships, at least one image having a scene and a perspective which are similar to that of the first image; and displaying the identified image. Corresponding system and computer readable media embodiments are also disclosed.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: November 11, 2014
    Assignee: Google Inc.
    Inventors: Daniel J. Filip, Daniel Cotting
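The identification step this abstract describes — using the homographic relationships between a first image and a set of others to find one with a similar scene and perspective — can be sketched by measuring how far each homography deviates from the identity transform: a near-identity mapping implies a near-identical viewpoint. The Frobenius-distance criterion below is an illustrative assumption, not the patented similarity measure:

```python
# Hypothetical sketch: among images related to a first image by 3x3
# homographies (nested lists), pick the one whose homography is closest
# to the identity, i.e. whose perspective differs least.
def homography_deviation(H):
    """Frobenius distance of a 3x3 homography from the identity."""
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    return sum((H[i][j] - I[i][j]) ** 2 for i in range(3) for j in range(3)) ** 0.5

def most_similar(homographies):
    """homographies: {image_name: homography_to_first_image}.
    Returns the name with the smallest deviation from the identity."""
    return min(homographies, key=lambda n: homography_deviation(homographies[n]))

H_near = [[1.0, 0.02, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # slight shift
H_far = [[0.5, 0.9, 10.0], [0.1, 0.4, 5.0], [0.0, 0.0, 1.0]]    # strong warp
pick = most_similar({"a.jpg": H_near, "b.jpg": H_far})
```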