Patents by Inventor David Petrou

David Petrou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9836597
    Abstract: Systems and methods prevent or restrict the mining of content on a mobile device. For example, a method may include identifying a mining-restriction mark in low order bits or high order bits in a frame buffer of a mobile device and determining whether the mining-restriction mark prevents mining of content. Mining includes non-transient storage of a copy or derivations of data in the frame buffer. The method may also include preventing the mining of data in the frame buffer when the mining-restriction mark prevents mining.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: December 5, 2017
    Assignee: Google Inc.
    Inventors: Alfred Zalmon Spector, David Petrou, Blaise Aguera-Arcas, Matthew Sharifi
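To illustrate the idea in the abstract above, here is a minimal, hypothetical sketch of detecting a mining-restriction mark encoded in the low-order bits of frame-buffer bytes. The 8-bit marker value and the framing are invented for illustration; the patent does not specify this format.

```python
# Hypothetical sketch: a mining-restriction mark hidden in the low-order
# bits of frame-buffer bytes. The marker value below is an assumption.

MINING_RESTRICTED_MARK = 0b10110010  # assumed 8-bit marker, not from the patent


def extract_low_order_bits(frame_buffer: bytes, n_bytes: int = 8) -> int:
    """Pack the least-significant bit of each of the first n_bytes into one value."""
    value = 0
    for byte in frame_buffer[:n_bytes]:
        value = (value << 1) | (byte & 1)
    return value


def mining_allowed(frame_buffer: bytes) -> bool:
    """Return False when the restriction mark is present in the low-order bits."""
    return extract_low_order_bits(frame_buffer) != MINING_RESTRICTED_MARK


# A buffer whose low-order bits spell out the mark:
marked = bytes(0xFF if bit else 0xFE for bit in [1, 0, 1, 1, 0, 0, 1, 0])
assert not mining_allowed(marked)          # mark present: mining is blocked
assert mining_allowed(bytes(8))            # no mark: mining permitted
```

A real implementation would read the device frame buffer and could equally check high-order bits, as the claim language allows either.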
  • Patent number: 9824079
    Abstract: Systems and methods identify actionable content in onscreen content and provide at least a default action for the content. For example, a method may include receiving an image of a screen captured from a display of a mobile device, determining areas of actionable content in the image, and determining a respective action for each area of actionable content. The method may also include generating annotation data for the image, the annotation data including a visual cue that corresponds to a first area of actionable content, the visual cue being actionable to initiate the respective action when selected and providing the annotation data for display on the mobile device.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: November 21, 2017
    Assignee: Google LLC
    Inventors: Matthew Sharifi, David Petrou
  • Publication number: 20170330336
    Abstract: Methods and apparatus directed to segmenting content displayed on a computing device into regions. The segmenting of content displayed on the computing device into regions is accomplished via analysis of pixels of a “screenshot image” that captures at least a portion of (e.g., all of) the displayed content. Individual pixels of the screenshot image may be analyzed to determine one or more regions of the screenshot image and to optionally assign a corresponding semantic type to each of the regions. Some implementations are further directed to generating, based on one or more of the regions, interactive content to provide for presentation to the user via the computing device.
    Type: Application
    Filed: May 14, 2016
    Publication date: November 16, 2017
    Inventors: Dominik Roblek, David Petrou, Matthew Sharifi
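One simple way to segment a screenshot into regions by pixel analysis, as the abstract above describes, is to treat rows of uniform pixels as separators between content regions. This toy sketch works on a small grid of pixel values and is only illustrative; the claimed segmentation and semantic typing are far more general.

```python
# Illustrative sketch: segment a screenshot into horizontal content regions
# by treating uniform pixel rows as separators. Toy logic, not the patent's.

def segment_rows(pixels: list[list[int]]) -> list[tuple[int, int]]:
    """Return (start_row, end_row) spans of non-uniform (content) rows."""
    regions, start = [], None
    for y, row in enumerate(pixels):
        uniform = all(p == row[0] for p in row)
        if not uniform and start is None:
            start = y                      # a content region begins
        elif uniform and start is not None:
            regions.append((start, y - 1))  # a separator row ends it
            start = None
    if start is not None:
        regions.append((start, len(pixels) - 1))
    return regions


grid = [
    [0, 0, 0],   # uniform: separator
    [0, 5, 0],   # content
    [1, 5, 2],   # content
    [0, 0, 0],   # uniform: separator
    [3, 0, 0],   # content
]
print(segment_rows(grid))  # → [(1, 2), (4, 4)]
```

Each recovered span could then be assigned a semantic type (text, image, list entry) and used to generate interactive content.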
  • Patent number: 9811352
    Abstract: Systems and methods are provided for automating user input using onscreen content. For example, a method includes receiving a selection of a first image representing a previously captured screen of a mobile device, the first image having a corresponding timestamp, determining a set of stored user input actions that occur prior to the timestamp corresponding to the first image and after a timestamp corresponding to a reference image, the reference image representing another previously captured screen of the mobile device, and providing a user interface element configured to, when selected, initiate a replaying of the set of user input actions on the mobile device, starting from a state corresponding to the reference image.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: November 7, 2017
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, David Petrou
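The timestamp-window selection described above can be sketched as follows: keep the stored input actions that occur after the reference screenshot and no later than the selected screenshot, then replay them in order. The field names and action format here are invented for illustration.

```python
# Hedged sketch of choosing which stored user input actions to replay.
# InputAction and its fields are illustrative, not the patent's data model.

from dataclasses import dataclass


@dataclass
class InputAction:
    timestamp: float
    description: str  # e.g. "tap(120, 480)"


def actions_to_replay(actions, ref_ts, selected_ts):
    """Actions strictly after the reference image, up to the selected image."""
    return sorted(
        (a for a in actions if ref_ts < a.timestamp <= selected_ts),
        key=lambda a: a.timestamp,
    )


log = [InputAction(1.0, "tap(10, 20)"),
       InputAction(2.5, "type('hi')"),
       InputAction(4.0, "swipe_up()")]
replay = actions_to_replay(log, ref_ts=1.0, selected_ts=3.0)
print([a.description for a in replay])  # → ["type('hi')"]
```

Replaying the filtered list from the device state captured in the reference image restores the state shown in the selected image.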
  • Patent number: 9798708
    Abstract: Systems and methods are provided for highlighting relevant mobile onscreen content. For example, a mobile device can include memory storing instructions that, when executed by at least one processor, cause the mobile device to perform operations including capturing an image of a screen on the mobile device, the screen being displayed on a display of the mobile device, and providing the image to a server. The operations may also include receiving annotation data from the server, the annotation data including a visual cue that corresponds to a portion of the image that includes an entry in a list, the entry being associated with an entity in a graph-based data store relevant to a user of the mobile device, and displaying the annotation data with a second screen being displayed on the display of the mobile device so that the visual cue aligns with the entry in the second screen.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: October 24, 2017
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, David Petrou
  • Publication number: 20170300495
    Abstract: Methods, systems, and apparatus for receiving a query image, receiving one or more entities that are associated with the query image, identifying, for one or more of the entities, one or more candidate search queries that are pre-associated with the one or more entities, generating a respective relevance score for each of the candidate search queries, selecting, as a representative search query for the query image, a particular candidate search query based at least on the generated respective relevance scores and providing the representative search query for output in response to receiving the query image.
    Type: Application
    Filed: April 18, 2016
    Publication date: October 19, 2017
    Inventors: Matthew Sharifi, David Petrou, Abhanshu Sharma
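The selection step in the abstract above can be sketched as: gather each entity's pre-associated candidate queries, score every candidate, and return the highest-scoring one as the representative query. The scorer below is a stand-in; the patented relevance scoring is not specified here.

```python
# Illustrative sketch of picking a representative search query for an image.
# The relevance scorer is a toy stand-in, not the patented scoring function.

def select_representative_query(entity_candidates, score):
    """entity_candidates: dict mapping entity -> list of candidate queries."""
    best_query, best_score = None, float("-inf")
    for entity, candidates in entity_candidates.items():
        for query in candidates:
            s = score(entity, query)
            if s > best_score:
                best_query, best_score = query, s
    return best_query


candidates = {
    "Eiffel Tower": ["eiffel tower height", "eiffel tower tickets"],
    "Paris": ["paris weather"],
}
# Toy scorer: prefer shorter queries (purely for demonstration).
rep = select_representative_query(candidates, lambda e, q: -len(q))
print(rep)  # → "paris weather"
```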
  • Patent number: 9792304
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing queries made up of images. In one aspect, a method includes indexing images by image descriptors. The method further includes associating descriptive n-grams with the images. In another aspect, a method includes receiving a query, identifying text describing the query, and performing a search according to the text identified for the query.
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: October 17, 2017
    Assignee: Google Inc.
    Inventors: Ulrich Buddemeier, Gabriel Taubman, Hartwig Adam, Charles J. Rosenberg, Hartmut Neven, David Petrou, Fernando Brucher
  • Patent number: 9788179
    Abstract: Systems and methods are provided for detecting and ranking entities identified in screen content displayed on a mobile device. For example, a method includes receiving an image captured from a mobile device display for a mobile application and determining a window that includes a chronological set of images, the images each representing a respective screen captured from a display of a mobile device and having an associated timestamp. The method also includes identifying entities appearing in images in a first portion of the window using text for images in a remaining portion of the window as context to disambiguate ambiguous entity references.
    Type: Grant
    Filed: May 14, 2015
    Date of Patent: October 10, 2017
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, David Petrou
  • Publication number: 20170286493
    Abstract: A visual query is received from a client system, along with location information for the client system, and processed by a server system. The server system sends the visual query and the location information to a visual query search system, and receives from the visual query search system enhanced location information based on the visual query and the location information. The server system then sends a search query, including the enhanced location information, to a location-based search system, receives one or more search results, and provides the one or more search results to the client system.
    Type: Application
    Filed: June 15, 2017
    Publication date: October 5, 2017
    Inventors: David Petrou, John Flynn, Hartwig Adam, Hartmut Neven
  • Patent number: 9762651
    Abstract: Systems and methods are provided for sharing a screen from a mobile device. For example, a method includes capturing an image of a screen displayed on the mobile device in response to a command to share the screen, receiving user instructions for redacting a portion of the image, and transmitting the image with the selected portion redacted to a recipient device selected by the user. As another example, a method includes receiving, from a first mobile device, an identifier for a recipient and an image representing a captured screen of a first mobile device, copying the image to an image repository associated with the recipient, performing recognition on the image, generating annotation data for the image, based on the recognition, that includes at least one visual cue, and providing the image and the annotation data to a second mobile device, the second mobile device being associated with the recipient.
    Type: Grant
    Filed: August 21, 2014
    Date of Patent: September 12, 2017
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, David Petrou
  • Publication number: 20170212901
    Abstract: Embodiments retrieve a set of search results that have been previously identified as having at least one associated date or location. A timeline or map is displayed that visually represents the distribution of the dates or locations within the results. The timeline is displayed with a histogram graph corresponding to the number of dates in the search results at points along the timeline. The map is displayed with markers at the locations corresponding to the locations in the search results. The user can navigate the result set using the displayed timeline or map.
    Type: Application
    Filed: February 6, 2017
    Publication date: July 27, 2017
    Inventors: Jeffrey C. Reynar, Michael Gordon, David J. Vespe, David Petrou, Andrew W. Hogue
  • Patent number: 9703541
    Abstract: Systems and methods are provided for suggesting actions for entities discovered in content on a mobile device. An example method can include running a mobile device emulator with a deep link for a mobile application, determining a main entity for the deep link, mapping the main entity to the deep link, storing the mapping of the main entity to the deep link in a memory, and providing the mapping to a mobile device, the mapping enabling a user of the mobile device to select the deep link when the main entity is displayed on a screen of the mobile device. Another example method can include identifying at least one entity in content generated by a mobile application, identifying an action mapped to the at least one entity, the action representing a deep link into a second mobile application, and providing a control to initiate the action for the entity.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: July 11, 2017
    Assignee: Google Inc.
    Inventors: Matthew Dominic Sharifi, David Petrou
  • Patent number: 9684693
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, relating to on-device query annotating. In some implementations, a search query is received, and a mobile device identifies a reference to a particular entity and a reference to a category based on the query. A model that is stored on the mobile device and stores one or more facts that are associated with one or more entities is accessed. A subset of facts from among the facts that are stored in the model for the particular entity is selected. The search query is annotated based at least on one or more facts of the subset of facts that are stored in the model for the particular entity. The annotated search query is transmitted, from the mobile device to a search engine, for processing. A result of processing the annotated search query is received by the mobile device.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: June 20, 2017
    Assignee: Google Inc.
    Inventors: David Petrou, Matthew Sharifi
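The on-device annotation flow above can be sketched as: look up locally stored facts for an entity referenced in the query, select a fact matching the referenced category, and append it to the query before it is sent to the search engine. The dictionary "model" and the annotation format are assumptions for illustration only.

```python
# Hedged sketch of on-device query annotation. The fact store and output
# format are invented; the patent's model and annotation scheme differ.

ON_DEVICE_FACTS = {  # illustrative local model: entity -> facts
    "golden gate bridge": {"city": "San Francisco", "opened": "1937"},
}


def annotate_query(query: str, category: str) -> str:
    """Append a stored fact for an entity found in the query, if any."""
    for entity, facts in ON_DEVICE_FACTS.items():
        if entity in query.lower() and category in facts:
            return f"{query} ({category}: {facts[category]})"
    return query  # no matching entity or fact: send the query unchanged


print(annotate_query("When did the Golden Gate Bridge open", "opened"))
# → "When did the Golden Gate Bridge open (opened: 1937)"
```

The annotated query gives the server-side search engine extra context without shipping the whole on-device model.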
  • Publication number: 20170155850
    Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being selectable to induce execution of a respective action, the action interface being displayed in a viewfinder.
    Type: Application
    Filed: February 9, 2017
    Publication date: June 1, 2017
    Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
  • Publication number: 20170153782
    Abstract: Implementations provide an improved drag-and-drop operation on a mobile device. For example, a method includes identifying a drag area in a user interface of a first mobile application in response to a drag command and receiving a drop location in a second mobile application that differs from the first mobile application. The method may also include determining that a drop location is a text input control and the drag area is not text-based, performing a search for a text description of the drag area, and pasting the text description into the text input control. The method may also include determining that a drop location is an image input control and that the drag area is text based, performing a search using the drag area for a responsive image, and pasting the responsive image into the image input control.
    Type: Application
    Filed: February 14, 2017
    Publication date: June 1, 2017
    Inventors: Matthew Sharifi, David Petrou
  • Publication number: 20170139879
    Abstract: Systems and methods simulate a hyperlink in regular content displayed on a screen. An example method can include generating, responsive to detecting a simulated hyperlink indication, a centered selection from content displayed on a display of a computing device, providing the centered selection to a simulated hyperlink model that predicts an operation given the centered selection, and initiating the operation using an intent associated with a mobile application. The simulated hyperlink model may also provide, from the centered selection, an intelligent selection used as the intent's parameter.
    Type: Application
    Filed: November 18, 2015
    Publication date: May 18, 2017
    Inventors: Matthew Sharifi, David Petrou
  • Publication number: 20170118576
    Abstract: Systems and methods are provided for a personalized entity repository. For example, a computing device comprises a personalized entity repository having fixed sets of entities from an entity repository stored at a server, a processor, and memory storing instructions that cause the computing device to identify fixed sets of entities that are relevant to a user based on context associated with the computing device, rank the fixed sets by relevancy, and update the personalized entity repository using selected sets determined based on the rank and on set usage parameters applicable to the user. In another example, a method includes generating fixed sets of entities from an entity repository, including location-based sets and topic-based sets, and providing a subset of the fixed sets to a client, the client requesting the subset based on the client's location and on items identified in content generated for display on the client.
    Type: Application
    Filed: December 8, 2015
    Publication date: April 27, 2017
    Inventors: Matthew Sharifi, Jorge Pereira, Dominik Roblek, Julian Odell, Cong Li, David Petrou
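The ranking-and-update step above can be sketched as: score each fixed entity set for relevance to the user's context, rank the sets, and keep only as many as the device's usage parameters allow. The score values and the simple count budget are illustrative assumptions.

```python
# A minimal sketch, assuming a numeric relevance score per fixed entity set
# and a per-user cap on how many sets the device keeps; names are invented.

def select_sets_for_device(sets_with_scores, max_sets):
    """Rank fixed entity sets by relevance and keep the top max_sets."""
    ranked = sorted(sets_with_scores, key=lambda s: s[1], reverse=True)
    return [name for name, _ in ranked[:max_sets]]


scored = [("landmarks:paris", 0.9), ("topic:cooking", 0.4),
          ("landmarks:nyc", 0.7)]
print(select_sets_for_device(scored, max_sets=2))
# → ['landmarks:paris', 'landmarks:nyc']
```

In practice the budget would reflect storage and usage parameters rather than a fixed count, and the scores would come from the device's location and displayed content.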
  • Publication number: 20170098159
    Abstract: Systems and methods are provided for suggesting actions for selected text based on content displayed on a mobile device. An example method can include converting a selection made via a display device into a query, providing the query to an action suggestion model that is trained to predict an action given a query, each action being associated with a mobile application, receiving one or more predicted actions, and initiating display of the one or more predicted actions on the display device. Another example method can include identifying, from search records, queries where a website is highly ranked, the website being one of a plurality of websites in a mapping of websites to mobile applications. The method can also include generating positive training examples for an action suggestion model from the identified queries, and training the action suggestion model using the positive training examples.
    Type: Application
    Filed: October 1, 2015
    Publication date: April 6, 2017
    Inventors: Matthew Sharifi, Daniel Ramage, David Petrou
  • Publication number: 20170091448
    Abstract: Systems and methods prevent or restrict the mining of content on a mobile device. For example, a method may include identifying a mining-restriction mark in low order bits or high order bits in a frame buffer of a mobile device and determining whether the mining-restriction mark prevents mining of content. Mining includes non-transient storage of a copy or derivations of data in the frame buffer. The method may also include preventing the mining of data in the frame buffer when the mining-restriction mark prevents mining.
    Type: Application
    Filed: December 14, 2016
    Publication date: March 30, 2017
    Inventors: Alfred Zalmon Spector, David Petrou, Blaise Aguera-Arcas, Matthew Sharifi
  • Patent number: 9606716
    Abstract: Implementations provide an improved drag-and-drop operation on a mobile device. For example, a method includes identifying a drag area in a user interface of a first mobile application in response to a drag command, identifying an entity from a data store based on recognition performed on content in the drag area, receiving a drop location associated with a second mobile application, determining an action to perform in the second mobile application based on the drop location, and performing the action in the second mobile application using the entity. Another method may include receiving a selection of a smart copy control for a text input control in a first mobile application, receiving a selected area of a display generated by a second mobile application, identifying an entity in the selected area, automatically navigating back to the text input control, and pasting a description of the entity in the text input control.
    Type: Grant
    Filed: October 24, 2014
    Date of Patent: March 28, 2017
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, David Petrou