Patents by Inventor David Petrou
David Petrou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200358901
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, at a mobile computing device that is associated with a called user, a call from a calling computing device that is associated with a calling user; in response to receiving the call, determining, by the mobile computing device, that data associated with the called user indicates that the called user will not respond to the call; in response to determining that the called user will not respond to the call, inferring, by the mobile computing device, an informational need of the calling user; and automatically providing, from the mobile computing device to the calling computing device, information associated with the called user and that satisfies the inferred informational need of the calling user.
Type: Application
Filed: May 9, 2019
Publication date: November 12, 2020
Applicant: Google LLC
Inventors: Shavit Matias, Noam Etzion-Rosenberg, Blaise Aguera-Arcas, Benjamin Schlesinger, Brandon Barbello, Ori Kabeli, David Petrou, Yossi Matias, Nadav Bar
-
Publication number: 20200348813
Abstract: Implementations provide an improved drag-and-drop operation on a mobile device. For example, a method includes identifying a drag area in a user interface of a first mobile application in response to a drag command and receiving a drop location in a second mobile application that differs from the first mobile application. The method may also include determining that a drop location is a text input control and the drag area is not text-based, performing a search for a text description of the drag area, and pasting the text description into the text input control. The method may also include determining that a drop location is an image input control and that the drag area is text-based, performing a search using the drag area for a responsive image, and pasting the responsive image into the image input control.
Type: Application
Filed: July 22, 2020
Publication date: November 5, 2020
Inventors: Matthew Sharifi, David Petrou
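The drop-target dispatch this abstract describes can be sketched as follows. This is a minimal illustration, not the filed implementation; `search_text_description` and `search_image` are hypothetical placeholders standing in for the search step.

```python
# Sketch of the cross-type drag-and-drop dispatch in publication 20200348813.
# The two search helpers are hypothetical stand-ins, not part of the filing.

def search_text_description(drag_area):
    # Hypothetical: return a text description of non-text content.
    return f"description of {drag_area['id']}"

def search_image(text):
    # Hypothetical: return an image responsive to the given text.
    return {"image_for": text}

def resolve_drop(drag_area, drop_target):
    """Convert the dragged content to match the drop target's input type."""
    if drop_target == "text_input" and drag_area["kind"] != "text":
        # Non-text content dropped on a text control: search for a description.
        return search_text_description(drag_area)
    if drop_target == "image_input" and drag_area["kind"] == "text":
        # Text dropped on an image control: search for a responsive image.
        return search_image(drag_area["content"])
    # Types already match: paste the content as-is.
    return drag_area["content"]
```

When the dragged content and the drop control already agree on type, the sketch simply passes the content through, matching the abstract's framing of the search step as a mismatch fallback.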
-
Patent number: 10803408
Abstract: Systems and methods are provided for content-based security for computing devices. An example method includes identifying content rendered by a mobile application, the content being rendered during a session, generating feature vectors from the content, and determining that the feature vectors do not match a classification model. The method also includes providing, in response to the determination that the feature vectors do not match the classification model, a challenge configured to authenticate a user of the mobile device. Another example method includes determining that a computing device is located at a trusted location, capturing information from a session, the information coming from content rendered by a mobile application during the session, generating feature vectors for the session, and repeating this until a training criterion is met. The method also includes training a classification model using the feature vectors and authenticating a user of the device using the trained classification model.
Type: Grant
Filed: September 17, 2018
Date of Patent: October 13, 2020
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, Kai Wang, David Petrou
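The train-then-challenge flow in this abstract can be illustrated with a toy model. The centroid-plus-distance classifier and the threshold value below are illustrative assumptions, not the patented model.

```python
# Toy stand-in for the session classification model in patent 10803408:
# learn typical session feature vectors at a trusted location, then
# challenge sessions whose vectors fall too far from the learned profile.
import math

class SessionModel:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.centroid = None

    def train(self, feature_vectors):
        # Average the trusted-session vectors into a centroid profile.
        dims = len(feature_vectors[0])
        self.centroid = [
            sum(v[i] for v in feature_vectors) / len(feature_vectors)
            for i in range(dims)
        ]

    def matches(self, vector):
        # A session matches when its distance to the centroid is small.
        return math.dist(vector, self.centroid) <= self.threshold

def should_challenge(model, session_vector):
    # Challenge (re-authenticate) when the session does not match the model.
    return not model.matches(session_vector)
```

A production classifier would be far richer, but the sketch shows the shape of the method: feature vectors from rendered content on one side, a trained model and a mismatch-triggered challenge on the other.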
-
Patent number: 10803391
Abstract: Systems and methods are provided for personal entity modeling for computing devices. For example, a computing device comprises at least one processor and memory storing instructions that, when executed by the at least one processor, cause the computing device to perform operations including identifying a personal entity in content generated for display on the computing device, generating training examples for the personal entity from the content, and updating an embedding used to model the personal entity using the training examples. The embedding may be used to make predictions regarding the personal entity. For example, the operations may also include predicting an association between a first personal entity displayed on the computing device and a second entity based on the embedding, and providing a recommendation, to be displayed on the computing device, related to the second entity.
Type: Grant
Filed: July 29, 2015
Date of Patent: October 13, 2020
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou, Pranav Khaitan
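The embedding-update step can be sketched with a single on-device training example. Logistic-loss SGD is an illustrative choice here; the patent does not specify this update rule.

```python
# Illustrative embedding update for patent 10803391: nudge a personal-entity
# embedding toward contexts it co-occurs with (label 1.0) and away from
# negative examples (label 0.0). The update rule is an assumption.
import math

def update_embedding(entity_vec, context_vec, label, lr=0.1):
    # Predicted association: sigmoid of the dot product.
    dot = sum(e * c for e, c in zip(entity_vec, context_vec))
    pred = 1.0 / (1.0 + math.exp(-dot))
    err = label - pred
    # Gradient step moves the entity vector toward or away from the context.
    return [e + lr * err * c for e, c in zip(entity_vec, context_vec)]
```

Repeating this over training examples mined from on-screen content would gradually encode the entity's associations, which is what the abstract's prediction and recommendation steps then consume.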
-
Publication number: 20200288063
Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being configured to induce execution of a respective action, the action interface being displayed in a viewfinder.
Type: Application
Filed: May 22, 2020
Publication date: September 10, 2020
Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
-
Publication number: 20200285670
Abstract: Methods, systems, and apparatus for receiving a query image and a user tap location, processing the received query image based on the user tap location, identifying one or more entities associated with the processed query image, and, in response to receiving (i) the query image and (ii) the user tap location, providing information about the one or more identified entities.
Type: Application
Filed: May 22, 2020
Publication date: September 10, 2020
Inventors: Abhanshu Sharma, David Petrou, Matthew Sharifi
-
Patent number: 10739982
Abstract: Implementations provide an improved drag-and-drop operation on a mobile device. For example, a method includes identifying a drag area in a user interface of a first mobile application in response to a drag command and receiving a drop location in a second mobile application that differs from the first mobile application. The method may also include determining that a drop location is a text input control and the drag area is not text-based, performing a search for a text description of the drag area, and pasting the text description into the text input control. The method may also include determining that a drop location is an image input control and that the drag area is text-based, performing a search using the drag area for a responsive image, and pasting the responsive image into the image input control.
Type: Grant
Filed: March 7, 2019
Date of Patent: August 11, 2020
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
-
Patent number: 10733360
Abstract: Systems and methods simulate a hyperlink in regular content displayed on a screen. An example method can include generating, responsive to detecting a simulated hyperlink indication, a centered selection from content displayed on a display of a computing device, providing the centered selection to a simulated hyperlink model that predicts an operation given the centered selection, and initiating the operation using an intent associated with a mobile application. The simulated hyperlink model may also provide, from the centered selection, an intelligent selection used as the intent's parameter.
Type: Grant
Filed: July 31, 2018
Date of Patent: August 4, 2020
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
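The centered-selection and model-prediction steps can be sketched as below. The character-window selection and the `model` callable are illustrative assumptions; the actual model and selection logic are not specified at this level of the abstract.

```python
# Sketch of the simulated-hyperlink flow in patent 10733360: take a
# selection centered on the tap point, ask a model for an operation,
# and build an intent-style action. `model` is a caller-supplied
# placeholder for the simulated hyperlink model.

def centered_selection(screen_text, tap_index, radius=10):
    # Take a character window centered on the tap location.
    start = max(0, tap_index - radius)
    return screen_text[start:tap_index + radius]

def simulate_hyperlink(screen_text, tap_index, model):
    selection = centered_selection(screen_text, tap_index)
    # The model predicts an operation (e.g. "dial") and the refined
    # argument to pass as the intent's parameter.
    operation, argument = model(selection)
    return {"action": operation, "parameter": argument}
```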
-
Publication number: 20200226187
Abstract: Systems and methods are provided for a personalized entity repository. For example, a computing device comprises a personalized entity repository having fixed sets of entities from an entity repository stored at a server, a processor, and memory storing instructions that cause the computing device to identify fixed sets of entities that are relevant to a user based on context associated with the computing device, rank the fixed sets by relevancy, and update the personalized entity repository using selected sets determined based on the rank and on set usage parameters applicable to the user. In another example, a method includes generating fixed sets of entities from an entity repository, including location-based sets and topic-based sets, and providing a subset of the fixed sets to a client, the client requesting the subset based on the client's location and on items identified in content generated for display on the client.
Type: Application
Filed: January 7, 2019
Publication date: July 16, 2020
Inventors: Matthew Sharifi, Jorge Pereira, Dominik Roblek, Julian Odell, Cong Li, David Petrou
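The rank-and-select step on the device side can be sketched as follows. Scoring a set by tag overlap with the device context is an illustrative stand-in for whatever relevance signal the filing actually uses.

```python
# Sketch of the device-side update in publication 20200226187: score fixed
# entity sets against the device context, rank them, and keep the top sets.
# The overlap-based scoring rule is illustrative, not the patented method.

def update_repository(fixed_sets, context, max_sets):
    def relevance(entity_set):
        # Score by overlap between the set's tags and the device context.
        return len(set(entity_set["tags"]) & set(context))

    ranked = sorted(fixed_sets, key=relevance, reverse=True)
    # Keep only sets with nonzero relevance, up to the usage budget.
    return [s["name"] for s in ranked if relevance(s) > 0][:max_sets]
```

The `max_sets` budget plays the role of the abstract's "set usage parameters": the device keeps only as many fixed sets as its local repository allows.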
-
Patent number: 10701272
Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being configured to induce execution of a respective action, the action interface being displayed in a viewfinder.
Type: Grant
Filed: September 12, 2019
Date of Patent: June 30, 2020
Assignee: Google LLC
Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
-
Patent number: 10664519
Abstract: Methods, systems, and apparatus for receiving a query image and a user tap location, processing the received query image based on the user tap location, identifying one or more entities associated with the processed query image, and, in response to receiving (i) the query image and (ii) the user tap location, providing information about the one or more identified entities.
Type: Grant
Filed: June 7, 2019
Date of Patent: May 26, 2020
Assignee: Google LLC
Inventors: Abhanshu Sharma, David Petrou, Matthew Sharifi
-
Publication number: 20200151211
Abstract: A system and method of identifying objects is provided. In one aspect, the system and method include a hand-held device with a display, camera, and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
Type: Application
Filed: January 16, 2020
Publication date: May 14, 2020
Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
-
Patent number: 10652706
Abstract: Systems and methods are provided for disambiguating entities in a mobile environment and using the disambiguated entity to perform actions. An example method includes identifying an ambiguous entity reference in a screen shot of an interface generated by a first mobile application executing on a mobile device, determining a chronological window of content captured prior to the screen shot, and identifying a plurality of entities appearing in the chronological window of content. The method also includes disambiguating the ambiguous entity reference using the plurality of entities appearing in the chronological window and using the disambiguated entity to perform an action in a second mobile application executing on the mobile device.
Type: Grant
Filed: March 14, 2019
Date of Patent: May 12, 2020
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
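The disambiguation step can be sketched by matching the ambiguous reference against entities seen in the recent-content window. Token overlap is an illustrative matching choice, not the patented technique.

```python
# Sketch of the disambiguation step in patent 10652706: resolve an ambiguous
# on-screen reference against entities collected from a chronological window
# of recently captured content. Token-overlap matching is illustrative.

def disambiguate(ambiguous_ref, window_entities):
    ref_tokens = set(ambiguous_ref.lower().split())
    best, best_score = None, 0
    for entity in window_entities:
        # Prefer the recently seen entity sharing the most tokens.
        score = len(ref_tokens & set(entity.lower().split()))
        if score > best_score:
            best, best_score = entity, score
    return best  # None when nothing in the window matches
```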
-
Publication number: 20200098011
Abstract: Systems and methods are shown for providing private local sponsored content selection and improving intelligence models through distribution among mobile devices. This allows greater data-gathering capabilities through the use of the sensors of the mobile devices, as well as data stored on data storage components of the mobile devices, to create predictive models while offering better opportunities to preserve privacy. Locally stored profiles comprising machine intelligence models may also be used to determine the relevance of the data gathered and in improving an aggregated model for identifying the relevance of data and the selection of sponsored content items. Distributed optimization is used in conjunction with privacy techniques to create the improved machine intelligence models. Publishers may also benefit from the improved privacy by protecting the statistics of the type or volume of sponsored content items shown with publisher content.
Type: Application
Filed: November 25, 2019
Publication date: March 26, 2020
Applicant: Google LLC
Inventors: Keith Bonawitz, Daniel Ramage, David Petrou
-
Patent number: 10592261
Abstract: Systems and methods are provided for automating user input using onscreen content. For example, a method includes receiving a selection of a first screen capture image representing a screen captured on a mobile device associated with a user, the first image having a first timestamp. The method also includes determining, using a data store of images of previously captured screens of the mobile device, a reference image from the data store that has a timestamp prior to the first timestamp, identifying a plurality of images in the data store that have respective timestamps between the timestamp for the reference image and the first timestamp, and providing the reference image, the plurality of images, and the first image to the mobile device.
Type: Grant
Filed: April 1, 2019
Date of Patent: March 17, 2020
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
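The reference-image and in-between-images lookup can be sketched over a data store modeled as (timestamp, image id) pairs. How the reference image is chosen is not stated in the abstract; the `gap` rule below (latest capture at least `gap` seconds before the selection) is purely an assumption for illustration.

```python
# Sketch of the lookup in patent 10592261: given the timestamp of a selected
# screen capture, find a prior reference image and the captures between them.
# The gap-based reference-selection rule is an assumption, not from the patent.

def collect_replay(data_store, first_ts, gap=60):
    # Reference image: the latest capture at least `gap` seconds before
    # the selected image (illustrative rule).
    reference_ts = max(ts for ts, _ in data_store if ts <= first_ts - gap)
    # The plurality of images between the reference and the selection.
    between = [img for ts, img in sorted(data_store)
               if reference_ts < ts < first_ts]
    return reference_ts, between
```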
-
Publication number: 20200073883
Abstract: Embodiments retrieve a set of search results that have been previously identified as having at least one associated date or location. A timeline or map is displayed that visually represents the distribution of the dates or locations within the results. The timeline is displayed with a histogram graph corresponding to the number of dates in the search results at points along the timeline. The map is displayed with markers at the locations corresponding to the locations in the search results. The user can navigate the result set using the displayed timeline or map.
Type: Application
Filed: November 6, 2019
Publication date: March 5, 2020
Inventors: Jeffrey C. Reynar, Michael Gordon, David J. Vespe, David Petrou, Andrew W. Hogue
-
Publication number: 20200050610
Abstract: Methods, systems, and apparatus for receiving a query image, receiving one or more entities that are associated with the query image, identifying, for one or more of the entities, one or more candidate search queries that are pre-associated with the one or more entities, generating a respective relevance score for each of the candidate search queries, selecting, as a representative search query for the query image, a particular candidate search query based at least on the generated respective relevance scores, and providing the representative search query for output in response to receiving the query image.
Type: Application
Filed: October 18, 2019
Publication date: February 13, 2020
Inventors: Matthew Sharifi, David Petrou, Abhanshu Sharma
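The select-a-representative-query step can be sketched as below. Weighting each candidate query by the summed weights of the entities it is pre-associated with is an illustrative scoring rule; the filing's actual relevance scoring is not given in the abstract.

```python
# Sketch of publication 20200050610: score the candidate queries
# pre-associated with an image's entities, then pick the best one.
# The sum-of-entity-weights scoring is illustrative, not the patented rule.

def representative_query(entity_queries, entity_weights):
    scores = {}
    for entity, queries in entity_queries.items():
        for q in queries:
            # A query accumulates relevance from every entity it serves.
            scores[q] = scores.get(q, 0.0) + entity_weights.get(entity, 0.0)
    # The representative query is the highest-scoring candidate.
    return max(scores, key=scores.get)
```

A query shared by several recognized entities naturally outranks one tied to a single entity, which matches the intuition that the representative query should describe the image as a whole.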
-
Patent number: 10552476
Abstract: A system and method of identifying objects is provided. In one aspect, the system and method include a hand-held device with a display, camera, and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
Type: Grant
Filed: September 6, 2019
Date of Patent: February 4, 2020
Assignee: Google LLC
Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
-
Patent number: 10534808
Abstract: A visual query such as a photograph, a screen shot, a scanned image, a video frame, or an image created by a content authoring application is submitted to a visual query search system. The search system processes the visual query by sending it to a plurality of parallel search systems, each implementing a distinct visual query search process. These parallel search systems may include but are not limited to optical character recognition (OCR), facial recognition, product recognition, bar code recognition, object-or-object-category recognition, named entity recognition, and color recognition. At least one search result is then sent to the client system. In some embodiments, when the visual query is an image containing a text element and a non-text element, at least one search result includes an optical character recognition result for the text element and at least one image-match result for the non-text element.
Type: Grant
Filed: February 18, 2014
Date of Patent: January 14, 2020
Assignee: GOOGLE LLC
Inventor: David Petrou
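The fan-out to parallel search systems can be sketched with a thread pool. The individual recognition systems are caller-supplied placeholders here; only the parallel dispatch-and-gather shape comes from the abstract.

```python
# Sketch of the parallel dispatch in patent 10534808: send one visual query
# to several search systems (OCR, facial recognition, etc.) at once and
# gather whatever results come back. The systems themselves are placeholders.
from concurrent.futures import ThreadPoolExecutor

def visual_query_search(query_image, search_systems):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query_image)
                   for name, fn in search_systems.items()}
        # Keep only systems that produced a result for this image.
        return {name: f.result() for name, f in futures.items()
                if f.result() is not None}
```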
-
Publication number: 20200007774
Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being configured to induce execution of a respective action, the action interface being displayed in a viewfinder.
Type: Application
Filed: September 12, 2019
Publication date: January 2, 2020
Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig