Patents by Inventor David Petrou
David Petrou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190391996
Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
Type: Application
Filed: September 6, 2019
Publication date: December 26, 2019
Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
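The frame-to-frame comparison this abstract describes can be approximated by counting how often each recognized object's retrieved information recurs across consecutive camera frames: the most consistently recurring object is a reasonable proxy for the one the user is holding the camera on. This is a minimal sketch, not the patented method; the function name and the label-list representation are illustrative assumptions.

```python
from collections import Counter

def likely_object_of_interest(frame_detections):
    """Given per-frame lists of recognized object labels, return the label
    that recurs in the most frames -- a rough stand-in for the object
    'likely to be of greatest interest to the user'."""
    counts = Counter()
    for detections in frame_detections:
        counts.update(set(detections))  # count each label at most once per frame
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Example: "mug" appears in all three frames, so it wins.
frames = [["mug", "laptop"], ["mug"], ["mug", "phone"]]
result = likely_object_of_interest(frames)
```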
-
Patent number: 10515114
Abstract: A facial recognition search system identifies one or more likely names (or other personal identifiers) corresponding to the facial image(s) in a query as follows. After receiving the visual query with one or more facial images, the system identifies images that potentially match the respective facial image in accordance with visual similarity criteria. Then one or more persons associated with the potential images are identified. For each identified person, person-specific data comprising metrics of social connectivity to the requester are retrieved from a plurality of applications such as communications applications, social networking applications, calendar applications, and collaborative applications. An ordered list of persons is then generated by ranking the identified persons in accordance with at least metrics of visual similarity between the respective facial image and the potential image matches and with the social connection metrics.
Type: Grant
Filed: July 9, 2018
Date of Patent: December 24, 2019
Assignee: Google LLC
Inventors: David Petrou, Andrew Rabinovich, Hartwig Adam
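The final ranking step blends two signal families: visual similarity between facial images and social-connection metrics to the requester. A minimal sketch of such a weighted blend follows; the weights, score ranges, and tuple layout are illustrative assumptions, not taken from the patent.

```python
def rank_candidates(candidates, w_visual=0.6, w_social=0.4):
    """candidates: list of (name, visual_similarity, social_connectivity)
    tuples with both scores in [0, 1]. Returns names ordered best-first
    by a weighted combination of the two metrics."""
    scored = [(w_visual * v + w_social * s, name) for name, v, s in candidates]
    return [name for _, name in sorted(scored, reverse=True)]

# Bob looks less similar but is far better socially connected to the requester.
ordered = rank_candidates([("alice", 0.9, 0.1), ("bob", 0.5, 0.9)])
```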
-
Patent number: 10509817
Abstract: Embodiments retrieve a set of search results that have been previously identified as having at least one associated date or location. A timeline or map is displayed that visually represents the distribution of the dates or locations within the results. The timeline is displayed with a histogram graph corresponding to the number of dates in the search results at points along the timeline. The map is displayed with markers at the locations corresponding to the locations in the search results. The user can navigate the result set using the displayed timeline or map.
Type: Grant
Filed: February 6, 2017
Date of Patent: December 17, 2019
Assignee: Google LLC
Inventors: Jeffrey C. Reynar, Michael Gordon, David J. Vespe, David Petrou, Andrew W. Hogue
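The timeline histogram described here amounts to bucketing the dates found in the result set and counting per bucket. A minimal sketch using year-sized buckets (the bucket granularity is an assumption for illustration):

```python
from collections import Counter
from datetime import date

def timeline_histogram(result_dates):
    """Bucket result dates by year and return (year, count) pairs in
    chronological order -- effectively the bar heights of the timeline."""
    counts = Counter(d.year for d in result_dates)
    return sorted(counts.items())

# Two results dated in 2019, one in 2017.
bars = timeline_histogram([date(2019, 1, 1), date(2019, 6, 1), date(2017, 3, 3)])
```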
-
Patent number: 10504154
Abstract: Systems and methods are shown for providing private local sponsored content selection and improving intelligence models through distribution among mobile devices. This allows greater data gathering capabilities through the use of the sensors of the mobile devices as well as data stored on data storage components of the mobile devices to create predicted models while offering better opportunities to preserve privacy. Locally stored profiles comprising machine intelligence models may also be used to determine the relevance of the data gathered and in improving an aggregated model for identifying the relevance of data and the selection of sponsored content items. Distributed optimization is used in conjunction with privacy techniques to create the improved machine intelligence models. Publishers may also benefit from the improved privacy by protecting the statistics of type or volume of sponsored content items shown with publisher content.
Type: Grant
Filed: September 20, 2016
Date of Patent: December 10, 2019
Assignee: Google LLC
Inventors: Keith Bonawitz, Daniel Ramage, David Petrou
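The abstract describes distributed optimization across devices so that raw user data stays local and only model updates are aggregated. One common form of such aggregation is federated averaging; the sketch below shows that aggregation step under the assumption that each device contributes a parameter vector (the abstract does not name this exact algorithm):

```python
def federated_average(local_models, weights=None):
    """Combine per-device model parameter vectors into one global model
    by (optionally weighted) averaging. Raw device data never leaves the
    device -- only these parameter vectors are shared."""
    n = len(local_models)
    if weights is None:
        weights = [1.0 / n] * n
    dim = len(local_models[0])
    return [sum(w * m[i] for w, m in zip(weights, local_models))
            for i in range(dim)]

# Two devices, each holding a 2-parameter local model.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]])
```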
-
Publication number: 20190370301
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, relating to on-device query annotating. In some implementations, a search query is received, and a mobile device identifies a reference to a particular entity and a reference to a category based on the query. A model that is stored on the mobile device and stores one or more facts that are associated with one or more entities is accessed. A subset of facts from among the facts that are stored in the model for the particular entity is selected. The search query is annotated based at least on one or more facts of the subset of facts that are stored in the model for the particular entity. The annotated search query is transmitted, from the mobile device to a search engine, for processing. A result of processing the annotated search query is received by the mobile device.
Type: Application
Filed: July 8, 2019
Publication date: December 5, 2019
Inventors: David Petrou, Matthew Sharifi
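The on-device annotation flow can be sketched as a lookup against a local fact store followed by attaching the matched facts to the outgoing query. The fact model, entity names, and dictionary shape below are hypothetical stand-ins for whatever representation an implementation would use:

```python
# Hypothetical on-device fact model: entity -> facts.
LOCAL_FACTS = {
    "eiffel tower": {"city": "Paris", "height_m": 330},
}

def annotate_query(query):
    """Find entities from the local model mentioned in the query and
    attach their stored facts, so the server-side search engine
    receives extra context alongside the raw query text."""
    annotations = {}
    for entity, facts in LOCAL_FACTS.items():
        if entity in query.lower():
            annotations.update(facts)
    return {"q": query, "annotations": annotations}

annotated = annotate_query("how tall is the Eiffel Tower")
```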
-
Patent number: 10489410
Abstract: Methods, systems, and apparatus for receiving a query image, receiving one or more entities that are associated with the query image, identifying, for one or more of the entities, one or more candidate search queries that are pre-associated with the one or more entities, generating a respective relevance score for each of the candidate search queries, selecting, as a representative search query for the query image, a particular candidate search query based at least on the generated respective relevance scores and providing the representative search query for output in response to receiving the query image.
Type: Grant
Filed: April 18, 2016
Date of Patent: November 26, 2019
Assignee: Google LLC
Inventors: Matthew Sharifi, David Petrou, Abhanshu Sharma
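The selection step reduces to scoring every candidate query pre-associated with the image's entities and keeping the top scorer. A minimal sketch, assuming relevance scores are supplied as a plain mapping (how they are generated is outside this sketch):

```python
def representative_query(entity_queries, relevance):
    """entity_queries: entity -> list of candidate queries pre-associated
    with that entity. relevance: candidate query -> relevance score.
    Returns the highest-scoring candidate across all entities."""
    candidates = {q for qs in entity_queries.values() for q in qs}
    return max(candidates, key=lambda q: relevance.get(q, 0.0))

best = representative_query(
    {"dog": ["dog breeds", "dog photos"], "park": ["parks near me"]},
    {"dog breeds": 0.9, "parks near me": 0.4},
)
```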
-
Patent number: 10491660
Abstract: Systems and methods are provided for sharing a screen from a mobile device. For example, a method includes receiving, at a second mobile device, an image of a screen captured from a first mobile device and determining whether to trigger an automated action. The method may also include displaying, responsive to not triggering the automated action, annotation data generated for the image with the image on a display of the second mobile device, the annotation data including at least one visual cue corresponding to content in the image relevant to a user of the second mobile device. The method may further include, responsive to triggering the automated action, determining that a mobile application associated with the image is installed on the second mobile device and replaying user input actions received with the image on the second mobile device starting from a reference screen associated with the mobile application.
Type: Grant
Filed: August 18, 2017
Date of Patent: November 26, 2019
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
-
Patent number: 10440279
Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being selectable to induce execution of a respective action, the action interface being displayed in a viewfinder.
Type: Grant
Filed: April 5, 2018
Date of Patent: October 8, 2019
Assignee: Google LLC
Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
-
Publication number: 20190286649
Abstract: Methods, systems, and apparatus for receiving a query image and a user tap location, processing the received query image based on the user tap location, identifying one or more entities associated with the processed query image and in response to receiving (i) the query image, and (ii) the user tap location, providing information about the identified one or more of the entities.
Type: Application
Filed: June 7, 2019
Publication date: September 19, 2019
Inventors: Abhanshu Sharma, David Petrou, Matthew Sharifi
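One plausible form of "processing the received query image based on the user tap location" is cropping a region around the tap before running recognition, so results focus on what the user pointed at. The fixed box size and clamping behavior below are illustrative assumptions:

```python
def crop_around_tap(width, height, tap_x, tap_y, box=200):
    """Return a (left, top, right, bottom) crop rectangle of size
    box x box centered on the tap and clamped inside the image.
    Assumes box <= width and box <= height."""
    left = max(0, min(tap_x - box // 2, width - box))
    top = max(0, min(tap_y - box // 2, height - box))
    return (left, top, left + box, top + box)

# Tap in the middle of a 1000x800 image.
rect = crop_around_tap(1000, 800, 500, 400)
```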
-
Patent number: 10409855
Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
Type: Grant
Filed: January 9, 2019
Date of Patent: September 10, 2019
Assignee: Google LLC
Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
-
Patent number: 10353950
Abstract: Methods, systems, and apparatus for receiving a query image and a user tap location, processing the received query image based on the user tap location, identifying one or more entities associated with the processed query image and in response to receiving (i) the query image, and (ii) the user tap location, providing information about the identified one or more of the entities.
Type: Grant
Filed: June 28, 2016
Date of Patent: July 16, 2019
Assignee: Google LLC
Inventors: Abhanshu Sharma, David Petrou, Matthew Sharifi
-
Patent number: 10346493
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, relating to on-device query annotating. In some implementations, a search query is received, and a mobile device identifies a reference to a particular entity and a reference to a category based on the query. A model that is stored on the mobile device and stores one or more facts that are associated with one or more entities is accessed. A subset of facts from among the facts that are stored in the model for the particular entity is selected. The search query is annotated based at least on one or more facts of the subset of facts that are stored in the model for the particular entity. The annotated search query is transmitted, from the mobile device to a search engine, for processing. A result of processing the annotated search query is received by the mobile device.
Type: Grant
Filed: May 15, 2017
Date of Patent: July 9, 2019
Assignee: Google LLC
Inventors: David Petrou, Matthew Sharifi
-
Patent number: 10346463
Abstract: A visual query is received from a client system, along with location information for the client system, and processed by a server system. The server system sends the visual query and the location information to a visual query search system, and receives from the visual query search system enhanced location information based on the visual query and the location information. The server system then sends a search query, including the enhanced location information, to a location-based search system. The server system receives one or more search results from the location-based search system and provides them to the client system.
Type: Grant
Filed: June 15, 2017
Date of Patent: July 9, 2019
Assignee: Google LLC
Inventors: David Petrou, John Flynn, Hartwig Adam, Hartmut Neven
-
Publication number: 20190205005
Abstract: Implementations provide an improved drag-and-drop operation on a mobile device. For example, a method includes identifying a drag area in a user interface of a first mobile application in response to a drag command and receiving a drop location in a second mobile application that differs from the first mobile application. The method may also include determining that a drop location is a text input control and the drag area is not text-based, performing a search for a text description of the drag area, and pasting the text description into the text input control. The method may also include determining that a drop location is an image input control and that the drag area is text based, performing a search using the drag area for a responsive image, and pasting the responsive image into the image input control.
Type: Application
Filed: March 7, 2019
Publication date: July 4, 2019
Inventors: Matthew SHARIFI, David PETROU
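The core of this drag-and-drop scheme is type mediation: when the dragged payload's type matches the drop target, paste directly; when it does not, run a search to convert (image to text description, or text to a responsive image). The sketch below models the two search backends as injected callables; all names are illustrative:

```python
def mediate_drop(drag_is_text, drop_accepts_text,
                 text_to_image_search, image_to_text_search, payload):
    """Paste the payload directly when types match; otherwise convert it
    via the appropriate search, as the abstract outlines."""
    if drag_is_text == drop_accepts_text:
        return payload                           # types match: paste as-is
    if drop_accepts_text:
        return image_to_text_search(payload)     # image dropped on text control
    return text_to_image_search(payload)         # text dropped on image control

# Stand-in search backends for demonstration.
to_image = lambda text: f"<image for '{text}'>"
to_text = lambda img: f"description of {img}"
pasted = mediate_drop(True, False, to_image, to_text, "golden gate bridge")
```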
-
Publication number: 20190146993
Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
Type: Application
Filed: January 9, 2019
Publication date: May 16, 2019
Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
-
Patent number: 10248440
Abstract: Systems and methods are provided for automating user input using onscreen content. For example, a method includes receiving a selection of a first screen capture image representing a screen captured on a mobile device associated with a user, the first image having a first timestamp. The method also includes determining, using a data store of images of previously captured screens of the mobile device, a reference image from the data store that has a timestamp prior to the first timestamp, identifying a plurality of images in the data store that have respective timestamps between the timestamp for the reference image and the first timestamp, and providing the reference image, the plurality of images, and the first image to the mobile device.
Type: Grant
Filed: October 13, 2017
Date of Patent: April 2, 2019
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
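Selecting the reference image and the intervening captures is essentially a window query over timestamped screen captures. The lookback-window heuristic below is one possible way to pick a reference image "prior to the first timestamp"; the patent does not specify how far back the reference lies, so treat the parameter as an assumption:

```python
def select_window(captures, first_ts, lookback=60):
    """captures: (timestamp, image_id) pairs sorted ascending. Pick a
    reference capture from up to `lookback` seconds before first_ts
    (falling back to the oldest available), then collect the captures
    strictly between the reference and first_ts."""
    earlier = [c for c in captures if c[0] < first_ts]
    if not earlier:
        return None, []
    reference = next((c for c in earlier if c[0] >= first_ts - lookback),
                     earlier[0])
    between = [c for c in captures if reference[0] < c[0] < first_ts]
    return reference, between

captures = [(10, "a"), (50, "b"), (90, "c"), (120, "d")]
reference, between = select_window(captures, 120, lookback=100)
```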
-
Patent number: 10241668
Abstract: Implementations provide an improved drag-and-drop operation on a mobile device. For example, a method includes identifying a drag area in a user interface of a first mobile application in response to a drag command and receiving a drop location in a second mobile application that differs from the first mobile application. The method may also include determining that a drop location is a text input control and the drag area is not text-based, performing a search for a text description of the drag area, and pasting the text description into the text input control. The method may also include determining that a drop location is an image input control and that the drag area is text based, performing a search using the drag area for a responsive image, and pasting the responsive image into the image input control.
Type: Grant
Filed: February 14, 2017
Date of Patent: March 26, 2019
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
-
Patent number: 10244369
Abstract: Systems and methods are provided for receiving a screen capture image from a client device, the screen capture image being generated responsive to a command from a user, and the screen capture image including content generated by an application. The method also includes saving the screen capture image in a data store of images. The data store of images is associated with a user profile of the user using the client device. The method also includes associating a timestamp with the screen capture image and receiving an expiration time for the screen capture image from the user. The screen capture image is available for display until the expiration time.
Type: Grant
Filed: September 14, 2018
Date of Patent: March 26, 2019
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, David Petrou
-
Patent number: 10198457
Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
Type: Grant
Filed: August 25, 2016
Date of Patent: February 5, 2019
Assignee: Google LLC
Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
-
Publication number: 20190019110
Abstract: Systems and methods are provided for content-based security for computing devices. An example method includes identifying content rendered by a mobile application, the content being rendered during a session, generating feature vectors from the content and determining that the feature vectors do not match a classification model. The method also includes providing, in response to the determination that the feature vectors do not match the classification model, a challenge configured to authenticate a user of the mobile device. Another example method includes determining a computing device is located at a trusted location, capturing information from a session, the information coming from content rendered by a mobile application during the session, generating feature vectors for the session, and repeating this until a training criterion is met. The method also includes training a classification model using the feature vectors and authenticating a user of the device using the trained classification model.
Type: Application
Filed: September 17, 2018
Publication date: January 17, 2019
Inventors: Matthew SHARIFI, Kai WANG, David PETROU
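The match test between session feature vectors and the trained classification model can take many forms; one of the simplest stand-ins is a nearest-centroid check, sketched below. The centroid representation and distance threshold are illustrative assumptions, not the classifier the patent describes, but they show the shape of the decision: sessions too far from the user's learned profile trigger an authentication challenge.

```python
import math

def centroid(vectors):
    """Component-wise mean of equal-length feature vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def session_matches_model(session_vectors, user_centroid, max_distance=1.0):
    """Compare the mean of a session's content feature vectors against a
    centroid learned from the user's past (trusted) sessions. A session
    that falls outside max_distance would trigger a challenge."""
    return math.dist(centroid(session_vectors), user_centroid) <= max_distance

# Profile learned from two trusted sessions, then a new session is checked.
profile = centroid([[0.0, 0.0], [2.0, 2.0]])
trusted = session_matches_model([[1.0, 1.0], [1.0, 1.0]], profile)
```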