Patents by Inventor David Petrou

David Petrou has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140334746
    Abstract: A server system receives a visual query from a client system and performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system also produces structural information associated with the textual characters in the visual query. The server system scores the textual characters and identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. A canonical document that includes the one or more high quality textual strings and that is consistent with the structural information is retrieved. At least a portion of the canonical document is sent to the client system.
    Type: Application
    Filed: July 29, 2014
    Publication date: November 13, 2014
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
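
    This entry, and the related entries for patents 8811742 and 8805079 and publications 20120134590, 20120128250, and 20120128251 below, describe the same basic pipeline: OCR the visual query, score the recognized characters, keep only high quality strings, and use those strings to look up a canonical document. The snippet below is a minimal, hypothetical sketch of that flow, not the patented method: the scoring rule (mean per-character confidence against a fixed threshold) and the in-memory document index are assumptions, and the OCR step is stubbed with precomputed character confidences.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class OcrChar:
            text: str
            confidence: float  # 0.0-1.0, as emitted by an OCR engine (stubbed here)

        # Hypothetical canonical-document index: document id -> full text.
        CANONICAL_DOCS = {
            "doc-1": "Call me Ishmael. Some years ago, never mind how long precisely...",
            "doc-2": "It was the best of times, it was the worst of times...",
        }

        def high_quality_strings(chars: List[OcrChar], threshold: float = 0.8) -> List[str]:
            """Group contiguous characters into words and keep the words whose mean
            confidence clears the threshold (a stand-in for per-character scoring)."""
            words, current = [], []
            for ch in chars:
                if ch.text.isspace():
                    if current:
                        words.append(current)
                        current = []
                else:
                    current.append(ch)
            if current:
                words.append(current)
            return [
                "".join(c.text for c in w)
                for w in words
                if sum(c.confidence for c in w) / len(w) >= threshold
            ]

        def retrieve_canonical_document(strings: List[str]) -> Optional[str]:
            """Return the text of the document containing the most high quality strings."""
            def hits(doc_text: str) -> int:
                return sum(1 for s in strings if s.lower() in doc_text.lower())
            best_id = max(CANONICAL_DOCS, key=lambda d: hits(CANONICAL_DOCS[d]), default=None)
            if best_id is None or hits(CANONICAL_DOCS[best_id]) == 0:
                return None
            return CANONICAL_DOCS[best_id]

        # Example: characters an OCR engine might have produced for "best of tames".
        ocr_output = [OcrChar(c, 0.95) for c in "best of "] + [
            OcrChar("t", 0.9), OcrChar("a", 0.3), OcrChar("m", 0.9),
            OcrChar("e", 0.9), OcrChar("s", 0.9),
        ]
        print(high_quality_strings(ocr_output))           # ['best', 'of'] ('tames' is low quality)
        print(retrieve_canonical_document(high_quality_strings(ocr_output)))

    In this toy version, the low-confidence word is dropped before document lookup, so the mis-read "tames" never pollutes the retrieval step; the real system additionally uses structural information (and, in patent 8805079, the client's geographic location) when scoring.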
  • Patent number: 8811742
    Abstract: A server system receives a visual query from a client system and performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system also produces structural information associated with the textual characters in the visual query. The server system scores the textual characters and identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. A canonical document that includes the one or more high quality textual strings and that is consistent with the structural information is retrieved. At least a portion of the canonical document is sent to the client system.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: August 19, 2014
    Assignee: Google Inc.
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Patent number: 8805079
    Abstract: A server system receives a visual query from a client system distinct from the server system. The server system performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system scores each textual character in the plurality of textual characters in accordance with the geographic location of the client system. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. Then the server system retrieves a canonical document having the one or more high quality textual strings and sends at least a portion of the canonical document to the client system.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: August 12, 2014
    Assignee: Google Inc.
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Patent number: 8761512
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing queries made up of images. In one aspect, a method includes indexing images by image descriptors. The method further includes associating descriptive n-grams with the images. In another aspect, a method includes receiving a query, identifying text describing the query, and performing a search according to the text identified for the query.
    Type: Grant
    Filed: December 3, 2010
    Date of Patent: June 24, 2014
    Assignee: Google Inc.
    Inventors: Ulrich Buddemeier, Gabriel Taubman, Hartwig Adam, Charles Rosenberg, Hartmut Neven, David Petrou, Fernando Brucher
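
    Patent 8761512 above covers indexing images by image descriptors and associating descriptive n-grams with them, so that an image query can be answered with text. Below is a minimal, hypothetical sketch of that idea: the "descriptor" is a tiny toy feature vector, nearest-neighbour cosine matching stands in for real descriptor matching, and the n-gram vocabulary is invented for illustration.

        import math
        from collections import Counter
        from typing import Dict, List, Tuple

        # Toy index: image id -> (descriptor vector, descriptive n-grams).
        # Real descriptors would come from a feature extractor; these are made up.
        IMAGE_INDEX: Dict[str, Tuple[List[float], List[str]]] = {
            "img-eiffel": ([0.9, 0.1, 0.2], ["eiffel tower", "paris", "landmark"]),
            "img-bridge": ([0.2, 0.8, 0.1], ["golden gate", "bridge", "san francisco"]),
        }

        def cosine(a: List[float], b: List[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

        def ngrams_for_query(query_descriptor: List[float], k: int = 1) -> List[str]:
            """Find the k most similar indexed images and pool their n-grams,
            most frequent first; the pooled text can then drive an ordinary text search."""
            ranked = sorted(IMAGE_INDEX.items(),
                            key=lambda item: cosine(query_descriptor, item[1][0]),
                            reverse=True)
            pooled = Counter()
            for _, (_, grams) in ranked[:k]:
                pooled.update(grams)
            return [gram for gram, _ in pooled.most_common()]

        print(ngrams_for_query([0.85, 0.15, 0.25]))  # n-grams from the closest indexed image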
  • Publication number: 20140172881
    Abstract: A facial recognition search system identifies one or more likely names (or other personal identifiers) corresponding to the facial image(s) in a query as follows. After receiving the visual query with one or more facial images, the system identifies images that potentially match the respective facial image in accordance with visual similarity criteria. Then one or more persons associated with the potential images are identified. For each identified person, person-specific data comprising metrics of social connectivity to the requester are retrieved from a plurality of applications such as communications applications, social networking applications, calendar applications, and collaborative applications. An ordered list of persons is then generated by ranking the identified persons in accordance with at least metrics of visual similarity between the respective facial image and the potential image matches and with the social connection metrics.
    Type: Application
    Filed: February 20, 2014
    Publication date: June 19, 2014
    Applicant: Google Inc.
    Inventors: David Petrou, Andrew Rabinovich, Hartwig Adam
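
    The facial recognition entries (this publication and patent 8670597 below) rank candidate persons by combining visual similarity with social-connection metrics drawn from communications, calendar, and social applications. The sketch below shows one hypothetical way such a combined ranking could look; the linear weighting and the example scores are assumptions made purely for illustration.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Candidate:
            name: str
            visual_similarity: float   # similarity between query face and matched image, 0..1
            social_connection: float   # e.g. derived from email, calendar, social graph, 0..1

        def rank_candidates(candidates: List[Candidate], alpha: float = 0.7) -> List[Candidate]:
            """Order candidates by a weighted blend of visual similarity and
            social connectivity to the requester (alpha is an assumed weight)."""
            return sorted(
                candidates,
                key=lambda c: alpha * c.visual_similarity + (1 - alpha) * c.social_connection,
                reverse=True,
            )

        people = [
            Candidate("Alice", visual_similarity=0.72, social_connection=0.90),
            Candidate("Bob", visual_similarity=0.80, social_connection=0.10),
        ]
        for c in rank_candidates(people):
            print(c.name)   # Alice first: her strong social signal outweighs Bob's small visual edge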
  • Publication number: 20140164406
    Abstract: A visual query such as a photograph, a screen shot, a scanned image, a video frame, or an image created by a content authoring application is submitted to a visual query search system. The search system processes the visual query by sending it to a plurality of parallel search systems, each implementing a distinct visual query search process. These parallel search systems may include but are not limited to optical character recognition (OCR), facial recognition, product recognition, bar code recognition, object-or-object-category recognition, named entity recognition, and color recognition. Then at least one search result is sent to the client system. In some embodiments, when the visual query is an image containing a text element and a non-text element, at least one search result includes an optical character recognition result for the text element and at least one image-match result for the non-text element.
    Type: Application
    Filed: February 18, 2014
    Publication date: June 12, 2014
    Applicant: Google Inc.
    Inventor: David Petrou
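
    Publication 20140164406 above describes fanning a visual query out to several parallel search back ends (OCR, facial recognition, product recognition, and so on) and returning the combined results. A minimal, hypothetical dispatcher is sketched below using a thread pool; the individual recognizers are stubs standing in for real back-end services.

        from concurrent.futures import ThreadPoolExecutor
        from typing import Callable, Dict, List

        # Stub recognizers; each would call a distinct back-end search system.
        def ocr_search(image: bytes) -> List[str]:
            return ["detected text 'MENU'"]

        def face_search(image: bytes) -> List[str]:
            return []  # no faces found in this example

        def product_search(image: bytes) -> List[str]:
            return ["coffee mug"]

        PARALLEL_SYSTEMS: Dict[str, Callable[[bytes], List[str]]] = {
            "ocr": ocr_search,
            "facial": face_search,
            "product": product_search,
        }

        def handle_visual_query(image: bytes) -> List[str]:
            """Send the visual query to every search system in parallel and
            return the tagged, non-empty results, ready to be sent to the client."""
            with ThreadPoolExecutor(max_workers=len(PARALLEL_SYSTEMS)) as pool:
                futures = {name: pool.submit(fn, image) for name, fn in PARALLEL_SYSTEMS.items()}
                results = []
                for name, fut in futures.items():
                    for hit in fut.result():
                        results.append(f"{name}: {hit}")
            return results

        print(handle_visual_query(b"...image bytes..."))  # e.g. OCR text plus an image-match result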
  • Patent number: 8670597
    Abstract: A facial recognition search system identifies one or more likely names (or other personal identifiers) corresponding to the facial image(s) in a query as follows. After receiving the visual query with one or more facial images, the system identifies images that potentially match the respective facial image in accordance with visual similarity criteria. Then one or more persons associated with the potential images are identified. For each identified person, person-specific data comprising metrics of social connectivity to the requester are retrieved from a plurality of applications such as communications applications, social networking applications, calendar applications, and collaborative applications. An ordered list of persons is then generated by ranking the identified persons in accordance with at least metrics of visual similarity between the respective facial image and the potential image matches and with the social connection metrics.
    Type: Grant
    Filed: August 5, 2010
    Date of Patent: March 11, 2014
    Assignee: Google Inc.
    Inventors: David Petrou, Andrew Rabinovich, Hartwig Adam
  • Patent number: 8659433
    Abstract: A wearable computer determines unnatural movements of a head-mounted display (HMD) and triggers a locking mechanism. In one embodiment, the wearable computer receives movement data from one or more sensors and determines that the movement of the HMD is unnatural. In one embodiment, the wearable computer receives movement data from one or more sensors and determines that the HMD is being worn by an unauthorized user. In response to determining an unnatural movement and/or an unauthorized user wearing the HMD, the wearable computer triggers a locking mechanism, which can beneficially provide security measures for the wearable computer.
    Type: Grant
    Filed: June 21, 2012
    Date of Patent: February 25, 2014
    Assignee: Google Inc.
    Inventor: David Petrou
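
    Patent 8659433 above (and the related publication 20130069787 and patent 8223024 below) locks a head-mounted display when sensor data indicates unnatural movement or an unauthorized wearer. The snippet below is only a hypothetical illustration of that trigger logic: the "unnatural movement" test (a fixed acceleration-magnitude threshold) and the sensor readings are invented for the example; a real system could use far richer models.

        import math
        from typing import Iterable, Tuple

        # Assumed threshold, in m/s^2, above which head movement is treated as
        # unnatural (e.g. the HMD being yanked off the wearer's head).
        UNNATURAL_ACCEL_THRESHOLD = 30.0

        def is_unnatural(accel_samples: Iterable[Tuple[float, float, float]]) -> bool:
            """Return True if any accelerometer sample exceeds the threshold magnitude."""
            return any(
                math.sqrt(x * x + y * y + z * z) > UNNATURAL_ACCEL_THRESHOLD
                for x, y, z in accel_samples
            )

        def lock_hmd() -> None:
            print("HMD locked: re-authentication required")

        samples = [(0.1, 9.8, 0.2), (45.0, 12.0, 3.0)]  # second sample: a sudden jerk
        if is_unnatural(samples):
            lock_hmd()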
  • Publication number: 20140046935
    Abstract: A method, system, and computer readable storage medium are provided for identifying textual terms in response to a visual query. A server system receives a visual query from a client system. The visual query is responded to as follows. A set of image feature values for the visual query is generated. The set of image feature values is mapped to a plurality of textual terms, including a weight for each of the textual terms in the plurality of textual terms. The textual terms are ranked in accordance with the weights of the textual terms. Then, in accordance with the ranking of the textual terms, one or more of the ranked textual terms are sent to the client system.
    Type: Application
    Filed: August 8, 2012
    Publication date: February 13, 2014
    Inventors: Samy Bengio, David Petrou
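
    Publication 20140046935 above maps a set of image feature values to weighted textual terms and returns the top-ranked terms to the client. Below is a minimal, hypothetical sketch of that mapping: the feature-to-term weight matrix is a toy linear model invented for illustration, standing in for whatever learned mapping the actual system uses.

        from typing import Dict, List, Tuple

        # Assumed learned mapping: for each textual term, one weight per image feature.
        TERM_WEIGHTS: Dict[str, List[float]] = {
            "beach":    [0.9, 0.1, 0.0],
            "sunset":   [0.4, 0.8, 0.1],
            "mountain": [0.0, 0.2, 0.9],
        }

        def rank_terms(features: List[float], top_n: int = 2) -> List[Tuple[str, float]]:
            """Score each term as the dot product of the image features with the
            term's weights, then return the top_n terms by descending score."""
            scored = {
                term: sum(f * w for f, w in zip(features, weights))
                for term, weights in TERM_WEIGHTS.items()
            }
            return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

        # A query image whose (toy) features emphasise the first two dimensions.
        print(rank_terms([0.8, 0.6, 0.1]))  # the two highest-scoring terms: 'sunset' and 'beach'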
  • Publication number: 20130311506
    Abstract: A method and apparatus for enabling user query disambiguation based on a user context of a mobile computing device. According to embodiments of the invention, a first user search query, along with sensor data, is received from a mobile computing device. A recognition process is performed on the sensor data to identify at least one item. In response to determining that the at least one item is a result for the first search query, data identifying the at least one item is transmitted to the mobile computing device as a response to the first search query. In response to determining that the at least one item is not the result for the first search query, search results of a second search query are transmitted to the mobile computing device as the response to the first search query, the second search query comprising a query of the at least one item.
    Type: Application
    Filed: January 9, 2012
    Publication date: November 21, 2013
    Applicant: Google Inc.
    Inventors: Gabriel Taubman, David Petrou, Hartwig Adam, Hartmut Neven
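
    Publication 20130311506 above disambiguates a user query by running recognition on sensor data (for example a camera frame) sent alongside the query: if the recognized item itself answers the query, it is returned directly; otherwise it is folded into a second search. A hypothetical sketch of that branching logic follows; the recognition and search functions are stubs, and the "does the item answer the query" test is an assumption for the example.

        from typing import List, Optional

        def recognize(sensor_data: bytes) -> Optional[str]:
            """Stub recognition process; a real system would return an identified item."""
            return "Eiffel Tower"

        def text_search(query: str) -> List[str]:
            """Stub text search back end."""
            return [f"result for: {query}"]

        def answer_query(user_query: str, sensor_data: bytes) -> List[str]:
            item = recognize(sensor_data)
            if item is None:
                return text_search(user_query)
            # If the recognized item itself answers the query (assumed test: a
            # "what is this"-style question), return it directly...
            if user_query.strip().lower() in ("what is this?", "what am i looking at?"):
                return [item]
            # ...otherwise run a second query combining the original query with the item.
            return text_search(f"{user_query} {item}")

        print(answer_query("what is this?", b"...camera frame..."))   # ['Eiffel Tower']
        print(answer_query("opening hours", b"...camera frame..."))   # search for "opening hours Eiffel Tower"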
  • Publication number: 20130188886
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Application
    Filed: December 4, 2012
    Publication date: July 25, 2013
    Inventors: David Petrou, Matthew Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
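
    Publication 20130188886 above describes a hand-held device that, as the camera streams frames, compares what is recognized in each frame with what was recognized in earlier frames to decide which object the user most likely cares about. The sketch below is one hypothetical way to express that comparison: an object that keeps reappearing across recent frames is treated as the object of greatest interest.

        from collections import Counter
        from typing import List, Optional

        def object_of_interest(per_frame_objects: List[List[str]], window: int = 5) -> Optional[str]:
            """Given the objects recognized in each recent frame (newest last),
            return the object seen most often in the last `window` frames."""
            counts = Counter()
            for frame in per_frame_objects[-window:]:
                counts.update(set(frame))        # count each object at most once per frame
            if not counts:
                return None
            return counts.most_common(1)[0][0]

        frames = [
            ["mug", "laptop"],
            ["mug"],
            ["mug", "phone"],
            ["phone"],
        ]
        print(object_of_interest(frames))  # "mug": present in 3 of the last 4 frames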
  • Publication number: 20130179303
    Abstract: A method and apparatus for enabling dynamic product and vendor identification and the display of relevant purchase information are described herein. According to embodiments of the invention, a recognition process is executed on sensor data captured via a mobile computing device to identify one or more items, and to identify at least one product associated with the one or more items. Product and vendor information for the at least one product is retrieved and displayed via the mobile computing device. In the event a user gesture is detected in response to displaying the product and vendor information data, processing logic may submit a purchase order for the product (e.g., for an online vendor) or contact the vendor (e.g., for an in-store vendor).
    Type: Application
    Filed: January 9, 2012
    Publication date: July 11, 2013
    Applicant: Google Inc.
    Inventors: David Petrou, Hartwig Adam, Laura Garcia-Barrio, Hartmut Neven
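
    Publication 20130179303 above identifies a product from sensor data, displays product and vendor information, and reacts to a user gesture by either submitting an order to an online vendor or contacting an in-store vendor. The flow below is a hypothetical sketch of that branching; the catalogue, vendors, and gesture handling are invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Vendor:
            name: str
            online: bool   # True: online vendor (order can be submitted); False: in-store vendor

        @dataclass
        class Product:
            name: str
            price: float
            vendor: Vendor

        # Toy catalogue keyed by a recognized item label.
        CATALOGUE = {
            "coffee mug": Product("Ceramic Mug 12oz", 9.99, Vendor("MugsOnline", online=True)),
            "sneaker":    Product("Runner X", 79.00, Vendor("Corner Shoe Store", online=False)),
        }

        def on_gesture(recognized_item: str) -> str:
            """Handle a confirmation gesture after product info was displayed:
            submit an order for an online vendor, otherwise contact the vendor."""
            product = CATALOGUE.get(recognized_item)
            if product is None:
                return "no product associated with the recognized item"
            if product.vendor.online:
                return f"order submitted to {product.vendor.name} for {product.name}"
            return f"contacting {product.vendor.name} about {product.name}"

        print(on_gesture("coffee mug"))
        print(on_gesture("sneaker"))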
  • Publication number: 20130069787
    Abstract: A wearable computer determines unnatural movements of a head-mounted display (HMD) and triggers a locking mechanism. In one embodiment, the wearable computer receives movement data from one or more sensors and determines that the movement of the HMD is unnatural. In one embodiment, the wearable computer receives movement data from one or more sensors and determines that the HMD is being worn by an unauthorized user. In response to determining an unnatural movement and/or an unauthorized user wearing the HMD, the wearable computer triggers a locking mechanism, which can beneficially provide security measures for the wearable computer.
    Type: Application
    Filed: June 21, 2012
    Publication date: March 21, 2013
    Applicant: Google Inc.
    Inventor: David Petrou
  • Publication number: 20130036134
    Abstract: A method and apparatus for enabling a searchable history of real-world user experiences is described. The method may include capturing media data by a mobile computing device. The method may also include transmitting the captured media data to a server computer system, the server computer system to perform one or more recognition processes on the captured media data and add the captured media data to a history of real-world experiences of a user of the mobile computing device when the one or more recognition processes find a match. The method may also include transmitting a query of the user to the server computer system to initiate a search of the history of real-world experiences, and receiving results relevant to the query that include data indicative of the media data in the history of real-world experiences.
    Type: Application
    Filed: June 11, 2012
    Publication date: February 7, 2013
    Inventors: Hartmut Neven, David Petrou, Jacob Smullyan, Hartwig Adam
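
    Publication 20130036134 above describes capturing media on a mobile device, running recognition on it server side, adding recognized captures to a per-user history, and later answering text queries against that history. A minimal, hypothetical in-memory version of such a history is sketched below; the recognition step is a stub returning fixed labels.

        from dataclasses import dataclass, field
        from typing import List

        def recognize(media: bytes) -> List[str]:
            """Stub for the server-side recognition processes (OCR, landmarks, faces, ...)."""
            return ["golden gate bridge", "bicycle"]

        @dataclass
        class Experience:
            media: bytes
            labels: List[str]

        @dataclass
        class ExperienceHistory:
            entries: List[Experience] = field(default_factory=list)

            def add_capture(self, media: bytes) -> None:
                labels = recognize(media)
                if labels:                       # only added when recognition finds a match
                    self.entries.append(Experience(media, labels))

            def search(self, query: str) -> List[Experience]:
                q = query.lower()
                return [e for e in self.entries if any(q in label for label in e.labels)]

        history = ExperienceHistory()
        history.add_capture(b"...photo taken on a ride...")
        print([e.labels for e in history.search("bridge")])  # matching entries from the history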
  • Publication number: 20130027572
    Abstract: A head-mounted display (HMD) displays a visual representation of a physical interaction with an input interface that is located outside of the field of view. In one embodiment, the visual representation includes symbols that indicate when close proximity or physical contact is made with the input interface. In another embodiment, the visual representation is a simulation of the physical interaction with the input interface. The visual representation displayed by the HMD can beneficially enable the wearer to interact with the input interface more efficiently.
    Type: Application
    Filed: June 14, 2012
    Publication date: January 31, 2013
    Applicant: Google Inc.
    Inventor: David Petrou
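
    Publication 20130027572 above (and patent 8217856 below) describes rendering, on the HMD, a visual representation of the wearer's interaction with an input interface (such as a touchpad) that sits outside the field of view, including symbols for close proximity and physical contact. The sketch below is a hypothetical rendering of that state as text; the proximity thresholds and the symbols are invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class TouchSample:
            x: float            # finger position on the touchpad, 0..1
            y: float
            distance_mm: float  # distance of the finger above the pad surface

        def interaction_symbol(sample: TouchSample) -> str:
            """Map a touch sample to a display symbol: contact, near, or far
            (thresholds are assumptions for this example)."""
            if sample.distance_mm <= 0.5:
                return "●"   # physical contact
            if sample.distance_mm <= 10.0:
                return "◌"   # close proximity
            return "·"       # finger detected but far from the pad

        sample = TouchSample(x=0.25, y=0.6, distance_mm=4.0)
        print(f"show {interaction_symbol(sample)} at ({sample.x:.2f}, {sample.y:.2f}) on the HMD overlay")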
  • Patent number: 8223024
    Abstract: A wearable computer determines unnatural movements of a head-mounted display (HMD) and triggers a locking mechanism. In one embodiment, the wearable computer receives movement data from one or more sensors and determines that the movement of the HMD is unnatural. In one embodiment, the wearable computer receives movement data from one or more sensors and determines that the HMD is being worn by an unauthorized user. In response to determining an unnatural movement and/or an unauthorized user wearing the HMD, the wearable computer triggers a locking mechanism, which can beneficially provide security measures for the wearable computer.
    Type: Grant
    Filed: September 21, 2011
    Date of Patent: July 17, 2012
    Assignee: Google Inc.
    Inventor: David Petrou
  • Patent number: 8217856
    Abstract: A head-mounted display (HMD) displays a visual representation of a physical interaction with an input interface that is located outside of the field of view. In one embodiment, the visual representation includes symbols that indicate when close proximity or physical contact is made with the input interface. In another embodiment, the visual representation is a simulation of the physical interaction with the input interface. The visual representation displayed by the HMD can beneficially enable the wearer to interact with the input interface more efficiently.
    Type: Grant
    Filed: July 27, 2011
    Date of Patent: July 10, 2012
    Assignee: Google Inc.
    Inventor: David Petrou
  • Publication number: 20120134590
    Abstract: A server system receives a visual query from a client system distinct from the server system. The server system performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system scores each textual character in the plurality of textual characters in accordance with the geographic location of the client system. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. Then the server system retrieves a canonical document having the one or more high quality textual strings and sends at least a portion of the canonical document to the client system.
    Type: Application
    Filed: December 1, 2011
    Publication date: May 31, 2012
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Publication number: 20120128250
    Abstract: A server system receives a visual query from a client system distinct from the server system, performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query, and scores each textual character in the plurality of textual characters. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query; retrieves a canonical document having the one or more high quality textual strings; generates a combination of the visual query and at least a portion of the canonical document; and sends the combination to the client system.
    Type: Application
    Filed: December 1, 2011
    Publication date: May 24, 2012
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Publication number: 20120128251
    Abstract: A server system receives a visual query from a client system and performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system also produces structural information associated with the textual characters in the visual query. The server system scores the textual characters and identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. A canonical document that includes the one or more high quality textual strings and that is consistent with the structural information is retrieved. At least a portion of the canonical document is sent to the client system.
    Type: Application
    Filed: December 1, 2011
    Publication date: May 24, 2012
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey