Patents by Inventor Matthew R. Casey

Matthew R. Casey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9053350
    Abstract: OCR errors are identified and corrected through learning. An error probability estimator is trained using ground truths to learn error probability estimation. Multiple OCR engines process a text image and convert it into text. The error probability estimator compares the outcomes of the multiple OCR engines for mismatches, and determines an error probability for each of the mismatches. If the error probability of a mismatch exceeds an error probability threshold, a suspect is generated and grouped together with similar suspects in a cluster. A question for the cluster is generated and rendered to a human operator for answering. The answer from the human operator is then applied to all suspects in the cluster to correct OCR errors in the resulting text. The answer is also used to further train the error probability estimator.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: June 9, 2015
    Assignee: Google Inc.
    Inventors: Ahmad E. Abdulkader, Matthew R. Casey
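    As a rough illustration of the workflow this abstract describes, the Python sketch below mocks the multi-engine comparison, the error-probability threshold, the clustering of similar suspects, and the application of a single human answer to a whole cluster. The mock engine outputs, the disagreement-ratio heuristic standing in for the trained estimator, and the 0.3 threshold are assumptions for illustration, not the patented implementation.
    ```python
    # Hypothetical sketch of the multi-engine OCR correction loop described in
    # the abstract above. The mock engines, the disagreement-ratio heuristic,
    # and the clustering key are placeholders, not the trained estimator.
    from collections import defaultdict

    ERROR_PROBABILITY_THRESHOLD = 0.3  # illustrative value

    def estimate_error_probability(readings):
        """Toy stand-in for the trained estimator: fraction of engines that disagree."""
        most_common = max(set(readings), key=readings.count)
        return 1.0 - readings.count(most_common) / len(readings)

    def correct_with_human_answers(engine_outputs, ask_operator):
        """engine_outputs: one word list per OCR engine, all aligned to the same length."""
        result = list(engine_outputs[0])
        clusters = defaultdict(list)  # suspect cluster -> word positions
        for pos, readings in enumerate(zip(*engine_outputs)):
            if len(set(readings)) == 1:
                continue  # engines agree: no mismatch, no suspect
            if estimate_error_probability(list(readings)) > ERROR_PROBABILITY_THRESHOLD:
                clusters[frozenset(readings)].append(pos)  # group similar suspects
        for readings, positions in clusters.items():
            answer = ask_operator(sorted(readings))  # one question per cluster
            for pos in positions:
                result[pos] = answer  # the single answer corrects every suspect
        return result

    # Usage: three mock engines disagree on the same word in two places.
    outputs = [
        "the quick brown fax jumps over the lazy fax".split(),
        "the quick brown fox jumps over the lazy fox".split(),
        "the quick brown fox jumps over the lazy fax".split(),
    ]
    print(correct_with_human_answers(outputs, lambda options: options[-1]))
    ```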
  • Publication number: 20150112972
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Application
    Filed: November 14, 2014
    Publication date: April 23, 2015
    Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
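    As a minimal sketch of the frame-to-frame comparison described in this abstract, the snippet below treats an object that keeps reappearing in consecutive camera frames as the one likely to be of greatest interest. The per-frame detection labels and the persistence heuristic are assumptions for illustration, not the device's actual recognition pipeline.
    ```python
    # Hypothetical sketch: an object recognized in successive camera frames
    # accumulates "interest", and the most persistent object is treated as the
    # one the user likely cares about. Frame data is an invented placeholder.
    from collections import Counter

    def object_of_interest(frames_of_detections):
        """frames_of_detections: list of per-frame lists of recognized object labels."""
        persistence = Counter()
        previous = set()
        for detections in frames_of_detections:
            current = set(detections)
            for label in current & previous:  # seen in this frame and the last
                persistence[label] += 1
            previous = current
        if not persistence:
            return None
        return persistence.most_common(1)[0][0]

    # Usage: the storefront sign stays in view while other objects come and go.
    frames = [
        ["storefront sign", "parked car"],
        ["storefront sign", "pedestrian"],
        ["storefront sign"],
    ]
    print(object_of_interest(frames))  # -> "storefront sign"
    ```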
  • Patent number: 8977639
    Abstract: A server system receives a visual query and identifies an entity in the visual query. The server system further identifies a client-side action corresponding to the identified entity and creates an actionable search result element configured to launch the client-side action. Examples of actionable search result elements are buttons to initiate a telephone call, to initiate an email message, to map an address, to make a restaurant reservation, and to provide an option to purchase a product. The entity identified in the visual query may be indirectly associated with a client-side action whose contact address or appropriate link is found in a search result associated with the identified entity. The client system receives and displays the actionable search result element, and upon a user selection of the actionable search result element, launches the client-side action in an application distinct from the visual query client application.
    Type: Grant
    Filed: August 11, 2010
    Date of Patent: March 10, 2015
    Assignee: Google Inc.
    Inventors: David Petrou, Avi Flamholz, Matthew R. Casey, Theodore Power
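    A minimal sketch of the actionable-search-result idea in this abstract, assuming a recognized entity arrives as a small dictionary whose contact fields were found in its associated search result. The entity fields, URI schemes, and dispatch rules below are illustrative placeholders rather than the patented server logic.
    ```python
    # Hypothetical sketch of building an "actionable search result element":
    # a recognized entity is mapped to a client-side action such as dialing a
    # number or composing an email. All fields here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class ActionableResult:
        label: str       # text shown on the button
        action_uri: str  # URI the client launches in a separate application

    def build_actionable_result(entity):
        """entity: dict with a 'type' plus a contact field found in its search result."""
        if entity["type"] == "business" and "phone" in entity:
            return ActionableResult(f"Call {entity['name']}", f"tel:{entity['phone']}")
        if entity["type"] == "person" and "email" in entity:
            return ActionableResult(f"Email {entity['name']}", f"mailto:{entity['email']}")
        if "address" in entity:
            return ActionableResult(f"Map {entity['name']}", f"geo:0,0?q={entity['address']}")
        return None

    # Usage: a restaurant recognized in the visual query yields a call button.
    entity = {"type": "business", "name": "Cafe Rosa", "phone": "+15551234567"}
    print(build_actionable_result(entity))
    ```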
  • Patent number: 8891907
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Grant
    Filed: December 4, 2012
    Date of Patent: November 18, 2014
    Assignee: Google Inc.
    Inventors: David Petrou, Matthew Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
  • Publication number: 20140334746
    Abstract: A server system receives a visual query from a client system, performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system also produces structural information associated with the textual characters in the visual query. Textual characters in the plurality of textual characters are scored. The server system further identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. A canonical document that includes the one or more high quality textual strings and that is consistent with the structural information is retrieved. At least a portion of the canonical document is sent to the client system.
    Type: Application
    Filed: July 29, 2014
    Publication date: November 13, 2014
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
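    As a rough sketch of the score-then-retrieve flow in this abstract: characters carry OCR confidence scores, words whose average confidence clears a threshold are kept as high quality strings, and a canonical document containing all of them is looked up. The corpus, the confidence values, and the 0.8 threshold are invented for illustration; matching against structural information is omitted here.
    ```python
    # Hypothetical sketch of keeping high-quality OCR strings and retrieving a
    # canonical document that contains them. All data below is illustrative.

    HIGH_QUALITY_THRESHOLD = 0.8

    def high_quality_strings(ocr_words):
        """ocr_words: list of (word, [per-character confidence]) tuples."""
        keepers = []
        for word, char_scores in ocr_words:
            if sum(char_scores) / len(char_scores) >= HIGH_QUALITY_THRESHOLD:
                keepers.append(word)
        return keepers

    def retrieve_canonical_document(strings, corpus):
        """Return the first document that contains every high-quality string."""
        for doc in corpus:
            if all(s in doc for s in strings):
                return doc
        return None

    # Usage: two well-recognized words locate the matching source text.
    ocr_words = [
        ("Ishmael", [0.95, 0.9, 0.92, 0.9, 0.88, 0.91, 0.9]),
        ("w0rld",   [0.9, 0.3, 0.4, 0.5, 0.6]),   # garbled, dropped by the threshold
        ("Call",    [0.97, 0.93, 0.9, 0.95]),
    ]
    corpus = ["Call me Ishmael. Some years ago...", "It was the best of times..."]
    strings = high_quality_strings(ocr_words)
    print(strings, "->", retrieve_canonical_document(strings, corpus))
    ```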
  • Patent number: 8832804
    Abstract: A computer implemented system and method are provided for password pre-verification on the client side in client-server applications. An example system comprises a translation module configured to translate user input, in the form of a character string that can represent a password, to obtain a symbolic representation of the user input. The example system also comprises an output module configured to receive the symbolic representation from the translation module and, based on the user input, provide output to the user in the form of visual, audio or haptic cues. Such cues can alert a user as to whether or not the input character string is correctly entered. In a further example embodiment, a system can further comprise a comparison module configured to compare an existing symbolic representation with the symbolic representation generated from the user input by the translation module.
    Type: Grant
    Filed: August 5, 2011
    Date of Patent: September 9, 2014
    Assignee: Google Inc.
    Inventors: Matthew R. Casey, Girts Folkmanis, John Mishanski
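    A minimal sketch of client-side password pre-verification as described in this abstract, assuming the symbolic representation is a coarse, hash-derived cue (here, a color name) that the user compares against the one they remember. The palette and the hash-to-symbol mapping are illustrative choices; the cue only signals a probable typo, it does not authenticate.
    ```python
    # Hypothetical sketch: the typed string is translated into a short symbolic
    # representation that reveals little about the password itself, and the
    # user (or UI) compares it against the expected cue before submitting.
    import hashlib

    PALETTE = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

    def symbolic_representation(candidate: str) -> str:
        digest = hashlib.sha256(candidate.encode("utf-8")).digest()
        return PALETTE[digest[0] % len(PALETTE)]  # one coarse symbol

    def matches_expected(candidate: str, expected_symbol: str) -> bool:
        """Cue comparison: True means 'probably typed correctly', not 'authenticated'."""
        return symbolic_representation(candidate) == expected_symbol

    # Usage: the cue shown while typing lets the user spot an obvious typo early.
    expected = symbolic_representation("correct horse battery staple")
    print(matches_expected("correct horse battery staple", expected))  # True
    print(matches_expected("correct horse battery stapel", expected))  # very likely False
    ```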
  • Patent number: 8811742
    Abstract: A server system receives a visual query from a client system, performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system also produces structural information associated with the textual characters in the visual query. Textual characters in the plurality of textual characters are scored. The server system further identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. A canonical document that includes the one or more high quality textual strings and that is consistent with the structural information is retrieved. At least a portion of the canonical document is sent to the client system.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: August 19, 2014
    Assignee: Google Inc.
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Patent number: 8805079
    Abstract: A server system receives a visual query from a client system distinct from the server system. The server system performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system scores each textual character in the plurality of textual characters in accordance with the geographic location of the client system. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. Then the server system retrieves a canonical document having the one or more high quality textual strings and sends at least a portion of the canonical document to the client system.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: August 12, 2014
    Assignee: Google Inc.
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
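    A rough sketch of scoring candidates in accordance with the client system's geographic location, as this abstract describes: a candidate string that names a place near the client gets a score boost. The gazetteer, the 50 km cutoff, and the boost value are assumptions for illustration only.
    ```python
    # Hypothetical sketch of location-weighted scoring of OCR candidates.
    # The gazetteer, distance cutoff, and boost are invented values.
    import math

    GAZETTEER = {  # place name -> (latitude, longitude)
        "Montgomery St": (37.7894, -122.4013),
        "Montgomery, AL": (32.3668, -86.3000),
    }

    def distance_km(a, b):
        # Equirectangular approximation; adequate for a coarse "nearby" test.
        lat = math.radians(b[0] - a[0])
        lon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
        return 6371 * math.hypot(lat, lon)

    def geo_weighted_score(candidate, base_score, client_location,
                           nearby_km=50, boost=0.2):
        place = GAZETTEER.get(candidate)
        if place and distance_km(client_location, place) <= nearby_km:
            return min(1.0, base_score + boost)
        return base_score

    # Usage: a query sent from San Francisco favors the local street name.
    client = (37.7749, -122.4194)
    print(geo_weighted_score("Montgomery St", 0.6, client))   # boosted to 0.8
    print(geo_weighted_score("Montgomery, AL", 0.6, client))  # stays 0.6
    ```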
  • Publication number: 20130212454
    Abstract: An e-book system receives and stores different versions of an e-book supporting different consumption modes. Additionally, the e-book system stores signposts for the e-book. The signposts include corresponding locations in different versions of the e-book. When a user switches from a first version to a second version, the e-book system determines, based on the signposts, a location in the second version of the e-book that corresponds to the current location in the first version. The e-book system then presents the content in the second version from the determined location.
    Type: Application
    Filed: February 13, 2012
    Publication date: August 15, 2013
    Applicant: GOOGLE INC.
    Inventor: Matthew R. Casey
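    A minimal sketch of the signpost lookup this abstract describes, assuming each signpost pairs a character offset in a text edition with a timestamp in an audio edition of the same e-book. Switching versions resumes from the counterpart of the last signpost at or before the current location. The signpost values are invented for illustration.
    ```python
    # Hypothetical sketch of mapping a reading position between two versions
    # of an e-book via stored signposts. The signpost data is illustrative.
    import bisect

    # Signposts sorted by text offset: (text_character_offset, audio_seconds)
    SIGNPOSTS = [(0, 0.0), (1200, 95.0), (2600, 210.0), (4100, 330.0)]

    def switch_text_to_audio(text_offset):
        offsets = [s[0] for s in SIGNPOSTS]
        index = bisect.bisect_right(offsets, text_offset) - 1  # last signpost <= offset
        return SIGNPOSTS[max(index, 0)][1]

    def switch_audio_to_text(audio_seconds):
        seconds = [s[1] for s in SIGNPOSTS]
        index = bisect.bisect_right(seconds, audio_seconds) - 1
        return SIGNPOSTS[max(index, 0)][0]

    # Usage: a reader at character 3000 resumes listening at the 210-second mark.
    print(switch_text_to_audio(3000))   # 210.0
    print(switch_audio_to_text(100.0))  # 1200
    ```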
  • Publication number: 20130188886
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Application
    Filed: December 4, 2012
    Publication date: July 25, 2013
    Inventors: David Petrou, Matthew Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
  • Patent number: 8331739
    Abstract: OCR errors are identified and corrected through learning. An error probability estimator is trained using ground truths to learn error probability estimation. Multiple OCR engines process a text image and convert it into text. The error probability estimator compares the outcomes of the multiple OCR engines for mismatches, and determines an error probability for each of the mismatches. If the error probability of a mismatch exceeds an error probability threshold, a suspect is generated and grouped together with similar suspects in a cluster. A question for the cluster is generated and rendered to a human operator for answering. The answer from the human operator is then applied to all suspects in the cluster to correct OCR errors in the resulting text. The answer is also used to further train the error probability estimator.
    Type: Grant
    Filed: January 21, 2009
    Date of Patent: December 11, 2012
    Assignee: Google Inc.
    Inventors: Ahmad Abdulkader, Matthew R. Casey
  • Publication number: 20120134590
    Abstract: A server system receives a visual query from a client system distinct from the server system. The server system performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system scores each textual character in the plurality of textual characters in accordance with the geographic location of the client system. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. Then the server system retrieves a canonical document having the one or more high quality textual strings and sends at least a portion of the canonical document to the client system.
    Type: Application
    Filed: December 1, 2011
    Publication date: May 31, 2012
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Publication number: 20120128250
    Abstract: A server system receives a visual query from a client system distinct from the server system, performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query, and scores each textual character in the plurality of textual characters. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query; retrieves a canonical document having the one or more high quality textual strings; generates a combination of the visual query and at least a portion of the canonical document; and sends the combination to the client system.
    Type: Application
    Filed: December 1, 2011
    Publication date: May 24, 2012
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Publication number: 20120128251
    Abstract: A server system receives a visual query from a client system, performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system also produces structural information associated with the textual characters in the visual query. Textual characters in the plurality of textual characters are scored. The server system further identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. A canonical document that includes the one or more high quality textual strings and that is consistent with the structural information is retrieved. At least a portion of the canonical document is sent to the client system.
    Type: Application
    Filed: December 1, 2011
    Publication date: May 24, 2012
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
  • Publication number: 20110129153
    Abstract: A server system receives a visual query from a client system. The visual query is an image containing text such as a picture of a document. At the receiving server or another server, optical character recognition (OCR) is performed on the visual query to produce text recognition data representing textual characters. Each character in a contiguous region of the visual query is individually scored according to its quality. The quality score of a respective character is influenced by the quality scores of neighboring or nearby characters. Using the scores, one or more high quality strings of characters are identified. Each high quality string has a plurality of high quality characters. A canonical document containing the one or more high quality textual strings is retrieved. At least a portion of the canonical document is sent to the client system.
    Type: Application
    Filed: August 6, 2010
    Publication date: June 2, 2011
    Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
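    A rough sketch of the neighbor-influenced character scoring in this abstract: each character's raw confidence is blended with the confidences of adjacent characters, so an isolated low score inside an otherwise clean word is softened. The one-neighbor window and the blend weight are illustrative assumptions.
    ```python
    # Hypothetical sketch of quality scores influenced by neighboring
    # characters. Window size and blend weight are invented for illustration.

    def smooth_scores(raw_scores, neighbor_weight=0.4):
        """Blend each raw confidence with the mean of its immediate neighbors."""
        smoothed = []
        for i, score in enumerate(raw_scores):
            neighbors = raw_scores[max(i - 1, 0):i] + raw_scores[i + 1:i + 2]
            if not neighbors:  # single-character region: nothing to blend with
                smoothed.append(score)
                continue
            neighbor_mean = sum(neighbors) / len(neighbors)
            smoothed.append((1 - neighbor_weight) * score + neighbor_weight * neighbor_mean)
        return smoothed

    # Usage: the dip at the third character is pulled up by its clean neighbors.
    raw = [0.95, 0.92, 0.40, 0.93, 0.94]
    print([round(s, 2) for s in smooth_scores(raw)])
    ```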
  • Publication number: 20110131241
    Abstract: A server system receives a visual query and identifies an entity in the visual query. The server system further identifies a client-side action corresponding to the identified entity and creates an actionable search result element configured to launch the client-side action. Examples of actionable search result elements are buttons to initiate a telephone call, to initiate an email message, to map an address, to make a restaurant reservation, and to provide an option to purchase a product. The entity identified in the visual query may be indirectly associated with a client-side action whose contact address or appropriate link is found in a search result associated with the identified entity. The client system receives and displays the actionable search result element, and upon a user selection of the actionable search result element, launches the client-side action in an application distinct from the visual query client application.
    Type: Application
    Filed: August 11, 2010
    Publication date: June 2, 2011
    Inventors: David Petrou, Avi Flamholz, Matthew R. Casey, Theodore Power