Patents by Inventor Alessandro Bissacco

Alessandro Bissacco is named as an inventor on the following patent filings. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150169971
    Abstract: This disclosure is related to techniques for character recognition. Disclosed techniques include obtaining an electronic image containing depictions of characters, obtaining an initial optical character recognition output for the electronic image, identifying as potentially accurate a set of subsections of the initial optical character recognition output to generate a query, obtaining a search result corresponding to a document and responsive to the query, verifying text in the search result matches the depictions of characters, and outputting computer readable text from the document.
    Type: Application
    Filed: September 7, 2012
    Publication date: June 18, 2015
    Inventors: Mark Joseph Cummins, Matthew Ryan Casey, Alessandro Bissacco
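    A minimal Python sketch of the query-and-verify flow described in this abstract; the ocr and search callables, the confidence threshold, and the simple containment check are illustrative assumptions rather than the patented implementation:

      def recognize_with_search(image, ocr, search, min_confidence=0.9):
          """Return corrected text for `image` using a document search backend.

          ocr(image)    -> list of (word, confidence) pairs
          search(query) -> list of candidate document texts
          """
          initial = ocr(image)

          # Keep only the spans the OCR engine is confident about.
          confident_words = [w for w, conf in initial if conf >= min_confidence]
          if not confident_words:
              return " ".join(w for w, _ in initial)  # nothing reliable to query with

          # Use the confident spans as a search query.
          query = " ".join(confident_words)

          # Verify that a returned document actually contains the confident text.
          for document_text in search(query):
              if all(word in document_text for word in confident_words):
                  # Output computer-readable text taken from the matching document.
                  return document_text

          # Fall back to the raw OCR output if no document verifies.
          return " ".join(w for w, _ in initial)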
  • Publication number: 20150161465
    Abstract: A text recognition server is configured to recognize text in a sparse text image. Specifically, given an image, the server specifies a plurality of “patches” (blocks of pixels within the image). The system applies a text detection algorithm to the patches to determine a number of the patches that contain text. This application of the text detection algorithm is used both to estimate the orientation of the image and to determine whether the image is textually sparse or textually dense. If the image is determined to be textually sparse, textual patches are identified and grouped into text regions, each of which is then separately processed by an OCR algorithm, and the recognized text for each region is combined into a result for the image as a whole.
    Type: Application
    Filed: May 5, 2014
    Publication date: June 11, 2015
    Applicant: Google Inc.
    Inventors: Alessandro Bissacco, Hartmut Neven
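    A rough Python sketch of the patch-based sparse/dense decision this abstract describes; the patch list, text detector, region grouping, OCR callable, and the 20% density cut-off are stand-ins, not the server's actual components:

      def recognize_sparse_text(image, patches, contains_text, group_into_regions,
                                run_ocr, sparse_fraction=0.2):
          """patches: blocks of pixels cut from `image`.
          contains_text(patch) -> bool; group_into_regions(patches) -> regions;
          run_ocr(region_or_image) -> str."""
          textual = [p for p in patches if contains_text(p)]
          density = len(textual) / max(len(patches), 1)

          if density >= sparse_fraction:
              # Textually dense (e.g. a scanned page): OCR the whole image at once.
              return run_ocr(image)

          # Textually sparse (e.g. a street scene): group the textual patches into
          # regions, OCR each region separately, and combine the results.
          regions = group_into_regions(textual)
          return "\n".join(run_ocr(region) for region in regions)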
  • Publication number: 20150139506
    Abstract: The technology of the present disclosure includes computer-implemented methods, computer program products, and systems to filter images before transmitting to a system for optical character recognition (“OCR”). A user computing device obtains a first image of the card from the digital scan of a physical card and analyzes features of the first image, the analysis being sufficient to determine if the first image is likely to be usable by an OCR algorithm. If the user computing device determines that the first image is likely to be usable, then the first image is transmitted to an OCR system associated with the OCR algorithm. Upon a determination that the first image is unlikely to be usable, a second image of the card from the digital scan of the physical card is analyzed. The optical character recognition system performs an optical character recognition algorithm on the filtered card.
    Type: Application
    Filed: October 27, 2014
    Publication date: May 21, 2015
    Inventors: Xiaohang Wang, Alessandro Bissacco, Glen Berntson, Marria Nazif, Justin Scheiner, Sam Shih, Mark Leslie Snyder, Daniel Talavera
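    A simplified Python sketch of the client-side quality gate this abstract describes; the blur and glare measures and their thresholds are illustrative assumptions, not the patent's feature analysis:

      def first_usable_frame(frames, blur_score, glare_score,
                             min_sharpness=100.0, max_glare=0.3):
          """Return the first frame of the card scan that looks usable for OCR.

          frames:      iterable of images from the digital scan of the card
          blur_score:  callable, higher means sharper
          glare_score: callable, fraction of saturated pixels
          """
          for frame in frames:
              if blur_score(frame) >= min_sharpness and glare_score(frame) <= max_glare:
                  return frame  # transmit this frame to the OCR system
              # Otherwise discard the frame and analyze the next image from the scan.
          return None           # nothing usable yet; keep scanning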
  • Patent number: 9020265
    Abstract: A system and method is provided for automatically recognizing building numbers in street level images. In one aspect, a processor selects a street level image that is likely to be near an address of interest. The processor identifies those portions of the image that are visually similar to street numbers, and then extracts the numeric values of the characters displayed in such portions. If an extracted value corresponds with the building number of the address of interest such as being substantially equal to the address of interest, the extracted value and the image portion are displayed to a human operator. The human operator confirms, by looking at the image portion, whether the image portion appears to be a building number that matches the extracted value. If so, the processor stores a value that associates that building number with the street level image.
    Type: Grant
    Filed: June 5, 2014
    Date of Patent: April 28, 2015
    Assignee: Google Inc.
    Inventors: Bo Wu, Alessandro Bissacco, Raymond W. Smith, Kong Man Cheung, Andrea Frome, Shlomo Urbach
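    A condensed Python sketch of the matching step in this abstract: numeric values extracted from street-number-like image regions are compared against the address of interest, and near matches are queued for a human operator. The digit extractor and the tolerance are hypothetical placeholders:

      def candidate_building_numbers(regions, extract_digits, address_number,
                                     tolerance=2):
          """regions: image crops that look like street numbers.
          extract_digits(region) -> int or None.
          Returns (region, value) pairs to show to a human operator."""
          candidates = []
          for region in regions:
              value = extract_digits(region)
              if value is None:
                  continue
              # "Substantially equal": exact match or within a small tolerance,
              # e.g. neighbouring house numbers on the same block.
              if abs(value - address_number) <= tolerance:
                  candidates.append((region, value))
          return candidates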
  • Patent number: 8995758
    Abstract: According to an embodiment, a method for filtering descriptors for visual object recognition is provided. The method includes identifying false positive descriptors having a local match confidence that exceeds a predetermined threshold and a global image match confidence that is less than a second threshold. The method also includes training at least one classifier to discriminate between the false positive descriptors and other descriptors. The method further includes filtering feature point matches using the at least one classifier. According to another embodiment, the filtering step may further include removing one or more feature point matches from a result set. According to a further embodiment, a system for filtering feature point matches for visual object recognition is provided. The system includes a hard false positive identifier, a classifier trainer and a hard false positive filter.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: March 31, 2015
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Ulrich Buddemeier, Hartmut Neven
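    A Python sketch of the hard-false-positive filtering idea in this abstract, using scikit-learn's LinearSVC as a stand-in classifier and made-up confidence thresholds; the patent does not prescribe this particular model:

      from sklearn.svm import LinearSVC

      def train_false_positive_filter(descriptors, local_conf, global_conf,
                                      local_thresh=0.8, global_thresh=0.3):
          """descriptors: (N, D) array; local_conf, global_conf: (N,) arrays.
          Hard false positives match well locally but come from images that
          do not match globally."""
          is_hard_fp = (local_conf > local_thresh) & (global_conf < global_thresh)
          clf = LinearSVC()
          clf.fit(descriptors, is_hard_fp.astype(int))
          return clf

      def filter_matches(clf, matches, descriptor_of):
          """Drop feature point matches whose descriptors the classifier flags."""
          return [m for m in matches if clf.predict([descriptor_of(m)])[0] == 0]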
  • Publication number: 20150055866
    Abstract: Disclosed techniques include receiving an electronic image containing depictions of characters, segmenting at least some of the depictions of characters using a first segmentation technique to produce a first segmented portion, and performing a first character recognition on the first segmented portion to determine a first sequence of characters. The techniques also include determining, based on the performing the first character recognition, that the first sequence of characters does not match the depictions of characters. The techniques further include segmenting at least some of the depictions of characters using a second segmentation technique, based on the determining, to produce a second segmented portion, and performing a second character recognition on at least a portion of the second segmented portion to produce a second sequence of characters. The techniques also include outputting a third sequence of characters based on at least part of the second sequence of characters.
    Type: Application
    Filed: May 25, 2012
    Publication date: February 26, 2015
    Inventors: Mark Joseph Cummins, Alessandro Bissacco
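    A minimal Python sketch of the two-pass segmentation fallback this abstract describes; the segmenters, the recognizer, and the plausibility check are stand-ins:

      def recognize_with_fallback(image, segmenters, recognize, looks_valid):
          """segmenters: ordered list of callables, each image -> segmented pieces.
          recognize(pieces) -> str; looks_valid(text) -> bool, e.g. a dictionary
          or language-model check that the text plausibly matches the image."""
          best = ""
          for segment in segmenters:
              pieces = segment(image)
              text = recognize(pieces)
              if looks_valid(text):
                  return text        # first segmentation that yields a plausible read
              best = best or text    # remember something to return as a last resort
          return best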
  • Publication number: 20150003667
    Abstract: Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
    Type: Application
    Filed: November 26, 2013
    Publication date: January 1, 2015
    Applicant: Google Inc.
    Inventors: Henry Allan Rowley, Sanjiv Kumar, Xiaohang Wang, Alessandro Bissacco, Jose Jeronimo Moreira Rodrigues, Kishore Ananda Papineni
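    The "known digit patterns" mentioned above can be illustrated with the Luhn checksum that payment card numbers satisfy; using it to reject implausible OCR readings, as sketched below in Python, is an illustration rather than the patent's classifier pipeline:

      def luhn_ok(digits):
          """True if the digit string passes the Luhn checksum."""
          total = 0
          for i, ch in enumerate(reversed(digits)):
              d = int(ch)
              if i % 2 == 1:   # double every second digit from the right
                  d *= 2
                  if d > 9:
                      d -= 9
              total += d
          return total % 10 == 0

      def plausible_card_numbers(candidate_readings):
          """Keep only OCR candidates with a valid length and Luhn checksum."""
          return [r for r in candidate_readings
                  if r.isdigit() and 13 <= len(r) <= 19 and luhn_ok(r)]

      # Example: a correct reading passes, a reading with one misread digit fails.
      assert plausible_card_numbers(["4111111111111111", "4111111111111112"]) == [
          "4111111111111111"]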
  • Publication number: 20150006361
    Abstract: Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a card; obtaining a plurality of images of the card from the digital scan of the physical card; performing an optical character recognition algorithm on each of the plurality of images; comparing results of the application of the optical character recognition algorithm for each of the plurality of images; determining if a configured threshold of the results for each of the plurality of images match each other; and verifying the results when the results for each of the plurality of images match each other. A threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud.
    Type: Application
    Filed: September 13, 2013
    Publication date: January 1, 2015
    Applicant: Google Inc.
    Inventors: Sanjiv Kumar, Henry Allan Rowley, Xiaohang Wang, Yakov Okshtein, Farhan Shamsi, Alessandro Bissacco
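    A Python sketch of the cross-frame agreement check this abstract describes: card data is extracted from several frames of the scan and accepted only once enough frames agree. The OCR callable and the agreement threshold are stand-ins:

      from collections import Counter

      def verify_across_frames(frames, run_ocr, min_agreement=3):
          """frames: images from the continuous scan; run_ocr(frame) -> str.
          Returns the extracted value once `min_agreement` frames agree."""
          counts = Counter(run_ocr(frame) for frame in frames)
          if not counts:
              return None
          value, count = counts.most_common(1)[0]
          return value if count >= min_agreement else None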
  • Publication number: 20150003733
    Abstract: Extracting financial card information with relaxed alignment comprises a method to receive an image of a card, determine one or more edge finder zones in locations of the image, and identify lines in the one or more edge finder zones. The method further identifies one or more quadrilaterals formed by intersections of extrapolations of the identified lines, determines an aspect ratio of each of the one or more quadrilaterals, and compares the determined aspect ratios to an expected aspect ratio. The method then identifies a quadrilateral that matches the expected aspect ratio and performs an optical character recognition algorithm on the rectified model. A similar method is performed on multiple cards in an image. The results of the analysis of each of the cards are compared to improve accuracy of the data.
    Type: Application
    Filed: August 19, 2014
    Publication date: January 1, 2015
    Inventors: Xiaohang Wang, Farhan Shamsi, Yakov Okshtein, Sanjiv Kumar, Henry Allan Rowley, Marcus Quintana Mitchell, Debra Lin Repenning, Alessandro Bissacco, Justin Scheiner, Leon Palm
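    A Python sketch of the aspect-ratio test this abstract describes. Standard payment cards measure 85.60 mm by 53.98 mm (ISO/IEC 7810 ID-1), which fixes the expected ratio; the line-finding and quadrilateral-building steps are abstracted into a stand-in callable:

      EXPECTED_RATIO = 85.60 / 53.98   # roughly 1.586 for an ID-1 card

      def best_card_quadrilateral(quadrilaterals, side_lengths, tolerance=0.08):
          """quadrilaterals: candidates formed by intersecting extrapolated lines.
          side_lengths(quad) -> (width, height). Returns the candidate whose
          aspect ratio is closest to the expected card ratio, or None."""
          best, best_error = None, tolerance
          for quad in quadrilaterals:
              width, height = side_lengths(quad)
              ratio = max(width, height) / max(min(width, height), 1e-9)
              error = abs(ratio - EXPECTED_RATIO) / EXPECTED_RATIO
              if error <= best_error:
                  best, best_error = quad, error
          return best   # rectify this quadrilateral and run OCR on the result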
  • Publication number: 20150006360
    Abstract: Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a card; obtaining a plurality of images of the card from the digital scan of the physical card; performing an optical character recognition algorithm on each of the plurality of images; comparing results of the application of the optical character recognition algorithm for each of the plurality of images; determining if a configured threshold of the results for each of the plurality of images match each other; and verifying the results when the results for each of the plurality of images match each other. A threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud.
    Type: Application
    Filed: September 13, 2013
    Publication date: January 1, 2015
    Applicant: Google Inc.
    Inventors: Sanjiv Kumar, Henry Allan Rowley, Xiaohang Wang, Yakov Okshtein, Farhan Shamsi, Alessandro Bissacco
  • Patent number: 8903136
    Abstract: The technology of the present disclosure includes computer-implemented methods, computer program products, and systems to filter images before transmitting to a system for optical character recognition (“OCR”). A user computing device obtains a first image of the card from the digital scan of a physical card and analyzes features of the first image, the analysis being sufficient to determine if the first image is likely to be usable by an OCR algorithm. If the user computing device determines that the first image is likely to be usable, then the first image is transmitted to an OCR system associated with the OCR algorithm. Upon a determination that the first image is unlikely to be usable, a second image of the card from the digital scan of the physical card is analyzed. The optical character recognition system performs an optical character recognition algorithm on the filtered card.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: December 2, 2014
    Assignee: Google Inc.
    Inventors: Xiaohang Wang, Alessandro Bissacco, Glen Berntson, Marria Nazif, Justin Scheiner, Sam Shih, Mark Leslie Snyder, Daniel Talavera
  • Patent number: 8868571
    Abstract: Systems and methods for selecting interest point descriptors for object recognition. In an embodiment, the present invention estimates performance of local descriptors by (1) receiving a local descriptor relating to an object in a first image; (2) identifying one or more nearest neighbor descriptors relating to one or more images different from the first image, the nearest neighbor descriptors comprising nearest neighbors of the local descriptor; (3) calculating a quality score for the local descriptor based on the number of nearest neighbor descriptors that relate to images showing the object; and (4) determining, on the basis of the quality score, if the local descriptor is effective in identifying the object.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: October 21, 2014
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Ulrich Buddemeier, Hartmut Neven
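    A small Python sketch of the descriptor quality score this abstract describes: count how many of a descriptor's nearest neighbours come from images that actually show the same object. The brute-force neighbour search, k, and score threshold are illustrative choices:

      import math

      def quality_score(descriptor, dataset, k=10):
          """dataset: list of (vector, shows_same_object: bool) pairs.
          Returns the fraction of the k nearest neighbours that show the object."""
          def dist(a, b):
              return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

          neighbours = sorted(dataset, key=lambda item: dist(descriptor, item[0]))[:k]
          if not neighbours:
              return 0.0
          return sum(1 for _, same in neighbours if same) / len(neighbours)

      def keep_effective_descriptors(descriptors, dataset, min_score=0.5):
          """Keep descriptors whose score suggests they identify the object."""
          return [d for d in descriptors if quality_score(d, dataset) >= min_score]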
  • Publication number: 20140286573
    Abstract: A system and method is provided for automatically recognizing building numbers in street level images. In one aspect, a processor selects a street level image that is likely to be near an address of interest. The processor identifies those portions of the image that are visually similar to street numbers, and then extracts the numeric values of the characters displayed in such portions. If an extracted value corresponds with the building number of the address of interest such as being substantially equal to the address of interest, the extracted value and the image portion are displayed to a human operator. The human operator confirms, by looking at the image portion, whether the image portion appears to be a building number that matches the extracted value. If so, the processor stores a value that associates that building number with the street level image.
    Type: Application
    Filed: June 5, 2014
    Publication date: September 25, 2014
    Applicant: Google Inc.
    Inventors: Bo Wu, Alessandro Bissacco, Raymond W. Smith, Kong Man Cheung, Andrea Frome, Shlomo Urbach
  • Patent number: 8837833
    Abstract: Extracting financial card information with relaxed alignment comprises a method to receive an image of a card, determine one or more edge finder zones in locations of the image, and identify lines in the one or more edge finder zones. The method further identifies one or more quadrilaterals formed by intersections of extrapolations of the identified lines, determines an aspect ratio of each of the one or more quadrilaterals, and compares the determined aspect ratios to an expected aspect ratio. The method then identifies a quadrilateral that matches the expected aspect ratio and performs an optical character recognition algorithm on the rectified model. A similar method is performed on multiple cards in an image. The results of the analysis of each of the cards are compared to improve accuracy of the data.
    Type: Grant
    Filed: December 12, 2013
    Date of Patent: September 16, 2014
    Assignee: Google Inc.
    Inventors: Xiaohang Wang, Farhan Shamsi, Yakov Okshtein, Sanjiv Kumar, Henry Allan Rowley, Marcus Quintana Mitchell, Debra Lin Repenning, Alessandro Bissacco, Justin Scheiner, Leon Palm
  • Patent number: 8805125
    Abstract: Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a card; obtaining a plurality of images of the card from the digital scan of the physical card; performing an optical character recognition algorithm on each of the plurality of images; comparing results of the application of the optical character recognition algorithm for each of the plurality of images; determining if a configured threshold of the results for each of the plurality of images match each other; and verifying the results when the results for each of the plurality of images match each other. A threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: August 12, 2014
    Assignee: Google Inc.
    Inventors: Sanjiv Kumar, Henry Allan Rowley, Xiaohang Wang, Yakov Okshtein, Farhan Shamsi, Alessandro Bissacco
  • Patent number: 8787673
    Abstract: A system and method is provided for automatically recognizing building numbers in street level images. In one aspect, a processor selects a street level image that is likely to be near an address of interest. The processor identifies those portions of the image that are visually similar to street numbers, and then extracts the numeric values of the characters displayed in such portions. If an extracted value corresponds with the building number of the address of interest such as being substantially equal to the address of interest, the extracted value and the image portion are displayed to a human operator. The human operator confirms, by looking at the image portion, whether the image portion appears to be a building number that matches the extracted value. If so, the processor stores a value that associates that building number with the street level image.
    Type: Grant
    Filed: July 12, 2011
    Date of Patent: July 22, 2014
    Assignee: Google Inc.
    Inventors: Bo Wu, Alessandro Bissacco, Raymond W. Smith, Kong Man Cheung, Andrea Frome, Shlomo Urbach
  • Patent number: 8755595
    Abstract: Embodiments for automatic extraction of character ground truth data from images are disclosed. A transcription may be rendered in a plurality of fonts and orientations to obtain a set of candidate word templates with associated character bounding boxes. A word template may be selected from the set of candidate word templates, wherein the selected word template corresponds to a word patch from an image. The character bounding boxes, of the selected word template, may be evaluated in a plurality of orientations about each respective character from the word patch to obtain a set of candidate character templates. For each respective character from the word patch, a character template may be selected from the set of candidate character templates, wherein each selected character template corresponds to the respective character from the word patch.
    Type: Grant
    Filed: July 19, 2011
    Date of Patent: June 17, 2014
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Krishnendu Chaudhury
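    A Python sketch of the template-selection step this abstract describes: the known transcription is rendered in several fonts and orientations, and the rendering that best matches the word patch supplies the character bounding boxes. The render and match_score callables are placeholders:

      from itertools import product

      def best_word_template(word, word_patch, fonts, angles, render, match_score):
          """render(word, font, angle) -> (template_image, char_bounding_boxes);
          match_score(template_image, word_patch) -> float, higher is better.
          Returns the character bounding boxes of the best-matching rendering."""
          best_boxes, best_score = None, float("-inf")
          for font, angle in product(fonts, angles):
              template, boxes = render(word, font, angle)
              score = match_score(template, word_patch)
              if score > best_score:
                  best_boxes, best_score = boxes, score
          # Aligned to the word patch, these boxes give per-character ground truth.
          return best_boxes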
  • Patent number: 8718365
    Abstract: A text recognition server is configured to recognize text in a sparse text image. Specifically, given an image, the server specifies a plurality of “patches” (blocks of pixels within the image). The system applies a text detection algorithm to the patches to determine a number of the patches that contain text. This application of the text detection algorithm is used both to estimate the orientation of the image and to determine whether the image is textually sparse or textually dense. If the image is determined to be textually sparse, textual patches are identified and grouped into text regions, each of which is then separately processed by an OCR algorithm, and the recognized text for each region is combined into a result for the image as a whole.
    Type: Grant
    Filed: October 29, 2009
    Date of Patent: May 6, 2014
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Hartmut Neven
  • Patent number: 8520949
    Abstract: According to an embodiment, a method for filtering feature point matches for visual object recognition is provided. The method includes identifying local descriptors in an image and determining a self-similarity score for each local descriptor based upon matching each local descriptor to its nearest neighbor descriptors from a descriptor dataset. The method also includes filtering feature point matches having a number of local descriptors with self-similarity scores that exceed a threshold. According to another embodiment, the filtering step may further include removing feature point matches. According to a further embodiment, a system for filtering feature point matches for visual object recognition is provided. The system includes a descriptor identifier, a self-similar descriptor analyzer and a self-similar descriptor filter.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: August 27, 2013
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Ulrich Buddemeier, Hartmut Neven
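    A Python sketch of the self-similarity filter this abstract describes: a descriptor that lies close to many descriptors in the reference dataset is not distinctive, and matches built on such descriptors are dropped. The distance, radius, and threshold are illustrative stand-ins:

      import math

      def self_similarity(descriptor, dataset, radius=0.5):
          """Count dataset descriptors within `radius` of this descriptor."""
          def dist(a, b):
              return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
          return sum(1 for other in dataset if dist(descriptor, other) <= radius)

      def filter_self_similar_matches(matches, dataset, max_similar=5):
          """matches: list of (descriptor_a, descriptor_b) feature point matches.
          Remove matches whose descriptors look too generic."""
          return [(a, b) for a, b in matches
                  if self_similarity(a, dataset) <= max_similar]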
  • Patent number: 8345921
    Abstract: Embodiments of this invention relate to detecting and blurring objects in images. In an embodiment, a system detects objects in a photographic image. The system includes an object detector module configured to detect regions of the photographic image that include objects of a particular type at least based on the content of the photographic image. The system further includes a false positive detector module configured to determine whether each region detected by the object detector module includes an object of the particular type at least based on information about the context in which the photographic image was taken.
    Type: Grant
    Filed: May 11, 2009
    Date of Patent: January 1, 2013
    Assignee: Google Inc.
    Inventors: Andrea Frome, German Cheung, Ahmad Abdulkader, Marco Zennaro, Bo Wu, Alessandro Bissacco, Hartmut Neven, Luc Vincent, Hartwig Adam
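    A Python sketch of the detect-then-verify flow this abstract describes; the detector, the context-based false positive check, and the blurring step are stand-in callables, and the notion of "context" (for example, where the camera was pointing when the photo was taken) is only an assumed example:

      def blur_detected_objects(image, detect_regions, is_false_positive,
                                blur_region, context):
          """detect_regions(image) -> regions likely to contain the object type
          (e.g. faces); is_false_positive(region, context) -> bool, using
          information about how the photo was taken; blur_region(image, region)
          returns a copy of the image with that region blurred."""
          result = image
          for region in detect_regions(image):
              if is_false_positive(region, context):
                  continue              # context says this is not a real detection
              result = blur_region(result, region)
          return result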