Limited To Specially Coded, Human-readable Characters Patents (Class 382/182)
  • Patent number: 8938385
    Abstract: The present invention provides a method for recognizing a named entity included in natural language, comprising the steps of: performing gradual parsing model training with the natural language to obtain a classification model; performing gradual parsing and recognition according to the obtained classification model to obtain information on positions and types of candidate named entities; performing a refusal recognition process for the candidate named entities; and generating a candidate named entity lattice from the refusal-recognition-processed candidate named entities, and searching for an optimal path. The present invention uses a one-class classifier to score or evaluate the forward and backward parsing and recognition results, obtained using only local features, so as to determine the most reliable beginning and end borders of the named entities.
    Type: Grant
    Filed: May 15, 2007
    Date of Patent: January 20, 2015
    Assignee: Panasonic Corporation
    Inventors: Pengju Yan, Yufei Sun, Tsuzuki Takashi
  • Publication number: 20150010235
    Abstract: A system for document searching can include a camera. The system may further include an image capturing module configured to capture a first image of a first portion of a document, a feature recognition module in communication with the image capturing module, the feature recognition module configured to determine a first feature associated with the first image, a search module configured to send search information to a server and receive a first result from a first search of a set of documents that was performed based on one or more search criteria determined based on the first feature associated with the first image, and a search interface configured to present the first result on the device.
    Type: Application
    Filed: July 2, 2014
    Publication date: January 8, 2015
    Inventor: Simon Dominic Copsey
  • Patent number: 8929461
    Abstract: Machine-readable media, methods, apparatus and system for caption detection are described. In some embodiments, a plurality of text boxes may be detected from a plurality of frames. A first percentage of the plurality of text boxes whose locations on the plurality of frames fall into a location range may be obtained. A second percentage of the plurality of text boxes whose sizes fall into a size range may be obtained. Then, it may be determined if the first percentage and the location range are acceptable and if the second percentage and the size range are acceptable.
    Type: Grant
    Filed: April 17, 2007
    Date of Patent: January 6, 2015
    Assignee: Intel Corporation
    Inventors: Wei Hu, Rui Ma
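The two-percentage test this abstract describes can be sketched in a few lines; the field names, ranges, and the 80% acceptance threshold below are illustrative assumptions, not taken from the patent:

```python
from collections import namedtuple

TextBox = namedtuple("TextBox", ["x", "y", "w", "h"])

def percentage_in_range(values, lo, hi):
    """Fraction of values that fall inside [lo, hi]."""
    if not values:
        return 0.0
    return sum(lo <= v <= hi for v in values) / len(values)

def caption_ranges_acceptable(boxes, y_range, area_range, threshold=0.8):
    """Accept the candidate location and size ranges when a large enough
    share of the detected text boxes falls inside each range."""
    p_location = percentage_in_range([b.y for b in boxes], *y_range)
    p_size = percentage_in_range([b.w * b.h for b in boxes], *area_range)
    return p_location >= threshold and p_size >= threshold
```

Captions tend to recur at one screen position with near-constant box size across frames, which is why both percentages must clear the threshold together.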
  • Publication number: 20150003732
    Abstract: Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
    Type: Application
    Filed: October 21, 2013
    Publication date: January 1, 2015
    Applicant: GOOGLE INC.
    Inventors: Sanjiv Kumar, Henry Allan Rowley, Xiaohang Wang, Jose Jeronimo Moreira Rodrigues
  • Publication number: 20150003734
    Abstract: A service can perform optical character recognition (OCR) on an image of a record to determine a first set of information items about the record. A second set of information items can be identified that are likely part of the record but not determinable from performing OCR on the image. Another resource can be utilized to determine the second set of information items. A classification for the record can be determined based on first and second sets of information items. The record can be associated with a financial resource of the user based at least in part on the classification.
    Type: Application
    Filed: September 17, 2014
    Publication date: January 1, 2015
    Inventors: David M. Barrett, Kevin Michael Kuchta
  • Publication number: 20150003733
    Abstract: Extracting financial card information with relaxed alignment comprises a method to receive an image of a card, determine one or more edge finder zones in locations of the image, and identify lines in the one or more edge finder zones. The method further identifies one or more quadrilaterals formed by intersections of extrapolations of the identified lines, determines an aspect ratio of each of the one or more quadrilaterals, and compares the determined aspect ratios to an expected aspect ratio. The method then identifies a quadrilateral that matches the expected aspect ratio and performs an optical character recognition algorithm on the rectified model. A similar method is performed on multiple cards in an image. The results of the analysis of each of the cards are compared to improve accuracy of the data.
    Type: Application
    Filed: August 19, 2014
    Publication date: January 1, 2015
    Inventors: Xiaohang Wang, Farhan Shamsi, Yakov Okshtein, Sanjiv Kumar, Henry Allan Rowley, Marcus Quintana Mitchell, Debra Lin Repenning, Alessandro Bissacco, Justin Scheiner, Leon Palm
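The aspect-ratio comparison can be illustrated with a short sketch. The expected ratio comes from the ISO/IEC 7810 ID-1 card format (85.60 × 53.98 mm); the corner ordering and 10% tolerance are illustrative assumptions:

```python
import math

EXPECTED_RATIO = 85.60 / 53.98  # ISO/IEC 7810 ID-1 card, ~1.586

def quad_aspect_ratio(corners):
    """Approximate aspect ratio of a quadrilateral whose corners are
    ordered top-left, top-right, bottom-right, bottom-left."""
    tl, tr, br, bl = corners
    width = (math.dist(tl, tr) + math.dist(bl, br)) / 2
    height = (math.dist(tl, bl) + math.dist(tr, br)) / 2
    return width / height

def best_card_candidate(quads, tolerance=0.1):
    """Return the candidate quadrilateral whose aspect ratio is closest
    to the expected card ratio, or None if none is within tolerance."""
    best, best_err = None, tolerance
    for q in quads:
        err = abs(quad_aspect_ratio(q) - EXPECTED_RATIO) / EXPECTED_RATIO
        if err <= best_err:
            best, best_err = q, err
    return best
```

Averaging opposite sides makes the ratio tolerant of the "relaxed alignment" the abstract mentions, since a slightly skewed card still yields a near-correct ratio.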
  • Publication number: 20150003667
    Abstract: Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
    Type: Application
    Filed: November 26, 2013
    Publication date: January 1, 2015
    Applicant: GOOGLE INC.
    Inventors: Henry Allan Rowley, Sanjiv Kumar, Xiaohang Wang, Alessandro Bissacco, Jose Jeronimo Moreira Rodrigues, Kishore Ananda Papineni
  • Patent number: 8923629
    Abstract: A system and a method are disclosed that determine images with co-occurrence groups of individuals from an image collection. A value of a similarity metric is computed for each pair of images of the image collection, the value of the similarity metric being computed based on a comparison of the number of individuals in common between the images of the pair and the total number of individuals identified in both images of the pair. The collection of images is clustered based on the computed values of the similarity metric. At least one co-occurrence group is determined based on the results of the clustering, where a co-occurrence group is determined as a cluster of images that have a similar combination of individuals.
    Type: Grant
    Filed: April 27, 2011
    Date of Patent: December 30, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Yuli Gao
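The similarity metric and clustering step can be sketched as follows, reading the "total number of individuals identified in both images" as the union of the two sets (a Jaccard-style measure). The threshold and the greedy single-link clustering are illustrative simplifications of whatever clustering the patent actually uses:

```python
def cooccurrence_similarity(people_a, people_b):
    """Individuals in common divided by all individuals identified in
    either image (a Jaccard-style measure over sets of individuals)."""
    a, b = set(people_a), set(people_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

def cluster_images(images, threshold=0.5):
    """Greedy single-link clustering: an image joins an existing group
    when its similarity to any member meets the threshold."""
    clusters = []
    for name, people in images.items():
        for cluster in clusters:
            if any(cooccurrence_similarity(people, images[m]) >= threshold
                   for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters
```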
  • Patent number: 8923618
    Abstract: An expression, for which complementary information can be outputted, is extracted from a document obtained by character recognition for an image. Complementary information related to the extracted expression is outputted when a character or a symbol adjacent to the beginning or the end of the extracted expression is not a predetermined character or symbol. Output of complementary information related to the extracted expression is skipped when the character or symbol adjacent to the beginning or the end of the extracted expression is the predetermined character or symbol. A problem that complementary information unrelated to an original text is outputted is prevented even when a false character recognition occurs.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: December 30, 2014
    Assignee: Sharp Kabushiki Kaisha
    Inventor: Takeshi Kutsumi
  • Patent number: 8923619
    Abstract: A viewfinder screen display is generated and positioned such that a source document is displayed in the viewfinder screen display. Source document image blocks corresponding to different portions of the source document are then defined. For each source document image block, the image capture parameter of an image capture device is set to an optimized image capture parameter setting for the source document image block. The image capture device then captures an image block optimized image of the source document optimized for the source document image block. The optimized source document image blocks are then extracted from each image block optimized image of the source document. The extracted optimized source document image blocks are then aggregated and used to construct an image capture parameter optimized image of the source document.
    Type: Grant
    Filed: June 11, 2013
    Date of Patent: December 30, 2014
    Assignee: Intuit Inc.
    Inventors: Sunil Madhani, Anu Sreepathy, Samir Kakkar
  • Publication number: 20140369602
    Abstract: Methods to select and extract tabular data from the strings returned by optical character recognition, in order to automatically process documents, including documents containing academic transcripts.
    Type: Application
    Filed: June 13, 2014
    Publication date: December 18, 2014
    Inventors: Ralph Meier, Harry Urbschat, Thorsten Wanschura, Johannes Hausmann
  • Patent number: 8913833
    Abstract: An image processing apparatus includes: an extraction unit that extracts a first image and a second image similar to the first image, in a first resolution; and a generation unit that generates an image in a second resolution based on the respective images extracted by the extraction unit and phases of the respective images calculated with precision higher than one pixel in the first resolution.
    Type: Grant
    Filed: May 2, 2007
    Date of Patent: December 16, 2014
    Assignee: Fuji Xerox Co., Ltd.
    Inventors: Yutaka Koshi, Shunichi Kimura, Ikken So, Masanori Sekino
  • Publication number: 20140363081
    Abstract: A method of reading data represented by characters formed of an x by y array of dots, e.g. as printed by a dot-matrix printer, is described. An image of the character(s) is captured by a digital camera device and transmitted to a computer, where analysis software identifies dot shapes and detects their positions within the captured image, by comparing candidate dots to idealised representations of dots using a combination of covariance, correlation or colour data. The position information about the detected dots is then processed to determine the distance between dots, to identify "clusters" of adjacent dots in groups of dots close to one another, and to enable such clusters to be mapped on to a notional x by y grid that defines the intended positions of the dots where grid elements intersect.
    Type: Application
    Filed: September 14, 2012
    Publication date: December 11, 2014
    Inventors: Alan Joseph Bell, Martin Robinson, Guanhua Chen
  • Patent number: 8908970
    Abstract: A method for extracting textual information from a document containing text characters using a digital image capture device. A plurality of digital images of the document are captured using the digital image capture device. Each of the captured digital images is automatically analyzed using an optical character recognition process to determine extracted textual data. The extracted textual data for the captured digital images are merged to determine the textual information for the document, wherein differences between the extracted textual data for the captured digital images are analyzed to determine the textual information for the document.
    Type: Grant
    Filed: May 23, 2012
    Date of Patent: December 9, 2014
    Assignee: Eastman Kodak Company
    Inventors: Andrew C. Blose, Peter O. Stubler
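One plausible reading of the merging step — a per-position vote across the extracted textual data from each capture — can be sketched as below. Real merging would first align the readings against one another, so the fixed positional alignment here is a simplifying assumption:

```python
from collections import Counter

def merge_ocr_results(readings):
    """Merge several OCR readings of the same text: differences between
    captures are resolved by taking the most frequent character seen at
    each position."""
    if not readings:
        return ""
    length = max(len(r) for r in readings)
    merged = []
    for i in range(length):
        votes = Counter(r[i] for r in readings if i < len(r))
        merged.append(votes.most_common(1)[0][0])
    return "".join(merged)
```

With three captures, a recognition error in any single capture is outvoted by the other two, which is the intuition behind capturing a plurality of images of the same document.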
  • Patent number: 8894580
    Abstract: An inspection system includes a plurality of acoustic beamformers, where each of the plurality of acoustic beamformers includes a plurality of acoustic transmitter elements. The system also includes at least one controller configured for causing each of the plurality of acoustic beamformers to generate an acoustic beam directed to a point in a volume of interest during a first time. Based on a reflected wave intensity detected at a plurality of acoustic receiver elements, an image of the volume of interest can be generated.
    Type: Grant
    Filed: April 25, 2013
    Date of Patent: November 25, 2014
    Assignee: UT-Battelle, LLC
    Inventors: Roger Kisner, Hector J. Santos-Villalobos
  • Patent number: 8897605
    Abstract: Embodiments include a method, a manual device, a handheld manual device, a handheld writing device, a system, and an apparatus. An embodiment provides a device. The device includes an imaging circuit operable to acquire digital information encoded in a hand-formed analog expression marked on a surface by a handheld writing device. The device also includes a translator circuit operable to decode the digital information. The device further includes a correlation circuit operable to generate a signal indicative of the decoded digital information.
    Type: Grant
    Filed: January 17, 2011
    Date of Patent: November 25, 2014
    Assignee: The Invention Science Fund I, LLC
    Inventors: Alexander J. Cohen, B. Isaac Cohen, Ed Harlow, Eric C. Leuthardt, Royce A. Levien, Robert W. Lord, Mark A. Malamud
  • Patent number: 8897579
    Abstract: A computer-implemented method of managing information is disclosed. The method can include receiving a message from a mobile device configured to connect to a mobile device network (the message including a digital image taken by the mobile device and including information corresponding to words), determining the words from the digital image information using optical character recognition, indexing the digital image based on the words, and storing the digital image for later retrieval of the digital image based on one or more received search terms.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: November 25, 2014
    Assignee: Google Inc.
    Inventors: Krishnendu Chaudhury, Ashutosh Garg, Prasenjit Phukan, Arvind Saraf
  • Patent number: 8897564
    Abstract: A method and system for computer-aided detection of abnormal lesions in digital mammograms is described, wherein digital films are processed using an automated and computerized method of detecting the order and orientation of a set of films. In one embodiment, anatomic features are used to detect the order, orientation and identification of a film series. In another embodiment of the invention, a technologist feeds films into the system in any order and orientation. After processing, the system provides an output on a display device to a radiologist that is in an order and orientation preferred by the radiologist. In yet another embodiment of the invention, films from one case are distinguished from films of another case. In this manner and through the use of a bulk loader, a large number of films can be stacked together and fed into the system at one time.
    Type: Grant
    Filed: December 5, 2006
    Date of Patent: November 25, 2014
    Assignee: Hologic Inc
    Inventors: Keith W. Hartman, Julian Marshall, Alexander C. Schneider, Jimmy R. Roehrig
  • Patent number: 8897565
    Abstract: The present technology proposes techniques for extracting forms and other types of documents from images taken with a mobile client device. By calculating and making adjustments along a document's detected borders, an input image can be transformed such that the document within the image may be properly aligned and background clutter completely removed. The resulting text fields of the extracted document are thus upright, aligned and locatable at predictable points.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: November 25, 2014
    Assignee: Google Inc.
    Inventors: Leon Palm, Hartwig Adam
  • Patent number: 8885229
    Abstract: A method includes invoking an image capture interface via a mobile device; and analyzing video data captured via the capture interface. The analysis includes determining whether an object exhibiting one or more defining characteristics is depicted within the viewfinder; and if so, whether that object satisfies one or more predetermined quality control criteria. The method further includes displaying an indication of success or failure to satisfy the predetermined control criteria on the mobile device display. Where the object depicted within the viewfinder satisfies the one or more predetermined quality control criteria, the method also includes: displaying an indication that the object depicted in the viewfinder exhibits the one or more defining characteristics; automatically capturing an image of the object; and/or automatically storing to memory one or more of the frames in which the object is depicted in the viewfinder. Systems and computer program products are also disclosed.
    Type: Grant
    Filed: May 2, 2014
    Date of Patent: November 11, 2014
    Assignee: Kofax, Inc.
    Inventors: Jan W. Amtrup, Jiyong Ma, Anthony Macciola
  • Patent number: 8872979
    Abstract: Techniques are presented for analyzing audio-video segments, usually from multiple sources. A combined similarity measure is determined from text similarities and video similarities. The text and video similarities measure similarity between audio-video scenes for text and video, respectively. The combined similarity measure is then used to determine similar scenes in the audio-video segments. When the audio-video segments are from multiple audio-video sources, the similar scenes are common scenes in the audio-video segments. Similarities may be converted to or measured by distance. Distance matrices may be determined by using the similarity matrices. The text and video distance matrices are normalized before the combined similarity matrix is determined. Clustering is performed using distance values determined from the combined similarity matrix.
    Type: Grant
    Filed: May 21, 2002
    Date of Patent: October 28, 2014
    Assignee: Avaya Inc.
    Inventors: Amit Bagga, Jianying Hu, Jialin Zhong
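The normalize-then-combine step for the text and video distance matrices might look like the following sketch; the min–max normalization and the equal default weighting are assumptions, not details taken from the patent:

```python
def normalize(matrix):
    """Min-max scale a distance matrix into [0, 1] so that text and
    video distances are comparable before combining."""
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = hi - lo
    return [[(v - lo) / span if span else 0.0 for v in row]
            for row in matrix]

def combined_distance(text_dist, video_dist, text_weight=0.5):
    """Weighted combination of the normalized text and video distance
    matrices; clustering then runs on the combined values."""
    t, v = normalize(text_dist), normalize(video_dist)
    return [[text_weight * tv + (1 - text_weight) * vv
             for tv, vv in zip(trow, vrow)]
            for trow, vrow in zip(t, v)]
```

Normalizing first matters because raw text and video distances live on different scales; without it, one modality would dominate the combined measure.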
  • Patent number: 8873857
    Abstract: A computer-implemented system and method are described for image searching and image indexing that may be incorporated in a mobile device that is part of an object identification system. The system and method relate to a MISIS client and MISIS server that may be associated with a mobile pointing and identification system for the searching and indexing of objects in in situ images in geographic space, taken from the perspective of a system user located near the surface of the Earth, including horizontal, oblique, and airborne perspectives.
    Type: Grant
    Filed: July 8, 2013
    Date of Patent: October 28, 2014
    Assignee: iPointer Inc.
    Inventors: Christopher Edward Frank, David Caduff
  • Patent number: 8861862
    Abstract: The character recognition apparatus recognizes characters from a read document original to correct a character string as a character recognition result in a word unit with a space character as a separator. The character recognition apparatus includes a circumscribed rectangle formation portion which forms a circumscribed rectangle for each recognized alphabet character string, a fixed-pitch font determination portion which determines whether or not a font is a fixed-pitch font based on a distance between center lines in a width direction of adjacent circumscribed rectangles, a portion for determining an excess space character which determines, in the case of a fixed-pitch font, that the space character is an excess based on that a width of a space character in the character string is narrower than a predetermined width, and a portion for deleting the space character determined as an excess from the character string.
    Type: Grant
    Filed: May 23, 2012
    Date of Patent: October 14, 2014
    Assignee: Sharp Kabushiki Kaisha
    Inventor: Ichiko Sata
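The two determinations — fixed pitch from the distances between center lines of adjacent circumscribed rectangles, and deletion of too-narrow space characters — can be sketched as below; the tolerance and width threshold are illustrative assumptions:

```python
def is_fixed_pitch(center_lines, tolerance=0.1):
    """Treat the font as fixed-pitch when the distances between the
    center lines of adjacent character rectangles are nearly equal."""
    gaps = [b - a for a, b in zip(center_lines, center_lines[1:])]
    if not gaps:
        return False
    mean = sum(gaps) / len(gaps)
    return all(abs(g - mean) <= tolerance * mean for g in gaps)

def drop_excess_spaces(text, space_widths, min_width):
    """Remove space characters narrower than the expected width;
    space_widths lists the measured pixel width of each space in
    order of appearance."""
    widths = iter(space_widths)
    out = []
    for ch in text:
        if ch == " " and next(widths) < min_width:
            continue  # too narrow: treat as an OCR artifact, not a word break
        out.append(ch)
    return "".join(out)
```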
  • Patent number: 8860699
    Abstract: A two-dimensional code display system has a display unit which can display a two-dimensional code representing predetermined information by plural cells arranged in a matrix, and a control unit which changes the display form of the two-dimensional code. For example, after 0.5 second has elapsed since the display unit starts displaying the image of a logo mark, the control unit controls the display unit to change the display from the image of the logo mark to a two-dimensional code. After 2.0 seconds have elapsed since the display unit starts displaying the two-dimensional code, the control unit controls the display unit to change the display from the two-dimensional code to the image of the logo mark. By repeating such control, the control unit can control the display unit to display alternately the image of the logo mark and the two-dimensional code.
    Type: Grant
    Filed: October 29, 2008
    Date of Patent: October 14, 2014
    Assignee: A. T Communications Co., Ltd.
    Inventor: Hiroshi Ideguchi
  • Patent number: 8861860
    Abstract: A device is configured to capture an image of a monitoring device display, perform optical character recognition to identify alphanumeric data in the image, apply a device profile to map each identified alphanumeric datum to a parameter associated with the monitoring device; and store each datum along with its associated parameter.
    Type: Grant
    Filed: November 21, 2011
    Date of Patent: October 14, 2014
    Assignee: Verizon Patent and Licensing Inc.
    Inventor: Nisheeth Gupta
  • Patent number: 8861861
    Abstract: A service can perform optical character recognition (OCR) on an image of a record to determine a first set of information items about the record. A second set of information items can be identified that are likely part of the record but not determinable from performing OCR on the image. Another resource can be utilized to determine the second set of information items. A classification for the record can be determined based on first and second sets of information items. The record can be associated with a financial resource of the user based at least in part on the classification.
    Type: Grant
    Filed: May 10, 2012
    Date of Patent: October 14, 2014
    Assignee: Expensify, Inc.
    Inventors: David M. Barrett, Kevin Michael Kuchta
  • Publication number: 20140301645
    Abstract: An approach is provided for mapping a point of interest (POI) based on user-captured images. One or more user-captured images are queried based, at least in part, on a POI data record. One or more identifying features of a POI associated with the POI data record are recognized in the one or more user-captured images and a position of the one or more identifying features is determined in the one or more user-captured images. The POI is mapped based, at least in part, on the position of the one or more identifying features and image metadata associated with the one or more user-captured images.
    Type: Application
    Filed: April 3, 2013
    Publication date: October 9, 2014
    Applicant: Nokia Corporation
    Inventor: Ville-Veikko Mattila
  • Patent number: 8855424
    Abstract: A word recognition method in which as a result of a recognition process performed on an image of a character string, one or more character candidates are obtained for each of characters forming the character string, according to which a word corresponding to the character string is recognized using a word database having registered therein a plurality of words includes setting a predetermined number of words included in the word database, as initial word candidates, performing a process in which the characters forming the recognition target character string are set as processing targets, one character by one character, and every time a processing target character is set, word candidates present at a time of the setting are narrowed down to words in which character candidates obtained for the processing target character are arranged at a same location as a location where the processing target character is arranged in the recognition target character string, and identifying, when a narrowing-down process perfor
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: October 7, 2014
    Assignee: OMRON Corporation
    Inventor: Tomoyoshi Aizawa
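The narrowing-down loop can be sketched as follows: word candidates from the database are filtered one character position at a time against the character candidates obtained for that position. The set-based representation of character candidates is an illustrative choice:

```python
def narrow_word_candidates(words, char_candidates):
    """Narrow a word list position by position: a word survives a step
    only if one of the recognition candidates for that position matches
    the word's character at the same location."""
    candidates = [w for w in words if len(w) == len(char_candidates)]
    for pos, options in enumerate(char_candidates):
        candidates = [w for w in candidates if w[pos] in options]
    return candidates
```

Filtering incrementally means most words drop out after the first couple of positions, so later steps scan a much smaller candidate set than the full word database.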
  • Patent number: 8855425
    Abstract: A method according to one embodiment includes performing optical character recognition (OCR) on an image of a first document; and at least one of: correcting OCR errors in the first document using at least one of textual information from a complementary document and predefined business rules; normalizing data from the complementary document using at least one of textual information from the first document and the predefined business rules; and normalizing data from the first document using at least one of textual information from the complementary document and the predefined business rules. Additional systems, methods and computer program products are also presented.
    Type: Grant
    Filed: July 22, 2013
    Date of Patent: October 7, 2014
    Assignee: Kofax, Inc.
    Inventors: Mauritius A. R. Schmidtler, Roland G. Borrey, Jan W. Amtrup, Stephen Michael Thompson
  • Publication number: 20140294305
    Abstract: A method is proposed for detecting a document in which image data are recorded by means of a camera, in which filtered picture data are determined by a first processing unit on the basis of the recorded image data, and a camera picture is stored by a second processing unit on the basis of the filtered picture data if a stability criterion is fulfilled. A corresponding device, computer program product and storage medium are also specified.
    Type: Application
    Filed: April 1, 2014
    Publication date: October 2, 2014
    Applicant: DocuWare GmbH
    Inventors: Mirco Schöpf, Dmitry Toropov
  • Publication number: 20140294304
    Abstract: A viewfinder screen display is generated and positioned such that a source document is displayed in the viewfinder screen display. Source document image blocks corresponding to different portions of the source document are then defined. For each source document image block, the image capture parameter of an image capture device is set to an optimized image capture parameter setting for the source document image block. The image capture device then captures an image block optimized image of the source document optimized for the source document image block. The optimized source document image blocks are then extracted from each image block optimized image of the source document. The extracted optimized source document image blocks are then aggregated and used to construct an image capture parameter optimized image of the source document.
    Type: Application
    Filed: June 11, 2013
    Publication date: October 2, 2014
    Inventors: Sunil Madhani, Anu Sreepathy, Samir Kakkar
  • Patent number: 8847962
    Abstract: Systems and techniques are described to perform operations including displaying a first character in a user interface in response to a first user input, the first character encoded by a first ordered sequence comprising at least one code point, receiving a second user input, determining if the second user input defines an exception input to the first ordered sequence, in response to determining that the second user input defines an exception input to the first ordered sequence, generating a second ordered sequence comprising at least one code point, the second ordered sequence based on the first ordered sequence and the exception input, wherein the second ordered sequence does not include the first ordered sequence in a predicate sequence, and displaying a second character defined by the second ordered sequence in place of the first character in the user interface.
    Type: Grant
    Filed: July 1, 2008
    Date of Patent: September 30, 2014
    Assignee: Google Inc.
    Inventors: Mandayam T. Raghunath, Balaji Gopalan
  • Publication number: 20140286573
    Abstract: A system and method is provided for automatically recognizing building numbers in street level images. In one aspect, a processor selects a street level image that is likely to be near an address of interest. The processor identifies those portions of the image that are visually similar to street numbers, and then extracts the numeric values of the characters displayed in such portions. If an extracted value corresponds with the building number of the address of interest such as being substantially equal to the address of interest, the extracted value and the image portion are displayed to a human operator. The human operator confirms, by looking at the image portion, whether the image portion appears to be a building number that matches the extracted value. If so, the processor stores a value that associates that building number with the street level image.
    Type: Application
    Filed: June 5, 2014
    Publication date: September 25, 2014
    Applicant: GOOGLE INC.
    Inventors: Bo Wu, Alessandro Bissacco, Raymond W. Smith, Kong Man Cheung, Andrea Frome, Shlomo Urbach
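The "substantially equal" comparison between the extracted numeric values and the building number of the address of interest might be sketched like this; the ±2 tolerance is an illustrative assumption:

```python
import re

def matching_building_numbers(ocr_strings, address_number, max_delta=2):
    """Keep numeric values extracted from candidate image portions that
    are substantially equal to the building number of interest."""
    matches = []
    for s in ocr_strings:
        for token in re.findall(r"\d+", s):
            if abs(int(token) - address_number) <= max_delta:
                matches.append(int(token))
    return matches
```

Only the surviving values (and their image portions) would then be shown to the human operator for confirmation, keeping manual review focused on plausible matches.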
  • Publication number: 20140270528
    Abstract: Various embodiments enable regions of text to be identified in an image captured by a camera of a computing device for preprocessing before being analyzed by a visual recognition engine. For example, each of the identified regions can be analyzed or tested to determine whether a respective region contains a quality associated with poor text recognition results, such as poor contrast, blur, noise, and the like, which can be measured by one or more algorithms. Upon identifying a region with such a quality, an image quality enhancement can be automatically applied to the respective region without user instruction or intervention. Accordingly, once each region has been cleared of the quality associated with poor recognition, the regions of text can be processed with a visual recognition algorithm or engine.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Applicant: Amazon Technologies, Inc.
    Inventor: Amazon Technologies, Inc.
  • Publication number: 20140270505
    Abstract: A video processing system enhances quality of an overlay image, such as a logo, text, game scores, or other areas forming a region of interest (ROI) in a video stream. The system separately enhances the video quality of the ROI, particularly when screen size is reduced. The enhancement can be accomplished at decoding, with metadata provided alongside the video data, so that the ROI can be enhanced separately from the video. To improve legibility, the ROI enhancer can increase contrast, brightness, hue, saturation, and bit density of the ROI. The ROI enhancer can operate down to a pixel-by-pixel level. The ROI enhancer may use stored reference picture templates to enhance a current ROI based on a comparison. When the ROI includes text, a minimum reduction size for the ROI relative to the remaining video can be identified so that the ROI is not reduced below human perceptibility.
    Type: Application
    Filed: August 26, 2013
    Publication date: September 18, 2014
    Applicant: General Instrument Corporation
    Inventor: Sean T. McCarthy
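The "minimum reduction size" idea in the abstract above — don't let a text ROI shrink with the frame past the point of legibility — reduces to a clamp on the ROI's scale factor. A minimal sketch, where the 8-pixel legibility floor is an illustrative assumption:

```python
def roi_scale(frame_scale, text_height_px, min_legible_px=8):
    """Choose a scale factor for a text ROI when the whole frame is
    scaled down by frame_scale. The ROI follows the frame until its
    text would drop below min_legible_px tall, then it is clamped so
    the text stays human-readable (the floor value is illustrative).
    """
    scaled = text_height_px * frame_scale
    if scaled >= min_legible_px:
        return frame_scale
    return min_legible_px / text_height_px
```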
  • Patent number: 8838453
    Abstract: A user input is received by a computing device. An interactive input module determines whether the first user input is a first character of a script for a supported language. If the first user input is a first character, the first character is stored in an input buffer. A plurality of words in the supported language that match a contents of the input buffer are identified, and a subset of the plurality of words are displayed to the user based on a frequency value associated with each of the plurality of words.
    Type: Grant
    Filed: August 31, 2010
    Date of Patent: September 16, 2014
    Assignee: Red Hat, Inc.
    Inventor: Pravin Satpute
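The buffer-matching and frequency-ranked display described in this abstract can be sketched as a prefix lookup over a frequency-weighted lexicon. The dict-based lexicon and result limit are illustrative assumptions, not the patent's data structures:

```python
def suggest(buffer, lexicon, limit=5):
    """Return up to `limit` words whose spelling starts with the input
    buffer, highest-frequency first (alphabetical on ties). `lexicon`
    maps word -> frequency count.
    """
    matches = [w for w in lexicon if w.startswith(buffer)]
    matches.sort(key=lambda w: (-lexicon[w], w))
    return matches[:limit]
```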
  • Patent number: 8837832
    Abstract: Systems, methods, apparatuses and program products for analyzing and/or monitoring the condition of skin are provided. Various embodiments provide for accessing images of the skin, analyzing the characteristics of skin conditions as represented by the images, and providing outputs useful for analyzing and/or monitoring conditions of the skin. Certain embodiments provide for automated analysis of skin conditions such as moles and/or wrinkles. The automated analysis may include for example characterization of a skin condition and comparison to similar skin conditions of a patient or of other patients.
    Type: Grant
    Filed: July 9, 2010
    Date of Patent: September 16, 2014
    Assignee: Skin of Mine Dot Com, LLC
    Inventor: Ellen Eide Kislal
  • Patent number: 8837833
    Abstract: Extracting financial card information with relaxed alignment comprises a method to receive an image of a card, determine one or more edge finder zones in locations of the image, and identify lines in the one or more edge finder zones. The method further identifies one or more quadrilaterals formed by intersections of extrapolations of the identified lines, determines an aspect ratio of the one or more quadrilaterals, and compares the determined aspect ratios of the quadrilaterals to an expected aspect ratio. The method then identifies a quadrilateral that matches the expected aspect ratio and performs an optical character recognition algorithm on the rectified model. A similar method is performed on multiple cards in an image. The results of the analysis of each of the cards are compared to improve accuracy of the data.
    Type: Grant
    Filed: December 12, 2013
    Date of Patent: September 16, 2014
    Assignee: Google Inc.
    Inventors: Xiaohang Wang, Farhan Shamsi, Yakov Okshtein, Sanjiv Kumar, Henry Allan Rowley, Marcus Quintana Mitchell, Debra Lin Repenning, Alessandro Bissacco, Justin Scheiner, Leon Palm
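The aspect-ratio comparison step above can be sketched as follows, using the ID-1 payment-card ratio (85.60 mm x 53.98 mm, so roughly 1.586) as the expected value; the tolerance and the (width, height) representation of candidate quadrilaterals are illustrative assumptions:

```python
def best_card_quad(quads, expected_ratio=85.60 / 53.98, tolerance=0.08):
    """Pick the candidate quadrilateral whose width/height ratio is
    closest to a payment card's, within a relative tolerance. Quads
    are given as (width, height) of their bounding rectangles.
    Returns None if nothing is close enough.
    """
    best, best_err = None, tolerance
    for w, h in quads:
        err = abs(w / h - expected_ratio) / expected_ratio
        if err <= best_err:
            best, best_err = (w, h), err
    return best
```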
  • Publication number: 20140254926
    Abstract: Systems, methods and computer-readable storage media are disclosed for accelerating bitmap remoting by extracting non-grid tiles from source bitmaps. A server takes a source image, identifies possibly repetitive features, and tiles the image. For each tile that contains part of a possibly repetitive feature, the server replaces that part with the dominant color of the tile. The system then sends to a client a combination of new tiles and features, and indications to tiles and features that the client has previously received and stored, along with an indication of how to recreate the image based on the tiles and features.
    Type: Application
    Filed: May 23, 2014
    Publication date: September 11, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Nadim Y. Abdo, Voicu Anton Albu, Charles Lawrence Zitnick, III
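The tile-scrubbing step in the abstract above — replacing the part of a repetitive feature inside a tile with that tile's dominant color — can be sketched as below. The flat pixel list and boolean feature mask are illustrative representations, not the patent's:

```python
from collections import Counter

def dominant_color(tile):
    """Most frequent pixel value in a tile (pixels as RGB tuples)."""
    return Counter(tile).most_common(1)[0][0]

def mask_feature(tile, feature_mask):
    """Replace the pixels belonging to a possibly repetitive feature
    with the tile's dominant color. `feature_mask` is a parallel list
    of booleans marking feature pixels.
    """
    fill = dominant_color(tile)
    return [fill if m else p for p, m in zip(tile, feature_mask)]
```

The scrubbed tile then compresses and caches well on the client, with the feature sent once and recomposited on top.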
  • Patent number: 8831351
    Abstract: When a display language is different from an OCR language, which is used for document name OCR, the name of a document to be sent may not be correctly displayed on a screen. A data processing apparatus is provided that includes a document name setting unit configured to set a document name including a character string recognized on the basis of document data for the document data generated by a read unit, and a control unit configured to restrain the document name setting unit from setting the document name when a language specified by a character recognition language specifying unit is different from a language specified by a display language setting unit.
    Type: Grant
    Filed: April 10, 2012
    Date of Patent: September 9, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yoshihide Terao
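The restraint logic this abstract describes boils down to a language check before adopting the OCR-derived name. A minimal sketch; the fallback name is an assumption, not from the patent:

```python
def set_document_name(ocr_name, ocr_lang, display_lang, fallback="Untitled"):
    """Suppress the OCR-derived document name when the recognition
    language differs from the display language, since the name may
    not render correctly on screen. The fallback is illustrative.
    """
    return ocr_name if ocr_lang == display_lang else fallback
```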
  • Patent number: 8831706
    Abstract: A method and an apparatus for fiducial-less tracking of a volume of interest (VOI) have been presented. In some embodiments, a pair of intra-operative images of a portion of a patient is generated during treatment of a target region in the patient to show a bony structure of the patient. The bony structure shown is movable responsive to respiration of the patient. Then the pair of intra-operative images is compared with a set of digitally reconstructed radiograph (DRR) pairs, generated from volumes of four-dimensional (4D) diagnostic imaging data, to determine a location of the movable bony structure that corresponds to a particular volume of the 4D diagnostic imaging data.
    Type: Grant
    Filed: November 3, 2006
    Date of Patent: September 9, 2014
    Assignee: Accuray Incorporated
    Inventors: Dongshan Fu, Kajetan R. Berlinger
  • Publication number: 20140247991
    Abstract: A system and method is provided that enables a business to purchase a generic, but unique, kit containing one or more signs, with a machine readable medium. The computer readable medium stores information relating to a unique web address of a configurable web site landing page. An administrator configures the web site as desired so that when a user scans the machine readable medium, the user will be directed to the web site, and will have access to the content configured by the administrator. A system and method is also provided for programming or generating machine readable medium.
    Type: Application
    Filed: February 18, 2014
    Publication date: September 4, 2014
    Inventors: Michael Archuleta, Michael Archuleta, II, Austin Archuleta
  • Publication number: 20140241631
    Abstract: A computer-implemented method of acquiring tax data for use in a tax preparation application includes acquiring an image of at least one document containing tax data therein with an imaging device. A computer extracts one or more features from the acquired image of the at least one document and compares the extracted one or more features to a database containing a plurality of different tax forms. The database may include a textual database and/or geometric database. The computer identifies a tax form corresponding to the at least one document from the plurality of different tax forms based at least in part on a confidence level associated with the comparison of the extracted one or more features to the database. At least a portion of the tax data from the acquired image is transferred into corresponding fields of the tax preparation application.
    Type: Application
    Filed: February 28, 2013
    Publication date: August 28, 2014
    Applicant: INTUIT INC.
    Inventors: Nankun Huang, Amir Eftekhari, Carol A. Howe, Alan B. Tifford, Jeffrey P. Ludwig
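The confidence-based form identification above can be sketched as scoring each known template by the fraction of its anchor phrases found among the features extracted from the image. The anchor-phrase templates and 0.6 threshold are illustrative assumptions, not INTUIT's databases:

```python
def identify_form(extracted, templates, min_confidence=0.6):
    """Match extracted text features against known tax-form templates.
    Confidence is the fraction of a template's anchor phrases present
    in `extracted` (a set of strings). Returns (form, confidence),
    with form None when no template clears the threshold.
    """
    best_form, best_conf = None, 0.0
    for form, anchors in templates.items():
        conf = sum(1 for a in anchors if a in extracted) / len(anchors)
        if conf > best_conf:
            best_form, best_conf = form, conf
    if best_conf >= min_confidence:
        return best_form, best_conf
    return None, best_conf
```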
  • Patent number: 8818111
    Abstract: Provided are an age estimation apparatus, an age estimation method, and an age estimation program capable of reducing the labor of labeling the image data used for age estimation. An age estimation apparatus for estimating an age of a person on image data includes a dimension compression unit for applying dimension compression to the image data to output low dimensional data; a clustering unit for performing clustering of the low dimensional data outputted; a labeling unit for labeling representative data of each cluster among the low dimensional data clustered; and an identification unit for estimating an age of a person on the basis of a learning result using a feature amount contained in labeled low dimensional data and unlabeled low dimensional data.
    Type: Grant
    Filed: April 14, 2010
    Date of Patent: August 26, 2014
    Assignees: NEC Soft, Ltd., Tokyo Institute of Technology
    Inventors: Kazuya Ueki, Masashi Sugiyama, Yasuyuki Ihara
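The labeling-cost reduction above hinges on labeling only one representative per cluster. A minimal sketch of that selection step, using 1-D points after dimension compression for simplicity (the real low-dimensional data would be vectors):

```python
def representatives(clusters):
    """For each cluster of low-dimensional points (here 1-D floats),
    pick the point nearest the cluster mean as the single item handed
    to a human labeler; remaining points stay unlabeled and feed the
    semi-supervised learning step.
    """
    reps = []
    for pts in clusters:
        mean = sum(pts) / len(pts)
        reps.append(min(pts, key=lambda p: abs(p - mean)))
    return reps
```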
  • Patent number: 8811656
    Abstract: Establishments are identified in geo-tagged images. According to one aspect, text regions are located in a geo-tagged image and text strings in the text regions are recognized using Optical Character Recognition (OCR) techniques. Text phrases are extracted from information associated with establishments known to be near the geographic location specified in the geo-tag of the image. The text strings recognized in the image are compared with the phrases for the establishments for approximate matches, and an establishment is selected as the establishment in the image based on the approximate matches. According to another aspect, text strings recognized in a collection of geo-tagged images are compared with phrases for establishments in the geographic area identified by the geo-tags to generate scores for image-establishment pairs. Establishments in each of the large collection of images as well as representative images showing each establishment are identified using the scores.
    Type: Grant
    Filed: September 6, 2013
    Date of Patent: August 19, 2014
    Assignee: Google Inc.
    Inventors: Shlomo Urbach, Tal Yadid, Yuval Netzer, Andrea Frome, Noam Ben-Haim
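The approximate-matching step above — comparing OCR'd strings against phrases from nearby establishments — can be sketched with stdlib fuzzy matching. `difflib` ratios stand in for the patent's (unspecified) matching method, and the cutoff is an assumption:

```python
import difflib

def match_establishment(ocr_strings, establishments, cutoff=0.8):
    """Score each establishment by the best fuzzy match between its
    known phrases and the strings OCR'd from the image; return the
    top scorer, or None if nothing clears the cutoff. `establishments`
    maps name -> list of known phrases (signage text, listed name).
    """
    best, best_score = None, cutoff
    for name, phrases in establishments.items():
        for phrase in phrases:
            for s in ocr_strings:
                score = difflib.SequenceMatcher(
                    None, s.lower(), phrase.lower()).ratio()
                if score >= best_score:
                    best, best_score = name, score
    return best
```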
  • Publication number: 20140219563
    Abstract: A system and method for comparing a text image and a character string are provided. The method includes embedding a character string into a vectorial space by extracting a set of features from the character string and generating a character string representation based on the extracted features, such as a spatial pyramid bag of characters (SPBOC) representation. A text image is embedded into a vectorial space by extracting a set of features from the text image and generating a text image representation based on the text image extracted features. A compatibility between the text image representation and the character string representation is computed, which includes computing a function of the text image representation and character string representation.
    Type: Application
    Filed: February 1, 2013
    Publication date: August 7, 2014
    Applicant: Xerox Corporation
    Inventors: Jose Antonio Rodriguez-Serrano, Florent C. Perronnin
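A simplified reading of the character-string side of the abstract above: build character histograms over the whole string, then over its halves (and so on per pyramid level), concatenate them, and compare embeddings with a similarity function. Cosine similarity here is an illustrative stand-in for the learned compatibility function:

```python
import math
from collections import Counter

def spboc(text, levels=2, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Spatial-pyramid bag-of-characters: per-segment character
    histograms concatenated across pyramid levels (whole string,
    halves, quarters, ...). A simplified sketch of the SPBOC idea.
    """
    vec = []
    for level in range(levels):
        parts = 2 ** level
        n = len(text)
        for i in range(parts):
            seg = text[i * n // parts:(i + 1) * n // parts]
            counts = Counter(seg)
            vec.extend(counts.get(c, 0) for c in alphabet)
    return vec

def compatibility(u, v):
    """Cosine similarity as a simple compatibility function."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

The text-image side would produce a vector in the same space via learned image features; ranking candidate strings by `compatibility` then gives word spotting without an explicit OCR step.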
  • Patent number: 8798404
    Abstract: A system includes an imaging device and an acquisition layer. The imaging device acquires an image. The acquisition layer is logically located between a source manager and the imaging device, the source manager being called by an application when a user of the system requests to acquire the image. The acquisition layer includes imaging acquisition logic that receives the image from the imaging device and performs optical character recognition (OCR) that extracts machine editable text from the image. The acquisition layer forwards the image to the application and makes the machine editable text available to the user.
    Type: Grant
    Filed: May 26, 2010
    Date of Patent: August 5, 2014
    Inventor: Hin Leong Tan
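The layering above — OCR performed transparently between the source manager and the device, with the image passed through untouched — can be sketched as a thin wrapper. The `device` and `ocr` callables are illustrative placeholders:

```python
class AcquisitionLayer:
    """Sits between the application and the imaging device: forwards
    the acquired image unchanged, while extracting machine-editable
    text as a side product the user can retrieve.
    """
    def __init__(self, device, ocr):
        self.device = device    # callable returning an image
        self.ocr = ocr          # callable image -> extracted text
        self.last_text = None

    def acquire(self):
        image = self.device()
        self.last_text = self.ocr(image)  # OCR happens in the layer
        return image                      # application still gets the image
```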
  • Publication number: 20140212040
    Abstract: Example embodiments relate to document alteration based on native text analysis and optical character recognition (OCR). In example embodiments, a system analyzes native text obtained from a native document to identify a text entity in the native document. At this stage, the system may use a native application interface to convert the native document to a document image and perform OCR on the document image to identify a text location of the text entity. The system may then generate an alteration box (e.g., redaction box, highlight box) at the text location in the document image to alter a presentation of the text entity.
    Type: Application
    Filed: January 31, 2013
    Publication date: July 31, 2014
    Applicant: Longsand Limited
    Inventors: James Richard Walker, James Arthur Burtoft
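The final step of the abstract above — placing an alteration box over an entity once OCR has located it — can be sketched as below. The (text, x, y, w, h) word-box format and padding value are illustrative assumptions:

```python
def alteration_boxes(ocr_words, targets, pad=2):
    """Given OCR word boxes as (text, x, y, width, height) tuples,
    return padded boxes covering each target entity, e.g. for a
    redaction or highlight overlay on the document image.
    """
    return [(x - pad, y - pad, w + 2 * pad, h + 2 * pad)
            for text, x, y, w, h in ocr_words if text in targets]
```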
  • Publication number: 20140212039
    Abstract: Machines, systems and methods for character recognition disambiguation are provided. The method comprises selecting a first set of characters that match a first visual profile based on results of a character recognition process applied to target content; selecting a subset of the first set based on criteria associated with at least one of confidence level with which characters grouped in the subset are recognized or fragmentation associated with the characters grouped in the subset; and disambiguating recognition results for the characters grouped in the subset by displaying the characters along with context information, wherein reviewing two or more of the characters on a display screen along with context information associated with said two or more characters allows a human operator to select one or more suspect characters from among the two or more characters.
    Type: Application
    Filed: January 28, 2013
    Publication date: July 31, 2014
    Applicant: International Business Machines Corporation
    Inventors: Ella Barkan, Itoko Toshinari, Asaf Tzadok
  • Publication number: 20140212041
    Abstract: An apparatus for document identification, having a capture device for capturing a document feature of a document, a processor that is designed to perform document identification locally using the document feature if a processing criterion for the local performance of document identification by means of the apparatus for document identification is satisfied, and a transmitter that is designed to send a data record that is dependent on the document feature via a communication network to a communication network address if the processing criterion for the local performance of document identification by means of the apparatus for document identification is not satisfied.
    Type: Application
    Filed: August 28, 2012
    Publication date: July 31, 2014
    Applicant: Bundesdruckerei GmbH
    Inventors: Ilya Komarov, Olaf Dressel, Frank Fritze, Manfred Paeschke
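The local-versus-network dispatch this abstract describes reduces to a check of the processing criterion before choosing a path. A minimal sketch with illustrative placeholder callables:

```python
def identify_document(feature, criterion_met, local_identify, send_to_network):
    """Identify a document locally when the processing criterion is
    satisfied; otherwise send a feature-dependent record over the
    communication network. All callables are placeholders for the
    apparatus components named in the abstract.
    """
    if criterion_met(feature):
        return "local", local_identify(feature)
    return "remote", send_to_network(feature)
```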