Limited To Specially Coded, Human-readable Characters Patents (Class 382/182)
  • Publication number: 20140010452
    Abstract: A system for verifying and correcting errors after translation of printed text into machine-readable text. The system includes a memory for storing formulas defining relationships between data fields. A processor evaluates the formulas according to data values associated with the data fields to determine whether the formulas evaluate as truthful statements. The processor marks the data fields of the formulas as unverified or as verified based upon this evaluation. The system also uses the processor to calculate a determined value for data fields in an attempt to correct errors in the translation of the printed text into machine-readable text. If different determined values are calculated for the same data field, based upon different formulas, the data field is marked as uncertain. The system iterates based upon the marking of the data fields of the formulas as verified or unverified and as uncertain or not uncertain.
    Type: Application
    Filed: July 5, 2012
    Publication date: January 9, 2014
    Inventors: David A. Wyle, William W. Hosek
  • Patent number: 8626236
    Abstract: A system and a method are provided for displaying text in low-light environments. An original image of text is captured in a low-light environment using a camera on a mobile device, whereby the imaged text comprises images of characters. A brightness setting and a contrast setting of the original image are adjusted to increase the contrast of the imaged text relative to a background of the original image. Optical character recognition is applied to the adjusted image to generate computer-readable text or characters corresponding to the imaged text. The original image of text is displayed on the mobile device. The computer-readable text is also displayed, overlaid on the original image and aligned with the corresponding imaged text.
    Type: Grant
    Filed: October 8, 2010
    Date of Patent: January 7, 2014
    Assignee: Blackberry Limited
    Inventors: Jeffery Lindner, James Hymel
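A minimal sketch of the contrast-adjustment step, assuming a simple percentile-based linear stretch on grayscale values; the patent does not specify the exact adjustment, so this is only one plausible reading:

```python
def stretch_contrast(gray, low_pct=5, high_pct=95):
    """gray: flat list of 0-255 grayscale values.
    Linearly stretches the range between the low/high percentiles to 0-255,
    increasing text-vs-background contrast before OCR."""
    s = sorted(gray)
    lo = s[len(s) * low_pct // 100]
    hi = s[min(len(s) * high_pct // 100, len(s) - 1)]
    if hi <= lo:
        return gray[:]  # flat image; nothing to stretch
    return [min(255, max(0, round((p - lo) * 255 / (hi - lo)))) for p in gray]
```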
  • Publication number: 20140003721
    Abstract: A method and system to localize data fields of a form. An image of a form is received, where the form includes data fields. Word boxes of the image are identified. The word boxes are grouped into candidate zones, where each of the candidate zones includes one or more of the word boxes. Hypotheses are formed from the data fields and the candidate zones, where each hypothesis assigns one of the candidate zones to one of the data fields or a null data field. A constrained optimization search of the hypotheses is performed for an optimal set of hypotheses. The optimal set of hypotheses assigns word box groups to corresponding data fields.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Applicant: PALO ALTO RESEARCH CENTER INCORPORATED
    Inventor: Eric Saund
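The hypothesis search over zone-to-field assignments can be illustrated with a brute-force sketch. A real system would use a proper constrained optimizer rather than enumeration; the `score` compatibility function here is a hypothetical stand-in, and `None` models the null assignment from the abstract.

```python
from itertools import permutations

def best_assignment(fields, zones, score):
    """Exhaustive search over hypotheses assigning each data field one
    candidate zone or None (the null data field). score(field, zone) returns
    a compatibility value; each zone is used at most once (the constraint)."""
    padded = list(zones) + [None] * len(fields)  # allow null assignments
    best, best_total = None, float("-inf")
    for combo in permutations(padded, len(fields)):
        total = sum(score(f, z) for f, z in zip(fields, combo))
        if total > best_total:
            best, best_total = dict(zip(fields, combo)), total
    return best
```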
  • Publication number: 20140003723
    Abstract: A text detection device is provided. The text detection device may include: an image input circuit configured to receive an image; an edge property determination circuit configured to determine a plurality of edge properties for each of a plurality of scales of the image; and a text location determination circuit configured to determine a text location in the image based on the plurality of edge properties for the plurality of scales of the image.
    Type: Application
    Filed: June 24, 2013
    Publication date: January 2, 2014
    Applicant: Agency for Science, Technology and Research
    Inventors: Shijian LU, Joo Hwee LIM
  • Publication number: 20140003722
    Abstract: Systems and methods may include utilizing a structured light pattern that may be, among other things, decoded in three directions (e.g., vertical, horizontal, and diagonal). In one example, the method may include detecting a first feature of a target image in a return image and designating a feature type and an index for the first feature, wherein the index is associated with the pattern slide. The method may also include calculating a horizontal position in the pattern slide of the first feature, calculating a vertical position in the pattern slide of the first feature, and calculating a depth of the first feature.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Ziv Aviv, David Stanhill, Ron Ferens, Roi Ziss
  • Patent number: 8620062
    Abstract: An apparatus for the detection of a geometrical position of plastics material containers, for example, plastics material pre-forms, having a base member and a thread region may include an image-recording device, which records a locally resolved image of the plastics material container. The image-recording device is arranged in such a way that it observes the plastics material container substantially along its longitudinal direction. The apparatus includes an illumination device, which illuminates at least one region of the plastics material container observed by the image-recording device, and an evaluation device, which on the basis of an image recorded by the image-recording device determines a rotary setting of the plastics material container with respect to its longitudinal direction.
    Type: Grant
    Filed: October 7, 2011
    Date of Patent: December 31, 2013
    Assignee: Krones AG
    Inventor: Rainer Kwirandt
  • Patent number: 8619340
    Abstract: What is disclosed is a novel system and method for augmenting present methods of automatically detecting the orientation of digital pages of a plurality of scanned documents in a digital document processing environment. The present method takes advantage of the observation that pages scanned in data processing centers are often highly correlated. The present method contains five primary steps. 1) Page orientation (i.e., up/down) is detected using a traditional method. 2) Each page is classified as either directional or non-directional. 3) The pages classified as directional are clustered into groups. 4) The direction for each group is determined. 5) The group's direction is used to revise the orientation of the pages contained in the group. Through the implementation of the teachings hereof, performance, in terms of both speed and accuracy, is very high relative to current methods, and detection error rates can be reduced significantly.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: December 31, 2013
    Assignee: Xerox Corporation
    Inventors: Zhigang Fan, Michael R. Campanelli
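Steps 3-5 above (cluster directional pages, determine each group's direction, revise members accordingly) might look roughly like this; the clustering itself is assumed given, and a simple majority vote stands in for the group-direction determination:

```python
from collections import Counter

def revise_orientations(pages, groups):
    """pages: dict page_id -> per-page detected orientation ('up'/'down').
    groups: clusters of directional page_ids (e.g. pages of similar layout).
    The group's majority orientation overrides each member's own detection,
    exploiting the correlation between pages scanned in the same batch."""
    revised = dict(pages)
    for group in groups:
        majority = Counter(pages[p] for p in group).most_common(1)[0][0]
        for p in group:
            revised[p] = majority
    return revised
```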
  • Publication number: 20130343652
    Abstract: In a character string extraction method, a character portion, a rim portion, a character frame, and a character string frame are set, a feature value of each image in the character portion and the rim portion is calculated for each character frame, a character string frame evaluation value is calculated based on the feature value for the character string frame, a position of the character string frame is moved on the paper sheet image, and the image in the character portion is extracted by using the character string frame at a position at which the character string frame evaluation value reaches a maximum.
    Type: Application
    Filed: March 4, 2011
    Publication date: December 26, 2013
    Inventors: Masanori Goto, Toru Yonezawa, Motoko Kuroiwa
  • Publication number: 20130335356
    Abstract: An image is displayed on a touch screen. A user's underline gesture on the displayed image is detected. The area of the image touched by the underline gesture and a surrounding region proximate to the touched area are identified. Skew for text in the surrounding region is determined and compensated. A text region including the text is identified in the surrounding region and cropped from the image. The cropped image is transmitted to an optical character recognition (OCR) engine, which processes the cropped image and returns OCR'ed text. The OCR'ed text is outputted.
    Type: Application
    Filed: July 26, 2013
    Publication date: December 19, 2013
    Applicant: Google Inc.
    Inventors: Dar-Shyang Lee, Lee-Feng Chien, Pin Ting, Aries Hsieh, Kin Wong
  • Patent number: 8611661
    Abstract: In some embodiments, provided are procedures for processing images that may have different font sizes. In some embodiments, it involves OCR'ing with multiple passes at different resolutions.
    Type: Grant
    Filed: December 26, 2007
    Date of Patent: December 17, 2013
    Assignee: Intel Corporation
    Inventors: Oscar Nestares, Badusha Kalathiparambil
  • Publication number: 20130330004
    Abstract: As set forth herein, systems and methods facilitate providing an efficient edge-detection and closed-contour based approach for finding text in natural scenes such as photographic, digital, and/or electronic images, and the like. Edge information (e.g., edges of structures or objects in the images) is obtained via an edge detection technique. Edges from text characters form closed contours even in the presence of reasonable levels of noise. Closed contour linking and candidate text line formation are two additional features of the described approach. A candidate text line classifier is applied to further screen out false-positive text identifications. Candidate text regions for placement of text in the natural scene of the electronic image are highlighted and presented to a user.
    Type: Application
    Filed: June 12, 2012
    Publication date: December 12, 2013
    Applicant: XEROX CORPORATION
    Inventors: Raja Bala, Zhigang Fan, Hengzhou Ding, Jan P. Allebach, Charles A. Bouman
  • Publication number: 20130330005
    Abstract: A character recognition method that applies one or more user-controlled definitions to scanned writing is implemented by an electronic device. The electronic device includes an image capturing device, and a recognition rule is set for recognizing a type of sequential code according to an arrangement rule of characters of that type. A first sequential code is extracted from a captured image of an object having a sequential code, and when the first sequential code does not match the recognition rule, the first sequential code is corrected according to the preset recognition rule to obtain a second sequential code that matches the recognition rule.
    Type: Application
    Filed: June 5, 2013
    Publication date: December 12, 2013
    Inventors: XIN LU, HUAN-HUAN ZHANG, FEI WANG, BIAO-GENG ZHONG, XUE-SHUN LIU
  • Publication number: 20130322757
    Abstract: The disclosure provides a document processing apparatus, method, and scanner. The document processing apparatus includes: a text line extraction unit extracting a text line from an input document; a language classification unit determining whether an OCR process is necessary for the language of the input document; an OCR unit determining, by performing the OCR process, an OCR confidence in the case that it is determined that the OCR process is necessary; a graphic feature recognition unit determining a graphic feature recognition confidence; and a determination unit determining a combination confidence based on at least one of the determined graphic feature recognition confidences and the determined OCR confidences, and determining an orientation of the input document based on the combination confidences. This technical solution can better determine the orientation of the document, and is especially applicable when the quality of the document image is deteriorated.
    Type: Application
    Filed: May 29, 2013
    Publication date: December 5, 2013
    Inventors: Yifeng PAN, Jun SUN, Yuan HE, Satoshi NAOI
  • Publication number: 20130322759
    Abstract: A technique for identifying font in connection with text data processing. An original font corresponding to an embedded font used in an electronic document is identified. At least one glyph is selected from a glyph collection of the embedded font. The font corresponding to each selected glyph is identified, and the original font corresponding to the embedded font is identified according to the font that corresponds to each selected glyph.
    Type: Application
    Filed: December 3, 2012
    Publication date: December 5, 2013
    Applicants: PEKING UNIVERSITY FOUNDER GROUP CO., LTD., Founder Information Industry Holdings Co., Ltd., Beijing Founder Apabi Technology Ltd.
    Inventor: Ruiheng Qiu
  • Publication number: 20130322758
    Abstract: To make it easier to grasp characters that appear across different images, a pair of character area images to be combined is determined based on a degree of similarity or a position of each character area image extracted from the different images, and overlapping area images, that is, the determined pair of character area images having a similar image feature amount, are connected and combined.
    Type: Application
    Filed: May 31, 2013
    Publication date: December 5, 2013
    Inventor: Masahiro Matsushita
  • Publication number: 20130315485
    Abstract: A method for extracting textual information from a document containing text characters using a digital image capture device. A plurality of digital images of the document are captured using the digital image capture device. Each of the captured digital images is automatically analyzed using an optical character recognition process to determine extracted textual data. The extracted textual data for the captured digital images are merged to determine the textual information for the document, wherein differences between the extracted textual data for the captured digital images are analyzed to determine the textual information for the document.
    Type: Application
    Filed: May 23, 2012
    Publication date: November 28, 2013
    Inventors: Andrew C. Blose, Peter O. Stubler
  • Patent number: 8594431
    Abstract: A method and system for recognizing a character affected by noise or an obstruction is disclosed. After receiving an image with characters, a character affected by noise or an obstruction is identified. Then, the areas in the character affected by the noise or obstruction are precisely located. Templates representing every possible character in the image are updated by removing the areas equivalent to those affected in the character. Then, the character is classified by finding the updated template having the highest number of matching pixels with the character.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: November 26, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ami Ben-Horesh, Amir Geva, Eugeniusz Walach
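The masked template matching can be sketched as follows, with flat 0/1 pixel lists and an index-set noise mask as simplified stand-ins for the patent's image data and located noise areas:

```python
def classify_masked(char_pixels, templates, noise_mask):
    """char_pixels: flat list of 0/1 pixels for the damaged character.
    templates: dict label -> flat 0/1 pixel list of the same length.
    noise_mask: set of pixel indices known to be corrupted; those positions
    are excluded from matching, mirroring the template-update step."""
    best, best_score = None, -1
    for label, tmpl in templates.items():
        score = sum(1 for i, (a, b) in enumerate(zip(char_pixels, tmpl))
                    if i not in noise_mask and a == b)
        if score > best_score:
            best, best_score = label, score
    return best
```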
  • Publication number: 20130308862
    Abstract: An image processing apparatus includes an accepting unit, a recognizing unit, and a selecting unit. The accepting unit accepts character information about a character image in a character region in an image. The recognizing unit performs character recognition on the character image in the character region. The selecting unit selects a character recognition result which matches the character information accepted by the accepting unit, from multiple character recognition results that are obtained by the recognizing unit.
    Type: Application
    Filed: October 17, 2012
    Publication date: November 21, 2013
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Satoshi KUBOTA, Shunichi KIMURA
  • Patent number: 8588528
    Abstract: Disclosed are techniques and systems to provide a scanned image in which a portion of the image is overlaid with OCR generated text corresponding to the text of the original scanned document.
    Type: Grant
    Filed: June 22, 2010
    Date of Patent: November 19, 2013
    Assignee: K-NFB Reading Technology, Inc.
    Inventors: Peter Chapman, Paul Albrecht
  • Publication number: 20130301919
    Abstract: A method for serving content based on a selection feature for a campaign includes receiving an image associated with particular content and analyzing the image content of the image to derive a selection feature from the image. The selection feature is descriptive of image content. The method further includes identifying at least one keyword based on the selection feature. The method further includes associating the at least one keyword with the particular content and storing the particular content and its associated at least one keyword for serving in response to a content request.
    Type: Application
    Filed: May 11, 2012
    Publication date: November 14, 2013
    Inventor: Song Lin
  • Publication number: 20130301920
    Abstract: A method, a system, and a computer program product for processing the output of an OCR are disclosed. The system receives a first character sequence from the OCR. A first set of characters from the first character sequence are converted to a corresponding second set of characters to generate a second character sequence based on a look-up table and language scores.
    Type: Application
    Filed: May 14, 2012
    Publication date: November 14, 2013
    Applicant: Xerox Corporation
    Inventors: Sriram Venkatapathy, Nicola Cancedda
  • Patent number: 8582888
    Abstract: According to an aspect of an embodiment, a method detects boundary line information contained in image information comprising a plurality of pixels, each in one of first and second states. The method comprises: detecting a first group of pixels in the first state disposed continuously in said image information to determine first line information, and detecting a second group of pixels in the first state disposed adjacent to each other and surrounded by pixels in the second state to determine edge information based on the contour of the second group of pixels; and determining the boundary line information on the basis of the relative position of the line information and the edge information and the size of the first and second groups of pixels.
    Type: Grant
    Filed: February 14, 2008
    Date of Patent: November 12, 2013
    Assignee: Fujitsu Limited
    Inventors: Hiroshi Tanaka, Kenji Nakajima, Akihiro Minagawa, Hiroaki Takebe, Katsuhito Fujimoto
  • Publication number: 20130294694
    Abstract: There is disclosed a method and apparatus for zone based scanning and optical character recognition for metadata acquisition comprising receiving user input identifying a first zone and a second zone on a visible representation of an electronic document and associating the first zone with a first database category and the second zone with a second database category, the association made using a metadata map. The method further comprises scanning a physical document in order to obtain a digital representation of the physical document as an electronic document, performing optical character recognition on the first zone and the second zone on the electronic document to thereby obtain a first metadata element and a second metadata element, and storing the electronic document along with the first metadata element and the second metadata element in a database, the first and second metadata elements stored in the database as directed by the metadata map.
    Type: Application
    Filed: May 1, 2012
    Publication date: November 7, 2013
    Applicants: Toshiba Tec Kabushiki Kaisha, Kabushiki Kaisha Toshiba
    Inventors: Jia Zhang, Silvy Wilson, Michael Yeung
  • Publication number: 20130294695
    Abstract: A method and system are disclosed for post optical character recognition font size determination. Optical character recognition output from an optical character recognition engine that includes character and bounding box information is aggregated into character strings. Measurements are then collected from each character in each character string that correspond to alignment heights of the top or bottom of the character with an ascender-line, a cap-line, a digit-line, a mean-line, a base-line, or a descender-line. Histograms are formed for each of these heights for each character string from the collected measurements. Based on the histograms, a pivot height is selected and used to determine the relative font size of the character string. The relative font size is normalized using a preselected factor associated with the selected pivot height. The normalized font size is then output as the font size of characters in the optical character recognition output.
    Type: Application
    Filed: May 2, 2012
    Publication date: November 7, 2013
    Applicant: XEROX CORPORATION
    Inventor: Jean-Luc Meunier
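A toy version of the pivot-height idea, using a single histogram of character heights rather than the six alignment-line histograms the abstract describes; treating the modal height as the x-height, and the 0.5 normalization factor, are both assumptions for illustration only:

```python
from collections import Counter

def estimate_font_size(char_boxes, xheight_factor=0.5):
    """char_boxes: list of (top, bottom) bounding-box y-coordinates per
    character in one string. The modal height serves as the pivot height,
    which is normalized by a preselected factor to yield a font size."""
    heights = [bottom - top for top, bottom in char_boxes]
    pivot = Counter(heights).most_common(1)[0][0]
    return pivot / xheight_factor
```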
  • Publication number: 20130294696
    Abstract: An image processing method and apparatus is provided. The image processing method includes steps of: generating a first scale binary image from an image, wherein the first scale is smaller than the original scale of the image; detecting at least one text line in the image based on the first scale binary image; generating a second scale binary image from the image, wherein the second scale is larger than the first scale; for each text line, calculating a similarity between corresponding sections in the first scale binary image and the second scale binary image, and removing the text line for which the similarity is lower than a predetermined level; for one or more of the remaining text line(s), performing OCR on corresponding section(s) in the second scale binary image to determine character orientation(s) of corresponding text line(s); and determining the orientation of the image according to the determined character orientation(s).
    Type: Application
    Filed: May 2, 2013
    Publication date: November 7, 2013
    Applicant: FUJITSU LIMITED
    Inventors: Jun SUN, Yifeng PAN, Satoshi NAOI
  • Patent number: 8577147
    Abstract: An objective is to eliminate dotted lines in a character box in image data to increase the character recognition rate. There are some cases in which a dotted line candidate cannot be extracted due to many overlapping parts of dotted lines and characters or due to a blurry part in a dotted line. In such cases, the position of a dotted line candidate is estimated referring to features such as the interval, length, width, etc. of a dotted line candidate in the same character box (or in a character box for another relevant item), and image data of the estimated position and image data of a previously extracted dotted line (or a reference dotted line) are compared to determine whether or not they are an identical dotted line.
    Type: Grant
    Filed: June 1, 2011
    Date of Patent: November 5, 2013
    Assignee: Fujitsu Frontech Limited
    Inventors: Shohei Hasegawa, Shinichi Eguchi, Hajime Kawashima, Koichi Kanamoto, Hiroki Inoue, Maki Yabuki, Katsutoshi Kobara
  • Patent number: 8576444
    Abstract: A print data generating device comprises a retrieving unit, a bar code data generating unit, and a print data generating unit. The retrieving unit is configured to retrieve image data of an original image. The bar code data generating unit is configured to generate, based on the image data of the original image, data of at least one bar code that stores the original image. The print data generating unit is configured to generate, based on the image data of the original image and the data of the at least one bar code, print data of a print image that includes the at least one bar code arranged on one page and the original image arranged on one or more pages.
    Type: Grant
    Filed: February 23, 2011
    Date of Patent: November 5, 2013
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventors: Hiroto Sugahara, Masayuki Takata, Tomoyuki Kubo, Yoshinori Yokoe
  • Patent number: 8577145
    Abstract: In a method and apparatus for identifying an embossed character, light of one color is directed in one direction across the embossed character to illuminate certain character parts and light of another color is directed in another direction across the embossed character to illuminate other character parts. Image data for the two colors are captured and are subjected to separate image processing to detect edges highlighted by the directed light. The processed images are combined and supplemented with OCR analysis before being compared with predicted characters. Based on the comparison, a determination is made as to the probable identity of the character.
    Type: Grant
    Filed: December 19, 2009
    Date of Patent: November 5, 2013
    Assignee: PCAS Patient Care Automation Services Inc.
    Inventor: Richard Panetta
  • Publication number: 20130272613
    Abstract: An image scaling service includes determining an image as a candidate for a scaling process, scanning the image for an initial text value, and scaling the image to a next lower resolution. The image scaling service also includes iteratively performing the scaling process until a threshold value of a readability metric is reached, the scaling process includes scanning the scaled image for a scaled text value, comparing a difference between the initial text value and the scaled text value, the difference indicative of the readability metric, and scaling the scaled image to a next lower resolution. In response to reaching the threshold value of the readability metric, the image scaling service further includes selecting from scaled images an image having a lowest resolution resulting from the scaling process before the threshold value of the readability metric was reached.
    Type: Application
    Filed: April 16, 2012
    Publication date: October 17, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: William Bodin, Indiver N. Dwivedi, David Jaramillo
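The iterative downscaling loop might be sketched like this, with `ocr` and `downscale` as hypothetical callables and character-level agreement with the initial text standing in for the readability metric:

```python
def smallest_readable(image, ocr, downscale, threshold=0.9):
    """Shrinks `image` step by step until OCR output diverges from the
    initial text beyond `threshold`, then returns the lowest-resolution
    image that was still readable. downscale() returns None when the
    image cannot be reduced further."""
    initial = ocr(image)
    last_good = image
    while True:
        image = downscale(image)
        if image is None:
            return last_good
        scaled_text = ocr(image)
        matches = sum(a == b for a, b in zip(initial, scaled_text))
        similarity = matches / max(len(initial), 1)  # readability metric
        if similarity < threshold:
            return last_good
        last_good = image
```

Here the "image" can be any object the two callables agree on; in a real service it would be actual raster data.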
  • Publication number: 20130266176
    Abstract: A system and method for script and orientation detection of images using artificial neural networks (ANNs) are disclosed. In one example, textual content in the image is extracted. Further, a vertical component run (VCR) and horizontal component run (HCR) are obtained by vectorizing each connected component in the extracted textual content. Furthermore, a zonal density run (ZDR) is obtained for each connected component in the extracted textual content. In addition, a concatenated vertical document vector (VDV), horizontal document vector (HDV), and zonal density vector (ZDV) is computed by normalizing the obtained VCR, HCR, and ZDR, respectively, for each connected component. Moreover, the script in the image is determined using a script detection ANN module and the concatenated VDV, HDV, and ZDV of the image. Also, the orientation of the image is determined using an orientation detection ANN module and the concatenated VDV, HDV, and ZDV of the image.
    Type: Application
    Filed: April 10, 2012
    Publication date: October 10, 2013
    Inventors: Chirag JAIN, Chanaveeragouda Virupaxgouda GOUDAR, Kadagattur Gopinatha SRINIDHI, Yifeng WU
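A simplified illustration of one of the feature vectors above: a zonal density vector computed over horizontal bands of a connected component's bitmap. The patent's actual VCR/HCR/ZDR construction and normalization are more elaborate; this only conveys the flavor of the features fed to the ANNs.

```python
def zonal_density_vector(bitmap, zones=4):
    """bitmap: 2D list of 0/1 pixels for one connected component.
    Splits the bitmap into `zones` horizontal bands and returns the
    fraction of foreground pixels in each band."""
    band = max(len(bitmap) // zones, 1)
    vec = []
    for z in range(zones):
        rows = bitmap[z * band:(z + 1) * band]
        total = sum(len(r) for r in rows)
        fg = sum(sum(r) for r in rows)
        vec.append(fg / total if total else 0.0)
    return vec
```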
  • Patent number: 8552996
    Abstract: A mobile terminal apparatus includes a display section, a touch panel section detecting a touch on a display area of the display section, a pattern detection section detecting a pattern input into a specific area of the touch panel section, and a storage section. The storage section stores a relationship between the pattern input into the touch panel section and an application to be started correspondingly to the pattern in an application table. A control section determines an application corresponding to the pattern detected by the pattern detection section on the basis of the application table stored in the storage section and starting the determined application.
    Type: Grant
    Filed: February 18, 2010
    Date of Patent: October 8, 2013
    Assignees: Sony Corporation, Sony Mobile Communications AB
    Inventor: Hiroshi Morita
  • Publication number: 20130259313
    Abstract: Apparatus and methods for efficient delivery of a plurality of information regarding items for auction. In one embodiment, the apparatus comprises a server entity adapted to communicate with a plurality of information sources (such as sources having vehicle history information, estimated valuation information, etc.) The server entity compiles information received from the plurality of sources and formats the information for efficient delivery to a client device via e.g., SMS based text messaging, internet-based instant messaging, or the like. In another salient aspect of the invention, a client enters an auction identification number (AID) rather than a longer alpha-numeric VIN or other identification number. The server entity may also calculate and communicate estimates and/or alerts/reminders of when bidding will begin for particular items for auction which the client has expressed interest in. Exemplary client interface and business methods are also given.
    Type: Application
    Filed: March 13, 2013
    Publication date: October 3, 2013
    Applicant: MobileTrac, LLC
    Inventors: Paul Breed, James Michael Irish
  • Patent number: 8542952
    Abstract: Embodiments include a method, a manual device, a handheld manual device, a handheld writing device, a system, and an apparatus. An embodiment provides a manual device operable in a context. The manual device includes a writing element operable to form a mark on a surface in response to a movement of the writing element with respect to the surface. The manual device also includes a controller operable to encode information corresponding to the context of the manual device by regulating the formation of the mark.
    Type: Grant
    Filed: August 4, 2010
    Date of Patent: September 24, 2013
    Assignee: The Invention Science Fund I, LLC
    Inventors: Alexander J. Cohen, B. Isaac Cohen, Ed Harlow, Eric C. Leuthardt, Royce A. Levien, Mark A. Malamud
  • Patent number: 8542953
    Abstract: An image processing apparatus supports image processing in multiple languages via a user interface, a determining unit, a setting unit, and a character recognizing unit. The user interface sets an instruction from a user for various functions performed by the image processing apparatus. The user interface displays characters in a language. The determining unit automatically determines the language currently used for the characters displayed in the user interface of the various functions. The setting unit sets, in response to the determining unit automatically determining the language currently used for the characters displayed in the user interface, the determined language as a scanned document language for use in recognizing characters in a scanned document which is obtained by scanning a paper document. The character recognizing unit utilizes the scanned document language set by the setting unit to recognize characters in the scanned document and create text data.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: September 24, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventor: Koji Maekawa
  • Publication number: 20130243326
    Abstract: A parse module calibrates an interior space by parsing objects and words out of an image of the scene and comparing each parsed object with a plurality of stored objects. The parse module further selects a parsed object that is differentiated from the stored objects as the first object and stores the first object with a location description. A search module can detect the same objects from the scene and use them to determine the location of the scene.
    Type: Application
    Filed: May 3, 2013
    Publication date: September 19, 2013
    Applicant: International Business Machines Corporation
    Inventors: James Billingham, Helen Bowyer, Kevin Brown, Edward Jellard, Graham White
  • Publication number: 20130243325
    Abstract: Multiple sets of character data having termination characters are compared using parallel processing and without causing unwarranted exceptions. Each set of character data to be compared is loaded within one or more vector registers. In particular, in one embodiment, for each set of character data to be compared, an instruction is used that loads data in a vector register to a specified boundary, and provides a way to determine the number of characters loaded. Further, an instruction is used to find the index of the first delimiter character, i.e., the first zero or null character, or the index of unequal characters. Using these instructions, a location of the end of one of the sets of data or a location of an unequal character is efficiently provided.
    Type: Application
    Filed: March 15, 2012
    Publication date: September 19, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Timothy J. Slegel
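The instruction described above returns either the index of the first delimiter (null) character or the index of the first unequal character, whichever comes first. A scalar Python sketch of that semantics (the real mechanism operates on vector registers in hardware; this is only an illustration of the returned index):

```python
def first_null_or_mismatch(a: bytes, b: bytes) -> int:
    """Return the index of the first null byte in either buffer, or of
    the first position where the buffers differ, whichever comes first.
    If neither occurs, return the length of the shorter buffer."""
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] == 0 or b[i] == 0 or a[i] != b[i]:
            return i
    return n
```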
  • Patent number: 8538087
    Abstract: The invention deals with an aid device for reading a printed text, comprising a data acquisition peripheral with a camera and a communication interface, said peripheral being movable by a user on a printed text to frame a portion of text, a processing unit, communication means between the peripheral and the processing unit, and a vocal reproduction device. The processing unit is programmed to acquire a sequence of images framed by the camera, to detect when the user has stopped on the text, to recognize at least one word which the user intends to read, and to reproduce the sound of said at least one word by vocal synthesis through the vocal reproduction device.
    Type: Grant
    Filed: July 8, 2009
    Date of Patent: September 17, 2013
    Assignee: Universita' Degli Studi di Brescia
    Inventors: Umberto Minoni, Mauro Bianchi
  • Patent number: 8532333
    Abstract: Establishments are identified in geo-tagged images. According to one aspect, text regions are located in a geo-tagged image and text strings in the text regions are recognized using Optical Character Recognition (OCR) techniques. Text phrases are extracted from information associated with establishments known to be near the geographic location specified in the geo-tag of the image. The text strings recognized in the image are compared with the phrases for the establishments for approximate matches, and an establishment is selected as the establishment in the image based on the approximate matches. According to another aspect, text strings recognized in a collection of geo-tagged images are compared with phrases for establishments in the geographic area identified by the geo-tags to generate scores for image-establishment pairs. The establishments in each image of the large collection, as well as representative images showing each establishment, are identified using the scores.
    Type: Grant
    Filed: September 27, 2011
    Date of Patent: September 10, 2013
    Assignee: Google Inc.
    Inventors: Shlomo Urbach, Tal Yadid, Yuval Netzer, Andrea Frome, Noam Ben-Haim
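The approximate-matching step in the abstract above — comparing OCR'd strings against phrases for nearby establishments — can be sketched with a standard string-similarity ratio. The data shapes, threshold, and use of `difflib` are illustrative assumptions, not the patent's method:

```python
from difflib import SequenceMatcher

def best_establishment(ocr_strings, establishments, threshold=0.8):
    """Score each establishment by its best approximate match against
    the OCR'd text strings; return the top-scoring establishment whose
    score exceeds the threshold, or None if nothing matches."""
    best_name, best_score = None, threshold
    for name, phrases in establishments.items():
        for phrase in phrases:
            for text in ocr_strings:
                score = SequenceMatcher(None, text.lower(),
                                        phrase.lower()).ratio()
                if score > best_score:
                    best_name, best_score = name, score
    return best_name
```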
  • Patent number: 8532389
    Abstract: A robust OCR system requiring little computing capacity is obtained by first carrying out an adaptive pre-processing optimised in terms of pixel groups, which analyses the image in line segments. The most significant difference compared to previously known methods is that there is no longer a direct pattern comparison; instead, the line segments are traced in as optimal a manner as possible. The corresponding character is then deduced from the sequence of movements. As this sequence of movements can be scaled well and described in a relatively simple manner, this technique is especially suitable for mobile use. The sequence of movements of known characters is stored in a search word, such that the letters can be directly deduced from the movement. A dictionary/lexicon can also be used. If words are recognized by means of the dictionary/lexicon, the recognized letters can be used for an even more optimized character font identification.
    Type: Grant
    Filed: October 28, 2008
    Date of Patent: September 10, 2013
    Assignee: T-Mobile International AG
    Inventor: Gerd Mossakowski
  • Publication number: 20130230248
    Abstract: A method, system and computer program product for ensuring that the tags accurately describe a resource referenced by a bookmark in a collaborative bookmarking system. A user bookmarking an Internet resource that is referenced by a bookmark is detected. The user provides a description of the bookmark in the form of metadata, which includes tags, to be associated with the bookmark. The Internet resource is analyzed to determine its meaning. A second user bookmarking the same Internet resource that is referenced by the bookmark is detected. The second user provides a description of the bookmark in the form of metadata, which includes tags. The Internet resource is analyzed a second time to determine its meaning. If the relatedness of these meanings is beyond a threshold limit, then the original bookmark metadata is invalidated and the invalidated tags are replaced with the tags provided by the second user.
    Type: Application
    Filed: March 2, 2012
    Publication date: September 5, 2013
    Applicant: International Business Machines Corporation
    Inventors: Michael G. Alexander, Paul R. Bastide, Matthew E. Broomhall, Beth Anne M. Collopy, Robert E. Loredo
  • Publication number: 20130230246
    Abstract: A system and method for capturing image data is disclosed. A receipt image processing service selects from a repository a template that guides data capture of receipt data from a receipt image and presents the template to a user on an image capture device. A user previews the receipt image and the selected template. If the user decides that the template does not correctly indicate locations of data areas for data items in the receipt image, then the user either updates an existing template or creates a new template that correctly indicates the location of selected data areas in the receipt image. The selected template, the updated template, or the new template is then used to extract receipt data from the receipt image. The receipt data and receipt image data are then provided to the expense report system.
    Type: Application
    Filed: June 14, 2012
    Publication date: September 5, 2013
    Applicant: RICOH COMPANY, LTD.
    Inventor: Jayasimha Nuggehalli
  • Patent number: 8526739
    Abstract: A method according to one embodiment includes performing optical character recognition (OCR) on an image of a first document; generating a list of hypotheses mapping the first document to a complementary document using: textual information from the first document, textual information from the complementary document, and predefined business rules; at least one of: correcting OCR errors in the first document, and normalizing data from the complementary document, using at least one of the textual information from the complementary document and the predefined business rules; determining a validity of the first document based on the hypotheses; and outputting an indication of the determined validity. Additional systems, methods and computer program products are also presented.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: September 3, 2013
    Assignee: Kofax, Inc.
    Inventors: Mauritius A. R. Schmidtler, Roland G. Borrey, Jan W. Amtrup, Stephen Michael Thompson
  • Publication number: 20130223744
    Abstract: Embodiments of the invention are directed to using image data and contextual data to determine information about a scene, based on one or more previously obtained images. Contextual data, such as the location of image capture, can be used to determine previously obtained images related to the contextual data and other location-related information, such as billboard locations. Even with low-resolution devices, such as cell phones, image attributes, such as a histogram or optically recognized characters, can be compared between the previously obtained images and the newly captured image. Attributes matching within a predefined threshold indicate matching images. Information on the content of matching previously obtained images can be provided back to a user who captured the new image. User profile data can refine the content information. The content information can also be used as search terms for additional searching or other processing.
    Type: Application
    Filed: March 25, 2013
    Publication date: August 29, 2013
    Applicant: Yahoo! Inc.
    Inventor: Yahoo! Inc.
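The attribute-comparison step in the abstract above — deciding whether two images match by comparing histograms within a threshold — can be sketched with a normalized histogram intersection. The intersection measure and threshold value are illustrative assumptions, not the patent's specified test:

```python
def histograms_match(h1, h2, threshold=0.9):
    """Normalize two grayscale histograms and compute their
    intersection; an intersection at or above the threshold indicates
    the images match."""
    s1, s2 = sum(h1), sum(h2)
    intersection = sum(min(a / s1, b / s2) for a, b in zip(h1, h2))
    return intersection >= threshold
```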
  • Patent number: 8510093
    Abstract: An image processing apparatus includes a region dividing section, a character recognizing section, a classifying section, a translating section, a calculation section and a correcting section. The region dividing section divides a document image into sentence regions. The character recognizing section recognizes characters in the respective sentence regions. The classifying section classifies the sentence regions into groups in accordance with sizes of the characters. The translating section translates a sentence into a given language for each of the sentence regions. The calculation section calculates a character size of a sentence, which has been translated for each of the sentence regions by the translating section. The correcting section corrects a size of a translated character of each character region for every sentence region classified into the same group such that the character sizes calculated by the calculation section become equal.
    Type: Grant
    Filed: March 27, 2008
    Date of Patent: August 13, 2013
    Assignee: Fuji Xerox Co., Ltd.
    Inventor: Yuya Konno
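The correcting section described above equalizes translated character sizes within each group of sentence regions. A minimal sketch of that grouping-and-equalizing step, assuming (since the patent does not say here) that the smallest computed size in a group is chosen so all translated text in the group still fits:

```python
from collections import defaultdict

def equalize_group_sizes(regions):
    """Given regions as (group_id, computed_char_size) pairs, correct
    each region's size to the minimum computed size within its group so
    that translated text renders uniformly per group."""
    groups = defaultdict(list)
    for gid, size in regions:
        groups[gid].append(size)
    minima = {gid: min(sizes) for gid, sizes in groups.items()}
    return [(gid, minima[gid]) for gid, _ in regions]
```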
  • Patent number: 8510437
    Abstract: Techniques to facilitate a system to capture, process, and archive a series of user interactive events and subsequently retrieve the stored user interactive events are disclosed. The captured information is indexed and stored for future access either on a terminal device or an accessible remote server device.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: August 13, 2013
    Assignee: Yawonba Holdings AU, LLC
    Inventors: Jinsheng Wang, Joe Zheng
  • Patent number: 8509536
    Abstract: A character recognition device to recognize characters after preprocessing an input image corrects distortion. The character recognition device includes an image input unit to receive an image acquired by an image device, a character position estimator to calculate a probability value of a position of characters of the image to estimate the position of the characters, an image preprocessor to detect a plurality of edges including the characters from the image and to correct distortion of the edges, and a character recognizer to recognize the characters included in a rectangle formed by the plurality of edges.
    Type: Grant
    Filed: November 12, 2010
    Date of Patent: August 13, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyo Seok Hwang, Woo Sup Han
  • Publication number: 20130202208
    Abstract: An information processing device comprises a word string acquirer which acquires a word string that is a target of analysis; a partial string extractor which extracts, using two words on either side of each space in the word string, a partial string containing one word but not the other, a partial string not containing the one word but containing the other, and a partial string containing both words from the word string; a division coefficient acquirer which acquires, for each partial string, division coefficients indicating degree of reliability in dividing the partial string by respective division patterns that divide the partial string into words; a probability coefficient acquirer which calculates a coefficient indicating probability that the word string is divided at the space based on the division coefficients; and an outputter which determines division of the word string based on the coefficient, and divides and outputs the word string.
    Type: Application
    Filed: January 29, 2013
    Publication date: August 8, 2013
    Applicant: CASIO COMPUTER CO., LTD.
    Inventor: CASIO COMPUTER CO., LTD.
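The pipeline above combines per-partial-string division coefficients into one probability coefficient for each space, then divides the word string where that coefficient is high. A minimal sketch, assuming a simple average as the combining rule and a 0.5 threshold (the patent does not specify either):

```python
def space_division_score(coeff_left_only, coeff_right_only, coeff_both):
    """Combine the three partial-string division coefficients into one
    probability-like score for dividing at the space. Averaging is an
    illustrative choice only."""
    return (coeff_left_only + coeff_right_only + coeff_both) / 3.0

def divide_at_space(left, right, score, threshold=0.5):
    """Divide the two words at the space when the score clears the
    threshold; otherwise join them into a single word."""
    return [left, right] if score >= threshold else [left + right]
```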
  • Publication number: 20130202207
    Abstract: The present invention relates to a method for assisting multiple users to perform a collection simultaneously. The method includes the steps of: (a) acquiring digital data created with respect to recognition reference information of an object from a terminal of each of the multiple users; (b) determining or recognizing whether the respective digital data on the recognition reference information acquired through the terminals were created within a preset place condition and whether the respective digital data on the recognition reference information acquired through the terminals were created within a preset scope of the time; (c) selecting a specified group of users, including a first to an n-th user among the multiple users, who create the digital data within the preset place condition and within the preset scope of the time; and (d) providing information on rewards corresponding to the object for users included in the specified group of users.
    Type: Application
    Filed: December 30, 2011
    Publication date: August 8, 2013
    Applicant: OLAWORKS, INC.
    Inventor: Jung Hee Ryu
  • Patent number: 8504448
    Abstract: Methods, systems, apparatuses and/or computer program products are directed to outgoing returns processing. The outgoing returns processing includes receiving outgoing returns data files, where the outgoing returns data files may be of a plurality of different file formats and received from a plurality of different channels. The outgoing returns processing further includes converting the outgoing returns data files to a platform file format and retrieving image files based on the outgoing returns data files. The outgoing returns are then settled.
    Type: Grant
    Filed: October 26, 2009
    Date of Patent: August 6, 2013
    Assignee: Bank of America Corporation
    Inventors: L. Edward Shaw, Patricia Anne Sullivan Fleming, Martin T. Mulligan, Marcus Eugene McGinnis, Karl R. Johnson, Thomas D. Thibault
  • Publication number: 20130195360
    Abstract: Systems, apparatus and methods for extracting lower modifiers from a word image, before performing optical character recognition (OCR), based on a plurality of tests comprising a first test, a second test and a third test are presented. The method obtains the word image and performs the plurality of tests. The first test determines whether a vertical line spanning the height of the word image exists. The second test determines whether a jump in the number of components in the lower portion of the word image exists. The third test determines sparseness in a lower portion of the word image. The plurality of tests may run sequentially and/or in parallel. Results from the plurality of tests are used to decide whether a lower modifier exists by comparing and accumulating test results from the plurality of tests.
    Type: Application
    Filed: March 8, 2013
    Publication date: August 1, 2013
    Applicant: QUALCOMM Incorporated
    Inventor: QUALCOMM Incorporated
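The decision step above accumulates the three test results into a single verdict on whether a lower modifier exists. A minimal sketch, assuming a majority-vote combination (the abstract leaves the exact accumulation rule open):

```python
def has_lower_modifier(vertical_line_spans_height: bool,
                       component_count_jump: bool,
                       lower_portion_sparse: bool) -> bool:
    """Accumulate the three boolean test results and decide a lower
    modifier exists when at least two tests agree. The two-vote rule is
    an illustrative assumption."""
    votes = sum([vertical_line_spans_height,
                 component_count_jump,
                 lower_portion_sparse])
    return votes >= 2
```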