Limited To Specially Coded, Human-readable Characters Patents (Class 382/182)
-
Publication number: 20120201461
Abstract: A character detection apparatus is provided that detects a character from an image that includes a first image representing the character and a second image representing a translucent object. The character detection apparatus includes a calculating portion that, for each of the blocks obtained by dividing the overlapping region in which the first image is overlapped by the second image, calculates the frequency of appearance of pixels at each gradation of a property, and a detection portion that detects the character from the overlapping region based on the frequency at each gradation.
Type: Application. Filed: January 4, 2012. Publication date: August 9, 2012. Applicant: Konica Minolta Business Technologies, Inc. Inventor: Tomoo Yamanaka
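The per-block gradation histogram at the heart of this abstract can be sketched in a few lines of Python. This is a minimal illustration, not Konica Minolta's implementation; it assumes a grayscale image supplied as a 2D list of pixel values. A block containing text over a translucent fill tends to show a bimodal histogram, while a background-only block is unimodal.

```python
from collections import Counter

def block_histograms(gray, block_size):
    """Split a 2D grayscale image into blocks and count, per block,
    how often each gradation (pixel value) appears."""
    h, w = len(gray), len(gray[0])
    hists = {}
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            c = Counter()
            for y in range(by, min(by + block_size, h)):
                for x in range(bx, min(bx + block_size, w)):
                    c[gray[y][x]] += 1
            hists[(by, bx)] = c
    return hists

# Toy 4x4 image: the top-right block holds dark "character" pixels (60),
# the rest is uniform background (200).
img = [[200, 200, 60, 60],
       [200, 200, 60, 60],
       [200, 200, 200, 200],
       [200, 200, 200, 200]]
hists = block_histograms(img, 2)
```

A detector would then flag blocks whose histograms deviate from the background distribution as candidate character regions.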
-
Patent number: 8238599
Abstract: An image processing method is disclosed that is capable of efficient information extraction when embedding information in an image by using plural information embedding methods. The image processing method includes steps of embedding target information in the image by using one or more methods selected from plural information embedding methods; and embedding identification information in the image for identifying the selected one or more methods. The identification information is embedded in a method allowing an amount of the embedded information to be less than an amount of the embedded information in each of the selected one or more methods.
Type: Grant. Filed: December 14, 2007. Date of Patent: August 7, 2012. Assignee: Ricoh Company, Ltd. Inventors: Masaaki Ishikawa, Takashi Saitoh, Hiroshi Shimura, Haike Guan, Taeko Ishizu, Hideaki Yamagata
-
Patent number: 8238664
Abstract: Even if an image processing apparatus that can recognize a given character string is available on the network, the results of an OCR process are determined by the character recognition ability of whichever apparatus happened to perform it. Thus, after an MFP performs character recognition on image data contained in a character region of an image, if the results are judged highly likely to contain recognition errors, they are output to another MFP together with first information indicating that likelihood. Upon acquiring the results, the other MFP, which has higher character recognition capability, performs character recognition on the image data contained in the character region if the first information is attached.
Type: Grant. Filed: December 8, 2008. Date of Patent: August 7, 2012. Assignee: Canon Kabushiki Kaisha. Inventor: Shinichi Kanematsu
-
Publication number: 20120195505
Abstract: Methods and systems are provided that include obtaining a digital image from a digital photograph, such as one taken by a digital camera or a camera phone. The digital image includes, for example, a URI or URL, which may be contained within a visible frame. A character recognition technique, such as an optical character recognition technique, may be used to recognize the URI or URL from the digital image. The URI or URL may be used to access a corresponding Web page. The character recognition technique may be applied on the digital camera or cell phone itself, or remotely.
Type: Application. Filed: January 31, 2011. Publication date: August 2, 2012. Applicant: Yahoo! Inc. Inventor: Jin Suk Park
-
Patent number: 8229252
Abstract: Embodiments include an apparatus, device, system, computer-program product, and method. In an embodiment, a method is provided. The method includes receiving an annotation environment signal that includes a context information indicative of a recognizable aspect of an item. The method also includes receiving an expression signal that includes an annotation information indicative of a user expression associated with the recognizable aspect of the item. The method further includes electronically associating the context information indicative of a recognizable aspect of an item and the annotation information indicative of a user expression associated with the recognizable aspect of the item.
Type: Grant. Filed: April 25, 2005. Date of Patent: July 24, 2012. Assignee: The Invention Science Fund I, LLC. Inventors: Alexander J. Cohen, Edward K. Y. Jung, Royce A. Levien, Robert W. Lord, Mark A. Malamud, John D. Rinaldo, Jr.
-
Publication number: 20120183222
Abstract: A method for automatically typesetting patent images extracts a brief introduction of each patent image from the description part of a patent document, and records a keyword of the brief introduction. The method distinguishes the image label of each patent image from the image part of the patent document. The method rotates a patent image by ninety degrees clockwise when its image label does not contain the keyword, and outputs the rotated image to a display device.
Type: Application. Filed: December 25, 2011. Publication date: July 19, 2012. Applicants: HON HAI PRECISION INDUSTRY CO., LTD., HONG FU JIN PRECISION INDUSTRY (ShenZhen) CO., LTD. Inventors: WEI-QING XIAO, CHUNG-I LEE, CHIEN-FA YEH
-
Patent number: 8224131
Abstract: An image processing apparatus includes a user interface for setting an instruction from a user and is capable of switching the language used in the display screen of the user interface. The image processing apparatus creates text data by determining the language used in the display screen of the user interface and by performing character recognition suitable for recognizing a document of the determined language on read image data. The image processing apparatus also creates a file in which the text data and the image data are associated with each other. Therefore, character recognition is properly performed in the image processing apparatus by automatically selecting the language.
Type: Grant. Filed: September 29, 2005. Date of Patent: July 17, 2012. Assignee: Canon Kabushiki Kaisha. Inventor: Koji Maekawa
-
Publication number: 20120179468
Abstract: Briefly, in accordance with one or more embodiments, an image processing system is capable of receiving an image containing text, applying optical character recognition to the image, and then audibly reproducing the text via text-to-speech synthesis. Prior to optical character recognition, an orientation corrector is capable of detecting the amount of angular rotation of the text in the image with respect to horizontal, and then rotating the image by an appropriate amount to sufficiently align the text with respect to horizontal for optimal optical character recognition. The detection may be performed using steerable filters to provide an energy versus orientation curve of the image data. A maximum of the energy curve may indicate the amount of angular rotation that may be corrected by the orientation corrector.
Type: Application. Filed: March 20, 2012. Publication date: July 12, 2012. Inventor: Oscar Nestares
-
Patent number: 8218821
Abstract: An apparatus processes video signals containing video information related to a scene which may contain a vehicle license plate. The apparatus includes a video camera having a video imaging device for viewing the scene and generating a first video signal. A character detector in the video camera processes the first video signal to detect a license plate within the scene and to generate location information indicating the location of the license plate. A line detector in the camera determines a particular video line of the first video signal into which the location information is to be embedded. An insertion circuit in the camera embeds the location information into the particular video line of the first video signal to form a second video signal. The apparatus may also include a video capture device for receiving the second video signal from the video camera and converting the second video signal into digital image data.
Type: Grant. Filed: January 23, 2007. Date of Patent: July 10, 2012. Assignee: Pips Technology, Inc. Inventors: Alan K. Sefton, Kent A. Rinehart
-
Publication number: 20120148101
Abstract: Disclosed is a method of extracting a text area, the method including generating a text area prediction value within an input second image based on a plurality of text area data stored in a database including geometric information about a text area of a first image, generating a text recognition result value by determining whether text is recognized with respect to a probable text area within the input second image, and selecting a text area within the second image by combining the generated text area prediction value and text recognition result value.
Type: Application. Filed: December 13, 2011. Publication date: June 14, 2012. Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE. Inventors: Young Woo Yoon, Ho Sub Yoon, Kyu Dae Ban, Jae Yeon Lee, Do Hyung Kim, Su Young Chi, Jae Hong Kim, Joo Chan Sohn
-
Publication number: 20120141031
Abstract: A method for analyzing a character string, the method including: analyzing a character string to determine one or more characters of the character string; determining, from a dictionary source, an alternative character string to the analyzed character string; comparing the analyzed character string with the alternative character string to determine a weighting factor for each of the characters of the analyzed character string relative to the positional arrangement of the characters in the alternative character string; and, for each determined weighting factor, generating for each of the characters in the analyzed character string a corresponding character of a particular size as determined by the weighting factor.
Type: Application. Filed: September 27, 2011. Publication date: June 7, 2012. Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION. Inventor: Flemming Boegelund
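The per-character weighting can be sketched as follows. This is a hypothetical rendering of the idea: the abstract does not specify how a weight maps to a character size, so the mapping below (mismatching characters rendered larger, to catch a reviewer's eye) is an assumption, as are the base/emphasis sizes.

```python
def char_weights(analyzed, alternative):
    """Weight each character of the analyzed string by whether it agrees
    with the character at the same position in the dictionary alternative
    (1.0 = positional match, 0.0 = mismatch or no counterpart)."""
    return [1.0 if i < len(alternative) and ch == alternative[i] else 0.0
            for i, ch in enumerate(analyzed)]

def render_sizes(analyzed, alternative, base=12, emphasis=6):
    """Map each weight to a font size; low-weight (suspect) characters
    get a larger size."""
    return [base + int((1.0 - w) * emphasis)
            for w in char_weights(analyzed, alternative)]

# OCR read "he1lo"; the dictionary suggests "hello", so the '1' stands out.
sizes = render_sizes("he1lo", "hello")
```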
-
Publication number: 20120141030
Abstract: A code recognition method includes the following steps: a first code-image block, on which several first codes are displayed, is received. The first code-image block is partitioned into several second code-image blocks, each of which displays a second code, each second code being one of the first codes. Each second code-image block is recognized as several third codes corresponding to its second code. Some neighboring second code-image blocks are combined to form several third code-image blocks, each of which displays a first code set comprising some of the second codes. Each third code-image block is recognized as a second code set corresponding to its first code set, where each second code set includes codes selected from the third codes.
Type: Application. Filed: December 17, 2010. Publication date: June 7, 2012. Applicant: INSTITUTE FOR INFORMATION INDUSTRY. Inventors: Yi-Chong Zeng, Jing-Fung Chen
-
Patent number: 8194982
Abstract: In a document-image-data providing device, a document image inputting unit is configured to input document image data. An area recognition unit is configured to recognize a text area of a document image element containing text data among document image elements constituting the document image data, and another area of a document image element containing data other than the text data. A text data acquiring unit is configured to acquire text data contained in the recognized text area. A providing unit is configured to provide, in response to a document image data request received from the information processing device, both image data generated from the input document image data to have a resolution lower than a resolution of the input document image data and the text data acquired by the text data acquiring unit, to the information processing device.
Type: Grant. Filed: September 12, 2008. Date of Patent: June 5, 2012. Assignee: Ricoh Company, Ltd. Inventor: Masajiro Iwasaki
-
Publication number: 20120134590
Abstract: A server system receives a visual query from a client system distinct from the server system. The server system performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system scores each textual character in the plurality of textual characters in accordance with the geographic location of the client system. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. Then the server system retrieves a canonical document having the one or more high quality textual strings and sends at least a portion of the canonical document to the client system.
Type: Application. Filed: December 1, 2011. Publication date: May 31, 2012. Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
-
Publication number: 20120134589
Abstract: An image of a known text sample having a text type is generated. The image of the known text sample is input into each OCR engine of a number of OCR engines. Output text corresponding to the image of the known text sample is received from each OCR engine. For each OCR engine, the output text received from the OCR engine is compared with the known text sample, to determine a confidence value of the OCR engine for the text type of the known text sample.
Type: Application. Filed: November 27, 2010. Publication date: May 31, 2012. Inventor: Prakash Reddy
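The benchmarking loop above can be sketched directly. The abstract leaves the comparison metric unspecified; the sketch below assumes edit similarity (difflib's `ratio`) as the confidence value, which is one plausible choice, not necessarily the patent's.

```python
import difflib

def engine_confidence(ocr_output, known_sample):
    """Score one OCR engine for one text type by comparing its output on
    an image of a known sample against the sample text itself."""
    return difflib.SequenceMatcher(None, ocr_output, known_sample).ratio()

def rank_engines(outputs, known_sample):
    """outputs: {engine_name: output_text}. Returns (name, confidence)
    pairs sorted by descending confidence for this text type."""
    scored = {name: engine_confidence(text, known_sample)
              for name, text in outputs.items()}
    return sorted(scored.items(), key=lambda kv: -kv[1])

# Hypothetical engines: engineA reads the sample perfectly, engineB
# confuses 'o'/'0' and 'i'/'1'.
ranking = rank_engines({"engineA": "quick brown fox",
                        "engineB": "qu1ck br0wn f0x"},
                       "quick brown fox")
```

A dispatcher could then route future documents of each text type to the highest-confidence engine.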
-
Patent number: 8189920
Abstract: A technique is provided that can contribute to a reduction in the operation burden of managing the results of semantic determination processing applied to objects included in an image. An object included in an image of image data is extracted. A semantic of the object in a layout of the image data is determined. When it is determined that plural objects have an identical semantic, a display unit is caused to notify information concerning the plural objects, which are determined as having the semantic, in association with information concerning the semantic.
Type: Grant. Filed: December 28, 2007. Date of Patent: May 29, 2012. Assignees: Kabushiki Kaisha Toshiba, Toshiba Tec Kabushiki Kaisha. Inventors: Hajime Tomizawa, Akihiko Fujiwara
-
Patent number: 8189960
Abstract: An image processing apparatus includes: an imaging information calculation unit acquiring a first image and higher-resolution second images, and calculating coordinate positions of the second images relative to the first image and differences in imaging direction between second cameras and a first camera; an eyepoint conversion unit generating eyepoint conversion images obtained by converting the second images based on the differences in imaging direction so that the eyepoints of the second cameras coincide with the eyepoint of the first camera, and matching the first image with the eyepoint conversion images to calculate phase deviations of the eyepoint conversion images from the first image; and an image synthesizing unit extracting high-frequency images, having frequency components higher than or equal to a predetermined frequency band, from the second images, and pasting the high-frequency images at the coordinate positions in correspondence with the first image to eliminate the phase deviations and generate a synthesized image.
Type: Grant. Filed: June 24, 2009. Date of Patent: May 29, 2012. Assignee: Sony Corporation. Inventors: Tetsujiro Kondo, Tetsushi Kokubo, Kenji Tanaka, Hitoshi Mukai, Hirofumi Hibi, Kazumasa Tanaka, Hiroyuki Morisaki
-
Patent number: 8189961
Abstract: An image deskew system and techniques are used in the context of optical character recognition. An image is obtained of an original set of characters in an original linear (horizontal) orientation. An acquired set of characters, which is skewed relative to the original linear orientation by a rotation angle, is represented by pixels of the image. The rotation angle is estimated, and a confidence value may be associated with the estimation, to determine whether to deskew the image. In connection with rotation angle estimation, an edge detection filter is applied to the acquired set of characters to produce an edge map, which is input to a linear Hough transform filter to produce a set of output lines in parametric form. The output lines are assigned scores, and based on the scores, at least one output line is determined to be a dominant line with a slope approximating the rotation angle.
Type: Grant. Filed: June 9, 2010. Date of Patent: May 29, 2012. Assignee: Microsoft Corporation. Inventors: Djordje Nijemcevic, Sasa Galic
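The edge-map-to-dominant-line step can be illustrated with a toy linear Hough transform. This is a minimal sketch of the general technique, not Microsoft's filter chain (no edge detection filter or confidence values here); it assumes the edge map is already reduced to a list of (x, y) edge pixels.

```python
import math
from collections import Counter

def estimate_rotation(edge_points, angle_step=1.0):
    """Minimal Hough transform: every edge pixel votes for all (theta, rho)
    line parameterizations passing through it; the strongest bin gives the
    dominant line. Theta is the angle of the line's normal, so the text
    rotation is approximately theta - 90 degrees."""
    acc = Counter()
    for x, y in edge_points:
        for i in range(int(180 / angle_step)):
            theta = math.radians(i * angle_step)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(i * angle_step, rho)] += 1
    (best_theta, _), _ = acc.most_common(1)[0]
    return best_theta - 90.0

# Synthetic edge pixels along a text baseline rotated 10 degrees
# from horizontal.
pts = [(x, round(x * math.tan(math.radians(10)))) for x in range(0, 100, 2)]
rotation = estimate_rotation(pts)
```

Deskewing then amounts to rotating the image by the negated estimate before running OCR.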
-
Patent number: 8189921
Abstract: The present invention first roughly classifies an analysis range, specified by the operator in the color image data of a form, into background, a character frame, and characters; precisely specifies the character frame on the basis of the classification result; removes the character frame from the color image data from which the background has been eliminated; and recognizes the remaining characters.
Type: Grant. Filed: March 30, 2009. Date of Patent: May 29, 2012. Assignee: Fujitsu Frontech Limited. Inventors: Shinichi Eguchi, Hajime Kawashima, Kouichi Kanamoto, Shohei Hasegawa, Katsutoshi Kobara, Maki Yabuki
-
Patent number: 8189919
Abstract: A method and system for container identification are disclosed. The method comprises obtaining a plurality of digital images of a character sequence on the container, extracting the character sequences from the images, combining the character sequences into at least one identification code candidate, and selecting one of the candidates as the identification code. The system comprises at least one camera and a computer system that is electrically coupled to the camera, whereby when the computer system receives a trigger signal, said computer system takes a plurality of digital images from the camera, produces character sequences as partial recognition results for the plurality of digital images, and combines the partial recognition results together to produce the identification code.
Type: Grant. Filed: December 27, 2007. Date of Patent: May 29, 2012. Inventors: Chung Mong Lee, Wing Kin Wong, Ka Yu Sin
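Combining partial recognition results from several shots of the same container code can be sketched as a per-position majority vote. This is one plausible reading of "combines the partial recognition results together", not necessarily the inventors' exact rule; the container number below is made up.

```python
from collections import Counter

def combine_readings(readings):
    """Combine per-image character sequences into one identification code
    by majority vote at each character position. Readings may be short if
    a character was missed in some image."""
    length = max(len(r) for r in readings)
    code = []
    for i in range(length):
        votes = Counter(r[i] for r in readings if i < len(r))
        code.append(votes.most_common(1)[0][0])
    return "".join(code)

# Three noisy reads of the same (hypothetical) container code: each read
# has a different single-character error, so the vote recovers the code.
code = combine_readings(["MSKU123456", "M5KU123456", "MSKU1Z3456"])
```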
-
Publication number: 20120128250
Abstract: A server system receives a visual query from a client system distinct from the server system, performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query, and scores each textual character in the plurality of textual characters. The server system identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query; retrieves a canonical document having the one or more high quality textual strings; generates a combination of the visual query and at least a portion of the canonical document; and sends the combination to the client system.
Type: Application. Filed: December 1, 2011. Publication date: May 24, 2012. Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
-
Publication number: 20120131520
Abstract: A device with a touch-sensitive screen supports tapping gestures for identifying, selecting or working with initially unrecognized text. A single tap gesture can cause a portion of a character string to be selected. A double tap gesture can cause the entire character string to be selected. A tap and hold gesture can cause the device to enter a cursor mode wherein the placement of a cursor relative to the characters in a character string can be adjusted. In a text selection mode, a finger can be used to move the cursor from a cursor start position to a cursor end position and to select text between the positions. Selected or identified text can populate fields, control the device, etc. Recognition of text can be performed upon access of an image or upon the device detecting a tapping gesture in association with display of the image on the screen.
Type: Application. Filed: January 30, 2012. Publication date: May 24, 2012. Inventors: Ding-Yuan Tang, Joey G. Budelli
-
Publication number: 20120128251
Abstract: A server system receives a visual query from a client system and performs optical character recognition (OCR) on the visual query to produce text recognition data representing textual characters, including a plurality of textual characters in a contiguous region of the visual query. The server system also produces structural information associated with the textual characters in the visual query. Textual characters in the plurality of textual characters are scored. The server system further identifies, in accordance with the scoring, one or more high quality textual strings, each comprising a plurality of high quality textual characters from among the plurality of textual characters in the contiguous region of the visual query. A canonical document that includes the one or more high quality textual strings and that is consistent with the structural information is retrieved, and at least a portion of the canonical document is sent to the client system.
Type: Application. Filed: December 1, 2011. Publication date: May 24, 2012. Inventors: David Petrou, Ashok C. Popat, Matthew R. Casey
-
Patent number: 8184908
Abstract: An image processing system includes a computer and an image processing apparatus. A control portion of the apparatus controls the optical reading of the whole of one side and the other side of a transparent sheet, and transmits the obtained front- and rear-side image data to the computer. A control portion of the computer obtains, by character recognition, character information for each data area corresponding to the containing range of each document on the transparent sheet, with respect to both the front- and rear-side image data received; relates the recognized character information of the two sides to each other for each data area, based on previously associated front-and-rear information showing the positional relation between one side and the other side of the document; and stores it in a data storing portion.
Type: Grant. Filed: April 1, 2008. Date of Patent: May 22, 2012. Assignee: Sharp Kabushiki Kaisha. Inventor: Takayoshi Okochi
-
Publication number: 20120120444
Abstract: Disclosed is an image processing apparatus which (i) determines whether characters to be subjected to a character recognition process in image data are larger than a predetermined size, (ii) if the characters are determined to be larger than the predetermined size, reduces at least a region including the characters so that the size of the characters fits within the predetermined size, and (iii) performs a character recognition process on the characters using the reduced image data.
Type: Application. Filed: November 9, 2011. Publication date: May 17, 2012. Applicant: SHARP KABUSHIKI KAISHA. Inventors: Hitoshi Hirohata, Akihito Yoshida, Atsuhisa Morimoto, Yohsuke Konishi
-
Publication number: 20120114243
Abstract: Techniques for shape clustering and applications in processing various documents, including an output of an optical character recognition (OCR) process.
Type: Application. Filed: January 17, 2012. Publication date: May 10, 2012. Applicant: GOOGLE INC. Inventors: Luc Vincent, Raymond W. Smith
-
Publication number: 20120106845
Abstract: First data represents an image of text including words. Second data represents the text in a non-image form. A particular word within the second data is replaced with a corresponding part of the first data representing the image of the particular word.
Type: Application. Filed: October 30, 2010. Publication date: May 3, 2012. Inventor: Prakash Reddy
-
Publication number: 20120105918
Abstract: Disclosed is a novel system and method for augmenting present methods of automatically detecting the orientation of digital pages of scanned documents in a digital document processing environment. The method takes advantage of the observation that pages scanned in data processing centers are often highly correlated. It contains five primary steps: 1) page orientation (i.e., up/down) is detected using a traditional method; 2) each page is classified as either directional or non-directional; 3) the pages classified as directional are clustered into groups; 4) the direction of each group is determined; and 5) each group's direction is used to revise the orientation of the pages it contains. Performance, in terms of both speed and accuracy, is very high relative to current methods, and detection error rates can be reduced significantly.
Type: Application. Filed: October 29, 2010. Publication date: May 3, 2012. Applicant: XEROX CORPORATION. Inventors: Zhigang Fan, Michael R. Campanelli
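Steps 4 and 5 of the abstract (determine each group's direction and use it to revise per-page orientations) can be sketched as a majority vote within each cluster. The page representation below is an assumption for illustration; the clustering itself (step 3) is taken as given.

```python
from collections import Counter

def revise_orientations(pages):
    """pages: list of dicts with 'orient' ('up'/'down'), 'directional'
    (bool), and 'group' (cluster id, or None). The majority orientation
    of each directional group overrides the per-page detection."""
    majority = {}
    for p in pages:
        if p["directional"] and p["group"] is not None:
            majority.setdefault(p["group"], Counter())[p["orient"]] += 1
    revised = []
    for p in pages:
        if p["directional"] and p["group"] in majority:
            revised.append(majority[p["group"]].most_common(1)[0][0])
        else:
            revised.append(p["orient"])
    return revised

pages = [{"orient": "up", "directional": True, "group": 0},
         {"orient": "down", "directional": True, "group": 0},  # likely a detector error
         {"orient": "up", "directional": True, "group": 0},
         {"orient": "down", "directional": False, "group": None}]
out = revise_orientations(pages)
```

The second page's detected "down" is overridden by its group's majority, while the non-directional page keeps its own detection.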
-
Patent number: 8170289
Abstract: Systems and methods for character-by-character alignment of two character sequences (such as OCR output from a scanned document and an electronic version of the same document) using a Hidden Markov Model (HMM) in a hierarchical fashion are disclosed. The method may include aligning two character sequences utilizing multiple hierarchical levels. For each hierarchical level above a final hierarchical level, the aligning may include parsing character subsequences from the two character sequences, performing an alignment of the character subsequences, and designating aligned character subsequences as the anchors, the parsing and performing of the alignment being between the anchors generated from an immediately previous hierarchical level if the current hierarchical level is below the first hierarchical level. For the final hierarchical level, the aligning includes performing a character-by-character alignment of characters between anchors generated from the immediately previous hierarchical level.
Type: Grant. Filed: September 21, 2005. Date of Patent: May 1, 2012. Assignee: Google Inc. Inventors: Shaolei Feng, Raghavan Manmatha
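The anchor idea (align coarse units first, then only align characters between the anchors) can be sketched with the standard library. This substitutes difflib's matching blocks for the patent's HMM, so it illustrates the hierarchical-anchor structure only, not the probabilistic alignment itself.

```python
import difflib

def anchor_align(a, b, min_anchor=4):
    """Coarse-to-fine alignment sketch: long exact matches act as anchors,
    and only the short gaps between anchors are aligned character by
    character (here with a plain SequenceMatcher instead of an HMM)."""
    sm = difflib.SequenceMatcher(None, a, b)
    anchors = [m for m in sm.get_matching_blocks() if m.size >= min_anchor]
    pairs, pa, pb = [], 0, 0
    for m in anchors + [difflib.Match(len(a), len(b), 0)]:
        # fine character alignment inside the gap before this anchor
        gap = difflib.SequenceMatcher(None, a[pa:m.a], b[pb:m.b])
        for op, i1, i2, j1, j2 in gap.get_opcodes():
            if op in ("equal", "replace"):
                for k in range(min(i2 - i1, j2 - j1)):
                    pairs.append((a[pa + i1 + k], b[pb + j1 + k]))
        # the anchor itself aligns one-to-one
        pairs.extend(zip(a[m.a:m.a + m.size], b[m.b:m.b + m.size]))
        pa, pb = m.a + m.size, m.b + m.size
    return pairs

# OCR output vs. the electronic version of the same text.
pairs = anchor_align("the quick brown fox", "the qu1ck brown f0x")
```

The mismatching pairs surface exactly the OCR confusions ('i' read as '1', 'o' as '0').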
-
Patent number: 8170338
Abstract: A disclosed information processing apparatus includes an image data obtaining unit configured to obtain image data generated by scanning a confirmation/correction form on a recording medium; a workflow definition obtaining unit configured to obtain a workflow definition of a workflow that includes a workflow step corresponding to the recording medium; a form definition obtaining unit configured to obtain a form definition of the confirmation/correction form corresponding to the workflow step based on the workflow definition; a field image extracting unit configured to extract a field image of a field of the confirmation/correction form from the image data based on the form definition; and a handwriting image extracting unit configured to remove a previous handwriting image and extract a current handwriting image from the field image if the field image contains both the previous handwriting image and the current handwriting image.
Type: Grant. Filed: April 9, 2008. Date of Patent: May 1, 2012. Assignee: Ricoh Company, Ltd. Inventor: Kunio Okita
-
Publication number: 20120099792
Abstract: A computer-implemented method for adaptive optical character recognition on a document with distorted characters includes performing a distortion-correction transformation on a segmented character of the document, assuming the segmented character to be a candidate character. The method further includes comparing the transformed segmented character to the candidate character by calculating a comparison score. If the calculated score is within a predetermined range, the segmented character is identified with the candidate character. The method may be implemented in either computer hardware configured to perform the method or computer software embodied in a non-transitory, tangible, computer-readable storage medium. A corresponding computer program product and data processing system are also disclosed.
Type: Application. Filed: September 5, 2010. Publication date: April 26, 2012. Applicant: International Business Machines Corporation. Inventors: Dan Shmuel Chevion, Vladimir Kluzner, Asaf Tzadok, Eugeniusz Walach
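The hypothesize-and-score loop can be sketched with tiny binary bitmaps. This sketch omits the patent's distortion-correction transformation (the key step) and uses a plain pixel-overlap score as a stand-in comparison metric; the 3x3 glyphs are made up for illustration.

```python
def overlap_score(a, b):
    """Fraction of pixel positions on which two equally sized binary
    bitmaps agree."""
    total = sum(len(row) for row in a)
    agree = sum(pa == pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return agree / total

def recognize(segmented, glyphs, threshold=0.9):
    """Try each candidate glyph in turn (the patent would first transform
    the segmented character toward the candidate) and accept the candidate
    only if its comparison score falls within the acceptance range."""
    best, best_score = None, 0.0
    for label, glyph in glyphs.items():
        s = overlap_score(segmented, glyph)
        if s > best_score:
            best, best_score = label, s
    return best if best_score >= threshold else None

GLYPHS = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}
seg = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
label = recognize(seg, GLYPHS)
```

A character matching no candidate well enough is rejected rather than misread, which is the point of the predetermined score range.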
-
Publication number: 20120093415
Abstract: One embodiment described herein may take the form of a system or method for dynamically recognizing an Internet address within a video or audio component of a multimedia presentation on a distribution system or network such as, but not limited to, a satellite, cable or Internet network. In general, the embodiment may analyze the audio portion of the presentation or one or more frames of a video component to detect the presence of a web address within the one or more frames. In the embodiment where the audio portion is analyzed, the system may perform a voice recognition or a similar analysis on the audio portion to detect the utterance of a web address. Similarly, one embodiment analyzing the one or more frames of the video component may comprise performing an optical character recognition (OCR) of the frame.
Type: Application. Filed: October 18, 2010. Publication date: April 19, 2012. Applicant: Eldon Technology Limited. Inventors: David John Robinson, Craig Avison-Fell
-
Publication number: 20120087587
Abstract: The invention provides various methods and techniques for binarizing an image, generally in advance of further processing such as optical character recognition (OCR). One step includes establishing boundaries of image objects of an image and classifying each image object as either suspect or non-suspect. Another step includes creating a local binarization threshold map that may include or store threshold binarization values associated with image objects classified as non-suspect. Yet another step includes expanding the local binarization threshold map to cover the entire image, thereby creating a global binarization threshold map for the entire image. The methods and techniques are capable of identifying and working with separation objects and incuts in images.
Type: Application. Filed: December 16, 2011. Publication date: April 12, 2012. Inventor: Olga Kacher
-
Publication number: 20120087537
Abstract: A system and method for business card information reading and managing comprises an optional scanner that can provide a dark background, a preprocessing module, a host computer with data storage, input/output (I/O), and display devices, an information extracting module, an optical character recognition (OCR) engine, an image-processing (IP) engine, and an information organizing module, all connected to the host computer to work together. On top of the system is the dataflow logic, i.e. the method, which guides all the business card information reading and management as a sequence of steps. The method is supported mainly by software (SW) running on the host computer, with a GUI that interacts with end users and provides functions such as scanning/loading images and managing results.
Type: Application. Filed: October 12, 2010. Publication date: April 12, 2012. Inventors: Lisong Liu, Lai Chen
-
Publication number: 20120083294
Abstract: An image is received by a data processing system. A text recognition module identifies textual information in the image. A data detection module identifies a pattern in the textual information and determines a data type of the pattern. A user interface provides a user with a contextual processing command option based on the data type of the pattern in the textual information.
Type: Application. Filed: September 30, 2010. Publication date: April 5, 2012. Applicant: APPLE INC. Inventors: Cedric Bray, Olivier Bonnet
-
Publication number: 20120082382Abstract: A system for document processing including decomposing an image of a document into at least one data entry region sub-image, providing the data entry region sub-image to a data entry clerk available for processing the data entry region sub-image, receiving from the data entry clerk a data entry value associated with the data entry region sub-image, and validating the data entry value.Type: ApplicationFiled: December 12, 2011Publication date: April 5, 2012Applicant: ORBOGRAPH LTDInventors: Avikam Baltsan, Ori Sarid, Aryeh Elimelech, Aharon Boker, Zvi Segal, Gideon Miller
-
Patent number: 8150159Abstract: The present invention discloses a method for recognizing hand-written Latin letters. The method accounts for the many hand-written styles of Latin letters, extracts multiple stable characteristics of Latin letters across different hand-written styles, and classifies the standard Latin letter set by one characteristic at a time, so that the whole set is divided into many small, overlapping letter subsets that serve as coarse-classification candidate sets. When an inputted hand-written Latin letter is recognized, the coarse-classification candidate subset matching the characteristics of the input is retrieved. The multiple stable characteristics maintain the recognition rate, while the multilayer coarse-classification candidate subsets narrow the search path and increase recognition speed.Type: GrantFiled: March 3, 2009Date of Patent: April 3, 2012Assignee: Ningbo Sunrun Elec. & Info. ST & D Co., Ltd.Inventors: Jiaming He, Jianfen Wen, Dexiang Jia, Jing Chen, Ping Chen, Chengchen Ma, Zhouyi Fan, Hongzhen Ding, Zhihui Shi, Aijun Shi, Linghui Fan, Qingbo Zhang
-
Publication number: 20120076415Abstract: A method and system for analyzing a patent disclosure is disclosed. The method and system comprise a computerized cross-check of reference labels within drawings of a disclosure to reference labels found within the text of the disclosure, and generating warnings for reference labels that are missing from either the drawings or the text.Type: ApplicationFiled: September 27, 2010Publication date: March 29, 2012Inventor: Michael R. Kahn
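The cross-check this abstract describes reduces to a set difference between labels found in the drawings and labels found in the specification text. A minimal sketch, assuming reference labels are short numerals (the regex and the sample disclosure are illustrative):

```python
import re

# Assumption for illustration: reference labels are 2-3 digit numerals.
LABEL = re.compile(r"\b\d{2,3}\b")

def cross_check(drawing_labels, spec_text):
    """Warn about labels present in only one of drawings or text."""
    in_text = set(LABEL.findall(spec_text))
    in_drawings = set(drawing_labels)
    return {
        "missing_from_text": sorted(in_drawings - in_text),
        "missing_from_drawings": sorted(in_text - in_drawings),
    }

warnings = cross_check(
    drawing_labels=["10", "12", "14"],
    spec_text="The housing 10 encloses the sensor 12 and display 16.",
)
```

Label 14 appears only in the drawings and label 16 only in the text, so both trigger warnings.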
-
Patent number: 8139870Abstract: There is provided an image processing apparatus including a character recognition section that executes character recognition on an input document image and outputs a character recognition result, an item name extraction section that extracts a character string relevant to an item name of an information item from the character recognition result, an item value extraction section that extracts a character string of an item value corresponding to the item name from the vicinity of the character string relevant to the item name in the document image, and an extraction information creation section that creates extraction information by associating the character string of the item value extracted by the item value extraction section to the item name.Type: GrantFiled: August 29, 2006Date of Patent: March 20, 2012Assignee: Fuji Xerox Co., Ltd.Inventor: Masahiro Kato
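The item-name/item-value pairing described above can be sketched as a keyword search over recognized lines, taking the text to the keyword's right as "the vicinity." The item names and the sample OCR lines are invented for the example; real documents would need a richer notion of vicinity (e.g. spatial layout).

```python
import re

ITEM_NAMES = {"Invoice No", "Date", "Total"}  # illustrative item names

def extract_items(ocr_lines):
    """For each known item name found in a recognized line, take the text
    to its right as the corresponding item value."""
    found = {}
    for line in ocr_lines:
        for name in ITEM_NAMES:
            m = re.search(re.escape(name) + r"\s*[:.]?\s*(\S.*)", line)
            if m:
                found[name] = m.group(1).strip()
    return found

items = extract_items(["Invoice No: A-1042", "Total: 98.50 USD"])
```

The result is the extraction information: each item name associated with the item value recognized next to it.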
-
Patent number: 8139861Abstract: The present invention provides a technique of accurately extracting areas of characters included in a captured image even in a case where noise or dirt of a relatively large area occurs in a background image. A pixel value integration evaluation value is obtained by integrating pixel values in a character extracting direction B at each of the pixel positions in a character string direction A of an image including a character string. A waveform of the value is expressed as waveform data. A first threshold and a second threshold are set for the waveform data. An area in which the waveform data exceeds the first threshold is set as a character candidate area. In a case where an area in which the pixel value integration evaluation value exceeds the second threshold exists in the character candidate areas, the character candidate area is regarded as a true character area and the characters are extracted.Type: GrantFiled: September 13, 2007Date of Patent: March 20, 2012Assignee: Keyence CorporationInventor: Masato Shimodaira
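The two-threshold waveform idea in this abstract is essentially a projection profile: integrate pixel values across the character-extracting direction, find runs above a low threshold, and keep only runs whose peak also clears a high threshold (rejecting large noise or dirt). A minimal sketch with made-up thresholds and a toy image:

```python
def character_areas(image, t1, t2):
    """image: rows of values where text pixels are large.
       Integrate down each column (the character extracting direction B)
       to get a 1-D waveform along the string direction A."""
    width = len(image[0])
    waveform = [sum(row[x] for row in image) for x in range(width)]
    areas, start = [], None
    for x, v in enumerate(waveform + [0]):     # sentinel closes a trailing run
        if v > t1 and start is None:
            start = x                           # candidate area begins
        elif v <= t1 and start is not None:
            run = waveform[start:x]
            if max(run) > t2:                   # confirm with second threshold
                areas.append((start, x - 1))    # true character area
            start = None
    return areas

img = [[0, 9, 9, 0, 2, 0, 8, 9, 0],
       [0, 9, 8, 0, 2, 0, 9, 9, 0]]
areas = character_areas(img, t1=3, t2=15)
```

The faint blob in the middle column clears the first threshold but not the second, so it is discarded as noise while the two genuine characters are kept.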
-
Patent number: 8139862Abstract: The present invention provides a technique of accurately extracting areas of characters included in a captured image even in a case where noise or dirt of a relatively large area occurs in a background image. An integrated pixel value is obtained by integrating pixel values in a character extracting direction B for pixel positions in a character string direction A of an image including a character string. A standard deviation value is calculated along the character extracting direction for pixel positions in a character string direction A. The integrated pixel value and the standard deviation value are combined for pixel positions in a character string direction A. A threshold is set automatically or manually. A part of pixel positions in a character string direction A having the combined value of the integrated pixel value and the standard deviation value higher than the threshold is recognized as a character area to be extracted.Type: GrantFiled: September 13, 2007Date of Patent: March 20, 2012Assignee: Keyence CorporationInventor: Masato Shimodaira
-
Patent number: 8140339Abstract: A sign language recognition apparatus and method is provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand, arm and shoulder to measure dynamic and static gestures. The sensors are connected to a microprocessor to search a library of gestures and generate output signals that can then be used to produce a synthesized voice or written text. The apparatus includes sensors such as accelerometers on the fingers and thumb and two accelerometers on the back of the hand to detect motion and orientation of the hand. Sensors are also provided on the back of the hand or wrist to detect forearm rotation, an angle sensor to detect flexing of the elbow, two sensors on the upper arm to detect arm elevation and rotation, and a sensor on the upper arm to detect arm twist. The sensors transmit the data to the microprocessor to determine the shape, position and orientation of the hand relative to the body of the user.Type: GrantFiled: July 21, 2009Date of Patent: March 20, 2012Assignee: The George Washington UniversityInventor: Jose L. Hernandez-Rebollar
-
Patent number: 8135218Abstract: Words possibly included in a scene image shot by a mobile camera can be efficiently extracted using a word dictionary or a map database. Positional information acquiring means 101 measures a current position of the device to acquire positional information. Directional information acquiring means 102 detects a direction of the device to acquire directional information. Character recognizing means 104 determines a range of shooting of a scene image based on the current positional information and the directional information. The character recognizing means 104 extracts from a map database 103 information such as store names, building names, and place names associated with positions in the shooting range. Then the character recognizing means 104 conducts character recognition using word knowledge such as the extracted store names, building names, and place names.Type: GrantFiled: October 4, 2010Date of Patent: March 13, 2012Assignee: NEC CorporationInventors: Katsuhiko Takahashi, Daisuke Nishiwaki
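Using map-derived word knowledge to steer recognition, as this abstract describes, can be approximated by snapping a noisy OCR string to the closest name expected in the shooting range. The edit-distance approach and all names below are illustrative assumptions, not the patented recognition method.

```python
def edit_distance(a, b):
    # Classic Levenshtein distance, row-by-row dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct_with_map(raw_ocr, map_words):
    """Snap a noisy OCR string to the closest store/place name expected
    in the camera's shooting range."""
    return min(map_words, key=lambda w: edit_distance(raw_ocr, w))

names_in_range = ["EKODA STATION", "SUNRISE BAKERY", "CITY HALL"]
best = correct_with_map("SUNR1SE BAKFRY", names_in_range)
```

Restricting candidates to names near the measured position and heading keeps the vocabulary small, which is what makes this correction step cheap and accurate.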
-
Patent number: 8135217Abstract: The present invention relates to a method for aligning a camera sensor to significant data, which is text or barcode data to be recognized, comprising the steps of: capturing an image of the significant data by means of the camera sensor; detecting a predominant alignment line of the significant data and detecting its angle relative to a horizontal line of the captured image; determining image sections within the edge- and line-enhanced image that most likely contain significant data lines; selecting, from the determined image sections, a representative image section that is aligned with the predominant alignment line; capturing a following image of the significant data; and tracking the representative image section and determining the predominant alignment line from it, to achieve fast calculation and audio or tactile feedback of the alignment quality to the user.Type: GrantFiled: December 9, 2010Date of Patent: March 13, 2012Assignee: Beyo GmbHInventors: Cuneyt Goktekin, Oliver Tenchio
-
Publication number: 20120053956Abstract: A multifunction device may include a computing device and a computer readable storage medium in communication with the computing device. The computer-readable storage medium may include one or more programming instructions for processing a configuration document including one or more transmittal instructions associated with a pharmacy, storing the transmittal instructions in a database, receiving a patient prescription, receiving a selection of the pharmacy to fill the patient prescription, and in response to receiving the selection of the pharmacy, transmitting the prescription to the pharmacy based on the associated transmittal instructions.Type: ApplicationFiled: August 31, 2010Publication date: March 1, 2012Applicant: XEROX CORPORATIONInventors: Nathaniel G. Martin, Paul R. Austin
-
Publication number: 20120051643Abstract: The present invention is a method and system that automates the process of locating and identifying railcars in a rail yard. The method and system create an electronic record of the railcar identification number stenciled on the side of each railcar, using digital camera technology, an optical character recognition (OCR) device, and software applications. The method and system of the present invention eliminate the need for an AEI portable reader.Type: ApplicationFiled: August 25, 2010Publication date: March 1, 2012Applicant: E. I. Systems, Inc.Inventors: Hung Ha, Hoa Ha
-
Publication number: 20120039537Abstract: A method, apparatus, and system for communicating between an apparatus hosting a workflow application and an imaging device, the system including a state engine configured to read and extract data from a first message received from the imaging device, to communicate with an application component, and to advance to a workflow state, a state translator configured to receive the workflow state from the state engine, to convert the workflow state into an imaging device instruction, and to send the imaging device instruction to the imaging device, a state instantiater configured to change a state of a component of the imaging device in accordance with the imaging device instruction, an event responder configured to assemble data in a second message based on the changed state of the component of the imaging device, and an interface configured to send the second message to the apparatus.Type: ApplicationFiled: August 10, 2010Publication date: February 16, 2012Inventor: Gregory C. Keys
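The state engine / state translator pairing described above can be sketched as a table-driven state machine: a message from the imaging device carries an event, the engine advances the workflow state, and the translator maps that state to a device instruction. The states, events, and instruction strings are invented for illustration, not the patent's actual protocol.

```python
TRANSITIONS = {                      # (state, event) -> next workflow state
    ("idle", "scan_done"): "awaiting_upload",
    ("awaiting_upload", "upload_ok"): "idle",
}

INSTRUCTIONS = {                     # state translator: state -> instruction
    "awaiting_upload": "display:Uploading...",
    "idle": "display:Ready",
}

def advance(state, message):
    """State engine step: extract the event from the device message,
    advance the workflow state, and translate it to an instruction."""
    event = message["event"]
    next_state = TRANSITIONS.get((state, event), state)
    return next_state, INSTRUCTIONS[next_state]

state, instr = advance("idle", {"event": "scan_done"})
```

A second call with `{"event": "upload_ok"}` would return the machine to `idle` and instruct the device to display its ready screen.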
-
Patent number: 8116568Abstract: Ruled-line extraction can be performed with high precision by providing a main-scanning ruled-line extraction section that determines whether a target pixel of binary image data of a document image is a black pixel or a white pixel, and counts the number of black pixels connected one after another upstream in the main scanning direction with respect to the target pixel. When the target pixel is a black pixel and the count for the target pixel is not less than a main-scanning run determination threshold value that has been set in advance, the section generates ruled-line image data by correcting, to pixel values corresponding to black pixels, the pixel values of a predetermined number of pixels connected to the target pixel upstream in the main scanning direction.Type: GrantFiled: March 2, 2009Date of Patent: February 14, 2012Assignee: Sharp Kabushiki KaishaInventor: Takahiro Daidoh
-
Publication number: 20120030103Abstract: Improved systems and techniques for submission and verification of redemption codes are disclosed. It will be appreciated that the submission and verification techniques can, among other things, automate the submission and verification of redemption codes for various redeemable instruments, including transaction cards (e.g., gift or prepaid cards) widely used for online transactions (e.g., purchase of media items from an online media store). A redemption code can be determined based on an image of a redeemable instrument. More particularly, an image of a redeemable instrument can be processed using image analysis or one or more image processing techniques to extract a redemption code for the redeemable instrument. By way of example, an image (e.g., a digital picture) of a gift card can be processed by a device to extract an alphanumeric value printed on the gift card. This means that the alphanumeric value need not be entered manually by a person seeking to redeem the redeemable instrument (e.g., gift card).Type: ApplicationFiled: July 27, 2010Publication date: February 2, 2012Inventors: Gregory Hughes, Ted Biskupski, Glenn Epis, Philip J. Luongo, JR.
-
Patent number: 8107730Abstract: An apparatus for use with a single modality imaging system configured to generate uncorrected imaging data of a patient, the single modality imaging system including two gamma cameras and a patient stretcher disposed between the two gamma cameras, the apparatus compensating for the downward deflection at the extended end of the patient stretcher that occurs during stretcher extension. The apparatus includes a single sag sensor for sensing the downward deflection of the patient stretcher, a subtracting device configured to generate a sag correction factor based on a baseline stretcher height and an input received from the sag sensor, and a compensator configured to modify at least a portion of the uncorrected imaging data to compensate for sag using the sag correction factor to generate a unified image.Type: GrantFiled: August 5, 2008Date of Patent: January 31, 2012Assignee: General Electric CompanyInventor: Dov Kariv
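The subtracting device and compensator described above amount to simple arithmetic: the correction factor is the baseline height minus the sensed height, and the compensator shifts affected coordinates by that amount. A minimal sketch with invented units and values:

```python
def sag_correction(baseline_height_mm, sensed_height_mm):
    # Subtracting device: correction factor = baseline minus sensed height.
    return baseline_height_mm - sensed_height_mm

def compensate(y_coords_mm, correction_mm):
    # Compensator: shift affected image coordinates by the sag amount.
    return [y + correction_mm for y in y_coords_mm]

corr = sag_correction(700.0, 694.5)      # stretcher sagged 5.5 mm
fixed = compensate([100.0, 101.0], corr)
```

Only the portion of the imaging data acquired with the stretcher extended would be shifted in a real system; here both sample coordinates are adjusted for brevity.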