On-line Recognition Of Handwritten Characters Patents (Class 382/187)
  • Publication number: 20100310172
    Abstract: A method for text recognition includes generating a number of text hypotheses for an image, for example, using an HMM based approach using fixed-width analysis features. For each text hypothesis, one or more segmentations are generated and scored at the segmental level, for example, according to character or character group segments of the text hypothesis. In some embodiments, multiple alternative segmentations are considered for each text hypothesis. In some examples, scores determined in generating the text hypothesis and the segmental score are combined to select an overall text recognition of the image.
    Type: Application
    Filed: June 3, 2009
    Publication date: December 9, 2010
    Applicant: BBN Technologies Corp.
    Inventors: Premkumar Natarajan, Rohit Prasad, Richard Schwartz, Krishnakumar Subramanian
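    A minimal Python sketch of the score combination described in the abstract above: each text hypothesis carries a score from the HMM decoding pass, each of its candidate segmentations is scored at the segment level, and the recognition with the best weighted combination is selected. The Hypothesis fields and the weight parameter are illustrative assumptions, not terms from the patent.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        text: str
        hmm_score: float      # log-score from the HMM decoding pass
        segmentations: list   # list of alternative segmentations, each a list of per-segment log-scores

    def best_recognition(hypotheses, weight=0.5):
        """Pick the hypothesis whose combined HMM + segmental score is highest."""
        best_text, best_score = None, float("-inf")
        for hyp in hypotheses:
            # Score each alternative segmentation and keep the best one.
            seg_score = max(sum(seg) for seg in hyp.segmentations)
            combined = (1 - weight) * hyp.hmm_score + weight * seg_score
            if combined > best_score:
                best_text, best_score = hyp.text, combined
        return best_text, best_score

    # Example with toy log-scores.
    hyps = [
        Hypothesis("cat", -12.0, [[-2.0, -3.0, -2.5], [-1.8, -3.5, -2.2]]),
        Hypothesis("cot", -11.5, [[-2.1, -4.0, -2.5]]),
    ]
    print(best_recognition(hyps))   # ('cat', -9.75)
    ```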
  • Patent number: 7848574
    Abstract: A method for classifying an input character is disclosed. Character models are used. Each character model is associated with an output character and defines a model specific segmentation scheme for that output character and an associated segment model. The model specific segmentation scheme defines a minimum length corresponding to a number of points in a stroke of the output character and a minimum length threshold. Using each of the character models, the input character is decomposed into segments and the segments are evaluated against the segment model of the respective character model to produce a score indicative of the conformity of the segments with the segment model. The character model that produced the highest score is selected and the input character is classified as the output character associated with the character model that produces the highest score.
    Type: Grant
    Filed: March 30, 2010
    Date of Patent: December 7, 2010
    Assignee: Silverbrook Research Pty Ltd
    Inventor: Jonathon Leigh Napper
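    A hedged sketch of the classification scheme in the abstract above: each character model segments the input stroke according to its own minimum segment length, scores the segments against its segment model, and the input is labelled with the character of the highest-scoring model. The point-count segmentation and mean-point scoring below are stand-ins, not the patent's actual segment model.
    ```python
    import math

    def segment_stroke(points, min_length):
        """Cut a stroke into runs of `min_length` points
        (a stand-in for a model-specific segmentation scheme)."""
        return [points[i:i + min_length] for i in range(0, len(points), min_length)]

    def score_segments(segments, segment_model):
        """Toy segment model: negative distance between each segment's mean point
        and the model's expected mean for that segment position."""
        score = 0.0
        for seg, expected in zip(segments, segment_model):
            mx = sum(x for x, _ in seg) / len(seg)
            my = sum(y for _, y in seg) / len(seg)
            score -= math.hypot(mx - expected[0], my - expected[1])
        return score

    def classify(points, character_models):
        """Every character model segments the input its own way and scores the
        segments; the input is labelled with the best-scoring model's character."""
        best_char, best_score = None, float("-inf")
        for char, (min_length, segment_model) in character_models.items():
            score = score_segments(segment_stroke(points, min_length), segment_model)
            if score > best_score:
                best_char, best_score = char, score
        return best_char

    # Toy models: "i" expects two short stacked segments, "-" one centred segment.
    models = {"i": (2, [(0.5, 0.2), (0.5, 0.8)]), "-": (4, [(0.5, 0.4)])}
    stroke = [(0.5, 0.1), (0.5, 0.3), (0.5, 0.7), (0.5, 0.9)]
    print(classify(stroke, models))   # i
    ```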
  • Patent number: 7848917
    Abstract: Multiple input modalities are selectively used by a user or process to prune a word graph. Pruning initiates rescoring in order to generate a new word graph with a revised best path.
    Type: Grant
    Filed: March 30, 2006
    Date of Patent: December 7, 2010
    Assignee: Microsoft Corporation
    Inventors: Frank Kao-Ping K. Soong, Jian-Lai Zhou, Peng Liu
  • Patent number: 7843591
    Abstract: A digital tracing method and device, comprising a flat acquisition element (3) for digitizing a document (7) and a flat display element (5) for displaying said digitized document mounted on said flat acquisition element (3).
    Type: Grant
    Filed: June 12, 2006
    Date of Patent: November 30, 2010
    Assignee: France Telecom
    Inventor: Joël Gardes
  • Patent number: 7844114
    Abstract: A method and system for implementing character recognition is described herein. An input character is received. The input character is composed of one or more logical structures in a particular layout. The layout of the one or more logical structures is identified. One or more of a plurality of classifiers are selected based on the layout of the one or more logical structures in the input character. The entire character is input into the selected classifiers. The selected classifiers classify the logical structures. The outputs from the selected classifiers are then combined to form an output character vector.
    Type: Grant
    Filed: December 12, 2005
    Date of Patent: November 30, 2010
    Assignee: Microsoft Corporation
    Inventors: Kumar H. Chellapilla, Patrice Y. Simard
  • Patent number: 7839541
    Abstract: This invention provides an image editing method having a selecting step of selecting an edit target area, a cancellation step of canceling a selection of the edit target area selected in the selecting step, and an area selecting step of selecting again the edit target area by indicating the inside of the edit target area.
    Type: Grant
    Filed: June 27, 2005
    Date of Patent: November 23, 2010
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kenzou Sekiguchi
  • Publication number: 20100278430
    Abstract: A method of identifying a string formed from a number of hand-written characters is disclosed. The method starts by determining character probabilities for each hand-written character in the string. Each character probability represents the likelihood of the respective hand-written character being a respective one of a number of predetermined characters. Next, template probabilities for the string are determined. Each template probability represents the likelihood of the string corresponding to a respective one of a number of templates. Each template represents a respective combination of character types. The step of determining the template probabilities for the string includes the sub-steps of determining the number of characters in the string, selecting templates having an identical number of characters, and obtaining a template probability for each selected template.
    Type: Application
    Filed: July 6, 2010
    Publication date: November 4, 2010
    Inventor: Jonathon Leigh Napper
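    A small illustration of the template step described above: per-character class probabilities are combined only with templates whose length matches the string, giving one probability per surviving template. The character classes ('upper', 'digit') and the independence assumption are illustrative simplifications.
    ```python
    def template_probability(char_probs, template):
        """char_probs: one dict per hand-written character, mapping a character
        class ('digit', 'upper', ...) to its probability.
        template: a sequence of character classes, e.g. ('upper', 'digit', 'digit')."""
        p = 1.0
        for probs, char_class in zip(char_probs, template):
            p *= probs.get(char_class, 0.0)
        return p

    def score_templates(char_probs, templates):
        """Keep only templates whose length matches the string, then score each."""
        n = len(char_probs)
        return {t: template_probability(char_probs, t)
                for t in templates if len(t) == n}

    # Toy example: a three-character string that looks like letter-digit-digit.
    char_probs = [
        {"upper": 0.7, "digit": 0.3},
        {"upper": 0.2, "digit": 0.8},
        {"upper": 0.1, "digit": 0.9},
    ]
    templates = [("upper", "digit", "digit"), ("digit", "digit"),
                 ("upper", "upper", "digit")]
    print(score_templates(char_probs, templates))
    ```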
  • Publication number: 20100281350
    Abstract: Various methods for written mathematical expression analysis are provided. One method may include receiving written input where the written input is representative of a mathematical expression. The method may also include analyzing the written input to identify at least one operator and at least one operand and constructing an expression tree based at least in part on predefined symbol relationships, the at least one operator, and the at least one operand. Similar apparatuses and computer program products are also provided.
    Type: Application
    Filed: April 29, 2009
    Publication date: November 4, 2010
    Inventors: Xiaohui Xie, Yanming Zou, Yingfei Liu, Kongqiao Wang
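    The abstract above leaves the tree-construction step abstract; the sketch below shows one conventional way to build an expression tree from already-recognized operators and operands using operator precedence (a shunting-yard pass). The patent's "predefined symbol relationships" also cover two-dimensional layout (fractions, exponents), which this linear sketch does not attempt.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Node:
        symbol: str
        left: "Node" = None
        right: "Node" = None

    PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

    def build_expression_tree(tokens):
        """Shunting-yard pass over recognized tokens, producing a binary tree."""
        operands, operators = [], []

        def reduce_once():
            op = operators.pop()
            right, left = operands.pop(), operands.pop()
            operands.append(Node(op, left, right))

        for tok in tokens:
            if tok in PRECEDENCE:
                while operators and PRECEDENCE[operators[-1]] >= PRECEDENCE[tok]:
                    reduce_once()
                operators.append(tok)
            else:
                operands.append(Node(tok))
        while operators:
            reduce_once()
        return operands[0]

    tree = build_expression_tree(["3", "+", "4", "*", "x"])
    print(tree.symbol, tree.left.symbol, tree.right.symbol)   # + 3 *
    ```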
  • Patent number: 7826665
    Abstract: In a system for updating a contacts database (42, 46), a portable imager (12) acquires a digital business card image (10). An image segmenter (16) extracts text image segments from the digital business card image. An optical character recognizer (OCR) (26) generates one or more textual content candidates for each text image segment. A scoring processor (36) scores each textual content candidate based on results of database queries respective to the textual content candidates. A content selector (38) selects a textual content candidate for each text image segment based at least on the assigned scores. An interface (50) is configured to update the contacts list based on the selected textual content candidates.
    Type: Grant
    Filed: December 12, 2005
    Date of Patent: November 2, 2010
    Assignee: Xerox Corporation
    Inventors: Marco Bressan, Hervé Dejean, Christopher R. Dance
    Inventors: Marco Bressan, Hervé Dejean, Christopher R. Dance
  • Publication number: 20100272362
    Abstract: An image forming apparatus is provided that allows easy and reliable extraction of a hand-written image and reliable execution of a desired process on image data based on the hand-written image. For this purpose, during a mark adding process, a scanner unit and an image processing unit form YMCK data based on an original image; a specific area extracting unit extracts image data of a specific area from the YMCK data; a mark image adding unit combines image data of the specific area with the mark image data to form combined data; and a printer unit outputs a first image based on the combined data. During image processing of the specific area, the scanner unit forms RGB data based on the first image; the mark area extracting unit extracts image data of the mark area from the RGB data; the specific area image processing unit performs prescribed image processing on the image data of the mark area; and the printer unit outputs a second image based on the YMCK data after the prescribed image processing.
    Type: Application
    Filed: April 23, 2010
    Publication date: October 28, 2010
    Inventor: Kazuyuki Ohnishi
  • Publication number: 20100272361
    Abstract: A method for automatically recognizing Arabic text includes digitizing a line of Arabic characters to form a two-dimensional array of pixels each associated with a pixel value, wherein the pixel value is expressed in a binary number, dividing the line of the Arabic characters into a plurality of line images, defining a plurality of cells in one of the plurality of line images, wherein each of the plurality of cells comprises a group of adjacent pixels, serializing pixel values of pixels in each of the plurality of cells in one of the plurality of line images to form a binary cell number, forming a text feature vector according to binary cell numbers obtained from the plurality of cells in one of the plurality of line images, and feeding the text feature vector into a Hidden Markov Model to recognize the line of Arabic characters.
    Type: Application
    Filed: April 27, 2009
    Publication date: October 28, 2010
    Inventors: Mohammad S. Khorsheed, Hussein K. Al-Omari, Khalid M. Alfaifi, Khalid M. Alhazmi
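    A sketch of the cell-serialization step described above, assuming a binarized line image: each fixed-size cell's pixels are read as a binary number, and the cell numbers of one vertical strip form that frame's feature vector (which would then be fed to an HMM). Cell size and scan order are assumptions.
    ```python
    import numpy as np

    def line_to_feature_vectors(line_image, cell_h=4, cell_w=4):
        """line_image: 2-D array of 0/1 pixels for one text line.
        Each (cell_h x cell_w) cell is flattened and read as a binary number;
        the cell numbers of one column strip form that frame's feature vector."""
        h, w = line_image.shape
        frames = []
        for x in range(0, w - cell_w + 1, cell_w):      # one frame per column strip
            frame = []
            for y in range(0, h - cell_h + 1, cell_h):
                cell = line_image[y:y + cell_h, x:x + cell_w].flatten()
                value = int("".join(str(int(p)) for p in cell), 2)
                frame.append(value)
            frames.append(frame)
        return np.array(frames)   # shape: (num_frames, cells_per_column)

    # Toy 8x16 binary "line image".
    rng = np.random.default_rng(0)
    line = (rng.random((8, 16)) > 0.5).astype(int)
    print(line_to_feature_vectors(line).shape)   # (4, 2)
    ```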
  • Patent number: 7821507
    Abstract: A method of enabling a user to initiate an action via a printed substrate, said substrate comprising user information and coded data, said coded data being indicative of a region identity associated with the substrate and of a plurality of locations on the substrate, said method comprising the steps of: receiving, in a computer system and from a sensing device, mode data and interaction data, the sensing device being operable in a plurality of modes and the mode data being indicative of one of said modes, the interaction data being indicative of the region identity and at least one position of the sensing device relative to the substrate, the sensing device generating the interaction data, when operatively positioned or moved relative to the substrate, by reading at least some of the coded data; identifying and retrieving at least part of a page description corresponding to the printed substrate using the region identity; determining a mode of the sensing device using the mode data; identifying an action usin
    Type: Grant
    Filed: February 8, 2007
    Date of Patent: October 26, 2010
    Assignee: Silverbrook Research Pty Ltd
    Inventors: Paul Lapstun, Kia Silverbrook, Michael Hollins, Zhamak Dehghani, Andrew Timothy Robert Newman
  • Patent number: 7817857
    Abstract: Various technologies and techniques are disclosed that improve handwriting recognition operations. Handwritten input is received in training mode and run through several base recognizers to generate several alternate lists. The alternate lists are unioned together into a combined alternate list. If the correct result is in the combined list, each correct/incorrect alternate pair is used to generate training patterns. The weights associated with the alternate pairs are stored. At runtime, the combined alternate list is generated just as at training time. The trained comparator-net can be used to compare any two alternates in the combined list. A template-matching base recognizer is used with one or more neural network base recognizers to improve recognition operations. The system provides comparator-net and reorder-net processes trained on print and cursive data, and ones that have been trained on cursive-only data. The respective comparator-net and reorder-net processes are used accordingly.
    Type: Grant
    Filed: May 31, 2006
    Date of Patent: October 19, 2010
    Assignee: Microsoft Corporation
    Inventors: Qi Zhang, Ahmad A. Abdulkader, Michael T. Black
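    A rough sketch of the runtime flow described above: the alternate lists from several base recognizers are unioned, then ordered with a pairwise comparison function standing in for the trained comparator-net. The comparator shown (shorter-candidate-first) is purely illustrative.
    ```python
    from functools import cmp_to_key

    def combined_alternates(ink, base_recognizers):
        """Union the alternate lists produced by all base recognizers."""
        alternates = []
        for recognize in base_recognizers:
            for alt in recognize(ink):
                if alt not in alternates:
                    alternates.append(alt)
        return alternates

    def rank_alternates(alternates, compare):
        """`compare(a, b)` stands in for the trained comparator-net:
        it returns a negative number if `a` should rank above `b`."""
        return sorted(alternates, key=cmp_to_key(compare))

    # Toy base recognizers and a comparator preferring shorter candidates.
    rec1 = lambda ink: ["cat", "cot"]
    rec2 = lambda ink: ["cart", "cat"]
    combined = combined_alternates("ink-strokes", [rec1, rec2])
    print(rank_alternates(combined, lambda a, b: len(a) - len(b)))
    ```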
  • Patent number: 7817858
    Abstract: It is shown how to select and insert a non-textual symbol, e.g. a smiley, into an application such as a chat application in a communication terminal. A smiley insertion area in the form of a handwriting input area is displayed under the control of a user interface application. After recording that a stylus, or similar device, has been used in drawing on the touch-sensitive display, the drawing is matched in an interpretation process against a pattern library consisting of smileys and other non-textual symbols. After a successful match, the smiley symbol is appended to the text that is being input.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: October 19, 2010
    Assignee: Nokia Corporation
    Inventor: Roope Rainisto
  • Publication number: 20100246964
    Abstract: Recognizing handwritten words at an electronic device. A plurality of strokes is received at a common input region of an electronic device. The plurality of strokes in combination defines a word comprising a plurality of symbols, a relative geometry of a first subset of the plurality of strokes defines a first symbol and a relative geometry of a second subset of the plurality of strokes defines a second symbol such that the relative geometry of the first subset of the plurality of strokes is not related to the relative geometry of the second subset of the plurality of strokes, and at least one stroke of the first subset of the plurality of strokes is spatially superimposed over at least one stroke of the second subset of the plurality of strokes.
    Type: Application
    Filed: March 30, 2009
    Publication date: September 30, 2010
    Inventors: Nada P. Matic, Yi-Hsun E. Cheng
  • Publication number: 20100246965
    Abstract: In one example, video may be analyzed and divided into segments. Character recognition may be performed on the segments to determine what text appears in the segments. The text may be used to assign tags to the video and/or to the segments. Segments that appear visually similar to each other (e.g., segments that appear to be different views of the same person) may be grouped together, and a tag that is assigned to one segment may be propagated to another segment. The tags may be used to perform various types of tasks with respect to the video. One example of such a task is to perform a search on the video.
    Type: Application
    Filed: March 31, 2009
    Publication date: September 30, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: Boris Epshtein, Eyal Ofek
  • Publication number: 20100245266
    Abstract: A handwriting processing apparatus includes an acquiring unit configured to acquire coordinate information of handwriting input by an input unit and attribute information, the attribute information indicating a type of input of the handwriting; a determining unit configured to determine a kind of the handwriting using the attribute information; a handwriting processing unit configured to perform handwriting processing corresponding to the kind of the handwriting using the coordinate information; and a display control unit configured to control a display unit to display a result of the handwriting processing.
    Type: Application
    Filed: September 15, 2009
    Publication date: September 30, 2010
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yojiro Tonouchi, Ryuzo Okada, Mieko Asano, Hiroshi Hattori, Tsukasa Ike, Akihito Seki, Hidetaka Ohira
  • Publication number: 20100232700
    Abstract: An apparatus includes a reading unit configured to read image data, a recognition unit configured to recognize a region designated by a handwritten portion and processing associated with a color of the handwritten portion in the image data, and a display unit configured to display a preview by superimposing a recognized result and the region to be processed in the image data and displaying a content of the recognized processing on the displayed preview.
    Type: Application
    Filed: March 8, 2010
    Publication date: September 16, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Yoichi Kashibuchi, Naoki Ito
  • Patent number: 7797152
    Abstract: The present invention is a method of database searching. First, a language is selected and elements received. The system is searched to identify a unit number associated with each element, which is linked to a data unit containing morphological variants of the element. If none are identified, the element is broken into sub-textual units that may contain a prefix, compound-prefix, and/or suffix along with a primary element. A unit number is then obtained for the primary element. If this does not result in a match, the elements may be saved in a database for further linguistic development. A unit number associated with each matched element is then chosen, and the elements contained in the data units linked to the unit numbers are compared to a database index. If an element is associated with multiple unit numbers, this process is repeated until all data units have been compared to the database.
    Type: Grant
    Filed: February 17, 2006
    Date of Patent: September 14, 2010
    Assignee: The United States of America as represented by the Director, National Security Agency
    Inventors: David P. Waite, Richard O. Wyckoff
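    A toy sketch of the affix-stripping lookup described above: an element is looked up directly, and if unmatched it is decomposed into a possible prefix, suffix and primary element before retrying. The prefix/suffix lists and the flat dictionary are illustrative assumptions, not the patent's unit-number database.
    ```python
    PREFIXES = ["re", "un"]
    SUFFIXES = ["ing", "ed", "s"]

    def lookup_unit(element, unit_index):
        """Return the unit number for an element, stripping affixes if needed."""
        if element in unit_index:
            return unit_index[element]
        for prefix in PREFIXES:
            for suffix in SUFFIXES:
                stem = element
                if stem.startswith(prefix):
                    stem = stem[len(prefix):]
                if stem.endswith(suffix):
                    stem = stem[:-len(suffix)]
                if stem in unit_index:
                    return unit_index[stem]
        return None   # unmatched: candidate for further linguistic development

    unit_index = {"play": 17, "work": 42}
    print(lookup_unit("replaying", unit_index))   # 17
    print(lookup_unit("frobnicate", unit_index))  # None
    ```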
  • Patent number: 7796818
    Abstract: Execution commands corresponding to the type of gesture are stored, and by acquiring coordinate values of accepted handwriting input, including gestures, on a display screen, the handwriting is displayed. When handwriting input is accepted, display data of the handwriting is updated. When handwriting input is not completed, it is decided whether or not the handwriting input is gesture input. When it is decided that the handwriting input is gesture input, the type of gesture is decided on the basis of the coordinate values of the gesture on the display screen, an execution command corresponding to the type of gesture is read, and the fact that the handwriting input is gesture input is displayed. When gesture input is completed, the execution command corresponding to the gesture input is executed. Therefore, the user can make handwriting input according to the user's intent without performing mode switching.
    Type: Grant
    Filed: November 13, 2006
    Date of Patent: September 14, 2010
    Assignee: Fujitsu Limited
    Inventors: Naomi Iwayama, Katsuhiko Akiyama, Kenji Nakajima
  • Publication number: 20100225598
    Abstract: The present invention discloses a multi-touch and handwriting-recognition resistive touchscreen, which comprises a touch layer, a spacer layer, a sensing layer and a controller. The touch layer and the sensing layer are separated by the spacer layer and respectively have a plurality of strip-like touch loops and a plurality of strip-like sensing loops. The touch layer is superimposed on the sensing layer with the touch loops oriented vertically to the sensing loops. The controller respectively connects to and supplies voltages to the two terminals of each touch loop and the two terminals of each sensing loop to enable a digital-mode driving and an analog-mode driving in different time intervals. The controller can integrate the multi-touch function and the handwriting-recognition function. Alternatively, a switch is used to switch the controller to operate in the digital mode enabling the multi-touch function or the analog mode enabling the handwriting-recognition function.
    Type: Application
    Filed: March 5, 2009
    Publication date: September 9, 2010
    Inventor: Jia-You Shen
  • Patent number: 7792369
    Abstract: A form processing apparatus extracts layout information and character information from a form document. A candidate extracting unit extracts word candidates from the character information. A frequency digitizing unit calculates the emission probability of a word candidate from each element. A relation digitizing unit calculates the transition probability that a relationship between word candidates is established. An evaluating unit calculates an evaluation value indicative of a probability of appearance of word candidates in respective logical elements. A determining unit determines the element and a word candidate thereof as the element and a character string thereof in the form document, based on the evaluation value.
    Type: Grant
    Filed: November 15, 2006
    Date of Patent: September 7, 2010
    Assignee: Fujitsu Limited
    Inventors: Akihiro Minagawa, Hiroaki Takebe, Katsuhito Fujimoto
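    A brute-force sketch of the evaluation step described above: word candidates per logical element are scored by emission probabilities, adjacent candidates by transition probabilities, and the assignment with the highest evaluation value is kept. The probability tables and the exhaustive search are illustrative simplifications.
    ```python
    from itertools import product

    def evaluate_assignment(candidates, emission, transition):
        """candidates: list of word-candidate lists, one list per logical element.
        emission[(element_index, word)]: P(word | element).
        transition[(word_a, word_b)]: P(relationship between adjacent candidates).
        Returns the best word sequence and its evaluation value."""
        best_seq, best_value = None, 0.0
        for seq in product(*candidates):
            value = 1.0
            for i, word in enumerate(seq):
                value *= emission.get((i, word), 0.0)
            for a, b in zip(seq, seq[1:]):
                value *= transition.get((a, b), 1.0)
            if value > best_value:
                best_seq, best_value = seq, value
        return best_seq, best_value

    # Toy form with two logical elements: "name" (index 0) and "date" (index 1).
    candidates = [["Acme Corp", "Acne Corp"], ["2010-06-01", "2O1O-06-01"]]
    emission = {(0, "Acme Corp"): 0.8, (0, "Acne Corp"): 0.2,
                (1, "2010-06-01"): 0.9, (1, "2O1O-06-01"): 0.1}
    transition = {("Acme Corp", "2010-06-01"): 1.0}
    print(evaluate_assignment(candidates, emission, transition))
    ```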
  • Patent number: 7792363
    Abstract: A system for presenting text found on an object. The system comprises an object manipulation subsystem configured to position the substantially planar object for imaging; an imaging module configured to capture an image of the substantially planar object; a text capture module configured to capture text from the image of the substantially planar object; an Optical Character Recognition (“OCR”) component configured to convert the text to a digital text; a material context component configured to associate a media type with the text found on the substantially planar object; and an output module configured to convert the digital text to an output format, wherein the system is configured to organize the digital text according to the media type before converting the digital text to an output format.
    Type: Grant
    Filed: March 28, 2007
    Date of Patent: September 7, 2010
    Inventor: Benjamin Perkins Foss
  • Patent number: 7783109
    Abstract: A system for interactive note-taking is provided having a receiver for receiving interaction data from a note-taking device used to interact with a note-taking form having note-taking information and a plurality of coded tags printed thereon, and a processor for recording or retrieving the note-taking by identifying, from the received interaction data, at least one parameter relating to the note-taking. Each tag encodes data on an identity of the form and a location of that tag on the form. The note-taking device senses the tags and generates the interaction data with data on the sensed form identity and a position of the note-taking device relative to the sensed tags.
    Type: Grant
    Filed: September 15, 2008
    Date of Patent: August 24, 2010
    Assignee: Silverbrook Research Pty Ltd
    Inventors: Paul Lapstun, Kia Silverbrook, Jacqueline Anne Lapstun
  • Publication number: 20100189316
    Abstract: A method for fingerprint recognition comprises converting fingerprint specimens into electronic images; converting the electronic images into mathematical graphs that include a vertex and an edge; detecting similarities between a plurality of graphs; aligning vertices and edges of similar graphs; and comparing similar graphs.
    Type: Application
    Filed: November 3, 2009
    Publication date: July 29, 2010
    Applicant: GANNON TECHNOLOGIES GROUP, LLC
    Inventor: Mark A. Walch
  • Patent number: 7764837
    Abstract: A technique for continuously recognizing a character entry into a computer. In one example embodiment, this is achieved by drawing a character in a continuous stroke order using a stylus [210] on a touch screen [220]. The associated data of the drawn continuous stroke order is then input into a continuous character recognizer [930] via the touch screen [220]. One or more hypothesis candidate characters including the continuous stroke order are then produced by the continuous character recognizer [930] upon a partial recognition. The produced one or more hypothesis candidate characters are then displayed substantially over the drawn continuous stroke order on a display device [230].
    Type: Grant
    Filed: September 1, 2004
    Date of Patent: July 27, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Shekhar Ramachandra Borgaonkar, Sriganesh Madhvanath, Prashanth Pandit
  • Publication number: 20100184483
    Abstract: A handheld electronic device including a data input module, a database and a display unit is provided. The data input module includes an image fetching unit and a pattern recognition unit. The image fetching unit is for fetching a pattern of an external object to produce image data. The pattern recognition unit is for transferring the image data to input data. The database is for storing several storage data items. Each storage data item has subject information. When the input data is related to one of the storage data items, the display unit is for displaying navigation information generated according to the subject information of the related storage data item.
    Type: Application
    Filed: December 29, 2009
    Publication date: July 22, 2010
    Applicant: Inventec Appliances Corp.
    Inventors: Yi-Min Kao, Qi Liu
  • Patent number: 7760915
    Abstract: The invention provides a method, system, and program product for encrypting information. In one embodiment, the invention includes prompting a user for a password associated with a digital signature certificate stored in a digital pen, capturing a handwritten password made using the digital pen, displaying to the user the captured password, and encrypting information entered using the digital pen using the captured password. In some embodiments, the password may be captured from a predefined field on a digital page.
    Type: Grant
    Filed: October 9, 2006
    Date of Patent: July 20, 2010
    Assignee: International Business Machines Corporation
    Inventors: Kulvir S. Bhogal, Gregory J. Boss, Rick A. Hamilton, II, Alexandre Polozoff
  • Patent number: 7760946
    Abstract: Various technologies and techniques are disclosed that generate a teaching data set for use by a handwriting recognizer. Ink input is received from various ink sources, such as implicit field data, scripted untruthed ink, scripted truth ink, and/or ink from at least one other language in the same script as the target language for the recognizer. The ink input is used with various machine learning methods and/or other algorithmic methods to generate a teaching ink data set. Examples of the various machine learning methods include a character and/or word n-gram distribution leveling method, an allograph method, a subject diversity method, and a print and cursive data selection method. The teaching ink data is used by a handwriting trainer to produce the handwriting recognizer for the target language.
    Type: Grant
    Filed: June 16, 2006
    Date of Patent: July 20, 2010
    Assignee: Microsoft Corporation
    Inventors: Erik M. Geidl, James A. Pittman
  • Patent number: 7756335
    Abstract: A method for determining at least one recognition candidate for a handwritten pattern comprises selecting possible segmentation points in the handwritten pattern for use in segmenting and recognizing the handwritten pattern. The method further may comprise comparing segments of the handwritten pattern to templates. The comparison may return segment candidates forming possible recognition results of the segments of the handwritten pattern. The method further comprises forming a representation of sequences of segment candidates, said representation comprising data blocks corresponding to segmentation points, wherein a data block comprises references to data blocks corresponding to subsequent segmentation points. The reference may comprise information of segment candidates.
    Type: Grant
    Filed: February 28, 2006
    Date of Patent: July 13, 2010
    Assignee: Zi Decuma AB
    Inventor: Jakob Sternby
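    A compact sketch of the representation described above: one data block per segmentation point, each holding references to later segmentation points together with the segment candidates that span them; walking the blocks enumerates candidate recognitions. The dictionary layout and additive scores are assumptions for illustration.
    ```python
    from collections import defaultdict

    def build_lattice(segment_candidates):
        """segment_candidates: iterable of (start_point, end_point, label, score).
        Returns {start_point: [(end_point, label, score), ...]} -- one data block
        per segmentation point, referencing the blocks of later points."""
        lattice = defaultdict(list)
        for start, end, label, score in segment_candidates:
            lattice[start].append((end, label, score))
        return lattice

    def enumerate_paths(lattice, point, last_point):
        """Walk the blocks from `point` to `last_point`, yielding (text, score)."""
        if point == last_point:
            yield "", 0.0
            return
        for end, label, score in lattice.get(point, []):
            for text, rest in enumerate_paths(lattice, end, last_point):
                yield label + text, score + rest

    # Toy pattern with segmentation points 0..3.
    candidates = [(0, 1, "l", -1.0), (0, 2, "h", -1.2), (1, 2, "i", -0.5),
                  (2, 3, "i", -0.4)]
    lattice = build_lattice(candidates)
    print(sorted(enumerate_paths(lattice, 0, 3), key=lambda p: -p[1]))
    # [('hi', -1.6), ('lii', -1.9)]
    ```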
  • Patent number: 7756337
    Abstract: A method, computer program product, and a data processing system for performing handwriting recognition of a language having character stroke order rules. A stroke parameter set describing attributes of a handwritten stroke is calculated, and a user input indicates a stroke order knowledge. A reference character dictionary includes a record having a plurality of reference parameter sets each defining attributes of reference character strokes. A stroke sequence number of the stroke parameter set is identified and at least one of the reference parameter sets are excluded from a comparison with the stroke parameter set based on the stroke sequence number.
    Type: Grant
    Filed: January 14, 2004
    Date of Patent: July 13, 2010
    Assignee: International Business Machines Corporation
    Inventors: Yen-Fu Chen, John W. Dunsmoir
  • Publication number: 20100166314
    Abstract: Methods and apparatuses for generating, by a computing device configured to interpret a handwritten expression, a symbol graph to represent strokes associated with the handwritten expression, are described herein. The symbol graph may include nodes, each node corresponding to a combination of a stroke and a candidate symbol for that stroke. The computing device may also generate a segment graph based on the symbol graph by combining nodes associated with a same stroke if strokes of their preceding nodes are the same. Also the computing device may perform a structure analysis on at least a subset of segment sequences represented by the segment graph to determine hypotheses for the handwritten expression. In other embodiments, rather than generate a segment graph, the computing device may determine segment sequences by selecting a number of symbol sequences from the symbol graph and combining symbol sequences having the same segmentation.
    Type: Application
    Filed: December 30, 2008
    Publication date: July 1, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: Yu Shi, Frank Kao-Ping Soong
  • Publication number: 20100166312
    Abstract: System for implementing user handwriting according to the present invention, comprising: a handwriting input module (120) for receiving user handwriting including at least 100 to 200 characters by a user with sample sentences; a feature determining module (150); a distance determining module (160) for determining a vertical distance between an uppermost point mark and a lowermost point mark between 2 characters and their segments and a horizontal distance between a leftmost point mark and a rightmost point mark between 2 characters and their segments; a position determining module (170) for determining positions of the uppermost and lowermost point marks and the leftmost and rightmost point marks between 2 characters and their segments; a handwriting combining module (180) for combining several handwritings based on data recognized by the feature determining module (150), the distance determining module (160) and the position determining module (170); and a handwriting output module (200) for outputting handwriting
    Type: Application
    Filed: August 16, 2007
    Publication date: July 1, 2010
    Inventor: Kyung-Ho Jang
  • Publication number: 20100166313
    Abstract: An information processing apparatus includes a gesture locus data recognition unit configured to execute processing for recognizing gesture locus data included in locus data according to characteristic data of the locus data and gesture characteristic shape data included in gesture dictionary data and output a result of the processing, a separation unit configured to separate gesture locus data and locus data other than the gesture locus data from the locus data according to the result of the recognition by the gesture locus data recognition unit, and a character locus data recognition unit configured to execute processing for recognizing locus data of a character included in the locus data other than the gesture locus data according to the characteristic data of the locus data other than the gesture locus data which is separated by the separation unit, and the locus characteristic data of a character included in a character dictionary data, and output a result of the processing.
    Type: Application
    Filed: December 18, 2009
    Publication date: July 1, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Tsunekazu Arai
  • Patent number: 7738732
    Abstract: This invention provides an image composition apparatus, and a control method and program for an image processing apparatus, that enable a user who wants to combine a photo image and a handwritten image to subsequently resume the image composition work, even if the image processing apparatus is used for other purposes or its power is turned off while the user is creating the handwritten image.
    Type: Grant
    Filed: June 13, 2006
    Date of Patent: June 15, 2010
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masao Maeda
  • Patent number: 7729542
    Abstract: A new unistroke text entry method for handheld or wearable devices is designed to provide high accuracy and stability of motion. The user makes characters by traversing the edges and diagonals of a geometric pattern, e.g. a square, imposed over the usual text input area. Gesture recognition is accomplished not through pattern recognition but through the sequence of corners that are hit. This means that the full stroke path is unimportant and the recognition is highly deterministic, enabling better accuracy than other gestural alphabets. This input technique works well using a template with a square hole placed over a touch-sensitive surface, such as on a Personal Digital Assistant (PDA), and with a square boundary surrounding a joystick, which might be used on a cell-phone or game controller. Another feature of the input technique is that capital letters are made by ending the stroke in a particular corner, rather than through a mode change as in other gestural input techniques.
    Type: Grant
    Filed: March 29, 2004
    Date of Patent: June 1, 2010
    Assignee: Carnegie Mellon University
    Inventors: Jacob O. Wobbrock, Brad A. Myers
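    A minimal sketch of the corner-sequence idea described above: each sample point is snapped to the nearest corner of the square, repeats are collapsed, and the resulting corner sequence is looked up in a table, so the exact stroke path is irrelevant. The corner labels and the tiny alphabet shown are illustrative, not the actual gesture set.
    ```python
    CORNERS = {"TL": (0.0, 0.0), "TR": (1.0, 0.0), "BL": (0.0, 1.0), "BR": (1.0, 1.0)}

    # Illustrative corner-sequence alphabet (not the patent's actual mapping).
    ALPHABET = {
        ("TL", "BL", "BR"): "l",
        ("TL", "TR", "BL", "BR"): "z",
        ("BL", "TL", "TR", "BR"): "n",
    }

    def nearest_corner(x, y):
        return min(CORNERS, key=lambda c: (CORNERS[c][0] - x) ** 2 + (CORNERS[c][1] - y) ** 2)

    def recognize(stroke):
        """Map each point of the stroke to its nearest corner, collapse repeats,
        and look the corner sequence up; the full stroke path itself is ignored."""
        seq = []
        for x, y in stroke:
            corner = nearest_corner(x, y)
            if not seq or seq[-1] != corner:
                seq.append(corner)
        return ALPHABET.get(tuple(seq), "?")

    # A stroke running down the left edge and then across the bottom -> 'l'.
    print(recognize([(0.1, 0.1), (0.1, 0.6), (0.1, 0.9), (0.6, 0.9), (0.9, 0.9)]))
    ```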
  • Patent number: 7729541
    Abstract: A method is provided for converting a two-dimensional image or bitmap of a handwritten manuscript into three-dimensional data. The three-dimensional data can be used to automatically recognize features of the manuscript, such as characters or words. The method includes the steps of: converting the two-dimensional image into three-dimensional volumetric data; filtering the three-dimensional volumetric data; and processing the filtered three-dimensional volumetric data to resolve features of the two-dimensional image. The method can be used, for example, to differentiate between ascenders, descenders, loops, curls, and endpoints that define the overall letter forms in handwritten text, manuscripts or signatures.
    Type: Grant
    Filed: March 16, 2005
    Date of Patent: June 1, 2010
    Assignee: Arizona Board of Regents, A Body Corporate, Acting for and on Behalf of Arizona State University
    Inventors: Anshuman Razdan, John Femiani
  • Patent number: 7729538
    Abstract: The present invention leverages spatial relationships to provide a systematic means to recognize text and/or graphics. This allows augmentation of a sketched shape with its symbolic meaning, enabling numerous features including smart editing, beautification, and interactive simulation of visual languages. The spatial recognition method obtains a search-based optimization over a large space of possible groupings from simultaneously grouped and recognized sketched shapes. The optimization utilizes a classifier that assigns a class label to a collection of strokes. The overall grouping optimization assumes the properties of the classifier so that if the classifier is scale and rotation invariant the optimization will be as well. Instances of the present invention employ a variant of AdaBoost to facilitate in recognizing/classifying symbols. Instances of the present invention employ dynamic programming and/or A-star search to perform optimization.
    Type: Grant
    Filed: August 26, 2004
    Date of Patent: June 1, 2010
    Assignee: Microsoft Corporation
    Inventors: Michael Shilman, Paul A. Viola, Kumar H. Chellapilla
  • Patent number: 7720286
    Abstract: A system is provided that includes a pen-enabled computing arrangement having a capture interface and at least one processing element. The capture interface can capture an electronic input defining a stroke through a plurality of concatenated regions. In addition, the handwriting capture interface can also optionally capture an electronic handwriting input based upon a position of the writing stylus with reference to a position-determining pattern. Each of the concatenated regions corresponds to a region of an identification pattern including a plurality of regions that are each associated with a character of an identifier associated with an object. The stroke includes a plurality of portions referenced to respective regions of the identification pattern such that the processing element can determine the identifier based upon the respective regions of the identification pattern, and associate the electronic input with the object associated with the identifier.
    Type: Grant
    Filed: May 25, 2005
    Date of Patent: May 18, 2010
    Assignees: Advanced Digital Systems, Inc., Cardinal Brands, Inc.
    Inventor: Gregory James Clary
  • Patent number: 7720318
    Abstract: A computer-implemented method of font identification includes receiving a first document, the first document including the first text set in a proportional font. Test text, corresponding to the first text of the first document, is received. The test text is set in a test font. A first fingerprint is generated, based on relative line widths of the first text of the first document. A second fingerprint is generated based on relative line widths of the test text, as set in the test font. The test font is then accepted as being consistent with a font of the first text, based on a predetermined strength of relationship between the first and second fingerprints.
    Type: Grant
    Filed: September 9, 2005
    Date of Patent: May 18, 2010
    Assignee: Adobe Systems Incorporated
    Inventor: Thomas Phinney
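    A simplified sketch of the fingerprint comparison described above: per-line widths of the same text are normalized into relative widths for the document and for the test font, and the test font is accepted when the two fingerprints agree within a tolerance. The tolerance test is an assumed stand-in for the patent's strength-of-relationship measure.
    ```python
    def fingerprint(line_widths):
        """Normalize absolute line widths into relative widths (scale-free)."""
        total = sum(line_widths)
        return [w / total for w in line_widths]

    def fonts_consistent(doc_widths, test_widths, threshold=0.02):
        """Accept the test font if every relative line width is within
        `threshold` of the document's fingerprint (an assumed criterion)."""
        fp_doc, fp_test = fingerprint(doc_widths), fingerprint(test_widths)
        return all(abs(a - b) < threshold for a, b in zip(fp_doc, fp_test))

    # Line widths (arbitrary units) of the same text, wrapped identically.
    document_text_widths = [412.0, 398.0, 405.0, 210.0]
    test_font_widths     = [418.0, 404.0, 410.0, 214.0]   # close in relative terms
    other_font_widths    = [455.0, 350.0, 430.0, 260.0]
    print(fonts_consistent(document_text_widths, test_font_widths))    # True
    print(fonts_consistent(document_text_widths, other_font_widths))   # False
    ```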
  • Patent number: 7715630
    Abstract: The present invention relates to interfacing with electronic ink. Ink is stored in a data structure that permits later retrieval by applications. The ink includes stroke information and may include property information. Through various programming interfaces, one may interact with the ink through methods and setting or retrieving properties. Other objects and collections may be used as well in conjunction with the ink objects.
    Type: Grant
    Filed: December 16, 2005
    Date of Patent: May 11, 2010
    Assignee: Microsoft Corporation
    Inventors: Alexander Gounares, Steve Dodge, Timothy H. Kannapel, Rudolph Balaz, Subha Bhattacharyay, Manoj K. Biswas, Robert L. Chambers, Bodin Dresevic, Stephen A. Fisher, Arin J. Goldberg, Gregory Hullender, Brigette E. Krantz, Todd A. Torset, Jerome J. Turner, Andrew Silverman, Shiraz M. Somji
  • Patent number: 7716579
    Abstract: A method, system and computer-readable media for supporting text entry on a personal computing device by activating automated searching to search for completion candidates which are based on a partial text entry received from a user. The completion candidates are displayed in a search list. The user may select a completion candidate from among the completion candidates in the search list to correspondingly modify the partial text entry, or the user may decline all of the completion candidates displayed in the search list and terminate the automated searching. The system may further provide a digital keyboard for use in entering text.
    Type: Grant
    Filed: May 19, 2005
    Date of Patent: May 11, 2010
    Assignee: 602531 British Columbia Ltd.
    Inventors: Harold David Gunn, John Chapman
  • Patent number: 7715629
    Abstract: Techniques for processing handwriting input based upon a user's writing style. Some techniques employ the style in which the user writes a single character, while other techniques alternately or additionally employ a group of allographs that form a handwriting style. Some implementations of these techniques, such as those implemented in writing style analysis tool, analyze one or more characters written by a user to identify a community, such as a geographic region or cultural group, to which the user's handwriting style belongs. Other implementations analyze one or more characters of a user's handwriting in order to alternately or additionally categorize the user's handwriting into a particular handwriting style. The writing style analysis tool may then provide the user with a handwriting recognition application specifically configured for that user's personal handwriting style.
    Type: Grant
    Filed: August 29, 2005
    Date of Patent: May 11, 2010
    Assignee: Microsoft Corporation
    Inventor: Ahmad A. Abdulkader
  • Publication number: 20100115404
    Abstract: The present invention discloses a method to automatically arrange a word string in a display. The word string includes two words. The method includes the following steps. The first step is to define the start and stop positions of the strokes of the two words. The second step is to group the start and stop positions based on a threshold value. The third step is to circle the words based on the grouping result. The final step is to rearrange the two words based on a datum point.
    Type: Application
    Filed: April 2, 2009
    Publication date: May 6, 2010
    Applicant: AVerMedia Information, Inc.
    Inventors: Shi-Mu Sun, Christopher Yen, Yun-Hui Liang, Mei-Jen Kuo
  • Patent number: 7711192
    Abstract: A system, method and computer program product for identifying spam in an image using grey scale representation of an image, including identifying a plurality of contours in the image, the contours corresponding to probable symbols (letters, numbers, punctuation signs, etc.); ignoring contours that are too small or too large given the specified limits; identifying text lines in the image, based on the remaining contours; parsing the text lines into words; ignoring words that are too short or too long, from the identified text lines; ignoring text lines that are too short; verifying that the image contains text by comparing a number of pixels of a symbol color within remaining contours to a total number of pixels of the symbol color in the image; and if the image contains a text, rendering a spam/no spam verdict based on comparing a signature of the remaining text against a SPAM template.
    Type: Grant
    Filed: July 6, 2009
    Date of Patent: May 4, 2010
    Assignee: Kaspersky Lab, ZAO
    Inventor: Evgeny P. Smirnov
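    A schematic sketch of the filtering and verification steps shared by this patent and the two related spam-detection patents listed below: contours outside size limits are dropped, and the image is treated as containing text only if most symbol-colored pixels fall inside the surviving contours. All thresholds are assumptions.
    ```python
    def filter_contours(contours, min_area=20, max_area=5000):
        """Each contour: dict with 'area' and 'pixels' (symbol-colored pixel count)."""
        return [c for c in contours if min_area <= c["area"] <= max_area]

    def looks_like_text(contours, total_symbol_pixels, min_ratio=0.6):
        """The image 'contains text' if most symbol-colored pixels fall inside
        the surviving contours (ratio threshold is an assumption)."""
        inside = sum(c["pixels"] for c in contours)
        return total_symbol_pixels > 0 and inside / total_symbol_pixels >= min_ratio

    # Toy contours: two plausible letters, one speck, one huge blob.
    contours = [{"area": 120, "pixels": 80}, {"area": 150, "pixels": 95},
                {"area": 4, "pixels": 3},    {"area": 9000, "pixels": 5200}]
    kept = filter_contours(contours)
    print(looks_like_text(kept, total_symbol_pixels=240))   # 175/240 -> True
    ```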
  • Publication number: 20100104189
    Abstract: A method of identifying at least one handwritten character composed of at least one stroke is disclosed. The method comprises providing a database comprising a plurality of sequences of strokes, the strokes of each sequence defining at least one character, at least some of said sequences comprising a plurality of strokes; capturing a string of handwritten characters, said string comprising the at least one handwritten character; and matching at least a part of the string with a sequence from said plurality of sequences. This method enables the recognition of multi-stroke characters where the positions of the strokes relative to each other are unknown or at least unreliable. A computer program product implementing this method and an electronic device comprising this computer program product are also disclosed.
    Type: Application
    Filed: December 10, 2008
    Publication date: April 29, 2010
    Inventors: Bharath Aravamudhan, Sriganesh Madhvanath
  • Patent number: 7706615
    Abstract: In an information processing method for recognizing a handwritten figure or character, with use of a speech input in combination, in order to increase the recognition accuracy, a given target is subjected to figure recognition and a first candidate figure list is obtained. Input speech information is phonetically recognized and a second candidate figure list is obtained. On the basis of the figure candidates obtained by the figure recognition and the figure candidates obtained by the speech recognition, a most likely figure is selected.
    Type: Grant
    Filed: August 4, 2006
    Date of Patent: April 27, 2010
    Assignee: Canon Kabushiki Kaisha
    Inventors: Makoto Hirota, Toshiaki Fukada, Yasuhiro Komori
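    A tiny sketch of the candidate-fusion step described above: figure recognition and speech recognition each yield a scored candidate list, and the figure maximizing a weighted combination of the two scores is selected. The weighting scheme is an assumption.
    ```python
    def fuse_candidates(figure_candidates, speech_candidates, alpha=0.5):
        """Each argument maps a figure label to a confidence in [0, 1].
        The most likely figure maximizes a weighted combination of both scores."""
        labels = set(figure_candidates) | set(speech_candidates)
        def combined(label):
            return (alpha * figure_candidates.get(label, 0.0)
                    + (1 - alpha) * speech_candidates.get(label, 0.0))
        return max(labels, key=combined)

    # Shape recognition hesitates between circle and ellipse; speech says "circle".
    figure_scores = {"circle": 0.55, "ellipse": 0.45}
    speech_scores = {"circle": 0.90, "triangle": 0.10}
    print(fuse_candidates(figure_scores, speech_scores))   # circle
    ```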
  • Patent number: 7706613
    Abstract: A system, method and computer program product for identifying spam in an image, including (a) identifying a plurality of contours in the image, the contours corresponding to probable symbols; (b) ignoring contours that are too small or too large; (c) identifying text lines in the image, based on the remaining contours; (d) parsing the text lines into words; (e) ignoring words that are too short or too long from the identified text lines; (f) ignoring text lines that are too short; (g) verifying that the image contains text by comparing a number of pixels of a symbol color within remaining contours to a total number of pixels of the symbol color in the image, and that there is at least one text line after filtration; and (h) if the image contains text, rendering a spam/no spam verdict based on a contour representation of the text that appears after step (f).
    Type: Grant
    Filed: August 23, 2007
    Date of Patent: April 27, 2010
    Assignee: Kaspersky Lab, ZAO
    Inventor: Evgeny P. Smirnov
  • Patent number: 7706616
    Abstract: A word pattern recognition system based on a virtual keyboard layout combines handwriting recognition with a virtual, graphical, or on-screen keyboard to provide a text input method with relative ease of use. The system allows the user to input text quickly with little or no visual attention from the user. The system supports a very large vocabulary of gesture templates in a lexicon, including practically all words needed for a particular user. In addition, the system utilizes various techniques and methods to achieve reliable recognition of a very large gesture vocabulary. Further, the system provides feedback and display methods to help the user effectively use and learn shorthand gestures for words. Word patterns are recognized independent of gesture scale and location. The present system uses language rules to recognize and connect suffixes with a preceding word, allowing users to break complex words into easily remembered segments.
    Type: Grant
    Filed: February 27, 2004
    Date of Patent: April 27, 2010
    Assignee: International Business Machines Corporation
    Inventors: Per-Ola Kristensson, Jingtao Wang, Shumin Zhai
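    A rough sketch of scale- and location-independent matching of a word gesture against keyboard-path templates, the core mechanism named in the abstract above: the gesture is resampled, translated to its centroid and rescaled, then compared to each template path. The resampling, normalization and toy templates are illustrative, not the patent's recognition pipeline.
    ```python
    import math

    def normalize(points, samples=16):
        """Resample to a fixed number of points, then remove location and scale,
        so matching is independent of where and how large the gesture was drawn."""
        step = (len(points) - 1) / (samples - 1)
        resampled = []
        for i in range(samples):
            t = i * step
            lo, frac = int(t), t - int(t)
            hi = min(lo + 1, len(points) - 1)
            x = points[lo][0] + frac * (points[hi][0] - points[lo][0])
            y = points[lo][1] + frac * (points[hi][1] - points[lo][1])
            resampled.append((x, y))
        cx = sum(p[0] for p in resampled) / samples
        cy = sum(p[1] for p in resampled) / samples
        scale = max(math.hypot(p[0] - cx, p[1] - cy) for p in resampled) or 1.0
        return [((p[0] - cx) / scale, (p[1] - cy) / scale) for p in resampled]

    def closest_word(gesture, templates):
        """templates: word -> ideal keyboard path; the recognized word is the
        template whose normalized path is nearest to the normalized gesture."""
        g = normalize(gesture)
        def distance(path):
            t = normalize(path)
            return sum(math.hypot(a[0] - b[0], a[1] - b[1]) for a, b in zip(g, t))
        return min(templates, key=lambda w: distance(templates[w]))

    # Toy templates: idealized key-position paths for two words.
    templates = {"the": [(0, 0), (3, 1), (5, 0)], "tie": [(0, 0), (4, 2), (5, 0)]}
    print(closest_word([(10, 10), (16, 12), (20, 10)], templates))   # the
    ```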
  • Patent number: 7706614
    Abstract: A system, method and computer program product for identifying spam in an image, including (a) identifying a plurality of contours in the image, the contours corresponding to probable symbols; (b) ignoring contours that are too small or too large; (c) identifying text lines in the image, based on the remaining contours; (d) parsing the text lines into words; (e) ignoring words that are too short or too long from the identified text lines; (f) ignoring text lines that are too short; (g) verifying that the image contains text by comparing a number of pixels of a symbol color within remaining contours to a total number of pixels of the symbol color in the image, and that there is at least one text line after filtration; and (h) if the image contains text, rendering a spam/no spam verdict based on a contour representation of the text that appears after step (f).
    Type: Grant
    Filed: February 4, 2008
    Date of Patent: April 27, 2010
    Assignee: Kaspersky Lab, ZAO
    Inventor: Evgeny P. Smirnov