Alphanumerics Patents (Class 382/161)
  • Patent number: 10347293
    Abstract: Provided is a process, including: obtaining screen-cast video; determining amounts of difference between respective frames; selecting a subset of frames based on the amounts; causing OCRing of each frame in the subset of frames; classifying text in each frame-OCR record as confidential or non-confidential; and forming a redacted version of the screen-cast video based on the classifying.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: July 9, 2019
    Assignee: Droplr, Inc.
    Inventors: Gray Skinner, Levi Nunnink
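The frame-selection step described above can be sketched in a few lines. This is a minimal illustrative version, assuming grayscale frames as nested lists of floats and a mean-absolute-difference threshold; the function name and threshold are not taken from the patent.

```python
# Hypothetical sketch: keep only frames that differ enough from the last
# kept frame, so that just the "changed" frames are sent on to OCR.

def select_key_frames(frames, threshold=0.1):
    """Return indices of frames whose mean absolute difference from the
    previously kept frame exceeds `threshold`."""
    def mean_abs_diff(a, b):
        total = count = 0
        for row_a, row_b in zip(a, b):
            for pa, pb in zip(row_a, row_b):
                total += abs(pa - pb)
                count += 1
        return total / count

    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[kept[-1]], frames[i]) > threshold:
            kept.append(i)
    return kept
```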
  • Patent number: 10204271
    Abstract: In the present invention, an attribute is extracted from each region obtained by segmentation of an image, relationships among the regions are described, and a composition of the image is evaluated based on the attributes and the relationships.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: February 12, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: You Lv, Yong Jiang, Bo Wu, Xian Li
  • Patent number: 10163063
    Abstract: Computer program products and systems are provided for mining for sub-patterns within a text data set. The embodiments facilitate finding a set of N frequently occurring sub-patterns within the data set, extracting the N sub-patterns from the data set, and clustering the extracted sub-patterns into K groups, where each extracted sub-pattern is placed within the same group with other extracted sub-patterns based upon a distance value D that determines a degree of similarity between the sub-pattern and every other sub-pattern within the same group.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: December 25, 2018
    Assignee: International Business Machines Corporation
    Inventors: Snigdha Chaturvedi, Tanveer A Faruquie, Hima P. Karanam, Marvin Mendelssohn, Mukesh K. Mohania, L. Venkata Subramaniam
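A rough sketch of the mine-then-cluster flow: find the N most frequent length-k sub-patterns in a text, then greedily group them so each pattern joins a group whose seed lies within edit distance D. The greedy strategy, n-gram mining, and all names here are illustrative assumptions, not the patent's method.

```python
from collections import Counter

def frequent_subpatterns(text, k=3, n=5):
    """Return the n most frequent length-k sub-patterns of `text`."""
    grams = Counter(text[i:i + k] for i in range(len(text) - k + 1))
    return [g for g, _ in grams.most_common(n)]

def edit_distance(a, b):
    """Levenshtein distance, used as the similarity measure D."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cluster(patterns, d=1):
    """Greedily place each pattern in the first group whose seed is close."""
    groups = []  # each group is a list; its first element is the seed
    for p in patterns:
        for g in groups:
            if edit_distance(p, g[0]) <= d:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups
```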
  • Patent number: 10088977
    Abstract: An operating method of an electronic device is provided. The method includes selecting an area corresponding to at least one field of a page displayed through a display of the electronic device on the basis of an input; confirming an attribute corresponding to the at least one field among a plurality of attributes including a first attribute and a second attribute; and selectively providing a content corresponding to the attribute among at least one content including a first content and a second content according to the confirmed attribute.
    Type: Grant
    Filed: September 2, 2014
    Date of Patent: October 2, 2018
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Tae-Yeon Kim, Sang-Hyuk Koh, Hee-Jin Kim, Bo-Hyun Sim, Hye-Mi Lee, Si-Hak Jang
  • Patent number: 10019492
    Abstract: The present application relates to the field of computer technologies, and in particular, to a stop word identification method used in an information retrieval system. In a stop word identification method, after a first query input by a user is acquired, a second query that belongs to a same session as the first query is acquired, and a stop word in the first query is identified according to a change-based feature of each word in the first query relative to the second query. According to the solution provided by the present application, a stop word in a query can be identified more accurately, and efficiency and precision of an information retrieval system are improved.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: July 10, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wenli Zhou, Zhe Wang, Feiran Hu
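An illustrative reduction of the "change-based feature" idea: when a user reformulates a query within the same session, words dropped in the reformulation are more likely to be stop words than words that are retained. The function name and the simple drop-based heuristic are assumptions for the sketch, not the patent's full feature set.

```python
def stop_word_candidates(first_query, second_query):
    """Return words of `first_query` that the same-session reformulation
    dropped; these are the stop-word candidates under this heuristic."""
    first = first_query.lower().split()
    second = set(second_query.lower().split())
    return [w for w in first if w not in second]
```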
  • Patent number: 9946455
    Abstract: A message screen display comprises a static non-scrollable display area for display of at least part of a first message, the first message having an associated first message time. The message screen display further comprises a scrollable display area for display of at least part of a second message, the second message having an associated second message time. The message screen display further comprises a feature applied to at least part of the first message that varies based on time as referenced to the associated first message time.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: April 17, 2018
    Assignee: New York Stock Exchange LLC
    Inventors: Robert B. Hlad, Valerie Jeanne Schafer, Cynthia Teresa Bautista-Rozenberg, Robert S. Tannen, Nicholas L. Springer
  • Patent number: 9940511
    Abstract: Systems, computer program products, and techniques for discriminating hand and machine print from each other, and from signatures, are disclosed and include determining a color depth of an image, the color depth corresponding to at least one of grayscale, bi-tonal and color; reducing color depth of non-bi-tonal images to generate a bi-tonal representation of the image; identifying a set of one or more graphical line candidates in either the bi-tonal image or the bi-tonal representation, the graphical line candidates including one or more of true graphical lines and false positives; discriminating any of the true graphical lines from any of the false positives; removing the true graphical lines from the bi-tonal image or the bi-tonal representation without removing the false positives to generate a component map comprising connected components and excluding graphical lines; and identifying one or more of the connected components in the component map.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: April 10, 2018
    Assignee: KOFAX, INC.
    Inventors: Alexander Shustorovich, Christopher W. Thrasher, Anthony Macciola, Jan W. Amtrup
  • Patent number: 9914213
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: March 13, 2018
    Assignee: GOOGLE LLC
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Patent number: 9798818
    Abstract: A method and apparatus are provided for automatically generating and processing first and second concept vector sets extracted, respectively, from a first set of concept sequences and from a second, temporally separated, set of concept sequences by performing a natural language processing (NLP) analysis of the first concept vector set and second concept vector set to detect changes in the corpus over time by identifying changes for one or more concepts included in the first and/or second set of concept sequences.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: October 24, 2017
    Assignee: International Business Machines Corporation
    Inventors: Tin Kam Ho, Luis A. Lastras-Montano, Oded Shmueli
  • Patent number: 9652439
    Abstract: Systems and methods are provided through which data is made parseable against a document type definition by generating a list of possible paths of an input element that is not encoded against the document type definition, determining the path that is the best fit with the document type definition, and then generating the element in the syntax of the document type definition. Determining the path that is the best fit includes parsing the path against the document type definition. The best fit is expressed on a scoring scale, in which the best score indicates the best fit. Thereafter, the path with the best fit is translated in accordance with the document type definition or markup language.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: May 16, 2017
    Assignee: Thomson Reuters Global Resources
    Inventor: Michael S. Zaharkin
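The scoring step can be sketched as picking the candidate path whose segments best match a path the document type definition allows. Longest-common-subsequence scoring over path segments is an illustrative stand-in for the patent's parse-based scoring; all names are assumptions.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two sequences."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            rows[i][j] = (rows[i - 1][j - 1] + 1 if x == y
                          else max(rows[i - 1][j], rows[i][j - 1]))
    return rows[-1][-1]

def best_fit_path(candidates, allowed):
    """Return the candidate path scoring highest against any allowed path."""
    def score(path):
        segs = path.split("/")
        return max(lcs_len(segs, a.split("/")) for a in allowed)
    return max(candidates, key=score)
```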
  • Patent number: 9600135
    Abstract: A system for executing a multimodal software application includes a mobile computer device with a plurality of input interface components, the multimodal software application, and a dialog engine in operative communication with the multimodal software application. The multimodal software application is configured to receive first data from the plurality of input interface components. The dialog engine executes a workflow description from the multimodal software application by providing prompts to an output interface component. Each of these prompts includes notification indicating which of the input interface components are valid receivers for that respective prompt. Furthermore, the notification may indicate the current prompt and at least the next prompt in sequence.
    Type: Grant
    Filed: September 10, 2010
    Date of Patent: March 21, 2017
    Assignee: Vocollect, Inc.
    Inventor: Sean Nickel
  • Patent number: 9524447
    Abstract: A reference 2D data is provided. The reference 2D data comprises a first plurality of pixels defined in 2D coordinates. The reference 2D data is transformed into a reference 1D data having a first 1D size. The reference 1D data comprises the first plurality of pixels in a transformed 1D order. A plurality of input 2D data are also provided. An input 2D data comprises a second plurality of pixels defined in 2D coordinates. The plurality of input 2D data are transformed into a plurality of input 1D data, which comprises transforming the input 2D data into an input 1D data. Transforming the input 2D data into the input 1D data is the same as transforming the reference 2D data into the reference 1D data. Finally, the plurality of input 1D data is searched for a transformed input 1D data that matches the transformed reference 1D data.
    Type: Grant
    Filed: March 5, 2014
    Date of Patent: December 20, 2016
    Inventor: Sizhe Tan
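The abstract's key constraint is that reference and input data must use the *same* 2D-to-1D transform so their 1D forms are directly comparable. Row-major flattening is used below as a simple illustrative transform; the patent does not specify this particular ordering.

```python
def to_1d(grid):
    """Transform a 2D pixel grid into 1D in a fixed (row-major) order."""
    return [px for row in grid for px in row]

def find_match(reference_2d, inputs_2d):
    """Return the index of the input whose 1D form matches the reference,
    or -1 if none matches."""
    ref = to_1d(reference_2d)
    for i, grid in enumerate(inputs_2d):
        if to_1d(grid) == ref:
            return i
    return -1
```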
  • Patent number: 9465912
    Abstract: An apparatus and a method for mining temporal patterns are provided. A method for mining temporal patterns includes generating a data pattern group comprising data patterns from sequential data, generating a candidate pattern group comprising candidate patterns from the data pattern group, calculating a support value for a candidate pattern in a candidate pattern group based on a discrepancy value of the candidate pattern, and determining whether the candidate pattern satisfies a predetermined pattern requirement, based on the calculated support value.
    Type: Grant
    Filed: May 1, 2014
    Date of Patent: October 11, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyoung-Min Park, Hyo-A Kang, Ki-Yong Lee
  • Patent number: 9396413
    Abstract: Methods, systems and apparatus for choosing image labels. In one aspect, a method includes receiving data specifying a first image, receiving text labels for the first image, receiving search results in response to a web search performed using at least some of the text labels as queries, ranking the text labels, at least in part, based on a number of resources referenced by the received search results, wherein at least some of the resources each include an image matching the first image, and selecting an image label for the image from the ranked text labels, the image label being selected based on the ranking.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: July 19, 2016
    Assignee: Google Inc.
    Inventors: Yong Zhang, Charles J. Rosenberg, Jingbin Wang, Sean O'Malley
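The ranking step can be sketched as counting, for each candidate label, how many search-result resources contain an image matching the first image. The data shapes and names below are assumptions for illustration, not the patent's API.

```python
def rank_labels(label_results, image_matches):
    """Rank candidate labels by the number of result resources that
    contain a matching image.

    `label_results` maps each label to the list of resource URLs returned
    when the label was used as a query; `image_matches` is the set of
    resources known to contain an image matching the first image.
    """
    def score(label):
        return sum(1 for url in label_results[label] if url in image_matches)
    return sorted(label_results, key=score, reverse=True)
```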
  • Patent number: 9218546
    Abstract: Methods, systems and apparatus for choosing image labels. In one aspect, a method includes receiving data specifying a first image, receiving text labels for the first image, receiving search results in response to a web search performed using at least some of the text labels as queries, ranking the text labels, at least in part, based on a number of resources referenced by the received search results, wherein at least some of the resources each include an image matching the first image, and selecting an image label for the image from the ranked text labels, the image label being selected based on the ranking.
    Type: Grant
    Filed: June 1, 2012
    Date of Patent: December 22, 2015
    Assignee: Google Inc.
    Inventors: Yong Zhang, Charles J. Rosenberg, Jingbin Wang, Sean O'Malley
  • Patent number: 9117375
    Abstract: A computerized assessment grading method comprises creating a syntax tree for a received equation-based response to at least one assessment question and a syntax tree for at least one solution to the at least one question, comparing the syntax trees, and grading the response based on the results of the comparison.
    Type: Grant
    Filed: June 27, 2011
    Date of Patent: August 25, 2015
    Assignee: SMART Technologies ULC
    Inventors: David Labine, Lothar Wenzel, Albert Chu
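A minimal sketch of the grading idea, using Python's own expression parser as a stand-in syntax tree: a response earns full marks if its tree equals a solution tree after canonicalizing commutative additions and multiplications. The canonicalization and scoring are illustrative assumptions; the patent's tree comparison is not specified at this level.

```python
import ast

def canonical(expr):
    """Parse an expression and normalize commutative +/* by sorting operands."""
    tree = ast.parse(expr, mode="eval").body

    def norm(node):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Add, ast.Mult)):
            parts = sorted([norm(node.left), norm(node.right)], key=repr)
            return (type(node.op).__name__, tuple(parts))
        return ast.dump(node)

    return norm(tree)

def grade(response, solutions):
    """Full credit if the response tree matches any solution tree."""
    return 1.0 if any(canonical(response) == canonical(s) for s in solutions) else 0.0
```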
  • Patent number: 9104936
    Abstract: A method of reading data represented by characters formed of an x by y array of dots, e.g. as printed by a dot-matrix printer, is described. An image of the character(s) is captured by a digital camera device and transmitted to a computer, where analysis software identifies dot shapes and detects their positions within the captured image, using the similarity of dots to idealized representations of dots based on a combination of covariance, correlation, or color data. The position information about the detected dots is then processed to determine the distance between dots, to identify “clusters” of adjacent dots in groups of dots close to one another, and to enable such clusters to be mapped onto a notional x by y grid that defines the intended positions of the dots where grid elements intersect.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: August 11, 2015
    Assignee: Wessex Technology Opto-Electronic Products Limited
    Inventors: Alan Joseph Bell, Martin Robinson, Guanhua Chen
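The cluster-to-grid step can be sketched as snapping each detected dot centre to the nearest cell of the notional x by y grid, given the printer's dot pitch and a grid origin. The pitch and origin parameters are illustrative assumptions.

```python
def dots_to_grid(centres, origin, pitch, nx, ny):
    """Return an ny-by-nx boolean grid with True where a dot centre lands.

    Each centre (x, y) is snapped to the nearest grid intersection of a
    grid starting at `origin` with spacing `pitch`; out-of-grid dots are
    ignored.
    """
    grid = [[False] * nx for _ in range(ny)]
    ox, oy = origin
    for x, y in centres:
        col = round((x - ox) / pitch)
        row = round((y - oy) / pitch)
        if 0 <= col < nx and 0 <= row < ny:
            grid[row][col] = True
    return grid
```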
  • Patent number: 9104306
    Abstract: A user device is disclosed which includes a touch input and a keypad input. The user device is configured to operate in a gesture capture mode as well as a navigation mode. In the navigation mode, the user interfaces with the touch input to move a cursor or similar selection tool within the user output. In the gesture capture mode, the user interfaces with the touch input to provide gesture data that is translated into key code output having a similar or identical format to outputs of the keypad.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: August 11, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Lye Hock Bernard Chan, Chong Pin Jonathan Teoh, Boon How Kok
  • Patent number: 9098768
    Abstract: A character detection apparatus is provided that detects, from an image including a first image representing a character and a second image representing a translucent object, the character. The character detection apparatus includes a calculating portion that, for each of blocks obtained by dividing an overlapping region in which the first image is overlapped by the second image, calculates a frequency of appearance of pixels for each of gradations of a property, and a detection portion that detects the character from the overlapping region based on the frequency for each of the gradations.
    Type: Grant
    Filed: January 4, 2012
    Date of Patent: August 4, 2015
    Assignee: KONICA MINOLTA, INC.
    Inventor: Tomoo Yamanaka
  • Publication number: 20150139539
    Abstract: An apparatus and method for detecting forgery/falsification of a homepage. The apparatus includes a homepage image shot generation module for generating homepage image shots of an entire screen of an accessed homepage. A character string extraction module extracts character strings from each homepage image shot using an OCR technique. A character string comparison module compares each of the extracted character strings with character strings required for determination of homepage forgery/falsification, thus determining whether the extracted character string is a normal character string or a falsified character string. A homepage falsification determination module determines whether the corresponding homepage has been forged/falsified, based on results of the comparison. A character string learning module learns the character string extracted from the homepage image shot, based on results of the determination, and classifies the character string as the normal character string or the falsified character string.
    Type: Application
    Filed: August 25, 2014
    Publication date: May 21, 2015
    Inventors: Taek kyu LEE, Geun Yong KIM, Seok won LEE, Myeong Ryeol CHOI, Hyung Geun OH, KiWook SOHN
  • Patent number: 8995741
    Abstract: Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: March 31, 2015
    Assignee: Google Inc.
    Inventors: Sanjiv Kumar, Henry Allan Rowley, Xiaohang Wang, Jose Jeronimo Moreira Rodrigues
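The linear-then-nonlinear cascade can be sketched generically: a cheap linear classifier first proposes candidate digit locations, and only the survivors are passed to a costlier nonlinear classifier before OCR. The scoring callables and cut-offs below are illustrative stand-ins, not the patent's classifiers.

```python
def cascade(windows, linear_score, nonlinear_score,
            linear_cut=0.0, nonlinear_cut=0.5):
    """Return windows surviving both stages, cheapest filter first.

    `linear_score` and `nonlinear_score` are caller-supplied scoring
    functions; running the cheap one first reduces how many windows the
    expensive one must evaluate.
    """
    candidates = [w for w in windows if linear_score(w) > linear_cut]
    return [w for w in candidates if nonlinear_score(w) > nonlinear_cut]
```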
  • Patent number: 8977042
    Abstract: A character recognition system receives an unknown character and recognizes the character based on a pre-trained recognition model. Prior to recognizing the character, the character recognition system may pre-process the character to rotate the character to a normalized orientation. By rotating the character to a normalized orientation in both training and recognition stages, the character recognition system releases the pre-trained recognition model from considering character prototypes in different orientations and thereby speeds up recognition of the unknown character. In one example, the character recognition system rotates the character to the normalized orientation by aligning a line between a sum of coordinates of starting points and a sum of coordinates of ending points of each stroke of the character with a normalized direction.
    Type: Grant
    Filed: March 23, 2012
    Date of Patent: March 10, 2015
    Assignee: Microsoft Corporation
    Inventors: Qiang Huo, Jun Du
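The orientation-normalization step in the example can be sketched directly: compute the vector from the sum of stroke starting points to the sum of stroke ending points, then rotate all points so that vector aligns with a normalized direction (here, the +y axis; the choice of axis is an assumption).

```python
import math

def normalize_orientation(strokes):
    """Rotate a character (list of strokes, each a list of (x, y) points)
    so the line from the start-point sum to the end-point sum points
    along +y."""
    sx = sum(s[0][0] for s in strokes)
    sy = sum(s[0][1] for s in strokes)
    ex = sum(s[-1][0] for s in strokes)
    ey = sum(s[-1][1] for s in strokes)
    # angle needed to rotate (ex - sx, ey - sy) onto the +y axis
    theta = math.pi / 2 - math.atan2(ey - sy, ex - sx)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [[(x * cos_t - y * sin_t, x * sin_t + y * cos_t)
             for x, y in s] for s in strokes]
```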
  • Publication number: 20140363074
    Abstract: Methods, systems, and computer-readable media related to a technique for providing handwriting input functionality on a user device. A handwriting recognition module is trained to have a repertoire comprising multiple non-overlapping scripts and capable of recognizing tens of thousands of characters using a single handwriting recognition model. The handwriting input module provides real-time, stroke-order and stroke-direction independent handwriting recognition. User interfaces for providing the handwriting input functionality are also disclosed.
    Type: Application
    Filed: May 30, 2014
    Publication date: December 11, 2014
    Applicant: Apple Inc.
    Inventors: Jannes G. A. DOLFING, Karl M. GROETHE, Ryan S. DIXON, Jerome R. BELLEGARDA
  • Patent number: 8908971
    Abstract: Methods, devices and systems are described for transcribing text from artifacts to electronic files. A computer system is provided, wherein the computer system comprises a computer-readable storage device. An image of the artifact is received wherein text is present on the artifact. A first portion of the text is analyzed. Characters representing the first portion of the text are identified at a first confidence level equal to or greater than a threshold confidence level. The characters representing the first portion of the text are stored. A second portion of the text appearing on the artifact is analyzed. A plurality of candidates to represent the second portion of the text are identified at a second confidence level below the threshold confidence level. Finally, the plurality of candidates are presented to a user for selection.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: December 9, 2014
    Assignee: Ancestry.com Operations Inc.
    Inventor: Lee Samuel Jensen
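The two-tier flow can be sketched as a triage over OCR output: spans recognized at or above a confidence threshold are stored directly, while low-confidence spans keep their candidate lists for later presentation to a user. The input record shape is an assumption for illustration.

```python
def triage(recognized, threshold=0.9):
    """Split OCR output into accepted text and spans needing user review.

    `recognized` is a list of (candidates, confidence) pairs, where
    `candidates` is a best-first list of strings for one text span.
    """
    accepted, needs_review = [], []
    for candidates, confidence in recognized:
        if confidence >= threshold:
            accepted.append(candidates[0])
        else:
            needs_review.append(candidates)
    return "".join(accepted), needs_review
```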
  • Patent number: 8908961
    Abstract: A method for automatically recognizing Arabic text includes building an Arabic corpus comprising Arabic text files written in different writing styles and ground truths corresponding to each of the Arabic text files, storing writing-style indices in association with the Arabic text files, digitizing an Arabic word to form an array of pixels, dividing the Arabic word into line images, forming a text feature vector from the line images, training a Hidden Markov Model using the Arabic text files and ground truths in the Arabic corpus in accordance with the writing-style indices, and feeding the text feature vector into a Hidden Markov Model to recognize the Arabic words.
    Type: Grant
    Filed: April 23, 2014
    Date of Patent: December 9, 2014
    Assignee: King Abdulaziz City for Science & Technology
    Inventors: Mohammad S. Khorsheed, Hussein K. Al-Omari
  • Patent number: 8885931
    Abstract: One or more techniques and/or systems are disclosed for mitigating machine solvable human interactive proofs (HIPs). A classifier is trained over a set of one or more training HIPs that have known characteristics for OCR solvability and HIP solving pattern from actual use. A HIP classification is determined for a HIP (such as from a HIP library used by a HIP generator) using the trained classifier. If the HIP is classified by the trained classifier as a merely human solvable classification, such that it may not be solved by a machine, the HIP can be identified for use in the HIP generation system. Otherwise, the HIP can be altered to (attempt to) be merely human solvable.
    Type: Grant
    Filed: January 26, 2011
    Date of Patent: November 11, 2014
    Assignee: Microsoft Corporation
    Inventor: Kumar S. Srivastava
  • Publication number: 20140270497
    Abstract: Product images are used in conjunction with textual descriptions to improve classifications of product offerings. By combining cues from both text and image descriptions associated with products, implementations enhance both the precision and recall of product description classifications within the context of web-based commerce search. Several implementations are directed to improving those areas where text-only approaches are most unreliable. For example, several implementations use image signals to complement text classifiers and improve overall product classification in situations where brief textual product descriptions use vocabulary that overlaps with multiple diverse categories. Other implementations are directed to using text and images “training sets” to improve automated classifiers including text-only classifiers.
    Type: Application
    Filed: May 30, 2014
    Publication date: September 18, 2014
    Applicant: Microsoft Corporation
    Inventors: Anitha Kannan, Partha Pratim Talukdar, Nikhil Rasiwasia, Qifa Ke, Rakesh Agrawal
  • Patent number: 8831329
    Abstract: Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
    Type: Grant
    Filed: October 21, 2013
    Date of Patent: September 9, 2014
    Assignee: Google Inc.
    Inventors: Sanjiv Kumar, Henry Allan Rowley, Xiaohang Wang, Jose Jeronimo Moreira Rodrigues
  • Patent number: 8787660
    Abstract: A method of defining model characters of a font. The method includes receiving a string of characters, receiving an image that includes an occurrence of the string, identifying objects in the image, determining, for each respective object, which of the objects satisfies first criteria indicating that the respective object likely corresponds to a character in the string, determining, for each respective object satisfying the first criteria, which of the objects satisfies second criteria indicating that the respective object belongs to a sequence of objects likely to correspond to the string, and defining, for each respective object satisfying the second criteria, a model character for each character of the string based upon a corresponding object of the sequence of objects. The first criteria may include aspect ratio criterion, size criterion, or both, and the second criteria may include alignment criterion, spacing criterion, contrast criterion, encompassment criterion, or combinations thereof.
    Type: Grant
    Filed: November 23, 2005
    Date of Patent: July 22, 2014
    Assignee: Matrox Electronic Systems, Ltd.
    Inventors: Christian Simon, Sylvain Chapleau
  • Patent number: 8768050
    Abstract: Product images are used in conjunction with textual descriptions to improve classifications of product offerings. By combining cues from both text and image descriptions associated with products, implementations enhance both the precision and recall of product description classifications within the context of web-based commerce search. Several implementations are directed to improving those areas where text-only approaches are most unreliable. For example, several implementations use image signals to complement text classifiers and improve overall product classification in situations where brief textual product descriptions use vocabulary that overlaps with multiple diverse categories. Other implementations are directed to using text and images “training sets” to improve automated classifiers including text-only classifiers.
    Type: Grant
    Filed: June 13, 2011
    Date of Patent: July 1, 2014
    Assignee: Microsoft Corporation
    Inventors: Anitha Kannan, Partha Pratim Talukdar, Nikhil Rasiwasia, Qifa Ke, Rakesh Agrawal
  • Publication number: 20140177951
    Abstract: In a method for processing an electronic document, a database used to extract information relating to the document is adapted using the electronic document and using at least one item of feedback from a user. Furthermore, an apparatus, a computer program product and a storage medium are accordingly specified.
    Type: Application
    Filed: December 23, 2013
    Publication date: June 26, 2014
    Applicant: DOCUWARE GMBH
    Inventors: JUERGEN BIFFAR, MICHAEL BERGER, CHRISTOPH WEIDLING, ANDREAS HOFMEIER, DANIEL ESSER, MARCEL HANKE
  • Patent number: 8761500
    Abstract: A method for automatically recognizing Arabic text includes building an Arabic corpus comprising Arabic text files written in different writing styles and ground truths corresponding to each of the Arabic text files, storing writing-style indices in association with the Arabic text files, digitizing a line of Arabic characters to form an array of pixels, dividing the line of the Arabic characters into line images, forming a text feature vector from the line images, training a Hidden Markov Model using the Arabic text files and ground truths in the Arabic corpus in accordance with the writing-style indices, and feeding the text feature vector into a Hidden Markov Model to recognize the line of Arabic characters.
    Type: Grant
    Filed: May 12, 2013
    Date of Patent: June 24, 2014
    Assignee: King Abdulaziz City for Science and Technology
    Inventors: Mohammad S. Khorsheed, Hussein K. Al-Omari, Majed Ibrahim Bin Osfoor, Abdulaziz Obaid Alobaid, Hussam Abdulrahman Alfaleh, Arwa Ibrahem Bin Asfour
  • Publication number: 20140169665
    Abstract: A system and method for processing documents with automatic improvements to the processing. Documents are submitted to a processing system and data is extracted from the documents. The data may be extracted utilising OCR techniques. The data may be verified and interpreted utilising classifiers and predefined feature extraction rules which may improve their performance through an iterative learning cycle.
    Type: Application
    Filed: February 21, 2014
    Publication date: June 19, 2014
    Applicant: Porta Holding Ltd.
    Inventors: Rasmus Berg Palm, Claus Thrane, Gert Sylvest, Mikkel Hippe Brun
  • Patent number: 8725660
    Abstract: A collection of labeled training cases is received, where each of the labeled training cases has at least one original feature and a label with respect to at least one class. Non-linear transformation of values of the original feature in the training cases is applied to produce transformed feature values that are more linearly related to the class than the original feature values. The non-linear transformation is based on computing probabilities of the training cases that are positive with respect to the at least one class. The transformed feature values are used to train a classifier.
    Type: Grant
    Filed: July 30, 2009
    Date of Patent: May 13, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: George H. Forman, Martin B. Scholz, Shyam Sundar Rajaram
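The described non-linear transform can be sketched as replacing each original feature value with the empirical probability that training cases sharing that value are positive, yielding a feature more linearly related to the class. Binning by exact value (rather than value ranges) is a simplification for the sketch, and the names are assumptions.

```python
from collections import defaultdict

def fit_transform(values, labels):
    """Map each distinct feature value to P(positive | value) and return
    the transformed feature values along with the learned mapping."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for v, y in zip(values, labels):
        total[v] += 1
        pos[v] += 1 if y else 0
    mapping = {v: pos[v] / total[v] for v in total}
    return [mapping[v] for v in values], mapping
```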
  • Publication number: 20140105488
    Abstract: Architecture that performs image page index selection. A learning-based framework learns a statistical model based on the previous click information for hyperlinks (URLs, uniform resource locators) obtained from image search users. The learned model can combine the features of a newly discovered URL to predict the possibility of the newly discovered URL being clicked in a future image search. In addition to existing web index selection features, image clicks are added as features, and the image clicks are aggregated over different URL segments, as well as the site-modeling pattern trees, to reduce the sparsity problem of the image click information.
    Type: Application
    Filed: October 17, 2012
    Publication date: April 17, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Bo Geng, Xian-Sheng Hua, Zhong Wu, Dengyong Zhou
  • Publication number: 20140091522
    Abstract: A method of automatically generating a calibration file for a card handling device comprises automatically generating a calibration file stored in memory of a main control system for a card handling device. Automatically generating the calibration file comprises identifying at least one parameter associated with a rank area around a rank of the at least a portion of the card, identifying at least one parameter associated with a suit area around a suit of the at least a portion of the card, and storing the at least one parameter associated with the rank area and the at least one parameter associated with the suit area in the calibration file. Additionally, a method of automatically generating deck libraries for one or more decks of cards comprises automatically generating a plurality of master images for the cards of the first deck type using the parameters from the calibration file.
    Type: Application
    Filed: September 9, 2013
    Publication date: April 3, 2014
    Inventors: James V. Kelly, Vladislav Zvercov, Brian Miller
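    The calibration-file idea above can be sketched as follows — a minimal illustration, assuming rectangular rank and suit areas given as (x, y, width, height) tuples; the function name, JSON layout, and field names are illustrative, not the patent's actual format:

    ```python
    import json

    def build_calibration_file(rank_area, suit_area, path=None):
        """Store the parameters of the rank area and suit area of a card
        in a calibration-file structure; optionally persist it as JSON."""
        calibration = {
            "rank_area": {"x": rank_area[0], "y": rank_area[1],
                          "w": rank_area[2], "h": rank_area[3]},
            "suit_area": {"x": suit_area[0], "y": suit_area[1],
                          "w": suit_area[2], "h": suit_area[3]},
        }
        if path is not None:
            with open(path, "w") as f:
                json.dump(calibration, f)
        return calibration
    ```

    A deck-library builder would then read these parameters back to crop master images for each card of a deck type.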
  • Patent number: 8682093
    Abstract: A method and system for producing accented image data for an accented image is disclosed. The method includes decomposing each of a first and a second image into a gradient representation which comprises spectral and edge components. The first image comprises more spectral dimensions than the second image. The edge component from the first image is combined with the spectral component from the second image to form a combined gradient representation. Accented image data for the accented image is then generated from data including the combined gradient representation.
    Type: Grant
    Filed: August 27, 2010
    Date of Patent: March 25, 2014
    Assignee: University of East Anglia
    Inventors: David Connah, Mark S. Drew, Graham Finlayson
  • Patent number: 8606022
    Abstract: An information processing apparatus, which creates a tree structure used by a recognition apparatus which recognizes specific information using the tree structure, including a memory unit which stores data including the information to be recognized and data not including the information so as to correspond to a label showing whether or not the data includes the information, a recognition device which recognizes the information and outputs a high score value when the data including the information is input, and a grouping unit which performs grouping of the recognition devices using a score distribution obtained when the data is input into the recognition devices.
    Type: Grant
    Filed: March 2, 2011
    Date of Patent: December 10, 2013
    Assignee: Sony Corporation
    Inventor: Jun Yokono
  • Patent number: 8600152
    Abstract: Methods, devices and systems are described for transcribing text from artifacts to electronic files. A computer system is provided, wherein the computer system comprises a computer-readable storage device. An image of the artifact is received wherein text is present on the artifact. A first portion of the text is analyzed. Characters representing the first portion of the text are identified at a first confidence level equal to or greater than a threshold confidence level. The characters representing the first portion of the text are stored. A second portion of the text appearing on the artifact is analyzed. A plurality of candidates to represent the second portion of the text are identified at a second confidence level below the threshold confidence level. Finally, the plurality of candidates are presented to a user for selection.
    Type: Grant
    Filed: October 26, 2009
    Date of Patent: December 3, 2013
    Assignee: Ancestry.com Operations Inc.
    Inventor: Lee Samuel Jensen
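    The confidence-threshold workflow in the abstract above can be sketched in a few lines — a hedged illustration only, where `recognize` stands in for any OCR engine returning a best reading, a confidence score, and a candidate list (that three-part return signature is an assumption, not an actual API):

    ```python
    def transcribe(portions, recognize, threshold=0.9):
        """Store portions recognized at or above the confidence threshold;
        collect candidate lists for low-confidence portions so they can be
        presented to a user for selection."""
        stored, needs_review = [], []
        for portion in portions:
            text, confidence, candidates = recognize(portion)
            if confidence >= threshold:
                stored.append(text)
            else:
                needs_review.append((portion, candidates))
        return stored, needs_review
    ```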
  • Publication number: 20130315480
    Abstract: Text in web pages or other text documents may be classified based on the images or other objects within the webpage. A system for identifying and classifying text related to an object may identify one or more web pages containing the image or similar images, determine topics from the text of the document, and develop a set of training phrases for a classifier. The classifier may be trained and then used to analyze the text in the documents. The training set may include both positive examples and negative examples of text taken from the set of documents. A positive example may include captions or other elements directly associated with the object, while negative examples may include text taken from the documents but at a large distance from the object. In some cases, the system may iterate on the classification process to refine the results.
    Type: Application
    Filed: August 5, 2013
    Publication date: November 28, 2013
    Applicant: Microsoft Corporation
    Inventors: Simon Baker, Dahua Lin, Anitha Kannan, Qifa Ke
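    The positive/negative training-set construction described above can be sketched by labeling text elements by their layout distance from the object — a simplified 1-D illustration, where the distance units and the `near`/`far` thresholds are assumptions for demonstration:

    ```python
    def build_training_set(elements, object_pos, near=1, far=10):
        """Split (text, position) pairs into positive examples (captions or
        text directly associated with the object) and negative examples
        (text taken from the document but far from the object)."""
        positives, negatives = [], []
        for text, pos in elements:
            distance = abs(pos - object_pos)
            if distance <= near:
                positives.append(text)
            elif distance >= far:
                negatives.append(text)
        return positives, negatives
    ```

    Text at intermediate distances is deliberately left out of both sets, since its association with the object is ambiguous.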
  • Patent number: 8577131
    Abstract: Systems and methods for improving visual object recognition by analyzing query images are disclosed. In one example, a visual object recognition module may determine query images matching objects of a training corpus utilized by the module. Matched query images may be added to the training corpus as training images of a matched object to expand the recognition of the object by the module. In another example, relevant candidate image corpora from a pool of image data may be automatically selected by matching the candidate image corpora against user query images. Selected image corpora may be added to a training corpus to improve recognition coverage. In yet another example, objects unknown to a visual object recognition module may be discovered by clustering query images. Clusters of similar query images may be annotated and added into a training corpus to improve recognition coverage.
    Type: Grant
    Filed: July 12, 2011
    Date of Patent: November 5, 2013
    Assignee: Google Inc.
    Inventors: Yuan Li, Hartwig Adam
  • Patent number: 8548259
    Abstract: Techniques and methods are disclosed herein for combining and weighting of values from and associated with classifiers. Classifiers are used to recognize characters as part of an optical character recognition (OCR) system. Various methods of normalization facilitate combining of results of classifiers. For example, weight values may be entered into a weight table having two columns: one containing weights from comparing patterns with images of correct characters, the other containing weights from comparing patterns with images of incorrect characters.
    Type: Grant
    Filed: October 24, 2012
    Date of Patent: October 1, 2013
    Assignee: ABBYY Development LLC
    Inventor: Diar Tuganbaev
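    The normalization-before-combination step described above can be sketched as follows — a minimal illustration using min-max normalization and a weighted sum; the specific normalization and weighting scheme here is an assumption, not the patent's actual method:

    ```python
    def normalize(scores):
        """Min-max normalize one classifier's raw scores to [0, 1] so that
        results from different classifiers become comparable."""
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return [0.0 for _ in scores]
        return [(s - lo) / (hi - lo) for s in scores]

    def combine(classifier_scores, weights):
        """Weighted combination of normalized per-candidate scores from
        several classifiers; the weights could come from a weight table
        built from correct/incorrect character comparisons."""
        combined = [0.0] * len(classifier_scores[0])
        for scores, weight in zip(classifier_scores, weights):
            for i, s in enumerate(normalize(scores)):
                combined[i] += weight * s
        return combined
    ```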
  • Publication number: 20130251249
    Abstract: A character recognition system receives an unknown character and recognizes the character based on a pre-trained recognition model. Prior to recognizing the character, the character recognition system may pre-process the character to rotate the character to a normalized orientation. By rotating the character to a normalized orientation in both training and recognition stages, the character recognition system releases the pre-trained recognition model from considering character prototypes in different orientations and thereby speeds up recognition of the unknown character. In one example, the character recognition system rotates the character to the normalized orientation by aligning a line between a sum of coordinates of starting points and a sum of coordinates of ending points of each stroke of the character with a normalized direction.
    Type: Application
    Filed: March 23, 2012
    Publication date: September 26, 2013
    Applicant: Microsoft Corporation
    Inventors: Qiang Huo, Jun Du
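    The orientation-normalization step in the abstract above — aligning the line between the summed starting points and summed ending points of the strokes with a normalized direction — can be sketched as follows. This is an illustration only; the choice of normalized direction and the stroke representation (lists of (x, y) points) are assumptions:

    ```python
    import math

    def normalization_angle(strokes):
        """Angle of the line from the sum of stroke starting points to the
        sum of stroke ending points; rotating the character by the negative
        of this angle aligns that line with the x-axis."""
        sx = sum(s[0][0] for s in strokes)
        sy = sum(s[0][1] for s in strokes)
        ex = sum(s[-1][0] for s in strokes)
        ey = sum(s[-1][1] for s in strokes)
        return math.atan2(ey - sy, ex - sx)

    def rotate_point(p, theta):
        """Rotate a point about the origin by theta radians."""
        x, y = p
        return (x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta))
    ```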
  • Patent number: 8494275
    Abstract: An information recognition system includes: a display section displaying an image on a display surface at a predetermined display resolution; an image combining section combining a character entry guide with the image, the character entry guide assisting handwritten input to the display surface; an information detecting section detecting handwritten input information at a detection resolution which is higher than the display resolution, the handwritten input information input to the display surface according to the character entry guide; and a character recognizing section performing character recognition based on the information detected at the detection resolution.
    Type: Grant
    Filed: March 11, 2011
    Date of Patent: July 23, 2013
    Assignee: Seiko Epson Corporation
    Inventor: Naruhide Kitada
  • Patent number: 8472707
    Abstract: A method for automatically recognizing Arabic text includes digitizing a line of Arabic characters to form a two-dimensional array of pixels each associated with a pixel value, wherein the pixel value is expressed in a binary number, dividing the line of the Arabic characters into a plurality of line images, defining a plurality of cells in one of the plurality of line images, wherein each of the plurality of cells comprises a group of adjacent pixels, serializing pixel values of pixels in each of the plurality of cells in one of the plurality of line images to form a binary cell number, forming a text feature vector according to binary cell numbers obtained from the plurality of cells in one of the plurality of line images, and feeding the text feature vector into a Hidden Markov Model to recognize the line of Arabic characters.
    Type: Grant
    Filed: November 26, 2012
    Date of Patent: June 25, 2013
    Assignee: King Abdulaziz City for Science & Technology
    Inventors: Mohammad S. Khorsheed, Hussein K. Al-Omari
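    The cell-serialization step described above — turning each cell's binary pixel values into a single "binary cell number" and collecting them into a text feature vector — can be sketched directly (the raster-scan bit order within a cell is an assumption; the HMM recognition stage is omitted):

    ```python
    def cell_features(line_image, cell_h, cell_w):
        """Serialize the 0/1 pixel values in each cell of a line image into
        one integer, and gather the integers into a feature vector."""
        rows, cols = len(line_image), len(line_image[0])
        vector = []
        for r in range(0, rows, cell_h):
            for c in range(0, cols, cell_w):
                bits = 0
                for dr in range(cell_h):
                    for dc in range(cell_w):
                        if r + dr < rows and c + dc < cols:
                            bits = (bits << 1) | line_image[r + dr][c + dc]
                vector.append(bits)
        return vector
    ```

    For a 2x2 cell containing pixels 1, 0, 0, 1 (in raster order), the binary cell number is 0b1001 = 9.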
  • Patent number: 8463042
    Abstract: An apparatus for pattern processing exhibits a discretizing device for discretizing an input pattern, a device for generating a number n of discrete variants of the quantized input pattern in accordance with established rules, a number n of input stages (50) for generating, for each input-pattern variant, an assigned output symbol from a set of symbols, and a selection unit (60) for selecting a symbol as the selected symbol relating to the input pattern from the n generated output symbols in accordance with an established selection rule. The apparatus according to the invention and the corresponding process according to the invention enable faster, more precise and more flexible recognition of patterns, whether these are spatial image patterns, temporally variable signal patterns or other input patterns.
    Type: Grant
    Filed: May 22, 2009
    Date of Patent: June 11, 2013
    Inventor: Eberhard Falk
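    The variant-and-select scheme above can be sketched in a few lines — an illustration under the assumption that the established selection rule is a majority vote over the n output symbols (the patent allows other selection rules):

    ```python
    from collections import Counter

    def recognize_with_variants(pattern, make_variants, classify):
        """Generate n discrete variants of a quantized input pattern, run
        each through an input stage, and select the most frequent output
        symbol among the n results."""
        symbols = [classify(v) for v in make_variants(pattern)]
        symbol, _count = Counter(symbols).most_common(1)[0]
        return symbol
    ```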
  • Patent number: 8446422
    Abstract: An image display apparatus is disclosed. The image display apparatus includes a detection section, an image forming section, and a display process section. The detection section detects a user's watching state. The image forming section forms a display image, which is displayed on a screen based on a plurality of images, and changes the display image based on a detected result of the detection section. The display process section performs a process of displaying the display image formed by the image forming section.
    Type: Grant
    Filed: January 21, 2009
    Date of Patent: May 21, 2013
    Assignee: Sony Corporation
    Inventors: Kazumasa Tanaka, Tetsujiro Kondo, Yasushi Tatehira, Tetsushi Kokubo, Kenji Tanaka, Hitoshi Mukai, Hirofumi Hibi, Hiroyuki Morisaki
  • Patent number: 8442310
    Abstract: One or more techniques and/or systems are disclosed for compensating for affine distortions in handwriting recognition. Orientation estimation is performed on a handwriting sample to generate a set of likely characters for the sample. An estimated affine transform is determined for the sample by applying hidden Markov model (HMM) based minimax testing to the sample using the set of likely characters. The estimated affine transform is applied to the sample to compensate for the affine distortions of the sample, yielding an affine distortion compensated sample.
    Type: Grant
    Filed: April 30, 2010
    Date of Patent: May 14, 2013
    Assignee: Microsoft Corporation
    Inventor: Qiang Huo
  • Publication number: 20130114890
    Abstract: Methods and systems of the present embodiment provide segmenting of connected components of markings found in document images. Segmenting includes detecting aligned text. From this detected material an aligned text mask is generated and used in processing of the images. The processing includes breaking connected components in the document images into smaller pieces or fragments by detecting and segregating the connected components and fragments thereof likely to belong to aligned text.
    Type: Application
    Filed: November 15, 2012
    Publication date: May 9, 2013
    Applicant: PALO ALTO RESEARCH CENTER INCORPORATED
    Inventor: Palo Alto Research Center Incorporated
  • Publication number: 20130108115
    Abstract: Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on the context associated with the image. Improved OCR results are generated using the group of graphical objects. Input from various sensors including microphone, GPS, and camera, along with user inputs including voice, touch, and user usage patterns may be used in inferring the user context and selecting dictionaries that are most relevant to the inferred contexts.
    Type: Application
    Filed: April 18, 2012
    Publication date: May 2, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Kyuwoong HWANG, Te-Won Lee, Duck Hoon Kim, Kisun You, Minho Jin, Taesu Kim, Hyun-Mook Cho
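    The context-to-dictionary selection described above can be sketched as follows — a minimal illustration where the context labels, the dictionary index structure, and the candidate rescoring rule are all assumptions for demonstration, not the patent's actual design:

    ```python
    def select_dictionaries(context, dictionary_index):
        """Gather the words of every dictionary whose label appears in the
        inferred context (e.g. labels from GPS, microphone, or usage)."""
        return [word
                for label in context
                if label in dictionary_index
                for word in dictionary_index[label]]

    def rescore(candidates, dictionary):
        """Reorder (text, confidence) OCR candidates so that candidates
        found in the context-selected dictionary rank first."""
        words = set(dictionary)
        return sorted(candidates,
                      key=lambda c: (c[0] in words, c[1]),
                      reverse=True)
    ```

    For example, an image inferred to come from a restaurant would pull in a food-related word list, letting "menu" outrank a higher-confidence but out-of-dictionary reading like "menv".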