Alphanumerics Patents (Class 382/161)
-
Patent number: 12175337
Abstract: In example embodiments, techniques are provided for using machine learning to extract machine-readable labels for text boxes and symbols in P&IDs in image-only formats. A P&ID data extraction application uses an optical character recognition (OCR) algorithm to predict labels for text boxes in a P&ID. The P&ID data extraction application uses a first machine learning algorithm to detect symbols in the P&ID and return a predicted bounding box and predicted class of equipment for each symbol. One or more of the predicted bounding boxes may be decimated by non-maximum suppression to avoid overlapping detections. The P&ID data extraction application uses a second machine learning algorithm to infer properties for each detected symbol having a remaining predicted bounding box. The P&ID data extraction application stores the predicted bounding box and a label including the predicted class of equipment and inferred properties in a machine-readable format.
Type: Grant
Filed: December 21, 2020
Date of Patent: December 24, 2024
Assignee: Bentley Systems, Incorporated
Inventors: Marc-André Gardner, Karl-Alexandre Jahjah
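The overlap-pruning step this abstract describes is standard non-maximum suppression: keep the highest-scoring box and discard any box that overlaps a kept box too much. A minimal sketch, not Bentley's implementation; the 0.5 IoU threshold is an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(detections, iou_threshold=0.5):
    """Keep the highest-scoring box among heavily overlapping detections.

    `detections` is a list of (box, score) pairs; a box whose IoU with an
    already-kept box exceeds the threshold is discarded.
    """
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k) <= iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept
```

Sorting by score first means each surviving detection is the best-scoring representative of its overlap cluster.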
-
Patent number: 12094233
Abstract: An information processing apparatus includes a processor configured to control a display such that a result of recognition, which is obtained by recognizing an image on which a character string is written, and a result of comparison, which is obtained by comparing the result of recognition with a database registered in advance, are displayed next to each other.
Type: Grant
Filed: May 21, 2021
Date of Patent: September 17, 2024
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Yoshie Ohira
-
Patent number: 12045312
Abstract: A device, method, and non-transitory computer readable medium are described. The method includes receiving a dataset including hand written Arabic words and hand written Arabic alphabets from one or more users. The method further includes removing whitespace around alphabets in the hand written Arabic words and the hand written Arabic alphabets in the dataset. The method further includes splitting the dataset into a training set, a validation set, and a test set. The method further includes classifying one or more user datasets from the training set, the validation set, and the test set. The method further includes identifying the target user from the one or more user datasets. The identification of the target user includes a verification accuracy of the hand written Arabic words being larger than a verification accuracy threshold value.
Type: Grant
Filed: March 11, 2024
Date of Patent: July 23, 2024
Assignee: Prince Mohammad Bin Fahd University
Inventors: Majid Ali Khan, Nazeeruddin Mohammad, Ghassen Ben Brahim, Abul Bashar, Ghazanfar Latif
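The train/validation/test split this abstract mentions is a routine preprocessing step. A minimal sketch; the 80/10/10 ratio is an assumed default, as the abstract does not state one:

```python
import random

def split_dataset(samples, train=0.8, val=0.1, seed=0):
    """Shuffle and split samples into train/validation/test subsets.

    Shuffling with a fixed seed keeps the split reproducible; whatever is
    left after the train and validation fractions becomes the test set.
    """
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```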
-
Patent number: 12033412
Abstract: Systems and methods for extracting information from documents are provided. In one example embodiment, a computer-implemented method includes obtaining one or more units of text from an image of a document. The method includes determining one or more annotated values from the one or more units of text and determining a set of candidate labels for each annotated value. The method determines each set of candidate labels by performing a search for the candidate labels based at least in part on a language associated with the document and a location of each annotated value. The method includes determining a canonical label for each annotated value based at least in part on the associated candidate labels, and mapping at least one annotated value to an action that is presented to a user based at least in part on the canonical label associated with the annotated value.
Type: Grant
Filed: January 28, 2019
Date of Patent: July 9, 2024
Assignee: GOOGLE LLC
Inventors: Rakesh Iyer, Lisha Ruan
-
Patent number: 12019708
Abstract: A device, method, and non-transitory computer readable medium are described. The method includes receiving a dataset including hand written Arabic words and hand written Arabic alphabets from one or more users. The method further includes removing whitespace around alphabets in the hand written Arabic words and the hand written Arabic alphabets in the dataset. The method further includes splitting the dataset into a training set, a validation set, and a test set. The method further includes classifying one or more user datasets from the training set, the validation set, and the test set. The method further includes identifying the target user from the one or more user datasets. The identification of the target user includes a verification accuracy of the hand written Arabic words being larger than a verification accuracy threshold value.
Type: Grant
Filed: February 14, 2024
Date of Patent: June 25, 2024
Assignee: Prince Mohammad Bin Fahd University
Inventors: Majid Ali Khan, Nazeeruddin Mohammad, Ghassen Ben Brahim, Abul Bashar, Ghazanfar Latif
-
Patent number: 12008696
Abstract: A translation method is performed by an augmented reality (AR) device. The method includes obtaining a text to be translated from an environment image in response to a translation trigger instruction; obtaining a translation result of the text to be translated; and displaying the translation result based on a preset display mode in an AR space. The AR space is a virtual space constructed by the AR device based on a real environment.
Type: Grant
Filed: March 29, 2022
Date of Patent: June 11, 2024
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
Inventors: Zhixiong Yu, Zifei Dou, Xiang Li
-
Patent number: 11914673
Abstract: A device, method, and non-transitory computer readable medium are described. The method includes receiving a dataset including hand written Arabic words and hand written Arabic alphabets from one or more users. The method further includes removing whitespace around alphabets in the hand written Arabic words and the hand written Arabic alphabets in the dataset. The method further includes splitting the dataset into a training set, a validation set, and a test set. The method further includes classifying one or more user datasets from the training set, the validation set, and the test set. The method further includes identifying the target user from the one or more user datasets. The identification of the target user includes a verification accuracy of the hand written Arabic words being larger than a verification accuracy threshold value.
Type: Grant
Filed: October 5, 2021
Date of Patent: February 27, 2024
Assignee: Prince Mohammad Bin Fahd University
Inventors: Majid Ali Khan, Nazeeruddin Mohammad, Ghassen Ben Brahim, Abul Bashar, Ghazanfar Latif
-
Patent number: 11830195
Abstract: A training label image correction method includes performing a segmentation process on an input image (11) of training data (10) by a trained model (1) using the training data to create a determination label image (14), comparing labels of corresponding portions in the determination label image (14) and a training label image (12) with each other, and correcting label areas (13) included in the training label image based on label comparison results.
Type: Grant
Filed: August 6, 2018
Date of Patent: November 28, 2023
Assignee: Shimadzu Corporation
Inventors: Wataru Takahashi, Ayako Akazawa, Shota Oshikawa
-
Patent number: 11816182
Abstract: The present disclosure provides techniques for encoding and decoding characters for optical character recognition. The techniques involve determining sets of numbers for encoding a character set where each number in a particular set of numbers for encoding a particular character is mapped to a graphical unit (e.g., radical) of the particular character. A mapping between each set of numbers in the possible encodings and the character set may be determined based on the closest character already encoded. A machine learning model may be trained to perform optical character recognition using training data labeled using the set of encodings and the mappings.
Type: Grant
Filed: June 7, 2021
Date of Patent: November 14, 2023
Assignee: SAP SE
Inventors: Marco Spinaci, Marek Polewczyk
-
Patent number: 11755687
Abstract: A device, method, and non-transitory computer readable medium are described. The method includes receiving a dataset including hand written Arabic words and hand written Arabic alphabets from one or more users. The method further includes removing whitespace around alphabets in the hand written Arabic words and the hand written Arabic alphabets in the dataset. The method further includes splitting the dataset into a training set, a validation set, and a test set. The method further includes classifying one or more user datasets from the training set, the validation set, and the test set. The method further includes identifying the target user from the one or more user datasets. The identification of the target user includes a verification accuracy of the hand written Arabic words being larger than a verification accuracy threshold value.
Type: Grant
Filed: October 19, 2022
Date of Patent: September 12, 2023
Assignee: Prince Mohammad Bin Fahd University
Inventors: Majid Ali Khan, Nazeeruddin Mohammad, Ghassen Ben Brahim, Abul Bashar, Ghazanfar Latif
-
Patent number: 11734268
Abstract: Disclosed are methods, systems, devices, apparatus, media, design structures, and other implementations, including a method that includes receiving a source document, applying one or more pre-processes to the source document to produce contextual information representative of the structure and content of the source document, and transforming the source document, based on the contextual information, to generate a question-and-answer searchable document.
Type: Grant
Filed: June 25, 2021
Date of Patent: August 22, 2023
Assignee: Pryon Incorporated
Inventors: David Nahamoo, Igor Roditis Jablokov, Vaibhava Goel, Etienne Marcheret, Ellen Eide Kislal, Steven John Rennie, Marie Wenzel Meteer, Neil Rohit Mallinar, Soonthorn Ativanichayaphong, Joseph Allen Pruitt, John Pruitt, Bryan Dempsey, Chui Sung
-
Patent number: 11574240
Abstract: Methods and systems are provided for generating training data for training a classifier to assign nodes of a taxonomy graph to items based on item descriptions. Each node has a label. For each item, the system identifies one or more candidate paths within the taxonomy graph that are relevant to that item. The system identifies the candidate paths based on content of the item description of that item matching labels of nodes. A candidate path is a sequence of nodes starting at a root node of the taxonomy graph. For each identified candidate path, the system labels the item description with the candidate path or, equivalently, with its leaf node or the label of that leaf node. The labeled item descriptions compose the training data for training the classifier.
Type: Grant
Filed: March 19, 2019
Date of Patent: February 7, 2023
Assignee: YOURANSWER INTERNATIONAL PTY LTD.
Inventors: Rahmon Charles Coupe, Jonathan James Schutz, Halton James Stewart, Adam James Ingerman
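The candidate-path step above can be sketched as a label match followed by a walk back to the root. A toy illustration under assumed structures (the `parent` and `labels` dicts and whole-word matching are hypothetical, not the patented representation):

```python
def candidate_paths(description, parent, labels):
    """Return root-to-node paths for every taxonomy node whose label
    occurs as a word in the item description.

    `parent` maps node id -> parent id (the root maps to None);
    `labels` maps node id -> label text.
    """
    words = set(description.lower().split())
    paths = []
    for node, label in labels.items():
        if label.lower() in words:
            path, cur = [], node
            while cur is not None:        # walk up to the root
                path.append(cur)
                cur = parent[cur]
            paths.append(list(reversed(path)))  # root-first order
    return paths
```

Each returned path labels the item description; training on the path's leaf node is equivalent, since the path determines the leaf.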
-
Patent number: 11574456
Abstract: Aspects of the present disclosure relate to processing irregularly arranged characters. An image is received. An irregularly arranged character within the image is detected. A direction of the irregularly arranged character is modified to a proper direction to obtain a properly oriented character. The properly oriented character is recognized to obtain a first identified character. The image is then rebuilt by replacing the irregularly arranged character with the first identified character, the first identified character in a machine-encoded format.
Type: Grant
Filed: October 7, 2019
Date of Patent: February 7, 2023
Assignee: International Business Machines Corporation
Inventors: Zhuo Cai, Jian Dong Yin, Wen Wang, Rong Fu, Hao Sheng, Kang Zhang
-
Patent number: 11544510
Abstract: Systems and methods for classifying images (e.g., ads) are described. An image is accessed. Optical character recognition is performed on at least a first portion of the image. Image recognition is performed via a convolutional neural network on at least a second portion of the image. At least one class for the image is automatically identified, via a fully connected neural network, based on one or more predictions, each of the one or more predictions being based on both the optical character recognition and the image recognition. Finally, the at least one class identified for the image is output.
Type: Grant
Filed: July 11, 2019
Date of Patent: January 3, 2023
Assignee: Comscore, Inc.
Inventors: Yogen Chaudhari, Sean Pinkney, Prashanth Venkatraman, Ashwath Rajendran, Jay Parikh
-
Patent number: 11544539
Abstract: A hardware neural network conversion method, a computing device, a compiling method and a neural network software and hardware collaboration system for converting a neural network application into a hardware neural network fulfilling a hardware constraint condition are disclosed. The method comprises: obtaining a neural network connection diagram corresponding to the neural network application; splitting the neural network connection diagram into neural network basic units; converting each of the neural network basic units so as to form a network having equivalent functions thereto and formed by connecting basic module virtual entities of neural network hardware; and connecting the obtained basic unit hardware network according to the sequence of splitting so as to create a parameter file for the hardware neural network. The present disclosure provides a novel neural network and a brain-like computing software and hardware system.
Type: Grant
Filed: September 29, 2016
Date of Patent: January 3, 2023
Assignee: Tsinghua University
Inventors: Youhui Zhang, Yu Ji
-
Patent number: 11535102
Abstract: A digital instrument display method includes generating a bitmap font image and font table for a font; running an application module; running a graphic engine module; running a content module; transmitting, by a sensor unit attached to a vehicle, the generated signals to the application module; analyzing, by the application module, information received, and generating external event data; reading, by the content module, external event data stored in internal memory for each frame and determining whether the data are character string output; if the data are character string output, transmitting, by the content module, corresponding character string information to the graphic engine module; acquiring, by the graphic engine module, position and size information of the bitmap font image that matches the character string in the bitmap font table; and copying, by a control unit, position and size information of the bitmap font image that matches the character string acquired and displaying the character string through an instrument display unit using the graphic engine module and graphic library.
Type: Grant
Filed: October 26, 2020
Date of Patent: December 27, 2022
Assignee: GRAPICAR INC.
Inventors: Seyoung Lim, Dongsu Shin, Jeonghun Ko
-
Patent number: 11531859
Abstract: A method for a neural network includes receiving an input from a vector of inputs, determining a table index based on the input, and retrieving a hash table from a plurality of hash tables, wherein the hash table corresponds to the table index. The method also includes determining an entry index of the hash table based on an index matrix, wherein the index matrix includes one or more index values, and each of the one or more index values corresponds to a vector in the hash table, and determining an entry value in the hash table corresponding to the entry index. The method also includes determining a value index, wherein the vector in the hash table includes one or more entry values, and wherein the value index corresponds to one of the one or more entry values in the vector, and determining a layer response.
Type: Grant
Filed: December 22, 2017
Date of Patent: December 20, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Duanduan Yang, Xiang Sun
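The lookup chain in this abstract (table index → hash table → entry index → value index → layer response) can be sketched as replacing a weight multiply with nested table lookups. A speculative toy sketch; the bucketing by input position and the accumulation into a single response are illustrative assumptions, not the patented scheme:

```python
def hashed_layer_response(inputs, tables, index_matrix):
    """Approximate a layer's weights by table lookups instead of a
    stored weight matrix.

    For input position i: a table index selects one of several small
    hash tables, the index matrix picks a row (vector) in that table,
    and a value index selects the entry that stands in for the weight.
    """
    response = 0.0
    for i, x in enumerate(inputs):
        table = tables[i % len(tables)]            # table index from input position
        row = table[index_matrix[i] % len(table)]  # entry index via the index matrix
        value = row[i % len(row)]                  # value index within the vector
        response += value * x                      # accumulate the layer response
    return response
```

The point of the indirection is that many weights share a small pool of stored values, shrinking the layer's memory footprint.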
-
Patent number: 11422834
Abstract: One or more computing devices, systems, and/or methods for implementing automated barriers and delays for communication are provided. Content generated by a user may be evaluated to classify the content before the content has been submitted for access by other users. A user interface is generated and populated with an activity for the user to perform based upon a classification of the content. The user may be restricted from submitting the content until successful performance of the activity. Upon determining that the user successfully performed the activity, the user may be provided with an option to submit the content. Otherwise, the user may be blocked from submitting the content.
Type: Grant
Filed: March 25, 2019
Date of Patent: August 23, 2022
Assignee: YAHOO ASSETS LLC
Inventors: John Donald, Eric Theodore Bax, Kimberly Williams, Tanisha Sharma, Melissa Susan Gerber, Nikki Mia Williams
-
Patent number: 11150751
Abstract: Systems and methods for a dynamically reconfigurable touchpad are described. In an illustrative, non-limiting embodiment, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to: receive a configuration parameter from a user; and modify a touchpad by applying the configuration parameter to the operation of proximity sensors disposed under a non-display surface of the IHS.
Type: Grant
Filed: May 9, 2019
Date of Patent: October 19, 2021
Assignee: Dell Products, L.P.
Inventors: Srinivas Kamepalli, Deeder M. Aurongzeb, Mohammed K. Hijazi
-
Patent number: 11113558
Abstract: An information processing apparatus includes an extraction unit that extracts a character string corresponding to a keyword from a character string including the keyword described across plural lines, in accordance with an extraction condition of the character string corresponding to the keyword, a combining unit that combines character strings extracted by the extraction unit in accordance with a line sequence, and an output unit that outputs the character strings combined by the combining unit as a character string corresponding to the keyword.
Type: Grant
Filed: August 1, 2019
Date of Patent: September 7, 2021
Assignee: FUJIFILM Business Innovation Corp.
Inventors: Kunihiko Kobayashi, Junichi Shimizu, Daigo Horie
-
Patent number: 11106931
Abstract: Systems and methods for performing OCR of an image depicting text symbols and imaging a document having a plurality of planar regions are disclosed. An example method comprises: receiving a first image of a document having a plurality of planar regions and one or more second images of the document; identifying a plurality of coordinate transformations corresponding to each of the planar regions of the first image of the document; identifying, using the plurality of coordinate transformations, a cluster of symbol sequences of the text in the first image and in the one or more second images; and producing a resulting OCR text comprising a median symbol sequence for the cluster of symbol sequences.
Type: Grant
Filed: August 22, 2019
Date of Patent: August 31, 2021
Assignee: ABBYY Production LLC
Inventor: Aleksey Kalyuzhny
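A median symbol sequence over a cluster of OCR readings can be approximated by a per-position majority vote, which corrects isolated misreads. A simplified sketch that assumes the sequences are already aligned to the same length (the patent handles alignment via the coordinate transformations):

```python
from collections import Counter

def median_sequence(cluster):
    """Per-position majority vote over equal-length OCR readings of the
    same text: at each character position, keep the symbol most of the
    readings agree on."""
    return "".join(
        Counter(chars).most_common(1)[0][0]
        for chars in zip(*cluster)
    )
```

With three readings, any position where at most one reading erred is recovered, e.g. a stray "1" where two readings saw "l".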
-
Patent number: 10699195
Abstract: Systems and methods are disclosed herein for ensuring a safe mutation of a neural network. A processor determines a threshold value representing a limit on an amount of divergence of response for the neural network. The processor identifies a set of weights for the neural network, the set of weights beginning as an initial set of weights. The processor trains the neural network by repeating steps including determining a safe mutation representing a perturbation that results in a response of the neural network that is within the threshold divergence, and modifying the set of weights of the neural network in accordance with the safe mutation.
Type: Grant
Filed: December 14, 2018
Date of Patent: June 30, 2020
Assignee: Uber Technologies, Inc.
Inventors: Joel Anthony Lehman, Kenneth Owen Stanley, Jeffrey Michael Clune
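One way to realize a "safe mutation" is to draw a random perturbation and shrink it until the network's response stays within the divergence threshold. A toy sketch under stated assumptions: `respond(weights)` returning a scalar probe output and the halving search are hypothetical stand-ins, not the patented procedure (which the inventors' related work implements via gradient-scaled perturbations):

```python
import random

def safe_mutation(weights, respond, reference, threshold, step=0.1, seed=0):
    """Return mutated weights whose response divergence from `reference`
    (the pre-mutation response) is within `threshold`.

    A Gaussian perturbation is repeatedly halved until the response of
    the perturbed network is close enough to the original response.
    """
    rng = random.Random(seed)
    delta = [rng.gauss(0.0, step) for _ in weights]
    while abs(respond([w + d for w, d in zip(weights, delta)]) - reference) > threshold:
        delta = [d / 2 for d in delta]  # shrink until the divergence is safe
    return [w + d for w, d in zip(weights, delta)]
```

The loop terminates because halving drives the perturbation, and hence the divergence of any continuous response, toward zero.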
-
Patent number: 10635966
Abstract: A parallel convolutional neural network is provided. The CNN is implemented by a plurality of convolutional neural networks each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers is interconnected between processing nodes such that activations are fed forward across nodes. The remaining subset is not so interconnected.
Type: Grant
Filed: January 24, 2017
Date of Patent: April 28, 2020
Assignee: Google LLC
Inventors: Alexander Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
-
Patent number: 10360494
Abstract: Embodiments of a convolutional neural network (CNN) system based on using resolution-limited small-scale CNN modules are disclosed. In some embodiments, a CNN system includes: a receiving module for receiving an input image of a first image size, the receiving module can be used to partition the input image into a set of subimages of a second image size; a first processing stage that includes a first hardware CNN module configured with a maximum input image size, the first hardware CNN module is configured to sequentially receive the set of subimages and sequentially process the received subimages to generate a set of outputs; a merging module for merging the sets of outputs into a set of merged feature maps; and a second processing stage for receiving the set of feature maps and processing the set of feature maps to generate an output including at least one prediction on the input image.
Type: Grant
Filed: February 23, 2017
Date of Patent: July 23, 2019
Assignee: AltumView Systems Inc.
Inventors: Xing Wang, Him Wai Ng, Jie Liang
-
Patent number: 10347293
Abstract: Provided is a process, including: obtaining screen-cast video; determining amounts of difference between respective frames; selecting a subset of frames based on the amounts; causing OCRing of each frame in the subset of frames; classifying text in each frame-OCR record as confidential or non-confidential; and forming a redacted version of the screen-cast video based on the classifying.
Type: Grant
Filed: July 31, 2018
Date of Patent: July 9, 2019
Assignee: Droplr, Inc.
Inventors: Gray Skinner, Levi Nunnink
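The frame-selection step avoids OCRing near-identical frames by keeping a frame only when it differs enough from the last kept one. A minimal sketch; frames as flat pixel lists and mean absolute pixel difference are assumptions, since the abstract only says "amounts of difference":

```python
def select_key_frames(frames, threshold):
    """Return indices of frames that differ enough from the last kept frame.

    The first frame is always kept; each later frame is compared to the
    most recently kept frame by mean absolute pixel difference.
    """
    kept = [0]
    for i in range(1, len(frames)):
        prev = frames[kept[-1]]
        diff = sum(abs(a - b) for a, b in zip(frames[i], prev)) / len(prev)
        if diff >= threshold:
            kept.append(i)
    return kept
```

Only the selected frames would then be OCRed and classified, cutting the per-video OCR cost roughly to the number of visually distinct screens.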
-
Patent number: 10204271
Abstract: In the present invention, an attribution is extracted from each region obtained by segmentation of an image, relationships among the regions are described, and a composition of the image is evaluated based on the attributions and the relationships.
Type: Grant
Filed: July 31, 2014
Date of Patent: February 12, 2019
Assignee: CANON KABUSHIKI KAISHA
Inventors: You Lv, Yong Jiang, Bo Wu, Xian Li
-
Patent number: 10163063
Abstract: Computer program products and systems are provided for mining for sub-patterns within a text data set. The embodiments facilitate finding a set of N frequently occurring sub-patterns within the data set, extracting the N sub-patterns from the data set, and clustering the extracted sub-patterns into K groups, where each extracted sub-pattern is placed within the same group with other extracted sub-patterns based upon a distance value D that determines a degree of similarity between the sub-pattern and every other sub-pattern within the same group.
Type: Grant
Filed: March 7, 2012
Date of Patent: December 25, 2018
Assignee: International Business Machines Corporation
Inventors: Snigdha Chaturvedi, Tanveer A Faruquie, Hima P. Karanam, Marvin Mendelssohn, Mukesh K. Mohania, L. Venkata Subramaniam
-
Patent number: 10088977
Abstract: An operating method of an electronic device is provided. The method includes selecting an area corresponding to at least one field of a page displayed through a display of the electronic device on the basis of an input; confirming an attribute corresponding to the at least one field among a plurality of attributes including a first attribute and a second attribute; and selectively providing a content corresponding to the attribute among at least one content including a first content and a second content according to the confirmed attribute.
Type: Grant
Filed: September 2, 2014
Date of Patent: October 2, 2018
Assignee: Samsung Electronics Co., Ltd
Inventors: Tae-Yeon Kim, Sang-Hyuk Koh, Hee-Jin Kim, Bo-Hyun Sim, Hye-Mi Lee, Si-Hak Jang
-
Patent number: 10019492
Abstract: The present application relates to the field of computer technologies, and in particular, to a stop word identification method used in an information retrieval system. In a stop word identification method, after a first query input by a user is acquired, a second query that belongs to a same session as the first query is acquired, and a stop word in the first query is identified according to a change-based feature of each word in the first query relative to the second query. According to the solution provided by the present application, a stop word in a query can be identified more accurately, and efficiency and precision of an information retrieval system are improved.
Type: Grant
Filed: September 1, 2017
Date of Patent: July 10, 2018
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wenli Zhou, Zhe Wang, Feiran Hu
-
Patent number: 9946455
Abstract: A message screen display comprises a static non-scrollable display area for display of at least part of a first message, the first message having an associated first message time. The message screen display further comprises a scrollable display area for display of at least part of a second message, the second message having an associated second message time. The message screen display further comprises a feature applied to at least part of the first message that varies based on time as referenced to the associated first message time.
Type: Grant
Filed: May 21, 2015
Date of Patent: April 17, 2018
Assignee: New York Stock Exchange LLC
Inventors: Robert B. Hlad, Valerie Jeanne Schafer, Cynthia Teresa Bautista-Rozenberg, Robert S. Tannen, Nicholas L. Springer
-
Patent number: 9940511
Abstract: Systems, computer program products, and techniques for discriminating hand and machine print from each other, and from signatures, are disclosed and include determining a color depth of an image, the color depth corresponding to at least one of grayscale, bi-tonal and color; reducing color depth of non-bi-tonal images to generate a bi-tonal representation of the image; identifying a set of one or more graphical line candidates in either the bi-tonal image or the bi-tonal representation, the graphical line candidates including one or more of true graphical lines and false positives; discriminating any of the true graphical lines from any of the false positives; removing the true graphical lines from the bi-tonal image or the bi-tonal representation without removing the false positives to generate a component map comprising connected components and excluding graphical lines; and identifying one or more of the connected components in the component map.
Type: Grant
Filed: May 29, 2015
Date of Patent: April 10, 2018
Assignee: KOFAX, INC.
Inventors: Alexander Shustorovich, Christopher W. Thrasher, Anthony Macciola, Jan W. Amtrup
-
Patent number: 9914213
Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
Type: Grant
Filed: March 2, 2017
Date of Patent: March 13, 2018
Assignee: GOOGLE LLC
Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
-
Patent number: 9798818
Abstract: A method and apparatus are provided for automatically generating and processing first and second concept vector sets extracted, respectively, from a first set of concept sequences and from a second, temporally separated, set of concept sequences by performing a natural language processing (NLP) analysis of the first concept vector set and second concept vector set to detect changes in the corpus over time by identifying changes for one or more concepts included in the first and/or second set of concept sequences.
Type: Grant
Filed: September 22, 2015
Date of Patent: October 24, 2017
Assignee: International Business Machines Corporation
Inventors: Tin Kam Ho, Luis A. Lastras-Montano, Oded Shmueli
-
Patent number: 9652439
Abstract: Systems and methods are provided through which data is made parseable against a document type definition by generating a list of the possible paths of an input element that is not encoded against the document type definition, determining the path that is the best fit with the document type definition, and then generating the element in the syntax of the document type definition. Determining the path that is the best fit includes parsing the path against the document type definition. The best fit is expressed in a scoring scale, in which the best score indicates the best fit. Thereafter, the path with the best fit is translated in accordance with the document type definition or markup language.
Type: Grant
Filed: August 4, 2014
Date of Patent: May 16, 2017
Assignee: Thomson Reuters Global Resources
Inventor: Michael S. Zaharkin
-
Patent number: 9600135
Abstract: A system for executing a multimodal software application includes a mobile computer device with a plurality of input interface components, the multimodal software application, and a dialog engine in operative communication with the multimodal software application. The multimodal software application is configured to receive first data from the plurality of input interface components. The dialog engine executes a workflow description from the multimodal software application by providing prompts to an output interface component. Each of these prompts includes notification indicating which of the input interface components are valid receivers for that respective prompt. Furthermore, the notification may indicate the current prompt and at least the next prompt in sequence.
Type: Grant
Filed: September 10, 2010
Date of Patent: March 21, 2017
Assignee: Vocollect, Inc.
Inventor: Sean Nickel
-
Patent number: 9524447
Abstract: A reference 2D data is provided. The reference 2D data comprises a first plurality of pixels defined in 2D coordinates. The reference 2D data is transformed into a reference 1D data having a first 1D size. The reference 1D data comprises the first plurality of pixels in a transformed 1D order. A plurality of input 2D data are also provided. An input 2D data comprises a second plurality of pixels defined in 2D coordinates. The plurality of input 2D data are transformed into a plurality of input 1D data, which comprises transforming the input 2D data into an input 1D data. Transforming the input 2D data into the input 1D data is the same as transforming the reference 2D data into the reference 1D data. Finally, a transformed input 1D data that matches the transformed reference 1D data is searched for among the plurality of input 1D data.
Type: Grant
Filed: March 5, 2014
Date of Patent: December 20, 2016
Inventor: Sizhe Tan
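The key constraint above is that the reference and every input go through the same 2D-to-1D transform, so matching reduces to comparing 1D sequences. A minimal sketch using row-major flattening as the assumed transform order (any fixed order works as long as it is shared):

```python
def to_1d(image_2d):
    """Row-major flattening of a 2D pixel grid into a 1D sequence."""
    return [px for row in image_2d for px in row]

def find_match(reference_2d, inputs_2d):
    """Return the index of the input whose 1D transform equals the
    reference's 1D transform, or -1 if none matches."""
    ref_1d = to_1d(reference_2d)
    for i, image in enumerate(inputs_2d):
        if to_1d(image) == ref_1d:
            return i
    return -1
```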
-
Patent number: 9465912
Abstract: An apparatus and a method for mining temporal patterns are provided. A method for mining temporal patterns includes generating a data pattern group comprising data patterns from sequential data, generating a candidate pattern group comprising candidate patterns from the data pattern group, calculating a support value for a candidate pattern in a candidate pattern group based on a discrepancy value of the candidate pattern, and determining whether the candidate pattern satisfies a predetermined pattern requirement, based on the calculated support value.
Type: Grant
Filed: May 1, 2014
Date of Patent: October 11, 2016
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyoung-Min Park, Hyo-A Kang, Ki-Yong Lee
-
Patent number: 9396413. Abstract: Methods, systems and apparatus for choosing image labels. In one aspect, a method includes receiving data specifying a first image, receiving text labels for the first image, receiving search results in response to a web search performed using at least some of the text labels as queries, ranking the text labels, at least in part, based on a number of resources referenced by the received search results, wherein at least some of the resources each include an image matching the first image, and selecting an image label for the image from the ranked text labels, the image label being selected based on the ranking. Type: Grant. Filed: November 13, 2015. Date of Patent: July 19, 2016. Assignee: Google Inc. Inventors: Yong Zhang, Charles J. Rosenberg, Jingbin Wang, Sean O'Malley
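The ranking step can be sketched as follows. The `search` and `matches_image` callables are stand-ins for the web search and the image-matching step that the abstract assumes; their implementations here are hypothetical.

```python
def rank_labels(labels, search, matches_image):
    """Score each label by the number of resources in its search results
    that contain an image matching the first image, best first."""
    scored = []
    for label in labels:
        resources = search(label)  # web search using the label as a query
        count = sum(1 for r in resources if matches_image(r))
        scored.append((count, label))
    scored.sort(reverse=True)
    return [label for count, label in scored]

def choose_label(labels, search, matches_image):
    """Select the image label based on the ranking."""
    ranked = rank_labels(labels, search, matches_image)
    return ranked[0] if ranked else None
```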
-
Patent number: 9218546. Abstract: Methods, systems and apparatus for choosing image labels. In one aspect, a method includes receiving data specifying a first image, receiving text labels for the first image, receiving search results in response to a web search performed using at least some of the text labels as queries, ranking the text labels, at least in part, based on a number of resources referenced by the received search results, wherein at least some of the resources each include an image matching the first image, and selecting an image label for the image from the ranked text labels, the image label being selected based on the ranking. Type: Grant. Filed: June 1, 2012. Date of Patent: December 22, 2015. Assignee: Google Inc. Inventors: Yong Zhang, Charles J. Rosenberg, Jingbin Wang, Sean O'Malley
-
Patent number: 9117375. Abstract: A computerized assessment grading method comprises creating a syntax tree for a received equation-based response to at least one assessment question and a syntax tree for at least one solution to the at least one question, comparing the syntax trees, and grading the response based on the results of the comparison. Type: Grant. Filed: June 27, 2011. Date of Patent: August 25, 2015. Assignee: SMART Technologies ULC. Inventors: David Labine, Lothar Wenzel, Albert Chu
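The tree-comparison idea can be sketched with Python's standard `ast` module standing in for whatever math-expression parser the patented system uses; the all-or-nothing grading rule below is an illustrative assumption, since the abstract only says grading is based on the comparison.

```python
import ast

def syntax_tree(expr):
    """Parse an expression and return a canonical dump of its syntax tree."""
    return ast.dump(ast.parse(expr, mode="eval"))

def grade(response, solutions):
    """Full marks if the response's syntax tree matches any solution's tree;
    whitespace and redundant parentheses do not affect the tree."""
    tree = syntax_tree(response)
    return 1.0 if any(syntax_tree(s) == tree for s in solutions) else 0.0
```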
-
Patent number: 9104936. Abstract: A method of reading data represented by characters formed of an x by y array of dots, e.g. as printed by a dot-matrix printer, is described. An image of the character(s) is captured by a digital camera device and transmitted to a computer. Analysis software operating in the computer identifies dot shapes and detects their positions within the captured image, based on the similarity of dots to idealized representations of dots, using a combination of covariance, correlation or color data. The position information about the detected dots is then processed to determine the distance between dots, to identify "clusters" of adjacent dots in groups of dots close to one another, and to enable such clusters to be mapped onto a notional x by y grid that defines the intended positions of the dots where grid elements intersect. Type: Grant. Filed: September 14, 2012. Date of Patent: August 11, 2015. Assignee: Wessex Technology Opto-Electronic Products Limited. Inventors: Alan Joseph Bell, Martin Robinson, Guanhua Chen
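The grid-mapping step can be sketched as follows, assuming the grid origin and pitch are already known (the patent derives them from inter-dot distances, which is omitted here).

```python
def snap_to_grid(dots, origin, pitch, nx, ny):
    """Map detected dot centres to (col, row) intersections of a notional
    nx-by-ny grid; dots falling outside the grid are ignored."""
    ox, oy = origin
    grid = set()
    for x, y in dots:
        col = round((x - ox) / pitch)  # nearest grid column
        row = round((y - oy) / pitch)  # nearest grid row
        if 0 <= col < nx and 0 <= row < ny:
            grid.add((col, row))
    return grid
```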
-
Patent number: 9104306. Abstract: A user device is disclosed which includes a touch input and a keypad input. The user device is configured to operate in a gesture capture mode as well as a navigation mode. In the navigation mode, the user interfaces with the touch input to move a cursor or similar selection tool within the user output. In the gesture capture mode, the user interfaces with the touch input to provide gesture data that is translated into key code output having a similar or identical format to outputs of the keypad. Type: Grant. Filed: October 29, 2010. Date of Patent: August 11, 2015. Assignee: Avago Technologies General IP (Singapore) Pte. Ltd. Inventors: Lye Hock Bernard Chan, Chong Pin Jonathan Teoh, Boon How Kok
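A sketch of the two modes: in navigation mode touch events move a cursor, while in gesture capture mode they are looked up in a gesture-to-key-code table so the output looks like keypad output. The gesture names and key codes below are hypothetical.

```python
# Hypothetical gesture-to-key-code table (values mimic keypad scan codes).
GESTURE_TO_KEYCODE = {"swipe_right": 0x27, "swipe_left": 0x25, "circle": 0x0D}

class TouchInput:
    """Touch input that behaves differently in navigation vs. gesture mode."""

    def __init__(self):
        self.mode = "navigation"
        self.cursor = (0, 0)

    def handle(self, event):
        if self.mode == "navigation":
            dx, dy = event  # touch deltas move the cursor
            x, y = self.cursor
            self.cursor = (x + dx, y + dy)
            return None
        # Gesture capture mode: translate the gesture into a key code.
        return GESTURE_TO_KEYCODE.get(event)
```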
-
Patent number: 9098768. Abstract: A character detection apparatus is provided that detects, from an image including a first image representing a character and a second image representing a translucent object, the character. The character detection apparatus includes a calculating portion that, for each of blocks obtained by dividing an overlapping region in which the first image is overlapped by the second image, calculates a frequency of appearance of pixels for each of gradations of a property, and a detection portion that detects the character from the overlapping region based on the frequency for each of the gradations. Type: Grant. Filed: January 4, 2012. Date of Patent: August 4, 2015. Assignee: KONICA MINOLTA, INC. Inventor: Tomoo Yamanaka
-
Publication number: 20150139539. Abstract: An apparatus and method for detecting forgery/falsification of a homepage. The apparatus includes a homepage image shot generation module for generating homepage image shots of an entire screen of an accessed homepage. A character string extraction module extracts character strings from each homepage image shot using an OCR technique. A character string comparison module compares each of the extracted character strings with character strings required for determination of homepage forgery/falsification, thus determining whether the extracted character string is a normal character string or a falsified character string. A homepage falsification determination module determines whether the corresponding homepage has been forged/falsified, based on results of the comparison. A character string learning module learns the character string extracted from the homepage image shot, based on results of the determination, and classifies the character string as the normal character string or the falsified character string. Type: Application. Filed: August 25, 2014. Publication date: May 21, 2015. Inventors: Taek kyu LEE, Geun Yong KIM, Seok won LEE, Myeong Ryeol CHOI, Hyung Geun OH, KiWook SOHN
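The comparison and determination modules can be sketched as below; the string sets are invented examples, and flagging the page on any falsified string is an assumed decision rule.

```python
# Hypothetical reference sets of known-normal and known-falsified strings.
NORMAL = {"Secure Login", "Official Notice"}
FALSIFIED = {"Verify your password here", "Account suspended - pay now"}

def classify_string(s):
    """Comparison module: label an OCR-extracted string."""
    if s in FALSIFIED:
        return "falsified"
    if s in NORMAL:
        return "normal"
    return "unknown"

def page_is_forged(extracted_strings):
    """Determination module: flag the page if any string is falsified."""
    return any(classify_string(s) == "falsified" for s in extracted_strings)
```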
-
Patent number: 8995741. Abstract: Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition ("OCR"). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm. Type: Grant. Filed: August 15, 2014. Date of Patent: March 31, 2015. Assignee: Google Inc. Inventors: Sanjiv Kumar, Henry Allan Rowley, Xiaohang Wang, Jose Jeronimo Moreira Rodrigues
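The pre-OCR linear-classifier step can be sketched generically: score sliding windows of the card image with a linear function and keep windows that score above zero as candidate digit locations. The features, weights, and threshold here are illustrative placeholders, not the trained model from the patent.

```python
def linear_score(features, weights, bias):
    """Linear classifier: weighted sum of window features plus a bias."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def digit_windows(window_features, weights, bias):
    """Return indices of image windows the classifier marks as digit
    locations, so OCR runs only on those windows."""
    return [i for i, f in enumerate(window_features)
            if linear_score(f, weights, bias) > 0]
```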
-
Patent number: 8977042. Abstract: A character recognition system receives an unknown character and recognizes the character based on a pre-trained recognition model. Prior to recognizing the character, the character recognition system may pre-process the character to rotate the character to a normalized orientation. By rotating the character to a normalized orientation in both training and recognition stages, the character recognition system releases the pre-trained recognition model from considering character prototypes in different orientations and thereby speeds up recognition of the unknown character. In one example, the character recognition system rotates the character to the normalized orientation by aligning a line between a sum of coordinates of starting points and a sum of coordinates of ending points of each stroke of the character with a normalized direction. Type: Grant. Filed: March 23, 2012. Date of Patent: March 10, 2015. Assignee: Microsoft Corporation. Inventors: Qiang Huo, Jun Du
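The orientation example in the abstract is concrete enough to sketch: sum the stroke start points and end points, take the direction of the line between the two sums, and rotate every point so that line aligns with a normalized direction (here, pointing along the positive x-axis, which is an assumed choice).

```python
import math

def normalize_orientation(strokes):
    """Rotate a handwritten character to a normalized orientation.

    strokes: list of strokes, each a list of (x, y) points in drawing order."""
    # Sum of coordinates of starting points and of ending points of each stroke.
    sx = sum(s[0][0] for s in strokes); sy = sum(s[0][1] for s in strokes)
    ex = sum(s[-1][0] for s in strokes); ey = sum(s[-1][1] for s in strokes)
    angle = math.atan2(ey - sy, ex - sx)  # current direction of that line
    # Rotate by -angle so the line aligns with the positive x-axis.
    c, s_ = math.cos(-angle), math.sin(-angle)
    return [[(x * c - y * s_, x * s_ + y * c) for x, y in stroke]
            for stroke in strokes]
```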
-
Publication number: 20140363074. Abstract: Methods, systems, and computer-readable media related to a technique for providing handwriting input functionality on a user device. A handwriting recognition module is trained to have a repertoire comprising multiple non-overlapping scripts and capable of recognizing tens of thousands of characters using a single handwriting recognition model. The handwriting input module provides real-time, stroke-order and stroke-direction independent handwriting recognition. User interfaces for providing the handwriting input functionality are also disclosed. Type: Application. Filed: May 30, 2014. Publication date: December 11, 2014. Applicant: Apple Inc. Inventors: Jannes G. A. DOLFING, Karl M. GROETHE, Ryan S. DIXON, Jerome R. BELLEGARDA
-
Patent number: 8908961. Abstract: A method for automatically recognizing Arabic text includes building an Arabic corpus comprising Arabic text files written in different writing styles and ground truths corresponding to each of the Arabic text files, storing writing-style indices in association with the Arabic text files, digitizing an Arabic word to form an array of pixels, dividing the Arabic word into line images, forming a text feature vector from the line images, training a Hidden Markov Model using the Arabic text files and ground truths in the Arabic corpus in accordance with the writing-style indices, and feeding the text feature vector into a Hidden Markov Model to recognize the Arabic words. Type: Grant. Filed: April 23, 2014. Date of Patent: December 9, 2014. Assignee: King Abdulaziz City for Science & Technology. Inventors: Mohammad S. Khorsheed, Hussein K. Al-Omari
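The feature-extraction step alone can be sketched as follows: the digitized word is divided into narrow vertical line images, and each contributes an ink-density value to the text feature vector fed to the HMM. Ink density per frame is an assumed feature; the patent does not specify the feature in the abstract, and the HMM itself is omitted.

```python
def feature_vector(pixels, frame_width=3):
    """Divide a binarized word image into vertical frames and return the
    per-frame ink density (fraction of set pixels), left to right.

    pixels: 2D list of 0/1 values, rows of equal length."""
    height = len(pixels)
    width = len(pixels[0])
    feats = []
    for x0 in range(0, width, frame_width):
        frame = [row[x0:x0 + frame_width] for row in pixels]
        ink = sum(sum(r) for r in frame)
        area = height * len(frame[0])
        feats.append(ink / area)
    return feats
```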
-
Patent number: 8908971. Abstract: Methods, devices and systems are described for transcribing text from artifacts to electronic files. A computer system is provided, wherein the computer system comprises a computer-readable storage device. An image of the artifact is received wherein text is present on the artifact. A first portion of the text is analyzed. Characters representing the first portion of the text are identified at a first confidence level equal to or greater than a threshold confidence level. The characters representing the first portion of the text are stored. A second portion of the text appearing on the artifact is analyzed. A plurality of candidates to represent the second portion of the text are identified at a second confidence level below the threshold confidence level. Finally, the plurality of candidates are presented to a user for selection. Type: Grant. Filed: September 25, 2013. Date of Patent: December 9, 2014. Assignee: Ancestry.com Operations Inc. Inventor: Lee Samuel Jensen
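The two-path flow can be sketched as below: portions recognized at or above the confidence threshold are stored directly, while lower-confidence portions yield a candidate list for the user. The recognizer output is simulated as `(candidates, confidence)` pairs, and `choose` stands in for the user-selection UI.

```python
THRESHOLD = 0.9  # assumed threshold confidence level

def transcribe(portions, choose):
    """Transcribe text portions from an artifact image.

    portions: list of (candidates, confidence) pairs, candidates best-first.
    choose(candidates): presents candidates to a user and returns the pick."""
    text = []
    for candidates, confidence in portions:
        if confidence >= THRESHOLD:
            text.append(candidates[0])       # store the top hypothesis directly
        else:
            text.append(choose(candidates))  # present candidates to the user
    return " ".join(text)
```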
-
Patent number: 8885931. Abstract: One or more techniques and/or systems are disclosed for mitigating machine solvable human interactive proofs (HIPs). A classifier is trained over a set of one or more training HIPs that have known characteristics for OCR solvability and HIP solving pattern from actual use. A HIP classification is determined for a HIP (such as from a HIP library used by a HIP generator) using the trained classifier. If the HIP is classified by the trained classifier as a merely human solvable classification, such that it may not be solved by a machine, the HIP can be identified for use in the HIP generation system. Otherwise, the HIP can be altered to (attempt to) be merely human solvable. Type: Grant. Filed: January 26, 2011. Date of Patent: November 11, 2014. Assignee: Microsoft Corporation. Inventor: Kumar S. Srivastava
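The classify-then-alter loop can be sketched generically: a trained classifier (here just a callable returning True for merely-human-solvable HIPs) gates each HIP, and machine-solvable ones are altered and re-checked before use. The classifier and alteration function below are stand-ins, not the patent's trained model.

```python
def classify_hip(hip, classifier):
    """Label a HIP using the (assumed pre-trained) classifier callable."""
    return "human-only" if classifier(hip) else "machine-solvable"

def prepare_hip(hip, classifier, alter, max_tries=3):
    """Return a HIP fit for the generation system, altering it until it is
    classified as merely human solvable; give up after max_tries."""
    for _ in range(max_tries):
        if classify_hip(hip, classifier) == "human-only":
            return hip
        hip = alter(hip)  # attempt to make it merely human solvable
    return None
```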