Context Analysis Or Word Recognition (e.g., Character String) Patents (Class 382/229)
  • Patent number: 9727804
    Abstract: Determining a set of edit operations to perform on a string, such as one generated by optical character recognition, to satisfy a string template by determining a minimum cost of performing edit operations on the string to satisfy the string template and then determining the set of edit operations corresponding to the minimum cost. Transforming a string to satisfy one or more string templates by determining a minimum cost of performing edit operations on the string to satisfy one or more string templates, selecting one or more minimum costs, determining a set of edit operations corresponding to the minimum costs, and then performing the set of edit operations on the string. Determining a minimum cost of performing edit operations on a string to satisfy a string template by determining set costs of performing sets of edit operations using costs associated with edit operations of the set and determining the minimum cost using the set costs.
    Type: Grant
    Filed: April 15, 2005
    Date of Patent: August 8, 2017
    Assignee: Matrox Electronic Systems, LTD.
    Inventor: Jean-Simon Lapointe
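
A minimal sketch of the template-constrained edit idea described in patent 9727804 above: a dynamic program finds the minimum cost of substitutions, insertions, and deletions needed to make an OCR string satisfy a string template, then backtracks to recover one set of edit operations achieving that cost. The template mini-language ('A' for letter, '9' for digit, anything else literal), the unit costs, and the function names are illustrative assumptions, not the patent's actual encoding.

```python
# Template slots: 'A' = any letter, '9' = any digit, anything else must match literally.
def char_satisfies(ch, slot):
    if slot == 'A':
        return ch.isalpha()
    if slot == '9':
        return ch.isdigit()
    return ch == slot

def min_edit_to_template(s, template, sub_cost=1, ins_cost=1, del_cost=1):
    n, m = len(s), len(template)
    # cost[i][j] = minimum cost to make s[:i] satisfy template[:j]
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i * del_cost
    for j in range(1, m + 1):
        cost[0][j] = j * ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            keep = cost[i - 1][j - 1] + (0 if char_satisfies(s[i - 1], template[j - 1]) else sub_cost)
            cost[i][j] = min(keep,
                             cost[i - 1][j] + del_cost,   # drop s[i-1]
                             cost[i][j - 1] + ins_cost)   # insert a character for slot j
    # backtrack to recover one set of edit operations corresponding to the minimum cost
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (
                0 if char_satisfies(s[i - 1], template[j - 1]) else sub_cost):
            if not char_satisfies(s[i - 1], template[j - 1]):
                ops.append(('substitute', i - 1, template[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + del_cost:
            ops.append(('delete', i - 1))
            i -= 1
        else:
            ops.append(('insert', i, template[j - 1]))
            j -= 1
    return cost[n][m], list(reversed(ops))

print(min_edit_to_template('AB(123', 'AA-999'))
# (1, [('substitute', 2, '-')]): replace '(' with a character satisfying the '-' slot
```
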
  • Patent number: 9720644
    Abstract: A system that acquires captured voice data corresponding to a spoken command; sequentially analyzes the captured voice data; causes a display to display a visual indication corresponding to the sequentially analyzed captured voice data; and performs a predetermined operation corresponding to the spoken command when it is determined that the sequential analysis of the captured voice data is complete.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: August 1, 2017
    Assignee: SONY CORPORATION
    Inventors: Junki Ohmura, Michinari Kohno, Kenichi Okada
  • Patent number: 9691009
    Abstract: The present invention provides a portable optical reader, an optical reading method using the portable optical reader, and a computer program capable of detecting a high possibility of a reading error and notifying a user of it. A character string as a reading target is imaged and a character string is recognized based on the captured image. A plurality of reading formats defining an attribute of the character string is stored, and a first reading format matching the recognized character string is searched for among the plurality of stored reading formats. Among the plurality of stored reading formats, a second reading format is also searched for in which a character string matching the first reading format appears as a partial character string. Based on the search result, the user is notified of a possible reading error regarding the recognized character string.
    Type: Grant
    Filed: April 6, 2015
    Date of Patent: June 27, 2017
    Assignee: Keyence Corporation
    Inventors: Taiga Nomi, Shusuke Oki
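
A minimal sketch of the reading-error check described for patent 9691009, under assumptions of my own: reading formats are written in a toy slot language ('A' letter, '9' digit, other characters literal), a first format must fully match the recognized string, and a warning is raised when the string also satisfies a contiguous part of a longer stored format, suggesting characters may have been missed.

```python
def satisfies(ch, slot):
    if slot == 'A':
        return ch.isalpha()
    if slot == '9':
        return ch.isdigit()
    return ch == slot

def full_match(s, fmt):
    return len(s) == len(fmt) and all(satisfies(c, f) for c, f in zip(s, fmt))

def partial_match(s, fmt):
    # the recognized string satisfies a contiguous sub-span of a strictly longer format
    if len(s) >= len(fmt):
        return False
    return any(all(satisfies(c, f) for c, f in zip(s, fmt[off:off + len(s)]))
               for off in range(len(fmt) - len(s) + 1))

def check_reading(s, formats):
    first = next((f for f in formats if full_match(s, f)), None)
    if first is None:
        return 'no format matched'
    ambiguous = [f for f in formats if f != first and partial_match(s, f)]
    if ambiguous:
        return f'matched {first!r}, but may be an incomplete read of {ambiguous}'
    return f'matched {first!r}'

formats = ['9999', '9999-AA']
print(check_reading('1234', formats))   # warns: could be a partial read of '9999-AA'
```
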
  • Patent number: 9690767
    Abstract: Computer-implemented systems and methods are provided for suggesting emoticons for insertion into text based on an analysis of sentiment in the text. An example method includes: determining a first sentiment of text in a text field; selecting first text from the text field in proximity to a current position of an input cursor in the text field; identifying one or more candidate emoticons wherein each candidate emoticon is associated with a respective score indicating relevance to the first text and the first sentiment based on, at least, historical user selections of emoticons for insertion in proximity to respective second text having a respective second sentiment; providing one or more candidate emoticons having respective highest scores for user selection; and receiving user selection of one or more of the provided emoticons and inserting the selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: June 27, 2017
    Assignee: Machine Zone, Inc.
    Inventors: Gabriel Leydon, Nikhil Bojja
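
A toy sketch of the scoring idea in patent 9690767 (and its related filings further down this list): candidate emoticons are ranked by how often users historically inserted them next to text carrying the same sentiment as the text around the cursor. The sentiment lexicons, history counts, and scoring rule are invented placeholders; a real system would learn these from usage data.

```python
from collections import defaultdict

# hypothetical history: (sentiment, emoticon) -> times users inserted that emoticon
# next to text with that sentiment
HISTORY = defaultdict(int, {
    ('positive', ':)'): 40, ('positive', ':D'): 25, ('positive', ':('): 1,
    ('negative', ':('): 30, ('negative', ':/'): 12, ('negative', ':)'): 2,
})

POSITIVE = {'great', 'love', 'happy', 'win'}
NEGATIVE = {'sad', 'hate', 'angry', 'lost'}

def sentiment_of(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return 'positive' if score > 0 else 'negative' if score < 0 else 'neutral'

def suggest_emoticons(text_near_cursor, k=2):
    sentiment = sentiment_of(text_near_cursor)
    candidates = {emo: n for (s, emo), n in HISTORY.items() if s == sentiment}
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(suggest_emoticons('we love this, great result'))   # [':)', ':D']
```
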
  • Patent number: 9639783
    Abstract: Systems, apparatuses, and methods to relate images of words to a list of words are provided. A trellis-based word decoder analyses a set of OCR characters and probabilities using a forward pass across a forward trellis and a reverse pass across a reverse trellis. Multiple paths may result; however, the most likely path from the trellises has the highest probability with valid links. A valid link is determined from the trellis by some dictionary word traversing the link. The most likely path is compared with a list of words to find the word closest to the most likely path.
    Type: Grant
    Filed: April 28, 2015
    Date of Patent: May 2, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Pawan Kumar Baheti, Kishor K. Barman, Raj Kumar Krishna Kumar
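
This is not the forward/reverse trellis of patent 9639783, only a heavily simplified stand-in for its end goal: choosing the dictionary word best supported by per-position OCR character probabilities. The probability table and dictionary below are made up.

```python
import math

# per-position OCR alternatives with probabilities (in practice, output of an OCR engine)
ocr = [{'c': 0.6, 'e': 0.4},
       {'a': 0.7, 'o': 0.3},
       {'t': 0.8, 'l': 0.2}]
dictionary = ['cat', 'cot', 'eat', 'col']

def word_log_prob(word, ocr, floor=1e-6):
    # log-probability of the word under the per-position OCR distributions;
    # characters the OCR never proposed receive a small floor probability
    if len(word) != len(ocr):
        return float('-inf')
    return sum(math.log(pos.get(ch, floor)) for pos, ch in zip(ocr, word))

best = max(dictionary, key=lambda w: word_log_prob(w, ocr))
print(best)   # 'cat', the dictionary word with the highest probability
```
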
  • Patent number: 9613299
    Abstract: Methods and systems for performing character recognition of a document image include analyzing verification performed by a user on a recognized text obtained by character recognition of a document image, identifying analogous changes of a first incorrect character for a first correct character, and prompting the user to initiate a training of a recognition pattern based on the identified analogous changes.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: April 4, 2017
    Assignee: ABBYY Development LLC
    Inventors: Michael Krivosheev, Natalia Kolodkina, Alexander Makushev
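
A small sketch of the verification-analysis step described for patent 9613299, assuming corrections arrive as (recognized, corrected) string pairs: one-to-one character replacements are counted, and once the same wrong-to-right substitution recurs often enough the user would be prompted to train a recognition pattern. The diffing via difflib and the threshold are my assumptions.

```python
from collections import Counter
from difflib import SequenceMatcher

correction_counts = Counter()

def record_verification(recognized, corrected):
    # collect one-to-one character substitutions the user made while verifying
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, recognized, corrected).get_opcodes():
        if tag == 'replace' and (i2 - i1) == (j2 - j1):
            for wrong, right in zip(recognized[i1:i2], corrected[j1:j2]):
                correction_counts[(wrong, right)] += 1

def training_prompts(threshold=3):
    # analogous changes that recur often enough to suggest retraining a pattern
    return [(w, r, n) for (w, r), n in correction_counts.items() if n >= threshold]

record_verification('INV0ICE 2O17', 'INVOICE 2017')
record_verification('T0TAL: 1O0', 'TOTAL: 100')
record_verification('R00M 2O3', 'ROOM 203')
print(training_prompts())   # the recurring '0'/'O' confusions would trigger a prompt
```
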
  • Patent number: 9519917
    Abstract: A method and a system for context-based real-time advertising are provided. In example embodiments, a document content that is displayed to a user may be analyzed and keywords may be identified. Selected listings from a publication system may be received; the selected listings may be selected using the keywords. The system may detect user events associated with the keywords and, in response to the detection of the user events, display information related to the listings while maintaining the displaying of the document content.
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: December 13, 2016
    Assignee: eBay Inc.
    Inventor: Thomas Geiger
  • Patent number: 9471219
    Abstract: A text recognition apparatus and method for a portable terminal are provided for recognizing a text image selected by a pen on a screen image as text. The text recognition method of the present invention includes displaying an image; configuring a recognition area on the image in response to a gesture made with a pen; recognizing text in the recognition area; displaying the recognized text and action items corresponding to the text; and executing, when one of the action items is selected, an action corresponding to the selected action item.
    Type: Grant
    Filed: August 21, 2013
    Date of Patent: October 18, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sihak Jang, Seonhwa Kim, HeeJin Kim, Mijung Park
  • Patent number: 9454240
    Abstract: A computing device is described that outputs, for display, a graphical keyboard comprising a plurality of keys. The computing device receives, an indication of a gesture detected at a presence-sensitive input device. The computing device determines, based at least in part on the indication of the gesture and at least one characteristic of the gesture, one or more keys from the plurality of keys. The computing device determines a character string based on the one or more keys from the plurality of keys. In response to determining that the character string is not included in a lexicon and a spatial model probability associated with the one or more keys from the plurality of keys exceeds a probability threshold, the computing device outputs, for display, the character string.
    Type: Grant
    Filed: April 18, 2013
    Date of Patent: September 27, 2016
    Assignee: Google Inc.
    Inventors: Satoshi Kataoka, Keisuke Kuroyanagi
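
A sketch of the acceptance rule described for patent 9454240, under invented geometry: a few key centres on a toy layout, an independent Gaussian spatial model per tap, and a per-character log-probability threshold. An out-of-lexicon character string is still output when the taps were precise enough.

```python
import math

# hypothetical key centres on a unit grid (a few keys only)
KEY_CENTERS = {'q': (0, 0), 'w': (1, 0), 'e': (2, 0), 'a': (0.3, 1), 's': (1.3, 1), 'd': (2.3, 1)}
LEXICON = {'sad', 'was', 'see'}

def spatial_log_prob(taps, chars, sigma=0.4):
    # independent 2-D Gaussian likelihood of each tap given its key centre
    lp = 0.0
    for (x, y), ch in zip(taps, chars):
        cx, cy = KEY_CENTERS[ch]
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        lp += -d2 / (2 * sigma ** 2) - math.log(2 * math.pi * sigma ** 2)
    return lp

def should_output(chars, taps, threshold=-3.0):
    if ''.join(chars) in LEXICON:
        return True
    # out-of-lexicon: only output when the spatial model probability is high enough
    return spatial_log_prob(taps, chars) / len(chars) >= threshold

taps = [(2.25, 1.05), (1.95, 0.02), (2.35, 0.95)]   # close to d, e, d
print(should_output(['d', 'e', 'd'], taps))          # 'ded' is not in the lexicon, but True
```
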
  • Patent number: 9445142
    Abstract: An information processing apparatus which communicates with an image capturing apparatus and reproduces video data obtained by the image capturing apparatus comprises a unit which requests a segment list in which information of segments of video data is written; a unit which acquires the requested segment list; a unit which decides which segment, from the segments in the acquired segment list, to request; a unit which requests the decided segment from the image capturing apparatus; a unit which acquires the requested segment; and a unit which calculates a delay time for segment transmission based on a number of segments in the segment list.
    Type: Grant
    Filed: July 10, 2014
    Date of Patent: September 13, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventor: Mayu Yokoi
  • Patent number: 9424251
    Abstract: A method of extracting a semantic distance from a mathematical sentence and classifying the mathematical sentence by the semantic distance includes: receiving a user query; extracting at least one keyword included in the received user query; and extracting a semantic distance by indexing one or more natural language tokens and mathematical equation tokens including semantic information, extracting the semantic distance between the at least one extracted keyword and the indexed semantic information by referring to the indexed information, and acquiring a similarity of the received user query and the semantic information.
    Type: Grant
    Filed: June 6, 2013
    Date of Patent: August 23, 2016
    Assignee: SK TELECOM CO., LTD.
    Inventors: Keun Tae Park, Yong Gil Park, Hyeongin Choi, Nam Sook Wee, Doo Seok Lee, Jung Kyo Sohn, Haeng Moon Kim, Dong Hahk Lee
  • Patent number: 9411801
    Abstract: Disclosed are implementations of methods and systems for displaying definitions and translations of words by searching for a translation simultaneously in various languages according to a query in a general language dictionary. The invention removes the need to specify a source language for the word or word combination when translating into a target language. The target language may be preset. Translation is possible for word combinations in multiple source languages. Source words may be entered manually or captured by an imaging component of an electronic device. When captured, a word combination is selected and subjected to optical character recognition (OCR) and translation. The source language and OCR language may be suggested via geolocation of the electronic device.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: August 9, 2016
    Assignee: ABBYY Development LLC
    Inventor: Maria Osipova
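
A tiny sketch of the language-agnostic lookup behind patent 9411801, with made-up dictionaries: the word is looked up in every source-language dictionary at once, so the user never has to name the source language before translating into the target language.

```python
# hypothetical per-language dictionaries mapping source words to English entries
DICTIONARIES = {
    'de': {'hund': 'dog', 'gift': 'poison'},
    'sv': {'hund': 'dog', 'gift': 'married; poison'},
    'fr': {'chien': 'dog'},
}

def translate_any_source(word, target='en'):
    """Look the word up in every source-language dictionary simultaneously,
    returning one entry per language in which the word exists."""
    word = word.lower()
    return {lang: entries[word] for lang, entries in DICTIONARIES.items() if word in entries}

print(translate_any_source('Gift'))   # {'de': 'poison', 'sv': 'married; poison'}
print(translate_any_source('chien'))  # {'fr': 'dog'}
```
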
  • Patent number: 9412052
    Abstract: A method for extracting text from an image data is disclosed. The method includes pre-processing, via a processor, the image data to obtain a readable image data. The method further includes filtering, via the processor, a plurality of copies of the readable image data using a plurality of noise filters to obtain a corresponding plurality of noise removed images. Yet further, the method includes performing, via the processor, image data recognition on each of the plurality of noise removed images to obtain a text copy associated with each of the plurality of noise removed images. Moreover, the method includes ranking, via the processor, each word in the text copy associated with each of the plurality of noise removed images based on a predefined set of parameters. Finally, the method includes selecting, via the processor, highest ranked words within the text copy associated with each of the plurality of noise removed images to obtain output text for the image data.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: August 9, 2016
    Assignee: WIPRO LIMITED
    Inventors: Harihara Vinayakaram Natarajan, Tamilselvan Subramanian
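
A minimal sketch of the fuse-and-rank step described for patent 9412052, assuming the per-filter OCR outputs have the same word count: each position's candidate words are ranked by agreement across the filtered copies plus a dictionary bonus (stand-ins for the patent's "predefined set of parameters"), and the highest-ranked word is kept per position.

```python
from collections import Counter

def rank_word(word, votes, dictionary):
    # hypothetical ranking parameters: agreement across filtered copies + dictionary membership
    return votes * 2 + (1 if word.lower() in dictionary else 0)

def fuse_ocr_copies(copies, dictionary):
    # copies: OCR text from the same image after different noise filters (same word count assumed)
    out = []
    for position_words in zip(*(c.split() for c in copies)):
        votes = Counter(position_words)
        best = max(votes, key=lambda w: rank_word(w, votes[w], dictionary))
        out.append(best)
    return ' '.join(out)

dictionary = {'invoice', 'total', 'amount', 'due'}
copies = ['Invoice tota1 due', 'Invoice total due', 'Inv0ice total dve']
print(fuse_ocr_copies(copies, dictionary))   # 'Invoice total due'
```
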
  • Patent number: 9396389
    Abstract: A digital camera associated with a mobile processing apparatus is used to produce a file containing a 2D digitized image of a document having pre-formatted fields for user's check marks. The image is electronically matched to a digital template of the document for extracting digitized images of the pre-formatted fields, which are thereafter analyzed for presence therein of user-entered check marks.
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: July 19, 2016
    Assignee: ABBYY Development LLC
    Inventor: Sergey Anatolyevich Kuznetsov
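
A sketch of the field-analysis step in patent 9396389, assuming the photographed form has already been matched and deskewed against the digital template and binarized: each pre-formatted field rectangle is cropped from the image and declared checked when its ink ratio exceeds a threshold. The toy 8x8 page, field boxes, and threshold are invented.

```python
# assumes a binarized page as a list of rows, 0 = background, 1 = ink
def field_is_checked(image, box, ink_threshold=0.08):
    x0, y0, x1, y1 = box                       # field rectangle in template coordinates
    region = [row[x0:x1] for row in image[y0:y1]]
    total = sum(len(r) for r in region)
    ink = sum(sum(r) for r in region)
    return total > 0 and ink / total >= ink_threshold

# toy 8x8 page with one checked field (top-left) and one empty field (top-right)
image = [[0] * 8 for _ in range(8)]
for i in range(4):
    image[i][i] = image[i][3 - i] = 1          # an 'X' mark inside the 4x4 top-left field
fields = {'yes': (0, 0, 4, 4), 'no': (4, 0, 8, 4)}
print({name: field_is_checked(image, box) for name, box in fields.items()})
# {'yes': True, 'no': False}
```
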
  • Patent number: 9383913
    Abstract: A data filtering menu enabling a user to select different characteristics and values may be displayed. Each of the characteristics may be displayed in a first selectable list in the filtering menu. Once a user selects one of the characteristics, a second list containing selectable values associated with the selected characteristic may be displayed. The selected values may filter a set of data, and the list of characteristics may be modified to display a representation of values selected from one or more of the second lists. Additionally, a selectable object associated with a characteristic having user-selected values may also be displayed with a filtered result. If this object is selected, a list of values from the second list may be redisplayed. The user may then select different values and re-execute the filter with the new values.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: July 5, 2016
    Assignee: SAP SE
    Inventors: Timo Hoyer, Sascha Hans Grub
  • Patent number: 9384423
    Abstract: A system and method for computing confidence in an output of a text recognition system includes performing character recognition on an input text image with a text recognition system to generate a candidate string of characters. A first representation is generated, based on the candidate string of characters, and a second representation is generated based on the input text image. A confidence in the candidate string of characters is computed based on a computed similarity between the first and second representations in a common embedding space.
    Type: Grant
    Filed: May 28, 2013
    Date of Patent: July 5, 2016
    Assignee: XEROX CORPORATION
    Inventors: Jose Antonio Rodriguez-Serrano, Florent C. Perronnin
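
A toy illustration of the confidence idea in patent 9384423: embed the candidate string and the word image in a common space and use their similarity as the confidence. Lacking a trained image embedder, this sketch pretends the image embeds at the character-bigram embedding of its true transcription; every representation here is a placeholder for the learned embeddings the patent describes.

```python
import math
from collections import Counter

def char_ngram_embedding(text, n=2):
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    norm = math.sqrt(sum(v * v for v in grams.values()))
    return {g: v / norm for g, v in grams.items()}

def cosine(u, v):
    # both embeddings are L2-normalised, so the dot product is the cosine similarity
    return sum(u[g] * v.get(g, 0.0) for g in u)

# stand-in for the learned image embedding: pretend the word image embeds near
# its true transcription "receipt" (a real system would use a trained model)
image_embedding = char_ngram_embedding('receipt')

for candidate in ['receipt', 'reciept', 'total']:
    confidence = cosine(char_ngram_embedding(candidate), image_embedding)
    print(candidate, round(confidence, 3))
# the correct transcription scores highest, i.e. highest confidence
```
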
  • Patent number: 9372608
    Abstract: Computer-implemented systems and methods are provided for suggesting emoticons for insertion into text based on an analysis of sentiment in the text. An example method includes: determining a first sentiment of text in a text field; selecting first text from the text field in proximity to a current position of an input cursor in the text field; identifying one or more candidate emoticons wherein each candidate emoticon is associated with a respective score indicating relevance to the first text and the first sentiment based on, at least, historical user selections of emoticons for insertion in proximity to respective second text having a respective second sentiment; providing one or more candidate emoticons having respective highest scores for user selection; and receiving user selection of one or more of the provided emoticons and inserting the selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: June 21, 2016
    Assignee: Machine Zone, Inc.
    Inventors: Gabriel Leydon, Nikhil Bojja
  • Patent number: 9348459
    Abstract: A method for inputting a character, executed by a computer, includes: obtaining a first pressed position at which a pressing operation has been performed and a first key corresponding to the first pressed position; detecting deletion of a character input using the first key; obtaining, when the deletion is detected, a second pressed position at which a next pressing operation has been performed and a second key corresponding to the second pressed position; determining whether or not a distance between the first pressed position and the second key is smaller than or equal to a threshold; and correcting, when the distance is smaller than or equal to the threshold, a range that is recognized as the second key in the region, on the basis of the first pressed position.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: May 24, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Taichi Murase, Nobuyuki Hara, Atsunori Moteki, Takahiro Matsuda
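
A small sketch of the correction rule in patent 9348459, with the key's recognised range simplified to a movable key centre: after the user deletes a character and re-types, if the original press was close to the re-typed key, that key's range is pulled toward the original press position. The layout, distances, and threshold are invented.

```python
import math

# hypothetical key layout: name -> centre of the key's recognised range
KEY_CENTERS = {'f': (20.0, 20.0), 'g': (60.0, 20.0)}

def key_at(pos, centers=KEY_CENTERS):
    return min(centers, key=lambda k: math.dist(pos, centers[k]))

def correct_after_deletion(first_pos, second_pos, threshold=30, centers=KEY_CENTERS):
    """User typed at first_pos, deleted that character, then typed at second_pos.
    If first_pos lies close to the re-typed key, pull that key's recognised range
    (here, its centre) toward first_pos so a similar press is caught next time."""
    second_key = key_at(second_pos, centers)
    cx, cy = centers[second_key]
    if math.dist(first_pos, (cx, cy)) <= threshold:
        centers[second_key] = ((cx + first_pos[0]) / 2, (cy + first_pos[1]) / 2)

first_press = (38.0, 20.0)                 # just on the 'f' side of the f/g boundary
print(key_at(first_press))                 # 'f', which the user deletes and re-types as 'g'
correct_after_deletion(first_press, (45.0, 20.0))
print(key_at(first_press))                 # 'g', the recognised range has shifted
```
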
  • Patent number: 9336285
    Abstract: Methods, systems, and computer program products are provided for presenting content in accordance with a placement designation. A user interface is provided for designating that a content item is to be presented in a search suggestion control along with search suggestions for completing a search string. Requests for content are processed, including evaluating a partial form of a received search string, identifying search suggestions for completing the partial form of the search string, and presenting the content item in the search suggestion control along with search suggestions that match campaign terms.
    Type: Grant
    Filed: February 28, 2013
    Date of Patent: May 10, 2016
    Assignee: Google Inc.
    Inventor: Ezequiel Vidra
  • Patent number: 9269028
    Abstract: Provided are string similarity assessment techniques. In one embodiment, the techniques include receiving a plurality of input strings comprising characters from a character set and generating hashtables for each respective input string using a hash function that assigns the characters as keys and character positions in the strings as values. The techniques may also include determining a character similarity index for at least two of the input strings relative to each other by comparing a similarity of the values for each key in their respective hashtables; determining a total disordering index representative of an alignment of the at least two input strings by determining differences between a plurality of index values for each individual key in their respective hashtables and determining the total disordering index based on the differences; and determining a string similarity metric based on at least one character similarity index and the total disordering index.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: February 23, 2016
    Assignee: General Electric Company
    Inventors: Jake Matthew Kurzer, Brett Csorba
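
A sketch in the spirit of patent 9269028, with illustrative formulas of my own: each string becomes a hashtable from character to its positions, a character similarity index compares the per-key values, a disordering index averages positional differences for shared keys, and the two are combined into a single similarity metric.

```python
from collections import defaultdict

def position_table(s):
    table = defaultdict(list)
    for i, ch in enumerate(s):
        table[ch].append(i)          # key = character, value = positions in the string
    return table

def character_similarity(t1, t2):
    # overlap of character multiplicities, normalised by the combined string length
    shared = sum(min(len(t1[k]), len(t2[k])) for k in t1 if k in t2)
    total = sum(len(v) for v in t1.values()) + sum(len(v) for v in t2.values())
    return 2 * shared / total if total else 1.0

def disordering_index(t1, t2):
    # mean absolute position difference for characters present in both strings
    diffs = [abs(a - b)
             for k in t1 if k in t2
             for a, b in zip(t1[k], t2[k])]
    return sum(diffs) / len(diffs) if diffs else 0.0

def string_similarity(s1, s2, alpha=0.1):
    t1, t2 = position_table(s1), position_table(s2)
    return character_similarity(t1, t2) / (1.0 + alpha * disordering_index(t1, t2))

for a, b in [('conveyor', 'conveyer'), ('conveyor', 'royevnoc'), ('pump', 'valve')]:
    print(a, b, round(string_similarity(a, b), 3))
```
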
  • Patent number: 9251430
    Abstract: A character recognition apparatus may include an imaging element configured to read a character string placed on an information recording medium; an image memory configured to store image data of the character string; and a character segmenting unit configured to segment a character constituting the character string. The character segmenting unit may include a minimum intensity curve creating unit configured to detect a minimum intensity value among light intensity values, and create a minimum intensity curve of the image data according to the minimum intensity value of each pixel row; a character segmenting position detecting unit configured to calculate a space between the characters neighboring in the created minimum intensity curve, in order to detect a character segmenting position between the characters; and a character segmenting process unit configured to segment each character according to the detected character segmenting position between the characters.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: February 2, 2016
    Assignee: NIDEC SANKYO CORPORATION
    Inventor: Hiroshi Nakamura
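
A minimal sketch of the segmentation step described for patent 9251430, assuming a small grayscale text-line image (0 = ink, 255 = background). The patent phrases the curve per pixel row; for a horizontal text line the same idea is applied per column here: the minimum intensity along each column forms the curve, and columns where the curve stays bright carry no ink and mark the character segmenting positions.

```python
def minimum_intensity_curve(image):
    # darkest (minimum) grey value found in each pixel column of the text-line image
    return [min(row[x] for row in image) for x in range(len(image[0]))]

def character_spans(curve, ink_threshold=128):
    """Runs of ink-bearing columns between bright gaps are the segmented characters."""
    spans, start = [], None
    for x, v in enumerate(curve):
        if v < ink_threshold and start is None:
            start = x
        elif v >= ink_threshold and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(curve)))
    return spans

# toy 3-row line image: 255 = white background, 0 = ink; two "characters" with a gap
image = [
    [255, 0, 0, 255, 255, 255, 0, 255],
    [255, 0, 255, 255, 255, 255, 0, 255],
    [255, 0, 0, 255, 255, 255, 0, 255],
]
curve = minimum_intensity_curve(image)
print(curve)                     # [255, 0, 0, 255, 255, 255, 0, 255]
print(character_spans(curve))    # [(1, 3), (6, 7)]
```
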
  • Patent number: 9244907
    Abstract: Various embodiments provide a method that comprises receiving a set of segments from a text field, analyzing the set of segments to determine at least one of a target subtext or a target meaning associated with the set of segments, and identifying a set of candidate emoticons where each candidate emoticon in the set of candidate emoticons has an association between the candidate emoticon and at least one of the target subtext or the target meaning. The method may further comprise presenting the set of candidate emoticons for entry selection at a current position of an input cursor, receiving an entry selection for a set of selected emoticons from the set of candidate emoticons, and inserting the set of selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: January 26, 2016
    Assignee: Machine Zone, Inc.
    Inventor: Gabriel Leydon
  • Patent number: 9247100
    Abstract: A method for routing a confirmation of receipt and/or delivery of a facsimile or portion thereof according to one embodiment includes generating text of a facsimile in a computer readable format; ascertaining one or more of a significance and a relevance of at least a portion of the text by locating one or more keywords in the text, wherein at least two of the keywords are not adjacent in the text; analyzing the text for at least one of a meaning and a context of the text; and routing at least one confirmation of receipt and/or delivery of the facsimile to one or more confirmation destinations based on the analysis. Additional disclosed embodiments include systems and computer program products configured to similarly route confirmation of receipt and/or delivery of a facsimile or a portion thereof.
    Type: Grant
    Filed: October 21, 2013
    Date of Patent: January 26, 2016
    Assignee: Kofax, Inc.
    Inventors: Roy Couchman, Roland G. Borrey
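
A sketch of the keyword-driven routing in patent 9247100, with invented rules and addresses: a confirmation of receipt or delivery is routed to every destination whose keywords all appear somewhere in the OCR'd fax text, even when the keywords are not adjacent.

```python
# hypothetical routing rules: every keyword in a rule must appear somewhere in the text
# (not necessarily adjacent); matching rules decide who receives the confirmation
ROUTING_RULES = [
    ({'purchase', 'order'}, 'procurement@example.com'),
    ({'invoice', 'overdue'}, 'accounts@example.com'),
    ({'claim'}, 'legal@example.com'),
]

def confirmation_destinations(fax_text, rules=ROUTING_RULES):
    words = set(fax_text.lower().split())
    return [dest for keywords, dest in rules if keywords <= words]

fax_text = 'Please find attached the purchase details for order 4417'
print(confirmation_destinations(fax_text))
# ['procurement@example.com']: 'purchase' and 'order' matched even though not adjacent
```
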
  • Patent number: 9239833
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for presenting additional information for text depicted by an image. In one aspect, a method includes receiving an image. Text depicted in the image is identified. A presentation context is selected for the image based on an arrangement of the text depicted by the image. Each presentation context corresponds to a particular arrangement of text within images. Each presentation context has a corresponding user interface for presenting additional information related to the text. The user interface for each presentation context is different from the user interface for other presentation contexts. The user interface that corresponds to the selected presentation context is identified. Additional information is presented for at least a portion of the text depicted in the image using the identified user interface. The user interface can present the additional information in an overlay over the image.
    Type: Grant
    Filed: November 8, 2013
    Date of Patent: January 19, 2016
    Assignee: Google Inc.
    Inventors: Alexander J. Cuthbert, Joshua J. Estelle
  • Patent number: 9236043
    Abstract: Controlling a reading machine while reading a document to a user by receiving an image of a document, accessing a knowledge base that provides data that identifies sections in the document and processing user commands to select a section of the document. The reading machine applies text-to-speech to a text file that corresponds to the selected section of the document, to read the selected section of the document aloud to the user.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: January 12, 2016
    Assignee: KNFB READER, LLC
    Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson
  • Patent number: 9229608
    Abstract: A character display apparatus, a character display method, and a non-transitory computer readable medium storing a character display program are capable of automatically preventing a handwritten character from becoming illegible during input by detecting, based on trajectory data of the handwritten character, an overlap between lines to be drawn and thereby determining the presence or absence of an illegible part. If it is determined that an illegible part is present, a thinner stroke thickness for the handwritten character is automatically selected and the image of the handwritten character is drawn again, which avoids an illegible part occurring in the handwritten character without inputting the character all over again.
    Type: Grant
    Filed: January 30, 2014
    Date of Patent: January 5, 2016
    Assignee: Sharp Kabushiki Kaisha
    Inventor: Teppei Hosokawa
  • Patent number: 9207808
    Abstract: According to one embodiment, an information processing apparatus capable of accessing a first storage medium and acquiring at least one of groups related to a correspondence between one symbol and at least one stroke data, includes an input device and a processor. The input device is configured to input an image. The processor is configured to, if a partial image of the image input by the input device corresponds to a first symbol in a first group or a first image generatable from first stroke data in the first group, associate the partial image and the first stroke data.
    Type: Grant
    Filed: February 15, 2013
    Date of Patent: December 8, 2015
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Chikashi Sugiura
  • Patent number: 9208312
    Abstract: A system for automated checking of data content includes content checkers (208) to (214) arranged in parallel and connected between an input sub-system (204) and an output sub-system (216). The content checkers (208) to (214) check different data formats. Incoming data from an external computer system (202) is passed by the input sub-system (204) to the checkers (208) to (214), which report check results to both input and output sub-systems (204) and (216). From the four check results, the input sub-system (204) judges the data's acceptability for forwarding to a sensitive computer system (218). Unacceptable data is discarded; acceptable data passes to the output sub-system (216), which also judges the data's acceptability from the four check results.
    Type: Grant
    Filed: October 20, 2010
    Date of Patent: December 8, 2015
    Assignee: QINETIQ LIMITED
    Inventors: Simon Robert Wiseman, Katherine Jane Hughes
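
A toy sketch of the check-and-judge flow in patent 9208312: several independent content checkers (invented ones here: printable ASCII, size, script markers, executable magic bytes) each report a result, and the data is forwarded only if every result is acceptable, otherwise it is discarded. In the patent the checkers run in parallel between input and output sub-systems; here they simply run in sequence.

```python
def ascii_text_checker(data: bytes) -> bool:
    return all(b in (9, 10, 13) or 32 <= b < 127 for b in data)

def size_checker(data: bytes, limit=1024) -> bool:
    return len(data) <= limit

def no_script_checker(data: bytes) -> bool:
    return b'<script' not in data.lower()

def magic_bytes_checker(data: bytes) -> bool:
    return not data.startswith((b'MZ', b'\x7fELF'))   # reject executable headers

CHECKERS = [ascii_text_checker, size_checker, no_script_checker, magic_bytes_checker]

def forward_if_acceptable(data: bytes):
    results = [check(data) for check in CHECKERS]      # run every checker independently
    if all(results):
        return ('forwarded', data)                     # acceptable: pass to output sub-system
    return ('discarded', results)                      # unacceptable: discard

print(forward_if_acceptable(b'quarterly report: all figures nominal'))
print(forward_if_acceptable(b'MZ\x90\x00 suspicious payload'))
```
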
  • Patent number: 9208381
    Abstract: Embodiments of methods, systems, and storage medium associated with processing of digital images including character recognition are disclosed herein. In one instance, the method may include identifying at least some components of a plurality of characters included in a digital image of content, based at least in part on comparison of a vector representation of each component with predefined component shape patterns; and determining one or more characters from the identified components. The determining may be based at least in part on evaluating the identified components using predetermined combination rules that define the one or more characters based at least in part on relationships between the one or more components in the identified plurality of characters. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: December 8, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Satishkumar Kothandapani Shanmugasundaram, Niranjan Jayakar
  • Patent number: 9188444
    Abstract: Systems, methods, and computer storage mediums are provided for correcting the placement of an object on an image. An example method includes providing the image and depth data that describes the depth of the three-dimensional scene captured by the image. The depth data describes at least a distance between a camera that captured the three-dimensional scene and one or more structures in the scene, and a geolocation of the camera when the three-dimensional scene was captured. When the object is moved from a first location on the image to a second location on the image, a set of coordinates that describes the second location relative to the image is received. The set of coordinates is then translated into geolocated coordinates that describe a geolocation corresponding to the second location. The set of coordinates is translated, at least in part, using the depth data associated with the image.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: November 17, 2015
    Assignee: Google Inc.
    Inventors: Stéphane Lafon, Jie Shao
  • Patent number: 9141607
    Abstract: Various aspects can be implemented for determining optical character recognition (OCR) parameters using an OCR engine. In general, one aspect can be a method that includes using an optical character recognition (OCR) engine in a base configuration to generate one or more OCR responses corresponding to one or more sample pages of a document. The method also includes identifying a dominant OCR parameter for the document based on the one or more generated OCR responses. Other implementations of this aspect include corresponding systems, apparatus, and computer program products.
    Type: Grant
    Filed: May 30, 2007
    Date of Patent: September 22, 2015
    Assignee: Google Inc.
    Inventors: Dar-Shyang Lee, Igor Krivokon, Luc Vincent
  • Patent number: 9135289
    Abstract: Identifying matching transactions. First and second log files contain operation records of transactions in a transaction workload, each file recording a respective execution of the transaction workload. A first record location in the first file and an associated window of a defined number of sequential second record locations in the second file are advanced one record location at a time. It is determined whether each operation record of a complete transaction at a first record location has a matching operation record at one of the record locations in the associated window of second record locations. If so, the complete transaction in the first file and the transaction that includes the matching operation records in the second file are identified as matching transactions.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: September 15, 2015
    Assignee: International Business Machines Corporation
    Inventors: Manoj K. Agarwal, Curt L. Cotner, Amitava Kundu, Prasan Roy, Rajesh Sambandhan
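
A simplified sketch of the window matching in patent 9135289, assuming the first log groups each transaction's records contiguously and both logs record operations as comparable strings: every operation of a complete transaction must be found inside a window of record locations in the second log for the two to be reported as matching transactions. The record format and the window size are assumptions.

```python
def match_transactions(first_log, second_log, window=4):
    """first_log: list of (txn_id, operation) records, with each transaction's records
    contiguous. second_log: list of operation records from another execution."""
    # group the first log into complete transactions
    transactions, current, current_id = [], [], None
    for txn_id, op in first_log:
        if txn_id != current_id and current:
            transactions.append((current_id, current))
            current = []
        current_id = txn_id
        current.append(op)
    if current:
        transactions.append((current_id, current))

    matches, pos = [], 0
    for txn_id, ops in transactions:
        win = second_log[pos:pos + window]         # window of second record locations
        if all(op in win for op in ops):
            matches.append(txn_id)
        pos += len(ops)                            # advance past the consumed records
    return matches

first = [('T1', 'UPDATE accounts'), ('T1', 'COMMIT'),
         ('T2', 'INSERT orders'), ('T2', 'COMMIT')]
second = ['UPDATE accounts', 'COMMIT', 'INSERT orders', 'COMMIT']
print(match_transactions(first, second))   # ['T1', 'T2']
```
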
  • Patent number: 9128930
    Abstract: A method, device and system for providing a language service are disclosed. In some embodiments, the method is performed at a computer system having one or more processors and memory for storing programs to be executed by the one or more processors. The method includes receiving a first message from a client device. The method includes determining if the first message is in a first language or a second language different than the first language. The method includes translating the first message into a second message in the second language if the first message is in the first language. The method includes, alternatively, generating a third message in the second language if the first message is in the second language, where the third message includes a conversational response to the first message. The method further includes returning one of the second message and the third message to the client device.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: September 8, 2015
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yang Song, Bo Chen, Li Lu, Hao Ye
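
A toy sketch of the branch described for patent 9128930: detect whether the incoming message is in the first or the second language, translate it if it is in the first, otherwise generate a conversational reply in the second. The word lists, the character-based detection, and the dictionary "translation" below are crude stand-ins for real models or services.

```python
# tiny hypothetical resources for the sketch
ZH_TO_EN = {'你好': 'hello', '谢谢': 'thank you'}

def handle_message(message):
    # crude first-language detection: does the message contain any known Chinese characters?
    is_first_language = any(ch in message for ch in ''.join(ZH_TO_EN))
    if is_first_language:
        # translate the first-language message into the second language
        translated = ' '.join(ZH_TO_EN.get(tok, tok) for tok in message.split())
        return {'type': 'translation', 'text': translated}
    # already in the second language: return a conversational response instead
    reply = 'You are welcome!' if 'thanks' in message.lower() else 'How can I help?'
    return {'type': 'conversation', 'text': reply}

print(handle_message('你好'))            # {'type': 'translation', 'text': 'hello'}
print(handle_message('thanks a lot'))    # {'type': 'conversation', 'text': 'You are welcome!'}
```
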
  • Patent number: 9122898
    Abstract: A computer implemented method for extracting meaningful text from a document of unknown or unspecified format. In a particular embodiment, the method includes reading the document, thereby to extract raw encoded text, analysing the raw encoded text, thereby to identify one or more text chunks, and for a given chunk, performing compression identification analysis to determine whether compression is likely. The method can further include performing a decompression process, performing an encoding identification process thereby to identify a likely character encoding protocol, and converting the chunk using the identified likely character encoding protocol, thereby to output the chunk as readable text.
    Type: Grant
    Filed: March 22, 2012
    Date of Patent: September 1, 2015
    Assignee: Lexmark International Technology SA
    Inventors: Scott Coles, Derek Murphy, Ben Truscott, Ian Davies
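
A minimal sketch of the chunk pipeline described for patent 9122898: a cheap compression-identification test (gzip/zlib header bytes), optional decompression, then trial decoding against a short list of candidate encodings. The magic-byte heuristics and the encoding list are simplifications; a production system would use proper statistical encoding identification.

```python
import zlib

def maybe_decompress(chunk: bytes) -> bytes:
    # cheap compression identification: gzip magic bytes or a plausible zlib header
    if chunk[:2] == b'\x1f\x8b':
        return zlib.decompress(chunk, wbits=zlib.MAX_WBITS | 16)   # gzip wrapper
    if chunk[:1] == b'\x78':                                       # common zlib first byte
        try:
            return zlib.decompress(chunk)
        except zlib.error:
            pass
    return chunk

def decode_chunk(chunk: bytes) -> str:
    data = maybe_decompress(chunk)
    for encoding in ('utf-8', 'utf-16', 'latin-1'):                # candidate encodings
        try:
            return data.decode(encoding)
        except UnicodeDecodeError:
            continue
    return data.decode('latin-1', errors='replace')

raw = zlib.compress('Réunion report, meaningful text'.encode('utf-8'))
print(decode_chunk(raw))
```
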
  • Patent number: 9098759
    Abstract: An image processing apparatus includes an accepting unit, a recognizing unit, and a selecting unit. The accepting unit accepts character information about a character image in a character region in an image. The recognizing unit performs character recognition on the character image in the character region. The selecting unit selects a character recognition result which matches the character information accepted by the accepting unit, from multiple character recognition results that are obtained by the recognizing unit.
    Type: Grant
    Filed: October 17, 2012
    Date of Patent: August 4, 2015
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Satoshi Kubota, Shunichi Kimura
  • Patent number: 9098518
    Abstract: A resource sharing method is provided for a resource sharing system having a plurality of edge peers that individually store a resource and a plurality of super peers that manage the plurality of edge peers or resource information, the resource information including a name of the resource that includes at least one keyword. In the resource sharing system, the super peers distribute and manage resource information of the edge peers according to a keyword. Therefore, if a resource is requested using part of a name of a resource desired by a certain edge peer, the super peer searches for resources including the resource desired by the edge peer. As a result, a user of an edge peer may acquire a desired resource even using only part of the name of the desired resource.
    Type: Grant
    Filed: February 26, 2009
    Date of Patent: August 4, 2015
    Assignees: Samsung Electronics Co., Ltd., Industry-Academic Cooperation Foundation Yonsei University
    Inventors: Jae-min Ahn, Ji-yon Han, Jeong-hwa Song, U-ram Yoon, Keon-il Jeong, Eo-hyung Lee, Kyung-lang Park, Shin-dug Kim
  • Patent number: 9092423
    Abstract: The present invention relies on the two-dimensional information in documents and encodes two-dimensional structures into a one-dimensional synthetic language such that two-dimensional documents can be searched at text search speed. The system comprises: an indexing module, a retrieval module, an encoder, a quantization module, a retrieval engine and a control module coupled by a bus. Electronic documents are first indexed by the indexing module and stored as a synthetic text library. The retrieval module then converts an input image to synthetic text and searches for matches to the synthetic text in the synthetic text library. The matches can be in turn used to retrieve the corresponding electronic documents. In one or more embodiments, the present invention includes a method for comparing the synthetic text to documents that have been converted to synthetic text for a match.
    Type: Grant
    Filed: July 1, 2013
    Date of Patent: July 28, 2015
    Assignee: Ricoh Co., Ltd.
    Inventor: Jorge Moraleda
  • Patent number: 9092688
    Abstract: A method including determining a position of each glyph in an image of a text document, identifying word boundaries in the document thereby implying the existence of a first plurality of words, preparing a first array of word lengths based on the first plurality of words, preparing a second array of word lengths based on a second plurality of words of a text file including a certain text, comparing at least part of the first array to at least part of the second array to find a best alignment between the first and second arrays, and deriving a layout of at least part of the certain text as arranged in the image of the text document based at least on the best alignment and the position of at least some of the glyphs in the image. Related apparatus and methods are also described.
    Type: Grant
    Filed: August 28, 2013
    Date of Patent: July 28, 2015
    Assignee: CISCO TECHNOLOGY INC.
    Inventors: Guy Adini, Harel Cain, Oded Rimon
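
A sketch of the alignment step in patent 9092688: word lengths implied by the glyphs and word boundaries in the image form one array, word lengths of the known text form another, and sliding one over the other finds the offset with the best agreement, from which the layout of the text in the image can be derived. Scoring by exact length matches is an assumption.

```python
def word_lengths(words):
    return [len(w) for w in words]

def best_alignment(doc_lengths, text_lengths):
    """Slide the document's word-length array over the known text's word-length
    array and return (offset, score) for the best agreement."""
    best_offset, best_score = 0, -1
    for offset in range(len(text_lengths) - len(doc_lengths) + 1):
        window = text_lengths[offset:offset + len(doc_lengths)]
        score = sum(a == b for a, b in zip(doc_lengths, window))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score

# words implied by glyph positions and detected word boundaries in the image
image_words = ['quick', 'brown', 'f0x', 'jumps']          # OCR may be noisy
known_text = 'the quick brown fox jumps over the lazy dog'.split()

offset, score = best_alignment(word_lengths(image_words), word_lengths(known_text))
print(offset, score)                                  # 1 4: the snippet starts at word index 1
print(known_text[offset:offset + len(image_words)])   # ['quick', 'brown', 'fox', 'jumps']
```
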
  • Patent number: 9083731
    Abstract: A system, method, apparatus and mechanism for estimating worst-case time complexity of a regular expression defining a pattern adapted for identifying malicious packets and comprising one or more back-references (backref-regex) by constructing a non-deterministic finite automaton (NFA) corresponding to the backref-regex (backref-NFA), wherein the backref-NFA comprises a plurality of NFA-states and a respectively labeled edge for each of the one or more back-references of the backref-regex; performing liveness analysis on the backref-NFA to determine for each NFA-state of the backref-NFA a set of back-references alive at the NFA-state; and determining a maximum number of alive back-references over the plurality of NFA-states, wherein the determined maximum number is indicative of the worst-case time complexity of the backref-regex.
    Type: Grant
    Filed: January 28, 2014
    Date of Patent: July 14, 2015
    Assignee: Alcatel Lucent
    Inventors: Kedar S Namjoshi, Girija J Narlikar
  • Patent number: 9075794
    Abstract: Various embodiments provide a method that comprises receiving a set of segments from a text field, analyzing the set of segments to determine at least one of a target subtext or a target meaning associated with the set of segments, and identifying a set of candidate emoticons where each candidate emoticon in the set of candidate emoticons has an association between the candidate emoticon and at least one of the target subtext or the target meaning. The method may further comprise presenting the set of candidate emoticons for entry selection at a current position of an input cursor, receiving an entry selection for a set of selected emoticons from the set of candidate emoticons, and inserting the set of selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: July 7, 2015
    Assignee: MACHINE ZONE, INC.
    Inventor: Gabriel Leydon
  • Patent number: 9053296
    Abstract: This invention converts markup language files such as HTML files into pseudocode that is structured like programming language source code in order to use source code copy detection tools to find pairs of markup language files that have been copied in full or in part.
    Type: Grant
    Filed: August 28, 2010
    Date of Patent: June 9, 2015
    Assignee: SOFTWARE ANALYSIS AND FORENSIC ENGINEERING CORPORATION
    Inventors: Steven Mylroie, Robert Marc Zeidman
  • Publication number: 20150146992
    Abstract: An electronic device and a method are provided for recognizing a character in the electronic device. The electronic device includes a display unit configured to, upon receipt of an image in a real-time character recognition method, display a character recognition area defined for character recognition adjusted to an inclination angle of an object included in the received image. The electronic device also includes a controller configured to detect the inclination angle of the object included in the image, to adjust an angle of the character recognition area defined for character recognition to the inclination angle of the object, and to control recognition of a character from the object in the angle-adjusted character recognition area.
    Type: Application
    Filed: May 30, 2014
    Publication date: May 28, 2015
    Inventor: Dong-Hyun Yeom
  • Publication number: 20150139482
    Abstract: According to one aspect, embodiments of the invention provide a system and method for utilizing the effort expended by a user in responding to a CAPTCHA request to automatically transcribe text from images in order to verify, retrieve and/or update geographic data associated with geographic locations at which the images were recorded.
    Type: Application
    Filed: April 12, 2012
    Publication date: May 21, 2015
    Applicant: GOOGLE INC.
    Inventors: Marco Zennaro, Luc Vincent, Kong Man Cheung, David Abraham
  • Patent number: 9036083
    Abstract: Techniques of detecting text in video are disclosed. In some embodiments, a portion of video content can be identified as having text. Text within the identified portion of the video content can be identified. A category for the identified text can be determined. In some embodiments, a determination is made as to whether the video content satisfies at least one predetermined condition, and the portion of video content is identified as having text in response to a determination that the video content satisfies the predetermined condition(s). In some embodiments, the predetermined condition(s) comprises at least one of a minimum level of clarity, a minimum level of contrast, and a minimum level of content stability across multiple frames. In some embodiments, additional information corresponding to the video content is determined based on the identified text and the determined category.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: May 19, 2015
    Assignee: Gracenote, Inc.
    Inventors: Irene Zhu, Wilson Harron, Markus K. Cremer
  • Publication number: 20150131918
    Abstract: A system for document processing including decomposing an image of a document into at least one data entry region sub-image, providing the data entry region sub-image to a data entry clerk available for processing the data entry region sub-image, receiving from the data entry clerk a data entry value associated with the data entry region sub-image, and validating the data entry value.
    Type: Application
    Filed: November 17, 2014
    Publication date: May 14, 2015
    Inventors: Avikam Baltsan, Ori Sarid, Aryeh Elimelech, Aharon Boker, Zvi Segal, Gideon Miller
  • Patent number: 9025890
    Abstract: A table record estimation device includes: a table element string extraction unit having a function of extracting text data from input data and acquiring a series of keywords as an element of a table (table data) from the extracted text data; a table element labeling unit having a function of labeling the individual keywords acquired by the table element string extraction unit for each type based on correspondence information stored in a classification rule storage unit; and a label appearance pattern estimation unit having a function of estimating a label permutation constituting one-unit record from a label string attached for the each type by the table element labeling unit and outputting the label permutation as a record estimation result.
    Type: Grant
    Filed: May 21, 2007
    Date of Patent: May 5, 2015
    Assignee: NEC Corporation
    Inventor: Itaru Hosomi
  • Patent number: 9021379
    Abstract: A method is provided for determining a word input by a gesture stroke on a virtual keyboard. The method includes receiving data representing the gesture stroke, analyzing the data to identify a set of dwell points in the gesture stroke, generating a simplified stroke defining a polyline having vertices corresponding with the dwell points of the identified set. The method further includes comparing the simplified stroke polyline with a plurality of predefined polylines each representing a respective word, to determine a closest matching polyline. The computing system outputs the word represented by the closest matching polyline.
    Type: Grant
    Filed: November 7, 2011
    Date of Patent: April 28, 2015
    Assignee: Google Inc.
    Inventors: Nirmal J. Patel, Thad E. Starner
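
A sketch of the dwell-point idea in patent 9021379, with invented geometry: samples where the finger slows down are collapsed into polyline vertices, and the simplified stroke is compared against predefined per-word polylines by mean vertex distance (equal vertex counts assumed; a fuller version would resample). The templates and the stroke are toy data.

```python
import math

def dwell_points(samples, speed_threshold=2.0):
    # speed of each sample = distance to the next one; the last sample always dwells
    speeds = [math.dist(p, q) for p, q in zip(samples, samples[1:])] + [0.0]
    verts, in_dwell = [], False
    for point, speed in zip(samples, speeds):
        if speed <= speed_threshold:
            if not in_dwell:
                verts.append(point)       # keep the first sample of each slow run
            in_dwell = True
        else:
            in_dwell = False
    return verts

def polyline_distance(a, b):
    if len(a) != len(b):
        return float('inf')               # a fuller version would resample to a common length
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

# hypothetical word templates: polylines through the key centres of each word
TEMPLATES = {
    'cat': [(30, 10), (10, 10), (50, 10)],
    'car': [(30, 10), (10, 10), (40, 0)],
}

# raw gesture samples: fast moves with slow-downs near 'c', 'a', then 't'
stroke = [(30, 10), (31, 10), (20, 10), (10, 10), (10.5, 10), (30, 10), (49, 10), (50, 10)]
verts = dwell_points(stroke)
word = min(TEMPLATES, key=lambda w: polyline_distance(verts, TEMPLATES[w]))
print(verts, '->', word)                  # [(30, 10), (10, 10), (49, 10)] -> cat
```
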
  • Patent number: 9008447
    Abstract: A method and system for character recognition are described. In one embodiment, it may use matched sequences rather than character shape to determine a computer legible result.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: April 14, 2015
    Assignee: Google Inc.
    Inventors: Martin T. King, Dale L. Grover, Clifford A. Kushler, James Q. Stafford-Fraser
  • Patent number: 8995775
    Abstract: A photo spam detector detects illegitimate non-natively captured images through extracting image features and feeding the extracted features into a probabilistic model. The probabilistic model categorizes the photo as legitimate or illegitimate. Requests to tag one or more users in a photo are analyzed by a tag analyzer that assesses relationships between the tag requests themselves, social relationships between the tagged users, and the presence or absence of faces within the regions specified by the tag requests. Based on the classification of images or tags as illegitimate, a social networking system applies one or more social media distribution policies to the image or tags to suppress or prohibit distribution.
    Type: Grant
    Filed: May 2, 2011
    Date of Patent: March 31, 2015
    Assignee: Facebook, Inc.
    Inventor: Felix Fung
  • Patent number: 8988543
    Abstract: The present invention relates to a camera-based method for text input and detection of a keyword or of a text part within a page or a screen, comprising the steps of: directing a camera module at the printed page and capturing an image thereof; digital image filtering of the captured image; detection of word blocks contained in the image, each word block most likely containing a recognizable word; performing OCR within each word block; determination of A-blocks among the word blocks according to a keyword probability determination rule, wherein each of the A-blocks most likely contains the keyword; assignment of an attribute to each A-block; indication of the A-blocks in the display by a frame or the like for a further selection of the keyword; further selection of the A-block containing the keyword based on the displayed attribute of the keyword; and forwarding the text content as text input to an application.
    Type: Grant
    Filed: April 28, 2011
    Date of Patent: March 24, 2015
    Assignee: Nuance Communications, Inc.
    Inventors: Cüneyt Göktekin, Oliver Tenchio