Context Analysis Or Word Recognition (e.g., Character String) Patents (Class 382/229)
  • Patent number: 10311139
    Abstract: Computer-implemented systems and methods are provided for suggesting emoticons for insertion into text based on an analysis of sentiment in the text. An example method includes: determining a first sentiment of text in a text field; selecting first text from the text field in proximity to a current position of an input cursor in the text field; identifying one or more candidate emoticons wherein each candidate emoticon is associated with a respective score indicating relevance to the first text and the first sentiment based on, at least, historical user selections of emoticons for insertion in proximity to respective second text having a respective second sentiment; providing one or more candidate emoticons having respective highest scores for user selection; and receiving user selection of one or more of the provided emoticons and inserting the selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: June 4, 2019
    Assignee: MZ IP Holdings, LLC
    Inventors: Gabriel Leydon, Nikhil Bojja
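    A minimal Python sketch of the scoring idea in the abstract above: rank candidate emoticons by how often users historically inserted them near similar text carrying the same sentiment. The history table, sentiment labels, and three-word proximity window are illustrative assumptions, not the patented method.
      from collections import Counter

      # Hypothetical history: (sentiment, nearby word) -> counts of emoticons users inserted there.
      HISTORY = {
          ("positive", "great"): Counter({":)": 12, ":D": 7}),
          ("positive", "thanks"): Counter({":)": 9, "<3": 4}),
          ("negative", "late"): Counter({":(": 8}),
      }

      def score_candidates(sentiment, nearby_words):
          """Score each emoticon by how often it was chosen near similar text with this sentiment."""
          scores = Counter()
          for word in nearby_words:
              for emoticon, count in HISTORY.get((sentiment, word), Counter()).items():
                  scores[emoticon] += count
          return scores

      def suggest(text_before_cursor, sentiment, top_n=3):
          nearby = text_before_cursor.lower().split()[-3:]  # text in proximity to the cursor
          return [e for e, _ in score_candidates(sentiment, nearby).most_common(top_n)]

      print(suggest("the show was great", "positive"))  # -> [':)', ':D']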
  • Patent number: 10303761
    Abstract: In a method of creating similar sentences from an entered original, one or more second phrases having the same meaning as a first phrase, which is part of the original, are extracted from a first database; an N-gram value is calculated according to a context dependence value, in a second database, corresponding to the one or more second phrases; one or more contiguous third phrases that include a number of second phrases equivalent to the N-gram value are extracted from one or more sentences obtained by replacing, in the original, the first phrase with the one or more second phrases; the appearance frequency of the one or more third phrases in a third database is calculated; and if the calculated appearance frequency is determined to be larger than or equal to a threshold, the one or more sentences are used as similar sentences of the original and are externally output.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: May 28, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Masaki Yamauchi, Nanami Fujiwara, Masahiro Imade
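    The abstract above describes a concrete pipeline: substitute a synonymous phrase, take the surrounding N-gram (with N drawn from a context-dependence table), and keep the substitution only if that N-gram is frequent enough in a corpus. A toy Python sketch follows; the three databases and the threshold are stand-ins for the resources named in the abstract.
      from collections import Counter
      from itertools import islice

      SYNONYMS = {"buy": ["purchase", "acquire"]}      # first database: same-meaning phrases
      NGRAM_N = {"purchase": 2, "acquire": 2}          # second database: context-dependent N value
      CORPUS = ["i want to purchase a ticket", "please purchase a ticket online"]

      def ngrams(tokens, n):
          return zip(*(islice(tokens, i, None) for i in range(n)))

      # Third database: appearance frequencies of n-grams in a reference corpus.
      CORPUS_NGRAMS = Counter(g for s in CORPUS for n in (2, 3) for g in ngrams(s.split(), n))

      def similar_sentences(original, threshold=1):
          tokens = original.split()
          results = []
          for i, tok in enumerate(tokens):
              for alt in SYNONYMS.get(tok, []):
                  candidate = tokens[:i] + [alt] + tokens[i + 1:]
                  n = NGRAM_N[alt]
                  window = candidate[max(0, i - n + 1): i + n]   # contiguous phrases around the replacement
                  freq = sum(CORPUS_NGRAMS[g] for g in ngrams(window, n))
                  if freq >= threshold:                          # keep only fluent substitutions
                      results.append(" ".join(candidate))
          return results

      print(similar_sentences("i want to buy a ticket"))  # -> ['i want to purchase a ticket']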
  • Patent number: 10275687
    Abstract: Data representing an image is received by an image recognition system. The image recognition system generates an image classification output distribution for a plurality of image features based on analysis of the data representing the image and training data stored for the image recognition system. One or more filters are applied to the image classification output distribution to obtain an updated image classification output distribution. A highest confidence value is selected from the updated image classification output distribution. A selected image feature associated with the highest confidence value is identified from the plurality of image features. Information associated with the selected image feature is obtained from a database and communicated to the user's device by the image recognition system.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: April 30, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Prasenjit Dey, Vijay Ekambaram, Ravindranath Kokku, Nitendra Rajput, Ruhi Sharma Mittal
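    The filtering-and-selection step in the abstract above reduces to: filter the classification output distribution, take the label with the highest remaining confidence, and look up information for it. A small Python illustration, with a made-up filter and lookup table:
      # Toy classification output distribution: feature label -> confidence.
      distribution = {"golden retriever": 0.46, "labrador": 0.41, "tennis ball": 0.13}

      # Illustrative filter: suppress labels the requesting context is not interested in.
      def apply_filters(dist, excluded=("tennis ball",)):
          kept = {label: conf for label, conf in dist.items() if label not in excluded}
          total = sum(kept.values()) or 1.0
          return {label: conf / total for label, conf in kept.items()}  # renormalize

      INFO_DB = {"golden retriever": "Friendly sporting breed ...", "labrador": "Popular retriever ..."}

      updated = apply_filters(distribution)
      best_label = max(updated, key=updated.get)   # image feature with the highest confidence value
      print(best_label, round(updated[best_label], 2), INFO_DB.get(best_label, "no info"))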
  • Patent number: 10261795
    Abstract: Method, apparatus, and program means for performing a string comparison operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store a result of a comparison between each data element of a first and second operand corresponding to a first and second text string, respectively.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventors: Michael A. Julier, Jeffrey D. Gray, Srinivas Chennupaty, Sean P. Mirkes, Mark P. Seconi
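    This patent covers a processor string-comparison instruction, so the per-element comparison result in the abstract can only be illustrated by a scalar software stand-in. The aggregation mode shown below, matching each element of the first operand against any element of the second, is an assumption about intent, not the instruction's definition.
      def compare_operands(op1: bytes, op2: bytes) -> list[int]:
          """For each element of the first operand, record whether it matches any element
          of the second operand (a software stand-in for a packed compare result mask)."""
          return [1 if b in op2 else 0 for b in op1]

      # Each result flag marks a character of the first string found anywhere in the second.
      print(compare_operands(b"hello, world", b"lo"))  # [0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0]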
  • Patent number: 10255250
    Abstract: A message processing device (10) includes an estimator (11), an extractor (12), and an indicator (13). The estimator (11) estimates words understandable to a destination user (1). The extractor (12) extracts, from a message (3) created by a transmission originator user (2), a portion that does not match the words estimated by the estimator (11). The indicator (13) presents the message (3) to the transmission originator user (2) with the portion extracted by the extractor (12) shown in an emphasized manner.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: April 9, 2019
    Assignee: Rakuten, Inc.
    Inventor: Jun Katakawa
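    The flow in the abstract above, i.e. estimate the recipient's vocabulary, extract the parts of the outgoing message that fall outside it, and show them to the sender emphasized, can be sketched in a few lines of Python. The vocabulary set and the asterisk emphasis are illustrative choices.
      import re

      def emphasize_unfamiliar(message: str, known_words: set[str]) -> str:
          """Wrap tokens the destination user is not estimated to know in *asterisks*
          so the sender can reconsider them before transmitting."""
          def mark(match):
              word = match.group(0)
              return word if word.lower() in known_words else f"*{word}*"
          return re.sub(r"[A-Za-z']+", mark, message)

      # Hypothetical estimate of the recipient's vocabulary.
      recipient_vocabulary = {"please", "send", "the", "report", "by", "friday"}
      print(emphasize_unfamiliar("Please send the consolidated report by Friday", recipient_vocabulary))
      # -> Please send the *consolidated* report by Friday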
  • Patent number: 10254917
    Abstract: Various embodiments provide a method that comprises receiving a set of segments from a text field, analyzing the set of segments to determine at least one of a target subtext or a target meaning associated with the set of segments, and identifying a set of candidate emoticons where each candidate emoticon in the set of candidate emoticons has an association between the candidate emoticon and at least one of the target subtext or the target meaning. The method may further comprise presenting the set of candidate emoticons for entry selection at a current position of an input cursor, receiving an entry selection for a set of selected emoticons from the set of candidate emoticons, and inserting the set of selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: April 9, 2019
    Assignee: MZ IP Holdings, LLC
    Inventor: Gabriel Leydon
  • Patent number: 10185898
    Abstract: An image recognition approach employs both computer generated and manual image reviews to generate image tags characterizing an image. The computer generated and manual image reviews can be performed sequentially or in parallel. The generated image tags may be provided to a requester in real-time, be used to select an advertisement, and/or be used as the basis of an internet search. In some embodiments generated image tags are used as a basis for an upgraded image review. A confidence of a computer generated image review may be used to determine whether or not to perform a manual image review. Images and their associated image tags are optionally added to an image sequence.
    Type: Grant
    Filed: March 11, 2016
    Date of Patent: January 22, 2019
    Assignee: CLOUDSIGHT, INC.
    Inventors: Bradford A Folkens, Dominik K Mazur
  • Patent number: 10176392
    Abstract: Optical character recognition is described in various implementations. In one example implementation, a method may include receiving a plurality of optical character recognition (OCR) outputs provided by a respective plurality of OCR engines, each of the plurality of OCR outputs being representative of text depicted in a portion of an electronic image. The method may also include identifying a document context associated with the electronic image, and generating an output character set by applying a character resolution model to resolve differences among the plurality of OCR outputs. The character resolution model may define a probability of character recognition accuracy for each of the plurality of OCR engines given the identified document context. The method may also include updating the character resolution model to generate an updated character resolution model such that subsequent generation of output character sets is based on the updated character resolution model.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: January 8, 2019
    Assignee: LONGSAND LIMITED
    Inventor: Sean Blanchflower
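    A compact Python sketch of the character-resolution idea in the abstract above: each OCR engine's vote for a character is weighted by its accuracy probability given the document context. The accuracy table is invented, and the sketch assumes the engine outputs are already aligned and of equal length, which the real system would not require.
      from collections import defaultdict

      # Illustrative accuracy model: P(engine output is correct | document context).
      ACCURACY = {
          ("engine_a", "invoice"): 0.92,
          ("engine_b", "invoice"): 0.80,
          ("engine_c", "invoice"): 0.65,
      }

      def resolve(outputs: dict[str, str], context: str) -> str:
          """Character-by-character weighted vote across OCR engines."""
          length = len(next(iter(outputs.values())))
          result = []
          for i in range(length):
              votes = defaultdict(float)
              for engine, text in outputs.items():
                  votes[text[i]] += ACCURACY[(engine, context)]
              result.append(max(votes, key=votes.get))
          return "".join(result)

      outputs = {"engine_a": "INV0ICE", "engine_b": "INVOICE", "engine_c": "INVOICE"}
      # Two lower-weight engines outvote one high-weight engine at the disputed position.
      print(resolve(outputs, "invoice"))  # -> INVOICE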
  • Patent number: 10127478
    Abstract: According to one embodiment, an electronic apparatus includes a hardware processor. The hardware processor converts a first character in a first image of images in which characters of languages are rendered, into a first character code by using dictionary data for a first language environment, converts the first character into a second character code by using dictionary data for a second language environment, causes a memory to store a pair of the first character code and a first area in the first image corresponding to the first character code, and causes the memory to store a pair of the second character code and a second area in the first image corresponding to the second character code.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: November 13, 2018
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Hideki Tsutsui
  • Patent number: 10043009
    Abstract: Technologies for analyzing software similarity include a computing device having access to a collection of sample software. The computing device identifies a number of code segments, such as basic blocks, within the software. The computing device normalizes each code segment by extracting the first data element of each computer instruction within the code segment. The first data element may be the first byte. The computing device calculates a probabilistic feature hash signature for each normalized code segment. The computing device may filter out known-good code segments by comparing signatures with a probabilistic hash filter generated from a collection of known-good software. The computing device calculates a similarity value between each pair of unfiltered, normalized code segments. The computing device generates a graph including the normalized code segments and the similarity values. The computing device may cluster the graph using a force-based clustering algorithm.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: August 7, 2018
    Assignee: Intel Corporation
    Inventor: Jason R. Upchurch
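    The abstract above gives a fairly complete recipe: normalize each basic block to the first byte of each instruction, compute a feature signature per block, and compare signatures pairwise to build a similarity graph. The sketch below substitutes n-gram shingles and Jaccard similarity for the probabilistic feature hash, so it only approximates the described signature.
      from itertools import combinations

      def normalize(block: list[bytes]) -> bytes:
          """Keep only the first byte of each instruction in a basic block,
          discarding operands that vary between compilations."""
          return bytes(insn[0] for insn in block)

      def features(normalized: bytes, n: int = 3) -> set[bytes]:
          """n-gram shingles of the normalized block (a stand-in for the
          probabilistic feature-hash signature described in the abstract)."""
          return {normalized[i:i + n] for i in range(len(normalized) - n + 1)}

      def similarity(a: set[bytes], b: set[bytes]) -> float:
          """Jaccard similarity between two feature sets."""
          return len(a & b) / len(a | b) if a | b else 0.0

      blocks = {
          "sample1_block0": [b"\x55", b"\x48\x89\xe5", b"\x8b\x45\xfc", b"\xc3"],
          "sample2_block0": [b"\x55", b"\x48\x89\xe5", b"\x8b\x45\xf8", b"\xc3"],
          "sample3_block0": [b"\x31\xc0", b"\xc3"],
      }
      sigs = {name: features(normalize(block)) for name, block in blocks.items()}

      # Edges of the similarity graph: node pairs with their similarity values.
      for (n1, s1), (n2, s2) in combinations(sigs.items(), 2):
          print(n1, n2, round(similarity(s1, s2), 2))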
  • Patent number: 10038839
    Abstract: Various approaches provide for detecting and recognizing text to enable a user to perform various functions or tasks. For example, a user could point a camera at an object with text, in order to capture an image of that object. The camera can be integrated with a portable computing device that is capable of taking the image and processing the image (or providing the image for processing) to recognize, identify, and/or isolate the text in order to send the image of the object as well as recognized text to an application, function, or system, such as an electronic marketplace.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: July 31, 2018
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Kathy Wing Lam Ma, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
  • Patent number: 10001838
    Abstract: A user can emulate touch screen events with motions and gestures that the user performs at a distance from a computing device. A user can utilize specific gestures, such as a pinch gesture, to designate portions of motion that are to be interpreted as input, to differentiate from other portions of the motion. A user can then perform actions such as text input by performing motions with the pinch gesture that correspond to words or other selections recognized by a text input program. A camera-based detection approach can be used to recognize the location of features performing the motions and gestures, such as a hand, finger, and/or thumb of the user.
    Type: Grant
    Filed: November 5, 2014
    Date of Patent: June 19, 2018
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Kenneth M. Karakotsios, Dong Zhou
  • Patent number: 9996668
    Abstract: Methods and apparatus, including computer program products, are provided for backfilling. In some example embodiments, there is provided a method that includes receiving, at a receiver, backfill data representative of sensor data stored, at a continuous blood glucose sensor and transmitter assembly, due to a loss of a wireless link between the receiver and the continuous blood glucose sensor and transmitter assembly; generating, at the receiver, at least one of a notification or a graphically distinct indicator for presentation at a display of the receiver, the at least one of the notification or the graphically distinct indicator enabling the backfill data to be graphically distinguished, when presented at the display, from non-backfill data; and generating, at the receiver, a view including the backfill data, the non-backfill data, and the generated at least one of the notification or the graphically distinct indicator. Related systems, methods, and articles of manufacture are also described.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: June 12, 2018
    Assignee: DexCom, Inc.
    Inventors: Eli Reihman, Sebastian Bohm, Leif N. Bowman, Katherine Yerre Koehler, Disha B. Sheth, Peter C. Simpson, Jim Stephen Amidei, Douglas William Burnette, Michael Robert Mensinger, Eric Cohen, Hari Hampapuram, Phil Mayou
  • Patent number: 9990109
    Abstract: An information display terminal achieves a suitable display of types of electronic content such as electronic books with a display unit, an operation unit for receiving an operation instruction directed to the information display terminal, and a scroll control unit for controlling the operation for automatic scroll processing of the electronic content displayed on the display unit. When an operation instruction instructing initiation of the automatic scroll processing is input to the operation unit, the scroll control unit performs control to initiate automatic scroll processing by which the electronic content displayed on the display unit is displayed in such a manner that the electronic content is moved a predetermined distance per predetermined time period; and when an operation instruction instructing interruption of the automatic scroll processing is input to the operation unit, the scroll control unit performs control to interrupt the automatic scroll processing.
    Type: Grant
    Filed: June 17, 2013
    Date of Patent: June 5, 2018
    Assignee: MAXELL, LTD.
    Inventors: Kazuhiko Yoshizawa, Nobuo Masuoka, Masayuki Hirabayashi, Motoyuki Suzuki
  • Patent number: 9977776
    Abstract: An input support apparatus includes a processor and a memory configured to store association information in which a character string decided to be inputted on a screen is associated with coordinates at which the character string is inputted on the screen. The processor is configured to execute a process including searching the association information for a character string that corresponds to the coordinates of an operation position on the screen for an input operation, and outputting the searched-out character string.
    Type: Grant
    Filed: June 1, 2015
    Date of Patent: May 22, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Lingyan Feng
  • Patent number: 9881376
    Abstract: A method, system, and computer-readable storage medium for performing content based transitions between images. Image content within each image of a set of images is analyzed to determine at least one respective characteristic metric for each image. A respective transition score for each pair of at least a subset of the images is determined with respect to each transition effect of a plurality of transition effects based on the at least one respective characteristic metric for each image. Transition effects implementing transitions between successive images for a sequence of the images are determined based on the transition scores. An indication of the determined transition effects is stored. The determined transition effects are useable to present the images in a slideshow or other image sequence presentation.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: January 30, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Elya Shechtman, Shai Bagon, Aseem O. Agarwala
  • Patent number: 9781075
    Abstract: Managing network ports is disclosed. Network session identification information is received. The network session identification information is associated with a destination IP address and a destination network port. An available source network port is determined using a data structure that is based on the destination IP address and the destination network port.
    Type: Grant
    Filed: July 22, 2014
    Date of Patent: October 3, 2017
    Assignee: Avi Networks
    Inventors: Sreeram Iyer, Kiron Haltore, Murali Basavaiah
  • Patent number: 9773009
    Abstract: The present application discloses a method and an apparatus for obtaining structured information in a fixed layout document to improve the structuring speed for information management of a fixed layout document. The method may comprise: determining initial page number information corresponding to current directory entry of the document; segmenting first article content of a page corresponding to the initial page number information into at least one structured-characters-block; searching in each structured-characters-block for a first structured-characters-block which matches with name strings of the current directory entry, and obtaining first position information about where the first structured-characters-block is located in the first article content; and obtaining initial position information of the current directory entry and end position information of the previous directory entry based on the first position information.
    Type: Grant
    Filed: December 7, 2012
    Date of Patent: September 26, 2017
    Assignees: Beijing Founder Apabi Technology Limited, Peking University Founder Group Co., Ltd.
    Inventors: Ning Dong, Wenjuan Huang, Baoliang Zhang
  • Patent number: 9767388
    Abstract: An improved method for verifying whether a character-recognition technology has correctly identified which characters are represented by character images involves displaying the uncertain character images in place of their respective hypothesis characters in a document being read by a verifier. The verifier may mark incorrectly spelled words containing the uncertain character images. Based on the markings, a system adjusts a confidence level associated with the hypothesis about the uncertain character in order to obtain a confirmed hypothesis linked to the uncertain character.
    Type: Grant
    Filed: October 7, 2014
    Date of Patent: September 19, 2017
    Assignee: ABBYY DEVELOPMENT LLC
    Inventors: Aram Bengurovich Pakhchanian, Michael Pavlovich Pogosskiy
  • Patent number: 9753686
    Abstract: Systems, apparatuses and methodologies for an administrator to configure a flexible document workflow are provided. A workflow creation interface may be provided on a terminal (e.g., via application software) for an administrator to create and register document workflow profiles. Such workflow creation interface may be configured to include a processing location selector to receive selection by the administrator of (i) a processing location amongst plural processing locations or (ii) automatic determination. Such processing location selection may be on a connector-by-connector basis or for the entire workflow. For example, the administrator can register multiple versions of a workflow, to be processed at respective processing locations. As another example, a hybrid workflow can be created in which some workflow connectors or components are performed at one location and other workflow connectors or components are performed at another location.
    Type: Grant
    Filed: December 10, 2015
    Date of Patent: September 5, 2017
    Assignee: RICOH COMPANY, LTD.
    Inventors: Qinlei Fan, Yuuki Ohtaka
  • Patent number: 9754076
    Abstract: A computer processor may receive medical data including a report and an image. The computer processor may analyze the report using natural language processing to identify a condition and a corresponding criterion. The computer processor may also analyze the image using an image processing model to generate an image analysis. The computer processor may determine whether the report has a potential problem by comparing the image analysis to the criterion.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: September 5, 2017
    Assignee: International Business Machines Corporation
    Inventors: Keith P. Biegert, Brendan C. Bull, David Contreras, Robert C. Sizemore, Sterling R. Smith
  • Patent number: 9747274
    Abstract: A similarity between character strings is assessed by identifying first and second character strings as candidate similar character strings, determining a frequency of occurrence for at least one of the first and second character strings from a collection of character strings, and designating the first and second character strings as similar based on the determined frequency of occurrence.
    Type: Grant
    Filed: April 15, 2015
    Date of Patent: August 29, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shudong Huang, Jeffrey J. Jonas, Brian E. Macy, Frankie E. Patman Maguire, Charles K. Williams
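    This abstract (and its continuation in the next entry, 9747273) combines a candidate-similarity test with a frequency check over the collection of strings. In the sketch below, the edit-ratio candidate test, the frequency threshold, and the assumption that low-frequency strings are the ones confirmed as variants are all illustrative choices, since the abstract does not fix them.
      from collections import Counter
      from difflib import SequenceMatcher

      names = ["jon smith", "john smith", "john smith", "jon smith", "ann lee", "anne lee"]
      frequency = Counter(names)

      def candidate_similar(a: str, b: str, ratio: float = 0.85) -> bool:
          """Step 1: identify candidate similar strings (illustrative edit-ratio test)."""
          return a != b and SequenceMatcher(None, a, b).ratio() >= ratio

      def designate_similar(a: str, b: str, max_frequency: int = 2) -> bool:
          """Step 2: confirm the pair only if the occurrence frequency supports it
          (assumption: very common strings are less likely to be variants of each other)."""
          return candidate_similar(a, b) and min(frequency[a], frequency[b]) <= max_frequency

      print(designate_similar("jon smith", "john smith"))  # True under these toy thresholds
      print(designate_similar("ann lee", "anne lee"))      # True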
  • Patent number: 9747273
    Abstract: A similarity between character strings is assessed by identifying first and second character strings as candidate similar character strings, determining a frequency of occurrence for at least one of the first and second character strings from a collection of character strings, and designating the first and second character strings as similar based on the determined frequency of occurrence.
    Type: Grant
    Filed: August 19, 2014
    Date of Patent: August 29, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shudong Huang, Jeffrey J. Jonas, Brian E. Macy, Frankie E. Patman Maguire, Charles K. Williams
  • Patent number: 9747316
    Abstract: Methods and apparatus consistent with the invention provide the ability to organize, index, search, and present time series data based on searches. Time series data are sequences of time stamped records occurring in one or more usually continuous streams, representing some type of activity. In one embodiment, time series data is organized into discrete events with normalized time stamps and the events are indexed by time and keyword. A search is received and relevant event information is retrieved based in whole or in part on the time indexing mechanism, keyword indexing mechanism, or statistical indices calculated at the time of the search.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: August 29, 2017
    Assignee: Splunk Inc.
    Inventors: Michael Joseph Baum, R. David Carasso, Robin Kumar Das, Rory Greene, Bradley Hall, Nicholas Christian Mealy, Brian Philip Murphy, Stephen Phillip Sorkin, Andre David Stechert, Erik M. Swan
  • Patent number: 9727804
    Abstract: Determining a set of edit operations to perform on a string, such as one generated by optical character recognition, to satisfy a string template by determining a minimum cost of performing edit operations on the string to satisfy the string template and then determining the set of edit operations corresponding to the minimum cost. Transforming a string to satisfy one or more string templates by determining a minimum cost of performing edit operations on the string to satisfy one or more string templates, selecting one or more minimum costs, determining a set of edit operations corresponding to the minimum costs, and then performing the set of edit operations on the string. Determining a minimum cost of performing edit operations on a string to satisfy a string template by determining set costs of performing sets of edit operations using costs associated with edit operations of the set and determining the minimum cost using the set costs.
    Type: Grant
    Filed: April 15, 2005
    Date of Patent: August 8, 2017
    Assignee: Matrox Electronic Systems, LTD.
    Inventor: Jean-Simon Lapointe
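    The minimum-cost computation described above maps naturally onto dynamic programming over the string and the template. In the Python sketch below the template is a sequence of character-class predicates and all edit costs are 1; both are assumptions, since the abstract leaves the template representation and cost model open.
      # Illustrative template: two letters followed by three digits.
      TEMPLATE = [str.isalpha, str.isalpha, str.isdigit, str.isdigit, str.isdigit]

      def min_cost_edits(s: str, template, sub=1, ins=1, dele=1):
          """Minimum cost of substituting/inserting/deleting characters of s so that it
          satisfies the template, plus one set of edit operations achieving that cost."""
          m, n = len(s), len(template)
          cost = [[0] * (n + 1) for _ in range(m + 1)]
          for i in range(1, m + 1):
              cost[i][0] = i * dele
          for j in range(1, n + 1):
              cost[0][j] = j * ins
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  match = 0 if template[j - 1](s[i - 1]) else sub
                  cost[i][j] = min(cost[i - 1][j - 1] + match,  # keep or substitute
                                   cost[i - 1][j] + dele,       # delete s[i-1]
                                   cost[i][j - 1] + ins)        # insert a template character
          # Backtrack one optimal set of edit operations.
          ops, i, j = [], m, n
          while i > 0 or j > 0:
              if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (0 if template[j - 1](s[i - 1]) else sub):
                  if not template[j - 1](s[i - 1]):
                      ops.append(f"substitute {s[i - 1]!r} at {i - 1}")
                  i, j = i - 1, j - 1
              elif i > 0 and cost[i][j] == cost[i - 1][j] + dele:
                  ops.append(f"delete {s[i - 1]!r} at {i - 1}")
                  i -= 1
              else:
                  ops.append(f"insert class-{j - 1} character at {i}")
                  j -= 1
          return cost[m][n], list(reversed(ops))

      print(min_cost_edits("AB1Z3", TEMPLATE))  # (1, ["substitute 'Z' at 3"])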
  • Patent number: 9720644
    Abstract: A system that acquires captured voice data corresponding to a spoken command; sequentially analyzes the captured voice data; causes a display to display a visual indication corresponding to the sequentially analyzed captured voice data; and performs a predetermined operation corresponding to the spoken command when it is determined that the sequential analysis of the captured voice data is complete.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: August 1, 2017
    Assignee: SONY CORPORATION
    Inventors: Junki Ohmura, Michinari Kohno, Kenichi Okada
  • Patent number: 9691009
    Abstract: The present invention provides a portable optical reader, an optical reading method using the portable optical reader, and a computer program capable of detecting a high possibility of a reading error and notifying a user of that possibility. A character string as a reading target is imaged, and the character string is recognized based on the captured image. A plurality of reading formats defining an attribute of the character string is stored, and a first reading format matching the recognized character string is searched for among the plurality of stored reading formats. Among the plurality of stored reading formats, a second reading format that contains a character string matching the first reading format as a partial character string is also searched for. Based on the search results, a possibility of a reading error regarding the recognized character string is notified.
    Type: Grant
    Filed: April 6, 2015
    Date of Patent: June 27, 2017
    Assignee: Keyence Corporation
    Inventors: Taiga Nomi, Shusuke Oki
  • Patent number: 9690767
    Abstract: Computer-implemented systems and methods are provided for suggesting emoticons for insertion into text based on an analysis of sentiment in the text. An example method includes: determining a first sentiment of text in a text field; selecting first text from the text field in proximity to a current position of an input cursor in the text field; identifying one or more candidate emoticons wherein each candidate emoticon is associated with a respective score indicating relevance to the first text and the first sentiment based on, at least, historical user selections of emoticons for insertion in proximity to respective second text having a respective second sentiment; providing one or more candidate emoticons having respective highest scores for user selection; and receiving user selection of one or more of the provided emoticons and inserting the selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: June 27, 2017
    Assignee: Machine Zone, Inc.
    Inventors: Gabriel Leydon, Nikhil Bojja
  • Patent number: 9639783
    Abstract: Systems, apparatuses, and methods to relate images of words to a list of words are provided. A trellis based word decoder analyses a set of OCR characters and probabilities using a forward pass across a forward trellis and a reverse pass across a reverse trellis. Multiple paths may result, however, the most likely path from the trellises has the highest probability with valid links. A valid link is determined from the trellis by some dictionary word traversing the link. The most likely path is compared with a list of words to find the word closest to the most likely path.
    Type: Grant
    Filed: April 28, 2015
    Date of Patent: May 2, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Pawan Kumar Baheti, Kishor K. Barman, Raj Kumar Krishna Kumar
  • Patent number: 9613299
    Abstract: Methods and systems for performing character recognition of a document image include analyzing verification performed by a user on a recognized text obtained by character recognition of a document image, identifying analogous changes of a first incorrect character for a first correct character, and prompting the user to initiate a training of a recognition pattern based on the identified analogous changes.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: April 4, 2017
    Assignee: ABBYY Development LLC
    Inventors: Michael Krivosheev, Natalia Kolodkina, Alexander Makushev
  • Patent number: 9519917
    Abstract: A method and a system for context-based real-time advertising are provided. In example embodiments, a document content that is displayed to a user may be analyzed and keywords may be identified. Selected listings from a publication system may be received; the selected listings may be selected using the keywords. The system may detect user events associated with the keywords and, in response to the detection of the user events, display information related to the listings while maintaining the displaying of the document content.
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: December 13, 2016
    Assignee: eBay Inc.
    Inventor: Thomas Geiger
  • Patent number: 9471219
    Abstract: A text recognition apparatus and method for a portable terminal are provided for recognizing a text image selected by a pen on a screen image as text. The text recognition method of the present invention includes displaying an image; configuring a recognition area on the image in response to a gesture made with a pen; recognizing text in the recognition area; displaying the recognized text and action items corresponding to the text; and executing, when one of the action items is selected, an action corresponding to the selected action item.
    Type: Grant
    Filed: August 21, 2013
    Date of Patent: October 18, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sihak Jang, Seonhwa Kim, HeeJin Kim, Mijung Park
  • Patent number: 9454240
    Abstract: A computing device is described that outputs, for display, a graphical keyboard comprising a plurality of keys. The computing device receives an indication of a gesture detected at a presence-sensitive input device. The computing device determines, based at least in part on the indication of the gesture and at least one characteristic of the gesture, one or more keys from the plurality of keys. The computing device determines a character string based on the one or more keys from the plurality of keys. In response to determining that the character string is not included in a lexicon and a spatial model probability associated with the one or more keys from the plurality of keys exceeds a probability threshold, the computing device outputs, for display, the character string.
    Type: Grant
    Filed: April 18, 2013
    Date of Patent: September 27, 2016
    Assignee: Google Inc.
    Inventors: Satoshi Kataoka, Keisuke Kuroyanagi
  • Patent number: 9445142
    Abstract: An information processing apparatus which communicates with an image capturing apparatus and reproduces video data obtained by the image capturing apparatus, comprises a unit which requests a segment list in which information of segments of video data is written; a unit which acquires the requested segment list; a unit which decides which segment, from the segments in the acquired segment list, to request; a unit which requests the decided segment from the image capturing apparatus; a unit which acquires the requested segment; and a unit which calculates a delay time for segment transmission based on a number of segments in the segment list.
    Type: Grant
    Filed: July 10, 2014
    Date of Patent: September 13, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventor: Mayu Yokoi
  • Patent number: 9424251
    Abstract: A method of extracting a semantic distance from a mathematical sentence and classifying the mathematical sentence by the semantic distance includes: receiving a user query; extracting at least one keyword included in the received user query; and extracting a semantic distance by indexing one or more natural language tokens and mathematical equation tokens including semantic information, extracting the semantic distance between the at least one extracted keyword and the one or more indexed items of semantic information by referring to the indexed information, and acquiring a similarity of the received user query and the semantic information.
    Type: Grant
    Filed: June 6, 2013
    Date of Patent: August 23, 2016
    Assignee: SK TELECOM CO., LTD.
    Inventors: Keun Tae Park, Yong Gil Park, Hyeongin Choi, Nam Sook Wee, Doo Seok Lee, Jung Kyo Sohn, Haeng Moon Kim, Dong Hahk Lee
  • Patent number: 9411801
    Abstract: Disclosed are implementations of methods and systems for displaying definitions and translations of words by searching for a translation simultaneously in various languages according to a query in a general language dictionary. The invention removes the need to specify a source language for the word or word combination when translated into a target language. The target language may be preset. Translation is possible for word combinations in multiple source languages. Source words may be entered manually or captured by an imaging component of an electronic device. When captured, a word combination is selected, and subjected to optical character recognition (OCR) and translation. Source language and OCR language may be suggested via geolocation of the electronic device.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: August 9, 2016
    Assignee: ABBYY Development LLC
    Inventor: Maria Osipova
  • Patent number: 9412052
    Abstract: A method for extracting text from an image data is disclosed. The method includes pre-processing, via a processor, the image data to obtain a readable image data. The method further includes filtering, via the processor, a plurality of copies of the readable image data using a plurality of noise filters to obtain a corresponding plurality of noise removed images. Yet further, the method includes performing, via the processor, image data recognition on each of the plurality of noise removed images to obtain a text copy associated with each of the plurality of noise removed images. Moreover, the method includes ranking, via the processor, each word in the text copy associated with each of the plurality of noise removed images based on a predefined set of parameters. Finally, the method includes selecting, via the processor, highest ranked words within the text copy associated with each of the plurality of noise removed images to obtain output text for the image data.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: August 9, 2016
    Assignee: WIPRO LIMITED
    Inventors: Harihara Vinayakaram Natarajan, Tamilselvan Subramanian
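    A toy end-to-end version of the pipeline above: OCR each noise-filtered copy of the image, rank the word candidates at each position, and keep the highest-ranked one. The fake_ocr stand-in, the dictionary, and the ranking criteria (dictionary membership, then cross-copy agreement) are assumptions in place of the patent's "predefined set of parameters".
      from collections import Counter

      def fake_ocr(image_text: str) -> list[str]:
          """Stand-in for OCR on one noise-filtered copy of the image (returns words)."""
          return image_text.split()

      # Pretend three different noise filters produced three slightly different OCR copies.
      ocr_copies = [
          fake_ocr("quick brown f0x jumps"),
          fake_ocr("quick brown fox jumps"),
          fake_ocr("quiek brown fox jumps"),
      ]

      DICTIONARY = {"quick", "brown", "fox", "jumps"}

      def rank(word: str, agreement: Counter) -> tuple[bool, int]:
          """Illustrative ranking: dictionary membership first, then cross-copy agreement."""
          return (word.lower() in DICTIONARY, agreement[word])

      output = []
      for position in range(len(ocr_copies[0])):
          candidates = [copy[position] for copy in ocr_copies]
          agreement = Counter(candidates)
          output.append(max(candidates, key=lambda w: rank(w, agreement)))

      print(" ".join(output))  # -> quick brown fox jumps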
  • Patent number: 9396389
    Abstract: A digital camera associated with a mobile processing apparatus is used to produce a file containing a 2D digitized image of a document having pre-formatted fields for user's check marks. The image is electronically matched to a digital template of the document for extracting digitized images of the pre-formatted fields, which are thereafter analyzed for presence therein of user-entered check marks.
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: July 19, 2016
    Assignee: ABBYY Development LLC
    Inventor: Sergey Anatolyevich Kuznetsov
  • Patent number: 9384423
    Abstract: A system and method for computing confidence in an output of a text recognition system includes performing character recognition on an input text image with a text recognition system to generate a candidate string of characters. A first representation is generated, based on the candidate string of characters, and a second representation is generated based on the input text image. A confidence in the candidate string of characters is computed based on a computed similarity between the first and second representations in a common embedding space.
    Type: Grant
    Filed: May 28, 2013
    Date of Patent: July 5, 2016
    Assignee: XEROX CORPORATION
    Inventors: Jose Antonio Rodriguez-Serrano, Florent C. Perronnin
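    The confidence computation above can be pictured as: embed the candidate string and the text image into a common space and take their similarity. In the real system both representations are learned; the sketch below fakes the image side by embedding the transcript the image is assumed to contain, purely so the example runs on its own.
      import math

      def embed_text(s: str, dim: int = 64) -> list[float]:
          """Toy embedding: character-bigram counts hashed into a fixed-size vector."""
          vec = [0.0] * dim
          for a, b in zip(s, s[1:]):
              vec[hash(a + b) % dim] += 1.0
          return vec

      def embed_image(image_transcript: str, dim: int = 64) -> list[float]:
          """Stand-in for the image-side encoder: embeds the transcript the image is
          assumed to depict, in place of a learned image representation."""
          return embed_text(image_transcript, dim)

      def confidence(candidate: str, image_vec: list[float]) -> float:
          """Cosine similarity between the two representations in the common space."""
          cand_vec = embed_text(candidate)
          dot = sum(x * y for x, y in zip(cand_vec, image_vec))
          norm = math.sqrt(sum(x * x for x in cand_vec)) * math.sqrt(sum(y * y for y in image_vec))
          return dot / norm if norm else 0.0

      image_vec = embed_image("LONDON")        # image assumed to depict the word LONDON
      print(confidence("LONDON", image_vec))   # high confidence
      print(confidence("LQND0N", image_vec))   # lower confidence for a misrecognition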
  • Patent number: 9383913
    Abstract: A data filtering menu enabling a user to select different characteristics and values may be displayed. Each of the characteristics may be displayed in a first selectable list in the filtering menu. Once a user selects one of the characteristics, a second list containing selectable values associated with the selected characteristic may be displayed in a second list. The selected values may filter a set of data and the list of characteristics may be modified to display a representation of values selected from one or more of the second lists. Additionally, a selectable object associated with a characteristic having user selected values may also be displayed with a filtered result. If this object is selected, a list of values from the second list may be redisplayed. The user may then select different values and re-execute the filter with the new values.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: July 5, 2016
    Assignee: SAP SE
    Inventors: Timo Hoyer, Sascha Hans Grub
  • Patent number: 9372608
    Abstract: Computer-implemented systems and methods are provided for suggesting emoticons for insertion into text based on an analysis of sentiment in the text. An example method includes: determining a first sentiment of text in a text field; selecting first text from the text field in proximity to a current position of an input cursor in the text field; identifying one or more candidate emoticons wherein each candidate emoticon is associated with a respective score indicating relevance to the first text and the first sentiment based on, at least, historical user selections of emoticons for insertion in proximity to respective second text having a respective second sentiment; providing one or more candidate emoticons having respective highest scores for user selection; and receiving user selection of one or more of the provided emoticons and inserting the selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: June 21, 2016
    Assignee: Machine Zone, Inc.
    Inventors: Gabriel Leydon, Nikhil Bojja
  • Patent number: 9348459
    Abstract: A method for inputting a character executed by a computer that inputs a character, the method includes: obtaining a first pressed position at which the pressing operation has been performed and a first key corresponding to the first pressed position; detecting deletion of a character input using the first key; obtaining, when the deletion is detected, a second pressed position at which a next pressing operation has been performed and a second key corresponding to the second pressed position; determining whether or not a distance between the first pressed position and the second key is smaller than or equal to a threshold; and correcting, when the distance is smaller than or equal to the threshold, a range that is recognized as the second key in the region on the basis of the first pressed position.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: May 24, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Taichi Murase, Nobuyuki Hara, Atsunori Moteki, Takahiro Matsuda
  • Patent number: 9336285
    Abstract: Methods, systems, and computer program products are provided for presenting content in accordance with a placement designation. A user interface is provided for designating that a content item is to be presented in a search suggestion control along with search suggestions for completing a search string. Requests for content are processed, including evaluating a partial form of a received search string, identifying search suggestions for completing the partial form of the search string, and presenting the content item in the search suggestion control along with search suggestions that match campaign terms.
    Type: Grant
    Filed: February 28, 2013
    Date of Patent: May 10, 2016
    Assignee: Google Inc.
    Inventor: Ezequiel Vidra
  • Patent number: 9269028
    Abstract: Provided are string similarity assessment techniques. In one embodiment, the techniques include receiving a plurality of input strings comprising characters from a character set and generating hashtables for each respective input string using a hash function that assigns the characters as keys and character positions in the strings as values. The techniques may also include determining a character similarity index for at least two of the input strings relative to each other by comparing a similarity of the values for each key in their respective hashtables; determining a total disordering index representative of an alignment of the at least two input strings by determining differences between a plurality of index values for each individual key in their respective hashtables and determining the total disordering index based on the differences; and determining a string similarity metric based on at least one character similarity index and the total disordering index.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: February 23, 2016
    Assignee: General Electric Company
    Inventors: Jake Matthew Kurzer, Brett Csorba
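    The abstract above is explicit about the data structure: a hashtable per string with characters as keys and their positions as values, a character similarity index from comparing the values, and a disordering index from positional differences. The exact formulas are not given, so the ones in this sketch are assumptions.
      from collections import defaultdict

      def position_table(s: str) -> dict[str, list[int]]:
          """Hashtable with characters as keys and their positions in the string as values."""
          table = defaultdict(list)
          for i, ch in enumerate(s):
              table[ch].append(i)
          return table

      def similarity_metric(a: str, b: str) -> float:
          """Illustrative combination of a character-overlap index and a disordering index."""
          ta, tb = position_table(a), position_table(b)
          shared = set(ta) & set(tb)
          char_similarity = 2 * len(shared) / (len(ta) + len(tb)) if ta or tb else 1.0
          # Disordering: average positional displacement of shared characters.
          displacements = [abs(ta[ch][0] - tb[ch][0]) for ch in shared]
          disordering = sum(displacements) / len(displacements) if displacements else 0.0
          return char_similarity / (1.0 + disordering)

      print(round(similarity_metric("kitten", "sitten"), 3))
      print(round(similarity_metric("abcdef", "fedcba"), 3))  # same characters, heavily disordered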
  • Patent number: 9251430
    Abstract: A character recognition apparatus may include an imaging element configured to read a character string placed on an information recording medium; an image memory configured to store image data of the character string; and a character segmenting unit configured to segment a character constituting the character string. The character segmenting unit may include a minimum intensity curve creating unit configured to detect a minimum intensity value among light intensity values, and create a minimum intensity curve of the image data according to the minimum intensity value of each pixel row; a character segmenting position detecting unit configured to calculate a space between the characters neighboring in the created minimum intensity curve, in order to detect a character segmenting position between the characters; and a character segmenting process unit configured to segment each character according to the detected character segmenting position between the characters.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: February 2, 2016
    Assignee: NIDEC SANKYO CORPORATION
    Inventor: Hiroshi Nakamura
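    The segmentation idea above, tracing a minimum-intensity curve and cutting where it stays bright, can be shown with a toy grayscale image. The abstract speaks of pixel rows; the sketch takes the minimum over columns of a horizontal text line, an equivalent orientation assumption, and the background threshold is arbitrary.
      # Toy grayscale image: rows of pixel intensities (0 = dark ink, 255 = background).
      IMAGE = [
          [255, 30, 255, 255, 40, 255],
          [255, 25, 255, 255, 35, 255],
          [255, 20, 255, 255, 30, 255],
      ]

      def minimum_intensity_curve(image):
          """For each pixel column, take the minimum intensity over the column
          (dark characters pull the curve down, gaps between characters stay bright)."""
          return [min(col) for col in zip(*image)]

      def segmenting_positions(curve, background=200):
          """Columns whose minimum intensity stays bright are treated as spaces between characters."""
          return [x for x, value in enumerate(curve) if value >= background]

      curve = minimum_intensity_curve(IMAGE)
      print(curve)                        # [255, 20, 255, 255, 30, 255]
      print(segmenting_positions(curve))  # [0, 2, 3, 5] -> cut between the two dark strokes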
  • Patent number: 9244907
    Abstract: Various embodiments provide a method that comprises receiving a set of segments from a text field, analyzing the set of segments to determine at least one of a target subtext or a target meaning associated with the set of segments, and identifying a set of candidate emoticons where each candidate emoticon in the set of candidate emoticons has an association between the candidate emoticon and at least one of the target subtext or the target meaning. The method may further comprise presenting the set of candidate emoticons for entry selection at a current position of an input cursor, receiving an entry selection for a set of selected emoticons from the set of candidate emoticons, and inserting the set of selected emoticons into the text field at the current position of the input cursor.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: January 26, 2016
    Assignee: Machine Zone, Inc.
    Inventor: Gabriel Leydon
  • Patent number: 9247100
    Abstract: A method for routing a confirmation of receipt and/or delivery of a facsimile or portion thereof according to one embodiment includes generating text of a facsimile in a computer readable format; ascertaining one or more of a significance and a relevance of at least a portion of the text by locating one or more keywords in the text, wherein at least two of the keywords are not adjacent in the text; analyzing the text for at least one of a meaning and a context of the text; and routing at least one confirmation of receipt and/or delivery of the facsimile to one or more confirmation destinations based on the analysis. Additional disclosed embodiments include systems and computer program products configured to similarly route confirmation of receipt and/or delivery of a facsimile or a portion thereof.
    Type: Grant
    Filed: October 21, 2013
    Date of Patent: January 26, 2016
    Assignee: Kofax, Inc.
    Inventors: Roy Couchman, Roland G. Borrey
  • Patent number: 9239833
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for presenting additional information for text depicted by an image. In one aspect, a method includes receiving an image. Text depicted in the image is identified. A presentation context is selected for the image based on an arrangement of the text depicted by the image. Each presentation context corresponds to a particular arrangement of text within images. Each presentation context has a corresponding user interface for presenting additional information related to the text. The user interface for each presentation context is different from the user interface for other presentation contexts. The user interface that corresponds to the selected presentation context is identified. Additional information is presented for at least a portion of the text depicted in the image using the identified user interface. The user interface can present the additional information in an overlay over the image.
    Type: Grant
    Filed: November 8, 2013
    Date of Patent: January 19, 2016
    Assignee: Google Inc.
    Inventors: Alexander J. Cuthbert, Joshua J. Estelle
  • Patent number: 9236043
    Abstract: Controlling a reading machine while reading a document to a user by receiving an image of a document, accessing a knowledge base that provides data that identifies sections in the document and processing user commands to select a section of the document. The reading machine applies text-to-speech to a text file that corresponds to the selected section of the document, to read the selected section of the document aloud to the user.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: January 12, 2016
    Assignee: KNFB READER, LLC
    Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson
  • Patent number: 9229608
    Abstract: A character display apparatus, a character display method, and a non-transitory computer readable medium storing a character display program are capable of automatically avoiding a handwriting character being illegible during input of the character by detecting an overlap between lines to be drawn during drawing based on trajectory data of a handwriting character to determine a presence/absence of an illegible part. If it is determined that an illegible part is present, the thickness of the handwriting character is automatically selected again to be thinner and the image of the handwriting character is drawn again, which avoids an illegible part occurring in the handwriting character without inputting the character all over again.
    Type: Grant
    Filed: January 30, 2014
    Date of Patent: January 5, 2016
    Assignee: Sharp Kabushiki Kaisha
    Inventor: Teppei Hosokawa