Ideographic Characters (e.g., Japanese or Chinese) Patents (Class 382/185)
  • Patent number: 11961317
    Abstract: Aspects of the present disclosure are directed to extracting textual information from image documents. In one embodiment, upon receiving a request to extract textual information from an image document, a digital processing system performs character recognition based on the content of the image document using multiple approaches to generate corresponding texts. The texts are then combined to determine a result text representing the textual information contained in the image document. The result text is then provided as a response to the request.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: April 16, 2024
    Assignee: Oracle Financial Services Software Limited
    Inventors: Dakshayani Singaraju, Veresh Jain, Kartik Kumar
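A minimal sketch of the "combine texts from multiple approaches" idea in patent 11961317 above, assuming a simple consensus rule: each candidate is scored by its average similarity to the other candidates and the best-agreeing text wins. The engine names and the scoring rule are illustrative assumptions, not the patented method.

```python
from difflib import SequenceMatcher
from typing import Callable, Dict

def combine_ocr_results(image: bytes, engines: Dict[str, Callable[[bytes], str]]) -> str:
    """Run every OCR approach on the image and return the candidate text that
    agrees most closely, on average, with the other candidates."""
    candidates = [engine(image) for engine in engines.values()]

    def agreement(i: int) -> float:
        others = [candidates[j] for j in range(len(candidates)) if j != i]
        if not others:
            return 1.0
        return sum(SequenceMatcher(None, candidates[i], o).ratio() for o in others) / len(others)

    return candidates[max(range(len(candidates)), key=agreement)]

# Hypothetical usage with stub callables standing in for real OCR back ends:
engines = {
    "engine_a": lambda img: "Total amount: 120.00",
    "engine_b": lambda img: "Total arnount: 120.00",
    "engine_c": lambda img: "Total amount: 120,00",
}
print(combine_ocr_results(b"...", engines))  # -> "Total amount: 120.00"
```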
  • Patent number: 11354503
    Abstract: A method for providing gesture-based complete suggestions is provided. The method includes detecting at least one gesture performed by a user to complete an incomplete text provided by the user in an electronic device. Further, the method includes determining at least one remaining text to complete the incomplete text based on the at least one gesture and the incomplete text. Further, the method includes forming at least one complete text by adding the at least one remaining text to the incomplete text. Further, the method includes displaying the at least one complete text.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: June 7, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Vikrant Singh, Ankit Vijay, Pragya Paramita Sahu, Shankar Mosur Venkatesan, Viswanath Veera
  • Patent number: 11295155
    Abstract: A method and system to generate training data for a deep learning model in memory instead of loading pre-generated data from disk storage. A corpus may be stored as lines of text. The lines of text can be manipulated in the memory of a central processing unit (CPU) of a computing system, using asynchronous multi-processing, in parallel with a training process being conducted on the system's graphics processing unit (GPU). With such an approach, for a given line of text, it is possible to take advantage of different fonts and different types of image augmentation without having to put the images in disk storage for subsequent retrieval. Consequently, the same line of text can be used to generate different training images for use in different epochs, providing more variability in training data (no training sample is trained on more than once). A single training corpus may yield many different training data sets.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: April 5, 2022
    Assignee: KONICA MINOLTA BUSINESS SOLUTIONS U.S.A., INC.
    Inventor: Ting Xu
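A minimal sketch of the in-memory training-sample generation described in patent 11295155 above: the same corpus line can yield a different image each epoch because font choice and augmentation happen at render time instead of reading pre-generated images from disk. The font paths and augmentation choices are illustrative assumptions (a CJK-capable font would be needed for ideographic corpora), not the patented pipeline.

```python
import random
from PIL import Image, ImageDraw, ImageFont, ImageFilter

FONT_PATHS = [  # hypothetical font files available on the training machine
    "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
    "/usr/share/fonts/truetype/dejavu/DejaVuSerif.ttf",
]

def render_training_image(line: str, height: int = 48) -> Image.Image:
    """Render one corpus line into a grayscale image with a random font,
    slight rotation, and optional blur, so repeated calls give varied samples."""
    font = ImageFont.truetype(random.choice(FONT_PATHS), size=height - 16)
    canvas = Image.new("L", (32 * len(line) + 32, height), color=255)
    ImageDraw.Draw(canvas).text((8, 8), line, font=font, fill=0)
    canvas = canvas.rotate(random.uniform(-2.0, 2.0), expand=False, fillcolor=255)
    if random.random() < 0.5:
        canvas = canvas.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.2, 1.0)))
    return canvas

# Each epoch re-renders the line instead of loading a stored image:
# image = render_training_image("Invoice total: 1,200")
```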
  • Patent number: 11281370
    Abstract: Methods and apparatuses are provided for detecting a gesture at an electronic device. The gesture is received through an input module of the electronic device. A direction combination corresponding to the gesture is determined. The direction combination includes a plurality of directions. Information regarding the direction combination is compared with information regarding at least one direction combination, which is stored in a memory of the electronic device. A state of the electronic device is changed from a first state to a second state, using at least one processor of the electronic device, according to a result of comparing the information regarding the direction combination with the information regarding the at least one direction combination.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: March 22, 2022
    Inventors: Jae Wook Lee, An Ki Cho, Jun Hyung Cho
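A minimal sketch of matching a gesture's direction combination against stored combinations, in the spirit of patent 11281370 above. The four-way direction quantization, jitter threshold, and the stored unlock pattern are assumptions for illustration.

```python
from math import atan2, degrees
from typing import List, Tuple

def to_directions(points: List[Tuple[float, float]], min_dist: float = 20.0) -> List[str]:
    """Quantize successive point-to-point movements into up/down/left/right."""
    directions = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if (dx * dx + dy * dy) ** 0.5 < min_dist:
            continue  # ignore jitter below the movement threshold
        angle = degrees(atan2(-dy, dx)) % 360  # screen y grows downward
        name = ("right", "up", "left", "down")[int(((angle + 45) % 360) // 90)]
        if not directions or directions[-1] != name:
            directions.append(name)  # collapse repeated directions
    return directions

STORED_COMBINATIONS = {("up", "right", "down"): "unlock"}  # hypothetical memory contents

def handle_gesture(points: List[Tuple[float, float]]) -> str:
    """Compare the gesture's direction combination with stored combinations and
    report the resulting state change, if any."""
    combo = tuple(to_directions(points))
    return STORED_COMBINATIONS.get(combo, "no_state_change")
```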
  • Patent number: 11256322
    Abstract: The present disclosure relates to methods and systems for providing virtual environments that are responsively adaptable to users' characteristics. Embodiments provide for identifying a virtual action to be performed by a virtual representation of a patient within a virtual environment. The virtual action corresponds to a physical action by a physical limb of the patient in the real-world. In embodiments, the virtual action is required to be performed to a first target area within the virtual environment. Embodiments determine that the patient has at least one limitation that limits the patient performing the physical action. A determination is made of whether the patient has performed the physical action to at least an associated physical threshold. The virtual environment is caused to adapt in order to allow the virtual action to be performed in response to the determination that the patient has performed the physical action to the physical threshold.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: February 22, 2022
    Assignee: Neurological Rehabilitation Virtual Reality, LLC
    Inventor: Veena Somareddy
  • Patent number: 11182029
    Abstract: Disclosed are a smart interactive tablet and a driving method thereof. The smart interactive tablet includes a touch panel, a display panel, a first driving circuit and a second driving circuit. The touch panel is provided with a sampling circuit configured to acquire coordinates of points of a writing trajectory on the touch panel, send pre-display coordinates corresponding to a certain point of the writing trajectory to be displayed and different from an end point of the writing trajectory to the first driving circuit, and send real-time coordinates corresponding to the end point of the writing trajectory to the second driving circuit; the first driving circuit is configured to report the pre-display coordinates to the second driving circuit; and the second driving circuit is configured to predict a scribing trajectory between the pre-display coordinates and the real-time coordinates and drive the display panel to display the scribing trajectory.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: November 23, 2021
    Assignees: BEIJING BOE DISPLAY TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Rui Guo, Jianting Wang, Zhanchang Bu
  • Patent number: 11146705
    Abstract: A character recognition device is configured to convert characters on a scanned image obtained by reading a document into digital data. The character recognition device includes control circuitry. The control circuitry is configured to perform character recognition processing on the scanned image; generate data in which candidate characters or character strings of a character or character string recognized by the character recognition processing are associated with recognition degrees representing probabilities of the candidate characters or character strings; display a first candidate having a highest recognition degree among the candidate characters or character strings; and generate a document file in a format in which another candidate, other than the first candidate, of the candidate characters or character strings is displayed simultaneously with the first candidate, in association with the first candidate, and in a form different from a form of the first candidate.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: October 12, 2021
    Assignee: RICOH COMPANY, LTD.
    Inventor: Takayuki Saitoh
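A minimal sketch of the "first candidate plus visible alternatives" idea in patent 11146705 above, using plain text output. The bracket notation and the data layout (candidate characters mapped to recognition degrees) are illustrative assumptions, not the patented document format.

```python
from typing import Dict, List

def render_with_alternatives(recognized: List[Dict[str, float]]) -> str:
    """Each element maps candidate characters to recognition degrees (0..1).
    The highest-degree candidate is emitted; any other candidates follow it in
    brackets so they stay visible alongside the first candidate."""
    parts = []
    for candidates in recognized:
        ordered = sorted(candidates, key=candidates.get, reverse=True)
        first, others = ordered[0], ordered[1:]
        parts.append(first if not others else f"{first}[{'|'.join(others)}]")
    return "".join(parts)

# Hypothetical output for an ambiguous scan:
# render_with_alternatives([{"東": 0.92}, {"京": 0.71, "亰": 0.40}]) -> "東京[亰]"
```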
  • Patent number: 11089457
    Abstract: Systems and methods are provided for a personalized entity repository. For example, a computing device comprises a personalized entity repository having fixed sets of entities from an entity repository stored at a server, a processor, and memory storing instructions that cause the computing device to identify fixed sets of entities that are relevant to a user based on context associated with the computing device, rank the fixed sets by relevancy, and update the personalized entity repository using selected sets determined based on the rank and on set usage parameters applicable to the user. In another example, a method includes generating fixed sets of entities from an entity repository, including location-based sets and topic-based sets, and providing a subset of the fixed sets to a client, the client requesting the subset based on the client's location and on items identified in content generated for display on the client.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: August 10, 2021
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Jorge Pereira, Dominik Roblek, Julian Odell, Cong Li, David Petrou
  • Patent number: 10990799
    Abstract: The present disclosure allows digitizing handwritten signatures efficiently. A live stream of image frames is received from a digital camera unit of an electronic device. The received image frames are displayed and a guideline pattern defining a first target area and a second target area is overlaid. The content of a first image frame section of a current frame overlaid by the first target area is read. A multidimensional machine-readable code decoder is applied to the read content in order to interpret the read content. If the read content is successfully interpreted and if a precondition is met, a second image frame section of the current image frame overlaid by the second target area is captured.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: April 27, 2021
    Assignee: ORF Box Oy
    Inventor: Ovidiu Cernautan
  • Patent number: 10924634
    Abstract: In the case where both thickening processing and UCR processing are performed for an object, appropriate effects are obtained for both pieces of the processing. An image processing apparatus including an image processing unit configured to perform thickening processing for an object included in an input image and to perform saturation suppression processing for an edge portion of the object in the input image for which the thickening processing has been performed.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: February 16, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Shinji Sano
  • Patent number: 10775991
    Abstract: In one embodiment, overlaying a first element on top of a second element in a user interface; and adjusting visual appearance of the first element based on a portion of the second element underneath the first element.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: September 15, 2020
    Assignee: Facebook, Inc.
    Inventors: Michael Matas, Kimon Tsinteris, Austin Sarner, Charles Melcher
  • Patent number: 10565443
    Abstract: This disclosure relates generally to document processing, and more particularly to method and system for determining structural blocks of a document. In one embodiment, the method may include extracting text from the document, the text including text lines. The method may further include generating a feature vector for each of the text lines, the feature vector for the text line including a set of feature values for a set of corresponding features in the text line. The method may further include creating an input matrix for each of the text lines, the input matrix for the text line including a set of feature vectors corresponding to a set of neighboring text lines along with the text line. The method may further include determining a structural block tag for each of the text lines based on the corresponding input matrix using a machine learning model.
    Type: Grant
    Filed: March 31, 2018
    Date of Patent: February 18, 2020
    Assignee: Wipro Limited
    Inventors: Raghavendra Hosabettu, Sneha Subhaschandra Banakar
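A minimal sketch of the per-line feature vectors and neighbor-context input matrices described in patent 10565443 above. The specific features and the neighbor window size are illustrative assumptions; the machine learning model that consumes the matrices is not shown.

```python
import numpy as np
from typing import List

def line_features(line: str) -> np.ndarray:
    """A few simple per-line features: length, uppercase ratio, digit ratio,
    leading indentation, and whether the line ends with a period."""
    n = max(len(line), 1)
    return np.array([
        len(line),
        sum(c.isupper() for c in line) / n,
        sum(c.isdigit() for c in line) / n,
        len(line) - len(line.lstrip()),
        float(line.rstrip().endswith(".")),
    ])

def input_matrices(lines: List[str], window: int = 2) -> np.ndarray:
    """Stack each line's feature vector with those of `window` neighbors on
    each side (zero-padded at the edges) to form one input matrix per line."""
    feats = np.stack([line_features(l) for l in lines])
    padded = np.pad(feats, ((window, window), (0, 0)))
    return np.stack([padded[i:i + 2 * window + 1] for i in range(len(lines))])

# input_matrices(text.splitlines()) has shape (num_lines, 2*window+1, num_features);
# a classifier can then assign a structural block tag to each line from its matrix.
```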
  • Patent number: 10438098
    Abstract: A method for template matching can include iteratively selecting a template set of points to project over a centerline of a candidate symbol; conducting a template matching analysis; assigning a score to each template set; and selecting a template set with a highest assigned score. For example, the score can depend on proximity of the template points to a center and/or boundaries of a principal tracing path of the symbol. Additionally, one or more template sets having a top rank can be selected for a secondary analysis of proximity of the template points to a boundary of a printing of the symbol. The method can further include using the template with the highest score to interpret the candidate symbol.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: October 8, 2019
    Assignee: Hand Held Products, Inc.
    Inventors: Edward Hatton, H. Sprague Ackley
  • Patent number: 10373028
    Abstract: According to an embodiment, a pattern recognition device is configured to divide an input signal into a plurality of elements, convert the divided elements into feature vectors having the same dimensionality to generate a set of feature vectors, and evaluate the set of feature vectors using a recognition dictionary including models corresponding to respective classes, to output a recognition result representing a class or a set of classes to which the input signal belongs. The models each include sub-models each corresponding to one of possible division patterns in which a signal to be classified into a class corresponding to the model can be divided into a plurality of elements. A label expressing a model including a sub-model conforming to the set of feature vectors, or a set of labels expressing a set of models including sub-models conforming to the set of feature vectors is output as the recognition result.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: August 6, 2019
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA DIGITAL SOLUTIONS CORPORATION
    Inventors: Soichiro Ono, Hiroyuki Mizutani
  • Patent number: 10353582
    Abstract: A terminal apparatus according to the present application includes a receiving unit, a first display control unit, and a second display control unit. The receiving unit receives an operation to designate a first area. When the receiving unit has received the operation to designate the first area, the first display control unit displays first input candidates. When an operation to designate a second area has been received, the second display control unit displays second input candidates corresponding to a first input candidate determined to be selected among the first input candidates.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: July 16, 2019
    Assignee: YAHOO JAPAN CORPORATION
    Inventors: Ryuki Sakamoto, Yuta Suzuki, Sakiko Nishi
  • Patent number: 10255267
    Abstract: A method includes displaying a set of one or more suggestions including one or more character strings that are suggested replacements for a first set of one or more entered characters. The method further includes: while displaying the set of suggestions, receiving one or more additional entered characters; and after receiving the additional entered characters, updating the set of suggestions based on an updated set of entered characters that includes the first set of entered characters and the additional entered characters. The updating comprises changing a first suggestion in the set of suggestions from a first character string that is a suggested replacement for the first set of entered characters to a second character string that is a suggested replacement for the updated set of entered characters.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: April 9, 2019
    Assignee: Apple Inc.
    Inventors: Imran A. Chaudhri, Chanaka G. Karunamuni, Tiffany S. Jon, Jason C. Beaver, Joshua H. Shaffer, Christopher P. Willmore, Nicholas K. Jong
  • Patent number: 10176409
    Abstract: Embodiments of the present disclosure disclose an image character recognition model generation method and apparatus, and a vertically-oriented character image recognition method and apparatus. The image character recognition model generation method includes: generating a rotated line character training sample, wherein the rotated line character training sample includes a rotated line character image and an expected character recognition result corresponding to the rotated line character image, and there is a difference of 90 degrees between character units in the rotated line character image and character units in a standard line character image; and training a set neural network by using the rotated line character training sample, to generate an image character recognition model.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: January 8, 2019
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Shufu Xie, Hang Xiao
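A minimal sketch of building a rotated-line training pair as described in patent 10176409 above: a standard horizontal line image is rotated so its character units differ by 90 degrees, and the image is paired with the expected recognition result. The rendering details are assumptions; only the 90-degree relation is taken from the abstract.

```python
from PIL import Image

def make_rotated_sample(line_image: Image.Image, expected_text: str):
    """Return (rotated line character image, expected recognition result)."""
    rotated = line_image.rotate(90, expand=True, fillcolor=255)
    return rotated, expected_text

# Hypothetical usage: render each horizontal corpus line, rotate it, and feed
# the (image, label) pairs to the neural network being trained.
```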
  • Patent number: 10152300
    Abstract: An electronic device provides data to present a user interface with a plurality of user interface objects, including a control user interface object at a first location. The control user interface object is configured to control a parameter. The device receives an input that corresponds to an interaction with the control user interface object. While receiving the input that corresponds to the interaction with the control user interface object, the device provides data to move the control user interface object, in accordance with the input, from the first location to a second location. The device also provides first sound information to provide a sound output with characteristics that are different from the parameter controlled by the control user interface object and that change with movement of the control user interface object from the first location to the second location.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: December 11, 2018
    Assignee: APPLE INC.
    Inventors: Matthew I. Brown, Avi E. Cieplinski
  • Patent number: 10140262
    Abstract: Systems and associated methodology are presented for Arabic handwriting synthesis including partitioning a dataset of sentences associated with the alphabet into a legative partition including isolated bigram representation and classified words that contain ligature representations of the collected dataset, an unlegative partition including single character shape representation of the collected data set, an isolated characters partition, and a passages and repeated phrases partition, generating a pangram, the pangram including the occurrence of every character shape in the collected dataset and further including a special lipogram condition set based on a desired digital output of the collected dataset, and outputting a digital representation of the pangram including synthesized text.
    Type: Grant
    Filed: May 3, 2016
    Date of Patent: November 27, 2018
    Assignee: King Fahd University of Petroleum and Minerals
    Inventor: Yousef S. I. Elarian
  • Patent number: 10082952
    Abstract: A method and system for text input by a continuous sliding operation is provided.
    Type: Grant
    Filed: January 15, 2015
    Date of Patent: September 25, 2018
    Assignee: SHANGHAI CHULE (COOTEK) INFORMATION TECHNOLOGY CO. LTD.
    Inventors: Jialiang Wang, Kan Zhang, Lin Zou
  • Patent number: 9916498
    Abstract: A system for document processing including decomposing an image of a document into at least one data entry region sub-image, providing the data entry region sub-image to a data entry clerk available for processing the data entry region sub-image, receiving from the data entry clerk a data entry value associated with the data entry region sub-image, and validating the data entry value.
    Type: Grant
    Filed: January 30, 2017
    Date of Patent: March 13, 2018
    Assignee: Orbograph Ltd.
    Inventors: Avikam Baltsan, Ori Sarid, Aryeh Elimelech, Aharon Boker, Zvi Segal, Gideon Miller
  • Patent number: 9870143
    Abstract: In a handwriting recognition method applied in an electronic device with touch screen and display screen, a handwriting input area and a handwriting display area are shown on a touch screen when a handwriting command is given. The handwriting of a complete or partial word in the handwriting input area is recognized and displayed, and a new handwriting input area can be added when a preset slide operation is applied such that the tracing of a handwritten character collides with a boundary of the input area. Handwriting in the new handwriting input area is recognized and displayed, and the content of the original handwriting input area and of the new handwriting input area are combined to form a complete word when the handwriting input is finished. The complete word is then displayed.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: January 16, 2018
    Assignees: Fu Tai Hua Industry (Shenzhen) Co., Ltd., HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Chih-San Chiang, Hai-Jun Mao
  • Patent number: 9697264
    Abstract: A method and apparatus are provided for selecting items from a collection of items indexed by a list of item identifiers. The item identifiers may be in the form of text, symbols, graphics, etc. An initial display is generated which includes one or more parts of the item identifiers. Selection of the one or more parts may be made and results in the generation of a display of further one or more parts for selection. The further one or more parts may be selected in order to add to the selected one or more parts to build a larger part or whole of an item identifier. Accordingly, selection from a large list of item identifiers may be carried out in a relatively short time period.
    Type: Grant
    Filed: December 28, 2007
    Date of Patent: July 4, 2017
    Assignee: Kannuu Pty. Ltd.
    Inventor: Kevin W. Dinn
  • Patent number: 9696873
    Abstract: The invention relates to a system for processing sliding operations on a portable terminal device. The portable terminal device includes a touch screen. The system includes a memory device configured to store data related to sliding operations, and a processor coupled to the memory device. The processor is configured to cause to display, on the touch screen, a communication function interface for receiving user sliding operations. The processor is further configured to receive original messages obtained on the touch screen corresponding to the user sliding operations, and process the original messages to determine possible sliding patterns corresponding to the user sliding operations. The processor is also configured to set a user-defined sliding pattern based on the possible sliding patterns.
    Type: Grant
    Filed: July 12, 2013
    Date of Patent: July 4, 2017
    Assignee: SHANGHAI CHULE (COOTEK) INFORMATION TECHNOLOGY CO. LTD.
    Inventors: Kan Zhang, Jialiang Wang, Jingshen Wu, Meng Zhang
  • Patent number: 9633257
    Abstract: Automatic classification of different types of documents is disclosed. An image of a form or document is captured. The document is assigned to one or more type definitions by identifying one or more objects within the image of the document. A matching model is selected via identification of the document image. In the case of multiple identifications, a profound analysis of the document type is performed—either automatically or manually. An automatic classifier may be trained with document samples of each of a plurality of document classes or document types where the types are known in advance or a system of classes may be formed automatically without a priori information about types of samples. An automatic classifier determines possible features and calculates a range of feature values and possible other feature parameters for each type or class of document. A decision tree, based on rules specified by a user, may be used for classifying documents.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: April 25, 2017
    Assignee: ABBYY DEVELOPMENT LLC
    Inventors: Irina Filimonova, Sergey Zlobin, Andrey Myakutin
  • Patent number: 9588678
    Abstract: A method of operating electronic handwriting includes receiving at least two handwriting strokes from a touch screen, determining whether the at least two handwriting strokes overlap each other, selecting at least one of the overlapped handwriting strokes into a group, and recognizing a handwriting stroke belonging to the group. An electronic device for recognizing handwriting includes at least one of a touch device configured to receive handwriting strokes, a storage configured to store information comprising at least one handwriting stroke, and a controller configured to determine whether at least two handwriting strokes overlap each other, select at least one of the overlapped handwriting strokes into a group, and perform text recognition on handwriting strokes belonging to the group.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: March 7, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyojin Kim, Inhyung Jung
  • Patent number: 9558417
    Abstract: A system for document processing including decomposing an image of a document into at least one data entry region sub-image, providing the data entry region sub-image to a data entry clerk available for processing the data entry region sub-image, receiving from the data entry clerk a data entry value associated with the data entry region sub-image, and validating the data entry value.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: January 31, 2017
    Assignee: Orbograph, LTD
    Inventors: Avikam Baltsan, Ori Sarid, Aryeh Elimelech, Aharon Boker, Zvi Segal, Gideon Miller
  • Patent number: 9501693
    Abstract: An action recognition system recognizes driver actions by using a random forest model to classify images of the driver. A plurality of predictions is generated using the random forest model. Each prediction is generated by one of the plurality of decision trees and each prediction comprises a predicted driver action and a confidence score. The plurality of predictions is regrouped into a plurality of groups with each of the plurality of groups associated with one of the driver actions. The confidence scores are combined within each group to determine a combined score associated with each group. The driver action associated with the highest combined score is selected.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: November 22, 2016
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Trevor Sarratt, Kikuo Fujimura
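A minimal sketch of regrouping per-tree predictions by driver action and combining the confidence scores within each group, as described in patent 9501693 above. Summing the scores is an illustrative choice; the patent's combination rule may differ.

```python
from collections import defaultdict
from typing import List, Tuple

def select_driver_action(predictions: List[Tuple[str, float]]) -> str:
    """`predictions` holds one (predicted driver action, confidence) pair per
    decision tree. Scores are combined per action and the best action returned."""
    combined = defaultdict(float)
    for action, confidence in predictions:
        combined[action] += confidence
    return max(combined, key=combined.get)

# Hypothetical per-tree outputs:
# select_driver_action([("texting", 0.6), ("drinking", 0.9), ("texting", 0.7)])
# -> "texting"  (0.6 + 0.7 = 1.3 beats 0.9)
```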
  • Patent number: 9501694
    Abstract: Methods and devices provide a quick and intuitive method to launch a specific application, dial a number or send a message by drawing a pictorial key, symbol or shape on a computing device touchscreen, touchpad or other touchsurface. A shape drawn on a touchsurface is compared to one or more code shapes stored in memory to determine if there is a match or correlation. If the entered shape correlates to a stored code shape, an application, file, function or keystroke sequence linked to the correlated code shape is implemented. The methods also enable communication involving sending a shape or parameters defining a shape from one computing device to another where the shape is compared to code shapes in memory of the receiving computing device. If the received shape correlates to a stored code shape, an application, file, function or keystroke sequence linked to the correlated code shape is implemented.
    Type: Grant
    Filed: November 24, 2008
    Date of Patent: November 22, 2016
    Assignee: QUALCOMM Incorporated
    Inventor: Mong Suan Yee
  • Patent number: 9495019
    Abstract: The present invention provides a display method of a mobile device selection and a terminal device. The method includes: receiving a location movement signal sent by a mobile device; determining a location, of a cursor focus of the mobile device, on a screen according to the location movement signal; and determining that the cursor focus moves toward a target icon, and if a distance between the cursor focus and the target icon is greater than zero and is less than or equal to a first threshold, determining that the cursor focus selects the target icon, thereby improving user operation efficiency, reducing operation complexity, and ensuring desirable interaction experience of a user.
    Type: Grant
    Filed: December 31, 2014
    Date of Patent: November 15, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Simon Ekstrand, Cheng Cheng
  • Patent number: 9418281
    Abstract: Implementations of the disclosed subject matter provide methods and systems for identifying a candidate character cut for an overwritten character. A method may include providing a handwriting input area. The handwriting input area may be divided into multiple sections and a first portion of the multiple sections may be located in an end point region. A first handwritten input comprising a first stroke that ends in a section located in the end point region may be received. A second handwritten input comprising a second stroke that begins in a section that is not located in the end point region may be received. As a result, a first candidate character cut may be identified between the first stroke and the second stroke.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: August 16, 2016
    Assignee: Google Inc.
    Inventors: Henry Allan Rowley, Thomas Deselaers, Li-Lun Wang
  • Patent number: 9323726
    Abstract: Systems and methods are provided for optimizing a glyph-based file. Individual components may be identified within glyphs of a file. Each identified component within a glyph may be a portion of the glyph, and may be a joint component or disjoint component. Groupings of components may then be determined, where the groupings are determined based at least in part by identifying similarly shaped components. A representative component may then be selected from each grouping. Composite glyphs may be generated and stored in an optimized file, where each composite glyph includes a reference to at least one representative component.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: April 26, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Lokesh Joshi, Satishkumar Kothandapani Shanmugasundaram, Nadia C. Payet, Viswanath Sankaranarayanan
  • Patent number: 9218525
    Abstract: Shape recognition is performed based on determining whether one or more ink strokes is not part of a shape or a partial shape. Ink strokes are divided into segments and the segments analyzed employing a relative angular distance histogram. The histogram analysis yields stable, incremental, and discriminating featurization results. Neural networks may also be employed along with the histogram analysis to determine complete shapes from partial shape entries and autocomplete suggestions provided to users for conversion of the shape into a known object.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: December 22, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexander Kolmykov-Zotov, Sashi Raghupathy, Xin Wang
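A minimal sketch of segmenting an ink stroke and histogramming the relative angles between consecutive segments, in the spirit of the relative angular distance histogram mentioned in patent 9218525 above. The bin count and the point-to-point segmentation are illustrative assumptions, not the patented featurization.

```python
import numpy as np

def relative_angle_histogram(stroke: np.ndarray, bins: int = 8) -> np.ndarray:
    """`stroke` is an (N, 2) array of ink points. Successive point-to-point
    segments give direction angles; the histogram of turning angles between
    consecutive segments is returned, normalized to sum to 1."""
    deltas = np.diff(stroke, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])
    # Relative turning angle between consecutive segments, wrapped to [-pi, pi).
    turns = (np.diff(angles) + np.pi) % (2 * np.pi) - np.pi
    hist, _ = np.histogram(turns, bins=bins, range=(-np.pi, np.pi))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

# A near-circular stroke concentrates mass in small-turn bins while a zigzag
# spreads mass toward the +/- pi bins, so partially drawn shapes stay
# discriminable as additional strokes arrive.
```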
  • Patent number: 9152876
    Abstract: A system and method for determining handwritten character segmentation shape parameters for a user in automated handwriting recognition by prompting the user for a training sample; obtaining an image that includes handwritten text that corresponds to the training sample; sweeping the image with shapes corresponding to parameters to determine coordinates of the shapes in the image; segmenting the image into segmented characters based on the coordinates of the shapes; determining character segmentation accuracies of the parameters; and storing an association between the user and the parameters. The system and method can further include receiving a writing sample from the same user and utilizing the stored parameters to segment characters in the writing sample for use in automated handwriting recognition of the writing sample.
    Type: Grant
    Filed: March 18, 2014
    Date of Patent: October 6, 2015
    Assignee: XEROX CORPORATION
    Inventors: Eric M. Gross, Eric S. Hamby, Isaiah Simmons
  • Patent number: 9081495
    Abstract: An apparatus and a method for processing data of a terminal are provided. The method includes displaying a feature point extracting method selection window for selecting a feature point extracting method for extracting feature point information which specify data according to displayed data, in a Data save mode which saves at least one data displayed on one screen, extracting the feature point information according to the data by using the feature point extracting method selected through the feature point extracting method selection window, and saving at least one feature point information extracted according to the data as group feature point information.
    Type: Grant
    Filed: November 9, 2010
    Date of Patent: July 14, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Cheol Ho Cheong, Jae Sun Park
  • Patent number: 9020266
    Abstract: A method for processing handwriting input includes determining a first boundary point and a second boundary point corresponding to each target track point, forming an enclosed area by connecting all first boundary points determined for all target track points, connecting all second boundary points determined for all the target track points, connecting the first boundary point corresponding to the first target track point with the second boundary point corresponding to the first target track point, and connecting the first boundary point corresponding to the last target track point with the second boundary points corresponding to the last target track point, and filling the enclosed area.
    Type: Grant
    Filed: December 31, 2012
    Date of Patent: April 28, 2015
    Assignees: Peking University Founder Group Co., Ltd., Peking University, Beijing Founder Electronics Co., Ltd.
    Inventors: Ying Wang, Lei Ma, Yingmin Tang
  • Patent number: 9020265
    Abstract: A system and method is provided for automatically recognizing building numbers in street level images. In one aspect, a processor selects a street level image that is likely to be near an address of interest. The processor identifies those portions of the image that are visually similar to street numbers, and then extracts the numeric values of the characters displayed in such portions. If an extracted value corresponds with the building number of the address of interest such as being substantially equal to the address of interest, the extracted value and the image portion are displayed to a human operator. The human operator confirms, by looking at the image portion, whether the image portion appears to be a building number that matches the extracted value. If so, the processor stores a value that associates that building number with the street level image.
    Type: Grant
    Filed: June 5, 2014
    Date of Patent: April 28, 2015
    Assignee: Google Inc.
    Inventors: Bo Wu, Alessandro Bissacco, Raymond W. Smith, Kong Man Cheung, Andrea Frome, Shlomo Urbach
  • Patent number: 9014478
    Abstract: A method and apparatus for determining a reading order of characters. The method includes preparing a list of character information, which is character information extracted from image data by character recognition processing and preparing a list of line information, which is made up of a line box surrounding a set of characters which are continuously aligned in the same direction in image data and an alignment direction of characters in the line box. In response to a request for adding character information to the list of character information, extracting a line box containing a character region of the character to be added, obtaining all character information having the character region contained in the concerned line box from the list of character information and rearranging according to the position with respect to the alignment direction of characters corresponding to the line box to determine a new reading order of characters.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: April 21, 2015
    Assignee: International Business Machines Corporation
    Inventors: Toshinari Itoko, Daisuke Sato
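A minimal sketch of the reading-order idea in patent 9014478 above: when a character is added, find the line box that contains its character region, gather the characters already assigned to that line box, and re-sort them along the line's alignment direction. The box representation and tie-breaking are illustrative assumptions.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (left, top, right, bottom)

def contains(line_box: Box, char_box: Box) -> bool:
    """True when the character region's center falls inside the line box."""
    cx = (char_box[0] + char_box[2]) / 2
    cy = (char_box[1] + char_box[3]) / 2
    return line_box[0] <= cx <= line_box[2] and line_box[1] <= cy <= line_box[3]

def reading_order(line_box: Box, direction: str, chars: List[Dict]) -> List[Dict]:
    """`chars` holds character-information dicts with a 'box' entry. Characters
    whose regions fall inside the line box are ordered along `direction`
    ('horizontal': left to right, 'vertical': top to bottom)."""
    in_line = [c for c in chars if contains(line_box, c["box"])]
    key = (lambda c: c["box"][0]) if direction == "horizontal" else (lambda c: c["box"][1])
    return sorted(in_line, key=key)

# Adding a character means re-running reading_order for the line box containing
# its region, which yields the new reading order for just that line.
```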
  • Patent number: 9014481
    Abstract: A method for Arabic and Farsi font recognition for determining the font of text using a nearest neighbor classifier, where the classifier uses a combination of features including: box counting dimension, center of gravity, the number of vertical and horizontal extrema, the number of black and white components, the smallest black component, the Log baseline position, concave curvature features, convex curvature features, direction and direction length features, Log-Gabor features, and segmented Log-Gabor features. The method is tested using various combination of features on various text fonts, sizes, and styles. It is observed the segmented Log-Gabor features produce a 99.85% font recognition rate, and the combination of all non-Log-Gabor features produces a 97.96% font recognition rate.
    Type: Grant
    Filed: April 22, 2014
    Date of Patent: April 21, 2015
    Assignees: King Fahd University of Petroleum and Minerals, King Abdulaziz City for Science and Technology
    Inventors: Hamzah Abdullah Luqman, Sabri Abdullah Mohammed
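A minimal sketch of the nearest-neighbor classification step described in patent 9014481 above, operating on precomputed feature vectors. The feature extraction itself (Log-Gabor, curvature, box counting, etc.) is not reproduced; the reference vectors and labels are assumptions for illustration.

```python
import numpy as np
from typing import List, Tuple

def nearest_neighbor_font(query: np.ndarray,
                          references: List[Tuple[str, np.ndarray]]) -> str:
    """Return the font label whose reference feature vector lies closest to the
    query vector in Euclidean distance."""
    labels, vectors = zip(*references)
    distances = np.linalg.norm(np.stack(vectors) - query, axis=1)
    return labels[int(np.argmin(distances))]

# Hypothetical usage with 3-dimensional stand-in features:
# refs = [("Naskh", np.array([0.2, 1.1, 0.7])), ("Kufi", np.array([0.9, 0.3, 0.5]))]
# nearest_neighbor_font(np.array([0.25, 1.0, 0.8]), refs)  # -> "Naskh"
```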
  • Patent number: 9015029
    Abstract: A portable device may include a camera to capture a picture or a video, object recognition logic to identify a target object within the picture or the video captured by the camera, and output a first string corresponding to the identified target object, logic to translate the first string to a second string of another language that corresponds to the identified target object, and logic to display on a display or store in a memory the second string.
    Type: Grant
    Filed: June 4, 2007
    Date of Patent: April 21, 2015
    Assignees: Sony Corporation, Sony Mobile Communications AB
    Inventor: Anders Bertram Eibye
  • Patent number: 8989494
    Abstract: A method and apparatus for determining a reading order of characters. The method includes preparing a list of character information, which is character information extracted from image data by character recognition processing and preparing a list of line information, which is made up of a line box surrounding a set of characters which are continuously aligned in the same direction in image data and an alignment direction of characters in the line box. In response to a request for adding character information to the list of character information, extracting a line box containing a character region of the character to be added, obtaining all character information having the character region contained in the concerned line box from the list of character information and rearranging according to the position with respect to the alignment direction of characters corresponding to the line box to determine a new reading order of characters.
    Type: Grant
    Filed: June 5, 2012
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Toshinari Itoko, Daisuke Sato
  • Patent number: 8989492
    Abstract: A first technique of recognizing content is disclosed, including: determining a first value representative of a pixel content present at a first set of pixels associated with a first distance from a pixel under consideration; determining a second value representative of a pixel content present at a second set of pixels associated with a second distance from the pixel under consideration; and using the first and second values to compute one or more spatial features associated with the pixel under consideration for purposes of content recognition. A second technique of recognizing content is also disclosed, including: determining, for a pixel, a first value representative of a first feature associated with a set of pixels associated with a first direction from the pixel; and determining, for the pixel, a second value representative of a second feature associated with a set of pixels associated with a second direction from the pixel.
    Type: Grant
    Filed: June 4, 2012
    Date of Patent: March 24, 2015
    Assignee: Apple Inc.
    Inventors: Jerome R. Bellegarda, Jannes G. A. Dolfing
  • Publication number: 20150055868
    Abstract: A character data processing method executed by a computer includes detecting glyph variant information from an input character data string, and converting the detected glyph variant information to extended expression data, the extended expression data including basic character data and the detected glyph variant information, the basic character data being associated with the detected glyph variant information in the input character data string, wherein the extended expression data can be converted to the basic character data by specific bit arithmetic processing.
    Type: Application
    Filed: August 25, 2014
    Publication date: February 26, 2015
    Inventors: Masaki Takatsuka, Masahiro Takeda
  • Patent number: 8965126
    Abstract: A character recognition device includes image input unit that receives an image, character region detection unit that detects a character region in the image, character region separation unit that separates the character region on a character-by-character basis, character recognition unit that performs character-by-character recognition on the characters present in separated regions and outputs one or more character recognition result candidates for each character, first character string transition data creation unit that receives the candidates, calculates weights for transitions to the candidates and creates first character string transition data based on a set of the candidates and the weights, and WFST processing unit that sequentially performs state transitions based on the first character string transition data, accumulates weights in each state transition and calculates a cumulative weight for each state transition, and outputs one or more state transition results based on the cumulative weight.
    Type: Grant
    Filed: February 24, 2012
    Date of Patent: February 24, 2015
    Assignee: NTT DOCOMO, INC.
    Inventors: Takafumi Yamazoe, Minoru Etoh, Takeshi Yoshimura, Kosuke Tsujino
  • Patent number: 8923647
    Abstract: Embodiments generally relate to providing privacy in a social network system. In some embodiments, a method includes recognizing one or more objects in at least one photo. The method also includes determining one or more objects to be obscured in the at least one photo based on one or more user preferences. The method also includes causing the at least one photo to be displayed such that the determined one or more objects are obscured.
    Type: Grant
    Filed: September 25, 2012
    Date of Patent: December 30, 2014
    Assignee: Google, Inc.
    Inventor: Devesh Kothari
  • Patent number: 8923618
    Abstract: An expression, for which complementary information can be outputted, is extracted from a document obtained by character recognition for an image. Complementary information related to the extracted expression is outputted when a character or a symbol adjacent to the beginning or the end of the extracted expression is not a predetermined character or symbol. Output of complementary information related to the extracted expression is skipped when the character or symbol adjacent to the beginning or the end of the extracted expression is the predetermined character or symbol. A problem that complementary information unrelated to an original text is outputted is prevented even when a false character recognition occurs.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: December 30, 2014
    Assignee: Sharp Kabushiki Kaisha
    Inventor: Takeshi Kutsumi
  • Patent number: 8896470
    Abstract: An electronic device for disambiguation of stroke input, the device comprising: an input device coupled to the microprocessor for accepting a stroke input; and a stroke disambiguation module resident in the memory for execution by the microprocessor. The device is configured to: receive a signal representing a stroke input sequence at the stroke disambiguation module; apply one or more stroke disambiguation rules to the stroke input sequence to generate an updated input sequence; and transmit a signal representing the updated input sequence.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: November 25, 2014
    Assignee: BlackBerry Limited
    Inventors: Vadim Fux, Xiaoting Sun, Timothy Koo, Aleksej Trefilov
  • Patent number: 8872979
    Abstract: Techniques are presented for analyzing audio-video segments, usually from multiple sources. A combined similarity measure is determined from text similarities and video similarities. The text and video similarities measure similarity between audio-video scenes for text and video, respectively. The combined similarity measure is then used to determine similar scenes in the audio-video segments. When the audio-video segments are from multiple audio-video sources, the similar scenes are common scenes in the audio-video segments. Similarities may be converted to or measured by distance. Distance matrices may be determined by using the similarity matrices. The text and video distance matrices are normalized before the combined similarity matrix is determined. Clustering is performed using distance values determined from the combined similarity matrix.
    Type: Grant
    Filed: May 21, 2002
    Date of Patent: October 28, 2014
    Assignee: Avaya Inc.
    Inventors: Amit Bagga, Jianying Hu, Jialin Zhong
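A minimal sketch of combining text and video scene distances as described in patent 8872979 above: each distance matrix is normalized, the two are blended with a weight, and scene pairs closer than a threshold are treated as similar. The normalization, the weight, and the threshold are illustrative assumptions; the patent's clustering step is not reproduced.

```python
import numpy as np

def combined_distance(text_dist: np.ndarray, video_dist: np.ndarray,
                      alpha: float = 0.5) -> np.ndarray:
    """Normalize each scene-by-scene distance matrix to [0, 1] and blend them."""
    def normalize(d: np.ndarray) -> np.ndarray:
        rng = d.max() - d.min()
        return (d - d.min()) / rng if rng else np.zeros_like(d, dtype=float)
    return alpha * normalize(text_dist) + (1 - alpha) * normalize(video_dist)

def similar_scene_pairs(dist: np.ndarray, threshold: float = 0.2):
    """Return index pairs of scenes whose combined distance is below threshold."""
    n = dist.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n) if dist[i, j] < threshold]

# pairs = similar_scene_pairs(combined_distance(text_d, video_d))
# identifies scenes that are close in both the transcript and the video signal.
```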
  • Patent number: 8831364
    Abstract: An information processing apparatus of the present invention selects one language group, then selects one language from the selected language group, and performs OCR processing appropriate for the selected language on characters included in an image. From an obtained OCR processing result, a matching degree indicating a degree of similarity between the recognized characters in the image and the language selected for the OCR processing is calculated. Then, in a case where the matching degree is equal to or smaller than a particular value, a language belonging to a different language group is selected to further perform OCR processing. The information processing apparatus of the present invention allows improvement in the efficiency of the OCR processing.
    Type: Grant
    Filed: February 1, 2013
    Date of Patent: September 9, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hiromasa Kawasaki
  • Publication number: 20140226904
    Abstract: An information processing apparatus includes a network creating unit that creates a network in which respective characters of plural character recognition results are represented as nodes, and in which nodes of adjacent character images are connected with a link, a first determining unit that determines a first candidate boundary in the network, a second determining unit that determines a second candidate boundary different from the first candidate boundary in the network, and an extracting unit that extracts, as to-be-searched objects, plural candidate character strings from a set of candidate character strings each formed of nodes between the first candidate boundary and the second candidate boundary.
    Type: Application
    Filed: September 19, 2013
    Publication date: August 14, 2014
    Applicant: Fuji Xerox Co., Ltd.
    Inventors: Shunichi KIMURA, Eiichi Tanaka, Takuya Sakurai, Motoyuki Takaai, Masatsugu Tonoike, Yohei Yamane