Segmenting Individual Characters Or Words Patents (Class 382/177)
  • Patent number: 11568659
    Abstract: A character recognizing apparatus includes an acquiring unit, an identifying unit, and a character recognizing unit. The acquiring unit acquires a string image that is an image of a string generated in accordance with one of multiple string generation schemes. The identifying unit identifies a range specified for a result of character recognition in each of the multiple string generation schemes. The character recognizing unit performs first character recognition on the string image, and if a result of the first character recognition has a feature of a particular string generation scheme of the multiple string generation schemes, the character recognizing unit performs second character recognition on the string image within the range specified for a result of character recognition in the particular string generation scheme.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 31, 2023
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Yusuke Suzuki
  • Patent number: 11551034
    Abstract: Described herein are systems, methods, and other techniques for training a generative adversarial network (GAN) to perform an image-to-image transformation for recognizing text. A pair of training images is provided to the GAN: a training image containing a set of characters in handwritten form and a reference training image containing the set of characters in machine-recognizable form. The GAN includes a generator and a discriminator. A generated image is produced by the generator from the training image. Update data is generated by the discriminator from the generated image and the reference training image. The GAN is trained by modifying one or both of the generator and the discriminator using the update data.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: January 10, 2023
    Assignee: Ancestry.com Operations Inc.
    Inventors: Mostafa Karimi, Gopalkrishna Veni, Yen-Yun Yu
  • Patent number: 11538235
    Abstract: Methods and apparatus to determine the dimensions of a region of interest of a target object and a class of the target object from an image using target object landmarks are disclosed herein. An example method includes identifying a landmark of a target object in an image based on a match between the landmark and a template landmark; classifying a target object based on the identified landmark; projecting dimensions of the template landmark based on a location of the landmark in the image; and determining a region of interest based on the projected dimensions, the region of interest corresponding to text printed on the target object.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: December 27, 2022
    Assignee: The Nielsen Company (US), LLC
    Inventor: Kevin Deng
  • Patent number: 11461782
    Abstract: Systems and methods are provided that distinguish humans from computers. In one implementation, a computer-implemented method selects, from a storage device, a plurality of images. The method further generates a document comprising the plurality of images for the security challenge. At least one image included in the plurality of images is oriented for display in a different direction than the other images. The method further receives a selection of one or more images included in the plurality of images and determines whether the selected one or more images are oriented for display in a different direction than the other images.
    Type: Grant
    Filed: June 11, 2009
    Date of Patent: October 4, 2022
    Assignee: Amazon Technologies, Inc.
    Inventor: William Randolph Zettler, Jr.
  • Patent number: 11409754
    Abstract: A method for context-aware data mining of a text document includes receiving a list of words parsed and preprocessed from an input query; computing a related distributed embedding representation for each word in the list of words using a word embedding model of the text document being queried; aggregating the related distributed embedding representations of all words in the list of words to represent the input query with a single embedding, by using one of an average of all the related distributed embedding representations or a maximum of all the related distributed embedding representations; retrieving a ranked list of document segments of N lines that are similar to the aggregated word embedding representation of the query, where N is a positive integer provided by the user; and returning the list of retrieved segments to a user.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 9, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giacomo Domeniconi, Eun Kyung Lee, Alessandro Morari
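The aggregation step this abstract describes (collapsing per-word embeddings into one query vector by averaging or taking an element-wise maximum, then ranking segments by similarity) can be sketched as follows. This is a toy illustration, not the patented implementation; the embedding table, its values, and the use of cosine similarity for ranking are assumptions for the example.

```python
import numpy as np

# Toy word-embedding table standing in for a trained model (hypothetical values).
EMBEDDINGS = {
    "disk":  np.array([0.9, 0.1, 0.0]),
    "error": np.array([0.8, 0.2, 0.1]),
    "login": np.array([0.0, 0.9, 0.2]),
}

def aggregate(words, mode="average"):
    """Collapse the per-word embeddings into a single query vector,
    using either the average or the element-wise maximum."""
    vecs = np.stack([EMBEDDINGS[w] for w in words])
    return vecs.mean(axis=0) if mode == "average" else vecs.max(axis=0)

def rank_segments(query_vec, segment_vecs, top_n=2):
    """Return indices of the top-N document segments by cosine similarity."""
    sims = [
        float(np.dot(query_vec, s) / (np.linalg.norm(query_vec) * np.linalg.norm(s)))
        for s in segment_vecs
    ]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:top_n]
```

With a query such as `aggregate(["disk", "error"])`, segments whose embeddings point in a similar direction rank first.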
  • Patent number: 11393079
    Abstract: There is provided an image processing apparatus including an input device configured to receive a stroke input, and a display controller configured to control a displaying of a modified stroke, wherein the modified stroke is synthesized based on characteristic parameters of the received stroke input and characteristic parameters of a reference stroke that has been matched to the received stroke input.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: July 19, 2022
    Assignee: SONY CORPORATION
    Inventors: Yoshihito Ohki, Yasuyuki Koga, Tsubasa Tsukahara, Ikuo Yamano, Hiroyuki Mizunuma, Miwa Ichikawa
  • Patent number: 11373038
    Abstract: The present disclosure relates to a method and a terminal for performing word segmentation on text information, and a storage medium. The method includes: acquiring the text information and configuration information, in which the configuration information includes at least two first word segmentation rules; converting the first word segmentation rules into second word segmentation rules according to a predetermined rule; in response to determining that an intersection exists between character strings of the text information matched by two of the second word segmentation rules, determining that two first word segmentation rules corresponding to the two of the second word segmentation rules associated with the intersection conflict; and processing the text information according to the configuration information, and outputting a result of the word segmentation on the text information.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: June 28, 2022
    Assignee: Beijing Xiaomi Intelligent Technology Co., Ltd.
    Inventors: Shuo Wang, Liang Shi, Yupeng Chen, Qun Guo
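The conflict test in this abstract (two rules conflict when the character-string ranges they match in the text intersect) can be illustrated with a small sketch. This is a simplified reading, not the patented method: it assumes the converted second word segmentation rules behave like regular expressions.

```python
import re

def match_spans(rule, text):
    """All (start, end) character spans of `text` matched by `rule` (a regex)."""
    return [m.span() for m in re.finditer(rule, text)]

def rules_conflict(rule_a, rule_b, text):
    """Two rules conflict if any of their matched spans overlap in the text."""
    for a0, a1 in match_spans(rule_a, text):
        for b0, b1 in match_spans(rule_b, text):
            if a0 < b1 and b0 < a1:   # spans have a non-empty intersection
                return True
    return False
```

For example, rules matching "new york" and "york city" conflict on the text "new york city", while "new york" and "city" do not.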
  • Patent number: 11354345
    Abstract: Systems and methods for receiving and analyzing a set of case records by extracting case text, performing natural language processing, and allocating each case text to a topic. Topics may be clustered to identify meaningful patterns that are reflected in numerous case records. The data resulting from the analysis may be visualized on a dashboard to allow users to identify and explore these patterns.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: June 7, 2022
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Philip Jacob, Brandon Chihkai Yang, Maria Beltran, Sohajpal Shergill, Chienchung Chen
  • Patent number: 11334578
    Abstract: According to one aspect of the invention, there is provided a method for searching for documents containing mathematical expressions, the method comprising the steps of: dividing a first document containing mathematical expressions into a plurality of components; comparing the plurality of components with a plurality of other components extracted from a plurality of other documents, with reference to weights respectively assigned to the plurality of components according to types of the components; and determining a document associated with the first document among the plurality of other documents, with reference to a result of the comparison, wherein the weights are adaptively adjusted according to a result of the determination of the document associated with the first document.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: May 17, 2022
    Assignee: CLASSCUBE CO., LTD.
    Inventor: Seong Chan Ahn
  • Patent number: 11302286
    Abstract: A picture obtaining method and apparatus and a picture processing method and apparatus are provided. The method includes: obtaining a grayscale image corresponding to a first picture and a first image, where a size of the first picture is equal to a size of the first image, the first image includes N parallel lines, a spacing between two adjacent lines does not exceed a spacing threshold, and N is an integer greater than 1; translating a pixel included in each line in the first image based on the grayscale image, to obtain a second image, where the second image includes a contour of an image in the first picture; and setting a pixel value of each pixel included in each line in the second image, to obtain a second picture.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: April 12, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Simon Ekstrand, Sha Qian, Johan Larsby, Haitao Dai, Fredrik Andreasson, Jonas Hans Andreas Fredriksson, Tim Jeppsson, Guolang Li, Rubin Cai, Xueyan Huang
  • Patent number: 11301627
    Abstract: System, method, and various embodiments for providing a contextualized character recognition system are described herein. An embodiment operates by determining a plurality of predicted words of an image. An accuracy measure for each of the plurality of predicted words is identified, and a replaceable word with an accuracy measure below a threshold is identified. A plurality of candidate words associated with the replaceable word are identified, and a probability for each of the candidate words is calculated based on a contextual analysis. The candidate word with the highest probability is selected. The plurality of predicted words, with the selected candidate word replacing the replaceable word, is then output.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: April 12, 2022
    Assignee: SAP SE
    Inventors: Rohit Kumar Gupta, Johannes Hoehne, Anoop Raveendra Katti
  • Patent number: 11270146
    Abstract: Aspects of the present invention provide a new text location technique, which can be applied to general handwriting detection at a variety of levels, including characters, words, and sentences. The inventive technique is efficient in training deep learning systems to locate text. The technique works for different languages, for text in different orientations, and for overlapping text. In one aspect, the technique's ability to separate overlapping text also makes the technique useful in application to overlapping objects. Embodiments take advantage of a so-called skyline appearance that text tends to have. Recognizing a skyline appearance for text can facilitate the proper identification of bounding boxes for the text. Even in the case of overlapping text, discernment of a skyline appearance for words can help with the proper identification of bounding boxes for each of the overlapping text words/phrases, thereby facilitating the separation of the text for purposes of recognition.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 8, 2022
    Assignee: KONICA MINOLTA BUSINESS SOLUTIONS U.S.A., INC.
    Inventor: Junchao Wei
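The "skyline appearance" this abstract exploits can be approximated with a simple projection: for each image column, record the topmost ink row, which traces the top contour of the text. The sketch below is an illustrative simplification, not the patented deep-learning technique; the binary-image representation and helper names are assumptions.

```python
import numpy as np

def skyline_profile(binary_img):
    """For each column, the topmost ink row (image height if the column is empty).
    The resulting 1-D 'skyline' roughly traces the top contour of the text."""
    h, w = binary_img.shape
    profile = np.full(w, h)
    for col in range(w):
        rows = np.flatnonzero(binary_img[:, col])
        if rows.size:
            profile[col] = rows[0]
    return profile

def bounding_box(binary_img):
    """Tight (top, left, bottom, right) box around all ink pixels."""
    rows = np.flatnonzero(binary_img.any(axis=1))
    cols = np.flatnonzero(binary_img.any(axis=0))
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1
```

A learned system could consume such profiles as features when proposing bounding boxes, including for overlapping text where two skylines interleave.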
  • Patent number: 11250252
    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: February 15, 2022
    Assignee: ADOBE INC.
    Inventors: Christopher Alan Tensmeyer, Rajiv Jain, Curtis Michael Wigington, Brian Lynn Price, Brian Lafayette Davis
  • Patent number: 11210546
    Abstract: The present disclosure proposes an end-to-end text recognition method and apparatus, computer device and readable medium. The method comprises: obtaining a to-be-recognized picture containing a text region; recognizing a position of the text region in the to-be-recognized picture and text content included in the text region with a pre-trained end-to-end text recognition model; the end-to-end text recognition model comprising a region of interest perspective transformation processing module for performing perspective transformation processing for the text region. The technical solution of the present disclosure does not need to serially arrange a plurality of steps, avoids introducing accumulated errors, and effectively improves the accuracy of the text recognition.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: December 28, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yipeng Sun, Chengquan Zhang, Zuming Huang, Jiaming Liu, Junyu Han, Errui Ding
  • Patent number: 11200412
    Abstract: A method and system for generating a parsed document from a digital document. The method includes segmenting the digital document into at least one section; classifying the at least one section of the digital document into at least one of the following classes: text, table, figure, or noise; identifying a reading order of the digital document; and processing each of the at least one section of the digital document. Furthermore, processing each of the at least one section of the digital document comprises extracting content from each of the at least one section based on the class; and structuring the extracted content based on the reading order to generate the parsed document.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: December 14, 2021
    Assignee: Innoplexus AG
    Inventors: Gaurav Tripathi, Rohit Kewalramani, Jijeesh KR, Vatsal Agarwal
  • Patent number: 11176364
    Abstract: Described herein are various technologies pertaining to text extraction from a document. A computing device receives the document. The document comprises computer-readable text and a layout, wherein the layout defines positions of the computer-readable text within a two-dimensional area represented by the document. Responsive to receiving the document, the computing device identifies at least one textual element in the computer-readable text based upon spatial factors between portions of the computer-readable text and contextual relationships between the portions of the computer-readable text. The computing device then outputs the at least one textual element.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: November 16, 2021
    Assignee: HYLAND SOFTWARE, INC.
    Inventors: Ralph Meier, Thorsten Wanschura, Johannes Hausmann, Harry Urbschat
  • Patent number: 11176675
    Abstract: A method of identifying contiguities in images is disclosed. The contiguities are indicative of features and various qualities of an image, which may be used for identifying objects and/or relationships in images. Alternatively, the contiguities may be helpful in ensuring that an image has a desired switching factor, so as to create a desired effect when combined with other images in a composite image. The contiguity may be a group of picture elements that are adjacent to one another that form a continuous image element that extends generally horizontally (e.g., diagonally, horizontally) across the image.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: November 16, 2021
    Assignee: CONFLU3NCE LTD
    Inventor: Tami Robyn Ellison
  • Patent number: 11163992
    Abstract: An information processing apparatus includes a first designation unit, a second designation unit, a position acquisition unit, a memory, and an extraction unit. The first designation unit designates an extensive area from a first read image, the extensive area including an output area and an object area. The second designation unit designates the output area from the designated extensive area. The position acquisition unit acquires positional information regarding the extensive area with respect to the first read image and positional information regarding the output area with respect to the extensive area. The memory stores the positional information regarding the extensive area and the positional information regarding the output area. The extraction unit identifies a position of the extensive area in a second read image in a format identical to a format of the first read image on a basis of the positional information regarding the extensive area stored by the memory.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: November 2, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Kunihiko Kobayashi, Shintaro Adachi, Shigeru Okada, Akinobu Yamaguchi, Junichi Shimizu, Kazuhiro Oya, Shinya Nakamura, Akane Abe
  • Patent number: 11126838
    Abstract: A computer implemented method includes receiving a document with line item textual entries and an attachment containing images of different objects characterizing different transactions. The images of the different objects are split into individual image objects. Attributes from the individual image objects are extracted. The line item textual entries are matched with the individual image objects to form matched image objects. The matched image objects include ambiguous matches with multiple individual image objects assigned to a single line item textual entry or a single individual image object assigned to multiple line item textual entries. An assignment model is applied to resolve the ambiguous matches. The assignment model defines priority constraints, assigns pairs of line item textual entries and individual image objects that meet highest priority constraints, removes highest priority constraints when ambiguous matches remain, and repeats these operations until no ambiguous matches remain.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: September 21, 2021
    Assignee: APPZEN, INC.
    Inventors: Edris Naderan, Thomas James White, Deepti Chafekar, Debashish Panigrahi, Kunal Verma, Snigdha Purohit
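The assignment model this abstract describes (assign pairs meeting the highest-priority constraints first, remove them, and repeat at lower priorities until no ambiguous matches remain) can be sketched as a greedy loop. This is a minimal illustration under assumed inputs, not the patented model: here each candidate pair carries a single numeric priority rather than a set of named constraints.

```python
def resolve_matches(candidates):
    """Greedy priority-constrained assignment.

    `candidates` maps (line_item, image_obj) -> priority, where a higher
    priority means a stronger constraint is satisfied (e.g. both amount and
    date agree). Pairs are assigned from highest priority down; once a line
    item or image object is used, it is removed from further consideration,
    which is how ambiguous many-to-one matches get resolved.
    """
    assigned = {}
    used_items, used_objects = set(), set()
    for (item, obj), _prio in sorted(candidates.items(), key=lambda kv: -kv[1]):
        if item not in used_items and obj not in used_objects:
            assigned[item] = obj
            used_items.add(item)
            used_objects.add(obj)
    return assigned
```

A line item matched to two receipts, for instance, keeps only the higher-priority receipt, freeing the other receipt for a different line item.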
  • Patent number: 11113569
    Abstract: An information processing device according to an embodiment includes a determination unit and a first training unit. The determination unit determines whether an unlabeled data point whose class label is unknown is a non-targeted data point that is not targeted for pattern recognition. The first training unit trains a first classifier for use in the pattern recognition through semi-supervised learning using a first training dataset including unlabeled data determined not to be the non-targeted data and not including unlabeled data determined to be the non-targeted data.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: September 7, 2021
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Digital Solutions Corporation
    Inventor: Ryohei Tanaka
  • Patent number: 11113518
    Abstract: A method for extracting data from lineless tables includes storing an image including a table in a memory. A processor operably coupled to the memory identifies a plurality of text-based characters in the image, and defines multiple bounding boxes based on the characters. Each of the bounding boxes is uniquely associated with at least one of the text-based characters. A graph including multiple nodes and multiple edges is generated based on the bounding boxes, using a graph construction algorithm. At least one of the edges is identified for removal from the graph, and removed from the graph to produce a reduced graph. The reduced graph can be sent to a neural network to predict row labels and column labels for the table.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: September 7, 2021
    Inventors: Freddy Chongtat Chua, Tigran Ishkhanov, Nigel Paul Duffy
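The graph-construction step in this abstract (nodes from character bounding boxes, edges from a graph construction algorithm, then edge removal to produce a reduced graph) can be sketched with a nearest-neighbour graph. The k-nearest construction and the length-based pruning rule are assumptions for illustration; the patent leaves the algorithm and the edge-removal criterion to a learned component.

```python
import math

def centers(boxes):
    """Center point of each (x0, y0, x1, y1) bounding box."""
    return [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in boxes]

def build_graph(boxes, k=2):
    """Connect each box to its k nearest neighbours by center distance."""
    pts = centers(boxes)
    edges = set()
    for i, p in enumerate(pts):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(pts) if j != i)
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges

def prune(edges, boxes, max_len):
    """Drop edges longer than max_len, producing the reduced graph."""
    pts = centers(boxes)
    return {(i, j) for i, j in edges if math.dist(pts[i], pts[j]) <= max_len}
```

In the patented pipeline the reduced graph is then handed to a neural network to predict row and column labels; here pruning simply disconnects boxes that are too far apart to share a cell.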
  • Patent number: 11080910
    Abstract: The present invention relates to a device and a method for placing an original or translated explanation of a reference numeral around the reference numeral in a patent drawing, by recognizing a reference numeral included in a patent drawing, searching for a space to place an explanation corresponding to the recognized reference numeral, generating a placement information set including position information for displaying the explanation of the reference numeral in the searched empty space, and providing the placement information set to a corresponding patent drawing image. Utilization of the present invention makes it possible to recognize clearly and quickly what is represented by a reference numeral included in a patent drawing, thereby increasing the readability of a drawing, and facilitating understanding of the technical idea of a patent through patent drawings.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: August 3, 2021
    Assignee: KWANGGETOCO., LTD.
    Inventors: Min Soo Kang, Jae Sung Hwang, Seok Hyoun Noe
  • Patent number: 11062164
    Abstract: A method for estimating text heights of text line images includes estimating a text height with a sequence recognizer. The method further includes normalizing a vertical dimension and/or position of text within a text line image based on the text height. The method may also further include calculating a feature of the text line image. In some examples, the sequence recognizer estimates the text height with a machine learning model.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: July 13, 2021
    Assignee: LEVERTON HOLDING LLC
    Inventors: Florian Kuhlmann, Michael Kieweg, Saurabh Shekhar Verma
  • Patent number: 11017258
    Abstract: A system for automated user input alignment receives the user input at a touchscreen display. A skew of the user input is identified as the user input is being received at the touchscreen display. A skew correction is determined based on the identified skew and applied to the user input in an automated alignment process to align the user input on the touchscreen display. The user input is displayed with the applied skew correction on the touchscreen display with improved efficiency and without user manipulation to perform the alignment.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: May 25, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arie Y. Gur, Amir Zyskind
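One simple way to realize the skew identification and correction this abstract describes is a least-squares fit over the touch samples: the slope of the fitted baseline gives the skew angle, and rotating the samples by the opposite angle levels the input. This is an illustrative sketch, not the patented method, which does not specify the estimator.

```python
import math

def estimate_skew(points):
    """Least-squares slope of the stroke baseline, as an angle in radians.
    `points` is a list of (x, y) touch samples."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return math.atan2(num, den)

def correct_skew(points, angle):
    """Rotate the samples by -angle about their centroid to level the input."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    c, s = math.cos(-angle), math.sin(-angle)
    return [(mx + c * (x - mx) - s * (y - my),
             my + s * (x - mx) + c * (y - my)) for x, y in points]
```

Input drifting upward at 45 degrees, for example, comes back as a horizontal line through the samples' centroid.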
  • Patent number: 11010423
    Abstract: Examples of systems and methods for automatic population of electronic documents are described. In an example, a digital base document having the information to be populated in a data field of the electronic document may be obtained. From the digital base document a data item to provide the information may be extracted. Further, for the digital base document, a similarity score may be computed with respect to each document type defined in predefined mapping data, the predefined mapping data including, for each document type, a weight associated with data items occurring in the document type, the weight being assigned based on the importance of the data item to the document. Based on the similarity score, a document type of the digital base document may be identified. Further, based on a position of the data item in the digital base document and the identified document type, the data field may be populated.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: May 18, 2021
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Subhasish Roy, Inderpreet Singh, Anobika Roy, Yabesh Jebaraj
  • Patent number: 10970581
    Abstract: An image forming apparatus includes an image reading unit, an extraction section, a character recognition section, a search section, an attachment section, and a file storage section. The image reading unit generates first image information. The extraction section extracts a specific area from the image based on the first image information. The character recognition section generates text information corresponding to information of a character string image included in the specific area. The search section searches for a webpage containing information relating to a meaning of a text indicated by the text information. The attachment section attaches link information of the webpage to the information of the character string image to generate second image information. The file storage section stores the second image information as a file therein. The specific area is an area with a specific mark applied thereto.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: April 6, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Daigo Tashiro
  • Patent number: 10963693
    Abstract: A method and apparatus for training a character detector based on weak supervision, a character detection system and a computer readable storage medium are provided, wherein the method includes: inputting coarse-grained annotation information of a to-be-processed object, wherein the coarse-grained annotation information includes a whole bounding outline of a word, text bar or line of the to-be-processed object; dividing the whole bounding outline of the coarse-grained annotation information, to obtain a coarse bounding box of a character of the to-be-processed object; obtaining a predicted bounding box of the character of the to-be-processed object through a neural network model from the coarse-grained annotation information; and determining a fine bounding box of the character of the to-be-processed object as character-based annotation of the to-be-processed object, according to the coarse bounding box and the predicted bounding box.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: March 30, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Chengquan Zhang, Jiaming Liu, Junyu Han, Errui Ding
  • Patent number: 10902301
    Abstract: An information processing device includes a display controller that displays a term expression expressing a term which appears in target data, on a display in a display mode based on a level of liveliness of the target data when the term appears.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: January 26, 2021
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Aoi Takahashi
  • Patent number: 10817741
    Abstract: In an optical character recognition system, a word segmentation method, comprising: acquiring a sample image comprising a word spacing marker or a non-word spacing marker; processing the sample image with a convolutional neural network to obtain a first eigenvector corresponding to the sample image, a word spacing probability value and/or a non-word spacing probability value corresponding to the first eigenvector; acquiring a to-be-tested image, and processing the to-be-tested image with the convolutional neural network to obtain a second eigenvector corresponding to the to-be-tested image, a word spacing probability value or a non-word spacing probability value corresponding to the second eigenvector; and performing word segmentation on the to-be-tested image using the obtained word spacing probability value or non-word spacing probability value.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: October 27, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Wenmeng Zhou, Mengli Cheng, Xudong Mao, Xing Chu
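Once a network has produced per-column word-spacing probabilities, the final segmentation step this abstract describes reduces to cutting wherever the probability exceeds a threshold. The sketch below shows that last step only; the per-column probability representation and the threshold value are assumptions, since the patent leaves them to the trained CNN.

```python
def segment_words(spacing_probs, threshold=0.5):
    """Split column indices into word segments, cutting wherever the
    predicted word-spacing probability exceeds the threshold.
    Returns (start, end) column ranges, one per word."""
    words, start = [], None
    for col, p in enumerate(spacing_probs):
        if p <= threshold:          # text column: open or extend a word
            if start is None:
                start = col
        else:                       # spacing column: close any open word
            if start is not None:
                words.append((start, col))
                start = None
    if start is not None:
        words.append((start, len(spacing_probs)))
    return words
```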
  • Patent number: 10796143
    Abstract: An information processing apparatus includes: a first extracting unit that extracts a position of a character entry box in an input image; a recognizing unit that recognizes a character string written in the character entry box; a calculating unit that calculates recognition accuracy of each of characters of the character string recognized by the recognizing unit; a first detector that detects that a value based on the recognition accuracy is equal to or larger than a preset threshold value; a second extracting unit that extracts a position of a circumscribed rectangle for each character of the character string in the input image; a second detector that detects contact of the circumscribed rectangle with the character entry box; and a display that displays the character string to be corrected on the basis of a result of detection by the first detector and a result of detection by the second detector.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: October 6, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Satoshi Kubota, Shunichi Kimura
  • Patent number: 10776951
    Abstract: An approach is provided for an asymmetric evaluation of polygon similarity. The approach, for instance, involves receiving a first polygon representing an object depicted in an image. The approach also involves generating a transformation of the image comprising image elements whose values are based on a respective distance that each image element is from a nearest image element located on a first boundary of the first polygon. The approach further involves determining a subset of the plurality of image elements of the transformation that intersect with a second boundary of a second polygon. The approach further involves calculating a polygon similarity of the second polygon with respect to the first polygon based on the values of the subset of image elements normalized to a length of the second boundary of the second polygon.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: September 15, 2020
    Assignee: HERE Global B.V.
    Inventors: Richard Kwant, Anish Mittal, David Lawlor
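The computation this abstract describes — a distance field around the first polygon's boundary, sampled along the second polygon's boundary and normalized by that boundary's length — can be approximated without an explicit image by brute-force distances between boundary samples. This is a coarse sketch under that assumption, not the patented transformation, and the sampling step is arbitrary.

```python
import math

def sample_boundary(polygon, step=0.25):
    """Points sampled every `step` units along the polygon's edges."""
    pts = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        length = math.dist((x0, y0), (x1, y1))
        k = max(1, int(length / step))
        for t in range(k):
            f = t / k
            pts.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return pts

def asymmetric_similarity(poly_a, poly_b):
    """Mean distance from poly_b's boundary samples to poly_a's boundary:
    0.0 means b lies exactly on a. Asymmetric, since swapping the
    arguments changes which boundary is sampled and which is measured."""
    ref = sample_boundary(poly_a)
    probe = sample_boundary(poly_b)
    total = sum(min(math.dist(p, q) for q in ref) for p in probe)
    return total / len(probe)
```

A square compared with itself scores 0.0, while a square inset by one unit scores roughly 1.0, its constant boundary-to-boundary distance.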
  • Patent number: 10769424
    Abstract: A new segment of electronic handwriting is provided to a handwriting recognition module to obtain a plurality of textual interpretations of the new segment. The textual interpretations obtained from the handwriting recognition module are scored based on how each respective electronic handwriting representation would change a display of existing electronic content when the respective electronic handwriting representation is displayed substantially at the user designated position within or adjacent to the existing electronic content. Based on the scoring, an electronic handwriting representation corresponding to a respective textual interpretation of the plurality of textual interpretations is selected, and the existing electronic content is modified to include the selected electronic handwriting representation located substantially at the user designated position.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: September 8, 2020
    Assignee: Google LLC
    Inventors: Maria Cirimele, Thomas William Buckley, Robert Ky Mickle, Tayeb Al Karim
  • Patent number: 10586133
    Abstract: The present disclosure relates to a system and method to transform character images from one representation to another representation. According to some embodiments of the present disclosure, a form may be processed to separate background data from content data, wherein character images from one or both the background data and the content data may be transformed. In some aspects, one or both handwritten font and type font may be processed in the character images, wherein the original fonts may be transformed into a uniform type font. In some embodiments, the character images may be translated to their correct state, wherein the translation may occur before or after the transformation. In some implementations, the translation and font transformation may allow for more efficient and effective character recognition.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: March 10, 2020
    Inventors: Matthew Thomas Berseth, Robert Jackson Marsh
  • Patent number: 10552535
    Abstract: The positioning of elements of a broken word can be corrected by receiving an optical character recognition (OCR) conversion of a printed publication and identifying multiple parts of the broken word from the OCR conversion to output in a graphical user interface (GUI). The multiple parts can be placed in the GUI using original positioning data for the printed publication. A user can make a selection in the GUI indicating that multiple parts from the OCR conversion belong to the broken word, and the bounds of the multiple parts can be automatically adjusted to form a corrected word.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: February 4, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Satishkumar Kothandapani Shanmugasundaram, Shubham Chandra Gupta, Arpita Agrawal
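The bounds-adjustment step in the abstract above amounts to taking the union of the parts' bounding boxes. A minimal sketch, assuming boxes are `(x0, y0, x1, y1)` tuples in page coordinates (the function name and representation are illustrative, not from the patent):

```python
def merge_broken_word(parts):
    """Merge the bounding boxes of the parts of a broken word into a
    single corrected bound (the smallest box enclosing all parts)."""
    x0 = min(p[0] for p in parts)
    y0 = min(p[1] for p in parts)
    x1 = max(p[2] for p in parts)
    y1 = max(p[3] for p in parts)
    return (x0, y0, x1, y1)
```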
  • Patent number: 10546209
    Abstract: A machine learning method for learning how to form bounding boxes, performed by a machine learning apparatus, includes extracting learning images including a target object among a plurality of learning images included in a learning database, generating additional learning images in which the target object is rotated from the learning images including the target object, and updating the learning database using the additional learning images.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: January 28, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Jae Lee, Hyung Kwan Son, Keun Dong Lee, Jong Gook Ko, Weon Geun Oh, Da Un Jung
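The augmentation step described above can be sketched with plain 90-degree grid rotations; `rotate90` and `augment_with_rotations` are illustrative names, and the patent does not specify the rotation angles used:

```python
def rotate90(grid):
    # Rotate a 2-D pixel grid 90 degrees clockwise.
    return [list(row) for row in zip(*grid[::-1])]

def augment_with_rotations(image):
    # Generate additional learning images in which the target object
    # is rotated, here the three remaining 90-degree orientations.
    images, g = [], image
    for _ in range(3):
        g = rotate90(g)
        images.append(g)
    return images
```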
  • Patent number: 10521686
    Abstract: An image processing apparatus including: a processor; and memory storing computer-readable instructions therein, the computer-readable instructions, when executed by the processor, causing the image processing apparatus to perform: acquiring target image data configured by a plurality of pixels and representing a target image including a character; acquiring a character code corresponding to the character in the target image; acquiring an index value relating to a number of a plurality of character pixels configuring the character in the target image by using the character code corresponding to the character in the target image; determining a first extraction condition by using the index value; and extracting the plurality of character pixels satisfying the first extraction condition from the plurality of pixels in the target image.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: December 31, 2019
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventor: Koichi Tsugimura
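One way to read the extraction-condition step above: pick a darkness threshold so that the number of extracted pixels best matches the count expected for the character code. A sketch under that assumption (the intensity encoding and function name are hypothetical):

```python
def extract_character_pixels(pixels, expected_count):
    """Choose a darkness threshold whose extracted-pixel count best
    matches the index value expected for the character code, then
    return the pixels satisfying that extraction condition.
    `pixels` maps (x, y) -> intensity (0 dark .. 255 light)."""
    best_thresh, best_diff = 0, float("inf")
    for thresh in range(256):
        count = sum(1 for v in pixels.values() if v <= thresh)
        diff = abs(count - expected_count)
        if diff < best_diff:
            best_thresh, best_diff = thresh, diff
    return {p for p, v in pixels.items() if v <= best_thresh}
```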
  • Patent number: 10402431
    Abstract: A method and system for identifying search terms for placing advertisements along with search results is provided. The advertisement system selects a description of an item that is to be advertised. The advertisement system then retrieves documents that match the selected description. The advertisement system generates a score for each word of the retrieved documents that indicates relatedness of the word to the item to be advertised. After generating the scores for the words, the advertisement system identifies phrases of the words within the documents that are related to the item. The advertisement system then generates search terms for the item to be advertised from the identified phrases. The advertisement system submits the search terms and an advertisement to a search engine service for placement of a paid-for advertisement for the item.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: September 3, 2019
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Nathaniel B. Scholl, Alexander W. DeNeui
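The abstract leaves the word-scoring function unspecified; a simple stand-in for the relatedness score is document frequency across the retrieved documents (all names here are illustrative):

```python
from collections import Counter

def word_scores(documents):
    # Score each word by the fraction of retrieved documents it
    # appears in -- a stand-in for the patent's relatedness score,
    # which the abstract does not define.
    df = Counter()
    for doc in documents:
        df.update(set(doc.lower().split()))
    n = len(documents)
    return {w: df[w] / n for w in df}
```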
  • Patent number: 10402673
    Abstract: Systems and methods for digitized document image data spillage recovery are provided. One or more memories may be coupled to one or more processors, the one or more memories including instructions operable to be executed by the one or more processors. The one or more processors may be configured to capture an image; process the image through at least a first pass to generate a first contour; remove a preprinted bounding region of the first contour to retain text; generate one or more pixel blobs by applying one or more filters to smudge the text; identify the one or more pixel blobs that straddle one or more boundaries of the first contour; resize the first contour to enclose spillage of the one or more pixel blobs; overlay the text from the image within the resized contour; and apply pixel masking to the resized contour.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: September 3, 2019
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventor: Douglas Slattery
  • Patent number: 10402704
    Abstract: Various examples are directed to methods and systems for object recognition in an image. A computer vision system may receive a patch comprising a plurality of pixels arranged in a grid. The computer vision system may determine a plurality of columns and a plurality of rows in the patch. The plurality of columns may be based at least in part on a column target sum and the plurality of rows may be based at least in part on a row target sum.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: September 3, 2019
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventor: Shuang Wu
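The column/row determination above reads as adaptive binning: accumulate per-column (or per-row) pixel sums until a target sum is reached, then start a new group. A sketch, with hypothetical names:

```python
def split_by_target_sum(sums, target):
    """Group consecutive columns (or rows) of a patch so that each
    group's pixel-intensity sum reaches roughly the target sum."""
    groups, current, acc = [], [], 0
    for i, s in enumerate(sums):
        current.append(i)
        acc += s
        if acc >= target:
            groups.append(current)
            current, acc = [], 0
    if current:          # keep any trailing partial group
        groups.append(current)
    return groups
```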
  • Patent number: 10372980
    Abstract: This disclosure is generally directed to identifying electronic forms using spatial information of elements presented on a website. Identifying a type of an electronic form may include identifying particular input elements associated with the form, determining a bounding region of the input element, expanding the bounding region, and determining any intersections of the expanded bounding region with one or more label elements proximate to the input element. Keywords of the label elements can be analyzed to increase or decrease a confidence level that an input element is associated with a particular input type. A bounding region can be dynamically sized based on a number of intersecting elements. An electronic form can be identified based on the identified input elements. In some instances, the electronic forms may assist a user in accessing or updating remotely stored personal information, including payment information, across one or more third party electronic sites.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: August 6, 2019
    Assignee: Switch, Inc.
    Inventors: Chris Hopen, Trevor Marcus
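The expand-and-intersect test described above can be sketched with axis-aligned rectangles; the margin value and function names are assumptions, not from the patent:

```python
def expand(box, margin):
    # Grow a (x0, y0, x1, y1) bounding region by a margin on all sides.
    x0, y0, x1, y1 = box
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def intersects(a, b):
    # Axis-aligned rectangle overlap test.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def labels_near_input(input_box, label_boxes, margin=20):
    # Return indices of label elements whose boxes intersect the
    # expanded bounding region of the input element.
    grown = expand(input_box, margin)
    return [i for i, lb in enumerate(label_boxes) if intersects(grown, lb)]
```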
  • Patent number: 10354161
    Abstract: The present disclosure relates to optical character recognition, and more specifically techniques for detecting font size in a digital image. According to one embodiment, a client device receives a digital image of a document having one or more textual components. The client device finds one or more contours bounding the one or more textual components in the digital image of the document. The client device detects a font size for text contained in the digital image using the one or more contours. The client device extracts the text from the digital image upon detecting that the detected font size is above a defined threshold value.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: July 16, 2019
    Assignee: INTUIT, INC.
    Inventors: Peijun Chiang, Vijay Yellapragada
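A minimal sketch of the contour-based size check above, estimating font size as the median contour height (the median heuristic and function names are assumptions; the patent does not specify how contour geometry maps to font size):

```python
def detect_font_size(contour_boxes):
    # Estimate font size as the median height of the contours
    # bounding the textual components.
    heights = sorted(y1 - y0 for _, y0, _, y1 in contour_boxes)
    return heights[len(heights) // 2]

def should_extract(contour_boxes, threshold):
    # Extract text only when the detected size is above the threshold.
    return detect_font_size(contour_boxes) > threshold
```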
  • Patent number: 10331212
    Abstract: The present invention provides a safety control system for a vehicle with controls located on the vehicle steering wheel. The controls may be arranged in a cluster on one or both sides of the upper half of the steering wheel. The controls can be located in areas of the steering wheel including the spokes, the rim or a special flange extended from the rim or the spoke of the steering wheel and are easily recognizable and accessible by the driver while the driver is looking ahead from a normal driving position with the driver's eyes focused on the road and maintaining the driver's hands on the steering wheel. The controls can be further enhanced by varied coloring, shape, size, and texture to make them easily identifiable. The controls can be used to access and control vehicle systems or portable telematics devices in a multimodal process in conjunction with thumb gesture interpretation or speech recognition.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: June 25, 2019
    Assignee: ACT-IP
    Inventor: Mouhamad Ahmad Naboulsi
  • Patent number: 10311149
    Abstract: A natural language translation device contains a bus and an input interface connecting to the bus for receiving a source sentence in a first natural language to be translated to a target sentence in a second natural language, one word at a time in sequential order. A two-dimensional (2-D) symbol containing a super-character characterizing the i-th word of the target sentence based on the received source sentence is formed in accordance with a set of 2-D symbol creation rules. The i-th word of the target sentence is obtained by classifying the 2-D symbol via a deep learning model that contains multiple ordered convolution layers in a Cellular Neural Networks or Cellular Nonlinear Networks (CNN) based integrated circuit.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: June 4, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Catherine Chi, Charles Jin Young, Jason Z Dong, Baohua Sun
  • Patent number: 10248313
    Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: April 2, 2019
    Assignee: Google LLC
    Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
  • Patent number: 10181075
    Abstract: An information processing apparatus includes an evaluation unit configured to evaluate whether a partial region of a photographing range of an imaging unit is a region suitable for analysis processing to be performed based on feature quantities of an object, with reference to a track of the object in an image captured by the imaging unit, and an output control unit configured to control the information processing apparatus to output information reflecting an evaluation result obtained by the evaluation unit. Accordingly, the information processing apparatus can help a user improve the accuracy of the analysis processing performed based on the feature quantities of the object.
    Type: Grant
    Filed: October 12, 2016
    Date of Patent: January 15, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hiroshi Tojo, Tomoya Honjo, Shinji Yamamoto
  • Patent number: 10176148
    Abstract: Technologies are described to provide smart flipping of groups of objects. According to some examples, a graphics module within an application may determine whether an object within a group of objects to be flipped is flippable, that is, can be flipped without resulting in loss of object context after the flip operation. Then, the graphics module may flip the group of objects by translating all objects (moving their locations to appropriate new locations based on the flip operation), flipping the objects that can be flipped, and not flipping the objects deemed not flippable, thereby preserving the displayed context of those objects.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: January 8, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Rahul Dhaundiyal
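The translate-without-mirroring behavior described above can be sketched in one dimension: every object's horizontal span is reflected across the group's extent, while only the flippable objects are themselves mirrored (the representation and names are illustrative):

```python
def flip_group(spans, flippable):
    """Horizontally flip a group of objects. `spans` holds each
    object's (x0, x1) extent; `flippable` marks which objects may be
    mirrored. Every object is translated to its mirrored location,
    but the returned flag records whether its content is mirrored."""
    left = min(s[0] for s in spans)
    right = max(s[1] for s in spans)
    return [((left + right - x1, left + right - x0), can_flip)
            for (x0, x1), can_flip in zip(spans, flippable)]
```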
  • Patent number: 10134367
    Abstract: In one embodiment, a method includes: dividing a set of texts into one or more text blocks, each text block including a portion of the set of texts; rendering each text block to obtain one or more rendered text blocks; determining a placement instruction for each rendered text block, the placement instruction indicating the position of the rendered text block when it is displayed; and sending the one or more rendered text blocks and their respectively associated placement instructions to an electronic device for display on the electronic device.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: November 20, 2018
    Assignee: Facebook, Inc.
    Inventor: Barak Reuven Naveh
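A sketch of the divide/render/place pipeline above, with rendering elided and an ordinal slot standing in for the placement instruction (all names are hypothetical):

```python
def divide_into_blocks(texts, block_size):
    # Divide a set of texts into text blocks, each carrying a portion
    # of the texts plus a placement instruction giving its display slot.
    blocks = []
    for i in range(0, len(texts), block_size):
        blocks.append({
            "texts": texts[i:i + block_size],
            "placement": {"position": len(blocks)},  # ordinal slot
        })
    return blocks
```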
  • Patent number: 10114889
    Abstract: Techniques for filtering information are described herein. In accordance with the present disclosure, a text acquisition module is configured to acquire text content to be filtered and a scanning module is configured to scan the text content to be filtered. The disclosed techniques scan the text content through a preset keyword dictionary, record a position of each keyword in the text content and acquire the character pitch between keywords in the text content according to the position of each keyword in the text content. A pitch judgment module is configured to judge whether the character pitch exceeds a preset character pitch and filter the keyword(s) in the text content in response to a determination that the character pitch exceeds the preset character pitch.
    Type: Grant
    Filed: May 15, 2013
    Date of Patent: October 30, 2018
    Assignee: Beijing Qihoo Technology Company Limited
    Inventors: Menggang Han, Tiejun Li, Xuping Liu
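The scan-and-pitch pipeline above can be sketched as follows; per the abstract's wording, keyword pairs are filtered when their pitch exceeds the preset value (the pair-wise reporting and all names are assumptions):

```python
def filter_split_keywords(text, keywords, preset_pitch):
    """Scan text against a keyword dictionary, record every keyword
    position, compute the character pitch between consecutive hits,
    and report keyword pairs whose pitch exceeds the preset pitch."""
    positions = []
    for kw in keywords:
        start = text.find(kw)
        while start != -1:
            positions.append((start, kw))
            start = text.find(kw, start + 1)
    positions.sort()
    filtered = []
    for (p1, k1), (p2, k2) in zip(positions, positions[1:]):
        pitch = p2 - (p1 + len(k1))  # gap between the two occurrences
        if pitch > preset_pitch:
            filtered.append((k1, k2, pitch))
    return filtered
```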
  • Patent number: 10102453
    Abstract: A string of natural language texts is received and formed into a multi-layer 2-D symbol in a first computing system. The 2-D symbol comprises a matrix of N×N pixels of data representing a “super-character”. The matrix is divided into M×M sub-matrices with each sub-matrix containing (N/M)×(N/M) pixels. N and M are positive integers, and N is preferably a multiple of M. Each sub-matrix represents one ideogram defined in an ideogram collection set. “Super-character” represents a meaning formed from a specific combination of a plurality of ideograms. The meaning of the “super-character” is learned in a second computing system by using an image processing technique to classify the 2-D symbol, which is formed in the first computing system and transmitted to the second computing system. The image processing technique includes predefining a set of categories and determining a probability for associating each of the predefined categories with the meaning of the “super-character”.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: October 16, 2018
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
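The matrix arithmetic above is straightforward: with N a multiple of M, each ideogram occupies an (N/M)×(N/M) cell of the super-character. A sketch (the function name is illustrative):

```python
def submatrix_bounds(N, M):
    """For an N x N super-character matrix divided into M x M
    sub-matrices (N a multiple of M), return each ideogram cell's
    pixel bounds as (row0, col0, row1, col1), row-major order."""
    assert N % M == 0, "N is preferably a multiple of M"
    side = N // M
    return [(r * side, c * side, (r + 1) * side, (c + 1) * side)
            for r in range(M) for c in range(M)]
```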
  • Patent number: 10083171
    Abstract: A string of natural language texts is received and formed into a multi-layer 2-D symbol in a computing system. The 2-D symbol comprises a matrix of N×N pixels of K-bit data representing a “super-character”. The matrix is divided into M×M sub-matrices with each sub-matrix containing (N/M)×(N/M) pixels. K, N and M are positive integers, and N is preferably a multiple of M. Each sub-matrix represents one ideogram defined in an ideogram collection set. “Super-character” represents a meaning formed from a specific combination of a plurality of ideograms. The meaning of the “super-character” is learned by classifying the 2-D symbol via a trained convolutional neural networks model having bi-valued 3×3 filter kernels in a Cellular Neural Networks or Cellular Nonlinear Networks (CNN) based integrated circuit.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: September 25, 2018
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun