Limited To Specially Coded, Human-readable Characters Patents (Class 382/182)
  • Patent number: 11907306
    Abstract: A system may iteratively scan a portion of a document, extract first data from the portion of the document, and determine, using a trained model, whether the first data corresponds to one or more document types based on one or more confidence thresholds. The system may repeat this process, increasing the portion of the document scanned by a predetermined amount each iteration, until the first data corresponds to the one or more document types based on the one or more confidence thresholds. Responsive to determining the first data corresponds to the one or more document types based on the one or more confidence thresholds, the system may cause a graphical user interface (GUI) of a user device to display a notification indicating a document type match.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: February 20, 2024
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventor: Aaron Attar
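    The abstract above describes an iterative partial-scan loop. A minimal sketch of that loop follows, assuming a hypothetical classify() callable that wraps the trained model; none of these names come from the patent itself.
        def detect_document_type(document, classify, threshold=0.9, step_pct=10):
            """Scan a growing portion of the document until the trained model's
            confidence for some document type clears the threshold."""
            for pct in range(step_pct, 101, step_pct):
                portion = document[: len(document) * pct // 100]   # scanned portion
                doc_type, confidence = classify(portion)           # trained model
                if confidence >= threshold:
                    return doc_type, confidence  # GUI would show a match notification
            return None, 0.0                     # no document type matched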
  • Patent number: 11863995
    Abstract: A wireless access point information generation method, a device, and a computer readable medium are provided. The method includes: extracting candidate character images from an obtained image, wherein the obtained image includes an image indicating a wireless access point; determining a character image in the extracted candidate character images; determining a recognition result of the determined character image by using a character-recognition model, wherein the character-recognition model is used for representing a correspondence between the character image and a character; and generating an access point identifier and a password of the wireless access point according to the determined recognition result. The method provides a manner of generating wireless access point information.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: January 2, 2024
    Assignee: SHANGHAI LIANSHANG NETWORK TECHNOLOGY CO., LTD.
    Inventors: Shengfu Chen, Ting Shan, Chuanqi Liu
  • Patent number: 11823128
    Abstract: Systems and methods for automating image annotations are provided, such that a large-scale annotated image collection may be efficiently generated for use in machine learning applications. In some aspects, a mobile device may capture image frames, identify items appearing in the image frames, and detect objects in three-dimensional space across those image frames. Cropped images may be created for each item and then correlated to the detected objects. A unique identifier associated with the detected object may then be captured, and labels are automatically applied to the cropped images based on data associated with that unique identifier. In some contexts, images of products carried by a retailer may be captured, and item data may be associated with such images based on that retailer's item taxonomy, for later classification of other/future products.
    Type: Grant
    Filed: December 1, 2022
    Date of Patent: November 21, 2023
    Assignee: Target Brands, Inc.
    Inventors: Ryan Siskind, Matthew Nokleby, Nicholas Eggert, Stephen Radachy, Corey Hadden, Rachel Alderman, Edgar Cobos
  • Patent number: 11811979
    Abstract: With one scanning instruction, images of a plurality of document pages are scanned to generate image data. A single folder named with a received character string is determined as the storage destination of the image data corresponding to the plurality of document pages generated with the scanning instruction.
    Type: Grant
    Filed: November 3, 2022
    Date of Patent: November 7, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasunori Shimakawa
  • Patent number: 11790171
    Abstract: A natural language understanding method begins with a radiological report text containing clinical findings. Errors in the text are corrected by analyzing character-level optical transformation costs weighted by a frequency analysis over a corpus corresponding to the report text. For each word within the report text, a word embedding is obtained, character-level embeddings are determined, and the word and character-level embeddings are concatenated and provided to a neural network which generates a plurality of NER tagged spans for the report text. A set of linked relationships is calculated for the NER tagged spans by generating masked text sequences based on the report text and determined pairs of potentially linked NER spans. A dense adjacency matrix is calculated based on attention weights obtained from providing the masked text sequences to a Transformer deep learning network, and graph convolutions are then performed over the calculated dense adjacency matrix.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: October 17, 2023
    Assignee: Covera Health
    Inventors: Ron Vianu, W. Nathaniel Brown, Gregory Allen Dubbin, Daniel Robert Elgort, Benjamin L. Odry, Benjamin Sellman Suutari, Jefferson Chen
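    As a rough illustration of the first step above (character-level transformation costs weighted by corpus frequency), the sketch below corrects an OCR token against a word-frequency table using a small, made-up table of visually confusable characters; it is not the patent's implementation.
        from collections import Counter

        CONFUSABLE = {("l", "1"): 0.2, ("O", "0"): 0.2, ("S", "5"): 0.3}  # illustrative costs

        def sub_cost(a, b):
            if a == b:
                return 0.0
            return CONFUSABLE.get((a, b), CONFUSABLE.get((b, a), 1.0))

        def transform_cost(w1, w2):
            # Dynamic-programming edit distance with optical substitution costs.
            m, n = len(w1), len(w2)
            d = [[float(i + j) if i == 0 or j == 0 else 0.0 for j in range(n + 1)] for i in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                                  d[i - 1][j - 1] + sub_cost(w1[i - 1], w2[j - 1]))
            return d[m][n]

        def correct(token, corpus_counts: Counter):
            # Lowest transformation cost wins; ties broken by corpus frequency.
            return min(corpus_counts, key=lambda w: (transform_cost(token, w), -corpus_counts[w]))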
  • Patent number: 11776248
    Abstract: Systems and methods are configured for correcting the orientation of an image data object subject to optical character recognition (OCR) by receiving an original image data object, generating initial machine readable text for the original image data object via OCR, generating an initial quality score for the initial machine readable text via machine-learning models, determining whether the initial quality score satisfies quality criteria, upon determining that the initial quality score does not satisfy the quality criteria, generating a plurality of rotated image data objects each comprising the original image data object rotated to a different rotational position, generating a rotated machine readable text data object for each of the plurality of rotated image data objects and generating a rotated quality score for each of the plurality of rotated machine readable text data objects, and determining that one of the plurality of rotated quality scores satisfies the quality criteria.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: October 3, 2023
    Assignee: Optum, Inc.
    Inventors: Rahul Bhaskar, Daryl Seiichi Furuyama, Daniel William James
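    The rotate-and-rescore flow above can be summarized with the short sketch below; run_ocr(), quality_score(), and rotate() stand in for the OCR engine, the machine-learning quality model, and an image rotation routine, and are assumptions rather than the patent's interfaces.
        def correct_orientation(image, run_ocr, quality_score, rotate, threshold=0.8):
            text = run_ocr(image)
            score = quality_score(text)
            if score >= threshold:
                return image, text, score            # original orientation already passes
            best = (image, text, score)
            for angle in (90, 180, 270):             # generate rotated image data objects
                rotated = rotate(image, angle)
                rotated_text = run_ocr(rotated)
                rotated_score = quality_score(rotated_text)
                if rotated_score > best[2]:
                    best = (rotated, rotated_text, rotated_score)
            return best                              # highest-scoring rotational position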
  • Patent number: 11763488
    Abstract: Systems and methods for determining a geographic location of an environment from an image including an annotation, on a mobile device without GPS, with no network access, and with no access to peripheral devices or media, are described. Open source data indicative of the earth's surface may be obtained and combined into grids or regions. Elevation data may be used to create skyline models at grid points on the surface. An image of an environment may be obtained from a camera on a mobile device. The user of the mobile device may trace a skyline of the environment depicted in the image. The annotation may be used to create reduced regions for edge detection analysis. The edge detection analysis may detect the skyline. The detected skyline may be compared to the skyline models to determine a most likely location of the user.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: September 19, 2023
    Assignee: Applied Research Associates, Inc.
    Inventors: Dirk B. Warnaar, Douglas J. Totten
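    One way to picture the final matching step above is to treat each skyline as a profile of horizon elevation angles sampled at fixed azimuths and pick the grid point whose model profile is closest to the detected one; the sketch below does exactly that and is only an illustration of the idea, not the patent's matching method.
        import math

        def profile_distance(detected, model):
            # Root-mean-square difference between two skyline elevation profiles.
            return math.sqrt(sum((d - m) ** 2 for d, m in zip(detected, model)) / len(detected))

        def most_likely_location(detected_profile, skyline_models):
            # skyline_models: dict mapping (lat, lon) grid points to model profiles.
            return min(skyline_models,
                       key=lambda point: profile_distance(detected_profile, skyline_models[point]))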
  • Patent number: 11749006
    Abstract: A processor may receive an image and determine a number of foreground pixels in the image. The processor may obtain a result of optical character recognition (OCR) processing performed on the image. The processor may identify at least one bounding box surrounding at least one portion of text in the result and overlay the at least one bounding box on the image to form a masked image. The processor may determine a number of foreground pixels in the masked image and a decrease in the number of foreground pixels in the masked image relative to the number of foreground pixels in the image. Based on the decrease, the processor may modify an aspect of the OCR processing for subsequent image processing.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: September 5, 2023
    Assignee: INTUIT INC.
    Inventors: Sameeksha Khillan, Prajwal Prakash Vasisht
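    The pixel accounting described above reduces to counting foreground pixels before and after the OCR bounding boxes are painted over the image. A small sketch follows, with the image modeled as a 2-D list of 0/1 values and boxes given as (top, left, bottom, right) tuples; both representations are assumptions for illustration.
        def count_foreground(img):
            return sum(sum(row) for row in img)

        def foreground_decrease(img, boxes):
            masked = [row[:] for row in img]
            for top, left, bottom, right in boxes:        # overlay each OCR bounding box
                for y in range(top, bottom):
                    for x in range(left, right):
                        masked[y][x] = 0
            before = count_foreground(img)
            after = count_foreground(masked)
            return (before - after) / before if before else 0.0

        # A small relative decrease means the boxes covered little of the text,
        # which could prompt different OCR settings on the next pass.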
  • Patent number: 11750547
    Abstract: A caption of a multimodal message (e.g., a social media post) can be identified as a named entity using an entity recognition system. The entity recognition system can use an attention-based mechanism that emphasizes or de-emphasizes each data type (e.g., image, word, character) in the multimodal message based on each data type's relevance. The output of the attention mechanism can be used to update a recurrent network to identify one or more words in the caption as being a named entity.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: September 5, 2023
    Assignee: Snap Inc.
    Inventors: Vitor Rocha de Carvalho, Leonardo Ribas Machado das Neves, Seungwhan Moon
  • Patent number: 11710302
    Abstract: A computer-implemented method of performing single pass optical character recognition (OCR) including at least one fully convolutional neural network (FCN) engine including at least one processor and at least one memory, the at least one memory including instructions that, when executed by the at least one processor, cause the FCN engine to perform a plurality of steps. The steps include preprocessing an input image, extracting image features from the input image, determining at least one optical character recognition feature, building word boxes using the at least one optical character recognition feature, determining each character within each word box based on character predictions, and transmitting for display each word box including its predicted corresponding characters.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: July 25, 2023
    Assignee: TRICENTIS GMBH
    Inventors: David Colwell, Michael Keeley
  • Patent number: 11663654
    Abstract: A computer system can implement a network service by receiving, from a computing device of a user, image data comprising an image of a record. The computer system can then execute image processing logic to determine a set of information items from the image. The computer system may then execute augmentation logic to process the record by (i) accessing a transaction database to identify a plurality of transactions made by the user, (ii) identifying a matching transaction from the plurality of transactions that pertains to the record, and (iii) resolving the set of information items using the matching transaction.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: May 30, 2023
    Assignee: Expensify, Inc.
    Inventors: David M. Barrett, Kevin Michael Kuchta
  • Patent number: 11651604
    Abstract: The present invention provides a word recognition method. The method includes: acquiring an image of a word to be recognized; recognizing edges of each character of the word to be recognized from the image of the word to be recognized; determining a geometric position of the word to be recognized; stretching the geometric position of the word to be recognized to a horizontal position; and recognizing the word to be recognized in the horizontal position.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: May 16, 2023
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Guangwei Huang, Yue Li
  • Patent number: 11568382
    Abstract: A system includes a processor and a non-transitory computer readable medium coupled to the processor. The non-transitory computer readable medium includes code, that when executed by the processor, causes the processor to receive input from a user of a user device to generate an optimal payment location on an application display, generate a first boundary of the optimal payment location on the application display of the user device based upon a first motion of a payment enabled card in a first direction and generate a second boundary of the optimal payment location on the application display of the user device based upon a second motion of the payment enabled card in a second direction. The first boundary and the second boundary combine to form defining edges of the optimal payment location.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: January 31, 2023
    Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
    Inventors: Kasey Chiu, Kuen Mee Summers, Whitney Wilson Gonzalez
  • Patent number: 11562122
    Abstract: An information processing apparatus includes a processor configured to extract, from a document, words of plural categories, select one extracted word from each of the plural categories, generate a first character string by arranging the selected words in accordance with a rule, wherein the rule determines positions of the selected words within the first character string based on the categories of the selected words, in response to reception of an operation of changing a first word in the first character string from a user, present to the user one or more candidate words from the category of the first word in the first character string, generate a second character string by replacing the first word in the first character string with a user-selected word selected by the user from among the one or more candidate words, and store the second character string in a memory in association with the document.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: January 24, 2023
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Miyuki Iizuka
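    The naming rule above amounts to picking one word per category and joining the words in a fixed category order; a toy sketch is given below, with the categories and separator invented purely for illustration.
        RULE = ("date", "customer", "document_kind")   # category order within the string

        def build_name(selected, rule=RULE, sep="_"):
            """selected: dict mapping category -> word chosen from the document."""
            return sep.join(selected[category] for category in rule)

        # build_name({"date": "20240115", "customer": "Acme", "document_kind": "invoice"})
        # -> "20240115_Acme_invoice"; swapping one word for a user-selected candidate
        # and rejoining yields the second character string stored with the document.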
  • Patent number: 11551461
    Abstract: A text classifying apparatus (100), an optical character recognition unit (1), a text classifying method (S220) and a program are provided for performing the classification of text. A segmentation unit (110) segments an image into a plurality of lines of text (401-412; 451-457; 501-504; 701-705) (S221). A selection unit (120) selects a line of text from the plurality of lines of text (S222-S223). An identification unit (130) identifies a sequence of classes corresponding to the selected line of text (S224). A recording unit (140) records, for the selected line of text, a global class corresponding to a class of the sequence of classes (S225-S226). A classification unit (150) classifies the image according to the global class, based on a confidence level of the global class (S227-S228).
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: January 10, 2023
    Assignee: I.R.I.S.
    Inventors: Frédéric Collet, Vandana Roy
  • Patent number: 11538238
    Abstract: Systems and methods for classifying at least a portion of an image as being textured or textureless are presented. The system receives an image generated by an image capture device, wherein the image represents one or more objects in a field of view of the image capture device. The system generates one or more bitmaps based on at least one image portion of the image. The one or more bitmaps describe whether one or more features for feature detection are present in the at least one image portion, or describe whether one or more visual features for feature detection are present in the at least one image portion, or describe whether there is variation in intensity across the at least one image portion. The system determines whether to classify the at least one image portion as textured or textureless based on the one or more bitmaps.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: December 27, 2022
    Assignee: Mujin, Inc.
    Inventors: Jinze Yu, Jose Jeronimo Moreira Rodrigues, Ahmed Abouelela
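    One of the cues listed above, variation in intensity across the image portion, can be pictured with the sketch below: mark pixels whose neighbors differ strongly, then call the patch textured if enough of the resulting bitmap is set. The thresholds are invented for illustration.
        def variation_bitmap(patch, delta=10):
            """patch: 2-D list of grayscale values; marks pixels whose right or
            bottom neighbor differs by more than delta."""
            h, w = len(patch), len(patch[0])
            return [[1 if (x + 1 < w and abs(patch[y][x] - patch[y][x + 1]) > delta)
                          or (y + 1 < h and abs(patch[y][x] - patch[y + 1][x]) > delta)
                     else 0
                     for x in range(w)] for y in range(h)]

        def is_textured(patch, min_fraction=0.05):
            bitmap = variation_bitmap(patch)
            marked = sum(sum(row) for row in bitmap)
            return marked / (len(patch) * len(patch[0])) >= min_fraction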
  • Patent number: 11531838
    Abstract: Systems and methods for automating image annotations are provided, such that a large-scale annotated image collection may be efficiently generated for use in machine learning applications. In some aspects, a mobile device may capture image frames, identify items appearing in the image frames, and detect objects in three-dimensional space across those image frames. Cropped images may be created for each item and then correlated to the detected objects. A unique identifier associated with the detected object may then be captured, and labels are automatically applied to the cropped images based on data associated with that unique identifier. In some contexts, images of products carried by a retailer may be captured, and item data may be associated with such images based on that retailer's item taxonomy, for later classification of other/future products.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: December 20, 2022
    Assignee: Target Brands, Inc.
    Inventors: Ryan Siskind, Matthew Nokleby, Nicholas Eggert, Stephen Radachy, Corey Hadden, Rachel Alderman, Edgar Cobos
  • Patent number: 11532087
    Abstract: The disclosure relates, in part, to computer-based visualization of stent position within a blood vessel. A stent can be visualized using intravascular data and subsequently displayed as stent struts or portions of a stent as part of one or more graphical user interfaces (GUIs). In one embodiment, the method includes steps to distinguish stented region(s) from background noise using an amalgamation of angular stent strut information for a given neighborhood of frames. The GUI can include views of a blood vessel generated using distance measurements and demarcating the actual stented region(s), which provides visualization of the stented region. The disclosure also relates to display of intravascular diagnostic information such as indicators. An indicator can be generated and displayed with images generated using an intravascular data collection system. The indicators can include one or more viewable graphical elements suitable for indicating diagnostic information such as stent information.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: December 20, 2022
    Assignee: LightLab Imaging, Inc.
    Inventors: Sonal Ambwani, Christopher E. Griffin, James G. Peterson, Satish Kaveti, Joel M. Friedman
  • Patent number: 11495019
    Abstract: A method for optical character recognition of text and information on a curved surface, comprising: activating an image capture device; scanning of the surface using the image capture device in order to acquire a plurality of scans of sections of the surface; performing OCR on the plurality of scans; separating the OCRed content into layers for each of the plurality of scans; merging the separated layers into single layers; and merging the single layers into an image.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: November 8, 2022
    Assignee: GIVATAR, INC.
    Inventors: William E. Becorest, Yongkeng Xiao
  • Patent number: 11436816
    Abstract: The information processing device includes a storage section storing a learnt model, a reception section, and a processor. The learnt model is obtained by mechanically learning the relationship between a sectional image obtained by dividing a voucher image and a type of a character string included in the sectional image based on a data set in which the sectional image is associated with type information indicating the type. The reception section receives an input of the voucher image to be subjected to a recognition process. The processor generates a sectional image by dividing the voucher image received as an input and determines a type of the generated sectional image based on the learnt model.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: September 6, 2022
    Assignee: Seiko Epson Corporation
    Inventor: Kiyoshi Mizuta
  • Patent number: 11429790
    Abstract: Automated detection of personal information in free text, which includes: automatically applying a named-entity recognition (NER) algorithm to a digital text document, to detect named entities appearing in the digital text document, wherein the named entities are selected from the group consisting of: at least one person-type entity, and at least one non-person-type entity; automatically detecting at least one relation between the named entities, by applying a parts-of-speech (POS) tagging algorithm and a dependency parsing algorithm to sentences of the digital text document which contain the detected named entities; automatically estimating whether the at least one relation between the named entities is indicative of personal information; and automatically issuing a notification of a result of the estimation.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: August 30, 2022
    Assignee: International Business Machines Corporation
    Inventors: Andrey Finkelshtein, Bar Haim, Eitan Menahem
  • Patent number: 11328504
    Abstract: An image-processing device includes: a reliability calculation unit configured to calculate reliability of a character recognition result on a document image which is a character recognition target on the basis of a feature amount of a character string of a specific item included in the document image; and an output destination selection unit configured to select an output destination of the character recognition result in accordance with the reliability.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: May 10, 2022
    Assignee: NEC CORPORATION
    Inventors: Yuichi Nakatani, Katsuhiko Kondoh, Satoshi Segawa, Michiru Sugimoto, Yasushi Hidaka, Junya Akiyama
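    The routing idea above can be reduced to a score computed from features of the recognized string for a specific item, compared against a threshold to pick an output destination; the feature checks and threshold in the sketch below are illustrative assumptions only.
        def reliability(recognized: str, expected_length: int) -> float:
            score = 1.0
            if len(recognized) != expected_length:
                score -= 0.5                      # unexpected length for this item
            if not recognized.isdigit():
                score -= 0.3                      # item assumed to be numeric
            return max(score, 0.0)

        def select_output(recognized, expected_length, threshold=0.7):
            if reliability(recognized, expected_length) >= threshold:
                return "automatic_export"         # high reliability: pass result downstream
            return "manual_review_queue"          # low reliability: route to a human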
  • Patent number: 11308175
    Abstract: While current voice assistants can respond to voice requests, creating smarter assistants that leverage location, past requests, and user data to enhance responses to future requests and to provide robust data about locations is desirable. A method for enhancing a geolocation database (“database”) associates a user-initiated triggering event with a location in a database by sensing user position and orientation within the vehicle and a position and orientation of the vehicle. The triggering event is detected by sensors arranged within a vehicle with respect to the user. The method determines a point of interest (“POI”) near the location based on the user-initiated triggering event. The method, responsive to the user-initiated triggering event, updates the database based on information related to the user-initiated triggering event at an entry of the database associated with the POI. The database and voice assistants can leverage the enhanced data about the POI for future requests.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: April 19, 2022
    Assignee: Cerence Operating Company
    Inventors: Nils Lenke, Mohammad Mehdi Moniri, Reimund Schmald, Daniel Kindermann
  • Patent number: 11275597
    Abstract: Techniques for augmenting data visualizations based on user interactions to enhance user experience are provided. In one aspect, a method for providing real-time recommendations to a user includes: capturing user interactions with a data visualization, wherein the user interactions include images captured as the user interacts with the data visualization; building stacks of the user interactions, wherein the stacks of the user interactions are built from sequences of the user interactions captured over time; generating embeddings for the stacks of the user interactions; finding clusters of embeddings having similar properties; and making the real-time recommendations to the user based on the clusters of embeddings having the similar properties.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: March 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: German H Flores, Eric Kevin Butler, Robert Engel, Aly Megahed, Yuya Jeremy Ong, Nitin Ramchandani
  • Patent number: 11238305
    Abstract: An information processing apparatus includes a processor configured to execute first preprocessing on acquired image data, and execute second preprocessing on a specified partial region of the image data as a target in a case where information for specifying at least one partial region in an image corresponding to the image data is received from post-processing in which the image data after the first preprocessing is processed.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: February 1, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Hiroyoshi Uejo, Yuki Yamanaka
  • Patent number: 11238481
    Abstract: A financial institution can provide a best price guarantee to debit or credit card account holders. By providing a consolidated system including automatic price monitoring of purchased products and automatic claim form generation upon identifying a lower price, the consumer is relieved of the burden typically associated with conventional price matching.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: February 1, 2022
    Assignee: CITICORP CREDIT SERVICES, INC.
    Inventors: Neeraj Sharma, Ateesh Tankha, Anthony Merola, Michael Ying
  • Patent number: 11195315
    Abstract: Near-to-eye displays support a range of applications from helping users with low vision through augmenting a real world view to displaying virtual environments. The images displayed may contain text to be read by the user. It would be beneficial to provide users with text enhancements to improve its readability and legibility, as measured through improved reading speed and/or comprehension. Such enhancements can provide benefits to both visually impaired and non-visually impaired users where legibility may be reduced by external factors as well as by visual dysfunction(s) of the user. Methodologies and system enhancements that augment text to be viewed by an individual, whatever the source of the image, are provided in order to aid the individual in poor viewing conditions and/or to overcome physiological or psychological visual defects affecting the individual or to simply improve the quality of the reading experience for the user.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: December 7, 2021
    Assignee: eSight Corp.
    Inventors: Frank Jones, James Benson Bacque
  • Patent number: 11176576
    Abstract: Techniques for providing remote messages to mobile devices based on image data and other sensor data are discussed herein. Some embodiments may include one or more servers configured to: receive, from a consumer device via a network, location data indicating a consumer device location of a consumer device; receive, from the consumer device via the network, image data captured by a camera of the consumer device; receive, from the consumer device via the network, orientation data defining an orientation of the camera when the image data was captured, wherein the orientation data is captured by an accelerometer of the consumer device; attempt to extract a merchant identifier from the image based on programmatically processing the image data; determine one or more merchants based on a fuzzy search of available ones of the location data, the merchant identifier, and the orientation data.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: November 16, 2021
    Assignee: GROUPON, INC.
    Inventors: Gajaruban Kandavanam, Sarika Oak, Gloria Ye, Chunjun Chen
  • Patent number: 11116454
    Abstract: An imaging device and method are provided which can easily obtain a curve of time-varying changes in pixel value of a region of interest, even if the region of interest moves with a subject's body motion. A controller includes an image processor executing various types of image processing on fluorescence images and visible light images. The image processor includes a pixel value measurement unit which sequentially measures values of pixels at positions corresponding to a region of interest (ROI) in the fluorescence image, a change curve creation unit which creates a curve of time-varying changes in pixel value of the ROI by sampling, among the pixel values measured by the pixel value measurement unit, a minimum pixel value within a period equal to or longer than a cycle of the subject's body motion, and a smoothing unit which smooths the curve created by the change curve creation unit.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: September 14, 2021
    Assignee: Shimadzu Corporation
    Inventor: Akihiro Ishikawa
  • Patent number: 11100363
    Abstract: A method disclosed herein uses a processor of a server to function as a processing unit to enhance accuracy of character recognition in a terminal connected to the server, using a communication apparatus of the server. The processing unit may be configured to acquire first data indicating a result of character recognition with respect to image data taken by the terminal. The processing unit can determine a character type of a character included in the image data when it is determined that misrecognition is included in the result of character recognition based on the first data. The processing unit controls the communication apparatus to transmit second data according to the character type to the terminal and instructs the terminal to perform character recognition using the second data with respect to the image data in order to improve the accuracy of character recognition.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: August 24, 2021
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventor: Syusaku Takara
  • Patent number: 11093036
    Abstract: A system including: a first sensor module having an inertial measurement unit and attached to an upper arm of a user, the first sensor module generating first motion data identifying an orientation of the upper arm; a second sensor module having an inertial measurement unit and attached to a hand of the user, the second sensor module generating second motion data identifying an orientation of the hand; and a computing device coupled to the first sensor module and the second sensor module through communication links, the computing device calculating, based on the orientation of the upper arm and the orientation of the hand, an orientation of a forearm connected to the hand by a wrist of the user and connected to the upper arm by an elbow joint of the user.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: August 17, 2021
    Assignee: Finch Technologies Ltd.
    Inventors: Viktor Vladimirovich Erivantcev, Rustam Rafikovich Kulchurin, Alexander Sergeevich Lobanov, Iakov Evgenevich Sergeev, Alexey Ivanovich Kartashov
  • Patent number: 11087077
    Abstract: Embodiments are generally directed to techniques for extracting contextually structured data from document images, such as by automatically identifying document layout, document data, and/or document metadata in a document image, for instance. Many embodiments are particularly directed to generating and utilizing a document template database for automatically extracting document image contents into a contextually structured format. For example, the document template database may include a plurality of templates for identifying/explaining key data elements in various document image formats that can be used to extract contextually structured data from incoming document images with a matching document image format. Several embodiments are particularly directed to automatically identifying and associating document metadata with corresponding document data in a document image, such as for generating a machine-facilitated annotation of the document image.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: August 10, 2021
    Assignee: SAS INSTITUTE INC.
    Inventors: David James Wheaton, William Robert Nadolski, Heather Michelle GoodyKoontz
  • Patent number: 11080273
    Abstract: A computer-implemented method, a cognitive intelligence system and computer program product adapt a relational database containing image data types. At least one image token in the relational database is converted to a textual form. Text is produced based on relations of tokens in the relational database. A set of word vectors is produced based on the text. A cognitive intelligence query expressed as a structured query language (SQL) query may be applied to the relational database using the set of word vectors. An image token may be converted to textual form by converting the image to a tag, by using a neural network classification model and replacing the image token with a corresponding cluster identifier, by binary comparison or by a user-specified similarity function. An image token may be converted to a plurality of textual forms using more than one conversion method.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bortik Bandyopadhyay, Rajesh Bordawekar, Tin Kam Ho
  • Patent number: 11080563
    Abstract: A computer-implemented method and system for enrichment of OCR-extracted data are disclosed, comprising accepting a set of extraction criteria and a set of configuration parameters by a data extraction engine. The data extraction engine captures data satisfying the extraction criteria using the configuration parameters and adapts the captured data using a set of domain-specific rules and a set of OCR error patterns. A learning engine generates learning data models using the adapted data and the configuration parameters, and the system dynamically updates the extraction criteria using the generated learning data models. The extraction criteria comprise one or more extraction templates, wherein an extraction template includes one of a regular expression, geometric markers, anchor text markers, and a combination thereof.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: August 3, 2021
    Assignee: INFOSYS LIMITED
    Inventors: Shreyas Bettadapura Guruprasad, Radha Krishna Pisipati
  • Patent number: 11042733
    Abstract: An information processing apparatus includes an acquiring unit, a confirming unit, and a controller. The acquiring unit acquires a text recognition result with respect to a first image showing a document and a certainty factor indicating a certainty of the text recognition result. The confirming unit confirms the text recognition result if the certainty factor is above or equal to a threshold value. The controller controls an output of a warning for the text recognition result with respect to the first image in a case where the text recognition result and a text recognition result with respect to a second image showing a relevant document related to the document do not match even when the certainty factor is above or equal to the threshold value.
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: June 22, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Takumi Kitamura
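    The control flow above (confirm when the certainty factor clears a threshold, but still warn when a related document disagrees) is easy to see in a few lines; the names and threshold below are illustrative, not the apparatus's actual interface.
        def review_recognition(result, certainty, related_result, threshold=0.9):
            if certainty < threshold:
                return "needs_confirmation"        # certainty factor below threshold
            if related_result is not None and result != related_result:
                return "confirmed_with_warning"    # certain, but related document differs
            return "confirmed"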
  • Patent number: 11023764
    Abstract: Systems and methods for performing OCR of a series of images depicting text symbols.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: June 1, 2021
    Assignee: ABBYY Production, LLC
    Inventors: Aleksey Ivanovich Kalyuzhny, Aleksey Yevgenyevich Lebedev
  • Patent number: 11003911
    Abstract: Provided is an inspection assistance device. This inspection assistance device is provided with: an image data acquisition unit that acquires image data in which a to-be-inspected object is captured; a display control unit that causes a display unit to display information about inspection results of the to-be-inspected object, recognized on the basis of the acquired image data, in such a manner as to be superimposed on an image that includes the to-be-inspected object; and a recording control unit that records the information being displayed on the display unit and the information about the to-be-inspected object in association with each other.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 11, 2021
    Assignee: NEC CORPORATION
    Inventors: Takami Sato, Kota Iwamoto, Yoshinori Saida, Shin Norieda
  • Patent number: 10997283
    Abstract: A computer-implemented method of providing security for a software container according to an example of the present disclosure includes receiving a software container image having a software application layer that is encrypted and includes a software application, and having a separate security agent layer that includes a security agent. The method includes receiving a request to instantiate the software container image as a software container. The method also includes, based on the request: launching the security agent and utilizing the security agent to decrypt and authenticate the software application layer, and control operation of the software application based on the authentication.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: May 4, 2021
    Assignee: AQUA SECURITY SOFTWARE, LTD.
    Inventors: Amir Gerebe, Rani Osnat
  • Patent number: 10984274
    Abstract: Apparatus and method for detecting hidden encoding of text strings, such as Internet web-domain addresses or email addresses, using optical character recognition (OCR) techniques. In some embodiments, a first set of digital data having a first string of text character codes are converted into an image. Optical character recognition (OCR) is applied to the image to generate a second set of digital data having a second string of text character codes based on detection of the image. The first string of text character codes are compared to the second string of text character codes to detect the presence or absence of hidden codes in the first set of digital data. In some cases, a smoothing function such as Gaussian blurring is applied to degrade the image prior to the application of OCR.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: April 20, 2021
    Assignee: Seagate Technology LLC
    Inventor: John Luis Sosa-Trustham
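    The round-trip check above can be approximated with off-the-shelf tooling: render the suspect string to an image, blur it slightly, OCR it back, and compare character codes. The sketch below uses Pillow and pytesseract as stand-ins; the patent does not name these libraries, and the rendering parameters are arbitrary.
        from PIL import Image, ImageDraw, ImageFilter
        import pytesseract

        def looks_hidden_encoded(text: str, blur_radius: float = 1.0) -> bool:
            img = Image.new("RGB", (16 * max(len(text), 1) + 20, 40), "white")
            ImageDraw.Draw(img).text((10, 10), text, fill="black")    # render the string
            img = img.filter(ImageFilter.GaussianBlur(blur_radius))   # degrade the image
            recovered = pytesseract.image_to_string(img).strip()      # OCR it back
            # Homoglyph substitutions (e.g. Cyrillic look-alikes in a web domain)
            # tend to come back as different character codes than the originals.
            return recovered != text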
  • Patent number: 10949660
    Abstract: An improved machine learning system is provided. For example, a content management server may provide real-time analysis of a user's handwriting to assess the user's knowledge of a language, including using a convolution neural network method. The convolution neural network method may be executed to normalize at least some identified strokes in the user's handwritten user input. Normalization may be performed by translating a window comprising a subset of pixels in a digital representation of the handwritten user input amongst a plurality of pixels in the digital representation.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: March 16, 2021
    Assignee: PEARSON EDUCATION, INC.
    Inventor: Zhaodong Wang
  • Patent number: 10949697
    Abstract: An image processing apparatus includes a character recognition section, a translation section, an image processing section, a selection acceptance section, and a control section. The character recognition section performs character recognition processing on image data. The translation section translates an original text obtained through the character recognition processing performed by the character recognition section into a predetermined language and creates a translated text. The image processing section generates a replaced image in which a text portion of an original image shown in the image data is replaced from the original text by the translated text. The selection acceptance section accepts an instruction of selecting, as an output target, either one or both of the original image shown in the image data and the replaced image. The control section performs, in accordance with the accepted instruction, processing of outputting an output target image selected as the output target.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: March 16, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Ariyoshi Hikosaka
  • Patent number: 10949525
    Abstract: Aspects described herein may allow for generating captcha images using relations among objects. The objects in ground-truth images may be clustered based on the probabilities of co-occurrence. Further aspects described herein may provide for generating a first captcha image comprising a first object and a second object, and generating a second captcha image based on the first captcha image by replacing the first object with a third object. Finally, the first and second captcha images may be presented as security challenges, and user access requests may be granted or denied based on responses to the security challenges.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: March 16, 2021
    Assignee: Capital One Services, LLC
    Inventors: Anh Truong, Vincent Pham, Galen Rafferty, Jeremy Goodsitt, Mark Watson, Austin Walters
  • Patent number: 10943108
    Abstract: An image reader includes a document reading unit, and a control unit that functions as an individual image cutting section, character string detection section, mismatch detection section, judgment section, and correction section. The individual image cutting section cuts out individual images from image data obtained through reading by the document reading unit. The character string detection section detects character strings present on the individual images. The mismatch detection section detects, for the character strings detected by the character string detection section, a mismatching portion by making a comparison between the individual images, considering character strings having contents identical or similar to each other as the same information. The judgment section judges, for the mismatching portions, whether a ratio of majority characters reaches a predefined ratio.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: March 9, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Keisaku Matsumae
  • Patent number: 10880447
    Abstract: An image processing apparatus includes a size acquisition unit and a decision unit. The size acquisition unit acquires first size information indicating a first sheet size of a first sheet surface read by an image reading unit. The decision unit decides second image information indicating a second image size of a second image to be stored of a second sheet surface different from the first sheet surface based on first image information indicating a first image size of a first image to be stored based on the first size information.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: December 29, 2020
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventor: Fumiyuki Watanabe
  • Patent number: 10853643
    Abstract: The probability that an image showing the front face of a display object is initially displayed can be increased. An image extraction device acquires position information indicating a position of a display object and display object information indicating the display object. The image extraction device extracts a partial image including the acquired display object information from images photographed from at least one spot located within a predetermined distance of a position indicated by the acquired position information. The image extraction device outputs the extracted partial image.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: December 1, 2020
    Assignee: Rakuten, Inc.
    Inventors: Soh Masuko, Naho Kono, Ryosuke Kuroki
  • Patent number: 10846525
    Abstract: The present disclosure relates to the field of machine learning and image processing, disclosing a method and device for identifying cell regions of a table including cell borders from an image document. A table detecting system rescales a primary image document into a plurality of secondary image documents of different sizes and resolutions to detect a plurality of candidate regions comprising predefined table features in each secondary image document. Further, for each candidate region, a set of connected components is determined and the connected components corresponding to the IDs that are present in more than one set of the connected components are clustered. Subsequently, areas corresponding to the clusters that are determined to form a table are cropped from the primary image document and each cell region of the table is identified by modifying pixel values of the clusters of the connected components in the cropped area.
    Type: Grant
    Filed: March 30, 2019
    Date of Patent: November 24, 2020
    Assignee: Wipro Limited
    Inventors: Aniket Anand Gurav, Rupesh Wadibhasme, Swapnil Dnyaneshwar Belhe
  • Patent number: 10796187
    Abstract: The present disclosure relates to detection of texts. A text detecting method includes: acquiring a first image to be detected of a text object to be detected; determining whether the first image to be detected contains a predetermined indicator; determining, if the first image to be detected contains the predetermined indicator, a position of the predetermined indicator, and acquiring a second image to be detected of the text object to be detected; determining whether the second image to be detected contains the predetermined indicator; and determining, if the second image to be detected does not contain the predetermined indicator, a text detecting region based on the position of the predetermined indicator.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: October 6, 2020
    Assignee: NEXTVPU (SHANGHAI) CO., LTD.
    Inventors: Song Mei, Haijiao Cai, Xinpeng Feng, Ji Zhou
  • Patent number: 10783390
    Abstract: A non-transitory computer-readable recording medium recording a character area extraction program for causing a computer to execute a process includes changing a relationship in relative sizes between an image and a scanning window that scans the image; scanning the scanning window based on a changed relationship, specifying a scanning position at which an edge density of an image area included in the scanning window is equal to or larger than a threshold value, extracting one or more areas indicated by the scanning window at the specified scanning position as one or more character area candidates, determining, when overlapped character area candidates included in the one or more character area candidates overlap with each other, a maximum character area candidate having a maximum edge density among the overlapped character area candidates, and extracting the image area included in the maximum character area candidate as a character area.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: September 22, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Kazuya Yonezawa
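    The scanning-window idea above is sketched below over a precomputed 0/1 edge map: several window sizes are slid across the image, positions whose edge density clears a threshold become character-area candidates, and overlapping candidates would then be resolved by keeping the one with the maximum density. The window sizes, step, and threshold are invented values.
        def edge_density(edges, top, left, h, w):
            hits = sum(edges[y][x] for y in range(top, top + h) for x in range(left, left + w))
            return hits / (h * w)

        def candidate_character_areas(edges, window_sizes=((16, 16), (32, 32)),
                                      step=8, threshold=0.2):
            rows, cols = len(edges), len(edges[0])
            candidates = []
            for h, w in window_sizes:                         # change relative window size
                for top in range(0, rows - h + 1, step):
                    for left in range(0, cols - w + 1, step):
                        density = edge_density(edges, top, left, h, w)
                        if density >= threshold:
                            candidates.append((density, top, left, h, w))
            return candidates   # keep the max-density candidate among overlapping ones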
  • Patent number: 10762370
    Abstract: In accordance with an embodiment, a magnetic ink character recognition apparatus comprises a magnetic head; a conveyance module configured to relatively convey a medium on which a magnetic ink character is printed with respect to the magnetic head; an acquisition module configured to acquire a magnetic detection signal of the medium read by the magnetic head; an excluding module configured to exclude a predetermined exclusion section including a reading result of an end portion of the medium from a signal section of the magnetic detection signal; and a recognition module configured to recognize the magnetic ink character based on the magnetic detection signal of the remaining signal section except for the exclusion section.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: September 1, 2020
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventors: Antonius Kosasih, Noriyuki Watanabe
  • Patent number: 10679069
    Abstract: Methods and systems for automatic video summary generation are disclosed. A method includes: extracting, by a computing device, a plurality of frames from a video; determining, by the computing device, for each of the plurality of extracted frames, features in the frame; creating, by the computing device, a scene detection model using the determined features for each of the plurality of extracted frames; scoring, by the computing device, each of the plurality of extracted frames using the created scene detection model; and generating, by the computing device, a video summary using the scored plurality of extracted frames.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 9, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Craig M. Trim, Veronica Wyatt, Olav Laudij
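    The summary loop above reduces to sampling frames, scoring each with a scene-detection model, and keeping the top-scoring frames in chronological order; extract_frames() and score_frame() in the sketch below are placeholders for the feature extraction and model the abstract describes.
        def summarize_video(video, extract_frames, score_frame, keep=10):
            frames = extract_frames(video)                         # plurality of frames
            scored = [(score_frame(f), i, f) for i, f in enumerate(frames)]
            top = sorted(scored, key=lambda t: t[0], reverse=True)[:keep]
            top.sort(key=lambda t: t[1])                           # restore time order
            return [frame for _, _, frame in top]                  # the video summary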