Limited To Specially Coded, Human-readable Characters Patents (Class 382/182)
  • Patent number: 11651604
    Abstract: The present invention provides a word recognition method. The method includes: acquiring an image of a word to be recognized; recognizing edges of each character of the word to be recognized from the image of the word to be recognized; determining a geometric position of the word to be recognized; stretching the geometric position of the word to be recognized to a horizontal position; and recognizing the word to be recognized in the horizontal position.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: May 16, 2023
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Guangwei Huang, Yue Li
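    A minimal sketch, in Python with OpenCV, of the deskewing step this abstract describes (estimate the word's oriented bounding box and rotate it to a horizontal position); it is not the patented method, and the angle handling follows a common recipe whose sign convention varies across OpenCV versions.

      # Sketch: rotate a word region to horizontal before recognition (OpenCV, NumPy assumed).
      import cv2
      import numpy as np

      def deskew_word(gray):
          # Binarize so character strokes become foreground pixels.
          _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          coords = np.column_stack(np.where(binary > 0)).astype(np.float32)
          angle = cv2.minAreaRect(coords)[-1]        # angle of the oriented bounding box
          angle = -(90 + angle) if angle < -45 else -angle
          rows, cols = gray.shape
          rotation = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle, 1.0)
          return cv2.warpAffine(gray, rotation, (cols, rows),
                                flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)

      # The horizontally aligned word image would then be handed to the recognizer.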
  • Patent number: 11568382
    Abstract: A system includes a processor and a non-transitory computer readable medium coupled to the processor. The non-transitory computer readable medium includes code, that when executed by the processor, causes the processor to receive input from a user of a user device to generate an optimal payment location on an application display, generate a first boundary of the optimal payment location on the application display of the user device based upon a first motion of a payment enabled card in a first direction and generate a second boundary of the optimal payment location on the application display of the user device based upon a second motion of the payment enabled card in a second direction. The first boundary and the second boundary combine to form defining edges of the optimal payment location.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: January 31, 2023
    Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
    Inventors: Kasey Chiu, Kuen Mee Summers, Whitney Wilson Gonzalez
  • Patent number: 11562122
    Abstract: An information processing apparatus includes a processor configured to extract, from a document, words of plural categories, select one extracted word from each of the plural categories, generate a first character string by arranging the selected words in accordance with a rule, wherein the rule determines positions of the selected words within the first character string based on the categories of the selected words, in response to reception of an operation of changing a first word in the first character string from a user, present to the user one or more candidate words from the category of the first word in the first character string, generate a second character string by replacing the first word in the first character string with a user-selected word selected by the user from among the one or more candidate words, and store the second character string in a memory in association with the document.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: January 24, 2023
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Miyuki Iizuka
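    A toy Python illustration of the string-building idea in this abstract; the categories, the position rule, and the underscore separator are invented for the example, not taken from the patent.

      # Sketch: arrange one selected word per category by a fixed position rule,
      # then rebuild the string after the user swaps a word from that word's category.
      RULE = ("date", "sender", "doc_type")     # assumed rule: category order within the string

      def generate_string(selected):
          return "_".join(selected[category] for category in RULE)

      extracted = {
          "date": ["20200911", "20200915"],
          "sender": ["FUJIFILM", "ACME"],
          "doc_type": ["invoice", "estimate"],
      }
      selected = {cat: words[0] for cat, words in extracted.items()}
      first_string = generate_string(selected)       # "20200911_FUJIFILM_invoice"

      # The user changes the word in the "doc_type" slot: candidates come only from
      # that category, and the chosen replacement yields the second character string.
      selected["doc_type"] = extracted["doc_type"][1]
      second_string = generate_string(selected)      # "20200911_FUJIFILM_estimate"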
  • Patent number: 11551461
    Abstract: A text classifying apparatus (100), an optical character recognition unit (1), a text classifying method (S220) and a program are provided for performing the classification of text. A segmentation unit (110) segments an image into a plurality of lines of text (401-412; 451-457; 501-504; 701-705) (S221). A selection unit (120) selects a line of text from the plurality of lines of text (S222-S223). An identification unit (130) identifies a sequence of classes corresponding to the selected line of text (S224). A recording unit (140) records, for the selected line of text, a global class corresponding to a class of the sequence of classes (S225-S226). A classification unit (150) classifies the image according to the global class, based on a confidence level of the global class (S227-S228).
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: January 10, 2023
    Assignee: I.R.I.S.
    Inventors: Frédéric Collet, Vandana Roy
  • Patent number: 11538238
    Abstract: Systems and methods for classifying at least a portion of an image as being textured or textureless are presented. The system receives an image generated by an image capture device, wherein the image represents one or more objects in a field of view of the image capture device. The system generates one or more bitmaps based on at least one image portion of the image. The one or more bitmaps describe whether one or more visual features for feature detection are present in the at least one image portion, or whether there is variation in intensity across the at least one image portion. The system determines whether to classify the at least one image portion as textured or textureless based on the one or more bitmaps.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: December 27, 2022
    Assignee: Mujin, Inc.
    Inventors: Jinze Yu, Jose Jeronimo Moreira Rodrigues, Ahmed Abouelela
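    One plausible reading of the intensity-variation bitmap, sketched in Python/NumPy; the window size and both thresholds are illustrative assumptions, not Mujin's implementation.

      # Sketch: classify an image portion as textured or textureless from a local-variation bitmap.
      import numpy as np

      def local_std(patch, k=5):
          # Standard deviation of intensity in k x k neighborhoods (valid region only).
          h, w = patch.shape
          windows = np.lib.stride_tricks.sliding_window_view(patch.astype(np.float64), (k, k))
          return windows.reshape(h - k + 1, w - k + 1, -1).std(axis=-1)

      def classify_portion(patch, std_thresh=8.0, coverage_thresh=0.05):
          variation_bitmap = local_std(patch) > std_thresh   # True where intensity varies locally
          coverage = variation_bitmap.mean()                 # fraction of "varying" pixels
          return "textured" if coverage >= coverage_thresh else "textureless"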
  • Patent number: 11531838
    Abstract: Systems and methods for automating image annotations are provided, such that a large-scale annotated image collection may be efficiently generated for use in machine learning applications. In some aspects, a mobile device may capture image frames, identifying items appearing in the image frames and detect objects in three-dimensional space across those image frames. Cropped images may be created as associated with each item, which may then be correlated to the detected objects. A unique identifier may then be captured that is associated with the detected object, and labels are automatically applied to the cropped images based on data associated with that unique identifier. In some contexts, images of products carried by a retailer may be captured, and item data may be associated with such images based on that retailer's item taxonomy, for later classification of other/future products.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: December 20, 2022
    Assignee: Target Brands, Inc.
    Inventors: Ryan Siskind, Matthew Nokleby, Nicholas Eggert, Stephen Radachy, Corey Hadden, Rachel Alderman, Edgar Cobos
  • Patent number: 11532087
    Abstract: The disclosure relates, in part, to computer-based visualization of stent position within a blood vessel. A stent can be visualized using intravascular data and subsequently displayed as stent struts or portions of a stent as part of one or more graphic user interfaces (GUIs). In one embodiment, the method includes steps to distinguish stented region(s) from background noise using an amalgamation of angular stent strut information for a given neighborhood of frames. The GUI can include views of a blood vessel generated using distance measurements and demarcating the actual stented region(s), which provides visualization of the stented region. The disclosure also relates to display of intravascular diagnostic information such as indicators. An indicator can be generated and displayed with images generated using an intravascular data collection system. The indicators can include one or more viewable graphical elements suitable for indicating diagnostic information such as stent information.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: December 20, 2022
    Assignee: LightLab Imaging, Inc.
    Inventors: Sonal Ambwani, Christopher E. Griffin, James G. Peterson, Satish Kaveti, Joel M. Friedman
  • Patent number: 11495019
    Abstract: A method for optical character recognition of text and information on a curved surface, comprising: activating an image capture device; scanning of the surface using the image capture device in order to acquire a plurality of scans of sections of the surface; performing OCR on the plurality of scans; separating the OCRed content into layers for each of the plurality of scans; merging the separated layers into single layers; and merging the single layers into an image.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: November 8, 2022
    Assignee: GIVATAR, INC.
    Inventors: William E. Becorest, Yongkeng Xiao
  • Patent number: 11436816
    Abstract: The information processing device includes a storage section storing a learnt model, a reception section, and a processing section. The learnt model is obtained by machine learning of the relationship between a sectional image obtained by dividing a voucher image and a type of a character string included in the sectional image, based on a data set in which the sectional image is associated with type information indicating the type. The reception section receives an input of the voucher image to be subjected to a recognition process. The processing section generates a sectional image by dividing the voucher image received as an input and determines a type of the generated sectional image based on the learnt model.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: September 6, 2022
    Assignee: Seiko Epson Corporation
    Inventor: Kiyoshi Mizuta
  • Patent number: 11429790
    Abstract: Automated detection of personal information in free text, which includes: automatically applying a named-entity recognition (NER) algorithm to a digital text document, to detect named entities appearing in the digital text document, wherein the named entities are selected from the group consisting of: at least one person-type entity, and at least one non-person-type entity; automatically detecting at least one relation between the named entities, by applying a parts-of-speech (POS) tagging algorithm and a dependency parsing algorithm to sentences of the digital text document which contain the detected named entities; automatically estimating whether the at least one relation between the named entities is indicative of personal information; and automatically issuing a notification of a result of the estimation.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: August 30, 2022
    Assignee: International Business Machines Corporation
    Inventors: Andrey Finkelshtein, Bar Haim, Eitan Menahem
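    A rough sketch of the pipeline using spaCy (a library choice assumed here, with a heuristic relation test rather than the patented estimation); it needs the en_core_web_sm model installed and flags person/non-person entity pairs that share a governing verb.

      # Sketch: flag possible personal information by relating person entities to other entities.
      import spacy

      nlp = spacy.load("en_core_web_sm")   # provides NER, POS tags, and a dependency parse

      PERSON_LABELS = {"PERSON"}
      NON_PERSON_LABELS = {"GPE", "LOC", "ORG", "DATE", "CARDINAL"}

      def governing_verb(entity):
          # Walk up the dependency tree from the entity head until a verb (or the root) is reached.
          token = entity.root
          while token.head is not token and token.pos_ != "VERB":
              token = token.head
          return token if token.pos_ == "VERB" else None

      def personal_info_candidates(text):
          doc = nlp(text)
          findings = []
          for sent in doc.sents:
              persons = [e for e in sent.ents if e.label_ in PERSON_LABELS]
              others = [e for e in sent.ents if e.label_ in NON_PERSON_LABELS]
              for person in persons:
                  for other in others:
                      verb = governing_verb(person)
                      if verb is not None and verb == governing_verb(other):
                          findings.append((person.text, verb.lemma_, other.text))
          return findings

      # e.g. personal_info_candidates("Alice Smith moved to Berlin in 2019.") would typically
      # relate the person to "Berlin" and "2019"; a non-empty result triggers a notification.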
  • Patent number: 11328504
    Abstract: An image-processing device includes: a reliability calculation unit configured to calculate reliability of a character recognition result on a document image which is a character recognition target on the basis of a feature amount of a character string of a specific item included in the document image; and an output destination selection unit configured to select an output destination of the character recognition result in accordance with the reliability.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: May 10, 2022
    Assignee: NEC CORPORATION
    Inventors: Yuichi Nakatani, Katsuhiko Kondoh, Satoshi Segawa, Michiru Sugimoto, Yasushi Hidaka, Junya Akiyama
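    The output-destination selection reduces to a dispatch on a reliability score; the sketch below is a generic Python illustration with assumed thresholds and destination names, and the feature-based reliability calculation itself is left as an input.

      # Sketch: route an OCR result by its reliability; score computation happens upstream.
      from dataclasses import dataclass

      @dataclass
      class OcrResult:
          field_name: str
          text: str
          reliability: float   # 0.0-1.0, derived from features of the recognized item string

      def select_destination(result, auto_threshold=0.9, review_threshold=0.6):
          if result.reliability >= auto_threshold:
              return "downstream_system"      # trust the recognition and pass it through
          if result.reliability >= review_threshold:
              return "manual_review_queue"    # a person confirms or corrects the text
          return "rescan_request"             # too unreliable to keep; ask for a better image

      print(select_destination(OcrResult("total_amount", "1,280", 0.72)))  # manual_review_queue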
  • Patent number: 11308175
    Abstract: While current voice assistants can respond to voice requests, creating smarter assistants that leverage location, past requests, and user data to enhance responses to future requests and to provide robust data about locations is desirable. A method for enhancing a geolocation database (“database”) associates a user-initiated triggering event with a location in a database by sensing user position and orientation within the vehicle and a position and orientation of the vehicle. The triggering event is detected by sensors arranged within a vehicle with respect to the user. The method determines a point of interest (“POI”) near the location based on the user-initiated triggering event. The method, responsive to the user-initiated triggering event, updates the database based on information related to the user-initiated triggering event at an entry of the database associated with the POI. The database and voice assistants can leverage the enhanced data about the POI for future requests.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: April 19, 2022
    Assignee: Cerence Operating Company
    Inventors: Nils Lenke, Mohammad Mehdi Moniri, Reimund Schmald, Daniel Kindermann
  • Patent number: 11275597
    Abstract: Techniques for augmenting data visualizations based on user interactions to enhance user experience are provided. In one aspect, a method for providing real-time recommendations to a user includes: capturing user interactions with a data visualization, wherein the user interactions include images captured as the user interacts with the data visualization; building stacks of the user interactions, wherein the stacks of the user interactions are built from sequences of the user interactions captured over time; generating embeddings for the stacks of the user interactions; finding clusters of embeddings having similar properties; and making the real-time recommendations to the user based on the clusters of embeddings having the similar properties.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: March 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: German H Flores, Eric Kevin Butler, Robert Engel, Aly Megahed, Yuya Jeremy Ong, Nitin Ramchandani
  • Patent number: 11238305
    Abstract: An information processing apparatus includes a processor configured to execute first preprocessing on acquired image data, and execute second preprocessing on a specified partial region of the image data as a target in a case where information for specifying at least one partial region in an image corresponding to the image data is received from post-processing in which the image data after the first preprocessing is processed.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: February 1, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Hiroyoshi Uejo, Yuki Yamanaka
  • Patent number: 11238481
    Abstract: A financial institution can provide a best price guarantee to debit or credit card account holders. By providing a consolidated system including automatic price monitoring of purchased products and automatic claim form generation upon identifying a lower price, the consumer is relieved of the burden typically associated with conventional price matching.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: February 1, 2022
    Assignee: CITICORP CREDIT SERVICES, INC.
    Inventors: Neeraj Sharma, Ateesh Tankha, Anthony Merola, Michael Ying
  • Patent number: 11195315
    Abstract: Near-to-eye displays support a range of applications from helping users with low vision through augmenting a real world view to displaying virtual environments. The images displayed may contain text to be read by the user. It would be beneficial to provide users with text enhancements to improve its readability and legibility, as measured through improved reading speed and/or comprehension. Such enhancements can provide benefits to both visually impaired and non-visually impaired users where legibility may be reduced by external factors as well as by visual dysfunction(s) of the user. Methodologies and system enhancements that augment text to be viewed by an individual, whatever the source of the image, are provided in order to aid the individual in poor viewing conditions and/or to overcome physiological or psychological visual defects affecting the individual or to simply improve the quality of the reading experience for the user.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: December 7, 2021
    Assignee: eSight Corp.
    Inventors: Frank Jones, James Benson Bacque
  • Patent number: 11176576
    Abstract: Techniques for providing remote messages to mobile devices based on image data and other sensor data are discussed herein. Some embodiments may include one or more servers configured to: receive, from a consumer device via a network, location data indicating a consumer device location of a consumer device; receive, from the consumer device via the network, image data captured by a camera of the consumer device; receive, from the consumer device via the network, orientation data defining an orientation of the camera when the image data was captured, wherein the orientation data is captured by an accelerometer of the consumer device; attempt to extract a merchant identifier from the image based on programmatically processing the image data; determine one or more merchants based on a fuzzy search of available ones of the location data, the merchant identifier, and the orientation data.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: November 16, 2021
    Assignee: GROUPON, INC.
    Inventors: Gajaruban Kandavanam, Sarika Oak, Gloria Ye, Chunjun Chen
  • Patent number: 11116454
    Abstract: An imaging device and method are provided which can easily obtain a curve of time-varying changes in pixel value of a region of interest, even if the region of interest moves with a subject's body motion. A controller includes an image processor executing various types of image processing on fluorescence images and visible light images. The image processor includes a pixel value measurement unit which sequentially measures values of pixels at positions corresponding to a region of interest (ROI) in the fluorescence image, a change curve creation unit which creates a curve of time-varying changes in pixel value of the ROI by sampling, among the pixel values measured by the pixel value measurement unit, a minimum pixel value within a period equal to or longer than a cycle of the subject's body motion, and a smoothing unit which smooths the curve created by the change curve creation unit.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: September 14, 2021
    Assignee: Shimadzu Corporation
    Inventor: Akihiro Ishikawa
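    The two signal-processing steps named here (sampling the minimum pixel value over at least one body-motion cycle, then smoothing) have a compact NumPy analogue; the window lengths are assumed values.

      # Sketch: build a motion-robust ROI intensity curve from per-frame measurements.
      import numpy as np

      def motion_robust_curve(roi_values, cycle_len=30, smooth_len=5):
          # roi_values: measured pixel value of the region of interest in each fluorescence frame.
          # cycle_len:  number of frames covering at least one body-motion cycle.
          samples = np.asarray(roi_values, dtype=float)
          windows = np.lib.stride_tricks.sliding_window_view(samples, cycle_len)
          curve = windows.min(axis=-1)                      # minimum pixel value within each cycle
          kernel = np.ones(smooth_len) / smooth_len
          return np.convolve(curve, kernel, mode="valid")   # moving-average smoothing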
  • Patent number: 11100363
    Abstract: A method disclosed herein uses a processor of a server to function as a processing unit to enhance accuracy of character recognition in a terminal connected to the server, using a communication apparatus of the server. The processing unit may be configured to acquire first data indicating a result of character recognition with respect to image data taken by the terminal. The processing unit can determine a character type of a character included in the image data when it is determined that misrecognition is included in the result of character recognition based on the first data. The processing unit controls the communication apparatus to transmit second data according to the character type to the terminal and instructs the terminal to perform character recognition using the second data with respect to the image data in order to improve the accuracy of character recognition.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: August 24, 2021
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventor: Syusaku Takara
  • Patent number: 11093036
    Abstract: A system including: a first sensor module having an inertial measurement unit and attached to an upper arm of a user, the first sensor module generating first motion data identifying an orientation of the upper arm; a second sensor module having an inertial measurement unit and attached to a hand of the user, the second sensor module generating second motion data identifying an orientation of the hand; and a computing device coupled to the first sensor module and the second sensor module through communication links, the computing device calculating, based on the orientation of the upper arm and the orientation of the hand, an orientation of a forearm connected to the hand by a wrist of the user and connected to the upper arm by an elbow joint of the user.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: August 17, 2021
    Assignee: Finch Technologies Ltd.
    Inventors: Viktor Vladimirovich Erivantcev, Rustam Rafikovich Kulchurin, Alexander Sergeevich Lobanov, Iakov Evgenevich Sergeev, Alexey Ivanovich Kartashov
  • Patent number: 11087077
    Abstract: Embodiments are generally directed to techniques for extracting contextually structured data from document images, such as by automatically identifying document layout, document data, and/or document metadata in a document image, for instance. Many embodiments are particularly directed to generating and utilizing a document template database for automatically extracting document image contents into a contextually structured format. For example, the document template database may include a plurality of templates for identifying/explaining key data elements in various document image formats that can be used to extract contextually structured data from incoming document images with a matching document image format. Several embodiments are particularly directed to automatically identifying and associating document metadata with corresponding document data in a document image, such as for generating a machine-facilitated annotation of the document image.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: August 10, 2021
    Assignee: SAS INSTITUTE INC.
    Inventors: David James Wheaton, William Robert Nadolski, Heather Michelle GoodyKoontz
  • Patent number: 11080273
    Abstract: A computer-implemented method, a cognitive intelligence system and computer program product adapt a relational database containing image data types. At least one image token in the relational database is converted to a textual form. Text is produced based on relations of tokens in the relational database. A set of word vectors is produced based on the text. A cognitive intelligence query expressed as a structured query language (SQL) query may be applied to the relational database using the set of word vectors. An image token may be converted to textual form by converting the image to a tag, by using a neural network classification model and replacing the image token with a corresponding cluster identifier, by binary comparison or by a user-specified similarity function. An image token may be converted to a plurality of textual forms using more than one conversion method.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bortik Bandyopadhyay, Rajesh Bordawekar, Tin Kam Ho
  • Patent number: 11080563
    Abstract: A computer-implemented method and system for enrichment of OCR-extracted data are disclosed, comprising accepting a set of extraction criteria and a set of configuration parameters by a data extraction engine. The data extraction engine captures data satisfying the extraction criteria using the configuration parameters and adapts the captured data using a set of domain specific rules and a set of OCR error patterns. A learning engine generates learning data models using the adapted data and the configuration parameters, and the system dynamically updates the extraction criteria using the generated learning data models. The extraction criteria comprise one or more extraction templates, wherein an extraction template includes one of a regular expression, geometric markers, anchor text markers, and a combination thereof.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: August 3, 2021
    Assignee: INFOSYS LIMITED
    Inventors: Shreyas Bettadapura Guruprasad, Radha Krishna Pisipati
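    A small Python illustration of one extraction template of the kind this abstract lists (regular expression plus anchor text) together with an OCR-error substitution pass; the template contents and error patterns are assumptions for the example.

      # Sketch: apply an extraction template to OCR output and adapt the captured data.
      import re

      OCR_ERROR_PATTERNS = {"O": "0", "l": "1", "S": "5"}   # frequent digit confusions

      TEMPLATE = {
          "field": "invoice_total",
          "regex": r"TOTAL[^0-9]{0,10}([0-9OlS][0-9OlS,\.]*)",   # anchor text "TOTAL" + value
      }

      def extract(ocr_text, template=TEMPLATE):
          match = re.search(template["regex"], ocr_text, flags=re.IGNORECASE)
          if not match:
              return None
          # Adapt the captured value using domain rules / OCR error patterns.
          cleaned = "".join(OCR_ERROR_PATTERNS.get(ch, ch) for ch in match.group(1))
          return {template["field"]: cleaned}

      print(extract("... INVOICE TOTAL:  1,2SO.00  DUE ..."))   # {'invoice_total': '1,250.00'}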
  • Patent number: 11042733
    Abstract: An information processing apparatus includes an acquiring unit, a confirming unit, and a controller. The acquiring unit acquires a text recognition result with respect to a first image showing a document and a certainty factor indicating a certainty of the text recognition result. The confirming unit confirms the text recognition result if the certainty factor is above or equal to a threshold value. The controller controls an output of a warning for the text recognition result with respect to the first image in a case where the text recognition result and a text recognition result with respect to a second image showing a relevant document related to the document do not match even when the certainty factor is above or equal to the threshold value.
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: June 22, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Takumi Kitamura
  • Patent number: 11023764
    Abstract: Systems and methods for performing OCR of a series of images depicting text symbols.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: June 1, 2021
    Assignee: ABBYY Production, LLC
    Inventors: Aleksey Ivanovich Kalyuzhny, Aleksey Yevgenyevich Lebedev
  • Patent number: 11003911
    Abstract: Provided is an inspection assistance device. This inspection assistance device is provided with: an image data acquisition unit that acquires image data in which a to-be-inspected object is captured; a display control unit that causes a display unit to display information about inspection results of the to-be-inspected object, recognized on the basis of the acquired image data, in such a manner as to be superimposed on an image that includes the to-be-inspected object; and a recording control unit that records the information being displayed on the display unit and the information about the to-be-inspected object in association with each other.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 11, 2021
    Assignee: NEC CORPORATION
    Inventors: Takami Sato, Kota Iwamoto, Yoshinori Saida, Shin Norieda
  • Patent number: 10997283
    Abstract: A computer-implemented method of providing security for a software container according to an example of the present disclosure includes receiving a software container image having a software application layer that is encrypted and includes a software application, and having a separate security agent layer that includes a security agent. The method includes receiving a request to instantiate the software container image as a software container. The method also includes, based on the request: launching the security agent and utilizing the security agent to decrypt and authenticate the software application layer, and control operation of the software application based on the authentication.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: May 4, 2021
    Assignee: AQUA SECURITY SOFTWARE, LTD.
    Inventors: Amir Gerebe, Rani Osnat
  • Patent number: 10984274
    Abstract: Apparatus and method for detecting hidden encoding of text strings, such as Internet web-domain addresses or email addresses, using optical character recognition (OCR) techniques. In some embodiments, a first set of digital data having a first string of text character codes are converted into an image. Optical character recognition (OCR) is applied to the image to generate a second set of digital data having a second string of text character codes based on detection of the image. The first string of text character codes are compared to the second string of text character codes to detect the presence or absence of hidden codes in the first set of digital data. In some cases, a smoothing function such as Gaussian blurring is applied to degrade the image prior to the application of OCR.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: April 20, 2021
    Assignee: Seagate Technology LLC
    Inventor: John Luis Sosa-Trustham
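    A hedged sketch of the round trip this abstract describes: render the string to an image, degrade it with Gaussian blur, OCR it back, and compare the code points. It assumes Pillow and pytesseract (with a Tesseract install); a real implementation would render with a proper TrueType font rather than Pillow's tiny default.

      # Sketch: detect hidden/confusable characters by rendering text, OCR-ing it, and comparing.
      from PIL import Image, ImageDraw, ImageFilter
      import pytesseract

      def ocr_round_trip(text, blur_radius=1.0):
          image = Image.new("L", (16 * len(text) + 40, 48), color=255)
          ImageDraw.Draw(image).text((10, 10), text, fill=0)          # default font; see note above
          image = image.filter(ImageFilter.GaussianBlur(blur_radius)) # smoothing/degrading step
          return pytesseract.image_to_string(image).strip()

      def contains_hidden_codes(original):
          recognized = ocr_round_trip(original)
          # The glyphs look the same either way; only the underlying code points differ,
          # so a mismatch between the input string and the OCR reading is suspicious.
          return bool(recognized) and recognized.replace(" ", "") != original.replace(" ", "")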
  • Patent number: 10949525
    Abstract: Aspects described herein may allow for generating captcha images using relations among objects. The objects in ground-truth images may be clustered based on the probabilities of co-occurrence. Further aspects described herein may provide for generating a first captcha image comprising a first object and a second object, and generating a second captcha image based on the first captcha image by replacing the first object with a third object. Finally, the first and second captcha images may be presented as security challenges and user access requests may be granted or denied based on responses to the security challenges.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: March 16, 2021
    Assignee: Capital One Services, LLC
    Inventors: Anh Truong, Vincent Pham, Galen Rafferty, Jeremy Goodsitt, Mark Watson, Austin Walters
  • Patent number: 10949660
    Abstract: An improved machine learning system is provided. For example, a content management server may provide real-time analysis of a user's handwriting to assess the user's knowledge of a language, including using a convolution neural network method. The convolution neural network method may be executed to normalize at least some identified strokes in the user's handwritten user input. Normalization may be performed by translating a window comprising a subset of pixels in a digital representation of the handwritten user input amongst a plurality of pixels in the digital representation.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: March 16, 2021
    Assignee: PEARSON EDUCATION, INC.
    Inventor: Zhaodong Wang
  • Patent number: 10949697
    Abstract: An image processing apparatus includes a character recognition section, a translation section, an image processing section, a selection acceptance section, and a control section. The character recognition section performs character recognition processing on image data. The translation section translates an original text obtained through the character recognition processing performed by the character recognition section into a predetermined language and creates a translated text. The image processing section generates a replaced image in which a text portion of an original image shown in the image data is replaced from the original text by the translated text. The selection acceptance section accepts an instruction of selecting, as an output target, either one or both of the original image shown in the image data and the replaced image. The control section performs, in accordance with the accepted instruction, processing of outputting an output target image selected as the output target.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: March 16, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Ariyoshi Hikosaka
  • Patent number: 10943108
    Abstract: An image reader includes a document reading unit, and a control unit that functions as an individual image cutting section, character string detection section, mismatch detection section, judgment section, and correction section. The individual image cutting section cuts out individual images from image data obtained through reading by the document reading unit. The character string detection section detects character strings present on the individual images. The mismatch detection section detects, for the character strings detected by the character string detection section, a mismatching portion by making comparison between the individual images with considering character strings having contents identical or similar to each other as same information. The judgment section judges for the mismatching portions whether a ratio of majority characters reaches a predefined ratio.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: March 9, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Keisaku Matsumae
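    A small illustration of the majority judgment this abstract describes, as a Python sketch; aligning the readings by equal length and the 0.6 ratio are simplifying assumptions.

      # Sketch: majority-vote correction of a character string read from several individual images.
      from collections import Counter

      def vote_correct(readings, ratio=0.6):
          corrected = []
          for chars in zip(*readings):                   # same character position across readings
              char, count = Counter(chars).most_common(1)[0]
              if count / len(chars) >= ratio:            # majority reaches the predefined ratio
                  corrected.append(char)                 # adopt the majority character
              else:
                  corrected.append("?")                  # leave the mismatch unresolved
          return "".join(corrected)

      print(vote_correct(["INV0ICE-123", "INVOICE-123", "INVOICE-I23"]))   # INVOICE-123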
  • Patent number: 10880447
    Abstract: An image processing apparatus includes a size acquisition unit and a decision unit. The size acquisition unit acquires first size information indicating a first sheet size of a first sheet surface read by an image reading unit. The decision unit decides second image information indicating a second image size of a second image to be stored of a second sheet surface different from the first sheet surface based on first image information indicating a first image size of a first image to be stored based on the first size information.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: December 29, 2020
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventor: Fumiyuki Watanabe
  • Patent number: 10853643
    Abstract: The disclosure enables increasing the probability that an image showing the front face of a display object is displayed initially. An image extraction device acquires position information indicating a position of a display object and display object information indicating the display object. The image extraction device extracts a partial image including the acquired display object information from images photographed from at least one spot located within a predetermined distance of a position indicated by the acquired position information. The image extraction device outputs the extracted partial image.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: December 1, 2020
    Assignee: Rakuten, Inc.
    Inventors: Soh Masuko, Naho Kono, Ryosuke Kuroki
  • Patent number: 10846525
    Abstract: The present disclosure relates to the fields of machine learning and image processing, disclosing a method and device for identifying cell regions of a table including cell borders from an image document. A table detecting system rescales a primary image document into a plurality of secondary image documents of different sizes and resolutions to detect a plurality of candidate regions comprising predefined table features in each secondary image document. Further, for each candidate region, a set of connected components is determined, and the connected components corresponding to the IDs that are present in more than one set of the connected components are clustered. Subsequently, areas corresponding to the clusters that are determined to form a table are cropped from the primary image document, and each cell region of the table is identified by modifying pixel values of the clusters of the connected components in the cropped area.
    Type: Grant
    Filed: March 30, 2019
    Date of Patent: November 24, 2020
    Assignee: Wipro Limited
    Inventors: Aniket Anand Gurav, Rupesh Wadibhasme, Swapnil Dnyaneshwar Belhe
  • Patent number: 10796187
    Abstract: The present disclosure relates to detection of texts. A text detecting method includes: acquiring a first image to be detected of a text object to be detected; determining whether the first image to be detected contains a predetermined indicator; determining, if the first image to be detected contains the predetermined indicator, a position of the predetermined indicator, and acquiring a second image to be detected of the text object to be detected; determining whether the second image to be detected contains the predetermined indicator; and determining, if the second image to be detected does not contain the predetermined indicator, a text detecting region based on the position of the predetermined indicator.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: October 6, 2020
    Assignee: NEXTVPU (SHANGHAI) CO., LTD.
    Inventors: Song Mei, Haijiao Cai, Xinpeng Feng, Ji Zhou
  • Patent number: 10783390
    Abstract: A non-transitory computer-readable recording medium recording a character area extraction program for causing a computer to execute a process includes changing a relationship in relative sizes between an image and a scanning window that scans the image; scanning the scanning window based on a changed relationship, specifying a scanning position at which an edge density of an image area included in the scanning window is equal to or larger than a threshold value, extracting one or more areas indicated by the scanning window at the specified scanning position as one or more character area candidates, determining, when overlapped character area candidates included in the one or more character area candidates overlap with each other, a maximum character area candidate having a maximum edge density among the overlapped character area candidates, and extracting the image area included in the maximum character area candidate as a character area.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: September 22, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Kazuya Yonezawa
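    A sketch in the spirit of this abstract using OpenCV: compute an edge map, scan a window over it, keep windows whose edge density clears a threshold, and among overlapping candidates keep the one with the maximum density. Window size, stride, and thresholds are assumed values, and the single fixed scale stands in for the changing image/window relationship.

      # Sketch: extract character-area candidates by edge density under a scanning window.
      import cv2
      import numpy as np

      def candidate_windows(gray, win=(48, 48), stride=16, density_thresh=0.08):
          edges = cv2.Canny(gray, 100, 200)
          (wh, ww), (h, w) = win, edges.shape
          candidates = []
          for y in range(0, h - wh + 1, stride):
              for x in range(0, w - ww + 1, stride):
                  density = np.count_nonzero(edges[y:y + wh, x:x + ww]) / float(wh * ww)
                  if density >= density_thresh:
                      candidates.append((x, y, ww, wh, density))
          return candidates

      def keep_max_density(candidates):
          # Among overlapping candidates, keep only the one with the maximum edge density.
          def overlaps(a, b):
              ax, ay, aw, ah, _ = a
              bx, by, bw, bh, _ = b
              return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
          kept = []
          for cand in sorted(candidates, key=lambda c: c[-1], reverse=True):
              if not any(overlaps(cand, k) for k in kept):
                  kept.append(cand)
          return kept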
  • Patent number: 10762370
    Abstract: In accordance with an embodiment, a magnetic ink character recognition apparatus comprises a magnetic head; a conveyance module configured to relatively convey a medium on which a magnetic ink character is printed with respect to the magnetic head; an acquisition module configured to acquire a magnetic detection signal of the medium read by the magnetic head; an excluding module configured to exclude a predetermined exclusion section including a reading result of an end portion of the medium from a signal section of the magnetic detection signal; and a recognition module configured to recognize the magnetic ink character based on the magnetic detection signal of the remaining signal section except for the exclusion section.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: September 1, 2020
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventors: Antonius Kosasih, Noriyuki Watanabe
  • Patent number: 10679069
    Abstract: Methods and systems for automatic video summary generation are disclosed. A method includes: extracting, by a computing device, a plurality of frames from a video; determining, by the computing device, for each of the plurality of extracted frames, features in the frame; creating, by the computing device, a scene detection model using the determined features for each of the plurality of extracted frames; scoring, by the computing device, each of the plurality of extracted frames using the created scene detection model; and generating, by the computing device, a video summary using the scored plurality of extracted frames.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 9, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Craig M. Trim, Veronica Wyatt, Olav Laudij
  • Patent number: 10657517
    Abstract: Methods and systems for a transportation vehicle are provided. For example, one method includes initializing a transaction mode for using a transaction card having a front portion and a rear portion from a seat device of a transportation vehicle; adjusting lighting from the seat device to capture an image of the transaction card; capturing the image of the transaction card using a camera of the seat device; and processing the image of the transaction card and extracting information from the image of the transaction card.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: May 19, 2020
    Assignee: Panasonic Avionics Corporation
    Inventor: Nigel Blackwell
  • Patent number: 10628934
    Abstract: A device comprising a printer configured to apply a code of printed content on a substrate of a product based on a printer technology type, the code having a plurality of digits. The device includes an optical code detector, executed by one or more processors, to detect the code in a received image of the product printed by the printer by optically recognizing characters in the received image using a trained optical character recognition (OCR) algorithm for the printer technology type. The OCR algorithm is trained to identify each digit of the plurality of digits of the code in a region of interest (ROI) based on at least one product parameter to which the printed content is directly applied and the printer technology type. A system and method are also provided.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: April 21, 2020
    Assignee: VIDEOJET TECHNOLOGIES INC
    Inventor: Robert Weaver
  • Patent number: 10614163
    Abstract: A system, method and computer program product for cognitive copy and paste. The method includes: receiving, at a hardware processor of a computer system, an input representing a selection of a content captured from a source application program, and receiving an input representing an identified target application program that will receive the selected content to be copied and rendered in the target application program. The selected content is analyzed to determine a context for the selected content; and a rendering of the selected content at a location within the destination application based on the determined context, the rendering achieving a best representation of the selected content on the destination application. The analyzing includes invoking a natural language processor to determine an intent, meaning, or an intended use of the selected content based on the determined context, and employs a support vector machine for determining a best format change when rendering.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: April 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Trudy L. Hewitt, Jonathan Dunne, Kelley Anders, Robert Grant
  • Patent number: 10599772
    Abstract: A system, method and computer program product for cognitive copy and paste. The method includes: receiving, at a hardware processor of a computer system, an input representing a selection of a content captured from a source application program, and receiving an input representing an identified target application program that will receive the selected content to be copied and rendered in the target application program. The selected content is analyzed to determine a context for the selected content; and a rendering of the selected content at a location within the destination application based on the determined context, the rendering achieving a best representation of the selected content on the destination application. The analyzing includes invoking a natural language processor to determine an intent, meaning, or an intended use of the selected content based on the determined context, and employs a support vector machine for determining a best format change when rendering.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: March 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Trudy L. Hewitt, Jonathan Dunne, Kelley Anders, Robert Grant
  • Patent number: 10599956
    Abstract: An automatic classifying system in a dining environment includes a picture uploading component implemented in an electronic device for transmitting a set of pictures via the Internet, and a server for directly or indirectly receiving the set of pictures. The server has a picture analysis component for classifying one of the pictures according to at least two classifications and generating an analysis result to a web-platform system so as to display the picture and the analysis result.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: March 24, 2020
    Assignee: Digital Drift Co., Ltd.
    Inventor: Chien-Wei Huang
  • Patent number: 10592736
    Abstract: The invention provides a method for CSI-based fine-grained gesture recognition, wherein the method comprises the following steps: determining a start point, an end point, a velocity, a direction and/or an inflection point of at least one stroke gesture in multiple dimensions according to an eigenvalue of channel state information; dividing the strokes according to the start point, the end point, the velocity, the direction and/or the inflection point of the stroke using a machine learning method and forming a stroke sequence; building a stroke decipherment model according to frequencies of the strokes appearing in natural language rules and/or scientific language rules and/or connection rules between the strokes; and dividing and recognizing the stroke sequence as a letter sequence, a radical sequence, a numeral sequence and/or a pattern sequence conforming to the natural language rules and/or the scientific language rules using the stroke decipherment model.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: March 17, 2020
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Jiang Xiao, Yuxi Wang, Hai Jin, Mingxuan Ni
  • Patent number: 10572728
    Abstract: A text image processing method and a text image processing apparatus are provided. In some embodiments, a text image processing method includes: preprocessing a text image to obtain a binary image, where the binary image includes multiple connected regions; acquiring a convex hull corresponding to each of the connected regions with a convex hull algorithm; acquiring a character region circumscribing the convex hull; performing character segmentation on the acquired character region to obtain multiple character blocks; and merging the character blocks based on heights of the character blocks to obtain word blocks of the text image.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: February 25, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Longsha Zhou, Hongfa Wang
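    A compressed OpenCV sketch of the pipeline in this abstract (binarize, take connected regions, wrap each in a convex hull, box the character regions, and merge neighbors of similar height into word blocks); it assumes OpenCV 4, a single text line, and illustrative gap/height parameters.

      # Sketch: from a text image to word blocks via connected regions and convex hulls.
      import cv2

      def word_blocks(gray, gap=10, height_ratio=0.6):
          _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          # Character regions: bounding boxes circumscribing the convex hull of each region.
          chars = sorted((cv2.boundingRect(cv2.convexHull(c)) for c in contours), key=lambda b: b[0])
          words = []
          for x, y, w, h in chars:
              if words:
                  px, py, pw, ph = words[-1]
                  close = x - (px + pw) <= gap
                  similar = min(h, ph) / float(max(h, ph)) >= height_ratio
                  if close and similar:              # merge the character block into the word
                      nx, ny = min(px, x), min(py, y)
                      words[-1] = (nx, ny, max(px + pw, x + w) - nx, max(py + ph, y + h) - ny)
                      continue
              words.append((x, y, w, h))
          return words   # (x, y, w, h) rectangles of the word blocks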
  • Patent number: 10565680
    Abstract: A computer-implemented method comprises: extracting a setting from a description file of a virtual pan-tilt-zoom (PTZ) camera used to capture an original image through a wide-angle lens; determining a first set of coordinates of a pixel of the original image for each cell of a sparse conversion map represented as a first look-up table, wherein the sparse conversion map corresponds to a sparse grid of pixels of an output image; determining, via interpolating the first set of coordinates, a second set of coordinates of a pixel of the original image for each cell of a full conversion map, wherein the second set of coordinates is represented as a second look-up table, wherein the full conversion map corresponds to a full grid of pixels of the output image; instructing a display to present the output image, wherein the original image is less rectilinear than the output image.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: February 18, 2020
    Assignee: Intelligent Security Systems Corporation
    Inventor: Oleg Vladimirovich Stepanenko
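    A sketch of the two look-up tables in this abstract: a sparse map of source coordinates on a coarse output grid is interpolated up to a full per-pixel map and applied with cv2.remap. The dewarping math that would fill the sparse table from the camera description file is omitted, and bilinear interpolation is an assumed choice.

      # Sketch: expand a sparse conversion map to a full one and apply it (OpenCV, NumPy).
      import cv2
      import numpy as np

      def apply_conversion(original, sparse_map_x, sparse_map_y, out_size):
          # sparse_map_x/y: first look-up table on a coarse grid of output pixels, holding
          # the (x, y) source coordinates in the original wide-angle image.
          out_w, out_h = out_size
          # Second look-up table: interpolate the coarse grid to one entry per output pixel.
          full_map_x = cv2.resize(np.float32(sparse_map_x), (out_w, out_h), interpolation=cv2.INTER_LINEAR)
          full_map_y = cv2.resize(np.float32(sparse_map_y), (out_w, out_h), interpolation=cv2.INTER_LINEAR)
          return cv2.remap(original, full_map_x, full_map_y, interpolation=cv2.INTER_LINEAR)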
  • Patent number: 10552702
    Abstract: Systems and methods for performing OCR of a series of images depicting text symbols.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: February 4, 2020
    Assignee: ABBYY PRODUCTION LLC
    Inventors: Aleksey Ivanovich Kalyuzhny, Aleksey Yevgenyevich Lebedev
  • Patent number: 10496809
    Abstract: Aspects described herein may allow for generating captcha images using relations among objects. The objects in ground-truth images may be clustered based on the probabilities of co-occurrence. Further aspects described herein may provide for generating a first captcha image comprising a first object and a second object, and generating a second captcha image based on the first captcha image by replacing the first object with a third object. Finally, the first and second captcha images may be presented as security challenges and user access requests may be granted or denied based on responses to the security challenges.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: December 3, 2019
    Assignee: Capital One Services, LLC
    Inventors: Vincent Pham, Galen Rafferty, Jeremy Goodsitt, Mark Watson, Austin Walters, Anh Truong
  • Patent number: 10460191
    Abstract: A user device detects, in a field of view of the camera, a first side of a document, and determines first information associated with the first side of the document. The user device selects a first image resolution based on the first information and captures, by the camera, a first image of the first side of the document according to the first image resolution. The user device detects, in the field of view of the camera, a second side of the document, and determines second information associated with the second side of the document. The user device selects a second image resolution based on the second information, and captures, by the camera, a second image of the second side of the document according to the second image resolution. The user device performs an action related to the first image and the second image.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: October 29, 2019
    Assignee: Capital One Services, LLC
    Inventors: Jason Pribble, Daniel Alan Jarvis, Nicholas Capurso