Limited To Specially Coded, Human-readable Characters Patents (Class 382/182)
-
Patent number: 12347217
Abstract: An information processing apparatus (10) includes a controller (11) that acquires an image containing a figure and a character string and generates association information indicating an association between the figure and the character string based on a positional relationship between the figure and the character string in the image.
Type: Grant
Filed: April 13, 2021
Date of Patent: July 1, 2025
Assignee: Yokogawa Electric Corporation
Inventors: Kie Shimamura, Tomohiro Kuroda, Yukiyo Akisada, Makoto Niimi
-
Patent number: 12299398
Abstract: Techniques are disclosed for predicting a table column using machine learning. For example, a system can include at least one processing device including a processor coupled to a memory, the processing device being configured to implement the following: determining a local word density for words in a table, the local word density measuring a count of other words in a first region surrounding the words; determining a local numeric density for the words, the local numeric density measuring a proportion of digits in a second region surrounding the words; determining semantic associations for the words by processing the words using an ML-based semantic association model trained based on surrounding words in nearby table columns and rows; and predicting a table column index for the words by processing the table using an ML-based table column model trained based on the local word density, local numeric density, and semantic association.
Type: Grant
Filed: January 27, 2022
Date of Patent: May 13, 2025
Assignee: Dell Products L.P.
Inventors: Romulo Teixeira de Abreu Pinho, Paulo Abelha Ferreira, Vinicius Gottin, Pablo Nascimento Da Silva
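A minimal sketch of the two hand-crafted features named in this abstract, local word density and local numeric density, computed over words laid out on a table grid. The grid representation, window size, and helper names are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    text: str
    row: int
    col: int

def local_word_density(cells, target, window=2):
    """Count other words within a (2*window+1) x (2*window+1) region around target."""
    return sum(
        1 for c in cells
        if c is not target
        and abs(c.row - target.row) <= window
        and abs(c.col - target.col) <= window
    )

def local_numeric_density(cells, target, window=2):
    """Proportion of digit characters among characters in the surrounding region."""
    region = [
        c for c in cells
        if abs(c.row - target.row) <= window and abs(c.col - target.col) <= window
    ]
    chars = "".join(c.text for c in region)
    return sum(ch.isdigit() for ch in chars) / len(chars) if chars else 0.0

cells = [Cell("Qty", 0, 0), Cell("Price", 0, 1), Cell("3", 1, 0), Cell("19.99", 1, 1)]
target = cells[3]
print(local_word_density(cells, target), local_numeric_density(cells, target))
```

In the abstract these two densities, together with the semantic associations, feed the ML-based table column model; the sketch stops at feature computation.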
-
Patent number: 12293522
Abstract: A signal processing method is performed by a computer. The signal processing method includes: obtaining first compressed image data including hyperspectral information and indicating a two-dimensional image in which the hyperspectral information is compressed, the hyperspectral information being luminance information on each of at least four wavelength bands included in a target wavelength range; extracting partial image data from the first compressed image data; and generating first two-dimensional image data corresponding to a first wavelength band and second two-dimensional image data corresponding to a second wavelength band from the partial image data.
Type: Grant
Filed: November 22, 2022
Date of Patent: May 6, 2025
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Motoki Yako, Atsushi Ishikawa
-
Patent number: 12277789
Abstract: A Smart Optical Character Recognition (SOCR) Trainer comprises software developed for automating Quality Control (QC) using unsupervised machine-learning techniques to analyze, classify, and optimize textual data extracted from an image or PDF document. SOCR Trainer serves as a ‘data treatment’ utility service that can be embedded into data processing workflows (e.g., data pipelines, ETL processes, data versioning repositories, etc.). SOCR Trainer performs a series of automated tests on the quality of images and their respective extracted textual data to determine if the extraction is trustworthy. If deficiencies are detected, SOCR Trainer will analyze certain parameters of the document, perform conditional optimizations, re-perform text extraction, and repeat QA testing until the output meets desired specifications. SOCR Trainer will produce audit files recording the provenance and differences between original documents and enhanced optimized document text.
Type: Grant
Filed: July 14, 2022
Date of Patent: April 15, 2025
Assignee: Innovative Computing & Applied Technology LLC
Inventors: Radu Stoicescu, Jesse Osborne
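A rough sketch of the extract-test-optimize-retry loop this abstract describes, assuming Pillow and pytesseract as the imaging and OCR dependencies. The quality_score proxy and the specific image "optimizations" are placeholders, not the SOCR Trainer's actual automated tests.

```python
from PIL import Image, ImageFilter, ImageOps
import pytesseract

def quality_score(text):
    """Toy proxy for extraction quality: share of alphanumeric/whitespace characters."""
    if not text:
        return 0.0
    return sum(ch.isalnum() or ch.isspace() for ch in text) / len(text)

def extract_with_qc(path, threshold=0.9, max_rounds=3):
    image = Image.open(path)
    audit = []                                   # provenance of each attempt
    text = ""
    for round_no in range(max_rounds):
        text = pytesseract.image_to_string(image)
        score = quality_score(text)
        audit.append({"round": round_no, "score": score})
        if score >= threshold:                   # output meets desired specification
            break
        # Conditional "optimizations" before re-extraction: grayscale,
        # autocontrast, sharpen (illustrative choices only).
        image = ImageOps.autocontrast(image.convert("L")).filter(ImageFilter.SHARPEN)
    return text, audit
```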
-
Patent number: 12205390
Abstract: A system for identifying handwritten characters on an image using a classification model that employs a neural network. The system includes a computer having a processor and a memory device that stores data and executable code that, when executed, causes the processor to read and convert typed text on the image to machine encoded text to identify locations of the typed text on the image; identify a location on the image that includes handwritten text based on the location of predetermined typed text on the image; identify clusters of non-white pixels in the image at the location having the handwritten text, where constraints are employed to refine and limit the clusters; generate an individual and separate cluster image for each identified cluster; and classify each cluster image using machine learning and at least one neural network to determine the likelihood that the cluster is a certain character.
Type: Grant
Filed: May 2, 2022
Date of Patent: January 21, 2025
Assignee: TRUIST BANK
Inventor: Raphael Fitzgerald
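A hedged sketch of one way to realize the "clusters of non-white pixels" step described here: binarize the region of interest, label connected components, and cut out one image per component for later classification. NumPy and SciPy are assumed; the threshold and size constraint are illustrative.

```python
import numpy as np
from scipy import ndimage

def cluster_images(region, white_threshold=200, min_pixels=20):
    """region: 2-D grayscale array covering the location with handwritten text."""
    mask = region < white_threshold           # non-white pixels
    labels, count = ndimage.label(mask)       # connected-component clusters
    crops = []
    for lbl in range(1, count + 1):
        ys, xs = np.nonzero(labels == lbl)
        if ys.size < min_pixels:               # constraint to refine/limit clusters
            continue
        crops.append(region[ys.min():ys.max() + 1, xs.min():xs.max() + 1])
    return crops  # each crop would be fed to the neural-network classifier
```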
-
Patent number: 12198452
Abstract: A system for identifying handwritten characters on an image using a classification model that employs a neural network. The system includes a computer having a processor and a memory device that stores data and executable code that, when executed, causes the processor to read and convert typed text on the image to machine encoded text to identify locations of the typed text on the image; identify a location on the image that includes handwritten text based on the location of predetermined typed text on the image; identify clusters of non-white pixels in the image at the location having the handwritten text; generate an individual and separate cluster image for each identified cluster; classify each cluster image using machine learning and at least one neural network to determine the likelihood that the cluster is a certain character; and determine the accuracy of the characters by comparing to a secondary database.
Type: Grant
Filed: May 2, 2022
Date of Patent: January 14, 2025
Assignee: TRUIST BANK
Inventor: Raphael Fitzgerald
-
Patent number: 12182650
Abstract: A reading apparatus includes a synchronizing signal output unit, a first imaging unit, a second imaging unit, an acquisition unit, and a transmission unit. The synchronizing signal output unit outputs a synchronizing signal. The first imaging unit images a read target at an imaging position from a first direction in synchronism with the synchronizing signal. The second imaging unit images the read target at the imaging position from a second direction in synchronism with the synchronizing signal. The acquisition unit acquires a first image of the read target captured by the first imaging unit in synchronism with the synchronizing signal, and a second image of the read target captured by the second imaging unit in synchronism with the synchronizing signal. The transmission unit correlates the first image and the second image, and transmits the correlated first image and second image.
Type: Grant
Filed: June 6, 2023
Date of Patent: December 31, 2024
Assignee: Toshiba Tec Kabushiki Kaisha
Inventor: Takashi Ichikawa
-
Patent number: 12181404
Abstract: An ultraviolet (UV) based imaging method for determining protein concentrations of unknown protein samples based on automated multi-wavelength calibration. In various embodiments, a processor receives each of a standard set of wavelength data and an unknown set of wavelength data as recorded by a detector. Each standard set of wavelength data and unknown set of wavelength data defines a series of absorbance-to-wavelength value pairs across a first range of wavelengths selected from a range of single-wavelength light beams of a UV spectrum. The processor generates a multi-wavelength calibration model based on each of a first series of first absorbance-to-wavelength value pairs of the standard set of wavelength data. The processor implements the multi-wavelength calibration model to determine, for each unknown protein sample of the given unknown protein samples, a plurality of protein concentration values.
Type: Grant
Filed: August 6, 2020
Date of Patent: December 31, 2024
Assignee: AMGEN INC.
Inventors: Shang Zeng, Gang Xue
-
Patent number: 12141200
Abstract: Systems and methods for spatial-textual clustering-based recognition of text in videos are disclosed. A method includes performing textual clustering on a first subset of a set of predictions that correspond to numeric characters only and performing spatial-textual clustering on a second subset of the set of predictions that correspond to alphabetical characters only. The method includes, for each cluster of predictions associated with the first subset of the set of predictions, choosing a first cluster representative to correct any errors in each cluster of predictions associated with the first subset of the set of predictions and outputting any recognized numeric characters. The method includes, for each cluster of predictions associated with the second subset of the set of predictions, choosing a second cluster representative to correct any errors in each cluster of predictions associated with the second subset of the set of predictions and outputting any recognized alphabetical characters.
Type: Grant
Filed: May 27, 2022
Date of Patent: November 12, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yonit Hoffman, Maayan Yedidia, Avner Levi
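An illustrative sketch of the "choose a cluster representative to correct errors" idea: within a cluster of per-frame predictions for the same on-screen text, take the most common string and use it to overwrite outliers. The clustering itself is assumed done upstream, and majority voting is only one plausible way to pick a representative.

```python
from collections import Counter

def correct_cluster(predictions):
    """predictions: list of strings predicted for one clustered text region."""
    representative, _ = Counter(predictions).most_common(1)[0]
    return [representative] * len(predictions)

# A single misread frame ("2O24" with the letter O) is corrected by the cluster.
print(correct_cluster(["INV-2024", "INV-2O24", "INV-2024"]))
```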
-
Patent number: 12136236
Abstract: Systems and methods for determining a geographic location of an environment from an image including an annotation on a mobile device without GPS, with no network access, and with no access to peripheral devices or media are described. Open source data indicative of the earth's surface may be obtained and combined into grids or regions. Elevation data may be used to create skyline models at grid points on the surface. An image of an environment may be obtained from a camera on a mobile device. The user of the mobile device may trace a skyline of the environment depicted in the image. The annotation may be used to create reduced regions for edge detection analysis. The edge detection analysis may detect the skyline. The detected skyline may be compared to the skyline models to determine a most likely location of the user.
Type: Grant
Filed: August 17, 2023
Date of Patent: November 5, 2024
Assignee: Applied Research Associates, Inc.
Inventors: Dirk B. Warnaar, Douglas J. Totten
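A rough sketch, not the patented pipeline, of the final matching step: compare the skyline detected in the photo (represented here as a per-column elevation-angle profile) against precomputed skyline models at candidate grid points and rank candidates by profile distance. The profile format and the squared-error metric are assumptions.

```python
import numpy as np

def best_location(detected_profile, skyline_models):
    """skyline_models: dict mapping (lat, lon) -> 1-D elevation-angle profile."""
    detected = np.asarray(detected_profile, dtype=float)
    scores = {}
    for grid_point, model in skyline_models.items():
        resampled = np.interp(
            np.linspace(0.0, 1.0, detected.size),
            np.linspace(0.0, 1.0, len(model)),
            model,
        )  # resample the model to the detected profile's length
        scores[grid_point] = float(np.mean((detected - resampled) ** 2))
    return min(scores, key=scores.get)  # most likely grid location
```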
-
Patent number: 12135738
Abstract: A method for identifying cyberbullying may include obtaining content from a first electronic device associated with a first user; identifying, using a machine learning model, the cyberbullying of the first user based on the content obtained from the first electronic device; and providing anonymized information related to the cyberbullying to a second electronic device associated with a second user.
Type: Grant
Filed: June 23, 2021
Date of Patent: November 5, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Moiz Kaizar Sonasath, Vinod Cherian Joseph
-
Patent number: 12106591
Abstract: A system and method for identifying handwritten characters on an image using a classification model that employs a neural network. The system includes a computer having a processor and a memory device that stores data and executable code that, when executed, causes the processor to read and convert typed text on the image to machine encoded text to identify locations of the typed text on the image; identify a location on the image that includes handwritten text based on the location of predetermined typed text on the image; identify clusters of non-white pixels in the image at the location having the handwritten text; generate an individual and separate cluster image for each identified cluster; classify each cluster image using machine learning and at least one neural network to determine the likelihood that the cluster is a certain character; and determine what character each cluster image is based on the classification.
Type: Grant
Filed: May 2, 2022
Date of Patent: October 1, 2024
Assignee: TRUIST BANK
Inventor: Raphael Fitzgerald
-
Patent number: 12106539
Abstract: A system includes: an image sensor configured to acquire an image; an image processor configured to generate a quantized image based on the acquired image using a trained quantization filter; and an output interface configured to output the quantized image.
Type: Grant
Filed: September 14, 2020
Date of Patent: October 1, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sangil Jung, Dongwook Lee, Jinwoo Son, Changyong Son, Jaehyoung Yoo, Seohyung Lee, Changin Choi, Jaejoon Han
-
Patent number: 12095789
Abstract: A multi-level, ensemble network monitoring system for detection of suspicious network activity from one or more of a plurality of user computing devices on an external network communicatively connected via a network server to a private communication network is disclosed. In malware detection, the ensemble network monitoring system comprises artificial intelligence (AI) with bidirectional long short-term memory (BDLSTM) recurrent neural networks (RNNs) and natural language processing (NLP) to predict possible security threats and then initiate remedial measures accordingly. Enabling a proactive approach to detection and prevention of potential malicious activity, the BDLSTM RNN may perform real-time monitoring and proactively forecast network security violations to block network communications associated with high-risk user computing devices from accessing a private communication network.
Type: Grant
Filed: August 25, 2021
Date of Patent: September 17, 2024
Assignee: Bank of America Corporation
Inventors: Sujatha Balaji, Ramachandran Periyasamy, Sneha Mehta
-
Patent number: 12080006
Abstract: Systems and methods for classifying at least a portion of an image as being textured or textureless are presented. The system receives an image generated by an image capture device, wherein the image represents one or more objects in a field of view of the image capture device. The system generates one or more bitmaps based on at least one image portion of the image. The one or more bitmaps describe whether one or more features for feature detection are present in the at least one image portion, or describe whether one or more visual features for feature detection are present in the at least one image portion, or describe whether there is variation in intensity across the at least one image portion. The system determines whether to classify the at least one image portion as textured or textureless based on the one or more bitmaps.
Type: Grant
Filed: December 22, 2022
Date of Patent: September 3, 2024
Assignee: MUJIN, INC.
Inventors: Jinze Yu, Jose Jeronimo Moreira Rodrigues, Ahmed Abouelela
-
Patent number: 12073645
Abstract: An information processing apparatus includes a processor configured to perform processing for displaying character information recognized by reading plural forms, in a descending or ascending order of the number of pieces of character information recognized as being identical.
Type: Grant
Filed: May 20, 2021
Date of Patent: August 27, 2024
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Hiroshi Todoroki
-
Patent number: 12014559
Abstract: A processor may receive an image and determine a number of foreground pixels in the image. The processor may obtain a result of optical character recognition (OCR) processing performed on the image. The processor may identify at least one bounding box surrounding at least one portion of text in the result and overlay the at least one bounding box on the image to form a masked image. The processor may determine a number of foreground pixels in the masked image and a decrease in the number of foreground pixels in the masked image relative to the number of foreground pixels in the image. Based on the decrease, the processor may modify an aspect of the OCR processing for subsequent image processing.
Type: Grant
Filed: July 17, 2023
Date of Patent: June 18, 2024
Assignee: INTUIT INC.
Inventors: Sameeksha Khillan, Prajwal Prakash Vasisht
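A minimal sketch of the measurement this abstract describes: count foreground (dark) pixels before and after painting the OCR bounding boxes white, and report the relative decrease as a signal of how much of the foreground the OCR accounted for. The binarization threshold and box format are assumptions.

```python
import numpy as np

def foreground_decrease(gray, boxes, threshold=128):
    """gray: 2-D grayscale array; boxes: list of (left, top, right, bottom)."""
    before = int((gray < threshold).sum())        # foreground pixels in the image
    masked = gray.copy()
    for left, top, right, bottom in boxes:
        masked[top:bottom, left:right] = 255      # overlay each box as background
    after = int((masked < threshold).sum())       # foreground pixels in masked image
    return (before - after) / before if before else 0.0
```

A small decrease would suggest the OCR boxes missed much of the ink on the page, which is the cue the abstract uses to modify subsequent OCR processing.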
-
Patent number: 11907306
Abstract: A system may iteratively scan a portion of a document, extract first data from the portion of the document, and determine, using a trained model, whether the first data corresponds to one or more document types based on one or more confidence thresholds. The system may repeat this process, increasing the portion of the document scanned by a predetermined amount each iteration, until the first data corresponds to the one or more document types based on the one or more confidence thresholds. Responsive to determining the first data corresponds to the one or more document types based on the one or more confidence thresholds, the system may cause a graphical user interface (GUI) of a user device to display a notification indicating a document type match.
Type: Grant
Filed: January 4, 2022
Date of Patent: February 20, 2024
Assignee: CAPITAL ONE SERVICES, LLC
Inventor: Aaron Attar
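A hedged sketch of the iterative flow outlined here: classify a growing portion of the document until one document type clears its confidence threshold. The classify_portion callable stands in for the trained model, and the fixed 10% growth step is an assumption.

```python
def identify_document_type(pages, classify_portion, thresholds, step=0.10):
    """classify_portion(pages, fraction) -> {doc_type: confidence} for that portion."""
    fraction = step
    while fraction <= 1.0:
        scores = classify_portion(pages, fraction)
        for doc_type, confidence in scores.items():
            if confidence >= thresholds.get(doc_type, 1.0):
                return doc_type, fraction          # GUI would show a type-match notice
        fraction = round(fraction + step, 2)       # scan a larger portion next time
    return None, 1.0
```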
-
Patent number: 11863995
Abstract: A wireless access point information generation method, a device, and a computer readable medium are provided. The method includes: extracting candidate character images from an obtained image, wherein the obtained image includes an image indicating a wireless access point; determining a character image in the extracted candidate character images; determining a recognition result of the determined character image by using a character-recognition model, wherein the character-recognition model is used for representing a correspondence between the character image and a character; and generating an access point identifier and a password of the wireless access point according to the determined recognition result. The method provides a manner of generating wireless access point information.
Type: Grant
Filed: December 28, 2020
Date of Patent: January 2, 2024
Assignee: SHANGHAI LIANSHANG NETWORK TECHNOLOGY CO., LTD.
Inventors: Shengfu Chen, Ting Shan, Chuanqi Liu
-
Patent number: 11823128
Abstract: Systems and methods for automating image annotations are provided, such that a large-scale annotated image collection may be efficiently generated for use in machine learning applications. In some aspects, a mobile device may capture image frames, identifying items appearing in the image frames and detecting objects in three-dimensional space across those image frames. Cropped images may be created as associated with each item, which may then be correlated to the detected objects. A unique identifier may then be captured that is associated with the detected object, and labels are automatically applied to the cropped images based on data associated with that unique identifier. In some contexts, images of products carried by a retailer may be captured, and item data may be associated with such images based on that retailer's item taxonomy, for later classification of other/future products.
Type: Grant
Filed: December 1, 2022
Date of Patent: November 21, 2023
Assignee: Target Brands, Inc.
Inventors: Ryan Siskind, Matthew Nokleby, Nicholas Eggert, Stephen Radachy, Corey Hadden, Rachel Alderman, Edgar Cobos
-
Patent number: 11811979
Abstract: Images of a plurality of document pages are scanned to generate image data with one scanning instruction. A single folder named with a received character string is determined as a storage destination of image data corresponding to the plurality of document pages generated with the scanning instruction.
Type: Grant
Filed: November 3, 2022
Date of Patent: November 7, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Yasunori Shimakawa
-
Patent number: 11790171
Abstract: A natural language understanding method begins with a radiological report text containing clinical findings. Errors in the text are corrected by analyzing character-level optical transformation costs weighted by a frequency analysis over a corpus corresponding to the report text. For each word within the report text, a word embedding is obtained, character-level embeddings are determined, and the word and character-level embeddings are concatenated to a neural network which generates a plurality of NER tagged spans for the report text. A set of linked relationships are calculated for the NER tagged spans by generating masked text sequences based on the report text and determined pairs of potentially linked NER spans. A dense adjacency matrix is calculated based on attention weights obtained from providing the one or more masked text sequences to a Transformer deep learning network, and graph convolutions are then performed over the calculated dense adjacency matrix.
Type: Grant
Filed: April 15, 2020
Date of Patent: October 17, 2023
Assignee: Covera Health
Inventors: Ron Vianu, W. Nathaniel Brown, Gregory Allen Dubbin, Daniel Robert Elgort, Benjamin L. Odry, Benjamin Sellman Suutari, Jefferson Chen
-
Patent number: 11776248
Abstract: Systems and methods are configured for correcting the orientation of an image data object subject to optical character recognition (OCR) by receiving an original image data object, generating initial machine readable text for the original image data object via OCR, generating an initial quality score for the initial machine readable text via machine-learning models, determining whether the initial quality score satisfies quality criteria, upon determining that the initial quality score does not satisfy the quality criteria, generating a plurality of rotated image data objects each comprising the original image data object rotated to a different rotational position, generating a rotated machine readable text data object for each of the plurality of rotated image data objects and generating a rotated quality score for each of the plurality of rotated machine readable text data objects, and determining that one of the plurality of rotated quality scores satisfies the quality criteria.
Type: Grant
Filed: October 17, 2022
Date of Patent: October 3, 2023
Assignee: Optum, Inc.
Inventors: Rahul Bhaskar, Daryl Seiichi Furuyama, Daniel William James
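A sketch under stated assumptions of the orientation-correction loop this abstract describes: if the initial OCR quality score fails the criteria, try the other right-angle rotations and keep the rotation whose score is best. The ocr and score callables are placeholders for the OCR engine and the machine-learned quality model; only Pillow is assumed installed.

```python
from PIL import Image

def correct_orientation(path, ocr, score, min_quality=0.8):
    """ocr(image) -> text; score(text) -> quality in [0, 1]."""
    original = Image.open(path)
    text = ocr(original)
    best = (score(text), 0, text)
    if best[0] >= min_quality:                    # initial score already satisfies criteria
        return original, text
    for angle in (90, 180, 270):                  # candidate rotational positions
        rotated = original.rotate(angle, expand=True)
        rotated_text = ocr(rotated)
        best = max(best, (score(rotated_text), angle, rotated_text))
    _, angle, text = best
    return original.rotate(angle, expand=True), text
```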
-
Patent number: 11763488
Abstract: Systems and methods for determining a geographic location of an environment from an image including an annotation on a mobile device without GPS, with no network access, and with no access to peripheral devices or media are described. Open source data indicative of the earth's surface may be obtained and combined into grids or regions. Elevation data may be used to create skyline models at grid points on the surface. An image of an environment may be obtained from a camera on a mobile device. The user of the mobile device may trace a skyline of the environment depicted in the image. The annotation may be used to create reduced regions for edge detection analysis. The edge detection analysis may detect the skyline. The detected skyline may be compared to the skyline models to determine a most likely location of the user.
Type: Grant
Filed: August 30, 2021
Date of Patent: September 19, 2023
Assignee: Applied Research Associates, Inc.
Inventors: Dirk B. Warnaar, Douglas J. Totten
-
Patent number: 11750547
Abstract: A caption of a multimodal message (e.g., social media post) can be identified as a named entity using an entity recognition system. The entity recognition system can use an attention-based mechanism that emphasizes or de-emphasizes each data type (e.g., image, word, character) in the multimodal message based on each data type's relevance. The output of the attention mechanism can be used to update a recurrent network to identify one or more words in the caption as being a named entity.
Type: Grant
Filed: August 27, 2021
Date of Patent: September 5, 2023
Assignee: Snap Inc.
Inventors: Vitor Rocha de Carvalho, Leonardo Ribas Machado das Neves, Seungwhan Moon
-
Patent number: 11749006
Abstract: A processor may receive an image and determine a number of foreground pixels in the image. The processor may obtain a result of optical character recognition (OCR) processing performed on the image. The processor may identify at least one bounding box surrounding at least one portion of text in the result and overlay the at least one bounding box on the image to form a masked image. The processor may determine a number of foreground pixels in the masked image and a decrease in the number of foreground pixels in the masked image relative to the number of foreground pixels in the image. Based on the decrease, the processor may modify an aspect of the OCR processing for subsequent image processing.
Type: Grant
Filed: December 15, 2021
Date of Patent: September 5, 2023
Assignee: INTUIT INC.
Inventors: Sameeksha Khillan, Prajwal Prakash Vasisht
-
Patent number: 11710302
Abstract: A computer implemented method of performing single pass optical character recognition (OCR) including at least one fully convolutional neural network (FCN) engine including at least one processor and at least one memory, the at least one memory including instructions that, when executed by the at least one processor, cause the FCN engine to perform a plurality of steps. The steps include preprocessing an input image, extracting image features from the input image, determining at least one optical character recognition feature, building word boxes using the at least one optical character recognition feature, determining each character within each word box based on character predictions and transmitting for display each word box including its predicted corresponding characters.
Type: Grant
Filed: November 6, 2020
Date of Patent: July 25, 2023
Assignee: TRICENTIS GMBH
Inventors: David Colwell, Michael Keeley
-
Patent number: 11663654
Abstract: A computer system can implement a network service by receiving, from a computing device of a user, image data comprising an image of a record. The computer system can then execute image processing logic to determine a set of information items from the image. The computer system may then execute augmentation logic to process the record by (i) accessing a transaction database to identify a plurality of transactions made by the user, (ii) identifying a matching transaction from the plurality of transactions that pertains to the record, and (iii) resolving the set of information items using the matching transaction.
Type: Grant
Filed: January 2, 2020
Date of Patent: May 30, 2023
Assignee: Expensify, Inc.
Inventors: David M. Barrett, Kevin Michael Kuchta
-
Patent number: 11651604
Abstract: The present invention provides a word recognition method. The method includes: acquiring an image of a word to be recognized; recognizing edges of each character of the word to be recognized from the image of the word to be recognized; determining a geometric position of the word to be recognized; stretching the geometric position of the word to be recognized to a horizontal position; and recognizing the word to be recognized in the horizontal position.
Type: Grant
Filed: March 31, 2020
Date of Patent: May 16, 2023
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Guangwei Huang, Yue Li
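An illustrative sketch (OpenCV assumed) of one common way to "stretch" a slanted word to the horizontal before recognition: detect character edges, fit a minimum-area rectangle, and rotate the crop by the rectangle's angle. This is a generic deskew approach, not necessarily the patented method.

```python
import cv2
import numpy as np

def deskew_word(gray):
    """gray: single-channel uint8 image containing one word."""
    edges = cv2.Canny(gray, 50, 150)                   # edges of each character
    points = cv2.findNonZero(edges)
    (cx, cy), (w, h), angle = cv2.minAreaRect(points)  # geometric position of the word
    if angle > 45:                                     # normalize OpenCV's angle convention
        angle -= 90
    rows, cols = gray.shape
    matrix = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle, 1.0)
    return cv2.warpAffine(gray, matrix, (cols, rows), borderValue=255)
```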
-
Patent number: 11568382
Abstract: A system includes a processor and a non-transitory computer readable medium coupled to the processor. The non-transitory computer readable medium includes code, that when executed by the processor, causes the processor to receive input from a user of a user device to generate an optimal payment location on an application display, generate a first boundary of the optimal payment location on the application display of the user device based upon a first motion of a payment enabled card in a first direction and generate a second boundary of the optimal payment location on the application display of the user device based upon a second motion of the payment enabled card in a second direction. The first boundary and the second boundary combine to form defining edges of the optimal payment location.
Type: Grant
Filed: July 28, 2021
Date of Patent: January 31, 2023
Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
Inventors: Kasey Chiu, Kuen Mee Summers, Whitney Wilson Gonzalez
-
Patent number: 11562122
Abstract: An information processing apparatus includes a processor configured to extract, from a document, words of plural categories, select one extracted word from each of the plural categories, generate a first character string by arranging the selected words in accordance with a rule, wherein the rule determines positions of the selected words within the first character string based on the categories of the selected words, in response to reception of an operation of changing a first word in the first character string from a user, present to the user one or more candidate words from the category of the first portion of the first character string, generate a second character string by replacing the first word in the first character string with a user-selected word selected by the user from among the one or more candidate words, and store the second character string in a memory in association with the document.
Type: Grant
Filed: September 11, 2020
Date of Patent: January 24, 2023
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Miyuki Iizuka
-
Patent number: 11551461
Abstract: A text classifying apparatus (100), an optical character recognition unit (1), a text classifying method (S220) and a program are provided for performing the classification of text. A segmentation unit (110) segments an image into a plurality of lines of text (401-412; 451-457; 501-504; 701-705) (S221). A selection unit (120) selects a line of text from the plurality of lines of text (S222-S223). An identification unit (130) identifies a sequence of classes corresponding to the selected line of text (S224). A recording unit (140) records, for the selected line of text, a global class corresponding to a class of the sequence of classes (S225-S226). A classification unit (150) classifies the image according to the global class, based on a confidence level of the global class (S227-S228).
Type: Grant
Filed: April 10, 2020
Date of Patent: January 10, 2023
Assignee: I.R.I.S.
Inventors: Frédéric Collet, Vandana Roy
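A minimal sketch of the final classification step in the spirit of this abstract: each selected line contributes a global class with a confidence level, and the image is classified only when the winning class is confident enough. The aggregation rule (mean confidence per class) is an assumption for illustration.

```python
from collections import defaultdict

def classify_page(line_results, min_confidence=0.7):
    """line_results: list of (global_class, confidence) pairs, one per selected line."""
    by_class = defaultdict(list)
    for global_class, confidence in line_results:
        by_class[global_class].append(confidence)
    best_class = max(by_class, key=lambda c: sum(by_class[c]) / len(by_class[c]))
    best_conf = sum(by_class[best_class]) / len(by_class[best_class])
    return best_class if best_conf >= min_confidence else "unclassified"

print(classify_page([("invoice", 0.9), ("invoice", 0.8), ("receipt", 0.4)]))
```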
-
Patent number: 11538238
Abstract: Systems and methods for classifying at least a portion of an image as being textured or textureless are presented. The system receives an image generated by an image capture device, wherein the image represents one or more objects in a field of view of the image capture device. The system generates one or more bitmaps based on at least one image portion of the image. The one or more bitmaps describe whether one or more features for feature detection are present in the at least one image portion, or describe whether one or more visual features for feature detection are present in the at least one image portion, or describe whether there is variation in intensity across the at least one image portion. The system determines whether to classify the at least one image portion as textured or textureless based on the one or more bitmaps.
Type: Grant
Filed: August 12, 2020
Date of Patent: December 27, 2022
Assignee: Mujin, Inc.
Inventors: Jinze Yu, Jose Jeronimo Moreira Rodrigues, Ahmed Abouelela
-
Patent number: 11532087
Abstract: The disclosure relates, in part, to computer-based visualization of stent position within a blood vessel. A stent can be visualized using intravascular data and subsequently displayed as stent struts or portions of a stent as part of one or more graphical user interfaces (GUIs). In one embodiment, the method includes steps to distinguish stented region(s) from background noise using an amalgamation of angular stent strut information for a given neighborhood of frames. The GUI can include views of a blood vessel generated using distance measurements and demarcating the actual stented region(s), which provides visualization of the stented region. The disclosure also relates to display of intravascular diagnostic information such as indicators. An indicator can be generated and displayed with images generated using an intravascular data collection system. The indicators can include one or more viewable graphical elements suitable for indicating diagnostic information such as stent information.
Type: Grant
Filed: January 22, 2021
Date of Patent: December 20, 2022
Assignee: LightLab Imaging, Inc.
Inventors: Sonal Ambwani, Christopher E. Griffin, James G. Peterson, Satish Kaveti, Joel M. Friedman
-
Patent number: 11531838
Abstract: Systems and methods for automating image annotations are provided, such that a large-scale annotated image collection may be efficiently generated for use in machine learning applications. In some aspects, a mobile device may capture image frames, identifying items appearing in the image frames and detecting objects in three-dimensional space across those image frames. Cropped images may be created as associated with each item, which may then be correlated to the detected objects. A unique identifier may then be captured that is associated with the detected object, and labels are automatically applied to the cropped images based on data associated with that unique identifier. In some contexts, images of products carried by a retailer may be captured, and item data may be associated with such images based on that retailer's item taxonomy, for later classification of other/future products.
Type: Grant
Filed: November 6, 2020
Date of Patent: December 20, 2022
Assignee: Target Brands, Inc.
Inventors: Ryan Siskind, Matthew Nokleby, Nicholas Eggert, Stephen Radachy, Corey Hadden, Rachel Alderman, Edgar Cobos
-
Patent number: 11495019
Abstract: A method for optical character recognition of text and information on a curved surface, comprising: activating an image capture device; scanning of the surface using the image capture device in order to acquire a plurality of scans of sections of the surface; performing OCR on the plurality of scans; separating the OCRed content into layers for each of the plurality of scans; merging the separated layers into single layers; and merging the single layers into an image.
Type: Grant
Filed: March 29, 2021
Date of Patent: November 8, 2022
Assignee: GIVATAR, INC.
Inventors: William E. Becorest, Yongkeng Xiao
-
Patent number: 11436816
Abstract: An information processing device includes a storage section storing a learnt model, a reception section, and a processor. The learnt model is obtained by machine learning the relationship between a sectional image obtained by dividing a voucher image and a type of a character string included in the sectional image, based on a data set in which the sectional image is associated with type information indicating the type. The reception section receives an input of the voucher image to be subjected to a recognition process. The processing section generates a sectional image by dividing the voucher image received as an input and determines a type of the generated sectional image based on the learnt model.
Type: Grant
Filed: March 4, 2020
Date of Patent: September 6, 2022
Assignee: Seiko Epson Corporation
Inventor: Kiyoshi Mizuta
-
Patent number: 11429790
Abstract: Automated detection of personal information in free text, which includes: automatically applying a named-entity recognition (NER) algorithm to a digital text document, to detect named entities appearing in the digital text document, wherein the named entities are selected from the group consisting of: at least one person-type entity, and at least one non-person-type entity; automatically detecting at least one relation between the named entities, by applying a parts-of-speech (POS) tagging algorithm and a dependency parsing algorithm to sentences of the digital text document which contain the detected named entities; automatically estimating whether the at least one relation between the named entities is indicative of personal information; and automatically issuing a notification of a result of the estimation.
Type: Grant
Filed: September 25, 2019
Date of Patent: August 30, 2022
Assignee: International Business Machines Corporation
Inventors: Andrey Finkelshtein, Bar Haim, Eitan Menahem
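A hedged sketch in the spirit of this abstract: run NER, then use the dependency parse to look for a grammatical link inside a sentence between a person-type entity and a non-person-type entity as a crude relation signal. spaCy and its en_core_web_sm model are assumed installed; the head-sharing heuristic is an illustration, not the patented relation detector.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # provides NER, POS tags, and a dependency parse

def flag_personal_info(text):
    flags = []
    for sent in nlp(text).sents:
        persons = [e for e in sent.ents if e.label_ == "PERSON"]
        others = [e for e in sent.ents if e.label_ != "PERSON"]
        for person in persons:
            for other in others:
                # Crude relation test: both entity heads attach to the same governing token.
                if person.root.head == other.root.head:
                    flags.append((person.text, other.text, sent.text))
    return flags

print(flag_personal_info("John Smith works at Acme Bank in Zurich."))
```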
-
Patent number: 11328504
Abstract: An image-processing device includes: a reliability calculation unit configured to calculate reliability of a character recognition result on a document image which is a character recognition target on the basis of a feature amount of a character string of a specific item included in the document image; and an output destination selection unit configured to select an output destination of the character recognition result in accordance with the reliability.
Type: Grant
Filed: March 26, 2019
Date of Patent: May 10, 2022
Assignee: NEC CORPORATION
Inventors: Yuichi Nakatani, Katsuhiko Kondoh, Satoshi Segawa, Michiru Sugimoto, Yasushi Hidaka, Junya Akiyama
-
Patent number: 11308175
Abstract: While current voice assistants can respond to voice requests, creating smarter assistants that leverage location, past requests, and user data to enhance responses to future requests and to provide robust data about locations is desirable. A method for enhancing a geolocation database (“database”) associates a user-initiated triggering event with a location in a database by sensing user position and orientation within the vehicle and a position and orientation of the vehicle. The triggering event is detected by sensors arranged within a vehicle with respect to the user. The method determines a point of interest (“POI”) near the location based on the user-initiated triggering event. The method, responsive to the user-initiated triggering event, updates the database based on information related to the user-initiated triggering event at an entry of the database associated with the POI. The database and voice assistants can leverage the enhanced data about the POI for future requests.
Type: Grant
Filed: June 27, 2019
Date of Patent: April 19, 2022
Assignee: Cerence Operating Company
Inventors: Nils Lenke, Mohammad Mehdi Moniri, Reimund Schmald, Daniel Kindermann
-
Patent number: 11275597
Abstract: Techniques for augmenting data visualizations based on user interactions to enhance user experience are provided. In one aspect, a method for providing real-time recommendations to a user includes: capturing user interactions with a data visualization, wherein the user interactions include images captured as the user interacts with the data visualization; building stacks of the user interactions, wherein the stacks of the user interactions are built from sequences of the user interactions captured over time; generating embeddings for the stacks of the user interactions; finding clusters of embeddings having similar properties; and making the real-time recommendations to the user based on the clusters of embeddings having the similar properties.
Type: Grant
Filed: January 29, 2021
Date of Patent: March 15, 2022
Assignee: International Business Machines Corporation
Inventors: German H Flores, Eric Kevin Butler, Robert Engel, Aly Megahed, Yuya Jeremy Ong, Nitin Ramchandani
-
Patent number: 11238481
Abstract: A financial institution can provide a best price guarantee to debit or credit card account holders. By providing a consolidated system including automatic price monitoring of purchased products and automatic claim form generation upon identifying a lower price, the consumer is relieved of the burden typically associated with conventional price matching.
Type: Grant
Filed: July 11, 2019
Date of Patent: February 1, 2022
Assignee: CITICORP CREDIT SERVICES, INC.
Inventors: Neeraj Sharma, Ateesh Tankha, Anthony Merola, Michael Ying
-
Patent number: 11238305
Abstract: An information processing apparatus includes a processor configured to execute first preprocessing on acquired image data, and execute second preprocessing on a specified partial region of the image data as a target in a case where information for specifying at least one partial region in an image corresponding to the image data is received from post processing in which the image data after the first preprocessing is processed.
Type: Grant
Filed: June 24, 2020
Date of Patent: February 1, 2022
Assignee: FUJIFILM Business Innovation Corp.
Inventors: Hiroyoshi Uejo, Yuki Yamanaka
-
Patent number: 11195315
Abstract: Near-to-eye displays support a range of applications from helping users with low vision through augmenting a real world view to displaying virtual environments. The images displayed may contain text to be read by the user. It would be beneficial to provide users with text enhancements to improve its readability and legibility, as measured through improved reading speed and/or comprehension. Such enhancements can provide benefits to both visually impaired and non-visually impaired users where legibility may be reduced by external factors as well as by visual dysfunction(s) of the user. Methodologies and system enhancements that augment text to be viewed by an individual, whatever the source of the image, are provided in order to aid the individual in poor viewing conditions and/or to overcome physiological or psychological visual defects affecting the individual or to simply improve the quality of the reading experience for the user.
Type: Grant
Filed: January 22, 2020
Date of Patent: December 7, 2021
Assignee: eSight Corp.
Inventors: Frank Jones, James Benson Bacque
-
Patent number: 11176576
Abstract: Techniques for providing remote messages to mobile devices based on image data and other sensor data are discussed herein. Some embodiments may include one or more servers configured to: receive, from a consumer device via a network, location data indicating a consumer device location of a consumer device; receive, from the consumer device via the network, image data captured by a camera of the consumer device; receive, from the consumer device via the network, orientation data defining an orientation of the camera when the image data was captured, wherein the orientation data is captured by an accelerometer of the consumer device; attempt to extract a merchant identifier from the image based on programmatically processing the image data; determine one or more merchants based on a fuzzy search of available ones of the location data, the merchant identifier, and the orientation data.
Type: Grant
Filed: June 12, 2019
Date of Patent: November 16, 2021
Assignee: GROUPON, INC.
Inventors: Gajaruban Kandavanam, Sarika Oak, Gloria Ye, Chunjun Chen
-
Patent number: 11116454
Abstract: An imaging device and method which can easily obtain a curve of time-varying changes in pixel value of a region of interest, even if the region of interest moves with a subject's body motion. A controller includes an image processor executing various types of image processing on fluorescence images and visible light images. The image processor includes a pixel value measurement unit which sequentially measures values of pixels at positions corresponding to a region of interest (ROI) in the fluorescence image, a change curve creation unit which creates a curve of time-varying changes in pixel value of the ROI by sampling, among the pixel values measured by the pixel value measurement unit, a minimum pixel value within a period equal to or longer than a cycle of the subject's body motion, and a smoothing unit which smooths the curve created by the change curve creation unit.
Type: Grant
Filed: July 19, 2018
Date of Patent: September 14, 2021
Assignee: Shimadzu Corporation
Inventor: Akihiro Ishikawa
-
Patent number: 11100363
Abstract: A method disclosed herein uses a processor of a server to function as a processing unit to enhance accuracy of character recognition in a terminal connected to the server, using a communication apparatus of the server. The processing unit may be configured to acquire first data indicating a result of character recognition with respect to image data taken by the terminal. The processing unit can determine a character type of a character included in the image data when it is determined that misrecognition is included in the result of character recognition based on the first data. The processing unit controls the communication apparatus to transmit second data according to the character type to the terminal and instructs the terminal to perform character recognition using the second data with respect to the image data in order to improve the accuracy of character recognition.
Type: Grant
Filed: September 19, 2019
Date of Patent: August 24, 2021
Assignee: TOSHIBA TEC KABUSHIKI KAISHA
Inventor: Syusaku Takara
-
Patent number: 11093036
Abstract: A system including: a first sensor module having an inertial measurement unit and attached to an upper arm of a user, the first sensor module generating first motion data identifying an orientation of the upper arm; a second sensor module having an inertial measurement unit and attached to a hand of the user, the second sensor module generating second motion data identifying an orientation of the hand; and a computing device coupled to the first sensor module and the second sensor module through communication links, the computing device calculating, based on the orientation of the upper arm and the orientation of the hand, an orientation of a forearm connected to the hand by a wrist of the user and connected to the upper arm by an elbow joint of the user.
Type: Grant
Filed: July 10, 2019
Date of Patent: August 17, 2021
Assignee: Finch Technologies Ltd.
Inventors: Viktor Vladimirovich Erivantcev, Rustam Rafikovich Kulchurin, Alexander Sergeevich Lobanov, Iakov Evgenevich Sergeev, Alexey Ivanovich Kartashov
-
Patent number: 11087077
Abstract: Embodiments are generally directed to techniques for extracting contextually structured data from document images, such as by automatically identifying document layout, document data, and/or document metadata in a document image, for instance. Many embodiments are particularly directed to generating and utilizing a document template database for automatically extracting document image contents into a contextually structured format. For example, the document template database may include a plurality of templates for identifying/explaining key data elements in various document image formats that can be used to extract contextually structured data from incoming document images with a matching document image format. Several embodiments are particularly directed to automatically identifying and associating document metadata with corresponding document data in a document image, such as for generating a machine-facilitated annotation of the document image.
Type: Grant
Filed: November 5, 2020
Date of Patent: August 10, 2021
Assignee: SAS INSTITUTE INC.
Inventors: David James Wheaton, William Robert Nadolski, Heather Michelle GoodyKoontz
-
Patent number: 11080563
Abstract: A computer implemented method and system for enrichment of OCR extracted data is disclosed, comprising accepting a set of extraction criteria and a set of configuration parameters by a data extraction engine. The data extraction engine captures data satisfying an extraction criteria using the configuration parameters and adapts the captured data using a set of domain specific rules and a set of OCR error patterns. A learning engine generates learning data models using the adapted data and the configuration parameters and the system dynamically updates the extraction criteria using the generated learning data models. The extraction criteria comprise one or more extraction templates wherein an extraction template includes one of a regular expression, geometric markers, anchor text markers and a combination thereof.
Type: Grant
Filed: June 19, 2019
Date of Patent: August 3, 2021
Assignee: INFOSYS LIMITED
Inventors: Shreyas Bettadapura Guruprasad, Radha Krishna Pisipati
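A minimal sketch, under assumptions, of the kind of regular-expression extraction template and OCR error pattern table this abstract mentions: capture fields with templates, then "adapt" the captured values using common character confusions. The specific templates and confusion pairs are illustrative only.

```python
import re

OCR_ERROR_PATTERNS = {"O": "0", "l": "1", "S": "5"}   # common OCR confusions

TEMPLATES = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*([A-Z0-9\-]+)", re.I),
    "total": re.compile(r"Total\s*[:\-]?\s*\$?([0-9OlS.,]+)", re.I),
}

def extract(text):
    results = {}
    for field, pattern in TEMPLATES.items():
        match = pattern.search(text)
        if match:
            value = match.group(1)
            if field == "total":                        # adapt captured data using error patterns
                value = "".join(OCR_ERROR_PATTERNS.get(ch, ch) for ch in value)
            results[field] = value
    return results

print(extract("Invoice No: INV-2051  Total: $1,2O5.50"))
```

In the abstract, the adapted data then feeds a learning engine that updates the extraction criteria; the sketch stops at capture and adaptation.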