Patents Examined by Daniel G. Mariam
  • Patent number: 11521372
    Abstract: A device may receive image data that includes an image of a document and lexicon data identifying a lexicon, and may perform an extraction technique on the image data to identify at least one field in the document. The device may utilize form segmentation to automatically generate label data identifying labels for the image data, and may process the image data, the label data, and data identifying the at least one field, with a first model, to identify visual features. The device may process the image data and the visual features, with a second model, to identify sequences of characters, and may process the image data and the sequences of characters, with a third model, to identify strings of characters. The device may compare the lexicon data and the strings of characters to generate verified strings of characters that may be utilized to generate a digitized document.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: December 6, 2022
    Assignee: Accenture Global Solutions Limited
    Inventors: Rajendra Prasad Tanniru, Aditi Kulkarni, Koushik M. Vijayaraghavan, Luke Higgins, Xiwen Sun, Riley Green, Man Lok Ching, Jiayi Chen, Xiaolei Liu, Isabella Phoebe Groenewegen Moore, Reuben Lema
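The final verification step of this pipeline — comparing recognized strings against a lexicon — can be illustrated with a minimal sketch. This is not the patented implementation; the similarity measure (`difflib` ratio), the `threshold`, and the function names are all assumptions for illustration.

```python
# Hypothetical sketch of lexicon verification: each OCR-recognized string
# is snapped to the closest lexicon entry when the match is strong enough.
from difflib import SequenceMatcher

def verify_strings(recognized, lexicon, threshold=0.8):
    """Replace each recognized string with its best lexicon match, if any."""
    verified = []
    for s in recognized:
        best, best_score = s, 0.0
        for entry in lexicon:
            score = SequenceMatcher(None, s.lower(), entry.lower()).ratio()
            if score > best_score:
                best, best_score = entry, score
        # keep the original string when no lexicon entry is close enough
        verified.append(best if best_score >= threshold else s)
    return verified
```

For example, an OCR misread such as "Tota1" would be corrected to the lexicon entry "Total", while a string with no close match passes through unchanged.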
  • Patent number: 11501550
    Abstract: A method, system, and computer program product for segmenting and processing documents for optical character recognition is provided. The method includes receiving a document and detecting different types of text data. The document is divided into a plurality of text regions associated with the different types of said text data. Optical noise is removed from each text region and differing optical character recognition software code is selected for application to each text region. The differing optical character recognition software code is executed with respect to each text region resulting in extractable computer readable text located within each said text region.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Zhong Fang Yuan, Yu Pan, Tong Liu, Yi Chen Zhong, Li Juan Gao, Qiong Wu, Dan Dan Wu
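The core dispatch idea — selecting different OCR code per text region — can be sketched as a simple mapping from text type to engine. The engine functions and region representation below are hypothetical stand-ins, not the patented software.

```python
# Minimal sketch of per-region OCR dispatch, assuming two hypothetical
# engines (one for printed text, one for handwritten text).
def ocr_printed(region):
    return f"printed:{region}"

def ocr_handwritten(region):
    return f"handwritten:{region}"

ENGINES = {"printed": ocr_printed, "handwritten": ocr_handwritten}

def extract_text(regions):
    """regions: list of (text_type, image_region) pairs.

    Each region is routed to the engine registered for its text type.
    """
    return [ENGINES[text_type](region) for text_type, region in regions]
```

A real system would plug actual recognizers into `ENGINES` and add the noise-removal step the abstract describes before dispatch.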
  • Patent number: 11495038
    Abstract: A computer-implemented method for processing a digital image. The digital image comprises one or more text cells, wherein each of the one or more text cells comprises a string and a bounding box. The method comprises receiving the digital image in a first format, the first format providing access to the strings and the bounding boxes of the one or more text cells. The method further comprises encoding the strings of the one or more text cells as a visual pattern according to a predefined string encoding scheme and providing the digital image in a second format. The second format comprises the visual pattern of the strings of the one or more text cells. A corresponding system and a related computer program product are provided.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: November 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Peter Willem Jan Staar, Michele Dolfi, Christoph Auer, Leonidas Georgopoulos, Konstantinos Bekas
  • Patent number: 11487884
    Abstract: Methods and systems that provide data privacy for implementing a neural network-based inference are described. A method includes injecting stochasticity into the data to produce perturbed data, wherein the injected stochasticity satisfies an F-differential privacy criterion and transmitting the perturbed data to a neural network or to a partition of the neural network for inference.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: November 1, 2022
    Assignee: The Regents of the University of California
    Inventors: Fatemehsadat Mireshghallah, Hadi Esmaeilzadeh
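The noise-injection step can be sketched with a simple Gaussian mechanism. The f-differential-privacy calibration of the noise scale is the substance of the patent and is not reproduced here; `sigma` below is an uncalibrated placeholder.

```python
# Hedged sketch: perturb data with Gaussian noise before transmitting it
# to a remote neural network. Choosing sigma to satisfy an f-differential
# privacy criterion is the hard part and is assumed done elsewhere.
import random

def perturb(data, sigma=0.1, seed=None):
    """Return a noisy copy of a list of floats."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in data]
```

Only the perturbed output would be sent for inference; the raw data never leaves the client.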
  • Patent number: 11481489
    Abstract: The present disclosure provides for systems and methods for generating an image of a web resource to detect a modification of the web resource. An exemplary method includes selecting one or more objects of the web resource based on one or more object attributes; identifying a plurality of tokens for each selected object based on contents of the selected object; calculating a hash signature for each selected object of the web resource using the identified plurality of tokens; identifying potentially malicious calls within the identified plurality of tokens; generating an image of the web resource based on the plurality of hash signatures and based on the identified potentially malicious calls, wherein the image of the web resource comprises a vector representation of the contents of the web resource; and detecting whether the web resource is modified based on the image of the web resource.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: October 25, 2022
    Assignee: AO Kaspersky Lab
    Inventors: Vladimir A. Skvortsov, Evgeny B. Kolotinsky
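The per-object hash-signature idea can be sketched as follows. The tokenization and the vector-image construction from the abstract are omitted; this shows only a signature over an object's tokens and a comparison step, with hypothetical function names.

```python
# Sketch: compute an order-insensitive hash signature per web-resource
# object, then detect modification by comparing signature sets.
import hashlib

def object_signature(tokens):
    """Hash an object's tokens into a stable hex signature."""
    h = hashlib.sha256()
    for tok in sorted(tokens):  # sort so token order does not matter
        h.update(tok.encode("utf-8"))
    return h.hexdigest()

def is_modified(old_signatures, new_signatures):
    """A resource is flagged as modified when any signature changed."""
    return old_signatures != new_signatures
```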
  • Patent number: 11481605
    Abstract: There is provided a 2D document extractor for extracting entities from a structured document, the 2D document extractor includes a first convolutional neural network (CNN), a second CNN, and a third recurrent neural network (RNN). A plurality of text sequences and structural elements indicative of location of the text sequences in the document are received. The first CNN encodes the text sequences and structural elements to obtain a 3D encoded image indicative of semantic characteristics of the text sequences and having the structure of the document. The second CNN compresses the 3D encoded image to obtain a feature vector, the feature vector being indicative of a combination of spatial characteristics and semantic characteristics of the 3D encoded image. The third RNN decodes the feature vector to extract the text entities, a given text entity being associated with a text sequence.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: October 25, 2022
    Assignee: ServiceNow Canada Inc.
    Inventors: Olivier Nguyen, Archy De Berker, Eniola Alese, Majid Laali
  • Patent number: 11482029
    Abstract: An image processing device includes: an identifying unit that identifies a plurality of character strings that are candidates for a recording character string among a plurality of character strings acquired by recognizing characters included in a document image; an output unit that outputs a checking screen that represents positions of the plurality of character strings; and a feature quantity extracting unit that extracts a feature quantity of a character string corresponding to a position identified by a user on the checking screen as a feature quantity of the recording character string.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: October 25, 2022
    Assignee: NEC CORPORATION
    Inventors: Katsuhiko Kondoh, Yasushi Hidaka, Satoshi Segawa, Yuichi Nakatani, Michiru Sugimoto, Junya Akiyama
  • Patent number: 11475684
    Abstract: An image may be evaluated by a computer vision system to determine whether it is fit for analysis. The computer vision system may generate an embedding of the image. An embedding quality score (EQS) of the image may be determined based on the image's embedding and a reference embedding associated with a cluster of reference noisy images. The quality of the image may be evaluated based on the EQS of the image to determine whether the quality meets filter criteria. The image may be further processed when the quality is sufficient, or otherwise the image may be removed.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: October 18, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Siqi Deng, Yuanjun Xiong, Wei Li, Shuo Yang, Wei Xia, Meng Wang
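A minimal reading of the embedding quality score is a distance between the image's embedding and the noisy-cluster reference embedding, filtered against a threshold. The distance metric and threshold below are assumptions, not the patented scoring function.

```python
# Sketch: score an image embedding by its distance from a reference
# embedding of a cluster of noisy images; farther = likely cleaner.
import math

def embedding_quality_score(embedding, noisy_reference):
    """Euclidean distance from the noisy-image cluster reference."""
    return math.dist(embedding, noisy_reference)

def passes_filter(embedding, noisy_reference, threshold=1.0):
    """True when the image is far enough from the noisy cluster to keep."""
    return embedding_quality_score(embedding, noisy_reference) >= threshold
```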
  • Patent number: 11475358
    Abstract: Techniques are provided for enhancing the efficiency and accuracy of annotating data samples for supervised machine learning algorithms using an advanced annotation pipeline. According to an embodiment, a method can comprise collecting, by a system comprising a processor, unannotated data samples for input to a machine learning model and storing the unannotated data samples in an annotation queue. The method further comprises determining, by the system, annotation priority levels for respective unannotated data samples of the unannotated data samples, and selecting, by the system from amongst different annotation techniques, one or more of the different annotation techniques for annotating the respective unannotated data samples based on the annotation priority levels associated with the respective unannotated data samples.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: October 18, 2022
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Marc T. Edgar, Travis R. Frosch, Gopal B. Avinash, Garry M. Whitley
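The priority-ordered annotation queue can be sketched with a heap. The priority function (here, a hypothetical per-sample uncertainty) and the queue API are illustrative assumptions.

```python
# Sketch of an annotation queue that serves samples in priority order,
# assuming priority(sample) returns a number (higher = annotate sooner).
import heapq

def make_queue(samples, priority):
    """Build a max-priority queue; the index breaks ties without
    comparing the sample objects themselves."""
    q = [(-priority(s), i, s) for i, s in enumerate(samples)]
    heapq.heapify(q)
    return q

def next_sample(q):
    """Pop the highest-priority sample for annotation."""
    return heapq.heappop(q)[2]
```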
  • Patent number: 11475280
    Abstract: A system includes a computing platform having a hardware processor and a memory storing a software code and a neural network (NN) having multiple layers including a last activation layer and a loss layer. The hardware processor executes the software code to identify different combinations of layers for testing the NN, each combination including candidate function(s) for the last activation layer and candidate function(s) for the loss layer. For each different combination, the software code configures the NN based on the combination, inputs, into the configured NN, a training dataset including multiple data objects, receives, from the configured NN, a classification of the data objects, and generates a performance assessment for the combination based on the classification. The software code determines a preferred combination of layers for the NN including selected candidate functions for the last activation layer and the loss layer, based on a comparison of the performance assessments.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: October 18, 2022
    Assignees: Disney Enterprises, Inc., ETH Zurich
    Inventors: Hayko Jochen Wilhelm Riemenschneider, Leonhard Markus Helminger, Christopher Richard Schroers, Abdelaziz Djelouah
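The combination search over candidate last-activation and loss functions amounts to evaluating every pairing and keeping the best. A minimal sketch, assuming a caller-supplied `evaluate` callback that trains/scores the network for one combination:

```python
# Sketch: exhaustively test (activation, loss) combinations and return
# the one with the best performance assessment.
from itertools import product

def best_combination(activations, losses, evaluate):
    """evaluate(activation, loss) -> score; higher is better."""
    return max(product(activations, losses),
               key=lambda combo: evaluate(*combo))
```

In the patented system, `evaluate` would correspond to configuring the network, running the training dataset through it, and scoring the resulting classification.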
  • Patent number: 11475687
    Abstract: An information processing system includes a text acquisition unit that acquires text on the basis of a first user's operation, a reception unit that receives a keyword in response to a second user's operation; and a contact support unit that receives a contact from the second user having performed an operation on the keyword to the first user having performed an operation on the text, and notifies the first user of the contact, in a case where the text acquired by the text acquisition unit and the keyword received by the reception unit satisfy a predefined condition.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: October 18, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Soushi Noguchi
  • Patent number: 11468658
    Abstract: This disclosure involves automatically generating a typographical image using an image and a text document. Aspects of the present disclosure include detecting a region of interest from the image and generating an object template from the detected region of interest. The object template defines the areas of the image, in which words of the text document are inserted. A text rendering protocol is executed to iteratively insert the words of the text document into the available locations of the object template. The typographical image is generated by rendering each word of the text document onto the available location assigned to the word.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: October 11, 2022
    Assignee: Adobe Inc.
    Inventor: Ionut Mironica
  • Patent number: 11450127
    Abstract: An electronic apparatus including a display is disclosed. The electronic apparatus, based on text information including a plurality of words being input via an input unit, obtains an input claim based on the text information, determines novelty of the input claim by inputting the input claim to a first neural network model, and, based on the input claim being novel, determines inventiveness of the input claim by inputting the input claim to a second neural network model.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: September 20, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anant Baijal, Changkun Park, Jeongrok Jang, Jaehwang Lee
  • Patent number: 11443505
    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: September 13, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Mihir Narendra Mody, Manu Mathew, Chaitanya Satish Ghone
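The transform-multiply-sum-invert flow described here is the standard frequency-domain formulation of convolution. A minimal sketch using NumPy (the patented hardware OTF-FT path is not reproduced; kernels are assumed pre-padded to the feature size, giving circular convolution):

```python
# Sketch of frequency-domain feature computation: FFT each input feature,
# FFT each kernel, multiply, sum over input channels, then inverse-FFT.
import numpy as np

def fft_conv2d(inputs, kernels):
    """inputs: (C_in, H, W); kernels: (C_out, C_in, H, W), zero-padded
    to H x W. Returns (C_out, H, W) output features."""
    Fi = np.fft.fft2(inputs)    # Fourier transform of each input feature
    Fk = np.fft.fft2(kernels)   # "on-the-fly" transform of each kernel
    # element-wise products, then one sum per output feature
    out = np.fft.ifft2((Fi[None] * Fk).sum(axis=1))
    return out.real             # 2-D inverse transforms, real part
```

With a delta kernel (a single 1 at the origin), the output reproduces the input feature, which is a quick sanity check of the transform chain.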
  • Patent number: 11436851
    Abstract: Image data having text associated with a plurality of text-field types is received, the image data including target image data and context image data. The target image data includes target text associated with a text-field type, and the context image data provides a context for the target image data. A trained neural network that is constrained to a set of characters for the text-field type is applied to the image data. The trained neural network identifies the target text of the text-field type using a vector embedding that is based on learned patterns for recognizing the context provided by the context image data. One or more predicted characters are provided for the target text of the text-field type in response to identifying the target text using the trained neural network.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: September 6, 2022
    Assignee: Bill.com, LLC
    Inventor: Eitan Anzenberg
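The character-set constraint can be sketched as a post-filter over candidate predictions: only candidates drawn entirely from the field type's allowed characters survive. The scoring interface below is a hypothetical stand-in for the network's output, not the patented constrained decoder.

```python
# Sketch: keep only candidate strings whose characters all belong to the
# field type's allowed set, then return the highest-scoring survivor.
def best_constrained(candidates, allowed):
    """candidates: list of (text, score) pairs; allowed: permitted chars."""
    allowed_set = set(allowed)
    ok = [(t, s) for t, s in candidates if set(t) <= allowed_set]
    return max(ok, key=lambda pair: pair[1])[0] if ok else None
```

For an amount field constrained to digits, a high-scoring but malformed candidate like "12a4" is rejected in favor of a valid one.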
  • Patent number: 11436862
    Abstract: A privacy protecting capturing module including a capture device, a memory unit storing at least part of an image captured by the capture device, an interface for receiving commands and transmitting information, and a processor executing operations including: receiving a first image captured by the capture device; analyzing the first image and determining whether the first image meets a condition; subject to the condition being met, transmitting information related to the first image through the interface; receiving a second image captured by the capture device consequent to the first image; analyzing the second image and determining whether the second image meets a second condition; and, subject to the second condition being met, prohibiting transmission of further information through the interface, where all accesses to the privacy protecting capturing module are through the interface, and no direct access is enabled to the capture device or to the memory unit.
    Type: Grant
    Filed: August 18, 2021
    Date of Patent: September 6, 2022
    Assignee: EMZA VISUAL SENSE LTD.
    Inventor: Tomer Kimhi
  • Patent number: 11423760
    Abstract: The present invention relates to a device (2) for detecting drowning individuals or individuals in a situation presenting a risk of drowning, comprising at least one program of codes that are executable on one or more processing hardware components such as a microprocessor, the program being stored in memory in at least one readable medium and implementing an artificial neural network (20) having an automatic learning architecture composed of several layers, the artificial neural network (20) being pre-trained on image data from at least one standard non-specific database, the program being characterized in that the neural network is further trained a second time by learning transfer on image data from videos of simulated or real drowning situations or situations presenting a risk of drowning, the trained program being configured by this learning transfer to identify, preferably in real time, drowning situations or situations presenting a risk of drowning based on new image data provided.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: August 23, 2022
    Assignee: BULL SAS
    Inventors: Clémentine Nemo, Nicolas Lutz, Gérard Richter, Nicolas Lebreton
  • Patent number: 11418658
    Abstract: An image forming apparatus including: a character information area display unit configured to highlight and display at least one character information area including a handwritten character string out of character information areas; an input receiving unit configured to receive a character information area including a handwritten character string specified a user; a character information area display selecting unit configured to select at least one character information area other than the character information area including the handwritten character string specified by the user to be combined with the character information area including the handwritten character string specified by the user; and a character information combining unit configured to combine character information in the character information area including the handwritten character string specified by the user and character information in the character information area selected by the character information area display selecting unit.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: August 16, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Takuhiro Okuda, Katsuyuki Takahashi
  • Patent number: 11410441
    Abstract: An information processing apparatus includes a processor. The processor is configured to extract from a memory an image concerning a user in accordance with content of a document and to attach the extracted image to the document.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: August 9, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Yuki Ando
  • Patent number: 11409992
    Abstract: A method and a computer program product for identification and improvement of machine learning (ML) under-performance. The method comprises slicing data of an ML model based on a functional model representing requirements of a system utilizing the ML model. The functional model comprises a set of attributes and respective domains of values. Each data slice is associated with a different valuation of one or more attributes of the functional model. Each data instance of the ML model is mapped to one or more data slices, based on valuation of the attributes. A performance measurement of the ML model is computed for each data slice, based on an application of the ML model on each data instance that is mapped to the data slice. A determination of whether the ML model adheres to a target performance requirement may be performed based on the performance measurements of the data slices.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventors: Rachel Brill, Eitan Farchi, Orna Raz, Aviad Zlotnick
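The per-slice performance computation can be sketched directly: map each instance to a slice via an attribute valuation, run the model, and aggregate a metric per slice. The `slicer` and `metric` callbacks are illustrative assumptions standing in for the functional model and the performance requirement.

```python
# Sketch: compute a performance measurement of an ML model per data slice,
# where slicer(x) assigns an instance to a slice by its attribute values.
def slice_performance(instances, model, slicer, metric):
    """instances: list of (features, label); returns {slice: score}."""
    slices = {}
    for x, y in instances:
        slices.setdefault(slicer(x), []).append((model(x), y))
    return {name: metric(pairs) for name, pairs in slices.items()}
```

Comparing each slice's score against a target then flags the slices where the model under-performs.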