Patents Examined by Daniel G. Mariam
  • Patent number: 11978273
    Abstract: Systems and techniques are provided for automatically analyzing and processing domain-specific image artifacts and document images. A process can include obtaining a plurality of document images comprising visual representations of structured text. An OCR-free machine learning model can be trained to automatically extract text data values from different types or classes of document images, using a region of interest (ROI) template matching the structure of each document image type for at least the initial rounds of annotation and training. The extracted information included in an inference prediction of the trained OCR-free machine learning model can be reviewed and validated or corrected as appropriate before being written to a database for use by one or more downstream analytical tasks.
    Type: Grant
    Filed: November 10, 2023
    Date of Patent: May 7, 2024
    Assignee: 32Health Inc.
    Inventors: Deepak Ramaswamy, Ravindra Kompella, Shaju Puthussery
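The ROI-template idea in the abstract above could be sketched roughly as follows. Everything here is hypothetical (the document types, field names, and coordinates are invented for illustration and are not the patented implementation): each document type maps to a template of named regions given as fractional coordinates, which are scaled to a concrete page size before extraction.

```python
# Hypothetical ROI templates: per document type, named fields mapped to
# (x0, y0, x1, y1) boxes expressed as fractions of the page size.
ROI_TEMPLATES = {
    "invoice": {"total": (0.70, 0.85, 0.95, 0.95),
                "date":  (0.05, 0.05, 0.35, 0.12)},
    "receipt": {"total": (0.50, 0.90, 0.98, 0.99)},
}

def crop_rois(image_size, doc_type):
    """Map each named field to absolute pixel boxes for the given page size."""
    w, h = image_size
    template = ROI_TEMPLATES[doc_type]
    return {name: (int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h))
            for name, (x0, y0, x1, y1) in template.items()}

boxes = crop_rois((1000, 2000), "invoice")
```

The pixel boxes would then be used to crop candidate regions for annotation and model training, rather than running OCR over the full page.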
  • Patent number: 11978211
    Abstract: A phase image is formed by calculation from a hologram image of a cell, and segmentation is performed pixel by pixel on the phase image using a fully convolutional neural network to identify an undifferentiated cell region, a deviated cell region, a foreign substance region, and the like. During learning, each training image in a mini-batch is read, randomly flipped vertically or horizontally, and then rotated by a random angle. Any part of the frame lost by the rotation is compensated for by a mirror-image reflection about an edge of the rotated image. Learning of the fully convolutional neural network is performed using the generated training images. The same processing is repeated for all mini-batches, and learning is repeated a predetermined number of times while shuffling the training data allocated to the mini-batches. The precision of the learning model is thus improved.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: May 7, 2024
    Assignee: SHIMADZU CORPORATION
    Inventors: Wataru Takahashi, Ayako Akazawa
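The augmentation pipeline described above (random flips, then mirror-image compensation at the borders) could be sketched roughly as below. This is a toy illustration on 2D lists, not the patented implementation; in particular, the mirror padding stands in for the described edge-reflection step, and the arbitrary-angle rotation itself is omitted.

```python
import random

def reflect_pad(img, pad):
    """Mirror-pad a 2D grid: border content lost after rotation is
    compensated by reflecting about the image edge, as in the abstract."""
    def reflect_row(row):
        left = row[1:pad + 1][::-1]       # mirror without repeating the edge pixel
        right = row[-pad - 1:-1][::-1]
        return left + row + right
    rows = [reflect_row(r) for r in img]
    top = rows[1:pad + 1][::-1]
    bottom = rows[-pad - 1:-1][::-1]
    return top + rows + bottom

def augment(img, rng):
    """Random vertical/horizontal flip before rotation, per the abstract."""
    if rng.random() < 0.5:
        img = img[::-1]                    # vertical flip
    if rng.random() < 0.5:
        img = [row[::-1] for row in img]   # horizontal flip
    return img

grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
padded = reflect_pad(grid, 1)
aug = augment(grid, random.Random(0))
```

In a real pipeline the reflect-padded image would be rotated by a random angle each epoch, with the padding supplying the pixels that would otherwise fall outside the frame.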
  • Patent number: 11967095
    Abstract: This image processing system is provided with: a measurement part which measures the three-dimensional shape of a target object based on a captured image obtained by capturing an image of the target object; a reliability calculation part which calculates, for each area, an index that indicates the reliability in the measurement of the three-dimensional shape; a reliability evaluation part which evaluates, for each area, whether the calculated index satisfies a predetermined criterion; and a display part which simultaneously or selectively displays the measurement result of the three-dimensional shape and a result image that shows the area that does not satisfy the criterion in the captured image.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: April 23, 2024
    Assignee: OMRON Corporation
    Inventor: Motoharu Okuno
  • Patent number: 11967166
    Abstract: The present disclosure provides a method, and system architectures for carrying out the method, of automated marine-life object classification and identification utilising a Deep Neural Network (DNN) core to facilitate the operations of a post-processing module subnetwork, such as instance segmentation, masking, labelling, and image overlay of an input image determined to contain one or more target marine-life objects. Multiple instances of target objects from the same image data can be easily classified and labelled for post-processing through application of a masking layer over each respective object by a semantic segmentation network.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: April 23, 2024
    Inventors: Tianye Wang, Shiwei Liu, Xiaoge Cheng
  • Patent number: 11961318
    Abstract: An information processing device includes a processor configured to acquire a document image illustrating a document, acquire a related character string associated with a target character string included in the document image, and extract target information corresponding to the target character string from a region set with reference to a position of the related character string in the document image.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: April 16, 2024
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Fumi Kosaka, Akinobu Yamaguchi, Junichi Shimizu, Shinya Nakamura, Jun Ando, Masanori Yoshizuka, Akane Abe
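The anchor-relative extraction described above (finding a related string, then reading the target value from a region positioned relative to it) could be sketched roughly as follows. The token list, field names, and band geometry are all hypothetical, not the patented method.

```python
# Hypothetical OCR output: (text, x, y) positions on a document image.
tokens = [
    ("Invoice", 10, 10), ("No.", 60, 10), ("A-1234", 110, 10),
    ("Date", 10, 40), ("2020-07-20", 110, 40),
]

def extract_right_of(tokens, anchor, max_dx=200, max_dy=5):
    """Return the nearest token lying in a band to the right of the anchor,
    i.e. a region set with reference to the related string's position."""
    ax = ay = None
    for text, x, y in tokens:
        if text == anchor:
            ax, ay = x, y
            break
    if ax is None:
        return None
    candidates = [(x, text) for text, x, y in tokens
                  if x > ax and abs(y - ay) <= max_dy and x - ax <= max_dx]
    return min(candidates)[1] if candidates else None

value = extract_right_of(tokens, "Date")
```

The same lookup works for any anchor, e.g. `extract_right_of(tokens, "No.")` would pull the invoice number token.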
  • Patent number: 11954988
    Abstract: A system and method for image processing for wildlife detection is provided which consists of object detection and object classification. An image capturing means captures one or more images. The captured image is converted to greyscale, re-sized, and passed on to a Deep Neural Network (DNN). The image classification is executed by a processor via the Deep Neural Network in two steps, the second of which is carried out by a custom Convolutional Neural Network (CNN). The CNN classifies the detected object against certain parameters. After classifying a particular animal species in the captured image, the system sends notifications, SMS messages, and alerts to neighbours in the surrounding area. For a correct image classification, the feedback data is sent to the CNN for further re-training. Periodic re-training of the model with the images captured during system execution adapts the system to the specific area being monitored and the wildlife in that area.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: April 9, 2024
    Inventor: Vivek Satya Bharati
  • Patent number: 11954903
    Abstract: This application relates to a system for automatically recognizing geographical area information provided on an item. The system may include an optical scanner configured to capture geographical area information provided on an item, the geographical area information comprising a plurality of geographical area components. The system may also include a controller in data communication with the optical scanner and configured to recognize the captured geographical area information by running a plurality of machine learning or deep learning models separately and sequentially on the plurality of geographical area components of the captured geographical area information.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: April 9, 2024
    Assignee: United States Postal Service
    Inventor: Ryan J. Simpson
  • Patent number: 11948384
    Abstract: The present disclosure is directed to systems and methods that enable scanning of any type of card regardless of the shape and design of a given card and/or a font, a shape and a format with which characters such as numbers, letters and symbols are printed on the cards including cards with non-embossed characters printed thereon. In one example, a method includes scanning a card, the card including at least an account number associated with a user of the card and an identifier of the user; detecting, by applying a machine learning model to the card after scanning the card, at least the account number printed on the card; and completing a task using the account number.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: April 2, 2024
    Assignee: Synchrony Bank
    Inventors: Brian Yang, Michael Storiale
  • Patent number: 11948228
    Abstract: A color correction method for a panoramic image comprises: acquiring first and second fisheye images; expanding the first fisheye image and the second fisheye image respectively to obtain a first image and a second image in an RGB color space; calculating overlapping areas between the images; converting the first image and the second image from the RGB color space to a Lab color space; in the Lab color space, adjusting the brightness value of the first image and the brightness value of the second image; converting the first image and the second image from the Lab color space back to the RGB color space; and, according to the mean color values of the first and second overlapping areas, adjusting the color value of the second image by using the first image as a reference, or adjusting the color value of the first image by using the second image as a reference.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: April 2, 2024
    Assignee: ARASHI VISION INC.
    Inventor: Chenglong Yin
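The final step of the method above (matching the second image's colors to the first using the mean colors of the overlap) could be sketched roughly as a per-channel gain, as below. This is a minimal stand-in, assuming a simple multiplicative correction in RGB; the patent also involves fisheye expansion and Lab-space brightness adjustment, which are omitted here.

```python
# Hypothetical sketch: per-channel gain computed from the mean colors of the
# two overlapping areas, applied to the second image (first image as reference).
def channel_means(pixels):
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def correct_to_reference(image, overlap_ref, overlap_src):
    """Scale each RGB channel of `image` so its overlap mean matches the
    reference image's overlap mean."""
    ref_mean = channel_means(overlap_ref)
    src_mean = channel_means(overlap_src)
    gain = [ref_mean[c] / src_mean[c] for c in range(3)]
    return [tuple(min(255, round(p[c] * gain[c])) for c in range(3))
            for p in image]

ref_overlap = [(100, 100, 100), (120, 120, 120)]   # overlap seen by first image
src_overlap = [(50, 100, 200), (60, 100, 240)]     # same scene, second image
corrected = correct_to_reference(src_overlap, ref_overlap, src_overlap)
```

After correction, the second image's overlap mean equals the reference mean, so the stitched panorama has no visible color seam along the overlap.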
  • Patent number: 11948387
    Abstract: Systems and methods for training an object detection network are described. Embodiments train an object detection network using a labeled training set, wherein each element of the labeled training set includes an image and ground truth labels for object instances in the image, predict annotation data for a candidate set of unlabeled data using the object detection network, select a sample image from the candidate set using a policy network, generate a labeled sample based on the selected sample image and the annotation data, wherein the labeled sample includes labels for a plurality of object instances in the sample image, and perform additional training on the object detection network based at least in part on the labeled sample.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: April 2, 2024
    Assignee: ADOBE INC.
    Inventors: Sumit Shekhar, Bhanu Prakash Reddy Guda, Ashutosh Chaubey, Ishan Jindal, Avneet Jain
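The active-learning loop described above (train, predict annotations for unlabeled candidates, let a policy select a sample, retrain on the expanded labeled set) could be sketched roughly as follows. The trainer, predictor, and policy below are toy stand-ins, not the patented networks; a least-confidence policy is substituted for the policy network purely for illustration.

```python
# Hypothetical sketch of one active-learning round from the abstract.
def active_learning_round(labeled, unlabeled, train, predict, policy):
    model = train(labeled)                        # supervised training
    annotations = {x: predict(model, x) for x in unlabeled}
    chosen = policy(annotations)                  # policy selects a sample
    labeled = labeled + [(chosen, annotations[chosen])]
    return train(labeled), labeled                # additional training

# Toy stand-ins: "training" just counts samples; the policy picks the
# prediction the model is least confident about.
train = lambda data: {"n": len(data)}
predict = lambda model, x: {"label": "obj", "score": x / 10}
policy = lambda ann: min(ann, key=lambda x: ann[x]["score"])

model, labeled = active_learning_round([("a", None)], [3, 7], train, predict, policy)
```

Each round grows the labeled set by the sample the policy judges most valuable, which is what lets the detector improve from a small initial training set.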
  • Patent number: 11941915
    Abstract: A system and method for video analytics of a golf game is disclosed. In an embodiment, cameras capture videos from different angles of a golfer's swing and/or strike; a system network comprises: a processing module to receive the videos, to 3D-model the trajectory of the swing/strike, and 3D-model the golfer; a machine-learning module to receive 3D swing-trajectories and golfer models of swings/strikes of professional golfers and compute a 3D model of one or more reference swings, as a function of an aggregation of the professional golfers' swings/strikes; a database storing the reference swings/strikes; an analysis module configured to receive the golfer's 3D swing/strike trajectory model and the 3D golfer model, compare the 3D trajectory model with the reference swing, and compute recommendations for the golfer, as a function of the comparison; and a display module configured to display the recommendations to the golfer.
    Type: Grant
    Filed: November 29, 2020
    Date of Patent: March 26, 2024
    Assignee: RoundU Technologies Ltd, UAB
    Inventor: Boris Tyomkin
  • Patent number: 11941918
    Abstract: An image processing component is trained to process 2D images of human body parts in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during training from a training set of captured 3D training images, each a 3D training image of a human body part captured using 3D image capture equipment and comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters so as to compute an image processing output for comparison with the corresponding depth data of that 3D image, and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
    Type: Grant
    Filed: April 14, 2023
    Date of Patent: March 26, 2024
    Assignee: Yoti Holding Limited
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
  • Patent number: 11941904
    Abstract: A computer-implemented method (300) for extracting content (302) from a physical writing surface (304), the method (300) comprising the steps of: (a) receiving a reference frame (306) including image data relating to at least a portion of the physical writing surface (304), the image data including a set of data points; (b) determining an extraction region (308), the extraction region (308) including a subset of the set of data points from which content (302) is to be extracted; (c) extracting content (302) from the extraction region (308) and writing the content (302) to a display frame (394); (d) receiving a subsequent frame (406) including subsequent image data relating to at least a portion of the physical writing surface (304), the subsequent image data including a subsequent set of data points; (e) determining a subsequent extraction region (408), the subsequent extraction region (408) including a subset of the subsequent set of data points from which content (402) is to be extracted; and (f) extracting
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: March 26, 2024
    Assignee: INKERZ PTY LTD
    Inventors: Vahid Kolahdouzan, Abdolhossein Aminaiee, Masoud Kolahdouzan
  • Patent number: 11935301
    Abstract: An information processing method includes obtaining image information including a first image of a first person in a predetermined facility and a second image of a second person in the predetermined facility; classifying each of the first person and the second person as a resident of the facility or a visitor to the facility, the first person being classified as the resident, the second person being classified as the visitor; calculating a distance between the first person and the second person, based on the first image and the second image; determining whether the first person and the second person are having a conversation with each other, based on the calculated distance; measuring, when it is determined that the first person and the second person are having a conversation with each other, a conversation time during which the first person and the second person are having a conversation with each other; and transmitting, when the measured conversation time exceeds a predetermined time, an infection notification
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: March 19, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: Tetsuya Takayanagi
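The distance-then-timer logic above could be sketched roughly as below. The distance threshold and frame count are invented placeholders for the patent's "predetermined" values, and per-frame distances stand in for the image-based distance calculation.

```python
# Hypothetical sketch: flag a possible exposure when two people stay within
# a conversation distance for longer than a predetermined time.
CONV_DISTANCE = 1.5   # metres — assumed threshold for "having a conversation"
NOTIFY_AFTER = 3      # consecutive frames — stand-in for the time threshold

def should_notify(distances):
    """Find the longest run of close-range frames and compare to the limit."""
    run = best = 0
    for d in distances:
        run = run + 1 if d <= CONV_DISTANCE else 0
        best = max(best, run)
    return best > NOTIFY_AFTER

alert = should_notify([2.0, 1.2, 1.0, 0.9, 1.1, 2.5])
```

Requiring a sustained run of close-range frames, rather than any single close frame, is what distinguishes a conversation from two people merely passing each other.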
  • Patent number: 11928853
    Abstract: Embodiments include techniques to determine a set of credit risk assessment data samples, generate local credit risk assessment attributions for the set of credit risk assessment samples, and normalize each local credit risk assessment attribution of the local credit risk assessment attributions. Further, embodiments include techniques to compare each pair of normalized local credit risk assessment attributions and assign a rank distance thereto proportional to the degree of ranking differences between the pair of normalized local credit risk assessment attributions. The techniques also include applying a K-medoids clustering algorithm to generate clusters of the local risk assessment attributions, generating global attributions, and determining insights for the neural network based on the global attributions.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: March 12, 2024
    Assignee: Capital One Services, LLC
    Inventors: Mark Ibrahim, John Paisley, Ceena Modarres, Melissa Louie
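The normalization and rank-distance steps above could be sketched roughly as below. The feature names and the specific distance (a simple sum of per-feature rank differences) are illustrative assumptions, not the patented formulation.

```python
# Hypothetical sketch: normalize each local attribution vector, then score a
# pair by how differently they rank the same features (a simple rank distance).
def normalize(attr):
    total = sum(abs(v) for v in attr.values())
    return {k: v / total for k, v in attr.items()}

def rank_distance(a, b):
    """Sum of per-feature rank differences between two attribution vectors."""
    rank = lambda attr: {k: i for i, (k, _) in
                         enumerate(sorted(attr.items(), key=lambda kv: -kv[1]))}
    ra, rb = rank(a), rank(b)
    return sum(abs(ra[k] - rb[k]) for k in ra)

a = normalize({"income": 4.0, "debt": 2.0, "age": 2.0})
b = normalize({"income": 1.0, "debt": 3.0, "age": 6.0})
d = rank_distance(a, b)
```

A pairwise rank-distance matrix built this way is the kind of input a K-medoids clustering step can consume directly, since K-medoids only needs distances between samples, not coordinates.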
  • Patent number: 11928876
    Abstract: This disclosure is directed to methods and systems that enable automatic recognition of the meaning, sentiment, and intent of an Internet meme. An Internet meme refers to a digitized image, video, or sound that is a unit of cultural information, carries symbolic meaning representing a particular phenomenon or theme, and is generally known and understood by members of a particular culture. The disclosed methods include automatic identification of a meme template and automatic detection of the sentiment and relationships between entities in the meme. The methods provide the determination of a meme's meaning as intended by its purveyors, as well as recognition of the original sentiment and attitudes conveyed by the use of entities within the meme.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: March 12, 2024
    Assignee: VIRALMOMENT INC.
    Inventors: Chelsie Morgan Hall, Connyre Hamalainen, Gareth Morinan, Sheyda Demooei
  • Patent number: 11893768
    Abstract: The present invention discloses a method and system for recognizing a geometric regularity image of a honeycomb structure. The method includes the steps of image acquisition, image processing, vertex extraction, cell reconstruction, and quality evaluation, wherein a step of binarization is set between the step of image processing and the step of vertex extraction, and is to set a pixel value of the background in the image to 0 and a pixel value of the honeycomb skeleton in the image to 1 to form a binary image, and the step of quality evaluation is to calculate angular deviation values of all the cells and an average thereof, as well as linear deviation values and an average thereof, based on the reconstructed cell image, and determine whether the honeycomb structure is qualified or not by comparing with a set tolerance zone.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: February 6, 2024
    Assignee: CENTRAL SOUTH UNIVERSITY
    Inventors: Zhonggang Wang, Xifeng Liang, Chong Shi, Wei Zhou, Can Cui, Wei Xiong, Xinxin Wang
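Two of the steps above (binarization and the angular-deviation quality check) could be sketched roughly as follows. The threshold, tolerance zone, and ideal 120° interior angle of a regular hexagonal cell are illustrative assumptions; the vertex extraction and cell reconstruction steps are omitted.

```python
# Hypothetical sketch: binarization (skeleton -> 1, background -> 0) and the
# angular-deviation quality check against the ideal 120-degree cell angle.
IDEAL_ANGLE = 120.0
TOLERANCE = 5.0   # assumed tolerance zone, in degrees

def binarize(gray, threshold=128):
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def angular_deviation(cell_angles):
    """Mean absolute deviation of a cell's interior angles from 120 degrees."""
    return sum(abs(a - IDEAL_ANGLE) for a in cell_angles) / len(cell_angles)

def qualified(cells):
    """Average the per-cell deviations and compare with the tolerance zone."""
    avg = sum(angular_deviation(c) for c in cells) / len(cells)
    return avg <= TOLERANCE

img = binarize([[200, 30], [90, 250]])
ok = qualified([[118, 122, 119, 121, 120, 120], [125, 115, 120, 120, 120, 120]])
```

A linear-deviation check over cell wall lengths would follow the same pattern, with an ideal wall length in place of the ideal angle.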
  • Patent number: 11893819
    Abstract: Methods and systems for extracting and processing data using optical character recognition in real-time environments. For example, the methods and systems provide novel techniques for extracting data using OCR and a mechanism for processing that data. These methods and systems are particularly relevant in real-time environments, as they limit the need for manual review.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: February 6, 2024
    Assignee: Capital One Services, LLC
    Inventors: Kenneth Cardozo, Landon Nehmer, Esmat Zare, Mani Afsari, Jitender Jain, Venkateshwar Parpelli, Bhuvaneswari Balasubramanian, Bijun Du, Daniel Nizinski, Tausif Shahid
  • Patent number: 11887358
    Abstract: Systems and methods for identifying and segmenting objects from images include a preprocessing module configured to adjust a size of a source image; a region-proposal module configured to propose one or more regions of interest in the size-adjusted source image; and a prediction module configured to predict a classification, bounding box coordinates, and mask. Such systems and methods may utilize end-to-end training of the modules using adversarial loss, facilitating the use of a small training set, and can be configured to process historical documents, such as large images comprising text. The preprocessing module within said systems and methods can utilize a conventional image scaler in tandem with a custom image scaler to provide a resized image suitable for GPU processing, and the region-proposal module can utilize a region-proposal network from a single-stage detection model in tandem with a two-stage detection model paradigm to capture substantially all particles in an image.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: January 30, 2024
    Assignee: Ancestry.com Operations Inc.
    Inventors: Masaki Stanley Fujimoto, Yen-Yun Yu
  • Patent number: 11886490
    Abstract: A neural network device comprises a processor that performs an operation for training a neural network, a feature extraction module that extracts unlabeled feature vectors that correspond to unlabeled images and labeled feature vectors that correspond to labeled images, and a classifier that classifies classes of query images, wherein the processor performs first learning with respect to a plurality of codebooks by using the labeled feature vectors, and performs second learning with respect to the plurality of codebooks by optimizing an entropy based on all of the labeled feature vectors and the unlabeled feature vectors.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: January 30, 2024
    Assignees: SAMSUNG ELECTRONICS CO, LTD., SEOUL NATIONAL UNIVERSITY R & DB FOUNDATION
    Inventors: Youngkyun Jang, Namik Cho