Abstract: An information processor can logically support prediction based on past statistical information even when that information contains qualitative or non-numerical data. The processor determines whether an input pattern corresponding to an input object (a determination target) belongs to a specific class among multiple classes, based on feature subsets of any combination of a plurality of features, each feature comprising multiple categories. The processor includes a storage that stores the input pattern corresponding to the input object and samples corresponding to respective sample objects, and a classification determiner that determines whether the input pattern belongs to the specific class.
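A minimal sketch of how classification over categorical feature subsets could look in practice. The frequency-scoring rule and the function name `subset_match_score` are illustrative assumptions, not the patented method:

```python
from itertools import combinations

def subset_match_score(input_pattern, samples, labels, target_class, subset_size=2):
    """Score how strongly the input pattern's category values, taken over all
    feature subsets of a given size, co-occur with the target class."""
    feature_ids = range(len(input_pattern))
    score = 0.0
    for subset in combinations(feature_ids, subset_size):
        key = tuple(input_pattern[i] for i in subset)
        # Count stored samples whose categories match on this feature subset.
        in_class = sum(1 for s, y in zip(samples, labels)
                       if y == target_class and tuple(s[i] for i in subset) == key)
        total = sum(1 for s in samples if tuple(s[i] for i in subset) == key)
        if total:
            score += in_class / total
    return score

# Toy categorical data: each sample is a tuple of category labels.
samples = [("red", "round", "small"), ("red", "square", "small"),
           ("blue", "round", "large"), ("blue", "square", "large")]
labels = ["A", "A", "B", "B"]
print(subset_match_score(("red", "round", "large"), samples, labels, "A"))
```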
Abstract: Systems and methods are provided for computer-aided phenotyping of fibrosis-related conditions. A digital image indicates presence of collagens in a biological tissue sample. The image is processed to quantify parameters, each parameter describing a feature of the collagens that is expected to differ across phenotypes of fibrosis. At least some of the features are tissue-level features that describe macroscopic characteristics of the collagens, morphometric-level features that describe morphometric characteristics of the collagens, and texture-level features that describe an organization of the collagens. At least some of the plurality of parameters are statistics associated with histograms corresponding to distributions of the associated parameters across at least some of the digital image. At least some of the plurality of parameters are combined to obtain one or more composite scores that quantify a phenotype of fibrosis for the biological tissue sample.
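A minimal sketch of histogram-derived statistics and a composite score as described in the abstract above. The specific parameters (fiber width, fiber density), the synthetic distributions, and the weights are illustrative assumptions:

```python
import numpy as np

def histogram_stats(values, bins=32):
    """Summary statistics of one parameter's distribution across an image."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()
    mean = (p * centers).sum()
    var = (p * (centers - mean) ** 2).sum()
    return {"mean": mean, "std": np.sqrt(var)}

# Per-pixel collagen parameters measured over a tissue image (synthetic here).
fiber_width = np.random.gamma(2.0, 1.5, size=10_000)
fiber_density = np.random.beta(2.0, 5.0, size=10_000)

stats = {name: histogram_stats(vals)
         for name, vals in [("width", fiber_width), ("density", fiber_density)]}

# Hypothetical composite score: weighted sum of selected statistics.
composite = 0.6 * stats["width"]["mean"] + 0.4 * stats["density"]["mean"]
print(stats, composite)
```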
Abstract: A method is provided for classifying objects. The method detects objects in one or more images. The method tags each object with multiple features. Each feature describes a specific object attribute and has a range of values to assist with a determination of an overall quality of the one or more images. The method specifies a set of training examples by classifying the overall quality of at least some of the objects as being of an acceptable quality or an unacceptable quality, based on a user's domain knowledge about an application program that takes the objects as inputs. The method constructs a plurality of first-level classifiers using the set of training examples. The method constructs a second-level classifier from the outputs of the first-level classifiers. The second-level classifier provides a classification for at least some of the objects as being of either acceptable or unacceptable quality.
Type:
Grant
Filed:
July 26, 2019
Date of Patent:
August 23, 2022
Inventors:
Biplob Debnath, Debayan Deb, Srimat Chakradhar
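A minimal sketch of the two-level (stacked) classification scheme described in the preceding abstract, using scikit-learn's stacking support. The synthetic feature set, the quality labels, and the choice of first-level models (random forest and SVM) are assumptions for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Each object is described by several features; labels mark acceptable (1)
# versus unacceptable (0) quality, as judged from domain knowledge.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# First-level classifiers trained on the labeled examples.
first_level = [("rf", RandomForestClassifier(random_state=0)),
               ("svm", SVC(probability=True, random_state=0))]

# Second-level classifier built from the outputs of the first-level classifiers.
model = StackingClassifier(estimators=first_level,
                           final_estimator=LogisticRegression())
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```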
Abstract: Advanced driver assistance systems can be designed to recognize and classify traffic signs under real-time constraints and under a wide variety of visual conditions. This disclosure provides techniques that employ binary masks extracted by color-space segmentation, with a different binary mask generated for each sign shape. Temporal tracking is employed to add robustness to the detection system. The system is generic and can be trained on the traffic signs used in various countries.
Type:
Grant
Filed:
May 1, 2020
Date of Patent:
July 26, 2022
Assignee:
Texas Instruments Incorporated
Inventors:
Arun Shankar Kudana, Manu Mathew, Soyeb Nagori
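A minimal sketch of color-space segmentation producing a binary mask, as described in the preceding abstract. It assumes OpenCV is available, and the HSV threshold values and the focus on red signs are illustrative, not the patented parameters:

```python
import cv2
import numpy as np

def red_sign_mask(bgr_frame):
    """Binary mask for red-dominated regions (e.g., prohibition signs).
    Threshold values below are illustrative only."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    lower = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(lower, upper)
    # Remove speckle before shape-specific detection.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # placeholder frame
mask = red_sign_mask(frame)
print(mask.shape, mask.dtype)
```

In practice one such mask would be generated per sign shape (circle, triangle, octagon), with detections tracked across frames for temporal robustness.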
Abstract: The technology disclosed relates to operating a motion-capture system responsive to available computational resources. In particular, it relates to assessing a level of image acquisition and image-analysis resources available using benchmarking of system components. In response, one or more image acquisition parameters and/or image-analysis parameters are adjusted. Acquisition and/or analysis of image data are then made compliant with the adjusted image acquisition parameters and/or image-analysis parameters. In some implementations, image acquisition parameters include frame resolution and frame capture rate and image-analysis parameters include analysis algorithm and analysis density.
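A minimal sketch of resource-aware adjustment of acquisition parameters as described in the abstract above. The benchmarking routine, the 30 fps budget, and the fallback tiers of resolution, frame rate, and analysis density are illustrative assumptions:

```python
import time

def benchmark_analysis(analyse, frame, runs=5):
    """Rough per-frame cost of the current analysis routine, in seconds."""
    start = time.perf_counter()
    for _ in range(runs):
        analyse(frame)
    return (time.perf_counter() - start) / runs

def choose_acquisition_params(per_frame_cost, budget_s=1 / 30):
    """Illustrative policy: drop frame rate, then resolution, to stay in budget."""
    if per_frame_cost <= budget_s:
        return {"resolution": (640, 480), "fps": 30, "analysis_density": "full"}
    if per_frame_cost <= 2 * budget_s:
        return {"resolution": (640, 480), "fps": 15, "analysis_density": "sparse"}
    return {"resolution": (320, 240), "fps": 15, "analysis_density": "sparse"}

dummy_frame = [[0] * 64 for _ in range(64)]
cost = benchmark_analysis(lambda f: sum(map(sum, f)), dummy_frame)
print(choose_acquisition_params(cost))
```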
Abstract: A periphery monitoring device includes: an acquisition unit that acquires captured images which are captured by an imaging unit in time series when a vehicle is moving, the imaging unit being provided in the vehicle and capable of imaging surroundings of the vehicle; and a restoration processing unit that, in a case where dirt is present in the latest captured image among the captured images, inputs the captured images into a first learned model and generates a first restoration image as a restoration image obtained by restoring a region concealed by the dirt in the latest captured image, the first learned model being a result obtained by machine learning a relationship between a first learning image in which learning dirt is not present and first learning dirt images, each of which is made by causing the learning dirt to be present in the first learning image.
Abstract: An abrasion inspection apparatus includes: a first imaging unit that is installed on a side of a track, a vehicle traveling along the track, a guide wheel being installed on a side of the vehicle, the first imaging unit imaging an inside of the track via a telecentric lens; a second imaging unit that is installed in a vehicle traveling direction with respect to the first imaging unit on the side of the track and images the inside of the track via a telecentric lens; an image acquisition unit that acquires an image which is an image of a boundary of the guide wheel captured by the first imaging unit and is an image of a boundary on a first direction side in the vehicle traveling direction and an image which is an image of the boundary of the guide wheel captured by the second imaging unit at the same time as the capturing of the image by the first imaging unit and is an image of a boundary on an opposite side to the first direction side; and a guide wheel detection unit that detects an abrasion situation of the guide wheel.
Abstract: A method can include classifying, using a compressed and specialized convolutional neural network (CNN), an object of a video frame into classes; clustering the object based on a distance of a feature vector of the object to a feature vector of a centroid object of the cluster; storing top-k classes, a centroid identification, and a cluster identification; in response to receiving a query for objects of class X from a specific video stream, retrieving image data for each centroid of each cluster that includes the class X as one of the top-k classes; classifying, using a ground truth CNN (GT-CNN), the retrieved image data for each centroid; and, for each centroid determined to be classified as a member of the class X, providing image data for each object in each cluster associated with the centroid.
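A minimal sketch of the indexing and query path described in the abstract above: objects are clustered by distance to a centroid feature vector, their top-k classes are stored, and a query retrieves matching clusters. The random features, class names, and distance threshold are illustrative assumptions, and the heavier ground-truth model is only stubbed in a comment:

```python
import numpy as np

def assign_to_cluster(feature, centroids, threshold=0.5):
    """Assign an object to the nearest centroid, or start a new cluster."""
    if centroids:
        dists = [np.linalg.norm(feature - c) for c in centroids]
        best = int(np.argmin(dists))
        if dists[best] < threshold:
            return best
    centroids.append(feature)
    return len(centroids) - 1

# Index built at ingest time: per-object top-k classes and cluster id.
centroids, index = [], []
rng = np.random.default_rng(0)
for obj_id in range(20):
    feature = rng.normal(size=8)
    topk = rng.choice(["car", "truck", "person", "bus"], size=2, replace=False)
    index.append({"object": obj_id,
                  "cluster": assign_to_cluster(feature, centroids),
                  "topk": set(topk)})

def query(class_x):
    """Return clusters whose members listed class_x among their top-k classes;
    a ground-truth CNN would then re-check each cluster's centroid image."""
    return sorted({e["cluster"] for e in index if class_x in e["topk"]})

print(query("car"))
```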
Abstract: The disclosure relates to an artificial intelligence (AI) system utilizing a machine learning algorithm, and application thereof. In particular, an electronic apparatus according to the disclosure includes a memory storing a trained artificial intelligence model, and a processor configured to acquire a plurality of feature values by inputting an input image to the artificial intelligence model. The trained artificial intelligence model applies each of a plurality of filters to a plurality of feature maps extracted from the input image and includes a pooling layer for acquiring feature values for the plurality of feature maps to which each of the plurality of filters is applied.
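A minimal sketch of the filter-plus-pooling structure described in the abstract above, written with PyTorch (the framework, layer sizes, and class name `FeatureValueExtractor` are assumptions for illustration): convolutional filters produce feature maps, and a pooling layer reduces each map to a single feature value.

```python
import torch
import torch.nn as nn

class FeatureValueExtractor(nn.Module):
    """Illustrative model: filters over the input image produce feature maps,
    and a pooling layer acquires one feature value per map."""
    def __init__(self, in_channels=3, num_maps=16):
        super().__init__()
        self.filters = nn.Conv2d(in_channels, num_maps, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)   # one value per feature map

    def forward(self, image):
        maps = torch.relu(self.filters(image))
        return self.pool(maps).flatten(1)     # shape: (batch, num_maps)

model = FeatureValueExtractor()
values = model(torch.randn(1, 3, 64, 64))
print(values.shape)  # torch.Size([1, 16])
```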
Abstract: Systems, methods, and computer-executable instructions for extracting key value data. Optical character recognition (OCR) text of a document is received. The y-coordinates of characters are adjusted to a common y-coordinate. The rows of OCR text are tokenized into tokens based on a distance between characters. The tokens are ordered based on the x,y coordinates of the characters. The document is clustered into a cluster based on the ordered tokens and ordered tokens from other documents. Keys for the cluster are determined from a first set of documents; each key is a token from the first set of documents. A value is assigned to each key based on the tokens for the document, and values are assigned to each key for the other documents. The values for the document and the values for the other documents are stored in an output document.
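A minimal sketch of the row-snapping and tokenization steps described in the abstract above. The character dictionaries, tolerances, and the "Total:" / "42" example are illustrative assumptions about what OCR output might look like:

```python
def snap_rows(chars, row_tolerance=4):
    """Snap each character's y-coordinate to a common row baseline."""
    out = []
    for ch in sorted(chars, key=lambda c: c["y"]):
        if out and abs(ch["y"] - out[-1]["y"]) <= row_tolerance:
            ch = {**ch, "y": out[-1]["y"]}
        out.append(ch)
    return out

def tokenize(chars, gap=8):
    """Group characters on a row into tokens when they are close together,
    ordering tokens by their x,y coordinates."""
    tokens, current = [], None
    for ch in sorted(chars, key=lambda c: (c["y"], c["x"])):
        if current and ch["y"] == current["y"] and ch["x"] - current["end"] <= gap:
            current["text"] += ch["text"]
            current["end"] = ch["x"] + ch["w"]
        else:
            current = {"text": ch["text"], "x": ch["x"], "y": ch["y"],
                       "end": ch["x"] + ch["w"]}
            tokens.append(current)
    return tokens

chars = [{"text": t, "x": 10 + 6 * i, "y": 20 + (i % 2), "w": 6}
         for i, t in enumerate("Total:")] + \
        [{"text": t, "x": 80 + 6 * i, "y": 21, "w": 6} for i, t in enumerate("42")]
tokens = tokenize(snap_rows(chars))
print([(t["text"], t["x"], t["y"]) for t in tokens])  # a "Total:" key and a "42" value
```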
Abstract: Embodiments of the present disclosure include a method, device and computer readable medium involving receiving image data to detect tissue lesions, passing the image data through at least one first convolutional neural network, segmenting the image data, fusing the segmented image data, and detecting tissue lesions.
Abstract: One embodiment provides a method, including: receiving, at an information handling device, drawing input; identifying, using a processor, at least one object in the drawing input; determining, based on the identifying, whether a factual anomaly exists in the drawing input with respect to the at least one object; and notifying, responsive to determining that a factual anomaly exists, a user of the factual anomaly.
Type:
Grant
Filed:
September 18, 2019
Date of Patent:
May 17, 2022
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Ghulam Ahmed Ansari, Amrita Saha, Srikanth Govindaraj Tamilselvam
Abstract: A method, a non-transitory computer readable medium, and a system are provided for determining three-dimensional (3D) information of structural elements of a substrate.
Abstract: The present disclosure relates to systems and methods for imaging. The method may include obtaining a real-time representation of a subject. The method may also include determining at least one scanning parameter associated with the subject by automatically processing the representation according to a parameter obtaining model. The method may further include performing a scan on the subject based at least in part on the at least one scanning parameter.
Type:
Grant
Filed:
October 25, 2019
Date of Patent:
February 8, 2022
Assignee:
SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.
Abstract: A system and method are disclosed for segmenting a set of two-dimensional CT slices corresponding to a lesion. In an embodiment, for each of at least a subset of the set of CT slices, the system inputs the CT slice into a plurality of branches of a trained segmentation block. Each branch of the segmentation block includes a convolutional neural network (CNN) with filters at a different scale, and produces one or more levels of output. The system generates, for each CT slice in the subset, feature maps for each level of output. The system generates a segmentation of each CT slice in the subset based on the feature maps of each level of output. The system aggregates the segmentations of each slice in the subset to generate a three-dimensional segmentation of the lesion. The system transmits data representing the three-dimensional segmentation to a user interface for display.
Type:
Grant
Filed:
April 14, 2020
Date of Patent:
January 25, 2022
Assignee:
Merck Sharp & Dohme Corp.
Inventors:
Antong Chen, Gregory Goldmacher, Bo Zhou
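A minimal sketch of the multi-branch, multi-scale segmentation and slice aggregation described in the preceding abstract. PyTorch, the kernel sizes used as "scales", and the single fusion head are assumptions for illustration; the patented architecture's levels of output are not reproduced here:

```python
import torch
import torch.nn as nn

class MultiScaleBranchSeg(nn.Module):
    """Illustrative segmenter: each branch applies filters at a different
    scale, and the branches' feature maps are fused into a per-pixel mask."""
    def __init__(self, scales=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(1, 8, kernel_size=k, padding=k // 2) for k in scales])
        self.head = nn.Conv2d(8 * len(scales), 1, kernel_size=1)

    def forward(self, ct_slice):
        maps = [torch.relu(branch(ct_slice)) for branch in self.branches]
        return torch.sigmoid(self.head(torch.cat(maps, dim=1)))

model = MultiScaleBranchSeg()
slices = torch.randn(12, 1, 64, 64)   # a stack of 2-D CT slices
masks = model(slices)                 # per-slice soft segmentation
volume = masks.squeeze(1) > 0.5       # aggregate slices into a 3-D mask
print(volume.shape)                   # torch.Size([12, 64, 64])
```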
Abstract: A monitoring-screen-data generation device includes an object-data generation unit, a screen-data generation unit, and an assignment processing unit. The object-data generation unit identifies a plurality of objects included in an image based on image data, and generates object data. The screen-data generation unit generates monitoring screen data on the basis of the object data. On the basis of definition data that defines a state transition and the object data, the assignment processing unit assigns data that defines the state transition to an image object included in a monitoring screen of the monitoring screen data.
Abstract: An image processing system, comprising an input interface (IN) for receiving a plurality of input images acquired of test objects. The system further comprises a material type analyzer (MTA) configured to produce material type readings at corresponding locations across said input images (IM(CH)). A statistical module (SM) of the system is configured to determine based on said readings an estimate for a probability distribution of material type for said corresponding locations.
Type:
Grant
Filed:
June 29, 2017
Date of Patent:
January 4, 2022
Assignee:
KONINKLIJKE PHILIPS N.V.
Inventors:
Dominik Benjamin Kutra, Thomas Buelow, Joerg Sabczynski, Kirsten Regina Meetz
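A minimal sketch of estimating a per-location probability distribution of material type from readings across multiple input images, as described in the preceding abstract. The material names, grid size, and random readings are illustrative assumptions:

```python
import numpy as np

MATERIALS = ["adipose", "glandular", "skin"]   # illustrative material types

rng = np.random.default_rng(0)
# Material-type readings at the same 32x32 grid of locations across 10 images.
readings = rng.integers(0, len(MATERIALS), size=(10, 32, 32))

# Per-location empirical probability distribution over material types.
counts = np.stack([(readings == m).sum(axis=0) for m in range(len(MATERIALS))])
prob = counts / counts.sum(axis=0, keepdims=True)   # shape: (materials, 32, 32)

loc = (16, 16)
print({MATERIALS[m]: float(prob[m][loc]) for m in range(len(MATERIALS))})
```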
Abstract: Method and apparatus for segmenting a cellular image are disclosed. A specific embodiment of the method includes: acquiring a cellular image; enhancing the cellular image using a generative adversarial network to obtain an enhanced cellular image; and segmenting the enhanced cellular image using a hierarchical fully convolutional network for image segmentation to obtain cytoplasm and zona pellucida areas in the cellular image.
Type:
Grant
Filed:
October 22, 2019
Date of Patent:
December 28, 2021
Assignee:
The Chinese University of Hong Kong
Inventors:
Yiu Leung Chan, Mingpeng Zhao, Han Hui Li, Tin Chiu Li
Abstract: Method and system for grading a tumor. For example, a system for grading a tumor includes: an image obtaining module configured to obtain a pathological image of a tissue to be examined; a snippet obtaining module configured to obtain one or more snippets having one or more sizes from the pathological image; an analyzing module configured to obtain one or more classification features based on at least analyzing the one or more snippets using one or more selected trained detection models of the analyzing module, wherein each selected trained detection model is configured to identify one or more classification features; and an outputting module configured to determine a tumor identification result based on at least the one or more classification features and output the tumor identification result.
Type:
Grant
Filed:
October 18, 2019
Date of Patent:
December 14, 2021
Assignee:
Shanghai United Imaging Intelligence Co., Ltd.
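A minimal sketch of extracting snippets of more than one size from a pathological image and deriving classification features from them, as described in the preceding abstract. The sliding-window scheme, the stand-in scoring function, and the grading rule are illustrative assumptions in place of the trained detection models:

```python
import numpy as np

def extract_snippets(image, size, stride):
    """Slide a window over the image and yield square snippets of one size."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]

def mean_intensity_score(snippet):
    """Stand-in for a trained detection model returning a classification feature."""
    return float(snippet.mean())

image = np.random.rand(256, 256)      # placeholder pathological image
features = []
for size in (64, 128):                # snippets of more than one size
    scores = [mean_intensity_score(s) for s in extract_snippets(image, size, size)]
    features.append(max(scores))      # one classification feature per scale

# Hypothetical rule combining the classification features into a result.
grade = "high" if max(features) > 0.55 else "low"
print(features, grade)
```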