Abstract: According to one embodiment, a medical image processing apparatus includes processing circuitry. The processing circuitry specifies, before position alignment between a first X-ray image and a second X-ray image which is acquired with a device inserted, a device area candidate in the second X-ray image as a candidate of an area where the device appears. The processing circuitry performs the position alignment using first processing of removing the specified device area candidate or second processing of reducing a contribution of the specified device area candidate.
Abstract: The subject disclosure presents systems and methods for automatically selecting meaningful regions on a whole-slide image and performing quality control on the resulting collection of fields of view (FOVs). Density maps may be generated that quantify the local density of detection results. These heat maps, as well as combinations of maps (such as a local sum, ratio, etc.), may be provided as input to an automated FOV selection operation. The selection operation may select regions of each heat map that represent extreme and average representative regions, based on one or more rules. One or more rules may be defined in order to generate the list of candidate FOVs. The rules may generally be formulated such that the FOVs chosen for quality control are the ones that require the most scrutiny and will benefit the most from assessment by an expert observer.
Type:
Grant
Filed:
January 17, 2020
Date of Patent:
May 10, 2022
Assignees:
Ventana Medical Systems, Inc., HOFFMANN-LA ROCHE INC.
Inventors:
Joerg Bredno, Astrid Heller, Gabriele Hoelzlwimmer
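The rule-based selection described in the abstract above can be sketched in a few lines. This is an illustrative assumption of one possible rule set (densest region, sparsest region, and the most "average" region); the function name, the toy density map, and the rules themselves are hypothetical, not the patented implementation.

```python
# Rule-based FOV selection from a density map: pick the extreme regions
# (highest and lowest local density) plus the most "average" region.
# The density map here is a toy stand-in for a whole-slide heat map.

def select_fovs(density_map):
    """density_map: dict mapping region id -> local detection density."""
    mean = sum(density_map.values()) / len(density_map)
    highest = max(density_map, key=density_map.get)   # extreme: densest region
    lowest = min(density_map, key=density_map.get)    # extreme: sparsest region
    # "average representative": region whose density is closest to the mean
    typical = min(density_map, key=lambda r: abs(density_map[r] - mean))
    return [highest, lowest, typical]

densities = {"r1": 0.90, "r2": 0.10, "r3": 0.52, "r4": 0.45}
print(select_fovs(densities))  # densest, sparsest, closest-to-mean region ids
```

Additional rules (e.g., a minimum spacing between selected regions) would slot in as further filters over the candidate list.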
Abstract: Embodiments discussed herein facilitate determination of tumor mutation status based on context and spatial information. One example embodiment can access an MRI scan of a tumor comprising voxels; extract radiomic feature(s) from the voxels; generate a spatial feature descriptor indicating probabilities that the tumor has a first mutation status and a second mutation status, based on the MRI scan, a first population atlas for the first mutation status, and a second population atlas for the second mutation status; provide the radiomic feature(s) and the spatial feature descriptor to a machine learning model; and receive, via the machine learning model, a map indicating, for each voxel of the voxels, a probability of the first mutation status for that voxel and a probability of the second mutation status for that voxel, wherein the map is based at least on the one or more radiomic features and the spatial feature descriptor.
Abstract: An image display apparatus includes a display which can display an image, and a hardware processor. The hardware processor obtains image data of a dynamic image or a moving image including a plurality of frames generated by a radiation imaging apparatus. Based on the obtained image data, or on analyzed image data obtained by analyzing the image data, the hardware processor displays on the display, as a failure image, at least one failure frame included in the dynamic image or the moving image, the failure frame including a reason why the dynamic image or the moving image cannot be provided for diagnosis.
Abstract: In a method of operation of an artificial intelligence-equipped specimen scanning and analysis unit to digitally scan and analyze pathological specimen slides, a sample slide with a mounted tissue sample is scanned and analyzed according to one or more user-selected algorithms in order to generate a heatmap visually depicting the presence of one or more user-selected sample attributes of the tissue sample. One or more artificial intelligence modules, including a deep learning computation module, are provided and can be trained by the user for future analysis of new samples. One or more regions of interest may be selected from the heatmap to include in the results of the analysis. A focus window may be used to closely inspect any given region of the whole slide image, and a trail map is generated from the movement of the focus window.
Abstract: A spinal image generation system based on the ultrasonic rubbing technique comprises an acquisition unit and a processing unit. The system generates the ultrasonic rubbing based on two-dimensional spinal ultrasonic images. The images need to include the surface characteristic contour of the vertebra structure. The ultrasonic rubbing is matched with a digital medical image through the characteristic contour. After matching, a personalized spinal surface topographical map is established, which is kept updated in real time so as to remain consistent with the intraoperative posture of the patient under surgical conditions. A positioning and navigation system for spinal surgery, based on the spinal image generation system, comprises a navigation module and the image generation system above. The navigation system can acquire the personalized spinal surface topographical map, which is kept updated in real time so as to remain consistent with the intraoperative posture of the patient under surgical conditions.
Abstract: A system for illustrating the flight of a sports ball includes a radar, an imager, and a controller. The imager is configured to image a moving sports ball. The controller is configured to (i) receive, from the imager, an image including the moving sports ball, (ii) receive, from the radar, radar data associated with the moving sports ball, (iii) determine, from the radar data, a portion of a trajectory of the moving sports ball, (iv) alter the image to illustrate the portion of the trajectory relative to the moving sports ball, and (v) output the altered image.
Abstract: An image processing system and related method. The system comprises an input interface (IN) configured for receiving an n-dimensional (n≥2) input image with a set of anchor points defined in same, said set of anchor points forming an input constellation. A constellation modifier (CM) is configured to modify said input constellation into a modified constellation. A constellation evaluator (CE) is configured to evaluate said input constellation based on said hyper-surface to produce a score. A comparator (COMP) is configured to compare said score against a quality criterion. Through an output interface (OUT), said constellation is output if the score meets said criterion. The constellation is suitable to define a segmentation for said input image.
Type:
Grant
Filed:
July 26, 2017
Date of Patent:
April 5, 2022
Assignee:
KONINKLIJKE PHILIPS N.V.
Inventors:
Rafael Wiemker, Tobias Klinder, Alexander Schmidt-Richberg, Axel Saalbach, Irina Waechter-Stehle, Tim Philipp Harder, Jens von Berg
Abstract: Embodiments provide line-based feature generation for vision-based driver assistance systems and methods. For one embodiment, a feature generator includes a circular buffer and a processor coupled to an image sensor. The circular buffer receives image data from the image sensor and stores N lines at a time of an image frame captured by the image sensor. The N lines of the image frame are less than all of the lines for the image frame. The processor receives the N lines from the circular buffer and stores one or more features generated from the N lines in a memory. Iterative blocks of N lines of image data are processed to complete processing of the full image frame, and multiple frames can be processed. The generated features are analyzed by a vision processor to identify, classify, and track objects for vision-based driver assistance and related vision-based assistance actions.
Type:
Grant
Filed:
July 31, 2019
Date of Patent:
April 5, 2022
Assignee:
NXP USA, Inc.
Inventors:
Sharath Subramanya Naidu, Ajit Singh, Michael Andreas Staudenmaier, Leonardo Surico, Stephan Matthias Herrmann
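The N-line buffering scheme in the abstract above can be sketched with a bounded double-ended queue standing in for the hardware circular buffer. This is a hedged illustration: the sliding-window iteration, the toy frame, and the "feature" (a window pixel sum) are assumptions for demonstration, not the patented feature generator.

```python
# Line-based processing with an N-line circular buffer: the full frame is
# never held in memory; a feature is generated from each window of N lines.
from collections import deque

def process_frame(lines, n):
    buffer = deque(maxlen=n)          # circular buffer holding N lines at a time
    features = []
    for line in lines:                # lines arrive one at a time from the sensor
        buffer.append(line)           # oldest line is discarded automatically
        if len(buffer) == n:          # a full N-line block is available
            # toy "feature": sum of pixel values in the current window
            features.append(sum(sum(row) for row in buffer))
    return features

frame = [[1, 2], [3, 4], [5, 6], [7, 8]]   # 4-line toy image frame
print(process_frame(frame, 2))             # one feature per full 2-line window
```

The memory footprint is N lines regardless of frame height, which is the point of the design: feature generation keeps pace with the sensor readout instead of waiting for a complete frame.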
Abstract: Disclosed is a method and system for three-dimensional tracking of a target located within a body, the method performed using at least one processing system. A two-dimensional scanned image of the body including the target is processed to obtain a two-dimensional image of the target. A first present dataset of the target is predicted using a previous dataset of the target and a state transition model, the first present dataset includes a three-dimensional present position value of the target. A second present dataset of the target is measured by template-matching of the two-dimensional image of the target with a model of the target. A third present dataset of the target is estimated by statistical inference using the first present dataset and the second present dataset. The previous dataset of the target is updated to match the third present dataset.
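The predict / measure / fuse cycle in the abstract above resembles a Kalman filter: a state-transition model predicts the present dataset, template matching supplies a measurement, and statistical inference combines the two. The 1-D scalar sketch below is an illustrative analogue under that assumption, not the patented estimator; the noise parameters and measurements are invented for demonstration.

```python
# One step of a scalar Kalman filter: predict with a state-transition model,
# then fuse the prediction with a measurement, weighting by uncertainty.

def kalman_step(x_prev, p_prev, measurement, q=0.1, r=0.4):
    # Predict: constant-position state-transition model plus process noise q
    x_pred = x_prev
    p_pred = p_prev + q
    # Fuse: statistical inference combining prediction and measurement
    gain = p_pred / (p_pred + r)               # Kalman gain
    x_new = x_pred + gain * (measurement - x_pred)
    p_new = (1.0 - gain) * p_pred              # uncertainty shrinks after fusion
    return x_new, p_new

x, p = 0.0, 1.0                                # initial (previous) dataset
for z in [1.0, 1.2, 0.9, 1.1]:                 # measurements from template matching
    x, p = kalman_step(x, p, z)
print(round(x, 2))                             # fused estimate near the true value
```

Updating the previous dataset to match the fused result, as the abstract describes, corresponds to carrying `x, p` forward into the next step.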
Abstract: An establishing method of a retinal layer thickness detection model includes the following steps. A reference database is obtained, and an image pre-processing step, a feature selecting step, a training step and a confirming step are performed. The reference database includes reference optical coherence tomographic images. In the image pre-processing step, the reference optical coherence tomographic images are duplicated, and cell segmentation lines of retinal layers are marked to obtain control optical coherence tomographic images. In the feature selecting step, the reference optical coherence tomographic images are analyzed to obtain reference image features. In the training step, training is performed with the reference image features to obtain the retinal layer thickness detection model.
Type:
Grant
Filed:
September 28, 2020
Date of Patent:
March 29, 2022
Assignees:
NATIONAL CHIN-YI UNIVERSITY OF TECHNOLOGY, CHI MEI MEDICAL CENTER
Inventors:
Yue-Jing He, Ching-Ping Chang, Shu-Chun Kuo, Kao-Chang Lin
Abstract: A method for processing CT imaging data includes providing first CT imaging data obtained at two x-ray energy levels in a first respiratory phase, preferably an inhalation phase, of the subject, and providing second CT imaging data obtained at two x-ray energy levels in a second respiratory phase, preferably an exhalation phase, of the subject. The method may include reconstructing first regional perfusion blood volume (PBV) imaging data from the provided first CT imaging data, reconstructing second regional PBV imaging data from the provided second CT imaging data, reconstructing first virtual non-contrast (VNC) imaging data from the provided first CT imaging data, reconstructing second VNC imaging data from the provided second CT imaging data, determining a transformation function for registering the first and second reconstructed VNC imaging data, and registering the first and second reconstructed VNC imaging data by applying the transformation function.
Abstract: Provided are a moving body tracking apparatus that contributes to shortening treatment time, a radiation therapy system including the moving body tracking apparatus, a program, and a moving body tracking method. The moving body tracking apparatus includes a fluoroscopic apparatus that acquires fluoroscopic images including the target 2 from at least two directions, and a moving body tracking control apparatus 30A that obtains a position of the target 2 from the fluoroscopic images acquired by the fluoroscopic apparatus. The moving body tracking control apparatus 30A creates a simulated fluoroscopic image from a CT image including the target 2, creates a two-dimensional region including the target 2 from the simulated fluoroscopic image as a template, matches each of at least two fluoroscopic images against the template, and obtains a three-dimensional position of the target 2 from the plurality of matching results.
Type:
Grant
Filed:
February 6, 2020
Date of Patent:
March 22, 2022
Assignees:
HITACHI, LTD., NATIONAL UNIVERSITY CORPORATION HOKKAIDO UNIVERSITY
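The template-matching-in-two-views idea from the abstract above can be sketched as follows. The sketch assumes idealized orthogonal projections (view A images the x-z plane, view B the y-z plane), so the two 2-D match positions combine directly into a 3-D position; the matching cost (sum of squared differences), the toy views, and the geometry are illustrative assumptions rather than the patented method.

```python
# Obtain a 3-D target position from template matches in two projection views.

def match_template(image, template):
    """Return (row, col) of the best sum-of-squared-differences match."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def position_3d(match_a, match_b):
    # Rows encode z in both views; columns encode x (view A) or y (view B).
    (za, x), (zb, y) = match_a, match_b
    return (x, y, (za + zb) / 2.0)        # reconcile z from the two views

view_a = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # target spot at row 1, col 1
view_b = [[0, 0, 0], [0, 0, 9], [0, 0, 0]]   # target spot at row 1, col 2
tpl = [[9]]
print(position_3d(match_template(view_a, tpl), match_template(view_b, tpl)))
```

A real system would use the calibrated projection geometry of the two imagers rather than assumed orthogonal axes, but the structure (match per view, then triangulate) is the same.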
Abstract: The present disclosure provides a method and a system for recognizing medical images. The present disclosure utilizes image data with markers of different diseases for calculation and analysis to build a pre-trained model. This yields significant improvements in the accuracy of image recognition under the common situation of insufficient effective data in the field of medical image recognition technology. The present disclosure can be applied to the field of medical image recognition technology, including X-ray, CT, MRI, ultrasonic, pathological slice photography, and fundus photography.
Type:
Grant
Filed:
September 16, 2020
Date of Patent:
March 22, 2022
Assignee:
MUEN BIOMEDICAL AND OPTOELECTRONIC TECHNOLOGIES INC.
Abstract: Various image diagnostic systems, and methods of operation thereof, are disclosed herein. Example embodiments relate to operating the image diagnostic system to identify one or more tissue types within an image patch according to a hierarchical histological taxonomy, identifying an image patch associated with normal tissue, generating a pixel-level segmented image patch for an image patch, generating an encoded image patch for an image patch of at least one tissue, searching for one or more histopathological images, and assigning an image patch to one or more pathological cases.
Type:
Grant
Filed:
May 1, 2020
Date of Patent:
March 15, 2022
Assignee:
Huron Technologies International Inc.
Inventors:
Mahdi S. Hosseini, Konstantinos N. Plataniotis, Lyndon Chan, Jasper Hayes, Savvas Damaskinos
Abstract: To plan tumor treating fields (TTFields) therapy, a model of a patient's head is often used to determine where to position the transducer arrays during treatment, and the accuracy of this model depends in large part on an accurate segmentation of MRI images. The quality of a segmentation can be improved by presenting the segmentation to a previously-trained machine learning system. The machine learning system generates a quality score for the segmentation. Revisions to the segmentation are accepted, and the machine learning system scores the revised segmentation. The quality scores are used to determine which segmentation provides better results, optionally by running simulations for models that correspond to each segmentation for a plurality of different transducer array layouts.
Type:
Grant
Filed:
January 7, 2020
Date of Patent:
March 15, 2022
Assignee:
Novocure GmbH
Inventors:
Reuven R. Shamir, Zeev Bomzon, Mor Vardi
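The score-revise-compare loop in the abstract above can be sketched compactly. Here `quality_score` is a trivial placeholder for the previously-trained machine learning system (it merely rewards label smoothness along a 1-D segmentation); the function names and toy segmentations are illustrative assumptions.

```python
# Accept a revision to a segmentation only if the (stand-in) quality model
# scores it higher than the original.

def quality_score(segmentation):
    # Placeholder for the trained ML scorer: fewer label changes between
    # neighboring pixels -> higher score.
    changes = sum(1 for a, b in zip(segmentation, segmentation[1:]) if a != b)
    return 1.0 / (1.0 + changes)

def choose_segmentation(original, revised):
    return revised if quality_score(revised) > quality_score(original) else original

original = [1, 1, 2, 1, 2, 2]    # noisy boundary: 3 label changes
revised  = [1, 1, 1, 2, 2, 2]    # cleaned-up boundary: 1 label change
print(choose_segmentation(original, revised))
```

The abstract's optional refinement, running TTFields simulations for models built from each segmentation across several transducer-array layouts, would replace this single scalar comparison with a comparison of simulated field distributions.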
Abstract: Provided is a disease diagnosis support method employing endoscopic images of a digestive organ using a neural network, and the like. The method trains the neural network by using first endoscopic images of the digestive organ and, corresponding to the first endoscopic images, at least one of a definitive diagnosis result of being positive or negative for a disease of the digestive organ, a past disease, a severity level, and information corresponding to an imaged region. The trained neural network outputs, based on second endoscopic images of the digestive organ, at least one of a probability of being positive and/or negative for the disease of the digestive organ, a probability of a past disease, a severity level of the disease, and the information corresponding to the imaged region.
Abstract: Disclosed herein are a system and method that may help place or position a component, such as an acetabular cup or a femoral component, during surgery. An example system may iteratively register a plurality of two-dimensional projections from a three-dimensional model of a portion of a patient, the three-dimensional model being generated from a data set of imaging information obtained at a neutral position. An example system may further score each two-dimensional projection against an intra-operative image by calculating a spatial difference between corresponding points. A two-dimensional projection having a minimum score reflecting the smallest distance between the corresponding points may be identified.
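The scoring step in the abstract above, computing a spatial difference between corresponding points and keeping the projection with the minimum score, can be sketched directly. The point sets, pose names, and the summed-Euclidean-distance score are illustrative assumptions, not the disclosed registration pipeline.

```python
# Score each candidate 2-D projection against the intra-operative image by
# summing distances between corresponding points; select the minimum score.
import math

def score(projection_pts, intraop_pts):
    return sum(math.dist(p, q) for p, q in zip(projection_pts, intraop_pts))

def best_projection(projections, intraop_pts):
    # Minimum score = smallest total distance between corresponding points
    return min(projections, key=lambda name: score(projections[name], intraop_pts))

intraop = [(0, 0), (1, 0), (0, 1)]                  # landmarks in the intra-op image
projections = {
    "pose_a": [(0.5, 0.0), (1.5, 0.0), (0.5, 1.0)], # projection offset by 0.5 in x
    "pose_b": [(0.1, 0.0), (1.0, 0.1), (0.0, 1.1)], # nearly aligned projection
}
print(best_projection(projections, intraop))
```

Iterating this over projections rendered from many candidate poses of the three-dimensional model yields the registration the abstract describes.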
Abstract: A culture information processing device includes: a feature value computing unit that computes growth feature values indicating features of growth characteristics of cells from data acquired in a particular first subculturing process selected from among a plurality of subculturing processes included in a culture period of the cells; a condition setting unit that sets culturing conditions of a second subculturing process one process after the first subculturing process; and an information computing unit that computes, on the basis of the growth feature values computed by the feature value computing unit and the culturing conditions set by the condition setting unit, characteristics-related information related to growth characteristics in the second subculturing process.