Abstract: A method of implementing an artificial intelligence based neuroradiology platform for neurological tumor identification comprises: providing a multilayer convolutional network configured to segment data sets of full neurologic scans into resolution voxels; supervised learning and validation of the platform by classification of tissue within classification voxels of given training and validation data sets by the multilayer convolutional network, with each classification voxel of the training and validation data sets having a predetermined ground truth; and implementing the platform by classification of tissue within classification voxels of given patient data sets by the multilayer convolutional network, with each classification voxel of each data set assigned a label. The platform may be used for T-cell therapy initiation and tracking.
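As a rough illustration of the segmentation step above, a scan volume can be partitioned into resolution voxels and each voxel assigned a label. The threshold classifier below is only a stand-in for the trained multilayer convolutional network; the function names, voxel shape, and threshold are illustrative assumptions, not the patented method.

```python
import numpy as np

def partition_into_voxels(scan, voxel_shape):
    """Split a 3-D scan into non-overlapping resolution voxels.
    Assumes the scan dimensions are multiples of voxel_shape."""
    dz, dy, dx = voxel_shape
    Z, Y, X = scan.shape
    # block the volume, then flatten the block grid into a voxel list
    return (scan.reshape(Z // dz, dz, Y // dy, dy, X // dx, dx)
                .transpose(0, 2, 4, 1, 3, 5)
                .reshape(-1, dz, dy, dx))

def classify_voxels(voxels, threshold=0.5):
    """Stand-in for the trained convolutional classifier: label a voxel
    1 (tumor) when its mean intensity exceeds a threshold, else 0."""
    return (voxels.mean(axis=(1, 2, 3)) > threshold).astype(int)
```

In the supervised-learning phase, the labels produced this way would be compared voxel-by-voxel against the predetermined ground truth.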
Abstract: A method of using artificial intelligence and data observed by sensors or imaging devices to prompt a patient to provide responses or perform actions, observing the patient's responses to the prompts, and performing an assessment that yields a quantitative result. The quantitative result is then used to complete a clinical qualitative assessment of the patient.
Type:
Grant
Filed:
December 31, 2021
Date of Patent:
November 29, 2022
Assignee:
IX INNOVATION LLC
Inventors:
Jeffrey Roh, Justin Esterberg, John Cronin, Seth Cronin, Michael John Baker
Abstract: Systems, methods, devices, and other techniques using machine learning for interpreting, or assisting in the interpretation of, biologic specimens based on digital images are provided. Methods for improving image-based cellular identification, diagnostic methods, methods for evaluating effectiveness of a disease intervention, and visual outputs useful in assisting professionals in the interpretation of biologic specimens are also provided.
Type:
Grant
Filed:
October 15, 2019
Date of Patent:
November 22, 2022
Assignee:
UPMC
Inventors:
Erastus Zachariah Allen, Keith Michael Callenberg, Liron Pantanowitz, Adit Bharat Sanghvi
Abstract: A method of training a lifeguard to properly view an area of a swimming pool or body of water and recognize a swimmer/bather in distress. The method includes: positioning submersible devices or other objects on the bottom of the swimming pool or body of water according to an established grid or pattern; observing the submersible devices; and analyzing the observations to evaluate the ability to see the submersible devices under varying environmental and density conditions. These observations train the lifeguard to recognize a swimmer/bather in distress in the swimming pool or body of water, minimizing the risk of the swimmer/bather drowning.
Abstract: Systems and methods for detecting illicit activity based on body language features identified during a video visitation session or video communication are described herein. In some embodiments, a system may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the system to: analyze a video communication between a first party and a second party, where the analysis is configured to detect a body language feature that indicates an event taking place during the communication; and create an electronic record identifying the communication as containing the event.
Abstract: The disclosure herein relates to systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking. In some embodiments, the systems, devices, and methods described herein are configured to analyze non-invasive medical images of a subject to automatically and/or dynamically identify one or more features, such as plaque and vessels, and/or derive one or more quantified plaque parameters, such as radiodensity, radiodensity composition, volume, radiodensity heterogeneity, geometry, location, and/or the like. In some embodiments, the systems, devices, and methods described herein are further configured to generate one or more assessments of plaque-based diseases from raw medical images using one or more of the identified features and/or quantified parameters.
Type:
Grant
Filed:
January 5, 2021
Date of Patent:
November 15, 2022
Assignee:
CLEERLY, INC.
Inventors:
James K. Min, James P. Earls, Hugo Miguel Rodrigues Marques
Abstract: Methods, devices, and systems are provided for quantifying an extent of various pathology patterns in scanned subject images. The detection and quantification of pathology are performed automatically and unsupervised via a trained system. The methods, devices, and systems described herein generate unique dictionaries of elements based on actual image data scans to automatically identify pathology of new image data scans of subjects. The automatic detection and quantification system can detect a number of pathologies including a usual interstitial pneumonia pattern on computed tomography images, which is subject to high inter-observer variation, in the diagnosis of idiopathic pulmonary fibrosis.
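The dictionary-based quantification might be sketched as below, under the assumption that a dictionary of patch prototypes has already been learned from actual scans and that some atoms are labeled pathological. All names and values here are illustrative, not taken from the patent.

```python
import numpy as np

def nearest_atoms(patches, dictionary):
    """Assign each image patch (row vector) to its nearest dictionary
    element by squared Euclidean distance."""
    d = ((patches[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def pathology_extent(patches, dictionary, pathological_atoms):
    """Quantify pathology as the fraction of patches whose nearest
    dictionary element was labeled pathological."""
    labels = nearest_atoms(patches, dictionary)
    return float(np.isin(labels, pathological_atoms).mean())
```

A scan with half its patches closest to pathological atoms would score 0.5 under this measure.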
Abstract: Disclosed herein is an image processing apparatus and a control method thereof. The image processing apparatus includes communication circuitry, a storage, and a controller configured to control the image processing apparatus to: perform object recognition for recognizing a plurality of objects in first image data stored in the storage, obtain a score inferred through neural network processing of the recognized plurality of objects, generate second image data based on the obtained score and the proximity of the plurality of objects, and perform image processing based on the second image data.
Abstract: A method for distinguishing plaque and calculus is provided. The method is used in a device and includes the following steps: emitting, by a blue light-emitting diode, blue light to illuminate teeth in an oral cavity, wherein the blue light is used to generate autofluorescence of plaque and calculus on the teeth; sensing, by an image sensor, the autofluorescence of plaque and calculus; and distinguishing, by a processor, a plaque area and a calculus area on the teeth based on the autofluorescence.
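A minimal sketch of the processor-side step above: labeling pixels by their red/green autofluorescence ratio. The ratio thresholds and label scheme are invented for illustration, not taken from the patent.

```python
import numpy as np

def segment_fluorescence(red, green, plaque_ratio=1.5, calculus_ratio=3.0):
    """Label each pixel from its red/green autofluorescence ratio.
    Labels: 0 = sound tooth surface, 1 = plaque, 2 = calculus.
    Thresholds here are illustrative placeholders."""
    ratio = red / np.maximum(green, 1e-6)  # avoid division by zero
    labels = np.zeros(ratio.shape, dtype=int)
    labels[ratio >= plaque_ratio] = 1
    labels[ratio >= calculus_ratio] = 2    # calculus overrides plaque
    return labels
```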
Abstract: A method includes receiving a first operation instruction; responsive to receiving the first operation instruction, determining whether one or more first images from a set of images comprise a first common feature; and responsive to determining that the one or more first images from the set of images comprise the first common feature, displaying the one or more first images.
Abstract: Described is a multiple-camera system and process for re-identifying an agent located in a materials handling facility based on anterior views of agents. An anterior view of a newly detected agent may be partitioned and color signatures generated for each partition. Likewise, stored anterior views of agents (candidate agents) that may potentially be the newly detected agent are partitioned and color signatures generated for each partition. Based on the color signatures, a similarity between the anterior view of the newly detected agent and the candidate agents is determined. The similarity may be used to either determine that the newly detected agent is one of the candidate agents or reduce the set of candidate agents that are considered during a manual review.
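The partition/color-signature/similarity pipeline above might be sketched as follows, assuming views arrive as RGB arrays; the grid size, histogram bin count, and cosine similarity measure are illustrative choices.

```python
import numpy as np

def color_signature(image, grid=(2, 2), bins=4):
    """Partition an RGB image into a grid and build a normalized
    per-partition color histogram; the concatenated histograms
    form the view's color signature."""
    h, w, _ = image.shape
    rows, cols = grid
    parts = []
    for r in range(rows):
        for c in range(cols):
            block = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins, bins, bins),
                                     range=((0, 256),) * 3)
            hist = hist.ravel()
            parts.append(hist / hist.sum())  # normalize per partition
    return np.concatenate(parts)

def similarity(sig_a, sig_b):
    """Cosine similarity between two signatures (1.0 = identical)."""
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))
```

Comparing a newly detected agent's signature against each candidate's stored signature would then rank, or prune, the candidate set.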
Abstract: An image processor includes an image generator configured to generate corresponding image data corresponding to first microscopic image data obtained under a first observation condition, based on second microscopic image data and third microscopic image data obtained under a second observation condition, and an image output unit configured to output the corresponding image data. The corresponding image data may be image data corresponding to a first focal plane from which the first microscopic image data are obtained, and wherein the second microscopic image data and the third microscopic image data may be image data on a second focal plane and a third focal plane, respectively, which are different from the first focal plane.
Type:
Grant
Filed:
January 16, 2017
Date of Patent:
September 13, 2022
Assignee:
NIKON CORPORATION
Inventors:
Ichiro Sase, Yutaka Sasaki, Takaaki Okamoto, Yuki Terui, Kohki Konishi, Masafumi Mimura, Martin Berger, Petr Gazak, Miroslav Svoboda
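One simple way to realize "corresponding image data" for the first focal plane from images on two other focal planes is linear interpolation along the optical axis. This is only an assumed stand-in for the image generator described in the abstract; the function name and parameters are illustrative.

```python
import numpy as np

def interpolate_focal_plane(img2, img3, z1, z2, z3):
    """Estimate image data at focal plane z1 by linearly interpolating
    between images captured at planes z2 and z3 (z2 != z3)."""
    w = (z1 - z2) / (z3 - z2)          # interpolation weight along z
    return (1 - w) * img2 + w * img3
```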
Abstract: Provided is a system and method for automatically recognizing user motion. The system for automatically recognizing user motion includes an input unit configured to receive three-dimensional (3D) measurement data, a memory which stores a program for performing automatic recognition on 3D user motion using 3D low-quality depth data and a deep learning model, and a processor configured to execute the program, wherein the processor converts the 3D low-quality depth data into 3D high-quality image data.
Type:
Grant
Filed:
November 25, 2020
Date of Patent:
September 6, 2022
Assignee:
ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors:
Jong Sung Kim, Seung Joon Kwon, Sang Woo Seo, Yeong Jae Choi
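The conversion of 3D low-quality depth data into higher-quality data is a learned step in the abstract above; as a placeholder, a simple nearest-neighbor upsampling shows the shape of the operation. The function name and factor are assumptions, not the deep learning model itself.

```python
import numpy as np

def upsample_depth(depth, factor=2):
    """Illustrative stand-in for the learned low-to-high-quality
    conversion: densify a depth map by nearest-neighbor repetition."""
    return np.repeat(np.repeat(depth, factor, axis=0), factor, axis=1)
```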
Abstract: The present disclosure relates to systems and methods for cellular imaging and identification through the use of a light sheet flow cytometer. In one implementation, a light sheet flow cytometer may include a light source configured to emit light having one or more wavelengths, at least one optical element configured to form a light sheet from the emitted light, a microfluidic channel configured to hold a sample, and an imaging device. The imaging device may be adapted to form 3-D images of the sample such that identification tags attached to the sample are visible.
Type:
Grant
Filed:
February 12, 2020
Date of Patent:
August 30, 2022
Assignee:
Verily Life Sciences LLC
Inventors:
Cheng-Hsun Wu, Brian M. Rabkin, Supriyo Sinha, John D. Perreault, Chinmay Belthangady, James Higbie, Seung Ah Lee
Abstract: An example embodiment includes: an acquisition unit that acquires a first image generated by capturing an object by using a light at a first wavelength, a second image generated by capturing the object by using a light at a second wavelength, and depth information on the object; a detection unit that detects a face included in the second image; a check unit that, based on the depth information, checks whether or not a face detected by the detection unit is one obtained by capturing a living body; and an extraction unit that, based on information on a face checked by the check unit as obtained by capturing a living body, extracts a face image from the first image.
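The depth-based liveness check in the abstract above might reduce, in the simplest case, to testing whether the detected face region has real depth relief (a flat photograph of a face has essentially none). The function name and threshold are invented illustrations.

```python
import numpy as np

def is_live_face(depth_patch, min_relief=10.0):
    """Heuristic liveness check: a real face shows depth relief (nose
    nearer the camera than the cheeks), a flat photo does not.
    min_relief is an illustrative threshold in the sensor's units."""
    relief = float(depth_patch.max() - depth_patch.min())
    return relief >= min_relief
```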
Abstract: An image processing method includes: a fluorescent image capturing step of capturing a fluorescent image of a fluorescently labeled tissue specimen; a creating step of creating a fluorescent whole slide image based on the captured fluorescent image; and a storing step of storing the created fluorescent whole slide image. The tissue specimen is fluorescently labeled by using, as a staining reagent, fluorescent substance integrated nanoparticles obtained by bonding a biological substance recognition part to fluorescent particles on which a plurality of fluorescent substances are integrated.
Abstract: A method and a system for generating an assistance function for an ophthalmological surgery are presented. The method includes capturing digital image data of a surgical microscope, which were generated during an ophthalmological surgery by an image sensor and which are annotated. The method furthermore includes capturing sensor data of a phaco system, which were generated during the ophthalmological surgery by a sensor of the phaco system and which are annotated, wherein the annotated sensor data and the annotated digital image data have synchronized timestamps and wherein the annotations refer in indicative fashion to a state of an ophthalmological surgery.
Abstract: A drawing management apparatus includes a receiver, a searcher, and a presenter. The receiver receives input of information concerning a specific subject and information concerning a purpose of a design change to be made to the specific subject. The searcher searches for a pair of drawings which have characteristics similar to characteristics of the specific subject and which are constituted by a drawing to which a design change has been made in accordance with the purpose and a drawing to which the design change has not yet been made, on the basis of the information concerning the specific subject and the information concerning the purpose of the design change. The presenter presents the searched pair of drawings to a user.
Abstract: An imaging system includes a camera, a display, and a processor. The camera has color video acquisition capability, and is mounted to a distal end of an interventional instrument insertable within an object, the camera providing image frames for imaging vasculature of the object, each image frame including multiple pixels providing corresponding signals, respectively. The processor is programmed to receive the signals; amplify variations in at least one of color and motion of the signals corresponding to the multiple pixels; determine spatial phase variability, frequency and signal characteristics of at least some of the amplified signals corresponding to the multiple pixels, respectively; identify pixels indicative of abnormal vascularity based on the spatial phase variability, frequency and/or signal characteristics; create a vascularity map corresponding to each image frame, where each vascularity map includes a portion of the object having the abnormal vascularity; and operate the display to display each vascularity map.
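The per-pixel frequency and signal characterization could be sketched with an FFT over time, as below; the DC removal and dominant-frequency estimate are illustrative assumptions, not the patented amplification method.

```python
import numpy as np

def pixel_frequency_map(frames, fps):
    """Characterize the temporal variation of each pixel in a stack of
    frames shaped (time, height, width): return the dominant frequency
    in Hz and the peak spectral amplitude per pixel."""
    t = frames.shape[0]
    signal = frames - frames.mean(axis=0)           # drop the DC offset
    spectrum = np.abs(np.fft.rfft(signal, axis=0))  # per-pixel spectrum
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    peak = spectrum[1:].argmax(axis=0) + 1          # skip the DC bin
    return freqs[peak], spectrum[1:].max(axis=0)
```

Pixels whose dominant frequency falls in a pulsatile band with strong amplitude would be candidates for the abnormal-vascularity map.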