Patents Examined by Tsung-Yin Tsai
-
Patent number: 11603560
Abstract: Provided is a particle measuring method for measuring, based on image data containing an image of a plurality of particles, a size of each of the particles included in the image data. The particle measuring method includes: acquiring a position of each of the plurality of particles from the image data; extracting two particles vicinal to each other; and calculating the size of each of the particles based on a distance between the two vicinal particles.
Type: Grant
Filed: June 24, 2019
Date of Patent: March 14, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventor: Toru Sasaki
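The core step of this abstract, estimating each particle's size from the distance to its nearest (vicinal) neighbor, can be sketched as follows. This is an illustrative assumption of how such a measurement might work, not the patented implementation; the function name and the touching-particles heuristic are mine.

```python
import numpy as np

def estimate_particle_sizes(centroids):
    """Estimate each particle's size as the distance to its nearest
    (vicinal) neighbor, assuming vicinal particles are nearly touching
    so the center-to-center distance approximates the diameter."""
    pts = np.asarray(centroids, dtype=float)
    # Pairwise Euclidean distances between all centroids.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # ignore self-distances
    return d.min(axis=1)          # nearest-neighbor distance per particle

# Three collinear particles spaced 2.0 apart: each estimated size is 2.0.
sizes = estimate_particle_sizes([(0, 0), (2, 0), (4, 0)])
```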
-
Patent number: 11604941
Abstract: A method of training an action selection neural network to perform a demonstrated task using a supervised learning technique. The action selection neural network is configured to receive demonstration data comprising actions to perform the task and rewards received for performing the actions. The action selection neural network has auxiliary prediction task neural networks on one or more of its intermediate outputs. The action selection policy neural network is trained using multiple combined losses, concurrently with the auxiliary prediction task neural networks.
Type: Grant
Filed: October 29, 2018
Date of Patent: March 14, 2023
Assignee: DeepMind Technologies Limited
Inventor: Todd Andrew Hester
-
Patent number: 11605163
Abstract: An automatic abnormal cell recognition method, the method including: 1) scanning a slide using a digital pathological scanner and obtaining a cytological slide image; 2) obtaining a set of centroid coordinates of all nuclei that is denoted as CentroidOfNucleus by automatically localizing nuclei of all cells in the cytological slide image using a feature fusion based localizing method; 3) obtaining a set of cell square region of interest (ROI) images that are denoted as ROI_images; 4) grouping all cell images in the ROI_images into different groups based on sampling without replacement, where each group contains ROW×COLUMN cell images with preset ROW and COLUMN parameters; obtaining a set of splice images; and 5) classifying all cell images in the splice image simultaneously by using the splice image as an input of a trained deep neural network; and recognizing cells classified as abnormal categories.
Type: Grant
Filed: August 25, 2020
Date of Patent: March 14, 2023
Assignee: WUHAN UNIVERSITY
Inventors: Juan Liu, Jiasheng Liu, Zhuoyu Li, Chunbing Hua
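Step 4 of the abstract, tiling cell ROI images without replacement into ROW×COLUMN splice images so a network can classify a whole grid of cells in one pass, can be sketched roughly like this (function name and layout choices are illustrative assumptions, not the patent's actual procedure):

```python
import numpy as np

def build_splice_images(roi_images, rows, cols):
    """Group equally sized square cell ROI images, without replacement,
    into rows x cols splice images (a sketch of the abstract's step 4)."""
    per_group = rows * cols
    h, w = roi_images[0].shape
    splices = []
    for start in range(0, len(roi_images) - per_group + 1, per_group):
        group = roi_images[start:start + per_group]
        grid = np.array(group).reshape(rows, cols, h, w)
        # Tile the group into one (rows*h) x (cols*w) splice image.
        splice = grid.transpose(0, 2, 1, 3).reshape(rows * h, cols * w)
        splices.append(splice)
    return splices

rois = [np.full((8, 8), i, dtype=np.uint8) for i in range(8)]
# 8 ROIs of 8x8, grouped 2x2 -> two 16x16 splice images
splices = build_splice_images(rois, rows=2, cols=2)
```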
-
Patent number: 11594036
Abstract: Disclosed are techniques for improving an advanced driver-assistance system (ADAS) by pre-processing image data. In one embodiment, a method is disclosed comprising receiving one or more image frames captured by an image sensor installed on a vehicle; identifying a position of a skyline in the one or more image frames, the position comprising a horizontal position of the skyline; cropping one or more future image frames based on the position of the skyline, the cropping generating cropped images comprising a subset of the corresponding future image frames; and processing the cropped images at an advanced driver-assistance system (ADAS).
Type: Grant
Filed: August 21, 2019
Date of Patent: February 28, 2023
Assignee: Micron Technology, Inc.
Inventor: Gil Golov
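The cropping step, discarding everything above the detected skyline so the ADAS only processes the road region, is simple to illustrate. The function name and row-slicing strategy below are assumptions for illustration, not the patent's actual method:

```python
import numpy as np

def crop_below_skyline(frame, skyline_row):
    """Drop every pixel row above the skyline so downstream ADAS
    processing only sees the road region (a sketch; the patented
    cropping strategy may differ)."""
    return frame[skyline_row:, :]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy 480x640 RGB frame
cropped = crop_below_skyline(frame, skyline_row=200)
# cropped keeps rows 200..479: shape (280, 640, 3)
```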
-
Patent number: 11582402
Abstract: An image processing device includes a rotation processor and an image processor. The rotation processor receives an input image and generates a temporary image according to the input image. The image processor is coupled to the rotation processor and outputs a processed image according to the temporary image, wherein the image processor has a predetermined image processing width, a width of the input image is larger than the predetermined image processing width, and a width of the temporary image is less than the predetermined image processing width.
Type: Grant
Filed: June 5, 2019
Date of Patent: February 14, 2023
Assignee: eYs3D Microelectronics, Co.
Inventor: Chi-Feng Lee
-
Systems and methods for image segmentation using a scalable and compact convolutional neural network
Patent number: 11574406
Abstract: Embodiments of the disclosure provide systems and methods for segmenting an image. An exemplary system includes a communication interface configured to receive the image acquired by an image acquisition device. The system further includes a memory configured to store a multi-level learning network comprising a plurality of convolution blocks cascaded at multiple levels. The system also includes a processor configured to apply a first convolution block and a second convolution block of the multi-level learning network to the image in series. The first convolution block is applied to the image and the second convolution block is applied to a first output of the first convolution block. The processor is further configured to concatenate the first output of the first convolution block and a second output of the second convolution block to obtain a feature map and obtain a segmented image based on the feature map.
Type: Grant
Filed: August 19, 2020
Date of Patent: February 7, 2023
Assignee: KEYA MEDICAL TECHNOLOGY CO., LTD.
Inventors: Hanbo Chen, Shanhui Sun, Youbing Yin, Qi Song
-
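The cascade-then-concatenate pattern the abstract describes (block B consumes block A's output, and both outputs are concatenated into one feature map) can be sketched with stand-in blocks. The two transforms below are placeholders for learned convolutions; everything here is an illustrative assumption, not the patented network:

```python
import numpy as np

def conv_block_a(x):
    # Stand-in for the first convolution block (a real block would be
    # a learned convolution; any shape-preserving transform works here).
    return np.tanh(x)

def conv_block_b(x):
    # Stand-in for the second block, applied to block A's output.
    return np.maximum(x, 0.0)

def cascaded_feature_map(image):
    """Apply two blocks in series and concatenate their outputs
    along the channel axis, as the abstract describes."""
    out_a = conv_block_a(image)
    out_b = conv_block_b(out_a)
    return np.concatenate([out_a, out_b], axis=0)  # channels-first layout

image = np.random.randn(3, 32, 32)  # C x H x W input
features = cascaded_feature_map(image)
# 3 channels from block A + 3 from block B = 6 feature-map channels
```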
Patent number: 11574140
Abstract: Systems and methods are disclosed for identifying a diagnostic feature of a digitized pathology image, including receiving one or more digitized images of a pathology specimen, and medical metadata comprising at least one of image metadata, specimen metadata, clinical information, and/or patient information, applying a machine learning model to predict a plurality of relevant diagnostic features based on medical metadata, the machine learning model having been developed using an archive of processed images and prospective patient data, and determining at least one relevant diagnostic feature of the relevant diagnostic features for output to a display.
Type: Grant
Filed: May 6, 2021
Date of Patent: February 7, 2023
Assignee: Paige.AI, Inc.
Inventors: Jillian Sue, Thomas Fuchs, Christopher Kanan
-
Patent number: 11568533
Abstract: A computer-implemented method for automated classification of 3D image data of teeth includes a computer receiving one or more of 3D image data sets where a set defines an image volume of voxels representing 3D tooth structures within the image volume associated with a 3D coordinate system. The computer pre-processes each of the data sets and provides each of the pre-processed data sets to the input of a trained deep neural network. The neural network classifies each of the voxels within a 3D image data set on the basis of a plurality of candidate tooth labels of the dentition. Classifying a 3D image data set includes generating for at least part of the voxels of the data set a candidate tooth label activation value associated with a candidate tooth label defining the likelihood that the labelled data point represents a tooth type as indicated by the candidate tooth label.
Type: Grant
Filed: October 2, 2018
Date of Patent: January 31, 2023
Assignee: PROMATON HOLDING B.V.
Inventors: David Anssari Moin, Frank Theodorus Catharina Claessen, Bas Alexander Verheij
-
Patent number: 11568201
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining an artificial neural network architecture corresponding to a sub-graph of a synaptic connectivity graph. In one aspect, there is provided a method comprising: obtaining data defining a graph representing synaptic connectivity between neurons in a brain of a biological organism; determining, for each node in the graph, a respective set of one or more node features characterizing a structure of the graph relative to the node; identifying a sub-graph of the graph, comprising selecting a proper subset of the nodes in the graph for inclusion in the sub-graph based on the node features of the nodes in the graph; and determining an artificial neural network architecture corresponding to the sub-graph of the graph.
Type: Grant
Filed: January 30, 2020
Date of Patent: January 31, 2023
Assignee: X Development LLC
Inventors: Sarah Ann Laszlo, Georgios Evangelopoulos, Philip Edwin Watson
-
Patent number: 11568632
Abstract: This disclosure describes a system for automatically identifying an item from among a variation of items of a same type. For example, an image may be processed and resulting item image information compared with stored item image information to determine a type of item represented in the image. If the matching stored item image information is part of a cluster, the item image information may then be compared with distinctive features associated with stored item image information of the cluster to determine the variation of the item represented in the received image.
Type: Grant
Filed: September 4, 2020
Date of Patent: January 31, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Sudarshan Narasimha Raghavan, Xiaofeng Ren, Michel Leonard Goldstein, Ohil K. Manyam
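The two-stage matching flow in this abstract (first match the item type, then disambiguate within a cluster of variations using distinctive features) might look roughly as follows. The data structures, distance metric, and names are all illustrative assumptions; the patented system's feature comparison is certainly more sophisticated:

```python
import numpy as np

def identify_item(query, type_features, cluster_features):
    """Two-stage match: find the closest stored item type, then,
    if that type is a cluster of variations, disambiguate against
    the cluster's distinctive features (a sketch of the flow)."""
    type_ids = list(type_features)
    d = [np.linalg.norm(query - type_features[t]) for t in type_ids]
    item_type = type_ids[int(np.argmin(d))]
    variations = cluster_features.get(item_type)
    if not variations:
        return item_type, None  # not a cluster: the type alone suffices
    var_ids = list(variations)
    dv = [np.linalg.norm(query - variations[v]) for v in var_ids]
    return item_type, var_ids[int(np.argmin(dv))]

type_features = {"soda": np.array([1.0, 0.0]), "chips": np.array([0.0, 1.0])}
cluster_features = {"soda": {"cherry": np.array([1.0, 0.1]),
                             "classic": np.array([0.9, -0.1])}}
item, variation = identify_item(np.array([1.0, 0.12]),
                                type_features, cluster_features)
```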
-
Patent number: 11551355
Abstract: A method is provided for measuring or estimating stress distributions on heart valve leaflets by obtaining three-dimensional images of the heart valve leaflets, segmenting the heart valve leaflets in the three-dimensional images by capturing locally varying thicknesses of the heart valve leaflets in three-dimensional image data to generate an image-derived patient-specific model of the heart valve leaflets, and applying the image-derived patient-specific model of the heart valve leaflets to a finite element analysis (FEA) algorithm to estimate stresses on the heart valve leaflets. The images of the heart valve leaflets may be obtained using real-time 3D transesophageal echocardiography (rt-3DTEE). Volumetric images of the mitral valve at mid-systole may be analyzed by user-initialized segmentation and 3D deformable modeling with continuous medial representation to obtain a compact representation of shape.
Type: Grant
Filed: August 24, 2020
Date of Patent: January 10, 2023
Assignee: The Trustees of the University of Pennsylvania
Inventors: Benjamin M Jackson, Robert C Gorman, Joseph H Gorman, III, Alison M Pouch, Chandra M Sehgal, Paul A Yushkevich, Brian B Avants, Hongzhi Wang
-
Patent number: 11538280
Abstract: Systems and methods for eyelid shape estimation are disclosed. In one aspect, after receiving an eye image of an eye (e.g., from an image capture device), an eye pose of the eye in the eye image is determined. From the eye pose, an eyelid shape (of an upper eyelid or a lower eyelid) can be estimated using an eyelid shape mapping model. The eyelid shape mapping model relates the eye pose and the eyelid shape. In another aspect, the eyelid shape mapping model is learned (e.g., using a neural network).
Type: Grant
Filed: June 1, 2020
Date of Patent: December 27, 2022
Assignee: Magic Leap, Inc.
Inventor: Adrian Kaehler
-
Patent number: 11532084
Abstract: A facility for processing a medical imaging image is described. The facility applies each of a number of constituent models making up an ensemble machine learning model to the image to produce a constituent model result that predicts a value for each pixel of the image. The facility aggregates the results produced by the constituent models to determine a result of the ensemble machine learning model. For each of the pixels of the accessed image, the facility determines a measure of variation among the values predicted for the pixel among the constituent models. The facility determines a confidence measure for the ensemble machine learning model result based at least in part on how many pixels of the accessed image have a determined variation measure that exceeds a variation threshold.
Type: Grant
Filed: November 3, 2020
Date of Patent: December 20, 2022
Assignee: ECHONOUS, INC.
Inventors: Babajide Ayinde, Eric Wong, Allen Lu
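The abstract's aggregation-plus-variation idea can be sketched numerically: average the constituent models' per-pixel predictions for the ensemble result, measure per-pixel disagreement, and derive confidence from the fraction of high-disagreement pixels. The names and the exact confidence formula below are assumptions, not the patented method:

```python
import numpy as np

def ensemble_confidence(predictions, variation_threshold):
    """Aggregate per-pixel predictions from an ensemble's constituent
    models and derive a confidence score from how many pixels disagree."""
    preds = np.stack(predictions)      # shape: (models, H, W)
    aggregate = preds.mean(axis=0)     # ensemble result per pixel
    variation = preds.std(axis=0)      # per-pixel disagreement
    frac_uncertain = (variation > variation_threshold).mean()
    return aggregate, 1.0 - frac_uncertain  # fewer noisy pixels -> higher confidence

# Three toy 2x2 "segmentation maps": models agree on three pixels, split on one.
p1 = np.array([[0.0, 1.0], [1.0, 0.0]])
p2 = np.array([[0.0, 1.0], [1.0, 1.0]])
p3 = np.array([[0.0, 1.0], [1.0, 0.0]])
agg, conf = ensemble_confidence([p1, p2, p3], variation_threshold=0.25)
# one of four pixels exceeds the threshold, so conf = 1 - 1/4 = 0.75
```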
-
Patent number: 11526703
Abstract: In an approach for classifying regions of tissue captured in multispectral videos into medically meaningful classes using GPU accelerated perfusion estimation, a processor receives one or more multispectral videos of a subject tissue of a patient. A processor extracts one or more fluorescence time series profiles from the one or more multispectral videos. A processor estimates one or more sets of perfusion parameters based on the one or more fluorescence time series profiles. A processor inputs one or more feature vectors into a classifier, wherein the one or more feature vectors are derived from the one or more sets of perfusion parameters. A processor receives a classification result for each of the one or more feature vectors, wherein the classification result comprises a set of medically relevant labels for each of the one or more feature vectors with a level of certainty for each label of the set of medically relevant labels.
Type: Grant
Filed: July 28, 2020
Date of Patent: December 13, 2022
Assignee: International Business Machines Corporation
Inventors: Stephen Michael Moore, Sergiy Zhuk, Seshu Tirupathi, Michele Gazzetti, Pol MacAonghusa
-
Patent number: 11521007
Abstract: A method for configuring a set of hardware accelerators to process a CNN. In an embodiment, the method includes one or more computer processors determining a set of parameters related to a feature map to analyze at a respective layer of the CNN, the set of parameters including a quantization value and respective values that describe a shape of the feature map. The method further includes configuring a set of hardware accelerators for the respective layer of the CNN. The method further includes receiving a portion of the feature map at the configured set of hardware accelerators for the respective layer of the CNN, wherein the received portion of the feature map includes a group of sequential data slices. The method further includes analyzing the group of sequential data slices among the configured set of hardware accelerators.
Type: Grant
Filed: February 17, 2020
Date of Patent: December 6, 2022
Assignee: International Business Machines Corporation
Inventors: Junsong Wang, Chang Xu, Tao Wang, Yan Gong
-
Patent number: 11514575
Abstract: A discrete attribute value dataset is obtained that is associated with a plurality of probe spots each assigned a different probe spot barcode. The dataset comprises spatial projections, each comprising images of a biological sample. Each image includes a corresponding plurality of discrete attribute values for the probe spots. Each such value is associated with a probe spot in the plurality of probe spots based on the probe spot barcodes. The dataset is clustered using the discrete attribute values, or dimension reduction components thereof, for a plurality of loci at each respective probe spot across the images of the projections, thereby assigning each probe spot to a cluster in a plurality of clusters. Morphological patterns are identified from the spatial arrangement of the probe spots in the various clusters.
Type: Grant
Filed: September 30, 2020
Date of Patent: November 29, 2022
Assignee: 10X GENOMICS, INC.
Inventors: Jeffrey Clark Mellen, Jasper Staab, Kevin J. Wu, Neil Ira Weisenfeld, Florian Baumgartner, Brynn Claypoole
-
Patent number: 11514571
Abstract: Systems and methods for identifying and assessing lymph nodes are provided. Medical image data (e.g., one or more computed tomography images) of a patient is received and anatomical landmarks in the medical image data are detected. Anatomical objects are segmented from the medical image data based on the one or more detected anatomical landmarks. Lymph nodes are identified in the medical image data based on the one or more detected anatomical landmarks and the one or more segmented anatomical objects. The identified lymph nodes may be assessed by segmenting the identified lymph nodes from the medical image data and quantifying the segmented lymph nodes. The identified lymph nodes and/or the assessment of the identified lymph nodes are output.
Type: Grant
Filed: December 13, 2019
Date of Patent: November 29, 2022
Assignee: Siemens Healthcare GmbH
Inventors: Bogdan Georgescu, Elijah D. Bolluyt, Alexandra Comaniciu, Sasa Grbic
-
Patent number: 11508064
Abstract: A method includes: acquiring a training data set including pieces of training data, each of the pieces including an image of a training target, first annotation data representing a rectangular region in the image, and second annotation data; training, based on the image and the first annotation data, an object detection model specifying a rectangular region including the training target; training, based on the image and the second annotation data, a neural network; and calculating a first index value related to a relationship of a pixel number, the trained estimation model and the calculated first index value being used in a determination process that determines, based on the calculated first index value and a second index value relationship between a pixel number in an output result and an estimation result, whether or not a target in a target image is normal.
Type: Grant
Filed: January 8, 2021
Date of Patent: November 22, 2022
Assignees: FUJITSU LIMITED, RIKEN, NATIONAL CANCER CENTER
Inventors: Akira Sakai, Masaaki Komatsu, Ai Dozen
-
Patent number: 11508065
Abstract: This disclosure generally pertains to methods and systems for automatically detecting acquisition errors in a medical image using machine learning. Certain embodiments relate to methods for the development of deep learning algorithms that perform machine recognition of specific features and conditions in imaging and other medical data. Another embodiment provides systems for detecting acquisition errors in an X-ray image, the system comprising a non-transitory computer-readable medium storing a preprocessing quality control module that, when executed by at least one electronic processor, is configured to generate associated classifications identifying characteristics of the medical image.
Type: Grant
Filed: March 19, 2021
Date of Patent: November 22, 2022
Assignee: Qure.ai Technologies Private Limited
Inventors: Preetham Putha, Manoj Tadepalli, Bhargava Reddy, Tarun Raj, Ammar Jagirdar, Pooja Rao, Prashant Warier
-
Patent number: 11501446
Abstract: A facility for identifying the boundaries of 3-dimensional structures in 3-dimensional images is described. For each of multiple 3-dimensional images, the facility receives results of a first attempt to identify boundaries of structures in the 3-dimensional image, and causes the results of the first attempt to be presented to a person. For each of a number of 3-dimensional images, the facility receives input generated by the person providing feedback on the results of the first attempt. The facility then uses the following to train a deep-learning network to identify boundaries of 3-dimensional structures in 3-dimensional images: at least a portion of the plurality of 3-dimensional images, at least a portion of the received results, and at least a portion of provided feedback.
Type: Grant
Filed: October 30, 2019
Date of Patent: November 15, 2022
Assignee: Allen Institute
Inventors: Jianxu Chen, Liya Ding, Matheus Palhares Viana, Susanne Marie Rafelski