Using Neural Network Or Trainable (adaptive) System Patents (Class 600/408)
  • Patent number: 11278260
    Abstract: A method and a system for acquiring a 3D ultrasound image. The method includes receiving a request to capture a plurality of ultrasound images for a medical test corresponding to a medical condition. The method further includes determining a body part corresponding to the medical test. Further, the method includes identifying an imaging site particular to the medical test. Furthermore, the method includes providing navigational guidance to the user in real time for positioning a handheld ultrasound device. Subsequently, the user is assisted, using deep learning, to capture the plurality of ultrasound images of the imaging site in real time. Further, the plurality of ultrasound images of the imaging site is captured. Finally, the method includes converting the plurality of ultrasound images to a 3-Dimensional (3D) ultrasound image in real time.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: March 22, 2022
    Inventors: Preetham Putha, Manoj Tadepalli, Prashant Warier, Pooja Rao, Rohan Sahu
  • Patent number: 11253324
    Abstract: One embodiment provides a method for training a machine-learning model to detect the location of a person's appendix, comprising: receiving, at the machine-learning model, a plurality of images, each image being a slice of a body taken by a CT scan; identifying, for each of the plurality of images, features of the appendix, wherein the identifying comprises analyzing a plurality of slices of each of the plurality of images and classifying each of the plurality of slices into one of a plurality of classification groups based upon a feature of the appendix within the slice; and segmenting each of the plurality of image slices included in the one of the plurality of classification groups that classifies the slice as containing the appendix, thereby identifying probable locations of the appendix by utilizing a probability mask for each of the probable locations.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: February 22, 2022
    Assignee: Cognistic, LLC
    Inventors: Tom Bu, Sanjay Chopra, Roshan Bhave, Ji Liu
  • Patent number: 11222717
    Abstract: A medical scan triaging system is operable to generate a global abnormality probability for each of a plurality of medical scans by utilizing a computer vision model trained on a training set of medical scans. A triage probability threshold is determined based on user input to a client device. A first subset of the plurality of medical scans, designated for human review, is determined by identifying medical scans with a corresponding global abnormality probability that compares favorably to the triage probability threshold. A second subset of the plurality of medical scans, designated as normal, is determined by identifying ones of the plurality of medical scans with a corresponding global abnormality probability that compares unfavorably to the triage probability threshold. Transmission of the first subset of the plurality of medical scans to a plurality of client devices associated with a plurality of users is facilitated.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: January 11, 2022
    Assignee: Enlitic, Inc.
    Inventors: Kevin Lyman, Li Yao, Eric C. Poblenz, Jordan Prosky, Ben Covington, Anthony Upton
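    The triage rule in this abstract reduces to a single threshold comparison over per-scan abnormality probabilities. A minimal Python sketch (the function name and the choice of >= for "compares favorably" are assumptions; the abstract does not fix the convention):

    ```python
    def triage(scan_probs, threshold):
        """Split scans into a human-review subset and a normal subset.

        scan_probs: dict mapping scan id -> global abnormality probability in [0, 1]
                    (in the patent, produced by a trained computer vision model).
        threshold:  triage probability threshold (in the patent, from user input).
        """
        review = {s for s, p in scan_probs.items() if p >= threshold}  # compares favorably
        normal = {s for s, p in scan_probs.items() if p < threshold}   # compares unfavorably
        return review, normal
    ```

    The first subset would then be routed to reviewers' client devices; the second is marked normal without human review.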
  • Patent number: 11176413
    Abstract: A discriminator includes a common learning unit and a plurality of learning units that are connected to an output unit of the common learning unit. The discriminator is trained, using a plurality of data sets of a first image obtained by capturing an image of a subject that has developed a disease and an image data of a disease region in the first image, such that information indicating the disease region is output from a first learning unit in a case in which the first image is input to the common learning unit. In addition, the discriminator is trained, using a plurality of data sets of an image set obtained by registration between the first image and a second image whose type is different from the type of the first image, such that an estimated image of the second image is output from an output unit of a second learning unit.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: November 16, 2021
    Assignee: FUJIFILM Corporation
    Inventor: Sadato Akahori
  • Patent number: 11176404
    Abstract: An embodiment of this application provides an image object detection method. The method may include obtaining a detection image, an n-level deep feature map framework, and an m-level non-deep feature map framework. The method may further include extracting deep features from an (i-1)-level feature of the detection image using an i-level deep feature map framework, to obtain an i-level feature of the detection image. The method may further include extracting non-deep features from a (j-1+n)-level feature of the detection image using a j-level non-deep feature map framework, to obtain a (j+n)-level feature of the detection image. The method may further include performing an information regression operation on an a-level feature to an (m+n)-level feature of the detection image, to obtain object type information and object position information of an object in the detection image, where a is an integer greater than or equal to 2 and less than n.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 16, 2021
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Shijie Zhao, Feng Li, Xiaoxiang Zuo
  • Patent number: 11170502
    Abstract: Provided is a deep-neural-network-based method to extract appearance and geometry features for pulmonary texture classification, which belongs to the technical fields of medical image processing and computer vision. Taking 217 pulmonary computed tomography images as original data, several groups of datasets are generated through a preprocessing procedure. Each group includes a CT image patch, a corresponding image patch containing geometry information, and a ground-truth label. A dual-branch residual network is constructed, whose two branches separately take CT image patches and the corresponding image patches containing geometry information as input. Appearance and geometry information of pulmonary textures is learnt by the dual-branch residual network and then fused to achieve high accuracy in pulmonary texture classification. Besides, the proposed network architecture is clear and easy to construct and implement.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: November 9, 2021
    Inventors: Rui Xu, Xinchen Ye, Lin Lin, Haojie Li, Xin Fan, Zhongxuan Luo
  • Patent number: 11170470
    Abstract: Techniques are described for content-adaptive downsampling of digital images and videos for computer vision operations, such as semantic segmentation. A computer vision system comprises a memory, one or more processors operably coupled to the memory and a downsampling module configured for execution by the one or more processors to perform, based on a non-uniform sampling model trained to predict content-aware sampling parameters, downsampling input image data to generate downsampled image data. A segmentation module is configured for execution by the one or more processors to perform segmentation on the downsampled image to produce a segmentation result, such as a feature map that assigns pixels of the downsampled image data to object classes. An upsampling module is configured for execution by the one or more processors to perform upsampling according to the segmentation result to produce upsampled image data.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: November 9, 2021
    Assignee: Facebook, Inc.
    Inventors: Zijian He, Peter Vajda, Priyam Chatterjee, Shanghsuan Tsai, Dmitrii Marin
  • Patent number: 11164309
    Abstract: An embodiment of the invention may include a method, computer program product and computer system for object detection and identification. The method, computer program product and computer system may include a computing device which may receive an image from an imaging device. The image may be a medical image. The computing device may detect one or more potential indicators of disease in the image using a first algorithm and determine areas of potential disease in the image using an artificial intelligence algorithm. The computing device may determine a correlation between the determined areas of potential disease in the image and the one or more potential indicators of disease for the image. The computing device may, in response to determining a positive correlation, identify one or more of the potential indicators of disease for annotation and generate a report indicating that one or more potential indicators of disease were found in the image.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Marwan Sati, David Richmond
  • Patent number: 11164067
    Abstract: Disclosed are systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging-intensive applications, including medical imaging. For example, a system having means to execute a neural network model formed from a plurality of layer blocks, including an encoder layer block which precedes a plurality of decoder layer blocks, includes: means for associating a resolution value with each of the plurality of layer blocks; means for processing, via the encoder layer block, a respective layer block input including a down-sampled layer block output; means for processing, via the decoder layer blocks, a respective layer block input including an up-sampled layer block output and a layer block output of a previous layer block associated with a prior resolution value of a layer block which precedes the respective decoder layer block; and means for generating the respective layer block output by summing or concatenating the processed layer block inputs.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: November 2, 2021
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jianming Liang, Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh
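    The fusion step at the end of this abstract, summing or concatenating an up-sampled output with a skip connection from an earlier block, can be sketched in a few lines of NumPy (the function name, the (channels, height, width) layout, and the channel-axis concatenation are assumptions; the patent only specifies "summing or concatenating"):

    ```python
    import numpy as np

    def decoder_block_fuse(upsampled, skip, mode="sum"):
        """Combine an up-sampled layer-block output with the output of a
        previous layer block (skip connection). Arrays are (C, H, W)."""
        if mode == "sum":
            return upsampled + skip                       # element-wise fusion
        return np.concatenate([upsampled, skip], axis=0)  # channel concatenation
    ```

    Summation keeps the channel count fixed; concatenation doubles it, which is the usual trade-off between parameter count and representational capacity in such decoders.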
  • Patent number: 11157796
    Abstract: A joint position estimation device including a memory and a processor connected to the memory. The processor executes a process including: estimating, by a first DNN for which a first parameter determined by learning of the first DNN has been set, a body part region of the animal with respect to an input image to be processed; and estimating, by a second DNN for which a second parameter determined by learning of the second DNN has been set, a first joint position and a second joint position in each of the body part regions estimated by the first DNN and in a plural-body-part region in which a plurality of the body part regions are connected.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: October 26, 2021
    Inventors: Satoshi Tanabe, Ryosuke Yamanaka, Mitsuru Tomono
  • Patent number: 11147633
    Abstract: Described herein are systems and methods related to image management. A system may include an instrument that includes an elongate body and an imaging device. The elongate body can be configured to be inserted into a luminal network. The imaging device may be positioned at a distal tip of the elongate body. The system may receive from the imaging device one or more images captured when the elongate body is within the luminal network. For each of the images, the system may determine one or more metrics that are indicative of a reliability of an image for localization of the distal tip of the elongate body within the luminal network. The system may determine a reliability threshold value for each of the one or more metrics. The system can utilize the one or more images based on whether the one or more metrics meet corresponding reliability threshold values.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: October 19, 2021
    Assignee: Auris Health, Inc.
    Inventor: Menglong Ye
  • Patent number: 11146774
    Abstract: The present invention relates to a three-dimensional face image capturing method and comprises the steps of: capturing a face region of a user in a direction from the right chin to the left forehead; capturing the face region in a direction from the left forehead to the left chin; capturing the face region in a direction from the left chin to the right forehead; and capturing the face region in a direction from the right forehead to the center of the face.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: October 12, 2021
    Assignee: Korea Institute of Oriental Medicine
    Inventors: Jun Hyeong Do, Young Min Kim, Jun Su Jang
  • Patent number: 11126915
    Abstract: An information processing apparatus and a method for volume data visualization are provided. The information processing apparatus stores an auto-encoder that includes an encoder network and a decoder network. The encoder network includes a loss function and a first plurality of neural network (NN) layers. The information processing apparatus inputs volume data to an initial NN layer of the first plurality of NN layers and generates a latent image as an output from a final NN layer of the first plurality of NN layers based on application of the encoder network on the input volume data. The information processing apparatus estimates a distance between the generated latent image and a reference image based on the loss function and updates the encoder network based on the estimated distance. Finally, the information processing apparatus outputs the updated encoder network as a trained encoder network based on the estimated distance being a minimum.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: September 21, 2021
    Inventors: Shigeru Owada, Frank Nielsen
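    The training signal in this abstract is simply a distance between the encoder's latent image and a fixed reference image. A one-function sketch (the L2 norm is an assumption; the abstract leaves the loss function's metric open):

    ```python
    import numpy as np

    def latent_distance(latent, reference):
        """Loss used to update the encoder: distance between the latent image
        produced by the encoder network and a reference image. The L2 norm is
        a placeholder for the patent's unspecified loss function."""
        return float(np.linalg.norm(latent - reference))
    ```

    The encoder would be updated (e.g., by gradient descent) until this distance reaches a minimum, at which point it is emitted as the trained encoder.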
  • Patent number: 11113506
    Abstract: The invention concerns a method for providing an evaluation means (60) for at least one optical application system (5) of a microscope-based application technology, wherein the following steps are performed, in particular each by an optical training system (4): performing an input detection (101) of at least one sample (2) according to the application technology in order to obtain at least one input record (110) of the sample (2) from the input detection (101), performing a target detection (102) of the sample (2) according to a training technology to obtain at least one target record (112) of the sample (2) from the target detection (102), the training technology being different from the application technology at least in that additional information (115) about the sample (2) is provided, training (130) of the evaluation means (60) at least on the basis of the input record (110) and the target record (112), in order to obtain a training information (200) of the evaluation means (60), in that vario
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: September 7, 2021
    Inventors: Daniel Krueger, Mike Woerdemann, Stefan Diepenbrock
  • Patent number: 11107564
    Abstract: An accession number correction system is operable to determine that an accession number of a received DICOM image does not link to any corresponding one of a plurality of medical reports. A query indicating medical report criteria, generated based on the first DICOM image, is transmitted to a report database, and a set of medical reports is received from the report database in response. One report of the set of medical reports that corresponds to the DICOM image is determined by performing a comparison function on the DICOM image and the one report to generate a comparison value, and by determining that the comparison value compares favorably to a comparison threshold. Updated report header data that includes the accession number of the first DICOM image is generated for the one report and is transmitted to the report database for storage.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: August 31, 2021
    Assignee: Enlitic, Inc.
    Inventors: Eric C. Poblenz, Kevin Lyman, Chris Croswhite
  • Patent number: 11100642
    Abstract: The purpose of the present disclosure is to provide a computer system, an animal diagnosis method, and a program in which the accuracy of animal diagnosis can be improved. The computer system acquires a visible light image of an animal imaged by a camera, compares the acquired visible light image with a normal visible light image of the animal and performs image analysis, identifies a species of the animal according to the result of the image analysis, identifies an abnormal portion of the animal according to the result of the image analysis, acquires environmental data of the animal, and diagnoses a condition of the animal according to the identified species, the identified abnormal portion and the acquired environmental data.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: August 24, 2021
    Inventor: Shunji Sugaya
  • Patent number: 11049239
    Abstract: Techniques are provided for deep neural network (DNN) identification of realistic synthetic images generated using a generative adversarial network (GAN). According to an embodiment, a system is described that can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a first extraction component that extracts a subset of synthetic images classified as non-real-like as opposed to real-like, wherein the subset of synthetic images were generated using a GAN model. The computer executable components can further comprise a training component that employs the subset of synthetic images and real images to train a DNN network model to classify synthetic images generated using the GAN model as either real-like or non-real-like.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: June 29, 2021
    Inventors: Ravi Soni, Min Zhang, Zili Ma, Gopal B. Avinash
  • Patent number: 11017269
    Abstract: A method for determining optimized deep learning architecture includes receiving a plurality of training images and a plurality of real time images corresponding to a subject. The method further includes receiving, by a medical practitioner, a plurality of learning parameters comprising a plurality of filter classes and a plurality of architecture parameters. The method also includes determining a deep learning model based on the plurality of learning parameters and the plurality of training images, wherein the deep learning model comprises a plurality of reusable filters. The method further includes determining a health condition of the subject based on the plurality of real time images and the deep learning model. The method also includes providing the health condition of the subject to the medical practitioner.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: May 25, 2021
    Assignee: General Electric Company
    Inventors: Sheshadri Thiruvenkadam, Sohan Rashmi Ranjan, Vivek Prabhakar Vaidya, Hariharan Ravishankar, Rahul Venkataramani, Prasad Sudhakar
  • Patent number: 11003909
    Abstract: A machine trains a first neural network using a first set of images. Training the first neural network comprises computing a first set of weights for a first set of neurons. The machine, for each of one or more alpha values in order from smallest to largest, trains an additional neural network using an additional set of images. The additional set of images comprises a homographic transformation of the first set of images. The homographic transformation is computed based on the alpha value. Training the additional neural network comprises computing an additional set of weights for an additional set of neurons. The additional set of weights is initialized based on a previously computed set of weights. The machine generates a trained ensemble neural network comprising the first neural network and one or more additional neural networks corresponding to the one or more alpha values.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: May 11, 2021
    Assignee: Raytheon Company
    Inventors: Peter Kim, Michael J. Sand
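    The training schedule in this abstract, a base network followed by warm-started networks trained on progressively stronger homographic transformations, can be sketched as a small driver loop. The function name and the `train`/`transform` callables are assumptions supplied by the caller; the patent fixes only the ordering (alphas smallest to largest) and the warm-start initialization:

    ```python
    def train_ensemble(base_images, alphas, train, transform):
        """Train the ensemble described above.

        train(images, init):      trains a network, optionally initialized from
                                  previously computed weights, and returns weights.
        transform(images, alpha): applies a homographic transformation computed
                                  from the alpha value.
        """
        weights = train(base_images, init=None)        # first neural network
        ensemble = [weights]
        for alpha in sorted(alphas):                   # smallest to largest
            warped = transform(base_images, alpha)     # homography based on alpha
            weights = train(warped, init=weights)      # warm start from previous weights
            ensemble.append(weights)
        return ensemble                                # trained ensemble neural network
    ```

    Warm-starting each member from the previous one lets each network begin near a solution for a slightly less distorted dataset.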
  • Patent number: 10970518
    Abstract: A voxel feature learning network receives a raw point cloud and converts the point cloud into a sparse 4D tensor comprising three-dimensional coordinates (e.g. X, Y, and Z) for each voxel of a plurality of voxels and a fourth voxel feature dimension for each non-empty voxel. In some embodiments, convolutional mid layers further transform the 4D tensor into a high-dimensional volumetric representation of the point cloud. In some embodiments, a region proposal network identifies 3D bounding boxes of objects in the point cloud based on the high-dimensional volumetric representation. In some embodiments, the feature learning network and the region proposal network are trained end-to-end using training data comprising known ground truth bounding boxes, without requiring human intervention.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: April 6, 2021
    Assignee: Apple Inc.
    Inventors: Yin Zhou, Cuneyt O. Tuzel, Jerremy Holland
  • Patent number: 10937541
    Abstract: Systems and methods are disclosed for processing an electronic image corresponding to a specimen. One method for processing the electronic image includes: receiving a target electronic image of a slide corresponding to a target specimen, the target specimen including a tissue sample from a patient, applying a machine learning system to the target electronic image to determine deficiencies associated with the target specimen, the machine learning system having been generated by processing a plurality of training images to predict stain deficiencies and/or predict a needed recut, the training images including images of human tissue and/or images that are algorithmically generated; and based on the deficiencies associated with the target specimen, determining to automatically order an additional slide to be prepared.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: March 2, 2021
    Assignee: PAIGE.AI, Inc.
    Inventors: Rodrigo Ceballos Lentini, Christopher Kanan, Patricia Raciti, Leo Grady, Thomas Fuchs
  • Patent number: 10839581
    Abstract: A computer-implemented method for generating a composite image. The method includes iteratively optimizing an intermediate style transfer image, using an initial style transfer image as a starting point, based on a predefined loss function, original content features of a first input image, and original style features of a second input image; generating an optimized style transfer image after the iterative optimization is performed N times (N > 1); and morphing the optimized style transfer image with the second input image to generate the composite image.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: November 17, 2020
    Assignee: BOE Technology Group Co., Ltd.
    Inventor: Guannan Chen
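    The optimize-then-morph flow in this abstract can be sketched as a gradient-descent loop followed by a blending step. All callables (`loss_grad`, `morph`) and the plain gradient-descent update are assumptions; the patent specifies only a predefined loss over content and style features, N iterations, and a final morph with the second input image:

    ```python
    def composite(initial, second_image, content_feats, style_feats,
                  loss_grad, morph, n_iters, lr=0.1):
        """Iteratively optimize a style transfer image from an initial image,
        then morph the optimized result with the second (style) input image."""
        x = initial
        for _ in range(n_iters):                           # N optimization steps
            x = x - lr * loss_grad(x, content_feats, style_feats)
        return morph(x, second_image)                      # final composite image
    ```

    In practice `loss_grad` would be backpropagated through a feature network (e.g., a pretrained CNN), and `morph` might be an alpha blend or mesh warp.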
  • Patent number: 10834523
    Abstract: An autonomous transport device system is enabled for one or more autonomous vehicles and autonomous surface delivery devices. Specific pickup and drop off zones for package delivery and user transport may be defined. The system leverages an artificial intelligence based learning algorithm to understand various environments. Packages may be dropped off in the geofenced areas. In some instances, packages may be stored in hidden areas that are purposely cached local to a likely delivery area. Some areas may be marked for pickup and drop off. Shippers may cache certain packages proximate to locations based on demand, joint distribution centers, and the presence of multiple transport devices including rovers, drones, UAVs, and autonomous vehicles.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: November 10, 2020
    Assignee: Accelerate Labs, LLC
    Inventor: Sanjay K. Rao
  • Patent number: 10825190
    Abstract: A dynamic image processing apparatus includes: a hardware processor that: extracts a lung-field region from at least one of a plurality of frame images of a chest dynamic image obtained by radiographing a dynamic state of a chest of an examinee; sets a feature point in a position that moves according to a movement of a lung field due to respiration in the lung-field region extracted by the hardware processor; searches a frame image other than a frame image in which the feature point has been set for a corresponding point that corresponds to the feature point set by the hardware processor, and estimates a correspondence relationship of each pixel in the lung-field region among the plurality of frame images in accordance with a positional relationship between the feature point set by the hardware processor and the corresponding point searched for by the hardware processor.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: November 3, 2020
    Assignee: KONICA MINOLTA, INC.
    Inventors: Sho Noji, Kenta Shimamura
  • Patent number: 10769493
    Abstract: The embodiments of the present invention provide training and construction methods and apparatus of a neural network for object detection, an object detection method and apparatus based on a neural network and a neural network. The training method of the neural network for object detection, comprises: inputting a training image including a training object to the neural network to obtain a predicted bounding box of the training object; acquiring a first loss function according to a ratio of the intersection area to the union area of the predicted bounding box and a true bounding box, the true bounding box being a bounding box of the training object marked in advance in the training image; and adjusting parameters of the neural network by utilizing at least the first loss function to train the neural network.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: September 8, 2020
    Inventors: Jiahui Yu, Qi Yin
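    The first loss function in this abstract depends only on the ratio of the intersection area to the union area of the two boxes. A minimal sketch for axis-aligned boxes (the `-ln(IoU)` loss form is an assumption; the patent requires only that the loss be acquired from the ratio):

    ```python
    import math

    def iou_loss(pred, truth):
        """Loss from the intersection-over-union ratio of the predicted
        bounding box and the true (pre-marked) bounding box.
        Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
        ix1, iy1 = max(pred[0], truth[0]), max(pred[1], truth[1])
        ix2, iy2 = min(pred[2], truth[2]), min(pred[3], truth[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)       # intersection area
        area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
        union = area(pred) + area(truth) - inter            # union area
        iou = inter / union
        return -math.log(iou) if iou > 0 else float("inf")  # perfect overlap -> 0
    ```

    The network's parameters would then be adjusted by backpropagating at least this loss, as the abstract's final step describes.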
  • Patent number: 10762421
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using a neural network system that includes a whitened neural network layer. One of the methods includes receiving an input activation generated by a layer before the whitened neural network layer in the sequence; processing the received activation in accordance with a set of whitening parameters to generate a whitened activation; processing the whitened activation in accordance with a set of layer parameters to generate an output activation; and providing the output activation as input to a neural network layer after the whitened neural network layer in the sequence.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: September 1, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Guillaume Desjardins, Karen Simonyan, Koray Kavukcuoglu, Razvan Pascanu
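    The two-stage processing in this abstract, whitening parameters first and ordinary layer parameters second, can be sketched for a fully connected layer. The mean-subtraction-plus-matrix form of the whitening step is an assumption consistent with, but not dictated by, the abstract:

    ```python
    import numpy as np

    def whitened_layer(x, mu, whiten, weights, bias):
        """Process an incoming activation through a whitened layer:
        first the whitening parameters (mu, whiten), then the layer
        parameters (weights, bias) to produce the output activation."""
        z = whiten @ (x - mu)        # whitened activation
        return weights @ z + bias    # output activation, passed to the next layer
    ```

    With the identity as the whitening matrix and a zero mean, the layer degenerates to an ordinary linear layer, which is a useful sanity check.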
  • Patent number: 10740897
    Abstract: Embodiments of the present invention provide a method and a device for three-dimensional feature-embedded image object component-level semantic segmentation, the method includes: acquiring three-dimensional feature information of a target two-dimensional image; performing a component-level semantic segmentation on the target two-dimensional image according to the three-dimensional feature information of the target two-dimensional image and two-dimensional feature information of the target two-dimensional image. In the technical solution of the present application, not only the two-dimensional feature information of the image but also the three-dimensional feature information of the image are taken into consideration when performing the component-level semantic segmentation on the image, thereby improving the accuracy of the image component-level semantic segmentation.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: August 11, 2020
    Inventors: Xiaowu Chen, Jia Li, Yafei Song, Yifan Zhao, Qinping Zhao
  • Patent number: 10687766
    Abstract: Systems and methods are described to generate, using a first image generation technique, a first image based on the first image data, display the first image to an operator, receive, from the operator, one or more indications of features in the first image of the patient volume, generate, using a second image generation technique, a second image based on the first image data, perform automated feature extraction on the second image to automatically extract information associated with features of the patient volume, and output a feature report of the patient volume based on the one or more indications of features and the information associated with features.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: June 23, 2020
    Assignee: Siemens Healthcare GmbH
    Inventor: Christian D. Eusemann
  • Patent number: 10685262
    Abstract: Techniques related to implementing convolutional neural networks for object recognition are discussed. Such techniques may include generating a set of binary neural features via convolutional neural network layers based on input image data and applying a strong classifier to the set of binary neural features to generate an object label for the input image data.
    Type: Grant
    Filed: March 20, 2015
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Anbang Yao, Lin Xu, Jianguo Li, Yurong Chen
  • Patent number: 10642942
    Abstract: A method of designing or selecting an implantable medical device comprises the steps of: i) obtaining a plurality of measured data points of a characteristic of an anatomical feature of an individual; ii) using said data points to construct a surrogate model of said characteristic, the surrogate model being constructed by interpolating or regressing measured data points of the characteristic, and using said surrogate model to obtain predicted values of said characteristic at a plurality of locations; iii) using said predicted values to determine or select at least one value of a design parameter of the implantable medical device. There is further disclosed a method of monitoring or diagnosing a disease or disorder.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: May 5, 2020
    Inventor: Neil W. Bressloff
  • Patent number: 10631773
    Abstract: A living body state estimation apparatus acquires information indicating a state of a living body. The living body state estimation apparatus is configured to include an electrocardiogram signal acquisition unit which acquires an electrocardiogram signal of the living body and an information acquisition unit which acquires a parameter as the information, the parameter specifying a predetermined function indicating a probability distribution for a reference wave interval which is a time interval between peaks of consecutive predetermined reference waves in the acquired electrocardiogram signal.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: April 28, 2020
    Inventors: Yoshitaka Kimura, Nobuo Yaegashi, Kunihiro Okamura, Takuya Ito, Kunihiro Koide, Miyuki Endo
  • Patent number: 10366494
    Abstract: A computer-readable storage medium may be configured to store a program comprising instructions configured to, when executed by a computing device, cause the computing device to detect a selection of a partial area of the image, transform the image into a transformed image in which the selected partial area is positioned in a center of the transformed image, extract at least one feature from the transformed image, using a deep learning technique, enhance at least one feature of the at least one extracted feature, restore, as a restored image, at least one feature of the at least one enhanced feature, and inversely transform the restored image to provide segmented images.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: July 30, 2019
    Inventors: Yeongkyeong Seong, Wonsik Kim
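The centre-then-invert pipeline wrapped around the feature extraction can be illustrated with a wrap-around translation; the use of `np.roll` and the toy 4x4 image are illustrative choices, not the patented transform:

```python
import numpy as np

def center_on(image, row, col):
    """Translate the image (with wrap-around) so that the selected
    pixel lands at the image centre, and return the shift needed
    to invert the transform later."""
    h, w = image.shape[:2]
    dr, dc = h // 2 - row, w // 2 - col
    return np.roll(image, (dr, dc), axis=(0, 1)), (-dr, -dc)

def inverse_transform(image, shift):
    """Undo the centring shift (the 'inversely transform' step)."""
    return np.roll(image, shift, axis=(0, 1))

img = np.arange(16).reshape(4, 4)
centered, shift = center_on(img, 0, 1)       # user selects pixel (0, 1)
assert centered[2, 2] == img[0, 1]           # selected area is now centred
restored = inverse_transform(centered, shift)
assert (restored == img).all()               # round trip is lossless
```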
  • Patent number: 10330084
    Abstract: A wind park comprising at least two wind turbines that produce electrical power by means of a wind rotor and a generator and deliver this power to an accumulating network, and comprising a park master that is configured to control said wind turbines and has a power regulator whose input is supplied with a target power signal and at whose output power control signals are emitted for the wind turbines, said power regulator comprising a feed-forward control module that imposes a value for the target power onto the output of said power regulator by means of a multiplication element.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: June 25, 2019
    Assignee: Senvion GmbH
    Inventors: Jens Geisler, Roman Bluhm, Thomas Ott, Thomas Schröter
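A one-step sketch of a power regulator whose feed-forward path imposes the target power on the output through a multiplication element; the integral trim term and the gain values are illustrative assumptions, not the patented control law:

```python
def power_regulator(target_power, measured_power, integral_state,
                    ki=0.1, feed_forward_gain=1.0):
    """One step of a park power regulator: an integral term trims the
    residual error, while a feed-forward path imposes the target power
    on the output via a multiplication element."""
    error = target_power - measured_power
    integral_state += ki * error
    # Feed-forward: the target power is multiplied onto the output
    # directly, so the regulator only has to correct the remainder.
    output = feed_forward_gain * target_power + integral_state
    return output, integral_state

# With zero error the feed-forward term alone sets the output.
out, state = power_regulator(10.0, 10.0, 0.0)
print(out)  # → 10.0
```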
  • Patent number: 10304191
    Abstract: A three dimensional bounding box is determined from a two dimensional image. A two dimensional bounding box is calculated based on a detected object within the image. A three dimensional bounding box is parameterized as having a yaw angle, dimensions, and a position. The yaw angle is defined as the angle between a ray passing through a center of the two dimensional bounding box and an orientation of the three dimensional bounding box. The yaw angle and dimensions are determined by passing the portion of the image within the two dimensional bounding box through a trained convolutional neural network. The three dimensional bounding box is then positioned such that the projection of the three dimensional bounding box into the image aligns with the two dimensional bounding box previously detected. Characteristics of the three dimensional bounding box are then communicated to an autonomous system for collision and obstacle avoidance.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: May 28, 2019
    Assignee: Zoox, Inc.
    Inventors: Arsalan Mousavian, John Patrick Flynn, Dragomir Dimitrov Anguelov
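The yaw parameterisation (global yaw = angle of the ray through the 2D box centre plus the network-predicted local yaw) can be sketched as follows; the pinhole-camera geometry and the parameter values are assumptions for illustration:

```python
import math

def global_yaw(box_center_x, image_width, focal_length, local_yaw):
    """Recover the global yaw of the 3D box from the CNN's local yaw and
    the ray through the 2D bounding-box centre, following the angle
    definition in the abstract."""
    # Angle of the camera ray passing through the 2D box centre.
    theta_ray = math.atan2(box_center_x - image_width / 2.0, focal_length)
    # The network predicts yaw relative to that ray; add the two angles.
    return theta_ray + local_yaw

# A box centred horizontally in the image: the ray angle is zero,
# so the global yaw equals the predicted local yaw.
yaw = global_yaw(box_center_x=640, image_width=1280,
                 focal_length=700.0, local_yaw=0.3)
print(round(yaw, 3))  # → 0.3
```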
  • Patent number: 10282840
    Abstract: An image reporting method is provided. The image reporting method comprises the steps of retrieving an image representation of a sample structure from an image source; mapping a generic structure to the sample structure, the generic structure being related to the sample structure; determining a region of interest within the sample structure based on content of the image representation of the sample structure; providing a focused set of representations of diagnostic knowledge which is contextually appropriate to the region of interest and prompting the user to select at least one diagnostic finding from the focused set of knowledge representations or by entering free-form text; and generating a diagnostic report based on the selections and free-form text entries.
    Type: Grant
    Filed: November 30, 2013
    Date of Patent: May 7, 2019
    Inventors: Armin Moehrle, Crandon F Clark
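The "focused set of representations of diagnostic knowledge" step amounts to keying candidate findings by region of interest; the regions, findings, and dict-based report entry below are hypothetical illustrations:

```python
# Hypothetical focused knowledge sets keyed by region of interest.
FOCUSED_KNOWLEDGE = {
    "left lung upper lobe": ["nodule", "consolidation", "emphysema"],
    "liver segment IV": ["hemangioma", "metastasis", "cyst"],
}

def prompt_findings(region, free_text=None):
    """Offer only the diagnostic findings contextually appropriate to the
    detected region, plus optional free-form text, and collect the result
    into a report entry."""
    options = FOCUSED_KNOWLEDGE.get(region, [])
    entry = {"region": region, "options": options}
    if free_text:
        entry["free_text"] = free_text
    return entry

report = prompt_findings("liver segment IV", free_text="subtle rim enhancement")
print(report["options"][0])  # → hemangioma
```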
  • Patent number: 10186030
    Abstract: According to an aspect of an exemplary embodiment, an apparatus for avoiding region of interest (ROI) re-detection includes a detector configured to detect an ROI from an input medical image; a re-detection determiner configured to determine whether the detected ROI corresponds to a previously-detected ROI using pre-stored user determination information; and an ROI processor configured to perform a process for the detected ROI based on the determination.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: January 22, 2019
    Inventor: Hyoung Min Park
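Re-detection can be decided by overlap with previously stored ROIs; intersection-over-union with a 0.5 threshold is one plausible criterion (the patent relies on pre-stored user determination information, which may be richer than geometry alone):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def is_redetection(roi, stored_rois, threshold=0.5):
    """Return True if the newly detected ROI overlaps a previously stored
    (user-reviewed) ROI strongly enough to count as the same region."""
    return any(iou(roi, prev) >= threshold for prev in stored_rois)

previous = [(10, 10, 50, 50)]
print(is_redetection((12, 12, 52, 52), previous))      # → True
print(is_redetection((100, 100, 140, 140), previous))  # → False
```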
  • Patent number: 10022090
    Abstract: The present invention provides an energy based dissection device that automatically provides a nerve protection function. Specifically, the present invention operatively connects nerve monitoring technology and energy based dissection technology to provide a device that provides energy based dissection functionality that cannot operate, or operates differently, upon receipt of real time information from the nerve monitoring functionality that nerve damage may be imminent in the absence of such safety shutdown. The present invention also creates a real-time graphical display of the nerve, including size and location relative to the energy based dissection device, to enable the operator to safely and accurately avoid damaging the nerve. Accordingly, the present invention provides a surgical device that removes human error and reaction-time issues, thereby preventing unintended dissection and concomitant nerve damage.
    Type: Grant
    Filed: October 18, 2013
    Date of Patent: July 17, 2018
    Assignee: Atlantic Health System, Inc.
    Inventor: Eric D. Whitman
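The safety-shutdown logic reduces to gating the dissection energy on the real-time nerve-monitoring signal; the distance-based interlock and the 2 mm margin below are illustrative assumptions, not the patented mechanism:

```python
def dissection_power_allowed(nerve_distance_mm, safety_margin_mm=2.0):
    """Safety interlock sketch: the energy-based dissection output is
    disabled whenever real-time nerve monitoring reports the nerve within
    the safety margin, removing reliance on operator reaction time."""
    return nerve_distance_mm > safety_margin_mm

print(dissection_power_allowed(5.0))  # → True  (safe to dissect)
print(dissection_power_allowed(1.5))  # → False (automatic shutdown)
```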
  • Patent number: 9990558
    Abstract: Techniques for increasing robustness of a convolutional neural network based on training that uses multiple datasets and multiple tasks are described. For example, a computer system trains the convolutional neural network across multiple datasets and multiple tasks. The convolutional neural network is configured for learning features from images and accordingly generating feature vectors. By using multiple datasets and multiple tasks, the robustness of the convolutional neural network is increased. A feature vector of an image is used to apply an image-related operation to the image. For example, the image is classified, indexed, or objects in the image are tagged based on the feature vector. Because the robustness is increased, the accuracy of the generated feature vectors is also increased. Hence, the overall quality of an image service is enhanced, where the image service relies on the image-related operation.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: June 5, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan Brandt, Jianming Zhang
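Multi-dataset, multi-task training can be sketched with a shared feature extractor and per-task heads trained alternately, so each task's gradients shape the common features. The tiny linear model, synthetic data, and learning rate are assumptions standing in for a convolutional network:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
# Two hypothetical dataset/task pairs (sharing the same inputs for brevity).
targets = {"classify": X[:, :2].copy(),
           "tag": np.tanh(X @ rng.normal(size=(4, 3)))}

W_shared = rng.normal(size=(4, 3)) * 0.1                 # shared feature extractor
heads = {t: rng.normal(size=(3, y.shape[1])) * 0.1 for t, y in targets.items()}

def step(task, lr=0.01):
    """One training step: the task head's error gradient flows back into
    the shared weights, so every dataset/task shapes the common features."""
    global W_shared
    h = X @ W_shared                      # shared features
    err = h @ heads[task] - targets[task] # squared-error residual
    g_head = h.T @ err / len(X)
    g_shared = X.T @ (err @ heads[task].T) / len(X)
    heads[task] -= lr * g_head
    W_shared -= lr * g_shared
    return float((err ** 2).mean())

first = step("classify")
for _ in range(500):
    step("classify"); step("tag")   # alternate tasks each round
last = step("classify")
assert last < first                 # shared representation improved
```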
  • Patent number: 9927604
    Abstract: A method for tracking three-dimensional movement of an object and focusing thereon is provided. Software repeatedly detects three-dimensional locations of one or more objects and image quality by sampling images at many parameter settings as the objects move in a three-dimensional environment. The images are ranked according to a fitness score, and parameter settings for subsequent images are established using a biologically-inspired algorithm.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: March 27, 2018
    Assignee: Research Foundation of the City University of New York
    Inventors: M. Umit Uyar, Stephen Gundry
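The rank-by-fitness-then-breed loop can be sketched as a minimal genetic algorithm over camera parameter settings; the two-parameter quality function standing in for measured image fitness is hypothetical:

```python
import random

random.seed(1)

def fitness(params):
    """Stand-in image-quality score: best when focus=0.3, exposure=0.7.
    In the patented system this would come from sampled image quality."""
    focus, exposure = params
    return -((focus - 0.3) ** 2 + (exposure - 0.7) ** 2)

def evolve(pop_size=20, generations=40):
    """Rank candidate parameter settings by fitness and breed the best
    ones -- a minimal 'biologically-inspired' algorithm."""
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rank by fitness score
        parents = pop[: pop_size // 2]           # keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2 + random.gauss(0, 0.02)
                          for x, y in zip(a, b))  # crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
assert abs(best[0] - 0.3) < 0.1 and abs(best[1] - 0.7) < 0.1
```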
  • Patent number: 9687171
    Abstract: A magnetic resonance imaging apparatus includes a trigger generating unit, a blood flow image generating unit and a control unit. The trigger generating unit acquires blood flow information of an object and generates a trigger based on the blood flow information. The blood flow image generating unit acquires imaging data with using the trigger and generates blood flow image data. The control unit controls so as to repeatedly perform a probe sequence for acquiring the blood flow information and an imaging sequence for acquiring the imaging data alternately.
    Type: Grant
    Filed: August 11, 2009
    Date of Patent: June 27, 2017
    Inventor: Shinichi Kitane
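The alternating probe/imaging control loop can be sketched as: run a probe step, and run the imaging sequence only when the blood-flow information generates a trigger. The threshold and sample values are illustrative:

```python
def run_scan(flow_samples, trigger_level=0.8):
    """Alternate a probe sequence (acquiring blood-flow information) with
    an imaging sequence that runs only when the probe generates a trigger,
    mirroring the repeat-alternately control in the abstract."""
    images = []
    for i, flow in enumerate(flow_samples):      # each probe sequence
        if flow >= trigger_level:                # trigger generated
            images.append(f"imaging_data_{i}")   # imaging sequence runs
    return images

print(run_scan([0.2, 0.9, 0.5, 0.85]))  # → ['imaging_data_1', 'imaging_data_3']
```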
  • Patent number: 9675301
    Abstract: Systems and methods are disclosed for providing a cardiovascular score for a patient. A method includes receiving, using at least one computer system, patient-specific data regarding a geometry of multiple coronary arteries of the patient; and creating, using at least one computer system, a three-dimensional model representing at least portions of the multiple coronary arteries based on the patient-specific data. The method also includes evaluating, using at least one computer system, multiple characteristics of at least some of the coronary arteries represented by the model; and generating, using at least one computer system, the cardiovascular score based on the evaluation of the multiple characteristics. Another method includes generating the cardiovascular score based on evaluated multiple characteristics for portions of the coronary arteries having fractional flow reserve values of at least a predetermined threshold value.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: June 13, 2017
    Assignee: HeartFlow, Inc.
    Inventors: Timothy A. Fonte, Jonathan Tang, Gilwoo Choi
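Combining evaluated characteristics into a score, restricted to artery portions whose FFR meets a threshold, can be sketched as a weighted sum; the characteristic names, weights, and the 0.8 threshold (the conventional clinical cut-off) are illustrative rather than the patented scoring:

```python
def cardiovascular_score(vessels, ffr_threshold=0.8, weights=None):
    """Combine evaluated characteristics of coronary-artery portions whose
    FFR is at least the threshold into a single score (weighted sum here;
    the patented scoring may differ)."""
    weights = weights or {"stenosis": 0.6, "plaque": 0.4}
    eligible = [v for v in vessels if v["ffr"] >= ffr_threshold]
    return sum(weights[k] * v[k] for v in eligible for k in weights)

vessels = [
    {"ffr": 0.85, "stenosis": 0.3, "plaque": 0.2},
    {"ffr": 0.70, "stenosis": 0.8, "plaque": 0.6},  # below threshold: excluded
]
print(round(cardiovascular_score(vessels), 2))  # → 0.26
```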
  • Patent number: 9589374
    Abstract: Described are systems, media, and methods for applying deep convolutional neural networks to medical images to generate a real-time or near real-time diagnosis.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: March 7, 2017
    Inventors: Dashan Gao, Xin Zhong
  • Patent number: 9439621
    Abstract: A method and system for acquiring, processing and displaying breast ultrasound images in a way that makes breast ultrasound screening more practical, and thus more widely used, and reduces missed cancers in screening and diagnosis, using automated scanning of chestwardly compressed breasts with ultrasound. Enhanced, whole-breast navigator overview images are generated from scanning breasts with ultrasound that emphasize abnormalities in the breast while excluding obscuring influences of non-breast structures, particularly those external to the breast such as ribs and chest wall, and differentiating between likely malignant and likely benign abnormalities and otherwise enhancing the navigator overview image and other images, to thereby reduce the time to read, screen, or diagnose to practical time limits and also reduce screening or diagnostic errors.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: September 13, 2016
    Assignee: QView Medical, Inc.
    Inventors: Wei Zhang, Shih-Ping Wang, Alexander Schneider, Harlan Romsdahl, Thomas Neff
  • Publication number: 20150148657
    Abstract: A computerized method of adapting a presentation of ultrasonographic images during an ultrasonographic fetal evaluation. The method comprises performing an analysis of a plurality of ultrasonographic images captured by an ultrasonographic probe during an evaluation of a fetus, automatically identifying at least one location of at least one anatomical landmark of at least one reference organ or tissue or body fluid of the fetus in the plurality of ultrasonographic images based on an outcome of the analysis, automatically localizing a region of interest (ROI) in at least some of the plurality of ultrasonographic images by using at least one predefined locational anatomical property of the at least one anatomical landmark, and concealing the ROI in a presentation of the at least some ultrasonographic images during the evaluation. At least one anatomical landmark is imaged in the presentation and not concealed by the ROI.
    Type: Application
    Filed: June 4, 2013
    Publication date: May 28, 2015
    Inventors: David Shashar, Reuven Achiron, Arnaldo Mayer
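Localising an ROI at a predefined anatomical offset from a detected landmark and concealing it while keeping the landmark visible can be sketched as a masking step; the offset, sizes, and zero-fill concealment are illustrative assumptions:

```python
import numpy as np

def conceal_roi(image, landmark, offset=(5, 5), size=(4, 4)):
    """Localise an ROI at a predefined anatomical offset from a detected
    landmark and conceal it (here by zeroing), leaving the landmark itself
    imaged -- a sketch of the presentation step in the abstract."""
    r0 = landmark[0] + offset[0]
    c0 = landmark[1] + offset[1]
    out = image.copy()
    out[r0:r0 + size[0], c0:c0 + size[1]] = 0
    return out

frame = np.ones((20, 20))
masked = conceal_roi(frame, landmark=(2, 3))
assert masked[2, 3] == 1    # landmark is still imaged, not concealed
assert masked[8, 9] == 0    # ROI is concealed in the presentation
```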
  • Publication number: 20150141778
    Abstract: In a combination invasive and non-invasive bioparameter monitoring device, an invasive component measures the bioparameter and transmits the reading to the non-invasive component. The non-invasive component generates a bioparametric reading upon insertion by the patient of a body part. A digital processor processes a series over time of digital color images of the body part and represents the digital images as a signal over time that is converted to a learning vector using mathematical functions. A learning matrix is created. A coefficient of learning vector is deduced. From a new vector of non-invasive measurements, a new matrix of the same size and structure is created. Using the coefficient of learning vector, a recognition matrix may be tested to measure the bioparameter non-invasively. The learning matrix may be expanded and kept regular. After a device is calibrated to the individual patient, universal calibration can be generated by sending data over the Internet.
    Type: Application
    Filed: December 24, 2014
    Publication date: May 21, 2015
    Inventor: Yosef SEGMAN
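The learning-matrix calibration can be sketched as a least-squares fit: rows are learning vectors derived from the colour-image signals, paired with invasive readings, and the deduced coefficient vector then maps new non-invasive vectors to estimates. The numbers below are synthetic, and the linear model is an illustrative simplification:

```python
import numpy as np

# Each row: a learning vector derived (via fixed mathematical functions)
# from the colour-image signal of one calibration session; paired with
# the invasively measured bioparameter for that session.
learning_matrix = np.array([[1.0, 0.2, 0.5],
                            [0.9, 0.4, 0.6],
                            [1.1, 0.1, 0.4],
                            [0.8, 0.5, 0.7]])
invasive_readings = np.array([4.6, 5.4, 4.1, 5.9])

# Deduce the coefficient vector by least squares over the learning matrix.
coeffs, *_ = np.linalg.lstsq(learning_matrix, invasive_readings, rcond=None)

def estimate(new_vector):
    """Non-invasive estimate for a new measurement vector."""
    return float(np.dot(new_vector, coeffs))

print(round(estimate(np.array([0.95, 0.3, 0.55])), 2))  # → 5.0
```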
  • Publication number: 20150112182
    Abstract: A method and system for determining fractional flow reserve (FFR) for a coronary artery stenosis of a patient is disclosed. In one embodiment, medical image data of the patient including the stenosis is received, a set of features for the stenosis is extracted from the medical image data of the patient, and an FFR value for the stenosis is determined based on the extracted set of features using a trained machine-learning based mapping. In another embodiment, a medical image of the patient including the stenosis of interest is received, image patches corresponding to the stenosis of interest and a coronary tree of the patient are detected, and an FFR value for the stenosis of interest is determined using a trained deep neural network regressor applied directly to the detected image patches.
    Type: Application
    Filed: October 16, 2014
    Publication date: April 23, 2015
    Inventors: Puneet Sharma, Ali Kamen, Bogdan Georgescu, Frank Sauer, Dorin Comaniciu, Yefeng Zheng, Hien Nguyen, Vivek Kumar Singh
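The first embodiment (stenosis features in, FFR out via a trained mapping) can be sketched with a nearest-neighbour stand-in for the learnt model; the feature names and values are hypothetical:

```python
def predict_ffr(features, train_features, train_ffr):
    """Nearest-neighbour stand-in for the trained machine-learning mapping
    of the abstract: return the FFR of the most similar training stenosis
    (a real system would use a richer model such as a deep regressor)."""
    dists = [sum((a - b) ** 2 for a, b in zip(features, f))
             for f in train_features]
    return train_ffr[dists.index(min(dists))]

# Hypothetical features per stenosis: (diameter reduction, length mm, flow)
train_features = [(0.2, 5.0, 1.0), (0.5, 12.0, 0.8), (0.7, 20.0, 0.6)]
train_ffr = [0.92, 0.80, 0.62]

ffr = predict_ffr((0.65, 18.0, 0.65), train_features, train_ffr)
print(ffr)        # → 0.62
print(ffr < 0.8)  # below the conventional significance threshold → True
```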
  • Publication number: 20150087957
    Abstract: For therapy response assessment, texture features are input for machine learning a classifier and for using a machine-learnt classifier. Rather than or in addition to using formula-based texture features, data-driven texture features are derived from training images. Such data-driven texture features are independent analysis features, such as features from independent subspace analysis. The texture features may be used to predict the outcome of therapy based on a small number of scans, or even a single scan, of the patient.
    Type: Application
    Filed: August 28, 2014
    Publication date: March 26, 2015
    Inventors: David Liu, Shaohua Kevin Zhou, Martin Kramer, Michael Sühling, Christian Tietjen, Grzegorz Soza, Andreas Wimmer
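Deriving texture features from training images themselves, rather than from fixed formulas, can be sketched with PCA over raw patches; independent subspace analysis, as named in the abstract, is a related but richer decomposition:

```python
import numpy as np

def data_driven_texture_features(patches, n_features=3):
    """Learn texture filters from training image patches themselves (here
    via PCA over flattened patches) instead of hand-designed formulas, and
    return a projection function for new patches."""
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=0)
    # Principal directions of the patch distribution act as learnt filters.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    filters = vt[:n_features]
    return lambda patch: filters @ patch.ravel()

rng = np.random.default_rng(0)
train_patches = rng.normal(size=(100, 5, 5))     # synthetic training patches
featurize = data_driven_texture_features(train_patches)
print(featurize(train_patches[0]).shape)  # → (3,)
```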
  • Publication number: 20150080702
    Abstract: Computer-implemented methods of improved quantitative interpretation of cell index profiles are provided.
    Type: Application
    Filed: September 10, 2014
    Publication date: March 19, 2015
    Inventors: Lisa A. Boardman, Kavishwar B. Wagholikar, Rajeev Chaudhry
  • Patent number: 8981770
    Abstract: The present invention relates to an apparatus and a method for determining the size of voids within an object into which an aerosol containing magnetic particles has been introduced, in particular for determining the size of a patient's pulmonary alveoli, said patient having inhaled an aerosol containing magnetic particles. To review information concerning the lung structure, it is proposed to use magnetic particle imaging. First and second detection signals are acquired subsequently at different moments in time after introduction of the aerosol containing the magnetic particles into the object, in particular after inhalation of the aerosol by the patient. These detection signals are exploited, in particular the drop in intensity and/or the signal decay time, to obtain information about the diffusion of the magnetic particles within the voids, in particular alveoli, and to retrieve information therefrom about the size of the voids, in particular alveoli.
    Type: Grant
    Filed: July 12, 2010
    Date of Patent: March 17, 2015
    Assignee: Koninklijke Philips N.V.
    Inventor: Bernhard Gleich
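Extracting the signal decay time from the detection signals can be sketched as a log-linear fit of an assumed exponential decay; the time constant then carries the diffusion (and hence void-size) information:

```python
import math

def decay_time(times, signal):
    """Estimate the decay time constant tau from intensity samples assumed
    to follow I(t) = I0 * exp(-t / tau), via a log-linear least-squares
    fit; larger voids let the particles diffuse longer, changing tau."""
    logs = [math.log(i) for i in signal]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -1.0 / slope

ts = [0.0, 1.0, 2.0, 3.0]
intensities = [math.exp(-t / 2.0) for t in ts]  # synthetic signal, tau = 2.0
print(round(decay_time(ts, intensities), 3))  # → 2.0
```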
  • Patent number: 8977339
    Abstract: A method of assessing stenosis severity for a patient includes obtaining patient information relevant to assessing severity of a stenosis, including anatomical imaging data of the patient. Based on the anatomical imaging data, the existence of any lesions of concern may be identified. A three dimensional image can be generated of any irregular shaped lesion of concern and a surrounding area from the patient anatomical imaging data. A plurality of comparative two dimensional lesion specific models may be created that have conditions that correspond to the three dimensional model. The comparative two dimensional models may represent vessels having regular shaped lesions, with each of the comparative two dimensional models representing a different stenosis severity. The three dimensional model can then be mapped to one of the plurality of comparative two dimensional models. After this mapping, a diagnosis of whether the patient has coronary artery disease may be made.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: March 10, 2015
    Assignee: Intrinsic Medical Imaging LLC
    Inventors: Zhongle Wu, James P. Jacobs
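Mapping the patient-specific three-dimensional lesion onto the closest comparative two-dimensional model can be sketched as nearest-severity matching; representing severity as a fractional area reduction is an illustrative assumption:

```python
def map_to_comparative_model(lesion_severity, model_severities):
    """Map the patient-specific lesion to the comparative 2-D model whose
    regular-shaped stenosis severity is closest, as a stand-in for the
    model-matching step in the abstract."""
    return min(model_severities, key=lambda s: abs(s - lesion_severity))

models = [0.25, 0.50, 0.75, 0.90]  # fractional area reductions of the models
print(map_to_comparative_model(0.62, models))  # → 0.5
```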