Patents Examined by Vincent Rudolph
  • Patent number: 12045916
    Abstract: Techniques for computed tomography (CT) image reconstruction are presented. The techniques can include acquiring, by a detector grid of a computed tomography system, detector signals for a location within an object of interest representing a voxel, where each detector signal of a plurality of the detector signals is obtained from an x-ray passing through the location at a different viewing angle; reconstructing a three-dimensional representation of at least the object of interest, the three-dimensional representation comprising the voxel, where the reconstructing comprises computationally perturbing a location of each detector signal of the plurality of detector signals within the detector grid, where the computational perturbation corresponds to randomly perturbing a location of the x-ray within the voxel; and outputting the three-dimensional representation.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: July 23, 2024
    Assignee: THE JOHNS HOPKINS UNIVERSITY
    Inventors: Joseph Webster Stayman, Alejandro Sisniega, Jeffrey H. Siewerdsen
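    Illustrative sketch (not the patented implementation): a toy 2-D unfiltered backprojection in which each detector sample's assumed position in the detector grid is randomly jittered, emulating sub-voxel randomization of the ray location. Array shapes, the jitter range, and the simple interpolation are assumptions.
```python
import numpy as np

def backproject_with_jitter(sinogram, angles, img_size, det_spacing=1.0, rng=None):
    """Toy unfiltered backprojection with randomly perturbed detector positions."""
    rng = np.random.default_rng() if rng is None else rng
    n_det = sinogram.shape[1]
    recon = np.zeros((img_size, img_size))
    coords = np.arange(img_size) - (img_size - 1) / 2.0   # pixel centers, origin at image center
    xx, yy = np.meshgrid(coords, coords)
    for i, theta in enumerate(angles):
        # perturb the nominal detector position of every sample in this view
        jitter = rng.uniform(-0.5, 0.5, size=n_det) * det_spacing
        det_pos = (np.arange(n_det) - (n_det - 1) / 2.0) * det_spacing + jitter
        # project each pixel onto the (perturbed) detector axis for this view
        t = xx * np.cos(theta) + yy * np.sin(theta)
        recon += np.interp(t, det_pos, sinogram[i], left=0.0, right=0.0)
    return recon / len(angles)

# toy usage: 32 views of a sinogram (random stand-in data)
angles = np.linspace(0, np.pi, 32, endpoint=False)
sino = np.random.rand(32, 95)
image = backproject_with_jitter(sino, angles, img_size=64)
```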
  • Patent number: 12026813
    Abstract: Disclosed are an apparatus and method for providing a digital twin bookshelf, which can generate a digital twin bookshelf including a virtual bookshelf and book objects by using book information and an image obtained by capturing a real bookshelf. The apparatus captures an image of a real bookshelf in which real books are arranged, detects book recognition information from the real bookshelf image, generates a digital twin bookshelf based on the real bookshelf image and the book information corresponding to the book recognition information, and outputs the generated digital twin bookshelf.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: July 2, 2024
    Assignee: WOONGJIN THINKBIG CO., LTD.
    Inventors: Samrak Choi, Uiyoung Kim, Sanghyo Lee
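    A minimal pipeline sketch of the described workflow, with hypothetical helper functions standing in for spine detection/OCR and the book-metadata lookup; the data structures are assumptions, not the patented apparatus.
```python
from dataclasses import dataclass

@dataclass
class BookObject:
    """A virtual book placed on the digital twin bookshelf."""
    isbn: str
    title: str
    shelf_row: int
    slot: int

def detect_book_recognition_info(bookshelf_image):
    """Stand-in for spine detection / OCR on the captured bookshelf image.
    Returns hard-coded (isbn, row, slot) detections for illustration."""
    return [("9780000000001", 0, 0), ("9780000000002", 0, 1)]

def lookup_book_info(isbn):
    """Stand-in for a book-metadata lookup keyed by the recognized ISBN."""
    catalog = {"9780000000001": "Example Title A", "9780000000002": "Example Title B"}
    return catalog.get(isbn, "Unknown")

def build_digital_twin(bookshelf_image):
    """Generate the digital twin bookshelf from one captured image."""
    return [BookObject(isbn, lookup_book_info(isbn), row, slot)
            for isbn, row, slot in detect_book_recognition_info(bookshelf_image)]

print(build_digital_twin(bookshelf_image=None))
```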
  • Patent number: 12008695
    Abstract: Various methods and systems are provided for translating magnetic resonance (MR) images to pseudo computed tomography (CT) images. In one embodiment, a method comprises acquiring an MR image, generating, with a multi-task neural network, a pseudo CT image corresponding to the MR image, and outputting the MR image and the pseudo CT image. In this way, the benefits of CT imaging with respect to accurate density information, especially in sparse regions of bone which exhibit high dynamic range, may be obtained in an MR-only workflow, thereby achieving the benefits of enhanced soft-tissue contrast in MR images while eliminating CT dose exposure for a patient.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: June 11, 2024
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Sandeep Kaushik, Dattesh Shanbhag, Cristina Cozzini, Florian Wiesinger
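    A minimal multi-task network sketch: a shared encoder with one head regressing a pseudo-CT image and one auxiliary head segmenting bone. The layer sizes and the choice of auxiliary task are assumptions, not the patented design.
```python
import torch
import torch.nn as nn

class MultiTaskPseudoCT(nn.Module):
    """Illustrative multi-task CNN for MR-to-pseudo-CT translation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.ct_head = nn.Conv2d(32, 1, 1)      # pseudo-CT (HU regression)
        self.bone_head = nn.Conv2d(32, 1, 1)    # auxiliary bone probability map

    def forward(self, mr):
        feats = self.encoder(mr)
        return self.ct_head(feats), torch.sigmoid(self.bone_head(feats))

mr = torch.randn(1, 1, 64, 64)                  # stand-in MR slice
pseudo_ct, bone_mask = MultiTaskPseudoCT()(mr)
print(pseudo_ct.shape, bone_mask.shape)
```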
  • Patent number: 11986286
    Abstract: Neurodegeneration can be assessed based on a gait signature, using a machine-learning model trained on gait metrics acquired for a patient, in conjunction with cognitive test data and neuropathology information about the patient. The gait signature can be derived from gait kinematic data, e.g., as obtained with a video-based, marker-less motion capture system.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: May 21, 2024
    Assignee: GaitIQ, Inc.
    Inventors: Richard Morris, Barbara Schnan Mastronardi
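    A minimal sketch of training a classifier on gait metrics combined with cognitive test scores against a neuropathology-derived label; the features, labels, and model choice are illustrative assumptions.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative per-patient features: gait metrics (e.g., stride length, cadence,
# swing-time asymmetry) concatenated with cognitive test scores.
rng = np.random.default_rng(0)
gait_metrics = rng.normal(size=(40, 6))
cognitive_scores = rng.normal(size=(40, 2))
X = np.hstack([gait_metrics, cognitive_scores])
y = rng.integers(0, 2, size=40)           # neuropathology-derived training label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_patient = rng.normal(size=(1, 8))
print(model.predict_proba(new_patient))   # likelihood of the assessed condition
```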
  • Patent number: 11969279
    Abstract: A method and system are disclosed for acquiring image data of a subject. The image data can be collected with an imaging system having a detector able to move relative to the subject. A contrast agent can be injected into the subject and image data can be acquired with the contrast agent in various phases of the subject. A volumetric model of multiple phases can be reconstructed using selected reconstruction techniques.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: April 30, 2024
    Assignee: Medtronic Navigation, Inc.
    Inventors: Patrick A. Helm, Shuanghe Shi
  • Patent number: 11972604
    Abstract: An image feature visualization method and apparatus, and an electronic device, are disclosed. During model training, real training data with positive samples is input into a mapping generator to obtain fictitious training data with negative samples. The mapping generator includes a mapping module configured to learn a key feature map that distinguishes the real training data with positive samples from negative samples, and the fictitious training data with negative samples is generated based on the real training data with positive samples and the key feature map. The fictitious training data with negative samples is input into a discriminator to obtain a discrimination result. An optimizer optimizes the mapping generator and the discriminator until training is completed. During model application, a target image that is to be processed is input into the mapping generator, and the mapping module in the mapping generator extracts features of the target image.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: April 30, 2024
    Assignee: SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY
    Inventors: Shuqiang Wang, Wen Yu, Chenchen Xiao, Shengye Hu, Yanyan Shen
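    A forward-pass sketch of the generator/discriminator pair: the mapping module produces a key feature map that, combined with a positive sample, yields a fictitious negative sample scored by the discriminator. Architectures and the additive combination are assumptions; the optimization loop is omitted.
```python
import torch
import torch.nn as nn

class MappingGenerator(nn.Module):
    """Mapping module learns a key feature map; adding it to a positive sample
    yields a fictitious negative sample (assumed combination rule)."""
    def __init__(self):
        super().__init__()
        self.mapping_module = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, positive):
        key_feature_map = self.mapping_module(positive)
        return positive + key_feature_map, key_feature_map

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 1), nn.Sigmoid())

    def forward(self, image):
        return self.net(image)

positive = torch.randn(4, 1, 32, 32)
generator, discriminator = MappingGenerator(), Discriminator()
fake_negative, key_map = generator(positive)
score = discriminator(fake_negative)     # discrimination result used by the optimizer
print(key_map.shape, score.shape)
```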
  • Patent number: 11967127
    Abstract: Systems and methods capture image dynamics and use those captured image dynamics for image feature recognition and classification. Other methods and systems train a neural network to capture image dynamics. An image vector representing image dynamics is extracted from an image of an image stream using a first neural network. A second neural network predicts a previous and/or subsequent image in the image stream from the image vector. The predicted previous and/or subsequent image is compared with an actual previous and/or subsequent image from the image stream. The first and second neural networks are trained using the result of the comparison.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: April 23, 2024
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Komath Naveen Kumar
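    A minimal sketch of the two-network setup: an encoder extracts an image-dynamics vector, a predictor reconstructs the subsequent frame from it, and the comparison loss trains both networks. Layer sizes and frame resolution are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """First network: compresses a frame into an image-dynamics vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, dim), nn.ReLU())

    def forward(self, frame):
        return self.net(frame)

class FramePredictor(nn.Module):
    """Second network: predicts the subsequent (or previous) frame from the vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Linear(dim, 32 * 32)

    def forward(self, vec):
        return self.net(vec).view(-1, 1, 32, 32)

encoder, predictor = FrameEncoder(), FramePredictor()
frame_t, frame_t1 = torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32)
loss = F.mse_loss(predictor(encoder(frame_t)), frame_t1)   # comparison with the actual frame
loss.backward()                                            # trains both networks
```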
  • Patent number: 11961278
    Abstract: A method for detecting an occluded image. The method includes: after an image is captured by a camera, obtaining the image as an image to be detected; inputting the image to be detected into a trained occluded-image detection model, where the occluded-image detection model is trained based on original occluded images and non-occluded images by using a trained data feature augmentation network; determining whether the image to be detected is an occluded image based on the occluded-image detection model; and outputting an image detection result.
    Type: Grant
    Filed: May 31, 2021
    Date of Patent: April 16, 2024
    Assignee: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Ruoyu Liu, Zhi Qu, Yasen Zhang, Yan Song, Zhipeng Ge
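    An inference-side sketch of the detection step: a stand-in classifier scores the captured image and a threshold decides whether it is occluded. The architecture and threshold are assumptions; the augmentation-network training stage is not shown.
```python
import torch
import torch.nn as nn

class OcclusionClassifier(nn.Module):
    """Stand-in for the trained occluded-image detection model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )

    def forward(self, image):
        return torch.sigmoid(self.net(image))

def detect_occlusion(image, model, threshold=0.5):
    """Return True when the captured image is judged to be occluded."""
    with torch.no_grad():
        return bool(model(image).item() > threshold)

captured = torch.rand(1, 3, 64, 64)      # image obtained from the camera
print(detect_occlusion(captured, OcclusionClassifier()))
```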
  • Patent number: 11961293
    Abstract: A system and related methods for identifying characteristics of handbags is described. One method includes receiving one or more images of a handbag, eliminating all but select images from the one or more images of the handbag to obtain a grouping of one or more select images, the select images being those embodying a complete periphery and frontal view of the handbag. For each of the one or more select images, aligning feature-corresponding pixels with an image axis, comparing at least a portion of the one or more select images with a plurality of stored images, and determining characteristics of the handbag based on said comparing.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: April 16, 2024
    Assignee: FASHIONPHILE Group, LLC
    Inventors: Sarah Davis, Ben Hemminger
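    A sketch of the comparison step only: a query image embedding is matched against stored reference embeddings to return the closest handbag's characteristics. The embedding representation and cosine-similarity matching are assumptions; image selection and axis alignment are not shown.
```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_handbag(query_embedding, stored):
    """Return characteristics of the stored image closest to the query embedding."""
    best = max(stored, key=lambda item: cosine_similarity(query_embedding, item["embedding"]))
    return best["brand"], best["model"]

rng = np.random.default_rng(1)
stored_images = [
    {"brand": "BrandA", "model": "Tote", "embedding": rng.normal(size=128)},
    {"brand": "BrandB", "model": "Satchel", "embedding": rng.normal(size=128)},
]
query = rng.normal(size=128)              # embedding of a select handbag image
print(identify_handbag(query, stored_images))
```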
  • Patent number: 11961255
    Abstract: The object detection device extracts a plurality of predetermined features from an image in which a target object is represented, calculates an entire coincidence degree between the plurality of predetermined features set for an entire model pattern of the target object and the plurality of predetermined features extracted from a corresponding region on the image while changing a relative positional relationship between the image and the model pattern, and calculates, for each partial region including a part of the model pattern, a partial coincidence degree between the predetermined features included in the partial region and the predetermined features extracted from a region corresponding to the partial region on the image. Then, the object detection device determines whether or not the target object is represented in the region on the image corresponding to the model pattern based on the entire coincidence degree and the partial coincidence degree.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: April 16, 2024
    Assignee: FANUC CORPORATION
    Inventor: Shoutarou Ogura
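    A simplified sketch of the two-level check: an entire coincidence degree over the full model pattern plus partial coincidence degrees over its quadrants, with detection requiring both to clear thresholds. The normalized correlation, quadrant partition, and thresholds are illustrative assumptions.
```python
import numpy as np

def coincidence(a, b):
    """Normalized correlation between two equally shaped feature patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def detect(image_region, model_pattern, whole_thr=0.7, part_thr=0.5):
    """Detect only if the entire and every partial coincidence degree are high enough."""
    entire = coincidence(image_region, model_pattern)
    h, w = model_pattern.shape
    partials = [
        coincidence(image_region[r:r + h // 2, c:c + w // 2],
                    model_pattern[r:r + h // 2, c:c + w // 2])
        for r in (0, h // 2) for c in (0, w // 2)
    ]
    return entire >= whole_thr and min(partials) >= part_thr

model = np.random.rand(16, 16)
region = model + 0.05 * np.random.rand(16, 16)   # image region aligned with the pattern
print(detect(region, model))
```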
  • Patent number: 11935256
    Abstract: A distance estimation system comprising a laser light emitter, two image sensors, and an image processor is positioned on a baseplate such that the fields of view of the image sensors overlap and contain the projections of an emitted collimated laser beam within a predetermined range of distances. The image sensors simultaneously capture images of the laser beam projections. The images are superimposed and displacement of the laser beam projection from a first image taken by a first image sensor to a second image taken by a second image sensor is extracted by the image processor. The displacement is compared to a preconfigured table relating displacement distances with distances from the baseplate to projection surfaces to find an estimated distance of the baseplate from the projection surface at the time that the images were captured.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: March 19, 2024
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia
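    The table lookup at the end of the pipeline is simple to illustrate: interpolate the preconfigured displacement-to-distance calibration at the measured spot displacement. The specific calibration values are illustrative, not from the patent.
```python
import numpy as np

# Preconfigured calibration: laser-spot displacement between the two superimposed
# images (pixels) versus distance from the baseplate to the projection surface (cm).
displacement_px = np.array([10, 20, 40, 80, 160], dtype=float)
distance_cm = np.array([400, 200, 100, 50, 25], dtype=float)

def estimate_distance(measured_displacement_px):
    """Interpolate the calibration table at the measured displacement."""
    # np.interp needs increasing x, so the table is sorted by displacement
    return float(np.interp(measured_displacement_px, displacement_px, distance_cm))

print(estimate_distance(60))   # estimated distance for a 60-pixel spot displacement
```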
  • Patent number: 11922604
    Abstract: PET/MR images are compensated with simplified adaptive algorithms for truncated parts of the body. The compensation adapts to a specific location of truncation of the body or organ in the MR image, and to attributes of the truncation in the truncated body part. Anatomical structures in a PET image that do not require any compensation are masked using an MR image with a smaller field of view. The organs that are not masked are then classified by type of anatomical structure, orientation of the anatomical structure, and type of truncation. Structure-specific algorithms are used to compensate for a truncated anatomical structure. The compensation is validated for correctness and the ROI is filled in where there is missing voxel data. Attenuation maps are generated from the compensated ROI.
    Type: Grant
    Filed: October 20, 2015
    Date of Patent: March 5, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Shekhar Dwivedi
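    A toy stand-in for the fill-in step only: voxels outside the MR field of view are filled from the nearest measured voxel. This nearest-neighbour fill is an assumption for illustration and is not the patented per-structure compensation.
```python
import numpy as np
from scipy import ndimage

def fill_truncated_voxels(attenuation_map, known_mask):
    """Fill voxels where known_mask is False from the nearest known voxel."""
    _, nearest = ndimage.distance_transform_edt(~known_mask, return_indices=True)
    return attenuation_map[tuple(nearest)]

mu_map = np.random.rand(64, 64)          # stand-in attenuation values
mr_fov = np.zeros((64, 64), dtype=bool)
mr_fov[:, 8:56] = True                   # MR field of view narrower than the PET FOV
mu_map[~mr_fov] = 0.0                    # truncated region has no measured data
print(fill_truncated_voxels(mu_map, mr_fov).shape)
```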
  • Patent number: 11900704
    Abstract: The method to determine tampering of a security label (102) comprises associating at least a portion of a first pattern to an external reference (110), wherein a first layer (202) of the security label (102) comprises the first pattern. Further, a second pattern (206) defined in a second layer (204) is used to change the contour of the portion of the first pattern when the security label (102) is at least partially disengaged from a surface. Subsequently, when there is a change in contour of the portion of the first pattern, the portion of the first pattern is disassociated from the external reference (110). Further, the portion of the first pattern and the external reference (110) are scanned, and finally tampering of the security label (102) is determined based on the association between the portion of the first pattern and the external reference (110).
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: February 13, 2024
    Inventor: Ashish Anand
  • Patent number: 11890093
    Abstract: A method and system of training a machine learning neural network (MLNN) in anatomical degenerative conditions in accordance with anatomical dynamics. The method comprises receiving, in a first input layer of the MLNN, from a millimeter wave (mmWave) radar sensing device, a first set of mmWave radar point cloud data representing a first gait characteristic of a subject in motion, comprising an arm swing velocity, receiving, in a second layer, a second set of mmWave radar point cloud data representing a second gait characteristic comprising a measure of dynamic postural stability, the input layers being interconnected with an output layer of the MLNN via an intermediate layer, and training an MLNN classifier in accordance with a classification that increases a correlation between a degenerative condition of the subject as generated at the output layer and the sets of mmWave point cloud data.
    Type: Grant
    Filed: December 23, 2022
    Date of Patent: February 6, 2024
    Assignee: Ventech Solutions, Inc.
    Inventors: Ravi Kiran Pasupuleti, Ravi Kunduru
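    A structural sketch of the two-input-branch network: one branch receives features derived from arm-swing-velocity point clouds, the other receives dynamic-postural-stability features, and both feed a shared intermediate layer and an output layer. Layer sizes, pooled-feature dimensions, and class count are assumptions.
```python
import torch
import torch.nn as nn

class GaitMLNN(nn.Module):
    """Two input branches joined by an intermediate layer and an output classifier."""
    def __init__(self, n_features=32, n_classes=3):
        super().__init__()
        self.arm_swing_branch = nn.Linear(n_features, 16)
        self.stability_branch = nn.Linear(n_features, 16)
        self.intermediate = nn.Sequential(nn.ReLU(), nn.Linear(32, 16), nn.ReLU())
        self.output = nn.Linear(16, n_classes)

    def forward(self, arm_swing_feats, stability_feats):
        merged = torch.cat(
            [self.arm_swing_branch(arm_swing_feats), self.stability_branch(stability_feats)],
            dim=1,
        )
        return self.output(self.intermediate(merged))

arm, stab = torch.randn(8, 32), torch.randn(8, 32)   # pooled point-cloud features
print(GaitMLNN()(arm, stab).shape)                   # per-class scores for the condition
```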
  • Patent number: 11887215
    Abstract: Provided is an image processing method according to an embodiment, which includes: obtaining a label of a first image by inputting the first image to a recognition model; obtaining reference style data for a target reference image to which a visual sentiment label is assigned, the visual sentiment label being the same as the obtained label from among visual sentiment labels pre-assigned to reference images; generating second style data based on first style data for the first image and the obtained reference style data; and generating a second image based on the generated second style data.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: January 30, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Lei Zhang, Yehoon Kim, Chanwon Seo
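    A small sketch of the style step: a stand-in recognition model yields a label, a reference image with the same visual sentiment label is looked up, and second style data is generated from the first image's style data and the reference style data. The linear blend and all data values are assumptions.
```python
import numpy as np

def recognize_label(image_features):
    """Stand-in for the recognition model; returns a visual sentiment label."""
    return "serene"

def generate_second_style(first_style, reference_style, alpha=0.5):
    """Blend the first image's style data with the reference style data."""
    return (1.0 - alpha) * first_style + alpha * reference_style

rng = np.random.default_rng(2)
reference_styles = {                   # reference images keyed by pre-assigned sentiment label
    "serene": rng.normal(size=16),
    "dramatic": rng.normal(size=16),
}
first_style_data = rng.normal(size=16)          # style data extracted from the first image
label = recognize_label(first_style_data)
second_style = generate_second_style(first_style_data, reference_styles[label])
print(label, second_style.shape)                # second style data used to render the second image
```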
  • Patent number: 11887274
    Abstract: An image dataset comprising pixel depth arrays might be processed by an interpolator, wherein interpolation is based on pixel samples. Input pixels to be interpolated from and an interpolated pixel might comprise deep pixels, each represented with a list of samples. Accumulation curves might be generated from each input pixel, weights applied, and accumulation curves combined to form an interpolation accumulation curve. An interpolated deep pixel can be derived from the interpolation accumulation curve, taking into account zero-depth samples as needed. Samples might represent color values of pixels.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: January 30, 2024
    Assignee: Unity Technologies SF
    Inventor: Peter M. Hillman
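    A simplified sketch of interpolating two deep pixels: build an accumulation curve (cumulative contribution versus depth) per pixel, combine the curves with weights, and recover per-depth-bin sample values from the combined curve. The step-wise accumulation and the differencing step are simplifying assumptions.
```python
import numpy as np

def accumulation_curve(samples, depth_grid):
    """Cumulative contribution of a deep pixel as a function of depth.
    Each sample is a (depth, value) pair."""
    depths = np.array([d for d, _ in samples])
    values = np.array([v for _, v in samples])
    order = np.argsort(depths)
    cumulative = np.cumsum(values[order])
    # value accumulated at or before each grid depth (0 before the first sample)
    idx = np.searchsorted(depths[order], depth_grid, side="right")
    return np.concatenate([[0.0], cumulative])[idx]

def interpolate_deep_pixels(pixel_a, pixel_b, weight_a, weight_b, depth_grid):
    """Weighted combination of accumulation curves, then per-bin sample recovery."""
    combined = (weight_a * accumulation_curve(pixel_a, depth_grid)
                + weight_b * accumulation_curve(pixel_b, depth_grid))
    return np.diff(combined, prepend=0.0)    # interpolated sample values per depth bin

grid = np.linspace(0.0, 10.0, 11)
pixel_a = [(2.0, 0.4), (5.0, 0.3)]           # deep pixel: (depth, value) samples
pixel_b = [(3.0, 0.6), (7.0, 0.2)]
print(interpolate_deep_pixels(pixel_a, pixel_b, 0.5, 0.5, grid))
```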
  • Patent number: 11883223
    Abstract: A system and method for determining kinetic parameters associated with a kinetic model of an imaging agent in a liver is provided. An image reconstruction device can receive radiotracer activities corresponding to a predetermined time period. For example, these radiotracer activities can include PET scan data corresponding to a number of time frames. The radiotracer activities can be used to determine a liver time activity curve and a circulatory input function. The liver time activity curve and circulatory input function can be used along with a kinetic model of the liver to produce kinetic parameters. These kinetic parameters can be used to determine hepatic scores, such as a hepatic steatosis score, a hepatic inflammation score, and a cirrhosis score. These scores are indicative of diseases of the liver, including nonalcoholic fatty liver disease, nonalcoholic steatohepatitis, and hepatic fibrosis.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: January 30, 2024
    Assignee: The Regents of the University of California
    Inventors: Guobao Wang, Souvik Sarkar, Ramsey D. Badawi
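    A sketch of the parameter-fitting step using a simplified one-tissue compartment model, C_T(t) = K1 (C_p ⊗ e^{-k2 t}); the real liver model has more parameters and a dual-blood-supply input, so this only illustrates fitting kinetic parameters to a time-activity curve. The input function and sampling are stand-ins.
```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 61)                     # frame mid-times in minutes (illustrative)
input_function = np.exp(-t / 10.0)             # stand-in circulatory input function C_p(t)

def one_tissue_model(t, K1, k2):
    """Liver time-activity curve from a simplified one-tissue compartment model."""
    dt = t[1] - t[0]
    return K1 * np.convolve(input_function, np.exp(-k2 * t))[: len(t)] * dt

true_curve = one_tissue_model(t, 0.8, 0.1)
measured_tac = true_curve + 0.01 * np.random.default_rng(3).normal(size=len(t))
(K1_hat, k2_hat), _ = curve_fit(one_tissue_model, t, measured_tac, p0=[0.5, 0.05])
print(K1_hat, k2_hat)                          # kinetic parameters that would feed hepatic scores
```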
  • Patent number: 11887356
    Abstract: The disclosed systems, structures, and methods are directed to receiving a training data set comprising a plurality of original training samples, augmenting the original training samples by applying default transformations to obtain a first set of augmented training samples, training the machine learning model on at least a portion of the original training samples and at least a portion of the first set of augmented training samples, computing an unaugmented accuracy, augmenting the original training samples and the first set of augmented training samples by applying a candidate transformation to obtain a second set of augmented training samples, training the machine learning model on at least a portion of the original training samples, at least a portion of the first set of augmented training samples, and at least a portion of the second set of augmented training samples, computing an augmented accuracy, computing an affinity metric from the unaugmented accuracy and the augmented accuracy, and updating the candidate augmentation transformations list and the default augmentation transformations list.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: January 30, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Forouqsadat Khonsari, Jaspreet Singh Sambee
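    A sketch of the final two steps. The affinity formula (accuracy change caused by the candidate transformation) and the promotion threshold are assumptions; the abstract only says the metric is computed from the two accuracies and the lists are updated.
```python
def affinity(unaugmented_accuracy, augmented_accuracy):
    """One plausible affinity metric: the accuracy change from the candidate transformation."""
    return augmented_accuracy - unaugmented_accuracy

def update_transformation_lists(candidate, score, defaults, candidates, threshold=0.0):
    """Promote a candidate transformation to the default list when its score clears the threshold."""
    if score >= threshold:
        defaults.append(candidate)
        candidates.remove(candidate)
    return defaults, candidates

defaults, candidates = ["horizontal_flip"], ["random_rotation", "color_jitter"]
score = affinity(unaugmented_accuracy=0.84, augmented_accuracy=0.87)
print(update_transformation_lists("random_rotation", score, defaults, candidates))
```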
  • Patent number: 11879977
    Abstract: Described are street level intelligence platforms, systems, and methods that can include a fleet of swarm vehicles having imaging devices. Images captured by the imaging devices can be used to produce and/or be integrated into maps of the area to produce high-definition maps in near real-time. Such maps may provide enhanced street level intelligence useful for fleet management, navigation, traffic monitoring, and so forth.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Woven by Toyota, U.S., Inc.
    Inventors: Ro Gupta, Justin Day, Chase Nicholl, Max Henstell, Ioannis Stamos, David Boyle, Ethan Sorrelgreen, Huong Dinh
  • Patent number: 11880903
    Abstract: Embodiments of the present disclosure provide a Bayesian image denoising method based on a distribution constraint of noiseless images from a distribution of noisy images. The method includes: determining a distribution constraint of noiseless images from a distribution of noisy images; and performing Bayesian denoising on noisy images based on the distribution constraint of noiseless images.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: January 23, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Yuxiang Xing, Li Zhang, Hewei Gao, Zhi Deng
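    A MAP-style denoising sketch: gradient descent on a Gaussian data-fidelity term plus a quadratic smoothness prior. The patented method instead constrains the distribution of noiseless images derived from the noisy-image distribution; the smoothness prior here is only a stand-in for that learned constraint, and all parameter values are illustrative.
```python
import numpy as np

def bayesian_denoise(noisy, lam=0.5, steps=200, lr=0.1):
    """Gradient descent on 0.5*||x - y||^2 + 0.5*lam*||grad x||^2 (stand-in prior)."""
    x = noisy.copy()
    for _ in range(steps):
        data_grad = x - noisy                                   # gradient of the data term
        laplacian = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                     + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= lr * (data_grad - lam * laplacian)                 # prior pulls toward smoothness
    return x

clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.1 * np.random.default_rng(4).normal(size=clean.shape)
print(np.abs(noisy - clean).mean(), np.abs(bayesian_denoise(noisy) - clean).mean())
```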