Abstract: Disclosed herein are system, method, and computer program product embodiments for optimizing the determination of a phenotypic impact of a molecular variant identified in molecular tests, samples, or reports of subjects by way of regularly incorporating, updating, monitoring, validating, selecting, and auditing the best-performing evidence models for the interpretation of molecular variants across a plurality of evidence classes.
Type:
Grant
Filed:
September 14, 2023
Date of Patent:
November 5, 2024
Assignee:
Laboratory Corporation of America Holdings
Inventors:
Alexandre Colavin, Carlos L. Araya, Jason A. Reuter
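Illustrative sketch (not part of the patent): one way to realize the "selecting" step described above is to score each candidate evidence model within an evidence class against validated variant interpretations and keep the top performer. The model interface, evidence-class keys, and mean-absolute-error metric below are assumptions made for illustration.

from typing import Callable, Dict, List, Tuple

def select_best_models(
    candidates: Dict[str, List[Callable[[dict], float]]],   # evidence class -> candidate models
    validation_set: List[Tuple[dict, float]],                # (variant features, validated impact)
) -> Dict[str, Callable[[dict], float]]:
    # For each evidence class, keep the candidate with the lowest mean absolute error
    # on the validated interpretations.
    def mae(model: Callable[[dict], float]) -> float:
        errors = [abs(model(variant) - truth) for variant, truth in validation_set]
        return sum(errors) / len(errors)
    return {evidence_class: min(models, key=mae)
            for evidence_class, models in candidates.items()}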
Abstract: An image processing device includes: a contribution ratio calculation unit that calculates a contribution ratio of a predetermined pixel or a predetermined region in depth calculation in each of a plurality of pixels or a plurality of regions included in an input image; and a correction unit that corrects a depth value of the predetermined pixel or the predetermined region based on the contribution ratio.
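Illustrative sketch (not part of the patent): the correction described above can be read as blending a pixel's depth with a reference depth in proportion to the contribution ratio. The specific blending formula below is an assumption.

import numpy as np

def correct_depth(depth: np.ndarray, contribution: np.ndarray, reference_depth: float) -> np.ndarray:
    # Blend each pixel's depth toward a reference value in proportion to the contribution
    # ratio (clipped to 0..1) that the predetermined pixel/region had in its depth calculation.
    contribution = np.clip(contribution, 0.0, 1.0)
    return (1.0 - contribution) * depth + contribution * reference_depth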
Abstract: Disclosed herein is a method to disentangle linear-encoded facial semantics from facial images without external supervision. The method uses linear regression and sparse representation learning concepts to make the disentangled latent representations easily interpreted and manipulated. Generated facial images are decomposed into multiple semantic features and latent representations are extracted to capture interpretable facial semantics. The semantic features may be manipulated to synthesize photorealistic facial images by sampling along vectors representing the semantic features, thereby changing the associated semantics.
Type:
Grant
Filed:
February 9, 2022
Date of Patent:
November 5, 2024
Assignee:
Carnegie Mellon University
Inventors:
Yutong Zheng, Marios Savvides, Yu Kai Huang
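Illustrative sketch (not part of the patent): sampling along a vector that represents a semantic feature amounts to moving a latent code along that direction and decoding each point. The generator interface below is a hypothetical stand-in.

import numpy as np

def traverse_semantic(generator, latent: np.ndarray, direction: np.ndarray,
                      steps: int = 5, scale: float = 1.0):
    # Walk the latent code along a unit-normalized semantic direction and decode each point,
    # producing a sequence of images in which only the associated semantic changes.
    direction = direction / np.linalg.norm(direction)
    return [generator(latent + alpha * direction) for alpha in np.linspace(-scale, scale, steps)]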
Abstract: A system and method include obtaining and authenticating image files from users such as insured users at the request of an entity such as an insurance provider. The requesting entity may supply an electronic address of the user and a unique identifier. The system may transmit a link to the electronic address. When selected, the link causes an image authentication application to be installed on a user device. The application takes the images securely and separately from a native camera application. Each image authentication application may be customized for each requesting entity. The authentication server may identify the requesting entity that made the request and identify a corresponding image authentication application to be provided to the electronic address. The images from the image authentication application may be authenticated via reverse image search, time, geolocation, and/or other information. The authenticated images and/or related data may be provided to the requesting entity.
Type:
Grant
Filed:
April 28, 2023
Date of Patent:
October 15, 2024
Assignee:
TruePic Inc.
Inventors:
Jeffrey McGregor, Craig Stack, Jason Lyons, Matthew Robben
Abstract: A method and a device for determining an extraction model of a green tide coverage ratio based on mixed pixels. The method includes: acquiring sample truth values of water body and green tide respectively corresponding to a plurality of target regions; acquiring a plurality of first remote sensing data of a first satellite sensor, the plurality of first remote sensing data being in one-to-one correspondence with the plurality of target regions; determining reflection index sets respectively corresponding to the plurality of target regions according to the plurality of first remote sensing data; and determining the extraction model of the green tide coverage ratio corresponding to the first satellite sensor according to the sample truth value and the reflection index set corresponding to each of the plurality of target regions.
Type:
Grant
Filed:
July 12, 2021
Date of Patent:
October 15, 2024
Assignee:
THE SECOND INSTITUTE OF OCEANOGRAPHY (SIO), MNR
Inventors:
Difeng Wang, Xiaoguang Huang, Fang Gong, Yan Bai, Xianqiang He
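Illustrative sketch (not part of the patent): with per-region reflection index sets and coverage truth values in hand, a linear least-squares fit is one simple way to determine an extraction model; the linear form is an assumption, since the abstract does not specify the model type.

import numpy as np

def fit_coverage_model(reflection_indices: np.ndarray, truth_coverage: np.ndarray) -> np.ndarray:
    # reflection_indices: one row of indices per target region; truth_coverage: ratio per region.
    X = np.hstack([reflection_indices, np.ones((reflection_indices.shape[0], 1))])  # add intercept
    coefficients, *_ = np.linalg.lstsq(X, truth_coverage, rcond=None)
    return coefficients

def predict_coverage(coefficients: np.ndarray, reflection_indices: np.ndarray) -> np.ndarray:
    X = np.hstack([reflection_indices, np.ones((reflection_indices.shape[0], 1))])
    return X @ coefficients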
Abstract: Objects are tracked in real time in a composed video acquired by joining a plurality of videos. A grouping candidate determining unit extracts, as candidate objects, objects present within an overlapping area, in which pieces of frame data are overlapped, among objects that have been detected and tracked in each of a plurality of pieces of frame data that were captured at the same time. A grouping unit arranges a plurality of candidate objects whose degree of overlapping is equal to or larger than a predetermined threshold as a group, and an integration unit assigns integration object IDs to groups and to objects that have not been grouped.
Type:
Grant
Filed:
April 10, 2020
Date of Patent:
October 15, 2024
Assignee:
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
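Illustrative sketch (not part of the patent): the grouping step can be approximated by pairing candidate boxes whose overlap (here, intersection-over-union) meets a threshold and giving each resulting group, and each ungrouped box, a single integration ID. The greedy strategy and box format are assumptions.

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def assign_integration_ids(candidates, threshold=0.5):
    # Greedily group candidate boxes whose overlap meets the threshold; every group and
    # every ungrouped box receives one integration object ID.
    ids, used, next_id = {}, set(), 0
    for i, box in enumerate(candidates):
        if i in used:
            continue
        ids[i] = next_id
        for j in range(i + 1, len(candidates)):
            if j not in used and iou(box, candidates[j]) >= threshold:
                ids[j] = next_id
                used.add(j)
        next_id += 1
    return ids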
Abstract: A LUS robotic surgical system is trainable by a surgeon to automatically move a LUS probe in a desired fashion upon command so that the surgeon does not have to do so manually during a minimally invasive surgical procedure. A sequence of 2D ultrasound image slices captured by the LUS probe according to stored instructions is processable into a 3D ultrasound computer model of an anatomic structure, which may be displayed as a 3D or 2D overlay to a camera view or in a PIP as selected by the surgeon or programmed to assist the surgeon in inspecting an anatomic structure for abnormalities. Virtual fixtures are definable so as to assist the surgeon in accurately guiding a tool to a target on the displayed ultrasound image.
Type:
Grant
Filed:
June 12, 2023
Date of Patent:
October 8, 2024
Assignees:
Intuitive Surgical Operations, Inc., The Johns Hopkins University
Inventors:
Christopher J. Hasser, Russell H. Taylor, Joshua Leven, Michael Choti
Abstract: A diagnostic tool for deep learning similarity models and image classifiers provides valuable insight into neural network decision-making. A disclosed solution generates a saliency map by: receiving a test image; determining, with an image classifier, an image classification of the test image; determining, for the test image, a first activation map for at least one model layer using the determined image classification; determining, for the test image, a first gradient map for the at least one model layer using the determined image classification; and generating a first saliency map as an element-wise function of the first activation map and the first gradient map.
Type:
Grant
Filed:
July 14, 2023
Date of Patent:
October 8, 2024
Assignee:
Microsoft Technology Licensing, LLC.
Inventors:
Oren Barkan, Omri Armstrong, Ori Katz, Noam Koenigstein
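Illustrative sketch (not part of the patent claims): an element-wise function of an activation map and a gradient map, followed by a channel reduction and rectification, is the core of this style of saliency map. The product-then-ReLU reduction below is one common choice, not necessarily the patented one.

import numpy as np

def saliency_map(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    # activations, gradients: arrays of shape (channels, H, W) for one model layer,
    # computed with respect to the determined image classification.
    combined = activations * gradients            # element-wise function of the two maps
    heat = np.maximum(combined.sum(axis=0), 0.0)  # aggregate channels, keep positive evidence
    return heat / heat.max() if heat.max() > 0 else heat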
Abstract: Techniques for generating magnetic resonance (MR) images of a subject from MR data obtained by a magnetic resonance imaging (MRI) system, the techniques including: obtaining input MR data obtained by imaging the subject using the MRI system; generating a plurality of transformed input MR data instances by applying a respective first plurality of transformations to the input MR data; generating a plurality of MR images from the plurality of transformed input MR data instances and the input MR data using a non-linear MR image reconstruction technique; generating an ensembled MR image from the plurality of MR images at least in part by: applying a second plurality of transformations to the plurality of MR images to obtain a plurality of transformed MR images; and combining the plurality of transformed MR images to obtain the ensembled MR image; and outputting the ensembled MR image.
Type:
Grant
Filed:
May 5, 2023
Date of Patent:
October 1, 2024
Assignee:
Hyperfine Operations, Inc.
Inventors:
Jo Schlemper, Seyed Sadegh Mohseni Salehi, Michal Sofka
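Illustrative sketch (not part of the patent): the ensembling step resembles test-time augmentation, where each transformed copy of the input MR data is reconstructed, mapped back, and averaged. The callables below (reconstruct, transforms, inverse_transforms) are assumed interfaces, and mean combination is one plausible choice.

import numpy as np

def ensembled_reconstruction(input_data, reconstruct, transforms, inverse_transforms):
    # Reconstruct the untransformed input plus each transformed copy, undo the transform on
    # the resulting images, and combine them into a single ensembled MR image.
    images = [reconstruct(input_data)]
    for t, t_inv in zip(transforms, inverse_transforms):
        images.append(t_inv(reconstruct(t(input_data))))
    return np.mean(images, axis=0)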
Abstract: A method for estimating depth of a scene includes selecting an image of the scene from a sequence of images of the scene captured via an in-vehicle sensor of a first agent. The method also includes identifying previously captured images of the scene. The method further includes selecting a set of images from the previously captured images based on each image of the set of images satisfying depth criteria. The method still further includes estimating the depth of the scene based on the selected image and the selected set of images.
Type:
Grant
Filed:
July 6, 2021
Date of Patent:
September 3, 2024
Assignee:
TOYOTA RESEARCH INSTITUTE, INC.
Inventors:
Jiexiong Tang, Rares Andrei Ambrus, Sudeep Pillai, Vitor Guizilini, Adrien David Gaidon
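Illustrative sketch (not part of the patent): one way to read the selection step is as filtering previously captured images by a depth criterion (e.g., sufficient viewpoint overlap) and fusing the per-pair depth estimates. The callables and the median fusion are assumptions.

import numpy as np

def estimate_scene_depth(current_image, previous_images, estimate_depth, satisfies_criteria):
    # Keep only previously captured images that satisfy the depth criteria with respect to
    # the current image, estimate depth from each pair, and fuse with a per-pixel median.
    selected = [img for img in previous_images if satisfies_criteria(current_image, img)]
    depths = [estimate_depth(current_image, img) for img in selected]
    return np.median(np.stack(depths), axis=0) if depths else None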
Abstract: A control section is included, the control section including an accumulation function of, when recognizing a specific user on the basis of sensor data acquired via an agent device, generating episode data in the accumulation section on the basis of a keyword extracted from the sensor data, generating a question for drawing out information concerning the episode data, and accumulating a reply from the specific user to the question in the episode data, and a responding function of, when recognizing the specific user on the basis of the sensor data acquired via the agent device, retrieving the episode data through the accumulation section on the basis of the keyword extracted from the sensor data, and generating response data concerning the retrieved episode data for the agent device to respond to the specific user.
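Illustrative sketch (not part of the patent): the accumulation and responding functions can be pictured as a keyword-indexed episode store that grows with the user's replies and is queried when the same keyword recurs. The data layout and method names are assumptions.

class EpisodeStore:
    def __init__(self):
        self.episodes = {}  # keyword -> accumulated statements for that episode

    def accumulate(self, keyword: str, reply: str) -> None:
        # Accumulation function: grow the episode keyed by the extracted keyword.
        self.episodes.setdefault(keyword, []).append(reply)

    def retrieve(self, keyword: str) -> list:
        # Responding function: fetch the episode so response data can be generated from it.
        return self.episodes.get(keyword, [])

store = EpisodeStore()
store.accumulate("birthday", "We celebrated at the lake house.")
print(store.retrieve("birthday"))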
Abstract: A manufacturing data analyzing method and a manufacturing data analyzing device are provided. The manufacturing data analyzing method includes the following steps. Each of at least one numerical data, at least one image data, and at least one text data is transformed into a vector. The vectors are gathered to obtain a combined vector. The combined vector is inputted into an inference model to obtain a defect cause and a modification suggestion.
Type:
Grant
Filed:
June 10, 2021
Date of Patent:
August 13, 2024
Assignee:
UNITED MICROELECTRONICS CORP
Inventors:
Ching-Pei Lin, Ming-Tsung Yeh, Chuan-Guei Wang, Ji-Fu Kung
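Illustrative sketch (not part of the patent): gathering the per-modality vectors can be as simple as concatenation before the combined vector is passed to the inference model; how each modality is vectorized is left abstract here.

import numpy as np

def build_combined_vector(numerical_vec: np.ndarray, image_vec: np.ndarray,
                          text_vec: np.ndarray) -> np.ndarray:
    # Concatenate the vectors obtained from numerical data, image data, and text data.
    return np.concatenate([numerical_vec.ravel(), image_vec.ravel(), text_vec.ravel()])

# Hypothetical usage: the inference model maps the combined vector to a defect cause
# and a modification suggestion.
# cause, suggestion = inference_model(build_combined_vector(num_vec, img_vec, txt_vec))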
Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for detecting and indicating modifications between a digital image and a modified version of the digital image. For example, the disclosed systems generate an ordered collection of change records in response to detecting modifications to the digital image. The disclosed systems determine one or more non-contiguous modified regions of pixels in the digital image based on the change records. The disclosed systems generate an edited region indicator corresponding to the non-contiguous modified regions. The disclosed systems can further color-code the edited region indicator at an object level based on objects in the modified version of the digital image.
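Illustrative sketch (not part of the patent): determining non-contiguous modified regions from change records can be approximated by merging overlapping (or nearly touching) change-record rectangles until no further merges are possible. The rectangle format and merge rule are assumptions.

def merge_change_records(records, gap=0):
    # records: change-record rectangles as (x1, y1, x2, y2); rectangles that overlap or come
    # within `gap` pixels of each other are merged into one bounding region, repeated until stable.
    regions = [tuple(r) for r in records]
    changed = True
    while changed:
        changed, merged = False, []
        for rect in regions:
            for i, r in enumerate(merged):
                separated = (rect[2] + gap < r[0] or r[2] + gap < rect[0] or
                             rect[3] + gap < r[1] or r[3] + gap < rect[1])
                if not separated:
                    merged[i] = (min(r[0], rect[0]), min(r[1], rect[1]),
                                 max(r[2], rect[2]), max(r[3], rect[3]))
                    changed = True
                    break
            else:
                merged.append(rect)
        regions = merged
    return regions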
Abstract: Disclosed is an information processing device, comprising: acquisition circuitry configured to acquire image data including a frame; and index calculation circuitry configured to: determine an average value of the frame, and predict a signal-to-noise ratio of the image data based on the average value of the frame.
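Illustrative sketch (not part of the patent): predicting a signal-to-noise ratio from the frame average can follow a simple shot-noise-plus-read-noise model; the model form and parameters below are assumptions, since the abstract only states that the SNR is predicted from the average value of the frame.

import numpy as np

def predict_snr(frame: np.ndarray, gain: float = 1.0, read_noise: float = 2.0) -> float:
    # Convert the frame's average value to signal electrons, then combine shot noise with read noise.
    signal = float(frame.mean()) * gain
    noise = np.sqrt(signal + read_noise ** 2)
    return signal / noise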
Abstract: The disclosed apparatus, systems and methods relate to a framelet-based iterative algorithm for polychromatic CT which can reconstruct two components using a single scan. The algorithm can have various steps including a scaled-gradient descent step of constant or variant step sizes; a non-negativity step; a soft thresholding step; and a color reconstruction step.
Type:
Grant
Filed:
March 8, 2021
Date of Patent:
July 30, 2024
Assignees:
University of Iowa Research Foundation, Rensselaer Polytechnic Institute, Shandong University, Shandong Public Health Clinical Center
Inventors:
Wenxiang Cong, Ye Yangbo, Ge Wang, Shuwei Mao, Yingmei Wang
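Illustrative sketch (not part of the patent): the iteration described above interleaves a (scaled) gradient-descent step on the data-fit term, a non-negativity projection, and a soft-thresholding step. The forward/adjoint operators are assumed callables, and for brevity the sparsifying threshold is applied directly to the image rather than to framelet coefficients.

import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    # Soft-thresholding (shrinkage) operator.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def iterative_reconstruction(A, At, y, shape, iterations=50, step=1.0, tau=0.01):
    x = np.zeros(shape)
    for _ in range(iterations):
        x = x - step * At(A(x) - y)   # gradient-descent step on the data-fit term
        x = np.maximum(x, 0.0)        # non-negativity step
        x = soft_threshold(x, tau)    # soft-thresholding step
    return x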
Abstract: The disclosure provides a method for improving brightness of mosaiced images, which comprises the following steps: determining a first common area shared by a first image to be mosaiced and a second image to be mosaiced, calculating color average values of the first image to be mosaiced and the second image to be mosaiced in the first common area respectively, determining a target average value of the first common area based on the two color average values, and determining an adjustment parameter of the first common area according to the target average value and the color average values; determining a second common area shared by the second image to be mosaiced and a third image to be mosaiced, calculating color average values of the second image to be mosaiced and the third image to be mosaiced in the second common area, determining a target average value of the second common area based on the two color average values, and determining an adjustment parameter of the second common area according to the target average value and the color average values.
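Illustrative sketch (not part of the patent): the adjustment parameter for a common area can be computed as a gain that pulls each image's color average toward a shared target; taking the target as the mean of the two averages is a plausible but assumed choice.

import numpy as np

def adjustment_gains(image_a: np.ndarray, image_b: np.ndarray, common_mask: np.ndarray):
    # Color averages of the two images inside their common (overlap) area.
    avg_a = image_a[common_mask].mean()
    avg_b = image_b[common_mask].mean()
    target = 0.5 * (avg_a + avg_b)         # target average value of the common area
    return target / avg_a, target / avg_b  # adjustment parameters for each image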
Abstract: There is described a method for forming an inspection image of an object from radiographic imaging. The method has: forming the inspection image including scaling a feature of the object in one or more digital images to a common scale, the digital images including first and second digital images of the object, the first digital image having the feature at a first scale, the first digital image having a first grain diffraction pattern at the first scale, the second digital image having the feature at a second scale different from the first scale, the second digital image having a second grain diffraction pattern at the second scale, the second grain diffraction pattern different from the first grain diffraction pattern, the common scale common to both the first and second digital images after said scaling, and removing grain differences between the first and second grain diffraction patterns at the common scale.
Abstract: Methods and apparatuses for video processing based on spatial or temporal importance include: in response to receiving picture data of a picture of a video sequence, determining a level of semantic importance for the picture data, the picture data including a portion of the picture; and applying to the picture data a first resolution-enhancement technique associated with the level of semantic importance for increasing resolution of the picture data, wherein the first resolution-enhancement technique is selected from a set of resolution-enhancement techniques having different computational complexity levels.
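Illustrative sketch (not part of the patent): selecting a resolution-enhancement technique by semantic importance can be expressed as a mapping from importance level to techniques of increasing computational cost. The three-level mapping and the use of OpenCV interpolation modes are assumptions (a learned super-resolution model could occupy the top level instead).

import cv2

def enhance_by_importance(patch, importance_level: str, scale: int = 2):
    # Apply a cheaper or costlier upscaling technique depending on the semantic importance level.
    h, w = patch.shape[:2]
    size = (w * scale, h * scale)
    if importance_level == "low":
        return cv2.resize(patch, size, interpolation=cv2.INTER_NEAREST)   # lowest cost
    if importance_level == "medium":
        return cv2.resize(patch, size, interpolation=cv2.INTER_CUBIC)     # moderate cost
    return cv2.resize(patch, size, interpolation=cv2.INTER_LANCZOS4)      # highest cost here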
Abstract: The disclosure provides a vehicle, a machine vision system, and a robot. In one example, the vehicle includes one or more processing units to implement a machine-learning model and to identify one or more objects depicted in one or more images using the machine-learning model, the machine-learning model being trained using a training dataset that includes a plurality of images procedurally synthesized according to one or more training image definitions.
Abstract: A method of inspecting a balcony is provided. The method, from the outside of the balcony, detects locations of two wooden joists in the interior of the balcony, where there is no access to pass a camera from the outside of the balcony into the interior space between the two joists. The method drills a hole into the interior space of the balcony and passes a camera through the hole into the interior space of the balcony. The method captures one or more images by the camera from the wooden surfaces in the interior space of the balcony. The method analyzes the images to determine the existence of wood rot in the interior space of the balcony. The method removes the camera from the interior space of the balcony. The method seals the hole.