Abstract: Disclosed is a system and method for segmentation of selected data. In various embodiments, automatic segmentation of fiber tracts in image data may be performed. The automatic segmentation may allow for identification of specific fiber tracts in an image.
Type:
Grant
Filed:
April 30, 2020
Date of Patent:
November 22, 2022
Assignee:
Medtronic Navigation, Inc.
Inventors:
Rowena Vigh, Hallehsadat Ghaderi, Daniel H. Adler, Shai Ronen, Nikhil Mahendra
Abstract: According to one implementation, a system for performing automated salience assessment of pixel anomalies includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor is configured to execute the software code to analyze an image for a presence of a pixel anomaly in the image, obtain salience criteria for the image when the analysis of the image detects the presence of the pixel anomaly, and classify the pixel anomaly as either a salient anomaly or an innocuous anomaly based on the salience criteria for the image. The hardware processor is further configured to execute the software code to disregard the pixel anomaly when the pixel anomaly is classified as an innocuous anomaly, and to flag the pixel anomaly when the pixel anomaly is classified as a salient anomaly.
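A minimal Python sketch of the classification flow this abstract describes. All names (`SalienceCriteria`, `classify_anomaly`, the region-of-interest/size criteria) are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SalienceCriteria:
    # Assumed example criteria: anomalies inside a region of interest
    # and above a size threshold are treated as salient.
    min_size: int
    roi: tuple  # (x0, y0, x1, y1)

def classify_anomaly(anomaly, criteria):
    """Classify an anomaly dict as 'salient' or 'innocuous' per the criteria."""
    x, y, size = anomaly["x"], anomaly["y"], anomaly["size"]
    x0, y0, x1, y1 = criteria.roi
    inside_roi = x0 <= x <= x1 and y0 <= y <= y1
    if inside_roi and size >= criteria.min_size:
        return "salient"
    return "innocuous"

def assess(anomalies, criteria):
    """Flag salient anomalies; disregard innocuous ones."""
    return [a for a in anomalies if classify_anomaly(a, criteria) == "salient"]
```

The disregard step is implicit: innocuous anomalies are simply filtered out rather than reported.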
Abstract: An image processing apparatus generates, based on an input image, region information indicating a region including an object, determines, based on the region information, a region including an object to be transmitted, and transmits, based on the region information indicating the determined region, an image of the object and region information indicating the determined region.
Abstract: An embodiment in accordance with the present invention includes a technology to continuously measure patient mobility automatically, using sensors that capture color and depth images along with algorithms that process the data and analyze the activities of the patients and providers to assess the highest level of mobility of the patient. An algorithm according to the present invention employs the following five steps: 1) analyze individual images to locate the regions containing every person in the scene (Person Localization), 2) for each person region, assign an identity to distinguish ‘patient’ vs. ‘not patient’ (Patient Identification), 3) determine the pose of the patient, with the help of contextual information (Patient Pose Classification and Context Detection), 4) measure the degree of motion of the patient (Motion Analysis), and 5) infer the highest mobility level of the patient using the combination of pose and motion characteristics (Mobility Classification).
Type:
Grant
Filed:
October 4, 2017
Date of Patent:
November 15, 2022
Assignee:
The Johns Hopkins University
Inventors:
Suchi Saria, Andy Jinhua Ma, Austin Reiter
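The five-step pipeline in the abstract above can be sketched as a chain of functions. Every function body here is a rule-based placeholder standing in for the vision components the patent describes; the labels and thresholds are invented for illustration:

```python
def localize_people(frame):
    # Step 1: Person Localization — return a box per person in the frame.
    return frame["person_boxes"]

def identify_patient(boxes):
    # Step 2: Patient Identification — distinguish 'patient' vs 'not patient'.
    return [b for b in boxes if b.get("role") == "patient"]

def classify_pose(patient_box, context):
    # Step 3: Pose Classification with contextual information
    # (e.g. proximity to the bed suggests a lying pose).
    return "lying" if context.get("near_bed") else "standing"

def measure_motion(track):
    # Step 4: Motion Analysis — mean displacement across a track of positions.
    return sum(track) / len(track) if track else 0.0

def mobility_level(pose, motion):
    # Step 5: Mobility Classification from the pose + motion combination.
    if pose == "lying":
        return "in-bed" if motion < 1.0 else "in-bed, active"
    return "ambulating" if motion >= 1.0 else "standing"
```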
Abstract: Devices, systems and processes for the detection of unsafe cabin conditions that provide a safer passenger experience in autonomous vehicles are described. One example method for enhancing passenger safety includes capturing at least a set of images of one or more passengers in the vehicle, determining, based on the set of images, the occurrence of an unsafe activity in an interior of the vehicle, performing, using a neural network, a classification of the unsafe activity, and performing, based on the classification, one or more responsive actions.
Type:
Grant
Filed:
October 13, 2020
Date of Patent:
November 8, 2022
Assignee:
ALPINE ELECTRONICS OF SILICON VALLEY, INC.
Inventors:
Rocky Chau-Hsiung Lin, Thomas Yamasaki, Koichiro Kanda
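The last step of the abstract above, mapping a classified unsafe activity to responsive actions, can be sketched as a lookup. The class labels and actions are invented for illustration, and the patent's neural-network classifier is replaced here by a precomputed label:

```python
# Hypothetical mapping from unsafe-activity class to ordered responsive actions.
RESPONSES = {
    "unbuckled_seatbelt": ["audio_warning"],
    "object_blocking_exit": ["audio_warning", "notify_operator"],
    "medical_emergency": ["notify_operator", "pull_over", "call_emergency"],
}

def respond(classified_activity):
    """Return the list of responsive actions for a classified unsafe activity."""
    return RESPONSES.get(classified_activity, ["log_event"])
```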
Abstract: Devices, systems, and methods for tracking a flying object and displaying the tracked flying object, with additional information regarding the flying object, on a display. Digital images from a plurality of image sensors are used to identify and track flying objects by velocity and direction of movement, and the display can convey flight paths, corresponding collision alarms, warnings, or other information.
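A simplified sketch of deriving velocity and heading from two successive image-derived positions, plus a naive straight-line collision check. The function names, sampling step, and alarm thresholds are illustrative assumptions:

```python
import math

def velocity_and_heading(p0, p1, dt):
    """Speed and heading (degrees, CCW from +x) between positions (x, y)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    speed = math.hypot(dx, dy) / dt
    heading = math.degrees(math.atan2(dy, dx)) % 360
    return speed, heading

def collision_alarm(p, v, other_p, other_v, horizon=5.0, radius=2.0):
    """Alarm if straight-line extrapolations come within `radius` of each other
    at any sampled time within `horizon` seconds."""
    for t in (i * 0.1 for i in range(int(horizon * 10))):
        a = (p[0] + v[0] * t, p[1] + v[1] * t)
        b = (other_p[0] + other_v[0] * t, other_p[1] + other_v[1] * t)
        if math.dist(a, b) < radius:
            return True
    return False
```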
Abstract: According to one embodiment, an inspection system inspects equipment including a first structural object and a second structural object. The first structural object extends in a first direction. The second structural object is provided around the first structural object. The second structural object has a first surface opposing the first structural object. A first protrusion is provided in the first surface. The first protrusion extends in the first direction. The system includes a robot and a controller. The robot includes an imager. The robot moves between the first structural object and the second structural object. The imager images the first protrusion. The controller detects, from a first image acquired by the imager, a first edge portion of the first protrusion in a circumferential direction around the first direction. The controller controls a movement of the robot by using the detected first edge portion.
Abstract: Described herein are neural network-based systems, methods and instrumentalities associated with image segmentation that may be implemented using an encoder neural network and a decoder neural network. The encoder network may be configured to receive a medical image comprising a visual representation of an anatomical structure and generate a latent representation of the medical image indicating a plurality of features of the medical image. The latent representation may be used by the decoder network to generate a mask for segmenting the anatomical structure from the medical image. The decoder network may be pre-trained to learn a shape prior associated with the anatomical structure and, once trained, may be used to constrain an output of the encoder network during training of the encoder network.
Type:
Grant
Filed:
June 18, 2020
Date of Patent:
November 1, 2022
Assignee:
SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.
Abstract: A map generation device detects a pre-specified target object based on image information obtained by imaging from each of a plurality of vehicles; estimates position information that is an absolute position of the detected target object, the position information being estimated based on a relative position of the detected target object and each vehicle imaging the image information in which the target object is detected, and position information that is an absolute position of each vehicle at a time of imaging the target object; and integrates matching target objects included in a plurality of estimated target objects, the matching target objects being integrated based on the position information of the respective target objects and the image information of the images in which the respective target objects are included, with a number and positions of the target objects being specified.
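The integration step above, merging position estimates of the same target reported by different vehicles, can be sketched as greedy distance-threshold clustering. The greedy approach and the threshold are assumptions; the patent does not specify the merging algorithm:

```python
import math

def integrate_targets(estimates, threshold=2.0):
    """Merge (x, y) target position estimates closer than `threshold`
    into a single target at the running centroid of its members."""
    merged = []  # each entry: [sum_x, sum_y, count]
    for x, y in estimates:
        for m in merged:
            cx, cy = m[0] / m[2], m[1] / m[2]
            if math.hypot(x - cx, y - cy) <= threshold:
                m[0] += x; m[1] += y; m[2] += 1
                break
        else:
            merged.append([x, y, 1])
    return [(m[0] / m[2], m[1] / m[2]) for m in merged]
```

The number of integrated targets is simply the length of the returned list, matching the abstract's "number and positions of the target objects being specified."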
Abstract: According to an aspect of an embodiment, operations may comprise determining a target position and orientation for a calibration board with respect to a camera of a vehicle, detecting a first position and orientation of the calibration board with respect to the camera of the vehicle, determining instructions for moving the calibration board from the first position and orientation to the target position and orientation, transmitting the instructions to a device, detecting a second position and orientation of the calibration board, determining whether the second position and orientation is within a threshold of matching the target position and orientation, and, in response to determining that the second position and orientation is within the threshold of matching the target position and orientation, capturing one or more calibration camera images using the camera and calibrating one or more sensors of the vehicle using the one or more calibration camera images.
Type:
Grant
Filed:
July 2, 2020
Date of Patent:
October 25, 2022
Assignee:
NVIDIA CORPORATION
Inventors:
Ziqiang Huang, Lin Yang, Mark Damon Wheeler
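The guidance loop in the abstract above, detecting the board pose, sending move instructions until the pose is within threshold of the target, then capturing, can be sketched as follows. `detect_pose`, `send_instructions`, and `capture_image` are hypothetical callables standing in for the patent's camera detection, device communication, and capture steps:

```python
def guide_and_capture(detect_pose, send_instructions, capture_image,
                      target_pose, threshold=0.05, max_steps=20):
    """Iteratively guide the calibration board toward target_pose, then capture."""
    for _ in range(max_steps):
        pose = detect_pose()
        # Worst-axis error between detected and target position/orientation.
        error = max(abs(p - t) for p, t in zip(pose, target_pose))
        if error <= threshold:
            return capture_image()
        # Instruction: per-axis correction moving the board toward the target.
        send_instructions([t - p for p, t in zip(pose, target_pose)])
    raise RuntimeError("board never reached the target pose")
```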
Abstract: A method for performing video domain adaptation for human action recognition is presented. The method includes using annotated source data from a source video and unannotated target data from a target video in an unsupervised domain adaptation setting, identifying and aligning discriminative clips in the source and target videos via an attention mechanism, and learning spatial-background invariant human action representations by employing a self-supervised clip order prediction loss for both the annotated source data and the unannotated target data.
Type:
Grant
Filed:
August 20, 2020
Date of Patent:
October 11, 2022
Inventors:
Gaurav Sharma, Samuel Schulter, Jinwoo Choi
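The self-supervised clip order prediction task mentioned above can be sketched at the data level: sample clips, shuffle them, and label the example with the index of the permutation applied. Only training-pair construction is shown; the network and loss are out of scope, and the three-clip setup is an illustrative assumption:

```python
import itertools
import random

# All 6 possible orderings of 3 clips; the label space of the pretext task.
PERMS = list(itertools.permutations(range(3)))

def make_order_example(clips, rng=random):
    """Return (shuffled_clips, permutation_label) for clip order prediction."""
    label = rng.randrange(len(PERMS))
    order = PERMS[label]
    return [clips[i] for i in order], label
```

A model trained on such pairs must recover temporal order from content, which encourages representations that do not rely on static background cues.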
Abstract: In an example method, a computer system receives a query from a mobile device, including an indication of a location of the mobile device, and an environmental measurement obtained by the mobile device at the location. A set of candidate points of interest in geographical proximity to the location is determined. For each of one or more candidate points of interest of the set, a location fingerprint of the candidate point of interest and contextual data regarding the candidate point of interest are obtained. A similarity between the environmental measurement and each location fingerprint is determined. A particular candidate point of interest is selected from among the set based on the similarity, and based on an assessment of the contextual data. A label of the selected point of interest is associated with the location and transmitted to the mobile device.
Type:
Grant
Filed:
November 1, 2019
Date of Patent:
October 11, 2022
Assignee:
Apple Inc.
Inventors:
Richard B. Warren, Danil Yuryevich Zvyagintsev, Michael P. Dal Santo, Liviu T. Popescu, Pejman Lotfali Kazemi, Hyo Jeong Shin, Zehua Zhou
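The selection step in the abstract above can be sketched as scoring each candidate point of interest by the similarity between the device's environmental measurement and the stored fingerprint, then adjusting by contextual data. Representing measurements as signal-strength vectors, using cosine similarity, and modeling context as opening hours are all illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_poi(measurement, candidates, now_hour):
    """candidates: list of (label, fingerprint, (open_hour, close_hour)).
    Pick the best-scoring point of interest."""
    def score(c):
        label, fingerprint, (open_h, close_h) = c
        # Contextual assessment: small bonus if the place is currently open.
        bonus = 0.1 if open_h <= now_hour < close_h else -0.1
        return cosine(measurement, fingerprint) + bonus
    return max(candidates, key=score)[0]
```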
Abstract: A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces are detected having threshold skin tone within a scene. Next, based on the detected one or more faces, the scene is segmented into one or more face regions and one or more non-face regions. A model of the one or more faces is constructed based on a depth map and a texture map, the depth map including spatial data of the one or more faces and the texture map including surface characteristics of the one or more faces. One or more images of the scene are captured based on the model. Further, in response to the capture, the one or more face regions are processed to generate a final image.
Type:
Grant
Filed:
February 20, 2020
Date of Patent:
September 27, 2022
Assignee:
DUELIGHT LLC
Inventors:
William Guie Rivard, Brian J. Kindle, Adam Barry Feder
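One piece of the abstract above, metering exposure on the detected face regions rather than the whole scene, can be sketched as follows. The depth/texture face model and the capture pipeline are out of scope; the target luminance level and gain formula are illustrative assumptions:

```python
def mean_luminance(image, region):
    """Mean pixel value over region = (x0, y0, x1, y1) of a 2-D list image."""
    x0, y0, x1, y1 = region
    vals = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

def face_exposure_gain(image, face_regions, target=128.0):
    """Exposure gain that brings the average face-region luminance to `target`,
    so faces are exposed correctly even against bright or dark backgrounds."""
    face_mean = sum(mean_luminance(image, r) for r in face_regions) / len(face_regions)
    return target / face_mean
```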
Abstract: An example operation may include one or more of receiving, by an accident processing node, an accident report generated by a transport, retrieving, by the accident processing node, a time and a location of the accident from the accident report, querying, by the accident processing node, a plurality of transport profiles on a storage based on the time and the location of the accident, and retrieving video data associated with the time and the location of the accident from the plurality of the transport profiles.
Abstract: There is provided a system and method of generating training data for training a Deep Neural Network (DNN) usable for examination of a semiconductor specimen. The method includes: obtaining a first training image and first labels respectively associated with a group of pixels selected in each segment; extracting a set of features characterizing the first training image; training a machine learning (ML) model using the first labels, values of the group of pixels, and the feature values of each of the set of features corresponding to the group of pixels; processing the first training image using the trained ML model to obtain a first segmentation map; and determining to include the first training image and the first segmentation map in the DNN training data upon a criterion being met, and to repeat the extracting of the second features, the training and the processing upon the criterion not being met.
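The control flow of the method above, train on sparse pixel labels, produce a segmentation map, accept the image/map pair if a quality criterion is met, otherwise re-extract and retrain, can be sketched as a loop. All callables are hypothetical stand-ins for the patent's components:

```python
def generate_training_pair(image, labels, extract, train, segment, good_enough,
                           max_rounds=5):
    """Return (image, segmentation_map) once the criterion is met, else None."""
    for _ in range(max_rounds):
        features = extract(image)               # extract features
        model = train(labels, features)         # train ML model on sparse labels
        seg_map = segment(model, image)         # produce a segmentation map
        if good_enough(seg_map):                # criterion met: add to DNN data
            return image, seg_map
        # criterion not met: loop repeats extraction, training, processing
    return None
```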
Abstract: The present disclosure relates to a tracking system for tracking the position and/or orientation of an object in an environment, the tracking system including: at least one camera mounted to the object; a plurality of spaced apart targets, at least some of said targets viewable by the at least one camera; and, one or more electronic processing devices configured to: determine target position data indicative of the relative spatial position of the targets; receive image data indicative of an image from the at least one camera, said image including at least some of the targets; process the image data to: identify one or more targets in the image; determine pixel array coordinates corresponding to a position of the one or more targets in the image; and, use the processed image data to determine the position and/or orientation of the object by triangulation.
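A heavily simplified planar sketch of the triangulation step above: given global bearings from the camera to two targets at known positions, the camera position is the intersection of the two back-projected rays. Reducing the problem to 2-D with a known camera orientation is an assumption; the patent covers full position/orientation tracking:

```python
import math

def triangulate_position(t1, bearing1, t2, bearing2):
    """2-D camera position from bearings (radians) to targets at t1, t2."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # The camera lies on the ray from each target opposite to its bearing:
    # solve t1 - s*d1 = t2 - u*d2 for s via Cramer's rule.
    det = -d1[0] * d2[1] + d1[1] * d2[0]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; cannot triangulate")
    bx, by = t2[0] - t1[0], t2[1] - t1[1]
    s = (bx * d2[1] - by * d2[0]) / det
    return (t1[0] - s * d1[0], t1[1] - s * d1[1])
```

In the patent's setting, the bearings would themselves be recovered from the pixel array coordinates of the identified targets via the camera model.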
Abstract: Provided is a technique for supporting a diagnosis of disease by using various types of measured values acquired by a medical image acquisition apparatus. An image diagnosis support device includes a measured-value receiving unit configured to receive various types of measured values at a plurality of positions within a living body, a group generator configured to generate groups of the measured values depending on the position or the type of the measured value, an intermediate index calculator configured to calculate an intermediate index from the measured values included in each group on a per-group basis, and a comprehensive index calculator configured to calculate a comprehensive index from the values of the intermediate index calculated on a per-group basis. The intermediate index and the comprehensive index are displayed on a display unit as numerical values and/or in the form of an image.
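The two-stage index computation above can be sketched directly. Using the per-group mean as the intermediate index and a weighted mean as the comprehensive index are illustrative assumptions; the patent leaves the calculators generic:

```python
def intermediate_indices(measurements):
    """measurements: {group_name: [values]} -> {group_name: mean value}."""
    return {g: sum(vals) / len(vals) for g, vals in measurements.items()}

def comprehensive_index(indices, weights=None):
    """Weighted mean of the per-group intermediate indices."""
    weights = weights or {g: 1.0 for g in indices}
    total = sum(weights[g] for g in indices)
    return sum(indices[g] * weights[g] for g in indices) / total
```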
Abstract: A system for eye-tracking according to an embodiment of the present invention includes a data collection unit that acquires face information of a user and location information of the user from an image captured by a photographing device installed at each of one or more points set within a three-dimensional space and an eye tracking unit that estimates a location of an area gazed at by the user in the three-dimensional space from the face information and the location information, and maps spatial coordinates corresponding to the location of the area to a three-dimensional map corresponding to the three-dimensional space.
Abstract: Techniques are disclosed for implementing a neural network that outputs embeddings. Furthermore, techniques are disclosed for using sensor data to train a neural network to learn such embeddings. In some examples, the neural network may be trained to learn embeddings. The embeddings may be used for object identification, object matching, object classification, and/or object tracking in various examples.
Type:
Grant
Filed:
November 3, 2020
Date of Patent:
August 30, 2022
Assignee:
Zoox, Inc.
Inventors:
Bryce A. Evans, James William Vaisey Philbin, Sarah Tariq
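One of the uses named in the abstract above, object matching with learned embeddings, can be sketched as a nearest-neighbor test: two detections are treated as the same object when their embedding vectors are sufficiently close. The cosine-similarity metric and threshold are illustrative assumptions; the embedding network itself is out of scope:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_objects(emb_a, emb_b, threshold=0.9):
    """True if two embeddings likely refer to the same object."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```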
Abstract: Methods and apparatus for predicting performance of an individual on a task. The method comprises receiving brain imaging data for the individual, wherein the brain imaging data comprises structural brain data; determining values for at least one characteristic of the structural brain data within regions of interest defined for a population of individuals having different performance levels; and predicting, based on the determined values, a performance potential of the individual.
Type:
Grant
Filed:
October 16, 2018
Date of Patent:
August 23, 2022
Assignee:
Voxel AI, Inc.
Inventors:
Benjamin J. A. Gallacher, Douglas J. Cook, Chris I. Murray, Andrew N. Ross