Abstract: Systems, devices, and methods are described for providing patient anatomy models with indications of model accuracy included with the model. Accuracy is determined, for example, by analyzing gradients at tissue boundaries or by analyzing tissue surface curvature in a three-dimensional anatomy model. The determined accuracy is graphically provided to an operator along with the patient model. The overlaid accuracy indications facilitate the operator's understanding of the model, for example by showing areas of the model that may deviate from the modeled patient's actual anatomy.
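A minimal sketch of the gradient-based accuracy analysis described in the abstract above, assuming a boolean segmentation mask and a 3-D intensity volume; the function name and the normalization heuristic are hypothetical, not the patented method:

```python
# Minimal sketch: score each boundary voxel of a tissue mask by the local
# image gradient magnitude; weak gradients flag regions where the model
# may deviate from the patient's actual anatomy.
import numpy as np
from scipy import ndimage

def boundary_confidence(volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """volume: 3-D intensity array; mask: boolean tissue mask. Returns a 0..1 map."""
    grad_mag = np.sqrt(sum(g ** 2 for g in np.gradient(volume.astype(float))))
    boundary = mask & ~ndimage.binary_erosion(mask)  # mask minus its erosion
    conf = np.zeros_like(grad_mag)
    if boundary.any():
        conf[boundary] = grad_mag[boundary] / max(grad_mag[boundary].max(), 1e-9)
    return conf  # overlay this map on the model as the accuracy indication

vol = np.random.rand(32, 32, 32)
seg = vol > 0.5
print(boundary_confidence(vol, seg).max())
```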
Abstract: A computer-implemented image analysis method and system. The method comprises: quantifying one or more features segmented and identified from a medical image of a subject; extracting clinically relevant features from non-image data pertaining to the subject; assessing the features segmented from the medical image and the features extracted from the non-image data with a trained machine learning model; and outputting one or more results of the assessing of the features.
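As a hedged illustration of the assessing step in the abstract above, the sketch below concatenates quantified image features with clinically relevant non-image features and feeds them to a trained model. The feature values, the stand-in training data, and the choice of scikit-learn classifier are all assumptions:

```python
# Illustrative sketch (not the patented system): combine features quantified
# from segmented image regions with features extracted from non-image data,
# then assess the combined vector with a trained model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Quantified from segmented image regions (e.g., lesion volume, intensity stats).
image_features = np.array([[12.3, 0.87, 41.0]])
# Extracted from non-image data (e.g., age, lab value).
clinical_features = np.array([[63.0, 1.4]])

X = np.hstack([image_features, clinical_features])
y = np.array([0, 1] * 10)                       # stand-in labels
model = GradientBoostingClassifier().fit(np.random.rand(20, 5), y)
print(model.predict_proba(X))  # output one or more results of the assessment
```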
Abstract: Systems and methods for determining user equipment (UE) locations within a wireless network using reference signals of the wireless network are described. The disclosed systems and methods utilize a plurality of in-phase and quadrature (I/Q) samples generated from signals provided by receive channels associated with two or more antennas of the wireless system. Based on received reference signal parameters, the reference signal within the signal from each receive channel is identified. Based on the identified reference signal from each receive channel, an angle of arrival between a baseline of the two or more antennas and incident energy from the UE to the two or more antennas is determined. That angle of arrival is then used to calculate the location of the UE. The angle of arrival may be a horizontal angle of arrival and/or a vertical angle of arrival.
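The core geometric step lends itself to a short sketch. Under a plane-wave assumption, the inter-channel phase difference Δφ of the identified reference signal relates to the angle of arrival θ by Δφ = 2πd·sin(θ)/λ, where d is the antenna baseline spacing and λ the wavelength. The code below is a minimal illustration with assumed spacing and wavelength, not the disclosed system:

```python
# Estimate the angle of arrival from I/Q samples of the same reference
# signal observed on two receive channels.
import numpy as np

def angle_of_arrival(iq_ch0: np.ndarray, iq_ch1: np.ndarray,
                     d: float, lam: float) -> float:
    """Return AoA in radians relative to the antenna baseline normal."""
    # Correlate at lag 0: the argument is the inter-channel phase difference.
    delta_phi = np.angle(np.vdot(iq_ch0, iq_ch1))
    # Plane-wave geometry: delta_phi = 2*pi*d*sin(theta)/lam.
    return np.arcsin(np.clip(delta_phi * lam / (2 * np.pi * d), -1.0, 1.0))

# Example: two channels with half-wavelength spacing and a 0.8 rad offset.
t = np.arange(256)
sig = np.exp(1j * 0.3 * t)
print(np.degrees(angle_of_arrival(sig, sig * np.exp(1j * 0.8), d=0.5, lam=1.0)))
```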
Abstract: A processor-implemented object tracking method includes: setting a suppressed region in a template image based on a shape of a target box of the template image; refining a template feature map of the template image by suppressing an influence of feature data corresponding to the suppressed region in the template feature map; and tracking an object by determining a bounding box corresponding to the target box in a search image based on the refined template feature map.
Type:
Grant
Filed:
August 17, 2021
Date of Patent:
August 1, 2023
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Ju Hwan Song, Hyunjeong Lee, Changbeom Park, Changyong Son, Byung In Yoo
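A minimal sketch (PyTorch assumed) of the suppression step from the abstract above: feature cells falling inside a suppressed region, derived here from an arbitrary target-box shape, are zeroed so they no longer influence matching against the search image. The shapes and the masking heuristic are hypothetical:

```python
# Refine a template feature map by suppressing features in a region chosen
# from the target box's shape (e.g., background strips around an elongated
# target), then track with the refined map.
import torch

def refine_template(feat: torch.Tensor, suppress_mask: torch.Tensor) -> torch.Tensor:
    """feat: (C, H, W) template feature map; suppress_mask: (H, W) bool."""
    refined = feat.clone()
    refined[:, suppress_mask] = 0.0  # suppress influence of these cells
    return refined

feat = torch.randn(256, 16, 16)
mask = torch.zeros(16, 16, dtype=torch.bool)
mask[:, :4] = True  # hypothetical strip outside an elongated target box
print(refine_template(feat, mask).shape)  # torch.Size([256, 16, 16])
```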
Abstract: A system for eye-tracking according to an embodiment of the present invention includes a data collection unit that acquires face information and location information of a user from an image captured by a photographing device installed at each of one or more points set within a three-dimensional space, and an eye tracking unit that estimates a location of an area gazed at by the user in the three-dimensional space from the face information and the location information, and maps spatial coordinates corresponding to the location of the area to a three-dimensional map corresponding to the three-dimensional space.
Abstract: In an embodiment, a method for estimating a composition of food includes: receiving a first three-dimensional (3D) image; identifying food in the first 3D image; determining a volume of the identified food based on the first 3D image; and estimating a composition of the identified food using a millimeter-wave radar.
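The volume-determination step can be sketched as integrating a depth map over the food's segmentation mask relative to the plate plane. The calibration values (plate depth, per-pixel area) below are assumptions for illustration:

```python
# Estimate food volume from a 3D (depth) image: sum per-pixel food height
# above the plate plane over the identified food region.
import numpy as np

def food_volume_cm3(depth_cm: np.ndarray, food_mask: np.ndarray,
                    plate_depth_cm: float, pixel_area_cm2: float) -> float:
    height = np.clip(plate_depth_cm - depth_cm, 0.0, None)  # food height map
    return float(height[food_mask].sum() * pixel_area_cm2)

depth = np.full((100, 100), 50.0)
depth[40:60, 40:60] = 47.0                     # a 3 cm mound of food
mask = np.zeros((100, 100), bool)
mask[40:60, 40:60] = True                      # identified food region
print(food_volume_cm3(depth, mask, plate_depth_cm=50.0, pixel_area_cm2=0.01))
```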
Abstract: A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces having a threshold skin tone are detected within a scene. Next, based on the detected one or more faces, the scene is segmented into one or more face regions and one or more non-face regions. A model of the one or more faces is constructed based on a depth map and a texture map, the depth map including spatial data of the one or more faces and the texture map including surface characteristics of the one or more faces. One or more images of the scene are captured based on the model. Further, in response to the capture, the one or more face regions are processed to generate a final image.
Type:
Grant
Filed:
March 14, 2022
Date of Patent:
July 11, 2023
Assignee:
DUELIGHT LLC
Inventors:
William Guie Rivard, Brian J. Kindle, Adam Barry Feder
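A hypothetical sketch in the spirit of the abstract above: once face regions are segmented, exposure can be biased so mean face luminance lands on a skin-tone target. The luminance proxy and target value are assumptions, not DUELIGHT's method:

```python
# Face-weighted exposure metering: compute a gain that brings the mean
# luminance of the segmented face regions to a skin-tone target.
import numpy as np

def exposure_gain(image: np.ndarray, face_mask: np.ndarray,
                  target_face_luma: float = 0.55) -> float:
    """Return a multiplicative gain so mean face luminance hits the target."""
    luma = image.mean(axis=-1) / 255.0          # crude luminance proxy
    face_luma = luma[face_mask].mean() if face_mask.any() else luma.mean()
    return target_face_luma / max(face_luma, 1e-3)

img = np.full((4, 4, 3), 80, dtype=np.uint8)    # underexposed test patch
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                           # segmented face region
print(exposure_gain(img, mask))                 # ~1.75: brighten the face
```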
Abstract: Systems and methods for validating drive pose refinement are provided. In some aspects, a method includes receiving image data that depicts an area of interest, and receiving a plurality of virtual points generated using the image data. The method also includes selecting at least one drive in the area of interest that captures the plurality of virtual points, and generating a refined pose track for each of the at least one drive by applying a drive alignment process to drive data from the at least one drive using the plurality of virtual points. The method further includes comparing the refined pose track to a control pose track generated using control repoints, and generating, based on the comparison, a report that validates the refined pose track.
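The comparison-and-report step from the abstract above might look like the following sketch, which scores a refined pose track against a control pose track and flags whether it passes an assumed validation threshold:

```python
# Compare a refined pose track to a control pose track and emit a small
# validation report. The 0.10 m threshold is an assumption.
import numpy as np

def validate_pose_track(refined: np.ndarray, control: np.ndarray,
                        threshold_m: float = 0.10) -> dict:
    """refined, control: (N, 3) arrays of x, y, z positions per timestamp."""
    errors = np.linalg.norm(refined - control, axis=1)
    return {
        "rmse_m": float(np.sqrt((errors ** 2).mean())),
        "max_error_m": float(errors.max()),
        "valid": bool(errors.max() <= threshold_m),
    }

control = np.cumsum(np.ones((100, 3)) * 0.1, axis=0)   # stand-in control track
refined = control + np.random.normal(0, 0.02, control.shape)
print(validate_pose_track(refined, control))
```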
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
Type:
Grant
Filed:
June 21, 2021
Date of Patent:
June 27, 2023
Assignee:
NVIDIA Corporation
Inventors:
Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
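As a greatly reduced sketch (PyTorch assumed, not NVIDIA's network), the fusion idea can be shown as a small model whose input is the concatenated outputs of several per-sensor machine learning models and whose output is a single fused representation:

```python
# Late-fusion sketch: a trainable head consumes per-sensor model outputs
# and learns to reconcile overlap regions into one fused output.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, per_sensor_dim: int = 64, n_sensors: int = 3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(per_sensor_dim * n_sensors, 128), nn.ReLU(),
            nn.Linear(128, per_sensor_dim))  # fused output representation

    def forward(self, sensor_outputs: list) -> torch.Tensor:
        return self.head(torch.cat(sensor_outputs, dim=-1))

outs = [torch.randn(1, 64) for _ in range(3)]  # e.g., three per-camera models
print(FusionNet()(outs).shape)                 # torch.Size([1, 64])
```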
Abstract: A detection device including: a detector that detects an object from one viewpoint; a reliability calculator that calculates reliability information on the object at the one viewpoint by using a detection result of the detector; and an information calculator that calculates shape information on the object at the one viewpoint by using the detection result of the detector and the reliability information, and calculates texture information on the object at the one viewpoint by using the detection result. The information calculator generates model information on the object at the one viewpoint based on the shape information and the texture information.
Abstract: Systems, computer-implemented methods, apparatus and/or computer program products are provided that facilitate improving the accuracy of global positioning system (GPS) coordinates of indoor photos. The disclosed subject matter further provides systems, computer-implemented methods, apparatus and/or computer program products that facilitate generating exterior photos of structures based on GPS coordinates of indoor photos.
Abstract: A computing system identifies broadcast video for a plurality of games in a first league. The broadcast video includes a plurality of video frames. The computing system generates tracking data for each game from the broadcast video of the corresponding game. The computing system enriches the tracking data; the enriching includes merging play-by-play data for each game with the tracking data of the corresponding game. The computing system generates padded tracking data based on the tracking data. The computing system projects player performance in a second league for each player based on the tracking data and the padded tracking data.
Type:
Grant
Filed:
October 1, 2021
Date of Patent:
June 20, 2023
Assignee:
STATS LLC
Inventors:
Andrew Patton, Nathan Walker, Matthew Scott, Alex Ottenwess
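The enrichment step, merging play-by-play events into frame-level tracking data, can be sketched with a timestamp-tolerant join. The column names and tolerance below are assumptions, not STATS LLC's schema:

```python
# Merge play-by-play events into tracking data by game and nearest
# timestamp, within an assumed 0.5 s tolerance.
import pandas as pd

tracking = pd.DataFrame({
    "game_id": [1, 1, 1], "ts": [10.0, 10.5, 11.0],
    "player_x": [12.0, 12.4, 13.1], "player_y": [30.0, 29.8, 29.5]})
play_by_play = pd.DataFrame({
    "game_id": [1], "ts": [10.5], "event": ["shot_attempt"]})

enriched = pd.merge_asof(
    tracking.sort_values("ts"), play_by_play.sort_values("ts"),
    on="ts", by="game_id", direction="nearest", tolerance=0.5)
print(enriched)  # tracking rows now carry the matched play-by-play event
```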
Abstract: In some examples, one or more processors may receive at least one first visible light image and a first thermal image. Further, the processor(s) may generate, from the at least one first visible light image, an edge image that identifies edge regions in the at least one first visible light image. At least one of a lane marker or road edge region may be determined based at least in part on information from the edge image. In addition, one or more first regions of interest in the first thermal image may be determined based on at least one of the lane marker or the road edge region. Furthermore, a gain of a thermal sensor may be adjusted based on the one or more first regions of interest in the first thermal image.
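A hedged sketch of the final gain-adjustment step: given a region of interest in the thermal frame (derived from the lane-marker or road-edge detections), scale the sensor gain so the ROI's intensity spread fills most of the output range. The gain model here is an assumption:

```python
# Adjust thermal sensor gain based on a region of interest so the ROI
# spans ~80% of the 16-bit output range.
import numpy as np

def adjust_thermal_gain(thermal: np.ndarray, roi_mask: np.ndarray,
                        current_gain: float) -> float:
    roi = thermal[roi_mask]
    spread = float(roi.max() - roi.min()) or 1.0
    return current_gain * (0.8 * 65535.0) / spread

thermal = np.random.randint(20000, 24000, (8, 8)).astype(np.uint16)
roi = np.zeros((8, 8), dtype=bool)
roi[4:, :] = True  # e.g., rows covering the detected road surface
print(adjust_thermal_gain(thermal, roi, current_gain=1.0))
```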
Abstract: An optical scanner captures a plurality of images from a plurality of image-capture devices. In response to an activation signal, an evaluation phase is executed, and in response to the evaluation phase, an acquisition phase is executed. In the evaluation phase, a first set of images is captured and processed to produce a virtual frame comprising a plurality of regions, with each region containing a reduced-data image frame that is based on a corresponding one of the plurality of images. Also in the evaluation phase, attributes of each of the plurality of regions of the virtual frame are assessed according to first predefined criteria, and operational parameters for the acquisition phase are set based on a result of the assessment. In the acquisition phase, a second set of at least one image is captured via at least one of the plurality of image-capture devices according to the set of operational parameters.
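The evaluation phase described above can be sketched as tiling reduced-data versions of each device's frame into one virtual frame, scoring each region, and choosing acquisition parameters. The downsampling and contrast criterion below are stand-ins for the first predefined criteria:

```python
# Build a virtual frame of reduced-data regions, assess each region, and
# pick the most promising capture device for the acquisition phase.
import numpy as np

def build_virtual_frame(images: list, tile: int = 64) -> np.ndarray:
    """Downsample each image to tile x tile and place the regions side by side."""
    regions = []
    for img in images:
        ys = np.linspace(0, img.shape[0] - 1, tile).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, tile).astype(int)
        regions.append(img[np.ix_(ys, xs)])  # reduced-data image frame
    return np.hstack(regions)

imgs = [np.random.randint(0, 255, (480, 640), np.uint8) for _ in range(3)]
vf = build_virtual_frame(imgs)
scores = [vf[:, i * 64:(i + 1) * 64].std() for i in range(3)]  # contrast score
best = int(np.argmax(scores))  # acquire via the highest-scoring device
print(vf.shape, best)
```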
Abstract: Various methods for utilizing saliency heatmaps are described. The methods include obtaining image data corresponding to an image of a scene, obtaining a saliency heatmap for the image of the scene based on a saliency network, wherein the saliency heatmap indicates a likelihood of saliency for a corresponding portion of the scene, and manipulating the image data based on the saliency heatmap. In embodiments, the saliency heatmap may be produced using a trained machine learning model. The saliency heatmap may be used for various image processing tasks, such as determining which portion(s) of a scene to base an image capture device's autofocus, auto exposure, and/or white balance operations upon. According to some embodiments, one or more bounding boxes may be generated based on the saliency heatmap, e.g., using an optimization operation, which bounding box(es) may be used to assist or enhance the performance of various image processing tasks.
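One of the described uses, deriving a bounding box from the saliency heatmap to steer autofocus or auto-exposure, can be sketched as follows. The quantile threshold is a simple stand-in for the optimization operation mentioned in the abstract:

```python
# Derive a single bounding box covering the most salient portion of the
# heatmap; downstream AF/AE/AWB can be based on this box.
import numpy as np

def saliency_box(heatmap: np.ndarray, q: float = 0.99):
    """Return (x0, y0, x1, y1) covering the top ~1% most salient pixels."""
    ys, xs = np.where(heatmap >= np.quantile(heatmap, q))
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

hm = np.zeros((60, 80))
hm[20:30, 35:55] = 1.0          # salient subject region
print(saliency_box(hm))         # (35, 20, 54, 29) -> meter/focus on this box
```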
Abstract: A system for analyzing images is provided. The system includes a computing device having at least one processor in communication with at least one memory device. The at least one processor is programmed to receive an image including a plurality of objects, detect the plurality of objects in the image, determine dependencies between the plurality of objects, identify the plurality of objects based, at least in part, on the determined dependencies, and determine one or more objects of interest from the plurality of identified objects.
Abstract: Systems, methods, and other embodiments described herein relate to determining depths of a scene from a monocular image. In one embodiment, a method includes generating depth features from depth data using a sparse auxiliary network (SAN) by i) sparsifying the depth data, ii) applying sparse residual blocks of the SAN to the depth data, and iii) densifying the depth features. The method includes generating a depth map from the depth features and a monocular image that corresponds with the depth data according to a depth model that includes the SAN. The method includes providing the depth map as depth estimates of objects represented in the monocular image.
Type:
Grant
Filed:
January 7, 2021
Date of Patent:
May 23, 2023
Assignee:
Toyota Research Institute, Inc.
Inventors:
Vitor Guizilini, Rares A. Ambrus, Adrien David Gaidon
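A very reduced sketch (PyTorch assumed) of the sparsify/densify idea: mask out pixels with no depth return, convolve only over valid data, and normalize by the local count of valid pixels. The real SAN uses sparse residual blocks; the masked convolution here is merely a stand-in:

```python
# Masked convolution over sparse depth: features are computed from valid
# returns only and normalized by local support, approximating densification.
import torch
import torch.nn.functional as F

def masked_depth_features(depth: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """depth: (1, 1, H, W) with zeros where no return; weight: (C, 1, 3, 3)."""
    valid = (depth > 0).float()                        # sparsify: validity mask
    feat = F.conv2d(depth * valid, weight, padding=1)  # features from valid data
    norm = F.conv2d(valid, torch.ones_like(weight), padding=1).clamp(min=1.0)
    return feat / norm                                 # densify: normalize support

depth = torch.zeros(1, 1, 8, 8)
depth[0, 0, ::3, ::3] = 5.0                # sparse depth returns
w = torch.randn(4, 1, 3, 3)
print(masked_depth_features(depth, w).shape)  # torch.Size([1, 4, 8, 8])
```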
Abstract: Mechanisms are provided to implement a machine learning training model. The machine learning training model trains an image generator of a generative adversarial network (GAN) to generate medical images approximating actual medical images. The machine learning training model augments a set of training medical images to include one or more generated medical images generated by the image generator of the GAN. The machine learning training model trains a machine learning model based on the augmented set of training medical images to identify anomalies in medical images. The trained machine learning model is applied to new medical images to classify them as having an anomaly or not.
Type:
Grant
Filed:
November 17, 2021
Date of Patent:
May 9, 2023
Inventors:
Ali Madani, Mehdi Moradi, Tanveer F. Syeda-Mahmood
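The augmentation step can be sketched as sampling synthetic images from a trained GAN generator and appending them to the real training set before training the downstream model. The toy generator below is a stand-in, not a trained GAN:

```python
# Augment a training set with GAN-generated samples (PyTorch assumed).
import torch

def augment_dataset(real_images: torch.Tensor, generator: torch.nn.Module,
                    n_synthetic: int, z_dim: int = 128) -> torch.Tensor:
    with torch.no_grad():
        z = torch.randn(n_synthetic, z_dim)
        fake_images = generator(z)  # images approximating actual medical images
    return torch.cat([real_images, fake_images], dim=0)

generator = torch.nn.Sequential(torch.nn.Linear(128, 64 * 64),
                                torch.nn.Tanh())  # toy stand-in generator
real = torch.rand(100, 64 * 64)
print(augment_dataset(real, generator, n_synthetic=50).shape)  # [150, 4096]
```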
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive a series of sample coordinate points of a projected path of travel of a vehicle, generate interpolated coordinate points along the projected path between the sample coordinate points, fit a curve to the sample coordinate points and interpolated coordinate points, and output a curvature of a lane at a reported coordinate point along the projected path based on the curve.
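A worked sketch of this pipeline: interpolate extra points along the projected path, fit a curve to the combined points, and report curvature at a coordinate point. For a curve y = f(x), curvature is |f''| / (1 + f'²)^(3/2); the sample coordinates below are invented for illustration:

```python
# Fit a quadratic to sample plus interpolated coordinate points, then
# evaluate lane curvature at a reported coordinate point.
import numpy as np

samples_x = np.array([0.0, 10.0, 20.0, 30.0])
samples_y = np.array([0.0, 0.5, 2.0, 4.6])

# Interpolate extra points along the projected path, then fit the curve.
xi = np.linspace(samples_x[0], samples_x[-1], 50)
yi = np.interp(xi, samples_x, samples_y)
c2, c1, c0 = np.polyfit(np.concatenate([samples_x, xi]),
                        np.concatenate([samples_y, yi]), deg=2)

def curvature(x: float) -> float:
    d1, d2 = 2 * c2 * x + c1, 2 * c2        # first and second derivatives
    return abs(d2) / (1 + d1 ** 2) ** 1.5

print(curvature(15.0))  # lane curvature at the reported coordinate point
```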
Abstract: Techniques are provided for improving a perception processing pipeline for object detection that fuses image segmentation data (e.g., segmentation scores) with LiDAR points. The disclosed techniques are implemented using an architecture that accepts point clouds and images as input and estimates oriented 3D bounding boxes for all relevant object classes. In an embodiment, a method comprises: matching temporally, using one or more processors of a vehicle, points in a three-dimensional (3D) point cloud with an image; generating, using an image-based neural network, semantic data for the image; decorating, using the one or more processors, the points in the 3D point cloud with the semantic data; and estimating, using a 3D object detector with the decorated points as input, oriented 3D bounding boxes for the one or more objects.
Type:
Grant
Filed:
November 24, 2021
Date of Patent:
April 25, 2023
Assignee:
Motional AD LLC
Inventors:
Sourabh Vora, Oscar Olof Beijbom, Alex Hunter Lang, Bassam Helou
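The decorating step, appending per-pixel segmentation scores to each projected LiDAR point, can be sketched as below (in the spirit of point-painting approaches). The camera intrinsics and class count are assumptions:

```python
# Decorate LiDAR points with image segmentation scores: project each point
# into the image and append that pixel's class scores to the point.
import numpy as np

def decorate_points(points: np.ndarray, seg_scores: np.ndarray,
                    K: np.ndarray) -> np.ndarray:
    """points: (N, 3) in camera frame; seg_scores: (H, W, C) class scores."""
    uvw = (K @ points.T).T                        # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # pixel coordinates
    h, w, _ = seg_scores.shape
    uv[:, 0] = uv[:, 0].clip(0, w - 1)
    uv[:, 1] = uv[:, 1].clip(0, h - 1)
    return np.hstack([points, seg_scores[uv[:, 1], uv[:, 0]]])  # (N, 3 + C)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.array([[1.0, 0.5, 10.0], [-2.0, 0.1, 15.0]])
scores = np.random.rand(480, 640, 4)   # 4 semantic classes
print(decorate_points(pts, scores, K).shape)  # (2, 7): decorated points
```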