Abstract: In accordance with one embodiment, a method of locating mineral deposits suitable for production comprises obtaining via a computer an image of an area of land, determining from the image at least one fluid-expulsion structure present on the land, and designating an area proximate the fluid-expulsion structure as a mineral exploration location. In accordance with another embodiment, a method of locating a hydrocarbon reservoir suitable for production comprises obtaining via a computer an image of an area of land, determining from the image at least one fluid-expulsion structure present on the land, and designating an area proximate the fluid-expulsion structure as a hydrocarbon exploration location.
Abstract: In some implementations, an image classification system of an autonomous or semi-autonomous vehicle is capable of improving multi-object classification by reducing repeated incorrect classification of objects that are considered rarely occurring objects. The system can include a common instance classifier that is trained to identify and recognize general objects (e.g., commonly occurring objects and rarely occurring objects) as belonging to specified object categories, and a rare instance classifier that is trained to compute one or more rarity scores representing likelihoods that an input image is correctly classified by the common instance classifier. The output of the rare instance classifier can be used to adjust the classification output of the common instance classifier such that the likelihood of input images being incorrectly classified is reduced.
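A minimal sketch of how a rarity score might be used to adjust a common classifier's output, as the abstract describes. The function name, the uniform-mixing scheme, and the threshold are illustrative assumptions, not the patented implementation:

```python
def adjust_classification(common_probs, rarity_score, threshold=0.5):
    """Down-weight the common instance classifier's confidence when the
    rare instance classifier indicates the input is likely misclassified.

    common_probs : dict mapping object category -> probability from the
                   common instance classifier
    rarity_score : likelihood (0..1) that the common classifier's output
                   is unreliable for this input (hypothetical semantics)
    """
    if rarity_score <= threshold:
        # Rare instance classifier trusts the common classifier: no change.
        return dict(common_probs)
    # Blend toward a uniform distribution so downstream consumers treat
    # the prediction as uncertain rather than confidently wrong.
    uniform = 1.0 / len(common_probs)
    trust = 1.0 - rarity_score
    return {c: trust * p + (1.0 - trust) * uniform
            for c, p in common_probs.items()}
```

With `rarity_score` below the threshold the distribution passes through unchanged; above it, confident but likely-wrong predictions are flattened.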
Abstract: An image forming apparatus of the present invention includes a human detection unit configured to detect a human, an operation unit including a display device configured to display an operation screen, a touch panel configured to detect a touch operation of the operation screen, a control unit configured to return the operation unit from a first power state in which the display device does not display the operation screen to a second power state in which the display device displays the operation screen based on a detection result of the human detection unit, and an invalidation unit configured to invalidate a touch operation of the operation screen for a predetermined period after the returning starts.
Abstract: With the present invention, it is possible to swiftly collect light source information for use when correcting color information of an object region in a captured image. A position in real space of an object included in a captured image is acquired, and light source information that corresponds to the acquired position is acquired. Color information of a region including the object in the captured image is corrected based on the acquired light source information.
Abstract: Embodiments include a system for determining cardiovascular information for a patient. The system may include at least one computer system configured to receive patient-specific data regarding a geometry of the patient's heart, and create a three-dimensional model representing at least a portion of the patient's heart based on the patient-specific data. The at least one computer system may be further configured to create a physics-based model relating to a blood flow characteristic of the patient's heart and determine a fractional flow reserve within the patient's heart based on the three-dimensional model and the physics-based model.
Abstract: In one embodiment, an apparatus comprises a communication interface and a processor. The communication interface is to communicate with a plurality of cameras. The processor is to obtain metadata associated with an initial state of an object, wherein the object is captured by a first camera in a first video stream at a first point in time, and wherein the metadata is obtained based on the first video stream. The processor is further to predict, based on the metadata, a future state of the object at a second point in time, and identify a second camera for capturing the object at the second point in time. The processor is further to configure the second camera to capture the object in a second video stream at the second point in time, wherein the second camera is configured to capture the object based on the future state of the object.
Type:
Grant
Filed:
April 30, 2018
Date of Patent:
May 25, 2021
Assignee:
Intel Corporation
Inventors:
Marcos Emanuel Carranza, Lakshmi N. Talluru, Cesar I. Martinez Spessot, Mateo Guzman, Sebastian M. Salomon, Mats G. Agerstam
Abstract: A foot screening system configured to aid in screening a plantar surface of a foot of a user for sores, ulcers, and other signs of disease. The foot screening system includes a foot platform including a foot contacting surface configured to serve as a foot stabilizing device during a screening of a plantar surface of a foot of a user, a camera stabilizer platform configured to support a mobile computing device at a desired angle and distance from the foot platform, and a user interface configured to aid a user in capturing one or more images of the plantar surface of the foot of the user, flagging the one or more images for additional review, and uploading the one or more images to a network for access by a healthcare provider.
Abstract: Introduced here are computer programs and associated computer-implemented techniques for determining reflectance of an image on a per-pixel basis. More specifically, a characterization module can initially acquire a first data set generated by a multi-channel light source and a second data set generated by a multi-channel image sensor. The first data set may specify the illuminance of each color channel of the multi-channel light source (which is configured to produce a flash), while the second data set may specify the response of each sensor channel of the multi-channel image sensor (which is configured to capture an image in conjunction with the flash). Thus, the characterization module may determine reflectance based on illuminance and sensor response. The characterization module may also be configured to determine illuminance based on reflectance and sensor response, or determine sensor response based on illuminance and reflectance.
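As a rough illustration of the per-channel relationship this abstract describes, under an assumed linear model (`response = reflectance * illuminance`), reflectance can be estimated channel by channel. The function and the linear assumption are a sketch, not the characterization module's actual method:

```python
def estimate_reflectance(illuminance, response, eps=1e-9):
    """Estimate per-channel reflectance under a simple linear model:
    response ~= reflectance * illuminance, so
    reflectance ~= response / illuminance.

    illuminance : per-channel illuminance of the multi-channel light source
    response    : per-channel response of the multi-channel image sensor
    eps         : guard against division by zero for dark channels
    """
    return [r / max(i, eps) for i, r in zip(illuminance, response)]
```

The same relation can be rearranged to recover illuminance from reflectance and response, or response from illuminance and reflectance, mirroring the three directions the abstract mentions.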
Type:
Grant
Filed:
May 23, 2019
Date of Patent:
April 27, 2021
Assignee:
RINGO AI, INC.
Inventors:
Matthew D. Weaver, James Kingman, Jay Hurley, Jeffrey Saake, Sanjoy Ghose
Abstract: A method includes calculating a center-of-mass of a volume of an organ of a patient in a computerized anatomical map of the volume. A location is found on the anatomical map, on a surface of the volume, that is farthest from the center-of-mass. The location is identified as a known anatomical opening of the organ.
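The geometric idea above can be sketched in a few lines of pure Python. The voxel list and surface-point list are assumed inputs (the patented method works on a computerized anatomical map of the organ volume):

```python
def farthest_surface_point(volume_voxels, surface_points):
    """Return the surface point farthest from the volume's center of mass.

    volume_voxels  : iterable of (x, y, z) coordinates filling the organ volume
    surface_points : iterable of (x, y, z) coordinates on the volume's surface
    """
    voxels = list(volume_voxels)
    n = len(voxels)
    # Center of mass of the (uniform-density) volume.
    com = tuple(sum(v[k] for v in voxels) / n for k in range(3))

    def dist2(p):
        # Squared distance is enough for picking the maximum.
        return sum((p[k] - com[k]) ** 2 for k in range(3))

    # The farthest surface location is identified as the anatomical opening.
    return max(surface_points, key=dist2)
```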
Abstract: In one embodiment, a diagnostic system for biological samples is disclosed. The diagnostic system includes a diagnostic instrument, and a portable electronic device. The diagnostic instrument has a reference color bar and a plurality of chemical test pads to receive a biological sample. The portable electronic device includes a digital camera to capture a digital image of the diagnostic instrument in uncontrolled lighting environments, a sensor to capture illuminance of a surface of the diagnostic instrument, a processor coupled to the digital camera and sensor to receive the digital image and the illuminance, and a storage device coupled to the processor. The storage device stores instructions for execution by the processor to process the digital image and the illuminance, to normalize colors of the plurality of chemical test pads and determine diagnostic test results in response to quantification of color changes in the chemical test pads.
Abstract: The present invention provides a method for tracking a face in a video, comprising the steps of taking an image sample from a video; extracting and storing a target face feature according to the image sample; dividing the video into one or more scenes; and labeling one or more faces matching the target face feature. Accordingly, an actor's facial expressions and motion can be extracted with ease and used as training materials or subject matters for discussion.
Abstract: Embodiments described herein provide various examples of a face-image training data preparation system for performing large-scale face-image training data acquisition, preprocessing, cleaning, balancing, and post-processing. The disclosed training data preparation system can collect a very large set of loosely-labeled images of different people from the public domain, and then generate a raw training dataset including a set of incorrectly-labeled face images. The disclosed training data preparation system can then perform cleaning and balancing operations on the raw training dataset to generate a high-quality face-image training dataset free of the incorrectly-labeled face images. The processed high-quality face-image training dataset can be subsequently used to train deep-neural-network-based face recognition systems to achieve high performance in various face recognition applications.
Type:
Grant
Filed:
December 31, 2017
Date of Patent:
March 9, 2021
Assignee:
AltumView Systems Inc.
Inventors:
Zili Yi, Xing Wang, Him Wai Ng, Sami Ma, Jie Liang
Abstract: An image processing apparatus includes a processor that acquires document image data that is generated by reading an original document and recognizes character strings that are included in the document image data through character recognition, and the processor selects, as an issuing date of the original document, date information that includes time information from among a plurality of date information pieces in a case in which the plurality of date information pieces are extracted from among the character strings. The processor distinguishes a type of the original document on the basis of the document image data and selects date information as an issuing date of the original document from among the plurality of date information pieces in accordance with the type of the original document in a case in which no date information that includes time information has been extracted.
Abstract: Methods of applying OCT angiography are disclosed. In particular, methods of detecting, visualizing, and measuring the extent of retinal neovascularization are disclosed. Further disclosed are methods of measuring retinal nonperfusion area and choriocapillaris defect area.
Abstract: A vehicle exterior environment recognition apparatus includes an object identifier and a barrier setting unit. The object identifier is configured to identify an object in a detected region ahead of an own vehicle. The barrier setting unit is configured to set a barrier located at a closest end of the object, with a relative distance from the object to the own vehicle in a traveling direction of the own vehicle being shortest at the closest end. The barrier is set as being unavoidable by the own vehicle with use of a traveling mode of the own vehicle.
Abstract: An image processing apparatus and method are provided. The image processing apparatus includes: a memory configured to store at least one instruction, and a processor electrically connected to the memory, wherein the processor, by executing the at least one instruction, is configured to: apply an input image to a training network model; and apply, to a pixel block included in the input image, a texture patch corresponding to the pixel block to obtain an output image, wherein the training network model stores a plurality of texture patches corresponding to a plurality of classes classified based on a characteristic of an image, and is configured to train at least one texture patch, among the plurality of texture patches, based on the input image.
Abstract: Methods and apparatus to automatically generate an image quality metric for an image are provided. An example method includes automatically processing a first medical image using a deployed learning network model to generate an image quality metric for the first medical image, the deployed learning network model generated from a digital learning and improvement factory including a training network, wherein the training network is tuned using a set of labeled reference medical images of a plurality of image types, and wherein a label associated with each of the labeled reference medical images indicates a central tendency metric associated with image quality of the image. The example method includes computing the image quality metric associated with the first medical image using the deployed learning network model by leveraging labels and associated central tendency metrics to determine the associated image quality metric for the first medical image.
Type:
Grant
Filed:
November 27, 2019
Date of Patent:
January 19, 2021
Assignee:
General Electric Company
Inventors:
Jiang Hsieh, Gopal Avinash, Saad Sirohey, Xin Wang, Zhye Yin, Bruno De Man
Abstract: In one embodiment, a computer-readable non-transitory storage medium embodies software that is operable when executed to, in real time, capture, by a single sensor, a number of images of a user; determine, based on the number of images, one or more short-term cardiological signals of the user during a period of time; estimate, based on the cardiological signals, a first short-term emotional state of the user; determine, based on the number of images, one or more short-term neurological signals of the user during the period of time; estimate, based on the neurological signals, a second short-term emotional state of the user; compare the first estimated emotional state to the second estimated emotional state; and in response to a determination that the first estimated emotional state corresponds to the second estimated emotional state, determine a short-term emotion of the user during the period of time.
Abstract: Described herein are a method, apparatus, terminal, and system for measuring a trajectory tracking accuracy of a target. Measuring the trajectory tracking accuracy of the target includes determining location information of the actual tracking trajectory of the target; comparing the location information of the actual tracking trajectory with location information of the target trajectory to determine a variance between the two; and determining the tracking accuracy of the target based on the variance.
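The comparison step above can be sketched as follows. The trajectory representation (equal-length lists of (x, y) positions sampled at matching times) and the mean-squared-deviation form of the variance are illustrative assumptions:

```python
def tracking_accuracy(actual, target):
    """Mean squared positional deviation between an actual tracking
    trajectory and the target trajectory, sampled at the same times.

    actual, target : equal-length lists of (x, y) positions
    Returns a variance-like error; smaller means higher tracking accuracy.
    """
    assert len(actual) == len(target), "trajectories must align in time"
    sq = [(ax - tx) ** 2 + (ay - ty) ** 2
          for (ax, ay), (tx, ty) in zip(actual, target)]
    return sum(sq) / len(sq)
```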