Abstract: A device for optically detecting in transmission nanoparticles moving in a fluid sample includes a light source for emitting a spatially incoherent beam for illuminating the sample; an imaging optical system; and a two-dimensional optical detector. The imaging optical system includes a microscope objective. The two-dimensional optical detector includes a detection plane conjugated with an object focal plane of the microscope objective by said imaging optical system. The two-dimensional optical detector allows a sequence of images of an analysis volume of the sample to be acquired, each image resulting from optical interferences between the illuminating beam incident on the sample and the beams scattered by each of the nanoparticles present in the analysis volume during a preset duration shorter than one millisecond.
September 29, 2015
Date of Patent: March 5, 2019
ECOLE SUPÉRIEURE DE PHYSIQUE ET DE CHIMIE INDUSTRIELLES DE LA VILLE DE PARIS, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE, UNIVERSITÉ DE PIERRE ET MARIE CURIE, ECOLE NORMALE SUPÉRIEURE, INSTITUT NATIONAL DE LA SANTÉ ET DE LA RECHERCHE MÉDICALE
Abstract: A method and system for identifying an unknown individual from a digital image is disclosed herein. In one embodiment, the present invention allows an individual to photograph a facial image of an unknown individual, transfer that facial image to a server for processing into a feature vector, and then search social networking Web sites to obtain information on the unknown individual. The Web sites comprise myspace.com, facebook.com, linkedin.com, www.hi5.com, www.bebo.com, www.friendster.com, www.igoogle.com, netlog.com, and orkut.com. A method of networking is also disclosed. A method for determining unwanted individuals on a social networking website is also disclosed.
Abstract: Image processing and analysis technique includes using a computer apparatus to assess a patient's magnetic resonance images or derived multiparametric maps for pathology and then automatically generate a prescription based at least in part on that assessment. The parametric maps are derived from an MRI sequence from which multiparametric maps are derivable.
Abstract: A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.
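The tagging step above hinges on comparing a target feature vector against vectors extracted from candidate photos. A minimal sketch of that matching idea, not the patented implementation; the cosine-similarity measure, the threshold, and the URL keys are all hypothetical choices:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def tag_photos(target_vector, photo_vectors, threshold=0.9):
    """Return URLs of photos whose extracted face vector matches the
    target individual's vector above the similarity threshold.

    photo_vectors: dict mapping photo URL -> feature vector of a face
    detected in that photo (assumed already extracted upstream).
    """
    return [url for url, vec in photo_vectors.items()
            if cosine_similarity(target_vector, vec) >= threshold]
```

In practice the matched URLs (or URIs) would then be written to the database, as the abstract describes.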
Abstract: Embodiments describe an accurate and rapid method for assessing spinal bone density on chest or abdominal CT images using post-processed colored images. Post-processing of CT images for the purposes of displaying the spine is followed by color enhancement of routine unenhanced or contrast enhanced CT images to improve diagnostic accuracy, inter-observer agreement, reader confidence and/or time of interpretation as it relates to assessing bone density of the spine. CT images are post-processed (without changes to the standard-of-care CT imaging protocol and without additional cost or radiation for the patient) to straighten the spine for improved visualization of multiple segments. The color-enhanced images can be displayable simultaneously with the grayscale images. Methods and systems are provided for performing opportunistic bone density screening.
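The color-enhancement step above amounts to mapping CT attenuation values onto a color ramp so that low-density bone stands out. A toy sketch of one such mapping; the Hounsfield-unit thresholds and the red-to-green ramp are hypothetical, not the embodiment's actual color scheme:

```python
def color_enhance(hu_values, low=100, high=300):
    """Map CT attenuation (Hounsfield units) onto a red-to-green ramp.

    Values at or below `low` render fully red (low density), values at
    or above `high` fully green; thresholds here are illustrative only.
    """
    def to_rgb(hu):
        t = max(0.0, min(1.0, (hu - low) / (high - low)))
        return (int(255 * (1 - t)), int(255 * t), 0)
    return [to_rgb(v) for v in hu_values]
```

A real pipeline would apply this per pixel after the spine-straightening post-processing, and display the result alongside the grayscale image.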
Abstract: When a plurality of candidate corneal reflections are extracted from an image of a person, a line-of-sight detection device calculates a movement vector of a reflection location on the basis of the candidate corneal reflection extracted from a first image and a second image, calculates a movement vector of a reference point on the person on the basis of the reference point extracted from the first image and the second image, identifies a corneal reflection from among the plurality of candidate corneal reflections on the basis of the movement vector of the reflection location and the movement vector of the reference point so as to identify a location of the corneal reflection, and extracts a location of a pupil from the image, and calculates a line of sight of the person on the basis of the identified location of the corneal reflection and the extracted location of the pupil.
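The selection logic above can be sketched simply: the true corneal reflection is the candidate whose frame-to-frame movement best agrees with the movement of the reference point. In this toy version the candidates are assumed index-aligned between the two frames (a real system would first associate candidates across frames):

```python
def movement(p1, p2):
    """Movement vector between two (x, y) locations."""
    return (p2[0] - p1[0], p2[1] - p1[1])

def identify_corneal_reflection(cands1, cands2, ref1, ref2):
    """Pick the candidate reflection whose movement best matches the
    reference point's movement, and return its second-frame location.

    cands1, cands2: candidate (x, y) locations in the first and second
    image, index-aligned for simplicity.
    ref1, ref2: reference-point location in each image.
    """
    ref_move = movement(ref1, ref2)
    def mismatch(i):
        mv = movement(cands1[i], cands2[i])
        return (mv[0] - ref_move[0]) ** 2 + (mv[1] - ref_move[1]) ** 2
    best = min(range(len(cands1)), key=mismatch)
    return cands2[best]
```

The identified reflection location would then be combined with the extracted pupil location to compute the line of sight.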
Abstract: In accordance with some embodiments, systems, methods and media for determining object motion in three dimensions using speckle images are provided. In some embodiments, a system for three dimensional motion estimation is provided, comprising: a light source; an image sensor; and a hardware processor programmed to: cause the light source to emit light toward the scene; cause the image sensor to capture a first defocused speckle image of the scene at a first time and capture a second defocused speckle image of the scene at a second time; generate a first scaled version of the first defocused image; generate a second scaled version of the first defocused image; compare each of the first defocused image, the first scaled version, and the second scaled version to the second defocused image; and determine axial and lateral motion of the object based on the comparisons.
April 10, 2017
Date of Patent: December 11, 2018
Wisconsin Alumni Research Foundation
Mohit Gupta, Brandon M. Smith, Pratham H. Desai, Vishal R. Agarwal
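The scale-and-compare idea in the speckle-imaging abstract above can be sketched in one dimension: scaled versions of the first speckle pattern are correlated against the second, and the winning scale indicates axial motion while the winning shift indicates lateral motion. The nearest-neighbour rescaling and the small candidate grid below are simplifications, not the patented method:

```python
def scale_signal(sig, factor):
    """Nearest-neighbour rescale of a 1-D speckle profile
    (a toy stand-in for magnifying a 2-D speckle image)."""
    n = len(sig)
    return [sig[min(int(i / factor), n - 1)] for i in range(n)]

def corr_at(a, b, shift):
    """Correlation of a against b displaced by `shift` samples."""
    return sum(a[i] * b[i + shift] for i in range(len(a))
               if 0 <= i + shift < len(b))

def estimate_motion(img1, img2, scales=(0.9, 1.0, 1.1), max_shift=3):
    """Return the (scale, shift) pair under which a scaled version of
    img1 best correlates with img2: scale ~ axial, shift ~ lateral."""
    candidates = [(s, sh) for s in scales
                  for sh in range(-max_shift, max_shift + 1)]
    return max(candidates,
               key=lambda c: corr_at(scale_signal(img1, c[0]), img2, c[1]))
```

With defocused speckle, axial motion of the scene magnifies the pattern, which is why comparing against scaled copies recovers the axial component.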
Abstract: Embodiments provide methods and systems for verifying the authenticity of products. In an embodiment, an image of at least a part of a product label of a product is scanned and processed. An image profile is created from the scanned image and compared with a set of reference image profiles. Each reference image profile is associated with a reference image, a reference control transform value and a reference validation transform value. If the image profile matches one of the reference image profiles, the reference image corresponding to the matching reference image profile is retrieved. A control transform value and a validation transform value of the scanned image are determined and compared with the reference control transform value and the reference validation transform value of the reference image, respectively, to verify the authenticity of the product.
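A toy illustration of the profile-matching and transform-comparison flow described above. The coarse intensity histogram used as the profile and the two transform functions are hypothetical stand-ins; the patent does not specify these particular computations:

```python
def image_profile(pixels, bins=4):
    """Coarse intensity histogram over 8-bit pixel values,
    used here as a hypothetical image profile."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return hist

def verify(pixels, references, tolerance=0):
    """references: list of dicts with keys 'profile', 'control',
    'validation'. Returns True only if a reference profile matches and
    both transform values agree within the tolerance."""
    profile = image_profile(pixels)
    control = sum(pixels) % 997             # hypothetical control transform
    validation = max(pixels) - min(pixels)  # hypothetical validation transform
    for ref in references:
        if ref['profile'] == profile:
            return (abs(control - ref['control']) <= tolerance and
                    abs(validation - ref['validation']) <= tolerance)
    return False
```

The two-stage structure (cheap profile match first, then transform checks) mirrors the abstract's retrieve-then-compare ordering.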
Abstract: A method of analyzing a spinal region of a subject. The method includes steps of obtaining a first sagittal image of the spinal region of the subject using an upright magnetic resonance imaging unit; identifying a first vertebral edge on a first side of a first disc in the first sagittal image; identifying a second vertebral edge on a second side of the first disc in the first sagittal image; and determining a first angle between the first vertebral edge and the second vertebral edge for the first disc.
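The angle-determination step above reduces to measuring the angle between two line segments (the vertebral edges) identified in the sagittal image. A minimal sketch, assuming each edge is given as two (x, y) endpoint coordinates:

```python
import math

def edge_angle(edge):
    """Orientation in degrees of a vertebral edge given as two (x, y) points."""
    (x1, y1), (x2, y2) = edge
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def disc_angle(upper_edge, lower_edge):
    """Angle between the vertebral edges on either side of a disc,
    wrapped into [0, 180] degrees."""
    a = edge_angle(upper_edge) - edge_angle(lower_edge)
    return abs((a + 180) % 360 - 180)
```

In the method described, this angle would be computed per disc from the edges segmented in the upright MRI sagittal image.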
Abstract: The present disclosure relates to a gaze-based error recognition detection system intended to predict the user's intention to correct misrecognitions of user-drawn sketches through a multimodal, computer-based intelligent user interface. More particularly, the present disclosure relates to a gaze-based error recognition system comprising at least one computer, an eye tracker that captures natural eye gaze behavior during sketch-based interaction, an interaction surface, and a sketch-based interface that interprets user-drawn sketches.
Abstract: Disclosed are methods, circuits, devices, systems and associated executable code for multi-factor image feature registration and tracking, wherein the utilized factors include both static and dynamic parameters within a video feed. Assessed factors may originate from a heterogeneous set of sensors including both video and audio sensors. Acoustically acquired scene information may supplement optically acquired information.
Abstract: A method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model.
Abstract: A set of electronic documents, such as emails, may include private information. A privatized image is generated to represent the set of electronic documents by identifying probable-pixel and proven-pixel locations within a subset of images. The subset of images is used to create a blurred and obfuscated image in which the private information is distorted.
Abstract: Provided are a system and method for surface condition measurement and analysis that effectively utilize the image data obtained when a target surface is photographed regularly and continuously. When a target surface is sequentially photographed over time and the photographed image data is sequentially stored, the stored images are compared and the presence or absence of image regions that nearly coincide with each other is determined. When there are images with nearly coinciding image regions, a coordinate system having one image as a reference is set, and the position of the other image in that coordinate system is determined. When an image with an undetermined position overlaps an image with a determined position, including images photographed subsequently, the position of that image is determined.
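The position-determination step above is essentially registration: sliding one image against another and keeping the offset where the overlapping regions nearly coincide. A one-dimensional sketch using sum-of-squared-differences over the overlap (a simplification of the 2-D case; the `min_overlap` guard is a hypothetical detail):

```python
def find_offset(img_a, img_b, min_overlap=2):
    """Offset of 1-D strip img_b relative to img_a minimizing the mean
    squared difference over their overlapping region."""
    best, best_err = None, float('inf')
    for off in range(-(len(img_b) - min_overlap),
                     len(img_a) - min_overlap + 1):
        pairs = [(img_a[i], img_b[i - off]) for i in range(len(img_a))
                 if 0 <= i - off < len(img_b)]
        if len(pairs) < min_overlap:
            continue
        err = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        if err < best_err:
            best, best_err = off, err
    return best
```

Once an offset is found, the second image's position can be fixed in the reference image's coordinate system, and later images chained onto it in the same way.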
Abstract: An image processing apparatus, an image processing method, and a storage medium are shown. According to one implementation, the image processing apparatus includes the following. A detecting unit detects a subject in a moving image. A clipping unit clips a region corresponding to the subject detected by the detecting unit from each frame image composing the moving image. A setting unit sets a planned clipping region to be newly clipped by the clipping unit based on at least one of a position and a size of a region corresponding to the subject already clipped by the clipping unit.
Abstract: According to one embodiment, an article recognition apparatus includes an image acquisition unit, a recognition unit, a region detection unit, a storage unit, and a determination unit. The recognition unit recognizes each of the articles. The region detection unit determines article region information. The storage unit stores article information including a reference value for the article region information. The determination unit determines that an unrecognized article exists if the reference value for the article region information of each article recognized by the recognition unit does not match the article region information.
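One way to read the determination step above: if the detected article region is larger than the reference region values of the recognized articles can account for, something unrecognized must be present. A toy area-based sketch under that assumption (the area-sum comparison and tolerance are hypothetical, not the claimed test):

```python
def unrecognized_exists(recognized_items, detected_area,
                        reference_areas, tolerance=0.1):
    """True if the total detected article region exceeds the area
    expected from the recognized articles alone.

    recognized_items: labels returned by the recognition unit.
    reference_areas: stored reference region value per article label.
    """
    expected = sum(reference_areas[item] for item in recognized_items)
    return detected_area > expected * (1 + tolerance)
```

The tolerance absorbs small segmentation noise so that only a genuine mismatch flags an unrecognized article.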
Abstract: A method for configuring a CNN with learned parameters that performs the activation operation of an activation module and the convolution operation of a convolutional layer at the same time is provided. The method includes steps of: (a) allowing a comparator to compare an input value, corresponding to each of the pixel values of an input image as a test image, with a predetermined reference value and then output a comparison result; (b) allowing a selector to output a specific parameter corresponding to the comparison result among multiple parameters of the convolutional layer; and (c) allowing a multiplier to output a multiplication value calculated by multiplying the specific parameter by the input value, the multiplication value being determined as the result of applying the convolutional layer to the output of the activation module.
October 13, 2017
Date of Patent: September 25, 2018
Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
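The comparator/selector/multiplier pipeline in the CNN abstract above amounts to folding a piecewise-linear activation into the weight selection: compare the input against a reference, select the matching parameter, multiply. A minimal sketch for a leaky-ReLU-style activation, with hypothetical weights and a single scalar weight per sample rather than a full kernel:

```python
def merged_activation_conv(x, w_pos, w_neg, reference=0.0):
    """Comparator -> selector -> multiplier for one input value:
    pick the weight according to which side of `reference` x falls on,
    then multiply, so activation and convolution happen in one step."""
    w = w_pos if x >= reference else w_neg
    return w * x

def conv1d_with_folded_relu(signal, weight, slope=0.1):
    """Apply weight * leaky_relu(x) to each sample by selecting between
    `weight` and `weight * slope` instead of activating first."""
    return [merged_activation_conv(x, weight, weight * slope)
            for x in signal]
```

This works because leaky_relu(x) * w equals (w if x >= 0 else w * slope) * x, so the separate activation pass disappears into the parameter selection, which is the saving the method targets.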
Abstract: Systems and methods for tracking and analyzing physical activity are disclosed. In some aspects, a provided method includes receiving a time sequence of images captured while an individual is performing the physical activity, and generating, using the time sequence of images, at least one map indicating a movement of the individual. The method also includes identifying at least one body portion using the at least one map, and computing at least one index associated with the identified body portions to characterize the physical activity of the individual. The method further includes generating a report using the at least one index.
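The map-and-index steps above can be sketched with simple frame differencing: a binary motion map per frame pair, then a scalar index summarizing how much movement occurred. The threshold and the whole-frame index are hypothetical simplifications of the per-body-portion indices the method describes:

```python
def motion_map(frame1, frame2, threshold=10):
    """Binary map of pixels whose intensity changed between two frames
    (each frame a 2-D list of intensities)."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(r1, r2)]
            for r1, r2 in zip(frame1, frame2)]

def activity_index(maps):
    """Fraction of pixels flagged as moving, averaged over a sequence
    of motion maps."""
    total = sum(v for m in maps for row in m for v in row)
    pixels = sum(len(row) for m in maps for row in m)
    return total / pixels
```

A fuller pipeline would restrict the index computation to the pixels of each identified body portion before generating the report.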
Abstract: An object information acquiring apparatus includes: an irradiator irradiating an object with light; a probe having a plurality of transducers which receive an acoustic wave generated from the object irradiated with the light and output a reception signal; and a controller using the reception signal to acquire property information on the interior of the object. The probe has a plurality of apertures, and the surface on which the plurality of transducers are arranged has a spherical shape.
Abstract: An endoscopic image diagnosis support system (100) includes: a memory (10) that stores learning images pre-classified into pathological types; and a processor (20) that, given an endoscopic image, performs feature value matching between an image of an identification target region in the endoscopic image and the learning images, to identify the pathological types in the identification target region.