Abstract: According to one aspect, embodiments of the invention provide a system and method for utilizing the effort expended by a user in responding to a CAPTCHA request to automatically transcribe text from images in order to verify, retrieve and/or update geographic data associated with geographic locations at which the images were recorded.
Type:
Grant
Filed:
April 12, 2012
Date of Patent:
July 7, 2015
Assignee:
Google Inc.
Inventors:
Marco Zennaro, Luc Vincent, Kong Man Cheung, David Abraham
Abstract: A system and method for leadwire location may provide for recognition of the leadwire via analysis of a series of two-dimensional images that include representations of the leadwire, and/or determining an orientation of the leadwire with reference to a marker on the leadwire.
Type:
Grant
Filed:
March 27, 2012
Date of Patent:
June 23, 2015
Assignee:
Boston Scientific Neuromodulation Corporation
Inventors:
Troy Sparks, David Arthur Blum, Scott Kokones, Keith Carlton, Hemant Sharad Bokil
Abstract: An image verification device checks an input image, obtained by photographing an object to be verified, against a registered image database in which the feature amount of a photographed object is stored as a registered image, with registered images stored for a plurality of objects. The device has a verification score calculating unit that uses the feature amount of the input image and the feature amounts of the registered images to calculate a verification score representing the degree of approximation between each registered object and the object in the input image, and a relative evaluation score calculating unit.
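The two scoring units could be sketched as follows. The abstract does not name the similarity measure, so cosine similarity stands in for it, and the relative evaluation is assumed to compare each object's score against the best score among the other registered objects; all function names are hypothetical.

```python
import math

def verification_scores(input_feat, registered):
    """registered maps object id -> feature vector.

    Returns (verification scores, relative evaluation scores). Cosine
    similarity is an assumed stand-in for the unspecified measure.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    scores = {oid: cosine(input_feat, feat) for oid, feat in registered.items()}
    # Relative evaluation: margin over the best-scoring OTHER object.
    relative = {oid: s - max((v for k, v in scores.items() if k != oid),
                             default=0.0)
                for oid, s in scores.items()}
    return scores, relative
```

A large positive relative score suggests the input matches one registered object much better than any other, which is the kind of signal a relative evaluation unit can threshold.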
Abstract: A method, computer readable storage device, and apparatus for determining the distance between a computing device and a user's face. An image of an individual is obtained. A first pupil location and a second pupil location are identified based on the obtained image. A first distance between the identified first and second pupil locations is determined. A second distance, between the individual and the computing device, is determined based on the determined first distance between the pupil locations.
Type:
Grant
Filed:
February 15, 2013
Date of Patent:
May 26, 2015
Assignee:
Google Inc.
Inventors:
Richard Gossweiler, Gregory Sean Corrado
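The pupil-to-distance step could be sketched with a pinhole-camera model. The focal length and assumed real-world inter-pupil distance below are illustrative defaults, not values from the patent:

```python
def estimate_viewing_distance(pupil_a, pupil_b,
                              focal_length_px=600.0,
                              interpupillary_mm=63.0):
    """Estimate device-to-face distance from two pupil locations (x, y).

    focal_length_px is the camera focal length in pixels (device-specific);
    interpupillary_mm is an assumed average real-world inter-pupil distance.
    """
    dx = pupil_b[0] - pupil_a[0]
    dy = pupil_b[1] - pupil_a[1]
    ipd_px = (dx * dx + dy * dy) ** 0.5   # first distance, in pixels
    if ipd_px == 0:
        raise ValueError("pupil locations coincide")
    # Similar triangles: distance = focal_length * real_size / image_size
    return focal_length_px * interpupillary_mm / ipd_px
```

The farther the face, the smaller the pixel distance between the pupils, so the returned second distance grows as the first distance shrinks.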
Abstract: Provided is a tire defect detection method capable of accurately detecting a thinly extending convex defect on a tire surface. Before Step S1 begins, two-dimensional images including a slit light image are successively captured. In Step S1, the slit light image is extracted from the data of the captured two-dimensional images. In Step S2, the eccentricity component, i.e., the deviation resulting from eccentricity, is eliminated from the extracted slit light image. In Step S3, a feature quantity is calculated from the slit light image with the eccentricity component removed, and in Step S4, a thinly extending convex defect is detected based on the calculated feature quantity.
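Steps S2 through S4 could be sketched on a single height profile taken from the slit light image (Step S1 is assumed already done). The moving-average baseline as the eccentricity model, and the window, threshold, and width values, are all illustrative assumptions:

```python
import numpy as np

def detect_thin_convex_defect(profile, smooth_win=51,
                              height_thresh=0.3, max_width=5):
    """profile: 1-D array of surface heights along the tire circumference."""
    # Step S2: remove the eccentricity component, modeled here as the
    # low-frequency part of the profile (moving-average baseline).
    kernel = np.ones(smooth_win) / smooth_win
    residual = profile - np.convolve(profile, kernel, mode="same")
    # Step S3: feature quantity = residual height above the baseline.
    above = residual > height_thresh
    # Step S4: flag narrow (thin) runs of elevated points as convex defects.
    defects, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start <= max_width:
                defects.append((start, i))
            start = None
    return defects
```

Because the eccentricity varies slowly around the circumference while a thin convex defect is a sharp local bump, the high-pass residual separates the two cleanly.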
Abstract: In embodiments of optical flow accounting for image haze, digital images may include objects that are at least partially obscured by a haze that is visible in the digital images, and an estimate of light that is contributed by the haze in the digital images can be determined. The haze can be cleared from the digital images based on the estimate of the light that is contributed by the haze, and clearer digital images can be generated. An optical flow between the clearer digital images can then be computed, and the clearer digital images refined based on the optical flow to further clear the haze from the images in an iterative process to improve visibility of the objects in the digital images.
Type:
Grant
Filed:
March 11, 2013
Date of Patent:
May 12, 2015
Assignee:
Adobe Systems Incorporated
Inventors:
Hailin Jin, Zhuoyuan Chen, Zhe Lin, Scott D. Cohen
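The haze-light estimate and clearing step could be sketched as below. The abstract does not specify the estimator, so a simplified per-pixel dark-channel heuristic stands in for it (the patch-based minimum filter and the optical-flow refinement loop are omitted); `omega` and `t_min` are conventional illustrative parameters:

```python
import numpy as np

def estimate_airlight(image, percentile=99):
    # Estimate of the light contributed by the haze: mean color of the
    # haziest (brightest dark-channel) pixels.
    dark = image.min(axis=2)
    mask = dark >= np.percentile(dark, percentile)
    return image[mask].mean(axis=0)

def dehaze(image, omega=0.9, t_min=0.1):
    """One clearing pass, inverting the haze model I = J*t + A*(1 - t)."""
    A = estimate_airlight(image)
    transmission = 1.0 - omega * (image / A).min(axis=2)
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return (image - A) / t + A
```

In the iterative scheme the abstract describes, optical flow computed between such cleared images would feed back into a refined haze estimate; that loop is not shown here.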
Abstract: In a method and an apparatus for automatically generating an optimal 2-dimensional (2D) medical image from a 3D medical image, at least one virtual plane crossing a 3D volume is generated from 3D volume image data for showing part of a patient's body in a 3D manner, at least one 2D image representing a cross section of the part of the patient's body is generated by applying the 3D volume image data to the virtual plane, and a 2D image suitable for diagnosis of the patient having a feature most similar to a target feature from among the at least one 2D image is output.
Abstract: The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Material classification scores, describing the type of material content likely included in each frame, are then calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to the overall types of scene content of the video file.
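The pipeline could be sketched as follows. The sampling rate, darkness and blur thresholds, the gradient-variance blur check, and both model callbacks are illustrative assumptions, not details from the patent:

```python
import numpy as np

def classify_video(frames, material_model, scene_model,
                   sample_rate=10, darkness_thresh=20.0, blur_thresh=50.0):
    """frames: iterable of grayscale frames (2-D arrays).

    material_model(frame) -> grid of material classification scores.
    scene_model(arrangement_vector) -> scene classification score vector.
    """
    scene_scores = []
    for i, frame in enumerate(frames):
        if i % sample_rate:                       # extract a subset of frames
            continue
        if frame.mean() < darkness_thresh:        # discard too-dark frames
            continue
        # crude blur check: low variance of the horizontal gradient
        if np.var(np.diff(frame.astype(float), axis=1)) < blur_thresh:
            continue
        materials = material_model(frame)         # material classification scores
        arrangement = materials.ravel()           # spatial arrangement vector
        scene_scores.append(scene_model(arrangement))
    if not scene_scores:
        return None
    return np.mean(scene_scores, axis=0)          # average across the subset
```

The final averaged vector can then be thresholded or argmax-ed to pick the predefined scene categories for the whole file.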
Abstract: An object tracking device tracks a target object in a time-series image comprising a plurality of frames. A location information acquisition unit acquires location information of the target object, the tracked target, in a first frame. A detailed contour model generation unit generates a detailed contour model in the first frame on the basis of the location information, the model being formed from a plurality of contour points representing the contour of the target object. A search location setting unit sets a plurality of different search locations in a second frame, which is any one of the frames following the first frame.
Abstract: There is provided an image dynamic range compression system which can compress a dynamic range while ensuring the visibility of the low-frequency image and preserving the high-frequency image. An image converting unit converts an input image into a compressed image having a narrower dynamic range than the input image. A high-frequency image extracting unit extracts a high-frequency image from the input image. An image synthesizing unit synthesizes the compressed image and the high-frequency image. Further, the image synthesizing unit adaptively changes the synthesizing method so as to ensure the visibility of the images being synthesized.
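The three units could be sketched as below. Gamma compression stands in for the unspecified range-converting step, and a box blur stands in for the low-frequency split; `gamma`, `blur_win`, and `detail_gain` are illustrative:

```python
import numpy as np

def box_blur(img, win):
    # Edge-padded box filter: the low-frequency image.
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (win * win)

def compress_dynamic_range(image, gamma=0.5, blur_win=5, detail_gain=1.0):
    img = image.astype(float) / image.max()
    base = img ** gamma                        # image converting unit: narrower range
    high = img - box_blur(img, blur_win)       # high-frequency extracting unit
    # image synthesizing unit: detail added back onto the compressed base
    return np.clip(base + detail_gain * high, 0.0, 1.0)
```

Adapting `detail_gain` per region (e.g., lowering it where the base is near saturation) would correspond to the adaptive synthesizing method the abstract mentions.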
Abstract: A computer aided bone scan assessment system and method provide automated lesion detection and quantitative assessment of bone disease burden changes.
Abstract: A system including an image capturing unit configured to capture an image of at least one medical device monitoring a patient, a database including images of a plurality of medical devices, where each image corresponds to a particular medical device, and a data collection server configured to receive the at least one image, receive patient identification data corresponding to the patient, and identify the medical device in the image by comparing the received image with the images stored in the database and matching the received image with the images stored in the database.
Type:
Grant
Filed:
February 15, 2013
Date of Patent:
April 7, 2015
Assignee:
Covidien LP
Inventors:
David Fox, Robert T. Boyer, William A. Jordan, II
Abstract: An image processing apparatus includes a reduction unit and a compression unit. The reduction unit executes color-number reduction processing on each block of a plurality of pixels in a processing target image expressed by processing target image data: the number of colors expressed by the pixels in the block is reduced, generating reduced-color image data from the processing target image data, while the gradation number of each color value in the reduced-color image data remains the same as the gradation number of each color value in the processing target image data. The compression unit executes compression processing using the reduced-color image data to generate compressed image data.
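The per-block reduction step could be sketched with a small k-means-style quantizer, which is an assumed stand-in since the abstract does not name the reduction algorithm. Note the output keeps the block's original dtype, matching the requirement that the gradation number of each color value is unchanged:

```python
import numpy as np

def reduce_block_colors(block, n_colors=2):
    """Reduce one block (H x W x C) to at most n_colors distinct colors."""
    pixels = block.reshape(-1, block.shape[-1]).astype(float)
    rng = np.random.default_rng(0)
    centers = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(10):  # fixed-iteration k-means
        d = ((pixels[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_colors):
            if (labels == k).any():
                centers[k] = pixels[labels == k].mean(0)
    # Round back to the original gradation: 8-bit values stay 8-bit.
    return np.round(centers[labels]).astype(block.dtype).reshape(block.shape)
```

A block with few distinct colors compresses far better downstream (e.g., with run-length or entropy coding), which is the point of reducing colors before the compression unit runs.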
Abstract: A method for calculating a centreline of an object is disclosed. An image of the object is divided into test areas. For each test area, detection direction and scanning direction are assigned from a list of limited directions. For each test area, at each scanning point a local point of the centreline is determined along the detection direction. An assigned smoothing function is applied to the collection of local points to determine the collection of pixels which define the centreline. The collection of pixels can be used to calculate the length of the centreline. Also, the coordinates of the pixels of the centreline can be used to average the intensity of the image along the centreline.
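A minimal single-test-area sketch: scanning proceeds column by column (the scanning direction), the local centre point is found down each column (the detection direction) as the intensity-weighted centre, and the length is the summed distance between successive centre points. Smoothing is omitted for brevity:

```python
import math

def centreline_length(image):
    """image: 2-D list of intensities. Returns (centre points, length)."""
    points = []
    for x in range(len(image[0])):                # scanning direction
        col = [image[y][x] for y in range(len(image))]
        total = sum(col)
        if total == 0:
            continue                              # no object in this column
        # detection direction: intensity-weighted centre of the column
        y_c = sum(y * v for y, v in enumerate(col)) / total
        points.append((x, y_c))
    length = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        length += math.hypot(x1 - x0, y1 - y0)
    return points, length
```

Averaging `image[round(y_c)][x]` over the returned points would give the mean intensity along the centreline, as the abstract describes.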
Abstract: A computer-assisted method of classifying cytological samples, includes using a processor to analyze images of cytological samples and identify cytological objects of interest within the sample images, wherein the processor (i) displays images of identified cytological objects of interest from the sample images to a reviewer, (ii) accesses a database of images of previously classified cytological objects, and (iii) displays to the reviewer, interspersed with the displayed images of the identified objects of interest from the sample images, one or more images obtained from the database of images of previously-classified objects.
Abstract: An image processing device, method and program in which a feature point derivation unit derives a plurality of feature points in an input moving image. A tracking subject feature point setting unit sets, from the derived feature points, a feature point within a tracking subject. A background feature point setting unit sets, from the derived feature points, a group of background feature points that are not located within the tracking subject. A motion detection unit detects movement of the background feature points over time. When the motion detection unit detects movement of the background feature points, a clip area setting unit sets the size and position of a clip area of the image to be employed, which includes the feature point within the tracking subject, on the basis of the movement of the feature point within the tracking subject and the movement of the background feature points.
Abstract: The disclosure is directed to techniques for region-of-interest (ROI) processing for video telephony (VT) applications. According to the disclosed techniques, a recipient device defines ROI information for video information transmitted by a sender device, i.e., far-end video information. The recipient device transmits the ROI information to the sender device. Using the ROI information transmitted by the recipient device, the sender device applies preferential encoding to an ROI within a video scene. ROI extraction may be applied to process a user description of a region of interest (ROI) to generate information specifying the ROI based on the description. The user description may be textual, graphical, or speech-based. An extraction module applies appropriate processing to generate the ROI information from the user description. The extraction module may reside locally within a video communication device, or in a distinct intermediate server configured for ROI extraction.
Abstract: Provided is a carried item region extraction device for accurately extracting a carried item region from an image. This carried item region extraction device has: a string region processing unit for extracting a string region including a string of a carried item from image information; and a carried item region processing unit for extracting a carried item region including a carried item from the image information on the basis of the string region.
Abstract: A system and method include receiving data representing a sequence of X-ray images of a portion of patient anatomy acquired over a time interval and signal data representing electrical activity of the patient's heart over the time interval; determining a score value for each image of the sequence of X-ray images; selecting a set of images from the sequence based on the determined score values, the set of images excluding one or more images of the sequence; and generating an averaged image from the set of images.
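The selection-and-averaging steps could be sketched as below. The scoring rule is not specified in the abstract; the sketch assumes a lower score marks a better image (e.g., one acquired nearer a quiescent cardiac phase in the ECG signal), and `keep_fraction` is an illustrative parameter:

```python
def averaged_image(frames, scores, keep_fraction=0.5):
    """frames: list of equal-sized 2-D lists; scores: one score per frame
    (lower = better, an assumed convention). Keeps the best-scoring frames
    and averages them pixel-wise, excluding the rest."""
    order = sorted(range(len(frames)), key=lambda i: scores[i])
    keep = order[: max(1, int(len(frames) * keep_fraction))]
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for i in keep:
        for y in range(h):
            for x in range(w):
                out[y][x] += frames[i][y][x] / len(keep)
    return out
```

Averaging only phase-consistent frames suppresses noise without the motion blur that averaging the full sequence would introduce.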
Abstract: A computer implemented system for identifying license plates and faces in street-level images is disclosed. The system includes an object detector configured to determine a set of candidate objects in the image, a feature vector module configured to generate a set of feature vectors using the object detector to generate a feature vector for each candidate object in the set of candidate objects, a composite feature vector module to generate a set of composite feature vectors by combining each generated feature vector with a corresponding road or street description of the object in question, and an identifier module configured to identify objects of a particular type using a classifier that takes a set of composite feature vectors as input and returns a list of candidate objects that are classified as being of the particular type as output.