Abstract: An image processing apparatus includes: a region extracting unit that extracts a character region on an image; a character recognizing unit that recognizes characters in the character region extracted by the region extracting unit; a translating unit that translates a recognition result obtained by the character recognizing unit; and a changing unit that changes a constitution of the image with respect to the character region extracted by the region extracting unit according to a direction of the characters in the character region extracted by the region extracting unit, and according to a direction of the characters of the language translated by the translating unit.
Abstract: Provided are a method and apparatus for correcting quantized coefficients. In the method, statistical values of coefficients and quantized coefficients are extracted from a received video data stream, coefficient correction values for each pixel position in blocks are determined by using the statistical distribution of the coefficients depending on the statistical values, and then the coefficients are corrected by respectively adding the coefficient correction values to corresponding coefficients of respective pixel positions.
Type:
Grant
Filed:
July 15, 2008
Date of Patent:
June 12, 2012
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Maxim Koroteev, Alexander Alshin, Elena Alshina, Vadim Seregin, Ekaterina Nesterova
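The correction step described in the abstract above, adding a per-pixel-position value to the quantized coefficients of every block, can be sketched roughly as follows (a toy illustration only; the 8×8 block size, the NumPy layout, and the constant correction table are assumptions, and deriving the table from the coefficients' statistical distribution is not shown):

```python
import numpy as np

def correct_coefficients(blocks, correction):
    # blocks: (N, 8, 8) quantized coefficients for N blocks.
    # correction: (8, 8) per-pixel-position correction values which, in
    # the patent's scheme, would come from the statistical distribution
    # of the original coefficients.
    return blocks + correction[np.newaxis, :, :]

# Toy data: two all-zero blocks and a correction applied at the DC position.
blocks = np.zeros((2, 8, 8))
correction = np.zeros((8, 8))
correction[0, 0] = 0.5
corrected = correct_coefficients(blocks, correction)
```

The same correction table is broadcast across all blocks, matching the abstract's "respective pixel positions" wording.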
Abstract: An image processing apparatus for correcting an input image for blur using a recovery filter in accordance with image blurriness includes a blurriness determining unit that receives a target image to be corrected, applies to the target image a recovery filter including a degree-of-blur parameter corresponding to blurriness while changing a value of the degree-of-blur parameter, evaluates a degree of recovery of each of corrected target images which have been corrected with recovery filters having different degree-of-blur parameter values, and determines blurriness of the target image based on the degree-of-blur parameter value of the highly evaluated recovery filter; and a blur correction unit that sets a recovery filter for the target image based on the degree-of-blur parameter in accordance with the determined blurriness of the target image and corrects the target image for blur using the recovery filter.
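The parameter sweep in the abstract above, applying a recovery filter for each candidate degree-of-blur value, rating each corrected result, and keeping the best-rated parameter, can be sketched on a 1-D signal (assumptions not taken from the patent: an unsharp-mask stand-in for the recovery filter, gradient energy as the degree-of-recovery score, and the candidate values shown):

```python
import numpy as np

def gaussian_kernel(sigma):
    # Explicit 1-D Gaussian kernel, truncated at about 3 sigma.
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-(t ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def recover(signal, sigma):
    # Unsharp-mask "recovery filter" parameterized by a degree-of-blur value.
    blurred = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    return signal + (signal - blurred)

def sharpness(signal):
    # Stand-in evaluation of the degree of recovery: gradient energy.
    return float(np.sum(np.diff(signal) ** 2))

def estimate_blur(signal, candidates=(0.5, 1.0, 2.0, 4.0)):
    # Sweep the degree-of-blur parameter, correct with each recovery
    # filter, and keep the parameter whose corrected result rates best.
    return max(candidates, key=lambda s: sharpness(recover(signal, s)))
```

The chosen parameter would then configure the final recovery filter used by the blur correction unit.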
Abstract: An image filtering method, apparatus and system, wherein the method comprises the steps of: detecting at least a portion of an edge, wherein the portion of the edge provides an indication that ringing artifacts are probable; subjecting at least one pixel, related to the at least a portion of the edge, to a low pass filter to produce a filtered pixel; and blending the filtered pixel with its value prior to filtering to produce a filtered image.
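The detect/filter/blend pipeline above can be sketched on a single row of pixels (a minimal sketch, not the patented method; the gradient threshold, the 3-tap low-pass filter, the 2-pixel edge neighborhood, and the 50/50 blend weight are all assumptions):

```python
import numpy as np

def suppress_ringing(row, edge_thresh=50, blend=0.5):
    # Detect strong edges, low-pass filter the pixels near them (where
    # ringing is probable), and blend each filtered pixel with its
    # pre-filter value.
    row = row.astype(float)
    grad = np.abs(np.diff(row))
    near_edge = np.zeros(len(row), dtype=bool)
    for i, g in enumerate(grad):
        if g > edge_thresh:                  # strong edge detected
            lo, hi = max(0, i - 2), min(len(row), i + 3)
            near_edge[lo:hi] = True
    # 3-tap moving-average low-pass filter.
    smoothed = np.convolve(row, np.ones(3) / 3, mode="same")
    out = row.copy()
    out[near_edge] = blend * smoothed[near_edge] + (1 - blend) * row[near_edge]
    return out
```

Pixels away from any detected edge pass through untouched, which keeps flat regions and texture unaffected.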
Abstract: A method and system for superimposing a ruler on a visual representation of a surgical procedure is presented. The design includes providing a reference object while recording the visual representation of the surgical procedure, scaling the ruler to correspond to the reference object within the visual representation of the surgical procedure, and superimposing the ruler from the scaling on the visual representation for subsequent viewing.
Abstract: More accurate searches for similar cases can be carried out when images at different time phases exist. A contrast enhancement information analysis unit obtains time phase information of search target images, obtained at different time phases in the same examination, from accompanying information of the images, and a similar case database storing similar case information sets, each including an examination ID, time phase information, a characteristic quantity, and image interpretation/diagnosis support information, is searched in processing by a first similar case information search unit, a second similar case information search unit, and a judgment unit. A corresponding portion of the similar case information sets is obtained that satisfies three conditions: agreement of the time phase information with the search target images, agreement of examination among the similar case information sets, and similarity of a content-based characteristic to the search target images.
Abstract: Multi-spline image blending technique embodiments are presented which generally employ a separate low-resolution offset field for every image region being blended, rather than a single (piecewise smooth) offset field for all the regions to produce a visually consistent blended image. Each of the individual offset fields is smoothly varying, and so is represented using a low-dimensional spline. A resulting linear system can be rapidly solved because it involves many fewer variables than the number of pixels being blended.
Type:
Grant
Filed:
April 17, 2008
Date of Patent:
May 29, 2012
Assignee:
Microsoft Corporation
Inventors:
Richard Szeliski, Matthew T. Uyttendaele
Abstract: Embodiments of the present invention relate to systems, methods and computer storage media for associating a known geographic location with a known identity. Feature matching of at least two images is performed in at least two iterations. The iterations are based on an orientation of feature vectors associated with points of interest in each image. A geometric model is applied to the matched points of interest to improve the matched pairs. Two images are identified as being related. As a result, the known geographic location is associated with the known identity. Additional embodiments include augmenting feature vectors with a coordinate location of a related point of interest based on a geometric model. Further, an exemplary embodiment includes an additional matching iteration based on the augmented feature vectors. In an exemplary embodiment, the feature matching utilizes a Scale-Invariant Feature Transform (SIFT).
Type:
Grant
Filed:
June 4, 2009
Date of Patent:
May 29, 2012
Assignee:
Microsoft Corporation
Inventors:
Michael Kroepfl, Eyal Ofek, Yonatan Wexler, Donald Wysocki, Gur Kimchi
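The SIFT-style descriptor matching that the abstract above builds on can be sketched as generic nearest-neighbour matching with Lowe's ratio test (a sketch of the underlying matching primitive only; the patent's two-iteration, orientation-based scheme and its geometric-model refinement are not reproduced, and the ratio value is an assumption):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    # For each descriptor in desc_a, find its nearest and second-nearest
    # neighbours in desc_b; accept the match only when the nearest is
    # clearly closer than the second (the ratio test).
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy 2-D "descriptors": two points in image A, three in image B.
desc_a = np.array([[0.0, 0.0], [10.0, 10.0]])
desc_b = np.array([[0.1, 0.0], [10.0, 10.0], [50.0, 50.0]])
pairs = match_features(desc_a, desc_b)
```

Real SIFT descriptors are 128-dimensional, but the matching logic is identical.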
Abstract: An image processing device includes a detector for detecting a linear-interpolation-applicable area and an expansion corrector for performing a gradation expanding process on the linear-interpolation-applicable area detected by the detector. When the detector detects a linear-interpolation-applicable area, if the gradation values of the pixels preceding and following a pixel where a gradation change in a predetermined range is detected are the same as each other, then the detector judges the gradation change as being caused by noise or the like, and sets the gradation value of the pixel where the gradation change is detected to the gradation value of the pixels preceding and following it.
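The noise-judgment rule in the abstract above, an isolated gradation change whose neighbours agree is treated as noise and replaced by the neighbours' value, can be sketched on a 1-D run of pixel values (a minimal sketch under that reading of the abstract; the surrounding linear-interpolation detection is not shown):

```python
def suppress_isolated_change(pixels):
    # If the pixels preceding and following a gradation change are equal
    # to each other but not to the changed pixel, judge the change as
    # noise and replace it with the neighbours' value.
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        if pixels[i - 1] == pixels[i + 1] != pixels[i]:
            out[i] = pixels[i - 1]
    return out
```

A genuine gradation step, where the neighbours differ, is left untouched.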
Abstract: A plurality of items of shot image data obtained by temporally continuous shooting are analyzed. Marking data indicating that replaced graphic data is to be combined is added to image data corresponding to an actor, and the resulting data is displayed. When a preset gesture (motion) is detected, marking data indicating that replaced graphic data is to be combined is added to image data corresponding to another actor, and the resulting data is displayed. After shooting, the individual items of image data to which marking data have been added are replaced with the respective replaced graphic data. The replaced graphic data are created as moving images that capture the motions of the actors.
Abstract: An image processing apparatus includes a first edge extracting unit and a binarizing unit. The first edge extracting unit extracts an edge of chromatic components of a color image other than brightness components of the color image. The binarizing unit performs an enhancement process and a binarization process for pixels being extracted as the edge by the first edge extracting unit and performs the binarization process for pixels other than the pixels being extracted as the edge based on the brightness components.
Abstract: A computer-controlled system determines attributes of a frexel, which is an area of human skin, and applies a reflectance modifying agent (RMA) at the pixel level to automatically change the appearance of human features based on one or more digital images. The change may be based on a digital image of the same frexel, as seen in a prior digital photograph captured by the computer-controlled system. The system scans the frexel and uses feature recognition software to compare the person's current features in the frexel with that person's features in the digital image. It then calculates enhancements to make the current features appear more like the features in the digital image, and it applies the RMA to the frexel to accomplish the enhancements. Alternatively, the change may be based on a digital image of another person, through the application of RMAs.
Type:
Grant
Filed:
February 12, 2008
Date of Patent:
May 22, 2012
Assignee:
TCMS Transparent Beauty LLC
Inventors:
Albert D. Edgar, David C. Iglehart, Rick B Yeager
Abstract: A characteristic amount calculating means calculates first characteristic amounts, which do not require normalization, and normalized second characteristic amounts. A first discriminating portion discriminates whether a candidate for a face is included in the target image, by referring to first reference data with the first characteristic amounts calculated from the target image. The first reference data is obtained by learning the first characteristic amounts of a plurality of images which are known either to be of faces or not to be of faces. In the case that the candidate is included, a second discriminating portion discriminates whether the candidate is a face, by referring to second reference data, obtained by learning the second characteristic amounts of a plurality of images which are known either to be of faces or not to be of faces.
Abstract: Detectors capable of accurately detecting and tracking moving features such as faces within a video stream are sometimes too slow to be run in real time. The present invention rapidly scans video footage in real time and generates a series of preattentive triggers indicating the frames, and locations within the frames, that are deserving of further investigation by a sub-real-time detector. The triggers are generated by looking for peaks in a time-variant measure such as the amount of symmetry within a frame or portion of a frame.
Type:
Grant
Filed:
July 25, 2006
Date of Patent:
May 22, 2012
Assignee:
Hewlett-Packard Development Company, L.P.
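The trigger generation described in the abstract above, score each frame with a cheap time-variant measure such as symmetry and flag the peaks for the slower detector, can be sketched as follows (a minimal sketch; mirror-difference symmetry and simple local-maximum peak detection are assumed stand-ins):

```python
import numpy as np

def symmetry_score(frame):
    # Horizontal-symmetry measure of a 2-D frame: higher (closer to 0)
    # when the left half mirrors the right half.
    mirrored = frame[:, ::-1]
    return -float(np.mean((frame - mirrored) ** 2))

def triggers(scores):
    # Frame indices where the per-frame score peaks (strict local
    # maxima); these frames are handed to the sub-real-time detector.
    return [i for i in range(1, len(scores) - 1)
            if scores[i] > scores[i - 1] and scores[i] > scores[i + 1]]
```

A frontal face tends to raise the symmetry score, so peaks are cheap hints of where faces may appear.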
Abstract: There are provided: a pattern detection process section for extracting a partial image made of pixels including a target pixel from input image data; a rotated image generating section for generating a self-rotated image by rotating the partial image; and a matching test determination section for determining whether an image pattern included in the partial image matches an image pattern included in the self-rotated image. When it is determined that matching exists, a target pixel in the partial image or a block made of pixels including the target pixel is regarded as a feature point. Consequently, even when image data has been read while skewed with respect to a predetermined positioning angle of a reading position of an image reading apparatus, or image data has been subjected to enlarging, reducing, etc., a feature point properly specifying the image data can be extracted regardless of the skew, enlarging, reducing, etc.
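The self-rotated matching test above can be sketched with 90-degree rotations and exact equality as the match criterion (a simplification; the patent does not specify these rotation angles or an exact-equality test, and real implementations would use a tolerance):

```python
import numpy as np

def is_feature_point(patch, quarter_turns=(1, 2, 3)):
    # A partial image (patch) around the target pixel is regarded as a
    # feature point when its pattern matches its own rotated copies.
    return all(np.array_equal(patch, np.rot90(patch, k))
               for k in quarter_turns)
```

Because the patch is compared only with rotated copies of itself, the test gives the same answer whether or not the input page was skewed.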
Abstract: A monitoring station for monitoring distribution of media content, on basis of a watermark, comprises: receiving means for receiving an information signal representing the media content to which the watermark is added; extracting means for extracting perceptual features, identifying the information signal; first retrieving means for retrieving a supporting signal on basis of the perceptual features; second retrieving means for retrieving the watermark on basis of the supporting signal; and comparing means for comparing the watermark with predetermined information.
Abstract: There are provided: a pattern detection process section for extracting a partial image made of pixels including a target pixel from input image data; a displaced image generation section for generating a self-displaced image by displacing at least a part of the partial image through a predetermined method; and a matching test determination section for determining whether an image pattern included in the partial image matches an image pattern included in the self-displaced image or not. When the matching test determination section determines that the matching exists, a target pixel in the partial image or a block made of pixels including the target pixel is regarded as a feature point. Consequently, even when image data is subjected to a process such as enlarging and reducing, it is possible to extract a feature point that properly specifies the image data regardless of the enlarging/reducing process.
Abstract: A noise cancellation device for an image signal processing system includes a receiving end for receiving image signals, a 3D filtering unit for adjusting a filtering parameter according to a motion estimation value and filtering the image signals and a former filtering result for generating a current filtering result, a motion detection unit for comparing the former filtering result and the image signals so as to generate a current motion factor and the motion estimation value according to a former motion factor, a memory unit for receiving and storing the current filtering result and the current motion factor as the former filtering result and the former motion factor, and an output end for outputting the current filtering result provided by the 3D filtering unit.
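The feedback loop in the abstract above, blend each new frame with the former filtering result, with the blend weight adjusted by a motion estimate, can be sketched as a motion-adaptive temporal IIR filter (a minimal sketch; the per-pixel absolute difference as the motion estimate and the two alpha values are assumptions):

```python
def temporal_filter(frames, motion_thresh=10.0):
    # frames: list of 2-D grids (lists of rows of pixel values).
    prev = [list(row) for row in frames[0]]      # former filtering result
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                diff = abs(v - prev[y][x])       # crude motion estimate
                # Strong smoothing for static pixels, weak for moving
                # ones (to avoid motion trails).
                alpha = 0.25 if diff < motion_thresh else 0.9
                prev[y][x] = alpha * v + (1 - alpha) * prev[y][x]
    return prev
```

Storing the result back into `prev` plays the role of the memory unit that feeds the former filtering result into the next pass.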
Abstract: According to an embodiment of the present invention, a tracking method includes detecting a mobile unit within a space, tracking the detected mobile unit, making a position determination of the mobile unit to be tracked to obtain positional data, and making a movement prediction of the mobile unit, based on a high frequency component of positional data.
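The high-frequency component of positional data that the abstract's movement prediction is based on can be sketched as the residual after a moving-average low-pass filter (a minimal sketch; 1-D positions, the window size, and the moving-average filter are assumptions, and the prediction step itself is not shown):

```python
import numpy as np

def high_frequency_component(positions, window=5):
    # Split the tracked positions into a low-frequency part (moving
    # average) and the high-frequency residual used for prediction.
    kernel = np.ones(window) / window
    low = np.convolve(positions, kernel, mode="same")
    return positions - low
```

For a stationary or slowly drifting target the residual stays near zero, so only rapid movements contribute to the prediction.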
Abstract: A method for selecting and activating a particular menu displayed in a client region of a monitor screen by use of image recognition is disclosed. Using an image-capturing device such as a camera attached to a system, a user's image is recognized in real time and displayed on an initial screen of a monitor. The user makes a direct hand motion while viewing his own image displayed on the initial screen, and when a desired menu icon is designated among a variety of menu icons arrayed on the initial screen, the system guides the user's hand image to the corresponding menu icon for its selection. When the user makes a particular body motion to activate the selected menu, the system recognizes the motion, thereby activating the selected menu.