Abstract: A method of analysing image data to quantify prior block-based processing comprises processing a set of pixel values derived from the image data to generate a spatial difference profile along a line perpendicular to assumed block edges, the spatial difference profile representing differences between values of pixels spaced spatially in a direction parallel to said line; summing the spatial difference profile in a direction perpendicular to that line; measuring inter-maxima distances in the spatial difference profile to a fractional precision in pixel spacing units; and aggregating measured inter-maxima distances to determine a block size.
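The profile-and-peaks pipeline described above can be sketched in a few lines of Python. This is an illustrative reading only: the parabolic sub-pixel refinement, the median aggregation and all function names are assumptions, not details taken from the method.

```python
def column_difference_profile(image):
    """Sum |horizontal pixel differences| down each column of a 2-D image
    (differences parallel to the line, summed perpendicular to it)."""
    width = len(image[0])
    profile = [0.0] * (width - 1)
    for row in image:
        for x in range(width - 1):
            profile[x] += abs(row[x + 1] - row[x])
    return profile

def subpixel_peaks(profile):
    """Local maxima refined to fractional precision by parabolic interpolation."""
    peaks = []
    for x in range(1, len(profile) - 1):
        a, b, c = profile[x - 1], profile[x], profile[x + 1]
        if b > a and b >= c:
            denom = a - 2 * b + c
            offset = 0.5 * (a - c) / denom if denom else 0.0
            peaks.append(x + offset)
    return peaks

def estimate_block_size(image):
    """Aggregate inter-peak distances (here: their median) into a block size."""
    peaks = subpixel_peaks(column_difference_profile(image))
    gaps = sorted(p2 - p1 for p1, p2 in zip(peaks, peaks[1:]))
    return gaps[len(gaps) // 2] if gaps else None

# Synthetic test image: a brightness step every 8 pixels, mimicking
# block-based coding artefacts with vertical block edges.
image = [[(x // 8) * 10 for x in range(64)] for _ in range(16)]
print(estimate_block_size(image))  # expected: 8.0
```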
Abstract: The present invention relates to a method of and apparatus for image analysis, and in particular may relate to the detection of cross-fades in film or video sequences. The invention relates in particular to a method of analysing an image of a sequence of images to determine a cross-fade measure based on determined temporal picture information transitions associated with picture elements of the image. In particular, the cross-fade measure may be determined based on the extent to which the temporal picture information transitions are uniform. The method and apparatus of the invention can provide a measure of the likelihood of a cross-fade in a single pass, and the described method can be performed in real time or close to real time. The cross-fade detection results are comparable with, or better than, those achieved by prior art methods.
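One single-pass way to score the uniformity of temporal transitions is sketched below. The specific statistic (ratio of the magnitude of the mean difference to the mean absolute difference) is an assumption chosen for illustration, not the patented measure.

```python
def cross_fade_measure(prev, curr):
    """Ratio of |mean pixel difference| to mean |pixel difference|:
    near 1.0 when every pixel moves the same way, as in a cross-fade,
    and near 0.0 when differences have mixed signs, as across a cut."""
    diffs = [c - p for prow, crow in zip(prev, curr)
                   for p, c in zip(prow, crow)]
    mean_abs = sum(abs(d) for d in diffs) / len(diffs)
    if mean_abs == 0:
        return 0.0
    return abs(sum(diffs) / len(diffs)) / mean_abs

fade_prev = [[100, 120], [140, 160]]
fade_curr = [[110, 130], [150, 170]]   # every pixel brightens by 10
print(cross_fade_measure(fade_prev, fade_curr))  # expected: 1.0
```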
Abstract: To detect the presence of the two constituent images of a stereoscopic image within an image frame, a row of vertically-averaged pixels is derived from the upper half of the frame and compared with a second row of vertically-averaged pixels derived from the lower half of the frame. Similarly, a column of horizontally-averaged pixels is derived from the left half of the frame and compared with a column of horizontally-averaged pixels derived from the right half of the frame.
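The half-frame comparisons above can be sketched as follows. The mean-absolute-difference similarity, the threshold and the function names are illustrative assumptions; the actual comparison used by the method is not specified here.

```python
def averaged_row(image, y0, y1):
    """Vertically average rows y0..y1-1 into a single row of pixels."""
    width = len(image[0])
    return [sum(image[y][x] for y in range(y0, y1)) / (y1 - y0)
            for x in range(width)]

def averaged_column(image, x0, x1):
    """Horizontally average columns x0..x1-1 into a single column."""
    return [sum(row[x0:x1]) / (x1 - x0) for row in image]

def similarity(a, b):
    """Mean absolute difference; smaller means more similar."""
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def detect_stereo_packing(image, threshold=1.0):
    """Compare the two frame halves both ways and report the packing."""
    h, w = len(image), len(image[0])
    tb = similarity(averaged_row(image, 0, h // 2),
                    averaged_row(image, h // 2, h))
    lr = similarity(averaged_column(image, 0, w // 2),
                    averaged_column(image, w // 2, w))
    if tb < threshold and tb <= lr:
        return "top-and-bottom"
    if lr < threshold:
        return "side-by-side"
    return "none"

# A side-by-side frame: the right half repeats the left half.
frame = [[(x % 8) * 5 + y for x in range(16)] for y in range(8)]
print(detect_stereo_packing(frame))  # expected: side-by-side
```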
Abstract: In the creation of video program material, picture composition decisions—especially framing parameters—taken by an operator for small-display presentation of a scene are used in the automatic generation of picture composition decisions for a larger display. Framing parameters, such as pan, are temporally filtered before being applied to the wider field of view provided for the larger display.
Abstract: A motion estimation apparatus has a spatial sub-sampler to receive input images; at least one motion estimator for determining motion vectors between input images and sub-sampled motion vectors between sub-sampled images; an up-sampler for up-sampling the sub-sampled motion vectors; and a selector for providing a motion vector output by selecting between the motion vectors and the up-sampled sub-sampled motion vectors, according to motion vector confidence.
Abstract: A method for repairing scratch impairments in which brightness values for a set of points within a narrow region of the impaired input image that is aligned with an expected scratch direction are modified in a scratch repair process; and the scratch repair process is controlled in dependence upon the relationship between the impaired brightness values and corresponding modified brightness values.
Abstract: When mixing or cutting between video cameras viewing a common scene from different viewpoints, geometric transforms that vary from image to image are applied to one or both camera outputs so as to create an apparent point of view that moves along a path joining the viewpoints of the cameras.
Abstract: A method of identifying the left-eye and the right-eye images of a stereoscopic pair, comprising the steps of comparing the images to locate an occluded region visible in only one of the images; detecting image edges; and identifying a right-eye image where more image edges are aligned with a left-hand edge of an occluded region and identifying a left-eye image where more image edges are aligned with a right-hand edge of an occluded region.
Abstract: To assess picture impairment due to the interpolation of output images from input images, for example in standards conversion, an output image detail measure is compared with an input image detail measure. One of the image detail measures is obtained by interpolation between image detail measures for at least two images.
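A minimal sketch of this comparison, under stated assumptions: the detail measure here is simply the mean absolute horizontal gradient, and the two input measures are combined by linear interpolation at the output frame's temporal phase. Neither choice is taken from the method itself.

```python
def detail_measure(image):
    """Mean absolute horizontal pixel difference: a simple detail proxy."""
    total, count = 0.0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
            count += 1
    return total / count

def impairment(input_a, input_b, output, phase):
    """Compare output detail with detail interpolated between two inputs.

    phase: temporal position of the output frame between the inputs (0..1).
    Returns the detail lost relative to the interpolated input measure.
    """
    expected = ((1 - phase) * detail_measure(input_a)
                + phase * detail_measure(input_b))
    return expected - detail_measure(output)

# Inputs with detail 2.0 and 4.0; an output midway between them with
# detail 2.5 has lost 0.5 relative to the interpolated measure of 3.0.
a = [[0, 2, 0, 2, 0]]
b = [[0, 4, 0, 4, 0]]
out = [[0, 2.5, 0, 2.5, 0]]
print(impairment(a, b, out, 0.5))  # expected: 0.5
```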
Abstract: A video noise reducer divides a signal into spatial frequency bands and derives both recursively and non-recursively filtered signals for each band. Both signals are processed non-linearly. These signals are combined in ways that vary between the bands to provide a noise signal and a detail signal. A clean video signal with all noise removed is used in the recursive loop. The output signal includes detail enhancement and may have a subjectively pleasant amount of noise added back.
Abstract: First image data at a lower sampling frequency is up-sampled in a sampling ratio N:M to a higher sampling frequency in an up-sampling filter; and, second image data at the said higher sampling frequency is down-sampled in a sampling ratio M:N to the said lower sampling frequency in a down-sampling filter where the combination of the up-sampling filter and the down-sampling filter is substantially transparent and every filtered sample is formed from a weighted sum of at least two input samples.
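A hedged sketch of the up-sampling side only, for a 2:3 ratio: each output sample is a weighted sum of two neighbouring input samples (degenerating to one sample at integer phases). Linear-interpolation weights are used purely for illustration; the actual filter design that makes the up/down pair transparent is not specified here.

```python
def upsample(samples, n=2, m=3):
    """Resample from rate n to rate m: output k sits at input phase k*n/m,
    and is formed as a weighted sum of the two bracketing input samples."""
    out = []
    for k in range(len(samples) * m // n - 1):
        pos = k * n / m
        i = int(pos)
        frac = pos - i
        out.append((1 - frac) * samples[i] + frac * samples[i + 1])
    return out

# A linear ramp survives linear-weight resampling: [0, 3, 6, 9] at rate 2
# becomes an evenly spaced ramp at rate 3.
print([round(v, 6) for v in upsample([0, 3, 6, 9])])
```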
Abstract: For monitoring an image transformation such as aspect ratio conversion, an image feature is defined by identifying a position in the image having a local spatial maximum value and then identifying four other positions in the image having local spatial minimum values, such that the four minimum-value positions surround the position of the maximum, a first pair of the minimums lying on a first line passing through the maximum and a second pair of the minimums lying on a second line passing through the maximum.
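The feature test above can be sketched as follows, assuming the two lines are simply the image row and column through the candidate maximum. The search strategy and all names are illustrative.

```python
def is_feature(image, y, x):
    """True if (y, x) is a local maximum flanked, on its row and on its
    column, by a local minimum on each of the four sides."""
    def is_local_max(yy, xx):
        v = image[yy][xx]
        return all(v > image[yy + dy][xx + dx]
                   for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)))

    def has_min_along(dy, dx):
        """Walk from (y, x) in direction (dy, dx) looking for a local min."""
        yy, xx = y + dy, x + dx
        while 0 < yy < len(image) - 1 and 0 < xx < len(image[0]) - 1:
            v = image[yy][xx]
            if v < image[yy - dy][xx - dx] and v <= image[yy + dy][xx + dx]:
                return True
            yy, xx = yy + dy, xx + dx
        return False

    if not is_local_max(y, x):
        return False
    return all(has_min_along(dy, dx)
               for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)))

# A peak at (3, 3) whose row and column each dip to a minimum on both sides.
img = [[3] * 7 for _ in range(7)]
profile = [5, 1, 4, 10, 4, 1, 5]
for i in range(7):
    img[3][i] = profile[i]
    img[i][3] = profile[i]
print(is_feature(img, 3, 3))  # expected: True
```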
Abstract: A video processing apparatus can receive video at two different rates, which may be High Definition video and Standard Definition video. The input video is stored and read into a processor at a fixed internal rate for processing at that rate. Processed video is output to a further store from which it can be read at either of the input rates.
Type: Grant
Filed: March 29, 2005
Date of Patent: March 8, 2011
Assignee: Snell Limited
Inventors: Edward Palgrave-Moore, Keith Steward Hammond
Abstract: A method of correcting dirt or other defects in video or other images in which a region is provisionally corrected, an accumulated gradient measure formed along the periphery of the region with and without correction and the region corrected or not depending on a comparison of the gradient measures.
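A sketch of the accept/reject test described above, under explicit assumptions: the region is a rectangle, the "gradient measure" is the summed absolute difference across the region boundary, and the provisional correction is a hypothetical in-fill supplied by the caller.

```python
def boundary_gradient(image, y0, y1, x0, x1):
    """Sum |difference| between each periphery pixel of the rectangle
    [y0:y1, x0:x1] and its neighbour just outside the rectangle."""
    total = 0
    for x in range(x0, x1):
        total += abs(image[y0][x] - image[y0 - 1][x])   # top edge
        total += abs(image[y1 - 1][x] - image[y1][x])   # bottom edge
    for y in range(y0, y1):
        total += abs(image[y][x0] - image[y][x0 - 1])   # left edge
        total += abs(image[y][x1 - 1] - image[y][x1])   # right edge
    return total

def accept_correction(original, corrected, y0, y1, x0, x1):
    """Keep the provisional correction only if it lowers the periphery
    gradient, i.e. the filled region blends better with its surround."""
    return (boundary_gradient(corrected, y0, y1, x0, x1) <
            boundary_gradient(original, y0, y1, x0, x1))

# A bright dirt blob on a flat background: the in-fill removes the step
# at the region boundary, so the correction is accepted.
original = [[50] * 6 for _ in range(6)]
original[2][2] = original[2][3] = 200        # dirt blob
corrected = [[50] * 6 for _ in range(6)]     # provisional in-fill
print(accept_correction(original, corrected, 2, 3, 2, 4))  # expected: True
```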
Abstract: The invention relates to the analysis of characteristics of audio and/or video signals for the generation of audio-visual content signatures. To determine an audio signature, a region of interest, for example of high entropy, is identified in audio signature data. This region of interest is then provided as an audio signature with offset information. A video signature is also provided.
Abstract: A method of composite decoding in which the input signal is converted into the frequency domain, and the symmetry of frequency components with respect to the subcarrier frequency is compared. The comparison is varied in dependence upon the frequency being processed. In this way, the separation can be adapted to suit known characteristics of different portions of the input spectrum. This is particularly useful for processing NTSC signals. The allocation of a particular component to chrominance may be biased in dependence upon a measure of the luminance information of the composite signal at a corresponding spatial frequency.
Abstract: Systems and methods to detect non-uniform spatial scaling of an image in the horizontal direction (for example in 4:3 to 16:9 aspect ratio conversion).
Abstract: To determine a regional shot-change parameter for an image identified as a whole as a shot-change image, a difference is taken between each pixel in that image and the spatially equivalent pixel in an adjacent image in the sequence. A pixel is flagged as a shot-change pixel when the differences for that pixel and for three or more of its spatially adjacent pixels exceed a threshold. If a pixel is spatially isolated from other shot-change pixels, it is not regarded as a shot-change pixel.
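The pixel-flagging rule can be sketched in pure Python as below. The 8-neighbour definition of "spatially adjacent", the "three or more" count and the isolation test are illustrative readings of the abstract.

```python
def shot_change_pixels(prev, curr, threshold):
    """Flag shot-change pixels: over-threshold difference at the pixel and
    at three or more neighbours, and not isolated from other flagged pixels."""
    h, w = len(curr), len(curr[0])
    over = [[abs(curr[y][x] - prev[y][x]) > threshold
             for x in range(w)] for y in range(h)]

    def neighbours(y, x):
        return [(y + dy, x + dx)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]

    # A pixel and at least three of its neighbours must exceed the threshold.
    flagged = [[over[y][x] and
                sum(over[ny][nx] for ny, nx in neighbours(y, x)) >= 3
                for x in range(w)] for y in range(h)]

    # Drop flagged pixels that are isolated from other flagged pixels.
    return [[flagged[y][x] and
             any(flagged[ny][nx] for ny, nx in neighbours(y, x))
             for x in range(w)] for y in range(h)]

# A 3x3 changed region survives; a lone changed pixel is rejected.
prev = [[0] * 6 for _ in range(6)]
curr = [row[:] for row in prev]
for y in range(1, 4):
    for x in range(1, 4):
        curr[y][x] = 100          # a genuinely changed region
curr[5][0] = 100                  # an isolated changed pixel
result = shot_change_pixels(prev, curr, threshold=50)
print(sum(map(sum, result)))      # expected: 9
```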
Abstract: Video data is segmented by representing the pixel location, RGB values and other features such as motion vectors, as points in a multidimensional segmentation space. Initialized segments are represented as locations in the segmentation space and segment membership then determined by the distance in segmentation space from the data point representing the pixel to the location of the segment. The distance measure takes into consideration the covariance of the data, for the segment or for the picture.
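A minimal sketch of segment assignment in a joint feature space. Here a diagonal covariance (per-feature variance) stands in for the full covariance the method may use, and the feature layout (x, y, r, g, b) is an assumption for illustration.

```python
def assign_segment(point, centres, variances):
    """Return the index of the segment centre nearest to `point`, where
    each squared feature difference is normalised by that feature's
    variance (a diagonal-covariance distance)."""
    def distance2(centre):
        return sum((p - c) ** 2 / v
                   for p, c, v in zip(point, centre, variances))
    return min(range(len(centres)), key=lambda i: distance2(centres[i]))

# Two segment centres in (x, y, r, g, b) space. Colour variance is small,
# so colour dominates the assignment even when position is ambiguous.
centres = [(10, 10, 200, 0, 0), (12, 10, 0, 0, 200)]
variances = (100.0, 100.0, 10.0, 10.0, 10.0)
print(assign_segment((11, 10, 190, 5, 5), centres, variances))  # expected: 0
```

Normalising by variance means a feature that varies a lot within a segment (position, above) counts for less than one that is tightly clustered (colour), which is the effect of taking the data's covariance into consideration.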
Abstract: A method of routing audio or video data. A plurality of source data inputs to input modules are divided into groups and main crosspoint modules receive one group from every input module, and destination data outputs from output modules are divided into groups and each output module receives one group from every main crosspoint module. Input modules send a duplicate of one selected group to a redundant crosspoint module and output modules receive a group from a redundant crosspoint module and can use that group in place of any group from a main crosspoint module.