System And Method For Vertical Gradient Detection In Video Processing
Methods and systems for processing video information are disclosed herein and may comprise calculating at least one vertical gradient of a plurality of adjacent pixels within a current field. A two-field difference may be calculated between a plurality of pixels within a first field and a corresponding plurality of pixels within a second field. At least one pixel may be deinterlaced within the current field based at least in part on the calculated at least one vertical gradient and the calculated two-field difference. The two-field difference may indicate an amount of motion between the plurality of pixels within the first field and the corresponding plurality of pixels within the second field. Phase shifting may be applied to at least one of the plurality of pixels within the first field and the corresponding plurality of pixels within the second field to effect in-phase alignment.
This application is a Continuation Application of, and claims priority to, U.S. patent application entitled “System and Method for Vertical Gradient Detection in Video Processing,” filed on Nov. 10, 2005 and assigned Ser. No. 11/270,999.
This application is related to the following applications, each of which is hereby incorporated herein by reference in its entirety for all purposes:
- U.S. patent application Ser. No. 11/254,450, filed Oct. 20, 2005;
- U.S. patent application Ser. No. 11/254,262, filed Oct. 20, 2005;
- U.S. patent application Ser. No. 11/272,116, filed Nov. 10, 2005;
- U.S. patent application Ser. No. 11/272,113, filed Nov. 10, 2005;
- U.S. patent application Ser. No. 11/272,112, filed Nov. 10, 2005; and
- U.S. Provisional Patent Application Ser. No. 60/687,674, filed Jun. 6, 2005.
Certain embodiments of the invention relate to processing of video. More specifically, certain embodiments of the invention relate to a system and method for vertical gradient detection in video processing and deinterlacing based at least in part on the detected vertical gradient.
BACKGROUND OF THE INVENTION
During interlacing, pictures that form a video frame may be captured at two distinct time intervals. These pictures, which may be referred to as fields and which form the video frame, comprise a plurality of ordered lines. During one of the time intervals, video content for even-numbered lines may be captured, while at a subsequent time interval, video content for odd-numbered lines may be captured. The even-numbered lines may be collectively referred to as a top field, while the odd-numbered lines may be collectively referred to as a bottom field. On an interlaced display, the even-numbered lines may be presented for display on the even-numbered lines of a display during one time interval, while the odd-numbered lines may be presented for display on the odd-numbered lines of the display during a subsequent time interval.
With progressive displays, however, all of the lines of the display are displayed during a single time interval. During deinterlacing of interlaced video, a deinterlacing process may generate pictures for display during a single time interval. Deinterlacing by combining content from adjacent fields, which is known as weaving, may be suitable for regions of a picture that are characterized by little or no object motion or lighting change between fields, referred to as inter-field motion. Displaying both the top field and the bottom field during the same time interval may be problematic in cases where the video content comprises significant motion or significant lighting changes. Objects that are in motion are at one position when the top field is captured and at another position when the bottom field is captured. If the top field and the bottom field are displayed together, a comb-like, or jagged, edge effect may appear along the object. This is referred to as a weave artifact.
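As an illustration of the weaving operation described above, interleaving a top field and a bottom field into one progressive frame may be sketched as follows (a minimal Python sketch; the function name, the NumPy representation of fields, and the array layout are assumptions introduced here for illustration, not part of the disclosure):

```python
import numpy as np

def weave(top_field, bottom_field):
    """Interleave a top field (even-numbered lines) and a bottom field
    (odd-numbered lines) into a single progressive frame.
    Hypothetical helper: each field is a 2-D array of sample values."""
    top = np.asarray(top_field)
    bottom = np.asarray(bottom_field)
    assert top.shape == bottom.shape
    frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    frame[0::2] = top      # even-numbered lines come from the top field
    frame[1::2] = bottom   # odd-numbered lines come from the bottom field
    return frame
```

Weaving preserves full vertical detail, which is why it suits static regions; the comb-like artifact described above arises only when the two fields captured an object at different positions.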
Alternatively, deinterlacers may generate a picture for progressive display by interpolating missing lines in a field from adjacent and surrounding lines. This is known as spatial interpolation, or “bobbing”. While spatial interpolation avoids weave artifacts in regions with high inter-field motion, spatial interpolation loses vertical detail and may result in a blurry picture.
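Spatial interpolation of the missing lines can be sketched similarly (again a hypothetical Python sketch; a simple two-line average stands in for whatever interpolation filter a real deinterlacer would use, and the handling of the final line is an assumption):

```python
import numpy as np

def bob(field):
    """Spatially interpolate a single field up to full frame height.
    Each missing line is the average of the field lines above and below;
    the last missing line simply repeats the final field line."""
    field = np.asarray(field, dtype=np.float64)
    frame = np.empty((field.shape[0] * 2, field.shape[1]))
    frame[0::2] = field                           # existing field lines
    frame[1:-1:2] = 0.5 * (field[:-1] + field[1:])  # interpolated lines
    frame[-1] = field[-1]                         # repeat the bottom line
    return frame
```

Because each interpolated line is a low-pass combination of its neighbors, the result avoids weave artifacts in moving regions but loses the vertical detail noted above.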
Conventional methods for deinterlacing interlaced video may produce weave artifacts, for example, by incorrectly biasing deinterlacing decisions towards weaving when spatial interpolation may be more appropriate. Similarly, conventional deinterlacing methods may often bias deinterlacing decisions towards spatial interpolation when weaving may be a more appropriate method for deinterlacing. Furthermore, conventional deinterlacing methods may utilize a fixed mix of weaving and spatial interpolation, or “bobbing,” which may result in visible artifacts such as contouring artifacts.
Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
BRIEF SUMMARY OF THE INVENTION
A system and method for vertical gradient detection in video processing, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
Certain aspects of the invention may be found in a method and system for vertical gradient detection in video processing. In one embodiment of the invention, video information may be processed by calculating an estimate of a vertical gradient of a plurality of vertically adjacent pixels within a current field. Inter-field motion may be estimated utilizing a measurement of differences between a plurality of fields. For example, a two-field difference may be calculated between a plurality of pixels within the current field and a corresponding plurality of in-phase pixels within an adjacent field. At least one of the plurality of pixels within the current field and the corresponding plurality of pixels within the adjacent field may be phase shifted to effect in-phase alignment, prior to the calculation of the two-field difference. The plurality of pixels within the current field and the corresponding plurality of in-phase pixels within the adjacent field may be filtered, prior to the calculation of the two-field difference. A blend control value may be generated based at least in part on the calculated at least one vertical gradient and the calculated two-field difference. The generated blend control value may be limited to a particular range, and the limiting may be achieved via a look-up table, a scaling function, and/or a clipping function. A current output sample value may be generated, based on the blend control value.
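The generation and limiting of a blend control value might be sketched as follows (a hypothetical Python sketch: the product form, the gain, and the clipping range are illustrative assumptions; the text above notes that limiting could equally be achieved via a look-up table or a scaling function):

```python
def blend_control(vert_grad, two_field_diff, gain=1.0 / 255, lo=0.0, hi=1.0):
    """Combine a vertical-gradient estimate with a two-field difference into
    a blend control value, then clip it to [lo, hi]. The multiplicative
    combination and the gain are assumptions for illustration only."""
    raw = gain * vert_grad * two_field_diff  # high detail + high motion -> large value
    return max(lo, min(hi, raw))             # clipping function limits the range

def blend(spatial_sample, weave_sample, ctrl):
    """Mix a spatially interpolated sample with a woven sample
    according to the blend control value (0 = pure weave, 1 = pure bob)."""
    return ctrl * spatial_sample + (1.0 - ctrl) * weave_sample
```

With this shape, a region with no inter-field motion yields a control value of zero and the output sample reduces to the woven sample.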
In one embodiment of the invention, one or more measures of motion of video content may be calculated and combined to generate a combined measure of visible motion of pixels. The combined measure may be utilized to determine whether weaving, spatial interpolation, or a mixture thereof, may be used for deinterlacing. For example, the combined measure may comprise a calculated vertical gradient within a plurality of adjacent pixels in the current field. In addition, the combined measure may also comprise a measure of the motion between fields, such as the vertical motion between the corresponding objects 132 and 134 in the previous and current fields, respectively. The measure of motion between fields may be based on a difference between adjacent fields and may be calculated by utilizing a plurality of corresponding pixels within the adjacent fields.
Similarly, pixels within the previous columns 1 and 2 and the subsequent columns 4 and 5 within the current field 201 d may also be selected for calculation of the vertical gradient. For example, pixels 208db, . . . , 216db may be selected from the previous column 1, and pixels 208dc, . . . , 216dc may be selected from the previous column 2. Furthermore, pixels 208de, . . . , 216de may be selected from the subsequent column 4, and pixels 208df, . . . , 216df may be selected from the subsequent column 5.
Vertical gradient detection may be calculated utilizing, for example, a high pass filter in the vertical dimension. Furthermore, to increase sensitivity in the vertical direction and decrease sensitivity in the horizontal direction, vertical gradient detection may also utilize a low pass filter in the horizontal direction. For example, vertical gradient detection may use as few as two taps vertically, or it may use more taps. The high pass function may comprise a difference function, or it may comprise another type of function. If vertical gradient detection uses 4 taps vertically, the vertical taps may utilize the same coefficient magnitude, such as 1, or they could be weighted. The horizontal low pass function may comprise an average function, or it may comprise another type of function. Vertical gradient detection may also be used to detect presence and magnitude of a vertical gradient by utilizing an absolute value function, such as “abs( ),” for example.
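A minimal form of this detection, using two taps vertically and a horizontal average, might look like the following (a Python sketch under the filter choices named above; the two-row window layout is a hypothetical representation):

```python
import numpy as np

def vertical_gradient(window):
    """Estimate the vertical gradient magnitude over a 2-row window.
    Horizontal low-pass: average each row (decreases horizontal sensitivity).
    Vertical high-pass: two-tap difference between the row averages.
    abs() yields the magnitude of the gradient, as described in the text."""
    w = np.asarray(window, dtype=np.float64)
    row_means = w.mean(axis=1)          # horizontal averaging function
    return abs(row_means[0] - row_means[1])  # vertical difference function
```

More taps in either direction would sharpen the frequency selectivity, at the cost of a wider support region around the output sample.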
In one embodiment of the invention, a vertical gradient may be calculated utilizing all 20 samples illustrated in the corresponding figure.
In this regard, a weighted sum, such as sum_y_b, may be calculated for column 1, similarly to sum_y_d but utilizing samples (208db, . . . , 216db). Similarly, weighted sum sum_y_c may be calculated utilizing samples (208dc, . . . , 216dc) in column 2. Weighted sum sum_y_e may be calculated utilizing samples (208de, . . . , 216de) in column 4, and weighted sum sum_y_f may be calculated utilizing samples (208df, . . . , 216df) in column 5. A weighted sum of these five weighted sums may then be calculated utilizing coefficients (1, 2, 2, 2, 1) as sum_y_total=1*sum_y_b+2*sum_y_c+2*sum_y_d+2*sum_y_e+1*sum_y_f. The resulting weighted sum sum_y_total may represent a vertical gradient centered at the current output sample location 212d, measuring the component, for example luma, used in the foregoing calculations. In another embodiment of the invention, additional vertical gradient values may be calculated similarly measuring other components of the pixels, for example Cb or Cr. For example, sum_Cb_total may be calculated in a similar fashion to sum_y_total, however utilizing the Cb component of the respective pixels in the calculations. Similarly, sum_Cr_total may be calculated in a similar fashion to sum_y_total, however utilizing the Cr component of the respective pixels in the calculations.
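The column-wise weighted sums and their (1, 2, 2, 2, 1) combination can be sketched as follows (a Python sketch; the vertical high-pass taps are an illustrative assumption, since the text only suggests equal coefficient magnitudes for a 4-tap case, and the 4-by-5 sample layout is likewise assumed):

```python
import numpy as np

# Assumed 4-tap vertical high-pass with equal coefficient magnitudes
# (an illustrative choice, not specified exactly by the text), and the
# horizontal column weights (1, 2, 2, 2, 1) stated in the text.
V_TAPS = np.array([1.0, 1.0, -1.0, -1.0])
H_WEIGHTS = np.array([1.0, 2.0, 2.0, 2.0, 1.0])

def sum_y_total(samples):
    """samples: 4x5 array of luma values, 4 vertical samples in each of
    the 5 columns surrounding the current output sample location.
    Returns the combined weighted sum (sum_y_total in the text)."""
    s = np.asarray(samples, dtype=np.float64)
    column_sums = V_TAPS @ s        # one weighted sum per column (sum_y_b..sum_y_f)
    return float(H_WEIGHTS @ column_sums)
```

The same routine applied to the Cb or Cr components of the pixels would produce the sum_Cb_total and sum_Cr_total values mentioned above.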
In another embodiment of the invention, the absolute value of sum_y_total may be calculated. The resulting value may be referred to as vert_grad_y, for example, and may be utilized as an estimate of the vertical luma gradient in the vicinity of the current output sample location. The value of vert_grad_y may be utilized to estimate the presence of horizontal, or near-horizontal, edges in the content in the current field 201d. The value of vert_grad_y may also be utilized to control the calculation of a blending control as part of a de-interlacing method or system, in accordance with an embodiment of the invention.
Alternatively, the absolute values of sum_Cb_total and sum_Cr_total may also be calculated, and the resulting values may be referred to as vert_grad_Cb and vert_grad_Cr, respectively. The values of vert_grad_Cb and vert_grad_Cr may be utilized as estimates of the vertical Cb gradient and the vertical Cr gradient, respectively. A weighted sum of vert_grad_y, vert_grad_Cb and vert_grad_Cr, for example, may be calculated. The resulting sum may be referred to as vert_grad_total, for example. The value of vert_grad_total may be utilized to control the calculation of a blending control, for example, as part of a de-interlacing method or system, in accordance with an embodiment of the invention.
A vertical gradient may be indicative of horizontal or near horizontal details in the video content, and bad weave artifacts may result from applying weaving to such details if the content is moving between adjacent fields. A high vertical gradient value combined with an indication of motion between fields may result in utilizing spatial interpolation as a method for deinterlacing interlaced video fields from the current field and at least one adjacent field. Similarly, a low gradient value may be indicative of a lack of horizontal or near horizontal details such that visible bad weave artifacts are less likely to result from motion between fields and, therefore, weaving may be selected as a method for deinterlacing interlaced video fields from the current field 201d and at least one adjacent field.
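The selection logic described in this paragraph can be caricatured as a hard threshold test (a hypothetical Python sketch; the threshold values are invented for illustration, and as the blend-control discussion above suggests, a practical deinterlacer would mix the two methods continuously rather than switch abruptly):

```python
def choose_method(vert_grad_total, two_field_diff,
                  grad_thresh=32.0, motion_thresh=16.0):
    """Pick a deinterlacing method from a vertical-gradient estimate and a
    two-field difference. Thresholds are hypothetical illustration values."""
    if vert_grad_total > grad_thresh and two_field_diff > motion_thresh:
        # High vertical detail plus inter-field motion: weaving would
        # produce bad weave artifacts, so fall back to spatial interpolation.
        return "spatial_interpolation"
    # Low gradient or low motion: weaving preserves vertical detail safely.
    return "weave"
```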
The processor 302 may comprise suitable circuitry, logic, and/or code and may be adapted to control processing of video information by the video processing block 304, for example. The processor 302 may comprise a system or a host processor, or a video processor such as a video processing chip or embedded video processor core. The video processor may be integrated in any device that may be utilized to generate video signals and/or display video. The memory 308 may be adapted to store raw or processed video data, such as video data processed by the video processing block 304. Furthermore, the memory 308 may be utilized to store code that may be executed by the processor 302 in connection with video processing tasks performed by the video processing block 304. For example, the memory 308 may store code that may be utilized by the processor 302 and the video processing block 304 for calculating a vertical gradient within a current field, and utilizing the calculated vertical gradient during deinterlacing of interlaced video received from the video source 306.
The processor 302 may be adapted to calculate at least one vertical gradient within a plurality of vertically adjacent pixels within a current field. The plurality of pixels may be received from the video source 306. The processor 302 may deinterlace at least one pixel within the current field based at least in part on the calculated at least one vertical gradient.
Accordingly, aspects of the invention may be realized in hardware, software, firmware, or a combination thereof. The invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware, software and firmware may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
One embodiment of the present invention may be implemented as a board level product, as a single chip, as an application specific integrated circuit (ASIC), or with varying levels of integration on a single chip, with other portions of the system implemented as separate components. The degree of integration of the system may typically be determined primarily by speed and cost considerations. Because of the sophisticated nature of modern processors, it is possible to utilize a commercially available processor, which may be implemented external to an ASIC implementation of the present system. Alternatively, if the processor is available as an ASIC core or logic block, then the commercially available processor may be implemented as part of an ASIC device with various functions implemented as firmware.
The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context may mean, for example, any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form. However, other meanings of computer program within the understanding of those skilled in the art are also contemplated by the present invention.
While the invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiments disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
Claims
1. A method, comprising:
- calculating, in at least one computing device, a vertical gradient of a plurality of adjacent pixels within a first field of a video frame;
- calculating, in the at least one computing device, a two-field difference between a plurality of pixels within the first field and a plurality of corresponding pixels within a second field of the video frame; and
- deinterlacing, in the at least one computing device, at least a portion of the video frame based at least in part on the calculated vertical gradient and the calculated two-field difference.
2. The method of claim 1, further comprising filtering the adjacent pixels within the first field, prior to calculating the vertical gradient.
3. The method of claim 2, wherein filtering the adjacent pixels within the first field comprises applying a first filter in a vertical direction and a second filter in a horizontal direction.
4. The method of claim 3, wherein the first filter passes a higher frequency range than the second filter.
5. The method of claim 3, wherein the first filter comprises a difference function.
6. The method of claim 3, wherein the second filter comprises an averaging function.
7. The method of claim 1, wherein the calculated vertical gradient is based at least in part on a weighted sum of a plurality of the adjacent pixels.
8. The method of claim 1, further comprising generating a blend control value based at least in part on the calculated vertical gradient and the calculated two-field difference.
9. The method of claim 8, further comprising limiting the generated blend control value to a particular range.
10. A system, comprising:
- at least one processor configured to calculate a vertical gradient of a plurality of adjacent pixels within a current field of a video frame;
- the at least one processor configured to calculate a two-field difference between a plurality of pixels within a first field of the video frame and a plurality of corresponding pixels within a second field of the video frame; and
- the at least one processor configured to deinterlace at least a portion of the video frame based at least in part on the calculated vertical gradient and the calculated two-field difference.
11. The system of claim 10, wherein the first field is the current field.
12. The system of claim 10, wherein the at least one processor is configured to filter the adjacent pixels within the current field, prior to calculating the vertical gradient.
13. The system of claim 12, wherein the at least one processor is configured to filter the adjacent pixels by applying a first filter in a vertical direction and a second filter in a horizontal direction.
14. The system of claim 13, wherein the first filter passes a higher frequency range than the second filter.
15. The system of claim 13, wherein the first filter comprises a difference function.
16. The system of claim 13, wherein the second filter comprises an averaging function.
17. The system of claim 10, wherein the calculated vertical gradient is based at least in part on a weighted sum of a plurality of the adjacent pixels.
18. The system of claim 10, wherein the at least one processor is configured to generate a blend control value based at least in part on the calculated vertical gradient and the calculated two-field difference.
19. A method, comprising:
- calculating, in at least one computing device, a vertical gradient of a plurality of adjacent pixels within a first field of a video frame;
- calculating, in the at least one computing device, a two-field difference between a plurality of pixels within the first field and a plurality of corresponding pixels within a second field of the video frame; and
- selecting, in the at least one computing device, at least one of a plurality of video processing operations to perform on at least a portion of the video frame based at least in part on the calculated vertical gradient and the calculated two-field difference.
20. The method of claim 19, wherein at least one of the video processing operations comprises a deinterlacing operation.
21. The method of claim 19, wherein at least one of the video processing operations comprises a weaving deinterlacing operation.
22. The method of claim 19, wherein at least one of the video processing operations comprises a spatial interpolation deinterlacing operation.
Type: Application
Filed: Oct 23, 2012
Publication Date: Feb 21, 2013
Applicant: Broadcom Corporation (Irvine, CA)
Application Number: 13/658,131
International Classification: H04N 7/01 (20060101);