Memory management method for storing motion vectors of decoded macroblocks
A method for using memory to store motion vectors of decoded macroblocks as candidate predictors used in future motion vector decoding process. For a decoded first macroblock, the method allocates a first memory space and a second memory space in a first memory, and allocates a third memory space and a fourth memory space in a second memory for storing the motion vector(s) of the first macroblock. When allocating memory spaces in the first memory, the method considers a row of macroblocks in the video frame as a whole, and allocates a plurality of memory units that are sufficient for storing motion vectors of a row of macroblocks. During the process of decoding each row of macroblocks, the memory units of the first memory can be re-used to store motion vectors of decoded macroblocks.
1. Field of the Invention
The present invention provides a memory management method for storing motion vector(s) of decoded macroblocks, and more particularly, to a memory management method for storing motion vector(s) of decoded macroblocks for providing candidate predictors in future decoding processes.
2. Description of the Prior Art
The Moving Picture Experts Group (MPEG) was established in 1988 as a working group of the International Organization for Standardization (ISO). This working group has defined several versions of audio/video compression formats. MPEG-1 and MPEG-2 are two video compression standards widely used today, and these two standards have some common points. When performing video encoding/decoding under the MPEG standards, a 16*16-pixel macroblock (MB) is the basic unit for handling motion vectors (MVs). In the MPEG standards, a macroblock can have a single motion vector for the whole macroblock, and in this situation the macroblock is called a "large region". Alternatively, a macroblock can be composed of four 8*8 blocks, each block having its own motion vector. In this situation, each 8*8 block is called a "small region". Additionally, a macroblock can be composed of two fields, each field having its own motion vector. In this situation, each field is called a "field region".
A video frame (in MPEG-4 standard a video frame is also called a VOP, which stands for video object plane) can be a progressive frame or an interlaced frame. A progressive frame may be irregularly composed of the above-mentioned large regions and small regions.
When processing motion compensation (MC), motion vectors must be decoded. Taking the MPEG-4 standard as an example, P-VOP (predicted VOP) and S(GMC)-VOP (sprite global motion compensation VOP) are two kinds of video object planes that are encoded using motion vectors. To decode a motion vector of these video object planes, the horizontal and vertical motion vector components are decoded differentially using a prediction. The prediction is formed by median filtering of three vector candidate predictors from spatially neighboring macroblocks or blocks already decoded. In the following description, when a macroblock contains only one motion vector for the whole macroblock, it will be referred to as a type-1 macroblock; when a macroblock contains four blocks (a first block in the top-left corner, a second block in the top-right corner, a third block in the bottom-left corner, and a fourth block in the bottom-right corner) and four corresponding motion vectors, it will be referred to as a type-2 macroblock; and when a macroblock contains two fields (a first field and a second field) and two corresponding motion vectors, it will be referred to as a type-3 macroblock.
When the video frame being decoded is a progressive frame, it is possible that the decoding macroblock and the spatial neighborhood macroblocks for providing candidate predictors are all type-1 macroblocks. This situation is shown in
Px=Median (MV1x,MV2x,MV3x)
Py=Median (MV1y,MV2y,MV3y)
where Median is a function for determining a median. For example, when MV1=(−2,3), MV2=(1,5), MV3=(−1,7), Px and Py are −1 and 5, respectively.
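As a sketch of this prediction step (assuming integer motion vector components; the helper names are illustrative and not taken from the standard text), the component-wise median filtering of the three candidate predictors can be written as:

```python
def median3(a, b, c):
    """Return the median of three integers."""
    return sorted((a, b, c))[1]

def predict_mv(mv1, mv2, mv3):
    """Form the predictor (Px, Py) by component-wise median filtering
    of the three candidate predictors MV1, MV2, MV3."""
    px = median3(mv1[0], mv2[0], mv3[0])
    py = median3(mv1[1], mv2[1], mv3[1])
    return px, py

# The example above: MV1=(-2,3), MV2=(1,5), MV3=(-1,7)
print(predict_mv((-2, 3), (1, 5), (-1, 7)))  # -> (-1, 5)
```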
When the video frame being decoded is a progressive frame, it is also possible that the decoding macroblock and the spatial neighborhood macroblocks for providing candidate predictors are all type-2 macroblocks. These situations are shown in
Px=Median (MV1x,MV2x,MV3x);
Py=Median (MV1y,MV2y,MV3y).
When the video frame being decoded is an interlaced frame, it is possible that the decoding macroblock or the spatial neighborhood macroblocks for providing candidate predictors are type-3 macroblocks. These situations are as shown in
MV2x=Div2Round (MV2x_f1,MV2x_f2)
MV2y=Div2Round (MV2y_f1,MV2y_f2)
where Div2Round is an average-then-round function. For example, when MV2_f1=(1,2) and MV2_f2=(4,5), MV2x and MV2y are 3 and 4, respectively. After MV2 is calculated, Px and Py can be determined through the above-mentioned Median function. Please refer to
MVix=Div2Round (MVix_f1,MVix_f2)
MViy=Div2Round (MViy_f1,MViy_f2), where i={1, 2, 3}
The motion vector predictors Px and Py for both the first and second fields of macroblock X can then be determined through the above-mentioned Median function.
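A minimal sketch of this field-averaging step follows, assuming Div2Round rounds a half away from zero, which matches the example given here ((1+4)/2 becomes 3, (2+5)/2 becomes 4); the exact rounding rule for negative sums should be checked against the MPEG-4 specification:

```python
def div2round(a, b):
    """Average two field motion-vector components, rounding .5 away
    from zero. The behaviour for negative sums is an assumption."""
    s = a + b
    return (s + 1) // 2 if s >= 0 else -((-s + 1) // 2)

def field_mb_to_single_mv(mv_f1, mv_f2):
    """Convert the two field MVs of a type-3 macroblock into one MV
    usable as a candidate predictor in the Median function."""
    return div2round(mv_f1[0], mv_f2[0]), div2round(mv_f1[1], mv_f2[1])

print(field_mb_to_single_mv((1, 2), (4, 5)))  # -> (3, 4)
```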
Please refer to
Because it is not certain at design time which types macroblocks A, B, and C will be, all three possibilities must be considered when allocating memory space. That is, for each of macroblocks A, B, and C, the system should allocate one memory space sufficient for storing a single motion vector of a whole macroblock (for the case of a type-1 macroblock), four memory spaces each sufficient for storing the motion vector of a block (for the case of a type-2 macroblock), and two memory spaces each sufficient for storing the motion vector of a field (for the case of a type-3 macroblock). In other words, for each macroblock that provides candidate predictors, the system should allocate seven memory spaces, each sufficient for storing one motion vector.
The conventional memory allocating method consumes a lot of memory. When decoding motion vectors, systems of the prior art treat each video frame as a whole. In other words, taking a frame with 720*480 pixels as an example, when storing the motion vector(s) of each decoded macroblock (which will become macroblocks B and C for future decoding of macroblock X), the system must allocate (720/16)*(480/16)*7 memory spaces in a first memory. When storing the motion vector(s) of each decoded macroblock (which will become macroblock A for future decoding of macroblock X), the system must allocate seven memory spaces in a second memory. This method is costly and is not ideal for system implementation.
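The cost of the conventional scheme can be checked with a little arithmetic, using the frame and macroblock sizes from the text above:

```python
# Prior-art allocation for a 720*480 frame with 16*16 macroblocks,
# 7 motion-vector spaces per macroblock (1 + 4 + 2, covering all types).
mb_per_row = 720 // 16                              # 45 macroblocks
mb_per_col = 480 // 16                              # 30 macroblocks
first_mem_spaces = mb_per_row * mb_per_col * 7      # first memory
second_mem_spaces = 7                               # second memory
print(first_mem_spaces)  # -> 9450
```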
SUMMARY OF INVENTION
It is therefore an object of the invention to provide a memory management method for storing motion vector(s) of decoded macroblocks to solve the above-mentioned problem.
According to the embodiment, a memory management method used in the decoding process of a video frame is disclosed. The method is for storing motion vector(s) of a decoded first macroblock as candidate predictor(s) for future use in the decoding process, and includes the following steps: allocating a first memory space and a second memory space in a first memory, wherein each of the first and the second memory spaces is sufficient for storing one motion vector; and when the first macroblock has only one first motion vector, storing the first motion vector in the first or the second memory space.
The embodiment also discloses a memory management method used in the decoding process of a video frame. The method is for storing the motion vector(s) of a decoded first macroblock as candidate predictor(s) for use in decoding a next macroblock. The method includes: allocating a third memory space and a fourth memory space in a second memory, wherein each of the third and the fourth memory spaces is sufficient for storing one motion vector; and when the first macroblock has only one first motion vector, storing the first motion vector in the third or the fourth memory space.
Additionally, the embodiment also suggests a memory reuse method. When allocating memory space in the first memory, the embodiment considers each row of macroblocks as a whole: a plurality of memory units sufficient for storing the motion vectors of a row of macroblocks are allocated, and they are reused each time a new row is decoded. In this way, the embodiment saves considerable memory resources compared to the prior art.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
BRIEF DESCRIPTION OF DRAWINGS
Referring to
- 610: Allocate a first memory space and a second memory space in a first memory for a decoded first macroblock regardless of whether the first macroblock is a type-1, type-2, or type-3 macroblock. Each of the first memory space and the second memory space is sufficient for storing a motion vector.
- 620: Determine the type of the first macroblock. When the first macroblock is a type-1 macroblock, go to step 630; when the first macroblock is a type-2 macroblock, go to step 640; and when the first macroblock is a type-3 macroblock, go to step 650.
- 630: The first macroblock is a type-1 macroblock having a first motion vector. Store the first motion vector in the first or the second memory space. Although one memory space is enough for storing the first motion vector in this situation, it is also practicable to store the first motion vector in both the first and the second memory spaces.
- 640: The first macroblock is a type-2 macroblock having four blocks. Store the motion vector of the first macroblock's third block in the first memory space, and store the motion vector of the first macroblock's fourth block in the second memory space.
- 650: The first macroblock is a type-3 macroblock having a first field and a second field. Store the motion vector of the first macroblock's first field in the first memory space, and store the motion vector of the first macroblock's second field in the second memory space.
Specifically, this flowchart shows an embodiment of the present invention explaining how to allocate memory spaces and how to use them to store the motion vector(s) of the decoded first macroblock, considering the possibility that the motion vector(s) of the first macroblock will be used as candidate predictor(s) when macroblocks on the next row are decoded.
By using the flowchart shown in
Please note that the first memory can be implemented by a dynamic random access memory (DRAM), a static random access memory (SRAM), registers, or other devices capable of storing data.
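Steps 610 through 650 above can be sketched as follows; the macroblock representation and function names are illustrative assumptions, not part of the disclosed method itself:

```python
def store_in_first_memory(mb, unit):
    """Store the MV(s) of a decoded macroblock into the two memory
    spaces of a first-memory unit, per steps 610-650.
    `mb` is a dict with a 'type' key (1, 2 or 3) and its MV data;
    `unit` is a 2-element list: [first_space, second_space]."""
    if mb["type"] == 1:                  # step 630: one MV; storing it
        unit[0] = unit[1] = mb["mv"]     # in both spaces is practicable
    elif mb["type"] == 2:                # step 640: bottom-row blocks
        unit[0] = mb["block_mvs"][2]     # third block (bottom-left)
        unit[1] = mb["block_mvs"][3]     # fourth block (bottom-right)
    else:                                # step 650: two field MVs
        unit[0] = mb["field_mvs"][0]     # first field
        unit[1] = mb["field_mvs"][1]     # second field

unit = [None, None]
store_in_first_memory({"type": 1, "mv": (-2, 3)}, unit)
print(unit)  # -> [(-2, 3), (-2, 3)]
```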
Furthermore, referring to
- 710: Allocate a third memory space and a fourth memory space in a second memory for a decoded first macroblock regardless of whether the first macroblock is a type-1, type-2, or type-3 macroblock. Each of the third memory space and the fourth memory space is sufficient for storing a motion vector.
- 720: Determine the type of the first macroblock. When the first macroblock is a type-1 macroblock, go to step 730; when the first macroblock is a type-2 macroblock, go to step 740; and when the first macroblock is a type-3 macroblock, go to step 750.
- 730: The first macroblock is a type-1 macroblock having a first motion vector. Store the first motion vector in the third or the fourth memory space. Although one memory space is enough for storing the first motion vector under this situation, it is also practicable to store the first motion vector in both the third and the fourth memory spaces.
- 740: The first macroblock is a type-2 macroblock having four blocks. Store the motion vector of the first macroblock's second block in the third memory space, and store the motion vector of the first macroblock's fourth block in the fourth memory space.
- 750: The first macroblock is a type-3 macroblock having a first field and a second field. Store the motion vector of the first macroblock's first field in the third memory space, and store the motion vector of the first macroblock's second field in the fourth memory space.
Specifically, this flowchart shows an embodiment of the present invention explaining how to allocate memory spaces and how to use them to store the motion vector(s) of the decoded first macroblock, considering that the motion vector(s) of the first macroblock will be used as candidate predictor(s) when a next macroblock is decoded.
By using the flowchart shown in
Please note that the second memory can be implemented by a dynamic random access memory (DRAM), a static random access memory (SRAM), registers, or other devices capable of storing data. Additionally, the first memory and the second memory can be realized as two separate memory devices or a single memory device, as can be appreciated by people familiar with the related arts.
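Analogously, steps 710 through 750 can be sketched as follows (same illustrative macroblock representation as assumed above); note that for a type-2 macroblock the second memory keeps the right-column blocks rather than the bottom-row blocks:

```python
def store_in_second_memory(mb, unit):
    """Store the MV(s) of a decoded macroblock into the two memory
    spaces of the second-memory unit, per steps 710-750.
    `unit` is a 2-element list: [third_space, fourth_space]."""
    if mb["type"] == 1:                  # step 730: one MV
        unit[0] = unit[1] = mb["mv"]
    elif mb["type"] == 2:                # step 740: right-column blocks
        unit[0] = mb["block_mvs"][1]     # second block (top-right)
        unit[1] = mb["block_mvs"][3]     # fourth block (bottom-right)
    else:                                # step 750: two field MVs
        unit[0] = mb["field_mvs"][0]
        unit[1] = mb["field_mvs"][1]

u = [None, None]
store_in_second_memory({"type": 3, "field_mvs": [(1, 2), (4, 5)]}, u)
print(u)  # -> [(1, 2), (4, 5)]
```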
Aside from macroblocks located on the last row and the last column of each video frame (or VOP), every decoded macroblock will have to provide its motion vector(s) as candidate predictor(s) for decoding a next macroblock and for decoding macroblocks on a next row. Hence, for each of these macroblocks, both the flowchart shown in
In addition to the flowcharts shown in
- 810: Allocate N memory units in a first memory, and allocate an additional memory unit in a second memory. Each of the N memory units in the first memory and the additional memory unit in the second memory is sufficient for storing the motion vector(s) of a single macroblock. More specifically, the N memory units are used to store the motion vectors of a row of macroblocks, in case the stored motion vectors will be used as candidate predictors when macroblocks on a next row are decoded. Hence each of the N memory units could contain a first and a second memory space as described in FIG. 12. The additional memory unit is used to store the motion vector(s) of each decoded macroblock, in case those motion vector(s) will be used as candidate predictor(s) when a next macroblock is decoded. Hence the additional memory unit could contain a third and a fourth memory space as described in FIG. 13.
- 820: A macroblock at the Lth row and Kth column is decoded.
- 830: Is L>1? If yes, go to step 850, otherwise go to step 840.
- 840: The decoded macroblock is located at the first row of the video frame. Store its motion vector(s) in the Kth memory unit of the N memory units in the first memory, and also store them in the additional memory unit in the second memory, under the condition that each of the N memory units contains a first and a second memory space as described in FIG. 12 and the additional memory unit contains a third and a fourth memory space as described in FIG. 13. If the decoded macroblock is a type-1 macroblock, its motion vector could be stored in the first or the second memory space (or in both), and in the third or the fourth memory space (or in both). If the decoded macroblock is a type-2 macroblock, the motion vector of its second block could be stored in the third memory space, the motion vector of its third block in the first memory space, and the motion vector of its fourth block in the second and fourth memory spaces. If the decoded macroblock is a type-3 macroblock, the motion vector of its first field could be stored in the first and third memory spaces, and the motion vector of its second field in the second and fourth memory spaces. In this way, each time a macroblock is decoded, its motion vector(s) will always be stored in the additional memory unit in the second memory, overwriting the motion vector(s) of a previously decoded macroblock, so the additional memory unit in the second memory is reused once each time a macroblock is decoded. At this point, each of the 1st˜Kth memory units of the N memory units in the first memory stores the motion vector(s) of the 1st˜Kth macroblock of the 1st row, respectively; each of the (K+1)th˜Nth memory units is empty or stores the motion vectors of macroblocks of a previously decoded video frame (which will not be used as candidate predictors in decoding this video frame); and the additional memory unit in the second memory stores the motion vector(s) of the (K−1)th macroblock of the 1st row (when K>1), or the motion vector(s) of a macroblock of the previously decoded video frame (which will not be used as candidate predictors in decoding this video frame).
- 850: The decoded macroblock is located at an Lth row of the video frame (L>1). Store its motion vector(s) in the Kth memory unit of the N memory units in the first memory, and also store them in the additional memory unit in the second memory, again under the condition that each of the N memory units contains a first and a second memory space as described in FIG. 12 and the additional memory unit contains a third and a fourth memory space as described in FIG. 13. If the decoded macroblock is a type-1 macroblock, its motion vector could be stored in the first or the second memory space (or in both), and in the third or the fourth memory space (or in both). If the decoded macroblock is a type-2 macroblock, the motion vector of its second block could be stored in the third memory space, the motion vector of its third block in the first memory space, and the motion vector of its fourth block in the second and fourth memory spaces. If the decoded macroblock is a type-3 macroblock, the motion vector of its first field could be stored in the first and third memory spaces, and the motion vector of its second field in the second and fourth memory spaces. In this way, when L>1, the motion vector(s) of the decoded macroblock at the Lth row and Kth column will be stored in the memory unit originally storing the motion vector(s) of the previously decoded macroblock at the (L−1)th row and Kth column. That is, the motion vector(s) of the macroblock at the (L−1)th row and Kth column will be overwritten by the motion vector(s) of the macroblock at the Lth row and Kth column. In other words, the N memory units in the first memory are reused once each time a row of macroblocks is decoded. At this point, each of the 1st˜Kth memory units of the N memory units in the first memory stores the motion vector(s) of the 1st˜Kth macroblock of the Lth row, respectively; each of the (K+1)th˜Nth memory units stores the motion vector(s) of the (K+1)th˜Nth macroblock of the (L−1)th row, respectively; and the additional memory unit in the second memory stores the motion vector(s) of the (K−1)th macroblock of the Lth row (when K>1), or the motion vector(s) of the Nth macroblock of the (L−1)th row (which will not be used as a candidate predictor in decoding macroblocks at the Lth row).
- 860: If the decoding process is not finished, return to step 820.
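The row-based reuse of steps 810 through 860 can be sketched as a decoding loop; this is illustrative only, with the actual macroblock decoding stubbed out as a caller-supplied function:

```python
def decode_frame(rows, cols, decode_mb):
    """Row-based MV storage per steps 810-860. `decode_mb(l, k)` is a
    stand-in for the real decoder and returns the MV(s) of the
    macroblock at row l, column k."""
    first_mem = [None] * cols   # N memory units, reused every row
    second_mem = None           # one additional unit, reused every MB
    for l in range(rows):
        for k in range(cols):
            mv = decode_mb(l, k)
            first_mem[k] = mv   # overwrites row l-1's entry at column k
            second_mem = mv     # overwrites the previous macroblock's MV
    return first_mem, second_mem

# With a dummy decoder, the first memory ends up holding the last row:
first, last = decode_frame(2, 3, lambda l, k: (l, k))
print(first, last)  # -> [(1, 0), (1, 1), (1, 2)] (1, 2)
```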
Using the row-based memory reuse scheme provided by the present invention, at any time the system must store only the motion vectors of one row of macroblocks plus those of one previously decoded macroblock. Taking a video frame with 720*480 pixels as an example, with the method provided by the present invention the system has to allocate only 720/16 memory units (that is, (720/16)*2 memory spaces) in the first memory and one memory unit (that is, two memory spaces) in the second memory, each allocated memory space being sufficient for storing one motion vector. Compared with the prior art, the method provided by the present invention uses memory resources far more efficiently.
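The savings relative to the prior-art figures can be checked directly with the numbers from the two examples:

```python
# 720*480 frame, 16*16 macroblocks.
n = 720 // 16                                    # 45 macroblocks per row
prior_art = (720 // 16) * (480 // 16) * 7 + 7    # 9457 MV spaces in total
row_based = n * 2 + 2                            # 92 MV spaces in total
print(prior_art, row_based)  # -> 9457 92
```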
Next, please refer to
Please note that the system shown in
In contrast to the conventional system, a system employing the present invention uses less memory space, which means the memory is used more efficiently. Hence system resources are saved.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Claims
1. A memory management method used in the decoding process of a video frame, for storing motion vector(s) of a decoded first macroblock as candidate predictor(s) for future use in the decoding process, the method comprising:
- allocating a first memory space and a second memory space in a first memory, wherein each of the first and the second memory spaces is sufficient for storing one motion vector; and
- when the first macroblock comprises only one first motion vector, storing the first motion vector in the first or the second memory space.
2. The method of claim 1, further comprising:
- when the first macroblock comprises a first block, a second block, a third block, and a fourth block, storing the motion vector of the third block in the first memory space and storing the motion vector of the fourth block in the second memory space.
3. The method of claim 1, wherein the video frame is a progressive frame.
4. The method of claim 1, wherein the video frame is an interlaced frame.
5. The method of claim 4, further comprising:
- when the first macroblock comprises a first field and a second field, storing the motion vector of the first field in the first memory space and storing the motion vector of the second field in the second memory space.
6. The method of claim 1, wherein the first memory is a DRAM, an SRAM, or registers.
7. A memory management method used in the decoding process of a video frame, for storing the motion vector(s) of a decoded first macroblock as candidate predictor(s) for use in decoding a next macroblock, the method comprising:
- allocating a third memory space and a fourth memory space in a second memory, wherein each of the third and the fourth memory spaces is sufficient for storing one motion vector; and
- when the first macroblock comprises only one first motion vector, storing the first motion vector in the third or the fourth memory space.
8. The method of claim 7, further comprising:
- when the first macroblock comprises a first block, a second block, a third block, and a fourth block, storing the motion vector of the third block in the third memory space and storing the motion vector of the fourth block in the fourth memory space.
9. The method of claim 7, wherein the video frame is a progressive frame.
10. The method of claim 7, wherein the video frame is an interlaced frame.
11. The method of claim 10, further comprising:
- when the first macroblock comprises a first field and a second field, storing the motion vector of the first field in the third memory space and storing the motion vector of the second field in the fourth memory space.
12. The method of claim 7, wherein the second memory comprises registers, a DRAM, or an SRAM.
13. A row-based memory management method used in the decoding process of a video frame, for storing the motion vectors of a plurality of decoded macroblocks as candidate predictors for use in the decoding process, wherein each row of the video frame comprises N macroblocks, the method comprising:
- allocating N memory units in a first memory, wherein each memory unit is sufficient for storing the motion vector(s) of one macroblock;
- when a first macroblock located at an Lth row and a Kth column is decoded, storing the motion vector(s) of the first macroblock in a Kth memory unit of the memory units to overwrite the motion vector(s) of a second macroblock previously stored in the Kth memory unit, wherein the second macroblock is located at an (L−1)th row and the Kth column, K is an integer between 1 and N, and L is an integer larger than 1.
14. The method of claim 13, wherein the video frame is a progressive frame.
15. The method of claim 13, wherein the video frame is an interlaced frame.
16. The method of claim 13, wherein the first memory comprises a DRAM, an SRAM, or registers.
17. The method of claim 13, further comprising:
- allocating an additional memory unit in a second memory, wherein the additional memory unit is capable of storing the motion vector(s) of one macroblock;
- when a third macroblock of the video frame is decoded, storing the motion vector(s) of the third macroblock in the additional memory unit to overwrite the motion vector(s) of a fourth macroblock previously stored in the additional memory unit, wherein the fourth macroblock is decoded immediately before the third macroblock.
18. The method of claim 17, wherein the second memory comprises registers, a DRAM, or an SRAM.
Type: Application
Filed: Jul 30, 2004
Publication Date: Apr 28, 2005
Inventors: Hui-Hua Kuo (Tai-Nan City), Gong-Sheng Lin (Tai-Chung City)
Application Number: 10/710,722