Method and Apparatus to Perform Optimal Visually-Weighted Quantization of Time-Varying Visual Sequences in Transform Space

Pure transform-based technologies, such as the DCT or wavelets, can leverage a mathematical model based on one or a few parameters to generate the expected distribution of the transform components' energy, and to generate ideal entropy removal configuration data continuously responsive to changes in video behavior. Construction of successive-refinement streams is supported by this technology, permitting response to changing channel conditions. Lossless compression is also supported by this process. The embodiment described herein uses a video correlation model to develop optimal entropy removal tables and an optimal transmission sequence based on a combination of descriptive characteristics of the video source, enabling independent derivation of said optimal entropy removal tables and optimal transmission sequence on both the encoder and decoder sides of the compression and playback process.

Description
PARENT CASE TEXT

This application claims the benefit of prior-filed U.S. provisional application Ser. No. 61/818,419, filed May 1, 2013.

REFERENCES

ISO/IEC 15444-1:2000

Information technology—JPEG 2000 image coding system—Part 1: Core coding system

US Patent Documents

U.S. Pat. No. 6,239,811 Westwater

Method and apparatus to measure relative visibility of time-varying data in transform space

U.S. Pat. No. 8,422,546

Adaptive video encoding using a perceptual model

U.S. Pat. No. 8,416,104

Method and apparatus for entropy decoding

U.S. Pat. No. 8,406,546

Adaptive entropy coding for images and videos using set partitioning in generalized hierarchical trees

U.S. Pat. No. 7,899,263

Method and apparatus for processing analytical-form compression noise in images with known statistics

U.S. Pat. No. 7,788,106

Entropy coding with compact codebooks

U.S. Pat. No. 7,085,425

Embedded DCT-based still image coding algorithm

FEDERALLY SPONSORED RESEARCH

Not Applicable.

BACKGROUND

1. Field of Invention

The present invention relates generally to compression of moving video data, and more particularly to the application of quantization of the three-dimensional Discrete Cosine Transform (DCT) representation of moving video data for the purposes of removing visually redundant information.

2. Description of Prior Art

It is well established in the literature of the field of video compression that video can be well-modeled as a stationary Markov-1 process. This statistical model predicts the video behavior quite well, with measured correlations over 0.9 in the pixel and line directions.
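
By way of illustration only, the pixel-direction correlation that parameterizes such a Markov-1 model can be estimated directly from the source data. The following C sketch (the helper name lag1_correlation is illustrative, not part of any specification) computes the lag-1 sample correlation along one line of pixels; values above 0.9, as reported in the literature, indicate a good Markov-1 fit:

    /* Lag-1 sample correlation of a line of w luminance values.
       Values near 1.0 indicate the strong pixel-direction correlation
       assumed by the stationary Markov-1 video model. */
    double lag1_correlation(const unsigned char *line, int w)
    {
        double mean = 0.0, num = 0.0, den = 0.0;
        for (int i = 0; i < w; i++) mean += line[i];
        mean /= w;
        for (int i = 0; i < w; i++) {
            double d = line[i] - mean;
            den += d * d;
            if (i + 1 < w) num += d * (line[i + 1] - mean);
        }
        return den > 0.0 ? num / den : 0.0;
    }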

It is well known that the Karhunen-Loeve Transform (KLT) perfectly decorrelates Markov-distributed video. This means the basis of the KLT is an independent set of vectors which encode the pixel values of the video sequence.

It is a further result that many discrete transforms well approximate the KLT for large correlation values. Perhaps the best-known such transform is the DCT, although many others (the DST, WHT, etc.) serve as reasonable approximations to the KLT.

It is for this reason that the DCT is used to decorrelate images in the JPEG standard, after which a uniform quantization factor, individually chosen for each DCT component, is applied to said component, removing visual information imperceptible to the human eye. FIG. 1 illustrates the use of a Human Visual System quantizer array in JPEG. An individual frame of digitized video 010 is transformed via a two-dimensional DCT 020 and then quantized 030 to remove imperceptible visual data. An entropy removal process 040 actually compresses the information. The decompression process follows an equivalent set of steps in reverse: a data set or data stream containing the compressed data 210 is decompressed 110 by reversing said entropy removal process, followed by a de-quantization step 120 and an inverse DCT step 130, after which the resulting frame 140 may be displayed or otherwise processed. A key part of the process is a good choice of quantizers 310 that leverage a Human Visual Model to optimally remove redundant information. The use of a Human Vision Model in terms of a Contrast Sensitivity Function to generate two-dimensional quantizer coefficients is taught by Hwang, et al., and by Watson, U.S. Pat. No. 5,629,780.
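
The per-component uniform quantization named above reduces to one division and one rounding per coefficient. A minimal C sketch of quantization step 030 and de-quantization step 120 for one 8x8 block follows (function names are illustrative; this is the generic JPEG-style operation, not code from any standard):

    #include <math.h>

    /* Quantize one 8x8 block of DCT coefficients by a per-component
       Human Visual System quantizer array Q, then reconstruct. */
    void quantize(const double dct[64], const double Q[64], int q[64])
    {
        for (int k = 0; k < 64; k++)
            q[k] = (int)lround(dct[k] / Q[k]);   /* step 030 */
    }

    void dequantize(const int q[64], const double Q[64], double dct[64])
    {
        for (int k = 0; k < 64; k++)
            dct[k] = q[k] * Q[k];                /* step 120 */
    }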

FIG. 2 illustrates the use of the DCT in the prior-art MPEG standard. A block-based difference after motion estimation 2015 is taken between reference frame(s) 2010 and an individual frame to be compressed 2005. Said block-based difference after motion estimation 2015 is transformed using the two-dimensional DCT 2020 and quantized 2030. The resulting quantized data is compressed via an entropy removal process 2040, resulting in a compressed data set or stream 2210. A decompression process can then be executed on said compressed data set or stream 2210, comprising the reverse entropy removal step 2110, a de-quantizing step 2120, an inverse two-dimensional DCT process 2130, and a block-based summation process 2135 using a previously-decompressed reference frame 2140 to generate an individual frame ready for playback or other processing 2145. The pre-defined fixed quantizer 2310 utilized in said quantization process 2030 and said de-quantization process 2120 cannot leverage the Human Vision Model, as no such model has been developed to apply directly to the difference between video blocks.

What is needed is a means of removing subjectively redundant video information from a moving sequence of video.

Many prior-art techniques are taught under the principle of guiding the design of a quantization matrix to provide optimum visual quality for a given bitrate. These techniques, being applicable to motion compensation-based compression algorithms, require a Human Visual Model-driven feedback loop to converge on the quantizers that will show minimal artifact on reconstruction. The use of this Human Visual Model is again limited to its application in the spatial domain. An example of this teaching is U.S. Pat. No. 8,326,067 by Furbeck, as illustrated in FIG. 3. A block-based difference after motion estimation 015 is taken between reference frame(s) 010 and an individual frame to be compressed 005. Said block-based difference after motion estimation 015 is transformed using the two-dimensional DCT 020 and quantized 030. The resulting quantized data is compressed via an entropy removal process 040, resulting in a compressed data set or stream 210. A decompression process can then be executed on said compressed data set or stream 210, comprising the reverse entropy removal step 110, a de-quantizing step 120, an inverse two-dimensional DCT process 130, and a block-based summation process 135 using a previously-decompressed reference frame 140 to generate an individual frame ready for playback or other processing 145. The quantizer 310 utilized in said quantization process 030 and said de-quantization process 120 cannot directly leverage the Human Vision Model, as no such model has been developed to apply directly to the difference between video blocks. Therefore a feedback processing step 240 communicates with a Human Visual Model 250, which determines the perceptual error and feeds back recalculated quantizers 310 to be used to re-compress said individual frame to be compressed 005. Said feedback processing step 240 may be based on simple perceptual error minimization, or may minimize compression ratio after entropy removal.

The wavelet transform is another technique commonly used to perform compression. However, the wavelet does not decorrelate video, and thus optimal quantizers based upon a Human Visual Model cannot be calculated. A teaching by Gu et al., U.S. Pat. No. 7,006,568, attempts to address this issue by segmenting video sequences into similar-characteristic segments and calculating 2-D quantizers for each selected segment, chosen to reduce perceptual error in each subband, as illustrated in FIG. 4. A frame to be compressed 005 is decomposed into its subbands via wavelet decomposition 020 and quantized 030. The resulting quantized data is compressed via an entropy removal process 040, resulting in a compressed data set or stream 210. A decompression process can then be executed on said compressed data set or stream 210, comprising the reverse entropy removal step 110, a de-quantizing step 120, and a subband reconstruction process 130 to generate an individual frame ready for playback or other processing 140. The quantizer 330 utilized in said quantization process 030 and said de-quantization process 120 cannot directly leverage the Human Vision Model, as no such model has been developed to apply directly to the poorly-decorrelated video basis of the wavelet decomposition. This prior-art teaching subdivides the video stream into regions of relatively stable visual performance bounded by scene changes, as calculated by a scene analysis process 310 acting upon said frame to be compressed 005 and its previous frame in the motion video sequence 010. A visually-weighted analysis process 320 then calculates said quantizers 330.

The current invention improves the compression process by directly calculating the visually optimal quantizers for 3-D transform vectors by evaluating the basis behavior of the decorrelated transform space under a time-varying Human Visual Model, as represented by a Contrast Sensitivity Function.

SUMMARY OF INVENTION

In accordance with one aspect of the invention, a method is provided for the removal of all subjectively redundant visual information by means of calculating optimal visually-weighted quantizers corresponding to the decorrelating-transformed block decomposition of a sequence of video images. The contrast sensitivity of the human eye to the actual time-varying transform-domain frequency of each transform component is calculated, and the resolution of the transformed data is reduced according to the calculated sensitivity.

A second aspect of the invention applies specifically to use of the DCT as the decorrelating transform.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts a prior-art compressor featuring an optimal spatial transform and optimal fixed visual quantizers (JPEG).

FIG. 2 depicts a prior-art compressor featuring a sub-optimal time-varying transform using sub-optimal fixed or in-band quantizers (MPEG).

FIG. 3 depicts a prior-art compressor featuring a sub-optimal time-varying transform and a recursive feedback quantizer calculation to generate in-band quantizers.

FIG. 4 depicts a prior-art compressor featuring a sub-optimal time-varying transform using sub-optimal fixed or in-band quantizers (wavelet).

FIG. 5 depicts a compression system featuring an optimal time-varying transform using configuration parameters to independently generate visually optimal quantizers in compressor and decompressor.

FIG. 6 describes a typical set of configuration parameters that may be used to generate visually optimal time-varying quantizers.

FIG. 7 defines a typical time-varying contrast sensitivity function.

FIG. 8 defines a visually optimal quantizer in terms of visual resolution and the contrast sensitivity function specified in FIG. 7.

FIG. 9 refines the visually optimal quantizer definition of FIG. 8 with angular data specifications.

FIG. 10 refines the visually optimal quantizer definition of FIG. 8 with off-axis visual sensitivity human visual system adjustments.

FIG. 11 depicts a typical symmetric contrast sensitivity function (without angular or off-axis corrections).

FIG. 12 depicts typical contrast sensitivity function off-axis visual sensitivity human visual system adjustments.

FIG. 13 depicts typical eccentric-angle visual sensitivity human visual system adjustments.

FIG. 14 depicts the location of DC, mixed DC/AC, and AC components within a 3-dimensional DCT block.

FIG. 15 illustrates the calculation of DC component quantizers, and the contributed DC and AC quantizers of a mixed DC/AC component.

FIG. 16 illustrates the calculation of a statistically ideal mixed DC/AC quantizer.

FIG. 17 illustrates the application of a configurable Gibbs ringing compensation factor.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

As illustrated in FIG. 5, a block comprising a plurality of individual frames of digitized video 010 is transformed via a three-dimensional DCT 020 and then quantized 030 to remove imperceptible visual data. An entropy removal process 040 actually compresses the information. The decompression process follows an equivalent set of steps in reverse: a data set or data stream containing the compressed data 210 is decompressed 110 by reversing said entropy removal process, followed by a de-quantization step 120 and an inverse DCT step 130, after which the resulting block of frames 140 may be displayed or otherwise processed. Said quantizer process 030 and said de-quantizer process 120 use quantizers 420 generated by a quantizer generation process 410. Said quantizer generation process 410 calculates said quantizers 420 as a function of four sets of configuration data: the conditions under which viewing is expected to take place, and under which visual reconstruction will have no perceptual error 310; the configuration of the video stream 320; the quantizer generation algorithm to be used 330; and the configuration of the applied decorrelating transform 340.

In the current embodiment, said configuration of the video stream 320 is elaborated in FIG. 6. Said configuration of the video stream 010 is comprised of individual configuration items: H 020, the number of pixels per line within the frame; V 030, the number of lines within the frame; R 040, the frame rate in frames per second; B 050, the number of bits used to represent the luminance value per pixel; and Aspect 060, the physical aspect ratio, or ratio of physical frame width to physical frame height.

In the current embodiment, said configuration of viewing conditions 310 is elaborated in FIG. 6. Said configuration of viewing conditions 110 is comprised of individual configuration items: D 120, the expected viewing distance in screen heights; and I 130, the expected average ambient luminance.

In the current embodiment, said configuration of the block-based decorrelating transform 340 is elaborated in FIG. 6. Said configuration of the block-based decorrelating transform 210 is comprised of individual configuration items: N 220, the number of pixels per transform block; M 230, the number of lines per transform block; L 240, the number of frames per transform block; Nindex 250, the position of the transform block in the line direction; and Mindex 260, the position of the transform block in the pixel direction.

In the current embodiment, said configuration of the quantizer algorithm 330 is elaborated in FIG. 6. Said configuration of the quantizer algorithm 310 is comprised of individual configuration items: the visual loss factor; Mx 320, the mixed DC/AC coefficient algorithm; Rx, Ry and Rz 330, the correlations in the pixel, line and frame directions respectively; and dBG 340, the Gibbs ringing compensation.
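
Gathered into C structures, the four configuration sets of FIG. 6 might be represented as follows (a sketch; the structure and field names are illustrative, only the symbols themselves appear in the specification):

    /* Configuration data of FIG. 6. */
    struct VideoStreamConfig {
        int    H, V;    /* pixels per line, lines per frame */
        double R;       /* frame rate, frames per second */
        int    B;       /* bits per luminance value */
        double Aspect;  /* physical width / physical height */
    };
    struct ViewingConditions {
        double D;       /* viewing distance, in screen heights */
        double I;       /* average ambient luminance */
    };
    struct TransformConfig {
        int N, M, L;           /* pixels, lines, frames per block */
        int Nindex, Mindex;    /* block position indices */
    };
    struct QuantizerAlgorithm {
        double visual_loss;    /* visual loss factor */
        int    Mx;             /* mixed DC/AC coefficient algorithm */
        double Rx, Ry, Rz;     /* correlations: pixel, line, frame */
        double dBG;            /* Gibbs ringing compensation */
    };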

FIG. 7 defines a typical contrast sensitivity function 010, CSF(u,w,I,X0,Xmax), in terms of said viewing conditions configuration item expected average ambient luminance I 040 (item 130 of FIG. 6), and additional variables: u 020, the two-dimensional spatial frequency; w 030, the temporal frequency; X0 050, the angle subtended by the DCT block; and Xmax 060, the angle subtended by the display surface.

Luminance quantizers are calculated as in FIG. 8(a). The equation 010 calculates the quantizer Q 020 for a particular decorrelating transform component of index n 030 in the pixel direction, index m 040 in the line direction, and index l 050 in the frame or time direction, at transform block position Mindex 060 in the pixel direction and Nindex 070 in the line direction, given said two-dimensional spatial frequency u (020), said temporal frequency w (030), said viewing conditions configuration item expected average ambient luminance I (040), said angle subtended by the DCT block X0 (050), and said angle subtended by the display surface Xmax (060).
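
The equation of FIG. 8(a) is not reproduced in this text. As a sketch only, assuming the common convention that the quantizer step size grows inversely with contrast sensitivity and is scaled by the visual loss factor (both assumptions, not the figure's stated form):

    /* Contrast sensitivity function as specified in FIG. 7. */
    double CSF(double u, double w, double I, double X0, double Xmax);

    /* Quantizer for one transform component, assuming the step size is
       inversely proportional to the eye's sensitivity at the component's
       spatiotemporal frequency, scaled by the visual loss factor. */
    double quantizer(double u, double w, double I, double X0,
                     double Xmax, double visual_loss)
    {
        double s = CSF(u, w, I, X0, Xmax);
        return visual_loss / (s > 0.0 ? s : 1e-9); /* guard tiny s */
    }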

The equation 110 of FIG. 8(b) calculates said temporal frequency of a transform component w (030) as a function of said video stream configuration item frame rate in frames per second R (040), said block-based decorrelating transform configuration item number of frames per transform block L (240), and said decorrelating transform component index in the frame or time direction l (050).
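
FIG. 8(b) itself is likewise not reproduced; the standard DCT identity, consistent with this description, is that frame-direction component l of an L-frame block sampled at R frames per second has temporal frequency w = l * R / (2 * L) cycles per second. A one-line C sketch (assumed, not quoted from the figure):

    /* Temporal frequency in cycles per second of the l-th DCT
       component in the frame direction: w = l * R / (2 * L). */
    double temporal_frequency(int l, double R, int L)
    {
        return (double)l * R / (2.0 * L);
    }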

The equation 010 of FIG. 9(a) depicts a typical definition of said angle subtended by the display surface Xmax (060) in terms of said viewing conditions configuration item D, the expected viewing distance in screen heights (120). The equation 020 of FIG. 9(b) depicts a typical definition of said angle subtended by the DCT block X0 (050) in terms of said block-based decorrelating transform configuration items N, the number of pixels per transform block (220), and M, the number of lines per transform block (230).
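
The geometry below is an assumption offered for concreteness, not the equations of FIG. 9; it also borrows the frame line count V from the video stream configuration, which FIG. 9(b) itself does not name:

    #include <math.h>

    /* Angle in degrees subtended by the display surface when one
       screen height is viewed from a distance of D screen heights
       (assumed geometry for FIG. 9(a)). */
    double angle_display(double D)
    {
        return 2.0 * atan(1.0 / (2.0 * D)) * 180.0 / acos(-1.0);
    }

    /* Angle subtended by one transform block: one plausible reading
       (an assumption) scales the display angle by the block's share
       of the frame, using block lines M and frame lines V. */
    double angle_block(double Xmax, int M, int V)
    {
        return Xmax * (double)M / (double)V;
    }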

Equation 010 of FIG. 10 depicts a preferred process for calculating said two-dimensional spatial frequency u (020), given said decorrelating transform component index in the pixel direction n (030), said index in the line direction m (040), said transform block position in the pixel direction Mindex (060), and said transform block position in the line direction Nindex (070). A human visual system orientation response adjustment is re 020. A human visual system ex-foveal eccentricity response adjustment is re 030.

The two-dimensional map of values assumed by said typical contrast sensitivity function CSF(u,w,I,X0,Xmax) (010), with all orientations equally weighted, is depicted in FIG. 11. The contour map of FIG. 12(a) further illustrates the symmetric distribution of said typical contrast sensitivity function CSF(u,w,I,X0,Xmax) (010), while the contour map of FIG. 12(b) illustrates the application of said human visual system orientation response adjustment re (020) to better model human visual orientation response. The contour map of FIG. 13 illustrates the application of said human visual system ex-foveal eccentricity response adjustment re (030) to better model human visual off-axis response.

As illustrated in FIG. 14, said block 010 transformed via the three-dimensional DCT (020) is comprised of a plurality of transform components. The transform component (n=0, m=0, l=0) 020 is classified as pure DC. Transform components with n=0 030, with m=0 040, or with l=0 050 are classified as mixed AC/DC. Components in which no index (l,m,n) is 0 are classified as pure AC components.

Said quantizer Q (020) gives an optimal response for pure AC transform components, but produces sub-optimal results for pure DC or mixed AC/DC components, due to the extreme sensitivity of the human eye to DC levels. Pure DC transform components may be quantized by the value obtained when the variance of the DC component is spread over the number of levels that can be represented in the reconstructed image, as the human eye is constrained to the capabilities of the display. Equation 010 of FIG. 15(a) defines the pure DC transform quantizer as a function of said block-based decorrelating transform configuration items: the number of pixels per transform block N (220), the number of lines per transform block M (230), and the number of frames per transform block L (240).
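
Equation 010 of FIG. 15(a) is not reproduced here. One reading consistent with the description, offered as an assumption: under an orthonormal 3-D DCT the DC component carries a gain of sqrt(N*M*L) over the mean pixel value, so spreading its variance over the levels the display can reproduce yields one quantized DC step per displayable level:

    #include <math.h>

    /* Pure DC quantizer, assuming Q_DC = sqrt(N * M * L) so that each
       quantized DC step corresponds to one displayable luminance
       level (an assumption, not the figure's stated equation). */
    double dc_quantizer(int N, int M, int L)
    {
        return sqrt((double)(N * M * L));
    }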

Mixed AC/DC components can be quantized by the minimum quantization step size apportioned over the variance of the DCT basis component. This process requires calculation of the per-component variance for the AC and DC components (i.e., the variance calculation in the number of dimensions in which each AC or DC component resides). Similarly, the values of the independent AC and DC quantizers must be calculated using the Contrast Sensitivity Function limited to the number of dimensions in which the AC or DC component resides. As illustrated in FIG. 15(b), the C language pseudocode program calcQ 110 defines a quantizer suitable for application to the DC portion of mixed AC/DC components as a function of said block-based decorrelating transform configuration items: the number of pixels per transform block N (220), the number of lines per transform block M (230), and the number of frames per transform block L (240). For said typical mixed AC/DC component with l=0 050, the one-dimensional DC quantizer QDCm,n,0 210 is calculated from said reduced-dimension quantizer calculation calcQ 110.

The two-dimensional AC quantizer QACm,n,0 220 is calculated directly from said typical generalized Contrast Sensitivity Function CSF(u,w,I,X0,Xmax) 010.

FIG. 16 illustrates the process of deriving a statistically optimal quantizer Qm,n,0 310 from said one-dimensional DC quantizer QDCm,n,0 210 and said two-dimensional AC quantizer QACm,n,0 220. Said correlation coefficient Rx 330 is used to generate an autocorrelation matrix Mx 010. The convolution of said autocorrelation matrix with the DCT in the x direction returns the variance-concentration matrix Cx 020. Said process is understood to apply equally in the y and z directions.
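
One conventional realization of this step, offered as a sketch (the "convolution" of the text is implemented here as the similarity transform Cx = A * Mx * A^T, where A is the orthonormal DCT-II matrix; the function name and array bound are illustrative):

    #include <math.h>
    #include <stdlib.h>

    #define NMAX 16  /* requires n <= NMAX */

    /* Build the Markov-1 autocorrelation matrix Mx[i][j] = Rx^|i-j|
       and carry it into DCT space; the diagonal Cx[k][k] is the
       fraction of source variance concentrated in DCT component k. */
    void concentration(double Rx, int n, double Cx[NMAX][NMAX])
    {
        double A[NMAX][NMAX], Mx[NMAX][NMAX], T[NMAX][NMAX];
        double pi = acos(-1.0);
        for (int k = 0; k < n; k++)              /* orthonormal DCT-II */
            for (int i = 0; i < n; i++)
                A[k][i] = sqrt((k == 0 ? 1.0 : 2.0) / n) *
                          cos(pi * (2 * i + 1) * k / (2.0 * n));
        for (int i = 0; i < n; i++)              /* autocorrelation Mx */
            for (int j = 0; j < n; j++)
                Mx[i][j] = pow(Rx, abs(i - j));
        for (int k = 0; k < n; k++)              /* T = A * Mx */
            for (int j = 0; j < n; j++) {
                T[k][j] = 0.0;
                for (int i = 0; i < n; i++)
                    T[k][j] += A[k][i] * Mx[i][j];
            }
        for (int k = 0; k < n; k++)              /* Cx = T * A^T */
            for (int l = 0; l < n; l++) {
                Cx[k][l] = 0.0;
                for (int j = 0; j < n; j++)
                    Cx[k][l] += T[k][j] * A[l][j];
            }
    }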

The maximum visual delta of 1/QACm,n,0 110, calculated to apply over the variance-concentrated range Cx[m,m]*Cy[n,n] 120, and of 1/QDCm,n,0 130, calculated to apply over the variance-concentrated range Cz[0,0] 130, is calculated as 1/min(QACm,n,0, QDCm,n,0) 210, and can be applied over the entire range Cx[m,m]*Cy[n,n]*Cz[0,0] 220.

Said statistically optimal quantizer Qm,n,0 310 may now be calculated following the C language pseudocode excerpt 320. It is to be understood that the process of calculating typical statistically ideal mixed AC/DC coefficients is illustrated in the general sense in FIG. 15 and FIG. 16, with minor changes to the procedure obvious to any experienced practitioner of the art.
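
FIG. 16's pseudocode excerpt 320 is not reproduced in this text; the minimal reading of the rule above, that the delta 1/min(QAC, QDC) may be applied over the entire variance-concentrated range, is simply:

    #include <math.h>

    /* Statistically optimal mixed AC/DC quantizer under the minimal
       reading of FIG. 16 (an assumption): take the smaller of the AC
       and DC quantizers, whose reciprocal is the maximum visual delta. */
    double mixed_quantizer(double QAC, double QDC)
    {
        return fmin(QAC, QDC);
    }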

The worst-case degradation in visual quality caused by the Gibbs phenomenon as a result of quantization is illustrated in FIG. 17(a). A further adjustment to visual quality is supported by said Gibbs ringing compensation dBG 340, which is interpreted (FIG. 17(b)), as illustrated in equation 010, as a logarithmic factor of the actual reduction factor G 020. Said dBG 340 with a value of 0 represents said quantizer reduction factor G 020 of 8.985%, which precisely prevents worst-case Gibbs ringing from having a visible effect. Gibbs ringing removal is applied to said quantizers 420, generated by said quantizer generation process 410, as illustrated in equation 110, by reduction in magnitude by the factor 1−G (one minus said factor G 020).
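
A sketch of the Gibbs compensation as described, assuming the usual 20*log10 decibel convention for the "logarithmic factor" (the convention is an assumption; the 8.985% anchor point is from the text):

    #include <math.h>

    /* Reduce a quantizer's magnitude by the factor (1 - G), where the
       reduction factor G is 8.985% at dBG = 0 and scales
       logarithmically with dBG. */
    double gibbs_adjust(double Q, double dBG)
    {
        double G = 0.08985 * pow(10.0, dBG / 20.0);
        return Q * (1.0 - G);
    }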

Thus the present invention presents a comprehensive means of determining, for any given video-decorrelating spatiotemporal transform, optimal visual quantizers under specified viewing conditions and digital video configuration. The rationale behind the development of these optimal visual quantizers includes the mapping of a standard spatiotemporal contrast sensitivity model to the specific and potentially dynamically changing characteristics of the compression system, the extension of the model to include human sensitivity to angular and off-axis conditions, and the removal of potential Gibbs artifacts generated as a result of quantization. The invention has the important side-effect of supporting independent coherent quantizer generation in compressor and decompressor, enabling the low data rates associated with fixed quantizer tables while providing adaptation to potentially changing video frame rates.

While the present invention has been described in its preferred version or embodiment with some degree of particularity, it is understood that this description is intended as an example only, and that numerous changes in the composition or arrangements of apparatus elements and process steps may be made within the scope and spirit of the invention.

Claims

1. An apparatus comprised of a compressor and a decompressor, and a method for generating an optimally compressed representation of multidimensional visual data after transformation by a multidimensional orthogonal transform of a specified transformation block size, after quantization by coefficients of said transformation block size, after rearrangement of said quantized coefficients into a transmission sequence, and after collection of said quantized transformation coefficients into symbols, by the application of said quantized decorrelating transform to a plurality of measured variances of uncompressed multidimensional visual data and measured correlation coefficients of uncompressed multidimensional visual data to calculate the probability distribution of each quantized transform coefficient required to perform entropy removal.

2. The method of claim 1 where said orthogonal transform is the discrete cosine transform.

3. The method of claim 1 where said multidimensional visual data comprises a two-dimensional still image.

4. The method of claim 3 where said transformation block size comprises the entire image.

5. The method of claim 3 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per block and said plurality of correlation coefficients is one averaged value per frame.

6. The method of claim 3 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per block and said plurality of correlation coefficients is one averaged value per block.

7. The method of claim 3 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per dimension per frame and said plurality of correlation coefficients is one averaged value per dimension per frame.

8. The method of claim 3 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per block and said plurality of correlation coefficients is one averaged value per dimension per block.

9. The method of claim 1 where said multidimensional visual data comprises a three-dimensional moving video sequence.

10. The method of claim 9 where said transformation block size comprises a number of frames by the entire size of a single frame.

11. The method of claim 9 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per group of frames and said plurality of correlation coefficients is one averaged value per group of frames.

12. The method of claim 9 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per block and said plurality of correlation coefficients is one averaged value per block.

13. The method of claim 9 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per dimension per group of frames and said plurality of correlation coefficients is one averaged value per dimension per group of frames.

14. The method of claim 9 where said plurality of measured variances of uncompressed multidimensional visual data is one averaged value per dimension per block and said plurality of correlation coefficients is one averaged value per dimension per block.

15. The method of claim 1 where said quantizers are all ones.

16. The method of claim 1 where said quantizers are all equal.

17. The method of claim 1 where said quantizers are visually weighted.

18. The method of claim 1 where coefficients are organized within each block in order of decreasing calculated component variance.

19. The method of claim 18 where the probability of symbols is calculated from a definition of a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the actual non-zero value, a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the number of bits required to represent the non-zero value, an end-of-block symbol whose conditional expectation is calculated from the cumulative probability of a sequence of symbols comprised solely of zeroes, and an escape symbol whose conditional expectation is calculated from the accumulation of the probability of all symbols not otherwise defined.

20. The method of claim 1 where coefficients are organized across blocks in order of decreasing calculated component variance.

21. The method of claim 20 where the probability of symbols is calculated from a definition of a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the actual non-zero value, a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the number of bits required to represent the non-zero value, an end-of-block symbol whose conditional expectation is calculated from the cumulative probability of a sequence of symbols comprised solely of zeroes, and an escape symbol whose conditional expectation is calculated from the accumulation of the probability of all symbols not otherwise defined.

22. The method of claim 1 where coefficients are organized across blocks into bands of decreasing calculated component variance in order of successive refinement.

23. The method of claim 22 where the probability of symbols is calculated from a definition of a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the actual non-zero value, a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the number of bits required to represent the non-zero value, an end-of-block symbol whose conditional expectation is calculated from the cumulative probability of a sequence of symbols comprised solely of zeroes, and an escape symbol whose conditional expectation is calculated from the accumulation of the probability of all symbols not otherwise defined.

24. The method of claim 1 where coefficients are organized across blocks into bands of equal weight in order of decreasing calculated component variance.

25. The method of claim 24 where the probability of symbols is calculated from a definition of a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the actual non-zero value, a plurality of symbols as collected from sequences of component values whose conditional expectation is zero followed by the number of bits required to represent the non-zero value, an end-of-block symbol whose conditional expectation is calculated from the cumulative probability of a sequence of symbols comprised solely of zeroes, and an escape symbol whose conditional expectation is calculated from the accumulation of the probability of all symbols not otherwise defined.

26. The method of claim 1 where Huffman coding is used to perform entropy removal on the constructed stream of symbols.

27. The method of claim 26 where said measured variances of uncompressed multidimensional visual data and said measured correlations of uncompressed multidimensional visual data are communicated between compressor and decompressor.

28. The method of claim 1 where arithmetic coding is used to perform entropy removal on the constructed stream of symbols.

29. The method of claim 28 where said measured variances of uncompressed multidimensional visual data and said measured correlations of uncompressed multidimensional visual data are communicated between compressor and decompressor.

30. The method of claim 1 where said decorrelating transform is any orthonormal wavelet.

Patent History
Publication number: 20140328406
Type: Application
Filed: Apr 30, 2014
Publication Date: Nov 6, 2014
Inventor: Raymond John Westwater (Princeton, NJ)
Application Number: 14/266,757
Classifications
Current U.S. Class: Wavelet (375/240.19); Discrete Cosine (375/240.2)
International Classification: H04N 19/625 (20060101); H04N 19/91 (20060101); H04N 19/63 (20060101);