DATA PROCESSING APPARATUS FOR TRANSMITTING/RECEIVING COMPRESSED PIXEL DATA GROUPS OF PICTURE OVER DISPLAY INTERFACE AND RELATED DATA PROCESSING METHOD

A data processing apparatus has a mapper, a plurality of compressors, and an output interface. The mapper receives pixel data of a plurality of pixels of a picture, and splits the pixel data of the pixels of the picture into a plurality of pixel data groups. The compressors compress the pixel data groups and generate a plurality of compressed pixel data groups, respectively. The output interface packs the compressed pixel data groups into at least one output bitstream, and outputs the at least one output bitstream via a display interface.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. provisional application No. 61/865,345, filed on Aug. 13, 2013 and incorporated herein by reference.

BACKGROUND

The disclosed embodiments of the present invention relate to transmitting and receiving data over a display interface, and more particularly, to a data processing apparatus for transmitting/receiving compressed pixel data groups of a picture over a display interface and a related data processing method.

A display interface is disposed between a first chip and a second chip to transmit display data from the first chip to the second chip for further processing. For example, the first chip may be a host application processor, and the second chip may be a driver integrated circuit (IC). The display data may be single view data for two-dimensional (2D) display or multiple view data for three-dimensional (3D) display. When a display panel supports a higher display resolution, 2D/3D display with higher resolution can be realized. Hence, the display data transmitted over the display interface would have a larger data size/data rate, which increases the power consumption of the display interface inevitably. If the host application processor and the driver IC are both located at a portable device (e.g., a smartphone) powered by a battery device, the battery life is shortened due to the increased power consumption of the display interface. Thus, there is a need for an innovative design which can effectively reduce the power consumption of the display interface.

SUMMARY

In accordance with exemplary embodiments of the present invention, a data processing apparatus for transmitting/receiving compressed pixel data groups of a picture over a display interface and a related data processing method are proposed.

According to a first aspect of the present invention, an exemplary data processing apparatus is disclosed. The exemplary data processing apparatus includes a mapper, a plurality of compressors, and an output interface. The mapper is configured to receive pixel data of a plurality of pixels of a picture, and split the pixel data of the pixels of the picture into a plurality of pixel data groups. The compressors are configured to compress the pixel data groups and generate a plurality of compressed pixel data groups, respectively. The output interface is configured to pack the compressed pixel data groups into at least one output bitstream, and output the at least one output bitstream via a display interface.

According to a second aspect of the present invention, an exemplary data processing apparatus is disclosed. The exemplary data processing apparatus includes an input interface, a plurality of de-compressors, and a de-mapper. The input interface is configured to receive at least one input bitstream from a display interface, and un-pack the at least one input bitstream into a plurality of compressed pixel data groups of a picture. The de-compressors are configured to de-compress the compressed pixel data groups and generate a plurality of de-compressed pixel data groups, respectively. The de-mapper is configured to merge the de-compressed pixel data groups into pixel data of a plurality of pixels of the picture.

According to a third aspect of the present invention, an exemplary data processing method is disclosed. The exemplary data processing method includes: receiving pixel data of a plurality of pixels of a picture, and splitting the pixel data of the pixels of the picture into a plurality of pixel data groups; compressing the pixel data groups to generate a plurality of compressed pixel data groups, respectively; and packing the compressed pixel data groups into at least one output bitstream, and outputting the at least one output bitstream via a display interface.

According to a fourth aspect of the present invention, an exemplary data processing method is disclosed. The exemplary data processing method includes: receiving at least one input bitstream from a display interface, and un-packing the at least one input bitstream into a plurality of compressed pixel data groups of a picture; de-compressing the compressed pixel data groups to generate a plurality of de-compressed pixel data groups, respectively; and merging the de-compressed pixel data groups into pixel data of a plurality of pixels of the picture.

These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a data processing system according to an embodiment of the present invention.

FIG. 2 is a diagram illustrating a pixel data splitting operation performed by a mapper based on a first pixel data grouping design.

FIG. 3 is a diagram illustrating a pixel data merging operation performed by a de-mapper based on the first pixel data grouping design.

FIG. 4 is a diagram illustrating a pixel data splitting operation performed by a mapper based on a second pixel data grouping design.

FIG. 5 is a diagram illustrating a pixel data merging operation performed by a de-mapper based on the second pixel data grouping design.

FIG. 6 is a diagram illustrating a first pixel section based pixel data grouping design according to an embodiment of the present invention.

FIG. 7 is a diagram illustrating a second pixel section based pixel data grouping design according to an embodiment of the present invention.

FIG. 8 is a flowchart illustrating a control and data flow of the data processing system shown in FIG. 1 according to an embodiment of the present invention.

FIG. 9 is a diagram illustrating a position-aware rate control mechanism according to an embodiment of the present invention.

FIG. 10 is a diagram illustrating an alternative design of step 806 in FIG. 8.

FIG. 11 is a diagram illustrating a modified compression mechanism according to an embodiment of the present invention.

FIG. 12 is a diagram illustrating an alternative design of step 808 in FIG. 8.

DETAILED DESCRIPTION

Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”. Also, the term “couple” is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.

The present invention proposes applying data compression to display data and then transmitting the compressed display data over a display interface. As the data size/data rate of the compressed display data is smaller than that of the original un-compressed display data, the power consumption of the display interface is reduced correspondingly. However, there may be a throughput bottleneck for a compression/de-compression system due to long data dependency on previously compressed/reconstructed data. To minimize or eliminate the throughput bottleneck of the compression/de-compression system, the present invention further proposes a data parallelism design. For example, the rate control intends to optimally or sub-optimally adjust the bit rate of each compression unit so as to achieve content-aware bit budget allocation and therefore improve the visual quality. However, the rate control generally suffers from the long data dependency. When the proposed data parallelism design is employed, there will be a compromise between the processing throughput and the rate control performance. It should be noted that the proposed data parallelism design is not limited to enhancement of the rate control; any compression/de-compression system using the proposed data parallelism design falls within the scope of the present invention. Further details are described below.

FIG. 1 is a block diagram illustrating a data processing system according to an embodiment of the present invention. The data processing system 100 includes a plurality of data processing apparatuses such as an application processor 102 and a driver integrated circuit (IC) 104. The application processor 102 and the driver IC 104 may be implemented in different chips, and the application processor 102 may communicate with the driver IC 104 via a display interface 103. In this embodiment, the display interface 103 may be a display serial interface (DSI) standardized by a Mobile Industry Processor Interface (MIPI) or an embedded display port (eDP) standardized by a Video Electronics Standards Association (VESA).

The application processor 102 is coupled between a data source 105 and the display interface 103, and supports compressed data transmission. The application processor 102 receives an input display data from the external data source 105, where the input display data may be image data or video data that includes pixel data DI of a plurality of pixels of a picture to be processed. By way of example, but not limitation, the data source 105 may be a camera sensor, a memory card or a wireless receiver. As shown in FIG. 1, the application processor 102 includes a display controller 111, an output interface 112 and a processing circuit 113. The processing circuit 113 includes circuit elements required for processing the pixel data DI to generate a plurality of compressed pixel data groups (e.g., two compressed pixel data groups DG1′ and DG2′ in this embodiment). For example, the processing circuit 113 has a mapper 114, a plurality of compressors (e.g., two compressors 115_1 and 115_2 in this embodiment), a rate controller 116, and other circuitry 117, where the other circuitry 117 may have a display processor, additional image processing element(s), etc. The display processor may perform image processing operations, including scaling, rotating, etc. For example, the input display data provided by the data source 105 may be bypassed or processed by the additional image processing element(s) located before the display processor to generate a source display data, and then the display processor may process the source display data to generate the pixel data DI to the mapper 114. In other words, the pixel data DI to be processed by the mapper 114 may be directly provided from the data source 105 or indirectly obtained from the input display data provided by the data source 105. The present invention has no limitation on the source of the pixel data DI.

The mapper 114 acts as a splitter, and is configured to receive the pixel data DI of one picture and split the pixel data DI of the picture into a plurality of pixel data groups (e.g., two pixel data groups DG1 and DG2 in this embodiment) according to a pixel data grouping setting DGSET. Further details of the mapper 114 will be described later. Since the pixel data DI is split into two pixel data groups DG1 and DG2, two compressors 115_1 and 115_2 are selected from multiple compressors implemented in the processing circuit 113, and enabled to compress the pixel data groups DG1 and DG2 to generate compressed pixel data groups DG1′ and DG2′, respectively. In other words, the number of enabled compressors depends on the number of pixel data groups.

Each of the compressors 115_1 and 115_2 may employ a lossless compression algorithm or a lossy compression algorithm, depending upon the actual design consideration. The rate controller 116 is configured to apply bit rate control (i.e., bit budget allocation) to the compressors 115_1 and 115_2, respectively. In this way, each of the compressed pixel data groups DG1′ and DG2′ is generated at a desired bit rate. In this embodiment, compression operations performed by the compressors 115_1 and 115_2 are independent of each other, thus enabling rate control with data parallelism. Since the long data dependency is alleviated, the rate control performance can be improved.

The output interface 112 is configured to pack/packetize the compressed pixel data groups DG1′ and DG2′ into at least one output bitstream according to the transmission protocol of the display interface 103, and transmit the at least one output bitstream to the driver IC 104 via the display interface 103. By way of example, one bitstream BS may be generated from the application processor 102 to the driver IC 104 via one display port of the display interface 103.
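
For illustration only, the following minimal Python sketch models one way to pack two compressed pixel data groups into a single bitstream and un-pack them at the receiving side. It is not the DSI/eDP packet format; the 4-byte length header and the names pack_groups/unpack_groups are assumptions made for the example.

```python
import struct

def pack_groups(compressed_groups):
    # Prefix each compressed pixel data group with a hypothetical 4-byte
    # big-endian length field and concatenate everything into one bitstream BS.
    return b"".join(struct.pack(">I", len(g)) + g for g in compressed_groups)

def unpack_groups(bitstream):
    # Receiver-side inverse: walk the length headers and recover the
    # compressed groups (e.g., DG3' and DG4').
    groups, offset = [], 0
    while offset < len(bitstream):
        (length,) = struct.unpack_from(">I", bitstream, offset)
        offset += 4
        groups.append(bitstream[offset:offset + length])
        offset += length
    return groups

bs = pack_groups([b"\x10\x20", b"\x30\x40\x50"])
assert unpack_groups(bs) == [b"\x10\x20", b"\x30\x40\x50"]
```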

Regarding the driver IC 104, it communicates with the application processor 102 via the display interface 103. In this embodiment, the driver IC 104 is coupled between the display interface 103 and a display panel 106, and supports compressed data reception. By way of example, the display panel 106 may be implemented using any 2D/3D display device. When the application processor 102 transmits compressed display data (e.g., compressed pixel data groups DG1′ and DG2′ packed in the bitstream BS) to the driver IC 104, the driver IC 104 is configured to receive the compressed display data from the display interface 103 and drive the display panel 106 according to de-compressed display data derived from de-compressing the compressed display data.

As shown in FIG. 1, the driver IC 104 includes a driver IC controller 121, an input interface 122 and a processing circuit 123. The input interface 122 is configured to receive at least one input bitstream from the display interface 103 (e.g., the bitstream BS received by one display port of the display interface 103), and un-pack/un-packetize the at least one input bitstream into a plurality of compressed pixel data groups of a picture (e.g., two compressed pixel data groups DG3′ and DG4′ in this embodiment). It should be noted that, if there is no error introduced during the data transmission, the compressed pixel data group DG3′ generated from the input interface 122 should be identical to the compressed pixel data group DG1′ received by the output interface 112, and the compressed pixel data group DG4′ generated from the input interface 122 should be identical to the compressed pixel data group DG2′ received by the output interface 112.

The processing circuit 123 may include circuit elements required for driving the display panel 106. For example, the processing circuit 123 has a de-mapper 124, a plurality of de-compressors (e.g., two de-compressors 125_1 and 125_2 in this embodiment), and other circuitry 127, where the other circuitry 127 may have a display buffer, additional image processing element(s), etc. The de-compressor 125_1 is configured to de-compress the compressed pixel data group DG3′ to generate a de-compressed pixel data group DG3, and the de-compressor 125_2 is configured to de-compress the compressed pixel data group DG4′ to generate a de-compressed pixel data group DG4. In this embodiment, the de-compression operations performed by the de-compressors 125_1 and 125_2 are independent of each other. In this way, the de-compression throughput is improved due to data parallelism.

The de-compression algorithm employed by each of the de-compressors 125_1 and 125_2 should be properly configured to match the compression algorithm employed by each of the compressors 115_1 and 115_2. In other words, the de-compressors 125_1 and 125_2 are configured to perform lossless de-compression when the compressors 115_1 and 115_2 are configured to perform lossless compression; and the de-compressors 125_1 and 125_2 are configured to perform lossy de-compression when the compressors 115_1 and 115_2 are configured to perform lossy compression. If there is no error introduced during the data transmission and a lossless compression algorithm is employed by the compressors 115_1 and 115_2, the de-compressed pixel data group DG3 fed into the de-mapper 124 should be identical to the pixel data group DG1 generated from the mapper 114, and the de-compressed pixel data group DG4 fed into the de-mapper 124 should be identical to the pixel data group DG2 generated from the mapper 114.

The de-mapper 124 acts as a combiner, and is configured to merge the de-compressed pixel data groups into pixel data DO of a plurality of pixels of a reconstructed picture based on the pixel data grouping setting DGSET that is employed by the mapper 114. The pixel data grouping setting DGSET employed by the mapper 114 may be transmitted from the application processor 102 to the driver IC 104 via an in-band channel (i.e., display interface 103) or an out-of-band channel 107 (e.g., an I2C (Inter-Integrated Circuit) bus). Specifically, the display controller 111 controls the operation of the application processor 102, and the driver IC controller 121 controls the operation of the driver IC 104. Hence, the display controller 111 may first check a de-compression capability and requirement of the driver IC 104, and then determine the number of pixel data groups in response to a checking result. In addition, the display controller 111 may further determine the pixel data grouping setting DGSET employed by the mapper 114 to generate the pixel data groups that satisfy the de-compression capability and requirement of the driver IC 104, and transmit the pixel data grouping setting DGSET over display interface 103 or out-of-band channel 107. When receiving a query issued from the display controller 111, the driver IC controller 121 may inform the display controller 111 of the de-compression capability and requirement of the driver IC 104. In addition, when receiving the pixel data grouping setting DGSET from display interface 103 or out-of-band channel 107, the driver IC controller 121 may control the de-mapper 124 to perform the pixel data merging operation based on the received pixel data grouping setting DGSET.

The present invention proposes several pixel data grouping designs that can be used to split pixel data of a plurality of pixels of one picture into multiple pixel data groups. Examples of the proposed pixel data grouping designs are detailed as below.

In a first pixel data grouping design, the mapper 114 splits the pixel data DI of pixels of one picture by dividing bit depths/bit planes into different groups. FIG. 2 is a diagram illustrating a pixel data splitting operation performed by the mapper 114 based on the first pixel data grouping design. As shown in FIG. 2, the width of a picture 200 is W, and the height of the picture 200 is H. Thus, the picture 200 has W×H pixels 201. In this embodiment, pixel data of each pixel 201 has a plurality of bits corresponding to different bit planes. For example, each pixel 201 has 12 bits B0-B11 for each color channel R/G/B. The bits B0-B11 correspond to different bit planes Bit-plane[0]-Bit-plane[11]. Specifically, the least significant bit (LSB) B0 corresponds to the bit plane Bit-plane[0], and the most significant bit (MSB) B11 corresponds to the bit plane Bit-plane[11]. When the first pixel data grouping design is employed, the display controller 111 controls the pixel data grouping setting DGSET to instruct the mapper 114 to split bits of the pixel data of each pixel into a plurality of bit groups (e.g., two bit groups BG1 and BG2 in this embodiment), and distribute the bit groups to the pixel data groups (e.g., pixel data groups DG1 and DG2 in this embodiment), respectively. Concerning bits B0-B11 of color channels R, G, B of each pixel 201, the mapper 114 may categorize even bits B0, B2, B4, B6, B8, B10 as one bit group BG1, and categorize odd bits B1, B3, B5, B7, B9, B11 as another bit group BG2. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention. In an alternative design, the mapper 114 may categorize more significant bits B6-B11 as one bit group BG1, and categorize less significant bits B0-B5 as another bit group BG2. In short, any bit interleaving manner capable of splitting bits of pixel data of each pixel 201 of the picture 200 into multiple bit groups may be employed by the mapper 114.
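
A minimal Python sketch of this even/odd bit-plane grouping is given below; the 12-bit component width follows FIG. 2, and the function name split_bit_planes is hypothetical.

```python
def split_bit_planes(component_12bit):
    # Split one 12-bit color component into bit group BG1 (even bit planes
    # B0, B2, ..., B10) and bit group BG2 (odd bit planes B1, B3, ..., B11).
    bg1 = bg2 = 0
    for i in range(12):
        bit = (component_12bit >> i) & 1
        if i % 2 == 0:
            bg1 |= bit << (i // 2)   # even bit plane -> BG1
        else:
            bg2 |= bit << (i // 2)   # odd bit plane -> BG2
    return bg1, bg2
```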

As mentioned above, the pixel data groups DG1 and DG2 are transmitted from the application processor 102 to the driver IC 104 after undergoing data compression. Hence, the driver IC 104 obtains one de-compressed pixel data group DG3 corresponding to the pixel data group DG1 and another de-compressed pixel data group DG4 corresponding to the pixel data group DG2 after data de-compression is performed. FIG. 3 is a diagram illustrating a pixel data merging operation performed by the de-mapper 124 based on the first pixel data grouping design. The operation of the de-mapper 124 may be regarded as an inverse of the operation of the mapper 114. Hence, based on the pixel data grouping setting DGSET employed by the mapper 114, the de-mapper 124 obtains a plurality of bit groups (e.g., two bit groups BG1 and BG2 in this embodiment) from the de-compressed pixel data groups (e.g., two de-compressed pixel data groups DG3 and DG4 in this embodiment), respectively, and merges the bit groups to obtain bits of pixel data of each pixel 201′ of a reconstructed picture 200′. The resolution of the reconstructed picture 200′ generated at the driver IC 104 is identical to the resolution of the picture 200 processed in the application processor 102. Hence, the width of the reconstructed picture 200′ is W, and the height of the reconstructed picture 200′ is H. The pixel data of each pixel 201′ of the reconstructed picture 200′ includes a plurality of bits B0-B11 corresponding to different bit planes Bit-plane[0]-Bit-plane[11]. For example, each color channel R/G/B of one pixel 201′ in the reconstructed picture 200′ includes 12 bits B0-B11. The de-mapper 124 may obtain the bit group BG1 composed of even bits B0, B2, B4, B6, B8, B10 of color channels R, G, B of a pixel 201′, obtain another bit group BG2 composed of odd bits B1, B3, B5, B7, B9, B11 of color channels R, G, B of the pixel 201′, and merge the bit groups BG1 and BG2 to recover all bits B0-B11 of the pixel data of the pixel 201′. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention. In another case where the mapper 114 categorizes more significant bits B6-B11 as one bit group BG1 and categorizes less significant bits B0-B5 as another bit group BG2, the de-mapper 124 may obtain the bit group BG1 composed of more significant bits B6-B11 of color channels R, G, B of a pixel 201′, obtain another bit group BG2 composed of less significant bits B0-B5 of color channels R, G, B of the pixel 201′, and merge the bit groups BG1 and BG2 to recover all bits B0-B11 of the pixel data of the pixel 201′. To put it simply, the bit de-interleaving manner employed by the de-mapper 124 depends on the bit interleaving manner employed by the mapper 114.
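
A companion sketch of the de-mapper side follows, assuming the same even/odd grouping; merge_bit_planes is a hypothetical name, and the round-trip check reuses split_bit_planes from the previous sketch.

```python
def merge_bit_planes(bg1, bg2):
    # Re-interleave the two 6-bit groups back into one 12-bit component.
    value = 0
    for i in range(6):
        value |= ((bg1 >> i) & 1) << (2 * i)       # even bit planes from BG1
        value |= ((bg2 >> i) & 1) << (2 * i + 1)   # odd bit planes from BG2
    return value

assert merge_bit_planes(*split_bit_planes(0xABC)) == 0xABC  # lossless round trip
```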

In a second pixel data grouping design, the mapper 114 splits the pixel data DI of pixels of one picture by dividing complete pixels into different groups. FIG. 4 is a diagram illustrating a pixel data splitting operation performed by the mapper 114 based on the second pixel data grouping design. As shown in FIG. 4, the width of a picture 400 is W, and the height of the picture 400 is H. Thus, the picture 400 has W×H pixels. As shown in FIG. 4, pixels located at the same pixel line (e.g., the same pixel row in this embodiment) include a plurality of pixels P0, P1, P2, P3 . . . PW−2, PW−1. When the second pixel data grouping design is employed, the display controller 111 controls the pixel data grouping setting DGSET to instruct the mapper 114 to split pixels of the picture 400 into a plurality of pixel groups (e.g., two pixel groups PG1 and PG2 in this embodiment), and distribute pixel data of the pixel groups to the pixel data groups (e.g., two pixel data groups DG1 and DG2 in this embodiment), respectively. For example, adjacent pixels located at the same pixel line (e.g., the same pixel row) are distributed to different groups, respectively. Hence, the pixel group PG1 includes all pixels of even pixel columns C0, C2 . . . CW−2 of the picture 400, and the pixel group PG2 includes all pixels of the odd pixel columns C1, C3 . . . CW−1 of the picture 400. As shown in FIG. 4, the pixel data group DG1 includes pixel data of H×(W/2) pixels, and the pixel data group DG2 includes pixel data of H×(W/2) pixels. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention. In an alternative design, the aforementioned pixel line may be a pixel column, such that adjacent pixels located at the same pixel column are distributed to different groups, respectively. In this case, the pixel group PG1 may include all pixels of even pixel rows of the picture 400, and the pixel group PG2 may include all pixels of the odd pixel rows of the picture 400. In other words, the pixel data group DG1 may be formed by gathering pixel data of (H/2)×W pixels, and the pixel data group DG2 may be formed by gathering pixel data of (H/2)×W pixels. To put it simply, any pixel interleaving manner capable of splitting adjacent pixels of the picture 400 into different pixel groups may be employed by the mapper 114.
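
A minimal sketch of this single-pixel column interleaving follows, assuming the picture is held as a list of rows of per-pixel values; split_by_column is a hypothetical name.

```python
def split_by_column(picture):
    # Even columns C0, C2, ... form pixel group PG1; odd columns C1, C3, ...
    # form pixel group PG2, as in FIG. 4.
    pg1 = [row[0::2] for row in picture]
    pg2 = [row[1::2] for row in picture]
    return pg1, pg2

pg1, pg2 = split_by_column([[0, 1, 2, 3], [4, 5, 6, 7]])
assert pg1 == [[0, 2], [4, 6]] and pg2 == [[1, 3], [5, 7]]
```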

As mentioned above, the pixel data groups DG1 and DG2 are transmitted from the application processor 102 to the driver IC 104 after undergoing data compression. Hence, the driver IC 104 obtains one de-compressed pixel data group DG3 corresponding to the pixel data group DG1 and another de-compressed pixel data group DG4 corresponding to the pixel data group DG2 after data de-compression is performed. FIG. 5 is a diagram illustrating a pixel data merging operation performed by the de-mapper 124 based on the second pixel data grouping design. The operation of the de-mapper 124 may be regarded as an inverse of the operation of the mapper 114. Hence, based on the pixel data grouping setting DGSET employed by the mapper 114, the de-mapper 124 obtains pixel data of a plurality of pixel groups (e.g., two pixel groups PG1 and PG2 in this embodiment) from the de-compressed pixel data groups (e.g., two pixel data groups DG3 and DG4 in this embodiment), respectively, and merges the pixel data of the pixel groups to obtain pixel data of pixels of a reconstructed picture 400′, where adjacent pixels located at the same pixel line (e.g., the same pixel row) of the reconstructed picture 400′ are obtained from different pixel groups, respectively. However, this is for illustrative purposes only, and is not meant to be a limitation of the present invention. In another case where the mapper 114 distributes adjacent pixels located at the same pixel column to different groups, respectively, the de-mapper 124 may obtain pixel data of a plurality of pixel groups from the de-compressed pixel data groups, respectively, and merge the pixel data of the pixel groups to obtain pixel data of pixels of the reconstructed picture 400′, where adjacent pixels located at the same pixel column of the reconstructed picture 400′ are obtained from different pixel groups, respectively. To put it simply, the pixel de-interleaving manner employed by the de-mapper 124 depends on the pixel interleaving manner employed by the mapper 114.
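
The corresponding de-mapper operation can be sketched as below, assuming the same column interleaving and an even picture width; merge_by_column is a hypothetical name.

```python
def merge_by_column(pg1, pg2):
    # Re-interleave the two half-width groups into full W-pixel rows of the
    # reconstructed picture.
    merged = []
    for even_cols, odd_cols in zip(pg1, pg2):
        row = [None] * (len(even_cols) + len(odd_cols))
        row[0::2] = even_cols              # restore even columns
        row[1::2] = odd_cols               # restore odd columns
        merged.append(row)
    return merged

assert merge_by_column([[0, 2], [4, 6]], [[1, 3], [5, 7]]) == [[0, 1, 2, 3], [4, 5, 6, 7]]
```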

Regarding the second pixel data grouping design mentioned above, the pixels are categorized into different pixel groups in a single-pixel based manner. In one alternative design, the pixels may be categorized into different pixel groups in a pixel section based manner, where each pixel section includes a plurality of successive pixels located at the same pixel line (e.g., the same pixel row or the same pixel column). FIG. 6 is a diagram illustrating a first pixel section based pixel data grouping design according to an embodiment of the present invention. Each of the pixel lines (e.g., pixel rows R0-RH−1 in this embodiment) is divided into a plurality of pixel segments (e.g., two pixel sections S1 and S2 in this embodiment), and the number of the pixel segments located at the same pixel line is equal to the number of pixel data groups (e.g., two pixel data groups DG1 and DG2 in this embodiment). Concerning the pixel data splitting operation, adjacent pixel segments located at the same pixel line (e.g., the same pixel row in this embodiment) are distributed to different pixel groups (e.g., two pixel groups PG1 and PG2 in this embodiment), respectively. Hence, as shown in FIG. 6, the pixel group PG1 is composed of pixel sections S1 each extracted from one of the pixel rows R0-RH−1 of the picture 400, and the pixel group PG2 is composed of pixel sections S2 each extracted from one of the pixel rows R0-RH−1 of the picture 400.
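
A sketch of this section-based splitting is shown below; cutting every row at the midpoint W // 2 is an assumption made for illustration, and split_row_sections is a hypothetical name.

```python
def split_row_sections(picture):
    # Left sections S1 of every row form pixel group PG1; right sections S2
    # form pixel group PG2, as in FIG. 6.
    w = len(picture[0])
    m = w // 2                          # assumed boundary position between S1 and S2
    pg1 = [row[:m] for row in picture]  # sections S1
    pg2 = [row[m:] for row in picture]  # sections S2
    return pg1, pg2

pg1, pg2 = split_row_sections([[0, 1, 2, 3]])
assert pg1 == [[0, 1]] and pg2 == [[2, 3]]
```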

Concerning the pixel data merging operation, adjacent pixel segments located at the same pixel line (e.g., the same pixel row in this embodiment) are obtained from different pixel groups (e.g., two pixel groups PG1 and PG2 in this embodiment), respectively. Hence, as shown in FIG. 6, the reconstructed picture 400′ has pixel rows R0-RH−1 each reconstructed by merging one pixel section S1 obtained from the pixel group PG1 and another pixel section S2 obtained from the pixel group PG2.

It should be noted that the aforementioned pixel line may be a pixel column in another exemplary implementation. Therefore, each of the pixel columns is divided into a plurality of pixel segments, and the number of the pixel segments located at the same pixel column is equal to the number of pixel data groups. Concerning the pixel data splitting operation, adjacent pixel segments located at the same pixel column are distributed to different pixel groups, respectively. Concerning the pixel data merging operation, adjacent pixel segments located at the same pixel column are obtained from different pixel groups, respectively.

FIG. 7 is a diagram illustrating a second pixel section based pixel data grouping design according to an embodiment of the present invention. Each of the pixel lines (e.g., pixel rows R0-RH−1 in this embodiment) is divided into a plurality of pixel segments (e.g., four pixel sections S1, S2, S3 and S4 in this embodiment), and the number of the pixel segments located at the same pixel line is larger than the number of pixel data groups (e.g., two pixel data groups DG1 and DG2 in this embodiment). Concerning the pixel data splitting operation, adjacent pixel segments located at the same pixel line (e.g., the same pixel row in this embodiment) are distributed to different pixel groups (e.g., two pixel groups PG1 and PG2 in this embodiment), respectively. Hence, as shown in FIG. 7, the pixel group PG1 is composed of pixel sections S1, each extracted from one of the pixel rows R0-RH−1 of the picture 400, and pixel sections S3, each extracted from one of the pixel rows R0-RH−1 of the picture 400; and the pixel group PG2 is composed of pixel sections S2, each extracted from one of the pixel rows R0-RH−1 of the picture 400, and pixel sections S4, each extracted from one of the pixel rows R0-RH−1 of the picture 400. Concerning the pixel data merging operation, adjacent pixel segments located at the same pixel line (e.g., the same pixel row in this embodiment) are obtained from different pixel groups (e.g., two pixel groups PG1 and PG2 in this embodiment), respectively. Hence, as shown in FIG. 7, the reconstructed picture 400′ has pixel rows R0-RH−1 each reconstructed by merging pixel sections S1 and S3 both obtained from the pixel group PG1 and pixel sections S2 and S4 both obtained from the pixel group PG2.
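
A sketch of this many-sections-per-row grouping follows, assuming four equal-width sections per row distributed round-robin to two groups; split_sections_round_robin is a hypothetical name.

```python
def split_sections_round_robin(picture, num_groups=2, sections_per_row=4):
    # Sections S1, S3 of every row go to PG1 and sections S2, S4 go to PG2,
    # matching FIG. 7; equal section widths are an assumption.
    w = len(picture[0])
    step = w // sections_per_row
    groups = [[] for _ in range(num_groups)]
    for row in picture:
        for s in range(sections_per_row):
            start = s * step
            end = start + step if s < sections_per_row - 1 else w
            groups[s % num_groups].append(row[start:end])
    return groups

pg1, pg2 = split_sections_round_robin([[0, 1, 2, 3, 4, 5, 6, 7]])
assert pg1 == [[0, 1], [4, 5]] and pg2 == [[2, 3], [6, 7]]
```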

It should be noted that the aforementioned pixel line may be a pixel column in another exemplary implementation. Therefore, each of the pixel columns is divided into a plurality of pixel segments, and the number of the pixel segments located at the same pixel column is larger than the number of pixel data groups. Concerning the pixel data splitting operation, adjacent pixel segments located at the same pixel column are distributed to different pixel groups, respectively. Concerning the pixel data merging operation, adjacent pixel segments located at the same pixel column are obtained from different pixel groups, respectively.

FIG. 8 is a flowchart illustrating a control and data flow of the data processing system shown in FIG. 1 according to an embodiment of the present invention. Provided that the result is substantially the same, the steps are not required to be executed in the exact order shown in FIG. 8. The exemplary control and data flow may be briefly summarized by following steps.

Step 802: Check a de-compression capability and requirement of a driver IC.

Step 803: Inform an application processor of the de-compression capability and requirement.

Step 804: Determine a pixel data grouping setting according to a checking result.

Step 806: Apply rate control to a plurality of compressors, independently.

Step 808: Generate a plurality of compressed pixel data groups by using the compressors to compress a plurality of pixel data groups obtained from pixel data of a plurality of pixels of a picture based on the pixel data grouping setting. For example, the pixel data groups may be generated based on any of the proposed pixel data grouping designs shown in FIG. 2, FIG. 4, FIG. 6 and FIG. 7.

Step 810: Pack/packetize the compressed pixel data groups into an output bitstream.

Step 812: Transmit the output bitstream via a display interface.

Step 814: Transmit the pixel data grouping setting via an in-band channel (i.e., display interface) or an out-of-band channel (e.g., I2C bus).

Step 816: Receive the pixel data grouping setting from the in-band channel (i.e., display interface) or the out-of-band channel (e.g., I2C bus).

Step 818: Receive an input bitstream from the display interface.

Step 820: Un-pack/un-packetize the input bitstream into a plurality of compressed data groups.

Step 822: Generate pixel data of a plurality of pixels of a reconstructed picture by using a plurality of de-compressors to de-compress the compressed pixel data groups, independently, and then merging a plurality of de-compressed pixel data groups based on the pixel data grouping setting.

It should be noted that steps 802 and 804-814 are performed by the application processor (AP) 102, and steps 803 and 816-822 are performed by the driver IC 104. As a person skilled in the art can readily understand details of each step shown in FIG. 8 after reading above paragraphs, further description is omitted here for brevity.

Moreover, the proposed data parallelism scheme may be inactivated when a single compressor at the AP side and a single de-compressor at the driver IC side are capable of meeting the throughput requirement. For example, the application processor may refer to the de-compression capability and requirement reported by the driver IC to determine the throughput M (pixels per clock cycle) of one de-compressor in the driver IC and the target throughput requirement N (pixels per clock cycle) of the display panel driven by the driver IC. Assume that the throughput of one compressor in the application processor is also M (pixels per clock cycle). When N/M is not greater than one, a single compressor at the AP side and a single de-compressor at the driver IC side are capable of meeting the throughput requirement. Hence, the proposed data parallelism scheme is inactivated, and the conventional rate-controlled compression and de-compression are performed. When N/M is greater than one, a single compressor at the AP side and a single de-compressor at the driver IC side are unable to meet the throughput requirement. Hence, the proposed data parallelism scheme is activated. In addition, the number of compressors enabled in the application processor and the number of de-compressors enabled in the driver IC may be determined based on the value of N/M.
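
The activation rule above can be sketched as follows. Using ceil(N/M) to pick the number of enabled compressor/de-compressor pairs is one plausible reading of "determined based on the value of N/M", not a rule fixed by the text; decide_parallelism is a hypothetical name.

```python
import math

def decide_parallelism(n_panel, m_codec):
    # n_panel: target throughput N of the display panel (pixels per clock cycle)
    # m_codec: throughput M of one compressor/de-compressor (pixels per clock cycle)
    if n_panel <= m_codec:
        return 1                          # single codec pair suffices; parallelism inactivated
    return math.ceil(n_panel / m_codec)   # number of compressor/de-compressor pairs to enable

assert decide_parallelism(4, 4) == 1
assert decide_parallelism(8, 4) == 2
```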

The pixel data splitting operation performed by the mapper 114 is to generate multiple pixel data groups that will undergo rate-controlled compression independently. However, it is possible that pixel data of adjacent pixel lines (e.g., pixel rows or pixel columns) in the original picture are categorized into different pixel data groups. The rate control generally optimizes the bit rate in terms of pixel context rather than pixel positions. The pixel boundary may introduce artifacts since the rate control is not aware of the boundary position. Taking the pixel data grouping design shown in FIG. 6 for example, the rate control applied to the pixel section S1 of the pixel row R0 is independent of the rate control applied to the pixel section S2 of the same pixel row R0. Specifically, the pixel section S1 is compressed in an order from P0 to PM, and the pixel section S2 is compressed in an order from PM+1 to PW−1. Concerning the pixels PM and PM+1 on opposite sides of the pixel boundary between pixel sections S1 and S2, the pixel PM may be part of a compression unit with a first bit budget allocation, and the pixel PM+1 may be part of another compression unit with a second bit budget allocation different from the first bit budget allocation. The difference between the first bit budget allocation and the second bit budget allocation may be large. As a result, the rate controller 116 may allocate bit rates un-evenly on the pixel boundary, thus resulting in degraded image quality on the pixel boundary in a reconstructed picture. To avoid or mitigate the image quality degradation caused by artifacts on the pixel boundary, the present invention further proposes a position-aware rate control mechanism which optimizes the bit budget allocation in terms of pixel positions.

FIG. 9 is a diagram illustrating a position-aware rate control mechanism according to an embodiment of the present invention. As shown in FIG. 9, there are compression units CU1 and CU2 on one side of a pixel boundary and compression units CU3 and CU4 on the other side of the pixel boundary. The compression units CU1 and CU2 belong to one pixel group PG1, and the compression unit CU1 is nearer to the pixel boundary than the compression unit CU2. The compression units CU3 and CU4 belong to another pixel group PG2, and the compression unit CU3 is nearer to the pixel boundary than the compression unit CU4. In one exemplary embodiment, each of the compression units CU1-CU4 may include 4×2 pixels, and the compression units CU1-CU4 may be horizontally or vertically adjacent in a picture. When the position-aware rate control mechanism is activated, the rate controller 116 may be configured to adjust the bit rate control according to a position of each pixel boundary between different pixel groups. For example, the rate controller 116 increases an original bit budget BBori_CU1 assigned to the compression unit CU1 by an adjustment value Δ1 (Δ1>0) to thereby determine a final bit budget BBtar_CU1, and decreases an original bit budget BBori_CU2 assigned to the compression unit CU2 by the adjustment value Δ1 to thereby determine a final bit budget BBtar_CU2. In addition, the rate controller 116 increases an original bit budget BBori_CU3 assigned to the compression unit CU3 by an adjustment value Δ2 (Δ2>0) to thereby determine a final bit budget BBtar_CU3, and decreases an original bit budget BBori_CU4 assigned to the compression unit CU4 by the adjustment value Δ2 to thereby determine a final bit budget BBtar_CU4. The adjustment value Δ2 may be equal to or different from the adjustment value Δ1, depending upon actual design consideration. Since the proposed position-aware rate control tends to set a larger bit budget near the pixel boundary, the artifacts on the pixel boundary can be reduced. In this way, the image quality around the pixel boundary in a reconstructed picture can be improved.
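
In code, the adjustment amounts to shifting part of the bit budget from the compression unit farther from the boundary to the one next to it, as sketched below with made-up budget numbers; adjust_boundary_budgets is a hypothetical name.

```python
def adjust_boundary_budgets(bb_near, bb_far, delta):
    # bb_near: original budget of the unit next to the pixel boundary (e.g., BBori_CU1)
    # bb_far:  original budget of its neighbour farther from the boundary (e.g., BBori_CU2)
    # The pair's total budget is unchanged; bits are shifted toward the boundary.
    return bb_near + delta, bb_far - delta

bb_tar_cu1, bb_tar_cu2 = adjust_boundary_budgets(bb_near=96, bb_far=96, delta=16)  # PG1 side, delta1
bb_tar_cu3, bb_tar_cu4 = adjust_boundary_budgets(bb_near=96, bb_far=96, delta=8)   # PG2 side, delta2
assert bb_tar_cu1 + bb_tar_cu2 == 192  # total budget of the pair is preserved
```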

In a case where the position-aware rate control is employed, the flow shown in FIG. 8 may be modified to have step 806 replaced with the following step shown in FIG. 10.

Step 1002: Apply rate control to a plurality of compressors according to pixel boundary positions, independently.

As a person skilled in the art can readily understand details of step 1002 after reading above paragraphs, further description is omitted here for brevity.

Taking the pixel data grouping design shown in FIG. 6 for example, the rate control applied to the pixel section S1 of the pixel row R0 is independent of the rate control applied to the pixel section S2 of the same pixel row R0. The pixel section S1 is compressed in an order from P0 to PM, and the pixel section S2 is compressed in an order from PM+1 to PW−1. As a result, the bit budget allocation condition for the pixel PM (which is the last compressed pixel in the pixel section S1) may be different from the bit budget allocation condition for the pixel PM+1 (which is the first compressed pixel in the pixel section S2). To avoid or reduce artifacts on the pixel boundary, the present invention further proposes a modified compression mechanism with compression orders set based on pixel boundary positions. FIG. 11 is a diagram illustrating a modified compression mechanism according to an embodiment of the present invention. As shown in FIG. 11, there are compression units CU1 and CU2 on one side of a pixel boundary and compression units CU3 and CU4 on the other side of the pixel boundary. The compression units CU1 and CU2 belong to one pixel group PG1, and the compression unit CU1 is nearer to the pixel boundary than the compression unit CU2. The compression units CU3 and CU4 belong to another pixel group PG2, and the compression unit CU3 is nearer to the pixel boundary than the compression unit CU4. In one exemplary embodiment, each of the compression units CU1-CU4 may include 4×2 pixels, and the compression units CU1-CU4 may be horizontally or vertically adjacent in a picture. When the modified compression mechanism is activated, each of the compressors 115_1 and 115_2 may be configured to set a compression order according to a position of each pixel boundary between different pixel groups. For example, the compressor 115_1 compresses the compression unit CU1 prior to compressing the compression unit CU2, and the compressor 115_2 compresses the compression unit CU3 prior to compressing the compression unit CU4. In other words, two adjacent pixel sections located at the same pixel line are compressed in opposite compression orders. Since the modified compression scheme starts the compression from compression units near the pixel boundary between adjacent pixel groups, the bit budget allocation conditions near the pixel boundary may be more similar. In this way, the image quality around the pixel boundary in a reconstructed picture can be improved. When the modified compression mechanism is activated at the AP side, the de-mapper 124 at the driver IC side may be configured to further consider the compression orders when merging the de-compressed pixel data groups DG3 and DG4.
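
A sketch of this boundary-first compression order is given below; compress_toward_interior and the stand-in compress_unit callback are hypothetical names for whatever compression algorithm the compressors actually run.

```python
def compress_toward_interior(units_left_to_right, boundary_on_right, compress_unit):
    # Start with the compression unit next to the pixel boundary and move away from it.
    ordered = list(reversed(units_left_to_right)) if boundary_on_right else list(units_left_to_right)
    return [compress_unit(u) for u in ordered]

# PG1's section ends at the boundary, so CU1 is compressed before CU2;
# PG2's section starts at the boundary, so CU3 is compressed before CU4.
stub_codec = lambda unit: "bits(" + unit + ")"
assert compress_toward_interior(["CU2", "CU1"], True, stub_codec) == ["bits(CU1)", "bits(CU2)"]
assert compress_toward_interior(["CU3", "CU4"], False, stub_codec) == ["bits(CU3)", "bits(CU4)"]
```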

In a case where the modified compression mechanism is employed, the flow shown in FIG. 8 may be modified to have step 808 replaced with the following step shown in FIG. 12.

Step 1202: Generate a plurality of compressed pixel data groups by splitting pixel data of a plurality of pixels of a picture into a plurality of pixel data groups based on the pixel data grouping setting and using the compressors to compress the pixel data groups according to compression orders set based on pixel boundary positions.

As a person skilled in the art can readily understand details of step 1202 after reading above paragraphs, further description is omitted here for brevity.

Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims

1. A data processing apparatus, comprising:

a mapper, configured to receive pixel data of a plurality of pixels of a picture, and split the pixel data of the pixels of the picture into a plurality of pixel data groups;
a plurality of compressors, configured to compress the pixel data groups and generate a plurality of compressed pixel data groups, respectively; and
an output interface, configured to pack the compressed pixel data groups into at least one output bitstream, and output the at least one output bitstream via a display interface.

2. The data processing apparatus of claim 1, wherein compression operations performed by the compressors are independent of each other.

3. The data processing apparatus of claim 2, further comprising:

a rate controller, configured to apply bit rate control to the compressors, respectively.

4. The data processing apparatus of claim 1, wherein the display interface is a display serial interface (DSI) standardized by a Mobile Industry Processor Interface (MIPI) or an embedded display port (eDP) standardized by a Video Electronics Standards Association (VESA).

5. The data processing apparatus of claim 1, wherein pixel data of each pixel of the picture includes a plurality of bits corresponding to different bit planes, and the mapper is configured to split the bits of the pixel data of each pixel of the picture into a plurality of bit groups, and distribute the bit groups to the pixel data groups, respectively.

6. The data processing apparatus of claim 1, wherein the mapper is configured to split the pixels of the picture into a plurality of pixel groups, and distribute pixel data of the pixel groups to the pixel data groups, respectively.

7. The data processing apparatus of claim 6, wherein adjacent pixels located at a same pixel line of the picture are distributed to different pixel groups, respectively.

8. The data processing apparatus of claim 6, wherein adjacent pixel segments located at a same pixel line of the picture are distributed to different pixel groups, respectively, and each of the adjacent pixel segments includes a plurality of successive pixels.

9. The data processing apparatus of claim 8, wherein at least one pixel line of the picture is divided into a plurality of pixel segments, and a number of the pixel segments is equal to a number of the pixel data groups.

10. The data processing apparatus of claim 8, wherein at least one pixel line of the picture is divided into a plurality of pixel segments, and a number of the pixel segments is larger than a number of the pixel data groups.

11. The data processing apparatus of claim 6, further comprising:

a rate controller, configured to apply bit rate control to the compressors, respectively;
wherein the rate controller adjusts the bit rate control according to a position of each pixel boundary between different pixel groups.

12. The data processing apparatus of claim 11, wherein concerning a specific pixel boundary between a first pixel group and a second pixel group, the rate controller is configured to increase an original bit budget assigned to a first compression unit by an adjustment value and decrease an original bit budget assigned to a second compression unit by the adjustment value; the first compression unit and the second compression unit are adjacent compression units in any of the first pixel group and the second pixel group; and the first compression unit is nearer to the specific pixel boundary than the second compression unit.

13. The data processing apparatus of claim 6, wherein each of the compressors is further configured to set a compression order according to a position of each pixel boundary between different pixel groups.

14. The data processing apparatus of claim 13, wherein concerning a specific pixel boundary between a first pixel group and a second pixel group, a first compressor is configured to compress a first compression unit prior to compressing a second compression unit, and a second compressor is configured to compress a third compression unit prior to compressing a fourth compression unit; the first compression unit and the second compression unit are adjacent compression units in the first pixel group, and the first compression unit is nearer to the specific pixel boundary than the second compression unit; and the third compression unit and the fourth compression unit are adjacent compression units in the second pixel group, and the third compression unit is nearer to the specific pixel boundary than the fourth compression unit.

15. The data processing apparatus of claim 1, wherein the data processing apparatus is coupled to another data processing apparatus via the display interface; and the data processing apparatus informs the another data processing apparatus of a pixel data grouping setting employed to split the pixel data of the pixels of the picture.

16. The data processing apparatus of claim 1, wherein the data processing apparatus is coupled to another data processing apparatus via the display interface, and the data processing apparatus further comprises:

a controller, configured to check a de-compression capability and requirement of the another data processing apparatus, and determine a number of the pixel data groups in response to a checking result.

17. A data processing apparatus, comprising:

an input interface, configured to receive at least one input bitstream from a display interface, and un-pack the at least one input bitstream into a plurality of compressed pixel data groups of a picture;
a plurality of de-compressors, configured to de-compress the compressed pixel data groups and generate a plurality of de-compressed pixel data groups, respectively; and
a de-mapper, configured to merge the de-compressed pixel data groups into pixel data of a plurality of pixels of the picture.

18. The data processing apparatus of claim 17, wherein de-compression operations performed by the de-compressors are independent of each other.

19. The data processing apparatus of claim 17, wherein the display interface is a display serial interface (DSI) standardized by a Mobile Industry Processor Interface (MIPI) or an embedded display port (eDP) standardized by a Video Electronics Standards Association (VESA).

20. The data processing apparatus of claim 17, wherein pixel data of each pixel of the picture includes a plurality of bits corresponding to different bit planes, and the de-mapper is configured to obtain a plurality of bit groups from the de-compressed pixel data groups, respectively, and merge the bit groups to obtain the bits of the pixel data of each pixel of the picture.

21. The data processing apparatus of claim 17, wherein the de-mapper is configured to obtain pixel data of a plurality of pixel groups from the de-compressed pixel data groups, respectively, and merge the pixel data of the pixel groups to obtain the pixel data of the pixels of the picture.

22. The data processing apparatus of claim 21, wherein adjacent pixels located at a same pixel line of the picture are obtained from different pixel groups, respectively.

23. The data processing apparatus of claim 21, wherein adjacent pixel segments located at a same pixel line of the picture are obtained from different pixel groups, respectively, and each of the adjacent pixel segments includes a plurality of successive pixels.

24. The data processing apparatus of claim 23, wherein at least one pixel line of the picture is obtained by merging a plurality of pixel segments, and a number of the pixel segments is equal to a number of the de-compressed pixel data groups.

25. The data processing apparatus of claim 23, wherein at least one pixel line of the picture is obtained by merging a plurality of pixel segments, and a number of the pixel segments is larger than a number of the de-compressed pixel data groups.

26. The data processing apparatus of claim 17, wherein the data processing apparatus is coupled to another data processing apparatus via the display interface; and the data processing apparatus receives a pixel data grouping setting of splitting the pixel data of the pixels of the picture from the another data processing apparatus.

27. The data processing apparatus of claim 17, wherein the data processing apparatus is coupled to another data processing apparatus via the display interface, and the data processing apparatus further comprises:

a controller, configured to inform the another data processing apparatus of a de-compression capability and requirement of the data processing apparatus.

28. A data processing method, comprising:

receiving pixel data of a plurality of pixels of a picture, and splitting the pixel data of the pixels of the picture into a plurality of pixel data groups;
compressing the pixel data groups to generate a plurality of compressed pixel data groups, respectively; and
packing the compressed pixel data groups into at least one output bitstream, and outputting the at least one output bitstream via a display interface.

29. A data processing method, comprising:

receiving at least one input bitstream from a display interface, and un-packing the at least one input bitstream into a plurality of compressed pixel data groups of a picture;
de-compressing the compressed pixel data groups to generate a plurality of de-compressed pixel data groups, respectively; and
merging the de-compressed pixel data groups into pixel data of a plurality of pixels of the picture.
Patent History
Publication number: 20150049099
Type: Application
Filed: Jul 21, 2014
Publication Date: Feb 19, 2015
Inventors: Chi-Cheng Ju (Hsinchu City), Tsu-Ming Liu (Hsinchu City)
Application Number: 14/337,198
Classifications
Current U.S. Class: Interface (e.g., Controller) (345/520)
International Classification: G09G 5/00 (20060101);