VIDEO DECODING METHOD, VIDEO DECODER, VIDEO ENCODING METHOD AND VIDEO ENCODER
A video decoding method includes: receiving a coding value; and performing the following steps according to an index value of the coding value: collecting a plurality of reference samples, grouping the plurality of reference samples to generate at least one group, establishing a model of the at least one group, obtaining a target pixel from a target block, selecting a target group from the at least one group, and introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel.
This application claims the benefit of U.S. provisional application Ser. No. 62/691,729, filed Jun. 29, 2018, and U.S. provisional application Ser. No. 62/727,595, filed Sep. 6, 2018, the subject matters of which are incorporated herein by reference.
TECHNICAL FIELD

The disclosure relates to a video decoding method, a video decoder, a video encoding method and a video encoder.
BACKGROUND

To enhance coding efficiency of video data, in international video coding standards, such as H.264/Advanced Video Coding (AVC), intra prediction is introduced to remove spatial information redundancy between a current coding image block and neighboring coded image blocks.
When video data in the YCbCr coding format is coded, encoding/decoding efficiency can be enhanced if the amount of coded data is further reduced.
SUMMARY

According to an exemplary embodiment of the disclosure, a video decoding method is provided, the method including: receiving a coding value; and performing the following steps according to an index value of the coding value: collecting a plurality of reference samples, grouping the reference samples to generate at least one group, establishing a model of the at least one group, obtaining a target pixel from a target block, selecting a target group from the at least one group according to the target pixel, and introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel.
According to an exemplary embodiment of the disclosure, a video encoding method is provided, the method including: collecting a plurality of reference samples, grouping the reference samples to generate at least one group, establishing a model of the at least one group, obtaining a target pixel from a target block, selecting a target group from the at least one group according to the target pixel, introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel, and generating a coding value, the coding value including an index value.
According to an exemplary embodiment of the disclosure, a video decoder is provided, the decoder including: a processor, for controlling the video decoder; a memory, for storing a plurality of reference samples and a target block; a decoding module; and an index receiving module, receiving a coding value. The processor, the memory, the decoding module and the index receiving module are coupled to one another. The decoding module performs the following steps according to an index value of the coding value: collecting a plurality of reference samples, grouping the reference samples to generate at least one group, establishing a model of the at least one group, obtaining a target pixel from a target block, selecting a target group from the at least one group according to the target pixel, and introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel.
According to an exemplary embodiment of the disclosure, a video encoder is provided, the encoder including: a processor, for controlling the video encoder; a memory, for storing a plurality of reference samples and a target block; an encoding module; and an index selecting module. The processor, the memory, the encoding module and the index selecting module are coupled to one another. The encoding module performs the following operations: collecting the reference samples, grouping the reference samples to generate at least one group, establishing a model of the at least one group, obtaining a target pixel from the target block, introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel, and generating a coding value. The index selecting module generates an index value from the coding value generated by the encoding module.
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
DETAILED DESCRIPTION

Technical terms of the application are based on their general definitions in the technical field of the application. If the application describes or explains one or some terms, the definitions of those terms are based on the description or explanation of the application. Each of the disclosed embodiments has one or more technical features. In possible implementations, a person skilled in the art would selectively implement part or all of the technical features of any embodiment of the application, or selectively combine part or all of the technical features of the embodiments of the application, based on the disclosure of the application and his/her own needs.
The target block TB refers to a block to be reconstructed, wherein a luminance value of the target block TB is known. In an exemplary embodiment of the disclosure, a chromaticity value of a target pixel of the target block TB can be predicted from a luminance value of the target pixel of the target block TB. After individual chromaticity values of all of the target pixels in the target block TB have been predicted, the target block TB is considered as having been reconstructed. In the description below, a linear model is taken as an example for illustration; however, the present application is not limited thereto.
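As an illustrative, non-normative sketch (the model parameters below are assumptions for illustration, not values from this disclosure), the linear-model prediction of a chromaticity value from a luminance value can be expressed as:

```python
# Hypothetical sketch of cross-component prediction: the chromaticity of a
# target pixel is predicted from its known luminance via a linear model
# C = alpha * Y + beta. The alpha/beta values here are illustrative only.

def predict_chroma(luma_value, alpha, beta):
    """Predict a chromaticity value from a luminance value."""
    return alpha * luma_value + beta

# Example: a model assumed to have been fitted from reference samples.
print(predict_chroma(200, 0.5, 100))  # 200.0
```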
In step 110, the reference samples are scanned to obtain a plurality of luminance values and a plurality of chromaticity values of the reference samples.
Principles for the grouping in step 120 are described below.
For example, determining whether to add a second reference sample of the reference samples into the first group according to a sample characteristic value of the second reference sample can be performed as described below. Two embodiments, i.e., two sets of determination equations, are given below; if it is determined that a reference sample Rn (where n is a positive integer) satisfies either set of the determination equations, the reference sample Rn is added into the group.
The first set of determination equations are: determining whether a luminance value YRn and a chromaticity value CRn of the reference sample Rn (where n is a positive integer) satisfy the following equations to determine whether the reference sample Rn belongs to the group Gi (where i=1 to A, and A represents the quantity of existing groups) (if all of the four equations below are satisfied, it is determined that the reference sample Rn belongs to the group Gi; that is, if any one of the equations is not satisfied, it is determined that the reference sample Rn does not belong to the group Gi): YRn>Yi_group_min−Ymargin; YRn<Yi_group_max+Ymargin; CRn>Ci_group_min−Cmargin; and CRn<Ci_group_max+Cmargin.
In the above, Yi_group_min represents a minimum luminance value in the group Gi, Ymargin and Cmargin respectively represent a luminance range threshold (which may be an existing constant value) and a chromaticity range threshold (which may be an existing constant value), Yi_group_max represents a maximum luminance value in the group Gi, Ci_group_min represents a minimum chromaticity value in the group Gi, and Ci_group_max represents a maximum chromaticity value in the group Gi.
The second set of determination equations are: determining whether the luminance value YRn and the chromaticity value CRn of the reference sample Rn satisfy the following equations to determine whether the reference sample Rn belongs to the group Gi (if all of the four equations below are satisfied, it is determined that the reference sample Rn belongs to the group Gi; that is, if any one of the equations is not satisfied, it is determined that the reference sample Rn does not belong to the group Gi): YRn>Yi_group_max−Ymargin; YRn<Yi_group_min+Ymargin; CRn>Ci_group_max−Cmargin; and CRn<Ci_group_min+Cmargin.
If the reference sample Rn does not fall within any one existing group, a new group is created and the reference sample Rn is assigned to the new group.
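The two sets of determination equations above can be sketched as follows; the `group` dictionary, the margin values and all names are illustrative assumptions, not notation from this disclosure:

```python
# Range thresholds (Ymargin / Cmargin); assumed constants for illustration.
Y_MARGIN = 16
C_MARGIN = 16

def belongs_first_set(y, c, group):
    """First set: compare against the group's min/max, widened by the margins."""
    return (y > group["y_min"] - Y_MARGIN and y < group["y_max"] + Y_MARGIN
            and c > group["c_min"] - C_MARGIN and c < group["c_max"] + C_MARGIN)

def belongs_second_set(y, c, group):
    """Second set: compare against the group's max/min, narrowed by the margins."""
    return (y > group["y_max"] - Y_MARGIN and y < group["y_min"] + Y_MARGIN
            and c > group["c_max"] - C_MARGIN and c < group["c_min"] + C_MARGIN)

g = {"y_min": 100, "y_max": 120, "c_min": 50, "c_max": 60}
print(belongs_first_set(110, 55, g))   # True
print(belongs_first_set(200, 55, g))   # False
```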
After all of the reference samples are grouped, associated linear models are respectively established for the groups.
A target group is selected from the groups according to the luminance value of the target pixel, and the chromaticity value of the target pixel is predicted by using a target linear model of the target group. For example, assuming that the luminance value of the target pixel falls between Yi_group_min and Yi_group_max, the group Gi is selected as the target group, and the chromaticity value of the target pixel is predicted by using the target linear model LMi of the target group Gi.
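A minimal sketch of target-group selection, under the assumption that the group whose luminance range contains the target pixel's luminance is the target group; the data layout and the fallback for an uncovered luminance value are illustrative design choices, not from the disclosure:

```python
def select_and_predict(y_target, groups):
    """Return the predicted chromaticity, or None if no group covers y_target."""
    for g in groups:
        if g["y_min"] <= y_target <= g["y_max"]:
            # Apply the target group's linear model LMi: C = alpha * Y + beta.
            return g["alpha"] * y_target + g["beta"]
    return None  # uncovered luminance: treated separately as an outlier case

groups = [
    {"y_min": 0,   "y_max": 100, "alpha": 1.0, "beta": 0.0},
    {"y_min": 101, "y_max": 255, "alpha": 0.5, "beta": 10.0},
]
print(select_and_predict(150, groups))  # 85.0
```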
Examples are described below. Initially after the first reference sample is scanned, a first group G1 is established, wherein the first group G1 currently includes only the first reference sample.
Then, a second reference sample R2 is scanned, and it is determined according to one of the two sets of equations above whether the second reference sample R2 belongs to the first group G1.
If it is determined that the second reference sample R2 belongs to the first group G1, the second reference sample R2 is assigned to the first group G1, and Y1_group_min (selecting the smaller between the luminance value of the first reference sample and the luminance value of the second reference sample), Y1_group_max (selecting the larger between the luminance value of the first reference sample and the luminance value of the second reference sample), C1_group_min (selecting the smaller between the chromaticity value of the first reference sample and the chromaticity value of the second reference value), and C1_group_max (selecting the larger between the chromaticity value of the first reference sample and the chromaticity value of the second reference value) are accordingly updated.
Alternatively, if it is determined that the second reference sample R2 does not belong to the first group G1, a new second group G2 is established (similarly, the second group G2 currently includes only the second reference sample R2). The above process is repeated until all of the reference samples have been grouped.
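The scan-and-group procedure described above can be sketched as follows, using the first set of determination equations; the margin values and data layout are illustrative assumptions:

```python
Y_MARGIN, C_MARGIN = 16, 16  # assumed constant thresholds

def group_samples(samples):
    """Scan (luma, chroma) reference samples; join a matching group or open a new one."""
    groups = []
    for y, c in samples:
        for g in groups:
            if (g["y_min"] - Y_MARGIN < y < g["y_max"] + Y_MARGIN and
                    g["c_min"] - C_MARGIN < c < g["c_max"] + C_MARGIN):
                # Sample belongs to this group: update the group's extrema.
                g["y_min"], g["y_max"] = min(g["y_min"], y), max(g["y_max"], y)
                g["c_min"], g["c_max"] = min(g["c_min"], c), max(g["c_max"], c)
                g["members"].append((y, c))
                break
        else:
            # Sample fits no existing group: create a new group for it.
            groups.append({"y_min": y, "y_max": y, "c_min": c, "c_max": c,
                           "members": [(y, c)]})
    return groups

gs = group_samples([(100, 50), (105, 52), (200, 120)])
print(len(gs))  # 2
```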
In other possible exemplary embodiments of the present application, the step of grouping the reference samples includes:
- (1) establishing a first group according to a first reference sample, assigning the first reference sample to the first group, and generating a minimum luminance value and a maximum luminance value of the first group, wherein both the minimum luminance value and the maximum luminance value of the first group are the luminance value of the first reference sample;
- (2) determining according to a luminance value of a second reference sample whether the second reference sample belongs to the first group, wherein the determination is made according to one of the two sets of equations below (if either set of equations is satisfied, the second reference sample is assigned to the first group): (A) the first set of determination equations: YRn>Yi_group_min−Ymargin and YRn<Yi_group_max+Ymargin; and (B) the second set of determination equations: YRn>Yi_group_max−Ymargin and YRn<Yi_group_min+Ymargin;
- (3) if the second reference sample belongs to the first group, updating the minimum luminance value of the first group as the minimum of the current minimum luminance value of the first group and the luminance value of the second reference sample, and updating the maximum luminance value of the first group as the maximum of the current maximum luminance value of the first group and the luminance value of the second reference sample; and
- (4) searching the established groups to determine whether the second reference sample belongs to any one of the groups; if the second reference sample does not belong to any of the groups, generating a second group, and generating a minimum luminance value and a maximum luminance value of the second group.
In other possible exemplary embodiments of the present application, the step of grouping the reference samples includes:
- (1) establishing a first group according to a first reference sample, assigning the first reference sample to the first group, and generating a minimum chromaticity value and a maximum chromaticity value of the first group, wherein both the minimum chromaticity value and the maximum chromaticity value of the first group are the chromaticity value of the first reference sample;
- (2) determining according to a chromaticity value of a second reference sample whether the second reference sample belongs to the first group, wherein the determination is made according to the two sets of equations below (if either set of equations is satisfied, the second reference sample is assigned to the first group): (A) the first set of determination equations: CRn>Ci_group_min−Cmargin and CRn<Ci_group_max+Cmargin; and (B) the second set of determination equations: CRn>Ci_group_max−Cmargin and CRn<Ci_group_min+Cmargin;
- (3) if the second reference sample belongs to the first group, updating the minimum chromaticity value of the first group as the minimum of the current minimum chromaticity value of the first group and the chromaticity value of the second reference sample, and updating the maximum chromaticity value of the first group as the maximum of the current maximum chromaticity value of the first group and the chromaticity value of the second reference sample; and
- (4) searching the established groups to determine whether the second reference sample belongs to any one of the groups; if the second reference sample does not belong to any of the groups, generating a second group, and generating a minimum chromaticity value and a maximum chromaticity value of the second group.
Further, in an embodiment of the present application, a reference block and a target block can be further segmented.
Further, in other exemplary embodiments of the present application, the step of grouping the reference samples can include: defining a quantity of the groups; establishing the groups of a fixed quantity, and calculating individual group characteristic values of the at least one group according to individual sample characteristic values of a plurality of reference samples included in the groups; and assigning the reference samples into the at least one group according to the individual sample characteristic values. The group characteristic value includes any combination of: a position, a representative luminance value, a representative chromaticity component value, a maximum luminance, a minimum luminance, a maximum chromaticity, a minimum chromaticity, and a reference sample quantity. The sample characteristic value includes any combination of: a luminance, at least one chromaticity component and a position.
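A hypothetical sketch of this fixed-quantity variant, under the assumption that the group characteristic value is a representative luminance and each reference sample is assigned to the group whose representative is nearest; the function and parameter names are illustrative:

```python
def assign_fixed_groups(samples, representatives):
    """Assign (luma, chroma) samples to a fixed number of groups, each
    described by a representative luminance value (its group characteristic)."""
    groups = [[] for _ in representatives]
    for y, c in samples:
        # Pick the group whose representative luminance is closest to y.
        idx = min(range(len(representatives)),
                  key=lambda i: abs(y - representatives[i]))
        groups[idx].append((y, c))
    return groups

out = assign_fixed_groups([(90, 40), (210, 110)], representatives=[100, 200])
print([len(g) for g in out])  # [1, 1]
```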
After the spatial position of the color discontinuity is identified, the reference block and the target block are segmented.
Alternatively:
- the reference sub-blocks RB1_1 and RB2_1 are referred to when the chromaticity value of the target sample of the target sub-block TB_1 is predicted;
- the reference sub-blocks RB1_2 and RB2_1 are referred to when the chromaticity value of the target sample of the target sub-block TB_2 is predicted;
- the reference sub-blocks RB1_3 and RB2_1 are referred to when the chromaticity value of the target sample of the target sub-block TB_3 is predicted;
- the reference sub-blocks RB1_1 and RB2_2 are referred to when the chromaticity value of the target sample of the target sub-block TB_4 is predicted;
- the reference sub-blocks RB1_2 and RB2_2 are referred to when the chromaticity value of the target sample of the target sub-block TB_5 is predicted; and
- the reference sub-blocks RB1_3 and RB2_2 are referred to when the chromaticity value of the target sample of the target sub-block TB_6 is predicted.

Details for predicting the chromaticity value are as described previously, and are omitted herein. That is, when any one of the target sub-blocks is predicted, at least one of the reference sub-blocks is excluded.
A color discontinuity occurs, for example, when a white reference sample is adjacent to a black reference sample, or when a red reference sample is adjacent to a blue reference sample.
In the present application, a color discontinuity is identified so as to segment a reference block and a target block. When the chromaticity value of a target sub-block is predicted, a reference sub-block having a more similar color is taken into consideration, and a reference sub-block having a less similar color is excluded.
That is to say, in one exemplary embodiment of the present application, a spatial position having a characteristic value discontinuity (e.g., a color discontinuity) is identified in order to segment the reference block and the target block; that is, the reference block is segmented into at least one reference sub-block, a group is established for the at least one reference sub-block, and a model is established by using the at least one reference sub-block.
Next, the reference sub-blocks RB1_1 and RB2_1 are referred to when the chromaticity value of the target sample of the target sub-block TB_1 is predicted, the reference sub-blocks RB1_2 and RB2_2 are referred to when the chromaticity value of the target sample of the target sub-block TB_2 is predicted, and the reference sub-block RB1_3 is referred to when the chromaticity value of the target sample of the target sub-block TB_3 is predicted. Details for predicting the chromaticity value are as described previously, and are omitted herein.
Alternatively, in other exemplary embodiments of the present application, the step of predicting the chromaticity value of the target pixel includes: segmenting the target block into at least one target sub-block according to a spatial position of a characteristic value discontinuity, selecting the target group according to the target sub-block to which the target pixel belongs, and predicting the chromaticity value of the target pixel by using the model of the target group.
In the above, N represents the quantity of reference samples in the group, and L(n) and C(n) respectively represent the luminance value and the chromaticity value of the reference sample Rn.
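The regression formula itself does not survive in this text; a standard least-squares fit consistent with these definitions of N, L(n) and C(n) can be sketched as follows (the function name is illustrative):

```python
def fit_linear_model(samples):
    """Least-squares fit of C = alpha * L + beta over (L, C) reference samples."""
    n = len(samples)
    sum_l = sum(l for l, _ in samples)
    sum_c = sum(c for _, c in samples)
    sum_lc = sum(l * c for l, c in samples)
    sum_ll = sum(l * l for l, _ in samples)
    denom = n * sum_ll - sum_l * sum_l  # assumes samples are not all equal in L
    alpha = (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta

a, b = fit_linear_model([(0, 10), (100, 60)])
print(a, b)  # 0.5 10.0
```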
Alternatively, another straight line equation is: Y=αx+β, where α and β are as shown below:
In the above, Cmax, Cmin, Lmax and Lmin respectively represent the maximum chromaticity value, the minimum chromaticity value, the maximum luminance value and the minimum luminance value of the reference samples.
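A sketch of the min-max straight-line model consistent with these definitions; anchoring the intercept at the (Lmin, Cmin) point is an assumption here, not a detail stated in this text:

```python
def fit_minmax(c_max, c_min, l_max, l_min):
    """Slope between the extreme (luminance, chromaticity) points; the
    intercept anchors the line at (l_min, c_min) -- an assumed choice."""
    alpha = (c_max - c_min) / (l_max - l_min)
    beta = c_min - alpha * l_min
    return alpha, beta

a, b = fit_minmax(c_max=60, c_min=10, l_max=100, l_min=0)
print(a, b)  # 0.5 10.0
```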
In the first approach, the chromaticity value is predicted by using a universal linear model (ULM). The “universal correlation model” and/or “universal linear model” is obtained according to individual center points of all of the groups.
In the second approach, an average value of predicted adjacent chromaticity values is used as the predicted chromaticity value of the target pixel regarded as an outlier. The term "predicted adjacent chromaticity value" refers to the predicted chromaticity value of a remaining target pixel having a similar luminance value.
In the third approach, an intermediate grayscale value is used as the predicted chromaticity value of the target pixel regarded as an outlier. For example, assuming that the pixel value is 10-bit, 512 is used as the predicted chromaticity value of the target pixel regarded as an outlier.
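This mid-grayscale fallback can be sketched as follows: for a bit depth of b, the midpoint of the sample range is 2^(b−1), which gives 512 for the 10-bit example above.

```python
def midgray_prediction(bit_depth):
    """Mid-grayscale value used as the outlier fallback prediction."""
    return 1 << (bit_depth - 1)

print(midgray_prediction(10))  # 512
print(midgray_prediction(8))   # 128
```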
Further, in one exemplary embodiment of the present application, if a luminance value of a previously processed target pixel is near the luminance value of the target pixel, the predicted chromaticity value of the previously processed target pixel is used as the predicted chromaticity value of the target pixel.
As described above, the exemplary embodiments of the present application effectively predict a chromaticity value. Thus, when video data in a YCbCr format is encoded, only Y data needs to be encoded, hence effectively reducing the bitrate of encoding. On the other hand, CbCr data can be predicted by using the above method. The exemplary embodiments of the present application are capable of solving the issue of poor efficiency of independently encoding a chromaticity value and enhancing the overall encoding efficiency. Therefore, the present application can be applied to products involving video compression related technologies, for example but not limited to, webcams, digital cameras, digital video cameras, handheld mobile devices and digital televisions.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.
Claims
1. A video decoding method, comprising:
- receiving a coding value, and performing steps below according to an index value of the coding value: collecting a plurality of reference samples; grouping the reference samples to generate at least one group; establishing a model of the at least one group; obtaining a target pixel from a target block; selecting a target group from the at least one group according to the target pixel; and introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel.
2. The video decoding method according to claim 1, wherein a quantity of the groups generated is one.
3. The video decoding method according to claim 1, wherein a quantity of the groups generated is any number more than two.
4. The video decoding method according to claim 1, wherein the step of grouping the reference samples comprises:
- calculating an average luminance value of the reference samples; and
- assigning, according to whether respective luminance values of the reference samples are greater than or smaller than the average luminance value, the reference samples into the at least one generated group.
5. The video decoding method according to claim 1, wherein the step of grouping the reference samples comprises:
- establishing a first group according to a first reference sample of the reference samples, and assigning the first reference sample into the first group;
- determining, according to a sample characteristic value of a second reference sample of the reference samples, whether to add the second reference sample into the first group;
- if it is determined to add the second reference sample into the first group, updating a group characteristic value of the first group; and
- if it is determined not to add the second reference sample into the first group, establishing a second group and calculating a group characteristic value of the second group.
6. The video decoding method according to claim 5, wherein the group characteristic value comprises any combination of: a position, a representative luminance value, a representative chromaticity component value, a maximum luminance, a minimum luminance, a maximum chromaticity, a minimum chromaticity and a reference sample quantity; and the sample characteristic value comprises any combination of: a luminance, at least one chromaticity component and a position.
7. The video decoding method according to claim 1, wherein the step of grouping the reference samples comprises:
- defining a quantity of the at least one group;
- establishing the at least one group of a fixed quantity, and calculating individual group characteristic values of the at least one group according to individual sample characteristic values of a plurality of reference samples of the at least one group; and
- assigning the reference samples into the at least one group according to the individual sample characteristic values.
8. The video decoding method according to claim 7, wherein the group characteristic value comprises any combination of: a position, a representative luminance value, a representative chromaticity component value, a maximum luminance, a minimum luminance, a maximum chromaticity, a minimum chromaticity and a reference sample quantity; and the sample characteristic value comprises any combination of: a luminance, at least one chromaticity component and a position.
9. The video decoding method according to claim 1, wherein the step of establishing the model of the at least one group comprises:
- establishing the model by applying a linear regression algorithm.
10. The video decoding method according to claim 1, wherein the step of establishing the model of the at least one group comprises:
- establishing the model by applying a straight line equation.
11. The video decoding method according to claim 1, wherein the step of establishing the model of the at least one group comprises:
- establishing the model by applying an averaging algorithm.
12. The video decoding method according to claim 1, wherein the step of selecting the target group from the at least one group comprises:
- identifying from the at least one group the target group nearest to the target pixel according to the luminance value of the target pixel.
13. The video decoding method according to claim 1, wherein, if a luminance value of a previously processed target pixel is near the luminance value of the target pixel, the chromaticity value of the target pixel is predicted by using a predicted chromaticity value of the previously processed target pixel.
14. The video decoding method according to claim 1, wherein:
- a spatial position of a characteristic value discontinuity is identified to segment a reference block and the target block, wherein the reference block is segmented into at least one reference sub-block, and a group is established for the at least one reference sub-block; and
- at least one model is established by using the at least one reference sub-block.
15. The video decoding method according to claim 1, wherein the step of predicting the chromaticity value of the target pixel comprises:
- segmenting the target block into at least one target sub-block according to a spatial position of a characteristic value discontinuity;
- selecting the target group according to the target sub-block to which the target pixel belongs; and
- predicting the chromaticity value of the target pixel by using the model of the target group.
16. The video decoding method according to claim 1, wherein the reference sample is collected from a left adjacent block and an upper adjacent block of the target block.
17. A video encoding method, comprising:
- collecting a plurality of reference samples;
- grouping the reference samples to generate at least one group;
- establishing a model of the at least one group;
- obtaining a target pixel from a target block;
- selecting a target group from the at least one group according to the target pixel;
- introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel; and
- generating a coding value, the coding value having an index value.
18. The video encoding method according to claim 17, wherein a quantity of the groups generated is one.
19. The video encoding method according to claim 17, wherein a quantity of the groups generated is any number more than two.
20. The video encoding method according to claim 17, wherein the step of grouping the reference samples comprises:
- calculating an average luminance value of the reference samples; and
- assigning, according to whether respective luminance values of the reference samples are greater than or smaller than the average luminance value, the reference samples into the at least one generated group.
21. The video encoding method according to claim 17, wherein the step of grouping the reference samples comprises:
- establishing a first group according to a first reference sample of the reference samples, and assigning the first reference sample into the first group;
- determining, according to a sample characteristic value of a second reference sample of the reference samples, whether to add the second reference sample into the first group;
- if it is determined to add the second reference sample into the first group, updating a group characteristic value of the first group; and
- if it is determined not to add the second reference sample into the first group, establishing a second group and calculating a group characteristic value of the second group.
22. The video encoding method according to claim 21, wherein the group characteristic value comprises any combination of: a position, a representative luminance value, a representative chromaticity component value, a maximum luminance, a minimum luminance, a maximum chromaticity, a minimum chromaticity and a reference sample quantity; and the sample characteristic value comprises any combination of: a luminance, at least one chromaticity component and a position.
23. The video encoding method according to claim 17, wherein the step of grouping the reference samples comprises:
- defining a quantity of the at least one group;
- establishing the at least one group of a fixed quantity, and calculating individual group characteristic values of the at least one group according to individual sample characteristic values of a plurality of reference samples of the at least one group; and
- assigning the reference samples into the at least one group according to the individual sample characteristic values.
24. The video encoding method according to claim 23, wherein the group characteristic value comprises any combination of: a position, a representative luminance value, a representative chromaticity component value, a maximum luminance, a minimum luminance, a maximum chromaticity, a minimum chromaticity and a reference sample quantity; and the sample characteristic value comprises any combination of: a luminance, at least one chromaticity component and a position.
25. The video encoding method according to claim 17, wherein the step of establishing the model of the at least one group comprises:
- establishing the model by applying a linear regression algorithm.
26. The video encoding method according to claim 17, wherein the step of establishing the model of the at least one group comprises:
- establishing the model by applying a straight line equation.
27. The video encoding method according to claim 17, wherein the step of establishing the model of the at least one group comprises:
- establishing the model by applying an averaging algorithm.
28. The video encoding method according to claim 17, wherein the step of selecting the target group from the at least one group comprises:
- identifying from the at least one group the target group nearest to the target pixel according to the luminance value of the target pixel.
29. The video encoding method according to claim 17, wherein, if a luminance value of a previously processed target pixel is near the luminance value of the target pixel, the chromaticity value of the target pixel is predicted by using a predicted chromaticity value of the previously processed target pixel.
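Claims 28–29 cover target-group selection (pick the group nearest to the target pixel's luminance) and reuse of a previously processed pixel's predicted chromaticity when its luminance is near the current one. A combined sketch, in which each group model is a (representative_luma, (a, b)) pair and the reuse tolerance is an illustrative assumption:

```python
def predict_chroma(luma, group_models, cache=None, reuse_tol=2):
    """Select the group whose representative luminance is nearest to the
    target pixel's luma (cf. claim 28) and evaluate its linear model; if the
    previously processed pixel's luma (cache[0]) is near enough, reuse its
    predicted chroma cache[1] instead (cf. claim 29)."""
    if cache is not None and abs(cache[0] - luma) <= reuse_tol:
        return cache[1]  # reuse the previous pixel's prediction
    rep, (a, b) = min(group_models, key=lambda m: abs(m[0] - luma))
    return a * luma + b
```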
30. The video encoding method according to claim 17, wherein:
- a spatial position of a characteristic value discontinuity is identified to segment a reference block and the target block, wherein the reference block is segmented into at least one reference sub-block, and a group is established for the at least one reference sub-block; and
- at least one model is established by using the at least one reference sub-block.
31. The video encoding method according to claim 17, wherein the step of predicting the chromaticity value of the target pixel comprises:
- segmenting the target block into at least one target sub-block according to a spatial position of a characteristic value discontinuity;
- selecting the target group according to the target sub-block to which the target pixel belongs; and
- predicting the chromaticity value of the target pixel by using the model of the target group.
32. The video encoding method according to claim 17, wherein the reference samples are collected from a left adjacent block and an upper adjacent block of the target block.
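Claim 32 locates the reference samples in the left and upper adjacent blocks of the target block. A sketch of that collection step, assuming reconstructed luma and chroma planes as 2-D lists indexed [row][col] and omitting border clipping for brevity:

```python
def collect_reference_samples(luma_plane, chroma_plane, x, y, w, h):
    """Gather reconstructed (luma, chroma) pairs from the row above and the
    column to the left of the w x h target block at (x, y) (cf. claim 32)."""
    samples = []
    if y > 0:  # upper adjacent block: the row just above the target block
        samples += [(luma_plane[y - 1][x + i], chroma_plane[y - 1][x + i])
                    for i in range(w)]
    if x > 0:  # left adjacent block: the column just left of the target block
        samples += [(luma_plane[y + j][x - 1], chroma_plane[y + j][x - 1])
                    for j in range(h)]
    return samples
```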
33. A video decoder, comprising:
- a processor, for controlling the video decoder;
- a memory, for storing a plurality of reference samples and a target block;
- a decoding module; and
- an index receiving module, receiving a coding value;
- wherein, the processor, the memory, the decoding module and the index receiving module are coupled to one another; and the decoding module performs operations below according to an index value of the coding value: collecting a plurality of reference samples; grouping the reference samples to generate at least one group; establishing a model of the at least one group; obtaining a target pixel from a target block; selecting a target group from the at least one group according to the target pixel; and introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel.
34. A video encoder, comprising:
- a processor, for controlling the video encoder;
- a memory, for storing a plurality of reference samples and a target block;
- an encoding module; and
- an index selecting module;
- wherein, the processor, the memory, the encoding module and the index selecting module are coupled to one another; and the encoding module performs operations below: collecting a plurality of reference samples; grouping the reference samples to generate at least one group; establishing a model of the at least one group; obtaining a target pixel from a target block; selecting a target group from the at least one group according to the target pixel; introducing a luminance value of the target pixel into a model of the target group to predict a chromaticity value of the target pixel; and generating a coding value; and
- the index selecting module generates an index value from the coding value generated by the encoding module.
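The operations the encoding module of claim 34 (and, symmetrically, the decoding module of claim 33) performs can be tied together in one self-contained, non-limiting sketch: group the reference samples by luminance, fit one least-squares line per group, then predict each target pixel's chromaticity with the model of the nearest group. The two-group equal-width binning is an expository choice:

```python
def cross_component_predict(reference_samples, target_lumas, num_groups=2):
    """End-to-end sketch of the claimed pipeline: group (luma, chroma)
    reference samples, establish one linear model per group, and predict the
    chroma of each target pixel via the model of its nearest group."""
    lo = min(l for l, _ in reference_samples)
    hi = max(l for l, _ in reference_samples)
    span = max(hi - lo, 1)
    bins = [[] for _ in range(num_groups)]
    for l, c in reference_samples:  # equal-width luma binning
        bins[min((l - lo) * num_groups // span, num_groups - 1)].append((l, c))
    models = []  # (representative_luma, slope, intercept) per non-empty group
    for b in bins:
        if not b:
            continue
        n = len(b)
        sx = sum(l for l, _ in b); sy = sum(c for _, c in b)
        sxx = sum(l * l for l, _ in b); sxy = sum(l * c for l, c in b)
        d = n * sxx - sx * sx
        a = (n * sxy - sx * sy) / d if d else 0.0  # least-squares slope
        models.append((sx / n, a, (sy - a * sx) / n))
    preds = []
    for l in target_lumas:  # introduce each target luma into the nearest model
        rep, a, b = min(models, key=lambda m: abs(m[0] - l))
        preds.append(a * l + b)
    return preds
```

In the claimed encoder, the index selecting module would then signal which prediction mode produced the coding value; that signaling step is outside the scope of this sketch.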
Type: Application
Filed: Jun 19, 2019
Publication Date: Jan 2, 2020
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE (Hsinchu)
Inventors: Sheng-Po WANG (Taoyuan City), Chun-Lung LIN (Taipei City), Ching-Chieh LIN (Taipei City), Chang-Hao YAU (New Taipei City), Po-Han LIN (Taipei City)
Application Number: 16/446,174