ERROR-FLOOR MITIGATION OF LDPC CODES USING TARGETED BIT ADJUSTMENTS

- LSI CORPORATION

Embodiments of the present invention are methods for breaking one or more trapping sets in a near codeword of a failed graph-based decoder, e.g., an LDPC decoder. The methods determine, from among all bit nodes associated with one or more unsatisfied check nodes in the near codeword, which bit nodes, i.e., the suspicious bit nodes or SBNs, are most likely to be erroneous bit nodes. The methods then perform a trial in which the values of one or more SBNs are altered and decoding is re-performed. If the trial does not converge on the decoded correct codeword (DCCW), then other trials are performed until either (i) the decoder converges on the DCCW or (ii) all permitted combinations of SBNs are exhausted. The starting state of a particular trial, and the set of SBNs available to that trial, may change depending on the results of previous trials.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of the filing date of U.S. provisional application No. 61/089,297, filed on Aug. 15, 2008 as attorney docket no. 08-0241-PR, the teachings of which are incorporated herein by reference in their entirety.

The subject matter of this application is related to (1) the subject matter of PCT application no. PCT/US08/86523 filed on Dec. 12, 2008 as attorney docket no. 08-0241 and (2) the subject matter of PCT application no. PCT/US08/86537 filed on Dec. 12, 2008 as attorney docket no. 08-1293, the teachings of both of which are incorporated herein by reference in their entirety.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to digital signal processing, and, in particular, to a data-encoding method known as low-density parity check (LDPC) coding.

2. Description of the Related Art

Communication is the transmission of information by a transmitter to a receiver over a communications channel. In the real world, the communications channel is a noisy channel, outputting to the receiver a distorted version of the information received from the transmitter. A hard disk (HD) drive is one such noisy channel, accepting information from a transmitter, storing that information, and then, possibly, transmitting a more or less distorted copy of that information to a receiver.

The distortion introduced by a communications channel such as an HD drive might be great enough to cause a channel error, i.e., where the receiver interprets the channel output signal as a 1 when the channel input signal was a 0, and vice versa. Channel errors reduce throughput, and are thus undesirable. Hence, there is an ongoing need for tools which detect and/or correct channel errors. Low-density parity check (LDPC) coding is one method for the detection and correction of channel errors.

LDPC codes are among the known near-Shannon-limit codes that can achieve very low bit-error rates (BER) for low signal-to-noise ratio (SNR) applications. LDPC decoding is distinguished by its potential for parallelization, low implementation complexity, low decoding latency, and less-severe error floors at high SNRs. LDPC codes are being considered for virtually all next-generation communication standards.

SUMMARY OF THE INVENTION

In one embodiment, the present invention is a method for decoding encoded data using bit nodes and check nodes. The method performs iterative decoding on the encoded data to generate an original near codeword (NCW) having one or more unsatisfied check nodes (USCs), each USC having one or more associated bit nodes (ABNs), the one or more ABNs for the one or more USCs forming a set of ABNs. Next, the method selects, from the set of ABNs, a first set of suspicious bit nodes (SBNs) that may be erroneous bit nodes for the original NCW. Then, the method adjusts at least one of the SBNs in the first set to generate a modified NCW, and performs iterative decoding on the modified NCW to attempt to generate a decoded correct codeword (DCCW) for the encoded data.

BRIEF DESCRIPTION OF THE DRAWINGS

Other aspects, features, and advantages of the invention will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements.

FIG. 1 is a block diagram of a typical hard disk (HD) drive 100.

FIG. 2(A) depicts LDPC H matrix 200, and FIG. 2(B) is a Tanner graph of H matrix 200.

FIG. 3 is a flowchart of typical LDPC decoding method 300 used by decoder 112.

FIG. 4 is a block diagram of LDPC decoding system 400 according to one embodiment of the invention.

FIG. 5 is a flowchart of exemplary targeted bit-flipping process 500 used by post-processor 404 of FIG. 4, according to one embodiment of the present invention.

FIG. 6 is a flow diagram of step 514 of FIG. 5.

FIG. 7 is a flow diagram of step 516 of FIG. 5.

FIG. 8 is a flow diagram of step 518 of FIG. 5.

FIG. 9 is a flow diagram of step 532 of FIG. 5.

DETAILED DESCRIPTION

FIG. 1 is a block diagram of a portion of a typical hard disk (HD) drive 100 that utilizes LDPC coding. HD drive 100 comprises platters 102 and read channel 104. Read channel 104 comprises LDPC encoder 106, write processor 108, read processor 110, and LDPC decoder 112. Path 114 is the noisy channel between LDPC encoder 106 and LDPC decoder 112.

Information words to be written to platters 102 are processed by LDPC encoder 106 to yield LDPC codewords. LDPC codewords are sent to write processor 108, which comprises a number of modules, e.g., a BPSK (binary phase-shift keying) encoder, a digital-to-analog converter, etc. Output 116 of write processor 108 is written to platters 102.

Signals 118 read from platters 102 are sent to read processor 110, which comprises a number of modules, e.g., a pre-amplifier, a continuous-time filter, a fixed-impulse response filter, a detector, an analog-to-digital converter, etc. Read processor 110 outputs log-likelihood ratio (LLR) values Lch to LDPC decoder 112, which in turn outputs decoded information words. Additionally, LDPC decoder 112 sends ELDPC values back to read processor 110. ELDPC values are defined in Equation 6 below and represent intermediate calculated LLR values. Read processor 110 uses the ELDPC values to tune its performance, a process known as turbo-equalization.

LDPC Encoding

LDPC encoder 106 appends to the bits of an information word a number of parity bits specified by the LDPC code, to yield a codeword. The bits in an information word are known as variable bits, and the number of those variable bits is denoted K. The total number of bits in an LDPC codeword is denoted N. Thus, the number of parity bits is given by N−K. The rate of a particular LDPC code is K/N, i.e., the ratio of the information word length to the codeword length. Thus, an LDPC code which appends six parity bits to each three-bit information word to yield a nine-bit codeword has a rate of ⅓. In the case of a typical HD drive, the information word length K is 4096 bits (the length of a typical HD drive sector), and the number of parity bits is approximately 410 bits, for a codeword length of 4506 bits and a rate of 0.9.

Each parity bit in an LDPC codeword is associated with one or more other (variable or parity) bits in that codeword in a particular way as specified by the particular LDPC code, and the value assigned to a parity bit is set so as to satisfy the LDPC code. Typical LDPC codes specify that associated bits satisfy a parity check constraint, e.g., the sum of the associated bits is an even number, i.e., sum modulo 2=0.

The LDPC Code

A particular LDPC code is defined by a two-dimensional matrix of 1s and 0s known as the parity check matrix, or H matrix, or simply H. H is known, a priori, by both the LDPC encoder and decoder. H comprises N columns and N−K rows, i.e., a column for every bit of the codeword, and a row for every parity bit. Each 1 in H represents an association between the codeword bit of the column and the parity bit of the row. For example, a 1 at the third row, seventh column of H means that the third parity check bit is associated with the seventh bit of the codeword. The modulo 2 sum of the value of a check bit and all variable bits associated with that check bit should be 0.

The number of 1s in a column of H is known as the weight wc of that column. Similarly, the number of 1s in a row of H is known as the weight wr of that row. The LDPC code defined by an H wherein all columns have the same wc and all rows have the same wr is known as a regular LDPC code. An LDPC code defined by an H where wc and/or wr are not the same across all columns and/or rows, respectively, is known as an irregular LDPC code.

A defining characteristic of typical LDPC codes is that H is “sparse,” i.e., the elements of H are mostly 0s with few 1s. Research has shown that H matrices typically need wc≧3 in order to perform well, and that irregular LDPC codes outperform regular LDPC codes.

FIG. 2(A) depicts LDPC H matrix 200. H matrix 200 comprises N=9 columns and N−K=6 rows. Thus, H matrix 200 defines an LDPC code which accepts a three-bit information word, appends six parity bits, and outputs a nine-bit codeword. Thus, the rate of this particular LDPC code is 3/9 or ⅓. The LDPC code defined by H matrix 200 is regular, with a wc of two, and a wr of three.
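For illustration only, the following Python sketch shows the bookkeeping described above (dimensions, rate, and column/row weights) on a small hypothetical regular parity-check matrix; the matrix shown is an assumed example, not the actual H matrix 200 of FIG. 2(A).

```python
import numpy as np

# A hypothetical regular parity-check matrix with N = 9 columns and
# N - K = 6 rows (so K = 3, rate K/N = 1/3), column weight wc = 2 and
# row weight wr = 3.  This is NOT the actual H matrix 200 of FIG. 2(A);
# it only illustrates the bookkeeping described in the text.
H = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0, 1],
], dtype=np.uint8)

n_minus_k, n = H.shape          # rows = parity bits, columns = codeword bits
k = n - n_minus_k
rate = k / n                    # 3/9 = 1/3 for this example

wc = H.sum(axis=0)              # column weights
wr = H.sum(axis=1)              # row weights
is_regular = len(set(wc)) == 1 and len(set(wr)) == 1

print(f"N={n}, K={k}, rate={rate:.3f}, wc={wc[0]}, wr={wr[0]}, regular={is_regular}")
```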

Channel Output: Log-Likelihood Ratios

Returning to FIG. 1, the path 114 between LDPC encoder 106 and LDPC decoder 112 is a noisy channel, and, as such, decoder 112 does not receive a perfect copy of the codewords outputted by LDPC encoder 106. Instead, read processor 110 outputs one or more Lch values, where each Lch value corresponds to a bit in the channel input codeword.

Each Lch value is a log-likelihood ratio (LLR). An LLR is a data structure comprising a number of bits, where a single sign bit indicates the hard decision (i.e., read processor 110's best guess as to whether the original bit was a 1 or a 0), and the remaining magnitude bits indicate read processor 110's degree of confidence in that hard decision. More precisely, the LLR represents

$\log \frac{p_{0}}{p_{1}},$

where p0 is the probability that the sample represents a 0, and p1 is the probability that the sample represents a 1.

For example, read processor 110 might output each Lch value as a five-bit data structure, where the most-significant bit is a sign bit which indicates the hard-decision value, and the 16 values of the four magnitude bits indicate the confidence of the hard decision. Thus, for example, in one typical scheme, an LLR value of binary 00000 would indicate a hard-decision value of 0 with least confidence, a value of binary 01111 would indicate a hard-decision value of 0 with maximum confidence, binary 10000 would be unused, binary 10001 would indicate a hard-decision value of 1 with least confidence, and a value of binary 11111 would indicate a hard-decision value of 1 with maximum confidence.
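For illustration only, the following Python sketch packs and unpacks LLR values in the five-bit sign-magnitude format of the example above; the function names are hypothetical, and the scheme is only one of the typical schemes mentioned in the text.

```python
def encode_llr(hard_bit, confidence):
    """Pack a hard decision (0 or 1) and a 0..15 confidence into the
    5-bit sign/magnitude code described in the text (a sketch of one
    typical scheme; code 0b10000 is left unused)."""
    assert hard_bit in (0, 1) and 0 <= confidence <= 15
    return (hard_bit << 4) | confidence

def decode_llr(code5):
    """Return (hard_bit, confidence) from a 5-bit sign/magnitude code."""
    return (code5 >> 4) & 1, code5 & 0xF

# 0b00000 -> hard 0, least confidence; 0b01111 -> hard 0, max confidence
# 0b10001 -> hard 1, least confidence; 0b11111 -> hard 1, max confidence
assert decode_llr(0b01111) == (0, 15)
assert decode_llr(0b10001) == (1, 1)
assert encode_llr(1, 15) == 0b11111
```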

LDPC Decoding: Belief Propagation

FIG. 3 is a flowchart of typical LDPC decoding method 300 used by decoder 112. LDPC decoder 112 receives N Lch values and outputs a decoded information word. The heart of decoding method 300 is an iterative, two-phase message-passing algorithm called belief propagation. Belief propagation is best explained with the use of a visualization called a Tanner graph.

FIG. 2(B) is a Tanner graph of H matrix 200. In general, a Tanner graph comprises 1) a number of bit nodes n equal to the number of columns in H (and thus equal to the number of variable bits), 2) a number of check nodes m equal to the number of rows in H (and thus equal to the number of parity bits), 3) lines, also known as edges, each of which connects a single bit node to a single check node, 4) for each bit node n, the original Lch value received from a receiver, and 5) for each bit node n, a calculated hard-decision output value {circumflex over (x)}n. The Tanner graph of FIG. 2(B) comprises nine bit nodes n0-n8, six check nodes m0-m5, 18 edges 202 connecting bit nodes to check nodes, nine Lch values, and nine {circumflex over (x)}n values.

The edges in a Tanner graph represent the relationships between (i.e., variable) bit nodes n and check nodes m, i.e., edges represent 1s in H. For example, in FIG. 2(B), an edge 202 connects first bit node n0 to fourth check node m3, meaning that there is a 1 in the first column, fourth row of H matrix 200 in FIG. 2(A).

A Tanner graph is a bipartite graph, i.e., an edge can connect a bit node to only a check node, and cannot connect a bit node to another bit node, or a check node to another check node. The set of all bit nodes n connected by edges to a particular check node m is denoted N(m). The set of all check nodes m connected by edges to a particular bit node n is denoted M(n).

The index of a particular (bit or check) node is its ordinal sequence in the graph. The degree of a (bit or check) node is the number of edges connected to that node. Thus, the degree of bit node n in a Tanner graph is equal to the weight wc of column n in the corresponding H matrix, and the degree of check node m in a Tanner graph is equal to the weight wr of row m in the corresponding H matrix.

Returning to FIG. 3, processing starts at step 302 and proceeds to step 304, decoder initialization. Decoder initialization 304 comprises setting all edges (e.g., 202 of FIG. 2(B)) connected to bit node n to the corresponding Lch value associated with bit node n, and setting the {circumflex over (x)}n value of bit node n to the hard-decision value of bit node n's Lch. Thus, for example, in FIG. 2(B), if the Lch value associated with bit node n0 is +5, then, at step 304, the two edges 202 connecting bit node n0 to check nodes m0 and m3 are set to +5, and bit node n0's {circumflex over (x)}n value is set to 0 (the hard-decision value of a positive LLR). An alternative way of expressing the first part of this step is that bit node n0 sends a message of +5 to each check node m in set M(n0). A message sent from a bit node n to a check node m is denoted Qnm.

Step 304 then sends to syndrome check step 306 a vector {circumflex over (x)} comprising N {circumflex over (x)}n values. Vector {circumflex over (x)} is a codeword candidate. Syndrome check step 306 calculates syndrome vector z using the following Equation 1:


$z = \hat{x} H^{T}$   (1)

where HT is the transpose of the H matrix. If z is a 0 vector, then vector {circumflex over (x)} has satisfied all the parity check constraints defined by H, i.e., {circumflex over (x)} is a valid codeword. In that case, processing proceeds to cyclic-redundancy check (CRC) check 318.

If, instead, z is not a 0 vector, then vector {circumflex over (x)} fails one or more of the parity check constraints, which are typically referred to as unsatisfied check nodes or USCs. The number of elements in syndrome vector z that are not 0 scalar values is the number b of USCs in vector {circumflex over (x)}. Further, the indices of the non-zero scalar elements of syndrome vector z are the indices of the USCs in vector {circumflex over (x)}.
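For illustration only, the following Python sketch performs the syndrome check of Equation (1) and recovers the number b and the indices of the USCs; the small H shown is an assumed example.

```python
import numpy as np

def syndrome_check(x_hat, H):
    """Compute z = x_hat * H^T (mod 2) per Equation (1) and return the
    syndrome, the number b of unsatisfied check nodes (USCs), and their
    indices.  A sketch: x_hat is a length-N 0/1 vector, H is the
    (N-K) x N parity-check matrix."""
    z = (x_hat @ H.T) % 2            # syndrome vector
    usc_indices = np.flatnonzero(z)  # rows (check nodes) that failed
    return z, len(usc_indices), usc_indices

# Usage: the all-zeros vector is a valid codeword of any linear code,
# so its syndrome has b = 0 USCs.
H = np.array([[1, 1, 0, 1, 0], [0, 1, 1, 0, 1]], dtype=np.uint8)
z, b, usc = syndrome_check(np.zeros(5, dtype=np.uint8), H)
assert b == 0
```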

If vector {circumflex over (x)} fails syndrome check 306, then processing continues to the first of one or more decoding iterations 308. Decoding iteration 308 comprises three steps: 1) a belief-propagation check-node update step 310, 2) a belief-propagation bit-node update step 312, and 3) a syndrome check step 314, which is identical to step 306.

In belief-propagation check-node update step 310, each check node m uses the Qnm messages received from all bit nodes n in set N(m) to calculate messages, denoted Rmn, according to the following Equations 2, 3, and 4:

$R_{mn}^{(i)} = \delta_{mn}^{(i)} \max\left(\kappa_{mn}^{(i)} - \beta,\ 0\right)$   (2)
$\kappa_{mn}^{(i)} = \left|R_{mn}^{(i)}\right| = \min_{n' \in N(m) \setminus n} \left|Q_{n'm}^{(i-1)}\right|$   (3)
$\delta_{mn}^{(i)} = \prod_{n' \in N(m) \setminus n} \operatorname{sgn}\left(Q_{n'm}^{(i-1)}\right)$   (4)

where i is the decoding iteration, N(m)\n is set N(m) excluding bit node n, and β is a positive constant, the value of which depends on the code parameters. The calculated Rmn are then sent back along those same edges to all bit nodes n in set N(m).

Next, in belief-propagation bit-node update step 312, each bit node n calculates Qnm messages according to the following Equation 5:

$Q_{nm}^{(i)} = L_{n}^{(0)} + \sum_{m' \in M(n) \setminus m} R_{m'n}^{(i)}$   (5)

where Ln(0) is the Lch value for bit node n, and M(n)\m is set M(n) excluding check node m. Bit node n then sends the calculated Qnm messages to all check nodes m in set M(n).

Also during bit-node update step 312, each bit node n updates its {circumflex over (x)}n value according to the following Equations 6 and 7:

$E_{n}^{(i)} = \sum_{m \in M(n)} R_{mn}^{(i)}$   (6)
$P_{n} = L_{n}^{(0)} + E_{n}^{(i)}$   (7)

If Pn≧0, then {circumflex over (x)}n=0, and if Pn<0, then {circumflex over (x)}n=1. The values generated by Equation 6 are also referred to as E values or ELDPC. Typically, ELDPC values are sent back to the read processor (e.g., 110 of FIG. 1) as part of a tuning process known as turbo-equalization. The specific belief-propagation algorithm represented by Equations 2-6 is known as the offset min-sum algorithm.
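For illustration only, the following Python sketch carries out one iteration of the check-node and bit-node updates of Equations 2-7 on a generic parity-check matrix; it is a simplified model of the message-passing rules, not the decoder of FIG. 3, and the function name and data layout are assumptions.

```python
import numpy as np

def offset_min_sum_iteration(H, Lch, Q, beta=0.5):
    """One decoding iteration of the offset min-sum rules of Equations 2-7
    (a sketch).  H is the parity-check matrix, Lch the channel LLRs, and
    Q[m][n] the current bit-to-check messages (meaningful only where
    H[m][n] == 1).  Assumes every check node has degree >= 2."""
    M, N = H.shape
    R = np.zeros_like(Q, dtype=float)

    # Check-node update (Equations 2-4): for each edge (m, n), combine the
    # other incoming Q messages by min magnitude and sign product, less beta.
    for m in range(M):
        nbrs = np.flatnonzero(H[m])
        for n in nbrs:
            others = [n2 for n2 in nbrs if n2 != n]
            kappa = min(abs(Q[m][n2]) for n2 in others)
            delta = np.prod([np.sign(Q[m][n2]) or 1.0 for n2 in others])
            R[m][n] = delta * max(kappa - beta, 0.0)

    # Bit-node update (Equations 5-7): new Q messages, P values, hard decisions.
    Q_new = np.zeros_like(Q, dtype=float)
    P = np.zeros(N)
    for n in range(N):
        chks = np.flatnonzero(H[:, n])
        E_n = sum(R[m][n] for m in chks)             # Equation 6
        P[n] = Lch[n] + E_n                          # Equation 7
        for m in chks:
            Q_new[m][n] = Lch[n] + (E_n - R[m][n])   # Equation 5
    x_hat = (P < 0).astype(np.uint8)                 # P >= 0 -> 0, P < 0 -> 1
    return R, Q_new, P, x_hat

# Usage sketch: initialize Q[m][n] = Lch[n] on every edge, e.g.
# Q0 = np.tile(np.asarray(Lch, float), (H.shape[0], 1)), then call the
# function repeatedly, feeding Q_new back in as Q.
```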

Note that {circumflex over (x)}n is updated during each decoding iteration 308 and finally outputted by decoding process 300. The original LLR values Lch remain unchanged during decoding process 300. In other words, during each decoding iteration 308, each bit node n casts its vote as to the proper value of all the other bit nodes n to which it is associated via a check node m. For example, in FIG. 2(B), bit node n0 is associated with check nodes m0 and m3. Therefore, n0 will cast its vote as to the proper values of the bit nodes associated with check nodes m0 and m3, i.e., n3, n5, n6, and n7. The greater the magnitude of bit node n's Lch value (i.e., the greater the confidence), the more bit node n's vote counts. The net effect of this vote-casting is that the {circumflex over (x)}n value of a bit node with a low Lch magnitude (i.e., low confidence) will change and conform to the beliefs of the high-confidence bit nodes with which that bit node is associated. In other words, if a bit node's Lch value contains an erroneous hard-decision value and a low magnitude, then the combined votes of the other bit nodes will tend, after one or more iterations, to correct that erroneous hard-decision value.

There are two types of LDPC decoding: non-layered and layered. In a non-layered decoder (e.g., 300 of FIG. 3), check-node updates (e.g., 310 of FIG. 3) for all check nodes in the codeword are performed serially, and then bit-node updates (e.g., 312 of FIG. 3) for all bit nodes in the codeword are performed serially, or vice versa.

In layered decoding, the check-node/bit-node update cycle is performed on subgraphs of H known as layers. A layer is a set of check nodes, i.e., rows of H, which have no bit nodes in common. H matrix 200 in FIG. 2(A) is constructed such that the first three check nodes (i.e., the first three rows) do not update common bit nodes, and thus form a first layer. The fourth, fifth, and sixth check nodes (i.e., the fourth, fifth, and sixth rows) are similarly constructed and thus form a second layer. Thus, one iteration of layered decoding of H matrix 200 would involve (i) performing the check-node updates for all check nodes in layer 1, (ii) performing bit-node updates of all bit nodes in layer 1, (iii) performing check-node updates for all check nodes in layer 2, and (iv) performing bit-node updates of all bit nodes in layer 2. The combination of steps (i) and (ii) and the combination of steps (iii) and (iv) are known as sub-iterations.
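For illustration only, the following Python sketch groups the rows of H into layers with no bit nodes in common using a simple greedy first-fit rule; the grouping rule is an assumption made for illustration and is not part of the embodiments described above.

```python
import numpy as np

def partition_into_layers(H):
    """Greedily group check nodes (rows of H) into layers such that the rows
    within a layer have no bit nodes (columns) in common -- a sketch of the
    layer structure described above, not a production scheduler."""
    layers = []
    for m, row in enumerate(H):
        cols = set(np.flatnonzero(row))
        for layer in layers:
            if cols.isdisjoint(layer["cols"]):
                layer["rows"].append(m)
                layer["cols"] |= cols
                break
        else:
            layers.append({"rows": [m], "cols": set(cols)})
    return [layer["rows"] for layer in layers]

# For an H constructed like H matrix 200 (first three rows disjoint, last
# three rows disjoint), this yields two layers: [0, 1, 2] and [3, 4, 5].
```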

Bit-node update step 312 sends to syndrome check step 314 a vector {circumflex over (x)} constructed out of the current {circumflex over (x)}n values of the decoder. The syndrome check of step 314 is identical to the syndrome check of step 306 discussed above. If vector {circumflex over (x)} passes syndrome check 314, then vector {circumflex over (x)} is sent to CRC step 318.

LDPC Decoding: Cyclic Redundancy Check and Mis-Satisfied Check Nodes

Passing syndrome check 306 or 314 means only that vector {circumflex over (x)} is a valid codeword, but not necessarily the decoded correct codeword (DCCW). It is possible for an LDPC decoder to generate a valid codeword which is not the DCCW. In that case, there are no USCs in vector {circumflex over (x)}, but there are mis-satisfied check nodes (MSCs). Thus, to ensure that valid vector {circumflex over (x)} is the DCCW, process 300 passes vector {circumflex over (x)} to cyclic redundancy check (CRC) 318. A CRC check is a checksum operation which can detect alteration of data during transmission or storage.

If vector {circumflex over (x)} passes the CRC check, then vector {circumflex over (x)} is the DCCW, and process 300 sets global variable DCCW to true, outputs vector {circumflex over (x)}, and terminates at step 320. Otherwise, vector {circumflex over (x)} is not the DCCW, and process 300 sets global variable DCCW to false, outputs vector {circumflex over (x)}, and terminates at step 320. Global variable DCCW informs other decoding processes whether or not the DCCW has been generated.

Returning to step 314, if vector {circumflex over (x)} fails the syndrome check, then vector {circumflex over (x)} still contains one or more USCs. The typical method for resolving USCs is to perform another decoding iteration 308. However, there might exist one or more USCs in a particular vector {circumflex over (x)} which will never be satisfied in a reasonable amount of time. Thus, LDPC decoders are typically limited in how many decoding iterations they can perform on a particular vector {circumflex over (x)}. Typical values for the maximum number of iterations range from 50 to 200.

In FIG. 3, step 316 determines whether the maximum number of iterations has been reached. If not, then another decoding iteration 308 is performed. If, instead, the maximum number of iterations has been reached, then decoder process 300 has failed. In that case, process 300 sets global variable DCCW to false, outputs vector {circumflex over (x)}, and terminates at step 320.

A complete execution of process 300 is known as a decoding session.

BER, SNR, and Error Floors

The bit-error rate (BER) of an LDPC decoder is a ratio which expresses how many erroneously decoded bits will be generated for x number of bits processed. Thus, for example, a decoder with a BER of 10−9 will, on average, generate one erroneous bit for every billion bits processed. The smaller the BER, the better the decoder. The BER of an LDPC decoder increases (worsens) when the decoder fails, i.e., terminates without converging on the decoded correct codeword DCCW.

The BER of an LDPC decoder is strongly influenced by the signal-to-noise ratio (SNR) of the decoder's input signal. A graph of BER as a function of SNR typically comprises two distinct regions: an initial “waterfall” region where the BER improves (decreases) rapidly given a unit increase in SNR, and a subsequent “error floor” region where unit increases in SNR yield only modest improvements in BER. Thus, achieving significant BER improvements in the error floor region requires methods other than SNR increase.

One method for improving the error-floor characteristics of an LDPC decoder is to increase the codeword length. However, increasing codeword length also increases the memory and other computing resources required for LDPC decoding. Thus, if such resources are strictly limited, as is typically the case with the read-channel devices on HD drives, then other methods must be found to yield the necessary error-floor improvement.

Another scarce resource is processing cycles. Typically, to achieve a specified throughput, an HD drive budgets a fixed number of read-channel processing cycles for decoding a codeword. Methods which exceed that budget (i.e., off-the-fly methods) decrease the throughput. More desirable are on-the-fly methods which recover the DCCW within the clock-cycle allotment and thus do not decrease the throughput.

Trapping Sets, Flipping, Erasing, and Breaking

In a typical LDPC-decoding session, the decoder converges on the DCCW within the first several decoding iterations. When, instead, an LDPC decoder fails to converge on the DCCW within a specified maximum number of iterations, it is typically due to one of two scenarios. In the first scenario, the input codeword contains so many bit errors, i.e., so few correct values, that the decoder is unable to correct all the bit errors and outputs a vector {circumflex over (x)}, also known as an invalid codeword (ICW), with a large number (e.g., greater than 15) of bit errors. A typical method for resolving an ICW is to request a re-send of the input codeword.

In the second scenario, the decoder resolves all but a few USCs, but those unresolved USCs and their erroneous bit nodes form a stable configuration, known as a trapping set, which is impervious to further iterations of that decoder. Trapping sets have a significant impact on the error-floor characteristics of an LDPC decoder. When an LDPC decoder fails to converge on the DCCW, it is often because of a trapping set.

Trapping sets are notated (a, b), where b is the number of USCs in the trapping set, and a is the number of erroneous bit nodes associated with those USCs. Thus, an (8,2) trapping set comprises two USCs and eight erroneous bit nodes (EBNs) associated with those two USCs. The majority of trapping sets comprise fewer than five USCs and fewer than ten EBNs.

A vector {circumflex over (x)} may possess more than one trapping set, but rarely more than three. Thus, when a decoder outputs a vector {circumflex over (x)} with 15 or fewer USCs, those USCs are typically members of a trapping set. If vector {circumflex over (x)} of a failed decoder contains a small number (e.g., less than 16) of USCs, then vector {circumflex over (x)} is referred to as a near codeword (NCW). Whereas an ICW is often handled by requesting a re-send of the input codeword, an NCW can be handled by altering values in the codeword or decoder parameters.

Flipping a bit node refers to a specific process for altering one or more values associated with the bit node. Which values are altered during flipping depends on the state of the LDPC decoder. If an LDPC decoder has just been initialized, i.e., the decoder is in State 0, then flipping a bit node comprises (i) inverting the hard-decision value of that bit node's Lch value, i.e., 1 becomes 0, and vice versa, (ii) setting the magnitude bits, i.e., the confidence, of that same Lch value to maximum, and (iii) limiting the magnitude bits of all other Lch values to at most 15% of the maximum allowable magnitude value.

For example, assume a system with 4-bit Lch magnitude values, where the maximum allowable positive magnitude is +15 and the maximum allowable negative magnitude is −16, and where 15% of the maximum allowable values would be +2 and −2, respectively. Further assume four Lch values corresponding to four bit nodes: +2, −11, +1, +13. In this example, flipping the first bit node comprises (i) inverting the sign of the first bit node's Lch value, i.e., +2 becomes −2, (ii) setting the magnitude of the first bit node's Lch value to the maximum allowable value, i.e., −2 becomes −16, and (iii) limiting the magnitude of the Lch values of the other three bit nodes to at most 15% of the maximum allowable value, i.e., −11, +1, and +13 become −2, +1, and +2, respectively.

If the decoder is in some state other than State 0, then flipping a bit node comprises (i) determining the hard-decision value of the bit node's P value (defined by Equation 7 above), (ii) setting the hard-decision values of that bit node's Lch value, P value, and all associated Qnm messages to the opposite of the P value hard-decision value, (iii) setting the magnitude bits of that bit node's Lch value, P value, and all associated Qnm messages to maximum, and (iv) limiting the initial magnitude of the Lch, P, and Qnm message values of all other bits to 15% of the maximum allowable value. Note that only initial magnitudes are limited. As the decoding session progresses, P and Qnm message values are updated and may assume any allowable value. Lch values, on the other hand, are read-only and thus will retain their limited magnitude values for the duration of the decoding session.

Erasing is another specific process for altering bit-node values. Erasing a bit node comprises (i) setting the hard-decision value of that bit node's Lch value to 0 and (ii) setting the magnitude bits, i.e., the confidence, of that same Lch value to 0, i.e., no confidence.
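For illustration only, the following Python sketch applies the State-0 flip and the erase operations described above to a list of channel LLR values represented as signed integers; the function names and the expression of the 15% clamp as an integer limit are assumptions, and the final assertion reproduces the worked example given earlier.

```python
def flip_bit_node_state0(lch, flip_index, max_pos=15, max_neg=-16, limit_fraction=0.15):
    """State-0 flip of one bit node (a sketch): invert the sign of the flipped
    node's Lch value, push its magnitude to the maximum allowable value, and
    clamp every other Lch magnitude to roughly 15% of the maximum."""
    limit = int(limit_fraction * max_pos)                # e.g., 2 for max_pos = 15
    out = []
    for i, v in enumerate(lch):
        if i == flip_index:
            out.append(max_neg if v >= 0 else max_pos)   # invert sign, max confidence
        else:
            out.append(max(-limit, min(limit, v)))       # clamp magnitude of the rest
    return out

def erase_bit_node(lch, erase_index):
    """Erase one bit node: hard decision and confidence both set to 0."""
    out = list(lch)
    out[erase_index] = 0
    return out

# Reproduces the worked example in the text:
# flipping the first of [+2, -11, +1, +13] yields [-16, -2, +1, +2].
assert flip_bit_node_state0([+2, -11, +1, +13], 0) == [-16, -2, +1, +2]
```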

If one or more of the EBNs in a trapping set are adjusted (e.g., flipped or erased), then re-performing LDPC decoding on the resulting, modified trapping set may converge on the DCCW. When successful, this process is referred to as breaking the trapping set. Thus, another way to improve the error-floor characteristics of an LDPC decoder is to take the near codeword (NCW) of a failed decoder, identify potential EBNs in the NCW, flip or erase one or more of those EBNs, and submit the modified NCW for further LDPC processing. If flipping or erasing the EBNs breaks the trapping set in the NCW, then the resumed LDPC decoding will converge on the DCCW.

Some trapping sets can be broken by flipping or erasing a single EBN. In other trapping sets, flipping or erasing a single EBN may reduce the number of USCs, but not break the trapping set entirely, yielding a second, different trapping set. Yet other trapping sets can be broken only by flipping or erasing two or more EBNs at the same time.

Embodiments of the present invention are methods for yielding the DCCW from an NCW through targeted bit flipping or erasing. Specifically, from the set of all bit nodes (i.e., associated bit nodes (ABNs)) associated with USCs in the NCW, the methods identify an initial set (Set 1) of ABNs (i.e., the suspicious bit-nodes (SBNs)) which are most likely to be EBNs. Then, the methods select one or more SBNs from Set 1, flip or erase the selected SBNs, and submit the modified NCW to LDPC decoding. If the LDPC decoding converges on the DCCW, then the method terminates; otherwise, other SBNs or combinations of SBNs from Set 1 or other sets are selected for flipping or erasing, and decoding is performed until all specified combinations have been exhausted.

Embodiments of the present invention use any one of several different methods for identifying SBNs. One class of SBN-identification methods is the value-comparison methods. A value-comparison method first selects a set (i.e., node set) of one or more USC nodes and/or ABNs, e.g., all ABNs. The value-comparison method then selects a first node from the node set. The value-comparison method then takes a first value associated with the first node, e.g., an Rmn message value, and compares it to a second value of a like kind, e.g., another Rmn message value, associated with the same node. The second value can be from the same iteration/sub-iteration, or can be from another iteration/sub-iteration. If the difference between compared values exceeds a specified threshold, then either the ABN associated with the first value or the ABN associated with the second value is added to Set 1. The process is then repeated for each additional node in the node set.

A method wherein the node selected from the node set is an ABN is referred to as an ABN-based value-comparison method. In an ABN-based value-comparison method, the values available for comparison are the Rmn message values, Qnm message values, and P values associated with the ABN. Furthermore, the comparison is between the complete values, i.e., both the sign bit and magnitude bits. Lastly, if the difference between compared values exceeds a specified threshold, then it is the selected ABN which is added to Set 1.

A method wherein the node selected from the node set is a USC is referred to as a USC-based value-comparison method. In a USC-based value-comparison method, the values available for comparison are the Rmn and Qnm message values, i.e., no P values. Furthermore, the comparison is between only the magnitude bits of the two LLRs. Lastly, if the difference between compared values exceeds a specified threshold, then it is either the ABN associated with the first value or the ABN associated with the second value which is added to Set 1.

As described earlier, layered decoding involves multiple iterations, where each iteration may involve multiple sub-iterations. A value-comparison method that compares values from different iterations is referred to herein as an inter-iteration method. A value-comparison method that compares values from a single iteration is referred to herein as an intra-iteration method. A value-comparison method that compares values from different sub-iterations of a single iteration is referred to herein as an inter-sub-iteration method. Note that an inter-sub-iteration method is a particular type of intra-iteration method. Another type of intra-iteration method would be an intra-sub-iteration method that compares values from a single sub-iteration.

In an inter-iteration method, the difference between compared values is referred to herein as an inter-iteration difference value. In an inter-sub-iteration method, the difference between compared values is referred to herein as an inter-sub-iteration difference value. In an intra-sub-iteration method, the difference between compared values is referred to herein as an intra-sub-iteration difference value. Inter-sub-iteration difference values and intra-sub-iteration difference values are two types of intra-iteration difference values.

In order to avoid attempts at designing around certain embodiments of the present invention by including or excluding the case in which a difference value is exactly equal to the specified threshold, the term “exceeds” should be interpreted to cover embodiments in which ABNs are added to Set 1 when difference values are greater than a specified threshold as well as embodiments in which ABNs are added to Set 1 when difference values are greater than or equal to a specified threshold.

If a value-comparison method compares a first value, e.g., R message $R_{25}^{(7)}$ (i.e., the R message value from check node 2 to bit node 5 in decoding iteration 7), to any one or more values of a like kind (i.e., another R message value, e.g., $R_{25}^{(6)}$, $R_{11}^{(2)}$, $R_{79}^{(27)}$, but not another Q message value or P value), then the method is said to compare similar values. If a value-comparison method compares a first value, e.g., $R_{25}^{(7)}$, to only R message values corresponding to the same check node and the same bit node, but from other iterations or sub-iterations, e.g., $R_{25}^{(6)}$ or $R_{25}^{(11)}$, then the method is said to compare specific values. Note that specific values are a type of similar value.

Thus, in one embodiment of the present invention, the value-comparison method is an inter-iteration ABN-based specific-value method wherein Set 1 is the set of all ABNs whose Qnm message, Rmn message, and/or P values change significantly from one LDPC decoding iteration to the next. This embodiment applies to both layered and non-layered decoders. Our research has shown that the values associated with EBNs often change drastically from one LDPC-decoding iteration to the next.

For example, Set 1 might contain all ABNs, any of whose inter-iteration difference values Qchangei exceed a pre-defined threshold, where Qchangei is given by the following Equation 8:


$Q\text{change}^{(i)} = \left|Q_{nm}^{(i)} - Q_{nm}^{(i-1)}\right|$   (8)

where i represents the ith LDPC decoding iteration and excludes the first decoding iteration. Similarly, Set 1 might contain all ABNs, any of whose inter-iteration value changes Rchangei exceed a pre-defined threshold, where Rchangei is given by the following Equation 9:


$R\text{change}^{(i)} = \left|R_{mn}^{(i)} - R_{mn}^{(i-1)}\right|$   (9)

where i represents the ith LDPC decoding iteration and excludes the first decoding iteration.
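For illustration only, the following Python sketch builds Set 1 using the inter-iteration rule of Equation (8); the per-iteration history structure is a hypothetical simplification (a real decoder would track one value per edge), and the same loop applies to the Rchange values of Equation (9).

```python
def inter_iteration_sbns(q_history, abn_indices, threshold):
    """Identify suspicious bit nodes (Set 1) by the inter-iteration rule of
    Equation (8): an ABN is suspicious if any of its Q message values changes
    by more than `threshold` between consecutive decoding iterations.
    `q_history[i][n]` is a hypothetical record of the Q message value of bit
    node n in iteration i (a sketch; real decoders hold one value per edge)."""
    set1 = set()
    for i in range(1, len(q_history)):          # skip the first iteration
        for n in abn_indices:
            q_change = abs(q_history[i][n] - q_history[i - 1][n])
            if q_change > threshold:
                set1.add(n)
    return set1

# Usage: a bit node whose message jumps from -7 to +6 between iterations
# (|change| = 13) is flagged when the threshold is, say, 10.
assert inter_iteration_sbns([{3: -7}, {3: +6}], [3], threshold=10) == {3}
```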

In another embodiment of the present invention, the value-comparison method is an ABN-based similar-value intra-iteration method, wherein Set 1 is the set of all ABNs whose values, in any given iteration of a layered or non-layered decoder, differ from one another by more than a specified intra-iteration difference threshold. In a layered decoder, the intra-iteration difference values would be based on Qnm message values, Rmn message values, and/or P values. In a non-layered decoder, because only a single P value is generated for each iteration, the intra-iteration difference values would be based on Qnm message values and/or Rmn message values.

For an example in a non-layered decoder, assume a bit node which is connected to four check nodes, and further assume a specified threshold of 6. If, during any LDPC decoding iteration, any one of four Qnm messages sent to the bit node differs from any of the other three by more than 6, then the bit node is added to Set 1. Thus, if three of the Qnm messages for a bit node had a value of −7 and one had a value of +2, then the bit node would be added to Set 1, because the intra-iteration difference value between −7 and +2 (i.e., 9) is greater than the threshold of 6.
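For illustration only, the following Python sketch tests the intra-iteration rule on the four Q message values of the example above; the function name and threshold are assumptions.

```python
def intra_iteration_suspicious(q_messages, threshold):
    """Return True if the bit node is suspicious under the intra-iteration
    rule described above: any two of the Q message values associated with it
    in one iteration differ by more than `threshold` (a sketch)."""
    return max(q_messages) - min(q_messages) > threshold

# The example from the text: three messages of -7 and one of +2 give a
# largest difference of 9, which exceeds a threshold of 6.
assert intra_iteration_suspicious([-7, -7, -7, +2], threshold=6)
```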

For an example in a layered decoder, assume a bit node and a layered decoder with four layers. Four different P values will be generated during each decoding iteration, one for each sub-iteration. If one of the P values differs from the other three by more than a specified intra-iteration difference threshold, then the bit node will be added to Set 1.

In another embodiment of the present invention, the value-comparison method is an ABN-based similar-value inter-sub-iteration method, wherein Set 1 is the set of all ABNs whose Qnm message, Rmn message, and/or P values, in any given sub-iteration of a layered decoder, differ from the corresponding values of the previous sub-iteration by more than a specified inter-sub-iteration difference threshold.

Thus, for example, assume an H matrix with four layers, and a bit node which is connected to a single check node in each layer, and thus receives a single Rmn message during the processing of each layer, i.e., during each sub-iteration. Further assume that the layers are processed in the order 1-2-3-4. If the Rmn message value of layer 2 differs from the Rmn message value of layer 1 by more than the specified threshold, then the bit node is added to Set 1. Similarly, the layer 3 Rmn value is compared to the layer 2 Rmn value, and layer 4 is compared to layer 3. Non-consecutive layers are not compared, e.g., the Rmn value of layer 4 would not be compared to the Rmn value of layer 1.

In another embodiment of the present invention, the value-comparison method is an ABN-based specific-value inter-sub-iteration method, e.g., a $Q_{24}$ message value is compared to only a $Q_{24}$ message value from another sub-iteration.

In yet another embodiment of the present invention, the value-comparison method is an ABN-based similar-value intra-sub-iteration method, wherein Set 1 is the set of all ABNs whose values, in any given iteration of a layered or non-layered decoder, differ from one another by more than a specified intra-iteration difference threshold.

In yet another embodiment of the present invention, the value comparison methods are USC-based value-comparison methods that compare either (i) the magnitudes of all Qnm messages sent to a USC, or (ii) the magnitudes of all Rmn messages sent by a USC. In the case of Qnm message comparison, the ABN which sent the Qnm with the lowest magnitude is selected for Set 1. In the case of Rmn message comparison, the ABN which receives the Rmn with the greatest magnitude is selected for Set 1.

Certain embodiments of the present invention use SBN-selection methods which are not based on value comparisons. For example, in one embodiment of the present invention, Set 1 is the set of all ABNs.

In yet another embodiment of the present invention, Set 1 is the set of all ABNs whose P values (see Equation 7, above) saturate within a specified number of decoding iterations. Saturation refers to when a calculated LLR value exceeds the range of LLR values allowed by the notation system. For example, assume a system wherein the range of allowable LLR values is −15 (i.e., hard bit-value of 1 with maximum confidence) to +16 (i.e., hard bit-value of 0 with maximum confidence). If Equation 7 above yields a P value of +18, then that P value will be represented as +16, and that P value will be said to have saturated. Our research has shown that the P values of EBNs often saturate within the first several LDPC-decoding iterations.

In yet another embodiment of the present invention, Set 1 is the set of all ABNs whose P values (see Equation 7, above) and E values (see Equation 6, above) have opposite signs, e.g., a positive P value and a negative E value, or vice versa. Our research has shown that bit nodes with positive P values and a negative E values, or vice versa, are often EBNs.

In yet another embodiment of the present invention, Set 1 is the set of all ABNs whose P values (see Equation 7, above) and Lch values have opposite signs.

FIG. 4 is a block diagram of LDPC decoding system 400 according to one embodiment of the invention. System 400 is analogous to LDPC decoder 112 of FIG. 1. LDPC decoder 402 receives Lch values from a read processor analogous to read processor 110 of FIG. 1, and performs LDPC decoding to generate the decoded codeword vector {circumflex over (x)}. If vector {circumflex over (x)} is the decoded correct codeword, then processing of LDPC decoding system 400 terminates without employing post-processor 404. If vector {circumflex over (x)} contains one or more USCs, i.e., vector {circumflex over (x)} is not the DCCW, then LDPC decoder 402 outputs vector {circumflex over (x)} and the indices of the USCs to post-processor 404. If vector {circumflex over (x)} is not the DCCW, then post-processor 404 performs one or more post-processing methods, serially or in parallel, on vector {circumflex over (x)} to attempt to generate the DCCW. When finished, post-processor 404 outputs vector {circumflex over (x)}pp to further downstream processing, where vector {circumflex over (x)}pp might or might not be the DCCW depending on whether or not post-processor 404 succeeded in breaking the trapping set corresponding to vector {circumflex over (x)}.

FIG. 5 is a flowchart of exemplary targeted bit-flipping process 500 used by post-processor 404 of FIG. 4, according to one embodiment of the present invention. Although process 500 is a bit-flipping process, it should be understood that the present invention can also be implemented in the context of bit-adjustment processes that erase bits instead of flipping bits, as well as in the context of bit-adjustment processes that involve both bit flipping and bit erasing. Furthermore, the present invention can be implemented in the context of types of bit adjustment other than the specific flipping and/or erasing of bits described earlier. Process 500 contains two analogous sub-processes 510 and 530.

Sub-process 510 is implemented (at most) one time. During sub-process 510, an attempt is made to break the trapping set corresponding to the failed decoding of LDPC decoder 402 of FIG. 4 by sequentially flipping one or more individual associated bit nodes (ABNs) associated with the set of unsatisfied check nodes (USCs) corresponding to the failed decoding of LDPC decoder 402 and, if that individual node-flipping fails, then by sequentially flipping one or more pairs of those ABNs. If sub-process 510 fails to break the trapping set, then sub-process 530 is implemented one time for each of one or more of the flipped pairs of ABNs from sub-process 510.

For each flipped pair of ABNs, sub-process 530 defines a new set of USCs and a new set of ABNs associated with those USCs. Like sub-process 510, sub-process 530 attempts to break the trapping set by sequentially flipping one or more individual ABNs associated with the new set of USCs and, if individual node-flipping fails, then by sequentially flipping one or more pairs of those ABNs.

Sub-processes 510 and 530 perform the same basic steps. Specifically, a set (i.e., Set 1) of one or more suspicious bit-nodes (SBNs) (i.e., ABNs that are most likely to be erroneous bit-nodes (EBNs)) is generated (i.e., step 514 for sub-process 510 and step 534 for sub-process 530). Then one or more trials are sequentially performed for one or more individual SBNs (i.e., step 516 for sub-process 510 and step 536 for sub-process 530) and then, if necessary, one or more trials are sequentially performed for one or more pairs of SBNs (i.e., step 518 for sub-process 510 and step 538 for sub-process 530). Each trial comprises (i) resetting the decoder to a specified, appropriate reset state, (ii) flipping the selected individual SBN or pair of SBNs, and (iii) re-performing decoding. If any trial yields the DCCW, then the process terminates at step 550.

Sub-processes 510 and 530 differ (i) in how many times they are performed and (ii) in their reset states. Sub-process 510 is performed at most once. Furthermore, the reset state for sub-process 510 is either (i) the original initialization state of the LDPC decoder (i.e., State 0) or (ii) the state of the LDPC decoder after performing a fixed number of iterations from State 0 (i.e., State 1).

In contrast, sub-process 530 may be performed any number of times. Furthermore, the reset state for each implementation of sub-process 530 may differ (i) from the reset states of other implementations of sub-process 530 and (ii) from the reset states of sub-process 510.

Process 500 begins at step 502 and proceeds to step 504 where it is determined whether vector {circumflex over (x)} is an appropriate candidate for process 500, i.e., vector {circumflex over (x)} is a near codeword which contains one or more trapping sets. As discussed above, trapping sets are stable configurations of typically one to five USCs, and a near codeword typically contains one to three trapping sets. Thus, in the embodiment of FIG. 5, to be an appropriate candidate for process 500, (i) vector {circumflex over (x)} must possess a number (bobserved) of USCs greater than 0 and less than a pre-defined threshold bmax (typically 16), and (ii) the particular configuration of USCs must have remained stable (i.e., unchanged) for the last two or three iterations of LDPC decoder 402 of FIG. 4.

Process 500 is designed to work with trapping sets in near codewords (NCWs), and works by flipping bit nodes associated with USCs. Thus, if vector {circumflex over (x)} possesses no USCs, i.e., vector {circumflex over (x)} is a near-codeword mis-correction (NCW-MC), then process 500 cannot determine which bit nodes to flip. As such, near-codeword mis-corrections typically should be passed to other post-processing methods for resolution. Similarly, if vector {circumflex over (x)} possesses more than bmax USCs, then vector {circumflex over (x)} is most likely an invalid codeword (ICW), and typically should be passed to post-processing methods which are designed to handle ICWs. Lastly, a set of USCs which first appears in the last or next-to-last decoding iteration is most likely not a trapping set, but the result of some other problem with the decoding process, and as such should be addressed by other post-processing processes.
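For illustration only, the following Python sketch expresses the screening test of step 504 (more than zero but fewer than bmax USCs, and a USC configuration that has remained stable over the last few iterations); the history structure and function name are assumptions.

```python
def is_candidate_for_targeted_flipping(usc_history, b_max=16, stable_iterations=2):
    """Sketch of the screening test of step 504: the final decoder output must
    have more than zero but fewer than b_max USCs, and the same USC
    configuration must have persisted over the last few iterations.
    `usc_history` is a hypothetical list of USC index sets, one per decoding
    iteration, most recent last."""
    if not usc_history:
        return False
    current = usc_history[-1]
    if not (0 < len(current) < b_max):
        return False                               # NCW-MC (no USCs) or likely an ICW
    recent = usc_history[-(stable_iterations + 1):]
    return all(s == current for s in recent)       # configuration stable

# A configuration of two USCs unchanged over the last three iterations qualifies.
assert is_candidate_for_targeted_flipping([{4, 9}, {4, 9}, {4, 9}])
```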

Accordingly, if step 504 evaluates false, then process 500 terminates at step 550; otherwise, processing proceeds to step 512 where the reset state is set equal to the decoder state upon initialization (State 0).

Next, at step 514, post-processor 404 identifies a set (i.e., Set 1) of one or more suspicious bit-nodes (SBNs) (i.e., ABNs that are most likely to be erroneous bit-nodes (EBNs)) and defines reset State 1. Further details describing step 514 are found in FIG. 6, which is described below.

Next, at step 516, individual-SBN trials are performed using SBNs selected from Set 1. If an individual-SBN trial converges on the DCCW, then process 500 terminates at step 550. If an individual-SBN trial does not yield the DCCW, but does yield a number b of USCs lower than bobserved, then the individual SBN is added to a new set (i.e., Set 2). If all individual-SBN trials are performed without yielding the DCCW, then processing continues to step 518. Further details describing step 516 are found in FIG. 7, which is described below.

Next, at step 518, SBN-pair trials are performed using pairs of SBNs selected from Set 2. If an SBN-pair trial converges on the DCCW, then process 500 terminates at step 550. If an SBN-pair trial does not yield the DCCW, then the number b of USCs in the resulting codeword is recorded. If all SBN-pair trials are performed without yielding the DCCW, then a new Set 3 is created which contains the one or more SBN pairs which yielded the lowest number b of USCs in their resulting codewords. Process 500 then continues to step 520. Further details describing step 518 are found in FIG. 8, which is described below.

Next, at step 520, a first SBN pair is selected from Set 3. Next, at step 532, the decoder is reset to State 1, the selected SBN pair is flipped, the decoder is run a fixed number of decoding iterations, and the resulting decoder state (State 2) becomes the new reset state. Further details describing step 532 are found in FIG. 9, which is described below.

Next, at step 534, a new Set 1 is generated and a new State 1 is defined. Step 534 is identical to step 514 of sub-process 510, except that step 514 starts the LDPC decoder from State 0, whereas step 534 starts the LDPC decoder from State 2. If step 534 generates the DCCW, then processing terminates at step 550; otherwise, processing continues to step 536.

Next, at step 536, individual-SBN trials are performed using the SBNs from the new Set 1, and those individual-SBNs that yield a lower b are stored in a new Set 2. Step 536 is identical to step 516 of sub-process 510, except that step 516 starts the LDPC decoder from State 0, whereas step 536 starts the LDPC decoder from State 2. Furthermore, step 536 uses the new Set 1 generated by step 534, while step 516 uses the Set 1 generated by step 514. If step 536 generates the DCCW, then processing terminates at step 550; otherwise, processing continues to step 538.

Next, at step 538, SBN-pair trials are performed using SBN pairs from the new Set 2. Step 538 is similar to step 518 of sub-process 510, except that a new Set 3 is not generated. If step 538 generates the DCCW, then processing terminates at step 550; otherwise, processing continues to step 540.

Next, at step 540, the next SBN pair is selected from Set 3. If another pair is selected, then processing loops to step 532. If, instead, there are no more SBN pairs in Set 3, then processing terminates at step 550. In one embodiment, the only SBN pairs from sub-process 510 that are used to implement sub-process 530 are those SBN pairs whose SBN-trials generate the lowest number b of USCs. In alternative embodiments, other criteria are used to select the SBN pairs for sub-process 530, such as a specified percentage or fixed number of SBN-pairs which generate the lowest b values.

FIG. 6 is a flow diagram of step 514 of FIG. 5. The implementation of step 514 shown in FIG. 6 corresponds to the definition of Set 1 as the set of all ABNs whose Qnm message values change significantly from one LDPC decoding iteration to the next. Other implementations, e.g., based on the alternative definitions of Set 1 described earlier, are also possible.

Processing begins at step 600 and proceeds to step 602 where a first USC out of the set of all USCs is selected. After step 602, process 514 enters loop 604 comprising steps 606, 608, and 610. At step 606, the LDPC decoder is reset to the reset state (State 0), i.e., the decoder is initialized.

At step 608, the LDPC decoder is run for a pre-defined number of iterations, e.g., 50, and all of the associated bit nodes (ABNs) associated with the selected USC are monitored. The LDPC decoding of step 608 is similar to LDPC decoding method 300 of FIG. 3, with the exception that step 304 (decoder initialization) is omitted.

Set 1 of SBNs is generated based on observed changes in bit-node LLR values. Which LLR values are monitored, and what specific changes identify an SBN, will vary from embodiment to embodiment. In the exemplary embodiment of FIG. 6, all Qnm values sent to the selected USC are monitored. SBNs with Qchangei values (see Equation 8, above) greater than a specified threshold are added to Set 1.

In another embodiment, process 500 is the same, except that step 608 monitors Rmn value changes and adds to Set 1 those ABNs whose Rchangei values (see Equation 9, above) exceed a specified threshold. In yet another embodiment, the P values (see Equation 7, above) are monitored, and those ABNs whose P values saturate within the pre-defined number of iterations are added to Set 1. In yet another embodiment, both the P values and E values (see Equation 6, above) are monitored, and those ABNs whose P values and E values possess opposite signs are added to Set 1. If the particular embodiment represented by process 500 is one where every ABN is an SBN, then step 608 observes no LLR values and simply adds all ABNs to Set 1. Other than these internal changes to step 608, the steps of process 500 are identical for all of these embodiments.

Next, at step 610, the next USC out of the set of all USCs is selected, and loop 604 repeats. If there are no more USCs, then processing continues to step 612, where the decoder state after performing the fixed number of iterations is stored as State 1, and the reset state is set equal to State 1. Subsequent decoding operations will start from State 1 (e.g., after 50 iterations) rather than from State 0 (i.e., initialization).

Next, at step 614, the SBNs of Set 1 are ranked according to their probability of being an EBN. In the exemplary embodiment of FIG. 6, the SBNs of Set 1 are ranked by their Qchangei values. In some embodiments of the present invention, the ranking of step 614 would be eliminated. For example, if Set 1 is all ABNs which possess P values and E values of opposite sign, then ranking is unnecessary.

Process 514 then terminates at step 616.

FIG. 7 is a flow diagram of step 516 of FIG. 5. In step 516, individual-SBN trials are performed.

Processing begins at step 700 and proceeds to step 702, where a first SBN in Set 1 is selected. Process 516 then enters a loop comprising steps 704, 706, 708, 710, and 712. At step 704, the decoder is reset to the reset state, and the selected SBN is flipped.

Next, at step 706, LDPC decoding is performed for a defined number of iterations. The LDPC decoder of step 706 is identical to the LDPC decoder of step 608 of FIG. 6. If the decoder converges on the DCCW, then process 516 terminates at step 714; otherwise, processing continues to step 708.

At step 708, the resulting number b of USCs is compared to bobserved. If b is less than bobserved, then processing continues to step 710; otherwise, processing continues to step 712. At step 710, the selected SBN is added to Set 2. Processing then continues to step 712.

Next, at step 712, a next SBN is selected from Set 1 and processing loops back to step 704. If there are no more SBNs in Set 1, then process 516 terminates at step 714.
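For illustration only, the following Python sketch outlines the individual-SBN trial loop of step 516; the decoder hooks (reset_decoder, flip, run_decoder) are hypothetical placeholders for the operations described in the text.

```python
def individual_sbn_trials(set1, b_observed, reset_decoder, flip, run_decoder):
    """Sketch of the individual-SBN trials of step 516.  The callables are
    hypothetical hooks: reset_decoder() restores the reset state, flip(sbn)
    flips one bit node, and run_decoder() returns (converged, b) after a
    fixed number of decoding iterations."""
    set2 = []
    for sbn in set1:
        reset_decoder()
        flip(sbn)
        converged, b = run_decoder()
        if converged:
            return True, set2            # DCCW found; process terminates
        if b < b_observed:
            set2.append(sbn)             # promising SBN, kept for pair trials
    return False, set2
```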

FIG. 8 is a flow diagram of step 518 of FIG. 5. Processing begins at step 800 and proceeds to step 802, where a first SBN pair is generated from the SBNs of Set 2. One constraint on pair creation is that there be at least one USC which is not common to both SBNs. In other words, if SBN a1 is associated with only USC b1, and SBN a2 is similarly associated with only USC b1, then a1 and a2 would not be a valid pair. The reason for this constraint is that if b1 is a USC, i.e., b1 does not satisfy its parity check, then an odd number of bits must be flipped if the parity check is to be satisfied. Flipping an even number of bits will not satisfy the parity check.

In other embodiments of the present invention, other constraints might be imposed on SBN-pair creation in step 802. For example, the SBNs available for SBN-pair creation might be limited to those SBNs, any of whose trials satisfy the conditions (bold−bnew)≧ae and bnew<bmax, where bold is the number b of USCs before the trial, bnew is the number b of USCs after the trial, ae is the number of bit nodes whose hard decisions are flipped as a result of the trial, and bmax is the maximum number b of USCs that a near codeword can possess and still be processed by process 500.
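For illustration only, the following Python sketch generates candidate SBN pairs under the constraint of step 802 that at least one USC not be common to both SBNs; the data layout and function name are assumptions, and the additional screening conditions described in the preceding paragraph are omitted.

```python
from itertools import combinations

def valid_sbn_pairs(set2, usc_of_sbn):
    """Sketch of the pair-creation constraint of step 802: a pair of SBNs is
    valid only if at least one USC is not common to both of them, since
    flipping an even number of bits of a check cannot newly satisfy it.
    `usc_of_sbn[s]` is the set of USC indices associated with SBN s."""
    pairs = []
    for a1, a2 in combinations(set2, 2):
        if usc_of_sbn[a1] ^ usc_of_sbn[a2]:   # symmetric difference non-empty
            pairs.append((a1, a2))
    return pairs

# Two SBNs associated with only the same single USC do not form a valid pair.
assert valid_sbn_pairs([0, 1], {0: {7}, 1: {7}}) == []
```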

Next, process 518 enters a loop comprising steps 804, 806, 808, and 810. At step 804, the decoder is reset to the reset state, and the selected SBN pair is flipped. Then, at step 806, LDPC decoding is re-performed. The LDPC decoder of step 806 is identical to the LDPC decoder of step 608 of FIG. 6. If the decoding of step 806 yields the DCCW, then process 518 terminates at step 814; otherwise, processing continues to step 808.

At step 808, the resulting b value at the end of step 806 is recorded for the current SBN pair. Next, at step 810, the next SBN pair from Set 2 is selected, and processing loops back to step 804. If, instead, at step 810, there are no more SBN pairs in Set 2, then processing proceeds to step 812.

At step 812, the SBN pair(s) which yielded the lowest b value at step 808 are saved as Set 3. Other criteria for generating Set 3 are possible. For example, Set 3 could include a specified number of SBN pairs having the smallest b values, whether those b values are the same or not. Process 518 then terminates at step 814.
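
The SBN-pair trials of FIG. 8 and the Set 3 selection of step 812 can be sketched as follows, reusing the hypothetical decoder operations from the earlier sketches.

def sbn_pair_trials(decoder, pairs, reset_state, num_iterations):
    results = {}
    for pair in pairs:                        # steps 804 / 810
        decoder.restore(reset_state)          # step 804: reset decoder
        for sbn in pair:
            decoder.flip_bit(sbn)             # step 804: flip the SBN pair
        decoder.decode(num_iterations)        # step 806: re-run LDPC decoding
        if decoder.is_codeword():
            return decoder.hard_decisions(), []
        results[pair] = decoder.num_uscs()    # step 808: record resulting b
    if not results:
        return None, []
    b_min = min(results.values())             # step 812: keep the pair(s)
    set3 = [p for p, b in results.items() if b == b_min]
    return None, set3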

FIG. 9 is a flow diagram of step 532 of FIG. 5. Processing begins at step 900 and proceeds to step 902 where the decoder is reset to State 1, and a first SBN pair is selected from Set 3 and flipped. Next, at step 904, LDPC decoding is performed for a specified number of iterations. The LDPC decoder of step 904 is identical to the LDPC decoder of step 608 of FIG. 6. Next, at step 906, the ending decoder state is stored as State 2, and the reset state is set equal to State 2. Process 532 then terminates at step 908.
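
A sketch of step 532, using the hypothetical TrialController and decoder operations from the earlier sketches:

def advance_reset_state(controller, pair, num_iterations=50):
    # Step 902: reset to State 1 and flip the selected pair from Set 3.
    controller.decoder.restore(controller.state1)
    for sbn in pair:
        controller.decoder.flip_bit(sbn)
    controller.decoder.decode(num_iterations)     # step 904
    state2 = controller.decoder.snapshot()        # step 906: store State 2
    controller.reset_state = state2               # State 2 becomes the reset state
    return state2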

Although the exemplary process 500 of FIG. 5 generates Set 1 by simultaneously monitoring all bit nodes associated with a single, selected USC (e.g., step 608 of FIG. 6), the invention is not so limited. Instead, bit nodes could be monitored one by one, or, alternatively, all bit nodes for all USCs could be monitored at once.

Further, although the exemplary process 500 of FIG. 5 flips only single SBNs and pairs of SBNs, the present invention is not so limited. Specifically, any number of SBNs could be flipped at once, up to and including the number of SBNs in Set 2.

Yet further, although exemplary process 500 of FIG. 5 comprises multiple iterations of two sets of trials, i.e., a set of individual-SBN trials followed by a set of SBN-pair trials, the present invention is not so limited by the number of trial sets, the types of trials, or their sequence. For example, alternative processes could have three sets of SBN-pair trials followed by seven individual-SBN trials followed by two sets of SBN-triplet trials.

Furthermore, although exemplary process 500 of FIG. 5 specifies a particular way in which the results of one step affect (i) the starting decoder state of subsequent steps and (ii) the set of SBNs to be used by subsequent steps, the invention is not so limited. For example, the starting decoder state for the SBN-pair trials of step 518 does not depend on the results of the individual-SBN trials of step 516, but is always the reset state, i.e., State 1 on the first pass. An alternative embodiment of the present invention could, for example, take the ending decoder state from the individual-SBN trial of step 516 that yielded the lowest number b of USCs, and use that state as the starting state for each iteration of step 518. Similarly, although process 500 specifies that step 518 will use Set 2 in generating pairs of SBNs, an alternative embodiment could have step 518 using Set 1 to generate SBN pairs.

Although the present invention has been described in the context of hard disk drives that implement LDPC coding and decoding, the invention is not so limited. In general, the present invention can be implemented in any suitable communication path that involves LDPC coding and decoding.

Further, although the exemplary belief-propagation algorithm used above is the offset min-sum algorithm (OMS), the present invention is not so limited, and can be used with any belief-propagation variant, e.g., sum-product algorithm (SPA) or the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm.
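
For context, the following is a minimal check-node update for the offset min-sum algorithm, written from the standard OMS formulation rather than from this description; the offset beta and the message layout are illustrative assumptions.

import math

def oms_check_node_update(incoming, beta=0.5):
    # For each edge, the outgoing message is the product of the signs of
    # the other incoming messages times the minimum of their magnitudes,
    # reduced by the offset beta and floored at zero. Assumes a check-node
    # degree of at least two.
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = math.prod(1 if m >= 0 else -1 for m in others)
        magnitude = max(min(abs(m) for m in others) - beta, 0.0)
        out.append(sign * magnitude)
    return out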

Yet further, although the belief-propagation example used above employed a specific decoding schedule (flooding schedule) where all check nodes were updated during a single check-node update step, followed by all bit nodes being updated in a single bit-node update step, the present invention is not so limited, and can be used with any decoding schedule, e.g., row-serial schedule, column-serial schedule, and row-column serial schedule.
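
A flooding schedule of the kind described above can be sketched as follows, with update_check and update_bit as hypothetical per-node update routines.

def flooding_iteration(check_nodes, bit_nodes):
    # One decoding iteration under a flooding schedule: every check node
    # is updated in a single check-node update step, then every bit node
    # is updated in a single bit-node update step.
    for c in check_nodes:
        c.update_check()
    for v in bit_nodes:
        v.update_bit()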

Yet further, although the exemplary LDPC decoder used above was a non-layered decoder, the present invention is not so limited, and can be used with both layered and non-layered decoders.

Yet further, although embodiments of the present invention have been described in the context of LDPC codes, the present invention is not so limited. Embodiments of the present invention could be implemented for any code which can be defined by a graph, e.g., tornado codes and structured IRA codes, since it is graph-defined codes that suffer from trapping sets.

The present invention can be embodied in the form of methods and apparatuses for practicing those methods. The present invention can also be embodied in the form of program code embodied in tangible media, such as magnetic recording media, optical recording media, solid state memory, floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits.

Unless explicitly stated otherwise, each numerical value and range should be interpreted as being approximate, as if the word “about” or “approximately” preceded the value or range.

It will be further understood that various changes in the details, materials, and arrangements of the parts which have been described and illustrated in order to explain the nature of this invention may be made by those skilled in the art without departing from the scope of the invention as expressed in the following claims.

The use of figure numbers and/or figure reference labels in the claims is intended to identify one or more possible embodiments of the claimed subject matter in order to facilitate the interpretation of the claims. Such use is not to be construed as necessarily limiting the scope of those claims to the embodiments shown in the corresponding figures.

It should be understood that the steps of the exemplary methods set forth herein are not necessarily required to be performed in the order described, and the order of the steps of such methods should be understood to be merely exemplary. Likewise, additional steps may be included in such methods, and certain steps may be omitted or combined, in methods consistent with various embodiments of the present invention.

Although the elements in the following method claims, if any, are recited in a particular sequence with corresponding labeling, unless the claim recitations otherwise imply a particular sequence for implementing some or all of those elements, those elements are not necessarily intended to be limited to being implemented in that particular sequence.

Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments. The same applies to the term “implementation.”

Claims

1. A method for decoding encoded data using bit nodes and check nodes, the method comprising:

(a) performing iterative decoding on the encoded data to generate an original near codeword (NCW) having one or more unsatisfied check nodes (USCs), each USC having one or more associated bit nodes (ABNs), the one or more ABNs for the one or more USCs forming a set of ABNs;
(b) selecting, from the set of ABNs, a first set of suspicious bit nodes (SBNs) that may be erroneous bit nodes for the original NCW;
(c) adjusting at least one of the SBNs in the first set to generate a modified NCW; and
(d) performing iterative decoding on the modified NCW to attempt to generate a decoded correct codeword (DCCW) for the encoded data.

2. The invention of claim 1, wherein step (b) comprises selecting the set of ABNs as the first set of SBNs.

3. The invention of claim 1, wherein step (b) comprises:

(b1) determining difference values over one or more decoding iterations, wherein each difference value corresponds to a difference between first and second values associated with a common ABN;
(b2) comparing the difference values to a specified threshold; and
(b3) selecting, as the first set, ABNs with difference values having magnitudes that exceed the specified threshold.

4. The invention of claim 3, wherein the first and second values are generated during a single decoding iteration.

5. The invention of claim 4, wherein:

each decoding iteration comprises one or more decoding sub-iterations; and
the first and second values are generated during a single decoding sub-iteration.

6. The invention of claim 4, wherein:

at least one decoding iteration comprises two or more decoding sub-iterations; and
the first and second values are generated during different decoding sub-iterations.

7. The invention of claim 3, wherein the first and second values are generated during different decoding iterations.

8. The invention of claim 3, wherein the first and second values are either two bit-node message values, two check-node message values, or two P values.

9. The invention of claim 3, wherein the first and second values are similar values.

10. The invention of claim 9, wherein the first and second values are specific values.

11. The invention of claim 1, wherein step (b) comprises:

(b1) determining whether P values saturate during one or more decoding iterations for different ABNs; and
(b2) selecting, as the first set, ABNs having a P value that is determined to saturate in step (b1).

12. The invention of claim 1, wherein step (b) comprises:

(b1) determining signs of first and second values after one or more decoding iterations, wherein the first and second values are both associated with either a common ABN or a common USC;
(b2) comparing the signs of the first and second values; and
(b3) selecting, as the first set, ABNs based on first and second values determined to have opposite sign in step (b2).

13. The invention of claim 12, wherein the first value is a P value and the second value is an E value.

14. The invention of claim 12, wherein the first value is a P value and the second value is an Lch value.

15. The invention of claim 1, wherein:

each USC is associated with one or more bit-node messages; and
step (b) comprises, for each USC: (b1) selecting a bit-node message with the least magnitude value; and (b2) selecting the ABN associated with the selected bit-node message to be in the first set.

16. The invention of claim 1, wherein:

each USC is associated with one or more check-node messages; and
step (b) comprises, for each USC: (b1) selecting a check-node message with the greatest magnitude value; and (b2) selecting the ABN associated with the selected check-node message to be in the first set.

17. The invention of claim 1, wherein steps (c) and (d) are implemented multiple times.

18. The invention of claim 17, wherein, for each implementation of step (c), a corresponding modified NCW for step (d) is generated by adjusting one or more SBNs in the original NCW.

19. The invention of claim 17, wherein, for at least one implementation of step (c), a corresponding modified NCW for step (d) is generated by adjusting one or more SBNs in an NCW generated during a previous implementation of step (d).

20. The invention of claim 1, wherein for at least one implementation of step (c), a corresponding modified NCW for step (d) is generated by adjusting only one SBN in an NCW.

21. The invention of claim 1, wherein for at least one implementation of step (c), a corresponding modified NCW for step (d) is generated by adjusting a pair of SBNs in an NCW.

22. The invention of claim 1, wherein the decoding is LDPC decoding.

23. The invention of claim 1, wherein step (c) comprises flipping or erasing the at least one SBN in the first set to generate the modified NCW.

24. Apparatus for decoding encoded data using bit nodes and check nodes, the apparatus comprising:

(a) means for performing iterative decoding on the encoded data to generate an original near codeword (NCW) having one or more unsatisfied check nodes (USCs), each USC having one or more associated bit nodes (ABNs), the one or more ABNs for the one or more USCs forming a set of ABNs;
(b) means for selecting, from the set of ABNs, a first set of suspicious bit nodes (SBNs) that may be erroneous bit nodes for the original NCW;
(c) means for adjusting at least one of the SBNs in the first set to generate a modified NCW; and
(d) means for performing iterative decoding on the modified NCW to attempt to generate a decoded correct codeword (DCCW) for the encoded data.
Patent History
Publication number: 20100042890
Type: Application
Filed: Mar 10, 2009
Publication Date: Feb 18, 2010
Patent Grant number: 8448039
Applicant: LSI CORPORATION (Milpitas, CA)
Inventor: Kiran Gunnam (San Jose, CA)
Application Number: 12/401,116