Polar codes: from theory to practice

[Plots: performance of codes with dimension 1024; average decoding complexity of codes with dimension 1024; signal-to-noise ratio required to achieve codeword error probability 1E-2]

Polar codes

The channel polarization phenomenon was discovered by E. Arikan in 2008. Essentially, by employing very simple operations one can transform a data transmission channel into a number of almost noiseless and almost pure-noise synthetic subchannels. The payload data can then be transmitted over the almost noiseless subchannels with very high reliability, while some deterministic symbols (normally, 0) are transmitted over the almost pure-noise synthetic subchannels.
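A minimal sketch of how this polarization arises, for the simplest case of a binary erasure channel (BEC) with erasure probability 0.5. The classical recursion shown here (one level of the Arikan transform maps a subchannel with erasure probability z into a degraded copy with 2z - z^2 and an upgraded copy with z^2) is textbook material and not specific to the constructions described on this page:

```python
# Channel polarization on a binary erasure channel (BEC).
# One level of the Arikan transform maps a subchannel with erasure
# probability z into two synthetic subchannels with erasure probabilities
#   2*z - z**2   (degraded copy)
#   z**2         (upgraded copy)
# After enough levels, most subchannels are either almost noiseless
# (z near 0) or almost pure noise (z near 1).

def bec_subchannel_erasure_probs(eps, levels):
    """Erasure probabilities of the 2**levels synthetic subchannels."""
    probs = [eps]
    for _ in range(levels):
        probs = [p for z in probs for p in (2 * z - z * z, z * z)]
    return probs

probs = bec_subchannel_erasure_probs(0.5, 10)       # 1024 subchannels
good = sum(1 for z in probs if z < 1e-3)            # almost noiseless
bad = sum(1 for z in probs if z > 1 - 1e-3)         # almost pure noise
print(good, bad, len(probs))
```

Note that the mean erasure probability is preserved at every level, so polarization does not create capacity; it merely concentrates it in a subset of the subchannels.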

This approach defines a forward error correction method known as polar coding. E. Arikan has shown that polar codes achieve the Shannon capacity of a wide class of communication channels, while having very simple construction, encoding and decoding algorithms. This is the first result of its kind since 1948, when C. Shannon showed that such coding methods can in principle exist. Although code constructions achieving the capacity of some channels had been known before, they did not find practical application due to their high implementation complexity. Owing to their simplicity, polar codes have great potential for finding their way into communication and storage systems.

However, it appears that the performance of Arikan's polar codes with practically important parameters is substantially inferior to that of comparable LDPC and turbo codes. Furthermore, the complexity and latency of classical decoding methods for polar codes significantly exceed those of LDPC codes. Since 2011, the Coding Theory Lab of SPbPU has been working on overcoming these problems and on developing code constructions and decoding algorithms with better performance and lower complexity than the existing techniques.

Polar subcodes (Polar codes with dynamic frozen symbols)

P. V. Trifonov and V. D. Miloslavskaya have suggested a generalization of the polar code construction. Instead of transmitting 0 over some synthetic bit subchannels, as in the classical Arikan construction, it was suggested to transmit a weighted sum of the symbols transmitted over the other subchannels (dynamic frozen symbols). The particular weighting coefficients are selected in such a way that the obtained code is a subcode of an extended BCH code. This enables one to obtain codes with substantially higher minimum distance than classical polar codes. The codes constructed in this way provide much better performance than known LDPC and turbo codes.
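A toy sketch of encoding with dynamic frozen symbols. The particular constraint used below (frozen position 4 copying position 3) is made up purely for illustration; in the actual construction the weighting coefficients are chosen so that the resulting code is a subcode of an extended BCH code:

```python
# Encoding with dynamic frozen symbols (all arithmetic over GF(2)).
# In the classical construction every frozen input is set to 0; here a
# frozen input u_i may instead carry a weighted sum of earlier inputs
# u_j, j < i.  The constraints below are hypothetical, for illustration.

def polar_transform(u):
    """Multiply u by the Arikan kernel [[1,0],[1,1]] tensored log2(len(u)) times."""
    u = list(u)
    n, h = len(u), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                u[j] ^= u[j + h]
        h *= 2
    return u

def encode(info_bits, n, frozen):
    """frozen maps each frozen position to the list of earlier positions it sums."""
    u, it = [0] * n, iter(info_bits)
    for i in range(n):
        u[i] = sum(u[j] for j in frozen[i]) % 2 if i in frozen else next(it)
    return polar_transform(u)

# (8, 4) example: positions 0, 1, 2 statically frozen, position 4 dynamically.
frozen = {0: [], 1: [], 2: [], 4: [3]}           # hypothetical constraints
codeword = encode([1, 0, 1, 1], 8, frozen)
print(codeword)
```

Since the dynamic frozen constraints are linear, the resulting code remains linear, and the same successive cancellation machinery used for classical polar codes still applies: by the time the decoder reaches a frozen position, all the inputs it depends on are already known.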

V. D. Miloslavskaya has suggested a method for shortening polar (sub)codes, which enables one to obtain codes of arbitrary length; this is not possible in the framework of the classical Arikan construction, where the code length is restricted to powers of two.
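The following sketch illustrates one simple shortening pattern, chosen here only because it is easy to verify; it should not be taken as the optimized method referred to above. With x = u * F^⊗m and the lower-triangular kernel F = [[1,0],[1,1]], codeword symbol x_j depends only on inputs u_i with i >= j, so freezing the last s inputs to zero forces the last s codeword symbols to zero, and they need not be transmitted:

```python
# Minimal shortening sketch: freeze the last s inputs to 0, which forces
# the last s codeword symbols to 0, then drop them, giving length n - s.

def polar_transform(u):
    u = list(u)
    n, h = len(u), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                u[j] ^= u[j + h]
        h *= 2
    return u

def shortened_encode(info_bits, n, frozen, s):
    """Encode, then drop the last s codeword symbols (guaranteed to be 0)."""
    u, it = [0] * n, iter(info_bits)
    for i in range(n - s):               # the last s inputs stay frozen to 0
        if i not in frozen:
            u[i] = next(it)
    x = polar_transform(u)
    assert all(b == 0 for b in x[n - s:])
    return x[:n - s]

# Length-5 code from the length-8 transform: s = 3, static frozen {0, 1, 2}.
print(shortened_encode([1, 1], 8, {0, 1, 2}, 3))
```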

This page presents a database of some polar subcodes. These codes can be decoded using the same techniques as Arikan polar codes.

Sequential decoding of polar codes

V. D. Miloslavskaya and P. V. Trifonov have suggested a sequential decoding algorithm for polar codes. In contrast to the list decoding algorithms suggested by other researchers, the proposed approach avoids a large fraction of useless computations. As a result, its decoding complexity turns out to be lower, and its performance better, than that of LDPC and turbo codes with similar parameters. The approach can be extended to polar codes with arbitrary kernels, as well as to short Reed-Solomon codes. Further complexity reduction techniques for this approach were developed by G. Trofimiuk and N. Iakuba.
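A toy best-first ("stack") decoder illustrating the general idea behind sequential decoding: keep a priority queue of input prefixes, always extend the most promising one, and stop when a full-length path is popped. For illustration only, the path score here is computed by brute force, summing the channel likelihood over all completions of a prefix; this is feasible only for tiny codes, and the actual algorithm obtains path scores efficiently from the successive cancellation recursions. The code, metric, and parameters below are assumptions for the sketch, not the published algorithm:

```python
import heapq
import itertools

def polar_transform(u):
    u = list(u)
    n, h = len(u), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                u[j] ^= u[j + h]
        h *= 2
    return u

def prefix_score(prefix, n, frozen, y, p):
    """P(y, prefix) over a BSC(p): likelihood summed over all completions."""
    free = [i for i in range(len(prefix), n) if i not in frozen]
    total = 0.0
    for tail in itertools.product((0, 1), repeat=len(free)):
        u = list(prefix) + [0] * (n - len(prefix))
        for i, b in zip(free, tail):
            u[i] = b
        x = polar_transform(u)
        lik = 1.0
        for xj, yj in zip(x, y):
            lik *= (1 - p) if xj == yj else p
        total += lik
    return total

def stack_decode(y, n, frozen, p):
    """Best-first search over input prefixes; frozen inputs are forced to 0."""
    heap = [(-prefix_score([], n, frozen, y, p), [])]
    while heap:
        _, prefix = heapq.heappop(heap)
        if len(prefix) == n:
            return prefix                # with this metric: the ML input vector
        i = len(prefix)
        for b in ((0,) if i in frozen else (0, 1)):
            ext = prefix + [b]
            heapq.heappush(heap, (-prefix_score(ext, n, frozen, y, p), ext))

frozen = {0, 1, 2, 4}                    # (8, 4) polar code, minimum distance 4
u = [0, 0, 0, 1, 0, 0, 1, 1]
y = polar_transform(u)
y[2] ^= 1                                # flip one received bit
print(stack_decode(y, 8, frozen, 0.1) == u)   # the single error is corrected
```

Because the score of a prefix never increases as it is extended, the first full-length path popped from the queue is a maximum-likelihood decision; unpromising prefixes simply stay in the queue and are never expanded, which is the source of the complexity savings.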




