Publications catalogue - books



Open Access Title

Error-Correction Coding and Decoding

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Not available.

Availability

Detected institution: none required
Year of publication: 2017
Browse/Download: SpringerLink (open access)

Information

Resource type:

books

Print ISBN

978-3-319-51102-3

Electronic ISBN

978-3-319-51103-0

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2017

Publication rights information

© The Editor(s) (if applicable) and The Author(s) 2017

Table of contents

Analogue BCH Codes and Direct Reduced Echelon Parity Check Matrix Construction

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

Many information sources are naturally analogue and must be digitised if they are to be transmitted digitally. The process of digitisation introduces quantisation errors and increases the bandwidth required. The use of analogue error-correcting codes eliminates the need for digitisation. It is shown that analogue BCH codes may be constructed in the same way as finite-field BCH codes, including Reed–Solomon codes, except that the field of complex numbers is used. It is shown how the Mattson–Solomon polynomial, or equivalently the discrete Fourier transform, may be used as the basis for the construction of analogue BCH codes. It is also shown that a permuted parity-check matrix produces an equivalent code, using a primitive root of unity to construct the code, as in discrete BCH codes. A new algorithm is presented which uses symbolwise multiplication of rows of a parity-check matrix to produce the parity-check matrix directly in reduced echelon form. The algorithm may be used for constructing reduced echelon parity-check matrices for standard BCH and RS codes as well as analogue BCH codes. Gaussian elimination, or any other means of solving simultaneous equations, is completely avoided by the method. It is also proven that analogue BCH codes are maximum distance separable (MDS) codes. Examples are presented of using analogue BCH codes to provide error-correction for analogue, band-limited data, including the correction of impulse noise errors in BCH-encoded, analogue stereo music waveforms. Since the data is band-limited it is already redundant, and the parity-check symbols replace existing values, so there is no need for bandwidth expansion as in traditional error-correcting codes. Future research areas are outlined, including the possibility of an analogue, maximum likelihood, error-correcting decoder based on the extended Dorsch decoder covered in the chapter on the modified Dorsch decoder. Steganography is another future application area for analogue BCH codes.
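The DFT-based construction described in this abstract lends itself to a compact numerical check. Below is a minimal sketch (assumed notation and numpy conventions, not the authors' code): the parity-check matrix is built from consecutive powers of a primitive complex n-th root of unity, exactly as a Reed–Solomon parity-check matrix is built over a finite field, and any vector whose DFT vanishes at the chosen spectral indices is a codeword.

```python
import numpy as np

def analogue_bch_parity_check(n, n_parity):
    """H[j, i] = alpha^((j+1)*i) with alpha = exp(-2*pi*1j/n), so the rows
    of H are rows 1..n_parity of the DFT matrix."""
    alpha = np.exp(-2j * np.pi / n)
    rows = np.arange(1, n_parity + 1)
    return alpha ** np.outer(rows, np.arange(n))

n, t = 16, 2                        # length 16, 2t = 4 parity symbols
H = analogue_bch_parity_check(n, 2 * t)

# Build a codeword: take any complex spectrum, null the parity band, invert.
spectrum = np.random.randn(n) + 1j * np.random.randn(n)
spectrum[1:2 * t + 1] = 0
c = np.fft.ifft(spectrum)
print(np.allclose(H @ c, 0))        # True: all 2t syndromes vanish
```

Because the parity symbols occupy spectral positions that a band-limited signal leaves unused anyway, nulling them costs no extra bandwidth, which is the point made above.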

Part II - Code Construction | Pp. 299-314

LDPC Codes

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

This chapter explores the design and performance of low-density parity-check (LDPC) codes, which, when combined with soft-decision iterative decoding, currently provide the best achievable performance in digital communication systems. The application of cyclotomic cosets, idempotents and Mattson–Solomon polynomials is shown to produce many new binary cyclic LDPC codes whose parity-check equations are orthogonal in each position. A key feature of this construction technique is the incremental approach to designing the minimum Hamming distance and the sparseness of the resulting parity-check matrix of the code. Binary cyclic LDPC codes are also constructed by considering idempotents in the Mattson–Solomon domain. It is shown that, for short algebraic LDPC codes, the myth that codes whose Tanner graph contains cycles of length 4 do not converge well under iterative decoding is not necessarily true. It is demonstrated that the cyclotomic coset-based construction can easily be extended to produce good non-binary algebraic LDPC codes. Good irregular LDPC codes may be constructed using the progressive edge-growth algorithm. Many new code results are presented showing the effects of choosing different degree distributions, and guidelines are given for designing the best codes. Methods of producing structured LDPC codes, such as those with quasi-cyclic structure, are described; these are of interest to industry because they simplify the encoder and decoder. An example of such a construction, a (64800, 48600) LDPC code built from a protograph, is presented along with performance results using iterative decoding. Better results are obtained with this code than with the (64800, 48600) LDPC code used in the DVB-S2 standard.
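The length-4 Tanner-graph cycles mentioned above are easy to test for. A small illustrative sketch (not the book's construction code): two parity checks form a 4-cycle exactly when their rows of H overlap in two or more bit positions, i.e. when an off-diagonal entry of H Hᵀ is at least 2.

```python
import numpy as np

def has_4_cycles(H):
    """True iff the Tanner graph of H contains a cycle of length 4."""
    overlap = H.astype(int) @ H.T.astype(int)
    np.fill_diagonal(overlap, 0)            # ignore each check against itself
    return bool((overlap >= 2).any())

# Example: checks 0 and 1 both involve bits 0 and 1, giving a 4-cycle.
H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=np.uint8)
print(has_4_cycles(H))                      # True
```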

Part II - Code Construction | Pp. 315-354

An Exhaustive Tree Search for Stopping Sets of LDPC Codes

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

The indicative performance of an LDPC code may be determined from an exhaustive analysis of the low-weight spectral terms of the code's stopping sets, which by definition include the low-weight codewords. In a landmark paper, Rosnes and Ytrehus showed that exhaustive, low-weight stopping set analysis of codes whose parity-check matrix is sparse can be carried out using a bounded tree search over the length of the code. For an (n, k) code, the potential total search space is of size 2^n, but a good choice of bound dramatically reduces this search space to a practical size; indeed, the choice of bound is critical to the success of the algorithm. It is shown in this chapter that an improved algorithm is obtained if the bounded tree search is applied to a set of k information bits, since the potential total search space is then reduced to size 2^k. Since such a restriction will only find codewords and not all stopping sets, a class of bits defined as unsolved parity bits is also searched, as appended bits, in order to find all low-weight stopping sets. Weight spectrum and stopping set results are presented for commonly used WiMax LDPC codes in addition to some other well-known LDPC codes.
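For intuition, here is a brute-force stand-in for the bounded tree search (a sketch for toy codes only; its exhaustive enumeration is exactly the 2^n cost that the chapter's bounded search avoids). A stopping set is a non-empty set of columns of H such that no check, restricted to those columns, touches the set exactly once.

```python
from itertools import combinations
import numpy as np

def is_stopping_set(H, cols):
    """cols is a stopping set iff no row of H, restricted to cols, has
    exactly one 1 (every connected check sees the set at least twice)."""
    sub = H[:, list(cols)]
    return not np.any(sub.sum(axis=1) == 1)

def smallest_stopping_sets(H, max_weight):
    """Exhaustive enumeration up to max_weight; exponential in n."""
    n = H.shape[1]
    for w in range(1, max_weight + 1):
        hits = [s for s in combinations(range(n), w) if is_stopping_set(H, s)]
        if hits:
            return w, hits
    return None, []

# Example: for the (7,4) Hamming code all columns of H are distinct and
# non-zero, so the smallest stopping sets have weight 3 (= minimum distance).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
print(smallest_stopping_sets(H, 3)[0])      # 3
```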

Part III - Analysis and Decoders | Pp. 357-366

Erasures and Error-Correcting Codes

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

It is shown that the number and weight of low-weight codewords of a linear code determine the erasure-correcting performance of the code. An analysis is given of the probability density function of the number of erasures correctable by a given code in terms of the weight enumerator polynomial of the code. For finite-length codes on the erasure channel, the best performance is achieved with maximum distance separable (MDS) codes and maximum likelihood decoding. Unfortunately, for the case of binary codes there are no MDS codes, apart from trivial cases. However, it is shown that for those binary codes whose codeword weight distribution is close to a binomial distribution, the erasure correction performance is almost equal to that of MDS codes. Such binary codes include BCH, Goppa and double-circulant codes, and the erasure correction performance of several examples of these codes is presented. The contrasting performance of some LDPC and turbo codes is also given. A probabilistic method, based on the parity-check matrix, is described which is particularly effective at finding low-weight codewords of any linear code. The method is based on randomly chosen erasure patterns and, for most codes, quickly determines the minimum Hamming distance of the code.
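The link between erasures and low-weight codewords can be made concrete. A minimal sketch (assumed helper names, not the book's code): an erasure pattern is recoverable by maximum likelihood decoding iff the erased columns of H are linearly independent over GF(2); a failing pattern of weight w exposes a codeword of weight at most w, which is the basis of the probabilistic low-weight codeword search mentioned above.

```python
import numpy as np

def gf2_rank(A):
    """Rank over GF(2) by Gaussian elimination on a copy of A."""
    A = (A.copy() % 2).astype(np.uint8)
    rank = 0
    for c in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]             # clear column c in every other row
        rank += 1
    return rank

def erasures_correctable(H, erased):
    """True iff the erased columns of H are independent over GF(2)."""
    return gf2_rank(H[:, list(erased)]) == len(erased)
```

Sampling many random erasure patterns and solving for the dependent ones is, in essence, the probabilistic minimum-distance method the abstract describes.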

Part III - Analysis and Decoders | Pp. 367-398

The Modified Dorsch Decoder

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

The Holy Grail in coding theory is the maximum likelihood decoder. Paradoxically, the best error-correcting codes are not commonly used in practical communication systems, because no practical decoder is available that achieves anywhere near maximum likelihood performance for them. Instead, inferior codes such as LDPC or turbo codes are used, because they perform much better with a practical decoder such as the iterative decoder. One day, maybe in 100 years' time, maximum likelihood decoders will be generally available and today's best codes will then be used in practical systems. In this chapter a modified Dorsch decoder is described which achieves near maximum likelihood performance for all binary codes that are not too long. It is a practical decoder, using current digital processors, for half-rate codes of length less than about 180 bits. The outstanding performance achieved, using several examples of the best binary codes known, is presented.
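The core of the Dorsch idea is compact enough to sketch. The following is a simplified ordered-statistics variant under assumed conventions (BPSK with bit 0 mapped to +1, full-rank G); the book's modified decoder adds substantial optimisations beyond this.

```python
import numpy as np
from itertools import combinations

def dorsch_decode(G, r, max_flips=2):
    """Hard-decide the k most reliable independent positions, try every flip
    pattern of weight <= max_flips there, re-encode each candidate and keep
    the codeword closest to r in Euclidean distance."""
    k, n = G.shape
    order = np.argsort(-np.abs(r))          # most reliable positions first
    rp = r[order]
    A = (G[:, order] % 2).astype(np.uint8)
    info, row = [], 0
    for c in range(n):                      # GF(2) elimination to locate an
        piv = next((x for x in range(row, k) if A[x, c]), None)  # information set
        if piv is None:
            continue
        A[[row, piv]] = A[[piv, row]]
        for x in range(k):
            if x != row and A[x, c]:
                A[x] ^= A[row]
        info.append(c)
        row += 1
        if row == k:
            break
    hard = (rp < 0).astype(np.uint8)        # hard decisions, permuted order
    best, best_d = None, np.inf
    for w in range(max_flips + 1):
        for flips in combinations(range(k), w):
            u = hard[info].copy()
            u[list(flips)] ^= 1
            cw = (u.astype(int) @ A) % 2    # re-encode the candidate
            d = np.sum((1.0 - 2.0 * cw - rp) ** 2)
            if d < best_d:
                best_d, best = d, cw
    out = np.empty(n, dtype=np.uint8)
    out[order] = best                       # undo the reliability permutation
    return out
```

Because errors are unlikely among the most reliable bits, a small flip budget over the information set already comes close to maximum likelihood for moderate lengths, which is why the practical limit quoted above grows with processing power.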

Part III - Analysis and Decoders | Pp. 399-419

A Concatenated Error-Correction System Using the |u|u+v| Code Construction

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

Concatenation of good codes is a classic method of constructing longer codes which are themselves good. As codes increase in length it becomes progressively harder to realise a near maximum likelihood decoder. This chapter presents a novel concatenated code arrangement featuring multiple near maximum likelihood decoders, designed for an optimised matching of codes and decoders. By using some outstanding codes as constituent codes, the concatenated coding and decoding system is able to outperform the best LDPC and turbo coding systems with the same code parameters. The performance of a net (256, 128) code achieved with the concatenated arrangement is compared to a best (256, 128) LDPC code and a best (256, 128) turbo code; similarly, the performance of a net (512, 256) concatenated code is compared to a best (512, 256) LDPC code and a best (512, 256) turbo code. In both cases the new system outperforms the LDPC and turbo systems. To date, for the AWGN channel and net half-rate codes, no other code or coding arrangement is known that outperforms the system presented in this chapter at lengths of 256 and 512 bits.
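The |u|u+v| (Plotkin) combination named in the title is itself one line: given an (n, k1, d1) code and an (n, k2, d2) code it yields a (2n, k1+k2, min(2d1, d2)) code. A minimal sketch (the chapter's particular constituent codes and decoder matching are not reproduced here):

```python
import numpy as np

def u_u_plus_v(u, v):
    """|u|u+v| (Plotkin) combination of equal-length binary codewords:
    u from an (n, k1, d1) code, v from an (n, k2, d2) code, result in a
    (2n, k1 + k2, min(2*d1, d2)) code."""
    u = np.asarray(u, dtype=np.uint8)
    v = np.asarray(v, dtype=np.uint8)
    return np.concatenate([u, (u + v) % 2])

# Example: u from the (4,3,2) even-weight code, v from the (4,1,4)
# repetition code, giving a codeword of the (8,4,4) extended Hamming code.
print(u_u_plus_v([1, 1, 0, 0], [1, 1, 1, 1]))   # [1 1 0 0 0 0 1 1]
```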

Part III - Analysis and Decoders | Pp. 421-431

Combined Error Detection and Error-Correction

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

This chapter is concerned with the design of codes and systems for combined error detection and correction, primarily aimed at applications featuring retransmission of data packets which have not been decoded correctly. Several such hybrid automatic repeat request (HARQ) systems are described, including a novel system variation which uses a retransmission metric based on a soft decision: the Euclidean distance between the decoded codeword and the received vector. It is shown that a cyclic redundancy check (CRC) is not essential for this system and need not be transmitted. The generator matrix of a nested set of block codes of length 251 bits is constructed by applying Construction X three times in succession, starting with an extended BCH (128, 113, 6) code. The result is the basis of an incremental-redundancy system whereby the first 128 bits transmitted form a codeword from the BCH code, followed by the transmission of a further 32 bits, if requested, producing a codeword from a (160, 113, 12) code. Further requests finally result in a codeword from a (251, 113, 20) code. Each request increases the chance of correct decoding by increasing the minimum Hamming distance of the net received codeword. Performance graphs are presented showing the comparative error-rate performances and throughputs of the new systems compared to the standard HARQ system; lower error floors and increased throughputs are evident.
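The soft-decision retransmission metric is simple to state in code. A minimal sketch (the threshold value and BPSK mapping are assumptions; the chapter derives appropriate settings):

```python
import numpy as np

def request_retransmission(decoded, r, threshold):
    """Re-modulate the decoded codeword (BPSK, bit 0 -> +1) and request more
    redundancy when its Euclidean distance from the received vector r is
    suspiciously large, instead of relying on a transmitted CRC."""
    s = 1.0 - 2.0 * np.asarray(decoded, dtype=float)
    return float(np.sum((s - r) ** 2)) > threshold
```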

Part IV - Applications | Pp. 435-450

Password Correction and Confidential Information Access System

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

This chapter considers the use of Reed–Solomon codes to correct user mistakes in, or missing parts of, long entered passwords in an encrypted information retrieval system or a password-based authentication system. Dynamic, user-specific mapping of Galois field elements is used to ensure that passwords arbitrarily chosen by the user are valid codewords. A typical system is described, based on GF(256) and the ANSI character set, with worked examples given for encoding and password correction. Security is also enhanced by the user-specific Galois field symbol mapping, because this defeats rainbow tables when used with long passwords.
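A sketch of the scheme under stated assumptions: the third-party `reedsolo` package stands in for the book's own GF(256) codec, and the salt-derived symbol permutation below is an illustrative choice, not the chapter's mapping. Only the parity symbols are stored; a mistyped attempt of the same length is corrected by RS decoding and then unmapped.

```python
import hashlib
from reedsolo import RSCodec

NSYM = 8                                     # 8 parity symbols: corrects 4 errors

def user_mapping(salt: bytes):
    """User-specific permutation of GF(256) symbol values, so the user's
    arbitrarily chosen password maps onto valid codeword symbols."""
    order = sorted(range(256),
                   key=lambda b: hashlib.sha256(salt + bytes([b])).digest())
    inverse = bytearray(256)
    for i, b in enumerate(order):
        inverse[b] = i
    return bytes(order), bytes(inverse)

def enroll(password: bytes, salt: bytes) -> bytes:
    fwd, _ = user_mapping(salt)
    mapped = bytes(fwd[b] for b in password)
    return bytes(RSCodec(NSYM).encode(mapped)[-NSYM:])   # store parity only

def correct(attempt: bytes, parity: bytes, salt: bytes) -> bytes:
    """Correct up to NSYM//2 mistyped characters in a same-length attempt
    (raises reedsolo.ReedSolomonError if there are too many errors)."""
    fwd, inv = user_mapping(salt)
    mapped = bytes(fwd[b] for b in attempt)
    res = RSCodec(NSYM).decode(mapped + parity)
    msg = res[0] if isinstance(res, tuple) else res      # API differs by version
    return bytes(inv[b] for b in msg)
```

Missing characters would be handled as erasures with known positions, which roughly doubles the correctable count; the error-only case above keeps the sketch short.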

Part IV - Applications | Pp. 451-463

Variations on the McEliece Public Key Cryptosystem

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

A completely novel type of public key cryptosystem, based on error-correcting codes using Goppa codes, was invented by Professor Robert McEliece in 1978. Other well-established public key cryptosystems are based on the difficulty of determining logarithms in finite fields, and these can, in theory, be broken by quantum computers. Despite numerous attempts by the crypto community, the McEliece system remains unbroken to this day and is one of the few systems predicted to survive attacks by powerful computers in the future. In this chapter, some variations to the McEliece system are described, including a method which destroys the deterministic link between plaintext messages and ciphertexts, thereby providing semantic security. Consequently, this method nullifies the chosen-plaintext attack to which the classic McEliece system is vulnerable. It is shown that the public key size can be reduced and that, by encrypting the plaintext with a key derived from the random error pattern used in producing the ciphertext, the security of the system is improved, since an attacker has to determine the exact error pattern used to produce the ciphertext. This defeats the chosen-ciphertext attack in which two random bits of the ciphertext are inverted, an attack to which the standard McEliece system is vulnerable. The security of the McEliece system is analysed and a shortened ciphertext system is described which does not suffer any consequent loss of security due to shortening. This is important because, to achieve over 256 bits of security, the security analysis shows that the system needs to be based on Goppa codes of length 8192 bits. Key encapsulation and short plaintext applications need short ciphertexts in order to be efficient, and it is shown that 256 bits of security can be achieved with a shortened ciphertext of length 1912 bits conveying an information payload of 768 bits. Some examples of interesting applications that have been implemented on a smartphone in commercial products, such as a secure messaging app and a secure cloud storage app, are also described in this chapter.
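The classic McEliece mechanics can be shown with a toy code. A sketch with the (7,4) Hamming code (t = 1); this is purely illustrative and trivially breakable, whereas the chapter's systems use Goppa codes of length up to 8192 bits.

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=np.uint8)   # systematic generator
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]], dtype=np.uint8)   # parity-check matrix

def gf2_inv(M):
    """Inverse over GF(2); raises StopIteration if M is singular."""
    n = M.shape[0]
    A = np.concatenate([M % 2, np.eye(n, dtype=np.uint8)], axis=1)
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r, c])
        A[[c, piv]] = A[[piv, c]]
        for r in range(n):
            if r != c and A[r, c]:
                A[r] ^= A[c]
    return A[:, n:]

rng = np.random.default_rng(1)
while True:                                   # random non-singular scrambler S
    S = rng.integers(0, 2, (4, 4), dtype=np.uint8)
    try:
        Sinv = gf2_inv(S)
        break
    except StopIteration:
        pass
P = np.eye(7, dtype=np.uint8)[rng.permutation(7)]  # random permutation matrix
Gpub = S @ G @ P % 2                          # the public key

def encrypt(m):
    e = np.zeros(7, dtype=np.uint8)
    e[rng.integers(7)] = 1                    # secret random weight-1 error
    return (m @ Gpub + e) % 2

def decrypt(c):
    c1 = c @ P.T % 2                          # undo the permutation
    s = H @ c1 % 2                            # syndrome = column of H at the
    if s.any():                               # error position (t = 1 decoding)
        pos = int(np.where((H.T == s).all(axis=1))[0][0])
        c1[pos] ^= 1
    return c1[:4] @ Sinv % 2                  # strip systematic part, undo S

m = np.array([1, 0, 1, 1], dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(m)), m)
```

The random error pattern e is exactly the quantity from which the chapter's variation derives an additional encryption key, removing the deterministic plaintext-to-ciphertext link.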

Part IV - Applications | Pp. 465-510

Error-Correcting Codes and Dirty Paper Coding

Martin Tomlinson; Cen Jung Tjhai; Marcel A. Ambroze; Mohammed Ahmed; Mubarak Jibril

This chapter describes how error-correcting codes can be used to impress additional information onto waveforms with a minimal level of distortion. Applications include watermarking and steganography. It is shown how the modified Dorsch decoder, described in an earlier chapter, may be used to find codewords from partitioned classes of codewords, whose waveforms may be used as a watermark which is almost invisible and yet can still be reliably detected.
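The embedding principle can be illustrated with scalar quantization-index modulation (QIM), a standard stand-in for the codeword-partition method described above, in which each message bit selects a class and the class member nearest the host waveform is chosen to minimise distortion.

```python
import numpy as np

DELTA = 1.0                                  # quantizer step (assumed value)

def qim_embed(host, bits):
    """Each bit selects one of two interleaved quantizers (offset 0 or
    DELTA/2); each host sample moves to the nearest point of its selected
    quantizer, so the distortion per sample is at most DELTA/2."""
    offset = 0.5 * DELTA * np.asarray(bits, dtype=float)
    return np.round((host - offset) / DELTA) * DELTA + offset

def qim_detect(marked):
    """Decide which quantizer each sample lies closest to."""
    frac = np.mod(marked, DELTA) / DELTA
    return (np.abs(frac - 0.5) < 0.25).astype(int)

host = np.random.randn(8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(qim_detect(qim_embed(host, bits)))     # recovers bits exactly
```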

Part IV - Applications | Pp. 511-520