Variable-length encoding

From Wikipedia, the free encyclopedia

In coding theory, variable-length encoding is a type of character encoding scheme in which codes of differing lengths are used to encode a character set (a repertoire of symbols) for representation in a computer.[1] In computer science and information theory, the corresponding concept is the variable-length code, which maps source symbols to bit strings of varying length.

Variable-length codes can allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. An independent and identically-distributed source may be compressed almost arbitrarily close to its entropy. This is in contrast to fixed-length coding methods, for which data compression is only possible for large blocks of data, and any compression beyond the logarithm of the total number of possibilities comes with a finite (though perhaps arbitrarily small) probability of failure.

For these reasons, variable-length encodings were sometimes used to pack English text into fewer bytes in adventure games for early microcomputers. However, the spread of disk storage, increases in computer memory, and general-purpose compression algorithms have rendered such methods obsolete.

Multibyte encodings are usually the result of a need to increase the number of characters which can be encoded without breaking backward compatibility with an existing constraint. For example, with one byte (8 bits) per character, one can encode 256 possible characters; in order to encode more than 256 characters, the obvious choice would be to use two or more bytes per encoding unit: two bytes (16 bits) would allow 65,536 possible characters. However, such a change would break compatibility with existing systems and therefore might not be feasible at all.[a]

Unlikely source symbols can be assigned longer codewords and likely source symbols shorter codewords, giving a low expected codeword length. Some examples of well-known variable-length coding strategies are Huffman coding, Lempel–Ziv coding, arithmetic coding, and context-adaptive variable-length coding.
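The idea of giving likelier symbols shorter codewords can be sketched with Huffman's algorithm. The following is an illustrative sketch, not part of the article; the function name and the sample frequencies are chosen for this example:

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code that gives likelier symbols shorter codewords."""
    # Heap entries are (weight, tiebreak, tree); a tree is either a
    # symbol (leaf) or a pair of subtrees (internal node).
    heap = [(w, i, sym) for i, (sym, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # the two lightest trees...
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))  # ...are merged
        count += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):       # internal node: branch on 0/1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                             # leaf: record the codeword
            code[tree] = prefix or "0"
    walk(heap[0][2], "")
    return code

code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
# codeword lengths come out as 1, 2, 3 and 3 bits for a, b, c, d
```

For these particular probabilities the resulting codeword lengths match the entropy of the source exactly, since each probability is a power of two.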

General structure


A multibyte encoding system minimises disruption to existing software by keeping some characters as single-unit codes, while others require multiple units. This creates three unit types: singletons (which consist of a single unit), lead units (which come first in a multiunit sequence), and trail units (which come afterwards in a multiunit sequence). Input and display systems must handle these structures, though most other software does not.

For example, the four character string "I♥NY" is encoded in UTF-8 like this (shown as hexadecimal byte values): 49 E2 99 A5 4E 59. Of the six units in that sequence, 49, 4E, and 59 are singletons (for I, N, and Y), E2 is a lead unit and 99 and A5 are trail units. The heart symbol is represented by the combination of the lead unit and the two trail units.
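The classification of UTF-8 code units by value range can be sketched in Python (`classify` is a name chosen for this example):

```python
def classify(unit):
    """Classify a UTF-8 code unit (byte) by its value range."""
    if unit < 0x80:
        return "singleton"  # 0xxxxxxx: a complete ASCII character
    if unit < 0xC0:
        return "trail"      # 10xxxxxx: continues a multibyte sequence
    return "lead"           # 11xxxxxx: starts a multibyte sequence

data = "I♥NY".encode("utf-8")
print(data.hex(" "))                  # 49 e2 99 a5 4e 59
print([classify(b) for b in data])
# ['singleton', 'lead', 'trail', 'trail', 'singleton', 'singleton']
```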

UTF-8 clearly distinguishes singletons, leads, and trails with non-overlapping value ranges. By contrast, older encodings often reuse values, making it harder to parse text correctly. This can cause false positives in searches or make a corrupted byte disrupt long sequences. In well-designed encodings like UTF-8, searching works reliably, and corruption affects only the character containing the bad unit.

Codes and their extensions


The extension of a code is the mapping of finite-length source sequences to finite-length bit strings obtained by concatenating, for each symbol of the source sequence, the corresponding codeword produced by the original code. Using terms from formal language theory, the precise mathematical definition is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function[2] mapping each symbol from S to a sequence of symbols over T; its extension to a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as the extension of the code.
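Under this definition, the extension is simply codeword concatenation. A minimal Python sketch (the code C below is a made-up example):

```python
def extend(code):
    """The extension of a code: concatenate the codewords of a source sequence."""
    return lambda seq: "".join(code[s] for s in seq)

C = {"x": "0", "y": "10"}   # an arbitrary example code
C_star = extend(C)
print(C_star("xyy"))        # 01010
# The extension is a homomorphism: encoding a concatenation equals
# concatenating the encodings.
assert C_star("x") + C_star("yy") == C_star("xyy")
```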

Variable-length codes can be strictly nested in order of decreasing generality as non-singular codes, uniquely decodable codes, and prefix codes. Prefix codes are always uniquely decodable, and these in turn are always non-singular:

Non-singular codes


A code is non-singular if each source symbol is mapped to a different non-empty bit string; that is, the mapping from source symbols to bit strings is injective.

For example, the mapping M₁ = { a ↦ 0, b ↦ 0, c ↦ 1 } is not non-singular because both a and b map to the same bit string 0; any extension of this mapping will generate a lossy (non-lossless) coding. Such singular coding may still be useful when some loss of information is acceptable (for example, when such a code is used in audio or video compression, where a lossy coding becomes equivalent to source quantization).
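Checking non-singularity amounts to checking that the codeword map is injective and has no empty codewords. A minimal sketch in Python (the mappings mirror the examples discussed in this section):

```python
def is_non_singular(code):
    """True iff distinct symbols get distinct, non-empty codewords."""
    words = list(code.values())
    return all(words) and len(set(words)) == len(words)

singular  = {"a": "0", "b": "0", "c": "1"}   # a and b collide on "0"
injective = {"a": "1", "b": "011", "c": "01110",
             "d": "1110", "e": "10011"}
print(is_non_singular(singular))    # False
print(is_non_singular(injective))   # True
```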

However, the mapping M₂ = { a ↦ 1, b ↦ 011, c ↦ 01110, d ↦ 1110, e ↦ 10011 } is non-singular; its extension will generate a lossless coding, which will be useful for general data transmission (but this feature is not always required). It is not necessary for the non-singular code to be more compact than the source (and in many applications, a larger code is useful, for example as a way to detect or recover from encoding or transmission errors, or in security applications to protect a source from undetectable tampering).

Uniquely decodable codes


A code is uniquely decodable if its extension is non-singular. Whether a given code is uniquely decodable can be decided with the Sardinas–Patterson algorithm.

The mapping M₃ = { a ↦ 0, b ↦ 01, c ↦ 011 } is uniquely decodable (this can be demonstrated by looking at the follow-set after each target bit string in the map, because each bit string is terminated as soon as we see a 0 bit which cannot follow any existing code to create a longer valid code in the map, but unambiguously starts a new code).

Consider again the code from the previous section.[2] This code is not uniquely decodable, since the string 011101110011 can be interpreted as the sequence of codewords 01110 – 1110 – 011, but also as the sequence of codewords 011 – 1 – 011 – 10011. Two possible decodings of this encoded string are thus given by cdb and babe. However, such a code is useful when the set of all possible source symbols is completely known and finite, or when there are restrictions (such as a formal syntax) that determine if source elements of this extension are acceptable. Such restrictions permit the decoding of the original message by checking which of the possible source symbols mapped to the same symbol are valid under those restrictions.
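The Sardinas–Patterson test mentioned above repeatedly computes sets of "dangling suffixes"; the code is uniquely decodable exactly when no dangling suffix is itself a codeword. A sketch in Python (the function names are chosen for this example):

```python
def is_uniquely_decodable(codewords):
    """Sardinas–Patterson: uniquely decodable iff no iterated
    dangling suffix is itself a codeword."""
    C = set(codewords)

    def dangling(A, B):
        # Suffixes left over when a word of A is a proper prefix of a word of B.
        return {b[len(a):] for a in A for b in B
                if b.startswith(a) and len(b) > len(a)}

    S, seen = dangling(C, C), set()
    while S:
        if S & C:
            return False          # ambiguity: a dangling suffix is a codeword
        if frozenset(S) in seen:
            return True           # the suffix sets cycle without a collision
        seen.add(frozenset(S))
        S = dangling(C, S) | dangling(S, C)
    return True

print(is_uniquely_decodable(["1", "011", "01110", "1110", "10011"]))  # False
print(is_uniquely_decodable(["0", "01", "011"]))                      # True
```

On the ambiguous code of this section the first dangling-suffix set already contains 10 (from 011 as a prefix of 01110), and a later iteration produces the codeword 011 itself, which is exactly why two decodings such as cdb and babe can exist.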

Prefix codes


A code is a prefix code if no target bit string in the mapping is a prefix of the target bit string of a different source symbol in the same mapping. This means that symbols can be decoded instantaneously after their entire codeword is received. Other commonly used names for this concept are prefix-free code, instantaneous code, or context-free code. Special cases of prefix codes include block codes (in which all codewords have the same length), LEB128, and variable-length quantity (VLQ) codes.

For example, the mapping above is not a prefix code because we do not know after reading the bit string 0 whether it encodes an a source symbol, or if it is the prefix of the encodings of the b or c symbols. An example of a prefix code is shown below.

Symbol Codeword
a 0
b 10
c 110
d 111
Example of encoding and decoding:
aabacdab → 00100110111010 → |0|0|10|0|110|111|0|10| → aabacdab
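The table above can be turned into a small encoder and an instantaneous decoder in Python (a sketch; the helper names are chosen for this example):

```python
CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {v: k for k, v in CODE.items()}

def encode(text):
    return "".join(CODE[s] for s in text)

def decode(bits):
    """Emit each symbol as soon as its codeword is complete (instantaneous)."""
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in DECODE:          # a full codeword has been read
            out.append(DECODE[buf])
            buf = ""
    if buf:
        raise ValueError("trailing bits do not form a codeword")
    return "".join(out)

print(encode("aabacdab"))       # 00100110111010
print(decode("00100110111010")) # aabacdab
```

Because no codeword is a prefix of another, the decoder never has to look ahead or backtrack.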

For this example, if the probabilities of the symbols (a, b, c, d) were (1/2, 1/4, 1/8, 1/8), the expected number of bits used to represent a source symbol using the code above would be:

L = 1 × 1/2 + 2 × 1/4 + 3 × 1/8 + 3 × 1/8 = 1.75 bits.

As the entropy of this source is 1.75 bits per symbol, this code compresses the source as much as possible so that the source can be recovered with zero error.
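Assuming symbol probabilities of (1/2, 1/4, 1/8, 1/8), which give the entropy of 1.75 bits per symbol stated above, the arithmetic can be checked directly in Python:

```python
from math import log2

probs   = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
lengths = {"a": 1,   "b": 2,    "c": 3,     "d": 3}

# Expected codeword length: sum of p(s) * len(s) over all symbols.
expected_bits = sum(probs[s] * lengths[s] for s in probs)
# Shannon entropy of the source: -sum of p * log2(p).
entropy = -sum(p * log2(p) for p in probs.values())
print(expected_bits, entropy)   # 1.75 1.75
```

The two quantities coincide exactly because every probability here is a negative power of two, the case in which a prefix code can meet the entropy bound with equality.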


Notes

  1. ^ As a real-life example of this, UTF-16, which represents the most common characters in exactly the manner just described (and uses pairs of 16-bit code units for less-common characters), never gained traction as an encoding for text intended for interchange due to its incompatibility with the ubiquitous 7-/8-bit ASCII encoding; its intended role was instead taken by UTF-8, which does preserve ASCII compatibility.

References

  1. ^ Crispin, M. (2005-04-01). UTF-9 and UTF-18 Efficient Transformation Formats of Unicode. doi:10.17487/rfc4042.
  2. ^ a b This code is based on an example found in Berstel et al. (2009), Example 2.3.1, p. 63.

Further reading

Berstel, Jean; Perrin, Dominique; Reutenauer, Christophe (2009). Codes and Automata. Encyclopedia of Mathematics and Its Applications. Vol. 129. Cambridge University Press.