bzip2-1.0, 0.9.5 and 0.9.0
use exactly the same file format as the previous
version, bzip2-0.1. This decision was made in the interests of
stability. Creating yet another incompatible compressed file format
would create further confusion and disruption for users.
Nevertheless, this is not a painless decision. Development
work since the release of bzip2-0.1 in August 1997
has shown complexities in the file format which slow down
decompression and, in retrospect, are unnecessary. These are:
The run-length encoder, which is the first of the
compression transformations, is entirely irrelevant.
The original purpose was to protect the sorting algorithm
from the very worst case input: a string of repeated
symbols. But algorithm steps Q6a and Q6b in the original
Burrows-Wheeler technical report (SRC-124) show how
repeats can be handled without difficulty in block sorting.
The randomisation mechanism doesn't really need to be
there. Udi Manber and Gene Myers published a suffix
array construction algorithm a few years back, which
can be employed to sort any block, no matter how
repetitive, in O(N log N) time. Subsequent work by
Kunihiko Sadakane has produced a derivative O(N (log N)^2)
algorithm which usually outperforms the Manber-Myers algorithm.
I could have changed to Sadakane's algorithm, but I find
it to be slower than bzip2's existing algorithm for
most inputs, and the randomisation mechanism protects
adequately against bad cases. I didn't think it was
a good tradeoff to make. Partly this is due to the fact
that I was not flooded with email complaints about
bzip2-0.1's performance on repetitive data, so
perhaps it isn't a problem for real inputs.
Probably the best long-term solution,
and the one I have incorporated into 0.9.5 and above,
is to use the existing sorting
algorithm initially, and fall back to an O(N (log N)^2)
algorithm if the standard algorithm gets into difficulties.
The compressed file format was never designed to be
handled by a library, and I have had to jump through
some hoops to produce an efficient implementation of
decompression. It's a bit hairy. Try passing
decompress.c through the C preprocessor
and you'll see what I mean. Much of this complexity
could have been avoided if the compressed size of
each block of data were recorded in the data stream.
An Adler-32 checksum, rather than a CRC32 checksum,
would be faster to compute.
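For reference, Adler-32 (as defined in RFC 1950, the zlib
specification) needs only two running sums and a modulus per byte,
which is what makes it cheaper than a table-driven CRC32. A minimal
sketch, not taken from any particular library:

   #include <stdint.h>
   #include <stddef.h>

   /* Adler-32 per RFC 1950: two running sums modulo 65521 (the
      largest prime below 2^16).  No per-byte table lookup. */
   uint32_t adler32(const uint8_t *buf, size_t len)
   {
      uint32_t a = 1, b = 0;
      size_t i;
      for (i = 0; i < len; i++) {
         a = (a + buf[i]) % 65521;
         b = (b + a) % 65521;
      }
      return (b << 16) | a;
   }

Production implementations defer the % operations across runs of
input, but the comparison stands: a couple of additions per byte,
against a table lookup plus shifting and masking for CRC32.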
It would be fair to say that the bzip2 format was frozen
before I properly and fully understood the performance
consequences of doing so.
Improvements which I was able to incorporate into
0.9.0, despite using the same file format, are:
Single array implementation of the inverse BWT. This
significantly speeds up decompression, presumably
because it reduces the number of cache misses (see the
sketch after this list).
Faster inverse MTF transform for large MTF values. The
new implementation is based on the notion of sliding blocks of values.
bzip2-0.9.0 now reads and writes files with fread
and fwrite; version 0.1 used putc and getc.
Duh! Well, you live and learn.
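To illustrate the single-array idea, here is a hedged sketch of an
inverse BWT in that style. It is not the actual bzip2 code (whose
details differ), and the names are mine. Each entry of tt packs a
successor index (high 24 bits) together with its byte (low 8 bits),
so the reconstruction loop makes one array access, and risks one
cache miss, per output byte. The packing assumes the block is
smaller than 2^24 bytes, which comfortably holds for bzip2's
maximum block size of 900000 bytes.

   #include <stdint.h>
   #include <stddef.h>

   /* bwt[] is the last column of the sorted rotations, idx is the
      row holding the original string, n < 2^24.  tt[] is n words
      of scratch; out[] receives the reconstructed block. */
   void inverse_bwt(const uint8_t *bwt, size_t n, size_t idx,
                    uint32_t *tt, uint8_t *out)
   {
      size_t count[256] = { 0 }, total = 0, i;
      uint32_t p;

      for (i = 0; i < n; i++) count[bwt[i]]++;

      /* turn the counts into start positions in the (implicit)
         first column */
      for (i = 0; i < 256; i++) {
         size_t c = count[i];
         count[i] = total;
         total += c;
      }

      /* build the packed link table: successor index and byte
         in a single word */
      for (i = 0; i < n; i++)
         tt[count[bwt[i]]++] = ((uint32_t)i << 8) | bwt[i];

      /* walk the cycle, emitting one byte per step */
      p = tt[idx];
      for (i = 0; i < n; i++) {
         out[i] = (uint8_t)p;   /* low 8 bits: the byte   */
         p = tt[p >> 8];        /* high bits: next entry  */
      }
   }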
Further ahead, it would be nice
to be able to do random access into files. This will
require some careful design of compressed file formats.
After some consideration, I have decided not to use
GNU autoconf to configure 0.9.5 or 1.0.
autoconf, admirable and wonderful though it is,
mainly assists with portability problems between Unix-like
platforms. But bzip2 doesn't have much in the way
of portability problems on Unix; most of the difficulties appear
when porting to the Mac, or to Microsoft's operating systems.
autoconf doesn't help in those cases, and brings in a
whole load of new complexity.
Most people should be able to compile the library and program
under Unix straight out-of-the-box, so to speak, especially
if you have a version of GNU C available.
There are a couple of __inline__ directives in the code. GNU C
(gcc) should be able to handle them. If you're not using
GNU C, your C compiler shouldn't see them at all.
If your compiler does, for some reason, see them and doesn't
like them, just #define __inline__ to be /* */. One
easy way to do this is to compile with the flag -D__inline__=,
which should be understood by most Unix compilers.
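For example, assuming a compiler that chokes on __inline__, a build
command along these lines (the file list is that of the bzip2
sources) should do the trick:

   cc -O2 -D__inline__= -c blocksort.c huffman.c crctable.c \
      randtable.c compress.c decompress.c bzlib.c bzip2.c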
If you still have difficulties, try compiling with the macro
BZ_STRICT_ANSI defined. This should enable you to build the
library in a strictly ANSI compliant environment. Building the program
itself like this is dangerous and not supported, since you remove
bzip2's checks against compressing directories, symbolic links,
devices, and other not-really-a-file entities. This could cause
filesystem corruption.
One other thing: if you create a bzip2 binary for public
distribution, please try and link it statically (gcc -static). This
avoids all sorts of library-version issues that others may encounter
later on.
If you build bzip2 on Win32, you must set BZ_UNIX to 0 and
BZ_LCCWIN32 to 1, in the file bzip2.c, before compiling.
Otherwise the resulting binary won't work correctly.
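In other words, the two settings in bzip2.c should end up reading:

   #define BZ_UNIX      0
   #define BZ_LCCWIN32  1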
I tried pretty hard to make sure bzip2 is
bug free, both by design and by testing. Hopefully
you'll never need to read this section for real.
Nevertheless, if bzip2 dies with a segmentation
fault, a bus error or an internal assertion failure, it
will ask you to email me a bug report. Experience with
version 0.1 shows that almost all these problems can
be traced to either compiler bugs or hardware problems.
Recompile the program with no optimisation, and see if it
works. And/or try a different compiler.
I heard all sorts of stories about various flavours
of GNU C (and other compilers) generating bad code for
bzip2, and I've run across two such examples myself.
2.7.X versions of GNU C are known to generate bad code from
time to time, at high optimisation levels.
If you get problems, try using the flags
-O2 -fomit-frame-pointer -fno-strength-reduce.
You should specifically not use -funroll-loops.
You may notice that the Makefile runs six tests as part of
the build process. If the program passes all of these, it's
a pretty good (but not 100%) indication that the compiler has
done its job correctly.
If bzip2 crashes randomly, and the crashes are not
repeatable, you may have a flaky memory subsystem. bzip2
really hammers your memory hierarchy, and if it's a bit marginal,
you may get these problems. Ditto if your disk or I/O subsystem
is slowly failing. Yup, this really does happen.
Try using a different machine of the same type, and see if
you can repeat the problem.
This isn't really a bug, but ... If bzip2 tells
you your file is corrupted on decompression, and you
obtained the file via FTP, there is a possibility that you
forgot to tell FTP to do a binary mode transfer. That absolutely
will cause the file to be non-decompressible. You'll have to transfer
the file again.
If you've incorporated libbzip2 into your own program
and are getting problems, please, please, please, check that the
parameters you are passing in calls to the library are
correct and in accordance with what the documentation says
is allowable. I have tried to make the library robust against
such problems, but I'm sure I haven't succeeded.
Finally, if the above comments don't help, you'll have to send
me a bug report. Now, it's just amazing how many people will
send me a bug report saying something like
bzip2 crashed with segmentation fault on my machine
and absolutely nothing else. Needless to say, such a report
is totally, utterly, completely and comprehensively 100% useless;
a waste of your time, my time, and net bandwidth.
With no details at all, there's no way I can possibly begin
to figure out what the problem is.
The rules of the game are: facts, facts, facts. Don't omit
them because "oh, they won't be relevant". At the bare minimum:
Machine type. Operating system version.
Exact version of bzip2 (do bzip2 -V).
Exact version of the compiler used.
Flags passed to the compiler.
However, the most important single thing that will help me is
the file that you were trying to compress or decompress at the
time the problem happened. Without that, my ability to do anything
more than speculate about the cause is limited.
Please remember that I connect to the Internet with a modem, so
you should contact me before mailing me huge files.
bzip2 is a resource hog. It soaks up large amounts of CPU cycles
and memory. Also, it gives very large latencies. In the worst case, you
can feed many megabytes of uncompressed data into the library before
getting any compressed output, so this probably rules out applications
requiring interactive behaviour.
These aren't faults of my implementation, I hope, but more
an intrinsic property of the Burrows-Wheeler transform (unfortunately).
Maybe this isn't what you want.
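To see the latency point concretely, here is a hedged sketch using
the libbzip2 streaming interface (names as in the 1.0 API; error
handling omitted). With the largest block size selected, the library
typically emits nothing at all until roughly a full 900000-byte
block has been fed in:

   #include <stdio.h>
   #include <string.h>
   #include "bzlib.h"

   int main(void)
   {
      bz_stream s;
      static char in[65536], out[65536];
      int i;

      memset(&s, 0, sizeof s);  /* NULL bzalloc/bzfree: default allocator */
      memset(in, 'x', sizeof in);
      if (BZ2_bzCompressInit(&s, 9, 0, 0) != BZ_OK) return 1;

      /* feed 64k at a time; expect zero bytes emitted until the
         first 900k block has accumulated, around chunk 13 */
      for (i = 0; i < 16; i++) {
         s.next_in   = in;   s.avail_in  = sizeof in;
         s.next_out  = out;  s.avail_out = sizeof out;
         (void)BZ2_bzCompress(&s, BZ_RUN);
         printf("chunk %d: emitted %u bytes\n",
                i, (unsigned)(sizeof out - s.avail_out));
      }
      BZ2_bzCompressEnd(&s);
      return 0;
   }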
If you want a compressor and/or library which is faster, uses less
memory but gets pretty good compression, and has minimal latency,
consider Jean-loup Gailly's and Mark Adler's work, zlib-1.1.3 and
gzip-1.2.4. Look for them on the web.
For something faster and lighter still, you might try Markus F X J
Oberhumer's LZO real-time compression/decompression library.
If you want to use the bzip2 algorithms to compress small blocks
of data, 64k bytes or smaller, for example in an on-the-fly disk
compressor, you'd be well advised not to use this library. Instead,
I've made a special library tuned for that kind of use. It's part of
e2compr-0.40, an on-the-fly disk compressor for the Linux
ext2 filesystem.
Three data sets were used for testing:
B: a directory containing 6001 files, one for every length in the
range 0 to 6000 bytes. The files contain random lowercase
letters. 18.7 megabytes.
H: my home directory tree. Documents, source code, mail files,
compressed data. H contains B, and also a directory of
files designed as boundary cases for the sorting; mostly very
repetitive, nasty files. 565 megabytes.
A: directory tree holding various applications built from source:
egcs, gcc-2.8.1, KDE, GTK, Octave, etc.
The tests conducted are as follows. Each test means compressing
(a copy of) each file in the data set, decompressing it and
comparing it against the original.
First, a bunch of tests with block sizes and internal buffer
sizes set very small,
to detect any problems with the
blocking and buffering mechanisms.
This required modifying the source code so as to try to break it.
1. Data set H, with buffer size of 1 byte, and block size of 23 bytes.
2. Data set B, buffer size 1 byte, block size 1 byte.
3. As (2) but small-mode decompression.
4. As (2) with block size 2 bytes.
5. As (2) with block size 3 bytes.
6. As (2) with block size 4 bytes.
7. As (2) with block size 5 bytes.
8. As (2) with block size 6 bytes and small-mode decompression.
9. H with buffer size of 1 byte, but normal block size (up to 900000 bytes).
Then some tests with unmodified source code.
1. H, all settings normal.
2. As (1), with small-mode decompress.
3. H, compress with flag -1.
4. H, compress with flag -s, decompress with flag -s.
5. Forwards compatibility: H, bzip2-0.1pl2 compressing,
bzip2-0.9.5 decompressing, all settings normal.
6. Backwards compatibility: H, bzip2-0.9.5 compressing,
bzip2-0.1pl2 decompressing, all settings normal.
7. Bigger tests: A, all settings normal.
8. As (7), using the fallback (Sadakane-like) sorting algorithm.
9. As (8), compress with flag -1, decompress with flag -s.
10. H, using the fallback sorting algorithm.
11. Forwards compatibility: A, bzip2-0.1pl2 compressing,
bzip2-0.9.5 decompressing, all settings normal.
12. Backwards compatibility: A, bzip2-0.9.5 compressing,
bzip2-0.1pl2 decompressing, all settings normal.
13. Misc test: about 400 megabytes of .tar files with
bzip2 compiled with Checker (a memory access error
detector, like Purify).
14. Misc tests to make sure it builds and runs ok on non-Linux/x86
platforms.
These tests were conducted on a 225 MHz IDT WinChip machine, running
Linux 2.0.36. They represent nearly a week of continuous computation.
All tests completed successfully.
bzip2 is not research work, in the sense that it doesn't present
any new ideas. Rather, it's an engineering exercise based on
existing ideas.
Four documents describe essentially all the ideas behind bzip2:
Michael Burrows and D. J. Wheeler:
"A block-sorting lossless data compression algorithm"
10th May 1994.
Digital SRC Research Report 124.
If you have trouble finding it, try searching at the
New Zealand Digital Library, http://www.nzdl.org.
Daniel S. Hirschberg and Debra A. LeLewer
"Efficient Decoding of Prefix Codes"
Communications of the ACM, April 1990, Vol 33, Number 4.
You might be able to get an electronic copy of this
from the ACM Digital Library.
David J. Wheeler
Program bred3.c and accompanying document bred3.ps.
This contains the idea behind the multi-table Huffman coding scheme.
Jon L. Bentley and Robert Sedgewick
"Fast Algorithms for Sorting and Searching Strings"
Available from Sedgewick's web page.
The following paper gives valuable additional insights into the
algorithm, but is not immediately the basis of any code
used in bzip2.
Peter Fenwick:
"Block Sorting Text Compression"
Proceedings of the 19th Australasian Computer Science Conference,
Melbourne, Australia. Jan 31 - Feb 2, 1996.
Kunihiko Sadakane's sorting algorithm, mentioned above,
is available from: