A working POSIX thread library is needed for the server. On Solaris 2.5
we use Sun PThreads (the native thread support in 2.4 and earlier
versions is not good enough), and on Linux we use LinuxThreads by Xavier
Leroy. The hard part of porting to a new Unix variant without good native
thread support is probably porting MIT-pthreads. See
Programming POSIX Threads.
The MySQL distribution includes a patched version of
Provenzano's Pthreads from MIT (see
web page). This can be used for some operating systems that do not
have POSIX threads.
It is also possible to use another user level thread package named
FSU Pthreads (see
home page). This implementation is being used for the SCO port.
See the `thr_lock.c' and `thr_alarm.c' programs in the `mysys'
directory for some tests/examples of these problems.
Both the server and the client need a working C++ compiler (we use gcc
and have tried SparcWorks). Another compiler that is known to work is the
To compile only the client, use ./configure --without-server.
There is currently no support for compiling only the server, nor is it
likely to be added unless someone has a good reason for it.
If you want or need to change any `Makefile' or the configure script, you must
get Automake and Autoconf. We have used automake-1.2 and
The following steps remake everything from the most basic files:
/bin/rm -f config.cache
./configure --with-debug=full --prefix='your installation directory'
# The makefiles generated above need GNU make 3.75 or newer.
# (called gmake below)
gmake clean all install init-db
If you are using some functionality that is very new in MySQL,
you can try running mysqld with the --skip-new option (which disables all
new, potentially unsafe functionality) or with --safe-mode, which
disables a lot of optimizations that may cause problems.
See section A.4.1 What To Do If MySQL Keeps Crashing.
If mysqld doesn't want to start, you should check that you don't have
any my.cnf files that interfere with your setup!
You can check your my.cnf arguments with mysqld --print-defaults
and avoid using them by starting with mysqld --no-defaults ....
If mysqld starts to eat up CPU or memory or if it ``hangs'', you
can use mysqladmin processlist status to find out if someone is
executing a query that takes a long time. It may be a good idea to
run mysqladmin -i10 processlist status in a separate window if you are
experiencing performance problems or if new clients can't connect.
The command mysqladmin debug will dump some information about
locks in use, memory usage, and query usage to the mysql log file. This
may help solve some problems. This command also provides some useful
information even if you haven't compiled MySQL for debugging!
If the problem is that some tables are getting slower and slower you
should try to optimize the table with OPTIMIZE TABLE or
myisamchk. See section 4 MySQL Database Administration. You should also
check the slow queries with EXPLAIN.
If you have some very specific problem, you can always try to debug
MySQL. To do this you must configure MySQL with the
--with-debug or the --with-debug=full option. You can check
whether or not MySQL was compiled with debugging by doing:
mysqld --help. If the --debug flag is listed with the
options then you have debugging enabled. mysqladmin ver also
lists the mysqld version as mysql ... --debug in this case.
If you are using gcc or egcs, the recommended configure line is:
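The configure line itself is missing here; the following is only a sketch consistent with the description that follows (the exact flags and paths are assumptions, not the original recommendation):

```shell
# Sketch only (flags and prefix are assumptions): use gcc as the C++
# compiler so that nothing links against libstdc++, disable C++
# exceptions, and build with support for all character sets.
CC=gcc CXX=gcc CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti" \
./configure --prefix=/usr/local/mysql --with-extra-charsets=complex
```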
This will avoid problems with the libstdc++ library and with C++
exceptions (many compilers have problems with C++ exceptions in threaded
code) and compile a MySQL version with support for all character sets.
If you suspect a memory overrun error, you can configure MySQL
with --with-debug=full, which will install a memory allocation
(SAFEMALLOC) checker. Running with SAFEMALLOC is however
quite slow, so if you get performance problems you should start
mysqld with the --skip-safemalloc option. This will
disable the memory overrun checks for each call to malloc() and free().
If mysqld stops crashing when you compile it with
--with-debug, you have probably found a compiler bug or a timing
bug within MySQL. In this case you can try to add -g to
the CFLAGS and CXXFLAGS variables above and not use
--with-debug. If mysqld now dies, you can at least attach
to it with gdb or use gdb on the core file to find out what happened.
When you configure MySQL for debugging you automatically enable a
lot of extra safety check functions that monitor the health of mysqld.
If they find something ``unexpected,'' an entry will be written to
stderr, which safe_mysqld directs to the error log! This also
means that if you are having some unexpected problems with MySQL and
are using a source distribution, the first thing you should do is to
configure MySQL for debugging! (The second thing, of course, is to
send mail to firstname.lastname@example.org and ask for help. Please use the
mysqlbug script for all bug reports or questions regarding the
MySQL version you are using!)
In the Windows MySQL distribution, mysqld.exe is by
default compiled with support for trace files.
You can restrict the trace so that it only prints information with the most interesting tags.
If you make a bug report about this, please send only the relevant lines
from the trace file to the appropriate mailing list! If you can't locate
the wrong place, you can ftp the trace file,
together with a full bug report, to
ftp://support.mysql.com/pub/mysql/secret so that a MySQL
developer can take a look at it.
On most systems you can also start mysqld from gdb to get
more information if mysqld crashes.
With some older gdb versions on Linux you must use run
--one-thread if you want to be able to debug mysqld threads. In
this case you can only have one thread active at a time.
When running mysqld under gdb, you should disable the stack trace
with --skip-stack-trace to be able to catch segfaults within gdb.
It's very hard to debug MySQL under gdb if you make a lot of
new connections all the time, as gdb doesn't free the memory for
old threads. You can avoid this problem by starting mysqld with
-O thread_cache_size='max_connections + 1'. In most cases just
using -O thread_cache_size=5 will help a lot!
If you want to get a core dump on Linux if mysqld dies with a
SIGSEGV signal, you can start mysqld with the --core-file option.
This core file can be used to make a backtrace that may help you
find out why mysqld died:
shell> gdb mysqld core
gdb> backtrace full
On some operating systems, the error log will contain a stack trace if
mysqld dies unexpectedly. You can use this to find out where (and
maybe why) mysqld died. See section 4.9.1 The Error Log. To get a stack trace,
you should NOT compile mysqld with the -fomit-frame-pointer
option to gcc. See section G.1.1 Compiling MySQL for debugging.
If the error file contains something like the following:
mysqld got signal 11;
The manual section 'Debugging a MySQL server' tells you how to use a
stack trace and/or the core file to produce a readable backtrace that may
help in finding out why mysqld died
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack range sanity check, ok, backtrace follows
you can find where mysqld died by doing the following:
Copy the above numbers to a file, for example `mysqld.stack'.
Make a symbol file for the mysqld server:
nm -n libexec/mysqld > /tmp/mysqld.sym
Note that many MySQL binary distributions come with the above
file, named `mysqld.sym.gz'. In this case you must unpack it by doing:
gunzip < bin/mysqld.sym.gz > /tmp/mysqld.sym
Execute resolve_stack_dump -s /tmp/mysqld.sym -n mysqld.stack.
This will print out where mysqld died. If this doesn't help you
find out why mysqld died, you should make a bug report and include
the output from the above command with the bug report.
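As a toy illustration of what resolve_stack_dump does under the hood (all file contents and symbol names below are made up, and addresses are simplified to decimal, whereas real `nm -n' output is hexadecimal), each stack address is mapped to the nearest preceding symbol in the sorted symbol file:

```shell
# Build a fake, address-sorted symbol file and a fake stack dump.
# Everything here is invented for the example.
cat > /tmp/mysqld.sym <<'EOF'
4096 T main
8192 T handle_connections
12288 T do_command
EOF
cat > /tmp/mysqld.stack <<'EOF'
8208
12354
EOF
# Resolve each address to the last symbol whose address is <= it.
awk 'NR==FNR { addr[++n] = $1; name[n] = $3; next }
     { best = "??"
       for (i = 1; i <= n; i++) if (addr[i] + 0 <= $1 + 0) best = name[i]
       print $1, best }' /tmp/mysqld.sym /tmp/mysqld.stack
```

This prints one resolved symbol per stack address (here, handle_connections and do_command), which is exactly the kind of readable backtrace the real tool produces from `mysqld.sym' and `mysqld.stack'.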
Note however that in most cases it will not help us to just have a stack
trace to find the reason for the problem. To be able to locate the bug
or provide a workaround, we would in most cases need to know the query
that killed mysqld and preferably a test case so that we can
repeat the problem! See the section How to Report Bugs or Problems.
If mysqld dies or hangs, you should start mysqld with
--log. When mysqld dies again, you can examine the end of
the log file for the query that killed mysqld.
If you are using --log without a file name, the log is stored in
the database directory as `hostname.log'. In most cases it's the last
query in the log file that killed mysqld, but if possible you
should verify this by restarting mysqld and executing the found
query from the mysql command-line tool. If this works, you
should also test all complicated queries that didn't complete.
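As a minimal sketch of this step (the log path and contents below are invented for the example), the last entry of the general query log is the first suspect:

```shell
# Fake a tiny general query log; contents are made up for the example.
cat > /tmp/hostname.log <<'EOF'
3 Query SELECT * FROM t1
4 Query UPDATE t2 SET a=1 WHERE id=7
5 Query SELECT count(*) FROM t3 GROUP BY b
EOF
# The last logged statement before the crash is the prime suspect.
tail -1 /tmp/hostname.log
```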
You can find the queries that take a long time to execute by starting
mysqld with --log-slow-queries. See section 4.9.5 The Slow Query Log.
If you find the text mysqld restarted in the error log file
(normally named `hostname.err') you have probably found a query
that causes mysqld to fail. If this happens you should check all
your tables with myisamchk (see section 4 MySQL Database Administration),
and test the queries in the MySQL log files to see if one doesn't
work. If you find such a query, try first upgrading to the newest
MySQL version. If this doesn't help and you can't find anything
in the mysql mail archive, you should report the bug to
email@example.com. Links to mail archives are available
online at the MySQL web page.
If you have started mysqld with --myisam-recover,
MySQL will automatically check and try to repair MyISAM
tables if they are marked as 'not closed properly' or 'crashed'. If
this happens, MySQL will write an entry
Warning: Checking table ... in the `hostname.err' file, which is
followed by Warning: Repairing table if the table needs to be
repaired. If you get a lot of these errors without mysqld having
died unexpectedly just before, then something is wrong and needs to
be investigated further. See section 4.1.1 mysqld Command-line Options.
It's of course not a good sign if mysqld died unexpectedly,
but in this case one shouldn't investigate the Checking table...
messages, but instead try to find out why mysqld died.
If you get corrupted tables or if mysqld always fails after some
update commands, you can test whether the bug is reproducible by doing the following:
Take down the MySQL daemon (with mysqladmin shutdown).
Make a backup of the tables (to guard against the very unlikely case that
the repair will do something bad).
Check all tables with myisamchk -s database/*.MYI. Repair any
corrupted tables with myisamchk -r database/table.MYI.
Make a second backup of the tables.
Remove (or move away) any old log files from the MySQL data
directory if you need more space.
Start mysqld with --log-bin. See section 4.9.4 The Binary Update Log.
If you want to find a query that crashes mysqld, you should use
When you have gotten a crashed table, stop the mysqld server.
Restore the backup.
Restart the mysqld server without --log-bin.
Re-execute the commands with mysqlbinlog update-log-file | mysql.
The update log is saved in the MySQL database directory with
the name hostname-bin.#.
If the tables are corrupted again or you can get mysqld to die with the
above command, you have found a reproducible bug that should be easy to
fix! FTP the tables and the binary log to
ftp://support.mysql.com/pub/mysql/secret and send a mail to
firstname.lastname@example.org or (if you are a support customer) to
email@example.com about the problem and the MySQL team
will fix it as soon as possible.
You can also use the script mysql_find_rows to just execute some of the
update statements if you want to narrow down the problem.
If you have problems with your own client code, you should attempt to
connect to the server and run your query using a client that is known to
work. Do this by running mysql in debugging mode (assuming you
have compiled MySQL with debugging on); this causes the client to
generate a trace file in `/tmp/client.trace'.
If your client crashes at some legal-looking code, you should check
that your `mysql.h' include file matches your mysql library file.
A very common mistake is to use an old `mysql.h' file from an old
MySQL installation with a new MySQL library.
The MySQL server and most MySQL clients are compiled
with the DBUG package originally made by Fred Fish. When you have configured
MySQL for debugging, this package makes it possible to get a trace
file of what the program is doing. See section G.1.2 Creating trace files.
One uses the debug package by invoking the program with the
--debug="..." or the -#... option.
Most MySQL programs have a default debug string that will be
used if you don't specify an option to --debug. The default
trace file is usually /tmp/programname.trace on Unix and
\programname.trace on Windows.
The debug control string is a sequence of colon-separated fields.
Each field consists of a mandatory flag character followed by
an optional "," and a comma-separated list of modifiers:
The currently recognized flag characters are:
d   Enable output from DBUG_<N> macros for the current state. May be followed by a list of keywords which selects output only for the DBUG macros with that keyword. An empty list of keywords implies output for all macros.
D   Delay after each debugger output line. The argument is the number of tenths of seconds to delay, subject to machine capabilities. That is, -#D,20 is a delay of two seconds.
f   Limit debugging and/or tracing, and profiling to the list of named functions. Note that a null list will disable all functions. The appropriate "d" or "t" flags must still be given; this flag only limits their actions if they are enabled.
F   Identify the source file name for each line of debug or trace output.
i   Identify the process with the PID or thread ID for each line of debug or trace output.
g   Enable profiling. Create a file called 'dbugmon.out' containing information that can be used to profile the program. May be followed by a list of keywords that select profiling only for the functions in that list. A null list implies that all functions are considered.
L   Identify the source file line number for each line of debug or trace output.
n   Print the current function nesting depth for each line of debug or trace output.
N   Number each line of dbug output.
o   Redirect the debugger output stream to the specified file. The default output is stderr.
O   Like o, but the file is really flushed between each write. When needed, the file is closed and reopened between each write.
p   Limit debugger actions to specified processes. A process must be identified with the DBUG_PROCESS macro and match one in the list for debugger actions to occur.
P   Print the current process name for each line of debug or trace output.
r   When pushing a new state, do not inherit the previous state's function nesting level. Useful when the output is to start at the left margin.
S   Do function _sanity(_file_,_line_) at each debugged function until _sanity() returns something that differs from 0. (Mostly used with safemalloc to find memory leaks.)
t   Enable function call/exit trace lines. May be followed by a list (containing only one modifier) giving a numeric maximum trace level, beyond which no output will occur for either debugging or tracing macros. The default is a compile-time option.
Some examples of debug control strings which might appear on a shell
command line (the "-#" is typically used to introduce a control string
to an application program) are:
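For instance (the control strings are drawn from the DBUG library documentation; myprog is a placeholder program name):

```shell
# Illustrative DBUG control strings; myprog is a placeholder.
myprog -#d:t                        # debug output plus function call/exit tracing
myprog -#d:f,main,subr1:F:L:t,20    # restrict to main/subr1, show file and line, trace depth 20
myprog -#d,input,output,files:n     # only macros tagged input/output/files; show nesting depth
mysqld --debug=d:t:i:O,\\mysqld.trace  # mysqld on Windows, writing a flushed trace file
```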
Currently MySQL only supports table locking for
ISAM/MyISAM and HEAP tables, and page-level locking
for BDB tables. See section 5.3.1 How MySQL Locks Tables. With MyISAM
tables one can freely mix INSERT and SELECT without locks.
Starting in version 3.23.33, you can analyze the table lock contention
on your system by checking the Table_locks_waited and
Table_locks_immediate status variables.
Some database users claim that MySQL cannot support anywhere near the
number of concurrent users they need because it lacks row-level locking. This
may be true for some specific applications, but is not generally
true. As always, this depends entirely on what the application does and what
the access/update pattern of the data is.
Pros for row locking:
Fewer lock conflicts when accessing different rows in many threads.
Fewer changes for rollbacks.
Makes it possible to lock a single row for a long time.
Cons for row locking:
Takes more memory than page-level or table locks.
Is slower than page-level or table locks when used on a big part of the table,
because one has to do many more locks.
Is definitely much worse than other locks if you often do GROUP
BY on a large part of the data or if one often has to scan the whole table.
With higher-level locks one can also more easily support locks of
different types to tune the application, as the lock overhead is less
notable than for row-level locks.
Table locks are superior to page-level/row-level locks in the
following cases:
Reads and updates on strict keys, where one updates or deletes
a row that can be fetched with one key read:
UPDATE table_name SET column=value WHERE unique_key=#
DELETE FROM table_name WHERE unique_key=#
SELECT combined with INSERT (and very few UPDATEs and DELETEs).
Many scans / GROUP BY on the whole table without any writers.
Other options than row / page level locking:
Versioning (like we use in MySQL for concurrent inserts) where
you can have one writer at the same time as many readers. This means
that the database/table supports different views for the data depending
on when one started to access it. Other names for this are time travel,
copy on write or copy on demand.
Copy on demand is in many cases much better than page- or row-level
locking; the worst case does, however, use much more memory than
when using normal locks.
Instead of using row-level locks, one can use application-level locks
(like GET_LOCK/RELEASE_LOCK in MySQL). This works, of course,
only in well-behaved applications.
In many cases one can make an educated guess about which locking type is best
for the application, but generally it's very hard to say that a given
lock type is better than another; everything depends on the application,
and different parts of the application may require different lock types.
Here are some tips about locking in MySQL:
In Web applications, most requests do lots of selects, very few
deletes, updates mainly on keys, and inserts into some specific tables.
The base MySQL setup is very well tuned for this.
Concurrent users are not a problem if one doesn't mix updates with selects
that need to examine many rows in the same table.
If one mixes inserts and deletes on the same table, then INSERT DELAYED
may be of great help.
One can also use LOCK TABLES to speed things up (many updates within
a single lock are much faster than updates without locks). Splitting
things into different tables will also help.
If you get speed problems with the table locks in MySQL, you
may be able to solve these by converting some of your tables to BDB tables.
See section 7.5 BDB or Berkeley_DB Tables.
I have tried to use the RTS thread packages with MySQL but
stumbled on the following problems:
They use old versions of a lot of POSIX calls, and it is very tedious to
make wrappers for all functions. I am inclined to think that it would
be easier to change the thread libraries to the newest POSIX specification.
Some wrappers are already written. See `mysys/my_pthread.c' for more info.
At least the following should be changed:
pthread_get_specific should use one argument.
sigwait should take two arguments.
A lot of functions (at least pthread_cond_wait and pthread_cond_timedwait)
should return the error code on error. Now they return -1 and set errno.
Another problem is that user-level threads use the SIGALRM signal, and this
aborts a lot of functions (read, write, open...).
MySQL should retry on interrupt in all of these cases, but it is
not that easy to verify.
The biggest unsolved problem is the following:
To get thread-level alarms I changed `mysys/thr_alarm.c' to wait between
alarms with pthread_cond_timedwait(), but this aborts with error
EINTR. I tried to debug the thread library to see why this happens,
but couldn't find any easy solution.
If someone wants to try MySQL with RTS threads, I suggest the
following:
Change the functions MySQL uses from the thread library to POSIX.
This shouldn't take that long.
Compile all libraries with -DHAVE_rts_threads.
If there are some small differences in the implementation, they may be fixed
by changing `my_pthread.h' and `my_pthread.c'.
Run thr_alarm. If it runs without any ``warning'', ``error'' or aborted
messages, you are on the right track. Here is a successful run on
Main thread: 1
Thread 0 (5) started
Thread: 5 Waiting
Thread 1 (6) started
Thread: 6 Waiting
Thread: 6 Slept for 1 (1) sec
Thread: 6 Waiting
Thread: 6 Slept for 2 (2) sec
Thread: 6 Simulation of no alarm needed
Thread: 6 Slept for 0 (3) sec
Thread: 6 Waiting
Thread: 6 Slept for 4 (4) sec
Thread: 6 Waiting
Thread: 5 Slept for 10 (10) sec
Thread: 5 Waiting
Thread: 6 Slept for 5 (5) sec
Thread: 6 Waiting
Thread: 5 Slept for 0 (1) sec
MySQL is very dependent on the thread package used. So when
choosing a good platform for MySQL, the thread package is very
important.
There are at least three types of thread packages:
User threads in a single process. Thread switching is managed with
alarms and the threads library manages all non-thread-safe functions
with locks. Read, write and select operations are usually managed with a
thread-specific select that switches to another thread if the running
threads have to wait for data. If the user thread packages are
integrated in the standard libs (FreeBSD and BSDI threads) the thread
package requires less overhead than thread packages that have to map all
unsafe calls (MIT-pthreads, FSU Pthreads and RTS threads). In some
environments (for example, SCO), all system calls are thread safe so the
mapping can be done very easily (FSU Pthreads on SCO). Downside: All
mapped calls take a little time and it's quite tricky to be able to
handle all situations. There are usually also some system calls that are
not handled by the thread package (like MIT-pthreads and sockets). Thread
scheduling isn't always optimal.
User threads in separate processes. Thread switching is done by the
kernel and all data are shared between threads. The thread package
manages the standard thread calls to allow sharing data between threads.
LinuxThreads uses this method. Downside: lots of processes. Thread
creation is slow. If one thread dies, the rest are usually left hanging
and you must kill them all before restarting. Thread switching is
somewhat expensive.
Kernel threads. Thread switching is handled by the thread library or the
kernel and is very fast. Everything is done in one process, but on some
systems, ps may show the different threads. If one thread aborts, the
whole process aborts. Most system calls are thread-safe and should
require very little overhead. Solaris, HP-UX, AIX and OSF1 have kernel
threads.
In some systems kernel threads are managed by integrating user
level threads in the system libraries. In such cases, the thread
switching can only be done by the thread library and the kernel isn't
really ``thread aware''.