These problems are perhaps regrettable, but we don't know any practical
way around them.
Certain local variables aren't recognized by debuggers when you compile
with optimization.
This occurs because sometimes GCC optimizes the variable out of
existence. There is no way to tell the debugger how to compute the
value such a variable "would have had", and it is not clear that would
be desirable anyway. So GCC simply does not mention the eliminated
variable when it writes debugging information.
When you use optimization, you have to expect a certain amount of
disagreement between the executable and your source code.
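As an illustration (the function and its names are our own, not from any
particular program), here is the kind of local variable GCC is likely to
eliminate when optimizing:

```c
/* With -O2, GCC typically keeps `total' in a register, or folds the
   loop into a closed-form result, so a debugger stopped inside this
   function may report `total' as optimized out.  */
int sum_to (int n)
{
  int total = 0;
  for (int i = 1; i <= n; i++)
    total += i;
  return total;
}
```

Compiling this with `gcc -g -O2' and asking the debugger for `total'
inside the loop will often produce no value at all, even though the
function computes the correct result.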
Users often think it is a bug when GCC reports an error for code
like this:
int foo (struct mumble *);
struct mumble { ... };
int foo (struct mumble *x)
{ ... }
This code really is erroneous, because the scope of struct
mumble in the prototype is limited to the argument list containing it.
It does not refer to the struct mumble defined with file scope
immediately below--they are two unrelated types with similar names in
different scopes.
But in the definition of foo, the file-scope type is used
because that is available to be inherited. Thus, the definition and
the prototype do not match, and you get an error.
This behavior may seem silly, but it's what the ANSI standard specifies.
It is easy enough for you to make your code work by moving the
definition of struct mumble above the prototype. It's not worth
being incompatible with ANSI C just to avoid an error for the example
shown above.
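A sketch of the corrected ordering follows; the struct body and member
name are placeholders, since the example above elides them:

```c
/* Define the struct at file scope first, so the parameter in the
   prototype refers to this type rather than to a new, unrelated type
   whose scope is limited to the argument list.  */
struct mumble { int value; };

int foo (struct mumble *);

int foo (struct mumble *x)
{
  return x->value;
}
```

With this ordering the prototype and the definition refer to the same
type, and GCC accepts the code.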
Accesses to bitfields, even in volatile objects, work by accessing
larger objects, such as a byte or a word. You cannot rely on the size
of object accessed in order to read or write the bitfield; it may even
vary for a given bitfield according to the precise usage.
If you care about controlling the amount of memory that is accessed, use
volatile but do not use bitfields.
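A sketch of that advice, using a volatile object of explicit width and
hand-written masks in place of bitfields; the mask names and layout are
invented for illustration:

```c
#include <stdint.h>

#define READY_BIT 0x01u
#define ERROR_BIT 0x02u

/* Reading through a volatile uint8_t guarantees a one-byte access;
   a bitfield in the same position might be accessed as a word.  */
int error_is_set (volatile uint8_t *reg)
{
  return (*reg & ERROR_BIT) != 0;
}

/* Read-modify-write of a single bit, again with a known access width.  */
void set_ready (volatile uint8_t *reg)
{
  *reg |= READY_BIT;
}
```

This matters chiefly for memory-mapped hardware registers, where an
access of the wrong width can have side effects of its own.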
GCC comes with shell scripts to fix certain known problems in system
header files. They install corrected copies of various header files in
a special directory where only GCC will normally look for them. The
scripts adapt to various systems by searching all the system header
files for the problem cases that we know about.
If new system header files are installed, nothing automatically arranges
to update the corrected header files. You will have to reinstall GCC
to fix the new header files. More specifically, go to the build
directory and delete the files `stmp-fixinc' and
`stmp-headers', and the subdirectory `include'; then do
`make install' again.
On 68000 and x86 systems, for instance, you can get paradoxical results
if you test the precise values of floating point numbers. For example,
you can find that a floating point value which is not a NaN is not equal
to itself. This results from the fact that the floating point registers
hold a few more bits of precision than fit in a double in memory.
Compiled code moves values between memory and floating point registers
at its convenience, and moving them into memory truncates them.
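The effect can be sketched as follows; whether the comparison ever
fails depends on the target, since extended-precision floating point
registers can show it while targets that compute directly in double
precision usually do not:

```c
/* Returns nonzero if the freshly computed product differs from the
   copy that was rounded on its way through memory.  The volatile
   qualifier forces `stored' out of the floating point registers.  */
int excess_precision_visible (double x, double y)
{
  volatile double stored = x * y;
  return stored != x * y;
}
```

On hardware whose registers hold more bits than a double in memory, the
right-hand `x * y' may retain extra precision that `stored' has lost,
so two expressions that look identical in the source compare unequal.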
On the MIPS, variable argument functions using `varargs.h'
cannot have a floating point value for the first argument. The
reason for this is that, in the absence of a prototype in scope,
a floating point first argument is passed in a floating point
register rather than an integer register.
If the code is rewritten to use the ANSI standard `stdarg.h'
method of variable arguments, and the prototype is in scope at
the time of the call, everything will work fine.
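A minimal sketch of the `stdarg.h' method (the function itself is our
own example): with this prototype in scope at the call, the variable
arguments may safely include floating point values.

```c
#include <stdarg.h>

/* ANSI-style variadic function; the fixed parameter `count' anchors
   va_start, and each variable argument is fetched with its promoted
   type (float promotes to double in variadic calls).  */
double sum_doubles (int count, ...)
{
  va_list ap;
  double total = 0.0;
  va_start (ap, count);
  for (int i = 0; i < count; i++)
    total += va_arg (ap, double);
  va_end (ap);
  return total;
}
```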
On the H8/300 and H8/300H, variable argument functions must be
implemented using the ANSI standard `stdarg.h' method of
variable arguments. Furthermore, calls to functions using `stdarg.h'
variable arguments must have a prototype for the called function
in scope at the time of the call.