Copyright 2000, 2001 Free Software Foundation, Inc.
This file is part of the GNU MP Library.
The GNU MP Library is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published
by the Free Software Foundation; either version 2.1 of the License, or (at
your option) any later version.
The GNU MP Library is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
License for more details.
You should have received a copy of the GNU Lesser General Public License
along with the GNU MP Library; see the file COPYING.LIB. If not, write to
the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston,
MA 02111-1307, USA.
This file current as of 7 Dec 2001. An up-to-date version is available at
http://www.swox.com/gmp/projects.html.
Please send comments about this page to
bug-gmp@gnu.org.
This file lists projects suitable for volunteers. Please see the
tasks file for smaller tasks.
If you want to work on any of the projects below, please let tege@swox.com
know. If you want to help with a project that somebody else is already
working on, please talk to that person too; tege@swox.com can put you in
touch. (There are no email addresses of volunteers below, due to spamming
problems.)
Faster multiplication
The current multiplication code uses Karatsuba, 3-way Toom-Cook,
or Fermat FFT. Several new developments are desirable:
Handle multiplication of operands with different digit counts
better than today. The operands are currently split in a very
inefficient way; see mpn/generic/mul.c.
Consider N-way Toom-Cook. See Knuth's Seminumerical
Algorithms for details on the method. Code implementing it
exists. This is asymptotically inferior to FFTs, but is finer
grained. A toom-4 might fit in between toom-3 and the FFTs
(or it might not).
It's possible CPU dependent effects like cache locality will
have a greater impact on speed than algorithmic improvements.
Add support for partial products, either a given number of low limbs
or high limbs of the result. A high partial product can be used by
mpf_mul now, a low half partial product might be of use
in a future sub-quadratic REDC. On small sizes a partial product
will be faster simply through fewer cross-products, similar to the
way squaring is faster. But work by Thom Mulders shows that for
Karatsuba and higher order algorithms the advantage is progressively
lost, so for large sizes partial products turn out to be no faster.
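To illustrate the basecase saving: a low partial product needs only the cross-products a[i]*b[j] with i+j < n, roughly half the multiplies of a full product. A minimal sketch follows, using hypothetical 32-bit limbs in plain uint32_t arrays rather than GMP's mp_limb_t types; it is not proposed code, just the shape of the idea.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: compute only the low n limbs of the product of
   two n-limb numbers (32-bit limbs, least significant limb first).
   Only cross-products a[i]*b[j] with i + j < n contribute, so roughly
   half the multiplies of a full basecase product are skipped.  */
void
mul_low_basecase (uint32_t *rp, const uint32_t *ap, const uint32_t *bp, int n)
{
  int i, j;
  for (i = 0; i < n; i++)
    rp[i] = 0;
  for (i = 0; i < n; i++)
    {
      uint64_t carry = 0;
      for (j = 0; i + j < n; j++)
        {
          uint64_t t = (uint64_t) ap[i] * bp[j] + rp[i + j] + carry;
          rp[i + j] = (uint32_t) t;
          carry = t >> 32;
        }
      /* carry out of the low half is discarded: the result is the
         full product reduced mod B^n, i.e. its low n limbs */
    }
}
```

The inner loop bound `i + j < n` is the whole trick; everything else is the ordinary basecase schoolbook loop.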
Another possibility would be an optimized cube. In the basecase that
should definitely be able to save cross-products in a similar fashion to
squaring, but some investigation might be needed for how best to adapt
the higher-order algorithms. Not sure whether cubing or further small
powers have any particularly important uses though.
Assembly routines
Write new and improve existing assembly routines. The tests/devel
programs and the tune/speed.c and tune/many.pl programs are useful for
testing and timing the routines you write. See the README files in those
directories for more information.
Please make sure your new routines are fast for these three situations:
Operands that fit into the cache.
Small operands of less than, say, 10 limbs.
Huge operands that do not fit into the cache.
The most important routines are mpn_addmul_1, mpn_mul_basecase and
mpn_sqr_basecase. The latter two don't exist for all machines, while
mpn_addmul_1 exists for almost all machines.
Standard techniques for these routines are unrolling, software
pipelining, and specialization for common operand values. For machines with
poor integer multiplication, it is often possible to improve the performance
using floating-point operations, or SIMD operations such as MMX or Sun's VIS.
Using floating-point operations is interesting but somewhat tricky.
Since an IEEE double has 53 bits of mantissa, one has to split the operands
into small pieces, so that no result is greater than 2^53. For 32-bit computers,
splitting one operand into 16-bit pieces works. For 64-bit machines, one
operand can be split into 21-bit pieces and the other into 32-bit pieces. (A
64-bit operand can be split into just three 21-bit pieces if one allows the
split operands to be negative!)
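A sketch of the 32-bit case described above: each operand is split into 16-bit pieces, so every partial product is below 2^32 and hence exact in a 53-bit mantissa. This is only an illustration of the splitting; real assembly code would batch such operations across many limbs to hide latency.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not GMP code): a 32x32->64 bit multiply using
   only double-precision floating point.  Each 16x16 partial product
   is < 2^32 < 2^53, so it is represented exactly in an IEEE double.  */
uint64_t
mul32_via_double (uint32_t a, uint32_t b)
{
  double a0 = (double) (a & 0xFFFF), a1 = (double) (a >> 16);
  double b0 = (double) (b & 0xFFFF), b1 = (double) (b >> 16);

  /* each product is exact; the mid sum is < 2^33, still exact */
  uint64_t lo  = (uint64_t) (a0 * b0);
  uint64_t mid = (uint64_t) (a0 * b1) + (uint64_t) (a1 * b0);
  uint64_t hi  = (uint64_t) (a1 * b1);

  /* reassemble: lo + mid*2^16 + hi*2^32 = a*b exactly */
  return lo + (mid << 16) + (hi << 32);
}
```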
Faster extended GCD
We currently have extended GCD based on Lehmer's method.
But the binary method can quite easily be improved for bignums
in a way similar to how Lehmer improved Euclid's algorithm. The result should
beat Lehmer's method by about a factor of 2. Furthermore, the multiprecision
step of Lehmer's method and the binary method will be identical, so the
current code can mostly be reused. It should be possible to share code
between GCD and GCDEXT, and probably with Jacobi symbols too.
Paul Zimmermann has worked on sub-quadratic GCD and GCDEXT, but it seems
that the most likely algorithm doesn't kick in until about 3000 limbs.
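For reference, the binary method on single words looks as follows; the project is to apply the same shift-and-subtract idea to multi-limb numbers, with a Lehmer-style word-size approximation driving the multiprecision steps. This is a sketch, not GMP code.

```c
#include <assert.h>

/* The binary GCD on single words: only shifts, comparisons and
   subtractions, no division.  */
unsigned long
binary_gcd (unsigned long a, unsigned long b)
{
  int shift = 0;
  if (a == 0) return b;
  if (b == 0) return a;
  while (((a | b) & 1) == 0)       /* strip common factors of 2 */
    { a >>= 1; b >>= 1; shift++; }
  while ((a & 1) == 0) a >>= 1;    /* a is now odd */
  while (b != 0)
    {
      while ((b & 1) == 0) b >>= 1;
      if (a > b) { unsigned long t = a; a = b; b = t; }
      b -= a;                      /* odd - odd, so b becomes even */
    }
  return a << shift;
}
```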
Math functions for the mpf layer
Implement the functions of math.h for the GMP mpf layer! Check the book
"Pi and the AGM" by Borwein and Borwein for ideas on how to do this. These
functions are desirable: acos, acosh, asin, asinh, atan, atanh, atan2, cos,
cosh, exp, log, log10, pow, sin, sinh, tan, tanh.
These are implemented in mpfr, redoing them in mpf might not be worth
bothering with, if the long term plan is to bring mpfr in as the new mpf.
Faster sqrt
The current code uses divisions, which are reasonably fast, but it'd be
possible to use only multiplications by computing 1/sqrt(A) using this
formula:
	x[i+1] = x[i] * (3 - A*x[i]^2) / 2,

and the final result

	sqrt(A) = A * x[n].
That final multiply might be the full size of the input (though it might
only need the high half of that), so there may or may not be any speedup
overall.
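A numeric sketch of the iteration at double precision, to show its shape: x converges quadratically to 1/sqrt(A), and a final multiply by A gives sqrt(A). A multi-precision version would instead double the working precision at each step; the crude starting value here assumes A > 1/3 so that the iteration converges.

```c
#include <assert.h>

/* Division-free Newton iteration for sqrt(A), sketched in double
   precision.  x converges to 1/sqrt(A); the only division is by the
   constant 2.  Assumes A > 1/3 so the crude start x0 = 1/A lies in
   the iteration's basin of convergence.  */
double
sqrt_newton (double A)
{
  double x = 1.0 / A;   /* real code would use a table or hardware sqrt */
  int i;
  for (i = 0; i < 60; i++)
    x = x * (3.0 - A * x * x) / 2.0;
  return A * x;
}
```

Sixty full-precision steps are wasteful, of course; with quadratic convergence a handful suffice, and a bignum version would work at ever-increasing precision.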
Nth root
Implement, at the mpn level, a routine for computing the nth root of a
number. The routine should use Newton's method, preferably without using
division.
If the routine becomes fast enough, perhaps square roots could be computed
using this function.
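For reference, the classical Newton iteration for a word-size integer nth root might look as follows; note that this sketch does use division, which the mpn routine would preferably avoid (for instance via the 1/sqrt-style inverse iteration above).

```c
#include <assert.h>
#include <stdint.h>

/* floor(a / x^k) computed as nested floor divisions, which is exact
   and avoids overflow in x^k */
uint64_t
div_pow (uint64_t a, uint64_t x, unsigned k)
{
  while (k-- > 0 && a != 0)
    a /= x;
  return a;
}

/* floor(a^(1/n)) by Newton's method on single words; the project is
   the mpn equivalent.  Starting from any x >= floor(a^(1/n)), the
   iteration x <- ((n-1)*x + a/x^(n-1)) / n decreases to the root.  */
uint64_t
nth_root (uint64_t a, unsigned n)
{
  unsigned bits = 0;
  uint64_t t, x, y;
  if (n <= 1 || a == 0)
    return a;
  for (t = a; t != 0; t >>= 1)
    bits++;
  /* 2^ceil(bits/n) is guaranteed >= floor(a^(1/n)) */
  x = (uint64_t) 1 << ((bits + n - 1) / n);
  for (;;)
    {
      y = ((uint64_t) (n - 1) * x + div_pow (a, x, n - 1)) / n;
      if (y >= x)
        return x;
      x = y;
    }
}
```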
Quotient-Only Division
Some work can be saved when only the quotient is required in a division,
basically the necessary correction -0, -1 or -2 to the estimated
quotient can almost always be determined from only a few limbs of
multiply and subtract, rather than forming a complete remainder. The
greatest savings are when the quotient is small compared to the dividend
and divisor.
Some code along these lines can be found in the current
mpn_tdiv_qr, though perhaps calculating bigger chunks of
remainder than might be strictly necessary. That function in its
current form actually then always goes on to calculate a full remainder.
Burnikel and Ziegler describe a similar approach for the divide and
conquer case.
Sub-Quadratic REDC and Exact Division
mpn_bdivmod and the REDC in
mpz_powm should use some sort of divide and conquer
algorithm. This would benefit mpz_divexact, and
mpn_gcd on large unequal size operands. See "Exact
Division with Karatsuba Complexity" by Jebelean for a (brief)
description.
Failing that, some sort of DIVEXACT_THRESHOLD could be
added to control whether mpz_divexact uses
mpn_bdivmod or mpn_tdiv_qr, since the latter
is faster on large divisors.
For the REDC, basically all that's needed is Montgomery's algorithm done
in multi-limb integers. R is multiple limbs, and the inverse and
operands are multi-precision.
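The single-limb version, for reference; the project is this same computation with R spanning several limbs and a multi-precision inverse. The sketch below assumes an odd modulus N below 2^31 so the addition cannot overflow a 64-bit intermediate (real code handles the carry instead).

```c
#include <assert.h>
#include <stdint.h>

/* -N^(-1) mod 2^32 for odd N, by Hensel/Newton lifting: each step
   doubles the number of correct low bits (inv = N is correct to 3
   bits, since N*N == 1 mod 8 for odd N).  */
uint32_t
neg_inv32 (uint32_t N)
{
  uint32_t inv = N;
  int i;
  for (i = 0; i < 4; i++)
    inv *= 2 - N * inv;
  return 0u - inv;
}

/* Montgomery REDC on single limbs: given T < N*2^32, odd N < 2^31,
   and Ninv = -N^(-1) mod 2^32, returns T * 2^(-32) mod N with no
   division.  */
uint32_t
redc1 (uint64_t T, uint32_t N, uint32_t Ninv)
{
  uint32_t m = (uint32_t) T * Ninv;           /* T * (-1/N) mod 2^32 */
  uint64_t t = (T + (uint64_t) m * N) >> 32;  /* exact: low 32 bits cancel */
  return t >= N ? (uint32_t) (t - N) : (uint32_t) t;
}
```

The multi-limb version replaces the single multiply by m with an mpn multiply-and-add across the whole of R, which is where a divide and conquer multiplication would pay off.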
For exact division the time to calculate a multi-limb inverse is not
amortized across many modular operations, but instead will probably
create a threshold below which the current style
mpn_bdivmod is best. There's also Krandick and Jebelean,
"Bidirectional Exact Integer Division" to basically use a low to high
exact division for the low half quotient, and a quotient-only division
for the high half.
It will be noted that low-half and high-half multiplies, and a low-half
square, can be used. These ought to each take as little as half the
time of a full multiply, or square, though work by Thom Mulders shows
the advantage is progressively lost as Karatsuba and higher algorithms
are applied.
Test Suite
Add a test suite for old bugs. These tests wouldn't loop or use
random numbers, but just test for specific bugs found in the
past.
More importantly, improve the random number controls for test
collections:
Use the new random number framework.
Have every test accept a seed parameter.
Allow `make check' to set the seed parameter.
Add more tests for important, now untested functions.
With this in place, it should be possible to rid GMP of
practically all bugs by having some dedicated GMP test machines
running "make check" with ever increasing seeds (and
periodically updating to the latest GMP).
If a few more ASSERTs were sprinkled through the code, running
some tests with --enable-assert might be useful, though not a
substitute for tests on the normal build.
An important feature of any random tests will be to record the
seeds used, and perhaps to snapshot operands before performing
each test, so any problem exposed can be reproduced.
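One possible shape for the seed handling, sketched with a hypothetical GMP_CHECK_SEED environment variable; the variable name and interface are illustrative, not existing GMP conventions.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical harness fragment: take the seed from the environment
   if set (so `make check' can pass one in), otherwise use a fallback,
   and always print it so any failure can be reproduced.  */
unsigned long
get_test_seed (const char *env_var, unsigned long fallback)
{
  const char *s = getenv (env_var);
  unsigned long seed = (s != NULL && *s != '\0')
                       ? strtoul (s, NULL, 0) : fallback;
  printf ("seed %lu (set %s to reproduce)\n", seed, env_var);
  return seed;
}
```

The fallback would typically come from the time of day or an increasing counter on a dedicated test machine.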