BigDecimalMath

\(\require{AMSmath}\)
Java 8 is out and there are still no Math functions for BigDecimal.

After playing around with some implementations to calculate Pi, I decided to write an implementation of BigDecimalMath to fill this gap.

The result is available on GitHub: big-math.

The goal was to provide the following functions:

  • exp(x)
  • log(x)
  • pow(x, y)
  • sqrt(x)
  • root(n, x)
  • sin(x), cos(x), tan(x), cot(x)
  • asin(x), acos(x), atan(x), acot(x)
  • sinh(x), cosh(x), tanh(x)
  • asinh(x), acosh(x), atanh(x)

The calculations must be accurate to the desired precision (specified in the MathContext),
and the performance should be acceptable and stable for a large range of input values.

Implementation Details

Implementation exp(x)

To implement exp() the classical Taylor series was used:
\(\displaystyle e^x = \sum^{\infty}_{n=0} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\)
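A minimal sketch of how such a Taylor summation can look with BigDecimal (the class name, the guard-digit count, and the termination test are illustrative assumptions, not the actual big-math code):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class ExpSketch {

    // Sum the Taylor terms x^n / n! until a term no longer affects the result.
    public static BigDecimal exp(BigDecimal x, MathContext mc) {
        // A few guard digits so intermediate rounding does not pollute the result.
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        // Terms below 10^-(precision+1) no longer change the rounded result.
        BigDecimal threshold = BigDecimal.ONE.movePointLeft(mc.getPrecision() + 1);

        BigDecimal sum = BigDecimal.ONE;   // n = 0 term
        BigDecimal term = BigDecimal.ONE;  // x^n / n!, updated incrementally
        for (int n = 1; term.abs().compareTo(threshold) > 0; n++) {
            term = term.multiply(x, mc2).divide(BigDecimal.valueOf(n), mc2);
            sum = sum.add(term, mc2);
        }
        return sum.round(mc);
    }
}
```

Each term is derived from the previous one by multiplying with x/n, so no factorial ever has to be computed explicitly.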

Implementation log()

Note that in Java the function name log() means the natural logarithm, which in mathematical notation is written \(\ln{x}\).

The implementation of log() is based on Newton’s method.
We can use the double version Math.log() to give us a good initial value.

\(\displaystyle y_0 = \operatorname{Math.log}(x),\)
\(\displaystyle y_{i+1} = y_i + 2 \frac{x - e^{y_i}}{x + e^{y_i}},\)
\(\displaystyle \ln{x} = \lim_{i \to \infty} y_i\)
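A hedged sketch of this iteration, reusing the exp() sketch above (names and the termination strategy are illustrative; as a commenter notes below, the Math.log() seed only works while x fits into a double):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class LogSketch {

    // Newton-style iteration y_{i+1} = y_i + 2 (x - e^{y_i}) / (x + e^{y_i}),
    // seeded with the double approximation Math.log(x).
    public static BigDecimal log(BigDecimal x, MathContext mc) {
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        BigDecimal two = BigDecimal.valueOf(2);
        BigDecimal y = BigDecimal.valueOf(Math.log(x.doubleValue()));

        // Math.log() is accurate to roughly 15 digits and every iteration
        // at least doubles the number of correct digits.
        int correctDigits = 15;
        while (correctDigits < mc2.getPrecision()) {
            BigDecimal expY = ExpSketch.exp(y, mc2); // exp() sketch from above
            BigDecimal step = two.multiply(x.subtract(expY, mc2), mc2)
                                 .divide(x.add(expY, mc2), mc2);
            y = y.add(step, mc2);
            correctDigits *= 2;
        }
        return y.round(mc);
    }
}
```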

Several optimizations transform the argument of log(x) so that it lies nearer to the optimum of 1.0, where Newton's method converges fastest.
\(
\begin{align}
\displaystyle \ln{x} & = \ln{\left(a \cdot 10^b\right)} = \ln{a} + \ln{10} \cdot b & \qquad \text{for } x \leq 0.1 \text{ or } x \geq 10 \\
\displaystyle \ln{x} & = \ln{\left( 2 x \right)} - \ln{2} & \qquad \text{for } x \lt 0.115 \\
\displaystyle \ln{x} & = \ln{\left( 3 x \right)} - \ln{3} & \qquad \text{for } x \lt 0.14 \\
\displaystyle \ln{x} & = \ln{\left( 4 x \right)} - 2 \ln{2} & \qquad \text{for } x \lt 0.2 \\
\displaystyle \ln{x} & = \ln{\left( 6 x \right)} - \ln{2} - \ln{3} & \qquad \text{for } x \lt 0.3 \\
\displaystyle \ln{x} & = \ln{\left( 8 x \right)} - 3 \ln{2} & \qquad \text{for } x \lt 0.42 \\
\displaystyle \ln{x} & = \ln{\left( 9 x \right)} - 2 \ln{3} & \qquad \text{for } x \lt 0.7 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{2} x \right)} + \ln{2} & \qquad \text{for } x \lt 2.5 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{3} x \right)} + \ln{3} & \qquad \text{for } x \lt 3.5 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{4} x \right)} + 2 \ln{2} & \qquad \text{for } x \lt 5.0 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{6} x \right)} + \ln{2} + \ln{3} & \qquad \text{for } x \lt 7.0 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{8} x \right)} + 3 \ln{2} & \qquad \text{for } x \lt 8.5 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{9} x \right)} + 2 \ln{3} & \qquad \text{for } x \lt 10.0
\end{align}
\)
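For example, x = 0.25 falls under the rule for \(x \lt 0.3\):

\(\displaystyle \ln{0.25} = \ln{\left( 6 \cdot 0.25 \right)} - \ln{2} - \ln{3} = \ln{1.5} - \ln{2} - \ln{3}\)

Newton's method then only has to converge on \(\ln{1.5}\), whose argument is much closer to the optimum 1.0 than the original 0.25.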

The additional logarithmic functions to different common bases are simple:
\(\displaystyle \operatorname{log}_2{x} = \frac{\ln{x}}{\ln{2}}\)

\(\displaystyle \operatorname{log}_{10}{x} = \frac{\ln{x}}{\ln{10}}\)

Since precalculated values for \(\ln{2}, \ln{3}, \ln{10}\) with a precision of up to 1100 digits already exist for the optimizations mentioned above, the log2() and log10() functions can reuse them and are therefore reasonably fast.

Implementation pow(x, y)

The implementation of pow() with non-integer arguments is based on exp() and log():

\(\displaystyle x^y = e^{y \ln x}\)

If y is an integer argument then pow() is implemented with multiplications:

\(\displaystyle x^y = \prod_{i=1}^{y} x\)

The implementation is further optimized to reduce the number of multiplications by repeatedly squaring the base (exponentiation by squaring), as sketched below.
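A minimal sketch of exponentiation by squaring (illustrative names, assuming a non-negative exponent; the actual library code may differ):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class IntPowSketch {

    // Square-and-multiply: walk over the binary digits of the exponent,
    // squaring the base for each digit and multiplying it into the result
    // where the digit is 1. Needs O(log y) multiplications instead of y.
    public static BigDecimal pow(BigDecimal x, long y, MathContext mc) {
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        BigDecimal result = BigDecimal.ONE;
        BigDecimal factor = x;
        long exponent = y; // assuming y >= 0; negative y would be 1 / pow(x, -y)
        while (exponent > 0) {
            if ((exponent & 1) == 1) {
                result = result.multiply(factor, mc2);
            }
            factor = factor.multiply(factor, mc2);
            exponent >>= 1;
        }
        return result.round(mc);
    }
}
```

For example, x^13 becomes x^8 · x^4 · x^1, which costs 5 multiplications instead of 12.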

Implementation sqrt(x), root(n, x)

The implementation of sqrt() and root() uses Newton’s method to approximate the result until the necessary precision is reached.
In the case of sqrt() we can use the double version Math.sqrt() to give us a good initial value.

\(\displaystyle y_0 = \operatorname{Math.sqrt}(x),\)
\(\displaystyle y_{i+1} = \frac{1}{2} \left(y_i + \frac{x}{y_i}\right),\)
\(\displaystyle \sqrt{x} = \lim_{i \to \infty} y_i\)
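A hedged sketch of this iteration (the class name and the digit-doubling termination are illustrative assumptions, not the library's actual code):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class SqrtSketch {

    // Newton iteration y_{i+1} = (y_i + x / y_i) / 2,
    // seeded with the double approximation Math.sqrt(x).
    public static BigDecimal sqrt(BigDecimal x, MathContext mc) {
        if (x.signum() == 0) {
            return BigDecimal.ZERO; // avoid dividing by the zero seed
        }
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        BigDecimal two = BigDecimal.valueOf(2);
        BigDecimal y = BigDecimal.valueOf(Math.sqrt(x.doubleValue()));

        // Math.sqrt() is accurate to roughly 15 digits and every Newton
        // step doubles the number of correct digits.
        int correctDigits = 15;
        while (correctDigits < mc2.getPrecision()) {
            y = y.add(x.divide(y, mc2), mc2).divide(two, mc2);
            correctDigits *= 2;
        }
        return y.round(mc);
    }
}
```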

Unfortunately no root() equivalent exists for double, so we are forced to use a simpler initial value.

\(\displaystyle y_0 = \frac{1}{n},\)
\(\displaystyle y_{i+1} = \frac{1}{n} \left[{(n-1)y_i +\frac{x}{y_i^{n-1}}}\right],\)
\(\displaystyle \sqrt[n]{x} = \lim_{i \to \infty} y_i\)
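A sketch of the general root iteration with the simple seed 1/n described above (illustrative names; with this poor seed the iteration can need considerably more steps than sqrt(), so it runs until the step size falls below the target precision):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class RootSketch {

    // Newton iteration y_{i+1} = ((n-1) y_i + x / y_i^(n-1)) / n.
    public static BigDecimal root(int n, BigDecimal x, MathContext mc) {
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        BigDecimal bigN = BigDecimal.valueOf(n);
        BigDecimal nMinusOne = BigDecimal.valueOf(n - 1);
        BigDecimal threshold = BigDecimal.ONE.movePointLeft(mc.getPrecision() + 1);

        BigDecimal y = BigDecimal.ONE.divide(bigN, mc2); // simple seed 1/n
        BigDecimal step;
        do {
            BigDecimal yPow = y.pow(n - 1, mc2); // y^(n-1)
            BigDecimal next = nMinusOne.multiply(y, mc2)
                                       .add(x.divide(yPow, mc2), mc2)
                                       .divide(bigN, mc2);
            step = next.subtract(y).abs();
            y = next;
        } while (step.compareTo(threshold) > 0);
        return y.round(mc);
    }
}
```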

Implementation sin(x), cos(x), tan(x), cot(x)

The basic trigonometric functions were implemented using Taylor series or, where this proved more efficient, via their relationship with already implemented functions:

\(\displaystyle \sin x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x – \frac{x^3}{3!} + \frac{x^5}{5!} – \cdots\)

\(\displaystyle \cos x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n)!} x^{2n} = 1 – \frac{x^2}{2!} + \frac{x^4}{4!} – \cdots\)

\(\displaystyle \tan x = \frac{\sin x}{\cos x}\)

\(\displaystyle \cot x = \frac{\cos x}{\sin x}\)
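A minimal sketch of the sin() summation, again deriving each term from its predecessor instead of evaluating factorials (illustrative names; the 2π range reduction discussed in the performance section is omitted here). tan() and cot() then follow by a single division:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class SinSketch {

    // Taylor series x - x^3/3! + x^5/5! - ...; each term equals the
    // previous one multiplied by -x^2 / ((2n)(2n+1)).
    public static BigDecimal sin(BigDecimal x, MathContext mc) {
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        BigDecimal threshold = BigDecimal.ONE.movePointLeft(mc.getPrecision() + 1);
        BigDecimal xSquared = x.multiply(x, mc2);

        BigDecimal term = x; // n = 0 term
        BigDecimal sum = x;
        for (int n = 1; term.abs().compareTo(threshold) > 0; n++) {
            term = term.multiply(xSquared, mc2)
                       .divide(BigDecimal.valueOf(2L * n * (2L * n + 1)), mc2)
                       .negate();
            sum = sum.add(term, mc2);
        }
        return sum.round(mc);
    }
}
```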

Implementation asin(x), acos(x), atan(x), acot(x)

The inverse trigonometric functions use a Taylor series for arcsin().

\(\displaystyle \arcsin x = \sum^{\infty}_{n=0} \frac{(2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1}\)

This series converges very slowly, especially when the argument x gets close to 1.
As an optimization, the argument x is transformed into a more efficient range using the following relationship.

\(\displaystyle \arcsin x = \arccos \sqrt{1-x^2} \qquad \text{for } x \gt \sqrt{\frac{1}{2}} \text{ (} \approx 0.707107 \text{)}\)

The remaining functions are implemented by their relationship with arcsin().

\(\displaystyle \arccos x = \frac{\pi}{2} – \arcsin x\)

\(\displaystyle \arctan x = \arcsin \frac{x}{\sqrt{1+x^2}}\)

\(\displaystyle \operatorname{arccot} x = \frac{\pi}{2} – \arctan x\)
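A sketch of the arcsin() series, maintaining the coefficient \((2n)!/(4^n (n!)^2)\) incrementally via the term ratio \((2n-1)/(2n) \cdot x^2\) (illustrative names; the arccos transformation above is omitted for brevity):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class AsinSketch {

    // Sum of (2n)! / (4^n (n!)^2 (2n+1)) * x^(2n+1), no explicit factorials.
    public static BigDecimal asin(BigDecimal x, MathContext mc) {
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        BigDecimal threshold = BigDecimal.ONE.movePointLeft(mc.getPrecision() + 1);
        BigDecimal xSquared = x.multiply(x, mc2);

        BigDecimal factor = x; // (2n)!/(4^n (n!)^2) * x^(2n+1), starting at n = 0
        BigDecimal sum = x;    // n = 0 term: x / 1
        for (int n = 1; ; n++) {
            factor = factor.multiply(xSquared, mc2)
                           .multiply(BigDecimal.valueOf(2L * n - 1), mc2)
                           .divide(BigDecimal.valueOf(2L * n), mc2);
            BigDecimal term = factor.divide(BigDecimal.valueOf(2L * n + 1), mc2);
            if (term.abs().compareTo(threshold) <= 0) {
                break;
            }
            sum = sum.add(term, mc2);
        }
        return sum.round(mc);
    }
}
```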

Implementation sinh(x), cosh(x), tanh(x)

Taylor series are efficient for most of the implementations of hyperbolic functions.

\(\displaystyle \sinh x= \sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!} = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!} +\cdots\)

\(\displaystyle \cosh x = \sum_{n=0}^\infty \frac{x^{2n}}{(2n)!} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \cdots\)

The Taylor series for tanh() converges very slowly, so we use the relationship with sinh() and cosh() instead.
\(\displaystyle \tanh x = \frac{\sinh x}{\cosh x}\)

Implementation asinh(x), acosh(x), atanh(x)

The inverse hyperbolic functions can be expressed using the natural logarithm.

\(\displaystyle \operatorname{arsinh} x = \ln(x + \sqrt{x^2 + 1} )\)

\(\displaystyle \operatorname{arcosh} x = \ln(x + \sqrt{x^2-1} )\)

\(\displaystyle \operatorname{artanh} x = \frac12\ln\left(\frac{1+x}{1-x}\right)\)
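Given log() and sqrt(), these are essentially one-liners. A hedged sketch of arsinh(), reusing the sketches from the sections above (illustrative names, with a few guard digits for the intermediate steps):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class AsinhSketch {

    // arsinh(x) = ln(x + sqrt(x^2 + 1)), composed from the earlier sketches.
    public static BigDecimal asinh(BigDecimal x, MathContext mc) {
        MathContext mc2 = new MathContext(mc.getPrecision() + 6);
        BigDecimal inner = x.multiply(x, mc2).add(BigDecimal.ONE, mc2);
        BigDecimal sqrtPart = SqrtSketch.sqrt(inner, mc2);
        return LogSketch.log(x.add(sqrtPart, mc2), mc2).round(mc);
    }
}
```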

Performance calculating different precisions

Obviously it takes longer to calculate a function result with higher precision than with lower precision.

The following charts show the time needed to calculate the functions with different precisions.

The arguments of the functions were:

  • log(3.1)
  • exp(3.1)
  • pow(123.456, 3.1)
  • sqrt(3.1)
  • root(2, 3.1)
  • root(3, 3.1)
  • sin(3.1)
  • cos(3.1)



While the time to calculate the results grows worse than linearly for higher precisions, the speed is still reasonable for precisions of up to 1000 digits.

Performance calculating different values

The following charts show the time needed to calculate the functions over a range of values with a precision of 300 digits.

  • log(x)
  • exp(x)
  • pow(123.456, x)
  • sqrt(x)
  • root(2, x)
  • root(3, x)
  • sin(x)
  • cos(x)

The functions have been separated into a fast group (exp, sqrt, root, sin, cos) and a slow group (exp, log, pow).
For comparison, the exp() function is contained in both groups.

Range 0 to 2


The performance of the functions is in a reasonable range and stable, even close to 0, where some functions might converge slowly.
The functions exp(), sin(), cos() need to be watched at higher values of x to verify that they do not continue to grow.

The slow group shows nicely that log() is more efficient when x is close to 1.0.
By multiplying and dividing with the prime factors 2 and 3, the log() function was optimized to exploit this fact for values of x that can be brought closer to 1.0.
This explains the strange arches in the performance of log().
The pow() function performs fairly constantly, except for integer powers, which are optimized specifically.

Range 0 to 10


The fast group shows that sin(), cos() have been optimized using the period of 2*Pi (roughly 6.28) so that they do not continue to grow with higher values.
This optimization has some cost, which needs to be watched at higher values.


exp() has become stable and no longer grows.
log() is stable and shows the typical arches with optima at 1.0, 2.0 (divided by 2), 3.0 (divided by 3), 4.0 (divided by 2*2), 6.0 (divided by 2*3), 8.0 (divided by 2*2*2) and 9.0 (divided by 3*3).
pow() remains stable.

Range -10 to 10



Positive and negative values are symmetric for all functions that are defined for the negative range.

Range 0 to 100


All functions are stable over this range.


All functions are stable over this range.
The pow() function makes the chart somewhat hard to read because of the optimized handling of integer powers.
The log() function shows here the effect of another optimization using the exponential form: the range from 10 to 100 is brought down to the range 1 to 10 and the same divisions are applied, which shows the same arches again in the range from 10 to 100.


8 Responses to BigDecimalMath

  1. w says:

    What is the license for your code?

    • admin says:

      I have not picked a specific license yet. Provided you give some credits in the comments you can use the source code in whatever way you want.

    • admin says:

      I have now added the MIT license to the github repository.

  2. Burkhard Strauß says:

    BigDecimalMath.logUsingNewton(new BigDecimal("1E309"), mc) fails. It calls Math.log(x.doubleValue()).

    • admin says:

      You are right.
      Funnily enough, I realized this yesterday while trying to do further optimizations (in sqrt).
      I will need to use another initial value if x is not a legal double value.
      Thanks for the heads-up.

  3. Sven Möller says:

    What is the algorithm behind expIntegralFractional()? Can’t find that anywhere…

    • admin says:

      It is very embarrassing, but I don't remember where I found this. I can derive it again, though.
      Proof that exp(x) = exp(1 + FracPart(x)/IntPart(x)) ^ IntPart(x):

      1 + FracPart(x)/IntPart(x) = (IntPart(x) + FracPart(x))/IntPart(x) = x/IntPart(x)

      exp(x/IntPart(x))^IntPart(x) = exp(x)

      Since the Taylor series for exp() is terrible with larger values of x, I tried to move the exp() calculation into a range as close to 1.0 as I could.
      And the pow(x, y) function with y integer values is pretty fast.
