The following compares three approximations to the natural logarithm. The first is the well-known Taylor series (green curve). The second (gray curve) is a continued fraction expansion obtained as an even contraction of a continued fraction that was derived from the power series using Euler's method for converting a series into an S-fraction. This expansion is already the fastest-converging one among those presented in Cuyt, Petersen, et al.: "Handbook of Continued Fractions for Special Functions" (Springer, 2008), which is why I chose it as the starting point for my own contraction method. The third (blue-dashed curve) is a canonical even contraction I have developed from the second.

Comparison: Taylor Series, Continued Fraction Expansion, and Contracted Continued Fraction Expansion of log(z+1)

The original continued fraction expansion for log(z+1) is (gray curve above)
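The formula itself did not survive in this copy. As a reconstruction from the surrounding description (not the original rendering), the fast continued fraction for log(z+1) described here -- the even contraction of the Euler S-fraction -- can be written as

```latex
\log(z+1) \;=\;
\cfrac{2z}{(2+z) - \cfrac{1^2 z^2}{3(2+z) - \cfrac{2^2 z^2}{5(2+z) - \cfrac{3^2 z^2}{7(2+z) - \ddots}}}}
```

with partial numerators $-k^2 z^2$ and partial denominators $(2k+1)(2+z)$. This is equivalent to the Gauss continued fraction for $2\,\operatorname{artanh}(w)$ evaluated at $w = z/(2+z)$.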

The contracted continued fraction expansion for log(z+1) I have developed from this is (blue-dashed curve above)

with

and

Using the backward-recursion algorithm (see General page), this can be implemented procedurally in a fairly straightforward manner using a simple loop. Here it is in Java:
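The original Java listing is missing from this copy. As a sketch, assuming the gray-curve expansion is the standard even contraction of the Euler S-fraction, log(z+1) = 2z/((2+z) - 1²z²/(3(2+z) - 2²z²/(5(2+z) - ...))), the backward recursion starts at the innermost partial denominator and works outward in a simple loop (class and method names are mine, not from the original listing):

```java
public final class LogCF {

    // Approximates log(1 + z) by truncating the even-contracted S-fraction
    //   log(1+z) = 2z / ((2+z) - 1^2 z^2 / (3(2+z) - 2^2 z^2 / (5(2+z) - ...)))
    // after n levels, evaluated by backward recursion (innermost term first).
    static double logCF(double z, int n) {
        double zz = z * z;
        double t = (2 * n + 1) * (2 + z);        // innermost partial denominator
        for (int k = n; k >= 1; k--) {
            t = (2 * k - 1) * (2 + z) - k * k * zz / t;
        }
        return 2 * z / t;
    }

    public static void main(String[] args) {
        for (double z : new double[] {0.5, 1.0, 5.0, 30.0}) {
            System.out.printf("z=%5.1f  cf=%.12f  exact=%.12f%n",
                              z, logCF(z, 40), Math.log(1 + z));
        }
    }
}
```

Only the four basic arithmetic operations appear in the loop; the depth n controls the accuracy.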

Notice that all polynomials use the Horner form (see General page) for much better efficiency.
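For completeness, Horner form rewrites a polynomial c0 + c1 x + ... + cn x^n as c0 + x(c1 + x(c2 + ...)), so that a degree-n polynomial costs only n multiplications and n additions. A minimal Java sketch (the method name is mine, not from the original listing):

```java
public final class Horner {

    // Evaluates p(x) = c[0] + c[1]*x + ... + c[n]*x^n in Horner form:
    // one multiplication and one addition per coefficient, no explicit powers.
    static double horner(double[] c, double x) {
        double acc = 0.0;
        for (int i = c.length - 1; i >= 0; i--) {
            acc = acc * x + c[i];
        }
        return acc;
    }

    public static void main(String[] args) {
        // p(x) = 1 + 2x + 3x^2 at x = 2:  1 + 4 + 12 = 17
        System.out.println(horner(new double[] {1, 2, 3}, 2.0)); // prints 17.0
    }
}
```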

It is now up to the individual computing platform, and to the tweaking of compiler and runtime options, to determine which of the two continued fraction expansions is faster for a prescribed accuracy. In the case of the contracted continued fraction, the terms to be computed are more complicated, which costs computing time, but fewer steps are required to attain the prescribed accuracy. This question cannot be answered a priori: compiled computing platforms are too different, and the values chosen for compiler and runtime settings are too diverse, to make a definitive statement here. However, already for n=0 or n=1 the convergence acceleration of the contracted continued fraction is so significant that the additional computations it requires in every step are of no importance. On today's computing platforms the standard arithmetic operations +, -, *, and / are "cheap" and fast, which ties in with another advantage of continued fraction expansions: they simply don't need any arithmetic operations beyond these four!

One way to see this is that already for n=0 it is necessary to drag the xr slider out to 20 or more to see any difference at all in the left plot (red and blue-dashed curves) -- move yr to around 3 for that. The contracted continued fraction is virtually identical to the exact logarithm, while the gray curve -- the fastest-converging iterated approximation to the natural logarithm I am aware of that uses only simple operations -- is still far off. At xr=20 it is necessary to increase n to 5 to make the visible differences disappear.

For example, picking n=0, the comparison is between

and

Both yield the same accuracy; the latter is the contraction of the former, and which one is computed faster depends entirely on the computing platform and on the compiler and runtime settings. Using simpler terms means having to go "deeper", while going "shallower" means using more complicated terms.

Notice the quality of the approximations: as far out as z=30, both formulae above have an error of less than 3% (using only n=0!).
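To give the convergence claim a concrete, checkable form: assuming the gray-curve expansion is the standard even contraction of the Euler S-fraction (the demo's indexing of n may differ from the loop bound used here), one can tabulate the relative error of the truncated fraction at z = 30 against Math.log:

```java
public final class LogCFError {

    // Truncated even-contracted S-fraction for log(1+z), n levels deep,
    // evaluated by backward recursion.
    static double logCF(double z, int n) {
        double zz = z * z;
        double t = (2 * n + 1) * (2 + z);
        for (int k = n; k >= 1; k--) {
            t = (2 * k - 1) * (2 + z) - k * k * zz / t;
        }
        return 2 * z / t;
    }

    public static void main(String[] args) {
        double z = 30.0;
        double exact = Math.log(1 + z);
        // Relative error shrinks geometrically with the truncation depth n.
        for (int n = 0; n <= 20; n += 5) {
            double rel = Math.abs(logCF(z, n) - exact) / exact;
            System.out.printf("n=%2d  relative error = %.3e%n", n, rel);
        }
    }
}
```

Even at z = 30, well outside the radius of convergence of the Taylor series, the error drops by a roughly constant factor per additional level.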

This website is powered by webMathematica.

Copyright (c) 2009 Andreas Lauschke Consulting. All Rights Reserved.