Fractional iterates of x²-2, a problem by Ramanujan

A curious result, attributed to Ramanujan, is stated here: if \(F(x)=x^2-2\), where \(x>2\), then its fractional iterates are:

$$F^{[\log_23]}(x) = x^3-3x,$$ $$F^{[\log_25]}(x) = x^5 - 5x^3+5x.$$

Amazing result, indeed. Unfortunately, the blog post tells nothing about how these solutions were obtained, and I have no easy way to access Ramanujan's notebooks.

The topic of fractional iteration of functions has always fascinated me, so I decided to try my hand at solving it. This post is not about the final solution (which turned out to be surprisingly simple), but about the way I solved it and enjoyed the process. Along the way, I rediscovered a fact that, in retrospect, could have been seen from the very beginning.

What are fractional iterates, by the way?

Let \(F(x)\) be some function. Then its second iterate is the function

$$F^{[2]}(x) = F(F(x)).$$

(In this post, to denote iteration I use an upper index in square brackets, in order to avoid confusion with powers and derivatives; this notation is not universally accepted.) The third iterate is \(F^{[3]}(x)=F(F(F(x)))\), and so on. Of course, for this notion to be well-defined, the function \(F(x)\) must map its domain to itself, i.e. be endomorphic.

Under compositions, iterates behave like powers:

$$F^{[a]}(F^{[b]}(x)) = F^{[a+b]}(x),$$ $$\left(F^{[a]}\right)^{[b]} = F^{[ab]}.$$

Iterates can be extended to arbitrary integers: the 0'th iterate is the identity function, \(F^{[0]}(x)=x\), and negative iterates are iterates of the inverse function.

Similarly to powers, rational fractional iterates can be defined. If \(p/q\) is a rational number, let's say that \(G(x)\) is the \(p/q\)'th iterate of \(F(x)\) iff its q'th iterate is the p'th iterate of \(F\):

$$G(x)=F^{[p/q]}(x) \Leftrightarrow G^{[q]}(x) = F^{[p]}(x).$$

In particular, the 1/2'th iterate is known as a functional root; it is a solution \(G(x)\) of the functional equation \(G(G(x))=F(x)\). It must be noted that fractional iterates are not unique; generally, there are infinitely many functions satisfying the above equation. A unique solution can be obtained by imposing additional constraints on the desired function.
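For a concrete example (mine, not from the original problem): a functional square root of \(x^2\) on \(x>1\) is \(x^{\sqrt 2}\), since \((x^{\sqrt 2})^{\sqrt 2}=x^2\). A couple of lines of Python confirm this numerically:

```python
import math

def G(x):
    # A functional square root of F(x) = x^2 (for x > 1): G(G(x)) = x^2.
    return x ** math.sqrt(2)

x = 3.0
print(G(G(x)), x**2)  # both are 9.0, up to float rounding
```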

Finally, irrational fractional iterates can be obtained as limits of the rational ones.

It may seem that finding fractional iterates is not an easy problem, and this impression is absolutely right. This makes Ramanujan's result even more mysterious!

Step 1: checking with numbers

So, let's return to the function \(F(x)=x^2-2\) and its iterates. Without a clue to the solution, I first decided to check the validity of Ramanujan's solution numerically.

But how does one calculate fractional iterates of \(F(x)\) numerically? Here is a trick (which I found on my own, by the way). Note that for large \(x\), the \(-2\) constant becomes negligible compared to \(x^2\):

$$F(x) \approx x^2, x \gg 2.$$

Iterates of \(G(x)=x^2\) are easy to write in a closed form that extends naturally to fractional values: \(G^{[2]}(x)=x^4\), \(G^{[3]}(x)=x^8\) and, generally:

$$G^{[n]}(x) = x^{2^n}.$$

Then, an arbitrary fractional iterate \(F^{[n]}\) can be approximated as:

$$F^{[n]} \approx F^{[-k]} \circ G^{[n]} \circ F^{[k]},$$

where \(k\) is a positive integer, sufficiently large for \(F^{[k]}(x)\) to be much greater than 2. Bigger values of \(k\) produce a better approximation, but values that are too big cause numeric overflow, so some optimal number must be chosen. This can be coded as:
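(The original snippet isn't reproduced here; below is my reconstruction in Python, with NumPy for vectorized evaluation. Function names are mine.)

```python
import numpy as np

def F(x):
    return x**2 - 2

def F_inv(x):
    # Inverse of F on the domain x > 2: the positive root of y^2 - 2 = x.
    return np.sqrt(x + 2)

def frac_iterate(x, n, k=5):
    # Sandwich approximation: F^[n] ≈ F^[-k] ∘ G^[n] ∘ F^[k],
    # where G(x) = x^2, so G^[n](x) = x^(2^n).
    y = x
    for _ in range(k):
        y = F(y)           # push x into the region where F(x) ≈ x^2
    y = y ** (2.0 ** n)    # apply the exactly-known fractional iterate of G
    for _ in range(k):
        y = F_inv(y)       # pull the result back
    return y
```

For example, `frac_iterate(3.0, np.log2(3))` gives approximately 18, matching \(x^3-3x\) at \(x=3\).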

So, was Ramanujan right? So far, it seems so. Here is a plot of the n'th fractional iterate, with n varying smoothly between 0 and 4. A numeric check shows that the difference between the approximate and the exact solutions stays within tolerance.

Numeric fractional iterates of \(F(x)\).

Step 2: guesswork

Having a convenient way to calculate \(F^{[n]}(x)\), I tried to obtain more data, checking whether other \(\log_2 n\)'th iterates are also expressed by polynomials. NumPy has a convenient function, polyfit, that can guess a polynomial from its values. And the answer is yes. Here is a table of the iterates:

$$\begin{array}{l|rrrrrrrrr}
 & 1 & x & x^2 & x^3 & x^4 & x^5 & x^6 & x^7 & x^8 \\
\hline
F^{[\log_2 1]} = x & & 1 \\
F^{[\log_2 2]} = F & -2 & & 1 \\
F^{[\log_2 3]} & & -3 & & 1 \\
F^{[\log_2 4]} = F^{[2]} & 2 & & -4 & & 1 \\
F^{[\log_2 5]} & & 5 & & -5 & & 1 \\
F^{[\log_2 6]} & -2 & & 9 & & -6 & & 1 \\
F^{[\log_2 7]} & & -7 & & 14 & & -7 & & 1 \\
F^{[\log_2 8]} = F^{[3]} & 2 & & -16 & & 20 & & -8 & & 1 \\
\end{array}$$
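These rows can be reproduced numerically; here is a sketch (my code) combining the sandwich approximation from above with np.polyfit:

```python
import numpy as np

def F(x):
    return x**2 - 2

def frac_iterate(x, n, k=5):
    # Sandwich approximation F^[n] ≈ F^[-k] ∘ G^[n] ∘ F^[k], G(x) = x^2.
    y = x
    for _ in range(k):
        y = F(y)
    y = y ** (2.0 ** n)
    for _ in range(k):
        y = np.sqrt(y + 2)  # inverse of F for x > 2
    return y

# Sample F^[log2 3] and let polyfit guess the cubic.
xs = np.linspace(2.1, 3.0, 50)
ys = frac_iterate(xs, np.log2(3))
coeffs = np.polyfit(xs, ys, 3)
print(np.round(coeffs, 3))  # ≈ [1, 0, -3, 0], i.e. x^3 - 3x
```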

The resulting numeric table looks suspiciously similar to Pascal's triangle, and after fiddling a bit, it is possible to notice a relation between these numbers, which translates into a recurrence between the polynomials. If \(F_i(x)\) is the i'th polynomial in the table, \(F_i = F^{[\log_2 i]}\), then:

$$F_{i+2}(x) = xF_{i+1}(x)-F_i(x),$$

with initial conditions:

$$F_1(x) = x, F_2(x)=x^2-2.$$

Wow! At this point, I felt that I was almost there.
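The recurrence is easy to verify in code; a quick sketch (my code, using NumPy's polynomial helpers) regenerates the table's rows:

```python
import numpy as np

# Coefficient lists, highest degree first.
F1 = np.array([1, 0])      # F_1(x) = x
F2 = np.array([1, 0, -2])  # F_2(x) = x^2 - 2

polys = [F1, F2]
for _ in range(6):
    # F_{i+2}(x) = x * F_{i+1}(x) - F_i(x)
    f_next = np.polysub(np.polymul([1, 0], polys[-1]), polys[-2])
    polys.append(f_next)

print(polys[2])  # F_3: [ 1  0 -3  0]        -> x^3 - 3x
print(polys[4])  # F_5: [ 1  0 -5  0  5  0]  -> x^5 - 5x^3 + 5x
```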

The above equation is a difference equation in \(i\). It can be solved with the usual technique, which I will not elaborate here. Starting from the characteristic equation \(p^2 = xp - 1\), whose roots are \(p_{1,2} = (x \pm \sqrt{x^2 - 4})/2 \), the i'th polynomial can be written as:

$$F_i(x) = \left(\frac{x + \sqrt{x^2 - 4}}2\right)^i + \left(\frac{x - \sqrt{x^2 - 4}}2\right)^i.$$

or, since \(F_i = F^{[\log_2 i]}\), the n'th iterate of \(F(x)\) is obtained by setting \(i = 2^n\):

$$F^{[n]}(x) = \left(\frac{x + \sqrt{x^2 - 4}}2\right)^{2^n} + \left(\frac{x - \sqrt{x^2 - 4}}2\right)^{2^n}.$$
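Before proving anything, the conjectured closed form can be sanity-checked numerically (a quick sketch of mine):

```python
import numpy as np

def F_closed(x, n):
    # Conjectured closed form for F^[n](x), valid for x > 2.
    s = np.sqrt(x * x - 4)
    e = 2.0 ** n
    return ((x + s) / 2) ** e + ((x - s) / 2) ** e

print(F_closed(3.0, 1))           # F(3) = 3^2 - 2 = 7
print(F_closed(3.0, np.log2(3)))  # 3^3 - 3*3 = 18
```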

Step 3: simplifying

Of course, this is still merely a guess. To really prove that the above formula is right, let's first simplify it a bit. The two roots of the characteristic equation are reciprocal to each other, and the sum of their i'th powers is twice a hyperbolic cosine:

$$F^{[n]}(x) = 2 \cosh\left( 2^n \phi(x)\right), \text{ where } \phi(x)=\log\left(\frac{x + \sqrt{x^2 - 4}}2\right).$$

To get rid of the nasty \(\phi(x)\), write the above equation for \(n=0\) as

$$x = 2 \cosh\left( \phi(x)\right),$$

and find \(\phi(x)\) from it:

$$\phi(x) = \text{acosh}\left( \frac x 2\right).$$

Finally, substitute it into the formula for arbitrary \(n\), obtaining:

$$F^{[n]}(x) = 2 \cosh\left( 2^n \text{acosh}(x/2)\right).$$
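This compact form is easy to test numerically; a sketch (my code):

```python
import numpy as np

def F_iter(x, n):
    # F^[n](x) = 2*cosh(2^n * acosh(x/2)), for x > 2.
    return 2 * np.cosh(2.0 ** n * np.arccosh(x / 2))

print(F_iter(3.0, 1))   # F(3) = 7
half = F_iter(3.0, 0.5)
print(F_iter(half, 0.5))  # applying the half-iterate twice also gives F(3) = 7
```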

The final step: now it all makes sense!

At this point I slapped my forehead saying "of course!". It all follows from the hyperbolic identity \(\cosh(2x) = 2\cosh^2(x)-1\).

It is easy to check the correctness of this expression. First,

$$ F^{[1]}(x) = 2 \cosh \left( 2 \text{acosh}(x/2)\right) = 2\left( 2(x/2)^2-1 \right) = x^2-2 = F(x). $$

Moreover, the composition rule is also easy to check:

$$(F^{[a]}\circ F^{[b]})(x) = 2 \cosh\left( 2^a \text{acosh}\left(F^{[b]}(x)/2\right)\right) = 2 \cosh\left( 2^a 2^b \text{acosh}(x/2)\right) = F^{[a+b]}(x). $$

The function \(P(x) = 2\cosh(x)\) is thus the key to this problem: doubling its argument causes its value to be mapped by \(F\): \(P(2x) = F(P(x))\). This establishes an isomorphism between doubling and applying \(F\), essentially solving the problem, since fractional iterates of doubling are easy to find. Ramanujan's mysterious polynomial expressions are also easy to explain and understand: they are related to the identities

$$\cosh(3x) = 4 \cosh^3x - 3\cosh x,$$ $$\cosh(5x) = 16\cosh^5x - 20 \cosh^3x + 5\cosh x,$$

and so on. So easy.

Starting from the identity \(\cos(2x) = 2\cos^2(x)-1\), fractional iterates of the same function \(F(x)\) can be obtained for \(x\in[-2,2]\):

$$F^{[n]}(x) = 2\cos\left(2^n \text{acos}(x/2)\right),$$

showing that \(F^{[\log_2 n]}(x)\) is just a Chebyshev polynomial in disguise!

$$F^{[\log_2n]}(x) = 2T_n(x/2).$$
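This identity is easy to confirm with NumPy's Chebyshev implementation (a quick check of mine):

```python
import numpy as np
from numpy.polynomial import Chebyshev

# 2*T_3(x/2) should coincide with F^[log2 3](x) = x^3 - 3x on [-2, 2].
T3 = Chebyshev.basis(3)
xs = np.linspace(-2, 2, 9)
print(np.allclose(2 * T3(xs / 2), xs**3 - 3 * xs))  # True
```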

A step aside, without success

Is \(x^2-2\) special? Will other functions of the form \(x^2-a\) produce nice fractional iterates of closed form?

The numeric code can be used to probe this question, and it appears that \(x^2-2\) is special: no other constant (except 0, of course) seems to produce similar results.
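For the curious, here is the kind of numeric test I mean: a sketch (my code, generalizing the earlier sandwich trick to \(F_a(x)=x^2-a\)) that fits a cubic to the \(\log_2 3\)'th iterate and inspects the fit residual:

```python
import numpy as np

def frac_iterate(x, n, a, k=5):
    # Sandwich trick generalized to F_a(x) = x^2 - a (my own generalization).
    y = x
    for _ in range(k):
        y = y * y - a
    y = y ** (2.0 ** n)
    for _ in range(k):
        y = np.sqrt(y + a)  # inverse branch for large positive x
    return y

xs = np.linspace(2.1, 3.0, 50)
resids = {}
for a in (0.0, 1.0, 2.0):
    ys = frac_iterate(xs, np.log2(3), a)
    # Residual of the best cubic fit: ~0 iff the iterate is a cubic polynomial.
    resids[a] = np.polyfit(xs, ys, 3, full=True)[1][0]
print(resids)  # a = 0 and a = 2 fit a cubic almost exactly; a = 1 does not
```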


I have been giving this kind of problem a lot of thought. I think the answer to your last question is *no*. The only polynomial sequences closed under composition are the power functions x^n and the Chebyshev polynomials. There is a good proof of this in the book "The Chebyshev Polynomials" by Theodore J. Rivlin, chapter 4, section 1.