The Analytic Continuation of the Infinite Hyperpower Function F(x) = x^x^x^...

(The presentation that follows is a light introduction. For more rigorous results, consult Paper 1.)

The finite real Hyperpower function has been studied extensively, and interested readers are advised to consult the references.

Using our previous notation, it is defined as follows:

f(x,n) = { x, if n = 1; x^f(x,n-1), if n > 1 } = x^x^...^x (n x's).

The author will pass through the following facts quickly, since most of these are addressed in the references:

lim_{x->0+} f(x,2k) = 1, and lim_{x->0+} f(x,2k+1) = 0, k in N.
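These two limits are easy to watch numerically. The sketch below is a plain Python illustration (not part of the original Maple session): it iterates the finite tower and shows the even and odd subsequences separating near 0.

```python
# Illustrative Python sketch: the finite hyperpower f(x, n) = x^x^...^x
# (n x's), computed iteratively from the top down.
def f(x, n):
    y = x
    for _ in range(n - 1):
        y = x ** y
    return y

# Near x = 0+, even heights approach 1 while odd heights approach 0:
even = f(0.01, 20)   # close to 1
odd  = f(0.01, 21)   # close to 0
```

Here x = 0.01 is just a sample point close to 0; any x left of the convergence interval exhibits the same even/odd split.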

The (infinite) real Hyperpower function is defined as:

F(x) = lim_{n->+∞} f(x,n) = x^x^x^...

The references establish that F(x) converges for x in [(1/e)^e, e^(1/e)]. The author's first article on infinite exponentials establishes that whenever it converges, the limit is -W(-log(x))/log(x), where W denotes the principal branch of Lambert's W function.
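The closed form is easy to spot-check. The Python sketch below (an illustration of mine, with a small Newton-iteration stand-in for Maple's LambertW) compares the iterated tower with -W(-log(x))/log(x) at a sample point inside the convergence interval:

```python
import math

def lambert_w(x):
    # Principal branch of Lambert's W via Newton's method
    # (a stand-in for Maple's LambertW; adequate for real x > -1/e).
    w = math.log(1.0 + x)          # rough initial guess
    for _ in range(60):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def tower(x, n):
    # f(x, n) = x^x^...^x with n x's
    y = x
    for _ in range(n - 1):
        y = x ** y
    return y

x = 1.2                            # sample point inside [(1/e)^e, e^(1/e)]
closed_form = -lambert_w(-math.log(x)) / math.log(x)
iterated = tower(x, 200)
```

The two values agree to machine precision, and closed_form satisfies the fixed-point relation x^t = t, as it must.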

On the interval (0, (1/e)^e), the sequence f(x,n), n in N, converges to a two-cycle, therefore F(x) cannot be single-valued there.

We now extend F for complex z, as follows:

h(z) = -W(-Log(z))/Log(z), where Log denotes the principal branch of the complex log function.

Using W's definition, the above definition can be written as:

h(z) = e^(-W(-Log(z))). (1)

The definition is unambiguous, provided we work with the principal branch of the complex logarithm, Log(z), and with the principal branch of Lambert's W, since then all functions involved are single-valued and h(z) is well-defined.
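Both forms of the definition can be checked against each other numerically. The Python sketch below (cmath plus a Newton-iteration stand-in for the principal branch of W; the sample point z = 1.1 + 0.3i matches one of the Maple evaluations quoted later in the article) also verifies the expected functional equation z^h(z) = h(z):

```python
import cmath

def lambert_w(z):
    # Principal branch of Lambert's W via Newton's method (complex arguments,
    # away from the branch point -1/e); a stand-in for Maple's LambertW.
    w = cmath.log(1 + z)           # rough initial guess
    for _ in range(60):
        ew = cmath.exp(w)
        w -= (w * ew - z) / (ew * (w + 1))
    return w

def h(z):
    # definition (1): h(z) = e^(-W(-Log z))
    return cmath.exp(-lambert_w(-cmath.log(z)))

z = 1.1 + 0.3j
ratio_form = -lambert_w(-cmath.log(z)) / cmath.log(z)   # -W(-Log z)/Log z
```

Since W(u)·e^(W(u)) = u, we get e^(-W(u)) = W(u)/u, which is exactly why the two forms coincide.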

Let's examine h(z) a little closer:

[Figure: the graph of h(z), shown from several viewpoints; the last view is rotated to show the function's behavior for real z. The top peak point is h(e^(1/e)) = e, the second branch point of h; one branch cut starts at this point.]
h has two branch points and two branch cuts. The first branch point is at 0, and the first branch cut is the negative real axis (because of the log); the second branch point is at e^(1/e), with a branch cut along the subset of the positive real axis from e^(1/e) to infinity.

To find the second branch point, we notice that since W has a branch point at -1/e, the composition W(-Log(z)) has a branch point where -Log(z) = -1/e. Solving, we get z = e^(1/e), and plugging this into h(z) we get h(e^(1/e)) = e^(-W(-1/e)) = e^1 = e, since W(-1/e) = -1.
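The arithmetic at the branch point can be confirmed without any Lambert W machinery: W(-1/e) = -1 because (-1)·e^(-1) = -1/e, and the claimed value y = e satisfies the fixed-point relation y = z^y at z = e^(1/e). A small Python check (an illustration, not part of the original Maple session):

```python
import math

z = math.exp(1 / math.e)       # the second branch point, e^(1/e)
y = math.e                     # claimed value h(e^(1/e)) = e^(-W(-1/e)) = e

w_check = (-1) * math.exp(-1)  # (-1)*e^(-1); equals -1/e, so W(-1/e) = -1
target = -1 / math.e
fixed_point_residual = z ** y - y   # y = z^y at the branch point
```

The residual is zero up to rounding, since z^e = e^((1/e)*e) = e exactly.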

The function is certainly real valued for real values of z in the interval [(1/e)^e, e^(1/e)], since we have established that on that interval lim_{n->+∞} f(x,n) exists finitely.


[Figure: the principal branch of Lambert's W for real arguments]

The surprise here is that h appears to be real valued for values of z in [0, (1/e)^e) as well.

Lemma #1:

lim_{x->0+} h(x) = 0, for x -> 0+ along the positive real axis:

lim_{x->0+} h(x) =
lim_{x->0+} e^(-W(-log(x))).
Now, lim_{x->0+} log(x) = -∞, so -log(x) -> +∞.
The principal branch of W(z) is real valued for x in [-1/e, +∞), and
lim_{x->+∞} W(x) = +∞, so -W(-log(x)) -> -∞ and e^(-W(-log(x))) -> 0,
and the lemma follows.

Lemma #2:

h(z) is real valued for z in (0, (1/e)^e).


Let 0 < x < e^(-e).
log is strictly increasing throughout (0, +∞), =>
-∞ < log(x) < -e, =>
e < -log(x) < +∞.
The principal branch of W is real valued and strictly increasing throughout [-1/e, +∞), and lim_{x->+∞} W(x) = +∞, =>
W(e) < W(-log(x)) < +∞, and since W(e) = 1 (because 1*e^1 = e), =>
1 < W(-log(x)) < +∞, =>
-∞ < -W(-log(x)) < -1.
exp is increasing everywhere, =>
0 < e^(-W(-log(x))) < e^(-1), =>
0 < h(x) < 1/e,
and the lemma follows.
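The bounds in lemmas #1 and #2 can be spot-checked numerically. The Python sketch below (with a Newton-iteration stand-in for Maple's LambertW, and sample points of my choosing inside (0, e^(-e))) evaluates h there:

```python
import math

def lambert_w(x):
    # Principal branch of Lambert's W via Newton's method (real x > -1/e);
    # a stand-in for Maple's LambertW.
    w = math.log(1.0 + x)
    for _ in range(60):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def h(x):
    return math.exp(-lambert_w(-math.log(x)))

samples = [1e-15, 1e-6, 0.01, 0.04, 0.06]   # all inside (0, e^(-e))
values = [h(x) for x in samples]
```

Every value lands strictly between 0 and 1/e, and the values shrink toward 0 (very slowly, like log(-log x)/(-log x)) as x -> 0+, consistent with lemma #1.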

We know that the principal branch of W(z) is real valued only for z in [-1/e, +∞). This, along with a trivial modification of lemma #2, immediately yields another lemma.

Lemma #3:

h(z) is real valued for z in [0, e^(1/e)], and for such z only.


Left to the reader as an exercise.

We momentarily turn to f(x,n) and trace the function's behavior with some Maple code.

> with (plots):
> W:=LambertW;
> h:=x->exp(-W(-log(x)));
> g:=proc(x,n)
> if n=1 then x
> else x^g(x,n-1);
> fi;
> end:
> p1:=plot({seq(g(x,n),n=1..30)}, x=0..0.3):
> p2:=plot(h(x),x=0..0.3):
> p3:=plot([[exp(-exp(1)),exp(-1)]],style=point,symbol=circle,color=black):

To actually plot the two separate limits of the even and odd subsequences of f(x,n), we modify the code which was introduced in the article on Solving the Second Real Auxiliary Equation.

> Odd:=proc(c)
> local fc;
> fc:=evalf(c);
> fsolve(fc^x=log(x)/log(fc),x,x=0.01..evalf(exp(-W(-log(c))))-0.0001);
> end:

> Even:=proc(c)
> local fc;
> fc:=evalf(c);
> fsolve(fc^x=log(x)/log(fc),x,x=evalf(exp(-W(-log(c))))+0.00001..infinity);
> end:

> p4:=plot('Odd(x)',x=0.001..evalf(exp(-exp(1)))):
> p5:=plot('Even(x)',x=0.001..evalf(exp(-exp(1)))):

The following is the behavior of f(x,n) for n in {1, 2, ..., 30}, the graph of the function defined in (1), the bifurcation point {e^(-e), e^(-1)} (shown as a circle) and the two bifurcation branches stemming from that point, giving the limits of the even and odd subsequences, which go to 1 and 0 respectively.

> display(p1,p2,p3,p4,p5);

[Figure: hyperpower convergence — f(x,n) for n in {1, ..., 30}, the graph of h, the bifurcation point and the two bifurcation branches]

The actual period-doubling bifurcation can be seen without the graphs of f(x,n).


[Figure: the bifurcation diagram]

The two branches stemming from the bifurcation point were investigated by Euler, who showed that they can be parametrized as:

a^(a/(1-a)) and a^(1/(1-a)), for appropriate positive a.
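Euler's parametrization is easy to instantiate. In the Python sketch below the parameter value a = 2 is an illustrative choice of mine (not from the original): it produces the 2-cycle {1/4, 1/2}, whose base c = 1/16 indeed lies left of the bifurcation point e^(-e):

```python
import math

a = 2.0                        # sample parameter (illustrative choice)
alpha = a ** (a / (1 - a))     # a^(a/(1-a)) -> 2^(-2) = 1/4
beta  = a ** (1 / (1 - a))     # a^(1/(1-a)) -> 2^(-1) = 1/2

# The corresponding base: c^beta = alpha forces c = alpha^(1/beta).
c = alpha ** (1 / beta)        # (1/4)^2 = 1/16

bifurcation = math.exp(-math.e)    # e^(-e) ≈ 0.0659
cycle_check_1 = c ** alpha         # should equal beta
cycle_check_2 = c ** beta          # should equal alpha
```

So {alpha, beta} really is a 2-cycle of x -> c^x: c^alpha = beta and c^beta = alpha.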

Although the two subsequences f(x,2k) and f(x,2k+1), k in N, converge to different limits left of the bifurcation point, h(x) continues all the way down to 0 (which was shown to be the limit in lemma #1), because the point e^(-W(-log(x))) is STILL a fixed point of the iteration underlying the Hyperpower function F(x) (although an unstable one, as the following lemma shows).

To see that, note that even for c in (0, e^(-e)), x0 = e^(-W(-log(c))) is a fixed point of the iterative process x -> c^x. Therefore iterating the function g(x) = c^x exactly at x0 will give:

g(x0) = c^(-W(-log(c))/log(c)) = e^(log(c)*(-W(-log(c))/log(c))) = e^(-W(-log(c))) = x0, => g^(n)(x0) = x0, for all n, and consequently lim_{n->+∞} g^(n)(x0) = x0.
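The computation above can be replayed numerically. In the Python sketch below (with a Newton-iteration stand-in for Maple's LambertW), c = 0.04 is an illustrative sample in (0, e^(-e)):

```python
import math

def lambert_w(x):
    # Principal branch of Lambert's W via Newton's method (real x > -1/e);
    # a stand-in for Maple's LambertW.
    w = math.log(1.0 + x)
    for _ in range(60):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

c = 0.04                                  # sample base in (0, e^(-e))
x0 = math.exp(-lambert_w(-math.log(c)))   # the fixed point e^(-W(-log(c)))

# Iterating g(x) = c^x starting exactly at x0 stays put:
x = x0
for _ in range(50):
    x = c ** x
residual = abs(x - x0)
```

After 50 iterations the orbit is still at x0, up to rounding error slowly amplified by the (unstable) multiplier.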

Lemma #4:

If c is in (0, e^(-e)), x0 = e^(-W(-log(c))) is an unstable fixed point of the iterative process: x -> c^x.


c is in (0, e^(-e)), =>
c < e^(-e).
log is strictly increasing, =>
log(c) < -e, =>
-log(c) > e.

The principal branch of W is strictly increasing throughout [-1/e, +∞) (in particular past e), =>

W(e) < W(-log(c)), =>
1 < W(-log(c)), =>
-W(-log(c)) < -1.

On the other hand, since g'(x) = log(c)*c^x:
g'(x0) =
log(c)*g(x0) =
log(c)*x0 =
log(c)*(-W(-log(c))/log(c)) =
-W(-log(c)) < -1, =>
|g'(x0)| > 1,
and the lemma follows.
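Lemma #4 can be watched in action. In the Python sketch below (Newton stand-in for W, sample base c = 0.04 of my choosing), the multiplier g'(x0) = -W(-log(c)) comes out below -1, and a small perturbation of x0 is expelled toward the two-cycle:

```python
import math

def lambert_w(x):
    # Principal branch of Lambert's W via Newton's method (real x > -1/e).
    w = math.log(1.0 + x)
    for _ in range(60):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

c = 0.04                                  # sample base in (0, e^(-e))
x0 = math.exp(-lambert_w(-math.log(c)))   # the fixed point

multiplier = math.log(c) * x0             # g'(x0) = log(c)*x0 = -W(-log(c))

# Perturb the fixed point slightly and iterate g(x) = c^x:
x = x0 + 0.001
for _ in range(200):
    x = c ** x
escape = abs(x - x0)                      # ends up near the 2-cycle, far from x0
```

The perturbation grows by a factor of |g'(x0)| per step until the orbit settles on the attracting 2-cycle, well away from x0.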

At this point it may be instructive to validate all this by checking the article on Solving the Second Real Auxiliary Equation, which shows all three roots to be as expected: One root is a 1-cycle (x0, above) and the other two roots are the 2-cycle.

To see the behavior around the fixed point, one can use the f_N proc that the author has used in the article on the Continuous Extension for the Hyper4 Operator, as follows:

> f_N:=proc(z,w,n)
> option remember;
> if n=0 or n=1 then z^w;
> else z^f_N(z,w,n-1);
> fi;
> end:

> w := 0.01;
> for n from 1 to 10 do
>   f_N(w, h(w), n);
> od;

This will indeed verify that the iteration stays constant.
On the other hand, let's see what happens if we perturb x0 a bit:

> w := 0.01;
> UR1:=[evalf(seq([w, f_N(w, h(w)+0.05, n)], n=1..40))]:
> p1:=plot(UR1, 0..0.1, 0..1, style=point, color=red):

> w := 0.0314635;
> UR2:=[evalf(seq([w, f_N(w, h(w)+0.2541, n)], n=1..40))]:
> p2:=plot(UR2, 0..0.1, 0..1, style=point, color=red):

> w := 0.048371;
> UR3:=[evalf(seq([w, f_N(w, h(w)+0.123, n)], n=1..40))]:
> p3:=plot(UR3, 0..0.1, 0..1, style=point, color=red):

> w := exp(-exp(1))-0.001;
> UR4:=[evalf(seq([w, f_N(w, h(w)+0.14231, n)], n=1..40))]:
> p4:=plot(UR4, 0..0.1, 0..1, style=point, color=red):

> display(p1,p2,p3,p4);

[Figure: the accumulation points of the perturbed iterations]

The accumulation points, which correspond to the separate bifurcation branches, are clearly visible.

The Analyticity of h(z)

We now turn to the issue of analyticity of the complex function h(z). The principal branch of log(z), Log(z), is analytic everywhere except on the negative real axis and at 0, where Log(z) is not even defined. The principal branch of W, on the other hand, is analytic at 0, with radius of convergence R = 1/e (check W's expansion).

We expect therefore the resultant composition h(z) to be analytic at least in some domain D (which of course excludes its two branch cuts, namely (-∞, 0] ∪ (e^(1/e), +∞)).

We know the expansion for Lambert's W function:

W(z) = Σ_{n=1..∞} [(-n)^(n-1)/n!] z^n, with radius of convergence 1/e.

Setting Z = -Log(z) in the above and then dividing by -Log(z), we immediately get:

h(z) = -W(-Log(z))/Log(z) = Σ_{n=1..∞} [n^(n-1)/n!] Log(z)^(n-1).
A quick application of the Ratio Test using Maple gives:
> a(z,n+1)/a(z,n);
> simplify(",assume=positive);
      (n+1)^(n-1) * Log(z) * n^(-n+1)
> limit(",n=infinity);
      e*Log(z)
The series therefore converges when |e*Log(z)| < 1, iff
|Log(z)| < 1/e.

The region of convergence of the above series is therefore:

D = {z: |Log(z)| < 1/e}[1].
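The series and the closed form can be compared directly. The Python sketch below (with a Newton-iteration stand-in for Maple's LambertW, and sample points of my choosing inside D) plays the role that the partial-sum function hs presumably plays in the Maple session:

```python
import cmath

def lambert_w(z):
    # Principal branch of Lambert's W via Newton's method; a stand-in for
    # Maple's LambertW, adequate away from the branch point -1/e.
    w = cmath.log(1 + z)
    for _ in range(60):
        ew = cmath.exp(w)
        w -= (w * ew - z) / (ew * (w + 1))
    return w

def h(z):
    # definition (1): h(z) = e^(-W(-Log z))
    return cmath.exp(-lambert_w(-cmath.log(z)))

def h_series(z, terms=120):
    # Partial sums of h(z) = sum_{n>=1} [n^(n-1)/n!] * Log(z)^(n-1).
    L = cmath.log(z)
    total, fact = 0.0, 1
    for n in range(1, terms + 1):
        fact *= n
        total += n ** (n - 1) / fact * L ** (n - 1)
    return total

inside = 1.1 + 0.3j          # |Log z| ≈ 0.2967 < 1/e, so inside D
err_c = abs(h_series(inside) - h(inside))
err_r = abs(h_series(1.1) - h(1.1))
```

Inside D the partial sums match the closed form to high precision; approaching the boundary |Log(z)| = 1/e the convergence slows down markedly, as the ratio-test limit e*Log(z) suggests.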

To see D, we can deploy Maple. First we generate 70 points on the circle of radius 1/e.

> UR:=[evalf(seq( [Re(t(k,60)/exp(1)),Im(t(k,60)/exp(1))], k=0..69 ))]:
> plot(UR,style=line);

[Figure: the circle |Z| = 1/e]

Now let's transform those points, under exp(z), to get the final region D:

> f:=z->exp(z);
> URT:=[evalf(seq([Re(f(t(k,60)/exp(1))),Im(f(t(k,60)/exp(1)))], k=0..60 ))]:
> plot(URT,style=point);

[Figure: the region D — the image of the circle under exp(z)]

Let's now check that we indeed have convergence inside D (hs denotes the partial sums of the series):

> evalf(hs(1.4));
> evalf(h(1.4));
> evalf(hs(1.1));
> evalf(h(1.1));

For complex z as well:

> evalf(hs(1.1+0.3*I));
.9985441412 + .3170642132 I
> evalf(h(1.1+0.3*I));
.9985441412 + .3170642131 I
> evalf(hs(0.7+0.01*I));
.7619995945 + .006502597193 I
> evalf(h(0.7+0.01*I));
.7619822445 + .006522063595 I

The real endpoint bounds of D are e^(-1/e) and e^(1/e). Note that h(z) fails to be analytic past its second branch point, at z = e^(1/e), as expected. It is also interesting to note that D is nothing more than the image of the disk of convergence of the principal branch of Lambert's W under the map exp(z), also as expected.

Having seen the analyticity of h(z), it is natural now to want to use h(z) as the analytic continuation of the real Hyperpower function F(x). Usually the criteria for choosing such extensions are fairly obscure, but in this case the author thinks its simplicity, its analyticity and its complete agreement with the real Hyperpower F(x) lead to the obvious choice.


  1. The region actually includes the boundary ∂D. When |Log(z)| = 1/e, similarly to the series for Lambert's W, we can use Stirling's approximation: sqrt(2π)*n^(n+1/2)/e^n < n!, => 1/(e^n*n!) < 1/(sqrt(2π)*n^(n+1/2)), => (n/e)^(n-1)/n! = e*n^(n-1)/(e^n*n!) < e*n^(n-1)/(sqrt(2π)*n^(n+1/2)) = e/(sqrt(2π)*n^(3/2)). Since the series Σ_{n=1..∞} 1/n^(3/2) converges, the above series converges on ∂D as well.