
Inverse Ackermann without pain - גבריאל ניבש - Gabriel Nivasch

Original source (www.gabrielnivasch.org)


Inverse Ackermann without pain

(Updated June 2009)

The inverse Ackermann function is an extremely slow-growing function which occasionally turns up in computer science and mathematics. The function is denoted α(n) (alpha of n).

This function is best known in connection with the Union-Find problem: the optimal algorithm for Union-Find runs in time O(mα(n) + n), where n is the number of elements and m is the total number of Union and Find operations performed. (See Cormen et al., Introduction to Algorithms, Second Edition, Chapter 21, MIT Press, 2001.) (A more precise bound is O(mα(m, n) + n), with a two-parameter version of the inverse Ackermann function, which we will explain below.)
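For context, here is a standard Union-Find (disjoint-set forest) sketch in Python with union by rank and path halving; this is the textbook structure whose amortized analysis produces the α bound. The code and names are illustrative, not taken from this article:

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path halving.
    With both heuristics, m operations on n elements take
    O(m alpha(m, n) + n) time in total."""

    def __init__(self, n: int):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # upper bound on tree height

    def find(self, x: int) -> int:
        # Path halving: point every other node on the path at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x: int, y: int) -> None:
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

Either path halving or full path compression gives the same asymptotic bound; halving is shown here because it needs only one pass.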

The inverse Ackermann function also arises in computational geometry. For example, the maximum complexity of the lower envelope of n segments in the plane is Θ(nα(n)). (See J. Matoušek, Lectures on Discrete Geometry, Chapter 7, Springer-Verlag, New York, 2002.)

For some reason the inverse Ackermann function gets much less attention than it deserves. This is probably due to the perception that just defining α(n) is complicated, never mind working with it.

It may come as a surprise, then, that there is a very simple and elegant way to define the inverse Ackermann function and derive its asymptotic properties. Moreover, there is no need to make any mention of A, the very quickly-growing Ackermann function.

In other words, dealing with α(n) does not have to be painful!

There are several different versions of the inverse Ackermann function in the literature. In fact, usually one needs to define a specific version of the function for each application. However, at the end of the day, all definitions yield equivalent asymptotic behavior; namely, we have |α(n) − α'(n)| = O(1) for any two versions α and α'. Thus, it is convenient to have a canonical definition of α(n), which we would like to be as simple and elegant as possible.

The inverse Ackermann hierarchy

The inverse Ackermann hierarchy is a sequence of functions αk(n), for k = 1, 2, 3, ..., where each function in the hierarchy grows much more slowly than the previous one.

Let [] denote the ceiling function (rounding up to the nearest integer). Then the inverse Ackermann hierarchy is defined as follows. We first let

α1(n) = [n / 2].

Then, for each k ≥ 2, we let αk(n) be the number of times we have to apply the function αk−1, starting from n, until we reach 1. Formally, for k ≥ 2, we let

αk(1) = 0;   αk(n) = 1 + αk(αk−1(n)),   n ≥ 2.
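The hierarchy can be computed directly from this definition. Here is a short Python sketch (my own, not part of the article) that implements αk(n) exactly as stated, using the ceiling for α1:

```python
from math import ceil

def alpha(k: int, n: int) -> int:
    """alpha_k(n) from the inverse Ackermann hierarchy (k >= 1, n >= 1)."""
    if k == 1:
        return ceil(n / 2)   # alpha_1(n) = ceiling of n/2
    if n == 1:
        return 0             # base case: alpha_k(1) = 0 for k >= 2
    # alpha_k(n) counts how many times alpha_{k-1} must be applied
    # to n before the value reaches 1.
    count = 0
    while n > 1:
        n = alpha(k - 1, n)
        count += 1
    return count
```

For example, alpha(2, 100) returns 7 = [log2 100], and alpha(3, 16) returns 3, matching the table below.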

The following table shows the first values of αk(n):

n = 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ... n
α1(n) = 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 11 12 12 13 ... [n / 2]
α2(n) = 0 1 2 2 3 3 3 3 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 ... [log2 n]
α3(n) = 0 1 2 2 3 3 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 ... log* n
α4(n) = 0 1 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 ...
...

We have α2(n) = [log2 n], and α3(n) is the iterated logarithm function, denoted log* n.

Claim 1: If n ≥ 4 then αk(n) ≤ n − 2.

Proof: By induction on k. The case k = 1 is clear. So assume k ≥ 2.

If n = 4, then αk(n) = 2; and if n = 5 or 6, then αk(n) = 3. So let n ≥ 7. Then, using the induction hypothesis on k (which gives αk−1(n) ≤ n − 2), the fact that αk is nondecreasing, and the induction hypothesis on n,

αk(n) = 1 + αk(αk−1(n)) ≤ 1 + αk(n − 2) ≤ 1 + n − 4 < n − 2.   QED

Claim 2: We have αk+1(n) ≤ αk(n) for all k and n. Moreover, for k ≥ 2 the inequality is strict if and only if αk(n) ≥ 4.

Proof: The claim is easily established for αk(n) ≤ 3, so suppose αk(n) ≥ 4. By Claim 1, applied with αk(n) in place of n,

αk+1(n) = 1 + αk+1(αk(n)) ≤ 1 + αk(n) − 2 < αk(n).   QED

Corollary 3: We have αk(n) = o(n) for all k ≥ 2.

Proof: By Claim 2, since α2(n) = Θ(log n) = o(n).   QED

Claim 4: We have αk+1(n) = o(αk(n)) for all k ≥ 1.

Proof: By Corollary 3 we have

αk+1(n) = 1 + αk+1(αk(n)) = 1 + o(αk(n)).   QED

In fact, Claim 4 can be strengthened. Given an integer r ≥ 1, let f(r) denote the r-th-fold composition of the function f. Then,

Claim 5: αk+1(n) = o(αk(r)(n)) for all fixed k and r.

Proof: Iterating r times the definition of αk+1(n), and applying Corollary 3,

αk+1(n) = r + αk+1(αk(r)(n)) = r + o(αk(r)(n)).   QED

Thus, we have log* n = o(log log log n), α4(n) = o(log* log* log* log* n), etc.

The inverse Ackermann function

By Claim 2, for every fixed n ≥ 5, the sequence

α1(n), α2(n), α3(n), ...

decreases strictly until it settles at 3. For example, for n = 9876! we obtain the sequence

3.06 × 10^35163,  116812,  6,  4,  3,  3,  3, ...

The inverse Ackermann function α(n) assigns to each integer n the smallest k for which αk(n) ≤ 3:

α(n) = min { k : αk(n) ≤ 3 }.

Thus, α(9876!) = 5.
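Under this definition, computing α(n) is a simple search over the hierarchy. A minimal Python sketch (the helper alpha restates the hierarchy from above so that the snippet is self-contained; the function names are mine):

```python
from math import ceil

def alpha(k: int, n: int) -> int:
    # The hierarchy alpha_k(n), restated so this snippet stands alone.
    if k == 1:
        return ceil(n / 2)
    if n == 1:
        return 0
    count = 0
    while n > 1:
        n, count = alpha(k - 1, n), count + 1
    return count

def inv_ackermann(n: int) -> int:
    """alpha(n) = min { k : alpha_k(n) <= 3 }."""
    k = 1
    while alpha(k, n) > 3:
        k += 1
    return k
```

For instance, inv_ackermann(4) returns 1, inv_ackermann(8) returns 2, and inv_ackermann(100) returns 4: small inputs already show how slowly α grows.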

Claim 6: We have α(n) = o(αk(n)) for every fixed k.

Proof: Let m = αk+1(n). Then the (m − 2)-nd term of the sequence

αk+1(n), αk+2(n), αk+3(n), ...,

namely αk+m−2(n), already equals 3. Thus,

α(n) ≤ k + m − 2 = k − 2 + αk+1(n) = o(αk(n)),   by Claim 4.   QED

The two-parameter version of the inverse Ackermann function

There is also a two-parameter version of the inverse Ackermann function that sometimes comes up (for example, in the running time of the Union-Find algorithm mentioned above). This two-parameter function can be defined as:

α(m, n) = min { k : αk(n) ≤ 3 + m / n }.

This definition differs by at most a small additive constant from the "usual" definition of α(m, n) found in the literature. And as before, we defined it directly, without making mention of the rapidly-growing Ackermann function.

The function α(m, n) satisfies the following properties:

  • α(m, n) ≤ α(n) for every m and n.
  • α(m, n) is nonincreasing in m.
  • If m = nαk(n) then α(m, n) ≤ k.
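The two-parameter definition and the three properties above are easy to sanity-check numerically. A small Python sketch (again restating the hierarchy so the snippet stands alone; names are illustrative):

```python
from math import ceil

def alpha(k: int, n: int) -> int:
    # The hierarchy alpha_k(n), restated so this snippet stands alone.
    if k == 1:
        return ceil(n / 2)
    if n == 1:
        return 0
    count = 0
    while n > 1:
        n, count = alpha(k - 1, n), count + 1
    return count

def inv_ack2(m: int, n: int) -> int:
    """alpha(m, n) = min { k : alpha_k(n) <= 3 + m/n }."""
    k = 1
    while alpha(k, n) > 3 + m / n:
        k += 1
    return k

def inv_ack(n: int) -> int:
    """alpha(n) = min { k : alpha_k(n) <= 3 }."""
    return inv_ack2(0, n)  # taking m = 0 recovers the one-parameter function
```

With n = 100, for example, inv_ack(100) returns 4 while inv_ack2(100, 100) returns 3: increasing m can only decrease the value, as the second bullet above says.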
