Introduction
This page is inspired by the concept of asymptotic growth rate of functions.
Asymptotic growth rate refers to the behaviour of a function at unboundedly large values of its arguments. For a function f(n) of a single variable, it describes how fast f(n) grows as n increases without bound, compared to other functions. We say that a function f(n) has larger asymptotic growth than a function g(n), or equivalently, f(n) dominates g(n), if for sufficiently large n, f(n)>g(n). More precisely, “sufficiently large” means that n>M for some fixed constant M, determined relative to both f and g.
Here, we focus our attention primarily on fast-growing functions, so we will skip over some common functions that exhibit slow rates of growth.
Constant Functions
The simplest function is the constant function f(n)=C, for some constant number C. This represents the slowest rate of growth: that is to say, there is no growth at all.
Linear Functions
Linear functions are functions of the form f(n)=An+B, where A and B are constants. Linear functions are the “next step up” from constant functions; they represent a linear rate of growth, characterized by the change in f(n) being proportional to the change in n.
(Note: we skipped over functions such as logarithmic functions and square root functions, which, strictly speaking, fall between constant functions and linear functions in terms of asymptotic rate of growth. These functions can be seen to arise from the inverses of fast-growing functions we discuss below. Since we are interested in fast-growing functions here, we shall not pay much more attention to these inverse functions.)
Quadratic Functions
Quadratic functions are of the form f(n)=An^2+Bn+C, for some constants A, B, and C. These functions grow asymptotically faster than any linear function. Since the quadratic term An^2 dominates the value of the function for large values of n, we consider all quadratic functions to be adequately captured by the growth rate of the squaring function An^2.
Given a fast-growing linear function such as f(n)=1000n and a humble-looking quadratic function g(n)=n^2/1000, g will eventually overtake f, and f will never catch up again. This is true no matter how large the coefficient of f, and no matter how small the coefficient of g, as long as the latter is not zero.
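To make the claim concrete, here is a small Python sketch that finds the threshold M beyond which the quadratic stays ahead; the search strategy and bound are my own illustrative choices, not anything prescribed by the definition:

```python
def crossover(f, g, limit=10**8):
    """Return the smallest n <= limit with g(n) > f(n), or None if not found."""
    # Exponential search for an upper bound, then binary search for the threshold.
    n = 1
    while n <= limit and g(n) <= f(n):
        n *= 2
    if n > limit:
        return None
    lo, hi = n // 2, n          # predicate is False at lo, True at hi
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if g(mid) > f(mid):
            hi = mid
        else:
            lo = mid
    return hi

f = lambda n: 1000 * n           # fast-growing linear function
g = lambda n: n * n // 1000      # humble-looking quadratic (exact integer arithmetic)

print(crossover(f, g))           # 1000001: from here on, n^2/1000 > 1000n forever
```

The exponential-then-binary search assumes the gap g(n)−f(n) is eventually monotone, which holds for the polynomial comparisons discussed here.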
Polynomial Functions
The linear and quadratic functions are but the first few members of a large class of polynomial functions. Polynomial functions are of the form P(n) = C_m n^m + C_(m−1) n^(m−1) + … + C_1 n + C_0, where C_0, …, C_m are constants. By convention, m is chosen so that C_m is non-zero. We then call m the degree of P, and write deg(P)=m. For our purposes, we are really only interested in polynomials with m≥1 and C_m>0.
When m=1, the function is a linear function, and when m=2, it is a quadratic function.
It can be easily seen that given two polynomial functions P(n) and Q(n), P grows asymptotically faster than Q if deg(P)>deg(Q), and vice versa. Because of this property, every polynomial may be assigned a natural number, equal to its degree, representing its rate of growth.
Exponential Functions
The simplest exponential function is of the form f(n)=B^n, for some constant B>1. We call B the base of the exponential term B^n.
The exponential function dominates every polynomial function. Not only does it grow faster than the linear function, the quadratic function, or even g(n)=n^100000; it outgrows all polynomial functions. Hence, we may think of the exponential function f(n)=B^n as a “super-polynomial” with infinite degree.
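A quick check with Python’s exact big integers illustrates this; the sample points below are my own choices, not values from the text:

```python
# Below the crossover, the polynomial n^10 is still ahead of 2^n...
print(2**58 > 58**10)     # False
# ...but from n = 59 onwards, 2^n dominates n^10 forever.
print(2**59 > 59**10)     # True
# Even n^100 is eventually overtaken, with the crossover near n = 997.
print(2**997 > 997**100)  # True
```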
Exponential functions are not limited to just B^n. For example, one may construct functions like B^(n^2), which grows faster than any function of the form B^n. In fact, each non-constant term in the exponent corresponds with a different “level” of the exponential function, where higher levels grow asymptotically faster than lower levels. For example, B^(n^2+n) grows faster than B^(n^2).
In fact, one can also construct nested exponentials, with a dominant term of the form B^(B^n), or B^(B^(B^n)). In terms of asymptotic rate of growth, these functions lie in the upper reaches of the class of exponential functions.
Representative Functions
Note that because these exponential functions dominate all polynomial functions, we may add a polynomial function P(n) to an exponential function f(n) without significantly altering its asymptotic growth rate. Hence, when f(n) is an exponential function, f(n)+P(n) will, in general, have sufficiently similar asymptotic growth behaviour to f(n) that we lump them into the same category.
In general, just as a polynomial of higher degree dominates all polynomials of lower degree, just as exponentials dominate all polynomials, and just as nested exponentials dominate all simple and less deeply-nested exponentials, we may classify a function based on the term or factor that contributes most to its asymptotic growth rate.
For example, since b^n dominates all polynomials, we may classify all functions of the form b^n+P(n), for some polynomial P, in the same class as b^n, and call b^n the representative function of its class. Moreover, since factors of the form b^(b^n) dominate factors of the form b^n, we may regard all functions of the form f(n) = b^(b^n)·c^n + P(n) as belonging to the same class, with the representative g(n)=b^(b^n).
From this point onwards, we shall regard all examples of functions as referring to the classes they represent, except where context indicates otherwise. Hence, when we speak of exponentials f(n)=b^n, it should be understood to mean all functions formed by algebraically combining f(n) with other functions of lower asymptotic rate of growth.
Tetrational Functions
To truly transcend the exponential class of functions, we need to use a higher-order arithmetical operation. Nested exponentiations of the form n^n^…^n, where there are m occurrences of n, are sometimes called power towers. Just as multiplication is repeated addition, and exponentiation is repeated multiplication, power towers are repeated exponentiation, and correspond to the next higher-order operation. This operation is sometimes referred to as tetration, to emphasize its position in the sequence addition, multiplication, exponentiation, ….
There are various notations in use for representing tetration, including Knuth’s up-arrow notation (n↑↑m), and Rudy Rucker’s left-hand superscript (^m n for n tetrated to m). Here, we shall adopt a more extensible notation, and write n(4)m for n tetrated to m.
Functions of the form f(n)=n(4)C, for some constant C, correspond with the nested exponential functions. These functions dominate less-deeply nested (or non-nested) exponential functions, but can still be represented using exponentiation alone.
However, functions of the form f(n)=B(4)n, for some constant B, cannot be represented as nested exponentials with a fixed number of nestings. We shall classify these functions as the tetrational functions. They dominate all exponential functions, including nested exponential functions with a fixed level of nesting.
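Tetration over the natural numbers is straightforward to compute, at least for inputs small enough to fit in memory. A minimal sketch, using the common convention b(4)0 = 1:

```python
def tetrate(b, m):
    """Return b(4)m: a right-associated power tower b^b^...^b of height m."""
    result = 1                 # the empty tower: b(4)0 = 1 by convention
    for _ in range(m):
        result = b ** result   # add one more b at the base of the tower
    return result

print(tetrate(2, 3))  # 2^(2^2) = 16
print(tetrate(2, 4))  # 2^(2^(2^2)) = 65536
print(tetrate(3, 3))  # 3^(3^3) = 7625597484987
```

Already tetrate(2, 5) = 2^65536 has nearly twenty thousand decimal digits, and tetrate(2, 6) is out of reach of any computer.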
Higher-order Operation Functions
The reason we chose the notation n(4)m for tetration is that we can repeat the process of iterating operations. Just as multiplication is iterated addition, exponentiation is iterated multiplication, and tetration is iterated exponentiation, we can construct a higher-level operator, which we may term pentation, corresponding to iterated tetration. Pentation is defined using Knuth’s up-arrow notation as n(5)m = n↑↑↑m.
Here, the limitations of the Knuth up-arrow notation become apparent. To define yet higher-order operations, we need to use the indexed up-arrow notation: we define n(i)m to mean n ↑^(i−2) m, that is, n and m joined by i−2 up-arrows.
The (i) operations give rise to a hierarchy of arithmetical operations known as the Grzegorczyk hierarchy. For each i, the function f(n)=B(i)n outgrows every function g(n)=C(j)n with j<i. We may say the order of f is equal to i. Thus, higher-order operations induce a hierarchy of functions whose relative asymptotic rates of growth are ordered by their orders.
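The whole family n(i)m can be written as one doubly-recursive Python function. This is a sketch under the usual hyperoperation conventions for the base cases (those conventions are assumptions of mine; the result matches n ↑^(i−2) m for i ≥ 3):

```python
def hyper(i, b, m):
    """Return b(i)m."""
    if i == 1:
        return b + m    # (1) is addition
    if i == 2:
        return b * m    # (2) is multiplication (treated as a base case to keep recursion shallow)
    if m == 0:
        return 1        # b(i)0 = 1 for i >= 3, by convention
    # b(i)m = b (i-1) (b(i)(m-1)): each operation iterates the one below it
    return hyper(i - 1, b, hyper(i, b, m - 1))

print(hyper(2, 3, 4))   # 3*4 = 12
print(hyper(3, 3, 4))   # 3^4 = 81
print(hyper(4, 2, 4))   # 2(4)4 = 65536
```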
The Ackermann Function
The Ackermann function is inevitably encountered in the process of seeking increasingly faster-growing functions. There are several different definitions, but all embody the same underlying concept. One definition is:
A(m,n) = n+1                        if m=0
A(m,n) = A(m-1, 1)                  if m>0 and n=0
A(m,n) = A(m-1, A(m, n-1))          if m>0 and n>0
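A direct Python transcription of this definition follows. The memoization cache and raised recursion limit are practical additions of mine; even with them, only very small arguments are feasible:

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(100_000)   # headroom: the recursion nests deeply even for small m

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

for m in range(4):
    print([A(m, n) for n in range(5)])
# [1, 2, 3, 4, 5]       -> n+1
# [2, 3, 4, 5, 6]       -> n+2
# [3, 5, 7, 9, 11]      -> 2n+3
# [5, 13, 29, 61, 125]  -> 2^(n+3) - 3
```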
It may not be immediately obvious from this definition that the Ackermann function represents a rate of asymptotic growth that transcends every higher-order operation in the Grzegorczyk hierarchy, until one attempts to compute its values for various values of m and n. We shall consider a few small values of m and n to illustrate its behaviour.
For m=0, if we increment n from 0 upwards, we see that A(m,n) takes on the values 1, 2, 3, 4, …, respectively. In other words, A(0,n)=n+1.
For m=1, again letting n iterate through the first few numbers, we get the values 2, 3, 4, 5, …. So A(1,n) = n+2. So far, it seems pretty tame.
For m=2, iterating n through the first few numbers, we get 3, 5, 7, 9, …. So, A(2,n) = 2n+3. Only a faint shadow of the things to come.
Let m=3, and iterate n as usual. We get 5, 13, 29, 61, …. A little analysis reveals that A(3,n) = 2^(n+3) − 3. Now the true power of the Ackermann function begins to rear its head.
Now let m=4, and iterate n. We get 13, 65533, and then an immensely large number, namely, 2^65536 − 3. The number following that is so large that it easily overflows typical BigNum programming libraries designed to handle large numbers. It is equal to 2^(2^65536) − 3, or, to use tetrational notation, 2(4)6 − 3. In general, A(4,n) = 2(4)(n+3) − 3.
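For what it’s worth, A(4,2) = 2^65536 − 3 itself is still within reach of Python’s arbitrary-precision integers:

```python
a_4_2 = 2**65536 - 3
print(len(str(a_4_2)))   # 19729: its number of decimal digits
```

The next value, A(4,3) = 2^(2^65536) − 3, is hopeless: it has on the order of 10^19728 digits.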
The values of A(5,n) in general are too large to be computed even by a modern computer with a lot of memory. On analysis, it is revealed that A(5,n)=2(5)(n+3)-3.
In general, A(m,n)=2(m)(n+3)-3.
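This closed form is easy to spot-check for small arguments. The snippet below is my own consistency check, restating both functions so that it stands alone, and comparing the recursive definition against 2(m)(n+3) − 3:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def hyper(i, b, m):
    """b(i)m with the usual hyperoperation base-case conventions."""
    if i == 1:
        return b + m
    if i == 2:
        return b * m
    if m == 0:
        return 1
    return hyper(i - 1, b, hyper(i, b, m - 1))

for m in range(1, 4):
    for n in range(4):
        assert A(m, n) == hyper(m, 2, n + 3) - 3
print("closed form verified for m = 1..3, n = 0..3")
```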
Now, define the function f(n)=A(n,n). It can be easily seen that this function will outgrow all functions corresponding to higher-order operations in the Grzegorczyk hierarchy. The dominant factor in any of these latter functions is of the form b(i)n, where b and i are constant. But since f(n) grows like the term 2(n)(n+3), and n eventually surpasses the constant i, f(n) will eventually dominate b(i)n for any constant i.
Hence, in some sense, f(n) may be regarded as corresponding to a higher-order operation function with an infinite order. From here on, for convenience, we shall write A(n) to denote the unary Ackermann function f(n)=A(n,n).
Beyond the Ackermann Function
The Ackermann function is still far from the fastest asymptotic growth rates achievable by recursive (computable) functions. One may continue in the following manner:
Define function iteration for an arbitrary function f as follows:
f^1(n) = f(n)
f^(i+1)(n) = f(f^i(n))
Now, given some function f(n), define a related function @f as follows:
@f_1(n) = f^n(n)
@f_(i+1)(n) = (@f_i)^n(n)
@f(n) = @f_n(n)
We call @f the double-diagonalization of f.
We may regard @ as a “function operator” that performs double-diagonalization on f. For example, take the successor function s(n)=n+1. Then @s(n)=O(A(n)). In other words, @ transforms the successor function into something with the asymptotic growth rate of the Ackermann function. Obviously, @ is a very powerful function operator; we can apply it multiple times to get to growth rates significantly past the Ackermann function.
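In a language with first-class functions, @ really can be written down as a function operator. Here is a minimal Python sketch of the definition above (the helper names are mine); the values explode so quickly that only the first couple of inputs are computable:

```python
def iterate(f, i):
    """Return the i-fold composition f^i."""
    def g(n):
        for _ in range(i):
            n = f(n)
        return n
    return g

def at(f):
    """The double-diagonalization @f of f."""
    def stage(i):
        # stage(1)(n) = f^n(n); stage(i+1)(n) = (stage(i))^n(n)
        if i == 1:
            return lambda n: iterate(f, n)(n)
        return lambda n: iterate(stage(i - 1), n)(n)
    return lambda n: stage(n)(n)   # @f(n) = @f_n(n)

s = lambda n: n + 1        # the successor function
print(at(s)(1), at(s)(2))  # 2 8; already at(s)(3) is far out of reach
```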
Let us agree that when we write @@f, we mean @(@f), and when we write @@@f, we mean @(@(@f)), and so forth. Since @s(n) has the asymptotic growth rate of the Ackermann function, @@s(n) must significantly transcend the Ackermann function A(n). It transcends constant iterations A^i(n), the diagonalization A^n(n), and even constant iterations of the diagonalization, @A_i(n).
We shall denote @@s(n) by B(n). B(n)=O(@A(n)).
We can produce even faster-growing functions by diagonalizing the process of applying @ itself: let @^i f denote i applications of @ to f. That is to say:
@^1 f(n) = @f(n)
@^(i+1) f(n) = @(@^i f)(n)
We can then form the function g(n) = @^n f(n). This function will transcend all finite applications of @ to f. If we then define C(n) = @^n s(n), we obtain a function that transcends all finite applications of @ to the Ackermann function. This is equivalent to saying it transcends all finite applications of @ to the successor function s(n), since @s(n) has the same growth rate as A(n); adding the equivalent of one more application of @ does not change the fact that any finite number of applications of @ will be dominated by C(n).
Now we can diagonalize the process of diagonalizing the applications of @ to a function. We define:
#f_1(n) = @^n f(n)
#f_(i+1)(n) = (#f_i)^n(n)
#f(n) = #f_n(n)
Obviously, this process can go on indefinitely: we define new functions and operators on functions, and then operators on operators on functions, and so on. But to find our way around this forest, we need some central unifying notion.
Transfinite Ordinals
We now turn to a subject that may appear at first to be unrelated to what we’ve been discussing, but which will turn out to provide the central unifying notion we seek for constructing faster- and faster-growing computable functions.
Transfinite ordinals are a generalization of the finite ordinal numbers (first, second, third, …) to include non-finite numbers. The following is a definition of ordinal numbers due to von Neumann:
A partially ordered set S with partial order ≤ is said to be totally ordered if given any two elements T and U of S, either T≤U or U≤T. A totally ordered set is well-ordered if every non-empty subset has a least element under its order relation ≤.
An ordinal is a set S which is totally ordered under the set containment relation ⊆, and every element of S is also a subset of S.
An ordinal α under this definition is well-ordered (a consequence of the Axiom of Foundation), and is precisely the set that contains all ordinals smaller than itself. If α is finite, then it corresponds with the natural number n equal to the number of elements it contains. Thus, the empty set ∅=0, {0}=1, {0,1}=2, {0,1,2}=3, …, and so forth.
The set of natural numbers, which we shall denote by ω, is also an ordinal by this definition: it contains only ordinals (the natural numbers), and every element of ω is also a subset of ω. Hence, ω is an infinite (or transfinite) ordinal. It is, in fact, the smallest infinite ordinal, since it is the supremum of all the (finite) natural numbers.
One of the most important properties of ordinal numbers is that given any set of ordinals, the union of all the ordinals in the set is also an ordinal, and is the supremum of the set: it is the maximal ordinal in the set if the set contains a maximum, and the smallest ordinal greater than every element of the set otherwise. For example, if α={2,5,6}={{0,1},{0,1,2,3,4},{0,1,2,3,4,5}}, then the union of all elements in α is {0,1,2,3,4,5}, which is equal to 6 by definition.
Successor Ordinals
Given any ordinal α, we can form the set β=α∪{α}. The new ordinal β is called the successor of α, and is denoted α+1. For example, 1+1={0}∪{{0}}={0,{0}}={0,1}=2. Thus, we may define the successor operation s(α)=α∪{α} for any ordinal α. Note that this operation now applies not only to natural numbers, but also to infinite ordinals like ω: ω+1 is defined to be ω∪{ω}={0,1,2,3,…,ω}.
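The finite part of this construction can be modelled directly with Python’s frozensets. This is a toy model of mine; ω itself is of course out of reach of any finite representation:

```python
def successor(a):
    """s(a) = a ∪ {a}."""
    return a | frozenset({a})

zero = frozenset()        # 0 = {}
one = successor(zero)     # 1 = {0}
two = successor(one)      # 2 = {0, 1}
three = successor(two)    # 3 = {0, 1, 2}

print(len(three))         # 3: an ordinal has as many elements as its value
print(one < three)        # True: proper subset ⊂ plays the role of "1 < 3"
print(one in three)       # True: each ordinal contains every smaller ordinal
```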
We say that an ordinal α is a successor ordinal if there exists some ordinal β such that α=β+1. If there is no such β, then we say that α is a limit ordinal. In other words, a limit ordinal is an ordinal that cannot be reached from previous ordinals by repeated applications of the successor function s. Zero is such a limit ordinal, because there are no ordinals before zero. The infinite ordinal ω is another limit ordinal: it cannot be reached from any finite ordinal by repeated applications of s. It can only be reached “by fiat”—by taking the union of all natural numbers.
Ordinal Addition
We may now define the iterated successor operation:
s^0(α) = α
s^(i+1)(α) = s(s^i(α))
It should be obvious that s^i(α) = α+i. As it stands, this definition only works for finite i, but it is possible to define it for transfinite i as well. We simply extend the definition to include what happens at limit ordinals:
s^β(α) = α                      if β=0
s^β(α) = s(s^γ(α))              if β=γ+1
s^β(α) = ∪_{γ<β} s^γ(α)         if β is a limit ordinal
The notation ∪_{γ<β} s^γ(α) means the set union of all ordinals given by s^γ(α), for all ordinals γ<β. As we’ve mentioned before, the union of any set of ordinals is also an ordinal, so this ensures that our iterated successor function always constructs an ordinal.
This definition of the iterated successor operation allows us to define ordinal addition for any two ordinals α and β:
α+β = s^β(α)
Note that, in general, ordinal addition is not commutative: α+β is not equal to β+α in general. This is unlike the addition of finite numbers.
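Non-commutativity is easy to exhibit mechanically. The sketch below implements ordinal addition for ordinals below ω^ω, represented in Cantor normal form as descending lists of (exponent, coefficient) pairs; the representation and helper names are my own, and only the addition rule itself is standard:

```python
def add(a, b):
    """Ordinal sum a + b, both in Cantor normal form, e.g. ω*2 + 3 = [(1, 2), (0, 3)]."""
    if not b:
        return a
    e = b[0][0]                         # leading exponent of b
    # Terms of a smaller than ω^e are absorbed by b: this is why 1 + ω = ω.
    a = [t for t in a if t[0] >= e]
    if a and a[-1][0] == e:             # equal leading exponents: coefficients add
        return a[:-1] + [(e, a[-1][1] + b[0][1])] + b[1:]
    return a + b

one = [(0, 1)]                          # the ordinal 1 = ω^0 * 1
omega = [(1, 1)]                        # the ordinal ω = ω^1 * 1

print(add(one, omega))                  # [(1, 1)]         -> 1 + ω = ω
print(add(omega, one))                  # [(1, 1), (0, 1)] -> ω + 1 ≠ ω
print(add(omega, omega))                # [(1, 2)]         -> ω + ω = ω*2
```

The key step is the absorption line: every term of the left argument below the leading ω-power of the right argument is swallowed, which is exactly why 1+ω collapses to ω while ω+1 does not.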
Ordinal Multiplication
By iterating the addition operation we have defined on ordinal numbers, we may define ordinal multiplication:
a^β(α) = 0                      if β=0
a^β(α) = a^γ(α) + α             if β=γ+1
a^β(α) = ∪_{γ<β} a^γ(α)         if β is a limit ordinal
It is easy to see from this definition that a^n(α) = α+α+…+α (n times), for finite n. This also works for transfinite n such as ω: a^ω(α) is simply the supremum of α⋅1, α⋅2, α⋅3, …. Hence, we may define ordinal multiplication thus:
α⋅β = a^β(α)
Ordinal Exponentiation
Ordinal exponentiation can be constructed in a similar vein, by iterating ordinal multiplication. The definition is almost identical:
m^β(α) = 1                      if β=0
m^β(α) = m^γ(α)⋅α               if β=γ+1
m^β(α) = ∪_{γ<β} m^γ(α)         if β is a limit ordinal
It is easy to see that m^β(α) is just α multiplied by itself β times; hence, we may define ordinal exponentiation as:
α^β = m^β(α)
Beyond Ordinal Exponentiation
One may now be tempted to assume that we may define ordinal tetration by iterating ordinal exponentiation, just as we constructed finite tetration from finite exponentiation by iteration. We can—but only up to ω.
To see why this is so, consider the sequence ω, ω^ω, ω^(ω^ω), …. We may take the limit of this process to obtain an ordinal that contains all such “power towers” of ω’s of finite height. This ordinal is Cantor’s ε0, which intuitively corresponds to a power tower of infinite height. We may think of it as ω(4)ω, using our higher-order operation notation. So far so good.
Now, the question is, how do we define ω(4)(ω+1)?
Intuitively speaking, we know that this corresponds to a power tower of height ω+1; but unfortunately, this cannot be defined in terms of iterated exponentials. The reason is that exponentiation is non-associative: x^(y^z) is not the same as (x^y)^z. Hence, given a power tower T = x^x^…^x = x(4)n, T^x is not equal to x(4)(n+1), but is in fact much smaller. Now, it is possible to get x(4)(n+1) by taking x^T instead, and this is what makes it possible to define tetration in terms of iterated exponentials of finite height.
However, this doesn’t generalize to transfinite ordinals: in general, ordinal operations are not commutative, so ω(4)(ω+1) ≠ ω(4)(1+ω). In fact, 1+ω=ω. So if we try to define ω(4)(ω+1)=ωω(4)ω, we realize that it is still equal to ω(4)ω=ε0. In fact, ε0 is defined by Cantor to be the smallest ordinal such that ωε0=ε0.
The problem here is that for power towers of transfinite height, adding one more ω to the base of the tower does not make it bigger. We need to add the next ω to the top of the tower. However, the top of the tower is not reachable arithmetically: the infinite tower ω^ω^ω^… has no top!
In order to surmount this problem, we need to use another approach.
Ordinal Fixed-points
A fixed-point of an ordinal function F is an ordinal α such that α=F(α). For example, consider s'(α)=1+α. (Note: this is not the same as the successor function s(α)=α+1, which has no fixed points in the ordinals. Remember that ordinal arithmetic is not commutative.) If α is finite, then s'(α)>α. However, if α=ω, then s'(α)=1+ω=ω=α. Therefore, ω is a fixed-point of s'. In fact, every ordinal after ω is also a fixed-point of s'. (E.g., s'(ω+1)=1+(ω+1)=(1+ω)+1=ω+1.)
Similarly, consider a'(α)=ω+α. For small values of α, a’ increases it by ω: a'(0)=ω, a'(1)=ω+1, a'(2)=ω+2, and even a'(ω)=ω+ω, and a'(ω+ω)=ω+ω+ω=ω⋅3. If we continue to iterate a’, we eventually reach a fixed-point: a'(ω⋅ω)=ω+ω⋅ω=ω⋅ω. After this, every ordinal greater than ω⋅ω is a fixed-point of a’.
Now consider m'(α)=ω⋅α. For sufficiently small α, m' increases it by a factor of ω. For example, m'(ω)=ω⋅ω, m'(ω⋅ω)=ω⋅ω⋅ω. Eventually, we find that the smallest fixed-point of m' is ω^ω. Here, we note that not every ordinal after ω^ω is a fixed-point of m'; for example, m'(ω^ω+1) = ω⋅(ω^ω+1) = ω⋅ω^ω+ω⋅1 = ω^ω+ω ≠ ω^ω+1. In fact, the next smallest fixed-point of m' is ω^ω⋅2. This is because m'(α+β) = ω⋅α+ω⋅β, so for α+β to be a fixed-point, both α and β must be fixed-points of m'. Since ω^ω is the smallest fixed-point of m', the next smallest fixed-point must be ω^ω+ω^ω = ω^ω⋅2.
Notice that there are multiple fixed-points in each of these examples, and that enumerating fixed-points allows us to get past the ordinal at which the function is “stuck”. We now apply this to iterated ordinal exponentiation in order to get past ε0.
Ordinal Tetration
First of all, Cantor made use of fixed-points to define his ε-numbers. Let e'(α)=ω^α. Then εβ is defined to be the β’th fixed-point of e'.
We have already seen ε0, the first fixed point of e’, which we intuitively identify as ω(4)ω. Now, we shall make use of Cantor’s ε-numbers to define ordinal tetration beyond ω(4)ω.
Recall how we tried to define ω(4)(ω+1) using iterated exponentiation, but could not get past ε0. Intuitively, however, we “know” that ω(4)(ω+1) is simply a power tower of height ω+1. If we imagine laying each of the exponents in this tower in a line, we get the transfinite sequence ω, ω, ω, …, ω, where the last ω sits at the ω’th position of the line. Now notice that if we add another ω to the left of this sequence, we get the same sequence. In other words, ω^(ω(4)(ω+1)) = ω(4)(ω+1). So, it must be one of Cantor’s ε-numbers. But which one is it?
Consider ordinals of the form ω(4)ω+α, where 0 < α < ω(4)ω. Clearly, ω^(ω(4)ω+α) = ω^(ω(4)ω)⋅ω^α = ω(4)ω⋅ω^α ≠ ω(4)ω+α. Similarly, for ordinals of the form ω(4)ω⋅α, where 1 < α < ω(4)ω, we have ω^(ω(4)ω⋅α) = (ω^(ω(4)ω))^α = (ω(4)ω)^α ≠ ω(4)ω⋅α. Furthermore, (ω(4)ω)^α is not a fixed-point either, since (ω(4)ω)^α < ω^((ω(4)ω)^α) for 1 < α < ω(4)ω.
In general, any ordinal strictly between ω(4)ω and ω(4)(ω+1) can be expressed as a combination of smaller ordinals using addition, multiplication, and exponentiation, and none of these combinations is a fixed-point of e'.
In other words, ω(4)(ω+1) is the next smallest fixed-point of e’. This means that it must be equal to ε1. Therefore, we may define ω(4)(ω+1) = ε1.
An analogous argument will show that after ω(4)(ω+1), the next smallest ordinal that is a fixed point of e’ must be ω(4)(ω+2). And since it is so, it must be equal to ε2.
Now the pattern is clear: ω(4)(ω+α) = εα. Notice that for α ≥ ω⋅ω, this equivalence collapses to εα = ω(4)α. So we have successfully defined ordinal tetration with ω as the left-argument.
Ordinal Pentation
Armed with our definition of ordinal tetration, we now consider what happens if we iterate tetration: ω(4)ω = ε0, ω(4)ω(4)ω = ω(5)3 = ε_(ε0), ω(4)ω(4)ω(4)ω = ω(5)4 = ε_(ε_(ε0)), and so on. So, we can define ordinal pentation for finite right arguments as:
ω(5)1 = ω
ω(5)2 = ε0
ω(5)(n+1) = ε_(ω(5)n)
If we take this process to the limit, we get η0 = ω(5)ω, which has the property that η0 = ε_(η0). Again, this is a fixed-point property. This means that we can now enumerate the fixed-points of the ε-subscripting process, which, by induction, lets us define ordinal pentation for arbitrary right-arguments. In other words, we let ηα be the α’th fixed-point of the ε-subscripting operation, and define ω(5)(2+α) to be ηα. The extra “2” term is just to make the indices line up nicely for finite α; once we get to ω and beyond, it is no longer relevant.
The Veblen Hierarchy
We could continue in this way to define the higher-order operations in the Grzegorczyk hierarchy for ordinals, but we might as well adopt a more robust notation for enumerating these fixed-point functions. These fixed-point functions give rise to the so-called Veblen hierarchy of ordinals, which, by our scheme of defining ordinal operations using fixed-points, will eventually lead us back to the realm of fast-growing functions. In fact, the functions corresponding to the ordinals in the upper reaches of the Veblen hierarchy grow so fast that they far, far transcend the Grzegorczyk hierarchy, the Ackermann function, and virtually all immediately obvious attempts at going past the Ackermann function, including our previous attempts to define operators on functions, and operators on operators on functions, and so on.
First, let us define the ψ function, which will enumerate the ordinals in the Veblen hierarchy. We shall adopt a slightly different definition than the φ notation usually used for the Veblen hierarchy. In spite of its initial appearance, this notation is actually equivalent in expressive power to the φ notation once we get to ψω and beyond. We adopt it because it will be more convenient later on for defining the Exploding Tree Function, which will be the functional equivalent of the Feferman-Schütte ordinal Γ0. But first, the definition:
ψ0(α) = 1+α (hence ψ0(0)=1)
ψ(β+1)(α) = the α’th fixed-point of ψβ
ψβ(α) = the α’th common fixed-point of ψγ for all γ<β, when β is a limit ordinal
Credits
This page was inspired by various online resources on large, finite natural numbers, including the following:
- Large Numbers by Rob Munafo
- Infinity Scrapers by Jonathan Bowers (it may be easier to understand what exactly Bowers is talking about if you read about linear array notation first, followed by the subsequent pages that go to dimensional arrays and beyond.)