I am German.

And again, I gave a talk for the best mathematics club of the multiverse, for "younger" students, meaning that this text is not scientific. This is the script for the talk.

We want to prove the following theorem:

Theorem: There is an algorithm for which, within standard set theory, we cannot decide whether it terminates.

Firstly, we need to look at what an algorithm is. Usually, people introduce this with Turing machines. However, we will use a model which is closer to modern computers: so-called register machines. These are provably equivalent to Turing machines, except that some algorithms might take a bit longer on Turing machines – but at this point, we do not care about efficiency; we just care about whether things can be calculated at all, given enough time and space. If you care about efficiency, that is the realm of complexity theory, while we will work in the realm of recursion theory.

A register machine has registers R[0], R[1], …, each of which can contain an arbitrarily large natural number; a program only ever uses finitely many of them. A program consists of a sequence of instructions:

  • For every i, there is the instruction R[i]++, which increases the number contained in R[i] by one.
  • For every i, there is the instruction R[i]--, which decreases the number contained in R[i] by one if it is larger than 0, and does nothing otherwise.
  • There is the instruction End, which ends the program.
  • For every i and n, there is the instruction if R[i]==0 then goto n. The instructions are numbered, and this instruction jumps to the instruction with number n if R[i] contains 0; otherwise it does nothing.

The following program checks whether R[0] >= R[1], and if so, it sets R[2] to 1:

0 if R[1]==0 then goto 6
1 if R[0]==0 then goto 5
2 R[1]--
3 R[0]--
4 if R[2]==0 then goto 0
5 End
6 R[2]++
7 End
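To make this concrete, here is a minimal Python sketch of such a machine, together with the comparison program above (the tuple representation and the name run are my own choices for illustration, not part of the definition):

def run(program, registers):
    # program: list of instructions ("inc", i) for R[i]++, ("dec", i) for
    # R[i]--, ("jz", i, n) for `if R[i]==0 then goto n`, and ("end",).
    # registers: dict mapping register indices to natural numbers.
    pc = 0  # number of the current instruction
    while True:
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] = registers.get(op[1], 0) + 1
        elif op[0] == "dec":
            registers[op[1]] = max(registers.get(op[1], 0) - 1, 0)
        elif op[0] == "jz" and registers.get(op[1], 0) == 0:
            pc = op[2]
            continue  # jump instead of falling through
        elif op[0] == "end":
            return registers
        pc += 1

# The program above, checking whether R[0] >= R[1]:
GEQ = [("jz", 1, 6), ("jz", 0, 5), ("dec", 1), ("dec", 0),
       ("jz", 2, 0), ("end",), ("inc", 2), ("end",)]
assert run(GEQ, {0: 3, 1: 2, 2: 0})[2] == 1
assert run(GEQ, {0: 2, 1: 3, 2: 0})[2] == 0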

Line 4 is only there to do an unconditional jump (R[2] is always 0 at that point). Therefore, we can introduce a shorthand goto n that just does an unconditional jump:

0 if R[1]==0 then goto 6
1 if R[0]==0 then goto 5
2 R[1]--
3 R[0]--
4 goto 0
5 End
6 R[2]++
7 End

To make it a bit more readable, we can omit the line numbers, which we usually do not need, and just attach labels to the lines we want to jump to:

Start: if R[1]==0 then goto Yes
if R[0]==0 then goto No
R[1]--
R[0]--
goto Start

No: End

Yes: R[2]++
End

This just increases readability; it doesn't make anything possible that wasn't possible before. Using this algorithm, we can generally check whether R[i] >= R[j], and therefore we can introduce a shorthand notation if R[i] >= R[j] then goto A (and similarly for the other comparisons) without being able to do anything we couldn't do before.

We can do addition by

Start: if R[1]==0 then goto Done
R[0]++
R[1]--
goto Start
Done: End

and truncated subtraction (max(a-b, 0)) by

Start: if R[1]==0 then goto Done
R[0]--
R[1]--
goto Start
Done: End

Therefore, we can add the instructions R[i]+=R[j] which adds R[j] to R[i], and R[i]-=R[j], without being able to do anything more than before. Having these instructions, we can define multiplication by repeated addition, and division and modulo by repeated subtraction. This is left to the reader.
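As a quick illustration of the idea (a sketch, not a substitute for the exercise), multiplication by repeated addition looks like this in Python, mirroring a loop that counts R[1] down while adding R[0] to a result register:

def multiply(a, b):
    result = 0
    while b > 0:     # Start: if R[1]==0 then goto Done
        result += a  # R[2] += R[0]
        b -= 1       # R[1]--
    return result    # Done: End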

Now, we cannot know for sure whether everything that is computable at all is computable by register machines. However, we do not know of any computable function that cannot be computed by a register machine. Hence, it is generally believed that there is none. This is called the Church-Turing thesis.

These programs operate on numbers only. Real computers work with images and text. Computationally, however, this makes no difference, as there are computable injections between these kinds of data. In particular, programs themselves can be represented as numbers, which is called Gödelization. For this, we use the Cantor pairing, a bijective function \pi:\mathbb{N}^2\rightarrow\mathbb{N} given by \pi(k_1,k_2):=\frac12(k_1+k_2)(k_1+k_2+1)+k_2. This function enumerates the backward diagonals of the grid of natural numbers. The pairing itself can obviously be calculated by a register machine using the operations above. Inverting the function is also easy, and left to the reader.
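Here is a short Python sketch of the pairing and one possible inverse; computing the diagonal via an integer square root is my choice, an exhaustive search would do as well:

from math import isqrt

def pair(k1, k2):
    s = k1 + k2
    return s * (s + 1) // 2 + k2

def unpair(n):
    # The diagonal s is the largest s with s(s+1)/2 <= n.
    s = (isqrt(8 * n + 1) - 1) // 2
    k2 = n - s * (s + 1) // 2
    return s - k2, k2

assert all(unpair(pair(a, b)) == (a, b)
           for a in range(100) for b in range(100))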

Now we can represent every program in the following way: we first map the individual instructions to numbers:

  • R[i]++ is mapped to \pi(0, i)
  • R[i]-- is mapped to \pi(1, i)
  • if R[i]==0 then goto j is mapped to \pi(2,\pi(i,j))
  • End is mapped to \pi(3,0)

Therefore, every instruction has its own code. A program can be encoded as a sequence of these codes; the program with the instructions i_0,\ldots,i_{n-1} can be encoded by \pi(n,\pi(i_0,\pi(i_1,\ldots))).
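Continuing the Python sketch, the encoding could look as follows; using 0 as the tail of the innermost pair is an arbitrary choice of mine, which is harmless because the length component tells us where to stop reading:

def encode_instruction(instr):
    if instr[0] == "inc":  # R[i]++                 -> pi(0, i)
        return pair(0, instr[1])
    if instr[0] == "dec":  # R[i]--                 -> pi(1, i)
        return pair(1, instr[1])
    if instr[0] == "jz":   # if R[i]==0 then goto j -> pi(2, pi(i, j))
        return pair(2, pair(instr[1], instr[2]))
    return pair(3, 0)      # End                    -> pi(3, 0)

def encode_program(program):
    # pi(n, pi(i_0, pi(i_1, ...))): the length first, then the nested codes.
    tail = 0
    for instr in reversed(program):
        tail = pair(encode_instruction(instr), tail)
    return pair(len(program), tail)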

Therefore, it is well-defined to talk about "programs getting other programs as parameters". And having seen this, it is easy to write a universal register machine, which evaluates such a program, given a sequence of register values:

The state of a program is entirely determined by its registers and the number of the current instruction. If the program has the registers R_0,\ldots,R_{i-1} and is at instruction line c, we can encode the state by \pi(c,\pi(i,\pi(R_0,\pi(R_1,\ldots)))).

Now, let R_0 contain the program code, and R_1 the current state. For an encoded sequence x, let \left<x\right>_k denote its k-th component. Let c=\left<R_1\right>_0 be the first component of the state, which tells us at which position of the program in R_0 we are, and let d=\left<R_0\right>_{c+1} be the instruction at that position (component 0 of R_0 is the length of the program).

  • If \pi_1(d)=0, let k=\pi_2(d)+2 and r=\left<R_1\right>_k. Since the first two components of the state are the instruction number and the register count, r is the value of the register R[\pi_2(d)] of the simulated program. We replace it inside R_1 by r+1, and replace \left<R_1\right>_0 by \left<R_1\right>_0+1. We have simulated R[\pi_2(d)]++.
  • Similarly, we can simulate R[\pi_2(d)]--.
  • For \pi_1(d)=2, we have to check the register value, and set the instruction number in the state to the appropriate value.
  • For \pi_1(d)=3, we stop; R_1 now contains the final state of the program.
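In Python rather than as a register program, the decoding half of such a universal machine might look like this; together with run() from above, it lets us simulate any encoded program (an actual register machine would do the same arithmetic using only the pairing function):

def decode_program(code):
    n, rest = unpair(code)  # the length n comes first
    program = []
    for _ in range(n):
        instr_code, rest = unpair(rest)
        tag, arg = unpair(instr_code)
        if tag == 0:
            program.append(("inc", arg))
        elif tag == 1:
            program.append(("dec", arg))
        elif tag == 2:
            i, j = unpair(arg)
            program.append(("jz", i, j))
        else:
            program.append(("end",))
    return program

# Round trip through the encoding, then run the comparison program again:
assert run(decode_program(encode_program(GEQ)), {0: 3, 1: 2, 2: 0})[2] == 1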

While the details are intricate, we can see that it is possible to "simulate" a register program inside a register program. A natural question arises: Is there an algorithm that, given a program P (or rather its Gödelization) and an input state s, determines whether P(s) terminates? This problem is called the halting problem. We now show that it cannot be solved. Formally, we show that there is no program M that always terminates and that, given the Gödelization of a program P in R_0 and an input state in R_1, ends with R_2 being 0 if and only if P with the given input state terminates. We do this by contradiction: Assume such an M exists. Then, from this M, we could generate the following program:

"execute M"
if R[2]==0 then goto Loop
End
Loop: goto Loop

This program terminates if and only if the given program P with the given state does not terminate. We modify this program once again, by prepending one line:

"set R[1] to <R[0]>"
"execute M"
if R[2]==0 then goto Loop
End
Loop: goto Loop

We call this program O. O takes only one argument. It terminates if and only if the given program, applied to its own Gödelization, does not terminate. (Programs that terminate when given their own Gödelization are called self-accepting; O thus terminates exactly on the programs that are not self-accepting.)

Now, as O is itself a program, we can Gödelize it, so let o be the Gödelization. By setting R_0=o, we can calculate O(o).

Now assume O(o) terminates. Then M, which checks whether O(o) terminates, leaves R[2]==0, so O runs into the infinite loop: O(o) does not terminate. Contradiction.

But assuming O(o) does not terminate would mean, by the same argument, that R[2] is not 0 after M, and therefore O(o) would terminate. Also a contradiction.

Such a program O cannot exist. Therefore, M cannot exist.

This proves that we cannot decide, in general, whether an algorithm terminates. However, it is not yet what we want: we want a concrete algorithm of which we cannot decide whether it terminates at all. To get it, we need to do a bit of logic. We will mainly focus on Zermelo-Fraenkel set theory here, as it is the foundation of mathematics.

We first define what a mathematical formula is, which is essentially a string that encodes a mathematical proposition.

  • We have an infinite set of variable symbols V:=\{a,b,c,\ldots\}.
  • The set of strings A=\{a\in b\mid a,b\in V\} is the set of atomic formulae: formulae which just state the \in-relation between two free variables.

Now, the set F of formulae is the smallest set such that

  • Every atomic formula is a formula: A\subseteq F.
  • If X\in F and a\in V, then \forall a X ("for all a, X holds"), \exists a X ("there exists an a such that X holds") and \neg X ("not X") are in F.
  • If X, Y\in F, then X\wedge Y ("X and Y"), X\vee Y ("X or Y") and X\rightarrow Y ("X implies Y") are in F.

We now give the axioms of set theory:

NUL: There is an empty set:

\exists e\forall z(\neg z\in e)

We can introduce the common shorthand notation a\not\in b for \neg a\in b, and rewrite this axiom as

\exists e\forall z(z\not\in e)

If we want to talk about the empty set, we need to introduce some variable e and add \forall z(z\not\in e) to the formula. Therefore, our system doesn't get stronger if we introduce a symbol \emptyset for the empty set instead of always adding this formula, and it increases readability, which is why we do that.

We furthermore define the shorthand notation X\leftrightarrow Y by (X\rightarrow Y)\wedge (Y\rightarrow X), and a=b by \forall z(z\in a\leftrightarrow z\in b).

EXT: The axiom of extensionality says that sets that contain the same elements are also contained in the same sets:

\forall x\forall y(x=y\rightarrow\forall w(x\in w\leftrightarrow y\in w))

FUN: The axiom of foundation says that every non-empty set contains an element that is disjoint from it. From this axiom it follows that there are no infinite chains x_0\ni x_1\ni x_2\ni\ldots.

\forall x(\exists a(a\in x)\rightarrow\exists y(y\in x\wedge\neg\exists z(z\in y\wedge z\in x)))

or, with additional obvious shorthand notation

\forall x(x\neq\emptyset\rightarrow \exists y\in x(y\cap x=\emptyset))

PAR: The axiom of pairing says that there is a set that contains at least two given elements, meaning, for all x,y, there exists a superset of \{x,y\}:

\forall x\forall y\exists z(x\in z\wedge y\in z)

UN: The axiom of union says that for every set f, a superset of the union of all sets in f exists:

\forall f\exists a\forall y\forall x((x\in y\wedge y\in f)\rightarrow x\in a)

POW: The axiom of the powerset: A superset of the powerset of every set exists:

\forall x\exists y\forall z(\underbrace{\forall q(q\in z\rightarrow q\in x)}_{z\subseteq x} \rightarrow z\in y).

INF: The axiom of infinity says that a superset of the set of natural numbers exists. Natural numbers are encoded as ordinals: 0:=\emptyset, and n+1 := n\cup \{n\}. Writing it out as a formula is left as an exercise.
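For reference, here is one common way to write it, using the shorthands introduced above (a sketch, not the unique formalization; in particular, n\cup\{n\}\in I would itself have to be unfolded into pure \in-formulae):

\exists I(\emptyset\in I\wedge\forall n(n\in I\rightarrow n\cup\{n\}\in I))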

The other two collections of axioms we need are given by axiom schemes: each consists of infinitely many axioms, but these can be expressed by a simple, finite rule:

SEP: The axiom scheme of separation says that, for every formula \theta(x) and every set a, the set \{x\in a\mid\theta(x)\} exists:

Let a formula \theta be given with free variables among x, z, w_1, \ldots, w_n, in which y does not occur freely. Then the formula

\forall z\forall w_1\ldots\forall w_n\exists y\forall x(x\in y\leftrightarrow (x\in z\wedge\theta))

is an axiom of set theory.

RPL: The axiom scheme of replacement is a bit more complicated.

A formula \theta(x,y) is called a functor on a set a (which is not the same as a functor in category theory) if for every x\in a there is a unique y such that \theta(x,y) holds. In some sense, \theta thus defines something similar to a function on a, and we write \overline{\theta}(x) for this unique y. The scheme says that the set \{\overline{\theta}(x)\mid x\in a\}, the "image" of a under \overline{\theta}, exists. Formalizing this scheme is left as an exercise.
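For orientation, here is a sketch of one standard way to state it, using the bounded-quantifier shorthands from above and writing \exists! for "there exists exactly one" (itself a shorthand):

\forall a(\forall x\in a\,\exists! y\,\theta(x,y)\rightarrow\exists b\,\forall x\in a\,\exists y\in b\,\theta(x,y))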

AC: It should be noted that usually the axiom of choice is added. However, we do not need to care whether it is added or not, so we omit it here.

We already talked about embedding natural numbers into this set theory. We can also define general arithmetic inside this set theory. Most of mathematics can be formalized inside Zermelo-Fraenkel set theory.

Now that we can formalize propositions, we want to formalize proofs. Normally, I would introduce the calculus of natural deduction here, because it corresponds to the dependently typed lambda calculus, so every proof is a term. However, for our specific purpose, namely formalizing proof theory in arithmetic, the equivalent Hilbert calculus is the better choice. Its proof terms correspond to the SKI calculus.

Firstly, we further reduce our formulae: We can express A\vee B as \neg A\rightarrow B, and A\wedge B as \neg(\neg A\vee\neg B). Furthermore, \exists x A can be expressed by \neg\forall x\neg A. Hence, we only need \neg, \forall and \rightarrow to express all formulae. We now define additional logical axiom schemes, where \phi,\xi,\psi range over all formulae. (Notice: \rightarrow is right-associative.)

  • P1. \phi\rightarrow\phi
  • P2. \phi\rightarrow\psi\rightarrow\phi
  • P3. (\phi\rightarrow\psi\rightarrow\xi)\rightarrow(\phi\rightarrow\psi)\rightarrow\phi\rightarrow\xi
  • Q5. \forall x(\phi(x))\rightarrow\phi(y) for all variables x,y
  • Q6. \forall x(\phi\rightarrow\psi)\rightarrow \forall x\phi \rightarrow\forall x\psi
  • Q7. \phi\rightarrow\forall x\phi if x is not free in \phi

A proof of a formula \psi is a finite sequence of formulae \psi_0, \ldots, \psi_n such that \psi_n=\psi and, for all 0\le i\le n, either \psi_i is an axiom of set theory or a logical axiom, or there exist 0\le j,k<i such that \psi_j = (\psi_k\rightarrow \psi_i). Essentially, this means that every formula in the sequence is either an axiom or follows from earlier formulae by modus ponens.
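As a tiny worked example, here is a proof of \psi\rightarrow\psi in this sense that uses only P2, P3 and modus ponens (redundant, since P1 is an axiom anyway, but it mirrors the SKI identity I = SKK mentioned above):

\psi_0 = \psi\rightarrow(\psi\rightarrow\psi)\rightarrow\psi   (P2)
\psi_1 = (\psi\rightarrow(\psi\rightarrow\psi)\rightarrow\psi)\rightarrow(\psi\rightarrow\psi\rightarrow\psi)\rightarrow\psi\rightarrow\psi   (P3)
\psi_2 = (\psi\rightarrow\psi\rightarrow\psi)\rightarrow\psi\rightarrow\psi   (modus ponens from \psi_0 and \psi_1)
\psi_3 = \psi\rightarrow\psi\rightarrow\psi   (P2)
\psi_4 = \psi\rightarrow\psi   (modus ponens from \psi_3 and \psi_2)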

Completeness Theorem: If a formula \psi holds in every model of the axioms of set theory, then there exists a proof of it.

To prove this, we would need model theory, which would lead too far, so we leave out the proof.

Now, as we did for programs before, we can gödelize formulae and proofs. Let us denote by \lceil\phi\rceil the gödelization of \phi.

Diagonalization Lemma: For every formula \phi(x) with one free variable x, there exists a formula \theta, such that \theta\leftrightarrow \phi(\lceil\theta\rceil) holds.

Proof: First, we notice that, given the formula \phi(x), substitution can itself be expressed arithmetically: there is a computable function S with S_{x:=n}(\lceil\phi(x)\rceil)=\lceil\phi(n)\rceil, which substitutes the number n for x. Now define \tau(x):=\exists y(\phi(y)\wedge S_{x:=x}(x)=y), and set \theta := \tau(\lceil\tau(x)\rceil). Then we have \theta\leftrightarrow \exists y(\phi(y)\wedge S_{x:=\lceil\tau(x)\rceil}(\lceil\tau(x)\rceil)=y) \leftrightarrow \exists y(\phi(y)\wedge y=\lceil\tau(\lceil\tau(x)\rceil)\rceil) \leftrightarrow \phi(\lceil\tau(\lceil\tau(x)\rceil)\rceil) \leftrightarrow \phi(\lceil\theta\rceil). This concludes the proof.

Notice that the definition of \theta is computational: it can be carried out effectively by a computer. As we can find such a formula for every \phi, we denote it by \theta_\phi.

Now, we can also gödelize proofs and their correctness criterion. Therefore, we can give a formula \nu(p,f) meaning "p is the gödelization of a correct proof of the gödelized formula f". Then \mu(f):=\exists p\nu(p, f) says that the gödelized formula f is provable.

By the diagonalization lemma, there is a formula \theta_{\neg\mu} such that \theta_{\neg\mu}\leftrightarrow\neg\mu(\lceil\theta_{\neg\mu}\rceil). Now, assume that \theta_{\neg\mu} does not hold. Then \neg\mu(\lceil\theta_{\neg\mu}\rceil) cannot hold either, that is, \theta_{\neg\mu} would be provable; a false proposition would be provable, which is a contradiction (we assume here that only true propositions are provable). Hence, \theta_{\neg\mu} must hold. But then, it cannot be provable, and since it is true, its negation cannot be provable either. This is a (sketch of a) proof of

Gödel's first incompleteness theorem: In Zermelo-Fraenkel set theory, there are propositions that can neither be proved nor disproved.

More generally, this theorem holds for every axiom system that is capable of expressing basic arithmetic, because that is all we used. For Zermelo-Fraenkel set theory specifically, there are other examples of such propositions, namely the continuum hypothesis and the existence of large cardinals.

Now, something we have always implicitly assumed is that set theory is consistent: if \psi is provable, then \neg\psi is not provable. This, however, is not something we can prove, which follows from:

Gödel's second incompleteness theorem: Set theory cannot prove its own consistency.

Proof: We use our \theta_{\neg\mu} from the proof of the first incompleteness theorem. Furthermore, we can define a function \overline{f} such that \overline{\lceil\psi\rceil}=\lceil\neg\psi\rceil. Now, we can define what it means to be consistent, namely \zeta := \neg\exists f(\mu(f)\wedge\mu(\overline{f})). We know that \theta_{\neg\mu} is false if it is provable; since false propositions imply anything, \mu(\lceil\theta_{\neg\mu}\rceil)\rightarrow\mu(\lceil\chi\wedge\neg\chi\rceil) for all formulae \chi, and obviously this implies \mu(\lceil\chi\rceil)\wedge\mu(\lceil\neg\chi\rceil). Therefore, \zeta\rightarrow\neg\mu(\lceil\theta_{\neg\mu}\rceil), that is, \zeta\rightarrow\theta_{\neg\mu}, and this argument can be carried out inside set theory. So if \zeta were provable, \theta_{\neg\mu} would be provable, which contradicts what we proved in the first incompleteness theorem. Hence, \zeta cannot be provable.

Let a := \lceil \emptyset=\{\emptyset\}\rceil. Obviously, \mu(a) holds if and only if set theory is inconsistent (since this proposition is false). Note also that \nu expresses a decidable property of the two numbers p and f, so a register machine can check it, and we may use it as a shorthand. Now consider the following algorithm:

Retry: if ν(R[0], a) then goto Found
R[0]++
goto Retry
Found: End
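The same search in Python, assuming a hypothetical checker is_proof(p, f) for \nu and a hypothetical constant A for the number a (neither is implemented here; writing is_proof out is exactly the gödelization work sketched above):

def search_for_inconsistency():
    p = 0
    while not is_proof(p, A):  # Retry: if ν(R[0], a) then goto Found
        p += 1                 # R[0]++; goto Retry
    return p                   # Found: End; p codes a proof of ∅ = {∅}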

Does this algorithm terminate?

If it terminates, it has found a proof of an inconsistency in set theory. So, assuming that set theory is consistent, it does not terminate. But if we could prove that it does not terminate, we would thereby prove that set theory is consistent, and this contradicts the second incompleteness theorem.

Hence, we have an algorithm for which we cannot decide whether it terminates.