


Section 19.4
540 / Completeness and Incompleteness

19.18 Open Exercise 19.17 again. Take the constant c as shorthand for the witnessing constant c_Cube(x). Take T to be the first two premises of this proof. We saw in Exercise 19.6 that the other sentences are all members of H. The Elimination Theorem thus applies to show that you could transform your proof from the preceding exercise into a proof from the first two premises, one that does not need the remaining premises. Open Exercise 19.18 and give such a proof. [If you were to actually transform your previous proof, using the method we gave, the result would be a very long proof indeed. You'll be far better off giving a new, direct proof.]

Section 19.5
The Henkin Construction
Proposition 3 allows us to take any first-order structure for L and get from it one for LH that makes all the same sentences true. This, of course, gives rise to a truth assignment h to all the sentences of LH that respects the truth-functional connectives: just assign true to all the sentences that are true in the structure, false to the others. (You may recall that you were asked to prove this in Exercise 18.11.) The main step in the Henkin proof of the Completeness Theorem is to show that we can reverse this process.
Theorem (Henkin Construction Lemma) Let h be any truth assignment for LH that assigns true to all the sentences of the Henkin theory H. There is a first-order structure Mh such that Mh |= S for all sentences S assigned true by the assignment h.
In giving the proof of this result, we will assume that our language L contains only relation symbols and constants, no function symbols. We will return at the end to explain how to modify the proof if there are function symbols. The proof of this theorem has two parts. We must first show how to construct Mh from h, and then show that Mh does indeed make true all the sentences to which h assigned true. To construct Mh, we must do three things. We must define the domain D of Mh, we must assign to each n-ary relation symbol R some set R of n-tuples from D, and we must assign to each name c of LH some element of D. We first give the basic idea of the construction. This idea won't quite work, so it will have to be modified, but it's useful to see the flawed idea before digging into the details that correct the flaw.
The basic (flawed) idea in constructing the first-order structure M is to use the construction of Exercise 18.12, but to count on the fact that h assigns true to all the sentences in H to get us past the quantifiers. In more detail, we build M as follows:


◦ For the domain of M, use the set of constant symbols of LH.

◦ Let each constant be a name of itself.

◦ To specify the interpretation of a (binary, say) relation symbol R, take the set R of pairs ⟨c, d⟩ of constant symbols such that h assigns true to the sentence R(c, d).
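For concreteness, here is a minimal Python sketch of these three steps. The encoding of h as a dictionary from atomic-sentence strings to booleans, and all names below, are our own illustrative choices, not part of the text:

```python
# Sketch of the (flawed) term structure M, under a hypothetical
# representation: h is a dict mapping atomic sentences, written as
# strings like "R(c,d)", to booleans.

def build_naive_model(constants, relation_symbols, h):
    """Domain: the constants themselves, each naming itself.
    A binary relation symbol R holds of (c, d) iff h(R(c,d)) = true."""
    domain = set(constants)
    interpretation = {
        R: {(c, d) for c in constants for d in constants
            if h.get(f"{R}({c},{d})", False)}
        for R in relation_symbols
    }
    return domain, interpretation

# Tiny example: h assigns true to Larger(a,b) and to nothing else.
domain, interp = build_naive_model(["a", "b"], ["Larger"],
                                   {"Larger(a,b)": True})
```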

This defines a perfectly good first-order structure M. We would like to be able to prove that M makes true all and only the sentences to which h assigns true. That is, we would like to show that for any sentence S of LH, M |= S if and only if h(S) = true. The natural way to prove this would be by induction on the complexity of the sentence S. Many of the details of the proof work out. The main problem has to do with identity statements! The problem is that inevitably there are distinct constants c and d such that h assigns true to c = d. (See Exercise 19.7.) But M makes this sentence false, since c and d are distinct and name themselves.
This problem is one that is quite familiar in mathematical situations. What we need to do is to form what mathematicians call a “quotient” of M. What this means is that we must “identify” various elements that M considers distinct. This is just the sort of thing that equivalence relations and equivalence classes (studied in Chapter 15) are designed to do, and is the reason we included them in our discussion of sets.
Define a binary relation ≡ on the domain of M (i.e., on the constants of LH) as follows:

c ≡ d if and only if h(c = d) = true.

Lemma 11. The relation ≡ is an equivalence relation.

Proof: This follows immediately from Exercise 19.20.

From this it follows that we can associate with each constant c its equivalence class:

[c] = {d | c ≡ d}
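In code, these classes can be read off directly from h's verdicts on identity sentences. A minimal sketch follows; the dictionary encoding of h and all names are hypothetical, and since h makes H true, Lemma 11 guarantees that the resulting sets genuinely partition the constants:

```python
# Sketch: [c] = {d | h(c = d) = true}, with h given as a dict from
# identity sentences written "c=d" to booleans (hypothetical encoding).

def equiv_class(c, constants, h):
    """Return the equivalence class [c] as a frozenset."""
    return frozenset(d for d in constants if h.get(f"{c}={d}", False))

constants = ["a", "b", "c"]
# h makes a and b equal (and each constant equal to itself).
h = {"a=a": True, "b=b": True, "c=c": True, "a=b": True, "b=a": True}
```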
This allows us to define our desired first-order structure Mh:

◦ The domain D of our desired first-order structure Mh is the set of all such equivalence classes.

◦ We let each constant c of LH name its own equivalence class [c].


◦ We interpret relation symbols R as follows. For this definition let's again assume that R is binary, just to make the notation simpler. Then the interpretation of R is the set

{⟨[c], [d]⟩ | h(R(c, d)) = true}

If you think about it a second, you will see that there is something not quite clear about the definition of R in the last item. We have to say which pairs ⟨a, b⟩ of equivalence classes are in this relation. What happens if a = [c] = [c′] and b = [d] = [d′]? If we had h(R(c, d)) = true but h(R(c′, d′)) = false, our definition would not really make sense. Thus, we just make sure that this cannot happen.
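The quotient definition can be sketched in Python as follows. The encoding is hypothetical: h is split into its verdicts on identity sentences (h_id, keyed by strings like "c=d") and on other atomic sentences (h_at, keyed like "R(c,d)"), and all names are our own illustrative choices:

```python
# Sketch of the quotient structure Mh (hypothetical encoding).

def build_quotient_model(constants, relation_symbols, h_id, h_at):
    """Domain: the equivalence classes.  R holds of ([c], [d]) iff
    h(R(c,d)) = true; this is independent of the chosen representatives
    exactly when the bad situation described in the text cannot arise."""
    def cls(c):  # the class [c] = {d | h(c = d) = true}
        return frozenset(d for d in constants if h_id.get(f"{c}={d}", False))
    domain = {cls(c) for c in constants}
    interp = {
        R: {(cls(c), cls(d)) for c in constants for d in constants
            if h_at.get(f"{R}({c},{d})", False)}
        for R in relation_symbols
    }
    return domain, interp

# Example: a and b are identified, and h makes R(a,c) and R(b,c) true.
constants = ["a", "b", "c"]
h_id = {"a=a": True, "b=b": True, "c=c": True, "a=b": True, "b=a": True}
h_at = {"R(a,c)": True, "R(b,c)": True}
domain, interp = build_quotient_model(constants, ["R"], h_id, h_at)
```

The three constants collapse into two classes, and the two true atomic sentences yield a single pair of classes in the interpretation of R.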

Lemma 12. If c ≡ c′, d ≡ d′, and h(R(c, d)) = true, then h(R(c′, d′)) = true.

Proof: By Exercise 19.8, the following is a tautological consequence of the Henkin theory H:

(R(c, d) ∧ c = c′ ∧ d = d′) → R(c′, d′)

Since h assigns everything in H true, and it assigns true to each conjunct of R(c, d) ∧ c = c′ ∧ d = d′, it must also assign true to R(c′, d′).

This lemma shows that our definition of Mh does make sense after all. The proof of the Henkin Construction Lemma will be completed by proving the following result.

Lemma 13. For any sentence S of LH, Mh |= S if and only if h(S) = true.

Proof: The basic idea of this proof is to use induction. We have explicitly defined the structure Mh to make sure the claim is true for atomic sentences. Further, truth assignments work the same way on truth-functional connectives as the definition of truth in a first-order structure. So the only possible problems are the quantifiers, and these, we will see, are taken care of by the quantifier axioms in H. The only mild complication to this outline is that the quantifier ∀ is not handled directly, but indirectly through the DeMorgan sentences in H:

¬∀x P(x) ↔ ∃x ¬P(x)

What makes this a complication is that the most obvious measures of complexity, say by length or number of logical operations, would

count ∀x P(x) as simpler than the sentence ∃x ¬P(x), whereas any induction proof is going to need to make sure that the latter works out right before it can be sure that the former does. We get around this by defining a different measure of complexity for wffs. Namely, we define the complexity of an atomic wff to be 0, the complexity of ¬P and ∃x P to be one greater than the complexity of P, the complexity of P ∧ Q, P ∨ Q, and P → Q to be one greater than the maximum of that of P and Q, but the complexity of ∀x P to be three greater than that of P. Here is a little table showing the complexity of some wffs in the blocks language:

wff                                complexity
Small(x)                           0
(x = a)                            0
¬(x = a)                           1
Small(x) → ¬(x = a)                2
¬(Small(x) → ¬(x = a))             3
∃x ¬(Small(x) → ¬(x = a))          4
∀x (Small(x) → ¬(x = a))           5
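This measure is easy to compute. Here is a sketch in Python; the tuple encoding of wffs is our own illustrative choice:

```python
# Sketch: wffs encoded as nested tuples (hypothetical representation),
# e.g. ("not", ("atom", "x=a")) for ¬(x = a).

def complexity(wff):
    op = wff[0]
    if op == "atom":                  # atomic wffs: 0
        return 0
    if op in ("not", "exists"):       # ¬P, ∃x P: one more than P
        return 1 + complexity(wff[1])
    if op in ("and", "or", "imp"):    # binary: 1 + max of the two parts
        return 1 + max(complexity(wff[1]), complexity(wff[2]))
    if op == "forall":                # ∀x P: three more than P
        return 3 + complexity(wff[1])
    raise ValueError(f"unknown operator: {op}")

# The last two rows of the table: the ∃¬ version scores 4, the ∀ version 5.
body = ("imp", ("atom", "Small(x)"), ("not", ("atom", "x=a")))
```

In particular, ∀x P always comes out strictly more complex than ∃x ¬P, which is exactly the ordering the induction in the proof needs.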

Notice that the complexity of a wff P(x) is the same as that of P(c). (See, for example, Exercise 19.21.) With this definition of complexity, we can prove the lemma by induction on the complexity of sentences. As remarked above, the case where the complexity is 0 is true by the way we defined the structure Mh.
Assume that the lemma holds for all sentences of complexity ≤ k and let S have complexity ≤ k + 1. There are several cases to consider, depending on the main connective or quantifier of S. We treat one of the truth-functional cases, as these are all similar, and then both of the quantifier cases.
Case 1. Suppose S is P ∨ Q. If Mh |= S, then at least one of P or Q is true. Assume that P is true. Since the complexity of S is ≤ k + 1, the complexity of P is ≤ k, so by the induction hypothesis, h(P) = true. But then h(P ∨ Q) = true, as desired. The proof in the other direction is similar.
Case 2. Suppose that S is ∃x P(x). We need to show that Mh |= ∃x P(x) if and only if h assigns the sentence true. Assume first that the sentence is true in Mh. Then, since every object in the domain is denoted by some constant, there is a constant c such that Mh |= P(c). But since the complexity of this sentence is less than that of S, our


induction hypothesis shows us that h(P(c)) = true. But now recall that our theory H contains the sentence

P(c) → ∃x P(x)

so h assigns this sentence true. But then, by the truth table for →, h assigns true to ∃x P(x), as desired.
The reverse direction of this case is very similar, but it uses the Henkin witnessing axiom for P(x). Here is how it goes. Assume that h assigns true to ∃x P(x). We need to show that Mh |= ∃x P(x). But recall that h assigns true to the witnessing axiom

∃x P(x) → P(c_P(x))

Hence, by the truth table for →, h assigns true to P(c_P(x)). Hence, by induction, this sentence is true in Mh. But then ∃x P(x) is true as well.
Case 3. Finally, let us assume that S is ∀x P(x). We need to prove that this sentence is true in Mh if and only if h assigns the sentence true. Assume first that S is true in Mh. Then ∃x ¬P(x) is false in Mh. But then, by induction, h assigns false to this sentence. But recall that H contains the sentence

¬∀x P(x) ↔ ∃x ¬P(x)

From this it follows that h assigns false to ¬∀x P(x) and hence true to ∀x P(x), as desired. The proof in the other direction is entirely similar.

Function symbols. If there are function symbols in the original language, we have to explain how to interpret them in our structure. Suppose, for example, that our language contains a one-place function symbol f. How should we define its interpretation f? In particular, if d is some constant symbol, what equivalence class should f([d]) be? What comes to our rescue here is the witnessing constant for the sentence

∃x [f(d) = x]

We can define f([d]) to be the equivalence class [c_f(d)=x] of the witnessing constant c_f(d)=x. Since

∃x [f(d) = x] → f(d) = c_f(d)=x


is in H, it is not hard to check that all the details of the proof then
work out pretty much without change.
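This idea can also be sketched in Python. All names here are hypothetical helpers: witness(d) stands for the witnessing constant for ∃x [f(d) = x], and cls maps a constant to its equivalence class:

```python
# Sketch: interpret a one-place function symbol f on equivalence classes
# by sending [d] to the class of the witnessing constant for
# ∃x [f(d) = x].  witness and cls are hypothetical helpers.

def interpret_function(constants, witness, cls):
    """Return the map [d] -> [witness(d)] as a dict on classes."""
    return {cls(d): cls(witness(d)) for d in constants}

# Toy example: every class is a singleton, and the witnessing constant
# for ∃x [f(d) = x] is written "w_d".
f_interp = interpret_function(["a", "b"],
                              witness=lambda d: "w_" + d,
                              cls=lambda c: frozenset({c}))
```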

This completes our filling in of the outline of the proof of the Completeness Theorem.


19.19 Use Tarski's World to open Henkin Construction. This file lists eight sentences. Let's suppose that the predicates used in these sentences (Cube, Dodec, and Small) exhaust the predicates of L. (In particular, we banish = to avoid the complications it caused in the proof of the Henkin Construction Lemma.) Let h be any truth assignment that assigns true to all these sentences. Describe the first-order structure Mh. (How many objects will it have? What will they be called? What shape and size predicates will hold of them?) Use Tarski's World to build a world that would be represented by this first-order structure. There will be many such. It should, of

