Here e is intended to name the group's identity element and · the group

operation. Let Σ be the set of sentences of LG including:

1. The axioms for a commutative group:

• ∀x x · e = x

• ∀x ∃y x · y = e

• ∀x ∀y ∀z x · (y · z) = (x · y) · z

• ∀x ∀y y · x = x · y

2. A sentence which asserts that every element of the universe is of

order 2:

• ∀x x · x = e

3. For each n ≥ 2, a sentence, σn , which asserts that there are at

least n di¬erent elements in the universe:

• ∃x1 . . . ∃xn ((¬x1 = x2) ∧ (¬x1 = x3) ∧ · · · ∧ (¬xn−1 = xn))


52 9. APPLICATIONS OF COMPACTNESS

We claim that every finite subset of Σ is satisfiable. The most direct way to verify this is to show how, given a finite subset ∆ of Σ, to produce a model M of ∆. Let n be the largest integer such that σn ∈ ∆ ∪ {σ2} (Why is there such an n?) and choose an integer k such that 2^k ≥ n. Define a structure (G, ∘) for LG as follows:

• G = { (a_1, . . . , a_k) | each a_i = 0 or 1 }

• (a_1, . . . , a_k) ∘ (b_1, . . . , b_k) = (a_1 + b_1 (mod 2), . . . , a_k + b_k (mod 2))

That is, G is the set of binary sequences of length k and ∘ is coordinatewise addition modulo 2 of these sequences. It is easy to check that (G, ∘) is a commutative group with 2^k elements in which every element has order 2. Hence (G, ∘) |= ∆, so ∆ is satisfiable.

Since every finite subset of Σ is satisfiable, it follows by the Compactness Theorem that Σ is satisfiable. A model of Σ, however, must be an infinite commutative group in which every element is of order 2. (To be sure, it is quite easy to build such a group directly; for example, by using coordinatewise addition modulo 2 of infinite binary sequences.)

Problem 9.1. Use the Compactness Theorem to show that there is an infinite

1. bipartite graph,
2. non-commutative group, and
3. field of characteristic 3,

and also give concrete examples of such objects.

Most applications of this method, including the ones above, are not really interesting: it is usually more valuable, and often easier, to directly construct examples of the infinite objects in question rather than just show that such objects must exist. Sometimes, though, the technique can be used to obtain a non-trivial result more easily than by direct methods. We'll use it to prove an important result from graph theory, Ramsey's Theorem. Some definitions first:

Definition 9.1. If X is a set, let the set of unordered pairs of elements of X be [X]^2 = { {a, b} | a, b ∈ X and a ≠ b }. (See Definition A.1.)

1. A graph is a pair (V, E) such that V is a non-empty set and E ⊆ [V]^2. Elements of V are called vertices of the graph and elements of E are called edges.
2. A subgraph of (V, E) is a pair (U, F), where U ⊆ V and F = E ∩ [U]^2.
3. A subgraph (U, F) of (V, E) is a clique if F = [U]^2.
4. A subgraph (U, F) of (V, E) is an independent set if F = ∅.


That is, a graph is some collection of vertices, some of which are

joined to one another. A subgraph is just a subset of the vertices,

together with all edges joining vertices of this subset in the whole graph.

It is a clique if it happens that the original graph joined every vertex in

the subgraph to all other vertices in the subgraph, and an independent

set if it happens that the original graph joined none of the vertices in

the subgraph to each other. The question of when a graph must have

a clique or independent set of a given size is of some interest in many

applications, especially in dealing with colouring problems.
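The definitions above translate directly into code; in the following Python sketch (the function names are my own, not from the text), edges are unordered pairs represented as frozensets:

```python
from itertools import combinations

def subgraph(E, U):
    # F = E ∩ [U]^2: keep exactly the edges of the whole graph lying inside U
    return {edge for edge in E if edge <= U}

def is_clique(U, F):
    # every unordered pair of vertices of U is an edge: F = [U]^2
    return F == {frozenset(p) for p in combinations(U, 2)}

def is_independent(U, F):
    # no pair of vertices of U is joined: F = ∅
    return F == set()

# A triangle on {0, 1, 2} plus an isolated vertex 3
E = {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})}

U1 = {0, 1, 2}
assert is_clique(U1, subgraph(E, U1))       # the triangle is a clique
U2 = {3}
assert is_independent(U2, subgraph(E, U2))  # a single vertex is independent
```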

Theorem 9.2 (Ramsey's Theorem). For every n ≥ 1 there is an integer Rn such that any graph with at least Rn vertices has a clique with n vertices or an independent set with n vertices.

Rn is the nth Ramsey number. It is easy to see that R1 = 1 and R2 = 2, but R3 is already 6, and Rn grows very quickly as a function of n thereafter. Ramsey's Theorem is fairly hard to prove directly, but the corresponding result for infinite graphs is comparatively straightforward.
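The claim that R3 = 6 can be checked by brute force: every graph on 6 vertices contains a clique or independent set with 3 vertices, while the 5-cycle shows that 5 vertices do not suffice. A sketch in Python (exhaustive over all 2^15 graphs on 6 vertices, so it takes a moment to run):

```python
from itertools import combinations

def has_homogeneous_triple(vertices, edges):
    # Does the graph contain a 3-clique or a 3-element independent set?
    for trio in combinations(vertices, 3):
        present = sum(1 for p in combinations(trio, 2) if frozenset(p) in edges)
        if present == 3 or present == 0:
            return True
    return False

# Every graph on 6 vertices has a homogeneous triple, so R_3 <= 6.
all_pairs = [frozenset(p) for p in combinations(range(6), 2)]  # 15 possible edges
assert all(
    has_homogeneous_triple(range(6),
                           {all_pairs[i] for i in range(15) if mask >> i & 1})
    for mask in range(2 ** 15)
)

# The 5-cycle has no triangle and no independent triple, so R_3 > 5.
c5_edges = {frozenset({i, (i + 1) % 5}) for i in range(5)}
assert not has_homogeneous_triple(range(5), c5_edges)
```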

Lemma 9.3. If (V, E) is a graph with infinitely many vertices, then it has an infinite clique or an infinite independent set.

A relatively quick way to prove Ramsey's Theorem is to first prove its infinite counterpart, Lemma 9.3, and then get Ramsey's Theorem out of it by way of the Compactness Theorem. (If you're an ambitious minimalist, you can try to do this using the Compactness Theorem for propositional logic instead!)

Elementary equivalence and non-standard models. One of the common uses for the Compactness Theorem is to construct "non-standard" models of the theories satisfied by various standard mathematical structures. Such a model satisfies all the same first-order sentences as the standard model, but differs from it in some way not expressible in the first-order language in question. This brings home one of the intrinsic limitations of first-order logic: it can't always tell essentially different structures apart. Of course, we need to define just what constitutes essential difference.

Definition 9.2. Suppose L is a first-order language and N and M are two structures for L. Then N and M are:

1. isomorphic, written as N ≅ M, if there is a function F : |N| → |M| such that
(a) F is one-to-one and onto,
(b) F(c^N) = c^M for every constant symbol c of L,
(c) F(f^N(a1, . . . , ak)) = f^M(F(a1), . . . , F(ak)) for every k-place function symbol f of L and elements a1, . . . , ak ∈ |N|, and
(d) P^N(a1, . . . , ak) holds if and only if P^M(F(a1), . . . , F(ak)) for every k-place relation symbol P of L and elements a1, . . . , ak of |N|;
and
2. elementarily equivalent, written as N ≡ M, if Th(N) = Th(M), i.e. if N |= σ if and only if M |= σ for every sentence σ of L.

That is, two structures for a given language are isomorphic if they

are structurally identical and elementarily equivalent if no statement

in the language can distinguish between them. Isomorphic structures

are elementarily equivalent:

Proposition 9.4. Suppose L is a first-order language and N and M are structures for L such that N ≅ M. Then N ≡ M.
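A small concrete illustration of Definition 9.2(1) (the structures and the map F below are my own example, not from the text): the cyclic group Z/4Z under addition mod 4 is isomorphic to the fourth roots of unity under multiplication via F(k) = i^k.

```python
# Domains of the two structures
N_dom = [0, 1, 2, 3]              # Z/4Z
M_dom = [1, 1j, -1, -1j]          # fourth roots of unity: i^0, i^1, i^2, i^3
F = dict(zip(N_dom, M_dom))       # F(k) = i^k

op_N = lambda a, b: (a + b) % 4   # addition mod 4
op_M = lambda a, b: a * b         # multiplication (exact on these complex units)

# F is one-to-one and onto...
assert sorted(F) == N_dom and len(set(F.values())) == len(M_dom)
# ...and respects the operation, as required by condition (c)
assert all(F[op_N(a, b)] == op_M(F[a], F[b]) for a in N_dom for b in N_dom)
# It also sends identity to identity, as required by condition (b)
assert F[0] == 1
```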

However, as the following application of the Compactness Theorem

shows, elementarily equivalent structures need not be isomorphic:

Example 9.2. Note that C = (N) is an infinite structure for L=. Expand L= to LR by adding a constant symbol cr for every real number r, and let Σ be the set of sentences of LR including

• every sentence τ of Th(C), i.e. such that C |= τ, and

• ¬cr = cs for every pair of real numbers r and s such that r ≠ s.

Every finite subset of Σ is satisfiable. (Why?) Thus, by the Compactness Theorem, there is a structure U for LR satisfying Σ, and hence Th(C). The structure U′ obtained by dropping the interpretations of all the constant symbols cr from U is then a structure for L= which satisfies Th(C). Note that |U| = |U′| is at least as large as the set of all real numbers R, since U requires a distinct element of the universe to interpret each constant symbol cr of LR.

Since Th(C) is a maximally consistent set of sentences of L= by Problem 8.6, it follows from the above that C ≡ U′. On the other hand, C cannot be isomorphic to U′ because there cannot be an onto map between a countable set, such as N = |C|, and a set which is at least as large as R, such as |U′|.

In general, the method used above can be used to show that if a set of sentences in a first-order language has an infinite model, it has many different ones. In L= that is essentially all that can happen:

Proposition 9.5. Two structures for L= are elementarily equivalent if and only if they are isomorphic or both infinite.


Problem 9.6. Let N = (N, 0, 1, S, +, ·, E) be the standard structure for LN. Use the Compactness Theorem to show there is a structure M for LN such that N ≡ M but not N ≅ M.

Note that because N and M both satisfy Th(N), which is maximally

consistent by Problem 8.6, there is absolutely no way of telling them

apart in LN .

Proposition 9.7. Every model of Th(N) which is not isomorphic to N has

1. an isomorphic copy of N embedded in it,
2. an infinite number, i.e. one larger than all of those in the copy of N, and
3. an infinite decreasing sequence.

The apparent limitation of first-order logic that non-isomorphic structures may be elementarily equivalent can actually be useful. A non-standard model may have features that make it easier to work with than the standard model one is really interested in. Since both structures satisfy exactly the same sentences, if one uses these features to prove that some statement expressible in the given first-order language is true about the non-standard structure, one gets for free that it must be true of the standard structure as well. A prime example of this idea is the use of non-standard models of the real numbers containing infinitesimals (numbers which are infinitely small but different from zero) in some areas of analysis.

Theorem 9.8. Let R = (R, 0, 1, +, ·) be the field of real numbers, considered as a structure for LF. Then there is a model of Th(R) which contains a copy of R and in which there is an infinitesimal.
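The compactness argument behind Theorem 9.8 can be sketched along the lines of Example 9.2. (The new constant symbol c and the abbreviation n̄ for the term 1 + · · · + 1 with n ones are notation introduced here for the sketch; since LF has no order symbol, positivity is expressed via nonzero squares.)

```latex
% Expand L_F by a new constant symbol c and let
\Sigma = \operatorname{Th}(\mathfrak{R})
  \cup \{\, \exists y\,(y \cdot y = c \wedge \neg\, y = 0) \,\}
        % in a model of Th(R), this makes c a nonzero square, i.e. positive
  \cup \{\, \exists y\,(c \cdot \bar{n} + y \cdot y = 1 \wedge \neg\, y = 0)
        \mid n \geq 1 \,\}
        % each such sentence says c \cdot n < 1, i.e. c < 1/n
% A finite subset of \Sigma mentions only finitely many n, so it is satisfied
% by \mathfrak{R} itself with c interpreted as a small enough positive real.
% By the Compactness Theorem, \Sigma has a model: there c names an element
% which is positive yet less than 1/n for every n, i.e. an infinitesimal.
```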

The non-standard models of the real numbers actually used in analysis are usually obtained in more sophisticated ways in order to have more information about their internal structure. It is interesting to note that infinitesimals were the intuition behind calculus for Leibniz when it was first invented, but no one was able to put their use on a rigorous footing until Abraham Robinson did so in the early 1960s.


Hints

CHAPTER 1

Hints

1.1. Symbols not in the language, unbalanced parentheses, lack of

connectives . . .

1.2. Proceed by induction on the length of the formula or on the

number of connectives in the formula.

1.3. Compute p(α)/ℓ(α) for a number of examples and look for patterns. Getting a minimum value should be pretty easy.

1.4. Proceed by induction on the length of the formula or on the number of connectives in the formula.

1.5. Construct examples of formulas of all the short lengths that

you can, and then see how you can make longer formulas out of short

ones.

1.6. Hewlett-Packard sells calculators that use such a trick. A similar one is used in Definition 5.2.

1.7. Observe that LP has countably many symbols and that every formula is a finite sequence of symbols. The relevant facts from set theory are given in Appendix A.

1.8. Stick several simple statements together with suitable connec-

tives.

1.9. This should be straightforward.

1.10. Ditto.

1.11. To make sure you get all the subformulas, write out the formula in official form with all the parentheses.

1.12. Proceed by induction on the length or number of connectives

of the formula.



CHAPTER 2

Hints

2.1. Use truth tables.

2.2. Proceed by induction on the length of δ or on the number of

connectives in δ.

2.3. Use Proposition 2.2.

2.4. In each case, unwind Definition 2.1 and the definitions of the abbreviations.

2.5. Use truth tables.

2.6. Use Definition 2.3 and Proposition 2.4.

2.7. If a truth assignment satisfies every formula in Σ and every formula in Γ is also in Σ, then . . .

2.8. Grinding out an appropriate truth table will do the job. Why is it important that Σ be finite here?

2.9. Use Definition 2.4 and Proposition 2.4.

2.10. Use Definitions 2.3 and 2.4. If you have trouble trying to prove one of the two directions directly, try proving its contrapositive instead.



CHAPTER 3

Hints

3.1. Truth tables are probably the best way to do this.

3.2. Look up Proposition 2.4.

3.3. There are usually many different deductions with a given conclusion, so you shouldn't take the following hints as gospel.

1. Use A2 and A1.
2. Recall what ∨ abbreviates.

3.4. You need to check that ϕ1 . . . ϕℓ satisfies the three conditions of Definition 3.3; you know ϕ1 . . . ϕn does.

3.5. Put together a deduction of β from Γ from the deductions of δ and δ → β from Γ.

3.6. Examine Definition 3.3 carefully.

3.7. The key idea is similar to that for proving Proposition 3.5.

3.8. One direction follows from Proposition 3.5. For the other direction, proceed by induction on the length of the shortest proof of β from Σ ∪ {α}.

3.9. Again, don't take these hints as gospel. Try using the Deduction Theorem in each case, plus

1. A3.

2. A3 and Problem 3.3.

3. A3.

4. A3, Problem 3.3, and Example 3.2.

5. Some of the above parts and Problem 3.3.

6. Ditto.

7. Use the definition of ∨ and one of the above parts.
8. Use the definition of ∧ and one of the above parts.
9. Aim for ¬α → (α → ¬β) as an intermediate step.