\[
\sum_{j=1}^{n} a_{ij}x_j \le c_i, \quad i = 1, \ldots, m, \qquad x_j \ge 0, \quad j = 1, \ldots, n.
\]

This is a linear programming problem in standard form.

For a brief account of the origin and early days of linear programming by a pioneer in the field, see Dantzig [1963]. Some early applications are also discussed there.

In this section we shall derive a necessary and sufficient condition that a point $x_*$ be a solution to a linear programming problem. In practice the use of this condition to solve the problem is precluded by the dimension of the space $\mathbb{R}^n$. Instead, a direct method, the simplex method, is used. The simplex method will be discussed in Chapter VI. Nevertheless, we derive the necessary conditions for linear programming problems to illustrate the simplest case of the use of Farkas's theorem (and hence separation theorems) to derive necessary conditions in constrained optimization problems. The basic idea in this proof is a theme that runs throughout much of optimization theory.

The linear programming problem in standard form (SLP) is given as

\[
\text{Minimize } \langle -b, x\rangle \quad \text{subject to } Ax \le c,\ x \ge 0,
\]

where $x \in \mathbb{R}^n$, $A$ is an $m \times n$ matrix, $b \ne 0$ is a vector in $\mathbb{R}^n$, and $c$ is a vector in $\mathbb{R}^m$.
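As a concrete illustration, a small SLP instance can be solved numerically. This is a sketch only, not part of the text; it assumes NumPy and SciPy are available, and the data ($A$, $b$, $c$) are invented for the example.

```python
# Solving a tiny standard-form (SLP) instance
#   minimize <-b, x>  subject to  Ax <= c,  x >= 0,
# with scipy.optimize.linprog (an assumed external tool, not the book's method).
import numpy as np
from scipy.optimize import linprog

b = np.array([1.0, 1.0])          # objective is <-b, x> = -(x1 + x2)
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
c = np.array([4.0, 6.0])

# bounds=(0, None) per variable encodes the constraint x >= 0.
res = linprog(-b, A_ub=A, b_ub=c, bounds=[(0, None)] * 2)
print(res.x, res.fun)             # minimizer x_* and optimal value <-b, x_*>
```

For this invented data the two inequality constraints intersect at $x_* = (1.6, 1.2)$, giving the optimal value $\langle -b, x_*\rangle = -2.8$.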

The linear programming problem SLP can also be written as follows:

\[
\text{Minimize } \langle -b, x\rangle \quad \text{subject to} \quad
\begin{pmatrix} A \\ -I \end{pmatrix} x \le \begin{pmatrix} c \\ 0 \end{pmatrix},
\tag{1}
\]

LINEAR PROGRAMMING PROBLEMS 141

where $I$ is the $n \times n$ identity matrix and $0$ is the zero vector in $\mathbb{R}^n$. If we denote the matrix on the left in (1) by $\tilde{A}$ and the vector on the right by $\tilde{c}$ and then drop the tildes, we can state the linear programming problem (LPI) as

\[
\text{minimize } \langle -b, x\rangle \quad \text{subject to } Ax \le c,
\]

where $A$ is an $m \times n$ matrix, $b \in \mathbb{R}^n$, and $c \in \mathbb{R}^m$. Here, we have incorporated the constraint $x \ge 0$ in the matrix $A$ and the vector $c$.
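The stacking in (1) is mechanical; a minimal sketch with invented data (variable names are mine, not the book's):

```python
# Folding the constraint x >= 0 of SLP into the matrix and right-hand side,
# producing the LPI form  A_tilde x <= c_tilde.
import numpy as np

m, n = 2, 3
A = np.arange(6, dtype=float).reshape(m, n)   # any m x n matrix (invented)
c = np.array([10.0, 20.0])

A_tilde = np.vstack([A, -np.eye(n)])          # stack A over -I
c_tilde = np.concatenate([c, np.zeros(n)])    # stack c over the zero vector

# A point x satisfies A_tilde x <= c_tilde exactly when Ax <= c and x >= 0:
x = np.array([1.0, 0.5, 0.0])
assert np.all(A_tilde @ x <= c_tilde) == (np.all(A @ x <= c) and np.all(x >= 0))
```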

Theorem 4.1. Let $x_*$ be a solution of LPI. Then there exists a $\lambda \ne 0$ in $\mathbb{R}^m$ such that $(x_*, \lambda)$ satisfies the following:

\[
\begin{aligned}
&\text{(i)}\ \ Ax_* \le c, \qquad &&\text{(ii)}\ \ A^{T}\lambda = b,\\
&\text{(iii)}\ \ \lambda \ge 0, \qquad &&\text{(iv)}\ \ \langle Ax_* - c, \lambda\rangle = 0,\\
&\text{(v)}\ \ \langle b, x_*\rangle = \langle c, \lambda\rangle.
\end{aligned}
\tag{2}
\]

If there exist an $x_* \in \mathbb{R}^n$ and a $\lambda \ne 0$ in $\mathbb{R}^m$ that satisfy (i)–(iv), then $x_*$ is a solution of LPI.

Proof. The feasible set for LPI is $X = \{x : Ax \le c\}$. Since $x_*$ is a solution to LPI, $x_* \in X$, and so (i) holds.

We now prove (ii) and (iii). If $x_*$ in $X$ is a solution to LPI, then $\langle -b, x_*\rangle \le \langle -b, x\rangle$ for all $x$ in $X$. Hence $\langle b, x\rangle \le \langle b, x_*\rangle$ for all $x$ in $X$. If we set $\alpha = \langle b, x_*\rangle$, then we have that $X$ is contained in the closed negative half space determined by the hyperplane $H_{b}^{\alpha}$. By Exercise II.2.16, for $x$ in $\operatorname{int}(X)$, we have $\langle b, x\rangle < \alpha$, and so $x_*$ cannot be an interior point of $X$.

Let $a_i$ denote the $i$th row of the matrix $A$. Let

\[
E = \{i : \langle a_i, x_*\rangle = c_i\}, \qquad I = \{i : \langle a_i, x_*\rangle < c_i\}.
\]

Since $x_*$ is not an interior point of $X$, the set $E \ne \emptyset$. Let $A_E$ denote the submatrix of $A$ consisting of those rows $a_i$ with $i$ in $E$, and let $A_I$ denote the submatrix of $A$ consisting of those rows $a_i$ with $i$ in $I$.

We claim that the system

\[
A_E x \le 0, \qquad \langle b, x\rangle > 0,
\tag{3}
\]

has no solution $x$ in $\mathbb{R}^n$.

Intuitively this is clear, for if $x$ were a solution, then so would be $tx$ for $t > 0$, and we would have $A_E(x_* + tx) \le c_E$. Also, for $t$ sufficiently small, by continuity, we would have $A_I(x_* + tx) < c_I$. Thus, if (3) had a solution, we would be able to move into the feasible set and at the same time increase $\langle b, x\rangle$, or decrease $\langle -b, x\rangle$. This would contradict the optimality of $x_*$.

142 OPTIMIZATION PROBLEMS

We now make this argument precise. Again, if $x$ were a solution of (3), then $tx$ would be a solution of (3) for all $t > 0$, and

\[
\langle -b, x_* + tx\rangle = \langle -b, x_*\rangle - t\langle b, x\rangle < \langle -b, x_*\rangle
\tag{4}
\]

for all $t > 0$. Also,

\[
A_E(x_* + tx) = c_E + tA_E x \le c_E, \qquad t > 0,
\tag{5}
\]

and

\[
A_I(x_* + tx) = A_I x_* + tA_I x = (c_I - p) + tA_I x,
\]

where $p > 0$. (Recall that $p > 0$ means that all components of $p$ are strictly positive.) By taking $t > 0$ to be sufficiently small, we can make $tA_I x - p < 0$. Hence for $t > 0$ and sufficiently small

\[
A_I(x_* + tx) = c_I + (tA_I x - p) < c_I.
\tag{6}
\]

In summary, from (5) and (6), we get that for sufficiently small $t > 0$ the vectors $x_* + tx$ are in $X$. From this and from (4), we get that $x_*$ is not a solution to the linear programming problem. This contradiction shows that $x$ cannot be a solution of (3).

Since the system (3) has no solution, it follows from Theorem II.5.1 (Farkas's lemma) that there exists a vector $y$ in $\mathbb{R}^{m_E}$, where $m_E$ is the row dimension of $A_E$, such that

\[
A_E^{T} y = b, \qquad y \ge 0.
\]
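As an aside, when such a $y$ exists it can be computed numerically: finding $y \ge 0$ with $A_E^T y = b$ is a nonnegative least-squares problem with zero residual. A sketch with invented data, using `scipy.optimize.nnls` (an assumed external tool):

```python
# Finding y >= 0 with A_E^T y = b via nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

A_E = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])    # active rows at x_* (invented)
b   = np.array([2.0, 3.0])

# nnls solves  min ||A_E^T y - b||  over y >= 0; zero residual means
# the Farkas alternative A_E^T y = b, y >= 0 is solvable.
y, residual = nnls(A_E.T, b)
print(y, residual)
```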

Therefore, if $m_I$ is the row dimension of $A_I$, then

\[
A_E^{T} y + A_I^{T} \cdot 0 = b,
\]

where $0$ is the zero vector in $\mathbb{R}^{m_I}$. If we now take $\lambda = (y, 0)$, then $\lambda$ is in $\mathbb{R}^m$ and satisfies

\[
A^{T}\lambda = (A_E^{T}, A_I^{T})\begin{pmatrix} y \\ 0 \end{pmatrix} = b, \qquad \lambda \ge 0.
\tag{7}
\]

Thus, (ii) and (iii) are established.

From (7) we see that those components of $\lambda$ corresponding to indices in $I$ are zero. By the definition of $E$, $A_E x_* - c_E = 0$. Hence

\[
\langle Ax_* - c, \lambda\rangle = \langle (A_E x_* - c_E,\ A_I x_* - c_I), (y, 0)\rangle = 0,
\tag{8}
\]

which is (iv).


From (iv) and (ii) we get

\[
\langle c, \lambda\rangle = \langle Ax_*, \lambda\rangle = \langle x_*, A^{T}\lambda\rangle = \langle x_*, b\rangle,
\]

which is (v).

The proof of sufficiency is elementary and does not involve separation theorems. Let $x$ be any feasible point. Then

\[
\langle -b, x\rangle - \langle -b, x_*\rangle = \langle -A^{T}\lambda, x\rangle + \langle c, \lambda\rangle
= \langle \lambda, -Ax + c\rangle = -\langle Ax - c, \lambda\rangle \ge 0.
\]

Hence $x_*$ is a solution to LPI.
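The five conditions of Theorem 4.1 are straightforward to verify numerically. A minimal sketch on an instance invented here (maximizing $x_1 + x_2$ over the unit square, written in LPI form with $x \ge 0$ folded into the matrix), where the multiplier $\lambda$ can be read off by hand:

```python
# Checking conditions (i)-(v) of Theorem 4.1 on a tiny invented LPI instance:
# minimize <-b, x> subject to Ax <= c.
import numpy as np

b = np.array([1.0, 1.0])
A = np.array([[ 1.0,  0.0],     #  x1 <= 1
              [ 0.0,  1.0],     #  x2 <= 1
              [-1.0,  0.0],     # -x1 <= 0
              [ 0.0, -1.0]])    # -x2 <= 0
c = np.array([1.0, 1.0, 0.0, 0.0])

x_star = np.array([1.0, 1.0])   # the optimal corner of the square
lam    = np.array([1.0, 1.0, 0.0, 0.0])

assert np.all(A @ x_star <= c)                    # (i)   feasibility
assert np.allclose(A.T @ lam, b)                  # (ii)  A^T lambda = b
assert np.all(lam >= 0)                           # (iii) lambda >= 0
assert np.isclose((A @ x_star - c) @ lam, 0.0)    # (iv)  complementary slackness
assert np.isclose(b @ x_star, c @ lam)            # (v)   equal objective values
```

By the sufficiency half of the theorem, passing these checks certifies that $x_*$ is optimal, with no search required.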

Remark 4.1. An alternate proof of (iv) and (v) is to establish (v) first and then use (v) to establish (iv). Thus

\[
\langle b, x_*\rangle = \langle A^{T}\lambda, x_*\rangle = \langle \lambda, Ax_*\rangle
= \left\langle (y, 0), \begin{pmatrix} A_E x_* \\ A_I x_* \end{pmatrix}\right\rangle
= \langle y, A_E x_*\rangle = \langle y, c_E\rangle = \langle (y, 0), (c_E, c_I)\rangle
= \langle \lambda, c\rangle,
\]

which is (v). To establish (iv), we write

\[
0 = \langle b, x_*\rangle - \langle c, \lambda\rangle = \langle A^{T}\lambda, x_*\rangle - \langle c, \lambda\rangle
= \langle \lambda, Ax_*\rangle - \langle c, \lambda\rangle = \langle Ax_* - c, \lambda\rangle.
\]

Remark 4.2. It follows from (i) and (iii) that (iv) holds if and only if for each $i = 1, \ldots, m$

\[
\lambda_i(\langle a_i, x_*\rangle - c_i) = 0.
\tag{iv$'$}
\]

That is, (iv) is equivalent to (iv$'$). The condition (iv$'$) is sometimes called the complementary slackness condition. It states that if there is "slack" in the $i$th constraint at $x_*$, that is, if $\langle a_i, x_*\rangle < c_i$, then $\lambda_i = 0$.
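The equivalence of (iv) and (iv$'$) can be seen componentwise; a short sketch with invented data:

```python
# Componentwise complementary slackness (iv'): lambda_i * (<a_i, x_*> - c_i) = 0
# for each i, which sums to the single inner-product condition (iv).
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
c = np.array([1.0, 1.0, 3.0])        # third constraint has slack at x_*
x_star = np.array([1.0, 1.0])
lam    = np.array([0.5, 0.5, 0.0])   # lambda_3 = 0 where there is slack

slack = A @ x_star - c               # <a_i, x_*> - c_i, nonpositive when feasible
assert np.allclose(lam * slack, 0.0) # (iv') holds index by index
assert np.isclose(slack @ lam, 0.0)  # hence (iv) holds as well
```

Since each term $\lambda_i(\langle a_i, x_*\rangle - c_i)$ is nonpositive when (i) and (iii) hold, the sum can vanish only if every term vanishes, which is exactly the equivalence asserted in the remark.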

Remark 4.3. Let the linear programming problem be reformulated with $X$ taken to be an arbitrary open set instead of $\mathbb{R}^n$. It is clear from the proof that if $x_*$ is a solution to this problem, then the necessary conditions hold. Conversely, if there is a point $x_*$ in $X$ at which the necessary conditions hold, then $x_*$ is feasible and the minimum is achieved at $x_*$.


The inequality constraints $Ax \le c$ can be converted to equality constraints by introducing a nonnegative vector $\xi$ whose components are called slack variables and writing

\[
Ax + \xi = c, \qquad \xi \ge 0.
\]

Thus the linear programming problem formulated in (1) can be written as

\[
\text{Minimize } \langle (-b, 0), (x, \xi)\rangle \quad \text{subject to } (A, I)\begin{pmatrix} x \\ \xi \end{pmatrix} = c,\ x \ge 0,\ \xi \ge 0.
\]

That is, the problem LPI can be written as a canonical linear programming (CLP) problem:

\[
\text{Minimize } \langle -b, x\rangle \quad \text{subject to } Ax = c,\ x \ge 0.
\]

The simplex method is applied to problems in canonical form.
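The slack-variable conversion is a simple block construction; a sketch with invented data (names are mine):

```python
# Converting LPI data to canonical (CLP) form by appending slack variables.
import numpy as np

m, n = 2, 3
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])
c = np.array([4.0, 3.0])

A_can    = np.hstack([A, np.eye(m)])            # (A, I), an m x (n + m) matrix
cost_can = np.concatenate([-b, np.zeros(m)])    # objective <(-b, 0), (x, xi)>

# Any feasible x for the inequality problem lifts to (x, xi) with xi = c - Ax >= 0:
x  = np.array([1.0, 1.0, 0.5])
xi = c - A @ x
z  = np.concatenate([x, xi])
assert np.all(xi >= 0) and np.allclose(A_can @ z, c)
```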

Conversely, the problem CLP can be written as a problem LPI by writing the equality constraint $Ax = c$ as two inequality constraints, $Ax \le c$ and $-Ax \le -c$, and then incorporating the constraint $x \ge 0$ in the matrix. That is, we write CLP as

\[
\text{Minimize } \langle -b, x\rangle \quad \text{subject to } \begin{pmatrix} A \\ -A \\ -I \end{pmatrix} x \le \begin{pmatrix} c \\ -c \\ 0 \end{pmatrix}.
\]
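The reverse conversion is again pure stacking; a sketch with invented data:

```python
# Writing CLP data (Ax = c, x >= 0) in LPI inequality form by stacking A, -A, -I.
import numpy as np

A = np.array([[1.0, 1.0]])
c = np.array([2.0])
n = A.shape[1]

A_lpi = np.vstack([A, -A, -np.eye(n)])        # rows: A, -A, -I
c_lpi = np.concatenate([c, -c, np.zeros(n)])  # rhs:  c, -c,  0

# x is feasible for CLP exactly when A_lpi x <= c_lpi:
x = np.array([0.5, 1.5])
assert np.all(A_lpi @ x <= c_lpi + 1e-12) == (np.allclose(A @ x, c) and np.all(x >= 0))
```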

Remark 4.4. We warn the reader that the definitions "standard form" and "canonical form" are not uniform throughout the linear programming literature. Problems that we have designated as being in canonical form are designated as being in standard form by some authors, who use the designation "canonical form" differently from ours. In consulting the literature the reader should always check the definitions used.

Exercise 4.1. Prove the following corollaries to Theorem 4.1.

Corollary 1. Let $x_*$ be a solution of the standard linear programming problem