the dual problem. Then \(Ax^* \le c\) and \(\psi^* \ge 0\). Hence

$$\langle Ax^* - c, \psi^* \rangle \le 0. \tag{3}$$

Since \(\psi^*\) is feasible for the dual problem, \(A^T\psi^* \ge b\). Since \(x^* \ge 0\), we have

$$\langle A^T\psi^* - b, x^* \rangle \ge 0. \tag{4}$$

By Corollary 1, \(\langle b, x^* \rangle = \langle c, \psi^* \rangle\), so \(\langle A^T\psi^*, x^* \rangle - \langle c, \psi^* \rangle \ge 0\). Therefore \(\langle Ax^* - c, \psi^* \rangle \ge 0\). Comparing this with (3) gives (i). Also,

$$\langle Ax^* - c, \psi^* \rangle = \langle Ax^*, \psi^* \rangle - \langle c, \psi^* \rangle = \langle A^T\psi^* - b, x^* \rangle;$$

and so from (3) we get that \(\langle A^T\psi^* - b, x^* \rangle \le 0\). This inequality and (4) give (ii).

To prove the sufficiency, note that (i) and (ii) give

$$\langle c, \psi^* \rangle = \langle Ax^*, \psi^* \rangle = \langle A^T\psi^*, x^* \rangle = \langle b, x^* \rangle.$$

DUALITY IN LINEAR PROGRAMMING 219

It then follows from the equality of the outermost quantities and Corollary 1
that \(x^*\) is a solution of the primal problem and that \(\psi^*\) is a solution of the dual.
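Conditions (i) and (ii) are easy to check numerically. The sketch below verifies them on a small made-up instance of the primal-dual pair of this section (maximize \(\langle b, x\rangle\) subject to \(Ax \le c\), \(x \ge 0\), versus minimize \(\langle c, \psi\rangle\) subject to \(A^T\psi \ge b\), \(\psi \ge 0\)); the data \(A\), \(b\), \(c\) and the optimal pair are illustrative choices, not taken from the text.

```python
# Primal: maximize <b, x> subject to Ax <= c, x >= 0.
# Dual:   minimize <c, psi> subject to A^T psi >= b, psi >= 0.
# Illustrative data: one constraint, two variables (not from the text).
A = [[1.0, 1.0]]
c = [1.0]
b = [1.0, 2.0]

x_star = [0.0, 1.0]    # optimal primal point, found by inspection
psi_star = [2.0]       # optimal dual point, found by inspection

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

Ax = [dot(row, x_star) for row in A]                         # A x*
ATpsi = [sum(A[i][j] * psi_star[i] for i in range(len(A)))   # A^T psi*
         for j in range(len(b))]

slack_i = dot([Ax[k] - c[k] for k in range(len(c))], psi_star)    # <Ax*-c, psi*>
slack_ii = dot([ATpsi[j] - b[j] for j in range(len(b))], x_star)  # <A^T psi*-b, x*>

print(slack_i, slack_ii)                 # 0.0 0.0 -- conditions (i) and (ii)
print(dot(b, x_star), dot(c, psi_star))  # 2.0 2.0 -- equal objective values
```

Both slack products vanish, and the objective values agree, exactly as the theorem requires of an optimal pair.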

The next theorem is a stronger version of the strong duality statement (iv)
of Theorem 7.1 in that we obtain strong duality by merely assuming the
feasible sets X and Y to be nonempty.

THEOREM 7.2. The feasible sets X and Y for the primal and dual problems
respectively are nonempty if and only if there exists a solution \(x^*\) to the primal
problem and a solution \(\psi^*\) to the dual problem. In this event

$$\langle c, \psi^* \rangle = \langle b, x^* \rangle. \tag{5}$$

Proof. If the primal and dual problems have solutions, then the sets X and

Y are clearly nonempty.

To show that \(X \ne \emptyset\) and \(Y \ne \emptyset\) imply that both the dual and primal
problems have solutions and that (5) holds, it suffices to show that the primal
problem has a solution. For then by (iv) of Theorem 7.1, the dual problem has
a solution and (5) holds.

In Remark IV.4.1 we noted that relation (2iv) of Theorem IV.4.1 is a
consequence of (i), (ii), (iii), and (v). Thus in Corollary IV.4.3 the same
observation holds. Hence a necessary and sufficient condition for a vector \(x^*\)
to be a solution of P is that there exist a vector \(\psi^*\) in \(\mathbb{R}^m\) such that

$$\begin{pmatrix} A \\ -I_n \end{pmatrix} x^* \le \begin{pmatrix} c \\ 0_n \end{pmatrix}, \qquad
\begin{pmatrix} A^T \\ I_m \end{pmatrix} \psi^* \ge \begin{pmatrix} b \\ 0_m \end{pmatrix}, \qquad
\langle b, x^* \rangle = \langle c, \psi^* \rangle. \tag{6}$$

Since the existence of vectors \(x\) in \(\mathbb{R}^n\) and \(\psi\) in \(\mathbb{R}^m\) that satisfy (6) is a
necessary and sufficient condition that \(x\) be a solution of P, it follows that if
P has no solution, then the system

$$\begin{aligned}
&\text{(i)} \quad \begin{pmatrix} A \\ -I_n \end{pmatrix} x \le \begin{pmatrix} c \\ 0_n \end{pmatrix}, \\
&\text{(ii)} \quad \begin{pmatrix} -A^T \\ -I_m \end{pmatrix} u \le \begin{pmatrix} -b \\ 0_m \end{pmatrix}, \\
&\text{(iii)} \quad \langle -b, x \rangle + \langle c, u \rangle = 0
\end{aligned} \tag{7}$$

has no solution \((x, u)\), \(x \in \mathbb{R}^n\), \(u \in \mathbb{R}^m\). Condition (7i) is the condition that \(x\) be

feasible for P and condition (7ii) is the condition that \(u\) be feasible for the
dual problem. By assumption, there exist feasible vectors \(x \in X\) and \(u \in Y\). By (i)
of Theorem 7.1, for such vectors we have \(\langle -b, x \rangle \ge \langle -c, u \rangle\), and thus
\(\langle -b, x \rangle + \langle c, u \rangle \ge 0\). Hence for a vector \((x, u)\) that satisfies (7i) and (7ii) to

220 CONVEX PROGRAMMING AND DUALITY

fail to satisfy (7iii), it must fail to satisfy

$$\langle -b, x \rangle + \langle c, u \rangle \le 0, \tag{7iii′}$$

so we may replace (7iii) by (7iii′) in (7).
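The sign fact used here, that every pair satisfying (7i) and (7ii) has \(\langle -b, x\rangle + \langle c, u\rangle \ge 0\), is just weak duality, and it can be spot-checked numerically. The data below are made up for illustration; they are not the \(A\), \(b\), \(c\) of the text.

```python
# Weak duality check: if x satisfies (7i) (Ax <= c, x >= 0) and u satisfies
# (7ii) (A^T u >= b, u >= 0), then <b, x> <= <c, u>, i.e. <-b, x> + <c, u> >= 0.
# Illustrative data, not from the text.
A = [[1.0, 1.0]]
c = [1.0]
b = [1.0, 2.0]

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

feasible_x = [[0.0, 0.0], [0.5, 0.5], [1.0, 0.0], [0.25, 0.75]]
feasible_u = [[2.0], [3.0], [10.0]]

for x in feasible_x:
    assert all(dot(row, x) <= ci for row, ci in zip(A, c))   # (7i): Ax <= c
    for u in feasible_u:
        # (7ii): A^T u >= b, componentwise
        assert all(sum(A[i][j] * u[i] for i in range(len(A))) >= b[j]
                   for j in range(len(b)))
        gap = dot([-bj for bj in b], x) + dot(c, u)
        assert gap >= 0.0                                    # weak duality
print("weak duality holds on all sampled pairs")
```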

Now consider the system

$$\lambda > 0, \quad \lambda \in \mathbb{R},$$
$$\begin{pmatrix} A \\ -I_n \end{pmatrix} x \le \lambda \begin{pmatrix} c \\ 0_n \end{pmatrix}, \qquad
\begin{pmatrix} -A^T \\ -I_m \end{pmatrix} u \le \lambda \begin{pmatrix} -b \\ 0_m \end{pmatrix}, \tag{8}$$
$$\langle -b, x \rangle + \langle c, u \rangle \le 0.$$

If (7) has a solution \((x_0, u_0)\), then (8) has a solution \((\lambda, x, u) = (1, x_0, u_0)\).
Conversely, if (8) has a solution \((\lambda_0, x_0, u_0)\), then \((x_0/\lambda_0, u_0/\lambda_0)\) is a
solution of (7). Therefore (7) has no solution if and only if (8) has no solution.

System (8) can be written as

$$\langle (0_n, 0_m, 1), (x, u, \lambda) \rangle > 0,$$
$$\begin{pmatrix} A & O & -c \\ -I_n & O & 0_n \\ O & -A^T & b \\ O & -I_m & 0_m \\ -b^T & c^T & 0 \end{pmatrix}
\begin{pmatrix} x \\ u \\ \lambda \end{pmatrix} \le 0_{2n+2m+1}, \tag{9}$$

where O represents a zero matrix of appropriate dimension and \(0_k\) is the zero
vector in \(\mathbb{R}^k\). Since this system has no solution, it follows from Theorem II.5.1
(Farkas's lemma) that there exists a vector \((p_1, p_2, q_1, q_2, \mu) \ge 0_{2n+2m+1}\) with
\(p_1 \in \mathbb{R}^m\), \(p_2 \in \mathbb{R}^n\), \(q_1 \in \mathbb{R}^n\), \(q_2 \in \mathbb{R}^m\), and \(\mu \in \mathbb{R}\) such that

$$\begin{pmatrix} A^T & -I_n & O & O & -b \\ O & O & -A & -I_m & c \\ -c^T & 0_n^T & b^T & 0_m^T & 0 \end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \\ q_1 \\ q_2 \\ \mu \end{pmatrix}
= \begin{pmatrix} 0_n \\ 0_m \\ 1 \end{pmatrix}.$$

Thus

$$\begin{aligned}
A^T p_1 - p_2 - \mu b &= 0_n, \\
-A q_1 - q_2 + \mu c &= 0_m, \\
\langle -p_1, c \rangle + \langle q_1, b \rangle &= 1.
\end{aligned} \tag{10}$$
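Farkas's lemma, as used to pass from (9) to (10), asserts that the system \(Mz \le 0\), \(\langle a, z\rangle > 0\) has no solution exactly when \(a = M^T y\) for some \(y \ge 0\). A toy instance with an explicit certificate \(y\) (the matrix below is made up for illustration; it is not the system (9) itself):

```python
# Farkas certificate sketch (illustrative data, not the system (9) itself).
# M z <= 0 forces z1 <= 0, z2 <= 0 and z1 + z2 >= 0, hence z = 0, so
# <a, z> > 0 is impossible; Farkas then guarantees some y >= 0 with M^T y = a.
M = [[1, 0],
     [0, 1],
     [-1, -1]]
a = [1, 1]

y = [1, 1, 0]   # the certificate, found by inspection; note y >= 0

# Verify M^T y = a.
MTy = [sum(M[i][j] * y[i] for i in range(len(M))) for j in range(len(a))]
print(MTy)   # [1, 1]
```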


We shall show that this system cannot have a solution \((p_1, p_2, q_1, q_2, \mu) \ge 0_{2n+2m+1}\). This contradiction will prove that the primal problem has a
solution, since we arrived at (10) by assuming that it did not.

Let \(x_0 \in X\) and let \(y_0 \in Y\). Such points exist since \(X \ne \emptyset\) and \(Y \ne \emptyset\).
Suppose first that \(\mu = 0\). Then \(A^T p_1 = p_2\) and \(A q_1 = -q_2\). Since \(x_0 \in X\), we
have \(Ax_0 \le c\). Thus, since \(p_1 \ge 0\),

$$\langle -p_1, c \rangle \le \langle -p_1, Ax_0 \rangle = -\langle A^T p_1, x_0 \rangle = -\langle p_2, x_0 \rangle \le 0, \tag{11}$$

where the last inequality follows from \(p_2 \ge 0\), \(x_0 \ge 0\). Similarly,

$$\langle q_1, b \rangle \le \langle q_1, A^T y_0 \rangle = \langle A q_1, y_0 \rangle = -\langle q_2, y_0 \rangle \le 0. \tag{12}$$

From (11) and (12) we get

$$\langle -p_1, c \rangle + \langle q_1, b \rangle \le 0, \tag{13}$$

which contradicts the last equation in (10).

Suppose now that \(\mu > 0\). The first equation in (10) implies that \(p_1/\mu\) is
feasible for the dual problem. The second equation in (10) implies that \(q_1/\mu\) is
feasible for the primal problem. By weak duality, \(\langle -b, q_1/\mu \rangle \ge \langle -c, p_1/\mu \rangle\).
Hence we again arrive at (13), which contradicts the last equation in (10).
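Theorem 7.2 can be spot-checked by brute force on a small instance: since both feasible sets are nonempty polyhedra here, one can enumerate their vertices and compare optimal values. The data below (and the 1e-9 tolerance) are illustrative choices, not from the text.

```python
# Brute-force check of Theorem 7.2 on a tiny instance: both feasible sets are
# nonempty, so both problems have solutions and the optimal values coincide.
# Primal: maximize <b, x>, Ax <= c, x >= 0 (2 variables; vertices enumerated).
# Dual:   minimize <c, u>, A^T u >= b, u >= 0.
# Illustrative data, not from the text.
from itertools import combinations

A = [[1.0, 2.0], [3.0, 1.0]]
c = [4.0, 6.0]
b = [1.0, 1.0]

# Primal constraints as rows (g, h) meaning <g, x> <= h, including x >= 0.
rows = [(A[0], c[0]), (A[1], c[1]), ([-1.0, 0.0], 0.0), ([0.0, -1.0], 0.0)]

def solve2(r1, r2):
    # Intersection of the two boundary lines, if they are not parallel.
    (g1, h1), (g2, h2) = r1, r2
    det = g1[0] * g2[1] - g1[1] * g2[0]
    if abs(det) < 1e-12:
        return None
    return ((h1 * g2[1] - g1[1] * h2) / det, (g1[0] * h2 - h1 * g2[0]) / det)

def feasible(x):
    return all(g[0] * x[0] + g[1] * x[1] <= h + 1e-9 for g, h in rows)

vertices = [v for r1, r2 in combinations(rows, 2)
            if (v := solve2(r1, r2)) is not None and feasible(v)]
primal_opt = max(b[0] * v[0] + b[1] * v[1] for v in vertices)

# Same vertex enumeration for the dual, with A^T u >= b rewritten as <=.
drows = [([-A[0][0], -A[1][0]], -b[0]), ([-A[0][1], -A[1][1]], -b[1]),
         ([-1.0, 0.0], 0.0), ([0.0, -1.0], 0.0)]
def dfeasible(u):
    return all(g[0] * u[0] + g[1] * u[1] <= h + 1e-9 for g, h in drows)
dvertices = [v for r1, r2 in combinations(drows, 2)
             if (v := solve2(r1, r2)) is not None and dfeasible(v)]
dual_opt = min(c[0] * v[0] + c[1] * v[1] for v in dvertices)

print(abs(primal_opt - dual_opt) < 1e-9)   # True: the values agree, as (5) asserts
```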

Convexity and Optimization in \(\mathbb{R}^n\). Leonard D. Berkovitz

Copyright 2002 John Wiley & Sons, Inc.

ISBN: 0-471-35281-0

VI

SIMPLEX METHOD

1. INTRODUCTION

In Section 4 of Chapter IV we formulated the linear programming problem and

derived a necessary and sufficient condition that a point \(x^*\) be a solution to a
linear programming problem. In Section 7 of Chapter V we presented the
duality theorems for linear programming. We remarked in Section 4 of Chapter
IV that in practice the dimension of the space \(\mathbb{R}^n\) precluded the use of the
necessary and sufficient conditions to solve linear programming problems and

that instead a method known as the simplex method is used. In this chapter

we shall present the essentials of the simplex method, which is an algorithm for

solving linear programming problems.

To motivate the rationale of the simplex method, we consider several
examples in \(\mathbb{R}^2\).

Example 1.1. Minimize \(f(x) = x_1 - 2x_2\) subject to

$$\begin{aligned}
-x_1 - x_2 &\le -1, \\
-x_1 + x_2 &\le 0, \\
x_1 + 2x_2 &\le 4, \\
x_1 \ge 0, \quad x_2 &\ge 0.
\end{aligned} \tag{1}$$

The feasible set is indicated by the hatched area in Figure 6.1. The level curves
\(f(x) = 0\) and \(f(x) = -1\) are indicated by dashed lines; the arrows indicate the
directions of the gradient vectors of \(f\). Since the gradient vectors indicate the
direction of increasing \(f\), we see from the figure that \(f\) attains a minimum at
the extreme point \((\tfrac{4}{3}, \tfrac{4}{3})\) of the feasible set.
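Since the feasible set of (1) is a polygon and \(f\) is linear, the minimizing extreme point can be confirmed by enumerating the vertices. A pure-Python sketch (the 1e-9 tolerance is an arbitrary choice):

```python
# Vertex check for Example 1.1: the feasible set of (1) is a polygon, so the
# minimum of the linear f is attained at an extreme point.
from itertools import combinations

# Constraints <g, x> <= h, including x1 >= 0 and x2 >= 0.
rows = [([-1.0, -1.0], -1.0), ([-1.0, 1.0], 0.0), ([1.0, 2.0], 4.0),
        ([-1.0, 0.0], 0.0), ([0.0, -1.0], 0.0)]
f = lambda x: x[0] - 2.0 * x[1]

def intersect(r1, r2):
    # Intersection of two boundary lines, if not parallel.
    (g1, h1), (g2, h2) = r1, r2
    det = g1[0] * g2[1] - g1[1] * g2[0]
    if abs(det) < 1e-12:
        return None
    return ((h1 * g2[1] - g1[1] * h2) / det, (g1[0] * h2 - h1 * g2[0]) / det)

verts = [v for r1, r2 in combinations(rows, 2)
         if (v := intersect(r1, r2)) is not None
         and all(g[0] * v[0] + g[1] * v[1] <= h + 1e-9 for g, h in rows)]
best = min(verts, key=f)
print(best, f(best))   # approximately (4/3, 4/3) with value -4/3
```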

Example 1.2. This example differs from Example 1.1 in that we take the
function \(f\) to be \(f(x) = x_1 + x_2\). The level curves are lines parallel to the line
\(x_1 + x_2 = 1\), a segment of which is part of the boundary of the feasible set.
Since the gradient vector of \(f\) is \((1, 1)\) and this is the direction of increasing \(f\),


INTRODUCTION 223

Figure 6.1.

it follows that \(f\) attains its minimum at all boundary points of the feasible set
that lie on the line \(x_1 + x_2 = 1\). In particular, the minimum is attained at the
two extreme points \((\tfrac{1}{2}, \tfrac{1}{2})\) and \((1, 0)\).
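A quick check that both extreme points, and any point of the segment between them, achieve the same minimal value 1 over the feasible set of (1):

```python
# Example 1.2 check: with f(x) = x1 + x2, the minimum value 1 is attained
# along the segment of x1 + x2 = 1 on the boundary of the feasible set of (1),
# in particular at both extreme points.
rows = [([-1.0, -1.0], -1.0), ([-1.0, 1.0], 0.0), ([1.0, 2.0], 4.0),
        ([-1.0, 0.0], 0.0), ([0.0, -1.0], 0.0)]
feasible = lambda x: all(g[0]*x[0] + g[1]*x[1] <= h + 1e-9 for g, h in rows)
f = lambda x: x[0] + x[1]

p, q = (0.5, 0.5), (1.0, 0.0)
assert feasible(p) and feasible(q) and f(p) == f(q) == 1.0
# Any point of the segment between them is also optimal:
mid = (0.75, 0.25)
assert feasible(mid) and f(mid) == 1.0
print("minimum value 1.0 attained at", p, q, "and along the segment")
```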

Example 1.3. Minimize \(f(x) = -x_1 - x_2\) subject to

$$\begin{aligned}
-x_1 + x_2 &\le 0, \\
x_1 - 2x_2 &\le -2, \\
x_1 \ge 0, \quad x_2 &\ge 0.
\end{aligned}$$

The feasible set is the hatched area in Figure 6.2. The level curve \(f(x) = -6\)
is indicated by a dashed line. Since the gradient of \(f\) is \((-1, -1)\), it follows
from the figure that this problem has no solution. The feasible set in this
example is unbounded. An unbounded feasible set, however, does not imply
that a linear programming problem has no solution. If we modify this example
by taking \(f(x) = x_1 + x_2\), then the problem does have a solution at the extreme
point \((2, 2)\).
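A quick check of both claims: the original objective is unbounded below along the feasible ray \((t, t)\), \(t \ge 2\), while the modified objective attains its minimum at \((2, 2)\).

```python
# Example 1.3 check: along the feasible ray x(t) = (t, t), t >= 2, the original
# objective -x1 - x2 decreases without bound, while the modified objective
# x1 + x2 is minimized at the extreme point (2, 2).
feasible = lambda x: (-x[0] + x[1] <= 0 and x[0] - 2*x[1] <= -2
                      and x[0] >= 0 and x[1] >= 0)

assert feasible((2.0, 2.0))
values = []
for t in [2.0, 10.0, 100.0, 1000.0]:
    assert feasible((t, t))          # the ray stays feasible: unbounded set
    values.append(-t - t)            # original f decreases without bound
assert values == sorted(values, reverse=True)

# Modified objective g(x) = x1 + x2: the constraints x2 <= x1 and
# x2 >= (x1 + 2)/2 force x1 >= 2 and then x2 >= 2, so (2, 2) is optimal.
g = lambda x: x[0] + x[1]
print(g((2.0, 2.0)))   # 4.0
```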

In the examples that we considered either the problem had no solution or