
$$z_j(t) = 0, \qquad j = p + 1, \ldots, n.$$

Then, from the fact that $x_j^* = 0$ for $j = p + 1, \ldots, n$, from the feasibility of $x^*$, and from (6) we get that

$$Az(t) = Ax^* - t \sum_{j=1}^{p} \lambda_j A_j = c.$$
H
Let

$$\varepsilon = \min \left\{ \frac{x_j^*}{|\lambda_j|} : j = 1, \ldots, p \ \text{and} \ \lambda_j \neq 0 \right\}. \tag{8}$$
It follows from (7) that $z(t) \geq 0$ whenever $|t| \leq \varepsilon$. Hence $z(t)$ is feasible whenever $|t| \leq \varepsilon$. Thus,

$$\langle b, x^* \rangle \geq \langle b, z(t) \rangle = \langle b, x^* \rangle - t \sum_{j=1}^{p} b_j \lambda_j$$

for all $t$ such that $|t| \leq \varepsilon$. Hence

$$\sum_{j=1}^{p} b_j \lambda_j = 0 \qquad \text{and} \qquad \langle b, x^* \rangle = \langle b, z(t) \rangle.$$
Thus, $z(t)$ is a solution whenever $|t| \leq \varepsilon$. Let the minimum in (8) be attained at the index $r$. Let $t_0 = x_r^*/\lambda_r$. Then $t_0 = \varepsilon$ if $\lambda_r > 0$ and $t_0 = -\varepsilon$ if $\lambda_r < 0$. Hence $z(t_0)$ is feasible and is a solution. Moreover, $z(t_0)$ has fewer than $p$ positive components. If the corresponding columns of $A$ are linearly independent, then $z(t_0)$ is an extreme point. If the columns are not linearly independent, we repeat the process, taking $x^*$ to be $z(t_0)$. In a finite number of iterations we shall obtain a solution such that the column vectors corresponding to the positive components are linearly independent. This solution will be an extreme point of the feasible set.
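The reduction in this proof is constructive and can be carried out numerically. The following sketch (our illustration, not part of the text; the name `reduce_to_extreme_point` and the tolerance `tol` are hypothetical choices) repeatedly finds a dependence relation among the columns corresponding to positive components and moves to $z(t_0)$ until those columns are linearly independent:

```python
import numpy as np

def reduce_to_extreme_point(A, x, tol=1e-10):
    """Given feasible x (Ax = c, x >= 0), return a feasible point whose
    positive components correspond to linearly independent columns of A,
    following the reduction in the proof above. Applied to an optimal x,
    it preserves the objective value, since sum_j b_j lambda_j = 0."""
    x = x.astype(float).copy()
    while True:
        pos = np.where(x > tol)[0]            # indices j with x_j > 0
        Ap = A[:, pos]
        if np.linalg.matrix_rank(Ap, tol=tol) == len(pos):
            return x                          # columns independent: extreme point
        # lambda with sum_j lambda_j A_j = 0: a null-space vector of Ap.
        lam = np.linalg.svd(Ap)[2][-1]
        # The minimum in (8) is attained at index r; t0 = x_r / lambda_r.
        mask = np.abs(lam) > tol
        r = np.argmin(x[pos][mask] / np.abs(lam[mask]))
        t0 = x[pos][mask][r] / lam[mask][r]
        x[pos] -= t0 * lam                    # z(t0): one more zero component
        x[np.abs(x) < tol] = 0.0
```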


3. PRELIMINARIES TO SIMPLEX METHOD

In the last section we showed that if a solution to the canonical linear
programming problem exists, then it exists at an extreme point of the feasible
set. We also showed that there are at most $n!/(m!\,(n - m)!)$ such points. For a
moderate system of 6 constraint equations and 30 variables, there can be as
many as

$$\binom{30}{6} = \frac{30!}{6!\,24!} = 593{,}775$$

extreme points. Thus the simple-minded scheme of evaluating the objective
function at all possible extreme points and then selecting the point at which
the largest value of the objective function is attained is not practical. The
simplex method is an algorithm for finding a solution by going from one
extreme point of the feasible set to another in such a way that the objective
function is not decreased.
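As a quick check of this count (our own aside, not in the text), the binomial coefficient can be evaluated directly in Python:

```python
from math import comb

# Upper bound on the number of extreme points: choose 6 basic
# columns out of 30 variables.
print(comb(30, 6))  # 593775
```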
We now introduce the notation and definitions that will be used. Consider
the system

$$Ax = c \tag{1}$$

that defines the feasible set for CLP, the canonical linear programming problem. Thus, $A$ is $m \times n$ and of rank $m$. Let $B$ denote an $m \times m$ submatrix of $A$ obtained by taking $m$ linearly independent columns of $A$ and let $N$ denote the submatrix of $A$ consisting of the remaining $n - m$ columns. We can write
a vector $x$ in $\mathbb{R}^n$ as

$$x = (x_B, x_N),$$

where $x_B$ consists of those components of $x$ corresponding to the columns in $B$ and $x_N$ consists of those components of $x$ corresponding to the columns in $N$.

Definition 3.1. The columns of $B$ are called a basis. The components of $x_B$ are called basic variables. The components of $x_N$ are called nonbasic variables or free variables.

We may write the system (1) as

$$(B, N) \begin{pmatrix} x_B \\ x_N \end{pmatrix} = c. \tag{2}$$

Any vector of the form $(\xi_B, x_N)$, where $x_N$ is arbitrary and

$$\xi_B = B^{-1}c - B^{-1}N x_N,$$


will be a solution of (2). (This is why the components of $x_N$ are called free variables.) There is a unique solution of (2) with $x_N = 0_N$, namely

$$\xi = (\xi_B, 0_N), \qquad \xi_B = B^{-1}c. \tag{3}$$

Definition 3.2. (i) A solution of (1) of the form (3) is called a basic solution. (ii) If $\xi_B \geq 0$, then $\xi$ is feasible and is called a basic feasible solution. (iii) If some of the components of $\xi_B$ are zero, then $\xi$ is called a degenerate basic solution.
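To make these definitions concrete, here is a small numerical illustration of our own (the data and the helper name `basic_solution` are hypothetical): choosing $m$ linearly independent columns as the basis, we solve $B\xi_B = c$ and set the nonbasic variables to zero.

```python
import numpy as np

def basic_solution(A, c, basis):
    """Basic solution of Ax = c for the given m column indices:
    xi_B = B^{-1} c on the basic variables, x_N = 0 elsewhere."""
    B = A[:, basis]
    xi_B = np.linalg.solve(B, c)   # basis columns must be independent
    x = np.zeros(A.shape[1])
    x[basis] = xi_B
    return x

# m = 2 constraints, n = 4 variables.
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 1.0]])
c = np.array([4.0, 3.0])

x = basic_solution(A, c, basis=[0, 1])
print(x)      # [4. 3. 0. 0.]  (xi_B >= 0: a basic feasible solution)
print(A @ x)  # [4. 3.] = c
```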

We emphasize the following aspects of our notation. By $x_B$ we mean the vector consisting of those components of $x$ that correspond to the columns of $B$. By $x_N$ we mean the vector consisting of those components of $x$ that correspond to the columns of $N$. By $\xi_B$ we mean the value of the basic variables in a basic solution. By $\xi$ we mean the value of a solution of (1) and by $\xi_N$ we mean the value of a particular $x_N$.
,
We also call attention to a change in terminology. Up to now we have used
the term solution to mean a solution of the linear programming problem. We
are now using it to mean a solution of (1). Henceforth we shall use the term
optimal solution to designate a solution of the linear programming problem,
that is, a feasible point at which the maximum is attained.

Remark 3.1. It follows from the preceding and from Theorem 2.1 that a basic
feasible solution is an extreme point and conversely. It follows from Theorem
2.2 that if an optimal solution exists, then there exists a basic feasible solution
that is an optimal solution. Therefore, we need only consider basic feasible
solutions in our search for the optimal solution.

To apply the simplex method, we reformulate the problem stated in (1) of
Section 2 by introducing a scalar variable z and writing the problem as

$$\begin{array}{rl} \text{Maximize} & z \\ \text{subject to} & Ax = c, \\ & \langle b, x \rangle - z = 0, \\ & x \geq 0. \end{array} \tag{4}$$
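In matrix terms, (4) simply appends one column (for $z$) and one row (for $\langle b, x \rangle - z = 0$) to the system $Ax = c$. A minimal sketch of the construction (ours; the helper name `augmented_system` and the variable ordering $(x, z)$ are our choices):

```python
import numpy as np

def augmented_system(A, b, c):
    """Equality constraints of (4) in the variables (x, z):
    the last row (b, -1) encodes <b, x> - z = 0."""
    m, n = A.shape
    top = np.hstack([A, np.zeros((m, 1))])
    bottom = np.append(b, -1.0).reshape(1, n + 1)
    return np.vstack([top, bottom]), np.append(c, 0.0)

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 1.0, 0.0])   # objective coefficients
c = np.array([4.0, 3.0])
M, d = augmented_system(A, b, c)
print(M)   # last row is (b, -1)
print(d)   # [4. 3. 0.]
```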

The simplex method consists of two phases. In phase I the method either
determines a basic feasible solution or determines that the feasible set is empty.
In phase II the method starts with a basic feasible solution and either
determines that no optimal solution exists or determines an optimal solution.
In the latter case it does so by moving from one basic feasible solution to
another in such a way that the value of z is not decreased.


We conclude this section with a review of some aspects of linear algebra
involving the solution of systems of linear equations and the calculation of
inverses of matrices. These aspects of linear algebra arise in the simplex method.
Consider a system of linear equations

$$My = d, \tag{5}$$

where $M$ is a $k \times n$ matrix of rank $k$, $y$ is a vector in $\mathbb{R}^n$, and $d$ is a vector in $\mathbb{R}^k$. A system of linear equations $M'y = d'$ is said to be equivalent to (5) if the solution sets of both systems are equal. We define an elementary operation on the system (5) to be any one of the following operations.

(i) Interchange two equations.
(ii) Multiply an equation by a nonzero constant.
(iii) Add a nonzero multiple of an equation to another equation.

Any sequence of elementary operations applied to (5) will yield a system
equivalent to (5).
Let $R$ denote a $k \times k$ submatrix of $M$ that has rank $k$ and let $S$ denote the submatrix of $M$ formed by the remaining $n - k$ columns. Then we may write (5) as

$$(R, S) \begin{pmatrix} y_R \\ y_S \end{pmatrix} = d, \tag{6}$$
where $y_R$ is the vector obtained from $y$ by taking the components of $y$ that correspond to the columns in $R$ and $y_S$ has an analogous meaning. By an appropriate sequence of elementary operations, we can obtain a system equivalent to (6) having the form

$$(U, V) \begin{pmatrix} y_R \\ y_S \end{pmatrix} = d', \tag{7}$$

where $U$ is a $k \times k$ upper triangular matrix with diagonal entries equal to 1 and $V$ is a $k \times (n - k)$ matrix. To simplify notation, assume that $R$ consists of the first $k$ columns of $M$. Then (7) in component form reads

$$\begin{aligned} y_1 + u_{12}y_2 + \cdots + u_{1k}y_k + v_{1,k+1}y_{k+1} + \cdots + v_{1n}y_n &= d_1', \\ y_2 + \cdots + u_{2k}y_k + v_{2,k+1}y_{k+1} + \cdots + v_{2n}y_n &= d_2', \\ \vdots \qquad\qquad\qquad & \\ y_k + v_{k,k+1}y_{k+1} + \cdots + v_{kn}y_n &= d_k'. \end{aligned} \tag{8}$$
The last equation in (8) expresses $y_k$ in terms of $y_{k+1}, \ldots, y_n$. Proceeding


upward through the system (8) by backward substitution, we can express
$y_1, \ldots, y_k$ in terms of $y_{k+1}, \ldots, y_n$ and thus obtain the general solution of (7)
and hence of (5). The method of solving (5) just described is called Gauss
elimination.
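A compact implementation of Gauss elimination as just described (our own sketch; it omits the pivoting rules mentioned below and so assumes the leading $k \times k$ block of `M` is nonsingular) is:

```python
import numpy as np

def gauss_elimination(M, d, y_S=None):
    """Solve My = d (M is k x n of rank k) by reduction to the
    triangular form (7) followed by backward substitution. The free
    variables y_{k+1}, ..., y_n default to zero."""
    M = M.astype(float).copy()
    d = d.astype(float).copy()
    k, n = M.shape
    # Forward elimination to the form (U, V) with unit diagonal.
    for i in range(k):
        piv = M[i, i]
        M[i] /= piv
        d[i] /= piv
        for j in range(i + 1, k):
            factor = M[j, i]
            M[j] -= factor * M[i]
            d[j] -= factor * d[i]
    # Backward substitution: y_1, ..., y_k in terms of the free variables.
    y = np.zeros(n)
    if y_S is not None:
        y[k:] = y_S
    for i in reversed(range(k)):
        y[i] = d[i] - M[i, i + 1:] @ y[i + 1:]
    return y

M = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0]])
d = np.array([5.0, 11.0])
y = gauss_elimination(M, d)   # particular solution with y_3 = 0
print(y, M @ y)               # y = [2. 1. 0.], M @ y = d
```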
In solving the system (5) or (6) we need not write down the equations and
the variables. We need only write down the augmented matrices (M, d) or
(R, S, d) and apply the elementary operations to the rows of the augmented
matrices. In so doing we speak of elementary row operations. The reader will
recall, or can verify, that an elementary row operation on a $k \times n$ matrix $M$ can be effected in the following fashion. Perform the elementary operation on the $k \times k$ identity matrix to obtain a matrix $E$. Then premultiply the matrix $M$
by E. The matrices E are called elementary matrices. In computer programs for
solving systems of linear equations by Gauss elimination, elementary row
operations are executed by premultiplications by elementary matrices. Such
computer programs also incorporate rules for choosing the components $y_R$ of $y$ so as to increase accuracy and speed of computation. We shall not take up
these questions here.
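For instance (our own illustration), adding $-2$ times row 1 to row 2 is effected by premultiplying by the elementary matrix obtained from the identity by that same operation:

```python
import numpy as np

M = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0]])

# Perform "add -2 times row 0 to row 1" on the 2 x 2 identity ...
E = np.eye(2)
E[1, 0] = -2.0

# ... then premultiplication by E applies that row operation to M.
print(E @ M)   # [[2. 1.  1.]
               #  [0. 1. -1.]]
```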
By an additional sequence of elementary row operations, we can obtain a
system equivalent to (7) having the form

$$(I_k, W) \begin{pmatrix} y_R \\ y_S \end{pmatrix} = d'', \tag{9}$$

where $I_k$ is the $k \times k$ identity matrix. We can rewrite (9) as

$$I_k y_R = d'' - W y_S \tag{10}$$

and thus obtain $y_R$ in terms of $y_S$ directly, without backward substitution. The method of solving (5) by reducing it to the equivalent system (9) is known as Gauss–Jordan elimination. For most large systems, computer implementations of Gauss elimination are superior to those of Gauss–Jordan elimination.
We next recall an effective way of calculating the inverse of a nonsingular
matrix $R$. We first note that an elementary matrix $E$ is nonsingular. Since $R$ is
nonsingular, we can reduce R to the identity matrix by applying a sequence of
elementary row operations. But each elementary row operation is effected by
a premultiplication by an elementary matrix. Thus, we have a sequence $E_p, E_{p-1}, \ldots, E_1$ of elementary matrices such that

$$E_p \cdots E_2 E_1 R = I.$$

Multiplying this equation through on the right by $R^{-1}$ gives

$$E_p \cdots E_1 I = R^{-1}. \tag{11}$$
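In practice this is carried out by row reducing the augmented matrix $(R, I)$ until the left block becomes $I$; the right block is then $R^{-1}$. A minimal Gauss–Jordan sketch of our own (no pivoting, so it assumes the pivots encountered are nonzero):

```python
import numpy as np

def inverse_by_row_reduction(R):
    """Compute R^{-1} by applying to I the same elementary row
    operations that reduce R to I, as in (11)."""
    k = R.shape[0]
    aug = np.hstack([R.astype(float), np.eye(k)])
    for i in range(k):
        aug[i] /= aug[i, i]                   # make the pivot 1
        for j in range(k):
            if j != i:
                aug[j] -= aug[j, i] * aug[i]  # clear column i elsewhere
    return aug[:, k:]                         # right block is R^{-1}

R = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Rinv = inverse_by_row_reduction(R)
print(Rinv)      # [[ 1. -1.], [-1.  2.]]
print(R @ Rinv)  # 2 x 2 identity
```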
N 
