
A solution $x_*$ of the constrained problem

    \min\{f(x) : x \in X\}, \qquad X \equiv \{x : g(x) \le 0,\ h(x) = 0\},

will be a solution of the unconstrained problem

    \min\{f(x) + \langle \lambda_*, g(x) \rangle + \langle \mu_*, h(x) \rangle\}.    (12)
The set S of solutions of the unconstrained problem (12) may contain points
that do not satisfy the constraints. To find solutions of the constrained
problem, one must determine those points of S that satisfy the constraints. It
follows from the rightmost inequality in (7) that if (12) has a unique solution,
then it must be a solution of the constrained problem.

One sometimes finds the statement that we may replace the constrained
problem by the unconstrained problem of minimizing a Lagrangian. This
statement is only true in the sense described in the preceding paragraph.
It is conceivable that there exist vectors $(\lambda, \mu)$ with $\lambda \ge 0$ such that the
unconstrained problem (12) with $(\lambda_*, \mu_*) = (\lambda, \mu)$ has the same infimum as the
constrained problem. Such a vector will be called a Kuhn–Tucker vector. The
multiplier vectors $(\lambda_*, \mu_*)$ of Theorem 2.2 are Kuhn–Tucker vectors. In the
next theorem we show that if CPII has a solution, then a Kuhn–Tucker vector
for CPII is a multiplier vector as in Theorem 2.2.
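As a concrete illustration (a toy instance of my own, not from the text), the sketch below checks numerically that $\lambda = 2$ is a Kuhn–Tucker vector for the one-dimensional problem of minimizing $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$: the unconstrained infimum of $f(x) + \lambda g(x)$ equals the constrained minimum $f(1) = 1$.

```python
# Hypothetical one-dimensional instance: minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0 (i.e., x >= 1); the minimum is f(1) = 1.
# For lam = 2 the Lagrangian x^2 + 2*(1 - x) = (x - 1)^2 + 1 has
# unconstrained infimum 1, so lam = 2 is a Kuhn-Tucker vector.

def lagrangian(x, lam):
    return x**2 + lam * (1 - x)

lam = 2.0
xs = [x / 1000 for x in range(-5000, 5001)]      # grid on [-5, 5]
unconstrained_inf = min(lagrangian(x, lam) for x in xs)
constrained_min = min(x**2 for x in xs if 1 - x <= 0)

print(unconstrained_inf)  # 1.0, attained at x = 1
print(constrained_min)    # 1.0: the two values agree
```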

T 2.3. L et CPII have a solution x . T hen a Kuhn”Tucker vector (  ,  )
*
for CPII is a multiplier vector ( , ) as in T heorem 2.2.
**
Proof. Since $x_*$ is a solution of CPII and since $(\bar\lambda, \bar\mu)$ is a Kuhn–Tucker
vector,

    f(x_*) = \inf\{f(x) + \langle \bar\lambda, g(x) \rangle + \langle \bar\mu, h(x) \rangle : x \in \mathbb{R}^n\}.

Thus (5) holds with $\lambda_0 = 1$. Hence by the italicized statement in Remark 2.4,

the vector $(\bar\lambda, \bar\mu)$ is a multiplier vector as in Theorem 2.2.

Theorems 2.2 and 2.3 imply the following:
NECESSARY CONDITIONS AND SUFFICIENT CONDITIONS 185


C¬¬ 1. L et CPII have a solution x . L et M denote the set of multipliers
*
associated with x as in T heorem 2.2 and let K denote the set of Kuhn”Tucker
*
vectors for CPII. T hen M : K.

Remark 2.6. The hypothesis of strong consistency in Theorem 2.2 guaranteed
that the set M is not empty. Thus if CPII is strongly consistent and has a
solution, then K is also nonempty.

We now give an example of a problem, taken from Rockafellar [1970], for
which no Kuhn–Tucker vector, and hence no vector in $M$, exists.

Example 2.1. Consider $\mathbb{R}^2$, and let

    f(x_1, x_2) = x_1, \qquad g_1(x_1, x_2) = x_1^2 - x_2, \qquad g_2(x_1, x_2) = x_2.

Then the only point in $\mathbb{R}^2$ satisfying $g(x_1, x_2) \le 0$ is $(0, 0)$, and the value of the
minimum is zero. Thus, a Kuhn–Tucker vector $\lambda = (\lambda_1, \lambda_2)$ would have to
satisfy $\lambda_1 \ge 0$, $\lambda_2 \ge 0$, and

    0 \le x_1 + \lambda_1(x_1^2 - x_2) + \lambda_2 x_2 = x_1 + \lambda_1 x_1^2 + (\lambda_2 - \lambda_1) x_2

for all $x$. We leave it to the reader to check that this is impossible.
Note that the problem in this example is not strongly consistent.
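The impossibility can be checked case by case: if $\lambda_2 \ne \lambda_1$, the term $(\lambda_2 - \lambda_1)x_2$ is unbounded below; if $\lambda_2 = \lambda_1 = 0$, take $x_1 = -1$; and if $\lambda_2 = \lambda_1 > 0$, the quadratic $x_1 + \lambda_1 x_1^2$ is negative at $x_1 = -1/(2\lambda_1)$. A small sketch that probes these witness points for a few sample values of $\lambda$:

```python
# For each candidate lam = (l1, l2), find a point x where the required
# inequality 0 <= x1 + l1*(x1**2 - x2) + l2*x2 fails, showing that no
# Kuhn-Tucker vector exists in Example 2.1.

def expr(x1, x2, l1, l2):
    return x1 + l1 * (x1**2 - x2) + l2 * x2

def violation(l1, l2):
    # witness points from the case analysis above
    if l2 != l1:
        x2 = -1e6 if l2 > l1 else 1e6   # drive (l2 - l1)*x2 down
        return expr(0.0, x2, l1, l2)
    if l1 == 0:
        return expr(-1.0, 0.0, l1, l2)
    return expr(-1 / (2 * l1), 0.0, l1, l2)

samples = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (3.0, 3.0)]
print(all(violation(l1, l2) < 0 for l1, l2 in samples))  # True
```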

Condition (3) of Theorem 2.1 and condition (7) of Theorem 2.2 are called
saddle point conditions. The points $(x_*, \lambda_0, \lambda_*, \mu_*)$ in (3) and $(x_*, \lambda_*, \mu_*)$ in
(7) are called saddle points. We previously encountered saddle point conditions
and saddle points in our discussion of game theory in Section 5 of Chapter III.
If the convex programming problem CPII has a solution and if the problem is
strongly consistent, then the saddle point condition (7) gives us a method of
computing $f(x_*)$, the value of the minimum, without finding the optimal point
$x_*$, as is shown in the following theorem. We shall return to this point in our
study of duality in Section 4.

T 2.4. L et CPII be strongly consistent, let x be a solution, let denote
*
the value of the minimum, and let , be the multipliers associated with x as
** *
in T heorem 2.2. T hen

: L (x , , ) : min L (x, , )
*** **
x


: sup inf L (x, , ) : inf sup L (x, , ), (13)
(,) (,)
x x



where + RK, . 0, + RI, x + RL.
186 CONVEX PROGRAMMING AND DUALITY


Proof. By a slight modification of the argument in Lemma III.5.1 to take
into account the possible value of $+\infty$ for a supremum and $-\infty$ for an
infimum, we have that

    \sup_{(\lambda, \mu)} \inf_x L(x, \lambda, \mu) \le \inf_x \sup_{(\lambda, \mu)} L(x, \lambda, \mu).    (14)

From (7) of Theorem 2.2 we get

    \sup_{(\lambda, \mu)} L(x_*, \lambda, \mu) = L(x_*, \lambda_*, \mu_*) = \inf_x L(x, \lambda_*, \mu_*).    (15)

Hence

    \inf_x \sup_{(\lambda, \mu)} L(x, \lambda, \mu) \le L(x_*, \lambda_*, \mu_*) \le \sup_{(\lambda, \mu)} \inf_x L(x, \lambda, \mu).    (16)

We also note that by (7) of Theorem 2.2 we can replace the infimum in (15)
by a minimum. This observation and (14) and (16) establish all the equalities
in (13) except the first, which follows from (9) of Theorem 2.2.
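Equation (13) can be seen at work on a small instance (my own, not from the text): minimize $f(x) = (x-3)^2$ subject to $g(x) = x - 1 \le 0$. The solution is $x_* = 1$ with value $\gamma = 4$, and the sketch below recovers $\gamma$ as $\sup_{\lambda \ge 0} \inf_x L(x, \lambda)$ without locating $x_*$:

```python
# L(x, lam) = (x - 3)**2 + lam * (x - 1).  The inner infimum over x is
# attained at x = 3 - lam/2 (set dL/dx = 0), so it can be evaluated in
# closed form; the outer supremum over lam >= 0 is taken on a grid.

def inner_inf(lam):
    x = 3 - lam / 2                      # unconstrained minimizer of L(., lam)
    return (x - 3)**2 + lam * (x - 1)

gamma = max(inner_inf(l / 100) for l in range(0, 1001))  # lam in [0, 10]
print(gamma)  # 4.0, the constrained minimum, found without x_*
```

The outer maximum is attained at $\lambda = 4$, which is exactly the multiplier $\lambda_*$ of Theorem 2.2 for this instance.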

Remark 2.7. It follows from Remark 2.6 and the proof of Theorem 2.4 that the
hypothesis of strong consistency in the theorem can be replaced by the
hypothesis that M is not empty.

Theorem 2.2 states that if CPII is strongly consistent and $x_*$ is a solution,
then we may take $\lambda_0 = 1$. Hence, if the functions $f$ and $g$ are differentiable, we
may take $\lambda_0 = 1$ in Remark 2.3. Theorem IV.5.3 states that, for differentiable
convex problems, if the KKT conditions hold at a feasible point $x_*$, then $x_*$
minimizes. Thus, we have the following theorem.

T 2.5. L et f and g be convex and differentiable and let h be af¬ne. L et
CPII be strongly consistent. A necessary and suf¬cient condition that a feasible
point x be a solution to CPII is that there exist multipliers + RK, . 0, and
* * *
in RI such that
*

f (x ) ; R
g(x ) ; R A : 0,
* *
* *
1 , g(x )2 : 0.
*
*
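The two conditions of the theorem are straightforward to verify numerically. Below is a hedged sketch on an instance of my own (not from the text): $f(x) = x_1^2 + x_2^2$, $g(x) = 1 - x_1 - x_2 \le 0$, and no equality constraints, so the $\mu_*^T A$ term drops out. The claimed solution is $x_* = (1/2, 1/2)$ with multiplier $\lambda_* = 1$:

```python
# Verify the conditions of Theorem 2.5 for f(x) = x1^2 + x2^2,
# g(x) = 1 - x1 - x2 <= 0, at the candidate x_* = (1/2, 1/2), lam_* = 1.

x = (0.5, 0.5)
lam = 1.0

grad_f = (2 * x[0], 2 * x[1])          # gradient of f at x_*
grad_g = (-1.0, -1.0)                  # gradient of the affine constraint g
g_val = 1 - x[0] - x[1]                # constraint value at x_*

stationarity = [grad_f[i] + lam * grad_g[i] for i in range(2)]
print(stationarity)                    # [0.0, 0.0]: grad f + lam * grad g = 0
print(lam >= 0 and g_val <= 1e-12)     # True: feasibility and lam >= 0
print(abs(lam * g_val) < 1e-12)        # True: complementary slackness
```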
In the next theorem we shall show that, with no restrictions placed on $f$ and
$g$, the saddle point condition (7) is a sufficient condition that $x_*$ be a solution
to the programming problem P: Minimize $f$ over the set $X$, where

    X = \{x : x \in X_0,\ g(x) \le 0\}.

Note that since we place no conditions on the constraints, equality con-
straints can be replaced by inequality constraints. We assume that this has been
done, and so $h \equiv 0$.


T 2.6. L et X be a set in RL, let f be a real-valued function de¬ned on

X , and let g be a function from X to RK. L et x be a point in X and let . 0
  *  *
be a vector in RK such that the saddle point condition (7) holds for all x in X

and all - 0 in RK. T hen x is a solution to problem P.
*
Proof. We first show that $x_*$ is feasible. The leftmost inequality in (7) is

    f(x_*) + \langle \lambda, g(x_*) \rangle \le f(x_*) + \langle \lambda_*, g(x_*) \rangle, \qquad \lambda \ge 0.

Hence

    \langle \lambda - \lambda_*, g(x_*) \rangle \le 0.    (17)

Now fix a value $j$ in the set $\{1, \ldots, m\}$. Let

    \lambda^{(j)} = (\lambda_{*1}, \ldots, \lambda_{*,j-1},\ \lambda_{*j} + 1,\ \lambda_{*,j+1}, \ldots, \lambda_{*m}).

If for each $j = 1, \ldots, m$ we take $\lambda = \lambda^{(j)}$ in (17), we get

    g_j(x_*) \le 0, \qquad j = 1, \ldots, m.

Thus $x_*$ is feasible.
We next show that (6) holds. If we take $\lambda = 0$ in (17), we get that
$\langle \lambda_*, g(x_*) \rangle \ge 0$. On the other hand, $g(x_*) \le 0$ and $\lambda_* \ge 0$ imply that
$\langle \lambda_*, g(x_*) \rangle \le 0$. Hence $\langle \lambda_*, g(x_*) \rangle = 0$, which is (6).
From the rightmost inequality in (7) and from (6) we get that

    f(x_*) \le f(x) + \langle \lambda_*, g(x) \rangle, \qquad x \in X_0.

For $x$ feasible, $g(x) \le 0$. Hence for $x$ feasible

    f(x_*) \le f(x),

and so $f(x_*) = \min\{f(x) : x \in X\}$.
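The argument can be exercised numerically on a toy instance (my own, not from the text): $X_0 = \mathbb{R}$, $f(x) = x^2$, $g(x) = 1 - x$, with candidate saddle point $x_* = 1$, $\lambda_* = 2$. The sketch checks both inequalities of (7) on a grid and then confirms that $x_*$ minimizes $f$ over the feasible set, as the theorem asserts:

```python
# Saddle point check for L(x, lam) = x**2 + lam * (1 - x):
# condition (7) requires L(x_*, lam) <= L(x_*, lam_*) <= L(x, lam_*)
# for all lam >= 0 and all x.

def L(x, lam):
    return x**2 + lam * (1 - x)

x_star, lam_star = 1.0, 2.0
xs = [x / 100 for x in range(-500, 501)]       # grid on [-5, 5]
lams = [l / 100 for l in range(0, 501)]        # grid on [0, 5]

left = all(L(x_star, lam) <= L(x_star, lam_star) + 1e-12 for lam in lams)
right = all(L(x_star, lam_star) <= L(x, lam_star) + 1e-12 for x in xs)
feasible_min = min(x**2 for x in xs if 1 - x <= 0)

print(left and right)                      # True: (7) holds on the grid
print(L(x_star, lam_star), feasible_min)   # 1.0 1.0: x_* solves problem P
```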
Remark 2.8. Note that the proof of Theorem 2.6 only involves very elementary
arguments.

The next theorem emphasizes some of the points implicit in the material
presented in this section.

T 2.7. A necessary and suf¬cient condition for a point (x , , ) to be
***
a saddle point for the L agrangian L de¬ned in (8) on the domain

D : +(x, , ) : x + RL, + RK, . 0, + RI,


is that

    (i)   L(x_*, \lambda_*, \mu_*) = \min\{L(x, \lambda_*, \mu_*) : x \in \mathbb{R}^n\};

    (ii)  g(x_*) \le 0, \quad h(x_*) = 0;                                   (18)

    (iii) \langle \lambda_*, g(x_*) \rangle = 0.

Moreover, if (18ii) and (18iii) hold, then

    f(x_*) = L(x_*, \lambda_*, \mu_*).    (19)
Proof. It follows from the definition of $L$ that if (18ii) and (18iii) hold, then
(19) holds.

Let $(x_*, \lambda_*, \mu_*)$ be a saddle point for $L$. Then (18i) follows from the
definition of saddle point. The inequality in (18ii) and the equation (18iii) were
established in the course of proving Theorem 2.6. The equality in (18ii) also
follows from Theorem 2.6, since there we converted the equality constraints to
two inequality constraints, $h(x) \le 0$ and $-h(x) \le 0$, and thus $h(x_*) \le 0$ and
$-h(x_*) \le 0$. Now suppose that (18) holds. Condition (18ii) states that $x_*$ is
feasible. Conditions (18i) and (19) imply that (5) holds. It now follows from
Remark 2.4 that $(x_*, \lambda_*, \mu_*)$ is a saddle point for $L(x, \lambda, \mu)$.
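Conditions (18i)–(18iii) are also easy to test numerically. Below is a sketch on a hypothetical instance of my own with an equality constraint: minimize $f(x) = x_1^2 + x_2^2$ subject to $h(x) = x_1 + x_2 - 2 = 0$, with no inequality constraints, so (18iii) is vacuous. The candidate saddle point is $x_* = (1, 1)$ with $\mu_* = -2$:

```python
# L(x, mu) = x1**2 + x2**2 + mu * (x1 + x2 - 2); check (18i), (18ii),
# and (19) at the candidate x_* = (1, 1), mu_* = -2.

def L(x1, x2, mu):
    return x1**2 + x2**2 + mu * (x1 + x2 - 2)

x_star, mu_star = (1.0, 1.0), -2.0
L_star = L(*x_star, mu_star)

grid = [v / 50 for v in range(-250, 251)]          # grid on [-5, 5]
min_L = min(L(x1, x2, mu_star) for x1 in grid for x2 in grid)

print(abs(min_L - L_star) < 1e-12)                 # True: (18i) holds
print(x_star[0] + x_star[1] - 2 == 0)              # True: (18ii) holds
print(x_star[0]**2 + x_star[1]**2 == L_star)       # True: (19), f(x_*) = L
```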


3. PERTURBATION THEORY

In problems arising in applications the values of various parameters are usually
