


Purdue University

A Wiley-Interscience Publication
This book is printed on acid-free paper.
Copyright 2002 by John Wiley & Sons, Inc., New York, NY. All rights reserved.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise,
except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without
either the prior written permission of the Publisher, or authorization through payment of the
appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA
01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York,
NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ@WILEY.COM.
For ordering and customer service, call 1-800-CALL-WILEY.

Library of Congress Cataloging-in-Publication Data:
Berkovitz, Leonard David, 1924-
Convexity and optimization in R [superscript n] / Leonard D. Berkovitz.
p. cm.--(Pure and applied mathematics)
"A Wiley-Interscience publication."
Includes bibliographical references and index.
ISBN 0-471-35281-0 (cloth : alk. paper)
1. Convex sets. 2. Mathematical optimization. I. Title. II. Pure and applied
mathematics (John Wiley & Sons : Unnumbered)
QA640 .B46 2001
Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
To my wife, Anna

Preface ix

I Topics in Real Analysis 1
1. Introduction 1
2. Vectors in Rn 1
3. Algebra of Sets 4
4. Metric Topology of Rn 5
5. Limits and Continuity 8
6. Basic Property of Real Numbers 11
7. Compactness 14
8. Equivalent Norms and Cartesian Products 16
9. Fundamental Existence Theorem 18
10. Linear Transformations 21
11. Differentiation in Rn 22

II Convex Sets in Rn 30
1. Lines and Hyperplanes in Rn 30
2. Properties of Convex Sets 35
3. Separation Theorems 45
4. Supporting Hyperplanes: Extreme Points 56
5. Systems of Linear Inequalities: Theorems of the Alternative 61
6. Affine Geometry 69
7. More on Separation and Support 80

III Convex Functions 87
1. Definition and Elementary Properties 87
2. Subgradients 102
3. Differentiable Convex Functions 106
4. Alternative Theorems for Convex Functions 113
5. Application to Game Theory 117

IV Optimization Problems 128
1. Introduction 128
2. Differentiable Unconstrained Problems 129
3. Optimization of Convex Functions 137
4. Linear Programming Problems 139
5. First-Order Conditions for Differentiable Nonlinear
Programming Problems 145
6. Second-Order Conditions 163

V Convex Programming and Duality 179
1. Problem Statement 179
2. Necessary Conditions and Sufficient Conditions 181
3. Perturbation Theory 188
4. Lagrangian Duality 200
5. Geometric Interpretation 207
6. Quadratic Programming 210
7. Duality in Linear Programming 215

VI Simplex Method 222
1. Introduction 222
2. Extreme Points of Feasible Set 225
3. Preliminaries to Simplex Method 230
4. Phase II of Simplex Method 234
5. Termination and Cycling 245
6. Phase I of Simplex Method 251
7. Revised Simplex Method 255

Bibliography 261

Index 263

This book presents the mathematics of finite-dimensional optimization, featuring
those aspects of convexity that are useful in this context. It provides a basis
for the further study of convexity, of more general optimization problems, and
of numerical algorithms for the solution of finite-dimensional optimization
problems. The intended audience consists of beginning graduate students in
engineering, economics, operations research, and mathematics and strong
undergraduates in mathematics. This was the audience in a one-semester
course at Purdue, MA 521, from which this book evolved.
Ideally, the prerequisites for reading this book are good introductory
courses in real analysis and linear algebra. In teaching MA 521, I found that
while the mathematics students had the real analysis prerequisites, many of the
other students who took the course because of their interest in optimization
did not have this prerequisite. Chapter I is for those students and readers who
do not have the real analysis prerequisite; in it I present those concepts and
results from real analysis that are needed. Except for the Weierstrass theorem
on the existence of a minimum, the "heavy" or "deep" theorems are stated
without proof. Students without the real variables prerequisite found the
material difficult at first, but most managed to assimilate it at a satisfactory
level. The advice to readers for whom this is the first encounter with the
material in Chapter I is to make a serious effort to master it and to return to
it as it is used in the sequel.
To address as wide an audience as possible, I have not always presented the
most general result or argument. Thus, in Chapter II I chose the "closest point"
approach to separation theorems, rather than more generally valid arguments,
because I believe it to be more intuitive and straightforward for the intended
audience. Readers who wish to get the best possible separation theorem in
finite dimensions should read Sections 6 and 7 of Chapter II. In proving the
Fritz John Theorem, I used a penalty function argument due to McShane
rather than more technical arguments involving linearizations. I limited the
discussion of duality to Lagrangian duality and did not consider Fenchel
duality, since the latter would require the development of more mathematical
machinery.

The numbering system and reference system for theorems, lemmas, remarks,
and corollaries is the following. Within a given chapter, theorems, lemmas, and
remarks are numbered consecutively in each section, preceded by the section
number. Thus, the first theorem of Section 1 is Theorem 1.1, the second,
Theorem 1.2, and so on. The same applies to lemmas and remarks. Corollaries
are numbered consecutively within each section without a reference to the
section number. Reference to a theorem in the same chapter is given by the
theorem number. Reference to a theorem in a chapter different from the current
one is given by the theorem number preceded by the chapter number in Roman
numerals. Thus, a reference in Chapter IV to Theorem 4.1 in Chapter II would
be Theorem II.4.1. References to lemmas and remarks are similar. References
to corollaries within the same section are given by the number of the corollary.
References to corollaries in a different section of the same chapter are given by
prefixing the section number to the corollary number; references in a different
chapter are given by prefixing the chapter number in Roman numerals to the
section and corollary numbers.
I thank Rita Saerens and John Gregory for reading the course notes version
of this book and for their corrections and suggestions for improvement. I thank
Terry Combs for preparing the figures. I also thank Betty Gick for typing
seemingly innumerable versions and revisions of the notes for MA 521. Her
skill and cooperation contributed greatly to the success of this project.

L®¤ D. B«©
West Lafayette, Indiana
Convexity and Optimization in Rn. Leonard D. Berkovitz
Copyright 2002 John Wiley & Sons, Inc.
ISBN: 0-471-35281-0



The serious study of convexity and optimization problems in Rn requires some
background in real analysis and in linear algebra. In teaching a course based
on notes from which this text evolved, the author and his colleagues assumed
that the students had an undergraduate course in linear algebra but did not
necessarily have a background in real analysis. The purpose of this chapter is
to provide the reader with most of the necessary background in analysis. Not
all statements will be proved. The reader, however, is expected to understand
the definitions and the theorems and is expected to follow the proofs of
statements whenever the proofs are given. The bad news is that many readers
will find this chapter to be the most difficult one in the text. The good news is
that careful study of this material will provide background for many other
courses and that subsequent chapters should be easier. If necessary, the reader
should return to this chapter when encountering this material later on.

By Euclidean n-space, or Rn, we mean the set of all n-tuples x = (x_1, . . . , x_n),
where the x_i, i = 1, . . . , n, are real numbers. Thus, Rn is a generalization of the
familiar two- and three-dimensional spaces R2 and R3. The elements x of Rn
are called vectors or points. We will often identify the vector x = (x_1, . . . , x_n)
with the n x 1 matrix

and abuse the use of the equal sign to write

x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.

In this case we shall call x a column vector. We shall also identify x with the
1 x n matrix (x_1, . . . , x_n) and write x = (x_1, . . . , x_n). In this case we shall call x
a row vector. It will usually be clear from the context whether we consider x
to be a row vector or a column vector. When there is danger of confusion, we
will identify x with the column vector and use the transpose symbol, which is
a superscript t, to denote the row vector. Thus x^t = (x_1, . . . , x_n).
We define two operations, vector addition and multiplication by a scalar. If
x = (x_1, . . . , x_n) and y = (y_1, . . . , y_n) are two vectors, we define their sum x + y
to be

x + y = (x_1 + y_1, . . . , x_n + y_n).
For any scalar, or real number, \alpha, we define \alpha x to be

\alpha x = (\alpha x_1, . . . , \alpha x_n).
We assume that the reader is familiar with the properties of these operations
and knows that under these operations RL is a vector space over the reals.
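As a brief worked illustration of these two operations (the particular vectors here are chosen only for the example), take x = (1, 2, 3) and y = (4, 0, -1) in R3:

```latex
x + y = (1+4,\; 2+0,\; 3+(-1)) = (5,\, 2,\, 2), \qquad
2x = (2\cdot 1,\; 2\cdot 2,\; 2\cdot 3) = (2,\, 4,\, 6).
```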
Another important operation is the inner product, or dot product, of two
vectors, denoted by \langle x, y\rangle or x^t y and defined by

\langle x, y\rangle = \sum_{i=1}^{n} x_i y_i.
Again, we assume that the reader is familiar with the properties of the inner
product. We use the inner product to define the norm \|\cdot\|, or length, of a vector
x as follows:

\|x\| = \langle x, x\rangle^{1/2} = \Bigl( \sum_{i=1}^{n} x_i^2 \Bigr)^{1/2}.
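For instance (a worked check with x = (3, 4) in R2, chosen for the example):

```latex
\langle x, x\rangle = 3^2 + 4^2 = 25, \qquad
\|x\| = \langle x, x\rangle^{1/2} = \sqrt{25} = 5,
```

which agrees with the familiar Pythagorean length of the vector (3, 4) in the plane.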
