Geometric Methods and Applications

Jean Gallier
Texts in Applied Mathematics, 2011
Preface

This book is an introduction to fundamental geometric concepts and tools needed for solving problems of a geometric nature with a computer. Our main goal is to present a collection of tools that can be used to solve problems in computer vision, robotics, machine learning, computer graphics, and geometric modeling.

During the ten years following the publication of the first edition of this book, optimization techniques have made a huge comeback, especially in the fields of computer vision and machine learning. In particular, convex optimization and its special incarnation, semidefinite programming (SDP), are now widely used techniques in computer vision and machine learning, as one may verify by looking at the proceedings of any conference in these fields. Therefore, we felt that it would be useful to include some material (especially on convex geometry) to prepare the reader for more comprehensive expositions of convex optimization, such as Boyd and Vandenberghe [2], a masterly and encyclopedic account of the subject. In particular, we added Chapter 7, which covers separating and supporting hyperplanes.

We also realized that the importance of the SVD (singular value decomposition) and of the pseudo-inverse had not been sufficiently stressed in the first edition of this book, and we rectified this situation in the second edition. In particular, we added sections on PCA (principal component analysis) and on best affine approximations, and showed how they are efficiently computed using the SVD. We also added a section on quadratic optimization and a section on the Schur complement, showing the usefulness of the pseudo-inverse.

In this second edition, many typos and small mistakes have been corrected, some proofs have been shortened, and some problems and references have been added. Here is a list containing brief descriptions of the chapters that have been modified or added.

• Chapter 3, on the basic properties of convex sets, has been expanded. In particular, we state a version of Carathéodory's theorem for convex cones (Theorem 3.2), a version of Radon's theorem for pointed cones (Theorem 3.6), and Tverberg's theorem (Theorem 3.7), and we define centerpoints and prove their existence (Theorem 3.9).

• Chapter 7 is new. This chapter deals with separating hyperplanes, versions of Farkas's lemma, and supporting hyperplanes.
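Carathéodory's theorem for convex sets, mentioned above, says that a point in the convex hull of points in R^d is a convex combination of at most d + 1 of them. The following is a minimal numerical sketch of this fact, not from the book; the function name and tolerance are ours, and the brute-force subset search is for illustration only:

```python
import itertools
import numpy as np

def caratheodory_representation(points, p, tol=1e-9):
    """Express p in conv(points) as a convex combination of at most d+1 points.

    points: (n, d) array; p is assumed to lie in the convex hull of the rows.
    Returns (subset_indices, coefficients) or None if no subset is found.
    """
    n, d = points.shape
    for subset in itertools.combinations(range(n), d + 1):
        S = points[list(subset)]                  # (d+1, d) candidate points
        # Barycentric coordinates: sum_i c_i s_i = p and sum_i c_i = 1.
        M = np.vstack([S.T, np.ones(d + 1)])      # (d+1, d+1) linear system
        rhs = np.append(p, 1.0)
        try:
            c = np.linalg.solve(M, rhs)
        except np.linalg.LinAlgError:
            continue                              # affinely dependent subset
        if np.all(c >= -tol):                     # convex combination found
            return list(subset), c
    return None
```

For example, the center (0.5, 0.5) of the unit square is recovered as a convex combination of three of the four corners, as the theorem guarantees for d = 2.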
Following Berger [1], various versions of the separation of open or closed convex subsets by hyperplanes are proved as consequences of a geometric version of the Hahn-Banach theorem (Theorem 7.1). We also show how various versions of Farkas's lemma (Lemmas 7.3, 7.4, and 7.5) can be easily deduced from separation results (Corollary 7.4 and Proposition 7.3). Farkas's lemma plays an important role in linear programming. Indeed, it can be used to give a quick proof of so-called strong duality in linear programming. We also prove the existence of supporting hyperplanes for boundary points of closed convex sets (Minkowski's lemma, Proposition 7.4). Unfortunately, lack of space prevents us from discussing polytopes and polyhedra. The reader will find a masterly exposition of these topics in Ziegler [3].

• Chapter 14 is a major revision of Chapter 13 (Applications of Euclidean Geometry to Various Optimization Problems) from the first edition of this book and has been renamed "Applications of SVD and Pseudo-Inverses." Section 14.1, about least squares problems and the pseudo-inverse, has not changed much, but we have added the fact that AA⁺ is the orthogonal projection onto the range of A and that A⁺A is the orthogonal projection onto Ker(A)⊥, the orthogonal complement of Ker(A). We have also added Proposition 14.1, which shows how the pseudo-inverse of a normal matrix A can be obtained from a block diagonalization of A (see Theorem 12.7). Sections 14.2, 14.3, and 14.4 are new. In Section 14.2, we define various matrix norms, including operator norms, and we prove Proposition 14.4, showing how a matrix can be best approximated by a rank-k matrix (in the 2-norm). Section 14.3 is devoted to principal component analysis (PCA). PCA is a very important statistical tool, yet in our experience, most presentations of this concept lack a crisp definition.
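The projection facts about AA⁺ and A⁺A stated above can be checked numerically. Here is a minimal numpy sketch, not from the book; the test matrix and rank threshold are our choices. It builds the pseudo-inverse from the SVD (A = UΣVᵀ gives A⁺ = VΣ⁺Uᵀ) and verifies that both products are symmetric idempotents, i.e., orthogonal projections:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 5x3 matrix of rank 2, so that A has a nontrivial kernel.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))

# Pseudo-inverse via the reduced SVD, keeping only nonzero singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10))                     # numerical rank
A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

P_range = A @ A_pinv       # claimed: orthogonal projection onto range(A)
P_rowsp = A_pinv @ A       # claimed: orthogonal projection onto Ker(A)⊥

# An orthogonal projection P satisfies P = Pᵀ and P² = P.
assert np.allclose(P_range, P_range.T) and np.allclose(P_range @ P_range, P_range)
assert np.allclose(P_rowsp, P_rowsp.T) and np.allclose(P_rowsp @ P_rowsp, P_rowsp)
assert np.allclose(A @ A_pinv @ A, A)          # a Moore-Penrose identity
assert np.allclose(A_pinv, np.linalg.pinv(A))  # agrees with the library routine
```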
Most presentations identify the notion of principal components with the result of applying SVD and do not prove why SVD does in fact yield the principal components and directions. To rectify this situation, we give a precise definition of PCAs (Definition 14.3), and we prove rigorously how SVD yields PCA (Theorem 14.3), using the Rayleigh-Ritz ratio (Lemma 14.2). In Section 14.4, it is shown how to best approximate a set of data with an affine subspace in the least squares sense. Again, SVD can be used to find solutions.

• Chapter 15 is new, except for Section 15.1, which reproduces Section 13.2 from the first edition of this book. We added the definition of the positive semidefinite cone ordering ⪰ on symmetric matrices, since it is extensively used in convex optimization. In Section 15.2, we find a necessary and sufficient condition (Proposition 15.2) for the quadratic function f(x) = ½xᵀAx + xᵀb to have a minimum, in terms of the pseudo-inverse of A (where A is a symmetric matrix). We also show how to accommodate linear constraints of the form Cᵀx = 0 or affine constraints of the form Cᵀx = t (where t ≠ 0). In Section 15.3, we consider the problem of maximizing f(x) = xᵀAx on the unit sphere xᵀx = 1 or, more generally, on the ellipsoid xᵀBx = 1, where A is a symmetric matrix and B is symmetric positive definite. We show that these problems are completely solved by diagonalizing A with respect to an orthogonal matrix. We also briefly consider the effect of adding linear constraints of the form Cᵀx = 0 or affine constraints of the form Cᵀx = t (where t ≠ 0).

• Chapter 16 is new. In this chapter, we define the notion of Schur complement, and we use it to characterize when a symmetric 2 × 2 block matrix is either positive semidefinite or positive definite (Proposition 16.1, Proposition 16.2, and Theorem 16.1).

• Chapter 17 is also brand new.
In this chapter, we show how a computer vision problem, contour grouping, can be formulated as a quadratic optimization problem involving a Hermitian matrix. Because of the extra dependency on an angle, this optimization problem leads to finding the derivatives of eigenvalues and eigenvectors of a normal matrix X. We derive explicit formulas for these derivatives (in the case of eigenvectors, the formula involves the pseudo-inverse of X), and we prove their correctness. It appears to be difficult to find these formulas, together with a clean and correct proof, in the literature. Our optimization problem leads naturally to the consideration of the field of values (or numerical range) of a matrix.
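The eigenvalue-derivative formula alluded to above can be sanity-checked numerically: for a simple eigenvalue λ(t) of a smooth family X(t) of normal matrices with unit eigenvector u, one has λ′(t) = u* X′(t) u. The following sketch is not from the book; it uses real symmetric matrices (a special case of normal) and compares the formula against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# X(t) = A + tB, a family of symmetric (hence normal) matrices.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

def lam_max(t):
    # Largest eigenvalue of X(t); simple with probability 1 for random A.
    return np.linalg.eigvalsh(A + t * B)[-1]

# Unit eigenvector of A = X(0) for its largest eigenvalue.
_, V = np.linalg.eigh(A)
u = V[:, -1]

analytic = u @ B @ u                       # formula: d/dt lam_max at t = 0
h = 1e-6
numeric = (lam_max(h) - lam_max(-h)) / (2 * h)
assert abs(analytic - numeric) < 1e-5      # formula matches finite difference
```

The eigenvector-derivative formula involves the pseudo-inverse of X − λI and would require tracking eigenvector phase, so only the eigenvalue case is sketched here.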
doi:10.1007/978-1-4419-9961-0