Boyd Convex Optimization book - SFU Wiki
6 Approximation and fitting
at given points $u_i \in \mathbf{R}^k$? (Here we do not restrict $f$ to lie in any finite-dimensional subspace of functions.) The answer is: if and only if there exist $g_1, \dots, g_m$ such that

$$y_j \geq y_i + g_i^T (u_j - u_i), \quad i, j = 1, \dots, m. \tag{6.19}$$
To see this, first suppose that $f$ is convex, $\operatorname{dom} f = \mathbf{R}^k$, and $f(u_i) = y_i$, $i = 1, \dots, m$. At each $u_i$ we can find a vector $g_i$ such that

$$f(z) \geq f(u_i) + g_i^T (z - u_i) \tag{6.20}$$

for all $z$. If $f$ is differentiable, we can take $g_i = \nabla f(u_i)$; in the more general case, we can construct $g_i$ by finding a supporting hyperplane to $\operatorname{epi} f$ at $(u_i, y_i)$. (The vectors $g_i$ are called subgradients.) By applying (6.20) to $z = u_j$, we obtain (6.19).

Conversely, suppose $g_1, \dots, g_m$ satisfy (6.19). Define $f$ as

$$f(z) = \max_{i=1,\dots,m} \left( y_i + g_i^T (z - u_i) \right)$$

for all $z \in \mathbf{R}^k$. Clearly, $f$ is a (piecewise-linear) convex function. The inequalities (6.19) imply that $f(u_i) = y_i$ for $i = 1, \dots, m$.
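The converse construction is easy to check numerically. The sketch below is my own illustration, not from the book: it samples the convex function $\|u\|^2$ (whose gradient $2u$ supplies valid subgradients), builds the piecewise-linear function above, and verifies that it interpolates the data.

```python
import numpy as np

# Hypothetical example data: sample the convex function f0(u) = ||u||^2,
# whose gradient 2*u at u_i gives a valid subgradient g_i.
rng = np.random.default_rng(0)
U = rng.normal(size=(5, 2))          # points u_i in R^2
y = np.sum(U**2, axis=1)             # y_i = ||u_i||^2
G = 2 * U                            # g_i = gradient of ||u||^2 at u_i

def f(z):
    """Piecewise-linear convex interpolant: max_i (y_i + g_i^T (z - u_i))."""
    return np.max(y + np.sum(G * (z - U), axis=1))

# f interpolates the data: f(u_i) = y_i for every i
assert all(abs(f(U[i]) - y[i]) < 1e-9 for i in range(len(y)))
```

At $z = u_i$ the $i$-th affine term equals $y_i$ exactly, and the subgradient inequalities keep every other term below it, which is why the interpolation check holds.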
We can use this result to solve several problems involving interpolation, approximation, or bounding, with convex functions.
Fitting a convex function to given data
Perhaps the simplest application is to compute the least-squares fit of a convex function to given data $(u_i, y_i)$, $i = 1, \dots, m$:

$$\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^m (y_i - f(u_i))^2 \\
\mbox{subject to} & f : \mathbf{R}^k \to \mathbf{R} \mbox{ is convex, } \operatorname{dom} f = \mathbf{R}^k.
\end{array}$$
This is an infinite-dimensional problem, since the variable is $f$, which is in the space of continuous real-valued functions on $\mathbf{R}^k$. Using the result above, we can formulate this problem as

$$\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^m (y_i - \hat{y}_i)^2 \\
\mbox{subject to} & \hat{y}_j \geq \hat{y}_i + g_i^T (u_j - u_i), \quad i, j = 1, \dots, m,
\end{array}$$

which is a QP with variables $\hat{y} \in \mathbf{R}^m$ and $g_1, \dots, g_m \in \mathbf{R}^k$. The optimal value of this problem is zero if and only if the given data can be interpolated by a convex function, i.e., if there is a convex function that satisfies $f(u_i) = y_i$. An example is shown in figure 6.24.
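This QP is easy to prototype. The sketch below is my own illustration (the book provides no code), assuming scipy is available; for simplicity it uses a general NLP solver rather than a dedicated QP solver, and made-up one-dimensional data ($k = 1$) sampled from $u^2$. Since those data already lie on a convex function, the optimal value should be zero, illustrating the claim above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D data (k = 1), sampled from the convex function u^2,
# so the data can be interpolated exactly and the optimal value is zero.
u = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = u**2
m = len(u)

# Stack the QP variables as x = (yhat_1, ..., yhat_m, g_1, ..., g_m).
def objective(x):
    return np.sum((y - x[:m])**2)

# One inequality per ordered pair (i, j):
#   yhat_j >= yhat_i + g_i (u_j - u_i)
constraints = [
    {"type": "ineq",
     "fun": lambda x, i=i, j=j: x[j] - x[i] - x[m + i] * (u[j] - u[i])}
    for i in range(m) for j in range(m) if i != j
]

x0 = np.concatenate([y, 2 * u])   # feasible start: exact values and slopes
res = minimize(objective, x0, constraints=constraints, method="SLSQP")
yhat, g = res.x[:m], res.x[m:]
print(res.fun)                    # optimal value: ~0 for convex data
```

In practice one would pass the problem to a QP solver (or a modeling tool such as CVXPY) instead of SLSQP; the point here is only the variable layout and the $m(m-1)$ inequality constraints.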
Bounding values of an interpolating convex function
As another simple example, suppose that we are given data $(u_i, y_i)$, $i = 1, \dots, m$, which can be interpolated by a convex function. We would like to determine the range of possible values of $f(u_0)$, where $u_0$ is another point in $\mathbf{R}^k$, and $f$ is any convex function that interpolates the given data. To find the smallest possible value of $f(u_0)$ we solve the LP

$$\begin{array}{ll}
\mbox{minimize} & y_0 \\
\mbox{subject to} & y_j \geq y_i + g_i^T (u_j - u_i), \quad i, j = 0, \dots, m,
\end{array}$$
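In this LP the data values $y_1, \dots, y_m$ are constants, and the variables are $y_0$ and the subgradients $g_0, \dots, g_m$. As a rough sketch (my own illustration with made-up 1-D data, assuming scipy), the LP below finds the smallest value a convex interpolant of $(0,0)$, $(1,1)$, $(2,4)$ can take at $u_0 = 1.5$; a short convexity argument (writing $u = 1$ as a convex combination of $0$ and $1.5$) shows that minimum is $1.5$.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 1-D data (k = 1) interpolatable by a convex function,
# plus a query point u0.
u = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 4.0])
u0 = 1.5
m = len(u)

# Index the query point as 0: U = (u0, u1, ..., um).
# LP variables: x = (y0, g_0, g_1, ..., g_m); the y_i, i >= 1, are data.
U = np.concatenate([[u0], u])
n = 1 + (m + 1)
A, b = [], []
for i in range(m + 1):
    for j in range(m + 1):
        if i == j:
            continue
        # Constraint y_i + g_i (u_j - u_i) - y_j <= 0, with y_0 a variable
        # and the remaining y's moved to the right-hand side.
        row = np.zeros(n)
        rhs = 0.0
        row[1 + i] = U[j] - U[i]
        if i == 0:
            row[0] += 1.0
        else:
            rhs -= y[i - 1]
        if j == 0:
            row[0] -= 1.0
        else:
            rhs += y[j - 1]
        A.append(row)
        b.append(rhs)

c = np.zeros(n)
c[0] = 1.0                          # minimize y0
res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
print(res.fun)                      # smallest possible f(u0): 1.5 here
```

Maximizing $y_0$ over the same constraint set (negating $c$) would give the largest possible value of $f(u_0)$, provided that problem is bounded.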