<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>

<head>
<meta http-equiv="Content-Language" content="en-us">
<title>YALMIP Example : Sum-of-squares decompositions</title>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1251">
<meta content="Microsoft FrontPage 6.0" name="GENERATOR">
<meta name="ProgId" content="FrontPage.Editor.Document">
<link href="yalmip.css" type="text/css" rel="stylesheet">
<base target="_self">
</head>

<body leftMargin="0" topMargin="0">

<div align="left">
  <table border="0" cellpadding="4" cellspacing="3" style="border-collapse: collapse" bordercolor="#000000" width="100%" align="left" height="100%">
    <tr>
      <td width="100%" align="left" height="100%" valign="top">
      <h2>Sum of squares decompositions</h2>
      <hr noShade SIZE="1">
      <p>YALMIP has a built-in module for sum of squares calculations. In its most basic formulation, a 
      sum of squares (SOS) problem takes a polynomial
      <strong>p(x)</strong> and tries to find a real-valued polynomial vector function
      <strong>h(x)</strong> such 
      that <strong>p(x)=h<sup>T</sup>(x)h(x)</strong> (or equivalently <strong>p(x)=v<sup>T</sup>(x)Qv(x)</strong>, 
      where <b>Q</b> is positive semidefinite and <strong>v(x)</strong> is a vector of monomials), hence proving non-negativity of the polynomial <strong>p(x)</strong>.
      Read more about standard sum of squares decompositions in 
      <a href="readmore.htm#PARRILO">[Parrilo]</a>. </p>
		<p>In addition to standard SOS decompositions, YALMIP also supports
		<a href="#linear">linearly</a> and <a href="#polynomial">nonlinearly</a> 
		parameterized problems, decomposition of <a href="#matrix">matrix 
		valued</a> polynomials, and computation of <a href="#lowrank">low rank 
		decompositions</a>.</p>
		<p>The examples below only introduce the basic features related to 
		sum of squares in YALMIP. For more features and details, run the example 
		<code>sosex.m</code>.</p>
      <h3>Doing it yourself the hard way</h3>
      <p>Before we introduce the efficiently implemented SOS module in YALMIP, 
		let us briefly mention how one could implement an SOS solver in 
		high-level YALMIP code. Of course, you would never use this approach, 
		but it might be useful to see the basic idea. Define a polynomial.</p>
      <table cellPadding="10" width="100%" id="table5">
        <tr>
          <td class="xmpcode">
          <pre>x = sdpvar(1,1);y = sdpvar(1,1);
p = (1+x)^4 + (1-y)^2;</pre>
          </td>
        </tr>
      </table>
      <p>Introduce a decomposition</p>
      <table cellPadding="10" width="100%" id="table6">
        <tr>
          <td class="xmpcode">
          <pre>v = monolist([x y],degree(p)/2);
Q = sdpvar(length(v));
p_sos = v'*Q*v;</pre>
          </td>
        </tr>
      </table>
      <p>The two polynomials have to match, hence all coefficients of the 
		difference between them have to be zero.</p>
      <table cellPadding="10" width="100%" id="table7">
        <tr>
          <td class="xmpcode">
          <pre>F = set(coefficients(p-p_sos,[x y]) == 0) + set(Q &gt; 0);
solvesdp(F)</pre>
          </td>
        </tr>
      </table>
      <p>The problem above is typically more efficiently solved when interpreted 
		in primal form, hence we <a href="dual.htm#dualize">dualize</a> it.</p>
      <table cellPadding="10" width="100%" id="table11">
        <tr>
          <td class="xmpcode">
          <pre>F = set(coefficients(p-p_sos,[x y]) == 0) + set(Q &gt; 0);
[F,obj] = dualize(F,[]);
solvesdp(F,-obj)</pre>
          </td>
        </tr>
      </table>
      <p>Adding parameterizations does not change the code significantly. Here is the 
		code to find a lower bound on the polynomial.</p>
      <table cellPadding="10" width="100%" id="table8">
        <tr>
          <td class="xmpcode">
          <pre>sdpvar t
F = set(coefficients((p-t)-p_sos,[x y]) == 0) + set(Q &gt; 0);
solvesdp(F,-t)</pre>
          </td>
        </tr>
      </table>
      <p>Matrix valued decompositions are a bit trickier, but still 
		straightforward.</p>
      <table cellPadding="10" width="100%" id="table9">
        <tr>
          <td class="xmpcode">
          <pre>sdpvar x y
P = [1+x^2 -x+y+x^2;-x+y+x^2 2*x^2-2*x*y+y^2];
m = size(P,1);
v = monolist([x y],degree(P)/2);
Q = sdpvar(length(v)*m);
R = kron(eye(m),v)'*Q*kron(eye(m),v)-P;
s = coefficients(R(find(triu(R))),[x y]);
solvesdp(set(Q &gt; 0) + set(s==0));
sdisplay(clean(kron(eye(m),v)'*double(Q)*kron(eye(m),v),1e-6))</pre>
          </td>
        </tr>
      </table>
      <p>Once again, this is the basic idea behind the SOS module in YALMIP. 
		However, the built-in implementation is far more efficient and uses various 
		approaches to reduce complexity, hence the code above should never be 
		used in practice.</p>
		<h3>Simple sum of squares problems</h3>
      <p>The following lines of code present some typical manipulations when working 
      with SOS calculations (a more detailed description is available if you run 
		the sum of squares example in <code>yalmipdemo.m</code>). </p>
		<p>The most important commands are <a href="reference.htm#sos">
      	sos</a> (to define an SOS constraint) and
      <a href="reference.htm#solvesos">
      solvesos</a> (to solve the problem).</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>x = sdpvar(1,1);y = sdpvar(1,1);
p = (1+x)^4 + (1-y)^2;
F = set(sos(p));
solvesos(F);</pre>
          </td>
        </tr>
      </table>
      <p>The sum of squares decomposition is extracted with the command
      <a href="reference.htm#sosd">
      sosd</a>.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>h = sosd(F);
sdisplay(h)<font color="#000000">

ans = </font></pre>
          <pre><font color="#000000">  '-1.203-1.9465x+0.22975y-0.97325x^2'
  '0.7435-0.45951x-0.97325y-0.22975x^2'
  '0.0010977+0.00036589x+0.0010977y-0.0018294x^2'</font></pre>
          </td>
        </tr>
      </table>
      <p>To see if the decomposition was successful, we simply calculate <strong>
      p(x)-h<sup>T</sup>(x)h(x)</strong>, which should be 0. However, due to numerical errors, 
      the difference will not be exactly zero. A useful command then is
      <a href="reference.htm#clean">
      clean</a>, which removes all monomials with coefficients smaller than, e.g., 1e-6.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>clean(p-h&#39;*h,1e-6)
<font color="#000000">
ans =</font></pre>
          <pre><font color="#000000">&nbsp;&nbsp;  0</font></pre>
          </td>
        </tr>
      </table>
      <p>The decomposition <strong>
      p(x)=v<sup>T</sup>(x)Qv(x) </strong>can also be obtained easily.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>x = sdpvar(1,1);y = sdpvar(1,1);
p = (1+x)^4 + (1-y)^2;
F = set(sos(p));
[sol,v,Q] = solvesos(F);
clean(p-v{1}'*Q{1}*v{1},1e-6)</pre>
          <pre><font color="#000000"> ans =
     0</font></pre>
          </td>
        </tr>
      </table>
      <p><font color="#FF0000">Note</font> : Even though <strong>
      h<sup>T</sup>(x)h(x) </strong>and <strong>
      v<sup>T</sup>(x)Qv(x)</strong> should be the same in theory, they are 
      typically not. The reason is partly numerical issues with floating point 
      numbers, but more importantly due to the way YALMIP treats the case when 
      <b>Q</b> 
      not is positive definite (often the case due to numerical issues in the 
      SDP solver). The decomposition is in theory defined as <strong>h(x)=chol(Q)v(x)</strong> 
      . If <strong>
      Q</strong> is indefinite, YALMIP uses an approximation of the Cholesky 
      factor based on a singular value decomposition.</p>
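      <p>As a rough illustration of that fallback (a sketch only, not YALMIP's 
		internal implementation), an SVD-based approximate factor can be formed 
		from the numerical Gramian, e.g. using <code>v{1}</code> and <code>Q{1}</code> 
		returned by <code>solvesos</code> above.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>% Sketch only: build H with H'*H close to Q{1}, even if Q{1} is slightly
% indefinite (sqrt(S) is well defined since singular values are nonnegative)
Qval = (Q{1}+Q{1}')/2;     % symmetrize the numerical Gramian
[U,S,V] = svd(Qval);
H = sqrt(S)*V';            % H'*H = V*S*V', a PSD approximation of Qval
h_approx = H*v{1};         % candidate vector with p ~ h_approx'*h_approx</pre>
          </td>
        </tr>
      </table>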
      <p>The quality of the decomposition can alternatively be evaluated using 
		<a href="reference.htm#checkset">checkset</a>. The value reported is the 
		largest coefficient in the polynomial <strong>
      p(x)-h<sup>T</sup>(x)h(x)</strong>.</p>
      <table cellPadding="10" width="100%" id="table1">
        <tr>
          <td class="xmpcode">
          <pre>checkset(F)
<font color="#000000">++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|   ID|      Constraint|                          Type|   Primal residual|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|   #1|   Numeric value|   SOS constraint (polynomial)|       7.3674e-012|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++</font></pre>
			<pre>e = checkset(F(is(F,'sos')))

<font color="#000000">e =</font></pre>
			<pre><font color="#000000">  7.3674e-012</font></pre>
          </td>
        </tr>
      </table>
      <p>It is very rare (except in contrived academic examples) that one finds a 
      decomposition with a positive definite <strong>
      Q </strong>and zero mismatch between <strong>
      v<sup>T</sup>(x)Qv(x) </strong>and the polynomial <strong>
      p(x)</strong>. The reason is simply that all SDP solvers use floating-point 
      arithmetic. Hence, in principle, the decomposition has no theoretical 
      value as a certificate for non-negativity unless additional post-analysis 
      is performed (relating the size of the residual to the eigenvalues of 
		the Gramian).</p>
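      <p>A minimal sketch of such a post-analysis (an illustration only; the 
		comparison below is deliberately crude and assumes <code>v</code> and 
		<code>Q</code> from <code>[sol,v,Q] = solvesos(F)</code> above) is to compare 
		the largest residual coefficient with the smallest eigenvalue of the Gramian.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>% Sketch only: a crude a posteriori check of the numerical certificate
res  = p - v{1}'*Q{1}*v{1};                        % residual polynomial
cmax = max(abs(double(coefficients(res,[x y]))));  % largest residual coefficient
lmin = min(eig(Q{1}));                             % smallest Gramian eigenvalue
% If lmin is clearly larger than a bound derived from cmax (the exact bound
% depends on the monomial basis), the certificate can be trusted.
[lmin cmax]</pre>
          </td>
        </tr>
      </table>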
      <h3><a name="linear"></a>Parameterized problems</h3>
      <p>The involved polynomials can be parameterized, and we can optimize the parameters 
      under the constraint that the polynomial is a sum of squares. As an example, 
      we can calculate a lower bound on a polynomial. The second argument can be used for an objective function to be 
      minimized. Here, we maximize the lower bound, i.e. minimize the negative 
      lower bound. The third argument is an options structure while the fourth argument is a vector 
      containing all parametric variables in the polynomial (in this example 
      we only have one variable, but several parametric variables can be defined 
      by simply concatenating them).</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>sdpvar x y lower
p = (1+x*y)^2-x*y+(1-y)^2;
F = set(sos(p-lower));
solvesos(F,-lower,[],lower);
double(lower)
<font color="#000000">
 ans =</font></pre>
          <pre><font color="#000000">&nbsp;&nbsp;&nbsp;  0.75</font></pre>
          </td>
        </tr>
      </table>
      <p>YALMIP automatically declares all variables appearing in the objective 
      function and in non-SOS constraints as parametric variables. Hence, the 
      following code is equivalent.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>solvesos(F,-lower);
double(lower)
<font color="#000000">
 ans =</font></pre>
          <pre><font color="#000000">&nbsp;&nbsp;&nbsp;  0.75</font></pre>
          </td>
        </tr>
      </table>
      <p>Multiple SOS constraints can also be used. Consider the following problem of 
      finding the smallest possible <b>t</b> such that the polynomials <b>t(1+xy)<sup>2</sup>-xy+(1-y)<sup>2</sup></b> 
      and <b>(1-xy)<sup>2</sup>+xy+t(1+y)<sup>2</sup></b> are both 
      sums of squares.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>sdpvar x y t
p1 = t*(1+x*y)^2-x*y+(1-y)^2;
p2 = (1-x*y)^2+x*y+t*(1+y)^2;
F = set(sos(p1))+set(sos(p2));
solvesos(F,t);</pre>
          <pre>double(t)
<font color="#000000">
 ans = 

   0.2500</font></pre>
          <pre>sdisplay(sosd(F(1)))</pre>
          <pre><font color="#000000">ans = </font></pre>
          <pre><font color="#000000">    '-1.102+0.95709y+0.14489xy'
    '-0.18876-0.28978y+0.47855xy'</font></pre>
          <pre>sdisplay(sosd(F(2)))</pre>
          <pre><font color="#000000">ans = </font></pre>
          <pre><font color="#000000">    '-1.024-0.18051y+0.76622xy'
    '-0.43143-0.26178y-0.63824xy'
    '0.12382-0.38586y+0.074568xy'</font></pre>
          </td>
        </tr>
      </table>
      <p>
          <img border="0" src="demoicon.gif" width="16" height="16">If you have 
		parametric variables, bounding the feasible region typically 
		improves numerical behavior. Having lower bounds will 
			additionally decrease the size of the optimization problem 
			(variables bounded from below can be treated as translated cone 
			variables in <a href="dual.htm#dualization">dualization</a>, which 
			is used by <a href="reference.htm#solvesos">
			solvesos</a>).</p>
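      <p>A small sketch of this, reusing the lower bound example above (the 
		bounds of -100 and 100 are arbitrary illustrative choices), could look as 
		follows.</p>
      <table cellPadding="10" width="100%">
        <tr>
          <td class="xmpcode">
          <pre>% Sketch: explicit bounds on the parametric variable typically improve
% numerics, and the lower bound allows a smaller dualized problem
sdpvar x y lower
p = (1+x*y)^2-x*y+(1-y)^2;
F = set(sos(p-lower)) + set(lower &gt; -100) + set(lower &lt; 100);
solvesos(F,-lower);
double(lower)</pre>
          </td>
        </tr>
      </table>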
