<pre>[pass,tol] = mpt_ispwabigger(sol{1},sol_p{1})
pass =
<font color="#000000"> 0</font>
tol =
<font color="#000000"> 1.3484e-014
</font>figure;
plot(Valuefunction);
plot(Valuefunction_p);</pre>
</td>
</tr>
</table>
<p>Since the objective function is <b>t</b>, the optimal t should coincide
with the value function. The optimal <b>t</b> in the projected problem
was requested as the second optimizer</p>
<table cellPadding="10" width="100%" id="table31">
<tr>
<td class="xmpcode">
<pre>plot(Optimizer(2));</pre>
</td>
</tr>
</table>
<p>In the code above, we used the pre-constructed function <code>create_CHS.m</code>,
which generates numerical data for defining predictions from
current state and future input signals. The problem can however be
solved without using this function. We will show two alternative
methods. </p>
<p>The first approach explicitly generates the future states from the given
current state and future inputs (note that the solution is not exactly
the same as above, due to a small difference in indexing logic).</p>
<table cellPadding="10" width="100%" id="table32">
<tr>
<td class="xmpcode">
<pre>x = sdpvar(2,1);
U = sdpvar(N,1);</pre>
<pre>x0 = x; % Save parametric state
F = set(-5 < x0 < 5); % Exploration space
objective= 0;
for k = 1:N
% Feasible region
F = F + set(-1 < U(k) < 1);</pre>
<pre> % Add stage cost to total cost
objective = objective + (C*x)'*(C*x) + U(k)'*U(k);
% Explicit state update
x = A*x + B*U(k);
end
F = F + set(-1 < C*x < 1); % Terminal constraint
[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,objective,[],x0,U(1));</pre>
</td>
</tr>
</table>
<p>The second method introduces new decision variables for the future
states and connects them using equality constraints.</p>
<table cellPadding="10" width="100%" id="table33">
<tr>
<td class="xmpcode">
<pre>x = sdpvar(2,N+1);
U = sdpvar(N,1);</pre>
<pre>F = set(-5 < x(:,1) < 5); % Exploration space
objective = 0;
for k = 1:N
% Feasible region
F = F + set(-1 < U(k) < 1);
% Add stage cost to total cost
objective = objective + (C*x(:,k))'*(C*x(:,k)) + U(k)'*U(k);</pre>
<pre> % Implicit state update
F = F + set(x(:,k+1) == A*x(:,k) + B*U(k));
end
F = F + set(-1 < C*x(:,end) < 1); % Terminal constraint
[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,objective,[],x(:,1),U(1));</pre>
</td>
</tr>
</table>
<p>This second method is less efficient, since it has many more decision
variables. However, in some cases it is the most convenient way to
model a system (see the PWA examples below for similar approaches), and
the additional decision variables are in any case removed by a pre-solve
step that YALMIP applies before calling the multi-parametric solver.</p>
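<p>As a consistency check, the value functions obtained from the two
alternative formulations can be compared in the same way as at the top of
this page. This is only a sketch; it assumes the outputs of the two
<code>solvemp</code> calls above were saved under the hypothetical names
<code>sol1</code> and <code>sol2</code>.</p>
<table cellPadding="10" width="100%" id="table34">
	<tr>
		<td class="xmpcode">
		<pre>% sol1 and sol2 are assumed to hold the parametric solutions from
% the explicit and implicit formulations, respectively.
% pass = 0, together with a tolerance at machine precision, indicates
% that neither value function exceeds the other, i.e. they coincide.
[pass,tol] = mpt_ispwabigger(sol1{1},sol2{1})</pre>
		</td>
	</tr>
</table>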
<h3>Parametric programs with binary variables and equality constraints</h3>
<p>YALMIP extends the parametric algorithms in <a href="solvers.htm#mpt">MPT</a>
by adding a layer to
enable binary variables and equality constraints. We can use this to
find explicit solutions to, e.g., predictive control of PWA (piecewise
affine) systems.</p>
<p>Let us find the explicit solution to a predictive control problem
where the gain of the system depends on the sign of the first state.
This will be a fairly advanced example, so let us start slowly by defining
some data.</p>
<table cellPadding="10" width="100%" id="table6">
<tr>
<td class="xmpcode">
<pre>yalmip('clear')
clear all</pre>
<pre>% Model data
A = [2 -1;1 0];
B1 = [1;0]; % Small gain for x(1) > 0
B2 = [pi;0]; % Larger gain for x(1) < 0
C = [0.5 0.5];</pre>
<pre>nx = 2; % Number of states
nu = 1; % Number of inputs
% Prediction horizon
N = 4;</pre>
</td>
</tr>
</table>
<p>To simplify the code and the notation, we create a bunch of state and
control vectors in cell arrays</p>
<table cellPadding="10" width="100%" id="table5">
<tr>
<td class="xmpcode">
<pre>% States x(k), ..., x(k+N)
x = sdpvar(repmat(nx,1,N),repmat(1,1,N));</pre>
<pre>% Inputs u(k), ..., u(k+N) (last one not used)
u = sdpvar(repmat(nu,1,N),repmat(1,1,N));</pre>
<pre>% Binary for PWA selection
d = binvar(repmat(2,1,N),repmat(1,1,N));</pre>
</td>
</tr>
</table>
<p>We now run a loop to add constraints on all states and inputs. Note
the use of binary variables to define the PWA dynamics, and the recommended
use of <a href="logic.htm#bounds">bounds</a> to improve the big-M
relaxations.</p>
<table cellPadding="10" width="100%" id="table7">
<tr>
<td class="xmpcode">
<pre>F = set([]);
obj = 0;
for k = N-1:-1:1
    % Strengthen big-M (improves numerics)
bounds(x{k},-5,5);
bounds(u{k},-1,1);
bounds(x{k+1},-5,5);
% Feasible region
F = F + set(-1 < u{k} < 1);
F = F + set(-1 < C*x{k} < 1);
F = F + set(-5 < x{k} < 5);
F = F + set(-1 < C*x{k+1} < 1);
F = F + set(-5 < x{k+1} < 5);</pre>
<pre> % PWA Dynamics
F = F + set(implies(d{k}(1),x{k+1} == (A*x{k}+B1*u{k})));
F = F + set(implies(d{k}(2),x{k+1} == (A*x{k}+B2*u{k})));
F = F + set(implies(d{k}(1),x{k}(1) > 0));
F = F + set(implies(d{k}(2),x{k}(1) < 0));
% It is EXTREMELY important to add as many
% constraints as possible to the binary variables
F = F + set(sum(d{k}) == 1);
% Add stage cost to total cost
obj = obj + norm(x{k},1) + norm(u{k},1);
end</pre>
</td>
</tr>
</table>
<p>The parametric variable here is the current state <b>x{1}</b>. In this
optimization problem, there are a lot of variables that we have no
interest in. To tell YALMIP that we only want the optimizer for the
current state <b>u{1}</b>, we use a fifth input argument.</p>
<table cellPadding="10" width="100%" id="table8">
<tr>
<td class="xmpcode">
<pre>[sol,diagnostics,Z,Valuefunction,Optimizer] = solvemp(F,obj,[],x{1},u{1});</pre>
</td>
</tr>
</table>
<p>To obtain the optimal control input for a specific state, we use <code>double</code>
and <code>assign</code> as usual.</p>
<table cellPadding="10" width="100%" id="table9">
<tr>
<td class="xmpcode">
<pre>assign(x{1},[-1;1]);
double(Optimizer)
ans =</pre>
<pre> 0.9549</pre>
</td>
</tr>
</table>
<p>The optimal cost at this state is available in the value function</p>
<table cellPadding="10" width="100%" id="table10">
<tr>
<td class="xmpcode">
<pre>double(Valuefunction)
ans =</pre>
<pre> 4.2732</pre>
</td>
</tr>
</table>
<p>To convince ourselves that the parametric solution is correct, let us
compare it to the solution obtained by solving the problem for this
specific state.</p>
<table cellPadding="10" width="100%" id="table11">
<tr>
<td class="xmpcode">
<pre>sol = solvesdp(F+set(x{1}==[-1;1]),obj);
double(u{1})
<font color="#000000">ans =</font></pre>
<pre><font color="#000000"> 0.9549</font></pre>
<pre>double(obj)
<font color="#000000">ans =</font></pre>
<pre><font color="#000000"> 4.2732</font></pre>
</td>
</tr>
</table>
<h3><a name="dp"></a>Dynamic programming with LTI systems</h3>
<p>The capabilities in YALMIP to work with piecewise functions and
parametric programs enable easy coding of dynamic programming
algorithms. The value function of a parametric linear program, with
respect to the parametric variable, is a convex PWA function, and this is
the function returned in the fourth output. YALMIP creates this function
internally, saves information such as convexity, and uses it like any
other <a href="extoperators.htm">nonlinear operator</a> (see more
details below). For
binary parametric linear programs, the value function is no longer
convex, but a so-called overlapping PWA function. This means that, at
each point, it is defined as the minimum of a set of convex PWA
functions. This information is also handled transparently by YALMIP; it
is simply another type of <a href="extoperators.htm">nonlinear operator</a>.
The main difference between the two function classes is that the second
requires the introduction of binary variables when used.</p>
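<p>To make the operator view concrete, the following sketch shows how a
value function returned by <code>solvemp</code> can be reused as a
cost-to-go term in a subsequent parametric problem. The names
<code>J</code>, <code>F</code>, <code>x</code> and <code>u</code> are
illustrative; a complete dynamic programming example is developed below.</p>
<table cellPadding="10" width="100%" id="table35">
	<tr>
		<td class="xmpcode">
		<pre>% J is assumed to be the Valuefunction returned by an earlier solvemp
% call, expressed in the variable x{k+1}. Since YALMIP treats J as an
% ordinary (convex PWA) operator, it can be added directly to the stage cost.
obj = norm(x{k},1) + norm(u{k},1) + J;
[sol,diagnostics,Z,J,Optimizer] = solvemp(F,obj,[],x{k},u{k});</pre>
		</td>
	</tr>
</table>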
<p>Note that the algorithms described in the following sections are mainly
intended for problems with (piecewise) linear objectives. Dynamic
programming with quadratic
objective functions gives rise to problems that are much harder to solve,
although it is supported.</p>
<p>To illustrate how easy it is to work with these PWA functions, we can
solve predictive control using dynamic programming, instead of setting
up the whole problem in one shot as we did above. As a first example, we solve a
standard linear predictive control problem. To fully understand
this example, it is required that you are familiar with predictive
control, dynamic programming and parametric optimization.</p>
<table cellPadding="10" width="100%" id="table12">
<tr>
<td class="xmpcode">
<pre>yalmip('clear')
clear all</pre>
<pre>% Model data
A = [2 -1;1 0];
B = [1;0];
C = [0.5 0.5];</pre>
<pre>nx = 2; % Number of states
nu = 1; % Number of inputs
% Prediction horizon
N = 4;</pre>
<pre>% States x(k), ..., x(k+N)
x = sdpvar(repmat(nx,1,N),repmat(1,1,N));</pre>
<pre>% Inputs u(k), ..., u(k+N) (last one not used)
u = sdpvar(repmat(nu,1,N),repmat(1,1,N));</pre>
</td>
</tr>