\begin{equation}
[T_{bij_1 k}, T_{eij_1 k}] \subset [t_1, +\infty)
\end{equation}
\begin{equation}
[T_{bij_m 1}, T_{eij_m 1}] \subset [0, t_2]
\end{equation}
The first round of event $i$ cannot begin earlier than a certain time $t_1$, and the final match of event $i$ cannot end later than a certain time $t_2$.

The constraints above describe the Olympics scheduling problem. The goal is to find a solution that satisfies all of the constraints, so this is in fact a constraint satisfaction problem. As mentioned before, constraint-based methods are backtracking methods in nature, and their efficiency depends on the problem's structure and on the backtracking policy. As the feasible domain gets smaller, the satisfiability drops abruptly beyond a certain point, and near that point it is very difficult to find a feasible solution or to decide that none exists$^{[19]}$. This phenomenon is called phase transition. We therefore consider transforming the model: the feasible domain can be broadened by softening constraint (9), after which novel methods can be tried. Let
\begin{equation}
J = \sum\limits_i w_i T_i^2
\end{equation}
In (10), $T_i$ is given by
\begin{equation}
T_i = \left\{ \begin{array}{ll}
T_{eij_m 1} - T_{ei}, & T_{eij_m 1} > T_{ei} \\
0, & {\rm else}
\end{array} \right.
\end{equation}
The constraint satisfaction problem can then be transformed into a constrained optimization problem:
\[
\min J, \quad {\rm s.t.\ constraints\ (1)-(8)\ are\ satisfied}.
\]
Constraint (5) can be rewritten as ${\pmb g}({\pmb x}) \le {\pmb 0}$.

\section{Solution methodology}

\subsection{Lagrangian relaxation}

Consider the constrained optimization problem. Constraints (1), (2), (3), (4), (6), and (8) are all local constraints: each involves only a certain field. Constraint (7) does not affect the solution, and constraint (5) is the only global constraint. We relax constraint (5) as follows:
\begin{equation}
L({\pmb x}, {\pmb\lambda}) = J + {\pmb\lambda}^{\rm T} \Big( \sum {\pmb O}[(i,j,k),m,t] - {\pmb m}_{\max} \Big)
\end{equation}
where ${\pmb x}$ is the vector consisting of $T_{bijk}$ and $T_{eijk}$, ${\pmb\lambda}$ is the vector of Lagrangian multipliers, and ${\pmb\lambda} \ge {\pmb 0}$. The dual problem is
\begin{equation}
q({\pmb\lambda}) = \min_{\pmb x} L({\pmb x}, {\pmb\lambda})
\end{equation}
s.t. constraints (1), (2), (3), (4), (6), and (8) are satisfied. (12) can be rewritten as
\begin{equation}
L({\pmb x}, {\pmb\lambda}) = \sum\limits_i L_i - {\pmb\lambda}^{\rm T} {\pmb m}_{\max}
\end{equation}
\begin{equation}
L_i = w_i T_i^2 + {\pmb\lambda}^{\rm T} \Big( \sum O[(i,j,k),m,t] \Big)
\end{equation}
In view of the separability of the original problem, the relaxed problem can be decomposed into $I$ sub-problems as in (15). By the duality theorem, $q({\pmb\lambda}^\ast) = J^\ast$: if the optimal Lagrangian multipliers are known, the original problem can be solved by optimizing the separable sub-problems, and if approximately optimal multipliers are known, the original problem can be solved approximately. We use the Lagrangian relaxation framework shown in Fig.\,1 to solve the constrained optimization problem.

\vskip3mm
{\centering\vbox{\centerline{\psfig{figure=01.eps,width=8cm}}
\vskip2mm {\small Fig.\,1\quad The Lagrangian relaxation framework}}}
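The framework in Fig.\,1 alternates between solving the decomposed sub-problems for fixed multipliers and updating the multipliers from the resulting dual information. A minimal Python sketch of this loop follows; the four callbacks (\texttt{solve\_subproblem}, \texttt{make\_feasible}, \texttt{objective}, \texttt{update\_multipliers}) are hypothetical stand-ins for the sub-problem solver of (15), the feasibility heuristic, the objective $J$, and the multiplier update of Section 3.2, and are assumptions rather than the paper's own interfaces.
\begin{verbatim}
import numpy as np

def lagrangian_relaxation(solve_subproblem, make_feasible, objective,
                          update_multipliers, events, m_max,
                          eps=1e-3, max_iter=100):
    # solve_subproblem(i, lam) -> (x_i, L_i) minimizes sub-problem (15);
    # make_feasible repairs the relaxed schedule into a feasible one;
    # update_multipliers is the dual update of Section 3.2.
    m_max = np.asarray(m_max, dtype=float)
    lam = np.zeros_like(m_max)                         # lambda >= 0
    best_J, best_x = float("inf"), None
    for _ in range(max_iter):
        # Decomposition: solve the I separable sub-problems for fixed lam.
        subs = [solve_subproblem(i, lam) for i in events]
        xs = [s[0] for s in subs]
        q = sum(s[1] for s in subs) - float(lam @ m_max)   # dual value (14)
        x_g = make_feasible(xs)                        # heuristic repair
        J = objective(x_g)
        if J < best_J:
            best_J, best_x = J, x_g
        if best_J - q <= eps:                          # duality gap check
            break
        lam = np.maximum(update_multipliers(lam, xs, q, best_J), 0.0)
    return best_x, best_J
\end{verbatim}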
\subsection{Lagrangian multipliers updating}

The Lagrangian multipliers are the solution of the dual problem, which is proved to be a concave problem. The existing methods for solving the dual problem depend on various kinds of prior knowledge$^{[15-18]}$. The subgradient projection method with variable diameter for dual optimization, which does not depend on any prior knowledge, is presented in [20-21] and modified in the following.

Since $q({\pmb\lambda})$ is concave and, for any $\Delta{\pmb\lambda}$, $q({\pmb\lambda} + \Delta{\pmb\lambda}) \le q({\pmb\lambda}) + \left\langle {\pmb g}({\pmb x}^{\pmb\lambda}), \Delta{\pmb\lambda} \right\rangle$, it can be concluded that ${\pmb g}({\pmb x})$ is a subgradient of $q({\pmb\lambda})$.

Let $L(q, q_k^{lev}) = \{{\pmb\lambda} \mid q({\pmb\lambda}) \ge q_k^{lev}\}$; then $M^\ast = L(q, q^\ast)$ is the set of the optimal solutions. Since $q({\pmb\lambda})$ is difficult to express directly, we rewrite $M^\ast$ by means of the subgradient:
\[
M^\ast = \bigcup\nolimits_{{\pmb\lambda}' \in D} L(\overline{q}({\pmb\lambda}; {\pmb\lambda}'), q^\ast), \quad
\overline{q}({\pmb\lambda}; {\pmb\lambda}') = q({\pmb\lambda}') + \left\langle {\pmb g}({\pmb x}^{{\pmb\lambda}'}), {\pmb\lambda} - {\pmb\lambda}' \right\rangle,
\]
where $D$ is the domain of ${\pmb\lambda}$. Similarly, the set of the solutions constrained by $q_k^{lev}$ can be rewritten as
\[
L(q, q_k^{lev}) = \bigcup\nolimits_{{\pmb\lambda}' \in D} L(\overline{q}({\pmb\lambda}; {\pmb\lambda}'), q_k^{lev}).
\]
Then let $q^k({\pmb\lambda}) = \overline{q}({\pmb\lambda}; {\pmb\lambda}_k)$, $H^k = \{{\pmb\lambda} \mid q^k({\pmb\lambda}) \ge q_k^{lev}\}$, and ${\pmb y}_k = P_{H^k}({\pmb\lambda}_k)$, where $P_{H^k}({\pmb\lambda}_k)$ denotes the projection of ${\pmb\lambda}_k$ onto $H^k$, $P_+(\bullet)$ is the operator projecting onto the positive semi-space, and $d_S(\bullet)$ denotes the distance to the set $S$. With the closed convex set $H^k$ and an admissible stepsize $t$, the relaxation operator is $R_{H^k,t}({\pmb\lambda}) = {\pmb\lambda} + t(P_{H^k}({\pmb\lambda}) - {\pmb\lambda})$, which has the Fej\'{e}r contraction property
\[
\left| {\pmb y} - R_{H^k,t}({\pmb\lambda}) \right|^2 \le \left| {\pmb y} - {\pmb\lambda} \right|^2 - t_{\min}(2 - t_{\max})\, d_{H^k}^2({\pmb\lambda}), \quad \forall\, {\pmb y} \in H^k.
\]
Recalling that $H^k = \{{\pmb\lambda} \mid q^k({\pmb\lambda}) \ge q_k^{lev}\}$, the operator takes the closed form
\[
R_{H^k,t}({\pmb\lambda}_k) = {\pmb\lambda}_k + \frac{t\,(q_k^{lev} - q({\pmb\lambda}_k))\, {\pmb g}({\pmb x}_k)}{\left| {\pmb g}({\pmb x}_k) \right|^2}.
\]
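As a concrete reading of this closed form, a single multiplier update can be sketched in Python as follows; the function name and arguments are illustrative only, and the sketch assumes $q_k^{lev} \ge q({\pmb\lambda}_k)$ so that ${\pmb\lambda}_k \notin H^k$ and the projection is nontrivial.
\begin{verbatim}
import numpy as np

def multiplier_step(lam, g, q_lam, q_lev, t):
    # One application of R_{H^k,t} followed by P_+: move lam toward the
    # halfspace H^k = {l : q(lam) + <g, l - lam> >= q_lev}, then clip
    # to the nonnegative orthant.
    step = t * (q_lev - q_lam) * g / float(g @ g)  # t*(P_{H^k}(lam) - lam)
    return np.maximum(lam + step, 0.0)             # P_+(...)
\end{verbatim}
With $t \in [t_{\min}, t_{\max}]$ this reproduces the update of ${\pmb\lambda}_{k+1}$ in Step 1 of Algorithm 1 below.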
The method is stated as follows.

\textbf{Algorithm 1.} The subgradient projection method with variable diameter.

\textbf{Step 0.} Let $k = 0$, ${\pmb\lambda}_0 \ge 0$, $\varepsilon > 0$, $\delta_0 > 0$, $d > 0$, $\rho_0 = 0$, $0 < \omega < 1$, $k_d > 1$. Let ${\pmb x}_0$ be the solution of $q({\pmb\lambda}_0) = L({\pmb x}_0, {\pmb\lambda}_0)$, and let ${\pmb x}_0^g$ be the corresponding feasible solution;
\begin{align*}
&J_0 = J({\pmb x}_0^g), \quad q_0 = q({\pmb\lambda}_0), \quad q_0^{lev} = \omega J_0 + (1-\omega) q_0, \quad q_0^{up} = J_0,\\
&\overline{\pmb x} = {\pmb x}_0^g;
\end{align*}

\textbf{Step 1.} If $J_k - q_k \le \varepsilon$, then $\overline{\pmb x}$ is the final solution and the algorithm ends; otherwise, continue. For $0 < t_{\min} \le t_k \le t_{\max} < 1 - \delta$, where $\delta$ is a small positive real number,
\[
{\pmb\lambda}_{k+1} = P_+ \left( {\pmb\lambda}_k + \frac{t_k (q_k^{lev} - q({\pmb\lambda}_k))\, {\pmb g}({\pmb x}_k)}{\left| {\pmb g}({\pmb x}_k) \right|^2} \right);
\]
\[
\rho_{k+1} = \rho_k + t_k (2 - t_k)\, d_{H^k}^2({\pmb\lambda}_k) + d_{{\bf R}_+^N}^2 \big( {\pmb\lambda}_k + t_k (P_{H^k}({\pmb\lambda}_k) - {\pmb\lambda}_k) \big);
\]
If $q({\pmb\lambda}_{k+1}) \ge q_k$, then $q_{k+1} = q({\pmb\lambda}_{k+1})$; otherwise, $q_{k+1} = q_k$. Let $k = k + 1$;

\textbf{Step 2.} Let ${\pmb x}_k$ be the solution of $q({\pmb\lambda}_k) = L({\pmb x}_k, {\pmb\lambda}_k)$, and ${\pmb x}_k^g$ the corresponding feasible solution. If $J({\pmb x}_k^g) \le J_{k-1}$, then $J_k = J({\pmb x}_k^g)$ and $\overline{\pmb x} = {\pmb x}_k^g$; otherwise, $J_k = J_{k-1}$;

\textbf{Step 3.1.} If $\rho_k > d$, then $q_k^{up} = \min\{q_{k-1}^{lev}, J_k\}$ and $\rho_k = 0$;

\textbf{Step 3.2.} If $\rho_k \le d$, then $q_k^{up} = q_{k-1}^{up}$;

\textbf{Step 4.} Choose $0 \le \delta_k \le \delta_{k-1}$; if $q_k > q_k^{lev} - \delta_k$, then $q_k^{up} = J_k$, $d = k_d d$, $\rho_k = 0$;

\textbf{Step 5.} $q_k^{lev} = \omega q_k^{up} + (1-\omega) q_k$; go to Step 1.

To demonstrate the convergence of the method, we have three lemmas and one theorem, where $L_f = \mathop{\sup}\limits_{{\pmb x} \in S} \left| {\pmb g}({\pmb x}) \right|$ and $S$ is the domain of the primal problem's variables.

\textbf{Lemma 1.} If $d < \left| {\pmb\lambda}_k - {\pmb\lambda}^\ast \right|^2$, then there exists some $k$ such that $q_k^{lev} \le q_k + \delta_k$, where $\delta_k$ is not smaller than a certain constant.

\textbf{Lemma 2.} If $d \ge \left| {\pmb\lambda}_k - {\pmb\lambda}^\ast \right|^2$, $\delta$ is large enough, and $\delta_k = 0$ for every $k$, then $q_k^{up} \ge q^\ast$.

\textbf{Lemma 3.} If $d \ge \left| {\pmb\lambda}_k - {\pmb\lambda}^\ast \right|^2$ and $\delta_k = 0$ for every $k$, then for any $\varepsilon > 0$,
\[
k > \frac{d L_f^2}{t_{\min}(2 - t_{\max})\, \omega^2 (1-\omega)^2 \varepsilon^2} - \frac{\big( (f({\pmb x}_0) - q({\pmb\lambda}_0))^2 / \varepsilon \big)^2}{2e \ln(\omega)} \;\Rightarrow\; q_k^{up} - q_k < \varepsilon.
\]
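To make the bookkeeping of Algorithm 1 concrete, the following Python sketch implements one plausible reading of Steps 0--5 with $\delta_k \equiv 0$ and a fixed admissible stepsize; \texttt{solve\_dual}, \texttt{make\_feasible}, and \texttt{objective} are hypothetical callbacks in the spirit of the framework sketch above, not interfaces defined by the paper.
\begin{verbatim}
import numpy as np

def algorithm1(solve_dual, make_feasible, objective, lam0,
               eps=1e-3, omega=0.5, d=10.0, k_d=2.0,
               t_min=0.1, t_max=0.9, max_iter=1000):
    # solve_dual(lam) -> (x, q(lam), g(x)) is a hypothetical dual oracle.
    lam = np.maximum(np.asarray(lam0, dtype=float), 0.0)
    x, q_cur, g = solve_dual(lam)                       # Step 0
    q_best = q_cur
    J_best = objective(make_feasible(x))
    q_up, rho = J_best, 0.0
    q_lev = omega * J_best + (1.0 - omega) * q_best
    for _ in range(max_iter):
        if J_best - q_best <= eps:                      # Step 1: gap test
            break
        t = 0.5 * (t_min + t_max)                       # admissible stepsize
        g2 = float(g @ g)
        lam = np.maximum(lam + t * (q_lev - q_cur) * g / g2, 0.0)
        rho += t * (2.0 - t) * max(q_lev - q_cur, 0.0) ** 2 / g2
        x, q_cur, g = solve_dual(lam)
        q_best = max(q_best, q_cur)                     # keep best dual value
        J_best = min(J_best, objective(make_feasible(x)))   # Step 2
        if rho > d:                                     # Step 3.1: budget spent
            q_up, rho = min(q_lev, J_best), 0.0
        if q_best > q_lev:                              # Step 4 (delta_k = 0)
            q_up, d, rho = J_best, k_d * d, 0.0
        q_lev = omega * q_up + (1.0 - omega) * q_best   # Step 5: new level
    return lam, q_best, J_best
\end{verbatim}
Here $d_{H^k}^2({\pmb\lambda}_k) = (q_k^{lev} - q({\pmb\lambda}_k))^2 / |{\pmb g}({\pmb x}_k)|^2$ is used for the accumulator $\rho$, since $H^k$ is a halfspace; the extra term $d_{{\bf R}_+^N}^2(\cdot)$ of Step 1 is omitted for brevity.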
