-\mat{R}^{-1} \mat{G} & \mat{0} & \mat{R}^{-1} \end{bmatrix} \quad \bar{\bvec{\eta}}_t = \begin{bmatrix} \bvec{\eta}_{x_t} \\ \bvec{\eta}_M \\ \bvec{0} \end{bmatrix}
\end{equation}
%
%
Notice that the new feature shares information only with the robot pose and not with the rest of the map. This is consistent with the fact that, if we knew the robot pose, the previous map wouldn't tell us anything about the location of the new feature. In terms of the graphical model, the lone edge connected to $\bvec{m}_{n+1}$ pairs it with the robot pose: the new feature is conditionally independent of the map, given the pose of the vehicle.
%
\subsection{Time Projection Step}

It is intuitive to break the time prediction step into two processes: state augmentation and marginalization. We first add the new pose, $\bvec{x}_{t+1}$, to the state vector, $\bvec{\xi}_t \rightarrow \bar{\bvec{\xi}}_{t+1} = \bigl[\bvec{x}_t^\top \; \underline{\bvec{x}_{t+1}^\top} \; \bvec{M}^\top \bigr]^\top$, where the motion of the robot is governed by the linear model
%
%
\begin{equation*}
  \bvec{x}_{t+1} = \mat{F} \bvec{x}_t + \bvec{w}_t
\end{equation*}
%
%
where $\bvec{w}_t \sim \normalcov{\bvec{0}}{\mat{Q}}$ is Gaussian noise.
We can think of the new pose as a new feature and, as a result, the new canonical parametrization follows from \eqref{eqn:addfeature},
%
%
\begin{subequations} \label{eqn:motionaugment}
  \begin{equation*}
    \pdf{\bvec{x}_t, \bvec{x}_{t+1}, \bvec{M} \mid \bvec{z}^t,\bvec{u}^{t+1}} = \normalinfo{\hat{\bvec{\eta}}_{t+1}}{\hat{\mat{\Lambda}}_{t+1}} \\[-5pt]
  \end{equation*}
  \begin{align}
    \hat{\mat{\Lambda}}_{t+1} &= \left[ \begin{array}{c|cc}
      \left(\mat{\Lambda}_{x_t x_t} + \mat{F}^\top \mat{Q}^{-1} \mat{F} \right) & -\mat{F}^\top \mat{Q}^{-1} & \mat{\Lambda}_{x_t M} \\ \\[-10pt]
      \hline \\[-10pt]
      -\mat{Q}^{-1}\mat{F} & \mat{Q}^{-1} & \mat{0} \\
      \mat{\Lambda}_{M x_t} & \mat{0} & \mat{\Lambda}_{MM}
    \end{array} \right]
    = \left[ \begin{array}{c|cccc}
      \hat{\mat{\Lambda}}_{t+1}^{11} & \; & \hat{\mat{\Lambda}}_{t+1}^{12} & \; \\ \\[-10pt]
      \hline \\[-6pt]
      \hat{\mat{\Lambda}}_{t+1}^{21} & \; & \hat{\mat{\Lambda}}_{t+1}^{22} & \; \\[-6pt]
      \phantom{.} & \phantom{.} & \phantom{.} & \phantom{.}
    \end{array} \right] \\
    %
    \hat{\bvec{\eta}}_{t+1} &= \left[ \begin{array}{c}
      \bvec{\eta}_{x_t} \\ \\[-10pt]
      \hline \\[-10pt]
      \bvec{0} \\
      \bvec{\eta}_M
    \end{array} \right]
    = \left[ \begin{array}{c}
      \hat{\bvec{\eta}}_{t+1}^1 \\ \\[-10pt]
      \hline \\[-6pt]
      \hat{\bvec{\eta}}_{t+1}^2 \\[-6pt]
      \phantom{.}
    \end{array} \right]
  \end{align}
\end{subequations}
%
%
As with a new feature, the new pose is linked with the old pose, but not with the map, as we indicate in the middle plot of Figure \ref{fig:projection}. Next, we get rid of the old robot pose from the state by marginalizing over the distribution,
%
%
\begin{equation*}
  \pdf{\bvec{x}_{t+1},\bvec{M} \mid \bvec{z}^t, \bvec{u}^{t+1}} = \int\limits_{\bvec{x}_t} \pdf{\bvec{x}_t, \bvec{x}_{t+1}, \bvec{M} \mid \bvec{z}^t, \bvec{u}^{t+1}} \, \partial \bvec{x}_t
\end{equation*}
%
%
Looking back at Table \ref{tab:margcond}, the corresponding information form follows if we let $\bvec{\alpha} = \bigl[\bvec{x}_{t+1}^\top \; \bvec{M}^\top \bigr]^\top$ and $\bvec{\beta} = \bvec{x}_t$,
%
%
\begin{subequations} \label{eqn:motionmarg}
  \begin{equation*}
    \pdf{\bvec{\xi}_{t+1} \mid \bvec{z}^t,\bvec{u}^{t+1}} = \normalinfo{\bar{\bvec{\eta}}_{t+1}}{\bar{\mat{\Lambda}}_{t+1}} \\[-5pt]
  \end{equation*}
  \begin{align}
    \bar{\mat{\Lambda}}_{t+1} &= \hat{\mat{\Lambda}}_{t+1}^{22} - \hat{\mat{\Lambda}}_{t+1}^{21} \left( \hat{\mat{\Lambda}}_{t+1}^{11} \right)^{-1} \hat{\mat{\Lambda}}_{t+1}^{12} \label{eqn:motionmarg_a} \\
    \bar{\bvec{\eta}}_{t+1} &= \hat{\bvec{\eta}}_{t+1}^2 - \hat{\mat{\Lambda}}_{t+1}^{21} \left( \hat{\mat{\Lambda}}_{t+1}^{11} \right)^{-1} \hat{\bvec{\eta}}_{t+1}^1 \label{eqn:motionmarg_b}
  \end{align}
\end{subequations}
%
%
Unlike the state augmentation component of the time prediction step, marginalization significantly changes the information matrix, as we show in the right-most schematic of Figure \ref{fig:projection}. Within the graphical model, marginalization has added edges between the new pose and all landmarks that were previously linked to the old robot state. Additionally, these features are now fully connected. Elements in the information matrix that were zero (i.e. missing edges) are now filled in as, in essence, information that was contained in the old pose has been distributed among the remaining map and pose. At the same time, the shade of the matrix elements (darker shades imply larger magnitudes) suggests that existing constraints have become weaker. These effects are typical of the time projection step, which tends to populate the information matrix while decreasing the magnitude of existing terms.

\subsection{Measurement Update Step}

Assume that the robot observes landmarks on an individual basis according to the linear measurement model,
%
%
\begin{equation*}
  \bvec{z}_t = \mat{H} \bvec{\xi}_t + \bvec{v}_t
\end{equation*}
%
%
that, again, includes additive Gaussian noise, $\bvec{v}_t \sim \normalcov{\bvec{0}}{\mat{R}}$.
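To make the two-stage projection concrete, the following sketch augments a scalar pose with its successor per \eqref{eqn:motionaugment} and then marginalizes out the old pose per \eqref{eqn:motionmarg}, checking the result against the familiar covariance-form prediction. Every numeric value is an invented toy assumption, not taken from the text.

```python
import numpy as np

# Toy 1-D world: state is [x_t, m] with one map feature.
# All numeric values here are illustrative assumptions.
Lam = np.array([[4.0, 1.0],    # information matrix over [x_t, m]
                [1.0, 3.0]])
eta = np.array([2.0, 1.0])     # information vector
F = np.array([[1.0]])          # motion model x_{t+1} = F x_t + w
Q = np.array([[0.2]])          # motion-noise covariance
Qi = np.linalg.inv(Q)

# 1) State augmentation: grow the state to [x_t, x_{t+1}, m].
Lam_aug = np.zeros((3, 3))
Lam_aug[0, 0] = Lam[0, 0] + (F.T @ Qi @ F)[0, 0]
Lam_aug[0, 1] = Lam_aug[1, 0] = -(Qi @ F)[0, 0]
Lam_aug[1, 1] = Qi[0, 0]
Lam_aug[0, 2] = Lam_aug[2, 0] = Lam[0, 1]  # old pose keeps its map links
Lam_aug[2, 2] = Lam[1, 1]
eta_aug = np.array([eta[0], 0.0, eta[1]])

# 2) Marginalization over x_t: Schur complement of the x_t block.
keep, drop = [1, 2], [0]
G = Lam_aug[np.ix_(keep, drop)] @ np.linalg.inv(Lam_aug[np.ix_(drop, drop)])
Lam_bar = Lam_aug[np.ix_(keep, keep)] - G @ Lam_aug[np.ix_(drop, keep)]
eta_bar = eta_aug[keep] - G @ eta_aug[drop]

# Sanity check against the covariance-form prediction
# (with F = 1 the predicted covariance just adds Q to the pose block).
Sigma = np.linalg.inv(Lam)
Sigma_pred = Sigma.copy()
Sigma_pred[0, 0] += Q[0, 0]
assert np.allclose(np.linalg.inv(Lam_bar), Sigma_pred)
assert np.allclose(np.linalg.solve(Lam_bar, eta_bar), Sigma @ eta)  # F = 1: mean unchanged
```

Note how the Schur complement fills in a pose-feature entry that was zero in the augmented matrix, which is exactly the fill-in behavior described above.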
We use Bayes' rule to incorporate this information into the posterior, $\pdf{\bvec{\xi}_t \mid \bvec{z}^{t-1},\bvec{u}^t} = \normalinfo{\bar{\bvec{\eta}}_t}{\bar{\mat{\Lambda}}_t}$, via
%
%
\begin{equation}
  \pdf{\bvec{\xi}_t \mid \bvec{z}^t,\bvec{u}^t} \propto \pdf{\bvec{z}_t \mid \bvec{\xi}_t} \, \pdf{\bvec{\xi}_t \mid \bvec{z}^{t-1}, \bvec{u}^t}.
\end{equation}
%
%
This is equivalent to first adding $\bvec{z}_t$ to the state vector and then conditioning on the measurement (per Table \ref{tab:margcond}). The resulting information filter update step is:
%
%
\begin{subequations} \label{eqn:measupdate_info}
  \begin{equation*}
    \pdf{\bvec{\xi}_t \mid \bvec{z}^t,\bvec{u}^t} = \normalinfo{\bvec{\eta}_t}{\mat{\Lambda}_t} \\[-5pt]
  \end{equation*}
  \begin{align}
    \mat{\Lambda}_t &= \bar{\mat{\Lambda}}_t + \mat{H}^\top \mat{R}^{-1} \mat{H} \label{eqn:measupdate_info_a} \\
    \bvec{\eta}_t &= \bar{\bvec{\eta}}_t + \mat{H}^\top \mat{R}^{-1} \bvec{z}_t \label{eqn:measupdate_b}
  \end{align}
\end{subequations}
%
%
Each row of the matrix $\mat{H}$ corresponds to a different measurement and is sparse, with non-zero values only at positions corresponding to the robot pose and the feature that is observed. As a result, the $\mat{H}^\top \mat{R}^{-1} \mat{H}$ term is also sparse and modifies the off-diagonal constraints between the vehicle and observed features, as well as their diagonal elements. In essence, we are adding ``new information'' as we strengthen links between the robot and map.

\section{Conclusion}

This description of feature-based SLAM information filters will, hopefully, provide some insight into the interesting characteristics of the canonical form and its application to SLAM. It is, by no means, complete, since there are many important issues that we haven't covered (e.g. nonlinear models, mean recovery\footnote{Remember, we do not have an explicit representation for the mean, but can recover it from the information vector and matrix via \eqref{eqn:sftoif}.}, data association, etc.).
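As a closing numeric illustration of the additive update \eqref{eqn:measupdate_info} and of the mean recovery mentioned in the footnote, the following sketch cross-checks the canonical-form update against the equivalent Kalman (covariance-form) update. All numeric values are invented toy assumptions, not taken from the text.

```python
import numpy as np

# Toy prior over [x, m] in canonical (information) form.
# All numeric values are illustrative assumptions.
Lam_bar = np.array([[3.0, -1.0],
                    [-1.0, 2.0]])
eta_bar = np.array([1.0, 0.5])

# Relative observation of the feature: z = H xi + v,  v ~ N(0, R).
# H is sparse: non-zero only at the pose and the observed feature.
H = np.array([[-1.0, 1.0]])
R = np.array([[0.25]])
z = np.array([0.8])

# Information-filter update: purely additive and sparse.
Ri = np.linalg.inv(R)
Lam = Lam_bar + H.T @ Ri @ H
eta = eta_bar + H.T @ Ri @ z

# Mean recovery: the mean is implicit in the canonical form,
# mu = Lambda^{-1} eta (solved rather than inverted explicitly).
mu = np.linalg.solve(Lam, eta)

# Cross-check against the equivalent Kalman (covariance-form) update.
Sig_bar = np.linalg.inv(Lam_bar)
mu_bar = Sig_bar @ eta_bar
K = Sig_bar @ H.T @ np.linalg.inv(H @ Sig_bar @ H.T + R)
assert np.allclose(mu, mu_bar + K @ (z - H @ mu_bar))
```

The contrast is the point of the canonical form: the covariance-form update needs a gain computation and a matrix product over the full state, while the information-form update only adds a sparse term.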
For a more detailed discussion of SLAM information filters, including the scalability issues, \cite{thrun04a,eustice05a,walter05a} are good references.
%
%
%
\bibliography{bibliography}

\end{document}