<TD WIDTH=342> <P>Represents only the Bayesian likelihood of a state observation</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=210> <P><STRONG>Functional_filter</STRONG></TD> <TD WIDTH=342> <P>Represents only the filter prediction by a simple functional (non-stochastic) model</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=210> <P><STRONG>State_filter</STRONG></TD> <TD WIDTH=342> <P>Represents only the filter state and an update on that state</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=210> <P><STRONG>Kalman_state_filter</STRONG></TD> <TD WIDTH=342> <P>Kalman representation of state statistics.<BR>Represents a state vector and a covariance matrix, that is, the 1st (mean) and 2nd (covariance) moments of a distribution.</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=210> <P><B>Information_state_filter</B></TD> <TD WIDTH=342> <P>Information representation of state statistics.<BR>Effectively the inverse of the Kalman_state_filter representation.</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=210> <P><STRONG>Linrz_filter</STRONG></TD> <TD WIDTH=342> <P>Model interface for linear or gradient-linearized Kalman filters.<BR>Specifies filter operation using predict and observe functions</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=210> <P><STRONG>Sample_filter</STRONG></TD> <TD WIDTH=342> <P>Discrete representation of state statistics.<BR>Base class for filters representing the state probability distribution by discrete sampling</TD> </TR> </TABLE> </DD></DL><H2 ALIGN=LEFT>Model Hierarchy</H2><P ALIGN=LEFT>These two tables show some of the classes in the hierarchy upon which models are built.</P><A NAME="predict models"></A><DL> <DD> <TABLE WIDTH=561 BORDER=3 CELLPADDING=2 CELLSPACING=0> <COL WIDTH=223> <COL WIDTH=324> <TR VALIGN=TOP> <TD WIDTH=223> <P><STRONG>Sampled_predict_model</STRONG></TD> <TD WIDTH=324> <P>Sampled stochastic prediction model</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=223> <P><STRONG>Functional_predict_model</STRONG></TD> <TD WIDTH=324> <P>Functional (non-stochastic) prediction model</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=223> <P><STRONG>Addative_predict_model</STRONG></TD> <TD WIDTH=324> <P>Additive Gaussian noise prediction model.<BR>This is the fundamental model for linear/linearized filtering, with noise added to a functional prediction</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=223> <P><STRONG>Linrz_predict_model</STRONG></TD> <TD WIDTH=324> <P>Linearized prediction model with the Jacobian of the non-linear functional part</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=223> <P><STRONG>Linear_predict_model</STRONG></TD> <TD WIDTH=324> <P>Linear prediction model</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=223> <P><STRONG>Linear_invertable_predict_model</STRONG></TD> <TD WIDTH=324> <P>Linear prediction model which is invertible</TD> </TR> </TABLE> </DD></DL><P> </P><A NAME="observe models"></A><DL> <DD> <TABLE WIDTH=558 BORDER=3 CELLPADDING=2 CELLSPACING=0> <COL WIDTH=222> <COL WIDTH=322> <TR VALIGN=TOP> <TD WIDTH=222 HEIGHT=39> <P><STRONG>Likelihood_observe_model</STRONG></TD> <TD WIDTH=322> <P>Likelihood observation model<BR>The most fundamental Bayesian definition of an observation</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=222> <P><STRONG>Functional_observe_model</STRONG></TD> <TD WIDTH=322> <P>Functional (non-stochastic) observation model</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=222> <P><STRONG>Linrz_uncorrelated_observe_model</STRONG></TD> <TD WIDTH=322> <P>Linearized observation model with the Jacobian of the non-linear functional part and additive uncorrelated noise</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=222 HEIGHT=23> <P><STRONG>Linrz_correlated_observe_model</STRONG></TD> <TD WIDTH=322> <P>As above, but with additive correlated noise</TD> </TR> </TABLE> </DD></DL><H1>Capabilities</H1><H2>Probabilistic representation of state</H2><P>Bayes' rule is usually defined in terms of probability density functions (PDFs). However, PDFs never appear in Bayes++: they are always represented by their statistics.</P><P>This is for good reason: there is very little that can be done algorithmically with a function.
However, the sufficient statistics, given the assumptions of a filter, can easily be manipulated to implement Bayes' rule. Each filter class is derived from base classes that represent the statistics used, for example the Kalman_state_filter and Sample_filter base classes.</P><P>It would be possible to use a common abstract base class that enforces the implementation of a PDF function: a function that maps state to probability. This would provide a very weak method to view the PDF of the state. However, such a function could not be efficiently implemented by all schemes, so enforcing this requirement in the base class would be highly restrictive.</P><H2>Linear and Linearized models</H2><P>Many of the filters use classical linear estimation techniques, such as the Kalman filter. To make them useful they are applied in modified forms that cope with <U>linearized</U> models of some kind. Commonly, a gradient-linearized model is used to update the uncertainty, while the state is directly propagated through the non-linear model. This is the <I>Extended</I> form used by the Extended Kalman Filter.</P><P>However, some numerical schemes cannot be modified using the <I>Extended</I> form. In particular, it is not always possible to use the extended form with correlated noises. Where this is the case, the linearized model is used for both the uncertainty and the state.</P><P>There are also many Bayesian filters that work with non-Gaussian noise and non-linear models, such as the <I>SIR_filter</I>. The SIR scheme has been built so that it works with a likelihood model.</P><P>The filters support discontinuous models such as those depending on angles. In this case the model must be specifically formulated to normalize the states. However, some schemes need to rely on an additional normalization function. Normalization works well for functions that are locally continuous after normalization (such as angles).
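</P><P>As a concrete illustration, a state element representing an angle can be wrapped back into the interval (-π, π] after each prediction or observation step. The following is a minimal, hypothetical sketch of such a normalization function; the name and signature are illustrative only, not the actual Bayes++ interface:</P>

```cpp
#include <cmath>

// Hypothetical helper (illustrative only, not Bayes++ API): wrap an
// angular state element into the interval (-pi, pi]. A scheme that
// supports discontinuous angle states would apply a function like
// this to the relevant state elements after each predict/observe.
double normalize_angle(double a) {
    const double PI = 3.14159265358979323846;
    a = std::fmod(a, 2.0 * PI);  // reduce into (-2*pi, 2*pi)
    if (a <= -PI)
        a += 2.0 * PI;           // shift up into (-pi, pi]
    else if (a > PI)
        a -= 2.0 * PI;           // shift down into (-pi, pi]
    return a;
}
```

<P>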
Non-linear functions that cannot be made into locally continuous models are not appropriate for linear filters.</P><H2>Interface Regularity</H2><P>Where possible, the Schemes have been designed to all have the same interface, providing for easy interchange. However, this is not possible in all cases, as the mathematical methods vary greatly. For efficiency, some schemes also implement additional functions; these can be used to avoid inefficiencies where the general form is not required.</P><P>Scheme class constructors are irregular (their parameter lists vary) and must be used with care. Each numerical scheme has different requirements, and the representation size needs to be parameterized. A template class Filter_scheme can be used to provide a generic constructor interface. It provides specializations for all the Schemes so they can be constructed with a common parameter list.</P><H3>Open interface</H3><P>The design of the class hierarchy is deliberately open. Many of the variables associated with schemes are exposed as <I>public members</I>. For example, the covariance filter’s innovation covariance is public. This allows efficient algorithms to be implemented using the classes. In particular, it is often the case that subsequent computations reuse values that have already been computed by the numerical schemes. Each scheme defines a <I>public state representation</I>.</P><P>Furthermore, many temporaries are <I>protected members</I>, allowing derived classes to modify a scheme without the additional overhead of allocating their own temporaries.</P><P>Open interfaces are potentially hazardous.
The danger is that abuse could result in unnecessary dependencies on particular implementation characteristics.</P><H4>Public state representation: initialization and update</H4><P>The two functions <B>init</B> and <B>update</B> are defined in the filter class hierarchy for classes with names ending in <B>_state_filter</B>. They are very important in allowing the <I>public state representation</I> to be managed.</P><BLOCKQUOTE><P><U>After</U> the public representation of a scheme is changed (externally), the filter should be <I>initialized</I> with an <B>init</B> function. The scheme may define additional init functions that allow it to be initialized from alternative representations. <B>init()</B> is defined for schemes derived from <B>Kalman_state_filter</B>.</P></BLOCKQUOTE><BLOCKQUOTE><P><U>Before</U> the public representation of a scheme is used, the filter should be <I>updated</I> with an <B>update</B> function. The scheme may define additional update functions that allow it to update alternative representations. <B>update()</B> is defined for schemes derived from <B>Kalman_state_filter</B>.</P></BLOCKQUOTE><H4>Sharing schemes' state representation</H4><P>The filter hierarchy has been specifically designed to allow state representation to be shared. Schemes' state representations are inherited using one or more <B>_state_filter</B> classes as virtual base classes. It is therefore possible to combine multiple schemes (using multiple inheritance) so that only a single copy of each state representation exists.</P><P>The <B>init</B> and <B>update</B> functions should be used to coordinate between the schemes and the state representation. This allows precise control of the numerical conversions necessary for different schemes to share the state.</P><H4>Assignment and Copy Construction</H4><P>Filter classes can be assigned when they are of the same size. In cases where a class includes members in addition to the public representation, assignment is optimized so that only the public representation is assigned.
Assignment is equivalent to an <I>update</I>, assignment of the public representation, and <I>initialization</I> from the new state.</P><P>Copy constructors are NOT defined. Generally the classes are expensive to copy, so copies should be avoided. Instead, references or (smart) pointers should be combined with assignment to create copies if necessary.</P>
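<P>The init/update protocol can be sketched with a deliberately simplified, hypothetical filter class. Nothing below is the real Bayes++ interface: the 1-D members x and X and the internal covariance form are illustrative assumptions. The point is the discipline it demonstrates: call init after changing the public representation externally, and update before reading it back:</P>

```cpp
// Hypothetical 1-D sketch of the init/update discipline (not the
// real Bayes++ interface). The scheme keeps an internal form of the
// covariance that must be kept consistent with the public members.
class Toy_state_filter {
public:
    double x = 0.0;  // public state representation: state
    double X = 0.0;  // public state representation: covariance

    // Call AFTER x or X has been changed externally: the scheme
    // rebuilds its internal representation from the public one.
    void init() { internal_X = X; }

    // Call BEFORE x or X is read externally: the scheme publishes
    // its internal representation back to the public members.
    void update() { X = internal_X; }

    // Numerical steps operate only on the internal form.
    void predict(double q) { internal_X += q; }  // add process noise

private:
    double internal_X = 0.0;  // internal (scheme-specific) form
};
```

<P>A caller would set x and X, call init, run prediction steps, then call update before reading X back. Assignment between two such filters of the same size would, as described above, amount to update on the source, copying the public members, and init on the destination.</P>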