<dt>corners<dd>Initial coordinates of the input corners and refined coordinates on
output.
<dt>count<dd>Number of corners.
<dt>win<dd>Half size of the search window. For example, if <code>win</code>=(5,5), then a
5*2+1 × 5*2+1 = 11 × 11 search window is used.
<dt>zero_zone<dd>Half size of the dead region in the middle of the search zone, over which the
summation in the formulae below is not done. It is sometimes used to avoid possible singularities of the autocorrelation matrix.
The value (-1,-1) indicates that there is no such zone.
<dt>criteria<dd>Criteria for termination of the iterative process of corner refinement.
That is, the process of corner position refinement stops either after a certain number of iterations or
when the required accuracy is achieved. The <code>criteria</code> may specify either or both of the maximum
number of iterations and the required accuracy.
</dl><p>
The function <code>cvFindCornerSubPix</code> iterates to find the sub-pixel accurate locations
of corners, or radial saddle points, as shown in the picture below.</p>
<p>
<img align="center" src="pics/cornersubpix.png">
</p>
<p>
The sub-pixel accurate corner locator is based on the observation that every vector
from the center <code>q</code> to a point <code>p</code> located within a neighborhood of <code>q</code> is orthogonal
to the image gradient at <code>p</code>, subject to image and measurement noise. Consider the expression:
</p>
<pre>
ε<sub>i</sub>=DI<sub>p<sub>i</sub></sub><sup>T</sup>•(q-p<sub>i</sub>)
</pre>
where <code>DI<sub>p<sub>i</sub></sub></code> is the image gradient
at one of the points <code>p<sub>i</sub></code> in a neighborhood of <code>q</code>.
The value of <code>q</code> is to be found such that <code>ε<sub>i</sub></code> is minimized.
A system of equations may be set up with each <code>ε<sub>i</sub></code> set to zero:</p>
<pre>
sum<sub>i</sub>(DI<sub>p<sub>i</sub></sub>•DI<sub>p<sub>i</sub></sub><sup>T</sup>)•q - sum<sub>i</sub>(DI<sub>p<sub>i</sub></sub>•DI<sub>p<sub>i</sub></sub><sup>T</sup>•p<sub>i</sub>) = 0
</pre>
<p>where the gradients are summed within a neighborhood ("search window") of <code>q</code>.
Calling the first gradient term <code>G</code> and the second gradient term <code>b</code> gives:</p>
<pre>
q=G<sup>-1</sup>•b
</pre>
<p>
The algorithm sets the center of the neighborhood window to this new center <code>q</code>
and then iterates until the center moves by less than a set threshold.
</p>
<hr><h3><a name="decl_cvGoodFeaturesToTrack">GoodFeaturesToTrack</a></h3>
<p class="Blurb">Determines strong corners on image</p>
<pre>
void cvGoodFeaturesToTrack( const CvArr* image, CvArr* eig_image, CvArr* temp_image,
CvPoint2D32f* corners, int* corner_count,
double quality_level, double min_distance,
const CvArr* mask=NULL, int block_size=3,
int use_harris=0, double k=0.04 );
</pre><p><dl>
<dt>image<dd>The source 8-bit or floating-point 32-bit, single-channel image.
<dt>eig_image<dd>Temporary floating-point 32-bit image of the same size as <code>image</code>.
<dt>temp_image<dd>Another temporary image of the same size and same format as <code>eig_image</code>.
<dt>corners<dd>Output parameter. Detected corners.
<dt>corner_count<dd>Output parameter. Number of detected corners.
<dt>quality_level<dd>Multiplier for the maximum of minimal eigenvalues over the image; specifies the minimal accepted
quality of image corners.
<dt>min_distance<dd>Limit specifying the minimum possible distance between returned
corners; Euclidean distance is used.
<dt>mask<dd>Region of interest. The function selects points either in the specified region
or in the whole image if the mask is NULL.
<dt>block_size<dd>Size of the averaging block, passed to underlying
<a href="#decl_cvCornerMinEigenVal">cvCornerMinEigenVal</a> or
<a href="#decl_cvCornerHarris">cvCornerHarris</a> used by the function.
<dt>use_harris<dd>If nonzero, Harris operator (<a href="#decl_cvCornerHarris">cvCornerHarris</a>)
is used instead of default <a href="#decl_cvCornerMinEigenVal">cvCornerMinEigenVal</a>.
<dt>k<dd>Free parameter of Harris detector; used only if <code>use_harris≠0</code>
</dl><p>
The function <code>cvGoodFeaturesToTrack</code> finds corners with big eigenvalues in the
image. The function first calculates the minimal eigenvalue for every source image pixel
using <a href="#decl_cvCornerMinEigenVal">cvCornerMinEigenVal</a> function and stores them in <code>eig_image</code>.
Then it performs non-maxima suppression (only local maxima in 3x3 neighborhood remain).
The next step rejects the corners whose
minimal eigenvalue is less than <code>quality_level</code>•max(<code>eig_image</code>(x,y)). Finally,
the function ensures that all the corners found are distant enough from one
another by considering the corners in order of decreasing strength
and checking that the distance between each newly considered feature and the features considered earlier
is larger than <code>min_distance</code>. Thus, the function removes features that are too close
to stronger features.</p>
<hr><h2><a name="cv_imgproc_resampling">Sampling, Interpolation and Geometrical Transforms</a></h2>
<hr><h3><a name="decl_cvSampleLine">SampleLine</a></h3>
<p class="Blurb">Reads raster line to buffer</p>
<pre>
int cvSampleLine( const CvArr* image, CvPoint pt1, CvPoint pt2,
void* buffer, int connectivity=8 );
</pre><p><dl>
<dt>image<dd>Image to sample the line from.
<dt>pt1<dd>Starting point of the line.
<dt>pt2<dd>Ending point of the line.
<dt>buffer<dd>Buffer to store the line points; must be large enough to store
max( |<code>pt2.x</code>-<code>pt1.x</code>|+1, |<code>pt2.y</code>-<code>pt1.y</code>|+1 ) points in case
of an 8-connected line and |<code>pt2.x</code>-<code>pt1.x</code>|+|<code>pt2.y</code>-<code>pt1.y</code>|+1 points in case
of a 4-connected line.
<dt>connectivity<dd>The line connectivity, 4 or 8.
</dl><p>
The function <code>cvSampleLine</code> implements a particular case of application of line
iterators. The function reads all the image points lying on the line between <code>pt1</code>
and <code>pt2</code>, including the end points, and stores them into the buffer.</p>
<hr><h3><a name="decl_cvGetRectSubPix">GetRectSubPix</a></h3>
<p class="Blurb">Retrieves pixel rectangle from image with sub-pixel accuracy</p>
<pre>
void cvGetRectSubPix( const CvArr* src, CvArr* dst, CvPoint2D32f center );
</pre><p><dl>
<dt>src<dd>Source image.
<dt>dst<dd>Extracted rectangle.
<dt>center<dd>Floating point coordinates of the extracted rectangle center within the source image.
The center must be inside the image.
</dl><p>
The function <code>cvGetRectSubPix</code> extracts pixels from <code>src</code>:</p>
<pre>
dst(x, y) = src(x + center.x - (width(dst)-1)*0.5, y + center.y - (height(dst)-1)*0.5)
</pre>
<p>
where the values of pixels at non-integer coordinates are
retrieved using bilinear interpolation. Every channel of multiple-channel images is processed
independently.
Whereas the rectangle center must be inside the image, parts of the rectangle may fall outside.
In this case, the replication border mode is used to get pixel values beyond the image boundaries.
</p>
<hr><h3><a name="decl_cvGetQuadrangleSubPix">GetQuadrangleSubPix</a></h3>
<p class="Blurb">Retrieves pixel quadrangle from image with sub-pixel accuracy</p>
<pre>
void cvGetQuadrangleSubPix( const CvArr* src, CvArr* dst, const CvMat* map_matrix );
</pre><p><dl>
<dt>src<dd>Source image.
<dt>dst<dd>Extracted quadrangle.
<dt>map_matrix<dd>The transformation 2 × 3 matrix [<code>A</code>|<code>b</code>] (see the discussion).
</dl><p>
The function <code>cvGetQuadrangleSubPix</code> extracts pixels from <code>src</code> at sub-pixel accuracy
and stores them to <code>dst</code> as follows:</p>
<pre>
dst(x, y)= src( A<sub>11</sub>x'+A<sub>12</sub>y'+b<sub>1</sub>, A<sub>21</sub>x'+A<sub>22</sub>y'+b<sub>2</sub>),
where <code>A</code> and <code>b</code> are taken from <code>map_matrix</code>
| A<sub>11</sub> A<sub>12</sub> b<sub>1</sub> |
map_matrix = | |
| A<sub>21</sub> A<sub>22</sub> b<sub>2</sub> |,
x'=x-(width(dst)-1)*0.5, y'=y-(height(dst)-1)*0.5
</pre>
<p>
where the values of pixels at non-integer coordinates A•(x,y)<sup>T</sup>+b are
retrieved using bilinear interpolation. When the function needs pixels outside of the image, it uses replication
border mode to reconstruct the values. Every channel of multiple-channel images is processed
independently.</p>
<hr><h3><a name="decl_cvResize">Resize</a></h3>
<p class="Blurb">Resizes image</p>
<pre>
void cvResize( const CvArr* src, CvArr* dst, int interpolation=CV_INTER_LINEAR );
</pre><p><dl>
<dt>src<dd>Source image.
<dt>dst<dd>Destination image.
<dt>interpolation<dd>Interpolation method:<ul>
<li>CV_INTER_NN - nearest-neighbor interpolation,
<li>CV_INTER_LINEAR - bilinear interpolation (used by default)
<li>CV_INTER_AREA - resampling using pixel area relation. It is the preferred method for image
decimation, as it gives moiré-free results.
In case of zooming it is similar to the <code>CV_INTER_NN</code> method.
<li>CV_INTER_CUBIC - bicubic interpolation.
</ul>
</dl><p>
The function <code>cvResize</code> resizes the image <code>src</code> so that it fits exactly into <code>dst</code>.
If ROI is set, the function takes the ROI into account, as usual.</p>
<hr><h3><a name="decl_cvWarpAffine">WarpAffine</a></h3>
<p class="Blurb">Applies affine transformation to the image</p>
<pre>
void cvWarpAffine( const CvArr* src, CvArr* dst, const CvMat* map_matrix,
int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS,
CvScalar fillval=cvScalarAll(0) );
</pre><p><dl>
<dt>src<dd>Source image.
<dt>dst<dd>Destination image.
<dt>map_matrix<dd>2×3 transformation matrix.
<dt>flags<dd>A combination of interpolation method and the following optional flags:<ul>
<li>CV_WARP_FILL_OUTLIERS - fill all the destination image pixels. If some of them correspond to
outliers in the source image, they are set to <code>fillval</code>.
<li>CV_WARP_INVERSE_MAP - indicates that <code>map_matrix</code> is the inverse transform from the destination image
to the source and, thus, can be used directly for pixel interpolation. Otherwise,
the function finds the inverse transform from <code>map_matrix</code>.
</ul>
<dt>fillval<dd>A value used to fill outliers.
</dl><p>
The function <code>cvWarpAffine</code> transforms source image using the specified
matrix:</p>
<pre>
dst(x',y')<-src(x,y)
(x',y')<sup>T</sup>=map_matrix•(x,y,1)<sup>T</sup> if CV_WARP_INVERSE_MAP is not set,
(x, y)<sup>T</sup>=map_matrix•(x',y',1)<sup>T</sup> otherwise
</pre>
<p>
The function is similar to <a href="#decl_cvGetQuadrangleSubPix">cvGetQuadrangleSubPix</a>, but they are
not exactly the same. <a href="#decl_cvWarpAffine">cvWarpAffine</a> requires the input and output
images to have the same data type, has larger overhead (so it is not quite suitable for small images)
and can leave part of the destination image unchanged, whereas <a href="#decl_cvGetQuadrangleSubPix">cvGetQuadrangleSubPix</a>
may extract quadrangles from 8-bit images into a floating-point buffer, has smaller overhead and
always changes the whole destination image content.
</p>
<p>
To transform a sparse set of points, use <a href="#decl_cvTransform">cvTransform</a>
function from cxcore.</p>
<hr><h3><a name="decl_cv2DRotationMatrix">2DRotationMatrix</a></h3>
<p class="Blurb">Calculates affine matrix of 2d rotation</p>
<pre>
CvMat* cv2DRotationMatrix( CvPoint2D32f center, double angle,
double scale, CvMat* map_matrix );
</pre><p><dl>
<dt>center<dd>Center of the rotation in the source image.
<dt>angle<dd>The rotation angle in degrees. Positive values mean counter-clockwise rotation
(the coordinate origin is assumed to be at the top-left corner).
<dt>scale<dd>Isotropic scale factor.
<dt>map_matrix<dd>Pointer to the destination 2×3 matrix.
</dl><p>
The function <code>cv2DRotationMatrix</code> calculates matrix:</p>
<pre>
[ α β | (1-α)*center.x - β*center.y ]
[ -β α | β*center.x + (1-α)*center.y ]
where α=scale*cos(angle), β=scale*sin(angle)
</pre>
<p>The transformation maps the rotation center to itself. If this is not the desired behavior,
the shift should be adjusted.</p>
<hr><h3><a name="decl_cvWarpPerspective">WarpPerspective</a></h3>
<p class="Blurb">Applies perspective transformation to the image</p>
<pre>
void cvWarpPerspective( const CvArr* src, CvArr* dst, const CvMat* map_matrix,
int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS,
CvScalar fillval=cvScalarAll(0) );
</pre><p><dl>
<dt>src<dd>Source image.
<dt>dst<dd>Destination image.
<dt>map_matrix<dd>3×3 transformation matrix.
<dt>flags<dd>A combination of interpolation method and the following optional flags:<ul>
<li>CV_WARP_FILL_OUTLIERS - fill all the destination image pixels. If some of them correspond to
outliers in the source image, they are set to <code>fillval</code>.
<li>CV_WARP_INVERSE_MAP - indicates that <code>map_matrix</code> is the inverse transform from the destination image
to the source and, thus, can be used directly for pixel interpolation. Otherwise,
the function finds the inverse transform from <code>map_matrix</code>.
</ul>
<dt>fillval<dd>A value used to fill outliers.
</dl><p>
The function <code>cvWarpPerspective</code> transforms source image using
the specified matrix:</p>
<pre>
dst(x',y')<-src(x,y)
(t•x',t•y',t)<sup>T</sup>=map_matrix•(x,y,1)<sup>T</sup> if CV_WARP_INVERSE_MAP is not set,
(t•x, t•y, t)<sup>T</sup>=map_matrix•(x',y',1)<sup>T</sup> otherwise
</pre>
<p>
For a sparse set of points
use <a href="#decl_cvPerspectiveTransform">cvPerspectiveTransform</a> function from cxcore.</p>