u &lt;- 13*L*(u' - u<sub>n</sub>), where u<sub>n</sub>=0.19793943
v &lt;- 13*L*(v' - v<sub>n</sub>), where v<sub>n</sub>=0.46831096

On output 0&le;L&le;100, -134&le;u&le;220, -140&le;v&le;122
The values are then converted to the destination data type:
    8-bit images:
        L &lt;- L*255/100, u &lt;- (u + 134)*255/354, v &lt;- (v + 140)*255/256
    16-bit images are currently not supported
    32-bit images:
        L, u, v are left as is
</pre>

The above formulae for converting RGB to/from various color spaces have been taken
from multiple sources on the Web, primarily from the <a href="#paper_ford98">Color Space Conversions (<b>[Ford98]</b>)</a>
document at Charles Poynton's site.

<p></p>
<li>Bayer=>RGB (<code>CV_BayerBG2BGR, CV_BayerGB2BGR, CV_BayerRG2BGR, CV_BayerGR2BGR,<br>
                CV_BayerBG2RGB, CV_BayerGB2RGB, CV_BayerRG2RGB, CV_BayerGR2RGB</code>)
<p>The Bayer pattern is widely used in CCD and CMOS cameras. It allows one to reconstruct a color picture
from a single plane where R, G and B pixels (sensors of a particular component) are interleaved as
follows:</p>
<p>
<table border=0 width=400>
<tr>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
</tr><tr>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td bgcolor="pink"><font size=5 color="#0000ff" ><p align="center">B</font></td>
<td bgcolor="pink"><font size=5 color="#008000" ><p align="center">G</font></td>
<td><font size=5 color="#0000ff"><p align="center">B</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
</tr><tr>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
</tr><tr>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#0000ff"><p align="center">B</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#0000ff"><p align="center">B</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
</tr><tr>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#ff0000"><p align="center">R</font></td>
</tr><tr>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#0000ff"><p align="center">B</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
<td><font size=5 color="#0000ff"><p align="center">B</font></td>
<td><font size=5 color="#008000"><p align="center">G</font></td>
</tr>
</table>
</p><p>
The output RGB components of a pixel are interpolated from 1, 2 or 4 neighbors of the pixel
that have the same color. There are several modifications of the above pattern, obtained
by shifting the pattern one pixel left and/or one pixel up.
The two letters C<sub>1</sub> and C<sub>2</sub> in the conversion constants CV_BayerC<sub>1</sub>C<sub>2</sub>2{BGR|RGB}
indicate the particular pattern type -
these are the components from the second row, second and third columns, respectively.
For example, the above pattern has the very popular "BG" type (a minimal demosaicing sketch is given after this list).</p>
</ul>
</p>
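<p>
Below is a minimal demosaicing sketch using <code>cvCvtColor</code>. The file names and the assumption
that the raw frame uses the "BG" pattern shown above are illustrative only.</p>
<pre>
#include &lt;cv.h&gt;
#include &lt;highgui.h&gt;

int main( void )
{
    /* hypothetical file name; the raw frame must be a single-channel 8-bit image */
    IplImage* bayer = cvLoadImage( "raw_bayer.png", 0 /* load as grayscale */ );
    if( !bayer )
        return -1;

    /* the destination must be an 8-bit 3-channel image of the same size */
    IplImage* bgr = cvCreateImage( cvGetSize(bayer), IPL_DEPTH_8U, 3 );

    /* assuming the "BG" pattern from the table above; pick another
       CV_Bayer*2BGR constant if the pattern is shifted */
    cvCvtColor( bayer, bgr, CV_BayerBG2BGR );

    cvSaveImage( "demosaiced.png", bgr );
    cvReleaseImage( &amp;bayer );
    cvReleaseImage( &amp;bgr );
    return 0;
}
</pre>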


<hr><h3><a name="decl_cvThreshold">Threshold</a></h3>
<p class="Blurb">Applies fixed-level threshold to array elements</p>
<pre>
void cvThreshold( const CvArr* src, CvArr* dst, double threshold,
                  double max_value, int threshold_type );
</pre><p><dl>
<dt>src<dd>Source array (single-channel, 8-bit or 32-bit floating point).
<dt>dst<dd>Destination array; must be either the same type as <code>src</code> or 8-bit.
<dt>threshold<dd>Threshold value.
<dt>max_value<dd>Maximum value to use with <code>CV_THRESH_BINARY</code> and
                 <code>CV_THRESH_BINARY_INV</code> thresholding types.
<dt>threshold_type<dd>Thresholding type (see the discussion)
</dl><p>
The function <code>cvThreshold</code> applies fixed-level thresholding to a single-channel array.
The function is typically used to get a bi-level (binary) image out of a grayscale image
(<a href="opencvref_cxcore.htm#decl_cvCmpS">cvCmpS</a>
could also be used for this purpose) or for removing noise, i.e. filtering out pixels with too small or too large values.
The function supports several types of thresholding, determined by <code>threshold_type</code>:</p>
<pre>
threshold_type=CV_THRESH_BINARY:
dst(x,y) = max_value, if src(x,y)&gt;threshold
           0, otherwise

threshold_type=CV_THRESH_BINARY_INV:
dst(x,y) = 0, if src(x,y)&gt;threshold
           max_value, otherwise

threshold_type=CV_THRESH_TRUNC:
dst(x,y) = threshold, if src(x,y)&gt;threshold
           src(x,y), otherwise

threshold_type=CV_THRESH_TOZERO:
dst(x,y) = src(x,y), if src(x,y)&gt;threshold
           0, otherwise

threshold_type=CV_THRESH_TOZERO_INV:
dst(x,y) = 0, if src(x,y)&gt;threshold
           src(x,y), otherwise
</pre>
<p>And this is the visual description of thresholding types:</p>
<p>
<img align="center" src="pics/threshold.png">
</p>
</p>
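<p>
A minimal usage sketch is shown below; the file names and the threshold value 128 are illustrative assumptions.</p>
<pre>
#include &lt;cv.h&gt;
#include &lt;highgui.h&gt;

int main( void )
{
    /* hypothetical input, loaded as a single-channel 8-bit image */
    IplImage* gray = cvLoadImage( "input.png", 0 );
    if( !gray )
        return -1;

    IplImage* binary = cvCreateImage( cvGetSize(gray), IPL_DEPTH_8U, 1 );

    /* pixels brighter than 128 become 255, the rest become 0 */
    cvThreshold( gray, binary, 128, 255, CV_THRESH_BINARY );

    cvSaveImage( "binary.png", binary );
    cvReleaseImage( &amp;gray );
    cvReleaseImage( &amp;binary );
    return 0;
}
</pre>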

<hr><h3><a name="decl_cvAdaptiveThreshold">AdaptiveThreshold</a></h3>
<p class="Blurb">Applies adaptive threshold to array</p>
<pre>
void cvAdaptiveThreshold( const CvArr* src, CvArr* dst, double max_value,
                          int adaptive_method=CV_ADAPTIVE_THRESH_MEAN_C,
                          int threshold_type=CV_THRESH_BINARY,
                          int block_size=3, double param1=5 );
</pre><p><dl>
<dt>src<dd>Source image.
<dt>dst<dd>Destination image.
<dt>max_value<dd>Maximum value that is used with <code>CV_THRESH_BINARY</code> and <code>CV_THRESH_BINARY_INV</code>.
<dt>adaptive_method<dd>Adaptive thresholding algorithm to use: <code>CV_ADAPTIVE_THRESH_MEAN_C</code>
or <code>CV_ADAPTIVE_THRESH_GAUSSIAN_C</code> (see the discussion).
<dt>threshold_type<dd>Thresholding type; must be one of
<ul>
<li><code>CV_THRESH_BINARY,</code>
<li><code>CV_THRESH_BINARY_INV</code>
</ul>
<dt>block_size<dd>The size of a pixel neighborhood that is used to calculate a threshold value for the pixel:
3, 5, 7, ...
<dt>param1<dd>The method-dependent parameter.
For the methods <code>CV_ADAPTIVE_THRESH_MEAN_C</code> and <code>CV_ADAPTIVE_THRESH_GAUSSIAN_C</code>
it is a constant subtracted from the mean or weighted mean (see the discussion); it may be negative.
</dl><p>
The function <code>cvAdaptiveThreshold</code> transforms a grayscale image to a binary image according to
the formulae:</p>
<pre>
threshold_type=<code>CV_THRESH_BINARY</code>:
dst(x,y) = max_value, if src(x,y)&gt;T(x,y)
           0, otherwise

threshold_type=<code>CV_THRESH_BINARY_INV</code>:
dst(x,y) = 0, if src(x,y)&gt;T(x,y)
           max_value, otherwise
</pre>
<p>where T(x,y) is a threshold calculated individually for each pixel.</p>
<p>
For the method <code>CV_ADAPTIVE_THRESH_MEAN_C</code> it is the mean of the <code>block_size</code> &times; <code>block_size</code>
pixel neighborhood, minus <code>param1</code>.</p><p>
For the method <code>CV_ADAPTIVE_THRESH_GAUSSIAN_C</code> it is the Gaussian-weighted sum of the
<code>block_size</code> &times; <code>block_size</code> pixel neighborhood, minus <code>param1</code>.</p>
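<p>
The sketch below binarizes a scanned page with the mean method; the file names, the 11&times;11 neighborhood
and the constant 5 are illustrative assumptions.</p>
<pre>
#include &lt;cv.h&gt;
#include &lt;highgui.h&gt;

int main( void )
{
    /* hypothetical input, loaded as a single-channel 8-bit image */
    IplImage* gray = cvLoadImage( "page.png", 0 );
    if( !gray )
        return -1;

    IplImage* binary = cvCreateImage( cvGetSize(gray), IPL_DEPTH_8U, 1 );

    /* T(x,y) = mean of the 11x11 neighborhood of (x,y) minus 5;
       pixels above T(x,y) become 255, the rest become 0 */
    cvAdaptiveThreshold( gray, binary, 255,
                         CV_ADAPTIVE_THRESH_MEAN_C,
                         CV_THRESH_BINARY, 11, 5 );

    cvSaveImage( "binary.png", binary );
    cvReleaseImage( &amp;gray );
    cvReleaseImage( &amp;binary );
    return 0;
}
</pre>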


<hr><h2><a name="cv_imgproc_pyramids">Pyramids and Their Applications</a></h2>

<hr><h3><a name="decl_cvPyrDown">PyrDown</a></h3>
<p class="Blurb">Downsamples image</p>
<pre>
void cvPyrDown( const CvArr* src, CvArr* dst, int filter=CV_GAUSSIAN_5x5 );
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image; it should have half the width and height of the source.
<dt>filter<dd>Type of the filter used for convolution; only <code>CV_GAUSSIAN_5x5</code> is
currently supported.
</dl><p>
The function <code>cvPyrDown</code> performs the downsampling step of Gaussian pyramid
decomposition. First it convolves the source image with the specified filter and
then downsamples the image by rejecting even rows and columns.</p>
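<p>
A minimal downsampling sketch (file names are illustrative; the source width and height are assumed to be even):</p>
<pre>
#include &lt;cv.h&gt;
#include &lt;highgui.h&gt;

int main( void )
{
    IplImage* src = cvLoadImage( "frame.png", 1 );  /* hypothetical color image */
    if( !src )
        return -1;

    /* the destination has half the width and height of the source */
    IplImage* half = cvCreateImage( cvSize( src-&gt;width/2, src-&gt;height/2 ),
                                    src-&gt;depth, src-&gt;nChannels );
    cvPyrDown( src, half, CV_GAUSSIAN_5x5 );

    cvSaveImage( "frame_half.png", half );
    cvReleaseImage( &amp;src );
    cvReleaseImage( &amp;half );
    return 0;
}
</pre>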


<hr><h3><a name="decl_cvPyrUp">PyrUp</a></h3>
<p class="Blurb">Upsamples image</p>
<pre>
void cvPyrUp( const CvArr* src, CvArr* dst, int filter=CV_GAUSSIAN_5x5 );
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image; it should have twice the width and height of the source.
<dt>filter<dd>Type of the filter used for convolution; only <code>CV_GAUSSIAN_5x5</code> is
currently supported.
</dl><p>
The function <code>cvPyrUp</code> performs the up-sampling step of Gaussian pyramid decomposition.
First it up-samples the source image by injecting zero rows and columns at even positions and
then convolves the result with the specified filter multiplied by 4 for
interpolation. So the destination image has four times the area of the source
image.</p>
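<p>
The corresponding upsampling sketch (file names are illustrative):</p>
<pre>
#include &lt;cv.h&gt;
#include &lt;highgui.h&gt;

int main( void )
{
    IplImage* small_img = cvLoadImage( "frame_half.png", 1 );  /* hypothetical input */
    if( !small_img )
        return -1;

    /* the destination has twice the width and height of the source */
    IplImage* big = cvCreateImage( cvSize( small_img-&gt;width*2, small_img-&gt;height*2 ),
                                   small_img-&gt;depth, small_img-&gt;nChannels );
    cvPyrUp( small_img, big, CV_GAUSSIAN_5x5 );

    cvSaveImage( "frame_double.png", big );
    cvReleaseImage( &amp;small_img );
    cvReleaseImage( &amp;big );
    return 0;
}
</pre>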

<hr><h3><a name="decl_cvPyrSegmentation">PyrSegmentation</a></h3>
<p class="Blurb">Implements image segmentation by pyramids</p>
<pre>
void cvPyrSegmentation( IplImage* src, IplImage* dst,
                        CvMemStorage* storage, CvSeq** comp,
                        int level, double threshold1, double threshold2 );
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image.
<dt>storage<dd>Storage; stores the resulting sequence of connected components.
<dt>comp<dd>Pointer to the output sequence of the segmented components.
<dt>level<dd>Maximum level of the pyramid for the segmentation.
<dt>threshold1<dd>Error threshold for establishing the links.
<dt>threshold2<dd>Error threshold for the segments clustering.
</dl><p>
The function <code>cvPyrSegmentation</code> implements image segmentation by pyramids. The
pyramid is built up to the level <code>level</code>. The links between any pixel <code>a</code> on level <code>i</code>
and its candidate father pixel <code>b</code> on the adjacent level are established if
<div> <code>p(c(a),c(b))&lt;threshold1</code>.
After the connected components are defined, they are joined into several
clusters. Any two segments A and B belong to the same cluster if
<div> <code>p(c(A),c(B))&lt;threshold2</code>. If the input
image has only one channel, then
<div><code> p(c&sup1;,c&sup2;)=|c&sup1;-c&sup2;|</code>. If the input image has three channels (red,
green and blue), then
<div><code>p(c&sup1;,c&sup2;)=0.3&middot;(c&sup1;<sub>r</sub>-c&sup2;<sub>r</sub>)+0.59&middot;(c&sup1;<sub>g</sub>-c&sup2;<sub>g</sub>)+0.11&middot;(c&sup1;<sub>b</sub>-c&sup2;<sub>b</sub>)</code>.
There may be more than one connected component per cluster.
<div>The images <code>src</code> and <code>dst</code> should be 8-bit single-channel or 3-channel images
of equal size.</p>
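<p>
A minimal usage sketch is given below; the file name, pyramid level and the two thresholds are illustrative assumptions.</p>
<pre>
#include &lt;cv.h&gt;
#include &lt;highgui.h&gt;
#include &lt;stdio.h&gt;

int main( void )
{
    IplImage* src = cvLoadImage( "scene.png", 1 );  /* hypothetical 3-channel image */
    if( !src )
        return -1;

    int level = 4;                                  /* illustrative pyramid depth */
    /* crop the ROI so that its width and height are divisible by 2^level */
    int w = (src-&gt;width &gt;&gt; level) &lt;&lt; level;
    int h = (src-&gt;height &gt;&gt; level) &lt;&lt; level;
    cvSetImageROI( src, cvRect( 0, 0, w, h ) );

    /* cvGetSize takes the ROI into account, so dst matches the cropped source */
    IplImage* dst = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 3 );

    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* comp = NULL;

    cvPyrSegmentation( src, dst, storage, &amp;comp, level,
                       150, 30 );  /* threshold1 and threshold2 are illustrative */

    printf( "%d connected components\n", comp-&gt;total );

    cvSaveImage( "segmented.png", dst );
    cvReleaseMemStorage( &amp;storage );
    cvReleaseImage( &amp;src );
    cvReleaseImage( &amp;dst );
    return 0;
}
</pre>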


<hr><h2><a name="cv_imgproc_ccomp">Connected Components</a></h2>

<hr><h3><a name="decl_CvConnectedComp">CvConnectedComp</a></h3>
<p class="Blurb">Connected component</p>
<pre>
    typedef struct CvConnectedComp
    {
        double area; /* area of the segmented component */
        float value; /* gray scale value of the segmented component */
        CvRect rect; /* ROI of the segmented component */
    } CvConnectedComp;
</pre>


<hr><h3><a name="decl_cvFloodFill">FloodFill</a></h3>
<p class="Blurb">Fills a connected component with given color</p>
<pre>
void cvFloodFill( CvArr* image, CvPoint seed_point, CvScalar new_val,
                  CvScalar lo_diff=cvScalarAll(0), CvScalar up_diff=cvScalarAll(0),
                  CvConnectedComp* comp=NULL, int flags=4, CvArr* mask=NULL );
#define CV_FLOODFILL_FIXED_RANGE (1 &lt;&lt; 16)
#define CV_FLOODFILL_MASK_ONLY   (1 &lt;&lt; 17)
</pre><p><dl>
<dt>image<dd>Input 1- or 3-channel, 8-bit or floating-point image.
             It is modified by the function unless the <code>CV_FLOODFILL_MASK_ONLY</code> flag is set (see below).
<dt>seed_point<dd>The starting point.
<dt>new_val<dd>New value of repainted domain pixels.
<dt>lo_diff<dd>Maximal lower brightness/color difference between the currently observed pixel and one of its
neighbors belonging to the component, or the seed pixel, for the pixel to be added to the component.
In case of 8-bit color images it is a packed value.
<dt>up_diff<dd>Maximal upper brightness/color difference between the currently observed pixel and one of its
neighbors belonging to the component, or the seed pixel, for the pixel to be added to the component.
In case of 8-bit color images it is a packed value.
<dt>comp<dd>Pointer to a structure that the function fills with information about the
repainted domain.
<dt>flags<dd>The operation flags. Lower bits contain the connectivity value, 4 (by default) or 8,
used within the function. Connectivity d
