<h3><a name="decl_cvSmooth">Smooth</a></h3>
<p class="Blurb">Smooths the image in one of several ways</p>
<pre>
void cvSmooth( const CvArr* src, CvArr* dst,
               int smoothtype=CV_GAUSSIAN,
               int param1=3, int param2=0, double param3=0, double param4=0 );
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image.
<dt>smoothtype<dd>Type of the smoothing:<ul>
<li>CV_BLUR_NO_SCALE (simple blur with no scaling) -
              summation over a pixel <code>param1</code>&times;<code>param2</code> neighborhood.
              If the neighborhood size needs to vary, one may precompute the integral image with the <a href="#decl_cvIntegral">cvIntegral</a> function.
<li>CV_BLUR (simple blur) - summation over a pixel <code>param1</code>&times;<code>param2</code> neighborhood with
             subsequent scaling by 1/(<code>param1</code>&bull;<code>param2</code>).
<li>CV_GAUSSIAN (gaussian blur) - convolving image with <code>param1</code>&times;<code>param2</code> Gaussian kernel.
<li>CV_MEDIAN (median blur) - finding median of <code>param1</code>&times;<code>param1</code> neighborhood (i.e.
                              the neighborhood is square).
<li>CV_BILATERAL (bilateral filter) - applying bilateral 3x3 filtering with color sigma=<code>param1</code> and
                                      space sigma=<code>param2</code>. Information about bilateral filtering
                                      can be found at <a href="http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html">
                                                      http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html</a>
</ul>
<dt>param1<dd>The first parameter of smoothing operation.
<dt>param2<dd>The second parameter of smoothing operation. In case of simple scaled/non-scaled and
              Gaussian blur if <code>param2</code> is zero, it is set to <code>param1</code>.
<dt>param3<dd>In case of Gaussian kernel this parameter may specify Gaussian sigma (standard deviation).
              If it is zero, it is calculated from the kernel size:<br>
              <pre>
              sigma = (n/2 - 1)*0.3 + 0.8, where n=param1 for horizontal kernel,
                                                 n=param2 for vertical kernel.
              </pre>
              Using the standard sigma for small kernels (3&times;3 to 7&times;7) gives better performance.
              If <code>param3</code> is not zero, while <code>param1</code> and <code>param2</code>
              are zeros, the kernel size is calculated from the sigma (to provide accurate enough operation).
<dt>param4<dd>In case of non-square Gaussian kernel the parameter may be used to specify a different
              (from <code>param3</code>) sigma in the vertical direction.
</dl><p>
The function <code>cvSmooth</code> smooths the image using one of several methods. Each of the methods
has some features and restrictions, listed below.</p>
<p>Blur with no scaling works with single-channel images only and supports accumulation of
8-bit to 16-bit format (similar to <a href="#decl_cvSobel">cvSobel</a> and <a href="#decl_cvLaplace">cvLaplace</a>) and 32-bit floating point
to 32-bit floating-point format.</p><p>
Simple blur and Gaussian blur support 1- or 3-channel, 8-bit and 32-bit floating point images.
These two methods can process images in-place.</p>
<p>Median and bilateral filters work with 1- or 3-channel 8-bit images and can not process images
in-place.</p>
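<p>For example, a 5&times;5 Gaussian blur of an 8-bit image may be applied as follows
(a minimal sketch; <code>img</code> is assumed to be an already loaded <code>IplImage</code>):</p>
<pre>
/* destination image of the same size and type as the source */
IplImage* smoothed = cvCreateImage( cvGetSize(img), img->depth, img->nChannels );

/* 5x5 Gaussian kernel; param3=0 means sigma is computed from the kernel size:
   sigma = (5/2 - 1)*0.3 + 0.8 = 1.1 */
cvSmooth( img, smoothed, CV_GAUSSIAN, 5, 5, 0, 0 );
</pre>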


<hr><h3><a name="decl_cvFilter2D">Filter2D</a></h3>
<p class="Blurb">Convolves image with the kernel</p>
<pre>
void cvFilter2D( const CvArr* src, CvArr* dst,
                 const CvMat* kernel,
                 CvPoint anchor=cvPoint(-1,-1));
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image.
<dt>kernel<dd>Convolution kernel, single-channel floating point matrix. If you want to apply
              different kernels to different channels, split the image using
              <a href="opencvref_cxcore.htm#decl_cvSplit">cvSplit</a>
              into separate color planes and process them individually.
<dt>anchor<dd>The anchor of the kernel that indicates the relative position of a filtered point
within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means
that it is at the kernel center.
</dl></p><p>
The function <code>cvFilter2D</code> applies an arbitrary linear filter to the image.
In-place operation is supported. When the aperture is partially outside the image, the function
interpolates outlier pixel values from the nearest pixels that are inside the image.
</p>
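<p>For example, a 3&times;3 sharpening kernel may be applied as follows (a minimal sketch;
<code>img</code> and <code>dst</code> are assumed to be images of the same size and number of channels):</p>
<pre>
float k[] = {  0, -1,  0,
              -1,  5, -1,
               0, -1,  0 };
CvMat kernel = cvMat( 3, 3, CV_32FC1, k );

/* the default anchor (-1,-1) places the anchor at the kernel center */
cvFilter2D( img, dst, &amp;kernel, cvPoint(-1,-1) );
</pre>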


<hr><h3><a name="decl_cvCopyMakeBorder">CopyMakeBorder</a></h3>
<p class="Blurb">Copies image and makes border around it</p>
<pre>
void cvCopyMakeBorder( const CvArr* src, CvArr* dst, CvPoint offset,
                       int bordertype, CvScalar value=cvScalarAll(0) );
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image.
<dt>offset<dd>Coordinates of the top-left corner (or bottom-left in case of images with bottom-left origin)
              of the destination image rectangle where the source image (or its ROI) is copied.
              Size of the rectangle matches the source image size/ROI size.
<dt>bordertype<dd>Type of the border to create around the copied source image rectangle:<br>
              <code>IPL_BORDER_CONSTANT</code> -
                  border is filled with the fixed value, passed as last parameter of the function.<br>
              <code>IPL_BORDER_REPLICATE</code> -
                  the pixels from the top and bottom rows, the left-most and right-most columns are replicated
                  to fill the border.<br>
              (The other two border types from IPL, <code>IPL_BORDER_REFLECT</code> and <code>IPL_BORDER_WRAP</code>,
              are currently unsupported).
<dt>value<dd>Value of the border pixels if <code>bordertype=IPL_BORDER_CONSTANT</code>.
</dl></p><p>
The function <code>cvCopyMakeBorder</code> copies the source 2D array into the interior of the destination array
and makes a border of the specified type around the copied area.
The function is useful when one needs to emulate a border type different from the one embedded into a specific
algorithm implementation. For example, morphological functions, as well as most other filtering functions in OpenCV,
internally use the replication border type, while the user may need a zero border or a border filled with 1's or 255's.
</p>
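<p>For example, a 10-pixel constant (zero-filled) border may be added around an image as follows
(a minimal sketch; <code>src</code> is assumed to be an already loaded <code>IplImage</code>):</p>
<pre>
IplImage* dst = cvCreateImage( cvSize( src->width + 20, src->height + 20 ),
                               src->depth, src->nChannels );

/* copy src so that its top-left corner lands at (10,10) in dst and
   fill the surrounding 10-pixel border with zeros */
cvCopyMakeBorder( src, dst, cvPoint(10,10), IPL_BORDER_CONSTANT, cvScalarAll(0) );
</pre>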


<hr><h3><a name="decl_cvIntegral">Integral</a></h3>
<p class="Blurb">Calculates integral images</p>
<pre>
void cvIntegral( const CvArr* image, CvArr* sum, CvArr* sqsum=NULL, CvArr* tilted_sum=NULL );
</pre><p><dl>
<dt>image<dd>The source image, <code>W</code>&times;<code>H</code>, 8-bit or floating-point (32f or 64f) image.
<dt>sum<dd>The integral image, <code>W+1</code>&times;<code>H+1</code>, 32-bit integer or double precision floating-point (64f).
<dt>sqsum<dd>The integral image for squared pixel values, <code>W+1</code>&times;<code>H+1</code>, double precision floating-point (64f).
<dt>tilted_sum<dd>The integral for the image rotated by 45 degrees, <code>W+1</code>&times;<code>H+1</code>, the same data type as <code>sum</code>.
</dl><p>
The function <code>cvIntegral</code> calculates one or more integral images for the source image as follows:</p>
<pre>
sum(X,Y)=sum<sub>x&lt;X,y&lt;Y</sub>image(x,y)

sqsum(X,Y)=sum<sub>x&lt;X,y&lt;Y</sub>image(x,y)<sup>2</sup>

tilted_sum(X,Y)=sum<sub>y&lt;Y,abs(x-X)&lt;y</sub>image(x,y)
</pre>
<p>Using these integral images, one may calculate the sum, mean, and standard deviation over an
arbitrary upright or rotated rectangular region of the image in constant time, for example:</p>
<pre>
sum<sub>x1&lt;=x&lt;x2,y1&lt;=y&lt;y2</sub>image(x,y)=sum(x2,y2)-sum(x1,y2)-sum(x2,y1)+sum(x1,y1)
</pre>
<p>This makes it possible to do fast blurring or fast block correlation with a variable window size, etc.
In case of multi-channel images, sums for each channel are accumulated independently.
</p>
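<p>For example, the sum of pixels inside the upright rectangle x1&le;x&lt;x2, y1&le;y&lt;y2 may be
computed as follows (a minimal sketch; <code>img</code> is assumed to be an 8-bit single-channel image
and the rectangle coordinates are illustrative):</p>
<pre>
int x1 = 10, y1 = 10, x2 = 50, y2 = 40;
CvMat* sum = cvCreateMat( img->height + 1, img->width + 1, CV_32SC1 );
double rect_sum;

cvIntegral( img, sum, NULL, NULL );

/* four lookups give the sum over the rectangle; cvGetReal2D takes (row, column) */
rect_sum = cvGetReal2D( sum, y2, x2 ) - cvGetReal2D( sum, y1, x2 )
         - cvGetReal2D( sum, y2, x1 ) + cvGetReal2D( sum, y1, x1 );
</pre>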

<hr><h3><a name="decl_cvCvtColor">CvtColor</a></h3>
<p class="Blurb">Converts image from one color space to another</p>
<pre>
void cvCvtColor( const CvArr* src, CvArr* dst, int code );
</pre><p><dl>
<dt>src<dd>The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image.
<dt>dst<dd>The destination image of the same data type as the source one.
           The number of channels may be different.
<dt>code<dd>Color conversion operation that can be specified using
CV_&lt;src_color_space&gt;2&lt;dst_color_space&gt; constants (see below).
</dl><p>
The function <code>cvCvtColor</code> converts the input image from one color space to another.
The function ignores the <code>colorModel</code> and <code>channelSeq</code> fields of the <code>IplImage</code> header,
so the source image color space should be specified correctly (including the order of the channels in case
of RGB space, e.g. BGR means 24-bit format with B<sub>0</sub> G<sub>0</sub> R<sub>0</sub> B<sub>1</sub> G<sub>1</sub> R<sub>1</sub> ... layout,
whereas RGB means 24-bit format with R<sub>0</sub> G<sub>0</sub> B<sub>0</sub> R<sub>1</sub> G<sub>1</sub> B<sub>1</sub> ... layout).
</p><p>
The conventional range for R,G,B channel values is:
<ul>
<li>0..255 for 8-bit images
<li>0..65535 for 16-bit images and
<li>0..1 for floating-point images.
</ul>
Of course, in case of linear transformations the range can be arbitrary,
but in order to get correct results in case of non-linear transformations,
the input image should be scaled if necessary.
</p><p>
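A typical conversion of an 8-bit BGR image to grayscale and to HSV looks as follows
(a minimal sketch; <code>img</code> is assumed to be an 8-bit 3-channel BGR image):</p>
<pre>
IplImage* gray = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
IplImage* hsv  = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );

cvCvtColor( img, gray, CV_BGR2GRAY );  /* BGR -> single-channel grayscale */
cvCvtColor( img, hsv,  CV_BGR2HSV );   /* BGR -> HSV (for 8-bit images H is scaled to 0..180) */
</pre><p>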
The function can do the following transformations:<ul>
<li>Transformations within RGB space like adding/removing alpha channel, reversing the channel order,
conversion to/from 16-bit RGB (R5:G6:B5 or R5:G5:B5) color, as well as conversion to/from grayscale using:
<pre>
RGB[A]->Gray: Y&lt;-0.299*R + 0.587*G + 0.114*B
Gray->RGB[A]: R&lt;-Y G&lt;-Y B&lt;-Y A&lt;-0
</pre>
<li>RGB&lt;=&gt;CIE XYZ.Rec 709 with D65 white point (<code>CV_BGR2XYZ, CV_RGB2XYZ, CV_XYZ2BGR, CV_XYZ2RGB</code>):
<pre>
|X|    |0.412453  0.357580  0.180423| |R|
|Y| &lt;- |0.212671  0.715160  0.072169|*|G|
|Z|    |0.019334  0.119193  0.950227| |B|

|R|    | 3.240479  -1.53715  -0.498535| |X|
|G| &lt;- |-0.969256   1.875991  0.041556|*|Y|
|B|    | 0.055648  -0.204043  1.057311| |Z|

X, Y and Z cover the whole value range (in case of floating-point images Z may exceed 1).
</pre>
<p></p>
<li>RGB&lt;=&gt;YCrCb JPEG (a.k.a. YCC) (<code>CV_BGR2YCrCb, CV_RGB2YCrCb, CV_YCrCb2BGR, CV_YCrCb2RGB</code>)
<pre>
Y &lt;- 0.299*R + 0.587*G + 0.114*B
Cr &lt;- (R-Y)*0.713 + delta
Cb &lt;- (B-Y)*0.564 + delta

R &lt;- Y + 1.403*(Cr - delta)
G &lt;- Y - 0.344*(Cr - delta) - 0.714*(Cb - delta)
B &lt;- Y + 1.773*(Cb - delta),

              { 128 for 8-bit images,
where delta = { 32768 for 16-bit images
              { 0.5 for floating-point images

Y, Cr and Cb cover the whole value range.
</pre>
<p></p>
<li>RGB&lt;=&gt;HSV (<code>CV_BGR2HSV, CV_RGB2HSV, CV_HSV2BGR, CV_HSV2RGB</code>)
<pre>
// In case of 8-bit and 16-bit images
// R, G and B are converted to floating-point format and scaled to fit 0..1 range

V &lt;- max(R,G,B)
S &lt;- (V-min(R,G,B))/V   if V&ne;0, 0 otherwise

         (G - B)*60/S,  if V=R
H &lt;- 180+(B - R)*60/S,  if V=G
     240+(R - G)*60/S,  if V=B

if H&lt;0 then H&lt;-H+360

On output 0&le;V&le;1, 0&le;S&le;1, 0&le;H&le;360.
The values are then converted to the destination data type:
    8-bit images:
        V &lt;- V*255, S &lt;- S*255, H &lt;- H/2 (to fit to 0..255)
    16-bit images (currently not supported):
        V &lt;- V*65535, S &lt;- S*65535, H &lt;- H
    32-bit images:
        H, S, V are left as is
</pre>
<li>RGB&lt;=&gt;HLS (<code>CV_BGR2HLS, CV_RGB2HLS, CV_HLS2BGR, CV_HLS2RGB</code>)
<pre>
// In case of 8-bit and 16-bit images
// R, G and B are converted to floating-point format and scaled to fit 0..1 range

V<sub>max</sub> &lt;- max(R,G,B)
V<sub>min</sub> &lt;- min(R,G,B)

L &lt;- (V<sub>max</sub> + V<sub>min</sub>)/2

S &lt;- (V<sub>max</sub> - V<sub>min</sub>)/(V<sub>max</sub> + V<sub>min</sub>)  if L &lt; 0.5
     (V<sub>max</sub> - V<sub>min</sub>)/(2 - (V<sub>max</sub> + V<sub>min</sub>))  if L &ge; 0.5

         (G - B)*60/S,  if V<sub>max</sub>=R
H &lt;- 180+(B - R)*60/S,  if V<sub>max</sub>=G
     240+(R - G)*60/S,  if V<sub>max</sub>=B

if H&lt;0 then H&lt;-H+360

On output 0&le;L&le;1, 0&le;S&le;1, 0&le;H&le;360.
The values are then converted to the destination data type:
    8-bit images:
        L &lt;- L*255, S &lt;- S*255, H &lt;- H/2
    16-bit images (currently not supported):
        L &lt;- L*65535, S &lt;- S*65535, H &lt;- H
    32-bit images:
        H, L, S are left as is
</pre>
<li>RGB&lt;=&gt;CIE L*a*b* (<code>CV_BGR2Lab, CV_RGB2Lab, CV_Lab2BGR, CV_Lab2RGB</code>)
<pre>
// In case of 8-bit and 16-bit images
// R, G and B are converted to floating-point format and scaled to fit 0..1 range

// convert R,G,B to CIE XYZ
|X|    |0.412453  0.357580  0.180423| |R|
|Y| &lt;- |0.212671  0.715160  0.072169|*|G|
|Z|    |0.019334  0.119193  0.950227| |B|

X &lt;- X/Xn, where Xn = 0.950456
Z &lt;- Z/Zn, where Zn = 1.088754

L &lt;- 116*Y<sup>1/3</sup>      for Y>0.008856
L &lt;- 903.3*Y      for Y&lt;=0.008856

a &lt;- 500*(f(X)-f(Y)) + delta
b &lt;- 200*(f(Y)-f(Z)) + delta
where f(t)=t<sup>1/3</sup>              for t>0.008856
      f(t)=7.787*t+16/116   for t&lt;=0.008856


where delta = 128 for 8-bit images,
              0 for floating-point images

On output 0&le;L&le;100, -127&le;a&le;127, -127&le;b&le;127
The values are then converted to the destination data type:
