Gaussian blur if <code>param2</code> is zero, it is set to <code>param1</code>.
<dt>param3<dd>In case of Gaussian blur this parameter may specify the Gaussian sigma (standard deviation).
If it is zero, it is calculated from the kernel size:<br>
<pre>
sigma = (n/2 - 1)*0.3 + 0.8, where n=param1 for horizontal kernel,
n=param2 for vertical kernel.
</pre>
Using standard sigma for small kernels (3×3 to 7×7) gives better speed.
If <code>param3</code> is not zero, while <code>param1</code> and <code>param2</code>
are zeros, the kernel size is calculated from the sigma (to provide accurate enough operation).
</dl><p>
The function <code>cvSmooth</code> smooths the image using one of several methods. Each of the methods
has some features and restrictions, listed below:</p>
<p>Blur with no scaling works with single-channel images only and supports accumulation of
8-bit to 16-bit format (similar to <a href="#decl_cvSobel">cvSobel</a> and <a href="#decl_cvLaplace">cvLaplace</a>) and 32-bit floating point
to 32-bit floating-point format.</p><p>
Simple blur and Gaussian blur support 1- or 3-channel, 8-bit and 32-bit floating point images.
These two methods can process images in-place.</p>
<p>Median and bilateral filters work with 1- or 3-channel 8-bit images and can not process images
in-place.</p>
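<p>
Below is a minimal usage sketch of a Gaussian smoothing call. The file name is only an example,
and the snippet assumes compilation as C++ so that the trailing default parameters of
<code>cvSmooth</code> can be omitted:</p>
<pre>
#include "cv.h"
#include "highgui.h"

int main( void )
{
    IplImage* src = cvLoadImage( "image.jpg", 1 );   /* example path, 8-bit 3-channel */
    if( !src ) return -1;
    IplImage* dst = cvCloneImage( src );

    /* 5x5 Gaussian kernel; param3=0, so sigma is derived from the kernel size
       as sigma = (n/2 - 1)*0.3 + 0.8 with n=5 */
    cvSmooth( src, dst, CV_GAUSSIAN, 5, 5, 0 );

    /* simple blur and Gaussian blur also allow in-place processing */
    cvSmooth( src, src, CV_GAUSSIAN, 5, 5, 0 );

    cvReleaseImage( &amp;dst );
    cvReleaseImage( &amp;src );
    return 0;
}
</pre>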
<hr><h3><a name="decl_cvFilter2D">Filter2D</a></h3>
<p class="Blurb">Convolves image with the kernel</p>
<pre>
void cvFilter2D( const CvArr* src, CvArr* dst,
                 const CvMat* kernel,
                 CvPoint anchor=cvPoint(-1,-1));
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image.
<dt>kernel<dd>Convolution kernel, single-channel floating point matrix. If you want to apply
different kernels to different channels, split the image using
<a href="opencvref_cxcore.htm#decl_cvSplit">cvSplit</a>
into separate color planes and process them individually.
<dt>anchor<dd>The anchor of the kernel that indicates the relative position of a filtered point
within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means
that it is at the kernel center.
</dl></p><p>
The function <code>cvFilter2D</code> applies an arbitrary linear filter to the image.
In-place operation is supported. When the aperture is partially outside the image, the function
interpolates outlier pixel values from the nearest pixels that are inside the image.
</p>
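<p>
A possible usage sketch is shown below; the kernel values are only an example (a simple 3×3
sharpening kernel) and <code>src</code> is assumed to be an already loaded 8-bit 3-channel image:</p>
<pre>
/* src is assumed to be an already loaded 8-bit 3-channel IplImage* */
IplImage* dst = cvCloneImage( src );

/* single-channel 32-bit floating-point convolution kernel (3x3 sharpening) */
float k[] = {  0, -1,  0,
              -1,  5, -1,
               0, -1,  0 };
CvMat kernel = cvMat( 3, 3, CV_32FC1, k );

/* the default anchor (-1,-1) places the anchor at the kernel center;
   in-place operation (dst == src) is also supported */
cvFilter2D( src, dst, &amp;kernel, cvPoint(-1,-1) );
</pre>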
<hr><h3><a name="decl_cvCopyMakeBorder">CopyMakeBorder</a></h3>
<p class="Blurb">Copies image and makes border around it</p>
<pre>
void cvCopyMakeBorder( const CvArr* src, CvArr* dst, CvPoint offset,
                       int bordertype, CvScalar value=cvScalarAll(0) );
</pre><p><dl>
<dt>src<dd>The source image.
<dt>dst<dd>The destination image.
<dt>offset<dd>Coordinates of the top-left corner (or bottom-left in case of images with bottom-left origin)
of the destination image rectangle where the source image (or its ROI) is copied.
Size of the rectangle matches the source image size/ROI size.
<dt>bordertype<dd>Type of the border to create around the copied source image rectangle:<br>
<code>IPL_BORDER_CONSTANT</code> -
border is filled with the fixed value, passed as the last parameter of the function.<br>
<code>IPL_BORDER_REPLICATE</code> -
the pixels from the top and bottom rows, the left-most and right-most columns are replicated
to fill the border.<br>
(The other two border types from IPL, <code>IPL_BORDER_REFLECT</code> and <code>IPL_BORDER_WRAP</code>,
are currently unsupported).
<dt>value<dd>Value of the border pixels if <code>bordertype=IPL_BORDER_CONSTANT</code>.
</dl></p><p>
The function <code>cvCopyMakeBorder</code> copies the source 2D array into the interior of the destination array
and makes a border of the specified type around the copied area.
The function is useful when one needs to emulate a border type that is different from the one embedded into a specific
algorithm implementation. For example, morphological functions, as well as most other filtering functions in OpenCV,
internally use the replication border type, while the user may need a zero border or a border filled with 1's or 255's.
</p>
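<p>
For example, a constant (zero-filled) border can be emulated as sketched below;
<code>src</code> is assumed to be an already loaded image and the border width is arbitrary:</p>
<pre>
/* src is assumed to be an already loaded IplImage* */
int border = 10;
IplImage* dst = cvCreateImage( cvSize( src->width + 2*border, src->height + 2*border ),
                               src->depth, src->nChannels );

/* copy src into the center of dst and surround it with a 10-pixel black border */
cvCopyMakeBorder( src, dst, cvPoint( border, border ),
                  IPL_BORDER_CONSTANT, cvScalarAll(0) );
</pre>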
<hr><h3><a name="decl_cvIntegral">Integral</a></h3>
<p class="Blurb">Calculates integral images</p>
<pre>
void cvIntegral( const CvArr* image, CvArr* sum, CvArr* sqsum=NULL, CvArr* tilted_sum=NULL );
</pre><p><dl>
<dt>image<dd>The source image, <code>W</code>×<code>H</code>, 8-bit or floating-point (32f or 64f).
<dt>sum<dd>The integral image, <code>W+1</code>×<code>H+1</code>, 32-bit integer or double precision floating-point (64f).
<dt>sqsum<dd>The integral image for squared pixel values, <code>W+1</code>×<code>H+1</code>, double precision floating-point (64f).
<dt>tilted_sum<dd>The integral for the image rotated by 45 degrees, <code>W+1</code>×<code>H+1</code>, the same data type as <code>sum</code>.
</dl><p>
The function <code>cvIntegral</code> calculates one or more integral images for the source image as follows:</p>
<pre>
sum(X,Y)=sum<sub>x&lt;X,y&lt;Y</sub>image(x,y)
sqsum(X,Y)=sum<sub>x&lt;X,y&lt;Y</sub>image(x,y)<sup>2</sup>
tilted_sum(X,Y)=sum<sub>y&lt;Y,abs(x-X)&lt;y</sub>image(x,y)
</pre>
<p>Using these integral images, one may calculate the sum, mean, and standard deviation over
an arbitrary upright or rotated rectangular region of the image in constant time, for example:</p>
<pre>
sum<sub>x1&lt;=x&lt;x2,y1&lt;=y&lt;y2</sub>image(x,y)=sum(x2,y2)-sum(x1,y2)-sum(x2,y1)+sum(x1,y1)
</pre>
<p>This makes it possible to do fast blurring, fast block correlation with a variable window size, etc.
In case of multi-channel images, sums for each channel are accumulated independently.
</p>
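<p>
As a sketch of the constant-time rectangle sum above (<code>img</code> is assumed to be an already
loaded single-channel 8-bit image, and the rectangle coordinates are arbitrary):</p>
<pre>
/* img is assumed to be an already loaded single-channel 8-bit IplImage* (W x H) */
IplImage* sum = cvCreateImage( cvSize( img->width + 1, img->height + 1 ), IPL_DEPTH_32S, 1 );
cvIntegral( img, sum, NULL, NULL );

/* sum of pixels over x1 &lt;= x &lt; x2, y1 &lt;= y &lt; y2 from four table look-ups:
   sum(x2,y2) - sum(x1,y2) - sum(x2,y1) + sum(x1,y1) */
int x1 = 10, y1 = 10, x2 = 50, y2 = 40;
int s = CV_IMAGE_ELEM( sum, int, y2, x2 ) - CV_IMAGE_ELEM( sum, int, y2, x1 )
      - CV_IMAGE_ELEM( sum, int, y1, x2 ) + CV_IMAGE_ELEM( sum, int, y1, x1 );
</pre>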
<hr><h3><a name="decl_cvCvtColor">CvtColor</a></h3>
<p class="Blurb">Converts image from one color space to another</p>
<pre>
void cvCvtColor( const CvArr* src, CvArr* dst, int code );
</pre><p><dl>
<dt>src<dd>The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image.
<dt>dst<dd>The destination image of the same data type as the source one.
The number of channels may be different.
<dt>code<dd>Color conversion operation that can be specified using
CV_&lt;src_color_space&gt;2&lt;dst_color_space&gt; constants (see below).
</dl><p>
The function <code>cvCvtColor</code> converts the input image from one color space to another.
The function ignores <code>colorModel</code> and <code>channelSeq</code> fields of <code>IplImage</code> header,
so the source image color space should be specified correctly (including order of the channels in case
of RGB space, e.g. BGR means 24-bit format with B<sub>0</sub> G<sub>0</sub> R<sub>0</sub> B<sub>1</sub> G<sub>1</sub> R<sub>1</sub> ... layout,
whereas RGB means 24-bit format with R<sub>0</sub> G<sub>0</sub> B<sub>0</sub> R<sub>1</sub> G<sub>1</sub> B<sub>1</sub> ... layout).
</p>
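<p>
A minimal usage sketch is shown below (the concrete conversion codes are listed further down;
<code>src</code> is assumed to be an 8-bit image in BGR channel order, e.g. loaded with <code>cvLoadImage</code>):</p>
<pre>
/* src is assumed to be an already loaded 8-bit 3-channel image in BGR order */
IplImage* gray = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 1 );
IplImage* hsv  = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 3 );

cvCvtColor( src, gray, CV_BGR2GRAY );  /* RGB[A]->Gray conversion */
cvCvtColor( src, hsv,  CV_BGR2HSV );   /* for 8-bit images H is stored as H/2 (0..180) */
</pre>
<p>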
The conventional range for R,G,B channel values is:
<ul>
<li>0..255 for 8-bit images
<li>0..65535 for 16-bit images and
<li>0..1 for floating-point images.
</ul>
In case of linear transformations the range can be arbitrary,
but in order to get correct results from non-linear transformations,
the input image should be scaled to the conventional range if necessary.
</p><p>
The function can do the following transformations:<ul>
<li>Transformations within RGB space like adding/removing the alpha channel, reversing the channel order,
conversion to/from 16-bit RGB color (R5:G6:B5 or R5:G5:B5), as well as conversion to/from grayscale using:
<pre>
RGB[A]->Gray: Y<-0.299*R + 0.587*G + 0.114*B
Gray->RGB[A]: R<-Y G<-Y B<-Y A<-0
</pre>
<li>RGB<=>CIE XYZ.Rec 709 with D65 white point (<code>CV_BGR2XYZ, CV_RGB2XYZ, CV_XYZ2BGR, CV_XYZ2RGB</code>):
<pre>
|X|    |0.412453  0.357580  0.180423|   |R|
|Y| <- |0.212671  0.715160  0.072169| * |G|
|Z|    |0.019334  0.119193  0.950227|   |B|

|R|    | 3.240479  -1.53715   -0.498535|   |X|
|G| <- |-0.969256   1.875991   0.041556| * |Y|
|B|    | 0.055648  -0.204043   1.057311|   |Z|
X, Y and Z cover the whole value range (in case of floating-point images Z may exceed 1).
</pre>
<p></p>
<li>RGB<=>YCrCb JPEG (a.k.a. YCC) (<code>CV_BGR2YCrCb, CV_RGB2YCrCb, CV_YCrCb2BGR, CV_YCrCb2RGB</code>)
<pre>
Y <- 0.299*R + 0.587*G + 0.114*B
Cr <- (R-Y)*0.713 + delta
Cb <- (B-Y)*0.564 + delta
R <- Y + 1.403*(Cr - delta)
G <- Y - 0.344*(Cr - delta) - 0.714*(Cb - delta)
B <- Y + 1.773*(Cb - delta),
where delta = 128     for 8-bit images,
              32768   for 16-bit images,
              0.5     for floating-point images
Y, Cr and Cb cover the whole value range.
</pre>
<p></p>
<li>RGB<=>HSV (<code>CV_BGR2HSV, CV_RGB2HSV, CV_HSV2BGR, CV_HSV2RGB</code>)
<pre>
// In case of 8-bit and 16-bit images
// R, G and B are converted to floating-point format and scaled to fit 0..1 range
V <- max(R,G,B)
S <- (V-min(R,G,B))/V if V≠0, 0 otherwise
         (G - B)*60/S,  if V=R
H <- 180+(B - R)*60/S,  if V=G
     240+(R - G)*60/S,  if V=B
if H<0 then H<-H+360
On output 0≤V≤1, 0≤S≤1, 0≤H≤360.
The values are then converted to the destination data type:
8-bit images:
V <- V*255, S <- S*255, H <- H/2 (to fit to 0..255)
16-bit images (currently not supported):
V <- V*65535, S <- S*65535, H <- H
32-bit images:
H, S, V are left as is
</pre>
<li>RGB<=>HLS (<code>CV_BGR2HLS, CV_RGB2HLS, CV_HLS2BGR, CV_HLS2RGB</code>)
<pre>
// In case of 8-bit and 16-bit images
// R, G and B are converted to floating-point format and scaled to fit 0..1 range
V<sub>max</sub> <- max(R,G,B)
V<sub>min</sub> <- min(R,G,B)
L <- (V<sub>max</sub> + V<sub>min</sub>)/2
S <- (V<sub>max</sub> - V<sub>min</sub>)/(V<sub>max</sub> + V<sub>min</sub>)        if L < 0.5
     (V<sub>max</sub> - V<sub>min</sub>)/(2 - (V<sub>max</sub> + V<sub>min</sub>))  if L ≥ 0.5

         (G - B)*60/S,  if V<sub>max</sub>=R
H <- 180+(B - R)*60/S,  if V<sub>max</sub>=G
     240+(R - G)*60/S,  if V<sub>max</sub>=B
if H<0 then H<-H+360
On output 0≤L≤1, 0≤S≤1, 0≤H≤360.
The values are then converted to the destination data type:
8-bit images:
L <- L*255, S <- S*255, H <- H/2
16-bit images (currently not supported):
L <- L*65535, S <- S*65535, H <- H
32-bit images:
H, L, S are left as is
</pre>
<li>RGB<=>CIE L*a*b* (<code>CV_BGR2Lab, CV_RGB2Lab, CV_Lab2BGR, CV_Lab2RGB</code>)
<pre>
// In case of 8-bit and 16-bit images
// R, G and B are converted to floating-point format and scaled to fit 0..1 range
// convert R,G,B to CIE XYZ
|X|    |0.412453  0.357580  0.180423|   |R|
|Y| <- |0.212671  0.715160  0.072169| * |G|
|Z|    |0.019334  0.119193  0.950227|   |B|
X <- X/Xn, where Xn = 0.950456
Z <- Z/Zn, where Zn = 1.088754
L <- 116*Y<sup>1/3</sup> - 16  for Y>0.008856
L <- 903.3*Y             for Y<=0.008856
a <- 500*(f(X)-f(Y)) + delta
b <- 200*(f(Y)-f(Z)) + delta
where f(t)=t<sup>1/3</sup> for t>0.008856
f(t)=7.787*t+16/116 for t<=0.008856
where delta = 128  for 8-bit images,
              0    for floating-point images
On output 0≤L≤100, -127≤a≤127, -127≤b≤127
The values are then converted to the destination data type:
8-bit images:
L <- L*255/100, a <- a + 128, b <- b + 128
16-bit images are currently not supported
32-bit images:
L, a, b are left as is
</pre>
<li>RGB<=>CIE L*u*v* (<code>CV_BGR2Luv, CV_RGB2Luv, CV_Luv2BGR, CV_Luv2RGB</code>)
<pre>
// In case of 8-bit and 16-bit images
// R, G and B are converted to floating-point format and scaled to fit 0..1 range
// convert R,G,B to CIE XYZ
|X|    |0.412453  0.357580  0.180423|   |R|
|Y| <- |0.212671  0.715160  0.072169| * |G|
|Z|    |0.019334  0.119193  0.950227|   |B|
L <- 116*Y<sup>1/3</sup> - 16  for Y>0.008856
L <- 903.3*Y             for Y<=0.008856
u' <- 4*X/(X + 15*Y + 3*Z)
v' <- 9*Y/(X + 15*Y + 3*Z)