Dear all,
Regarding the few email exchanges that have occurred about calibration, I
feel that some information needs to be given.
There are a few important things to know about when calibrating a
camera, regardless of the optimization engine used. Here are
the main points to worry about, in DECREASING order of importance:
1- The input data: the more images the better
....and I am saying MUCH better. If only a few images are
available, then it is necessary to reduce the calibration model.
Aside from the distortion model, there are a total of 5 unknowns
in the problem of calibration: the focal length, the aspect ratio,
the x and y position of the principal point, and the skew.
Usually, the skew is assumed to be equal to zero (for currently
manufactured sensors, this is a good assumption).
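For concreteness, here is a minimal sketch, in Python/NumPy, of how these
five parameters are usually arranged into a 3x3 intrinsic matrix; the numeric
values below are purely illustrative, not from any real calibration:

    import numpy as np

    fc_x, fc_y = 1000.0, 1000.0   # focal length in pixels; fc_y/fc_x is the aspect ratio
    cc_x, cc_y = 320.0, 240.0     # principal point (here the center of a 640x480 image)
    alpha = 0.0                   # skew coefficient, assumed zero as discussed above

    # Intrinsic matrix mapping normalized camera coordinates to pixel coordinates
    K = np.array([[fc_x, alpha * fc_x, cc_x],
                  [0.0,  fc_y,         cc_y],
                  [0.0,  0.0,          1.0]])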
Out of the four remaining coefficients, some are sometimes
simply not observable, depending on the number of
"independent" images available for calibration.
one image -> the principal point is not observable; one can only
hope to estimate the focal and the aspect ratio, and only if
the orientation of the plane is "generic". If the plane is parallel
to the image plane, nothing can be estimated. If the x or y
coordinate of the surface normal is zero, then the aspect ratio
cannot be estimated.
two images -> if the two planes have different surface normals,
and if the plane orientations are "generic", then theoretically,
everything can be estimated! focal, aspect ratio, principal point
(no possible skew.. but we do not care, it is zero in most cases).
HOWEVER!!! we are at the boundary of what can be estimated: we
have JUST enough data, and therefore, the errors of estimation
will be large (very dependent on the accuracy of the location of
the points on the image). Therefore, I DO NOT RECOMMEND
estimating a full model with 2 images only.
three images -> things are getting better. A full model is possible
(including skew) if the plane orientations are all distinct,
but everything is still very sensitive to input noise.
I recommend using at least 10 images for a reasonable quality of
calibration. Essentially, it all depends on the application.
If one is looking for a precise Euclidean reconstruction of the scene
(when designing a 3D scanner for example), then one should put a lot
of care into the calibration and acquire 15 to 20 images.
If one only cares about rough calibration (sometimes, for designing
3D user interfaces, it is just enough to get a rough idea of
Euclidean space), then 5 images should be enough.
In fact, I have calibrated before with one image only, keeping a VERY
elementary model -> one focal, no aspect ratio, no principal point,
no skew, and at most a first-order radial distortion model; a sketch of
such a reduced model is given below.
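If one happens to use OpenCV rather than the Matlab toolbox, a reduced model
of that kind can be approximated with calibration flags. This is only a sketch
under my own assumptions: the particular flag combination is my choice, not
part of this note, and the object/image point lists are assumed to come from a
prior corner-detection step (see the sketch under point 2 below):

    import numpy as np
    import cv2

    def calibrate_reduced(objpoints, imgpoints, image_size):
        """Reduced model: one focal, fixed principal point, no tangential
        distortion, and only the first radial coefficient k1."""
        flags = (cv2.CALIB_FIX_ASPECT_RATIO      # single focal: fy/fx kept at the input ratio (1 here)
                 | cv2.CALIB_FIX_PRINCIPAL_POINT # principal point frozen at the image center
                 | cv2.CALIB_ZERO_TANGENT_DIST   # no tangential distortion
                 | cv2.CALIB_FIX_K2              # keep only the first radial coefficient k1;
                 | cv2.CALIB_FIX_K3)             # k2 and k3 stay at zero
        K_init = np.eye(3)   # only the fx/fy ratio (1.0) is read from this initial guess
        return cv2.calibrateCamera(objpoints, imgpoints, image_size,
                                   K_init, None, flags=flags)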
Basically, one should not talk about accuracy of calibration
without specifying the number of images used, and the
conditions of acquisition (for example, the images must not be
acquired with the calibration planes all sitting flat on
a single plane... THIS IS A SINGULAR CONFIGURATION! ..
you are then not adding information by using more images!! believe me!).
As for the distortion model: for all common lenses, a 4th order radial
distortion model with tangential distortion is really good enough.
For fisheye lenses, I recommend pushing to 6th order, but not beyond.
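For readers using OpenCV's polynomial distortion model, my reading (an
assumption, not stated in this note) is that "4th order radial + tangential"
corresponds to estimating k1, k2, p1, p2 while freezing the r^6 coefficient
k3, and "6th order" means also freeing k3:

    import cv2

    # Common lens: radial terms up to r^4 (k1, k2) plus tangential (p1, p2); keep k3 at zero.
    flags_common_lens = cv2.CALIB_FIX_K3

    # Fisheye-like lens, still under the same polynomial model:
    # let k1, k2, p1, p2 and the r^6 term k3 all be estimated (no fixing flags).
    flags_fisheye_lens = 0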
Let me emphasize: THE KEY IS INPUT DATA! THE MORE
THE BETTER. Without data (images), no matter what optimization
engine you use, you will go nowhere.
Therefore, I recommend everyone doing camera calibration to
revisit their input data based on what I just said.
2- Of course, it is not only important to have images, it is also very
important to extract the corners accurately.
That is the second most important thing to look at: the corner detector.
In addition, the location of the corner must be as insensitive as possible
to slight image defocus. This is why I recommend using
symmetric corner features, such as checkerboard corners, with as much
contrast as possible (black and white if possible... but try not to saturate the
images at either of the two rails 0 or 255... this leads to bias... yes, I
have also looked at that!).
As for the accuracy of detection, you should achieve 0.1 pixel
errors without much effort. Our detector achieves this kind of
accuracy.
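As an illustration, here is how such sub-pixel checkerboard corner extraction
is commonly done with OpenCV (this uses OpenCV's detector rather than ours,
and the 9x6 pattern size, file names, and window sizes are assumptions of mine):

    import glob
    import cv2

    pattern = (9, 6)   # inner corners per row and column of the checkerboard (assumed)
    # Termination criteria for the sub-pixel refinement loop.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

    corners_per_image = []
    for fname in glob.glob("calib_*.png"):        # hypothetical image file names
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue
        # Refine each detected corner to sub-pixel accuracy in an 11x11 window.
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        corners_per_image.append(corners)

Keeping the target black-and-white but unsaturated, as advised above, is what
lets this refinement settle well below a pixel.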
3- Now, only in 3rd place in importance is the accuracy of the design of the
calibration rig in 3D.
Yes! It may sound surprising to many of you, but this is true!
Calibration errors are at least one order of magnitude more
sensitive to corner detection errors than to 3D rig design errors.
Of course, this does not mean that all of you are free to design your
checkerboards as badly as possible, but I want to mention that there
have been a few research papers written on the subject: it is much more
important to put effort into making sure that the input images are clean,
the plane locations (orientations) are as generic as possible - ideally
spanning the entire orientation sphere - and the corners are very
accurately extracted.
I hope these comments help you all.
I am enclosing a new version of the Matlab calibration toolbox.
It is very user friendly, and gives a lot of feedback on the results,
including recommendations on which model to use depending on your data,
number of images...
It also returns estimates of the uncertainties on the calibration parameters.
A slightly outdated documentation is attached to OpenCV. Look at it for
usage information (it comes in the form of HTML pages), but do not use the
version of the Matlab toolbox that is on it; it is an older version.
The one I am sending to you right now is a lot more recent and complete.
A more updated documentation has already been written, and will be distributed
soon.
<<TOOLBOX_calib.zip>>
Send me feedback.
-Jean-Yves Bouguet, Ph.D.
Senior Researcher
Microprocessor Research Labs
Intel Corporation