<h1><a NAME="gdDelaunay">Computation of the Delaunay Grid</a></h1>
Algorithms for the computation of Delaunay grids are well known and
widely available. Many papers (see
<a HREF="thbib.html#Bowyer1981"> [Bowyer1981] </a>,
<a HREF="thbib.html#Watson1981"> [Watson1981] </a>,
<a HREF="thbib.html#Maus1984"> [Maus1984] </a>,
<a HREF="thbib.html#Field1986"> [Field1986] </a>,
<a HREF="thbib.html#Juenger1989"> [Juenger1989] </a>,
<a HREF="thbib.html#Dwyer1991"> [Dwyer1991] </a>,
<a HREF="thbib.html#Mehlhorn1991"> [Mehlhorn1991] </a>,
<a HREF="thbib.html#Dey1992"> [Dey1992] </a>)
deal with the computation of the Delaunay grid for a given set of
points. Most of them are based on Watson's algorithm
(<a HREF="thbib.html#Bowyer1981"> [Bowyer1981] </a>, <a HREF="thbib.html#Watson1981"> [Watson1981] </a>).
The main problems of this algorithm are time efficiency, robustness
against rounding errors, and the handling of the boundary.
<h2><a NAME="gdDelRobust">Robustness of the Delaunay Algorithm </a></h2>
<p>Dey et al. (see <a HREF="thbib.html#Dey1992"> [Dey1992] </a>) describe a robust
algorithm for Delaunay triangulation in 3D, based on a modification of
Watson's algorithm. We use another variant of this algorithm which is
also robust, but yields slightly different results.
<p>The drawback of our algorithm is that the resulting grid may contain
isolated nodes if the input set contains nodes whose mutual distance is
too small. But this is the necessary consequence of the main advantage
of our variant: every tetrahedron in the resulting grid has a positive
volume in exact as well as in finite precision arithmetic.
This follows directly from the algorithm described below: only
tetrahedra whose computed volume is greater than some epsilon will be
created. If this epsilon is greater than the possible error of the
volume computation, the exact volume is at least the computed volume
minus this error and therefore positive.
<p>For an input set containing points with very small mutual distances
it is obviously impossible to create such a grid. We consider this
property of our algorithm not as a failure, but as a regularization of
the input point set. The isolated nodes are easily detected in the
resulting grid and can be removed from the data structure if necessary.
<p>The main technique we use to obtain robust results is to replace
each exact test with an epsilon-test. If the epsilon is greater than
the possible error, one of the two possible answers of such a test is
guaranteed to be correct in exact arithmetic as well. We use two such
tests (a sketch of both guarded tests follows this list):
<ol>
<li> The circumscribing sphere inequality C(t,p) detects whether a
given point p lies inside the circumscribing sphere of a tetrahedron
t. We use a modified variant of this test, so that for every point
which is outside the sphere in exact arithmetic we obtain the correct
answer.
<li> The volume inequality V(t) tests whether the oriented volume of a
tetrahedron t is positive. We use a modified variant, so that a volume
which is really negative will always be detected as negative.
</ol>
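<p>To make this concrete, the following C++ fragment sketches how such
epsilon-guarded tests can look. It is only an illustration under stated
assumptions, not the actual code of the program: the names Point,
orientedVolume, guardedPositiveVolume, guardedInsideSphere and
insideSphereMeasure, as well as the way the error bound eps is supplied,
are chosen for this example; ibgDouble is the intermediate-precision
type mentioned further below (here simply mapped to double).
<pre>
typedef double ibgDouble;        /* intermediate-precision type, see below (assumed mapping) */

struct Point { ibgDouble x, y, z; };

/* Oriented volume of the tetrahedron (a,b,c,d), up to the constant factor 1/6. */
ibgDouble orientedVolume(Point a, Point b, Point c, Point d)
{
    ibgDouble bx = b.x - a.x, by = b.y - a.y, bz = b.z - a.z;
    ibgDouble cx = c.x - a.x, cy = c.y - a.y, cz = c.z - a.z;
    ibgDouble dx = d.x - a.x, dy = d.y - a.y, dz = d.z - a.z;
    return bx * (cy * dz - cz * dy)
         - by * (cx * dz - cz * dx)
         + bz * (cx * dy - cy * dx);
}

/* Guarded volume test V(t): "positive" is reported only if the computed
   volume exceeds eps.  If eps is larger than the possible rounding error,
   the exact volume is positive whenever this function returns true. */
bool guardedPositiveVolume(Point a, Point b, Point c, Point d, ibgDouble eps)
{
    return orientedVolume(a, b, c, d) > eps;
}

/* A quantity that is positive exactly when p lies inside the circumscribing
   sphere of (a,b,c,d); one possible formula is sketched further below. */
ibgDouble insideSphereMeasure(Point a, Point b, Point c, Point d, Point p);

/* Guarded circumsphere test C(t,p): "inside" is reported only if the
   computed measure exceeds eps, so a point that is outside the sphere in
   exact arithmetic is never reported as inside. */
bool guardedInsideSphere(Point a, Point b, Point c, Point d, Point p, ibgDouble eps)
{
    return insideSphereMeasure(a, b, c, d, p) > eps;
}
</pre>
The point of the epsilon guard is that a "yes" answer is reliable, while a
"no" answer may be overly cautious; this caution is exactly what can leave
some input nodes isolated, as discussed above.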
<p>Now let's consider the algorithm in detail. We use the basic
scheme of Watson's algorithm: we start with a simple "infinite" grid
that contains all of the input points. This big start grid frees us
from treating the special case of points outside the grid. Then we
insert the points of the given input set into the grid one by one.
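<p>Schematically, this outer loop can be written as follows. The
interface names Grid, initInfiniteStartGrid, insertPoint and
removeIsolatedNodes are illustrative assumptions, not the program's
actual interface; Point is taken from the sketch above.
<pre>
struct Grid {                                 /* placeholder for the real grid structure */
    void initInfiniteStartGrid(const Point *input, int n);
    void removeIsolatedNodes();
};

void insertPoint(Grid *grid, Point p);        /* locate, mark, remove, retriangulate     */

/* Outline of the insertion loop of Watson's scheme (illustrative only). */
void buildDelaunayGrid(Grid *grid, const Point *input, int n)
{
    grid->initInfiniteStartGrid(input, n);    /* "infinite" start grid around all points */
    for (int i = 0; i != n; ++i)
        insertPoint(grid, input[i]);          /* insert the input points one by one      */
    grid->removeIsolatedNodes();              /* optional: remove isolated nodes, see above */
}
</pre>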
<p> To insert a new point into the grid, we first have to find at
least one tetrahedron that has to be removed. We use the neighbourhood
search algorithm described later to find it.
<p> Now, beginning with this tetrahedron, we mark neighbouring
tetrahedra for removal. In the standard algorithm, the only criterion
for removing a tetrahedron t is the circumscribing sphere criterion
C(t,p): a neighbour of t has already been marked for removal and the
new point p lies inside the circumscribing sphere of t. We use this
criterion in the modified form described above.
<p> But we also use a second criterion: a neighbour of t has already
been marked for removal and the volume of the tetrahedron defined by
the face to this neighbour and the new point p is negative. Again, we
use the modified form described above. In exact arithmetic such
tetrahedra would already be removed by the first criterion; for the
modified criteria this need not be the case.
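<p>A sketch of this marking step is given below. It reuses the guarded
tests from the sketch above; the tetrahedron record (fields vertex,
neighbour, marked) and the orientation of the shared face are
assumptions of the example, not the program's actual data structure.
<pre>
struct Tetrahedron {
    Point        vertex[4];      /* the four corners                                   */
    Tetrahedron *neighbour[4];   /* neighbour[f] shares the face opposite vertex[f];   */
                                 /* 0 at the outer boundary of the start grid          */
    bool         marked;         /* marked for removal during the current insertion    */
};

/* Grow the cavity around the new point p, starting from an already marked
   tetrahedron t.  A further tetrahedron nb is marked if it has a marked
   neighbour and either (1) p lies inside its circumsphere or (2) the
   tetrahedron formed by the shared face and p is not guaranteed to have
   positive volume (both taken in the guarded sense defined above). */
void markCavity(Tetrahedron *t, Point p, ibgDouble eps)
{
    for (int f = 0; f != 4; ++f) {
        Tetrahedron *nb = t->neighbour[f];
        if (nb == 0) continue;
        if (nb->marked) continue;
        /* vertices of the face shared by t and nb, assumed to be oriented so
           that a valid new tetrahedron (face, p) gets a positive volume      */
        Point fa = t->vertex[(f + 1) % 4];
        Point fb = t->vertex[(f + 2) % 4];
        Point fc = t->vertex[(f + 3) % 4];
        bool insideSphere = guardedInsideSphere(nb->vertex[0], nb->vertex[1],
                                                nb->vertex[2], nb->vertex[3], p, eps);
        bool badVolume = !guardedPositiveVolume(fa, fb, fc, p, eps);
        if (insideSphere || badVolume) {
            nb->marked = true;
            markCavity(nb, p, eps);  /* continue the search from the newly marked one */
        }
    }
}
</pre>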
<p>We remove all marked tetrahedra and create new tetrahedra for each
face between a removed and a non-removed tetrahedron. Due to the
modified volume criterion, the volumes of the new tetrahedra are
positive. (Note that there may be input points which are not part of
the resulting grid.)
The robustness of the neighbourhood search algorithm will be
considered later. The modified volume test guarantees that each newly
created tetrahedron has a positive volume in exact arithmetic. As a
consequence, the algorithm always leads to a topologically correct
grid.
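<p>A minimal sketch of this step, building on the structures above: the
cavity is passed as an array of marked tetrahedra, and
createTetrahedron and removeMarkedTetrahedra stand for whatever
operations the grid data structure actually provides; both names are
assumptions of the example.
<pre>
/* Assumed operations of the grid data structure (declarations only). */
void createTetrahedron(Grid *grid, Point a, Point b, Point c, Point apex);
void removeMarkedTetrahedra(Grid *grid);

/* For every face between a marked (removed) and a non-removed tetrahedron,
   create a new tetrahedron with this face as base and the new point p as
   apex.  The modified volume criterion used during marking ensures that
   every such tetrahedron has a computed volume greater than eps. */
void retriangulateCavity(Grid *grid, Tetrahedron **cavity, int cavitySize, Point p)
{
    for (int i = 0; i != cavitySize; ++i) {
        Tetrahedron *t = cavity[i];
        for (int f = 0; f != 4; ++f) {
            Tetrahedron *nb = t->neighbour[f];
            if (nb == 0) continue;       /* outer boundary of the start grid      */
            if (nb->marked) continue;    /* this face lies inside the cavity      */
            createTetrahedron(grid, t->vertex[(f + 1) % 4], t->vertex[(f + 2) % 4],
                              t->vertex[(f + 3) % 4], p);
        }
    }
    removeMarkedTetrahedra(grid);        /* delete the cavity tetrahedra          */
    /* afterwards the neighbour pointers of the new tetrahedra are linked up */
}
</pre>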
<p>Independently of this general robustness result for our algorithm,
we use also some other strategies to minimize the arithmetical errors
in our computations:
<dl>
<dt>Using exact results if possible.
<dd>If a node is located on an edge, we know the exact value of two of
its three coordinates. But the direction-independent code computes
these values anyway, so slightly different results may appear. That's
why we explicitly overwrite these values with the exact results. Thus,
for the most typical degenerate cases (rectangles) we minimize the
error; often we even obtain exact zero results.
<dt>A special formula for the circumsphere test.
<dd>For this central test of the Delaunay algorithm we do not compute
the centre and then the distance of the point to this centre; instead
we compute the position on the sphere opposite to the first node and
use a scalar product test (see the sketch after this list). This is
much more accurate in the neighbourhood of the first node and allows
us to handle one of the typical degenerate situations in our
algorithm: tetrahedra which contain the artificial "infinite"
points. For these tetrahedra, the distances between the "normal"
points are very small compared with the distances to the "infinite"
points. The algorithm guarantees that after the first step the first
node is always a normal node.
<dt>High intermediate precision.
<dd>The idea is to use a higher precision for intermediate
computations than for representing the data. To make this
machine-independent we use two different floating point types,
ibgFloat and ibgDouble, which can easily be redefined.
</dl>
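<p>The following sketch illustrates the last two items of this list.
The concrete mapping of ibgFloat and ibgDouble to float and double is
an assumption; in the program they can be redefined. The function
insideSphereMeasure was declared in the first sketch above; the way the
point opposite to the first node is obtained here, via the offset m of
the circumcentre from the first node computed by Cramer's rule, is only
one possibility and not necessarily what the program does.
<pre>
typedef float  ibgFloat;     /* precision used for storing the data (assumed mapping)    */
typedef double ibgDouble;    /* higher precision for intermediate computations (assumed) */

struct Vec { ibgDouble x, y, z; };

Vec vdiff(Point p, Point q) { Vec r = { p.x - q.x, p.y - q.y, p.z - q.z }; return r; }

ibgDouble vdot(Vec u, Vec v) { return u.x * v.x + u.y * v.y + u.z * v.z; }

Vec vcross(Vec u, Vec v)
{
    Vec r = { u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x };
    return r;
}

/* Positive exactly when p lies inside the circumsphere of the non-degenerate
   tetrahedron (a,b,c,d).  Instead of comparing |p - centre| with the radius,
   the point a2 of the sphere opposite to the first node a is used:
       (a2 - p) . (p - a)  =  r^2 - |p - centre|^2 .
   For p close to a this scalar product form avoids the cancellation of two
   large, nearly equal numbers, which is exactly the situation of tetrahedra
   containing the artificial "infinite" points. */
ibgDouble insideSphereMeasure(Point a, Point b, Point c, Point d, Point p)
{
    Vec u = vdiff(b, a), v = vdiff(c, a), w = vdiff(d, a);    /* edges from a        */
    Vec nu = vcross(v, w), nv = vcross(w, u), nw = vcross(u, v);
    ibgDouble det = vdot(u, nu);                              /* 6 * oriented volume */
    ibgDouble ru = 0.5 * vdot(u, u), rv = 0.5 * vdot(v, v), rw = 0.5 * vdot(w, w);
    Vec m;                                  /* m = circumcentre - a (Cramer's rule)  */
    m.x = (ru * nu.x + rv * nv.x + rw * nw.x) / det;
    m.y = (ru * nu.y + rv * nv.y + rw * nw.y) / det;
    m.z = (ru * nu.z + rv * nv.z + rw * nw.z) / det;
    /* a2 = a + 2*m;  (a2 - p).(p - a) = 2*m.(p - a) - |p - a|^2 */
    Vec q = vdiff(p, a);
    return 2.0 * vdot(m, q) - vdot(q, q);
}
</pre>
Written in this form, the measure is dominated by the term with the factor
(p - a) when p is close to the first node, so the huge radii of the spheres
around the "infinite" points do not swamp the result.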
<h2><a NAME="ggneighbour">Neighbourhood Search Algorithm </a></h2>
Let's describe the <b>neighbourhood search</b> algorithm in a little
more detail. We use this algorithm in different parts of our program:
for the Delaunay grid generation and for the search of the element
containing a given point if we want to use a grid as input for the
contravariant geometry description.
<p>We have a <b>start element</b> and a <b>target point</b>. We have to
find the element containing the target point. In finite precision
arithmetic, we have to find an element such that some
epsilon-neighbourhood of the element contains the point.
<p>In each step of the algorithm, we have a <b>current element</b>
and consider its neighbour elements. In exact arithmetic, let's
denote a neighbour element as <b>good</b> if the target point lies on
the neighbour's side (relative to the plane between the element and
its neighbour).
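<p>Below is a sketch of one possible realization of this walk, reusing
the Tetrahedron record from the sketches above. It already uses the
epsilon-guarded side test described in the next paragraph; sideMeasure
is an assumed helper which is positive exactly when the target lies on
the neighbour's side of the shared face, and the termination and
robustness questions are left to the discussion in the text.
<pre>
/* Positive exactly when target lies on the neighbour's side of the plane
   through the face shared by t and t->neighbour[f] (assumed helper; it can
   be built from orientedVolume with a suitable face orientation). */
ibgDouble sideMeasure(Tetrahedron *t, int f, Point target);

/* Walk from start towards the element containing target: as long as some
   neighbour is good in the guarded sense (sideMeasure exceeds eps), step
   into such a neighbour; otherwise the current element is accepted. */
Tetrahedron *findContainingElement(Tetrahedron *start, Point target, ibgDouble eps)
{
    Tetrahedron *current = start;
    for (;;) {
        Tetrahedron *next = 0;
        for (int f = 0; f != 4; ++f) {
            Tetrahedron *nb = current->neighbour[f];
            if (nb == 0) continue;
            if (sideMeasure(current, f, target) > eps) { next = nb; break; }
        }
        if (next == 0) return current;   /* no good neighbour found             */
        current = next;                  /* continue the walk in that neighbour */
    }
}
</pre>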
<p>For finite precision arithmetic, we have to be more careful. We
use a test to detect on which side of the plane the target point is
located. Because the test is not exact, we compare not with 0, but
with some epsilon, so that we obtain the following property: if the
test detects that the point lies on the neighbour's side of the plane,
then it lies on this side in exact arithmetic as well. So the
neighbour is good if this test detects it, and the neighbour is named
<b>allowed</b>