are unable to say exactly why. And if we can't rigorously formulate an 
invariant, and prove its validity, chances are intuition will mislead us once in 
a while. 
<P>Because of the irregular structure characteristic of state spaces for which 
we use exhaustive search, it is not easy to formulate pruning insight in such a 
way that a program can act on it. As a test field for experiments, we have 
chosen King and Pawn chess endgames. There is much expert knowledge about this 
domain, explained in excellent books, and there are databases of exact values to 
compare against when there are only a few Pawns on the board. 
<P>The peculiar difficulty of King (K) and Pawn (P) endgames comes from the fact 
that Pawns can be promoted to any other piece: Queen (Q), Rook (R), Bishop (B), 
Knight (N). Thus an exhaustive analysis of KP endgames with a total of p Ps 
potentially calls upon all endgames with 2 Ks and p pieces of the right color. 
But the vast majority of KP endgames are decided soon after (or even before) a P 
promotion, because the material balance typically changes drastically - one 
party has a Q, the other does not. Thus, storing all the support databases of 
other piece endgames is an extremely expensive overhead when compared to their 
rare use. 
<P>We are experimenting with simple and often safe heuristics of the type: In a 
KP endgame, if Black promotes a P to a Q, and within x plies can prevent White 
from promoting one of its Ps, Black wins. This particular heuristic has 
well-known exceptions, such as when a White P on the seventh rank ensures a 
draw. Thus it is supplemented by other heuristics that, jointly, capture 
elementary chess lore about KP endgames. 
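<P>The "within x plies" verification in such a heuristic is an ordinary depth-limited adversarial search. The sketch below shows only the pattern, not our chess program: a toy subtraction game stands in for the KP position, and the names (mover_wins, exact) are ours for illustration.

```python
def mover_wins(state, moves, lost, heuristic, plies):
    """Depth-limited negamax: can the side to move force a win?
    Past the `plies` horizon, fall back on the heuristic verdict."""
    if lost(state):
        return False               # no move left: the previous player won
    if plies == 0:
        return heuristic(state)    # horizon reached: trust the heuristic
    # The mover wins if some move leads to a state the opponent cannot win.
    return any(not mover_wins(s, moves, lost, heuristic, plies - 1)
               for s in moves(state))

# Toy stand-in for a KP position: remove 1 or 2 counters; taking the
# last counter wins.  Theory: the side to move wins iff n % 3 != 0.
moves = lambda n: [n - k for k in (1, 2) if n >= k]
lost = lambda n: n == 0
exact = lambda n: n % 3 != 0       # plays the role of the frontier verdict
```

In the chess setting the frontier verdict is the fallible promotion heuristic rather than an exact rule, which is exactly where the error rate discussed in this section enters.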
<P>The reduction in the size of state spaces and compute time effected by 
heuristics that obviate the need for support databases is drastic indeed, as the 
following examples show. KPPKP stands for the state space of endgames where 
White has 2 Ps and Black has 1 P. <BR><BR>
<TABLE border=1>
  <TBODY>
  <TR>
    <TH colSpan=2>State spaces
    <TH colSpan=2>Pawns only 
  <TR>
    <TD>KK + 2 pieces:
    <TD>350 M states
    <TD>KPKP:
    <TD>4.5 M states 
  <TR>
    <TD>KK + 3 pieces:
    <TD>100 G states
    <TD>KPPKP:
    <TD>250 M states 
  <TR>
    <TD>KK + 4 pieces:
    <TD>30 T states
    <TD>KPPKPP:
    <TD>3 G states </TR></TBODY></TABLE>
<P>Comparisons of exact databases with simple heuristics for KPKP combined with 
a 2-4 ply search on P promotion have shown that the latter contain about 2.5% 
errors. This is merely one data point along the trade-off curve between accuracy 
on one hand, and space and time on the other. Improved heuristics, larger 
databases, more forward search at the fringe of P promotion are just some of the 
parameters to play with in an attempt to push exhaustive search beyond the 
boundaries wherein exact calculation is feasible. <A name=7.2></A>
<H4>7.2. Enumeration of Maximally Elastic Graphs (A. Marzetta)</H4>
<P>We study weighted graphs that can be embedded in Euclidean space in such a 
way as to preserve each edge's weight as the distance between its two endpoints. Such 
questions arise in a variety of layout problems. In automatic graph drawing, for 
example, vertices are to be placed in the plane so as to approximate desired 
pairwise distances. The analogous 3-d problem arises in the distance geometry 
approach to molecular modeling, where edge weights are approximate distance 
measurements. The concept of elastic embeddability <A 
href="http://nobi.ethz.ch/febi/ex_search_paper/paper.html#Nievergelt95">[Nievergelt 
95]</A> is designed to deal with distances subject to error. Elastic graphs are 
related to generically rigid graphs known in structural engineering. In 
particular, maximally elastic graphs are the same as minimally rigid (isostatic) 
graphs <A href="http://nobi.ethz.ch/febi/ex_search_paper/paper.html#Tay95">[Tay 
95]</A>. Although graphs isostatic in the plane are well understood, the 
analogous problem in 3 or more dimensions has generated interesting conjectures 
that might be settled by a search for counterexamples. <A name=7.3></A>
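<P>In the plane, minimal rigidity has a purely combinatorial characterization (Laman's theorem): |E| = 2|V| - 3, and every subset of k &gt;= 2 vertices spans at most 2k - 3 edges. A brute-force sketch of this test (the function name is ours; the exponential subset enumeration is acceptable only for the small graphs a search visits):

```python
from itertools import combinations

def is_laman(vertices, edges):
    """Brute-force Laman test for minimal rigidity in the plane:
    |E| = 2|V| - 3, and every k-subset of vertices (k >= 2) spans
    at most 2k - 3 edges.  Exponential in |V|."""
    if len(edges) != 2 * len(vertices) - 3:
        return False
    for k in range(2, len(vertices) + 1):
        for subset in map(set, combinations(vertices, k)):
            spanned = sum(1 for u, v in edges if u in subset and v in subset)
            if spanned > 2 * k - 3:
                return False
    return True

# Example: the triangle, the smallest isostatic graph in the plane.
assert is_laman([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
```

Polynomial-time tests (pebble games) exist; the point here is only the characterization that an enumeration can prune on.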
<H4>7.3. Primes in Intervals of Fixed Length (R. Gasser, J. Waldvogel)</H4>
<P>Although the distribution of prime numbers has been studied empirically and 
theoretically for centuries, it remains an inexhaustible source of interesting 
conjectures to be attacked by search. Let <EM>P(x)</EM> denote the number of 
primes &lt;= x. It follows from the prime number theorem, P(x) ~ x / ln(x), that 
the average density of primes decreases with increasing x. Specifically, let 
R(x) = max(P(y+x) - P(y)), where the maximum is over all y &gt;= 2, denote the 
maximum possible number of primes in an interval (y, y+x] of length x &gt; 0. 
Thus one might expect R(x) &lt;= P(x). But although the average decreases, the 
peak densities do not - surprisingly, they even increase in places. Using sieve 
techniques, <A 
href="http://nobi.ethz.ch/febi/ex_search_paper/paper.html#GW95">[Gasser, 
Waldvogel 95]</A> prove, among other results, that for all x, 10 &lt; x &lt; 
1416, R(x) &lt;= P(x) as expected. But x = 3250 is the first known 
counterexample where R(x) &gt; P(x). Apparently, the densest clusters of primes in an 
interval of fixed length x occur starting at huge values of y. <A name=7.4></A>
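<P>At small scale these quantities are easy to compute by brute force. The following sketch (the bound 10000 is an arbitrary illustration; the scan only bounds R(x) from below, since the densest clusters may start at values of y far beyond any sieve) counts the primes in every window of length x:

```python
def sieve(limit):
    """Boolean table: is_prime[n] for 0 <= n <= limit."""
    is_prime = [False, False] + [True] * (limit - 1)
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, limit + 1, n):
                is_prime[m] = False
    return is_prime

def window_record(x, bound):
    """Max number of primes in an interval (y, y+x], 2 <= y <= bound - x.
    Only a lower bound on R(x): the true maximum may occur at huge y."""
    is_prime = sieve(bound)
    return max(sum(is_prime[y + 1 : y + x + 1])
               for y in range(2, bound - x + 1))

P_100 = sum(sieve(100))              # P(100) = 25
record = window_record(100, 10_000)  # densest length-100 window found
```

Reaching the region where the x = 3250 counterexample is realized is, of course, exactly what requires the sieve techniques cited above rather than a direct scan.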
<H4>7.4. Quo Vadis Exhaustive Search?</H4>
<P>Brute-force techniques, as the name reveals, have never been considered 
elegant instruments in a computer scientist's toolbox. When a novice programmer 
used them, the computer science community would automatically assume, often 
correctly, that an algorithms expert could have designed a much more 
efficient program. But brute-force techniques have survived as a research niche 
for half a century because a few top experts have always been intrigued by this 
unconventional approach to problem solving. And the complexity of problems 
solved has slowly but steadily grown in proportion to the power of available 
hardware. 
<P>The question is whether exhaustive search will remain a niche, used primarily 
for exotic topics such as games or number theory, or become a mainstream 
approach to a wide variety of applications. Instead of answering this question, 
let us close with one argument against and one in favor of an expanded role. 
<P>As stated in section 1, the effectiveness or ``intelligence'' of a search 
procedure is due mostly to problem-specific knowledge, not to differences 
between one or another general-purpose search procedure. But to the extent that 
a search uses properties specific to one problem to prune the state space, it 
becomes less of a brute-force technique by definition. From this point of view, 
exhaustive search applies to problems we do not yet understand well, and is 
forever being replaced by selective search as knowledge advances. 
<P>The argument for a more important role of brute-force techniques consists of 
two simple observations. First, we will never run out of problems whose structure 
we understand so poorly that exhaustive search is the only available approach. 
Second, computer science will continue to be technology-driven as it has been 
for decades; raw computing power will continue to increase, in particular with 
increased use of distributed and parallel computation. Thus the question of 
whether newly available raw power can solve new problems will always be with 
us. 
<P>In any case, brute-force techniques are part of the computer science culture. 
Basic techniques of exhaustive search belong in introductory courses on 
algorithms and data structures along with the more traditional polynomial-time 
algorithms. <BR><BR><BR><A name=ref></A>
<H3>References</H3>
<DL><A name=Allen89></A>
  <DT>[Allen 89] 
  <DD>J. D. Allen: A Note on the Computer Solution of Connect-Four, Heuristic 
  Programming in Artificial Intelligence 1: the first computer Olympiad (eds. 
  D.N.L. Levy and D.F. Beal), Ellis Horwood, Chichester, England. (1989) 134-135 
  <A name=Allis94></A>
  <DT>[Allis 94] 
  <DD>L.V. Allis: Searching for Solutions in Games and Artificial Intelligence, 
  Doctoral dissertation, University of Limburg, Maastricht, (1994). <A 
  name=Appell77></A>
  <DT>[Appell 77] 
  <DD>K. Appell and W. Haken: The Solution of the Four-Color-Map Problem, 
  Scientific American. (Oct. 1977) 108-121. <A name=Avis92></A>
  <DT>[Avis 92] 
  <DD>D. Avis and K. Fukuda: Reverse Search for Enumeration, Report, U. Tsukuba. 
  To appear, Discrete Applied Math. <A name=Berlekamp82></A>
  <DT>[Berlekamp 82] 
  <DD>E. Berlekamp, J.H. Conway and R.K. Guy: Winning Ways for your Mathematical 
  Plays, Academic Press, London, England. (1982) <A name=Culberson94></A>
  <DT>[Culberson 94] 
  <DD>J. Culberson and J. Schaeffer: Efficiently Searching the 15-Puzzle, 
  internal report, University of Alberta, Edmonton, Canada. <A 
name=Gasser90></A>
  <DT>[Gasser 90] 
  <DD>R. Gasser: Applying Retrograde Analysis to Nine Men's Morris, Heuristic 
  Programming in Artificial Intelligence 2: the second computer Olympiad (eds. 
  D.N.L. Levy and D.F. Beal), Ellis Horwood, Chichester, England. (1990) 161-173 
  <A name=Gasser91></A>
  <DT>[Gasser 91] 
  <DD>R. Gasser: Endgame Database Compression for Humans and Machines, Heuristic 
  Programming in Artificial Intelligence 3: the third computer Olympiad (eds. 
  H.J. van den Herik and L.V. Allis), Ellis Horwood, Chichester, England. (1990) 
  180-191 <A name=Gasser94></A>
  <DT>[Gasser 94] 
  <DD>R. Gasser, J. Nievergelt: Es ist entschieden: Das Muehlespiel ist 
  unentschieden, Informatik Spektrum, 17, No.5, Okt 1994. 314-317 <A 
  name=Gasser95></A>
  <DT>[Gasser 95] 
  <DD>R. Gasser: Harnessing Computational Resources for Efficient Exhaustive 
  Search, Doctoral dissertation, ETH Zurich, 1995. <A name=GW95></A>
  <DT>[Gasser, Waldvogel 95] 
  <DD>R. Gasser, J. Waldvogel: Primes in intervals of fixed length, in 
  preparation. <A name=Hansson92></A>
  <DT>[Hansson 92] 
  <DD>O. Hansson, A. Mayer and M. Yung: Criticizing Solutions to Relaxed Models 
  Yields Powerful Admissible Heuristics, Information Sciences 63. (1992) 207-227 
  <A name=Herik86></A>
  <DT>[Herik 86] 
  <DD>H.J. van den Herik and I.S. Herschberg: A Data Base on Data Bases, ICCA 
  Journal 9(1), 29. <A name=Horgan93></A>
  <DT>[Horgan 93] 
  <DD>J. Horgan: The Death of Proof, Scientific American. (1993) 74-82 <A 
  name=Knuth75></A>
  <DT>[Knuth 75] 
  <DD>D. E. Knuth and R. W. Moore: An analysis of Alpha-Beta Pruning, Artificial 
  Intelligence, Vol. 6. (1975) 293-326 <A name=Kociemba92></A>
  <DT>[Kociemba 92] 
  <DD>H. Kociemba: Close to God's Algorithm, Cubism for Fun 28. (1992) 10-13 <A 
  name=Korf85></A>
  <DT>[Korf 85] 
  <DD>R.E. Korf: Depth-first Iterative Deepening: An Optimal Admissible Tree 
  Search, Artificial Intelligence, Vol. 27. (1985) 97-109 <A name=Lake94></A>
  <DT>[Lake 94] 
  <DD>R. Lake, J. Schaeffer and P. Lu: Solving Large Retrograde Analysis 
  Problems Using a Network of Workstations, internal report, University of 
  Alberta, Edmonton, Canada. 
