  <I>n.lowerbound</I> := <I>g</I>; store <I>n.lowerbound</I>;<BR><B>return</B> 
  <I>g</I>; </UL>Transposition table access takes place in the <B>retrieve</B> 
and <B>store</B> calls. The lines around retrieve make sure that if a value is 
present in the table, it is used, instead of continuing the search. The store 
function is needed to make sure that the table is filled with values as they 
become available. In a real program, you would also store the best move in the 
transposition table and, upon retrieving it, search that move first. In the 
interest of brevity that is not shown in this code. 
<P></P>
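<P>For concreteness, here is a minimal sketch in C of what such a <B>retrieve</B> 
and <B>store</B> pair might look like. The entry layout, the names (TTEntry, 
tt_retrieve, tt_store), the Zobrist key, and the always-replace scheme are my 
assumptions for illustration; they are not part of the original code.</P>
<PRE>#include &lt;stddef.h&gt;

#define TT_SIZE (1 &lt;&lt; 20)          /* number of entries; keep it a power of two */

typedef struct {                       /* one transposition table entry           */
    unsigned long long key;            /* Zobrist hash of the position            */
    int lowerbound, upperbound;        /* bounds on the minimax value             */
    int depth;                         /* depth for which the bounds are valid    */
    int bestmove;                      /* best move found, to be searched first   */
} TTEntry;

static TTEntry tt[TT_SIZE];

/* retrieve: return the stored entry for this position, or NULL if unusable */
TTEntry *tt_retrieve(unsigned long long key, int depth)
{
    TTEntry *e = &amp;tt[key &amp; (TT_SIZE - 1)];
    if (e-&gt;key == key &amp;&amp; e-&gt;depth &gt;= depth)
        return e;
    return NULL;
}

/* store: overwrite the slot for this position with the new bounds and move */
void tt_store(unsigned long long key, int depth,
              int lowerbound, int upperbound, int bestmove)
{
    TTEntry *e = &amp;tt[key &amp; (TT_SIZE - 1)];
    e-&gt;key = key;
    e-&gt;depth = depth;
    e-&gt;lowerbound = lowerbound;
    e-&gt;upperbound = upperbound;
    e-&gt;bestmove = bestmove;
}
</PRE>
<P>An always-replace scheme like this is the simplest thing that works; many 
programs prefer to keep the deeper of the old and new entries instead.</P>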
<P>Textbooks on Artificial Intelligence typically discuss a version of 
AlphaBeta that does <I>not</I> use memory. Therefore, to avoid any confusion, 
and even though the use of transposition tables is standard practice in the game 
playing community, the fact that MTD(<I>f</I>) needs a memory-enhanced searcher 
is stressed here. (Yes, I dislike the name AlphaBetaWithMemory too. Life would 
be so much simpler if AI textbooks discussed useful AlphaBeta versions.) 
The AlphaBetaWithMemory code is given in the interest of completeness. If you 
already have a chess program that uses AlphaBeta or NegaScout (minimax or 
negamax make no difference) and it uses a transposition table, then in all 
likelihood it will work right away. In none of the programs that I tried did I 
have to change the existing AlphaBeta code (actually, in all cases a negamax 
version of NegaScout) to get the transposition table to work properly. </P>
<HR>

<H2>Implementation Tips</H2>
<P>The coarser the grain of the evaluation function, the fewer passes 
MTD(<I>f</I>) has to make to converge to the minimax value. Some programs have a 
fine-grained evaluation function, where positional knowledge can be worth as 
little as one hundredth of a pawn. Big score swings can become inefficient for 
these programs. It may 
help to dynamically increase the step size: instead of using the previous bound, 
one can, for example, add an extra few points in the search direction (for 
failing high, or searching upward, adding the bonus, and for failing low, or 
searching downward, subtracting the bonus) every two passes or so. (Don Dailey 
found that a scheme like this works well in a version of Cilkchess.) At the end, 
if you overshoot the minimax value, you have to make a small search in the 
opposite direction, using the previous search bound without an extra bonus, to 
make the final convergence. Also, it can be quite instructive to experiment with 
different evaluation function grain sizes. Sometimes coarse grain functions work 
better than fine grain, both for NegaScout and MTD(<I>f</I>). 
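<P>As a sketch of the dynamic step size described above (the bonus value, the 
pass counter, and the helper name are illustrative; they are not taken from 
Cilkchess or any other program):</P>
<PRE>#define BONUS 8        /* extra points per widened step; tune for your grain size */

/* Pick the next zero-window test value for the MTD(f) driver: every second  */
/* pass, push it a few extra points in the direction the search is moving,   */
/* but never outside the current (lowerbound, upperbound] range, so the      */
/* final convergence still works without an extra bonus.                     */
int next_test_value(int g, int lowerbound, int upperbound, int pass)
{
    int bonus = (pass % 2 == 0) ? BONUS : 0;
    int beta  = (g == lowerbound) ? g + 1 + bonus   /* failing high: go up   */
                                  : g - bonus;      /* failing low: go down  */
    if (beta &lt;= lowerbound) beta = lowerbound + 1;  /* clamp to the bounds   */
    if (beta &gt; upperbound)  beta = upperbound;
    return beta;              /* call AlphaBetaWithMemory with (beta-1, beta) */
}
</PRE>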
<P>Some programs unroll the search of the root node, for example, to do extra 
move sorting at the root. In MTD(<I>f</I>) you can do this too. In the interest 
of cleanliness this is not shown in the pseudo code.</P>
<P>Sometimes forward pruning or search extensions that depend on alpha and beta 
values react in surprising ways to a search that consists of zero window calls 
only. You may have to do some re-tuning to get it all in synch again. Keep in 
mind that the size of the search tree is quite sensitive to the value of the 
search window; a strongly oscillating "first guess" is a bad thing. Also, if it 
weren't for the transposition table, MTD(<I>f</I>)'s re-searches would cause a 
lot of overhead. Make sure that the transposition table works properly. 
MTD(<I>f</I>) does more re-searches than NegaScout, and tends to take a bigger 
penalty from a badly functioning transposition table. Consider storing leaf and 
quiescence nodes in the table. Experiment with different sizes. There is, 
however, a limit beyond which expanding the table becomes pointless. That limit 
should be about the same for NegaScout and MTD(<I>f</I>). </P>
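<P>Table size is easiest to experiment with when it is a run-time parameter. A 
small sketch, reusing the hypothetical TTEntry struct from the earlier sketch 
(the helper name and the megabyte budget are illustrative):</P>
<PRE>#include &lt;stdlib.h&gt;

/* Allocate the largest power-of-two number of entries that fits the budget. */
TTEntry *tt_alloc(size_t megabytes, size_t *n_entries)
{
    size_t bytes = megabytes * 1024 * 1024;
    size_t n = 1;
    while (n * 2 * sizeof(TTEntry) &lt;= bytes)
        n *= 2;                            /* keep n a power of two           */
    *n_entries = n;                        /* index with key &amp; (n - 1)        */
    return malloc(n * sizeof(TTEntry));
}
</PRE>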
<P>Some tips to keep in mind when doing experiments to better understand the 
behavior of your search algorithm: The size of the search tree can differ 
significantly from position to position. Use a large test set in your 
experiments. Try different set-ups, such as: without null-move pruning, without 
extensions, disregard counting of quiescence nodes, different transposition 
table sizes and storage schemes, disable some parts of the move ordering. As in 
all debugging, if you want to understand the search better, the idea is to 
disable as much of the smarts as possible, so that you can study the behavior of 
a clean, noiseless algorithm. (Sure, this will take a lot of time, but it can be quite 
rewarding to get to understand your program better.) </P>
<HR>

<H2>Summary</H2>
<P>To summarize, the core ideas of MTD(<I>f</I>) are: 
<UL>
  <LI>The narrower the AlphaBeta window, the more cutoffs you get, the more 
  efficient the search is. Hence MTD(<I>f</I>) uses only search windows of zero 
  size. 
  <LI>Zero window AlphaBeta calls return bounds. At the root of the tree the 
  returned bounds are stored in <I>upperbound</I> (after AlphaBeta "failed low") 
  and <I>lowerbound</I> (after AlphaBeta "failed high"). The bounds delimit the 
  range of possible values for the minimax value. Each time MTD(<I>f</I>) calls 
  AlphaBeta it gets a value back that narrows the range, and the algorithm is 
  one step closer to hitting the minimax value. 
  <LI>Storing nodes in memory gets rid of the overhead inherent in multiple 
  re-searches. A transposition table of sufficient size does the job. 
  <LI>It is more efficient to start the search close to its goal. Therefore 
  MTD(<I>f</I>) needs (and can make use of) a good first guess. </LI></UL>
<P></P>
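<P>Put together, these bullets are the whole driver loop. The sketch below 
follows them directly; the <I>position</I> type, the INFINITY constants, and the 
AlphaBetaWithMemory prototype are placeholders to be mapped onto your own 
program.</P>
<PRE>/* MTD(f): repeated zero-window AlphaBetaWithMemory calls narrow the         */
/* [lowerbound, upperbound] interval until it closes on the minimax value.   */
int MTDF(position root, int firstguess, int d)
{
    int g = firstguess;
    int lowerbound = -INFINITY, upperbound = +INFINITY;
    while (lowerbound &lt; upperbound) {
        int beta = (g == lowerbound) ? g + 1 : g;   /* zero-window test value */
        g = AlphaBetaWithMemory(root, beta - 1, beta, d);
        if (g &lt; beta)
            upperbound = g;                         /* AlphaBeta failed low   */
        else
            lowerbound = g;                         /* AlphaBeta failed high  */
    }
    return g;
}
</PRE>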
<HR>

<H2><A name=negascout>NegaScout</A></H2>Here's a negamax version of the 
NegaScout code from the homepage of <A 
href="http://www.uni-paderborn.de/pc2/people/homes/ar/index.htm">Alexander 
Reinefeld</A>, the creator of the algorithm. Note that you have to 
add the transposition table access code in the appropriate places yourself. 
(It's a "NegaScout<B>Without</B>Memory".) <PRE>int NegaScout ( int p, alpha, beta );   
{                        /* compute minimax value of position p */
   int a, b, t, i;

   determine successors p_1,...,p_w of p;
   if ( w = 0 )
      return ( Evaluate(p) );                      /* leaf node */
   a = alpha;
   b = beta;
   for ( i = 1; i &lt;= w; i++ ) {
      t = -NegaScout ( p_i, -b, -a );
      if (t &gt; a) &amp;&amp; (t &lt; beta) &amp;&amp; (i &gt; 1) &amp;&amp; (d &lt; maxdepth-1)
         a = -NegaScout ( p_i, -beta, -t );        /* re-search */
      a = max( a, t );
      if ( a &gt;= beta ) 
         return ( a );                               /* cut-off */
      b = a + 1;                         /* set new null window */
   }
   return ( a );
}
</PRE>
<HR>

<H2><A name=further>Further reading</A></H2>Here are some publications that 
describe MTD(<I>f</I>), and why it works (these files can be printed on a 
Postscript printer): 
<UL>
  <LI>The MTD(<I>f</I>) algorithm and the experiments that showed that it worked 
  in practice were described in award winning University of Alberta technical 
  report 94-18, entitled "<A 
  href="http://www.cs.vu.nl/~aske/Papers/tr9418.ps.gz">A New Paradigm for 
  Minimax Search</A>" (a.k.a. Erasmus University report EUR-CS-95-03). 
  <LI>It was subsequently published in a short paper at IJCAI-95 with the highly 
  hyphenated title "<A 
  href="http://www.cs.vu.nl/~aske/Papers/bfmms.ps.gz">Best-First Fixed-Depth 
  Game-Tree Search in Practice</A>". 
  <LI>MTD(<I>f</I>) was described in a longer article in the November 1996 issue 
  of the Journal Artificial Intelligence entitled "<A 
  href="http://www.cs.vu.nl/~aske/Papers/aij-final.ps.gz">Best-First Fixed-Depth 
  Minimax Algorithms</A>", which also deals with the relation of other MTD 
  instances to SSS*-like best-first minimax algorithms. 
  <LI>The algorithm is one of the topics of my 1996 PhD thesis "<A 
  href="http://www.cs.vu.nl/~aske/Papers/abstr-ks.html">Research Re: search 
  &amp; Re-search</A>". </LI></UL>These papers, and more, with their full 
reference information, can be found on this <A 
href="http://www.cs.vu.nl/~aske/minimax.html">list of publications</A>. The work 
on MTD(<I>f</I>) was part of an extraordinarily fruitful and <A 
href="http://www.cs.vu.nl/~aske/new-2y.gif">pleasant</A> cooperation with <A 
href="http://www.cs.ualberta.ca/~jonathan">Jonathan Schaeffer</A>, <A 
href="http://www.cs.few.eur.nl/few/people/pijls/">Wim Pijls</A>, and <A 
href="http://www.cs.few.eur.nl/few/people/adebruin/">Arie de Bruin</A>, which I 
gratefully acknowledge. The project also benefited from the kind support of <A 
href="http://www.cs.unimaas.nl/~herik/">Jaap van den Herik</A> and <A 
href="http://www.cs.vu.nl/~bal/">Henri Bal</A>.
<P></P>
<HR>

<H2>Links</H2>
<P>If you are fascinated by game playing programs, and would like to know more 
about how to write one, then you are in luck, because there are many good books 
and links available, for novice and expert alike. The links below can help you 
further. Follow them, and you are bound to learn more, and find a more complete 
answer than by firing off your question by email. 
<UL>
  <LI><A href="http://www.cs.ualberta.ca/~tony/ICCA/anatomy.html">The Anatomy of 
  Chess Programs</A>, by Tony Marsland, is a nice intro on how current chess 
  programs work. 
  <LI><A href="http://www.cs.ualberta.ca/~yngvi/chess.html">Yngvi's Chess 
  Page</A> 
  <LI><A href="http://www.chess-space.com/">Chess Space</A> 
  <LI><A href="http://www.xs4all.nl/~verhelst/chess/programming.html">Paul 
  Verhelst</A> - Questions and Answers 
  <LI><A 
  href="http://diwww.epfl.ch/~diderich/research/minimax/minimax.html">Bibliography 
  on Minimax Algorithms</A>. 
  <LI><A href="ftp://ftp.cis.uab.edu/pub/hyatt/">Crafty</A> is a strong program 
  whose source code is freely available and, for a chess program, quite 
  readable. 
  <LI><A href="http://www.chess.ibm.com/">IBM's Deep Blue</A> is the machine 
  that beat World Champion Kasparov in a six game match in May 1997. <A 
  href="http://www.cs.vu.nl/~aske/db.html">A report</A> on this historic 
  achievement. </LI></UL>
<P></P>
<P>If you have a question on MTD(<I>f</I>) and minimax research you are welcome 
to send me a note at aske@cs.vu.nl. Bear in mind, however, that I have other 
duties besides answering questions. The speed of a response increases with the 
relevance to my research. Please take some time to look for answers yourself, 
browsing the web or your library. The preceding list of links was put together 
in the hope that it will be used, prevent basic questions from being emailed, 
and provide you with an easy way to find answers yourself. I am, however, 
definitely interested in your experience with MTD(<I>f</I>), and in research in 
search algorithms, and parallel algorithms in general. </P>
<HR>
Thanks to Peter Kouwenhoven and Yngvi Bjornsson for suggestions for improvement 
of this page. Thanks to Toru Yamazato for pointing out a bug in the 
AlphaBetaWithMemory pseudo code. 
<HR>

<P>Back to <A href="http://www.cs.vu.nl/~aske/">Aske Plaat</A>.<BR>Back to <A 
href="http://theory.lcs.mit.edu/~cilk/chess.html">Cilkchess</A>.<BR>Back to <A 
href="http://theory.lcs.mit.edu/~cilk/">Cilk</A>.</P>
<HR>
<I>Aske Plaat, Dec 3, 1997, aske@acm.org</I> </BODY></HTML>
