   domain administration, who delegated him authority over the domain.
   From now on the parent name server announces one or more servers for
   the domain, which will receive queries for something they don't know
   about.  (On the other hand, servers may be added to the list without
   the parent's servers knowing, thus hiding valuable information from
   them - this is not a lame delegation, but shouldn't happen either.)
   Other examples are the inclusion of a name in an NS list without
   telling the administrator of that host, or when a server suddenly
   stops providing name service for a domain.
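
   As an aside, the test behind lame delegation detection is simple
   enough to sketch.  The fragment below (a minimal illustration
   written for this memo, not part of any of the tools described here,
   and assuming the dig utility is installed) asks a server for the
   zone's SOA record without recursion and checks whether the reply
   carries the "aa" (authoritative answer) flag; a server listed in
   the parent's NS records that fails this test is lame for the zone.

      import subprocess

      def answers_authoritatively(server, zone):
          # Query 'server' directly for the SOA of 'zone', without
          # recursion, and look for the "aa" flag in dig's status line.
          try:
              out = subprocess.run(
                  ["dig", "@" + server, zone, "SOA", "+norecurse"],
                  capture_output=True, text=True, timeout=10).stdout
          except subprocess.TimeoutExpired:
              return False                    # no reply at all
          for line in out.splitlines():
              if line.startswith(";; flags:"):
                  # e.g. ";; flags: qr aa rd; QUERY: 1, ANSWER: 1, ..."
                  return "aa" in line.split(";")[2].split()
          return False

      # Example (hypothetical names):
      #   answers_authoritatively("ns.example.com", "example.com")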

   To detect and warn DNS administrators all over the world about this
   kind of problem, Bryan Beecher from University of Michigan wrote
   lamers, a program to analyze named (the well-known BIND name server)
   logging information [2].  To produce useful logs, a patch was
   applied to named to detect and log lame delegations (this patch
   was originally
   written by Don Lewis from Silicon Systems and is now part of the
   latest release of BIND thanks to Bryan Beecher, so it is expected to
   be widely available in the near future).  Lamers is a small shell
   script that simply scans these logs and reports the lame delegations
   found.  This reporting is done by sending mail to the hostmasters of
   the affected domains, as stated in the SOA record for each of them.
   If this is not possible, the message is sent to the affected name
   servers' postmasters instead.  Manual processing is needed in case of
   bounces, caused by careless setup of those records or invalid
   postmaster addresses.  A report of the errors found by the U-M
   servers is also posted twice a month on the USENET newsgroup
   comp.protocols.tcp-ip.domains.
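
   For a rough idea of what such a scan involves, the fragment below
   (an illustrative sketch, not the actual lamers script; the exact
   wording of the log entry is an assumption here and varies between
   named versions) pulls the lame delegation messages out of a log
   file and groups the offending servers by zone.

      import re
      import sys
      from collections import defaultdict

      # Assumed shape of named's lame delegation log entries, e.g.
      #   Lame server on 'host.example.com' (in 'example.com'?) ...
      LAME = re.compile(r"Lame server on '([^']*)' \(in '([^']*)'\?\)")

      def scan(logfile):
          by_zone = defaultdict(set)
          with open(logfile) as f:
              for line in f:
                  m = LAME.search(line)
                  if m:
                      name, zone = m.groups()
                      by_zone[zone].add(name)
          return by_zone

      if __name__ == "__main__":
          for zone, names in sorted(scan(sys.argv[1]).items()):
              print(zone)
              for name in sorted(names):
                  print("    " + name)

   The real script goes further, looking up the SOA record of each
   affected domain to find the hostmaster address to notify.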

   If you ever receive such a report, you should study it carefully in
   order to find and correct problems in your domain, or see if your
   servers are being affected by the spreading of erroneous information.
   Better yet, lamers could be run on your servers to detect more lame
   delegations (U-M can't see them all!).  Also, if you receive mail
   reporting a lame delegation affecting your domain or some of your
   hosts, please don't just ignore it or flame the senders.  They're
   really trying to help!

   You can get lamers from
   ftp://terminator.cc.umich.edu/dns/lame-delegations.

2.4. DOC

   Authority information is one of the most significant parts of the DNS
   data, as the whole mechanism depends on it to correctly traverse the
   domain tree.  Incorrect authority information leads to problems such
   as lame delegations or even, in extreme cases, the inaccessibility of
   a domain.  Take the case where the information given about all its
   name servers is incorrect: being unable to contact the real servers
   you may end up being unable to reach anything inside that domain.
   This may sound exaggerated, but if you have been in the DNS
   business long enough you have probably seen some enlightening
   examples of this scenario.

   To look for this kind of problem, Paul Mockapetris and Steve Hotz,
   from the Information Sciences Institute, wrote a C-shell script
   called DOC (Domain Obscenity Control), an automated domain testing
   tool that uses dig to query the appropriate name servers about
   authority for a domain and analyzes the responses.

   DOC limits its analysis to authority data since the authors
   anticipated that people would complain about such things as invasion
   of privacy.  Also, at the time it was written most domains were so
   messy that they thought there wouldn't be much point in checking
   anything deeper until the basic problems were fixed.

   Only one domain is analyzed at a time: the program checks whether
   all the servers for the parent domain agree about the delegation
   information for the domain.  DOC then picks a list of name servers
   for the domain (obtained from one of the parent's servers) and
   starts checking their information, querying each of them: it looks
   for the SOA record, checks whether the response is authoritative,
   compares the various records retrieved, gets each server's NS list,
   compares the lists (both among these servers and against the
   parent's), and, for those servers inside the domain, looks for
   their PTR records.
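
   A rough sketch of the initial consistency check (whether the
   parent's servers agree on the delegation), in Python rather than
   C-shell, might look like the following.  It is an illustration
   written for this memo, not DOC itself; like DOC it simply shells
   out to dig, and the server names in the example are hypothetical.

      import subprocess

      def ns_set(server, zone):
          # Ask 'server' (without recursion) which name servers it
          # lists for 'zone'; the delegation may come back in the
          # answer or the authority section, so both are kept.
          out = subprocess.run(
              ["dig", "@" + server, zone, "NS", "+norecurse",
               "+noall", "+answer", "+authority"],
              capture_output=True, text=True).stdout
          names = set()
          for line in out.splitlines():
              fields = line.split()
              if len(fields) >= 5 and fields[3] == "NS":
                  names.add(fields[4].rstrip(".").lower())
          return names

      def delegation_agrees(zone, parent_servers):
          # True if every parent server hands out the same NS set.
          sets = [ns_set(s, zone) for s in parent_servers]
          return all(s == sets[0] for s in sets)

      # Example (hypothetical parent servers):
      #   delegation_agrees("example.com",
      #                     ["ns1.parent.example", "ns2.parent.example"])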

   Due to several factors, DOC seems to have frozen since its first
   public release, back in 1990.  Within the distribution there is an
   RFC draft about automated domain testing, which was never published.
   Nevertheless, it may provide useful reading.  The software can be
   fetched from ftp://ftp.uu.net/networking/ip/dns/doc.2.0.tar.Z.

2.5. DDT

   DDT (Domain Debug Tools) is a package of programs to scan DNS
   information for error detection, developed originally by Jorge Frazao
   from PUUG - Portuguese UNIX Users Group and later rewritten by the
   author, at the time at the Faculty of Sciences of University of
   Lisbon.  Each program is specialized in a given set of anomalies: you
   have a checker for authority information, another for glue data, mail
   exchangers, reverse-mappings and miscellaneous errors found in all
   kinds of resource records.  As a whole, they do rather extensive
   checking of DNS configurations.

   These tools work on cached DNS data, i.e., data stored locally after
   performing zone transfers (presently done by a slightly modified
   version of BIND's named-xfer, called ddt-xfer, which allows recursive
   transfers) from the appropriate servers, rather than querying name
   servers on-line each time they run.  This option was taken for
   several reasons [3]: (1) efficiency, since it reads data from disk,
   avoiding network transit delays, (2) reduced network traffic, since
   the data has to be fetched only once and the programs can then be
   run over it as many times as you wish, and (3) accessibility - in
   countries with limited Internet access, as was the case in Portugal
   when DDT was in
   its first stages, this may be the only practical way to use the
   tools.

   Point (2) above deserves some special consideration: first, it is
   not entirely true that there are no additional queries while
   processing the information: one of the tools, the authority
   checker, queries (via dig) each domain's purported name servers in
   order to
   test the consistency of the authority information they provide about
   the domain.  Second, it may be argued that when the actual tests are
   done the information used may be out of date.  While this is true,
   you should note that this is the nature of DNS: if you obtain some
   piece of information you can't be sure that one second later it is
   still
   valid.  Furthermore, if your source was not the primary for the
   domain then you can't even be sure of the validity in the exact
   moment you got it in the first place.  But experience shows that if
   you see an error, it is likely to be there in the next version of the
   domain information (and if it isn't, nothing was lost by having
   detected it in the past).  On the other hand, of course, there's
   little point in checking month-old data...

   The list of errors looked for includes lame delegations, version
   number mismatches between servers (this may be a transient problem),
   non-existing servers, domains with only one server, unnecessary glue
   information, MX records pointing to hosts not in the analyzed
   domain (not necessarily an error, it just points out possibly
   strange or expensive mail-routing policies), MX records pointing
   to aliases, A
   records without the respective PTR and vice-versa, missing trailing
   dots, hostnames with no data (A or CNAME records), aliases pointing
   to aliases, and some more.  Given the specialized nature of each
   tool, it is possible to look for a well defined set of errors,
   instead of having the data analyzed in all possible ways.
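
   As a concrete illustration of one of these checks, the fragment
   below verifies that a hostname's A record has a matching PTR record
   and vice-versa.  It is only a sketch written for this memo: DDT
   works on transferred zone data rather than live lookups, whereas
   this version simply asks the local resolver, and the hostname in
   the example is hypothetical.

      import socket

      def a_and_ptr_match(hostname):
          # Forward-resolve the name, then reverse-resolve the address
          # and check that the PTR points back at the same name.
          try:
              addr = socket.gethostbyname(hostname)      # A lookup
              ptr = socket.gethostbyaddr(addr)[0]        # PTR lookup
          except socket.error:
              return False                     # missing A or PTR
          return (ptr.lower().rstrip(".") ==
                  hostname.lower().rstrip("."))

      # Example: a_and_ptr_match("host.example.com")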

   Except for ddt-xfer, all the programs are written in Perl.  A new
   release may come into existence in the near future, after a
   thorough review of the methods used, the set of errors checked for,
   and some bug fixing (in particular, a Perl version of ddt-xfer is
   expected).  In the meantime, the latest version is available from
   ftp://ns.dns.pt/pub/dns/ddt-2.0.1.tar.gz.

2.6. The Checker Project

   The problem of the huge amount of DNS traffic over the Internet has
   been getting researchers' close attention for quite some time,
   mainly because most of it is unnecessary.  Observations have shown
   that DNS consumes something like twenty times more bandwidth than
   it should [4].  Some causes for this undoubtedly catastrophic
   scenario lie in
   deficient resolver and name server implementations spread all over
   the world, from personal to super-computers, running all sorts of
   operating systems.

   While the panacea is yet to be found (claims are made that the latest
   official version of BIND is a great step forward [5]), work has been
   done in order to identify sources of anomalies, as a first approach
   in the search for a solution.  The Checker Project is one such
   effort, developed at the University of Southern California [6].  It
   consists of a set of C code patched into BIND's named, for monitoring
   server activity, building a database with the history of that
   operation (queries and responses).  It is then possible to generate
   reports from the database summarizing activity and identifying
   behavioral patterns from client requests, looking for anomalies.  The
   named code alteration is small and simple unless you want to have
   PEC
   checking enabled (see below).  You may find sources and documentation
   at ftp://catarina.usc.edu/pub/checker.
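
   The kind of report this makes possible can be illustrated with a
   small sketch (written for this memo, not taken from Checker, and
   assuming a hypothetical query history of (client, name, type)
   tuples): it flags clients that keep resending the same question,
   which a correctly caching resolver would have answered locally.

      from collections import Counter

      def repeated_query_report(records, threshold=100):
          # 'records' is an iterable of (client, qname, qtype) tuples
          # taken from the query history; clients resending the same
          # question more than 'threshold' times are reported.
          counts = Counter(records)
          report = {}
          for (client, qname, qtype), n in counts.items():
              if n >= threshold:
                  report.setdefault(client, []).append(
                      (qname, qtype, n))
          return report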

   Checker only does this kind of collection and reporting; it does
   not try to enforce any rules on the administrators of the defective
   sites by any means whatsoever.  The authors hope that simply
   presenting the evidence is reason enough for those administrators
   to have their problems fixed.

   An interesting feature is PEC (proactive error checking): the server
   pretends to be unresponsive for some queries by randomly choosing
   some name and refusing replies for queries on that name during a
   pre-determined period.  Those queries are recorded, though, to try
   to reason about the retry and timeout schemes used by name servers
   and resolvers.  It is expected that properly implemented clients will
   choose another name server to query, while defective ones will keep
   on trying with the same server.  This feature seems to be still under
   testing as it is not completely clear yet how to interpret the
   results.  A PEC-only error checker is available from USC that is much
   simpler than the full error checker.  It examines a different name
   server client every 30 minutes to see if that client causes
   excessive load.
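
   The core of the idea can be sketched as follows (a toy illustration
   written for this memo; the real implementation is C code inside
   named, and the names used here are made up): pick a name at random,
   refuse to answer queries for it during a fixed window, and record
   who keeps asking the same server anyway.

      import random
      import time

      class PECFilter:
          def __init__(self, candidate_names, window=300):
              # Refuse queries for one randomly chosen name for
              # 'window' seconds.
              self.victim = random.choice(candidate_names)
              self.until = time.time() + window
              self.retries = []             # (client, timestamp) pairs

          def should_refuse(self, client, qname):
              if qname == self.victim and time.time() < self.until:
                  self.retries.append((client, time.time()))
                  return True               # pretend to be unresponsive
              return False

   Well-behaved clients should show up only briefly in the retry list
   before moving to another server; defective ones will keep appearing.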

   Presently Checker has been running on a secondary for the US domain
   for more than a year with little trouble.  The authors feel
   confident it should run on any BSD platform (at least SunOS)
   without problems, and it is planned to be included as part of the
   BIND name server.

   Checker is part of a research project led by Peter Danzig from USC,
   aimed at implementing probabilistic error checking mechanisms like
   PEC on distributed systems [7].  DNS is one such system, and it was
   chosen as the platform for testing the validity of these techniques
   over the NSFnet.  The hope is to gain enough knowledge to provide
   the means to improve the performance and reliability of distributed
   systems.
   Anomalies like undetected server failures, query loops, bad
   retransmission backoff algorithms, misconfigurations and resubmission
   of requests after negative replies are some of the targets for these
   checkers to detect.

2.7. Others

   All the tools described above are the result of systematic work on
   the issue of DNS debugging, some of them included in research
   projects.  For the sake of completeness, several other programs are
   mentioned here.  These, though just as serious, seem to have been
   developed in a somewhat ad-hoc fashion, without an explicit
   intention of being used outside the environments where they were
   born.  This impression is, of course, arguable; nevertheless there
   was no
