Names

All routing across the network is done by means of the IP address associated with a packet.  Since humans find it difficult to remember addresses like 128.174.5.50, a symbolic name register was set up at the NIC where people would say, "I would like my host to be named uiucuxc".  Machines connected to the Internet across the nation would connect to the NIC in the middle of the night, check the modification date on the hosts file, and if it had been modified, move it to their local machine.  With the advent of workstations and micros, changes to the host file would have to be made nightly.  It would also be very labor intensive and consume a lot of network bandwidth.  RFC-1034 and a number of others describe the Domain Name System (DNS), a distributed database system for mapping names into addresses.

We must look a little more closely into what's in a name.  First, note that an address specifies a particular connection on a specific network.  If the machine moves, the address changes.  Second, a machine can have one or more names and one or more network addresses (connections) to different networks.  Names point to something which does useful work (i.e., the machine) and IP addresses point to an interface on that machine.  A name is a purely symbolic representation of a list of addresses on the network.  If a machine moves to a different network, the addresses will change but the name can remain the same.

Krol                                                           [Page 15]

RFC 1118         The Hitchhikers Guide to the Internet    September 1989

Domain names are tree structured names with the root of the tree at the right.  For example:

                          uxc.cso.uiuc.edu

is a machine called "uxc" (purely arbitrary), within the subdomains "cso" and "uiuc" (the University of Illinois at Urbana), registered with "edu" (the set of educational institutions).
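The right-to-left structure can be illustrated with a few lines of code: splitting the example name above into labels and reversing them gives the path from the root of the tree down to the host (a minimal sketch; the name is just the example from the text):

```python
# A domain name reads from the leaf at the left to the root at the right.
name = "uxc.cso.uiuc.edu"
labels = name.split(".")            # ['uxc', 'cso', 'uiuc', 'edu']
from_root = list(reversed(labels))  # walk order: root-most label first

print(from_root)                    # ['edu', 'uiuc', 'cso', 'uxc']
```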
A simplified model of how a name is resolved is that on the user's machine there is a resolver.  The resolver knows how to contact, across the network, a root name server.  Root servers are the base of the tree structured data retrieval system.  They know who is responsible for handling first level domains (e.g., 'edu').  Which root servers to use is an installation parameter.  From the root server the resolver finds out who provides 'edu' service.  It contacts the 'edu' name server, which supplies it with a list of addresses of servers for the subdomains (like 'uiuc').  This action is repeated with the sub-domain servers until the final subdomain returns a list of addresses of interfaces on the host in question.  The user's machine then has its choice of which of these addresses to use for communication.

A group may apply for its own domain name (like 'uiuc' above).  This is done in a manner similar to IP address allocation.  The only requirement is that the requestor have two machines reachable from the Internet which will act as name servers for that domain.  Those servers could also act as servers for subdomains, or other servers could be designated as such.  Note that the servers need not be located in any particular place, as long as they are reachable for name resolution.  (U of I could ask Michigan State to act on its behalf and that would be fine.)  The biggest problem is that someone must do maintenance on the database.  If the machine is not convenient, that might not be done in a timely fashion.  The other thing to note is that once the domain is allocated to an administrative entity, that entity can freely allocate subdomains in whatever manner it sees fit.

The Berkeley Internet Name Domain (BIND) Server implements the Internet name server for UNIX systems.
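The walk from the root server down through the 'edu' and 'uiuc' servers can be sketched with a toy, in-memory version of the tree.  This is only a model of the delegation logic: each "server" is a dict that hands off one label to the next server down, and the zone contents below are fabricated for illustration (a real resolver sends DNS queries over the network):

```python
# Toy model of iterative name resolution.  Each nested dict plays the
# role of one name server; the innermost value is the address list for
# the host.  All data here is made up for illustration.
ROOT = {
    "edu": {                        # the 'edu' name server
        "uiuc": {                   # the 'uiuc' subdomain server
            "cso": {
                "uxc": ["128.174.5.50"],   # interfaces on the host
            },
        },
    },
}

def resolve(name, server=ROOT):
    """Walk the tree from the root (rightmost label) down to the host."""
    node = server
    for label in reversed(name.split(".")):   # 'edu', then 'uiuc', ...
        node = node[label]                    # delegate to the next server
    return node                               # list of interface addresses

print(resolve("uxc.cso.uiuc.edu"))            # ['128.174.5.50']
```

Caching (mentioned later in the text) amounts to remembering intermediate nodes of this walk so that future lookups can skip the upper levels.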
The name server is a distributed database system that allows clients to name resources and to share that information with other network hosts.  BIND is integrated with 4.3BSD and is used to look up and store host names, addresses, mail agents, host information, and more.  It replaces the /etc/hosts file for host name lookup.  BIND is still an evolving program.  To keep up with reports on operational problems, future design decisions, etc., join the BIND mailing list by sending a request to Bind-Request@UCBARPA.BERKELEY.EDU.  BIND can also be obtained via anonymous FTP from ucbarpa.berkeley.edu.

There are several advantages in using BIND.  One of the most important is that it frees a host from relying on /etc/hosts being up to date and complete.  Within the .uiuc.edu domain, only a few hosts are included in the host table distributed by SRI.  The remainder are listed locally within the BIND tables on uxc.cso.uiuc.edu (the server machine for most of the .uiuc.edu domain).  All are equally reachable from any other Internet host running BIND, or any DNS resolver.

BIND can also provide mail forwarding information for interior hosts not directly reachable from the Internet.  These hosts can either be on non-advertised networks or not connected to an IP network at all, as in the case of UUCP-reachable hosts (see RFC-974).  More information on BIND is available in the "Name Server Operations Guide for BIND" in the UNIX System Manager's Manual, 4.3BSD release.

There are a few special domains on the network, like NIC.DDN.MIL (the hosts database at the NIC).  There are others of the form NNSC.NSF.NET.  These special domains are used sparingly and require ample justification.  They refer to servers under the administrative control of the network rather than any single organization.
This allows the actual server to be moved around the net while the user interface to that machine remains constant.  That is, should BBN relinquish control of the NNSC, the new provider would be pointed to by that name.

In actuality, the domain system is a much more general and complex system than has been described.  Resolvers and some servers cache information to allow steps in the resolution to be skipped.  Information provided by the servers can be arbitrary, not merely IP addresses.  This allows the system to be used both by non-IP networks and for mail, where it may be necessary to give information on intermediate mail bridges.

What's wrong with Berkeley Unix

The University of California at Berkeley has been funded by DARPA to modify the Unix system in a number of ways.  Included in these modifications is support for the Internet protocols.  In earlier versions (e.g., BSD 4.2) there was good support for the basic Internet protocols (TCP, IP, SMTP, ARP) which allowed it to perform nicely on IP Ethernets and smaller Internets.  There were deficiencies, however, when it was connected to complicated networks.  Most of these problems have been resolved under the newest release (BSD 4.3).  Since BSD 4.2 is the springboard from which many vendors have launched Unix implementations (either by porting the existing code or by using it as a model), many implementations (e.g., Ultrix) are still based on BSD 4.2.  Therefore, many implementations still exist with the BSD 4.2 problems.  As time goes on, as BSD 4.3 trickles through to vendors in new releases, many of the problems will be resolved.  Following is a list of some problem scenarios and their handling under each of these releases.
ICMP redirects

   Under the Internet model, all a system needs to know to get anywhere in the Internet is its own address, the address of where it wants to go, and how to reach a gateway which knows about the Internet.  It doesn't have to be the best gateway.  If the system is on a network with multiple gateways, and a host sends a packet for delivery to a gateway which feels another directly connected gateway is more appropriate, the gateway sends the sender a message.  This message is an ICMP redirect, which politely says, "I'll deliver this message for you, but you really ought to use that gateway over there to reach this host".  BSD 4.2 ignores these messages.  This creates more stress on the gateways and the local network, since for every packet sent, the gateway sends a packet back to the originator.  BSD 4.3 uses the redirect to update its routing tables, uses that route until it times out, and then reverts to the route it thinks it should use.  The whole process then repeats, but that is far better than one redirect per packet.

Trailers

   An application (like FTP) sends a string of octets to TCP, which breaks it into chunks and adds a TCP header.  TCP then sends blocks of data to IP, which adds its own headers and ships the packets over the network.  All this prepending of the data with headers causes memory moves in both the sending and the receiving machines.  Someone got the bright idea that if packets were long and the headers were stuck on the end (so they became trailers), the receiving machine could put the packet on the beginning of a page boundary and, if the trailer was OK, merely delete it and transfer control of the page with no memory moves involved.  The problem is that trailers were never standardized and most gateways don't know to look for the routing information at the end of the block.
   When trailers are used, the machine typically works fine on the local network (no gateways involved) and for short blocks through gateways (on which trailers aren't used).  So TELNET and FTP's of very short files work just fine, while FTP's of long files seem to hang.  On BSD 4.2 trailers are a boot option, and one should make sure they are off when using the Internet.  BSD 4.3 negotiates trailers, so it uses them on its local net and doesn't use them when going across the network.

Retransmissions

   TCP fires off blocks to its partner at the far end of the connection.  If it doesn't receive an acknowledgement in a reasonable amount of time, it retransmits the blocks.  The determination of what is reasonable is done by TCP's retransmission algorithm.

   There is no single correct algorithm, but some are better than others, where worse is measured by the number of retransmissions done unnecessarily.  BSD 4.2 had a retransmission algorithm which retransmitted quickly and often.  This is exactly what you would want if you had a bunch of machines on an Ethernet (a low delay network of large bandwidth).  If you have a network of relatively longer delay and scarce bandwidth (e.g., 56kb lines), it tends to retransmit too aggressively.  Therefore, it makes the networks and gateways pass more traffic than is really necessary for a given conversation.  Retransmission algorithms do adapt to the delay of the network after a few packets, but 4.2's adapts slowly in high-delay situations.  BSD 4.3 does a lot better and tries to get the best of both worlds.  It fires off a few retransmissions really quickly, assuming it is on a low delay network, and then backs off very quickly.
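   The "retransmit quickly, then back off" behaviour can be sketched as a simple timer schedule.  This is a toy model, not the actual 4.3BSD code; the initial timeout, doubling factor, and cap are made-up values chosen only to show the shape of the schedule:

```python
def retransmit_schedule(initial_rto=0.5, factor=2.0, cap=64.0, tries=8):
    """Return successive retransmission timeouts, in seconds.

    The first timeouts are short, which wins on a low-delay Ethernet;
    each unacknowledged retransmission multiplies the timeout by
    `factor` up to `cap`, so a long-delay path is not flooded with
    unnecessary duplicates.
    """
    rto = initial_rto
    schedule = []
    for _ in range(tries):
        schedule.append(rto)
        rto = min(rto * factor, cap)
    return schedule

print(retransmit_schedule())
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
```

The early entries model 4.3's quick initial retransmissions; the rapidly growing tail models the backoff that keeps a slow path from being swamped.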
   It also allows the delay to grow to about 4 minutes before it gives up and declares the connection broken.

   Even better than the original 4.3 code is a version of TCP with a retransmission algorithm developed by Van Jacobson of LBL.  He did a lot of research into how the algorithm behaves on real networks and modified it to get both better throughput and friendlier behavior toward the network.  This code has been integrated into the later releases of BSD 4.3 and can be fetched via anonymous FTP from ucbarpa.berkeley.edu in directory 4.3.

Time to Live

   The IP packet header contains a field called the time to live (TTL) field.  It is decremented each time the packet traverses a gateway.  TTL was designed to prevent packets caught in routing loops from being passed around forever with no hope of delivery.  Since the definition bears some likeness to the RIP hop count, some misguided systems have set the TTL field to 15 because the unreachable flag in RIP is 16.  Obviously (they reasoned), no path could require more than 15 hops.  But the RIP hop-count limit only applies where RIP is actually the routing protocol; it no longer applies once the packet leaves that region (e.g., when NSFnet starts transporting the packet).  Therefore, it is quite easy for a packet to require more than 15 hops.  These machines will exhibit the behavior of being able to reach some places but not others, even though the routing information appears correct.  Solving the problem typically requires kernel patches, so it may be difficult if source is not available.

Appendix A - References to Remedial Information
-----------------------------------------------

  [1]  Quarterman and Hoskins, "Notable Computer Networks",
       Communications of the ACM, Vol. 29, No. 10, pp. 932-971,
       October 1986.
