Network Working Group                                          T. Brisco
Request for Comments: 1794                            Rutgers University
Category: Informational                                       April 1995


                     DNS Support for Load Balancing

Status of this Memo

   This memo provides information for the Internet community.  This
   memo does not specify an Internet standard of any kind.
   Distribution of this memo is unlimited.

1. Introduction

   This RFC is meant to first chronicle a foray into the IETF DNS
   Working Group, discuss other possible alternatives to
   provide/simulate load balancing support for DNS, and to provide an
   ultimate, flexible solution for providing DNS support for balancing
   loads of many types.

2. History

   The history of this probably dates back well before my own time - so
   undoubtedly some holes are here.  Hopefully they can be filled in by
   other authors.

   Initially, "load balancing" was intended to permit the Domain Name
   System (DNS) [1] agents to support the concept of "clusters"
   (derived from the VMS usage) of machines - where all machines were
   functionally similar or the same, and it didn't particularly matter
   which machine was picked - as long as the load of the processing was
   reasonably well distributed across a series of actual different
   hosts.  Around 1986 a number of different schemes started surfacing
   as hacks to the Berkeley Internet Name Domain server (BIND)
   distribution.  Probably the most widely distributed of these were
   the "Shuffle Address" (SA) modifications by Bryan Beecher, or
   possibly Marshall Rose's "Round Robin" code.

   The SA records, however, did a round-robin ordering of the Address
   resource records, and didn't do much with regard to the particular
   loads on the target machines.  Matt Madison (of TGV) implemented
   some changes that used VMS facilities to review the system loads,
   and return A RRs in the order of least-loaded to most-loaded.

   The problem with SAs was that load was not actually a factor, and
   TGV's changes relied on VMS-specific facilities to order the
   records.  The SA RRs required changes to the DNS specification (in
   file syntax and in record processing).  These were both viewed as
   drawbacks and not as general solutions.

   Most of the Internet waited in anticipation of an IETF-approved
   method for simulating "clusters".

   Through a few IETF DNS Working Group sessions (chaired by Rob
   Austein of Epilogue), it was collectively agreed that a number of
   criteria must be met:

       A) Backwards compatibility with the existing DNS RFC.
       B) Information changes frequently.
       C) Multiple addresses should be sent out.
       D) Must interact with other RRs appropriately.
       E) Must be able to represent many types of "loads".
       F) Must be fast.

   (A) would ensure that the installed base of BIND and other DNS
   implementations would continue to operate and interoperate properly.

   (B) would permit very fast update times - to enable modeling of
   real-time data.  Five minutes was thought of as a normal interval,
   though changes as fast as every sixty seconds could be imagined.

   (C) would cover the possibility of a host's address being advertised
   as optimal, yet the machine crashing during the period within the
   TTL of the RR.
   The second-most preferable address would be advertised second, the
   third-most preferable third, and so on.  This would allow a
   reasonable stab at recovery during machine failures.

   (D) would ensure correct handling of all ancillary information -
   such as MX, RP, and TXT information, as well as reverse lookup
   information.  It needed to be ensured that such processes as mail
   handling continued to work in an unsurprising and predictable
   manner.

   (E) would ensure the flexibility that everyone wished.  Various
   members of the DNS Working Group wished to represent a breadth of
   "loads".  Some "loads" were fairly eclectic - such as ordering
   addresses by the RTT to the host - and some were pragmatic - such as
   balancing the CPU load evenly across a series of hosts.  All
   represented valid concerns within their own context, and the idea of
   having separate RR types for each was unthinkable (primarily because
   it would violate goal A).

   (F) needed to ensure a few things.  Primarily, the time to calculate
   the ordering of the addressing information must not exceed the TTL
   of the information distributed - i.e., elements with a TTL of five
   minutes shouldn't take six minutes to calculate.  Similarly, it
   seems a fairly clear goal in the DNS RFC that clients should not be
   kept waiting - that request processing should continue regardless of
   the state of any other processing occurring.

3. Possible Alternatives

   During various discussions with the DNS Working Group and with the
   Load Balancing Committee, it was noted that no existing solution
   dealt with all wishes appropriately.  One of the major successes of
   the DNS is its flexibility - and it was felt that this needed to be
   retained in all aspects.  It was conceived that perhaps not only
   address information would need to be changed rapidly, but other
   records may also need to change rapidly (at least this could not be
   ruled out - who knows what technologies lurk in the future).

   Of primary concern to many was the ability to interact with older
   implementations of DNS.  The DNS is implemented widely now, and
   changes to critical portions of the protocol could cause havoc for
   years.  It became rapidly apparent through conversations with Jon
   Postel and Dave Crocker (Area Director) that modifications to the
   protocol would be viewed dimly.

4. A Flexible Model

   During many hours of discussions, Rob Austein suggested that the
   changes could be implemented without changes to the protocol: if
   zone transfer behavior could be subtly changed, then the zone
   transfer process could accommodate the changing of various RR
   information.  What was needed was a smarter program to do the zone
   transfers.  Pursuant to this, changes were made to BIND that would
   permit the specification of the program to do the zone transfers for
   particular zones.

   There is no specification that a secondary has to receive updates
   from its primary server in any specific manner - only that it needs
   to check periodically, and obtain new zone copies when changes have
   been made.  Conceivably the zone transfer agent could obtain the
   information from any number of sources (e.g., a load average daemon,
   a round-robin sorter) and present the information back to the
   nameserver for distribution.
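   For illustration only, a minimal sketch of such a zone transfer
   agent follows.  The zone name, the file names, and the "loads.txt"
   load source are hypothetical and are not part of this memo; a real
   agent would obtain its figures from something like the load average
   daemon mentioned above.

      # zone-agent.py - illustrative sketch, not a specified mechanism.
      # Reads (address, load) pairs, orders the A RRs from least-loaded
      # to most-loaded, and writes a small zone file with a fresh
      # serial number for the nameserver to pick up.
      import time

      ZONE = "cluster.example.edu"   # hypothetical volatile subzone
      TTL = 300                      # five minutes, matching the refresh

      def read_loads(path="loads.txt"):
          # Each line: <address> <load average>, for example:
          #   192.0.2.1 0.35
          entries = []
          with open(path) as f:
              for line in f:
                  addr, load = line.split()
                  entries.append((addr, float(load)))
          return entries

      def write_zone(entries, path="db.cluster"):
          entries.sort(key=lambda e: e[1])   # least-loaded address first
          serial = int(time.time())          # increases between transfers
          with open(path, "w") as out:
              out.write("%s. %d IN SOA ns.example.edu. "
                        "hostmaster.example.edu. ( %d 300 60 3600 300 )\n"
                        % (ZONE, TTL, serial))
              out.write("%s. %d IN NS ns.example.edu.\n" % (ZONE, TTL))
              for addr, _load in entries:
                  # All addresses are still published (goal C); the
                  # least-loaded one is listed first.
                  out.write("%s. %d IN A %s\n" % (ZONE, TTL, addr))

      if __name__ == "__main__":
          write_zone(read_loads())

   The nameserver would then load the freshly written zone data much as
   it would after an ordinary zone transfer, and the low TTL would keep
   resolvers from caching a stale ordering for long.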
   A number of questions arose from this concept, and all seem to have
   been dealt with accordingly.  Primarily, the DNS protocol doesn't
   guarantee ordering.  While the DNS protocol doesn't guarantee
   ordering, it is clear that the ordering is predictive - information
   read in twice in the same order will be presented twice in the same
   order to clients.  Clients, of course, may reorder this information,
   but that is deemed a "local issue", as it is configurable by the
   remote systems administrators (e.g., sortlists, etc.).  The zone
   transfer agent would have to account for any "mis-ordering" that may
   occur locally, but remote reordering of RRs (e.g., client-side
   sortlists) is impossible to predict.  Since local mis-ordering is
   consistent, the zone transfer agents could easily account for it.

   Secondarily, but perhaps more subtly, the problem arises that zone
   transfers aren't used by primary nameservers, only by secondary
   nameservers.  To clarify this, the idea of "fast" or "volatile"
   subzones must be dealt with.  In a volatile environment (where
   address or other RR ordering changes rapidly), the refresh rate of a
   zone must be set very low, and the TTL of the RRs handed out must
   similarly be very low.  There is no use in handing out information
   with TTLs of an hour when the conditions for ordering the RRs change
   every minute.  There must be a relatively close relationship between
   the refresh rates and TTLs of the information.  Of course, with very
   low refresh rates, zone transfers between the primary and secondary
   would have to occur frequently.  Given that primary and secondary
   nameservers should be topologically and geographically far
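   As a purely illustrative example (the names and timer values are
   hypothetical, not taken from this memo), such a volatile subzone
   might pair a five-minute SOA refresh with five-minute TTLs, so that
   secondaries re-transfer the zone roughly as often as resolvers
   discard the answers they were handed:

      cluster.example.edu. 300 IN SOA ns.example.edu. hostmaster.example.edu. (
                               1995041501 ; serial
                               300        ; refresh - five minutes
                               60         ; retry
                               3600       ; expire
                               300 )      ; minimum TTL
      cluster.example.edu. 300 IN NS ns.example.edu.
      cluster.example.edu. 300 IN A  192.0.2.1  ; currently least loaded
      cluster.example.edu. 300 IN A  192.0.2.2
      cluster.example.edu. 300 IN A  192.0.2.3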
