rfc1122.txt
         There are varying opinions in the Internet community about
         embedded gateway functionality.  The main arguments are as
         follows:

         o    Pro: in a local network environment where networking is
              informal, or in isolated internets, it may be convenient
              and economical to use existing host systems as gateways.

              There is also an architectural argument for embedded
              gateway functionality: multihoming is much more common
              than originally foreseen, and multihoming forces a host
              to make routing decisions as if it were a gateway.  If
              the multihomed host contains an embedded gateway, it
              will have full routing knowledge and as a result will be
              able to make better routing decisions.

         o    Con: Gateway algorithms and protocols are still
              changing, and they will continue to change as the
              Internet system grows larger.  Attempting to include a
              general gateway function within the host IP layer will
              force host system maintainers to track these (more
              frequent) changes.  Also, a larger pool of gateway
              implementations will make coordinating the changes more
              difficult.  Finally, the complexity of a gateway IP
              layer is somewhat greater than that of a host, making
              the implementation and operation tasks more complex.

              In addition, the style of operation of some hosts is not
              appropriate for providing stable and robust gateway
              service.

         There is considerable merit in both of these viewpoints.  One
         conclusion can be drawn: a host administrator must have
         conscious control over whether or not a given host acts as a
         gateway.
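         As a concrete illustration of that conclusion: on a modern
         Linux host, the administrator's gateway-or-not choice is
         exposed as the net.ipv4.ip_forward sysctl.  The sketch below
         is a Linux-specific assumption, not part of RFC 1122 itself.

```python
# Illustrative sketch (assumes Linux): the administrator-controlled
# "does this host act as a gateway?" switch is the ip_forward sysctl.
IP_FORWARD_PATH = "/proc/sys/net/ipv4/ip_forward"


def parse_forwarding_flag(text: str) -> bool:
    """Interpret the sysctl's contents: "1" means the host forwards
    datagrams (acts as a gateway), "0" means it does not."""
    return text.strip() == "1"


def host_acts_as_gateway(path: str = IP_FORWARD_PATH) -> bool:
    """Report whether this host is currently configured as a gateway."""
    try:
        with open(path) as f:
            return parse_forwarding_flag(f.read())
    except OSError:
        # Non-Linux system or unreadable path: assume host-only behaviour.
        return False
```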
         See Section 3.1 for the detailed requirements.

Internet Engineering Task Force                                [Page 11]

RFC1122                       INTRODUCTION                  October 1989


   1.2  General Considerations

      There are two important lessons that vendors of Internet host
      software have learned and which a new vendor should consider
      seriously.

      1.2.1  Continuing Internet Evolution

         The enormous growth of the Internet has revealed problems of
         management and scaling in a large datagram-based packet
         communication system.  These problems are being addressed, and
         as a result there will be continuing evolution of the
         specifications described in this document.  These changes will
         be carefully planned and controlled, since there is extensive
         participation in this planning by the vendors and by the
         organizations responsible for operations of the networks.

         Development, evolution, and revision are characteristic of
         computer network protocols today, and this situation will
         persist for some years.  A vendor who develops computer
         communication software for the Internet protocol suite (or any
         other protocol suite!) and then fails to maintain and update
         that software for changing specifications is going to leave a
         trail of unhappy customers.  The Internet is a large
         communication network, and the users are in constant contact
         through it.  Experience has shown that knowledge of
         deficiencies in vendor software propagates quickly through the
         Internet technical community.
      1.2.2  Robustness Principle

         At every layer of the protocols, there is a general rule whose
         application can lead to enormous benefits in robustness and
         interoperability [IP:1]:

                "Be liberal in what you accept, and
                 conservative in what you send"

         Software should be written to deal with every conceivable
         error, no matter how unlikely; sooner or later a packet will
         come in with that particular combination of errors and
         attributes, and unless the software is prepared, chaos can
         ensue.  In general, it is best to assume that the network is
         filled with malevolent entities that will send in packets
         designed to have the worst possible effect.  This assumption
         will lead to suitable protective design, although the most
         serious problems in the Internet have been caused by
         unenvisaged mechanisms triggered by low-probability events;
         mere human malice would never have taken so devious a course!

         Adaptability to change must be designed into all levels of
         Internet host software.  As a simple example, consider a
         protocol specification that contains an enumeration of values
         for a particular header field -- e.g., a type field, a port
         number, or an error code; this enumeration must be assumed to
         be incomplete.  Thus, if a protocol specification defines four
         possible error codes, the software must not break when a fifth
         code shows up.  An undefined code might be logged (see below),
         but it must not cause a failure.

         The second part of the principle is almost as important:
         software on other hosts may contain deficiencies that make it
         unwise to exploit legal but obscure protocol features.
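         The four-error-codes example above can be sketched as follows.
         The protocol, its code values, and their names are illustrative
         assumptions, not taken from any real specification.

```python
import logging

# Hypothetical protocol: its specification enumerates exactly four
# error codes.  Per RFC 1122, we treat that enumeration as incomplete.
KNOWN_ERROR_CODES = {
    0: "no error",
    1: "bad header",
    2: "destination unreachable",
    3: "timeout",
}


def handle_error_code(code: int) -> str:
    """Map a received error code to a name.  An undefined code is
    logged (see Section 1.2.3) but must never cause a failure."""
    name = KNOWN_ERROR_CODES.get(code)
    if name is None:
        logging.warning("undefined error code %d received; ignoring", code)
        return "unknown"
    return name
```

         The point of the `.get()`-with-fallback shape is that a fifth
         code arriving off the wire takes the logged, non-fatal path
         rather than raising an exception.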
         It is unwise to stray far from the obvious and simple, lest
         untoward effects result elsewhere.  A corollary of this is
         "watch out for misbehaving hosts"; host software should be
         prepared, not just to survive other misbehaving hosts, but
         also to cooperate to limit the amount of disruption such hosts
         can cause to the shared communication facility.

      1.2.3  Error Logging

         The Internet includes a great variety of host and gateway
         systems, each implementing many protocols and protocol layers,
         and some of these contain bugs and mis-features in their
         Internet protocol software.  As a result of complexity,
         diversity, and distribution of function, the diagnosis of
         Internet problems is often very difficult.

         Problem diagnosis will be aided if host implementations
         include a carefully designed facility for logging erroneous or
         "strange" protocol events.  It is important to include as much
         diagnostic information as possible when an error is logged.
         In particular, it is often useful to record the header(s) of a
         packet that caused an error.  However, care must be taken to
         ensure that error logging does not consume prohibitive amounts
         of resources or otherwise interfere with the operation of the
         host.

         There is a tendency for abnormal but harmless protocol events
         to overflow error logging files; this can be avoided by using
         a "circular" log, or by enabling logging only while diagnosing
         a known failure.  It may be useful to filter and count
         duplicate successive messages.
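         A circular log with duplicate-message filtering, as described
         above, might be sketched like this; the capacity and message
         format are assumptions for illustration.

```python
from collections import deque


class CircularEventLog:
    """Fixed-size log of "strange" protocol events.  Old entries are
    discarded as new ones arrive, so abnormal but harmless events
    cannot overflow the log; duplicate successive messages are
    collapsed into one entry carrying a repeat count."""

    def __init__(self, capacity: int = 100):
        # deque with maxlen silently drops the oldest entry when full.
        self.entries = deque(maxlen=capacity)

    def log(self, message: str) -> None:
        if self.entries and self.entries[-1][0] == message:
            # Filter duplicates: bump the count on the newest entry.
            msg, count = self.entries[-1]
            self.entries[-1] = (msg, count + 1)
        else:
            self.entries.append((message, 1))
```

         Example use: logging "bad checksum" twice and then "short
         packet" yields two entries, the first with a count of 2.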
         One strategy that seems to work well is: (1) always count
         abnormalities and make such counts accessible through the
         management protocol (see [INTRO:1]); and (2) allow the logging
         of a great variety of events to be selectively enabled.  For
         example, it might be useful to be able to "log everything" or
         to "log everything for host X".

         Note that different managements may have differing policies
         about the amount of error logging that they want normally
         enabled in a host.  Some will say, "if it doesn't hurt me, I
         don't want to know about it", while others will want to take a
         more watchful and aggressive attitude about detecting and
         removing protocol abnormalities.

      1.2.4  Configuration

         It would be ideal if a host implementation of the Internet
         protocol suite could be entirely self-configuring.  This would
         allow the whole suite to be implemented in ROM or cast into
         silicon, it would simplify diskless workstations, and it would
         be an immense boon to harried LAN administrators as well as
         system vendors.  We have not reached this ideal; in fact, we
         are not even close.

         At many points in this document, you will find a requirement
         that a parameter be a configurable option.  There are several
         different reasons behind such requirements.  In a few cases,
         there is current uncertainty or disagreement about the best
         value, and it may be necessary to update the recommended value
         in the future.
         In other cases, the value really depends on external factors
         -- e.g., the size of the host and the distribution of its
         communication load, or the speeds and topology of nearby
         networks -- and self-tuning algorithms are unavailable and may
         be insufficient.  In some cases, configurability is needed
         because of administrative requirements.

         Finally, some configuration options are required to
         communicate with obsolete or incorrect implementations of the
         protocols, distributed without sources, that unfortunately
         persist in many parts of the Internet.  To make correct
         systems coexist with these faulty systems, administrators
         often have to "mis-configure" the correct systems.  This
         problem will correct itself gradually as the faulty systems
         are retired, but it cannot be ignored by vendors.

         When we say that a parameter must be configurable, we do not
         intend to require that its value be explicitly read from a
         configuration file at every boot time.  We recommend that
         implementors set up a default for each parameter, so a
         configuration file is only necessary to override those
         defaults that are inappropriate in a particular installation.
         Thus, the configurability requirement is an assurance that it
         will be POSSIBLE to override the default when necessary, even
         in a binary-only or ROM-based product.

         This document requires a particular value for such defaults in
         some cases.  The choice of default is a sensitive issue when
         the configuration item controls the accommodation to existing
         faulty systems.
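         The defaults-plus-override scheme recommended above can be
         sketched as follows; the parameter names and values are
         illustrative assumptions, not values this document mandates.

```python
# Built-in defaults stand in for the official protocol values; a
# configuration file is only needed to override defaults that are
# inappropriate for a particular installation.  Names are illustrative.
DEFAULTS = {
    "ip_default_ttl": 64,
    "tcp_keepalive_enabled": False,
}


def effective_config(overrides: dict) -> dict:
    """Overlay an installation's overrides on the built-in defaults,
    so a binary-only or ROM-based product with no configuration file
    still runs with sensible values."""
    config = dict(DEFAULTS)  # start from the built-in defaults
    config.update(overrides)  # file-supplied values win where present
    return config
```

         With an empty override set, the host runs entirely on
         defaults; the configuration file only carries the exceptions.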
         If the Internet is to converge successfully to complete
         interoperability, the default values built into
         implementations must implement the official protocol, not
         "mis-configurations" to accommodate faulty implementations.
         Although marketing considerations have led some vendors to
         choose mis-configuration defaults, we urge vendors to choose
         defaults that will conform to the standard.

         Finally, we note that a vendor needs to provide adequate
         documentation on all configuration parameters, their limits
         and effects.

   1.3  Reading this Document

      1.3.1  Organization

         Protocol layering, which is generally used as an organizing
         principle in implementing network software, has also been used
         to organize this document.  In describing the rules, we assume
         that an implementation does strictly mirror the layering of
         the protocols.  Thus, the following three major sections
         specify the requirements for the link layer, the internet
         layer, and the transport layer, respectively.  A companion RFC
         [INTRO:1] covers application level software.  This layerist
         organization was chosen for simplicity and clarity.

         However, strict layering is an imperfect model, both for the
         protocol suite and for recommended implementation approaches.
         Protocols in different layers interact in complex and
         sometimes subtle ways, and particular functions often involve
         multiple layers.  There are many design choices in an
         implementation, many of which involve creative "breaking" of
         strict layering.  Every implementor is urged to read
         references [INTRO:7] and [INTRO:8].

         This document describes the conceptual service interface
         between layers using a functional ("procedure call") notation,
         like that used in the TCP specification [TCP:1].
         A host implementation must support the logical information
         flow implied by these calls, but need not literally implement
         the calls themselves.  For example, many implementations
         reflect the coupling between the transport layer and the IP
         layer by giving them shared access to common data structures.
         These data structures, rather than explicit procedure calls,
         are then the agency for passing much of the information that
         is required.

         In general, each major section of this document is organized
         into the following subsections:

         (1)  Introduction

         (2)  Protocol Walk-Through -- considers the protocol
              specification documents section-by-section, correcting
