be configured on a per-peer address basis.
3.4 For compatibility with existing practice and with RFC 877 systems, IP
datagrams MUST, by default, be encapsulated on a virtual circuit
opened with the CC CUD.
Implementations MAY also support up to three other possible
encapsulations of IP:
o IP may be contained in multiplexed data packets on a circuit using
the Null (multiplexed) encapsulation. Such data packets are
identified by a NLPID of hex CC.
o IP may be encapsulated within the SNAP encapsulation on a circuit.
This encapsulation is identified by containing, in the 5-octet SNAP
header, an Organizationally Unique Identifier (OUI) of hex 00-00-00
and Protocol Identifier (PID) of hex 08-00.
o On a circuit using the Null encapsulation, IP may be contained
within the SNAP encapsulation of IP in multiplexed data packets.
If an implementation supports the SNAP, multiplexed, and/or
multiplexed SNAP encapsulations, then it MUST accept the encoding of
IP within the supported encapsulation(s), MAY send IP using those
encapsulation(s), and MUST allow the encapsulation used to send IP to
be configured on a per-peer address basis.
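As a non-normative sketch (in Python, with helper names of our own
choosing; only the octet values come from this section and ISO/IEC TR
9577), the four options look like this:

   NLPID_NULL = b"\x00"        # Null (multiplexed) encapsulation
   NLPID_IP   = b"\xcc"        # IP, as in RFC 877
   NLPID_SNAP = b"\x80"        # SNAP header follows
   SNAP_IP    = b"\x00\x00\x00\x08\x00"  # OUI 00-00-00, PID 08-00

   def call_user_data(style):
       """First octets of the Call User Data for each circuit style."""
       return {"ip":   NLPID_IP,              # default, RFC 877
               "null": NLPID_NULL,            # protocol named per PDU
               "snap": NLPID_SNAP + SNAP_IP,  # one SNAP protocol (IP)
               }[style]

   def frame_ip_pdu(style, datagram):
       """Octets sent as one complete packet sequence for one PDU."""
       if style in ("ip", "snap"):        # the circuit implies IP
           return datagram                # no per-PDU header
       if style == "null":                # multiplexed circuit
           return NLPID_IP + datagram
       if style == "null+snap":           # multiplexed SNAP
           return NLPID_SNAP + SNAP_IP + datagram
       raise ValueError(style)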
3.5 The negotiable facilities of X.25 MAY be used (e.g., packet and
window size negotiation). Since PDUs are sent as complete packet
sequences, any maximum X.25 data packet size MAY be configured or
negotiated between systems and their network service providers. See
section 4.5 for a discussion of maximum X.25 data packet size and
network performance.
There is no implied relationship between PDU size and X.25 packet
size (i.e., the method of setting IP MTU based on X.25 packet size
in RFC 877 is not used).
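As a non-normative illustration, a transmitter might segment each PDU
into a complete packet sequence as below; with 128-octet data packets,
a 1500-octet PDU becomes twelve packets, the first eleven carrying the
M (more data) bit:

   def segment(pdu, pkt_size):
       """Split one PDU into a complete packet sequence.  Every
       packet except the last is full-sized and carries the M bit."""
       pkts = [pdu[i:i + pkt_size] for i in range(0, len(pdu), pkt_size)]
       return [(p, i < len(pkts) - 1)     # (data, M bit)
               for i, p in enumerate(pkts)]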
3.6 Every system MUST be able to receive and transmit PDUs up to at
least 1600 octets in length.
For compatibility with existing practice, as well as
interoperability with RFC 877 systems, the transmit MTU for IP
datagrams SHOULD default to 1500, and MUST be configurable in at
least the range 576 to 1600.
This is done with a view toward a standard default IP MTU of 1500,
used on both local and wide area networks with no fragmentation at
routers. Actually redefining the IP default MTU is, of course,
outside the scope of this specification.
The PDU size (e.g., IP MTU) MUST be configurable, on at least a
per-interface basis. The maximum transmitted PDU length SHOULD be
configurable on a per-peer basis, and MAY be configurable on a per-
encapsulation basis as well. Note that the ability to configure to
send IP datagrams with an MTU of 576 octets and to receive IP
datagrams of 1600 octets is essential to interoperate with existing
implementations of RFC 877 and implementations of this
specification.
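A minimal configuration check, with the defaults suggested above (the
names are ours, not part of this specification):

   MIN_MTU, MAX_MTU_FLOOR, DEFAULT_MTU = 576, 1600, 1500

   def set_transmit_mtu(octets):
       """Per-interface (or per-peer) transmit MTU for IP."""
       if not MIN_MTU <= octets <= MAX_MTU_FLOOR:
           # a larger ceiling MAY be supported; 576..1600 is the
           # minimum range that MUST be configurable
           raise ValueError("MTU must lie within 576..1600")
       return octets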
Note that on circuits using the Null (multiplexed) encapsulation,
when IP packets are encapsulated using the NLPID of hex CC, then the
default IP MTU of 1500 implies a PDU size of 1501; a PDU size of
1600 implies an IP MTU of 1599. When IP packets are encapsulated
using the NLPID of hex 80 followed by the SNAP header for IP, then
the default IP MTU of 1500 implies a PDU size of 1506; a PDU size of
1600 implies an IP MTU of 1594.
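These figures follow from the per-PDU overhead on a Null (multiplexed)
circuit: one NLPID octet for hex CC, or six octets (NLPID hex 80 plus
the 5-octet SNAP header) for SNAP-encapsulated IP.  A quick check, in
Python:

   OVERHEAD = {"cc": 1,      # NLPID hex CC only
               "snap": 6}    # NLPID hex 80 + 5-octet SNAP header

   def pdu_size(mtu, enc): return mtu + OVERHEAD[enc]
   def ip_mtu(pdu, enc):   return pdu - OVERHEAD[enc]

   assert pdu_size(1500, "cc") == 1501 and ip_mtu(1600, "cc") == 1599
   assert pdu_size(1500, "snap") == 1506 and ip_mtu(1600, "snap") == 1594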
Of course, an implementation MAY support a maximum PDU size larger
than 1600 octets. In particular, there is no limit to the size that
may be used when explicitly configured by communicating peers.
3.7 Each ISO/IEC TR 9577 encapsulation (e.g., IP, CLNP, and SNAP)
requires a separate virtual circuit between systems. In addition,
multiple virtual circuits for a single encapsulation MAY be used
between systems, to, for example, increase throughput (see notes in
section 4.5).
Receivers SHOULD accept multiple incoming calls with the same
encapsulation from a single system. Having done so, receivers MUST
then accept incoming PDUs on the additional circuit(s), and SHOULD
transmit on the additional circuits.
Shedding load by refusing additional calls for the same
encapsulation with an X.25 diagnostic of 0 (DTE clearing) is correct
practice, as is shortening inactivity timers to try to clear
circuits.
Receivers MUST NOT accept the incoming call, only to close the
circuit or ignore PDUs from the circuit.
Because opening multiple virtual circuits specifying the same
encapsulation is specifically allowed, algorithms to prevent virtual
circuit call collision, such as the one found in section 8.4.3.5 of
ISO/IEC 8473 [4], MUST NOT be implemented.
While allowing multiple virtual circuits for a single protocol is
specifically desired and allowed, implementations MAY choose (by
configuration) to permit only a single circuit for some protocols to
some destinations. Only in such a case, if a colliding incoming
call is received while a call request is pending, the incoming call
shall be rejected. Note that this may result in a failure to
establish a connection. In such a case, each system shall wait at
least a configurable collision retry time before retrying. Adding a
random increment, with exponential backoff if necessary, is
recommended.
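One possible shape for that retry timer (the base value and cap are
configuration choices, not specified here):

   import random

   def retry_delays(base, cap):
       """Collision retry timer: configurable base, random
       increment, exponential backoff.  Draw one value per retry."""
       delay = base
       while True:
           yield delay + random.uniform(0, base)  # random increment
           delay = min(delay * 2, cap)            # exponential backoff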
3.8 Either system MAY close a virtual circuit. If the virtual circuit
is closed or reset while a datagram is being transmitted, the
datagram is lost. Systems SHOULD be able to configure a minimum
holding time for circuits to remain open as long as the endpoints
are up. (Note that holding time, the time the circuit has been open,
differs from idle time.)
3.9 Each system MUST use an inactivity timer to clear virtual circuits
that are idle for some period of time. Some X.25 networks,
including the ISDN under present tariffs in most areas, charge for
virtual circuit holding time. Even where they do not, the resource
SHOULD be released when idle. The timer SHOULD be configurable; a
timer value of "infinite" is acceptable when explicitly configured.
The default SHOULD be a small number of minutes. For IP, a
reasonable default is 90 seconds.
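Combining this with the holding time of section 3.8, the clearing
decision might look like the following sketch (parameter names are
ours):

   import time

   def should_clear(opened_at, last_traffic,
                    idle_timeout=90.0, min_holding=0.0):
       """Clear circuits idle past the inactivity timer, but honor
       the minimum holding time; an 'infinite' timer is acceptable
       only when explicitly configured."""
       now = time.monotonic()
       if now - opened_at < min_holding:           # section 3.8
           return False
       if idle_timeout == float("inf"):            # explicit config
           return False
       return now - last_traffic >= idle_timeout   # section 3.9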
3.10 Systems SHOULD allow calls from unconfigured calling addresses
(presumably not collect calls, however); this SHOULD be a
configuration option. A system accepting such a call will, of
course, not transmit on that virtual circuit if it cannot determine
the protocol (e.g., IP) address of the caller. As an example, on
the DDN this is not a restriction because IP addresses can be
determined algorithmically based upon the caller's X.121 address
[7,9].
Allowing such calls helps work around various "helpful" address
translations done by the network(s), as well as allowing
experimentation with various address resolution protocols.
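The decision reduces to something like the sketch below; the static
table stands in for per-peer configuration, and the actual DDN
X.121-to-IP derivation is defined in [7,9], not reproduced here:

   def handle_incoming_call(x121, peers, allow_unconfigured=True):
       """peers maps configured X.121 addresses to IP addresses."""
       ip = peers.get(x121)       # an algorithmic mapping, where
                                  # defined, would be tried here too
       if ip is not None:
           return ("accept", ip)
       if allow_unconfigured:     # receive-only: we cannot transmit
           return ("accept", None)
       return ("clear", None)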
3.11 Systems SHOULD use a configurable hold-down timer to prevent calls
to failed destinations from being immediately retried.
3.12 X.25 implementations MUST minimally support the following features
in order to conform with this specification: call setup and
clearing and complete packet sequences. For better performance
and/or interoperability, X.25 implementations SHOULD also support:
extended frame and/or packet sequence numbering, flow control
parameter negotiation, and reverse charging.
3.13 The following X.25 features MUST NOT be used: interrupt packets and
the Q bit (indicating qualified data). Other X.25 features not
explicitly discussed in this document, such as fast select and the
D bit (indicating end-to-end significance), SHOULD NOT be used.
Use of the D bit will interfere with use of the M bit (more data
sequences) required for identification of PDUs. In particular, as
subscription to the D bit modification facility (X.25-1988, section
3.3) will prevent proper operation, this user facility MUST NOT be
subscribed.
3.14 ISO/IEC 8208 [11] defines the clearing diagnostic code 249 to
signify that a requested protocol is not supported. Systems MAY
use this diagnostic code when clearing an incoming call because the
identified protocol is not supported. Non-8208 systems more
typically use a diagnostic code of 0 for this function. Supplying
a diagnostic code is not mandatory, but when it is supplied for
this reason, it MUST be either of these two values.
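In code form (the call-clearing interface here is hypothetical; only
the two diagnostic values come from this section):

   DIAG_NOT_SUPPORTED = 249   # ISO/IEC 8208: protocol not supported
   DIAG_DTE_CLEARING  = 0     # typical for non-8208 systems

   def clear_unsupported_protocol(clear_call, iso8208=True):
       """Clear an incoming call whose CUD names an unsupported
       protocol; if a diagnostic is supplied for this reason, it
       MUST be one of these two values."""
       clear_call(diagnostic=DIAG_NOT_SUPPORTED if iso8208
                  else DIAG_DTE_CLEARING)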
4. General Remarks
The following remarks are not specifications or requirements for
implementations, but provide developers and users with guidelines and
the results of operational experience with RFC 877.
4.1 Protocols above the network layer, such as TCP or TP4, do not
affect this standard. In particular, no attempt is made to open
X.25 virtual circuits corresponding to TCP or TP4 connections.
4.2 Both the circuit and multiplexed encapsulations of SNAP may be used
to contain any SNAP encapsulated protocol. In particular, this
includes using an OUI of 00-00-00 and the two octets of PID
containing an Ethertype [7], or using IEEE 802.1's OUI of hex 00-
80-C2 with the bridging PIDs listed in RFC 1294, "Multiprotocol
Interconnect over Frame Relay" [8]. Note that IEEE 802.1d bridging
can only be performed over a circuit using the Null (multiplexed)
encapsulation of SNAP, because of the necessity of preserving the
order of PDUs (including 802.1d Bridged PDUs) using different SNAP
headers.
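For instance, the 5-octet SNAP headers mentioned above can be formed
as follows (the BPDU PID value is taken from RFC 1294):

   def snap(oui, pid):
       """5-octet SNAP header: 3-octet OUI + 2-octet PID."""
       assert len(oui) == 3 and len(pid) == 2
       return bytes(oui + pid)

   IP_BY_ETHERTYPE = snap([0x00, 0x00, 0x00], [0x08, 0x00])
   BRIDGED_BPDU    = snap([0x00, 0x80, 0xC2], [0x00, 0x0E])  # RFC 1294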
4.3 Experience has shown that there are X.25 implementations that will
assign calls with CC CUD to the X.29 PAD (remote login) facility
when the IP layer is not installed, not configured properly, or not
operating (indeed, they assume that ALL calls for unconfigured or
unbound X.25 protocol IDs are for X.29 PAD sessions). Call
originators can detect that this has occurred at the receiver if the
originator receives any X.25 data packets with the Q bit set,
especially if the first octet of these packets is hex 02, 04, or 06
(X.29 PAD parameter commands). In this case, the originator should
clear the call, and log the occurrence so that the misconfigured
X.25 address can be corrected. It may be useful to also use the
hold-down timer (see section 3.11) to prevent further attempts for
some period of time.
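A detection heuristic along these lines (the packet representation is
ours):

   X29_PARAMETER_COMMANDS = {0x02, 0x04, 0x06}

   def pad_evidence(q_bit, data):
       """0: none; 1: Q bit set (already suspicious, since section
       3.13 forbids its use); 2: Q bit plus an X.29 PAD parameter
       command as the first octet."""
       if not q_bit:
           return 0
       return 2 if data and data[0] in X29_PARAMETER_COMMANDS else 1

On any nonzero evidence the originator would clear the call, log the
occurrence, and apply the hold-down timer of section 3.11.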
4.4 It is often assumed that a larger X.25 data packet size will result
in increased performance. This is not necessarily true: in typical
X.25 networks it will actually decrease performance.
Many, if not most, X.25 networks completely store X.25 data packets
in each switch before forwarding them. If the X.25 network requires
a path through a number of switches, and low-speed trunks are used,
then negotiating and using large X.25 data packets could result in
large transit delays through the X.25 network as a result of the
time required to clock the data packets over each low-speed trunk.
If a small end-to-end window size is also used, this may also
adversely affect the end-to-end throughput of the X.25 circuit. For
this reason, segmenting large IP datagrams in the X.25 layer into
complete packet sequences of smaller X.25 data packets allows a
greater amount of pipelining through the X.25 switches, with
subsequent improvements in end-to-end throughput.
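A rough model of this effect (ignoring queueing, headers, and
per-packet processing) shows the pipelining gain: a 1024-octet PDU
over four 9600 bps trunks takes about 3.4 seconds as a single
1024-octet packet, but about 1.2 seconds as eight 128-octet packets.

   import math

   def transit_delay(pdu_octets, pkt_octets, hops, trunk_bps):
       """Store-and-forward delay for one PDU sent as a complete
       packet sequence over equal-speed trunks."""
       npkts = math.ceil(pdu_octets / pkt_octets)
       per_pkt = pkt_octets * 8 / trunk_bps   # serialization time
       # the first packet crosses every trunk; the rest pipeline
       return (hops + npkts - 1) * per_pkt

   assert round(transit_delay(1024, 1024, 4, 9600), 1) == 3.4
   assert round(transit_delay(1024, 128, 4, 9600), 1) == 1.2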
Large X.25 data packet size combined with slow (e.g., 9.6 kbps)
physical circuits will also increase individual packet latency for
other virtual circuits on the same path; this may cause unacceptable
effects on, for example, X.29 connections.
This discussion is further complicated by the fact that X.25