ip-sysctl.txt

/proc/sys/net/ipv4/* Variables:

ip_forward - BOOLEAN
	0 - disabled (default)
	not 0 - enabled

	Forward Packets between interfaces.

	This variable is special, its change resets all configuration
	parameters to their default state (RFC1122 for hosts, RFC1812
	for routers)

ip_default_ttl - INTEGER
	default 64

ip_no_pmtu_disc - BOOLEAN
	Disable Path MTU Discovery.
	default FALSE

min_pmtu - INTEGER
	default 562 - minimum discovered Path MTU

mtu_expires - INTEGER
	Time, in seconds, that cached PMTU information is kept.

min_adv_mss - INTEGER
	The advertised MSS depends on the first hop route MTU, but will
	never be lower than this setting.

IP Fragmentation:

ipfrag_high_thresh - INTEGER
	Maximum memory used to reassemble IP fragments. When
	ipfrag_high_thresh bytes of memory is allocated for this purpose,
	the fragment handler will toss packets until ipfrag_low_thresh
	is reached.

ipfrag_low_thresh - INTEGER
	See ipfrag_high_thresh

ipfrag_time - INTEGER
	Time in seconds to keep an IP fragment in memory.

ipfrag_secret_interval - INTEGER
	Regeneration interval (in seconds) of the hash secret (or lifetime
	for the hash secret) for IP fragments.
	Default: 600

ipfrag_max_dist - INTEGER
	ipfrag_max_dist is a non-negative integer value which defines the
	maximum "disorder" which is allowed among fragments which share a
	common IP source address. Note that reordering of packets is
	not unusual, but if a large number of fragments arrive from a source
	IP address while a particular fragment queue remains incomplete, it
	probably indicates that one or more fragments belonging to that queue
	have been lost. When ipfrag_max_dist is positive, an additional check
	is done on fragments before they are added to a reassembly queue - if
	ipfrag_max_dist (or more) fragments have arrived from a particular IP
	address between additions to any IP fragment queue using that source
	address, it's presumed that one or more fragments in the queue are
	lost. The existing fragment queue will be dropped, and a new one
	started. An ipfrag_max_dist value of zero disables this check.

	Using a very small value, e.g. 1 or 2, for ipfrag_max_dist can
	result in unnecessarily dropping fragment queues when normal
	reordering of packets occurs, which could lead to poor application
	performance. Using a very large value, e.g. 50000, increases the
	likelihood of incorrectly reassembling IP fragments that originate
	from different IP datagrams, which could result in data corruption.
	Default: 64

INET peer storage:

inet_peer_threshold - INTEGER
	The approximate size of the storage.  Starting from this threshold
	entries will be thrown aggressively.  This threshold also determines
	entries' time-to-live and time intervals between garbage collection
	passes.  More entries, less time-to-live, less GC interval.

inet_peer_minttl - INTEGER
	Minimum time-to-live of entries.  Should be enough to cover fragment
	time-to-live on the reassembling side.  This minimum time-to-live is
	guaranteed if the pool size is less than inet_peer_threshold.
	Measured in jiffies(1).

inet_peer_maxttl - INTEGER
	Maximum time-to-live of entries.  Unused entries will expire after
	this period of time if there is no memory pressure on the pool (i.e.
	when the number of entries in the pool is very small).
	Measured in jiffies(1).

inet_peer_gc_mintime - INTEGER
	Minimum interval between garbage collection passes.  This interval is
	in effect under high memory pressure on the pool.
	Measured in jiffies(1).

inet_peer_gc_maxtime - INTEGER
	Minimum interval between garbage collection passes.  This interval is
	in effect under low (or absent) memory pressure on the pool.
	Measured in jiffies(1).
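All of the entries above (and the TCP ones that follow) are exposed as plain
files under /proc/sys, so they can be inspected or changed with sysctl(8) or
with ordinary file I/O.  A minimal sketch, using ip_forward as the example;
writing requires root, and the error handling is only illustrative:

	/* Read (and optionally set) one of the sysctls described above
	 * through its /proc/sys interface.  Any other entry under
	 * /proc/sys/net/ipv4/ works the same way. */
	#include <stdio.h>

	int main(int argc, char **argv)
	{
		const char *path = "/proc/sys/net/ipv4/ip_forward";
		char buf[32];
		FILE *f;

		if (argc > 1) {			/* e.g. "prog 1" enables forwarding */
			f = fopen(path, "w");
			if (!f || fputs(argv[1], f) == EOF) {
				perror("write ip_forward");
				return 1;
			}
			fclose(f);
		}

		f = fopen(path, "r");		/* read back the current value */
		if (!f || !fgets(buf, sizeof(buf), f)) {
			perror("read ip_forward");
			return 1;
		}
		fclose(f);
		printf("ip_forward = %s", buf);
		return 0;
	}

Run with no arguments it only reports the current setting; writing "1" (as
root) enables forwarding, which, per the note above, also resets the other
ipv4 configuration parameters to their defaults.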
TCP variables:

somaxconn - INTEGER
	Limit of socket listen() backlog, known in userspace as SOMAXCONN.
	Defaults to 128.  See also tcp_max_syn_backlog for additional tuning
	for TCP sockets.

tcp_abc - INTEGER
	Controls Appropriate Byte Count (ABC) defined in RFC3465.
	ABC is a way of increasing congestion window (cwnd) more slowly
	in response to partial acknowledgments.
	Possible values are:
		0 increase cwnd once per acknowledgment (no ABC)
		1 increase cwnd once per acknowledgment of full sized segment
		2 allow increase cwnd by two if acknowledgment is
		  of two segments to compensate for delayed acknowledgments.
	Default: 0 (off)

tcp_abort_on_overflow - BOOLEAN
	If the listening service is too slow to accept new connections,
	reset them. The default state is FALSE. It means that if overflow
	occurred due to a burst, the connection will recover. Enable this
	option _only_ if you are really sure that the listening daemon
	cannot be tuned to accept connections faster. Enabling this
	option can harm clients of your server.

tcp_adv_win_scale - INTEGER
	Count buffering overhead as bytes/2^tcp_adv_win_scale
	(if tcp_adv_win_scale > 0) or bytes-bytes/2^(-tcp_adv_win_scale),
	if it is <= 0.
	Default: 2

tcp_allowed_congestion_control - STRING
	Show/set the congestion control choices available to non-privileged
	processes. The list is a subset of those listed in
	tcp_available_congestion_control.
	Default is "reno" and the default setting (tcp_congestion_control).

tcp_app_win - INTEGER
	Reserve max(window/2^tcp_app_win, mss) of window for application
	buffer. Value 0 is special, it means that nothing is reserved.
	Default: 31

tcp_available_congestion_control - STRING
	Shows the available congestion control choices that are registered.
	More congestion control algorithms may be available as modules,
	but not loaded.

tcp_base_mss - INTEGER
	The initial value of search_low to be used by Packetization Layer
	Path MTU Discovery (MTU probing).  If MTU probing is enabled,
	this is the initial MSS used by the connection.

tcp_congestion_control - STRING
	Set the congestion control algorithm to be used for new
	connections. The algorithm "reno" is always available, but
	additional choices may be available based on kernel configuration.
	Default is set as part of kernel configuration.
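tcp_congestion_control only sets the system-wide default for new connections;
an individual socket may also select any algorithm named in
tcp_allowed_congestion_control through the TCP_CONGESTION socket option.
A hedged sketch, assuming a kernel and libc recent enough to expose
TCP_CONGESTION, and using "reno", which the text above says is always
available:

	/* Pick a per-socket congestion control algorithm instead of the
	 * system default in tcp_congestion_control.  The name must be one
	 * of those listed in tcp_allowed_congestion_control. */
	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netinet/tcp.h>

	#ifndef TCP_CONGESTION		/* very old libc headers may lack it */
	#define TCP_CONGESTION 13	/* value used by the kernel */
	#endif

	int main(void)
	{
		char name[16] = "reno";
		char cur[16];
		socklen_t len = sizeof(cur);
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0) {
			perror("socket");
			return 1;
		}
		if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
			       name, strlen(name)) < 0)
			perror("setsockopt TCP_CONGESTION");

		if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cur, &len) == 0)
			printf("congestion control: %.*s\n", (int)len, cur);
		return 0;
	}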
tcp_dsack - BOOLEAN
	Allows TCP to send "duplicate" SACKs.

tcp_ecn - BOOLEAN
	Enable Explicit Congestion Notification in TCP.

tcp_fack - BOOLEAN
	Enable FACK congestion avoidance and fast retransmission.
	The value is not used if tcp_sack is not enabled.

tcp_fin_timeout - INTEGER
	Time to hold a socket in state FIN-WAIT-2, if it was closed
	by our side. The peer can be broken and never close its side,
	or even die unexpectedly. The default value is 60 sec.
	The usual value used in 2.2 was 180 seconds; you may restore
	it, but remember that if your machine is even an underloaded web
	server, you risk overflowing memory with kilotons of dead sockets.
	FIN-WAIT-2 sockets are less dangerous than FIN-WAIT-1,
	because they eat a maximum of 1.5K of memory, but they tend
	to live longer.  Cf. tcp_max_orphans.

tcp_frto - INTEGER
	Enables Forward RTO-Recovery (F-RTO) defined in RFC4138.
	F-RTO is an enhanced recovery algorithm for TCP retransmission
	timeouts.  It is particularly beneficial in wireless environments
	where packet loss is typically due to random radio interference
	rather than intermediate router congestion.  F-RTO is a sender-side
	only modification, so it does not require any support from the peer.
	In the typical case, however, where the wireless link is the local
	access link and most of the data flows downlink, the faraway servers
	should have F-RTO enabled to take advantage of it.
	If set to 1, the basic version is enabled.  2 enables SACK-enhanced
	F-RTO if the flow uses SACK.  The basic version can also be used when
	SACK is in use, though there are scenarios where F-RTO interacts
	badly with the packet counting of a SACK-enabled TCP flow.

tcp_frto_response - INTEGER
	When F-RTO has detected that a TCP retransmission timeout was
	spurious (i.e., the timeout would have been avoided had TCP set a
	longer retransmission timeout), TCP has several options for what to
	do next. Possible values are:
		0 Rate halving based; a smooth and conservative response,
		  results in halved cwnd and ssthresh after one RTT
		1 Very conservative response; not recommended because even
		  though it is valid, it interacts poorly with the rest of
		  Linux TCP; halves cwnd and ssthresh immediately
		2 Aggressive response; undoes congestion control measures
		  that are now known to be unnecessary (ignoring the
		  possibility of a lost retransmission that would require
		  TCP to be more cautious); cwnd and ssthresh are restored
		  to the values prior to the timeout
	Default: 0 (rate halving based)

tcp_keepalive_time - INTEGER
	How often TCP sends out keepalive messages when keepalive is enabled.
	Default: 2 hours.

tcp_keepalive_probes - INTEGER
	How many keepalive probes TCP sends out, until it decides that the
	connection is broken. Default value: 9.

tcp_keepalive_intvl - INTEGER
	How frequently the probes are sent out. Multiplied by
	tcp_keepalive_probes it gives the time to kill a non-responding
	connection after probes have started. Default value: 75 sec,
	i.e. the connection will be aborted after ~11 minutes of retries.
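The three keepalive sysctls above are only system-wide defaults, and they
have no effect unless a socket enables SO_KEEPALIVE; on Linux a socket can
also override each of them individually.  A minimal sketch (the 120/15/4
values are arbitrary illustration, not recommendations):

	/* Enable keepalive on one socket and override the three sysctl
	 * defaults (tcp_keepalive_time / _intvl / _probes) for it only.
	 * TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific. */
	#include <stdio.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netinet/tcp.h>

	static int set_keepalive(int fd, int idle, int intvl, int cnt)
	{
		int on = 1;

		if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
			return -1;
		/* idle seconds before the first probe (tcp_keepalive_time) */
		if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
			return -1;
		/* seconds between probes (tcp_keepalive_intvl) */
		if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
			return -1;
		/* probes before the connection is declared dead (tcp_keepalive_probes) */
		return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
	}

	int main(void)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0 || set_keepalive(fd, 120, 15, 4) < 0) {
			perror("keepalive setup");
			return 1;
		}
		printf("first probe after 120s, then every 15s, 4 probes\n");
		return 0;
	}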
tcp_low_latency - BOOLEAN
	If set, the TCP stack makes decisions that prefer lower
	latency as opposed to higher throughput.  By default, this
	option is not set, meaning that higher throughput is preferred.
	An example of an application where this default should be
	changed would be a Beowulf compute cluster.
	Default: 0

tcp_max_orphans - INTEGER
	Maximal number of TCP sockets not attached to any user file handle,
	held by the system.  If this number is exceeded, orphaned connections
	are reset immediately and a warning is printed.  This limit exists
	only to prevent simple DoS attacks; you _must_ not rely on this
	or lower the limit artificially, but rather increase it
	(probably, after increasing installed memory),
	if network conditions require more than the default value,
	and tune network services to linger and kill such states
	more aggressively.  Let me remind you again: each orphan eats
	up to ~64K of unswappable memory.

tcp_max_syn_backlog - INTEGER
	Maximal number of remembered connection requests which have
	still not received an acknowledgment from the connecting client.
	The default value is 1024 for systems with more than 128Mb of memory,
	and 128 for low memory machines.  If the server suffers from overload,
	try increasing this number.

tcp_max_tw_buckets - INTEGER
	Maximal number of timewait sockets held by the system simultaneously.
	If this number is exceeded, the time-wait socket is immediately
	destroyed and a warning is printed.  This limit exists only to
	prevent simple DoS attacks; you _must_ not lower the limit
	artificially, but rather increase it (probably, after increasing
	installed memory), if network conditions require more than the
	default value.

tcp_mem - vector of 3 INTEGERs: min, pressure, max
	min: below this number of pages TCP is not bothered about its
	memory appetite.

	pressure: when the amount of memory allocated by TCP exceeds this
	number of pages, TCP moderates its memory consumption and enters
	memory pressure mode, which is exited when memory consumption falls
	under "min".

	max: number of pages allowed for queueing by all TCP sockets.

	Defaults are calculated at boot time from the amount of available
	memory.

tcp_moderate_rcvbuf - BOOLEAN
	If set, TCP performs receive buffer autotuning, attempting to
	automatically size the buffer (no greater than tcp_rmem[2]) to
	match the size required by the path for full throughput.  Enabled by
	default.

tcp_mtu_probing - INTEGER
	Controls TCP Packetization-Layer Path MTU Discovery.  Takes three
	values:
	  0 - Disabled
	  1 - Disabled by default, enabled when an ICMP black hole detected
	  2 - Always enabled, use initial MSS of tcp_base_mss.

tcp_no_metrics_save - BOOLEAN
	By default, TCP saves various connection metrics in the route cache
	when the connection closes, so that connections established in the
	near future can use these to set initial conditions.  Usually, this
	increases overall performance, but may sometimes cause performance
	degradation.  If set, TCP will not cache metrics on closing
	connections.

tcp_orphan_retries - INTEGER
	How many times to retry before killing a TCP connection closed
	by our side.  The default value of 7 corresponds to ~50sec-16min
	depending on RTO.  If your machine is a loaded web server,
	you should think about lowering this value; such sockets
	may consume significant resources.  Cf. tcp_max_orphans.

tcp_reordering - INTEGER
	Maximal reordering of packets in a TCP stream.
	Default: 3

tcp_retrans_collapse - BOOLEAN
	Bug-to-bug compatibility with some broken printers.
	On retransmit, try to send bigger packets to work around bugs in
	certain TCP stacks.

tcp_retries1 - INTEGER
	How many times to retry before deciding that something is wrong
	and it is necessary to report this suspicion to the network layer.
	The minimal RFC value is 3, which is the default; it corresponds
	to ~3sec-8min depending on RTO.

tcp_retries2 - INTEGER
	How many times to retry before killing an alive TCP connection.
	RFC1122 says that the limit should be longer than 100 sec.
	That is too small a number.  The default value of 15 corresponds
	to ~13-30min depending on RTO.

tcp_rfc1337 - BOOLEAN
	If set, the TCP stack behaves conforming to RFC1337. If unset,
	we are not conforming to the RFC, but prevent TCP TIME_WAIT
	assassination.
	Default: 0

tcp_rmem - vector of 3 INTEGERs: min, default, max
	min: Minimal size of receive buffer used by TCP sockets.
	It is guaranteed to each TCP socket, even under moderate memory
	pressure.
	Default: 8K

	default: default size of receive buffer used by TCP sockets.
	This value overrides net.core.rmem_default used by other protocols.
	Default: 87380 bytes. This value results in a window of 65535 with
	the default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit
	less for the default tcp_app_win. See below about these variables.

	max: maximal size of receive buffer allowed for automatically
	selected receiver buffers for TCP sockets. This value does not
	override net.core.rmem_max; "static" selection via SO_RCVBUF does
	not use this.
	Default: 87380*2 bytes.
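As a worked example of the tcp_adv_win_scale formula and the tcp_rmem note
above: with the default tcp_adv_win_scale of 2, a quarter of the buffer is
counted as overhead, so the default 87380-byte buffer advertises a
65535-byte window.  A small sketch of that arithmetic (the helper is purely
illustrative, not a kernel API):

	/* Window offered for a given buffer size and tcp_adv_win_scale,
	 * following the formula in the tcp_adv_win_scale entry:
	 * overhead is bytes/2^scale for scale > 0, and
	 * bytes - bytes/2^(-scale) for scale <= 0. */
	#include <stdio.h>

	static long win_from_space(long space, int adv_win_scale)
	{
		return adv_win_scale <= 0 ?
			space >> (-adv_win_scale) :
			space - (space >> adv_win_scale);
	}

	int main(void)
	{
		printf("%ld\n", win_from_space(87380, 2));	/* prints 65535 */
		printf("%ld\n", win_from_space(87380 * 2, 2));	/* tcp_rmem max default */
		return 0;
	}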
tcp_sack - BOOLEAN
	Enable selective acknowledgments (SACKs).

tcp_slow_start_after_idle - BOOLEAN
	If set, provide RFC2861 behavior and time out the congestion
	window after an idle period.
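Tying back to the tcp_rmem and tcp_moderate_rcvbuf entries above: an explicit
SO_RCVBUF is the "static" selection mentioned there; it opts the socket out
of receive-buffer autotuning and is bounded by net.core.rmem_max rather than
tcp_rmem's max.  A short sketch (the 256 KB figure is arbitrary):

	/* "Static" receive-buffer selection: setting SO_RCVBUF explicitly
	 * disables tcp_moderate_rcvbuf autotuning for this socket, and the
	 * value is capped by net.core.rmem_max, not by tcp_rmem[2]. */
	#include <stdio.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	int main(void)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);
		int req = 256 * 1024;		/* request a 256 KB receive buffer */
		int got;
		socklen_t len = sizeof(got);

		if (fd < 0) {
			perror("socket");
			return 1;
		}
		if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &req, sizeof(req)) < 0)
			perror("setsockopt SO_RCVBUF");

		/* The kernel reports what it actually granted (roughly twice
		 * the request, to account for bookkeeping overhead). */
		if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len) == 0)
			printf("receive buffer: %d bytes\n", got);
		return 0;
	}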
