
📄 sm_sideeffect.c

📁 Source code of the Stream Control Transmission Protocol (SCTP) implementation for the Linux kernel
💻 C
📖 Page 1 of 3
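As a point of orientation before the kernel listing below: this is a minimal, hypothetical userspace sketch of how an application drives the SCTP stack that sm_sideeffect.c belongs to. It is not part of the file; it assumes a Linux kernel built with SCTP support (CONFIG_IP_SCTP), and the address, port, and message are invented for illustration.

/*
 * Hypothetical userspace companion sketch (not part of sm_sideeffect.c).
 * Opens a one-to-one style SCTP socket, connects, sends one message and
 * closes.  Assumes SCTP support in the running kernel; the peer address
 * 127.0.0.1:9999 is made up for illustration.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in peer;
	const char msg[] = "hello over sctp";
	int fd;

	/* SOCK_STREAM + IPPROTO_SCTP gives a one-to-one (TCP-like) SCTP socket. */
	fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&peer, 0, sizeof(peer));
	peer.sin_family = AF_INET;
	peer.sin_port = htons(9999);	/* example port only */
	inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

	/* The kernel state machine, including the side-effect processing in
	 * this file, handles the INIT handshake underneath connect().
	 */
	if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
		perror("connect");
		close(fd);
		return 1;
	}

	if (send(fd, msg, sizeof(msg), 0) < 0)
		perror("send");

	close(fd);
	return 0;
}

Underneath connect() and send(), the kernel runs the SCTP state machine; the file below implements the side effects of those state transitions (SACK generation, ECN handling, retransmission and heartbeat timers, failure notification).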
/* SCTP kernel implementation
 * (C) Copyright IBM Corp. 2001, 2004
 * Copyright (c) 1999 Cisco, Inc.
 * Copyright (c) 1999-2001 Motorola, Inc.
 *
 * This file is part of the SCTP kernel implementation
 *
 * These functions work with the state functions in sctp_sm_statefuns.c
 * to implement the state operations.  These functions implement the
 * steps which require modifying existing data structures.
 *
 * This SCTP implementation is free software;
 * you can redistribute it and/or modify it under the terms of
 * the GNU General Public License as published by
 * the Free Software Foundation; either version 2, or (at your option)
 * any later version.
 *
 * This SCTP implementation is distributed in the hope that it
 * will be useful, but WITHOUT ANY WARRANTY; without even the implied
 *                 ************************
 * warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 * See the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with GNU CC; see the file COPYING.  If not, write to
 * the Free Software Foundation, 59 Temple Place - Suite 330,
 * Boston, MA 02111-1307, USA.
 *
 * Please send any bug reports or fixes you make to the
 * email address(es):
 *    lksctp developers <lksctp-developers@lists.sourceforge.net>
 *
 * Or submit a bug report through the following website:
 *    http://www.sf.net/projects/lksctp
 *
 * Written or modified by:
 *    La Monte H.P. Yarroll <piggy@acm.org>
 *    Karl Knutson          <karl@athena.chicago.il.us>
 *    Jon Grimm             <jgrimm@austin.ibm.com>
 *    Hui Huang             <hui.huang@nokia.com>
 *    Dajiang Zhang         <dajiang.zhang@nokia.com>
 *    Daisy Chang           <daisyc@us.ibm.com>
 *    Sridhar Samudrala     <sri@us.ibm.com>
 *    Ardelle Fan           <ardelle.fan@intel.com>
 *
 * Any bugs reported given to us we will try to fix... any fixes shared will
 * be incorporated into the next SCTP release.
 */

#include <linux/skbuff.h>
#include <linux/types.h>
#include <linux/socket.h>
#include <linux/ip.h>
#include <net/sock.h>
#include <net/sctp/sctp.h>
#include <net/sctp/sm.h>

static int sctp_cmd_interpreter(sctp_event_t event_type,
				sctp_subtype_t subtype,
				sctp_state_t state,
				struct sctp_endpoint *ep,
				struct sctp_association *asoc,
				void *event_arg,
				sctp_disposition_t status,
				sctp_cmd_seq_t *commands,
				gfp_t gfp);
static int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype,
			     sctp_state_t state,
			     struct sctp_endpoint *ep,
			     struct sctp_association *asoc,
			     void *event_arg,
			     sctp_disposition_t status,
			     sctp_cmd_seq_t *commands,
			     gfp_t gfp);

/********************************************************************
 * Helper functions
 ********************************************************************/

/* A helper function for delayed processing of INET ECN CE bit. */
static void sctp_do_ecn_ce_work(struct sctp_association *asoc,
				__u32 lowest_tsn)
{
	/* Save the TSN away for comparison when we receive CWR */

	asoc->last_ecne_tsn = lowest_tsn;
	asoc->need_ecne = 1;
}

/* Helper function for delayed processing of SCTP ECNE chunk.  */
/* RFC 2960 Appendix A
 *
 * RFC 2481 details a specific bit for a sender to send in
 * the header of its next outbound TCP segment to indicate to
 * its peer that it has reduced its congestion window.  This
 * is termed the CWR bit.  For SCTP the same indication is made
 * by including the CWR chunk.  This chunk contains one data
 * element, i.e. the TSN number that was sent in the ECNE chunk.
 * This element represents the lowest TSN number in the datagram
 * that was originally marked with the CE bit.
 */
static struct sctp_chunk *sctp_do_ecn_ecne_work(struct sctp_association *asoc,
						__u32 lowest_tsn,
						struct sctp_chunk *chunk)
{
	struct sctp_chunk *repl;

	/* Our previously transmitted packet ran into some congestion
	 * so we should take action by reducing cwnd and ssthresh
	 * and then ACK our peer that we've done so by
	 * sending a CWR.
	 */

	/* First, try to determine if we want to actually lower
	 * our cwnd variables.  Only lower them if the ECNE looks more
	 * recent than the last response.
	 */
	if (TSN_lt(asoc->last_cwr_tsn, lowest_tsn)) {
		struct sctp_transport *transport;

		/* Find which transport's congestion variables
		 * need to be adjusted.
		 */
		transport = sctp_assoc_lookup_tsn(asoc, lowest_tsn);

		/* Update the congestion variables. */
		if (transport)
			sctp_transport_lower_cwnd(transport,
						  SCTP_LOWER_CWND_ECNE);
		asoc->last_cwr_tsn = lowest_tsn;
	}

	/* Always try to quiet the other end.  In case of lost CWR,
	 * resend last_cwr_tsn.
	 */
	repl = sctp_make_cwr(asoc, asoc->last_cwr_tsn, chunk);

	/* If we run out of memory, it will look like a lost CWR.  We'll
	 * get back in sync eventually.
	 */
	return repl;
}

/* Helper function to do delayed processing of ECN CWR chunk.  */
static void sctp_do_ecn_cwr_work(struct sctp_association *asoc,
				 __u32 lowest_tsn)
{
	/* Turn off ECNE getting auto-prepended to every outgoing
	 * packet
	 */
	asoc->need_ecne = 0;
}

/* Generate SACK if necessary.  We call this at the end of a packet.  */
static int sctp_gen_sack(struct sctp_association *asoc, int force,
			 sctp_cmd_seq_t *commands)
{
	__u32 ctsn, max_tsn_seen;
	struct sctp_chunk *sack;
	struct sctp_transport *trans = asoc->peer.last_data_from;
	int error = 0;

	if (force ||
	    (!trans && (asoc->param_flags & SPP_SACKDELAY_DISABLE)) ||
	    (trans && (trans->param_flags & SPP_SACKDELAY_DISABLE)))
		asoc->peer.sack_needed = 1;

	ctsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map);
	max_tsn_seen = sctp_tsnmap_get_max_tsn_seen(&asoc->peer.tsn_map);

	/* From 12.2 Parameters necessary per association (i.e. the TCB):
	 *
	 * Ack State : This flag indicates if the next received packet
	 *           : is to be responded to with a SACK. ...
	 *           : When DATA chunks are out of order, SACK's
	 *           : are not delayed (see Section 6).
	 *
	 * [This is actually not mentioned in Section 6, but we
	 * implement it here anyway. --piggy]
	 */
	if (max_tsn_seen != ctsn)
		asoc->peer.sack_needed = 1;

	/* From 6.2  Acknowledgement on Reception of DATA Chunks:
	 *
	 * Section 4.2 of [RFC2581] SHOULD be followed. Specifically,
	 * an acknowledgement SHOULD be generated for at least every
	 * second packet (not every second DATA chunk) received, and
	 * SHOULD be generated within 200 ms of the arrival of any
	 * unacknowledged DATA chunk. ...
	 */
	if (!asoc->peer.sack_needed) {
		/* We will need a SACK for the next packet.  */
		asoc->peer.sack_needed = 1;

		/* Set the SACK delay timeout based on the
		 * SACK delay for the last transport
		 * data was received from, or the default
		 * for the association.
		 */
		if (trans)
			asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] =
				trans->sackdelay;
		else
			asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] =
				asoc->sackdelay;

		/* Restart the SACK timer. */
		sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
				SCTP_TO(SCTP_EVENT_TIMEOUT_SACK));
	} else {
		if (asoc->a_rwnd > asoc->rwnd)
			asoc->a_rwnd = asoc->rwnd;
		sack = sctp_make_sack(asoc);
		if (!sack)
			goto nomem;

		asoc->peer.sack_needed = 0;

		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(sack));

		/* Stop the SACK timer.  */
		sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
				SCTP_TO(SCTP_EVENT_TIMEOUT_SACK));
	}

	return error;
nomem:
	error = -ENOMEM;
	return error;
}

/* When the T3-RTX timer expires, it calls this function to create the
 * relevant state machine event.
 */
void sctp_generate_t3_rtx_event(unsigned long peer)
{
	int error;
	struct sctp_transport *transport = (struct sctp_transport *) peer;
	struct sctp_association *asoc = transport->asoc;

	/* Check whether a task is in the sock.  */

	sctp_bh_lock_sock(asoc->base.sk);
	if (sock_owned_by_user(asoc->base.sk)) {
		SCTP_DEBUG_PRINTK("%s:Sock is busy.\n", __FUNCTION__);

		/* Try again later.  */
		if (!mod_timer(&transport->T3_rtx_timer, jiffies + (HZ/20)))
			sctp_transport_hold(transport);
		goto out_unlock;
	}

	/* Is this transport really dead and just waiting around for
	 * the timer to let go of the reference?
	 */
	if (transport->dead)
		goto out_unlock;

	/* Run through the state machine.  */
	error = sctp_do_sm(SCTP_EVENT_T_TIMEOUT,
			   SCTP_ST_TIMEOUT(SCTP_EVENT_TIMEOUT_T3_RTX),
			   asoc->state,
			   asoc->ep, asoc,
			   transport, GFP_ATOMIC);

	if (error)
		asoc->base.sk->sk_err = -error;

out_unlock:
	sctp_bh_unlock_sock(asoc->base.sk);
	sctp_transport_put(transport);
}

/* This is a sa interface for producing timeout events.  It works
 * for timeouts which use the association as their parameter.
 */
static void sctp_generate_timeout_event(struct sctp_association *asoc,
					sctp_event_timeout_t timeout_type)
{
	int error = 0;

	sctp_bh_lock_sock(asoc->base.sk);
	if (sock_owned_by_user(asoc->base.sk)) {
		SCTP_DEBUG_PRINTK("%s:Sock is busy: timer %d\n",
				  __FUNCTION__,
				  timeout_type);

		/* Try again later.  */
		if (!mod_timer(&asoc->timers[timeout_type], jiffies + (HZ/20)))
			sctp_association_hold(asoc);
		goto out_unlock;
	}

	/* Is this association really dead and just waiting around for
	 * the timer to let go of the reference?
	 */
	if (asoc->base.dead)
		goto out_unlock;

	/* Run through the state machine.  */
	error = sctp_do_sm(SCTP_EVENT_T_TIMEOUT,
			   SCTP_ST_TIMEOUT(timeout_type),
			   asoc->state, asoc->ep, asoc,
			   (void *)timeout_type, GFP_ATOMIC);

	if (error)
		asoc->base.sk->sk_err = -error;

out_unlock:
	sctp_bh_unlock_sock(asoc->base.sk);
	sctp_association_put(asoc);
}

static void sctp_generate_t1_cookie_event(unsigned long data)
{
	struct sctp_association *asoc = (struct sctp_association *) data;
	sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T1_COOKIE);
}

static void sctp_generate_t1_init_event(unsigned long data)
{
	struct sctp_association *asoc = (struct sctp_association *) data;
	sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T1_INIT);
}

static void sctp_generate_t2_shutdown_event(unsigned long data)
{
	struct sctp_association *asoc = (struct sctp_association *) data;
	sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T2_SHUTDOWN);
}

static void sctp_generate_t4_rto_event(unsigned long data)
{
	struct sctp_association *asoc = (struct sctp_association *) data;
	sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T4_RTO);
}

static void sctp_generate_t5_shutdown_guard_event(unsigned long data)
{
	struct sctp_association *asoc = (struct sctp_association *)data;
	sctp_generate_timeout_event(asoc,
				    SCTP_EVENT_TIMEOUT_T5_SHUTDOWN_GUARD);

} /* sctp_generate_t5_shutdown_guard_event() */

static void sctp_generate_autoclose_event(unsigned long data)
{
	struct sctp_association *asoc = (struct sctp_association *) data;
	sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_AUTOCLOSE);
}

/* Generate a heart beat event.  If the sock is busy, reschedule.  Make
 * sure that the transport is still valid.
 */
void sctp_generate_heartbeat_event(unsigned long data)
{
	int error = 0;
	struct sctp_transport *transport = (struct sctp_transport *) data;
	struct sctp_association *asoc = transport->asoc;

	sctp_bh_lock_sock(asoc->base.sk);
	if (sock_owned_by_user(asoc->base.sk)) {
		SCTP_DEBUG_PRINTK("%s:Sock is busy.\n", __FUNCTION__);

		/* Try again later.  */
		if (!mod_timer(&transport->hb_timer, jiffies + (HZ/20)))
			sctp_transport_hold(transport);
		goto out_unlock;
	}

	/* Is this structure just waiting around for us to actually
	 * get destroyed?
	 */
	if (transport->dead)
		goto out_unlock;

	error = sctp_do_sm(SCTP_EVENT_T_TIMEOUT,
			   SCTP_ST_TIMEOUT(SCTP_EVENT_TIMEOUT_HEARTBEAT),
			   asoc->state, asoc->ep, asoc,
			   transport, GFP_ATOMIC);

	if (error)
		asoc->base.sk->sk_err = -error;

out_unlock:
	sctp_bh_unlock_sock(asoc->base.sk);
	sctp_transport_put(transport);
}

/* Inject a SACK Timeout event into the state machine.  */
static void sctp_generate_sack_event(unsigned long data)
{
	struct sctp_association *asoc = (struct sctp_association *) data;
	sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_SACK);
}

sctp_timer_event_t *sctp_timer_events[SCTP_NUM_TIMEOUT_TYPES] = {
	NULL,
	sctp_generate_t1_cookie_event,
	sctp_generate_t1_init_event,
	sctp_generate_t2_shutdown_event,
	NULL,
	sctp_generate_t4_rto_event,
	sctp_generate_t5_shutdown_guard_event,
	NULL,
	sctp_generate_sack_event,
	sctp_generate_autoclose_event,
};


/* RFC 2960 8.2 Path Failure Detection
 *
 * When its peer endpoint is multi-homed, an endpoint should keep an
 * error counter for each of the destination transport addresses of the
 * peer endpoint.
 *
 * Each time the T3-rtx timer expires on any address, or when a
 * HEARTBEAT sent to an idle address is not acknowledged within a RTO,
 * the error counter of that destination address will be incremented.
 * When the value in the error counter exceeds the protocol parameter
 * 'Path.Max.Retrans' of that destination address, the endpoint should
 * mark the destination transport address as inactive, and a
 * notification SHOULD be sent to the upper layer.
 *
 */
static void sctp_do_8_2_transport_strike(struct sctp_association *asoc,
					 struct sctp_transport *transport)
{
	/* The check for association's overall error counter exceeding the
	 * threshold is done in the state function.
	 */
	/* When probing UNCONFIRMED addresses, the association overall
	 * error count is NOT incremented
	 */
	if (transport->state != SCTP_UNCONFIRMED)
		asoc->overall_error_count++;

	if (transport->state != SCTP_INACTIVE &&
	    (transport->error_count++ >= transport->pathmaxrxt)) {
		SCTP_DEBUG_PRINTK_IPADDR("transport_strike:association %p",
					 " transport IP: port:%d failed.\n",
					 asoc,
					 (&transport->ipaddr),
					 ntohs(transport->ipaddr.v4.sin_port));
		sctp_assoc_control_transport(asoc, transport,
					     SCTP_TRANSPORT_DOWN,
					     SCTP_FAILED_THRESHOLD);
	}

	/* E2) For the destination address for which the timer
	 * expires, set RTO <- RTO * 2 ("back off the timer").  The
	 * maximum value discussed in rule C7 above (RTO.max) may be
	 * used to provide an upper bound to this doubling operation.
	 */
	transport->last_rto = transport->rto;
	transport->rto = min((transport->rto * 2), transport->asoc->rto_max);
}

/* Worker routine to handle INIT command failure.  */
static void sctp_cmd_init_failed(sctp_cmd_seq_t *commands,
				 struct sctp_association *asoc,
				 unsigned error)
{
	struct sctp_ulpevent *event;

	event = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_CANT_STR_ASSOC,
						(__u16)error, 0, 0, NULL,
						GFP_ATOMIC);

	if (event)
		sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP,
				SCTP_ULPEVENT(event));

	sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
			SCTP_STATE(SCTP_STATE_CLOSED));

	/* SEND_FAILED sent later when cleaning up the association. */
	asoc->outqueue.error = error;
	sctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());
}

/* Worker routine to handle SCTP_CMD_ASSOC_FAILED.  */
static void sctp_cmd_assoc_failed(sctp_cmd_seq_t *commands,
				  struct sctp_association *asoc,
				  sctp_event_t event_type,
				  sctp_subtype_t subtype,
				  struct sctp_chunk *chunk,
				  unsigned error)
{
	struct sctp_ulpevent *event;

	/* Cancel any partial delivery in progress. */
	sctp_ulpq_abort_pd(&asoc->ulpq, GFP_ATOMIC);

	if (event_type == SCTP_EVENT_T_CHUNK && subtype.chunk == SCTP_CID_ABORT)
		event = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_COMM_LOST,
						(__u16)error, 0, 0, chunk,
						GFP_ATOMIC);
	else
		event = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_COMM_LOST,
						(__u16)error, 0, 0, NULL,
						GFP_ATOMIC);
	if (event)
		sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP,
				SCTP_ULPEVENT(event));

	sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
			SCTP_STATE(SCTP_STATE_CLOSED));

	/* SEND_FAILED sent later when cleaning up the association. */
	asoc->outqueue.error = error;
	sctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());
}

/* Process an init chunk (may be real INIT/INIT-ACK or an embedded INIT
 * inside the cookie.  In reality, this is only used for INIT-ACK processing
 * since all other cases use "temporary" associations and can do all
 * their work in statefuns directly.
 */
static int sctp_cmd_process_init(sctp_cmd_seq_t *commands,
