
📄 dev.c

📁 Linux kernel source code
💻 C
📖 Page 1 of 5
	skb->destructor = DEV_GSO_CB(skb)->destructor;

out_kfree_skb:
	kfree_skb(skb);
	return 0;
}

/**
 *	dev_queue_xmit - transmit a buffer
 *	@skb: buffer to transmit
 *
 *	Queue a buffer for transmission to a network device. The caller must
 *	have set the device and priority and built the buffer before calling
 *	this function. The function can be called from an interrupt.
 *
 *	A negative errno code is returned on a failure. A success does not
 *	guarantee the frame will be transmitted as it may be dropped due
 *	to congestion or traffic shaping.
 *
 * -----------------------------------------------------------------------------------
 *      I notice this method can also return errors from the queue disciplines,
 *      including NET_XMIT_DROP, which is a positive value.  So, errors can also
 *      be positive.
 *
 *      Regardless of the return value, the skb is consumed, so it is currently
 *      difficult to retry a send to this method.  (You can bump the ref count
 *      before sending to hold a reference for retry if you are careful.)
 *
 *      When calling this method, interrupts MUST be enabled.  This is because
 *      the BH enable code must have IRQs enabled so that it will not deadlock.
 *          --BLG
 */

int dev_queue_xmit(struct sk_buff *skb)
{
	struct net_device *dev = skb->dev;
	struct Qdisc *q;
	int rc = -ENOMEM;

	/* GSO will handle the following emulations directly. */
	if (netif_needs_gso(dev, skb))
		goto gso;

	if (skb_shinfo(skb)->frag_list &&
	    !(dev->features & NETIF_F_FRAGLIST) &&
	    __skb_linearize(skb))
		goto out_kfree_skb;

	/* Fragmented skb is linearized if device does not support SG,
	 * or if at least one of fragments is in highmem and device
	 * does not support DMA from it.
	 */
	if (skb_shinfo(skb)->nr_frags &&
	    (!(dev->features & NETIF_F_SG) || illegal_highdma(dev, skb)) &&
	    __skb_linearize(skb))
		goto out_kfree_skb;

	/* If packet is not checksummed and device does not support
	 * checksumming for this protocol, complete checksumming here.
	 */
	if (skb->ip_summed == CHECKSUM_PARTIAL) {
		skb_set_transport_header(skb, skb->csum_start -
					      skb_headroom(skb));
		if (!(dev->features & NETIF_F_GEN_CSUM) &&
		    !((dev->features & NETIF_F_IP_CSUM) &&
		      skb->protocol == htons(ETH_P_IP)) &&
		    !((dev->features & NETIF_F_IPV6_CSUM) &&
		      skb->protocol == htons(ETH_P_IPV6)))
			if (skb_checksum_help(skb))
				goto out_kfree_skb;
	}

gso:
	spin_lock_prefetch(&dev->queue_lock);

	/* Disable soft irqs for various locks below. Also
	 * stops preemption for RCU.
	 */
	rcu_read_lock_bh();

	/* Updates of qdisc are serialized by queue_lock.
	 * The struct Qdisc which is pointed to by qdisc is now a
	 * rcu structure - it may be accessed without acquiring
	 * a lock (but the structure may be stale.) The freeing of the
	 * qdisc will be deferred until it's known that there are no
	 * more references to it.
	 *
	 * If the qdisc has an enqueue function, we still need to
	 * hold the queue_lock before calling it, since queue_lock
	 * also serializes access to the device queue.
	 */
	q = rcu_dereference(dev->qdisc);
#ifdef CONFIG_NET_CLS_ACT
	skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_EGRESS);
#endif
	if (q->enqueue) {
		/* Grab device queue */
		spin_lock(&dev->queue_lock);
		q = dev->qdisc;
		if (q->enqueue) {
			/* reset queue_mapping to zero */
			skb_set_queue_mapping(skb, 0);
			rc = q->enqueue(skb, q);
			qdisc_run(dev);
			spin_unlock(&dev->queue_lock);

			rc = rc == NET_XMIT_BYPASS ? NET_XMIT_SUCCESS : rc;
			goto out;
		}
		spin_unlock(&dev->queue_lock);
	}

	/* The device has no queue. Common case for software devices:
	   loopback, all the sorts of tunnels...

	   Really, it is unlikely that netif_tx_lock protection is necessary
	   here.  (f.e. loopback and IP tunnels are clean ignoring statistics
	   counters.)
	   However, it is possible, that they rely on protection
	   made by us here.

	   Check this and shot the lock. It is not prone from deadlocks.
	   Either shot noqueue qdisc, it is even simpler 8)
	 */
	if (dev->flags & IFF_UP) {
		int cpu = smp_processor_id(); /* ok because BHs are off */

		if (dev->xmit_lock_owner != cpu) {

			HARD_TX_LOCK(dev, cpu);

			if (!netif_queue_stopped(dev) &&
			    !netif_subqueue_stopped(dev, skb)) {
				rc = 0;
				if (!dev_hard_start_xmit(skb, dev)) {
					HARD_TX_UNLOCK(dev);
					goto out;
				}
			}
			HARD_TX_UNLOCK(dev);
			if (net_ratelimit())
				printk(KERN_CRIT "Virtual device %s asks to "
				       "queue packet!\n", dev->name);
		} else {
			/* Recursion is detected! It is possible,
			 * unfortunately */
			if (net_ratelimit())
				printk(KERN_CRIT "Dead loop on virtual device "
				       "%s, fix it urgently!\n", dev->name);
		}
	}

	rc = -ENETDOWN;
	rcu_read_unlock_bh();

out_kfree_skb:
	kfree_skb(skb);
	return rc;
out:
	rcu_read_unlock_bh();
	return rc;
}
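A minimal usage sketch, not part of dev.c: roughly how kernel code that already holds a struct net_device pointer and has built a complete link-layer frame might hand it to dev_queue_xmit(). The function name example_xmit, the frame contents and the ETH_P_IP protocol value are assumptions made for illustration only.

static int example_xmit(struct net_device *dev, const void *frame,
			unsigned int len)
{
	struct sk_buff *skb;

	/* Leave the headroom the driver may want in front of the frame. */
	skb = alloc_skb(LL_RESERVED_SPACE(dev) + len, GFP_ATOMIC);
	if (!skb)
		return -ENOMEM;
	skb_reserve(skb, LL_RESERVED_SPACE(dev));

	/* The frame is assumed to already contain the link-layer header. */
	memcpy(skb_put(skb, len), frame, len);

	skb->dev = dev;				/* caller must set the device ...    */
	skb->priority = 0;			/* ... and the priority, per the doc */
	skb->protocol = htons(ETH_P_IP);	/* assumed payload type              */

	/*
	 * The skb is consumed no matter what: the return value may be a
	 * negative errno or a positive NET_XMIT_* code, but the skb must
	 * not be reused or retried through this pointer.
	 */
	return dev_queue_xmit(skb);
}

As the comment block above warns, a caller that wants to retry on failure would have to take its own reference (for example with skb_get()) before the call.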
/*=======================================================================
			Receiver routines
  =======================================================================*/

int netdev_max_backlog __read_mostly = 1000;
int netdev_budget __read_mostly = 300;
int weight_p __read_mostly = 64;            /* old backlog weight */

DEFINE_PER_CPU(struct netif_rx_stats, netdev_rx_stat) = { 0, };


/**
 *	netif_rx	-	post buffer to the network code
 *	@skb: buffer to post
 *
 *	This function receives a packet from a device driver and queues it for
 *	the upper (protocol) levels to process.  It always succeeds. The buffer
 *	may be dropped during processing for congestion control or by the
 *	protocol layers.
 *
 *	return values:
 *	NET_RX_SUCCESS	(no congestion)
 *	NET_RX_DROP     (packet was dropped)
 *
 */

int netif_rx(struct sk_buff *skb)
{
	struct softnet_data *queue;
	unsigned long flags;

	/* if netpoll wants it, pretend we never saw it */
	if (netpoll_rx(skb))
		return NET_RX_DROP;

	if (!skb->tstamp.tv64)
		net_timestamp(skb);

	/*
	 * The code is rearranged so that the path is the most
	 * short when CPU is congested, but is still operating.
	 */
	local_irq_save(flags);
	queue = &__get_cpu_var(softnet_data);

	__get_cpu_var(netdev_rx_stat).total++;
	if (queue->input_pkt_queue.qlen <= netdev_max_backlog) {
		if (queue->input_pkt_queue.qlen) {
enqueue:
			dev_hold(skb->dev);
			__skb_queue_tail(&queue->input_pkt_queue, skb);
			local_irq_restore(flags);
			return NET_RX_SUCCESS;
		}

		napi_schedule(&queue->backlog);
		goto enqueue;
	}

	__get_cpu_var(netdev_rx_stat).dropped++;
	local_irq_restore(flags);

	kfree_skb(skb);
	return NET_RX_DROP;
}
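A minimal usage sketch, not part of dev.c: the classic pattern by which a non-NAPI Ethernet driver's receive interrupt handler feeds a frame to netif_rx(). The function name example_rx_interrupt and the buffer/length arguments are assumptions for illustration.

static void example_rx_interrupt(struct net_device *dev, const void *buf,
				 unsigned int pkt_len)
{
	struct sk_buff *skb;

	/* +2 so that skb_reserve() below aligns the IP header. */
	skb = dev_alloc_skb(pkt_len + 2);
	if (!skb)
		return;		/* out of memory: the driver should count this as a drop */

	skb_reserve(skb, 2);
	memcpy(skb_put(skb, pkt_len), buf, pkt_len);	/* copy frame out of the RX ring */

	/* Sets skb->dev and skb->pkt_type and pulls the Ethernet header. */
	skb->protocol = eth_type_trans(skb, dev);

	/*
	 * netif_rx() queues the skb on this CPU's backlog and raises the
	 * RX softirq; NET_RX_DROP means the backlog was full and the skb
	 * has already been freed, so there is nothing more to do here.
	 */
	netif_rx(skb);
}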
int netif_rx_ni(struct sk_buff *skb)
{
	int err;

	preempt_disable();
	err = netif_rx(skb);
	if (local_softirq_pending())
		do_softirq();
	preempt_enable();

	return err;
}

EXPORT_SYMBOL(netif_rx_ni);

static inline struct net_device *skb_bond(struct sk_buff *skb)
{
	struct net_device *dev = skb->dev;

	if (dev->master) {
		if (skb_bond_should_drop(skb)) {
			kfree_skb(skb);
			return NULL;
		}
		skb->dev = dev->master;
	}

	return dev;
}


static void net_tx_action(struct softirq_action *h)
{
	struct softnet_data *sd = &__get_cpu_var(softnet_data);

	if (sd->completion_queue) {
		struct sk_buff *clist;

		local_irq_disable();
		clist = sd->completion_queue;
		sd->completion_queue = NULL;
		local_irq_enable();

		while (clist) {
			struct sk_buff *skb = clist;
			clist = clist->next;

			BUG_TRAP(!atomic_read(&skb->users));
			__kfree_skb(skb);
		}
	}

	if (sd->output_queue) {
		struct net_device *head;

		local_irq_disable();
		head = sd->output_queue;
		sd->output_queue = NULL;
		local_irq_enable();

		while (head) {
			struct net_device *dev = head;
			head = head->next_sched;

			smp_mb__before_clear_bit();
			clear_bit(__LINK_STATE_SCHED, &dev->state);

			if (spin_trylock(&dev->queue_lock)) {
				qdisc_run(dev);
				spin_unlock(&dev->queue_lock);
			} else {
				netif_schedule(dev);
			}
		}
	}
}

static inline int deliver_skb(struct sk_buff *skb,
			      struct packet_type *pt_prev,
			      struct net_device *orig_dev)
{
	atomic_inc(&skb->users);
	return pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
}

#if defined(CONFIG_BRIDGE) || defined (CONFIG_BRIDGE_MODULE)
/* These hooks defined here for ATM */
struct net_bridge;
struct net_bridge_fdb_entry *(*br_fdb_get_hook)(struct net_bridge *br,
						unsigned char *addr);
void (*br_fdb_put_hook)(struct net_bridge_fdb_entry *ent) __read_mostly;

/*
 * If bridge module is loaded call bridging hook.
 *  returns NULL if packet was consumed.
 */
struct sk_buff *(*br_handle_frame_hook)(struct net_bridge_port *p,
					struct sk_buff *skb) __read_mostly;
static inline struct sk_buff *handle_bridge(struct sk_buff *skb,
					    struct packet_type **pt_prev, int *ret,
					    struct net_device *orig_dev)
{
	struct net_bridge_port *port;

	if (skb->pkt_type == PACKET_LOOPBACK ||
	    (port = rcu_dereference(skb->dev->br_port)) == NULL)
		return skb;

	if (*pt_prev) {
		*ret = deliver_skb(skb, *pt_prev, orig_dev);
		*pt_prev = NULL;
	}

	return br_handle_frame_hook(port, skb);
}
#else
#define handle_bridge(skb, pt_prev, ret, orig_dev)	(skb)
#endif

#if defined(CONFIG_MACVLAN) || defined(CONFIG_MACVLAN_MODULE)
struct sk_buff *(*macvlan_handle_frame_hook)(struct sk_buff *skb) __read_mostly;
EXPORT_SYMBOL_GPL(macvlan_handle_frame_hook);

static inline struct sk_buff *handle_macvlan(struct sk_buff *skb,
					     struct packet_type **pt_prev,
					     int *ret,
					     struct net_device *orig_dev)
{
	if (skb->dev->macvlan_port == NULL)
		return skb;

	if (*pt_prev) {
		*ret = deliver_skb(skb, *pt_prev, orig_dev);
		*pt_prev = NULL;
	}
	return macvlan_handle_frame_hook(skb);
}
#else
#define handle_macvlan(skb, pt_prev, ret, orig_dev)	(skb)
#endif

#ifdef CONFIG_NET_CLS_ACT
/* TODO: Maybe we should just force sch_ingress to be compiled in
 * when CONFIG_NET_CLS_ACT is? otherwise some useless instructions
 * a compare and 2 stores extra right now if we dont have it on
 * but have CONFIG_NET_CLS_ACT
 * NOTE: This doesnt stop any functionality; if you dont have
 * the ingress scheduler, you just cant add policies on ingress.
 *
 */
static int ing_filter(struct sk_buff *skb)
{
	struct Qdisc *q;
	struct net_device *dev = skb->dev;
	int result = TC_ACT_OK;
	u32 ttl = G_TC_RTTL(skb->tc_verd);

	if (MAX_RED_LOOP < ttl++) {
		printk(KERN_WARNING
		       "Redir loop detected Dropping packet (%d->%d)\n",
		       skb->iif, dev->ifindex);
		return TC_ACT_SHOT;
	}

	skb->tc_verd = SET_TC_RTTL(skb->tc_verd, ttl);
	skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);

	spin_lock(&dev->ingress_lock);
	if ((q = dev->qdisc_ingress) != NULL)
		result = q->enqueue(skb, q);
	spin_unlock(&dev->ingress_lock);

	return result;
}

static inline struct sk_buff *handle_ing(struct sk_buff *skb,
					 struct packet_type **pt_prev,
					 int *ret, struct net_device *orig_dev)
{
	if (!skb->dev->qdisc_ingress)
		goto out;

	if (*pt_prev) {
		*ret = deliver_skb(skb, *pt_prev, orig_dev);
		*pt_prev = NULL;
	} else {
		/* Huh? Why does turning on AF_PACKET affect this? */
		skb->tc_verd = SET_TC_OK2MUNGE(skb->tc_verd);
	}

	switch (ing_filter(skb)) {
	case TC_ACT_SHOT:
	case TC_ACT_STOLEN:
		kfree_skb(skb);
		return NULL;
	}

out:
	skb->tc_verd = 0;
	return skb;
}
#endif

/**
 *	netif_receive_skb - process receive buffer from network
 *	@skb: buffer to process
 *
 *	netif_receive_skb() is the main receive data processing function.
 *	It always succeeds. The buffer may be dropped during processing
 *	for congestion control or by the protocol layers.
 *
 *	This function may only be called from softirq context and interrupts
 *	should be enabled.
 *
 *	Return values (usually ignored):
 *	NET_RX_SUCCESS: no congestion
 *	NET_RX_DROP: packet was dropped
 */
int netif_receive_skb(struct sk_buff *skb)
{
	struct packet_type *ptype, *pt_prev;
	struct net_device *orig_dev;
	int ret = NET_RX_DROP;
	__be16 type;

	/* if we've gotten here through NAPI, check netpoll */
	if (netpoll_receive_skb(skb))
		return NET_RX_DROP;

	if (!skb->tstamp.tv64)
		net_timestamp(skb);

	if (!skb->iif)
		skb->iif = skb->dev->ifindex;

	orig_dev = skb_bond(skb);

	if (!orig_dev)
		return NET_RX_DROP;

	__get_cpu_var(netdev_rx_stat).total++;

	skb_reset_network_header(skb);
	skb_reset_transport_header(skb);
	skb->mac_len = skb->network_header - skb->mac_header;

	pt_prev = NULL;

	rcu_read_lock();

#ifdef CONFIG_NET_CLS_ACT
	if (skb->tc_verd & TC_NCLS) {
		skb->tc_verd = CLR_TC_NCLS(skb->tc_verd);
		goto ncls;
	}
#endif

	list_for_each_entry_rcu(ptype, &ptype_all, list) {
		if (!ptype->dev || ptype->dev == skb->dev) {
			if (pt_prev)
				ret = deliver_skb(skb, pt_prev, orig_dev);
			pt_prev = ptype;
		}
	}

#ifdef CONFIG_NET_CLS_ACT
	skb = handle_ing(skb, &pt_prev, &ret, orig_dev);
	if (!skb)
		goto out;
ncls:
#endif

	skb = handle_bridge(skb, &pt_prev, &ret, orig_dev);
	if (!skb)
		goto out;
	skb = handle_macvlan(skb, &pt_prev, &ret, orig_dev);
	if (!skb)
		goto out;

	type = skb->protocol;
	list_for_each_entry_rcu(ptype, &ptype_base[ntohs(type)&15], list) {
		if (ptype->type == type &&
		    (!ptype->dev || ptype->dev == skb->dev)) {
			if (pt_prev)
				ret = deliver_skb(skb, pt_prev, orig_dev);
			pt_prev = ptype;
		}
	}
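The listing breaks off here, partway through netif_receive_skb() (this is page 1 of 5). As a usage sketch, not part of dev.c: the ptype_all/ptype_base loops above deliver to handlers registered with dev_add_pack(). The names example_pkt_rcv and example_pkt_type below are hypothetical, and a real handler would typically call skb_share_check() and parse the packet rather than just log and free it.

static int example_pkt_rcv(struct sk_buff *skb, struct net_device *dev,
			   struct packet_type *pt, struct net_device *orig_dev)
{
	/* The handler must free or forward the skb reference it is handed. */
	printk(KERN_DEBUG "example: %u byte packet on %s\n", skb->len, dev->name);
	kfree_skb(skb);
	return NET_RX_SUCCESS;
}

static struct packet_type example_pkt_type = {
	.type = __constant_htons(ETH_P_IP),	/* matched against skb->protocol in ptype_base */
	.func = example_pkt_rcv,
	.dev  = NULL,				/* NULL = any device, see the !ptype->dev tests above */
};

static int __init example_init(void)
{
	dev_add_pack(&example_pkt_type);
	return 0;
}

static void __exit example_exit(void)
{
	dev_remove_pack(&example_pkt_type);
}

module_init(example_init);
module_exit(example_exit);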
