
dscc4.c

Linux 2.4.20 kernel source; the kernel can be patched with rtlinux 3.2 to form a real-time Linux system and then recompiled.

Language: C
Page 1 of 4
/*
 * drivers/net/wan/dscc4/dscc4_main.c: a DSCC4 HDLC driver for Linux
 *
 * This software may be used and distributed according to the terms of the
 * GNU General Public License.
 *
 * The author may be reached as romieu@cogenit.fr.
 * Specific bug reports/Asian food will be welcome.
 *
 * Special thanks to the nice people at CS-Telecom for the hardware and the
 * access to the test/measure tools.
 *
 *
 *                             Theory of Operation
 *
 * I. Board Compatibility
 *
 * This device driver is designed for the Siemens PEB20534 4 ports serial
 * controller as found on Etinc PCISYNC cards. The documentation for the
 * chipset is available at http://www.infineon.com:
 * - Data Sheet "DSCC4, DMA Supported Serial Communication Controller with
 *   4 Channels, PEB 20534 Version 2.1, PEF 20534 Version 2.1";
 * - Application Hint "Management of DSCC4 on-chip FIFO resources".
 * Jens David has built an adapter based on the same chipset. Take a look
 * at http://www.afthd.tu-darmstadt.de/~dg1kjd/pciscc4 for a specific
 * driver.
 * Sample code (2 revisions) is available at Infineon.
 *
 * II. Board-specific settings
 *
 * Pcisync can transmit some clock signal to the outside world on the
 * *first two* ports provided you put a quartz and a line driver on it and
 * remove the jumpers. The operation is described on the Etinc web site. If
 * you go DCE on these ports, don't forget to use an adequate cable.
 *
 * Sharing of the PCI interrupt line for this board is possible.
 *
 * III. Driver operation
 *
 * The rx/tx operations are based on a linked list of descriptors. I haven't
 * tried the start/stop descriptor method as this one looks like the cheapest
 * in terms of PCI manipulation.
 *
 * Tx direction
 * Once the data section of the current descriptor is processed, the next
 * linked descriptor is loaded if the HOLD bit isn't set in the current
 * descriptor. If HOLD is set, the transmission is stopped until the host
 * unsets it and signals the change via TxPOLL.
 * When the tx ring is full, the xmit routine issues a call to netdev_stop.
 * The device is supposed to be enabled again during an ALLS irq (we could
 * use HI but as it's easy to lose events, it's fscked).
 *
 * Rx direction
 * The received frames aren't supposed to span over multiple receiving areas.
 * I may implement it some day but it isn't the highest ranked item.
 *
 * IV. Notes
 * The chipset is buggy. Typically, under some specific load patterns (I
 * wouldn't call them "high"), the irq queues and the descriptors look like
 * some event has been lost. Even assuming some fancy PCI feature, it won't
 * explain the reproducible missing "C" bit in the descriptors. Faking an
 * irq in the periodic timer isn't really elegant but at least it seems
 * reliable.
 * The current error (XDU, RFO) recovery code is untested.
 * So far, RDO takes its RX channel down and the right sequence to enable it
 * again is still a mystery. If RDO happens, plan a reboot. More details
 * in the code (NB: as this happens, TX still works).
 * Don't mess with the cables during operation, especially on DTE ports. I
 * don't suggest it for DCE either but at least one can get some messages
 * instead of a complete instant freeze.
 * Tests are done on Rev. 20 of the silicon. The RDO handling changes with
 * the documentation/chipset releases. An on-line errata would be welcome.
 *
 * TODO:
 * - some trivial errors lurk,
 * - the stats are fscked,
 * - use polling at high irq/s,
 * - performance analysis,
 * - endianness.
 *
 */
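The "Tx direction" scheme above can be sketched in a few lines of plain C. This is only an illustration of the HOLD handoff, assuming a hypothetical sketch_queue_frame() helper, an 8-entry ring and simplified state bits; it is not code from the driver:

#include <stdint.h>

#define SKETCH_RING_SIZE 8
#define SKETCH_HOLD      0x40000000u	/* same bit position the driver names Hold */

struct sketch_txfd {
	uint32_t state;		/* Hold + frame length, in the spirit of TO_STATE() below */
	uint32_t next;		/* bus address of the next descriptor */
	uint32_t data;		/* bus address of the payload */
	uint32_t complete;
};

static struct sketch_txfd sketch_ring[SKETCH_RING_SIZE];
static unsigned int sketch_tx_current;

/* Hypothetical helper: queue one frame. The freshly written descriptor
 * becomes the new tail and keeps HOLD; only then is HOLD cleared on the
 * previous tail, so the controller never runs past valid descriptors.
 * In the real driver the host would then signal the change via TxPOLL
 * (TxPollCmd).                                                          */
static void sketch_queue_frame(uint32_t data_bus_addr, uint32_t len)
{
	unsigned int cur = sketch_tx_current++ % SKETCH_RING_SIZE;
	unsigned int prev = (cur + SKETCH_RING_SIZE - 1) % SKETCH_RING_SIZE;

	sketch_ring[cur].data = data_bus_addr;
	sketch_ring[cur].state = SKETCH_HOLD | ((len & 0x1ffcu) << 16);
	sketch_ring[prev].state &= ~SKETCH_HOLD;
}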
#include <linux/version.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/pci.h>
#include <linux/kernel.h>
#include <linux/mm.h>

#include <asm/system.h>
#include <asm/cache.h>
#include <asm/byteorder.h>
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/irq.h>

#include <linux/init.h>
#include <linux/string.h>

#include <linux/if_arp.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/delay.h>
#include <net/syncppp.h>
#include <linux/hdlc.h>

/* Version */
static const char version[] =
	"$Id: dscc4.c,v 1.130 2001/02/25 15:27:34 romieu Exp $\n";
static int debug;

/* Module parameters */
MODULE_AUTHOR("Maintainer: Francois Romieu <romieu@cogenit.fr>");
MODULE_DESCRIPTION("Siemens PEB20534 PCI Controller");
MODULE_LICENSE("GPL");
MODULE_PARM(debug,"i");

/* Structures */
struct TxFD {
	u32 state;
	u32 next;
	u32 data;
	u32 complete;
	u32 jiffies; /* more hack to come :o) */
};

struct RxFD {
	u32 state1;
	u32 next;
	u32 data;
	u32 state2;
	u32 end;
};

#define DEBUG
#define DEBUG_PARANOID
#define TX_RING_SIZE    32
#define RX_RING_SIZE    32
#define IRQ_RING_SIZE   64 /* Keep it a multiple of 32 */
#define TX_TIMEOUT      (HZ/10)
#define BRR_DIVIDER_MAX 64*0x00008000
#define dev_per_card	4

#define SOURCE_ID(flags) ((flags >> 28 ) & 0x03)
#define TO_SIZE(state) ((state >> 16) & 0x1fff)
#define TO_STATE(len) cpu_to_le32((len & TxSizeMax) << 16)
#define RX_MAX(len) ((((len) >> 5) + 1) << 5)
#define SCC_REG_START(id) SCC_START+(id)*SCC_OFFSET

#undef DEBUG

struct dscc4_pci_priv {
	u32 *iqcfg;
	int cfg_cur;
	spinlock_t lock;
	struct pci_dev *pdev;
	struct net_device *root;
	dma_addr_t iqcfg_dma;
	u32 xtal_hz;
};

struct dscc4_dev_priv {
	struct sk_buff *rx_skbuff[RX_RING_SIZE];
	struct sk_buff *tx_skbuff[TX_RING_SIZE];

	struct RxFD *rx_fd;
	struct TxFD *tx_fd;
	u32 *iqrx;
	u32 *iqtx;

	u32 rx_current;
	u32 tx_current;
	u32 iqrx_current;
	u32 iqtx_current;

	u32 tx_dirty;
	int bad_tx_frame;
	int bad_rx_frame;
	int rx_needs_refill;

	dma_addr_t tx_fd_dma;
	dma_addr_t rx_fd_dma;
	dma_addr_t iqtx_dma;
	dma_addr_t iqrx_dma;

	struct net_device_stats stats;
	struct timer_list timer;

	struct dscc4_pci_priv *pci_priv;
	spinlock_t lock;

	int dev_id;
	u32 flags;
	u32 timer_help;
	u32 hi_expected;

	struct hdlc_device_struct hdlc;
	int usecount;
};

/* GLOBAL registers definitions */
#define GCMDR   0x00
#define GSTAR   0x04
#define GMODE   0x08
#define IQLENR0 0x0C
#define IQLENR1 0x10
#define IQRX0   0x14
#define IQTX0   0x24
#define IQCFG   0x3c
#define FIFOCR1 0x44
#define FIFOCR2 0x48
#define FIFOCR3 0x4c
#define FIFOCR4 0x34
#define CH0CFG  0x50
#define CH0BRDA 0x54
#define CH0BTDA 0x58

/* SCC registers definitions */
#define SCC_START	0x0100
#define SCC_OFFSET      0x80
#define CMDR    0x00
#define STAR    0x04
#define CCR0    0x08
#define CCR1    0x0c
#define CCR2    0x10
#define BRR     0x2C
#define RLCR    0x40
#define IMR     0x54
#define ISR     0x58
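As a usage illustration of the register map above: each SCC channel's register block sits 0x80 apart, starting at offset 0x0100, so a per-channel access composes SCC_REG_START(id) with one of the register offsets. The scc_write() wrapper below is a hypothetical helper (not part of this file), relying only on the includes and defines already present:

/* Hypothetical wrapper, shown only to illustrate the address layout:
 * the register block of SCC channel id starts at 0x0100 + id*0x80 in
 * the memory-mapped region, so scc_write(ioaddr, 2, CCR0, val) lands
 * at offset 0x0100 + 2*0x80 + 0x08.                                   */
static inline void scc_write(unsigned long ioaddr, int id, int reg, u32 value)
{
	writel(value, ioaddr + SCC_REG_START(id) + reg);
}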
/* Bit masks */
#define IntRxScc0	0x10000000
#define IntTxScc0	0x01000000

#define TxPollCmd	0x00000400
#define RxActivate	0x08000000
#define MTFi		0x04000000
#define Rdr		0x00400000
#define Rdt		0x00200000
#define Idr		0x00100000
#define Idt		0x00080000
#define TxSccRes	0x01000000
#define RxSccRes	0x00010000
#define TxSizeMax	0x1ffc
#define RxSizeMax	0x1ffc

#define Ccr0ClockMask	0x0000003f
#define Ccr1LoopMask	0x00000200
#define BrrExpMask	0x00000f00
#define BrrMultMask	0x0000003f
#define EncodingMask	0x00700000
#define Hold		0x40000000
#define SccBusy		0x10000000
#define FrameOk		(FrameVfr | FrameCrc)
#define FrameVfr	0x80
#define FrameRdo	0x40
#define FrameCrc	0x20
#define FrameAborted	0x00000200
#define FrameEnd	0x80000000
#define DataComplete	0x40000000
#define LengthCheck	0x00008000
#define SccEvt		0x02000000
#define NoAck		0x00000200
#define Action		0x00000001
#define HiDesc		0x20000000

/* SCC events */
#define RxEvt		0xf0000000
#define TxEvt		0x0f000000
#define Alls		0x00040000
#define Xdu		0x00010000
#define Xmr		0x00002000
#define Xpr		0x00001000
#define Rdo		0x00000080
#define Rfs		0x00000040
#define Rfo		0x00000002
#define Flex		0x00000001

/* DMA core events */
#define Cfg		0x00200000
#define Hi		0x00040000
#define Fi		0x00020000
#define Err		0x00010000
#define Arf		0x00000002
#define ArAck		0x00000001

/* Misc */
#define NeedIDR		0x00000001
#define NeedIDT		0x00000002
#define RdoSet		0x00000004

/* Functions prototypes */
static __inline__ void dscc4_rx_irq(struct dscc4_pci_priv *, struct net_device *);
static __inline__ void dscc4_tx_irq(struct dscc4_pci_priv *, struct net_device *);
static int dscc4_found1(struct pci_dev *, unsigned long ioaddr);
static int dscc4_init_one(struct pci_dev *, const struct pci_device_id *ent);
static int dscc4_open(struct net_device *);
static int dscc4_start_xmit(struct sk_buff *, struct net_device *);
static int dscc4_close(struct net_device *);
static int dscc4_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static int dscc4_change_mtu(struct net_device *dev, int mtu);
static int dscc4_init_ring(struct net_device *);
static void dscc4_release_ring(struct dscc4_dev_priv *);
static void dscc4_timer(unsigned long);
static void dscc4_tx_timeout(struct net_device *);
static void dscc4_irq(int irq, void *dev_id, struct pt_regs *ptregs);
static struct net_device_stats *dscc4_get_stats(struct net_device *);
static int dscc4_attach_hdlc_device(struct net_device *);
static void dscc4_unattach_hdlc_device(struct net_device *);
static int dscc4_hdlc_open(struct hdlc_device_struct *);
static void dscc4_hdlc_close(struct hdlc_device_struct *);
static int dscc4_hdlc_ioctl(struct hdlc_device_struct *, struct ifreq *, int);
static int dscc4_hdlc_xmit(hdlc_device *, struct sk_buff *);
#ifdef EXPERIMENTAL_POLLING
static int dscc4_tx_poll(struct dscc4_dev_priv *, struct net_device *);
#endif

void inline reset_TxFD(struct TxFD *tx_fd) {
	/* FIXME: test with the last arg (size specification) = 0 */
	tx_fd->state = FrameEnd | Hold | 0x00100000;
	tx_fd->complete = 0x00000000;
}

void inline dscc4_release_ring_skbuff(struct sk_buff **p, int n)
{
	for(; n > 0; n--) {
		if (*p)
			dev_kfree_skb(*p);
		p++;
	}
}

static void dscc4_release_ring(struct dscc4_dev_priv *dpriv)
{
	struct pci_dev *pdev = dpriv->pci_priv->pdev;

	pci_free_consistent(pdev, TX_RING_SIZE*sizeof(struct TxFD),
			    dpriv->tx_fd, dpriv->tx_fd_dma);
	pci_free_consistent(pdev, RX_RING_SIZE*sizeof(struct RxFD),
			    dpriv->rx_fd, dpriv->rx_fd_dma);
	dscc4_release_ring_skbuff(dpriv->tx_skbuff, TX_RING_SIZE);
	dscc4_release_ring_skbuff(dpriv->rx_skbuff, RX_RING_SIZE);
}

void inline try_get_rx_skb(struct dscc4_dev_priv *priv, int cur, struct net_device *dev)
{
	struct sk_buff *skb;

	skb = dev_alloc_skb(RX_MAX(HDLC_MAX_MRU+2));
	priv->rx_skbuff[cur] = skb;
	if (!skb) {
		priv->rx_fd[cur--].data = (u32) NULL;
		priv->rx_fd[cur%RX_RING_SIZE].state1 |= Hold;
		priv->rx_needs_refill++;
		return;
	}
	skb->dev = dev;
	skb->protocol = htons(ETH_P_IP);
	skb->mac.raw = skb->data;
	priv->rx_fd[cur].data = pci_map_single(priv->pci_priv->pdev, skb->data,
					       skb->len, PCI_DMA_FROMDEVICE);
}
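try_get_rx_skb() sizes each receive buffer with RX_MAX(), which rounds the requested length up to the next 32-byte boundary (always adding at least one byte of slack); on allocation failure it sets Hold on the previous descriptor and bumps rx_needs_refill, so the ring stalls instead of letting the chip DMA into a NULL buffer. A minimal standalone check of the RX_MAX() arithmetic, using a hypothetical rx_max() copy of the macro:

#include <assert.h>

/* Same arithmetic as the RX_MAX() macro above: round up to the next
 * 32-byte boundary, always leaving at least one byte of slack.       */
static unsigned int rx_max(unsigned int len)
{
	return ((len >> 5) + 1) << 5;
}

int main(void)
{
	assert(rx_max(1502) == 1504);	/* rounded up to a 32-byte multiple */
	assert(rx_max(1504) == 1536);	/* already aligned: still padded    */
	assert(rx_max(31) == 32);	/* minimum slack is a single byte   */
	return 0;
}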
/*
 * IRQ/thread/whatever safe
 */
static int dscc4_wait_ack_cec(u32 ioaddr, struct net_device *dev, char *msg)
{
	s16 i = 0;

	while (readl(ioaddr + STAR) & SccBusy) {
		if (i++ < 0) {
			printk(KERN_ERR "%s: %s timeout\n", dev->name, msg);
			return -1;
		}
	}
	printk(KERN_DEBUG "%s: %s ack (%d try)\n", dev->name, msg, i);
	return 0;
}

static int dscc4_do_action(struct net_device *dev, char *msg)
{
	unsigned long ioaddr = dev->base_addr;
	u32 state;
	s16 i;

	writel(Action, ioaddr + GCMDR);
	ioaddr += GSTAR;
	for (i = 0; i >= 0; i++) {
		state = readl(ioaddr);
		if (state & Arf) {
			printk(KERN_ERR "%s: %s failed\n", dev->name, msg);
			writel(Arf, ioaddr);
			return -1;
		} else if (state & ArAck) {
			printk(KERN_DEBUG "%s: %s ack (%d try)\n",
			       dev->name, msg, i);
			writel(ArAck, ioaddr);
			return 0;
		}
	}
	printk(KERN_ERR "%s: %s timeout\n", dev->name, msg);
	return -1;
}

static __inline__ int dscc4_xpr_ack(struct dscc4_dev_priv *dpriv)
{
	int cur;
	s16 i;

	cur = dpriv->iqtx_current%IRQ_RING_SIZE;
	for (i = 0; i >= 0; i++) {
		if (!(dpriv->flags & (NeedIDR | NeedIDT)) ||
		    (dpriv->iqtx[cur] & Xpr))
			return 0;
	}
	printk(KERN_ERR "%s: %s timeout\n", "dscc4", "XPR");
	return -1;
}

static __inline__ void dscc4_rx_skb(struct dscc4_dev_priv *dpriv, int cur,
	struct RxFD *rx_fd, struct net_device *dev)
{
	struct pci_dev *pdev = dpriv->pci_priv->pdev;
	struct sk_buff *skb;
	int pkt_len;

	skb = dpriv->rx_skbuff[cur];
	pkt_len = TO_SIZE(rx_fd->state2) - 1;
	pci_dma_sync_single(pdev, rx_fd->data, pkt_len + 1, PCI_DMA_FROMDEVICE);
	if((skb->data[pkt_len] & FrameOk) == FrameOk) {
		pci_unmap_single(pdev, rx_fd->data, skb->len, PCI_DMA_FROMDEVICE);
		dpriv->stats.rx_packets++;
		dpriv->stats.rx_bytes += pkt_len;
		skb->tail += pkt_len;
		skb->len = pkt_len;
		if (netif_running(hdlc_to_dev(&dpriv->hdlc)))
			hdlc_netif_rx(&dpriv->hdlc, skb);
		else
			netif_rx(skb);
