
📄 pktcdvd.c

📁 Linux block device driver source code
💻 C
📖 Page 1 of 5
/*
 * Copyright (C) 2000 Jens Axboe <axboe@suse.de>
 * Copyright (C) 2001-2004 Peter Osterlund <petero2@telia.com>
 *
 * May be copied or modified under the terms of the GNU General Public
 * License.  See linux/COPYING for more information.
 *
 * Packet writing layer for ATAPI and SCSI CD-RW, DVD+RW, DVD-RW and
 * DVD-RAM devices.
 *
 * Theory of operation:
 *
 * At the lowest level, there is the standard driver for the CD/DVD device,
 * typically ide-cd.c or sr.c. This driver can handle read and write requests,
 * but it doesn't know anything about the special restrictions that apply to
 * packet writing. One restriction is that write requests must be aligned to
 * packet boundaries on the physical media, and the size of a write request
 * must be equal to the packet size. Another restriction is that a
 * GPCMD_FLUSH_CACHE command has to be issued to the drive before a read
 * command, if the previous command was a write.
 *
 * The purpose of the packet writing driver is to hide these restrictions from
 * higher layers, such as file systems, and present a block device that can be
 * randomly read and written using 2kB-sized blocks.
 *
 * The lowest layer in the packet writing driver is the packet I/O scheduler.
 * Its data is defined by the struct packet_iosched and includes two bio
 * queues with pending read and write requests. These queues are processed
 * by the pkt_iosched_process_queue() function. The write requests in this
 * queue are already properly aligned and sized. This layer is responsible for
 * issuing the flush cache commands and scheduling the I/O in a good order.
 *
 * The next layer transforms unaligned write requests to aligned writes. This
 * transformation requires reading missing pieces of data from the underlying
 * block device, assembling the pieces to full packets and queuing them to the
 * packet I/O scheduler.
 *
 * At the top layer there is a custom make_request_fn function that forwards
 * read requests directly to the iosched queue and puts write requests in the
 * unaligned write queue. A kernel thread performs the necessary read
 * gathering to convert the unaligned writes to aligned writes and then feeds
 * them to the packet I/O scheduler.
 *
 *************************************************************************/

#define VERSION_CODE    "v0.2.0a 2004-07-14 Jens Axboe (axboe@suse.de) and petero2@telia.com"

#include <linux/pktcdvd.h>
#include <linux/config.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/file.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/miscdevice.h>
#include <linux/suspend.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_ioctl.h>

#include <asm/uaccess.h>

#if PACKET_DEBUG
#define DPRINTK(fmt, args...) printk(KERN_NOTICE fmt, ##args)
#else
#define DPRINTK(fmt, args...)
#endif

#if PACKET_DEBUG > 1
#define VPRINTK(fmt, args...) printk(KERN_NOTICE fmt, ##args)
#else
#define VPRINTK(fmt, args...)
#endif

#define MAX_SPEED 0xffff

#define ZONE(sector, pd) (((sector) + (pd)->offset) & ~((pd)->settings.size - 1))

static struct pktcdvd_device *pkt_devs[MAX_WRITERS];
static struct proc_dir_entry *pkt_proc;
static int pkt_major;
static struct semaphore ctl_mutex;      /* Serialize open/close/setup/teardown */
static mempool_t *psd_pool;


static void pkt_bio_finished(struct pktcdvd_device *pd)
{
        BUG_ON(atomic_read(&pd->cdrw.pending_bios) <= 0);
        if (atomic_dec_and_test(&pd->cdrw.pending_bios)) {
                VPRINTK("pktcdvd: queue empty\n");
                atomic_set(&pd->iosched.attention, 1);
                wake_up(&pd->wqueue);
        }
}

static void pkt_bio_destructor(struct bio *bio)
{
        kfree(bio->bi_io_vec);
        kfree(bio);
}

static struct bio *pkt_bio_alloc(int nr_iovecs)
{
        struct bio_vec *bvl = NULL;
        struct bio *bio;

        bio = kmalloc(sizeof(struct bio), GFP_KERNEL);
        if (!bio)
                goto no_bio;
        bio_init(bio);

        bvl = kcalloc(nr_iovecs, sizeof(struct bio_vec), GFP_KERNEL);
        if (!bvl)
                goto no_bvl;

        bio->bi_max_vecs = nr_iovecs;
        bio->bi_io_vec = bvl;
        bio->bi_destructor = pkt_bio_destructor;

        return bio;

 no_bvl:
        kfree(bio);
 no_bio:
        return NULL;
}

/*
 * Allocate a packet_data struct
 */
static struct packet_data *pkt_alloc_packet_data(void)
{
        int i;
        struct packet_data *pkt;

        pkt = kzalloc(sizeof(struct packet_data), GFP_KERNEL);
        if (!pkt)
                goto no_pkt;

        pkt->w_bio = pkt_bio_alloc(PACKET_MAX_SIZE);
        if (!pkt->w_bio)
                goto no_bio;

        for (i = 0; i < PAGES_PER_PACKET; i++) {
                pkt->pages[i] = alloc_page(GFP_KERNEL|__GFP_ZERO);
                if (!pkt->pages[i])
                        goto no_page;
        }

        spin_lock_init(&pkt->lock);

        for (i = 0; i < PACKET_MAX_SIZE; i++) {
                struct bio *bio = pkt_bio_alloc(1);
                if (!bio)
                        goto no_rd_bio;
                pkt->r_bios[i] = bio;
        }

        return pkt;

no_rd_bio:
        for (i = 0; i < PACKET_MAX_SIZE; i++) {
                struct bio *bio = pkt->r_bios[i];
                if (bio)
                        bio_put(bio);
        }

no_page:
        for (i = 0; i < PAGES_PER_PACKET; i++)
                if (pkt->pages[i])
                        __free_page(pkt->pages[i]);
        bio_put(pkt->w_bio);
no_bio:
        kfree(pkt);
no_pkt:
        return NULL;
}

/*
 * Free a packet_data struct
 */
static void pkt_free_packet_data(struct packet_data *pkt)
{
        int i;

        for (i = 0; i < PACKET_MAX_SIZE; i++) {
                struct bio *bio = pkt->r_bios[i];
                if (bio)
                        bio_put(bio);
        }
        for (i = 0; i < PAGES_PER_PACKET; i++)
                __free_page(pkt->pages[i]);
        bio_put(pkt->w_bio);
        kfree(pkt);
}

static void pkt_shrink_pktlist(struct pktcdvd_device *pd)
{
        struct packet_data *pkt, *next;

        BUG_ON(!list_empty(&pd->cdrw.pkt_active_list));

        list_for_each_entry_safe(pkt, next, &pd->cdrw.pkt_free_list, list) {
                pkt_free_packet_data(pkt);
        }
}

static int pkt_grow_pktlist(struct pktcdvd_device *pd, int nr_packets)
{
        struct packet_data *pkt;

        INIT_LIST_HEAD(&pd->cdrw.pkt_free_list);
        INIT_LIST_HEAD(&pd->cdrw.pkt_active_list);
        spin_lock_init(&pd->cdrw.active_list_lock);
        while (nr_packets > 0) {
                pkt = pkt_alloc_packet_data();
                if (!pkt) {
                        pkt_shrink_pktlist(pd);
                        return 0;
                }
                pkt->id = nr_packets;
                pkt->pd = pd;
                list_add(&pkt->list, &pd->cdrw.pkt_free_list);
                nr_packets--;
        }
        return 1;
}

static void *pkt_rb_alloc(gfp_t gfp_mask, void *data)
{
        return kmalloc(sizeof(struct pkt_rb_node), gfp_mask);
}

static void pkt_rb_free(void *ptr, void *data)
{
        kfree(ptr);
}

static inline struct pkt_rb_node *pkt_rbtree_next(struct pkt_rb_node *node)
{
        struct rb_node *n = rb_next(&node->rb_node);
        if (!n)
                return NULL;
        return rb_entry(n, struct pkt_rb_node, rb_node);
}

static inline void pkt_rbtree_erase(struct pktcdvd_device *pd, struct pkt_rb_node *node)
{
        rb_erase(&node->rb_node, &pd->bio_queue);
        mempool_free(node, pd->rb_pool);
        pd->bio_queue_size--;
        BUG_ON(pd->bio_queue_size < 0);
}

/*
 * Find the first node in the pd->bio_queue rb tree with a starting sector >= s.
 */
static struct pkt_rb_node *pkt_rbtree_find(struct pktcdvd_device *pd, sector_t s)
{
        struct rb_node *n = pd->bio_queue.rb_node;
        struct rb_node *next;
        struct pkt_rb_node *tmp;

        if (!n) {
                BUG_ON(pd->bio_queue_size > 0);
                return NULL;
        }

        for (;;) {
                tmp = rb_entry(n, struct pkt_rb_node, rb_node);
                if (s <= tmp->bio->bi_sector)
                        next = n->rb_left;
                else
                        next = n->rb_right;
                if (!next)
                        break;
                n = next;
        }

        if (s > tmp->bio->bi_sector) {
                tmp = pkt_rbtree_next(tmp);
                if (!tmp)
                        return NULL;
        }
        BUG_ON(s > tmp->bio->bi_sector);
        return tmp;
}

/*
 * Insert a node into the pd->bio_queue rb tree.
 */
static void pkt_rbtree_insert(struct pktcdvd_device *pd, struct pkt_rb_node *node)
{
        struct rb_node **p = &pd->bio_queue.rb_node;
        struct rb_node *parent = NULL;
        sector_t s = node->bio->bi_sector;
        struct pkt_rb_node *tmp;

        while (*p) {
                parent = *p;
                tmp = rb_entry(parent, struct pkt_rb_node, rb_node);
                if (s < tmp->bio->bi_sector)
                        p = &(*p)->rb_left;
                else
                        p = &(*p)->rb_right;
        }
        rb_link_node(&node->rb_node, parent, p);
        rb_insert_color(&node->rb_node, &pd->bio_queue);
        pd->bio_queue_size++;
}

/*
 * Add a bio to a single linked list defined by its head and tail pointers.
 */
static inline void pkt_add_list_last(struct bio *bio, struct bio **list_head, struct bio **list_tail)
{
        bio->bi_next = NULL;
        if (*list_tail) {
                BUG_ON((*list_head) == NULL);
                (*list_tail)->bi_next = bio;
                (*list_tail) = bio;
        } else {
                BUG_ON((*list_head) != NULL);
                (*list_head) = bio;
                (*list_tail) = bio;
        }
}

/*
 * Remove and return the first bio from a single linked list defined by its
 * head and tail pointers.
 */
static inline struct bio *pkt_get_list_first(struct bio **list_head, struct bio **list_tail)
{
        struct bio *bio;

        if (*list_head == NULL)
                return NULL;

        bio = *list_head;
        *list_head = bio->bi_next;
        if (*list_head == NULL)
                *list_tail = NULL;

        bio->bi_next = NULL;
        return bio;
}

/*
 * Send a packet_command to the underlying block device and
 * wait for completion.
 */
static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *cgc)
{
        char sense[SCSI_SENSE_BUFFERSIZE];
        request_queue_t *q;
        struct request *rq;
        DECLARE_COMPLETION(wait);
        int err = 0;

        q = bdev_get_queue(pd->bdev);

        rq = blk_get_request(q, (cgc->data_direction == CGC_DATA_WRITE) ? WRITE : READ,
                             __GFP_WAIT);
        rq->errors = 0;
        rq->rq_disk = pd->bdev->bd_disk;
        rq->bio = NULL;
        rq->buffer = NULL;
        rq->timeout = 60*HZ;
        rq->data = cgc->buffer;
        rq->data_len = cgc->buflen;
        rq->sense = sense;
        memset(sense, 0, sizeof(sense));
        rq->sense_len = 0;
        rq->flags |= REQ_BLOCK_PC | REQ_HARDBARRIER;
        if (cgc->quiet)
                rq->flags |= REQ_QUIET;
        memcpy(rq->cmd, cgc->cmd, CDROM_PACKET_SIZE);
        if (sizeof(rq->cmd) > CDROM_PACKET_SIZE)
                memset(rq->cmd + CDROM_PACKET_SIZE, 0, sizeof(rq->cmd) - CDROM_PACKET_SIZE);

        rq->ref_count++;
        rq->flags |= REQ_NOMERGE;
        rq->waiting = &wait;
        rq->end_io = blk_end_sync_rq;
        elv_add_request(q, rq, ELEVATOR_INSERT_BACK, 1);
        generic_unplug_device(q);
        wait_for_completion(&wait);

        if (rq->errors)
                err = -EIO;

        blk_put_request(rq);
        return err;
}

/*
 * A generic sense dump / resolve mechanism should be implemented across
 * all ATAPI + SCSI devices.
 */
static void pkt_dump_sense(struct packet_command *cgc)
{
        static char *info[9] = { "No sense", "Recovered error", "Not ready",
                                 "Medium error", "Hardware error", "Illegal request",
                                 "Unit attention", "Data protect", "Blank check" };
        int i;
        struct request_sense *sense = cgc->sense;

        printk("pktcdvd:");
        for (i = 0; i < CDROM_PACKET_SIZE; i++)
                printk(" %02x", cgc->cmd[i]);
        printk(" - ");

        if (sense == NULL) {
                printk("no sense\n");
                return;
        }

        printk("sense %02x.%02x.%02x", sense->sense_key, sense->asc, sense->ascq);

        if (sense->sense_key > 8) {
                printk(" (INVALID)\n");
                return;
        }

        printk(" (%s)\n", info[sense->sense_key]);
}

/*
 * flush the drive cache to media
 */
static int pkt_flush_cache(struct pktcdvd_device *pd)
{
        struct packet_command cgc;

        init_cdrom_command(&cgc, NULL, 0, CGC_DATA_NONE);
        cgc.cmd[0] = GPCMD_FLUSH_CACHE;
        cgc.quiet = 1;

        /*
         * the IMMED bit -- we default to not setting it, although that
         * would allow a much faster close, this is safer
         */
#if 0
        cgc.cmd[1] = 1 << 1;
#endif
        return pkt_generic_packet(pd, &cgc);
}

/*
 * speed is given as the normal factor, e.g. 4 for 4x
 */
static int pkt_set_speed(struct pktcdvd_device *pd, unsigned write_speed, unsigned read_speed)
{
        struct packet_command cgc;
        struct request_sense sense;
        int ret;

        init_cdrom_command(&cgc, NULL, 0, CGC_DATA_NONE);
        cgc.sense = &sense;
        cgc.cmd[0] = GPCMD_SET_SPEED;
        cgc.cmd[2] = (read_speed >> 8) & 0xff;
        cgc.cmd[3] = read_speed & 0xff;
        cgc.cmd[4] = (write_speed >> 8) & 0xff;
        cgc.cmd[5] = write_speed & 0xff;

        if ((ret = pkt_generic_packet(pd, &cgc)))
                pkt_dump_sense(&cgc);

        return ret;
}

/*
 * Queue a bio for processing by the low-level CD device. Must be called
 * from process context.
 */
static void pkt_queue_bio(struct pktcdvd_device *pd, struct bio *bio)
{
        spin_lock(&pd->iosched.lock);
        if (bio_data_dir(bio) == READ) {
                pkt_add_list_last(bio, &pd->iosched.read_queue,
                                  &pd->iosched.read_queue_tail);
        } else {
                pkt_add_list_last(bio, &pd->iosched.write_queue,
                                  &pd->iosched.write_queue_tail);
        }
        spin_unlock(&pd->iosched.lock);

        atomic_set(&pd->iosched.attention, 1);
        wake_up(&pd->wqueue);
}

/*
 * Process the queued read/write requests. This function handles special
 * requirements for CDRW drives:
 * - A cache flush command must be inserted before a read request if the
 *   previous request was a write.
 * - Switching between reading and writing is slow, so don't do it more often
 *   than necessary.
 * - Optimize for throughput at the expense of latency. This means that streaming
 *   writes will never be interrupted by a read, but if the drive has to seek
 *   before the next write, switch to reading instead if there are any pending
 *   read requests.
 * - Set the read speed according to current usage pattern. When only reading
 *   from the device, it's best to use the highest possible read speed, but
 *   when switching often between reading and writing, it's better to have the
 *   same read and write speeds.
 */
static void pkt_iosched_process_queue(struct pktcdvd_device *pd)
{
        request_queue_t *q;

        if (atomic_read(&pd->iosched.attention) == 0)
                return;
        atomic_set(&pd->iosched.attention, 0);

        q = bdev_get_queue(pd->bdev);

        for (;;) {
                struct bio *bio;
                int reads_queued, writes_queued;

                spin_lock(&pd->iosched.lock);
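The theory-of-operation comment above says that writes must be aligned to packet boundaries and be exactly one packet long; the ZONE() macro near the top of the file performs that rounding. The short standalone userspace sketch below is not part of pktcdvd.c and uses made-up values; it only illustrates the mask arithmetic, assuming the packet size (in device sectors) is a power of two and a media offset of zero.

/*
 * Standalone illustration of the ZONE() mask arithmetic -- not part of the
 * driver. All names and values here are hypothetical.
 */
#include <stdio.h>

typedef unsigned long long sector_t;

/* Round (sector + offset) down to the start of its packet-sized zone. */
static sector_t zone(sector_t sector, sector_t offset, sector_t packet_sectors)
{
        return (sector + offset) & ~(packet_sectors - 1);
}

int main(void)
{
        /* assumed example: 128 sectors per packet, no media offset */
        const sector_t packet_sectors = 128;
        const sector_t offset = 0;
        sector_t s;

        for (s = 0; s < 400; s += 100)
                printf("sector %3llu -> zone %3llu\n",
                       s, zone(s, offset, packet_sectors));

        /*
         * Output: sectors 0..127 map to zone 0, 128..255 to zone 128, and
         * so on, so bios whose sectors fall in the same zone can be
         * gathered into one aligned, packet-sized write.
         */
        return 0;
}

Because packet_sectors is a power of two, ~(packet_sectors - 1) clears the low-order bits, so every sector within a packet maps to the same zone start; this is the grouping key the read-gathering thread described in the header comment relies on.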
