videobook.tmpl

  </programlisting>
  <para>
        The interrupt handler is nice and simple for this card as we are assuming
        the card is buffering the frame for us. This means we have little to do but
        wake up anybody interested. We also set a capture_ready flag, as we may
        capture a frame before an application needs it. In this case we need to know
        that a frame is ready. If we had to collect the frame on the interrupt, life
        would be more complex.
  </para>
  <para>
        The two new routines we need to supply are camera_read, which returns a
        frame, and camera_poll, which waits for a frame to become ready.
  </para>
  <programlisting>
static int camera_poll(struct video_device *dev,
        struct file *file, struct poll_table *wait)
{
        poll_wait(file, &amp;capture_wait, wait);
        if(capture_ready)
                return POLLIN|POLLRDNORM;
        return 0;
}
  </programlisting>
  <para>
        Our wait queue for polling is the capture_wait queue. This will cause the
        task to be woken up by our camera_irq routine. We check capture_ready to see
        if there is an image present and, if so, report that it is readable.
  </para>
  </sect1>
  <sect1 id="rdvid">
  <title>Reading The Video Image</title>
  <programlisting>
static long camera_read(struct video_device *dev, char *buf,
                                unsigned long count)
{
        struct wait_queue wait = { current, NULL };
        u8 *ptr;
        int len;
        int i;

        add_wait_queue(&amp;capture_wait, &amp;wait);
        /* Mark ourselves sleeping before testing capture_ready, so a wake up
           arriving between the test and schedule() is not lost */
        current->state = TASK_INTERRUPTIBLE;
        while(!capture_ready)
        {
                if(file->f_flags&amp;O_NDELAY)
                {
                        remove_wait_queue(&amp;capture_wait, &amp;wait);
                        current->state = TASK_RUNNING;
                        return -EWOULDBLOCK;
                }
                if(signal_pending(current))
                {
                        remove_wait_queue(&amp;capture_wait, &amp;wait);
                        current->state = TASK_RUNNING;
                        return -ERESTARTSYS;
                }
                schedule();
                current->state = TASK_INTERRUPTIBLE;
        }
        remove_wait_queue(&amp;capture_wait, &amp;wait);
        current->state = TASK_RUNNING;
  </programlisting>
  <para>
        The first thing we have to do is to ensure that the application waits until
        the next frame is ready. The code here is almost identical to the mouse code
        we used earlier in this chapter. It is one of the common building blocks of
        Linux device driver code and probably one that you will find occurs in any
        drivers you write.
  </para>
  <para>
        We wait for a frame to be ready, or for a signal to interrupt our waiting. If a
        signal occurs we need to return from the system call so that the signal can
        be sent to the application itself. We also check whether the user actually
        wanted to avoid waiting - that is, whether they are using non-blocking I/O
        and have other things to get on with.
  </para>
  <para>
        Next we copy the data from the card to the user application. This is rarely
        as easy as our example makes out. We will add capture_w and capture_h here
        to hold the width and height of the captured image. We assume the card only
        supports 24bit RGB for now.
  </para>
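  <para>
        For reference, the driver-global state that these routines rely on might be
        declared as follows. This is only a sketch - the names follow the text, but
        the declarations and initial values are not part of this excerpt and are
        assumptions of the example.
  </para>
  <programlisting>
/* Globals assumed by camera_irq, camera_poll and camera_read.
   These declarations are illustrative only */
static struct wait_queue *capture_wait = NULL;  /* poll/read sleep here   */
static volatile int capture_ready = 0;          /* set by camera_irq      */
static int capture_w = 320;                     /* captured frame width   */
static int capture_h = 240;                     /* captured frame height  */
  </programlisting>
  <para>
        With that state in place, the remainder of camera_read copies the frame out
        to the caller.
  </para>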
  <programlisting>
        capture_ready = 0;
        ptr=(u8 *)buf;
        len = capture_w * 3 * capture_h; /* 24bit RGB */
        if(len>count)
                len=count;  /* Doesn't all fit */
        for(i=0; i&lt;len; i++)
        {
                put_user(inb(io+IMAGE_DATA), ptr);
                ptr++;
        }
        hardware_restart_capture();
        return i;
}
  </programlisting>
  <para>
        For a real hardware device you would try to avoid the loop with put_user().
        Each call to put_user() carries the overhead of checking whether the access
        to user space is allowed. It would be better to read a line into a temporary
        buffer and then copy this to user space in one go.
  </para>
  <para>
        Having captured the image and put it into user space we can kick the card to
        get the next frame acquired.
  </para>
  </sect1>
  <sect1 id="iocvid">
  <title>Video Ioctl Handling</title>
  <para>
        As with the radio driver, the major control interface is via the ioctl()
        function. Video capture devices support the same tuner calls as a radio
        device and also support additional calls to control how the video functions
        are handled. In this simple example the card has no tuners, to avoid making
        the code complex.
  </para>
  <programlisting>
static int camera_ioctl(struct video_device *dev, unsigned int cmd, void *arg)
{
        switch(cmd)
        {
                case VIDIOCGCAP:
                {
                        struct video_capability v;
                        v.type = VID_TYPE_CAPTURE|
                                 VID_TYPE_CHROMAKEY|
                                 VID_TYPE_SCALES|
                                 VID_TYPE_OVERLAY;
                        v.channels = 1;
                        v.audios = 0;
                        v.maxwidth = 640;
                        v.minwidth = 16;
                        v.maxheight = 480;
                        v.minheight = 16;
                        strcpy(v.name, "My Camera");
                        if(copy_to_user(arg, &amp;v, sizeof(v)))
                                return -EFAULT;
                        return 0;
                }
  </programlisting>
  <para>
        The first ioctl we must support, and which all video capture and radio
        devices are required to support, is VIDIOCGCAP. This behaves exactly the same
        as with a radio device. This time, however, we report the extra capabilities
        we outlined earlier when defining our video_dev structure.
  </para>
  <para>
        We now set the video flags saying that we support overlay, capture,
        scaling and chromakey. We also report size limits - our smallest image is
        16x16 pixels, our largest is 640x480.
  </para>
  <para>
        To keep things simple we report no audio and no tuning capabilities at all.
  </para>
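  <para>
        For completeness, here is a minimal sketch of how a user-space application
        might query the capabilities we have just reported. It assumes the driver
        ends up registered as /dev/video0; that path, and the error handling, are
        assumptions of this example rather than part of the driver above.
  </para>
  <programlisting>
#include &lt;fcntl.h>
#include &lt;stdio.h>
#include &lt;unistd.h>
#include &lt;sys/ioctl.h>
#include &lt;linux/videodev.h>

int main(void)
{
        struct video_capability cap;
        int fd = open("/dev/video0", O_RDONLY);

        if(fd &lt; 0 || ioctl(fd, VIDIOCGCAP, &amp;cap) &lt; 0)
                return 1;
        printf("%s: %d channel(s), %dx%d to %dx%d\n", cap.name,
                cap.channels, cap.minwidth, cap.minheight,
                cap.maxwidth, cap.maxheight);
        close(fd);
        return 0;
}
  </programlisting>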
  <programlisting>
                case VIDIOCGCHAN:
                {
                        struct video_channel v;
                        if(copy_from_user(&amp;v, arg, sizeof(v)))
                                return -EFAULT;
                        if(v.channel != 0)
                                return -EINVAL;
                        v.flags = 0;
                        v.tuners = 0;
                        v.type = VIDEO_TYPE_CAMERA;
                        v.norm = VIDEO_MODE_AUTO;
                        strcpy(v.name, "Camera Input");
                        if(copy_to_user(arg, &amp;v, sizeof(v)))
                                return -EFAULT;
                        return 0;
                }
  </programlisting>
  <para>
        This follows what is very much the standard way an ioctl handler looks
        in Linux. We copy the data into a kernel space variable and we check that the
        request is valid (in this case, that the input is 0). Finally we copy the
        camera info back to the user.
  </para>
  <para>
        The VIDIOCGCHAN ioctl allows a user to ask about video channels (that is,
        inputs to the video card). Our example card has a single camera input. The
        fields in the structure are
  </para>
   <table frame=all><title>struct video_channel fields</title>
   <tgroup cols=2 align=left>
   <tbody>
   <row>
   <entry>channel</entry><entry>The channel number we are selecting</entry>
   </row><row>
   <entry>name</entry><entry>The name for this channel. This is intended
                   to describe the port to the user.
                   Appropriate names are therefore things like
                   "Camera" or "SCART input"</entry>
   </row><row>
   <entry>flags</entry><entry>Channel properties</entry>
   </row><row>
   <entry>type</entry><entry>Input type</entry>
   </row><row>
   <entry>norm</entry><entry>The current television encoding being used,
                   if relevant for this channel.</entry>
    </row>
    </tbody>
    </tgroup>
    </table>
    <table frame=all><title>struct video_channel flags</title>
    <tgroup cols=2 align=left>
    <tbody>
    <row>
        <entry>VIDEO_VC_TUNER</entry><entry>Channel has a tuner.</entry>
   </row><row>
        <entry>VIDEO_VC_AUDIO</entry><entry>Channel has audio.</entry>
    </row>
    </tbody>
    </tgroup>
    </table>
    <table frame=all><title>struct video_channel types</title>
    <tgroup cols=2 align=left>
    <tbody>
    <row>
        <entry>VIDEO_TYPE_TV</entry><entry>Television input.</entry>
   </row><row>
        <entry>VIDEO_TYPE_CAMERA</entry><entry>Fixed camera input.</entry>
   </row><row>
        <entry>0</entry><entry>Type is unknown.</entry>
    </row>
    </tbody>
    </tgroup>
    </table>
    <table frame=all><title>struct video_channel norms</title>
    <tgroup cols=2 align=left>
    <tbody>
    <row>
        <entry>VIDEO_MODE_PAL</entry><entry>PAL encoded Television</entry>
   </row><row>
        <entry>VIDEO_MODE_NTSC</entry><entry>NTSC (US) encoded Television</entry>
   </row><row>
        <entry>VIDEO_MODE_SECAM</entry><entry>SECAM (French) encoded Television</entry>
   </row><row>
        <entry>VIDEO_MODE_AUTO</entry><entry>Automatic switching, or format does not
                                matter</entry>
    </row>
    </tbody>
    </tgroup>
    </table>
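    <para>
        Before selecting an input, a user-space application would typically query it
        with VIDIOCGCHAN. A minimal sketch, assuming fd is a descriptor opened on the
        device as in the earlier user-space example:
    </para>
    <programlisting>
        struct video_channel chan;

        chan.channel = 0;                       /* our only input */
        if(ioctl(fd, VIDIOCGCHAN, &amp;chan) == 0)
                printf("input %d: %s (type %d, norm %d)\n",
                        chan.channel, chan.name, chan.type, chan.norm);
    </programlisting>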
    <para>
        The corresponding VIDIOCSCHAN ioctl allows a user to change channel and to
        request that the norm be changed - for example, to switch between a PAL and
        an NTSC format camera.
    </para>
  <programlisting>
                case VIDIOCSCHAN:
                {
                        struct video_channel v;
                        if(copy_from_user(&amp;v, arg, sizeof(v)))
                                return -EFAULT;
                        if(v.channel != 0)
                                return -EINVAL;
                        if(v.norm != VIDEO_MODE_AUTO)
                                return -EINVAL;
                        return 0;
                }
  </programlisting>
  <para>
        The implementation of this call in our driver is remarkably easy. Because we
        are assuming fixed format hardware we need only check that the user has not
        tried to change anything.
  </para>
  <para>
        The user also needs to be able to configure and adjust the picture they are
        seeing. This is much like adjusting a television set. A user application
        also needs to know the palette being used so that it knows how to display
        the image that has been captured. The VIDIOCGPICT and VIDIOCSPICT ioctl
        calls provide this information.
  </para>
  <programlisting>
                case VIDIOCGPICT:
                {
                        struct video_picture v;
                        v.brightness = hardware_brightness();
                        v.hue = hardware_hue();
                        v.colour = hardware_saturation();
                        v.contrast = hardware_contrast();
                        /* Not settable */
                        v.whiteness = 32768;
                        v.depth = 24;           /* 24bit */
                        v.palette = VIDEO_PALETTE_RGB24;
                        if(copy_to_user(arg, &amp;v, sizeof(v)))
                                return -EFAULT;
                        return 0;
                }
  </programlisting>
  <para>
        The brightness, hue, colour and contrast fields provide the picture controls
        that are akin to a conventional television. Whiteness provides additional
        control for greyscale images. All of these values are scaled between 0 and
        65535 and have 32768 as the mid-point setting. The scaling means that
        applications do not have to worry about the capability range of the hardware
        but can let it make a best-effort attempt.
  </para>
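  <para>
        How a driver maps these scaled values onto its hardware is entirely up to the
        implementation. The sketch below shows one plausible mapping onto an 8-bit
        register; the BRIGHTNESS_REG offset and the hardware_set_brightness() helper
        are inventions of this example, not part of the interface.
  </para>
  <programlisting>
/* Sketch only: map the 0-65535 V4L picture scale onto a
   hypothetical 8-bit hardware brightness register */
static void hardware_set_brightness(unsigned short value)
{
        unsigned char reg = value >> 8;         /* 0-65535 becomes 0-255 */
        outb(reg, io + BRIGHTNESS_REG);
}
  </programlisting>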
