How to Write a Device Driver for Video Cameras

Last month I talked about how to write a device driver for radio-tuner cards. This month, I’ll cover video-capture devices, which share the same interfaces as radio devices.

In order to explain the video-capture interface I will use the example of a camera that has no tuners or audio input. This keeps the example relatively clean. To get audio capabilities, you can combine this month’s driver with last month’s driver example.

Before I get into the details of video-capture devices, a little background on the technology is in order. Full-motion video, even at television resolution (which is relatively low), is resource-intensive. These devices continually pass megabytes of data every second from the capture card to the display. Because copying this amount of data through a user application is often unfeasible, several alternative approaches to moving the video data have been developed.

The first is to transfer the television image onto the video output directly. This is also how some add-on 3D-rendering cards work, dropping the video into any chosen rectangle of the display. These cards, which include most MPEG-1 cards that use a feature connector, aren’t very friendly in a windowing environment. They don’t understand windows and clipping rectangles, which means that the video window is always on top of the display.

Chromakeying is a technique cards use to get around this. It is an old television mixing trick in which you mark all the areas you wish to replace with a single clear color that is not used in the image. TV people use an incredibly bright blue for this, while computing people tend to use a particularly virulent purple, because bright blue occurs on the desktop, and anyone with virulent purple windows has another problem besides their TV overlay.

The third approach is to copy the data from the capture card to the video card, but to do it directly across the PCI bus. This relieves the processor from doing the work but does require some intelligence on the part of the video-capture chip, as well as a suitable video card. Programming and debugging these cards can be extremely tricky. There are some complicated interactions with the display, and you may also have to cope with various chipset bugs that show up when PCI cards start talking to each other directly (rather than via the CPU).

To keep our example fairly simple we will assume a card that supports overlaying a flat rectangular image onto the frame-buffer output, uses chromakey for selecting the region on which to draw, and can also capture video into processor memory.

The functions supported by our video-capture driver are shown in Listing One.

Listing One: Video-capture Driver

static struct video_device my_camera =
{
        "My Camera",
        VID_TYPE_OVERLAY|VID_TYPE_SCALES|VID_TYPE_CAPTURE|
                VID_TYPE_CHROMAKEY,
        VID_HARDWARE_MYCAMERA,
        camera_open,
        camera_close,
        camera_read,
        NULL,           /* no write */
        camera_poll,
        camera_ioctl,
        NULL,           /* no special init function */
        NULL            /* no private data */
};

We are going to need a read function, which is used for capturing data from the card, and a poll function so that an application can wait for the next frame to be captured.

There are several additional video-capability flags that did not apply to the radio interface. These are:

VID_TYPE_CAPTURE: We support image capture.

VID_TYPE_TELETEXT: A teletext capture device (/dev/vbi[n]).

VID_TYPE_OVERLAY: The image can be directly overlaid onto the frame buffer.

VID_TYPE_CHROMAKEY: Chromakey can be used to select which parts of the image to display.

VID_TYPE_CLIPPING: It is possible to give the board a list of rectangles to draw around.

VID_TYPE_FRAMERAM: The video capture goes into the video memory and actually changes it. Applications need to know this so they can clean up after the card.

VID_TYPE_SCALES: The image can be scaled to various sizes, rather than being a single fixed size.

VID_TYPE_MONOCHROME: The capture will be monochrome. This isn’t a complete answer to the question, since a mono camera on a color capture card will still produce monochrome output.

VID_TYPE_SUBCAPTURE: The card allows only part of its field of view to be captured. This enables applications to avoid copying all of a large image into memory when only some section is relevant.

We set VID_TYPE_CAPTURE so that we are seen as a capture card, VID_TYPE_CHROMAKEY so that the application knows that it is time to draw in particularly virulent purple, and VID_TYPE_SCALES because the video can be resized.
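
From the application side, the chromakey and the overlay window are handed to the driver through the standard V4L struct video_window and the VIDIOCSWIN ioctl. Here is a minimal sketch, assuming a card like ours; the device path and key value are illustrative, and the exact chromakey pixel format is card-dependent:

#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/videodev.h>

int setup_overlay(void)
{
        struct video_window vw;
        int one = 1;
        int fd = open("/dev/video0", O_RDWR);   /* illustrative device path */

        if(fd < 0)
                return -1;

        vw.x = 64;                      /* where the overlay goes */
        vw.y = 64;
        vw.width = 320;
        vw.height = 240;
        vw.chromakey = 0xFF00FF;        /* our virulent purple (pixel format assumed) */
        vw.flags = 0;
        vw.clips = NULL;
        vw.clipcount = 0;

        if(ioctl(fd, VIDIOCSWIN, &vw) < 0)      /* tell the driver where and how */
                return -1;
        if(ioctl(fd, VIDIOCCAPTURE, &one) < 0)  /* switch the overlay on */
                return -1;
        return fd;
}

The application then fills the window region with the chromakey color, and the card replaces exactly those pixels with video.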

Setup is similar to last month’s radio driver. This time we are going to want an interrupt line for the “frame captured” signal. Not all cards have this, so some of them cannot handle poll().

static int io = 0x320;
static int irq = 11;

int __init mycamera_init(struct video_init *v)
{
        if(check_region(io, MY_IO_SIZE))
        {
                printk(KERN_ERR "mycamera: port 0x%03X is in use.\n", io);
                return -EBUSY;
        }

        if(video_register_device(&my_camera, VFL_TYPE_GRABBER)==-1)
                return -EINVAL;

        request_region(io, MY_IO_SIZE, "mycamera");
        return 0;
}

There is little changed here from the radio-card driver. We specify VFL_TYPE_GRABBER this time, since we want to be allocated a /dev/video device name.
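
Although the listing stops at initialization, a modular build will also want a matching unload path. A minimal sketch, assuming the init function above:

#ifdef MODULE
int init_module(void)
{
        return mycamera_init(NULL);
}

void cleanup_module(void)
{
        /* Undo what mycamera_init() did, in reverse order */
        video_unregister_device(&my_camera);
        release_region(io, MY_IO_SIZE);
}
#endif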

static int users = 0;

static int camera_open(struct video_device *dev, int flags)
{
        if(users)
                return -EBUSY;
        if(request_irq(irq, camera_irq, 0, "camera", dev)<0)
                return -EBUSY;
        users++;
        MOD_INC_USE_COUNT;
        return 0;
}

static int camera_close(struct video_device *dev)
{
        users--;
        free_irq(irq, dev);
        MOD_DEC_USE_COUNT;
        return 0;
}

The open and close routines are also quite similar. The only real change is that we now request an interrupt for the camera device interrupt line. If we cannot get the interrupt we report EBUSY to the application and give up.

Our example handler is for an ISA bus device. If it were PCI, you would be able to share the interrupt line and would set SA_SHIRQ in the request_irq() flags to indicate a shared IRQ.
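
The open-time request in a PCI version might look like this sketch; the handler must then check whether its own card actually raised the interrupt, since other devices can share the line:

/* PCI variant: mark the IRQ as shareable with other devices */
if(request_irq(irq, camera_irq, SA_SHIRQ, "camera", dev) < 0)
        return -EBUSY;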

We pass the device pointer as the interrupt-routine argument. We don’t actually need to do this, since we support only one card, but it makes it easier to upgrade the driver for multiple devices in the future.

Our interrupt routine needs to do little if we assume the card can simply queue one frame to be read after it captures it.

static struct wait_queue *capture_wait;
static int capture_ready = 0;

static void camera_irq(int irq, void *dev_id, struct pt_regs *regs)
{
        capture_ready = 1;
        wake_up_interruptible(&capture_wait);
}

The interrupt handler is nice and simple for this card, since we are assuming the card is buffering the frame for us. This means we have very little to do except wake up anybody interested. We also set a capture_ready flag, as we may capture a frame before an application actually needs it.

The two new routines we need to supply are camera_read, which returns a frame, and camera_poll, which waits for a frame to become ready.

static int camera_poll(struct video_device *dev,
        struct file *file, poll_table *wait)
{
        poll_wait(file, &capture_wait, wait);
        if(capture_ready)
                return POLLIN|POLLRDNORM;
        return 0;
}

Our wait queue for polling is the capture_wait queue (Listing Two). This will cause the task to be woken up by our camera_irq routine. We check capture_ready to see if there is an image present and, if so, report that it is readable.

Listing Two: The Wait Queue

static long camera_read(struct video_device *dev, char *buf,
        unsigned long count, int noblock)
{
        struct wait_queue wait = { current, NULL };
        u8 *ptr;
        int len;
        int i;

        add_wait_queue(&capture_wait, &wait);
        current->state = TASK_INTERRUPTIBLE;

        while(!capture_ready)
        {
                if(noblock)
                {
                        remove_wait_queue(&capture_wait, &wait);
                        current->state = TASK_RUNNING;
                        return -EWOULDBLOCK;
                }
                if(signal_pending(current))
                {
                        remove_wait_queue(&capture_wait, &wait);
                        current->state = TASK_RUNNING;
                        return -ERESTARTSYS;
                }
                schedule();
                current->state = TASK_INTERRUPTIBLE;
        }
        remove_wait_queue(&capture_wait, &wait);
        current->state = TASK_RUNNING;

The first thing we have to do is to ensure that the application waits until the next frame is ready. The code here is almost identical to the mouse code in the October Gearheads Only column, which can be found at http://www.linux-mag.com. It is one of the common building blocks of Linux device-driver code and probably one that you will use in any driver you write.

We wait for a frame to be ready or for a signal to interrupt our wait. If a signal occurs we need to return from the system call so that the signal can be sent to the application itself. We also check to see if the user actually wanted to avoid waiting — that is, if they are using non-blocking I/O and have other things to get on with.

Next we copy the data from the card to the user application. This is rarely as easy as our example makes out. We will add the variables capture_w and capture_h here to hold the width and height of the captured image. We assume the card supports only 24-bit RGB for now.

        capture_ready = 0;

        ptr = (u8 *)buf;
        len = capture_w * 3 * capture_h;        /* 24-bit RGB */

        if(len > count)
                len = count;    /* Doesn't all fit */

        for(i=0; i<len; i++)
        {
                put_user(inb(io+IMAGE_DATA), ptr);
                ptr++;
        }

        hardware_restart_capture();

        return i;
}

For a real hardware device you would try to avoid the loop with put_user(). Each call to put_user() must check whether access to user space is allowed, which is costly. It would be better to read a line into a temporary buffer and then copy this to user space in one go.
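
Here is a sketch of that line-buffered approach, under the same assumptions as the example driver; copy_frame_buffered() and MY_MAX_LINE are made-up names, while capture_w, capture_h, io, and IMAGE_DATA are the values used above:

static u8 line_buf[MY_MAX_LINE];        /* assume MY_MAX_LINE >= capture_w * 3 */

static long copy_frame_buffered(u8 *ptr, unsigned long count)
{
        int line = capture_w * 3;       /* bytes per 24-bit RGB scan line */
        long done = 0;
        int x, y;

        for(y = 0; y < capture_h && done + line <= count; y++)
        {
                /* Read one scan line out of the card into kernel memory */
                for(x = 0; x < line; x++)
                        line_buf[x] = inb(io + IMAGE_DATA);
                /* One user-space access check per line instead of per byte */
                if(copy_to_user(ptr, line_buf, line))
                        return -EFAULT;
                ptr += line;
                done += line;
        }
        return done;
}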

Having captured the image and put it into user space, we can kick the card to acquire a new frame. Next month we will talk about how to do that.
