<title>Input/Output</title>
<para>The V4L2 API defines several different methods to read from or
write to a device. All drivers exchanging data with applications must
support at least one of them.</para>
<para>The classic I/O method using the <function>read()</function>
and <function>write()</function> functions is automatically selected
after opening a V4L2 device. When the driver does not support this
method, attempts to read or write will always fail.</para>
<para>Other methods must be negotiated. To select the streaming I/O
method with memory mapped or user buffers applications call the
&VIDIOC-REQBUFS; ioctl. The asynchronous I/O method is not defined
yet.</para>
<para>Video overlay can be considered another I/O method, although
the application does not directly receive the image data. It is
selected by initiating video overlay with the &VIDIOC-S-FMT; ioctl.
For more information see <xref linkend="overlay" />.</para>
<para>Generally exactly one I/O method, including overlay, is
associated with each file descriptor. The only exceptions are
applications not exchanging data with a driver ("panel applications",
see <xref linkend="open" />) and drivers permitting simultaneous video capturing
and overlay using the same file descriptor, for compatibility with V4L
and earlier versions of V4L2.</para>
<para><constant>VIDIOC_S_FMT</constant> and
<constant>VIDIOC_REQBUFS</constant> would permit this to some degree,
but for simplicity drivers need not support switching the I/O method
(after first switching away from read/write) other than by closing
and reopening the device.</para>
<para>The following sections describe the various I/O methods in
more detail.</para>
<section id="rw">
<title>Read/Write</title>
<para>Input and output devices support the
<function>read()</function> and <function>write()</function> functions,
respectively, when the <constant>V4L2_CAP_READWRITE</constant> flag in
the <structfield>capabilities</structfield> field of &v4l2-capability;
returned by the &VIDIOC-QUERYCAP; ioctl is set.</para>
<para>Drivers may need the CPU to copy the data, but they may also
support DMA to or from user memory, so this I/O method is not
necessarily less efficient than other methods merely exchanging buffer
pointers. It is considered inferior though because no meta-information
such as frame counters or timestamps is passed. This information is
necessary to recognize frame dropping and to synchronize with other
data streams. However, this is also the simplest I/O method, requiring
little or no setup to exchange data. It permits command line stunts
like this (the <application>vidctrl</application> tool is
fictitious):</para>
<informalexample>
<screen>
&gt; vidctrl /dev/video --input=0 --format=YUYV --size=352x288
&gt; dd if=/dev/video of=myimage.422 bs=202752 count=1
</screen>
</informalexample>
<para>To read from the device applications use the
&func-read; function, to write the &func-write; function.
Drivers must support at least one I/O method if they
exchange data with applications, but it need not be this one.<footnote>
<para>It would be desirable if applications could depend on
drivers supporting all I/O interfaces, but just as the more complex
memory mapping I/O can be inadequate for some devices, there is no
reason to require this interface, which is most useful for simple
applications capturing still images.</para>
</footnote> When reading or writing is supported, the driver
must also support the &func-select; and &func-poll;
functions.<footnote>
<para>At the driver level <function>select()</function> and
<function>poll()</function> are the same, and
<function>select()</function> is too important to be optional.</para>
</footnote></para>
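<para>For illustration, here is a minimal sketch of capturing a single
frame with <function>read()</function>, assuming the 352x288 YUYV format
negotiated in the example above (202752 bytes per frame):</para>
<informalexample>
<programlisting>
/* A sketch only: assumes an open file descriptor fd and a previously
   negotiated 352x288 YUYV capture format (202752 bytes per frame). */
char image[202752];
ssize_t bytes;

bytes = read(fd, image, sizeof(image));
if (bytes == -1) {
	perror("read");
	exit(EXIT_FAILURE);
}

/* 'bytes' bytes of image data are now stored in 'image'. */
</programlisting>
</informalexample>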
</section>
<section id="mmap">
<title>Streaming I/O (Memory Mapping)</title>
<para>Input and output devices support this I/O method when the
<constant>V4L2_CAP_STREAMING</constant> flag in the
<structfield>capabilities</structfield> field of &v4l2-capability;
returned by the &VIDIOC-QUERYCAP; ioctl is set. There are two
streaming methods; to determine if the memory mapping flavor is
supported, applications must call the &VIDIOC-REQBUFS; ioctl.</para>
<para>Streaming is an I/O method where only pointers to buffers
are exchanged between application and driver, the data itself is not
copied. Memory mapping is primarily intended to map buffers in device
memory into the application's address space. Device memory can be for
example the video memory on a graphics card with a video capture
add-on. However, being the most efficient I/O method available for a
long time, many other drivers support streaming as well, allocating
buffers in DMA-able main memory.</para>
<para>A driver can support many sets of buffers. Each set is
identified by a unique buffer type value. The sets are independent and
each set can hold a different type of data. To access different sets
at the same time different file descriptors must be used.<footnote>
<para>One could use one file descriptor and set the buffer
type field accordingly when calling &VIDIOC-QBUF; etc., but it makes
the <function>select()</function> function ambiguous. We also like the
clean approach of one file descriptor per logical stream. Video
overlay for example is also a logical stream, although the CPU is not
needed for continuous operation.</para>
</footnote></para>
<para>To allocate device buffers applications call the
&VIDIOC-REQBUFS; ioctl with the desired number of buffers and buffer
type, for example <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant>.
This ioctl can also be used to change the number of buffers or to free
the allocated memory, provided none of the buffers are still
mapped.</para>
<para>Before applications can access the buffers they must map
them into their address space with the &func-mmap; function. The
location of the buffers in device memory can be determined with the
&VIDIOC-QUERYBUF; ioctl. In the single-planar API case, the
<structfield>m.offset</structfield> and <structfield>length</structfield>
returned in a &v4l2-buffer; are passed as sixth and second parameter to the
<function>mmap()</function> function. When using the multi-planar API,
struct &v4l2-buffer; contains an array of &v4l2-plane; structures, each
containing its own <structfield>m.offset</structfield> and
<structfield>length</structfield>. When using the multi-planar API, every
plane of every buffer has to be mapped separately, so the number of
calls to &func-mmap; should equal the number of buffers times the number
of planes in each buffer. The offset and length values must not be modified.
Remember, the buffers are allocated in physical memory, as opposed to virtual
memory, which can be swapped out to disk. Applications should free the buffers
as soon as possible with the &func-munmap; function.</para>
<example>
<title>Mapping buffers in the single-planar API</title>
<programlisting>
&v4l2-requestbuffers; reqbuf;
struct {
	void *start;
	size_t length;
} *buffers;
unsigned int i;

memset(&amp;reqbuf, 0, sizeof(reqbuf));
reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
reqbuf.memory = V4L2_MEMORY_MMAP;
reqbuf.count = 20;

if (-1 == ioctl (fd, &VIDIOC-REQBUFS;, &amp;reqbuf)) {
	if (errno == EINVAL)
		printf("Video capturing or mmap-streaming is not supported\n");
	else
		perror("VIDIOC_REQBUFS");

	exit(EXIT_FAILURE);
}

/* We want at least five buffers. */

if (reqbuf.count &lt; 5) {
	/* You may need to free the buffers here. */
	printf("Not enough buffer memory\n");
	exit(EXIT_FAILURE);
}

buffers = calloc(reqbuf.count, sizeof(*buffers));
assert(buffers != NULL);

for (i = 0; i &lt; reqbuf.count; i++) {
	&v4l2-buffer; buffer;

	memset(&amp;buffer, 0, sizeof(buffer));
	buffer.type = reqbuf.type;
	buffer.memory = V4L2_MEMORY_MMAP;
	buffer.index = i;

	if (-1 == ioctl (fd, &VIDIOC-QUERYBUF;, &amp;buffer)) {
		perror("VIDIOC_QUERYBUF");
		exit(EXIT_FAILURE);
	}

	buffers[i].length = buffer.length; /* remember for munmap() */

	buffers[i].start = mmap(NULL, buffer.length,
				PROT_READ | PROT_WRITE, /* recommended */
				MAP_SHARED,             /* recommended */
				fd, buffer.m.offset);

	if (MAP_FAILED == buffers[i].start) {
		/* If you do not exit here you should unmap() and free()
		   the buffers mapped so far. */
		perror("mmap");
		exit(EXIT_FAILURE);
	}
}

/* Cleanup. */

for (i = 0; i &lt; reqbuf.count; i++)
	munmap(buffers[i].start, buffers[i].length);
</programlisting>
</example>
<example>
<title>Mapping buffers in the multi-planar API</title>
<programlisting>
&v4l2-requestbuffers; reqbuf;
/* Our current format uses 3 planes per buffer */
#define FMT_NUM_PLANES 3

struct {
	void *start[FMT_NUM_PLANES];
	size_t length[FMT_NUM_PLANES];
} *buffers;
unsigned int i, j;

memset(&amp;reqbuf, 0, sizeof(reqbuf));
reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
reqbuf.memory = V4L2_MEMORY_MMAP;
reqbuf.count = 20;

if (ioctl(fd, &VIDIOC-REQBUFS;, &amp;reqbuf) &lt; 0) {
	if (errno == EINVAL)
		printf("Video capturing or mmap-streaming is not supported\n");
	else
		perror("VIDIOC_REQBUFS");

	exit(EXIT_FAILURE);
}

/* We want at least five buffers. */

if (reqbuf.count &lt; 5) {
	/* You may need to free the buffers here. */
	printf("Not enough buffer memory\n");
	exit(EXIT_FAILURE);
}

buffers = calloc(reqbuf.count, sizeof(*buffers));
assert(buffers != NULL);

for (i = 0; i &lt; reqbuf.count; i++) {
	&v4l2-buffer; buffer;
	&v4l2-plane; planes[FMT_NUM_PLANES];

	memset(&amp;buffer, 0, sizeof(buffer));
	buffer.type = reqbuf.type;
	buffer.memory = V4L2_MEMORY_MMAP;
	buffer.index = i;
	/* length in struct v4l2_buffer in multi-planar API stores the size
	 * of planes array. */
	buffer.length = FMT_NUM_PLANES;
	buffer.m.planes = planes;

	if (ioctl(fd, &VIDIOC-QUERYBUF;, &amp;buffer) &lt; 0) {
		perror("VIDIOC_QUERYBUF");
		exit(EXIT_FAILURE);
	}

	/* Every plane has to be mapped separately */
	for (j = 0; j &lt; FMT_NUM_PLANES; j++) {
		buffers[i].length[j] = buffer.m.planes[j].length; /* remember for munmap() */

		buffers[i].start[j] = mmap(NULL, buffer.m.planes[j].length,
					   PROT_READ | PROT_WRITE, /* recommended */
					   MAP_SHARED,             /* recommended */
					   fd, buffer.m.planes[j].m.offset);

		if (MAP_FAILED == buffers[i].start[j]) {
			/* If you do not exit here you should unmap() and free()
			   the buffers and planes mapped so far. */
			perror("mmap");
			exit(EXIT_FAILURE);
		}
	}
}

/* Cleanup. */

for (i = 0; i &lt; reqbuf.count; i++)
	for (j = 0; j &lt; FMT_NUM_PLANES; j++)
		munmap(buffers[i].start[j], buffers[i].length[j]);
</programlisting>
</example>
<para>Conceptually streaming drivers maintain two buffer queues, an incoming
and an outgoing queue. They separate the synchronous capture or output
operation locked to a video clock from the application which is
subject to random disk or network delays and preemption by
other processes, thereby reducing the probability of data loss.
The queues are organized as FIFOs: buffers are output in the order they
were enqueued in the incoming FIFO, and are dequeued from the outgoing
FIFO in the order they were captured.</para>
<para>The driver may require a minimum number of buffers enqueued
at all times to function; apart from this no limit exists on the number
of buffers applications can enqueue in advance, or dequeue and
process. They can also enqueue in a different order than buffers have
been dequeued, and the driver can <emphasis>fill</emphasis> enqueued
<emphasis>empty</emphasis> buffers in any order. <footnote>
<para>Random enqueue order permits applications processing
images out of order (such as video codecs) to return buffers earlier,
reducing the probability of data loss. Random fill order allows
drivers to reuse buffers on a LIFO-basis, taking advantage of caches
holding scatter-gather lists and the like.</para>
</footnote> The index number of a buffer (&v4l2-buffer;
<structfield>index</structfield>) plays no role here, it only
identifies the buffer.</para>
<para>Initially all mapped buffers are in dequeued state,
inaccessible by the driver. For capturing applications it is customary
to first enqueue all mapped buffers, then to start capturing and enter
the read loop. Here the application waits until a filled buffer can be
dequeued, and re-enqueues the buffer when the data is no longer
needed. Output applications fill and enqueue buffers; when enough
buffers are stacked up the output is started with
<constant>VIDIOC_STREAMON</constant>. In the write loop, when
the application runs out of free buffers, it must wait until an empty
buffer can be dequeued and reused.</para>
<para>To enqueue and dequeue a buffer applications use the
&VIDIOC-QBUF; and &VIDIOC-DQBUF; ioctl. The status of a buffer being
mapped, enqueued, full or empty can be determined at any time using the
&VIDIOC-QUERYBUF; ioctl. Two methods exist to suspend execution of the
application until one or more buffers can be dequeued. By default
<constant>VIDIOC_DQBUF</constant> blocks when no buffer is in the
outgoing queue. When the <constant>O_NONBLOCK</constant> flag was
given to the &func-open; function, <constant>VIDIOC_DQBUF</constant>
returns immediately with an &EAGAIN; when no buffer is available. The
&func-select; and &func-poll; functions are always available.</para>
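<para>For illustration, a minimal sketch of waiting for a filled buffer
with <function>select()</function> before dequeuing it; the one second
timeout is an arbitrary choice:</para>
<informalexample>
<programlisting>
&v4l2-buffer; buffer;
fd_set fds;
struct timeval tv;
int r;

FD_ZERO(&amp;fds);
FD_SET(fd, &amp;fds);

/* Wait up to one second for a buffer to become dequeueable. */
tv.tv_sec = 1;
tv.tv_usec = 0;

r = select(fd + 1, &amp;fds, NULL, NULL, &amp;tv);
if (r == -1) {
	perror("select");
	exit(EXIT_FAILURE);
} else if (r == 0) {
	fprintf(stderr, "select timeout\n");
	exit(EXIT_FAILURE);
}

memset(&amp;buffer, 0, sizeof(buffer));
buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buffer.memory = V4L2_MEMORY_MMAP;

if (-1 == ioctl(fd, &VIDIOC-DQBUF;, &amp;buffer)) {
	perror("VIDIOC_DQBUF");
	exit(EXIT_FAILURE);
}
</programlisting>
</informalexample>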
<para>To start and stop capturing or output applications call the
&VIDIOC-STREAMON; and &VIDIOC-STREAMOFF; ioctl. Note
<constant>VIDIOC_STREAMOFF</constant> removes all buffers from both
queues as a side effect. Since there is no notion of doing anything
"now" on a multitasking system, if an application needs to synchronize
with another event it should examine the &v4l2-buffer;
<structfield>timestamp</structfield> of captured buffers, or set the
field before enqueuing buffers for output.</para>
<para>Drivers implementing memory mapping I/O must
support the <constant>VIDIOC_REQBUFS</constant>,
<constant>VIDIOC_QUERYBUF</constant>,
<constant>VIDIOC_QBUF</constant>, <constant>VIDIOC_DQBUF</constant>,
<constant>VIDIOC_STREAMON</constant> and
<constant>VIDIOC_STREAMOFF</constant> ioctl, the
<function>mmap()</function>, <function>munmap()</function>,
<function>select()</function> and <function>poll()</function>
functions.<footnote>
<para>At the driver level <function>select()</function> and
<function>poll()</function> are the same, and
<function>select()</function> is too important to be optional. The
rest should be evident.</para>
</footnote></para>
<para>[capture example]</para>
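<para>Pending a complete example, the following sketch outlines the
capture loop described above. It assumes the reqbuf and buffers
variables of the single-planar mapping example above; the
<function>process_image()</function> helper and the fixed number of
frames are hypothetical:</para>
<informalexample>
<programlisting>
unsigned int i, frame;
&v4l2-buffer; buffer;
enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

/* Enqueue all mapped buffers before starting the stream. */
for (i = 0; i &lt; reqbuf.count; i++) {
	memset(&amp;buffer, 0, sizeof(buffer));
	buffer.type = type;
	buffer.memory = V4L2_MEMORY_MMAP;
	buffer.index = i;

	if (-1 == ioctl(fd, &VIDIOC-QBUF;, &amp;buffer)) {
		perror("VIDIOC_QBUF");
		exit(EXIT_FAILURE);
	}
}

if (-1 == ioctl(fd, &VIDIOC-STREAMON;, &amp;type)) {
	perror("VIDIOC_STREAMON");
	exit(EXIT_FAILURE);
}

/* Read loop: dequeue a filled buffer, use it, re-enqueue it. */
for (frame = 0; frame &lt; 100; frame++) {
	memset(&amp;buffer, 0, sizeof(buffer));
	buffer.type = type;
	buffer.memory = V4L2_MEMORY_MMAP;

	if (-1 == ioctl(fd, &VIDIOC-DQBUF;, &amp;buffer)) {
		perror("VIDIOC_DQBUF");
		exit(EXIT_FAILURE);
	}

	/* Hypothetical helper consuming the payload of this buffer. */
	process_image(buffers[buffer.index].start, buffer.bytesused);

	if (-1 == ioctl(fd, &VIDIOC-QBUF;, &amp;buffer)) {
		perror("VIDIOC_QBUF");
		exit(EXIT_FAILURE);
	}
}

if (-1 == ioctl(fd, &VIDIOC-STREAMOFF;, &amp;type)) {
	perror("VIDIOC_STREAMOFF");
	exit(EXIT_FAILURE);
}
</programlisting>
</informalexample>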
</section>
<section id="userp">
<title>Streaming I/O (User Pointers)</title>
<para>Input and output devices support this I/O method when the
<constant>V4L2_CAP_STREAMING</constant> flag in the
<structfield>capabilities</structfield> field of &v4l2-capability;
returned by the &VIDIOC-QUERYCAP; ioctl is set. Whether the particular
user pointer method (and not only memory mapping) is supported must be
determined by calling the &VIDIOC-REQBUFS; ioctl.</para>
<para>This I/O method combines advantages of the read/write and
memory mapping methods. Buffers (planes) are allocated by the application
itself, and can reside for example in virtual or shared memory. Only
pointers to data are exchanged; these pointers and meta-information
are passed in &v4l2-buffer; (or in &v4l2-plane; in the multi-planar API case).
The driver must be switched into user pointer I/O mode by calling the
&VIDIOC-REQBUFS; ioctl with the desired buffer type. No buffers (planes) are allocated
beforehand, consequently they are not indexed and cannot be queried like mapped
buffers with the <constant>VIDIOC_QUERYBUF</constant> ioctl.</para>
<example>
<title>Initiating streaming I/O with user pointers</title>
<programlisting>
&v4l2-requestbuffers; reqbuf;

memset (&amp;reqbuf, 0, sizeof (reqbuf));
reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
reqbuf.memory = V4L2_MEMORY_USERPTR;

if (ioctl (fd, &VIDIOC-REQBUFS;, &amp;reqbuf) == -1) {
	if (errno == EINVAL)
		printf ("Video capturing or user pointer streaming is not supported\n");
	else
		perror ("VIDIOC_REQBUFS");

	exit (EXIT_FAILURE);
}
</programlisting>
</example>
<para>Buffer (plane) addresses and sizes are passed on the fly with the
&VIDIOC-QBUF; ioctl. Although buffers are commonly cycled,
applications can pass different addresses and sizes at each
<constant>VIDIOC_QBUF</constant> call. If required by the hardware the
driver swaps memory pages within physical memory to create a
contiguous area of memory. This happens transparently to the
application in the virtual memory subsystem of the kernel. When buffer
pages have been swapped out to disk they are brought back and finally
locked in physical memory for DMA.<footnote>
<para>We expect that frequently used buffers are typically not
swapped out. Anyway, the process of swapping, locking or generating
scatter-gather lists may be time consuming. The delay can be masked by
the depth of the incoming buffer queue, and perhaps by maintaining
caches assuming a buffer will be soon enqueued again. On the other
hand, to optimize memory usage drivers can limit the number of buffers
locked in advance and recycle the most recently used buffers first. Of
course, the pages of empty buffers in the incoming queue need not be
saved to disk. Output buffers must be saved on the incoming and
outgoing queue because an application may share them with other
processes.</para>
</footnote></para>
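<para>For illustration, a minimal sketch of enqueuing an
application-allocated buffer in the single-planar case; the buffer size
is a placeholder and must match the negotiated format:</para>
<informalexample>
<programlisting>
&v4l2-buffer; buffer;
void *start;
size_t length = 202752; /* placeholder, must match the negotiated format */

/* The application allocates the memory itself, here with malloc(). */
start = malloc(length);
assert(start != NULL);

memset(&amp;buffer, 0, sizeof(buffer));
buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buffer.memory = V4L2_MEMORY_USERPTR;
buffer.m.userptr = (unsigned long)start;
buffer.length = length;

if (-1 == ioctl(fd, &VIDIOC-QBUF;, &amp;buffer)) {
	perror("VIDIOC_QBUF");
	exit(EXIT_FAILURE);
}
</programlisting>
</informalexample>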
<para>Filled or displayed buffers are dequeued with the
&VIDIOC-DQBUF; ioctl. The driver can unlock the memory pages at any
time between the completion of the DMA and this ioctl. The memory is
also unlocked when &VIDIOC-STREAMOFF; or &VIDIOC-REQBUFS; is called, or
when the device is closed. Applications must take care not to free
buffers without dequeuing. First, the buffers remain locked until
further notice, wasting physical memory. Second, the driver will not be
notified when the memory is returned to the application's free list
and subsequently reused for other purposes, possibly completing the
requested DMA and overwriting valuable data.</para>
<para>For capturing applications it is customary to enqueue a
number of empty buffers, to start capturing and enter the read loop.
Here the application waits until a filled buffer can be dequeued, and
re-enqueues the buffer when the data is no longer needed. Output
applications fill and enqueue buffers; when enough buffers are stacked
up, output is started. In the write loop, when the application
runs out of free buffers it must wait until an empty buffer can be
dequeued and reused. Two methods exist to suspend execution of the
application until one or more buffers can be dequeued. By default
<constant>VIDIOC_DQBUF</constant> blocks when no buffer is in the
outgoing queue. When the <constant>O_NONBLOCK</constant> flag was
given to the &func-open; function, <constant>VIDIOC_DQBUF</constant>
returns immediately with an &EAGAIN; when no buffer is available. The
&func-select; and &func-poll; functions are always available.</para>
<para>To start and stop capturing or output applications call the
&VIDIOC-STREAMON; and &VIDIOC-STREAMOFF; ioctl. Note
<constant>VIDIOC_STREAMOFF</constant> removes all buffers from both
queues and unlocks all buffers as a side effect. Since there is no
notion of doing anything "now" on a multitasking system, if an
application needs to synchronize with another event it should examine
the &v4l2-buffer; <structfield>timestamp</structfield> of captured
buffers, or set the field before enqueuing buffers for output.</para>
<para>Drivers implementing user pointer I/O must
support the <constant>VIDIOC_REQBUFS</constant>,
<constant>VIDIOC_QBUF</constant>, <constant>VIDIOC_DQBUF</constant>,
<constant>VIDIOC_STREAMON</constant> and
<constant>VIDIOC_STREAMOFF</constant> ioctl, the
<function>select()</function> and <function>poll()</function> functions.<footnote>
<para>At the driver level <function>select()</function> and
<function>poll()</function> are the same, and
<function>select()</function> is too important to be optional. The
rest should be evident.</para>
</footnote></para>
</section>
<section id="async">
<title>Asynchronous I/O</title>
<para>This method is not defined yet.</para>
</section>
<section id="buffer">
<title>Buffers</title>
<para>A buffer contains data exchanged by application and
driver using one of the Streaming I/O methods. In the multi-planar API, the
data is held in planes, while the buffer structure acts as a container
for the planes. Only pointers to buffers (planes) are exchanged, the data
itself is not copied. These pointers, together with meta-information like
timestamps or field parity, are stored in a struct
<structname>v4l2_buffer</structname>, argument to
the &VIDIOC-QUERYBUF;, &VIDIOC-QBUF; and &VIDIOC-DQBUF; ioctl.
In the multi-planar API, some plane-specific members of struct
<structname>v4l2_buffer</structname>, such as pointers and sizes for each
plane, are stored in struct <structname>v4l2_plane</structname> instead.
In that case, struct <structname>v4l2_buffer</structname> contains an array of
plane structures.</para>
<para>Nominally timestamps refer to the first data byte transmitted.
In practice however the wide range of hardware covered by the V4L2 API
limits timestamp accuracy. Often an interrupt routine will
sample the system clock shortly after the field or frame was stored
completely in memory. So applications must expect a constant
difference up to one field or frame period plus a small (few scan
lines) random error. The delay and error can be much
larger due to compression or transmission over an external bus when
the frames are not properly stamped by the sender. This is frequently
the case with USB cameras. Here timestamps refer to the instant the
field or frame was received by the driver, not the capture time. These
devices can be identified by the fact that they do not enumerate any
video standards, see <xref linkend="standard" />.</para>
<para>Similar limitations apply to output timestamps. Typically
the video hardware locks to a clock controlling the video timing, the
horizontal and vertical synchronization pulses. At some point in the
line sequence, possibly the vertical blanking, an interrupt routine
samples the system clock, compares against the timestamp and programs
the hardware to repeat the previous field or frame, or to display the
buffer contents.</para>
<para>Apart from limitations of the video device and natural
inaccuracies of all clocks, it should be noted that system time itself is
not perfectly stable. It can be affected by power saving cycles,
warped to insert leap seconds, or even turned back or forth by the
system administrator, affecting long term measurements. <footnote>
<para>Since no other Linux multimedia
API supports unadjusted time it would be foolish to introduce it here. We
must use a universally supported clock to synchronize different media,
hence time of day.</para>
</footnote></para>
<table frame="none" pgwide="1" id="v4l2-buffer">
<title>struct <structname>v4l2_buffer</structname></title>
<tgroup cols="4">
&cs-ustr;
<tbody valign="top">
<row>
<entry>__u32</entry>
<entry><structfield>index</structfield></entry>
<entry></entry>
<entry>Number of the buffer, set by the application. This
field is only used for <link linkend="mmap">memory mapping</link> I/O
and can range from zero to the number of buffers allocated
with the &VIDIOC-REQBUFS; ioctl (&v4l2-requestbuffers; <structfield>count</structfield>) minus one.</entry>
</row>
<row>
<entry>&v4l2-buf-type;</entry>
<entry><structfield>type</structfield></entry>
<entry></entry>
<entry>Type of the buffer, same as &v4l2-format;
<structfield>type</structfield> or &v4l2-requestbuffers;
<structfield>type</structfield>, set by the application.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>bytesused</structfield></entry>
<entry></entry>
<entry>The number of bytes occupied by the data in the
buffer. It depends on the negotiated data format and may change with
each buffer for compressed variable size data like JPEG images.
Drivers must set this field when <structfield>type</structfield>
refers to an input stream, applications when an output stream.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>flags</structfield></entry>
<entry></entry>
<entry>Flags set by the application or driver, see <xref
linkend="buffer-flags" />.</entry>
</row>
<row>
<entry>&v4l2-field;</entry>
<entry><structfield>field</structfield></entry>
<entry></entry>
<entry>Indicates the field order of the image in the
buffer, see <xref linkend="v4l2-field" />. This field is not used when
the buffer contains VBI data. Drivers must set it when
<structfield>type</structfield> refers to an input stream,
applications when an output stream.</entry>
</row>
<row>
<entry>struct timeval</entry>
<entry><structfield>timestamp</structfield></entry>
<entry></entry>
<entry><para>For input streams this is the
system time (as returned by the <function>gettimeofday()</function>
function) when the first data byte was captured. For output streams
the data will not be displayed before this time, secondary to the
nominal frame rate determined by the current video standard in
enqueued order. Applications can for example zero this field to
display frames as soon as possible. The driver stores the time at
which the first data byte was actually sent out in the
<structfield>timestamp</structfield> field. This permits
applications to monitor the drift between the video and system
clock.</para></entry>
</row>
<row>
<entry>&v4l2-timecode;</entry>
<entry><structfield>timecode</structfield></entry>
<entry></entry>
<entry>When <structfield>type</structfield> is
<constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant> and the
<constant>V4L2_BUF_FLAG_TIMECODE</constant> flag is set in
<structfield>flags</structfield>, this structure contains a frame
timecode. In <link linkend="v4l2-field">V4L2_FIELD_ALTERNATE</link>
mode the top and bottom field contain the same timecode.
Timecodes are intended to help video editing and are typically recorded on
video tapes, but also embedded in compressed formats like MPEG. This
field is independent of the <structfield>timestamp</structfield> and
<structfield>sequence</structfield> fields.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>sequence</structfield></entry>
<entry></entry>
<entry>Set by the driver, counting the frames in the
sequence.</entry>
</row>
<row>
<entry spanname="hspan"><para>In <link
linkend="v4l2-field">V4L2_FIELD_ALTERNATE</link> mode the top and
bottom field have the same sequence number. The count starts at zero
and includes dropped or repeated frames. A dropped frame was received
by an input device but could not be stored due to lack of free buffer
space. A repeated frame was displayed again by an output device
because the application did not pass new data in
time.</para><para>Note this may count the frames received
e.g. over USB, without taking into account the frames dropped by the
remote hardware due to limited compression throughput or bus
bandwidth. These devices can be identified by the fact that they do not
enumerate any video standards, see <xref linkend="standard" />.</para></entry>
</row>
<row>
<entry>&v4l2-memory;</entry>
<entry><structfield>memory</structfield></entry>
<entry></entry>
<entry>This field must be set by applications and/or drivers
in accordance with the selected I/O method.</entry>
</row>
<row>
<entry>union</entry>
<entry><structfield>m</structfield></entry>
</row>
<row>
<entry></entry>
<entry>__u32</entry>
<entry><structfield>offset</structfield></entry>
<entry>For the single-planar API and when
<structfield>memory</structfield> is <constant>V4L2_MEMORY_MMAP</constant> this
is the offset of the buffer from the start of the device memory. The value is
returned by the driver and apart from serving as parameter to the &func-mmap;
function is not useful for applications. See <xref linkend="mmap" /> for details.
</entry>
</row>
<row>
<entry></entry>
<entry>unsigned long</entry>
<entry><structfield>userptr</structfield></entry>
<entry>For the single-planar API and when
<structfield>memory</structfield> is <constant>V4L2_MEMORY_USERPTR</constant>
this is a pointer to the buffer (cast to unsigned long type) in virtual
memory, set by the application. See <xref linkend="userp" /> for details.
</entry>
</row>
<row>
<entry></entry>
<entry>struct v4l2_plane</entry>
<entry><structfield>*planes</structfield></entry>
<entry>When using the multi-planar API, contains a userspace pointer
to an array of &v4l2-plane;. The size of the array should be put
in the <structfield>length</structfield> field of this
<structname>v4l2_buffer</structname> structure.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>length</structfield></entry>
<entry></entry>
<entry>Size of the buffer (not the payload) in bytes for the
single-planar API. For the multi-planar API it should contain the
number of elements in the <structfield>planes</structfield> array.
</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>input</structfield></entry>
<entry></entry>
<entry>Some video capture drivers support rapid and
synchronous video input changes, a function useful for example in
video surveillance applications. For this purpose applications set the
<constant>V4L2_BUF_FLAG_INPUT</constant> flag, and this field to the
number of a video input as in &v4l2-input; field
<structfield>index</structfield>.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>reserved</structfield></entry>
<entry></entry>
<entry>A place holder for future extensions and custom
(driver defined) buffer types
<constant>V4L2_BUF_TYPE_PRIVATE</constant> and higher. Applications
should set this to 0.</entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="none" pgwide="1" id="v4l2-plane">
<title>struct <structname>v4l2_plane</structname></title>
<tgroup cols="4">
&cs-ustr;
<tbody valign="top">
<row>
<entry>__u32</entry>
<entry><structfield>bytesused</structfield></entry>
<entry></entry>
<entry>The number of bytes occupied by data in the plane
(its payload).</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>length</structfield></entry>
<entry></entry>
<entry>Size in bytes of the plane (not its payload).</entry>
</row>
<row>
<entry>union</entry>
<entry><structfield>m</structfield></entry>
<entry></entry>
<entry></entry>
</row>
<row>
<entry></entry>
<entry>__u32</entry>
<entry><structfield>mem_offset</structfield></entry>
<entry>When the memory type in the containing &v4l2-buffer; is
<constant>V4L2_MEMORY_MMAP</constant>, this is the value that
should be passed to &func-mmap;, similar to the
<structfield>offset</structfield> field in &v4l2-buffer;.</entry>
</row>
<row>
<entry></entry>
<entry>unsigned long</entry>
<entry><structfield>userptr</structfield></entry>
<entry>When the memory type in the containing &v4l2-buffer; is
<constant>V4L2_MEMORY_USERPTR</constant>, this is a userspace
pointer to the memory allocated for this plane by an application.
</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>data_offset</structfield></entry>
<entry></entry>
<entry>Offset in bytes to video data in the plane, if applicable.
</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>reserved[11]</structfield></entry>
<entry></entry>
<entry>Reserved for future use. Should be zeroed by an
application.</entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="none" pgwide="1" id="v4l2-buf-type">
<title>enum v4l2_buf_type</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
<row>
<entry><constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant></entry>
<entry>1</entry>
<entry>Buffer of a single-planar video capture stream, see <xref
linkend="capture" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE</constant>
</entry>
<entry>9</entry>
<entry>Buffer of a multi-planar video capture stream, see <xref
linkend="capture" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_VIDEO_OUTPUT</constant></entry>
<entry>2</entry>
<entry>Buffer of a single-planar video output stream, see <xref
linkend="output" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE</constant>
</entry>
<entry>10</entry>
<entry>Buffer of a multi-planar video output stream, see <xref
linkend="output" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_VIDEO_OVERLAY</constant></entry>
<entry>3</entry>
<entry>Buffer for video overlay, see <xref linkend="overlay" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_VBI_CAPTURE</constant></entry>
<entry>4</entry>
<entry>Buffer of a raw VBI capture stream, see <xref
linkend="raw-vbi" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_VBI_OUTPUT</constant></entry>
<entry>5</entry>
<entry>Buffer of a raw VBI output stream, see <xref
linkend="raw-vbi" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_SLICED_VBI_CAPTURE</constant></entry>
<entry>6</entry>
<entry>Buffer of a sliced VBI capture stream, see <xref
linkend="sliced" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_SLICED_VBI_OUTPUT</constant></entry>
<entry>7</entry>
<entry>Buffer of a sliced VBI output stream, see <xref
linkend="sliced" />.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY</constant></entry>
<entry>8</entry>
<entry>Buffer for video output overlay (OSD), see <xref
linkend="osd" />. Status: <link
linkend="experimental">Experimental</link>.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_TYPE_PRIVATE</constant></entry>
<entry>0x80</entry>
<entry>This and higher values are reserved for custom
(driver defined) buffer types.</entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="none" pgwide="1" id="buffer-flags">
<title>Buffer Flags</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
<row>
<entry><constant>V4L2_BUF_FLAG_MAPPED</constant></entry>
<entry>0x0001</entry>
<entry>The buffer resides in device memory and has been mapped
into the application's address space, see <xref linkend="mmap" /> for details.
Drivers set or clear this flag when the
<link linkend="vidioc-querybuf">VIDIOC_QUERYBUF</link>, <link
linkend="vidioc-qbuf">VIDIOC_QBUF</link> or <link
linkend="vidioc-qbuf">VIDIOC_DQBUF</link> ioctl is called. Set by the driver.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_QUEUED</constant></entry>
<entry>0x0002</entry>
<entry>Internally drivers maintain two buffer queues, an
incoming and outgoing queue. When this flag is set, the buffer is
currently on the incoming queue. It automatically moves to the
outgoing queue after the buffer has been filled (capture devices) or
displayed (output devices). Drivers set or clear this flag when the
<constant>VIDIOC_QUERYBUF</constant> ioctl is called. After
successfully calling the <constant>VIDIOC_QBUF</constant> ioctl it is
always set, and after <constant>VIDIOC_DQBUF</constant> it is always
cleared.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_DONE</constant></entry>
<entry>0x0004</entry>
<entry>When this flag is set, the buffer is currently on
the outgoing queue, ready to be dequeued from the driver. Drivers set
or clear this flag when the <constant>VIDIOC_QUERYBUF</constant> ioctl
is called. After calling the <constant>VIDIOC_QBUF</constant> or
<constant>VIDIOC_DQBUF</constant> it is always cleared. Of course a
buffer cannot be on both queues at the same time, the
<constant>V4L2_BUF_FLAG_QUEUED</constant> and
<constant>V4L2_BUF_FLAG_DONE</constant> flags are mutually exclusive.
They can both be cleared however, in which case the buffer is in the
"dequeued" state, in the application domain so to speak.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_ERROR</constant></entry>
<entry>0x0040</entry>
<entry>When this flag is set, the buffer has been dequeued
successfully, although the data might have been corrupted.
This is recoverable, streaming may continue as normal and
the buffer may be reused normally.
Drivers set this flag when the <constant>VIDIOC_DQBUF</constant>
ioctl is called.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_KEYFRAME</constant></entry>
<entry>0x0008</entry>
<entry>Drivers set or clear this flag when calling the
<constant>VIDIOC_DQBUF</constant> ioctl. It may be set by video
capture devices when the buffer contains a compressed image which is a
key frame (or field), &ie; can be decompressed on its own.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_PFRAME</constant></entry>
<entry>0x0010</entry>
<entry>Similar to <constant>V4L2_BUF_FLAG_KEYFRAME</constant>
this flags predicted frames or fields which contain only differences to a
previous key frame.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_BFRAME</constant></entry>
<entry>0x0020</entry>
<entry>Similar to <constant>V4L2_BUF_FLAG_PFRAME</constant>
this is a bidirectional predicted frame or field. [ooc tbd]</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_TIMECODE</constant></entry>
<entry>0x0100</entry>
<entry>The <structfield>timecode</structfield> field is valid.
Drivers set or clear this flag when the <constant>VIDIOC_DQBUF</constant>
ioctl is called.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_INPUT</constant></entry>
<entry>0x0200</entry>
<entry>The <structfield>input</structfield> field is valid.
Applications set or clear this flag before calling the
<constant>VIDIOC_QBUF</constant> ioctl.</entry>
</row>
</tbody>
</tgroup>
</table>
<table pgwide="1" frame="none" id="v4l2-memory">
<title>enum v4l2_memory</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
<row>
<entry><constant>V4L2_MEMORY_MMAP</constant></entry>
<entry>1</entry>
<entry>The buffer is used for <link linkend="mmap">memory
mapping</link> I/O.</entry>
</row>
<row>
<entry><constant>V4L2_MEMORY_USERPTR</constant></entry>
<entry>2</entry>
<entry>The buffer is used for <link linkend="userp">user
pointer</link> I/O.</entry>
</row>
<row>
<entry><constant>V4L2_MEMORY_OVERLAY</constant></entry>
<entry>3</entry>
<entry>[to do]</entry>
</row>
</tbody>
</tgroup>
</table>
<section>
<title>Timecodes</title>
<para>The <structname>v4l2_timecode</structname> structure is
designed to hold a <xref linkend="smpte12m" /> or similar timecode.
(struct <structname>timeval</structname> timestamps are stored in
&v4l2-buffer; field <structfield>timestamp</structfield>.)</para>
<table frame="none" pgwide="1" id="v4l2-timecode">
<title>struct <structname>v4l2_timecode</structname></title>
<tgroup cols="3">
&cs-str;
<tbody valign="top">
<row>
<entry>__u32</entry>
<entry><structfield>type</structfield></entry>
<entry>Frame rate the timecodes are based on, see <xref
linkend="timecode-type" />.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>flags</structfield></entry>
<entry>Timecode flags, see <xref linkend="timecode-flags" />.</entry>
</row>
<row>
<entry>__u8</entry>
<entry><structfield>frames</structfield></entry>
<entry>Frame count, 0 ... 23/24/29/49/59, depending on the
type of timecode.</entry>
</row>
<row>
<entry>__u8</entry>
<entry><structfield>seconds</structfield></entry>
<entry>Seconds count, 0 ... 59. This is a binary, not BCD number.</entry>
</row>
<row>
<entry>__u8</entry>
<entry><structfield>minutes</structfield></entry>
<entry>Minutes count, 0 ... 59. This is a binary, not BCD number.</entry>
</row>
<row>
<entry>__u8</entry>
<entry><structfield>hours</structfield></entry>
<entry>Hours count, 0 ... 23. This is a binary, not BCD number.</entry>
</row>
<row>
<entry>__u8</entry>
<entry><structfield>userbits</structfield>[4]</entry>
<entry>The "user group" bits from the timecode.</entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="none" pgwide="1" id="timecode-type">
<title>Timecode Types</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
<row>
<entry><constant>V4L2_TC_TYPE_24FPS</constant></entry>
<entry>1</entry>
<entry>24 frames per second, i.&nbsp;e. film.</entry>
</row>
<row>
<entry><constant>V4L2_TC_TYPE_25FPS</constant></entry>
<entry>2</entry>
<entry>25 frames per second, &ie; PAL or SECAM video.</entry>
</row>
<row>
<entry><constant>V4L2_TC_TYPE_30FPS</constant></entry>
<entry>3</entry>
<entry>30 frames per second, &ie; NTSC video.</entry>
</row>
<row>
<entry><constant>V4L2_TC_TYPE_50FPS</constant></entry>
<entry>4</entry>
<entry></entry>
</row>
<row>
<entry><constant>V4L2_TC_TYPE_60FPS</constant></entry>
<entry>5</entry>
<entry></entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="none" pgwide="1" id="timecode-flags">
<title>Timecode Flags</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
<row>
<entry><constant>V4L2_TC_FLAG_DROPFRAME</constant></entry>
<entry>0x0001</entry>
<entry>Indicates "drop frame" semantics for counting frames
in 29.97 fps material. When set, frame numbers 0 and 1 at the start of
each minute, except minutes 0, 10, 20, 30, 40, 50 are omitted from the
count.</entry>
</row>
<row>
<entry><constant>V4L2_TC_FLAG_COLORFRAME</constant></entry>
<entry>0x0002</entry>
<entry>The "color frame" flag.</entry>
</row>
<row>
<entry><constant>V4L2_TC_USERBITS_field</constant></entry>
<entry>0x000C</entry>
<entry>Field mask for the "binary group flags".</entry>
</row>
<row>
<entry><constant>V4L2_TC_USERBITS_USERDEFINED</constant></entry>
<entry>0x0000</entry>
<entry>Unspecified format.</entry>
</row>
<row>
<entry><constant>V4L2_TC_USERBITS_8BITCHARS</constant></entry>
<entry>0x0008</entry>
<entry>8-bit ISO characters.</entry>
</row>
</tbody>
</tgroup>
</table>
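<para>As an illustration of the drop frame rule above, here is a sketch
of a hypothetical helper converting a 30 fps timecode into an absolute
frame number:</para>
<informalexample>
<programlisting>
/* Sketch: convert a 30 fps timecode to an absolute frame number.
   With drop frame counting, frame numbers 0 and 1 are skipped at the
   start of every minute except every tenth minute. */
static unsigned int
tc_to_frames(const &v4l2-timecode; *tc)
{
	unsigned int minutes = tc-&gt;hours * 60 + tc-&gt;minutes;
	unsigned int frames = minutes * 60 * 30 + tc-&gt;seconds * 30 + tc-&gt;frames;

	if (tc-&gt;flags &amp; V4L2_TC_FLAG_DROPFRAME)
		frames -= 2 * (minutes - minutes / 10);

	return frames;
}
</programlisting>
</informalexample>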
</section>
</section>
<section id="field-order">
<title>Field Order</title>
<para>We have to distinguish between progressive and interlaced
video. Progressive video transmits all lines of a video image
sequentially. Interlaced video divides an image into two fields,
containing only the odd and even lines of the image, respectively.
The so called odd and even fields are transmitted alternately, and due
to a small delay between fields a cathode ray TV displays the lines
interleaved, yielding the original frame. This curious technique was
invented because at refresh rates similar to film the image would
fade out too quickly. Transmitting fields reduces the flicker without
the necessity of doubling the frame rate and with it the bandwidth
required for each channel.</para>
<para>It is important to understand that a video camera does not expose
one frame at a time, merely transmitting the frames separated into
fields. The fields are in fact captured at two different instants in
time. An object on screen may well move between one field and the
next. For applications analysing motion it is of paramount importance
to recognize which field of a frame is older, the <emphasis>temporal
order</emphasis>.</para>
<para>When the driver provides or accepts images field by field
rather than interleaved, it is also important that applications understand
how the fields combine to frames. We distinguish between top (aka odd) and
bottom (aka even) fields, the <emphasis>spatial order</emphasis>: The first line
of the top field is the first line of an interlaced frame, the first
line of the bottom field is the second line of that frame.</para>
<para>However because fields were captured one after the other,
arguing whether a frame commences with the top or bottom field is
pointless. Any two successive top and bottom, or bottom and top fields
yield a valid frame. Only when the source was progressive to begin
with, &eg; when transferring film to video, do two fields come from
the same frame, creating a natural order.</para>
<para>Counter to intuition the top field is not necessarily the
older field. Whether the older field contains the top or bottom lines
is a convention determined by the video standard. Hence the
distinction between temporal and spatial order of fields. The diagrams
below should make this clearer.</para>
<para>All video capture and output devices must report the current
field order. Some drivers may permit the selection of a different
order; to this end applications initialize the
<structfield>field</structfield> field of &v4l2-pix-format; before
calling the &VIDIOC-S-FMT; ioctl. If this is not desired it should
have the value <constant>V4L2_FIELD_ANY</constant> (0).</para>
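<para>For example, an application that accepts whatever field order the
driver chooses might initialize the format like this (a sketch; the
pixel format and image size are placeholders):</para>
<informalexample>
<programlisting>
&v4l2-format; format;

memset(&amp;format, 0, sizeof(format));
format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
format.fmt.pix.width = 352;
format.fmt.pix.height = 288;
format.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
format.fmt.pix.field = V4L2_FIELD_ANY; /* let the driver choose */

if (-1 == ioctl(fd, &VIDIOC-S-FMT;, &amp;format)) {
	perror("VIDIOC_S_FMT");
	exit(EXIT_FAILURE);
}

/* format.fmt.pix.field now holds the field order actually selected. */
</programlisting>
</informalexample>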
<table frame="none" pgwide="1" id="v4l2-field">
<title>enum v4l2_field</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
<row>
<entry><constant>V4L2_FIELD_ANY</constant></entry>
<entry>0</entry>
<entry>Applications request this field order when any
one of the <constant>V4L2_FIELD_NONE</constant>,
<constant>V4L2_FIELD_TOP</constant>,
<constant>V4L2_FIELD_BOTTOM</constant>, or
<constant>V4L2_FIELD_INTERLACED</constant> formats is acceptable.
Drivers choose depending on hardware capabilities or e.&nbsp;g. the
requested image size, and return the actual field order. &v4l2-buffer;
<structfield>field</structfield> can never be
<constant>V4L2_FIELD_ANY</constant>.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_NONE</constant></entry>
<entry>1</entry>
<entry>Images are in progressive format, not interlaced.
The driver may also indicate this order when it cannot distinguish
between <constant>V4L2_FIELD_TOP</constant> and
<constant>V4L2_FIELD_BOTTOM</constant>.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_TOP</constant></entry>
<entry>2</entry>
<entry>Images consist of the top (aka odd) field only.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_BOTTOM</constant></entry>
<entry>3</entry>
<entry>Images consist of the bottom (aka even) field only.
Applications may wish to prevent a device from capturing interlaced
images because they will have "comb" or "feathering" artefacts around
moving objects.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_INTERLACED</constant></entry>
<entry>4</entry>
<entry>Images contain both fields, interleaved line by
line. The temporal order of the fields (whether the top or bottom
field is first transmitted) depends on the current video standard.
M/NTSC transmits the bottom field first, all other standards the top
field first.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_SEQ_TB</constant></entry>
<entry>5</entry>
<entry>Images contain both fields, the top field lines
are stored first in memory, immediately followed by the bottom field
lines. Fields are always stored in temporal order, the older one first
in memory. Image sizes refer to the frame, not fields.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_SEQ_BT</constant></entry>
<entry>6</entry>
<entry>Images contain both fields, the bottom field
lines are stored first in memory, immediately followed by the top
field lines. Fields are always stored in temporal order, the older one
first in memory. Image sizes refer to the frame, not fields.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_ALTERNATE</constant></entry>
<entry>7</entry>
<entry>The two fields of a frame are passed in separate
buffers, in temporal order, &ie; the older one first. To indicate the field
parity (whether the current field is a top or bottom field) the driver
or application, depending on data direction, must set &v4l2-buffer;
<structfield>field</structfield> to
<constant>V4L2_FIELD_TOP</constant> or
<constant>V4L2_FIELD_BOTTOM</constant>. Any two successive fields pair
to build a frame. Whether fields are successive, without any dropped fields
between them (fields can drop individually), can be determined from
the &v4l2-buffer; <structfield>sequence</structfield> field. Image
sizes refer to the frame, not fields. This format cannot be selected
when using the read/write I/O method.<!-- Where it's indistinguishable
from V4L2_FIELD_SEQ_*. --></entry>
</row>
<row>
<entry><constant>V4L2_FIELD_INTERLACED_TB</constant></entry>
<entry>8</entry>
<entry>Images contain both fields, interleaved line by
line, top field first. The top field is transmitted first.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_INTERLACED_BT</constant></entry>
<entry>9</entry>
<entry>Images contain both fields, interleaved line by
line, top field first. The bottom field is transmitted first.</entry>
</row>
</tbody>
</tgroup>
</table>
<figure id="fieldseq-tb">
<title>Field Order, Top Field First Transmitted</title>
<mediaobject>
<imageobject>
<imagedata fileref="fieldseq_tb.pdf" format="PS" />
</imageobject>
<imageobject>
<imagedata fileref="fieldseq_tb.gif" format="GIF" />
</imageobject>
</mediaobject>
</figure>
<figure id="fieldseq-bt">
<title>Field Order, Bottom Field First Transmitted</title>
<mediaobject>
<imageobject>
<imagedata fileref="fieldseq_bt.pdf" format="PS" />
</imageobject>
<imageobject>
<imagedata fileref="fieldseq_bt.gif" format="GIF" />
</imageobject>
</mediaobject>
</figure>
</section>
<!--
Local Variables:
mode: sgml
sgml-parent-document: "v4l2.sgml"
indent-tabs-mode: nil
End:
-->