struct v4l2_pix_format
&cs-str;
__u32 width
    Image width in pixels.

__u32 height
    Image height in pixels. Applications set these fields to request an image size; drivers return the closest possible values. In case of planar formats the width and height apply to the largest plane. To avoid ambiguities drivers must return values rounded up to a multiple of the scale factor of any smaller planes. For example, when the image format is YUV 4:2:0, width and height must be multiples of two.

__u32 pixelformat
    The pixel format or type of compression, set by the application. This is a little endian four character code. V4L2 defines standard RGB formats in , YUV formats in , and reserved codes in .

&v4l2-field; field
    Video images are typically interlaced. Applications can request to capture or output only the top or bottom field, or both fields interlaced or sequentially stored in one buffer, or alternating in separate buffers. Drivers return the actual field order selected. For details see .

__u32 bytesperline
    Distance in bytes between the leftmost pixels in two adjacent lines. Both applications and drivers can set this field to request padding bytes at the end of each line. Drivers however may ignore the value requested by the application, returning width times bytes per pixel or a larger value required by the hardware. That implies applications can simply set this field to zero to get a reasonable default (see the negotiation sketch following this table). Video hardware may access padding bytes, therefore they must reside in accessible memory. Consider cases where padding bytes after the last line of an image cross a system page boundary. Input devices may write to padding bytes; their value is undefined. Output devices ignore the contents of padding bytes. When the image format is planar the bytesperline value applies to the largest plane and is divided by the same factor as the width field for any smaller planes. For example, the Cb and Cr planes of a YUV 4:2:0 image have half as many padding bytes following each line as the Y plane. To avoid ambiguities drivers must return a bytesperline value rounded up to a multiple of the scale factor.

__u32 sizeimage
    Size in bytes of the buffer to hold a complete image, set by the driver. Usually this is bytesperline times height. When the image consists of variable length compressed data this is the maximum number of bytes required to hold an image.

&v4l2-colorspace; colorspace
    This information supplements the pixelformat and must be set by the driver, see .

__u32 priv
    Reserved for custom (driver defined) additional information about formats. When not used drivers and applications must set this field to zero.
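The negotiation described in this table can be exercised with a few lines of code. The following minimal sketch (the device path /dev/video0 and the choice of V4L2_PIX_FMT_YUYV are assumptions for illustration, not part of the specification) requests a format with VIDIOC_S_FMT and prints the values the driver actually selected:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        /* Assumption: /dev/video0 is a capture device. */
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        /* Request an image size and pixel format; the driver may adjust. */
        fmt.fmt.pix.width        = 640;
        fmt.fmt.pix.height       = 480;
        fmt.fmt.pix.pixelformat  = V4L2_PIX_FMT_YUYV;
        fmt.fmt.pix.field        = V4L2_FIELD_INTERLACED;
        fmt.fmt.pix.bytesperline = 0;   /* zero: let the driver pick a default */

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
                perror("VIDIOC_S_FMT");
                close(fd);
                return 1;
        }

        /* The driver has filled in the closest possible values. */
        printf("got %ux%u, bytesperline %u, sizeimage %u\n",
               fmt.fmt.pix.width, fmt.fmt.pix.height,
               fmt.fmt.pix.bytesperline, fmt.fmt.pix.sizeimage);
        close(fd);
        return 0;
}

If the driver cannot satisfy the request exactly, the returned width, height, bytesperline and sizeimage reflect what it chose instead.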
Standard Image Formats

In order to exchange images between drivers and applications, it is necessary to have standard image data formats which both sides will interpret the same way. V4L2 includes several such formats, and this section is intended to be an unambiguous specification of the standard image data formats in V4L2.

V4L2 drivers are not limited to these formats, however. Driver-specific formats are possible. In that case the application may depend on a codec to convert images to one of the standard formats when needed. But the data can still be stored and retrieved in the proprietary format. For example, a device may support a proprietary compressed format. Applications can still capture and save the data in the compressed format, saving much disk space, and later use a codec to convert the images to the X Windows screen format when the video is to be displayed.

Even so, ultimately, some standard formats are needed, so the V4L2 specification would not be complete without well-defined standard formats.

The V4L2 standard formats are mainly uncompressed formats. The pixels are always arranged in memory from left to right, and from top to bottom. The first byte of data in the image buffer is always for the leftmost pixel of the topmost row. Following that is the pixel immediately to its right, and so on until the end of the top row of pixels. Following the rightmost pixel of the row there may be zero or more bytes of padding to guarantee that each row of pixel data has a certain alignment. Following the pad bytes, if any, is data for the leftmost pixel of the second row from the top, and so on. The last row has just as many pad bytes after it as the other rows.
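The byte offset of any pixel in such a packed layout follows directly from this description. The helper below is only an illustrative sketch (the function name and the example of 2 bytes per pixel are ours, not part of the API):

#include <stddef.h>

/* Byte offset of the pixel at column x, row y in a packed image,
 * given the line stride (bytesperline) and the number of bytes
 * occupied by one pixel. Padding bytes at the end of each line are
 * skipped implicitly because the stride already includes them. */
static size_t pixel_offset(unsigned int x, unsigned int y,
                           unsigned int bytesperline,
                           unsigned int bytes_per_pixel)
{
        return (size_t)y * bytesperline + (size_t)x * bytes_per_pixel;
}

/* Example: for a 2 bytes per pixel format such as YUYV,
 * offset = pixel_offset(10, 20, fmt.fmt.pix.bytesperline, 2); */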
In V4L2 each format has an identifier which looks like V4L2_PIX_FMT_XXX, defined in the videodev2.h header file. These identifiers represent four character codes which are also listed below; however, they are not the same as those
used in the Windows world.

Colorspaces

[intro]

Gamma Correction

[to do]

E'R = f(R)
E'G = f(G)
E'B = f(B)

Construction of luminance and color-difference signals

[to do]

E'Y = CoeffR E'R + CoeffG E'G + CoeffB E'B

(E'R - E'Y) = E'R - CoeffR E'R - CoeffG E'G - CoeffB E'B

(E'B - E'Y) = E'B - CoeffR E'R - CoeffG E'G - CoeffB E'B

Re-normalized color-difference signals

The color-difference signals are scaled back to unity range [-0.5;+0.5]:

KB = 0.5 / (1 - CoeffB)
KR = 0.5 / (1 - CoeffR)

PB = KB (E'B - E'Y) = -0.5 (CoeffR / (1 - CoeffB)) E'R - 0.5 (CoeffG / (1 - CoeffB)) E'G + 0.5 E'B

PR = KR (E'R - E'Y) = 0.5 E'R - 0.5 (CoeffG / (1 - CoeffR)) E'G - 0.5 (CoeffB / (1 - CoeffR)) E'B

Quantization

[to do]

Y' = (Lum. Levels - 1) · E'Y + Lum. Offset
CB = (Chrom. Levels - 1) · PB + Chrom. Offset
CR = (Chrom. Levels - 1) · PR + Chrom. Offset

Rounding to the nearest integer and clamping to the range [0;255] finally yields the digital color components Y'CbCr stored in YUV images.

ITU-R Rec. BT.601 color conversion

Forward Transformation
int ER, EG, EB; /* gamma corrected RGB input [0;255] */
int Y1, Cb, Cr; /* output [0;255] */
double r, g, b; /* temporaries */
double y1, pb, pr;
int
clamp (double x)
{
int r = (int)(x + 0.5); /* round to nearest */
if (r < 0) return 0;
else if (r > 255) return 255;
else return r;
}
r = ER / 255.0;
g = EG / 255.0;
b = EB / 255.0;
y1 = 0.299 * r + 0.587 * g + 0.114 * b;
pb = -0.169 * r - 0.331 * g + 0.5 * b;
pr = 0.5 * r - 0.419 * g - 0.081 * b;
Y1 = clamp (219 * y1 + 16);
Cb = clamp (224 * pb + 128);
Cr = clamp (224 * pr + 128);
/* or shorter */
y1 = 0.299 * ER + 0.587 * EG + 0.114 * EB;
Y1 = clamp ( (219 / 255.0) * y1 + 16);
Cb = clamp (((224 / 255.0) / (2 - 2 * 0.114)) * (EB - y1) + 128);
Cr = clamp (((224 / 255.0) / (2 - 2 * 0.299)) * (ER - y1) + 128);
Inverse Transformation
int Y1, Cb, Cr; /* gamma pre-corrected input [0;255] */
int ER, EG, EB; /* output [0;255] */
double r, g, b; /* temporaries */
double y1, pb, pr;
int
clamp (double x)
{
int r = (int)(x + 0.5); /* round to nearest */
if (r < 0) return 0;
else if (r > 255) return 255;
else return r;
}
y1 = (Y1 - 16) / 219.0;
pb = (Cb - 128) / 224.0;
pr = (Cr - 128) / 224.0;
r = 1.0 * y1 + 0 * pb + 1.402 * pr;
g = 1.0 * y1 - 0.344 * pb - 0.714 * pr;
b = 1.0 * y1 + 1.772 * pb + 0 * pr;
ER = clamp (r * 255); /* out-of-gamut results are clamped */
EG = clamp (g * 255);
EB = clamp (b * 255);
enum v4l2_colorspace

Each entry below lists: Identifier (Value), Description, Chromaticities (Red, Green, Blue; the coordinates of the color primaries are given in the CIE system (1931)), White Point, Gamma Correction, Luminance E'Y, and Quantization (Y'; CB, CR).

V4L2_COLORSPACE_SMPTE170M (1): NTSC/PAL according to ,
    Chromaticities: Red x = 0.630, y = 0.340; Green x = 0.310, y = 0.595; Blue x = 0.155, y = 0.070
    White Point: x = 0.3127, y = 0.3290, Illuminant D65
    Gamma Correction: E' = 4.5 I for I ≤ 0.018; E' = 1.099 I^0.45 - 0.099 for 0.018 < I
    Luminance E'Y: 0.299 E'R + 0.587 E'G + 0.114 E'B
    Quantization: Y' = 219 E'Y + 16; CB, CR = 224 PB,R + 128

V4L2_COLORSPACE_SMPTE240M (2): 1125-Line (US) HDTV, see
    Chromaticities: Red x = 0.630, y = 0.340; Green x = 0.310, y = 0.595; Blue x = 0.155, y = 0.070
    White Point: x = 0.3127, y = 0.3290, Illuminant D65
    Gamma Correction: E' = 4 I for I ≤ 0.0228; E' = 1.1115 I^0.45 - 0.1115 for 0.0228 < I
    Luminance E'Y: 0.212 E'R + 0.701 E'G + 0.087 E'B
    Quantization: Y' = 219 E'Y + 16; CB, CR = 224 PB,R + 128

V4L2_COLORSPACE_REC709 (3): HDTV and modern devices, see
    Chromaticities: Red x = 0.640, y = 0.330; Green x = 0.300, y = 0.600; Blue x = 0.150, y = 0.060
    White Point: x = 0.3127, y = 0.3290, Illuminant D65
    Gamma Correction: E' = 4.5 I for I ≤ 0.018; E' = 1.099 I^0.45 - 0.099 for 0.018 < I
    Luminance E'Y: 0.2125 E'R + 0.7154 E'G + 0.0721 E'B
    Quantization: Y' = 219 E'Y + 16; CB, CR = 224 PB,R + 128

V4L2_COLORSPACE_BT878 (4): Broken Bt878 extents. The ubiquitous Bt878 video capture chip quantizes E'Y to 238 levels, yielding a range of Y' = 16 … 253, unlike Rec. 601 Y' = 16 … 235. This is not a typo in the Bt878 documentation; it has been implemented in silicon. The chroma extents are unclear.
    Chromaticities: ?
    White Point: ?
    Gamma Correction: ?
    Luminance E'Y: 0.299 E'R + 0.587 E'G + 0.114 E'B
    Quantization: Y' = 237 E'Y + 16; CB, CR = 224 PB,R + 128 (probably)

V4L2_COLORSPACE_470_SYSTEM_M (5): M/NTSC according to , (No identifier exists for M/PAL, which uses the chromaticities of M/NTSC; the remaining parameters are equal to B and G/PAL.)
    Chromaticities: Red x = 0.67, y = 0.33; Green x = 0.21, y = 0.71; Blue x = 0.14, y = 0.08
    White Point: x = 0.310, y = 0.316, Illuminant C
    Gamma Correction: ?
    Luminance E'Y: 0.299 E'R + 0.587 E'G + 0.114 E'B
    Quantization: Y' = 219 E'Y + 16; CB, CR = 224 PB,R + 128

V4L2_COLORSPACE_470_SYSTEM_BG (6): 625-line PAL and SECAM systems according to ,
    Chromaticities: Red x = 0.64, y = 0.33; Green x = 0.29, y = 0.60; Blue x = 0.15, y = 0.06
    White Point: x = 0.313, y = 0.329, Illuminant D65
    Gamma Correction: ?
    Luminance E'Y: 0.299 E'R + 0.587 E'G + 0.114 E'B
    Quantization: Y' = 219 E'Y + 16; CB, CR = 224 PB,R + 128

V4L2_COLORSPACE_JPEG (7): JPEG Y'CbCr, see ,
    Chromaticities: ?
    White Point: ?
    Gamma Correction: ?
    Luminance E'Y: 0.299 E'R + 0.587 E'G + 0.114 E'B
    Quantization: Y' = 256 E'Y + 16; CB, CR = 256 PB,R + 128 (Note: JFIF quantizes Y'PBPR in range [0;+1] and [-0.5;+0.5] to 257 levels, however Y'CbCr signals are still clamped to [0;255].)

V4L2_COLORSPACE_SRGB (8): [?]
    Chromaticities: Red x = 0.640, y = 0.330; Green x = 0.300, y = 0.600; Blue x = 0.150, y = 0.060
    White Point: x = 0.3127, y = 0.3290, Illuminant D65
    Gamma Correction: E' = 4.5 I for I ≤ 0.018; E' = 1.099 I^0.45 - 0.099 for 0.018 < I
    Luminance E'Y: n/a
    Quantization: n/a
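The gamma correction expression shared by the SMPTE 170M, Rec. 709 and sRGB entries above translates directly into code. The helper below is only an illustrative sketch of that formula (the function name is ours, not part of any driver API):

#include <math.h>

/* Opto-electronic transfer function quoted above for
 * V4L2_COLORSPACE_SMPTE170M, _REC709 and _SRGB:
 *   E' = 4.5 I                  for I <= 0.018
 *   E' = 1.099 I^0.45 - 0.099   for I >  0.018
 * I is the linear intensity in the range [0;1]. */
static double transfer_709(double I)
{
        return (I <= 0.018) ? 4.5 * I : 1.099 * pow(I, 0.45) - 0.099;
}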
Indexed Format

In this format each pixel is represented by an 8 bit index into a 256 entry ARGB palette. It is intended for Video Output Overlays only. There are no ioctls to access the palette; this must be done with ioctls of the Linux framebuffer API.
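As noted, the palette itself is programmed through the Linux framebuffer API. The following sketch loads a grey ramp with the standard FBIOPUTCMAP ioctl (the helper name and the grey-ramp palette are arbitrary, and the file descriptor is assumed to refer to the framebuffer carrying the overlay):

#include <sys/ioctl.h>
#include <linux/fb.h>

/* Load a 256-entry palette into the framebuffer that displays the
 * overlay. fb_cmap components are 16 bit, so 8 bit values are
 * replicated into the high byte. Returns the ioctl result. */
static int load_grey_palette(int fb_fd)
{
        __u16 red[256], green[256], blue[256], transp[256];
        struct fb_cmap cmap = {
                .start = 0, .len = 256,
                .red = red, .green = green, .blue = blue, .transp = transp,
        };

        for (unsigned int i = 0; i < 256; i++) {
                red[i] = green[i] = blue[i] = (__u16)((i << 8) | i);
                transp[i] = 0;          /* opaque */
        }
        return ioctl(fb_fd, FBIOPUTCMAP, &cmap);
}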
RGB Formats
&sub-packed-rgb;
&sub-sbggr8;
&sub-sgbrg8;
&sub-sgrbg8;
&sub-sbggr16;
YUV Formats

YUV is the format native to TV broadcast and composite video signals. It separates the brightness information (Y) from the color information (U and V or Cb and Cr). The color information consists of red and blue color difference signals; the green component can be reconstructed from these and the brightness component. See for conversion examples. YUV was chosen because early television would only transmit brightness information. To add color in a way compatible with existing receivers a new signal carrier was added to transmit the color difference signals. Secondly, in the YUV format the U and V components usually have a lower resolution than the Y component. This is an analog video compression technique taking advantage of a property of the human visual system, which is more sensitive to brightness than to color information.
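For the planar 4:2:0 case mentioned above the plane sizes and offsets follow from the height and bytesperline values negotiated earlier. The sketch below assumes the three-plane Y, Cb, Cr order of V4L2_PIX_FMT_YUV420 and is only an illustration (the struct and function names are ours):

#include <stddef.h>

/* Plane sizes and offsets for a planar YUV 4:2:0 image such as
 * V4L2_PIX_FMT_YUV420: a full resolution Y plane followed by Cb and
 * Cr planes with half the horizontal and vertical resolution. The
 * chroma line stride is bytesperline divided by the same factor of
 * two as the width. */
struct yuv420_layout {
        size_t y_offset, cb_offset, cr_offset;
        size_t y_size, c_size;
};

static struct yuv420_layout compute_yuv420_layout(unsigned int height,
                                                  unsigned int bytesperline)
{
        struct yuv420_layout l;

        l.y_size    = (size_t)bytesperline * height;
        l.c_size    = (size_t)(bytesperline / 2) * (height / 2);
        l.y_offset  = 0;
        l.cb_offset = l.y_size;
        l.cr_offset = l.y_size + l.c_size;
        return l;
}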
&sub-packed-yuv;
&sub-grey;
&sub-y16;
&sub-yuyv;
&sub-uyvy;
&sub-yvyu;
&sub-vyuy;
&sub-y41p;
&sub-yuv420;
&sub-yuv410;
&sub-yuv422p;
&sub-yuv411p;
&sub-nv12;
&sub-nv16;
Compressed Formats
Compressed Image Formats
&cs-def;
Identifier ('Code'): Details

V4L2_PIX_FMT_JPEG ('JPEG'): TBD. See also &VIDIOC-G-JPEGCOMP;, &VIDIOC-S-JPEGCOMP;.

V4L2_PIX_FMT_MPEG ('MPEG'): MPEG stream. The actual format is determined by the extended control V4L2_CID_MPEG_STREAM_TYPE, see .
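The V4L2_CID_MPEG_STREAM_TYPE control mentioned above is set through the extended control interface. The following sketch requests an MPEG-2 program stream (the helper name and the particular stream type are assumptions for illustration):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request an MPEG-2 program stream on a device that produces
 * V4L2_PIX_FMT_MPEG data. Returns the ioctl result (0 on success). */
static int select_mpeg2_ps(int fd)
{
        struct v4l2_ext_control ctrl;
        struct v4l2_ext_controls ctrls;

        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = V4L2_CID_MPEG_STREAM_TYPE;
        ctrl.value = V4L2_MPEG_STREAM_TYPE_MPEG2_PS;

        memset(&ctrls, 0, sizeof(ctrls));
        ctrls.ctrl_class = V4L2_CTRL_CLASS_MPEG;
        ctrls.count = 1;
        ctrls.controls = &ctrl;

        return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}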
Reserved Format Identifiers

These formats are not defined by this specification; they
are just listed for reference and to avoid naming conflicts. If you
want to register your own format, send an e-mail to the linux-media mailing
list &v4l-ml; for inclusion in the videodev2.h
file. If you want to share your format with other developers add a
link to your documentation and send a copy to the linux-media mailing list
for inclusion in this section. If you think your format should be listed
in a standard format section please make a proposal on the linux-media mailing
list.
Reserved Image Formats
&cs-def;
Identifier ('Code'): Details

V4L2_PIX_FMT_DV ('dvsd'): unknown
V4L2_PIX_FMT_ET61X251 ('E625'): Compressed format of the ET61X251 driver.
V4L2_PIX_FMT_HI240 ('HI24'): 8 bit RGB format used by the BTTV driver.
V4L2_PIX_FMT_HM12 ('HM12'): YUV 4:2:0 format used by the IVTV driver, http://www.ivtvdriver.org/ The format is documented in the kernel sources in the file Documentation/video4linux/cx2341x/README.hm12
V4L2_PIX_FMT_SPCA501 ('S501'): YUYV per line used by the gspca driver.
V4L2_PIX_FMT_SPCA505 ('S505'): YYUV per line used by the gspca driver.
V4L2_PIX_FMT_SPCA508 ('S508'): YUVY per line used by the gspca driver.
V4L2_PIX_FMT_SPCA561 ('S561'): Compressed GBRG Bayer format used by the gspca driver.
V4L2_PIX_FMT_SGRBG10 ('DA10'): 10 bit raw Bayer, expanded to 16 bits.
V4L2_PIX_FMT_SGRBG10DPCM8 ('DB10'): 10 bit raw Bayer DPCM compressed to 8 bits.
V4L2_PIX_FMT_PAC207 ('P207'): Compressed BGGR Bayer format used by the gspca driver.
V4L2_PIX_FMT_MR97310A ('M310'): Compressed BGGR Bayer format used by the gspca driver.
V4L2_PIX_FMT_OV511 ('O511'): OV511 JPEG format used by the gspca driver.
V4L2_PIX_FMT_OV518 ('O518'): OV518 JPEG format used by the gspca driver.
V4L2_PIX_FMT_PJPG ('PJPG'): Pixart 73xx JPEG format used by the gspca driver.
V4L2_PIX_FMT_SQ905C ('905C'): Compressed RGGB Bayer format used by the gspca driver.
V4L2_PIX_FMT_MJPEG ('MJPG'): Compressed format used by the Zoran driver.
V4L2_PIX_FMT_PWC1 ('PWC1'): Compressed format of the PWC driver.
V4L2_PIX_FMT_PWC2 ('PWC2'): Compressed format of the PWC driver.
V4L2_PIX_FMT_SN9C10X ('S910'): Compressed format of the SN9C102 driver.
V4L2_PIX_FMT_SN9C20X_I420 ('S920'): YUV 4:2:0 format of the gspca sn9c20x driver.
V4L2_PIX_FMT_STV0680 ('S680'): Bayer format of the gspca stv0680 driver.
V4L2_PIX_FMT_WNVA ('WNVA'): Used by the Winnov Videum driver, http://www.thedirks.org/winnov/
V4L2_PIX_FMT_TM6000 ('TM60'): Used by Trident tm6000.
V4L2_PIX_FMT_YYUV ('YYUV'): unknown
V4L2_PIX_FMT_Y4 ('Y04 '): Old 4-bit greyscale format. Only the least significant 4 bits of each byte are used, the other bits are set to 0.
V4L2_PIX_FMT_Y6 ('Y06 '): Old 6-bit greyscale format. Only the least significant 6 bits of each byte are used, the other bits are set to 0.