The V4L2 API was primarily designed for devices exchanging image data with applications. The v4l2_pix_format and v4l2_pix_format_mplane structures define the format and layout of an image in memory. The former is used with the single-planar API, while the latter is used with the multi-planar version (see ). Image formats are negotiated with the &VIDIOC-S-FMT; ioctl. (The explanations here focus on video capturing and output, for overlay frame buffer formats see also &VIDIOC-G-FBUF;.)
Single-planar format structure struct <structname>v4l2_pix_format</structname> &cs-str; __u32 width Image width in pixels. __u32 height Image height in pixels. If field is one of V4L2_FIELD_TOP, V4L2_FIELD_BOTTOM or V4L2_FIELD_ALTERNATE then height refers to the number of lines in the field, otherwise it refers to the number of lines in the frame (which is twice the field height for interlaced formats). Applications set these fields to request an image size; drivers return the closest possible values. In case of planar formats the width and height apply to the largest plane. To avoid ambiguities drivers must return values rounded up to a multiple of the scale factor of any smaller planes. For example when the image format is YUV 4:2:0, width and height must be multiples of two. __u32 pixelformat The pixel format or type of compression, set by the application. This is a little endian four character code. V4L2 defines standard RGB formats in , YUV formats in , and reserved codes in &v4l2-field; field Video images are typically interlaced. Applications can request to capture or output only the top or bottom field, or both fields interlaced or sequentially stored in one buffer or alternating in separate buffers. Drivers return the actual field order selected. For more details on fields see . __u32 bytesperline Distance in bytes between the leftmost pixels in two adjacent lines. Both applications and drivers can set this field to request padding bytes at the end of each line. Drivers however may ignore the value requested by the application, returning width times bytes per pixel or a larger value required by the hardware. That implies applications can just set this field to zero to get a reasonable default. Video hardware may access padding bytes, therefore they must reside in accessible memory. Consider cases where padding bytes after the last line of an image cross a system page boundary. Input devices may write padding bytes; their value is undefined. Output devices ignore the contents of padding bytes. When the image format is planar the bytesperline value applies to the first plane and is divided by the same factor as the width field for the other planes. For example the Cb and Cr planes of a YUV 4:2:0 image have half as many padding bytes following each line as the Y plane. To avoid ambiguities drivers must return a bytesperline value rounded up to a multiple of the scale factor. For compressed formats the bytesperline value makes no sense. Applications and drivers must set this to 0 in that case. __u32 sizeimage Size in bytes of the buffer to hold a complete image, set by the driver. Usually this is bytesperline times height. When the image consists of variable length compressed data this is the maximum number of bytes required to hold an image. &v4l2-colorspace; colorspace This information supplements the pixelformat and must be set by the driver for capture streams and by the application for output streams, see . __u32 priv This field indicates whether the remaining fields of the v4l2_pix_format structure, also called the extended fields, are valid. When set to V4L2_PIX_FMT_PRIV_MAGIC, it indicates that the extended fields have been correctly initialized. When set to any other value it indicates that the extended fields contain undefined values. Applications that wish to use the pixel format extended fields must first ensure that the feature is supported by querying the device for the V4L2_CAP_EXT_PIX_FORMAT capability.
If the capability isn't set the pixel format extended fields are not supported and using the extended fields will lead to undefined results. To use the extended fields, applications must set the priv field to V4L2_PIX_FMT_PRIV_MAGIC, initialize all the extended fields and zero the unused bytes of the v4l2_format raw_data field. When the priv field isn't set to V4L2_PIX_FMT_PRIV_MAGIC drivers must act as if all the extended fields were set to zero. On return drivers must set the priv field to V4L2_PIX_FMT_PRIV_MAGIC and all the extended fields to applicable values. __u32 flags Flags set by the application or driver, see . &v4l2-ycbcr-encoding; ycbcr_enc This information supplements the colorspace and must be set by the driver for capture streams and by the application for output streams, see . &v4l2-quantization; quantization This information supplements the colorspace and must be set by the driver for capture streams and by the application for output streams, see . &v4l2-xfer-func; xfer_func This information supplements the colorspace and must be set by the driver for capture streams and by the application for output streams, see .
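As a rough illustration of the negotiation described above, the following sketch queries the V4L2_CAP_EXT_PIX_FORMAT capability and then requests a single-planar format with the extended fields initialized. The device path, the 1280x720 YUYV request and the minimal error handling are illustrative assumptions, not requirements of the API, and the sketch assumes a kernel new enough to provide the ycbcr_enc, quantization and xfer_func fields.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    struct v4l2_capability cap;
    struct v4l2_format fmt;
    int fd = open("/dev/video0", O_RDWR);   /* hypothetical device node */

    if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
        return 1;

    memset(&fmt, 0, sizeof(fmt));            /* also zeroes the raw_data bytes */
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width        = 1280;
    fmt.fmt.pix.height       = 720;
    fmt.fmt.pix.pixelformat  = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field        = V4L2_FIELD_NONE;
    fmt.fmt.pix.bytesperline = 0;            /* let the driver pick a default */

    if (cap.capabilities & V4L2_CAP_EXT_PIX_FORMAT) {
        /* Extended fields are supported: mark them valid and initialize them. */
        fmt.fmt.pix.priv         = V4L2_PIX_FMT_PRIV_MAGIC;
        fmt.fmt.pix.flags        = 0;
        fmt.fmt.pix.colorspace   = V4L2_COLORSPACE_DEFAULT;
        fmt.fmt.pix.ycbcr_enc    = V4L2_YCBCR_ENC_DEFAULT;
        fmt.fmt.pix.quantization = V4L2_QUANTIZATION_DEFAULT;
        fmt.fmt.pix.xfer_func    = V4L2_XFER_FUNC_DEFAULT;
    }

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return 1;

    /* The driver has adjusted the format to what it can actually deliver. */
    printf("%ux%u, %u bytes per line, %u bytes per image\n",
           fmt.fmt.pix.width, fmt.fmt.pix.height,
           fmt.fmt.pix.bytesperline, fmt.fmt.pix.sizeimage);
    close(fd);
    return 0;
}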
Multi-planar format structures The v4l2_plane_pix_format structures define size and layout for each of the planes in a multi-planar format. The v4l2_pix_format_mplane structure contains information common to all planes (such as image width and height) and an array of v4l2_plane_pix_format structures, describing all planes of that format. struct <structname>v4l2_plane_pix_format</structname> &cs-str; __u32 sizeimage Maximum size in bytes required for image data in this plane. __u32 bytesperline Distance in bytes between the leftmost pixels in two adjacent lines. See &v4l2-pix-format;. __u16 reserved[6] Reserved for future extensions. Should be zeroed by drivers and applications.
struct <structname>v4l2_pix_format_mplane</structname> &cs-str; __u32 width Image width in pixels. See &v4l2-pix-format;. __u32 height Image height in pixels. See &v4l2-pix-format;. __u32 pixelformat The pixel format. Both single- and multi-planar four character codes can be used. &v4l2-field; field See &v4l2-pix-format;. &v4l2-colorspace; colorspace See &v4l2-pix-format;. &v4l2-plane-pix-format; plane_fmt[VIDEO_MAX_PLANES] An array of structures describing the format of each plane of this pixel format. The number of valid entries in this array has to be put in the num_planes field. __u8 num_planes Number of planes (i.e. separate memory buffers) for this format and the number of valid entries in the plane_fmt array. __u8 flags Flags set by the application or driver, see . &v4l2-ycbcr-encoding; ycbcr_enc This information supplements the colorspace and must be set by the driver for capture streams and by the application for output streams, see . &v4l2-quantization; quantization This information supplements the colorspace and must be set by the driver for capture streams and by the application for output streams, see . &v4l2-xfer-func; xfer_func This information supplements the colorspace and must be set by the driver for capture streams and by the application for output streams, see . __u8 reserved[7] Reserved for future extensions. Should be zeroed by drivers and applications.
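A minimal sketch of filling in the multi-planar variant, assuming a two-plane NV12M capture format on an already opened multi-planar capture device; the hypothetical helper name, the resolution parameters and the lack of error reporting are illustrative.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request a two-plane NV12M capture format on an already opened fd. */
static int set_nv12m_format(int fd, unsigned int width, unsigned int height)
{
    struct v4l2_format fmt;
    struct v4l2_pix_format_mplane *mp = &fmt.fmt.pix_mp;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    mp->width       = width;
    mp->height      = height;
    mp->pixelformat = V4L2_PIX_FMT_NV12M;   /* Y plane plus interleaved CbCr plane */
    mp->field       = V4L2_FIELD_NONE;
    mp->num_planes  = 2;
    /* Leave plane_fmt[] bytesperline/sizeimage at 0 so the driver chooses. */

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return -1;

    /* On return the driver has filled in the actual per-plane layout. */
    return 0;
}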
Standard Image Formats In order to exchange images between drivers and applications, it is necessary to have standard image data formats which both sides will interpret the same way. V4L2 includes several such formats, and this section is intended to be an unambiguous specification of the standard image data formats in V4L2. V4L2 drivers are not limited to these formats, however. Driver-specific formats are possible. In that case the application may depend on a codec to convert images to one of the standard formats when needed. But the data can still be stored and retrieved in the proprietary format. For example, a device may support a proprietary compressed format. Applications can still capture and save the data in the compressed format, saving much disk space, and later use a codec to convert the images to the X Windows screen format when the video is to be displayed. Even so, ultimately, some standard formats are needed, so the V4L2 specification would not be complete without well-defined standard formats. The V4L2 standard formats are mainly uncompressed formats. The pixels are always arranged in memory from left to right, and from top to bottom. The first byte of data in the image buffer is always for the leftmost pixel of the topmost row. Following that is the pixel immediately to its right, and so on until the end of the top row of pixels. Following the rightmost pixel of the row there may be zero or more bytes of padding to guarantee that each row of pixel data has a certain alignment. Following the pad bytes, if any, is data for the leftmost pixel of the second row from the top, and so on. The last row has just as many pad bytes after it as the other rows. In V4L2 each format has an identifier which looks like PIX_FMT_XXX, defined in the videodev2.h header file. These identifiers represent four character (FourCC) codes which are also listed below, however they are not the same as those used in the Windows world. For some formats, data is stored in separate, discontiguous memory buffers. Those formats are identified by a separate set of FourCC codes and are referred to as "multi-planar formats". For example, a YUV422 frame is normally stored in one memory buffer, but it can also be placed in two or three separate buffers, with Y component in one buffer and CbCr components in another in the 2-planar version or with each component in its own buffer in the 3-planar case. Those sub-buffers are referred to as "planes".
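The layout rules above mean that pixel addresses are always computed from bytesperline rather than from the visible width. A small sketch of the addressing, assuming a packed single-planar format with a whole number of bytes per pixel:

#include <stddef.h>
#include <stdint.h>

/*
 * Return a pointer to pixel (x, y) in a packed, single-planar image.
 * bytesperline may be larger than width * bytes_per_pixel because of
 * padding at the end of each line.
 */
static inline uint8_t *pixel_addr(uint8_t *image, unsigned int x, unsigned int y,
                                  unsigned int bytesperline,
                                  unsigned int bytes_per_pixel)
{
    return image + (size_t)y * bytesperline + (size_t)x * bytes_per_pixel;
}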
Colorspaces 'Color' is a very complex concept and depends on physics, chemistry and biology. Just because you have three numbers that describe the 'red', 'green' and 'blue' components of the color of a pixel does not mean that you can accurately display that color. A colorspace defines what it actually means to have an RGB value of e.g. (255, 0, 0). That is, which color should be reproduced on the screen in a perfectly calibrated environment. In order to do that we first need to have a good definition of color, i.e. some way to uniquely and unambiguously define a color so that someone else can reproduce it. Human color vision is trichromatic since the human eye has color receptors that are sensitive to three different wavelengths of light. Hence the need to use three numbers to describe color. Be glad you are not a mantis shrimp as those are sensitive to 12 different wavelengths, so instead of RGB we would be using the ABCDEFGHIJKL colorspace... Color exists only in the eye and brain and is the result of how strongly color receptors are stimulated. This is based on the Spectral Power Distribution (SPD) which is a graph showing the intensity (radiant power) of the light at wavelengths covering the visible spectrum as it enters the eye. The science of colorimetry is about the relationship between the SPD and color as perceived by the human brain. Since the human eye has only three color receptors it is perfectly possible that different SPDs will result in the same stimulation of those receptors and are perceived as the same color, even though the SPD of the light is different. In the 1920s experiments were devised to determine the relationship between SPDs and the perceived color and that resulted in the CIE 1931 standard that defines spectral weighting functions that model the perception of color. Specifically that standard defines functions that can take an SPD and calculate the stimulus for each color receptor. After some further mathematical transforms these stimuli are known as the CIE XYZ tristimulus values and these X, Y and Z values describe a color as perceived by a human unambiguously. These X, Y and Z values are all in the range [0…1]. The Y value in the CIE XYZ colorspace corresponds to luminance. Often the CIE XYZ colorspace is transformed to the normalized CIE xyY colorspace: x = X / (X + Y + Z) y = Y / (X + Y + Z) The x and y values are the chromaticity coordinates and can be used to define a color without the luminance component Y. It is very confusing to have such similar names for these colorspaces. Just be aware that if colors are specified with lower case 'x' and 'y', then the CIE xyY colorspace is used. Upper case 'X' and 'Y' refer to the CIE XYZ colorspace. Also, y has nothing to do with luminance. Together x and y specify a color, and Y the luminance. That is really all you need to remember from a practical point of view. At the end of this section you will find reading resources that go into much more detail if you are interested. A monitor or TV will reproduce colors by emitting light at three different wavelengths, the combination of which will stimulate the color receptors in the eye and thus cause the perception of color. Historically these wavelengths were defined by the red, green and blue phosphors used in the displays. These color primaries are part of what defines a colorspace. Different display devices will have different primaries and some primaries are more suitable for some display technologies than others. 
This has resulted in a variety of colorspaces that are used for different display technologies or uses. To define a colorspace you need to define the three color primaries (these are typically defined as x, y chromaticity coordinates from the CIE xyY colorspace) but also the white reference: that is the color obtained when all three primaries are at maximum power. This determines the relative power or energy of the primaries. This is usually chosen to be close to daylight which has been defined as the CIE D65 Illuminant. To recapitulate: the CIE XYZ colorspace uniquely identifies colors. Other colorspaces are defined by three chromaticity coordinates defined in the CIE xyY colorspace. Based on those a 3x3 matrix can be constructed that transforms CIE XYZ colors to colors in the new colorspace. Both the CIE XYZ colorspace and the RGB colorspaces derived from specific chromaticity primaries are linear colorspaces. But neither the eye nor display technology is linear. Doubling the values of all components in the linear colorspace will not be perceived as twice the intensity of the color. So each colorspace also defines a transfer function that takes a linear color component value and transforms it to the non-linear component value, which is a closer match to the non-linear performance of both the eye and displays. Linear component values are denoted RGB; non-linear values are denoted R'G'B'. In general colors used in graphics are all R'G'B', except in OpenGL which uses linear RGB. Special care should be taken when dealing with OpenGL to provide linear RGB colors or to use the built-in OpenGL support to apply the inverse transfer function. The final piece that defines a colorspace is a function that transforms non-linear R'G'B' to non-linear Y'CbCr. This function is determined by the so-called luma coefficients. There may be multiple possible Y'CbCr encodings allowed for the same colorspace. Many encodings of color prefer to use luma (Y') and chroma (CbCr) instead of R'G'B'. Since the human eye is more sensitive to differences in luminance than in color this encoding allows one to reduce the amount of color information compared to the luma data. Note that the luma (Y') is unrelated to the Y in the CIE XYZ colorspace. Also note that Y'CbCr is often called YCbCr or YUV even though these names are, strictly speaking, wrong. Sometimes people mistake Y'CbCr for a colorspace. This is not correct; it is just an encoding of an R'G'B' color into luma and chroma values. The underlying colorspace that is associated with the R'G'B' color is also associated with the Y'CbCr color. The final step is how the RGB, R'G'B' or Y'CbCr values are quantized. The CIE XYZ colorspace where X, Y and Z are in the range [0…1] describes all colors that humans can perceive, but the transform to another colorspace will produce colors that are outside the [0…1] range. Once clamped to the [0…1] range those colors can no longer be reproduced in that colorspace. This clamping is what reduces the extent or gamut of the colorspace. How the range of [0…1] is translated to integer values in the range of [0…255] (or higher, depending on the color depth) is called the quantization. This is not part of the colorspace definition. In practice RGB or R'G'B' values are full range, i.e. they use the full [0…255] range. Y'CbCr values on the other hand are limited range with Y' using [16…235] and Cb and Cr using [16…240]. Unfortunately, in some cases limited range RGB is also used where the components use the range [16…235].
And full range Y'CbCr also exists using the [0…255] range. In order to correctly interpret a color you need to know the quantization range, whether it is R'G'B' or Y'CbCr, the used Y'CbCr encoding and the colorspace. From that information you can calculate the corresponding CIE XYZ color and map that again to whatever colorspace your display device uses. The colorspace definition itself consists of the three chromaticity primaries, the white reference chromaticity, a transfer function and the luma coefficients needed to transform R'G'B' to Y'CbCr. While some colorspace standards correctly define all four, quite often the colorspace standard only defines some, and you have to rely on other standards for the missing pieces. The fact that colorspaces are often a mix of different standards also led to very confusing naming conventions where the name of a standard was used to name a colorspace when in fact that standard was part of various other colorspaces as well. If you want to read more about colors and colorspaces, then the following resources are useful: is a good practical book for video engineers, has a much broader scope and describes many more aspects of color (physics, chemistry, biology, etc.). The http://www.brucelindbloom.com website is an excellent resource, especially with respect to the mathematics behind colorspace conversions. The wikipedia CIE 1931 colorspace article is also very useful.
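To recap the relationship between CIE XYZ and CIE xyY from the description above, a small sketch of the normalization:

/* Convert CIE XYZ tristimulus values to CIE xyY chromaticity plus luminance. */
static void xyz_to_xyy(double X, double Y, double Z,
                       double *x, double *y, double *luminance)
{
    double sum = X + Y + Z;

    if (sum > 0.0) {
        *x = X / sum;
        *y = Y / sum;
    } else {
        *x = *y = 0.0;  /* black: the chromaticity is undefined, pick 0 */
    }
    *luminance = Y;     /* Y carries the luminance unchanged */
}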
Defining Colorspaces in V4L2 In V4L2 colorspaces are defined by four values. The first is the colorspace identifier (&v4l2-colorspace;) which defines the chromaticities, the default transfer function, the default Y'CbCr encoding and the default quantization method. The second is the transfer function identifier (&v4l2-xfer-func;) to specify non-standard transfer functions. The third is the Y'CbCr encoding identifier (&v4l2-ycbcr-encoding;) to specify non-standard Y'CbCr encodings and the fourth is the quantization identifier (&v4l2-quantization;) to specify non-standard quantization methods. Most of the time only the colorspace field of &v4l2-pix-format; or &v4l2-pix-format-mplane; needs to be filled in. Note that the default R'G'B' quantization is full range for all colorspaces except for BT.2020 which uses limited range R'G'B' quantization. V4L2 Colorspaces &cs-def; Identifier Details V4L2_COLORSPACE_DEFAULT The default colorspace. This can be used by applications to let the driver fill in the colorspace. V4L2_COLORSPACE_SMPTE170M See . V4L2_COLORSPACE_REC709 See . V4L2_COLORSPACE_SRGB See . V4L2_COLORSPACE_ADOBERGB See . V4L2_COLORSPACE_BT2020 See . V4L2_COLORSPACE_DCI_P3 See . V4L2_COLORSPACE_SMPTE240M See . V4L2_COLORSPACE_470_SYSTEM_M See . V4L2_COLORSPACE_470_SYSTEM_BG See . V4L2_COLORSPACE_JPEG See . V4L2_COLORSPACE_RAW The raw colorspace. This is used for raw image capture where the image is minimally processed and is using the internal colorspace of the device. The software that processes an image using this 'colorspace' will have to know the internals of the capture device.
V4L2 Transfer Function &cs-def; Identifier Details V4L2_XFER_FUNC_DEFAULT Use the default transfer function as defined by the colorspace. V4L2_XFER_FUNC_709 Use the Rec. 709 transfer function. V4L2_XFER_FUNC_SRGB Use the sRGB transfer function. V4L2_XFER_FUNC_ADOBERGB Use the AdobeRGB transfer function. V4L2_XFER_FUNC_SMPTE240M Use the SMPTE 240M transfer function. V4L2_XFER_FUNC_NONE Do not use a transfer function (i.e. use linear RGB values). V4L2_XFER_FUNC_DCI_P3 Use the DCI-P3 transfer function. V4L2_XFER_FUNC_SMPTE2084 Use the SMPTE 2084 transfer function.
V4L2 Y'CbCr Encodings &cs-def; Identifier Details V4L2_YCBCR_ENC_DEFAULT Use the default Y'CbCr encoding as defined by the colorspace. V4L2_YCBCR_ENC_601 Use the BT.601 Y'CbCr encoding. V4L2_YCBCR_ENC_709 Use the Rec. 709 Y'CbCr encoding. V4L2_YCBCR_ENC_XV601 Use the extended gamut xvYCC BT.601 encoding. V4L2_YCBCR_ENC_XV709 Use the extended gamut xvYCC Rec. 709 encoding. V4L2_YCBCR_ENC_SYCC Use the extended gamut sYCC encoding. V4L2_YCBCR_ENC_BT2020 Use the default non-constant luminance BT.2020 Y'CbCr encoding. V4L2_YCBCR_ENC_BT2020_CONST_LUM Use the constant luminance BT.2020 Yc'CbcCrc encoding. V4L2_YCBCR_ENC_SMPTE240M Use the SMPTE 240M Y'CbCr encoding.
V4L2 Quantization Methods &cs-def; Identifier Details V4L2_QUANTIZATION_DEFAULT Use the default quantization encoding as defined by the colorspace. This is always full range for R'G'B' (except for the BT.2020 colorspace) and usually limited range for Y'CbCr. V4L2_QUANTIZATION_FULL_RANGE Use the full range quantization encoding. I.e. the range [0…1] is mapped to [0…255] (with possible clipping to [1…254] to avoid the 0x00 and 0xff values). Cb and Cr are mapped from [-0.5…0.5] to [0…255] (with possible clipping to [1…254] to avoid the 0x00 and 0xff values). V4L2_QUANTIZATION_LIM_RANGE Use the limited range quantization encoding. I.e. the range [0…1] is mapped to [16…235]. Cb and Cr are mapped from [-0.5…0.5] to [16…240].
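A sketch of how the two quantization methods map normalized Y'CbCr values to 8-bit code values; the rounding and the exact handling of the clipped extremes are illustrative choices, not mandated by V4L2:

#include <stdint.h>

static uint8_t clamp_u8(double v)
{
    if (v < 0.0)
        return 0;
    if (v > 255.0)
        return 255;
    return (uint8_t)(v + 0.5);
}

/* Limited range: Y' in [0..1] maps to [16..235], Cb/Cr in [-0.5..0.5] to [16..240]. */
static void quantize_limited(double y, double cb, double cr,
                             uint8_t *yq, uint8_t *cbq, uint8_t *crq)
{
    *yq  = clamp_u8(219.0 * y + 16.0);
    *cbq = clamp_u8(224.0 * cb + 128.0);
    *crq = clamp_u8(224.0 * cr + 128.0);
}

/* Full range: Y' in [0..1] maps to [0..255], Cb/Cr in [-0.5..0.5] to [0..255]. */
static void quantize_full(double y, double cb, double cr,
                          uint8_t *yq, uint8_t *cbq, uint8_t *crq)
{
    *yq  = clamp_u8(255.0 * y);
    *cbq = clamp_u8(255.0 * cb + 128.0);
    *crq = clamp_u8(255.0 * cr + 128.0);
}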
Detailed Colorspace Descriptions
Colorspace SMPTE 170M (<constant>V4L2_COLORSPACE_SMPTE170M</constant>) The standard defines the colorspace used by NTSC and PAL and by SDTV in general. The default transfer function is V4L2_XFER_FUNC_709. The default Y'CbCr encoding is V4L2_YCBCR_ENC_601. The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: SMPTE 170M Chromaticities &cs-str; Color x y Red 0.630 0.340 Green 0.310 0.595 Blue 0.155 0.070 White Reference (D65) 0.3127 0.3290
The red, green and blue chromaticities are also often referred to as the SMPTE C set, so this colorspace is sometimes called SMPTE C as well. The transfer function defined for SMPTE 170M is the same as the one defined in Rec. 709.

L' = -1.099(-L)^0.45 + 0.099 for L ≤ -0.018
L' = 4.5L for -0.018 < L < 0.018
L' = 1.099L^0.45 - 0.099 for L ≥ 0.018

Inverse Transfer function:

L = -((L' - 0.099) / -1.099)^(1/0.45) for L' ≤ -0.081
L = L' / 4.5 for -0.081 < L' < 0.081
L = ((L' + 0.099) / 1.099)^(1/0.45) for L' ≥ 0.081

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding:

Y' = 0.299R' + 0.587G' + 0.114B'
Cb = -0.169R' - 0.331G' + 0.5B'
Cr = 0.5R' - 0.419G' - 0.081B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. This conversion to Y'CbCr is identical to the one defined in the standard and this colorspace is sometimes called BT.601 as well, even though BT.601 does not mention any color primaries. The default quantization is limited range, but full range is possible although rarely seen.
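The V4L2_YCBCR_ENC_601 matrix above translates directly into code; a small sketch operating on normalized non-linear R'G'B' values:

/* V4L2_YCBCR_ENC_601: non-linear R'G'B' in [0..1] to Y' in [0..1] and Cb/Cr in [-0.5..0.5]. */
static void rgb_to_ycbcr_601(double r, double g, double b,
                             double *y, double *cb, double *cr)
{
    *y  =  0.299 * r + 0.587 * g + 0.114 * b;
    *cb = -0.169 * r - 0.331 * g + 0.5   * b;
    *cr =  0.5   * r - 0.419 * g - 0.081 * b;
}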
Colorspace Rec. 709 (<constant>V4L2_COLORSPACE_REC709</constant>) The standard defines the colorspace used by HDTV in general. The default transfer function is V4L2_XFER_FUNC_709. The default Y'CbCr encoding is V4L2_YCBCR_ENC_709. The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: Rec. 709 Chromaticities &cs-str; Color x y Red 0.640 0.330 Green 0.300 0.600 Blue 0.150 0.060 White Reference (D65) 0.3127 0.3290
The full name of this standard is Rec. ITU-R BT.709-5. Transfer function. Normally L is in the range [0…1], but for the extended gamut xvYCC encoding values outside that range are allowed.

L' = -1.099(-L)^0.45 + 0.099 for L ≤ -0.018
L' = 4.5L for -0.018 < L < 0.018
L' = 1.099L^0.45 - 0.099 for L ≥ 0.018

Inverse Transfer function:

L = -((L' - 0.099) / -1.099)^(1/0.45) for L' ≤ -0.081
L = L' / 4.5 for -0.081 < L' < 0.081
L = ((L' + 0.099) / 1.099)^(1/0.45) for L' ≥ 0.081

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_709 encoding:

Y' = 0.2126R' + 0.7152G' + 0.0722B'
Cb = -0.1146R' - 0.3854G' + 0.5B'
Cr = 0.5R' - 0.4542G' - 0.0458B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The default quantization is limited range, but full range is possible although rarely seen. The V4L2_YCBCR_ENC_709 encoding described above is the default for this colorspace, but it can be overridden with V4L2_YCBCR_ENC_601, in which case the BT.601 Y'CbCr encoding is used. Two additional extended gamut Y'CbCr encodings are also possible with this colorspace: The xvYCC 709 encoding (V4L2_YCBCR_ENC_XV709, ) is similar to the Rec. 709 encoding, but it allows for R', G' and B' values that are outside the range [0…1]. The resulting Y', Cb and Cr values are scaled and offset:

Y' = (219 / 256) * (0.2126R' + 0.7152G' + 0.0722B') + (16 / 256)
Cb = (224 / 256) * (-0.1146R' - 0.3854G' + 0.5B')
Cr = (224 / 256) * (0.5R' - 0.4542G' - 0.0458B')

The xvYCC 601 encoding (V4L2_YCBCR_ENC_XV601, ) is similar to the BT.601 encoding, but it allows for R', G' and B' values that are outside the range [0…1]. The resulting Y', Cb and Cr values are scaled and offset:

Y' = (219 / 256) * (0.299R' + 0.587G' + 0.114B') + (16 / 256)
Cb = (224 / 256) * (-0.169R' - 0.331G' + 0.5B')
Cr = (224 / 256) * (0.5R' - 0.419G' - 0.081B')

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The non-standard xvYCC 709 or xvYCC 601 encodings can be used by selecting V4L2_YCBCR_ENC_XV709 or V4L2_YCBCR_ENC_XV601. The xvYCC encodings always use full range quantization.
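As an illustration of the xvYCC scaling described above, a sketch that applies the Rec. 709 matrix and then the xvYCC 709 scale and offset, operating on normalized values:

/* Rec. 709 matrix followed by the xvYCC scale and offset (V4L2_YCBCR_ENC_XV709). */
static void rgb_to_xv709(double r, double g, double b,
                         double *y, double *cb, double *cr)
{
    double y709  =  0.2126 * r + 0.7152 * g + 0.0722 * b;
    double cb709 = -0.1146 * r - 0.3854 * g + 0.5    * b;
    double cr709 =  0.5    * r - 0.4542 * g - 0.0458 * b;

    /* xvYCC: R', G' and B' may lie outside [0..1]; scale and offset the result. */
    *y  = (219.0 / 256.0) * y709 + (16.0 / 256.0);
    *cb = (224.0 / 256.0) * cb709;
    *cr = (224.0 / 256.0) * cr709;
}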
Colorspace sRGB (<constant>V4L2_COLORSPACE_SRGB</constant>) The standard defines the colorspace used by most webcams and computer graphics. The default transfer function is V4L2_XFER_FUNC_SRGB. The default Y'CbCr encoding is V4L2_YCBCR_ENC_SYCC. The default Y'CbCr quantization is full range. The chromaticities of the primary colors and the white reference are: sRGB Chromaticities &cs-str; Color x y Red 0.640 0.330 Green 0.300 0.600 Blue 0.150 0.060 White Reference (D65) 0.3127 0.3290
These chromaticities are identical to the Rec. 709 colorspace. Transfer function. Note that negative values for L are only used by the Y'CbCr conversion.

L' = -1.055(-L)^(1/2.4) + 0.055 for L < -0.0031308
L' = 12.92L for -0.0031308 ≤ L ≤ 0.0031308
L' = 1.055L^(1/2.4) - 0.055 for 0.0031308 < L ≤ 1

Inverse Transfer function:

L = -((-L' + 0.055) / 1.055)^2.4 for L' < -0.04045
L = L' / 12.92 for -0.04045 ≤ L' ≤ 0.04045
L = ((L' + 0.055) / 1.055)^2.4 for L' > 0.04045

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_SYCC encoding as defined by :

Y' = 0.2990R' + 0.5870G' + 0.1140B'
Cb = -0.1687R' - 0.3313G' + 0.5B'
Cr = 0.5R' - 0.4187G' - 0.0813B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The V4L2_YCBCR_ENC_SYCC quantization is always full range. Although this Y'CbCr encoding looks very similar to the V4L2_YCBCR_ENC_XV601 encoding, it is not. The V4L2_YCBCR_ENC_XV601 scales and offsets the Y'CbCr values before quantization, but this encoding does not do that.
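A sketch of the sRGB transfer function and its inverse as given above; the negative branch is only relevant for the extended-range Y'CbCr conversion:

#include <math.h>

/* sRGB transfer function (V4L2_XFER_FUNC_SRGB); negative input only occurs
 * in the extended-range Y'CbCr conversion mentioned above. */
static double srgb_encode(double L)
{
    if (L < -0.0031308)
        return -1.055 * pow(-L, 1.0 / 2.4) + 0.055;
    if (L <= 0.0031308)
        return 12.92 * L;
    return 1.055 * pow(L, 1.0 / 2.4) - 0.055;
}

static double srgb_decode(double Lp)
{
    if (Lp < -0.04045)
        return -pow((-Lp + 0.055) / 1.055, 2.4);
    if (Lp <= 0.04045)
        return Lp / 12.92;
    return pow((Lp + 0.055) / 1.055, 2.4);
}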
Colorspace Adobe RGB (<constant>V4L2_COLORSPACE_ADOBERGB</constant>) The standard defines the colorspace used by computer graphics that use the AdobeRGB colorspace. This is also known as the standard. The default transfer function is V4L2_XFER_FUNC_ADOBERGB. The default Y'CbCr encoding is V4L2_YCBCR_ENC_601. The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: Adobe RGB Chromaticities &cs-str; Color x y Red 0.6400 0.3300 Green 0.2100 0.7100 Blue 0.1500 0.0600 White Reference (D65) 0.3127 0.3290
Transfer function:

L' = L^(1/2.19921875)

Inverse Transfer function:

L = L'^2.19921875

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding:

Y' = 0.299R' + 0.587G' + 0.114B'
Cb = -0.169R' - 0.331G' + 0.5B'
Cr = 0.5R' - 0.419G' - 0.081B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. This transform is identical to one defined in SMPTE 170M/BT.601. The Y'CbCr quantization is limited range.
Colorspace BT.2020 (<constant>V4L2_COLORSPACE_BT2020</constant>) The standard defines the colorspace used by Ultra-high definition television (UHDTV). The default transfer function is V4L2_XFER_FUNC_709. The default Y'CbCr encoding is V4L2_YCBCR_ENC_BT2020. The default R'G'B' quantization is limited range (!), and so is the default Y'CbCr quantization. The chromaticities of the primary colors and the white reference are: BT.2020 Chromaticities &cs-str; Color x y Red 0.708 0.292 Green 0.170 0.797 Blue 0.131 0.046 White Reference (D65) 0.3127 0.3290
Transfer function (same as Rec. 709):

L' = 4.5L for 0 ≤ L < 0.018
L' = 1.099L^0.45 - 0.099 for 0.018 ≤ L ≤ 1

Inverse Transfer function:

L = L' / 4.5 for L' < 0.081
L = ((L' + 0.099) / 1.099)^(1/0.45) for L' ≥ 0.081

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_BT2020 encoding:

Y' = 0.2627R' + 0.6780G' + 0.0593B'
Cb = -0.1396R' - 0.3604G' + 0.5B'
Cr = 0.5R' - 0.4598G' - 0.0402B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y'CbCr quantization is limited range. There is also an alternate constant luminance R'G'B' to Yc'CbcCrc (V4L2_YCBCR_ENC_BT2020_CONST_LUM) encoding:

Luma: Yc' = (0.2627R + 0.6780G + 0.0593B)'
B' - Yc' ≤ 0: Cbc = (B' - Yc') / 1.9404
B' - Yc' > 0: Cbc = (B' - Yc') / 1.5816
R' - Yc' ≤ 0: Crc = (R' - Yc') / 1.7184
R' - Yc' > 0: Crc = (R' - Yc') / 0.9936

Yc' is clamped to the range [0…1] and Cbc and Crc are clamped to the range [-0.5…0.5]. The Yc'CbcCrc quantization is limited range.
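A sketch of the constant luminance BT.2020 encoding described above; the transfer function argument stands for the Rec. 709 transfer function given earlier and is passed in so the sketch stays self-contained:

/* BT.2020 constant luminance encoding (V4L2_YCBCR_ENC_BT2020_CONST_LUM).
 * transfer() stands for the Rec. 709 transfer function given above. */
static void rgb_to_bt2020_const_lum(double R, double G, double B,
                                    double (*transfer)(double),
                                    double *yc, double *cbc, double *crc)
{
    double Rp  = transfer(R);
    double Bp  = transfer(B);
    double Ycp = transfer(0.2627 * R + 0.6780 * G + 0.0593 * B); /* luma from linear RGB */

    *yc  = Ycp;
    *cbc = (Bp - Ycp) / (Bp - Ycp <= 0.0 ? 1.9404 : 1.5816);
    *crc = (Rp - Ycp) / (Rp - Ycp <= 0.0 ? 1.7184 : 0.9936);
}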
Colorspace DCI-P3 (<constant>V4L2_COLORSPACE_DCI_P3</constant>) The standard defines the colorspace used by cinema projectors that use the DCI-P3 colorspace. The default transfer function is V4L2_XFER_FUNC_DCI_P3. The default Y'CbCr encoding is V4L2_YCBCR_ENC_709. Note that this colorspace does not specify a Y'CbCr encoding since it is not meant to be encoded to Y'CbCr. So this default Y'CbCr encoding was picked because it is the HDTV encoding. The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: DCI-P3 Chromaticities &cs-str; Color x y Red 0.6800 0.3200 Green 0.2650 0.6900 Blue 0.1500 0.0600 White Reference 0.3140 0.3510
Transfer function:

L' = L^(1/2.6)

Inverse Transfer function:

L = L'^2.6

Y'CbCr encoding is not specified. V4L2 defaults to Rec. 709.
Colorspace SMPTE 240M (<constant>V4L2_COLORSPACE_SMPTE240M</constant>) The standard was an interim standard used during the early days of HDTV (1988-1998). It has been superseded by Rec. 709. The default transfer function is V4L2_XFER_FUNC_SMPTE240M. The default Y'CbCr encoding is V4L2_YCBCR_ENC_SMPTE240M. The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: SMPTE 240M Chromaticities &cs-str; Color x y Red 0.630 0.340 Green 0.310 0.595 Blue 0.155 0.070 White Reference (D65) 0.3127 0.3290
These chromaticities are identical to the SMPTE 170M colorspace. Transfer function:

L' = 4L for 0 ≤ L < 0.0228
L' = 1.1115L^0.45 - 0.1115 for 0.0228 ≤ L ≤ 1

Inverse Transfer function:

L = L' / 4 for 0 ≤ L' < 0.0913
L = ((L' + 0.1115) / 1.1115)^(1/0.45) for L' ≥ 0.0913

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_SMPTE240M encoding:

Y' = 0.2122R' + 0.7013G' + 0.0865B'
Cb = -0.1161R' - 0.3839G' + 0.5B'
Cr = 0.5R' - 0.4451G' - 0.0549B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y'CbCr quantization is limited range.
Colorspace NTSC 1953 (<constant>V4L2_COLORSPACE_470_SYSTEM_M</constant>) This standard defines the colorspace used by NTSC in 1953. In practice this colorspace is obsolete and SMPTE 170M should be used instead. The default transfer function is V4L2_XFER_FUNC_709. The default Y'CbCr encoding is V4L2_YCBCR_ENC_601. The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: NTSC 1953 Chromaticities &cs-str; Color x y Red 0.67 0.33 Green 0.21 0.71 Blue 0.14 0.08 White Reference (C) 0.310 0.316
Note that this colorspace uses Illuminant C instead of D65 as the white reference. To correctly convert an image in this colorspace to another that uses D65 you need to apply a chromatic adaptation algorithm such as the Bradford method. The transfer function was never properly defined for NTSC 1953. The Rec. 709 transfer function is recommended in the literature:

L' = 4.5L for 0 ≤ L < 0.018
L' = 1.099L^0.45 - 0.099 for 0.018 ≤ L ≤ 1

Inverse Transfer function:

L = L' / 4.5 for L' < 0.081
L = ((L' + 0.099) / 1.099)^(1/0.45) for L' ≥ 0.081

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding:

Y' = 0.299R' + 0.587G' + 0.114B'
Cb = -0.169R' - 0.331G' + 0.5B'
Cr = 0.5R' - 0.419G' - 0.081B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y'CbCr quantization is limited range. This transform is identical to one defined in SMPTE 170M/BT.601.
Colorspace EBU Tech. 3213 (<constant>V4L2_COLORSPACE_470_SYSTEM_BG</constant>) The standard defines the colorspace used by PAL/SECAM in 1975. In practice this colorspace is obsolete and SMPTE 170M should be used instead. The default transfer function is V4L2_XFER_FUNC_709. The default Y'CbCr encoding is V4L2_YCBCR_ENC_601. The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: EBU Tech. 3213 Chromaticities &cs-str; Color x y Red 0.64 0.33 Green 0.29 0.60 Blue 0.15 0.06 White Reference (D65) 0.3127 0.3290
The transfer function was never properly defined for this colorspace. The Rec. 709 transfer function is recommended in the literature:

L' = 4.5L for 0 ≤ L < 0.018
L' = 1.099L^0.45 - 0.099 for 0.018 ≤ L ≤ 1

Inverse Transfer function:

L = L' / 4.5 for L' < 0.081
L = ((L' + 0.099) / 1.099)^(1/0.45) for L' ≥ 0.081

The luminance (Y') and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding:

Y' = 0.299R' + 0.587G' + 0.114B'
Cb = -0.169R' - 0.331G' + 0.5B'
Cr = 0.5R' - 0.419G' - 0.081B'

Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y'CbCr quantization is limited range. This transform is identical to one defined in SMPTE 170M/BT.601.
Colorspace JPEG (<constant>V4L2_COLORSPACE_JPEG</constant>) This colorspace defines the colorspace used by most (Motion-)JPEG formats. The chromaticities of the primary colors and the white reference are identical to sRGB. The transfer function used is V4L2_XFER_FUNC_SRGB. The Y'CbCr encoding is V4L2_YCBCR_ENC_601 with full range quantization where Y' is scaled to [0…255] and Cb/Cr are scaled to [-128…128] and then clipped to [-128…127]. Note that the JPEG standard does not actually store colorspace information. So if something other than sRGB is used, then the driver will have to set that information explicitly. Effectively V4L2_COLORSPACE_JPEG can be considered to be an abbreviation for V4L2_COLORSPACE_SRGB, V4L2_YCBCR_ENC_601 and V4L2_QUANTIZATION_FULL_RANGE.
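The equivalence noted above can be expressed directly in code; a small sketch that fills in the explicit triple instead of using V4L2_COLORSPACE_JPEG:

#include <linux/videodev2.h>

/* The explicit equivalent of V4L2_COLORSPACE_JPEG. */
static void set_jpeg_colorspace(struct v4l2_pix_format *pix)
{
    pix->colorspace   = V4L2_COLORSPACE_SRGB;
    pix->ycbcr_enc    = V4L2_YCBCR_ENC_601;
    pix->quantization = V4L2_QUANTIZATION_FULL_RANGE;
}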
Detailed Transfer Function Descriptions
Transfer Function SMPTE 2084 (<constant>V4L2_XFER_FUNC_SMPTE2084</constant>) The standard defines the transfer function used by High Dynamic Range content. Constants:

m1 = (2610 / 4096) / 4
m2 = (2523 / 4096) * 128
c1 = 3424 / 4096
c2 = (2413 / 4096) * 32
c3 = (2392 / 4096) * 32

Transfer function:

L' = ((c1 + c2 * L^m1) / (1 + c3 * L^m1))^m2

Inverse Transfer function:

L = (max(L'^(1/m2) - c1, 0) / (c2 - c3 * L'^(1/m2)))^(1/m1)
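A sketch of the SMPTE 2084 transfer function and its inverse using the constants above:

#include <math.h>

/* SMPTE 2084 (PQ) transfer function constants as listed above. */
static const double m1 = (2610.0 / 4096.0) / 4.0;
static const double m2 = (2523.0 / 4096.0) * 128.0;
static const double c1 = 3424.0 / 4096.0;
static const double c2 = (2413.0 / 4096.0) * 32.0;
static const double c3 = (2392.0 / 4096.0) * 32.0;

static double smpte2084_encode(double L)    /* L in [0..1] */
{
    double Lm1 = pow(L, m1);

    return pow((c1 + c2 * Lm1) / (1.0 + c3 * Lm1), m2);
}

static double smpte2084_decode(double Lp)   /* L' in [0..1] */
{
    double p = pow(Lp, 1.0 / m2);
    double num = p - c1;

    if (num < 0.0)
        num = 0.0;
    return pow(num / (c2 - c3 * p), 1.0 / m1);
}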
Indexed Format In this format each pixel is represented by an 8 bit index into a 256 entry ARGB palette. It is intended for Video Output Overlays only. There are no ioctls to access the palette; this must be done with ioctls of the Linux framebuffer API. Indexed Image Format Identifier Code   Byte 0     Bit 7 6 5 4 3 2 1 0 V4L2_PIX_FMT_PAL8 'PAL8' i7 i6 i5 i4 i3 i2 i1 i0
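A sketch of expanding one PAL8 line through a 256-entry ARGB palette; obtaining the palette through the framebuffer API is not shown:

#include <stdint.h>

/* Expand one line of 8-bit palette indices into 32-bit ARGB pixels. */
static void pal8_expand_line(const uint8_t *indices, uint32_t *argb_out,
                             unsigned int width, const uint32_t palette[256])
{
    unsigned int x;

    for (x = 0; x < width; x++)
        argb_out[x] = palette[indices[x]];
}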
RGB Formats &sub-packed-rgb; &sub-sbggr8; &sub-sgbrg8; &sub-sgrbg8; &sub-srggb8; &sub-sbggr16; &sub-srggb10; &sub-srggb10p; &sub-srggb10alaw8; &sub-srggb10dpcm8; &sub-srggb12;
YUV Formats YUV is the format native to TV broadcast and composite video signals. It separates the brightness information (Y) from the color information (U and V or Cb and Cr). The color information consists of red and blue color difference signals, from which, together with the brightness component, the green component can be reconstructed. See for conversion examples. YUV was chosen because early television would only transmit brightness information. To add color in a way compatible with existing receivers a new signal carrier was added to transmit the color difference signals. Additionally, in the YUV format the U and V components usually have lower resolution than the Y component. This is an analog video compression technique taking advantage of a property of the human visual system, which is more sensitive to brightness than to color information. &sub-packed-yuv; &sub-grey; &sub-y10; &sub-y12; &sub-y10b; &sub-y16; &sub-y16-be; &sub-uv8; &sub-yuyv; &sub-uyvy; &sub-yvyu; &sub-vyuy; &sub-y41p; &sub-yuv420; &sub-yuv420m; &sub-yvu420m; &sub-yuv410; &sub-yuv422p; &sub-yuv411p; &sub-nv12; &sub-nv12m; &sub-nv12mt; &sub-nv16; &sub-nv16m; &sub-nv24; &sub-m420;
Compressed Formats Compressed Image Formats &cs-def; Identifier Code Details V4L2_PIX_FMT_JPEG 'JPEG' TBD. See also &VIDIOC-G-JPEGCOMP;, &VIDIOC-S-JPEGCOMP;. V4L2_PIX_FMT_MPEG 'MPEG' MPEG multiplexed stream. The actual format is determined by extended control V4L2_CID_MPEG_STREAM_TYPE, see . V4L2_PIX_FMT_H264 'H264' H264 video elementary stream with start codes. V4L2_PIX_FMT_H264_NO_SC 'AVC1' H264 video elementary stream without start codes. V4L2_PIX_FMT_H264_MVC 'M264' H264 MVC video elementary stream. V4L2_PIX_FMT_H263 'H263' H263 video elementary stream. V4L2_PIX_FMT_MPEG1 'MPG1' MPEG1 video elementary stream. V4L2_PIX_FMT_MPEG2 'MPG2' MPEG2 video elementary stream. V4L2_PIX_FMT_MPEG4 'MPG4' MPEG4 video elementary stream. V4L2_PIX_FMT_XVID 'XVID' Xvid video elementary stream. V4L2_PIX_FMT_VC1_ANNEX_G 'VC1G' VC1, SMPTE 421M Annex G compliant stream. V4L2_PIX_FMT_VC1_ANNEX_L 'VC1L' VC1, SMPTE 421M Annex L compliant stream. V4L2_PIX_FMT_VP8 'VP80' VP8 video elementary stream.
SDR Formats These formats are used for SDR interface only. &sub-sdr-cu08; &sub-sdr-cu16le; &sub-sdr-cs08; &sub-sdr-cs14le; &sub-sdr-ru12le;
Reserved Format Identifiers These formats are not defined by this specification, they are just listed for reference and to avoid naming conflicts. If you want to register your own format, send an e-mail to the linux-media mailing list &v4l-ml; for inclusion in the videodev2.h file. If you want to share your format with other developers add a link to your documentation and send a copy to the linux-media mailing list for inclusion in this section. If you think your format should be listed in a standard format section please make a proposal on the linux-media mailing list. Reserved Image Formats &cs-def; Identifier Code Details V4L2_PIX_FMT_DV 'dvsd' unknown V4L2_PIX_FMT_ET61X251 'E625' Compressed format of the ET61X251 driver. V4L2_PIX_FMT_HI240 'HI24' 8 bit RGB format used by the BTTV driver. V4L2_PIX_FMT_HM12 'HM12' YUV 4:2:0 format used by the IVTV driver, http://www.ivtvdriver.org/The format is documented in the kernel sources in the file Documentation/video4linux/cx2341x/README.hm12 V4L2_PIX_FMT_CPIA1 'CPIA' YUV format used by the gspca cpia1 driver. V4L2_PIX_FMT_JPGL 'JPGL' JPEG-Light format (Pegasus Lossless JPEG) used in Divio webcams NW 80x. V4L2_PIX_FMT_SPCA501 'S501' YUYV per line used by the gspca driver. V4L2_PIX_FMT_SPCA505 'S505' YYUV per line used by the gspca driver. V4L2_PIX_FMT_SPCA508 'S508' YUVY per line used by the gspca driver. V4L2_PIX_FMT_SPCA561 'S561' Compressed GBRG Bayer format used by the gspca driver. V4L2_PIX_FMT_PAC207 'P207' Compressed BGGR Bayer format used by the gspca driver. V4L2_PIX_FMT_MR97310A 'M310' Compressed BGGR Bayer format used by the gspca driver. V4L2_PIX_FMT_JL2005BCD 'JL20' JPEG compressed RGGB Bayer format used by the gspca driver. V4L2_PIX_FMT_OV511 'O511' OV511 JPEG format used by the gspca driver. V4L2_PIX_FMT_OV518 'O518' OV518 JPEG format used by the gspca driver. V4L2_PIX_FMT_PJPG 'PJPG' Pixart 73xx JPEG format used by the gspca driver. V4L2_PIX_FMT_SE401 'S401' Compressed RGB format used by the gspca se401 driver V4L2_PIX_FMT_SQ905C '905C' Compressed RGGB bayer format used by the gspca driver. V4L2_PIX_FMT_MJPEG 'MJPG' Compressed format used by the Zoran driver V4L2_PIX_FMT_PWC1 'PWC1' Compressed format of the PWC driver. V4L2_PIX_FMT_PWC2 'PWC2' Compressed format of the PWC driver. V4L2_PIX_FMT_SN9C10X 'S910' Compressed format of the SN9C102 driver. V4L2_PIX_FMT_SN9C20X_I420 'S920' YUV 4:2:0 format of the gspca sn9c20x driver. V4L2_PIX_FMT_SN9C2028 'SONX' Compressed GBRG bayer format of the gspca sn9c2028 driver. V4L2_PIX_FMT_STV0680 'S680' Bayer format of the gspca stv0680 driver. V4L2_PIX_FMT_WNVA 'WNVA' Used by the Winnov Videum driver, http://www.thedirks.org/winnov/ V4L2_PIX_FMT_TM6000 'TM60' Used by Trident tm6000 V4L2_PIX_FMT_CIT_YYVYUY 'CITV' Used by xirlink CIT, found at IBM webcams. Uses one line of Y then 1 line of VYUY V4L2_PIX_FMT_KONICA420 'KONI' Used by Konica webcams. YUV420 planar in blocks of 256 pixels. V4L2_PIX_FMT_YYUV 'YYUV' unknown V4L2_PIX_FMT_Y4 'Y04 ' Old 4-bit greyscale format. Only the most significant 4 bits of each byte are used, the other bits are set to 0. V4L2_PIX_FMT_Y6 'Y06 ' Old 6-bit greyscale format. Only the most significant 6 bits of each byte are used, the other bits are set to 0. V4L2_PIX_FMT_S5C_UYVY_JPG 'S5CI' Two-planar format used by Samsung S5C73MX cameras. The first plane contains interleaved JPEG and UYVY image data, followed by meta data in form of an array of offsets to the UYVY data blocks. 
The actual pointer array immediately follows the interleaved JPEG/UYVY data; the number of entries in this array equals the height of the UYVY image. Each entry is a 4-byte unsigned integer in big endian order and is an offset to a single pixel line of the UYVY image. The first plane can start with either a JPEG or a UYVY data chunk. The size of a single UYVY block equals the UYVY image's width multiplied by 2. The size of a JPEG chunk depends on the image and can vary with each line. The second plane, at an offset of 4084 bytes, contains a 4-byte offset to the pointer array in the first plane. This offset is followed by a 4-byte value indicating the size of the pointer array. All numbers in the second plane are also in big endian order. The remaining data in the second plane is undefined. The information in the second plane makes it easy to find the location of the pointer array, which can differ from frame to frame. The size of the pointer array is constant for a given UYVY image height. To extract the UYVY and JPEG frames an application can set a data pointer to the start of the first plane and add the offset found in an entry of the pointer array; such a pointer indicates the start of one UYVY pixel line, which can then be copied to a separate buffer. These steps should be repeated for each line, i.e. for as many lines as there are entries in the pointer array. Anything in between the UYVY lines is JPEG data and should be concatenated to form the JPEG stream. A sketch of this procedure follows.
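The sketch below is one possible reading of that layout; the helper for reading big endian values, the assumption that output buffers are large enough and the absence of bounds checking are simplifications.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Read a 4-byte big endian value from an arbitrary buffer position. */
static uint32_t be32_at(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

/*
 * Split the first plane of a V4L2_PIX_FMT_S5C_UYVY_JPG buffer into a packed
 * UYVY image and a JPEG stream. plane1 is the second plane, which holds the
 * offset of the pointer array at byte offset 4084.
 */
static void s5c_split(const uint8_t *plane0, const uint8_t *plane1,
                      unsigned int width, unsigned int height,
                      uint8_t *uyvy_out, uint8_t *jpeg_out, size_t *jpeg_size)
{
    uint32_t array_offset = be32_at(plane1 + 4084);
    size_t line_size = (size_t)width * 2;    /* one UYVY line */
    size_t jpeg_len = 0, pos = 0;
    unsigned int i;

    for (i = 0; i < height; i++) {
        size_t line_start = be32_at(plane0 + array_offset + 4 * i);

        /* Everything between the previous UYVY line and this one is JPEG data. */
        memcpy(jpeg_out + jpeg_len, plane0 + pos, line_start - pos);
        jpeg_len += line_start - pos;

        /* Copy one UYVY pixel line. */
        memcpy(uyvy_out + (size_t)i * line_size, plane0 + line_start, line_size);
        pos = line_start + line_size;
    }
    /* Trailing JPEG data between the last UYVY line and the pointer array. */
    memcpy(jpeg_out + jpeg_len, plane0 + pos, array_offset - pos);
    *jpeg_size = jpeg_len + (array_offset - pos);
}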
Format Flags &cs-def; V4L2_PIX_FMT_FLAG_PREMUL_ALPHA 0x00000001 The color values are premultiplied by the alpha channel value. For example, if a light blue pixel with 50% transparency was described by RGBA values (128, 192, 255, 128), the same pixel described with premultiplied colors would be described by RGBA values (64, 96, 128, 128).
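A sketch of the premultiplication for 8-bit RGBA components; the rounding is an illustrative choice:

#include <stdint.h>

/* Premultiply 8-bit color components by the alpha value; e.g.
 * (128, 192, 255, 128) becomes (64, 96, 128, 128). */
static void premultiply_alpha(uint8_t *r, uint8_t *g, uint8_t *b, uint8_t a)
{
    *r = (uint8_t)(((unsigned int)*r * a + 127) / 255);
    *g = (uint8_t)(((unsigned int)*g * a + 127) / 255);
    *b = (uint8_t)(((unsigned int)*b * a + 127) / 255);
}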