author | Linus Torvalds | 2023-11-06 15:06:06 -0800
---|---|---
committer | Linus Torvalds | 2023-11-06 15:06:06 -0800
commit | be3ca57cfb777ad820c6659d52e60bbdd36bf5ff (patch) |
tree | 2aec9aa9c20d3a82bce9d3df93a049058c3bca4e /Documentation |
parent | d2f51b3516dade79269ff45eae2a7668ae711b25 (diff) |
parent | 3e238417254bfdcc23fe207780b59cbb08656762 (diff) |
Merge tag 'media/v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media updates from Mauro Carvalho Chehab:
- the old V4L2 core videobuf kAPI was finally removed. All media
drivers should now be using VB2 kAPI
- new automotive driver: mgb4
- new platform video driver: npcm-video
- new sensor driver: mt9m114
- new TI driver used in conjunction with Cadence CSI2RX IP to bridge
TI-specific parts
- ir-rx51 was removed and the N900 DT binding was moved to the
pwm-ir-tx generic driver
- drop atomisp-specific ov5693, using the upstream driver instead
- the camss driver has gained RDI3 support for VFE 17x
- the atomisp driver now detects ISP2400 or ISP2401 at run time. No
need to set it up at build time anymore
- lots of driver fixes, cleanups and improvements
* tag 'media/v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (377 commits)
media: nuvoton: VIDEO_NPCM_VCD_ECE should depend on ARCH_NPCM
media: venus: Fix firmware path for resources
media: venus: hfi_cmds: Replace one-element array with flex-array member and use __counted_by
media: venus: hfi_parser: Add check to keep the number of codecs within range
media: venus: hfi: add checks to handle capabilities from firmware
media: venus: hfi: fix the check to handle session buffer requirement
media: venus: hfi: add checks to perform sanity on queue pointers
media: platform: cadence: select MIPI_DPHY dependency
media: MAINTAINERS: Fix path for J721E CSI2RX bindings
media: cec: meson: always include meson sub-directory in Makefile
media: videobuf2: Fix IS_ERR checking in vb2_dc_put_userptr()
media: platform: mtk-mdp3: fix uninitialized variable in mdp_path_config()
media: mediatek: vcodec: using encoder device to alloc/free encoder memory
media: imx-jpeg: notify source change event when the first picture parsed
media: cx231xx: Use EP5_BUF_SIZE macro
media: siano: Drop unnecessary error check for debugfs_create_dir/file()
media: mediatek: vcodec: Handle invalid encoder vsi
media: aspeed: Drop unnecessary error check for debugfs_create_file()
Documentation: media: buffer.rst: fix V4L2_BUF_FLAG_PREPARED
Documentation: media: gen-errors.rst: fix confusing ENOTTY description
...
Diffstat (limited to 'Documentation')
43 files changed, 1268 insertions, 643 deletions
diff --git a/Documentation/admin-guide/media/mgb4.rst b/Documentation/admin-guide/media/mgb4.rst new file mode 100644 index 000000000000..2977f74d7e26 --- /dev/null +++ b/Documentation/admin-guide/media/mgb4.rst @@ -0,0 +1,374 @@ +.. SPDX-License-Identifier: GPL-2.0 + +==================== +mgb4 sysfs interface +==================== + +The mgb4 driver provides a sysfs interface, that is used to configure video +stream related parameters (some of them must be set properly before the v4l2 +device can be opened) and obtain the video device/stream status. + +There are two types of parameters - global / PCI card related, found under +``/sys/class/video4linux/videoX/device`` and module specific found under +``/sys/class/video4linux/videoX``. + + +Global (PCI card) parameters +============================ + +**module_type** (R): + Module type. + + | 0 - No module present + | 1 - FPDL3 + | 2 - GMSL + +**module_version** (R): + Module version number. Zero in case of a missing module. + +**fw_type** (R): + Firmware type. + + | 1 - FPDL3 + | 2 - GMSL + +**fw_version** (R): + Firmware version number. + +**serial_number** (R): + Card serial number. The format is:: + + PRODUCT-REVISION-SERIES-SERIAL + + where each component is a 8b number. + + +Common FPDL3/GMSL input parameters +================================== + +**input_id** (R): + Input number ID, zero based. + +**oldi_lane_width** (RW): + Number of deserializer output lanes. + + | 0 - single + | 1 - dual (default) + +**color_mapping** (RW): + Mapping of the incoming bits in the signal to the colour bits of the pixels. + + | 0 - OLDI/JEIDA + | 1 - SPWG/VESA (default) + +**link_status** (R): + Video link status. If the link is locked, chips are properly connected and + communicating at the same speed and protocol. The link can be locked without + an active video stream. + + A value of 0 is equivalent to the V4L2_IN_ST_NO_SYNC flag of the V4L2 + VIDIOC_ENUMINPUT status bits. + + | 0 - unlocked + | 1 - locked + +**stream_status** (R): + Video stream status. A stream is detected if the link is locked, the input + pixel clock is running and the DE signal is moving. + + A value of 0 is equivalent to the V4L2_IN_ST_NO_SIGNAL flag of the V4L2 + VIDIOC_ENUMINPUT status bits. + + | 0 - not detected + | 1 - detected + +**video_width** (R): + Video stream width. This is the actual width as detected by the HW. + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in the width + field of the v4l2_bt_timings struct. + +**video_height** (R): + Video stream height. This is the actual height as detected by the HW. + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in the height + field of the v4l2_bt_timings struct. + +**vsync_status** (R): + The type of VSYNC pulses as detected by the video format detector. + + The value is equivalent to the flags returned by VIDIOC_QUERY_DV_TIMINGS in + the polarities field of the v4l2_bt_timings struct. + + | 0 - active low + | 1 - active high + | 2 - not available + +**hsync_status** (R): + The type of HSYNC pulses as detected by the video format detector. + + The value is equivalent to the flags returned by VIDIOC_QUERY_DV_TIMINGS in + the polarities field of the v4l2_bt_timings struct. + + | 0 - active low + | 1 - active high + | 2 - not available + +**vsync_gap_length** (RW): + If the incoming video signal does not contain synchronization VSYNC and + HSYNC pulses, these must be generated internally in the FPGA to achieve + the correct frame ordering. 
This value indicates, how many "empty" pixels + (pixels with deasserted Data Enable signal) are necessary to generate the + internal VSYNC pulse. + +**hsync_gap_length** (RW): + If the incoming video signal does not contain synchronization VSYNC and + HSYNC pulses, these must be generated internally in the FPGA to achieve + the correct frame ordering. This value indicates, how many "empty" pixels + (pixels with deasserted Data Enable signal) are necessary to generate the + internal HSYNC pulse. The value must be greater than 1 and smaller than + vsync_gap_length. + +**pclk_frequency** (R): + Input pixel clock frequency in kHz. + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in + the pixelclock field of the v4l2_bt_timings struct. + + *Note: The frequency_range parameter must be set properly first to get + a valid frequency here.* + +**hsync_width** (R): + Width of the HSYNC signal in PCLK clock ticks. + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in + the hsync field of the v4l2_bt_timings struct. + +**vsync_width** (R): + Width of the VSYNC signal in PCLK clock ticks. + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in + the vsync field of the v4l2_bt_timings struct. + +**hback_porch** (R): + Number of PCLK pulses between deassertion of the HSYNC signal and the first + valid pixel in the video line (marked by DE=1). + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in + the hbackporch field of the v4l2_bt_timings struct. + +**hfront_porch** (R): + Number of PCLK pulses between the end of the last valid pixel in the video + line (marked by DE=1) and assertion of the HSYNC signal. + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in + the hfrontporch field of the v4l2_bt_timings struct. + +**vback_porch** (R): + Number of video lines between deassertion of the VSYNC signal and the video + line with the first valid pixel (marked by DE=1). + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in + the vbackporch field of the v4l2_bt_timings struct. + +**vfront_porch** (R): + Number of video lines between the end of the last valid pixel line (marked + by DE=1) and assertion of the VSYNC signal. + + The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in + the vfrontporch field of the v4l2_bt_timings struct. + +**frequency_range** (RW) + PLL frequency range of the OLDI input clock generator. The PLL frequency is + derived from the Pixel Clock Frequency (PCLK) and is equal to PCLK if + oldi_lane_width is set to "single" and PCLK/2 if oldi_lane_width is set to + "dual". + + | 0 - PLL < 50MHz (default) + | 1 - PLL >= 50MHz + + *Note: This parameter can not be changed while the input v4l2 device is + open.* + + +Common FPDL3/GMSL output parameters +=================================== + +**output_id** (R): + Output number ID, zero based. + +**video_source** (RW): + Output video source. If set to 0 or 1, the source is the corresponding card + input and the v4l2 output devices are disabled. If set to 2 or 3, the source + is the corresponding v4l2 video output device. The default is + the corresponding v4l2 output, i.e. 2 for OUT1 and 3 for OUT2. + + | 0 - input 0 + | 1 - input 1 + | 2 - v4l2 output 0 + | 3 - v4l2 output 1 + + *Note: This parameter can not be changed while ANY of the input/output v4l2 + devices is open.* + +**display_width** (RW): + Display width. There is no autodetection of the connected display, so the + proper value must be set before the start of streaming. 
The default width + is 1280. + + *Note: This parameter can not be changed while the output v4l2 device is + open.* + +**display_height** (RW): + Display height. There is no autodetection of the connected display, so the + proper value must be set before the start of streaming. The default height + is 640. + + *Note: This parameter can not be changed while the output v4l2 device is + open.* + +**frame_rate** (RW): + Output video frame rate in frames per second. The default frame rate is + 60Hz. + +**hsync_polarity** (RW): + HSYNC signal polarity. + + | 0 - active low (default) + | 1 - active high + +**vsync_polarity** (RW): + VSYNC signal polarity. + + | 0 - active low (default) + | 1 - active high + +**de_polarity** (RW): + DE signal polarity. + + | 0 - active low + | 1 - active high (default) + +**pclk_frequency** (RW): + Output pixel clock frequency. Allowed values are between 25000-190000(kHz) + and there is a non-linear stepping between two consecutive allowed + frequencies. The driver finds the nearest allowed frequency to the given + value and sets it. When reading this property, you get the exact + frequency set by the driver. The default frequency is 70000kHz. + + *Note: This parameter can not be changed while the output v4l2 device is + open.* + +**hsync_width** (RW): + Width of the HSYNC signal in pixels. The default value is 16. + +**vsync_width** (RW): + Width of the VSYNC signal in video lines. The default value is 2. + +**hback_porch** (RW): + Number of PCLK pulses between deassertion of the HSYNC signal and the first + valid pixel in the video line (marked by DE=1). The default value is 32. + +**hfront_porch** (RW): + Number of PCLK pulses between the end of the last valid pixel in the video + line (marked by DE=1) and assertion of the HSYNC signal. The default value + is 32. + +**vback_porch** (RW): + Number of video lines between deassertion of the VSYNC signal and the video + line with the first valid pixel (marked by DE=1). The default value is 2. + +**vfront_porch** (RW): + Number of video lines between the end of the last valid pixel line (marked + by DE=1) and assertion of the VSYNC signal. The default value is 2. + + +FPDL3 specific input parameters +=============================== + +**fpdl3_input_width** (RW): + Number of deserializer input lines. + + | 0 - auto (default) + | 1 - single + | 2 - dual + +FPDL3 specific output parameters +================================ + +**fpdl3_output_width** (RW): + Number of serializer output lines. + + | 0 - auto (default) + | 1 - single + | 2 - dual + +GMSL specific input parameters +============================== + +**gmsl_mode** (RW): + GMSL speed mode. + + | 0 - 12Gb/s (default) + | 1 - 6Gb/s + | 2 - 3Gb/s + | 3 - 1.5Gb/s + +**gmsl_stream_id** (RW): + The GMSL multi-stream contains up to four video streams. This parameter + selects which stream is captured by the video input. The value is the + zero-based index of the stream. The default stream id is 0. + + *Note: This parameter can not be changed while the input v4l2 device is + open.* + +**gmsl_fec** (RW): + GMSL Forward Error Correction (FEC). + + | 0 - disabled + | 1 - enabled (default) + + +==================== +mgb4 mtd partitions +==================== + +The mgb4 driver creates a MTD device with two partitions: + - mgb4-fw.X - FPGA firmware. + - mgb4-data.X - Factory settings, e.g. card serial number. + +The *mgb4-fw* partition is writable and is used for FW updates, *mgb4-data* is +read-only. The *X* attached to the partition name represents the card number. 
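As an illustration of the sysfs layout documented above (not part of the patch itself), a minimal userspace sketch that reads one of the global card attributes; the `video0` device index is an assumption and the attribute path follows the documented layout:

```c
/*
 * Minimal sketch: read a global (PCI card) mgb4 sysfs attribute.
 * Assumes the card is registered as video0; the attribute path follows
 * the layout described in the mgb4 documentation above.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/class/video4linux/video0/device/module_type";
	char value[32];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(value, sizeof(value), f))
		/* 0 - no module present, 1 - FPDL3, 2 - GMSL (see module_type) */
		printf("module_type: %s", value);
	fclose(f);
	return 0;
}
```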
+Depending on the CONFIG_MTD_PARTITIONED_MASTER kernel configuration, you may +also have a third partition named *mgb4-flash* available in the system. This +partition represents the whole, unpartitioned, card's FLASH memory and one should +not fiddle with it... + +==================== +mgb4 iio (triggers) +==================== + +The mgb4 driver creates an Industrial I/O (IIO) device that provides trigger and +signal level status capability. The following scan elements are available: + +**activity**: + The trigger levels and pending status. + + | bit 1 - trigger 1 pending + | bit 2 - trigger 2 pending + | bit 5 - trigger 1 level + | bit 6 - trigger 2 level + +**timestamp**: + The trigger event timestamp. + +The iio device can operate either in "raw" mode where you can fetch the signal +levels (activity bits 5 and 6) using sysfs access or in triggered buffer mode. +In the triggered buffer mode you can follow the signal level changes (activity +bits 1 and 2) using the iio device in /dev. If you enable the timestamps, you +will also get the exact trigger event time that can be matched to a video frame +(every mgb4 video frame has a timestamp with the same clock source). + +*Note: although the activity sample always contains all the status bits, it makes +no sense to get the pending bits in raw mode or the level bits in the triggered +buffer mode - the values do not represent valid data in such case.* diff --git a/Documentation/admin-guide/media/pci-cardlist.rst b/Documentation/admin-guide/media/pci-cardlist.rst index 42528795d4da..7d8e3c8987db 100644 --- a/Documentation/admin-guide/media/pci-cardlist.rst +++ b/Documentation/admin-guide/media/pci-cardlist.rst @@ -77,6 +77,7 @@ ipu3-cio2 Intel ipu3-cio2 driver ivtv Conexant cx23416/cx23415 MPEG encoder/decoder ivtvfb Conexant cx23415 framebuffer mantis MANTIS based cards +mgb4 Digiteq Automotive MGB4 frame grabber mxb Siemens-Nixdorf 'Multimedia eXtension Board' netup-unidvb NetUP Universal DVB card ngene Micronas nGene diff --git a/Documentation/admin-guide/media/v4l-drivers.rst b/Documentation/admin-guide/media/v4l-drivers.rst index 1c41f87c3917..61283d67ceef 100644 --- a/Documentation/admin-guide/media/v4l-drivers.rst +++ b/Documentation/admin-guide/media/v4l-drivers.rst @@ -17,6 +17,7 @@ Video4Linux (V4L) driver-specific documentation imx7 ipu3 ivtv + mgb4 omap3isp omap4_camera philips diff --git a/Documentation/admin-guide/media/visl.rst b/Documentation/admin-guide/media/visl.rst index 7d2dc78341c9..4328c6c72d30 100644 --- a/Documentation/admin-guide/media/visl.rst +++ b/Documentation/admin-guide/media/visl.rst @@ -78,7 +78,7 @@ The trace events are defined on a per-codec basis, e.g.: .. code-block:: bash - $ ls /sys/kernel/debug/tracing/events/ | grep visl + $ ls /sys/kernel/tracing/events/ | grep visl visl_fwht_controls visl_h264_controls visl_hevc_controls @@ -90,13 +90,13 @@ For example, in order to dump HEVC SPS data: .. code-block:: bash - $ echo 1 > /sys/kernel/debug/tracing/events/visl_hevc_controls/v4l2_ctrl_hevc_sps/enable + $ echo 1 > /sys/kernel/tracing/events/visl_hevc_controls/v4l2_ctrl_hevc_sps/enable The SPS data will be dumped to the trace buffer, i.e.: .. 
code-block:: bash - $ cat /sys/kernel/debug/tracing/trace + $ cat /sys/kernel/tracing/trace video_parameter_set_id 0 seq_parameter_set_id 0 pic_width_in_luma_samples 1920 diff --git a/Documentation/devicetree/bindings/leds/irled/pwm-ir-tx.yaml b/Documentation/devicetree/bindings/leds/irled/pwm-ir-tx.yaml index f2a6fa140f38..7526e3149f72 100644 --- a/Documentation/devicetree/bindings/leds/irled/pwm-ir-tx.yaml +++ b/Documentation/devicetree/bindings/leds/irled/pwm-ir-tx.yaml @@ -15,7 +15,10 @@ description: properties: compatible: - const: pwm-ir-tx + oneOf: + - const: pwm-ir-tx + - const: nokia,n900-ir + deprecated: true pwms: maxItems: 1 diff --git a/Documentation/devicetree/bindings/media/amlogic,meson6-ir.yaml b/Documentation/devicetree/bindings/media/amlogic,meson6-ir.yaml index 3f9fa92703bb..0f95fe8dd9ac 100644 --- a/Documentation/devicetree/bindings/media/amlogic,meson6-ir.yaml +++ b/Documentation/devicetree/bindings/media/amlogic,meson6-ir.yaml @@ -19,6 +19,7 @@ properties: - amlogic,meson6-ir - amlogic,meson8b-ir - amlogic,meson-gxbb-ir + - amlogic,meson-s4-ir - items: - const: amlogic,meson-gx-ir - const: amlogic,meson-gxbb-ir diff --git a/Documentation/devicetree/bindings/media/cdns,csi2rx.yaml b/Documentation/devicetree/bindings/media/cdns,csi2rx.yaml index 30a335b10762..2008a47c0580 100644 --- a/Documentation/devicetree/bindings/media/cdns,csi2rx.yaml +++ b/Documentation/devicetree/bindings/media/cdns,csi2rx.yaml @@ -18,6 +18,7 @@ properties: items: - enum: - starfive,jh7110-csi2rx + - ti,j721e-csi2rx - const: cdns,csi2rx reg: diff --git a/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml b/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml index 1e2df8cf2937..60f19e1152b3 100644 --- a/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml +++ b/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml @@ -14,6 +14,9 @@ description: |- interface and CCI (I2C compatible) control bus. The output format is raw Bayer. +allOf: + - $ref: /schemas/media/video-interface-devices.yaml# + properties: compatible: const: hynix,hi846 @@ -86,7 +89,7 @@ required: - vddd-supply - port -additionalProperties: false +unevaluatedProperties: false examples: - | @@ -109,6 +112,8 @@ examples: vddio-supply = <®_camera_vddio>; reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>; shutdown-gpios = <&gpio5 4 GPIO_ACTIVE_LOW>; + orientation = <0>; + rotation = <0>; port { camera_out: endpoint { diff --git a/Documentation/devicetree/bindings/media/i2c/onnn,mt9m114.yaml b/Documentation/devicetree/bindings/media/i2c/onnn,mt9m114.yaml new file mode 100644 index 000000000000..f6b87892068a --- /dev/null +++ b/Documentation/devicetree/bindings/media/i2c/onnn,mt9m114.yaml @@ -0,0 +1,114 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/media/i2c/onnn,mt9m114.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: onsemi 1/6-inch 720p CMOS Digital Image Sensor + +maintainers: + - Laurent Pinchart <laurent.pinchart@ideasonboard.com> + +description: |- + The onsemi MT9M114 is a 1/6-inch 720p (1.26 Mp) CMOS digital image sensor + with an active pixel-array size of 1296H x 976V. It is programmable through + an I2C interface and outputs image data over a 8-bit parallel or 1-lane MIPI + CSI-2 connection. 
+ +properties: + compatible: + const: onnn,mt9m114 + + reg: + description: I2C device address + enum: + - 0x48 + - 0x5d + + clocks: + description: EXTCLK clock signal + maxItems: 1 + + vdd-supply: + description: + Core digital voltage supply, 1.8V + + vddio-supply: + description: + I/O digital voltage supply, 1.8V or 2.8V + + vaa-supply: + description: + Analog voltage supply, 2.8V + + reset-gpios: + description: |- + Reference to the GPIO connected to the RESET_BAR pin, if any (active + low). + + port: + $ref: /schemas/graph.yaml#/$defs/port-base + additionalProperties: false + + properties: + endpoint: + $ref: /schemas/media/video-interfaces.yaml# + additionalProperties: false + + properties: + bus-type: + enum: [4, 5, 6] + + link-frequencies: true + remote-endpoint: true + + # The number and mapping of lanes (for CSI-2), and the bus width and + # signal polarities (for parallel and BT.656) are fixed and must not + # be specified. + + required: + - bus-type + - link-frequencies + +required: + - compatible + - reg + - clocks + - vdd-supply + - vddio-supply + - vaa-supply + - port + +additionalProperties: false + +examples: + - | + #include <dt-bindings/gpio/gpio.h> + #include <dt-bindings/media/video-interfaces.h> + + i2c0 { + #address-cells = <1>; + #size-cells = <0>; + + sensor@48 { + compatible = "onnn,mt9m114"; + reg = <0x48>; + + clocks = <&clk24m 0>; + + reset-gpios = <&gpio5 21 GPIO_ACTIVE_LOW>; + + vddio-supply = <®_cam_1v8>; + vdd-supply = <®_cam_1v8>; + vaa-supply = <®_2p8v>; + + port { + endpoint { + bus-type = <MEDIA_BUS_TYPE_CSI2_DPHY>; + link-frequencies = /bits/ 64 <384000000>; + remote-endpoint = <&mipi_csi_in>; + }; + }; + }; + }; +... diff --git a/Documentation/devicetree/bindings/media/i2c/ovti,ov02a10.yaml b/Documentation/devicetree/bindings/media/i2c/ovti,ov02a10.yaml index 763cebe03dc2..67c1c291327b 100644 --- a/Documentation/devicetree/bindings/media/i2c/ovti,ov02a10.yaml +++ b/Documentation/devicetree/bindings/media/i2c/ovti,ov02a10.yaml @@ -68,12 +68,6 @@ properties: marked GPIO_ACTIVE_LOW. maxItems: 1 - rotation: - enum: - - 0 # Sensor Mounted Upright - - 180 # Sensor Mounted Upside Down - default: 0 - port: $ref: /schemas/graph.yaml#/$defs/port-base additionalProperties: false @@ -114,7 +108,7 @@ required: - reset-gpios - port -additionalProperties: false +unevaluatedProperties: false examples: - | diff --git a/Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml b/Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml index 50579c947f3c..d96199031b66 100644 --- a/Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml +++ b/Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml @@ -52,10 +52,6 @@ properties: description: GPIO connected to the reset pin (active low) - orientation: true - - rotation: true - port: $ref: /schemas/graph.yaml#/$defs/port-base additionalProperties: false @@ -95,7 +91,7 @@ required: - dvdd-supply - port -additionalProperties: false +unevaluatedProperties: false examples: - | diff --git a/Documentation/devicetree/bindings/media/i2c/ovti,ov5640.yaml b/Documentation/devicetree/bindings/media/i2c/ovti,ov5640.yaml index a621032f9bd0..2c5e69356658 100644 --- a/Documentation/devicetree/bindings/media/i2c/ovti,ov5640.yaml +++ b/Documentation/devicetree/bindings/media/i2c/ovti,ov5640.yaml @@ -44,11 +44,6 @@ properties: description: > Reference to the GPIO connected to the reset pin, if any. 
- rotation: - enum: - - 0 - - 180 - port: description: Digital Output Port $ref: /schemas/graph.yaml#/$defs/port-base @@ -85,7 +80,7 @@ required: - DOVDD-supply - port -additionalProperties: false +unevaluatedProperties: false examples: - | diff --git a/Documentation/devicetree/bindings/media/i2c/ovti,ov5642.yaml b/Documentation/devicetree/bindings/media/i2c/ovti,ov5642.yaml new file mode 100644 index 000000000000..01f8b2b3fd17 --- /dev/null +++ b/Documentation/devicetree/bindings/media/i2c/ovti,ov5642.yaml @@ -0,0 +1,141 @@ +# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/media/i2c/ovti,ov5642.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: OmniVision OV5642 Image Sensor + +maintainers: + - Fabio Estevam <festevam@gmail.com> + +allOf: + - $ref: /schemas/media/video-interface-devices.yaml# + +properties: + compatible: + const: ovti,ov5642 + + reg: + maxItems: 1 + + clocks: + description: XCLK Input Clock + + AVDD-supply: + description: Analog voltage supply, 2.8V. + + DVDD-supply: + description: Digital core voltage supply, 1.5V. + + DOVDD-supply: + description: Digital I/O voltage supply, 1.8V. + + powerdown-gpios: + maxItems: 1 + description: Reference to the GPIO connected to the powerdown pin, if any. + + reset-gpios: + maxItems: 1 + description: Reference to the GPIO connected to the reset pin, if any. + + port: + $ref: /schemas/graph.yaml#/$defs/port-base + description: | + Video output port. + + properties: + endpoint: + $ref: /schemas/media/video-interfaces.yaml# + unevaluatedProperties: false + + properties: + bus-type: + enum: [5, 6] + + bus-width: + enum: [8, 10] + default: 10 + + data-shift: + enum: [0, 2] + default: 0 + + hsync-active: + enum: [0, 1] + default: 1 + + vsync-active: + enum: [0, 1] + default: 1 + + pclk-sample: + enum: [0, 1] + default: 1 + + allOf: + - if: + properties: + bus-type: + const: 6 + then: + properties: + hsync-active: false + vsync-active: false + + - if: + properties: + bus-width: + const: 10 + then: + properties: + data-shift: + const: 0 + + required: + - bus-type + + additionalProperties: false + +required: + - compatible + - reg + - clocks + - port + +additionalProperties: false + +examples: + - | + #include <dt-bindings/gpio/gpio.h> + #include <dt-bindings/media/video-interfaces.h> + + i2c { + #address-cells = <1>; + #size-cells = <0>; + + camera@3c { + compatible = "ovti,ov5642"; + reg = <0x3c>; + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_ov5642>; + clocks = <&clk_ext_camera>; + DOVDD-supply = <&vgen4_reg>; + AVDD-supply = <&vgen3_reg>; + DVDD-supply = <&vgen2_reg>; + powerdown-gpios = <&gpio1 19 GPIO_ACTIVE_HIGH>; + reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>; + + port { + ov5642_to_parallel: endpoint { + bus-type = <MEDIA_BUS_TYPE_PARALLEL>; + remote-endpoint = <¶llel_from_ov5642>; + bus-width = <8>; + data-shift = <2>; /* lines 9:2 are used */ + hsync-active = <0>; + vsync-active = <0>; + pclk-sample = <1>; + }; + }; + }; + }; diff --git a/Documentation/devicetree/bindings/media/i2c/ovti,ov5693.yaml b/Documentation/devicetree/bindings/media/i2c/ovti,ov5693.yaml index 6829a4aadd22..3368b3bd8ef2 100644 --- a/Documentation/devicetree/bindings/media/i2c/ovti,ov5693.yaml +++ b/Documentation/devicetree/bindings/media/i2c/ovti,ov5693.yaml @@ -8,7 +8,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Omnivision OV5693/OV5695 CMOS Sensors maintainers: - - Tommaso Merciai <tommaso.merciai@amarulasolutions.com> + - Tommaso Merciai 
<tomm.merciai@gmail.com> description: | The Omnivision OV5693/OV5695 are high performance, 1/4-inch, 5 megapixel, CMOS diff --git a/Documentation/devicetree/bindings/media/i2c/sony,imx214.yaml b/Documentation/devicetree/bindings/media/i2c/sony,imx214.yaml index e2470dd5920c..60903da84e1f 100644 --- a/Documentation/devicetree/bindings/media/i2c/sony,imx214.yaml +++ b/Documentation/devicetree/bindings/media/i2c/sony,imx214.yaml @@ -91,7 +91,7 @@ required: - vddd-supply - port -additionalProperties: false +unevaluatedProperties: false examples: - | diff --git a/Documentation/devicetree/bindings/media/i2c/sony,imx415.yaml b/Documentation/devicetree/bindings/media/i2c/sony,imx415.yaml index 642f9b15d359..9a00dab2e8a3 100644 --- a/Documentation/devicetree/bindings/media/i2c/sony,imx415.yaml +++ b/Documentation/devicetree/bindings/media/i2c/sony,imx415.yaml @@ -44,14 +44,6 @@ properties: description: Sensor reset (XCLR) GPIO maxItems: 1 - flash-leds: true - - lens-focus: true - - orientation: true - - rotation: true - port: $ref: /schemas/graph.yaml#/$defs/port-base unevaluatedProperties: false @@ -89,7 +81,7 @@ required: - ovdd-supply - port -additionalProperties: false +unevaluatedProperties: false examples: - | diff --git a/Documentation/devicetree/bindings/media/nokia,n900-ir b/Documentation/devicetree/bindings/media/nokia,n900-ir deleted file mode 100644 index 13a18ce37dd1..000000000000 --- a/Documentation/devicetree/bindings/media/nokia,n900-ir +++ /dev/null @@ -1,20 +0,0 @@ -Device-Tree bindings for LIRC TX driver for Nokia N900(RX51) - -Required properties: - - compatible: should be "nokia,n900-ir". - - pwms: specifies PWM used for IR signal transmission. - -Example node: - - pwm9: dmtimer-pwm@9 { - compatible = "ti,omap-dmtimer-pwm"; - ti,timers = <&timer9>; - ti,clock-source = <0x00>; /* timer_sys_ck */ - #pwm-cells = <3>; - }; - - ir: n900-ir { - compatible = "nokia,n900-ir"; - - pwms = <&pwm9 0 26316 0>; /* 38000 Hz */ - }; diff --git a/Documentation/devicetree/bindings/media/nuvoton,npcm-ece.yaml b/Documentation/devicetree/bindings/media/nuvoton,npcm-ece.yaml new file mode 100644 index 000000000000..b47468e54504 --- /dev/null +++ b/Documentation/devicetree/bindings/media/nuvoton,npcm-ece.yaml @@ -0,0 +1,43 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/media/nuvoton,npcm-ece.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Nuvoton NPCM Encoding Compression Engine + +maintainers: + - Joseph Liu <kwliu@nuvoton.com> + - Marvin Lin <kflin@nuvoton.com> + +description: | + Video Encoding Compression Engine (ECE) present on Nuvoton NPCM SoCs. 
+ +properties: + compatible: + enum: + - nuvoton,npcm750-ece + - nuvoton,npcm845-ece + + reg: + maxItems: 1 + + resets: + maxItems: 1 + +required: + - compatible + - reg + - resets + +additionalProperties: false + +examples: + - | + #include <dt-bindings/reset/nuvoton,npcm7xx-reset.h> + + ece: video-codec@f0820000 { + compatible = "nuvoton,npcm750-ece"; + reg = <0xf0820000 0x2000>; + resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_ECE>; + }; diff --git a/Documentation/devicetree/bindings/media/nuvoton,npcm-vcd.yaml b/Documentation/devicetree/bindings/media/nuvoton,npcm-vcd.yaml new file mode 100644 index 000000000000..c885f559d2e5 --- /dev/null +++ b/Documentation/devicetree/bindings/media/nuvoton,npcm-vcd.yaml @@ -0,0 +1,72 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/media/nuvoton,npcm-vcd.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Nuvoton NPCM Video Capture/Differentiation Engine + +maintainers: + - Joseph Liu <kwliu@nuvoton.com> + - Marvin Lin <kflin@nuvoton.com> + +description: | + Video Capture/Differentiation Engine (VCD) present on Nuvoton NPCM SoCs. + +properties: + compatible: + enum: + - nuvoton,npcm750-vcd + - nuvoton,npcm845-vcd + + reg: + maxItems: 1 + + interrupts: + maxItems: 1 + + resets: + maxItems: 1 + + nuvoton,sysgcr: + $ref: /schemas/types.yaml#/definitions/phandle + description: phandle to access GCR (Global Control Register) registers. + + nuvoton,sysgfxi: + $ref: /schemas/types.yaml#/definitions/phandle + description: phandle to access GFXI (Graphics Core Information) registers. + + nuvoton,ece: + $ref: /schemas/types.yaml#/definitions/phandle + description: phandle to access ECE (Encoding Compression Engine) registers. + + memory-region: + maxItems: 1 + description: + CMA pool to use for buffers allocation instead of the default CMA pool. 
+ +required: + - compatible + - reg + - interrupts + - resets + - nuvoton,sysgcr + - nuvoton,sysgfxi + - nuvoton,ece + +additionalProperties: false + +examples: + - | + #include <dt-bindings/interrupt-controller/arm-gic.h> + #include <dt-bindings/reset/nuvoton,npcm7xx-reset.h> + + vcd: vcd@f0810000 { + compatible = "nuvoton,npcm750-vcd"; + reg = <0xf0810000 0x10000>; + interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>; + resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_VCD>; + nuvoton,sysgcr = <&gcr>; + nuvoton,sysgfxi = <&gfxi>; + nuvoton,ece = <&ece>; + }; diff --git a/Documentation/devicetree/bindings/media/qcom,sdm845-venus-v2.yaml b/Documentation/devicetree/bindings/media/qcom,sdm845-venus-v2.yaml index d5f80976f4cf..6228fd2b3246 100644 --- a/Documentation/devicetree/bindings/media/qcom,sdm845-venus-v2.yaml +++ b/Documentation/devicetree/bindings/media/qcom,sdm845-venus-v2.yaml @@ -48,6 +48,14 @@ properties: iommus: maxItems: 2 + interconnects: + maxItems: 2 + + interconnect-names: + items: + - const: video-mem + - const: cpu-cfg + operating-points-v2: true opp-table: type: object diff --git a/Documentation/devicetree/bindings/media/rockchip-vpu.yaml b/Documentation/devicetree/bindings/media/rockchip-vpu.yaml index 772ec3283bc6..c57e1f488895 100644 --- a/Documentation/devicetree/bindings/media/rockchip-vpu.yaml +++ b/Documentation/devicetree/bindings/media/rockchip-vpu.yaml @@ -68,6 +68,13 @@ properties: iommus: maxItems: 1 + resets: + items: + - description: AXI reset line + - description: AXI bus interface unit reset line + - description: APB reset line + - description: APB bus interface unit reset line + required: - compatible - reg diff --git a/Documentation/devicetree/bindings/media/samsung,exynos4212-fimc-is.yaml b/Documentation/devicetree/bindings/media/samsung,exynos4212-fimc-is.yaml index 3691cd4962b2..3a5ff3f47060 100644 --- a/Documentation/devicetree/bindings/media/samsung,exynos4212-fimc-is.yaml +++ b/Documentation/devicetree/bindings/media/samsung,exynos4212-fimc-is.yaml @@ -75,13 +75,20 @@ properties: power-domains: maxItems: 1 + samsung,pmu-syscon: + $ref: /schemas/types.yaml#/definitions/phandle + description: + Power Management Unit (PMU) system controller interface, used to + power/start the ISP. + patternProperties: "^pmu@[0-9a-f]+$": type: object additionalProperties: false + deprecated: true description: Node representing the SoC's Power Management Unit (duplicated with the - correct PMU node in the SoC). + correct PMU node in the SoC). Deprecated, use samsung,pmu-syscon. 
properties: reg: @@ -131,6 +138,7 @@ required: - clock-names - interrupts - ranges + - samsung,pmu-syscon - '#size-cells' additionalProperties: false @@ -179,15 +187,12 @@ examples: <&sysmmu_fimc_fd>, <&sysmmu_fimc_mcuctl>; iommu-names = "isp", "drc", "fd", "mcuctl"; power-domains = <&pd_isp>; + samsung,pmu-syscon = <&pmu_system_controller>; #address-cells = <1>; #size-cells = <1>; ranges; - pmu@10020000 { - reg = <0x10020000 0x3000>; - }; - i2c-isp@12140000 { compatible = "samsung,exynos4212-i2c-isp"; reg = <0x12140000 0x100>; diff --git a/Documentation/devicetree/bindings/media/samsung,fimc.yaml b/Documentation/devicetree/bindings/media/samsung,fimc.yaml index b3486c38a05b..7808d61f1fa3 100644 --- a/Documentation/devicetree/bindings/media/samsung,fimc.yaml +++ b/Documentation/devicetree/bindings/media/samsung,fimc.yaml @@ -118,7 +118,7 @@ examples: #clock-cells = <1>; #address-cells = <1>; #size-cells = <1>; - ranges = <0x0 0x0 0x18000000>; + ranges = <0x0 0x0 0xba1000>; clocks = <&clock CLK_SCLK_CAM0>, <&clock CLK_SCLK_CAM1>, <&clock CLK_PIXELASYNCM0>, <&clock CLK_PIXELASYNCM1>; @@ -133,9 +133,9 @@ examples: pinctrl-0 = <&cam_port_a_clk_active &cam_port_b_clk_active>; pinctrl-names = "default"; - fimc@11800000 { + fimc@0 { compatible = "samsung,exynos4212-fimc"; - reg = <0x11800000 0x1000>; + reg = <0x00000000 0x1000>; interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>; clocks = <&clock CLK_FIMC0>, <&clock CLK_SCLK_FIMC0>; @@ -152,9 +152,9 @@ examples: /* ... FIMC 1-3 */ - csis@11880000 { + csis@80000 { compatible = "samsung,exynos4210-csis"; - reg = <0x11880000 0x4000>; + reg = <0x00080000 0x4000>; interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>; clocks = <&clock CLK_CSIS0>, <&clock CLK_SCLK_CSIS0>; @@ -187,9 +187,9 @@ examples: /* ... CSIS 1 */ - fimc-lite@12390000 { + fimc-lite@b90000 { compatible = "samsung,exynos4212-fimc-lite"; - reg = <0x12390000 0x1000>; + reg = <0xb90000 0x1000>; interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>; power-domains = <&pd_isp>; clocks = <&isp_clock CLK_ISP_FIMC_LITE0>; @@ -199,9 +199,9 @@ examples: /* ... 
FIMC-LITE 1 */ - fimc-is@12000000 { + fimc-is@800000 { compatible = "samsung,exynos4212-fimc-is"; - reg = <0x12000000 0x260000>; + reg = <0x00800000 0x260000>; interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>; clocks = <&isp_clock CLK_ISP_FIMC_LITE0>, @@ -237,18 +237,15 @@ examples: <&sysmmu_fimc_fd>, <&sysmmu_fimc_mcuctl>; iommu-names = "isp", "drc", "fd", "mcuctl"; power-domains = <&pd_isp>; + samsung,pmu-syscon = <&pmu_system_controller>; #address-cells = <1>; #size-cells = <1>; ranges; - pmu@10020000 { - reg = <0x10020000 0x3000>; - }; - - i2c-isp@12140000 { + i2c-isp@940000 { compatible = "samsung,exynos4212-i2c-isp"; - reg = <0x12140000 0x100>; + reg = <0x00940000 0x100>; clocks = <&isp_clock CLK_ISP_I2C1_ISP>; clock-names = "i2c_isp"; pinctrl-0 = <&fimc_is_i2c1>; diff --git a/Documentation/devicetree/bindings/media/ti,j721e-csi2rx-shim.yaml b/Documentation/devicetree/bindings/media/ti,j721e-csi2rx-shim.yaml new file mode 100644 index 000000000000..f762fdc05e4d --- /dev/null +++ b/Documentation/devicetree/bindings/media/ti,j721e-csi2rx-shim.yaml @@ -0,0 +1,100 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/media/ti,j721e-csi2rx-shim.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: TI J721E CSI2RX Shim + +description: | + The TI J721E CSI2RX Shim is a wrapper around Cadence CSI2RX bridge that + enables sending captured frames to memory over PSI-L DMA. In the J721E + Technical Reference Manual (SPRUIL1B) it is referred to as "SHIM" under the + CSI_RX_IF section. + +maintainers: + - Jai Luthra <j-luthra@ti.com> + +properties: + compatible: + const: ti,j721e-csi2rx-shim + + dmas: + maxItems: 1 + + dma-names: + items: + - const: rx0 + + reg: + maxItems: 1 + + power-domains: + maxItems: 1 + + ranges: true + + "#address-cells": true + + "#size-cells": true + +patternProperties: + "^csi-bridge@": + type: object + description: CSI2 bridge node. + $ref: cdns,csi2rx.yaml# + +required: + - compatible + - reg + - dmas + - dma-names + - power-domains + - ranges + - "#address-cells" + - "#size-cells" + +additionalProperties: false + +examples: + - | + #include <dt-bindings/soc/ti,sci_pm_domain.h> + + ti_csi2rx0: ticsi2rx@4500000 { + compatible = "ti,j721e-csi2rx-shim"; + dmas = <&main_udmap 0x4940>; + dma-names = "rx0"; + reg = <0x4500000 0x1000>; + power-domains = <&k3_pds 26 TI_SCI_PD_EXCLUSIVE>; + #address-cells = <1>; + #size-cells = <1>; + ranges; + + cdns_csi2rx: csi-bridge@4504000 { + compatible = "ti,j721e-csi2rx", "cdns,csi2rx"; + reg = <0x4504000 0x1000>; + clocks = <&k3_clks 26 2>, <&k3_clks 26 0>, <&k3_clks 26 2>, + <&k3_clks 26 2>, <&k3_clks 26 3>, <&k3_clks 26 3>; + clock-names = "sys_clk", "p_clk", "pixel_if0_clk", + "pixel_if1_clk", "pixel_if2_clk", "pixel_if3_clk"; + phys = <&dphy0>; + phy-names = "dphy"; + + ports { + #address-cells = <1>; + #size-cells = <0>; + + csi2_0: port@0 { + + reg = <0>; + + csi2rx0_in_sensor: endpoint { + remote-endpoint = <&csi2_cam0>; + bus-type = <4>; /* CSI2 DPHY. 
*/ + clock-lanes = <0>; + data-lanes = <1 2>; + }; + }; + }; + }; + }; diff --git a/Documentation/devicetree/bindings/media/video-interfaces.yaml b/Documentation/devicetree/bindings/media/video-interfaces.yaml index a211d49dc2ac..26e3e7d7c67b 100644 --- a/Documentation/devicetree/bindings/media/video-interfaces.yaml +++ b/Documentation/devicetree/bindings/media/video-interfaces.yaml @@ -160,6 +160,7 @@ properties: $ref: /schemas/types.yaml#/definitions/uint32-array minItems: 1 maxItems: 8 + uniqueItems: true items: # Assume up to 9 physical lane indices maximum: 8 diff --git a/Documentation/devicetree/bindings/soc/nuvoton/nuvoton,gfxi.yaml b/Documentation/devicetree/bindings/soc/nuvoton/nuvoton,gfxi.yaml new file mode 100644 index 000000000000..0222a43977ab --- /dev/null +++ b/Documentation/devicetree/bindings/soc/nuvoton/nuvoton,gfxi.yaml @@ -0,0 +1,39 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/soc/nuvoton/nuvoton,gfxi.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Graphics Core Information block in Nuvoton SoCs + +maintainers: + - Joseph Liu <kwliu@nuvoton.com> + - Marvin Lin <kflin@nuvoton.com> + +description: + The Graphics Core Information (GFXI) are a block of registers in Nuvoton SoCs + that analyzes Graphics core behavior and provides information in registers. + +properties: + compatible: + items: + - enum: + - nuvoton,npcm750-gfxi + - nuvoton,npcm845-gfxi + - const: syscon + + reg: + maxItems: 1 + +required: + - compatible + - reg + +additionalProperties: false + +examples: + - | + gfxi: gfxi@e000 { + compatible = "nuvoton,npcm750-gfxi", "syscon"; + reg = <0xe000 0x100>; + }; diff --git a/Documentation/devicetree/bindings/trivial-devices.yaml b/Documentation/devicetree/bindings/trivial-devices.yaml index 64b2ef083fdf..c3190f2a168a 100644 --- a/Documentation/devicetree/bindings/trivial-devices.yaml +++ b/Documentation/devicetree/bindings/trivial-devices.yaml @@ -309,8 +309,6 @@ properties: - nuvoton,w83773g # OKI ML86V7667 video decoder - oki,ml86v7667 - # OV5642: Color CMOS QSXGA (5-megapixel) Image Sensor with OmniBSI and Embedded TrueFocus - - ovti,ov5642 # 48-Lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch - plx,pex8648 # Pulsedlight LIDAR range-finding sensor diff --git a/Documentation/driver-api/media/camera-sensor.rst b/Documentation/driver-api/media/camera-sensor.rst index 93f4f2536c25..6456145f96ed 100644 --- a/Documentation/driver-api/media/camera-sensor.rst +++ b/Documentation/driver-api/media/camera-sensor.rst @@ -1,8 +1,14 @@ .. SPDX-License-Identifier: GPL-2.0 +.. _media_writing_camera_sensor_drivers: + Writing camera sensor drivers ============================= +This document covers the in-kernel APIs only. For the best practices on +userspace API implementation in camera sensor drivers, please see +:ref:`media_using_camera_sensor_drivers`. + CSI-2 and parallel (BT.601 and BT.656) busses --------------------------------------------- @@ -13,7 +19,7 @@ Handling clocks Camera sensors have an internal clock tree including a PLL and a number of divisors. The clock tree is generally configured by the driver based on a few -input parameters that are specific to the hardware:: the external clock frequency +input parameters that are specific to the hardware: the external clock frequency and the link frequency. The two parameters generally are obtained from system firmware. 
**No other frequencies should be used in any circumstances.** @@ -32,110 +38,61 @@ can rely on this frequency being used. Devicetree ~~~~~~~~~~ -The currently preferred way to achieve this is using ``assigned-clocks``, -``assigned-clock-parents`` and ``assigned-clock-rates`` properties. See -``Documentation/devicetree/bindings/clock/clock-bindings.txt`` for more -information. The driver then gets the frequency using ``clk_get_rate()``. +The preferred way to achieve this is using ``assigned-clocks``, +``assigned-clock-parents`` and ``assigned-clock-rates`` properties. See the +`clock device tree bindings +<https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/clock/clock.yaml>`_ +for more information. The driver then gets the frequency using +``clk_get_rate()``. This approach has the drawback that there's no guarantee that the frequency hasn't been modified directly or indirectly by another driver, or supported by the board's clock tree to begin with. Changes to the Common Clock Framework API are required to ensure reliability. -Frame size ----------- - -There are two distinct ways to configure the frame size produced by camera -sensors. - -Freely configurable camera sensor drivers -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Freely configurable camera sensor drivers expose the device's internal -processing pipeline as one or more sub-devices with different cropping and -scaling configurations. The output size of the device is the result of a series -of cropping and scaling operations from the device's pixel array's size. - -An example of such a driver is the CCS driver (see ``drivers/media/i2c/ccs``). - -Register list based drivers -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Register list based drivers generally, instead of able to configure the device -they control based on user requests, are limited to a number of preset -configurations that combine a number of different parameters that on hardware -level are independent. How a driver picks such configuration is based on the -format set on a source pad at the end of the device's internal pipeline. - -Most sensor drivers are implemented this way, see e.g. -``drivers/media/i2c/imx319.c`` for an example. - -Frame interval configuration ----------------------------- - -There are two different methods for obtaining possibilities for different frame -intervals as well as configuring the frame interval. Which one to implement -depends on the type of the device. - -Raw camera sensors -~~~~~~~~~~~~~~~~~~ - -Instead of a high level parameter such as frame interval, the frame interval is -a result of the configuration of a number of camera sensor implementation -specific parameters. Luckily, these parameters tend to be the same for more or -less all modern raw camera sensors. - -The frame interval is calculated using the following equation:: - - frame interval = (analogue crop width + horizontal blanking) * - (analogue crop height + vertical blanking) / pixel rate - -The formula is bus independent and is applicable for raw timing parameters on -large variety of devices beyond camera sensors. Devices that have no analogue -crop, use the full source image size, i.e. pixel array size. - -Horizontal and vertical blanking are specified by ``V4L2_CID_HBLANK`` and -``V4L2_CID_VBLANK``, respectively. The unit of the ``V4L2_CID_HBLANK`` control -is pixels and the unit of the ``V4L2_CID_VBLANK`` is lines. The pixel rate in -the sensor's **pixel array** is specified by ``V4L2_CID_PIXEL_RATE`` in the same -sub-device. 
The unit of that control is pixels per second. - -Register list based drivers need to implement read-only sub-device nodes for the -purpose. Devices that are not register list based need these to configure the -device's internal processing pipeline. - -The first entity in the linear pipeline is the pixel array. The pixel array may -be followed by other entities that are there to allow configuring binning, -skipping, scaling or digital crop :ref:`v4l2-subdev-selections`. - -USB cameras etc. devices -~~~~~~~~~~~~~~~~~~~~~~~~ - -USB video class hardware, as well as many cameras offering a similar higher -level interface natively, generally use the concept of frame interval (or frame -rate) on device level in firmware or hardware. This means lower level controls -implemented by raw cameras may not be used on uAPI (or even kAPI) to control the -frame interval on these devices. - Power management ---------------- -Always use runtime PM to manage the power states of your device. Camera sensor -drivers are in no way special in this respect: they are responsible for -controlling the power state of the device they otherwise control as well. In -general, the device must be powered on at least when its registers are being -accessed and when it is streaming. - -Existing camera sensor drivers may rely on the old -struct v4l2_subdev_core_ops->s_power() callback for bridge or ISP drivers to -manage their power state. This is however **deprecated**. If you feel you need -to begin calling an s_power from an ISP or a bridge driver, instead please add -runtime PM support to the sensor driver you are using. Likewise, new drivers -should not use s_power. - -Please see examples in e.g. ``drivers/media/i2c/ov8856.c`` and -``drivers/media/i2c/ccs/ccs-core.c``. The two drivers work in both ACPI -and DT based systems. +Camera sensors are used in conjunction with other devices to form a camera +pipeline. They must obey the rules listed herein to ensure coherent power +management over the pipeline. + +Camera sensor drivers are responsible for controlling the power state of the +device they otherwise control as well. They shall use runtime PM to manage +power states. Runtime PM shall be enabled at probe time and disabled at remove +time. Drivers should enable runtime PM autosuspend. + +The runtime PM handlers shall handle clocks, regulators, GPIOs, and other +system resources required to power the sensor up and down. For drivers that +don't use any of those resources (such as drivers that support ACPI systems +only), the runtime PM handlers may be left unimplemented. + +In general, the device shall be powered on at least when its registers are +being accessed and when it is streaming. Drivers should use +``pm_runtime_resume_and_get()`` when starting streaming and +``pm_runtime_put()`` or ``pm_runtime_put_autosuspend()`` when stopping +streaming. They may power the device up at probe time (for example to read +identification registers), but should not keep it powered unconditionally after +probe. + +At system suspend time, the whole camera pipeline must stop streaming, and +restart when the system is resumed. This requires coordination between the +camera sensor and the rest of the camera pipeline. Bridge drivers are +responsible for this coordination, and instruct camera sensors to stop and +restart streaming by calling the appropriate subdev operations +(``.s_stream()``, ``.enable_streams()`` or ``.disable_streams()``). 
Camera +sensor drivers shall therefore **not** keep track of the streaming state to +stop streaming in the PM suspend handler and restart it in the resume handler. +Drivers should in general not implement the system PM handlers. + +Camera sensor drivers shall **not** implement the subdev ``.s_power()`` +operation, as it is deprecated. While this operation is implemented in some +existing drivers as they predate the deprecation, new drivers shall use runtime +PM instead. If you feel you need to begin calling ``.s_power()`` from an ISP or +a bridge driver, instead add runtime PM support to the sensor driver you are +using and drop its ``.s_power()`` handler. + +Please also see :ref:`examples <media-camera-sensor-examples>`. Control framework ~~~~~~~~~~~~~~~~~ @@ -155,21 +112,36 @@ access the device. Rotation, orientation and flipping ---------------------------------- -Some systems have the camera sensor mounted upside down compared to its natural -mounting rotation. In such cases, drivers shall expose the information to -userspace with the :ref:`V4L2_CID_CAMERA_SENSOR_ROTATION -<v4l2-camera-sensor-rotation>` control. - -Sensor drivers shall also report the sensor's mounting orientation with the -:ref:`V4L2_CID_CAMERA_SENSOR_ORIENTATION <v4l2-camera-sensor-orientation>`. - Use ``v4l2_fwnode_device_parse()`` to obtain rotation and orientation information from system firmware and ``v4l2_ctrl_new_fwnode_properties()`` to register the appropriate controls. -Sensor drivers that have any vertical or horizontal flips embedded in the -register programming sequences shall initialize the V4L2_CID_HFLIP and -V4L2_CID_VFLIP controls with the values programmed by the register sequences. -The default values of these controls shall be 0 (disabled). Especially these -controls shall not be inverted, independently of the sensor's mounting -rotation. +.. _media-camera-sensor-examples: + +Example drivers +--------------- + +Features implemented by sensor drivers vary, and depending on the set of +supported features and other qualities, particular sensor drivers better serve +the purpose of an example. The following drivers are known to be good examples: + +.. flat-table:: Example sensor drivers + :header-rows: 0 + :widths: 1 1 1 2 + + * - Driver name + - File(s) + - Driver type + - Example topic + * - CCS + - ``drivers/media/i2c/ccs/`` + - Freely configurable + - Power management (ACPI and DT), UAPI + * - imx219 + - ``drivers/media/i2c/imx219.c`` + - Register list based + - Power management (DT), UAPI, mode selection + * - imx319 + - ``drivers/media/i2c/imx319.c`` + - Register list based + - Power management (ACPI and DT) diff --git a/Documentation/driver-api/media/drivers/ccs/ccs.rst b/Documentation/driver-api/media/drivers/ccs/ccs.rst index 7389204afcb8..776eec72bc80 100644 --- a/Documentation/driver-api/media/drivers/ccs/ccs.rst +++ b/Documentation/driver-api/media/drivers/ccs/ccs.rst @@ -30,7 +30,7 @@ that purpose, selection target ``V4L2_SEL_TGT_COMPOSE`` is supported on the sink pad (0). Additionally, if a device has no scaler or digital crop functionality, the -source pad (1) expses another digital crop selection rectangle that can only +source pad (1) exposes another digital crop selection rectangle that can only crop at the end of the lines and frames. Scaler @@ -78,6 +78,14 @@ For SMIA (non-++) compliant devices the static data file name is vvvv or vv denotes MIPI and SMIA manufacturer IDs respectively, mmmm model ID and rrrr or rr revision number. 
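To make the runtime PM rules from the camera sensor documentation above more concrete, here is a hedged sketch of how a sensor driver might wire them up. All `mysensor_*` names, the supply/clock fields and the streaming helpers are hypothetical and not taken from any driver in this pull:

```c
/*
 * Hypothetical sensor driver fragment following the runtime PM rules
 * described above: the runtime PM handlers own clocks, regulators and
 * GPIOs, and streaming start/stop takes and drops a PM reference.
 * struct mysensor, to_mysensor() and the streaming helpers are
 * illustrative placeholders only.
 */
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <media/v4l2-subdev.h>

static int mysensor_power_on(struct device *dev)
{
	struct mysensor *sensor = dev_get_drvdata(dev);
	int ret;

	ret = regulator_enable(sensor->avdd);
	if (ret)
		return ret;

	ret = clk_prepare_enable(sensor->xclk);
	if (ret) {
		regulator_disable(sensor->avdd);
		return ret;
	}

	gpiod_set_value_cansleep(sensor->reset_gpio, 0);
	usleep_range(1000, 2000);	/* sensor-specific start-up delay */

	return 0;
}

static int mysensor_power_off(struct device *dev)
{
	struct mysensor *sensor = dev_get_drvdata(dev);

	gpiod_set_value_cansleep(sensor->reset_gpio, 1);
	clk_disable_unprepare(sensor->xclk);
	regulator_disable(sensor->avdd);

	return 0;
}

static const struct dev_pm_ops mysensor_pm_ops = {
	SET_RUNTIME_PM_OPS(mysensor_power_off, mysensor_power_on, NULL)
};

static int mysensor_s_stream(struct v4l2_subdev *sd, int enable)
{
	struct mysensor *sensor = to_mysensor(sd);
	int ret;

	if (!enable) {
		mysensor_stop_streaming(sensor);
		pm_runtime_mark_last_busy(sensor->dev);
		pm_runtime_put_autosuspend(sensor->dev);
		return 0;
	}

	ret = pm_runtime_resume_and_get(sensor->dev);
	if (ret < 0)
		return ret;

	ret = mysensor_start_streaming(sensor);
	if (ret) {
		pm_runtime_put(sensor->dev);
		return ret;
	}

	return 0;
}
```

Note that no streaming state is cached for system suspend: per the rules above, the bridge driver restarts streaming through the subdev operations when the system resumes.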
+CCS tools +~~~~~~~~~ + +`CCS tools <https://github.com/MIPI-Alliance/ccs-tools/>`_ is a set of +tools for working with CCS static data files. CCS tools includes a +definition of the human-readable CCS static data YAML format and includes a +program to convert it to a binary. + Register definition generator ----------------------------- diff --git a/Documentation/driver-api/media/v4l2-core.rst b/Documentation/driver-api/media/v4l2-core.rst index 239045ecc8f4..58cba831ade5 100644 --- a/Documentation/driver-api/media/v4l2-core.rst +++ b/Documentation/driver-api/media/v4l2-core.rst @@ -13,7 +13,6 @@ Video4Linux devices v4l2-subdev v4l2-event v4l2-controls - v4l2-videobuf v4l2-videobuf2 v4l2-dv-timings v4l2-flash-led-class diff --git a/Documentation/driver-api/media/v4l2-dev.rst b/Documentation/driver-api/media/v4l2-dev.rst index 99e3b5fa7444..d5cb19b21a9f 100644 --- a/Documentation/driver-api/media/v4l2-dev.rst +++ b/Documentation/driver-api/media/v4l2-dev.rst @@ -157,14 +157,6 @@ changing the e.g. exposure of the webcam. Of course, you can always do all the locking yourself by leaving both lock pointers at ``NULL``. -If you use the old :ref:`videobuf framework <vb_framework>` then you must -pass the :c:type:`video_device`->lock to the videobuf queue initialize -function: if videobuf has to wait for a frame to arrive, then it will -temporarily unlock the lock and relock it afterwards. If your driver also -waits in the code, then you should do the same to allow other -processes to access the device node while the first process is waiting for -something. - In the case of :ref:`videobuf2 <vb2_framework>` you will need to implement the ``wait_prepare()`` and ``wait_finish()`` callbacks to unlock/lock if applicable. If you use the ``queue->lock`` pointer, then you can use the helper functions diff --git a/Documentation/driver-api/media/v4l2-videobuf.rst b/Documentation/driver-api/media/v4l2-videobuf.rst deleted file mode 100644 index 4b1d84eefeb8..000000000000 --- a/Documentation/driver-api/media/v4l2-videobuf.rst +++ /dev/null @@ -1,403 +0,0 @@ -.. SPDX-License-Identifier: GPL-2.0 - -.. _vb_framework: - -Videobuf Framework -================== - -Author: Jonathan Corbet <corbet@lwn.net> - -Current as of 2.6.33 - -.. note:: - - The videobuf framework was deprecated in favor of videobuf2. Shouldn't - be used on new drivers. - -Introduction ------------- - -The videobuf layer functions as a sort of glue layer between a V4L2 driver -and user space. It handles the allocation and management of buffers for -the storage of video frames. There is a set of functions which can be used -to implement many of the standard POSIX I/O system calls, including read(), -poll(), and, happily, mmap(). Another set of functions can be used to -implement the bulk of the V4L2 ioctl() calls related to streaming I/O, -including buffer allocation, queueing and dequeueing, and streaming -control. Using videobuf imposes a few design decisions on the driver -author, but the payback comes in the form of reduced code in the driver and -a consistent implementation of the V4L2 user-space API. - -Buffer types ------------- - -Not all video devices use the same kind of buffers. In fact, there are (at -least) three common variations: - - - Buffers which are scattered in both the physical and (kernel) virtual - address spaces. (Almost) all user-space buffers are like this, but it - makes great sense to allocate kernel-space buffers this way as well when - it is possible. 
Unfortunately, it is not always possible; working with - this kind of buffer normally requires hardware which can do - scatter/gather DMA operations. - - - Buffers which are physically scattered, but which are virtually - contiguous; buffers allocated with vmalloc(), in other words. These - buffers are just as hard to use for DMA operations, but they can be - useful in situations where DMA is not available but virtually-contiguous - buffers are convenient. - - - Buffers which are physically contiguous. Allocation of this kind of - buffer can be unreliable on fragmented systems, but simpler DMA - controllers cannot deal with anything else. - -Videobuf can work with all three types of buffers, but the driver author -must pick one at the outset and design the driver around that decision. - -[It's worth noting that there's a fourth kind of buffer: "overlay" buffers -which are located within the system's video memory. The overlay -functionality is considered to be deprecated for most use, but it still -shows up occasionally in system-on-chip drivers where the performance -benefits merit the use of this technique. Overlay buffers can be handled -as a form of scattered buffer, but there are very few implementations in -the kernel and a description of this technique is currently beyond the -scope of this document.] - -Data structures, callbacks, and initialization ----------------------------------------------- - -Depending on which type of buffers are being used, the driver should -include one of the following files: - -.. code-block:: none - - <media/videobuf-dma-sg.h> /* Physically scattered */ - <media/videobuf-vmalloc.h> /* vmalloc() buffers */ - <media/videobuf-dma-contig.h> /* Physically contiguous */ - -The driver's data structure describing a V4L2 device should include a -struct videobuf_queue instance for the management of the buffer queue, -along with a list_head for the queue of available buffers. There will also -need to be an interrupt-safe spinlock which is used to protect (at least) -the queue. - -The next step is to write four simple callbacks to help videobuf deal with -the management of buffers: - -.. code-block:: none - - struct videobuf_queue_ops { - int (*buf_setup)(struct videobuf_queue *q, - unsigned int *count, unsigned int *size); - int (*buf_prepare)(struct videobuf_queue *q, - struct videobuf_buffer *vb, - enum v4l2_field field); - void (*buf_queue)(struct videobuf_queue *q, - struct videobuf_buffer *vb); - void (*buf_release)(struct videobuf_queue *q, - struct videobuf_buffer *vb); - }; - -buf_setup() is called early in the I/O process, when streaming is being -initiated; its purpose is to tell videobuf about the I/O stream. The count -parameter will be a suggested number of buffers to use; the driver should -check it for rationality and adjust it if need be. As a practical rule, a -minimum of two buffers are needed for proper streaming, and there is -usually a maximum (which cannot exceed 32) which makes sense for each -device. The size parameter should be set to the expected (maximum) size -for each frame of data. - -Each buffer (in the form of a struct videobuf_buffer pointer) will be -passed to buf_prepare(), which should set the buffer's size, width, height, -and field fields properly. If the buffer's state field is -VIDEOBUF_NEEDS_INIT, the driver should pass it to: - -.. 
code-block:: none - - int videobuf_iolock(struct videobuf_queue* q, struct videobuf_buffer *vb, - struct v4l2_framebuffer *fbuf); - -Among other things, this call will usually allocate memory for the buffer. -Finally, the buf_prepare() function should set the buffer's state to -VIDEOBUF_PREPARED. - -When a buffer is queued for I/O, it is passed to buf_queue(), which should -put it onto the driver's list of available buffers and set its state to -VIDEOBUF_QUEUED. Note that this function is called with the queue spinlock -held; if it tries to acquire it as well things will come to a screeching -halt. Yes, this is the voice of experience. Note also that videobuf may -wait on the first buffer in the queue; placing other buffers in front of it -could again gum up the works. So use list_add_tail() to enqueue buffers. - -Finally, buf_release() is called when a buffer is no longer intended to be -used. The driver should ensure that there is no I/O active on the buffer, -then pass it to the appropriate free routine(s): - -.. code-block:: none - - /* Scatter/gather drivers */ - int videobuf_dma_unmap(struct videobuf_queue *q, - struct videobuf_dmabuf *dma); - int videobuf_dma_free(struct videobuf_dmabuf *dma); - - /* vmalloc drivers */ - void videobuf_vmalloc_free (struct videobuf_buffer *buf); - - /* Contiguous drivers */ - void videobuf_dma_contig_free(struct videobuf_queue *q, - struct videobuf_buffer *buf); - -One way to ensure that a buffer is no longer under I/O is to pass it to: - -.. code-block:: none - - int videobuf_waiton(struct videobuf_buffer *vb, int non_blocking, int intr); - -Here, vb is the buffer, non_blocking indicates whether non-blocking I/O -should be used (it should be zero in the buf_release() case), and intr -controls whether an interruptible wait is used. - -File operations ---------------- - -At this point, much of the work is done; much of the rest is slipping -videobuf calls into the implementation of the other driver callbacks. The -first step is in the open() function, which must initialize the -videobuf queue. The function to use depends on the type of buffer used: - -.. code-block:: none - - void videobuf_queue_sg_init(struct videobuf_queue *q, - struct videobuf_queue_ops *ops, - struct device *dev, - spinlock_t *irqlock, - enum v4l2_buf_type type, - enum v4l2_field field, - unsigned int msize, - void *priv); - - void videobuf_queue_vmalloc_init(struct videobuf_queue *q, - struct videobuf_queue_ops *ops, - struct device *dev, - spinlock_t *irqlock, - enum v4l2_buf_type type, - enum v4l2_field field, - unsigned int msize, - void *priv); - - void videobuf_queue_dma_contig_init(struct videobuf_queue *q, - struct videobuf_queue_ops *ops, - struct device *dev, - spinlock_t *irqlock, - enum v4l2_buf_type type, - enum v4l2_field field, - unsigned int msize, - void *priv); - -In each case, the parameters are the same: q is the queue structure for the -device, ops is the set of callbacks as described above, dev is the device -structure for this video device, irqlock is an interrupt-safe spinlock to -protect access to the data structures, type is the buffer type used by the -device (cameras will use V4L2_BUF_TYPE_VIDEO_CAPTURE, for example), field -describes which field is being captured (often V4L2_FIELD_NONE for -progressive devices), msize is the size of any containing structure used -around struct videobuf_buffer, and priv is a private data pointer which -shows up in the priv_data field of struct videobuf_queue. 
Note that these -are void functions which, evidently, are immune to failure. - -V4L2 capture drivers can be written to support either of two APIs: the -read() system call and the rather more complicated streaming mechanism. As -a general rule, it is necessary to support both to ensure that all -applications have a chance of working with the device. Videobuf makes it -easy to do that with the same code. To implement read(), the driver need -only make a call to one of: - -.. code-block:: none - - ssize_t videobuf_read_one(struct videobuf_queue *q, - char __user *data, size_t count, - loff_t *ppos, int nonblocking); - - ssize_t videobuf_read_stream(struct videobuf_queue *q, - char __user *data, size_t count, - loff_t *ppos, int vbihack, int nonblocking); - -Either one of these functions will read frame data into data, returning the -amount actually read; the difference is that videobuf_read_one() will only -read a single frame, while videobuf_read_stream() will read multiple frames -if they are needed to satisfy the count requested by the application. A -typical driver read() implementation will start the capture engine, call -one of the above functions, then stop the engine before returning (though a -smarter implementation might leave the engine running for a little while in -anticipation of another read() call happening in the near future). - -The poll() function can usually be implemented with a direct call to: - -.. code-block:: none - - unsigned int videobuf_poll_stream(struct file *file, - struct videobuf_queue *q, - poll_table *wait); - -Note that the actual wait queue eventually used will be the one associated -with the first available buffer. - -When streaming I/O is done to kernel-space buffers, the driver must support -the mmap() system call to enable user space to access the data. In many -V4L2 drivers, the often-complex mmap() implementation simplifies to a -single call to: - -.. code-block:: none - - int videobuf_mmap_mapper(struct videobuf_queue *q, - struct vm_area_struct *vma); - -Everything else is handled by the videobuf code. - -The release() function requires two separate videobuf calls: - -.. code-block:: none - - void videobuf_stop(struct videobuf_queue *q); - int videobuf_mmap_free(struct videobuf_queue *q); - -The call to videobuf_stop() terminates any I/O in progress - though it is -still up to the driver to stop the capture engine. The call to -videobuf_mmap_free() will ensure that all buffers have been unmapped; if -so, they will all be passed to the buf_release() callback. If buffers -remain mapped, videobuf_mmap_free() returns an error code instead. The -purpose is clearly to cause the closing of the file descriptor to fail if -buffers are still mapped, but every driver in the 2.6.32 kernel cheerfully -ignores its return value. - -ioctl() operations ------------------- - -The V4L2 API includes a very long list of driver callbacks to respond to -the many ioctl() commands made available to user space. A number of these -- those associated with streaming I/O - turn almost directly into videobuf -calls. The relevant helper functions are: - -.. 
code-block:: none - - int videobuf_reqbufs(struct videobuf_queue *q, - struct v4l2_requestbuffers *req); - int videobuf_querybuf(struct videobuf_queue *q, struct v4l2_buffer *b); - int videobuf_qbuf(struct videobuf_queue *q, struct v4l2_buffer *b); - int videobuf_dqbuf(struct videobuf_queue *q, struct v4l2_buffer *b, - int nonblocking); - int videobuf_streamon(struct videobuf_queue *q); - int videobuf_streamoff(struct videobuf_queue *q); - -So, for example, a VIDIOC_REQBUFS call turns into a call to the driver's -vidioc_reqbufs() callback which, in turn, usually only needs to locate the -proper struct videobuf_queue pointer and pass it to videobuf_reqbufs(). -These support functions can replace a great deal of buffer management -boilerplate in a lot of V4L2 drivers. - -The vidioc_streamon() and vidioc_streamoff() functions will be a bit more -complex, of course, since they will also need to deal with starting and -stopping the capture engine. - -Buffer allocation ------------------ - -Thus far, we have talked about buffers, but have not looked at how they are -allocated. The scatter/gather case is the most complex on this front. For -allocation, the driver can leave buffer allocation entirely up to the -videobuf layer; in this case, buffers will be allocated as anonymous -user-space pages and will be very scattered indeed. If the application is -using user-space buffers, no allocation is needed; the videobuf layer will -take care of calling get_user_pages() and filling in the scatterlist array. - -If the driver needs to do its own memory allocation, it should be done in -the vidioc_reqbufs() function, *after* calling videobuf_reqbufs(). The -first step is a call to: - -.. code-block:: none - - struct videobuf_dmabuf *videobuf_to_dma(struct videobuf_buffer *buf); - -The returned videobuf_dmabuf structure (defined in -<media/videobuf-dma-sg.h>) includes a couple of relevant fields: - -.. code-block:: none - - struct scatterlist *sglist; - int sglen; - -The driver must allocate an appropriately-sized scatterlist array and -populate it with pointers to the pieces of the allocated buffer; sglen -should be set to the length of the array. - -Drivers using the vmalloc() method need not (and cannot) concern themselves -with buffer allocation at all; videobuf will handle those details. The -same is normally true of contiguous-DMA drivers as well; videobuf will -allocate the buffers (with dma_alloc_coherent()) when it sees fit. That -means that these drivers may be trying to do high-order allocations at any -time, an operation which is not always guaranteed to work. Some drivers -play tricks by allocating DMA space at system boot time; videobuf does not -currently play well with those drivers. - -As of 2.6.31, contiguous-DMA drivers can work with a user-supplied buffer, -as long as that buffer is physically contiguous. Normal user-space -allocations will not meet that criterion, but buffers obtained from other -kernel drivers, or those contained within huge pages, will work with these -drivers. - -Filling the buffers -------------------- - -The final part of a videobuf implementation has no direct callback - it's -the portion of the code which actually puts frame data into the buffers, -usually in response to interrupts from the device. For all types of -drivers, this process works approximately as follows: - - - Obtain the next available buffer and make sure that somebody is actually - waiting for it. - - - Get a pointer to the memory and put video data there. 
- - - Mark the buffer as done and wake up the process waiting for it. - -Step (1) above is done by looking at the driver-managed list_head structure -- the one which is filled in the buf_queue() callback. Because starting -the engine and enqueueing buffers are done in separate steps, it's possible -for the engine to be running without any buffers available - in the -vmalloc() case especially. So the driver should be prepared for the list -to be empty. It is equally possible that nobody is yet interested in the -buffer; the driver should not remove it from the list or fill it until a -process is waiting on it. That test can be done by examining the buffer's -done field (a wait_queue_head_t structure) with waitqueue_active(). - -A buffer's state should be set to VIDEOBUF_ACTIVE before being mapped for -DMA; that ensures that the videobuf layer will not try to do anything with -it while the device is transferring data. - -For scatter/gather drivers, the needed memory pointers will be found in the -scatterlist structure described above. Drivers using the vmalloc() method -can get a memory pointer with: - -.. code-block:: none - - void *videobuf_to_vmalloc(struct videobuf_buffer *buf); - -For contiguous DMA drivers, the function to use is: - -.. code-block:: none - - dma_addr_t videobuf_to_dma_contig(struct videobuf_buffer *buf); - -The contiguous DMA API goes out of its way to hide the kernel-space address -of the DMA buffer from drivers. - -The final step is to set the size field of the relevant videobuf_buffer -structure to the actual size of the captured image, set state to -VIDEOBUF_DONE, then call wake_up() on the done queue. At this point, the -buffer is owned by the videobuf layer and the driver should not touch it -again. - -Developers who are interested in more information can go into the relevant -header files; there are a few low-level functions declared there which have -not been talked about here. Note also that all of these calls are exported -GPL-only, so they will not be available to non-GPL kernel modules. diff --git a/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt b/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt index a88fcbc11eca..9cc97ec75d7a 100644 --- a/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt +++ b/Documentation/translations/zh_CN/video4linux/v4l2-framework.txt @@ -768,18 +768,6 @@ const char *video_device_node_name(struct video_device *vdev); 此功能,而非访问 video_device::num 和 video_device::minor 域。 -视频缓冲辅助函数 ---------------- - -v4l2 核心 API 提供了一个处理视频缓冲的标准方法(称为“videobuf”)。 -这些方法使驱动可以通过统一的方式实现 read()、mmap() 和 overlay()。 -目前在设备上支持视频缓冲的方法有分散/聚集 DMA(videobuf-dma-sg)、 -线性 DMA(videobuf-dma-contig)以及大多用于 USB 设备的用 vmalloc -分配的缓冲(videobuf-vmalloc)。 - -请参阅 Documentation/driver-api/media/v4l2-videobuf.rst,以获得更多关于 videobuf -层的使用信息。 - v4l2_fh 结构体 ------------- diff --git a/Documentation/userspace-api/media/drivers/camera-sensor.rst b/Documentation/userspace-api/media/drivers/camera-sensor.rst new file mode 100644 index 000000000000..919a50e8b9d9 --- /dev/null +++ b/Documentation/userspace-api/media/drivers/camera-sensor.rst @@ -0,0 +1,104 @@ +.. SPDX-License-Identifier: GPL-2.0 + +.. _media_using_camera_sensor_drivers: + +Using camera sensor drivers +=========================== + +This section describes common practices for how the V4L2 sub-device interface is +used to control the camera sensor drivers. + +You may also find :ref:`media_writing_camera_sensor_drivers` useful. 
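For orientation, the sketch below shows the basic mechanism that both frame size methods described next rely on: setting the active format on a sensor sub-device's source pad. This is an illustrative userspace sketch only; the device node path, resolution and media bus code are assumptions, not taken from this patch.

.. code-block:: c

   /* Minimal sketch: set the active format on a sensor sub-device.
    * The node path and media bus code below are assumptions. */
   #include <fcntl.h>
   #include <stdio.h>
   #include <string.h>
   #include <sys/ioctl.h>
   #include <unistd.h>
   #include <linux/media-bus-format.h>
   #include <linux/v4l2-subdev.h>
   #include <linux/videodev2.h>

   int main(void)
   {
       struct v4l2_subdev_format fmt;
       int fd = open("/dev/v4l-subdev0", O_RDWR);   /* hypothetical sensor node */

       if (fd < 0) {
           perror("open");
           return 1;
       }

       memset(&fmt, 0, sizeof(fmt));
       fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;        /* program the hardware, not the TRY state */
       fmt.pad = 0;                                  /* sensor source pad */
       fmt.format.width = 1920;
       fmt.format.height = 1080;
       fmt.format.code = MEDIA_BUS_FMT_SRGGB10_1X10;
       fmt.format.field = V4L2_FIELD_NONE;

       /* The driver may adjust the requested values; the result is returned in fmt. */
       if (ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt) < 0)
           perror("VIDIOC_SUBDEV_S_FMT");
       else
           printf("active format: %ux%u, code 0x%04x\n",
                  fmt.format.width, fmt.format.height, fmt.format.code);

       close(fd);
       return 0;
   }

How a register list based driver maps such a request to one of its preset modes, and how a freely configurable driver propagates it through its internal sub-devices, is described in the two subsections that follow.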
+ +Frame size +---------- + +There are two distinct ways to configure the frame size produced by camera +sensors. + +Freely configurable camera sensor drivers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Freely configurable camera sensor drivers expose the device's internal +processing pipeline as one or more sub-devices with different cropping and +scaling configurations. The output size of the device is the result of a series +of cropping and scaling operations starting from the size of the device's pixel array. + +An example of such a driver is the CCS driver. + +Register list based drivers +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Register list based drivers generally, instead of being able to configure the device +they control based on user requests, are limited to a number of preset +configurations that combine a number of different parameters that are independent at the hardware +level. How a driver picks such a configuration is based on the +format set on a source pad at the end of the device's internal pipeline. + +Most sensor drivers are implemented this way. + +Frame interval configuration +---------------------------- + +There are two different methods for obtaining the possible frame +intervals as well as for configuring the frame interval. Which one to implement +depends on the type of the device. + +Raw camera sensors +~~~~~~~~~~~~~~~~~~ + +Instead of a high level parameter such as frame interval, the frame interval is +a result of the configuration of a number of camera sensor implementation +specific parameters. Luckily, these parameters tend to be the same for more or +less all modern raw camera sensors. + +The frame interval is calculated using the following equation:: + + frame interval = (analogue crop width + horizontal blanking) * + (analogue crop height + vertical blanking) / pixel rate + +The formula is bus independent and is applicable for raw timing parameters on a +large variety of devices beyond camera sensors. Devices that have no analogue +crop use the full source image size, i.e. the pixel array size. + +Horizontal and vertical blanking are specified by ``V4L2_CID_HBLANK`` and +``V4L2_CID_VBLANK``, respectively. The unit of the ``V4L2_CID_HBLANK`` control +is pixels and the unit of the ``V4L2_CID_VBLANK`` control is lines. The pixel rate in +the sensor's **pixel array** is specified by ``V4L2_CID_PIXEL_RATE`` in the same +sub-device. The unit of that control is pixels per second. + +Register list based drivers need to implement read-only sub-device nodes for this +purpose. Devices that are not register list based need these to configure the +device's internal processing pipeline. + +The first entity in the linear pipeline is the pixel array. The pixel array may +be followed by other entities that are there to allow configuring binning, +skipping, scaling or digital crop, see :ref:`VIDIOC_SUBDEV_G_SELECTION +<VIDIOC_SUBDEV_G_SELECTION>`. + +USB cameras etc. devices +~~~~~~~~~~~~~~~~~~~~~~~~ + +USB video class hardware, as well as many cameras offering a similar higher +level interface natively, generally use the concept of frame interval (or frame +rate) on device level in firmware or hardware. This means lower level controls +implemented by raw cameras may not be used in the uAPI (or even the kAPI) to control the +frame interval on these devices. + +Rotation, orientation and flipping +---------------------------------- + +Some systems have the camera sensor mounted upside down compared to its natural +mounting rotation.
In such cases, drivers shall expose the information to +userspace with the :ref:`V4L2_CID_CAMERA_SENSOR_ROTATION +<v4l2-camera-sensor-rotation>` control. + +Sensor drivers shall also report the sensor's mounting orientation with the +:ref:`V4L2_CID_CAMERA_SENSOR_ORIENTATION <v4l2-camera-sensor-orientation>`. + +Sensor drivers that have any vertical or horizontal flips embedded in the +register programming sequences shall initialize the :ref:`V4L2_CID_HFLIP +<v4l2-cid-hflip>` and :ref:`V4L2_CID_VFLIP <v4l2-cid-vflip>` controls with the +values programmed by the register sequences. The default values of these +controls shall be 0 (disabled). In particular, these controls shall not be inverted, +independently of the sensor's mounting rotation. diff --git a/Documentation/userspace-api/media/drivers/index.rst b/Documentation/userspace-api/media/drivers/index.rst index 6708d649afd7..1726f8ec86fa 100644 --- a/Documentation/userspace-api/media/drivers/index.rst +++ b/Documentation/userspace-api/media/drivers/index.rst @@ -32,11 +32,13 @@ For more details see the file COPYING in the source distribution of Linux. :numbered: aspeed-video + camera-sensor ccs cx2341x-uapi dw100 imx-uapi max2175 + npcm-video omap3isp-uapi st-vgxy61 uvcvideo diff --git a/Documentation/userspace-api/media/drivers/npcm-video.rst b/Documentation/userspace-api/media/drivers/npcm-video.rst new file mode 100644 index 000000000000..b47771dd8b27 --- /dev/null +++ b/Documentation/userspace-api/media/drivers/npcm-video.rst @@ -0,0 +1,66 @@ +.. SPDX-License-Identifier: GPL-2.0 + +.. include:: <isonum.txt> + +NPCM video driver +================= + +This driver is used to control the Video Capture/Differentiation (VCD) engine +and Encoding Compression Engine (ECE) present on Nuvoton NPCM SoCs. The VCD can +capture a frame from digital video input and compare two frames in memory, and +the ECE can compress the frame data into HEXTILE format. + +Driver-specific Controls +------------------------ + +V4L2_CID_NPCM_CAPTURE_MODE +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The VCD engine supports two modes: + +- COMPLETE mode: + + Capture the next complete frame into memory. + +- DIFF mode: + + Compare the incoming frame with the frame stored in memory, and update the + differentiated frame in memory. + +Applications can use the ``V4L2_CID_NPCM_CAPTURE_MODE`` control to set the VCD mode +with different control values (enum v4l2_npcm_capture_mode): + +- ``V4L2_NPCM_CAPTURE_MODE_COMPLETE``: will set VCD to COMPLETE mode. +- ``V4L2_NPCM_CAPTURE_MODE_DIFF``: will set VCD to DIFF mode. + +V4L2_CID_NPCM_RECT_COUNT +~~~~~~~~~~~~~~~~~~~~~~~~ + +If using the V4L2_PIX_FMT_HEXTILE format, the VCD will capture frame data and then the ECE +will compress the data into HEXTILE rectangles and store them in the V4L2 video +buffer with the layout defined in the Remote Framebuffer Protocol: +:: + + (RFC 6143, https://www.rfc-editor.org/rfc/rfc6143.html#section-7.6.1) + + +--------------+--------------+-------------------+ + | No. of bytes | Type [Value] | Description | + +--------------+--------------+-------------------+ + | 2 | U16 | x-position | + | 2 | U16 | y-position | + | 2 | U16 | width | + | 2 | U16 | height | + | 4 | S32 | encoding-type (5) | + +--------------+--------------+-------------------+ + | HEXTILE rectangle data | + +-------------------------------------------------+ + +Applications can get the video buffer through VIDIOC_DQBUF and then read the +``V4L2_CID_NPCM_RECT_COUNT`` control to get the number of HEXTILE +rectangles in this buffer.
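As an illustration of that flow, here is a hedged userspace sketch of dequeuing one frame and reading the rectangle count. It assumes the control and format IDs come from include/uapi/linux/npcm-video.h and <linux/videodev2.h> as the References section states, that streaming is already set up, and that ``data`` points at the mmap()ed memory of the dequeued buffer.

.. code-block:: c

   /* Sketch only: dequeue one HEXTILE frame and read how many rectangles
    * it holds. The npcm-video.h control IDs and the "data" pointer are
    * assumptions about the surrounding application code. */
   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>
   #include <sys/ioctl.h>
   #include <linux/videodev2.h>
   #include <linux/npcm-video.h>

   static void read_hextile_frame(int fd, const uint8_t *data)
   {
       struct v4l2_buffer buf;
       struct v4l2_control ctrl;

       memset(&buf, 0, sizeof(buf));
       buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
       buf.memory = V4L2_MEMORY_MMAP;
       if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) {
           perror("VIDIOC_DQBUF");
           return;
       }

       memset(&ctrl, 0, sizeof(ctrl));
       ctrl.id = V4L2_CID_NPCM_RECT_COUNT;
       if (ioctl(fd, VIDIOC_G_CTRL, &ctrl) < 0) {
           perror("VIDIOC_G_CTRL");
           return;
       }

       /* Each rectangle begins with the RFC 6143 header shown above:
        * four big-endian 16-bit fields and a 32-bit encoding type (5). */
       if (ctrl.value > 0 && buf.bytesused >= 12) {
           uint16_t x = (data[0] << 8) | data[1];
           uint16_t y = (data[2] << 8) | data[3];
           uint16_t w = (data[4] << 8) | data[5];
           uint16_t h = (data[6] << 8) | data[7];

           printf("%d rectangle(s), first: %ux%u at (%u,%u)\n",
                  ctrl.value, w, h, x, y);
       }

       /* Re-queue the buffer for the next capture. */
       if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
           perror("VIDIOC_QBUF");
   }

Walking the remaining rectangles requires decoding each Hextile payload as described in RFC 6143, since the rectangle data length is not fixed.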
+ +References +---------- +include/uapi/linux/npcm-video.h + +**Copyright** |copy| 2022 Nuvoton Technologies diff --git a/Documentation/userspace-api/media/gen-errors.rst b/Documentation/userspace-api/media/gen-errors.rst index e595d0bea109..4e8defd3612b 100644 --- a/Documentation/userspace-api/media/gen-errors.rst +++ b/Documentation/userspace-api/media/gen-errors.rst @@ -59,9 +59,7 @@ Generic Error Codes - - ``ENOTTY`` - - The ioctl is not supported by the driver, actually meaning that - the required functionality is not available, or the file - descriptor is not for a media device. + - The ioctl is not supported by the file descriptor. - - ``ENOSPC`` diff --git a/Documentation/userspace-api/media/v4l/buffer.rst b/Documentation/userspace-api/media/v4l/buffer.rst index 04dec3e570ed..52bbee81c080 100644 --- a/Documentation/userspace-api/media/v4l/buffer.rst +++ b/Documentation/userspace-api/media/v4l/buffer.rst @@ -549,9 +549,9 @@ Buffer Flags - 0x00000400 - The buffer has been prepared for I/O and can be queued by the application. Drivers set or clear this flag when the - :ref:`VIDIOC_QUERYBUF`, + :ref:`VIDIOC_QUERYBUF <VIDIOC_QUERYBUF>`, :ref:`VIDIOC_PREPARE_BUF <VIDIOC_QBUF>`, - :ref:`VIDIOC_QBUF` or + :ref:`VIDIOC_QBUF <VIDIOC_QBUF>` or :ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl is called. * .. _`V4L2-BUF-FLAG-NO-CACHE-INVALIDATE`: diff --git a/Documentation/userspace-api/media/v4l/control.rst b/Documentation/userspace-api/media/v4l/control.rst index 4463fce694b0..57893814a1e5 100644 --- a/Documentation/userspace-api/media/v4l/control.rst +++ b/Documentation/userspace-api/media/v4l/control.rst @@ -143,9 +143,13 @@ Control IDs recognise the difference between digital and analogue gain use controls ``V4L2_CID_DIGITAL_GAIN`` and ``V4L2_CID_ANALOGUE_GAIN``. +.. _v4l2-cid-hflip: + ``V4L2_CID_HFLIP`` ``(boolean)`` Mirror the picture horizontally. +.. _v4l2-cid-vflip: + ``V4L2_CID_VFLIP`` ``(boolean)`` Mirror the picture vertically. diff --git a/Documentation/userspace-api/media/v4l/dev-subdev.rst b/Documentation/userspace-api/media/v4l/dev-subdev.rst index a4f1df7093e8..43988516acdd 100644 --- a/Documentation/userspace-api/media/v4l/dev-subdev.rst +++ b/Documentation/userspace-api/media/v4l/dev-subdev.rst @@ -579,20 +579,19 @@ is started. There are three steps in configuring the streams: -1) Set up links. Connect the pads between sub-devices using the :ref:`Media -Controller API <media_controller>` +1. Set up links. Connect the pads between sub-devices using the + :ref:`Media Controller API <media_controller>` -2) Streams. Streams are declared and their routing is configured by -setting the routing table for the sub-device using -:ref:`VIDIOC_SUBDEV_S_ROUTING <VIDIOC_SUBDEV_G_ROUTING>` ioctl. Note that -setting the routing table will reset formats and selections in the -sub-device to default values. +2. Streams. Streams are declared and their routing is configured by setting the + routing table for the sub-device using :ref:`VIDIOC_SUBDEV_S_ROUTING + <VIDIOC_SUBDEV_G_ROUTING>` ioctl. Note that setting the routing table will + reset formats and selections in the sub-device to default values. -3) Configure formats and selections. Formats and selections of each stream -are configured separately as documented for plain sub-devices in -:ref:`format-propagation`. The stream ID is set to the same stream ID -associated with either sink or source pads of routes configured using the -:ref:`VIDIOC_SUBDEV_S_ROUTING <VIDIOC_SUBDEV_G_ROUTING>` ioctl. +3. Configure formats and selections. 
Formats and selections of each stream are + configured separately as documented for plain sub-devices in + :ref:`format-propagation`. The stream ID is set to the same stream ID + associated with either sink or source pads of routes configured using the + :ref:`VIDIOC_SUBDEV_S_ROUTING <VIDIOC_SUBDEV_G_ROUTING>` ioctl. Multiplexed streams setup example ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -618,11 +617,11 @@ modeled as V4L2 devices, exposed to userspace via /dev/videoX nodes. To configure this pipeline, the userspace must take the following steps: -1) Set up media links between entities: connect the sensors to the bridge, -bridge to the receiver, and the receiver to the DMA engines. This step does -not differ from normal non-multiplexed media controller setup. +1. Set up media links between entities: connect the sensors to the bridge, + bridge to the receiver, and the receiver to the DMA engines. This step does + not differ from normal non-multiplexed media controller setup. -2) Configure routing +2. Configure routing .. flat-table:: Bridge routing table :header-rows: 1 @@ -656,14 +655,14 @@ not differ from normal non-multiplexed media controller setup. - V4L2_SUBDEV_ROUTE_FL_ACTIVE - Pixel data stream from Sensor B -3) Configure formats and selections +3. Configure formats and selections -After configuring routing, the next step is configuring the formats and -selections for the streams. This is similar to performing this step without -streams, with just one exception: the ``stream`` field needs to be assigned -to the value of the stream ID. + After configuring routing, the next step is configuring the formats and + selections for the streams. This is similar to performing this step without + streams, with just one exception: the ``stream`` field needs to be assigned + to the value of the stream ID. -A common way to accomplish this is to start from the sensors and propagate the -configurations along the stream towards the receiver, -using :ref:`VIDIOC_SUBDEV_S_FMT <VIDIOC_SUBDEV_G_FMT>` ioctls to configure each -stream endpoint in each sub-device. + A common way to accomplish this is to start from the sensors and propagate + the configurations along the stream towards the receiver, using + :ref:`VIDIOC_SUBDEV_S_FMT <VIDIOC_SUBDEV_G_FMT>` ioctls to configure each + stream endpoint in each sub-device. diff --git a/Documentation/userspace-api/media/v4l/dv-timings.rst b/Documentation/userspace-api/media/v4l/dv-timings.rst index e17f056b129f..4b19bcb4bd80 100644 --- a/Documentation/userspace-api/media/v4l/dv-timings.rst +++ b/Documentation/userspace-api/media/v4l/dv-timings.rst @@ -33,6 +33,27 @@ current DV timings they use the the DV timings as seen by the video receiver applications use the :ref:`VIDIOC_QUERY_DV_TIMINGS` ioctl. +When the hardware detects a video source change (e.g. the video +signal appears or disappears, or the video resolution changes), then +it will issue a `V4L2_EVENT_SOURCE_CHANGE` event. Use the +:ref:`ioctl VIDIOC_SUBSCRIBE_EVENT <VIDIOC_SUBSCRIBE_EVENT>` and the +:ref:`VIDIOC_DQEVENT` to check if this event was reported. + +If the video signal changed, then the application has to stop +streaming, free all buffers, and call the :ref:`VIDIOC_QUERY_DV_TIMINGS` +to obtain the new video timings, and if they are valid, it can set +those by calling the :ref:`ioctl VIDIOC_S_DV_TIMINGS <VIDIOC_G_DV_TIMINGS>`. +This will also update the format, so use the :ref:`ioctl VIDIOC_G_FMT <VIDIOC_G_FMT>` +to obtain the new format. 
Now the application can allocate new buffers +and start streaming again. + +The :ref:`VIDIOC_QUERY_DV_TIMINGS` will just report what the +hardware detects, it will never change the configuration. If the +currently set timings and the actually detected timings differ, then +typically this will mean that you will not be able to capture any +video. The correct approach is to rely on the `V4L2_EVENT_SOURCE_CHANGE` +event so you know when something changed. + Applications can make use of the :ref:`input-capabilities` and :ref:`output-capabilities` flags to determine whether the digital video ioctls can be used with the given input or output. diff --git a/Documentation/userspace-api/media/v4l/pixfmt-reserved.rst b/Documentation/userspace-api/media/v4l/pixfmt-reserved.rst index 296ad2025e8d..886ba7b08d6b 100644 --- a/Documentation/userspace-api/media/v4l/pixfmt-reserved.rst +++ b/Documentation/userspace-api/media/v4l/pixfmt-reserved.rst @@ -288,6 +288,13 @@ please make a proposal on the linux-media mailing list. - 'MT2110R' - This format is two-planar 10-Bit raster mode and having similitude with ``V4L2_PIX_FMT_MM21`` in term of alignment and tiling. Used for AVC. + * .. _V4L2-PIX-FMT-HEXTILE: + + - ``V4L2_PIX_FMT_HEXTILE`` + - 'HXTL' + - Compressed format used by Nuvoton NPCM video driver. This format is + defined in Remote Framebuffer Protocol (RFC 6143, chapter 7.7.4 Hextile + Encoding). .. raw:: latex \normalsize diff --git a/Documentation/userspace-api/media/v4l/pixfmt-srggb12p.rst b/Documentation/userspace-api/media/v4l/pixfmt-srggb12p.rst index b6e79e2f8ce4..7c3810ff783c 100644 --- a/Documentation/userspace-api/media/v4l/pixfmt-srggb12p.rst +++ b/Documentation/userspace-api/media/v4l/pixfmt-srggb12p.rst @@ -60,7 +60,7 @@ Each cell is one byte. G\ :sub:`10low`\ (bits 3--0) - G\ :sub:`12high` - R\ :sub:`13high` - - R\ :sub:`13low`\ (bits 3--2) + - R\ :sub:`13low`\ (bits 7--4) G\ :sub:`12low`\ (bits 3--0) - - start + 12: @@ -82,6 +82,6 @@ Each cell is one byte. G\ :sub:`30low`\ (bits 3--0) - G\ :sub:`32high` - R\ :sub:`33high` - - R\ :sub:`33low`\ (bits 3--2) + - R\ :sub:`33low`\ (bits 7--4) G\ :sub:`32low`\ (bits 3--0) |
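Two short illustrative sketches follow; neither is part of the patch. The first shows the source-change handling sequence described in the dv-timings.rst hunk above, with error handling and buffer reallocation elided:

.. code-block:: c

   #include <string.h>
   #include <sys/ioctl.h>
   #include <linux/videodev2.h>

   /* Once, after opening the video node:
    *   struct v4l2_event_subscription sub = { .type = V4L2_EVENT_SOURCE_CHANGE };
    *   ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub);
    */
   static void handle_source_change(int fd)
   {
       struct v4l2_event ev;
       struct v4l2_dv_timings timings;
       struct v4l2_format fmt;
       int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

       if (ioctl(fd, VIDIOC_DQEVENT, &ev) < 0 || ev.type != V4L2_EVENT_SOURCE_CHANGE)
           return;

       /* Stop streaming and free the old buffers (VIDIOC_REQBUFS with count 0). */
       ioctl(fd, VIDIOC_STREAMOFF, &type);

       memset(&timings, 0, sizeof(timings));
       if (ioctl(fd, VIDIOC_QUERY_DV_TIMINGS, &timings) == 0 &&
           ioctl(fd, VIDIOC_S_DV_TIMINGS, &timings) == 0) {
           memset(&fmt, 0, sizeof(fmt));
           fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
           /* Setting the timings also updated the format; fetch it before
            * allocating new buffers and calling VIDIOC_STREAMON again. */
           ioctl(fd, VIDIOC_G_FMT, &fmt);
       }
   }

The second makes the corrected SRGGB12P packing explicit: in each 3-byte group the third byte carries the two low nibbles, bits 3--0 for the first sample and bits 7--4 for the second.

.. code-block:: c

   #include <stdint.h>

   /* Unpack one 3-byte group of V4L2_PIX_FMT_SRGGB12P into two 12-bit samples.
    * src[0] and src[1] hold the 8 high-order bits of the two samples; src[2]
    * holds their low nibbles, e.g. G12 (bits 3-0) and R13 (bits 7-4) in the
    * corrected table above. */
   static inline void unpack_srggb12p_pair(const uint8_t src[3],
                                           uint16_t *s0, uint16_t *s1)
   {
       *s0 = ((uint16_t)src[0] << 4) | (src[2] & 0x0f);
       *s1 = ((uint16_t)src[1] << 4) | (src[2] >> 4);
   }

The same helper applies to the other 12-bit packed Bayer orders, since only the colour sequence differs.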