author      Linus Torvalds    2020-06-06 11:01:58 -0700
committer   Linus Torvalds    2020-06-06 11:01:58 -0700
commit      3925c3bbdf886f1ddf64461b9b380e1bb36f90c1 (patch)
tree        99ebd7c46d46893057be0e5b16ea2bb356a1303b /drivers/pci
parent      9fa88c5d3f5eae3e68ef20d226c3f13e21490668 (diff)
parent      2bd81cd04a3f5eb873cc81fa16c469377be3b092 (diff)
Merge tag 'pci-v5.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Program MPS for RCiEP devices (Ashok Raj)
- Fix pci_register_host_bridge() device_register() error handling
(Rob Herring)
- Fix pci_host_bridge struct device release/free handling (Rob
Herring)

Resource management:
- Allow resizing BARs for devices on root bus (Ard Biesheuvel)

Power management:
- Reduce Thunderbolt resume time by working around devices that don't
support DLL Link Active reporting (Mika Westerberg)
- Work around a Pericom USB controller OHCI/EHCI PME# defect
(Kai-Heng Feng)

Virtualization:
- Add ACS quirk for Intel Root Complex Integrated Endpoints (Ashok
Raj)
- Avoid FLR for AMD Starship USB 3.0 (Kevin Buettner)
- Avoid FLR for AMD Matisse HD Audio & USB 3.0 (Marcos Scriven)

Error handling:
- Use only _OSC (not HEST FIRMWARE_FIRST) to determine AER ownership
(Alexandru Gagniuc, Kuppuswamy Sathyanarayanan)
- Reduce verbosity by logging only ACPI_NOTIFY_DISCONNECT_RECOVER
events (Kuppuswamy Sathyanarayanan)
- Don't enable AER by default in Kconfig (Bjorn Helgaas)

Peer-to-peer DMA:
- Add AMD Zen Raven and Renoir Root Ports to whitelist (Alex Deucher)

ASPM:
- Allow ASPM on links to PCIe-to-PCI/PCI-X Bridges (Kai-Heng Feng)

Endpoint framework:
- Fix DMA channel release in test (Kunihiko Hayashi)
- Add page size as argument to pci_epc_mem_init() (Lad Prabhakar)
- Add support to handle multiple base for mapping outbound memory
(Lad Prabhakar)

Generic host bridge driver:
- Support building as module (Rob Herring)
- Eliminate pci_host_common_probe wrappers (Rob Herring)

Amlogic Meson PCIe controller driver:
- Don't use FAST_LINK_MODE to set up link (Marc Zyngier)

Broadcom STB PCIe controller driver:
- Disable ASPM L0s if 'aspm-no-l0s' in DT (Jim Quinlan)
- Fix clk_put() error (Jim Quinlan)
- Fix window register offset (Jim Quinlan)
- Assert fundamental reset on initialization (Nicolas Saenz Julienne)
- Add notify xHCI reset property (Nicolas Saenz Julienne)
- Add init routine for Raspberry Pi 4 VL805 USB controller (Nicolas
Saenz Julienne)
- Sync with Raspberry Pi 4 firmware for VL805 initialization (Nicolas
Saenz Julienne)

Cadence PCIe controller driver:
- Remove "cdns,max-outbound-regions" DT property (replaced by
"ranges") (Kishon Vijay Abraham I)
- Read 32-bit (not 16-bit) Vendor ID/Device ID property from DT
(Kishon Vijay Abraham I)

Marvell Aardvark PCIe controller driver:
- Improve link training (Marek Behún)
- Add PHY support (Marek Behún)
- Add "phys", "max-link-speed", "reset-gpios" to dt-binding (Marek
Behún)
- Train link immediately after enabling training to work around
detection issues with some cards (Pali Rohár)
- Issue PERST via GPIO to work around detection issues (Pali Rohár)
- Don't blindly enable ASPM L0s (Pali Rohár)
- Replace custom macros by standard linux/pci_regs.h macros (Pali
Rohár)

Microsoft Hyper-V host bridge driver:
- Fix probe failure path to release resource (Wei Hu)
- Retry PCI bus D0 entry on invalid device state for kdump (Wei Hu)

Renesas R-Car PCIe controller driver:
- Fix incorrect programming of OB windows (Andrew Murray)
- Add suspend/resume (Kazufumi Ikeda)
- Rename pcie-rcar.c to pcie-rcar-host.c (Lad Prabhakar)
- Add endpoint controller driver (Lad Prabhakar)
- Fix PCIEPAMR mask calculation (Lad Prabhakar)
- Add r8a77961 to DT binding (Yoshihiro Shimoda)

Socionext UniPhier Pro5 controller driver:
- Add endpoint controller driver (Kunihiko Hayashi)

Synopsys DesignWare PCIe controller driver:
- Program outbound ATU upper limit register (Alan Mikhak)
- Fix inner MSI IRQ domain registration (Marc Zyngier)

Miscellaneous:
- Check for platform_get_irq() failure consistently (negative return
means failure) (Aman Sharma)
- Fix several runtime PM get/put imbalances (Dinghao Liu)
- Use flexible-array and struct_size() helpers for code cleanup
(Gustavo A. R. Silva)
- Update & fix issues in bridge emulation of PCIe registers (Jon
Derrick)
- Add macros for bridge window names (PCI_BRIDGE_IO_WINDOW, etc)
(Krzysztof Wilczyński)
- Work around Intel PCH MROMs that have invalid BARs (Xiaochun Lee)"
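
The "Miscellaneous" item about platform_get_irq() above refers to the convention
that a negative return value means failure and should be propagated as-is rather
than replaced with -ENODEV. A minimal sketch of that pattern in a hypothetical
driver probe (the foo_pcie_* names are placeholders, not code from this series);
the pci-imx6, pcie-tegra194, mobiveil, pci-aardvark and pci-v3-semi hunks in the
diff below apply this same change to their existing probe paths:

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t foo_pcie_irq_handler(int irq, void *arg)
{
	/* Nothing to do in this sketch; a real handler would ack the IRQ. */
	return IRQ_HANDLED;
}

static int foo_pcie_probe(struct platform_device *pdev)
{
	int irq;

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)		/* negative return means failure ... */
		return irq;	/* ... so propagate it instead of -ENODEV */

	return devm_request_irq(&pdev->dev, irq, foo_pcie_irq_handler,
				IRQF_SHARED, "foo-pcie", NULL);
}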
* tag 'pci-v5.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (100 commits)
PCI: uniphier: Add Socionext UniPhier Pro5 PCIe endpoint controller driver
PCI: Add ACS quirk for Intel Root Complex Integrated Endpoints
PCI/DPC: Print IRQ number used by port
PCI/AER: Use "aer" variable for capability offset
PCI/AER: Remove redundant dev->aer_cap checks
PCI/AER: Remove redundant pci_is_pcie() checks
PCI/AER: Remove HEST/FIRMWARE_FIRST parsing for AER ownership
PCI: tegra: Fix runtime PM imbalance on error
PCI: vmd: Filter resource type bits from shadow register
PCI: tegra194: Fix runtime PM imbalance on error
dt-bindings: PCI: Add UniPhier PCIe endpoint controller description
PCI: hv: Use struct_size() helper
PCI: Rename _DSM constants to align with spec
PCI: Avoid FLR for AMD Starship USB 3.0
PCI: Avoid FLR for AMD Matisse HD Audio & USB 3.0
x86/PCI: Drop unused xen_register_pirq() gsi_override parameter
PCI: dwc: Use private data pointer of "struct irq_domain" to get pcie_port
PCI: amlogic: meson: Don't use FAST_LINK_MODE to set up link
PCI: dwc: Fix inner MSI IRQ domain registration
PCI: dwc: pci-dra7xx: Use devm_platform_ioremap_resource_byname()
...
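
The "PCI: hv: Use struct_size() helper" entry above (part of the flexible-array
and struct_size() cleanup mentioned in the pull message) replaces open-coded
offsetof()-plus-multiplication allocations. A small sketch of the pattern with a
made-up flexible-array structure; struct foo_relations and foo_alloc_relations()
are illustrative, not structures from this series:

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct foo_desc {
	u32 wslot;
};

struct foo_relations {
	u32 device_count;
	struct foo_desc func[];	/* flexible array member */
};

static struct foo_relations *foo_alloc_relations(u32 count)
{
	struct foo_relations *rel;

	/*
	 * Equivalent to kzalloc(offsetof(struct foo_relations, func) +
	 * count * sizeof(struct foo_desc), GFP_KERNEL), but struct_size()
	 * also guards against arithmetic overflow. It only inspects the
	 * type, so passing the not-yet-assigned pointer is fine.
	 */
	rel = kzalloc(struct_size(rel, func, count), GFP_KERNEL);
	if (rel)
		rel->device_count = count;

	return rel;
}

The pci-hyperv hunks in the diff below use the same idiom for the hv_dr_state
allocations and the pci_bus_relations size checks.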
Diffstat (limited to 'drivers/pci')
67 files changed, 3217 insertions, 1871 deletions
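
The first hunks of the diff (pcie-cadence-ep.c, and later pcie-designware-ep.c)
show the endpoint-framework change from the summary: pci_epc_mem_init() now takes
the outbound window's page size as a fourth argument instead of requiring a
separate __pci_epc_mem_init() variant. A hedged sketch of a caller after this
series; foo_ep_init_outbound() and its arguments are placeholders:

#include <linux/ioport.h>
#include <linux/pci-epc.h>

/*
 * Initialize an endpoint controller's outbound window. Controllers with no
 * special alignment requirement pass PAGE_SIZE, as the Cadence driver does
 * below; the DesignWare core passes its own ep->page_size instead.
 */
static int foo_ep_init_outbound(struct pci_epc *epc, struct resource *mem_res)
{
	return pci_epc_mem_init(epc, mem_res->start, resource_size(mem_res),
				PAGE_SIZE);
}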
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig index ae36edb1d7db..b08efea39496 100644 --- a/drivers/pci/controller/Kconfig +++ b/drivers/pci/controller/Kconfig @@ -58,15 +58,33 @@ config PCIE_RCAR bool "Renesas R-Car PCIe controller" depends on ARCH_RENESAS || COMPILE_TEST depends on PCI_MSI_IRQ_DOMAIN + select PCIE_RCAR_HOST help Say Y here if you want PCIe controller support on R-Car SoCs. + This option will be removed after arm64 defconfig is updated. + +config PCIE_RCAR_HOST + bool "Renesas R-Car PCIe host controller" + depends on ARCH_RENESAS || COMPILE_TEST + depends on PCI_MSI_IRQ_DOMAIN + help + Say Y here if you want PCIe controller support on R-Car SoCs in host + mode. + +config PCIE_RCAR_EP + bool "Renesas R-Car PCIe endpoint controller" + depends on ARCH_RENESAS || COMPILE_TEST + depends on PCI_ENDPOINT + help + Say Y here if you want PCIe controller support on R-Car SoCs in + endpoint mode. config PCI_HOST_COMMON - bool + tristate select PCI_ECAM config PCI_HOST_GENERIC - bool "Generic PCI host controller" + tristate "Generic PCI host controller" depends on OF select PCI_HOST_COMMON select IRQ_DOMAIN diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile index fbac4b0190a0..efd9733ead26 100644 --- a/drivers/pci/controller/Makefile +++ b/drivers/pci/controller/Makefile @@ -7,7 +7,8 @@ obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o obj-$(CONFIG_PCI_AARDVARK) += pci-aardvark.o obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o -obj-$(CONFIG_PCIE_RCAR) += pcie-rcar.o +obj-$(CONFIG_PCIE_RCAR_HOST) += pcie-rcar.o pcie-rcar-host.o +obj-$(CONFIG_PCIE_RCAR_EP) += pcie-rcar.o pcie-rcar-ep.o obj-$(CONFIG_PCI_HOST_COMMON) += pci-host-common.o obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c index 1c173dad67d1..1c15c8352125 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c @@ -450,7 +450,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) epc->max_functions = 1; ret = pci_epc_mem_init(epc, pcie->mem_res->start, - resource_size(pcie->mem_res)); + resource_size(pcie->mem_res), PAGE_SIZE); if (ret < 0) { dev_err(dev, "failed to initialize the memory space\n"); goto err_init; diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c index 9b1c3966414b..8c2543f28ba0 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-host.c +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c @@ -140,9 +140,6 @@ static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) for_each_of_pci_range(&parser, &range) { bool is_io; - if (r >= rc->max_regions) - break; - if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM) is_io = false; else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO) @@ -219,17 +216,14 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) pcie = &rc->pcie; pcie->is_rc = true; - rc->max_regions = 32; - of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions); - rc->no_bar_nbits = 32; of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits); rc->vendor_id = 0xffff; - of_property_read_u16(np, "vendor-id", &rc->vendor_id); + of_property_read_u32(np, "vendor-id", &rc->vendor_id); rc->device_id = 0xffff; - of_property_read_u16(np, "device-id", &rc->device_id); + 
of_property_read_u32(np, "device-id", &rc->device_id); res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg"); pcie->reg_base = devm_ioremap_resource(dev, res); diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h index a2b28b912ca4..df14ad002fe9 100644 --- a/drivers/pci/controller/cadence/pcie-cadence.h +++ b/drivers/pci/controller/cadence/pcie-cadence.h @@ -251,7 +251,6 @@ struct cdns_pcie { * @bus_range: first/last buses behind the PCIe host controller * @cfg_base: IO mapped window to access the PCI configuration space of a * single function at a time - * @max_regions: maximum number of regions supported by the hardware * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address * translation (nbits sets into the "no BAR match" register) * @vendor_id: PCI vendor ID @@ -262,10 +261,9 @@ struct cdns_pcie_rc { struct resource *cfg_res; struct resource *bus_range; void __iomem *cfg_base; - u32 max_regions; u32 no_bar_nbits; - u16 vendor_id; - u16 device_id; + u32 vendor_id; + u32 device_id; }; /** diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig index 03dcaf65d159..044a3761c44f 100644 --- a/drivers/pci/controller/dwc/Kconfig +++ b/drivers/pci/controller/dwc/Kconfig @@ -26,7 +26,7 @@ config PCI_DRA7XX_HOST depends on OF && HAS_IOMEM && TI_PIPE3 select PCIE_DW_HOST select PCI_DRA7XX - default y + default y if SOC_DRA7XX help Enables support for the PCIe controller in the DRA7xx SoC to work in host mode. There are two instances of PCIe controller in DRA7xx. @@ -111,7 +111,6 @@ config PCI_KEYSTONE_HOST depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST select PCI_KEYSTONE - default y help Enables support for the PCIe controller in the Keystone SoC to work in host mode. The PCI controller on Keystone is based on @@ -281,15 +280,25 @@ config PCIE_TEGRA194_EP selected. This uses the DesignWare core. config PCIE_UNIPHIER - bool "Socionext UniPhier PCIe controllers" + bool "Socionext UniPhier PCIe host controllers" depends on ARCH_UNIPHIER || COMPILE_TEST depends on OF && HAS_IOMEM depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST help - Say Y here if you want PCIe controller support on UniPhier SoCs. + Say Y here if you want PCIe host controller support on UniPhier SoCs. This driver supports LD20 and PXs3 SoCs. +config PCIE_UNIPHIER_EP + bool "Socionext UniPhier PCIe endpoint controllers" + depends on ARCH_UNIPHIER || COMPILE_TEST + depends on OF && HAS_IOMEM + depends on PCI_ENDPOINT + select PCIE_DW_EP + help + Say Y here if you want PCIe endpoint controller support on + UniPhier SoCs. This driver supports Pro5 SoC. + config PCIE_AL bool "Amazon Annapurna Labs PCIe controller" depends on OF && (ARM64 || COMPILE_TEST) diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile index 8a637cfcf6e9..a751553fa0db 100644 --- a/drivers/pci/controller/dwc/Makefile +++ b/drivers/pci/controller/dwc/Makefile @@ -19,6 +19,7 @@ obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o obj-$(CONFIG_PCI_MESON) += pci-meson.o obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o +obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o # The following drivers are for devices that use the generic ACPI # pci_root.c driver but don't support standard ECAM config access. 
diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c index 3b0e58f2de58..6184ebc9392d 100644 --- a/drivers/pci/controller/dwc/pci-dra7xx.c +++ b/drivers/pci/controller/dwc/pci-dra7xx.c @@ -840,7 +840,6 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) struct phy **phy; struct device_link **link; void __iomem *base; - struct resource *res; struct dw_pcie *pci; struct dra7xx_pcie *dra7xx; struct device *dev = &pdev->dev; @@ -877,10 +876,9 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) return irq; } - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf"); - base = devm_ioremap(dev, res->start, resource_size(res)); - if (!base) - return -ENOMEM; + base = devm_platform_ioremap_resource_byname(pdev, "ti_conf"); + if (IS_ERR(base)) + return PTR_ERR(base); phy_count = of_property_count_strings(np, "phy-names"); if (phy_count < 0) { diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c index acfbd34032a8..8f08ae53f53e 100644 --- a/drivers/pci/controller/dwc/pci-imx6.c +++ b/drivers/pci/controller/dwc/pci-imx6.c @@ -868,9 +868,9 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie, if (IS_ENABLED(CONFIG_PCI_MSI)) { pp->msi_irq = platform_get_irq_byname(pdev, "msi"); - if (pp->msi_irq <= 0) { + if (pp->msi_irq < 0) { dev_err(dev, "failed to get MSI irq\n"); - return -ENODEV; + return pp->msi_irq; } } diff --git a/drivers/pci/controller/dwc/pci-meson.c b/drivers/pci/controller/dwc/pci-meson.c index 3715dceca1bf..ca59ba9e0ecd 100644 --- a/drivers/pci/controller/dwc/pci-meson.c +++ b/drivers/pci/controller/dwc/pci-meson.c @@ -289,11 +289,11 @@ static void meson_pcie_init_dw(struct meson_pcie *mp) meson_cfg_writel(mp, val, PCIE_CFG0); val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF); - val &= ~LINK_CAPABLE_MASK; + val &= ~(LINK_CAPABLE_MASK | FAST_LINK_MODE); meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF); val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF); - val |= LINK_CAPABLE_X1 | FAST_LINK_MODE; + val |= LINK_CAPABLE_X1; meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF); val = meson_elb_readl(mp, PCIE_GEN2_CTRL_OFF); diff --git a/drivers/pci/controller/dwc/pcie-al.c b/drivers/pci/controller/dwc/pcie-al.c index 1eeda2f6371f..270868f3859a 100644 --- a/drivers/pci/controller/dwc/pcie-al.c +++ b/drivers/pci/controller/dwc/pcie-al.c @@ -80,7 +80,7 @@ static int al_pcie_init(struct pci_config_window *cfg) return 0; } -struct pci_ecam_ops al_pcie_ops = { +const struct pci_ecam_ops al_pcie_ops = { .bus_shift = 20, .init = al_pcie_init, .pci_ops = { diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c index 1cdcbd102ce8..5e5b8821bed8 100644 --- a/drivers/pci/controller/dwc/pcie-designware-ep.c +++ b/drivers/pci/controller/dwc/pcie-designware-ep.c @@ -412,11 +412,11 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, reg = ep->msi_cap + PCI_MSI_DATA_32; msg_data = dw_pcie_readw_dbi(pci, reg); } - aligned_offset = msg_addr_lower & (epc->mem->page_size - 1); + aligned_offset = msg_addr_lower & (epc->mem->window.page_size - 1); msg_addr = ((u64)msg_addr_upper) << 32 | (msg_addr_lower & ~aligned_offset); ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, - epc->mem->page_size); + epc->mem->window.page_size); if (ret) return ret; @@ -433,7 +433,6 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no, struct dw_pcie *pci = to_dw_pcie_from_ep(ep); struct 
pci_epf_msix_tbl *msix_tbl; struct pci_epc *epc = ep->epc; - struct pci_epf_bar *epf_bar; u32 reg, msg_data, vec_ctrl; unsigned int aligned_offset; u32 tbl_offset; @@ -446,10 +445,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no, bir = (tbl_offset & PCI_MSIX_TABLE_BIR); tbl_offset &= PCI_MSIX_TABLE_OFFSET; - epf_bar = ep->epf_bar[bir]; - msix_tbl = epf_bar->addr; - msix_tbl = (struct pci_epf_msix_tbl *)((char *)msix_tbl + tbl_offset); - + msix_tbl = ep->epf_bar[bir]->addr + tbl_offset; msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr; msg_data = msix_tbl[(interrupt_num - 1)].msg_data; vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl; @@ -459,9 +455,9 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no, return -EPERM; } - aligned_offset = msg_addr & (epc->mem->page_size - 1); + aligned_offset = msg_addr & (epc->mem->window.page_size - 1); ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, - epc->mem->page_size); + epc->mem->window.page_size); if (ret) return ret; @@ -477,7 +473,7 @@ void dw_pcie_ep_exit(struct dw_pcie_ep *ep) struct pci_epc *epc = ep->epc; pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, - epc->mem->page_size); + epc->mem->window.page_size); pci_epc_mem_exit(epc); } @@ -610,15 +606,15 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) if (ret < 0) epc->max_functions = 1; - ret = __pci_epc_mem_init(epc, ep->phys_base, ep->addr_size, - ep->page_size); + ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size, + ep->page_size); if (ret < 0) { dev_err(dev, "Failed to initialize address space\n"); return ret; } ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys, - epc->mem->page_size); + epc->mem->window.page_size); if (!ep->msi_mem) { dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n"); return -ENOMEM; diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c index 395feb8ca051..0a4a5aa6fe46 100644 --- a/drivers/pci/controller/dwc/pcie-designware-host.c +++ b/drivers/pci/controller/dwc/pcie-designware-host.c @@ -236,7 +236,7 @@ static void dw_pcie_irq_domain_free(struct irq_domain *domain, unsigned int virq, unsigned int nr_irqs) { struct irq_data *d = irq_domain_get_irq_data(domain, virq); - struct pcie_port *pp = irq_data_get_irq_chip_data(d); + struct pcie_port *pp = domain->host_data; unsigned long flags; raw_spin_lock_irqsave(&pp->lock, flags); @@ -264,6 +264,8 @@ int dw_pcie_allocate_domains(struct pcie_port *pp) return -ENOMEM; } + irq_domain_update_bus_token(pp->irq_domain, DOMAIN_BUS_NEXUS); + pp->msi_domain = pci_msi_create_irq_domain(fwnode, &dw_pcie_msi_domain_info, pp->irq_domain); diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c index 681548c88282..c92496e36fd5 100644 --- a/drivers/pci/controller/dwc/pcie-designware.c +++ b/drivers/pci/controller/dwc/pcie-designware.c @@ -244,13 +244,16 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index, u64 pci_addr, u32 size) { u32 retries, val; + u64 limit_addr = cpu_addr + size - 1; dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_BASE, lower_32_bits(cpu_addr)); dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_BASE, upper_32_bits(cpu_addr)); - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LIMIT, - lower_32_bits(cpu_addr + size - 1)); + dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_LIMIT, + lower_32_bits(limit_addr)); + dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_LIMIT, + 
upper_32_bits(limit_addr)); dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_TARGET, lower_32_bits(pci_addr)); dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET, diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h index d6e1f397e6b0..656e00f8fbeb 100644 --- a/drivers/pci/controller/dwc/pcie-designware.h +++ b/drivers/pci/controller/dwc/pcie-designware.h @@ -112,9 +112,10 @@ #define PCIE_ATU_UNR_REGION_CTRL2 0x04 #define PCIE_ATU_UNR_LOWER_BASE 0x08 #define PCIE_ATU_UNR_UPPER_BASE 0x0C -#define PCIE_ATU_UNR_LIMIT 0x10 +#define PCIE_ATU_UNR_LOWER_LIMIT 0x10 #define PCIE_ATU_UNR_LOWER_TARGET 0x14 #define PCIE_ATU_UNR_UPPER_TARGET 0x18 +#define PCIE_ATU_UNR_UPPER_LIMIT 0x20 /* * The default address offset between dbi_base and atu_base. Root controller diff --git a/drivers/pci/controller/dwc/pcie-hisi.c b/drivers/pci/controller/dwc/pcie-hisi.c index 6d9e1b2b8f7b..0ad4e07dd4c2 100644 --- a/drivers/pci/controller/dwc/pcie-hisi.c +++ b/drivers/pci/controller/dwc/pcie-hisi.c @@ -104,7 +104,7 @@ static int hisi_pcie_init(struct pci_config_window *cfg) return 0; } -struct pci_ecam_ops hisi_pcie_ops = { +const struct pci_ecam_ops hisi_pcie_ops = { .bus_shift = 20, .init = hisi_pcie_init, .pci_ops = { @@ -332,15 +332,6 @@ static struct platform_driver hisi_pcie_driver = { }; builtin_platform_driver(hisi_pcie_driver); -static int hisi_pcie_almost_ecam_probe(struct platform_device *pdev) -{ - struct device *dev = &pdev->dev; - struct pci_ecam_ops *ops; - - ops = (struct pci_ecam_ops *)of_device_get_match_data(dev); - return pci_host_common_probe(pdev, ops); -} - static int hisi_pcie_platform_init(struct pci_config_window *cfg) { struct device *dev = cfg->parent; @@ -362,7 +353,7 @@ static int hisi_pcie_platform_init(struct pci_config_window *cfg) return 0; } -struct pci_ecam_ops hisi_pcie_platform_ops = { +static const struct pci_ecam_ops hisi_pcie_platform_ops = { .bus_shift = 20, .init = hisi_pcie_platform_init, .pci_ops = { @@ -375,17 +366,17 @@ struct pci_ecam_ops hisi_pcie_platform_ops = { static const struct of_device_id hisi_pcie_almost_ecam_of_match[] = { { .compatible = "hisilicon,hip06-pcie-ecam", - .data = (void *) &hisi_pcie_platform_ops, + .data = &hisi_pcie_platform_ops, }, { .compatible = "hisilicon,hip07-pcie-ecam", - .data = (void *) &hisi_pcie_platform_ops, + .data = &hisi_pcie_platform_ops, }, {}, }; static struct platform_driver hisi_pcie_almost_ecam_driver = { - .probe = hisi_pcie_almost_ecam_probe, + .probe = pci_host_common_probe, .driver = { .name = "hisi-pcie-almost-ecam", .of_match_table = hisi_pcie_almost_ecam_of_match, diff --git a/drivers/pci/controller/dwc/pcie-intel-gw.c b/drivers/pci/controller/dwc/pcie-intel-gw.c index fc2a12212dec..2d8dbb318087 100644 --- a/drivers/pci/controller/dwc/pcie-intel-gw.c +++ b/drivers/pci/controller/dwc/pcie-intel-gw.c @@ -453,7 +453,7 @@ static int intel_pcie_msi_init(struct pcie_port *pp) return 0; } -u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr) +static u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr) { return cpu_addr + BUS_IATU_OFFSET; } diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c index ae30a2fd3716..92b77f7d8354 100644 --- a/drivers/pci/controller/dwc/pcie-tegra194.c +++ b/drivers/pci/controller/dwc/pcie-tegra194.c @@ -1623,7 +1623,7 @@ static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie) ret = pinctrl_pm_select_default_state(dev); if (ret < 0) { dev_err(dev, "Failed to 
configure sideband pins: %d\n", ret); - goto fail_pinctrl; + goto fail_pm_get_sync; } tegra_pcie_init_controller(pcie); @@ -1650,9 +1650,8 @@ static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie) fail_host_init: tegra_pcie_deinit_controller(pcie); -fail_pinctrl: - pm_runtime_put_sync(dev); fail_pm_get_sync: + pm_runtime_put_sync(dev); pm_runtime_disable(dev); return ret; } @@ -2190,9 +2189,9 @@ static int tegra_pcie_dw_probe(struct platform_device *pdev) } pp->irq = platform_get_irq_byname(pdev, "intr"); - if (!pp->irq) { + if (pp->irq < 0) { dev_err(dev, "Failed to get \"intr\" interrupt\n"); - return -ENODEV; + return pp->irq; } pcie->bpmp = tegra_bpmp_get(dev); diff --git a/drivers/pci/controller/dwc/pcie-uniphier-ep.c b/drivers/pci/controller/dwc/pcie-uniphier-ep.c new file mode 100644 index 000000000000..148355960061 --- /dev/null +++ b/drivers/pci/controller/dwc/pcie-uniphier-ep.c @@ -0,0 +1,383 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCIe endpoint controller driver for UniPhier SoCs + * Copyright 2018 Socionext Inc. + * Author: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> + */ + +#include <linux/bitops.h> +#include <linux/bitfield.h> +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/init.h> +#include <linux/of_device.h> +#include <linux/pci.h> +#include <linux/phy/phy.h> +#include <linux/platform_device.h> +#include <linux/reset.h> + +#include "pcie-designware.h" + +/* Link Glue registers */ +#define PCL_RSTCTRL0 0x0010 +#define PCL_RSTCTRL_AXI_REG BIT(3) +#define PCL_RSTCTRL_AXI_SLAVE BIT(2) +#define PCL_RSTCTRL_AXI_MASTER BIT(1) +#define PCL_RSTCTRL_PIPE3 BIT(0) + +#define PCL_RSTCTRL1 0x0020 +#define PCL_RSTCTRL_PERST BIT(0) + +#define PCL_RSTCTRL2 0x0024 +#define PCL_RSTCTRL_PHY_RESET BIT(0) + +#define PCL_MODE 0x8000 +#define PCL_MODE_REGEN BIT(8) +#define PCL_MODE_REGVAL BIT(0) + +#define PCL_APP_CLK_CTRL 0x8004 +#define PCL_APP_CLK_REQ BIT(0) + +#define PCL_APP_READY_CTRL 0x8008 +#define PCL_APP_LTSSM_ENABLE BIT(0) + +#define PCL_APP_MSI0 0x8040 +#define PCL_APP_VEN_MSI_TC_MASK GENMASK(10, 8) +#define PCL_APP_VEN_MSI_VECTOR_MASK GENMASK(4, 0) + +#define PCL_APP_MSI1 0x8044 +#define PCL_APP_MSI_REQ BIT(0) + +#define PCL_APP_INTX 0x8074 +#define PCL_APP_INTX_SYS_INT BIT(0) + +/* assertion time of INTx in usec */ +#define PCL_INTX_WIDTH_USEC 30 + +struct uniphier_pcie_ep_priv { + void __iomem *base; + struct dw_pcie pci; + struct clk *clk, *clk_gio; + struct reset_control *rst, *rst_gio; + struct phy *phy; + const struct pci_epc_features *features; +}; + +#define to_uniphier_pcie(x) dev_get_drvdata((x)->dev) + +static void uniphier_pcie_ltssm_enable(struct uniphier_pcie_ep_priv *priv, + bool enable) +{ + u32 val; + + val = readl(priv->base + PCL_APP_READY_CTRL); + if (enable) + val |= PCL_APP_LTSSM_ENABLE; + else + val &= ~PCL_APP_LTSSM_ENABLE; + writel(val, priv->base + PCL_APP_READY_CTRL); +} + +static void uniphier_pcie_phy_reset(struct uniphier_pcie_ep_priv *priv, + bool assert) +{ + u32 val; + + val = readl(priv->base + PCL_RSTCTRL2); + if (assert) + val |= PCL_RSTCTRL_PHY_RESET; + else + val &= ~PCL_RSTCTRL_PHY_RESET; + writel(val, priv->base + PCL_RSTCTRL2); +} + +static void uniphier_pcie_init_ep(struct uniphier_pcie_ep_priv *priv) +{ + u32 val; + + /* set EP mode */ + val = readl(priv->base + PCL_MODE); + val |= PCL_MODE_REGEN | PCL_MODE_REGVAL; + writel(val, priv->base + PCL_MODE); + + /* clock request */ + val = readl(priv->base + PCL_APP_CLK_CTRL); + val &= ~PCL_APP_CLK_REQ; + writel(val, priv->base + PCL_APP_CLK_CTRL); + + /* 
deassert PIPE3 and AXI reset */ + val = readl(priv->base + PCL_RSTCTRL0); + val |= PCL_RSTCTRL_AXI_REG | PCL_RSTCTRL_AXI_SLAVE + | PCL_RSTCTRL_AXI_MASTER | PCL_RSTCTRL_PIPE3; + writel(val, priv->base + PCL_RSTCTRL0); + + uniphier_pcie_ltssm_enable(priv, false); + + msleep(100); +} + +static int uniphier_pcie_start_link(struct dw_pcie *pci) +{ + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); + + uniphier_pcie_ltssm_enable(priv, true); + + return 0; +} + +static void uniphier_pcie_stop_link(struct dw_pcie *pci) +{ + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); + + uniphier_pcie_ltssm_enable(priv, false); +} + +static void uniphier_pcie_ep_init(struct dw_pcie_ep *ep) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + enum pci_barno bar; + + for (bar = BAR_0; bar <= BAR_5; bar++) + dw_pcie_ep_reset_bar(pci, bar); +} + +static int uniphier_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); + u32 val; + + /* + * This makes pulse signal to send INTx to the RC, so this should + * be cleared as soon as possible. This sequence is covered with + * mutex in pci_epc_raise_irq(). + */ + /* assert INTx */ + val = readl(priv->base + PCL_APP_INTX); + val |= PCL_APP_INTX_SYS_INT; + writel(val, priv->base + PCL_APP_INTX); + + udelay(PCL_INTX_WIDTH_USEC); + + /* deassert INTx */ + val &= ~PCL_APP_INTX_SYS_INT; + writel(val, priv->base + PCL_APP_INTX); + + return 0; +} + +static int uniphier_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, + u8 func_no, u16 interrupt_num) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); + u32 val; + + val = FIELD_PREP(PCL_APP_VEN_MSI_TC_MASK, func_no) + | FIELD_PREP(PCL_APP_VEN_MSI_VECTOR_MASK, interrupt_num - 1); + writel(val, priv->base + PCL_APP_MSI0); + + val = readl(priv->base + PCL_APP_MSI1); + val |= PCL_APP_MSI_REQ; + writel(val, priv->base + PCL_APP_MSI1); + + return 0; +} + +static int uniphier_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, + enum pci_epc_irq_type type, + u16 interrupt_num) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + + switch (type) { + case PCI_EPC_IRQ_LEGACY: + return uniphier_pcie_ep_raise_legacy_irq(ep); + case PCI_EPC_IRQ_MSI: + return uniphier_pcie_ep_raise_msi_irq(ep, func_no, + interrupt_num); + default: + dev_err(pci->dev, "UNKNOWN IRQ type (%d)\n", type); + } + + return 0; +} + +static const struct pci_epc_features* +uniphier_pcie_get_features(struct dw_pcie_ep *ep) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); + + return priv->features; +} + +static const struct dw_pcie_ep_ops uniphier_pcie_ep_ops = { + .ep_init = uniphier_pcie_ep_init, + .raise_irq = uniphier_pcie_ep_raise_irq, + .get_features = uniphier_pcie_get_features, +}; + +static int uniphier_add_pcie_ep(struct uniphier_pcie_ep_priv *priv, + struct platform_device *pdev) +{ + struct dw_pcie *pci = &priv->pci; + struct dw_pcie_ep *ep = &pci->ep; + struct device *dev = &pdev->dev; + struct resource *res; + int ret; + + ep->ops = &uniphier_pcie_ep_ops; + + pci->dbi_base2 = devm_platform_ioremap_resource_byname(pdev, "dbi2"); + if (IS_ERR(pci->dbi_base2)) + return PTR_ERR(pci->dbi_base2); + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); + if (!res) + return -EINVAL; + + ep->phys_base = res->start; + ep->addr_size = resource_size(res); + + ret = dw_pcie_ep_init(ep); + if (ret) + 
dev_err(dev, "Failed to initialize endpoint (%d)\n", ret); + + return ret; +} + +static int uniphier_pcie_ep_enable(struct uniphier_pcie_ep_priv *priv) +{ + int ret; + + ret = clk_prepare_enable(priv->clk); + if (ret) + return ret; + + ret = clk_prepare_enable(priv->clk_gio); + if (ret) + goto out_clk_disable; + + ret = reset_control_deassert(priv->rst); + if (ret) + goto out_clk_gio_disable; + + ret = reset_control_deassert(priv->rst_gio); + if (ret) + goto out_rst_assert; + + uniphier_pcie_init_ep(priv); + + uniphier_pcie_phy_reset(priv, true); + + ret = phy_init(priv->phy); + if (ret) + goto out_rst_gio_assert; + + uniphier_pcie_phy_reset(priv, false); + + return 0; + +out_rst_gio_assert: + reset_control_assert(priv->rst_gio); +out_rst_assert: + reset_control_assert(priv->rst); +out_clk_gio_disable: + clk_disable_unprepare(priv->clk_gio); +out_clk_disable: + clk_disable_unprepare(priv->clk); + + return ret; +} + +static const struct dw_pcie_ops dw_pcie_ops = { + .start_link = uniphier_pcie_start_link, + .stop_link = uniphier_pcie_stop_link, +}; + +static int uniphier_pcie_ep_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct uniphier_pcie_ep_priv *priv; + struct resource *res; + int ret; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->features = of_device_get_match_data(dev); + if (WARN_ON(!priv->features)) + return -EINVAL; + + priv->pci.dev = dev; + priv->pci.ops = &dw_pcie_ops; + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); + priv->pci.dbi_base = devm_pci_remap_cfg_resource(dev, res); + if (IS_ERR(priv->pci.dbi_base)) + return PTR_ERR(priv->pci.dbi_base); + + priv->base = devm_platform_ioremap_resource_byname(pdev, "link"); + if (IS_ERR(priv->base)) + return PTR_ERR(priv->base); + + priv->clk_gio = devm_clk_get(dev, "gio"); + if (IS_ERR(priv->clk_gio)) + return PTR_ERR(priv->clk_gio); + + priv->rst_gio = devm_reset_control_get_shared(dev, "gio"); + if (IS_ERR(priv->rst_gio)) + return PTR_ERR(priv->rst_gio); + + priv->clk = devm_clk_get(dev, "link"); + if (IS_ERR(priv->clk)) + return PTR_ERR(priv->clk); + + priv->rst = devm_reset_control_get_shared(dev, "link"); + if (IS_ERR(priv->rst)) + return PTR_ERR(priv->rst); + + priv->phy = devm_phy_optional_get(dev, "pcie-phy"); + if (IS_ERR(priv->phy)) { + ret = PTR_ERR(priv->phy); + dev_err(dev, "Failed to get phy (%d)\n", ret); + return ret; + } + + platform_set_drvdata(pdev, priv); + + ret = uniphier_pcie_ep_enable(priv); + if (ret) + return ret; + + return uniphier_add_pcie_ep(priv, pdev); +} + +static const struct pci_epc_features uniphier_pro5_data = { + .linkup_notifier = false, + .msi_capable = true, + .msix_capable = false, + .align = 1 << 16, + .bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4), + .reserved_bar = BIT(BAR_4), +}; + +static const struct of_device_id uniphier_pcie_ep_match[] = { + { + .compatible = "socionext,uniphier-pro5-pcie-ep", + .data = &uniphier_pro5_data, + }, + { /* sentinel */ }, +}; + +static struct platform_driver uniphier_pcie_ep_driver = { + .probe = uniphier_pcie_ep_probe, + .driver = { + .name = "uniphier-pcie-ep", + .of_match_table = uniphier_pcie_ep_match, + .suppress_bind_attrs = true, + }, +}; +builtin_platform_driver(uniphier_pcie_ep_driver); diff --git a/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c b/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c index a94be264240f..5907baa9b1f2 100644 --- a/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c +++ 
b/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c @@ -522,9 +522,9 @@ static int mobiveil_pcie_integrated_interrupt_init(struct mobiveil_pcie *pcie) mobiveil_pcie_enable_msi(pcie); rp->irq = platform_get_irq(pdev, 0); - if (rp->irq <= 0) { + if (rp->irq < 0) { dev_err(dev, "failed to map IRQ: %d\n", rp->irq); - return -ENODEV; + return rp->irq; } /* initialize the IRQ domains */ diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c index 2a20b649f40c..90ff291c24f0 100644 --- a/drivers/pci/controller/pci-aardvark.c +++ b/drivers/pci/controller/pci-aardvark.c @@ -9,15 +9,18 @@ */ #include <linux/delay.h> +#include <linux/gpio.h> #include <linux/interrupt.h> #include <linux/irq.h> #include <linux/irqdomain.h> #include <linux/kernel.h> #include <linux/pci.h> #include <linux/init.h> +#include <linux/phy/phy.h> #include <linux/platform_device.h> #include <linux/msi.h> #include <linux/of_address.h> +#include <linux/of_gpio.h> #include <linux/of_pci.h> #include "../pci.h" @@ -31,16 +34,6 @@ #define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2) #define PCIE_CORE_DEV_REV_REG 0x8 #define PCIE_CORE_PCIEXP_CAP 0xc0 -#define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8 -#define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4) -#define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 -#define PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE (0 << 11) -#define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT 12 -#define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ 0x2 -#define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0 -#define PCIE_CORE_LINK_L0S_ENTRY BIT(0) -#define PCIE_CORE_LINK_TRAINING BIT(5) -#define PCIE_CORE_LINK_WIDTH_SHIFT 20 #define PCIE_CORE_ERR_CAPCTL_REG 0x118 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5) #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6) @@ -101,6 +94,8 @@ #define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5) #define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6) #define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10) +#define PCIE_CORE_REF_CLK_REG (CONTROL_BASE_ADDR + 0x14) +#define PCIE_CORE_REF_CLK_TX_ENABLE BIT(1) #define PCIE_MSG_LOG_REG (CONTROL_BASE_ADDR + 0x30) #define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40) #define PCIE_MSG_PM_PME_MASK BIT(7) @@ -201,7 +196,10 @@ struct advk_pcie { struct mutex msi_used_lock; u16 msi_msg; int root_bus_nr; + int link_gen; struct pci_bridge_emul bridge; + struct gpio_desc *reset_gpio; + struct phy *phy; }; static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg) @@ -214,6 +212,11 @@ static inline u32 advk_readl(struct advk_pcie *pcie, u64 reg) return readl(pcie->base + reg); } +static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg) +{ + return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8); +} + static int advk_pcie_link_up(struct advk_pcie *pcie) { u32 val, ltssm_state; @@ -225,20 +228,16 @@ static int advk_pcie_link_up(struct advk_pcie *pcie) static int advk_pcie_wait_for_link(struct advk_pcie *pcie) { - struct device *dev = &pcie->pdev->dev; int retries; /* check if the link is up or not */ for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { - if (advk_pcie_link_up(pcie)) { - dev_info(dev, "link up\n"); + if (advk_pcie_link_up(pcie)) return 0; - } usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); } - dev_err(dev, "link never came up\n"); return -ETIMEDOUT; } @@ -253,10 +252,115 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie) } } +static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen) +{ + int ret, neg_gen; + u32 reg; + + /* Setup link speed */ + reg = advk_readl(pcie, 
PCIE_CORE_CTRL0_REG); + reg &= ~PCIE_GEN_SEL_MSK; + if (gen == 3) + reg |= SPEED_GEN_3; + else if (gen == 2) + reg |= SPEED_GEN_2; + else + reg |= SPEED_GEN_1; + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); + + /* + * Enable link training. This is not needed in every call to this + * function, just once suffices, but it does not break anything either. + */ + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); + reg |= LINK_TRAINING_EN; + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); + + /* + * Start link training immediately after enabling it. + * This solves problems for some buggy cards. + */ + reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); + reg |= PCI_EXP_LNKCTL_RL; + advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); + + ret = advk_pcie_wait_for_link(pcie); + if (ret) + return ret; + + reg = advk_read16(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKSTA); + neg_gen = reg & PCI_EXP_LNKSTA_CLS; + + return neg_gen; +} + +static void advk_pcie_train_link(struct advk_pcie *pcie) +{ + struct device *dev = &pcie->pdev->dev; + int neg_gen = -1, gen; + + /* + * Try link training at link gen specified by device tree property + * 'max-link-speed'. If this fails, iteratively train at lower gen. + */ + for (gen = pcie->link_gen; gen > 0; --gen) { + neg_gen = advk_pcie_train_at_gen(pcie, gen); + if (neg_gen > 0) + break; + } + + if (neg_gen < 0) + goto err; + + /* + * After successful training if negotiated gen is lower than requested, + * train again on negotiated gen. This solves some stability issues for + * some buggy gen1 cards. + */ + if (neg_gen < gen) { + gen = neg_gen; + neg_gen = advk_pcie_train_at_gen(pcie, gen); + } + + if (neg_gen == gen) { + dev_info(dev, "link up at gen %i\n", gen); + return; + } + +err: + dev_err(dev, "link never came up\n"); +} + +static void advk_pcie_issue_perst(struct advk_pcie *pcie) +{ + u32 reg; + + if (!pcie->reset_gpio) + return; + + /* PERST does not work for some cards when link training is enabled */ + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); + reg &= ~LINK_TRAINING_EN; + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); + + /* 10ms delay is needed for some cards */ + dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n"); + gpiod_set_value_cansleep(pcie->reset_gpio, 1); + usleep_range(10000, 11000); + gpiod_set_value_cansleep(pcie->reset_gpio, 0); +} + static void advk_pcie_setup_hw(struct advk_pcie *pcie) { u32 reg; + advk_pcie_issue_perst(pcie); + + /* Enable TX */ + reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG); + reg |= PCIE_CORE_REF_CLK_TX_ENABLE; + advk_writel(pcie, reg, PCIE_CORE_REF_CLK_REG); + /* Set to Direct mode */ reg = advk_readl(pcie, CTRL_CONFIG_REG); reg &= ~(CTRL_MODE_MASK << CTRL_MODE_SHIFT); @@ -275,36 +379,26 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV; advk_writel(pcie, reg, PCIE_CORE_ERR_CAPCTL_REG); - /* Set PCIe Device Control and Status 1 PF0 register */ - reg = PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE | - (7 << PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT) | - PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE | - (PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ << - PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT); - advk_writel(pcie, reg, PCIE_CORE_DEV_CTRL_STATS_REG); + /* Set PCIe Device Control register */ + reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); + reg &= ~PCI_EXP_DEVCTL_RELAX_EN; + reg &= ~PCI_EXP_DEVCTL_NOSNOOP_EN; + reg &= ~PCI_EXP_DEVCTL_READRQ; + reg |= PCI_EXP_DEVCTL_PAYLOAD; /* Set max payload size */ + reg |= 
PCI_EXP_DEVCTL_READRQ_512B; + advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); /* Program PCIe Control 2 to disable strict ordering */ reg = PCIE_CORE_CTRL2_RESERVED | PCIE_CORE_CTRL2_TD_ENABLE; advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG); - /* Set GEN2 */ - reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); - reg &= ~PCIE_GEN_SEL_MSK; - reg |= SPEED_GEN_2; - advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); - /* Set lane X1 */ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); reg &= ~LANE_CNT_MSK; reg |= LANE_COUNT_1; advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); - /* Enable link training */ - reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); - reg |= LINK_TRAINING_EN; - advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); - /* Enable MSI */ reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG); reg |= PCIE_CORE_CTRL2_MSI_ENABLE; @@ -340,23 +434,22 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) /* * PERST# signal could have been asserted by pinctrl subsystem before - * probe() callback has been called, making the endpoint going into + * probe() callback has been called or issued explicitly by reset gpio + * function advk_pcie_issue_perst(), making the endpoint going into * fundamental reset. As required by PCI Express spec a delay for at * least 100ms after such a reset before link training is needed. */ msleep(PCI_PM_D3COLD_WAIT); - /* Start link training */ - reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG); - reg |= PCIE_CORE_LINK_TRAINING; - advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG); - - advk_pcie_wait_for_link(pcie); - - reg = PCIE_CORE_LINK_L0S_ENTRY | - (1 << PCIE_CORE_LINK_WIDTH_SHIFT); - advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG); + advk_pcie_train_link(pcie); + /* + * FIXME: The following register update is suspicious. This register is + * applicable only when the PCI controller is configured for Endpoint + * mode, not as a Root Complex. But apparently when this code is + * removed, some cards stop working. This should be investigated and + * a comment explaining this should be put here. 
+ */ reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); reg |= PCIE_CORE_CMD_MEM_ACCESS_EN | PCIE_CORE_CMD_IO_ACCESS_EN | @@ -952,6 +1045,62 @@ static irqreturn_t advk_pcie_irq_handler(int irq, void *arg) return IRQ_HANDLED; } +static void __maybe_unused advk_pcie_disable_phy(struct advk_pcie *pcie) +{ + phy_power_off(pcie->phy); + phy_exit(pcie->phy); +} + +static int advk_pcie_enable_phy(struct advk_pcie *pcie) +{ + int ret; + + if (!pcie->phy) + return 0; + + ret = phy_init(pcie->phy); + if (ret) + return ret; + + ret = phy_set_mode(pcie->phy, PHY_MODE_PCIE); + if (ret) { + phy_exit(pcie->phy); + return ret; + } + + ret = phy_power_on(pcie->phy); + if (ret) { + phy_exit(pcie->phy); + return ret; + } + + return 0; +} + +static int advk_pcie_setup_phy(struct advk_pcie *pcie) +{ + struct device *dev = &pcie->pdev->dev; + struct device_node *node = dev->of_node; + int ret = 0; + + pcie->phy = devm_of_phy_get(dev, node, NULL); + if (IS_ERR(pcie->phy) && (PTR_ERR(pcie->phy) == -EPROBE_DEFER)) + return PTR_ERR(pcie->phy); + + /* Old bindings miss the PHY handle */ + if (IS_ERR(pcie->phy)) { + dev_warn(dev, "PHY unavailable (%ld)\n", PTR_ERR(pcie->phy)); + pcie->phy = NULL; + return 0; + } + + ret = advk_pcie_enable_phy(pcie); + if (ret) + dev_err(dev, "Failed to initialize PHY (%d)\n", ret); + + return ret; +} + static int advk_pcie_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; @@ -973,6 +1122,9 @@ static int advk_pcie_probe(struct platform_device *pdev) return PTR_ERR(pcie->base); irq = platform_get_irq(pdev, 0); + if (irq < 0) + return irq; + ret = devm_request_irq(dev, irq, advk_pcie_irq_handler, IRQF_SHARED | IRQF_NO_THREAD, "advk-pcie", pcie); @@ -989,6 +1141,32 @@ static int advk_pcie_probe(struct platform_device *pdev) } pcie->root_bus_nr = bus->start; + pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node, + "reset-gpios", 0, + GPIOD_OUT_LOW, + "pcie1-reset"); + ret = PTR_ERR_OR_ZERO(pcie->reset_gpio); + if (ret) { + if (ret == -ENOENT) { + pcie->reset_gpio = NULL; + } else { + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to get reset-gpio: %i\n", + ret); + return ret; + } + } + + ret = of_pci_get_max_link_speed(dev->of_node); + if (ret <= 0 || ret > 3) + pcie->link_gen = 3; + else + pcie->link_gen = ret; + + ret = advk_pcie_setup_phy(pcie); + if (ret) + return ret; + advk_pcie_setup_hw(pcie); advk_sw_pci_bridge_init(pcie); diff --git a/drivers/pci/controller/pci-host-common.c b/drivers/pci/controller/pci-host-common.c index 250a3fc80ec6..953de57f6c57 100644 --- a/drivers/pci/controller/pci-host-common.c +++ b/drivers/pci/controller/pci-host-common.c @@ -8,7 +8,9 @@ */ #include <linux/kernel.h> +#include <linux/module.h> #include <linux/of_address.h> +#include <linux/of_device.h> #include <linux/of_pci.h> #include <linux/pci-ecam.h> #include <linux/platform_device.h> @@ -19,7 +21,7 @@ static void gen_pci_unmap_cfg(void *ptr) } static struct pci_config_window *gen_pci_init(struct device *dev, - struct list_head *resources, struct pci_ecam_ops *ops) + struct list_head *resources, const struct pci_ecam_ops *ops) { int err; struct resource cfgres; @@ -54,15 +56,19 @@ err_out: return ERR_PTR(err); } -int pci_host_common_probe(struct platform_device *pdev, - struct pci_ecam_ops *ops) +int pci_host_common_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct pci_host_bridge *bridge; struct pci_config_window *cfg; struct list_head resources; + const struct pci_ecam_ops *ops; int ret; + ops = of_device_get_match_data(&pdev->dev); + 
if (!ops) + return -ENODEV; + bridge = devm_pci_alloc_host_bridge(dev, 0); if (!bridge) return -ENOMEM; @@ -82,7 +88,7 @@ int pci_host_common_probe(struct platform_device *pdev, bridge->dev.parent = dev; bridge->sysdata = cfg; bridge->busnr = cfg->busr.start; - bridge->ops = &ops->pci_ops; + bridge->ops = (struct pci_ops *)&ops->pci_ops; bridge->map_irq = of_irq_parse_and_map_pci; bridge->swizzle_irq = pci_common_swizzle; @@ -95,6 +101,7 @@ int pci_host_common_probe(struct platform_device *pdev, platform_set_drvdata(pdev, bridge->bus); return 0; } +EXPORT_SYMBOL_GPL(pci_host_common_probe); int pci_host_common_remove(struct platform_device *pdev) { @@ -107,3 +114,6 @@ int pci_host_common_remove(struct platform_device *pdev) return 0; } +EXPORT_SYMBOL_GPL(pci_host_common_remove); + +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pci/controller/pci-host-generic.c b/drivers/pci/controller/pci-host-generic.c index 75a2fb930d4b..b51977abfdf1 100644 --- a/drivers/pci/controller/pci-host-generic.c +++ b/drivers/pci/controller/pci-host-generic.c @@ -10,12 +10,11 @@ #include <linux/kernel.h> #include <linux/init.h> -#include <linux/of_address.h> -#include <linux/of_pci.h> +#include <linux/module.h> #include <linux/pci-ecam.h> #include <linux/platform_device.h> -static struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = { +static const struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = { .bus_shift = 16, .pci_ops = { .map_bus = pci_ecam_map_bus, @@ -49,7 +48,7 @@ static void __iomem *pci_dw_ecam_map_bus(struct pci_bus *bus, return pci_ecam_map_bus(bus, devfn, where); } -static struct pci_ecam_ops pci_dw_ecam_bus_ops = { +static const struct pci_ecam_ops pci_dw_ecam_bus_ops = { .bus_shift = 20, .pci_ops = { .map_bus = pci_dw_ecam_map_bus, @@ -76,25 +75,16 @@ static const struct of_device_id gen_pci_of_match[] = { { }, }; - -static int gen_pci_probe(struct platform_device *pdev) -{ - const struct of_device_id *of_id; - struct pci_ecam_ops *ops; - - of_id = of_match_node(gen_pci_of_match, pdev->dev.of_node); - ops = (struct pci_ecam_ops *)of_id->data; - - return pci_host_common_probe(pdev, ops); -} +MODULE_DEVICE_TABLE(of, gen_pci_of_match); static struct platform_driver gen_pci_driver = { .driver = { .name = "pci-host-generic", .of_match_table = gen_pci_of_match, - .suppress_bind_attrs = true, }, - .probe = gen_pci_probe, + .probe = pci_host_common_probe, .remove = pci_host_common_remove, }; -builtin_platform_driver(gen_pci_driver); +module_platform_driver(gen_pci_driver); + +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 222ff5639ebe..bf40ff09c99d 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -480,6 +480,9 @@ struct hv_pcibus_device { struct workqueue_struct *wq; + /* Highest slot of child device with resources allocated */ + int wslot_res_allocated; + /* hypercall arg, must not cross page boundary */ struct hv_retarget_device_interrupt retarget_msi_interrupt_params; @@ -2210,10 +2213,8 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus, struct hv_dr_state *dr; int i; - dr = kzalloc(offsetof(struct hv_dr_state, func) + - (sizeof(struct hv_pcidev_description) * - (relations->device_count)), GFP_NOWAIT); - + dr = kzalloc(struct_size(dr, func, relations->device_count), + GFP_NOWAIT); if (!dr) return; @@ -2247,10 +2248,8 @@ static void hv_pci_devices_present2(struct hv_pcibus_device *hbus, struct hv_dr_state *dr; int i; - dr = kzalloc(offsetof(struct hv_dr_state, func) + - 
(sizeof(struct hv_pcidev_description) * - (relations->device_count)), GFP_NOWAIT); - + dr = kzalloc(struct_size(dr, func, relations->device_count), + GFP_NOWAIT); if (!dr) return; @@ -2444,9 +2443,8 @@ static void hv_pci_onchannelcallback(void *context) bus_rel = (struct pci_bus_relations *)buffer; if (bytes_recvd < - offsetof(struct pci_bus_relations, func) + - (sizeof(struct pci_function_description) * - (bus_rel->device_count))) { + struct_size(bus_rel, func, + bus_rel->device_count)) { dev_err(&hbus->hdev->device, "bus relations too small\n"); break; @@ -2459,9 +2457,8 @@ static void hv_pci_onchannelcallback(void *context) bus_rel2 = (struct pci_bus_relations2 *)buffer; if (bytes_recvd < - offsetof(struct pci_bus_relations2, func) + - (sizeof(struct pci_function_description2) * - (bus_rel2->device_count))) { + struct_size(bus_rel2, func, + bus_rel2->device_count)) { dev_err(&hbus->hdev->device, "bus relations v2 too small\n"); break; @@ -2748,6 +2745,8 @@ static void hv_free_config_window(struct hv_pcibus_device *hbus) vmbus_free_mmio(hbus->mem_config->start, PCI_CONFIG_MMIO_LENGTH); } +static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs); + /** * hv_pci_enter_d0() - Bring the "bus" into the D0 power state * @hdev: VMBus's tracking struct for this root PCI bus @@ -2760,8 +2759,10 @@ static int hv_pci_enter_d0(struct hv_device *hdev) struct pci_bus_d0_entry *d0_entry; struct hv_pci_compl comp_pkt; struct pci_packet *pkt; + bool retry = true; int ret; +enter_d0_retry: /* * Tell the host that the bus is ready to use, and moved into the * powered-on state. This includes telling the host which region @@ -2788,6 +2789,38 @@ static int hv_pci_enter_d0(struct hv_device *hdev) if (ret) goto exit; + /* + * In certain case (Kdump) the pci device of interest was + * not cleanly shut down and resource is still held on host + * side, the host could return invalid device status. + * We need to explicitly request host to release the resource + * and try to enter D0 again. + */ + if (comp_pkt.completion_status < 0 && retry) { + retry = false; + + dev_err(&hdev->device, "Retrying D0 Entry\n"); + + /* + * Hv_pci_bus_exit() calls hv_send_resource_released() + * to free up resources of its child devices. + * In the kdump kernel we need to set the + * wslot_res_allocated to 255 so it scans all child + * devices to release resources allocated in the + * normal kernel before panic happened. 
+ */ + hbus->wslot_res_allocated = 255; + + ret = hv_pci_bus_exit(hdev, true); + + if (ret == 0) { + kfree(pkt); + goto enter_d0_retry; + } + dev_err(&hdev->device, + "Retrying D0 failed with ret %d\n", ret); + } + if (comp_pkt.completion_status < 0) { dev_err(&hdev->device, "PCI Pass-through VSP failed D0 Entry with status %x\n", @@ -2859,7 +2892,7 @@ static int hv_send_resources_allocated(struct hv_device *hdev) struct hv_pci_dev *hpdev; struct pci_packet *pkt; size_t size_res; - u32 wslot; + int wslot; int ret; size_res = (hbus->protocol_version < PCI_PROTOCOL_VERSION_1_2) @@ -2912,6 +2945,8 @@ static int hv_send_resources_allocated(struct hv_device *hdev) comp_pkt.completion_status); break; } + + hbus->wslot_res_allocated = wslot; } kfree(pkt); @@ -2930,10 +2965,10 @@ static int hv_send_resources_released(struct hv_device *hdev) struct hv_pcibus_device *hbus = hv_get_drvdata(hdev); struct pci_child_message pkt; struct hv_pci_dev *hpdev; - u32 wslot; + int wslot; int ret; - for (wslot = 0; wslot < 256; wslot++) { + for (wslot = hbus->wslot_res_allocated; wslot >= 0; wslot--) { hpdev = get_pcichild_wslot(hbus, wslot); if (!hpdev) continue; @@ -2948,8 +2983,12 @@ static int hv_send_resources_released(struct hv_device *hdev) VM_PKT_DATA_INBAND, 0); if (ret) return ret; + + hbus->wslot_res_allocated = wslot - 1; } + hbus->wslot_res_allocated = -1; + return 0; } @@ -3049,6 +3088,7 @@ static int hv_pci_probe(struct hv_device *hdev, if (!hbus) return -ENOMEM; hbus->state = hv_pcibus_init; + hbus->wslot_res_allocated = -1; /* * The PCI bus "domain" is what is called "segment" in ACPI and other @@ -3148,7 +3188,7 @@ static int hv_pci_probe(struct hv_device *hdev, ret = hv_pci_allocate_bridge_windows(hbus); if (ret) - goto free_irq_domain; + goto exit_d0; ret = hv_send_resources_allocated(hdev); if (ret) @@ -3166,6 +3206,8 @@ static int hv_pci_probe(struct hv_device *hdev, free_windows: hv_pci_free_bridge_windows(hbus); +exit_d0: + (void) hv_pci_bus_exit(hdev, true); free_irq_domain: irq_domain_remove(hbus->irq_domain); free_fwnode: @@ -3185,7 +3227,7 @@ free_bus: return ret; } -static int hv_pci_bus_exit(struct hv_device *hdev, bool hibernating) +static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs) { struct hv_pcibus_device *hbus = hv_get_drvdata(hdev); struct { @@ -3203,7 +3245,7 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool hibernating) if (hdev->channel->rescind) return 0; - if (!hibernating) { + if (!keep_devs) { /* Delete any children which might still exist. 
*/ dr = kzalloc(sizeof(*dr), GFP_KERNEL); if (dr && hv_pci_start_relations_work(hbus, dr)) diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c index 3e64ba6a36a8..235b456698fc 100644 --- a/drivers/pci/controller/pci-tegra.c +++ b/drivers/pci/controller/pci-tegra.c @@ -2219,8 +2219,8 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) if (PTR_ERR(rp->reset_gpio) == -ENOENT) { rp->reset_gpio = NULL; } else { - dev_err(dev, "failed to get reset GPIO: %d\n", - err); + dev_err(dev, "failed to get reset GPIO: %ld\n", + PTR_ERR(rp->reset_gpio)); return PTR_ERR(rp->reset_gpio); } } @@ -2712,7 +2712,7 @@ static int tegra_pcie_probe(struct platform_device *pdev) err = pm_runtime_get_sync(pcie->dev); if (err < 0) { dev_err(dev, "fail to enable pcie controller: %d\n", err); - goto teardown_msi; + goto pm_runtime_put; } host->busnr = bus->start; @@ -2746,7 +2746,6 @@ static int tegra_pcie_probe(struct platform_device *pdev) pm_runtime_put: pm_runtime_put_sync(pcie->dev); pm_runtime_disable(pcie->dev); -teardown_msi: tegra_pcie_msi_teardown(pcie); put_resources: tegra_pcie_put_resources(pcie); diff --git a/drivers/pci/controller/pci-thunder-ecam.c b/drivers/pci/controller/pci-thunder-ecam.c index 32d1d7b81ef4..7e8835fee5f7 100644 --- a/drivers/pci/controller/pci-thunder-ecam.c +++ b/drivers/pci/controller/pci-thunder-ecam.c @@ -345,7 +345,7 @@ static int thunder_ecam_config_write(struct pci_bus *bus, unsigned int devfn, return pci_generic_config_write(bus, devfn, where, size, val); } -struct pci_ecam_ops pci_thunder_ecam_ops = { +const struct pci_ecam_ops pci_thunder_ecam_ops = { .bus_shift = 20, .pci_ops = { .map_bus = pci_ecam_map_bus, @@ -357,22 +357,20 @@ struct pci_ecam_ops pci_thunder_ecam_ops = { #ifdef CONFIG_PCI_HOST_THUNDER_ECAM static const struct of_device_id thunder_ecam_of_match[] = { - { .compatible = "cavium,pci-host-thunder-ecam" }, + { + .compatible = "cavium,pci-host-thunder-ecam", + .data = &pci_thunder_ecam_ops, + }, { }, }; -static int thunder_ecam_probe(struct platform_device *pdev) -{ - return pci_host_common_probe(pdev, &pci_thunder_ecam_ops); -} - static struct platform_driver thunder_ecam_driver = { .driver = { .name = KBUILD_MODNAME, .of_match_table = thunder_ecam_of_match, .suppress_bind_attrs = true, }, - .probe = thunder_ecam_probe, + .probe = pci_host_common_probe, }; builtin_platform_driver(thunder_ecam_driver); diff --git a/drivers/pci/controller/pci-thunder-pem.c b/drivers/pci/controller/pci-thunder-pem.c index 9491e266b1ea..3f847969143e 100644 --- a/drivers/pci/controller/pci-thunder-pem.c +++ b/drivers/pci/controller/pci-thunder-pem.c @@ -403,7 +403,7 @@ static int thunder_pem_acpi_init(struct pci_config_window *cfg) return thunder_pem_init(dev, cfg, res_pem); } -struct pci_ecam_ops thunder_pem_ecam_ops = { +const struct pci_ecam_ops thunder_pem_ecam_ops = { .bus_shift = 24, .init = thunder_pem_acpi_init, .pci_ops = { @@ -440,7 +440,7 @@ static int thunder_pem_platform_init(struct pci_config_window *cfg) return thunder_pem_init(dev, cfg, res_pem); } -static struct pci_ecam_ops pci_thunder_pem_ops = { +static const struct pci_ecam_ops pci_thunder_pem_ops = { .bus_shift = 24, .init = thunder_pem_platform_init, .pci_ops = { @@ -451,22 +451,20 @@ static struct pci_ecam_ops pci_thunder_pem_ops = { }; static const struct of_device_id thunder_pem_of_match[] = { - { .compatible = "cavium,pci-host-thunder-pem" }, + { + .compatible = "cavium,pci-host-thunder-pem", + .data = &pci_thunder_pem_ops, + }, { }, }; -static int 
thunder_pem_probe(struct platform_device *pdev) -{ - return pci_host_common_probe(pdev, &pci_thunder_pem_ops); -} - static struct platform_driver thunder_pem_driver = { .driver = { .name = KBUILD_MODNAME, .of_match_table = thunder_pem_of_match, .suppress_bind_attrs = true, }, - .probe = thunder_pem_probe, + .probe = pci_host_common_probe, }; builtin_platform_driver(thunder_pem_driver); diff --git a/drivers/pci/controller/pci-v3-semi.c b/drivers/pci/controller/pci-v3-semi.c index bd05221f5a22..3681e5af3878 100644 --- a/drivers/pci/controller/pci-v3-semi.c +++ b/drivers/pci/controller/pci-v3-semi.c @@ -720,7 +720,7 @@ static int v3_pci_probe(struct platform_device *pdev) int irq; int ret; - host = pci_alloc_host_bridge(sizeof(*v3)); + host = devm_pci_alloc_host_bridge(dev, sizeof(*v3)); if (!host) return -ENOMEM; @@ -777,9 +777,9 @@ static int v3_pci_probe(struct platform_device *pdev) /* Get and request error IRQ resource */ irq = platform_get_irq(pdev, 0); - if (irq <= 0) { + if (irq < 0) { dev_err(dev, "unable to obtain PCIv3 error IRQ\n"); - return -ENODEV; + return irq; } ret = devm_request_irq(dev, irq, v3_irq, 0, "PCIv3 error", v3); diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c index de195fd430dc..d1efa8ffbae1 100644 --- a/drivers/pci/controller/pci-xgene.c +++ b/drivers/pci/controller/pci-xgene.c @@ -256,7 +256,7 @@ static int xgene_v1_pcie_ecam_init(struct pci_config_window *cfg) return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_1); } -struct pci_ecam_ops xgene_v1_pcie_ecam_ops = { +const struct pci_ecam_ops xgene_v1_pcie_ecam_ops = { .bus_shift = 16, .init = xgene_v1_pcie_ecam_init, .pci_ops = { @@ -271,7 +271,7 @@ static int xgene_v2_pcie_ecam_init(struct pci_config_window *cfg) return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_2); } -struct pci_ecam_ops xgene_v2_pcie_ecam_ops = { +const struct pci_ecam_ops xgene_v2_pcie_ecam_ops = { .bus_shift = 16, .init = xgene_v2_pcie_ecam_init, .pci_ops = { diff --git a/drivers/pci/controller/pcie-altera.c b/drivers/pci/controller/pcie-altera.c index b447c3e4abad..24cb1c331058 100644 --- a/drivers/pci/controller/pcie-altera.c +++ b/drivers/pci/controller/pcie-altera.c @@ -193,7 +193,7 @@ static bool altera_pcie_valid_device(struct altera_pcie *pcie, if (bus->number == pcie->root_bus_nr && dev > 0) return false; - return true; + return true; } static int tlp_read_packet(struct altera_pcie *pcie, u32 *value) diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c index 6d79d14527a6..7730ea845ff2 100644 --- a/drivers/pci/controller/pcie-brcmstb.c +++ b/drivers/pci/controller/pcie-brcmstb.c @@ -28,6 +28,8 @@ #include <linux/string.h> #include <linux/types.h> +#include <soc/bcm2835/raspberrypi-firmware.h> + #include "../pci.h" /* BRCM_PCIE_CAP_REGS - Offset for the mandatory capability config regs */ @@ -41,6 +43,9 @@ #define PCIE_RC_CFG_PRIV1_ID_VAL3 0x043c #define PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK 0xffffff +#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY 0x04dc +#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK 0xc00 + #define PCIE_RC_DL_MDIO_ADDR 0x1100 #define PCIE_RC_DL_MDIO_WR_DATA 0x1104 #define PCIE_RC_DL_MDIO_RD_DATA 0x1108 @@ -54,11 +59,11 @@ #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO 0x400c #define PCIE_MEM_WIN0_LO(win) \ - PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 4) + PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 8) #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI 0x4010 #define PCIE_MEM_WIN0_HI(win) \ - PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 4) + 
PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 8) #define PCIE_MISC_RC_BAR1_CONFIG_LO 0x402c #define PCIE_MISC_RC_BAR1_CONFIG_LO_SIZE_MASK 0x1f @@ -693,10 +698,11 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie) int num_out_wins = 0; u16 nlw, cls, lnksta; int i, ret; - u32 tmp; + u32 tmp, aspm_support; /* Reset the bridge */ brcm_pcie_bridge_sw_init_set(pcie, 1); + brcm_pcie_perst_set(pcie, 1); usleep_range(100, 200); @@ -803,6 +809,15 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie) num_out_wins++; } + /* Don't advertise L0s capability if 'aspm-no-l0s' */ + aspm_support = PCIE_LINK_STATE_L1; + if (!of_property_read_bool(pcie->np, "aspm-no-l0s")) + aspm_support |= PCIE_LINK_STATE_L0S; + tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); + u32p_replace_bits(&tmp, aspm_support, + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK); + writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); + /* * For config space accesses on the RC, show the right class for * a PCIe-PCIe bridge (the default setting is to be EP mode). @@ -899,7 +914,6 @@ static void __brcm_pcie_remove(struct brcm_pcie *pcie) brcm_msi_remove(pcie); brcm_pcie_turn_off(pcie); clk_disable_unprepare(pcie->clk); - clk_put(pcie->clk); } static int brcm_pcie_remove(struct platform_device *pdev) @@ -917,11 +931,26 @@ static int brcm_pcie_probe(struct platform_device *pdev) { struct device_node *np = pdev->dev.of_node, *msi_np; struct pci_host_bridge *bridge; + struct device_node *fw_np; struct brcm_pcie *pcie; struct pci_bus *child; struct resource *res; int ret; + /* + * We have to wait for Raspberry Pi's firmware interface to be up as a + * PCI fixup, rpi_firmware_init_vl805(), depends on it. This driver's + * probe can race with the firmware interface's (see + * drivers/firmware/raspberrypi.c) and potentially break the PCI fixup. 
+ */ + fw_np = of_find_compatible_node(NULL, NULL, + "raspberrypi,bcm2835-firmware"); + if (fw_np && !rpi_firmware_get(fw_np)) { + of_node_put(fw_np); + return -EPROBE_DEFER; + } + of_node_put(fw_np); + bridge = devm_pci_alloc_host_bridge(&pdev->dev, sizeof(*pcie)); if (!bridge) return -ENOMEM; diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c index cb982891b22b..ebfa7d5a4e2d 100644 --- a/drivers/pci/controller/pcie-mediatek.c +++ b/drivers/pci/controller/pcie-mediatek.c @@ -651,6 +651,9 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port, } port->irq = platform_get_irq(pdev, port->slot); + if (port->irq < 0) + return port->irq; + irq_set_chained_handler_and_data(port->irq, mtk_pcie_intr_handler, port); diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c new file mode 100644 index 000000000000..b4a288e24aaf --- /dev/null +++ b/drivers/pci/controller/pcie-rcar-ep.c @@ -0,0 +1,563 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCIe endpoint driver for Renesas R-Car SoCs + * Copyright (c) 2020 Renesas Electronics Europe GmbH + * + * Author: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> + */ + +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/of_address.h> +#include <linux/of_irq.h> +#include <linux/of_pci.h> +#include <linux/of_platform.h> +#include <linux/pci.h> +#include <linux/pci-epc.h> +#include <linux/phy/phy.h> +#include <linux/platform_device.h> + +#include "pcie-rcar.h" + +#define RCAR_EPC_MAX_FUNCTIONS 1 + +/* Structure representing the PCIe interface */ +struct rcar_pcie_endpoint { + struct rcar_pcie pcie; + phys_addr_t *ob_mapped_addr; + struct pci_epc_mem_window *ob_window; + u8 max_functions; + unsigned int bar_to_atu[MAX_NR_INBOUND_MAPS]; + unsigned long *ib_window_map; + u32 num_ib_windows; + u32 num_ob_windows; +}; + +static void rcar_pcie_ep_hw_init(struct rcar_pcie *pcie) +{ + u32 val; + + rcar_pci_write_reg(pcie, 0, PCIETCTLR); + + /* Set endpoint mode */ + rcar_pci_write_reg(pcie, 0, PCIEMSR); + + /* Initialize default capabilities. */ + rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP); + rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS), + PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ENDPOINT << 4); + rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f, + PCI_HEADER_TYPE_NORMAL); + + /* Write out the physical slot number = 0 */ + rcar_rmw32(pcie, REXPCAP(PCI_EXP_SLTCAP), PCI_EXP_SLTCAP_PSN, 0); + + val = rcar_pci_read_reg(pcie, EXPCAP(1)); + /* device supports fixed 128 bytes MPSS */ + val &= ~GENMASK(2, 0); + rcar_pci_write_reg(pcie, val, EXPCAP(1)); + + val = rcar_pci_read_reg(pcie, EXPCAP(2)); + /* read requests size 128 bytes */ + val &= ~GENMASK(14, 12); + /* payload size 128 bytes */ + val &= ~GENMASK(7, 5); + rcar_pci_write_reg(pcie, val, EXPCAP(2)); + + /* Set target link speed to 5.0 GT/s */ + rcar_rmw32(pcie, EXPCAP(12), PCI_EXP_LNKSTA_CLS, + PCI_EXP_LNKSTA_CLS_5_0GB); + + /* Set the completion timer timeout to the maximum 50ms. 
*/ + rcar_rmw32(pcie, TLCTLR + 1, 0x3f, 50); + + /* Terminate list of capabilities (Next Capability Offset=0) */ + rcar_rmw32(pcie, RVCCAP(0), 0xfff00000, 0); + + /* flush modifications */ + wmb(); +} + +static int rcar_pcie_ep_get_window(struct rcar_pcie_endpoint *ep, + phys_addr_t addr) +{ + int i; + + for (i = 0; i < ep->num_ob_windows; i++) + if (ep->ob_window[i].phys_base == addr) + return i; + + return -EINVAL; +} + +static int rcar_pcie_parse_outbound_ranges(struct rcar_pcie_endpoint *ep, + struct platform_device *pdev) +{ + struct rcar_pcie *pcie = &ep->pcie; + char outbound_name[10]; + struct resource *res; + unsigned int i = 0; + + ep->num_ob_windows = 0; + for (i = 0; i < RCAR_PCI_MAX_RESOURCES; i++) { + sprintf(outbound_name, "memory%u", i); + res = platform_get_resource_byname(pdev, + IORESOURCE_MEM, + outbound_name); + if (!res) { + dev_err(pcie->dev, "missing outbound window %u\n", i); + return -EINVAL; + } + if (!devm_request_mem_region(&pdev->dev, res->start, + resource_size(res), + outbound_name)) { + dev_err(pcie->dev, "Cannot request memory region %s.\n", + outbound_name); + return -EIO; + } + + ep->ob_window[i].phys_base = res->start; + ep->ob_window[i].size = resource_size(res); + /* controller doesn't support multiple allocation + * from same window, so set page_size to window size + */ + ep->ob_window[i].page_size = resource_size(res); + } + ep->num_ob_windows = i; + + return 0; +} + +static int rcar_pcie_ep_get_pdata(struct rcar_pcie_endpoint *ep, + struct platform_device *pdev) +{ + struct rcar_pcie *pcie = &ep->pcie; + struct pci_epc_mem_window *window; + struct device *dev = pcie->dev; + struct resource res; + int err; + + err = of_address_to_resource(dev->of_node, 0, &res); + if (err) + return err; + pcie->base = devm_ioremap_resource(dev, &res); + if (IS_ERR(pcie->base)) + return PTR_ERR(pcie->base); + + ep->ob_window = devm_kcalloc(dev, RCAR_PCI_MAX_RESOURCES, + sizeof(*window), GFP_KERNEL); + if (!ep->ob_window) + return -ENOMEM; + + rcar_pcie_parse_outbound_ranges(ep, pdev); + + err = of_property_read_u8(dev->of_node, "max-functions", + &ep->max_functions); + if (err < 0 || ep->max_functions > RCAR_EPC_MAX_FUNCTIONS) + ep->max_functions = RCAR_EPC_MAX_FUNCTIONS; + + return 0; +} + +static int rcar_pcie_ep_write_header(struct pci_epc *epc, u8 fn, + struct pci_epf_header *hdr) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + struct rcar_pcie *pcie = &ep->pcie; + u32 val; + + if (!fn) + val = hdr->vendorid; + else + val = rcar_pci_read_reg(pcie, IDSETR0); + val |= hdr->deviceid << 16; + rcar_pci_write_reg(pcie, val, IDSETR0); + + val = hdr->revid; + val |= hdr->progif_code << 8; + val |= hdr->subclass_code << 16; + val |= hdr->baseclass_code << 24; + rcar_pci_write_reg(pcie, val, IDSETR1); + + if (!fn) + val = hdr->subsys_vendor_id; + else + val = rcar_pci_read_reg(pcie, SUBIDSETR); + val |= hdr->subsys_id << 16; + rcar_pci_write_reg(pcie, val, SUBIDSETR); + + if (hdr->interrupt_pin > PCI_INTERRUPT_INTA) + return -EINVAL; + val = rcar_pci_read_reg(pcie, PCICONF(15)); + val |= (hdr->interrupt_pin << 8); + rcar_pci_write_reg(pcie, val, PCICONF(15)); + + return 0; +} + +static int rcar_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, + struct pci_epf_bar *epf_bar) +{ + int flags = epf_bar->flags | LAR_ENABLE | LAM_64BIT; + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + u64 size = 1ULL << fls64(epf_bar->size - 1); + dma_addr_t cpu_addr = epf_bar->phys_addr; + enum pci_barno bar = epf_bar->barno; + struct rcar_pcie *pcie = &ep->pcie; + u32 
mask; + int idx; + int err; + + idx = find_first_zero_bit(ep->ib_window_map, ep->num_ib_windows); + if (idx >= ep->num_ib_windows) { + dev_err(pcie->dev, "no free inbound window\n"); + return -EINVAL; + } + + if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) + flags |= IO_SPACE; + + ep->bar_to_atu[bar] = idx; + /* use 64-bit BARs */ + set_bit(idx, ep->ib_window_map); + set_bit(idx + 1, ep->ib_window_map); + + if (cpu_addr > 0) { + unsigned long nr_zeros = __ffs64(cpu_addr); + u64 alignment = 1ULL << nr_zeros; + + size = min(size, alignment); + } + + size = min(size, 1ULL << 32); + + mask = roundup_pow_of_two(size) - 1; + mask &= ~0xf; + + rcar_pcie_set_inbound(pcie, cpu_addr, + 0x0, mask | flags, idx, false); + + err = rcar_pcie_wait_for_phyrdy(pcie); + if (err) { + dev_err(pcie->dev, "phy not ready\n"); + return -EINVAL; + } + + return 0; +} + +static void rcar_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, + struct pci_epf_bar *epf_bar) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + enum pci_barno bar = epf_bar->barno; + u32 atu_index = ep->bar_to_atu[bar]; + + rcar_pcie_set_inbound(&ep->pcie, 0x0, 0x0, 0x0, bar, false); + + clear_bit(atu_index, ep->ib_window_map); + clear_bit(atu_index + 1, ep->ib_window_map); +} + +static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 interrupts) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + struct rcar_pcie *pcie = &ep->pcie; + u32 flags; + + flags = rcar_pci_read_reg(pcie, MSICAP(fn)); + flags |= interrupts << MSICAP0_MMESCAP_OFFSET; + rcar_pci_write_reg(pcie, flags, MSICAP(fn)); + + return 0; +} + +static int rcar_pcie_ep_get_msi(struct pci_epc *epc, u8 fn) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + struct rcar_pcie *pcie = &ep->pcie; + u32 flags; + + flags = rcar_pci_read_reg(pcie, MSICAP(fn)); + if (!(flags & MSICAP0_MSIE)) + return -EINVAL; + + return ((flags & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET); +} + +static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, + phys_addr_t addr, u64 pci_addr, size_t size) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + struct rcar_pcie *pcie = &ep->pcie; + struct resource_entry win; + struct resource res; + int window; + int err; + + /* check if we have a link. 
*/ + err = rcar_pcie_wait_for_dl(pcie); + if (err) { + dev_err(pcie->dev, "link not up\n"); + return err; + } + + window = rcar_pcie_ep_get_window(ep, addr); + if (window < 0) { + dev_err(pcie->dev, "failed to get corresponding window\n"); + return -EINVAL; + } + + memset(&win, 0x0, sizeof(win)); + memset(&res, 0x0, sizeof(res)); + res.start = pci_addr; + res.end = pci_addr + size - 1; + res.flags = IORESOURCE_MEM; + win.res = &res; + + rcar_pcie_set_outbound(pcie, window, &win); + + ep->ob_mapped_addr[window] = addr; + + return 0; +} + +static void rcar_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, + phys_addr_t addr) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + struct resource_entry win; + struct resource res; + int idx; + + for (idx = 0; idx < ep->num_ob_windows; idx++) + if (ep->ob_mapped_addr[idx] == addr) + break; + + if (idx >= ep->num_ob_windows) + return; + + memset(&win, 0x0, sizeof(win)); + memset(&res, 0x0, sizeof(res)); + win.res = &res; + rcar_pcie_set_outbound(&ep->pcie, idx, &win); + + ep->ob_mapped_addr[idx] = 0; +} + +static int rcar_pcie_ep_assert_intx(struct rcar_pcie_endpoint *ep, + u8 fn, u8 intx) +{ + struct rcar_pcie *pcie = &ep->pcie; + u32 val; + + val = rcar_pci_read_reg(pcie, PCIEMSITXR); + if ((val & PCI_MSI_FLAGS_ENABLE)) { + dev_err(pcie->dev, "MSI is enabled, cannot assert INTx\n"); + return -EINVAL; + } + + val = rcar_pci_read_reg(pcie, PCICONF(1)); + if ((val & INTDIS)) { + dev_err(pcie->dev, "INTx message transmission is disabled\n"); + return -EINVAL; + } + + val = rcar_pci_read_reg(pcie, PCIEINTXR); + if ((val & ASTINTX)) { + dev_err(pcie->dev, "INTx is already asserted\n"); + return -EINVAL; + } + + val |= ASTINTX; + rcar_pci_write_reg(pcie, val, PCIEINTXR); + usleep_range(1000, 1001); + val = rcar_pci_read_reg(pcie, PCIEINTXR); + val &= ~ASTINTX; + rcar_pci_write_reg(pcie, val, PCIEINTXR); + + return 0; +} + +static int rcar_pcie_ep_assert_msi(struct rcar_pcie *pcie, + u8 fn, u8 interrupt_num) +{ + u16 msi_count; + u32 val; + + /* Check MSI enable bit */ + val = rcar_pci_read_reg(pcie, MSICAP(fn)); + if (!(val & MSICAP0_MSIE)) + return -EINVAL; + + /* Get MSI numbers from MME */ + msi_count = ((val & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET); + msi_count = 1 << msi_count; + + if (!interrupt_num || interrupt_num > msi_count) + return -EINVAL; + + val = rcar_pci_read_reg(pcie, PCIEMSITXR); + rcar_pci_write_reg(pcie, val | (interrupt_num - 1), PCIEMSITXR); + + return 0; +} + +static int rcar_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, + enum pci_epc_irq_type type, + u16 interrupt_num) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + + switch (type) { + case PCI_EPC_IRQ_LEGACY: + return rcar_pcie_ep_assert_intx(ep, fn, 0); + + case PCI_EPC_IRQ_MSI: + return rcar_pcie_ep_assert_msi(&ep->pcie, fn, interrupt_num); + + default: + return -EINVAL; + } +} + +static int rcar_pcie_ep_start(struct pci_epc *epc) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + + rcar_pci_write_reg(&ep->pcie, MACCTLR_INIT_VAL, MACCTLR); + rcar_pci_write_reg(&ep->pcie, CFINIT, PCIETCTLR); + + return 0; +} + +static void rcar_pcie_ep_stop(struct pci_epc *epc) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + + rcar_pci_write_reg(&ep->pcie, 0, PCIETCTLR); +} + +static const struct pci_epc_features rcar_pcie_epc_features = { + .linkup_notifier = false, + .msi_capable = true, + .msix_capable = false, + /* use 64-bit BARs so mark BAR[1,3,5] as reserved */ + .reserved_bar = 1 << BAR_1 | 1 << BAR_3 | 1 << BAR_5, + .bar_fixed_64bit = 1 
<< BAR_0 | 1 << BAR_2 | 1 << BAR_4, + .bar_fixed_size[0] = 128, + .bar_fixed_size[2] = 256, + .bar_fixed_size[4] = 256, +}; + +static const struct pci_epc_features* +rcar_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) +{ + return &rcar_pcie_epc_features; +} + +static const struct pci_epc_ops rcar_pcie_epc_ops = { + .write_header = rcar_pcie_ep_write_header, + .set_bar = rcar_pcie_ep_set_bar, + .clear_bar = rcar_pcie_ep_clear_bar, + .set_msi = rcar_pcie_ep_set_msi, + .get_msi = rcar_pcie_ep_get_msi, + .map_addr = rcar_pcie_ep_map_addr, + .unmap_addr = rcar_pcie_ep_unmap_addr, + .raise_irq = rcar_pcie_ep_raise_irq, + .start = rcar_pcie_ep_start, + .stop = rcar_pcie_ep_stop, + .get_features = rcar_pcie_ep_get_features, +}; + +static const struct of_device_id rcar_pcie_ep_of_match[] = { + { .compatible = "renesas,r8a774c0-pcie-ep", }, + { .compatible = "renesas,rcar-gen3-pcie-ep" }, + { }, +}; + +static int rcar_pcie_ep_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct rcar_pcie_endpoint *ep; + struct rcar_pcie *pcie; + struct pci_epc *epc; + int err; + + ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL); + if (!ep) + return -ENOMEM; + + pcie = &ep->pcie; + pcie->dev = dev; + + pm_runtime_enable(dev); + err = pm_runtime_get_sync(dev); + if (err < 0) { + dev_err(dev, "pm_runtime_get_sync failed\n"); + goto err_pm_disable; + } + + err = rcar_pcie_ep_get_pdata(ep, pdev); + if (err < 0) { + dev_err(dev, "failed to request resources: %d\n", err); + goto err_pm_put; + } + + ep->num_ib_windows = MAX_NR_INBOUND_MAPS; + ep->ib_window_map = + devm_kcalloc(dev, BITS_TO_LONGS(ep->num_ib_windows), + sizeof(long), GFP_KERNEL); + if (!ep->ib_window_map) { + err = -ENOMEM; + dev_err(dev, "failed to allocate memory for inbound map\n"); + goto err_pm_put; + } + + ep->ob_mapped_addr = devm_kcalloc(dev, ep->num_ob_windows, + sizeof(*ep->ob_mapped_addr), + GFP_KERNEL); + if (!ep->ob_mapped_addr) { + err = -ENOMEM; + dev_err(dev, "failed to allocate memory for outbound memory pointers\n"); + goto err_pm_put; + } + + epc = devm_pci_epc_create(dev, &rcar_pcie_epc_ops); + if (IS_ERR(epc)) { + dev_err(dev, "failed to create epc device\n"); + err = PTR_ERR(epc); + goto err_pm_put; + } + + epc->max_functions = ep->max_functions; + epc_set_drvdata(epc, ep); + + rcar_pcie_ep_hw_init(pcie); + + err = pci_epc_multi_mem_init(epc, ep->ob_window, ep->num_ob_windows); + if (err < 0) { + dev_err(dev, "failed to initialize the epc memory space\n"); + goto err_pm_put; + } + + return 0; + +err_pm_put: + pm_runtime_put(dev); + +err_pm_disable: + pm_runtime_disable(dev); + + return err; +} + +static struct platform_driver rcar_pcie_ep_driver = { + .driver = { + .name = "rcar-pcie-ep", + .of_match_table = rcar_pcie_ep_of_match, + .suppress_bind_attrs = true, + }, + .probe = rcar_pcie_ep_probe, +}; +builtin_platform_driver(rcar_pcie_ep_driver); diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c new file mode 100644 index 000000000000..d210a36561be --- /dev/null +++ b/drivers/pci/controller/pcie-rcar-host.c @@ -0,0 +1,1130 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCIe driver for Renesas R-Car SoCs + * Copyright (C) 2014-2020 Renesas Electronics Europe Ltd + * + * Based on: + * arch/sh/drivers/pci/pcie-sh7786.c + * arch/sh/drivers/pci/ops-sh7786.c + * Copyright (C) 2009 - 2011 Paul Mundt + * + * Author: Phil Edworthy <phil.edworthy@renesas.com> + */ + +#include <linux/bitops.h> +#include <linux/clk.h> +#include <linux/delay.h> +#include 
<linux/interrupt.h> +#include <linux/irq.h> +#include <linux/irqdomain.h> +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/msi.h> +#include <linux/of_address.h> +#include <linux/of_irq.h> +#include <linux/of_pci.h> +#include <linux/of_platform.h> +#include <linux/pci.h> +#include <linux/phy/phy.h> +#include <linux/platform_device.h> +#include <linux/pm_runtime.h> +#include <linux/slab.h> + +#include "pcie-rcar.h" + +struct rcar_msi { + DECLARE_BITMAP(used, INT_PCI_MSI_NR); + struct irq_domain *domain; + struct msi_controller chip; + unsigned long pages; + struct mutex lock; + int irq1; + int irq2; +}; + +static inline struct rcar_msi *to_rcar_msi(struct msi_controller *chip) +{ + return container_of(chip, struct rcar_msi, chip); +} + +/* Structure representing the PCIe interface */ +struct rcar_pcie_host { + struct rcar_pcie pcie; + struct device *dev; + struct phy *phy; + void __iomem *base; + struct list_head resources; + int root_bus_nr; + struct clk *bus_clk; + struct rcar_msi msi; + int (*phy_init_fn)(struct rcar_pcie_host *host); +}; + +static u32 rcar_read_conf(struct rcar_pcie *pcie, int where) +{ + unsigned int shift = BITS_PER_BYTE * (where & 3); + u32 val = rcar_pci_read_reg(pcie, where & ~3); + + return val >> shift; +} + +/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */ +static int rcar_pcie_config_access(struct rcar_pcie_host *host, + unsigned char access_type, struct pci_bus *bus, + unsigned int devfn, int where, u32 *data) +{ + struct rcar_pcie *pcie = &host->pcie; + unsigned int dev, func, reg, index; + + dev = PCI_SLOT(devfn); + func = PCI_FUNC(devfn); + reg = where & ~3; + index = reg / 4; + + /* + * While each channel has its own memory-mapped extended config + * space, it's generally only accessible when in endpoint mode. + * When in root complex mode, the controller is unable to target + * itself with either type 0 or type 1 accesses, and indeed, any + * controller initiated target transfer to its own config space + * result in a completer abort. + * + * Each channel effectively only supports a single device, but as + * the same channel <-> device access works for any PCI_SLOT() + * value, we cheat a bit here and bind the controller's config + * space to devfn 0 in order to enable self-enumeration. In this + * case the regular ECAR/ECDR path is sidelined and the mangled + * config access itself is initiated as an internal bus transaction. 
+ */ + if (pci_is_root_bus(bus)) { + if (dev != 0) + return PCIBIOS_DEVICE_NOT_FOUND; + + if (access_type == RCAR_PCI_ACCESS_READ) { + *data = rcar_pci_read_reg(pcie, PCICONF(index)); + } else { + /* Keep an eye out for changes to the root bus number */ + if (pci_is_root_bus(bus) && (reg == PCI_PRIMARY_BUS)) + host->root_bus_nr = *data & 0xff; + + rcar_pci_write_reg(pcie, *data, PCICONF(index)); + } + + return PCIBIOS_SUCCESSFUL; + } + + if (host->root_bus_nr < 0) + return PCIBIOS_DEVICE_NOT_FOUND; + + /* Clear errors */ + rcar_pci_write_reg(pcie, rcar_pci_read_reg(pcie, PCIEERRFR), PCIEERRFR); + + /* Set the PIO address */ + rcar_pci_write_reg(pcie, PCIE_CONF_BUS(bus->number) | + PCIE_CONF_DEV(dev) | PCIE_CONF_FUNC(func) | reg, PCIECAR); + + /* Enable the configuration access */ + if (bus->parent->number == host->root_bus_nr) + rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE0, PCIECCTLR); + else + rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE1, PCIECCTLR); + + /* Check for errors */ + if (rcar_pci_read_reg(pcie, PCIEERRFR) & UNSUPPORTED_REQUEST) + return PCIBIOS_DEVICE_NOT_FOUND; + + /* Check for master and target aborts */ + if (rcar_read_conf(pcie, RCONF(PCI_STATUS)) & + (PCI_STATUS_REC_MASTER_ABORT | PCI_STATUS_REC_TARGET_ABORT)) + return PCIBIOS_DEVICE_NOT_FOUND; + + if (access_type == RCAR_PCI_ACCESS_READ) + *data = rcar_pci_read_reg(pcie, PCIECDR); + else + rcar_pci_write_reg(pcie, *data, PCIECDR); + + /* Disable the configuration access */ + rcar_pci_write_reg(pcie, 0, PCIECCTLR); + + return PCIBIOS_SUCCESSFUL; +} + +static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn, + int where, int size, u32 *val) +{ + struct rcar_pcie_host *host = bus->sysdata; + int ret; + + ret = rcar_pcie_config_access(host, RCAR_PCI_ACCESS_READ, + bus, devfn, where, val); + if (ret != PCIBIOS_SUCCESSFUL) { + *val = 0xffffffff; + return ret; + } + + if (size == 1) + *val = (*val >> (BITS_PER_BYTE * (where & 3))) & 0xff; + else if (size == 2) + *val = (*val >> (BITS_PER_BYTE * (where & 2))) & 0xffff; + + dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n", + bus->number, devfn, where, size, *val); + + return ret; +} + +/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */ +static int rcar_pcie_write_conf(struct pci_bus *bus, unsigned int devfn, + int where, int size, u32 val) +{ + struct rcar_pcie_host *host = bus->sysdata; + unsigned int shift; + u32 data; + int ret; + + ret = rcar_pcie_config_access(host, RCAR_PCI_ACCESS_READ, + bus, devfn, where, &data); + if (ret != PCIBIOS_SUCCESSFUL) + return ret; + + dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n", + bus->number, devfn, where, size, val); + + if (size == 1) { + shift = BITS_PER_BYTE * (where & 3); + data &= ~(0xff << shift); + data |= ((val & 0xff) << shift); + } else if (size == 2) { + shift = BITS_PER_BYTE * (where & 2); + data &= ~(0xffff << shift); + data |= ((val & 0xffff) << shift); + } else + data = val; + + ret = rcar_pcie_config_access(host, RCAR_PCI_ACCESS_WRITE, + bus, devfn, where, &data); + + return ret; +} + +static struct pci_ops rcar_pcie_ops = { + .read = rcar_pcie_read_conf, + .write = rcar_pcie_write_conf, +}; + +static int rcar_pcie_setup(struct list_head *resource, + struct rcar_pcie_host *host) +{ + struct resource_entry *win; + int i = 0; + + /* Setup PCI resources */ + resource_list_for_each_entry(win, &host->resources) { + struct resource *res = win->res; + + if (!res->flags) + continue; + + 
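rcar_pcie_set_outbound(), invoked from the switch just below (and defined in the pcie-rcar.c hunk further down), encodes each window's size as a 128-byte-granular mask that is shifted left by 7 bits before being written to PCIEPAMR. The following self-contained sketch only mirrors that arithmetic; the helper name and the sample window size are illustrative, not driver code.

#include <stdint.h>
#include <stdio.h>

/*
 * Mirrors the PCIEPAMR computation in rcar_pcie_set_outbound():
 * sizes of 128 bytes or less use a mask of 0, larger sizes are
 * rounded up to a power of two and expressed in 128-byte units.
 */
static uint32_t pciepamr_value(uint64_t size)
{
	uint64_t rounded = 1;

	while (rounded < size)		/* roundup_pow_of_two() */
		rounded <<= 1;

	if (size <= 128)
		return 0;

	return (uint32_t)(((rounded / 128) - 1) << 7);
}

int main(void)
{
	/* 1 MiB window: (0x100000 / 128) - 1 = 0x1fff, shifted -> 0xfff80 */
	printf("PCIEPAMR for 1 MiB window: 0x%lx\n",
	       (unsigned long)pciepamr_value(1 << 20));
	return 0;
}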
switch (resource_type(res)) { + case IORESOURCE_IO: + case IORESOURCE_MEM: + rcar_pcie_set_outbound(&host->pcie, i, win); + i++; + break; + case IORESOURCE_BUS: + host->root_bus_nr = res->start; + break; + default: + continue; + } + + pci_add_resource(resource, res); + } + + return 1; +} + +static void rcar_pcie_force_speedup(struct rcar_pcie *pcie) +{ + struct device *dev = pcie->dev; + unsigned int timeout = 1000; + u32 macsr; + + if ((rcar_pci_read_reg(pcie, MACS2R) & LINK_SPEED) != LINK_SPEED_5_0GTS) + return; + + if (rcar_pci_read_reg(pcie, MACCTLR) & SPEED_CHANGE) { + dev_err(dev, "Speed change already in progress\n"); + return; + } + + macsr = rcar_pci_read_reg(pcie, MACSR); + if ((macsr & LINK_SPEED) == LINK_SPEED_5_0GTS) + goto done; + + /* Set target link speed to 5.0 GT/s */ + rcar_rmw32(pcie, EXPCAP(12), PCI_EXP_LNKSTA_CLS, + PCI_EXP_LNKSTA_CLS_5_0GB); + + /* Set speed change reason as intentional factor */ + rcar_rmw32(pcie, MACCGSPSETR, SPCNGRSN, 0); + + /* Clear SPCHGFIN, SPCHGSUC, and SPCHGFAIL */ + if (macsr & (SPCHGFIN | SPCHGSUC | SPCHGFAIL)) + rcar_pci_write_reg(pcie, macsr, MACSR); + + /* Start link speed change */ + rcar_rmw32(pcie, MACCTLR, SPEED_CHANGE, SPEED_CHANGE); + + while (timeout--) { + macsr = rcar_pci_read_reg(pcie, MACSR); + if (macsr & SPCHGFIN) { + /* Clear the interrupt bits */ + rcar_pci_write_reg(pcie, macsr, MACSR); + + if (macsr & SPCHGFAIL) + dev_err(dev, "Speed change failed\n"); + + goto done; + } + + msleep(1); + } + + dev_err(dev, "Speed change timed out\n"); + +done: + dev_info(dev, "Current link speed is %s GT/s\n", + (macsr & LINK_SPEED) == LINK_SPEED_5_0GTS ? "5" : "2.5"); +} + +static void rcar_pcie_hw_enable(struct rcar_pcie_host *host) +{ + struct rcar_pcie *pcie = &host->pcie; + struct resource_entry *win; + LIST_HEAD(res); + int i = 0; + + /* Try setting 5 GT/s link speed */ + rcar_pcie_force_speedup(pcie); + + /* Setup PCI resources */ + resource_list_for_each_entry(win, &host->resources) { + struct resource *res = win->res; + + if (!res->flags) + continue; + + switch (resource_type(res)) { + case IORESOURCE_IO: + case IORESOURCE_MEM: + rcar_pcie_set_outbound(pcie, i, win); + i++; + break; + } + } +} + +static int rcar_pcie_enable(struct rcar_pcie_host *host) +{ + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(host); + struct rcar_pcie *pcie = &host->pcie; + struct device *dev = pcie->dev; + struct pci_bus *bus, *child; + int ret; + + /* Try setting 5 GT/s link speed */ + rcar_pcie_force_speedup(pcie); + + rcar_pcie_setup(&bridge->windows, host); + + pci_add_flags(PCI_REASSIGN_ALL_BUS); + + bridge->dev.parent = dev; + bridge->sysdata = host; + bridge->busnr = host->root_bus_nr; + bridge->ops = &rcar_pcie_ops; + bridge->map_irq = of_irq_parse_and_map_pci; + bridge->swizzle_irq = pci_common_swizzle; + if (IS_ENABLED(CONFIG_PCI_MSI)) + bridge->msi = &host->msi.chip; + + ret = pci_scan_root_bus_bridge(bridge); + if (ret < 0) + return ret; + + bus = bridge->bus; + + pci_bus_size_bridges(bus); + pci_bus_assign_resources(bus); + + list_for_each_entry(child, &bus->children, node) + pcie_bus_configure_settings(child); + + pci_bus_add_devices(bus); + + return 0; +} + +static int phy_wait_for_ack(struct rcar_pcie *pcie) +{ + struct device *dev = pcie->dev; + unsigned int timeout = 100; + + while (timeout--) { + if (rcar_pci_read_reg(pcie, H1_PCIEPHYADRR) & PHY_ACK) + return 0; + + udelay(100); + } + + dev_err(dev, "Access to PCIe phy timed out\n"); + + return -ETIMEDOUT; +} + +static void phy_write_reg(struct rcar_pcie *pcie, + 
unsigned int rate, u32 addr, + unsigned int lane, u32 data) +{ + u32 phyaddr; + + phyaddr = WRITE_CMD | + ((rate & 1) << RATE_POS) | + ((lane & 0xf) << LANE_POS) | + ((addr & 0xff) << ADR_POS); + + /* Set write data */ + rcar_pci_write_reg(pcie, data, H1_PCIEPHYDOUTR); + rcar_pci_write_reg(pcie, phyaddr, H1_PCIEPHYADRR); + + /* Ignore errors as they will be dealt with if the data link is down */ + phy_wait_for_ack(pcie); + + /* Clear command */ + rcar_pci_write_reg(pcie, 0, H1_PCIEPHYDOUTR); + rcar_pci_write_reg(pcie, 0, H1_PCIEPHYADRR); + + /* Ignore errors as they will be dealt with if the data link is down */ + phy_wait_for_ack(pcie); +} + +static int rcar_pcie_hw_init(struct rcar_pcie *pcie) +{ + int err; + + /* Begin initialization */ + rcar_pci_write_reg(pcie, 0, PCIETCTLR); + + /* Set mode */ + rcar_pci_write_reg(pcie, 1, PCIEMSR); + + err = rcar_pcie_wait_for_phyrdy(pcie); + if (err) + return err; + + /* + * Initial header for port config space is type 1, set the device + * class to match. Hardware takes care of propagating the IDSETR + * settings, so there is no need to bother with a quirk. + */ + rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI << 16, IDSETR1); + + /* + * Setup Secondary Bus Number & Subordinate Bus Number, even though + * they aren't used, to avoid bridge being detected as broken. + */ + rcar_rmw32(pcie, RCONF(PCI_SECONDARY_BUS), 0xff, 1); + rcar_rmw32(pcie, RCONF(PCI_SUBORDINATE_BUS), 0xff, 1); + + /* Initialize default capabilities. */ + rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP); + rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS), + PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4); + rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f, + PCI_HEADER_TYPE_BRIDGE); + + /* Enable data link layer active state reporting */ + rcar_rmw32(pcie, REXPCAP(PCI_EXP_LNKCAP), PCI_EXP_LNKCAP_DLLLARC, + PCI_EXP_LNKCAP_DLLLARC); + + /* Write out the physical slot number = 0 */ + rcar_rmw32(pcie, REXPCAP(PCI_EXP_SLTCAP), PCI_EXP_SLTCAP_PSN, 0); + + /* Set the completion timer timeout to the maximum 50ms. */ + rcar_rmw32(pcie, TLCTLR + 1, 0x3f, 50); + + /* Terminate list of capabilities (Next Capability Offset=0) */ + rcar_rmw32(pcie, RVCCAP(0), 0xfff00000, 0); + + /* Enable MSI */ + if (IS_ENABLED(CONFIG_PCI_MSI)) + rcar_pci_write_reg(pcie, 0x801f0000, PCIEMSITXR); + + rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); + + /* Finish initialization - establish a PCI Express link */ + rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); + + /* This will timeout if we don't have a link. 
*/ + err = rcar_pcie_wait_for_dl(pcie); + if (err) + return err; + + /* Enable INTx interrupts */ + rcar_rmw32(pcie, PCIEINTXR, 0, 0xF << 8); + + wmb(); + + return 0; +} + +static int rcar_pcie_phy_init_h1(struct rcar_pcie_host *host) +{ + struct rcar_pcie *pcie = &host->pcie; + + /* Initialize the phy */ + phy_write_reg(pcie, 0, 0x42, 0x1, 0x0EC34191); + phy_write_reg(pcie, 1, 0x42, 0x1, 0x0EC34180); + phy_write_reg(pcie, 0, 0x43, 0x1, 0x00210188); + phy_write_reg(pcie, 1, 0x43, 0x1, 0x00210188); + phy_write_reg(pcie, 0, 0x44, 0x1, 0x015C0014); + phy_write_reg(pcie, 1, 0x44, 0x1, 0x015C0014); + phy_write_reg(pcie, 1, 0x4C, 0x1, 0x786174A0); + phy_write_reg(pcie, 1, 0x4D, 0x1, 0x048000BB); + phy_write_reg(pcie, 0, 0x51, 0x1, 0x079EC062); + phy_write_reg(pcie, 0, 0x52, 0x1, 0x20000000); + phy_write_reg(pcie, 1, 0x52, 0x1, 0x20000000); + phy_write_reg(pcie, 1, 0x56, 0x1, 0x00003806); + + phy_write_reg(pcie, 0, 0x60, 0x1, 0x004B03A5); + phy_write_reg(pcie, 0, 0x64, 0x1, 0x3F0F1F0F); + phy_write_reg(pcie, 0, 0x66, 0x1, 0x00008000); + + return 0; +} + +static int rcar_pcie_phy_init_gen2(struct rcar_pcie_host *host) +{ + struct rcar_pcie *pcie = &host->pcie; + + /* + * These settings come from the R-Car Series, 2nd Generation User's + * Manual, section 50.3.1 (2) Initialization of the physical layer. + */ + rcar_pci_write_reg(pcie, 0x000f0030, GEN2_PCIEPHYADDR); + rcar_pci_write_reg(pcie, 0x00381203, GEN2_PCIEPHYDATA); + rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); + rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); + + rcar_pci_write_reg(pcie, 0x000f0054, GEN2_PCIEPHYADDR); + /* The following value is for DC connection, no termination resistor */ + rcar_pci_write_reg(pcie, 0x13802007, GEN2_PCIEPHYDATA); + rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); + rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); + + return 0; +} + +static int rcar_pcie_phy_init_gen3(struct rcar_pcie_host *host) +{ + int err; + + err = phy_init(host->phy); + if (err) + return err; + + err = phy_power_on(host->phy); + if (err) + phy_exit(host->phy); + + return err; +} + +static int rcar_msi_alloc(struct rcar_msi *chip) +{ + int msi; + + mutex_lock(&chip->lock); + + msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR); + if (msi < INT_PCI_MSI_NR) + set_bit(msi, chip->used); + else + msi = -ENOSPC; + + mutex_unlock(&chip->lock); + + return msi; +} + +static int rcar_msi_alloc_region(struct rcar_msi *chip, int no_irqs) +{ + int msi; + + mutex_lock(&chip->lock); + msi = bitmap_find_free_region(chip->used, INT_PCI_MSI_NR, + order_base_2(no_irqs)); + mutex_unlock(&chip->lock); + + return msi; +} + +static void rcar_msi_free(struct rcar_msi *chip, unsigned long irq) +{ + mutex_lock(&chip->lock); + clear_bit(irq, chip->used); + mutex_unlock(&chip->lock); +} + +static irqreturn_t rcar_pcie_msi_irq(int irq, void *data) +{ + struct rcar_pcie_host *host = data; + struct rcar_pcie *pcie = &host->pcie; + struct rcar_msi *msi = &host->msi; + struct device *dev = pcie->dev; + unsigned long reg; + + reg = rcar_pci_read_reg(pcie, PCIEMSIFR); + + /* MSI & INTx share an interrupt - we only handle MSI here */ + if (!reg) + return IRQ_NONE; + + while (reg) { + unsigned int index = find_first_bit(®, 32); + unsigned int msi_irq; + + /* clear the interrupt */ + rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR); + + msi_irq = irq_find_mapping(msi->domain, index); + if (msi_irq) { + if (test_bit(index, msi->used)) + generic_handle_irq(msi_irq); + else + dev_info(dev, "unhandled MSI\n"); + } else { + /* Unknown MSI, 
just clear it */ + dev_dbg(dev, "unexpected MSI\n"); + } + + /* see if there's any more pending in this vector */ + reg = rcar_pci_read_reg(pcie, PCIEMSIFR); + } + + return IRQ_HANDLED; +} + +static int rcar_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev, + struct msi_desc *desc) +{ + struct rcar_msi *msi = to_rcar_msi(chip); + struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host, + msi.chip); + struct rcar_pcie *pcie = &host->pcie; + struct msi_msg msg; + unsigned int irq; + int hwirq; + + hwirq = rcar_msi_alloc(msi); + if (hwirq < 0) + return hwirq; + + irq = irq_find_mapping(msi->domain, hwirq); + if (!irq) { + rcar_msi_free(msi, hwirq); + return -EINVAL; + } + + irq_set_msi_desc(irq, desc); + + msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE; + msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR); + msg.data = hwirq; + + pci_write_msi_msg(irq, &msg); + + return 0; +} + +static int rcar_msi_setup_irqs(struct msi_controller *chip, + struct pci_dev *pdev, int nvec, int type) +{ + struct rcar_msi *msi = to_rcar_msi(chip); + struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host, + msi.chip); + struct rcar_pcie *pcie = &host->pcie; + struct msi_desc *desc; + struct msi_msg msg; + unsigned int irq; + int hwirq; + int i; + + /* MSI-X interrupts are not supported */ + if (type == PCI_CAP_ID_MSIX) + return -EINVAL; + + WARN_ON(!list_is_singular(&pdev->dev.msi_list)); + desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list); + + hwirq = rcar_msi_alloc_region(msi, nvec); + if (hwirq < 0) + return -ENOSPC; + + irq = irq_find_mapping(msi->domain, hwirq); + if (!irq) + return -ENOSPC; + + for (i = 0; i < nvec; i++) { + /* + * irq_create_mapping() called from rcar_pcie_probe() pre- + * allocates descs, so there is no need to allocate descs here. + * We can therefore assume that if irq_find_mapping() above + * returns non-zero, then the descs are also successfully + * allocated. 
+ */ + if (irq_set_msi_desc_off(irq, i, desc)) { + /* TODO: clear */ + return -EINVAL; + } + } + + desc->nvec_used = nvec; + desc->msi_attrib.multiple = order_base_2(nvec); + + msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE; + msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR); + msg.data = hwirq; + + pci_write_msi_msg(irq, &msg); + + return 0; +} + +static void rcar_msi_teardown_irq(struct msi_controller *chip, unsigned int irq) +{ + struct rcar_msi *msi = to_rcar_msi(chip); + struct irq_data *d = irq_get_irq_data(irq); + + rcar_msi_free(msi, d->hwirq); +} + +static struct irq_chip rcar_msi_irq_chip = { + .name = "R-Car PCIe MSI", + .irq_enable = pci_msi_unmask_irq, + .irq_disable = pci_msi_mask_irq, + .irq_mask = pci_msi_mask_irq, + .irq_unmask = pci_msi_unmask_irq, +}; + +static int rcar_msi_map(struct irq_domain *domain, unsigned int irq, + irq_hw_number_t hwirq) +{ + irq_set_chip_and_handler(irq, &rcar_msi_irq_chip, handle_simple_irq); + irq_set_chip_data(irq, domain->host_data); + + return 0; +} + +static const struct irq_domain_ops msi_domain_ops = { + .map = rcar_msi_map, +}; + +static void rcar_pcie_unmap_msi(struct rcar_pcie_host *host) +{ + struct rcar_msi *msi = &host->msi; + int i, irq; + + for (i = 0; i < INT_PCI_MSI_NR; i++) { + irq = irq_find_mapping(msi->domain, i); + if (irq > 0) + irq_dispose_mapping(irq); + } + + irq_domain_remove(msi->domain); +} + +static void rcar_pcie_hw_enable_msi(struct rcar_pcie_host *host) +{ + struct rcar_pcie *pcie = &host->pcie; + struct rcar_msi *msi = &host->msi; + unsigned long base; + + /* setup MSI data target */ + base = virt_to_phys((void *)msi->pages); + + rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR); + rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR); + + /* enable all MSI interrupts */ + rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER); +} + +static int rcar_pcie_enable_msi(struct rcar_pcie_host *host) +{ + struct rcar_pcie *pcie = &host->pcie; + struct device *dev = pcie->dev; + struct rcar_msi *msi = &host->msi; + int err, i; + + mutex_init(&msi->lock); + + msi->chip.dev = dev; + msi->chip.setup_irq = rcar_msi_setup_irq; + msi->chip.setup_irqs = rcar_msi_setup_irqs; + msi->chip.teardown_irq = rcar_msi_teardown_irq; + + msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR, + &msi_domain_ops, &msi->chip); + if (!msi->domain) { + dev_err(dev, "failed to create IRQ domain\n"); + return -ENOMEM; + } + + for (i = 0; i < INT_PCI_MSI_NR; i++) + irq_create_mapping(msi->domain, i); + + /* Two irqs are for MSI, but they are also used for non-MSI irqs */ + err = devm_request_irq(dev, msi->irq1, rcar_pcie_msi_irq, + IRQF_SHARED | IRQF_NO_THREAD, + rcar_msi_irq_chip.name, host); + if (err < 0) { + dev_err(dev, "failed to request IRQ: %d\n", err); + goto err; + } + + err = devm_request_irq(dev, msi->irq2, rcar_pcie_msi_irq, + IRQF_SHARED | IRQF_NO_THREAD, + rcar_msi_irq_chip.name, host); + if (err < 0) { + dev_err(dev, "failed to request IRQ: %d\n", err); + goto err; + } + + /* setup MSI data target */ + msi->pages = __get_free_pages(GFP_KERNEL, 0); + rcar_pcie_hw_enable_msi(host); + + return 0; + +err: + rcar_pcie_unmap_msi(host); + return err; +} + +static void rcar_pcie_teardown_msi(struct rcar_pcie_host *host) +{ + struct rcar_pcie *pcie = &host->pcie; + struct rcar_msi *msi = &host->msi; + + /* Disable all MSI interrupts */ + rcar_pci_write_reg(pcie, 0, PCIEMSIIER); + + /* Disable address decoding of the MSI interrupt, MSIFE */ + rcar_pci_write_reg(pcie, 0, PCIEMSIALR); + + 
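In the multi-MSI path above (rcar_msi_setup_irqs()), the message data programmed into the device is the base hwirq of the block; per the MSI specification the function then encodes the individual vector in the low log2(nvec) bits of that data, which is why rcar_msi_alloc_region() must return a naturally aligned, power-of-two block (order_base_2(nvec)). Below is a rough userspace sketch of that allocation rule, with a plain 32-bit word standing in for the chip->used bitmap; the function name and block sizes are illustrative only.

#include <stdint.h>
#include <stdio.h>

/*
 * Hand out 2^order consecutive bits, aligned to their own size, so
 * that ORing a vector index into the low bits of the base can never
 * collide with a neighbouring allocation.
 */
static int alloc_region(uint32_t *used, unsigned int order)
{
	unsigned int nr = 1u << order;
	uint32_t mask = (nr >= 32) ? 0xffffffffu : ((1u << nr) - 1);
	unsigned int pos;

	for (pos = 0; pos + nr <= 32; pos += nr) {	/* natural alignment */
		if (!(*used & (mask << pos))) {
			*used |= mask << pos;
			return (int)pos;	/* base hwirq of the block */
		}
	}
	return -1;				/* -ENOSPC in the driver */
}

int main(void)
{
	uint32_t used = 0;

	printf("4-vector block starts at %d\n", alloc_region(&used, 2)); /* 0 */
	printf("2-vector block starts at %d\n", alloc_region(&used, 1)); /* 4 */
	return 0;
}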
free_pages(msi->pages, 0); + + rcar_pcie_unmap_msi(host); +} + +static int rcar_pcie_get_resources(struct rcar_pcie_host *host) +{ + struct rcar_pcie *pcie = &host->pcie; + struct device *dev = pcie->dev; + struct resource res; + int err, i; + + host->phy = devm_phy_optional_get(dev, "pcie"); + if (IS_ERR(host->phy)) + return PTR_ERR(host->phy); + + err = of_address_to_resource(dev->of_node, 0, &res); + if (err) + return err; + + pcie->base = devm_ioremap_resource(dev, &res); + if (IS_ERR(pcie->base)) + return PTR_ERR(pcie->base); + + host->bus_clk = devm_clk_get(dev, "pcie_bus"); + if (IS_ERR(host->bus_clk)) { + dev_err(dev, "cannot get pcie bus clock\n"); + return PTR_ERR(host->bus_clk); + } + + i = irq_of_parse_and_map(dev->of_node, 0); + if (!i) { + dev_err(dev, "cannot get platform resources for msi interrupt\n"); + err = -ENOENT; + goto err_irq1; + } + host->msi.irq1 = i; + + i = irq_of_parse_and_map(dev->of_node, 1); + if (!i) { + dev_err(dev, "cannot get platform resources for msi interrupt\n"); + err = -ENOENT; + goto err_irq2; + } + host->msi.irq2 = i; + + return 0; + +err_irq2: + irq_dispose_mapping(host->msi.irq1); +err_irq1: + return err; +} + +static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie, + struct resource_entry *entry, + int *index) +{ + u64 restype = entry->res->flags; + u64 cpu_addr = entry->res->start; + u64 cpu_end = entry->res->end; + u64 pci_addr = entry->res->start - entry->offset; + u32 flags = LAM_64BIT | LAR_ENABLE; + u64 mask; + u64 size = resource_size(entry->res); + int idx = *index; + + if (restype & IORESOURCE_PREFETCH) + flags |= LAM_PREFETCH; + + while (cpu_addr < cpu_end) { + if (idx >= MAX_NR_INBOUND_MAPS - 1) { + dev_err(pcie->dev, "Failed to map inbound regions!\n"); + return -EINVAL; + } + /* + * If the size of the range is larger than the alignment of + * the start address, we have to use multiple entries to + * perform the mapping. 
+ */ + if (cpu_addr > 0) { + unsigned long nr_zeros = __ffs64(cpu_addr); + u64 alignment = 1ULL << nr_zeros; + + size = min(size, alignment); + } + /* Hardware supports max 4GiB inbound region */ + size = min(size, 1ULL << 32); + + mask = roundup_pow_of_two(size) - 1; + mask &= ~0xf; + + rcar_pcie_set_inbound(pcie, cpu_addr, pci_addr, + lower_32_bits(mask) | flags, idx, true); + + pci_addr += size; + cpu_addr += size; + idx += 2; + } + *index = idx; + + return 0; +} + +static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie_host *host) +{ + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(host); + struct resource_entry *entry; + int index = 0, err = 0; + + resource_list_for_each_entry(entry, &bridge->dma_ranges) { + err = rcar_pcie_inbound_ranges(&host->pcie, entry, &index); + if (err) + break; + } + + return err; +} + +static const struct of_device_id rcar_pcie_of_match[] = { + { .compatible = "renesas,pcie-r8a7779", + .data = rcar_pcie_phy_init_h1 }, + { .compatible = "renesas,pcie-r8a7790", + .data = rcar_pcie_phy_init_gen2 }, + { .compatible = "renesas,pcie-r8a7791", + .data = rcar_pcie_phy_init_gen2 }, + { .compatible = "renesas,pcie-rcar-gen2", + .data = rcar_pcie_phy_init_gen2 }, + { .compatible = "renesas,pcie-r8a7795", + .data = rcar_pcie_phy_init_gen3 }, + { .compatible = "renesas,pcie-rcar-gen3", + .data = rcar_pcie_phy_init_gen3 }, + {}, +}; + +static int rcar_pcie_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct rcar_pcie_host *host; + struct rcar_pcie *pcie; + u32 data; + int err; + struct pci_host_bridge *bridge; + + bridge = pci_alloc_host_bridge(sizeof(*host)); + if (!bridge) + return -ENOMEM; + + host = pci_host_bridge_priv(bridge); + pcie = &host->pcie; + pcie->dev = dev; + platform_set_drvdata(pdev, host); + + err = pci_parse_request_of_pci_ranges(dev, &host->resources, + &bridge->dma_ranges, NULL); + if (err) + goto err_free_bridge; + + pm_runtime_enable(pcie->dev); + err = pm_runtime_get_sync(pcie->dev); + if (err < 0) { + dev_err(pcie->dev, "pm_runtime_get_sync failed\n"); + goto err_pm_disable; + } + + err = rcar_pcie_get_resources(host); + if (err < 0) { + dev_err(dev, "failed to request resources: %d\n", err); + goto err_pm_put; + } + + err = clk_prepare_enable(host->bus_clk); + if (err) { + dev_err(dev, "failed to enable bus clock: %d\n", err); + goto err_unmap_msi_irqs; + } + + err = rcar_pcie_parse_map_dma_ranges(host); + if (err) + goto err_clk_disable; + + host->phy_init_fn = of_device_get_match_data(dev); + err = host->phy_init_fn(host); + if (err) { + dev_err(dev, "failed to init PCIe PHY\n"); + goto err_clk_disable; + } + + /* Failure to get a link might just be that no cards are inserted */ + if (rcar_pcie_hw_init(pcie)) { + dev_info(dev, "PCIe link down\n"); + err = -ENODEV; + goto err_phy_shutdown; + } + + data = rcar_pci_read_reg(pcie, MACSR); + dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); + + if (IS_ENABLED(CONFIG_PCI_MSI)) { + err = rcar_pcie_enable_msi(host); + if (err < 0) { + dev_err(dev, + "failed to enable MSI support: %d\n", + err); + goto err_phy_shutdown; + } + } + + err = rcar_pcie_enable(host); + if (err) + goto err_msi_teardown; + + return 0; + +err_msi_teardown: + if (IS_ENABLED(CONFIG_PCI_MSI)) + rcar_pcie_teardown_msi(host); + +err_phy_shutdown: + if (host->phy) { + phy_power_off(host->phy); + phy_exit(host->phy); + } + +err_clk_disable: + clk_disable_unprepare(host->bus_clk); + +err_unmap_msi_irqs: + irq_dispose_mapping(host->msi.irq2); + irq_dispose_mapping(host->msi.irq1); 
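The probe error path here releases resources in strict reverse order of acquisition by falling through a ladder of goto labels, the same structure the pci-tegra.c hunk earlier in this diff adjusts. The minimal, self-contained sketch below shows only the shape of that pattern; the resource names are hypothetical stand-ins for the clock, IRQ mappings and PHY handled above.

#include <stdio.h>

/*
 * Sketch of a goto unwind ladder: a failure at step N jumps to the
 * label that releases step N-1, and execution falls through the
 * remaining labels so earlier acquisitions are undone in reverse.
 */
static int acquire(const char *what, int fail)
{
	if (fail) {
		printf("failed to acquire %s\n", what);
		return -1;
	}
	printf("acquired %s\n", what);
	return 0;
}

static void release(const char *what)
{
	printf("released %s\n", what);
}

static int probe(int fail_at)
{
	if (acquire("clock", fail_at == 1))
		goto err_out;
	if (acquire("irq", fail_at == 2))
		goto err_clock;
	if (acquire("phy", fail_at == 3))
		goto err_irq;

	return 0;		/* success: keep everything */

err_irq:
	release("irq");
err_clock:
	release("clock");
err_out:
	return -1;
}

int main(void)
{
	probe(3);	/* fails at the PHY step: releases irq, then clock */
	return 0;
}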
+ +err_pm_put: + pm_runtime_put(dev); + +err_pm_disable: + pm_runtime_disable(dev); + pci_free_resource_list(&host->resources); + +err_free_bridge: + pci_free_host_bridge(bridge); + + return err; +} + +static int __maybe_unused rcar_pcie_resume(struct device *dev) +{ + struct rcar_pcie_host *host = dev_get_drvdata(dev); + struct rcar_pcie *pcie = &host->pcie; + unsigned int data; + int err; + + err = rcar_pcie_parse_map_dma_ranges(host); + if (err) + return 0; + + /* Failure to get a link might just be that no cards are inserted */ + err = host->phy_init_fn(host); + if (err) { + dev_info(dev, "PCIe link down\n"); + return 0; + } + + data = rcar_pci_read_reg(pcie, MACSR); + dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); + + /* Enable MSI */ + if (IS_ENABLED(CONFIG_PCI_MSI)) + rcar_pcie_hw_enable_msi(host); + + rcar_pcie_hw_enable(host); + + return 0; +} + +static int rcar_pcie_resume_noirq(struct device *dev) +{ + struct rcar_pcie_host *host = dev_get_drvdata(dev); + struct rcar_pcie *pcie = &host->pcie; + + if (rcar_pci_read_reg(pcie, PMSR) && + !(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN)) + return 0; + + /* Re-establish the PCIe link */ + rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); + rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); + return rcar_pcie_wait_for_dl(pcie); +} + +static const struct dev_pm_ops rcar_pcie_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(NULL, rcar_pcie_resume) + .resume_noirq = rcar_pcie_resume_noirq, +}; + +static struct platform_driver rcar_pcie_driver = { + .driver = { + .name = "rcar-pcie", + .of_match_table = rcar_pcie_of_match, + .pm = &rcar_pcie_pm_ops, + .suppress_bind_attrs = true, + }, + .probe = rcar_pcie_probe, +}; +builtin_platform_driver(rcar_pcie_driver); diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c index 759c6542c5c8..7583699ef7b6 100644 --- a/drivers/pci/controller/pcie-rcar.c +++ b/drivers/pci/controller/pcie-rcar.c @@ -1,177 +1,27 @@ // SPDX-License-Identifier: GPL-2.0 /* * PCIe driver for Renesas R-Car SoCs - * Copyright (C) 2014 Renesas Electronics Europe Ltd - * - * Based on: - * arch/sh/drivers/pci/pcie-sh7786.c - * arch/sh/drivers/pci/ops-sh7786.c - * Copyright (C) 2009 - 2011 Paul Mundt + * Copyright (C) 2014-2020 Renesas Electronics Europe Ltd * * Author: Phil Edworthy <phil.edworthy@renesas.com> */ -#include <linux/bitops.h> -#include <linux/clk.h> #include <linux/delay.h> -#include <linux/interrupt.h> -#include <linux/irq.h> -#include <linux/irqdomain.h> -#include <linux/kernel.h> -#include <linux/init.h> -#include <linux/msi.h> -#include <linux/of_address.h> -#include <linux/of_irq.h> -#include <linux/of_pci.h> -#include <linux/of_platform.h> #include <linux/pci.h> -#include <linux/phy/phy.h> -#include <linux/platform_device.h> -#include <linux/pm_runtime.h> -#include <linux/slab.h> - -#define PCIECAR 0x000010 -#define PCIECCTLR 0x000018 -#define CONFIG_SEND_ENABLE BIT(31) -#define TYPE0 (0 << 8) -#define TYPE1 BIT(8) -#define PCIECDR 0x000020 -#define PCIEMSR 0x000028 -#define PCIEINTXR 0x000400 -#define PCIEPHYSR 0x0007f0 -#define PHYRDY BIT(0) -#define PCIEMSITXR 0x000840 - -/* Transfer control */ -#define PCIETCTLR 0x02000 -#define DL_DOWN BIT(3) -#define CFINIT BIT(0) -#define PCIETSTR 0x02004 -#define DATA_LINK_ACTIVE BIT(0) -#define PCIEERRFR 0x02020 -#define UNSUPPORTED_REQUEST BIT(4) -#define PCIEMSIFR 0x02044 -#define PCIEMSIALR 0x02048 -#define MSIFE BIT(0) -#define PCIEMSIAUR 0x0204c -#define PCIEMSIIER 0x02050 - -/* root port address */ -#define PCIEPRAR(x) (0x02080 + ((x) 
* 0x4)) - -/* local address reg & mask */ -#define PCIELAR(x) (0x02200 + ((x) * 0x20)) -#define PCIELAMR(x) (0x02208 + ((x) * 0x20)) -#define LAM_PREFETCH BIT(3) -#define LAM_64BIT BIT(2) -#define LAR_ENABLE BIT(1) - -/* PCIe address reg & mask */ -#define PCIEPALR(x) (0x03400 + ((x) * 0x20)) -#define PCIEPAUR(x) (0x03404 + ((x) * 0x20)) -#define PCIEPAMR(x) (0x03408 + ((x) * 0x20)) -#define PCIEPTCTLR(x) (0x0340c + ((x) * 0x20)) -#define PAR_ENABLE BIT(31) -#define IO_SPACE BIT(8) - -/* Configuration */ -#define PCICONF(x) (0x010000 + ((x) * 0x4)) -#define PMCAP(x) (0x010040 + ((x) * 0x4)) -#define EXPCAP(x) (0x010070 + ((x) * 0x4)) -#define VCCAP(x) (0x010100 + ((x) * 0x4)) - -/* link layer */ -#define IDSETR1 0x011004 -#define TLCTLR 0x011048 -#define MACSR 0x011054 -#define SPCHGFIN BIT(4) -#define SPCHGFAIL BIT(6) -#define SPCHGSUC BIT(7) -#define LINK_SPEED (0xf << 16) -#define LINK_SPEED_2_5GTS (1 << 16) -#define LINK_SPEED_5_0GTS (2 << 16) -#define MACCTLR 0x011058 -#define MACCTLR_NFTS_MASK GENMASK(23, 16) /* The name is from SH7786 */ -#define SPEED_CHANGE BIT(24) -#define SCRAMBLE_DISABLE BIT(27) -#define LTSMDIS BIT(31) -#define MACCTLR_INIT_VAL (LTSMDIS | MACCTLR_NFTS_MASK) -#define PMSR 0x01105c -#define MACS2R 0x011078 -#define MACCGSPSETR 0x011084 -#define SPCNGRSN BIT(31) - -/* R-Car H1 PHY */ -#define H1_PCIEPHYADRR 0x04000c -#define WRITE_CMD BIT(16) -#define PHY_ACK BIT(24) -#define RATE_POS 12 -#define LANE_POS 8 -#define ADR_POS 0 -#define H1_PCIEPHYDOUTR 0x040014 - -/* R-Car Gen2 PHY */ -#define GEN2_PCIEPHYADDR 0x780 -#define GEN2_PCIEPHYDATA 0x784 -#define GEN2_PCIEPHYCTRL 0x78c - -#define INT_PCI_MSI_NR 32 - -#define RCONF(x) (PCICONF(0) + (x)) -#define RPMCAP(x) (PMCAP(0) + (x)) -#define REXPCAP(x) (EXPCAP(0) + (x)) -#define RVCCAP(x) (VCCAP(0) + (x)) -#define PCIE_CONF_BUS(b) (((b) & 0xff) << 24) -#define PCIE_CONF_DEV(d) (((d) & 0x1f) << 19) -#define PCIE_CONF_FUNC(f) (((f) & 0x7) << 16) +#include "pcie-rcar.h" -#define RCAR_PCI_MAX_RESOURCES 4 -#define MAX_NR_INBOUND_MAPS 6 - -struct rcar_msi { - DECLARE_BITMAP(used, INT_PCI_MSI_NR); - struct irq_domain *domain; - struct msi_controller chip; - unsigned long pages; - struct mutex lock; - int irq1; - int irq2; -}; - -static inline struct rcar_msi *to_rcar_msi(struct msi_controller *chip) -{ - return container_of(chip, struct rcar_msi, chip); -} - -/* Structure representing the PCIe interface */ -struct rcar_pcie { - struct device *dev; - struct phy *phy; - void __iomem *base; - struct list_head resources; - int root_bus_nr; - struct clk *bus_clk; - struct rcar_msi msi; -}; - -static void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val, - unsigned int reg) +void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val, unsigned int reg) { writel(val, pcie->base + reg); } -static u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg) +u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg) { return readl(pcie->base + reg); } -enum { - RCAR_PCI_ACCESS_READ, - RCAR_PCI_ACCESS_WRITE, -}; - -static void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data) +void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data) { unsigned int shift = BITS_PER_BYTE * (where & 3); u32 val = rcar_pci_read_reg(pcie, where & ~3); @@ -181,163 +31,42 @@ static void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data) rcar_pci_write_reg(pcie, val, where & ~3); } -static u32 rcar_read_conf(struct rcar_pcie *pcie, int where) +int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie) { - 
unsigned int shift = BITS_PER_BYTE * (where & 3); - u32 val = rcar_pci_read_reg(pcie, where & ~3); - - return val >> shift; -} - -/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */ -static int rcar_pcie_config_access(struct rcar_pcie *pcie, - unsigned char access_type, struct pci_bus *bus, - unsigned int devfn, int where, u32 *data) -{ - unsigned int dev, func, reg, index; - - dev = PCI_SLOT(devfn); - func = PCI_FUNC(devfn); - reg = where & ~3; - index = reg / 4; - - /* - * While each channel has its own memory-mapped extended config - * space, it's generally only accessible when in endpoint mode. - * When in root complex mode, the controller is unable to target - * itself with either type 0 or type 1 accesses, and indeed, any - * controller initiated target transfer to its own config space - * result in a completer abort. - * - * Each channel effectively only supports a single device, but as - * the same channel <-> device access works for any PCI_SLOT() - * value, we cheat a bit here and bind the controller's config - * space to devfn 0 in order to enable self-enumeration. In this - * case the regular ECAR/ECDR path is sidelined and the mangled - * config access itself is initiated as an internal bus transaction. - */ - if (pci_is_root_bus(bus)) { - if (dev != 0) - return PCIBIOS_DEVICE_NOT_FOUND; - - if (access_type == RCAR_PCI_ACCESS_READ) { - *data = rcar_pci_read_reg(pcie, PCICONF(index)); - } else { - /* Keep an eye out for changes to the root bus number */ - if (pci_is_root_bus(bus) && (reg == PCI_PRIMARY_BUS)) - pcie->root_bus_nr = *data & 0xff; - - rcar_pci_write_reg(pcie, *data, PCICONF(index)); - } - - return PCIBIOS_SUCCESSFUL; - } - - if (pcie->root_bus_nr < 0) - return PCIBIOS_DEVICE_NOT_FOUND; - - /* Clear errors */ - rcar_pci_write_reg(pcie, rcar_pci_read_reg(pcie, PCIEERRFR), PCIEERRFR); - - /* Set the PIO address */ - rcar_pci_write_reg(pcie, PCIE_CONF_BUS(bus->number) | - PCIE_CONF_DEV(dev) | PCIE_CONF_FUNC(func) | reg, PCIECAR); - - /* Enable the configuration access */ - if (bus->parent->number == pcie->root_bus_nr) - rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE0, PCIECCTLR); - else - rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE1, PCIECCTLR); - - /* Check for errors */ - if (rcar_pci_read_reg(pcie, PCIEERRFR) & UNSUPPORTED_REQUEST) - return PCIBIOS_DEVICE_NOT_FOUND; - - /* Check for master and target aborts */ - if (rcar_read_conf(pcie, RCONF(PCI_STATUS)) & - (PCI_STATUS_REC_MASTER_ABORT | PCI_STATUS_REC_TARGET_ABORT)) - return PCIBIOS_DEVICE_NOT_FOUND; - - if (access_type == RCAR_PCI_ACCESS_READ) - *data = rcar_pci_read_reg(pcie, PCIECDR); - else - rcar_pci_write_reg(pcie, *data, PCIECDR); - - /* Disable the configuration access */ - rcar_pci_write_reg(pcie, 0, PCIECCTLR); - - return PCIBIOS_SUCCESSFUL; -} + unsigned int timeout = 10; -static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn, - int where, int size, u32 *val) -{ - struct rcar_pcie *pcie = bus->sysdata; - int ret; + while (timeout--) { + if (rcar_pci_read_reg(pcie, PCIEPHYSR) & PHYRDY) + return 0; - ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_READ, - bus, devfn, where, val); - if (ret != PCIBIOS_SUCCESSFUL) { - *val = 0xffffffff; - return ret; + msleep(5); } - if (size == 1) - *val = (*val >> (BITS_PER_BYTE * (where & 3))) & 0xff; - else if (size == 2) - *val = (*val >> (BITS_PER_BYTE * (where & 2))) & 0xffff; - - dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n", - bus->number, devfn, where, size, *val); - 
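The accessors being dropped here (and re-added in pcie-rcar-host.c earlier in this diff) only issue aligned 32-bit transactions to the controller: 8- and 16-bit config reads shift and mask the fetched dword, and narrow writes read-modify-write it before writing the whole dword back. The standalone sketch below reproduces that carving with illustrative values; the helper names are not from the driver.

#include <stdint.h>
#include <stdio.h>

/*
 * Extract a 1/2/4-byte field from the 32-bit dword that covers
 * config offset 'where', mirroring rcar_pcie_read_conf().
 */
static uint32_t conf_extract(uint32_t dword, int where, int size)
{
	if (size == 1)
		return (dword >> (8 * (where & 3))) & 0xff;
	if (size == 2)
		return (dword >> (8 * (where & 2))) & 0xffff;
	return dword;
}

/*
 * Merge a narrow write into the covering dword before it is written
 * back, mirroring rcar_pcie_write_conf().
 */
static uint32_t conf_merge(uint32_t dword, int where, int size, uint32_t val)
{
	unsigned int shift;
	uint32_t mask;

	if (size == 4)
		return val;
	shift = 8 * (size == 1 ? (where & 3) : (where & 2));
	mask = (size == 1) ? 0xffu : 0xffffu;
	return (dword & ~(mask << shift)) | ((val & mask) << shift);
}

int main(void)
{
	uint32_t dword = 0x12345678;	/* dword covering offsets 0x0c-0x0f */

	/* byte at 0x0e: (0x12345678 >> 16) & 0xff = 0x34 */
	printf("read8   @0x0e = 0x%02lx\n",
	       (unsigned long)conf_extract(dword, 0x0e, 1));
	/* 16-bit write of 0xabcd at 0x0e: dword becomes 0xabcd5678 */
	printf("write16 @0x0e -> 0x%08lx\n",
	       (unsigned long)conf_merge(dword, 0x0e, 2, 0xabcd));
	return 0;
}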
- return ret; + return -ETIMEDOUT; } -/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */ -static int rcar_pcie_write_conf(struct pci_bus *bus, unsigned int devfn, - int where, int size, u32 val) +int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie) { - struct rcar_pcie *pcie = bus->sysdata; - unsigned int shift; - u32 data; - int ret; - - ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_READ, - bus, devfn, where, &data); - if (ret != PCIBIOS_SUCCESSFUL) - return ret; - - dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n", - bus->number, devfn, where, size, val); + unsigned int timeout = 10000; - if (size == 1) { - shift = BITS_PER_BYTE * (where & 3); - data &= ~(0xff << shift); - data |= ((val & 0xff) << shift); - } else if (size == 2) { - shift = BITS_PER_BYTE * (where & 2); - data &= ~(0xffff << shift); - data |= ((val & 0xffff) << shift); - } else - data = val; + while (timeout--) { + if ((rcar_pci_read_reg(pcie, PCIETSTR) & DATA_LINK_ACTIVE)) + return 0; - ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_WRITE, - bus, devfn, where, &data); + udelay(5); + cpu_relax(); + } - return ret; + return -ETIMEDOUT; } -static struct pci_ops rcar_pcie_ops = { - .read = rcar_pcie_read_conf, - .write = rcar_pcie_write_conf, -}; - -static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie, - struct resource *res) +void rcar_pcie_set_outbound(struct rcar_pcie *pcie, int win, + struct resource_entry *window) { /* Setup PCIe address space mappings for each resource */ - resource_size_t size; + struct resource *res = window->res; resource_size_t res_start; + resource_size_t size; u32 mask; rcar_pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win)); @@ -347,13 +76,16 @@ static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie, * keeps things pretty simple. 
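Both wait helpers just added are bounded polls of a status register. The open-coded loops work in any context; for comparison, a sketch of the same DATA_LINK_ACTIVE wait expressed with the generic helper from <linux/iopoll.h> (the offset and bit mirror pcie-rcar.h, everything else here is illustrative):

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/types.h>

#define DEMO_PCIETSTR           0x02004
#define DEMO_DATA_LINK_ACTIVE   BIT(0)

/* Poll every 5 us, giving up after 50 ms (10000 iterations of the loop above) */
static int demo_wait_for_dl(void __iomem *base)
{
        u32 val;

        return readl_poll_timeout_atomic(base + DEMO_PCIETSTR, val,
                                         val & DEMO_DATA_LINK_ACTIVE, 5, 50000);
}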
*/ size = resource_size(res); - mask = (roundup_pow_of_two(size) / SZ_128) - 1; + if (size > 128) + mask = (roundup_pow_of_two(size) / SZ_128) - 1; + else + mask = 0x0; rcar_pci_write_reg(pcie, mask << 7, PCIEPAMR(win)); if (res->flags & IORESOURCE_IO) - res_start = pci_pio_to_address(res->start); + res_start = pci_pio_to_address(res->start) - window->offset; else - res_start = res->start; + res_start = res->start - window->offset; rcar_pci_write_reg(pcie, upper_32_bits(res_start), PCIEPAUR(win)); rcar_pci_write_reg(pcie, lower_32_bits(res_start) & ~0x7F, @@ -367,883 +99,22 @@ static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie, rcar_pci_write_reg(pcie, mask, PCIEPTCTLR(win)); } -static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pci) -{ - struct resource_entry *win; - int i = 0; - - /* Setup PCI resources */ - resource_list_for_each_entry(win, &pci->resources) { - struct resource *res = win->res; - - if (!res->flags) - continue; - - switch (resource_type(res)) { - case IORESOURCE_IO: - case IORESOURCE_MEM: - rcar_pcie_setup_window(i, pci, res); - i++; - break; - case IORESOURCE_BUS: - pci->root_bus_nr = res->start; - break; - default: - continue; - } - - pci_add_resource(resource, res); - } - - return 1; -} - -static void rcar_pcie_force_speedup(struct rcar_pcie *pcie) -{ - struct device *dev = pcie->dev; - unsigned int timeout = 1000; - u32 macsr; - - if ((rcar_pci_read_reg(pcie, MACS2R) & LINK_SPEED) != LINK_SPEED_5_0GTS) - return; - - if (rcar_pci_read_reg(pcie, MACCTLR) & SPEED_CHANGE) { - dev_err(dev, "Speed change already in progress\n"); - return; - } - - macsr = rcar_pci_read_reg(pcie, MACSR); - if ((macsr & LINK_SPEED) == LINK_SPEED_5_0GTS) - goto done; - - /* Set target link speed to 5.0 GT/s */ - rcar_rmw32(pcie, EXPCAP(12), PCI_EXP_LNKSTA_CLS, - PCI_EXP_LNKSTA_CLS_5_0GB); - - /* Set speed change reason as intentional factor */ - rcar_rmw32(pcie, MACCGSPSETR, SPCNGRSN, 0); - - /* Clear SPCHGFIN, SPCHGSUC, and SPCHGFAIL */ - if (macsr & (SPCHGFIN | SPCHGSUC | SPCHGFAIL)) - rcar_pci_write_reg(pcie, macsr, MACSR); - - /* Start link speed change */ - rcar_rmw32(pcie, MACCTLR, SPEED_CHANGE, SPEED_CHANGE); - - while (timeout--) { - macsr = rcar_pci_read_reg(pcie, MACSR); - if (macsr & SPCHGFIN) { - /* Clear the interrupt bits */ - rcar_pci_write_reg(pcie, macsr, MACSR); - - if (macsr & SPCHGFAIL) - dev_err(dev, "Speed change failed\n"); - - goto done; - } - - msleep(1); - } - - dev_err(dev, "Speed change timed out\n"); - -done: - dev_info(dev, "Current link speed is %s GT/s\n", - (macsr & LINK_SPEED) == LINK_SPEED_5_0GTS ? 
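For reference, the PCIEPAMR value programmed above encodes the window size in 128-byte units, and the CPU address is split across PCIEPAUR/PCIEPALR with the low seven bits cleared. A small sketch of that encoding (structure and function names are illustrative):

#include <linux/kernel.h>
#include <linux/log2.h>
#include <linux/sizes.h>
#include <linux/types.h>

struct demo_ob_window {
        u32 mask;       /* PCIEPAMR value, size mask already shifted into place */
        u32 addr_hi;    /* PCIEPAUR value */
        u32 addr_lo;    /* PCIEPALR value, 128-byte aligned */
};

static struct demo_ob_window demo_encode_outbound(u64 cpu_addr, u64 size)
{
        struct demo_ob_window w;
        u32 mask;

        /* Window granularity is 128 bytes; anything smaller collapses to mask 0 */
        if (size > SZ_128)
                mask = (roundup_pow_of_two(size) / SZ_128) - 1;
        else
                mask = 0;

        w.mask = mask << 7;
        w.addr_hi = upper_32_bits(cpu_addr);
        w.addr_lo = lower_32_bits(cpu_addr) & ~0x7F;

        return w;
}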
"5" : "2.5"); -} - -static int rcar_pcie_enable(struct rcar_pcie *pcie) -{ - struct device *dev = pcie->dev; - struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); - struct pci_bus *bus, *child; - int ret; - - /* Try setting 5 GT/s link speed */ - rcar_pcie_force_speedup(pcie); - - rcar_pcie_setup(&bridge->windows, pcie); - - pci_add_flags(PCI_REASSIGN_ALL_BUS); - - bridge->dev.parent = dev; - bridge->sysdata = pcie; - bridge->busnr = pcie->root_bus_nr; - bridge->ops = &rcar_pcie_ops; - bridge->map_irq = of_irq_parse_and_map_pci; - bridge->swizzle_irq = pci_common_swizzle; - if (IS_ENABLED(CONFIG_PCI_MSI)) - bridge->msi = &pcie->msi.chip; - - ret = pci_scan_root_bus_bridge(bridge); - if (ret < 0) - return ret; - - bus = bridge->bus; - - pci_bus_size_bridges(bus); - pci_bus_assign_resources(bus); - - list_for_each_entry(child, &bus->children, node) - pcie_bus_configure_settings(child); - - pci_bus_add_devices(bus); - - return 0; -} - -static int phy_wait_for_ack(struct rcar_pcie *pcie) -{ - struct device *dev = pcie->dev; - unsigned int timeout = 100; - - while (timeout--) { - if (rcar_pci_read_reg(pcie, H1_PCIEPHYADRR) & PHY_ACK) - return 0; - - udelay(100); - } - - dev_err(dev, "Access to PCIe phy timed out\n"); - - return -ETIMEDOUT; -} - -static void phy_write_reg(struct rcar_pcie *pcie, - unsigned int rate, u32 addr, - unsigned int lane, u32 data) -{ - u32 phyaddr; - - phyaddr = WRITE_CMD | - ((rate & 1) << RATE_POS) | - ((lane & 0xf) << LANE_POS) | - ((addr & 0xff) << ADR_POS); - - /* Set write data */ - rcar_pci_write_reg(pcie, data, H1_PCIEPHYDOUTR); - rcar_pci_write_reg(pcie, phyaddr, H1_PCIEPHYADRR); - - /* Ignore errors as they will be dealt with if the data link is down */ - phy_wait_for_ack(pcie); - - /* Clear command */ - rcar_pci_write_reg(pcie, 0, H1_PCIEPHYDOUTR); - rcar_pci_write_reg(pcie, 0, H1_PCIEPHYADRR); - - /* Ignore errors as they will be dealt with if the data link is down */ - phy_wait_for_ack(pcie); -} - -static int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie) -{ - unsigned int timeout = 10; - - while (timeout--) { - if (rcar_pci_read_reg(pcie, PCIEPHYSR) & PHYRDY) - return 0; - - msleep(5); - } - - return -ETIMEDOUT; -} - -static int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie) -{ - unsigned int timeout = 10000; - - while (timeout--) { - if ((rcar_pci_read_reg(pcie, PCIETSTR) & DATA_LINK_ACTIVE)) - return 0; - - udelay(5); - cpu_relax(); - } - - return -ETIMEDOUT; -} - -static int rcar_pcie_hw_init(struct rcar_pcie *pcie) -{ - int err; - - /* Begin initialization */ - rcar_pci_write_reg(pcie, 0, PCIETCTLR); - - /* Set mode */ - rcar_pci_write_reg(pcie, 1, PCIEMSR); - - err = rcar_pcie_wait_for_phyrdy(pcie); - if (err) - return err; - - /* - * Initial header for port config space is type 1, set the device - * class to match. Hardware takes care of propagating the IDSETR - * settings, so there is no need to bother with a quirk. - */ - rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI << 16, IDSETR1); - - /* - * Setup Secondary Bus Number & Subordinate Bus Number, even though - * they aren't used, to avoid bridge being detected as broken. - */ - rcar_rmw32(pcie, RCONF(PCI_SECONDARY_BUS), 0xff, 1); - rcar_rmw32(pcie, RCONF(PCI_SUBORDINATE_BUS), 0xff, 1); - - /* Initialize default capabilities. 
*/ - rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP); - rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS), - PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4); - rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f, - PCI_HEADER_TYPE_BRIDGE); - - /* Enable data link layer active state reporting */ - rcar_rmw32(pcie, REXPCAP(PCI_EXP_LNKCAP), PCI_EXP_LNKCAP_DLLLARC, - PCI_EXP_LNKCAP_DLLLARC); - - /* Write out the physical slot number = 0 */ - rcar_rmw32(pcie, REXPCAP(PCI_EXP_SLTCAP), PCI_EXP_SLTCAP_PSN, 0); - - /* Set the completion timer timeout to the maximum 50ms. */ - rcar_rmw32(pcie, TLCTLR + 1, 0x3f, 50); - - /* Terminate list of capabilities (Next Capability Offset=0) */ - rcar_rmw32(pcie, RVCCAP(0), 0xfff00000, 0); - - /* Enable MSI */ - if (IS_ENABLED(CONFIG_PCI_MSI)) - rcar_pci_write_reg(pcie, 0x801f0000, PCIEMSITXR); - - rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); - - /* Finish initialization - establish a PCI Express link */ - rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); - - /* This will timeout if we don't have a link. */ - err = rcar_pcie_wait_for_dl(pcie); - if (err) - return err; - - /* Enable INTx interrupts */ - rcar_rmw32(pcie, PCIEINTXR, 0, 0xF << 8); - - wmb(); - - return 0; -} - -static int rcar_pcie_phy_init_h1(struct rcar_pcie *pcie) -{ - /* Initialize the phy */ - phy_write_reg(pcie, 0, 0x42, 0x1, 0x0EC34191); - phy_write_reg(pcie, 1, 0x42, 0x1, 0x0EC34180); - phy_write_reg(pcie, 0, 0x43, 0x1, 0x00210188); - phy_write_reg(pcie, 1, 0x43, 0x1, 0x00210188); - phy_write_reg(pcie, 0, 0x44, 0x1, 0x015C0014); - phy_write_reg(pcie, 1, 0x44, 0x1, 0x015C0014); - phy_write_reg(pcie, 1, 0x4C, 0x1, 0x786174A0); - phy_write_reg(pcie, 1, 0x4D, 0x1, 0x048000BB); - phy_write_reg(pcie, 0, 0x51, 0x1, 0x079EC062); - phy_write_reg(pcie, 0, 0x52, 0x1, 0x20000000); - phy_write_reg(pcie, 1, 0x52, 0x1, 0x20000000); - phy_write_reg(pcie, 1, 0x56, 0x1, 0x00003806); - - phy_write_reg(pcie, 0, 0x60, 0x1, 0x004B03A5); - phy_write_reg(pcie, 0, 0x64, 0x1, 0x3F0F1F0F); - phy_write_reg(pcie, 0, 0x66, 0x1, 0x00008000); - - return 0; -} - -static int rcar_pcie_phy_init_gen2(struct rcar_pcie *pcie) +void rcar_pcie_set_inbound(struct rcar_pcie *pcie, u64 cpu_addr, + u64 pci_addr, u64 flags, int idx, bool host) { /* - * These settings come from the R-Car Series, 2nd Generation User's - * Manual, section 50.3.1 (2) Initialization of the physical layer. + * Set up 64-bit inbound regions as the range parser doesn't + * distinguish between 32 and 64-bit types. 
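The H1 PHY above is programmed indirectly: a single command word carries the rate, lane and PHY register address, and completion is signalled by PHY_ACK. A sketch of just the command-word packing used by phy_write_reg() (the constants mirror pcie-rcar.h, the function name is illustrative):

#include <linux/bits.h>
#include <linux/types.h>

#define DEMO_WRITE_CMD  BIT(16)
#define DEMO_RATE_POS   12
#define DEMO_LANE_POS   8
#define DEMO_ADR_POS    0

/* Build the H1_PCIEPHYADRR command word for one PHY register write */
static u32 demo_phy_write_cmd(unsigned int rate, unsigned int lane, u32 addr)
{
        return DEMO_WRITE_CMD |
               ((rate & 1) << DEMO_RATE_POS) |
               ((lane & 0xf) << DEMO_LANE_POS) |
               ((addr & 0xff) << DEMO_ADR_POS);
}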
*/ - rcar_pci_write_reg(pcie, 0x000f0030, GEN2_PCIEPHYADDR); - rcar_pci_write_reg(pcie, 0x00381203, GEN2_PCIEPHYDATA); - rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); - rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); - - rcar_pci_write_reg(pcie, 0x000f0054, GEN2_PCIEPHYADDR); - /* The following value is for DC connection, no termination resistor */ - rcar_pci_write_reg(pcie, 0x13802007, GEN2_PCIEPHYDATA); - rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); - rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); - - return 0; -} - -static int rcar_pcie_phy_init_gen3(struct rcar_pcie *pcie) -{ - int err; - - err = phy_init(pcie->phy); - if (err) - return err; - - err = phy_power_on(pcie->phy); - if (err) - phy_exit(pcie->phy); - - return err; -} - -static int rcar_msi_alloc(struct rcar_msi *chip) -{ - int msi; - - mutex_lock(&chip->lock); - - msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR); - if (msi < INT_PCI_MSI_NR) - set_bit(msi, chip->used); - else - msi = -ENOSPC; - - mutex_unlock(&chip->lock); - - return msi; -} - -static int rcar_msi_alloc_region(struct rcar_msi *chip, int no_irqs) -{ - int msi; - - mutex_lock(&chip->lock); - msi = bitmap_find_free_region(chip->used, INT_PCI_MSI_NR, - order_base_2(no_irqs)); - mutex_unlock(&chip->lock); - - return msi; -} - -static void rcar_msi_free(struct rcar_msi *chip, unsigned long irq) -{ - mutex_lock(&chip->lock); - clear_bit(irq, chip->used); - mutex_unlock(&chip->lock); -} - -static irqreturn_t rcar_pcie_msi_irq(int irq, void *data) -{ - struct rcar_pcie *pcie = data; - struct rcar_msi *msi = &pcie->msi; - struct device *dev = pcie->dev; - unsigned long reg; - - reg = rcar_pci_read_reg(pcie, PCIEMSIFR); - - /* MSI & INTx share an interrupt - we only handle MSI here */ - if (!reg) - return IRQ_NONE; - - while (reg) { - unsigned int index = find_first_bit(®, 32); - unsigned int msi_irq; - - /* clear the interrupt */ - rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR); - - msi_irq = irq_find_mapping(msi->domain, index); - if (msi_irq) { - if (test_bit(index, msi->used)) - generic_handle_irq(msi_irq); - else - dev_info(dev, "unhandled MSI\n"); - } else { - /* Unknown MSI, just clear it */ - dev_dbg(dev, "unexpected MSI\n"); - } - - /* see if there's any more pending in this vector */ - reg = rcar_pci_read_reg(pcie, PCIEMSIFR); - } - - return IRQ_HANDLED; -} - -static int rcar_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev, - struct msi_desc *desc) -{ - struct rcar_msi *msi = to_rcar_msi(chip); - struct rcar_pcie *pcie = container_of(chip, struct rcar_pcie, msi.chip); - struct msi_msg msg; - unsigned int irq; - int hwirq; - - hwirq = rcar_msi_alloc(msi); - if (hwirq < 0) - return hwirq; - - irq = irq_find_mapping(msi->domain, hwirq); - if (!irq) { - rcar_msi_free(msi, hwirq); - return -EINVAL; - } - - irq_set_msi_desc(irq, desc); - - msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE; - msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR); - msg.data = hwirq; - - pci_write_msi_msg(irq, &msg); - - return 0; -} - -static int rcar_msi_setup_irqs(struct msi_controller *chip, - struct pci_dev *pdev, int nvec, int type) -{ - struct rcar_pcie *pcie = container_of(chip, struct rcar_pcie, msi.chip); - struct rcar_msi *msi = to_rcar_msi(chip); - struct msi_desc *desc; - struct msi_msg msg; - unsigned int irq; - int hwirq; - int i; - - /* MSI-X interrupts are not supported */ - if (type == PCI_CAP_ID_MSIX) - return -EINVAL; - - WARN_ON(!list_is_singular(&pdev->dev.msi_list)); - desc = 
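The MSI controller code above hands out hardware vector numbers from a 32-bit-wide bitmap, one bit per vector; the address programmed into PCIEMSIALR/PCIEMSIAUR plus the vector number then form the MSI message. A self-contained sketch of the allocation part (DEMO_MSI_NR mirrors INT_PCI_MSI_NR; everything else is illustrative):

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/mutex.h>

#define DEMO_MSI_NR     32

struct demo_msi {
        DECLARE_BITMAP(used, DEMO_MSI_NR);
        struct mutex lock;
};

/* Hand out the first free hardware vector, or -ENOSPC when all are in use */
static int demo_msi_alloc(struct demo_msi *msi)
{
        int hwirq;

        mutex_lock(&msi->lock);
        hwirq = find_first_zero_bit(msi->used, DEMO_MSI_NR);
        if (hwirq < DEMO_MSI_NR)
                set_bit(hwirq, msi->used);
        else
                hwirq = -ENOSPC;
        mutex_unlock(&msi->lock);

        return hwirq;
}

/* Return a vector to the pool */
static void demo_msi_free(struct demo_msi *msi, unsigned long hwirq)
{
        mutex_lock(&msi->lock);
        clear_bit(hwirq, msi->used);
        mutex_unlock(&msi->lock);
}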
list_entry(pdev->dev.msi_list.next, struct msi_desc, list); - - hwirq = rcar_msi_alloc_region(msi, nvec); - if (hwirq < 0) - return -ENOSPC; - - irq = irq_find_mapping(msi->domain, hwirq); - if (!irq) - return -ENOSPC; - - for (i = 0; i < nvec; i++) { - /* - * irq_create_mapping() called from rcar_pcie_probe() pre- - * allocates descs, so there is no need to allocate descs here. - * We can therefore assume that if irq_find_mapping() above - * returns non-zero, then the descs are also successfully - * allocated. - */ - if (irq_set_msi_desc_off(irq, i, desc)) { - /* TODO: clear */ - return -EINVAL; - } - } - - desc->nvec_used = nvec; - desc->msi_attrib.multiple = order_base_2(nvec); - - msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE; - msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR); - msg.data = hwirq; - - pci_write_msi_msg(irq, &msg); - - return 0; -} - -static void rcar_msi_teardown_irq(struct msi_controller *chip, unsigned int irq) -{ - struct rcar_msi *msi = to_rcar_msi(chip); - struct irq_data *d = irq_get_irq_data(irq); - - rcar_msi_free(msi, d->hwirq); -} - -static struct irq_chip rcar_msi_irq_chip = { - .name = "R-Car PCIe MSI", - .irq_enable = pci_msi_unmask_irq, - .irq_disable = pci_msi_mask_irq, - .irq_mask = pci_msi_mask_irq, - .irq_unmask = pci_msi_unmask_irq, -}; - -static int rcar_msi_map(struct irq_domain *domain, unsigned int irq, - irq_hw_number_t hwirq) -{ - irq_set_chip_and_handler(irq, &rcar_msi_irq_chip, handle_simple_irq); - irq_set_chip_data(irq, domain->host_data); - - return 0; -} - -static const struct irq_domain_ops msi_domain_ops = { - .map = rcar_msi_map, -}; - -static void rcar_pcie_unmap_msi(struct rcar_pcie *pcie) -{ - struct rcar_msi *msi = &pcie->msi; - int i, irq; - - for (i = 0; i < INT_PCI_MSI_NR; i++) { - irq = irq_find_mapping(msi->domain, i); - if (irq > 0) - irq_dispose_mapping(irq); - } - - irq_domain_remove(msi->domain); -} - -static int rcar_pcie_enable_msi(struct rcar_pcie *pcie) -{ - struct device *dev = pcie->dev; - struct rcar_msi *msi = &pcie->msi; - phys_addr_t base; - int err, i; - - mutex_init(&msi->lock); - - msi->chip.dev = dev; - msi->chip.setup_irq = rcar_msi_setup_irq; - msi->chip.setup_irqs = rcar_msi_setup_irqs; - msi->chip.teardown_irq = rcar_msi_teardown_irq; - - msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR, - &msi_domain_ops, &msi->chip); - if (!msi->domain) { - dev_err(dev, "failed to create IRQ domain\n"); - return -ENOMEM; - } - - for (i = 0; i < INT_PCI_MSI_NR; i++) - irq_create_mapping(msi->domain, i); - - /* Two irqs are for MSI, but they are also used for non-MSI irqs */ - err = devm_request_irq(dev, msi->irq1, rcar_pcie_msi_irq, - IRQF_SHARED | IRQF_NO_THREAD, - rcar_msi_irq_chip.name, pcie); - if (err < 0) { - dev_err(dev, "failed to request IRQ: %d\n", err); - goto err; - } - - err = devm_request_irq(dev, msi->irq2, rcar_pcie_msi_irq, - IRQF_SHARED | IRQF_NO_THREAD, - rcar_msi_irq_chip.name, pcie); - if (err < 0) { - dev_err(dev, "failed to request IRQ: %d\n", err); - goto err; - } - - /* setup MSI data target */ - msi->pages = __get_free_pages(GFP_KERNEL, 0); - if (!msi->pages) { - err = -ENOMEM; - goto err; - } - base = virt_to_phys((void *)msi->pages); - - rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR); - rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR); - - /* enable all MSI interrupts */ - rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER); - - return 0; - -err: - rcar_pcie_unmap_msi(pcie); - return err; -} - -static void 
rcar_pcie_teardown_msi(struct rcar_pcie *pcie) -{ - struct rcar_msi *msi = &pcie->msi; - - /* Disable all MSI interrupts */ - rcar_pci_write_reg(pcie, 0, PCIEMSIIER); - - /* Disable address decoding of the MSI interrupt, MSIFE */ - rcar_pci_write_reg(pcie, 0, PCIEMSIALR); - - free_pages(msi->pages, 0); - - rcar_pcie_unmap_msi(pcie); -} - -static int rcar_pcie_get_resources(struct rcar_pcie *pcie) -{ - struct device *dev = pcie->dev; - struct resource res; - int err, i; - - pcie->phy = devm_phy_optional_get(dev, "pcie"); - if (IS_ERR(pcie->phy)) - return PTR_ERR(pcie->phy); - - err = of_address_to_resource(dev->of_node, 0, &res); - if (err) - return err; - - pcie->base = devm_ioremap_resource(dev, &res); - if (IS_ERR(pcie->base)) - return PTR_ERR(pcie->base); - - pcie->bus_clk = devm_clk_get(dev, "pcie_bus"); - if (IS_ERR(pcie->bus_clk)) { - dev_err(dev, "cannot get pcie bus clock\n"); - return PTR_ERR(pcie->bus_clk); - } - - i = irq_of_parse_and_map(dev->of_node, 0); - if (!i) { - dev_err(dev, "cannot get platform resources for msi interrupt\n"); - err = -ENOENT; - goto err_irq1; - } - pcie->msi.irq1 = i; - - i = irq_of_parse_and_map(dev->of_node, 1); - if (!i) { - dev_err(dev, "cannot get platform resources for msi interrupt\n"); - err = -ENOENT; - goto err_irq2; - } - pcie->msi.irq2 = i; - - return 0; - -err_irq2: - irq_dispose_mapping(pcie->msi.irq1); -err_irq1: - return err; -} - -static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie, - struct resource_entry *entry, - int *index) -{ - u64 restype = entry->res->flags; - u64 cpu_addr = entry->res->start; - u64 cpu_end = entry->res->end; - u64 pci_addr = entry->res->start - entry->offset; - u32 flags = LAM_64BIT | LAR_ENABLE; - u64 mask; - u64 size = resource_size(entry->res); - int idx = *index; - - if (restype & IORESOURCE_PREFETCH) - flags |= LAM_PREFETCH; - - while (cpu_addr < cpu_end) { - if (idx >= MAX_NR_INBOUND_MAPS - 1) { - dev_err(pcie->dev, "Failed to map inbound regions!\n"); - return -EINVAL; - } - /* - * If the size of the range is larger than the alignment of - * the start address, we have to use multiple entries to - * perform the mapping. - */ - if (cpu_addr > 0) { - unsigned long nr_zeros = __ffs64(cpu_addr); - u64 alignment = 1ULL << nr_zeros; - - size = min(size, alignment); - } - /* Hardware supports max 4GiB inbound region */ - size = min(size, 1ULL << 32); - - mask = roundup_pow_of_two(size) - 1; - mask &= ~0xf; - - /* - * Set up 64-bit inbound regions as the range parser doesn't - * distinguish between 32 and 64-bit types. 
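The inbound-ranges loop above has to split a dma-range into chunks the hardware can express: each PCIELAR/PCIELAMR pair covers a power-of-two window of at most 4 GiB, and a non-zero CPU address can only be masked up to its own alignment. A sketch of that per-chunk computation (the function name is illustrative; the low nibble of PCIELAMR is kept clear because it carries the enable/attribute flags):

#include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/log2.h>
#include <linux/types.h>

/* Trim 'size' to what one inbound window can map and derive the LAM mask */
static u64 demo_inbound_chunk(u64 cpu_addr, u64 size, u64 *lam_mask)
{
        if (cpu_addr > 0) {
                u64 alignment = 1ULL << __ffs64(cpu_addr);

                size = min(size, alignment);
        }
        /* Hardware supports at most a 4 GiB inbound region */
        size = min(size, 1ULL << 32);

        *lam_mask = (roundup_pow_of_two(size) - 1) & ~0xfULL;

        return size;
}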
- */ + if (host) rcar_pci_write_reg(pcie, lower_32_bits(pci_addr), PCIEPRAR(idx)); - rcar_pci_write_reg(pcie, lower_32_bits(cpu_addr), PCIELAR(idx)); - rcar_pci_write_reg(pcie, lower_32_bits(mask) | flags, - PCIELAMR(idx)); + rcar_pci_write_reg(pcie, lower_32_bits(cpu_addr), PCIELAR(idx)); + rcar_pci_write_reg(pcie, flags, PCIELAMR(idx)); + if (host) rcar_pci_write_reg(pcie, upper_32_bits(pci_addr), PCIEPRAR(idx + 1)); - rcar_pci_write_reg(pcie, upper_32_bits(cpu_addr), - PCIELAR(idx + 1)); - rcar_pci_write_reg(pcie, 0, PCIELAMR(idx + 1)); - - pci_addr += size; - cpu_addr += size; - idx += 2; - } - *index = idx; - - return 0; -} - -static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie) -{ - struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); - struct resource_entry *entry; - int index = 0, err = 0; - - resource_list_for_each_entry(entry, &bridge->dma_ranges) { - err = rcar_pcie_inbound_ranges(pcie, entry, &index); - if (err) - break; - } - - return err; -} - -static const struct of_device_id rcar_pcie_of_match[] = { - { .compatible = "renesas,pcie-r8a7779", - .data = rcar_pcie_phy_init_h1 }, - { .compatible = "renesas,pcie-r8a7790", - .data = rcar_pcie_phy_init_gen2 }, - { .compatible = "renesas,pcie-r8a7791", - .data = rcar_pcie_phy_init_gen2 }, - { .compatible = "renesas,pcie-rcar-gen2", - .data = rcar_pcie_phy_init_gen2 }, - { .compatible = "renesas,pcie-r8a7795", - .data = rcar_pcie_phy_init_gen3 }, - { .compatible = "renesas,pcie-rcar-gen3", - .data = rcar_pcie_phy_init_gen3 }, - {}, -}; - -static int rcar_pcie_probe(struct platform_device *pdev) -{ - struct device *dev = &pdev->dev; - struct rcar_pcie *pcie; - u32 data; - int err; - int (*phy_init_fn)(struct rcar_pcie *); - struct pci_host_bridge *bridge; - - bridge = pci_alloc_host_bridge(sizeof(*pcie)); - if (!bridge) - return -ENOMEM; - - pcie = pci_host_bridge_priv(bridge); - - pcie->dev = dev; - platform_set_drvdata(pdev, pcie); - - err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, - &bridge->dma_ranges, NULL); - if (err) - goto err_free_bridge; - - pm_runtime_enable(pcie->dev); - err = pm_runtime_get_sync(pcie->dev); - if (err < 0) { - dev_err(pcie->dev, "pm_runtime_get_sync failed\n"); - goto err_pm_disable; - } - - err = rcar_pcie_get_resources(pcie); - if (err < 0) { - dev_err(dev, "failed to request resources: %d\n", err); - goto err_pm_put; - } - - err = clk_prepare_enable(pcie->bus_clk); - if (err) { - dev_err(dev, "failed to enable bus clock: %d\n", err); - goto err_unmap_msi_irqs; - } - - err = rcar_pcie_parse_map_dma_ranges(pcie); - if (err) - goto err_clk_disable; - - phy_init_fn = of_device_get_match_data(dev); - err = phy_init_fn(pcie); - if (err) { - dev_err(dev, "failed to init PCIe PHY\n"); - goto err_clk_disable; - } - - /* Failure to get a link might just be that no cards are inserted */ - if (rcar_pcie_hw_init(pcie)) { - dev_info(dev, "PCIe link down\n"); - err = -ENODEV; - goto err_phy_shutdown; - } - - data = rcar_pci_read_reg(pcie, MACSR); - dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); - - if (IS_ENABLED(CONFIG_PCI_MSI)) { - err = rcar_pcie_enable_msi(pcie); - if (err < 0) { - dev_err(dev, - "failed to enable MSI support: %d\n", - err); - goto err_phy_shutdown; - } - } - - err = rcar_pcie_enable(pcie); - if (err) - goto err_msi_teardown; - - return 0; - -err_msi_teardown: - if (IS_ENABLED(CONFIG_PCI_MSI)) - rcar_pcie_teardown_msi(pcie); - -err_phy_shutdown: - if (pcie->phy) { - phy_power_off(pcie->phy); - phy_exit(pcie->phy); - } - -err_clk_disable: - 
clk_disable_unprepare(pcie->bus_clk); - -err_unmap_msi_irqs: - irq_dispose_mapping(pcie->msi.irq2); - irq_dispose_mapping(pcie->msi.irq1); - -err_pm_put: - pm_runtime_put(dev); - -err_pm_disable: - pm_runtime_disable(dev); - pci_free_resource_list(&pcie->resources); - -err_free_bridge: - pci_free_host_bridge(bridge); - - return err; + rcar_pci_write_reg(pcie, upper_32_bits(cpu_addr), PCIELAR(idx + 1)); + rcar_pci_write_reg(pcie, 0, PCIELAMR(idx + 1)); } - -static int rcar_pcie_resume_noirq(struct device *dev) -{ - struct rcar_pcie *pcie = dev_get_drvdata(dev); - - if (rcar_pci_read_reg(pcie, PMSR) && - !(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN)) - return 0; - - /* Re-establish the PCIe link */ - rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); - rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); - return rcar_pcie_wait_for_dl(pcie); -} - -static const struct dev_pm_ops rcar_pcie_pm_ops = { - .resume_noirq = rcar_pcie_resume_noirq, -}; - -static struct platform_driver rcar_pcie_driver = { - .driver = { - .name = "rcar-pcie", - .of_match_table = rcar_pcie_of_match, - .pm = &rcar_pcie_pm_ops, - .suppress_bind_attrs = true, - }, - .probe = rcar_pcie_probe, -}; -builtin_platform_driver(rcar_pcie_driver); diff --git a/drivers/pci/controller/pcie-rcar.h b/drivers/pci/controller/pcie-rcar.h new file mode 100644 index 000000000000..d4c698b5f821 --- /dev/null +++ b/drivers/pci/controller/pcie-rcar.h @@ -0,0 +1,140 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * PCIe driver for Renesas R-Car SoCs + * Copyright (C) 2014-2020 Renesas Electronics Europe Ltd + * + * Author: Phil Edworthy <phil.edworthy@renesas.com> + */ + +#ifndef _PCIE_RCAR_H +#define _PCIE_RCAR_H + +#define PCIECAR 0x000010 +#define PCIECCTLR 0x000018 +#define CONFIG_SEND_ENABLE BIT(31) +#define TYPE0 (0 << 8) +#define TYPE1 BIT(8) +#define PCIECDR 0x000020 +#define PCIEMSR 0x000028 +#define PCIEINTXR 0x000400 +#define ASTINTX BIT(16) +#define PCIEPHYSR 0x0007f0 +#define PHYRDY BIT(0) +#define PCIEMSITXR 0x000840 + +/* Transfer control */ +#define PCIETCTLR 0x02000 +#define DL_DOWN BIT(3) +#define CFINIT BIT(0) +#define PCIETSTR 0x02004 +#define DATA_LINK_ACTIVE BIT(0) +#define PCIEERRFR 0x02020 +#define UNSUPPORTED_REQUEST BIT(4) +#define PCIEMSIFR 0x02044 +#define PCIEMSIALR 0x02048 +#define MSIFE BIT(0) +#define PCIEMSIAUR 0x0204c +#define PCIEMSIIER 0x02050 + +/* root port address */ +#define PCIEPRAR(x) (0x02080 + ((x) * 0x4)) + +/* local address reg & mask */ +#define PCIELAR(x) (0x02200 + ((x) * 0x20)) +#define PCIELAMR(x) (0x02208 + ((x) * 0x20)) +#define LAM_PREFETCH BIT(3) +#define LAM_64BIT BIT(2) +#define LAR_ENABLE BIT(1) + +/* PCIe address reg & mask */ +#define PCIEPALR(x) (0x03400 + ((x) * 0x20)) +#define PCIEPAUR(x) (0x03404 + ((x) * 0x20)) +#define PCIEPAMR(x) (0x03408 + ((x) * 0x20)) +#define PCIEPTCTLR(x) (0x0340c + ((x) * 0x20)) +#define PAR_ENABLE BIT(31) +#define IO_SPACE BIT(8) + +/* Configuration */ +#define PCICONF(x) (0x010000 + ((x) * 0x4)) +#define INTDIS BIT(10) +#define PMCAP(x) (0x010040 + ((x) * 0x4)) +#define MSICAP(x) (0x010050 + ((x) * 0x4)) +#define MSICAP0_MSIE BIT(16) +#define MSICAP0_MMESCAP_OFFSET 17 +#define MSICAP0_MMESE_OFFSET 20 +#define MSICAP0_MMESE_MASK GENMASK(22, 20) +#define EXPCAP(x) (0x010070 + ((x) * 0x4)) +#define VCCAP(x) (0x010100 + ((x) * 0x4)) + +/* link layer */ +#define IDSETR0 0x011000 +#define IDSETR1 0x011004 +#define SUBIDSETR 0x011024 +#define TLCTLR 0x011048 +#define MACSR 0x011054 +#define SPCHGFIN BIT(4) +#define SPCHGFAIL BIT(6) +#define SPCHGSUC BIT(7) 
+#define LINK_SPEED (0xf << 16) +#define LINK_SPEED_2_5GTS (1 << 16) +#define LINK_SPEED_5_0GTS (2 << 16) +#define MACCTLR 0x011058 +#define MACCTLR_NFTS_MASK GENMASK(23, 16) /* The name is from SH7786 */ +#define SPEED_CHANGE BIT(24) +#define SCRAMBLE_DISABLE BIT(27) +#define LTSMDIS BIT(31) +#define MACCTLR_INIT_VAL (LTSMDIS | MACCTLR_NFTS_MASK) +#define PMSR 0x01105c +#define MACS2R 0x011078 +#define MACCGSPSETR 0x011084 +#define SPCNGRSN BIT(31) + +/* R-Car H1 PHY */ +#define H1_PCIEPHYADRR 0x04000c +#define WRITE_CMD BIT(16) +#define PHY_ACK BIT(24) +#define RATE_POS 12 +#define LANE_POS 8 +#define ADR_POS 0 +#define H1_PCIEPHYDOUTR 0x040014 + +/* R-Car Gen2 PHY */ +#define GEN2_PCIEPHYADDR 0x780 +#define GEN2_PCIEPHYDATA 0x784 +#define GEN2_PCIEPHYCTRL 0x78c + +#define INT_PCI_MSI_NR 32 + +#define RCONF(x) (PCICONF(0) + (x)) +#define RPMCAP(x) (PMCAP(0) + (x)) +#define REXPCAP(x) (EXPCAP(0) + (x)) +#define RVCCAP(x) (VCCAP(0) + (x)) + +#define PCIE_CONF_BUS(b) (((b) & 0xff) << 24) +#define PCIE_CONF_DEV(d) (((d) & 0x1f) << 19) +#define PCIE_CONF_FUNC(f) (((f) & 0x7) << 16) + +#define RCAR_PCI_MAX_RESOURCES 4 +#define MAX_NR_INBOUND_MAPS 6 + +struct rcar_pcie { + struct device *dev; + void __iomem *base; +}; + +enum { + RCAR_PCI_ACCESS_READ, + RCAR_PCI_ACCESS_WRITE, +}; + +void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val, unsigned int reg); +u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg); +void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data); +int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie); +int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie); +void rcar_pcie_set_outbound(struct rcar_pcie *pcie, int win, + struct resource_entry *window); +void rcar_pcie_set_inbound(struct rcar_pcie *pcie, u64 cpu_addr, + u64 pci_addr, u64 flags, int idx, bool host); + +#endif diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c index d743b0a48988..5eaf36629a75 100644 --- a/drivers/pci/controller/pcie-rockchip-ep.c +++ b/drivers/pci/controller/pcie-rockchip-ep.c @@ -615,7 +615,7 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev) rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG); err = pci_epc_mem_init(epc, rockchip->mem_res->start, - resource_size(rockchip->mem_res)); + resource_size(rockchip->mem_res), PAGE_SIZE); if (err < 0) { dev_err(dev, "failed to initialize the memory space\n"); goto err_uninit_port; diff --git a/drivers/pci/controller/pcie-tango.c b/drivers/pci/controller/pcie-tango.c index 21a208da3f59..8f640c70f936 100644 --- a/drivers/pci/controller/pcie-tango.c +++ b/drivers/pci/controller/pcie-tango.c @@ -207,7 +207,7 @@ static int smp8759_config_write(struct pci_bus *bus, unsigned int devfn, return ret; } -static struct pci_ecam_ops smp8759_ecam_ops = { +static const struct pci_ecam_ops smp8759_ecam_ops = { .bus_shift = 20, .pci_ops = { .map_bus = pci_ecam_map_bus, @@ -273,9 +273,9 @@ static int tango_pcie_probe(struct platform_device *pdev) writel_relaxed(0, pcie->base + SMP8759_ENABLE + offset); virq = platform_get_irq(pdev, 1); - if (virq <= 0) { + if (virq < 0) { dev_err(dev, "Failed to map IRQ\n"); - return -ENXIO; + return virq; } irq_dom = irq_domain_create_linear(fwnode, MSI_MAX, &dom_ops, pcie); @@ -295,11 +295,14 @@ static int tango_pcie_probe(struct platform_device *pdev) spin_lock_init(&pcie->used_msi_lock); irq_set_chained_handler_and_data(virq, tango_msi_isr, pcie); - return pci_host_common_probe(pdev, &smp8759_ecam_ops); + return 
pci_host_common_probe(pdev); } static const struct of_device_id tango_pcie_ids[] = { - { .compatible = "sigma,smp8759-pcie" }, + { + .compatible = "sigma,smp8759-pcie", + .data = &smp8759_ecam_ops, + }, { }, }; diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index dac91d60701d..e386d4eac407 100644 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -445,9 +445,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) if (!membar2) return -ENOMEM; offset[0] = vmd->dev->resource[VMD_MEMBAR1].start - - readq(membar2 + MB2_SHADOW_OFFSET); + (readq(membar2 + MB2_SHADOW_OFFSET) & + PCI_BASE_ADDRESS_MEM_MASK); offset[1] = vmd->dev->resource[VMD_MEMBAR2].start - - readq(membar2 + MB2_SHADOW_OFFSET + 8); + (readq(membar2 + MB2_SHADOW_OFFSET + 8) & + PCI_BASE_ADDRESS_MEM_MASK); pci_iounmap(vmd->dev, membar2); } } diff --git a/drivers/pci/ecam.c b/drivers/pci/ecam.c index 1a81af0ba961..8f065a42fc1a 100644 --- a/drivers/pci/ecam.c +++ b/drivers/pci/ecam.c @@ -26,7 +26,7 @@ static const bool per_bus_mapping = !IS_ENABLED(CONFIG_64BIT); */ struct pci_config_window *pci_ecam_create(struct device *dev, struct resource *cfgres, struct resource *busr, - struct pci_ecam_ops *ops) + const struct pci_ecam_ops *ops) { struct pci_config_window *cfg; unsigned int bus_range, bus_range_max, bsz; @@ -101,6 +101,7 @@ err_exit: pci_ecam_free(cfg); return ERR_PTR(err); } +EXPORT_SYMBOL_GPL(pci_ecam_create); void pci_ecam_free(struct pci_config_window *cfg) { @@ -121,6 +122,7 @@ void pci_ecam_free(struct pci_config_window *cfg) release_resource(&cfg->res); kfree(cfg); } +EXPORT_SYMBOL_GPL(pci_ecam_free); /* * Function to implement the pci_ops ->map_bus method @@ -143,9 +145,10 @@ void __iomem *pci_ecam_map_bus(struct pci_bus *bus, unsigned int devfn, base = cfg->win + (busn << cfg->ops->bus_shift); return base + (devfn << devfn_shift) + where; } +EXPORT_SYMBOL_GPL(pci_ecam_map_bus); /* ECAM ops */ -struct pci_ecam_ops pci_generic_ecam_ops = { +const struct pci_ecam_ops pci_generic_ecam_ops = { .bus_shift = 20, .pci_ops = { .map_bus = pci_ecam_map_bus, @@ -153,10 +156,11 @@ struct pci_ecam_ops pci_generic_ecam_ops = { .write = pci_generic_config_write, } }; +EXPORT_SYMBOL_GPL(pci_generic_ecam_ops); #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) /* ECAM ops for 32-bit access only (non-compliant) */ -struct pci_ecam_ops pci_32b_ops = { +const struct pci_ecam_ops pci_32b_ops = { .bus_shift = 20, .pci_ops = { .map_bus = pci_ecam_map_bus, diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c index 60330f3e3751..c89a9561439f 100644 --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c @@ -187,6 +187,9 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test) */ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test) { + if (!epf_test->dma_supported) + return; + dma_release_channel(epf_test->dma_chan); epf_test->dma_chan = NULL; } diff --git a/drivers/pci/endpoint/pci-epc-mem.c b/drivers/pci/endpoint/pci-epc-mem.c index abfac1109a13..80c46f3a4590 100644 --- a/drivers/pci/endpoint/pci-epc-mem.c +++ b/drivers/pci/endpoint/pci-epc-mem.c @@ -23,7 +23,7 @@ static int pci_epc_mem_get_order(struct pci_epc_mem *mem, size_t size) { int order; - unsigned int page_shift = ilog2(mem->page_size); + unsigned int page_shift = ilog2(mem->window.page_size); size--; size >>= page_shift; @@ -36,62 +36,97 @@ static int 
pci_epc_mem_get_order(struct pci_epc_mem *mem, size_t size) } /** - * __pci_epc_mem_init() - initialize the pci_epc_mem structure + * pci_epc_multi_mem_init() - initialize the pci_epc_mem structure * @epc: the EPC device that invoked pci_epc_mem_init - * @phys_base: the physical address of the base - * @size: the size of the address space - * @page_size: size of each page + * @windows: pointer to windows supported by the device + * @num_windows: number of windows device supports * * Invoke to initialize the pci_epc_mem structure used by the * endpoint functions to allocate mapped PCI address. */ -int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size, - size_t page_size) +int pci_epc_multi_mem_init(struct pci_epc *epc, + struct pci_epc_mem_window *windows, + unsigned int num_windows) { - int ret; - struct pci_epc_mem *mem; - unsigned long *bitmap; + struct pci_epc_mem *mem = NULL; + unsigned long *bitmap = NULL; unsigned int page_shift; - int pages; + size_t page_size; int bitmap_size; + int pages; + int ret; + int i; - if (page_size < PAGE_SIZE) - page_size = PAGE_SIZE; + epc->num_windows = 0; - page_shift = ilog2(page_size); - pages = size >> page_shift; - bitmap_size = BITS_TO_LONGS(pages) * sizeof(long); + if (!windows || !num_windows) + return -EINVAL; - mem = kzalloc(sizeof(*mem), GFP_KERNEL); - if (!mem) { - ret = -ENOMEM; - goto err; - } + epc->windows = kcalloc(num_windows, sizeof(*epc->windows), GFP_KERNEL); + if (!epc->windows) + return -ENOMEM; - bitmap = kzalloc(bitmap_size, GFP_KERNEL); - if (!bitmap) { - ret = -ENOMEM; - goto err_mem; - } + for (i = 0; i < num_windows; i++) { + page_size = windows[i].page_size; + if (page_size < PAGE_SIZE) + page_size = PAGE_SIZE; + page_shift = ilog2(page_size); + pages = windows[i].size >> page_shift; + bitmap_size = BITS_TO_LONGS(pages) * sizeof(long); - mem->bitmap = bitmap; - mem->phys_base = phys_base; - mem->page_size = page_size; - mem->pages = pages; - mem->size = size; - mutex_init(&mem->lock); + mem = kzalloc(sizeof(*mem), GFP_KERNEL); + if (!mem) { + ret = -ENOMEM; + i--; + goto err_mem; + } + + bitmap = kzalloc(bitmap_size, GFP_KERNEL); + if (!bitmap) { + ret = -ENOMEM; + kfree(mem); + i--; + goto err_mem; + } + + mem->window.phys_base = windows[i].phys_base; + mem->window.size = windows[i].size; + mem->window.page_size = page_size; + mem->bitmap = bitmap; + mem->pages = pages; + mutex_init(&mem->lock); + epc->windows[i] = mem; + } - epc->mem = mem; + epc->mem = epc->windows[0]; + epc->num_windows = num_windows; return 0; err_mem: - kfree(mem); + for (; i >= 0; i--) { + mem = epc->windows[i]; + kfree(mem->bitmap); + kfree(mem); + } + kfree(epc->windows); + + return ret; +} +EXPORT_SYMBOL_GPL(pci_epc_multi_mem_init); + +int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t base, + size_t size, size_t page_size) +{ + struct pci_epc_mem_window mem_window; + + mem_window.phys_base = base; + mem_window.size = size; + mem_window.page_size = page_size; -err: -return ret; + return pci_epc_multi_mem_init(epc, &mem_window, 1); } -EXPORT_SYMBOL_GPL(__pci_epc_mem_init); +EXPORT_SYMBOL_GPL(pci_epc_mem_init); /** * pci_epc_mem_exit() - cleanup the pci_epc_mem structure @@ -102,11 +137,22 @@ EXPORT_SYMBOL_GPL(__pci_epc_mem_init); */ void pci_epc_mem_exit(struct pci_epc *epc) { - struct pci_epc_mem *mem = epc->mem; + struct pci_epc_mem *mem; + int i; + + if (!epc->num_windows) + return; + for (i = 0; i < epc->num_windows; i++) { + mem = epc->windows[i]; + kfree(mem->bitmap); + kfree(mem); + } + kfree(epc->windows); + + 
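With the reworked interface a controller driver describes every outbound window it has up front and the allocator later picks a suitable one per request. A hedged usage sketch, assuming the struct pci_epc_mem_window and pci_epc_multi_mem_init() declarations this series adds to <linux/pci-epc.h>; the two windows, their addresses and page sizes below are made up for illustration:

#include <linux/kernel.h>
#include <linux/pci-epc.h>
#include <linux/sizes.h>

static int demo_epc_setup_windows(struct pci_epc *epc)
{
        /* Two hypothetical outbound windows with different granularities */
        struct pci_epc_mem_window windows[] = {
                { .phys_base = 0x20000000, .size = SZ_128M, .page_size = SZ_64K },
                { .phys_base = 0x30000000, .size = SZ_8M,   .page_size = SZ_4K  },
        };

        return pci_epc_multi_mem_init(epc, windows, ARRAY_SIZE(windows));
}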
epc->windows = NULL; epc->mem = NULL; - kfree(mem->bitmap); - kfree(mem); + epc->num_windows = 0; } EXPORT_SYMBOL_GPL(pci_epc_mem_exit); @@ -122,31 +168,60 @@ EXPORT_SYMBOL_GPL(pci_epc_mem_exit); void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc, phys_addr_t *phys_addr, size_t size) { - int pageno; void __iomem *virt_addr = NULL; - struct pci_epc_mem *mem = epc->mem; - unsigned int page_shift = ilog2(mem->page_size); + struct pci_epc_mem *mem; + unsigned int page_shift; + size_t align_size; + int pageno; int order; + int i; - size = ALIGN(size, mem->page_size); - order = pci_epc_mem_get_order(mem, size); - - mutex_lock(&mem->lock); - pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order); - if (pageno < 0) - goto ret; + for (i = 0; i < epc->num_windows; i++) { + mem = epc->windows[i]; + mutex_lock(&mem->lock); + align_size = ALIGN(size, mem->window.page_size); + order = pci_epc_mem_get_order(mem, align_size); - *phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift); - virt_addr = ioremap(*phys_addr, size); - if (!virt_addr) - bitmap_release_region(mem->bitmap, pageno, order); + pageno = bitmap_find_free_region(mem->bitmap, mem->pages, + order); + if (pageno >= 0) { + page_shift = ilog2(mem->window.page_size); + *phys_addr = mem->window.phys_base + + ((phys_addr_t)pageno << page_shift); + virt_addr = ioremap(*phys_addr, align_size); + if (!virt_addr) { + bitmap_release_region(mem->bitmap, + pageno, order); + mutex_unlock(&mem->lock); + continue; + } + mutex_unlock(&mem->lock); + return virt_addr; + } + mutex_unlock(&mem->lock); + } -ret: - mutex_unlock(&mem->lock); return virt_addr; } EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr); +static struct pci_epc_mem *pci_epc_get_matching_window(struct pci_epc *epc, + phys_addr_t phys_addr) +{ + struct pci_epc_mem *mem; + int i; + + for (i = 0; i < epc->num_windows; i++) { + mem = epc->windows[i]; + + if (phys_addr >= mem->window.phys_base && + phys_addr < (mem->window.phys_base + mem->window.size)) + return mem; + } + + return NULL; +} + /** * pci_epc_mem_free_addr() - free the allocated memory address * @epc: the EPC device on which memory was allocated @@ -159,14 +234,23 @@ EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr); void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr, void __iomem *virt_addr, size_t size) { + struct pci_epc_mem *mem; + unsigned int page_shift; + size_t page_size; int pageno; - struct pci_epc_mem *mem = epc->mem; - unsigned int page_shift = ilog2(mem->page_size); int order; + mem = pci_epc_get_matching_window(epc, phys_addr); + if (!mem) { + pr_err("failed to get matching window\n"); + return; + } + + page_size = mem->window.page_size; + page_shift = ilog2(page_size); iounmap(virt_addr); - pageno = (phys_addr - mem->phys_base) >> page_shift; - size = ALIGN(size, mem->page_size); + pageno = (phys_addr - mem->window.phys_base) >> page_shift; + size = ALIGN(size, page_size); order = pci_epc_mem_get_order(mem, size); mutex_lock(&mem->lock); bitmap_release_region(mem->bitmap, pageno, order); diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h index ae44f46d1bf3..4fd200d8b0a9 100644 --- a/drivers/pci/hotplug/pciehp.h +++ b/drivers/pci/hotplug/pciehp.h @@ -148,8 +148,6 @@ struct controller { #define MRL_SENS(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_MRLSP) #define ATTN_LED(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_AIP) #define PWR_LED(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_PIP) -#define HP_SUPR_RM(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_HPS) -#define EMI(ctrl) 
((ctrl)->slot_cap & PCI_EXP_SLTCAP_EIP) #define NO_CMD_CMPL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS) #define PSN(ctrl) (((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19) diff --git a/drivers/pci/hotplug/rpaphp_core.c b/drivers/pci/hotplug/rpaphp_core.c index 6504869efabc..9887c9de08c3 100644 --- a/drivers/pci/hotplug/rpaphp_core.c +++ b/drivers/pci/hotplug/rpaphp_core.c @@ -435,7 +435,7 @@ static int rpaphp_drc_add_slot(struct device_node *dn) */ int rpaphp_add_slot(struct device_node *dn) { - if (!dn->name || strcmp(dn->name, "pci")) + if (!of_node_name_eq(dn, "pci")) return 0; if (of_find_property(dn, "ibm,drc-info", NULL)) diff --git a/drivers/pci/hotplug/shpchp.h b/drivers/pci/hotplug/shpchp.h index f7f13ee5d06e..6e85885b554c 100644 --- a/drivers/pci/hotplug/shpchp.h +++ b/drivers/pci/hotplug/shpchp.h @@ -164,7 +164,7 @@ u8 shpchp_handle_switch_change(u8 hp_slot, struct controller *ctrl); u8 shpchp_handle_presence_change(u8 hp_slot, struct controller *ctrl); u8 shpchp_handle_power_fault(u8 hp_slot, struct controller *ctrl); int shpchp_configure_device(struct slot *p_slot); -int shpchp_unconfigure_device(struct slot *p_slot); +void shpchp_unconfigure_device(struct slot *p_slot); void cleanup_slots(struct controller *ctrl); void shpchp_queue_pushbutton_work(struct work_struct *work); int shpc_init(struct controller *ctrl, struct pci_dev *pdev); diff --git a/drivers/pci/hotplug/shpchp_ctrl.c b/drivers/pci/hotplug/shpchp_ctrl.c index 078003dcde5b..afdc52d1cae7 100644 --- a/drivers/pci/hotplug/shpchp_ctrl.c +++ b/drivers/pci/hotplug/shpchp_ctrl.c @@ -341,8 +341,7 @@ static int remove_board(struct slot *p_slot) u8 hp_slot; int rc; - if (shpchp_unconfigure_device(p_slot)) - return(1); + shpchp_unconfigure_device(p_slot); hp_slot = p_slot->device - ctrl->slot_device_offset; p_slot = shpchp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset); diff --git a/drivers/pci/hotplug/shpchp_pci.c b/drivers/pci/hotplug/shpchp_pci.c index 115701301487..36db0c3c4ea6 100644 --- a/drivers/pci/hotplug/shpchp_pci.c +++ b/drivers/pci/hotplug/shpchp_pci.c @@ -61,9 +61,8 @@ int shpchp_configure_device(struct slot *p_slot) return ret; } -int shpchp_unconfigure_device(struct slot *p_slot) +void shpchp_unconfigure_device(struct slot *p_slot) { - int rc = 0; struct pci_bus *parent = p_slot->ctrl->pci_dev->subordinate; struct pci_dev *dev, *temp; struct controller *ctrl = p_slot->ctrl; @@ -83,6 +82,4 @@ int shpchp_unconfigure_device(struct slot *p_slot) } pci_unlock_rescan_remove(); - return rc; } - diff --git a/drivers/pci/of.c b/drivers/pci/of.c index 81ceeaa6f1d5..27839cd2459f 100644 --- a/drivers/pci/of.c +++ b/drivers/pci/of.c @@ -592,7 +592,7 @@ int of_pci_get_max_link_speed(struct device_node *node) u32 max_link_speed; if (of_property_read_u32(node, "max-link-speed", &max_link_speed) || - max_link_speed > 4) + max_link_speed == 0 || max_link_speed > 4) return -EINVAL; return max_link_speed; diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index b73b10bce0df..e8e444eeb1cd 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -282,6 +282,8 @@ static const struct pci_p2pdma_whitelist_entry { } pci_p2pdma_whitelist[] = { /* AMD ZEN */ {PCI_VENDOR_ID_AMD, 0x1450, 0}, + {PCI_VENDOR_ID_AMD, 0x15d0, 0}, + {PCI_VENDOR_ID_AMD, 0x1630, 0}, /* Intel Xeon E5/Core i7 */ {PCI_VENDOR_ID_INTEL, 0x3c00, REQ_SAME_HOST_BRIDGE}, diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c index d21969fba6ab..7224b1e5f2a8 100644 --- a/drivers/pci/pci-acpi.c +++ b/drivers/pci/pci-acpi.c @@ -948,7 +948,7 @@ static 
bool acpi_pci_bridge_d3(struct pci_dev *dev) * Look for a special _DSD property for the root port and if it * is set we know the hierarchy behind it supports D3 just fine. */ - root = pci_find_pcie_root_port(dev); + root = pcie_find_root_port(dev); if (!root) return false; @@ -1128,7 +1128,7 @@ void acpi_pci_add_bus(struct pci_bus *bus) return; obj = acpi_evaluate_dsm(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 3, - RESET_DELAY_DSM, NULL); + DSM_PCI_POWER_ON_RESET_DELAY, NULL); if (!obj) return; @@ -1193,7 +1193,7 @@ static void pci_acpi_optimize_delay(struct pci_dev *pdev, pdev->d3cold_delay = 0; obj = acpi_evaluate_dsm(handle, &pci_acpi_dsm_guid, 3, - FUNCTION_DELAY_DSM, NULL); + DSM_PCI_DEVICE_READINESS_DURATIONS, NULL); if (!obj) return; diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c index 4f4f54bc732e..ccf26d12ec61 100644 --- a/drivers/pci/pci-bridge-emul.c +++ b/drivers/pci/pci-bridge-emul.c @@ -24,6 +24,17 @@ #define PCI_CAP_PCIE_START PCI_BRIDGE_CONF_END #define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2) +/** + * struct pci_bridge_reg_behavior - register bits behaviors + * @ro: Read-Only bits + * @rw: Read-Write bits + * @w1c: Write-1-to-Clear bits + * + * Reads and Writes will be filtered by specified behavior. All other bits not + * declared are assumed 'Reserved' and will return 0 on reads, per PCIe 5.0: + * "Reserved register fields must be read only and must return 0 (all 0's for + * multi-bit fields) when read". + */ struct pci_bridge_reg_behavior { /* Read-only bits */ u32 ro; @@ -33,9 +44,6 @@ struct pci_bridge_reg_behavior { /* Write-1-to-clear bits */ u32 w1c; - - /* Reserved bits (hardwired to 0) */ - u32 rsvd; }; static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { @@ -49,7 +57,6 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { PCI_COMMAND_FAST_BACK) | (PCI_STATUS_CAP_LIST | PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK | PCI_STATUS_DEVSEL_MASK) << 16), - .rsvd = GENMASK(15, 10) | ((BIT(6) | GENMASK(3, 0)) << 16), .w1c = PCI_STATUS_ERROR_BITS << 16, }, [PCI_CLASS_REVISION / 4] = { .ro = ~0 }, @@ -96,8 +103,6 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { GENMASK(11, 8) | GENMASK(3, 0)), .w1c = PCI_STATUS_ERROR_BITS << 16, - - .rsvd = ((BIT(6) | GENMASK(4, 0)) << 16), }, [PCI_MEMORY_BASE / 4] = { @@ -130,12 +135,10 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { [PCI_CAPABILITY_LIST / 4] = { .ro = GENMASK(7, 0), - .rsvd = GENMASK(31, 8), }, [PCI_ROM_ADDRESS1 / 4] = { .rw = GENMASK(31, 11) | BIT(0), - .rsvd = GENMASK(10, 1), }, /* @@ -158,8 +161,6 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { .ro = (GENMASK(15, 8) | ((PCI_BRIDGE_CTL_FAST_BACK) << 16)), .w1c = BIT(10) << 16, - - .rsvd = (GENMASK(15, 12) | BIT(4)) << 16, }, }; @@ -181,31 +182,29 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { .rw = GENMASK(15, 0), /* - * Device status register has 4 bits W1C, then 2 bits - * RO, the rest is reserved + * Device status register has bits 6 and [3:0] W1C, [5:4] RO, + * the rest is reserved */ - .w1c = GENMASK(19, 16), - .ro = GENMASK(20, 19), - .rsvd = GENMASK(31, 21), + .w1c = (BIT(6) | GENMASK(3, 0)) << 16, + .ro = GENMASK(5, 4) << 16, }, [PCI_EXP_LNKCAP / 4] = { /* All bits are RO, except bit 23 which is reserved */ .ro = lower_32_bits(~BIT(23)), - .rsvd = BIT(23), }, [PCI_EXP_LNKCTL / 4] = { /* - * Link control has bits [1:0] and [11:3] RW, the - * other bits are reserved. 
- * Link status has bits [13:0] RO, and bits [14:15] + * Link control has bits [15:14], [11:3] and [1:0] RW, the + * rest is reserved. + * + * Link status has bits [13:0] RO, and bits [15:14] * W1C. */ - .rw = GENMASK(11, 3) | GENMASK(1, 0), + .rw = GENMASK(15, 14) | GENMASK(11, 3) | GENMASK(1, 0), .ro = GENMASK(13, 0) << 16, .w1c = GENMASK(15, 14) << 16, - .rsvd = GENMASK(15, 12) | BIT(2), }, [PCI_EXP_SLTCAP / 4] = { @@ -214,19 +213,18 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { [PCI_EXP_SLTCTL / 4] = { /* - * Slot control has bits [12:0] RW, the rest is + * Slot control has bits [14:0] RW, the rest is * reserved. * - * Slot status has a mix of W1C and RO bits, as well - * as reserved bits. + * Slot status has bits 8 and [4:0] W1C, bits [7:5] RO, the + * rest is reserved. */ - .rw = GENMASK(12, 0), + .rw = GENMASK(14, 0), .w1c = (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16, .ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS | PCI_EXP_SLTSTA_EIS) << 16, - .rsvd = GENMASK(15, 12) | (GENMASK(15, 9) << 16), }, [PCI_EXP_RTCTL / 4] = { @@ -234,19 +232,21 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { * Root control has bits [4:0] RW, the rest is * reserved. * - * Root status has bit 0 RO, the rest is reserved. + * Root capabilities has bit 0 RO, the rest is reserved. */ .rw = (PCI_EXP_RTCTL_SECEE | PCI_EXP_RTCTL_SENFEE | PCI_EXP_RTCTL_SEFEE | PCI_EXP_RTCTL_PMEIE | PCI_EXP_RTCTL_CRSSVE), .ro = PCI_EXP_RTCAP_CRSVIS << 16, - .rsvd = GENMASK(15, 5) | (GENMASK(15, 1) << 16), }, [PCI_EXP_RTSTA / 4] = { + /* + * Root status has bits 17 and [15:0] RO, bit 16 W1C, the rest + * is reserved. + */ .ro = GENMASK(15, 0) | PCI_EXP_RTSTA_PENDING, .w1c = PCI_EXP_RTSTA_PME, - .rsvd = GENMASK(31, 18), }, }; @@ -354,7 +354,8 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, * Make sure we never return any reserved bit with a value * different from 0. */ - *value &= ~behavior[reg / 4].rsvd; + *value &= behavior[reg / 4].ro | behavior[reg / 4].rw | + behavior[reg / 4].w1c; if (size == 1) *value = (*value >> (8 * (where & 3))) & 0xff; diff --git a/drivers/pci/pci-label.c b/drivers/pci/pci-label.c index a5910f942857..707dd9808676 100644 --- a/drivers/pci/pci-label.c +++ b/drivers/pci/pci-label.c @@ -178,7 +178,7 @@ static int dsm_get_label(struct device *dev, char *buf, return -1; obj = acpi_evaluate_dsm(handle, &pci_acpi_dsm_guid, 0x2, - DEVICE_LABEL_DSM, NULL); + DSM_PCI_DEVICE_NAME, NULL); if (!obj) return -1; @@ -218,7 +218,7 @@ static bool device_has_dsm(struct device *dev) return false; return !!acpi_check_dsm(handle, &pci_acpi_dsm_guid, 0x2, - 1 << DEVICE_LABEL_DSM); + 1 << DSM_PCI_DEVICE_NAME); } static umode_t acpi_index_string_exist(struct kobject *kobj, diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 595fcf59843f..ce096272f52b 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -752,30 +752,6 @@ struct resource *pci_find_resource(struct pci_dev *dev, struct resource *res) EXPORT_SYMBOL(pci_find_resource); /** - * pci_find_pcie_root_port - return PCIe Root Port - * @dev: PCI device to query - * - * Traverse up the parent chain and return the PCIe Root Port PCI Device - * for a given PCI Device. 
- */ -struct pci_dev *pci_find_pcie_root_port(struct pci_dev *dev) -{ - struct pci_dev *bridge, *highest_pcie_bridge = dev; - - bridge = pci_upstream_bridge(dev); - while (bridge && pci_is_pcie(bridge)) { - highest_pcie_bridge = bridge; - bridge = pci_upstream_bridge(bridge); - } - - if (pci_pcie_type(highest_pcie_bridge) != PCI_EXP_TYPE_ROOT_PORT) - return NULL; - - return highest_pcie_bridge; -} -EXPORT_SYMBOL(pci_find_pcie_root_port); - -/** * pci_wait_for_pending - wait for @mask bit(s) to clear in status word @pos * @dev: the PCI device to operate on * @pos: config space offset of status word @@ -868,7 +844,9 @@ static inline bool platform_pci_need_resume(struct pci_dev *dev) static inline bool platform_pci_bridge_d3(struct pci_dev *dev) { - return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false; + if (pci_platform_pm && pci_platform_pm->bridge_d3) + return pci_platform_pm->bridge_d3(dev); + return false; } /** @@ -1578,7 +1556,7 @@ EXPORT_SYMBOL(pci_restore_state); struct pci_saved_state { u32 config_space[16]; - struct pci_cap_saved_data cap[0]; + struct pci_cap_saved_data cap[]; }; /** @@ -4660,7 +4638,8 @@ static int pci_pm_reset(struct pci_dev *dev, int probe) * pcie_wait_for_link_delay - Wait until link is active or inactive * @pdev: Bridge device * @active: waiting for active or inactive? - * @delay: Delay to wait after link has become active (in ms) + * @delay: Delay to wait after link has become active (in ms). Specify %0 + * for no delay. * * Use this to wait till link becomes active or inactive. */ @@ -4673,10 +4652,10 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, /* * Some controllers might not implement link active reporting. In this - * case, we wait for 1000 + 100 ms. + * case, we wait for 1000 ms + any delay requested by the caller. */ if (!pdev->link_active_reporting) { - msleep(1100); + msleep(timeout + delay); return true; } @@ -4701,7 +4680,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, msleep(10); timeout -= 10; } - if (active && ret) + if (active && ret && delay) msleep(delay); else if (ret != active) pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", @@ -4822,17 +4801,28 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev) if (!pcie_downstream_port(dev)) return; - if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) { - pci_dbg(dev, "waiting %d ms for downstream link\n", delay); - msleep(delay); - } else { - pci_dbg(dev, "waiting %d ms for downstream link, after activation\n", - delay); - if (!pcie_wait_for_link_delay(dev, true, delay)) { + /* + * Per PCIe r5.0, sec 6.6.1, for downstream ports that support + * speeds > 5 GT/s, we must wait for link training to complete + * before the mandatory delay. + * + * We can only tell when link training completes via DLL Link + * Active, which is required for downstream ports that support + * speeds > 5 GT/s (sec 7.5.3.6). Unfortunately some common + * devices do not implement Link Active reporting even when it's + * required, so we'll check for that directly instead of checking + * the supported link speed. We assume devices without Link Active + * reporting can train in 100 ms regardless of speed. 
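Put differently: when the port implements DLL Link Active reporting we can observe the end of link training and only then start the required settling delay; without it all we can do is sleep a worst-case amount. The corresponding code continues just below; here is a compressed sketch of the decision, with a caller-supplied poll standing in for the PCI core's internal pcie_wait_for_link_delay() (the whole function is illustrative):

#include <linux/delay.h>
#include <linux/pci.h>

static void demo_wait_secondary_bus(struct pci_dev *bridge, int delay_ms,
                                    bool (*link_trained)(struct pci_dev *))
{
        if (bridge->link_active_reporting) {
                /* Training completion is observable: wait for it first */
                if (!link_trained(bridge))
                        return; /* link never came up, nothing more to wait for */
        } else {
                /* No DLL Link Active: assume the worst-case 1 s training time */
                msleep(1000);
        }

        /* Now the mandatory delay before touching the downstream device */
        msleep(delay_ms);
}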
+ */ + if (dev->link_active_reporting) { + pci_dbg(dev, "waiting for link to train\n"); + if (!pcie_wait_for_link_delay(dev, true, 0)) { /* Did not train, no need to wait any further */ return; } } + pci_dbg(child, "waiting %d ms to become accessible\n", delay); + msleep(delay); if (!pci_device_is_present(child)) { pci_dbg(child, "waiting additional %d ms to become accessible\n", delay); diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig index 66386811cfde..9cd31331aee9 100644 --- a/drivers/pci/pcie/Kconfig +++ b/drivers/pci/pcie/Kconfig @@ -25,7 +25,6 @@ config PCIEAER bool "PCI Express Advanced Error Reporting support" depends on PCIEPORTBUS select RAS - default y help This enables PCI Express Root Port Advanced Error Reporting (AER) driver support. Error reporting messages sent to Root diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c index f4274d301235..3acf56683915 100644 --- a/drivers/pci/pcie/aer.c +++ b/drivers/pci/pcie/aer.c @@ -136,22 +136,18 @@ static const char * const ecrc_policy_str[] = { */ static int enable_ecrc_checking(struct pci_dev *dev) { - int pos; + int aer = dev->aer_cap; u32 reg32; - if (!pci_is_pcie(dev)) + if (!aer) return -ENODEV; - pos = dev->aer_cap; - if (!pos) - return -ENODEV; - - pci_read_config_dword(dev, pos + PCI_ERR_CAP, ®32); + pci_read_config_dword(dev, aer + PCI_ERR_CAP, ®32); if (reg32 & PCI_ERR_CAP_ECRC_GENC) reg32 |= PCI_ERR_CAP_ECRC_GENE; if (reg32 & PCI_ERR_CAP_ECRC_CHKC) reg32 |= PCI_ERR_CAP_ECRC_CHKE; - pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32); + pci_write_config_dword(dev, aer + PCI_ERR_CAP, reg32); return 0; } @@ -164,19 +160,15 @@ static int enable_ecrc_checking(struct pci_dev *dev) */ static int disable_ecrc_checking(struct pci_dev *dev) { - int pos; + int aer = dev->aer_cap; u32 reg32; - if (!pci_is_pcie(dev)) + if (!aer) return -ENODEV; - pos = dev->aer_cap; - if (!pos) - return -ENODEV; - - pci_read_config_dword(dev, pos + PCI_ERR_CAP, ®32); + pci_read_config_dword(dev, aer + PCI_ERR_CAP, ®32); reg32 &= ~(PCI_ERR_CAP_ECRC_GENE | PCI_ERR_CAP_ECRC_CHKE); - pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32); + pci_write_config_dword(dev, aer + PCI_ERR_CAP, reg32); return 0; } @@ -217,142 +209,22 @@ void pcie_ecrc_get_policy(char *str) } #endif /* CONFIG_PCIE_ECRC */ -#ifdef CONFIG_ACPI_APEI -static inline int hest_match_pci(struct acpi_hest_aer_common *p, - struct pci_dev *pci) -{ - return ACPI_HEST_SEGMENT(p->bus) == pci_domain_nr(pci->bus) && - ACPI_HEST_BUS(p->bus) == pci->bus->number && - p->device == PCI_SLOT(pci->devfn) && - p->function == PCI_FUNC(pci->devfn); -} - -static inline bool hest_match_type(struct acpi_hest_header *hest_hdr, - struct pci_dev *dev) -{ - u16 hest_type = hest_hdr->type; - u8 pcie_type = pci_pcie_type(dev); - - if ((hest_type == ACPI_HEST_TYPE_AER_ROOT_PORT && - pcie_type == PCI_EXP_TYPE_ROOT_PORT) || - (hest_type == ACPI_HEST_TYPE_AER_ENDPOINT && - pcie_type == PCI_EXP_TYPE_ENDPOINT) || - (hest_type == ACPI_HEST_TYPE_AER_BRIDGE && - (dev->class >> 16) == PCI_BASE_CLASS_BRIDGE)) - return true; - return false; -} - -struct aer_hest_parse_info { - struct pci_dev *pci_dev; - int firmware_first; -}; - -static int hest_source_is_pcie_aer(struct acpi_hest_header *hest_hdr) -{ - if (hest_hdr->type == ACPI_HEST_TYPE_AER_ROOT_PORT || - hest_hdr->type == ACPI_HEST_TYPE_AER_ENDPOINT || - hest_hdr->type == ACPI_HEST_TYPE_AER_BRIDGE) - return 1; - return 0; -} - -static int aer_hest_parse(struct acpi_hest_header *hest_hdr, void *data) -{ - struct aer_hest_parse_info *info = data; - 
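The ECRC helpers above only flip the generation/check enable bits when the matching capability bits are present. The same operation from first principles, locating the AER capability with pci_find_ext_capability() rather than the cached dev->aer_cap offset (a sketch; drivers normally leave this to the core):

#include <linux/errno.h>
#include <linux/pci.h>

/* Enable ECRC generation and checking where the device advertises them */
static int demo_enable_ecrc(struct pci_dev *dev)
{
        int aer = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
        u32 cap;

        if (!aer)
                return -ENODEV;

        pci_read_config_dword(dev, aer + PCI_ERR_CAP, &cap);
        if (cap & PCI_ERR_CAP_ECRC_GENC)
                cap |= PCI_ERR_CAP_ECRC_GENE;
        if (cap & PCI_ERR_CAP_ECRC_CHKC)
                cap |= PCI_ERR_CAP_ECRC_CHKE;
        pci_write_config_dword(dev, aer + PCI_ERR_CAP, cap);

        return 0;
}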
struct acpi_hest_aer_common *p; - int ff; - - if (!hest_source_is_pcie_aer(hest_hdr)) - return 0; - - p = (struct acpi_hest_aer_common *)(hest_hdr + 1); - ff = !!(p->flags & ACPI_HEST_FIRMWARE_FIRST); - - /* - * If no specific device is supplied, determine whether - * FIRMWARE_FIRST is set for *any* PCIe device. - */ - if (!info->pci_dev) { - info->firmware_first |= ff; - return 0; - } - - /* Otherwise, check the specific device */ - if (p->flags & ACPI_HEST_GLOBAL) { - if (hest_match_type(hest_hdr, info->pci_dev)) - info->firmware_first = ff; - } else - if (hest_match_pci(p, info->pci_dev)) - info->firmware_first = ff; - - return 0; -} - -static void aer_set_firmware_first(struct pci_dev *pci_dev) -{ - int rc; - struct aer_hest_parse_info info = { - .pci_dev = pci_dev, - .firmware_first = 0, - }; - - rc = apei_hest_parse(aer_hest_parse, &info); - - if (rc) - pci_dev->__aer_firmware_first = 0; - else - pci_dev->__aer_firmware_first = info.firmware_first; - pci_dev->__aer_firmware_first_valid = 1; -} +#define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \ + PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE) -int pcie_aer_get_firmware_first(struct pci_dev *dev) +int pcie_aer_is_native(struct pci_dev *dev) { - if (!pci_is_pcie(dev)) - return 0; + struct pci_host_bridge *host = pci_find_host_bridge(dev->bus); - if (pcie_ports_native) + if (!dev->aer_cap) return 0; - if (!dev->__aer_firmware_first_valid) - aer_set_firmware_first(dev); - return dev->__aer_firmware_first; -} - -static bool aer_firmware_first; - -/** - * aer_acpi_firmware_first - Check if APEI should control AER. - */ -bool aer_acpi_firmware_first(void) -{ - static bool parsed = false; - struct aer_hest_parse_info info = { - .pci_dev = NULL, /* Check all PCIe devices */ - .firmware_first = 0, - }; - - if (pcie_ports_native) - return false; - - if (!parsed) { - apei_hest_parse(aer_hest_parse, &info); - aer_firmware_first = info.firmware_first; - parsed = true; - } - return aer_firmware_first; + return pcie_ports_native || host->native_aer; } -#endif - -#define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \ - PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE) int pci_enable_pcie_error_reporting(struct pci_dev *dev) { - if (pcie_aer_get_firmware_first(dev)) - return -EIO; - - if (!dev->aer_cap) + if (!pcie_aer_is_native(dev)) return -EIO; return pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_AER_FLAGS); @@ -361,7 +233,7 @@ EXPORT_SYMBOL_GPL(pci_enable_pcie_error_reporting); int pci_disable_pcie_error_reporting(struct pci_dev *dev) { - if (pcie_aer_get_firmware_first(dev)) + if (!pcie_aer_is_native(dev)) return -EIO; return pcie_capability_clear_word(dev, PCI_EXP_DEVCTL, @@ -379,22 +251,18 @@ void pci_aer_clear_device_status(struct pci_dev *dev) int pci_aer_clear_nonfatal_status(struct pci_dev *dev) { - int pos; + int aer = dev->aer_cap; u32 status, sev; - pos = dev->aer_cap; - if (!pos) - return -EIO; - - if (pcie_aer_get_firmware_first(dev)) + if (!pcie_aer_is_native(dev)) return -EIO; /* Clear status bits for ERR_NONFATAL errors only */ - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, &sev); status &= ~sev; if (status) - pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status); + pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status); return 0; } @@ -402,22 +270,18 @@ 
EXPORT_SYMBOL_GPL(pci_aer_clear_nonfatal_status); void pci_aer_clear_fatal_status(struct pci_dev *dev) { - int pos; + int aer = dev->aer_cap; u32 status, sev; - pos = dev->aer_cap; - if (!pos) - return; - - if (pcie_aer_get_firmware_first(dev)) + if (!pcie_aer_is_native(dev)) return; /* Clear status bits for ERR_FATAL errors only */ - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, &sev); status &= sev; if (status) - pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status); + pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status); } /** @@ -431,35 +295,31 @@ void pci_aer_clear_fatal_status(struct pci_dev *dev) */ int pci_aer_raw_clear_status(struct pci_dev *dev) { - int pos; + int aer = dev->aer_cap; u32 status; int port_type; - if (!pci_is_pcie(dev)) - return -ENODEV; - - pos = dev->aer_cap; - if (!pos) + if (!aer) return -EIO; port_type = pci_pcie_type(dev); if (port_type == PCI_EXP_TYPE_ROOT_PORT) { - pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &status); - pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, status); + pci_read_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, &status); + pci_write_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, status); } - pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS, &status); - pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS, status); + pci_read_config_dword(dev, aer + PCI_ERR_COR_STATUS, &status); + pci_write_config_dword(dev, aer + PCI_ERR_COR_STATUS, status); - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); - pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status); + pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status); return 0; } int pci_aer_clear_status(struct pci_dev *dev) { - if (pcie_aer_get_firmware_first(dev)) + if (!pcie_aer_is_native(dev)) return -EIO; return pci_aer_raw_clear_status(dev); @@ -467,12 +327,11 @@ int pci_aer_clear_status(struct pci_dev *dev) void pci_save_aer_state(struct pci_dev *dev) { + int aer = dev->aer_cap; struct pci_cap_saved_state *save_state; u32 *cap; - int pos; - pos = dev->aer_cap; - if (!pos) + if (!aer) return; save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_ERR); @@ -480,22 +339,21 @@ void pci_save_aer_state(struct pci_dev *dev) return; cap = &save_state->cap.data[0]; - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, cap++); - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, cap++); - pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, cap++); - pci_read_config_dword(dev, pos + PCI_ERR_CAP, cap++); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, cap++); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, cap++); + pci_read_config_dword(dev, aer + PCI_ERR_COR_MASK, cap++); + pci_read_config_dword(dev, aer + PCI_ERR_CAP, cap++); if (pcie_cap_has_rtctl(dev)) - pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, cap++); + pci_read_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, cap++); } void pci_restore_aer_state(struct pci_dev *dev) { + int aer = dev->aer_cap; struct pci_cap_saved_state *save_state; u32 *cap; - int pos; - pos = dev->aer_cap; - if (!pos) + if (!aer) return; save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_ERR); @@ -503,12 +361,12 @@ void pci_restore_aer_state(struct pci_dev *dev) return; cap = &save_state->cap.data[0]; - 
pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, *cap++); - pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, *cap++); - pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, *cap++); - pci_write_config_dword(dev, pos + PCI_ERR_CAP, *cap++); + pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, *cap++); + pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, *cap++); + pci_write_config_dword(dev, aer + PCI_ERR_COR_MASK, *cap++); + pci_write_config_dword(dev, aer + PCI_ERR_CAP, *cap++); if (pcie_cap_has_rtctl(dev)) - pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, *cap++); + pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, *cap++); } void pci_aer_init(struct pci_dev *dev) @@ -939,7 +797,7 @@ static int add_error_device(struct aer_err_info *e_info, struct pci_dev *dev) */ static bool is_error_source(struct pci_dev *dev, struct aer_err_info *e_info) { - int pos; + int aer = dev->aer_cap; u32 status, mask; u16 reg16; @@ -974,17 +832,16 @@ static bool is_error_source(struct pci_dev *dev, struct aer_err_info *e_info) if (!(reg16 & PCI_EXP_AER_FLAGS)) return false; - pos = dev->aer_cap; - if (!pos) + if (!aer) return false; /* Check if error is recorded */ if (e_info->severity == AER_CORRECTABLE) { - pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS, &status); - pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &mask); + pci_read_config_dword(dev, aer + PCI_ERR_COR_STATUS, &status); + pci_read_config_dword(dev, aer + PCI_ERR_COR_MASK, &mask); } else { - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status); + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, &mask); } if (status & ~mask) return true; @@ -1055,16 +912,15 @@ static bool find_source_device(struct pci_dev *parent, */ static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info) { - int pos; + int aer = dev->aer_cap; if (info->severity == AER_CORRECTABLE) { /* * Correctable error does not need software intervention. * No need to go through error recovery process. 
*/ - pos = dev->aer_cap; - if (pos) - pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS, + if (aer) + pci_write_config_dword(dev, aer + PCI_ERR_COR_STATUS, info->status); pci_aer_clear_device_status(dev); } else if (info->severity == AER_NONFATAL) @@ -1155,22 +1011,21 @@ EXPORT_SYMBOL_GPL(aer_recover_queue); */ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info) { - int pos, temp; + int aer = dev->aer_cap; + int temp; /* Must reset in this function */ info->status = 0; info->tlp_header_valid = 0; - pos = dev->aer_cap; - /* The device might not support AER */ - if (!pos) + if (!aer) return 0; if (info->severity == AER_CORRECTABLE) { - pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS, + pci_read_config_dword(dev, aer + PCI_ERR_COR_STATUS, &info->status); - pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, + pci_read_config_dword(dev, aer + PCI_ERR_COR_MASK, &info->mask); if (!(info->status & ~info->mask)) return 0; @@ -1179,27 +1034,27 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info) info->severity == AER_NONFATAL) { /* Link is still healthy for IO reads */ - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &info->status); - pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, + pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, &info->mask); if (!(info->status & ~info->mask)) return 0; /* Get First Error Pointer */ - pci_read_config_dword(dev, pos + PCI_ERR_CAP, &temp); + pci_read_config_dword(dev, aer + PCI_ERR_CAP, &temp); info->first_error = PCI_ERR_CAP_FEP(temp); if (info->status & AER_LOG_TLP_MASKS) { info->tlp_header_valid = 1; pci_read_config_dword(dev, - pos + PCI_ERR_HEADER_LOG, &info->tlp.dw0); + aer + PCI_ERR_HEADER_LOG, &info->tlp.dw0); pci_read_config_dword(dev, - pos + PCI_ERR_HEADER_LOG + 4, &info->tlp.dw1); + aer + PCI_ERR_HEADER_LOG + 4, &info->tlp.dw1); pci_read_config_dword(dev, - pos + PCI_ERR_HEADER_LOG + 8, &info->tlp.dw2); + aer + PCI_ERR_HEADER_LOG + 8, &info->tlp.dw2); pci_read_config_dword(dev, - pos + PCI_ERR_HEADER_LOG + 12, &info->tlp.dw3); + aer + PCI_ERR_HEADER_LOG + 12, &info->tlp.dw3); } } @@ -1305,15 +1160,15 @@ static irqreturn_t aer_irq(int irq, void *context) struct pcie_device *pdev = (struct pcie_device *)context; struct aer_rpc *rpc = get_service_data(pdev); struct pci_dev *rp = rpc->rpd; + int aer = rp->aer_cap; struct aer_err_source e_src = {}; - int pos = rp->aer_cap; - pci_read_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, &e_src.status); + pci_read_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, &e_src.status); if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV))) return IRQ_NONE; - pci_read_config_dword(rp, pos + PCI_ERR_ROOT_ERR_SRC, &e_src.id); - pci_write_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, e_src.status); + pci_read_config_dword(rp, aer + PCI_ERR_ROOT_ERR_SRC, &e_src.id); + pci_write_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, e_src.status); if (!kfifo_put(&rpc->aer_fifo, e_src)) return IRQ_HANDLED; @@ -1365,7 +1220,7 @@ static void set_downstream_devices_error_reporting(struct pci_dev *dev, static void aer_enable_rootport(struct aer_rpc *rpc) { struct pci_dev *pdev = rpc->rpd; - int aer_pos; + int aer = pdev->aer_cap; u16 reg16; u32 reg32; @@ -1377,14 +1232,13 @@ static void aer_enable_rootport(struct aer_rpc *rpc) pcie_capability_clear_word(pdev, PCI_EXP_RTCTL, SYSTEM_ERROR_INTR_ON_MESG_MASK); - aer_pos = pdev->aer_cap; /* Clear error status */ - pci_read_config_dword(pdev, aer_pos + 
PCI_ERR_ROOT_STATUS, ®32); - pci_write_config_dword(pdev, aer_pos + PCI_ERR_ROOT_STATUS, reg32); - pci_read_config_dword(pdev, aer_pos + PCI_ERR_COR_STATUS, ®32); - pci_write_config_dword(pdev, aer_pos + PCI_ERR_COR_STATUS, reg32); - pci_read_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, ®32); - pci_write_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, reg32); + pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, ®32); + pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, reg32); + pci_read_config_dword(pdev, aer + PCI_ERR_COR_STATUS, ®32); + pci_write_config_dword(pdev, aer + PCI_ERR_COR_STATUS, reg32); + pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, ®32); + pci_write_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, reg32); /* * Enable error reporting for the root port device and downstream port @@ -1393,9 +1247,9 @@ static void aer_enable_rootport(struct aer_rpc *rpc) set_downstream_devices_error_reporting(pdev, true); /* Enable Root Port's interrupt in response to error messages */ - pci_read_config_dword(pdev, aer_pos + PCI_ERR_ROOT_COMMAND, ®32); + pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, ®32); reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; - pci_write_config_dword(pdev, aer_pos + PCI_ERR_ROOT_COMMAND, reg32); + pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32); } /** @@ -1407,8 +1261,8 @@ static void aer_enable_rootport(struct aer_rpc *rpc) static void aer_disable_rootport(struct aer_rpc *rpc) { struct pci_dev *pdev = rpc->rpd; + int aer = pdev->aer_cap; u32 reg32; - int pos; /* * Disable error reporting for the root port device and downstream port @@ -1416,15 +1270,14 @@ static void aer_disable_rootport(struct aer_rpc *rpc) */ set_downstream_devices_error_reporting(pdev, false); - pos = pdev->aer_cap; /* Disable Root's interrupt in response to error messages */ - pci_read_config_dword(pdev, pos + PCI_ERR_ROOT_COMMAND, ®32); + pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, ®32); reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; - pci_write_config_dword(pdev, pos + PCI_ERR_ROOT_COMMAND, reg32); + pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32); /* Clear Root's error status reg */ - pci_read_config_dword(pdev, pos + PCI_ERR_ROOT_STATUS, ®32); - pci_write_config_dword(pdev, pos + PCI_ERR_ROOT_STATUS, reg32); + pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, ®32); + pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, reg32); } /** @@ -1481,28 +1334,27 @@ static int aer_probe(struct pcie_device *dev) */ static pci_ers_result_t aer_root_reset(struct pci_dev *dev) { + int aer = dev->aer_cap; u32 reg32; - int pos; int rc; - pos = dev->aer_cap; /* Disable Root's interrupt in response to error messages */ - pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, ®32); + pci_read_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, ®32); reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; - pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); + pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, reg32); rc = pci_bus_error_reset(dev); pci_info(dev, "Root Port link has been reset\n"); /* Clear Root Error Status */ - pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, ®32); - pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, reg32); + pci_read_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, ®32); + pci_write_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, reg32); /* Enable Root Port's interrupt in response to error messages */ - pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, ®32); + pci_read_config_dword(dev, aer + 
PCI_ERR_ROOT_COMMAND, ®32); reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; - pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); + pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, reg32); return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; } @@ -1523,7 +1375,7 @@ static struct pcie_port_service_driver aerdriver = { */ int __init pcie_aer_init(void) { - if (!pci_aer_available() || aer_acpi_firmware_first()) + if (!pci_aer_available()) return -ENXIO; return pcie_port_service_register(&aerdriver); } diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c index 2378ed692534..b17e5ffd31b1 100644 --- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -628,16 +628,6 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist) /* Setup initial capable state. Will be updated later */ link->aspm_capable = link->aspm_support; - /* - * If the downstream component has pci bridge function, don't - * do ASPM for now. - */ - list_for_each_entry(child, &linkbus->devices, bus_list) { - if (pci_pcie_type(child) == PCI_EXP_TYPE_PCI_BRIDGE) { - link->aspm_disable = ASPM_STATE_ALL; - break; - } - } /* Get and check endpoint acceptable latencies */ list_for_each_entry(child, &linkbus->devices, bus_list) { diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c index 762170423fdd..daa9a4153776 100644 --- a/drivers/pci/pcie/dpc.c +++ b/drivers/pci/pcie/dpc.c @@ -284,7 +284,7 @@ static int dpc_probe(struct pcie_device *dev) int status; u16 ctl, cap; - if (pcie_aer_get_firmware_first(pdev) && !pcie_ports_dpc_native) + if (!pcie_aer_is_native(pdev) && !pcie_ports_dpc_native) return -ENOTSUPP; status = devm_request_threaded_irq(device, dev->irq, dpc_irq, @@ -301,6 +301,7 @@ static int dpc_probe(struct pcie_device *dev) ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN; pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl); + pci_info(pdev, "enabled with IRQ %d\n", dev->irq); pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n", cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT), diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c index 594622a6cb16..a6b9b479b97a 100644 --- a/drivers/pci/pcie/edr.c +++ b/drivers/pci/pcie/edr.c @@ -148,11 +148,11 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data) pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT; u16 status; - pci_info(pdev, "ACPI event %#x received\n", event); - if (event != ACPI_NOTIFY_DISCONNECT_RECOVER) return; + pci_info(pdev, "EDR event received\n"); + /* Locate the port which issued EDR event */ edev = acpi_dpc_port_get(pdev); if (!edev) { diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c index f38e6c19dd50..6a32970bb731 100644 --- a/drivers/pci/pcie/pme.c +++ b/drivers/pci/pcie/pme.c @@ -408,7 +408,7 @@ static int pcie_pme_suspend(struct pcie_device *srv) /** * pcie_pme_resume - Resume PCIe PME service device. - * @srv - PCIe service device to resume. + * @srv: PCIe service device to resume. */ static int pcie_pme_resume(struct pcie_device *srv) { @@ -431,7 +431,7 @@ static int pcie_pme_resume(struct pcie_device *srv) /** * pcie_pme_remove - Prepare PCIe PME service device for removal. - * @srv - PCIe service device to remove. + * @srv: PCIe service device to remove. 
*/ static void pcie_pme_remove(struct pcie_device *srv) { diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h index 64b5e081cdb2..af7cf237432a 100644 --- a/drivers/pci/pcie/portdrv.h +++ b/drivers/pci/pcie/portdrv.h @@ -29,8 +29,10 @@ extern bool pcie_ports_dpc_native; #ifdef CONFIG_PCIEAER int pcie_aer_init(void); +int pcie_aer_is_native(struct pci_dev *dev); #else static inline int pcie_aer_init(void) { return 0; } +static inline int pcie_aer_is_native(struct pci_dev *dev) { return 0; } #endif #ifdef CONFIG_HOTPLUG_PCI_PCIE @@ -147,16 +149,5 @@ static inline bool pcie_pme_no_msi(void) { return false; } static inline void pcie_pme_interrupt_enable(struct pci_dev *dev, bool en) {} #endif /* !CONFIG_PCIE_PME */ -#ifdef CONFIG_ACPI_APEI -int pcie_aer_get_firmware_first(struct pci_dev *pci_dev); -#else -static inline int pcie_aer_get_firmware_first(struct pci_dev *pci_dev) -{ - if (pci_dev->__aer_firmware_first_valid) - return pci_dev->__aer_firmware_first; - return 0; -} -#endif - struct device *pcie_port_find_device(struct pci_dev *dev, u32 service); #endif /* _PORTDRV_H_ */ diff --git a/drivers/pci/pcie/ptm.c b/drivers/pci/pcie/ptm.c index 9361f3aa26ab..357a454cafa0 100644 --- a/drivers/pci/pcie/ptm.c +++ b/drivers/pci/pcie/ptm.c @@ -39,10 +39,6 @@ void pci_ptm_init(struct pci_dev *dev) if (!pci_is_pcie(dev)) return; - pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM); - if (!pos) - return; - /* * Enable PTM only on interior devices (root ports, switch ports, * etc.) on the assumption that it causes no link traffic until an @@ -52,6 +48,23 @@ void pci_ptm_init(struct pci_dev *dev) pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END)) return; + /* + * Switch Downstream Ports are not permitted to have a PTM + * capability; their PTM behavior is controlled by the Upstream + * Port (PCIe r5.0, sec 7.9.16). + */ + ups = pci_upstream_bridge(dev); + if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM && + ups && ups->ptm_enabled) { + dev->ptm_granularity = ups->ptm_granularity; + dev->ptm_enabled = 1; + return; + } + + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM); + if (!pos) + return; + pci_read_config_dword(dev, pos + PCI_PTM_CAP, &cap); local_clock = (cap & PCI_PTM_GRANULARITY_MASK) >> 8; @@ -61,7 +74,6 @@ void pci_ptm_init(struct pci_dev *dev) * the spec recommendation (PCIe r3.1, sec 7.32.3), select the * furthest upstream Time Source as the PTM Root. 
*/ - ups = pci_upstream_bridge(dev); if (ups && ups->ptm_enabled) { ctrl = PCI_PTM_CTRL_ENABLE; if (ups->ptm_granularity == 0) diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index d9c2c3301a8a..2f66988cea25 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -565,7 +565,7 @@ static struct pci_bus *pci_alloc_bus(struct pci_bus *parent) return b; } -static void devm_pci_release_host_bridge_dev(struct device *dev) +static void pci_release_host_bridge_dev(struct device *dev) { struct pci_host_bridge *bridge = to_pci_host_bridge(dev); @@ -574,12 +574,7 @@ static void devm_pci_release_host_bridge_dev(struct device *dev) pci_free_resource_list(&bridge->windows); pci_free_resource_list(&bridge->dma_ranges); -} - -static void pci_release_host_bridge_dev(struct device *dev) -{ - devm_pci_release_host_bridge_dev(dev); - kfree(to_pci_host_bridge(dev)); + kfree(bridge); } static void pci_init_host_bridge(struct pci_host_bridge *bridge) @@ -599,6 +594,8 @@ static void pci_init_host_bridge(struct pci_host_bridge *bridge) bridge->native_pme = 1; bridge->native_ltr = 1; bridge->native_dpc = 1; + + device_initialize(&bridge->dev); } struct pci_host_bridge *pci_alloc_host_bridge(size_t priv) @@ -616,17 +613,25 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv) } EXPORT_SYMBOL(pci_alloc_host_bridge); +static void devm_pci_alloc_host_bridge_release(void *data) +{ + pci_free_host_bridge(data); +} + struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev, size_t priv) { + int ret; struct pci_host_bridge *bridge; - bridge = devm_kzalloc(dev, sizeof(*bridge) + priv, GFP_KERNEL); + bridge = pci_alloc_host_bridge(priv); if (!bridge) return NULL; - pci_init_host_bridge(bridge); - bridge->dev.release = devm_pci_release_host_bridge_dev; + ret = devm_add_action_or_reset(dev, devm_pci_alloc_host_bridge_release, + bridge); + if (ret) + return NULL; return bridge; } @@ -634,10 +639,7 @@ EXPORT_SYMBOL(devm_pci_alloc_host_bridge); void pci_free_host_bridge(struct pci_host_bridge *bridge) { - pci_free_resource_list(&bridge->windows); - pci_free_resource_list(&bridge->dma_ranges); - - kfree(bridge); + put_device(&bridge->dev); } EXPORT_SYMBOL(pci_free_host_bridge); @@ -908,10 +910,11 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge) if (err) goto free; - err = device_register(&bridge->dev); - if (err) + err = device_add(&bridge->dev); + if (err) { put_device(&bridge->dev); - + goto free; + } bus->bridge = get_device(&bridge->dev); device_enable_async_suspend(bus->bridge); pci_set_bus_of_node(bus); @@ -977,7 +980,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge) unregister: put_device(&bridge->dev); - device_unregister(&bridge->dev); + device_del(&bridge->dev); free: kfree(bus); @@ -1934,13 +1937,33 @@ static void pci_configure_mps(struct pci_dev *dev) struct pci_dev *bridge = pci_upstream_bridge(dev); int mps, mpss, p_mps, rc; - if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge)) + if (!pci_is_pcie(dev)) return; /* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */ if (dev->is_virtfn) return; + /* + * For Root Complex Integrated Endpoints, program the maximum + * supported value unless limited by the PCIE_BUS_PEER2PEER case. 
+ */ + if (pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END) { + if (pcie_bus_config == PCIE_BUS_PEER2PEER) + mps = 128; + else + mps = 128 << dev->pcie_mpss; + rc = pcie_set_mps(dev, mps); + if (rc) { + pci_warn(dev, "can't set Max Payload Size to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n", + mps); + } + return; + } + + if (!bridge || !pci_is_pcie(bridge)) + return; + mps = pcie_get_mps(dev); p_mps = pcie_get_mps(bridge); @@ -2056,7 +2079,7 @@ static void pci_configure_relaxed_ordering(struct pci_dev *dev) * For now, we only deal with Relaxed Ordering issues with Root * Ports. Peer-to-Peer DMA is another can of worms. */ - root = pci_find_pcie_root_port(dev); + root = pcie_find_root_port(dev); if (!root) return; @@ -2952,7 +2975,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus, return bridge->bus; err_out: - kfree(bridge); + put_device(&bridge->dev); return NULL; } EXPORT_SYMBOL_GPL(pci_create_root_bus); diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index ca9ed5774eb1..812bfc32ecb8 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -4319,7 +4319,7 @@ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_AMD, 0x1a02, PCI_CLASS_NOT_DEFINED, */ static void quirk_disable_root_port_attributes(struct pci_dev *pdev) { - struct pci_dev *root_port = pci_find_pcie_root_port(pdev); + struct pci_dev *root_port = pcie_find_root_port(pdev); if (!root_port) { pci_warn(pdev, "PCIe Completion erratum may cause device errors\n"); @@ -4682,6 +4682,20 @@ static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags) PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT); } +static int pci_quirk_rciep_acs(struct pci_dev *dev, u16 acs_flags) +{ + /* + * Intel RCiEP's are required to allow p2p only on translated + * addresses. Refer to Intel VT-d specification, r3.1, sec 3.16, + * "Root-Complex Peer to Peer Considerations". 
+ */ + if (pci_pcie_type(dev) != PCI_EXP_TYPE_RC_END) + return -ENOTTY; + + return pci_acs_ctrl_enabled(acs_flags, + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); +} + static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags) { /* @@ -4764,6 +4778,7 @@ static const struct pci_dev_acs_enabled { /* I219 */ { PCI_VENDOR_ID_INTEL, 0x15b7, pci_quirk_mf_endpoint_acs }, { PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs }, + { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_rciep_acs }, /* QCOM QDF2xxx root ports */ { PCI_VENDOR_ID_QCOM, 0x0400, pci_quirk_qcom_rp_acs }, { PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs }, @@ -5129,13 +5144,25 @@ static void quirk_intel_qat_vf_cap(struct pci_dev *pdev) } DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap); -/* FLR may cause some 82579 devices to hang */ -static void quirk_intel_no_flr(struct pci_dev *dev) +/* + * FLR may cause the following to devices to hang: + * + * AMD Starship/Matisse HD Audio Controller 0x1487 + * AMD Starship USB 3.0 Host Controller 0x148c + * AMD Matisse USB 3.0 Host Controller 0x149c + * Intel 82579LM Gigabit Ethernet Controller 0x1502 + * Intel 82579V Gigabit Ethernet Controller 0x1503 + * + */ +static void quirk_no_flr(struct pci_dev *dev) { dev->dev_flags |= PCI_DEV_FLAGS_NO_FLR_RESET; } -DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_intel_no_flr); -DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_intel_no_flr); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1487, quirk_no_flr); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr); +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr); static void quirk_no_ext_tags(struct pci_dev *pdev) { @@ -5568,6 +5595,19 @@ static void pci_fixup_no_d0_pme(struct pci_dev *dev) } DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x2142, pci_fixup_no_d0_pme); +/* + * Device [12d8:0x400e] and [12d8:0x400f] + * These devices advertise PME# support in all power states but don't + * reliably assert it. 
+ */ +static void pci_fixup_no_pme(struct pci_dev *dev) +{ + pci_info(dev, "PME# is unreliable, disabling it\n"); + dev->pme_support = 0; +} +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400e, pci_fixup_no_pme); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400f, pci_fixup_no_pme); + static void apex_pci_fixup_class(struct pci_dev *pdev) { pdev->class = (PCI_CLASS_SYSTEM_OTHER << 8) | pdev->class; diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c index e9c6b120cf45..95dec03d9f2a 100644 --- a/drivers/pci/remove.c +++ b/drivers/pci/remove.c @@ -160,6 +160,6 @@ void pci_remove_root_bus(struct pci_bus *bus) host_bridge->bus = NULL; /* remove the host bridge */ - device_unregister(&host_bridge->dev); + device_del(&host_bridge->dev); } EXPORT_SYMBOL_GPL(pci_remove_root_bus); diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index bbcef1a053ab..9b94b1f16d80 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c @@ -26,6 +26,7 @@ #include "pci.h" unsigned int pci_flags; +EXPORT_SYMBOL_GPL(pci_flags); struct pci_dev_resource { struct list_head list; @@ -583,7 +584,7 @@ static void pci_setup_bridge_io(struct pci_dev *bridge) io_mask = PCI_IO_1K_RANGE_MASK; /* Set up the top and bottom of the PCI I/O segment for this bus */ - res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0]; + res = &bridge->resource[PCI_BRIDGE_IO_WINDOW]; pcibios_resource_to_bus(bridge->bus, ®ion, res); if (res->flags & IORESOURCE_IO) { pci_read_config_word(bridge, PCI_IO_BASE, &l); @@ -613,7 +614,7 @@ static void pci_setup_bridge_mmio(struct pci_dev *bridge) u32 l; /* Set up the top and bottom of the PCI Memory segment for this bus */ - res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1]; + res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW]; pcibios_resource_to_bus(bridge->bus, ®ion, res); if (res->flags & IORESOURCE_MEM) { l = (region.start >> 16) & 0xfff0; @@ -640,7 +641,7 @@ static void pci_setup_bridge_mmio_pref(struct pci_dev *bridge) /* Set up PREF base/limit */ bu = lu = 0; - res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2]; + res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; pcibios_resource_to_bus(bridge->bus, ®ion, res); if (res->flags & IORESOURCE_PREFETCH) { l = (region.start >> 16) & 0xfff0; @@ -707,14 +708,14 @@ int pci_claim_bridge_resource(struct pci_dev *bridge, int i) if (!pci_bus_clip_resource(bridge, i)) return -EINVAL; /* Clipping didn't change anything */ - switch (i - PCI_BRIDGE_RESOURCES) { - case 0: + switch (i) { + case PCI_BRIDGE_IO_WINDOW: pci_setup_bridge_io(bridge); break; - case 1: + case PCI_BRIDGE_MEM_WINDOW: pci_setup_bridge_mmio(bridge); break; - case 2: + case PCI_BRIDGE_PREF_MEM_WINDOW: pci_setup_bridge_mmio_pref(bridge); break; default: @@ -735,18 +736,22 @@ int pci_claim_bridge_resource(struct pci_dev *bridge, int i) static void pci_bridge_check_ranges(struct pci_bus *bus) { struct pci_dev *bridge = bus->self; - struct resource *b_res = &bridge->resource[PCI_BRIDGE_RESOURCES]; + struct resource *b_res; - b_res[1].flags |= IORESOURCE_MEM; + b_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW]; + b_res->flags |= IORESOURCE_MEM; - if (bridge->io_window) - b_res[0].flags |= IORESOURCE_IO; + if (bridge->io_window) { + b_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW]; + b_res->flags |= IORESOURCE_IO; + } if (bridge->pref_window) { - b_res[2].flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH; + b_res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; + b_res->flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH; if (bridge->pref_64_window) { - b_res[2].flags |= IORESOURCE_MEM_64; 
- b_res[2].flags |= PCI_PREF_RANGE_TYPE_64; + b_res->flags |= IORESOURCE_MEM_64 | + PCI_PREF_RANGE_TYPE_64; } } } @@ -1105,35 +1110,37 @@ static void pci_bus_size_cardbus(struct pci_bus *bus, struct list_head *realloc_head) { struct pci_dev *bridge = bus->self; - struct resource *b_res = &bridge->resource[PCI_BRIDGE_RESOURCES]; + struct resource *b_res; resource_size_t b_res_3_size = pci_cardbus_mem_size * 2; u16 ctrl; - if (b_res[0].parent) + b_res = &bridge->resource[PCI_CB_BRIDGE_IO_0_WINDOW]; + if (b_res->parent) goto handle_b_res_1; /* * Reserve some resources for CardBus. We reserve a fixed amount * of bus space for CardBus bridges. */ - b_res[0].start = pci_cardbus_io_size; - b_res[0].end = b_res[0].start + pci_cardbus_io_size - 1; - b_res[0].flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN; + b_res->start = pci_cardbus_io_size; + b_res->end = b_res->start + pci_cardbus_io_size - 1; + b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN; if (realloc_head) { - b_res[0].end -= pci_cardbus_io_size; + b_res->end -= pci_cardbus_io_size; add_to_list(realloc_head, bridge, b_res, pci_cardbus_io_size, - pci_cardbus_io_size); + pci_cardbus_io_size); } handle_b_res_1: - if (b_res[1].parent) + b_res = &bridge->resource[PCI_CB_BRIDGE_IO_1_WINDOW]; + if (b_res->parent) goto handle_b_res_2; - b_res[1].start = pci_cardbus_io_size; - b_res[1].end = b_res[1].start + pci_cardbus_io_size - 1; - b_res[1].flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN; + b_res->start = pci_cardbus_io_size; + b_res->end = b_res->start + pci_cardbus_io_size - 1; + b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN; if (realloc_head) { - b_res[1].end -= pci_cardbus_io_size; - add_to_list(realloc_head, bridge, b_res+1, pci_cardbus_io_size, - pci_cardbus_io_size); + b_res->end -= pci_cardbus_io_size; + add_to_list(realloc_head, bridge, b_res, pci_cardbus_io_size, + pci_cardbus_io_size); } handle_b_res_2: @@ -1153,21 +1160,22 @@ handle_b_res_2: pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); } - if (b_res[2].parent) + b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_0_WINDOW]; + if (b_res->parent) goto handle_b_res_3; /* * If we have prefetchable memory support, allocate two regions. * Otherwise, allocate one region of twice the size. 
*/ if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0) { - b_res[2].start = pci_cardbus_mem_size; - b_res[2].end = b_res[2].start + pci_cardbus_mem_size - 1; - b_res[2].flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH | - IORESOURCE_STARTALIGN; + b_res->start = pci_cardbus_mem_size; + b_res->end = b_res->start + pci_cardbus_mem_size - 1; + b_res->flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH | + IORESOURCE_STARTALIGN; if (realloc_head) { - b_res[2].end -= pci_cardbus_mem_size; - add_to_list(realloc_head, bridge, b_res+2, - pci_cardbus_mem_size, pci_cardbus_mem_size); + b_res->end -= pci_cardbus_mem_size; + add_to_list(realloc_head, bridge, b_res, + pci_cardbus_mem_size, pci_cardbus_mem_size); } /* Reduce that to half */ @@ -1175,15 +1183,16 @@ handle_b_res_2: } handle_b_res_3: - if (b_res[3].parent) + b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_1_WINDOW]; + if (b_res->parent) goto handle_done; - b_res[3].start = pci_cardbus_mem_size; - b_res[3].end = b_res[3].start + b_res_3_size - 1; - b_res[3].flags |= IORESOURCE_MEM | IORESOURCE_STARTALIGN; + b_res->start = pci_cardbus_mem_size; + b_res->end = b_res->start + b_res_3_size - 1; + b_res->flags |= IORESOURCE_MEM | IORESOURCE_STARTALIGN; if (realloc_head) { - b_res[3].end -= b_res_3_size; - add_to_list(realloc_head, bridge, b_res+3, b_res_3_size, - pci_cardbus_mem_size); + b_res->end -= b_res_3_size; + add_to_list(realloc_head, bridge, b_res, b_res_3_size, + pci_cardbus_mem_size); } handle_done: @@ -1227,7 +1236,7 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) break; hdr_type = -1; /* Intentionally invalid - not a PCI device. */ } else { - pref = &bus->self->resource[PCI_BRIDGE_RESOURCES + 2]; + pref = &bus->self->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; hdr_type = bus->self->hdr_type; } @@ -1885,9 +1894,9 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus, struct pci_dev *dev, *bridge = bus->self; resource_size_t io_per_hp, mmio_per_hp, mmio_pref_per_hp, align; - io_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0]; - mmio_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1]; - mmio_pref_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2]; + io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW]; + mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW]; + mmio_pref_res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; /* * The alignment of this bridge is yet to be considered, hence it must @@ -1960,21 +1969,21 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus, * Reduce the available resource space by what the * bridge and devices below it occupy. */ - res = &dev->resource[PCI_BRIDGE_RESOURCES + 0]; + res = &dev->resource[PCI_BRIDGE_IO_WINDOW]; align = pci_resource_alignment(dev, res); align = align ? ALIGN(io.start, align) - io.start : 0; used_size = align + resource_size(res); if (!res->parent) io.start = min(io.start + used_size, io.end + 1); - res = &dev->resource[PCI_BRIDGE_RESOURCES + 1]; + res = &dev->resource[PCI_BRIDGE_MEM_WINDOW]; align = pci_resource_alignment(dev, res); align = align ? ALIGN(mmio.start, align) - mmio.start : 0; used_size = align + resource_size(res); if (!res->parent) mmio.start = min(mmio.start + used_size, mmio.end + 1); - res = &dev->resource[PCI_BRIDGE_RESOURCES + 2]; + res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; align = pci_resource_alignment(dev, res); align = align ? 
ALIGN(mmio_pref.start, align) - mmio_pref.start : 0; @@ -2027,9 +2036,9 @@ static void pci_bridge_distribute_available_resources(struct pci_dev *bridge, return; /* Take the initial extra resources from the hotplug port */ - available_io = bridge->resource[PCI_BRIDGE_RESOURCES + 0]; - available_mmio = bridge->resource[PCI_BRIDGE_RESOURCES + 1]; - available_mmio_pref = bridge->resource[PCI_BRIDGE_RESOURCES + 2]; + available_io = bridge->resource[PCI_BRIDGE_IO_WINDOW]; + available_mmio = bridge->resource[PCI_BRIDGE_MEM_WINDOW]; + available_mmio_pref = bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; pci_bus_distribute_available_resources(bridge->subordinate, add_list, available_io, diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c index d8ca40a97693..d21fa04fa44d 100644 --- a/drivers/pci/setup-res.c +++ b/drivers/pci/setup-res.c @@ -439,10 +439,11 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size) res->end = res->start + pci_rebar_size_to_bytes(size) - 1; /* Check if the new config works by trying to assign everything. */ - ret = pci_reassign_bridge_resources(dev->bus->self, res->flags); - if (ret) - goto error_resize; - + if (dev->bus->self) { + ret = pci_reassign_bridge_resources(dev->bus->self, res->flags); + if (ret) + goto error_resize; + } return 0; error_resize: diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c index e69cac84b605..850cfeb74608 100644 --- a/drivers/pci/switch/switchtec.c +++ b/drivers/pci/switch/switchtec.c @@ -25,7 +25,7 @@ static int max_devices = 16; module_param(max_devices, int, 0644); MODULE_PARM_DESC(max_devices, "max number of switchtec device instances"); -static bool use_dma_mrpc = 1; +static bool use_dma_mrpc = true; module_param(use_dma_mrpc, bool, 0644); MODULE_PARM_DESC(use_dma_mrpc, "Enable the use of the DMA MRPC feature"); |
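For orientation, a minimal sketch (illustrative only, not part of the merge above) of the calling pattern the AER changes converge on: helpers cache the AER capability offset once from dev->aer_cap and bail out via the new pcie_aer_is_native() test, which replaces the separate HEST firmware-first and aer_cap checks. The function name example_clear_cor_status() is hypothetical; pcie_aer_is_native(), dev->aer_cap and the register offsets are taken from the diff itself.

/* Illustrative sketch only -- not part of the patch above. */
static int example_clear_cor_status(struct pci_dev *dev)
{
	int aer = dev->aer_cap;		/* cached AER capability offset */
	u32 status;

	/* Touch AER registers only when the OS owns them (_OSC or pcie_ports_native) */
	if (!pcie_aer_is_native(dev))
		return -EIO;

	/* Correctable Error Status is RW1C: write back what was read to clear it */
	pci_read_config_dword(dev, aer + PCI_ERR_COR_STATUS, &status);
	pci_write_config_dword(dev, aer + PCI_ERR_COR_STATUS, status);

	return 0;
}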