| author | Greg Kroah-Hartman | 2021-10-30 10:48:32 +0200 |
| --- | --- | --- |
| committer | Greg Kroah-Hartman | 2021-10-30 10:48:32 +0200 |
| commit | 28eb3b363df76cb5fdffc5ef0498ca7dcedea4e7 (patch) | |
| tree | 99bd13f67b1d9f94132f9ca76848abda5b24bba7 | |
| parent | 27182be962006916ed3d43f65b4ff88d2851dadd (diff) | |
| parent | 561ced0bb90a4be298b7db5fb54f29731d74a3f6 (diff) | |
Merge tag 'coresight-next-v5.16.v3' of gitolite.kernel.org:pub/scm/linux/kernel/git/coresight/linux into char-misc-next
Mathieu writes:
Coresight changes for v5.16
- A new option to make coresight cpu-debug capabilities available as early
as possible in the kernel boot process.
- Make trace sessions more robust by coping with scenarios where events
are scheduled on CPUs that can't reach the selected sink.
- A set of improvements to make the TMC-ETR driver more efficient.
- Enhancements to the TRBE driver to work around several errata.
- An enhancement to make the AXI burst size configurable for TMC devices
that can't work with the default value (see the encoding sketch after
the tag description below).
- A fix in the CTI module to use the correct device when calling
pm_runtime_put().
- The addition of the Kryo-5xx device to the list of supported ETMs.
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
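The new arm,max-burst-size devicetree property (documented in the coresight.txt change further down) takes a value in the range 0..15, i.e. from one up to sixteen data transfers per AXI burst; the TMC-ETR driver falls back to the old fixed 16-transfer burst when the property is absent or out of range. Below is a minimal, stand-alone C sketch of that encoding, not the driver itself: the macros mirror TMC_AXICTL_WR_BURST()/TMC_AXICTL_WR_BURST_16 from the diff, while pick_burst_size() is an illustrative stand-in for the driver's tmc_etr_get_max_burst_size().

```c
/*
 * Stand-alone sketch (plain user-space C) of how the "arm,max-burst-size"
 * value is folded into the TMC AXICTL register. Macros mirror the diff;
 * pick_burst_size() is an illustrative stand-in for the driver helper.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TMC_AXICTL_WR_BURST(v)	(((v) & 0xfu) << 8)	/* WR_BURST_LEN, bits [11:8] */
#define TMC_AXICTL_WR_BURST_16	0xfu			/* 16 data transfers per burst */

static uint32_t pick_burst_size(bool property_present, uint32_t dt_value)
{
	/* Missing or out-of-range property: keep the historical default. */
	if (!property_present || dt_value > 0xf)
		return TMC_AXICTL_WR_BURST_16;
	return dt_value;
}

int main(void)
{
	/* A platform that only copes with 8 data transfers per burst (value 7). */
	uint32_t burst = pick_burst_size(true, 7);

	printf("AXICTL WR_BURST field: 0x%03x\n", TMC_AXICTL_WR_BURST(burst));
	/* No property at all: fall back to 16 transfers per burst. */
	printf("Default WR_BURST field: 0x%03x\n",
	       TMC_AXICTL_WR_BURST(pick_burst_size(false, 0)));
	return 0;
}
```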
* tag 'coresight-next-v5.16.v3' of gitolite.kernel.org:pub/scm/linux/kernel/git/coresight/linux: (39 commits)
arm64: errata: Enable TRBE workaround for write to out-of-range address
arm64: errata: Enable workaround for TRBE overwrite in FILL mode
coresight: trbe: Work around write to out of range
coresight: trbe: Make sure we have enough space
coresight: trbe: Add a helper to determine the minimum buffer size
coresight: trbe: Workaround TRBE errata overwrite in FILL mode
coresight: trbe: Add infrastructure for Errata handling
coresight: trbe: Allow driver to choose a different alignment
coresight: trbe: Decouple buffer base from the hardware base
coresight: trbe: Add a helper to pad a given buffer area
coresight: trbe: Add a helper to calculate the trace generated
coresight: trbe: Defer the probe on offline CPUs
coresight: trbe: Fix incorrect access of the sink specific data
coresight: etm4x: Add ETM PID for Kryo-5XX
coresight: trbe: Prohibit trace before disabling TRBE
coresight: trbe: End the AUX handle on truncation
coresight: trbe: Do not truncate buffer on IRQ
coresight: trbe: Fix handling of spurious interrupts
coresight: trbe: irq handler: Do not disable TRBE if no action is needed
coresight: trbe: Unify the enabling sequence
...
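The "coresight: trbe: Add infrastructure for Errata handling" change listed above records, per TRBE instance, which of the arm64 erratum cpucaps apply to its CPU, so that hot-path checks during a trace session become a cheap bitmap test instead of repeated this_cpu_has_cap() calls. The following is a stand-alone sketch of that scheme under stated assumptions: cpu_has_cap() is a stub standing in for the kernel's this_cpu_has_cap(), and the cap numbers are illustrative, not real cpucap values.

```c
/*
 * Sketch of per-TRBE erratum tracking: each driver-local workaround index
 * maps to a cpucap, detection runs once at probe time, and later checks
 * only test a local bitmap. cpu_has_cap() is a stub, cap numbers invented.
 */
#include <stdbool.h>
#include <stdio.h>

enum {
	TRBE_WORKAROUND_OVERWRITE_FILL_MODE,
	TRBE_WORKAROUND_WRITE_OUT_OF_RANGE,
	TRBE_ERRATA_MAX,
};

/* Illustrative stand-ins for the ARM64_WORKAROUND_TRBE_* cpucaps. */
static const int trbe_errata_cpucaps[TRBE_ERRATA_MAX] = {
	[TRBE_WORKAROUND_OVERWRITE_FILL_MODE] = 60,
	[TRBE_WORKAROUND_WRITE_OUT_OF_RANGE]  = 61,
};

struct trbe_cpudata {
	unsigned long errata;	/* one bit per TRBE_WORKAROUND_* index */
};

static bool cpu_has_cap(int cap)
{
	/* Stub: pretend only the FILL-mode erratum applies on this CPU. */
	return cap == 60;
}

static void trbe_check_errata(struct trbe_cpudata *cpudata)
{
	for (int i = 0; i < TRBE_ERRATA_MAX; i++)
		if (cpu_has_cap(trbe_errata_cpucaps[i]))
			cpudata->errata |= 1UL << i;
}

static bool trbe_has_erratum(const struct trbe_cpudata *cpudata, int i)
{
	return i < TRBE_ERRATA_MAX && (cpudata->errata & (1UL << i));
}

int main(void)
{
	struct trbe_cpudata cpudata = { 0 };

	trbe_check_errata(&cpudata);	/* done once, at probe time */
	printf("overwrite-in-FILL-mode workaround needed: %d\n",
	       trbe_has_erratum(&cpudata, TRBE_WORKAROUND_OVERWRITE_FILL_MODE));
	printf("write-out-of-range workaround needed:     %d\n",
	       trbe_has_erratum(&cpudata, TRBE_WORKAROUND_WRITE_OUT_OF_RANGE));
	return 0;
}
```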
20 files changed, 905 insertions, 154 deletions
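Among the changes in the diff below, tsb_csync() in arch/arm64/include/asm/barrier.h becomes a doubled barrier on CPUs affected by errata 2054223/2067961, because a single TSB CSYNC may fail to flush the trace while the PE is in a trace prohibited state. A minimal, arm64-only sketch of that pattern follows; have_tsb_flush_erratum() is a stub standing in for the kernel's cpus_have_final_cap(ARM64_WORKAROUND_TSB_FLUSH_FAILURE).

```c
/*
 * arm64-only, stand-alone sketch of the doubled TSB barrier: on affected
 * CPUs a single TSB CSYNC ("hint #18") may fail to flush trace, so a
 * second one is issued. have_tsb_flush_erratum() is a stub.
 */
#include <stdbool.h>

#define __tsb_csync()	asm volatile("hint #18" : : : "memory")

static bool have_tsb_flush_erratum(void)
{
	return true;	/* stub: assume an affected CPU for illustration */
}

static inline void tsb_csync(void)
{
	/*
	 * The two barriers need not be strictly back to back, as long as
	 * the CPU stays in a trace prohibited state in between.
	 */
	if (have_tsb_flush_erratum())
		__tsb_csync();
	__tsb_csync();
}

int main(void)
{
	tsb_csync();
	return 0;
}
```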
diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst index d410a47ffa57..5342e895fb60 100644 --- a/Documentation/arm64/silicon-errata.rst +++ b/Documentation/arm64/silicon-errata.rst @@ -92,12 +92,24 @@ stable kernels. +----------------+-----------------+-----------------+-----------------------------+ | ARM | Cortex-A77 | #1508412 | ARM64_ERRATUM_1508412 | +----------------+-----------------+-----------------+-----------------------------+ +| ARM | Cortex-A710 | #2119858 | ARM64_ERRATUM_2119858 | ++----------------+-----------------+-----------------+-----------------------------+ +| ARM | Cortex-A710 | #2054223 | ARM64_ERRATUM_2054223 | ++----------------+-----------------+-----------------+-----------------------------+ +| ARM | Cortex-A710 | #2224489 | ARM64_ERRATUM_2224489 | ++----------------+-----------------+-----------------+-----------------------------+ | ARM | Neoverse-N1 | #1188873,1418040| ARM64_ERRATUM_1418040 | +----------------+-----------------+-----------------+-----------------------------+ | ARM | Neoverse-N1 | #1349291 | N/A | +----------------+-----------------+-----------------+-----------------------------+ | ARM | Neoverse-N1 | #1542419 | ARM64_ERRATUM_1542419 | +----------------+-----------------+-----------------+-----------------------------+ +| ARM | Neoverse-N2 | #2139208 | ARM64_ERRATUM_2139208 | ++----------------+-----------------+-----------------+-----------------------------+ +| ARM | Neoverse-N2 | #2067961 | ARM64_ERRATUM_2067961 | ++----------------+-----------------+-----------------+-----------------------------+ +| ARM | Neoverse-N2 | #2253138 | ARM64_ERRATUM_2253138 | ++----------------+-----------------+-----------------+-----------------------------+ | ARM | MMU-500 | #841119,826419 | N/A | +----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+ diff --git a/Documentation/devicetree/bindings/arm/coresight.txt b/Documentation/devicetree/bindings/arm/coresight.txt index 7f9c1ca87487..c68d93a35b6c 100644 --- a/Documentation/devicetree/bindings/arm/coresight.txt +++ b/Documentation/devicetree/bindings/arm/coresight.txt @@ -127,6 +127,11 @@ its hardware characteristcs. * arm,scatter-gather: boolean. Indicates that the TMC-ETR can safely use the SG mode on this system. + * arm,max-burst-size: The maximum burst size initiated by TMC on the + AXI master interface. The burst size can be in the range [0..15], + the setting supports one data transfer per burst up to a maximum of + 16 data transfers per burst. + * Optional property for CATU : * interrupts : Exactly one SPI may be listed for reporting the address error diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index fee914c716aa..ebb77b894630 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -666,6 +666,117 @@ config ARM64_ERRATUM_1508412 If unsure, say Y. +config ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE + bool + +config ARM64_ERRATUM_2119858 + bool "Cortex-A710: 2119858: workaround TRBE overwriting trace data in FILL mode" + default y + depends on CORESIGHT_TRBE + select ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE + help + This option adds the workaround for ARM Cortex-A710 erratum 2119858. + + Affected Cortex-A710 cores could overwrite up to 3 cache lines of trace + data at the base of the buffer (pointed to by TRBASER_EL1) in FILL mode in + the event of a WRAP event. 
+ + Work around the issue by always making sure we move the TRBPTR_EL1 by + 256 bytes before enabling the buffer and filling the first 256 bytes of + the buffer with ETM ignore packets upon disabling. + + If unsure, say Y. + +config ARM64_ERRATUM_2139208 + bool "Neoverse-N2: 2139208: workaround TRBE overwriting trace data in FILL mode" + default y + depends on CORESIGHT_TRBE + select ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE + help + This option adds the workaround for ARM Neoverse-N2 erratum 2139208. + + Affected Neoverse-N2 cores could overwrite up to 3 cache lines of trace + data at the base of the buffer (pointed to by TRBASER_EL1) in FILL mode in + the event of a WRAP event. + + Work around the issue by always making sure we move the TRBPTR_EL1 by + 256 bytes before enabling the buffer and filling the first 256 bytes of + the buffer with ETM ignore packets upon disabling. + + If unsure, say Y. + +config ARM64_WORKAROUND_TSB_FLUSH_FAILURE + bool + +config ARM64_ERRATUM_2054223 + bool "Cortex-A710: 2054223: workaround TSB instruction failing to flush trace" + default y + select ARM64_WORKAROUND_TSB_FLUSH_FAILURE + help + Enable workaround for ARM Cortex-A710 erratum 2054223 + + Affected cores may fail to flush the trace data on a TSB instruction, when + the PE is in trace prohibited state. This will cause losing a few bytes + of the trace cached. + + Workaround is to issue two TSB consecutively on affected cores. + + If unsure, say Y. + +config ARM64_ERRATUM_2067961 + bool "Neoverse-N2: 2067961: workaround TSB instruction failing to flush trace" + default y + select ARM64_WORKAROUND_TSB_FLUSH_FAILURE + help + Enable workaround for ARM Neoverse-N2 erratum 2067961 + + Affected cores may fail to flush the trace data on a TSB instruction, when + the PE is in trace prohibited state. This will cause losing a few bytes + of the trace cached. + + Workaround is to issue two TSB consecutively on affected cores. + + If unsure, say Y. + +config ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE + bool + +config ARM64_ERRATUM_2253138 + bool "Neoverse-N2: 2253138: workaround TRBE writing to address out-of-range" + depends on CORESIGHT_TRBE + default y + select ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE + help + This option adds the workaround for ARM Neoverse-N2 erratum 2253138. + + Affected Neoverse-N2 cores might write to an out-of-range address, not reserved + for TRBE. Under some conditions, the TRBE might generate a write to the next + virtually addressed page following the last page of the TRBE address space + (i.e., the TRBLIMITR_EL1.LIMIT), instead of wrapping around to the base. + + Work around this in the driver by always making sure that there is a + page beyond the TRBLIMITR_EL1.LIMIT, within the space allowed for the TRBE. + + If unsure, say Y. + +config ARM64_ERRATUM_2224489 + bool "Cortex-A710: 2224489: workaround TRBE writing to address out-of-range" + depends on CORESIGHT_TRBE + default y + select ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE + help + This option adds the workaround for ARM Cortex-A710 erratum 2224489. + + Affected Cortex-A710 cores might write to an out-of-range address, not reserved + for TRBE. Under some conditions, the TRBE might generate a write to the next + virtually addressed page following the last page of the TRBE address space + (i.e., the TRBLIMITR_EL1.LIMIT), instead of wrapping around to the base. + + Work around this in the driver by always making sure that there is a + page beyond the TRBLIMITR_EL1.LIMIT, within the space allowed for the TRBE. + + If unsure, say Y. 
+ config CAVIUM_ERRATUM_22375 bool "Cavium erratum 22375, 24313" default y diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h index 451e11e5fd23..1c5a00598458 100644 --- a/arch/arm64/include/asm/barrier.h +++ b/arch/arm64/include/asm/barrier.h @@ -23,7 +23,7 @@ #define dsb(opt) asm volatile("dsb " #opt : : : "memory") #define psb_csync() asm volatile("hint #17" : : : "memory") -#define tsb_csync() asm volatile("hint #18" : : : "memory") +#define __tsb_csync() asm volatile("hint #18" : : : "memory") #define csdb() asm volatile("hint #20" : : : "memory") #ifdef CONFIG_ARM64_PSEUDO_NMI @@ -46,6 +46,20 @@ #define dma_rmb() dmb(oshld) #define dma_wmb() dmb(oshst) + +#define tsb_csync() \ + do { \ + /* \ + * CPUs affected by Arm Erratum 2054223 or 2067961 needs \ + * another TSB to ensure the trace is flushed. The barriers \ + * don't have to be strictly back to back, as long as the \ + * CPU is in trace prohibited state. \ + */ \ + if (cpus_have_final_cap(ARM64_WORKAROUND_TSB_FLUSH_FAILURE)) \ + __tsb_csync(); \ + __tsb_csync(); \ + } while (0) + /* * Generate a mask for array_index__nospec() that is ~0UL when 0 <= idx < sz * and 0 otherwise. diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h index 6231e1f0abe7..19b8441aa8f2 100644 --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -73,6 +73,8 @@ #define ARM_CPU_PART_CORTEX_A76 0xD0B #define ARM_CPU_PART_NEOVERSE_N1 0xD0C #define ARM_CPU_PART_CORTEX_A77 0xD0D +#define ARM_CPU_PART_CORTEX_A710 0xD47 +#define ARM_CPU_PART_NEOVERSE_N2 0xD49 #define APM_CPU_PART_POTENZA 0x000 @@ -113,6 +115,8 @@ #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1) #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77) +#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) +#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX) diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index e2c20c036442..9e1c1aef9ebd 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -340,6 +340,42 @@ static const struct midr_range erratum_1463225[] = { }; #endif +#ifdef CONFIG_ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE +static const struct midr_range trbe_overwrite_fill_mode_cpus[] = { +#ifdef CONFIG_ARM64_ERRATUM_2139208 + MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), +#endif +#ifdef CONFIG_ARM64_ERRATUM_2119858 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A710), +#endif + {}, +}; +#endif /* CONFIG_ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE */ + +#ifdef CONFIG_ARM64_WORKAROUND_TSB_FLUSH_FAILURE +static const struct midr_range tsb_flush_fail_cpus[] = { +#ifdef CONFIG_ARM64_ERRATUM_2067961 + MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), +#endif +#ifdef CONFIG_ARM64_ERRATUM_2054223 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A710), +#endif + {}, +}; +#endif /* CONFIG_ARM64_WORKAROUND_TSB_FLUSH_FAILURE */ + +#ifdef CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE +static struct midr_range trbe_write_out_of_range_cpus[] = { +#ifdef CONFIG_ARM64_ERRATUM_2253138 + MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), +#endif +#ifdef 
CONFIG_ARM64_ERRATUM_2224489 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A710), +#endif + {}, +}; +#endif /* CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE */ + const struct arm64_cpu_capabilities arm64_errata[] = { #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE { @@ -534,6 +570,34 @@ const struct arm64_cpu_capabilities arm64_errata[] = { ERRATA_MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL), }, #endif +#ifdef CONFIG_ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE + { + /* + * The erratum work around is handled within the TRBE + * driver and can be applied per-cpu. So, we can allow + * a late CPU to come online with this erratum. + */ + .desc = "ARM erratum 2119858 or 2139208", + .capability = ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE, + .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE, + CAP_MIDR_RANGE_LIST(trbe_overwrite_fill_mode_cpus), + }, +#endif +#ifdef CONFIG_ARM64_WORKAROUND_TSB_FLUSH_FAILURE + { + .desc = "ARM erratum 2067961 or 2054223", + .capability = ARM64_WORKAROUND_TSB_FLUSH_FAILURE, + ERRATA_MIDR_RANGE_LIST(tsb_flush_fail_cpus), + }, +#endif +#ifdef CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE + { + .desc = "ARM erratum 2253138 or 2224489", + .capability = ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE, + .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE, + CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus), + }, +#endif { } }; diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps index 49305c2e6dfd..90628638e0f9 100644 --- a/arch/arm64/tools/cpucaps +++ b/arch/arm64/tools/cpucaps @@ -53,6 +53,9 @@ WORKAROUND_1418040 WORKAROUND_1463225 WORKAROUND_1508412 WORKAROUND_1542419 +WORKAROUND_TRBE_OVERWRITE_FILL_MODE +WORKAROUND_TSB_FLUSH_FAILURE +WORKAROUND_TRBE_WRITE_OUT_OF_RANGE WORKAROUND_CAVIUM_23154 WORKAROUND_CAVIUM_27456 WORKAROUND_CAVIUM_30115 diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig index f026e5c0e777..514a9b8086e3 100644 --- a/drivers/hwtracing/coresight/Kconfig +++ b/drivers/hwtracing/coresight/Kconfig @@ -150,6 +150,19 @@ config CORESIGHT_CPU_DEBUG To compile this driver as a module, choose M here: the module will be called coresight-cpu-debug. +config CORESIGHT_CPU_DEBUG_DEFAULT_ON + bool "Enable CoreSight CPU Debug by default" + depends on CORESIGHT_CPU_DEBUG + help + Say Y here to enable the CoreSight Debug panic-debug by default. This + can also be enabled via debugfs, but this ensures the debug feature + is enabled as early as possible. + + Has the same effect as setting coresight_cpu_debug.enable=1 on the + kernel command line. + + Say N if unsure. 
+ config CORESIGHT_CTI tristate "CoreSight Cross Trigger Interface (CTI) driver" depends on ARM || ARM64 diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c index 00de46565bc4..8845ec4b4402 100644 --- a/drivers/hwtracing/coresight/coresight-cpu-debug.c +++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c @@ -105,7 +105,7 @@ static DEFINE_PER_CPU(struct debug_drvdata *, debug_drvdata); static int debug_count; static struct dentry *debug_debugfs_dir; -static bool debug_enable; +static bool debug_enable = IS_ENABLED(CONFIG_CORESIGHT_CPU_DEBUG_DEFAULT_ON); module_param_named(enable, debug_enable, bool, 0600); MODULE_PARM_DESC(enable, "Control to enable coresight CPU debug functionality"); diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c index e2a3620cbf48..8988b2ed2ea6 100644 --- a/drivers/hwtracing/coresight/coresight-cti-core.c +++ b/drivers/hwtracing/coresight/coresight-cti-core.c @@ -175,7 +175,7 @@ static int cti_disable_hw(struct cti_drvdata *drvdata) coresight_disclaim_device_unlocked(csdev); CS_LOCK(drvdata->base); spin_unlock(&drvdata->spinlock); - pm_runtime_put(dev); + pm_runtime_put(dev->parent); return 0; /* not disabled this call */ diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c index f775cbee12b8..efa39820acec 100644 --- a/drivers/hwtracing/coresight/coresight-etb10.c +++ b/drivers/hwtracing/coresight/coresight-etb10.c @@ -557,9 +557,8 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev, /* * In snapshot mode we simply increment the head by the number of byte - * that were written. User space function cs_etm_find_snapshot() will - * figure out how many bytes to get from the AUX buffer based on the - * position of the head. + * that were written. User space will figure out how many bytes to get + * from the AUX buffer based on the position of the head. */ if (buf->snapshot) handle->head += to_read; diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c index 8ebd728d3a80..c039b6ae206f 100644 --- a/drivers/hwtracing/coresight/coresight-etm-perf.c +++ b/drivers/hwtracing/coresight/coresight-etm-perf.c @@ -452,9 +452,14 @@ static void etm_event_start(struct perf_event *event, int flags) * sink from this ETM. We can't do much in this case if * the sink was specified or hinted to the driver. For * now, simply don't record anything on this ETM. + * + * As such we pretend that everything is fine, and let + * it continue without actually tracing. The event could + * continue tracing when it moves to a CPU where it is + * reachable to a sink. 
*/ if (!cpumask_test_cpu(cpu, &event_data->mask)) - goto fail_end_stop; + goto out; path = etm_event_cpu_path(event_data, cpu); /* We need a sink, no need to continue without one */ @@ -466,26 +471,32 @@ static void etm_event_start(struct perf_event *event, int flags) if (coresight_enable_path(path, CS_MODE_PERF, handle)) goto fail_end_stop; - /* Tell the perf core the event is alive */ - event->hw.state = 0; - /* Finally enable the tracer */ if (source_ops(csdev)->enable(csdev, event, CS_MODE_PERF)) goto fail_disable_path; +out: + /* Tell the perf core the event is alive */ + event->hw.state = 0; /* Save the event_data for this ETM */ ctxt->event_data = event_data; -out: return; fail_disable_path: coresight_disable_path(path); fail_end_stop: - perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); - perf_aux_output_end(handle, 0); + /* + * Check if the handle is still associated with the event, + * to handle cases where if the sink failed to start the + * trace and TRUNCATED the handle already. + */ + if (READ_ONCE(handle->event)) { + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + perf_aux_output_end(handle, 0); + } fail: event->hw.state = PERF_HES_STOPPED; - goto out; + return; } static void etm_event_stop(struct perf_event *event, int mode) @@ -517,6 +528,19 @@ static void etm_event_stop(struct perf_event *event, int mode) if (WARN_ON(!event_data)) return; + /* + * Check if this ETM was allowed to trace, as decided at + * etm_setup_aux(). If it wasn't allowed to trace, then + * nothing needs to be torn down other than outputting a + * zero sized record. + */ + if (handle->event && (mode & PERF_EF_UPDATE) && + !cpumask_test_cpu(cpu, &event_data->mask)) { + event->hw.state = PERF_HES_STOPPED; + perf_aux_output_end(handle, 0); + return; + } + if (!csdev) return; @@ -550,7 +574,21 @@ static void etm_event_stop(struct perf_event *event, int mode) size = sink_ops(sink)->update_buffer(sink, handle, event_data->snk_config); - perf_aux_output_end(handle, size); + /* + * Make sure the handle is still valid as the + * sink could have closed it from an IRQ. + * The sink driver must handle the race with + * update_buffer() and IRQ. Thus either we + * should get a valid handle and valid size + * (which may be 0). + * + * But we should never get a non-zero size with + * an invalid handle. + */ + if (READ_ONCE(handle->event)) + perf_aux_output_end(handle, size); + else + WARN_ON(size); } /* Disabling the path make its elements available to other sessions */ diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c index e24252eaf8e4..86a313857b58 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c @@ -40,6 +40,7 @@ #include "coresight-etm4x.h" #include "coresight-etm-perf.h" #include "coresight-etm4x-cfg.h" +#include "coresight-self-hosted-trace.h" #include "coresight-syscfg.h" static int boot_enable; @@ -238,6 +239,45 @@ struct etm4_enable_arg { int rc; }; +/* + * etm4x_prohibit_trace - Prohibit the CPU from tracing at all ELs. + * When the CPU supports FEAT_TRF, we could move the ETM to a trace + * prohibited state by filtering the Exception levels via TRFCR_EL1. 
+ */ +static void etm4x_prohibit_trace(struct etmv4_drvdata *drvdata) +{ + /* If the CPU doesn't support FEAT_TRF, nothing to do */ + if (!drvdata->trfcr) + return; + cpu_prohibit_trace(); +} + +/* + * etm4x_allow_trace - Allow CPU tracing in the respective ELs, + * as configured by the drvdata->config.mode for the current + * session. Even though we have TRCVICTLR bits to filter the + * trace in the ELs, it doesn't prevent the ETM from generating + * a packet (e.g, TraceInfo) that might contain the addresses from + * the excluded levels. Thus we use the additional controls provided + * via the Trace Filtering controls (FEAT_TRF) to make sure no trace + * is generated for the excluded ELs. + */ +static void etm4x_allow_trace(struct etmv4_drvdata *drvdata) +{ + u64 trfcr = drvdata->trfcr; + + /* If the CPU doesn't support FEAT_TRF, nothing to do */ + if (!trfcr) + return; + + if (drvdata->config.mode & ETM_MODE_EXCL_KERN) + trfcr &= ~TRFCR_ELx_ExTRE; + if (drvdata->config.mode & ETM_MODE_EXCL_USER) + trfcr &= ~TRFCR_ELx_E0TRE; + + write_trfcr(trfcr); +} + #ifdef CONFIG_ETM4X_IMPDEF_FEATURE #define HISI_HIP08_AMBA_ID 0x000b6d01 @@ -442,6 +482,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata) if (etm4x_is_ete(drvdata)) etm4x_relaxed_write32(csa, TRCRSR_TA, TRCRSR); + etm4x_allow_trace(drvdata); /* Enable the trace unit */ etm4x_relaxed_write32(csa, 1, TRCPRGCTLR); @@ -737,7 +778,6 @@ static int etm4_enable(struct coresight_device *csdev, static void etm4_disable_hw(void *info) { u32 control; - u64 trfcr; struct etmv4_drvdata *drvdata = info; struct etmv4_config *config = &drvdata->config; struct coresight_device *csdev = drvdata->csdev; @@ -764,12 +804,7 @@ static void etm4_disable_hw(void *info) * If the CPU supports v8.4 Trace filter Control, * set the ETM to trace prohibited region. 
*/ - if (drvdata->trfc) { - trfcr = read_sysreg_s(SYS_TRFCR_EL1); - write_sysreg_s(trfcr & ~(TRFCR_ELx_ExTRE | TRFCR_ELx_E0TRE), - SYS_TRFCR_EL1); - isb(); - } + etm4x_prohibit_trace(drvdata); /* * Make sure everything completes before disabling, as recommended * by section 7.3.77 ("TRCVICTLR, ViewInst Main Control Register, @@ -785,9 +820,6 @@ static void etm4_disable_hw(void *info) if (coresight_timeout(csa, TRCSTATR, TRCSTATR_PMSTABLE_BIT, 1)) dev_err(etm_dev, "timeout while waiting for PM stable Trace Status\n"); - if (drvdata->trfc) - write_sysreg_s(trfcr, SYS_TRFCR_EL1); - /* read the status of the single shot comparators */ for (i = 0; i < drvdata->nr_ss_cmp; i++) { config->ss_status[i] = @@ -989,15 +1021,15 @@ static bool etm4_init_csdev_access(struct etmv4_drvdata *drvdata, return false; } -static void cpu_enable_tracing(struct etmv4_drvdata *drvdata) +static void cpu_detect_trace_filtering(struct etmv4_drvdata *drvdata) { u64 dfr0 = read_sysreg(id_aa64dfr0_el1); u64 trfcr; + drvdata->trfcr = 0; if (!cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRACE_FILT_SHIFT)) return; - drvdata->trfc = true; /* * If the CPU supports v8.4 SelfHosted Tracing, enable * tracing at the kernel EL and EL0, forcing to use the @@ -1011,7 +1043,7 @@ static void cpu_enable_tracing(struct etmv4_drvdata *drvdata) if (is_kernel_in_hyp_mode()) trfcr |= TRFCR_EL2_CX; - write_sysreg_s(trfcr, SYS_TRFCR_EL1); + drvdata->trfcr = trfcr; } static void etm4_init_arch_data(void *info) @@ -1202,7 +1234,7 @@ static void etm4_init_arch_data(void *info) /* NUMCNTR, bits[30:28] number of counters available for tracing */ drvdata->nr_cntr = BMVAL(etmidr5, 28, 30); etm4_cs_lock(drvdata, csa); - cpu_enable_tracing(drvdata); + cpu_detect_trace_filtering(drvdata); } static inline u32 etm4_get_victlr_access_type(struct etmv4_config *config) @@ -1554,7 +1586,7 @@ static void etm4_init_trace_id(struct etmv4_drvdata *drvdata) drvdata->trcid = coresight_get_trace_id(drvdata->cpu); } -static int etm4_cpu_save(struct etmv4_drvdata *drvdata) +static int __etm4_cpu_save(struct etmv4_drvdata *drvdata) { int i, ret = 0; struct etmv4_save_state *state; @@ -1693,7 +1725,23 @@ out: return ret; } -static void etm4_cpu_restore(struct etmv4_drvdata *drvdata) +static int etm4_cpu_save(struct etmv4_drvdata *drvdata) +{ + int ret = 0; + + /* Save the TRFCR irrespective of whether the ETM is ON */ + if (drvdata->trfcr) + drvdata->save_trfcr = read_trfcr(); + /* + * Save and restore the ETM Trace registers only if + * the ETM is active. 
+ */ + if (local_read(&drvdata->mode) && drvdata->save_state) + ret = __etm4_cpu_save(drvdata); + return ret; +} + +static void __etm4_cpu_restore(struct etmv4_drvdata *drvdata) { int i; struct etmv4_save_state *state = drvdata->save_state; @@ -1789,6 +1837,14 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata) etm4_cs_lock(drvdata, csa); } +static void etm4_cpu_restore(struct etmv4_drvdata *drvdata) +{ + if (drvdata->trfcr) + write_trfcr(drvdata->save_trfcr); + if (drvdata->state_needs_restore) + __etm4_cpu_restore(drvdata); +} + static int etm4_cpu_pm_notify(struct notifier_block *nb, unsigned long cmd, void *v) { @@ -1800,23 +1856,17 @@ static int etm4_cpu_pm_notify(struct notifier_block *nb, unsigned long cmd, drvdata = etmdrvdata[cpu]; - if (!drvdata->save_state) - return NOTIFY_OK; - if (WARN_ON_ONCE(drvdata->cpu != cpu)) return NOTIFY_BAD; switch (cmd) { case CPU_PM_ENTER: - /* save the state if self-hosted coresight is in use */ - if (local_read(&drvdata->mode)) - if (etm4_cpu_save(drvdata)) - return NOTIFY_BAD; + if (etm4_cpu_save(drvdata)) + return NOTIFY_BAD; break; case CPU_PM_EXIT: case CPU_PM_ENTER_FAILED: - if (drvdata->state_needs_restore) - etm4_cpu_restore(drvdata); + etm4_cpu_restore(drvdata); break; default: return NOTIFY_DONE; @@ -2099,6 +2149,7 @@ static const struct amba_id etm4_ids[] = { CS_AMBA_UCI_ID(0x000bb803, uci_id_etm4),/* Qualcomm Kryo 385 Cortex-A75 */ CS_AMBA_UCI_ID(0x000bb805, uci_id_etm4),/* Qualcomm Kryo 4XX Cortex-A55 */ CS_AMBA_UCI_ID(0x000bb804, uci_id_etm4),/* Qualcomm Kryo 4XX Cortex-A76 */ + CS_AMBA_UCI_ID(0x000bbd0d, uci_id_etm4),/* Qualcomm Kryo 5XX Cortex-A77 */ CS_AMBA_UCI_ID(0x000cc0af, uci_id_etm4),/* Marvell ThunderX2 */ CS_AMBA_UCI_ID(0x000b6d01, uci_id_etm4),/* HiSilicon-Hip08 */ CS_AMBA_UCI_ID(0x000b6d02, uci_id_etm4),/* HiSilicon-Hip09 */ diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h index e5b79bdb9851..3c4d69b096ca 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x.h +++ b/drivers/hwtracing/coresight/coresight-etm4x.h @@ -919,8 +919,12 @@ struct etmv4_save_state { * @nooverflow: Indicate if overflow prevention is supported. * @atbtrig: If the implementation can support ATB triggers * @lpoverride: If the implementation can support low-power state over. - * @trfc: If the implementation supports Arm v8.4 trace filter controls. + * @trfcr: If the CPU supports FEAT_TRF, value of the TRFCR_ELx that + * allows tracing at all ELs. We don't want to compute this + * at runtime, due to the additional setting of TRFCR_CX when + * in EL2. Otherwise, 0. * @config: structure holding configuration parameters. + * @save_trfcr: Saved TRFCR_EL1 register during a CPU PM event. * @save_state: State to be preserved across power loss * @state_needs_restore: True when there is context to restore after PM exit * @skip_power_up: Indicates if an implementation can skip powering up @@ -971,8 +975,9 @@ struct etmv4_drvdata { bool nooverflow; bool atbtrig; bool lpoverride; - bool trfc; + u64 trfcr; struct etmv4_config config; + u64 save_trfcr; struct etmv4_save_state *save_state; bool state_needs_restore; bool skip_power_up; diff --git a/drivers/hwtracing/coresight/coresight-self-hosted-trace.h b/drivers/hwtracing/coresight/coresight-self-hosted-trace.h new file mode 100644 index 000000000000..53840a2c41f2 --- /dev/null +++ b/drivers/hwtracing/coresight/coresight-self-hosted-trace.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Arm v8 Self-Hosted trace support. 
+ * + * Copyright (C) 2021 ARM Ltd. + */ + +#ifndef __CORESIGHT_SELF_HOSTED_TRACE_H +#define __CORESIGHT_SELF_HOSTED_TRACE_H + +#include <asm/sysreg.h> + +static inline u64 read_trfcr(void) +{ + return read_sysreg_s(SYS_TRFCR_EL1); +} + +static inline void write_trfcr(u64 val) +{ + write_sysreg_s(val, SYS_TRFCR_EL1); + isb(); +} + +static inline u64 cpu_prohibit_trace(void) +{ + u64 trfcr = read_trfcr(); + + /* Prohibit tracing at EL0 & the kernel EL */ + write_trfcr(trfcr & ~(TRFCR_ELx_ExTRE | TRFCR_ELx_E0TRE)); + /* Return the original value of the TRFCR */ + return trfcr; +} +#endif /* __CORESIGHT_SELF_HOSTED_TRACE_H */ diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c index 74c6323d4d6a..d0276af82494 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-core.c +++ b/drivers/hwtracing/coresight/coresight-tmc-core.c @@ -432,6 +432,21 @@ static u32 tmc_etr_get_default_buffer_size(struct device *dev) return size; } +static u32 tmc_etr_get_max_burst_size(struct device *dev) +{ + u32 burst_size; + + if (fwnode_property_read_u32(dev->fwnode, "arm,max-burst-size", + &burst_size)) + return TMC_AXICTL_WR_BURST_16; + + /* Only permissible values are 0 to 15 */ + if (burst_size > 0xF) + burst_size = TMC_AXICTL_WR_BURST_16; + + return burst_size; +} + static int tmc_probe(struct amba_device *adev, const struct amba_id *id) { int ret = 0; @@ -469,10 +484,12 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id) /* This device is not associated with a session */ drvdata->pid = -1; - if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) + if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { drvdata->size = tmc_etr_get_default_buffer_size(dev); - else + drvdata->max_burst_size = tmc_etr_get_max_burst_size(dev); + } else { drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4; + } desc.dev = dev; desc.groups = coresight_tmc_groups; diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c index cd0fb7bfba68..4c4cbd1f7258 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etf.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c @@ -546,13 +546,17 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, /* * In snapshot mode we simply increment the head by the number of byte - * that were written. User space function cs_etm_find_snapshot() will - * figure out how many bytes to get from the AUX buffer based on the - * position of the head. + * that were written. User space will figure out how many bytes to get + * from the AUX buffer based on the position of the head. */ if (buf->snapshot) handle->head += to_read; + /* + * CS_LOCK() contains mb() so it can ensure visibility of the AUX trace + * data before the aux_head is updated via perf_aux_output_end(), which + * is expected by the perf ring buffer. + */ CS_LOCK(drvdata->base); out: spin_unlock_irqrestore(&drvdata->spinlock, flags); diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c index acdb59e0e661..867ad8bb9b0c 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etr.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c @@ -32,7 +32,6 @@ struct etr_flat_buf { * @etr_buf - Actual buffer used by the ETR * @pid - The PID this etr_perf_buffer belongs to. * @snaphost - Perf session mode - * @head - handle->head at the beginning of the session. * @nr_pages - Number of pages in the ring buffer. 
* @pages - Array of Pages in the ring buffer. */ @@ -41,7 +40,6 @@ struct etr_perf_buffer { struct etr_buf *etr_buf; pid_t pid; bool snapshot; - unsigned long head; int nr_pages; void **pages; }; @@ -609,8 +607,9 @@ static int tmc_etr_alloc_flat_buf(struct tmc_drvdata *drvdata, if (!flat_buf) return -ENOMEM; - flat_buf->vaddr = dma_alloc_coherent(real_dev, etr_buf->size, - &flat_buf->daddr, GFP_KERNEL); + flat_buf->vaddr = dma_alloc_noncoherent(real_dev, etr_buf->size, + &flat_buf->daddr, + DMA_FROM_DEVICE, GFP_KERNEL); if (!flat_buf->vaddr) { kfree(flat_buf); return -ENOMEM; @@ -631,14 +630,18 @@ static void tmc_etr_free_flat_buf(struct etr_buf *etr_buf) if (flat_buf && flat_buf->daddr) { struct device *real_dev = flat_buf->dev->parent; - dma_free_coherent(real_dev, flat_buf->size, - flat_buf->vaddr, flat_buf->daddr); + dma_free_noncoherent(real_dev, etr_buf->size, + flat_buf->vaddr, flat_buf->daddr, + DMA_FROM_DEVICE); } kfree(flat_buf); } static void tmc_etr_sync_flat_buf(struct etr_buf *etr_buf, u64 rrp, u64 rwp) { + struct etr_flat_buf *flat_buf = etr_buf->private; + struct device *real_dev = flat_buf->dev->parent; + /* * Adjust the buffer to point to the beginning of the trace data * and update the available trace data. @@ -648,6 +651,19 @@ static void tmc_etr_sync_flat_buf(struct etr_buf *etr_buf, u64 rrp, u64 rwp) etr_buf->len = etr_buf->size; else etr_buf->len = rwp - rrp; + + /* + * The driver always starts tracing at the beginning of the buffer, + * the only reason why we would get a wrap around is when the buffer + * is full. Sync the entire buffer in one go for this case. + */ + if (etr_buf->offset + etr_buf->len > etr_buf->size) + dma_sync_single_for_cpu(real_dev, flat_buf->daddr, + etr_buf->size, DMA_FROM_DEVICE); + else + dma_sync_single_for_cpu(real_dev, + flat_buf->daddr + etr_buf->offset, + etr_buf->len, DMA_FROM_DEVICE); } static ssize_t tmc_etr_get_data_flat_buf(struct etr_buf *etr_buf, @@ -982,7 +998,8 @@ static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata) axictl = readl_relaxed(drvdata->base + TMC_AXICTL); axictl &= ~TMC_AXICTL_CLEAR_MASK; - axictl |= (TMC_AXICTL_PROT_CTL_B1 | TMC_AXICTL_WR_BURST_16); + axictl |= TMC_AXICTL_PROT_CTL_B1; + axictl |= TMC_AXICTL_WR_BURST(drvdata->max_burst_size); axictl |= TMC_AXICTL_AXCACHE_OS; if (tmc_etr_has_cap(drvdata, TMC_ETR_AXI_ARCACHE)) { @@ -1437,16 +1454,16 @@ free_etr_perf_buffer: * buffer to the perf ring buffer. */ static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf, + unsigned long head, unsigned long src_offset, unsigned long to_copy) { long bytes; long pg_idx, pg_offset; - unsigned long head = etr_perf->head; char **dst_pages, *src_buf; struct etr_buf *etr_buf = etr_perf->etr_buf; - head = etr_perf->head; + head = PERF_IDX2OFF(head, etr_perf); pg_idx = head >> PAGE_SHIFT; pg_offset = head & (PAGE_SIZE - 1); dst_pages = (char **)etr_perf->pages; @@ -1553,16 +1570,23 @@ tmc_update_etr_buffer(struct coresight_device *csdev, /* Insert barrier packets at the beginning, if there was an overflow */ if (lost) tmc_etr_buf_insert_barrier_packet(etr_buf, offset); - tmc_etr_sync_perf_buffer(etr_perf, offset, size); + tmc_etr_sync_perf_buffer(etr_perf, handle->head, offset, size); /* * In snapshot mode we simply increment the head by the number of byte - * that were written. User space function cs_etm_find_snapshot() will - * figure out how many bytes to get from the AUX buffer based on the - * position of the head. + * that were written. 
User space will figure out how many bytes to get + * from the AUX buffer based on the position of the head. */ if (etr_perf->snapshot) handle->head += size; + + /* + * Ensure that the AUX trace data is visible before the aux_head + * is updated via perf_aux_output_end(), as expected by the + * perf ring buffer. + */ + smp_wmb(); + out: /* * Don't set the TRUNCATED flag in snapshot mode because 1) the @@ -1605,8 +1629,6 @@ static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, void *data) goto unlock_out; } - etr_perf->head = PERF_IDX2OFF(handle->head, etr_perf); - /* * No HW configuration is needed if the sink is already in * use for this session. diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h index b91ec7dde7bc..6bec20a392b3 100644 --- a/drivers/hwtracing/coresight/coresight-tmc.h +++ b/drivers/hwtracing/coresight/coresight-tmc.h @@ -70,7 +70,8 @@ #define TMC_AXICTL_PROT_CTL_B0 BIT(0) #define TMC_AXICTL_PROT_CTL_B1 BIT(1) #define TMC_AXICTL_SCT_GAT_MODE BIT(7) -#define TMC_AXICTL_WR_BURST_16 0xF00 +#define TMC_AXICTL_WR_BURST(v) (((v) & 0xf) << 8) +#define TMC_AXICTL_WR_BURST_16 0xf /* Write-back Read and Write-allocate */ #define TMC_AXICTL_AXCACHE_OS (0xf << 2) #define TMC_AXICTL_ARCACHE_OS (0xf << 16) @@ -174,6 +175,8 @@ struct etr_buf { * @etr_buf: details of buffer used in TMC-ETR * @len: size of the available trace for ETF/ETB. * @size: trace buffer size for this TMC (common for all modes). + * @max_burst_size: The maximum burst size that can be initiated by + * TMC-ETR on AXI bus. * @mode: how this TMC is being used. * @config_type: TMC variant, must be of type @tmc_config_type. * @memwidth: width of the memory interface databus, in bytes. @@ -198,6 +201,7 @@ struct tmc_drvdata { }; u32 len; u32 size; + u32 max_burst_size; u32 mode; enum tmc_config_type config_type; enum tmc_mem_intf_width memwidth; diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c index 176868496879..276862c07e32 100644 --- a/drivers/hwtracing/coresight/coresight-trbe.c +++ b/drivers/hwtracing/coresight/coresight-trbe.c @@ -16,6 +16,9 @@ #define pr_fmt(fmt) DRVNAME ": " fmt #include <asm/barrier.h> +#include <asm/cpufeature.h> + +#include "coresight-self-hosted-trace.h" #include "coresight-trbe.h" #define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT)) @@ -56,6 +59,8 @@ struct trbe_buf { * trbe_limit sibling pointers. */ unsigned long trbe_base; + /* The base programmed into the TRBE */ + unsigned long trbe_hw_base; unsigned long trbe_limit; unsigned long trbe_write; int nr_pages; @@ -64,13 +69,63 @@ struct trbe_buf { struct trbe_cpudata *cpudata; }; +/* + * TRBE erratum list + * + * The errata are defined in arm64 generic cpu_errata framework. + * Since the errata work arounds could be applied individually + * to the affected CPUs inside the TRBE driver, we need to know if + * a given CPU is affected by the erratum. Unlike the other erratum + * work arounds, TRBE driver needs to check multiple times during + * a trace session. Thus we need a quicker access to per-CPU + * errata and not issue costly this_cpu_has_cap() everytime. + * We keep a set of the affected errata in trbe_cpudata, per TRBE. + * + * We rely on the corresponding cpucaps to be defined for a given + * TRBE erratum. We map the given cpucap into a TRBE internal number + * to make the tracking of the errata lean. 
+ * + * This helps in : + * - Not duplicating the detection logic + * - Streamlined detection of erratum across the system + */ +#define TRBE_WORKAROUND_OVERWRITE_FILL_MODE 0 +#define TRBE_WORKAROUND_WRITE_OUT_OF_RANGE 1 + +static int trbe_errata_cpucaps[] = { + [TRBE_WORKAROUND_OVERWRITE_FILL_MODE] = ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE, + [TRBE_WORKAROUND_WRITE_OUT_OF_RANGE] = ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE, + -1, /* Sentinel, must be the last entry */ +}; + +/* The total number of listed errata in trbe_errata_cpucaps */ +#define TRBE_ERRATA_MAX (ARRAY_SIZE(trbe_errata_cpucaps) - 1) + +/* + * Safe limit for the number of bytes that may be overwritten + * when ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE is triggered. + */ +#define TRBE_WORKAROUND_OVERWRITE_FILL_MODE_SKIP_BYTES 256 + +/* + * struct trbe_cpudata: TRBE instance specific data + * @trbe_flag - TRBE dirty/access flag support + * @trbe_hw_align - Actual TRBE alignment required for TRBPTR_EL1. + * @trbe_align - Software alignment used for the TRBPTR_EL1. + * @cpu - CPU this TRBE belongs to. + * @mode - Mode of current operation. (perf/disabled) + * @drvdata - TRBE specific drvdata + * @errata - Bit map for the errata on this TRBE. + */ struct trbe_cpudata { bool trbe_flag; + u64 trbe_hw_align; u64 trbe_align; int cpu; enum cs_mode mode; struct trbe_buf *buf; struct trbe_drvdata *drvdata; + DECLARE_BITMAP(errata, TRBE_ERRATA_MAX); }; struct trbe_drvdata { @@ -83,6 +138,35 @@ struct trbe_drvdata { struct platform_device *pdev; }; +static void trbe_check_errata(struct trbe_cpudata *cpudata) +{ + int i; + + for (i = 0; i < TRBE_ERRATA_MAX; i++) { + int cap = trbe_errata_cpucaps[i]; + + if (WARN_ON_ONCE(cap < 0)) + return; + if (this_cpu_has_cap(cap)) + set_bit(i, cpudata->errata); + } +} + +static inline bool trbe_has_erratum(struct trbe_cpudata *cpudata, int i) +{ + return (i < TRBE_ERRATA_MAX) && test_bit(i, cpudata->errata); +} + +static inline bool trbe_may_overwrite_in_fill_mode(struct trbe_cpudata *cpudata) +{ + return trbe_has_erratum(cpudata, TRBE_WORKAROUND_OVERWRITE_FILL_MODE); +} + +static inline bool trbe_may_write_out_of_range(struct trbe_cpudata *cpudata) +{ + return trbe_has_erratum(cpudata, TRBE_WORKAROUND_WRITE_OUT_OF_RANGE); +} + static int trbe_alloc_node(struct perf_event *event) { if (event->cpu == -1) @@ -120,6 +204,25 @@ static void trbe_reset_local(void) write_sysreg_s(0, SYS_TRBSR_EL1); } +static void trbe_report_wrap_event(struct perf_output_handle *handle) +{ + /* + * Mark the buffer to indicate that there was a WRAP event by + * setting the COLLISION flag. This indicates to the user that + * the TRBE trace collection was stopped without stopping the + * ETE and thus there might be some amount of trace that was + * lost between the time the WRAP was detected and the IRQ + * was consumed by the CPU. + * + * Setting the TRUNCATED flag would move the event to STOPPED + * state unnecessarily, even when there is space left in the + * ring buffer. Using the COLLISION flag doesn't have this side + * effect. We only set TRUNCATED flag when there is no space + * left in the ring buffer. 
+ */ + perf_aux_output_flag(handle, PERF_AUX_FLAG_COLLISION); +} + static void trbe_stop_and_truncate_event(struct perf_output_handle *handle) { struct trbe_buf *buf = etm_perf_sink_config(handle); @@ -133,6 +236,7 @@ static void trbe_stop_and_truncate_event(struct perf_output_handle *handle) */ trbe_drain_and_disable_local(); perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + perf_aux_output_end(handle, 0); *this_cpu_ptr(buf->cpudata->drvdata->handle) = NULL; } @@ -178,12 +282,18 @@ static void trbe_stop_and_truncate_event(struct perf_output_handle *handle) * consumed from the user space. The enabled TRBE buffer area is a moving subset of * the allocated perf auxiliary buffer. */ + +static void __trbe_pad_buf(struct trbe_buf *buf, u64 offset, int len) +{ + memset((void *)buf->trbe_base + offset, ETE_IGNORE_PACKET, len); +} + static void trbe_pad_buf(struct perf_output_handle *handle, int len) { struct trbe_buf *buf = etm_perf_sink_config(handle); u64 head = PERF_IDX2OFF(handle->head, buf); - memset((void *)buf->trbe_base + head, ETE_IGNORE_PACKET, len); + __trbe_pad_buf(buf, head, len); if (!buf->snapshot) perf_aux_output_skip(handle, len); } @@ -200,6 +310,25 @@ static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle) return buf->nr_pages * PAGE_SIZE; } +static u64 trbe_min_trace_buf_size(struct perf_output_handle *handle) +{ + u64 size = TRBE_TRACE_MIN_BUF_SIZE; + struct trbe_buf *buf = etm_perf_sink_config(handle); + struct trbe_cpudata *cpudata = buf->cpudata; + + /* + * When the TRBE is affected by an erratum that could make it + * write to the next "virtually addressed" page beyond the LIMIT. + * We need to make sure there is always a PAGE after the LIMIT, + * within the buffer. Thus we ensure there is at least an extra + * page than normal. With this we could then adjust the LIMIT + * pointer down by a PAGE later. + */ + if (trbe_may_write_out_of_range(cpudata)) + size += PAGE_SIZE; + return size; +} + /* * TRBE Limit Calculation * @@ -252,13 +381,9 @@ static unsigned long __trbe_normal_offset(struct perf_output_handle *handle) * trbe_base trbe_base + nr_pages * * Perf aux buffer does not have any space for the driver to write into. - * Just communicate trace truncation event to the user space by marking - * it with PERF_AUX_FLAG_TRUNCATED. */ - if (!handle->size) { - perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + if (!handle->size) return 0; - } /* Compute the tail and wakeup indices now that we've aligned head */ tail = PERF_IDX2OFF(handle->head + handle->size, buf); @@ -360,13 +485,12 @@ static unsigned long __trbe_normal_offset(struct perf_output_handle *handle) return limit; trbe_pad_buf(handle, handle->size); - perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); return 0; } static unsigned long trbe_normal_offset(struct perf_output_handle *handle) { - struct trbe_buf *buf = perf_get_aux(handle); + struct trbe_buf *buf = etm_perf_sink_config(handle); u64 limit = __trbe_normal_offset(handle); u64 head = PERF_IDX2OFF(handle->head, buf); @@ -374,10 +498,14 @@ static unsigned long trbe_normal_offset(struct perf_output_handle *handle) * If the head is too close to the limit and we don't * have space for a meaningful run, we rather pad it * and start fresh. + * + * We might have to do this more than once to make sure + * we have enough required space. 
*/ - if (limit && (limit - head < TRBE_TRACE_MIN_BUF_SIZE)) { + while (limit && ((limit - head) < trbe_min_trace_buf_size(handle))) { trbe_pad_buf(handle, limit - head); limit = __trbe_normal_offset(handle); + head = PERF_IDX2OFF(handle->head, buf); } return limit; } @@ -448,12 +576,13 @@ static void set_trbe_limit_pointer_enabled(unsigned long addr) static void trbe_enable_hw(struct trbe_buf *buf) { - WARN_ON(buf->trbe_write < buf->trbe_base); + WARN_ON(buf->trbe_hw_base < buf->trbe_base); + WARN_ON(buf->trbe_write < buf->trbe_hw_base); WARN_ON(buf->trbe_write >= buf->trbe_limit); set_trbe_disabled(); isb(); clr_trbe_status(); - set_trbe_base_pointer(buf->trbe_base); + set_trbe_base_pointer(buf->trbe_hw_base); set_trbe_write_pointer(buf->trbe_write); /* @@ -464,10 +593,13 @@ static void trbe_enable_hw(struct trbe_buf *buf) set_trbe_limit_pointer_enabled(buf->trbe_limit); } -static enum trbe_fault_action trbe_get_fault_act(u64 trbsr) +static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle, + u64 trbsr) { int ec = get_trbe_ec(trbsr); int bsc = get_trbe_bsc(trbsr); + struct trbe_buf *buf = etm_perf_sink_config(handle); + struct trbe_cpudata *cpudata = buf->cpudata; WARN_ON(is_trbe_running(trbsr)); if (is_trbe_trg(trbsr) || is_trbe_abort(trbsr)) @@ -476,13 +608,71 @@ static enum trbe_fault_action trbe_get_fault_act(u64 trbsr) if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT)) return TRBE_FAULT_ACT_FATAL; - if (is_trbe_wrap(trbsr) && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) { - if (get_trbe_write_pointer() == get_trbe_base_pointer()) - return TRBE_FAULT_ACT_WRAP; - } + /* + * If the trbe is affected by TRBE_WORKAROUND_OVERWRITE_FILL_MODE, + * it might write data after a WRAP event in the fill mode. + * Thus the check TRBPTR == TRBBASER will not be honored. + */ + if ((is_trbe_wrap(trbsr) && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) && + (trbe_may_overwrite_in_fill_mode(cpudata) || + get_trbe_write_pointer() == get_trbe_base_pointer())) + return TRBE_FAULT_ACT_WRAP; + return TRBE_FAULT_ACT_SPURIOUS; } +static unsigned long trbe_get_trace_size(struct perf_output_handle *handle, + struct trbe_buf *buf, bool wrap) +{ + u64 write; + u64 start_off, end_off; + u64 size; + u64 overwrite_skip = TRBE_WORKAROUND_OVERWRITE_FILL_MODE_SKIP_BYTES; + + /* + * If the TRBE has wrapped around the write pointer has + * wrapped and should be treated as limit. + * + * When the TRBE is affected by TRBE_WORKAROUND_WRITE_OUT_OF_RANGE, + * it may write upto 64bytes beyond the "LIMIT". The driver already + * keeps a valid page next to the LIMIT and we could potentially + * consume the trace data that may have been collected there. But we + * cannot be really sure it is available, and the TRBPTR may not + * indicate the same. Also, affected cores are also affected by another + * erratum which forces the PAGE_SIZE alignment on the TRBPTR, and thus + * could potentially pad an entire PAGE_SIZE - 64bytes, to get those + * 64bytes. Thus we ignore the potential triggering of the erratum + * on WRAP and limit the data to LIMIT. + */ + if (wrap) + write = get_trbe_limit_pointer(); + else + write = get_trbe_write_pointer(); + + /* + * TRBE may use a different base address than the base + * of the ring buffer. Thus use the beginning of the ring + * buffer to compute the offsets. 
+ */ + end_off = write - buf->trbe_base; + start_off = PERF_IDX2OFF(handle->head, buf); + + if (WARN_ON_ONCE(end_off < start_off)) + return 0; + + size = end_off - start_off; + /* + * If the TRBE is affected by the following erratum, we must fill + * the space we skipped with IGNORE packets. And we are always + * guaranteed to have at least a PAGE_SIZE space in the buffer. + */ + if (trbe_has_erratum(buf->cpudata, TRBE_WORKAROUND_OVERWRITE_FILL_MODE) && + !WARN_ON(size < overwrite_skip)) + __trbe_pad_buf(buf, start_off, overwrite_skip); + + return size; +} + static void *arm_trbe_alloc_buffer(struct coresight_device *csdev, struct perf_event *event, void **pages, int nr_pages, bool snapshot) @@ -544,9 +734,9 @@ static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev, struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev); struct trbe_buf *buf = config; enum trbe_fault_action act; - unsigned long size, offset; - unsigned long write, base, status; + unsigned long size, status; unsigned long flags; + bool wrap = false; WARN_ON(buf->cpudata != cpudata); WARN_ON(cpudata->cpu != smp_processor_id()); @@ -554,8 +744,6 @@ static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev, if (cpudata->mode != CS_MODE_PERF) return 0; - perf_aux_output_flag(handle, PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW); - /* * We are about to disable the TRBE. And this could in turn * fill up the buffer triggering, an IRQ. This could be consumed @@ -588,8 +776,6 @@ static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev, * handle gets freed in etm_event_stop(). */ trbe_drain_and_disable_local(); - write = get_trbe_write_pointer(); - base = get_trbe_base_pointer(); /* Check if there is a pending interrupt and handle it here */ status = read_sysreg_s(SYS_TRBSR_EL1); @@ -603,7 +789,7 @@ static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev, clr_trbe_irq(); isb(); - act = trbe_get_fault_act(status); + act = trbe_get_fault_act(handle, status); /* * If this was not due to a WRAP event, we have some * errors and as such buffer is empty. @@ -613,20 +799,11 @@ static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev, goto done; } - /* - * Otherwise, the buffer is full and the write pointer - * has reached base. Adjust this back to the Limit pointer - * for correct size. Also, mark the buffer truncated. - */ - write = get_trbe_limit_pointer(); - perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + trbe_report_wrap_event(handle); + wrap = true; } - offset = write - base; - if (WARN_ON_ONCE(offset < PERF_IDX2OFF(handle->head, buf))) - size = 0; - else - size = offset - PERF_IDX2OFF(handle->head, buf); + size = trbe_get_trace_size(handle, buf, wrap); done: local_irq_restore(flags); @@ -636,6 +813,148 @@ done: return size; } + +static int trbe_apply_work_around_before_enable(struct trbe_buf *buf) +{ + /* + * TRBE_WORKAROUND_OVERWRITE_FILL_MODE causes the TRBE to overwrite a few cache + * line size from the "TRBBASER_EL1" in the event of a "FILL". + * Thus, we could loose some amount of the trace at the base. + * + * Before Fix: + * + * normal-BASE head (normal-TRBPTR) tail (normal-LIMIT) + * | \/ / + * ------------------------------------------------------------- + * | Pg0 | Pg1 | | | PgN | + * ------------------------------------------------------------- + * + * In the normal course of action, we would set the TRBBASER to the + * beginning of the ring-buffer (normal-BASE). 
But with the erratum, + * the TRBE could overwrite the contents at the "normal-BASE", after + * hitting the "normal-LIMIT", since it doesn't stop as expected. And + * this is wrong. This could result in overwriting trace collected in + * one of the previous runs, being consumed by the user. So we must + * always make sure that the TRBBASER is within the region + * [head, head+size]. Note that TRBBASER must be PAGE aligned, + * + * After moving the BASE: + * + * normal-BASE head (normal-TRBPTR) tail (normal-LIMIT) + * | \/ / + * ------------------------------------------------------------- + * | | |xyzdef. |.. tuvw| | + * ------------------------------------------------------------- + * / + * New-BASER + * + * Also, we would set the TRBPTR to head (after adjusting for + * alignment) at normal-PTR. This would mean that the last few bytes + * of the trace (say, "xyz") might overwrite the first few bytes of + * trace written ("abc"). More importantly they will appear in what + * userspace sees as the beginning of the trace, which is wrong. We may + * not always have space to move the latest trace "xyz" to the correct + * order as it must appear beyond the LIMIT. (i.e, [head..head+size]). + * Thus it is easier to ignore those bytes than to complicate the + * driver to move it, assuming that the erratum was triggered and + * doing additional checks to see if there is indeed allowed space at + * TRBLIMITR.LIMIT. + * + * Thus the full workaround will move the BASE and the PTR and would + * look like (after padding at the skipped bytes at the end of + * session) : + * + * normal-BASE head (normal-TRBPTR) tail (normal-LIMIT) + * | \/ / + * ------------------------------------------------------------- + * | | |///abc.. |.. rst| | + * ------------------------------------------------------------- + * / | + * New-BASER New-TRBPTR + * + * To summarize, with the work around: + * + * - We always align the offset for the next session to PAGE_SIZE + * (This is to ensure we can program the TRBBASER to this offset + * within the region [head...head+size]). + * + * - At TRBE enable: + * - Set the TRBBASER to the page aligned offset of the current + * proposed write offset. (which is guaranteed to be aligned + * as above) + * - Move the TRBPTR to skip first 256bytes (that might be + * overwritten with the erratum). This ensures that the trace + * generated in the session is not re-written. + * + * - At trace collection: + * - Pad the 256bytes skipped above again with IGNORE packets. + */ + if (trbe_has_erratum(buf->cpudata, TRBE_WORKAROUND_OVERWRITE_FILL_MODE)) { + if (WARN_ON(!IS_ALIGNED(buf->trbe_write, PAGE_SIZE))) + return -EINVAL; + buf->trbe_hw_base = buf->trbe_write; + buf->trbe_write += TRBE_WORKAROUND_OVERWRITE_FILL_MODE_SKIP_BYTES; + } + + /* + * TRBE_WORKAROUND_WRITE_OUT_OF_RANGE could cause the TRBE to write to + * the next page after the TRBLIMITR.LIMIT. For perf, the "next page" + * may be: + * - The page beyond the ring buffer. This could mean, TRBE could + * corrupt another entity (kernel / user) + * - A portion of the "ring buffer" consumed by the userspace. + * i.e, a page outisde [head, head + size]. + * + * We work around this by: + * - Making sure that we have at least an extra space of PAGE left + * in the ring buffer [head, head + size], than we normally do + * without the erratum. See trbe_min_trace_buf_size(). + * + * - Adjust the TRBLIMITR.LIMIT to leave the extra PAGE outside + * the TRBE's range (i.e [TRBBASER, TRBLIMITR.LIMI] ). 
+ */ + if (trbe_has_erratum(buf->cpudata, TRBE_WORKAROUND_WRITE_OUT_OF_RANGE)) { + s64 space = buf->trbe_limit - buf->trbe_write; + /* + * We must have more than a PAGE_SIZE worth space in the proposed + * range for the TRBE. + */ + if (WARN_ON(space <= PAGE_SIZE || + !IS_ALIGNED(buf->trbe_limit, PAGE_SIZE))) + return -EINVAL; + buf->trbe_limit -= PAGE_SIZE; + } + + return 0; +} + +static int __arm_trbe_enable(struct trbe_buf *buf, + struct perf_output_handle *handle) +{ + int ret = 0; + + perf_aux_output_flag(handle, PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW); + buf->trbe_limit = compute_trbe_buffer_limit(handle); + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); + if (buf->trbe_limit == buf->trbe_base) { + ret = -ENOSPC; + goto err; + } + /* Set the base of the TRBE to the buffer base */ + buf->trbe_hw_base = buf->trbe_base; + + ret = trbe_apply_work_around_before_enable(buf); + if (ret) + goto err; + + *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle; + trbe_enable_hw(buf); + return 0; +err: + trbe_stop_and_truncate_event(handle); + return ret; +} + static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data) { struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); @@ -648,18 +967,11 @@ static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data) if (mode != CS_MODE_PERF) return -EINVAL; - *this_cpu_ptr(drvdata->handle) = handle; cpudata->buf = buf; cpudata->mode = mode; buf->cpudata = cpudata; - buf->trbe_limit = compute_trbe_buffer_limit(handle); - buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); - if (buf->trbe_limit == buf->trbe_base) { - trbe_stop_and_truncate_event(handle); - return 0; - } - trbe_enable_hw(buf); - return 0; + + return __arm_trbe_enable(buf, handle); } static int arm_trbe_disable(struct coresight_device *csdev) @@ -683,35 +995,30 @@ static int arm_trbe_disable(struct coresight_device *csdev) static void trbe_handle_spurious(struct perf_output_handle *handle) { - struct trbe_buf *buf = etm_perf_sink_config(handle); + u64 limitr = read_sysreg_s(SYS_TRBLIMITR_EL1); - buf->trbe_limit = compute_trbe_buffer_limit(handle); - buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); - if (buf->trbe_limit == buf->trbe_base) { - trbe_drain_and_disable_local(); - return; - } - trbe_enable_hw(buf); + /* + * If the IRQ was spurious, simply re-enable the TRBE + * back without modifying the buffer parameters to + * retain the trace collected so far. + */ + limitr |= TRBLIMITR_ENABLE; + write_sysreg_s(limitr, SYS_TRBLIMITR_EL1); + isb(); } -static void trbe_handle_overflow(struct perf_output_handle *handle) +static int trbe_handle_overflow(struct perf_output_handle *handle) { struct perf_event *event = handle->event; struct trbe_buf *buf = etm_perf_sink_config(handle); - unsigned long offset, size; + unsigned long size; struct etm_event_data *event_data; - offset = get_trbe_limit_pointer() - get_trbe_base_pointer(); - size = offset - PERF_IDX2OFF(handle->head, buf); + size = trbe_get_trace_size(handle, buf, true); if (buf->snapshot) handle->head += size; - /* - * Mark the buffer as truncated, as we have stopped the trace - * collection upon the WRAP event, without stopping the source. 
- */ - perf_aux_output_flag(handle, PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW | - PERF_AUX_FLAG_TRUNCATED); + trbe_report_wrap_event(handle); perf_aux_output_end(handle, size); event_data = perf_aux_output_begin(handle, event); if (!event_data) { @@ -723,16 +1030,10 @@ static void trbe_handle_overflow(struct perf_output_handle *handle) */ trbe_drain_and_disable_local(); *this_cpu_ptr(buf->cpudata->drvdata->handle) = NULL; - return; - } - buf->trbe_limit = compute_trbe_buffer_limit(handle); - buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); - if (buf->trbe_limit == buf->trbe_base) { - trbe_stop_and_truncate_event(handle); - return; + return -EINVAL; } - *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle; - trbe_enable_hw(buf); + + return __arm_trbe_enable(buf, handle); } static bool is_perf_trbe(struct perf_output_handle *handle) @@ -742,7 +1043,7 @@ static bool is_perf_trbe(struct perf_output_handle *handle) struct trbe_drvdata *drvdata = cpudata->drvdata; int cpu = smp_processor_id(); - WARN_ON(buf->trbe_base != get_trbe_base_pointer()); + WARN_ON(buf->trbe_hw_base != get_trbe_base_pointer()); WARN_ON(buf->trbe_limit != get_trbe_limit_pointer()); if (cpudata->mode != CS_MODE_PERF) @@ -763,13 +1064,10 @@ static irqreturn_t arm_trbe_irq_handler(int irq, void *dev) struct perf_output_handle *handle = *handle_ptr; enum trbe_fault_action act; u64 status; + bool truncated = false; + u64 trfcr; - /* - * Ensure the trace is visible to the CPUs and - * any external aborts have been resolved. - */ - trbe_drain_and_disable_local(); - + /* Reads to TRBSR_EL1 is fine when TRBE is active */ status = read_sysreg_s(SYS_TRBSR_EL1); /* * If the pending IRQ was handled by update_buffer callback @@ -778,6 +1076,13 @@ static irqreturn_t arm_trbe_irq_handler(int irq, void *dev) if (!is_trbe_irq(status)) return IRQ_NONE; + /* Prohibit the CPU from tracing before we disable the TRBE */ + trfcr = cpu_prohibit_trace(); + /* + * Ensure the trace is visible to the CPUs and + * any external aborts have been resolved. + */ + trbe_drain_and_disable_local(); clr_trbe_irq(); isb(); @@ -787,24 +1092,32 @@ static irqreturn_t arm_trbe_irq_handler(int irq, void *dev) if (!is_perf_trbe(handle)) return IRQ_NONE; - /* - * Ensure perf callbacks have completed, which may disable - * the trace buffer in response to a TRUNCATION flag. - */ - irq_work_run(); - - act = trbe_get_fault_act(status); + act = trbe_get_fault_act(handle, status); switch (act) { case TRBE_FAULT_ACT_WRAP: - trbe_handle_overflow(handle); + truncated = !!trbe_handle_overflow(handle); break; case TRBE_FAULT_ACT_SPURIOUS: trbe_handle_spurious(handle); break; case TRBE_FAULT_ACT_FATAL: trbe_stop_and_truncate_event(handle); + truncated = true; break; } + + /* + * If the buffer was truncated, ensure perf callbacks + * have completed, which will disable the event. + * + * Otherwise, restore the trace filter controls to + * allow the tracing. 
+	 */
+	if (truncated)
+		irq_work_run();
+	else
+		write_trfcr(trfcr);
+
 	return IRQ_HANDLED;
 }
 
@@ -824,7 +1137,7 @@ static ssize_t align_show(struct device *dev, struct device_attribute *attr, cha
 {
 	struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
 
-	return sprintf(buf, "%llx\n", cpudata->trbe_align);
+	return sprintf(buf, "%llx\n", cpudata->trbe_hw_align);
 }
 
 static DEVICE_ATTR_RO(align);
@@ -869,6 +1182,10 @@ static void arm_trbe_register_coresight_cpu(struct trbe_drvdata *drvdata, int cp
 	if (WARN_ON(trbe_csdev))
 		return;
 
+	/* If the TRBE was not probed on this CPU, we shouldn't be here */
+	if (WARN_ON(!cpudata->drvdata))
+		return;
+
 	dev = &cpudata->drvdata->pdev->dev;
 	desc.name = devm_kasprintf(dev, GFP_KERNEL, "trbe%d", cpu);
 	if (!desc.name)
@@ -891,6 +1208,9 @@ cpu_clear:
 	cpumask_clear_cpu(cpu, &drvdata->supported_cpus);
 }
 
+/*
+ * Must be called with preemption disabled, for trbe_check_errata().
+ */
 static void arm_trbe_probe_cpu(void *info)
 {
 	struct trbe_drvdata *drvdata = info;
@@ -912,11 +1232,34 @@ static void arm_trbe_probe_cpu(void *info)
 		goto cpu_clear;
 	}
 
-	cpudata->trbe_align = 1ULL << get_trbe_address_align(trbidr);
-	if (cpudata->trbe_align > SZ_2K) {
+	cpudata->trbe_hw_align = 1ULL << get_trbe_address_align(trbidr);
+	if (cpudata->trbe_hw_align > SZ_2K) {
 		pr_err("Unsupported alignment on cpu %d\n", cpu);
 		goto cpu_clear;
 	}
+
+	/*
+	 * Run the TRBE erratum checks, now that we know
+	 * this instance is about to be registered.
+	 */
+	trbe_check_errata(cpudata);
+
+	/*
+	 * If the TRBE is affected by erratum TRBE_WORKAROUND_OVERWRITE_FILL_MODE,
+	 * we must always program the TRBPTR_EL1 256 bytes from a page
+	 * boundary, with TRBBASER_EL1 set to that page, to prevent the
+	 * TRBE overwriting 256 bytes at TRBBASER_EL1 on a FILL event.
+	 *
+	 * Thus, make sure we always align our write pointer to PAGE_SIZE,
+	 * which also guarantees that we have at least a PAGE_SIZE of space
+	 * in the buffer (TRBLIMITR is PAGE aligned) and thus we can skip
+	 * the required bytes at the base.
+	 */
+	if (trbe_may_overwrite_in_fill_mode(cpudata))
+		cpudata->trbe_align = PAGE_SIZE;
+	else
+		cpudata->trbe_align = cpudata->trbe_hw_align;
+
 	cpudata->trbe_flag = get_trbe_flag_update(trbidr);
 	cpudata->cpu = cpu;
 	cpudata->drvdata = drvdata;
@@ -950,7 +1293,9 @@ static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata)
 		return -ENOMEM;
 
 	for_each_cpu(cpu, &drvdata->supported_cpus) {
-		smp_call_function_single(cpu, arm_trbe_probe_cpu, drvdata, 1);
+		/* If we fail to probe the CPU, let us defer it to the hotplug callbacks */
+		if (smp_call_function_single(cpu, arm_trbe_probe_cpu, drvdata, 1))
+			continue;
 		if (cpumask_test_cpu(cpu, &drvdata->supported_cpus))
 			arm_trbe_register_coresight_cpu(drvdata, cpu);
 		if (cpumask_test_cpu(cpu, &drvdata->supported_cpus))
@@ -969,6 +1314,13 @@ static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata)
 	return 0;
 }
 
+static void arm_trbe_probe_hotplugged_cpu(struct trbe_drvdata *drvdata)
+{
+	preempt_disable();
+	arm_trbe_probe_cpu(drvdata);
+	preempt_enable();
+}
+
 static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
 {
 	struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
@@ -980,7 +1332,7 @@ static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
 	 * initialize it now.
 	 */
 	if (!coresight_get_percpu_sink(cpu)) {
-		arm_trbe_probe_cpu(drvdata);
+		arm_trbe_probe_hotplugged_cpu(drvdata);
 		if (cpumask_test_cpu(cpu, &drvdata->supported_cpus))
 			arm_trbe_register_coresight_cpu(drvdata, cpu);
 		if (cpumask_test_cpu(cpu, &drvdata->supported_cpus))
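
To make the buffer geometry in trbe_apply_work_around_before_enable() above easier to follow, here is a minimal, self-contained userspace sketch of the two adjustments. It is not driver code: struct window, apply_workarounds() and FILL_SKIP are hypothetical stand-ins, and a 4 KiB page with a 256-byte skip is assumed purely for illustration.

/*
 * Illustrative model of the two TRBE erratum adjustments described in
 * the comments above. All names here are hypothetical; the real driver
 * operates on struct trbe_buf and the TRBE system registers.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define FILL_SKIP	256UL	/* bytes the erratum may overwrite at BASE */

struct window {
	uint64_t base;	/* would be programmed into TRBBASER_EL1 */
	uint64_t write;	/* would be programmed into TRBPTR_EL1 */
	uint64_t limit;	/* would be programmed into TRBLIMITR.LIMIT */
};

/* Apply both workarounds to a proposed [write, limit) window. */
static int apply_workarounds(struct window *w, int fill_mode_erratum,
			     int out_of_range_erratum)
{
	if (fill_mode_erratum) {
		/* BASE must sit on the page holding the write offset... */
		if (w->write & (PAGE_SIZE - 1))
			return -1;
		w->base = w->write;
		/*
		 * ...and the first 256 bytes are sacrificed, to be padded
		 * with IGNORE packets at collection time.
		 */
		w->write += FILL_SKIP;
	}
	if (out_of_range_erratum) {
		/*
		 * Keep one spare page inside the window so a stray write
		 * past LIMIT still lands in buffer space we own.
		 */
		if (w->limit - w->write <= PAGE_SIZE)
			return -1;
		w->limit -= PAGE_SIZE;
	}
	return 0;
}

int main(void)
{
	struct window w = { .base = 0, .write = 0x1000, .limit = 0x9000 };

	if (!apply_workarounds(&w, 1, 1))
		printf("base=%#llx write=%#llx limit=%#llx\n",
		       (unsigned long long)w.base,
		       (unsigned long long)w.write,
		       (unsigned long long)w.limit);
	return 0;
}

With the sample window this prints base=0x1000 write=0x1100 limit=0x8000: the base lands on the page holding the write pointer, the first 256 bytes are skipped, and one page is held back from the limit, matching the behaviour described in the comments.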
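The interrupt-handler rework above boils down to one decision at the end of arm_trbe_irq_handler(): tracing was prohibited and the TRBE drained/disabled on entry, so only a truncated buffer needs the perf callbacks flushed, otherwise the saved trace filter value is simply restored. A hedged, standalone sketch of that policy follows; finish_trbe_irq() and its two helpers are hypothetical stand-ins, not kernel API.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for irq_work_run() and write_trfcr() used by the patch. */
static void run_pending_perf_work(void)
{
	puts("irq_work_run()");
}

static void restore_trace_filters(unsigned long trfcr)
{
	printf("write_trfcr(%#lx)\n", trfcr);
}

/*
 * End-of-interrupt policy: a truncated buffer means perf must observe
 * the event being stopped, so flush the queued perf callbacks; in every
 * other case, restore the TRFCR value saved on entry so tracing resumes.
 */
static void finish_trbe_irq(bool truncated, unsigned long saved_trfcr)
{
	if (truncated)
		run_pending_perf_work();
	else
		restore_trace_filters(saved_trfcr);
}

int main(void)
{
	finish_trbe_irq(false, 0x3);	/* e.g. spurious IRQ: resume tracing */
	finish_trbe_irq(true, 0x3);	/* e.g. fatal fault: flush perf work */
	return 0;
}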
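Similarly, the probe-time change in arm_trbe_probe_cpu() reduces to a single alignment choice per CPU. A minimal sketch, assuming a 4 KiB page and a hypothetical helper name:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/*
 * If the CPU suffers from the FILL-mode overwrite erratum, session
 * offsets must be page aligned so TRBBASER can be moved onto the page
 * holding the write pointer; otherwise the hardware minimum reported
 * in TRBIDR_EL1 is sufficient.
 */
static unsigned long choose_session_alignment(unsigned long hw_align,
					      bool fill_mode_erratum)
{
	return fill_mode_erratum ? PAGE_SIZE : hw_align;
}

int main(void)
{
	printf("%lu\n", choose_session_alignment(64, true));	/* 4096 */
	printf("%lu\n", choose_session_alignment(64, false));	/* 64 */
	return 0;
}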