author:    Markos Chandras    2016-02-03 03:15:23 +0000
committer: Ralf Baechle       2016-05-13 14:01:48 +0200
commit:    0f2a1484487407390196ca5b3c3a07bee6df15ad (patch)
tree:      7836b2bef1626f453e1c56088a7eec422d5b5960 /arch/mips/kernel/pm-cps.c
parent:    04d83f948510f17f8f2ab320b2386f4b5fbd0bd4 (diff)
MIPS: pm-cps: Avoid offset overflow on MIPSr6
This is similar to commit 934c79231c1b ("MIPS: asm: r4kcache: Add MIPS
R6 cache unroll functions"). The CACHE instruction has been redefined
for MIPSr6, reducing its offset field to 8 bits. This leads to
micro-assembler field overflow warnings when booting SMP MIPSr6 cores,
like the following one (a sketch of the offset arithmetic follows the trace):
Call Trace:
[<ffffffff8010af88>] show_stack+0x68/0x88
[<ffffffff8056ddf0>] dump_stack+0x68/0x88
[<ffffffff801305bc>] warn_slowpath_common+0x8c/0xc8
[<ffffffff80130630>] warn_slowpath_fmt+0x38/0x48
[<ffffffff80125814>] build_insn+0x514/0x5c0
[<ffffffff806ee134>] cps_gen_cache_routine.isra.3+0xe0/0x1b8
[<ffffffff806ee570>] cps_pm_init+0x364/0x9ec
[<ffffffff80100538>] do_one_initcall+0x90/0x1a8
[<ffffffff806e8c14>] kernel_init_freeable+0x160/0x21c
[<ffffffff8056b6a0>] kernel_init+0x10/0xf8
[<ffffffff801059f8>] ret_from_kernel_thread+0x14/0x1c
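For illustration only, here is a minimal user-space C sketch of the offset
arithmetic. The unroll count and line size are assumed example values (not
taken from pm-cps.c), and fits_signed() merely models the immediate-range
check the micro-assembler performs; the 16-bit pre-R6 offset width is the
classic CACHE encoding, and the 8-bit MIPSr6 width follows the commit text.

#include <stdio.h>

/* Model of an immediate-range check: does "val" fit in a signed field of
 * "bits" bits? */
static int fits_signed(long val, int bits)
{
	long max = (1L << (bits - 1)) - 1;
	long min = -(1L << (bits - 1));

	return val >= min && val <= max;
}

int main(void)
{
	/* Assumed for illustration: 32-line unroll, 128-byte cache lines. */
	const int unroll_lines = 32;
	const int linesz = 128;
	int i;

	for (i = 0; i < unroll_lines; i++) {
		long offset = (long)i * linesz;

		/* Pre-R6 CACHE takes a 16-bit signed offset; per the commit
		 * text, MIPSr6 shrinks the field to 8 bits. */
		printf("line %2d: offset %4ld  pre-R6:%s  R6:%s\n",
		       i, offset,
		       fits_signed(offset, 16) ? "ok" : "OVERFLOW",
		       fits_signed(offset, 8) ? "ok" : "OVERFLOW");
	}
	return 0;
}

With an 8-bit signed field only offsets up to 127 encode, so any unrolled
cache op past the first line (here, offset 128 and beyond) trips the range
check in build_insn(), producing the warning in the trace above.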
We fix this by incrementing the base register on every loop iteration, as
sketched below.
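The change can be read as swapping one code-generation strategy for another.
The following stand-alone sketch is hedged: the emit_* helpers are
hypothetical printf stand-ins for the uasm calls, and the op, unroll count
and line size are example values only.

#include <stdio.h>

/* Hypothetical stand-ins for the uasm emitters used in pm-cps.c, reduced
 * to printing the instruction they would generate. */
static void emit_cache(int op, int offset, const char *base)
{
	printf("\tcache\t0x%x, %d(%s)\n", op, offset, base);
}

static void emit_addiu(const char *reg, int imm)
{
	printf("\taddiu\t%s, %s, %d\n", reg, reg, imm);
}

int main(void)
{
	const int unroll_lines = 4;	/* shortened for the example */
	const int linesz = 32;		/* example line size */
	const int op = 0x15;		/* e.g. Hit_Writeback_Inv_D */
	int i;

	puts("pre-R6 style (growing immediate offsets):");
	for (i = 0; i < unroll_lines; i++)
		emit_cache(op, i * linesz, "t0");
	emit_addiu("t0", unroll_lines * linesz);	/* one base update at the end */

	puts("\nMIPSr6 style (offset stays 0, base advances after each op):");
	for (i = 0; i < unroll_lines; i++) {
		emit_cache(op, 0, "t0");
		emit_addiu("t0", linesz);
	}
	return 0;
}

On MIPSr6 every CACHE keeps a zero offset and the base register advances by
one line per op, so the separate end-of-loop base update becomes unnecessary.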
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12329/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Diffstat (limited to 'arch/mips/kernel/pm-cps.c')
-rw-r--r--   arch/mips/kernel/pm-cps.c | 15
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
index fa3f9ebad8f4..adda3ffb9b78 100644
--- a/arch/mips/kernel/pm-cps.c
+++ b/arch/mips/kernel/pm-cps.c
@@ -224,11 +224,18 @@ static void __init cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
 	uasm_build_label(pl, *pp, lbl);
 
 	/* Generate the cache ops */
-	for (i = 0; i < unroll_lines; i++)
-		uasm_i_cache(pp, op, i * cache->linesz, t0);
+	for (i = 0; i < unroll_lines; i++) {
+		if (cpu_has_mips_r6) {
+			uasm_i_cache(pp, op, 0, t0);
+			uasm_i_addiu(pp, t0, t0, cache->linesz);
+		} else {
+			uasm_i_cache(pp, op, i * cache->linesz, t0);
+		}
+	}
 
-	/* Update the base address */
-	uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
+	if (!cpu_has_mips_r6)
+		/* Update the base address */
+		uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
 
 	/* Loop if we haven't reached the end address yet */
 	uasm_il_bne(pp, pr, t0, t1, lbl);