path: root/arch
author    Mahesh Salgaonkar  2018-08-23 10:26:08 +0530
committer Michael Ellerman   2018-08-23 23:40:10 +1000
commit    0f52b3a00c789569d7ed822b5a6b30f59a8d4393 (patch)
tree      4a63b54f4c745c965eb10a5b9cd2c317ff883d86 /arch
parent    8cfbdbdc24815417a3ab35101ccf706b9a23ff17 (diff)
powerpc/mce: Fix SLB rebolting during MCE recovery path.
The commit e7e81847478b ("powerpc/64s: move machine check SLB flushing to mm/slb.c") introduced a bug in reloading bolted SLB entries. Unused bolted entries are stored with .esid=0 in the slb_shadow area, and that value is now used directly as the RB input to slbmte, so the RB[52:63] index field is 0 and SLB entry 0 gets cleared instead of the intended entry.

Fix this by storing the index bits in the unused bolted entries, which directs the slbmte to the right place.

The SLB shadow area is also used by the hypervisor, but PAPR is okay with that, from LoPAPR v1.1, 14.11.1.3 SLB Shadow Buffer:

  Note: SLB is filled sequentially starting at index 0 from the shadow
  buffer ignoring the contents of RB field bits 52-63

Fixes: e7e81847478b ("powerpc/64s: move machine check SLB flushing to mm/slb.c")
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
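To make the mechanism concrete, here is a minimal user-space C sketch (illustration only, not kernel code; the rb_index() helper and the example index value are hypothetical). In IBM bit numbering, RB[52:63] is the low 12 bits of the 64-bit RB value, so an all-zero esid always targets SLB entry 0, while storing the entry's own index there directs the slbmte to the intended slot:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: extract the SLB index field, RB bits 52:63 in IBM
 * bit numbering, i.e. the low 12 bits of the 64-bit RB value. */
static inline unsigned int rb_index(uint64_t rb)
{
	return (unsigned int)(rb & 0xfffULL);
}

int main(void)
{
	/* Old behaviour: unused bolted entries stored esid = 0, so the
	 * derived RB index is 0 and slbmte overwrites SLB entry 0. */
	uint64_t old_esid = 0;

	/* Fixed behaviour: the unused entry carries its own index
	 * (index 2 chosen here purely for illustration). */
	unsigned int index = 2;
	uint64_t new_esid = (uint64_t)index;

	printf("old esid targets SLB entry %u\n", rb_index(old_esid)); /* 0 */
	printf("new esid targets SLB entry %u\n", rb_index(new_esid)); /* 2 */
	return 0;
}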
Diffstat (limited to 'arch')
-rw-r--r--  arch/powerpc/mm/slb.c  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 0b095fa54049..9f574e59d178 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -70,7 +70,7 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
static inline void slb_shadow_clear(enum slb_index index)
{
- WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
+ WRITE_ONCE(get_slb_shadow()->save_area[index].esid, cpu_to_be64(index));
}
static inline void create_shadowed_slbe(unsigned long ea, int ssize,
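For illustration, a user-space analogue of the fixed slb_shadow_clear() behaviour (the shadow_entry struct, the SLB_NUM_BOLTED value and the shadow_clear() helper below are hypothetical stand-ins, not the kernel's definitions): the shadow esid is stored big-endian, which is why the index is wrapped in cpu_to_be64() in the patch rather than written raw.

#include <stdio.h>
#include <stdint.h>
#include <endian.h>	/* htobe64() / be64toh(), glibc */

#define SLB_NUM_BOLTED 3	/* assumed number of bolted entries for this sketch */

/* Hypothetical stand-in for one entry of the SLB shadow save area. */
struct shadow_entry {
	uint64_t esid;	/* stored big-endian, as in the real shadow buffer */
	uint64_t vsid;
};

static struct shadow_entry save_area[SLB_NUM_BOLTED];

/* Analogue of the fixed slb_shadow_clear(): keep the index bits in the
 * otherwise-unused entry so a later reload targets the right SLB slot. */
static void shadow_clear(unsigned int index)
{
	save_area[index].esid = htobe64((uint64_t)index);
	save_area[index].vsid = 0;
}

int main(void)
{
	unsigned int i;

	for (i = 0; i < SLB_NUM_BOLTED; i++)
		shadow_clear(i);

	/* The RB[52:63] index recovered from each cleared entry now points
	 * at that entry itself instead of entry 0. */
	for (i = 0; i < SLB_NUM_BOLTED; i++)
		printf("entry %u reload index = %llu\n", i,
		       (unsigned long long)(be64toh(save_area[i].esid) & 0xfff));
	return 0;
}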