author	Andy Lutomirski <luto@kernel.org>	2021-02-09 18:33:45 -0800
committer	Borislav Petkov <bp@suse.de>	2021-02-10 16:27:57 +0100
commit	ca247283781d754216395a41c5e8be8ec79a5f1c (patch)
tree	1c37b9a7d3880b66e489f9e254cf4775982a5429 /arch/x86
parent	66fcd98883816dba3b66da20b5fc86fa410638b5 (diff)
x86/fault: Don't run fixups for SMAP violations
A SMAP-violating kernel access is not a recoverable condition. Imagine kernel code that, outside of a uaccess region, dereferences a pointer to the user range by accident. If SMAP is on, such an access reliably faults, but the fault used to be handed to the exception fixup machinery as if it were an intentional user access. This makes it easy for bugs to be overlooked if code is inadequately tested both with and without SMAP.

This was discovered because BPF can generate invalid accesses to user memory, but those warnings only got printed if SMAP was off. Make it so that this type of error will be discovered with SMAP on as well.

[ bp: Massage commit message. ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/66a02343624b1ff46f02a838c497fc05c1a871b3.1612924255.git.luto@kernel.org
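To illustrate the failure mode the message describes, here is a minimal, hypothetical sketch (not code from this patch; the read_flag_*() helpers and their parameters are invented for the example) contrasting an accidental direct dereference of a user pointer with a proper uaccess:

#include <linux/types.h>
#include <linux/uaccess.h>	/* copy_from_user() */

static int read_flag_buggy(const u32 __user *uptr, u32 *out)
{
	/*
	 * BUG: direct dereference of a user pointer outside a uaccess
	 * region.  With SMAP on, this faults with EFLAGS.AC clear; after
	 * this patch such a fault oopses instead of being routed through
	 * the exception fixups.
	 */
	*out = *(const u32 __force *)uptr;
	return 0;
}

static int read_flag_ok(const u32 __user *uptr, u32 *out)
{
	/* Intentional user access: copy_from_user() runs with EFLAGS.AC set. */
	return copy_from_user(out, uptr, sizeof(*out)) ? -EFAULT : 0;
}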
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/mm/fault.c	| 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 1a0cfede8822..1c3054bb4a5b 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1279,9 +1279,12 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 */
 	if (unlikely(cpu_feature_enabled(X86_FEATURE_SMAP) &&
 		     !(error_code & X86_PF_USER) &&
-		     !(regs->flags & X86_EFLAGS_AC)))
-	{
-		bad_area_nosemaphore(regs, error_code, address);
+		     !(regs->flags & X86_EFLAGS_AC))) {
+		/*
+		 * No extable entry here. This was a kernel access to an
+		 * invalid pointer. get_kernel_nofault() will not get here.
+		 */
+		page_fault_oops(regs, error_code, address);
 		return;
 	}
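For reference, the branch above fires only when EFLAGS.AC is clear, i.e. the faulting access did not happen inside a stac()/clac() window; with AC set, normal fault handling and extable fixups still apply. A simplified sketch of how an intentional user access keeps AC set while it runs (loosely modeled on the x86 uaccess helpers; not the real implementation, and read_user_u32() is a made-up name; the real get_user()/__uaccess_begin() paths also register an exception table entry for the access):

#include <linux/types.h>
#include <asm/smap.h>		/* stac(), clac() */

static inline u32 read_user_u32(const u32 __user *uptr)
{
	u32 val;

	stac();			/* set EFLAGS.AC: deliberately touching user memory */
	val = *(const u32 __force *)uptr;
	clac();			/* clear EFLAGS.AC again */

	return val;
}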