From 54a611b605901c7d5d05b6b8f5d04a6ceb0962aa Mon Sep 17 00:00:00 2001 From: Liam R. Howlett Date: Tue, 6 Sep 2022 19:48:39 +0000 Subject: Maple Tree: add new data structure Patch series "Introducing the Maple Tree" The maple tree is an RCU-safe range-based B-tree designed to use modern processor cache efficiently. There are a number of places in the kernel where a non-overlapping range-based tree would be beneficial, especially one with a simple interface. If you use an rbtree with other data structures to improve performance or an interval tree to track non-overlapping ranges, then this is for you. The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf nodes. With the increased branching factor, it is significantly shorter than the rbtree so it has fewer cache misses. The removal of the linked list between subsequent entries also reduces the cache misses and the need to pull in the previous and next VMA during many tree alterations. The first user that is covered in this patch set is the vm_area_struct, where three data structures are replaced by the maple tree: the augmented rbtree, the vma cache, and the linked list of VMAs in the mm_struct. The long term goal is to reduce or remove the mmap_lock contention. The plan is to get to the point where we use the maple tree in RCU mode. Readers will not block for writers. A single write operation will be allowed at a time. A reader re-walks if stale data is encountered. VMAs would be RCU enabled and this mode would be entered once multiple tasks are using the mm_struct. Davidlohr said : Yes I like the maple tree, and at this stage I don't think we can ask for : more from this series wrt the MM - albeit there seems to still be some : folks reporting breakage. Fundamentally I see Liam's work to (re)move : complexity out of the MM (not to say that the actual maple tree is not : complex) by consolidating the three complementary data structures very : much worth it considering performance does not take a hit. This was very : much a turn off with the range locking approach, which worst case scenario : incurred prohibitive overhead. Also as Liam and Matthew have : mentioned, RCU opens up a lot of nice performance opportunities, and in : addition academia[1] has shown outstanding scalability of address spaces : with the foundation of replacing the locked rbtree with RCU aware trees. Similar work has been discovered in the academic press: https://pdos.csail.mit.edu/papers/rcuvm:asplos12.pdf Sheer coincidence. We designed our tree with the intention of solving the hardest problem first. Upon settling on a B-tree variant and a rough outline, we researched range-based B-trees and RCU B-trees and did find that article. So it was nice to find reassurance that we were on the right path, but our design choice of using ranges made that paper unusable for us. This patch (of 70): The maple tree is an RCU-safe range-based B-tree designed to use modern processor cache efficiently. There are a number of places in the kernel where a non-overlapping range-based tree would be beneficial, especially one with a simple interface. If you use an rbtree with other data structures to improve performance or an interval tree to track non-overlapping ranges, then this is for you. The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf nodes. With the increased branching factor, it is significantly shorter than the rbtree so it has fewer cache misses.
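To make the advertised interface concrete, here is a minimal usage sketch in C. It is not part of this patch; the names (DEFINE_MTREE(), mtree_store_range(), mtree_load(), mtree_erase(), mtree_destroy()) follow the API added later in this series via include/linux/maple_tree.h, so treat the exact signatures as assumptions rather than a definitive reference:

#include <linux/maple_tree.h>

static DEFINE_MTREE(example_mt);	/* empty, internally locked tree */

static int maple_tree_example(void)
{
	static int obj;
	void *entry;
	int ret;

	/* Associate the non-overlapping range [0x1000, 0x1fff] with &obj. */
	ret = mtree_store_range(&example_mt, 0x1000, 0x1fff, &obj, GFP_KERNEL);
	if (ret)
		return ret;

	/* A lookup at any index inside the range returns the same entry. */
	entry = mtree_load(&example_mt, 0x1800);

	/* Erasing by any index inside the range removes and returns it. */
	entry = mtree_erase(&example_mt, 0x1234);

	mtree_destroy(&example_mt);
	return entry ? 0 : -ENOENT;
}

Storing an entry against an inclusive range, rather than against a single index, is what lets one tree stand in for the augmented rbtree plus the VMA linked list described here.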
The removal of the linked list between subsequent entries also reduces the cache misses and the need to pull in the previous and next VMA during many tree alterations. The first user that is covered in this patch set is the vm_area_struct, where three data structures are replaced by the maple tree: the augmented rbtree, the vma cache, and the linked list of VMAs in the mm_struct. The long term goal is to reduce or remove the mmap_lock contention. The plan is to get to the point where we use the maple tree in RCU mode. Readers will not block for writers. A single write operation will be allowed at a time. A reader re-walks if stale data is encountered. VMAs would be RCU enabled and this mode would be entered once multiple tasks are using the mm_struct. There are additional BUG_ON() calls added within the tree, most of which are in debug code. These will be replaced with WARN_ON() calls in the future. There are also additional BUG_ON() calls within the code which will be reduced in number at a later date. These exist to catch things such as out-of-range accesses which would crash anyway. Link: https://lkml.kernel.org/r/20220906194824.2110408-1-Liam.Howlett@oracle.com Link: https://lkml.kernel.org/r/20220906194824.2110408-2-Liam.Howlett@oracle.com Signed-off-by: Liam R. Howlett Signed-off-by: Matthew Wilcox (Oracle) Tested-by: David Howells Tested-by: Sven Schnelle Tested-by: Yu Zhao Cc: Vlastimil Babka Cc: David Hildenbrand Cc: Davidlohr Bueso Cc: Catalin Marinas Cc: SeongJae Park Cc: Will Deacon Signed-off-by: Andrew Morton --- MAINTAINERS | 12 ++++++++++++ 1 file changed, 12 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 589517372408..c66b63ad83d8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -12092,6 +12092,18 @@ L: linux-man@vger.kernel.org S: Maintained W: http://www.kernel.org/doc/man-pages +MAPLE TREE +M: Liam R. Howlett +L: linux-mm@kvack.org +S: Supported +F: Documentation/core-api/maple_tree.rst +F: include/linux/maple_tree.h +F: include/trace/events/maple_tree.h +F: lib/maple_tree.c +F: lib/test_maple_tree.c +F: tools/testing/radix-tree/linux/maple_tree.h +F: tools/testing/radix-tree/maple.c + MARDUK (CREATOR CI40) DEVICE TREE SUPPORT M: Rahul Bedarkar L: linux-mips@vger.kernel.org -- cgit v1.2.3 From f7e01ab828fd4bf6d25b1f143a3994241e8572bf Mon Sep 17 00:00:00 2001 From: Andrey Konovalov Date: Tue, 6 Sep 2022 00:18:36 +0200 Subject: kasan: move tests to mm/kasan/ Move KASAN tests to mm/kasan/ to keep the test code alongside the implementation.
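As an aside for readers who want to exercise the relocated tests: the Makefile hunk below keys the KUnit suite off CONFIG_KASAN_KUNIT_TEST and the module-only tests off CONFIG_KASAN_MODULE_TEST. One plausible way to build and run the KUnit suite with the kunit.py wrapper is sketched here; the exact flags, the 'kasan*' suite glob, and KASAN support under the default UML backend are assumptions that depend on the tree and architecture in use:

	./tools/testing/kunit/kunit.py run \
		--kconfig_add CONFIG_KASAN=y \
		--kconfig_add CONFIG_KASAN_KUNIT_TEST=y \
		'kasan*'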
Link: https://lkml.kernel.org/r/676398f0aeecd47d2f8e3369ea0e95563f641a36.1662416260.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Cc: Alexander Potapenko Cc: Andrey Konovalov Cc: Andrey Ryabinin Cc: Dmitry Vyukov Cc: Marco Elver Signed-off-by: Andrew Morton --- MAINTAINERS | 1 - lib/Makefile | 5 - lib/test_kasan.c | 1450 ------------------------------------------ lib/test_kasan_module.c | 141 ---- mm/kasan/Makefile | 8 + mm/kasan/kasan_test.c | 1450 ++++++++++++++++++++++++++++++++++++++++++ mm/kasan/kasan_test_module.c | 141 ++++ 7 files changed, 1599 insertions(+), 1597 deletions(-) delete mode 100644 lib/test_kasan.c delete mode 100644 lib/test_kasan_module.c create mode 100644 mm/kasan/kasan_test.c create mode 100644 mm/kasan/kasan_test_module.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c66b63ad83d8..6f1033f3c1ed 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10938,7 +10938,6 @@ F: arch/*/include/asm/*kasan.h F: arch/*/mm/kasan_init* F: include/linux/kasan*.h F: lib/Kconfig.kasan -F: lib/test_kasan*.c F: mm/kasan/ F: scripts/Makefile.kasan diff --git a/lib/Makefile b/lib/Makefile index 6dc0d6f8e57d..d7d94102991b 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -65,11 +65,6 @@ obj-$(CONFIG_TEST_SYSCTL) += test_sysctl.o obj-$(CONFIG_TEST_SIPHASH) += test_siphash.o obj-$(CONFIG_HASH_KUNIT_TEST) += test_hash.o obj-$(CONFIG_TEST_IDA) += test_ida.o -obj-$(CONFIG_KASAN_KUNIT_TEST) += test_kasan.o -CFLAGS_test_kasan.o += -fno-builtin -CFLAGS_test_kasan.o += $(call cc-disable-warning, vla) -obj-$(CONFIG_KASAN_MODULE_TEST) += test_kasan_module.o -CFLAGS_test_kasan_module.o += -fno-builtin obj-$(CONFIG_TEST_UBSAN) += test_ubsan.o CFLAGS_test_ubsan.o += $(call cc-disable-warning, vla) UBSAN_SANITIZE_test_ubsan.o := y diff --git a/lib/test_kasan.c b/lib/test_kasan.c deleted file mode 100644 index 505f77ffad27..000000000000 --- a/lib/test_kasan.c +++ /dev/null @@ -1,1450 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * - * Copyright (c) 2014 Samsung Electronics Co., Ltd. - * Author: Andrey Ryabinin - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include - -#include - -#include "../mm/kasan/kasan.h" - -#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_GRANULE_SIZE) - -/* - * Some tests use these global variables to store return values from function - * calls that could otherwise be eliminated by the compiler as dead code. - */ -void *kasan_ptr_result; -int kasan_int_result; - -static struct kunit_resource resource; -static struct kunit_kasan_status test_status; -static bool multishot; - -/* - * Temporarily enable multi-shot mode. Otherwise, KASAN would only report the - * first detected bug and panic the kernel if panic_on_warn is enabled. For - * hardware tag-based KASAN also allow tag checking to be reenabled for each - * test, see the comment for KUNIT_EXPECT_KASAN_FAIL(). 
- */ -static int kasan_test_init(struct kunit *test) -{ - if (!kasan_enabled()) { - kunit_err(test, "can't run KASAN tests with KASAN disabled"); - return -1; - } - - multishot = kasan_save_enable_multi_shot(); - test_status.report_found = false; - test_status.sync_fault = false; - kunit_add_named_resource(test, NULL, NULL, &resource, - "kasan_status", &test_status); - return 0; -} - -static void kasan_test_exit(struct kunit *test) -{ - kasan_restore_multi_shot(multishot); - KUNIT_EXPECT_FALSE(test, test_status.report_found); -} - -/** - * KUNIT_EXPECT_KASAN_FAIL() - check that the executed expression produces a - * KASAN report; causes a test failure otherwise. This relies on a KUnit - * resource named "kasan_status". Do not use this name for KUnit resources - * outside of KASAN tests. - * - * For hardware tag-based KASAN, when a synchronous tag fault happens, tag - * checking is auto-disabled. When this happens, this test handler reenables - * tag checking. As tag checking can be only disabled or enabled per CPU, - * this handler disables migration (preemption). - * - * Since the compiler doesn't see that the expression can change the test_status - * fields, it can reorder or optimize away the accesses to those fields. - * Use READ/WRITE_ONCE() for the accesses and compiler barriers around the - * expression to prevent that. - * - * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept - * as false. This allows detecting KASAN reports that happen outside of the - * checks by asserting !test_status.report_found at the start of - * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit. - */ -#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \ - if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \ - kasan_sync_fault_possible()) \ - migrate_disable(); \ - KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \ - barrier(); \ - expression; \ - barrier(); \ - if (kasan_async_fault_possible()) \ - kasan_force_async_fault(); \ - if (!READ_ONCE(test_status.report_found)) { \ - KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \ - "expected in \"" #expression \ - "\", but none occurred"); \ - } \ - if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \ - kasan_sync_fault_possible()) { \ - if (READ_ONCE(test_status.report_found) && \ - READ_ONCE(test_status.sync_fault)) \ - kasan_enable_tagging(); \ - migrate_enable(); \ - } \ - WRITE_ONCE(test_status.report_found, false); \ -} while (0) - -#define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \ - if (!IS_ENABLED(config)) \ - kunit_skip((test), "Test requires " #config "=y"); \ -} while (0) - -#define KASAN_TEST_NEEDS_CONFIG_OFF(test, config) do { \ - if (IS_ENABLED(config)) \ - kunit_skip((test), "Test requires " #config "=n"); \ -} while (0) - -static void kmalloc_oob_right(struct kunit *test) -{ - char *ptr; - size_t size = 128 - KASAN_GRANULE_SIZE - 5; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(ptr); - /* - * An unaligned access past the requested kmalloc size. - * Only generic KASAN can precisely detect these. - */ - if (IS_ENABLED(CONFIG_KASAN_GENERIC)) - KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 'x'); - - /* - * An aligned access into the first out-of-bounds granule that falls - * within the aligned kmalloc object. - */ - KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y'); - - /* Out-of-bounds access past the aligned kmalloc object. 
*/ - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = - ptr[size + KASAN_GRANULE_SIZE + 5]); - - kfree(ptr); -} - -static void kmalloc_oob_left(struct kunit *test) -{ - char *ptr; - size_t size = 15; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1)); - kfree(ptr); -} - -static void kmalloc_node_oob_right(struct kunit *test) -{ - char *ptr; - size_t size = 4096; - - ptr = kmalloc_node(size, GFP_KERNEL, 0); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]); - kfree(ptr); -} - -/* - * These kmalloc_pagealloc_* tests try allocating a memory chunk that doesn't - * fit into a slab cache and therefore is allocated via the page allocator - * fallback. Since this kind of fallback is only implemented for SLUB, these - * tests are limited to that allocator. - */ -static void kmalloc_pagealloc_oob_right(struct kunit *test) -{ - char *ptr; - size_t size = KMALLOC_MAX_CACHE_SIZE + 10; - - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + OOB_TAG_OFF] = 0); - - kfree(ptr); -} - -static void kmalloc_pagealloc_uaf(struct kunit *test) -{ - char *ptr; - size_t size = KMALLOC_MAX_CACHE_SIZE + 10; - - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - kfree(ptr); - - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]); -} - -static void kmalloc_pagealloc_invalid_free(struct kunit *test) -{ - char *ptr; - size_t size = KMALLOC_MAX_CACHE_SIZE + 10; - - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1)); -} - -static void pagealloc_oob_right(struct kunit *test) -{ - char *ptr; - struct page *pages; - size_t order = 4; - size_t size = (1UL << (PAGE_SHIFT + order)); - - /* - * With generic KASAN page allocations have no redzones, thus - * out-of-bounds detection is not guaranteed. - * See https://bugzilla.kernel.org/show_bug.cgi?id=210503. - */ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); - - pages = alloc_pages(GFP_KERNEL, order); - ptr = page_address(pages); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]); - free_pages((unsigned long)ptr, order); -} - -static void pagealloc_uaf(struct kunit *test) -{ - char *ptr; - struct page *pages; - size_t order = 4; - - pages = alloc_pages(GFP_KERNEL, order); - ptr = page_address(pages); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - free_pages((unsigned long)ptr, order); - - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]); -} - -static void kmalloc_large_oob_right(struct kunit *test) -{ - char *ptr; - size_t size = KMALLOC_MAX_CACHE_SIZE - 256; - - /* - * Allocate a chunk that is large enough, but still fits into a slab - * and does not trigger the page allocator fallback in SLUB. 
- */ - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 0); - kfree(ptr); -} - -static void krealloc_more_oob_helper(struct kunit *test, - size_t size1, size_t size2) -{ - char *ptr1, *ptr2; - size_t middle; - - KUNIT_ASSERT_LT(test, size1, size2); - middle = size1 + (size2 - size1) / 2; - - ptr1 = kmalloc(size1, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); - - ptr2 = krealloc(ptr1, size2, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); - - /* All offsets up to size2 must be accessible. */ - ptr2[size1 - 1] = 'x'; - ptr2[size1] = 'x'; - ptr2[middle] = 'x'; - ptr2[size2 - 1] = 'x'; - - /* Generic mode is precise, so unaligned size2 must be inaccessible. */ - if (IS_ENABLED(CONFIG_KASAN_GENERIC)) - KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x'); - - /* For all modes first aligned offset after size2 must be inaccessible. */ - KUNIT_EXPECT_KASAN_FAIL(test, - ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x'); - - kfree(ptr2); -} - -static void krealloc_less_oob_helper(struct kunit *test, - size_t size1, size_t size2) -{ - char *ptr1, *ptr2; - size_t middle; - - KUNIT_ASSERT_LT(test, size2, size1); - middle = size2 + (size1 - size2) / 2; - - ptr1 = kmalloc(size1, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); - - ptr2 = krealloc(ptr1, size2, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); - - /* Must be accessible for all modes. */ - ptr2[size2 - 1] = 'x'; - - /* Generic mode is precise, so unaligned size2 must be inaccessible. */ - if (IS_ENABLED(CONFIG_KASAN_GENERIC)) - KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x'); - - /* For all modes first aligned offset after size2 must be inaccessible. */ - KUNIT_EXPECT_KASAN_FAIL(test, - ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x'); - - /* - * For all modes, size2, middle, and size1 should land in separate - * granules and thus the latter two offsets should be inaccessible. - */ - KUNIT_EXPECT_LE(test, round_up(size2, KASAN_GRANULE_SIZE), - round_down(middle, KASAN_GRANULE_SIZE)); - KUNIT_EXPECT_LE(test, round_up(middle, KASAN_GRANULE_SIZE), - round_down(size1, KASAN_GRANULE_SIZE)); - KUNIT_EXPECT_KASAN_FAIL(test, ptr2[middle] = 'x'); - KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1 - 1] = 'x'); - KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1] = 'x'); - - kfree(ptr2); -} - -static void krealloc_more_oob(struct kunit *test) -{ - krealloc_more_oob_helper(test, 201, 235); -} - -static void krealloc_less_oob(struct kunit *test) -{ - krealloc_less_oob_helper(test, 235, 201); -} - -static void krealloc_pagealloc_more_oob(struct kunit *test) -{ - /* page_alloc fallback is only implemented for SLUB. */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); - - krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201, - KMALLOC_MAX_CACHE_SIZE + 235); -} - -static void krealloc_pagealloc_less_oob(struct kunit *test) -{ - /* page_alloc fallback is only implemented for SLUB. */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); - - krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235, - KMALLOC_MAX_CACHE_SIZE + 201); -} - -/* - * Check that krealloc() detects a use-after-free, returns NULL, - * and doesn't unpoison the freed object.
- */ -static void krealloc_uaf(struct kunit *test) -{ - char *ptr1, *ptr2; - int size1 = 201; - int size2 = 235; - - ptr1 = kmalloc(size1, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); - kfree(ptr1); - - KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL)); - KUNIT_ASSERT_NULL(test, ptr2); - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1); -} - -static void kmalloc_oob_16(struct kunit *test) -{ - struct { - u64 words[2]; - } *ptr1, *ptr2; - - /* This test is specifically crafted for the generic mode. */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); - - ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); - - ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); - - OPTIMIZER_HIDE_VAR(ptr1); - OPTIMIZER_HIDE_VAR(ptr2); - KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2); - kfree(ptr1); - kfree(ptr2); -} - -static void kmalloc_uaf_16(struct kunit *test) -{ - struct { - u64 words[2]; - } *ptr1, *ptr2; - - ptr1 = kmalloc(sizeof(*ptr1), GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); - - ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); - kfree(ptr2); - - KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2); - kfree(ptr1); -} - -/* - * Note: in the memset tests below, the written range touches both valid and - * invalid memory. This makes sure that the instrumentation does not only check - * the starting address but the whole range. - */ - -static void kmalloc_oob_memset_2(struct kunit *test) -{ - char *ptr; - size_t size = 128 - KASAN_GRANULE_SIZE; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(size); - KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 1, 0, 2)); - kfree(ptr); -} - -static void kmalloc_oob_memset_4(struct kunit *test) -{ - char *ptr; - size_t size = 128 - KASAN_GRANULE_SIZE; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(size); - KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 3, 0, 4)); - kfree(ptr); -} - -static void kmalloc_oob_memset_8(struct kunit *test) -{ - char *ptr; - size_t size = 128 - KASAN_GRANULE_SIZE; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(size); - KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 7, 0, 8)); - kfree(ptr); -} - -static void kmalloc_oob_memset_16(struct kunit *test) -{ - char *ptr; - size_t size = 128 - KASAN_GRANULE_SIZE; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(size); - KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 15, 0, 16)); - kfree(ptr); -} - -static void kmalloc_oob_in_memset(struct kunit *test) -{ - char *ptr; - size_t size = 128 - KASAN_GRANULE_SIZE; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(ptr); - OPTIMIZER_HIDE_VAR(size); - KUNIT_EXPECT_KASAN_FAIL(test, - memset(ptr, 0, size + KASAN_GRANULE_SIZE)); - kfree(ptr); -} - -static void kmalloc_memmove_negative_size(struct kunit *test) -{ - char *ptr; - size_t size = 64; - size_t invalid_size = -2; - - /* - * Hardware tag-based mode doesn't check memmove for negative size. - * As a result, this test introduces a side-effect memory corruption, - * which can result in a crash. 
- */ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_HW_TAGS); - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - memset((char *)ptr, 0, 64); - OPTIMIZER_HIDE_VAR(ptr); - OPTIMIZER_HIDE_VAR(invalid_size); - KUNIT_EXPECT_KASAN_FAIL(test, - memmove((char *)ptr, (char *)ptr + 4, invalid_size)); - kfree(ptr); -} - -static void kmalloc_memmove_invalid_size(struct kunit *test) -{ - char *ptr; - size_t size = 64; - volatile size_t invalid_size = size; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - memset((char *)ptr, 0, 64); - OPTIMIZER_HIDE_VAR(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, - memmove((char *)ptr, (char *)ptr + 4, invalid_size)); - kfree(ptr); -} - -static void kmalloc_uaf(struct kunit *test) -{ - char *ptr; - size_t size = 10; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - kfree(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]); -} - -static void kmalloc_uaf_memset(struct kunit *test) -{ - char *ptr; - size_t size = 33; - - /* - * Only generic KASAN uses quarantine, which is required to avoid a - * kernel memory corruption this test causes. - */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - kfree(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr, 0, size)); -} - -static void kmalloc_uaf2(struct kunit *test) -{ - char *ptr1, *ptr2; - size_t size = 43; - int counter = 0; - -again: - ptr1 = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); - - kfree(ptr1); - - ptr2 = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); - - /* - * For tag-based KASAN ptr1 and ptr2 tags might happen to be the same. - * Allow up to 16 attempts at generating different tags. - */ - if (!IS_ENABLED(CONFIG_KASAN_GENERIC) && ptr1 == ptr2 && counter++ < 16) { - kfree(ptr2); - goto again; - } - - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]); - KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2); - - kfree(ptr2); -} - -/* - * Check that KASAN detects use-after-free when another object was allocated in - * the same slot. Relevant for the tag-based modes, which do not use quarantine. - */ -static void kmalloc_uaf3(struct kunit *test) -{ - char *ptr1, *ptr2; - size_t size = 100; - - /* This test is specifically crafted for tag-based modes. 
*/ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); - - ptr1 = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); - kfree(ptr1); - - ptr2 = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); - kfree(ptr2); - - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]); -} - -static void kfree_via_page(struct kunit *test) -{ - char *ptr; - size_t size = 8; - struct page *page; - unsigned long offset; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - page = virt_to_page(ptr); - offset = offset_in_page(ptr); - kfree(page_address(page) + offset); -} - -static void kfree_via_phys(struct kunit *test) -{ - char *ptr; - size_t size = 8; - phys_addr_t phys; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - phys = virt_to_phys(ptr); - kfree(phys_to_virt(phys)); -} - -static void kmem_cache_oob(struct kunit *test) -{ - char *p; - size_t size = 200; - struct kmem_cache *cache; - - cache = kmem_cache_create("test_cache", size, 0, 0, NULL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); - - p = kmem_cache_alloc(cache, GFP_KERNEL); - if (!p) { - kunit_err(test, "Allocation failed: %s\n", __func__); - kmem_cache_destroy(cache); - return; - } - - KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]); - - kmem_cache_free(cache, p); - kmem_cache_destroy(cache); -} - -static void kmem_cache_accounted(struct kunit *test) -{ - int i; - char *p; - size_t size = 200; - struct kmem_cache *cache; - - cache = kmem_cache_create("test_cache", size, 0, SLAB_ACCOUNT, NULL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); - - /* - * Several allocations with a delay to allow for lazy per memcg kmem - * cache creation. - */ - for (i = 0; i < 5; i++) { - p = kmem_cache_alloc(cache, GFP_KERNEL); - if (!p) - goto free_cache; - - kmem_cache_free(cache, p); - msleep(100); - } - -free_cache: - kmem_cache_destroy(cache); -} - -static void kmem_cache_bulk(struct kunit *test) -{ - struct kmem_cache *cache; - size_t size = 200; - char *p[10]; - bool ret; - int i; - - cache = kmem_cache_create("test_cache", size, 0, 0, NULL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); - - ret = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(p), (void **)&p); - if (!ret) { - kunit_err(test, "Allocation failed: %s\n", __func__); - kmem_cache_destroy(cache); - return; - } - - for (i = 0; i < ARRAY_SIZE(p); i++) - p[i][0] = p[i][size - 1] = 42; - - kmem_cache_free_bulk(cache, ARRAY_SIZE(p), (void **)&p); - kmem_cache_destroy(cache); -} - -static char global_array[10]; - -static void kasan_global_oob_right(struct kunit *test) -{ - /* - * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS - * from failing here and panicking the kernel, access the array via a - * volatile pointer, which will prevent the compiler from being able to - * determine the array bounds. - * - * This access uses a volatile pointer to char (char *volatile) rather - * than the more conventional pointer to volatile char (volatile char *) - * because we want to prevent the compiler from making inferences about - * the pointer itself (i.e. its array bounds), not the data that it - * refers to. - */ - char *volatile array = global_array; - char *p = &array[ARRAY_SIZE(global_array) + 3]; - - /* Only generic mode instruments globals. 
*/ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); - - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); -} - -static void kasan_global_oob_left(struct kunit *test) -{ - char *volatile array = global_array; - char *p = array - 3; - - /* - * GCC is known to fail this test, skip it. - * See https://bugzilla.kernel.org/show_bug.cgi?id=215051. - */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_CC_IS_CLANG); - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); -} - -/* Check that ksize() makes the whole object accessible. */ -static void ksize_unpoisons_memory(struct kunit *test) -{ - char *ptr; - size_t size = 123, real_size; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - real_size = ksize(ptr); - - OPTIMIZER_HIDE_VAR(ptr); - - /* This access shouldn't trigger a KASAN report. */ - ptr[size] = 'x'; - - /* This one must. */ - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]); - - kfree(ptr); -} - -/* - * Check that a use-after-free is detected by ksize() and via normal accesses - * after it. - */ -static void ksize_uaf(struct kunit *test) -{ - char *ptr; - int size = 128 - KASAN_GRANULE_SIZE; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - kfree(ptr); - - OPTIMIZER_HIDE_VAR(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr)); - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]); - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]); -} - -static void kasan_stack_oob(struct kunit *test) -{ - char stack_array[10]; - /* See comment in kasan_global_oob_right. */ - char *volatile array = stack_array; - char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF]; - - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); - - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); -} - -static void kasan_alloca_oob_left(struct kunit *test) -{ - volatile int i = 10; - char alloca_array[i]; - /* See comment in kasan_global_oob_right. */ - char *volatile array = alloca_array; - char *p = array - 1; - - /* Only generic mode instruments dynamic allocas. */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); - - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); -} - -static void kasan_alloca_oob_right(struct kunit *test) -{ - volatile int i = 10; - char alloca_array[i]; - /* See comment in kasan_global_oob_right. */ - char *volatile array = alloca_array; - char *p = array + i; - - /* Only generic mode instruments dynamic allocas. 
*/ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); - - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); -} - -static void kmem_cache_double_free(struct kunit *test) -{ - char *p; - size_t size = 200; - struct kmem_cache *cache; - - cache = kmem_cache_create("test_cache", size, 0, 0, NULL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); - - p = kmem_cache_alloc(cache, GFP_KERNEL); - if (!p) { - kunit_err(test, "Allocation failed: %s\n", __func__); - kmem_cache_destroy(cache); - return; - } - - kmem_cache_free(cache, p); - KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p)); - kmem_cache_destroy(cache); -} - -static void kmem_cache_invalid_free(struct kunit *test) -{ - char *p; - size_t size = 200; - struct kmem_cache *cache; - - cache = kmem_cache_create("test_cache", size, 0, SLAB_TYPESAFE_BY_RCU, - NULL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); - - p = kmem_cache_alloc(cache, GFP_KERNEL); - if (!p) { - kunit_err(test, "Allocation failed: %s\n", __func__); - kmem_cache_destroy(cache); - return; - } - - /* Trigger invalid free, the object doesn't get freed. */ - KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p + 1)); - - /* - * Properly free the object to prevent the "Objects remaining in - * test_cache on __kmem_cache_shutdown" BUG failure. - */ - kmem_cache_free(cache, p); - - kmem_cache_destroy(cache); -} - -static void empty_cache_ctor(void *object) { } - -static void kmem_cache_double_destroy(struct kunit *test) -{ - struct kmem_cache *cache; - - /* Provide a constructor to prevent cache merging. */ - cache = kmem_cache_create("test_cache", 200, 0, 0, empty_cache_ctor); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); - kmem_cache_destroy(cache); - KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_destroy(cache)); -} - -static void kasan_memchr(struct kunit *test) -{ - char *ptr; - size_t size = 24; - - /* - * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT. - * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details. - */ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT); - - if (OOB_TAG_OFF) - size = round_up(size, OOB_TAG_OFF); - - ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - OPTIMIZER_HIDE_VAR(ptr); - OPTIMIZER_HIDE_VAR(size); - KUNIT_EXPECT_KASAN_FAIL(test, - kasan_ptr_result = memchr(ptr, '1', size + 1)); - - kfree(ptr); -} - -static void kasan_memcmp(struct kunit *test) -{ - char *ptr; - size_t size = 24; - int arr[9]; - - /* - * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT. - * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details. - */ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT); - - if (OOB_TAG_OFF) - size = round_up(size, OOB_TAG_OFF); - - ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - memset(arr, 0, sizeof(arr)); - - OPTIMIZER_HIDE_VAR(ptr); - OPTIMIZER_HIDE_VAR(size); - KUNIT_EXPECT_KASAN_FAIL(test, - kasan_int_result = memcmp(ptr, arr, size+1)); - kfree(ptr); -} - -static void kasan_strings(struct kunit *test) -{ - char *ptr; - size_t size = 24; - - /* - * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT. - * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details. - */ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT); - - ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - kfree(ptr); - - /* - * Try to cause only 1 invalid access (less spam in dmesg). 
- * For that we need ptr to point to zeroed byte. - * Skip metadata that could be stored in freed object so ptr - * will likely point to zeroed byte. - */ - ptr += 16; - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1')); - - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1')); - - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2")); - - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1)); - - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr)); - - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1)); -} - -static void kasan_bitops_modify(struct kunit *test, int nr, void *addr) -{ - KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, __set_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, clear_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, clear_bit_unlock(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit_unlock(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, change_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, __change_bit(nr, addr)); -} - -static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr) -{ - KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr)); - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr)); - -#if defined(clear_bit_unlock_is_negative_byte) - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = - clear_bit_unlock_is_negative_byte(nr, addr)); -#endif -} - -static void kasan_bitops_generic(struct kunit *test) -{ - long *bits; - - /* This test is specifically crafted for the generic mode. */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); - - /* - * Allocate 1 more byte, which causes kzalloc to round up to 16 bytes; - * this way we do not actually corrupt other memory. - */ - bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits); - - /* - * Below calls try to access bit within allocated memory; however, the - * below accesses are still out-of-bounds, since bitops are defined to - * operate on the whole long the bit is in. - */ - kasan_bitops_modify(test, BITS_PER_LONG, bits); - - /* - * Below calls try to access bit beyond allocated memory. - */ - kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, bits); - - kfree(bits); -} - -static void kasan_bitops_tags(struct kunit *test) -{ - long *bits; - - /* This test is specifically crafted for tag-based modes. */ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); - - /* kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */ - bits = kzalloc(48, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits); - - /* Do the accesses past the 48 allocated bytes, but within the redzone.
*/ - kasan_bitops_modify(test, BITS_PER_LONG, (void *)bits + 48); - kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, (void *)bits + 48); - - kfree(bits); -} - -static void kmalloc_double_kzfree(struct kunit *test) -{ - char *ptr; - size_t size = 16; - - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - kfree_sensitive(ptr); - KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr)); -} - -static void vmalloc_helpers_tags(struct kunit *test) -{ - void *ptr; - - /* This test is intended for tag-based modes. */ - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); - - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC); - - ptr = vmalloc(PAGE_SIZE); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - /* Check that the returned pointer is tagged. */ - KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); - - /* Make sure exported vmalloc helpers handle tagged pointers. */ - KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr)); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr)); - -#if !IS_MODULE(CONFIG_KASAN_KUNIT_TEST) - { - int rv; - - /* Make sure vmalloc'ed memory permissions can be changed. */ - rv = set_memory_ro((unsigned long)ptr, 1); - KUNIT_ASSERT_GE(test, rv, 0); - rv = set_memory_rw((unsigned long)ptr, 1); - KUNIT_ASSERT_GE(test, rv, 0); - } -#endif - - vfree(ptr); -} - -static void vmalloc_oob(struct kunit *test) -{ - char *v_ptr, *p_ptr; - struct page *page; - size_t size = PAGE_SIZE / 2 - KASAN_GRANULE_SIZE - 5; - - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC); - - v_ptr = vmalloc(size); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr); - - OPTIMIZER_HIDE_VAR(v_ptr); - - /* - * We have to be careful not to hit the guard page in vmalloc tests. - * The MMU will catch that and crash us. - */ - - /* Make sure in-bounds accesses are valid. */ - v_ptr[0] = 0; - v_ptr[size - 1] = 0; - - /* - * An unaligned access past the requested vmalloc size. - * Only generic KASAN can precisely detect these. - */ - if (IS_ENABLED(CONFIG_KASAN_GENERIC)) - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]); - - /* An aligned access into the first out-of-bounds granule. */ - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]); - - /* Check that in-bounds accesses to the physical page are valid. */ - page = vmalloc_to_page(v_ptr); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page); - p_ptr = page_address(page); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr); - p_ptr[0] = 0; - - vfree(v_ptr); - - /* - * We can't check for use-after-unmap bugs in this nor in the following - * vmalloc tests, as the page might be fully unmapped and accessing it - * will crash the kernel. - */ -} - -static void vmap_tags(struct kunit *test) -{ - char *p_ptr, *v_ptr; - struct page *p_page, *v_page; - - /* - * This test is specifically crafted for the software tag-based mode, - * the only tag-based mode that poisons vmap mappings. - */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS); - - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC); - - p_page = alloc_pages(GFP_KERNEL, 1); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_page); - p_ptr = page_address(p_page); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr); - - v_ptr = vmap(&p_page, 1, VM_MAP, PAGE_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr); - - /* - * We can't check for out-of-bounds bugs in this nor in the following - * vmalloc tests, as allocations have page granularity and accessing - * the guard page will crash the kernel. 
- */ - - KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL); - - /* Make sure that in-bounds accesses through both pointers work. */ - *p_ptr = 0; - *v_ptr = 0; - - /* Make sure vmalloc_to_page() correctly recovers the page pointer. */ - v_page = vmalloc_to_page(v_ptr); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_page); - KUNIT_EXPECT_PTR_EQ(test, p_page, v_page); - - vunmap(v_ptr); - free_pages((unsigned long)p_ptr, 1); -} - -static void vm_map_ram_tags(struct kunit *test) -{ - char *p_ptr, *v_ptr; - struct page *page; - - /* - * This test is specifically crafted for the software tag-based mode, - * the only tag-based mode that poisons vm_map_ram mappings. - */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS); - - page = alloc_pages(GFP_KERNEL, 1); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page); - p_ptr = page_address(page); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr); - - v_ptr = vm_map_ram(&page, 1, -1); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr); - - KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL); - - /* Make sure that in-bounds accesses through both pointers work. */ - *p_ptr = 0; - *v_ptr = 0; - - vm_unmap_ram(v_ptr, 1); - free_pages((unsigned long)p_ptr, 1); -} - -static void vmalloc_percpu(struct kunit *test) -{ - char __percpu *ptr; - int cpu; - - /* - * This test is specifically crafted for the software tag-based mode, - * the only tag-based mode that poisons percpu mappings. - */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS); - - ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE); - - for_each_possible_cpu(cpu) { - char *c_ptr = per_cpu_ptr(ptr, cpu); - - KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL); - - /* Make sure that in-bounds accesses don't crash the kernel. */ - *c_ptr = 0; - } - - free_percpu(ptr); -} - -/* - * Check that the assigned pointer tag falls within the [KASAN_TAG_MIN, - * KASAN_TAG_KERNEL) range (note: excluding the match-all tag) for tag-based - * modes. - */ -static void match_all_not_assigned(struct kunit *test) -{ - char *ptr; - struct page *pages; - int i, size, order; - - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); - - for (i = 0; i < 256; i++) { - size = (get_random_int() % 1024) + 1; - ptr = kmalloc(size, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); - kfree(ptr); - } - - for (i = 0; i < 256; i++) { - order = (get_random_int() % 4) + 1; - pages = alloc_pages(GFP_KERNEL, order); - ptr = page_address(pages); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); - free_pages((unsigned long)ptr, order); - } - - if (!IS_ENABLED(CONFIG_KASAN_VMALLOC)) - return; - - for (i = 0; i < 256; i++) { - size = (get_random_int() % 1024) + 1; - ptr = vmalloc(size); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); - vfree(ptr); - } -} - -/* Check that 0xff works as a match-all pointer tag for tag-based modes. 
*/ -static void match_all_ptr_tag(struct kunit *test) -{ - char *ptr; - u8 tag; - - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); - - ptr = kmalloc(128, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - - /* Backup the assigned tag. */ - tag = get_tag(ptr); - KUNIT_EXPECT_NE(test, tag, (u8)KASAN_TAG_KERNEL); - - /* Reset the tag to 0xff.*/ - ptr = set_tag(ptr, KASAN_TAG_KERNEL); - - /* This access shouldn't trigger a KASAN report. */ - *ptr = 0; - - /* Recover the pointer tag and free. */ - ptr = set_tag(ptr, tag); - kfree(ptr); -} - -/* Check that there are no match-all memory tags for tag-based modes. */ -static void match_all_mem_tag(struct kunit *test) -{ - char *ptr; - int tag; - - KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); - - ptr = kmalloc(128, GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); - KUNIT_EXPECT_NE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); - - /* For each possible tag value not matching the pointer tag. */ - for (tag = KASAN_TAG_MIN; tag <= KASAN_TAG_KERNEL; tag++) { - if (tag == get_tag(ptr)) - continue; - - /* Mark the first memory granule with the chosen memory tag. */ - kasan_poison(ptr, KASAN_GRANULE_SIZE, (u8)tag, false); - - /* This access must cause a KASAN report. */ - KUNIT_EXPECT_KASAN_FAIL(test, *ptr = 0); - } - - /* Recover the memory tag and free. */ - kasan_poison(ptr, KASAN_GRANULE_SIZE, get_tag(ptr), false); - kfree(ptr); -} - -static struct kunit_case kasan_kunit_test_cases[] = { - KUNIT_CASE(kmalloc_oob_right), - KUNIT_CASE(kmalloc_oob_left), - KUNIT_CASE(kmalloc_node_oob_right), - KUNIT_CASE(kmalloc_pagealloc_oob_right), - KUNIT_CASE(kmalloc_pagealloc_uaf), - KUNIT_CASE(kmalloc_pagealloc_invalid_free), - KUNIT_CASE(pagealloc_oob_right), - KUNIT_CASE(pagealloc_uaf), - KUNIT_CASE(kmalloc_large_oob_right), - KUNIT_CASE(krealloc_more_oob), - KUNIT_CASE(krealloc_less_oob), - KUNIT_CASE(krealloc_pagealloc_more_oob), - KUNIT_CASE(krealloc_pagealloc_less_oob), - KUNIT_CASE(krealloc_uaf), - KUNIT_CASE(kmalloc_oob_16), - KUNIT_CASE(kmalloc_uaf_16), - KUNIT_CASE(kmalloc_oob_in_memset), - KUNIT_CASE(kmalloc_oob_memset_2), - KUNIT_CASE(kmalloc_oob_memset_4), - KUNIT_CASE(kmalloc_oob_memset_8), - KUNIT_CASE(kmalloc_oob_memset_16), - KUNIT_CASE(kmalloc_memmove_negative_size), - KUNIT_CASE(kmalloc_memmove_invalid_size), - KUNIT_CASE(kmalloc_uaf), - KUNIT_CASE(kmalloc_uaf_memset), - KUNIT_CASE(kmalloc_uaf2), - KUNIT_CASE(kmalloc_uaf3), - KUNIT_CASE(kfree_via_page), - KUNIT_CASE(kfree_via_phys), - KUNIT_CASE(kmem_cache_oob), - KUNIT_CASE(kmem_cache_accounted), - KUNIT_CASE(kmem_cache_bulk), - KUNIT_CASE(kasan_global_oob_right), - KUNIT_CASE(kasan_global_oob_left), - KUNIT_CASE(kasan_stack_oob), - KUNIT_CASE(kasan_alloca_oob_left), - KUNIT_CASE(kasan_alloca_oob_right), - KUNIT_CASE(ksize_unpoisons_memory), - KUNIT_CASE(ksize_uaf), - KUNIT_CASE(kmem_cache_double_free), - KUNIT_CASE(kmem_cache_invalid_free), - KUNIT_CASE(kmem_cache_double_destroy), - KUNIT_CASE(kasan_memchr), - KUNIT_CASE(kasan_memcmp), - KUNIT_CASE(kasan_strings), - KUNIT_CASE(kasan_bitops_generic), - KUNIT_CASE(kasan_bitops_tags), - KUNIT_CASE(kmalloc_double_kzfree), - KUNIT_CASE(vmalloc_helpers_tags), - KUNIT_CASE(vmalloc_oob), - KUNIT_CASE(vmap_tags), - KUNIT_CASE(vm_map_ram_tags), - KUNIT_CASE(vmalloc_percpu), - KUNIT_CASE(match_all_not_assigned), - KUNIT_CASE(match_all_ptr_tag), - KUNIT_CASE(match_all_mem_tag), - {} -}; - -static struct kunit_suite kasan_kunit_test_suite = { - .name = "kasan", - .init = kasan_test_init, - .test_cases = 
kasan_kunit_test_cases, - .exit = kasan_test_exit, -}; - -kunit_test_suite(kasan_kunit_test_suite); - -MODULE_LICENSE("GPL"); diff --git a/lib/test_kasan_module.c b/lib/test_kasan_module.c deleted file mode 100644 index b112cbc835e9..000000000000 --- a/lib/test_kasan_module.c +++ /dev/null @@ -1,141 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * - * Copyright (c) 2014 Samsung Electronics Co., Ltd. - * Author: Andrey Ryabinin - */ - -#define pr_fmt(fmt) "kasan test: %s " fmt, __func__ - -#include -#include -#include -#include -#include - -#include "../mm/kasan/kasan.h" - -static noinline void __init copy_user_test(void) -{ - char *kmem; - char __user *usermem; - size_t size = 128 - KASAN_GRANULE_SIZE; - int __maybe_unused unused; - - kmem = kmalloc(size, GFP_KERNEL); - if (!kmem) - return; - - usermem = (char __user *)vm_mmap(NULL, 0, PAGE_SIZE, - PROT_READ | PROT_WRITE | PROT_EXEC, - MAP_ANONYMOUS | MAP_PRIVATE, 0); - if (IS_ERR(usermem)) { - pr_err("Failed to allocate user memory\n"); - kfree(kmem); - return; - } - - OPTIMIZER_HIDE_VAR(size); - - pr_info("out-of-bounds in copy_from_user()\n"); - unused = copy_from_user(kmem, usermem, size + 1); - - pr_info("out-of-bounds in copy_to_user()\n"); - unused = copy_to_user(usermem, kmem, size + 1); - - pr_info("out-of-bounds in __copy_from_user()\n"); - unused = __copy_from_user(kmem, usermem, size + 1); - - pr_info("out-of-bounds in __copy_to_user()\n"); - unused = __copy_to_user(usermem, kmem, size + 1); - - pr_info("out-of-bounds in __copy_from_user_inatomic()\n"); - unused = __copy_from_user_inatomic(kmem, usermem, size + 1); - - pr_info("out-of-bounds in __copy_to_user_inatomic()\n"); - unused = __copy_to_user_inatomic(usermem, kmem, size + 1); - - pr_info("out-of-bounds in strncpy_from_user()\n"); - unused = strncpy_from_user(kmem, usermem, size + 1); - - vm_munmap((unsigned long)usermem, PAGE_SIZE); - kfree(kmem); -} - -static struct kasan_rcu_info { - int i; - struct rcu_head rcu; -} *global_rcu_ptr; - -static noinline void __init kasan_rcu_reclaim(struct rcu_head *rp) -{ - struct kasan_rcu_info *fp = container_of(rp, - struct kasan_rcu_info, rcu); - - kfree(fp); - ((volatile struct kasan_rcu_info *)fp)->i; -} - -static noinline void __init kasan_rcu_uaf(void) -{ - struct kasan_rcu_info *ptr; - - pr_info("use-after-free in kasan_rcu_reclaim\n"); - ptr = kmalloc(sizeof(struct kasan_rcu_info), GFP_KERNEL); - if (!ptr) { - pr_err("Allocation failed\n"); - return; - } - - global_rcu_ptr = rcu_dereference_protected(ptr, NULL); - call_rcu(&global_rcu_ptr->rcu, kasan_rcu_reclaim); -} - -static noinline void __init kasan_workqueue_work(struct work_struct *work) -{ - kfree(work); -} - -static noinline void __init kasan_workqueue_uaf(void) -{ - struct workqueue_struct *workqueue; - struct work_struct *work; - - workqueue = create_workqueue("kasan_wq_test"); - if (!workqueue) { - pr_err("Allocation failed\n"); - return; - } - work = kmalloc(sizeof(struct work_struct), GFP_KERNEL); - if (!work) { - pr_err("Allocation failed\n"); - return; - } - - INIT_WORK(work, kasan_workqueue_work); - queue_work(workqueue, work); - destroy_workqueue(workqueue); - - pr_info("use-after-free on workqueue\n"); - ((volatile struct work_struct *)work)->data; -} - -static int __init test_kasan_module_init(void) -{ - /* - * Temporarily enable multi-shot mode. Otherwise, KASAN would only - * report the first detected bug and panic the kernel if panic_on_warn - * is enabled. 
- */ - bool multishot = kasan_save_enable_multi_shot(); - - copy_user_test(); - kasan_rcu_uaf(); - kasan_workqueue_uaf(); - - kasan_restore_multi_shot(multishot); - return -EAGAIN; -} - -module_init(test_kasan_module_init); -MODULE_LICENSE("GPL"); diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile index 1f84df9c302e..d4837bff3b60 100644 --- a/mm/kasan/Makefile +++ b/mm/kasan/Makefile @@ -35,7 +35,15 @@ CFLAGS_shadow.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_hw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_sw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) +CFLAGS_KASAN_TEST := $(CFLAGS_KASAN) -fno-builtin $(call cc-disable-warning, vla) + +CFLAGS_kasan_test.o := $(CFLAGS_KASAN_TEST) +CFLAGS_kasan_test_module.o := $(CFLAGS_KASAN_TEST) + obj-y := common.o report.o obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o report_generic.o shadow.o quarantine.o obj-$(CONFIG_KASAN_HW_TAGS) += hw_tags.o report_hw_tags.o tags.o report_tags.o obj-$(CONFIG_KASAN_SW_TAGS) += init.o report_sw_tags.o shadow.o sw_tags.o tags.o report_tags.o + +obj-$(CONFIG_KASAN_KUNIT_TEST) += kasan_test.o +obj-$(CONFIG_KASAN_MODULE_TEST) += kasan_test_module.o diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c new file mode 100644 index 000000000000..f25692def781 --- /dev/null +++ b/mm/kasan/kasan_test.c @@ -0,0 +1,1450 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * + * Copyright (c) 2014 Samsung Electronics Co., Ltd. + * Author: Andrey Ryabinin + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include + +#include "kasan.h" + +#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_GRANULE_SIZE) + +/* + * Some tests use these global variables to store return values from function + * calls that could otherwise be eliminated by the compiler as dead code. + */ +void *kasan_ptr_result; +int kasan_int_result; + +static struct kunit_resource resource; +static struct kunit_kasan_status test_status; +static bool multishot; + +/* + * Temporarily enable multi-shot mode. Otherwise, KASAN would only report the + * first detected bug and panic the kernel if panic_on_warn is enabled. For + * hardware tag-based KASAN also allow tag checking to be reenabled for each + * test, see the comment for KUNIT_EXPECT_KASAN_FAIL(). + */ +static int kasan_test_init(struct kunit *test) +{ + if (!kasan_enabled()) { + kunit_err(test, "can't run KASAN tests with KASAN disabled"); + return -1; + } + + multishot = kasan_save_enable_multi_shot(); + test_status.report_found = false; + test_status.sync_fault = false; + kunit_add_named_resource(test, NULL, NULL, &resource, + "kasan_status", &test_status); + return 0; +} + +static void kasan_test_exit(struct kunit *test) +{ + kasan_restore_multi_shot(multishot); + KUNIT_EXPECT_FALSE(test, test_status.report_found); +} + +/** + * KUNIT_EXPECT_KASAN_FAIL() - check that the executed expression produces a + * KASAN report; causes a test failure otherwise. This relies on a KUnit + * resource named "kasan_status". Do not use this name for KUnit resources + * outside of KASAN tests. + * + * For hardware tag-based KASAN, when a synchronous tag fault happens, tag + * checking is auto-disabled. When this happens, this test handler reenables + * tag checking. As tag checking can be only disabled or enabled per CPU, + * this handler disables migration (preemption). 
+ * + * Since the compiler doesn't see that the expression can change the test_status + * fields, it can reorder or optimize away the accesses to those fields. + * Use READ/WRITE_ONCE() for the accesses and compiler barriers around the + * expression to prevent that. + * + * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept + * as false. This allows detecting KASAN reports that happen outside of the + * checks by asserting !test_status.report_found at the start of + * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit. + */ +#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \ + if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \ + kasan_sync_fault_possible()) \ + migrate_disable(); \ + KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \ + barrier(); \ + expression; \ + barrier(); \ + if (kasan_async_fault_possible()) \ + kasan_force_async_fault(); \ + if (!READ_ONCE(test_status.report_found)) { \ + KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \ + "expected in \"" #expression \ + "\", but none occurred"); \ + } \ + if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \ + kasan_sync_fault_possible()) { \ + if (READ_ONCE(test_status.report_found) && \ + READ_ONCE(test_status.sync_fault)) \ + kasan_enable_tagging(); \ + migrate_enable(); \ + } \ + WRITE_ONCE(test_status.report_found, false); \ +} while (0) + +#define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \ + if (!IS_ENABLED(config)) \ + kunit_skip((test), "Test requires " #config "=y"); \ +} while (0) + +#define KASAN_TEST_NEEDS_CONFIG_OFF(test, config) do { \ + if (IS_ENABLED(config)) \ + kunit_skip((test), "Test requires " #config "=n"); \ +} while (0) + +static void kmalloc_oob_right(struct kunit *test) +{ + char *ptr; + size_t size = 128 - KASAN_GRANULE_SIZE - 5; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(ptr); + /* + * An unaligned access past the requested kmalloc size. + * Only generic KASAN can precisely detect these. + */ + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 'x'); + + /* + * An aligned access into the first out-of-bounds granule that falls + * within the aligned kmalloc object. + */ + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y'); + + /* Out-of-bounds access past the aligned kmalloc object. */ + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = + ptr[size + KASAN_GRANULE_SIZE + 5]); + + kfree(ptr); +} + +static void kmalloc_oob_left(struct kunit *test) +{ + char *ptr; + size_t size = 15; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1)); + kfree(ptr); +} + +static void kmalloc_node_oob_right(struct kunit *test) +{ + char *ptr; + size_t size = 4096; + + ptr = kmalloc_node(size, GFP_KERNEL, 0); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]); + kfree(ptr); +} + +/* + * These kmalloc_pagealloc_* tests try allocating a memory chunk that doesn't + * fit into a slab cache and therefore is allocated via the page allocator + * fallback. Since this kind of fallback is only implemented for SLUB, these + * tests are limited to that allocator. 
+ */ +static void kmalloc_pagealloc_oob_right(struct kunit *test) +{ + char *ptr; + size_t size = KMALLOC_MAX_CACHE_SIZE + 10; + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + OOB_TAG_OFF] = 0); + + kfree(ptr); +} + +static void kmalloc_pagealloc_uaf(struct kunit *test) +{ + char *ptr; + size_t size = KMALLOC_MAX_CACHE_SIZE + 10; + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + kfree(ptr); + + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]); +} + +static void kmalloc_pagealloc_invalid_free(struct kunit *test) +{ + char *ptr; + size_t size = KMALLOC_MAX_CACHE_SIZE + 10; + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1)); +} + +static void pagealloc_oob_right(struct kunit *test) +{ + char *ptr; + struct page *pages; + size_t order = 4; + size_t size = (1UL << (PAGE_SHIFT + order)); + + /* + * With generic KASAN page allocations have no redzones, thus + * out-of-bounds detection is not guaranteed. + * See https://bugzilla.kernel.org/show_bug.cgi?id=210503. + */ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); + + pages = alloc_pages(GFP_KERNEL, order); + ptr = page_address(pages); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]); + free_pages((unsigned long)ptr, order); +} + +static void pagealloc_uaf(struct kunit *test) +{ + char *ptr; + struct page *pages; + size_t order = 4; + + pages = alloc_pages(GFP_KERNEL, order); + ptr = page_address(pages); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + free_pages((unsigned long)ptr, order); + + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]); +} + +static void kmalloc_large_oob_right(struct kunit *test) +{ + char *ptr; + size_t size = KMALLOC_MAX_CACHE_SIZE - 256; + + /* + * Allocate a chunk that is large enough, but still fits into a slab + * and does not trigger the page allocator fallback in SLUB. + */ + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 0); + kfree(ptr); +} + +static void krealloc_more_oob_helper(struct kunit *test, + size_t size1, size_t size2) +{ + char *ptr1, *ptr2; + size_t middle; + + KUNIT_ASSERT_LT(test, size1, size2); + middle = size1 + (size2 - size1) / 2; + + ptr1 = kmalloc(size1, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); + + ptr2 = krealloc(ptr1, size2, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); + + /* All offsets up to size2 must be accessible. */ + ptr2[size1 - 1] = 'x'; + ptr2[size1] = 'x'; + ptr2[middle] = 'x'; + ptr2[size2 - 1] = 'x'; + + /* Generic mode is precise, so unaligned size2 must be inaccessible. */ + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x'); + + /* For all modes first aligned offset after size2 must be inaccessible. 
*/ + KUNIT_EXPECT_KASAN_FAIL(test, + ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x'); + + kfree(ptr2); +} + +static void krealloc_less_oob_helper(struct kunit *test, + size_t size1, size_t size2) +{ + char *ptr1, *ptr2; + size_t middle; + + KUNIT_ASSERT_LT(test, size2, size1); + middle = size2 + (size1 - size2) / 2; + + ptr1 = kmalloc(size1, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); + + ptr2 = krealloc(ptr1, size2, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); + + /* Must be accessible for all modes. */ + ptr2[size2 - 1] = 'x'; + + /* Generic mode is precise, so unaligned size2 must be inaccessible. */ + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x'); + + /* For all modes first aligned offset after size2 must be inaccessible. */ + KUNIT_EXPECT_KASAN_FAIL(test, + ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x'); + + /* + * For all modes, size2, middle, and size1 should land in separate + * granules and thus the latter two offsets should be inaccessible. + */ + KUNIT_EXPECT_LE(test, round_up(size2, KASAN_GRANULE_SIZE), + round_down(middle, KASAN_GRANULE_SIZE)); + KUNIT_EXPECT_LE(test, round_up(middle, KASAN_GRANULE_SIZE), + round_down(size1, KASAN_GRANULE_SIZE)); + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[middle] = 'x'); + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1 - 1] = 'x'); + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1] = 'x'); + + kfree(ptr2); +} + +static void krealloc_more_oob(struct kunit *test) +{ + krealloc_more_oob_helper(test, 201, 235); +} + +static void krealloc_less_oob(struct kunit *test) +{ + krealloc_less_oob_helper(test, 235, 201); +} + +static void krealloc_pagealloc_more_oob(struct kunit *test) +{ + /* page_alloc fallback is only implemented for SLUB. */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); + + krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201, + KMALLOC_MAX_CACHE_SIZE + 235); +} + +static void krealloc_pagealloc_less_oob(struct kunit *test) +{ + /* page_alloc fallback is only implemented for SLUB. */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); + + krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235, + KMALLOC_MAX_CACHE_SIZE + 201); +} + +/* + * Check that krealloc() detects a use-after-free, returns NULL, + * and doesn't unpoison the freed object. + */ +static void krealloc_uaf(struct kunit *test) +{ + char *ptr1, *ptr2; + int size1 = 201; + int size2 = 235; + + ptr1 = kmalloc(size1, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); + kfree(ptr1); + + KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL)); + KUNIT_ASSERT_NULL(test, ptr2); + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1); +} + +static void kmalloc_oob_16(struct kunit *test) +{ + struct { + u64 words[2]; + } *ptr1, *ptr2; + + /* This test is specifically crafted for the generic mode.
*/ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); + + ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); + + ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); + + OPTIMIZER_HIDE_VAR(ptr1); + OPTIMIZER_HIDE_VAR(ptr2); + KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2); + kfree(ptr1); + kfree(ptr2); +} + +static void kmalloc_uaf_16(struct kunit *test) +{ + struct { + u64 words[2]; + } *ptr1, *ptr2; + + ptr1 = kmalloc(sizeof(*ptr1), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); + + ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); + kfree(ptr2); + + KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2); + kfree(ptr1); +} + +/* + * Note: in the memset tests below, the written range touches both valid and + * invalid memory. This makes sure that the instrumentation does not only check + * the starting address but the whole range. + */ + +static void kmalloc_oob_memset_2(struct kunit *test) +{ + char *ptr; + size_t size = 128 - KASAN_GRANULE_SIZE; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(size); + KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 1, 0, 2)); + kfree(ptr); +} + +static void kmalloc_oob_memset_4(struct kunit *test) +{ + char *ptr; + size_t size = 128 - KASAN_GRANULE_SIZE; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(size); + KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 3, 0, 4)); + kfree(ptr); +} + +static void kmalloc_oob_memset_8(struct kunit *test) +{ + char *ptr; + size_t size = 128 - KASAN_GRANULE_SIZE; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(size); + KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 7, 0, 8)); + kfree(ptr); +} + +static void kmalloc_oob_memset_16(struct kunit *test) +{ + char *ptr; + size_t size = 128 - KASAN_GRANULE_SIZE; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(size); + KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 15, 0, 16)); + kfree(ptr); +} + +static void kmalloc_oob_in_memset(struct kunit *test) +{ + char *ptr; + size_t size = 128 - KASAN_GRANULE_SIZE; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(ptr); + OPTIMIZER_HIDE_VAR(size); + KUNIT_EXPECT_KASAN_FAIL(test, + memset(ptr, 0, size + KASAN_GRANULE_SIZE)); + kfree(ptr); +} + +static void kmalloc_memmove_negative_size(struct kunit *test) +{ + char *ptr; + size_t size = 64; + size_t invalid_size = -2; + + /* + * Hardware tag-based mode doesn't check memmove for negative size. + * As a result, this test introduces a side-effect memory corruption, + * which can result in a crash. 
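+ * (size_t is unsigned, so invalid_size = -2 wraps around to SIZE_MAX - 1; an unchecked memmove() would then attempt to copy nearly the entire address space, which is why the interceptor must reject the range before any bytes are moved.)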
+ */ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_HW_TAGS); + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + memset((char *)ptr, 0, 64); + OPTIMIZER_HIDE_VAR(ptr); + OPTIMIZER_HIDE_VAR(invalid_size); + KUNIT_EXPECT_KASAN_FAIL(test, + memmove((char *)ptr, (char *)ptr + 4, invalid_size)); + kfree(ptr); +} + +static void kmalloc_memmove_invalid_size(struct kunit *test) +{ + char *ptr; + size_t size = 64; + volatile size_t invalid_size = size; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + memset((char *)ptr, 0, 64); + OPTIMIZER_HIDE_VAR(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, + memmove((char *)ptr, (char *)ptr + 4, invalid_size)); + kfree(ptr); +} + +static void kmalloc_uaf(struct kunit *test) +{ + char *ptr; + size_t size = 10; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + kfree(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]); +} + +static void kmalloc_uaf_memset(struct kunit *test) +{ + char *ptr; + size_t size = 33; + + /* + * Only generic KASAN uses quarantine, which is required to avoid a + * kernel memory corruption this test causes. + */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + kfree(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr, 0, size)); +} + +static void kmalloc_uaf2(struct kunit *test) +{ + char *ptr1, *ptr2; + size_t size = 43; + int counter = 0; + +again: + ptr1 = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); + + kfree(ptr1); + + ptr2 = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); + + /* + * For tag-based KASAN ptr1 and ptr2 tags might happen to be the same. + * Allow up to 16 attempts at generating different tags. + */ + if (!IS_ENABLED(CONFIG_KASAN_GENERIC) && ptr1 == ptr2 && counter++ < 16) { + kfree(ptr2); + goto again; + } + + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]); + KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2); + + kfree(ptr2); +} + +/* + * Check that KASAN detects use-after-free when another object was allocated in + * the same slot. Relevant for the tag-based modes, which do not use quarantine. + */ +static void kmalloc_uaf3(struct kunit *test) +{ + char *ptr1, *ptr2; + size_t size = 100; + + /* This test is specifically crafted for tag-based modes. 
*/ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); + + ptr1 = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); + kfree(ptr1); + + ptr2 = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); + kfree(ptr2); + + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]); +} + +static void kfree_via_page(struct kunit *test) +{ + char *ptr; + size_t size = 8; + struct page *page; + unsigned long offset; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + page = virt_to_page(ptr); + offset = offset_in_page(ptr); + kfree(page_address(page) + offset); +} + +static void kfree_via_phys(struct kunit *test) +{ + char *ptr; + size_t size = 8; + phys_addr_t phys; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + phys = virt_to_phys(ptr); + kfree(phys_to_virt(phys)); +} + +static void kmem_cache_oob(struct kunit *test) +{ + char *p; + size_t size = 200; + struct kmem_cache *cache; + + cache = kmem_cache_create("test_cache", size, 0, 0, NULL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); + + p = kmem_cache_alloc(cache, GFP_KERNEL); + if (!p) { + kunit_err(test, "Allocation failed: %s\n", __func__); + kmem_cache_destroy(cache); + return; + } + + KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]); + + kmem_cache_free(cache, p); + kmem_cache_destroy(cache); +} + +static void kmem_cache_accounted(struct kunit *test) +{ + int i; + char *p; + size_t size = 200; + struct kmem_cache *cache; + + cache = kmem_cache_create("test_cache", size, 0, SLAB_ACCOUNT, NULL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); + + /* + * Several allocations with a delay to allow for lazy per memcg kmem + * cache creation. + */ + for (i = 0; i < 5; i++) { + p = kmem_cache_alloc(cache, GFP_KERNEL); + if (!p) + goto free_cache; + + kmem_cache_free(cache, p); + msleep(100); + } + +free_cache: + kmem_cache_destroy(cache); +} + +static void kmem_cache_bulk(struct kunit *test) +{ + struct kmem_cache *cache; + size_t size = 200; + char *p[10]; + bool ret; + int i; + + cache = kmem_cache_create("test_cache", size, 0, 0, NULL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); + + ret = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(p), (void **)&p); + if (!ret) { + kunit_err(test, "Allocation failed: %s\n", __func__); + kmem_cache_destroy(cache); + return; + } + + for (i = 0; i < ARRAY_SIZE(p); i++) + p[i][0] = p[i][size - 1] = 42; + + kmem_cache_free_bulk(cache, ARRAY_SIZE(p), (void **)&p); + kmem_cache_destroy(cache); +} + +static char global_array[10]; + +static void kasan_global_oob_right(struct kunit *test) +{ + /* + * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS + * from failing here and panicking the kernel, access the array via a + * volatile pointer, which will prevent the compiler from being able to + * determine the array bounds. + * + * This access uses a volatile pointer to char (char *volatile) rather + * than the more conventional pointer to volatile char (volatile char *) + * because we want to prevent the compiler from making inferences about + * the pointer itself (i.e. its array bounds), not the data that it + * refers to. + */ + char *volatile array = global_array; + char *p = &array[ARRAY_SIZE(global_array) + 3]; + + /* Only generic mode instruments globals. 
*/ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); + + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); +} + +static void kasan_global_oob_left(struct kunit *test) +{ + char *volatile array = global_array; + char *p = array - 3; + + /* + * GCC is known to fail this test, skip it. + * See https://bugzilla.kernel.org/show_bug.cgi?id=215051. + */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_CC_IS_CLANG); + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); +} + +/* Check that ksize() makes the whole object accessible. */ +static void ksize_unpoisons_memory(struct kunit *test) +{ + char *ptr; + size_t size = 123, real_size; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + real_size = ksize(ptr); + + OPTIMIZER_HIDE_VAR(ptr); + + /* This access shouldn't trigger a KASAN report. */ + ptr[size] = 'x'; + + /* This one must. */ + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]); + + kfree(ptr); +} + +/* + * Check that a use-after-free is detected by ksize() and via normal accesses + * after it. + */ +static void ksize_uaf(struct kunit *test) +{ + char *ptr; + int size = 128 - KASAN_GRANULE_SIZE; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + kfree(ptr); + + OPTIMIZER_HIDE_VAR(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr)); + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]); + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]); +} + +static void kasan_stack_oob(struct kunit *test) +{ + char stack_array[10]; + /* See comment in kasan_global_oob_right. */ + char *volatile array = stack_array; + char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF]; + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); + + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); +} + +static void kasan_alloca_oob_left(struct kunit *test) +{ + volatile int i = 10; + char alloca_array[i]; + /* See comment in kasan_global_oob_right. */ + char *volatile array = alloca_array; + char *p = array - 1; + + /* Only generic mode instruments dynamic allocas. */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); + + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); +} + +static void kasan_alloca_oob_right(struct kunit *test) +{ + volatile int i = 10; + char alloca_array[i]; + /* See comment in kasan_global_oob_right. */ + char *volatile array = alloca_array; + char *p = array + i; + + /* Only generic mode instruments dynamic allocas. 
*/ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); + + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); +} + +static void kmem_cache_double_free(struct kunit *test) +{ + char *p; + size_t size = 200; + struct kmem_cache *cache; + + cache = kmem_cache_create("test_cache", size, 0, 0, NULL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); + + p = kmem_cache_alloc(cache, GFP_KERNEL); + if (!p) { + kunit_err(test, "Allocation failed: %s\n", __func__); + kmem_cache_destroy(cache); + return; + } + + kmem_cache_free(cache, p); + KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p)); + kmem_cache_destroy(cache); +} + +static void kmem_cache_invalid_free(struct kunit *test) +{ + char *p; + size_t size = 200; + struct kmem_cache *cache; + + cache = kmem_cache_create("test_cache", size, 0, SLAB_TYPESAFE_BY_RCU, + NULL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); + + p = kmem_cache_alloc(cache, GFP_KERNEL); + if (!p) { + kunit_err(test, "Allocation failed: %s\n", __func__); + kmem_cache_destroy(cache); + return; + } + + /* Trigger invalid free, the object doesn't get freed. */ + KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p + 1)); + + /* + * Properly free the object to prevent the "Objects remaining in + * test_cache on __kmem_cache_shutdown" BUG failure. + */ + kmem_cache_free(cache, p); + + kmem_cache_destroy(cache); +} + +static void empty_cache_ctor(void *object) { } + +static void kmem_cache_double_destroy(struct kunit *test) +{ + struct kmem_cache *cache; + + /* Provide a constructor to prevent cache merging. */ + cache = kmem_cache_create("test_cache", 200, 0, 0, empty_cache_ctor); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache); + kmem_cache_destroy(cache); + KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_destroy(cache)); +} + +static void kasan_memchr(struct kunit *test) +{ + char *ptr; + size_t size = 24; + + /* + * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT. + * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details. + */ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT); + + if (OOB_TAG_OFF) + size = round_up(size, OOB_TAG_OFF); + + ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + OPTIMIZER_HIDE_VAR(ptr); + OPTIMIZER_HIDE_VAR(size); + KUNIT_EXPECT_KASAN_FAIL(test, + kasan_ptr_result = memchr(ptr, '1', size + 1)); + + kfree(ptr); +} + +static void kasan_memcmp(struct kunit *test) +{ + char *ptr; + size_t size = 24; + int arr[9]; + + /* + * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT. + * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details. + */ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT); + + if (OOB_TAG_OFF) + size = round_up(size, OOB_TAG_OFF); + + ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + memset(arr, 0, sizeof(arr)); + + OPTIMIZER_HIDE_VAR(ptr); + OPTIMIZER_HIDE_VAR(size); + KUNIT_EXPECT_KASAN_FAIL(test, + kasan_int_result = memcmp(ptr, arr, size+1)); + kfree(ptr); +} + +static void kasan_strings(struct kunit *test) +{ + char *ptr; + size_t size = 24; + + /* + * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT. + * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details. + */ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT); + + ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + kfree(ptr); + + /* + * Try to cause only 1 invalid access (less spam in dmesg). 
+ * For that, we need ptr to point to a zeroed byte. + * Skip the metadata that could be stored in the freed object, so that ptr + * will likely point to a zeroed byte. + */ + ptr += 16; + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1')); + + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1')); + + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2")); + + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1)); + + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr)); + + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1)); +} + +static void kasan_bitops_modify(struct kunit *test, int nr, void *addr) +{ + KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, __set_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, clear_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, clear_bit_unlock(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit_unlock(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, change_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, __change_bit(nr, addr)); +} + +static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr) +{ + KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr)); + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr)); + +#if defined(clear_bit_unlock_is_negative_byte) + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = + clear_bit_unlock_is_negative_byte(nr, addr)); +#endif +} + +static void kasan_bitops_generic(struct kunit *test) +{ + long *bits; + + /* This test is specifically crafted for the generic mode. */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); + + /* + * Allocate 1 more byte, which causes kzalloc to round up to 16 bytes; + * this way we do not actually corrupt other memory. + */ + bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits); + + /* + * The calls below try to access a bit within the allocated memory; + * however, these accesses are still out-of-bounds, since bitops are + * defined to operate on the whole long the bit is in. + */ + kasan_bitops_modify(test, BITS_PER_LONG, bits); + + /* + * The calls below try to access a bit beyond the allocated memory. + */ + kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, bits); + + kfree(bits); +} + +static void kasan_bitops_tags(struct kunit *test) +{ + long *bits; + + /* This test is specifically crafted for tag-based modes. */ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); + + /* The kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */ + bits = kzalloc(48, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits); + + /* Do the accesses past the 48 allocated bytes, but within the redzone.
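+ * (set_bit(nr, addr) modifies the long containing bit nr, i.e. bytes starting at (nr / BITS_PER_LONG) * sizeof(long): with addr == (void *)bits + 48 and nr == BITS_PER_LONG, a 64-bit kernel touches bytes 56-63 of the kmalloc-64 object, i.e. the redzone granule.)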
*/ + kasan_bitops_modify(test, BITS_PER_LONG, (void *)bits + 48); + kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, (void *)bits + 48); + + kfree(bits); +} + +static void kmalloc_double_kzfree(struct kunit *test) +{ + char *ptr; + size_t size = 16; + + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + kfree_sensitive(ptr); + KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr)); +} + +static void vmalloc_helpers_tags(struct kunit *test) +{ + void *ptr; + + /* This test is intended for tag-based modes. */ + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC); + + ptr = vmalloc(PAGE_SIZE); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + /* Check that the returned pointer is tagged. */ + KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); + KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); + + /* Make sure exported vmalloc helpers handle tagged pointers. */ + KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr)); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr)); + +#if !IS_MODULE(CONFIG_KASAN_KUNIT_TEST) + { + int rv; + + /* Make sure vmalloc'ed memory permissions can be changed. */ + rv = set_memory_ro((unsigned long)ptr, 1); + KUNIT_ASSERT_GE(test, rv, 0); + rv = set_memory_rw((unsigned long)ptr, 1); + KUNIT_ASSERT_GE(test, rv, 0); + } +#endif + + vfree(ptr); +} + +static void vmalloc_oob(struct kunit *test) +{ + char *v_ptr, *p_ptr; + struct page *page; + size_t size = PAGE_SIZE / 2 - KASAN_GRANULE_SIZE - 5; + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC); + + v_ptr = vmalloc(size); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr); + + OPTIMIZER_HIDE_VAR(v_ptr); + + /* + * We have to be careful not to hit the guard page in vmalloc tests. + * The MMU will catch that and crash us. + */ + + /* Make sure in-bounds accesses are valid. */ + v_ptr[0] = 0; + v_ptr[size - 1] = 0; + + /* + * An unaligned access past the requested vmalloc size. + * Only generic KASAN can precisely detect these. + */ + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]); + + /* An aligned access into the first out-of-bounds granule. */ + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]); + + /* Check that in-bounds accesses to the physical page are valid. */ + page = vmalloc_to_page(v_ptr); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page); + p_ptr = page_address(page); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr); + p_ptr[0] = 0; + + vfree(v_ptr); + + /* + * We can't check for use-after-unmap bugs in this nor in the following + * vmalloc tests, as the page might be fully unmapped and accessing it + * will crash the kernel. + */ +} + +static void vmap_tags(struct kunit *test) +{ + char *p_ptr, *v_ptr; + struct page *p_page, *v_page; + + /* + * This test is specifically crafted for the software tag-based mode, + * the only tag-based mode that poisons vmap mappings. + */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS); + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC); + + p_page = alloc_pages(GFP_KERNEL, 1); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_page); + p_ptr = page_address(p_page); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr); + + v_ptr = vmap(&p_page, 1, VM_MAP, PAGE_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr); + + /* + * We can't check for out-of-bounds bugs in this nor in the following + * vmalloc tests, as allocations have page granularity and accessing + * the guard page will crash the kernel. 
+ */ + + KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN); + KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL); + + /* Make sure that in-bounds accesses through both pointers work. */ + *p_ptr = 0; + *v_ptr = 0; + + /* Make sure vmalloc_to_page() correctly recovers the page pointer. */ + v_page = vmalloc_to_page(v_ptr); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_page); + KUNIT_EXPECT_PTR_EQ(test, p_page, v_page); + + vunmap(v_ptr); + free_pages((unsigned long)p_ptr, 1); +} + +static void vm_map_ram_tags(struct kunit *test) +{ + char *p_ptr, *v_ptr; + struct page *page; + + /* + * This test is specifically crafted for the software tag-based mode, + * the only tag-based mode that poisons vm_map_ram mappings. + */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS); + + page = alloc_pages(GFP_KERNEL, 1); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page); + p_ptr = page_address(page); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr); + + v_ptr = vm_map_ram(&page, 1, -1); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr); + + KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN); + KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL); + + /* Make sure that in-bounds accesses through both pointers work. */ + *p_ptr = 0; + *v_ptr = 0; + + vm_unmap_ram(v_ptr, 1); + free_pages((unsigned long)p_ptr, 1); +} + +static void vmalloc_percpu(struct kunit *test) +{ + char __percpu *ptr; + int cpu; + + /* + * This test is specifically crafted for the software tag-based mode, + * the only tag-based mode that poisons percpu mappings. + */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS); + + ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE); + + for_each_possible_cpu(cpu) { + char *c_ptr = per_cpu_ptr(ptr, cpu); + + KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN); + KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL); + + /* Make sure that in-bounds accesses don't crash the kernel. */ + *c_ptr = 0; + } + + free_percpu(ptr); +} + +/* + * Check that the assigned pointer tag falls within the [KASAN_TAG_MIN, + * KASAN_TAG_KERNEL) range (note: excluding the match-all tag) for tag-based + * modes. + */ +static void match_all_not_assigned(struct kunit *test) +{ + char *ptr; + struct page *pages; + int i, size, order; + + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); + + for (i = 0; i < 256; i++) { + size = (get_random_int() % 1024) + 1; + ptr = kmalloc(size, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); + KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); + kfree(ptr); + } + + for (i = 0; i < 256; i++) { + order = (get_random_int() % 4) + 1; + pages = alloc_pages(GFP_KERNEL, order); + ptr = page_address(pages); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); + KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); + free_pages((unsigned long)ptr, order); + } + + if (!IS_ENABLED(CONFIG_KASAN_VMALLOC)) + return; + + for (i = 0; i < 256; i++) { + size = (get_random_int() % 1024) + 1; + ptr = vmalloc(size); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN); + KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); + vfree(ptr); + } +} + +/* Check that 0xff works as a match-all pointer tag for tag-based modes. 
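+ * (The tag-based modes keep the tag in the top byte of the pointer, which arm64's Top Byte Ignore excludes from address translation; a pointer tagged 0xff (KASAN_TAG_KERNEL) compares equal to any memory tag, so accesses through it are never reported.)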
*/ +static void match_all_ptr_tag(struct kunit *test) +{ + char *ptr; + u8 tag; + + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); + + ptr = kmalloc(128, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + + /* Backup the assigned tag. */ + tag = get_tag(ptr); + KUNIT_EXPECT_NE(test, tag, (u8)KASAN_TAG_KERNEL); + + /* Reset the tag to 0xff.*/ + ptr = set_tag(ptr, KASAN_TAG_KERNEL); + + /* This access shouldn't trigger a KASAN report. */ + *ptr = 0; + + /* Recover the pointer tag and free. */ + ptr = set_tag(ptr, tag); + kfree(ptr); +} + +/* Check that there are no match-all memory tags for tag-based modes. */ +static void match_all_mem_tag(struct kunit *test) +{ + char *ptr; + int tag; + + KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC); + + ptr = kmalloc(128, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); + KUNIT_EXPECT_NE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL); + + /* For each possible tag value not matching the pointer tag. */ + for (tag = KASAN_TAG_MIN; tag <= KASAN_TAG_KERNEL; tag++) { + if (tag == get_tag(ptr)) + continue; + + /* Mark the first memory granule with the chosen memory tag. */ + kasan_poison(ptr, KASAN_GRANULE_SIZE, (u8)tag, false); + + /* This access must cause a KASAN report. */ + KUNIT_EXPECT_KASAN_FAIL(test, *ptr = 0); + } + + /* Recover the memory tag and free. */ + kasan_poison(ptr, KASAN_GRANULE_SIZE, get_tag(ptr), false); + kfree(ptr); +} + +static struct kunit_case kasan_kunit_test_cases[] = { + KUNIT_CASE(kmalloc_oob_right), + KUNIT_CASE(kmalloc_oob_left), + KUNIT_CASE(kmalloc_node_oob_right), + KUNIT_CASE(kmalloc_pagealloc_oob_right), + KUNIT_CASE(kmalloc_pagealloc_uaf), + KUNIT_CASE(kmalloc_pagealloc_invalid_free), + KUNIT_CASE(pagealloc_oob_right), + KUNIT_CASE(pagealloc_uaf), + KUNIT_CASE(kmalloc_large_oob_right), + KUNIT_CASE(krealloc_more_oob), + KUNIT_CASE(krealloc_less_oob), + KUNIT_CASE(krealloc_pagealloc_more_oob), + KUNIT_CASE(krealloc_pagealloc_less_oob), + KUNIT_CASE(krealloc_uaf), + KUNIT_CASE(kmalloc_oob_16), + KUNIT_CASE(kmalloc_uaf_16), + KUNIT_CASE(kmalloc_oob_in_memset), + KUNIT_CASE(kmalloc_oob_memset_2), + KUNIT_CASE(kmalloc_oob_memset_4), + KUNIT_CASE(kmalloc_oob_memset_8), + KUNIT_CASE(kmalloc_oob_memset_16), + KUNIT_CASE(kmalloc_memmove_negative_size), + KUNIT_CASE(kmalloc_memmove_invalid_size), + KUNIT_CASE(kmalloc_uaf), + KUNIT_CASE(kmalloc_uaf_memset), + KUNIT_CASE(kmalloc_uaf2), + KUNIT_CASE(kmalloc_uaf3), + KUNIT_CASE(kfree_via_page), + KUNIT_CASE(kfree_via_phys), + KUNIT_CASE(kmem_cache_oob), + KUNIT_CASE(kmem_cache_accounted), + KUNIT_CASE(kmem_cache_bulk), + KUNIT_CASE(kasan_global_oob_right), + KUNIT_CASE(kasan_global_oob_left), + KUNIT_CASE(kasan_stack_oob), + KUNIT_CASE(kasan_alloca_oob_left), + KUNIT_CASE(kasan_alloca_oob_right), + KUNIT_CASE(ksize_unpoisons_memory), + KUNIT_CASE(ksize_uaf), + KUNIT_CASE(kmem_cache_double_free), + KUNIT_CASE(kmem_cache_invalid_free), + KUNIT_CASE(kmem_cache_double_destroy), + KUNIT_CASE(kasan_memchr), + KUNIT_CASE(kasan_memcmp), + KUNIT_CASE(kasan_strings), + KUNIT_CASE(kasan_bitops_generic), + KUNIT_CASE(kasan_bitops_tags), + KUNIT_CASE(kmalloc_double_kzfree), + KUNIT_CASE(vmalloc_helpers_tags), + KUNIT_CASE(vmalloc_oob), + KUNIT_CASE(vmap_tags), + KUNIT_CASE(vm_map_ram_tags), + KUNIT_CASE(vmalloc_percpu), + KUNIT_CASE(match_all_not_assigned), + KUNIT_CASE(match_all_ptr_tag), + KUNIT_CASE(match_all_mem_tag), + {} +}; + +static struct kunit_suite kasan_kunit_test_suite = { + .name = "kasan", + .init = kasan_test_init, + .test_cases = 
kasan_kunit_test_cases, + .exit = kasan_test_exit, +}; + +kunit_test_suite(kasan_kunit_test_suite); + +MODULE_LICENSE("GPL"); diff --git a/mm/kasan/kasan_test_module.c b/mm/kasan/kasan_test_module.c new file mode 100644 index 000000000000..e4ca82dc2c16 --- /dev/null +++ b/mm/kasan/kasan_test_module.c @@ -0,0 +1,141 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * + * Copyright (c) 2014 Samsung Electronics Co., Ltd. + * Author: Andrey Ryabinin + */ + +#define pr_fmt(fmt) "kasan test: %s " fmt, __func__ + +#include +#include +#include +#include +#include + +#include "kasan.h" + +static noinline void __init copy_user_test(void) +{ + char *kmem; + char __user *usermem; + size_t size = 128 - KASAN_GRANULE_SIZE; + int __maybe_unused unused; + + kmem = kmalloc(size, GFP_KERNEL); + if (!kmem) + return; + + usermem = (char __user *)vm_mmap(NULL, 0, PAGE_SIZE, + PROT_READ | PROT_WRITE | PROT_EXEC, + MAP_ANONYMOUS | MAP_PRIVATE, 0); + if (IS_ERR(usermem)) { + pr_err("Failed to allocate user memory\n"); + kfree(kmem); + return; + } + + OPTIMIZER_HIDE_VAR(size); + + pr_info("out-of-bounds in copy_from_user()\n"); + unused = copy_from_user(kmem, usermem, size + 1); + + pr_info("out-of-bounds in copy_to_user()\n"); + unused = copy_to_user(usermem, kmem, size + 1); + + pr_info("out-of-bounds in __copy_from_user()\n"); + unused = __copy_from_user(kmem, usermem, size + 1); + + pr_info("out-of-bounds in __copy_to_user()\n"); + unused = __copy_to_user(usermem, kmem, size + 1); + + pr_info("out-of-bounds in __copy_from_user_inatomic()\n"); + unused = __copy_from_user_inatomic(kmem, usermem, size + 1); + + pr_info("out-of-bounds in __copy_to_user_inatomic()\n"); + unused = __copy_to_user_inatomic(usermem, kmem, size + 1); + + pr_info("out-of-bounds in strncpy_from_user()\n"); + unused = strncpy_from_user(kmem, usermem, size + 1); + + vm_munmap((unsigned long)usermem, PAGE_SIZE); + kfree(kmem); +} + +static struct kasan_rcu_info { + int i; + struct rcu_head rcu; +} *global_rcu_ptr; + +static noinline void __init kasan_rcu_reclaim(struct rcu_head *rp) +{ + struct kasan_rcu_info *fp = container_of(rp, + struct kasan_rcu_info, rcu); + + kfree(fp); + ((volatile struct kasan_rcu_info *)fp)->i; +} + +static noinline void __init kasan_rcu_uaf(void) +{ + struct kasan_rcu_info *ptr; + + pr_info("use-after-free in kasan_rcu_reclaim\n"); + ptr = kmalloc(sizeof(struct kasan_rcu_info), GFP_KERNEL); + if (!ptr) { + pr_err("Allocation failed\n"); + return; + } + + global_rcu_ptr = rcu_dereference_protected(ptr, NULL); + call_rcu(&global_rcu_ptr->rcu, kasan_rcu_reclaim); +} + +static noinline void __init kasan_workqueue_work(struct work_struct *work) +{ + kfree(work); +} + +static noinline void __init kasan_workqueue_uaf(void) +{ + struct workqueue_struct *workqueue; + struct work_struct *work; + + workqueue = create_workqueue("kasan_wq_test"); + if (!workqueue) { + pr_err("Allocation failed\n"); + return; + } + work = kmalloc(sizeof(struct work_struct), GFP_KERNEL); + if (!work) { + pr_err("Allocation failed\n"); + return; + } + + INIT_WORK(work, kasan_workqueue_work); + queue_work(workqueue, work); + destroy_workqueue(workqueue); + + pr_info("use-after-free on workqueue\n"); + ((volatile struct work_struct *)work)->data; +} + +static int __init test_kasan_module_init(void) +{ + /* + * Temporarily enable multi-shot mode. Otherwise, KASAN would only + * report the first detected bug and panic the kernel if panic_on_warn + * is enabled. 
+ */ + bool multishot = kasan_save_enable_multi_shot(); + + copy_user_test(); + kasan_rcu_uaf(); + kasan_workqueue_uaf(); + + kasan_restore_multi_shot(multishot); + return -EAGAIN; +} + +module_init(test_kasan_module_init); +MODULE_LICENSE("GPL"); -- cgit v1.2.3 From d596b04f5967c75c196eb582fefba49488c57289 Mon Sep 17 00:00:00 2001 From: Alexander Potapenko Date: Thu, 15 Sep 2022 17:03:47 +0200 Subject: MAINTAINERS: add entry for KMSAN Add entry for KMSAN maintainers/reviewers. Link: https://lkml.kernel.org/r/20220915150417.722975-14-glider@google.com Signed-off-by: Alexander Potapenko Cc: Alexander Viro Cc: Alexei Starovoitov Cc: Andrey Konovalov Cc: Andrey Konovalov Cc: Andy Lutomirski Cc: Arnd Bergmann Cc: Borislav Petkov Cc: Christoph Hellwig Cc: Christoph Lameter Cc: David Rientjes Cc: Dmitry Vyukov Cc: Eric Biggers Cc: Eric Biggers Cc: Eric Dumazet Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Ilya Leoshkevich Cc: Ingo Molnar Cc: Jens Axboe Cc: Joonsoo Kim Cc: Kees Cook Cc: Marco Elver Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael S. Tsirkin Cc: Pekka Enberg Cc: Peter Zijlstra Cc: Petr Mladek Cc: Stephen Rothwell Cc: Steven Rostedt Cc: Thomas Gleixner Cc: Vasily Gorbik Cc: Vegard Nossum Cc: Vlastimil Babka Signed-off-by: Andrew Morton --- MAINTAINERS | 13 +++++++++++++ 1 file changed, 13 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 6f1033f3c1ed..3c7dfe9bb712 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11371,6 +11371,19 @@ F: kernel/kmod.c F: lib/test_kmod.c F: tools/testing/selftests/kmod/ +KMSAN +M: Alexander Potapenko +R: Marco Elver +R: Dmitry Vyukov +L: kasan-dev@googlegroups.com +S: Maintained +F: Documentation/dev-tools/kmsan.rst +F: arch/*/include/asm/kmsan.h +F: include/linux/kmsan*.h +F: lib/Kconfig.kmsan +F: mm/kmsan/ +F: scripts/Makefile.kmsan + KPROBES M: Naveen N. Rao M: Anil S Keshavamurthy -- cgit v1.2.3 From ce732a7520b093091c345cba1b84542d1abd83ed Mon Sep 17 00:00:00 2001 From: Alexander Potapenko Date: Wed, 28 Sep 2022 14:32:19 +0200 Subject: x86: kmsan: handle CPU entry area Among other data, CPU entry area holds exception stacks, so addresses from this area can be passed to kmsan_get_metadata(). This previously led to kmsan_get_metadata() returning NULL, which in turn resulted in a warning that triggered further attempts to call kmsan_get_metadata() in the exception context, which quickly exhausted the exception stack. This patch allocates shadow and origin for the CPU entry area on x86 and introduces arch_kmsan_get_meta_or_null(), which performs arch-specific metadata mapping. Link: https://lkml.kernel.org/r/20220928123219.1101883-1-glider@google.com Signed-off-by: Alexander Potapenko Fixes: 21d723a7c1409 ("kmsan: add KMSAN runtime core") Cc: Alexander Viro Cc: Alexei Starovoitov Cc: Andrey Konovalov Cc: Andrey Konovalov Cc: Andy Lutomirski Cc: Arnd Bergmann Cc: Borislav Petkov Cc: Christoph Hellwig Cc: Christoph Lameter Cc: David Rientjes Cc: Dmitry Vyukov Cc: Eric Biggers Cc: Eric Biggers Cc: Eric Dumazet Cc: Greg Kroah-Hartman Cc: Herbert Xu Cc: Ilya Leoshkevich Cc: Ingo Molnar Cc: Jens Axboe Cc: Joonsoo Kim Cc: Kees Cook Cc: Marco Elver Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael S. 
Tsirkin Cc: Pekka Enberg Cc: Peter Zijlstra Cc: Petr Mladek Cc: Stephen Rothwell Cc: Steven Rostedt Cc: Thomas Gleixner Cc: Vasily Gorbik Cc: Vegard Nossum Cc: Vlastimil Babka Signed-off-by: Andrew Morton --- MAINTAINERS | 1 + arch/x86/include/asm/kmsan.h | 32 ++++++++++++++++++++++++++++++++ arch/x86/mm/Makefile | 3 +++ arch/x86/mm/kmsan_shadow.c | 20 ++++++++++++++++++++ mm/kmsan/shadow.c | 6 +++++- 5 files changed, 61 insertions(+), 1 deletion(-) create mode 100644 arch/x86/mm/kmsan_shadow.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 3c7dfe9bb712..456b07f02803 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11379,6 +11379,7 @@ L: kasan-dev@googlegroups.com S: Maintained F: Documentation/dev-tools/kmsan.rst F: arch/*/include/asm/kmsan.h +F: arch/*/mm/kmsan_* F: include/linux/kmsan*.h F: lib/Kconfig.kmsan F: mm/kmsan/ diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h index a790b865d0a6..8fa6ac0e2d76 100644 --- a/arch/x86/include/asm/kmsan.h +++ b/arch/x86/include/asm/kmsan.h @@ -11,9 +11,41 @@ #ifndef MODULE +#include #include #include +DECLARE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_shadow); +DECLARE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_origin); + +/* + * Functions below are declared in the header to make sure they are inlined. + * They all are called from kmsan_get_metadata() for every memory access in + * the kernel, so speed is important here. + */ + +/* + * Compute metadata addresses for the CPU entry area on x86. + */ +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin) +{ + unsigned long addr64 = (unsigned long)addr; + char *metadata_array; + unsigned long off; + int cpu; + + if ((addr64 < CPU_ENTRY_AREA_BASE) || + (addr64 >= (CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE))) + return NULL; + cpu = (addr64 - CPU_ENTRY_AREA_BASE) / CPU_ENTRY_AREA_SIZE; + off = addr64 - (unsigned long)get_cpu_entry_area(cpu); + if ((off < 0) || (off >= CPU_ENTRY_AREA_SIZE)) + return NULL; + metadata_array = is_origin ? cpu_entry_area_origin : + cpu_entry_area_shadow; + return &per_cpu(metadata_array[off], cpu); +} + /* * Taken from arch/x86/mm/physaddr.h to avoid using an instrumented version. */ diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index afb6f7187dad..c80febc44cd2 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -46,6 +46,9 @@ obj-$(CONFIG_HIGHMEM) += highmem_32.o KASAN_SANITIZE_kasan_init_$(BITS).o := n obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o +KMSAN_SANITIZE_kmsan_shadow.o := n +obj-$(CONFIG_KMSAN) += kmsan_shadow.o + obj-$(CONFIG_MMIOTRACE) += mmiotrace.o mmiotrace-y := kmmio.o pf_in.o mmio-mod.o obj-$(CONFIG_MMIOTRACE_TEST) += testmmiotrace.o diff --git a/arch/x86/mm/kmsan_shadow.c b/arch/x86/mm/kmsan_shadow.c new file mode 100644 index 000000000000..bee2ec4a3bfa --- /dev/null +++ b/arch/x86/mm/kmsan_shadow.c @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * x86-specific bits of KMSAN shadow implementation. + * + * Copyright (C) 2022 Google LLC + * Author: Alexander Potapenko + */ + +#include +#include + +/* + * Addresses within the CPU entry area (including e.g. exception stacks) do not + * have struct page entries corresponding to them, so they need separate + * handling. + * arch_kmsan_get_meta_or_null() (declared in the header) maps the addresses in + * CPU entry area to addresses in cpu_entry_area_shadow/cpu_entry_area_origin. 
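+ * For example, for an address inside CPU 2's entry area, cpu is recovered as + * (addr - CPU_ENTRY_AREA_BASE) / CPU_ENTRY_AREA_SIZE == 2, and the remaining + * byte offset within that per-CPU slot indexes the shadow/origin arrays.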
+ */ +DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_shadow); +DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_origin); diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index 6e90a806a704..21e3e196ec3c 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -12,7 +12,6 @@ #include #include #include -#include #include #include #include @@ -126,6 +125,7 @@ void *kmsan_get_metadata(void *address, bool is_origin) { u64 addr = (u64)address, pad, off; struct page *page; + void *ret; if (is_origin && !IS_ALIGNED(addr, KMSAN_ORIGIN_SIZE)) { pad = addr % KMSAN_ORIGIN_SIZE; @@ -136,6 +136,10 @@ void *kmsan_get_metadata(void *address, bool is_origin) kmsan_internal_is_module_addr(address)) return (void *)vmalloc_meta(address, is_origin); + ret = arch_kmsan_get_meta_or_null(address, is_origin); + if (ret) + return ret; + page = virt_to_page_or_null(address); if (!page) return NULL; -- cgit v1.2.3
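The resulting lookup order in kmsan_get_metadata() can be summarized as follows (a sketch, not the verbatim function: page_meta() is a hypothetical stand-in for the final struct-page-based step, while the other helpers appear in the diffs above):

/* Sketch: metadata lookup order in kmsan_get_metadata() after this patch. */
void *kmsan_get_metadata_sketch(void *address, bool is_origin)
{
	void *ret;
	struct page *page;

	/* 1. vmalloc and module addresses use dedicated metadata ranges. */
	if (kmsan_internal_is_vmalloc_addr(address) ||
	    kmsan_internal_is_module_addr(address))
		return (void *)vmalloc_meta(address, is_origin);

	/*
	 * 2. New in this patch: arch-known ranges without struct page
	 * backing, such as the x86 CPU entry area.
	 */
	ret = arch_kmsan_get_meta_or_null(address, is_origin);
	if (ret)
		return ret;

	/* 3. Fall back to struct-page-based metadata. */
	page = virt_to_page_or_null(address);
	if (!page)
		return NULL;
	return page_meta(page, is_origin);	/* hypothetical helper */
}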