author | Linus Torvalds | 2022-12-12 16:22:22 -0800
---|---|---
committer | Linus Torvalds | 2022-12-12 16:22:22 -0800
commit | 268325bda5299836a6ad4c3952474a2be125da5f (patch) |
tree | 5f7b22109b7a21d0aab68cab8de0ee201426aae1 |
parent | ca1443c7e75a28c6fde5c67cb1904b624cf43c36 (diff) |
parent | 3e6743e28b9b43d37ced234bdf8e19955d0216f8 (diff) |
Merge tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator updates from Jason Donenfeld:
- Replace prandom_u32_max() and various open-coded variants of it;
there is now a new family of functions that uses fast rejection
sampling to choose properly uniformly random numbers within an
interval:
      get_random_u32_below(ceil) - [0, ceil)
      get_random_u32_above(floor) - (floor, U32_MAX]
      get_random_u32_inclusive(floor, ceil) - [floor, ceil]
Coccinelle was used to convert all current users of
prandom_u32_max(), as well as many open-coded patterns, resulting in
improvements throughout the tree.
I'll have a "late" 6.2-rc1 pull for you that removes the now unused
prandom_u32_max() function, just in case any other trees add a new
use case of it that needs to be converted. According to linux-next,
there may be two trivial cases of prandom_u32_max() reintroductions
that are fixable with a 's/.../.../'. So I'll send you a final
conversion patch doing that alongside the removal patch during the
second week.
This is a treewide change that touches many files throughout (a
short usage sketch of the new helpers follows this list).
- More consistent use of get_random_canary().
- Updates to comments, documentation, tests, headers, and
simplification in configuration.
- The arch_get_random*_early() abstraction was only used by arm64 and
wasn't entirely useful, so this has been replaced by code that works
in all relevant contexts.
- The kernel will use and manage random seeds in non-volatile EFI
variables, refreshing a variable with a fresh seed when the RNG is
initialized. The RNG GUID namespace is then hidden from efivarfs to
prevent accidental leakage.
These changes are split into random.c infrastructure code used in the
EFI subsystem, in this pull request, and related support inside of
EFISTUB, in Ard's EFI tree. These are co-dependent for full
functionality, but the order of merging doesn't matter.
- Part of the infrastructure added for the EFI support is also used for
an improvement to the way vsprintf initializes its siphash key,
replacing a sleep loop wart.
- The hardware RNG framework now always calls its correct random.c
input function, add_hwgenerator_randomness(), rather than sometimes
going through helpers better suited for other cases (see the
entropy-crediting sketch after this list).
- The add_latent_entropy() function has long been called from the fork
handler, but is a no-op when the latent entropy gcc plugin isn't
used, which is fine for the purposes of latent entropy.
But it was missing out on the cycle counter that was also being mixed
in beside the latent entropy variable. So now, if the latent entropy
gcc plugin isn't enabled, add_latent_entropy() will expand to a call
to add_device_randomness(NULL, 0), which adds a cycle counter,
without the absent latent entropy variable (a sketch of this
fallback follows this list).
- The RNG is now reseeded from a delayed worker, rather than on demand
when used. Always running from a worker allows it to make use of the
CPU RNG on platforms like S390x, whose instructions are too slow to
do so from interrupts. It also has the effect of adding in new inputs
more frequently and with more regularity, amounting to a long-term
transcript of random values. Plus, it helps a bit with the upcoming
vDSO implementation (which isn't yet ready for 6.2). The
self-rescheduling pattern is sketched after this list.
- The jitter entropy algorithm now tries to execute on many different
CPUs, round-robining, in hopes of hitting even more memory latencies
and other unpredictable effects. It also will mix in a cycle counter
when the entropy timer fires, in addition to being mixed in from the
main loop, to account more explicitly for fluctuations in that timer
firing. And the state it touches is now kept within the same cache
line, so that the different execution contexts are assured to contend
on it and cause latencies.
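
As a usage sketch of the new interval helpers (the calling code here is
illustrative, not from this tree; the interval semantics are as listed
above):

    u32 r;

    r = get_random_u32_below(10);          /* uniform over [0, 10); replaces prandom_u32_max(10) */
    r = get_random_u32_above(10);          /* uniform over (10, U32_MAX] */
    r = get_random_u32_inclusive(1, 6);    /* uniform over [1, 6]; replaces prandom_u32_max(6) + 1 */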
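
A sketch of the entropy crediting now done on the hw_random early path
(mirroring the drivers/char/hw_random/core.c hunk in the diff below); a
device's quality is rated in 1/1024ths of a bit of entropy per bit of
data, so for example 32 bytes read at quality 700 credit
32 * 8 * 700 / 1024 = 175 bits:

    /* Illustrative fragment; see add_early_randomness() in the diff below. */
    size_t entropy = bytes_read * 8 * rng->quality / 1024;
    add_hwgenerator_randomness(rng_fillbuf, bytes_read, entropy, false); /* false: do not sleep after */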
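
A simplified sketch of the add_latent_entropy() fallback described above
(the exact plugin guard used in the tree may differ):

    #ifdef LATENT_ENTROPY_PLUGIN
    static inline void add_latent_entropy(void)
    {
    	add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
    }
    #else
    static inline void add_latent_entropy(void)
    {
    	/* No latent entropy variable to mix in, but passing a zero-length
    	 * buffer still mixes in a fresh cycle counter. */
    	add_device_randomness(NULL, 0);
    }
    #endif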
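
And a minimal sketch of the self-rescheduling reseed worker (names taken
from the drivers/char/random.c hunk below; the key-extraction body is
elided):

    static void crng_reseed(struct work_struct *work)
    {
    	static DECLARE_DELAYED_WORK(next_reseed, crng_reseed);

    	/* Re-arm first, so the next reseed fires sooner rather than later. */
    	if (likely(system_unbound_wq))
    		queue_delayed_work(system_unbound_wq, &next_reseed, crng_reseed_interval());

    	/* ... extract a fresh key from the input pool ... */
    }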
* tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: (23 commits)
random: include <linux/once.h> in the right header
random: align entropy_timer_state to cache line
random: mix in cycle counter when jitter timer fires
random: spread out jitter callback to different CPUs
random: remove extraneous period and add a missing one in comments
efi: random: refresh non-volatile random seed when RNG is initialized
vsprintf: initialize siphash key using notifier
random: add back async readiness notifier
random: reseed in delayed work rather than on-demand
random: always mix cycle counter in add_latent_entropy()
hw_random: use add_hwgenerator_randomness() for early entropy
random: modernize documentation comment on get_random_bytes()
random: adjust comment to account for removed function
random: remove early archrandom abstraction
random: use random.trust_{bootloader,cpu} command line option only
stackprotector: actually use get_random_canary()
stackprotector: move get_random_canary() into stackprotector.h
treewide: use get_random_u32_inclusive() when possible
treewide: use get_random_u32_{above,below}() instead of manual loop
treewide: use get_random_u32_below() instead of deprecated function
...
165 files changed, 611 insertions, 649 deletions
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 75f6d485df8e..b36c0e0fbc83 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4574,17 +4574,15 @@ ramdisk_start= [RAM] RAM disk image start address - random.trust_cpu={on,off} - [KNL] Enable or disable trusting the use of the - CPU's random number generator (if available) to - fully seed the kernel's CRNG. Default is controlled - by CONFIG_RANDOM_TRUST_CPU. - - random.trust_bootloader={on,off} - [KNL] Enable or disable trusting the use of a - seed passed by the bootloader (if available) to - fully seed the kernel's CRNG. Default is controlled - by CONFIG_RANDOM_TRUST_BOOTLOADER. + random.trust_cpu=off + [KNL] Disable trusting the use of the CPU's + random number generator (if available) to + initialize the kernel's RNG. + + random.trust_bootloader=off + [KNL] Disable trusting the use of the a seed + passed by the bootloader (if available) to + initialize the kernel's RNG. randomize_kstack_offset= [KNL] Enable or disable kernel stack offset diff --git a/arch/arm/include/asm/stackprotector.h b/arch/arm/include/asm/stackprotector.h index 088d03161be5..0bd4979759f1 100644 --- a/arch/arm/include/asm/stackprotector.h +++ b/arch/arm/include/asm/stackprotector.h @@ -15,9 +15,6 @@ #ifndef _ASM_STACKPROTECTOR_H #define _ASM_STACKPROTECTOR_H 1 -#include <linux/random.h> -#include <linux/version.h> - #include <asm/thread_info.h> extern unsigned long __stack_chk_guard; @@ -30,11 +27,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { - unsigned long canary; - - /* Try to get a semi random initial value. */ - get_random_bytes(&canary, sizeof(canary)); - canary ^= LINUX_VERSION_CODE; + unsigned long canary = get_random_canary(); current->stack_canary = canary; #ifndef CONFIG_STACKPROTECTOR_PER_TASK diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index a2b31d91a1b6..f811733a8fc5 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -371,7 +371,7 @@ static unsigned long sigpage_addr(const struct mm_struct *mm, slots = ((last - first) >> PAGE_SHIFT) + 1; - offset = prandom_u32_max(slots); + offset = get_random_u32_below(slots); addr = first + (offset << PAGE_SHIFT); diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h index 109e2a4454be..2f5f3da34782 100644 --- a/arch/arm64/include/asm/archrandom.h +++ b/arch/arm64/include/asm/archrandom.h @@ -5,6 +5,7 @@ #include <linux/arm-smccc.h> #include <linux/bug.h> #include <linux/kernel.h> +#include <linux/irqflags.h> #include <asm/cpufeature.h> #define ARM_SMCCC_TRNG_MIN_VERSION 0x10000UL @@ -58,6 +59,13 @@ static inline bool __arm64_rndrrs(unsigned long *v) return ok; } +static __always_inline bool __cpu_has_rng(void) +{ + if (unlikely(!system_capabilities_finalized() && !preemptible())) + return this_cpu_has_cap(ARM64_HAS_RNG); + return cpus_have_const_cap(ARM64_HAS_RNG); +} + static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t max_longs) { /* @@ -66,7 +74,7 @@ static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t * cpufeature code and with potential scheduling between CPUs * with and without the feature. 
*/ - if (max_longs && cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndr(v)) + if (max_longs && __cpu_has_rng() && __arm64_rndr(v)) return 1; return 0; } @@ -108,7 +116,7 @@ static inline size_t __must_check arch_get_random_seed_longs(unsigned long *v, s * reseeded after each invocation. This is not a 100% fit but good * enough to implement this API if no other entropy source exists. */ - if (cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndrrs(v)) + if (__cpu_has_rng() && __arm64_rndrrs(v)) return 1; return 0; @@ -121,40 +129,4 @@ static inline bool __init __early_cpu_has_rndr(void) return (ftr >> ID_AA64ISAR0_EL1_RNDR_SHIFT) & 0xf; } -static inline size_t __init __must_check -arch_get_random_seed_longs_early(unsigned long *v, size_t max_longs) -{ - WARN_ON(system_state != SYSTEM_BOOTING); - - if (!max_longs) - return 0; - - if (smccc_trng_available) { - struct arm_smccc_res res; - - max_longs = min_t(size_t, 3, max_longs); - arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, max_longs * 64, &res); - if ((int)res.a0 >= 0) { - switch (max_longs) { - case 3: - *v++ = res.a1; - fallthrough; - case 2: - *v++ = res.a2; - fallthrough; - case 1: - *v++ = res.a3; - break; - } - return max_longs; - } - } - - if (__early_cpu_has_rndr() && __arm64_rndr(v)) - return 1; - - return 0; -} -#define arch_get_random_seed_longs_early arch_get_random_seed_longs_early - #endif /* _ASM_ARCHRANDOM_H */ diff --git a/arch/arm64/include/asm/stackprotector.h b/arch/arm64/include/asm/stackprotector.h index 33f1bb453150..ae3ad80f51fe 100644 --- a/arch/arm64/include/asm/stackprotector.h +++ b/arch/arm64/include/asm/stackprotector.h @@ -13,8 +13,6 @@ #ifndef __ASM_STACKPROTECTOR_H #define __ASM_STACKPROTECTOR_H -#include <linux/random.h> -#include <linux/version.h> #include <asm/pointer_auth.h> extern unsigned long __stack_chk_guard; @@ -28,12 +26,7 @@ extern unsigned long __stack_chk_guard; static __always_inline void boot_init_stack_canary(void) { #if defined(CONFIG_STACKPROTECTOR) - unsigned long canary; - - /* Try to get a semi random initial value. */ - get_random_bytes(&canary, sizeof(canary)); - canary ^= LINUX_VERSION_CODE; - canary &= CANARY_MASK; + unsigned long canary = get_random_canary(); current->stack_canary = canary; if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK)) diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 19cd05eea3f0..269ac1c25ae2 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -593,7 +593,7 @@ unsigned long __get_wchan(struct task_struct *p) unsigned long arch_align_stack(unsigned long sp) { if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) - sp -= prandom_u32_max(PAGE_SIZE); + sp -= get_random_u32_below(PAGE_SIZE); return sp & ~0xf; } diff --git a/arch/csky/include/asm/stackprotector.h b/arch/csky/include/asm/stackprotector.h index d7cd4e51edd9..d23747447166 100644 --- a/arch/csky/include/asm/stackprotector.h +++ b/arch/csky/include/asm/stackprotector.h @@ -2,9 +2,6 @@ #ifndef _ASM_STACKPROTECTOR_H #define _ASM_STACKPROTECTOR_H 1 -#include <linux/random.h> -#include <linux/version.h> - extern unsigned long __stack_chk_guard; /* @@ -15,12 +12,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { - unsigned long canary; - - /* Try to get a semi random initial value. 
*/ - get_random_bytes(&canary, sizeof(canary)); - canary ^= LINUX_VERSION_CODE; - canary &= CANARY_MASK; + unsigned long canary = get_random_canary(); current->stack_canary = canary; __stack_chk_guard = current->stack_canary; diff --git a/arch/loongarch/kernel/process.c b/arch/loongarch/kernel/process.c index ddb8ba4eb399..d61c9f465b95 100644 --- a/arch/loongarch/kernel/process.c +++ b/arch/loongarch/kernel/process.c @@ -294,7 +294,7 @@ unsigned long stack_top(void) unsigned long arch_align_stack(unsigned long sp) { if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) - sp -= prandom_u32_max(PAGE_SIZE); + sp -= get_random_u32_below(PAGE_SIZE); return sp & STACK_ALIGN; } diff --git a/arch/loongarch/kernel/vdso.c b/arch/loongarch/kernel/vdso.c index 8c9826062652..eaebd2e0f725 100644 --- a/arch/loongarch/kernel/vdso.c +++ b/arch/loongarch/kernel/vdso.c @@ -78,7 +78,7 @@ static unsigned long vdso_base(void) unsigned long base = STACK_TOP; if (current->flags & PF_RANDOMIZE) { - base += prandom_u32_max(VDSO_RANDOMIZE_SIZE); + base += get_random_u32_below(VDSO_RANDOMIZE_SIZE); base = PAGE_ALIGN(base); } diff --git a/arch/mips/include/asm/stackprotector.h b/arch/mips/include/asm/stackprotector.h index 68d4be9e1254..518c192ad982 100644 --- a/arch/mips/include/asm/stackprotector.h +++ b/arch/mips/include/asm/stackprotector.h @@ -15,9 +15,6 @@ #ifndef _ASM_STACKPROTECTOR_H #define _ASM_STACKPROTECTOR_H 1 -#include <linux/random.h> -#include <linux/version.h> - extern unsigned long __stack_chk_guard; /* @@ -28,11 +25,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { - unsigned long canary; - - /* Try to get a semi random initial value. */ - get_random_bytes(&canary, sizeof(canary)); - canary ^= LINUX_VERSION_CODE; + unsigned long canary = get_random_canary(); current->stack_canary = canary; __stack_chk_guard = current->stack_canary; diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c index bbe9ce471791..093dbbd6b843 100644 --- a/arch/mips/kernel/process.c +++ b/arch/mips/kernel/process.c @@ -711,7 +711,7 @@ unsigned long mips_stack_top(void) unsigned long arch_align_stack(unsigned long sp) { if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) - sp -= prandom_u32_max(PAGE_SIZE); + sp -= get_random_u32_below(PAGE_SIZE); return sp & ALMASK; } diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c index 5fd9bf1d596c..f6d40e43f108 100644 --- a/arch/mips/kernel/vdso.c +++ b/arch/mips/kernel/vdso.c @@ -79,7 +79,7 @@ static unsigned long vdso_base(void) } if (current->flags & PF_RANDOMIZE) { - base += prandom_u32_max(VDSO_RANDOMIZE_SIZE); + base += get_random_u32_below(VDSO_RANDOMIZE_SIZE); base = PAGE_ALIGN(base); } diff --git a/arch/parisc/kernel/vdso.c b/arch/parisc/kernel/vdso.c index 47e5960a2f96..c5cbfce7a84c 100644 --- a/arch/parisc/kernel/vdso.c +++ b/arch/parisc/kernel/vdso.c @@ -75,7 +75,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, map_base = mm->mmap_base; if (current->flags & PF_RANDOMIZE) - map_base -= prandom_u32_max(0x20) * PAGE_SIZE; + map_base -= get_random_u32_below(0x20) * PAGE_SIZE; vdso_text_start = get_unmapped_area(NULL, map_base, vdso_text_len, 0, 0); diff --git a/arch/powerpc/configs/microwatt_defconfig b/arch/powerpc/configs/microwatt_defconfig index ea2dbd778aad..18d4fe4108cb 100644 --- a/arch/powerpc/configs/microwatt_defconfig +++ b/arch/powerpc/configs/microwatt_defconfig @@ -68,7 +68,6 @@ CONFIG_SERIAL_8250_CONSOLE=y 
CONFIG_SERIAL_OF_PLATFORM=y CONFIG_SERIAL_NONSTANDARD=y # CONFIG_NVRAM is not set -CONFIG_RANDOM_TRUST_CPU=y CONFIG_SPI=y CONFIG_SPI_DEBUG=y CONFIG_SPI_BITBANG=y diff --git a/arch/powerpc/crypto/crc-vpmsum_test.c b/arch/powerpc/crypto/crc-vpmsum_test.c index 273c527868db..c61a874a3a5c 100644 --- a/arch/powerpc/crypto/crc-vpmsum_test.c +++ b/arch/powerpc/crypto/crc-vpmsum_test.c @@ -77,8 +77,8 @@ static int __init crc_test_init(void) pr_info("crc-vpmsum_test begins, %lu iterations\n", iterations); for (i=0; i<iterations; i++) { - size_t offset = prandom_u32_max(16); - size_t len = prandom_u32_max(MAX_CRC_LENGTH); + size_t offset = get_random_u32_below(16); + size_t len = get_random_u32_below(MAX_CRC_LENGTH); if (len <= offset) continue; diff --git a/arch/powerpc/include/asm/stackprotector.h b/arch/powerpc/include/asm/stackprotector.h index 1c8460e23583..283c34647856 100644 --- a/arch/powerpc/include/asm/stackprotector.h +++ b/arch/powerpc/include/asm/stackprotector.h @@ -7,8 +7,6 @@ #ifndef _ASM_STACKPROTECTOR_H #define _ASM_STACKPROTECTOR_H -#include <linux/random.h> -#include <linux/version.h> #include <asm/reg.h> #include <asm/current.h> #include <asm/paca.h> @@ -21,13 +19,7 @@ */ static __always_inline void boot_init_stack_canary(void) { - unsigned long canary; - - /* Try to get a semi random initial value. */ - canary = get_random_canary(); - canary ^= mftb(); - canary ^= LINUX_VERSION_CODE; - canary &= CANARY_MASK; + unsigned long canary = get_random_canary(); current->stack_canary = canary; #ifdef CONFIG_PPC64 diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 67da147fe34d..fcf604370c66 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -2303,6 +2303,6 @@ void notrace __ppc64_runlatch_off(void) unsigned long arch_align_stack(unsigned long sp) { if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) - sp -= prandom_u32_max(PAGE_SIZE); + sp -= get_random_u32_below(PAGE_SIZE); return sp & ~0xf; } diff --git a/arch/riscv/include/asm/stackprotector.h b/arch/riscv/include/asm/stackprotector.h index 09093af46565..43895b90fe3f 100644 --- a/arch/riscv/include/asm/stackprotector.h +++ b/arch/riscv/include/asm/stackprotector.h @@ -3,9 +3,6 @@ #ifndef _ASM_RISCV_STACKPROTECTOR_H #define _ASM_RISCV_STACKPROTECTOR_H -#include <linux/random.h> -#include <linux/version.h> - extern unsigned long __stack_chk_guard; /* @@ -16,12 +13,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { - unsigned long canary; - - /* Try to get a semi random initial value. 
*/ - get_random_bytes(&canary, sizeof(canary)); - canary ^= LINUX_VERSION_CODE; - canary &= CANARY_MASK; + unsigned long canary = get_random_canary(); current->stack_canary = canary; if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK)) diff --git a/arch/s390/configs/debug_defconfig b/arch/s390/configs/debug_defconfig index 63807bd0b536..a7b4e1d82758 100644 --- a/arch/s390/configs/debug_defconfig +++ b/arch/s390/configs/debug_defconfig @@ -573,8 +573,6 @@ CONFIG_VIRTIO_CONSOLE=m CONFIG_HW_RANDOM_VIRTIO=m CONFIG_HANGCHECK_TIMER=m CONFIG_TN3270_FS=y -# CONFIG_RANDOM_TRUST_CPU is not set -# CONFIG_RANDOM_TRUST_BOOTLOADER is not set CONFIG_PPS=m # CONFIG_PTP_1588_CLOCK is not set # CONFIG_HWMON is not set diff --git a/arch/s390/configs/defconfig b/arch/s390/configs/defconfig index 4f9a98247442..2bc2d0fe5774 100644 --- a/arch/s390/configs/defconfig +++ b/arch/s390/configs/defconfig @@ -563,8 +563,6 @@ CONFIG_VIRTIO_CONSOLE=m CONFIG_HW_RANDOM_VIRTIO=m CONFIG_HANGCHECK_TIMER=m CONFIG_TN3270_FS=y -# CONFIG_RANDOM_TRUST_CPU is not set -# CONFIG_RANDOM_TRUST_BOOTLOADER is not set # CONFIG_PTP_1588_CLOCK is not set # CONFIG_HWMON is not set CONFIG_WATCHDOG=y diff --git a/arch/s390/configs/zfcpdump_defconfig b/arch/s390/configs/zfcpdump_defconfig index 5fe9948be644..ae14ab0b864d 100644 --- a/arch/s390/configs/zfcpdump_defconfig +++ b/arch/s390/configs/zfcpdump_defconfig @@ -58,7 +58,6 @@ CONFIG_ZFCP=y # CONFIG_VMCP is not set # CONFIG_MONWRITER is not set # CONFIG_S390_VMUR is not set -# CONFIG_RANDOM_TRUST_BOOTLOADER is not set # CONFIG_HID is not set # CONFIG_VIRTIO_MENU is not set # CONFIG_VHOST_MENU is not set diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c index 42af4b3aa02b..3f5d2db0b854 100644 --- a/arch/s390/kernel/process.c +++ b/arch/s390/kernel/process.c @@ -224,7 +224,7 @@ unsigned long __get_wchan(struct task_struct *p) unsigned long arch_align_stack(unsigned long sp) { if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) - sp -= prandom_u32_max(PAGE_SIZE); + sp -= get_random_u32_below(PAGE_SIZE); return sp & ~0xf; } diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c index d6df7169c01f..ff7bf4432229 100644 --- a/arch/s390/kernel/vdso.c +++ b/arch/s390/kernel/vdso.c @@ -207,7 +207,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned long len) end -= len; if (end > start) { - offset = prandom_u32_max(((end - start) >> PAGE_SHIFT) + 1); + offset = get_random_u32_below(((end - start) >> PAGE_SHIFT) + 1); addr = start + (offset << PAGE_SHIFT); } else { addr = start; diff --git a/arch/sh/include/asm/stackprotector.h b/arch/sh/include/asm/stackprotector.h index 35616841d0a1..665dafac376f 100644 --- a/arch/sh/include/asm/stackprotector.h +++ b/arch/sh/include/asm/stackprotector.h @@ -2,9 +2,6 @@ #ifndef __ASM_SH_STACKPROTECTOR_H #define __ASM_SH_STACKPROTECTOR_H -#include <linux/random.h> -#include <linux/version.h> - extern unsigned long __stack_chk_guard; /* @@ -15,12 +12,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { - unsigned long canary; - - /* Try to get a semi random initial value. 
*/ - get_random_bytes(&canary, sizeof(canary)); - canary ^= LINUX_VERSION_CODE; - canary &= CANARY_MASK; + unsigned long canary = get_random_canary(); current->stack_canary = canary; __stack_chk_guard = current->stack_canary; diff --git a/arch/sparc/vdso/vma.c b/arch/sparc/vdso/vma.c index ae9a86cb6f3d..136c78f28f8b 100644 --- a/arch/sparc/vdso/vma.c +++ b/arch/sparc/vdso/vma.c @@ -354,7 +354,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned int len) unsigned int offset; /* This loses some more bits than a modulo, but is cheaper */ - offset = prandom_u32_max(PTRS_PER_PTE); + offset = get_random_u32_below(PTRS_PER_PTE); return start + (offset << PAGE_SHIFT); } diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c index 010bc422a09d..e38f41444721 100644 --- a/arch/um/kernel/process.c +++ b/arch/um/kernel/process.c @@ -356,7 +356,7 @@ int singlestepping(void * t) unsigned long arch_align_stack(unsigned long sp) { if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) - sp -= prandom_u32_max(8192); + sp -= get_random_u32_below(8192); return sp & ~0xf; } #endif diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c index 3c6b488b2f11..b8f3f9b9e53c 100644 --- a/arch/x86/entry/vdso/vma.c +++ b/arch/x86/entry/vdso/vma.c @@ -303,7 +303,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned len) end -= len; if (end > start) { - offset = prandom_u32_max(((end - start) >> PAGE_SHIFT) + 1); + offset = get_random_u32_below(((end - start) >> PAGE_SHIFT) + 1); addr = start + (offset << PAGE_SHIFT); } else { addr = start; diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h index 24a8d6c4fb18..00473a650f51 100644 --- a/arch/x86/include/asm/stackprotector.h +++ b/arch/x86/include/asm/stackprotector.h @@ -34,7 +34,6 @@ #include <asm/percpu.h> #include <asm/desc.h> -#include <linux/random.h> #include <linux/sched.h> /* @@ -50,22 +49,11 @@ */ static __always_inline void boot_init_stack_canary(void) { - u64 canary; - u64 tsc; + unsigned long canary = get_random_canary(); #ifdef CONFIG_X86_64 BUILD_BUG_ON(offsetof(struct fixed_percpu_data, stack_canary) != 40); #endif - /* - * We both use the random pool and the current TSC as a source - * of randomness. The TSC only matters for very early init, - * there it already has some randomness on most systems. Later - * on during the bootup the random pool has true entropy too. 
- */ - get_random_bytes(&canary, sizeof(canary)); - tsc = rdtsc(); - canary += tsc + (tsc << 32UL); - canary &= CANARY_MASK; current->stack_canary = canary; #ifdef CONFIG_X86_64 diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 3e508f239098..3f66dd03c091 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -22,9 +22,9 @@ #include <linux/io.h> #include <linux/syscore_ops.h> #include <linux/pgtable.h> +#include <linux/stackprotector.h> #include <asm/cmdline.h> -#include <asm/stackprotector.h> #include <asm/perf_event.h> #include <asm/mmu_context.h> #include <asm/doublefault.h> diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index c8a25aed604d..d85a6980e263 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -53,7 +53,7 @@ static unsigned long int get_module_load_offset(void) */ if (module_load_offset == 0) module_load_offset = - (prandom_u32_max(1024) + 1) * PAGE_SIZE; + get_random_u32_inclusive(1, 1024) * PAGE_SIZE; mutex_unlock(&module_kaslr_mutex); } return module_load_offset; diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index e436c9c1ef3b..40d156a31676 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -965,7 +965,7 @@ early_param("idle", idle_setup); unsigned long arch_align_stack(unsigned long sp) { if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) - sp -= prandom_u32_max(8192); + sp -= get_random_u32_below(8192); return sp & ~0xf; } diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c index 49325caa7307..b26123c90b4f 100644 --- a/arch/x86/kernel/setup_percpu.c +++ b/arch/x86/kernel/setup_percpu.c @@ -11,6 +11,7 @@ #include <linux/smp.h> #include <linux/topology.h> #include <linux/pfn.h> +#include <linux/stackprotector.h> #include <asm/sections.h> #include <asm/processor.h> #include <asm/desc.h> @@ -21,7 +22,6 @@ #include <asm/proto.h> #include <asm/cpumask.h> #include <asm/cpu.h> -#include <asm/stackprotector.h> DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number); EXPORT_PER_CPU_SYMBOL(cpu_number); diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index 3f3ea0287f69..5a742b6ec46d 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -56,6 +56,7 @@ #include <linux/numa.h> #include <linux/pgtable.h> #include <linux/overflow.h> +#include <linux/stackprotector.h> #include <asm/acpi.h> #include <asm/desc.h> diff --git a/arch/x86/mm/pat/cpa-test.c b/arch/x86/mm/pat/cpa-test.c index 423b21e80929..3d2f7f0a6ed1 100644 --- a/arch/x86/mm/pat/cpa-test.c +++ b/arch/x86/mm/pat/cpa-test.c @@ -136,10 +136,10 @@ static int pageattr_test(void) failed += print_split(&sa); for (i = 0; i < NTEST; i++) { - unsigned long pfn = prandom_u32_max(max_pfn_mapped); + unsigned long pfn = get_random_u32_below(max_pfn_mapped); addr[i] = (unsigned long)__va(pfn << PAGE_SHIFT); - len[i] = prandom_u32_max(NPAGES); + len[i] = get_random_u32_below(NPAGES); len[i] = min_t(unsigned long, len[i], max_pfn_mapped - pfn - 1); if (len[i] == 0) diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c index 038da45f057a..1a2ba31635a5 100644 --- a/arch/x86/xen/enlighten_pv.c +++ b/arch/x86/xen/enlighten_pv.c @@ -33,6 +33,7 @@ #include <linux/edd.h> #include <linux/reboot.h> #include <linux/virtio_anchor.h> +#include <linux/stackprotector.h> #include <xen/xen.h> #include <xen/events.h> @@ -65,7 +66,6 @@ #include <asm/pgalloc.h> #include <asm/tlbflush.h> #include <asm/reboot.h> -#include <asm/stackprotector.h> 
#include <asm/hypervisor.h> #include <asm/mach_traps.h> #include <asm/mwait.h> diff --git a/arch/xtensa/include/asm/stackprotector.h b/arch/xtensa/include/asm/stackprotector.h index e368f94fd2af..dd10279a2378 100644 --- a/arch/xtensa/include/asm/stackprotector.h +++ b/arch/xtensa/include/asm/stackprotector.h @@ -14,9 +14,6 @@ #ifndef _ASM_STACKPROTECTOR_H #define _ASM_STACKPROTECTOR_H 1 -#include <linux/random.h> -#include <linux/version.h> - extern unsigned long __stack_chk_guard; /* @@ -27,11 +24,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { - unsigned long canary; - - /* Try to get a semi random initial value. */ - get_random_bytes(&canary, sizeof(canary)); - canary ^= LINUX_VERSION_CODE; + unsigned long canary = get_random_canary(); current->stack_canary = canary; __stack_chk_guard = current->stack_canary; diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c index 3285e3af43e1..e75728f87ce5 100644 --- a/crypto/rsa-pkcs1pad.c +++ b/crypto/rsa-pkcs1pad.c @@ -253,7 +253,7 @@ static int pkcs1pad_encrypt(struct akcipher_request *req) ps_end = ctx->key_size - req->src_len - 2; req_ctx->in_buf[0] = 0x02; for (i = 1; i < ps_end; i++) - req_ctx->in_buf[i] = 1 + prandom_u32_max(255); + req_ctx->in_buf[i] = get_random_u32_inclusive(1, 255); req_ctx->in_buf[ps_end] = 0x00; pkcs1pad_sg_set_buf(req_ctx->in_sg, req_ctx->in_buf, diff --git a/crypto/testmgr.c b/crypto/testmgr.c index bcd059caa1c8..e669acd2ebdd 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -855,9 +855,9 @@ static int prepare_keybuf(const u8 *key, unsigned int ksize, /* Generate a random length in range [0, max_len], but prefer smaller values */ static unsigned int generate_random_length(unsigned int max_len) { - unsigned int len = prandom_u32_max(max_len + 1); + unsigned int len = get_random_u32_below(max_len + 1); - switch (prandom_u32_max(4)) { + switch (get_random_u32_below(4)) { case 0: return len % 64; case 1: @@ -874,14 +874,14 @@ static void flip_random_bit(u8 *buf, size_t size) { size_t bitpos; - bitpos = prandom_u32_max(size * 8); + bitpos = get_random_u32_below(size * 8); buf[bitpos / 8] ^= 1 << (bitpos % 8); } /* Flip a random byte in the given nonempty data buffer */ static void flip_random_byte(u8 *buf, size_t size) { - buf[prandom_u32_max(size)] ^= 0xff; + buf[get_random_u32_below(size)] ^= 0xff; } /* Sometimes make some random changes to the given nonempty data buffer */ @@ -891,15 +891,15 @@ static void mutate_buffer(u8 *buf, size_t size) size_t i; /* Sometimes flip some bits */ - if (prandom_u32_max(4) == 0) { - num_flips = min_t(size_t, 1 << prandom_u32_max(8), size * 8); + if (get_random_u32_below(4) == 0) { + num_flips = min_t(size_t, 1 << get_random_u32_below(8), size * 8); for (i = 0; i < num_flips; i++) flip_random_bit(buf, size); } /* Sometimes flip some bytes */ - if (prandom_u32_max(4) == 0) { - num_flips = min_t(size_t, 1 << prandom_u32_max(8), size); + if (get_random_u32_below(4) == 0) { + num_flips = min_t(size_t, 1 << get_random_u32_below(8), size); for (i = 0; i < num_flips; i++) flip_random_byte(buf, size); } @@ -915,11 +915,11 @@ static void generate_random_bytes(u8 *buf, size_t count) if (count == 0) return; - switch (prandom_u32_max(8)) { /* Choose a generation strategy */ + switch (get_random_u32_below(8)) { /* Choose a generation strategy */ case 0: case 1: /* All the same byte, plus optional mutations */ - switch (prandom_u32_max(4)) { + switch (get_random_u32_below(4)) { case 0: b = 0x00; break; @@ -959,24 +959,24 @@ 
static char *generate_random_sgl_divisions(struct test_sg_division *divs, unsigned int this_len; const char *flushtype_str; - if (div == &divs[max_divs - 1] || prandom_u32_max(2) == 0) + if (div == &divs[max_divs - 1] || get_random_u32_below(2) == 0) this_len = remaining; else - this_len = 1 + prandom_u32_max(remaining); + this_len = get_random_u32_inclusive(1, remaining); div->proportion_of_total = this_len; - if (prandom_u32_max(4) == 0) - div->offset = (PAGE_SIZE - 128) + prandom_u32_max(128); - else if (prandom_u32_max(2) == 0) - div->offset = prandom_u32_max(32); + if (get_random_u32_below(4) == 0) + div->offset = get_random_u32_inclusive(PAGE_SIZE - 128, PAGE_SIZE - 1); + else if (get_random_u32_below(2) == 0) + div->offset = get_random_u32_below(32); else - div->offset = prandom_u32_max(PAGE_SIZE); - if (prandom_u32_max(8) == 0) + div->offset = get_random_u32_below(PAGE_SIZE); + if (get_random_u32_below(8) == 0) div->offset_relative_to_alignmask = true; div->flush_type = FLUSH_TYPE_NONE; if (gen_flushes) { - switch (prandom_u32_max(4)) { + switch (get_random_u32_below(4)) { case 0: div->flush_type = FLUSH_TYPE_REIMPORT; break; @@ -988,7 +988,7 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs, if (div->flush_type != FLUSH_TYPE_NONE && !(req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) && - prandom_u32_max(2) == 0) + get_random_u32_below(2) == 0) div->nosimd = true; switch (div->flush_type) { @@ -1035,7 +1035,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg, p += scnprintf(p, end - p, "random:"); - switch (prandom_u32_max(4)) { + switch (get_random_u32_below(4)) { case 0: case 1: cfg->inplace_mode = OUT_OF_PLACE; @@ -1050,12 +1050,12 @@ static void generate_random_testvec_config(struct testvec_config *cfg, break; } - if (prandom_u32_max(2) == 0) { + if (get_random_u32_below(2) == 0) { cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP; p += scnprintf(p, end - p, " may_sleep"); } - switch (prandom_u32_max(4)) { + switch (get_random_u32_below(4)) { case 0: cfg->finalization_type = FINALIZATION_TYPE_FINAL; p += scnprintf(p, end - p, " use_final"); @@ -1071,7 +1071,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg, } if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) && - prandom_u32_max(2) == 0) { + get_random_u32_below(2) == 0) { cfg->nosimd = true; p += scnprintf(p, end - p, " nosimd"); } @@ -1084,7 +1084,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg, cfg->req_flags); p += scnprintf(p, end - p, "]"); - if (cfg->inplace_mode == OUT_OF_PLACE && prandom_u32_max(2) == 0) { + if (cfg->inplace_mode == OUT_OF_PLACE && get_random_u32_below(2) == 0) { p += scnprintf(p, end - p, " dst_divs=["); p = generate_random_sgl_divisions(cfg->dst_divs, ARRAY_SIZE(cfg->dst_divs), @@ -1093,13 +1093,13 @@ static void generate_random_testvec_config(struct testvec_config *cfg, p += scnprintf(p, end - p, "]"); } - if (prandom_u32_max(2) == 0) { - cfg->iv_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK); + if (get_random_u32_below(2) == 0) { + cfg->iv_offset = get_random_u32_inclusive(1, MAX_ALGAPI_ALIGNMASK); p += scnprintf(p, end - p, " iv_offset=%u", cfg->iv_offset); } - if (prandom_u32_max(2) == 0) { - cfg->key_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK); + if (get_random_u32_below(2) == 0) { + cfg->key_offset = get_random_u32_inclusive(1, MAX_ALGAPI_ALIGNMASK); p += scnprintf(p, end - p, " key_offset=%u", cfg->key_offset); } @@ -1652,8 +1652,8 @@ static void generate_random_hash_testvec(struct shash_desc *desc, 
vec->ksize = 0; if (maxkeysize) { vec->ksize = maxkeysize; - if (prandom_u32_max(4) == 0) - vec->ksize = 1 + prandom_u32_max(maxkeysize); + if (get_random_u32_below(4) == 0) + vec->ksize = get_random_u32_inclusive(1, maxkeysize); generate_random_bytes((u8 *)vec->key, vec->ksize); vec->setkey_error = crypto_shash_setkey(desc->tfm, vec->key, @@ -2218,13 +2218,13 @@ static void mutate_aead_message(struct aead_testvec *vec, bool aad_iv, const unsigned int aad_tail_size = aad_iv ? ivsize : 0; const unsigned int authsize = vec->clen - vec->plen; - if (prandom_u32_max(2) == 0 && vec->alen > aad_tail_size) { + if (get_random_u32_below(2) == 0 && vec->alen > aad_tail_size) { /* Mutate the AAD */ flip_random_bit((u8 *)vec->assoc, vec->alen - aad_tail_size); - if (prandom_u32_max(2) == 0) + if (get_random_u32_below(2) == 0) return; } - if (prandom_u32_max(2) == 0) { + if (get_random_u32_below(2) == 0) { /* Mutate auth tag (assuming it's at the end of ciphertext) */ flip_random_bit((u8 *)vec->ctext + vec->plen, authsize); } else { @@ -2249,7 +2249,7 @@ static void generate_aead_message(struct aead_request *req, const unsigned int ivsize = crypto_aead_ivsize(tfm); const unsigned int authsize = vec->clen - vec->plen; const bool inauthentic = (authsize >= MIN_COLLISION_FREE_AUTHSIZE) && - (prefer_inauthentic || prandom_u32_max(4) == 0); + (prefer_inauthentic || get_random_u32_below(4) == 0); /* Generate the AAD. */ generate_random_bytes((u8 *)vec->assoc, vec->alen); @@ -2257,7 +2257,7 @@ static void generate_aead_message(struct aead_request *req, /* Avoid implementation-defined behavior. */ memcpy((u8 *)vec->assoc + vec->alen - ivsize, vec->iv, ivsize); - if (inauthentic && prandom_u32_max(2) == 0) { + if (inauthentic && get_random_u32_below(2) == 0) { /* Generate a random ciphertext. 
*/ generate_random_bytes((u8 *)vec->ctext, vec->clen); } else { @@ -2321,8 +2321,8 @@ static void generate_random_aead_testvec(struct aead_request *req, /* Key: length in [0, maxkeysize], but usually choose maxkeysize */ vec->klen = maxkeysize; - if (prandom_u32_max(4) == 0) - vec->klen = prandom_u32_max(maxkeysize + 1); + if (get_random_u32_below(4) == 0) + vec->klen = get_random_u32_below(maxkeysize + 1); generate_random_bytes((u8 *)vec->key, vec->klen); vec->setkey_error = crypto_aead_setkey(tfm, vec->key, vec->klen); @@ -2331,8 +2331,8 @@ static void generate_random_aead_testvec(struct aead_request *req, /* Tag length: in [0, maxauthsize], but usually choose maxauthsize */ authsize = maxauthsize; - if (prandom_u32_max(4) == 0) - authsize = prandom_u32_max(maxauthsize + 1); + if (get_random_u32_below(4) == 0) + authsize = get_random_u32_below(maxauthsize + 1); if (prefer_inauthentic && authsize < MIN_COLLISION_FREE_AUTHSIZE) authsize = MIN_COLLISION_FREE_AUTHSIZE; if (WARN_ON(authsize > maxdatasize)) @@ -2342,7 +2342,7 @@ static void generate_random_aead_testvec(struct aead_request *req, /* AAD, plaintext, and ciphertext lengths */ total_len = generate_random_length(maxdatasize); - if (prandom_u32_max(4) == 0) + if (get_random_u32_below(4) == 0) vec->alen = 0; else vec->alen = generate_random_length(total_len); @@ -2958,8 +2958,8 @@ static void generate_random_cipher_testvec(struct skcipher_request *req, /* Key: length in [0, maxkeysize], but usually choose maxkeysize */ vec->klen = maxkeysize; - if (prandom_u32_max(4) == 0) - vec->klen = prandom_u32_max(maxkeysize + 1); + if (get_random_u32_below(4) == 0) + vec->klen = get_random_u32_below(maxkeysize + 1); generate_random_bytes((u8 *)vec->key, vec->klen); vec->setkey_error = crypto_skcipher_setkey(tfm, vec->key, vec->klen); diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index ee69d50ba4fd..3eccc6cd5004 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -781,7 +781,7 @@ static struct socket *drbd_wait_for_connect(struct drbd_connection *connection, timeo = connect_int * HZ; /* 28.5% random jitter */ - timeo += prandom_u32_max(2) ? timeo / 7 : -timeo / 7; + timeo += get_random_u32_below(2) ? timeo / 7 : -timeo / 7; err = wait_for_completion_interruptible_timeout(&ad->door_bell, timeo); if (err <= 0) @@ -1004,7 +1004,7 @@ retry: drbd_warn(connection, "Error receiving initial packet\n"); sock_release(s); randomize: - if (prandom_u32_max(2)) + if (get_random_u32_below(2)) goto retry; } } diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h index 01fd10a399b6..2e139e76de4c 100644 --- a/drivers/bus/mhi/host/internal.h +++ b/drivers/bus/mhi/host/internal.h @@ -129,7 +129,7 @@ enum mhi_pm_state { #define PRIMARY_CMD_RING 0 #define MHI_DEV_WAKE_DB 127 #define MHI_MAX_MTU 0xffff -#define MHI_RANDOM_U32_NONZERO(bmsk) (prandom_u32_max(bmsk) + 1) +#define MHI_RANDOM_U32_NONZERO(bmsk) (get_random_u32_inclusive(1, bmsk)) enum mhi_er_type { MHI_ER_TYPE_INVALID = 0x0, diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig index 0f378d29dab0..30fe9848dac1 100644 --- a/drivers/char/Kconfig +++ b/drivers/char/Kconfig @@ -423,40 +423,4 @@ config ADI and SSM (Silicon Secured Memory). Intended consumers of this driver include crash and makedumpfile. -config RANDOM_TRUST_CPU - bool "Initialize RNG using CPU RNG instructions" - default y - help - Initialize the RNG using random numbers supplied by the CPU's - RNG instructions (e.g. 
RDRAND), if supported and available. These - random numbers are never used directly, but are rather hashed into - the main input pool, and this happens regardless of whether or not - this option is enabled. Instead, this option controls whether the - they are credited and hence can initialize the RNG. Additionally, - other sources of randomness are always used, regardless of this - setting. Enabling this implies trusting that the CPU can supply high - quality and non-backdoored random numbers. - - Say Y here unless you have reason to mistrust your CPU or believe - its RNG facilities may be faulty. This may also be configured at - boot time with "random.trust_cpu=on/off". - -config RANDOM_TRUST_BOOTLOADER - bool "Initialize RNG using bootloader-supplied seed" - default y - help - Initialize the RNG using a seed supplied by the bootloader or boot - environment (e.g. EFI or a bootloader-generated device tree). This - seed is not used directly, but is rather hashed into the main input - pool, and this happens regardless of whether or not this option is - enabled. Instead, this option controls whether the seed is credited - and hence can initialize the RNG. Additionally, other sources of - randomness are always used, regardless of this setting. Enabling - this implies trusting that the bootloader can supply high quality and - non-backdoored seeds. - - Say Y here unless you have reason to mistrust your bootloader or - believe its RNG facilities may be faulty. This may also be configured - at boot time with "random.trust_bootloader=on/off". - endmenu diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c index cc002b0c2f0c..63a0a8e4505d 100644 --- a/drivers/char/hw_random/core.c +++ b/drivers/char/hw_random/core.c @@ -69,8 +69,10 @@ static void add_early_randomness(struct hwrng *rng) mutex_lock(&reading_mutex); bytes_read = rng_get_data(rng, rng_fillbuf, 32, 0); mutex_unlock(&reading_mutex); - if (bytes_read > 0) - add_device_randomness(rng_fillbuf, bytes_read); + if (bytes_read > 0) { + size_t entropy = bytes_read * 8 * rng->quality / 1024; + add_hwgenerator_randomness(rng_fillbuf, bytes_read, entropy, false); + } } static inline void cleanup_rng(struct kref *kref) @@ -528,7 +530,7 @@ static int hwrng_fillfn(void *unused) /* Outside lock, sure, but y'know: randomness. */ add_hwgenerator_randomness((void *)rng_fillbuf, rc, - entropy >> 10); + entropy >> 10, true); } hwrng_fill = NULL; return 0; diff --git a/drivers/char/random.c b/drivers/char/random.c index 69754155300e..e872acc1238f 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -53,6 +53,7 @@ #include <linux/uaccess.h> #include <linux/suspend.h> #include <linux/siphash.h> +#include <linux/sched/isolation.h> #include <crypto/chacha.h> #include <crypto/blake2s.h> #include <asm/processor.h> @@ -84,6 +85,7 @@ static DEFINE_STATIC_KEY_FALSE(crng_is_ready); /* Various types of waiters for crng_init->CRNG_READY transition. */ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static struct fasync_struct *fasync; +static ATOMIC_NOTIFIER_HEAD(random_ready_notifier); /* Control how we warn userspace. */ static struct ratelimit_state urandom_warning = @@ -120,7 +122,7 @@ static void try_to_generate_entropy(void); * Wait for the input pool to be seeded and thus guaranteed to supply * cryptographically secure random numbers. This applies to: the /dev/urandom * device, the get_random_bytes function, and the get_random_{u8,u16,u32,u64, - * int,long} family of functions. 
Using any of these functions without first + * long} family of functions. Using any of these functions without first * calling this function forfeits the guarantee of security. * * Returns: 0 if the input pool has been seeded. @@ -140,6 +142,26 @@ int wait_for_random_bytes(void) } EXPORT_SYMBOL(wait_for_random_bytes); +/* + * Add a callback function that will be invoked when the crng is initialised, + * or immediately if it already has been. Only use this is you are absolutely + * sure it is required. Most users should instead be able to test + * `rng_is_initialized()` on demand, or make use of `get_random_bytes_wait()`. + */ +int __cold execute_with_initialized_rng(struct notifier_block *nb) +{ + unsigned long flags; + int ret = 0; + + spin_lock_irqsave(&random_ready_notifier.lock, flags); + if (crng_ready()) + nb->notifier_call(nb, 0, NULL); + else + ret = raw_notifier_chain_register((struct raw_notifier_head *)&random_ready_notifier.head, nb); + spin_unlock_irqrestore(&random_ready_notifier.lock, flags); + return ret; +} + #define warn_unseeded_randomness() \ if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \ printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \ @@ -160,6 +182,9 @@ EXPORT_SYMBOL(wait_for_random_bytes); * u8 get_random_u8() * u16 get_random_u16() * u32 get_random_u32() + * u32 get_random_u32_below(u32 ceil) + * u32 get_random_u32_above(u32 floor) + * u32 get_random_u32_inclusive(u32 floor, u32 ceil) * u64 get_random_u64() * unsigned long get_random_long() * @@ -179,7 +204,6 @@ enum { static struct { u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); - unsigned long birth; unsigned long generation; spinlock_t lock; } base_crng = { @@ -197,16 +221,41 @@ static DEFINE_PER_CPU(struct crng, crngs) = { .lock = INIT_LOCAL_LOCK(crngs.lock), }; +/* + * Return the interval until the next reseeding, which is normally + * CRNG_RESEED_INTERVAL, but during early boot, it is at an interval + * proportional to the uptime. + */ +static unsigned int crng_reseed_interval(void) +{ + static bool early_boot = true; + + if (unlikely(READ_ONCE(early_boot))) { + time64_t uptime = ktime_get_seconds(); + if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2) + WRITE_ONCE(early_boot, false); + else + return max_t(unsigned int, CRNG_RESEED_START_INTERVAL, + (unsigned int)uptime / 2 * HZ); + } + return CRNG_RESEED_INTERVAL; +} + /* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */ static void extract_entropy(void *buf, size_t len); /* This extracts a new crng key from the input pool. */ -static void crng_reseed(void) +static void crng_reseed(struct work_struct *work) { + static DECLARE_DELAYED_WORK(next_reseed, crng_reseed); unsigned long flags; unsigned long next_gen; u8 key[CHACHA_KEY_SIZE]; + /* Immediately schedule the next reseeding, so that it fires sooner rather than later. 
*/ + if (likely(system_unbound_wq)) + queue_delayed_work(system_unbound_wq, &next_reseed, crng_reseed_interval()); + extract_entropy(key, sizeof(key)); /* @@ -221,7 +270,6 @@ static void crng_reseed(void) if (next_gen == ULONG_MAX) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); - WRITE_ONCE(base_crng.birth, jiffies); if (!static_branch_likely(&crng_is_ready)) crng_init = CRNG_READY; spin_unlock_irqrestore(&base_crng.lock, flags); @@ -261,26 +309,6 @@ static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], } /* - * Return the interval until the next reseeding, which is normally - * CRNG_RESEED_INTERVAL, but during early boot, it is at an interval - * proportional to the uptime. - */ -static unsigned int crng_reseed_interval(void) -{ - static bool early_boot = true; - - if (unlikely(READ_ONCE(early_boot))) { - time64_t uptime = ktime_get_seconds(); - if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2) - WRITE_ONCE(early_boot, false); - else - return max_t(unsigned int, CRNG_RESEED_START_INTERVAL, - (unsigned int)uptime / 2 * HZ); - } - return CRNG_RESEED_INTERVAL; -} - -/* * This function returns a ChaCha state that you may use for generating * random data. It also returns up to 32 bytes on its own of random data * that may be used; random_data_len may not be greater than 32. @@ -315,13 +343,6 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], return; } - /* - * If the base_crng is old enough, we reseed, which in turn bumps the - * generation counter that we check below. - */ - if (unlikely(time_is_before_jiffies(READ_ONCE(base_crng.birth) + crng_reseed_interval()))) - crng_reseed(); - local_lock_irqsave(&crngs.lock, flags); crng = raw_cpu_ptr(&crngs); @@ -383,11 +404,11 @@ static void _get_random_bytes(void *buf, size_t len) } /* - * This function is the exported kernel interface. It returns some number of - * good random numbers, suitable for key generation, seeding TCP sequence - * numbers, etc. In order to ensure that the randomness returned by this - * function is okay, the function wait_for_random_bytes() should be called and - * return 0 at least once at any point prior. + * This returns random bytes in arbitrary quantities. The quality of the + * random bytes is good as /dev/urandom. In order to ensure that the + * randomness provided by this function is okay, the function + * wait_for_random_bytes() should be called and return 0 at least once + * at any point prior. */ void get_random_bytes(void *buf, size_t len) { @@ -510,6 +531,41 @@ DEFINE_BATCHED_ENTROPY(u16) DEFINE_BATCHED_ENTROPY(u32) DEFINE_BATCHED_ENTROPY(u64) +u32 __get_random_u32_below(u32 ceil) +{ + /* + * This is the slow path for variable ceil. It is still fast, most of + * the time, by doing traditional reciprocal multiplication and + * opportunistically comparing the lower half to ceil itself, before + * falling back to computing a larger bound, and then rejecting samples + * whose lower half would indicate a range indivisible by ceil. The use + * of `-ceil % ceil` is analogous to `2^32 % ceil`, but is computable + * in 32-bits. + */ + u32 rand = get_random_u32(); + u64 mult; + + /* + * This function is technically undefined for ceil == 0, and in fact + * for the non-underscored constant version in the header, we build bug + * on that. But for the non-constant case, it's convenient to have that + * evaluate to being a straight call to get_random_u32(), so that + * get_random_u32_inclusive() can work over its whole range without + * undefined behavior. 
+ */ + if (unlikely(!ceil)) + return rand; + + mult = (u64)ceil * rand; + if (unlikely((u32)mult < ceil)) { + u32 bound = -ceil % ceil; + while (unlikely((u32)mult < bound)) + mult = (u64)ceil * get_random_u32(); + } + return mult >> 32; +} +EXPORT_SYMBOL(__get_random_u32_below); + #ifdef CONFIG_SMP /* * This function is called when the CPU is coming up, with entry @@ -660,9 +716,10 @@ static void __cold _credit_init_bits(size_t bits) } while (!try_cmpxchg(&input_pool.init_bits, &orig, new)); if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) { - crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */ + crng_reseed(NULL); /* Sets crng_init to CRNG_READY under base_crng.lock. */ if (static_key_initialized) execute_in_process_context(crng_set_ready, &set_ready); + atomic_notifier_call_chain(&random_ready_notifier, 0, NULL); wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); pr_notice("crng init done\n"); @@ -689,7 +746,7 @@ static void __cold _credit_init_bits(size_t bits) * the above entropy accumulation routines: * * void add_device_randomness(const void *buf, size_t len); - * void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy); + * void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy, bool sleep_after); * void add_bootloader_randomness(const void *buf, size_t len); * void add_vmfork_randomness(const void *unique_vm_id, size_t len); * void add_interrupt_randomness(int irq); @@ -710,7 +767,7 @@ static void __cold _credit_init_bits(size_t bits) * * add_bootloader_randomness() is called by bootloader drivers, such as EFI * and device tree, and credits its input depending on whether or not the - * configuration option CONFIG_RANDOM_TRUST_BOOTLOADER is set. + * command line option 'random.trust_bootloader'. 
* * add_vmfork_randomness() adds a unique (but not necessarily secret) ID * representing the current instance of a VM to the pool, without crediting, @@ -736,8 +793,8 @@ static void __cold _credit_init_bits(size_t bits) * **********************************************************************/ -static bool trust_cpu __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); -static bool trust_bootloader __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER); +static bool trust_cpu __initdata = true; +static bool trust_bootloader __initdata = true; static int __init parse_trust_cpu(char *arg) { return kstrtobool(arg, &trust_cpu); @@ -768,7 +825,7 @@ static int random_pm_notification(struct notifier_block *nb, unsigned long actio if (crng_ready() && (action == PM_RESTORE_PREPARE || (action == PM_POST_SUSPEND && !IS_ENABLED(CONFIG_PM_AUTOSLEEP) && !IS_ENABLED(CONFIG_PM_USERSPACE_AUTOSLEEP)))) { - crng_reseed(); + crng_reseed(NULL); pr_notice("crng reseeded on system resumption\n"); } return 0; @@ -791,13 +848,13 @@ void __init random_init_early(const char *command_line) #endif for (i = 0, arch_bits = sizeof(entropy) * 8; i < ARRAY_SIZE(entropy);) { - longs = arch_get_random_seed_longs_early(entropy, ARRAY_SIZE(entropy) - i); + longs = arch_get_random_seed_longs(entropy, ARRAY_SIZE(entropy) - i); if (longs) { _mix_pool_bytes(entropy, sizeof(*entropy) * longs); i += longs; continue; } - longs = arch_get_random_longs_early(entropy, ARRAY_SIZE(entropy) - i); + longs = arch_get_random_longs(entropy, ARRAY_SIZE(entropy) - i); if (longs) { _mix_pool_bytes(entropy, sizeof(*entropy) * longs); i += longs; @@ -812,7 +869,7 @@ void __init random_init_early(const char *command_line) /* Reseed if already seeded by earlier phases. */ if (crng_ready()) - crng_reseed(); + crng_reseed(NULL); else if (trust_cpu) _credit_init_bits(arch_bits); } @@ -840,7 +897,7 @@ void __init random_init(void) /* Reseed if already seeded by earlier phases. */ if (crng_ready()) - crng_reseed(); + crng_reseed(NULL); WARN_ON(register_pm_notifier(&pm_notifier)); @@ -869,11 +926,11 @@ void add_device_randomness(const void *buf, size_t len) EXPORT_SYMBOL(add_device_randomness); /* - * Interface for in-kernel drivers of true hardware RNGs. - * Those devices may produce endless random bits and will be throttled - * when our pool is full. + * Interface for in-kernel drivers of true hardware RNGs. Those devices + * may produce endless random bits, so this function will sleep for + * some amount of time after, if the sleep_after parameter is true. */ -void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy) +void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy, bool sleep_after) { mix_pool_bytes(buf, len); credit_init_bits(entropy); @@ -882,14 +939,14 @@ void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy) * Throttle writing to once every reseed interval, unless we're not yet * initialized or no entropy is credited. */ - if (!kthread_should_stop() && (crng_ready() || !entropy)) + if (sleep_after && !kthread_should_stop() && (crng_ready() || !entropy)) schedule_timeout_interruptible(crng_reseed_interval()); } EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); /* - * Handle random seed passed by bootloader, and credit it if - * CONFIG_RANDOM_TRUST_BOOTLOADER is set. + * Handle random seed passed by bootloader, and credit it depending + * on the command line option 'random.trust_bootloader'. 
*/ void __init add_bootloader_randomness(const void *buf, size_t len) { @@ -910,7 +967,7 @@ void __cold add_vmfork_randomness(const void *unique_vm_id, size_t len) { add_device_randomness(unique_vm_id, len); if (crng_ready()) { - crng_reseed(); + crng_reseed(NULL); pr_notice("crng reseeded due to virtual machine fork\n"); } blocking_notifier_call_chain(&vmfork_chain, 0, NULL); @@ -1176,66 +1233,102 @@ void __cold rand_initialize_disk(struct gendisk *disk) struct entropy_timer_state { unsigned long entropy; struct timer_list timer; - unsigned int samples, samples_per_bit; + atomic_t samples; + unsigned int samples_per_bit; }; /* - * Each time the timer fires, we expect that we got an unpredictable - * jump in the cycle counter. Even if the timer is running on another - * CPU, the timer activity will be touching the stack of the CPU that is - * generating entropy.. + * Each time the timer fires, we expect that we got an unpredictable jump in + * the cycle counter. Even if the timer is running on another CPU, the timer + * activity will be touching the stack of the CPU that is generating entropy. * - * Note that we don't re-arm the timer in the timer itself - we are - * happy to be scheduled away, since that just makes the load more - * complex, but we do not want the timer to keep ticking unless the - * entropy loop is running. + * Note that we don't re-arm the timer in the timer itself - we are happy to be + * scheduled away, since that just makes the load more complex, but we do not + * want the timer to keep ticking unless the entropy loop is running. * * So the re-arming always happens in the entropy loop itself. */ static void __cold entropy_timer(struct timer_list *timer) { struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer); + unsigned long entropy = random_get_entropy(); - if (++state->samples == state->samples_per_bit) { + mix_pool_bytes(&entropy, sizeof(entropy)); + if (atomic_inc_return(&state->samples) % state->samples_per_bit == 0) credit_init_bits(1); - state->samples = 0; - } } /* - * If we have an actual cycle counter, see if we can - * generate enough entropy with timing noise + * If we have an actual cycle counter, see if we can generate enough entropy + * with timing noise. 
*/ static void __cold try_to_generate_entropy(void) { enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 15 }; - struct entropy_timer_state stack; + u8 stack_bytes[sizeof(struct entropy_timer_state) + SMP_CACHE_BYTES - 1]; + struct entropy_timer_state *stack = PTR_ALIGN((void *)stack_bytes, SMP_CACHE_BYTES); unsigned int i, num_different = 0; unsigned long last = random_get_entropy(); + int cpu = -1; for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) { - stack.entropy = random_get_entropy(); - if (stack.entropy != last) + stack->entropy = random_get_entropy(); + if (stack->entropy != last) ++num_different; - last = stack.entropy; + last = stack->entropy; } - stack.samples_per_bit = DIV_ROUND_UP(NUM_TRIAL_SAMPLES, num_different + 1); - if (stack.samples_per_bit > MAX_SAMPLES_PER_BIT) + stack->samples_per_bit = DIV_ROUND_UP(NUM_TRIAL_SAMPLES, num_different + 1); + if (stack->samples_per_bit > MAX_SAMPLES_PER_BIT) return; - stack.samples = 0; - timer_setup_on_stack(&stack.timer, entropy_timer, 0); + atomic_set(&stack->samples, 0); + timer_setup_on_stack(&stack->timer, entropy_timer, 0); while (!crng_ready() && !signal_pending(current)) { - if (!timer_pending(&stack.timer)) - mod_timer(&stack.timer, jiffies); - mix_pool_bytes(&stack.entropy, sizeof(stack.entropy)); + /* + * Check !timer_pending() and then ensure that any previous callback has finished + * executing by checking try_to_del_timer_sync(), before queueing the next one. + */ + if (!timer_pending(&stack->timer) && try_to_del_timer_sync(&stack->timer) >= 0) { + struct cpumask timer_cpus; + unsigned int num_cpus; + + /* + * Preemption must be disabled here, both to read the current CPU number + * and to avoid scheduling a timer on a dead CPU. + */ + preempt_disable(); + + /* Only schedule callbacks on timer CPUs that are online. */ + cpumask_and(&timer_cpus, housekeeping_cpumask(HK_TYPE_TIMER), cpu_online_mask); + num_cpus = cpumask_weight(&timer_cpus); + /* In the very bizarre case of misconfiguration, fall back to all online CPUs. */ + if (unlikely(num_cpus == 0)) { + timer_cpus = *cpu_online_mask; + num_cpus = cpumask_weight(&timer_cpus); + } + + /* Basic CPU round-robin, which avoids the current CPU. */ + do { + cpu = cpumask_next(cpu, &timer_cpus); + if (cpu == nr_cpumask_bits) + cpu = cpumask_first(&timer_cpus); + } while (cpu == smp_processor_id() && num_cpus > 1); + + /* Expiring the timer at `jiffies` means it's the next tick.
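 * Re-armed once per loop iteration, this yields at most one sample per
 * tick, i.e. up to HZ samples per second. The earlier
 * try_to_del_timer_sync() check relies on its return value being
 * negative only while the callback is still executing, so reaching this
 * point means no entropy_timer() instance can still be running when the
 * timer is queued on the CPU chosen above.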
*/ + stack->timer.expires = jiffies; + + add_timer_on(&stack->timer, cpu); + + preempt_enable(); + } + mix_pool_bytes(&stack->entropy, sizeof(stack->entropy)); schedule(); - stack.entropy = random_get_entropy(); + stack->entropy = random_get_entropy(); } + mix_pool_bytes(&stack->entropy, sizeof(stack->entropy)); - del_timer_sync(&stack.timer); - destroy_timer_on_stack(&stack.timer); - mix_pool_bytes(&stack.entropy, sizeof(stack.entropy)); + del_timer_sync(&stack->timer); + destroy_timer_on_stack(&stack->timer); } @@ -1432,7 +1525,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EPERM; if (!crng_ready()) return -ENODATA; - crng_reseed(); + crng_reseed(NULL); return 0; default: return -EINVAL; diff --git a/drivers/dma-buf/st-dma-fence-chain.c b/drivers/dma-buf/st-dma-fence-chain.c index 0a9b099d0518..c0979c8049b5 100644 --- a/drivers/dma-buf/st-dma-fence-chain.c +++ b/drivers/dma-buf/st-dma-fence-chain.c @@ -400,7 +400,7 @@ static int __find_race(void *arg) struct dma_fence *fence = dma_fence_get(data->fc.tail); int seqno; - seqno = prandom_u32_max(data->fc.chain_length) + 1; + seqno = get_random_u32_inclusive(1, data->fc.chain_length); err = dma_fence_chain_find_seqno(&fence, seqno); if (err) { @@ -429,7 +429,7 @@ static int __find_race(void *arg) dma_fence_put(fence); signal: - seqno = prandom_u32_max(data->fc.chain_length - 1); + seqno = get_random_u32_below(data->fc.chain_length - 1); dma_fence_signal(data->fc.fences[seqno]); cond_resched(); } @@ -637,7 +637,7 @@ static void randomise_fences(struct fence_chains *fc) while (--count) { unsigned int swp; - swp = prandom_u32_max(count + 1); + swp = get_random_u32_below(count + 1); if (swp == count) continue; diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index a46df5d1d094..16dae588f0e3 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -337,6 +337,24 @@ static void __init efi_debugfs_init(void) static inline void efi_debugfs_init(void) {} #endif +static void refresh_nv_rng_seed(struct work_struct *work) +{ + u8 seed[EFI_RANDOM_SEED_SIZE]; + + get_random_bytes(seed, sizeof(seed)); + efi.set_variable(L"RandomSeed", &LINUX_EFI_RANDOM_SEED_TABLE_GUID, + EFI_VARIABLE_NON_VOLATILE | EFI_VARIABLE_BOOTSERVICE_ACCESS | + EFI_VARIABLE_RUNTIME_ACCESS, sizeof(seed), seed); + memzero_explicit(seed, sizeof(seed)); +} +static int refresh_nv_rng_seed_notification(struct notifier_block *nb, unsigned long action, void *data) +{ + static DECLARE_WORK(work, refresh_nv_rng_seed); + schedule_work(&work); + return NOTIFY_DONE; +} +static struct notifier_block refresh_nv_rng_seed_nb = { .notifier_call = refresh_nv_rng_seed_notification }; + /* * We register the efi subsystem with the firmware subsystem and the * efivars subsystem with the efi subsystem, if the system was booted with @@ -413,6 +431,7 @@ static int __init efisubsys_init(void) platform_device_register_simple("efi_secret", 0, NULL, 0); #endif + execute_with_initialized_rng(&refresh_nv_rng_seed_nb); return 0; err_remove_group: diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 845023c14eb3..29d2459bcc90 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -2424,7 +2424,7 @@ gen8_dispatch_bsd_engine(struct drm_i915_private *dev_priv, /* Check whether the file_priv has already selected one ring. 
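 * A negative value means no engine has been chosen yet; in that case
 * one of the num_vcs_engines(dev_priv) video command streamers is
 * picked uniformly at random below and cached in file_priv, keeping
 * clients that never request a specific engine load-balanced across the
 * VCS instances.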
*/ if ((int)file_priv->bsd_engine < 0) file_priv->bsd_engine = - prandom_u32_max(num_vcs_engines(dev_priv)); + get_random_u32_below(num_vcs_engines(dev_priv)); return file_priv->bsd_engine; } diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index c718e6dc40b5..45b605e32c87 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -3689,7 +3689,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve) * NB This does not force us to execute on this engine, it will just * typically be the first we inspect for submission. */ - swp = prandom_u32_max(ve->num_siblings); + swp = get_random_u32_below(ve->num_siblings); if (swp) swap(ve->siblings[swp], ve->siblings[0]); } diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c index 9a4a7fb55582..b9a164efd6ae 100644 --- a/drivers/gpu/drm/i915/intel_memory_region.c +++ b/drivers/gpu/drm/i915/intel_memory_region.c @@ -38,7 +38,7 @@ static int __iopagetest(struct intel_memory_region *mem, u8 value, resource_size_t offset, const void *caller) { - int byte = prandom_u32_max(pagesize); + int byte = get_random_u32_below(pagesize); u8 result[3]; memset_io(va, value, pagesize); /* or GPF! */ @@ -92,7 +92,7 @@ static int iopagetest(struct intel_memory_region *mem, static resource_size_t random_page(resource_size_t last) { /* Limited to low 44b (16TiB), but should suffice for a spot check */ - return prandom_u32_max(last >> PAGE_SHIFT) << PAGE_SHIFT; + return get_random_u32_below(last >> PAGE_SHIFT) << PAGE_SHIFT; } static int iomemtest(struct intel_memory_region *mem, diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c index 26d1772179b8..aacd6254df77 100644 --- a/drivers/infiniband/core/cma.c +++ b/drivers/infiniband/core/cma.c @@ -3807,7 +3807,7 @@ static int cma_alloc_any_port(enum rdma_ucm_port_space ps, inet_get_local_port_range(net, &low, &high); remaining = (high - low) + 1; - rover = prandom_u32_max(remaining) + low; + rover = get_random_u32_inclusive(low, remaining + low - 1); retry: if (last_used_port != rover) { struct rdma_bind_list *bind_list; diff --git a/drivers/infiniband/hw/cxgb4/id_table.c b/drivers/infiniband/hw/cxgb4/id_table.c index 280d61466855..e2188b335e76 100644 --- a/drivers/infiniband/hw/cxgb4/id_table.c +++ b/drivers/infiniband/hw/cxgb4/id_table.c @@ -54,7 +54,7 @@ u32 c4iw_id_alloc(struct c4iw_id_table *alloc) if (obj < alloc->max) { if (alloc->flags & C4IW_ID_TABLE_F_RANDOM) - alloc->last += prandom_u32_max(RANDOM_SKIP); + alloc->last += get_random_u32_below(RANDOM_SKIP); else alloc->last = obj + 1; if (alloc->last >= alloc->max) @@ -85,7 +85,7 @@ int c4iw_id_table_alloc(struct c4iw_id_table *alloc, u32 start, u32 num, alloc->start = start; alloc->flags = flags; if (flags & C4IW_ID_TABLE_F_RANDOM) - alloc->last = prandom_u32_max(RANDOM_SKIP); + alloc->last = get_random_u32_below(RANDOM_SKIP); else alloc->last = 0; alloc->max = num; diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c index 480c062dd04f..e77fcc74f15c 100644 --- a/drivers/infiniband/hw/hns/hns_roce_ah.c +++ b/drivers/infiniband/hw/hns/hns_roce_ah.c @@ -41,9 +41,8 @@ static inline u16 get_ah_udp_sport(const struct rdma_ah_attr *ah_attr) u16 sport; if (!fl) - sport = prandom_u32_max(IB_ROCE_UDP_ENCAP_VALID_PORT_MAX + 1 - - IB_ROCE_UDP_ENCAP_VALID_PORT_MIN) + - IB_ROCE_UDP_ENCAP_VALID_PORT_MIN; + sport = 
get_random_u32_inclusive(IB_ROCE_UDP_ENCAP_VALID_PORT_MIN, + IB_ROCE_UDP_ENCAP_VALID_PORT_MAX); else sport = rdma_flow_label_to_udp_sport(fl); diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c index 8546b8816524..ab75b690ad08 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c @@ -1517,7 +1517,7 @@ static void rtrs_clt_err_recovery_work(struct work_struct *work) rtrs_clt_stop_and_destroy_conns(clt_path); queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork, msecs_to_jiffies(delay_ms + - prandom_u32_max(RTRS_RECONNECT_SEED))); + get_random_u32_below(RTRS_RECONNECT_SEED))); } static struct rtrs_clt_path *alloc_path(struct rtrs_clt_sess *clt, diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index 3427555b0cca..32e21ba64357 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -401,7 +401,7 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio) } if (bypass_torture_test(dc)) { - if (prandom_u32_max(4) == 3) + if (get_random_u32_below(4) == 3) goto skip; else goto rescale; diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c index 303d02b1d71c..a366566f22c3 100644 --- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c +++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c @@ -872,7 +872,7 @@ static void precalculate_color(struct tpg_data *tpg, int k) } else if (tpg->pattern == TPG_PAT_NOISE) { r = g = b = get_random_u8(); } else if (k == TPG_COLOR_RANDOM) { - r = g = b = tpg->qual_offset + prandom_u32_max(196); + r = g = b = tpg->qual_offset + get_random_u32_below(196); } else if (k >= TPG_COLOR_RAMP) { r = g = b = k - TPG_COLOR_RAMP; } @@ -2286,7 +2286,7 @@ static void tpg_fill_params_extras(const struct tpg_data *tpg, params->wss_width = tpg->crop.width; params->wss_width = tpg_hscale_div(tpg, p, params->wss_width); params->wss_random_offset = - params->twopixsize * prandom_u32_max(tpg->src_width / 2); + params->twopixsize * get_random_u32_below(tpg->src_width / 2); if (tpg->crop.left < tpg->border.left) { left_pillar_width = tpg->border.left - tpg->crop.left; @@ -2495,9 +2495,9 @@ static void tpg_fill_plane_pattern(const struct tpg_data *tpg, linestart_newer = tpg->black_line[p]; } else if (tpg->pattern == TPG_PAT_NOISE || tpg->qual == TPG_QUAL_NOISE) { linestart_older = tpg->random_line[p] + - twopixsize * prandom_u32_max(tpg->src_width / 2); + twopixsize * get_random_u32_below(tpg->src_width / 2); linestart_newer = tpg->random_line[p] + - twopixsize * prandom_u32_max(tpg->src_width / 2); + twopixsize * get_random_u32_below(tpg->src_width / 2); } else { unsigned frame_line_old = (frame_line + mv_vert_old) % tpg->src_height; diff --git a/drivers/media/test-drivers/vidtv/vidtv_demod.c b/drivers/media/test-drivers/vidtv/vidtv_demod.c index e7959ab1add8..d60c6d16beea 100644 --- a/drivers/media/test-drivers/vidtv/vidtv_demod.c +++ b/drivers/media/test-drivers/vidtv/vidtv_demod.c @@ -188,11 +188,11 @@ static void vidtv_demod_update_stats(struct dvb_frontend *fe) * Also, usually, signal strength is a negative number in dBm. 
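 * The jitter below subtracts up to tuner_cnr / 50, i.e. at most 2% of
 * the nominal carrier-to-noise figure, and the fixed -68000 offset then
 * shifts the result into a plausible negative range (DVB v5 statistics
 * are expressed in 0.001 dB units, so this is a 68 dB shift).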
*/ c->strength.stat[0].svalue = state->tuner_cnr; - c->strength.stat[0].svalue -= prandom_u32_max(state->tuner_cnr / 50); + c->strength.stat[0].svalue -= get_random_u32_below(state->tuner_cnr / 50); c->strength.stat[0].svalue -= 68000; /* Adjust to a better range */ c->cnr.stat[0].svalue = state->tuner_cnr; - c->cnr.stat[0].svalue -= prandom_u32_max(state->tuner_cnr / 50); + c->cnr.stat[0].svalue -= get_random_u32_below(state->tuner_cnr / 50); } static int vidtv_demod_read_status(struct dvb_frontend *fe, @@ -213,11 +213,11 @@ static int vidtv_demod_read_status(struct dvb_frontend *fe, if (snr < cnr2qual->cnr_ok) { /* eventually lose the TS lock */ - if (prandom_u32_max(100) < config->drop_tslock_prob_on_low_snr) + if (get_random_u32_below(100) < config->drop_tslock_prob_on_low_snr) state->status = 0; } else { /* recover if the signal improves */ - if (prandom_u32_max(100) < + if (get_random_u32_below(100) < config->recover_tslock_prob_on_good_snr) state->status = FE_HAS_SIGNAL | FE_HAS_CARRIER | diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-cap.c b/drivers/media/test-drivers/vivid/vivid-kthread-cap.c index 690daada7db4..ee65d20314d3 100644 --- a/drivers/media/test-drivers/vivid/vivid-kthread-cap.c +++ b/drivers/media/test-drivers/vivid/vivid-kthread-cap.c @@ -693,7 +693,7 @@ static noinline_for_stack void vivid_thread_vid_cap_tick(struct vivid_dev *dev, /* Drop a certain percentage of buffers. */ if (dev->perc_dropped_buffers && - prandom_u32_max(100) < dev->perc_dropped_buffers) + get_random_u32_below(100) < dev->perc_dropped_buffers) goto update_mv; spin_lock(&dev->slock); diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-out.c b/drivers/media/test-drivers/vivid/vivid-kthread-out.c index 0833e021bb11..fac6208b51da 100644 --- a/drivers/media/test-drivers/vivid/vivid-kthread-out.c +++ b/drivers/media/test-drivers/vivid/vivid-kthread-out.c @@ -51,7 +51,7 @@ static void vivid_thread_vid_out_tick(struct vivid_dev *dev) /* Drop a certain percentage of buffers. */ if (dev->perc_dropped_buffers && - prandom_u32_max(100) < dev->perc_dropped_buffers) + get_random_u32_below(100) < dev->perc_dropped_buffers) return; spin_lock(&dev->slock); diff --git a/drivers/media/test-drivers/vivid/vivid-radio-rx.c b/drivers/media/test-drivers/vivid/vivid-radio-rx.c index 8bd09589fb15..79c1723bd84c 100644 --- a/drivers/media/test-drivers/vivid/vivid-radio-rx.c +++ b/drivers/media/test-drivers/vivid/vivid-radio-rx.c @@ -94,8 +94,8 @@ retry: if (data_blk == 0 && dev->radio_rds_loop) vivid_radio_rds_init(dev); - if (perc && prandom_u32_max(100) < perc) { - switch (prandom_u32_max(4)) { + if (perc && get_random_u32_below(100) < perc) { + switch (get_random_u32_below(4)) { case 0: rds.block |= V4L2_RDS_BLOCK_CORRECTED; break; diff --git a/drivers/media/test-drivers/vivid/vivid-sdr-cap.c b/drivers/media/test-drivers/vivid/vivid-sdr-cap.c index 0ae5628b86c9..a81f26b76988 100644 --- a/drivers/media/test-drivers/vivid/vivid-sdr-cap.c +++ b/drivers/media/test-drivers/vivid/vivid-sdr-cap.c @@ -90,7 +90,7 @@ static void vivid_thread_sdr_cap_tick(struct vivid_dev *dev) /* Drop a certain percentage of buffers. 
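 * This is the standard idiom for an event with probability perc / 100
 * (sketch):
 *
 *   if (get_random_u32_below(100) < perc)
 *           goto drop;
 *
 * get_random_u32_below(100) is uniform over [0, 99], so the comparison
 * holds in exactly perc of the 100 equally likely outcomes.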
*/ if (dev->perc_dropped_buffers && - prandom_u32_max(100) < dev->perc_dropped_buffers) + get_random_u32_below(100) < dev->perc_dropped_buffers) return; spin_lock(&dev->slock); diff --git a/drivers/media/test-drivers/vivid/vivid-touch-cap.c b/drivers/media/test-drivers/vivid/vivid-touch-cap.c index 6cc32eb54f9d..c7f6e23df51e 100644 --- a/drivers/media/test-drivers/vivid/vivid-touch-cap.c +++ b/drivers/media/test-drivers/vivid/vivid-touch-cap.c @@ -221,7 +221,7 @@ static void vivid_fill_buff_noise(__s16 *tch_buf, int size) static inline int get_random_pressure(void) { - return prandom_u32_max(VIVID_PRESSURE_LIMIT); + return get_random_u32_below(VIVID_PRESSURE_LIMIT); } static void vivid_tch_buf_set(struct v4l2_pix_format *f, diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index de1cc9e1ae57..f0d19356ad76 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -97,8 +97,8 @@ static void mmc_should_fail_request(struct mmc_host *host, !should_fail(&host->fail_mmc_request, data->blksz * data->blocks)) return; - data->error = data_errors[prandom_u32_max(ARRAY_SIZE(data_errors))]; - data->bytes_xfered = prandom_u32_max(data->bytes_xfered >> 9) << 9; + data->error = data_errors[get_random_u32_below(ARRAY_SIZE(data_errors))]; + data->bytes_xfered = get_random_u32_below(data->bytes_xfered >> 9) << 9; } #else /* CONFIG_FAIL_MMC_REQUEST */ diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c index c78bbc22e0d1..6ef410053037 100644 --- a/drivers/mmc/host/dw_mmc.c +++ b/drivers/mmc/host/dw_mmc.c @@ -1858,7 +1858,7 @@ static void dw_mci_start_fault_timer(struct dw_mci *host) * Try to inject the error at random points during the data transfer. */ hrtimer_start(&host->fault_timer, - ms_to_ktime(prandom_u32_max(25)), + ms_to_ktime(get_random_u32_below(25)), HRTIMER_MODE_REL); } diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c index 672719023241..c21abf748948 100644 --- a/drivers/mtd/nand/raw/nandsim.c +++ b/drivers/mtd/nand/raw/nandsim.c @@ -1405,9 +1405,9 @@ static void ns_do_bit_flips(struct nandsim *ns, int num) if (bitflips && get_random_u16() < (1 << 6)) { int flips = 1; if (bitflips > 1) - flips = prandom_u32_max(bitflips) + 1; + flips = get_random_u32_inclusive(1, bitflips); while (flips--) { - int pos = prandom_u32_max(num * 8); + int pos = get_random_u32_below(num * 8); ns->buf.byte[pos / 8] ^= (1 << (pos % 8)); NS_WARN("read_page: flipping bit %d in page %d " "reading from %d ecc: corrected=%u failed=%u\n", diff --git a/drivers/mtd/tests/mtd_nandecctest.c b/drivers/mtd/tests/mtd_nandecctest.c index 440988562cfd..824cc1c03b6a 100644 --- a/drivers/mtd/tests/mtd_nandecctest.c +++ b/drivers/mtd/tests/mtd_nandecctest.c @@ -47,7 +47,7 @@ struct nand_ecc_test { static void single_bit_error_data(void *error_data, void *correct_data, size_t size) { - unsigned int offset = prandom_u32_max(size * BITS_PER_BYTE); + unsigned int offset = get_random_u32_below(size * BITS_PER_BYTE); memcpy(error_data, correct_data, size); __change_bit_le(offset, error_data); @@ -58,9 +58,9 @@ static void double_bit_error_data(void *error_data, void *correct_data, { unsigned int offset[2]; - offset[0] = prandom_u32_max(size * BITS_PER_BYTE); + offset[0] = get_random_u32_below(size * BITS_PER_BYTE); do { - offset[1] = prandom_u32_max(size * BITS_PER_BYTE); + offset[1] = get_random_u32_below(size * BITS_PER_BYTE); } while (offset[0] == offset[1]); memcpy(error_data, correct_data, size); @@ -71,7 +71,7 @@ static void double_bit_error_data(void *error_data, void 
*correct_data, static unsigned int random_ecc_bit(size_t size) { - unsigned int offset = prandom_u32_max(3 * BITS_PER_BYTE); + unsigned int offset = get_random_u32_below(3 * BITS_PER_BYTE); if (size == 256) { /* @@ -79,7 +79,7 @@ static unsigned int random_ecc_bit(size_t size) * and 17th bit) in ECC code for 256 byte data block */ while (offset == 16 || offset == 17) - offset = prandom_u32_max(3 * BITS_PER_BYTE); + offset = get_random_u32_below(3 * BITS_PER_BYTE); } return offset; diff --git a/drivers/mtd/tests/stresstest.c b/drivers/mtd/tests/stresstest.c index 75b6ddc5dc4d..8062098930d6 100644 --- a/drivers/mtd/tests/stresstest.c +++ b/drivers/mtd/tests/stresstest.c @@ -46,7 +46,7 @@ static int rand_eb(void) again: /* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */ - eb = prandom_u32_max(ebcnt - 1); + eb = get_random_u32_below(ebcnt - 1); if (bbt[eb]) goto again; return eb; @@ -54,12 +54,12 @@ again: static int rand_offs(void) { - return prandom_u32_max(bufsize); + return get_random_u32_below(bufsize); } static int rand_len(int offs) { - return prandom_u32_max(bufsize - offs); + return get_random_u32_below(bufsize - offs); } static int do_read(void) @@ -118,7 +118,7 @@ static int do_write(void) static int do_operation(void) { - if (prandom_u32_max(2)) + if (get_random_u32_below(2)) return do_read(); else return do_write(); diff --git a/drivers/mtd/ubi/debug.c b/drivers/mtd/ubi/debug.c index 908d0e088557..fcca6942dbdd 100644 --- a/drivers/mtd/ubi/debug.c +++ b/drivers/mtd/ubi/debug.c @@ -590,7 +590,7 @@ int ubi_dbg_power_cut(struct ubi_device *ubi, int caller) if (ubi->dbg.power_cut_max > ubi->dbg.power_cut_min) { range = ubi->dbg.power_cut_max - ubi->dbg.power_cut_min; - ubi->dbg.power_cut_counter += prandom_u32_max(range); + ubi->dbg.power_cut_counter += get_random_u32_below(range); } return 0; } diff --git a/drivers/mtd/ubi/debug.h b/drivers/mtd/ubi/debug.h index dc8d8f83657a..23676f32b681 100644 --- a/drivers/mtd/ubi/debug.h +++ b/drivers/mtd/ubi/debug.h @@ -73,7 +73,7 @@ static inline int ubi_dbg_is_bgt_disabled(const struct ubi_device *ubi) static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi) { if (ubi->dbg.emulate_bitflips) - return !prandom_u32_max(200); + return !get_random_u32_below(200); return 0; } @@ -87,7 +87,7 @@ static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi) static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi) { if (ubi->dbg.emulate_io_failures) - return !prandom_u32_max(500); + return !get_random_u32_below(500); return 0; } @@ -101,7 +101,7 @@ static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi) static inline int ubi_dbg_is_erase_failure(const struct ubi_device *ubi) { if (ubi->dbg.emulate_io_failures) - return !prandom_u32_max(400); + return !get_random_u32_below(400); return 0; } diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c index 2198e35d9e18..74bc053a2078 100644 --- a/drivers/net/ethernet/broadcom/cnic.c +++ b/drivers/net/ethernet/broadcom/cnic.c @@ -4105,7 +4105,7 @@ static int cnic_cm_alloc_mem(struct cnic_dev *dev) for (i = 0; i < MAX_CM_SK_TBL_SZ; i++) atomic_set(&cp->csk_tbl[i].ref_count, 0); - port_id = prandom_u32_max(CNIC_LOCAL_PORT_RANGE); + port_id = get_random_u32_below(CNIC_LOCAL_PORT_RANGE); if (cnic_init_id_tbl(&cp->csk_port_tbl, CNIC_LOCAL_PORT_RANGE, CNIC_LOCAL_PORT_MIN, port_id)) { cnic_cm_free_mem(dev); diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c 
b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c index a4256087ac82..ae6b17b96bf1 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c @@ -919,8 +919,8 @@ static int csk_wait_memory(struct chtls_dev *cdev, current_timeo = *timeo_p; noblock = (*timeo_p ? false : true); if (csk_mem_free(cdev, sk)) { - current_timeo = prandom_u32_max(HZ / 5) + 2; - vm_wait = prandom_u32_max(HZ / 5) + 2; + current_timeo = get_random_u32_below(HZ / 5) + 2; + vm_wait = get_random_u32_below(HZ / 5) + 2; } add_wait_queue(sk_sleep(sk), &wait); diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c index d49965907561..22f4458274aa 100644 --- a/drivers/net/phy/at803x.c +++ b/drivers/net/phy/at803x.c @@ -1760,7 +1760,7 @@ static int qca808x_phy_fast_retrain_config(struct phy_device *phydev) static int qca808x_phy_ms_random_seed_set(struct phy_device *phydev) { - u16 seed_value = prandom_u32_max(QCA808X_MASTER_SLAVE_SEED_RANGE); + u16 seed_value = get_random_u32_below(QCA808X_MASTER_SLAVE_SEED_RANGE); return at803x_debug_reg_mask(phydev, QCA808X_PHY_DEBUG_LOCAL_SEED, QCA808X_MASTER_SLAVE_SEED_CFG, diff --git a/drivers/net/team/team_mode_random.c b/drivers/net/team/team_mode_random.c index f3f8dd428402..53d0ce34b8ce 100644 --- a/drivers/net/team/team_mode_random.c +++ b/drivers/net/team/team_mode_random.c @@ -16,7 +16,7 @@ static bool rnd_transmit(struct team *team, struct sk_buff *skb) struct team_port *port; int port_index; - port_index = prandom_u32_max(team->en_port_count); + port_index = get_random_u32_below(team->en_port_count); port = team_get_port_by_index_rcu(team, port_index); if (unlikely(!port)) goto drop; diff --git a/drivers/net/wireguard/selftest/allowedips.c b/drivers/net/wireguard/selftest/allowedips.c index 19eac00b2381..78ebe2892a78 100644 --- a/drivers/net/wireguard/selftest/allowedips.c +++ b/drivers/net/wireguard/selftest/allowedips.c @@ -285,8 +285,8 @@ static __init bool randomized_test(void) for (i = 0; i < NUM_RAND_ROUTES; ++i) { get_random_bytes(ip, 4); - cidr = prandom_u32_max(32) + 1; - peer = peers[prandom_u32_max(NUM_PEERS)]; + cidr = get_random_u32_inclusive(1, 32); + peer = peers[get_random_u32_below(NUM_PEERS)]; if (wg_allowedips_insert_v4(&t, (struct in_addr *)ip, cidr, peer, &mutex) < 0) { pr_err("allowedips random self-test malloc: FAIL\n"); @@ -300,7 +300,7 @@ static __init bool randomized_test(void) for (j = 0; j < NUM_MUTATED_ROUTES; ++j) { memcpy(mutated, ip, 4); get_random_bytes(mutate_mask, 4); - mutate_amount = prandom_u32_max(32); + mutate_amount = get_random_u32_below(32); for (k = 0; k < mutate_amount / 8; ++k) mutate_mask[k] = 0xff; mutate_mask[k] = 0xff @@ -311,8 +311,8 @@ static __init bool randomized_test(void) mutated[k] = (mutated[k] & mutate_mask[k]) | (~mutate_mask[k] & get_random_u8()); - cidr = prandom_u32_max(32) + 1; - peer = peers[prandom_u32_max(NUM_PEERS)]; + cidr = get_random_u32_inclusive(1, 32); + peer = peers[get_random_u32_below(NUM_PEERS)]; if (wg_allowedips_insert_v4(&t, (struct in_addr *)mutated, cidr, peer, &mutex) < 0) { @@ -329,8 +329,8 @@ static __init bool randomized_test(void) for (i = 0; i < NUM_RAND_ROUTES; ++i) { get_random_bytes(ip, 16); - cidr = prandom_u32_max(128) + 1; - peer = peers[prandom_u32_max(NUM_PEERS)]; + cidr = get_random_u32_inclusive(1, 128); + peer = peers[get_random_u32_below(NUM_PEERS)]; if (wg_allowedips_insert_v6(&t, (struct in6_addr *)ip, cidr, peer, &mutex) < 0) { pr_err("allowedips random self-test malloc: 
FAIL\n"); @@ -344,7 +344,7 @@ static __init bool randomized_test(void) for (j = 0; j < NUM_MUTATED_ROUTES; ++j) { memcpy(mutated, ip, 16); get_random_bytes(mutate_mask, 16); - mutate_amount = prandom_u32_max(128); + mutate_amount = get_random_u32_below(128); for (k = 0; k < mutate_amount / 8; ++k) mutate_mask[k] = 0xff; mutate_mask[k] = 0xff @@ -355,8 +355,8 @@ static __init bool randomized_test(void) mutated[k] = (mutated[k] & mutate_mask[k]) | (~mutate_mask[k] & get_random_u8()); - cidr = prandom_u32_max(128) + 1; - peer = peers[prandom_u32_max(NUM_PEERS)]; + cidr = get_random_u32_inclusive(1, 128); + peer = peers[get_random_u32_below(NUM_PEERS)]; if (wg_allowedips_insert_v6(&t, (struct in6_addr *)mutated, cidr, peer, &mutex) < 0) { diff --git a/drivers/net/wireguard/timers.c b/drivers/net/wireguard/timers.c index d54d32ac9bc4..b5706b6718b1 100644 --- a/drivers/net/wireguard/timers.c +++ b/drivers/net/wireguard/timers.c @@ -147,7 +147,7 @@ void wg_timers_data_sent(struct wg_peer *peer) if (!timer_pending(&peer->timer_new_handshake)) mod_peer_timer(peer, &peer->timer_new_handshake, jiffies + (KEEPALIVE_TIMEOUT + REKEY_TIMEOUT) * HZ + - prandom_u32_max(REKEY_TIMEOUT_JITTER_MAX_JIFFIES)); + get_random_u32_below(REKEY_TIMEOUT_JITTER_MAX_JIFFIES)); } /* Should be called after an authenticated data packet is received. */ @@ -183,7 +183,7 @@ void wg_timers_handshake_initiated(struct wg_peer *peer) { mod_peer_timer(peer, &peer->timer_retransmit_handshake, jiffies + REKEY_TIMEOUT * HZ + - prandom_u32_max(REKEY_TIMEOUT_JITTER_MAX_JIFFIES)); + get_random_u32_below(REKEY_TIMEOUT_JITTER_MAX_JIFFIES)); } /* Should be called after a handshake response message is received and processed diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c index 10d9d9c63b28..c704ca752138 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c @@ -1128,7 +1128,7 @@ static void brcmf_p2p_afx_handler(struct work_struct *work) if (afx_hdl->is_listen && afx_hdl->my_listen_chan) /* 100ms ~ 300ms */ err = brcmf_p2p_discover_listen(p2p, afx_hdl->my_listen_chan, - 100 * (1 + prandom_u32_max(3))); + 100 * get_random_u32_inclusive(1, 3)); else err = brcmf_p2p_act_frm_search(p2p, afx_hdl->peer_listen_chan); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c index de0c545d50fd..3a7a44bb3c60 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c @@ -1099,7 +1099,7 @@ static void iwl_mvm_mac_ctxt_cmd_fill_ap(struct iwl_mvm *mvm, iwl_mvm_mac_ap_iterator, &data); if (data.beacon_device_ts) { - u32 rand = prandom_u32_max(64 - 36) + 36; + u32 rand = get_random_u32_inclusive(36, 63); mvmvif->ap_beacon_time = data.beacon_device_ts + ieee80211_tu_to_usec(data.beacon_int * rand / 100); diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 88dc66ee1c46..5565f67d6537 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -673,7 +673,7 @@ struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients) } if (dev_cnt) - pdev = pci_dev_get(closest_pdevs[prandom_u32_max(dev_cnt)]); + pdev = pci_dev_get(closest_pdevs[get_random_u32_below(dev_cnt)]); for (i = 0; i < dev_cnt; i++) pci_dev_put(closest_pdevs[i]); diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c index 77917b339870..f21307537829 100644 --- a/drivers/s390/scsi/zfcp_fc.c 
+++ b/drivers/s390/scsi/zfcp_fc.c @@ -48,7 +48,7 @@ unsigned int zfcp_fc_port_scan_backoff(void) { if (!port_scan_backoff) return 0; - return prandom_u32_max(port_scan_backoff); + return get_random_u32_below(port_scan_backoff); } static void zfcp_fc_port_scan_time(struct zfcp_adapter *adapter) diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c index ddc048069af2..5c8d1ba3f8f3 100644 --- a/drivers/scsi/fcoe/fcoe_ctlr.c +++ b/drivers/scsi/fcoe/fcoe_ctlr.c @@ -2233,7 +2233,7 @@ static void fcoe_ctlr_vn_restart(struct fcoe_ctlr *fip) if (fip->probe_tries < FIP_VN_RLIM_COUNT) { fip->probe_tries++; - wait = prandom_u32_max(FIP_VN_PROBE_WAIT); + wait = get_random_u32_below(FIP_VN_PROBE_WAIT); } else wait = FIP_VN_RLIM_INT; mod_timer(&fip->timer, jiffies + msecs_to_jiffies(wait)); @@ -3125,7 +3125,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip) fcoe_all_vn2vn, 0); fip->port_ka_time = jiffies + msecs_to_jiffies(FIP_VN_BEACON_INT + - prandom_u32_max(FIP_VN_BEACON_FUZZ)); + get_random_u32_below(FIP_VN_BEACON_FUZZ)); } if (time_before(fip->port_ka_time, next_time)) next_time = fip->port_ka_time; diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index df2fe7bd26d1..f2ee49756df8 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -618,7 +618,7 @@ static int qedi_cm_alloc_mem(struct qedi_ctx *qedi) sizeof(struct qedi_endpoint *)), GFP_KERNEL); if (!qedi->ep_tbl) return -ENOMEM; - port_id = prandom_u32_max(QEDI_LOCAL_PORT_RANGE); + port_id = get_random_u32_below(QEDI_LOCAL_PORT_RANGE); if (qedi_init_id_tbl(&qedi->lcl_port_tbl, QEDI_LOCAL_PORT_RANGE, QEDI_LOCAL_PORT_MIN, port_id)) { qedi_cm_free_mem(qedi); diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index bebda917b138..a0797101a8a0 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -5702,16 +5702,16 @@ static int schedule_resp(struct scsi_cmnd *cmnd, struct sdebug_dev_info *devip, u64 ns = jiffies_to_nsecs(delta_jiff); if (sdebug_random && ns < U32_MAX) { - ns = prandom_u32_max((u32)ns); + ns = get_random_u32_below((u32)ns); } else if (sdebug_random) { ns >>= 12; /* scale to 4 usec precision */ if (ns < U32_MAX) /* over 4 hours max */ - ns = prandom_u32_max((u32)ns); + ns = get_random_u32_below((u32)ns); ns <<= 12; } kt = ns_to_ktime(ns); } else { /* ndelay has a 4.2 second max */ - kt = sdebug_random ? prandom_u32_max((u32)ndelay) : + kt = sdebug_random ? 
get_random_u32_below((u32)ndelay) : (u32)ndelay; if (ndelay < INCLUSIVE_TIMING_MAX_NS) { u64 d = ktime_get_boottime_ns() - ns_from_boot; diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c index bad9eeb6a1a5..d53399966a2d 100644 --- a/fs/ceph/inode.c +++ b/fs/ceph/inode.c @@ -362,7 +362,7 @@ static int ceph_fill_fragtree(struct inode *inode, if (nsplits != ci->i_fragtree_nsplits) { update = true; } else if (nsplits) { - i = prandom_u32_max(nsplits); + i = get_random_u32_below(nsplits); id = le32_to_cpu(fragtree->splits[i].frag); if (!__ceph_find_frag(ci, id)) update = true; diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c index 3fbabc98e1f7..7dac21ee6ce7 100644 --- a/fs/ceph/mdsmap.c +++ b/fs/ceph/mdsmap.c @@ -29,7 +29,7 @@ static int __mdsmap_get_random_mds(struct ceph_mdsmap *m, bool ignore_laggy) return -1; /* pick */ - n = prandom_u32_max(n); + n = get_random_u32_below(n); for (j = 0, i = 0; i < m->possible_max_rank; i++) { if (CEPH_MDS_IS_READY(i, ignore_laggy)) j++; diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c index f4944c4dee60..78b8686d9a4a 100644 --- a/fs/ext2/ialloc.c +++ b/fs/ext2/ialloc.c @@ -277,7 +277,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent) int best_ndir = inodes_per_group; int best_group = -1; - parent_group = prandom_u32_max(ngroups); + parent_group = get_random_u32_below(ngroups); for (i = 0; i < ngroups; i++) { group = (parent_group + i) % ngroups; desc = ext2_get_group_desc (sb, group, NULL); diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c index e9bc46684106..9fc1af8e19a3 100644 --- a/fs/ext4/ialloc.c +++ b/fs/ext4/ialloc.c @@ -465,7 +465,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent, ext4fs_dirhash(parent, qstr->name, qstr->len, &hinfo); parent_group = hinfo.hash % ngroups; } else - parent_group = prandom_u32_max(ngroups); + parent_group = get_random_u32_below(ngroups); for (i = 0; i < ngroups; i++) { g = (parent_group + i) % ngroups; get_orlov_stats(sb, g, flex_size, &stats); diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c index 588cb09c5291..4681fff6665f 100644 --- a/fs/ext4/mmp.c +++ b/fs/ext4/mmp.c @@ -262,13 +262,7 @@ void ext4_stop_mmpd(struct ext4_sb_info *sbi) */ static unsigned int mmp_new_seq(void) { - u32 new_seq; - - do { - new_seq = get_random_u32(); - } while (new_seq > EXT4_MMP_SEQ_MAX); - - return new_seq; + return get_random_u32_below(EXT4_MMP_SEQ_MAX + 1); } /* diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 7cdd2138c897..63ef74eb8091 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -3778,7 +3778,7 @@ cont_thread: } if (!progress) { elr->lr_next_sched = jiffies + - prandom_u32_max(EXT4_DEF_LI_MAX_START_DELAY * HZ); + get_random_u32_below(EXT4_DEF_LI_MAX_START_DELAY * HZ); } if (time_before(elr->lr_next_sched, next_wakeup)) next_wakeup = elr->lr_next_sched; @@ -3925,8 +3925,7 @@ static struct ext4_li_request *ext4_li_request_new(struct super_block *sb, * spread the inode table initialization requests * better. 
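 * The random delay drawn below is uniform over
 * [0, EXT4_DEF_LI_MAX_START_DELAY * HZ) jiffies, which staggers the
 * lazy-init threads of file systems mounted around the same time so
 * their inode table zeroing does not hit the disks in lockstep.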
*/ - elr->lr_next_sched = jiffies + prandom_u32_max( - EXT4_DEF_LI_MAX_START_DELAY * HZ); + elr->lr_next_sched = jiffies + get_random_u32_below(EXT4_DEF_LI_MAX_START_DELAY * HZ); return elr; } diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index 4546e01b2ee0..536d332d9e2e 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -282,7 +282,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type, /* let's select beginning hot/small space first in no_heap mode*/ if (f2fs_need_rand_seg(sbi)) - p->offset = prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec); + p->offset = get_random_u32_below(MAIN_SECS(sbi) * sbi->segs_per_sec); else if (test_opt(sbi, NOHEAP) && (type == CURSEG_HOT_DATA || IS_NODESEG(type))) p->offset = 0; diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index acf3d3fa4363..b304692c0cf5 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -2534,7 +2534,7 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type) sanity_check_seg_type(sbi, seg_type); if (f2fs_need_rand_seg(sbi)) - return prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec); + return get_random_u32_below(MAIN_SECS(sbi) * sbi->segs_per_sec); /* if segs_per_sec is large than 1, we need to keep original policy. */ if (__is_large_section(sbi)) @@ -2588,7 +2588,7 @@ static void new_curseg(struct f2fs_sb_info *sbi, int type, bool new_sec) curseg->alloc_type = LFS; if (F2FS_OPTION(sbi).fs_mode == FS_MODE_FRAGMENT_BLK) curseg->fragment_remained_chunk = - prandom_u32_max(sbi->max_fragment_chunk) + 1; + get_random_u32_inclusive(1, sbi->max_fragment_chunk); } static int __next_free_blkoff(struct f2fs_sb_info *sbi, @@ -2625,9 +2625,9 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi, /* To allocate block chunks in different sizes, use random number */ if (--seg->fragment_remained_chunk <= 0) { seg->fragment_remained_chunk = - prandom_u32_max(sbi->max_fragment_chunk) + 1; + get_random_u32_inclusive(1, sbi->max_fragment_chunk); seg->next_blkoff += - prandom_u32_max(sbi->max_fragment_hole) + 1; + get_random_u32_inclusive(1, sbi->max_fragment_hole); } } } diff --git a/fs/ubifs/debug.c b/fs/ubifs/debug.c index 3f128b9fdfbb..9c9d3f0e36a4 100644 --- a/fs/ubifs/debug.c +++ b/fs/ubifs/debug.c @@ -2467,7 +2467,7 @@ error_dump: static inline int chance(unsigned int n, unsigned int out_of) { - return !!(prandom_u32_max(out_of) + 1 <= n); + return !!(get_random_u32_below(out_of) + 1 <= n); } @@ -2485,13 +2485,13 @@ static int power_cut_emulated(struct ubifs_info *c, int lnum, int write) if (chance(1, 2)) { d->pc_delay = 1; /* Fail within 1 minute */ - delay = prandom_u32_max(60000); + delay = get_random_u32_below(60000); d->pc_timeout = jiffies; d->pc_timeout += msecs_to_jiffies(delay); ubifs_warn(c, "failing after %lums", delay); } else { d->pc_delay = 2; - delay = prandom_u32_max(10000); + delay = get_random_u32_below(10000); /* Fail within 10000 operations */ d->pc_cnt_max = delay; ubifs_warn(c, "failing after %lu calls", delay); @@ -2571,7 +2571,7 @@ static int corrupt_data(const struct ubifs_info *c, const void *buf, unsigned int from, to, ffs = chance(1, 2); unsigned char *p = (void *)buf; - from = prandom_u32_max(len); + from = get_random_u32_below(len); /* Corruption span max to end of write unit */ to = min(len, ALIGN(from + 1, c->max_write_size)); diff --git a/fs/ubifs/lpt_commit.c b/fs/ubifs/lpt_commit.c index cfbc31f709f4..c4d079328b92 100644 --- a/fs/ubifs/lpt_commit.c +++ b/fs/ubifs/lpt_commit.c @@ -1970,28 +1970,28 @@ static int dbg_populate_lsave(struct ubifs_info *c) if 
(!dbg_is_chk_gen(c)) return 0; - if (prandom_u32_max(4)) + if (get_random_u32_below(4)) return 0; for (i = 0; i < c->lsave_cnt; i++) c->lsave[i] = c->main_first; list_for_each_entry(lprops, &c->empty_list, list) - c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum; + c->lsave[get_random_u32_below(c->lsave_cnt)] = lprops->lnum; list_for_each_entry(lprops, &c->freeable_list, list) - c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum; + c->lsave[get_random_u32_below(c->lsave_cnt)] = lprops->lnum; list_for_each_entry(lprops, &c->frdi_idx_list, list) - c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum; + c->lsave[get_random_u32_below(c->lsave_cnt)] = lprops->lnum; heap = &c->lpt_heap[LPROPS_DIRTY_IDX - 1]; for (i = 0; i < heap->cnt; i++) - c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum; + c->lsave[get_random_u32_below(c->lsave_cnt)] = heap->arr[i]->lnum; heap = &c->lpt_heap[LPROPS_DIRTY - 1]; for (i = 0; i < heap->cnt; i++) - c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum; + c->lsave[get_random_u32_below(c->lsave_cnt)] = heap->arr[i]->lnum; heap = &c->lpt_heap[LPROPS_FREE - 1]; for (i = 0; i < heap->cnt; i++) - c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum; + c->lsave[get_random_u32_below(c->lsave_cnt)] = heap->arr[i]->lnum; return 1; } diff --git a/fs/ubifs/tnc_commit.c b/fs/ubifs/tnc_commit.c index 01362ad5f804..a55e04822d16 100644 --- a/fs/ubifs/tnc_commit.c +++ b/fs/ubifs/tnc_commit.c @@ -700,7 +700,7 @@ static int alloc_idx_lebs(struct ubifs_info *c, int cnt) c->ilebs[c->ileb_cnt++] = lnum; dbg_cmt("LEB %d", lnum); } - if (dbg_is_chk_index(c) && !prandom_u32_max(8)) + if (dbg_is_chk_index(c) && !get_random_u32_below(8)) return -ENOSPC; return 0; } diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c index de79f5d07f65..989cf341779b 100644 --- a/fs/xfs/libxfs/xfs_alloc.c +++ b/fs/xfs/libxfs/xfs_alloc.c @@ -1516,7 +1516,7 @@ xfs_alloc_ag_vextent_lastblock( #ifdef DEBUG /* Randomly don't execute the first algorithm. 
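 * get_random_u32_below(2) is a fair coin flip, so on DEBUG builds the
 * early return below is taken half the time and the fallback allocation
 * path also gets exercised regularly.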
*/ - if (prandom_u32_max(2)) + if (get_random_u32_below(2)) return 0; #endif diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c index 94db50eb706a..5118dedf9267 100644 --- a/fs/xfs/libxfs/xfs_ialloc.c +++ b/fs/xfs/libxfs/xfs_ialloc.c @@ -636,7 +636,7 @@ xfs_ialloc_ag_alloc( /* randomly do sparse inode allocations */ if (xfs_has_sparseinodes(tp->t_mountp) && igeo->ialloc_min_blks < igeo->ialloc_blks) - do_sparse = prandom_u32_max(2); + do_sparse = get_random_u32_below(2); #endif /* diff --git a/fs/xfs/xfs_error.c b/fs/xfs/xfs_error.c index c6b2aabd6f18..822e6a0e9d1a 100644 --- a/fs/xfs/xfs_error.c +++ b/fs/xfs/xfs_error.c @@ -279,7 +279,7 @@ xfs_errortag_test( ASSERT(error_tag < XFS_ERRTAG_MAX); randfactor = mp->m_errortag[error_tag]; - if (!randfactor || prandom_u32_max(randfactor)) + if (!randfactor || get_random_u32_below(randfactor)) return false; xfs_warn_ratelimited(mp, diff --git a/include/linux/damon.h b/include/linux/damon.h index 620ada094c3b..84525b9cdf6e 100644 --- a/include/linux/damon.h +++ b/include/linux/damon.h @@ -21,7 +21,7 @@ /* Get a random number in [l, r) */ static inline unsigned long damon_rand(unsigned long l, unsigned long r) { - return l + prandom_u32_max(r - l); + return l + get_random_u32_below(r - l); } /** diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h index efef68c9352a..bb0ee80526b2 100644 --- a/include/linux/nodemask.h +++ b/include/linux/nodemask.h @@ -516,7 +516,7 @@ static inline int node_random(const nodemask_t *maskp) bit = first_node(*maskp); break; default: - bit = find_nth_bit(maskp->bits, MAX_NUMNODES, prandom_u32_max(w)); + bit = find_nth_bit(maskp->bits, MAX_NUMNODES, get_random_u32_below(w)); break; } return bit; diff --git a/include/linux/prandom.h b/include/linux/prandom.h index e0a0759dd09c..c94c02ba065c 100644 --- a/include/linux/prandom.h +++ b/include/linux/prandom.h @@ -9,6 +9,7 @@ #define _LINUX_PRANDOM_H #include <linux/types.h> +#include <linux/once.h> #include <linux/percpu.h> #include <linux/random.h> @@ -23,24 +24,10 @@ void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state); #define prandom_init_once(pcpu_state) \ DO_ONCE(prandom_seed_full_state, (pcpu_state)) -/** - * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro) - * @ep_ro: right open interval endpoint - * - * Returns a pseudo-random number that is in interval [0, ep_ro). This is - * useful when requesting a random index of an array containing ep_ro elements, - * for example. The result is somewhat biased when ep_ro is not a power of 2, - * so do not use this for cryptographic purposes. - * - * Returns: pseudo-random number in interval [0, ep_ro) - */ +/* Deprecated: use get_random_u32_below() instead. 
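 *
 * The replacement is exactly uniform, whereas the multiply-shift below
 * is only approximately so: mapping 256 possible bytes onto ep_ro
 * buckets leaves a remainder unless ep_ro divides 256. For example,
 * with ep_ro == 3, (get_random_u8() * 3) >> 8 yields 0 for 86 of the
 * 256 inputs but 1 and 2 for only 85 each; get_random_u32_below()
 * instead rejects the 256 % 3 == 1 offending sample and retries.
 *
 * Conversions used throughout this patch:
 *
 *   prandom_u32_max(n)           ->  get_random_u32_below(n)
 *   prandom_u32_max(n) + 1       ->  get_random_u32_inclusive(1, n)
 *   base + prandom_u32_max(len)  ->  get_random_u32_inclusive(base, base + len - 1)
 *   retry loops rejecting values <= floor  ->  get_random_u32_above(floor)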
*/ static inline u32 prandom_u32_max(u32 ep_ro) { - if (__builtin_constant_p(ep_ro <= 1U << 8) && ep_ro <= 1U << 8) - return (get_random_u8() * ep_ro) >> 8; - if (__builtin_constant_p(ep_ro <= 1U << 16) && ep_ro <= 1U << 16) - return (get_random_u16() * ep_ro) >> 16; - return ((u64)get_random_u32() * ep_ro) >> 32; + return get_random_u32_below(ep_ro); } /* diff --git a/include/linux/random.h b/include/linux/random.h index 147a5e0d0b8e..4a2a1de423cd 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -6,7 +6,6 @@ #include <linux/bug.h> #include <linux/kernel.h> #include <linux/list.h> -#include <linux/once.h> #include <uapi/linux/random.h> @@ -17,16 +16,16 @@ void __init add_bootloader_randomness(const void *buf, size_t len); void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; void add_interrupt_randomness(int irq) __latent_entropy; -void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy); +void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy, bool sleep_after); -#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) static inline void add_latent_entropy(void) { +#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy)); -} #else -static inline void add_latent_entropy(void) { } + add_device_randomness(NULL, 0); #endif +} #if IS_ENABLED(CONFIG_VMGENID) void add_vmfork_randomness(const void *unique_vm_id, size_t len); @@ -51,29 +50,76 @@ static inline unsigned long get_random_long(void) #endif } +u32 __get_random_u32_below(u32 ceil); + /* - * On 64-bit architectures, protect against non-terminated C string overflows - * by zeroing out the first byte of the canary; this leaves 56 bits of entropy. + * Returns a random integer in the interval [0, ceil), with uniform + * distribution, suitable for all uses. Fastest when ceil is a constant, but + * still fast for variable ceil as well. */ -#ifdef CONFIG_64BIT -# ifdef __LITTLE_ENDIAN -# define CANARY_MASK 0xffffffffffffff00UL -# else /* big endian, 64 bits: */ -# define CANARY_MASK 0x00ffffffffffffffUL -# endif -#else /* 32 bits: */ -# define CANARY_MASK 0xffffffffUL -#endif +static inline u32 get_random_u32_below(u32 ceil) +{ + if (!__builtin_constant_p(ceil)) + return __get_random_u32_below(ceil); + + /* + * For the fast path, below, all operations on ceil are precomputed by + * the compiler, so this incurs no overhead for checking pow2, doing + * divisions, or branching based on integer size. The resultant + * algorithm does traditional reciprocal multiplication (typically + * optimized by the compiler into shifts and adds), rejecting samples + * whose lower half would indicate a range indivisible by ceil. + */ + BUILD_BUG_ON_MSG(!ceil, "get_random_u32_below() must take ceil > 0"); + if (ceil <= 1) + return 0; + for (;;) { + if (ceil <= 1U << 8) { + u32 mult = ceil * get_random_u8(); + if (likely(is_power_of_2(ceil) || (u8)mult >= (1U << 8) % ceil)) + return mult >> 8; + } else if (ceil <= 1U << 16) { + u32 mult = ceil * get_random_u16(); + if (likely(is_power_of_2(ceil) || (u16)mult >= (1U << 16) % ceil)) + return mult >> 16; + } else { + u64 mult = (u64)ceil * get_random_u32(); + if (likely(is_power_of_2(ceil) || (u32)mult >= -ceil % ceil)) + return mult >> 32; + } + } +} + +/* + * Returns a random integer in the interval (floor, U32_MAX], with uniform + * distribution, suitable for all uses. 
Fastest when floor is a constant, but + * still fast for variable floor as well. + */ +static inline u32 get_random_u32_above(u32 floor) +{ + BUILD_BUG_ON_MSG(__builtin_constant_p(floor) && floor == U32_MAX, + "get_random_u32_above() must take floor < U32_MAX"); + return floor + 1 + get_random_u32_below(U32_MAX - floor); +} -static inline unsigned long get_random_canary(void) +/* + * Returns a random integer in the interval [floor, ceil], with uniform + * distribution, suitable for all uses. Fastest when floor and ceil are + * constant, but still fast for variable floor and ceil as well. + */ +static inline u32 get_random_u32_inclusive(u32 floor, u32 ceil) { - return get_random_long() & CANARY_MASK; + BUILD_BUG_ON_MSG(__builtin_constant_p(floor) && __builtin_constant_p(ceil) && + (floor > ceil || ceil - floor == U32_MAX), + "get_random_u32_inclusive() must take floor <= ceil"); + return floor + get_random_u32_below(ceil - floor + 1); } void __init random_init_early(const char *command_line); void __init random_init(void); bool rng_is_initialized(void); int wait_for_random_bytes(void); +int execute_with_initialized_rng(struct notifier_block *nb); /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes). * Returns the result of the call to wait_for_random_bytes. */ @@ -108,26 +154,6 @@ declare_get_random_var_wait(long, unsigned long) #include <asm/archrandom.h> -/* - * Called from the boot CPU during startup; not valid to call once - * secondary CPUs are up and preemption is possible. - */ -#ifndef arch_get_random_seed_longs_early -static inline size_t __init arch_get_random_seed_longs_early(unsigned long *v, size_t max_longs) -{ - WARN_ON(system_state != SYSTEM_BOOTING); - return arch_get_random_seed_longs(v, max_longs); -} -#endif - -#ifndef arch_get_random_longs_early -static inline size_t __init arch_get_random_longs_early(unsigned long *v, size_t max_longs) -{ - WARN_ON(system_state != SYSTEM_BOOTING); - return arch_get_random_longs(v, max_longs); -} -#endif - #ifdef CONFIG_SMP int random_prepare_cpu(unsigned int cpu); int random_online_cpu(unsigned int cpu); diff --git a/include/linux/stackprotector.h b/include/linux/stackprotector.h index 4c678c4fec58..9c88707d9a0f 100644 --- a/include/linux/stackprotector.h +++ b/include/linux/stackprotector.h @@ -6,6 +6,25 @@ #include <linux/sched.h> #include <linux/random.h> +/* + * On 64-bit architectures, protect against non-terminated C string overflows + * by zeroing out the first byte of the canary; this leaves 56 bits of entropy. + */ +#ifdef CONFIG_64BIT +# ifdef __LITTLE_ENDIAN +# define CANARY_MASK 0xffffffffffffff00UL +# else /* big endian, 64 bits: */ +# define CANARY_MASK 0x00ffffffffffffffUL +# endif +#else /* 32 bits: */ +# define CANARY_MASK 0xffffffffUL +#endif + +static inline unsigned long get_random_canary(void) +{ + return get_random_long() & CANARY_MASK; +} + #if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH) # include <asm/stackprotector.h> #else diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 25a54e04560e..38159f39e2af 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1032,7 +1032,7 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr, hdr->size = size; hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)), PAGE_SIZE - sizeof(*hdr)); - start = prandom_u32_max(hole) & ~(alignment - 1); + start = get_random_u32_below(hole) & ~(alignment - 1); /* Leave a random number of instructions before BPF code.
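 * The offset is drawn uniformly from [0, hole) and rounded down to the
 * required instruction alignment (a power of two):
 *
 *   start = get_random_u32_below(hole) & ~(alignment - 1);
 *
 * randomizing where the JITed image begins inside its allocation as a
 * hardening measure against attacks that rely on a predictable layout.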
*/ *image_ptr = &hdr->image[start]; @@ -1094,7 +1094,7 @@ bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **image_ptr, hole = min_t(unsigned int, size - (proglen + sizeof(*ro_header)), BPF_PROG_CHUNK_SIZE - sizeof(*ro_header)); - start = prandom_u32_max(hole) & ~(alignment - 1); + start = get_random_u32_below(hole) & ~(alignment - 1); *image_ptr = &ro_header->image[start]; *rw_image = &(*rw_header)->image[start]; diff --git a/kernel/fork.c b/kernel/fork.c index cfb09ca1b1bc..89b8b6c08592 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -75,7 +75,6 @@ #include <linux/freezer.h> #include <linux/delayacct.h> #include <linux/taskstats_kern.h> -#include <linux/random.h> #include <linux/tty.h> #include <linux/fs_struct.h> #include <linux/magic.h> @@ -97,6 +96,7 @@ #include <linux/scs.h> #include <linux/io_uring.h> #include <linux/bpf.h> +#include <linux/stackprotector.h> #include <asm/pgalloc.h> #include <linux/uaccess.h> diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c index 00cdf8fa5693..8679322450f2 100644 --- a/kernel/kcsan/selftest.c +++ b/kernel/kcsan/selftest.c @@ -22,13 +22,6 @@ #define ITERS_PER_TEST 2000 -/* Test requirements. */ -static bool __init test_requires(void) -{ - /* random should be initialized for the below tests */ - return get_random_u32() + get_random_u32() != 0; -} - /* * Test watchpoint encode and decode: check that encoding some access's info, * and then subsequent decode preserves the access's info. @@ -38,8 +31,8 @@ static bool __init test_encode_decode(void) int i; for (i = 0; i < ITERS_PER_TEST; ++i) { - size_t size = prandom_u32_max(MAX_ENCODABLE_SIZE) + 1; - bool is_write = !!prandom_u32_max(2); + size_t size = get_random_u32_inclusive(1, MAX_ENCODABLE_SIZE); + bool is_write = !!get_random_u32_below(2); unsigned long verif_masked_addr; long encoded_watchpoint; bool verif_is_write; @@ -259,7 +252,6 @@ static int __init kcsan_selftest(void) pr_err("selftest: " #do_test " failed"); \ } while (0) - RUN_TEST(test_requires); RUN_TEST(test_encode_decode); RUN_TEST(test_matching_access); RUN_TEST(test_barrier); diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c index 43efb2a04160..29dc253d03af 100644 --- a/kernel/locking/test-ww_mutex.c +++ b/kernel/locking/test-ww_mutex.c @@ -399,7 +399,7 @@ static int *get_random_order(int count) order[n] = n; for (n = count - 1; n > 1; n--) { - r = prandom_u32_max(n + 1); + r = get_random_u32_below(n + 1); if (r != n) { tmp = order[n]; order[n] = order[r]; @@ -538,7 +538,7 @@ static void stress_one_work(struct work_struct *work) { struct stress *stress = container_of(work, typeof(*stress), work); const int nlocks = stress->nlocks; - struct ww_mutex *lock = stress->locks + prandom_u32_max(nlocks); + struct ww_mutex *lock = stress->locks + get_random_u32_below(nlocks); int err; do { diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c index 8058bec87ace..9cf32ccda715 100644 --- a/kernel/time/clocksource.c +++ b/kernel/time/clocksource.c @@ -310,7 +310,7 @@ static void clocksource_verify_choose_cpus(void) * CPUs that are currently online. 
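 * Each iteration below picks a random online CPU without needing a
 * dense index: draw a bound in [0, nr_cpu_ids), advance to the first
 * online CPU at or after it with cpumask_next(cpu - 1, ...), and wrap
 * around to cpumask_first() if the draw landed beyond the last online
 * CPU.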
*/ for (i = 1; i < n; i++) { - cpu = prandom_u32_max(nr_cpu_ids); + cpu = get_random_u32_below(nr_cpu_ids); cpu = cpumask_next(cpu - 1, cpu_online_mask); if (cpu >= nr_cpu_ids) cpu = cpumask_first(cpu_online_mask); diff --git a/lib/fault-inject.c b/lib/fault-inject.c index adb2f9355ee6..1421818c9ef7 100644 --- a/lib/fault-inject.c +++ b/lib/fault-inject.c @@ -136,7 +136,7 @@ bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags) return false; } - if (attr->probability <= prandom_u32_max(100)) + if (attr->probability <= get_random_u32_below(100)) return false; if (!fail_stacktrace(attr)) diff --git a/lib/find_bit_benchmark.c b/lib/find_bit_benchmark.c index 7c3c011abd29..d3fb09e6eff1 100644 --- a/lib/find_bit_benchmark.c +++ b/lib/find_bit_benchmark.c @@ -174,8 +174,8 @@ static int __init find_bit_test(void) bitmap_zero(bitmap2, BITMAP_LEN); while (nbits--) { - __set_bit(prandom_u32_max(BITMAP_LEN), bitmap); - __set_bit(prandom_u32_max(BITMAP_LEN), bitmap2); + __set_bit(get_random_u32_below(BITMAP_LEN), bitmap); + __set_bit(get_random_u32_below(BITMAP_LEN), bitmap2); } test_find_next_bit(bitmap, BITMAP_LEN); diff --git a/lib/kobject.c b/lib/kobject.c index a0b2dbfcfa23..af1f5f2954d4 100644 --- a/lib/kobject.c +++ b/lib/kobject.c @@ -694,7 +694,7 @@ static void kobject_release(struct kref *kref) { struct kobject *kobj = container_of(kref, struct kobject, kref); #ifdef CONFIG_DEBUG_KOBJECT_RELEASE - unsigned long delay = HZ + HZ * prandom_u32_max(4); + unsigned long delay = HZ + HZ * get_random_u32_below(4); pr_info("kobject: '%s' (%p): %s, parent %p (delayed %ld)\n", kobject_name(kobj), kobj, __func__, kobj->parent, delay); INIT_DELAYED_WORK(&kobj->release, kobject_delayed_cleanup); diff --git a/lib/reed_solomon/test_rslib.c b/lib/reed_solomon/test_rslib.c index 848e7eb5da92..75cb1adac884 100644 --- a/lib/reed_solomon/test_rslib.c +++ b/lib/reed_solomon/test_rslib.c @@ -183,7 +183,7 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws, do { /* Must not choose the same location twice */ - errloc = prandom_u32_max(len); + errloc = get_random_u32_below(len); } while (errlocs[errloc] != 0); errlocs[errloc] = 1; @@ -194,12 +194,12 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws, for (i = 0; i < eras; i++) { do { /* Must not choose the same location twice */ - errloc = prandom_u32_max(len); + errloc = get_random_u32_below(len); } while (errlocs[errloc] != 0); derrlocs[i] = errloc; - if (ewsc && prandom_u32_max(2)) { + if (ewsc && get_random_u32_below(2)) { /* Erasure with the symbol intact */ errlocs[errloc] = 2; } else { diff --git a/lib/sbitmap.c b/lib/sbitmap.c index 7280ae8ca88c..58de526ff051 100644 --- a/lib/sbitmap.c +++ b/lib/sbitmap.c @@ -21,7 +21,7 @@ static int init_alloc_hint(struct sbitmap *sb, gfp_t flags) int i; for_each_possible_cpu(i) - *per_cpu_ptr(sb->alloc_hint, i) = prandom_u32_max(depth); + *per_cpu_ptr(sb->alloc_hint, i) = get_random_u32_below(depth); } return 0; } @@ -33,7 +33,7 @@ static inline unsigned update_alloc_hint_before_get(struct sbitmap *sb, hint = this_cpu_read(*sb->alloc_hint); if (unlikely(hint >= depth)) { - hint = depth ? prandom_u32_max(depth) : 0; + hint = depth ? 
diff --git a/lib/test-string_helpers.c b/lib/test-string_helpers.c
index 86fadd3ba08c..41d3447bc3b4 100644
--- a/lib/test-string_helpers.c
+++ b/lib/test-string_helpers.c
@@ -587,7 +587,7 @@ static int __init test_string_helpers_init(void)
 	for (i = 0; i < UNESCAPE_ALL_MASK + 1; i++)
 		test_string_unescape("unescape", i, false);
 	test_string_unescape("unescape inplace",
-			     prandom_u32_max(UNESCAPE_ANY + 1), true);
+			     get_random_u32_below(UNESCAPE_ANY + 1), true);

 	/* Without dictionary */
 	for (i = 0; i < ESCAPE_ALL_MASK + 1; i++)
diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c
index e0381b3ec410..1fb56cf5e5ce 100644
--- a/lib/test_fprobe.c
+++ b/lib/test_fprobe.c
@@ -144,10 +144,7 @@ static unsigned long get_ftrace_location(void *func)

 static int fprobe_test_init(struct kunit *test)
 {
-	do {
-		rand1 = get_random_u32();
-	} while (rand1 <= div_factor);
-
+	rand1 = get_random_u32_above(div_factor);
 	target = fprobe_selftest_target;
 	target2 = fprobe_selftest_target2;
 	target_ip = get_ftrace_location(target);
diff --git a/lib/test_hexdump.c b/lib/test_hexdump.c
index 0927f44cd478..b916801f23a8 100644
--- a/lib/test_hexdump.c
+++ b/lib/test_hexdump.c
@@ -149,7 +149,7 @@ static void __init test_hexdump(size_t len, int rowsize, int groupsize,
 static void __init test_hexdump_set(int rowsize, bool ascii)
 {
 	size_t d = min_t(size_t, sizeof(data_b), rowsize);
-	size_t len = prandom_u32_max(d) + 1;
+	size_t len = get_random_u32_inclusive(1, d);

 	test_hexdump(len, rowsize, 4, ascii);
 	test_hexdump(len, rowsize, 2, ascii);
@@ -208,11 +208,11 @@ static void __init test_hexdump_overflow(size_t buflen, size_t len,
 static void __init test_hexdump_overflow_set(size_t buflen, bool ascii)
 {
 	unsigned int i = 0;
-	int rs = (prandom_u32_max(2) + 1) * 16;
+	int rs = get_random_u32_inclusive(1, 2) * 16;

 	do {
 		int gs = 1 << i;
-		size_t len = prandom_u32_max(rs) + gs;
+		size_t len = get_random_u32_below(rs) + gs;

 		test_hexdump_overflow(buflen, rounddown(len, gs), rs, gs, ascii);
 	} while (i++ < 3);
@@ -223,11 +223,11 @@ static int __init test_hexdump_init(void)
 	unsigned int i;
 	int rowsize;

-	rowsize = (prandom_u32_max(2) + 1) * 16;
+	rowsize = get_random_u32_inclusive(1, 2) * 16;
 	for (i = 0; i < 16; i++)
 		test_hexdump_set(rowsize, false);

-	rowsize = (prandom_u32_max(2) + 1) * 16;
+	rowsize = get_random_u32_inclusive(1, 2) * 16;
 	for (i = 0; i < 16; i++)
 		test_hexdump_set(rowsize, true);

diff --git a/lib/test_kprobes.c b/lib/test_kprobes.c
index eeb1d728d974..1c95e5719802 100644
--- a/lib/test_kprobes.c
+++ b/lib/test_kprobes.c
@@ -339,10 +339,7 @@ static int kprobes_test_init(struct kunit *test)
 	stacktrace_target = kprobe_stacktrace_target;
 	internal_target = kprobe_stacktrace_internal_target;
 	stacktrace_driver = kprobe_stacktrace_driver;
-
-	do {
-		rand1 = get_random_u32();
-	} while (rand1 <= div_factor);
+	rand1 = get_random_u32_above(div_factor);

 	return 0;
 }
diff --git a/lib/test_list_sort.c b/lib/test_list_sort.c
index 19ff229b9c3a..cc5f335f29b5 100644
--- a/lib/test_list_sort.c
+++ b/lib/test_list_sort.c
@@ -71,7 +71,7 @@ static void list_sort_test(struct kunit *test)
 		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, el);

 		/* force some equivalencies */
-		el->value = prandom_u32_max(TEST_LIST_LEN / 3);
+		el->value = get_random_u32_below(TEST_LIST_LEN / 3);
 		el->serial = i;
 		el->poison1 = TEST_POISON1;
 		el->poison2 = TEST_POISON2;
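
The test_fprobe and test_kprobes hunks replace an open-coded rejection loop with a single call. Both forms draw uniformly from the interval (floor, U32_MAX]; a sketch of the equivalence, not kernel code:

	/* What the removed loops did, as a helper. */
	static u32 old_pick_above(u32 floor)
	{
		u32 x;

		do {
			x = get_random_u32();
		} while (x <= floor);

		return x;	/* uniform over (floor, U32_MAX] */
	}

	/* The replacement draws from the same interval in one call:
	 * get_random_u32_above(floor). */
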
diff --git a/lib/test_printf.c b/lib/test_printf.c
index d6a5d4b5f884..d34dc636b81c 100644
--- a/lib/test_printf.c
+++ b/lib/test_printf.c
@@ -126,7 +126,7 @@ __test(const char *expect, int elen, const char *fmt, ...)
	 * be able to print it as expected.
	 */
	failed_tests += do_test(BUF_SIZE, expect, elen, fmt, ap);
-	rand = 1 + prandom_u32_max(elen+1);
+	rand = get_random_u32_inclusive(1, elen + 1);
	/* Since elen < BUF_SIZE, we have 1 <= rand <= BUF_SIZE. */
	failed_tests += do_test(rand, expect, elen, fmt, ap);
	failed_tests += do_test(0, expect, elen, fmt, ap);
diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
index f2ba5787055a..6a8e445c8b55 100644
--- a/lib/test_rhashtable.c
+++ b/lib/test_rhashtable.c
@@ -368,8 +368,8 @@ static int __init test_rhltable(unsigned int entries)
 	pr_info("test %d random rhlist add/delete operations\n", entries);
 	for (j = 0; j < entries; j++) {
-		u32 i = prandom_u32_max(entries);
-		u32 prand = prandom_u32_max(4);
+		u32 i = get_random_u32_below(entries);
+		u32 prand = get_random_u32_below(4);

 		cond_resched();

@@ -396,7 +396,7 @@ static int __init test_rhltable(unsigned int entries)
 		}

 		if (prand & 2) {
-			i = prandom_u32_max(entries);
+			i = get_random_u32_below(entries);
 			if (test_bit(i, obj_in_table)) {
 				err = rhltable_remove(&rhlt, &rhl_test_objects[i].list_node, test_rht_params);
 				WARN(err, "cannot remove element at slot %d", i);
diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index cf7780572f5b..f90d2c27675b 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -151,7 +151,7 @@ static int random_size_alloc_test(void)
 	int i;

 	for (i = 0; i < test_loop_count; i++) {
-		n = prandom_u32_max(100) + 1;
+		n = get_random_u32_inclusive(1, 100);
 		p = vmalloc(n * PAGE_SIZE);

 		if (!p)
@@ -291,12 +291,12 @@ pcpu_alloc_test(void)
 		return -1;

 	for (i = 0; i < 35000; i++) {
-		size = prandom_u32_max(PAGE_SIZE / 4) + 1;
+		size = get_random_u32_inclusive(1, PAGE_SIZE / 4);

 		/*
 		 * Maximum PAGE_SIZE
 		 */
-		align = 1 << (prandom_u32_max(11) + 1);
+		align = 1 << get_random_u32_inclusive(1, 11);

 		pcpu[i] = __alloc_percpu(size, align);
 		if (!pcpu[i])
@@ -391,7 +391,7 @@ static void shuffle_array(int *arr, int n)

 	for (i = n - 1; i > 0; i--) {
 		/* Cut the range. */
-		j = prandom_u32_max(i);
+		j = get_random_u32_below(i);

 		/* Swap indexes. */
 		swap(arr[i], arr[j]);
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 5b0611c00956..be71a03c936a 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -41,6 +41,7 @@
 #include <linux/siphash.h>
 #include <linux/compiler.h>
 #include <linux/property.h>
+#include <linux/notifier.h>
 #ifdef CONFIG_BLOCK
 #include <linux/blkdev.h>
 #endif
@@ -752,26 +753,21 @@ early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
 static bool filled_random_ptr_key __read_mostly;
 static siphash_key_t ptr_key __read_mostly;
-static void fill_ptr_key_workfn(struct work_struct *work);
-static DECLARE_DELAYED_WORK(fill_ptr_key_work, fill_ptr_key_workfn);

-static void fill_ptr_key_workfn(struct work_struct *work)
+static int fill_ptr_key(struct notifier_block *nb, unsigned long action, void *data)
 {
-	if (!rng_is_initialized()) {
-		queue_delayed_work(system_unbound_wq, &fill_ptr_key_work, HZ * 2);
-		return;
-	}
-
 	get_random_bytes(&ptr_key, sizeof(ptr_key));

 	/* Pairs with smp_rmb() before reading ptr_key. */
 	smp_wmb();
 	WRITE_ONCE(filled_random_ptr_key, true);
+	return NOTIFY_DONE;
 }

 static int __init vsprintf_init_hashval(void)
 {
-	fill_ptr_key_workfn(NULL);
+	static struct notifier_block fill_ptr_key_nb = { .notifier_call = fill_ptr_key };
+	execute_with_initialized_rng(&fill_ptr_key_nb);
 	return 0;
 }
 subsys_initcall(vsprintf_init_hashval)
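
vsprintf no longer polls with delayed work: execute_with_initialized_rng() runs the notifier once the RNG is ready. The same pattern for any lazily seeded key would look roughly like this (a sketch; my_key and my_fill_key are illustrative names, not kernel code):

	#include <linux/notifier.h>
	#include <linux/random.h>

	static siphash_key_t my_key __read_mostly;

	static int my_fill_key(struct notifier_block *nb, unsigned long action, void *data)
	{
		get_random_bytes(&my_key, sizeof(my_key));
		return NOTIFY_DONE;
	}

	static int __init my_init(void)
	{
		static struct notifier_block nb = { .notifier_call = my_fill_key };

		execute_with_initialized_rng(&nb);
		return 0;
	}
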
diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 0d59098f0876..54181eba3e24 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1299,7 +1299,7 @@ static void match_all_not_assigned(struct kunit *test)
 	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);

 	for (i = 0; i < 256; i++) {
-		size = prandom_u32_max(1024) + 1;
+		size = get_random_u32_inclusive(1, 1024);
 		ptr = kmalloc(size, GFP_KERNEL);
 		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 		KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
@@ -1308,7 +1308,7 @@ static void match_all_not_assigned(struct kunit *test)
 	}

 	for (i = 0; i < 256; i++) {
-		order = prandom_u32_max(4) + 1;
+		order = get_random_u32_inclusive(1, 4);
 		pages = alloc_pages(GFP_KERNEL, order);
 		ptr = page_address(pages);
 		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
@@ -1321,7 +1321,7 @@ static void match_all_not_assigned(struct kunit *test)
 		return;

 	for (i = 0; i < 256; i++) {
-		size = prandom_u32_max(1024) + 1;
+		size = get_random_u32_inclusive(1, 1024);
 		ptr = vmalloc(size);
 		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 		KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 141788858b70..6cbd93f2007b 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -360,9 +360,9 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	unsigned long flags;
 	struct slab *slab;
 	void *addr;
-	const bool random_right_allocate = prandom_u32_max(2);
+	const bool random_right_allocate = get_random_u32_below(2);
 	const bool random_fault = CONFIG_KFENCE_STRESS_TEST_FAULTS &&
-				  !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS);
+				  !get_random_u32_below(CONFIG_KFENCE_STRESS_TEST_FAULTS);

 	/* Try to obtain a free object. */
 	raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
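
Worth noting in the kfence hunk: get_random_u32_below(2) is a fair coin, and !get_random_u32_below(N) is true with probability 1/N, which is how the stress-test fault rate is expressed. As a sketch:

	/* True once in every n calls on average; assumes n > 0. */
	static inline bool one_in(u32 n)
	{
		return !get_random_u32_below(n);
	}
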
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index a97bffe0cc3e..b5d66a69200d 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -532,8 +532,8 @@ static void test_free_bulk(struct kunit *test)
 	int iter;

 	for (iter = 0; iter < 5; iter++) {
-		const size_t size = setup_test_cache(test, 8 + prandom_u32_max(300), 0,
-						     (iter & 1) ? ctor_set_x : NULL);
+		const size_t size = setup_test_cache(test, get_random_u32_inclusive(8, 307),
+						     0, (iter & 1) ? ctor_set_x : NULL);
 		void *objects[] = {
 			test_alloc(test, size, GFP_KERNEL, ALLOCATE_RIGHT),
 			test_alloc(test, size, GFP_KERNEL, ALLOCATE_NONE),
diff --git a/mm/slub.c b/mm/slub.c
index 891df05a4d45..89b0d962f357 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1939,7 +1939,7 @@ static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab)
 		return false;

 	freelist_count = oo_objects(s->oo);
-	pos = prandom_u32_max(freelist_count);
+	pos = get_random_u32_below(freelist_count);

 	page_limit = slab->objects * s->size;
 	start = fixup_red_left(s, slab_address(slab));
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 72e481aacd5d..3eedf7ae957f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -772,8 +772,7 @@ static void set_cluster_next(struct swap_info_struct *si, unsigned long next)
 		/* No free swap slots available */
 		if (si->highest_bit <= si->lowest_bit)
 			return;
-		next = si->lowest_bit +
-			prandom_u32_max(si->highest_bit - si->lowest_bit + 1);
+		next = get_random_u32_inclusive(si->lowest_bit, si->highest_bit);
 		next = ALIGN_DOWN(next, SWAP_ADDRESS_SPACE_PAGES);
 		next = max_t(unsigned int, next, si->lowest_bit);
 	}
@@ -3089,7 +3088,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		 */
 		for_each_possible_cpu(cpu) {
 			per_cpu(*p->cluster_next_cpu, cpu) =
-				1 + prandom_u32_max(p->highest_bit);
+				get_random_u32_inclusive(1, p->highest_bit);
 		}
 		nr_cluster = DIV_ROUND_UP(maxpages, SWAPFILE_CLUSTER);
diff --git a/net/802/garp.c b/net/802/garp.c
index fc9eb02a912f..77aac2763835 100644
--- a/net/802/garp.c
+++ b/net/802/garp.c
@@ -407,7 +407,7 @@ static void garp_join_timer_arm(struct garp_applicant *app)
 {
 	unsigned long delay;

-	delay = prandom_u32_max(msecs_to_jiffies(garp_join_time));
+	delay = get_random_u32_below(msecs_to_jiffies(garp_join_time));
 	mod_timer(&app->join_timer, jiffies + delay);
 }
diff --git a/net/802/mrp.c b/net/802/mrp.c
index 155f74d8b14f..8c6f0381023b 100644
--- a/net/802/mrp.c
+++ b/net/802/mrp.c
@@ -592,7 +592,7 @@ static void mrp_join_timer_arm(struct mrp_applicant *app)
 {
 	unsigned long delay;

-	delay = prandom_u32_max(msecs_to_jiffies(mrp_join_time));
+	delay = get_random_u32_below(msecs_to_jiffies(mrp_join_time));
 	mod_timer(&app->join_timer, jiffies + delay);
 }
diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
index 7f6a7c96ac92..114ee5da261f 100644
--- a/net/batman-adv/bat_iv_ogm.c
+++ b/net/batman-adv/bat_iv_ogm.c
@@ -280,7 +280,7 @@ batadv_iv_ogm_emit_send_time(const struct batadv_priv *bat_priv)
 	unsigned int msecs;

 	msecs = atomic_read(&bat_priv->orig_interval) - BATADV_JITTER;
-	msecs += prandom_u32_max(2 * BATADV_JITTER);
+	msecs += get_random_u32_below(2 * BATADV_JITTER);

 	return jiffies + msecs_to_jiffies(msecs);
 }
@@ -288,7 +288,7 @@ batadv_iv_ogm_emit_send_time(const struct batadv_priv *bat_priv)
 /* when do we schedule a ogm packet to be sent */
 static unsigned long batadv_iv_ogm_fwd_send_time(void)
 {
-	return jiffies + msecs_to_jiffies(prandom_u32_max(BATADV_JITTER / 2));
+	return jiffies + msecs_to_jiffies(get_random_u32_below(BATADV_JITTER / 2));
 }

 /* apply hop penalty for a normal link */
diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
index f1741fbfb617..f9a58fb5442e 100644
--- a/net/batman-adv/bat_v_elp.c
+++ b/net/batman-adv/bat_v_elp.c
@@ -51,7 +51,7 @@ static void batadv_v_elp_start_timer(struct batadv_hard_iface *hard_iface)
 	unsigned int msecs;

 	msecs = atomic_read(&hard_iface->bat_v.elp_interval) - BATADV_JITTER;
-	msecs += prandom_u32_max(2 * BATADV_JITTER);
+	msecs += get_random_u32_below(2 * BATADV_JITTER);

 	queue_delayed_work(batadv_event_workqueue, &hard_iface->bat_v.elp_wq,
			   msecs_to_jiffies(msecs));
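
The batman-adv timers above all use the same additive jitter: a base interval shifted by a uniform draw so the result lands in [base - jitter, base + jitter). Extracted into a helper (a sketch, not kernel code):

	static unsigned int jittered_msecs(unsigned int base, unsigned int jitter)
	{
		/* uniform in [base - jitter, base + jitter) */
		return base - jitter + get_random_u32_below(2 * jitter);
	}
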
diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c
index 033639df96d8..addfd8c4fe95 100644
--- a/net/batman-adv/bat_v_ogm.c
+++ b/net/batman-adv/bat_v_ogm.c
@@ -90,7 +90,7 @@ static void batadv_v_ogm_start_queue_timer(struct batadv_hard_iface *hard_iface)
 	unsigned int msecs = BATADV_MAX_AGGREGATION_MS * 1000;

 	/* msecs * [0.9, 1.1] */
-	msecs += prandom_u32_max(msecs / 5) - (msecs / 10);
+	msecs += get_random_u32_below(msecs / 5) - (msecs / 10);
 	queue_delayed_work(batadv_event_workqueue, &hard_iface->bat_v.aggr_wq,
			   msecs_to_jiffies(msecs / 1000));
 }
@@ -109,7 +109,7 @@ static void batadv_v_ogm_start_timer(struct batadv_priv *bat_priv)
 		return;

 	msecs = atomic_read(&bat_priv->orig_interval) - BATADV_JITTER;
-	msecs += prandom_u32_max(2 * BATADV_JITTER);
+	msecs += get_random_u32_below(2 * BATADV_JITTER);
 	queue_delayed_work(batadv_event_workqueue, &bat_priv->bat_v.ogm_wq,
			   msecs_to_jiffies(msecs));
 }
diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
index 5f4aeeb60dc4..bf29fba4dde5 100644
--- a/net/batman-adv/network-coding.c
+++ b/net/batman-adv/network-coding.c
@@ -1009,7 +1009,7 @@ static struct batadv_nc_path *batadv_nc_get_path(struct batadv_priv *bat_priv,
 static u8 batadv_nc_random_weight_tq(u8 tq)
 {
 	/* randomize the estimated packet loss (max TQ - estimated TQ) */
-	u8 rand_tq = prandom_u32_max(BATADV_TQ_MAX_VALUE + 1 - tq);
+	u8 rand_tq = get_random_u32_below(BATADV_TQ_MAX_VALUE + 1 - tq);

 	/* convert to (randomized) estimated tq again */
 	return BATADV_TQ_MAX_VALUE - rand_tq;
diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
index a92e7e485feb..81ce668b0b77 100644
--- a/net/bluetooth/mgmt.c
+++ b/net/bluetooth/mgmt.c
@@ -7373,9 +7373,8 @@ static int get_conn_info(struct sock *sk, struct hci_dev *hdev, void *data,
	/* To avoid client trying to guess when to poll again for information we
	 * calculate conn info age as random value between min/max set in hdev.
	 */
-	conn_info_age = hdev->conn_info_min_age +
-			prandom_u32_max(hdev->conn_info_max_age -
-					hdev->conn_info_min_age);
+	conn_info_age = get_random_u32_inclusive(hdev->conn_info_min_age,
+						 hdev->conn_info_max_age - 1);

	/* Query controller to refresh cached values if they are too old or were
	 * never read.
	 */
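
Note the upper bound in the mgmt.c hunk: the old code drew from the half-open range [min, max) via min + prandom_u32_max(max - min), so the closed-interval helper must stop at max - 1. A sketch of the equivalence:

	/* Returns a value in [min, max); assumes min < max. */
	static u32 pick_half_open(u32 min, u32 max)
	{
		return get_random_u32_inclusive(min, max - 1);
	}
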
diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
index b670ba03a675..7e90f9e61d9b 100644
--- a/net/can/j1939/socket.c
+++ b/net/can/j1939/socket.c
@@ -189,7 +189,7 @@ activate_next:
 		int time_ms = 0;

 		if (err)
-			time_ms = 10 + prandom_u32_max(16);
+			time_ms = 10 + get_random_u32_below(16);

 		j1939_tp_schedule_txtimer(first, time_ms);
 	}
diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
index 55f29c9f9e08..67d36776aff4 100644
--- a/net/can/j1939/transport.c
+++ b/net/can/j1939/transport.c
@@ -1168,7 +1168,7 @@ static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer)
 		if (session->tx_retry < J1939_XTP_TX_RETRY_LIMIT) {
 			session->tx_retry++;
 			j1939_tp_schedule_txtimer(session,
-						  10 + prandom_u32_max(16));
+						  10 + get_random_u32_below(16));
 		} else {
 			netdev_alert(priv->ndev, "%s: 0x%p: tx retry count reached\n",
				     __func__, session);
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index db60217f911b..faabad6603db 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -222,7 +222,7 @@ static void pick_new_mon(struct ceph_mon_client *monc)
				max--;
		}

-		n = prandom_u32_max(max);
+		n = get_random_u32_below(max);
		if (o >= 0 && n >= o)
			n++;
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 4e4f1e4bc265..11c04e7d928e 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -1479,7 +1479,7 @@ static bool target_should_be_paused(struct ceph_osd_client *osdc,

 static int pick_random_replica(const struct ceph_osds *acting)
 {
-	int i = prandom_u32_max(acting->size);
+	int i = get_random_u32_below(acting->size);

	dout("%s picked osd%d, primary osd%d\n", __func__,
	     acting->osds[i], acting->primary);
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 952a54763358..f00a79fc301b 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -111,7 +111,7 @@ static void neigh_cleanup_and_release(struct neighbour *neigh)

 unsigned long neigh_rand_reach_time(unsigned long base)
 {
-	return base ? prandom_u32_max(base) + (base >> 1) : 0;
+	return base ? get_random_u32_below(base) + (base >> 1) : 0;
 }
 EXPORT_SYMBOL(neigh_rand_reach_time);

@@ -1666,7 +1666,7 @@ void pneigh_enqueue(struct neigh_table *tbl, struct neigh_parms *p,
		    struct sk_buff *skb)
 {
	unsigned long sched_next = jiffies +
			prandom_u32_max(NEIGH_VAR(p, PROXY_DELAY));

	if (p->qlen > NEIGH_VAR(p, PROXY_QLEN)) {
		kfree_skb(skb);
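
The pick_new_mon() hunk keeps the monitor selection uniform while excluding the current one: draw from a range one smaller, then shift any value at or past the excluded index up by one. In isolation (a sketch; pick_other() is an illustrative name):

	/* Uniform over {0, ..., count - 1} minus excluded; assumes count > 1. */
	static int pick_other(int count, int excluded)
	{
		int n = get_random_u32_below(count - 1);

		return n >= excluded ? n + 1 : n;
	}
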
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index c3763056c554..760238196db1 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2324,7 +2324,7 @@ static inline int f_pick(struct pktgen_dev *pkt_dev)
			pkt_dev->curfl = 0; /*reset */
		}
	} else {
-		flow = prandom_u32_max(pkt_dev->cflows);
+		flow = get_random_u32_below(pkt_dev->cflows);
		pkt_dev->curfl = flow;

		if (pkt_dev->flows[flow].count > pkt_dev->lflow) {
@@ -2380,9 +2380,8 @@ static void set_cur_queue_map(struct pktgen_dev *pkt_dev)
	else if (pkt_dev->queue_map_min <= pkt_dev->queue_map_max) {
		__u16 t;
		if (pkt_dev->flags & F_QUEUE_MAP_RND) {
-			t = prandom_u32_max(pkt_dev->queue_map_max -
-					    pkt_dev->queue_map_min + 1) +
-			    pkt_dev->queue_map_min;
+			t = get_random_u32_inclusive(pkt_dev->queue_map_min,
+						     pkt_dev->queue_map_max);
		} else {
			t = pkt_dev->cur_queue_map + 1;
			if (t > pkt_dev->queue_map_max)
@@ -2411,7 +2410,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
		__u32 tmp;

		if (pkt_dev->flags & F_MACSRC_RND)
-			mc = prandom_u32_max(pkt_dev->src_mac_count);
+			mc = get_random_u32_below(pkt_dev->src_mac_count);
		else {
			mc = pkt_dev->cur_src_mac_offset++;
			if (pkt_dev->cur_src_mac_offset >=
@@ -2437,7 +2436,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
		__u32 tmp;

		if (pkt_dev->flags & F_MACDST_RND)
-			mc = prandom_u32_max(pkt_dev->dst_mac_count);
+			mc = get_random_u32_below(pkt_dev->dst_mac_count);

		else {
			mc = pkt_dev->cur_dst_mac_offset++;
@@ -2469,18 +2468,17 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
	}

	if ((pkt_dev->flags & F_VID_RND) && (pkt_dev->vlan_id != 0xffff)) {
-		pkt_dev->vlan_id = prandom_u32_max(4096);
+		pkt_dev->vlan_id = get_random_u32_below(4096);
	}

	if ((pkt_dev->flags & F_SVID_RND) && (pkt_dev->svlan_id != 0xffff)) {
-		pkt_dev->svlan_id = prandom_u32_max(4096);
+		pkt_dev->svlan_id = get_random_u32_below(4096);
	}

	if (pkt_dev->udp_src_min < pkt_dev->udp_src_max) {
		if (pkt_dev->flags & F_UDPSRC_RND)
-			pkt_dev->cur_udp_src = prandom_u32_max(
-				pkt_dev->udp_src_max - pkt_dev->udp_src_min) +
-				pkt_dev->udp_src_min;
+			pkt_dev->cur_udp_src = get_random_u32_inclusive(pkt_dev->udp_src_min,
+									pkt_dev->udp_src_max - 1);

		else {
			pkt_dev->cur_udp_src++;
@@ -2491,9 +2489,8 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)

	if (pkt_dev->udp_dst_min < pkt_dev->udp_dst_max) {
		if (pkt_dev->flags & F_UDPDST_RND) {
-			pkt_dev->cur_udp_dst = prandom_u32_max(
-				pkt_dev->udp_dst_max - pkt_dev->udp_dst_min) +
-				pkt_dev->udp_dst_min;
+			pkt_dev->cur_udp_dst = get_random_u32_inclusive(pkt_dev->udp_dst_min,
+									pkt_dev->udp_dst_max - 1);
		} else {
			pkt_dev->cur_udp_dst++;
			if (pkt_dev->cur_udp_dst >= pkt_dev->udp_dst_max)
@@ -2508,7 +2505,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
		if (imn < imx) {
			__u32 t;
			if (pkt_dev->flags & F_IPSRC_RND)
-				t = prandom_u32_max(imx - imn) + imn;
+				t = get_random_u32_inclusive(imn, imx - 1);
			else {
				t = ntohl(pkt_dev->cur_saddr);
				t++;
@@ -2530,8 +2527,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)

			if (pkt_dev->flags & F_IPDST_RND) {
				do {
-					t = prandom_u32_max(imx - imn) +
-					    imn;
+					t = get_random_u32_inclusive(imn, imx - 1);
					s = htonl(t);
				} while (ipv4_is_loopback(s) ||
					 ipv4_is_multicast(s) ||
@@ -2578,9 +2574,8 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
	if (pkt_dev->min_pkt_size < pkt_dev->max_pkt_size) {
		__u32 t;
		if (pkt_dev->flags & F_TXSIZE_RND) {
-			t = prandom_u32_max(pkt_dev->max_pkt_size -
-					    pkt_dev->min_pkt_size) +
-			    pkt_dev->min_pkt_size;
+			t = get_random_u32_inclusive(pkt_dev->min_pkt_size,
+						     pkt_dev->max_pkt_size - 1);
		} else {
			t = pkt_dev->cur_pkt_size + 1;
			if (t > pkt_dev->max_pkt_size)
@@ -2589,7 +2584,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
		pkt_dev->cur_pkt_size = t;
	} else if (pkt_dev->n_imix_entries > 0) {
		struct imix_pkt *entry;
-		__u32 t = prandom_u32_max(IMIX_PRECISION);
+		__u32 t = get_random_u32_below(IMIX_PRECISION);
		__u8 entry_index = pkt_dev->imix_distribution[t];

		entry = &pkt_dev->imix_entries[entry_index];
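
The last pktgen hunk is a weighted draw: imix_distribution[] is a precomputed table of IMIX_PRECISION slots in which each entry index appears in proportion to its weight, so a single uniform draw yields a weighted choice. The idiom in general form (a sketch; names are illustrative):

	/* table[] has 'precision' slots, each holding an outcome index;
	 * an outcome with weight w should occupy w * precision / total slots. */
	static u8 weighted_pick(const u8 *table, u32 precision)
	{
		return table[get_random_u32_below(precision)];
	}
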
diff --git a/net/core/stream.c b/net/core/stream.c
index 75fded8495f5..5b1fe2b82eac 100644
--- a/net/core/stream.c
+++ b/net/core/stream.c
@@ -123,7 +123,7 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
	DEFINE_WAIT_FUNC(wait, woken_wake_function);

	if (sk_stream_memory_free(sk))
-		current_timeo = vm_wait = prandom_u32_max(HZ / 5) + 2;
+		current_timeo = vm_wait = get_random_u32_below(HZ / 5) + 2;

	add_wait_queue(sk_sleep(sk), &wait);

diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index d5d745c3e345..46aa2d65e40a 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -263,7 +263,7 @@ bool icmp_global_allow(void)
		/* We want to use a credit of one in average, but need to randomize
		 * it for security reasons.
		 */
-		credit = max_t(int, credit - prandom_u32_max(3), 0);
+		credit = max_t(int, credit - get_random_u32_below(3), 0);
		rc = true;
	}
	WRITE_ONCE(icmp_global.credit, credit);
diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
index 81be3e0f0e70..c920aa9a62a9 100644
--- a/net/ipv4/igmp.c
+++ b/net/ipv4/igmp.c
@@ -213,7 +213,7 @@ static void igmp_stop_timer(struct ip_mc_list *im)
 /* It must be called with locked im->lock */
 static void igmp_start_timer(struct ip_mc_list *im, int max_delay)
 {
-	int tv = prandom_u32_max(max_delay);
+	int tv = get_random_u32_below(max_delay);

	im->tm_running = 1;
	if (!mod_timer(&im->timer, jiffies+tv+2))
@@ -222,7 +222,7 @@ static void igmp_start_timer(struct ip_mc_list *im, int max_delay)

 static void igmp_gq_start_timer(struct in_device *in_dev)
 {
-	int tv = prandom_u32_max(in_dev->mr_maxdelay);
+	int tv = get_random_u32_below(in_dev->mr_maxdelay);
	unsigned long exp = jiffies + tv + 2;

	if (in_dev->mr_gq_running &&
@@ -236,7 +236,7 @@ static void igmp_gq_start_timer(struct in_device *in_dev)

 static void igmp_ifc_start_timer(struct in_device *in_dev, int delay)
 {
-	int tv = prandom_u32_max(delay);
+	int tv = get_random_u32_below(delay);

	if (!mod_timer(&in_dev->mr_ifc_timer, jiffies+tv+2))
		in_dev_hold(in_dev);
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 4e84ed21d16f..f22051219b50 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -314,7 +314,7 @@ other_half_scan:
	if (likely(remaining > 1))
		remaining &= ~1U;

-	offset = prandom_u32_max(remaining);
+	offset = get_random_u32_below(remaining);
	/* __inet_hash_connect() favors ports having @low parity
	 * We do the opposite to not pollute connect() users.
	 */
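
The igmp hunks above all arm a timer at a uniformly random point within a maximum delay, plus a small floor so it never fires immediately. The bare idiom (a sketch, not kernel code):

	/* Fires after a delay uniform in [2, max_delay + 1] jiffies. */
	static void arm_random_timer(struct timer_list *t, unsigned long max_delay)
	{
		mod_timer(t, jiffies + get_random_u32_below(max_delay) + 2);
	}
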
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 3cec471a2cd2..d039b4e732a3 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -1097,7 +1097,7 @@ ok:
	 * on low contention the randomness is maximal and on high contention
	 * it may be inexistent.
	 */
-	i = max_t(int, i, prandom_u32_max(8) * 2);
+	i = max_t(int, i, get_random_u32_below(8) * 2);
	WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2);

	/* Head lock still held and bh's disabled */
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index cd1fa9f70f1a..de6e3515ab4f 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -471,7 +471,7 @@ static u32 ip_idents_reserve(u32 hash, int segs)
	old = READ_ONCE(*p_tstamp);

	if (old != now && cmpxchg(p_tstamp, old, now) == old)
-		delta = prandom_u32_max(now - old);
+		delta = get_random_u32_below(now - old);

	/* If UBSAN reports an error there, please make sure your compiler
	 * supports -fno-strict-overflow before reporting it that was a bug
@@ -689,7 +689,7 @@ static void update_or_create_fnhe(struct fib_nh_common *nhc, __be32 daddr,
		} else {
			/* Randomize max depth to avoid some side channels attacks. */
			int max_depth = FNHE_RECLAIM_DEPTH +
-					prandom_u32_max(FNHE_RECLAIM_DEPTH);
+					get_random_u32_below(FNHE_RECLAIM_DEPTH);

			while (depth > max_depth) {
				fnhe_remove_oldest(hash);
diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 54eec33c6e1c..d2c470524e58 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -618,7 +618,7 @@ static void bbr_reset_probe_bw_mode(struct sock *sk)
	struct bbr *bbr = inet_csk_ca(sk);

	bbr->mode = BBR_PROBE_BW;
-	bbr->cycle_idx = CYCLE_LEN - 1 - prandom_u32_max(bbr_cycle_rand);
+	bbr->cycle_idx = CYCLE_LEN - 1 - get_random_u32_below(bbr_cycle_rand);
	bbr_advance_cycle_phase(sk);	/* flip to next phase of gain cycle */
 }

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 0640453fce54..23cf418efe4f 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3646,7 +3646,8 @@ static void tcp_send_challenge_ack(struct sock *sk)
		u32 half = (ack_limit + 1) >> 1;

		WRITE_ONCE(net->ipv4.tcp_challenge_timestamp, now);
-		WRITE_ONCE(net->ipv4.tcp_challenge_count, half + prandom_u32_max(ack_limit));
+		WRITE_ONCE(net->ipv4.tcp_challenge_count,
+			   get_random_u32_inclusive(half, ack_limit + half - 1));
	}
	count = READ_ONCE(net->ipv4.tcp_challenge_count);
	if (count > 0) {
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 9c3f5202a97b..d720f6f5de3f 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -104,7 +104,7 @@ static inline u32 cstamp_delta(unsigned long cstamp)
 static inline s32 rfc3315_s14_backoff_init(s32 irt)
 {
	/* multiply 'initial retransmission time' by 0.9 .. 1.1 */
-	u64 tmp = (900000 + prandom_u32_max(200001)) * (u64)irt;
+	u64 tmp = get_random_u32_inclusive(900000, 1100000) * (u64)irt;
	do_div(tmp, 1000000);
	return (s32)tmp;
 }
@@ -112,11 +112,11 @@ static inline s32 rfc3315_s14_backoff_init(s32 irt)
 static inline s32 rfc3315_s14_backoff_update(s32 rt, s32 mrt)
 {
	/* multiply 'retransmission timeout' by 1.9 .. 2.1 */
-	u64 tmp = (1900000 + prandom_u32_max(200001)) * (u64)rt;
+	u64 tmp = get_random_u32_inclusive(1900000, 2100000) * (u64)rt;
	do_div(tmp, 1000000);
	if ((s32)tmp > mrt) {
		/* multiply 'maximum retransmission time' by 0.9 .. 1.1 */
-		tmp = (900000 + prandom_u32_max(200001)) * (u64)mrt;
+		tmp = get_random_u32_inclusive(900000, 1100000) * (u64)mrt;
		do_div(tmp, 1000000);
	}
	return (s32)tmp;
@@ -3967,7 +3967,7 @@ static void addrconf_dad_kick(struct inet6_ifaddr *ifp)
	if (ifp->flags & IFA_F_OPTIMISTIC)
		rand_num = 0;
	else
-		rand_num = prandom_u32_max(idev->cnf.rtr_solicit_delay ?: 1);
+		rand_num = get_random_u32_below(idev->cnf.rtr_solicit_delay ?: 1);

	nonce = 0;
	if (idev->cnf.enhanced_dad ||
diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
index 7860383295d8..1c02160cf7a4 100644
--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -1050,7 +1050,7 @@ bool ipv6_chk_mcast_addr(struct net_device *dev, const struct in6_addr *group,
 /* called with mc_lock */
 static void mld_gq_start_work(struct inet6_dev *idev)
 {
-	unsigned long tv = prandom_u32_max(idev->mc_maxdelay);
+	unsigned long tv = get_random_u32_below(idev->mc_maxdelay);

	idev->mc_gq_running = 1;
	if (!mod_delayed_work(mld_wq, &idev->mc_gq_work, tv + 2))
@@ -1068,7 +1068,7 @@ static void mld_gq_stop_work(struct inet6_dev *idev)
 /* called with mc_lock */
 static void mld_ifc_start_work(struct inet6_dev *idev, unsigned long delay)
 {
-	unsigned long tv = prandom_u32_max(delay);
+	unsigned long tv = get_random_u32_below(delay);

	if (!mod_delayed_work(mld_wq, &idev->mc_ifc_work, tv + 2))
		in6_dev_hold(idev);
@@ -1085,7 +1085,7 @@ static void mld_ifc_stop_work(struct inet6_dev *idev)
 /* called with mc_lock */
 static void mld_dad_start_work(struct inet6_dev *idev, unsigned long delay)
 {
-	unsigned long tv = prandom_u32_max(delay);
+	unsigned long tv = get_random_u32_below(delay);

	if (!mod_delayed_work(mld_wq, &idev->mc_dad_work, tv + 2))
		in6_dev_hold(idev);
@@ -1130,7 +1130,7 @@ static void igmp6_group_queried(struct ifmcaddr6 *ma, unsigned long resptime)
	}

	if (delay >= resptime)
-		delay = prandom_u32_max(resptime);
+		delay = get_random_u32_below(resptime);

	if (!mod_delayed_work(mld_wq, &ma->mca_work, delay))
		refcount_inc(&ma->mca_refcnt);
@@ -2574,7 +2574,7 @@ static void igmp6_join_group(struct ifmcaddr6 *ma)

	igmp6_send(&ma->mca_addr, ma->idev->dev, ICMPV6_MGM_REPORT);

-	delay = prandom_u32_max(unsolicited_report_interval(ma->idev));
+	delay = get_random_u32_below(unsolicited_report_interval(ma->idev));

	if (cancel_delayed_work(&ma->mca_work)) {
		refcount_dec(&ma->mca_refcnt);
diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
index 2685c3f15e9d..b5205311f372 100644
--- a/net/ipv6/output_core.c
+++ b/net/ipv6/output_core.c
@@ -15,13 +15,7 @@ static u32 __ipv6_select_ident(struct net *net,
			       const struct in6_addr *dst,
			       const struct in6_addr *src)
 {
-	u32 id;
-
-	do {
-		id = get_random_u32();
-	} while (!id);
-
-	return id;
+	return get_random_u32_above(0);
 }

 /* This function exists only for tap drivers that must support broken
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 2f355f0ec32a..e74e0361fd92 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -1713,7 +1713,7 @@ static int rt6_insert_exception(struct rt6_info *nrt,
		net->ipv6.rt6_stats->fib_rt_cache++;

	/* Randomize max depth to avoid some side channels attacks. */
-	max_depth = FIB6_MAX_DEPTH + prandom_u32_max(FIB6_MAX_DEPTH);
+	max_depth = FIB6_MAX_DEPTH + get_random_u32_below(FIB6_MAX_DEPTH);
	while (bucket->depth > max_depth)
		rt6_exception_remove_oldest(bucket);
diff --git a/net/netfilter/ipvs/ip_vs_twos.c b/net/netfilter/ipvs/ip_vs_twos.c
index f2579fc9c75b..3308e4cc740a 100644
--- a/net/netfilter/ipvs/ip_vs_twos.c
+++ b/net/netfilter/ipvs/ip_vs_twos.c
@@ -71,8 +71,8 @@ static struct ip_vs_dest *ip_vs_twos_schedule(struct ip_vs_service *svc,
	 * from 0 to total_weight
	 */
	total_weight += 1;
-	rweight1 = prandom_u32_max(total_weight);
-	rweight2 = prandom_u32_max(total_weight);
+	rweight1 = get_random_u32_below(total_weight);
+	rweight2 = get_random_u32_below(total_weight);

	/* Pick two weighted servers */
	list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 23b3fedd619a..8006ca862551 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -906,7 +906,7 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
			   nf_ct_zone_id(nf_ct_zone(ct), IP_CT_DIR_REPLY));
	} while (nf_conntrack_double_lock(net, hash, reply_hash, sequence));

-	max_chainlen = MIN_CHAINLEN + prandom_u32_max(MAX_CHAINLEN);
+	max_chainlen = MIN_CHAINLEN + get_random_u32_below(MAX_CHAINLEN);

	/* See if there's one in the list already, including reverse */
	hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) {
@@ -1227,7 +1227,7 @@ __nf_conntrack_confirm(struct sk_buff *skb)
		goto dying;
	}

-	max_chainlen = MIN_CHAINLEN + prandom_u32_max(MAX_CHAINLEN);
+	max_chainlen = MIN_CHAINLEN + get_random_u32_below(MAX_CHAINLEN);
	/* See if there's one in the list already, including reverse:
	   NAT could have grabbed it without realizing, since we're
	   not in the hash.  If there is, we lost race. */
diff --git a/net/netfilter/nf_nat_helper.c b/net/netfilter/nf_nat_helper.c
index a95a25196943..bf591e6af005 100644
--- a/net/netfilter/nf_nat_helper.c
+++ b/net/netfilter/nf_nat_helper.c
@@ -223,7 +223,7 @@ u16 nf_nat_exp_find_port(struct nf_conntrack_expect *exp, u16 port)
		if (res != -EBUSY || (--attempts_left < 0))
			break;

-		port = min + prandom_u32_max(range);
+		port = min + get_random_u32_below(range);
	}

	return 0;
 }
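
nf_nat_exp_find_port() above retries with a fresh port drawn uniformly from a window of range ports starting at min, the same half-open draw used elsewhere in this series (a sketch):

	/* Uniform in [min, min + range); assumes range > 0 and no u16 overflow. */
	static u16 random_port(u16 min, u16 range)
	{
		return min + get_random_u32_below(range);
	}
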
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index a662e8a5ff84..7a401d94463a 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -835,7 +835,7 @@ retry:
		/* Bind collision, search negative portid values. */
		if (rover == -4096)
			/* rover will be in range [S32_MIN, -4097] */
-			rover = S32_MIN + prandom_u32_max(-4096 - S32_MIN);
+			rover = S32_MIN + get_random_u32_below(-4096 - S32_MIN);
		else if (rover >= -4096)
			rover = -4097;
		portid = rover--;
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 1ab65f7f2a0a..96fea8afc004 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -1350,7 +1350,7 @@ static bool fanout_flow_is_huge(struct packet_sock *po, struct sk_buff *skb)
		if (READ_ONCE(history[i]) == rxhash)
			count++;

-	victim = prandom_u32_max(ROLLOVER_HLEN);
+	victim = get_random_u32_below(ROLLOVER_HLEN);

	/* Avoid dirtying the cache line if possible */
	if (READ_ONCE(history[victim]) != rxhash)
@@ -1386,7 +1386,7 @@ static unsigned int fanout_demux_rnd(struct packet_fanout *f,
				     struct sk_buff *skb,
				     unsigned int num)
 {
-	return prandom_u32_max(num);
+	return get_random_u32_below(num);
 }

 static unsigned int fanout_demux_rollover(struct packet_fanout *f,
diff --git a/net/sched/act_gact.c b/net/sched/act_gact.c
index 62d682b96b88..be267ffaaba7 100644
--- a/net/sched/act_gact.c
+++ b/net/sched/act_gact.c
@@ -25,7 +25,7 @@ static struct tc_action_ops act_gact_ops;
 static int gact_net_rand(struct tcf_gact *gact)
 {
	smp_rmb(); /* coupled with smp_wmb() in tcf_gact_init() */
-	if (prandom_u32_max(gact->tcfg_pval))
+	if (get_random_u32_below(gact->tcfg_pval))
		return gact->tcf_action;
	return gact->tcfg_paction;
 }
diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
index 7a25477f5d99..4194480746b0 100644
--- a/net/sched/act_sample.c
+++ b/net/sched/act_sample.c
@@ -168,7 +168,7 @@ static int tcf_sample_act(struct sk_buff *skb, const struct tc_action *a,
	psample_group = rcu_dereference_bh(s->psample_group);

	/* randomly sample packets according to rate */
-	if (psample_group && (prandom_u32_max(s->rate) == 0)) {
+	if (psample_group && (get_random_u32_below(s->rate) == 0)) {
		if (!skb_at_tc_ingress(skb)) {
			md.in_ifindex = skb->skb_iif;
			md.out_ifindex = skb->dev->ifindex;
diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
index 3ac3e5c80b6f..19c851125901 100644
--- a/net/sched/sch_choke.c
+++ b/net/sched/sch_choke.c
@@ -183,7 +183,7 @@ static struct sk_buff *choke_peek_random(const struct choke_sched_data *q,
	int retrys = 3;

	do {
-		*pidx = (q->head + prandom_u32_max(choke_len(q))) & q->tab_mask;
+		*pidx = (q->head + get_random_u32_below(choke_len(q))) & q->tab_mask;
		skb = q->tab[*pidx];
		if (skb)
			return skb;
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index fb00ac40ecb7..6ef3021e1169 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -513,8 +513,8 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
			goto finish_segs;
		}

-		skb->data[prandom_u32_max(skb_headlen(skb))] ^=
-			1<<prandom_u32_max(8);
+		skb->data[get_random_u32_below(skb_headlen(skb))] ^=
+			1<<get_random_u32_below(8);
	}

	if (unlikely(sch->q.qlen >= sch->limit)) {
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 83628c347744..cfe72085fdc4 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -8319,7 +8319,7 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)

		inet_get_local_port_range(net, &low, &high);
		remaining = (high - low) + 1;
-		rover = prandom_u32_max(remaining) + low;
+		rover = get_random_u32_below(remaining) + low;

		do {
			rover++;
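
The sch_netem hunk corrupts a packet by flipping one uniformly chosen bit in the linear header. Stated on its own (a sketch, not kernel code):

	/* Flip one random bit in buf[0..len-1]; assumes len > 0. */
	static void corrupt_one_bit(u8 *buf, unsigned int len)
	{
		buf[get_random_u32_below(len)] ^= 1u << get_random_u32_below(8);
	}
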
diff --git a/net/sctp/transport.c b/net/sctp/transport.c
index f8fd98784977..ca1eba95c293 100644
--- a/net/sctp/transport.c
+++ b/net/sctp/transport.c
@@ -199,7 +199,7 @@ void sctp_transport_reset_hb_timer(struct sctp_transport *transport)
	if ((time_before(transport->hb_timer.expires, expires) ||
	     !timer_pending(&transport->hb_timer)) &&
	    !mod_timer(&transport->hb_timer,
-		       expires + prandom_u32_max(transport->rto)))
+		       expires + get_random_u32_below(transport->rto)))
		sctp_transport_hold(transport);
 }

diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index f075a9fb5ccc..95ff74706104 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -677,7 +677,7 @@ static void cache_limit_defers(void)

	/* Consider removing either the first or the last */
	if (cache_defer_cnt > DFR_MAX) {
-		if (prandom_u32_max(2))
+		if (get_random_u32_below(2))
			discard = list_entry(cache_defer_list.next,
					     struct cache_deferred_req, recent);
		else
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 915b9902f673..2e4987dcba29 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1619,7 +1619,7 @@ static int xs_get_random_port(void)
	if (max < min)
		return -EADDRINUSE;
	range = max - min + 1;
-	rand = prandom_u32_max(range);
+	rand = get_random_u32_below(range);
	return rand + min;
 }

diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index e902b01ea3cb..b35c8701876a 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -3010,7 +3010,7 @@ static int tipc_sk_insert(struct tipc_sock *tsk)
	struct net *net = sock_net(sk);
	struct tipc_net *tn = net_generic(net, tipc_net_id);
	u32 remaining = (TIPC_MAX_PORT - TIPC_MIN_PORT) + 1;
-	u32 portid = prandom_u32_max(remaining) + TIPC_MIN_PORT;
+	u32 portid = get_random_u32_below(remaining) + TIPC_MIN_PORT;

	while (remaining--) {
		portid++;
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 884eca7f6743..d593d5b6d4b1 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -626,8 +626,7 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
	struct sockaddr_vm new_addr;

	if (!port)
-		port = LAST_RESERVED_PORT + 1 +
-			prandom_u32_max(U32_MAX - LAST_RESERVED_PORT);
+		port = get_random_u32_above(LAST_RESERVED_PORT);

	vsock_addr_init(&new_addr, addr->svm_cid, addr->svm_port);

diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 3d2fe7712ac5..d63a3644ee1a 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -2072,7 +2072,7 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high)
	} else {
		u32 spi = 0;
		for (h = 0; h < high-low+1; h++) {
-			spi = low + prandom_u32_max(high - low + 1);
+			spi = get_random_u32_inclusive(low, high);
			x0 = xfrm_state_lookup(net, mark, &x->id.daddr, htonl(spi), x->id.proto, x->props.family);
			if (x0 == NULL) {
				newspi = htonl(spi);
diff --git a/tools/testing/selftests/wireguard/qemu/kernel.config b/tools/testing/selftests/wireguard/qemu/kernel.config
index ce2a04717300..6327c9c400e0 100644
--- a/tools/testing/selftests/wireguard/qemu/kernel.config
+++ b/tools/testing/selftests/wireguard/qemu/kernel.config
@@ -64,8 +64,6 @@ CONFIG_PROC_FS=y
 CONFIG_PROC_SYSCTL=y
 CONFIG_SYSFS=y
 CONFIG_TMPFS=y
-CONFIG_RANDOM_TRUST_CPU=y
-CONFIG_RANDOM_TRUST_BOOTLOADER=y
 CONFIG_CONSOLE_LOGLEVEL_DEFAULT=15
 CONFIG_LOG_BUF_SHIFT=18
 CONFIG_PRINTK_TIME=y
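
A few call sites above (sctp_get_port_local, xs_get_random_port, tipc_sk_insert) keep the open-coded get_random_u32_below(range) + min form for a closed [min, max] range; it is interchangeable with the inclusive helper (a sketch):

	/* Uniform in [min, max]; assumes min <= max. */
	static u32 pick_port(u32 min, u32 max)
	{
		return get_random_u32_inclusive(min, max);
		/* equivalent: get_random_u32_below(max - min + 1) + min */
	}
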