author	Michael Ellerman	2022-06-16 18:41:49 +1000
committer	Michael Ellerman	2022-06-18 10:18:55 +1000
commit	6cf06c17e94f26c290fd3370a5c36514ae15ac43 (patch)
tree	0d986314d00bc9cd7194356c1b91126fc3ac5cf8
parent	b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3 (diff)
powerpc/mm: Move CMA reservations after initmem_init()
After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment") there is an error at boot about the KVM CMA reservation failing, eg:

  kvm_cma_reserve: reserving 6553 MiB for global area
  cma: Failed to reserve 6553 MiB

That makes it impossible to start KVM guests using the hash MMU with more than 2G of memory, because the VM is unable to allocate a large enough region for the hash page table, eg:

  $ qemu-system-ppc64 -enable-kvm -M pseries -m 4G ...
  qemu-system-ppc64: Failed to allocate KVM HPT of order 25: Cannot allocate memory

Aneesh pointed out that this happens because when kvm_cma_reserve() is called, pageblock_order has not been initialised yet, and is still zero, causing the checks in cma_init_reserved_mem() against CMA_MIN_ALIGNMENT_PAGES to fail.

Fix it by moving the call to kvm_cma_reserve() after initmem_init(). The pageblock_order is initialised in sparse_init(), which is called from initmem_init().

Also move the hugetlb CMA reservation.

Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220616120033.1976732-1-mpe@ellerman.id.au
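The change described above amounts to reordering calls in the powerpc boot path so that the CMA reservations run only once pageblock_order has been set. A minimal sketch of the resulting ordering, assuming the calls live in setup_arch() in arch/powerpc/kernel/setup-common.c; the hugetlb helper name gigantic_hugetlb_cma_reserve() is an assumption for illustration, not taken from this page:

	void __init setup_arch(char **cmdline_p)
	{
		/* ... earlier setup elided ... */

		/*
		 * initmem_init() ends up calling sparse_init(), which
		 * initialises pageblock_order.
		 */
		initmem_init();

		/*
		 * The CMA reservations must come after initmem_init(), so
		 * that pageblock_order is non-zero and the alignment check
		 * against CMA_MIN_ALIGNMENT_PAGES in cma_init_reserved_mem()
		 * can pass.
		 */
		kvm_cma_reserve();              /* KVM hash-MMU HPT area */
		gigantic_hugetlb_cma_reserve(); /* hugetlb CMA (assumed name) */

		/* ... remaining setup elided ... */
	}

Before the fix, the two reservation calls sat above initmem_init() in this sequence, so they saw pageblock_order == 0 and the reservations were rejected.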