path: root/arch/arm/mm/pmsa-v7.c
Age         Commit message                                                              Author
2021-03-25  ARM: 9069/1: NOMMU: Fix conversion for_each_membock() to for_each_mem_range()  (Vladimir Murzin)
for_each_mem_range() uses a loop variable, yet looking into the code it is not just an iteration counter but a more complex entity that encodes information about the memblock. Thus the condition i == 0 looks fragile. Indeed, it broke boot of R-class platforms, since the i == 0 path was never taken (because i was set to 1). Fix that by restoring the original flag check.

Fixes: b10d6bca8720 ("arch, drivers: replace for_each_membock() with for_each_mem_range()")
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
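A minimal sketch of the flag-based pattern restored by this fix (variable names are illustrative, not the exact pmsa-v7.c code):

    phys_addr_t start, end;
    bool first = true;
    u64 i;

    for_each_mem_range(i, &start, &end) {
        if (first) {
            /* handle the first memory range */
            first = false;
        } else {
            /* handle subsequent ranges */
        }
    }

Testing "i == 0" instead of an explicit flag is what broke R-class boot: the iterator variable also encodes memblock-internal state, so it may never be zero inside the loop body.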
2020-10-13  arch, drivers: replace for_each_membock() with for_each_mem_range()  (Mike Rapoport)
There are several occurrences of the following pattern:

    for_each_memblock(memory, reg) {
        start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
        end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));

        /* do something with start and end */
    }

Using the for_each_mem_range() iterator is more appropriate in such cases and allows simpler and cleaner code.

[akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build]
[rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring]
  Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
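For reference, a hedged sketch of the equivalent loop with the new iterator (the exact conversion differs per call site):

    phys_addr_t start, end;
    u64 i;

    for_each_mem_range(i, &start, &end) {
        /* start and end are already physical addresses here,
         * so no __pfn_to_phys() round-trip is needed */
    }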
2018-05-19  ARM: 8754/1: NOMMU: Move PMSAv7 MPU under it's own namespace  (Vladimir Murzin)
We are going to support a different MPU whose programming model is not compatible with PMSAv7, so move the PMSAv7 MPU code under its own namespace.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-21  ARM: 8740/1: NOMMU: Make sure we do not hold stale data in mem[] array  (Vladimir Murzin)
adjust_lowmem_bounds() is called twice, which can leave stale data (i.e. the subreg value) in the mem[] array after the first call. Zero out the mem[] array before we allocate MPU regions for memory.

Fixes: 5c9d9a1b3a54 ("ARM: 8712/1: NOMMU: Use more MPU regions to cover memory")
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
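The fix boils down to a defensive reset between the two passes; roughly (a sketch only, the array name is taken from the commit message and the real declaration may differ):

    /* forget region data recorded by the first adjust_lowmem_bounds() pass */
    memset(mem, 0, sizeof(mem));

    /* ... then re-populate mem[] and allocate MPU regions ... */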
2017-12-17  ARM: 8732/1: NOMMU: Allow userspace to access background MPU region  (Vladimir Murzin)
Currently, with the MPU enabled, we prohibit userspace access to anything except RAM. Benjamin reported that because of this his userspace application cannot access the framebuffer memory he reserved in the device tree. It turns out we have no option other than to allow userspace to access memory covered by the background region.

Reported-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
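For context, PMSAv7 encodes per-region permissions in the AP field of the region access control register, and the change amounts to using the "full access" encoding for the background region. A hedged sketch (AP encodings per the ARMv7-AR architecture manual; the macro names are illustrative, not the kernel's):

    #define AP_PL1_RW   0x1     /* privileged RW, no user access */
    #define AP_FULL_RW  0x3     /* privileged and user RW        */

    /* background region: user access must stay enabled so that mappings
     * outside RAM (e.g. a reserved framebuffer) remain reachable */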
2017-11-06  ARM: 8719/1: NOMMU: work around maybe-uninitialized warning  (Arnd Bergmann)
The reworked MPU code produces a new warning in some configurations, presumably starting with the code move, after which the compiler makes different inlining decisions:

    arch/arm/mm/pmsa-v7.c: In function 'adjust_lowmem_bounds_mpu':
    arch/arm/mm/pmsa-v7.c:310:5: error: 'specified_mem_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]

This appears to be harmless, as we know that there is always at least one memblock, and the only way this could get triggered is if the for_each_memblock() loop was never entered. I could not come up with a better workaround than initializing specified_mem_size to zero, but at least that is the value the variable would have in the hypothetical case of no memblocks.

Fixes: 877ec119dbbf ("ARM: 8706/1: NOMMU: Move out MPU setup in separate module")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
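The workaround itself is a one-line defensive initialisation, roughly (a sketch, using the variable named in the warning; the real type in pmsa-v7.c may differ):

    phys_addr_t specified_mem_size = 0; /* silences -Wmaybe-uninitialized; zero is only
                                           meaningful in the (impossible) case of no memblocks */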
2017-10-23  ARM: 8713/1: NOMMU: Support MPU in XIP configuration  (Vladimir Murzin)
Currently, there is an assumption in the early MPU setup code that the kernel image is located in RAM, which is obviously not true for XIP. To run code from ROM we need to make sure that it is covered by the MPU. However, because we allocate regions (semi-)dynamically, we can run into the issue of trimming the region we are running from in case the ROM spans several MPU regions. To help deal with that, we enforce minimum alignments of 1MB and 128KB for the start and end of the XIP address space, respectively.

Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
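A hedged sketch of what those alignment constraints look like in terms of generic kernel helpers (illustrative only; xip_end is a hypothetical variable, and the real enforcement is spread across Kconfig, the linker script and pmsa-v7.c):

    /* the ROM text must start on a 1MB boundary ... */
    BUILD_BUG_ON(!IS_ALIGNED(CONFIG_XIP_PHYS_ADDR, SZ_1M));

    /* ... and the end of the XIP image is rounded up to 128KB so that a
     * trailing MPU (sub)region cannot trim the code we are executing from */
    xip_end = round_up(xip_end, SZ_128K);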
2017-10-23  ARM: 8712/1: NOMMU: Use more MPU regions to cover memory  (Vladimir Murzin)
PMSAv7 defines curious alignment requirements for regions:

 - the size must be a power of 2, and
 - the region start must be aligned to the region size

Because of that we currently adjust the lowmem bounds and assign only one MPU region to cover memory; together these can lead to a significant amount of memory being wasted. As an example, consider 64MB of memory at 0x70000000 - it fits the alignment requirements nicely; now, imagine that 2MB of memory is reserved for coherent DMA allocation, so Linux is expected to see 62MB of memory... and here the annoying thing happens - memory gets truncated to 32MB (we've lost 30MB!), i.e. the MPU layout looks like:

    0: base 0x70000000, size 0x2000000

This patch tries to allocate as many MPU slots as possible to minimise the amount of truncated memory. Moreover, with this patch MPU subregions start to get used. MPU subregions allow us to reduce the number of MPU slots needed. For the example given above, the MPU layout looks like:

    0: base 0x70000000, size 0x2000000
    1: base 0x72000000, size 0x1000000
    2: base 0x73000000, size 0x1000000, disable subreg 7 (0x73e00000 - 0x73ffffff)

Where without subregions we'd get:

    0: base 0x70000000, size 0x2000000
    1: base 0x72000000, size 0x1000000
    2: base 0x73000000, size 0x800000
    3: base 0x73800000, size 0x400000
    4: base 0x73c00000, size 0x200000

To achieve a better layout we first try to cover the specified memory as is (maybe with the help of subregions), and if that fails, we truncate the memory to fit the alignment requirements (so it occupies one MPU slot) and perform one more attempt with the remainder, and so on, until we either cover all memory or run out of MPU slots.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
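The cover strategy can be condensed into a small standalone sketch (illustrative only: the helper and variable names are hypothetical and the real code in pmsa-v7.c tracks considerably more state):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t phys_addr_t;

    static phys_addr_t pow2_roundup(phys_addr_t x)
    {
        phys_addr_t p = 1;

        while (p < x)
            p <<= 1;
        return p;
    }

    /*
     * One slot can describe [base, base + len) if len fits into a
     * power-of-two region whose base is aligned to its size, possibly
     * with some of its eight equal subregions disabled at the top.
     */
    static bool fits_one_slot(phys_addr_t base, phys_addr_t len)
    {
        phys_addr_t rsize = pow2_roundup(len);

        if (rsize < 256)    /* subregions need a region of at least 256 bytes */
            return false;
        return !(base & (rsize - 1)) && !(len % (rsize / 8));
    }

    static void cover(phys_addr_t base, phys_addr_t size, int slots)
    {
        while (size && slots--) {
            if (fits_one_slot(base, size))
                return; /* one region, trailing subregions disabled */

            /* otherwise carve off the largest aligned power-of-two
             * chunk and retry with the remainder */
            phys_addr_t chunk = pow2_roundup(size) / 2;

            while (chunk > size || (base & (chunk - 1)))
                chunk >>= 1;
            base += chunk;
            size -= chunk;
        }
    }

Running this logic by hand on the 62MB example above reproduces the three-slot layout from the commit message: 32MB, then 16MB, then a 16MB region with its top 2MB subregion disabled.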
2017-10-23  ARM: 8711/1: V7M: Add support for MPU to M-class  (Vladimir Murzin)
This patch makes it possible to use the MPU with v7-M cores.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
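The key difference on v7-M is that the MPU is programmed through memory-mapped registers in the System Control Space instead of cp15 operations; a hedged sketch of the register map involved (addresses and names per the ARMv7-M architecture manual, not necessarily the kernel's macro names):

    #define MPU_TYPE    0xE000ED90  /* number of supported regions  */
    #define MPU_CTRL    0xE000ED94  /* enable, PRIVDEFENA, HFNMIENA */
    #define MPU_RNR     0xE000ED98  /* region number (select)       */
    #define MPU_RBAR    0xE000ED9C  /* region base address          */
    #define MPU_RASR    0xE000EDA0  /* region attributes and size   */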
2017-10-23  ARM: 8708/1: NOMMU: Rework MPU to be mostly done in C  (Vladimir Murzin)
Currently, there are several issues with how the MPU is set up:

 1. We won't boot if the MPU is missing
 2. We won't boot if XIP is used
 3. Further extension of the MPU setup requires asm skills

The 1st point can be relaxed, so we can continue with the boot CPU even if the MPU is missing and fail the boot for secondaries only. To address the 2nd point we could create a region covering CONFIG_XIP_PHYS_ADDR - _end, and that might work for the first stage of enabling the MPU, but due to the MPU's alignment requirements we could cover too much; IOW we need more flexibility in how we partition memory regions... and that would be hard to achieve because of the 3rd point.

This patch addresses the 1st and 3rd issues and paves the way for the 2nd and further improvements. The most visible change introduced with this patch is that we start using the mpu_rgn_info array (as it was supposed to be used?), so changes in the MPU setup done by the boot CPU are recorded there and fed to the secondaries. It allows us to keep a minimal region setup for the boot CPU and do the rest in C. Since we start programming MPU regions in C, evaluation of the MPU constraints (number of regions supported and minimal region order) can be done once, which in turn opens the possibility to free up the "probe" region early.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
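The bookkeeping handed from the boot CPU to the secondaries is, roughly, an array of saved region register values plus a count; a hedged sketch of its shape (field names and the array size are illustrative - see arch/arm/include/asm/mpu.h for the real definition):

    struct mpu_rgn {
        u32 drbar;      /* region base address    */
        u32 drsr;       /* region size and enable */
        u32 dracr;      /* region access control  */
    };

    struct mpu_rgn_info {
        unsigned int used;          /* regions programmed by the boot CPU */
        struct mpu_rgn rgns[16];    /* replayed on each secondary CPU     */
    };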
2017-10-23  ARM: 8707/1: NOMMU: Update MPU accessors to use cp15 helpers  (Vladimir Murzin)
Currently, the inline assembly for accessing the MPU's cp15 registers lacks the volatile keyword, which leaves the compiler free to optimise such accesses away as soon as we start using them more intensively. Rather than fixing the inline asm, let's move the MPU accessors over to the cp15 helpers, which do the right thing.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
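For illustration, the difference boils down to whether the asm statement is marked volatile; a hedged sketch of a PMSAv7 accessor (the DRSR encoding is taken from the ARMv7-R architecture manual; the function name is made up and is not the kernel's actual helper):

    static inline u32 mpu_read_drsr(void)
    {
        u32 v;

        /* "volatile" stops the compiler from reordering or eliding
         * repeated reads of the region size/enable register */
        asm volatile("mrc p15, 0, %0, c6, c1, 2" : "=r" (v));
        return v;
    }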
2017-10-23  ARM: 8706/1: NOMMU: Move out MPU setup in separate module  (Vladimir Murzin)
Having the MPU handling code in a dedicated module makes it easier to enhance and maintain it.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>