path: root/arch/xtensa/mm
2018-04-25signal/xtensa: Use force_sig_fault where appropriateEric W. Biederman
Filling in struct siginfo before calling force_sig_info is a tedious and error prone process, where once in a great while the wrong fields are filled out and siginfo has been inconsistently cleared.

Simplify this process with the helper force_sig_fault, which takes as parameters all of the information it needs, ensures all of the fiddly bits of filling in struct siginfo are done properly, and then calls force_sig_info. In short, this is about a 5 line reduction in code every time force_sig_info is called, which makes the calling function clearer.

Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: linux-xtensa@linux-xtensa.org
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
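For context, a minimal before/after sketch from an architecture fault handler; the force_sig_fault() form of that era (signal, si_code, fault address, task) is assumed, not quoted from the patch:

    /* Before: hand-rolled siginfo, easy to get subtly wrong. */
    struct siginfo info;

    clear_siginfo(&info);
    info.si_signo = SIGSEGV;
    info.si_errno = 0;
    info.si_code  = SEGV_MAPERR;
    info.si_addr  = (void __user *)address;
    force_sig_info(SIGSEGV, &info, current);

    /* After: one call, the helper fills in the fiddly siginfo bits. */
    force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)address, current);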
2018-04-25signal: Ensure every siginfo we send has all bits initializedEric W. Biederman
Call clear_siginfo to ensure every stack allocated siginfo is properly initialized before being passed to the signal sending functions.

Note: it is not safe to depend on C initializers to initialize struct siginfo on the stack, because C is allowed to skip holes when initializing a structure.

The initialization of struct siginfo in tracehook_report_syscall_exit was moved from the helper user_single_step_siginfo into tracehook_report_syscall_exit itself, to make it clear that the local variable siginfo gets fully initialized.

In a few cases the scope of struct siginfo has been reduced to make it clear that the siginfo variable is not used on other paths in the function in which it is declared.

Instances of using memset to initialize siginfo have been replaced with calls to clear_siginfo for clarity.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
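A sketch of the intended pattern, inside a hypothetical trap handler with struct pt_regs *regs (clear_siginfo() is essentially a memset over the whole siginfo, so padding and unused union members cannot leak to user space):

    struct siginfo info;

    clear_siginfo(&info);          /* zero every byte, including holes      */
    info.si_signo = SIGTRAP;       /* then fill only the relevant fields    */
    info.si_code  = TRAP_BRKPT;
    info.si_addr  = (void __user *)regs->pc;
    force_sig_info(SIGTRAP, &info, current);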
2018-04-05mm: fix races between swapoff and flush dcacheHuang Ying
Thanks to commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB trunks"), after swapoff the address_space associated with the swap device will be freed. So page_mapping() users which may touch the address_space need some kind of mechanism to prevent the address_space from being freed while it is being accessed.

The dcache flushing functions (flush_dcache_page(), etc.) in architecture specific code may access the address_space of the swap device for anonymous pages in the swap cache via page_mapping(). But in some cases there is no mechanism to prevent the swap device from being swapped off, for example:

  CPU1                              CPU2
  __get_user_pages()                swapoff()
    flush_dcache_page()
      mapping = page_mapping()
      ...                             exit_swap_address_space()
      ...                               kvfree(spaces)
      mapping_mapped(mapping)

The address space may be accessed after being freed.

But per cachetlb.txt and Russell King, flush_dcache_page() only cares about file cache pages; for anonymous pages flush_anon_page() should be used. The implementation of flush_dcache_page() in all architectures follows this too: they check whether page_mapping() is NULL and whether mapping_mapped() is true to determine whether to flush the dcache immediately, and they use the interval tree (mapping->i_mmap) to find all user space mappings. Neither mapping_mapped() nor mapping->i_mmap is used by anonymous pages in the swap cache at all.

So, to fix the race between swapoff and dcache flushing, page_mapping_file() is added to return the address_space for file cache pages and NULL otherwise, and all page_mapping() calls in the dcache flushing functions are replaced with page_mapping_file().

[akpm@linux-foundation.org: simplify page_mapping_file(), per Mike]
Link: http://lkml.kernel.org/r/20180305083634.15174-1-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
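The resulting shape of an arch flush_dcache_page() is roughly the following; page_mapping_file() returns NULL for swap cache pages, so the racy swap address_space is never looked at here (illustrative sketch, not the verbatim xtensa code):

    void flush_dcache_page(struct page *page)
    {
        /* NULL for anonymous/swap-cache pages: nothing to look up. */
        struct address_space *mapping = page_mapping_file(page);

        if (mapping && !mapping_mapped(mapping)) {
            /* No user mappings yet: mark the page and defer the flush. */
            if (!test_bit(PG_arch_1, &page->flags))
                set_bit(PG_arch_1, &page->flags);
            return;
        }
        /* Otherwise flush/invalidate the aliases now. */
    }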
2018-02-15xtensa: fix high memory/reserved memory collisionMax Filippov
Xtensa memory initialization code frees high memory pages without checking whether they are in the reserved memory regions or not. That results in invalid value of totalram_pages and duplicate page usage by CMA and highmem. It produces a bunch of BUGs at startup looking like this:

  BUG: Bad page state in process swapper  pfn:70800
  page:be60c000 count:0 mapcount:-127 mapping: (null) index:0x1
  flags: 0x80000000()
  raw: 80000000 00000000 00000001 ffffff80 00000000 be60c014 be60c014 0000000a
  page dumped because: nonzero mapcount
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper Tainted: G    B   4.16.0-rc1-00015-g7928b2cbe55b-dirty #23
  Stack:
    bd839d33 00000000 00000018 ba97b64c a106578c bd839d70 be60c000 00000000
    a1378054 bd86a000 00000003 ba97b64c a1066166 bd839da0 be60c000 ffe00000
    a1066b58 bd839dc0 be504000 00000000 000002f4 bd838000 00000000 0000001e
  Call Trace:
    [<a1065734>] bad_page+0xac/0xd0
    [<a106578c>] free_pages_check_bad+0x34/0x4c
    [<a1066166>] __free_pages_ok+0xae/0x14c
    [<a1066b58>] __free_pages+0x30/0x64
    [<a1365de5>] init_cma_reserved_pageblock+0x35/0x44
    [<a13682dc>] cma_init_reserved_areas+0xf4/0x148
    [<a10034b8>] do_one_initcall+0x80/0xf8
    [<a1361c16>] kernel_init_freeable+0xda/0x13c
    [<a125b59d>] kernel_init+0x9/0xd0
    [<a1004304>] ret_from_kernel_thread+0xc/0x18

Only free high memory pages that are not reserved.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
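The fix amounts to skipping reserved ranges when releasing high memory to the buddy allocator; a minimal sketch of that check with a simple pfn loop (illustrative only, not the exact patch):

    /* Illustrative: free highmem pages, skipping memblock-reserved ones. */
    static void __init free_highpages_sketch(void)
    {
        unsigned long pfn;

        for (pfn = max_low_pfn; pfn < max_pfn; pfn++) {
            if (!pfn_valid(pfn))
                continue;
            /* CMA/DT reservations must stay out of the buddy allocator. */
            if (memblock_is_reserved(PFN_PHYS(pfn)))
                continue;
            free_highmem_page(pfn_to_page(pfn));
        }
    }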
2017-12-17xtensa: print kernel sections info in mem_initMax Filippov
Output virtual addresses and sizes occupied by the main kernel sections: .text, .rodata, .data, .init and .bss. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-12-16xtensa: add support for KASANMax Filippov
Cover kernel addresses above 0x90000000 by the shadow map. Enable HAVE_ARCH_KASAN when MMU is enabled. Provide kasan_early_init that fills shadow map with writable copies of kasan_zero_page. Call kasan_early_init right after mmu initialization in the setup_arch. Provide kasan_init that allocates proper shadow map pages from the memblock and puts these pages into the shadow map for addresses from VMALLOC area to the end of KSEG. Call kasan_init right after memblock initialization. Don't use KASAN for the boot code, MMU and KASAN initialization and page fault handler. Make kernel stack size 4 times larger when KASAN is enabled to avoid stack overflows. GCC 7.3, 8 or newer is required to build the xtensa kernel with KASAN. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
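The shadow region has to cover the generic KASAN address translation, which maps 8 bytes of memory to 1 shadow byte; this is the stock helper from include/linux/kasan.h, shown here for reference rather than anything xtensa-specific:

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
            + KASAN_SHADOW_OFFSET;
    }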
2017-12-16xtensa: move fixmap and kmap just above the KSEGMax Filippov
The virtual address space between the page table and the VMALLOC region is big enough to host KASAN shadow map and there's enough space between the VMALLOC area and KSEG for the fixmap and kmap. Move fixmap and kmap to the gap between VMALLOC area and KSEG, just above the KSEG. Reorder entries in the kernel memory layout printing code. Drop duplicate PGTABLE_START definition, use XCHAL_PAGE_TABLE_VADDR instead. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-12-16xtensa: don't clear swapper_pg_dir in paging_initMax Filippov
swapper_pg_dir is located in the .bss, so it's zero-initialized anyway. With KASAN enabled paging_init will be called after KASAN initialization, it must not erase page directory entries set up for KASAN shadow map. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-12-16xtensa: extract init_kioMax Filippov
KIO region placement may be specified in the device tree, that's why it's initialized with the rest of MMU after the early_init_devtree. In order to support KASAN the MMU must be initialized earlier. Separate KIO initialization from the rest of MMU initialization. Reinitialize KIO if its location is specified in the device tree. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-12-16xtensa: clean up custom-controlled debug outputMax Filippov
Replace #ifdef'fed/commented out debug printk statements with pr_debug. Replace printk statements with pr_* equivalents. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
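A representative example of the conversion (the exact message is made up):

    /* Before: only visible when a file-local define is flipped at build time. */
    #ifdef DEBUG_TLB
        printk("%s: way %d vaddr %08lx\n", __func__, way, vaddr);
    #endif

    /* After: always compiled in, selectable at runtime via dynamic debug. */
        pr_debug("%s: way %d vaddr %08lx\n", __func__, way, vaddr);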
2017-11-02License cleanup: add SPDX GPL-2.0 license identifier to files with no licenseGreg Kroah-Hartman
Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - file had no licensing information in it,
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files. The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging were:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - File already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

 - when both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top level COPYING file license applied.

   For non */uapi/* files that summary was:

     SPDX license identifier                             # files
     ----------------------------------------------------|------
     GPL-2.0                                               11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note", otherwise it was "GPL-2.0". Results of that were:

     SPDX license identifier                             # files
     ----------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                         930

   and resulted in the second patch in this series.

 - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or it had no licensing in it (per prior point).
   Results summary:

     SPDX license identifier                             # files
     ----------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                         270
     GPL-2.0+ WITH Linux-syscall-note                        169
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
     LGPL-2.1+ WITH Linux-syscall-note                        15
     GPL-1.0+ WITH Linux-syscall-note                         14
     ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
     LGPL-2.0+ WITH Linux-syscall-note                         4
     LGPL-2.1 WITH Linux-syscall-note                          3
     ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
     ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1

   and that resulted in the third patch in this series.

 - when the two scanners agreed on the detected license(s), that became the concluded license(s).
 - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.
 - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
 - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
 - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based in part on an older version of FOSSology, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores
 - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct
 - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
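For reference, the identifier added by this series is a single first-line comment; C source files use the C++-style form and headers/assembly files the block-comment form:

    // SPDX-License-Identifier: GPL-2.0        (first line of a .c file)
    /* SPDX-License-Identifier: GPL-2.0 */     (first line of a header or .S file)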
2017-08-01xtensa: mm/cache: add missing EXPORT_SYMBOLsMax Filippov
Functions clear_user_highpage, copy_user_highpage, flush_dcache_page, local_flush_cache_range and local_flush_cache_page may be used from modules. Export them. Cc: stable@vger.kernel.org Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
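The change itself is the usual one-line export next to each definition, for example (body elided):

    void flush_dcache_page(struct page *page)
    {
        /* ... existing implementation ... */
    }
    EXPORT_SYMBOL(flush_dcache_page);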
2017-07-28xtensa: fix cache aliasing handling code for WT cacheMax Filippov
Currently building kernel for xtensa core with aliasing WT cache fails with the following messages:

  mm/memory.c:2152: undefined reference to `flush_dcache_page'
  mm/memory.c:2332: undefined reference to `local_flush_cache_page'
  mm/memory.c:1919: undefined reference to `local_flush_cache_range'
  mm/memory.c:4179: undefined reference to `copy_to_user_page'
  mm/memory.c:4183: undefined reference to `copy_from_user_page'

This happens because implementation of these functions is only compiled when data cache is WB, which looks wrong: even when data cache doesn't need flushing it still needs invalidation. The functions like __flush_[invalidate_]dcache_* are correctly defined for both WB and WT caches (and even if they weren't that'd still be ok, just slower).

Fix this by providing the same implementation of the above functions for both WB and WT cache.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-01-24xtensa: migrate exception table users off module.h and onto extable.hPaul Gortmaker
This file was only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h" so now move over to that and avoid all the extra header content in module.h that we don't really need to compile this file. Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: linux-xtensa@linux-xtensa.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2016-12-15xtensa: enable HAVE_DMA_CONTIGUOUSMax Filippov
Enable HAVE_DMA_CONTIGUOUS, reserve contiguous memory at bootmem_init, use dma_alloc_from_contiguous and dma_release_from_contiguous in xtensa_dma_alloc/free. This allows for big contiguous DMA buffer allocation from designated area configured in the device tree. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
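A rough sketch of the allocation path this enables, assuming the dma_alloc_from_contiguous(dev, count, order) / dma_release_from_contiguous(dev, page, count) interfaces of that era; this is an illustration, not the literal xtensa patch:

    static void *dma_alloc_sketch(struct device *dev, size_t size,
                                  dma_addr_t *handle, gfp_t gfp)
    {
        size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
        struct page *page;

        page = dma_alloc_from_contiguous(dev, count, get_order(size));
        if (!page)
            page = alloc_pages(gfp, get_order(size));   /* fallback path */
        if (!page)
            return NULL;

        *handle = page_to_phys(page);
        return page_address(page);   /* plus any uncached remapping the arch needs */
    }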
2016-10-05Merge tag 'xtensa-20161005' of git://github.com/jcmvbkbc/linux-xtensaLinus Torvalds
Pull Xtensa updates from Max Filippov:
 "Updates for the xtensa architecture. It is a combined set of patches for 4.8 that never got to the mainline and new patches for 4.9.

  - add new kernel memory layouts for MMUv3 cores: with 256MB and 512MB KSEG size, starting at physical address other than 0
  - make kernel load address configurable
  - clean up kernel memory layout macros
  - drop sysmem early allocator and switch to memblock
  - enable kmemleak and memory reservation from the device tree
  - wire up new syscalls: userfaultfd, membarrier, mlock2, copy_file_range, preadv2 and pwritev2
  - add new platform: Cadence Configurable System Platform (CSP) and new core variant for it: xt_lnx
  - rearrange CCOUNT calibration code, make most of it generic
  - improve machine reset code (XTFPGA now reboots reliably with MMUv3 cores)
  - provide default memmap command line option for configurations without device tree support
  - ISS fixes: simdisk is now capable of using highmem pages, panic correctly terminates simulator"

* tag 'xtensa-20161005' of git://github.com/jcmvbkbc/linux-xtensa: (24 commits)
  xtensa: disable MMU initialization option on MMUv2 cores
  xtensa: add default memmap and mmio32native options to defconfigs
  xtensa: add default memmap option to common_defconfig
  xtensa: add default memmap option to iss_defconfig
  xtensa: ISS: allow simdisk to use high memory buffers
  xtensa: ISS: define simc_exit and use it instead of inline asm
  xtensa: xtfpga: group platform_* functions together
  xtensa: rearrange CCOUNT calibration
  xtensa: xtfpga: use clock provider, don't update DT
  xtensa: Tweak xuartps UART driver Rx watermark for Cadence CSP config.
  xtensa: initialize MMU before jumping to reset vector
  xtensa: fix icountlevel setting in cpu_reset
  xtensa: extract common CPU reset code into separate function
  xtensa: Added Cadence CSP kernel configuration for Xtensa
  xtensa: fix default kernel load address
  xtensa: wire up new syscalls
  xtensa: support reserved-memory DT node
  xtensa: drop sysmem and switch to memblock
  xtensa: minimize use of PLATFORM_DEFAULT_MEM_{ADDR,SIZE}
  xtensa: cleanup MMU setup and kernel layout macros
  ...
2016-07-26mm: do not pass mm_struct into handle_mm_faultKirill A. Shutemov
We always have vma->vm_mm around. Link: http://lkml.kernel.org/r/1466021202-61880-8-git-send-email-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
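The per-architecture part of the change is mechanical; in a fault handler like xtensa's do_page_fault() the call site changes along these lines (sketch):

    -	fault = handle_mm_fault(mm, vma, address, flags);
    +	fault = handle_mm_fault(vma, address, flags);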
2016-07-24xtensa: support reserved-memory DT nodeMax Filippov
This allows reserving regions of physical memory from the device tree. See Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt for more details. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-07-24xtensa: drop sysmem and switch to memblockMax Filippov
Memblock is the standard kernel boot-time memory tracker/allocator. Use it instead of the custom sysmem allocator. This allows using kmemleak, CMA and device tree memory reservation. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
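For orientation, the old sysmem operations correspond roughly to the standard memblock calls (an illustrative mapping, not the literal patch):

    memblock_add(start, size);      /* register a RAM bank (was add_sysmem_bank) */
    memblock_reserve(start, size);  /* carve out a region  (was mem_reserve)     */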
2016-07-24xtensa: add alternative kernel memory layoutsMax Filippov
MMUv3 is able to support low memory bigger than 128MB. Implement 256MB and 512MB KSEG layouts:
 - add Kconfig selector for KSEG layout;
 - add KSEG base address, size and alignment definitions to arch/xtensa/include/asm/kmem_layout.h;
 - use new definitions in TLB initialization;
 - add build time memory map consistency checks.

See Documentation/xtensa/mmu.txt for the details of new memory layouts.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-07-24xtensa: move kernel mapping addresses into kmem_layout.hMax Filippov
Create a header dedicated to memory layout definitions. Include it from places where these definitions are needed. Express vmalloc area address, VIRTUAL_MEMORY_ADDRESS and KERNELOFFSET through KSEG address. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-03-20Merge tag 'xtensa-next-20160320' of git://github.com/czankel/xtensa-linuxLinus Torvalds
Pull Xtensa updates from Chris Zankel:
 "Xtensa improvements for 4.6:

  - control whether perf IRQ is treated as NMI from Kconfig
  - implement ioremap for regions outside KIO segment
  - fix ISS serial port behaviour when EOF is reached
  - fix preemption in {clear,copy}_user_highpage
  - fix endianness issues for XTFPGA devices, big-endian cores are now fully functional
  - clean up debug infrastructure and add support for hardware breakpoints and watchpoints
  - add processor configurations for Three Core HiFi-2 MX and HiFi3 cpus"

* tag 'xtensa-next-20160320' of git://github.com/czankel/xtensa-linux:
  xtensa: add test_kc705_hifi variant
  xtensa: add Three Core HiFi-2 MX Variant.
  xtensa: support hardware breakpoints/watchpoints
  xtensa: use context structure for debug exceptions
  xtensa: remove remaining non-functional KGDB bits
  xtensa: clear all DBREAKC registers on start
  xtensa: xtfpga: fix earlycon endianness
  xtensa: xtfpga: fix i2c controller register width and endianness
  xtensa: xtfpga: fix ethernet controller endianness
  xtensa: xtfpga: fix serial port register width and endianness
  xtensa: define CONFIG_CPU_{BIG,LITTLE}_ENDIAN
  xtensa: fix preemption in {clear,copy}_user_highpage
  xtensa: ISS: don't hang if stdin EOF is reached
  xtensa: support ioremap for memory outside KIO region
  xtensa: use XTENSA_INT_LEVEL macro in asm/timex.h
  xtensa: make fake NMI configurable
2016-03-17mm: remove VM_FAULT_MINORJan Kara
The define has a comment from Nick Piggin from 2007:

  /* For backwards compat. Remove me quickly. */

I guess 9 years is not too hurried a sense of 'quickly', even by kernel standards.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-03-11xtensa: fix preemption in {clear,copy}_user_highpageMax Filippov
Disabling page faults makes little sense there; disabling preemption is what was meant.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
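The fix is essentially this substitution around the temporary-mapping window (a sketch of the pattern, not the full diff):

    -	pagefault_disable();
    +	preempt_disable();
    	/* set up the TLBTEMP mapping, clear/copy the page ... */
    -	pagefault_enable();
    +	preempt_enable();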
2016-01-15mm: differentiate page_mapped() from page_mapcount() for compound pagesKirill A. Shutemov
Let's define page_mapped() to be true for compound pages if any sub-pages of the compound page is mapped (with PMD or PTE). On other hand page_mapcount() return mapcount for this particular small page. This will make cases like page_get_anon_vma() behave correctly once we allow huge pages to be mapped with PTE. Most users outside core-mm should use page_mapcount() instead of page_mapped(). Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Tested-by: Sasha Levin <sasha.levin@oracle.com> Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Jerome Marchand <jmarchan@redhat.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Steve Capper <steve.capper@linaro.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-11xtensa: support ioremap for memory outside KIO regionMax Filippov
Map physical memory outside KIO region into the vmalloc area. Unmap it with vunmap. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2015-08-17xtensa: count software page fault perf eventsMax Filippov
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
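The commit body is terse; for context, counting software page-fault events from a fault handler uses the standard perf hooks, e.g. (illustrative):

    perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
    /* and, depending on the outcome of handle_mm_fault(): */
    perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
    perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);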
2015-05-19mm/fault, arch: Use pagefault_disable() to check for disabled pagefaults in the handlerDavid Hildenbrand
Introduce faulthandler_disabled() and use it to check for irq context and disabled pagefaults (via pagefault_disable()) in the pagefault handlers.

Please note that we keep the in_atomic() checks in place - to detect whether in irq context (in which case preemption is always properly disabled).

In contrast, preempt_disable() should never be used to disable pagefaults. With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt counter, and therefore the result of in_atomic() differs. We validate that condition by using might_fault() checks when calling might_sleep(). Therefore, add a comment to faulthandler_disabled(), describing why this is needed.

faulthandler_disabled() and pagefault_disable() are defined in linux/uaccess.h, so let's properly add that include to all relevant files.

This patch is based on a patch from Thomas Gleixner.

Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-7-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
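Typical use in an architecture fault handler looks roughly like this (sketch; bad_page_fault() stands in for the arch's no-context/fixup path):

    /* Fall back to fixups if page faults are disabled or there is no mm. */
    if (faulthandler_disabled() || !mm) {
        bad_page_fault(regs, address, SIGSEGV);
        return;
    }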
2015-05-19sched/preempt, mm/kmap: Explicitly disable/enable preemption in kmap_atomic_*David Hildenbrand
The existing code relies on pagefault_disable() implicitly disabling preemption, so that no schedule will happen between kmap_atomic() and kunmap_atomic(). Let's make this explicit, to prepare for pagefault_disable() not touching preemption anymore. Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: David.Laight@ACULAB.COM Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: airlied@linux.ie Cc: akpm@linux-foundation.org Cc: benh@kernel.crashing.org Cc: bigeasy@linutronix.de Cc: borntraeger@de.ibm.com Cc: daniel.vetter@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: hocko@suse.cz Cc: hughd@google.com Cc: mst@redhat.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: schwidefsky@de.ibm.com Cc: yang.shi@windriver.com Link: http://lkml.kernel.org/r/1431359540-32227-5-git-send-email-dahi@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
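The resulting ordering in the arch kmap_atomic()/kunmap_atomic() pair is roughly (fragment, not the full functions):

    /* kmap_atomic() entry: */
    preempt_disable();      /* now explicit, no longer implied by the next call */
    pagefault_disable();
    /* ... establish the per-CPU fixmap mapping ... */

    /* kunmap_atomic() exit, mirrored: */
    /* ... tear down the mapping ... */
    pagefault_enable();
    preempt_enable();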
2015-01-29vm: add VM_FAULT_SIGSEGV handling supportLinus Torvalds
The core VM already knows about VM_FAULT_SIGBUS, but cannot return a "you should SIGSEGV" error, because the SIGSEGV case was generally handled by the caller - usually the architecture fault handler. That results in lots of duplication - all the architecture fault handlers end up doing very similar "look up vma, check permissions, do retries etc" - but it generally works.

However, there are cases where the VM actually wants to SIGSEGV, and applications _expect_ SIGSEGV. In particular, when accessing the stack guard page, libsigsegv expects a SIGSEGV. And it usually got one, because the stack growth is handled by that duplicated architecture fault handler.

However, when the generic VM layer started propagating the error return from the stack expansion in commit fee7e49d4514 ("mm: propagate error from stack expansion even for guard page"), that now exposed the existing VM_FAULT_SIGBUS result to user space. And user space really expected SIGSEGV, not SIGBUS.

To fix that case, we need to add a VM_FAULT_SIGSEGV, and teach all those duplicate architecture fault handlers about it. They all already have the code to handle SIGSEGV, so it's about just tying that new return value to the existing code, but it's all a bit annoying.

This is the mindless minimal patch to do this. A more extensive patch would be to try to gather up the mostly shared fault handling logic into one generic helper routine, and long-term we really should do that cleanup.

Just from this patch, you can generally see that most architectures just copied (directly or indirectly) the old x86 way of doing things, but in the meantime that original x86 model has been improved to hold the VM semaphore for shorter times etc and to handle VM_FAULT_RETRY and other "newer" things, so it would be a good idea to bring all those improvements to the generic case and teach other architectures about them too.

Reported-and-tested-by: Takashi Iwai <tiwai@suse.de>
Tested-by: Jan Engelhardt <jengelh@inai.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # "s390 still compiles and boots"
Cc: linux-arch@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
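In each arch fault handler the new return value plugs into the existing SIGSEGV path, along the lines of (sketch):

    if (unlikely(fault & VM_FAULT_ERROR)) {
        if (fault & VM_FAULT_OOM)
            goto out_of_memory;
        else if (fault & VM_FAULT_SIGSEGV)
            goto bad_area;          /* new: raise SIGSEGV instead of SIGBUS */
        else if (fault & VM_FAULT_SIGBUS)
            goto do_sigbus;
        BUG();
    }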
2014-10-21xtensa: nommu: clean up memory map dumpMax Filippov
The noMMU configuration doesn't use a special area for vmalloc allocations, so don't print it in the memory map. PAGE_OFFSET is fixed to 0 in noMMU; use min_low_pfn and max_low_pfn for the lowmem range display. Make all XCHAL_KSEG_* constants unsigned long for consistency with other addresses.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-10-21xtensa: nommu: reserve memory below PLATFORM_DEFAULT_MEM_STARTMax Filippov
Memory accounting code can't handle pages below PLATFORM_DEFAULT_MEM_START. Reserve those pages if they exist. When PLATFORM_DEFAULT_MEM_START is zero, reserve one page at address 0 to make sure that successful memory allocations don't return NULL.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-10-21xtensa: nommu: don't build most of the cache flushing codeMax Filippov
Most cache flushing code is only relevant for MMU. Don't build it for nommu configuration. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-08-14xtensa: support highmem in aliasing cache flushing codeMax Filippov
Use __flush_invalidate_dcache_page_alias with alias set to color of the page physical address instead of __flush_invalidate_dcache_page: this works for high memory pages and mapping/unmapping to the TLBTEMP area is virtually free. Allow building configurations with aliasing cache and highmem enabled. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-08-14xtensa: support aliasing cache in kmapMax Filippov
Define ARCH_PKMAP_COLORING and provide corresponding macro definitions on cores with aliasing data cache. Instead of single last_pkmap_nr maintain an array last_pkmap_nr_arr of pkmap counters for each page color. Make sure that kmap maps physical page at virtual address with color matching its physical address. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-08-14xtensa: support aliasing cache in k[un]map_atomicMax Filippov
Map high memory pages at virtual addresses with color that match color of their physical address. Existing cache alias management mechanisms may be used with such pages. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-08-14xtensa: implement clear_user_highpage and copy_user_highpageMax Filippov
Existing clear_user_page and copy_user_page cannot be used with highmem because they calculate physical page address from its virtual address and do it incorrectly in case of high memory page mapped with kmap_atomic. Also kmap is not needed, as most likely userspace mapping color would be different from the kmapped color. Provide clear_user_highpage and copy_user_highpage functions that determine if temporary mapping is needed for the pages. Move most of the logic of the former clear_user_page and copy_user_page to xtensa/mm/cache.c only leaving temporary mapping setup, invalidation and clearing/copying in the xtensa/mm/misc.S. Rename these functions to clear_page_alias and copy_page_alias. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-08-14xtensa: allow fixmap and kmap span more than one page tableMax Filippov
To support aliasing cache both kmap region sizes are multiplied by the number of data cache colors. After that expansion page tables that cover kmap regions may become larger than one page. Correctly allocate and initialize page tables in this case. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-08-14xtensa: make fixmap region addressing grow with indexMax Filippov
It's much easier to reason about alignment and coloring of regions located in the fixmap when fixmap index is just a PFN within the fixmap region. Change fixmap addressing so that index 0 corresponds to FIXADDR_START instead of the FIXADDR_TOP. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
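Concretely, the index-to-address mapping flips from counting down from the top of the fixmap to counting up from its start; roughly (sketch, not the literal patch):

    -	#define __fix_to_virt(idx)  (FIXADDR_TOP - ((idx) << PAGE_SHIFT))
    +	#define __fix_to_virt(idx)  (FIXADDR_START + ((idx) << PAGE_SHIFT))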
2014-06-09xtensa: fix sysmem reservation at the end of existing blockMax Filippov
When a sysmem reservation occurs exactly at the end of an existing block, that block is deleted, because it is incorrectly included in the range of memblocks to remove. Fix that by skipping such a block.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-06xtensa: add HIGHMEM supportMax Filippov
Introduce fixmap area just below the vmalloc region. Use it for atomic mapping of high memory pages. High memory on cores with cache aliasing is not supported and is still to be implemented. Fail build for such configurations for now. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-06xtensa: optimize local_flush_tlb_kernel_rangeMax Filippov
Don't flush whole TLB if only a small kernel range is requested. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02xtensa: dump sysmem from the bootmem_initMax Filippov
Debug dump of physical memory configuration. Useful for inspection of resulting memory map, esp. in the presence of memmap= kernel option. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02xtensa: handle memmap kernel optionMax Filippov
This option is useful for reserving memory regions for secondary cores in AMP configurations. Implement the following memmap variants:
 - memmap=nn[KMG]@ss[KMG]: force usage of a specific region of memory;
 - memmap=nn[KMG]$ss[KMG]: mark specified memory as reserved;
 - memmap=nn[KMG]: set end of memory.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
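Example command lines for the three forms (addresses and sizes are made up; note that '$' may need escaping in the bootloader environment):

    memmap=96M@0              use only the first 96MB of RAM
    memmap=16M$0x06000000     reserve 16MB at 0x06000000, e.g. for another AMP core
    memmap=64M                cap the end of memory at 64MB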
2014-04-02xtensa: keep sysmem banks ordered in mem_reserveMax Filippov
Rewrite mem_reserve so that it keeps bank order. Also make its return code more traditional. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02xtensa: keep sysmem banks ordered in add_sysmem_bankMax Filippov
Rewrite add_sysmem_bank so that it keeps bank order and merges adjacent/overlapping banks. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02xtensa: split bootparam and kernel meminfoMax Filippov
Bootparam meminfo is a bootloader ABI, while kernel meminfo is for kernel bookkeeping; keep them separate. The kernel doesn't care about memory region types, so drop the type field and don't pass it to add_sysmem_bank. Move the kernel sysmem structures and prototypes to asm/sysmem.h, and the sysmem variable and add_sysmem_bank to mm/init.c.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-02-24Merge tag 'xtensa-for-next-20140221-1' into for_nextChris Zankel
Xtensa fixes for 3.14:
 - allow booting xtfpga on boards with new uBoot and >128MBytes memory;
 - drop nonexistent GPIO32 support from fsf variant;
 - don't select USE_GENERIC_SMP_HELPERS;
 - enable common clock framework support, set up ethoc clock on xtfpga;
 - wire up sched_setattr and sched_getattr syscalls.

Signed-off-by: Chris Zankel <chris@zankel.net>
2014-02-21xtensa: don't pass high memory to bootmem allocatorMax Filippov
This fixes a panic when booting on a machine with more than 128M of memory passed from the bootloader.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-01-19xtensa: fix warning '"CONFIG_OF" is not defined'Max Filippov
The warning only shows up when building MMUv3 configuration with OF support disabled. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>