path: root/include/asm-powerpc/pgtable-ppc64.h
Age | Commit message | Author
2008-07-14  Merge commit 'origin/HEAD' into test-merge  (Benjamin Herrenschmidt)
Manual fixup of include/asm-powerpc/pgtable-ppc64.h
2008-07-09  powerpc/mm: Define flags for Strong Access Ordering  (Dave Kleikamp)
This patch defines:

  - PROT_SAO, which is passed into mmap() and mprotect() in the prot field
  - VM_SAO in vma->vm_flags, and
  - _PAGE_SAO, the combination of WIMG bits in the pte that enables strong
    access ordering for the page.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
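For concreteness, a sketch of what those three definitions could look like in the headers. The numeric values are assumptions recalled from headers of that era, not quoted from the patch itself:

    /* userspace-visible mmap()/mprotect() protection flag (assumed value) */
    #define PROT_SAO        0x10

    /* vma->vm_flags bit (assumed value) */
    #define VM_SAO          0x20000000

    /* WIMG combination in the pte that selects Strong Access Ordering */
    #define _PAGE_SAO       (_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)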
2008-07-08  Correct hash flushing from huge_ptep_set_wrprotect()  (David Gibson)
As Andy Whitcroft recently pointed out, the current powerpc version of huge_ptep_set_wrprotect() has a bug. It just calls ptep_set_wrprotect(), which in turn calls pte_update() and then hpte_need_flush() with the 'huge' argument set to 0. This causes hpte_need_flush() to flush the wrong hash entries (if any). Andy's fix for this is already in the powerpc tree as commit 016b33c4958681c24056abed8ec95844a0da80a3.

I have confirmed this is a real bug, not masked by some other synchronization, with a new testcase for libhugetlbfs: a process writes to a (MAP_PRIVATE) hugepage mapping, fork()s, then alters the mapping, and the child incorrectly sees the second write.

Therefore, this should be fixed for 2.6.26 and for the stable tree. Here is a suitable patch for 2.6.26, which I think will also be suitable for the stable tree (neither of the headers in question has been changed much recently).

It is cut down slightly from Andy's original version, in that it does not include a 32-bit version of huge_ptep_set_wrprotect(). Currently, hugepages are not supported on any 32-bit powerpc platform. When they are, a suitable 32-bit version can be added - the only 32-bit hardware which supports hugepages does not use the conventional hashtable MMU and so will have different needs anyway.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
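A minimal userspace sketch of that failure mode. This is not the actual libhugetlbfs testcase; MAP_HUGETLB is a later kernel convenience used here for brevity, and the 16M hugepage size is an assumption:

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define HPAGE_SIZE (16UL * 1024 * 1024)   /* assumed hugepage size */

    int main(void)
    {
            volatile int *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                                   -1, 0);
            if (p == MAP_FAILED)
                    return 1;
            *p = 1;                         /* first write, before fork() */
            if (fork() == 0) {
                    sleep(1);               /* let the parent write again */
                    _exit(*p == 1 ? 0 : 2); /* buggy kernels may see 2 here */
            }
            *p = 2;                         /* should COW in the parent only */
            int status;
            wait(&status);
            return WEXITSTATUS(status);     /* 0 = correct, 2 = bug reproduced */
    }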
2008-07-01  powerpc: Add 64 bit version of huge_ptep_set_wrprotect  (Andy Whitcroft)
The implementation of huge_ptep_set_wrprotect() directly calls ptep_set_wrprotect() to mark a hugepte write protected. However, this call is not appropriate on ppc64 kernels, as that is a small-page-only implementation. This can lead to the hash not being flushed correctly when a mapping is converted to COW, allowing processes to continue using the original copy.

Currently huge_ptep_set_wrprotect() unconditionally calls ptep_set_wrprotect(). This is fine on ppc32 kernels, as the call is generic. On 64 bit this is implemented as:

    pte_update(mm, addr, ptep, _PAGE_RW, 0);

On ppc64 this last parameter is the page size and is passed directly on to hpte_need_flush():

    hpte_need_flush(mm, addr, ptep, old, huge);

And this directly affects the page size we pass to flush_hash_page():

    flush_hash_page(vaddr, rpte, psize, ssize, 0);

As this changes the way the hash is calculated, we will flush the wrong pages, potentially leaving live hashes for the original page. Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific headers.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
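A sketch of the ppc64-specific version this adds, modelled on the mainline fix; treat the details as indicative rather than verbatim:

    static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
                                               unsigned long addr, pte_t *ptep)
    {
            if ((pte_val(*ptep) & _PAGE_RW) == 0)
                    return;
            /* the final argument flags this as a huge pte, so the hash
             * flush uses the hugepage size, not the base page size */
            pte_update(mm, addr, ptep, _PAGE_RW, 1);
    }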
2008-06-30  powerpc: Free a PTE bit on ppc64 with 64K pages  (Benjamin Herrenschmidt)
This frees a PTE bit when using 64K pages on ppc64. This is done by getting rid of the separate _PAGE_HASHPTE bit. Instead, we just test if any of the 16 sub-page bits is set. For non-combo pages (i.e. real 64K pages), we set SUB0 and the location encoding in that field.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
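In header terms this amounts to deriving the old flag from the sub-page bits instead of storing it separately; a sketch, with mask values that are assumptions recalled from the era's pgtable-64k.h:

    /* 16 sub-page presence bits; any set bit means a hash entry exists */
    #define _PAGE_HPTE_SUB   0x0ffff000UL   /* assumed bit positions */
    #define _PAGE_HPTE_SUB0  0x08000000UL   /* SUB0, set for real 64K pages */

    /* the dedicated flag is gone: "has a hash PTE" is now a derived test */
    #define _PAGE_HASHPTE    _PAGE_HPTE_SUB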
2008-05-15  [POWERPC] vmemmap fixes to use smaller pages  (Benjamin Herrenschmidt)
This changes vmemmap to use a different region (region 0xf) of the address space, and to configure the page size of that region dynamically at boot.

The problem with the current approach of always using 16M pages is that it's not well suited to machines that have small amounts of memory, such as small partitions on pseries, or PS3s. In fact, on the PS3, failure to allocate the 16M page backing vmemmap tends to prevent hotplugging the HV's "additional" memory, thus limiting the available memory even more; from my experience, down to something like 80M total, which makes it really not very usable.

The logic used by my patch to choose the vmemmap page size is:

  - If 16M pages are available and there's 1G or more RAM at boot, use that size.
  - Else if 64K pages are available, use that.
  - Else use 4K pages.

I've tested on a POWER6 (16M pages) and on an iSeries POWER3 (4K pages), and it seems to work fine.

Note that I intend to change the way we organize the kernel regions & SLBs, so the actual region will change from 0xf back to something else at one point, as I simplify the SLB miss handler, but that will be for a later patch.

Signed-off-by: Paul Mackerras <paulus@samba.org>
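That selection reads naturally as boot-time C; a sketch using the hash MMU page-size table (mmu_psize_defs and mmu_vmemmap_psize are real names of the era; lmb_phys_mem_size() was the pre-memblock way to ask for boot memory size, and its use here is an assumption):

    /* pick the vmemmap backing page size once, at boot */
    if (mmu_psize_defs[MMU_PAGE_16M].shift &&
        lmb_phys_mem_size() >= 0x40000000UL)        /* 1G or more of RAM */
            mmu_vmemmap_psize = MMU_PAGE_16M;
    else if (mmu_psize_defs[MMU_PAGE_64K].shift)
            mmu_vmemmap_psize = MMU_PAGE_64K;
    else
            mmu_vmemmap_psize = MMU_PAGE_4K;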
2008-04-28  mm: introduce pte_special pte bit  (Nick Piggin)
s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory model (which is more dynamic than most). Instead, they had proposed to implement it with an additional path through vm_normal_page(), using a bit in the pte to determine whether or not the page should be refcounted:

    vm_normal_page()
    {
            ...
            if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                    if (vma->vm_flags & VM_MIXEDMAP) {
    #ifdef s390
                            if (!mixedmap_refcount_pte(pte))
                                    return NULL;
    #else
                            if (!pfn_valid(pfn))
                                    return NULL;
    #endif
                            goto out;
                    }
            ...
    }

This is fine; however, if we are allowed to use a bit in the pte to determine refcountedness, we can use that to _completely_ replace all the vma-based schemes. So instead of adding more cases to the already complex vma-based scheme, we can have a clearly separate and simple pte-based scheme (and get slightly better code generation in the process):

    vm_normal_page()
    {
    #ifdef s390
            if (!mixedmap_refcount_pte(pte))
                    return NULL;
            return pte_page(pte);
    #else
            ...
    #endif
    }

And finally, we may rather make this concept usable by any architecture rather than making it s390-only, so implement a new type of pte state for this. Unfortunately the old vma-based code must stay, because some architectures may not be able to spare pte bits. This makes vm_normal_page a little bit more ugly than we would like, but the two cases are clearly separate.

So introduce a pte_special pte state, and use it in mm/memory.c. It is currently a noop for all architectures, so this doesn't actually result in any compiled code changes to mm/memory.o.

BTW: I haven't put vm_normal_page() into arch code as per an earlier suggestion. The reason is that, regardless of where vm_normal_page is actually implemented, the *abstraction* is still exactly the same. Also, while it depends on whether the architecture has pte_special or not, those are the only two possible cases, and it really isn't an arch-specific function -- the role of the arch code should be to provide primitive functions and accessors with which to build the core code; pte_special does that. We do not want architectures to know or care about vm_normal_page itself, and we definitely don't want them being able to invent something new there out of sight of mm/ code. If we made vm_normal_page an arch function, then we'd have to make vm_insert_mixed (next patch) an arch function too. So I don't think moving it to arch code fundamentally improves any abstractions, while it does practically make the code more difficult to follow, for both mm and arch developers, and easier to misuse.

[akpm@linux-foundation.org: build fix]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
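Since the commit makes pte_special() a no-op everywhere initially, the per-architecture stubs are tiny; a sketch of the shape, illustrative rather than lifted from any specific arch header:

    /* no pte bit spared yet: nothing is "special", and marking is a no-op */
    static inline int pte_special(pte_t pte)        { return 0; }
    static inline pte_t pte_mkspecial(pte_t pte)    { return pte; }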
2007-10-16  ppc64: SPARSEMEM_VMEMMAP support  (Andy Whitcroft)
Enable virtual memmap support for SPARSEMEM on PPC64 systems. Slice a 16th off the end of the linear mapping space and use that to hold the vmemmap. Uses the same size mapping as used in the linear 1:1 kernel mapping.

[pbadari@gmail.com: fix warning]
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
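Carving "a 16th off the end" is just a pair of address-space constants; a sketch with hypothetical macro names (the real header spells these differently):

    /* hypothetical names: reserve the top 1/16th of the linear
     * mapping region to back the virtual memmap */
    #define VMEMMAP_SIZE    (LINEAR_REGION_SIZE >> 4)
    #define VMEMMAP_BASE    (LINEAR_REGION_END - VMEMMAP_SIZE)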
2007-07-17  mm: remove ptep_test_and_clear_dirty and ptep_clear_flush_dirty  (Martin Schwidefsky)
Nobody is using ptep_test_and_clear_dirty and ptep_clear_flush_dirty. Remove the functions from all architectures.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-16  Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc  (Linus Torvalds)

* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (209 commits)
  [POWERPC] Create add_rtc() function to enable the RTC CMOS driver
  [POWERPC] Add H_ILLAN_ATTRIBUTES hcall number
  [POWERPC] xilinxfb: Parameterize xilinxfb platform device registration
  [POWERPC] Oprofile support for Power 5++
  [POWERPC] Enable arbitary speed tty ioctls and split input/output speed
  [POWERPC] Make drivers/char/hvc_console.c:khvcd() static
  [POWERPC] Remove dead code for preventing pread() and pwrite() calls
  [POWERPC] Remove unnecessary #undef printk from prom.c
  [POWERPC] Fix typo in Ebony default DTS
  [POWERPC] Check for NULL ppc_md.init_IRQ() before calling
  [POWERPC] Remove extra return statement
  [POWERPC] pasemi: Don't auto-select CONFIG_EMBEDDED
  [POWERPC] pasemi: Rename platform
  [POWERPC] arch/powerpc/kernel/sysfs.c: Move NUMA exports
  [POWERPC] Add __read_mostly support for powerpc
  [POWERPC] Modify sched_clock() to make CONFIG_PRINTK_TIME more sane
  [POWERPC] Create a dummy zImage if no valid platform has been selected
  [POWERPC] PS3: Bootwrapper support.
  [POWERPC] powermac i2c: Use mutex
  [POWERPC] Schedule removal of arch/ppc
  ...

Fixed up conflicts manually in:
  Documentation/feature-removal-schedule.txt
  arch/powerpc/kernel/pci_32.c
  arch/powerpc/kernel/pci_64.c
  include/asm-powerpc/pci.h
and asked the powerpc people to double-check the result.
2007-07-16  page table handling cleanup  (Jan Beulich)
Kill pte_rdprotect(), pte_exprotect(), pte_mkread(), pte_mkexec(), pte_read(), pte_exec(), and pte_user() except where arch-specific code is making use of them.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-11  Merge branch 'for-2.6.23' into merge  (Paul Mackerras)
2007-06-16  Rework ptep_set_access_flags and fix sun4c  (Benjamin Herrenschmidt)
Some changes done a while ago to avoid pounding on ptep_set_access_flags and update_mmu_cache in some race situations break sun4c, which requires update_mmu_cache() to always be called on minor faults.

This patch reworks ptep_set_access_flags() semantics, implementations, and callers so that it's now responsible for returning whether an update is necessary or not (basically whether the PTE actually changed). This allows fixing the sparc implementation to always return 1 on sun4c.

[akpm@linux-foundation.org: fixes, cleanups]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: David Miller <davem@davemloft.net>
Cc: Mark Fortescue <mark@mtfhpc.demon.co.uk>
Acked-by: William Lee Irwin III <wli@holomorphy.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
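Under the new contract the arch hook reports whether the PTE actually changed; a sketch modelled on the powerpc flavour of the era (treat the exact helper names as indicative):

    #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
    #define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
    ({                                                                        \
            int __changed = !pte_same(*(__ptep), __entry);                    \
            if (__changed) {                                                  \
                    __ptep_set_access_flags(__ptep, __entry);                 \
                    flush_tlb_page_nohash(__vma, __address);                  \
            }                                                                 \
            __changed; /* 1 => caller must call update_mmu_cache() */         \
    })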
2007-06-14  [POWERPC] Start factoring pgtable-ppc32.h and pgtable-ppc64.h  (David Gibson)
This factors some things defined in both pgtable-ppc32.h and pgtable-ppc64.h into the common part of asm-powerpc/pgtable.h. These are all things which have essentially identical definitions, and which by their nature are very unlikely ever to need different definitions in the two cases.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14  [POWERPC] Rewrite IO allocation & mapping on powerpc64  (Benjamin Herrenschmidt)
This rewrites pretty much from scratch the handling of MMIO and PIO space allocations on powerpc64. The main goals are:

  - Get rid of imalloc and use more common code where possible
  - Simplify the current mess so that PIO space is allocated and mapped in a
    single place for PCI bridges
  - Handle allocation constraints of PIO for all bridges, including
    hot-plugged ones, within the 2GB space reserved for IO ports, so that
    devices on hot-plugged buses will now work with drivers that assume IO
    ports fit in an int
  - Clean up and separate tracking of the ISA space in the reserved low 64K
    of IO space. No ISA -> nothing mapped there.

I booted a cell blade with IDE on PIO and MMIO and a dual G5 so far, that's it :-)

With this patch, all allocations are done using the code in mm/vmalloc.c, though we use the low-level __get_vm_area() with explicit start/stop constraints in order to manage separate areas for vmalloc/vmap, ioremap, and PCI IOs. This greatly simplifies a lot of things, as you can see in the diffstat of that patch :-)

A new pair of functions, pcibios_map/unmap_io_space(), now replaces all of the previous code that used to manipulate PCI IO space. The allocation is done at mapping time, which is now called from scan_phb's, just before the devices are probed (instead of after, which is by itself a bug fix). The only other caller is the PCI hotplug code for hot-adding PCI-PCI bridges (slots).

imalloc is gone, as is the "sub-allocation" thing, but I do believe that hotplug should still work, in the sense that the space allocation is always done by the PHB; but if you unmap a child bus of this PHB (which seems to be possible), then the code should properly tear down all the HPTE mappings for that area of the PHB-allocated IO space.

I now always reserve the first 64K of IO space for the bridge with the ISA bus on it. I have moved the code for tracking ISA into a separate file, which should also make it smarter if we are ever capable of hot-unplugging or re-plugging an ISA bridge.

This should have a side effect on platforms like powermac, where VGA IOs will no longer work. This is done on purpose, though, as they would have worked semi-randomly before. The idea at this point is to isolate drivers that might need to access those and fix them by providing a proper function to obtain an offset to the legacy IOs of a given bus.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
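A sketch of the constrained allocation pattern the commit describes. PHB_IO_BASE and PHB_IO_END are recalled from powerpc headers of that era; the surrounding code is illustrative, not the actual pcibios_map_io_space() body:

    /* sketch: carve a bridge's PIO window out of the reserved IO region,
     * using the low-level vmalloc allocator with explicit bounds */
    struct vm_struct *area;

    area = __get_vm_area(size, VM_IOREMAP, PHB_IO_BASE, PHB_IO_END);
    if (area == NULL)
            return -ENOMEM;     /* no room left in the 2GB IO port space */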
2007-05-02  [POWERPC] Remove arch/powerpc's dependence on asm-ppc/pg{alloc,table}.h  (David Gibson)
Currently, all 32-bit powerpc platforms use asm-ppc/pgtable.h and asm-ppc/pgalloc.h, even when otherwise compiled with ARCH=powerpc. Those asm-ppc files are a fairly nasty tangle of #ifdefs, including a bunch of things which shouldn't be necessary any more in arch/powerpc.

Cleaning up that mess is going to take a while, but this patch is a first step. It separates asm-powerpc/pg{alloc,table}.h into 64-bit and 32-bit versions in asm-powerpc, which the basic .h files in asm-powerpc select based on config. We make a few tiny tweaks to the innards of the files along the way, making the outermost ifdefs (double-inclusion protection and __KERNEL__) a little cleaner, and #including asm-generic/pgtable.h from the top-level asm-powerpc/pgtable.h (since both the old 32-bit and 64-bit versions ended with such an #include).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>