2015-07-30ARM64: PCI: do not enable resources on PROBE_ONLY systemsLorenzo Pieralisi
On ARM64 PROBE_ONLY PCI systems resources are not currently claimed, therefore they can't be enabled since they do not have a valid parent pointer; this in turn prevents enabling PCI devices on ARM64 PROBE_ONLY systems, causing PCI device initialization to fail.

To solve this issue, resources must be claimed when devices are added on PROBE_ONLY systems, which ensures that the resource hierarchy is validated and the resource tree is sane, but this requires changes in the ARM64 resource management that can adversely affect existing PCI set-ups (claiming resources on !PROBE_ONLY systems might break existing ARM64 PCI platform implementations).

As a temporary solution in preparation for a proper resource claiming implementation in the ARM64 core, this patch enables PCI PROBE_ONLY systems on ARM64 by adding a pcibios_enable_device() arch implementation that simply refrains from enabling resources on PROBE_ONLY systems (mirroring ARM behaviour). This is always safe because on PROBE_ONLY systems the configuration space set-up can be considered immutable, and it paves the way for proper resource claiming that would finally validate the PCI resource tree in the ARM64 arch implementation on PROBE_ONLY systems.

For !PROBE_ONLY systems, resource enablement in pcibios_enable_device() on ARM64 is implemented as in the current PCI core, leaving the behaviour unchanged.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-30arm64: cmpxchg: truncate sub-word signed types before comparisonWill Deacon
When performing a cmpxchg operation on a signed sub-word type (e.g. s8), we need to ensure that the upper register bits of the "old" value used for comparison are zeroed, otherwise we may erroneously fail the cmpxchg, which may even be interpreted as success by the caller (if the compiler performs the truncation as part of its check). This has been observed in mod_state, where negative values were causing problems with this_cpu_cmpxchg. This patch fixes the issue by explicitly casting 8-bit and 16-bit "old" values using unsigned types in our cmpxchg wrappers. 32-bit types can be left alone, since the underlying asm makes use of W registers in this case. Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
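As a rough standalone illustration of the problem (the kernel's actual cmpxchg wrappers are asm-based; the code below only demonstrates the sign-extension issue in plain C):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t old = -1;                       /* byte in memory: 0xff */

        /* Promoting the signed value sign-extends it, so the register
         * holds 0xff...ff and no longer matches the zero-extended 0xff
         * that a sub-word load produces. */
        unsigned long as_signed   = (unsigned long)(long)old;
        /* Casting through the unsigned sub-word type first (as the fix
         * does) zero-extends instead. */
        unsigned long as_unsigned = (uint8_t)old;

        printf("sign-extended: %#lx\n", as_signed);
        printf("zero-extended: %#lx\n", as_unsigned);
        return 0;
    }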
2015-07-30arm64: alternative: put secondary CPUs into polling loop during patchWill Deacon
When patching the kernel text with alternatives, we may end up patching parts of the stop_machine state machine (e.g. atomic_dec_and_test in ack_state) and consequently corrupt the instruction stream of any secondary CPUs. This patch passes the cpu_online_mask to stop_machine, forcing all of the CPUs into our own callback which can place the secondary cores into a dumb (but safe!) polling loop whilst the patching is carried out. Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29arm64/Documentation: clarify wording regarding memory below the ImageArd Biesheuvel
Clarify that the memory below the start of the image but inside the region covered by the linear mapping has no special significance to the kernel, and may be used by the firmware provided that it is marked as reserved. Also, fix up some whitespace errors. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29arm64: lse: fix lse cmpxchg code indentationWill Deacon
For some reason, the ll/sc cmpxchg asm is all off to the left and awkward to read in conjunction with the following (correctly indented) LSE version. This patch shifts the ll/sc code back to where it should be. Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29arm64: remove redundant object file listJonas Rabenstein
Commit 4b3dc9679cf7 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs") forces SMP on arm64. To build the necessary objects for SMP, they were added to the arm64-obj-y rule in arch/arm64/kernel/Makefile, without removing the arm64-obj-$(CONFIG_SMP) rule. Remove redundant object file list depending on always-yes CONFIG_SMP in arch/arm64/kernel/Makefile. Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29arm64: remove dead-code depending on CONFIG_UP_LATE_INITJonas Rabenstein
Commit 4b3dc9679cf7 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs") forces SMP on arm64, so CONFIG_UP_LATE_INIT can not be selected anymore. Remove the dead #ifdef block depending on UP_LATE_INIT in arch/arm64/kernel/setup.c. Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> [will: kill do_post_cpus_up_work altogether] Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28arm64: pgtable: fix definition of pte_validWill Deacon
pte_valid should check if the PTE_VALID bit (1 << 0) is set in the pte, so fix the macro definition to use bitwise & instead of logical &&. Signed-off-by: Will Deacon <will.deacon@arm.com>
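A minimal sketch of the difference described above (the PTE_VALID value is taken from the commit text; the exact kernel macro may differ slightly):

    #define PTE_VALID               (1UL << 0)

    /* Broken: logical && only checks that both operands are non-zero,
     * so any non-zero pte value looked "valid". */
    #define pte_valid_broken(pteval)  ((pteval) && PTE_VALID)

    /* Fixed: bitwise & actually tests bit 0. */
    #define pte_valid(pteval)         (!!((pteval) & PTE_VALID))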
2015-07-28arm64: spinlock: fix ll/sc unlock on big-endian systemsWill Deacon
When unlocking a spinlock, we perform a read-modify-write on the owner ticket in order to increment it and store it back with release semantics. In the LL/SC case, we load the 16-bit ticket using a 32-bit load and therefore store back the wrong halfword on a big-endian system, corrupting the lock after the first unlock and killing the system dead. This patch fixes the unlock code to use 16-bit accessors consistently. Signed-off-by: Will Deacon <will.deacon@arm.com>
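A standalone sketch of the underlying hazard (the struct below is illustrative, not the kernel's arch_spinlock_t): a 32-bit load of a 16-bit field only happens to put that field in the low half of the register on little-endian.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ticket {
        uint16_t owner;         /* the halfword the unlock path wants */
        uint16_t next;
    };

    int main(void)
    {
        struct ticket t = { .owner = 0x1111, .next = 0x2222 };
        uint32_t word;

        memcpy(&word, &t, sizeof(word));        /* emulate a 32-bit load */

        /* Prints 0x1111 on little-endian but 0x2222 on big-endian, i.e.
         * the wrong halfword ends up being incremented and stored back,
         * corrupting the lock. */
        printf("low half: %#x\n", (unsigned)(uint16_t)word);
        return 0;
    }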
2015-07-28arm64: Use last level TLBI for user pte changesCatalin Marinas
The flush_tlb_page() function is used on user address ranges when PTEs (or PMDs/PUDs for huge pages) were changed (attributes or clearing). For such cases, it is more efficient to invalidate only the last level of the TLB with the "tlbi vale1is" instruction.

In the TLB shoot-down case, the TLB caching of the intermediate page table levels (pmd, pud, pgd) is handled by __flush_tlb_pgtable() via the __(pte|pmd|pud)_free_tlb() functions and it is not deferred to tlb_finish_mmu() (as of commit 285994a62c80 - "arm64: Invalidate the TLB corresponding to intermediate page table levels"). The tlb_flush() function only needs to invalidate the TLB for the last level of page tables; the __flush_tlb_range() function gains a fourth argument for last level TLBI.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28arm64: Clean up __flush_tlb(_kernel)_range functionsCatalin Marinas
This patch moves the MAX_TLB_RANGE check into the flush_tlb(_kernel)_range functions directly to avoid the underscore-prefixed definitions (and for consistency with a subsequent patch). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28arm64: mm: mark create_mapping as __initMark Rutland
Currently create_mapping is marked with __ref, apparently because it refers to early_alloc. However, create_mapping has no logic to prevent erroneous use of early_alloc after it has been freed, and is only ever called by __init functions anyway. Thus the __ref marker is misleading and unnecessary. Instead, this patch marks create_mapping as __init, resulting in warnings if it is used from a non-__init function, and allowing its memory to be reclaimed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: debug: rename enum debug_el to avoid symbol collisionWill Deacon
lib/list_sort.c defines a 'struct debug_el', where "el" is presumably a contraction of "element". This conflicts with 'enum debug_el' in our asm/debug-monitors.h header file, where "el" stands for Exception Level. The result is build failure when targeting allmodconfig, so rename our enum to 'dbg_active_el' to be slightly more explicit about what it is. Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: mm: add __init section marker to free_initrd_memWang Long
free_initrd_mem() is not needed after booting, so this patch moves it to the __init section. It also makes keep_initrd __initdata, to reduce kernel size. Signed-off-by: Wang Long <long.wanglong@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
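A sketch of where the annotations go (the function body is elided; only the section markers matter for this change):

    static int keep_initrd __initdata;      /* discarded once init memory is freed */

    void __init free_initrd_mem(unsigned long start, unsigned long end)
    {
        /* body unchanged; only the __init/__initdata annotations move */
    }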
2015-07-27arm64: elf: use cpuid_feature_extract_field for hwcap detectionWill Deacon
cpuid_feature_extract_field takes care of the fiddly ID register field sign-extension, so use that instead of rolling our own version. Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: lse: use generic cpufeature detection for LSE atomicsWill Deacon
Rework the cpufeature detection to support ISAR0 and use that for detecting the presence of LSE atomics. Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: kconfig: group the v8.1 features togetherWill Deacon
ARMv8 CPUs do not support any of the v8.1 features, so group them together in Kconfig to make it clear that they're part of 8.1 and not relevant to older cores. Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: lse: rename ARM64_CPU_FEAT_LSE_ATOMICS for consistencyWill Deacon
Other CPU features follow an 'ARM64_HAS_*' naming scheme, so do the same for the LSE atomics. Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: kconfig: select HAVE_CMPXCHG_LOCALWill Deacon
We implement an optimised cmpxchg_local macro, so let the kernel know. Reviewed-by: Steve Capper <steve.capper@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: atomic64_dec_if_positive: fix incorrect branch conditionWill Deacon
If we attempt to atomic64_dec_if_positive on INT_MIN, we will underflow and incorrectly decide that the original parameter was positive. This patch fixes the broken condition code so that we handle this corner case correctly. Reviewed-by: Steve Capper <steve.capper@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: atomics: implement atomic{,64}_cmpxchg using cmpxchgWill Deacon
We don't need duplicate cmpxchg implementations, so use cmpxchg to implement atomic{,64}_cmpxchg, like we do for xchg already. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
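In outline, the change boils down to definitions of roughly this shape (illustrative only; the real macros live in the arm64 atomic headers):

    #define atomic_cmpxchg(v, old, new)     cmpxchg(&((v)->counter), (old), (new))
    #define atomic64_cmpxchg(v, old, new)   cmpxchg(&((v)->counter), (old), (new))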
2015-07-27arm64: atomics: prefetch the destination word for write prior to stxrWill Deacon
The cost of changing a cacheline from shared to exclusive state can be significant, especially when this is triggered by an exclusive store, since it may result in having to retry the transaction. This patch makes use of prfm to prefetch cachelines for write prior to ldxr/stxr loops when using the ll/sc atomic routines. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: atomics: tidy up common atomic{,64}_* macrosWill Deacon
The common (i.e. identical for ll/sc and lse) atomic macros in atomic.h are needlessly different for atomic_t and atomic64_t. This patch tidies up the definitions to make them consistent across the two atomic types and factors out common code such as the add_unless implementation based on cmpxchg. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
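The add_unless-on-top-of-cmpxchg idea can be sketched in portable C11 (an analogue for illustration, not the kernel's code):

    #include <stdatomic.h>

    /* Add 'a' to *v unless the current value is 'u'; return the old value. */
    static int add_unless(atomic_int *v, int a, int u)
    {
        int c = atomic_load(v);

        while (c != u) {
            /* On failure, compare_exchange reloads 'c' with the
             * current value and we retry. */
            if (atomic_compare_exchange_weak(v, &c, c + a))
                break;
        }
        return c;
    }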
2015-07-27arm64: cmpxchg: avoid memory barrier on comparison failureWill Deacon
cmpxchg doesn't require memory barrier semantics when the value comparison fails, so make the barrier conditional on success. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
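The same idea expressed with C11 atomics (for illustration only): the success ordering can be strong while the failure ordering is relaxed, so a failed comparison pays no barrier cost.

    #include <stdatomic.h>
    #include <stdbool.h>

    static bool cas_ordered_on_success(atomic_int *v, int *expected, int desired)
    {
        return atomic_compare_exchange_strong_explicit(v, expected, desired,
                                                       memory_order_acq_rel,
                                                       memory_order_relaxed);
    }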
2015-07-27arm64: cmpxchg: avoid "cc" clobber in ll/sc routinesWill Deacon
We can perform the cmpxchg comparison using eor and cbnz which avoids the "cc" clobber for the ll/sc case and consequently for the LSE case where we may have to fall-back on the ll/sc code at runtime. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPUWill Deacon
On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our cmpxchg_double primitives so that the LSE casp instruction is used instead. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: cmpxchg: patch in lse instructions when supported by the CPUWill Deacon
On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our cmpxchg primitives so that the LSE cas instruction is used instead. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: xchg: patch in lse instructions when supported by the CPUWill Deacon
On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our xchg primitives so that the LSE swp instruction (yes, you read right!) is used instead. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: bitops: patch in lse instructions when supported by the CPUWill Deacon
On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our bitops functions so that LSE atomic instructions are used instead. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: locks: patch in lse instructions when supported by the CPUWill Deacon
On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our locking functions so that LSE atomic instructions are used for spinlocks and rwlocks. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: atomics: patch in lse instructions when supported by the CPUWill Deacon
On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of atomic_t and atomic64_t routines so that the call-site for the out-of-line ll/sc sequences is patched with an LSE atomic instruction when we detect that the CPU supports it.

If binutils is not recent enough to assemble the LSE instructions, then the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
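The patching mechanism is built on the arm64 alternatives framework; conceptually, each affected sequence is wrapped in something like the following (a sketch, not the verbatim kernel macro):

    /* Emit the ll/sc form by default and patch in the LSE form at boot
     * when the CPU advertises the feature. */
    #define ARM64_LSE_ATOMIC_INSN(llsc, lse)                            \
            ALTERNATIVE(llsc, lse, ARM64_HAS_LSE_ATOMICS)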
2015-07-27arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomicsWill Deacon
In order to patch in the new atomic instructions at runtime, we need to generate wrappers around the out-of-line exclusive load/store atomics.

This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which causes our atomic functions to branch to the out-of-line ll/sc implementations. To avoid the register spill overhead of the PCS, the out-of-line functions are compiled with specific compiler flags to force out-of-line save/restore of any registers that are usually caller-saved.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: alternatives: add cpu feature for lse atomicsWill Deacon
Add a CPU feature for the LSE atomic instructions, so that they can be patched in at runtime when we detect that they are supported. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: elf: advertise 8.1 atomic instructions as new hwcapWill Deacon
The ARM v8.1 architecture introduces new atomic instructions to the A64 instruction set for things like cmpxchg, so advertise their availability to userspace using a hwcap. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: atomics: move ll/sc atomics into separate header fileWill Deacon
In preparation for the Large System Extension (LSE) atomic instructions introduced by ARM v8.1, move the current exclusive load/store (LL/SC) atomics into their own header file. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27arm64: cpufeature.h: add missing #include of kernel.hWill Deacon
cpufeature.h makes use of DECLARE_BITMAP, which in turn relies on the BITS_TO_LONGS and DIV_ROUND_UP macros. This patch includes kernel.h in cpufeature.h to prevent all users having to do the same thing. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
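For context, the dependency chain is roughly the following (simplified forms of the generic kernel macros, shown for illustration only):

    /* DIV_ROUND_UP comes from linux/kernel.h, which is why cpufeature.h
     * now includes it. */
    #define DIV_ROUND_UP(n, d)          (((n) + (d) - 1) / (d))
    #define BITS_TO_LONGS(nr)           DIV_ROUND_UP(nr, 8 * sizeof(long))
    #define DECLARE_BITMAP(name, bits)  unsigned long name[BITS_TO_LONGS(bits)]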
2015-07-27arm64: rwlocks: don't fail trylock purely due to contentionWill Deacon
STXR can fail for a number of reasons, so don't fail an rwlock trylock operation simply because the STXR reported failure. I'm not aware of any issues with the current code, but this makes it consistent with spin_trylock and also other architectures (e.g. arch/arm). Reported-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27Merge branch 'locking/arch-atomic' of ↵Will Deacon
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into aarch64/for-next/core Merge in PeterZ's logical atomic ops so that we can implement them in our subsequent LSE atomics.
2015-07-27atomic: Add simple atomic_t testsPeter Zijlstra
Add a few atomic_t tests to get some compile coverage for the new operations. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27atomic: Replace atomic_{set,clear}_mask() usagePeter Zijlstra
Replace the deprecated atomic_{set,clear}_mask() usage with the now ubiquitous atomic_{or,andnot}() functions. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27atomic: Collapse all atomic_{set,clear}_mask definitionsPeter Zijlstra
Move the now generic definitions of atomic_{set,clear}_mask() into linux/atomic.h to avoid endless and pointless repetition. Also, provide an atomic_andnot() wrapper for those few archs that can implement that. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
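The resulting generic wrappers look roughly like this (a sketch of the shape described above, assuming the arch provides atomic_and()/atomic_or(); not the verbatim header):

    /* Fallback for architectures without a native andnot. */
    #ifndef atomic_andnot
    static inline void atomic_andnot(int i, atomic_t *v)
    {
        atomic_and(~i, v);
    }
    #endif

    static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
    {
        atomic_or(mask, v);
    }

    static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
    {
        atomic_andnot(mask, v);
    }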
2015-07-27atomic: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
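A standalone way to picture the three new ops, using GCC/Clang __atomic builtins rather than any particular architecture's implementation:

    typedef struct { int counter; } atomic_t;   /* illustrative stand-in */

    static inline void atomic_or(int i, atomic_t *v)
    {
        __atomic_fetch_or(&v->counter, i, __ATOMIC_RELAXED);
    }

    static inline void atomic_and(int i, atomic_t *v)
    {
        __atomic_fetch_and(&v->counter, i, __ATOMIC_RELAXED);
    }

    static inline void atomic_xor(int i, atomic_t *v)
    {
        __atomic_fetch_xor(&v->counter, i, __ATOMIC_RELAXED);
    }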
2015-07-27tile: Provide atomic_{or,xor,and}Chris Metcalf
Implement atomic logic ops -- atomic_{or,xor,and}.

For tilegx, these are relatively straightforward; the architecture provides atomic "or" and "and", both 32-bit and 64-bit. To support xor we provide a loop using "cmpexch".

For the older 32-bit tilepro architecture, we have to extend the set of low-level assembly routines to include 32-bit "and", as well as all three 64-bit routines. Somewhat confusingly, some 32-bit versions are already used by the bitops inlines, with parameter types appropriate for bitops, so we have to do a bit of casting to match "int" to "unsigned long".

Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: http://lkml.kernel.org/r/1436474297-32187-1-git-send-email-cmetcalf@ezchip.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27h8300: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Also rework the atomic implementation in terms of CPP macros to avoid the typical repetition -- I seem to have missed this arch the last time around when I did that. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27frv: Rewrite atomic implementationPeter Zijlstra
Mostly complete rewrite of the FRV atomic implementation: instead of using assembly files, use inline assembler.

The out-of-line CONFIG option makes a bit of a mess of things, but a little CPP trickery gets that done too.

FRV already had the atomic logic ops but under a non-standard name; the reimplementation provides the generic names and provides the intermediate form required for the bitops implementation.

The slightly inconsistent __atomic32_fetch_##op naming is because __atomic_fetch_##op conflicts with GCC builtin functions.

The 64-bit atomic ops use the inline assembly %Ln construct to access the low word register (r+1); afaik this construct was not previously used in the kernel and is completely undocumented, but I found it in the FRV GCC code and it seems to work.

FRV had a non-standard definition of atomic_{clear,set}_mask() which would work on types other than atomic_t; the one user relying on that (arch/frv/kernel/dma.c) got converted to use the new intermediate form.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27x86: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27s390: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27xtensa: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27sparc: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27sh: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>