author     Masanari Iida  2016-07-01 12:46:01 +0900
committer  Ingo Molnar    2016-07-01 10:00:10 +0200
commit     c76a093dc1415d364020b8b33f1e194ef4d26fd0
tree       7bf0ebac56ddfef7cfa0cbc224acaadbbc909b66 /Documentation/x86
parent     1ead852dd88779eda12cb09cc894a03d9abfe1ec
x86/Documentation: Fix various typos in Documentation/x86/ files
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: corbet@lwn.net
Cc: linux-doc@vger.kernel.org
Link: http://lkml.kernel.org/r/20160701034601.30308-1-standby24x7@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'Documentation/x86')
-rw-r--r--  Documentation/x86/intel_mpx.txt        | 6
-rw-r--r--  Documentation/x86/tlb.txt              | 4
-rw-r--r--  Documentation/x86/x86_64/machinecheck  | 2
3 files changed, 6 insertions, 6 deletions
diff --git a/Documentation/x86/intel_mpx.txt b/Documentation/x86/intel_mpx.txt
index 1a5a12184a35..85d0549ad846 100644
--- a/Documentation/x86/intel_mpx.txt
+++ b/Documentation/x86/intel_mpx.txt
@@ -45,7 +45,7 @@ is how we expect the compiler, application and kernel to work together.
MPX-instrumented.
3) The kernel detects that the CPU has MPX, allows the new prctl() to
succeed, and notes the location of the bounds directory. Userspace is
- expected to keep the bounds directory at that locationWe note it
+ expected to keep the bounds directory at that location. We note it
instead of reading it each time because the 'xsave' operation needed
to access the bounds directory register is an expensive operation.
4) If the application needs to spill bounds out of the 4 registers, it
@@ -167,7 +167,7 @@ If a #BR is generated due to a bounds violation caused by MPX.
We need to decode MPX instructions to get violation address and
set this address into extended struct siginfo.
-The _sigfault feild of struct siginfo is extended as follow:
+The _sigfault field of struct siginfo is extended as follow:
87 /* SIGILL, SIGFPE, SIGSEGV, SIGBUS */
88 struct {
@@ -240,5 +240,5 @@ them at the same bounds table.
This is allowed architecturally. See more information "Intel(R) Architecture
Instruction Set Extensions Programming Reference" (9.3.4).
-However, if users did this, the kernel might be fooled in to unmaping an
+However, if users did this, the kernel might be fooled in to unmapping an
in-use bounds table since it does not recognize sharing.
diff --git a/Documentation/x86/tlb.txt b/Documentation/x86/tlb.txt
index 39d172326703..6a0607b99ed8 100644
--- a/Documentation/x86/tlb.txt
+++ b/Documentation/x86/tlb.txt
@@ -5,7 +5,7 @@ memory, it has two choices:
from areas other than the one we are trying to flush will be
destroyed and must be refilled later, at some cost.
2. Use the invlpg instruction to invalidate a single page at a
- time. This could potentialy cost many more instructions, but
+ time. This could potentially cost many more instructions, but
it is a much more precise operation, causing no collateral
damage to other TLB entries.
@@ -19,7 +19,7 @@ Which method to do depends on a few things:
work.
3. The size of the TLB. The larger the TLB, the more collateral
damage we do with a full flush. So, the larger the TLB, the
- more attrative an individual flush looks. Data and
+ more attractive an individual flush looks. Data and
instructions have separate TLBs, as do different page sizes.
4. The microarchitecture. The TLB has become a multi-level
cache on modern CPUs, and the global flushes have become more
diff --git a/Documentation/x86/x86_64/machinecheck b/Documentation/x86/x86_64/machinecheck
index b1fb30273286..d0648a74fceb 100644
--- a/Documentation/x86/x86_64/machinecheck
+++ b/Documentation/x86/x86_64/machinecheck
@@ -36,7 +36,7 @@ between all CPUs.
check_interval
How often to poll for corrected machine check errors, in seconds
- (Note output is hexademical). Default 5 minutes. When the poller
+ (Note output is hexadecimal). Default 5 minutes. When the poller
finds MCEs it triggers an exponential speedup (poll more often) on
the polling interval. When the poller stops finding MCEs, it
triggers an exponential backoff (poll less often) on the polling