Diffstat (limited to 'Documentation')
-rw-r--r--   Documentation/DMA-mapping.txt   103
-rw-r--r--   Documentation/pci.txt             8
-rw-r--r--   Documentation/power/pci.txt      37
3 files changed, 4 insertions(+), 144 deletions(-)
diff --git a/Documentation/DMA-mapping.txt b/Documentation/DMA-mapping.txt
index 028614cdd062..e07f2530326b 100644
--- a/Documentation/DMA-mapping.txt
+++ b/Documentation/DMA-mapping.txt
@@ -664,109 +664,6 @@ It is that simple.
Well, not for some odd devices. See the next section for information
about that.
- DAC Addressing for Address Space Hungry Devices
-
-There exists a class of devices which do not mesh well with the PCI
-DMA mapping API. By definition these "mappings" are a finite
-resource. The number of total available mappings per bus is platform
-specific, but there will always be a reasonable amount.
-
-What is "reasonable"? Reasonable means that networking and block I/O
-devices need not worry about using too many mappings.
-
-As an example of a problematic device, consider compute cluster cards.
-They can potentially need to access gigabytes of memory at once via
-DMA. Dynamic mappings are unsuitable for this kind of access pattern.
-
-To this end we've provided a small API by which a device driver
-may use DAC cycles to directly address all of physical memory.
-Not all platforms support this, but most do. It is easy to determine
-whether the platform will work properly at probe time.
-
-First, understand that there may be a SEVERE performance penalty for
-using these interfaces on some platforms. Therefore, you MUST only
-use these interfaces if it is absolutely required. 99% of devices can
-use the normal APIs without any problems.
-
-Note that for streaming type mappings you must either use these
-interfaces, or the dynamic mapping interfaces above. You may not mix
-usage of both for the same device. Such an act is illegal and is
-guaranteed to put a banana in your tailpipe.
-
-However, consistent mappings may in fact be used in conjunction with
-these interfaces. Remember that, as defined, consistent mappings are
-always going to be SAC addressable.
-
-The first thing your driver needs to do is query the PCI platform
-layer whether it is capable of handling your device's DAC addressing
-capabilities:
-
- int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask);
-
-You may not use the following interfaces if this routine fails.
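-
-For example (a minimal sketch; DMA_64BIT_MASK and the use_dac flag
-are only illustrative), a probe routine might gate everything below
-on this check:
-
-	if (pci_dac_dma_supported(pdev, DMA_64BIT_MASK))
-		use_dac = 1;	/* safe to use the DAC interfaces below */
-	else
-		use_dac = 0;	/* fall back to the dynamic mapping API */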
-
-Next, DMA addresses obtained from this API are tracked using the
-dma64_addr_t type. It is guaranteed to be big enough to hold any
-DAC address the platform layer will give to you from the following
-routines. If you have consistent mappings as well, you still use
-plain dma_addr_t to keep track of those.
-
-All mappings obtained here will be direct. The mappings are not
-translated, and this is the purpose of this dialect of the DMA API.
-
-All routines work with page/offset pairs. This is the _ONLY_ way to
-portably refer to any piece of memory. If you have a cpu pointer
-(which may be validly DMA'd too) you may easily obtain the page
-and offset using something like this:
-
- struct page *page = virt_to_page(ptr);
- unsigned long offset = offset_in_page(ptr);
-
-Here are the interfaces:
-
- dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
- struct page *page,
- unsigned long offset,
- int direction);
-
-The DAC address for the tuple PAGE/OFFSET is returned. The direction
-argument is the same as for pci_{map,unmap}_single(). The same rules
-for cpu/device access apply here as for the streaming mapping
-interfaces. To reiterate:
-
-	The cpu may touch the buffer before pci_dac_page_to_dma is called.
-	The device may touch the buffer after the pci_dac_page_to_dma call
-	is made, but the cpu may NOT.
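-
-For example, mapping a page for a device-bound transfer might look
-like this (a sketch; pdev, page, offset and bus_addr are just
-illustrative names):
-
-	dma64_addr_t bus_addr;
-
-	bus_addr = pci_dac_page_to_dma(pdev, page, offset,
-				       PCI_DMA_TODEVICE);
-	/* Hand bus_addr to the hardware; the cpu must not touch the
-	 * buffer again until it is synced back for the cpu below.
-	 */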
-
-When the DMA transfer is complete, invoke:
-
- void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
- dma64_addr_t dma_addr,
- size_t len, int direction);
-
-This must be done before the CPU looks at the buffer again.
-This interface behaves identically to pci_dma_sync_{single,sg}_for_cpu().
-
-And likewise, if you wish to let the device get back at the buffer after
-the cpu has read/written it, invoke:
-
- void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
- dma64_addr_t dma_addr,
- size_t len, int direction);
-
-before letting the device access the DMA area again.
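-
-Putting the two sync calls together, a receive path might look
-roughly like this (a sketch; process_buffer() is a made-up driver
-routine):
-
-	pci_dac_dma_sync_single_for_cpu(pdev, bus_addr, len,
-					PCI_DMA_FROMDEVICE);
-	process_buffer(page, offset, len);	/* cpu may look at it now */
-	pci_dac_dma_sync_single_for_device(pdev, bus_addr, len,
-					    PCI_DMA_FROMDEVICE);
-	/* The device may now be given the buffer again. */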
-
-If you need to get back to the PAGE/OFFSET tuple from a dma64_addr_t
-the following interfaces are provided:
-
- struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
- dma64_addr_t dma_addr);
- unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
- dma64_addr_t dma_addr);
-
-This is possible with the DAC interfaces purely because they are
-not translated in any way.
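-
-So, for example, a completion handler that only has the bus address
-at hand can recover the cpu-visible location like this (a sketch;
-remember that page_address() is only valid for lowmem pages):
-
-	struct page *page = pci_dac_dma_to_page(pdev, bus_addr);
-	unsigned long offset = pci_dac_dma_to_offset(pdev, bus_addr);
-	void *vaddr = page_address(page) + offset;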
-
Optimizing Unmap State Space Consumption
On many platforms, pci_unmap_{single,page}() is simply a nop.
diff --git a/Documentation/pci.txt b/Documentation/pci.txt
index d38261b67905..7754f5aea4e9 100644
--- a/Documentation/pci.txt
+++ b/Documentation/pci.txt
@@ -113,9 +113,6 @@ initialization with a pointer to a structure describing the driver
(Please see Documentation/power/pci.txt for descriptions
of PCI Power Management and the related functions.)
- enable_wake Enable device to generate wake events from a low power
- state.
-
shutdown Hook into reboot_notifier_list (kernel/sys.c).
Intended to stop any idling DMA operations.
Useful for enabling wake-on-lan (NIC) or changing
@@ -299,7 +296,10 @@ If the PCI device can use the PCI Memory-Write-Invalidate transaction,
call pci_set_mwi(). This enables the PCI_COMMAND bit for Mem-Wr-Inval
and also ensures that the cache line size register is set correctly.
Check the return value of pci_set_mwi() as not all architectures
-or chip-sets may support Memory-Write-Invalidate.
+or chip-sets may support Memory-Write-Invalidate. Alternatively,
+if Mem-Wr-Inval would be nice to have but is not required, call
+pci_try_set_mwi() to have the system do its best effort at enabling
+Mem-Wr-Inval.
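+
+For example (a sketch only; rc is a local int):
+
+	rc = pci_set_mwi(pdev);
+	if (rc)
+		return rc;	/* Mem-Wr-Inval is required by this driver */
+
+or, when Mem-Wr-Inval is merely a performance hint:
+
+	pci_try_set_mwi(pdev);	/* best effort; failure can be ignored */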
3.2 Request MMIO/IOP resources
diff --git a/Documentation/power/pci.txt b/Documentation/power/pci.txt
index e00b099a4b86..dd8fe43888d3 100644
--- a/Documentation/power/pci.txt
+++ b/Documentation/power/pci.txt
@@ -164,7 +164,6 @@ struct pci_driver:
int (*suspend) (struct pci_dev *dev, pm_message_t state);
int (*resume) (struct pci_dev *dev);
- int (*enable_wake) (struct pci_dev *dev, pci_power_t state, int enable);
suspend
@@ -251,42 +250,6 @@ The driver should update the current_state field in its pci_dev structure in
this function, except for PM-capable devices when pci_set_power_state is used.
-enable_wake
------------
-
-Usage:
-
-if (dev->driver && dev->driver->enable_wake)
-	dev->driver->enable_wake(dev, state, enable);
-
-This callback is generally only relevant for devices that support the PCI PM
-spec and have the ability to generate a PME# (Power Management Event Signal)
-to wake the system up. (However, it is possible that a device may support
-some non-standard way of generating a wake event on sleep.)
-
-Bits 15:11 of the PMC (Power Mgmt Capabilities) Register in a device's
-PM Capabilities describe what power states the device supports generating a
-wake event from:
-
-+-----+--------+
-| Bit | State  |
-+-----+--------+
-| 11  | D0     |
-| 12  | D1     |
-| 13  | D2     |
-| 14  | D3hot  |
-| 15  | D3cold |
-+-----+--------+
-
-A device can use this to enable wake events:
-
- pci_enable_wake(dev,state,enable);
-
-Note that to enable PME# from D3cold, a value of 4 should be passed to
-pci_enable_wake (since it uses an index into a bitmask). If a driver gets
-a request to enable wake events from D3, it should make two calls to
-pci_enable_wake (one for D3hot and one for D3cold).
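-
-For instance, a suspend path that wants to wake the system from D3
-might do (a sketch only):
-
-	pci_enable_wake(dev, PCI_D3hot, 1);
-	pci_enable_wake(dev, PCI_D3cold, 1);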
-
A reference implementation
-------------------------