author     Dave Airlie    2011-05-09 02:24:04 +0000
committer  Dave Airlie    2011-05-11 13:08:02 +1000
commit     03a80665341bbb9a57064c2ddeca13b554d56893
tree       a72db5aa5d2c6b30e22733e58e2c603738aec5dd  /drivers/gpu/drm
parent     1f03128251b77bfc68d1578a4f11316eb3806238
drm/radeon/nouveau: fix build regression on alpha due to Xen changes.
The Xen changes were using DMA_ERROR_CODE, which isn't defined on a few
platforms; however, we reverted the Xen patch that caused us to try and
use this code path earlier in the 2.6.39 cycle, so for now let's just force
the code to never take this path and allow it to build again on alpha.
The proper long term answer is probably to store whether the dma_addr has
been assigned alongside the dma_addr itself in the higher level code (a
rough sketch of that idea follows below), though I think Thomas wanted to
rewrite most of this properly anyway.
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
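
As an aside, the "store whether the dma_addr has been assigned alongside it"
idea mentioned above could look roughly like the sketch below. This is purely
illustrative: the struct and helper names are invented for this example and
are not from this patch, TTM, or the DMA API.

/*
 * Illustrative sketch only: track DMA-address validity next to the
 * address itself instead of comparing against an architecture-specific
 * DMA_ERROR_CODE.  All names below are made up for this example.
 */
#include <linux/types.h>

struct example_dma_slot {
	dma_addr_t addr;	/* address returned by the DMA API, if any */
	bool       mapped;	/* true only when addr was actually assigned */
};

/* higher level code checks the flag, never a magic error value */
static inline bool example_dma_slot_mapped(const struct example_dma_slot *slot)
{
	return slot->mapped;
}

The GART/SGDMA bind paths could then test such a flag instead of comparing
against DMA_ERROR_CODE, which would sidestep the alpha build problem entirely.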
Diffstat (limited to 'drivers/gpu/drm')
-rw-r--r--  drivers/gpu/drm/nouveau/nouveau_sgdma.c  3
-rw-r--r--  drivers/gpu/drm/radeon/radeon_gart.c     6
2 files changed, 5 insertions, 4 deletions
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index 4bce801bc588..c77111eca6ac 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -42,7 +42,8 @@ nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages,
 	nvbe->nr_pages = 0;
 	while (num_pages--) {
-		if (dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE) {
+			/* this code path isn't called and is incorrect anyways */
+		if (0) { /*dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE)*/
 			nvbe->pages[nvbe->nr_pages] = dma_addrs[nvbe->nr_pages];
 			nvbe->ttm_alloced[nvbe->nr_pages] = true;
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
index 8a955bbdb608..a533f52fd163 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -181,9 +181,9 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 	p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
 	for (i = 0; i < pages; i++, p++) {
-		/* On TTM path, we only use the DMA API if TTM_PAGE_FLAG_DMA32
-		 * is requested. */
-		if (dma_addr[i] != DMA_ERROR_CODE) {
+		/* we reverted the patch using dma_addr in TTM for now but this
+		 * code stops building on alpha so just comment it out for now */
+		if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */
 			rdev->gart.ttm_alloced[p] = true;
 			rdev->gart.pages_addr[p] = dma_addr[i];
 		} else {
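
A note on why this fixes the build: the body of an "if (0)" block still has to
compile, so the comparison against DMA_ERROR_CODE is moved into a comment,
where the undefined macro is never seen by the compiler on alpha, and the dead
branch is simply optimised away. A minimal standalone illustration of the same
pattern (plain userspace C, not kernel code; the variable names are made up):

#include <stdio.h>

int main(void)
{
	unsigned long addr = 0;

	/* this would fail to build if DMA_ERROR_CODE were referenced here */
	if (0) { /* addr != DMA_ERROR_CODE -- kept only as a comment */
		printf("mapped path (never taken): %lu\n", addr);
	} else {
		printf("fallback path: %lu\n", addr);
	}
	return 0;
}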