author      Nicolin Chen                              2020-09-01 15:16:45 -0700
committer   Christoph Hellwig                         2020-09-03 18:12:15 +0200
commit      1e9d90dbed120ec98517428ffff4dacd9797e39d (patch)
tree        d743be6b54427bb62e36e6d1ae2f6fc243cd70b7 /include/linux/dma-mapping.h
parent      2281f797f5524abb8fff66bf8540b4f4687332a2 (diff)
dma-mapping: introduce dma_get_seg_boundary_nr_pages()
We found that callers of dma_get_seg_boundary() mostly do an ALIGN
with the page mask and then a page shift to get the number of pages:
    ALIGN(boundary + 1, 1 << shift) >> shift
However, the boundary might be as large as ULONG_MAX, which means
that the device has no specific boundary limit. In that case either
the "+ 1" or the ALIGN() itself can overflow.
According to kernel defines:
#define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1)
We can simplify the logic here into a helper function doing:
ALIGN(boundary + 1, 1 << shift) >> shift
= ALIGN_MASK(b + 1, (1 << s) - 1) >> s
= {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s
= [b + 1 + (1 << s) - 1] >> s
= [b + (1 << s)] >> s
= (b >> s) + 1
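As a sanity check (not part of the patch), the identity and the overflow can
be exercised in a small standalone userspace program. The ALIGN macros below
are copied from the commit message, the helper names are made up for
illustration, and typeof() assumes gcc/clang:

    /* Standalone sketch: compare the open-coded form with the simplified one. */
    #include <limits.h>
    #include <stdio.h>

    #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a)         ALIGN_MASK(x, (typeof(x))(a) - 1)

    static unsigned long nr_pages_old(unsigned long b, unsigned int s)
    {
            /* "+ 1" wraps to 0 when b == ULONG_MAX, so this returns 0 */
            return ALIGN(b + 1, 1UL << s) >> s;
    }

    static unsigned long nr_pages_new(unsigned long b, unsigned int s)
    {
            /* no intermediate overflow */
            return (b >> s) + 1;
    }

    int main(void)
    {
            unsigned int shift = 12;        /* 4K pages */

            /* 4G - 1 boundary: both forms agree (0x100000 pages) */
            printf("old=%#lx new=%#lx\n",
                   nr_pages_old(0xffffffffUL, shift),
                   nr_pages_new(0xffffffffUL, shift));

            /* "no limit" boundary: only the simplified form survives */
            printf("old=%#lx new=%#lx\n",
                   nr_pages_old(ULONG_MAX, shift),
                   nr_pages_new(ULONG_MAX, shift));
            return 0;
    }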
This patch introduces and applies dma_get_seg_boundary_nr_pages()
as an overflow-free helper for dma_get_seg_boundary() callers to
get the number of pages. It also handles the NULL dev case for
non-DMA API callers.
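A hypothetical caller conversion would look like the sketch below. This is an
illustration only, not a hunk from this patch; example_boundary_nr_pages() is
a made-up name:

    #include <linux/dma-mapping.h>

    /* Hypothetical caller: a driver that used to open-code the calculation. */
    static unsigned long example_boundary_nr_pages(struct device *dev,
                                                   unsigned int page_shift)
    {
            /*
             * Before (overflows when dma_get_seg_boundary() returns ULONG_MAX):
             *
             *      return ALIGN(dma_get_seg_boundary(dev) + 1,
             *                   1UL << page_shift) >> page_shift;
             *
             * After: same result, no intermediate overflow.
             */
            return dma_get_seg_boundary_nr_pages(dev, page_shift);
    }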
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'include/linux/dma-mapping.h')
-rw-r--r--    include/linux/dma-mapping.h    19
1 file changed, 19 insertions, 0 deletions
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 52635e91143b..faab0a8210b9 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -632,6 +632,25 @@ static inline unsigned long dma_get_seg_boundary(struct device *dev)
 	return DMA_BIT_MASK(32);
 }
 
+/**
+ * dma_get_seg_boundary_nr_pages - return the segment boundary in "page" units
+ * @dev: device to query the boundary for
+ * @page_shift: ilog2() of the IOMMU page size
+ *
+ * Return the segment boundary in IOMMU page units (which may be different from
+ * the CPU page size) for the passed in device.
+ *
+ * If @dev is NULL a boundary of U32_MAX is assumed, this case is just for
+ * non-DMA API callers.
+ */
+static inline unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
+		unsigned int page_shift)
+{
+	if (!dev)
+		return (U32_MAX >> page_shift) + 1;
+	return (dma_get_seg_boundary(dev) >> page_shift) + 1;
+}
+
 static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
 {
 	if (dev->dma_parms) {
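The NULL-dev branch gives non-DMA API callers a 4G boundary expressed in
pages. A minimal illustration, assuming a 4K IOMMU page size (the function
name is made up):

    #include <linux/dma-mapping.h>

    /* Illustration only: (U32_MAX >> 12) + 1 == 0x100000, i.e. 4G in 4K pages. */
    static unsigned long example_default_boundary_nr_pages(void)
    {
            return dma_get_seg_boundary_nr_pages(NULL, 12);
    }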