author		Sebastian Ott	2016-08-02 14:04:13 -0700
committer	Linus Torvalds	2016-08-02 19:35:07 -0400
commit		f003a1f182bb821f13775338a4bf8711830f927a (patch)
tree		74d6093d44a21328cebb5420b970fb059d080547 /lib
parent		4cad35a7ca690eabf0d241062ce9e59693ec03e7 (diff)
lib/iommu-helper: skip to next segment
When a large enough area in the iommu bitmap is found but would span a boundary, we continue the search starting from the next bit position. For large allocations this can lead to several useless invocations of bitmap_find_next_zero_area() and iommu_is_span_boundary(). Continue the search from the start of the next segment instead (which is the next bit position such that we'll not cross the same segment boundary again).

Link: http://lkml.kernel.org/r/alpine.LFD.2.20.1606081910070.3211@schleppi
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'lib')
-rw-r--r--	lib/iommu-helper.c	3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/lib/iommu-helper.c b/lib/iommu-helper.c
index c27e269210c4..a816f3a80625 100644
--- a/lib/iommu-helper.c
+++ b/lib/iommu-helper.c
@@ -29,8 +29,7 @@ again:
 	index = bitmap_find_next_zero_area(map, size, start, nr, align_mask);
 	if (index < size) {
 		if (iommu_is_span_boundary(index, nr, shift, boundary_size)) {
-			/* we could do more effectively */
-			start = index + 1;
+			start = ALIGN(shift + index, boundary_size) - shift;
 			goto again;
 		}
 		bitmap_set(map, index, nr);
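
Below is a minimal user-space sketch (not kernel code) of the restart arithmetic this patch introduces. ALIGN() is redefined locally with the same round-up-to-a-power-of-two semantics as the kernel macro, and the shift, boundary_size and index values are arbitrary illustrative choices.

#include <stdio.h>

/* Same semantics as the kernel's ALIGN(): round x up to the next
 * multiple of the power-of-two value a. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

int main(void)
{
	unsigned long shift = 0;          /* bitmap offset within the IOVA space */
	unsigned long boundary_size = 16; /* segment size in bits (power of two) */
	unsigned long index = 13;         /* found area starts here but crosses a boundary */

	/* Old behaviour: retry from the very next bit position. */
	unsigned long old_start = index + 1;

	/* New behaviour: retry from the start of the next segment. */
	unsigned long new_start = ALIGN(shift + index, boundary_size) - shift;

	printf("old restart: %lu, new restart: %lu\n", old_start, new_start);
	/* Prints "old restart: 14, new restart: 16": positions 14 and 15
	 * would cross the same segment boundary again, so the new code
	 * skips straight to the next segment. */
	return 0;
}

Roughly speaking, the old code could retry once per bit position inside a segment before moving past the boundary, while the new code retries at most once per segment, which is where the saving for large allocations comes from.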