author    | Jens Axboe | 2024-03-12 20:24:21 -0600
committer | Jens Axboe | 2024-04-15 08:10:26 -0600
commit    | 87585b05757dc70545efb434669708d276125559
tree      | d3020002d23692f431baf61797e51d439312950f /include
parent    | e270bfd22a2a10d1cfbaddf23e79b6d0b405d21e
io_uring/kbuf: use vm_insert_pages() for mmap'ed pbuf ring
Rather than use remap_pfn_range() for this and manually free later,
switch to using vm_insert_page() and have it Just Work.
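
For context, here is a minimal sketch (not the actual io_uring code) of the shape this switch implies for an ->mmap() handler; struct my_ring, its fields, and my_ring_mmap() are illustrative assumptions:

/*
 * Hedged sketch only: an ->mmap() handler that inserts an already-allocated
 * page array into the VMA with vm_insert_pages(). The mapping takes its own
 * reference on each inserted page, so none of the remap_pfn_range()-style
 * bookkeeping and manual freeing is needed later.
 */
#include <linux/fs.h>
#include <linux/mm.h>

struct my_ring {
	struct page	**pages;	/* pages backing the provided-buffer ring */
	unsigned long	nr_pages;
};

static int my_ring_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_ring *ring = file->private_data;	/* hypothetical */
	unsigned long nr = ring->nr_pages;

	if (vma->vm_end - vma->vm_start > nr * PAGE_SIZE)
		return -EINVAL;

	/* each inserted page gets its own reference held by the mapping */
	return vm_insert_pages(vma, vma->vm_start, ring->pages, &nr);
}
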
This requires a bit of effort on the mmap lookup side, as the ctx
uring_lock isn't held there. That lock otherwise protects buffer_lists
from being torn down, and it's not safe to grab it from mmap context, as
that would introduce an ABBA deadlock between the mmap lock and the ctx
uring_lock. Instead, look up the buffer_list under RCU, as the list is
already RCU freed. Use the existing reference count to determine whether
it's safe to grab a reference to it (i.e. if it's not already zero), and
drop that reference when done with the mapping. If the mmap reference is
the last one, the buffer_list and the associated memory can go away,
since the vma insertion holds references to the inserted pages at that
point. A sketch of this lookup pattern follows below.
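
A minimal sketch of that lookup pattern, with illustrative names (struct my_ctx, struct my_buffer_list, my_bl_get() are assumptions, not the real io_uring symbols):

/*
 * Hedged sketch only: find the buffer_list under rcu_read_lock() rather
 * than the ctx uring_lock, and only take a reference if the existing
 * refcount is still non-zero (i.e. the object is not already being freed).
 */
#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <linux/xarray.h>

struct my_buffer_list {
	atomic_t	refs;		/* existing lifetime refcount */
	/* ... buffer ring pages, etc. ... */
};

struct my_ctx {
	struct xarray	bl_xa;		/* bgid -> buffer_list, entries RCU freed */
};

static struct my_buffer_list *my_bl_get(struct my_ctx *ctx, unsigned int bgid)
{
	struct my_buffer_list *bl;

	rcu_read_lock();
	/* xa_load() is safe under RCU; no need for the ctx uring_lock here */
	bl = xa_load(&ctx->bl_xa, bgid);
	/* only pin it if it isn't already on its way to being freed */
	if (bl && !atomic_inc_not_zero(&bl->refs))
		bl = NULL;
	rcu_read_unlock();

	return bl;	/* caller drops the ref once the pages are inserted */
}
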
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/io_uring_types.h | 3 |
1 file changed, 0 insertions, 3 deletions
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index ef45b8bd1b35..d34c8433caf9 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -372,9 +372,6 @@ struct io_ring_ctx {
 
 	struct list_head	io_buffers_cache;
 
-	/* deferred free list, protected by ->uring_lock */
-	struct hlist_head	io_buf_list;
-
 	/* Keep this last, we don't need it for the fast path */
 	struct wait_queue_head		poll_wq;
 	struct io_restriction		restrictions;