author		David Hildenbrand	2022-05-09 18:20:42 -0700
committer	akpm			2022-05-09 18:20:42 -0700
commit		623a1ddfeb232526275ddd0c8378771e6712aad4
tree		8975dc35fb8505de5635fd11b28d58a7ed04ea36 /mm
parent		322842ea3c7264936b084ac949e9070ee47a5202
mm/hugetlb: take src_mm->write_protect_seq in copy_hugetlb_page_range()
Let's do it just like copy_page_range(), taking the seqlock and making
sure the mmap_lock is held in write mode.
This allows adding a VM_BUG_ON() to page_needs_cow_for_dma() and properly
synchronizes concurrent fork() with GUP-fast of hugetlb pages, which will
be relevant for further changes.
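
For illustration, a minimal userspace C sketch of the seqcount handshake
involved (hypothetical names and seq_cst atomics for simplicity; the kernel
side uses raw_write_seqcount_begin()/raw_write_seqcount_end() on
src_mm->write_protect_seq under the write-mode mmap_lock, while GUP-fast
rechecks the count):

  /*
   * Userspace sketch of the write_protect_seq handshake (hypothetical
   * names, seq_cst atomics for simplicity; not kernel code).
   */
  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_uint write_protect_seq;  /* even: idle, odd: fork copying */

  /*
   * Writer side: what copy_hugetlb_page_range() now brackets the COW
   * copy with, while the mmap_lock is held in write mode.
   */
  static void cow_copy_begin(void) { atomic_fetch_add(&write_protect_seq, 1); }
  static void cow_copy_end(void)   { atomic_fetch_add(&write_protect_seq, 1); }

  /*
   * Reader side, modelling GUP-fast: fall back to the slow path if a
   * fork is in flight or ran concurrently with the lockless walk.
   */
  static bool try_fast_pin(void)
  {
          unsigned int seq = atomic_load(&write_protect_seq);

          if (seq & 1)    /* fork is currently copying page tables */
                  return false;

          /* ... speculatively walk page tables and pin the page ... */

          return atomic_load(&write_protect_seq) == seq;
  }

If the reader observes an odd count, or the count changes across its walk, a
concurrent fork() copied page tables and the pin must be retried via the slow
path under the mmap_lock.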
Link: https://lkml.kernel.org/r/20220428083441.37290-3-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Khalid Aziz <khalid.aziz@oracle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Liang Zhang <zhangliang5@huawei.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Oded Gabbay <oded.gabbay@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Pedro Demarchi Gomes <pedrodemargomes@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
 mm/hugetlb.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4e017940b26f..7389cd8a9a87 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4735,6 +4735,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 					vma->vm_start, vma->vm_end);
 		mmu_notifier_invalidate_range_start(&range);
+		mmap_assert_write_locked(src);
+		raw_write_seqcount_begin(&src->write_protect_seq);
 	} else {
 		/*
 		 * For shared mappings i_mmap_rwsem must be held to call
 		 * huge_pte_alloc, otherwise the returned ptep could go
@@ -4867,10 +4869,12 @@ again:
 		spin_unlock(dst_ptl);
 	}
 
-	if (cow)
+	if (cow) {
+		raw_write_seqcount_end(&src->write_protect_seq);
 		mmu_notifier_invalidate_range_end(&range);
-	else
+	} else {
 		i_mmap_unlock_read(mapping);
+	}
 
 	return ret;
 }
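
For context on the VM_BUG_ON() the message mentions, a sketch of how
page_needs_cow_for_dma() could assert this invariant (illustrative only; this
diff touches mm/hugetlb.c alone, the helper body below mirrors the kernel's
helper of this era, and the assertion is the hypothetical addition):

  static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
                                            struct page *page)
  {
          /* Callers must be inside fork()'s seqcount write section (odd). */
          VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1));

          if (!is_cow_mapping(vma->vm_flags))
                  return false;
          if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
                  return false;
          return page_maybe_dma_pinned(page);
  }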