author | Chris Mason | 2008-07-21 10:29:44 -0400
committer | Chris Mason | 2008-09-25 11:04:05 -0400
commit | 4a09675279674041862d2210635b0cc1f60be28e (patch)
tree | 19e4736c062f87729dcdc1bd57f4919b3227ec32 /fs/btrfs/file.c
parent | e5a2217ef6ff088d08a27208929a6f9c635d672c (diff)
Btrfs: Data ordered fixes
* In btrfs_delete_inode, wait for ordered extents after calling
  truncate_inode_pages. This is much faster and more correct (a sketch of
  this ordering follows the commit message).
* Properly clear the PageChecked bit everywhere we redirty the page.
* Change the writepage fixup handler to lock the page range and check
  whether an ordered extent was inserted after the improperly dirtied
  page was discovered.
* Wait for ordered extents outside the transaction. This isn't required
  by the locking rules, but it does improve transaction latencies.
* Reduce contention on the alloc_mutex by dropping it while incrementing
refs on a node/leaf and while dropping refs on a leaf.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
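
The diff on this page is limited to fs/btrfs/file.c, so the btrfs_delete_inode() side of the first bullet is not shown here. Below is a minimal sketch of the ordering that bullet describes, not the patch itself: btrfs_wait_ordered_range() is assumed to be the ordered-extent wait helper from fs/btrfs/ordered-data.c, and everything else in the function is elided.

/*
 * Sketch only: wait for ordered extents *after* dropping the page cache,
 * as described in the commit message.  The real btrfs_delete_inode()
 * does much more than this.
 */
static void btrfs_delete_inode(struct inode *inode)
{
	/* Drop the inode's page cache first; this part is cheap and fast. */
	truncate_inode_pages(&inode->i_data, 0);

	/*
	 * Then wait for any ordered extents (in-flight ordered data writes)
	 * on this inode before the delete/truncate work proceeds.
	 */
	btrfs_wait_ordered_range(inode, 0, (u64)-1);

	/* ... actual inode deletion continues here ... */
}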
Diffstat (limited to 'fs/btrfs/file.c')
-rw-r--r-- | fs/btrfs/file.c | 1
1 file changed, 1 insertion, 0 deletions
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index eccdb9562ba8..591a30208acd 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -75,6 +75,7 @@ static void btrfs_drop_pages(struct page **pages, size_t num_pages)
 	for (i = 0; i < num_pages; i++) {
 		if (!pages[i])
 			break;
+		ClearPageChecked(pages[i]);
 		unlock_page(pages[i]);
 		mark_page_accessed(pages[i]);
 		page_cache_release(pages[i]);
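
For context, this is how btrfs_drop_pages() reads with the hunk above applied. The surrounding lines are reconstructed from the hunk, and the comments are editorial, based on the commit message's description of the PageChecked/writepage-fixup interaction.

static void btrfs_drop_pages(struct page **pages, size_t num_pages)
{
	size_t i;

	for (i = 0; i < num_pages; i++) {
		if (!pages[i])
			break;
		/*
		 * PageChecked is used by the ordered-data writepage fixup
		 * path to decide whether a dirty page still needs fixing up.
		 * Clear it before releasing the page so a stale bit is not
		 * left behind when the write path drops its references.
		 */
		ClearPageChecked(pages[i]);
		unlock_page(pages[i]);
		mark_page_accessed(pages[i]);
		page_cache_release(pages[i]);
	}
}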