author     Nick Terrell                              2017-06-29 10:57:26 -0700
committer  David Sterba                              2017-08-16 16:12:02 +0200
commit     26b28dce50091ae36ebb0bf9cb814a43861f0641 (patch)
tree       7e3d483c39292a2fb117b078d30d02a3b0675a0e /fs
parent     913e153572218c911125414d4ca1f8531f20c120 (diff)
btrfs: Keep one more workspace around
find_workspace() allocates up to num_online_cpus() + 1 workspaces.
free_workspace() will only keep num_online_cpus() workspaces. When
(de)compressing we will allocate num_online_cpus() + 1 workspaces, then
free one, and repeat. Instead, we can just keep num_online_cpus() + 1
workspaces around, and never have to allocate/free another workspace in the
common case.
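For illustration, here is a minimal user-space C sketch of the pool behaviour described above; the struct, helper names, and the burst loop are invented for this example and are not the kernel code. With a keep limit of num_online_cpus(), every burst of num_online_cpus() + 1 users allocates one extra workspace and frees it again; raising the limit by one caches that workspace after the first burst.

#include <stdio.h>

/*
 * Hypothetical stand-in for the btrfs workspace pool: "idle" models the
 * length of the idle_ws list and "keep_limit" models the comparison
 * against num_online_cpus() in free_workspace().
 */
struct pool {
	int idle;	/* cached (idle) workspaces */
	int keep_limit;	/* how many idle workspaces we are willing to keep */
	int allocs;	/* total allocations performed */
};

/* Roughly find_workspace(): reuse an idle workspace or allocate one. */
static void get_ws(struct pool *p)
{
	if (p->idle > 0)
		p->idle--;
	else
		p->allocs++;
}

/* Roughly free_workspace(): cache the workspace only while under the limit. */
static void put_ws(struct pool *p)
{
	if (p->idle < p->keep_limit)
		p->idle++;
	/* otherwise the workspace would be freed back to the allocator */
}

/* Run repeated bursts of ncpus + 1 concurrent users and count allocations. */
static int simulate(int ncpus, int keep_limit, int bursts)
{
	struct pool p = { .idle = 0, .keep_limit = keep_limit, .allocs = 0 };
	int b, i;

	for (b = 0; b < bursts; b++) {
		for (i = 0; i < ncpus + 1; i++)
			get_ws(&p);
		for (i = 0; i < ncpus + 1; i++)
			put_ws(&p);
	}
	return p.allocs;
}

int main(void)
{
	int ncpus = 2;	/* matches the 2-core VM used in the test below */

	printf("keep ncpus:     %d allocations\n", simulate(ncpus, ncpus, 100));
	printf("keep ncpus + 1: %d allocations\n", simulate(ncpus, ncpus + 1, 100));
	return 0;
}

With ncpus = 2, the first variant allocates a workspace on every burst (102 allocations over 100 bursts), while the second allocates only the initial 3, mirroring the behaviour observed in the test below.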
I tested on an Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM. I mounted a
BtrFS partition with -o compress-force={lzo,zlib,zstd} and logged whenever
a workspace was allocated or freed. Then I copied vmlinux (527 MB) to the
partition. Before the patch, during the copy it would allocate and free 5-6
workspaces. After, it only allocated the initial 3. This held true for lzo,
zlib, and zstd. The time it took to execute cp vmlinux /mnt/btrfs && sync
dropped from 1.70s to 1.44s with lzo compression, and from 2.04s to 1.80s
for zstd compression.
Signed-off-by: Nick Terrell <terrelln@fb.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Diffstat (limited to 'fs')
-rw-r--r--   fs/btrfs/compression.c   2
1 file changed, 1 insertion, 1 deletion
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index d2ef9ac2a630..3896bd0175ec 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -825,7 +825,7 @@ static void free_workspace(int type, struct list_head *workspace)
 	int *free_ws = &btrfs_comp_ws[idx].free_ws;
 
 	spin_lock(ws_lock);
-	if (*free_ws < num_online_cpus()) {
+	if (*free_ws <= num_online_cpus()) {
 		list_add(workspace, idle_ws);
 		(*free_ws)++;
 		spin_unlock(ws_lock);