| author | David S. Miller | 2015-08-26 11:01:45 -0700 |
|---|---|---|
| committer | David S. Miller | 2015-08-26 11:01:45 -0700 |
| commit | 8c5bbe77d4cd012668cdaf501bbd1cbfb9ad1d24 (patch) | |
| tree | ee2e2eb473e84ba5c052dddd05ef09db5b7faeb3 /drivers | |
| parent | dc8242f704fee4fddcbebfcc5a4d08526951444a (diff) | |
| parent | cff82457c5584f6a96d2b85d1a88b81ba304a330 (diff) | |
Merge branch 'act_bpf_lockless'
Alexei Starovoitov says:
====================
act_bpf: remove spinlock in fast path
The v1 version had a race condition in the cleanup path of bpf_prog.
I tried to fix it by adding a new 'cleanup_rcu' callback to 'struct tcf_common'
and calling it from the act_api cleanup path, but Daniel noticed
(thanks for the idea!) that most classifiers already do their action cleanup
from an rcu callback.
So instead this set of patches converts the tcindex and rsvp classifiers to call
tcf_exts_destroy() after an rcu grace period. Since the action cleanup logic
in __tcf_hash_release() only runs once both bind and refcnt reach zero,
the cleanup() callback is guaranteed to be invoked from an rcu callback.
More specifically:
patches 1 and 2 - simple fixes
patches 3 and 4 - convert tcf_exts_destroy() in tcindex and rsvp to call_rcu (see the sketch below)
patch 5 - removes the spinlock from act_bpf
The cleanup of actions is now universally done after an rcu grace period,
and in the future we can drop the (now unnecessary) call_rcu from tcf_hash_destroy().
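
A minimal sketch of the conversion described in patches 3 and 4, assuming a
per-filter structure with an embedded rcu_head; the struct and function names
here are illustrative, not the exact cls_tcindex/cls_rsvp code. The point is
that tcf_exts_destroy(), and with it the action cleanup it triggers, only runs
after readers of the old filter are gone:

```c
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <net/pkt_cls.h>

/* Illustrative filter-result wrapper; the real classifiers embed
 * tcf_exts and an rcu_head in their own per-filter structures. */
struct example_filter_result {
	struct tcf_exts exts;
	struct rcu_head rcu;
};

/* RCU callback: by now no reader can still hold a reference to the
 * old filter, so tearing down its actions here cannot race with a
 * lockless fast path. */
static void example_destroy_exts_rcu(struct rcu_head *head)
{
	struct example_filter_result *r =
		container_of(head, struct example_filter_result, rcu);

	tcf_exts_destroy(&r->exts);
	kfree(r);
}

/* Delete path: the caller unlinks the filter from the lookup
 * structures first, then defers the tcf_exts teardown past the
 * grace period instead of destroying it synchronously. */
static void example_delete_filter(struct example_filter_result *r)
{
	call_rcu(&r->rcu, example_destroy_exts_rcu);
}
```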
Patch 5 uses synchronize_rcu() in the act_bpf replacement path, since replacement
is very rare and the alternative of dynamically allocating a 'struct tcf_bpf_cfg'
just to pass it to call_rcu looks even less appealing.
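
A hedged sketch of that replacement pattern, with illustrative names rather than
the actual act_bpf code: the old program is remembered in a stack-local config,
the new program is published with rcu_assign_pointer(), and synchronize_rcu()
makes it safe to release the old one without allocating anything just to feed
call_rcu():

```c
#include <linux/rcupdate.h>
#include <linux/bpf.h>
#include <linux/filter.h>

/* Illustrative stand-in for the on-stack config used to remember what
 * must be released once the grace period has elapsed. */
struct example_bpf_cfg {
	struct bpf_prog *filter;
};

static void example_replace_prog(struct bpf_prog __rcu **slot,
				 struct bpf_prog *new_prog)
{
	struct example_bpf_cfg old = {
		/* Caller is assumed to hold the update-side lock (RTNL). */
		.filter = rcu_dereference_protected(*slot, 1),
	};

	/* Publish the new program; lockless readers see either the old
	 * or the new pointer, never a half-updated one. */
	rcu_assign_pointer(*slot, new_prog);

	/* Replacement is rare, so waiting for a grace period here is
	 * cheaper than allocating a struct just to hand to call_rcu(). */
	synchronize_rcu();

	/* No reader can still be executing the old program. */
	if (old.filter)
		bpf_prog_put(old.filter);
}
```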
====================
Signed-off-by: David S. Miller <davem@davemloft.net>