author     Hannes Frederic Sowa    2015-11-17 15:10:59 +0100
committer  David S. Miller         2015-11-17 15:25:45 -0500
commit     a3a116e04cc6a94d595ead4e956ab1bc1d2f4746
tree       311448a945154e0247d25366a786e9c4bddbf197 /net/unix
parent     b22b941b2c253a20e1d000c671594c4f3f0a3858
af_unix: take receive queue lock while appending new skb
While we may not need to use sk_buff_head.lock here at all in the future, removing it is a rather larger change, as it affects the af_unix fd garbage collector, diag and socket cleanups. This is too much for a stable patch.

For the time being, grab sk_buff_head.lock without disabling bh and irqs, i.e. don't use the locked skb_queue_tail().

Fixes: 869e7c62486e ("net: af_unix: implement stream sendpage support")
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Reported-by: Eric Dumazet <edumazet@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
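For readers less familiar with the skb queue helpers, the following is a brief, illustrative sketch (not part of the patch) contrasting the locked helper skb_queue_tail(), which takes list->lock with spin_lock_irqsave() and therefore also disables interrupts, with the open-coded spin_lock()/__skb_queue_tail() pattern the patch uses, which leaves bh and irqs enabled. The helper function names below are made up for illustration.

/*
 * Illustrative sketch only, not code from this commit.
 */
#include <linux/skbuff.h>
#include <linux/spinlock.h>

static void queue_skb_locked_helper(struct sk_buff_head *list,
				    struct sk_buff *skb)
{
	/* Library helper: takes list->lock with irqs disabled. */
	skb_queue_tail(list, skb);
}

static void queue_skb_open_coded(struct sk_buff_head *list,
				 struct sk_buff *skb)
{
	/* Patch's approach: plain spin_lock, bh and irqs stay enabled. */
	spin_lock(&list->lock);
	__skb_queue_tail(list, skb);
	spin_unlock(&list->lock);
}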
Diffstat (limited to 'net/unix')
-rw-r--r--  net/unix/af_unix.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index a8352db5c5b5..955ec152cb71 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1813,8 +1813,11 @@ alloc_skb:
 		skb->truesize += size;
 		atomic_add(size, &sk->sk_wmem_alloc);
 
-		if (newskb)
+		if (newskb) {
+			spin_lock(&other->sk_receive_queue.lock);
 			__skb_queue_tail(&other->sk_receive_queue, newskb);
+			spin_unlock(&other->sk_receive_queue.lock);
+		}
 
 		unix_state_unlock(other);
 		mutex_unlock(&unix_sk(other)->readlock);
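For context, a hedged sketch of the consumer side this lock serializes against (illustrative only, not taken from this commit): readers pull skbs off the same queue under sk_receive_queue.lock, for example via skb_dequeue(), which takes list->lock internally, so an unlocked __skb_queue_tail() on the sender side could race with a concurrent dequeue. The function name below is hypothetical.

/*
 * Illustrative reader-side sketch, assumptions labelled above.
 */
#include <net/sock.h>
#include <linux/skbuff.h>

static struct sk_buff *consume_one_skb(struct sock *sk)
{
	/*
	 * skb_dequeue() locks sk->sk_receive_queue.lock, unlinks the
	 * head skb (if any) and unlocks, so it is safe against the
	 * locked append added by this patch.
	 */
	return skb_dequeue(&sk->sk_receive_queue);
}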