1. 09 Jan, 2011 1 commit
  2. 16 Dec, 2010 1 commit
    • net: fix nulls list corruptions in sk_prot_alloc · fcbdf09d
      Octavian Purdila authored
      
      
      Special care is taken inside sk_prot_alloc to avoid overwriting
      skc_node/skc_nulls_node. We should also avoid overwriting
      skc_bind_node/skc_portaddr_node.
      
      The patch fixes the following crash:
      
       BUG: unable to handle kernel paging request at fffffffffffffff0
       IP: [<ffffffff812ec6dd>] udp4_lib_lookup2+0xad/0x370
       [<ffffffff812ecc22>] __udp4_lib_lookup+0x282/0x360
       [<ffffffff812ed63e>] __udp4_lib_rcv+0x31e/0x700
       [<ffffffff812bba45>] ? ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ? ip_local_deliver+0x88/0xa0
       [<ffffffff812eda35>] udp_rcv+0x15/0x20
       [<ffffffff812bba45>] ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ip_local_deliver+0x88/0xa0
       [<ffffffff812bb2cd>] ip_rcv_finish+0x32d/0x6f0
       [<ffffffff8128c14c>] ? netif_receive_skb+0x99c/0x11c0
       [<ffffffff812bb94b>] ip_rcv+0x2bb/0x350
       [<ffffffff8128c14c>] netif_receive_skb+0x99c/0x11c0
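
      For illustration, a minimal user-space sketch of the technique: clearing a
      reused object while preserving its embedded list-node fields (struct and
      field names are simplified stand-ins, not the kernel's):

        #include <stddef.h>
        #include <string.h>

        struct fake_sock {
                int     refcnt;         /* cleared on reuse */
                void    *nulls_node;    /* must survive: lockless readers may still walk it */
                void    *portaddr_node; /* must survive as well */
                long    state;          /* cleared on reuse */
        };

        /* Clear everything except the two node fields by memset()ing the
         * regions before and after them, similar in spirit to what
         * sk_prot_alloc must do for skc_nulls_node/skc_portaddr_node. */
        static void clear_preserving_nodes(struct fake_sock *sk)
        {
                memset(sk, 0, offsetof(struct fake_sock, nulls_node));
                memset(&sk->state, 0,
                       sizeof(*sk) - offsetof(struct fake_sock, state));
        }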
      Signed-off-by: Leonard Crestez <lcrestez@ixiacom.com>
      Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fcbdf09d
  3. 09 Dec, 2010 1 commit
    • net: optimize INET input path further · 68835aba
      Eric Dumazet authored
      Followup of commit b178bb3d (net: reorder struct sock fields)
      
      Optimize INET input path a bit further, by :
      
      1) moving sk_refcnt close to sk_lock.
      
      This reduces the number of dirtied cache lines by one on 64-bit arches
      (with a 64-byte cache line size).
      
      2) moving inet_daddr & inet_rcv_saddr to the beginning of sk

      (same cache line as hash / family / bound_dev_if / nulls_node)

      This reduces the number of accessed cache lines in lookups by one, and does
      not increase the size of inet and timewait socks.
      inet and tw sockets now share the same place-holder for these fields.
      
      Before patch :
      
      offsetof(struct sock, sk_refcnt) = 0x10
      offsetof(struct sock, sk_lock) = 0x40
      offsetof(struct sock, sk_receive_queue) = 0x60
      offsetof(struct inet_sock, inet_daddr) = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274
      
      After patch :
      
      offsetof(struct sock, sk_refcnt) = 0x44
      offsetof(struct sock, sk_lock) = 0x48
      offsetof(struct sock, sk_receive_queue) = 0x68
      offsetof(struct inet_sock, inet_daddr) = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4
      
      compute_score() (udp or tcp) now uses a single cache line per ignored
      item, instead of two.
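
      The offsets above can be sanity-checked with a trivial offsetof() probe; a
      sketch against a cut-down stand-in struct (not the real struct sock /
      inet_sock):

        #include <stdio.h>
        #include <stddef.h>

        /* Stand-in: inet_daddr/inet_rcv_saddr placed first, so a lookup touching
         * hash / family / daddr / rcv_saddr stays within a single cache line. */
        struct fake_inet_sock {
                unsigned int    inet_daddr;     /* 0x0 after the patch */
                unsigned int    inet_rcv_saddr; /* 0x4 after the patch */
                unsigned short  family;
                unsigned short  num;
        };

        int main(void)
        {
                printf("inet_daddr     at 0x%zx\n",
                       offsetof(struct fake_inet_sock, inet_daddr));
                printf("inet_rcv_saddr at 0x%zx\n",
                       offsetof(struct fake_inet_sock, inet_rcv_saddr));
                return 0;
        }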
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      68835aba
  4. 06 Dec, 2010 1 commit
    • filter: fix sk_filter rcu handling · 46bcf14f
      Eric Dumazet authored
      Pavel Emelyanov tried to fix a race between sk_filter_(de|at)tach and
      sk_clone() in commit 47e958ea
      
      
      
      The problem is we can have several clones sharing a common sk_filter, and
      these clones might want to sk_filter_attach() their own filters at the
      same time, and can overwrite old_filter->rcu, corrupting RCU queues.

      We cannot use filter->rcu without being sure no other thread could do
      the same thing.
      
      Switch the code to a more conventional ref-counting technique: do the
      atomic decrement immediately and queue one RCU callback when the last
      reference is released.
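
      A minimal sketch of that pattern applied to sk_filter (kernel-style sketch,
      assuming an atomic_t refcnt and an rcu_head embedded in struct sk_filter;
      not the literal patch):

        static void sk_filter_release_rcu(struct rcu_head *rcu)
        {
                struct sk_filter *fp = container_of(rcu, struct sk_filter, rcu);

                kfree(fp);
        }

        static void sk_filter_release(struct sk_filter *fp)
        {
                /* Drop our reference immediately; only the thread dropping the
                 * last one queues the single RCU callback, so fp->rcu is
                 * written by exactly one context. */
                if (atomic_dec_and_test(&fp->refcnt))
                        call_rcu(&fp->rcu, sk_filter_release_rcu);
        }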
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      46bcf14f
  5. 02 Dec, 2010 1 commit
  6. 16 Nov, 2010 2 commits
    • net: reorder struct sock fields · b178bb3d
      Eric Dumazet authored
      
      
      Right now, fields in struct sock are not optimally ordered, because each
      path (RX softirq, TX completion, RX user, TX user) has to touch fields
      that are contained in many different cache lines.

      The really critical thing is to shrink the number of cache lines used at
      RX softirq time: a CPU handling softirqs for a device can receive many
      frames per second for many sockets. If the load is too big, we can drop
      frames at the NIC level. RPS or multiqueue cards can help, but it is
      better to reduce latency if possible.

      This patch starts with the UDP protocol, then additional patches will try
      to reduce latencies of other protocols as well.
      
      At RX softirq time, fields of interest for UDP protocol are :
      (not counting ones in inet struct for the lookup)
      
      Read/Written:
      sk_refcnt   (atomic increment/decrement)
      sk_rmem_alloc & sk_backlog.len (to check if there is room in queues)
      sk_receive_queue
      sk_backlog (if socket locked by user program)
      sk_rxhash
      sk_forward_alloc
      sk_drops
      
      Read only:
      sk_rcvbuf (sk_rcvqueues_full())
      sk_filter
      sk_wq
      sk_policy[0]
      sk_flags
      
      Additional notes :
      
      - sk_backlog has one hole on 64-bit arches. We can fill it to save 8
      bytes.
      - sk_backlog is used only if the RX softirq handler finds the socket while
      locked by user.
      - sk_rxhash is written only once per flow.
      - sk_drops is written only if queues are full.
      
      Final layout :
      
      [1] One section grouping all read/write fields, but placing rxhash and
      sk_backlog at the end of this section.
      
      [2] One section grouping all read fields in RX handler
         (sk_filter, sk_rcvbuf, sk_wq)
      
      [3] Section used by other paths
      
      I'll post a patch on its own to put sk_refcnt at the end of struct
      sock_common so that it shares the same cache line as section [1].
      
      New offsets on 64bit arch :
      
      sizeof(struct sock)=0x268
      offsetof(struct sock, sk_refcnt)  =0x10
      offsetof(struct sock, sk_lock)    =0x48
      offsetof(struct sock, sk_receive_queue)=0x68
      offsetof(struct sock, sk_backlog)=0x80
      offsetof(struct sock, sk_rmem_alloc)=0x80
      offsetof(struct sock, sk_forward_alloc)=0x98
      offsetof(struct sock, sk_rxhash)=0x9c
      offsetof(struct sock, sk_rcvbuf)=0xa4
      offsetof(struct sock, sk_drops) =0xa0
      offsetof(struct sock, sk_filter)=0xa8
      offsetof(struct sock, sk_wq)=0xb0
      offsetof(struct sock, sk_policy)=0xd0
      offsetof(struct sock, sk_flags) =0xe0
      
      Instead of :
      
      sizeof(struct sock)=0x270
      offsetof(struct sock, sk_refcnt)  =0x10
      offsetof(struct sock, sk_lock)    =0x50
      offsetof(struct sock, sk_receive_queue)=0xc0
      offsetof(struct sock, sk_backlog)=0x70
      offsetof(struct sock, sk_rmem_alloc)=0xac
      offsetof(struct sock, sk_forward_alloc)=0x10c
      offsetof(struct sock, sk_rxhash)=0x128
      offsetof(struct sock, sk_rcvbuf)=0x4c
      offsetof(struct sock, sk_drops) =0x16c
      offsetof(struct sock, sk_filter)=0x198
      offsetof(struct sock, sk_wq)=0x88
      offsetof(struct sock, sk_policy)=0x98
      offsetof(struct sock, sk_flags) =0x130
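
      Schematically, the grouping looks like this (an illustration with condensed
      field types; the real declaration differs):

        /* Sketch of the grouping only; real types and exact order differ. */
        struct sock_sketch {
                /* [1] read/write fields of the RX softirq hot path; rxhash and
                 *     backlog are placed at the end of this section */
                int             sk_rmem_alloc;
                int             sk_forward_alloc;
                int             sk_drops;
                unsigned int    sk_rxhash;

                /* [2] read-mostly fields used by the RX handler */
                int             sk_rcvbuf;
                void            *sk_filter;
                void            *sk_wq;
                unsigned long   sk_flags;

                /* [3] fields used only by other paths (TX completion, user context) */
                int             sk_sndbuf;
        };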
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b178bb3d
    • udp: use atomic_inc_not_zero_hint · c31504dc
      Eric Dumazet authored
      
      
      A UDP socket's refcount is usually 2, unless an incoming frame is going to
      be queued in the receive or backlog queue.

      Using atomic_inc_not_zero_hint() helps reduce latency, because the
      processor issues fewer memory transactions.
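
      A user-space sketch of the idea behind atomic_inc_not_zero_hint(): start the
      compare-and-swap loop from an expected value (the hint) instead of loading
      the counter first, so the common case needs no prior read (C11 atomics used
      here as a stand-in for the kernel primitives):

        #include <stdatomic.h>
        #include <stdbool.h>

        static bool inc_not_zero_hint(atomic_int *v, int hint)
        {
                int expected = hint;    /* e.g. 2 for an idle UDP socket */

                while (expected != 0) {
                        if (atomic_compare_exchange_weak(v, &expected, expected + 1))
                                return true;
                        /* on failure, 'expected' now holds the current value; retry */
                }
                return false;           /* counter hit zero: object is going away */
        }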
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c31504dc
  7. 10 Nov, 2010 1 commit
  8. 25 Oct, 2010 1 commit
  9. 26 Sep, 2010 1 commit
    • net: update SOCK_MIN_RCVBUF · 7a91b434
      Eric Dumazet authored
      
      
      SOCK_MIN_RCVBUF's current value is 256 bytes.

      It does not permit receiving even the smallest possible frame, considering
      socket sk_rmem_alloc/sk_rcvbuf accounting uses skb truesizes. On 64-bit
      arches, sizeof(struct sk_buff) is 240 bytes. Add the typical 64 bytes of
      headroom, and we go over the limit.

      With old kernels and 32-bit arches, we were under the limit, if the net
      driver was doing copybreak.
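
      The arithmetic, spelled out (numbers from this message; 64-bit arch assumed):

        #include <stdio.h>

        int main(void)
        {
                const unsigned int sock_min_rcvbuf = 256;  /* current SOCK_MIN_RCVBUF */
                const unsigned int skb_overhead    = 240;  /* sizeof(struct sk_buff), 64-bit */
                const unsigned int headroom        = 64;   /* typical driver headroom */

                /* truesize charged to sk_rmem_alloc, even before any payload */
                unsigned int truesize = skb_overhead + headroom;

                printf("truesize %u vs limit %u -> %s\n", truesize, sock_min_rcvbuf,
                       truesize > sock_min_rcvbuf ? "frame rejected" : "frame accepted");
                return 0;
        }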
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7a91b434
  10. 08 Sep, 2010 1 commit
    • udp: add rehash on connect() · 719f8358
      Eric Dumazet authored
      commit 30fff923 introduced in linux-2.6.33 (udp: bind() optimisation)
      added a secondary hash on UDP, hashed on (local addr, local port).
      
      The problem is that the following sequence:

      fd = socket(...)
      connect(fd, &remote, ...)

      not only selects the remote end point (address and port), but also sets
      the local address, while the UDP stack stored the socket in the secondary
      hash table while its local address was still INADDR_ANY (or the IPv6
      equivalent).

      The sequence is:
       - autobind() : choose a random local port, insert socket in hash tables
                    [while local address is INADDR_ANY]
       - connect() : set remote address and port, change local address to the IP
                    given by a route lookup.

      When an incoming UDP frame arrives, if more than 10 sockets are found in
      the primary hash table, we switch to the secondary table, and fail to find
      the socket because its local address changed.
      
      One solution to this problem is to rehash the datagram socket if needed.

      We add a new rehash(struct sock *) method in "struct proto", and
      implement this method for UDP v4 & v6, using a common helper.

      This rehashing only takes care of the secondary hash table, since the
      primary hash (based on local port only) is not changed.
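
      The local-address change that defeats the secondary hash can be seen from
      user space; a small sketch (IPv4, error handling omitted; 192.0.2.1 is just
      a documentation address):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int main(void)
        {
                struct sockaddr_in local = { 0 }, remote = { 0 };
                socklen_t len = sizeof(local);
                int fd = socket(AF_INET, SOCK_DGRAM, 0);

                local.sin_family = AF_INET;             /* 0.0.0.0, port 0: like autobind */
                bind(fd, (struct sockaddr *)&local, sizeof(local));
                getsockname(fd, (struct sockaddr *)&local, &len);
                printf("after bind:    %s:%u\n",
                       inet_ntoa(local.sin_addr), ntohs(local.sin_port));

                remote.sin_family = AF_INET;
                remote.sin_port = htons(53);
                inet_pton(AF_INET, "192.0.2.1", &remote.sin_addr);
                connect(fd, (struct sockaddr *)&remote, sizeof(remote));

                len = sizeof(local);
                getsockname(fd, (struct sockaddr *)&local, &len); /* local addr now set by route lookup */
                printf("after connect: %s:%u\n",
                       inet_ntoa(local.sin_addr), ntohs(local.sin_port));

                close(fd);
                return 0;
        }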
      Reported-by: Krzysztof Piotr Oledzki <ole@ans.pl>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Krzysztof Piotr Oledzki <ole@ans.pl>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      719f8358
  11. 19 Aug, 2010 1 commit
  12. 10 Aug, 2010 1 commit
  13. 14 Jul, 2010 1 commit
    • net: fix problem in reading sock TX queue · b0f77d0e
      Tom Herbert authored
      
      
      Fix a problem in reading the tx_queue recorded in a socket.  In
      dev_pick_tx, the TX queue is read by doing a check with
      sk_tx_queue_recorded on the socket, followed by a sk_tx_queue_get.
      The problem is that there is no mutual exclusion across these
      calls in the socket, so it is possible that the queue in the
      sock can be invalidated after sk_tx_queue_recorded is called, so
      that sk_tx_queue_get returns -1, which sets 65535 in queue_index
      and thus dev_pick_tx picks a bogus queue, which can cause a crash
      in dev_queue_xmit.

      We fix this by only calling sk_tx_queue_get, which does the proper
      checks.  The interface is that sk_tx_queue_get returns the TX queue
      if the sock argument is non-NULL and the TX queue is recorded, else it
      returns -1.  sk_tx_queue_recorded is no longer used, so it can be
      completely removed.
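
      A sketch of the consolidated accessor (field name as in that era's struct
      sock; essentially what the description above implies):

        /* sk_tx_queue_mapping is kept at -1 while no queue is recorded, so a
         * single read covers both the "no socket" and "not recorded" cases. */
        static inline int sk_tx_queue_get(const struct sock *sk)
        {
                return sk ? sk->sk_tx_queue_mapping : -1;
        }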
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b0f77d0e
  14. 12 Jul, 2010 1 commit
  15. 16 Jun, 2010 1 commit
  16. 02 Jun, 2010 1 commit
  17. 01 Jun, 2010 1 commit
  18. 27 May, 2010 1 commit
    • net: fix lock_sock_bh/unlock_sock_bh · 8a74ad60
      Eric Dumazet authored
      
      
      This new sock lock primitive was introduced to speed up some user-context
      socket manipulation. But it is unsafe to protect two threads, one using
      regular lock_sock/release_sock, the other using lock_sock_bh/unlock_sock_bh.

      This patch changes lock_sock_bh to be careful about the 'owned' state.
      If owned is found to be set, we must take the slow path.
      lock_sock_bh() now returns a boolean saying whether the slow path was
      taken, and this boolean is used at unlock_sock_bh time to call the
      appropriate unlock function.

      After this change, BHs may be either disabled or enabled during the
      lock_sock_bh/unlock_sock_bh protected section. This might be misleading,
      so we rename these functions to lock_sock_fast()/unlock_sock_fast().
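
      Callers then follow this pattern (a sketch of the calling convention):

        bool slow = lock_sock_fast(sk);  /* true if the slow path (full lock_sock) was taken */

        /* ... short critical section: receive queue / memory accounting ... */

        unlock_sock_fast(sk, slow);      /* release_sock() or spin_unlock_bh() accordingly */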
      Reported-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8a74ad60
  19. 25 May, 2010 1 commit
  20. 24 May, 2010 1 commit
    • cls_cgroup: Store classid in struct sock · f8451725
      Herbert Xu authored
      
      
      Up until now cls_cgroup has relied on fetching the classid out of
      the currently executing thread.  This runs into trouble when packet
      processing is delayed, in which case it may execute in another
      thread's context.

      Furthermore, even when a packet is not delayed we may fail to
      classify it if soft IRQs have been disabled, because this scenario
      is indistinguishable from one where a packet unrelated to the
      current thread is processed by a real soft IRQ.

      In fact, the current semantics are inherently broken, as a single
      skb may be constructed out of the writes of two different tasks.
      A different manifestation of this problem is when the TCP stack
      transmits in response to an incoming ACK.  This is currently
      unclassified.
      
      As we already have a concept of packet ownership for accounting
      purposes in the skb->sk pointer, this is a natural place to store
      the classid in a persistent manner.
      
      This patch adds the cls_cgroup classid in struct sock, filling up
      an existing hole on 64-bit :)
      
      The value is set at socket creation time.  So all sockets created
      via socket(2) automatically gain the classid of the thread creating them.
      Whenever another process touches the socket by either reading or
      writing to it, we will change the socket classid to that of the
      process if it has a valid (non-zero) classid.
      
      For sockets created on inbound connections through accept(2), we
      inherit the classid of the original listening socket through
      sk_clone, possibly preceding the actual accept(2) call.
      
      In order to minimise risks, I have not made this the authoritative
      classid.  For now it is only used as a backup when we execute
      with soft IRQs disabled.  Once we're completely happy with its
      semantics we can use it as the sole classid.
      
      Footnote: I have rearranged the error path on cls_cgroup module
      creation.  If we didn't do this, then there would be a window where
      someone could create a tc rule using cls_cgroup before the cgroup
      subsystem has been registered.
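
      A sketch of the update helper this implies, assuming a task_cls_classid()
      accessor for the current task's cgroup classid (names as used by
      cls_cgroup; not necessarily the exact patch):

        void sock_update_classid(struct sock *sk)
        {
                u32 classid = task_cls_classid(current);

                /* only overwrite with a valid (non-zero) classid */
                if (classid && classid != sk->sk_classid)
                        sk->sk_classid = classid;
        }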
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f8451725
  21. 17 May, 2010 1 commit
    • net: add a noref bit on skb dst · 7fee226a
      Eric Dumazet authored
      
      
      Use low order bit of skb->_skb_dst to tell dst is not refcounted.
      
      Change _skb_dst to _skb_refdst to make sure all uses are caught.
      
      skb_dst() returns the dst, regardless of noref bit set or not, but
      with a lockdep check to make sure a noref dst is not given if current
      user is not rcu protected.
      
      New skb_dst_set_noref() helper to set a non-refcounted dst on a skb.
      (with lockdep check)
      
      skb_dst_drop() drops a reference only if skb dst was refcounted.
      
      skb_dst_force() helper is used to force a refcount on the dst, when the
      skb is queued and no longer RCU protected.
      
      Use skb_dst_force() in __sk_add_backlog(), __dev_xmit_skb() if
      !IFF_XMIT_DST_RELEASE or skb enqueued on qdisc queue, in
      sock_queue_rcv_skb(), in __nf_queue().
      
      Use skb_dst_force() in dev_requeue_skb().
      
      Note: dst_use_noref() still dirties the dst; we might transform it
      later to do one dirtying per jiffy.
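
      A sketch of the pointer tagging involved (simplified, without the lockdep
      check):

        #define SKB_DST_NOREF   1UL     /* low order bit: dst is not refcounted */
        #define SKB_DST_PTRMASK (~SKB_DST_NOREF)

        static inline struct dst_entry *skb_dst(const struct sk_buff *skb)
        {
                /* return the dst whether or not the noref bit is set */
                return (struct dst_entry *)(skb->_skb_refdst & SKB_DST_PTRMASK);
        }

        static inline void skb_dst_set_noref(struct sk_buff *skb, struct dst_entry *dst)
        {
                /* caller must be RCU protected; no reference is taken */
                skb->_skb_refdst = (unsigned long)dst | SKB_DST_NOREF;
        }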
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7fee226a
  22. 16 May, 2010 1 commit
    • net: Introduce sk_route_nocaps · a465419b
      Eric Dumazet authored
      
      
      TCP-MD5 sessions have intermittent failures when the route cache is
      invalidated. ip_queue_xmit() has to find a new route and calls
      sk_setup_caps(sk, &rt->u.dst), destroying the

      sk->sk_route_caps &= ~NETIF_F_GSO_MASK

      that MD5 desperately tries to enforce all along its way (from
      tcp_transmit_skb() for example).

      So we send a few bad packets, and everything is fine when
      tcp_transmit_skb() is called again for this socket.
      
      Since ip_queue_xmit() is at a lower level than TCP-MD5, I chose to use a
      socket field, sk_route_nocaps, containing bits to mask on sk_route_caps.
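
      A sketch of the new mask and its helper (as described above; details may
      differ from the actual patch):

        static inline void sk_nocaps_add(struct sock *sk, int flags)
        {
                sk->sk_route_nocaps |= flags;   /* remember the ban ...         */
                sk->sk_route_caps   &= ~flags;  /* ... and apply it immediately */
        }

      sk_setup_caps() then finishes with sk->sk_route_caps &= ~sk->sk_route_nocaps,
      so a route change can no longer re-enable GSO on a socket where TCP-MD5 has
      called something like sk_nocaps_add(sk, NETIF_F_GSO_MASK).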
      Reported-by: Bhaskar Dutta <bhaskie@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a465419b
  23. 02 May, 2010 1 commit
  24. 01 May, 2010 1 commit
    • net: sock_def_readable() and friends RCU conversion · 43815482
      Eric Dumazet authored
      
      
      The sk_callback_lock rwlock actually protects the sk->sk_sleep pointer, so
      we need two atomic operations (and associated dirtying) per incoming
      packet.
      
      RCU conversion is pretty much needed :
      
      1) Add a new structure, called "struct socket_wq" to hold all fields
      that will need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).
      
      [Future patch will add a list anchor for wakeup coalescing]
      
      2) Attach one of such structure to each "struct socket" created in
      sock_alloc_inode().
      
      3) Respect RCU grace period when freeing a "struct socket_wq"
      
      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a pointer to
      "struct socket_wq"
      
      5) Change sk_sleep() function to use new sk->sk_wq instead of
      sk->sk_sleep
      
      6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used inside
      a rcu_read_lock() section.
      
      7) Change all sk_has_sleeper() callers to :
        - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
        - Use wq_has_sleeper() to wake up tasks when needed.
        - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)
      
      8) sock_wake_async() is modified to use rcu protection as well.
      
      9) Exceptions :
        macvtap, drivers/net/tun.c, af_unix use an integrated "struct socket_wq"
      instead of dynamically allocated ones. They don't need RCU freeing.
      
      Some cleanups or followups are probably needed, (possible
      sk_callback_lock conversion to a spinlock for example...).
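
      The resulting wakeup pattern on the caller side (point 7) looks roughly
      like this (a sketch of sock_def_readable() and friends):

        struct socket_wq *wq;

        rcu_read_lock();
        wq = rcu_dereference(sk->sk_wq);
        if (wq_has_sleeper(wq))                 /* memory barrier + waitqueue_active() */
                wake_up_interruptible(&wq->wait);
        sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
        rcu_read_unlock();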
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      43815482
  25. 30 Apr, 2010 1 commit
  26. 28 Apr, 2010 1 commit
    • net: speedup udp receive path · 4b0b72f7
      Eric Dumazet authored
      Since commit 95766fff ([UDP]: Add memory accounting.),
      each received packet needs one extra lock_sock()/release_sock() pair.
      
      This added latency because of possible backlog handling. Then later,
      ticket spinlocks added yet another latency source in case of DDOS.
      
      This patch introduces lock_sock_bh() and unlock_sock_bh()
      synchronization primitives, avoiding one atomic operation and backlog
      processing.
      
      skb_free_datagram_locked() uses them instead of the full blown
      lock_sock()/release_sock(). The skb is orphaned inside the locked section
      for proper socket memory reclaim, and finally freed outside of it.

      The UDP receive path now takes the socket spinlock only once.
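
      skb_free_datagram_locked() then looks roughly like this (a sketch of the
      flow, not the verbatim function):

        void skb_free_datagram_locked(struct sock *sk, struct sk_buff *skb)
        {
                lock_sock_bh(sk);               /* spinlock + BH disable, no backlog run */
                skb_orphan(skb);                /* uncharge sk_rmem_alloc under the lock */
                sk_mem_reclaim_partial(sk);
                unlock_sock_bh(sk);

                kfree_skb(skb);                 /* the actual free happens outside the lock */
        }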
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4b0b72f7
  27. 27 Apr, 2010 3 commits
  28. 22 Apr, 2010 1 commit
    • dst: rcu check refinement · f68c224f
      Eric Dumazet authored
      __sk_dst_get() might be called from softirq, with socket lock held.
      
      [  159.026180] include/net/sock.h:1200 invoked rcu_dereference_check()
      without protection!
      [  159.026261] 
      [  159.026261] other info that might help us debug this:
      [  159.026263] 
      [  159.026425] 
      [  159.026426] rcu_scheduler_active = 1, debug_locks = 0
      [  159.026552] 2 locks held by swapper/0:
      [  159.026609]  #0:  (&icsk->icsk_retransmit_timer){+.-...}, at:
      [<ffffffff8104fc15>] run_timer_softirq+0x105/0x350
      [  159.026839]  #1:  (slock-AF_INET){+.-...}, at: [<ffffffff81392b8f>]
      tcp_write_timer+0x2f/0x1e0
      [  159.027063] 
      [  159.027064] stack backtrace:
      [  159.027172] Pid: 0, comm: swapper Not tainted 2.6.34-rc5-03707-gde498c89-dirty #36
      [  159.027252] Call Trace:
      [  159.027306]  <IRQ>  [<ffffffff810718ef>] lockdep_rcu_dereference
      +0xaf/0xc0
      [  159.027411]  [<ffffffff8138e4f7>] tcp_current_mss+0xa7/0xb0
      [  159.027537]  [<ffffffff8138fa49>] tcp_write_wakeup+0x89/0x190
      [  159.027600]  [<ffffffff81391936>] tcp_send_probe0+0x16/0x100
      [  159.027726]  [<ffffffff81392cd9>] tcp_write_timer+0x179/0x1e0
      [  159.027790]  [<ffffffff8104fca1>] run_timer_softirq+0x191/0x350
      [  159.027980]  [<ffffffff810477ed>] __do_softirq+0xcd/0x200
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f68c224f
  29. 20 Apr, 2010 1 commit
  30. 13 Apr, 2010 1 commit
    • net: sk_dst_cache RCUification · b6c6712a
      Eric Dumazet authored
      
      
      With latest CONFIG_PROVE_RCU stuff, I felt more comfortable to make this
      work.
      
      sk->sk_dst_cache is currently protected by a rwlock (sk_dst_lock)
      
      This rwlock is readlocked for a very small amount of time, and dst
      entries are already freed after RCU grace period. This calls for RCU
      again :)
      
      This patch converts sk_dst_lock to a spinlock, and uses RCU for readers.

      __sk_dst_get() is supposed to be called with rcu_read_lock() or with the
      socket locked by the user, so use the appropriate rcu_dereference_check()
      condition (rcu_read_lock_held() || sock_owned_by_user(sk)).
      
      This patch avoids two atomic ops per tx packet on UDP connected sockets,
      for example, and permits sk_dst_lock to be much less dirtied.
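
      The reader side then becomes (a sketch matching the condition above):

        static inline struct dst_entry *__sk_dst_get(struct sock *sk)
        {
                return rcu_dereference_check(sk->sk_dst_cache,
                                             rcu_read_lock_held() ||
                                             sock_owned_by_user(sk));
        }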
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b6c6712a
  31. 30 Mar, 2010 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  ie. if only gfp is used,
        gfp.h, if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        bloc...
      5a0e3ad6
  32. 08 Mar, 2010 1 commit
  33. 05 Mar, 2010 2 commits
    • net: backlog functions rename · a3a858ff
      Zhu Yi authored
      
      
      sk_add_backlog -> __sk_add_backlog
      sk_add_backlog_limited -> sk_add_backlog
      Signed-off-by: Zhu Yi <yi.zhu@intel.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a3a858ff
    • net: add limit for socket backlog · 8eae939f
      Zhu Yi authored
      
      
      We got a system OOM while running some UDP netperf testing on the loopback
      device. The case is multiple senders sending stream UDP packets to a single
      receiver via loopback on the local host. Of course, the receiver is not
      able to handle all the packets in time. But we surprisingly found that
      these packets were not discarded due to the receiver's sk->sk_rcvbuf limit.
      Instead, they kept queuing to sk->sk_backlog and finally ate up all
      the memory. We believe this is a security hole in which a non-privileged
      user can crash the system.

      The root cause for this problem is that, when the receiver is doing
      __release_sock() (i.e. after userspace recv, kernel udp_recvmsg ->
      skb_free_datagram_locked -> release_sock), it moves skbs from the backlog
      to sk_receive_queue with softirqs enabled. In the above case, multiple
      busy senders will almost make it an endless loop. The skbs in the
      backlog end up eating all the system memory.

      The issue is not only for UDP. Any protocol using the socket backlog is
      potentially affected. The patch adds a limit for the socket backlog so that
      the backlog size cannot be expanded endlessly.
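
      A sketch of the limit check, using the post-rename names from the sibling
      commit (sk_add_backlog / __sk_add_backlog); the exact threshold in the real
      patch may differ:

        static inline int sk_add_backlog(struct sock *sk, struct sk_buff *skb)
        {
                /* Bound the backlog so busy senders cannot grow it without
                 * limit while the socket is owned by the user. */
                if (sk->sk_backlog.len + skb->truesize > (unsigned int)sk->sk_rcvbuf)
                        return -ENOBUFS;        /* caller drops the skb */

                __sk_add_backlog(sk, skb);      /* the old unconditional add */
                sk->sk_backlog.len += skb->truesize;
                return 0;
        }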
      Reported-by: Alex Shi <alex.shi@intel.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Cc: "Pekka Savola (ipv6)" <pekkas@netcore.fi>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Vlad Yasevich <vladislav.yasevich@hp.com>
      Cc: Sridhar Samudrala <sri@us.ibm.com>
      Cc: Jon Maloy <jon.maloy@ericsson.com>
      Cc: Allan Stephens <allan.stephens@windriver.com>
      Cc: Andrew Hendry <andrew.hendry@gmail.com>
      Signed-off-by: Zhu Yi <yi.zhu@intel.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8eae939f
  34. 22 Feb, 2010 1 commit
  35. 14 Feb, 2010 1 commit
  36. 10 Feb, 2010 1 commit