1. 17 Jan, 2013 5 commits
    • bnx2x: fix GRO parameters · cbf1de72
      Yuval Mintz authored
      bnx2x does an internal GRO pass but doesn't provide gso_segs, thus
      breaking qdisc_pkt_len_init() when an ingress qdisc is used.
      We store gso_segs in NAPI_GRO_CB(skb)->count, where tcp_gro_complete()
      expects to find the number of aggregated segments.
      Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • cxgb3: Fix Tx csum stats · bc6c47b5
      Vipul Pandya authored
      Signed-off-by: Jay Hernandez <jay@chelsio.com>
      Signed-off-by: Vipul Pandya <vipul@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: fix ipv6_prefix_equal64_half mask conversion · 512613d7
      Fabio Baltieri authored
      Fix the 64bit optimized version of ipv6_prefix_equal to convert the
      bitmask to network byte order only after the bit-shift.
      The bug was introduced in:
       ipv6: 64bit version of ipv6_prefix_equal().
      Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: increase fragment memory usage limits · c2a93660
      Jesper Dangaard Brouer authored
      Increase the memory usage limits for incomplete IP fragments.
      Arguing for new thresh high/low values:
       High threshold = 4 MBytes
       Low  threshold = 3 MBytes
      The fragmentation memory accounting code tries to account for the
      real memory usage by measuring both the size of the frag queue struct
      (inet_frag_queue (ipv4:ipq/ipv6:frag_queue)) and the SKBs' truesize.
      We want to be able to handle/hold on to enough fragments to ensure
      good performance, without letting incomplete fragments hurt
      scalability by causing the number of inet_frag_queue structs to grow
      too much (resulting in longer searches for frag queues).
      For IPv4, how much memory does the largest frag consume?
      The maximum-size fragment is 64K, which is approx 44 fragments with
      MTU (1500) sized packets. sizeof(struct ipq) is 200.  A 1500-byte
      packet results in a truesize of 2944 (not 2048 as I first assumed):
        (44*2944)+200 = 129736 bytes
      The current default high thresh of 262144 bytes is obviously
      problematic, as only two 64K fragments can fit in the queue at the
      same time.
      How many 64K fragments can we fit into 4 MBytes:
        4*2^20/((44*2944)+200) = 32.34 fragments in queues
      An attacker could send a separate/distinct fake fragment packet per
      queue, causing us to allocate one inet_frag_queue per packet, and thus
      attacking the hash table and its lists.
      How many frag queues do we need to store, and given the current hash
      size of 64, what is the average list length?
      Using one MTU sized fragment per inet_frag_queue, each consuming
      (2944+200) 3144 bytes.
        4*2^20/(2944+200) = 1334 frag queues -> 21 avg list length
      An attacker could send small fragments; the smallest packet I could
      send resulted in a truesize of 896 bytes (I'm a little surprised by
      this).
        4*2^20/(896+200)  = 3827 frag queues -> 59 avg list length
      When increasing these numbers, we also need to follow up with
      improvements that help scalability.  Simply increasing the hash size
      is not enough, as the current implementation does not have
      per-hash-bucket locking.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sk-filter: Add ability to lock a socket filter program · d59577b6
      Vincent Bernat authored
      While a privileged program can open a raw socket, attach some
      restrictive filter and drop its privileges (or send the socket to an
      unprivileged program through some Unix socket), the filter can still
      be removed or modified by the unprivileged program. This commit adds a
      socket option to lock the filter (SO_LOCK_FILTER) preventing any
      modification of a socket filter program.
      This is similar to the OpenBSD BIOCLOCK ioctl on bpf sockets, except
      that even root is not allowed to change/drop the filter.
      The state of the lock can be read with getsockopt(). No error is
      triggered if the state is not changed. -EPERM is returned when a user
      tries to remove the lock or to change/remove the filter while the
      lock is active. The check is done directly in sk_attach_filter() and
      sk_detach_filter(), so it is not limited to the setsockopt() syscall.
      Signed-off-by: Vincent Bernat <bernat@luffy.cx>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 16 Jan, 2013 24 commits
  3. 15 Jan, 2013 11 commits