1. 21 Jul, 2010 1 commit
  2. 11 Jun, 2010 33 commits
  3. 10 Jun, 2010 6 commits
    • pkt_sched: gen_estimator: add a new lock · ae638c47
      Eric Dumazet authored
      gen_kill_estimator() / gen_new_estimator() are not always called with
      RTNL held.
      net/netfilter/xt_RATEEST.c is one user of these APIs that does not hold
      RTNL, so random corruption can occur between "tc" and "iptables".
      Add a new fine-grained lock instead of trying to take RTNL in netfilter.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
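      The fix pattern described above can be sketched in userspace C: a dedicated fine-grained lock guards the estimator list, so callers no longer need to hold a larger subsystem lock (the RTNL analogue). All names here are hypothetical, not the kernel API.

      ```c
      #include <assert.h>
      #include <pthread.h>

      /* Hypothetical estimator list protected by its own lock. */
      struct estimator {
          struct estimator *next;
          int id;
      };

      static struct estimator *est_list;                 /* shared list head */
      static pthread_mutex_t est_lock = PTHREAD_MUTEX_INITIALIZER;

      /* Analogue of gen_new_estimator(): insert under est_lock. */
      static void est_add(struct estimator *e)
      {
          pthread_mutex_lock(&est_lock);
          e->next = est_list;
          est_list = e;
          pthread_mutex_unlock(&est_lock);
      }

      /* Analogue of gen_kill_estimator(): unlink under the same lock,
       * so concurrent add/kill callers cannot corrupt the list. */
      static int est_del(int id)
      {
          pthread_mutex_lock(&est_lock);
          for (struct estimator **p = &est_list; *p; p = &(*p)->next) {
              if ((*p)->id == id) {
                  *p = (*p)->next;
                  pthread_mutex_unlock(&est_lock);
                  return 1;
              }
          }
          pthread_mutex_unlock(&est_lock);
          return 0;
      }

      int main(void)
      {
          struct estimator a = { .next = 0, .id = 1 };
          struct estimator b = { .next = 0, .id = 2 };
          est_add(&a);
          est_add(&b);
          assert(est_del(1) == 1);   /* unlinked once */
          assert(est_del(1) == 0);   /* already gone */
          assert(est_list == &b);
          return 0;
      }
      ```

      The point of the design is that both entry points take the same private lock, so correctness no longer depends on every caller holding RTNL.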
    • net: deliver skbs on inactive slaves to exact matches · 597a264b
      John Fastabend authored
      Currently, the accelerated receive path for VLANs will
      drop packets if the real device is an inactive slave and
      is not one of the special pkts tested for in
      skb_bond_should_drop().  This behavior is different from
      the non-accelerated path and from pkts over a bonded vlan.
      For example,
      vlanx -> bond0 -> ethx
      will be dropped in the vlan path and not delivered to any
      packet handlers at all.  However,
      bond0 -> vlanx -> ethx
      bond0 -> ethx
      will be delivered to handlers that match the exact dev,
      because the VLAN path checks the real_dev, which is not a
      slave, and netif_receive_skb() doesn't drop frames but only
      delivers them to exact matches.
      This patch adds a sk_buff flag which is used for tagging
      skbs that would previously have been dropped and allows the
      skb to continue to netif_receive_skb().  Here we add
      logic to check for the deliver_no_wcard flag and, if it
      is set, only deliver to handlers that match exactly.  This
      makes both paths above consistent and gives pkt handlers
      a way to identify skbs that come from inactive slaves.
      Without this patch, in some configurations skbs will be
      delivered to handlers with exact matches and in others
      be dropped outright in the vlan path.
      I have tested the following 4 configurations in failover modes
      and load balancing modes.
      # bond0 -> ethx
      # vlanx -> bond0 -> ethx
      # bond0 -> vlanx -> ethx
      # bond0 -> ethx
        vlanx -> --
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
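      The delivery rule this patch introduces can be sketched in userspace C: a packet tagged deliver_no_wcard is handed only to handlers bound to its exact device, while untagged packets also reach wildcard handlers. The structs and names below are illustrative, not the kernel's.

      ```c
      #include <assert.h>
      #include <stddef.h>

      struct dev     { int id; };
      struct pkt     { struct dev *dev; int deliver_no_wcard; };
      struct handler { struct dev *dev; int hits; };  /* dev == NULL: wildcard */

      /* Deliver p to each handler, honoring the deliver_no_wcard flag. */
      static void deliver(struct pkt *p, struct handler *h, size_t n)
      {
          for (size_t i = 0; i < n; i++) {
              if (h[i].dev == p->dev)
                  h[i].hits++;                 /* exact match: always delivered */
              else if (h[i].dev == NULL && !p->deliver_no_wcard)
                  h[i].hits++;                 /* wildcard: only untagged skbs */
          }
      }

      int main(void)
      {
          struct dev eth0 = { 0 };
          struct handler hs[2] = { { &eth0, 0 }, { NULL, 0 } };

          /* Tagged pkt (came via an inactive slave): exact match only. */
          struct pkt tagged = { &eth0, 1 };
          deliver(&tagged, hs, 2);
          assert(hs[0].hits == 1 && hs[1].hits == 0);

          /* Normal pkt: reaches both exact and wildcard handlers. */
          struct pkt normal = { &eth0, 0 };
          deliver(&normal, hs, 2);
          assert(hs[0].hits == 2 && hs[1].hits == 1);
          return 0;
      }
      ```

      This mirrors why the two topologies above become consistent: instead of the vlan path dropping the frame, the frame is tagged and the delivery loop restricts it to exact-device matches.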
    • ceph: try to send partial cap release on cap message on missing inode · 2b2300d6
      Sage Weil authored
      If we have enough memory to allocate a new cap release message, do so, so
      that we can send a partial release message immediately.  This keeps us from
      making the MDS wait when the cap release it needs is in a partially full
      release message.
      If we fail because of ENOMEM, oh well, they'll just have to wait a bit.
      Signed-off-by: Sage Weil <sage@newdream.net>
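      The "send it now if we can allocate, otherwise let it wait" behavior described above can be sketched as an allocation-tolerant send, in userspace C. Function names here are hypothetical, not the ceph client API.

      ```c
      #include <assert.h>
      #include <stdlib.h>
      #include <string.h>

      struct msg { char payload[64]; };

      static int sent;                          /* count of messages sent */

      static void send_msg(struct msg *m) { sent++; free(m); }

      /* Try to flush a pending cap release immediately; on allocation
       * failure just return 0 and let a later flush pick it up (the
       * "oh well, they'll just have to wait a bit" case). */
      static int try_send_partial_release(int simulate_enomem)
      {
          struct msg *m = simulate_enomem ? NULL : malloc(sizeof(*m));
          if (!m)
              return 0;                         /* ENOMEM: MDS waits a bit */
          memset(m->payload, 0, sizeof(m->payload));
          send_msg(m);                          /* partial release goes out now */
          return 1;
      }

      int main(void)
      {
          assert(try_send_partial_release(0) == 1);  /* allocation OK: sent */
          assert(try_send_partial_release(1) == 0);  /* ENOMEM: deferred */
          assert(sent == 1);
          return 0;
      }
      ```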
    • ceph: release cap on import if we don't have the inode · 3d7ded4d
      Sage Weil authored
      If we get an IMPORT that gives us a cap, but we don't have the inode, queue
      a release (and try to send it immediately) so that the MDS doesn't get
      stuck waiting for us.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix misleading/incorrect debug message · 9dbd412f
      Sage Weil authored
      Nothing is released here: the caps message is simply ignored in this case.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix atomic64_t initialization on ia64 · 00d5643e
      Jeff Mahoney authored
      bdi_seq is an atomic_long_t but we're using ATOMIC_INIT, which causes
      build failures on ia64. This patch fixes it to use ATOMIC_LONG_INIT.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Sage Weil <sage@newdream.net>