1. 23 Apr, 2013 1 commit
  2. 22 Apr, 2013 2 commits
  3. 20 Apr, 2013 1 commit
  4. 19 Apr, 2013 2 commits
  5. 17 Apr, 2013 6 commits
  6. 15 Apr, 2013 2 commits
  7. 14 Apr, 2013 2 commits
  8. 12 Apr, 2013 1 commit
      tcp: GSO should be TSQ friendly · d6a4a104
      Eric Dumazet authored
      I noticed that TSQ (TCP Small Queues) was less effective when TSO is
      turned off and GSO is on. If BQL is not enabled, TSQ then has no
      effect.
      
      It turns out the GSO engine frees the original gso_skb at the time the
      fragments are generated and queued to the NIC.
      
      We should instead call the tcp_wfree() destructor for the last fragment,
      to keep the flow control as intended in TSQ. This effectively limits
      the number of queued packets on qdisc + NIC layers.
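      
      A minimal sketch of the idea (sk_buff field names and tcp_wfree() are
      real; the placement and the last_seg variable are assumptions, this is
      not the verbatim upstream diff): in the GSO output path, hand the
      original skb's TSQ accounting over to the last generated segment:
      
          if (gso_skb->destructor == tcp_wfree) {
                  /* Transfer TSQ accounting to the last segment, so
                   * tcp_wfree() fires at its TX completion instead of
                   * when the GSO engine frees gso_skb.
                   */
                  swap(gso_skb->sk, last_seg->sk);
                  swap(gso_skb->destructor, last_seg->destructor);
                  swap(gso_skb->truesize, last_seg->truesize);
          }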
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Nandita Dukkipati <nanditad@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 09 Apr, 2013 4 commits
  10. 08 Apr, 2013 4 commits
  11. 07 Apr, 2013 1 commit
  12. 05 Apr, 2013 2 commits
      netfilter: nf_log: prepare net namespace support for loggers · 30e0c6a6
      Gao feng authored
      This patch adds netns support to nf_log and prepares netns
      support for the existing loggers. It is composed of four major
      changes.
      
      1) nf_log_register has been split into two functions: nf_log_register
         and nf_log_set. The new nf_log_register is used to globally
         register the nf_logger, and nf_log_set is used to enable
         per-netns support for nf_loggers (see the sketch after this list).
      
         Per-netns support is not yet complete after this patch; it comes
         in separate follow-up patches.
      
      2) Add net as a parameter of nf_log_bind_pf. Per-netns support is
         not yet complete after this patch; it only allows binding the
         nf_logger to the protocol family from init_net and skips other
         cases.
      
      3) Adapt all nf_log_packet callers to pass netns as a parameter.
         After this patch, this function only works for init_net.
      
      4) Make the sysctl net/netfilter/nf_log pernet.
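      
      A rough sketch of the resulting API, as described above (exact
      parameter lists and return types are assumptions, not verbatim
      kernel declarations):
      
          /* global registration, once per logger */
          int nf_log_register(u_int8_t pf, struct nf_logger *logger);
      
          /* enable an already-registered logger in one netns */
          void nf_log_set(struct net *net, const struct nf_logger *logger);
      
          /* bind a logger to a protocol family, now per netns */
          int nf_log_bind_pf(struct net *net, u_int8_t pf,
                             const struct nf_logger *logger);
      
          /* nf_log_packet() likewise gains a struct net * argument */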
      Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      netfilter: make /proc/net/netfilter pernet · f3c1a44a
      Gao feng authored
      This patch makes this proc dentry pernet. So far only init_net
      had a /proc/net/netfilter directory.
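      
      A plausible sketch of how this looks with pernet_operations (close
      to the idea described; the helper placement is an assumption):
      
          static int __net_init netfilter_net_init(struct net *net)
          {
                  /* each netns now gets its own /proc/net/netfilter */
                  net->nf.proc_netfilter = proc_net_mkdir(net, "netfilter",
                                                          net->proc_net);
                  return net->nf.proc_netfilter ? 0 : -ENOMEM;
          }
      
          static void __net_exit netfilter_net_exit(struct net *net)
          {
                  remove_proc_entry("netfilter", net->proc_net);
          }
      
          static struct pernet_operations netfilter_net_ops = {
                  .init = netfilter_net_init,
                  .exit = netfilter_net_exit,
          };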
      Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
  13. 04 Apr, 2013 1 commit
      net: frag queue per hash bucket locking · 19952cc4
      Jesper Dangaard Brouer authored
      This patch implements per hash bucket locking for the frag queue
      hash.  This removes two write locks, and the only remaining write
      lock is for protecting hash rebuild.  This essentially reduces the
      readers-writer lock to a rebuild lock.
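      
      The core of the change, sketched (the struct follows the description
      above; treat the details as illustrative rather than the exact
      upstream hunk):
      
          struct inet_frag_bucket {
                  struct hlist_head chain;      /* frag queues in this bucket */
                  spinlock_t        chain_lock; /* protects only this chain */
          };
      
          static void fq_unlink(struct inet_frag_bucket *hb,
                                struct inet_frag_queue *fq)
          {
                  spin_lock(&hb->chain_lock);   /* one bucket, not the hash */
                  hlist_del(&fq->list);
                  spin_unlock(&hb->chain_lock);
          }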
      
      This patch is part of "net: frag performance followup"
       http://thread.gmane.org/gmane.linux.network/263644
      of which two patches have already been accepted.
      
      Same test setup as previous:
       (http://thread.gmane.org/gmane.linux.network/257155)
       Two 10G interfaces, on separate NUMA nodes, are under test and use
       Ethernet flow control.  A third interface is used for generating the
       DoS attack (with trafgen).
      
      Notice, I have changed the frag DoS generator script to be more
      efficient/deadly.  Before, it would only hit one RX queue; now it
      sends packets causing multi-queue RX, due to "better" RX hashing.
      
      Test types summary (netperf UDP_STREAM):
       Test-20G64K     == 2x10G with 65K fragments
       Test-20G3F      == 2x10G with 3x fragments (3*1472 bytes)
       Test-20G64K+DoS == Same as 20G64K with frag DoS
       Test-20G3F+DoS  == Same as 20G3F  with frag DoS
       Test-20G64K+MQ  == Same as 20G64K with Multi-Queue frag DoS
       Test-20G3F+MQ   == Same as 20G3F  with Multi-Queue frag DoS
      
      When I rebased this patch (03) on top of net-next commit a210576c and
      removed the _bh spinlock, I saw a performance regression.  But this
      was caused by some unrelated change in between.  See tests below.
      
      Test (A) is what I reported before for patch-02, accepted in commit 1b5ab0de.
      Test (B) is a verifying retest of commit 1b5ab0de and corresponds to patch-02.
      Test (C) is what I reported before for this patch.
      
      Test (D) is net-next master HEAD (commit a210576c), which reveals some
      (unknown) performance regression (compared against test (B)).
      Test (D) functions as the new base test.
      
      Performance table summary (in Mbit/s):
      
      (#) Test-type:  20G64K    20G3F    20G64K+DoS  20G3F+DoS  20G64K+MQ 20G3F+MQ
          ----------  -------   -------  ----------  ---------  --------  -------
      (A) Patch-02  : 18848.7   13230.1   4103.04     5310.36     130.0    440.2
      (B) 1b5ab0de  : 18841.5   13156.8   4101.08     5314.57     129.0    424.2
      (C) Patch-03v1: 18838.0   13490.5   4405.11     6814.72     196.6    461.6
      
      (D) a210576c  : 18321.5   11250.4   3635.34     5160.13     119.1    405.2
      (E) with _bh  : 17247.3   11492.6   3994.74     6405.29     166.7    413.6
      (F) without bh: 17471.3   11298.7   3818.05     6102.11     165.7    406.3
      
      Tests (E) and (F) are this patch (03), with (V1) and without (V2) the _bh spinlocks.
      
      I cannot explain the slowdown for 20G64K (but it's an artificial
      "lab test", so I'm not worried).  The other results do show
      improvements, and the test (E) "with _bh" version is slightly better.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Acked-by: Eric Dumazet <edumazet@google.com>
      
      ----
      V2:
      - By analysis from Hannes Frederic Sowa and Eric Dumazet, we don't
        need the spinlock _bh versions, as Netfilter currently does a
        local_bh_disable() before entering inet_fragment.
      - Fold-in desc from cover-mail
      V3:
      - Drop the chain_len counter per hash bucket.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 02 Apr, 2013 1 commit
  15. 01 Apr, 2013 10 commits
      ipvs: convert services to rcu · ceec4c38
      Julian Anastasov authored
      This is the final step in the RCU conversion.
      
      Things that are removed:
      
      - svc->usecnt: now svc is accessed under RCU read lock
      - svc->inc: and some unused code
      - ip_vs_bind_pe and ip_vs_unbind_pe: no ability to replace PE
      - __ip_vs_svc_lock: replaced with RCU
      - IP_VS_WAIT_WHILE: now readers look up svcs and dests under
        RCU and work in parallel with configuration
      
      Other changes:
      
      - before now, an RCU read-side critical section included only the
        call to the schedule method; now it is extended to include the
        service lookup
      - ip_vs_svc_table and ip_vs_svc_fwm_table are now using hlist
      - svc->pe and svc->scheduler remain valid to the end of the grace
        period; the schedulers are prepared for such RCU readers even
        after done_service is called, but they need to use
        synchronize_rcu because the last ip_vs_scheduler_put can happen
        while RCU read-side critical sections still use an outdated
        svc->scheduler pointer
      - as planned, update_service is removed
      - empty services can be freed immediately after the grace period;
        if dests were present, the services are freed from the dest
        trash code
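      
      A simplified sketch of the reader side after the conversion
      (function and field names approximate the IPVS API; this is not
      the literal upstream code):
      
          struct ip_vs_service *svc;
          struct ip_vs_dest *dest = NULL;
      
          rcu_read_lock();
          svc = ip_vs_service_find(net, af, fwmark, protocol, vaddr, vport);
          if (svc) {
                  /* scheduler now runs inside the same read-side section */
                  dest = svc->scheduler->schedule(svc, skb);
          }
          rcu_read_unlock();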
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: convert dests to rcu · 413c2d04
      Julian Anastasov authored
      In previous commits the schedulers started to access
      svc->destinations with the _rcu list traversal primitives because
      the IP_VS_WAIT_WHILE macro still plays the role of a grace period.
      Now it is time to finish the updating part, i.e. adding and
      deleting dests with the _rcu primitives, before removing
      IP_VS_WAIT_WHILE in the next commit.
      
      We use the same rule for conns as for the schedulers: dests can be
      searched in an RCU read-side critical section, where ip_vs_dest_hold
      can be called by ip_vs_bind_dest.
      
      Some things are not perfect; for example, we call functions like
      ip_vs_lookup_dest from updating code under RCU, just because the
      same function is used both by readers and by updaters.
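      
      A minimal sketch of the rule described above (the n_list field is
      the real list node in struct ip_vs_dest; the match helper is
      hypothetical):
      
          /* updater side: add/del with the _rcu list primitives */
          list_add_rcu(&dest->n_list, &svc->destinations);
          list_del_rcu(&dest->n_list);
      
          /* reader side: search in an RCU read-side critical section */
          list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
                  if (dest_matches(dest, daddr, dport)) { /* hypothetical */
                          ip_vs_dest_hold(dest); /* e.g. ip_vs_bind_dest */
                          break;
                  }
          }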
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: convert sched_lock to spin lock · ba3a3ce1
      Julian Anastasov authored
      As all read_locks are gone, a spin lock is preferred.
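      
      Sketched, the conversion amounts to (the lock lives in
      struct ip_vs_service; the _bh variants are an assumption):
      
          spinlock_t sched_lock;               /* was: rwlock_t */
      
          spin_lock_bh(&svc->sched_lock);      /* was: write_lock_bh() */
          /* update scheduler-private state (svc->sched_data) */
          spin_unlock_bh(&svc->sched_lock);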
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: do not expect result from done_service · ed3ffc4e
      Julian Anastasov authored
      This method releases the scheduler state and cannot fail. This
      change will help to properly replace the scheduler in a following
      patch.
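      
      In other words, the hook in struct ip_vs_scheduler changes shape
      roughly like this (sketch):
      
          /* before: int  (*done_service)(struct ip_vs_service *svc); */
             void (*done_service)(struct ip_vs_service *svc);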
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: reorganize dest trash · 578bc3ef
      Julian Anastasov authored
      All dests will go to trash, no exceptions. But we have to use a new
      list node, t_list, for this, due to RCU changes in following
      patches. Dests will wait there for the initial grace period and,
      later, for all conns and schedulers to put their reference. The
      dests no longer get a reference for staying in the dest trash, as
      they did before.
      
      As a result, we do not load ip_vs_dest_put with extra checks for
      the last refcnt, and the schedulers do not need to play games with
      atomic_inc_not_zero while selecting the best destination.
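      
      A small sketch of the mechanics (t_list is the node named above;
      the trash lock name is an assumption):
      
          /* in struct ip_vs_dest */
          struct list_head t_list;     /* node in ipvs->dest_trash */
      
          /* on removal: park the dest without taking an extra refcnt */
          spin_lock_bh(&ipvs->dest_trash_lock);
          list_add(&dest->t_list, &ipvs->dest_trash);
          spin_unlock_bh(&ipvs->dest_trash_lock);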
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: add ip_vs_dest_hold and ip_vs_dest_put · fca9c20a
      Julian Anastasov authored
      ip_vs_dest_hold will be used under the RCU lock, while
      ip_vs_dest_put can be called even after the dest is removed from
      the service, as happens for conns and some schedulers.
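      
      The helpers are small; a sketch consistent with the description
      (the barrier in the put path is an assumption about ordering
      against prior writes):
      
          static inline void ip_vs_dest_hold(struct ip_vs_dest *dest)
          {
                  atomic_inc(&dest->refcnt);      /* caller is inside RCU */
          }
      
          static inline void ip_vs_dest_put(struct ip_vs_dest *dest)
          {
                  smp_mb__before_atomic_dec();    /* order prior writes */
                  atomic_dec(&dest->refcnt);
          }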
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: preparations for using rcu in schedulers · 6b6df466
      Julian Anastasov authored
      Allow schedulers to use rcu_dereference when returning a
      destination on lookup. The RCU read-side critical section will
      allow ip_vs_bind_dest to get a dest refcnt, as preparation for the
      step where destinations will be deleted without an
      IP_VS_WAIT_WHILE guard that holds off packet processing during
      the update.
      
      Add new optional scheduler methods add_dest, del_dest and
      upd_dest. For now the methods are called together with
      update_service, but update_service will be removed in a following
      change.
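      
      Sketched as hooks in struct ip_vs_scheduler (the signatures are an
      assumption modeled on the existing methods):
      
          int (*add_dest)(struct ip_vs_service *svc, struct ip_vs_dest *dest);
          int (*del_dest)(struct ip_vs_service *svc, struct ip_vs_dest *dest);
          int (*upd_dest)(struct ip_vs_service *svc, struct ip_vs_dest *dest);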
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: avoid kmem_cache_zalloc in ip_vs_conn_new · 9a05475c
      Julian Anastasov authored
      We have many fields to set and few to reset, so use
      kmem_cache_alloc instead to save some cycles.
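      
      Roughly (ip_vs_conn_cachep is the real cache; which fields still
      need explicit clearing is illustrative):
      
          cp = kmem_cache_alloc(ip_vs_conn_cachep, GFP_ATOMIC);
          if (cp == NULL)
                  return NULL;
          /* only the few fields not assigned below must be reset */
          cp->control = NULL;
          atomic_set(&cp->n_control, 0);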
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Hans Schillstrom <hans@schillstrom.com>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: reorder keys in connection structure · 1845ed0b
      Julian Anastasov authored
      __ip_vs_conn_in_get and ip_vs_conn_out_get are hot places. Optimize
      them so that ports are matched first. By moving net and fwmark
      below, on a 32-bit arch we can fit caddr in a 32-byte cache line
      and all addresses in a 64-byte cache line.
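      
      A sketch of the resulting ordering in struct ip_vs_conn (the
      comments and the exact tail are illustrative):
      
          struct ip_vs_conn {
                  struct hlist_node  c_list;  /* hashed list heads */
                  __be16             cport;   /* ports compared first */
                  __be16             dport;
                  __be16             vport;
                  u16                af;      /* address family */
                  union nf_inet_addr caddr;   /* client address */
                  union nf_inet_addr vaddr;   /* virtual address */
                  union nf_inet_addr daddr;   /* destination address */
                  /* net and fwmark now live below the addresses */
          };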
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Hans Schillstrom <hans@schillstrom.com>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      ipvs: convert connection locking · 088339a5
      Julian Anastasov authored
      Convert __ip_vs_conntbl_lock_array as follows:
      
      - readers that do not modify conn lists will use RCU lock
      - updaters that modify lists will use spinlock_t
      
      Now, for conn lookups we will use an RCU read-side critical
      section. Without using __ip_vs_conn_get, such places have access
      to connection fields and can dereference some pointers like pe
      and pe_data, plus the ability to update timer expiration. If full
      access is required, we contend for a reference.
      
      We add a barrier in __ip_vs_conn_put, so that other CPUs see the
      refcnt operation after other writes.
      
      With the introduction of ip_vs_conn_unlink() we try to reorganize
      ip_vs_conn_expire(), so that connections that should stay around
      longer are not unhashed, even for a very short time.
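      
      A condensed sketch of the reader path after the conversion (the
      match helper is hypothetical; the table and list names follow
      IPVS):
      
          rcu_read_lock();
          hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[hash], c_list) {
                  if (conn_matches(cp, p)) {          /* hypothetical */
                          if (!__ip_vs_conn_get(cp))  /* contend for ref */
                                  cp = NULL;          /* being freed */
                          break;
                  }
          }
          rcu_read_unlock();
      
          /* updaters still serialize list changes with a spinlock */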
      Signed-off-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Hans Schillstrom <hans@schillstrom.com>
      Signed-off-by: Simon Horman <horms@verge.net.au>