1. 16 Jul, 2008 8 commits
  2. 03 Jul, 2008 2 commits
  3. 27 Jun, 2008 1 commit
  4. 12 Jun, 2008 1 commit
    • tcp: Revert 'process defer accept as established' changes. · ec0a1966
      David S. Miller authored
      This reverts two changesets, ec3c0982
      ("[TCP]: TCP_DEFER_ACCEPT updates - process as established") and
      the follow-on bug fix 9ae27e0a
      ("tcp: Fix slab corruption with ipv6 and tcp6fuzz").
      This change causes several problems, first reported by Ingo Molnar
      as a distcc-over-loopback regression where connections were getting
      stuck.
      Ilpo Järvinen first spotted the locking problems.  The new function
      added by this code, tcp_defer_accept_check(), only has the
      child socket locked, yet it is modifying state of the parent
      listening socket.
      Fixing that is non-trivial at best, because we can't simply just grab
      the parent listening socket lock at this point, because it would
      create an ABBA deadlock.  The normal ordering is parent listening
      socket --> child socket, but this code path would require the
      reverse lock ordering.
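      The lock-ordering hazard can be illustrated with a minimal userspace sketch, using pthread mutexes as stand-ins for the parent and child socket locks (all names here are illustrative, not kernel code):

      ```c
      #include <assert.h>
      #include <pthread.h>

      /* Stand-ins for the listening (parent) and accepted (child) socket locks. */
      static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_mutex_t child_lock  = PTHREAD_MUTEX_INITIALIZER;
      static int work_done;

      /* Safe: every path takes parent -> child.  The reverted code effectively
       * needed child -> parent on one path; against a concurrent thread taking
       * parent -> child, that is the classic ABBA deadlock. */
      static void *worker(void *arg)
      {
          (void)arg;
          pthread_mutex_lock(&parent_lock);
          pthread_mutex_lock(&child_lock);
          work_done++;
          pthread_mutex_unlock(&child_lock);
          pthread_mutex_unlock(&parent_lock);
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          pthread_create(&a, NULL, worker, NULL);
          pthread_create(&b, NULL, worker, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          assert(work_done == 2);  /* both threads completed: no deadlock */
          return 0;
      }
      ```

      Swapping the two lock calls in one of the threads is exactly the reverse ordering the commit describes, and would hang both threads.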
      Next is a problem noticed by Vitaliy Gusev, he noted:
      >--- a/net/ipv4/tcp_timer.c
      >+++ b/net/ipv4/tcp_timer.c
      >@@ -481,6 +481,11 @@ static void tcp_keepalive_timer (unsigned long data)
      > 		goto death;
      > 	}
      >+	if (tp->defer_tcp_accept.request && sk->sk_state == TCP_ESTABLISHED) {
      >+		tcp_send_active_reset(sk, GFP_ATOMIC);
      >+		goto death;
      >+	}
      Here socket sk is not attached to listening socket's request queue. tcp_done()
      will not call inet_csk_destroy_sock() (and tcp_v4_destroy_sock() which should
      release this sk) as socket is not DEAD. Therefore socket sk will be lost for
      freeing.
      Finally, Alexey Kuznetsov argues that there might not even be any
      real value or advantage to these new semantics even if we fix all
      of the bugs:
      Hiding from accept() sockets with only out-of-order data
      is the only thing which is impossible with the old approach. Is this really
      so valuable? My opinion: no, this is nothing but a new loophole
      to consume memory without control.
      So revert this thing for now.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 11 Jun, 2008 2 commits
  6. 04 Jun, 2008 1 commit
  7. 21 Apr, 2008 1 commit
  8. 22 Mar, 2008 1 commit
    • [TCP]: Let skbs grow over a page on fast peers · 69d15067
      Herbert Xu authored
      While testing the virtio-net driver on KVM with TSO I noticed
      that TSO performance with a 1500 MTU is significantly worse
      compared to the performance of non-TSO with a 16436 MTU.  The
      packet dump shows that most of the packets sent are smaller
      than a page.
      Looking at the code this actually is quite obvious as it always
      stops extending the packet if it's the first packet yet to be
      sent and if it's larger than the MSS.  Since each extension is
      bound by the page size, this means that (given a 1500 MTU) we're
      very unlikely to construct packets greater than a page, provided
      that the receiver and the path is fast enough so that packets can
      always be sent immediately.
      The fix is also quite obvious.  The push calls inside the loop
      are just an optimisation so that we don't end up doing all the
      sending at the end of the loop.  Therefore there is no specific
      reason why it has to do so at MSS boundaries.  For TSO, the
      most natural extension of this optimisation is to do the pushing
      once the skb exceeds the TSO size goal.
      This is what the patch does and testing with KVM shows that the
      TSO performance with a 1500 MTU easily surpasses that of a 16436
      MTU and indeed the packet sizes sent are generally larger than
      a page.
      I don't see any obvious downsides for slower peers or connections,
      but it would be prudent to test this extensively to ensure that
      those cases don't regress.
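      The changed push condition described above can be sketched as pure logic; this is a hypothetical model of the mid-loop decision, not the kernel function itself:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Hypothetical model of the mid-loop push decision in tcp_sendmsg().
       * Old behaviour: push the first not-yet-sent skb once it exceeds the MSS.
       * New behaviour: for TSO, wait until it exceeds the (much larger) TSO
       * size goal, so skbs can grow past a page on fast peers. */
      static bool should_push(bool first_unsent, int skb_len, int mss, int size_goal)
      {
          (void)mss;  /* the MSS no longer bounds the decision directly */
          return first_unsent && skb_len >= size_goal;
      }

      int main(void)
      {
          int mss = 1448, page = 4096, tso_goal = 65536 - 1448;

          /* Old rule (size_goal == mss) pushes long before a page is filled. */
          assert(should_push(true, mss, mss, mss));
          /* New rule keeps extending well past a page... */
          assert(!should_push(true, page, mss, tso_goal));
          /* ...and only pushes at the TSO size goal. */
          assert(should_push(true, tso_goal, mss, tso_goal));
          return 0;
      }
      ```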
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 21 Mar, 2008 1 commit
    • [TCP]: TCP_DEFER_ACCEPT updates - process as established · ec3c0982
      Patrick McManus authored
      Change TCP_DEFER_ACCEPT implementation so that it transitions a
      connection to ESTABLISHED after handshake is complete instead of
      leaving it in SYN-RECV until some data arrives. Place connection in
      accept queue when first data packet arrives from slow path.
        - established connection is now reset if it never makes it
          to the accept queue
        - diagnostic state of established matches with the packet traces
          showing completed handshake
        - TCP_DEFER_ACCEPT timeouts are expressed in seconds and can now be
          enforced with reasonable accuracy instead of rounding up to next
          exponential back-off of syn-ack retry.
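      From userspace the option is set in seconds on the listening socket; a minimal sketch (the rounding behaviour noted in the comment is an assumption about how the kernel converts the value):

      ```c
      #include <assert.h>
      #include <netinet/in.h>
      #include <netinet/tcp.h>
      #include <sys/socket.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = socket(AF_INET, SOCK_STREAM, 0);
          assert(fd >= 0);

          /* Ask the kernel to defer accept() until data arrives, with a
           * timeout expressed in seconds (the kernel may round it up). */
          int secs = 5;
          assert(setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
                            &secs, sizeof(secs)) == 0);

          int got;
          socklen_t len = sizeof(got);
          assert(getsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &got, &len) == 0);
          assert(got > 0);  /* value read back after any internal rounding */
          close(fd);
          return 0;
      }
      ```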
      Signed-off-by: Patrick McManus <mcmanus@ducksong.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 03 Feb, 2008 1 commit
    • [SOCK] proto: Add hashinfo member to struct proto · ab1e0a13
      Arnaldo Carvalho de Melo authored
      This way we can remove TCP and DCCP specific versions of
      sk->sk_prot->get_port: both v4 and v6 use inet_csk_get_port
      sk->sk_prot->hash:     inet_hash is directly used, only v6 needs
                             a specific version to deal with mapped sockets
      sk->sk_prot->unhash:   both v4 and v6 use inet_hash directly
      struct inet_connection_sock_af_ops also gets a new member, bind_conflict, so
      that inet_csk_get_port can find the per family routine.
      Now only the lookup routines receive as a parameter a struct inet_hashtable.
      With this we further reuse code, reducing the difference among INET transport
      protocols.
      Eventually work has to be done on UDP and SCTP to make them share this
      infrastructure and get as a bonus inet_diag interfaces so that iproute can be
      used with these protocols.
        struct proto			     |   +8
        struct inet_connection_sock_af_ops |   +8
       2 structs changed
        __inet_hash_nolisten               |  +18
        __inet_hash                        | -210
        inet_put_port                      |   +8
        inet_bind_bucket_create            |   +1
        __inet_hash_connect                |   -8
       5 functions changed, 27 bytes added, 218 bytes removed, diff: -191
        proto_seq_show                     |   +3
       1 function changed, 3 bytes added, diff: +3
        inet_csk_get_port                  |  +15
       1 function changed, 15 bytes added, diff: +15
        tcp_set_state                      |   -7
       1 function changed, 7 bytes removed, diff: -7
        tcp_v4_get_port                    |  -31
        tcp_v4_hash                        |  -48
        tcp_v4_destroy_sock                |   -7
        tcp_v4_syn_recv_sock               |   -2
        tcp_unhash                         | -179
       5 functions changed, 267 bytes removed, diff: -267
        __inet6_hash |   +8
       1 function changed, 8 bytes added, diff: +8
        inet_unhash                        | +190
        inet_hash                          | +242
       2 functions changed, 432 bytes added, diff: +432
       16 functions changed, 485 bytes added, 492 bytes removed, diff: -7
        tcp_v6_get_port                    |  -31
        tcp_v6_hash                        |   -7
        tcp_v6_syn_recv_sock               |   -9
       3 functions changed, 47 bytes removed, diff: -47
        dccp_destroy_sock                  |   -7
        dccp_unhash                        | -179
        dccp_hash                          |  -49
        dccp_set_state                     |   -7
        dccp_done                          |   +1
       5 functions changed, 1 bytes added, 242 bytes removed, diff: -241
        dccp_v4_get_port                   |  -31
        dccp_v4_request_recv_sock          |   -2
       2 functions changed, 33 bytes removed, diff: -33
        dccp_v6_get_port                   |  -31
        dccp_v6_hash                       |   -7
        dccp_v6_request_recv_sock          |   +5
       3 functions changed, 5 bytes added, 38 bytes removed, diff: -33
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 28 Jan, 2008 8 commits
    • [TCP]: Uninline tcp_set_state · 490d5046
      Ilpo Järvinen authored
        tcp_close_state | -226
        tcp_done        | -145
        tcp_close       | -564
        tcp_disconnect  | -141
       4 functions changed, 1076 bytes removed, diff: -1076
        tcp_fin               |  -86
        tcp_rcv_state_process | -164
       2 functions changed, 250 bytes removed, diff: -250
        tcp_v4_connect | -209
       1 function changed, 209 bytes removed, diff: -209
        arp_ignore |   +5
       1 function changed, 5 bytes added, diff: +5
        tcp_v6_connect | -158
       1 function changed, 158 bytes removed, diff: -158
        xs_sendpages |   -2
       1 function changed, 2 bytes removed, diff: -2
        ccid3_update_send_interval |   +7
       1 function changed, 7 bytes added, diff: +7
        tcp_set_state | +238
       1 function changed, 238 bytes added, diff: +238
       12 functions changed, 250 bytes added, 1695 bytes removed, diff: -1445
      I've no explanation why some unrelated changes seem to occur
      consistently as well (arp_ignore, ccid3_update_send_interval;
      I checked the arp_ignore asm and it seems to be due to some
      reordering of operations causing some extra opcodes to be
      generated). Still, the benefits are pretty obvious from
      codiff's results.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Remove TCPCB_URG & TCPCB_AT_TAIL as unnecessary · 4828e7f4
      Ilpo Järvinen authored
      The snd_up check should be enough. I suspect this has been
      there to provide a minor optimization in clean_rtx_queue, which
      used to have a small if (!->sacked) block that could skip the
      snd_up check among the other work.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [NET] CORE: Introducing new memory accounting interface. · 3ab224be
      Hideo Aoki authored
      This patch introduces new memory accounting functions for each network
      protocol. Most of them are renamed from the memory accounting functions
      for stream protocols. At the same time, some stream memory accounting
      functions are removed since other functions do the same thing.
      	sk_stream_free_skb()		->	sk_wmem_free_skb()
      	__sk_stream_mem_reclaim()	->	__sk_mem_reclaim()
      	sk_stream_mem_reclaim()		->	sk_mem_reclaim()
      	sk_stream_mem_schedule 		->    	__sk_mem_schedule()
      	sk_stream_pages()      		->	sk_mem_pages()
      	sk_stream_rmem_schedule()	->	sk_rmem_schedule()
      	sk_stream_wmem_schedule()	->	sk_wmem_schedule()
      	sk_charge_skb()			->	sk_mem_charge()
      	sk_stream_rfree():	consolidates into sock_rfree()
      	sk_stream_set_owner_r(): consolidates into skb_set_owner_r()
      The following functions are added.
          	sk_has_account(): check if the protocol supports accounting
      	sk_mem_uncharge(): do the opposite of sk_mem_charge()
      In addition, to achieve consolidation, updating sk_wmem_queued is
      removed from sk_mem_charge().
      Next, to consolidate memory accounting functions, this patch adds
      memory accounting calls to network core functions. Moreover, the
      existing memory accounting calls are renamed to the new ones.
      Finally we replace the remaining memory accounting calls with the
      new interface in TCP and SCTP.
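      The symmetry behind the renamed charge/uncharge pair can be modeled with a toy socket; the struct and helpers below are illustrative stand-ins, not the kernel's:

      ```c
      #include <assert.h>

      /* Toy model of the symmetric accounting pair.  In the real interface,
       * sk_mem_charge() no longer updates sk_wmem_queued; callers do that
       * separately, which is what made the consolidation possible. */
      struct toy_sock { int sk_forward_alloc; };

      static void sk_mem_charge_model(struct toy_sock *sk, int size)
      {
          sk->sk_forward_alloc -= size;   /* consume reserved quota */
      }

      static void sk_mem_uncharge_model(struct toy_sock *sk, int size)
      {
          sk->sk_forward_alloc += size;   /* return quota when the skb is freed */
      }

      int main(void)
      {
          struct toy_sock sk = { .sk_forward_alloc = 4096 };
          sk_mem_charge_model(&sk, 1500);
          assert(sk.sk_forward_alloc == 2596);
          sk_mem_uncharge_model(&sk, 1500);
          assert(sk.sk_forward_alloc == 4096);  /* charge/uncharge are symmetric */
          return 0;
      }
      ```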
      Signed-off-by: Takahiro Yasui <tyasui@redhat.com>
      Signed-off-by: Hideo Aoki <haoki@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Use BUILD_BUG_ON for tcp_skb_cb size checking · 1f9e636e
      Pavel Emelyanov authored
      The sizeof(struct tcp_skb_cb) must not exceed the
      sizeof(skb->cb). This is checked in net/ipv4/tcp.c, but
      the check can be done more gracefully with BUILD_BUG_ON.
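      BUILD_BUG_ON turns that size relation into a compile-time failure; a self-contained model (both structs here are stand-ins, not the real sk_buff or tcp_skb_cb):

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Simplified form of the kernel macro: a negative array size is a
       * compile error, so the build fails the moment the invariant breaks. */
      #define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

      struct sk_buff_model { char cb[48]; };                   /* skb->cb scratch area */
      struct tcp_cb_model  { long seq, end_seq; int flags; };  /* stand-in control block */

      int main(void)
      {
          /* Compiles only while the control block fits in skb->cb. */
          BUILD_BUG_ON(sizeof(struct tcp_cb_model) >
                       sizeof(((struct sk_buff_model *)0)->cb));
          assert(sizeof(struct tcp_cb_model) <= 48);
          return 0;
      }
      ```

      The advantage over a runtime check is that a too-large control block can never even build, rather than failing at boot or under traffic.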
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [NET]: Eliminate unused argument from sk_stream_alloc_pskb · df97c708
      Pavel Emelyanov authored
      The 3rd argument is always zero (according to grep :) Eliminate
      it and merge the function with sk_stream_alloc_skb.
      This saves 44 more bytes, and together with the previous patch
      we have:
      add/remove: 1/0 grow/shrink: 0/8 up/down: 183/-751 (-568)
      function                                     old     new   delta
      sk_stream_alloc_skb                            -     183    +183
      ip_rt_init                                   529     525      -4
      arp_ignore                                   112     107      -5
      __inet_lookup_listener                       284     274     -10
      tcp_sendmsg                                 2583    2481    -102
      tcp_sendpage                                1449    1300    -149
      tso_fragment                                 417     258    -159
      tcp_fragment                                1149     988    -161
      __tcp_push_pending_frames                   1998    1837    -161
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [NET]: Uninline the sk_stream_alloc_pskb · f561d0f2
      Pavel Emelyanov authored
      This function seems too big for inlining. Indeed, it saves
      half-a-kilo when uninlined:
      add/remove: 1/0 grow/shrink: 0/7 up/down: 195/-719 (-524)
      function                                     old     new   delta
      sk_stream_alloc_pskb                           -     195    +195
      ip_rt_init                                   529     525      -4
      __inet_lookup_listener                       284     274     -10
      tcp_sendmsg                                 2583    2486     -97
      tcp_sendpage                                1449    1305    -144
      tso_fragment                                 417     267    -150
      tcp_fragment                                1149     992    -157
      __tcp_push_pending_frames                   1998    1841    -157
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Adrian Bunk
    • [TCP]: Splice receive support. · 9c55e01c
      Jens Axboe authored
      Support for network splice receive.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 07 Nov, 2007 1 commit
    • [INET]: Remove per bucket rwlock in tcp/dccp ehash table. · 230140cf
      Eric Dumazet authored
      As done two years ago on the IP route cache table, we can avoid using one
      lock per hash bucket for the huge TCP/DCCP hash tables.
      On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for
      little performance difference. (we hit a different cache line for the
      rwlock, but then the bucket cache line has a better sharing factor
      among cpus, since we dirty it less often). For netstat or ss commands
      that want a full scan of hash table, we perform fewer memory accesses.
      Using a 'small' table of hashed rwlocks should be more than enough to
      provide correct SMP concurrency between different buckets, without
      using too much memory. Sizing of this table depends on
      num_possible_cpus() and various CONFIG settings.
      This patch provides some locking abstraction that may ease a future
      work using a different model for TCP/DCCP table.
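      The hashed-lock idea can be sketched in userspace; the table size and hash function below are illustrative (the kernel derives its sizing from num_possible_cpus() and CONFIG settings):

      ```c
      #include <assert.h>
      #include <pthread.h>

      /* A small power-of-two table of rwlocks shared by all hash buckets,
       * instead of one rwlock embedded in each of millions of buckets.
       * 256 entries is an illustrative size, not the kernel's formula. */
      #define LOCK_TABLE_SIZE 256

      static pthread_rwlock_t lock_table[LOCK_TABLE_SIZE];

      static pthread_rwlock_t *bucket_lock(unsigned int hash)
      {
          return &lock_table[hash & (LOCK_TABLE_SIZE - 1)];
      }

      int main(void)
      {
          for (int i = 0; i < LOCK_TABLE_SIZE; i++)
              pthread_rwlock_init(&lock_table[i], NULL);

          /* The same bucket always maps to the same lock... */
          assert(bucket_lock(12345) == bucket_lock(12345));
          /* ...and distinct buckets may share one, trading a little
           * contention for megabytes of saved memory. */
          assert(bucket_lock(7) == bucket_lock(7 + LOCK_TABLE_SIZE));

          pthread_rwlock_rdlock(bucket_lock(7));
          pthread_rwlock_unlock(bucket_lock(7));
          return 0;
      }
      ```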
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 30 Oct, 2007 1 commit
    • [TCP]: Saner thash_entries default with much memory. · 0ccfe618
      Jean Delvare authored
      On systems with a very large amount of memory, the heuristics in
      alloc_large_system_hash() result in a very large TCP established hash
      table: 16 million entries for a 128 GB ia64 system. This makes
      reading from /proc/net/tcp pretty slow (well over a second) and as a
      result netstat is slow on these machines. I know that /proc/net/tcp is
      deprecated in favor of tcp_diag, however at the moment netstat only
      knows of the former.
      I am skeptical that such a large TCP established hash is often needed.
      Just because a system has a lot of memory doesn't imply that it will
      have several million concurrent TCP connections. Thus I believe
      that we should put an arbitrary high limit to the size of the TCP
      established hash by default. Users who really need a bigger hash can
      always use the thash_entries boot parameter to get more.
      I propose 2 million entries as the arbitrary high limit. This
      makes /proc/net/tcp reasonably fast on the system in question (0.2 s)
      while being still large enough for me to be confident that network
      performance won't suffer.
      This is just one way to limit the hash size, there are others; I am not
      familiar enough with the TCP code to decide which is best. Thus, I
      would welcome the proposals of alternatives.
      [ 2 million is still too large, thus I've modified the limit in the
        change to be '512 * 1024'. -DaveM ]
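      For reference, the hash size can still be raised past the default cap with the boot parameter the commit mentions; a hypothetical kernel command-line fragment (the value shown is illustrative):

      ```
      # kernel command line: request ~2M established-hash entries explicitly
      thash_entries=2097152
      ```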
      Signed-off-by: Jean Delvare <jdelvare@suse.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 19 Oct, 2007 1 commit
  15. 10 Oct, 2007 3 commits
  16. 02 Aug, 2007 1 commit
    • [TCP]: Invoke tcp_sendmsg() directly, do not use inet_sendmsg(). · 3516ffb0
      David S. Miller authored
      As discovered by Evgeniy Polyakov, if we try to sendmsg after
      a connection reset, we can do incredibly stupid things.
      The core issue is that inet_sendmsg() tries to autobind the
      socket, but we should never do that for TCP.  Instead we should
      just go straight into TCP's sendmsg() code which will do all
      of the necessary state and pending socket error checks.
      TCP's sendpage already directly vectors to tcp_sendpage(), so this
      merely brings sendmsg() in line with that.
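      The core issue, a generic wrapper autobinding before dispatch versus the protocol vectoring straight to its own sendmsg, can be modeled in a userspace sketch (all structures and names here are illustrative, not the kernel's):

      ```c
      #include <assert.h>

      /* Toy model: a generic sendmsg wrapper that autobinds first, versus a
       * protocol that dispatches straight to its own state-checking sendmsg. */
      struct toy_sock {
          int bound;
          int (*sendmsg)(struct toy_sock *sk, const char *buf);
      };

      static int tcp_sendmsg_model(struct toy_sock *sk, const char *buf)
      {
          (void)buf;
          return sk->bound ? 0 : -1;  /* TCP checks its own state; never autobinds */
      }

      static int inet_sendmsg_model(struct toy_sock *sk, const char *buf)
      {
          sk->bound = 1;              /* the autobind TCP must avoid */
          return sk->sendmsg(sk, buf);
      }

      int main(void)
      {
          struct toy_sock sk = { .bound = 0, .sendmsg = tcp_sendmsg_model };

          /* Direct dispatch: the unbound (e.g. reset) socket is rejected... */
          assert(sk.sendmsg(&sk, "x") == -1);
          assert(sk.bound == 0);
          /* ...while the generic wrapper would have silently autobound it. */
          assert(inet_sendmsg_model(&sk, "x") == 0);
          assert(sk.bound == 1);
          return 0;
      }
      ```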
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 19 Jul, 2007 1 commit
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt authored
      Slab destructors were no longer supported after Christoph's
      change. They've been BUGs for both slab and slub, and slob never
      supported them at all.
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  18. 11 Jul, 2007 2 commits
  19. 24 Jun, 2007 1 commit
  20. 03 Jun, 2007 1 commit
  21. 31 May, 2007 1 commit