1. 01 Dec, 2010 1 commit
  2. 30 Nov, 2010 3 commits
  3. 27 Oct, 2010 1 commit
  4. 16 Jun, 2010 1 commit
      inetpeer: restore small inet_peer structures · 317fe0e6
      Eric Dumazet authored
      Addition of an rcu_head to struct inet_peer added 16 bytes on 64-bit
      arches.
      That's a bit unfortunate, since the old size was exactly 64 bytes.
      This can be solved by using a union between this rcu_head and four
      fields that are normally used only when a refcount is held on the
      inet_peer: rcu_head is used only when refcnt=-1, right before the
      structure is freed.
      Add an inet_peer_refcheck() function to check this assertion for a
      while.
      We can then bring back the SLAB_HWCACHE_ALIGN qualifier in kmem cache
      creation.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 15 Jun, 2010 1 commit
      inetpeer: RCU conversion · aa1039e7
      Eric Dumazet authored
      inetpeer currently uses an AVL tree protected by an rwlock.
      It's possible to make most lookups use RCU:
      1) Add a struct rcu_head to struct inet_peer.
      2) Add a lookup_rcu_bh() helper to perform a lockless and opportunistic
      lookup. This is a normal function, not a macro like lookup().
      3) Add a limit on the number of links followed by lookup_rcu_bh(), in
      case we fall into a loop.
      4) Add an smp_wmb() in link_to_pool() right before node insert.
      5) Make unlink_from_pool() use atomic_cmpxchg() to make sure it can take
      the last reference to an inet_peer, since lockless readers could
      increase the refcount even while we hold peers.lock.
      6) Delay struct inet_peer freeing until after an RCU grace period so
      that lookup_rcu_bh() cannot crash.
      7) Make inet_getpeer() first attempt a lockless lookup.
         Note this lookup can fail even if the target is in the AVL tree,
      because a concurrent writer can leave the tree in a transiently
      inconsistent form.
         If this attempt fails, the lock is taken and a regular lookup is
      performed.
      8) Convert peers.lock from an rwlock to a spinlock.
      9) Remove SLAB_HWCACHE_ALIGN when peer_cachep is created, because
      rcu_head adds 16 bytes on 64-bit arches, doubling the effective size
      (64 -> 128 bytes).
      In a future patch it should be possible to revert this part by putting
      the rcu field in a union sharing space with rid, ip_id_count, tcp_ts &
      tcp_ts_stamp, since these fields are manipulated only with refcnt > 0.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 13 Nov, 2009 1 commit
      inetpeer: Optimize inet_getid() · 2c1409a0
      Eric Dumazet authored
      While investigating network latencies, I found inet_getid() was a
      contention point for some workloads, as inet_peer_idlock is shared
      by all inet_getid() users regardless of which peer is involved.
      One way to fix this is to make ip_id_count an atomic_t instead of a
      __u16, and use atomic_add_return().
      In order to keep sizeof(struct inet_peer) = 64 on 64-bit arches,
      tcp_ts_stamp is also converted to __u32 instead of "unsigned long".
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 04 Nov, 2009 1 commit
  8. 11 Jun, 2008 1 commit
  9. 12 Nov, 2007 1 commit
  10. 20 Oct, 2006 1 commit
  11. 16 Oct, 2006 1 commit
  12. 28 Sep, 2006 1 commit
  13. 03 Jan, 2006 1 commit
  14. 16 Apr, 2005 1 commit
      Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      Let it rip!