1. 21 Mar, 2016 1 commit
  2. 03 Mar, 2016 1 commit
  3. 19 Jan, 2016 1 commit
    • IB/IPoIB: Fix kernel panic on multicast flow · 50be28de
      Erez Shitrit authored
      ipoib_mcast_restart_task calls ipoib_mcast_remove_list with the
      parameter mcast->dev. That mcast is a temporary variable (used as a
      list iterator) that may be uninitialized.
      There is no need to pass the variable dev to the function, as each
      mcast carries its own dev as a member of the mcast struct.
      
      This causes the following panic:
      RIP: 0010: ipoib_mcast_leave+0x6d/0xf0 [ib_ipoib]
      RSP: 0018: EFLAGS: 00010246
      RAX: f0201 RBX: 24e00 RCX: 00000
      ....
      ....
      Stack:
      Call Trace:
      	ipoib_mcast_remove_list+0x3a/0x70 [ib_ipoib]
      	ipoib_mcast_restart_task+0x3bb/0x520 [ib_ipoib]
      	process_one_work+0x164/0x470
      	worker_thread+0x11d/0x420
      	...
      
      Fixes: 5a0e81f6 ('IB/IPoIB: factor out common multicast list removal code')
      Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
      Reported-by: Doron Tsur <doront@mellanox.com>
      Reviewed-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
  4. 23 Dec, 2015 2 commits
  5. 13 Oct, 2015 1 commit
  6. 08 Oct, 2015 1 commit
    • IB: split struct ib_send_wr · e622f2f4
      Christoph Hellwig authored
      This patch splits up struct ib_send_wr so that all non-trivial verbs
      use their own structure which embeds struct ib_send_wr.  This
      dramatically shrinks the size of a WR for most common operations:
      
      sizeof(struct ib_send_wr) (old):	96
      
      sizeof(struct ib_send_wr):		48
      sizeof(struct ib_rdma_wr):		64
      sizeof(struct ib_atomic_wr):		96
      sizeof(struct ib_ud_wr):		88
      sizeof(struct ib_fast_reg_wr):		88
      sizeof(struct ib_bind_mw_wr):		96
      sizeof(struct ib_sig_handover_wr):	80
      
      And with Sagi's pending MR rework the fast registration WR will also be
      down to a reasonable size:
      
      sizeof(struct ib_fastreg_wr):		64
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> [srp, srpt]
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [sunrpc]
      Tested-by: Haggai Eran <haggaie@mellanox.com>
      Tested-by: Sagi Grimberg <sagig@mellanox.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
  7. 25 Sep, 2015 2 commits
  8. 30 Aug, 2015 1 commit
  9. 14 Jul, 2015 1 commit
    • IB/ipoib: Scatter-Gather support in connected mode · c4268778
      Yuval Shaia authored
      By default, the IPoIB-CM driver uses a 64K MTU. A larger MTU gives
      better performance.
      This MTU plus overhead puts the memory allocation for IP-based packets
      at 32 contiguous 4K pages (an order-5 allocation).
      When system memory is under pressure, it was observed that allocating
      128K of contiguous physical memory is difficult and causes serious
      errors (such as the system becoming unusable).
      
      This enhancement resolves the issue by removing the physically
      contiguous memory requirement, using the Scatter/Gather feature that
      exists in the Linux network stack.
      
      With this fix, Scatter-Gather is also supported in connected mode.
      
      This change reverts some of the change made in commit e112373f
      ("IPoIB/cm: Reduce connected mode TX object size").
      
      The ability to use SG in IPoIB CM is possible because the coupling
      between NETIF_F_SG and NETIF_F_CSUM was removed in commit
      ec5f0615 ("net: Kill link between CSUM and SG features.")
      Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
      Acked-by: Christian Marie <christian@ponies.io>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
  10. 15 Apr, 2015 6 commits
    • IB/ipoib: Remove IPOIB_MCAST_RUN bit · 0e5544d9
      Erez Shitrit authored
      After Doug Ledford's changes there is no need for that bit; its
      semantics have become a subset of the IPOIB_FLAG_OPER_UP bit.
      Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
    • IB/ipoib: Handle QP in SQE state · 2c010730
      Erez Shitrit authored
      As the result of a completion error, the QP can be moved to the SQE
      state by the hardware. Since this is not the Error state, there are
      no flushes and hence the driver doesn't know about it.
      
      The fix adds a task that, after a completion error that is not a
      flush error, checks the QP state and, if it is in SQE, moves it back
      to RTS.
      Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
    • IB/ipoib: Use one linear skb in RX flow · a44878d1
      Erez Shitrit authored
      The current code in the RX flow uses two sg entries for each incoming
      packet: the first for the IB headers and the second for the rest of
      the data. This causes two dma map/unmap operations, two allocations,
      and a few more actions on the data path.
      
      Use only one linear skb for each incoming packet, covering all the
      data (IB headers and payload). This reduces packet processing on the
      data path (only one skb, no frags — the first frag was not used
      anyway — and fewer memory allocations) and simplifies the dma
      handling (one dma map/unmap per incoming packet instead of two).
      
      After commit 73d3fe6d ("gro: fix aggregation for skb using frag_list") from
      Eric Dumazet, we will get full aggregation for large packets.
      
      When running bandwidth tests before and after the change (over the
      card's NUMA node), using "netperf -H 1.1.1.3 -T -t TCP_STREAM", the
      results are ~12 Gb/s before and ~16 Gb/s after on my setup (Mellanox
      ConnectX-3).
      Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
    • IB/ipoib: fix MCAST_FLAG_BUSY usage · 69911416
      Doug Ledford authored
      Commit a9c8ba58 ("IPoIB: Fix usage of uninitialized multicast
      objects") added a new flag MCAST_JOIN_STARTED, but was not very strict
      in how it was used.  We didn't always initialize the completion
      struct before we set the flag, and we didn't always call complete on
      the completion struct from all paths that should have completed it.
      And when we did complete it, we sometimes continued to touch the
      mcast entry after the completion, opening us up to possible
      use-after-free issues.
      
      This made it less than totally effective, and certainly made its use
      confusing.  And in the flush function we would use the presence of this
      flag to signal that we should wait on the completion struct, but we never
      cleared this flag, ever.
      
      In order to make things clearer and aid in resolving the rtnl deadlock
      bug I've been chasing, I cleaned this up a bit.
      
       1) Remove the MCAST_JOIN_STARTED flag entirely
       2) Change MCAST_FLAG_BUSY so it now only means a join is in-flight
       3) Test mcast->mc directly to see if we have completed
          ib_sa_join_multicast (using IS_ERR_OR_NULL)
       4) Make sure that before setting MCAST_FLAG_BUSY we always initialize
          the mcast->done completion struct
       5) Make sure that before calling complete(&mcast->done), we always clear
          the MCAST_FLAG_BUSY bit
       6) Take the mcast_mutex before we call ib_sa_multicast_join and also
          take the mutex in our join callback.  This forces
          ib_sa_multicast_join to return and set mcast->mc before we process
          the callback.  This way, our callback can safely clear mcast->mc
          if there is an error on the join and we will do the right thing as
          a result in mcast_dev_flush.
       7) Because we need the mutex to synchronize mcast->mc, we can no
          longer call mcast_sendonly_join directly from mcast_send and
          instead must add sendonly join processing to the mcast_join_task
       8) Make MCAST_RUN mean that we have a working mcast subsystem, not that
          we have a running task.  We know when we need to reschedule our
          join task thread and don't need a flag to tell us.
       9) Add a helper for rescheduling the join task thread
      
      A number of different races are resolved with these changes.  These
      races existed with the old MCAST_FLAG_BUSY usage, the
      MCAST_JOIN_STARTED flag was an attempt to address them, and while it
      helped, a determined effort could still trip things up.
      
      One race looks something like this:
      
      Thread 1                             Thread 2
      ib_sa_join_multicast (as part of running restart mcast task)
        alloc member
        call callback
                                           ifconfig ib0 down
      				     wait_for_completion
          callback call completes
                                           wait_for_completion in
      				     mcast_dev_flush completes
      				       mcast->mc is PTR_ERR_OR_NULL
      				       so we skip ib_sa_leave_multicast
          return from callback
        return from ib_sa_join_multicast
      set mcast->mc = return from ib_sa_multicast
      
      We now have a permanently unbalanced join/leave issue that trips up the
      refcounting in core/multicast.c
      
      Another race looks like this:
      
      Thread 1                   Thread 2         Thread 3
      ib_sa_multicast_join
                                                  ifconfig ib0 down
      					    priv->broadcast = NULL
                                 join_complete
      			                    wait_for_completion
      			   mcast->mc is not yet set, so don't clear
      return from ib_sa_join_multicast and set mcast->mc
      			   complete
      			   return -EAGAIN (making mcast->mc invalid)
      			   		    call ib_sa_multicast_leave
      					    on invalid mcast->mc, hang
      					    forever
      
      By holding the mutex around ib_sa_multicast_join and taking the mutex
      early in the callback, we force mcast->mc to be valid at the time we
      run the callback.  This allows us to clear mcast->mc if there is an
      error and the join is going to fail.  We do this before we complete
      the mcast.  In this way, mcast_dev_flush always sees consistent state
      in regards to mcast->mc membership at the time that the
      wait_for_completion() returns.
      Signed-off-by: Doug Ledford <dledford@redhat.com>
    • IB/ipoib: No longer use flush as a parameter · efc82eee
      Doug Ledford authored
      Various places in the IPoIB code had a deadlock related to flushing
      the ipoib workqueue.  Now that we have per device workqueues and a
      specific flush workqueue, there is no longer a deadlock issue with
      flushing the device specific workqueues and we can do so unilaterally.
      Signed-off-by: Doug Ledford <dledford@redhat.com>
    • IB/ipoib: Use dedicated workqueues per interface · 0b39578b
      Doug Ledford authored
      During my recent work on the rtnl lock deadlock in the IPoIB driver, I
      saw that even once I fixed the apparent races for a single device, as
      soon as that device had any children, new races popped up.  It turns
      out that this is because no matter how well we protect against races
      on a single device, the fact that all devices use the same workqueue,
      and flush_workqueue() flushes *everything* from that workqueue means
      that we would also have to prevent all races between different devices
      (for instance, ipoib_mcast_restart_task on interface ib0 can race with
      ipoib_mcast_flush_dev on interface ib0.8002, resulting in a deadlock on
      the rtnl_lock).
      
      There are several possible solutions to this problem:
      
      Make carrier_on_task and mcast_restart_task try to take the rtnl for
      some set period of time and if they fail, then bail.  This runs the
      real risk of dropping work on the floor, which can end up being its
      own separate kind of deadlock.
      
      Set some global flag in the driver that says some device is in the
      middle of going down, letting all tasks know to bail.  Again, this can
      drop work on the floor.
      
      Or the method this patch attempts to use, which is when we bring an
      interface up, create a workqueue specifically for that interface, so
      that when we take it back down, we are flushing only those tasks
      associated with our interface.  In addition, keep the global
      workqueue, but now limit it to only flush tasks.  In this way, the
      flush tasks can always flush the device specific work queues without
      having deadlock issues.
      Signed-off-by: Doug Ledford <dledford@redhat.com>
  11. 30 Jan, 2015 4 commits
  12. 15 Dec, 2014 4 commits
    • IPoIB: No longer use flush as a parameter · ce347ab9
      Doug Ledford authored
      Various places in the IPoIB code had a deadlock related to flushing
      the ipoib workqueue.  Now that we have per device workqueues and a
      specific flush workqueue, there is no longer a deadlock issue with
      flushing the device specific workqueues and we can do so unilaterally.
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • IPoIB: Make ipoib_mcast_stop_thread flush the workqueue · bb42a6dd
      Doug Ledford authored
      We used to pass a flush variable to mcast_stop_thread to indicate
      whether we should flush the workqueue or not.  This was due to some
      code trying to flush a workqueue that it was currently running on,
      which is a no-no.  Now that we have per-device workqueues, and now
      that ipoib_mcast_restart_task takes advantage of the fact that it is
      queued on a single-threaded workqueue with all of the
      ipoib_mcast_join_tasks and therefore has no need to stop the join
      task while it runs, we can do away with the flush parameter and
      unilaterally flush always.
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • IPoIB: Use dedicated workqueues per interface · 5141861c
      Doug Ledford authored
      During my recent work on the rtnl lock deadlock in the IPoIB driver, I
      saw that even once I fixed the apparent races for a single device, as
      soon as that device had any children, new races popped up.  It turns
      out that this is because no matter how well we protect against races
      on a single device, the fact that all devices use the same workqueue,
      and flush_workqueue() flushes *everything* from that workqueue, we can
      have one device in the middle of a down and holding the rtnl lock and
      another totally unrelated device needing to run mcast_restart_task,
      which wants the rtnl lock and will loop trying to take it unless it
      sees its own FLAG_ADMIN_UP flag go away.  Because the unrelated
      interface will never see its own ADMIN_UP flag drop, the interface
      going down will deadlock trying to flush the queue.  There are several
      possible solutions to this problem:
      
      Make carrier_on_task and mcast_restart_task try to take the rtnl for
      some set period of time and if they fail, then bail.  This runs the
      real risk of dropping work on the floor, which can end up being its
      own separate kind of deadlock.
      
      Set some global flag in the driver that says some device is in the
      middle of going down, letting all tasks know to bail.  Again, this can
      drop work on the floor.  I suppose if our own ADMIN_UP flag doesn't go
      away, then maybe after a few tries on the rtnl lock we can queue our
      own task back up as a delayed work and return and avoid dropping work
      on the floor that way.  But I'm not 100% convinced that we won't cause
      other problems.
      
      Or the method this patch attempts to use, which is when we bring an
      interface up, create a workqueue specifically for that interface, so
      that when we take it back down, we are flushing only those tasks
      associated with our interface.  In addition, keep the global
      workqueue, but now limit it to only flush tasks.  In this way, the
      flush tasks can always flush the device specific work queues without
      having deadlock issues.
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • IPoIB: fix MCAST_FLAG_BUSY usage · 016d9fb2
      Doug Ledford authored
      Commit a9c8ba58 ("IPoIB: Fix usage of uninitialized multicast
      objects") added a new flag MCAST_JOIN_STARTED, but was not very strict
      in how it was used.  We didn't always initialize the completion struct
      before we set the flag, and we didn't always call complete on the
      completion struct from all paths that should have completed it.  This made it less
      than totally effective, and certainly made its use confusing.  And in
      the flush function we would use the presence of this flag to signal
      that we should wait on the completion struct, but we never cleared
      this flag, ever.  This is further muddied by the fact that we overload
      the MCAST_FLAG_BUSY flag to mean two different things: we have a join
      in flight, and we have succeeded in getting an ib_sa_join_multicast.
      
      In order to make things clearer and aid in resolving the rtnl deadlock
      bug I've been chasing, I cleaned this up a bit.
      
       1) Remove the MCAST_JOIN_STARTED flag entirely
       2) Un-overload MCAST_FLAG_BUSY so it now only means a join is in-flight
       3) Test on mcast->mc directly to see if we have completed
          ib_sa_join_multicast (using IS_ERR_OR_NULL)
       4) Make sure that before setting MCAST_FLAG_BUSY we always initialize
          the mcast->done completion struct
       5) Make sure that before calling complete(&mcast->done), we always clear
          the MCAST_FLAG_BUSY bit
       6) Take the mcast_mutex before we call ib_sa_multicast_join and also
          take the mutex in our join callback.  This forces
          ib_sa_multicast_join to return and set mcast->mc before we process
          the callback.  This way, our callback can safely clear mcast->mc
          if there is an error on the join and we will do the right thing as
          a result in mcast_dev_flush.
       7) Because we need the mutex to synchronize mcast->mc, we can no
          longer call mcast_sendonly_join directly from mcast_send and
          instead must add sendonly join processing to the mcast_join_task
      
      A number of different races are resolved with these changes.  These
      races existed with the old MCAST_FLAG_BUSY usage, the
      MCAST_JOIN_STARTED flag was an attempt to address them, and while it
      helped, a determined effort could still trip things up.
      
      One race looks something like this:
      
      Thread 1                             Thread 2
      ib_sa_join_multicast (as part of running restart mcast task)
        alloc member
        call callback
                                           ifconfig ib0 down
      				     wait_for_completion
          callback call completes
                                           wait_for_completion in
      				     mcast_dev_flush completes
      				       mcast->mc is PTR_ERR_OR_NULL
      				       so we skip ib_sa_leave_multicast
          return from callback
        return from ib_sa_join_multicast
      set mcast->mc = return from ib_sa_multicast
      
      We now have a permanently unbalanced join/leave issue that trips up the
      refcounting in core/multicast.c
      
      Another race looks like this:
      
      Thread 1                   Thread 2         Thread 3
      ib_sa_multicast_join
                                                  ifconfig ib0 down
      					    priv->broadcast = NULL
                                 join_complete
      			                    wait_for_completion
      			   mcast->mc is not yet set, so don't clear
      return from ib_sa_join_multicast and set mcast->mc
      			   complete
      			   return -EAGAIN (making mcast->mc invalid)
      			   		    call ib_sa_multicast_leave
      					    on invalid mcast->mc, hang
      					    forever
      
      By holding the mutex around ib_sa_multicast_join and taking the mutex
      early in the callback, we force mcast->mc to be valid at the time we
      run the callback.  This allows us to clear mcast->mc if there is an
      error and the join is going to fail.  We do this before we complete
      the mcast.  In this way, mcast_dev_flush always sees consistent state
      in regards to mcast->mc membership at the time that the
      wait_for_completion() returns.
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  13. 22 Sep, 2014 1 commit
  14. 05 Aug, 2014 2 commits
  15. 08 Nov, 2013 2 commits
    • IPoIB: Fix usage of uninitialized multicast objects · a9c8ba58
      Erez Shitrit authored
      The driver should avoid calling ib_sa_free_multicast on the mcast->mc
      object until it finishes its initialization state.  Otherwise we can
      crash when ipoib_mcast_dev_flush() attempts to use the uninitialized
      multicast object.
      
      Instead, only call wait_for_completion() for multicast entries that
      started the join process, meaning that ib_sa_join_multicast() finished.
      Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • IPoIB: Fix deadlock between dev_change_flags() and __ipoib_dev_flush() · f47944cc
      Erez Shitrit authored
      When an ipoib interface goes down, it takes all of its children down
      with it, under a mutex.
      
      For each child, dev_change_flags() is called.  That function calls
      ipoib_stop() via the ndo and causes a flush of the workqueue.
      Sometimes an __ipoib_dev_flush() work item is waiting in the
      workqueue, and when invoked it tries to take the same mutex, which
      leads to a deadlock, as seen below.
      
      The solution is to switch to an rw-sem instead of the mutex.
      
      The deadlock:
      [11028.165303]  [<ffffffff812b0977>] ? vgacon_scroll+0x107/0x2e0
      [11028.171844]  [<ffffffff814eaac5>] schedule_timeout+0x215/0x2e0
      [11028.178465]  [<ffffffff8105a5c3>] ? perf_event_task_sched_out+0x33/0x80
      [11028.185962]  [<ffffffff814ea743>] wait_for_common+0x123/0x180
      [11028.192491]  [<ffffffff8105fa40>] ? default_wake_function+0x0/0x20
      [11028.199504]  [<ffffffff814ea85d>] wait_for_completion+0x1d/0x20
      [11028.206224]  [<ffffffff8108b4f1>] flush_cpu_workqueue+0x61/0x90
      [11028.212948]  [<ffffffff8108b5a0>] ? wq_barrier_func+0x0/0x20
      [11028.219375]  [<ffffffff8108bfc4>] flush_workqueue+0x54/0x80
      [11028.225712]  [<ffffffffa05a0576>] ipoib_mcast_stop_thread+0x66/0x90 [ib_ipoib]
      [11028.233988]  [<ffffffffa059ccea>] ipoib_ib_dev_down+0x6a/0x100 [ib_ipoib]
      [11028.241678]  [<ffffffffa059849a>] ipoib_stop+0x8a/0x140 [ib_ipoib]
      [11028.248692]  [<ffffffff8142adf1>] dev_close+0x71/0xc0
      [11028.254447]  [<ffffffff8142a631>] dev_change_flags+0xa1/0x1d0
      [11028.261062]  [<ffffffffa059851b>] ipoib_stop+0x10b/0x140 [ib_ipoib]
      [11028.268172]  [<ffffffff8142adf1>] dev_close+0x71/0xc0
      [11028.273922]  [<ffffffff8142a631>] dev_change_flags+0xa1/0x1d0
      [11028.280452]  [<ffffffff8148f20b>] devinet_ioctl+0x5eb/0x6a0
      [11028.286786]  [<ffffffff814903b8>] inet_ioctl+0x88/0xa0
      [11028.292633]  [<ffffffff8141591a>] sock_ioctl+0x7a/0x280
      [11028.298576]  [<ffffffff81189012>] vfs_ioctl+0x22/0xa0
      [11028.304326]  [<ffffffff81140540>] ? unmap_region+0x110/0x130
      [11028.310756]  [<ffffffff811891b4>] do_vfs_ioctl+0x84/0x580
      [11028.316897]  [<ffffffff81189731>] sys_ioctl+0x81/0xa0
      
      and
      
      11028.017533]  [<ffffffff8105a5c3>] ? perf_event_task_sched_out+0x33/0x80
      [11028.025030]  [<ffffffff8100bb8e>] ? apic_timer_interrupt+0xe/0x20
      [11028.031945]  [<ffffffff814eb2ae>] __mutex_lock_slowpath+0x13e/0x180
      [11028.039053]  [<ffffffff814eb14b>] mutex_lock+0x2b/0x50
      [11028.044910]  [<ffffffffa059f7e7>] __ipoib_ib_dev_flush+0x37/0x210 [ib_ipoib]
      [11028.052894]  [<ffffffffa059fa00>] ? ipoib_ib_dev_flush_light+0x0/0x20 [ib_ipoib]
      [11028.061363]  [<ffffffffa059fa17>] ipoib_ib_dev_flush_light+0x17/0x20 [ib_ipoib]
      [11028.069738]  [<ffffffff8108b120>] worker_thread+0x170/0x2a0
      [11028.076068]  [<ffffffff81090990>] ? autoremove_wake_function+0x0/0x40
      [11028.083374]  [<ffffffff8108afb0>] ? worker_thread+0x0/0x2a0
      [11028.089709]  [<ffffffff81090626>] kthread+0x96/0xa0
      [11028.095266]  [<ffffffff8100c0ca>] child_rip+0xa/0x20
      [11028.100921]  [<ffffffff81090590>] ? kthread+0x0/0xa0
      [11028.106573]  [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
      [11028.112423] INFO: task ifconfig:23640 blocked for more than 120 seconds.
      Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  16. 19 Feb, 2013 2 commits
  17. 02 Oct, 2012 1 commit
  18. 01 Oct, 2012 1 commit
    • IB/ipoib: Add more rtnl_link_ops callbacks · 862096a8
      Or Gerlitz authored
      Add the rtnl_link_ops changelink and fill_info callbacks, through
      which the admin can now set/get the driver mode and other policies.
      Maintain the proprietary sysfs entries only for legacy child devices.
      
      For child devices, set dev->iflink to point to the parent device's
      ifindex, so that user space tools can now correctly show the uplink
      relation, as is done for vlan, macvlan, etc. devices.  Pointed out
      by Patrick McHardy <kaber@trash.net>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 20 Sep, 2012 1 commit
    • IB/ipoib: Add rtnl_link_ops support · 9baa0b03
      Or Gerlitz authored
      Add rtnl_link_ops to IPoIB, with the first usage being child device
      create/delete through them. Child devices are now either legacy ones,
      created/deleted through the ipoib sysfs entries, or RTNL ones.
      
      Adding support for RTNL children involved refactoring ipoib_vlan_add,
      which is now used by both the sysfs and the link_ops code.
      
      Also, add an ndo_uninit entry to support calling
      unregister_netdevice_queue from the rtnl dellink entry. This required
      removing the calls to ipoib_dev_cleanup from driver flows which use
      unregister_netdevice, since the networking core will invoke
      ipoib_uninit, which does exactly that.
      Signed-off-by: Erez Shitrit <erezsh@mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 12 Sep, 2012 2 commits
  21. 30 Jul, 2012 1 commit
    • IPoIB: Use a private hash table for path lookup in xmit path · b63b70d8
      Shlomo Pongratz authored
      Dave Miller <davem@davemloft.net> provided a detailed description of
      why the way IPoIB is using neighbours for its own ipoib_neigh struct
      is buggy:
      
          Any time an ipoib_neigh is changed, a sequence like the following is made:
      
          			spin_lock_irqsave(&priv->lock, flags);
          			/*
          			 * It's safe to call ipoib_put_ah() inside
          			 * priv->lock here, because we know that
          			 * path->ah will always hold one more reference,
          			 * so ipoib_put_ah() will never do more than
          			 * decrement the ref count.
          			 */
          			if (neigh->ah)
          				ipoib_put_ah(neigh->ah);
          			list_del(&neigh->list);
          			ipoib_neigh_free(dev, neigh);
          			spin_unlock_irqrestore(&priv->lock, flags);
          			ipoib_path_lookup(skb, n, dev);
      
          This doesn't work, because you're leaving a stale pointer to the freed up
          ipoib_neigh in the special neigh->ha pointer cookie.  Yes, it even fails
          with all the locking done to protect _changes_ to *ipoib_neigh(n), and
          with the code in ipoib_neigh_free() that NULLs out the pointer.
      
          The core issue is that read side calls to *to_ipoib_neigh(n) are not
          being synchronized at all, they are performed without any locking.  So
          whether we hold the lock or not when making changes to *ipoib_neigh(n)
          you still can have threads see references to freed up ipoib_neigh
          objects.
      
          	cpu 1			cpu 2
          	n = *ipoib_neigh()
          				*ipoib_neigh() = NULL
          				kfree(n)
          	n->foo == OOPS
      
          [..]
      
          Perhaps the ipoib code can have a private path database it manages
          entirely itself, which holds all the necessary information and is
          looked up by some generic key which is available easily at transmit
          time and does not involve generic neighbour entries.
      
      See <http://marc.info/?l=linux-rdma&m=132812793105624&w=2> and
      <http://marc.info/?l=linux-rdma&w=2&r=1&s=allows+references+to+freed+memory&q=b>
      for the full discussion.
      
      This patch aims to solve the race conditions found in the IPoIB driver.
      
      The patch removes the connection between the core networking neighbour
      structure and the ipoib_neigh structure.  In addition to avoiding the
      race described above, it allows us to handle SKBs carrying IP packets
      that don't have any associated neighbour.
      
      We add an ipoib_neigh hash table with N buckets where the key is the
      destination hardware address.  The ipoib_neigh is fetched from the
      hash table and instead of the stashed location in the neighbour
      structure. The hash table uses both RCU and reference counting to
      guarantee that no ipoib_neigh instance is ever deleted while in use.
      
      Fetching the ipoib_neigh structure instance from the hash also makes
      the special code in ipoib_start_xmit that handles remote and local
      bonding failover redundant.
      
      Aged ipoib_neigh instances are deleted by a garbage collection task
      that runs every M seconds and deletes every ipoib_neigh instance that
      was idle for at least 2*M seconds. The deletion is safe since the
      ipoib_neigh instances are protected using RCU and reference count
      mechanisms.
      
      The number of buckets (N) and the frequency of running the GC thread
      (M) are taken from the exported arp_tbl.
      Signed-off-by: Shlomo Pongratz <shlomop@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  22. 08 Feb, 2012 2 commits