1. 05 Sep, 2009 2 commits
    • [SCSI] fcoe, libfc: fully makes use of per cpu exch pool and then removes em_lock · b2f0091f
      Vasu Dev authored
      1. Updates fcoe_rcv() to queue each incoming frame to the fcoe
         per-cpu thread on which the frame's exchange was originated,
         and to simply use the current cpu for request exchanges not
         originated by the initiator. Guarding this code with
         CONFIG_SMP is redundant, so the CONFIG_SMP conditionals
         around it are removed.
      
      2. Updates fc_exch_em_alloc, fc_exch_delete and fc_exch_find to
         use the per-cpu exch pools. Here fc_exch_delete is a rename
         of the older fc_exch_mgr_delete_ep; since an ep/exch is now
         deleted from a pool of its EM, the shorter name is both
         sufficient and clearer.

         Updates these functions to map an exch id to its index into
         the exch pool using fc_cpu_mask, fc_cpu_order and the EM
         min_xid. This mapping follows the detailed explanation in
         the previous patch: the lower fc_cpu_mask bits of an exch id
         hold the cpu number, and the upper bits hold the sum of the
         EM min_xid and the exch index within the pool (see the
         sketch after this list).

         Uses the pool's next_index to track exch allocation from the
         pool, with pool_max_index as the upper bound of the pool's
         exches array.
      
      3. Adds an exch pool pointer to fc_exch so that fc_exch_delete
         can free an exch back to its pool.
      
      4. Updates fc_exch_mgr_reset to reset all exch pools of an EM.
         This required adding an fc_exch_pool_reset function to reset
         the exches in a pool, and having fc_exch_mgr_reset call
         fc_exch_pool_reset for each pool within each EM of an lport.
      
      5. Removes the exches array, em_lock, next_xid and total_exches
         fields from struct fc_exch_mgr; they are no longer needed
         once per-cpu exch pools are in use. Also removes the unused
         max_read and last_read fields from struct fc_exch_mgr.
      
      6. Updates the locking notes to cover the exch pool lock
         alongside the fc_exch lock, and uses the pool lock in exch
         allocation, lookup and reset.
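
      The mapping above, sketched as kernel C. This is a minimal
      illustration assuming the fc_cpu_mask/fc_cpu_order globals and
      pool fields introduced by the previous patch, not the verbatim
      upstream code:

          /* in fcoe_rcv(): a response frame carries our OX_ID, so its
           * low fc_cpu_mask bits name the cpu that originated the
           * exchange; requests just stay on the current cpu */
          if (ntoh24(fh->fh_f_ctl) & FC_FC_EX_CTX)
                  cpu = ntohs(fh->fh_ox_id) & fc_cpu_mask;
          else
                  cpu = smp_processor_id();

          /* in fc_exch_find(): the low xid bits pick the per-cpu pool,
           * the remaining bits (less min_xid) index into that pool */
          static struct fc_exch *fc_exch_find(struct fc_exch_mgr *mp,
                                              u16 xid)
          {
                  struct fc_exch_pool *pool;
                  struct fc_exch *ep = NULL;

                  if (xid >= mp->min_xid && xid <= mp->max_xid) {
                          pool = per_cpu_ptr(mp->pool, xid & fc_cpu_mask);
                          spin_lock_bh(&pool->lock);
                          ep = fc_exch_ptr_get(pool,
                                               (xid - mp->min_xid) >> fc_cpu_order);
                          if (ep && ep->xid == xid)
                                  fc_exch_hold(ep);
                          spin_unlock_bh(&pool->lock);
                  }
                  return ep;
          }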
      Signed-off-by: Vasu Dev <vasu.dev@intel.com>
      Signed-off-by: Robert Love <robert.w.love@intel.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
    • [SCSI] fcoe, libfc: adds per cpu exch pool within exchange manager(EM) · e4bc50be
      Vasu Dev authored
      Adds per-cpu exch pools for these reasons:
      
       1. Currently an EM instance is shared across all cpus to manage
          all exches for all cpus. This requires taking em_lock across
          all cpus for every exch alloc, free, lookup and reset on
          each frame, which makes em_lock expensive. Per-cpu exch
          pools, each with its own per-cpu pool lock, should instead
          reduce locking contention in the fast path for exch alloc,
          free and lookup.
      
       2. Per-cpu exch pools should also improve the cache hit ratio,
          since all frames of an exch will be processed on the same
          cpu on which the exch originated.
      
      This patch is only prep work to keep the complexity of the next
      patch low: it just sets up the per-cpu exch pools and the
      related helper functions for the next patch to use. The next
      patch fully makes use of the per-cpu exch pools in all code
      paths, i.e. tx, rx and reset.
      
      Divides each EM's exch id range equally across all cpus to set
      up the per-cpu exch pools. The division is such that the lower
      bits of an exch id carry the number of the cpu on which the
      exch originated; later, a simple bitwise AND of an incoming
      frame's exch id with fc_cpu_mask recovers that cpu number, so
      all frames are directed to the same cpu on which their exch
      originated. This requires the globals fc_cpu_mask and
      fc_cpu_order, initialized from the maximum possible cpu count
      nr_cpu_ids rounded up to a power of two; they are used to map
      between an exch id and the index into a pool's exch pointer
      array in the exch allocation, find and reset code paths.
      
      Adds a check in fc_exch_mgr_alloc() to ensure that the lower
      bits of the specified min_xid are zero, since these bits are
      reserved to carry the cpu number. A sketch of this setup
      follows.
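
      A minimal sketch of the setup, as kernel C. The globals follow
      the names in this message; the helpers fc_setup_cpu_mask() and
      fc_xid_range_valid() are hypothetical, added only to frame the
      initialization and the min_xid check:

          #include <linux/log2.h>  /* ilog2(), roundup_pow_of_two() */

          static u16 fc_cpu_mask;   /* low xid bits carrying the cpu number */
          static u16 fc_cpu_order;  /* log2 of possible cpus, rounded up */

          /* at init time: size the cpu bit-field from nr_cpu_ids */
          static void fc_setup_cpu_mask(void)
          {
                  /* e.g. nr_cpu_ids = 6 -> rounds to 8 -> order 3, mask 0x7 */
                  fc_cpu_order = ilog2(roundup_pow_of_two(nr_cpu_ids));
                  fc_cpu_mask = (1 << fc_cpu_order) - 1;
          }

          /* the check in fc_exch_mgr_alloc(): min_xid must have zero
           * cpu bits so the per-pool xid ranges line up */
          static bool fc_xid_range_valid(u16 min_xid, u16 max_xid)
          {
                  return max_xid > min_xid && !(min_xid & fc_cpu_mask);
          }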
      
      Adds and initializes struct fc_exch_pool with all the fields
      required to manage the exches in a pool.
      
      Allocates a per-cpu struct fc_exch_pool along with memory for
      the exches array covering the range of exches per pool; the
      exches array memory directly follows struct fc_exch_pool.
      
      Adds fc_exch_ptr_get/set() helper functions to get/set an exch
      pointer in a pool's exches array at a specified index, as
      sketched below.
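
      A sketch of the pool layout and the helpers, assuming the field
      set named in this message; the exch pointer array is carved out
      of the same allocation, directly after the struct:

          struct fc_exch_pool {
                  u16              next_index;   /* next candidate free slot */
                  u16              total_exches; /* exches allocated from pool */
                  spinlock_t       lock;         /* per-pool exch lock */
                  struct list_head ex_list;      /* allocated exches */
                  /* array of pool_max_index struct fc_exch pointers follows */
          };

          static inline struct fc_exch *fc_exch_ptr_get(struct fc_exch_pool *pool,
                                                        unsigned int index)
          {
                  struct fc_exch **exches = (struct fc_exch **)(pool + 1);
                  return exches[index];
          }

          static inline void fc_exch_ptr_set(struct fc_exch_pool *pool,
                                             unsigned int index,
                                             struct fc_exch *ep)
          {
                  ((struct fc_exch **)(pool + 1))[index] = ep;
          }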
      
      Increases the default FCOE_MAX_XID from 0x07EF to 0x0FFF, so
      that more exches are available per cpu once the exch id range
      is divided across all cpus as described above. For example,
      with 8 possible cpus (fc_cpu_order = 3) and a min_xid of 0,
      this widens each pool from roughly 254 to 512 exches.
      Signed-off-by: Vasu Dev <vasu.dev@intel.com>
      Signed-off-by: Robert Love <robert.w.love@intel.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
2. 27 Apr, 2009 1 commit
    • [SCSI] libfc: Track rogue remote ports · b4c6f546
      Abhijeet Joglekar authored
      Rogue ports are currently not tracked on any list; the only
      reference to them is through any outstanding exchanges pending
      on them. If the module is removed while a retry is pending on a
      rogue port (a Plogi retry, for instance), that retry is not
      cancelled, because there is no reference to the rogue port in
      the discovery rports list. The local port can thus clean itself
      up and delete the exchange pool, after which the rogue port
      timeout can fire and try to start up another exchange.
      
      This patch tracks the rogue ports in a new list,
      disc->rogue_rports. Creating a new list instead of reusing the
      disc->rports list keeps the remote port code changes to a
      minimum.
      
      1) Whenever a rogue port is created, it is immediately added to
      the disc->rogue_rports list.
      
      2) When a rogue port goes to ready, it is removed from the
      rogue list and the real remote port is added to the
      disc->rports list.
      
      3) The removal of the rogue from the disc->rogue_rports list is
      done in the context of the fc_rport_work() workqueue thread, in
      the discovery callback.
      
      4) Real rports are removed from the disc->rports list as
      before, and lookup is done only in the real rports list; this
      avoids making large changes to the remote port code.
      
      5) In fc_disc_stop_rports, the rogue list is traversed in
      addition to the real list to stop the rogue ports and issue
      logoffs on them (see the sketch after this list). This way,
      rogue ports get cleaned up when the local port goes away.
      
      6) Because rogue remote ports are not removed from the list
      right away, but only later in fc_rport_work() context, multiple
      threads can find the same remote port in the list and call
      rport_logoff(). rport_logoff() only continues with the logoff
      if the port is not in the NONE state, thus preventing multiple
      logoffs and multiple list deletions.
      
      7) Since the rport is removed from the disc list at a later
      stage (in the disc callback), incoming frames can find the
      rport even after rport_logoff() has been called on it. When
      rport_logoff() is called, the rport state is set to NONE and we
      are trying to cancel all exchanges and retries on that port.
      While in this state, if an incoming Plogi/Prli/Logo or other
      frame matches the rport, we should not reply, because the rport
      is in the NONE state; just drop the frame, since the rport will
      be deleted soon in the disc callback (fc_rport_work).
      
      8) In fc_disc_single(), remove the rport lookup and the call to
      fc_disc_del_target; fc_disc_single() is called from
      recv_rscn_req(), where rport lookup and rport_logoff are
      already done.
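
      A minimal sketch of the two-list teardown from point 5,
      assuming this era's libfc conventions (rdata->peers as the list
      linkage, PRIV_TO_RPORT() to recover the rport, and
      lport->tt.rport_logoff() as the logoff entry point);
      illustrative, not the verbatim upstream code:

          void fc_disc_stop_rports(struct fc_disc *disc)
          {
                  struct fc_lport *lport = disc->lport;
                  struct fc_rport_libfc_priv *rdata, *next;

                  mutex_lock(&disc->disc_mutex);

                  /* real rports: unlink now, then log off */
                  list_for_each_entry_safe(rdata, next, &disc->rports, peers) {
                          list_del(&rdata->peers);
                          lport->tt.rport_logoff(PRIV_TO_RPORT(rdata));
                  }

                  /* rogue rports: log off only; list removal happens
                   * later in fc_rport_work() context (points 3 and 6) */
                  list_for_each_entry_safe(rdata, next, &disc->rogue_rports, peers)
                          lport->tt.rport_logoff(PRIV_TO_RPORT(rdata));

                  mutex_unlock(&disc->disc_mutex);
          }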
      Signed-off-by: Abhijeet Joglekar <abjoglek@cisco.com>
      Signed-off-by: Robert Love <robert.w.love@intel.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
3. 13 Mar, 2009 1 commit
    • [SCSI] libfc: add support of large receive offload by ddp in fc_fcp · b277d2aa
      Yi Zou authored
      When the LLD supports direct data placement (ddp) for the large
      receive of a SCSI I/O coming into fc_fcp, we call into
      libfc_function_template's ddp_setup() to prepare a ddp of the
      large receive for this read I/O. When the I/O is complete, we
      call the corresponding ddp_done() to get the length of the data
      placed by ddp and to let the LLD clean up.
      
      fc_fcp_ddp_setup()/fc_fcp_ddp_done() are added to set up and
      complete a ddped read I/O described by the given fc_fcp_pkt.
      They call into the corresponding ddp_setup/ddp_done implemented
      by the fcoe layer; eventually the fcoe layer calls into the
      LLD's ddp_setup/ddp_done provided through net_device. A sketch
      of the setup side follows.
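
      A minimal sketch of the setup side, as kernel C. The lport
      fields and guard conditions shown (lro_enabled, FC_SRB_READ,
      xfer_ddp) follow this era's fc_fcp but are assumptions here,
      not the verbatim upstream code:

          void fc_fcp_ddp_setup(struct fc_fcp_pkt *fsp, u16 xid)
          {
                  struct fc_lport *lp = fsp->lp;

                  /* only a read I/O benefits from ddp, and the LLD
                   * must have provided a ddp_setup() handler */
                  if ((fsp->req_flags & FC_SRB_READ) && lp->lro_enabled &&
                      lp->tt.ddp_setup) {
                          if (lp->tt.ddp_setup(lp, xid, scsi_sglist(fsp->cmd),
                                               scsi_sg_count(fsp->cmd)))
                                  /* remember the xid so ddp_done() runs
                                   * at I/O completion */
                                  fsp->xfer_ddp = xid;
                  }
          }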
      Signed-off-by: Yi Zou <yi.zou@intel.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>