1. 06 Dec, 2013 1 commit
  2. 17 Sep, 2013 1 commit
    • vhost: wake up worker outside spin_lock · ac9fde24
      Qin Chuanyu authored
      The wake_up_process() call is enclosed by the spin_lock/unlock pair in
      vhost_work_queue, but it can be done outside the spin_lock.
      I have tested this with kernel 3.0.27 and a suse11-sp2 guest using
      iperf; the numbers are below.
                        original                 modified
      thread_num  tp(Gbps)   vhost(%)  |  tp(Gbps)     vhost(%)
      1           9.59        28.82    |   9.59        27.49
      8           9.61        32.92    |   9.62        26.77
      64          9.58        46.48    |   9.55        38.99
      256         9.6         63.7     |   9.6         52.59
      Signed-off-by: Chuanyu Qin <qinchuanyu@huawei.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
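A userspace sketch of the reordering this commit describes (a pthread mutex standing in for the spinlock, a counter standing in for wake_up_process(); all struct and function names here are illustrative, not the real vhost ones): the need for a wakeup is decided while holding the lock, but the wakeup itself is issued only after the unlock, so the woken worker does not immediately contend for a lock the waker still holds.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative analog of vhost_work_queue() after the patch. */
struct vwork { struct vwork *next; };

struct vworker {
    pthread_mutex_t lock;        /* stands in for the vhost work spinlock */
    struct vwork *head, *tail;
    int wakeups;                 /* counts simulated wake_up_process() calls */
};

static void wake_worker(struct vworker *w) { w->wakeups++; }

void vwork_queue(struct vworker *w, struct vwork *item)
{
    bool need_wake;

    pthread_mutex_lock(&w->lock);
    item->next = NULL;
    if (w->tail)
        w->tail->next = item;
    else
        w->head = item;          /* queue was empty: worker may be asleep */
    w->tail = item;
    need_wake = (w->head == item);
    pthread_mutex_unlock(&w->lock);

    if (need_wake)               /* wake AFTER dropping the lock */
        wake_worker(w);
}
```

Queueing onto a non-empty list skips the wakeup entirely; only the empty-to-non-empty transition pays for it, which is consistent with the vhost CPU savings growing with thread count in the table above.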
  3. 04 Sep, 2013 1 commit
  4. 20 Aug, 2013 1 commit
  5. 07 Jul, 2013 2 commits
  6. 11 Jun, 2013 1 commit
  7. 06 May, 2013 2 commits
  8. 01 May, 2013 4 commits
  9. 11 Apr, 2013 1 commit
    • vhost_net: remove tx polling state · 70181d51
      Jason Wang authored
      After commit 2b8b328b (vhost_net: handle polling errors when setting
      backend), we in fact track the polling state through poll->wqh, so there
      is no need to duplicate the work with an extra vhost_net_polling_state.
      This patch removes it and makes the code simpler.
      
      This patch also removes all the tx starting/stopping code in the tx
      path, as Michael suggested.
      
      Netperf shows almost the same result in stream tests, but improvements
      on TCP_RR tests (both zerocopy and copy), especially under low load.
      
      Tested between multiqueue kvm guest and external host with two direct
      connected 82599s.
      
      zerocopy disabled:
      
      sessions | transaction rate (before/after/+improvement) | normalized (before/after/+improvement)
      1 | 9510.24/11727.29/+23.3%    | 693.54/887.68/+28.0%   |
      25| 192931.50/241729.87/+25.3% | 2376.80/2771.70/+16.6% |
      50| 277634.64/291905.76/+5%    | 3118.36/3230.11/+3.6%  |
      
      zerocopy enabled:
      
      sessions | transaction rate (before/after/+improvement) | normalized (before/after/+improvement)
      1 | 7318.33/11929.76/+63.0%    | 521.86/843.30/+61.6%   |
      25| 167264.88/242422.15/+44.9% | 2181.60/2788.16/+27.8% |
      50| 272181.02/294347.04/+8.1%  | 3071.56/3257.85/+6.1%  |
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
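The deduplication idea can be sketched like this (simplified, hypothetical types; the real code keeps a full waitqueue entry and locking): whether polling is active is derived from poll->wqh, which is non-NULL exactly while the entry sits on a waitqueue, instead of being mirrored in a separate state enum.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins, not the kernel structures. */
struct wait_queue_head { int unused; };

struct vhost_poll {
    struct wait_queue_head *wqh;  /* set on start, cleared on stop */
};

bool poll_active(const struct vhost_poll *p)
{
    return p->wqh != NULL;        /* the duplicate state variable is gone */
}

void poll_start(struct vhost_poll *p, struct wait_queue_head *wqh)
{
    p->wqh = wqh;
}

void poll_stop(struct vhost_poll *p)
{
    p->wqh = NULL;
}
```

Deriving the state from one authoritative field removes the class of bugs where the enum and the waitqueue membership disagree.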
  10. 29 Jan, 2013 1 commit
    • vhost_net: handle polling errors when setting backend · 2b8b328b
      Jason Wang authored
      Currently, polling errors are ignored, which can lead to the following
      issues:
      
      - vhost removes itself unconditionally from the waitqueue when stopping
        the poll; this may crash the kernel, since the previous attempt at
        starting may have failed to add itself to the waitqueue
      - userspace may think the backend was set successfully even when the
        polling failed.
      
      Solve this by:
      
      - check poll->wqh before trying to remove from waitqueue
      - report polling errors in vhost_poll_start() and tx_poll_start(); the
        return value is checked and returned when userspace wants to set the
        backend
      
      After this fix there could still be a polling failure after the backend
      is set; it will be addressed by the next patch.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
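The two halves of the fix can be sketched together (hedged, simplified stand-in types and names, not the kernel API): the start path reports failure instead of swallowing it, and the stop path removes the entry from the waitqueue only when a start actually added it.

```c
#include <stddef.h>

struct waitqueue { int broken; };     /* broken != 0 makes the add fail */

struct vq_poll { struct waitqueue *wqh; };

static int add_to_waitqueue(struct vq_poll *p, struct waitqueue *wqh)
{
    if (wqh->broken)
        return -1;                    /* e.g. the backend's poll() failed */
    p->wqh = wqh;                     /* record registration only on success */
    return 0;
}

int vq_poll_start(struct vq_poll *p, struct waitqueue *wqh)
{
    return add_to_waitqueue(p, wqh);  /* error now propagates upward */
}

void vq_poll_stop(struct vq_poll *p)
{
    if (p->wqh)                       /* never remove what was never added */
        p->wqh = NULL;
}

/* The ioctl path that sets the backend can now fail cleanly. */
int set_backend(struct vq_poll *p, struct waitqueue *wq)
{
    return vq_poll_start(p, wq);
}
```

With this, a stop after a failed start is a no-op rather than a waitqueue removal of an entry that was never added, which was the crash the commit describes.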
  11. 06 Dec, 2012 1 commit
    • vhost: avoid backend flush on vring ops · 935cdee7
      Michael S. Tsirkin authored
      vring changes already do a flush internally where appropriate, so we do
      not need a second flush.
      
      It's currently not very expensive but a follow-up patch makes flush more
      heavy-weight, so remove the extra flush here to avoid regressing
      performance if call or kick fds are changed on data path.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  12. 28 Nov, 2012 1 commit
  13. 03 Nov, 2012 4 commits
  14. 27 Sep, 2012 1 commit
  15. 21 Jul, 2012 1 commit
    • vhost: make vhost work queue visible · 163049ae
      Stefan Hajnoczi authored
      The vhost work queue allows processing to be done in vhost worker thread
      context, which uses the owner process mm.  Access to the vring and guest
      memory is typically only possible from vhost worker context so it is
      useful to allow work to be queued directly by users.
      
      Currently vhost_net only uses the poll wrappers which do not expose the
      work queue functions.  However, for tcm_vhost (vhost_scsi) it will be
      necessary to queue custom work.
      Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
      Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
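The interface being exposed can be modeled roughly as follows (hypothetical simplified names and no locking; the real API adds a spinlock, flushing, and a dedicated worker thread): a user such as tcm_vhost initializes a work item with its own function and queues it directly, and the worker thread later invokes that function.

```c
#include <stddef.h>

struct vhost_work;
typedef void (*work_fn_t)(struct vhost_work *work);

struct vhost_work {
    struct vhost_work *next;
    work_fn_t fn;                /* caller-supplied work function */
};

struct vhost_dev {
    struct vhost_work *head, *tail;
};

void vhost_work_init(struct vhost_work *w, work_fn_t fn)
{
    w->next = NULL;
    w->fn = fn;
}

void vhost_work_queue(struct vhost_dev *d, struct vhost_work *w)
{
    if (d->tail)
        d->tail->next = w;
    else
        d->head = w;
    d->tail = w;
}

/* What the worker thread does with each queued item. */
void worker_run_one(struct vhost_dev *d)
{
    struct vhost_work *w = d->head;
    if (!w)
        return;
    d->head = w->next;
    if (!d->head)
        d->tail = NULL;
    w->fn(w);
}

/* Example of the custom work a driver might queue (illustrative): */
static int ran;
static void my_work(struct vhost_work *w) { (void)w; ran++; }
```

Because the work function runs in worker context, it gets the owner process mm described above without the caller arranging anything.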
  16. 27 Jun, 2012 1 commit
  17. 02 May, 2012 1 commit
  18. 13 Apr, 2012 1 commit
  19. 20 Mar, 2012 1 commit
  20. 28 Feb, 2012 2 commits
    • vhost: fix release path lockdep checks · ea5d4046
      Michael S. Tsirkin authored
      We shouldn't hold any locks on release path. Pass a flag to
      vhost_dev_cleanup to use the lockdep info correctly.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Tested-by: Sasha Levin <levinsasha928@gmail.com>
    • vhost: don't forget to schedule() · d550dda1
      Nadav Har'El authored
      This is a tiny, but important, patch to vhost.
      
      Vhost's worker thread only called schedule() when it had no work to do, and
      it wanted to go to sleep. But if there's always work to do, e.g., the guest
      is running a network-intensive program like netperf with small message sizes,
      schedule() was *never* called. This had several negative implications (on
      non-preemptive kernels):
      
       1. Passing time was not properly accounted to the "vhost" process (ps and
          top would wrongly show it using zero CPU time).
      
       2. Sometimes error messages about RCU timeouts would be printed, if the
          core running the vhost thread didn't schedule() for a very long time.
      
       3. Worst of all, a vhost thread would "hog" the core. If several vhost
          threads need to share the same core, typically one would get most of the
          CPU time (and its associated guest most of the performance), while the
          others hardly get any work done.
      
      The trivial solution is to add
      
      	if (need_resched())
      		schedule();
      
      after doing every piece of work. This will not do the heavy schedule()
      all the time, just when the timer interrupt has decided a reschedule is
      warranted (so need_resched() returns true).
      
      Thanks to Abel Gordon for this patch.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
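A userspace model of the fixed worker loop (need_resched() and schedule() here are test-driven stubs, not the kernel functions; later kernels spell this exact pattern cond_resched()): the yield happens after each piece of work, but only when the scheduler has asked for it.

```c
#include <stdbool.h>

/* Stubs standing in for the scheduler interface. */
static bool resched_flag;   /* "the timer tick asked for a reschedule" */
static int scheduled;       /* counts simulated schedule() calls */

static bool need_resched(void) { return resched_flag; }
static void schedule(void) { scheduled++; resched_flag = false; }

/* Process n work items; yield the core only when asked to. */
void worker_loop(int n)
{
    for (int i = 0; i < n; i++) {
        /* ... do one piece of work ... */
        if (need_resched())
            schedule();
    }
}
```

On a busy queue the old loop never reached its sleep-time schedule(), which is exactly the hog the commit message describes; this check bounds how long the thread can run uninterrupted.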
  21. 19 Jul, 2011 4 commits
  22. 18 Jul, 2011 1 commit
    • vhost: vhost TX zero-copy support · bab632d6
      Michael S. Tsirkin authored
      From: Shirley Ma <mashirle@us.ibm.com>
      
      This adds experimental zero copy support in vhost-net,
      disabled by default. To enable, set
      experimental_zcopytx module option to 1.
      
      This patch maintains the outstanding userspace buffers in the sequence
      they are delivered to vhost. An outstanding userspace buffer is marked
      as done once the lower device has finished the DMA; this is monitored
      through the last-reference kfree_skb callback. Two buffer indices are
      used for this purpose.
      
      The vhost-net device passes the userspace buffer info to the lower
      device's skb through message control. DMA done-status checks and guest
      notification are handled by handle_tx: in the worst case all buffers in
      the vq are in pending/done status, so we need to notify the guest to
      release DMA-done buffers before we can get any new buffers from the vq.
      
      One known problem is that if the guest stops submitting
      buffers, buffers might never get used until some
      further action, e.g. device reset. This does not
      seem to affect Linux guests.
      Signed-off-by: Shirley <xma@us.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
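The two-index bookkeeping can be sketched as follows (a hypothetical fixed-size ring; field and function names here are illustrative, not the real vq fields): one index records where the next in-flight userspace buffer goes, the other trails it and advances only over completed buffers, so buffers are reclaimed in the order they were delivered.

```c
#include <stdbool.h>

#define RING 8                  /* illustrative ring size, not the vq size */

static bool dma_done[RING];
static int upend_idx, done_idx; /* next pending slot / first unreclaimed slot */

int submit_buf(void)            /* returns the slot used, or -1 if full */
{
    int next = (upend_idx + 1) % RING;
    if (next == done_idx)
        return -1;              /* all slots pending/done: the guest must be
                                   notified to release buffers first */
    int slot = upend_idx;
    dma_done[slot] = false;
    upend_idx = next;
    return slot;
}

void dma_complete(int slot)     /* the last-reference skb callback would
                                   trigger this in the real code */
{
    dma_done[slot] = true;
}

int reclaim(void)               /* how many buffers handle_tx can signal */
{
    int n = 0;
    while (done_idx != upend_idx && dma_done[done_idx]) {
        done_idx = (done_idx + 1) % RING;
        n++;
    }
    return n;
}
```

Note that a completion in the middle of the window (buffer b done before buffer a) is not reclaimable until everything before it completes; that in-order constraint is what makes two plain indices sufficient.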
  23. 30 May, 2011 1 commit
  24. 06 May, 2011 1 commit
  25. 08 Mar, 2011 2 commits
  26. 10 Jan, 2011 1 commit
  27. 09 Dec, 2010 1 commit