  1. 04 Feb, 2014 1 commit
  2. 13 Jan, 2014 3 commits
  3. 19 Nov, 2013 1 commit
  4. 28 Sep, 2013 3 commits
  5. 24 Sep, 2013 3 commits
  6. 03 Sep, 2013 1 commit
  7. 22 Aug, 2013 1 commit
  8. 20 Aug, 2013 2 commits
  9. 23 Jul, 2013 2 commits
  10. 12 Jul, 2013 1 commit
    • Force auto-convergence of live migration · 7ca1dfad
      Chegu Vinod authored
      If a user chooses to turn on the auto-converge migration capability,
      these changes detect the lack of convergence and throttle down the
      guest, i.e. force the VCPUs out of the guest for some duration and
      let the migration thread catch up, helping the migration converge.
      (A sketch of this throttling idea follows this entry.)
      
      Verified the convergence using the following:
       - Java Warehouse workload running on a 20VCPU/256G guest (~80% busy)
       - OLTP-like workload running on an 80VCPU/512G guest (~80% busy)
      
      Sample results with the Java warehouse workload (migrate speed set to 20Gb
      and migrate downtime set to 4 seconds):
      
       (qemu) info migrate
       capabilities: xbzrle: off auto-converge: off  <----
       Migration status: active
       total time: 1487503 milliseconds
       expected downtime: 519 milliseconds
       transferred ram: 383749347 kbytes
       remaining ram: 2753372 kbytes
       total ram: 268444224 kbytes
       duplicate: 65461532 pages
       skipped: 64901568 pages
       normal: 95750218 pages
       normal bytes: 383000872 kbytes
       dirty pages rate: 67551 pages
      
       ---
      
       (qemu) info migrate
       capabilities: xbzrle: off auto-converge: on   <----
       Migration status: completed
       total time: 241161 milliseconds
       downtime: 6373 milliseconds
       transferred ram: 28235307 kbytes
       remaining ram: 0 kbytes
       total ram: 268444224 kbytes
       duplicate: 64946416 pages
       skipped: 64903523 pages
       normal: 7044971 pages
       normal bytes: 28179884 kbytes
      Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      7ca1dfad
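      A rough, self-contained sketch of the throttling idea described in this
      entry, using hypothetical names (struct migration_stats, throttle_vcpus,
      NO_PROGRESS_LIMIT) rather than QEMU's actual code: when the guest dirties
      pages faster than the migration thread can send them for several
      iterations in a row, the VCPUs are briefly forced out of the guest so the
      transfer can catch up.

        /* Illustrative sketch only -- not QEMU code; all names are hypothetical. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NO_PROGRESS_LIMIT 4     /* non-converging iterations before throttling */

        struct migration_stats {
            uint64_t dirty_pages_rate;      /* pages dirtied per second by the guest */
            uint64_t transfer_pages_rate;   /* pages sent per second by migration */
            unsigned no_progress_count;     /* consecutive non-converging iterations */
        };

        /* Hypothetical hook: a real hypervisor would kick every VCPU out of guest
         * mode for roughly this long; in this sketch it only reports the event. */
        static void throttle_vcpus(unsigned milliseconds)
        {
            printf("throttling VCPUs for %u ms\n", milliseconds);
        }

        /* Called once per migration iteration when auto-converge is enabled. */
        static void autoconverge_check(struct migration_stats *s, bool enabled)
        {
            if (!enabled) {
                return;
            }
            if (s->dirty_pages_rate > s->transfer_pages_rate) {
                s->no_progress_count++;         /* guest is outrunning the transfer */
            } else {
                s->no_progress_count = 0;       /* converging again, reset */
            }
            if (s->no_progress_count >= NO_PROGRESS_LIMIT) {
                throttle_vcpus(50);             /* force the VCPUs out for a while */
                s->no_progress_count = 0;
            }
        }

        int main(void)
        {
            struct migration_stats s = { .dirty_pages_rate = 70000,
                                         .transfer_pages_rate = 50000 };
            for (int i = 0; i < 8; i++) {
                autoconverge_check(&s, true);   /* throttles every 4th iteration */
            }
            return 0;
        }

      In the sample output above, that is the difference between a migration that
      is still active after roughly 1487 seconds and one that completes in about
      241 seconds, at the cost of ~6.4 seconds of downtime.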
  11. 30 Jun, 2013 1 commit
  12. 28 Jun, 2013 1 commit
  13. 26 Jun, 2013 4 commits
  14. 14 Jun, 2013 4 commits
  15. 24 May, 2013 1 commit
  16. 29 Apr, 2013 3 commits
  17. 15 Apr, 2013 2 commits
  18. 08 Apr, 2013 1 commit
    • hw: move headers to include/ · 0d09e41a
      Paolo Bonzini authored
      Many of these should be cleaned up with proper qdev-/QOM-ification.
      Right now there are many catch-all headers in include/hw/ARCH depending
      on cpu.h, and this makes it necessary to compile these files per-target.
      However, fixing this does not belong in these patches.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0d09e41a
  19. 04 Apr, 2013 3 commits
  20. 26 Mar, 2013 2 commits
    • Use qemu_put_buffer_async for guest memory pages · 500f0061
      Orit Wasserman authored
      This removes an unneeded copy of guest memory pages. For the page header
      and device state we still copy the data to the static buffer; the other
      option is to allocate the memory on demand, which is more expensive.
      (A sketch of the copy-versus-queue idea follows this entry.)
      Signed-off-by: Orit Wasserman <owasserm@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      500f0061
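      A minimal, self-contained sketch of the pattern this entry relies on, using
      hypothetical names (out_stream, send_copied, send_queued, flush) rather than
      QEMU's QEMUFile API: small metadata is still memcpy'd into a staging buffer,
      while large guest pages are only queued by pointer and written out later,
      avoiding the intermediate copy.

        /* Minimal sketch, not QEMU code: all names here are hypothetical. */
        #include <assert.h>
        #include <string.h>
        #include <sys/uio.h>     /* struct iovec, writev */

        #define STAGING_SIZE 32768
        #define MAX_IOV      64

        struct out_stream {
            int fd;                             /* destination socket or file */
            unsigned char staging[STAGING_SIZE];
            size_t staged;                      /* bytes copied into staging */
            struct iovec iov[MAX_IOV];          /* pages queued by pointer */
            int iov_cnt;
        };

        /* Small data (page headers, device state): copy into the staging buffer,
         * so the caller may reuse its memory immediately. */
        static void send_copied(struct out_stream *s, const void *buf, size_t len)
        {
            assert(s->staged + len <= STAGING_SIZE);
            memcpy(s->staging + s->staged, buf, len);
            s->staged += len;
        }

        /* Guest memory pages: queue a pointer instead of copying.  The page must
         * stay valid until flush() -- this is the copy the patch avoids. */
        static void send_queued(struct out_stream *s, const void *page, size_t len)
        {
            assert(s->iov_cnt < MAX_IOV);
            s->iov[s->iov_cnt].iov_base = (void *)page;
            s->iov[s->iov_cnt].iov_len  = len;
            s->iov_cnt++;
        }

        /* Write the staged metadata first, then the queued pages, in one writev(). */
        static void flush(struct out_stream *s)
        {
            struct iovec all[MAX_IOV + 1];
            all[0].iov_base = s->staging;
            all[0].iov_len  = s->staged;
            memcpy(&all[1], s->iov, s->iov_cnt * sizeof(struct iovec));
            (void)writev(s->fd, all, s->iov_cnt + 1);
            s->staged  = 0;
            s->iov_cnt = 0;
        }

        int main(void)
        {
            static unsigned char page[4096];           /* stands in for a guest page */
            struct out_stream s = { .fd = 1 };         /* write to stdout for the demo */
            unsigned char header[8] = { 0 };           /* pretend page header */

            send_copied(&s, header, sizeof(header));   /* metadata: copied */
            send_queued(&s, page, sizeof(page));       /* guest page: queued, no copy */
            flush(&s);
            return 0;
        }

      The trade-off is the one the message describes: queuing by pointer avoids
      the copy, but the page has to remain valid until the queued data is
      actually written out.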
    • migration: use XBZRLE only after bulk stage · 5cc11c46
      Peter Lieven authored
      At the beginning of migration all pages are marked dirty and
      in the first round a bulk migration of all pages is performed.
      
      Currently all these pages are copied to the page cache regardless
      of whether they are frequently updated or not. This doesn't make sense,
      since most of these pages are never transferred again.
      
      This patch changes the XBZRLE transfer to only be used after
      the bulk stage has been completed. That means a page is added
      to the page cache the second time it is transferred, and XBZRLE
      can benefit from the third transfer onwards. (A sketch of this
      decision logic follows this entry.)
      
      Since the page cache is likely smaller than the number of pages,
      it is also likely that in the second round the page would already be
      missing from the cache, due to collisions during the bulk phase.
      
      On the other hand, a lot of unnecessary mallocs, memdups and frees
      are saved.
      
      The following results were taken earlier while executing
      the test program from docs/xbzrle.txt: (+) with the patch and (-)
      without. (Thanks to Eric Blake for reformatting and comments.)
      
      + total time: 22185 milliseconds
      - total time: 22410 milliseconds
      
      Shaved about 0.2 seconds, roughly 1%.
      
      + downtime: 29 milliseconds
      - downtime: 21 milliseconds
      
      Not sure why downtime seemed worse, but probably not the end of the world.
      
      + transferred ram: 706034 kbytes
      - transferred ram: 721318 kbytes
      
      Fewer bytes sent - good.
      
      + remaining ram: 0 kbytes
      - remaining ram: 0 kbytes
      + total ram: 1057216 kbytes
      - total ram: 1057216 kbytes
      + duplicate: 108556 pages
      - duplicate: 105553 pages
      + normal: 175146 pages
      - normal: 179589 pages
      + normal bytes: 700584 kbytes
      - normal bytes: 718356 kbytes
      
      Fewer normal bytes...
      
      + cache size: 67108864 bytes
      - cache size: 67108864 bytes
      + xbzrle transferred: 3127 kbytes
      - xbzrle transferred: 630 kbytes
      
      ...and more compressed pages sent - good.
      
      + xbzrle pages: 117811 pages
      - xbzrle pages: 21527 pages
      + xbzrle cache miss: 18750
      - xbzrle cache miss: 179589
      
      And very good improvement on the cache miss rate.
      
      + xbzrle overflow : 0
      - xbzrle overflow : 0
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Orit Wasserman <owasserm@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      5cc11c46
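      A minimal, self-contained sketch of the decision logic this entry describes,
      using hypothetical names (ram_bulk_stage, cache_lookup, cache_insert,
      send_raw_page, send_xbzrle_delta) rather than the actual QEMU functions:
      during the bulk stage every page is sent raw and nothing is cached; after
      the bulk stage, a page's next transfer inserts it into the cache, and the
      transfer after that can be sent as an XBZRLE delta against the cached copy.

        /* Minimal sketch, not QEMU code: every name here is hypothetical. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define PAGE_SIZE   4096
        #define CACHE_SLOTS 1024            /* tiny direct-mapped cache for the sketch */

        static uint8_t  cache_data[CACHE_SLOTS][PAGE_SIZE];
        static uint64_t cache_addr[CACHE_SLOTS];
        static bool     cache_valid[CACHE_SLOTS];

        /* True while the first full pass over guest RAM is still in progress. */
        static bool ram_bulk_stage = true;

        static uint8_t *cache_lookup(uint64_t addr)
        {
            unsigned slot = (addr / PAGE_SIZE) % CACHE_SLOTS;
            return (cache_valid[slot] && cache_addr[slot] == addr)
                       ? cache_data[slot] : NULL;
        }

        static void cache_insert(uint64_t addr, const uint8_t *page)
        {
            unsigned slot = (addr / PAGE_SIZE) % CACHE_SLOTS;   /* may evict on collision */
            memcpy(cache_data[slot], page, PAGE_SIZE);
            cache_addr[slot]  = addr;
            cache_valid[slot] = true;
        }

        /* Stand-ins for the transport: a real implementation would write to the
         * migration stream; here we only note what would be sent. */
        static void send_raw_page(uint64_t addr, const uint8_t *page)
        {
            (void)page;
            printf("raw page     @ 0x%llx\n", (unsigned long long)addr);
        }

        static void send_xbzrle_delta(uint64_t addr, const uint8_t *old_page,
                                      const uint8_t *new_page)
        {
            (void)old_page; (void)new_page;
            printf("xbzrle delta @ 0x%llx\n", (unsigned long long)addr);
        }

        /* The policy from the commit message: no caching during the bulk stage;
         * afterwards, cache on the second transfer, diff from the third onwards. */
        static void send_page(uint64_t addr, const uint8_t *page)
        {
            if (ram_bulk_stage) {
                send_raw_page(addr, page);          /* sent once, never cached */
                return;
            }
            uint8_t *cached = cache_lookup(addr);
            if (!cached) {
                cache_insert(addr, page);           /* second transfer: remember it */
                send_raw_page(addr, page);
            } else {
                send_xbzrle_delta(addr, cached, page);  /* third+ transfer: delta only */
                memcpy(cached, page, PAGE_SIZE);        /* keep the cache current */
            }
        }

        int main(void)
        {
            static uint8_t page[PAGE_SIZE];

            send_page(0x1000, page);                /* bulk stage: raw, not cached */
            ram_bulk_stage = false;                 /* bulk pass finished */
            send_page(0x1000, page);                /* dirtied again: raw + cached */
            send_page(0x1000, page);                /* dirtied again: xbzrle delta */
            return 0;
        }

      This matches the numbers above: with the patch far more pages are sent as
      XBZRLE deltas (117811 vs. 21527) and the cache-miss count drops sharply
      (18750 vs. 179589), while the bulk pass no longer pays for cache inserts
      it will never use.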