1. 31 Oct, 2008 7 commits
    • Move CharDriverState code out of vl.c · 6f97dba0
      aliguori authored

      The motivating goal behind this is to allow other tools to use the CharDriver
      code.  This patch is pure code motion except for the Makefile changes and the
      copyright/header in qemu-char.c.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5580 c046a42c-6fe2-441c-8c8c-71466251a162
    • Move some declarations around in the QEMU CharDriver code · 0e82f34d
      aliguori authored

      The goal of this series is to move the CharDriverState code out of vl.c and
      into its own file, qemu-char.c.  This patch moves around some declarations so
      the next patch can be pure code motion.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5579 c046a42c-6fe2-441c-8c8c-71466251a162
    • Increase default IO timeout from 10ms to 5s · 0a1af395
      aliguori authored

      With the recent changes to the main loop, we no longer have unconditional
      polling.  This means we can now sleep in select() for much longer than we
      previously did.  This patch increases our select() sleep time from 10ms to 5s,
      which is effectively unlimited since we're going to wake up sooner than that
      in almost all circumstances.
      
      With this patch, I see the number of wake-ups with an idle dynamic ticks guest
      drop from about 80 per second to about 15 per second.
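
      As a rough sketch of the idea (illustrative only, not the exact patch),
      the change amounts to growing the select() timeout from milliseconds to
      seconds:

        /* Sketch: a 5s select() timeout in place of the old 10ms one.
         * Timers and I/O handlers still wake the loop earlier via their
         * file descriptors, so 5s is effectively "sleep until work". */
        #include <sys/select.h>

        static void wait_for_io(int nfds, fd_set *rfds, fd_set *wfds)
        {
            struct timeval tv;

            tv.tv_sec  = 5;   /* was: tv.tv_sec = 0, tv.tv_usec = 10000 */
            tv.tv_usec = 0;

            select(nfds, rfds, wfds, NULL, &tv);
        }
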
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5578 c046a42c-6fe2-441c-8c8c-71466251a162
    • Main loop fixes/cleanup · 56f3a5d0
      aliguori authored
      Tidy up the win32 main loop bits, allow timeouts >= 1s, and force the timeout
      to 0 if there is a pending bottom half.
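
      As a minimal sketch of the timeout clamping (the helper name here is
      illustrative): a pending bottom half must run on the very next loop
      iteration, so the wait must not block at all.

        /* Sketch: compute the main loop wait in milliseconds. */
        static int compute_timeout_ms(int timeout_ms, int bh_pending)
        {
            if (bh_pending)
                return 0;          /* poll; a bottom half is waiting */
            return timeout_ms;     /* timeouts >= 1s are now permitted */
        }
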
      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5577 c046a42c-6fe2-441c-8c8c-71466251a162
    • Implement "info chardev" command. (Gerd Hoffmann) · 5ccfae10
      aliguori authored

      This patch makes qemu keep track of the character devices in use and
      implements an "info chardev" monitor command to print a list.
      
      qemu_chr_open() sticks the devices into a linked list now.  It got a new
      argument (label), so there is a name for each device.  It also assigns a
      filename to each character device.  By default it just copies the
      filename passed in.  Individual drivers can fill in something else
      though.  qemu_chr_open_pty() sets the filename to the name of the pseudo
      tty allocated.
      
      Output looks like this:
      
        (qemu) info chardev
        monitor: filename=unix:/tmp/run.sh-26827/monitor,server,nowait
        serial0: filename=unix:/tmp/run.sh-26827/console,server
        serial1: filename=pty:/dev/pts/5
        parallel0: filename=vc:640x480
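
      A minimal sketch of the bookkeeping this implies (the struct and
      function names here are illustrative, not the exact QEMU API):

        /* Sketch: each character device carries a label and a filename
         * and is linked into a global list that "info chardev" walks. */
        typedef struct CharDevSketch {
            char *label;     /* e.g. "serial0"; set from the new argument */
            char *filename;  /* defaults to a copy of the filename given */
            struct CharDevSketch *next;
        } CharDevSketch;

        static CharDevSketch *chardevs;  /* head of the device list */

        static void chardev_register(CharDevSketch *chr)
        {
            chr->next = chardevs;
            chardevs = chr;
        }
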
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5575 c046a42c-6fe2-441c-8c8c-71466251a162
    • Make DMA bottom-half driven (v2) · 492c30af
      aliguori authored

      The current DMA routines are driven by a call in main_loop_wait() after every
      select.
      
      This patch converts the DMA code to be driven by a constantly rescheduled
      bottom half.  The advantage of using a scheduled bottom half is that we can
      stop scheduling the bottom half when no DMA channels are runnable.  This
      means we can potentially detect this case and sleep longer in the main loop.
      
      The only two architectures implementing DMA_run() are cris and i386.  For cris,
      I converted it to a simple repeating bottom half.  I've only compile-tested
      this as cris does not seem to work on a 64-bit host.  It should be functionally
      identical to the previous implementation so I expect it to work.
      
      For x86, I've made sure to only fire the DMA bottom half if there is a DMA
      channel that is runnable.  The effect of this is that unless you're using sb16
      or a floppy disk, the DMA bottom half never fires.
      
      You should probably test this, malc.  My own benchmarks actually show a slight
      improvement, but it's possible the change in timing could affect your demos.
      
      Since v1, I've changed the code to use a BH instead of a timer.  cris at least
      seems to depend on faster than 10ms polling.
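
      A sketch of the rescheduling pattern (the DMA helpers below are
      hypothetical; qemu_bh_new()/qemu_bh_schedule() are the real BH API):

        /* Sketch: the bottom half re-arms itself only while a channel is
         * runnable, so an idle guest stops scheduling it entirely and the
         * main loop is free to sleep. */
        static QEMUBH *dma_bh;   /* assumed created once with qemu_bh_new() */

        static void dma_run_bh(void *opaque)
        {
            if (dma_channels_runnable()) {   /* hypothetical predicate */
                run_dma_transfers();         /* hypothetical worker */
                qemu_bh_schedule(dma_bh);    /* run again next iteration */
            }
        }
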
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5573 c046a42c-6fe2-441c-8c8c-71466251a162
    • Make bottom halves more robust · 1b435b10
      aliguori authored

      Bottom halves are supposed to not complete until the next iteration of the main
      loop.  This is very important to ensure that guests cannot cause stack
      overflows in the block driver code.  Right now, if you attempt to schedule a
      bottom half within a bottom half callback, you will enter an infinite loop.
      
      This patch uses the same logic that we use for the IOHandler loop to make the
      bottom half processing robust against list manipulation from within a callback.
      
      This patch also introduces idle scheduling for bottom halves.  qemu_bh_poll()
      returns an indication of whether any bottom halves were successfully executed.
      qemu_aio_wait() uses this to immediately return if a bottom half was executed
      instead of waiting for a completion notification.
      
      qemu_bh_schedule_idle() works around this by not reporting that the callback
      has run in the qemu_bh_poll() loop.  qemu_aio_wait() probably needs some
      refactoring, but that would require a larger code audit.  Idle scheduling
      seems like a good compromise.
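
      A sketch of the safe list walk (the types and names here are
      illustrative, not the actual QEMU structures):

        /* Sketch: capture `next` before running the callback so the walk
         * survives callbacks that schedule or delete bottom halves, and
         * count progress only for non-idle bottom halves. */
        typedef struct BHSketch {
            void (*cb)(void *opaque);
            void *opaque;
            int scheduled, idle;
            struct BHSketch *next;
        } BHSketch;

        static int bh_poll(BHSketch *first)
        {
            BHSketch *bh, *next;
            int ran = 0;

            for (bh = first; bh; bh = next) {
                next = bh->next;            /* callback may edit the list */
                if (bh->scheduled) {
                    bh->scheduled = 0;
                    if (!bh->idle)
                        ran = 1;            /* idle BHs don't report progress */
                    bh->idle = 0;
                    bh->cb(bh->opaque);
                }
            }
            return ran;
        }
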
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5572 c046a42c-6fe2-441c-8c8c-71466251a162
  2. 28 Oct, 2008 1 commit
  3. 25 Oct, 2008 2 commits
  4. 24 Oct, 2008 2 commits
  5. 14 Oct, 2008 1 commit
    • Expand cache= option and use write-through caching by default · 9f7965c7
      aliguori authored

      This patch changes the cache= option to accept none, writeback, or writethrough
      to control the host page cache behavior.  By default, writethrough caching is
      now used, which internally is implemented by using O_DSYNC to open the disk
      images.  When using -snapshot, writeback is used by default since data integrity
      is not at all an issue.

      cache=none has the same behavior as cache=off previously.  The latter syntax is
      still supported but is now deprecated.  I also cleaned up the O_DIRECT
      implementation to avoid many of the #ifdefs.
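
      A sketch of the resulting flag mapping (illustrative; the real change
      lives in the block layer's open path):

        #include <fcntl.h>
        #include <string.h>

        /* Sketch: translate a cache= mode into open(2) flags. */
        static int cache_flags(const char *mode, int flags)
        {
            if (!strcmp(mode, "none"))
                flags |= O_DIRECT;   /* bypass the host page cache */
            else if (!strcmp(mode, "writethrough"))
                flags |= O_DSYNC;    /* the new default */
            /* "writeback": no extra flags; the page cache buffers writes */
            return flags;
        }
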
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5485 c046a42c-6fe2-441c-8c8c-71466251a162
  6. 13 Oct, 2008 1 commit
  7. 12 Oct, 2008 3 commits
  8. 11 Oct, 2008 1 commit
  9. 08 Oct, 2008 1 commit
    • Fix IO performance regression in sparc · 9e472e10
      aliguori authored

      Replace signalfd with a signal handler/pipe pair.  With signalfd, there is
      no way to interrupt the CPU execution loop when a file descriptor becomes
      readable.  This results in a large performance regression in sparc
      emulation during bootup.

      This patch switches us to a signal handler/pipe pair, which was originally
      suggested by Ian Jackson.  The signal handler lets us interrupt the
      CPU emulation loop, while the write to a pipe lets us avoid the
      select/signal race condition.
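
      A sketch of the handler half of that pattern (the pipe setup and names
      are illustrative):

        #include <signal.h>
        #include <unistd.h>

        static int sig_pipe[2];   /* assumed created with pipe() at startup */

        /* Sketch: the handler interrupts the CPU loop and writes a byte so
         * that select(), which watches sig_pipe[0], wakes up too.  Doing
         * the write inside the handler is what avoids the select/signal
         * race. */
        static void sig_handler(int signum)
        {
            unsigned char b = (unsigned char)signum;
            write(sig_pipe[1], &b, 1);
        }
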
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5451 c046a42c-6fe2-441c-8c8c-71466251a162
  10. 07 Oct, 2008 1 commit
  11. 06 Oct, 2008 2 commits
    • Switch the memory savevm handler to be "live" · 475e4277
      aliguori authored

      This patch replaces the static memory savevm/loadvm handler with a "live" one.
      This handler is used even if performing a non-live migration.
      
      The key difference between this handler and the previous one is that each page
      is prefixed with the address of the page.  The QEMUFile rate limiting code, in
      combination with the live migration dirty tracking bits, is used to determine
      which pages should be sent and how many should be sent.
      
      The live save code "converges" when the number of dirty pages reaches a fixed
      amount.  Currently, this is 10 pages.  This is something that should eventually
      be derived from whatever the bandwidth limitation is.
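
      A sketch of the per-page framing (illustrative; the real code also
      consults the dirty bitmap and the QEMUFile rate limiter):

        /* Sketch: each page on the wire is prefixed with its guest address,
         * so pages can be sent in any order and re-sent when re-dirtied.
         * qemu_put_be64()/qemu_put_buffer() are the QEMUFile API. */
        static void put_page(QEMUFile *f, uint64_t addr,
                             const uint8_t *page, int page_size)
        {
            qemu_put_be64(f, addr);
            qemu_put_buffer(f, page, page_size);
        }
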
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5437 c046a42c-6fe2-441c-8c8c-71466251a162
    • Introduce v3 of savevm protocol · 9366f418
      aliguori authored

      The current savevm/loadvm protocol has some drawbacks.  It does not support
      progressive saving, which means it cannot be used for live checkpointing or
      migration.  The section sizes are 32-bit integers, which means it will not
      function when using more than 4GB of memory for a guest.  It attempts to seek
      within the output file, which means it cannot be streamed.  The current
      protocol is also pretty lax about how it supports forward compatibility.  If a
      saved section version is greater than what the restore code supports, the
      restore code generally treats the saved data as being in whatever version it
      supports.  This means that restoring a saved VM on an older version of QEMU
      will likely result in silent guest failure.
      
      This patch introduces a new version of the savevm protocol.  It has the
      following features:
      
       * Support for progressive save of sections (for live checkpoint/migration)
       * An asynchronous API for doing save
       * Support for interleaving multiple progressive save sections
         (for future support of memory hot-add/storage migration)
       * Fully streaming format
       * Strong section version checking
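
      A rough sketch of the handler shape this implies (the struct and field
      names are illustrative, not the actual v3 API):

        /* Sketch: a progressive ("live") save handler is driven in stages
         * rather than dumping its whole section at once, which is what
         * makes live checkpoint/migration possible over a streaming
         * transport. */
        typedef struct LiveSaveOpsSketch {
            void (*setup)(QEMUFile *f, void *opaque);
            int  (*iterate)(QEMUFile *f, void *opaque); /* 0 = more to send */
            void (*complete)(QEMUFile *f, void *opaque);
        } LiveSaveOpsSketch;
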
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5434 c046a42c-6fe2-441c-8c8c-71466251a162
  12. 05 Oct, 2008 1 commit
  13. 04 Oct, 2008 2 commits
  14. 02 Oct, 2008 1 commit
  15. 01 Oct, 2008 3 commits
  16. 30 Sep, 2008 3 commits
  17. 29 Sep, 2008 1 commit
  18. 28 Sep, 2008 2 commits
  19. 27 Sep, 2008 2 commits
  20. 25 Sep, 2008 1 commit
  21. 22 Sep, 2008 1 commit
  22. 20 Sep, 2008 1 commit