1. 31 Oct, 2008 4 commits
    • Implement "info chardev" command. (Gerd Hoffmann) · 5ccfae10
      aliguori authored
      This patch makes qemu keep track of the character devices in use and
      implements a "info chardev" monitor command to print a list.
      qemu_chr_open() sticks the devices into a linked list now.  It got a new
      argument (label), so there is a name for each device.  It also assigns a
      filename to each character device.  By default it just copies the
      filename passed in.  Individual drivers can fill in something else,
      though.  qemu_chr_open_pty() sets the filename to the name of the
      pseudo tty.
      Output looks like this:
        (qemu) info chardev
        monitor: filename=unix:/tmp/run.sh-26827/monitor,server,nowait
        serial0: filename=unix:/tmp/run.sh-26827/console,server
        serial1: filename=pty:/dev/pts/5
        parallel0: filename=vc:640x480
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5575 c046a42c-6fe2-441c-8c8c-71466251a162
    • fix bdrv_aio_read API breakage in qcow2 (Andrea Arcangeli) · 1490791f
      aliguori authored
      I noticed that qemu_aio_flush was doing nothing at all: a flood of
      cmd_writeb commands was executed, each leading to a no-op invocation
      of qemu_aio_flush.
      In short all 'memset;goto redo' places must be fixed to use the bh and
      not to call the callback in the context of bdrv_aio_read or the
      bdrv_aio_read model falls apart. Reading from qcow2 holes is possible
      with physical readahead (a kind of breada in the Linux buffer cache).
      This is needed at least for scsi, ide is lucky (or it has been
      band-aided against this API breakage by fixing the symptom and not the
      real bug).
      The same bug exists in qcow of course; it can be fixed later as it's
      less critical.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5574 c046a42c-6fe2-441c-8c8c-71466251a162
    • Make DMA bottom-half driven (v2) · 492c30af
      aliguori authored
      The current DMA routines are driven by a call in main_loop_wait() after
      every iteration of the main loop.
      This patch converts the DMA code to be driven by a constantly rescheduled
      bottom half.  The advantage of using a scheduled bottom half is that we can
      stop scheduling the bottom half when no DMA channels are runnable.  This
      means we can potentially detect this case and sleep longer in the main loop.
      The only two architectures implementing DMA_run() are cris and i386.  For cris,
      I converted it to a simple repeating bottom half.  I've only compile tested
      this as cris does not seem to work on a 64-bit host.  It should be functionally
      identical to the previous implementation so I expect it to work.
      For x86, I've made sure to only fire the DMA bottom half if there is a DMA
      channel that is runnable.  The effect of this is that unless you're using sb16
      or a floppy disk, the DMA bottom half never fires.
      You should probably test this, malc.  My own benchmarks actually show a
      slight improvement, but it's possible the change in timing could affect
      your demos.
      Since v1, I've changed the code to use a BH instead of a timer.  cris at least
      seems to depend on faster than 10ms polling.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5573 c046a42c-6fe2-441c-8c8c-71466251a162
    • Make bottom halves more robust · 1b435b10
      aliguori authored
      Bottom halves are supposed to not complete until the next iteration of the main
      loop.  This is very important to ensure that guests cannot cause stack
      overflows in the block driver code.  Right now, if you attempt to schedule a
      bottom half within a bottom half callback, you will enter an infinite loop.
      This patch uses the same logic that we use for the IOHandler loop to make the
      bottom half processing robust in list manipulation while in a callback.
      This patch also introduces idle scheduling for bottom halves.  qemu_bh_poll()
      returns an indication of whether any bottom halves were successfully executed.
      qemu_aio_wait() uses this to immediately return if a bottom half was executed
      instead of waiting for a completion notification.
      qemu_bh_schedule_idle() works around this by not reporting that the
      callback has run in the qemu_bh_poll loop.  qemu_aio_wait() probably
      needs some refactoring, but that would require a larger code audit.
      Idle scheduling seems like a good compromise until then.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5572 c046a42c-6fe2-441c-8c8c-71466251a162
  2. 29 Oct, 2008 2 commits
  3. 28 Oct, 2008 9 commits
  4. 27 Oct, 2008 15 commits
  5. 26 Oct, 2008 9 commits
  6. 25 Oct, 2008 1 commit