1. 22 Sep, 2014 1 commit
    • async: aio_context_new(): Handle event_notifier_init failure · 2f78e491
      Chrysostomos Nanakos authored
      On a system with a low limit of open files, initialization of the
      event notifier can fail and QEMU exits without printing any error
      information to the user.
      
      The problem can be easily reproduced by enforcing a low limit of open
      files and starting QEMU with enough I/O threads to hit this limit.
      
      The same problem arises, without the creation of I/O threads, while
      QEMU initializes the main event loop, if an even lower limit of open
      files is enforced.
      
      This commit adds an error message on failure:
      
       # qemu [...] -object iothread,id=iothread0 -object iothread,id=iothread1
       qemu: Failed to initialize event notifier: Too many open files in system
      Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
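      For illustration only: the failing call is event_notifier_init(), and the fix
      amounts to checking its return value and reporting a message instead of exiting
      silently. A minimal self-contained sketch of that pattern, using a plain eventfd
      as a stand-in for QEMU's EventNotifier (names and structure are illustrative,
      not QEMU's actual code):

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/eventfd.h>

        /* Illustrative stand-in for QEMU's EventNotifier: a single eventfd. */
        typedef struct { int fd; } EventNotifierSketch;

        /* Return 0 on success, -errno on failure. */
        static int event_notifier_init_sketch(EventNotifierSketch *e)
        {
            e->fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
            return e->fd >= 0 ? 0 : -errno;
        }

        int main(void)
        {
            EventNotifierSketch notifier;
            int ret = event_notifier_init_sketch(&notifier);

            if (ret < 0) {
                /* Surface the reason (e.g. EMFILE/ENFILE) instead of dying silently. */
                fprintf(stderr, "Failed to initialize event notifier: %s\n",
                        strerror(-ret));
                return 1;
            }
            return 0;
        }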
  2. 29 Aug, 2014 3 commits
    • AioContext: introduce aio_prepare · a3462c65
      Paolo Bonzini authored
      This will be used to implement socket polling on Windows.
      On Windows, select() and g_poll() are completely different;
      sockets are polled with select() before calling g_poll(), and the
      g_poll() call must be non-blocking if select() says a socket is
      ready.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
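      The key idea is to probe socket readiness with a zero-timeout select() during
      the prepare phase and force the following poll to be non-blocking if anything is
      already readable. A minimal sketch of that decision, written against POSIX
      select() for brevity (QEMU's actual code uses the winsock variants; the function
      name here is hypothetical):

        #include <sys/select.h>

        /*
         * Probe one socket with a zero-timeout select() and return the timeout
         * to pass to the subsequent poll: 0 (non-blocking) if the socket is
         * already readable, otherwise the caller's blocking timeout.
         */
        static int compute_poll_timeout(int sockfd, int blocking_timeout_ms)
        {
            fd_set rfds;
            struct timeval tv = { 0, 0 };   /* do not block inside select() */

            FD_ZERO(&rfds);
            FD_SET(sockfd, &rfds);

            if (select(sockfd + 1, &rfds, NULL, NULL, &tv) > 0) {
                return 0;                   /* ready: g_poll() must not block */
            }
            return blocking_timeout_ms;
        }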
    • AioContext: export and use aio_dispatch · e4c7e2d1
      Paolo Bonzini authored
      So far, aio_poll's scheme was dispatch/poll/dispatch, where
      the first dispatch phase was used only in the GSource case in
      order to avoid a blocking poll.  Earlier patches changed it to
      dispatch/prepare/poll/dispatch, where prepare is aio_compute_timeout.
      
      By making aio_dispatch public, we can remove the first dispatch
      phase altogether, so that both aio_poll and the GSource use the same
      prepare/poll/dispatch scheme.
      
      This patch breaks the invariant that aio_poll(..., true) will not block
      the first time it returns false.  This used to be fundamental for
      qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}" but
      no code in QEMU relies on this invariant anymore.  The return value
      of aio_poll() is now comparable to that of g_main_context_iteration().
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
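      The resulting shape of one iteration, shared by aio_poll() and the GSource
      path, can be pictured as prepare/poll/dispatch with no leading dispatch pass.
      A rough sketch under that assumption (the callback structure is illustrative,
      not QEMU's AioContext):

        #include <poll.h>
        #include <stdbool.h>

        /* Illustrative callbacks standing in for the AioContext internals. */
        typedef struct {
            int  (*compute_timeout)(void *opaque);   /* prepare phase        */
            bool (*dispatch)(void *opaque);          /* run ready handlers   */
            struct pollfd *fds;
            int nfds;
            void *opaque;
        } LoopOps;

        /*
         * One iteration of the prepare/poll/dispatch scheme.  The return value
         * says whether any handler ran, comparable to g_main_context_iteration().
         */
        static bool loop_iteration(LoopOps *ops, bool blocking)
        {
            /* prepare: pick the timeout (0 when we must not block) */
            int timeout = blocking ? ops->compute_timeout(ops->opaque) : 0;

            /* poll: wait for file descriptors to become ready */
            poll(ops->fds, ops->nfds, timeout);

            /* dispatch: run handlers for whatever became ready */
            return ops->dispatch(ops->opaque);
        }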
    • AioContext: take bottom halves into account when computing aio_poll timeout · 845ca10d
      Paolo Bonzini authored
      Right now, QEMU invokes aio_bh_poll before the "poll" phase
      of aio_poll.  It is simpler to do it afterwards and skip the
      "poll" phase altogether when the OS-dependent parts of AioContext
      are invoked from GSource.  This way, AioContext behaves more
      consistently whether it is used as a GSource or stand-alone.
      
      As a start, take bottom halves into account when computing the
      poll timeout.  If a bottom half is ready, do a non-blocking
      poll.  As a side effect, this makes idle bottom halves work
      with aio_poll; an improvement, but not really an important
      one since they are deprecated.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
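      A minimal sketch of the timeout rule described above, using an illustrative
      bottom-half list rather than QEMU's actual aio_compute_timeout() (the 10 ms cap
      for idle bottom halves follows QEMU's convention, but treat the details as an
      assumption):

        #include <stdbool.h>

        typedef struct BHSketch {
            struct BHSketch *next;
            bool scheduled;   /* ready to run on the next iteration */
            bool idle;        /* idle BHs may be delayed rather than run immediately */
        } BHSketch;

        /* Return the poll timeout in milliseconds; 0 forces a non-blocking poll. */
        static int compute_timeout_ms(BHSketch *bh_list, int default_timeout_ms)
        {
            int timeout = default_timeout_ms;

            for (BHSketch *bh = bh_list; bh; bh = bh->next) {
                if (!bh->scheduled) {
                    continue;
                }
                if (bh->idle) {
                    timeout = timeout < 10 ? timeout : 10;  /* idle BH: cap at ~10 ms */
                } else {
                    return 0;   /* a ready BH means the poll must not block */
                }
            }
            return timeout;
        }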
  3. 09 Jul, 2014 1 commit
    • AioContext: speed up aio_notify · 0ceb849b
      Paolo Bonzini authored
      In many cases, the call to event_notifier_set in aio_notify is unnecessary.
      In particular, if we are executing aio_dispatch, or if aio_poll is not
      blocking, we know that we will soon get to the next loop iteration (if
      necessary); the thread that hosts the AioContext's event loop does not
      need any nudging.
      
      The patch includes a Promela formal model that shows that this really
      works and does not need any further complication such as generation
      counts.  It needs a memory barrier though.
      
      The generation counts are not needed because any change to
      ctx->dispatching after the memory barrier is okay for aio_notify.
      If it changes from zero to one, skipping event_notifier_set is the
      right thing to do.  If it changes from one to zero, the
      event_notifier_set call is unnecessary but harmless.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
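      The optimization hinges on ordering: the event-loop thread publishes a
      dispatching flag before it looks for new work, and aio_notify() checks that flag
      only after making its own work visible, with a memory barrier between the two
      steps on both sides. A rough C11 sketch of the protocol (names are illustrative;
      the Promela model in the commit is the real correctness argument):

        #include <stdatomic.h>
        #include <stdbool.h>

        static atomic_bool dispatching;   /* true while dispatch runs or poll won't block */
        static atomic_bool notifier;      /* stand-in for the EventNotifier */

        /* Event-loop side: announce we are dispatching before scanning for work.
         * The seq_cst store doubles as the memory barrier the commit mentions. */
        static void begin_dispatch(void)
        {
            atomic_store(&dispatching, true);
        }

        /* Waker side: make the new work visible first, then kick the loop only
         * if it might otherwise block waiting on the notifier. */
        static void aio_notify_sketch(void)
        {
            /* ... enqueue the work item before this check ... */
            if (!atomic_load(&dispatching)) {
                atomic_store(&notifier, true);   /* event_notifier_set() stand-in */
            }
        }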
  4. 04 Jun, 2014 1 commit
  5. 13 Mar, 2014 1 commit
    • aio: add aio_context_acquire() and aio_context_release() · 98563fc3
      Stefan Hajnoczi authored
      It can be useful to run an AioContext from a thread which normally does
      not "own" the AioContext.  For example, request draining can be
      implemented by acquiring the AioContext and looping aio_poll() until all
      requests have been completed.
      
      The following pattern should work:
      
        /* Event loop thread */
        while (running) {
            aio_context_acquire(ctx);
            aio_poll(ctx, true);
            aio_context_release(ctx);
        }
      
        /* Another thread */
        aio_context_acquire(ctx);
        bdrv_read(bs, 0x1000, buf, 1);
        aio_context_release(ctx);
      
      This patch implements aio_context_acquire() and aio_context_release().
      
      Note that existing aio_poll() callers do not need to worry about
      acquiring and releasing - it is only needed when multiple threads will
      call aio_poll() on the same AioContext.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
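      The usage pattern above is from the commit message itself; as a rough
      illustration, acquire/release can be pictured as a lock around the context.
      This is only a minimal stand-in, with the same calling pattern but none of the
      recursion or fairness of the real implementation:

        #include <pthread.h>

        /* Illustrative stand-in for the locking part of AioContext.  The mutex
         * must be initialized with pthread_mutex_init() before first use. */
        typedef struct {
            pthread_mutex_t lock;
        } AioCtxSketch;

        static void aio_context_acquire_sketch(AioCtxSketch *ctx)
        {
            pthread_mutex_lock(&ctx->lock);     /* block until we own the context */
        }

        static void aio_context_release_sketch(AioCtxSketch *ctx)
        {
            pthread_mutex_unlock(&ctx->lock);   /* let other threads run aio_poll() */
        }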
  6. 22 Aug, 2013 3 commits
  7. 19 Aug, 2013 1 commit
  8. 18 Jul, 2013 1 commit
  9. 15 Mar, 2013 1 commit
    • aio: add a ThreadPool instance to AioContext · 9b34277d
      Stefan Hajnoczi authored
      This patch adds a ThreadPool to AioContext.  It's possible that some
      AioContext instances will never use the ThreadPool, so defer creation
      until aio_get_thread_pool().
      
      The reason AioContext should own the ThreadPool is that the
      ThreadPool is bound to the AioContext instance in which the work item's
      callback function is invoked.  It doesn't make sense to keep the
      ThreadPool pointer anywhere other than in AioContext.  For example,
      block/raw-posix.c can get its AioContext's ThreadPool and submit work.
      
      Special note about headers: I used struct ThreadPool in aio.h because
      there is a circular dependency if aio.h includes thread-pool.h.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
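      A minimal sketch of the deferred-creation idea: the pool pointer lives in the
      context and is only allocated on first use, so AioContext instances that never
      submit work pay nothing. Types and the allocation here are illustrative, not
      QEMU's:

        #include <stdlib.h>

        /* Minimal stand-ins for ThreadPool and the relevant part of AioContext. */
        typedef struct { int unused; } ThreadPoolSketch;

        typedef struct {
            ThreadPoolSketch *thread_pool;   /* NULL until first use */
        } AioCtxSketch;

        /* Lazily create the pool the first time somebody asks for it. */
        static ThreadPoolSketch *aio_get_thread_pool_sketch(AioCtxSketch *ctx)
        {
            if (!ctx->thread_pool) {
                ctx->thread_pool = calloc(1, sizeof(*ctx->thread_pool));
            }
            return ctx->thread_pool;
        }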
  10. 21 Feb, 2013 1 commit
  11. 19 Dec, 2012 2 commits
  12. 11 Dec, 2012 1 commit
  13. 12 Nov, 2012 1 commit
  14. 30 Oct, 2012 6 commits
  15. 01 May, 2012 1 commit
  16. 26 Apr, 2012 1 commit
  17. 21 Oct, 2011 1 commit
  18. 06 Sep, 2011 1 commit
    • async: Allow nested qemu_bh_poll calls · 648fb0ea
      Kevin Wolf authored
      QEMU may segfault when a BH handler first deletes a BH and then (possibly
      indirectly) calls a nested qemu_bh_poll(). This is because the inner instance
      frees the BH and removes it from the list that the outer one is still processing.
      
      This patch deletes BHs only in the outermost qemu_bh_poll instance.
      
      Commit 7887f620 already tried to achieve the same, but it assumed that the BH
      handler would only delete its own BH. With a nested qemu_bh_poll(), this isn't
      guaranteed, so that commit wasn't enough. Hope this one fixes it for real.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
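      The fix can be pictured as a nesting counter: deletion only marks a BH, and the
      unlink-and-free pass runs when the outermost poll unwinds, so an inner call can
      never free a node an outer loop is still walking. A simplified sketch of that
      idea (illustrative, not the exact QEMU code):

        #include <stdbool.h>
        #include <stdlib.h>

        typedef struct BHSketch {
            struct BHSketch *next;
            void (*cb)(void *opaque);
            void *opaque;
            bool scheduled;
            bool deleted;            /* marked for deletion, freed later */
        } BHSketch;

        static BHSketch *bh_list;
        static int nesting;          /* depth of nested bh_poll_sketch() calls */

        static void bh_poll_sketch(void)
        {
            nesting++;

            for (BHSketch *bh = bh_list; bh; bh = bh->next) {
                if (bh->scheduled && !bh->deleted) {
                    bh->scheduled = false;
                    bh->cb(bh->opaque);          /* may delete BHs or recurse */
                }
            }

            nesting--;

            /* Only the outermost instance unlinks and frees deleted BHs. */
            if (nesting == 0) {
                BHSketch **p = &bh_list;
                while (*p) {
                    if ((*p)->deleted) {
                        BHSketch *dead = *p;
                        *p = dead->next;
                        free(dead);
                    } else {
                        p = &(*p)->next;
                    }
                }
            }
        }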
  19. 20 Aug, 2011 1 commit
  20. 02 Aug, 2011 1 commit
    • async: Remove AsyncContext · 384acbf4
      Kevin Wolf authored
      The purpose of AsyncContexts was to protect qcow and qcow2 against reentrancy
      during an emulated bdrv_read/write (which includes a qemu_aio_wait() call and,
      if it weren't for AsyncContexts, could run AIO callbacks of different requests).
      
      Now both qcow and qcow2 are protected by CoMutexes and AsyncContexts can be
      removed.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  21. 15 Jun, 2011 1 commit
  22. 27 Oct, 2009 2 commits