  1. 27 Oct, 2014 1 commit
  2. 26 Sep, 2014 1 commit
  3. 22 Sep, 2014 1 commit
    • async: aio_context_new(): Handle event_notifier_init failure · 2f78e491
      Chrysostomos Nanakos authored
      On a system with a low limit of open files, the initialization of the
      event notifier can fail, and QEMU exits without printing any error
      information to the user.
      
      The problem can be easily reproduced by enforcing a low limit of open
      files and starting QEMU with enough I/O threads to hit this limit.
      
      The same problem arises, even without creating I/O threads, when QEMU
      initializes the main event loop while an even lower limit of open
      files is enforced.
      
      This commit adds an error message on failure:
      
       # qemu [...] -object iothread,id=iothread0 -object iothread,id=iothread1
       qemu: Failed to initialize event notifier: Too many open files in system
      Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      2f78e491
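
      The commit above turns a silent exit into an explicit error message. As a
      rough illustration of the failure mode (not QEMU code; the helper name
      notifier_init() and the chosen limit are made up for this sketch), the
      following Linux-only C program lowers RLIMIT_NOFILE so that eventfd()
      fails, then reports the failure instead of exiting silently:

        /* Illustrative sketch only: shows eventfd() failing under a low fd
         * limit and the error being reported, mirroring the behaviour the
         * commit adds. */
        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/eventfd.h>
        #include <sys/resource.h>

        /* Hypothetical stand-in for event_notifier_init(): 0 or -errno. */
        static int notifier_init(int *fd_out)
        {
            int fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
            if (fd < 0) {
                return -errno;          /* e.g. -EMFILE when the limit is hit */
            }
            *fd_out = fd;
            return 0;
        }

        int main(void)
        {
            /* Enforce a very low open-file limit: only stdin/stdout/stderr fit. */
            struct rlimit rl = { .rlim_cur = 3, .rlim_max = 3 };
            setrlimit(RLIMIT_NOFILE, &rl);

            int fd, ret = notifier_init(&fd);
            if (ret < 0) {
                fprintf(stderr, "Failed to initialize event notifier: %s\n",
                        strerror(-ret));
                return 1;
            }
            return 0;
        }

      In QEMU itself the failure is propagated up to the caller of
      aio_context_new() rather than printed at the point of failure; the
      sketch only mirrors the message the user ends up seeing.
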
  4. 09 Jul, 2014 1 commit
  5. 13 Mar, 2014 1 commit
  6. 17 Sep, 2013 1 commit
  7. 22 Aug, 2013 3 commits
  8. 19 Aug, 2013 1 commit
  9. 12 Jun, 2013 1 commit
    • main-loop: do not include slirp/slirp.h, use libslirp.h instead · 520b6dd4
      Michael Tokarev authored
      The header slirp/slirp.h is an internal header for slirp, and
      main-loop.c does not use internals from there.  Instead, it uses
      public functions (slirp_update_timeout(), slirp_pollfds_fill(),
      etc.) which are declared in slirp/libslirp.h.
      
      Including slirp/slirp.h is somewhat dangerous since it redefines
      errno on WIN32, so any file including it may misbehave with respect
      to errno.
      
      Unfortunately, libslirp.h isn't self-contained: it needs the
      declaration of struct in_addr, which is provided by qemu/sockets.h.
      Instead of #including qemu/sockets.h before libslirp.h, it may be
      better to make the latter self-contained.
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      520b6dd4
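
      To illustrate the "self-contained header" point from the commit message,
      here is a hypothetical public header (the names libfoo.h and
      foo_add_hostfwd() are invented for this sketch) that pulls in the
      definition of struct in_addr itself, instead of requiring every includer
      to include a sockets header first:

        /* libfoo.h -- hypothetical public header, kept self-contained. */
        #ifndef LIBFOO_H
        #define LIBFOO_H

        #ifdef _WIN32
        #include <winsock2.h>           /* provides struct in_addr on Windows */
        #else
        #include <netinet/in.h>         /* provides struct in_addr on POSIX   */
        #endif

        /* Users can now include only libfoo.h; no include-order requirement. */
        void foo_add_hostfwd(struct in_addr host_addr, int host_port);

        #endif /* LIBFOO_H */
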
  10. 16 May, 2013 2 commits
  11. 05 Apr, 2013 1 commit
    • main-loop: drop the BQL if the I/O appears to be spinning · 893986fe
      Anthony Liguori authored
      The char-flow refactoring introduced a busy-wait that depended on
      an action from the VCPU thread.  However, the VCPU thread could
      never take that action, because the busy-wait starved the VCPU
      thread of the BQL: the mutex was never dropped while running select.
      
      Paolo doesn't want to drop this optimization for fear that we will
      stop detecting these busy waits.  I'm afraid to keep this optimization
      even with the busy-wait fixed because I think a similar problem can
      occur just with heavy I/O thread load manifesting itself as VCPU pauses.
      
      As a compromise, introduce an artificial timeout after a thousand
      iterations, but print a rate-limited warning when this happens.  This
      lets us still detect when this condition occurs without it being
      a fatal error.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      Message-id: 1365169560-11012-1-git-send-email-aliguori@us.ibm.com
      893986fe
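
      A minimal, self-contained sketch of the pattern the commit describes
      (the helper names, the warning text, and the 2-second warning interval
      are illustrative, not the exact QEMU implementation): count consecutive
      zero-timeout iterations, force a small artificial timeout once the
      counter passes the threshold, and print a rate-limited warning.

        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        #define MAX_MAIN_LOOP_SPIN 1000     /* iterations before intervening */

        static int64_t now_ns(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        /* Hypothetical helper: given the timeout the loop intends to use,
         * return the timeout it should actually use. */
        static int check_spin(int timeout_ms)
        {
            static unsigned spin_counter;
            static int64_t last_warn_ns;

            if (timeout_ms != 0) {
                spin_counter = 0;           /* the loop blocked: not spinning */
                return timeout_ms;
            }

            if (++spin_counter > MAX_MAIN_LOOP_SPIN) {
                int64_t now = now_ns();
                if (now - last_warn_ns > 2000000000LL) {   /* at most every 2 s */
                    fprintf(stderr,
                            "main-loop: WARNING: I/O thread spun for %d iterations\n",
                            MAX_MAIN_LOOP_SPIN);
                    last_warn_ns = now;
                }
                return 1;                   /* artificial 1 ms timeout */
            }
            return 0;
        }

        int main(void)
        {
            /* Simulate 1500 back-to-back non-blocking iterations. */
            for (int i = 0; i < 1500; i++) {
                (void)check_spin(0);
            }
            return 0;
        }
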
  12. 15 Mar, 2013 1 commit
  13. 21 Feb, 2013 6 commits
  14. 09 Jan, 2013 1 commit
    • Check return values from g_poll and select · 5e3bc735
      Fabien Chouteau authored
      The current implementation of os_host_main_loop_wait() on Windows
      returns 1 only when a g_poll() event occurs, because the return value
      of select() is overridden.  This is wrong, as we may skip a socket
      event, as shown in this example:
      
      1. select() returns 0
      2. g_poll() returns 1  (socket event occurs)
      3. os_host_main_loop_wait() returns 1
      4. qemu_iohandler_poll() sees no socket event because select() had
         returned before the event occurred
      5. select() returns 1
      6. g_poll() returns 0 (g_poll overrides select's return value)
      7. os_host_main_loop_wait() returns 0
      8. qemu_iohandler_poll() doesn't check for socket events because the
         return value of os_host_main_loop_wait() is zero.
      9. goto 5
      
      This patch uses one variable for each of these return values, so we
      no longer miss a select() event.
      
      Also, the call to select() is moved after g_poll(); this improves
      latency, as we no longer have to go through two os_host_main_loop_wait()
      calls to detect a socket event.
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      5e3bc735
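
      The shape of the fix can be sketched as follows (the function and
      parameter names are illustrative, not the exact QEMU code): g_poll()
      and select() each keep their own return variable, select() runs after
      g_poll(), and the caller is told about activity if either call saw an
      event.

        #include <glib.h>
        #include <sys/select.h>

        /* Illustrative sketch of the fixed pattern. */
        int host_main_loop_wait(GPollFD *poll_fds, guint n_poll_fds,
                                fd_set *rfds, fd_set *wfds, fd_set *xfds,
                                int nfds)
        {
            struct timeval tv0 = { 0, 0 };
            int g_poll_ret, select_ret;

            /* Wait on the glib descriptors first ... */
            g_poll_ret = g_poll(poll_fds, n_poll_fds, 1 /* ms */);

            /* ... then do a non-blocking select() on the socket descriptors,
             * after g_poll(), so a ready socket is seen in this same
             * iteration. */
            select_ret = select(nfds, rfds, wfds, xfds, &tv0);

            /* Separate variables: neither result can overwrite the other. */
            return (g_poll_ret > 0 || select_ret > 0) ? 1 : 0;
        }
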
  15. 19 Dec, 2012 2 commits
  16. 11 Dec, 2012 1 commit
  17. 02 Nov, 2012 2 commits
  18. 30 Oct, 2012 7 commits
  19. 01 May, 2012 1 commit
  20. 28 Apr, 2012 1 commit
  21. 26 Apr, 2012 1 commit
  22. 15 Apr, 2012 1 commit
  23. 07 Apr, 2012 2 commits