    AioContext: do not rely on aio_poll(ctx, true) result to end a loop · acfb23ad
    Paolo Bonzini authored
    
    
    Currently, whenever aio_poll(ctx, true) has completed all pending
    work it returns true *and* the next call to aio_poll(ctx, true)
    will not block.
    
    This invariant has its roots in qemu_aio_flush()'s implementation
    as "while (qemu_aio_wait()) {}".  However, qemu_aio_flush() does
    not exist anymore and bdrv_drain_all() is implemented differently;
    and this invariant is complicated to maintain and subtly different
    from the return value of GMainLoop's g_main_context_iteration.
    
    All calls to aio_poll(ctx, true) except one are guarded by a
    while() loop checking for a request to be incomplete, or a
    BlockDriverState to be idle.  The one remaining call (in
    iothread.c) uses this to delay the aio_context_release/acquire
    pair until the AioContext is quiescent, however:
    
    - we can do the same just by using non-blocking aio_poll,
      similar to how vl.c invokes main_loop_wait
    
    - it is buggy, because it does not ensure that the AioContext
      is released between an aio_notify and the next time the
      iothread goes to sleep.  This leads to hangs when stopping
      the dataplane thread.
    
    In the end, these semantics are a bad match for the current
    users of AioContext.  So modify that one exception in iothread.c,
    which also fixes the hangs, and change the testcase so that
    it uses the same idiom as the actual QEMU code.
    
    Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>