1. 20 Oct, 2014 3 commits
  2. 20 Aug, 2014 1 commit
    • block: Use g_new() & friends where that makes obvious sense · 5839e53b
      Markus Armbruster authored
      g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
      for two reasons.  One, it catches multiplication overflowing size_t.
      Two, it returns T * rather than void *, which lets the compiler catch
      more type errors.
      
      Patch created with Coccinelle, with two manual changes on top:
      
      * Add const to bdrv_iterate_format() to keep the types straight
      
      * Convert the allocation in bdrv_drop_intermediate(), which Coccinelle
        inexplicably misses
      
      Coccinelle semantic patch:
      
          @@
          type T;
          @@
          -g_malloc(sizeof(T))
          +g_new(T, 1)
          @@
          type T;
          @@
          -g_try_malloc(sizeof(T))
          +g_try_new(T, 1)
          @@
          type T;
          @@
          -g_malloc0(sizeof(T))
          +g_new0(T, 1)
          @@
          type T;
          @@
          -g_try_malloc0(sizeof(T))
          +g_try_new0(T, 1)
          @@
          type T;
          expression n;
          @@
          -g_malloc(sizeof(T) * (n))
          +g_new(T, n)
          @@
          type T;
          expression n;
          @@
          -g_try_malloc(sizeof(T) * (n))
          +g_try_new(T, n)
          @@
          type T;
          expression n;
          @@
          -g_malloc0(sizeof(T) * (n))
          +g_new0(T, n)
          @@
          type T;
          expression n;
          @@
          -g_try_malloc0(sizeof(T) * (n))
          +g_try_new0(T, n)
          @@
          type T;
          expression p, n;
          @@
          -g_realloc(p, sizeof(T) * (n))
          +g_renew(T, p, n)
          @@
          type T;
          expression p, n;
          @@
          -g_try_realloc(p, sizeof(T) * (n))
          +g_try_renew(T, p, n)
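
      For reference, a minimal self-contained GLib example of what the
      transformation amounts to at the C level (the Request struct here is
      made up for illustration):

          #include <glib.h>

          typedef struct {
              gint64 sector;
              int nb_sectors;
          } Request;

          int main(void)
          {
              gsize n = 16;

              /* Before: manual sizeof arithmetic; returns void *, no
               * overflow check on the multiplication */
              Request *a = g_malloc(sizeof(Request) * n);

              /* After: g_new() checks that n * sizeof(Request) fits in
               * gsize and returns Request *, so the compiler can flag
               * assignments to the wrong pointer type */
              Request *b = g_new(Request, n);
              Request *c = g_new0(Request, n);    /* zero-initialized */

              b = g_renew(Request, b, 2 * n);     /* typed realloc */

              g_free(a);
              g_free(b);
              g_free(c);
              return 0;
          }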
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  3. 15 Aug, 2014 1 commit
  4. 18 Jul, 2014 1 commit
  5. 04 Jun, 2014 1 commit
    • block: fix wrong order in live block migration setup · 1ac362cd
      chai wen authored
      The function init_blk_migration should be called before
      set_dirty_tracking, for the reasons below.

      If we want to track dirty blocks via dirty_maps on a BlockDriverState
      when doing live block migration, its corresponding 'BlkMigDevState' must
      be added to block_mig_state.bmds_list first for subsequent processing.
      Otherwise set_dirty_tracking does nothing on an empty list instead of
      allocating dirty_bitmaps for each device, and bdrv_get_dirty_count will
      then access bmds->dirty_maps directly and trigger a segfault.

      If set_dirty_tracking fails, qemu_savevm_state_cancel handles the
      cleanup of init_blk_migration automatically.
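
      A minimal sketch of the corrected setup order, assuming a
      block_save_setup()-style handler as in block-migration.c (simplified;
      exact signatures may differ):

          static int block_save_setup(QEMUFile *f, void *opaque)
          {
              int ret;

              /* Populate block_mig_state.bmds_list first ... */
              init_blk_migration(f);

              /* ... so that set_dirty_tracking() has devices to attach
               * dirty bitmaps to */
              ret = set_dirty_tracking();
              if (ret) {
                  /* qemu_savevm_state_cancel() cleans up after
                   * init_blk_migration() */
                  return ret;
              }

              return 0;
          }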
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: chai wen <chaiw.fnst@cn.fujitsu.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  6. 28 May, 2014 1 commit
    • block: Replace in_use with operation blocker · 3718d8ab
      Fam Zheng authored
      This replaces BlockDriverState.in_use with op_blockers:
      
        - Call bdrv_op_block_all in place of bdrv_set_in_use(bs, 1).
      
        - Call bdrv_op_unblock_all in place of bdrv_set_in_use(bs, 0).
      
        - Check bdrv_op_is_blocked() in place of bdrv_in_use(bs).
      
          The specific types are used; e.g. when starting a block backup,
          bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP, ...) is checked.
      
          There is one exception in block_job_create, where
          bdrv_op_blocker_is_empty() is used because we don't know the
          operation type there. This doesn't matter, because a few commits
          from now we will drop the check and move it to callers that _do_
          know the type.
      
        - Check bdrv_op_blocker_is_empty() in place of assert(!bs->in_use).
      
      Note: there are only bdrv_op_block_all and bdrv_op_unblock_all callers
      at the moment, so although the checks are specific to op types, this
      change can still be seen as logically identical to the previous in_use
      behavior. The difference is that the error messages are improved
      thanks to the blocker error info.
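
      A sketch of what the conversion looks like for a caller, with names
      taken from the lists above (the start_job_* wrappers themselves are
      hypothetical):

          /* Before: coarse in_use flag */
          static void start_job_old(BlockDriverState *bs, Error **errp)
          {
              if (bdrv_in_use(bs)) {
                  error_setg(errp, "Device is in use");
                  return;
              }
              bdrv_set_in_use(bs, 1);
              /* ... run the job ... */
              bdrv_set_in_use(bs, 0);
          }

          /* After: per-operation blockers; errp is filled in from the
           * blocker's error info */
          static void start_job_new(BlockDriverState *bs, Error *blocker,
                                    Error **errp)
          {
              if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP, errp)) {
                  return;
              }
              bdrv_op_block_all(bs, blocker);
              /* ... run the job ... */
              bdrv_op_unblock_all(bs, blocker);
          }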
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  7. 22 Apr, 2014 1 commit
  8. 29 Nov, 2013 1 commit
    • block: per caller dirty bitmap · e4654d2d
      Fam Zheng authored
      Previously a BlockDriverState had only one dirty bitmap, so only one
      caller (e.g. a block job) could keep track of writes. This changes the
      dirty bitmap into a list and creates a BdrvDirtyBitmap for each caller;
      the lifecycle is managed with these new functions:
      
          bdrv_create_dirty_bitmap
          bdrv_release_dirty_bitmap
      
      Here BdrvDirtyBitmap is a linked-list wrapper structure around HBitmap.
      
      In place of bdrv_set_dirty_tracking, a BdrvDirtyBitmap pointer argument
      is added to these functions, since each caller has its own dirty bitmap:
      
          bdrv_get_dirty
          bdrv_dirty_iter_init
          bdrv_get_dirty_count
      
      bdrv_set_dirty and bdrv_reset_dirty prototypes are unchanged but will
      internally walk the list of all dirty bitmaps and set them one by one.
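
      A sketch of how a caller might use the per-caller bitmap API (function
      names as listed above; the surrounding job function and its parameters
      are made up for illustration):

          static void job_track_writes(BlockDriverState *bs, int granularity,
                                       int64_t sector)
          {
              /* Each caller now owns its own bitmap instead of sharing the
               * single one in BlockDriverState */
              BdrvDirtyBitmap *bitmap = bdrv_create_dirty_bitmap(bs, granularity);
              int64_t dirty_sectors;

              /* Queries take the caller's bitmap explicitly */
              if (bdrv_get_dirty(bs, bitmap, sector)) {
                  /* this sector was written since the bitmap was created */
              }
              dirty_sectors = bdrv_get_dirty_count(bs, bitmap);
              (void)dirty_sectors;

              bdrv_release_dirty_bitmap(bs, bitmap);
          }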
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  9. 28 Nov, 2013 2 commits
  10. 06 Sep, 2013 1 commit
  11. 18 Jul, 2013 1 commit
  12. 11 Mar, 2013 7 commits
  13. 12 Feb, 2013 1 commit
    • block-migration: fix pending() and iterate() return values · 6aaa9dae
      Stefan Hajnoczi authored
      The return value of .save_live_pending() is the number of bytes
      remaining.  This is just an estimate because we do not know how many
      blocks will be dirtied by the running guest.
      
      Currently our return value for .save_live_pending() is wrong because it
      includes dirty blocks but not in-flight bdrv_aio_readv() requests or
      unsent blocks.  Crucially, it also doesn't include the bulk phase where
      the entire device is transferred - therefore we risk completing block
      migration before all blocks have been transferred!
      
      The return value of .save_live_iterate() is the number of bytes
      transferred this iteration.  Currently we return whether there are bytes
      remaining, which is incorrect.
      
      Move the bytes remaining calculation into .save_live_pending() and
      really return the number of bytes transferred this iteration in
      .save_live_iterate().
      
      Also fix the %ld format specifier that was used for a uint64_t
      argument.  PRIu64 must be used to avoid warnings on 32-bit hosts.
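
      A sketch of the intended return-value semantics (simplified;
      get_remaining_dirty() exists in block-migration.c, while the other
      helper names here are placeholders):

          /* .save_live_pending: estimated bytes still to transfer,
           * including the bulk phase, unsent blocks and in-flight reads */
          static uint64_t block_save_pending(QEMUFile *f, void *opaque,
                                             uint64_t max_size)
          {
              return get_remaining_dirty() + remaining_bulk_bytes() +
                     inflight_read_bytes();
          }

          /* .save_live_iterate: bytes actually written this iteration */
          static int block_save_iterate(QEMUFile *f, void *opaque)
          {
              int64_t before = qemu_ftell(f);

              /* ... queue reads and flush completed blocks to f ... */

              return qemu_ftell(f) - before;
          }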
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Message-id: 1360661835-28663-3-git-send-email-stefanha@redhat.com
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  14. 11 Feb, 2013 3 commits
  15. 25 Jan, 2013 2 commits
  16. 20 Dec, 2012 1 commit
    • savevm: New save live migration method: pending · e4ed1541
      Juan Quintela authored
      The code currently does (simplified for clarity):
      
          if (qemu_savevm_state_iterate(s->file) == 1) {
             vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
             qemu_savevm_state_complete(s->file);
          }
      
      The problem here is that qemu_savevm_state_iterate() returns 1 when it
      knows that sending the remaining memory will take less than the max
      downtime.

      But this means that we could end up spending 2x max_downtime: one
      downtime in qemu_savevm_state_iterate, and the other in
      qemu_savevm_state_complete.
      
      Changed code to:
      
          pending_size = qemu_savevm_state_pending(s->file, max_size);
          DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
          if (pending_size >= max_size) {
              ret = qemu_savevm_state_iterate(s->file);
           } else {
              vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
              qemu_savevm_state_complete(s->file);
           }
      
      So what we do is: at the current network speed, we calculate the
      maximum number of bytes we can send: max_size.

      Then we ask every save_live section how much it has pending.  If the
      total is less than max_size, we move to the complete phase; otherwise
      we do another iterate step.
      
      This makes things much simpler, because now individual sections don't
      have to calculate the bandwidth (it was impossible to do that correctly
      from there).
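
      For context, max_size comes from the measured transfer rate; a sketch
      roughly following the migration code of that era (variable names
      approximate):

          /* bandwidth is in bytes per millisecond over the last interval;
           * migrate_max_downtime() returns the allowed downtime in
           * nanoseconds */
          double bandwidth = (double)transferred_bytes / time_spent_ms;
          uint64_t max_size = bandwidth * migrate_max_downtime() / 1000000;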
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  17. 19 Dec, 2012 4 commits
  18. 17 Oct, 2012 3 commits
  19. 28 Sep, 2012 1 commit
  20. 20 Jul, 2012 4 commits