    dm thin: fix race between simultaneous io and discards to same block · e8088073
    Joe Thornber authored
    
    
    There is a race when discard bios and non-discard bios are issued
    simultaneously to the same block.
    
    Discard support is expensive for all thin devices precisely because you
    have to be careful to quiesce the area you're discarding.  DM thin must
    handle this conflicting IO pattern (simultaneous non-discard vs discard)
    even though a sane application shouldn't be issuing such IO.
    
    The race manifests as follows:
    
    1. A non-discard bio is mapped in thin_bio_map.
       This doesn't lock out parallel activity to the same block.
    
    2. A discard bio is issued to the same block as the non-discard bio.
    
    3. The discard bio is locked in a dm_bio_prison_cell in process_discard
       to lock out parallel activity against the same block.
    
    4. The non-discard bio's mapping continues and its all_io_entry is
       incremented, so the bio is accounted for in the thin pool's all_io_ds,
       a dm_deferred_set used to track the time locality of non-discard IO.
    
    5. The non-discard bio is finally locked in a dm_bio_prison_cell in
       process_bio.
    
    The race can result in deadlock, leaving the block layer hanging waiting
    for completion of a discard bio that never completes, e.g.:
    
    INFO: task ruby:15354 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    ruby            D ffffffff8160f0e0     0 15354  15314 0x00000000
     ffff8802fb08bc58 0000000000000082 ffff8802fb08bfd8 0000000000012900
     ffff8802fb08a010 0000000000012900 0000000000012900 0000000000012900
     ffff8802fb08bfd8 0000000000012900 ffff8803324b9480 ffff88032c6f14c0
    Call Trace:
     [<ffffffff814e5a19>] schedule+0x29/0x70
     [<ffffffff814e3d85>] schedule_timeout+0x195/0x220
     [<ffffffffa06b9bc1>] ? _dm_request+0x111/0x160 [dm_mod]
     [<ffffffff814e589e>] wait_for_common+0x11e/0x190
     [<ffffffff8107a170>] ? try_to_wake_up+0x2b0/0x2b0
     [<ffffffff814e59ed>] wait_for_completion+0x1d/0x20
     [<ffffffff81233289>] blkdev_issue_discard+0x219/0x260
     [<ffffffff81233e79>] blkdev_ioctl+0x6e9/0x7b0
     [<ffffffff8119a65c>] block_ioctl+0x3c/0x40
     [<ffffffff8117539c>] do_vfs_ioctl+0x8c/0x340
     [<ffffffff8119a547>] ? block_llseek+0x67/0xb0
     [<ffffffff811756f1>] sys_ioctl+0xa1/0xb0
     [<ffffffff810561f6>] ? sys_rt_sigprocmask+0x86/0xd0
     [<ffffffff814ef099>] system_call_fastpath+0x16/0x1b
    
    The thinp-test-suite's test_discard_random_sectors reliably hits this
    deadlock on fast SSD storage.
    
    The fix for this race is to increment a bio's all_io_entry while the
    dm_bio_prison_cell for the bio's associated virtual and physical
    blocks is held.  That cell locking wasn't occurring early enough in
    thin_bio_map; this patch moves it earlier.
    
    Care is taken to always call the new function inc_all_io_entry() with
    the relevant cells locked, but they are generally unlocked before
    calling issue() to avoid holding the cells locked across
    generic_make_request.
    
    Also, now that thin_bio_map may lock bios in a cell, process_bio() is no
    longer the only thread that will do so.  Because of this we must be sure
    to use cell_defer_except() to release all non-holder entries that were
    added by the other thread, since they must be deferred.
    
    This patch depends on "dm thin: replace dm_cell_release_singleton with
    cell_defer_except".
    
    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    Cc: stable@vger.kernel.org