1. 04 Apr, 2016 2 commits
    • mm, fs: remove remaining PAGE_CACHE_* and page_cache_{get,release} usage · ea1754a0
      Kirill A. Shutemov authored
      Mostly direct substitution, with occasional adjustments or removal of
      outdated comments.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Kirill A. Shutemov authored
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to implement
      the page cache with chunks bigger than PAGE_SIZE.
      
      This promise never materialized.  And it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
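      For example, a typical hunk after conversion looks like this (an
      illustrative C fragment, not taken from any particular file):

       /* before */
       count = len >> (PAGE_CACHE_SHIFT - PAGE_SHIFT);
       offset = pos & ~PAGE_CACHE_MASK;
       page_cache_release(page);

       /* after */
       count = len;
       offset = pos & ~PAGE_MASK;
       put_page(page);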
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header
      files, so I've run spatch on them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
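
      For reference, a semantic patch like this is typically applied with
      spatch; one plausible invocation (the script file name here is
      hypothetical, and exact flags vary by coccinelle version):

       spatch --sp-file pagecache.cocci --dir fs/ --in-place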
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 18 Mar, 2016 2 commits
  3. 15 Mar, 2016 2 commits
  4. 14 Mar, 2016 6 commits
    • xfs: always set rvalp in xfs_dir2_node_trim_free · 355cced4
      Christoph Hellwig authored
      xfs_dir2_node_trim_free can return without setting the rvalp argument
      pointer.  Initialize it to 0 at the beginning of the function and only
      update it to 1 if we succeeded in trimming a freespace block.
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
    • xfs: ensure committed is initialized in xfs_trans_roll · cc07eed8
      Eric Sandeen authored
      __xfs_trans_roll() can return without setting the
      *committed argument; this was a problem for xfs_bmap_finish():
      
              int       committed;/* xact committed or not */
      ...
              error = __xfs_trans_roll(tp, ip, &committed);
              if (error) {
      ...
                      if (committed) {
      
      and we tested an uninitialized "committed" variable on the
      error path.  No caller is preserving "committed" state across
      calls to __xfs_trans_roll(), so just initialize committed inside
      the function to avoid future errors like this.
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
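      Both fixes above follow the same defensive pattern: initialize the
      out-parameter at function entry so that every early-return path leaves
      it defined.  A minimal userspace C sketch of the pattern (illustrative
      names, not the actual XFS code):

       #include <stdio.h>

       /* Initialize the out-parameter first; set it to 1 only on success. */
       static int trim_free(int have_free_block, int *rvalp)
       {
               *rvalp = 0;                     /* defined on every exit path */
               if (!have_free_block)
                       return 0;               /* early return: *rvalp is 0 */
               /* ... trim the freespace block ... */
               *rvalp = 1;                     /* success */
               return 0;
       }

       int main(void)
       {
               int rval = 42;                  /* stack-garbage stand-in */

               trim_free(0, &rval);
               printf("trimmed: %d\n", rval);  /* prints 0, not garbage */
               return 0;
       }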
    • xfs: borrow indirect blocks from freed extent when available · d34999c9
      Brian Foster authored
      xfs_bmap_del_extent() handles extent removal from the in-core and
      on-disk extent lists. When removing a delalloc range, it updates the
      indirect block reservation appropriately based on the removal. It
      currently enforces that the new indirect block reservation is less than
      or equal to the original. This is normally the case in all situations
      except for in certain cases when the removed range creates a hole in a
      single delalloc extent, thus splitting a single delalloc extent in two.
      
      It is possible, with small enough extents, to split an indlen==1 extent
      into two slightly smaller extents.  This leaves one extent with 0
      indirect blocks and leads to assert failures in other areas (e.g.,
      xfs_bunmapi() if the extent happens to be removed).
      
      Update the indlen distribution code to steal blocks from the deleted
      extent, if necessary, to satisfy the worst case total indirect
      reservation for the new extents. This is safe as the caller does not
      update the fdblocks counters until the extent is removed. Blocks stolen
      in this manner simply remain accounted as allocated, having ownership
      transferred from the data extent to an indirect reservation.
      
      As a precaution, fall back to the original reservation algorithm if the
      new indlen requirement is not met and warn if we end up with extents
      without any reservation at all to detect this more easily in the future.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
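      A simplified C sketch of the stealing idea (illustrative only, not the
      kernel code: it computes only how much to steal, assumes exactly two
      new extents, and omits the fallback path mentioned above):

       #include <stdio.h>

       /*
        * Split an original indirect reservation "ores" between two new
        * extents with worst-case needs w1 and w2.  If ores is short,
        * borrow the shortfall from the blocks freed by the deleted
        * range; stolen blocks simply stay accounted as allocated.
        */
       static unsigned int indlen_steal(unsigned int ores, unsigned int w1,
                                        unsigned int w2, unsigned int freed)
       {
               unsigned int need = w1 + w2;
               unsigned int stolen = 0;

               if (ores < need) {
                       stolen = need - ores;
                       if (stolen > freed)
                               stolen = freed; /* can't take more than freed */
               }
               return stolen;
       }

       int main(void)
       {
               /* indlen==1 extent split in two: worst case needs 2 blocks */
               printf("stolen: %u\n", indlen_steal(1, 1, 1, 8)); /* prints 1 */
               return 0;
       }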
    • xfs: refactor delalloc indlen reservation split into helper · a9bd24ac
      Brian Foster authored
      The delayed allocation indirect reservation splitting code is not
      sufficient in some cases where a delalloc extent is split in two. In
      preparation for enhancements to this code, refactor the current indlen
      distribution algorithm into a new helper function.
      
      [dchinner: rename temp, temp2 variables]
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
    • xfs: update freeblocks counter after extent deletion · b2706a05
      Brian Foster authored
      xfs_bunmapi() currently updates the fdblocks counter, unreserves quota,
      etc. before the extent is deleted by xfs_bmap_del_extent(). The function
      has problems dividing up the indirect reserved blocks for scenarios
      where a single delalloc extent is split in two. Particularly, there
      aren't always enough blocks reserved for multiple extents in a single
      extent reservation.
      
      The solution to this problem is to allow the extent removal code to
      steal from the deleted extent to meet indirect reservation requirements.
      Move the block of code in xfs_bunmapi() that updates the fdblocks counter
      to after the call to xfs_bmap_del_extent() to allow the codepath to
      update the extent record before the free blocks are accounted. Also,
      reshuffle the code slightly so the delalloc accounting occurs near the
      xfs_bmap_del_extent() call to provide context for the comments.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
    • xfs: debug mode forced buffered write failure · 801cc4e1
      Brian Foster authored
      Add a DEBUG mode-only sysfs knob to enable forced buffered write
      failure. An additional side effect of this mode is brute force killing
      of delayed allocation blocks in the range of the write. The latter is
      the prime motivation behind this patch, as userspace test
      infrastructure requires a reliable mechanism to create and split
      delalloc extents without causing extent conversion.
      
      Certain fallocate operations (i.e., zero range) were used for this in
      the past, but the implementations have changed such that delalloc
      extents are flushed and converted to real blocks, rendering the test
      useless.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
  5. 08 Mar, 2016 2 commits
  6. 06 Mar, 2016 8 commits
  7. 01 Mar, 2016 6 commits
  8. 29 Feb, 2016 4 commits
  9. 27 Feb, 2016 2 commits
    • dax: move writeback calls into the filesystems · 7f6d5b52
      Ross Zwisler authored
      Previously calls to dax_writeback_mapping_range() for all DAX filesystems
      (ext2, ext4 & xfs) were centralized in filemap_write_and_wait_range().
      
      dax_writeback_mapping_range() needs a struct block_device, and it used
      to get that from inode->i_sb->s_bdev.  This is correct for normal inodes
      mounted on ext2, ext4 and XFS filesystems, but is incorrect for DAX raw
      block devices and for XFS real-time files.
      
      Instead, call dax_writeback_mapping_range() directly from the filesystem
      ->writepages function so that it can supply us with a valid block
      device.  This also fixes DAX code to properly flush caches in response
      to sync(2).
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
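      The resulting ->writepages hook has roughly this shape (a C sketch
      modeled on the ext2/XFS hunks, not verbatim kernel code;
      fs_bdev_for_inode() is a hypothetical stand-in for however the
      filesystem finds its data device):

       static int fs_writepages(struct address_space *mapping,
                                struct writeback_control *wbc)
       {
               if (dax_mapping(mapping))       /* flush caches for sync(2) */
                       return dax_writeback_mapping_range(mapping,
                                       fs_bdev_for_inode(mapping->host), wbc);

               return mpage_writepages(mapping, NULL, wbc); /* non-DAX path */
       }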
    • dax: give DAX clearing code correct bdev · 20a90f58
      Ross Zwisler authored
      dax_clear_blocks() needs a valid struct block_device and previously it
      was using inode->i_sb->s_bdev in all cases.  This is correct for normal
      inodes on mounted ext2, ext4 and XFS filesystems, but is incorrect for
      DAX raw block devices and for XFS real-time devices.
      
      Instead, rename dax_clear_blocks() to dax_clear_sectors(), and change
      its arguments to take a bdev and a sector instead of an inode and a
      block.  This better reflects what the function does, and it allows the
      filesystem and raw block device code to pass in an appropriate struct
      block_device.
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Suggested-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
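      In caller terms, the interface change looks roughly like this (a
      before/after C sketch; the variable names are illustrative):

       /* before: the helper derived the bdev from inode->i_sb->s_bdev */
       err = dax_clear_blocks(inode, block, size);

       /* after: the caller passes the correct bdev and a raw sector */
       err = dax_clear_sectors(bdev, sector, size);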
  10. 14 Feb, 2016 6 commits
    • xfs: don't chain ioends during writepage submission · e10de372
      Dave Chinner authored
      Currently we can build a long ioend chain during ->writepages that
      gets attached to the writepage context. IO submission only then
      occurs when we finish all the writepage processing. This means we
      can have many ioends allocated and pending, and this violates the
      mempool guarantees that we need to give about forwards progress.
      i.e. we really should only have one ioend being built at a time,
      otherwise we may drain the mempool trying to allocate a new ioend
      and that blocks submission, completion and freeing of ioends that
      are already in progress.
      
      To prevent this situation from happening, we need to submit ioends
      for IO as soon as they are ready for dispatch rather than queuing
      them for later submission. This means the ioends have bios built
      immediately and they get queued on any plug that is currently active.
      Hence if we schedule away from writeback, the ioends that have been
      built will make forwards progress due to the plug flushing on
      context switch. This will also prevent context switches from
      creating unnecessary IO submission latency.
      
      We can't completely avoid having nested IO allocation - when we have
      a block size smaller than a page size, we still need to hold the
      ioend submission until after we have marked the current page dirty.
      Hence we may need multiple ioends to be held while the current page
      is completely mapped and made ready for IO dispatch. We cannot avoid
      this problem - the current code already has this ioend chaining
      within a page so we can mostly ignore that it occurs.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
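      A toy userspace C illustration of the ordering change (not kernel
      code): each ioend is submitted the moment the next mapping boundary
      is hit, so at most one ioend is ever under construction.

       #include <stdio.h>

       struct ioend { int first_page, npages; };

       static void submit(struct ioend *io)
       {
               printf("submit pages %d..%d\n",
                      io->first_page, io->first_page + io->npages - 1);
       }

       int main(void)
       {
               /* 1 marks a page that starts a new mapping */
               int new_mapping[] = { 1, 0, 0, 1, 0 };
               struct ioend cur = { 0, 0 };

               for (int p = 0; p < 5; p++) {
                       if (new_mapping[p] && cur.npages) {
                               submit(&cur);   /* submit as soon as ready */
                               cur.first_page = p;
                               cur.npages = 0;
                       }
                       cur.npages++;
               }
               if (cur.npages)
                       submit(&cur);           /* final ioend in the range */
               return 0;
       }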
    • xfs: factor mapping out of xfs_do_writepage · bfce7d2e
      Dave Chinner authored
      Separate out the bufferhead based mapping from the writepage code so
      that we have a clear separation of the page operations and the
      bufferhead state.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: xfs_cluster_write is redundant · ad68972a
      Dave Chinner authored
      xfs_cluster_write() is not necessary now that xfs_vm_writepages()
      aggregates writepage calls across a single mapping. This means we no
      longer need to do page lookups in xfs_cluster_write, so writeback
      only needs to look up the page cache once per page being written.
      This also removes a large amount of mostly duplicate code between
      xfs_do_writepage() and xfs_convert_page().
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: Introduce writeback context for writepages · fbcc0256
      Dave Chinner authored
      xfs_vm_writepages() calls generic_writepages to write back a range of
      a file, but then xfs_vm_writepage() clusters pages itself, as it does
      not have any context it can pass between ->writepage calls from
      write_cache_pages().
      
      Introduce a writeback context for xfs_vm_writepages() and call
      write_cache_pages directly with our own writepage callback so that
      we can pass that context to each writepage invocation.  This
      encapsulates the current mapping, whether it is valid or not, the
      current ioend and its IO type, and the ioend chain being built.
      
      This requires us to move the ioend submission up to the level where
      the writepage context is declared.  This does mean we do not submit
      IO until we have packaged the entire writeback range, but with the
      block plugging in the writepages call this is the way IO is
      submitted anyway.
      
      It also means that we need to handle discontiguous page ranges.  If
      the pages sent down by write_cache_pages to the writepage callback
      are discontiguous, we need to detect this and put each discontiguous
      page range into individual ioends. This is needed to ensure that the
      ioend accurately represents the range of the file that it covers so
      that file size updates during IO completion set the size correctly.
      Failure to take into account the discontiguous ranges results in
      files being too small when writeback patterns are non-sequential.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
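      The pattern has roughly this shape (a C sketch with approximate names,
      not the verbatim kernel code; my_writepage() and submit_last_ioend()
      are hypothetical helpers).  write_cache_pages() takes an opaque data
      pointer, and that is where the context lives:

       /* state that must survive across per-page ->writepage calls */
       struct writepage_ctx {
               struct xfs_bmbt_irec    imap;           /* current mapping */
               bool                    imap_valid;     /* still usable? */
               unsigned int            io_type;        /* delalloc/... */
               struct xfs_ioend        *ioend;         /* being built */
       };

       static int my_writepages(struct address_space *mapping,
                                struct writeback_control *wbc)
       {
               struct writepage_ctx ctx = { .imap_valid = false };
               int ret;

               ret = write_cache_pages(mapping, wbc, my_writepage, &ctx);
               /* the context is declared here, so the last ioend is
                * submitted here, after the whole range is packaged */
               return submit_last_ioend(&ctx, ret);
       }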
    • xfs: remove xfs_cancel_ioend · 150d5be0
      Dave Chinner authored
      We currently have code to cancel ioends being built because we
      change bufferhead state as we build the ioend. On error, this needs
      to be unwound and so we have cancelling code that walks the buffers
      on the ioend chain and undoes these state changes.
      
      However, the IO submission path already handles state changes for
      buffers when a submission error occurs, so we don't really need a
      separate cancel function to do this - we can simply submit the
      ioend chain with the specific error and it will be cancelled rather
      than submitted.
      
      Hence we can remove the explicit cancel code and just rely on
      submission to deal with the error correctly.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: remove nonblocking mode from xfs_vm_writepage · 988ef927
      Dave Chinner authored
      Remove the nonblocking optimisation done for mapping lookups during
      writeback. It's not clear that leaving a hole in the writeback range
      just because we couldn't get a lock is really a win, as it makes us
      do another small random IO later on rather than a large sequential
      IO now.
      
      As this gets in the way of sane error handling later on, just remove
      it for the moment; we can re-introduce an equivalent optimisation in
      future if we see problems due to extent map lock contention.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>