1. 17 Jan, 2011 1 commit
    • Btrfs: forced readonly mounts on errors · acce952b
      liubo authored
      
      
      This patch comes from "Forced readonly mounts on errors" ideas.
      
      As we know, this is the first step in being more fault tolerant of disk
      corruptions instead of just using BUG() statements.
      
      The major content:
      - add a framework for generating errors that should result in the
        filesystem going readonly.
      - keep the FS state in the on-disk super block.
      - make sure that all resources are freed and released at umount time.
      - make sure that after the FS is forced readonly on error, there are no
        further disk changes before the FS is corrected; for this, we stop
        write operations.
      
      After this patch is applied, the conversion from BUG() to such a framework can
      happen incrementally.
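
      As a rough illustration of the framework's entry point, the idea is a single
      helper that records the error in the FS state kept in the super block and
      flips the mount readonly. The sketch below assumes the btrfs_fs_info and
      super block definitions from ctree.h; the flag name and message are
      approximations, not the patch's literal code.

        /* Sketch only: record the error and force the filesystem readonly. */
        static void example_btrfs_force_readonly(struct btrfs_fs_info *fs_info)
        {
                struct super_block *sb = fs_info->sb;

                /* keep the FS state in the super block */
                fs_info->fs_state |= BTRFS_SUPER_FLAG_ERROR;    /* flag name assumed */

                if (!(sb->s_flags & MS_RDONLY)) {
                        sb->s_flags |= MS_RDONLY;       /* stop further write operations */
                        printk(KERN_WARNING "btrfs: forced readonly after error\n");
                }
        }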
      Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      acce952b
  2. 16 Jan, 2011 3 commits
  3. 22 Dec, 2010 1 commit
    • btrfs: Add lzo compression support · a6fa6fae
      Li Zefan authored
      
      
      Lzo is a much faster compression algorithm than gzip, so it would allow
      more users to enable transparent compression, and users can choose
      between compression ratio and speed for different applications.
      
      Usage:
      
       # mount -t btrfs -o compress[=<zlib,lzo>] dev /mnt
      or
       # mount -t btrfs -o compress-force[=<zlib,lzo>] dev /mnt
      
      "-o compress" without argument is still allowed for compatability.
      
      Compatibility:
      
      If we mount a filesystem with lzo compression, it will not be mountable
      on old kernels. One reason is that, otherwise, btrfs would hand compressed
      data sitting in inline extents straight to userspace.
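
      The refusal on old kernels comes from an incompat feature bit in the super
      block; roughly as below (the accessor and flag names follow the usual btrfs
      pattern but are stated here as assumptions, not quoted from the patch):

        /* Sketch: flag the fs as using LZO so older kernels reject the mount. */
        u64 features = btrfs_super_incompat_flags(disk_super);

        if (!(features & BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO)) {
                features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO;
                btrfs_set_super_incompat_flags(disk_super, features);
        }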
      
      Performance:
      
      The test copied a linux source tarball (~400M) from an ext4 partition
      to the btrfs partition, and then extracted it.
      
      (time in seconds)
                 lzo        zlib        nocompress
      copy:      10.6       21.7        14.9
      extract:   70.1       94.4        66.6
      
      (data size in MB)
                 lzo        zlib        nocompress
      copy:      185.87     108.69      394.49
      extract:   193.80     132.36      381.21
      
      Changelog:
      
      v1 -> v2:
      - Select LZO_COMPRESS and LZO_DECOMPRESS in btrfs Kconfig.
      - Add incompatibility flag.
      - Fix error handling in compress code.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      a6fa6fae
  4. 13 Dec, 2010 1 commit
    • Btrfs: EIO when we fail to read tree roots · 68433b73
      Chris Mason authored
      
      
      If we just get a plain IO error when we read tree roots, the code
      wasn't properly sending that error up the chain.  This allowed mounts to
      continue when they should have failed, and allowed operations
      on partially set up root structs.  The end result was usually oopses
      on spinlocks that hadn't been spun up correctly.
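
      The shape of the fix is simply to check each tree-root read and unwind
      instead of pressing on; something along these lines (a sketch with
      approximate function names and labels, not the literal hunk):

        tree_root->node = read_tree_block(tree_root, bytenr, blocksize, generation);
        if (!tree_root->node) {
                /* plain IO error: don't continue with a partially set up root */
                err = -EIO;
                goto fail_tree_roots;   /* unwind and fail the mount */
        }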
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      68433b73
  5. 10 Dec, 2010 1 commit
  6. 29 Nov, 2010 1 commit
  7. 27 Nov, 2010 1 commit
    • Btrfs: setup blank root and fs_info for mount time · 450ba0ea
      Josef Bacik authored
      
      
      There is a problem with how we use sget: it searches through the list of
      supers attached to the fs_type looking for a super with the same fs_devices
      as the one we're trying to mount.  This depends on sb->s_fs_info being
      filled in, but we don't fill that in until we get to btrfs_fill_super, so we
      could hit supers on the fs_type super list that have a NULL s_fs_info.  To
      fix that, we set up a blank root with a blank fs_info to hold fs_devices
      ahead of time, so the test works out right; we then set s_fs_info in
      btrfs_set_super, and open_ctree simply uses our pre-allocated root and
      fs_info when setting everything up.  Thanks,
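
      In outline, the blank root lets sget()'s test callback compare fs_devices
      safely; a sketch of what that comparison can look like once s_fs_info is
      always populated (the callback and field names below are assumptions in
      the spirit of the description above):

        /* Sketch: with a pre-allocated blank root, s_fs_info is never NULL here. */
        static int example_btrfs_test_super(struct super_block *s, void *data)
        {
                struct btrfs_fs_devices *fs_devices = data;
                struct btrfs_root *root = s->s_fs_info;

                return root->fs_info->fs_devices == fs_devices;
        }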
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      450ba0ea
  8. 21 Nov, 2010 1 commit
    • Btrfs: add migrate page for metadata inode · 784b4e29
      Chris Mason authored
      
      
      Migrate page will directly call the btrfs btree writepage function,
      which isn't actually allowed.
      
      Our writepage assumes that you have locked the extent_buffer and
      flagged the block as written.  Without doing these steps, we can
      corrupt metadata blocks.
      
      A later commit will remove the btree writepage function since
      it is really only safely used internally by btrfs.  We
      use writepages for everything else.
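
      A hedged sketch of what such a migratepage hook can look like: it defers to
      the generic page-migration helper and never goes through the btree
      writepage path (the guard conditions here are simplified assumptions):

        #ifdef CONFIG_MIGRATION
        /* Sketch: move the btree page with the generic helper, never via writepage. */
        static int example_btree_migratepage(struct address_space *mapping,
                                             struct page *newpage, struct page *page)
        {
                /* don't touch pages whose private state we can't drop safely */
                if (page_has_private(page) &&
                    !try_to_release_page(page, GFP_KERNEL))
                        return -EAGAIN;

                return migrate_page(mapping, newpage, page);
        }
        #endif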
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      784b4e29
  9. 29 Oct, 2010 3 commits
    • Btrfs: async transaction commit · bb9c12c9
      Sage Weil authored
      
      
      Add support for an async transaction commit that is ordered such that any
      subsequent operations will join the following transaction, but does not
      wait until the current commit is fully on disk.  This avoids much of the
      latency associated with the btrfs_commit_transaction for callers concerned
      with serialization and not safety.
      
      The wait_for_unblock flag controls whether we wait for the 'middle' portion
      of commit_transaction to complete, which is necessary if the caller expects
      some of the modifications contained in the commit to be available (this is
      the case for subvol/snapshot creation).
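
      From a caller's point of view the usage is roughly as follows (the function
      name matches the commit description; the exact argument list is an
      assumption):

        /* Sketch: kick off the commit; with wait_for_unblock set, return only
         * once the 'middle' portion has run and the new snapshot is visible. */
        ret = btrfs_commit_transaction_async(trans, root, 1 /* wait_for_unblock */);
        if (ret)
                goto fail;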
      Signed-off-by: Sage Weil <sage@newdream.net>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      bb9c12c9
    • Btrfs: cleanup warnings from gcc 4.6 (nonbugs) · 559af821
      Andi Kleen authored
      
      
      These are all the cases where a variable is set but not read, which, as far
      as I can see, are not bugs but simply leftovers.
      
      Still needs more review.
      
      Found by gcc 4.6's new warnings
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      559af821
    • Btrfs: write out free space cache · 0cb59c99
      Josef Bacik authored
      
      
      This is a simple bit, just dump the free space cache out to our preallocated
      inode when we're writing out dirty block groups.  There are a bunch of changes
      in inode.c in order to account for special cases.  Mostly when we're doing the
      writeout we're holding trans_mutex, so we need to use the nolock transaction
      functions.  Also we can't do asynchronous completions since the async thread
      could be blocked on already completed IO waiting for the transaction lock.  This
      has been tested with xfstests and btrfs filesystem balance, as well as my ENOSPC
      tests.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      0cb59c99
  10. 28 Oct, 2010 1 commit
    • Btrfs: create special free space cache inode · 0af3d00b
      Josef Bacik authored
      
      
      In order to save free space cache, we need an inode to hold the data, and we
      need a special item to point at the right inode for the right block group.  So
      first, create a special item that will point to the right inode, and the number
      of extent entries we will have and the number of bitmaps we will have.  We
      truncate and pre-allocate space every time to make sure it's up to date.

      This feature will be turned on as soon as you mount with -o space_cache;
      however, it is safe to boot into old kernels, which will just generate the
      cache the old-fashioned way.  When you boot back into a newer kernel, we
      will notice that the filesystem was modified without the cache being
      updated and automatically discard the cache.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      0af3d00b
  11. 07 Aug, 2010 1 commit
    • block: unify flags for struct bio and struct request · 7b6d91da
      Christoph Hellwig authored
      
      
      Remove the current bio flags and reuse the request flags for the bio, too.
      This allows us to more easily trace the type of I/O from the filesystem
      down to the block driver.  There were two flags in the bio that were
      missing from the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD.  I've also
      renamed two request flags that had a superfluous RW in them.
      
      Note that the flags are in bio.h despite having the REQ_ name - as
      blkdev.h includes bio.h that is the only way to go for now.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      7b6d91da
  12. 11 Jun, 2010 2 commits
  13. 25 May, 2010 7 commits
  14. 26 Apr, 2010 1 commit
  15. 30 Mar, 2010 3 commits
    • Btrfs: kill max_extent mount option · 287a0ab9
      Josef Bacik authored
      
      
      As Yan pointed out, there's not much reason for all this complicated math to
      account for file extents being split up into max_extent chunks, since they
      are likely to all end up in the same leaf anyway.  Since there isn't much
      reason to use max_extent, just remove the option altogether so we have one
      less thing to test.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      287a0ab9
    • Btrfs: fail to mount if we have problems reading the block groups · 1b1d1f66
      Josef Bacik authored
      
      
      We don't actually check the return value of btrfs_read_block_groups, so we
      can succeed in mounting but then fail to, say, read the superblock xattr for
      selinux, which will cause the vfs code to deactivate the super.
      
      This is a problem because in find_free_extent we just assume that we
      will find the right space_info for the allocation we want.  But if we
      failed to read the block groups, we won't have setup any space_info's,
      and we'll hit a NULL pointer deref in find_free_extent.
      
      This patch fixes that problem by checking the return value of
      btrfs_read_block_groups, and failing out properly.  I've also added a
      check in find_free_extent so if for some reason we don't find an
      appropriate space_info, we just return -ENOSPC.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      1b1d1f66
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.

      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script was
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h (see the example after this list).
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
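
      As a concrete (made-up) example of the kind of edit the script produces, a
      file that calls kmalloc() but previously saw slab.h only through the
      implicit percpu.h chain gains an explicit include:

        #include <linux/slab.h>         /* added: kmalloc()/kfree() live here */

        static void *example_buf_alloc(size_t len)
        {
                /* GFP_KERNEL is available via slab.h -> gfp.h */
                return kmalloc(len, GFP_KERNEL);
        }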
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others it was more appropriate
         to add it to an implementation .h or embedding .c file.  This step
         added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  16. 15 Mar, 2010 1 commit
    • Btrfs: cache the extent state everywhere we possibly can V2 · 2ac55d41
      Josef Bacik authored
      
      
      This patch just goes through and fixes everybody that does
      
      lock_extent()
      blah
      unlock_extent()
      
      to use
      
      lock_extent_bits()
      blah
      unlock_extent_cached()
      
      and pass around an extent_state so we only have to do the searches once per
      function.  This gives me about a 3 MB/s boost on my random write test.  I have
      not converted some things, like the relocation and ioctls, since they aren't
      heavily used and the relocation stuff is in the middle of being re-written.  I
      also changed the clear_extent_bit() to only unset the cached state if we are
      clearing EXTENT_LOCKED and related stuff, so we can do things like this
      
      lock_extent_bits()
      clear delalloc bits
      unlock_extent_cached()
      
      without losing our cached state.  I tested this thoroughly and turned on
      LEAK_DEBUG to make sure we weren't leaking extent states; everything worked
      out fine.
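
      Concretely, a converted call site looks roughly like this (a sketch; the
      argument lists approximate the extent_io API of this era rather than
      quoting it):

        struct extent_state *cached_state = NULL;

        /* lock the range once and remember the extent_state we found */
        lock_extent_bits(&BTRFS_I(inode)->io_tree, start, end,
                         0, &cached_state, GFP_NOFS);

        /* ... operate on [start, end] ... */

        /* reuse the cached state instead of searching the tree again */
        unlock_extent_cached(&BTRFS_I(inode)->io_tree, start, end,
                             &cached_state, GFP_NOFS);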
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      2ac55d41
  17. 08 Mar, 2010 1 commit
  18. 04 Feb, 2010 1 commit
    • Btrfs: remove BUG_ON() due to mounting bad filesystem · d7ce5843
      Miao Xie authored
      
      
      Mounting a bad filesystem caused a BUG_ON(). The following steps
      reproduce it:
       # mkfs.btrfs /dev/sda2
       # mount /dev/sda2 /mnt
       # mkfs.btrfs /dev/sda1 /dev/sda2
       (the program says that /dev/sda2 was mounted, and then exits. )
       # umount /mnt
       # mount /dev/sda1 /mnt
      
      At the third step, mkfs.btrfs exited partway through making the filesystem,
      so the initialization of the filesystem didn't finish.  The filesystem was
      therefore bad, and mounting it triggered the BUG_ON().  But a BUG_ON()
      should only be hit because of buggy kernel code, not because of a user's
      operation, so I think this is a bug in btrfs.
      
      This patch fixes it.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      d7ce5843
  19. 28 Jan, 2010 1 commit
    • Btrfs: run orphan cleanup on default fs root · e3acc2a6
      Josef Bacik authored
      This patch reverts commit 6c090a11, since it introduced a problem where we
      can run orphan cleanup on a volume that can have orphan entries re-added.
      Instead of my original fix, Yan Zheng pointed out that we can just revert
      it and then run the orphan cleanup in open_ctree after we look up the
      fs_root.  I have tested this with all the tests that gave me problems and
      this patch fixes both problems.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      e3acc2a6
  20. 17 Dec, 2009 2 commits
  21. 15 Dec, 2009 1 commit
  22. 13 Oct, 2009 1 commit
    • Btrfs: avoid tree log commit when there are no changes · 257c62e1
      Chris Mason authored
      
      
      rpm has a habit of running fdatasync when the file hasn't
      changed.  We already detect if a file hasn't been changed
      in the current transaction but it might have been sent to
      the tree-log in this transaction and not changed since
      the last call to fsync.
      
      In this case, we want to avoid a tree log sync, which includes
      a number of synchronous writes and barriers.  This commit
      extends the existing tracking of the last transaction to change
      a file to also track the last sub-transaction.
      
      The end result is that rpm -ivh and -Uvh are roughly twice as fast,
      and on par with ext3.
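
      The check in fsync then becomes, in spirit, a two-level comparison; a hedged
      sketch (the field names approximate the tracking this commit adds, they are
      not quoted from it):

        /* Sketch: skip the tree-log sync when nothing changed since the last one. */
        if (BTRFS_I(inode)->last_trans <= root->fs_info->last_trans_committed &&
            BTRFS_I(inode)->last_sub_trans <= root->last_log_commit)
                return 0;       /* nothing new to log; skip the sync writes/barriers */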
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      257c62e1
  23. 08 Oct, 2009 1 commit
    • Btrfs: async delalloc flushing under space pressure · e3ccfa98
      Josef Bacik authored
      
      
      This patch moves the delalloc flushing that occurs when we are under space
      pressure off to an async thread pool.  This helps since we only free up
      metadata space when we actually insert the extent item, which means it takes
      quite a while for space to be freed up if we wait on all ordered extents.
      However, if space is freed up due to inline extents being inserted, we can
      wake up waiters early, and they can finish their work.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      e3ccfa98
  24. 05 Oct, 2009 1 commit
    • Btrfs: fix deadlock on async thread startup · 61d92c32
      Chris Mason authored
      
      
      The btrfs async worker threads are used for a wide variety of things,
      including processing bio end_io functions.  This means that when
      the endio threads aren't running, the rest of the FS isn't
      able to do the final processing required to clear PageWriteback.
      
      The endio threads also try to exit as they become idle and
      start more as the work piles up.  The problem is that starting more
      threads means kthreadd may need to allocate ram, and that allocation
      may wait until the global number of writeback pages on the system is
      below a certain limit.
      
      The result of that throttling is that end IO threads wait on
      kthreadd, who is waiting on IO to end, which will never happen.
      
      This commit fixes the deadlock by handing off thread startup to a
      dedicated thread.  It also fixes a bug where the on-demand thread
      creation was creating far too many threads because it didn't take into
      account threads being started by other procs.
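
      A hedged sketch of the hand-off: the endio path only records that another
      worker is wanted and wakes a long-running starter thread, which is the only
      place kthread_run() gets called (the structure and field names here are
      illustrative, not the actual btrfs_workers layout):

        /* Sketch: never call kthread_run() from endio; just poke the starter. */
        static void example_request_worker(struct example_workers *workers)
        {
                spin_lock(&workers->lock);
                workers->num_wanted++;          /* counts pending starts, too */
                spin_unlock(&workers->lock);

                wake_up(&workers->starter_wait);        /* starter calls kthread_run() */
        }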
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      61d92c32
  25. 01 Oct, 2009 2 commits