1. 24 May, 2011 1 commit
  2. 11 Mar, 2011 4 commits
  3. 30 Oct, 2010 2 commits
  4. 24 Oct, 2010 3 commits
  5. 05 Oct, 2010 1 commit
• block: autoconvert trivial BKL users to private mutex · 2a48fc0a
      Arnd Bergmann authored
      The block device drivers have all gained new lock_kernel
      calls from a recent pushdown, and some of the drivers
      were already using the BKL before.
      
      This turns the BKL into a set of per-driver mutexes.
Still need to check whether this is safe to do.  The conversion was done
with the following script, which takes the source file and the mutex name
prefix as arguments:
      
      file=$1
      name=$2
      if grep -q lock_kernel ${file} ; then
          if grep -q 'include.*linux.mutex.h' ${file} ; then
                  sed -i '/include.*<linux\/smp_lock.h>/d' ${file}
          else
                  sed -i 's/include.*<linux\/smp_lock.h>.*$/include <linux\/mutex.h>/g' ${file}
          fi
          sed -i ${file} \
              -e "/^#include.*linux.mutex.h/,$ {
                      1,/^\(static\|int\|long\)/ {
                           /^\(static\|int\|long\)/istatic DEFINE_MUTEX(${name}_mutex);
      
      } }"  \
          -e "s/\(un\)*lock_kernel\>[ ]*()/mutex_\1lock(\&${name}_mutex)/g" \
          -e '/[      ]*cycle_kernel_lock();/d'
      else
          sed -i -e '/include.*\<smp_lock.h\>/d' ${file}  \
                      -e '/cycle_kernel_lock()/d'
      fi
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      2a48fc0a
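As a rough illustration (not taken from the patch; the "foo" driver and its open routine are hypothetical), the conversion leaves a driver looking like this:

    #include <linux/mutex.h>

    /* Inserted by the script before the first static/int/long definition. */
    static DEFINE_MUTEX(foo_mutex);

    static int foo_open(struct block_device *bdev, fmode_t mode)
    {
            mutex_lock(&foo_mutex);           /* was: lock_kernel();   */
            /* ... device-specific open logic ... */
            mutex_unlock(&foo_mutex);         /* was: unlock_kernel(); */
            return 0;
    }

The mutex only serializes callers within this one driver, whereas the BKL serialized across the whole kernel, which is why the commit notes that the safety of the change still needs checking per driver.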
  6. 08 Aug, 2010 1 commit
  7. 07 Aug, 2010 3 commits
  8. 06 Aug, 2010 1 commit
  9. 02 Aug, 2010 1 commit
  10. 08 Mar, 2010 1 commit
  11. 26 Feb, 2010 4 commits
  12. 25 Feb, 2010 2 commits
  13. 30 Nov, 2009 1 commit
  14. 26 Nov, 2009 1 commit
• block: add helpers to run flush_dcache_page() against a bio and a request's pages · 2d4dc890
      Ilya Loginov authored
The mtdblock driver doesn't call flush_dcache_page() for pages in a request.  So,
      this causes problems on architectures where the icache doesn't fill from
      the dcache or with dcache aliases.  The patch fixes this.
      
      The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid
      pointless empty cache-thrashing loops on architectures for which
flush_dcache_page() is a no-op.  Every architecture was provided with this
symbol.  The new helpers flush pages on architectures where
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE equals 1 and do nothing otherwise.
      
      See "fix mtd_blkdevs problem with caches on some architectures" discussion
      on LKML for more information.
Signed-off-by: Ilya Loginov <isloginov@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Peter Horton <phorton@bitbox.co.uk>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      2d4dc890
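A minimal sketch of how a driver would use the new request-level helper (the surrounding function and names are hypothetical; bio_flush_dcache_pages() is the analogous helper for a single bio):

    #include <linux/blkdev.h>

    /* Called after the driver has copied read data into the request's
     * pages, e.g. by PIO from an mtd device. */
    static void foo_read_done(struct request *req)
    {
            /* Flushes each page of the request on architectures where
             * ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE == 1; a no-op elsewhere. */
            rq_flush_dcache_pages(req);
    }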
  15. 20 Oct, 2009 1 commit
  16. 01 Oct, 2009 2 commits
• block: use normal I/O path for discard requests · c15227de
      Christoph Hellwig authored
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally adding a payload there makes the ownership
      of the bio backing unclear as it's now allocated by the device driver
      and not the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD which is used to indicate whether
      the queue supports discard operations or not.  blkdev_issue_discard now
      allocates a one-page, sector-length payload which is the right thing
      for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>
      which did the prepare_discard_fn but not the different payload allocation
      yet.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c15227de
• block: use normal I/O path for discard requests · 1122a26f
      Christoph Hellwig authored
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally adding a payload there makes the ownership
      of the bio backing unclear as it's now allocated by the device driver
      and not the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD which is used to indicate whether
      the queue supports discard operations or not.  blkdev_issue_discard now
      allocates a one-page, sector-length payload which is the right thing
      for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>
      which did the prepare_discard_fn but not the different payload allocation
      yet.
Signed-off-by: Christoph Hellwig <hch@lst.de>
      1122a26f
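To sketch the two sides of this interface (a hedged illustration, not code from the patch: the foo_* names are hypothetical, and the exact "is this a discard?" test varies between kernel versions):

    #include <linux/blkdev.h>

    /* Setup: advertise discard support on the driver's queue. */
    static void foo_init_queue(struct request_queue *q)
    {
            queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
    }

    /* Request handling: discards now arrive on the normal I/O path, so the
     * driver simply branches on the request type (blk_discard_rq(req) in
     * kernels of this era; modern kernels test req_op(req) instead). */
    static int foo_handle_request(struct foo_dev *dev, struct request *req)
    {
            if (blk_discard_rq(req))
                    return foo_discard(dev, blk_rq_pos(req), blk_rq_sectors(req));
            return foo_transfer(dev, req);
    }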
  17. 22 Sep, 2009 1 commit
  18. 03 Aug, 2009 1 commit
  19. 26 May, 2009 1 commit
  20. 22 May, 2009 1 commit
  21. 11 May, 2009 4 commits
• block: implement and enforce request peek/start/fetch · 9934c8c0
      Tejun Heo authored
Till now the block layer allowed two separate modes of request execution.
      A request is always acquired from the request queue via
      elv_next_request().  After that, drivers are free to either dequeue it
      or process it without dequeueing.  Dequeue allows elv_next_request()
      to return the next request so that multiple requests can be in flight.
      
Executing requests without dequeueing mostly benefits drivers for simpler
devices which can't do scatter-gather and want to deal with segments
only, without considering request boundaries.  However, the
      benefit this brings is dubious and declining while the cost of the API
      ambiguity is increasing.  Segment based drivers are usually for very
      old or limited devices and as converting to dequeueing model isn't
      difficult, it doesn't justify the API overhead it puts on block layer
      and its more modern users.
      
      Previous patches converted all block low level drivers to dequeueing
      model.  This patch completes the API transition by...
      
      * renaming elv_next_request() to blk_peek_request()
      
      * renaming blkdev_dequeue_request() to blk_start_request()
      
      * adding blk_fetch_request() which is combination of peek and start
      
      * disallowing completion of queued (not started) requests
      
      * applying new API to all LLDs
      
Renamings are for consistency and to break out-of-tree code so that
it's apparent that out-of-tree drivers need updating.
      
      [ Impact: block request issue API cleanup, no functional change ]
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Mike Miller <mike.miller@hp.com>
      Cc: unsik Kim <donari75@gmail.com>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Laurent Vivier <Laurent@lvivier.info>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <petkovbb@googlemail.com>
      Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: Pierre Ossman <drzeus@drzeus.cx>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: Stefan Weinhuber <wein@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      9934c8c0
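A minimal sketch of the resulting issue pattern in a driver's request function (the foo_* names are hypothetical):

    #include <linux/blkdev.h>

    static void foo_request_fn(struct request_queue *q)
    {
            struct request *req;

            /* blk_fetch_request() == blk_peek_request() + blk_start_request():
             * the request is dequeued and marked started before it is used. */
            while ((req = blk_fetch_request(q)) != NULL) {
                    int error = foo_transfer(q->queuedata, req);

                    /* Completing a request that was never started is now disallowed. */
                    __blk_end_request_all(req, error);
            }
    }

A driver that wants to inspect the head of the queue before committing can still call blk_peek_request() and only blk_start_request() the request once it decides to issue it.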
• mtd_blkdevs: dequeue in-flight request · 1498ada7
      Tejun Heo authored
      mtd_blkdevs processes requests one-by-one synchronously from a kthread
      and can be easily converted to dequeueing model.  Convert it.
      
      [ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      1498ada7
• block: blk_rq_[cur_]_{sectors|bytes}() usage cleanup · 1011c1b9
      Tejun Heo authored
With the previous changes, the following are now guaranteed for all
      requests in any valid state.
      
      * blk_rq_sectors() == blk_rq_bytes() >> 9
      * blk_rq_cur_sectors() == blk_rq_cur_bytes() >> 9
      
      Clean up accessor usages.  Notable changes are
      
      * nbd,i2o_block: end_all used instead of explicit byte count
      * scsi_lib: unnecessary conditional on request type removed
      
      [ Impact: cleanup ]
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      1011c1b9
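In code terms the guarantee looks like this (a hedged sketch, not from the patch; foo_finish() is hypothetical):

    #include <linux/blkdev.h>

    static void foo_finish(struct request *req, int error)
    {
            /* Now guaranteed for any request in a valid state: */
            WARN_ON(blk_rq_sectors(req)     != blk_rq_bytes(req) >> 9);
            WARN_ON(blk_rq_cur_sectors(req) != blk_rq_cur_bytes(req) >> 9);

            /* ...so drivers no longer need to pass hand-computed byte
             * counts; completing the request as a whole is enough. */
            __blk_end_request_all(req, error);
    }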
• block: convert to pos and nr_sectors accessors · 83096ebf
      Tejun Heo authored
      With recent cleanups, there is no place where low level driver
      directly manipulates request fields.  This means that the 'hard'
      request fields always equal the !hard fields.  Convert all
      rq->sectors, nr_sectors and current_nr_sectors references to
      accessors.
      
While at it, drop the superfluous blk_rq_pos() < 0 test in swim.c.
      
      [ Impact: use pos and nr_sectors accessors ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Tested-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Acked-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Acked-by: Mike Miller <mike.miller@hp.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <petkovbb@googlemail.com>
      Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Cc: Eric Moore <Eric.Moore@lsi.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Dario Ballabio <ballabio_dario@emc.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: unsik Kim <donari75@gmail.com>
      Cc: Laurent Vivier <Laurent@lvivier.info>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      83096ebf
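The mechanical shape of the conversion, sketched for a hypothetical driver transfer routine (it replaces direct use of the raw request fields listed in the commit message with the accessors):

    #include <linux/blkdev.h>

    static void foo_do_transfer(struct foo_dev *dev, struct request *req)
    {
            sector_t     pos = blk_rq_pos(req);          /* start sector          */
            unsigned int cur = blk_rq_cur_sectors(req);  /* current segment only  */
            unsigned int all = blk_rq_sectors(req);      /* whole request         */

            foo_rw(dev, pos, cur, all);
    }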
  22. 27 Apr, 2009 1 commit
• block: replace end_request() with [__]blk_end_request_cur() · f06d9a2b
      Tejun Heo authored
      end_request() has been kept around for backward compatibility;
      however, it's about time for it to go away.
      
      * There aren't too many users left.
      
* Its use of @uptodate is pretty confusing.
      
      * In some cases, newer code ends up using mixture of end_request() and
        [__]blk_end_request[_all](), which is way too confusing.
      
      So, add [__]blk_end_request_cur() and replace end_request() with it.
      Most conversions are straightforward.  Noteworthy ones are...
      
      * paride/pcd: next_request() updated to take 0/-errno instead of 1/0.
      
      * paride/pf: pf_end_request() and next_request() updated to take
        0/-errno instead of 1/0.
      
      * xd: xd_readwrite() updated to return 0/-errno instead of 1/0.
      
      * mtd/mtd_blkdevs: blktrans_discard_request() updated to return
        0/-errno instead of 1/0.  Unnecessary local variable res
        initialization removed from mtd_blktrans_thread().
      
      [ Impact: cleanup ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Joerg Dorchain <joerg@dorchain.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Laurent Vivier <Laurent@lvivier.info>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: unsik Kim <donari75@gmail.com>
      f06d9a2b
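Sketching the calling-convention change for a hypothetical per-chunk completion helper (0/-errno per the new convention, instead of end_request()'s 1/0 uptodate flag):

    #include <linux/blkdev.h>

    static void foo_end_chunk(struct request *req, int error)
    {
            /* Completes only the current chunk of the request.  Returns true
             * while more chunks remain, false once the request is finished. */
            if (!__blk_end_request_cur(req, error)) {
                    /* request fully completed; the driver can fetch the next one */
            }
    }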
  23. 04 Apr, 2009 1 commit
• [MTD] driver model updates · 1f24b5a8
      David Brownell authored
      Update driver model support in the MTD framework, so it fits
      better into the current udev-based hotplug framework:
      
       - Each mtd_info now has a device node.  MTD drivers should set
         the dev.parent field to point to the physical device, before
         setting up partitions or otherwise declaring MTDs.
      
       - Those device nodes always map to /sys/class/mtdX device nodes,
         which no longer depend on MTD_CHARDEV.
      
       - Those mtdX sysfs nodes have a "starter set" of attributes;
         it's not yet sufficient to replace /proc/mtd.
      
       - Enabling MTD_CHARDEV provides /sys/class/mtdXro/ nodes and the
         /sys/class/mtd*/dev attributes (for udev, mdev, etc).
      
       - Include a MODULE_ALIAS_CHARDEV_MAJOR macro.  It'll work with
         udev creating the /dev/mtd* nodes, not just a static rootfs.
      
      So the sysfs structure is pretty much what you'd expect, except
      that readonly chardev nodes are a bit quirky.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      1f24b5a8
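For a driver author the visible change is small; a hedged sketch of a probe routine (the foo_* names are hypothetical), plus the module alias the commit mentions:

    #include <linux/platform_device.h>
    #include <linux/mtd/mtd.h>

    static int foo_mtd_probe(struct platform_device *pdev)
    {
            struct mtd_info *mtd = foo_setup_chip(pdev);   /* hypothetical helper */

            /* Point the MTD at its physical parent *before* registering it
             * (or its partitions), so /sys/class/mtdX gets a parent link. */
            mtd->dev.parent = &pdev->dev;

            return add_mtd_device(mtd) ? -ENODEV : 0;
    }

    /* Lets udev load the module when /dev/mtd* character devices are needed. */
    MODULE_ALIAS_CHARDEV_MAJOR(MTD_CHAR_MAJOR);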
  24. 03 Apr, 2009 1 commit