1. 20 Oct, 2006 2 commits
  2. 12 Oct, 2006 1 commit
  3. 04 Oct, 2006 1 commit
  4. 30 Sep, 2006 16 commits
    • [PATCH] CONFIG_BLOCK: blk_congestion_wait() fix · bcfd8d36
      Andrew Morton authored
      
      
      Don't just do nothing: it'll cause busywaits all over writeback and page
      reclaim.
      
      For now, take a fixed-length nap.  Will improve when NFS starts waking up
      throttled processes.
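      A minimal sketch of what the fixed-length nap could look like in the
      !CONFIG_BLOCK stub (illustrative only; the actual patch may differ in
      detail):

          long blk_congestion_wait(int rw, long timeout)
          {
                  /*
                   * With no block layer there is nothing to wake us early,
                   * so sleep for the whole timeout instead of returning at
                   * once and letting writeback/reclaim busywait.
                   */
                  return io_schedule_timeout(timeout);
          }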
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • [PATCH] BLOCK: Make it possible to disable the block layer [try #6] · 9361401e
      David Howells authored
      
      
      Make it possible to disable the block layer.  Not all embedded devices require
      it; some can make do with just JFFS2, NFS, ramfs, etc. - none of which requires
      the block layer to be present.
      
      This patch does the following:
      
       (*) Introduces CONFIG_BLOCK to disable the block layer, buffering and blockdev
           support.
      
       (*) Adds dependencies on CONFIG_BLOCK to any configuration item that controls
           an item that uses the block layer.  This includes:
      
           (*) Block I/O tracing.
      
           (*) Disk partition code.
      
           (*) All filesystems that are block based, eg: Ext3, ReiserFS, ISOFS.
      
           (*) The SCSI layer.  As far as I can tell, even SCSI chardevs use the
           	 block layer to do scheduling.  Some drivers that use SCSI facilities -
           	 such as USB storage - end up disabled indirectly from this.
      
           (*) Various block-based device drivers, such as IDE and the old CDROM
           	 drivers.
      
           (*) MTD blockdev handling and FTL.
      
           (*) JFFS - which uses set_bdev_super(), something it could avoid doing by
           	 taking a leaf out of JFFS2's book.
      
       (*) Makes most of the contents of linux/blkdev.h, linux/buffer_head.h and
           linux/elevator.h contingent on CONFIG_BLOCK being set.  sector_div() is,
           however, still used in places, and so is still available.
      
       (*) Also made contingent are the contents of linux/mpage.h, linux/genhd.h and
           parts of linux/fs.h.
      
       (*) Makes a number of files in fs/ contingent on CONFIG_BLOCK.
      
       (*) Makes mm/bounce.c (bounce buffering) contingent on CONFIG_BLOCK.
      
       (*) set_page_dirty() doesn't call __set_page_dirty_buffers() if CONFIG_BLOCK
           is not enabled.
      
       (*) fs/no-block.c is created to hold out-of-line stubs and things that are
           required when CONFIG_BLOCK is not set:
      
           (*) Default blockdev file operations (to give error ENODEV on opening);
               see the stub sketch after this list.
      
       (*) Makes some /proc changes:
      
           (*) /proc/devices does not list any blockdevs.
      
           (*) /proc/diskstats and /proc/partitions are contingent on CONFIG_BLOCK.
      
       (*) Makes some compat ioctl handling contingent on CONFIG_BLOCK.
      
       (*) If CONFIG_BLOCK is not defined, makes sys_quotactl() return -ENODEV if
           given a command other than Q_SYNC or if a special device is specified.
      
       (*) In init/do_mounts.c, no reference is made to the blockdev routines if
           CONFIG_BLOCK is not defined.  This does not prohibit NFS roots or JFFS2.
      
       (*) The bdflush, ioprio_set and ioprio_get syscalls can now be absent (return
           error ENOSYS by way of cond_syscall if so).
      
       (*) The seclvl_bd_claim() and seclvl_bd_release() security calls do nothing if
           CONFIG_BLOCK is not set, since they can't then happen.
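      As an illustration of the fs/no-block.c stubs mentioned in the list above,
      the default blockdev file operations for !CONFIG_BLOCK might look roughly
      like this (a sketch, not necessarily the exact code in the patch):

          #include <linux/fs.h>
          #include <linux/errno.h>

          /* opening any block device without a block layer is an error */
          static int no_blkdev_open(struct inode *inode, struct file *filp)
          {
                  return -ENODEV;
          }

          const struct file_operations def_blk_fops = {
                  .open = no_blkdev_open,
          };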
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • [PATCH] Allow file systems to differentiate between data and meta reads · 5404bc7a
      Jens Axboe authored
      
      
      We can use this information for making more intelligent priority
      decisions, and it will also be useful for blktrace.
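      For instance (an illustrative sketch; the flag names are assumptions, not
      necessarily what the patch uses), a filesystem could tag its metadata
      reads so the IO scheduler and blktrace can tell them apart from data
      reads:

          /* hypothetical request/bio modifier marking metadata IO */
          #define READ_META       (READ | (1 << BIO_RW_META))

          /* reading an inode table block rather than file data */
          submit_bh(READ_META, bh);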
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] Add blk_start_queueing() helper · dc72ef4a
      Jens Axboe authored
      
      
      CFQ implements this on its own now, but it's really block layer
      knowledge. Tells a device queue to start dispatching requests to
      the driver, taking care to unplug if needed. Also fixes the issue
      where as/cfq will invoke a stopped queue, which we really don't
      want.
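      A rough sketch of what such a helper might look like (illustrative only;
      the real implementation may differ):

          void blk_start_queueing(struct request_queue *q)
          {
                  if (unlikely(blk_queue_stopped(q)))
                          return;                 /* never poke a stopped queue */

                  if (!blk_queue_plugged(q))
                          q->request_fn(q);       /* dispatch to the driver now */
                  else
                          __generic_unplug_device(q);
          }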
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] Make sure all block/io scheduler setups are node aware · b5deef90
      Jens Axboe authored
      
      
      Some were kmalloc_node(), some were still kmalloc(). Change them all to
      kmalloc_node().
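      For example (illustrative; the variable names and q->node are
      assumptions):

          /* before: allocates on whatever node the caller happens to run on */
          cic = kmalloc(sizeof(*cic), gfp_mask);

          /* after: allocate on the node the queue itself lives on */
          cic = kmalloc_node(sizeof(*cic), gfp_mask, q->node);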
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] cfq-iosched: kill cfq_exit_lock · fc46379d
      Jens Axboe authored
      
      
      cfq_exit_lock is protecting two things now:
      
      - The per-ioc rbtree of cfq_io_contexts
      
      - The per-cfqd linked list of cfq_io_contexts
      
      The per-cfqd linked list can be protected by the queue lock, as it is (by
      definition) per cfqd as the queue lock is.
      
      The per-ioc rbtree is used and updated mainly by the process itself; the
      only outside user is the io priority changing code. If we move the
      priority change to not browse the rbtree, we can remove the locking
      from the rbtree updates and lookups completely. Let the ioprio_set
      syscall just mark processes as having their io priority changed and
      lazily update the private cfq io contexts the next time io is queued,
      and we can remove this locking as well.
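      A rough sketch of the lazy scheme (names are illustrative, not
      necessarily those used in the patch):

          /* ioprio_set path: just flag the io context, no tree walk, no lock */
          task->ioprio = ioprio;
          if (task->io_context)
                  task->io_context->ioprio_changed = 1;

          /* cfq, the next time this process queues io */
          if (unlikely(cic->ioc->ioprio_changed))
                  cfq_ioc_set_ioprio(cic->ioc);   /* apply the new priority */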
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] struct request: shrink and optimize some more · e6a1c874
      Jens Axboe authored
      
      
      Move some members around and unionize completion_data and rb_node since
      they cannot ever be used at the same time.
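      Roughly (a sketch of the idea, not the exact layout):

          struct request {
                  /* ... */
                  union {
                          struct rb_node rb_node; /* only while queued in an
                                                     io scheduler's sort tree */
                          void *completion_data;  /* only used after dispatch */
                  };
                  /* ... */
          };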
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] Remove ->rq_status from struct request · cdd60262
      Jens Axboe authored
      
      
      After Christoph's SCSI change, the only usage left is RQ_ACTIVE
      and RQ_INACTIVE. The block layer sets RQ_INACTIVE right before freeing
      the request, so any check for RQ_INACTIVE in a driver is a bug and
      indicates use-after-free.
      
      So kill/clean the remaining users - straightforward.
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] Remove struct request_list from struct request · 49171e5c
      Jens Axboe authored
      
      
      It is always identical to &q->rq, and we only use it for detecting
      whether this request came out of our mempool or not. So replace it
      with an additional ->flags bit flag.
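      Sketch of the idea (the flag name and bit position are assumptions):

          /* request came out of q->rq's mempool */
          #define REQ_ALLOCED     (1 << 14)

          /* on free, the flag replaces the old rq->rl == &q->rq test */
          if (rq->flags & REQ_ALLOCED)
                  mempool_free(rq, q->rq.rq_pool);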
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] Remove ->waiting member from struct request · c00895ab
      Jens Axboe authored
      
      
      As the comment in blkdev.h indicates, we can fold it into ->end_io_data
      usage, as that is really what ->waiting is. Fix up the users of
      blk_end_sync_rq().
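      Roughly (a sketch; details may differ from the actual patch):

          /* the waiter parks its completion in ->end_io_data ... */
          DECLARE_COMPLETION(wait);
          rq->end_io_data = &wait;
          rq->end_io = blk_end_sync_rq;

          /* ... and the completion callback digs it back out */
          static void blk_end_sync_rq(struct request *rq, int error)
          {
                  struct completion *waiting = rq->end_io_data;

                  rq->end_io_data = NULL;
                  __blk_put_request(rq->q, rq);
                  complete(waiting);
          }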
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • [PATCH] Add one more pointer to struct request for IO scheduler usage · ff7d145f
      Jens Axboe authored
      
      
      Then we have enough room in the request to get rid of the dynamic
      allocations in CFQ/AS.
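      Sketch (the field name is an assumption):

          struct request {
                  /* ... */
                  void *elevator_private;
                  void *elevator_private2;        /* the extra pointer */
                  /* ... */
          };

      With two pointers available, CFQ/AS can point straight at their
      per-request state instead of kmalloc'ing a small wrapper structure for
      every request.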
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] as-iosched: remove arq->is_sync member · 9e2585a8
      Jens Axboe authored
      
      
      We can track this in struct request.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
    • [PATCH] elevator: abstract out the rbtree sort handling · 2e662b65
      Jens Axboe authored
      
      
      The rbtree sort/lookup/reposition logic is mostly duplicated in
      cfq/deadline/as, so move it to the elevator core. The io schedulers
      still provide the actual rb root, as we don't want to impose any sort
      of specific handling on the schedulers.
      
      Introduce the helpers and rb_node in struct request to help migrate the
      IO schedulers.
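      For example, a sector-keyed lookup in the elevator core might look
      roughly like this (a sketch; the actual helpers may differ):

          struct request *elv_rb_find(struct rb_root *root, sector_t sector)
          {
                  struct rb_node *n = root->rb_node;
                  struct request *rq;

                  while (n) {
                          rq = rb_entry(n, struct request, rb_node);

                          if (sector < rq->sector)
                                  n = n->rb_left;
                          else if (sector > rq->sector)
                                  n = n->rb_right;
                          else
                                  return rq;
                  }

                  return NULL;
          }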
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] elevator: move the backmerging logic into the elevator core · 9817064b
      Jens Axboe authored
      
      
      Right now, every IO scheduler implements its own backmerging (except for
      noop, which does no merging). That results in duplicated code for
      essentially the same operation, which is never a good thing. This patch
      moves the backmerging out of the io schedulers and into the elevator
      core. We save 1.6kb of text and as a bonus get backmerging for noop as
      well. Win-win!
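      The core can find back-merge candidates with a hash keyed on a request's
      end sector, roughly like this (illustrative):

          /* a bio starting at sector S can be back-merged into a request
           * ending at S, so hash requests by their last sector + 1 */
          #define rq_hash_key(rq)         ((rq)->sector + (rq)->nr_sectors)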
      Signed-off-by: Jens Axboe <axboe@suse.de>
    • [PATCH] Split struct request ->flags into two parts · 4aff5e23
      Jens Axboe authored
      
      
      Right now ->flags is a bit of a mess: some are request types, and
      others are just modifiers. Clean this up by splitting it into
      ->cmd_type and ->cmd_flags. This allows introduction of generic
      Linux block message types, useful for sending generic Linux commands
      to block devices.
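      Roughly (a sketch of the split; the exact type names and values are
      illustrative):

          enum rq_cmd_type_bits {
                  REQ_TYPE_FS = 1,        /* regular filesystem request */
                  REQ_TYPE_BLOCK_PC,      /* SCSI passthrough command */
                  REQ_TYPE_SPECIAL,       /* driver-private */
                  /* ... */
          };

          struct request {
                  /* ... */
                  unsigned int cmd_flags;         /* modifiers: rw, barrier, ... */
                  enum rq_cmd_type_bits cmd_type; /* what kind of request it is */
                  /* ... */
          };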
      Signed-off-by: Jens Axboe <axboe@suse.de>
  5. 29 Sep, 2006 1 commit
  6. 22 Sep, 2006 1 commit
  7. 31 Aug, 2006 1 commit
  8. 23 Jun, 2006 3 commits
  9. 10 Jun, 2006 1 commit
    • [SCSI] remove RQ_SCSI_* flags · 8d7feac3
      Christoph Hellwig authored
      
      
      The RQ_SCSI_* flags are a vestige of a long past history.  The EH code
      still sets them but we never make use of that information.  The other
      user is pluto.c, which never had a chance to work but needs to be kept
      compiling to keep Davem happy, so copy the definitions over there.
      
      We could probably get rid of RQ_ACTIVE/RQ_INACTIVE as well with some
      work; there are only two more or less bogus-looking uses in ubd and scsi.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
  10. 26 Apr, 2006 1 commit
  11. 13 Apr, 2006 1 commit
    • [SCSI] unify SCSI_IOCTL_SEND_COMMAND implementations · 21b2f0c8
      Christoph Hellwig authored
      
      
      We currently have two implementations of this obsolete ioctl, one in
      the block layer and one in the scsi code.  Both of them have drawbacks.
      
      This patch kills the scsi layer version after updating the block version
      with the missing bits:
      
       - argument checking
       - use scatterlist I/O
       - set number of retries based on the submitted command
      
      This is the last user of non-S/G I/O except for the gdth driver, so
      getting this in ASAP and through the scsi tree would be nice, so that
      we can kill the non-S/G I/O path.  Jens, what do you think about adding
      a check for non-S/G I/O in the midlayer?
      
      Thanks to Or Gerlitz for testing this patch.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
  12. 28 Mar, 2006 1 commit
    • [BLOCK] cfq-iosched: seek and async performance fixes · 206dc69b
      Jens Axboe authored
      
      
      Detect whether a given process is seeky and, if so, (mostly) disable its
      idle window. We still allow just a little idle time, just enough
      to allow that process to submit a new request. That is needed to maintain
      fairness across priority groups.
      
      In some cases we could set up several async queues. This is not optimal
      from a performance POV, since we want all async io in one queue so we can
      do good sorting on it. It also impacted sync queues, as async io got too
      much slice time.
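      A rough sketch of the kind of check involved (the threshold and names
      are illustrative, not the patch's exact values):

          /* consider a process seeky if its mean seek distance is large */
          #define CIC_SEEKY(cic)  ((cic)->seek_mean > (128 * 1024))

          if (!cfqd->cfq_slice_idle ||
              (sample_valid(cic->seek_samples) && CIC_SEEKY(cic)))
                  enable_idle = 0;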
      Signed-off-by: Jens Axboe <axboe@suse.de>
  13. 27 Mar, 2006 1 commit
  14. 23 Mar, 2006 1 commit
  15. 18 Mar, 2006 3 commits
  16. 24 Jan, 2006 1 commit
  17. 09 Jan, 2006 2 commits
  18. 06 Jan, 2006 2 commits