1. 10 Sep, 2009 4 commits
  2. 05 Sep, 2009 3 commits
    • [SCSI] fcoe, libfc: fully makes use of per cpu exch pool and then removes em_lock · b2f0091f
      Vasu Dev authored
      
      
       1. Updates fcoe_rcv() to queue each incoming frame to the fcoe
          per-cpu thread on which the frame's exchange originated, and
          to simply use the current cpu for request exchanges not
          originated by the initiator. Wrapping this code in CONFIG_SMP
          is redundant, so the CONFIG_SMP guards around it are removed.
      
       2. Updates fc_exch_em_alloc, fc_exch_delete and fc_exch_find to
          use the per cpu exch pools; fc_exch_delete is a rename of the
          older fc_exch_mgr_delete_ep, since an ep/exch is now deleted
          from a pool of its EM and the shorter name is the better fit.

          Updates these functions to map an exch id to its index in the
          exch pool using fc_cpu_mask, fc_cpu_order and the EM min_xid,
          following the detailed explanation in the previous patch: the
          lower fc_cpu_mask bits of the exch id are the cpu number, and
          the upper bits are the sum of the EM min_xid and the exch
          index in the pool (see the sketch after this list).

          Uses the pool's next_index to track exch allocation from the
          pool, with pool_max_index as the upper bound of the exches
          array in the pool.
      
       3. Adds an exch pool pointer to fc_exch so that fc_exch_delete
          can free an exch back to its pool.
      
       4. Updates fc_exch_mgr_reset to reset all exch pools of an EM.
          This required adding an fc_exch_pool_reset function to reset
          the exches in a pool, and having fc_exch_mgr_reset call
          fc_exch_pool_reset for each pool within each EM of a lport.
      
       5. Removes the exches array, em_lock, next_xid and total_exches
          from struct fc_exch_mgr; they are no longer needed once the
          per cpu exch pools are in use. Also removes the unused
          max_read and last_read from struct fc_exch_mgr.
      
       6. Updates the locking notes for the exch pool lock versus the
          fc_exch lock, and uses the pool lock in exch allocation,
          lookup and reset.
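
       A minimal userspace sketch of the id-to-pool mapping described in
       item 2 above (illustration only, not the libfc code; the order,
       mask and min_xid values here are made up for the example):

       	#include <assert.h>
       	#include <stdint.h>
       	#include <stdio.h>

       	/* Example values: 4 cpus -> order 2, mask 0x3. */
       	#define FC_CPU_ORDER	2
       	#define FC_CPU_MASK	((1u << FC_CPU_ORDER) - 1)
       	#define EM_MIN_XID	0x0010	/* low FC_CPU_ORDER bits are zero */

       	/* exch id -> cpu that originated it (lower bits) */
       	static unsigned int xid_to_cpu(uint16_t xid)
       	{
       		return xid & FC_CPU_MASK;
       	}

       	/* exch id -> index into that cpu's pool (upper bits) */
       	static unsigned int xid_to_pool_index(uint16_t xid)
       	{
       		return (xid - EM_MIN_XID) >> FC_CPU_ORDER;
       	}

       	/* (cpu, pool index) -> exch id, the inverse used at alloc time */
       	static uint16_t pool_slot_to_xid(unsigned int cpu, unsigned int index)
       	{
       		return (uint16_t)(EM_MIN_XID + (index << FC_CPU_ORDER) + cpu);
       	}

       	int main(void)
       	{
       		uint16_t xid = pool_slot_to_xid(3, 5);

       		printf("xid=0x%04x cpu=%u index=%u\n", (unsigned int)xid,
       		       xid_to_cpu(xid), xid_to_pool_index(xid));
       		assert(xid_to_cpu(xid) == 3 && xid_to_pool_index(xid) == 5);
       		return 0;
       	}
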
       Signed-off-by: Vasu Dev <vasu.dev@intel.com>
       Signed-off-by: Robert Love <robert.w.love@intel.com>
       Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      b2f0091f
    • [SCSI] fcoe, libfc: adds per cpu exch pool within exchange manager(EM) · e4bc50be
      Vasu Dev authored
      
      
       Adds per cpu exch pools for these reasons:
      
        1. Currently an EM instance is shared across all cpus to manage
           all exches for all cpus. This requires taking em_lock across
           all cpus for every exch alloc, free, lookup and reset on each
           frame, which makes em_lock expensive. Per cpu exch pools with
           their own per cpu pool locks should reduce locking contention
           in the fast path for exch alloc, free and lookup.
      
        2. Per cpu exch pools should also improve the cache hit ratio,
           since all frames of an exch are processed on the same cpu on
           which the exch originated.
      
       This patch is only prep work to keep the complexity of the next
       patch low, so it only sets up the per cpu exch pools and the
       related helper functions used by the next patch. The next patch
       fully makes use of the per cpu exch pools in all code paths,
       i.e. tx, rx and reset.
      
       Divides the per-EM exch id range equally across all cpus to set
       up the per cpu exch pools. The division is such that the lower
       bits of an exch id carry the number of the cpu on which the exch
       originated; later, a simple bitwise AND of an incoming frame's
       exch id with fc_cpu_mask recovers that cpu number and directs all
       frames to the same cpu on which the exch originated. This
       requires global fc_cpu_mask and fc_cpu_order values, initialized
       from the maximum possible cpu count nr_cpu_ids rounded up to a
       power of two; they are used to map between an exch id and the
       exch pointer array index in a pool during the exch allocation,
       find and reset code paths.
      
       Adds a check in fc_exch_mgr_alloc() to ensure that the lower bits
       of the specified min_xid are zero, since those bits carry the cpu
       number.
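
       A small standalone sketch of the fc_cpu_order/fc_cpu_mask setup
       and the min_xid check described above (illustration only, not
       kernel code; nr_cpus and min_xid are example inputs):

       	#include <stdio.h>

       	/* Round the cpu count up to a power of two and derive order/mask. */
       	static unsigned int cpu_order(unsigned int nr_cpus)
       	{
       		unsigned int order = 0;

       		while ((1u << order) < nr_cpus)
       			order++;
       		return order;
       	}

       	int main(void)
       	{
       		unsigned int nr_cpus = 6;			/* example value */
       		unsigned int order = cpu_order(nr_cpus);	/* 3 */
       		unsigned int mask = (1u << order) - 1;		/* 0x7 */
       		unsigned int min_xid = 0x0018;			/* low bits zero */

       		printf("order=%u mask=0x%x\n", order, mask);

       		/* The check described above: reject a min_xid whose low
       		 * bits are nonzero, since those bits carry the cpu number. */
       		if (min_xid & mask)
       			printf("min_xid 0x%x rejected\n", min_xid);
       		else
       			printf("min_xid 0x%x accepted\n", min_xid);
       		return 0;
       	}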
      
       Adds and initializes struct fc_exch_pool with all the fields
       required to manage the exches in a pool.
      
       Allocates a per cpu struct fc_exch_pool together with memory for
       the exches array covering the range of exches per pool; the
       exches array is laid out immediately after struct fc_exch_pool.
      
       Adds fc_exch_ptr_get/set() helper functions to get/set an exch
       pointer in a pool's exches array at the specified index.
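
       A rough userspace sketch of this pool layout and the get/set
       helpers (struct and function names here are placeholders, not the
       exact libfc definitions):

       	#include <stdio.h>
       	#include <stdlib.h>

       	struct exch {
       		unsigned int xid;
       	};

       	/* Pool header; the exch pointer array sits right after it. */
       	struct exch_pool {
       		unsigned int next_index;	/* next slot to try */
       		unsigned int total_exches;	/* live exches in pool */
       		/* struct exch *exches[] follows this struct in memory */
       	};

       	static struct exch **pool_exches(struct exch_pool *pool)
       	{
       		return (struct exch **)(pool + 1);
       	}

       	static struct exch *exch_ptr_get(struct exch_pool *pool, unsigned int i)
       	{
       		return pool_exches(pool)[i];
       	}

       	static void exch_ptr_set(struct exch_pool *pool, unsigned int i,
       				 struct exch *ep)
       	{
       		pool_exches(pool)[i] = ep;
       	}

       	int main(void)
       	{
       		unsigned int pool_size = 64;
       		struct exch ep = { .xid = 42 };
       		struct exch_pool *pool;

       		pool = calloc(1, sizeof(*pool) +
       				 pool_size * sizeof(struct exch *));
       		if (!pool)
       			return 1;

       		exch_ptr_set(pool, 5, &ep);
       		printf("slot 5 xid = %u\n", exch_ptr_get(pool, 5)->xid);
       		free(pool);
       		return 0;
       	}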
      
       Increases the default FCOE_MAX_XID from 0x07EF to 0x0FFF, so that
       more exches are available per cpu after the exch id range is
       divided across all cpus into the per cpu pools as described
       above.
       Signed-off-by: Vasu Dev <vasu.dev@intel.com>
       Signed-off-by: Robert Love <robert.w.love@intel.com>
       Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      e4bc50be
    • [SCSI] iscsi_tcp: add new conn error to indicate tcp conn closed · d1af8a32
      Mike Christie authored
      
      
       If the target closes the connection, we detect it in the
       state_changed or data_ready callout. This adds a new conn error
       value for that case, so it is not confused with the case where
       the initiator itself throws a conn error and drops the
       connection.
       Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
       Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      d1af8a32
  3. 22 Aug, 2009 12 commits
  4. 21 Aug, 2009 2 commits
  5. 20 Aug, 2009 1 commit
  6. 18 Aug, 2009 1 commit
    • mm: revert "oom: move oom_adj value" · 0753ba01
      KOSAKI Motohiro authored
       Commit 2ff05b2b (oom: move oom_adj value) moved the oom_adj value
       to the mm_struct.  It was a very good first step towards
       sanitizing OOM.

       However, Paul Menage reported that the commit causes a regression
       in his job scheduler: the current OOM logic can kill an
       OOM_DISABLED process.
      
       Why? His program has code similar to the following.
      
       	...
       	set_oom_adj(OOM_DISABLE); /* the job scheduler must never be OOM killed */
       	...
       	if (vfork() == 0) {
       		set_oom_adj(0); /* the invoked child may be killed */
       		execve("foo-bar-cmd", argv, envp);
       	}
       	....
      
       A vfork() parent and child share the same mm_struct, so the
       set_oom_adj(0) above does not only change oom_adj for the vfork()
       child, it also changes oom_adj for the vfork() parent.  The
       vfork() parent (the job scheduler) therefore lost its OOM
       immunity and was killed.
      
       The fork-set-exec idiom is used very frequently in userland
       programs; we must not break this assumption.
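
       For illustration, a minimal userspace sketch of what a
       set_oom_adj() helper like the one in the example above might look
       like, assuming it simply writes the value to /proc/self/oom_adj
       (the kernel headers of this era define OOM_DISABLE as -17):

       	#include <stdio.h>

       	#define OOM_DISABLE	(-17)

       	/* Write an oom_adj value for the calling process. */
       	static int set_oom_adj(int value)
       	{
       		FILE *f = fopen("/proc/self/oom_adj", "w");

       		if (!f)
       			return -1;
       		fprintf(f, "%d\n", value);
       		fclose(f);
       		return 0;
       	}

       	int main(void)
       	{
       		set_oom_adj(OOM_DISABLE);	/* protect this process */
       		/* ... fork/exec children that reset oom_adj to 0 ... */
       		return 0;
       	}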
      
       This patch therefore reverts commit 2ff05b2b and the related commits.
      
       Reverted commit list
       ---------------------
       - commit 2ff05b2b (oom: move oom_adj value from task_struct to mm_struct)
       - commit 4d8b9135 (oom: avoid unnecessary mm locking and scanning for OOM_DISABLE)
       - commit 81236810 (oom: only oom kill exiting tasks with attached memory)
       - commit 933b787b (mm: copy over oom_adj value at fork time)
       Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Mel Gorman <mel@csn.ul.ie>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0753ba01
  7. 17 Aug, 2009 1 commit
  8. 16 Aug, 2009 4 commits
  9. 13 Aug, 2009 2 commits
    • perf: Rework/fix the whole read vs group stuff · 3dab77fb
      Peter Zijlstra authored
      
      
      Replace PERF_SAMPLE_GROUP with PERF_SAMPLE_READ and introduce
      PERF_FORMAT_GROUP to deal with group reads in a more generic
      way.
      
      This allows you to get group reads out of read() as well.
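
       For reference, a userspace sketch of parsing such a group read,
       assuming only PERF_FORMAT_GROUP | PERF_FORMAT_ID are set (struct
       names are local to this example; other format bits would add
       time_enabled/time_running fields before the array):

       	#include <stdint.h>
       	#include <stdio.h>

       	struct group_value {
       		uint64_t value;		/* counter value */
       		uint64_t id;		/* from PERF_FORMAT_ID */
       	};

       	struct group_read {
       		uint64_t nr;		/* counters in the group */
       		struct group_value cntr[];
       	};

       	static void dump_group(const struct group_read *gr)
       	{
       		uint64_t i;

       		for (i = 0; i < gr->nr; i++)
       			printf("id=%llu value=%llu\n",
       			       (unsigned long long)gr->cntr[i].id,
       			       (unsigned long long)gr->cntr[i].value);
       	}

       	int main(void)
       	{
       		/* Fabricated data standing in for what read() on the
       		 * group leader returns: nr, then value/id pairs. */
       		uint64_t raw[] = { 2, 1000, 1, 500, 2 };

       		dump_group((const struct group_read *)raw);
       		return 0;
       	}
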
       Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey J Ashford <cjashfor@us.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      LKML-Reference: <20090813103655.117411814@chello.nl>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3dab77fb
    • perf_counter: Provide hw_perf_counter_setup_online() APIs · 28402971
      Ingo Molnar authored
      
      
       Provide weak aliases for hw_perf_counter_setup_online(). This is
       used by the BTS patches (for v2.6.32), but it interacts with
       fixes, so propagate it upstream now.  (It has no effect as of
       yet.)
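
       The weak-default pattern this relies on, as a tiny standalone C
       example (generic GCC weak attribute; the kernel spells it __weak,
       and the hook name below is only a stand-in):

       	#include <stdio.h>

       	/* Weak default: does nothing. Linking in a non-weak definition
       	 * with the same name replaces it, which is how per-arch hooks
       	 * get their default no-op behaviour. */
       	__attribute__((weak)) void setup_online_hook(int cpu)
       	{
       		(void)cpu;
       	}

       	int main(void)
       	{
       		setup_online_hook(0);	/* weak default unless overridden */
       		printf("hook called\n");
       		return 0;
       	}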
      
      Also export perf_counter_output() to architecture code.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      28402971
  10. 12 Aug, 2009 1 commit
  11. 10 Aug, 2009 4 commits
    • perf_counter: Zero dead bytes from ftrace raw samples size alignment · 1853db0e
      Frederic Weisbecker authored
      
      
       After aligning the ftrace raw samples, the dead padding bytes
       contain random data from the stack.  We don't want to leak these
       to userspace, so zero them out.
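
       The idea of the fix, as a small userspace sketch (not the kernel
       code): after rounding the raw record up to an 8-byte boundary,
       clear the padding so stale stack bytes never reach the output:

       	#include <stdint.h>
       	#include <stdio.h>
       	#include <string.h>

       	static size_t align8(size_t x)
       	{
       		return (x + 7) & ~(size_t)7;
       	}

       	/* Copy len raw bytes into an 8-byte-aligned record and zero the
       	 * padding, so the dead bytes cannot leak stack contents. */
       	static size_t emit_raw(uint8_t *dst, const uint8_t *src, size_t len)
       	{
       		size_t total = align8(len);

       		memcpy(dst, src, len);
       		memset(dst + len, 0, total - len);	/* the fix */
       		return total;
       	}

       	int main(void)
       	{
       		uint8_t out[16];
       		const uint8_t raw[] = { 1, 2, 3, 4, 5 };	/* 5 -> 8 bytes */
       		size_t i, n = emit_raw(out, raw, sizeof(raw));

       		for (i = 0; i < n; i++)
       			printf("%02x ", out[i]);
       		printf("\n");
       		return 0;
       	}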
      
      Before:
      
      	0x2de88 [0x50]: event: 9
      	.
      	. ... raw event: size 80 bytes
      	.  0000:  09 00 00 00 01 00 50 00 d0 c7 00 81 ff ff ff ff  ......P........
      	.  0010:  68 01 00 00 68 01 00 00 2c 00 00 00 00 00 00 00  h...h...,......
      	.  0020:  2c 00 00 00 2b 00 01 02 68 01 00 00 68 01 00 00  ,...+...h...h..
      	.  0030:  6b 6f 6e 64 65 6d 61 6e 64 2f 30 00 00 00 00 00  kondemand/0....
      	.  0040:  68 01 00 00 40 7f 46 81 ff ff ff ff 00 10 1b 7f  h...@.F........
                                                            ^  ^  ^  ^
                                                               Leak
      
      After:
      
      	0x2d318 [0x50]: event: 9
      	.
      	. ... raw event: size 80 bytes
      	.  0000:  09 00 00 00 01 00 50 00 d0 c7 00 81 ff ff ff ff  ......P........
      	.  0010:  68 01 00 00 68 01 00 00 68 14 00 00 00 00 00 00  h...h...h......
      	.  0020:  2c 00 00 00 2b 00 01 02 68 01 00 00 68 01 00 00  ,...+...h...h..
      	.  0030:  6b 6f 6e 64 65 6d 61 6e 64 2f 30 00 00 00 00 00  kondemand/0....
      	.  0040:  68 01 00 00 a0 80 46 81 ff ff ff ff 00 00 00 00  h.....F........
                                                            ^  ^  ^  ^
      							 Fixed
       Reported-by: Peter Zijlstra <peterz@infradead.org>
       Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <1249915116-5210-1-git-send-email-fweisbec@gmail.com>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1853db0e
    • perf_counter: Subtract the buffer size field from the event record size · 304703ab
      Frederic Weisbecker authored
      
      
       We compute the perf raw sample size by aligning the raw ftrace
       event size plus the buffer size field itself.  We do that instead
       of aligning only the perf raw sample size, so that we can save
       some space in some cases.

       But this buffer size field is not stored in the perf raw sample,
       so we must subtract its size from the total once the alignment
       has been computed; otherwise we end up with a useless u32 field
       in the buffer.
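
       The arithmetic described above, as a standalone sketch (example
       sizes only): the u32 size field takes part in the alignment
       calculation and is then subtracted again, since it is not stored
       in the sample:

       	#include <stdint.h>
       	#include <stdio.h>

       	static uint32_t align8(uint32_t x)
       	{
       		return (x + 7) & ~(uint32_t)7;
       	}

       	int main(void)
       	{
       		uint32_t raw_size = 42;	/* example ftrace record size */

       		/* Align the record plus its u32 size field... */
       		uint32_t padded = align8(raw_size + sizeof(uint32_t));
       		/* ...then drop the size field, since it is not stored. */
       		uint32_t stored = padded - sizeof(uint32_t);

       		printf("raw=%u padded=%u stored=%u\n", raw_size, padded, stored);
       		return 0;
       	}
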
       Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
       Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20090810141129.GA5124@nowhere>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      304703ab
    • locking, sched: Give waitqueue spinlocks their own lockdep classes · 2fc39111
      Peter Zijlstra authored
      
      
       Give waitqueue spinlocks their own lockdep classes when they
       are initialised from init_waitqueue_head().  This means that
       struct wait_queue::func functions can operate on other waitqueues.
      
      This is used by CacheFiles to catch the page from a backing fs
      being unlocked and to wake up another thread to take a copy of
      it.
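
       A simplified illustration of the pattern this relies on (names
       are stand-ins, not the kernel's definitions): the init helper is
       wrapped in a macro that declares a static key per call site, so
       every waitqueue initialised at a given site gets its own class:

       	#include <stdio.h>

       	struct class_key { int dummy; };

       	struct waitqueue {
       		struct class_key *key;	/* lockdep class of this queue */
       	};

       	static void __init_waitqueue(struct waitqueue *q, struct class_key *key)
       	{
       		q->key = key;
       	}

       	/* One static key object per macro expansion site. */
       	#define init_waitqueue(q)				\
       		do {						\
       			static struct class_key __key;		\
       			__init_waitqueue((q), &__key);		\
       		} while (0)

       	int main(void)
       	{
       		struct waitqueue a, b;

       		init_waitqueue(&a);	/* site 1: its own key */
       		init_waitqueue(&b);	/* site 2: a different key */
       		printf("distinct classes: %s\n",
       		       a.key != b.key ? "yes" : "no");
       		return 0;
       	}
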
       Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
       Signed-off-by: David Howells <dhowells@redhat.com>
       Tested-by: Takashi Iwai <tiwai@suse.de>
      Cc: linux-cachefs@redhat.com
      Cc: torvalds@osdl.org
      Cc: akpm@linux-foundation.org
      LKML-Reference: <20090810113305.17284.81508.stgit@warthog.procyon.org.uk>
      Signed-off-by: default avatarIngo Molnar <mingo@elte.hu>
      2fc39111
    • perf_counter: Correct PERF_SAMPLE_RAW output · a044560c
      Peter Zijlstra authored
      
      
      PERF_SAMPLE_* output switches should unconditionally output the
      correct format, as they are the only way to unambiguously parse
      the PERF_EVENT_SAMPLE data.
       Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
       Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1249896447.17467.74.camel@twins>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a044560c
  12. 09 Aug, 2009 3 commits
    • perf_counter: Fix tracepoint sampling to be part of generic sampling · 3a43ce68
      Frederic Weisbecker authored
      
      
      Based on Peter's comments, make tracepoint sampling generic
      just like all the other sampling bits are. This is a rename
      with no code changes:
      
      - PERF_SAMPLE_TP_RECORD to PERF_SAMPLE_RAW
      - struct perf_tracepoint_record to perf_raw_record
      
       We want the mechanism that transports raw tracepoint samples into
       the perf ring buffer to be generalized and usable by any type of
       counter.
      
       Reported-by: Peter Zijlstra <peterz@infradead.org>
       Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1249698400-5441-4-git-send-email-fweisbec@gmail.com>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3a43ce68
    • perf_counter: Fix/complete ftrace event records sampling · f413cdb8
      Frederic Weisbecker authored
      
      
      This patch implements the kernel side support for ftrace event
      record sampling.
      
      A new counter sampling attribute is added:
      
         PERF_SAMPLE_TP_RECORD
      
       which requests sampling of the ftrace event record. In this case,
       if a PERF_TYPE_TRACEPOINT counter is active and a tracepoint
       fires, we emit the tracepoint binary record to the perf counter
       event buffer as a sample.
      
      Result, after setting PERF_SAMPLE_TP_RECORD attribute from perf
      record:
      
       perf record -f -F 1 -a -e workqueue:workqueue_execution
       perf report -D
      
       0x21e18 [0x48]: event: 9
       .
       . ... raw event: size 72 bytes
       .  0000:  09 00 00 00 01 00 48 00 d0 c7 00 81 ff ff ff ff  ......H........
       .  0010:  0a 00 00 00 0a 00 00 00 21 00 00 00 00 00 00 00  ........!......
       .  0020:  2b 00 01 02 0a 00 00 00 0a 00 00 00 65 76 65 6e  +...........eve
       .  0030:  74 73 2f 31 00 00 00 00 00 00 00 00 0a 00 00 00  ts/1...........
       .  0040:  e0 b1 31 81 ff ff ff ff                          .......
      .
      0x21e18 [0x48]: PERF_EVENT_SAMPLE (IP, 1): 10: 0xffffffff8100c7d0 period: 33
      
      The raw ftrace binary record starts at offset 0020.
      
      Translation:
      
       struct trace_entry {
      	type		= 0x2b = 43;
      	flags		= 1;
      	preempt_count	= 2;
      	pid		= 0xa = 10;
      	tgid		= 0xa = 10;
       }
      
       thread_comm = "events/1"
       thread_pid  = 0xa = 10;
       func	    = 0xffffffff8131b1e0 = flush_to_ldisc()
      
      What will come next?
      
       - Userspace support ('perf trace'), 'flight data recorder' mode
         for perf trace, etc.
      
        - The unconditional copy from the profiling callback has a cost
          even when nobody wants such sampling to occur; this needs to
          be fixed in the future. For that we need instant access to the
          perf counter attributes, which is a matter of adding a flag to
          struct ftrace_event.
      
        - Take care of event recursion! Don't ever try to record a lock
          event, for example: some locking is used in the profiling fast
          path and leads to tracing recursion. That will be fixed using
          raw spinlocks or recursion protection.
      
       - [...]
      
       - Profit! :-)
       Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Gabriel Munteanu <eduard.munteanu@linux360.ro>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f413cdb8
    • perf_counter, ftrace: Fix perf_counter integration · 3a659305
      Peter Zijlstra authored
      
      
       Adds a possible second part to the assign argument of TP_EVENT().
      
        TP_perf_assign(
      	__perf_count(foo);
      	__perf_addr(bar);
        )
      
       Which, when specified, makes the swcounter increment by @foo
       instead of the usual 1, and report @bar for PERF_SAMPLE_ADDR (the
       data address associated with the event) when this triggers a
       counter overflow.
       Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
       Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3a659305
  13. 07 Aug, 2009 2 commits
    • bzip2/lzma/gzip: fix comments describing decompressor API · daeb6b6f
      Phillip Lougher authored
      
      
      Fix and improve comments in decompress/generic.h that describe the
      decompressor API.  Also remove an unused definition, and rename INBUF_LEN
      in lib/decompress_inflate.c to conform to bzip2/lzma naming.
       Signed-off-by: Phillip Lougher <phillip@lougher.demon.co.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      daeb6b6f
    • mm: make set_mempolicy(MPOL_INTERLEAV) N_HIGH_MEMORY aware · 4bfc4495
      KAMEZAWA Hiroyuki authored
       At first, init_task's mems_allowed is initialized like this:
        init_task->mems_allowed == node_state[N_POSSIBLE]

       And cpuset's top_cpuset mask is initialized like this:
        top_cpuset->mems_allowed = node_state[N_HIGH_MEMORY]
      
       Before 2.6.29, policy's mems_allowed was initialized like this:

         1. update task->mems_allowed from its cpuset->mems_allowed.
         2. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)

       The task's mems_allowed was updated with reference to
       top_cpuset's, and cpuset's mems_allowed is always aware of
       N_HIGH_MEMORY.
      
       In 2.6.30, after commit 58568d2a ("cpuset,mm: update tasks'
       mems_allowed in time"), policy's mems_allowed is initialized like
       this:

         1. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)

       Here, if the task is in top_cpuset, task->mems_allowed is not
       updated from init's one.  Assume the user executes a command such
       as # numactl --interleave=all ...; then

         policy->mems_allowed = nodes_and(N_POSSIBLE, ALL_SET_MASK)
      
       Then policy->mems_allowed can include a possible node which has
       no pgdat.

       MPOL_INTERLEAVE just scans the nodemask of task->mems_allowed and
       directly accesses

         NODE_DATA(nid)->zonelist even if NODE_DATA(nid) == NULL
      
       What we need, then, is to make policy->mems_allowed aware of
       N_HIGH_MEMORY.  This patch does that.  But doing so puts extra
       nodemasks on the stack; since cpumask already has CPUMASK_ALLOC()
       as an interface for this, I added an equivalent for nodemask.
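
       As a plain-bitmask illustration of the intended computation (the
       kernel uses nodemask_t and node_states[N_HIGH_MEMORY], not these
       toy masks):

       	#include <stdio.h>

       	int main(void)
       	{
       		/* Bit n == node n in this toy example. */
       		unsigned long possible_nodes = 0xff;	/* N_POSSIBLE: nodes 0-7 */
       		unsigned long high_mem_nodes = 0x03;	/* nodes 0-1 have memory */
       		unsigned long user_mask = ~0ul;		/* --interleave=all */

       		/* Old result: may include nodes without a pgdat. */
       		unsigned long broken = user_mask & possible_nodes;

       		/* Fixed result: only nodes that actually have memory. */
       		unsigned long fixed = user_mask & possible_nodes & high_mem_nodes;

       		printf("broken=0x%lx fixed=0x%lx\n", broken, fixed);
       		return 0;
       	}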
      
       This patch keeps the old behavior.  I feel this fix itself is
       just a Band-Aid; a fundamental fix would have to take care of
       memory hotplug, and that takes time.  (task->mems_allowed should
       be N_HIGH_MEMORY, I think.)
      
       mpol_set_nodemask() should be aware of N_HIGH_MEMORY, and the
       policy's nodemask should include only online nodes.
      
       In the old behavior, this was guaranteed by frequent references
       into cpuset's code.  Now most of those are gone, and mempolicy
       has to check it by itself.
      
       To do the check, a few nodemask_t values are needed for
       calculating the nodemask.  But nodemask_t can be big, and it's
       not good to allocate them on the stack.

       cpumask_t already has CPUMASK_ALLOC/FREE as an easy way to get a
       scratch area; NODEMASK_ALLOC/FREE should exist as well.
      
      [akpm@linux-foundation.org: cleanups & tweaks]
       Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
       Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Miao Xie <miaox@cn.fujitsu.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4bfc4495