1. 01 Sep, 2010 5 commits
    • powerpc: Dynamically allocate most lppaca structs · 93c22703
      Paul Mackerras authored
      
      This arranges for the lppaca structs for most cpus to be dynamically
      allocated in the same manner as the paca structs.  If we don't include
      support for legacy iSeries, only the first lppaca is statically
      allocated; the rest are dynamically allocated.  If we include legacy
      iSeries support, then we statically allocate the first 64 lppaca
      structs, since the iSeries hypervisor requires that the lppaca
      structs be present in the data section of the kernel image, but
      legacy iSeries supports at most 64 cpus.
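      
      In outline, the allocation looks like this (a minimal sketch, not the
      exact patch; NR_LPPACAS and the allocator call are illustrative):
      
      	/* Only the lppacas that must live in the kernel image stay
      	 * static; NR_LPPACAS is 1, or 64 with legacy iSeries. */
      	struct lppaca lppaca[NR_LPPACAS];
      	static struct lppaca *extra_lppacas;	/* boot-time allocation */
      
      	static struct lppaca *lppaca_for(int cpu)
      	{
      		if (cpu < NR_LPPACAS)
      			return &lppaca[cpu];
      		if (!extra_lppacas)	/* allocated early at boot */
      			extra_lppacas = alloc_bootmem(
      				(nr_cpu_ids - NR_LPPACAS) *
      				sizeof(struct lppaca));
      		return &extra_lppacas[cpu - NR_LPPACAS];
      	}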
      
      With CONFIG_NR_CPUS=1024, the kernel image size for a typical pSeries config
      went from:
      
         text    data     bss     dec     hex filename
      9524478 4734564 8469944 22728986        15ad11a ../test-1024/vmlinux
      
      to:
      
         text    data     bss     dec     hex filename
      9524482 3751508 8469944 21745934        14bd10e ../test-1024/vmlinux
      
      a reduction of 983052 bytes overall.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Abstract indexing of lppaca structs · 8154c5d2
      Paul Mackerras authored
      
      Currently we have the lppaca structs as a simple array of NR_CPUS
      entries, taking up space in the data section of the kernel image.
      In future we would like to allocate them dynamically, so this
      abstracts out the accesses to the array, making it easier to
      change how we locate the lppaca for a given cpu in future.
      Specifically, lppaca[cpu] changes to lppaca_of(cpu).
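      
      With the array still in place the accessor is trivial (sketch):
      
      	#define lppaca_of(cpu)	(lppaca[cpu])
      
      Once the array goes away, the macro can be redefined to follow a
      per-cpu pointer (e.g. one hanging off the paca) without touching
      any of the callers.
      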
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Feature nop out reservation clear when stcx checks address · f89451fb
      Anton Blanchard authored
      The POWER architecture does not require stcx to check that it is operating
      on the same address as the larx. This means it is possible for an
      exception handler to execute a larx, get a reservation, decide not to
      do the stcx and then return with an active reservation. If the
      interrupted code was in the middle of a larx/stcx sequence the stcx could
      incorrectly succeed.
      
      All recent POWER CPUs check the address before letting the stcx succeed
      so we can create a CPU feature and nop it out. As Ben suggested, we can
      only do this in our syscall path because there is a remote possibility
      some kernel code gets interrupted by an exception that ends up operating
      on the same cacheline.
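      
      Concretely, the reservation clear in the syscall exit path becomes a
      feature section that gets patched out to nops on CPUs that have the
      new feature bit (a sketch of the pattern; CPU_FTR_STCX_CHECKS_ADDRESS
      is the bit this patch adds):
      
      	BEGIN_FTR_SECTION
      		stdcx.	r0,0,r1		/* to clear the reservation */
      	END_FTR_SECTION_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)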
      
      Thanks to Paul Mackerras and Derek Williams for the idea.
      
      To test this I used a very simple null syscall (actually getppid) testcase
      at http://ozlabs.org/~anton/junkcode/null_syscall.c
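      
      (Essentially a timed getppid() loop; a minimal equivalent, for
      illustration:)
      
      	#include <stdio.h>
      	#include <sys/time.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		unsigned long i, iters = 10000000;
      		struct timeval a, b;
      
      		gettimeofday(&a, NULL);
      		for (i = 0; i < iters; i++)
      			getppid();
      		gettimeofday(&b, NULL);
      
      		printf("%.1f ns per syscall\n",
      		       ((b.tv_sec - a.tv_sec) * 1e9 +
      			(b.tv_usec - a.tv_usec) * 1e3) / iters);
      		return 0;
      	}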
      
      I tested against 2.6.35-git10 with the following changes against the
      pseries_defconfig:
      
      CONFIG_VIRT_CPU_ACCOUNTING=n
      CONFIG_AUDIT=n
      CONFIG_PPC_4K_PAGES=n
      CONFIG_PPC_64K_PAGES=y
      CONFIG_FORCE_MAX_ZONEORDER=9
      CONFIG_PPC_SUBPAGE_PROT=n
      CONFIG_FUNCTION_TRACER=n
      CONFIG_FUNCTION_GRAPH_TRACER=n
      CONFIG_IRQSOFF_TRACER=n
      CONFIG_STACK_TRACER=n
      
      to remove the overhead of virtual CPU accounting, syscall auditing and
      the ftrace mcount tracers. 64kB pages were enabled to minimise TLB misses.
      
      POWER6: +8.2%
      POWER7: +7.0%
      
      Another suggestion was to use a larx to something in the L1 instead of a stcx.
      This was almost as fast as removing the larx on POWER6, but only 3.5% faster
      on POWER7. We can use this to speed up the reservation clear in our
      exception exit code.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Add 64bit csum_and_copy_to_user · 8c773914
      Anton Blanchard authored
      
      This adds csum_and_copy_to_user, the receive-side equivalent of
      csum_and_copy_from_user, so we can copy and checksum in one pass. It is
      modelled on the generic checksum routine.
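      
      For reference, the generic version does the checksum and the copy as
      two separate passes (roughly, from include/net/checksum.h); the point
      of the asm routine is to fuse them into a single loop:
      
      	static inline __wsum
      	csum_and_copy_to_user(const void *src, void __user *dst,
      			      int len, __wsum sum, int *err_ptr)
      	{
      		sum = csum_partial(src, len, sum);
      		if (copy_to_user(dst, src, len) == 0)
      			return sum;
      		if (len)
      			*err_ptr = -EFAULT;
      		return (__force __wsum)-1;	/* invalid checksum */
      	}
      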
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Optimise 64bit csum_partial_copy_generic and add csum_and_copy_from_user · fdd374b6
      Anton Blanchard authored
      
      We use the same core loop as the new csum_partial, adding in the
      stores and exception handling code. To keep things simple we do all the
      exception fixup in csum_and_copy_from_user. This wrapper function is
      modelled on the generic checksum code and is careful to always calculate
      a complete checksum even if we only copied part of the data from userspace.
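      
      The fixup is roughly the following (a sketch, minus the access_ok()
      and zero-length checks of the real wrapper): on a fault, redo the copy
      with plain __copy_from_user, zero whatever could not be read, then
      checksum the whole destination buffer so the result is still complete:
      
      	__wsum csum_and_copy_from_user(const void __user *src, void *dst,
      				       int len, __wsum sum, int *err_ptr)
      	{
      		__wsum csum;
      
      		*err_ptr = 0;
      		csum = csum_partial_copy_generic((void __force *)src, dst,
      						 len, sum, err_ptr, NULL);
      
      		if (unlikely(*err_ptr)) {
      			/* the asm loop faulted partway through */
      			int missing = __copy_from_user(dst, src, len);
      
      			if (missing) {
      				memset(dst + len - missing, 0, missing);
      				*err_ptr = -EFAULT;
      			} else {
      				*err_ptr = 0;
      			}
      			csum = csum_partial(dst, len, sum);
      		}
      		return csum;
      	}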
      
      To test this I forced checksumming on over loopback and ran socklib (a
      simple TCP benchmark). On a POWER6 575 throughput improved by 19% with
      this patch. If I forced both the sender and receiver onto the same cpu
      (with the hope of shifting the benchmark from being cache bandwidth limited
      to cpu limited), adding this patch improved performance by 55%.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  2. 23 Aug, 2010 4 commits
  3. 14 Aug, 2010 1 commit
  4. 11 Aug, 2010 3 commits
  5. 10 Aug, 2010 1 commit
    • tty: Add EXTPROC support for LINEMODE · 26df6d13
      hyc@symas.com authored
      This patch is against the 2.6.34 source.
      
      Paraphrased from the 1989 BSD patch by David Borman @ cray.com:
      
           These are the changes needed for the kernel to support
           LINEMODE in the server.
      
           There is a new bit in the termios local flag word, EXTPROC.
           When this bit is set, several aspects of the terminal driver
           are disabled.  Input line editing, character echo, and mapping
           of signals are all disabled.  This allows the telnetd to turn
           off these functions when in linemode, but still keep track of
           what state the user wants the terminal to be in.
      
           New ioctl:
               TIOCSIG         Generate a signal to processes in the
                               current process group of the pty.
      
           There is a new mode for the packet driver, the TIOCPKT_IOCTL bit.
           When packet mode is turned on in the pty, and the EXTPROC bit
           is set, then whenever the state of the pty is changed, the
           next read on the master side of the pty will have the TIOCPKT_IOCTL
           bit set.  This allows the process on the server side of the pty
           to know when the state of the terminal has changed; it can then
           issue the appropriate ioctl to retrieve the new state.
      
      Since the original BSD patches accompanied the source code for telnet
      I've left that reference here, but obviously the feature is useful for
      any remote terminal protocol, including ssh.
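      
      For illustration, a sketch of how a server like telnetd would use this
      (hypothetical, error handling omitted): put the pty master in packet
      mode, set EXTPROC on the slave, and watch master reads for the
      TIOCPKT_IOCTL status bit:
      
      	#include <signal.h>
      	#include <sys/ioctl.h>
      	#include <termios.h>
      	#include <unistd.h>
      
      	static void enter_linemode(int mfd, int sfd)
      	{
      		struct termios tio;
      		int on = 1;
      
      		ioctl(mfd, TIOCPKT, &on);	/* packet mode on the master */
      		tcgetattr(sfd, &tio);
      		tio.c_lflag |= EXTPROC;		/* editing/echo done by client */
      		tio.c_lflag &= ~(ICANON | ECHO);
      		tcsetattr(sfd, TCSANOW, &tio);
      	}
      
      	static void service_master(int mfd)
      	{
      		char buf[512];
      		ssize_t n = read(mfd, buf, sizeof(buf));
      
      		/* in packet mode the first byte is a status byte */
      		if (n > 0 && (buf[0] & TIOCPKT_IOCTL)) {
      			/* termios changed: re-read the state and
      			 * renegotiate LINEMODE with the client */
      		}
      
      		/* client sent an interrupt out-of-band:
      		 *	ioctl(mfd, TIOCSIG, SIGINT);
      		 */
      	}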
      
      The corresponding feature has existed in the BSD tty driver since 1989.
      For historical reference, a good copy of the relevant files can be found
      here:
      
      http://anonsvn.mit.edu/viewvc/krb5/trunk/src/appl/telnet/?pathrev=17741
      
      Signed-off-by: Howard Chu <hyc@symas.com>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
  6. 09 Aug, 2010 1 commit
    • kmap_atomic: make kunmap_atomic() harder to misuse · 597781f3
      Cesar Eduardo Barros authored
      kunmap_atomic() is currently at level -4 on Rusty's "Hard To Misuse"
      list[1] ("Follow common convention and you'll get it wrong"), except in
      some architectures when CONFIG_DEBUG_HIGHMEM is set[2][3].
      
      kunmap() takes a pointer to a struct page; kunmap_atomic(), however,
      takes a pointer to within the page itself.  This seems to once in a while
      trip people up (the convention they are following is the one from
      kunmap()).
      
      Make it much harder to misuse, by moving it to level 9 on Rusty's list[4]
      ("The compiler/linker won't let you get it wrong").  This is done by
      refusing to build if the type of its first argument is a pointer to a
      struct page.
      
      The real kunmap_atomic() is renamed to kunmap_atomic_notypecheck()
      (which is what you would call if, for some strange reason, calling it
      with a pointer to a struct page is not incorrect in your code).
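      
      The check boils down to a macro of roughly this shape (using
      BUILD_BUG_ON plus the __same_type() helper from linux/compiler.h):
      
      	#define kunmap_atomic(addr, idx) do { \
      			BUILD_BUG_ON(__same_type((addr), struct page *)); \
      			kunmap_atomic_notypecheck((addr), (idx)); \
      		} while (0)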
      
      The previous version of this patch was compile tested on x86-64.
      
      [1] http://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html
      [2] In these cases, it is at level 5, "Do it right or it will always
          break at runtime."
      [3] At least mips and powerpc look very similar, and sparc also seems to
          share a common ancestor with both; there seems to be quite some
          degree of copy-and-paste coding here. The include/asm/highmem.h file
          for these three archs mention x86 CPUs at its top.
      [4] http://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html
      
      [5] As an aside, could someone tell me why mn10300 uses unsigned long as
          the first parameter of kunmap_atomic() instead of void *?
      Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.net>
      Cc: Russell King <linux@arm.linux.org.uk> (arch/arm)
      Cc: Ralf Baechle <ralf@linux-mips.org> (arch/mips)
      Cc: David Howells <dhowells@redhat.com> (arch/frv, arch/mn10300)
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com> (arch/mn10300)
      Cc: Kyle McMartin <kyle@mcmartin.ca> (arch/parisc)
      Cc: Helge Deller <deller@gmx.de> (arch/parisc)
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> (arch/parisc)
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> (arch/powerpc)
      Cc: Paul Mackerras <paulus@samba.org> (arch/powerpc)
      Cc: "David S. Miller" <davem@davemloft.net> (arch/sparc)
      Cc: Thomas Gleixner <tglx@linutronix.de> (arch/x86)
      Cc: Ingo Molnar <mingo@redhat.com> (arch/x86)
      Cc: "H. Peter Anvin" <hpa@zytor.com> (arch/x86)
      Cc: Arnd Bergmann <arnd@arndb.de> (include/asm-generic)
      Cc: Rusty Russell <rusty@rustcorp.com.au> ("Hard To Misuse" list)
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 07 Aug, 2010 1 commit
  8. 06 Aug, 2010 1 commit
    • powerpc/5200: add mpc5200_psc_ac97_gpio_reset · cfa6a88c
      Eric Millbrandt authored
      
      Work around a silicon bug in the ac97 reset functionality of the
      mpc5200(b).  The implementation of the ac97 "cold" reset is flawed.
      If the sync and output lines are high when reset is asserted the
      attached ac97 device may go into test mode.  Avoid this by
      reconfiguring the psc to gpio mode and generating the reset manually.
      
      From MPC5200B User's Manual:
      "Some AC97 devices goes to a test mode, if the Sync line is high
      during the Res line is low (reset phase). To avoid this behavior the
      Sync line must be also forced to zero during the reset phase. To do
      that, the pin muxing should switch to GPIO mode and the GPIO control
      register should be used to control the output lines."
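      
      In outline the workaround does the following (a hedged sketch; the
      GPIO register fields and bit parameters here are illustrative, not
      the exact driver code):
      
      	static void ac97_gpio_reset_sketch(struct mpc52xx_gpio __iomem *gpio,
      					   u32 mux_mask, u32 gpio_mode,
      					   u32 sync, u32 out, u32 reset)
      	{
      		u32 mux = in_be32(&gpio->port_config);
      
      		/* switch the PSC pins from AC97 to GPIO mode */
      		out_be32(&gpio->port_config, (mux & ~mux_mask) | gpio_mode);
      
      		/* drive Sync and data-out low so the codec cannot see
      		 * the test-mode pattern, then pulse Res low */
      		setbits32(&gpio->simple_ddr, sync | out | reset);
      		clrbits32(&gpio->simple_dvo, sync | out | reset);
      		udelay(1);			/* hold reset low */
      		setbits32(&gpio->simple_dvo, reset);
      		udelay(1);
      
      		/* restore AC97 pin muxing */
      		out_be32(&gpio->port_config, mux);
      	}
      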
      Signed-off-by: Eric Millbrandt <emillbrandt@dekaresearch.com>
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
  9. 01 Aug, 2010 6 commits
  10. 30 Jul, 2010 2 commits
  11. 28 Jul, 2010 2 commits
    • powerpc: Clean up obsolete code relating to decrementer and timebase · d75d68cf
      Paul Mackerras authored
      
      Since the decrementer and timekeeping code was moved over to using
      the generic clockevents and timekeeping infrastructure, several
      variables and functions have been obsolete and effectively unused.
      This deletes them.
      
      In particular, wakeup_decrementer() is no longer needed since the
      generic code reprograms the decrementer as part of the process of
      resuming the timekeeping code, which happens during sysdev resume.
      Thus the wakeup_decrementer calls in the suspend_enter methods for
      52xx platforms have been removed.  The call in the powermac cpu
      frequency change code has been replaced by set_dec(1), which will
      cause a timer interrupt as soon as interrupts are enabled, and the
      generic code will then reprogram the decrementer with the correct
      value.
      
      This also simplifies the generic_suspend_en/disable_irqs functions
      and makes them static since they are not referenced outside time.c.
      The preempt_enable/disable calls are removed because the generic
      code has disabled all but the boot cpu at the point where these
      functions are called, so we can't be moved to another cpu.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Rework VDSO gettimeofday to prevent time going backwards · 0e469db8
      Paul Mackerras authored
      
      Currently it is possible for userspace to see the result of
      gettimeofday() going backwards by 1 microsecond, assuming that
      userspace is using the gettimeofday() in the VDSO.  The VDSO
      gettimeofday() algorithm computes the time in "xsecs", which are
      units of 2^-20 seconds, or approximately 0.954 microseconds,
      using the algorithm
      
      	now = (timebase - tb_orig_stamp) * tb_to_xs + stamp_xsec
      
      and then converts the time in xsecs to seconds and microseconds.
      
      The kernel updates the tb_orig_stamp and stamp_xsec values every
      tick in update_vsyscall().  If the length of the tick is not an
      integer number of xsecs, then some precision is lost in converting
      the current time to xsecs.  For example, with CONFIG_HZ=1000, the
      tick is 1ms long, which is 1048.576 xsecs.  That means that
      stamp_xsec will advance by either 1048 or 1049 on each tick.
      With the right conditions, it is possible for userspace to get
      (timebase - tb_orig_stamp) * tb_to_xs being 1049 if the kernel is
      slightly late in updating the vdso_datapage, and then for stamp_xsec
      to advance by 1048 when the kernel does update it, and for userspace
      to then see (timebase - tb_orig_stamp) * tb_to_xs being zero due to
      integer truncation.  The result is that time appears to go backwards
      by 1 microsecond.
      
      To fix this we change the VDSO gettimeofday to use a new field in the
      VDSO datapage which stores the nanoseconds part of the time as a
      fractional number of seconds in a 0.32 binary fraction format.
      (Or put another way, as a 32-bit number in units of 0.23283 ns.)
      This is convenient because we can use the mulhwu instruction to
      convert it to either microseconds or nanoseconds.
      
      Since it turns out that computing the time of day using this new field
      is simpler than either using stamp_xsec (as gettimeofday does) or
      stamp_xtime.tv_nsec (as clock_gettime does), this converts both
      gettimeofday and clock_gettime to use the new field.  The existing
      __do_get_tspec function is converted to use the new field and take
      a parameter in r7 that indicates the desired resolution, 1,000,000
      for microseconds or 1,000,000,000 for nanoseconds.  The __do_get_xsec
      function is then unused and is deleted.
      
      The new algorithm is
      
      	now = ((timebase - tb_orig_stamp) << 12) * tb_to_xs
      		+ (stamp_xtime_seconds << 32) + stamp_sec_fraction
      
      with 'now' in units of 2^-32 seconds.  That is then converted to
      seconds and either microseconds or nanoseconds with
      
      	seconds = now >> 32
      	partseconds = ((now & 0xffffffff) * resolution) >> 32
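      
      In C terms the computation is (a model of the asm; tb_to_xs is a 0.64
      binary fraction, so the multiply keeps the high 64 bits of a 128-bit
      product, and the field names approximate the vdso_datapage):
      
      	/* 'now' in units of 2^-32 seconds */
      	u64 delta = (get_tb() - tb_orig_stamp) << 12;
      	u64 now = (u64)(((unsigned __int128)delta * tb_to_xs) >> 64)
      		+ ((u64)stamp_xtime_sec << 32) + stamp_sec_fraction;
      
      	u64 sec  = now >> 32;
      	u64 nsec = ((now & 0xffffffffULL) * 1000000000ULL) >> 32;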
      
      The 32-bit VDSO code also makes a further simplification: it ignores
      the bottom 32 bits of the tb_to_xs value, which is a 0.64 format binary
      fraction.  Doing so gets rid of 4 multiply instructions.  Assuming
      a timebase frequency of 1GHz or less and an update interval of no
      more than 10ms, the upper 32 bits of tb_to_xs will be at least
      4503599, so the error from ignoring the low 32 bits will be at most
      2.2ns, which is more than an order of magnitude less than the time
      taken to do gettimeofday or clock_gettime on our fastest processors,
      so there is no possibility of seeing inconsistent values due to this.
      
      This also moves update_gtod() down next to its only caller, and makes
      update_vsyscall use the time passed in via the wall_time argument rather
      than accessing xtime directly.  At present, wall_time always points to
      xtime, but that could change in future.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  12. 24 Jul, 2010 3 commits
  13. 22 Jul, 2010 2 commits
  14. 18 Jul, 2010 1 commit
  15. 14 Jul, 2010 1 commit
  16. 13 Jul, 2010 2 commits
  17. 11 Jul, 2010 2 commits
  18. 09 Jul, 2010 1 commit
  19. 08 Jul, 2010 1 commit
    • powerpc/book3e: More doorbell cleanups. Sample the PIR register · b9f1cd71
      Benjamin Herrenschmidt authored
      
      The doorbells use the content of the PIR register to match messages
      from other CPUs. This may or may not be the same as our linux CPU
      number, so using that as the "target" is not right.
      
      Instead, we sample the PIR register at boot on every processor
      and use that value subsequently when sending IPIs.
      
      We also use a per-cpu message mask rather than a global array which
      should limit cache line contention.
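      
      A sketch of the resulting per-cpu structure (field names illustrative):
      
      	struct doorbell_cpu_info {
      		unsigned long	messages;	/* pending message bits */
      		unsigned long	tag;		/* PIR sampled at boot */
      	};
      	static DEFINE_PER_CPU(struct doorbell_cpu_info, doorbell_cpu_info);
      
      	void doorbell_setup_this_cpu(void)
      	{
      		struct doorbell_cpu_info *info =
      			&__get_cpu_var(doorbell_cpu_info);
      
      		info->messages = 0;
      		info->tag = mfspr(SPRN_PIR) & 0x3fff;
      	}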
      
      Note: We could use the CPU number in the device-tree instead of
      the PIR register, as they are supposed to be equivalent. This
      might prove useful if doorbells are to be used to kick CPUs out
      of FW at boot time, thus before we can sample the PIR. This is
      however not the case now and using the PIR just works.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>