1. 15 Jul, 2016 1 commit
    • x86 / hibernate: Use hlt_play_dead() when resuming from hibernation · 406f992e
      Rafael J. Wysocki authored
      On Intel hardware, native_play_dead() uses mwait_play_dead() by
      default and only falls back to the other methods if that fails.
      That also happens during resume from hibernation, when the restore
      (boot) kernel runs disable_nonboot_cpus() to take all of the CPUs
      except for the boot one offline.
      However, that is problematic, because the address passed to
      __monitor() in mwait_play_dead() is likely to be written to in the
      last phase of hibernate image restoration and that causes the "dead"
      CPU to start executing instructions again.  Unfortunately, the page
      containing the address in that CPU's instruction pointer may not be
      valid any more at that point.
      First, that page may have been overwritten with image kernel memory
      contents already, so the instructions the CPU attempts to execute may
      simply be invalid.  Second, the page tables previously used by that
      CPU may have been overwritten by image kernel memory contents, so the
      address in its instruction pointer is impossible to resolve then.
      A report from Varun Koyyalagunta and investigation carried out by
      Chen Yu show that the latter sometimes happens in practice.
      To prevent it from happening, temporarily change the smp_ops.play_dead
      pointer during resume from hibernation so that it points to a special
      "play dead" routine which uses hlt_play_dead() and avoids the
      inadvertent "revivals" of "dead" CPUs this way.
      A slightly unpleasant consequence of this change is that if the
      system is hibernated with one or more CPUs offline, it will generally
      draw more power after resume than it did before hibernation, because
      the physical state entered by CPUs via hlt_play_dead() is higher-power
      than the mwait_play_dead() one in the majority of cases.  It is
      possible to work around this, but it is unclear how much of a problem
      that's going to be in practice, so the workaround will be implemented
      later if it turns out to be necessary.
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=106371
      Reported-by: Varun Koyyalagunta <cpudebug@centtech.com>
      Original-by: Chen Yu <yu.c.chen@intel.com>
      Tested-by: Chen Yu <yu.c.chen@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
  2. 09 Jul, 2016 5 commits
  3. 08 Jul, 2016 1 commit
  4. 01 Jul, 2016 4 commits
    • PM / hibernate: Recycle safe pages after image restoration · 307c5971
      Rafael J. Wysocki authored
      One of the memory bitmaps used by the hibernation image restoration
      code is freed after the image has been loaded.
      That is not quite efficient, though, because the memory pages used
      for building that bitmap are known to be safe (ie. they were not
      used by the image kernel before hibernation) and the arch-specific
      code finalizing the image restoration may need them.  In that case
      it needs to allocate those pages again via the memory management
      subsystem, check if they are really safe again by consulting the
      other bitmaps and so on.
      To avoid that, recycle those pages by putting them into the global
      list of known safe pages so that they can be given to the arch code
      right away when necessary.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • PM / hibernate: Simplify mark_unsafe_pages() · 6dbecfd3
      Rafael J. Wysocki authored
      Rework mark_unsafe_pages() to use a simpler method of clearing
      all bits in free_pages_map and to set the bits for the "unsafe"
      pages (ie. pages that were used by the image kernel before
      hibernation) with the help of duplicate_memory_bitmap().
      For this purpose, move the pfn_valid() check from mark_unsafe_pages()
      to unpack_orig_pfns() where the "unsafe" pages are discovered.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • PM / hibernate: Do not free preallocated safe pages during image restore · 9c744481
      Rafael J. Wysocki authored
      The core image restoration code preallocates some safe pages
      (ie. pages that weren't used by the image kernel before hibernation)
      for future use before allocating the bulk of memory for loading the
      image data.  Those safe pages are then freed so they can be allocated
      again (with the memory management subsystem's help).  That's done to
      ensure that there will be enough safe pages for temporary data
      structures needed during image restoration.
      However, it is not really necessary to free those pages after they
      have been allocated.  They can be added to the (global) list of
      safe pages right away and then picked up from there when needed
      without freeing.
      That reduces the overhead related to using safe pages, especially
      in the arch-specific code, so modify the code accordingly.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • PM / suspend: show workqueue state in suspend flow · 7b776af6
      Roger Lu authored
      If a freezable workqueue aborts the suspend flow, show the
      workqueue state for debugging purposes.
      Signed-off-by: Roger Lu <roger.lu@mediatek.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  5. 30 Jun, 2016 1 commit
    • x86/power/64: Fix kernel text mapping corruption during image restoration · 65c0554b
      Rafael J. Wysocki authored
      Logan Gunthorpe reports that hibernation stopped working reliably for
      him after commit ab76f7b4 (x86/mm: Set NX on gap between __ex_table
      and rodata).
      That turns out to be a consequence of a long-standing issue with the
      64-bit image restoration code on x86, which is that the temporary
      page tables set up by it to avoid page tables corruption when the
      last bits of the image kernel's memory contents are copied into
      their original page frames re-use the boot kernel's text mapping,
      but that mapping may very well get corrupted just like any other
      part of the page tables.  Of course, if that happens, the final
      jump to the image kernel's entry point will go to nowhere.
      The exact reason why commit ab76f7b4 matters here is that it
      sometimes causes a PMD of a large page to be split into PTEs
      that are allocated dynamically and get corrupted during image
      restoration as described above.
      To fix that issue note that the code copying the last bits of the
      image kernel's memory contents to the page frames occupied by them
      previously doesn't use the kernel text mapping, because it runs from
      a special page covered by the identity mapping set up for that code
      from scratch.  Hence, the kernel text mapping is only needed before
      that code starts to run and then it will only be used just for the
      final jump to the image kernel's entry point.
      Accordingly, the temporary page tables set up in swsusp_arch_resume()
      on x86-64 need to contain the kernel text mapping too.  That mapping
      is only going to be used for the final jump to the image kernel, so
      it only needs to cover the image kernel's entry point, because the
      first thing the image kernel does after getting control back is to
      switch over to its own original page tables.  Moreover, the virtual
      address of the image kernel's entry point in that mapping has to be
      the same as the one mapped by the image kernel's page tables.
      With that in mind, modify the x86-64's arch_hibernation_header_save()
      and arch_hibernation_header_restore() routines to pass the physical
      address of the image kernel's entry point (in addition to its virtual
      address) to the boot kernel (a small piece of assembly code involved
      in passing the entry point's virtual address to the image kernel is
      not necessary any more after that, so drop it).  Update RESTORE_MAGIC
      too to reflect the image header format change.
      Next, in set_up_temporary_mappings(), use the physical and virtual
      addresses of the image kernel's entry point passed in the image
      header to set up a minimum kernel text mapping (using memory pages
      that won't be overwritten by the image kernel's memory contents) that
      will map those addresses to each other as appropriate.
      This makes the concern about the possible corruption of the original
      boot kernel text mapping go away and if the the minimum kernel text
      mapping used for the final jump marks the image kernel's entry point
      memory as executable, the jump to it is guaraneed to succeed.
      Fixes: ab76f7b4 (x86/mm: Set NX on gap between __ex_table and rodata)
      Link: http://marc.info/?l=linux-pm&m=146372852823760&w=2
      Reported-by: Logan Gunthorpe <logang@deltatee.com>
      Reported-and-tested-by: Borislav Petkov <bp@suse.de>
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  6. 27 Jun, 2016 1 commit
  7. 26 Jun, 2016 2 commits
  8. 25 Jun, 2016 10 commits
  9. 24 Jun, 2016 15 commits