1. 15 Dec, 2014 1 commit
  2. 12 Dec, 2014 1 commit
    • tracing/sched: Check preempt_count() for current when reading task->state · aee4e5f3
      Steven Rostedt (Red Hat) authored
      When recording the state of a task for the sched_switch tracepoint, a check of
      task_preempt_count() is performed to see if PREEMPT_ACTIVE is set. This is
      because, technically, a task being preempted is really in the TASK_RUNNING
      state, and that is what should be recorded when tracing a sched_switch,
      even if the task put itself into another state (it hasn't scheduled out
      in that state yet).
      But with the change to use per_cpu preempt counts, the
      task_thread_info(p)->preempt_count is no longer used, and instead
      task_preempt_count(p) is used.
      The problem is that this does not use the current preempt count but a stale
      one from a previous sched_switch. The task_preempt_count(p) uses
      saved_preempt_count and not preempt_count(). But for tracing sched_switch,
      if p is current, we really want preempt_count().
      I hit this bug when I was tracing sleep and the call from do_nanosleep()
      scheduled out in the "RUNNING" state.
                 sleep-4290  [000] 537272.259992: sched_switch:         sleep:4290 [120] R ==> swapper/0:0 [120]
                 sleep-4290  [000] 537272.260015: kernel_stack:         <stack trace>
      => __schedule (ffffffff8150864a)
      => schedule (ffffffff815089f8)
      => do_nanosleep (ffffffff8150b76c)
      => hrtimer_nanosleep (ffffffff8108d66b)
      => SyS_nanosleep (ffffffff8108d750)
      => return_to_handler (ffffffff8150e8e5)
      => tracesys_phase2 (ffffffff8150c844)
      After a bit of hair pulling, I found that the state was really
      TASK_INTERRUPTIBLE, but the saved_preempt_count had an old PREEMPT_ACTIVE
      set and caused the sched_switch tracepoint to show it as RUNNING.
      Link: http://lkml.kernel.org/r/20141210174428.3cb7542a@gandalf.local.home
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: stable@vger.kernel.org # 3.13+
      Cc: Peter Zijlstra <peterz@infradead.org>
      Fixes: 01028747 "sched: Create more preempt_count accessors"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
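      The logic of the bug and its fix can be modeled in a small userspace C
      sketch (hypothetical names; this is an illustration, not the kernel code):
      task_preempt_count(p) reads a saved, possibly stale copy of the preempt
      count, while preempt_count() reads the live one, so only the latter is
      valid when p is current.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Userspace stand-ins for the kernel values (illustrative only). */
      #define PREEMPT_ACTIVE     0x10000000
      #define TASK_RUNNING       0
      #define TASK_INTERRUPTIBLE 1

      struct task {
              long state;
              int saved_preempt_count;  /* stale copy, as task_preempt_count() returns */
      };

      static int current_preempt_count; /* stands in for the live preempt_count() */

      /* Buggy version: consults the stale saved count. */
      static long switch_state_buggy(struct task *p)
      {
              long state = p->state;
              if (p->saved_preempt_count & PREEMPT_ACTIVE)
                      state = TASK_RUNNING; /* stale flag misreports the state */
              return state;
      }

      /* Fixed version: for current, use the live preempt count. */
      static long switch_state_fixed(struct task *p)
      {
              long state = p->state;
              if (current_preempt_count & PREEMPT_ACTIVE)
                      state = TASK_RUNNING;
              return state;
      }

      int main(void)
      {
              /* do_nanosleep() scheduling out: not preempted right now, but a
               * stale PREEMPT_ACTIVE lingers from an earlier preemption. */
              struct task sleep_task = {
                      .state = TASK_INTERRUPTIBLE,
                      .saved_preempt_count = PREEMPT_ACTIVE,
              };
              current_preempt_count = 0;

              /* Bug: the tracepoint reports RUNNING ("R" in the trace above). */
              assert(switch_state_buggy(&sleep_task) == TASK_RUNNING);
              /* Fix: the live count shows no preemption, real state is kept. */
              assert(switch_state_fixed(&sleep_task) == TASK_INTERRUPTIBLE);
              printf("ok\n");
              return 0;
      }
      ```

      This mirrors why the trace shows "R" for a task that was actually in
      TASK_INTERRUPTIBLE: the decision was made from a saved count that no
      longer reflects the task's situation.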
  3. 11 Dec, 2014 1 commit
  4. 03 Dec, 2014 5 commits
  5. 01 Dec, 2014 9 commits
  6. 24 Nov, 2014 1 commit
    • ftrace/x86: Have static function tracing always test for function graph · 62a207d7
      Steven Rostedt (Red Hat) authored
      New updates to the generic ftrace code meant ftrace_stub was not always
      being called when ftrace is off. This caused the static tracer to always
      save and restore functions. But it also showed that when function tracing
      is running, the function graph tracer cannot run. We should always check
      whether function graph tracing is running, even if the function tracer is
      running too. The function tracer code is not the only user of the mcount
      hook.
      Cc: Markos Chandras <Markos.Chandras@imgtec.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
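      The dispatch rule can be sketched in userspace C (hypothetical names; the
      real code is x86 mcount assembly): each hook is compared against
      ftrace_stub independently, so the graph hook is tested even when a
      function tracer is registered.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Userspace model of the static mcount entry logic (illustrative only). */
      typedef void (*trace_fn)(const char *func);

      static void ftrace_stub(const char *func) { (void)func; }

      /* Hooks default to the stub, meaning "tracer not registered". */
      static trace_fn ftrace_trace_function = ftrace_stub;
      static trace_fn ftrace_graph_entry    = ftrace_stub;

      static int func_hits, graph_hits;
      static void count_func(const char *f)  { (void)f; func_hits++; }
      static void count_graph(const char *f) { (void)f; graph_hits++; }

      /* Fixed dispatch: test each hook separately, never skip the graph
       * check just because the function tracer is active. */
      static void mcount(const char *func)
      {
              if (ftrace_trace_function != ftrace_stub)
                      ftrace_trace_function(func);
              if (ftrace_graph_entry != ftrace_stub)
                      ftrace_graph_entry(func);
      }

      int main(void)
      {
              /* Register both tracers, as when function and function_graph
               * tracing run together. */
              ftrace_trace_function = count_func;
              ftrace_graph_entry    = count_graph;
              mcount("do_nanosleep");
              assert(func_hits == 1 && graph_hits == 1); /* both hooks fire */
              printf("ok\n");
              return 0;
      }
      ```

      With the buggy ordering, a registered function tracer short-circuited the
      stub test and the graph hook was never reached; checking the hooks
      independently restores it.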
  7. 21 Nov, 2014 2 commits
  8. 20 Nov, 2014 2 commits
  9. 19 Nov, 2014 18 commits