  1. Feb 14, 2009
    • lockdep: generate usage strings · fabe9c42
      Peter Zijlstra authored
      
      generate the usage strings
      
      XXX capital invasion :-(
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: simplify mark_lock() · 5346417e
      Peter Zijlstra authored
      
      remove the state iteration
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: simplify mark_held_locks · 36bfb9bb
      Peter Zijlstra authored
      
      remove the explicit state iteration
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: sanitize reclaim bit names · a652d708
      Peter Zijlstra authored
      
      s/HELD_OVER/ENABLED/g
      
      so that it's similar to the hard- and soft-irq names.
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: sanitize bit names · 4fc95e86
      Peter Zijlstra authored
      
      s/\(LOCKF\?_ENABLED_[^ ]*\)S\(_READ\)\?\>/\1\2/g
      
      So that the USED_IN and ENABLED bits have the same names.
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: annotate reclaim context (__GFP_NOFS) · cf40bd16
      Nick Piggin authored
      
      Here is another version, with the incremental patch rolled up, reclaim
      context annotation added to kswapd, and allocation tracing added to the
      slab allocators (which may only ever reach the page allocator in rare
      cases, so it is good to put annotations here too).

      Haven't tested this version as such, but it should be getting closer
      to merge-worthy ;)
      
      --
      After noticing that some code in mm/filemap.c accidentally performed a
      __GFP_FS allocation when it should not have, I thought it might be a good
      idea to try to catch this kind of thing with lockdep.
      
      I coded up a little idea that seems to work. Unfortunately the system has to
      actually be in __GFP_FS page reclaim, then take the lock, before it will mark
      it. But at least that might still be some orders of magnitude more common
      (and more debuggable) than an actual deadlock condition, so we have some
      improvement I hope (the concept is no less complete than discovery of a lock's
      interrupt contexts).
      
      I guess we could even do the same thing with __GFP_IO (normal reclaim), and
      even GFP_NOIO locks too... but filesystems will have the most locks and fiddly
      code paths, so let's start there and see how it goes.
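
      As a minimal, hypothetical sketch of the pattern this catches (testlock,
      my_shrinker and my_init are made-up names, loosely modeled on the brd_init
      test below; this is not part of the patch):

      #include <linux/module.h>
      #include <linux/mutex.h>
      #include <linux/slab.h>

      static DEFINE_MUTEX(testlock);

      /* runs from page reclaim, so testlock gets marked "taken in reclaim" */
      static int my_shrinker(int nr_to_scan, gfp_t gfp_mask)
      {
              mutex_lock(&testlock);
              /* ... drop some cached objects ... */
              mutex_unlock(&testlock);
              return 0;
      }

      static int __init my_init(void)
      {
              void *p;

              mutex_lock(&testlock);
              /*
               * GFP_KERNEL implies __GFP_FS: this allocation may enter fs
               * reclaim and re-take testlock via my_shrinker(), so once the
               * reclaim-side acquisition has been seen, lockdep reports the
               * lock as inconsistently used.
               */
              p = kmalloc(PAGE_SIZE, GFP_KERNEL);
              kfree(p);
              mutex_unlock(&testlock);
              return 0;
      }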
      
      It *seems* to work. I did a quick test.
      
      =================================
      [ INFO: inconsistent lock state ]
      2.6.28-rc6-00007-ged31348-dirty #26
      ---------------------------------
      inconsistent {in-reclaim-W} -> {ov-reclaim-W} usage.
      modprobe/8526 [HC0[0]:SC0[0]:HE1:SE1] takes:
       (testlock){--..}, at: [<ffffffffa0020055>] brd_init+0x55/0x216 [brd]
      {in-reclaim-W} state was registered at:
        [<ffffffff80267bdb>] __lock_acquire+0x75b/0x1a60
        [<ffffffff80268f71>] lock_acquire+0x91/0xc0
        [<ffffffff8070f0e1>] mutex_lock_nested+0xb1/0x310
        [<ffffffffa002002b>] brd_init+0x2b/0x216 [brd]
        [<ffffffff8020903b>] _stext+0x3b/0x170
        [<ffffffff80272ebf>] sys_init_module+0xaf/0x1e0
        [<ffffffff8020c3fb>] system_call_fastpath+0x16/0x1b
        [<ffffffffffffffff>] 0xffffffffffffffff
      irq event stamp: 3929
      hardirqs last  enabled at (3929): [<ffffffff8070f2b5>] mutex_lock_nested+0x285/0x310
      hardirqs last disabled at (3928): [<ffffffff8070f089>] mutex_lock_nested+0x59/0x310
      softirqs last  enabled at (3732): [<ffffffff8061f623>] sk_filter+0x83/0xe0
      softirqs last disabled at (3730): [<ffffffff8061f5b6>] sk_filter+0x16/0xe0
      
      other info that might help us debug this:
      1 lock held by modprobe/8526:
       #0:  (testlock){--..}, at: [<ffffffffa0020055>] brd_init+0x55/0x216 [brd]
      
      stack backtrace:
      Pid: 8526, comm: modprobe Not tainted 2.6.28-rc6-00007-ged31348-dirty #26
      Call Trace:
       [<ffffffff80265483>] print_usage_bug+0x193/0x1d0
       [<ffffffff80266530>] mark_lock+0xaf0/0xca0
       [<ffffffff80266735>] mark_held_locks+0x55/0xc0
       [<ffffffffa0020000>] ? brd_init+0x0/0x216 [brd]
       [<ffffffff802667ca>] trace_reclaim_fs+0x2a/0x60
       [<ffffffff80285005>] __alloc_pages_internal+0x475/0x580
       [<ffffffff8070f29e>] ? mutex_lock_nested+0x26e/0x310
       [<ffffffffa0020000>] ? brd_init+0x0/0x216 [brd]
       [<ffffffffa002006a>] brd_init+0x6a/0x216 [brd]
       [<ffffffffa0020000>] ? brd_init+0x0/0x216 [brd]
       [<ffffffff8020903b>] _stext+0x3b/0x170
       [<ffffffff8070f8b9>] ? mutex_unlock+0x9/0x10
       [<ffffffff8070f83d>] ? __mutex_unlock_slowpath+0x10d/0x180
       [<ffffffff802669ec>] ? trace_hardirqs_on_caller+0x12c/0x190
       [<ffffffff80272ebf>] sys_init_module+0xaf/0x1e0
       [<ffffffff8020c3fb>] system_call_fastpath+0x16/0x1b
      
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. Dec 04, 2008
  3. Dec 03, 2008
  4. Nov 25, 2008
    • lockdep: fix unused function warning in kernel/lockdep.c · 7807fafa
      Ingo Molnar authored
      
      Impact: fix build warning
      
      this warning:
      
        kernel/lockdep.c:584: warning: ‘print_lock_dependencies’ defined but not used
      
      triggers because print_lock_dependencies() is only used if both
      CONFIG_TRACE_IRQFLAGS and CONFIG_PROVE_LOCKING are enabled.
      
      But adding #ifdefs is not an option here - it would spread out to 4-5
      other helper functions and uglify the file. So mark this function
      as __used - it's static and the compiler can eliminate it just fine.
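
      As a hedged sketch (the signature is reconstructed from memory):

      /*
       * __used tells gcc that the lack of callers is intentional in some
       * configurations; the function stays static and can still be dropped
       * by the optimizer when it really is unreferenced.
       */
      static void __used
      print_lock_dependencies(struct lock_class *class, int depth)
      {
              /* body unchanged; only reachable when both CONFIG_TRACE_IRQFLAGS
                 and CONFIG_PROVE_LOCKING are enabled */
      }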
      
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. Nov 21, 2008
  6. Oct 28, 2008
  7. Oct 20, 2008
  8. Aug 27, 2008
    • lockdep: fix invalid list_del_rcu in zap_class · 74870172
      Zhu Yi authored
      
      The problem was found during iwlagn driver testing on the
      v2.6.27-rc4-176-gb8e6c91 kernel, but it turns out to be a lockdep bug.
      In our testing we frequently load and unload the iwlagn driver
      (>50 times), until MAX_STACK_TRACE_ENTRIES is reached (expected
      behaviour?). The error message with the call trace is shown below.
      
      BUG: MAX_STACK_TRACE_ENTRIES too low!
      turning off the locking correctness validator.
      Pid: 4895, comm: iwlagn Not tainted 2.6.27-rc4 #13
      
      Call Trace:
       [<ffffffff81014aa1>] save_stack_trace+0x22/0x3e
       [<ffffffff8105390a>] save_trace+0x8b/0x91
       [<ffffffff81054e60>] mark_lock+0x1b0/0x8fa
       [<ffffffff81056f71>] __lock_acquire+0x5b9/0x716
       [<ffffffffa00d818a>] ieee80211_sta_work+0x0/0x6ea [mac80211]
       [<ffffffff81057120>] lock_acquire+0x52/0x6b
       [<ffffffff81045f0e>] run_workqueue+0x97/0x1ed
       [<ffffffff81045f5e>] run_workqueue+0xe7/0x1ed
       [<ffffffff81045f0e>] run_workqueue+0x97/0x1ed
       [<ffffffff81046ae4>] worker_thread+0xd8/0xe3
       [<ffffffff81049503>] autoremove_wake_function+0x0/0x2e
       [<ffffffff81046a0c>] worker_thread+0x0/0xe3
       [<ffffffff810493ec>] kthread+0x47/0x73
       [<ffffffff8128e3ab>] trace_hardirqs_on_thunk+0x3a/0x3f
       [<ffffffff8100cea9>] child_rip+0xa/0x11
       [<ffffffff8100c4df>] restore_args+0x0/0x30
       [<ffffffff810316e1>] finish_task_switch+0x0/0xcc
       [<ffffffff810493a5>] kthread+0x0/0x73
       [<ffffffff8100ce9f>] child_rip+0x0/0x11
      
      Although the above is harmless, when the iwlagn module is removed
      later, lockdep triggers a kernel oops as shown below.
      
      BUG: unable to handle kernel NULL pointer dereference at
      0000000000000008
      IP: [<ffffffff810531e1>] zap_class+0x24/0x82
      PGD 73128067 PUD 7448c067 PMD 0
      Oops: 0002 [1] SMP
      CPU 0
      Modules linked in: rfcomm l2cap bluetooth autofs4 sunrpc
      nf_conntrack_ipv6 xt_state nf_conntrack xt_tcpudp ip6t_ipv6header
      ip6t_REJECT ip6table_filter ip6_tables x_tables ipv6 cpufreq_ondemand
      acpi_cpufreq dm_mirror dm_log dm_multipath dm_mod snd_hda_intel sr_mod
      snd_seq_dummy snd_seq_oss snd_seq_midi_event battery snd_seq
      snd_seq_device cdrom button snd_pcm_oss snd_mixer_oss snd_pcm
      snd_timer snd_page_alloc e1000e snd_hwdep sg iTCO_wdt
      iTCO_vendor_support ac pcspkr i2c_i801 i2c_core snd soundcore video
      output ata_piix ata_generic libata sd_mod scsi_mod ext3 jbd mbcache
      uhci_hcd ohci_hcd ehci_hcd [last unloaded: mac80211]
      Pid: 4941, comm: modprobe Not tainted 2.6.27-rc4 #10
      RIP: 0010:[<ffffffff810531e1>]  [<ffffffff810531e1>]
      zap_class+0x24/0x82
      RSP: 0000:ffff88007bcb3eb0  EFLAGS: 00010046
      RAX: 0000000000068ee8 RBX: ffffffff8192a0a0 RCX: 0000000000000000
      RDX: 0000000000000000 RSI: 0000000000001dfb RDI: ffffffff816e70b0
      RBP: ffffffffa00cd000 R08: ffffffff816818f8 R09: ffff88007c923558
      R10: ffffe20002ad2408 R11: ffffffff811028ec R12: ffffffff8192a0a0
      R13: 000000000002bd90 R14: 0000000000000000 R15: 0000000000000296
      FS:  00007f9d1cee56f0(0000) GS:ffffffff814a58c0(0000)
      knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 0000000000000008 CR3: 0000000073047000 CR4: 00000000000006e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process modprobe (pid: 4941, threadinfo ffff88007bcb2000, task
      ffff8800758d1fc0)
      Stack:  ffffffff81057376 0000000000000000 ffffffffa00f7b00
      0000000000000000
       0000000000000080 0000000000618278 00007fff24f16720 0000000000000000
       ffffffff8105d37a ffffffffa00f7b00 ffffffff8105d591 313132303863616d
      Call Trace:
       [<ffffffff81057376>] ? lockdep_free_key_range+0x61/0xf5
       [<ffffffff8105d37a>] ? free_module+0xd4/0xe4
       [<ffffffff8105d591>] ? sys_delete_module+0x1de/0x1f9
       [<ffffffff8106dbfa>] ? audit_syscall_entry+0x12d/0x160
       [<ffffffff8100be2b>] ? system_call_fastpath+0x16/0x1b
      
      Code: b2 00 01 00 00 00 c3 31 f6 49 c7 c0 10 8a 61 81 eb 32 49 39 38
      75 26 48 98 48 6b c0 38 48 8b 90 08 8a 61 81 48 8b 88 00 8a 61 81 <48>
      89 51 08 48 89 0a 48 c7 80 08 8a 61 81 00 02 20 00 48 ff c6
      RIP  [<ffffffff810531e1>] zap_class+0x24/0x82
       RSP <ffff88007bcb3eb0>
      CR2: 0000000000000008
      ---[ end trace a1297e0c4abb0f2e ]---
      
      The root cause of this oops is in add_lock_to_list(): when
      save_trace() fails because MAX_STACK_TRACE_ENTRIES has been reached,
      entry->class is assigned but the entry is never added to any lock list.
      This makes the list_del_rcu() in zap_class() oops later when the
      module is unloaded. This patch fixes the problem by assigning
      entry->class only after save_trace() returns successfully.
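
      A hedged sketch of the corrected ordering in add_lock_to_list()
      (reconstructed from memory, details may differ from the actual patch):

      static int add_lock_to_list(struct lock_class *class, struct lock_class *this,
                                  struct list_head *head, unsigned long ip,
                                  int distance)
      {
              struct lock_list *entry;

              entry = alloc_list_entry();
              if (!entry)
                      return 0;

              /*
               * Bail out *before* touching the entry if the trace cannot be
               * saved: the never-linked entry then never carries a stale
               * class pointer for zap_class() to trip over.
               */
              if (!save_trace(&entry->trace))
                      return 0;

              entry->class = this;
              entry->distance = distance;
              list_add_tail_rcu(&entry->entry, head);

              return 1;
      }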
      
      Signed-off-by: Zhu Yi <yi.zhu@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  9. Aug 26, 2008
  10. Aug 18, 2008
  11. Aug 13, 2008
  12. Aug 11, 2008
    • lockdep: fix debug_lock_alloc · 0f2bc27b
      Peter Zijlstra authored
      
      When we enable DEBUG_LOCK_ALLOC but do not enable PROVE_LOCKING and/or
      LOCK_STAT, lock_acquire() and lock_release() turn into nops, even though
      we should be doing hlock checking (check=1).
      
      This causes a false warning and a lockdep self-disable.
      
      Rectify this.
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: handle chains involving classes defined in modules · 8bfe0298
      Rabin Vincent authored
      
      Solve this by marking the classes as unused and not printing information
      about the unused classes.
      
      Reported-by: Eric Sesterhenn <snakebyte@gmx.de>
      Signed-off-by: Rabin Vincent <rabin@rab.in>
      Acked-by: Huang Ying <ying.huang@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: lock protection locks · 7531e2f3
      Peter Zijlstra authored
      
      On Fri, 2008-08-01 at 16:26 -0700, Linus Torvalds wrote:
      
      > On Fri, 1 Aug 2008, David Miller wrote:
      > >
      > > Taking more than a few locks of the same class at once is bad
      > > news and it's better to find an alternative method.
      >
      > It's not always wrong.
      >
      > If you can guarantee that anybody that takes more than one lock of a
      > particular class will always take a single top-level lock _first_, then
      > that's all good. You can obviously screw up and take the same lock _twice_
      > (which will deadlock), but at least you cannot get into ABBA situations.
      >
      > So maybe the right thing to do is to just teach lockdep about "lock
      > protection locks". That would have solved the multi-queue issues for
      > networking too - all the actual network drivers would still have taken
      > just their single queue lock, but the one case that needs to take all of
      > them would have taken a separate top-level lock first.
      >
      > Never mind that the multi-queue locks were always taken in the same order:
      > it's never wrong to just have some top-level serialization, and anybody
      > who needs to take <n> locks might as well do <n+1>, because they sure as
      > hell aren't going to be on _any_ fastpaths.
      >
      > So the simplest solution really sounds like just teaching lockdep about
      > that one special case. It's not "nesting" exactly, although it's obviously
      > related to it.
      
      Do as Linus suggested. The lock protection lock is called nest_lock.
      
      Note that we still have the MAX_LOCK_DEPTH (48) limit to consider, so anything
      that spills over that is still up shit creek.
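
      A hedged usage sketch (struct my_dev and its fields are hypothetical;
      all per-queue locks share one lock class, and the top-level lock is the
      nest_lock that vouches for them):

      static void reconfigure_all_queues(struct my_dev *dev)
      {
              int i;

              spin_lock(&dev->all_queues_lock);
              for (i = 0; i < dev->num_queues; i++)
                      spin_lock_nest_lock(&dev->queue[i].lock,
                                          &dev->all_queues_lock);

              /* ... update every queue atomically ... */

              for (i = dev->num_queues - 1; i >= 0; i--)
                      spin_unlock(&dev->queue[i].lock);
              spin_unlock(&dev->all_queues_lock);
      }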
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • lockdep: shrink held_lock structure · f82b217e
      Dave Jones authored
      
      struct held_lock {
              u64                        prev_chain_key;       /*     0     8 */
              struct lock_class *        class;                /*     8     8 */
              long unsigned int          acquire_ip;           /*    16     8 */
              struct lockdep_map *       instance;             /*    24     8 */
              int                        irq_context;          /*    32     4 */
              int                        trylock;              /*    36     4 */
              int                        read;                 /*    40     4 */
              int                        check;                /*    44     4 */
              int                        hardirqs_off;         /*    48     4 */
      
              /* size: 56, cachelines: 1 */
              /* padding: 4 */
              /* last cacheline: 56 bytes */
      };
      
      struct held_lock {
              u64                        prev_chain_key;       /*     0     8 */
              long unsigned int          acquire_ip;           /*     8     8 */
              struct lockdep_map *       instance;             /*    16     8 */
              unsigned int               class_idx:11;         /*    24:21  4 */
              unsigned int               irq_context:2;        /*    24:19  4 */
              unsigned int               trylock:1;            /*    24:18  4 */
              unsigned int               read:2;               /*    24:16  4 */
              unsigned int               check:2;              /*    24:14  4 */
              unsigned int               hardirqs_off:1;       /*    24:13  4 */
      
              /* size: 32, cachelines: 1 */
              /* padding: 4 */
              /* bit_padding: 13 bits */
              /* last cacheline: 32 bytes */
      };
      
      [mingo@elte.hu: shrunk hlock->class too]
      [peterz@infradead.org: fixup bit sizes]
      Signed-off-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    • lockdep: lock_set_subclass - reset a held lock's subclass · 64aa348e
      Peter Zijlstra authored
      
      this can be used to reset a held lock's subclass, for arbitrary-depth
      iterated data structures such as trees or lists which have per-node
      locks.
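
      A hedged sketch of the intended use, hand-over-hand traversal of a
      hypothetical list with per-node spinlocks (struct node is made up):

      static void walk_nodes(struct node *head)
      {
              struct node *cur = head, *next;

              spin_lock(&cur->lock);
              while ((next = cur->next) != NULL) {
                      /* take the next node as a nested subclass first ... */
                      spin_lock_nested(&next->lock, SINGLE_DEPTH_NESTING);
                      spin_unlock(&cur->lock);
                      /* ... then, with only one lock of the class held again,
                         demote it back to subclass 0 */
                      lock_set_subclass(&next->lock.dep_map, 0, _THIS_IP_);
                      cur = next;
              }
              spin_unlock(&cur->lock);
      }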
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  13. Jul 31, 2008
    • lockdep: fix combinatorial explosion in lock subgraph traversal · 419ca3f1
      David Miller authored
      
      When we traverse the graph, either forwards or backwards, we
      are interested in whether a certain property exists somewhere
      in a node reachable in the graph.
      
      Therefore it is never necessary to traverse through a node more
      than once to get a correct answer to the given query.
      
      Take advantage of this property using a global ID counter so that we
      need not clear all the markers in all the lock_class entries before
      doing a traversal.  A new ID is chosen when we start to traverse, and
      we continue through a lock_class only if its ID hasn't been marked
      with the new value yet.
      
      This short-circuiting is essential especially for high CPU count
      systems.  The scheduler has a runqueue per cpu, and needs to take
      two runqueue locks at a time, which leads to long chains of
      backwards and forwards subgraphs from these runqueue lock nodes.
      Without the short-circuit implemented here, a graph traversal on
      a runqueue lock can take up to (1 << (N - 1)) checks on a system
      with N cpus.
      
      For anything more than 16 cpus or so, lockdep will eventually bring
      the machine to a complete standstill.
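
      A hedged sketch of the generation-ID marking described above (names
      reconstructed from memory; the real code may differ in detail):

      static unsigned int lockdep_dependency_gen_id;  /* global traversal ID */

      static int lockdep_dependency_visit(struct lock_class *source,
                                          unsigned int depth)
      {
              if (!depth)
                      lockdep_dependency_gen_id++;    /* new query, new ID */
              if (source->dep_gen_id == lockdep_dependency_gen_id)
                      return 1;                       /* already visited */
              source->dep_gen_id = lockdep_dependency_gen_id;
              return 0;
      }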
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. Jul 14, 2008
    • lockdep: fix ftrace irq tracing false positive · 992860e9
      Ingo Molnar authored
      
      fix this false positive:
      
      [    0.020000] ------------[ cut here ]------------
      [    0.020000] WARNING: at kernel/lockdep.c:2718 check_flags+0x14a/0x170()
      [    0.020000] Modules linked in:
      [    0.020000] Pid: 0, comm: swapper Not tainted 2.6.26-tip-00343-gd7e5521-dirty #14486
      [    0.020000]  [<c01312e4>] warn_on_slowpath+0x54/0x80
      [    0.020000]  [<c067e451>] ? _spin_unlock_irqrestore+0x61/0x70
      [    0.020000]  [<c0131bb1>] ? release_console_sem+0x201/0x210
      [    0.020000]  [<c0143d65>] ? __kernel_text_address+0x35/0x40
      [    0.020000]  [<c010562e>] ? dump_trace+0x5e/0x140
      [    0.020000]  [<c01518b5>] ? __lock_acquire+0x245/0x820
      [    0.020000]  [<c015063a>] check_flags+0x14a/0x170
      [    0.020000]  [<c0151ed8>] ? lock_acquire+0x48/0xc0
      [    0.020000]  [<c0151ee1>] lock_acquire+0x51/0xc0
      [    0.020000]  [<c014a16c>] ? down+0x2c/0x40
      [    0.020000]  [<c010a609>] ? sched_clock+0x9/0x10
      [    0.020000]  [<c067e7b2>] _write_lock+0x32/0x60
      [    0.020000]  [<c013797f>] ? request_resource+0x1f/0xb0
      [    0.020000]  [<c013797f>] request_resource+0x1f/0xb0
      [    0.020000]  [<c02f89ad>] vgacon_startup+0x2bd/0x3e0
      [    0.020000]  [<c094d62a>] con_init+0x19/0x22f
      [    0.020000]  [<c0330c7c>] ? tty_register_ldisc+0x5c/0x70
      [    0.020000]  [<c094cf49>] console_init+0x20/0x2e
      [    0.020000]  [<c092a969>] start_kernel+0x20c/0x379
      [    0.020000]  [<c092a516>] ? unknown_bootoption+0x0/0x1f6
      [    0.020000]  [<c092a099>] __init_begin+0x99/0xa1
      [    0.020000]  =======================
      [    0.020000] ---[ end trace 4eaa2a86a8e2da22 ]---
      [    0.020000] possible reason: unannotated irqs-on.
      [    0.020000] irq event stamp: 0
      
      which occurs if CONFIG_TRACE_IRQFLAGS=y, CONFIG_DEBUG_LOCKDEP=y,
      but CONFIG_PROVE_LOCKING is disabled.
      
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. Jun 23, 2008
  16. Jun 20, 2008
  17. May 23, 2008
    • lockdep: update lockdep_recursion on graph_lock · bb065afb
      Steven Rostedt authored
      
      With the introduction of ftrace, it is possible to recurse into
      the lockdep functions via the mcount call. To prevent possible
      lockups, update the lockdep_recursion counter when grabbing the internal
      lockdep_lock, so that such recursion cannot deadlock.
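
      A hedged sketch of the change (simplified; details reconstructed from
      memory):

      static int graph_lock(void)
      {
              __raw_spin_lock(&lockdep_lock);
              if (!debug_locks) {
                      __raw_spin_unlock(&lockdep_lock);
                      return 0;
              }
              /*
               * While the graph lock is held, any traced call that re-enters
               * lockdep sees lockdep_recursion set and bails out early.
               */
              current->lockdep_recursion++;
              return 1;
      }

      static inline int graph_unlock(void)
      {
              current->lockdep_recursion--;
              __raw_spin_unlock(&lockdep_lock);
              return 0;
      }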
      
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: use Makefile to remove tracing from lockdep · 1d09daa5
      Steven Rostedt authored
      
      This patch removes the "notrace" annotation from lockdep and adds the debugging
      files in the kernel director to those that should not be compiled with
      "-pg" mcount tracing.
      
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: lockdep notrace annotations · 0764d23c
      Steven Rostedt authored
      
      Add notrace annotations to lockdep to keep ftrace from causing
      recursive problems with lock tracing and debugging.
      
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: trace irq disabled critical timings · 81d68a96
      Steven Rostedt authored
      
      This patch adds latency tracing for critical timings
      (how long interrupts are disabled for).
      
       "irqsoff" is added to /debugfs/tracing/available_tracers
      
      Note:
        tracing_max_latency
          also holds the max latency for irqsoff (in usecs)
          (defaults to a large number, so one must start latency tracing first)

        tracing_thresh
          threshold (in usecs) to always print out if irqs-off
          is detected to be longer than stated here.
          If tracing_thresh is non-zero, then tracing_max_latency
          is ignored.
      
      Here's an example of a trace with ftrace_enabled = 0
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      --------------------------------------------------------------------
       latency: 100 us, #3/3, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1d.s3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1d.s3  100us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1d.s3  100us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      
      And this is a trace with ftrace_enabled == 1
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      --------------------------------------------------------------------
       latency: 102 us, #12/12, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1dNs3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_read_phy_reg+0x16/0x225 [e1000] (e1000_update_stats+0x5e2/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_swfw_sync_acquire+0x10/0x99 [e1000] (e1000_read_phy_reg+0x49/0x225 [e1000])
       swapper-0     1dNs3   46us : e1000_get_hw_eeprom_semaphore+0x12/0xa6 [e1000] (e1000_swfw_sync_acquire+0x36/0x99 [e1000])
       swapper-0     1dNs3   47us : __const_udelay+0x9/0x47 (e1000_read_phy_reg+0x116/0x225 [e1000])
       swapper-0     1dNs3   47us+: __delay+0x9/0x50 (__const_udelay+0x45/0x47)
       swapper-0     1dNs3   97us : preempt_schedule+0xc/0x84 (__delay+0x4e/0x50)
       swapper-0     1dNs3   98us : e1000_swfw_sync_release+0xc/0x55 [e1000] (e1000_read_phy_reg+0x211/0x225 [e1000])
       swapper-0     1dNs3   99us+: e1000_put_hw_eeprom_semaphore+0x9/0x35 [e1000] (e1000_swfw_sync_release+0x50/0x55 [e1000])
       swapper-0     1dNs3  101us : _spin_unlock_irqrestore+0xe/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  18. Feb 25, 2008
  19. Jan 25, 2008
    • softlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks · 82a1fcb9
      Ingo Molnar authored
      
      this patch extends the soft-lockup detector to automatically
      detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
      printed the following way:
      
       ------------------>
       INFO: task prctl:3042 blocked for more than 120 seconds.
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
       prctl         D fd5e3793     0  3042   2997
              f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
              f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
              f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
       Call Trace:
        [<c04883a5>] schedule_timeout+0x6d/0x8b
        [<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
        [<c0133a76>] msleep+0x10/0x16
        [<c0138974>] sys_prctl+0x30/0x1e2
        [<c0104c52>] sysenter_past_esp+0x5f/0xa5
        =======================
       2 locks held by prctl/3042:
       #0:  (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
       #1:  (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9
       <------------------
      
      the current default timeout is 120 seconds. Such messages are printed
      up to 10 times per bootup. If the system has crashed already then the
      messages are not printed.
      
      if lockdep is enabled then all held locks are printed as well.
      
      this feature is a natural extension to the softlockup-detector (kernel
      locked up without scheduling) and to the NMI watchdog (kernel locked up
      with IRQs disabled).
      
      [ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
      [ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]
      
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
  20. Jan 24, 2008
  21. Jan 16, 2008
  22. Dec 07, 2007
  23. Dec 05, 2007
    • lockdep: in_range() fix · 54561783
      Oleg Nesterov authored
      
      Torsten Kaiser wrote:
      
      | static inline int in_range(const void *start, const void *addr, const void *end)
      | {
      |         return addr >= start && addr <= end;
      | }
      | This  will return true, if addr is in the range of start (including)
      | to end (including).
      |
      | But debug_check_no_locks_freed() seems to do:
      | const void *mem_to = mem_from + mem_len
      | -> mem_to is the last byte of the freed range, that fits in_range
      | lock_from = (void *)hlock->instance;
      | -> first byte of the lock
      | lock_to = (void *)(hlock->instance + 1);
      | -> first byte of the next lock, not last byte of the lock that is being checked!
      |
      | The test is:
      | if (!in_range(mem_from, lock_from, mem_to) &&
      |                                         !in_range(mem_from, lock_to, mem_to))
      |                         continue;
      | So it tests, if the first byte of the lock is in the range that is freed ->OK
      | And if the first byte of the *next* lock is in the range that is freed
      | -> Not OK.
      
      We can also simplify the in_range checks: we need only 2 comparisons, not 4.
      If the lock is not in the memory range, it must lie either entirely to the
      left of the range or entirely to the right of it.
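
      A hedged sketch of the simplified check, close to the not_in_range()
      helper the fix introduces (reconstructed from memory):

      static inline int not_in_range(const void *mem_from, unsigned long mem_len,
                                     const void *lock_from, unsigned long lock_len)
      {
              /*
               * No overlap iff the lock lies entirely to the left or entirely
               * to the right of the freed range: two comparisons instead of
               * four.
               */
              return lock_from + lock_len <= mem_from ||
                     mem_from + mem_len <= lock_from;
      }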
      
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    • lockdep: fix debug_show_all_locks() · 85684873
      Ingo Molnar authored
      fix the oops that can be seen in:
      
         http://bugzilla.kernel.org/attachment.cgi?id=13828&action=view
      
      
      
      it is not safe to print the locks of running tasks.
      
      (even with this fix we have a small race - but this is a debug
       function after all.)
      
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  24. Oct 28, 2007
  25. Oct 19, 2007