1. 09 Jul, 2010 1 commit
    • x86, ioremap: Fix incorrect physical address handling in PAE mode · ffa71f33
      Kenji Kaneshige authored
      
      
      The current x86 ioremap() doesn't handle physical addresses higher than
      32 bits properly in X86_32 PAE mode. When a physical address higher than
      32 bits is passed to ioremap(), the upper 32 bits of the physical address
      are wrongly cleared. Due to this bug, ioremap() can map the wrong address
      into the linear address space.
      
      In my case, a 64-bit MMIO region was assigned to a PCI device (an ioat
      device) on my system. Because of the ioremap() bug, the wrong physical
      address (instead of the MMIO region) was mapped into the linear address
      space. Because of this, loading the ioatdma driver caused unexpected
      behavior (kernel panic, kernel hang, ...).
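      The truncation is easy to reproduce in plain C. The sketch below is a
      userspace illustration, not the kernel code: `truncate_phys_32()` is a
      hypothetical helper modeling what happens when a 64-bit PAE physical
      address is squeezed into a 32-bit `unsigned long`.

```c
#include <stdint.h>

/* On X86_32 PAE, physical addresses can exceed 32 bits, but a 32-bit
 * `unsigned long` only holds the low 32 bits. Fixed-width types let us
 * simulate the truncation on any host. */

/* hypothetical helper: what the buggy path effectively did */
static inline uint32_t truncate_phys_32(uint64_t phys_addr)
{
    return (uint32_t)phys_addr;  /* upper 32 bits silently dropped */
}

/* correct handling keeps the full width, as resource_size_t does when
 * CONFIG_PHYS_ADDR_T_64BIT is set */
static inline uint64_t keep_phys_64(uint64_t phys_addr)
{
    return phys_addr;
}
```

      A 64-bit MMIO address such as 0x1fe000000 silently becomes 0xfe000000
      when truncated, so a completely different page gets mapped.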
      Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
      LKML-Reference: <4C1AE680.7090408@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      ffa71f33
  2. 18 Jun, 2010 2 commits
    • x86-64, mm: Initialize VDSO earlier on 64 bits · d7a0380d
      Jiri Slaby authored
      
      
      When an initrd is in use and a driver does request_module() in its
      module_init (i.e. __initcall or device_initcall), a modprobe process
      is created with a VDSO mapping. But the VDSO is initialized by an
      __initcall too, i.e. at the same level (at the same time), so it may
      not be initialized yet (link order matters).
      
      Move the VDSO initialization code earlier by switching to something
      before rootfs_initcall, where the initrd is loaded as rootfs;
      specifically, to subsys_initcall. Do it for the standard 64-bit path
      (init_vdso_vars) and for compat (sysenter_setup), just in case people
      have a 32-bit initrd and ia32 emulation built in.
      
      i386 (pure 32-bit) is not affected, since sysenter_setup() is called
      from check_bugs()->identify_boot_cpu() in start_kernel(), before
      rest_init()->kernel_thread(kernel_init), where kernel_init() eventually
      calls do_basic_setup()->do_initcalls().
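      The ordering can be modeled with a toy initcall runner. The level
      numbers below match the convention in include/linux/init.h (subsys = 4,
      rootfs = 5, device = 6), but the functions are illustrative stand-ins,
      not the kernel's.

```c
#include <stddef.h>

/* lower level runs first, mirroring the kernel's initcall levels */
enum { SUBSYS = 4, ROOTFS = 5, DEVICE = 6 };

static int vdso_ready;
static int modprobe_ok;   /* did the spawned modprobe find the VDSO? */

static void init_vdso(void)   { vdso_ready = 1; }
static void load_rootfs(void) { /* initrd unpacked into rootfs here */ }
static void driver_init(void) { modprobe_ok = vdso_ready; /* request_module() */ }

struct initcall { int level; void (*fn)(void); };

static void run_initcalls(struct initcall *calls, size_t n)
{
    /* the kernel runs initcalls level by level, link order within a level */
    for (int level = 0; level <= 7; level++)
        for (size_t i = 0; i < n; i++)
            if (calls[i].level == level)
                calls[i].fn();
}
```

      With init_vdso registered at SUBSYS it always runs before any
      DEVICE-level driver spawns modprobe, regardless of link order; at
      DEVICE level (the old behavior) link order would decide.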
      
      What this patch fixes are early modprobe crashes such as:
      Unpacking initramfs...
      Freeing initrd memory: 9324k freed
      modprobe[368]: segfault at 7fff4429c020 ip 00007fef397e160c \
          sp 00007fff442795c0 error 4 in ld-2.11.2.so[7fef397df000+1f000]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      LKML-Reference: <1276720242-13365-1-git-send-email-jslaby@suse.cz>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      d7a0380d
    • x86, kmmio/mmiotrace: Fix double free of kmmio_fault_pages · 8b8f79b9
      Marcin Slusarz authored
      
      
      After every iounmap, mmiotrace has to free kmmio_fault_pages, but
      it can't do so directly, so it defers the freeing by RCU.
      
      This usually works, but when mmiotraced code calls ioremap/iounmap
      multiple times without sleeping in between (so RCU won't kick in
      and start freeing), it can be given the same virtual address again, so
      at every iounmap mmiotrace will schedule the same pages for
      release. Obviously it will explode on the second free.
      
      Fix it by marking kmmio_fault_pages that are scheduled for
      release and not adding them a second time.
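      The marking idea can be sketched in a few lines of userspace C. The
      struct and function names here are illustrative, not the actual
      mmiotrace internals.

```c
#include <stdbool.h>

/* model of a kmmio_fault_page carrying the new "already scheduled" flag */
struct fault_page {
    bool scheduled_for_release;
};

/* returns true if the page was newly queued for deferred (RCU-style)
 * freeing, false if it was already pending: the double-free case */
static bool queue_for_release(struct fault_page *p, int *queued_count)
{
    if (p->scheduled_for_release)
        return false;              /* skip: release already pending */
    p->scheduled_for_release = true;
    (*queued_count)++;
    return true;
}
```

      A second iounmap of the same (reused) virtual address now finds the
      flag set and leaves the page alone instead of queueing it twice.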
      Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
      Tested-by: Marcin Kocielnicki <koriakin@0x04.net>
      Tested-by: Shinpei KATO <shinpei@il.is.s.u-tokyo.ac.jp>
      Acked-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Marcin Kocielnicki <koriakin@0x04.net>
      Cc: nouveau@lists.freedesktop.org
      Cc: <stable@kernel.org>
      LKML-Reference: <20100613215654.GA3829@joi.lan>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8b8f79b9
  3. 11 Jun, 2010 33 commits
  4. 10 Jun, 2010 4 commits
    • pkt_sched: gen_estimator: add a new lock · ae638c47
      Eric Dumazet authored
      
      
      gen_kill_estimator() / gen_new_estimator() are not always called with
      RTNL held.
      
      net/netfilter/xt_RATEEST.c is one user of these APIs that does not hold
      RTNL, so random corruption can occur between "tc" and "iptables".
      
      Add a new fine-grained lock instead of trying to take RTNL in netfilter.
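      The shape of the fix, a dedicated lock serializing all estimator
      updates, can be sketched in userspace with a pthread mutex standing in
      for the kernel spinlock. The `_model` names are hypothetical, not the
      real net/core/gen_estimator.c API.

```c
#include <pthread.h>

/* one dedicated lock for the estimator list, so callers that do not
 * hold RTNL (e.g. xt_RATEEST) are still serialized against tc */
static pthread_mutex_t est_lock = PTHREAD_MUTEX_INITIALIZER;
static int est_count;

static void gen_new_estimator_model(void)
{
    pthread_mutex_lock(&est_lock);   /* was: implicit reliance on RTNL */
    est_count++;                     /* insert into the estimator list */
    pthread_mutex_unlock(&est_lock);
}

static void gen_kill_estimator_model(void)
{
    pthread_mutex_lock(&est_lock);
    est_count--;                     /* remove from the estimator list */
    pthread_mutex_unlock(&est_lock);
}
```

      Because every path now takes est_lock, concurrent tc and iptables
      operations can no longer interleave their list updates.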
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ae638c47
    • net: deliver skbs on inactive slaves to exact matches · 597a264b
      John Fastabend authored
      
      
      Currently, the accelerated receive path for VLANs will
      drop packets if the real device is an inactive slave and
      the packet is not one of the special packets tested for in
      skb_bond_should_drop().  This behavior differs from
      the non-accelerated path and from packets over a bonded VLAN.
      
      For example,
      
      vlanx -> bond0 -> ethx
      
      will be dropped in the vlan path and not delivered to any
      packet handlers at all.  However,
      
      bond0 -> vlanx -> ethx
      
      and
      
      bond0 -> ethx
      
      will be delivered to handlers that match the exact dev,
      because the VLAN path checks the real_dev, which is not a
      slave, and netif_receive_skb() doesn't drop frames but only
      delivers them to exact matches.
      
      This patch adds an sk_buff flag which is used for tagging
      skbs that would previously have been dropped, and allows the
      skb to continue on to netif_receive_skb().  There we add
      logic to check for the deliver_no_wcard flag and, if it
      is set, deliver only to handlers that match exactly.  This
      makes both paths above consistent and gives packet handlers
      a way to identify skbs that come from inactive slaves.
      Without this patch, in some configurations skbs will be
      delivered to handlers with exact matches and in others
      be dropped outright in the VLAN path.
      
      I have tested the following 4 configurations in failover modes
      and load balancing modes.
      
      # bond0 -> ethx
      
      # vlanx -> bond0 -> ethx
      
      # bond0 -> vlanx -> ethx
      
      # bond0 -> ethx
                  |
        vlanx -> --
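      The delivery rule the patch establishes can be modeled in a few lines.
      This is a userspace sketch with simplified struct definitions, not the
      kernel's sk_buff or packet-handler code.

```c
#include <stdbool.h>
#include <stddef.h>

struct net_device { int id; };

struct sk_buff {
    struct net_device *dev;
    bool deliver_no_wcard;   /* set instead of dropping on an inactive slave */
};

struct packet_handler {
    struct net_device *dev;  /* NULL means wildcard: all devices */
};

/* model of the check: with deliver_no_wcard set, only handlers bound
 * to the exact device receive the skb */
static bool should_deliver(const struct sk_buff *skb,
                           const struct packet_handler *h)
{
    if (h->dev == skb->dev)
        return true;                    /* exact match always delivers */
    if (h->dev == NULL)
        return !skb->deliver_no_wcard;  /* wildcard only for normal skbs */
    return false;
}
```

      An skb from an inactive slave still reaches the exact-match handler,
      while wildcard handlers never see it; an ordinary skb reaches both.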
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      597a264b
    • ceph: try to send partial cap release on cap message on missing inode · 2b2300d6
      Sage Weil authored
      
      
      If we have enough memory to allocate a new cap release message, do so, so
      that we can send a partial release message immediately.  This keeps us from
      making the MDS wait when the cap release it needs is in a partially full
      release message.
      
      If we fail because of ENOMEM, oh well, they'll just have to wait a bit
      longer.
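      The pattern here, an opportunistic allocation with a graceful fallback,
      can be sketched in userspace C. The names are illustrative stand-ins,
      not the ceph client's actual API.

```c
#include <stdlib.h>
#include <stdbool.h>

struct release_msg { int num_caps; };

/* Try to allocate a fresh release message so the partially full one
 * can go out now; on ENOMEM simply keep accumulating (the MDS waits
 * a bit longer).  Sending is modeled here as retiring *pending. */
static bool try_send_partial_release(struct release_msg **pending)
{
    struct release_msg *fresh = malloc(sizeof(*fresh));
    if (!fresh)
        return false;          /* ENOMEM fallback: defer, no harm done */
    fresh->num_caps = 0;
    free(*pending);            /* "send" the partially full message */
    *pending = fresh;          /* start accumulating into the new one */
    return true;
}
```

      The key property is that the failure path changes nothing: the pending
      message stays queued and goes out later, exactly as before the patch.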
      Signed-off-by: Sage Weil <sage@newdream.net>
      2b2300d6
    • ceph: release cap on import if we don't have the inode · 3d7ded4d
      Sage Weil authored
      
      
      If we get an IMPORT that gives us a cap, but we don't have the inode, queue
      a release (and try to send it immediately) so that the MDS doesn't get
      stuck waiting for us.
      Signed-off-by: Sage Weil <sage@newdream.net>
      3d7ded4d