Commit 536f8098 authored by Jeff Garzik

Merge /spare/repo/linux-2.6/

parents e86ee668 3fd07d3b
......@@ -2211,6 +2211,15 @@ D: OV511 driver
S: (address available on request)
S: USA
N: Ian McDonald
E: iam4@cs.waikato.ac.nz
E: imcdnzl@gmail.com
W: http://wand.net.nz/~iam4
W: http://imcdnzl.blogspot.com
D: DCCP, CCID3
S: Hamilton
S: New Zealand
N: Patrick McHardy
E: kaber@trash.net
P: 1024D/12155E80 B128 7DE6 FF0A C2B2 48BE AB4C C9D4 964E 1215 5E80
......@@ -2246,19 +2255,12 @@ S: D-90453 Nuernberg
S: Germany
N: Arnaldo Carvalho de Melo
E: acme@conectiva.com.br
E: acme@kernel.org
E: acme@gnu.org
W: http://bazar2.conectiva.com.br/~acme
W: http://advogato.org/person/acme
E: acme@mandriva.com
E: acme@ghostprotocols.net
W: http://oops.ghostprotocols.net:81/blog/
P: 1024D/9224DF01 D5DF E3BB E3C8 BCBB F8AD 841A B6AB 4681 9224 DF01
D: wanrouter hacking
D: misc Makefile, Config.in, drivers and network stacks fixes
D: IPX & LLC network stacks maintainer
D: Cyclom 2X synchronous card driver
D: wl3501 PCMCIA wireless card driver
D: i18n for minicom, net-tools, util-linux, fetchmail, etc
S: Conectiva S.A.
D: IPX, LLC, DCCP, cyc2x, wl3501_cs, net/ hacks
S: Mandriva
S: R. Tocantins, 89 - Cristo Rei
S: 80050-430 - Curitiba - Paraná
S: Brazil
......
......@@ -410,7 +410,26 @@ Kernel messages do not have to be terminated with a period.
Printing numbers in parentheses (%d) adds no value and should be avoided.
Chapter 13: References
Chapter 13: Allocating memory
The kernel provides the following general purpose memory allocators:
kmalloc(), kzalloc(), kcalloc(), and vmalloc(). Please refer to the API
documentation for further information about them.
The preferred form for passing a size of a struct is the following:
p = kmalloc(sizeof(*p), ...);
The alternative form where the struct name is spelled out hurts readability and
introduces an opportunity for a bug when the pointer variable type is changed
but the corresponding sizeof that is passed to a memory allocator is not.
Casting the return value, which is a void pointer, is redundant. The conversion
from void pointer to any other pointer type is guaranteed by the C programming
language.
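For example, a minimal sketch of the preferred style (struct foo and the
GFP_KERNEL flag are just placeholders here):

	struct foo *p;

	p = kmalloc(sizeof(*p), GFP_KERNEL);	/* no cast needed; sizeof(*p)
						   follows p's type automatically */
	if (!p)
		return -ENOMEM;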
Chapter 14: References
The C Programming Language, Second Edition
by Brian W. Kernighan and Dennis M. Ritchie.
......
......@@ -1105,7 +1105,7 @@ static struct block_device_operations opt_fops = {
</listitem>
<listitem>
<para>
Function names as strings (__func__).
Function names as strings (__FUNCTION__).
</para>
</listitem>
<listitem>
......
......@@ -13,6 +13,8 @@ the BIOS on Dell servers (starting from servers sold since 1999), desktops
and notebooks (starting from those sold in 2005).
Please go to http://support.dell.com to register; there you can find info on
OpenManage and Dell Update packages (DUP).
Libsmbios can also be used to update the BIOS on Dell systems; go to
http://linux.dell.com/libsmbios/ for details.
The Dell_RBU driver supports BIOS update using the monolithic image and packetized
image methods. In the monolithic case the driver allocates a contiguous chunk
......@@ -22,8 +24,8 @@ would place each packet in contiguous physical memory. The driver also
maintains a link list of packets for reading them back.
If the dell_rbu driver is unloaded all the allocated memory is freed.
The rbu driver needs to have an application which will inform the BIOS to
enable the update in the next system reboot.
The rbu driver needs to have an application (as mentioned above) which will
inform the BIOS to enable the update in the next system reboot.
The user should not unload the rbu driver after downloading the BIOS image
or updating.
......@@ -42,9 +44,11 @@ In case of the packet mechanism the single memory can be broken into smaller chunks
of contiguous memory and the BIOS image is scattered in these packets.
By default the driver uses monolithic memory for the update type. This can be
changed to contiguous during the driver load time by specifying the load
changed to packets during the driver load time by specifying the load
parameter image_type=packet. This can also be changed later as below
echo packet > /sys/devices/platform/dell_rbu/image_type
Also echoing either mono, packet or init into image_type will free up the
memory allocated by the driver.
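As a userspace illustration only (the helper name below is invented and error
handling is minimal), the same mode switch could be done from C instead of the
echo command shown above:

	/* Invented helper -- equivalent to
	 * "echo packet > /sys/devices/platform/dell_rbu/image_type". */
	#include <stdio.h>

	static int set_image_type(const char *type)
	{
		FILE *f = fopen("/sys/devices/platform/dell_rbu/image_type", "w");

		if (!f)
			return -1;
		fprintf(f, "%s\n", type);	/* "mono", "packet" or "init" */
		return fclose(f);
	}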
Do the steps below to download the BIOS image.
1) echo 1 > /sys/class/firmware/dell_rbu/loading
......@@ -53,9 +57,13 @@ Do the steps below to download the BIOS image.
The /sys/class/firmware/dell_rbu/ entries will remain till the following is
done.
echo -1 > /sys/class/firmware/dell_rbu/loading
echo -1 > /sys/class/firmware/dell_rbu/loading.
Until this step is completed the driver cannot be unloaded.
If a user accidentally executes steps 1 and 3 above without executing step 2,
the /sys/class/firmware/dell_rbu/ entries will disappear.
The entries can be recreated by doing the following
echo init > /sys/devices/platform/dell_rbu/image_type
NOTE: echoing init into image_type does not change its original value.
Also the driver provides a read-only /sys/devices/platform/dell_rbu/data file to
read back the downloaded image. This is useful in case of packet update
......
......@@ -15,7 +15,7 @@ retrieve the data as it becomes available.
The format of the data logged into the channel buffers is completely
up to the relayfs client; relayfs does however provide hooks which
allow clients to impose some stucture on the buffer data. Nor does
allow clients to impose some structure on the buffer data. Nor does
relayfs implement any form of data filtering - this also is left to
the client. The purpose is to keep relayfs as simple as possible.
......
An ad-hoc collection of notes on IA64 MCA and INIT processing. Feel
free to update it with notes about any area that is not clear.
---
MCA/INIT are completely asynchronous. They can occur at any time, when
the OS is in any state. Including when one of the cpus is already
holding a spinlock. Trying to get any lock from MCA/INIT state is
asking for deadlock. Also the state of structures that are protected
by locks is indeterminate, including linked lists.
---
The complicated ia64 MCA process. All of this is mandated by Intel's
specification for ia64 SAL, error recovery and unwind; it is not as
if we have a choice here.
* MCA occurs on one cpu, usually due to a double bit memory error.
This is the monarch cpu.
* SAL sends an MCA rendezvous interrupt (which is a normal interrupt)
to all the other cpus, the slaves.
* Slave cpus that receive the MCA interrupt call down into SAL, they
end up spinning disabled while the MCA is being serviced.
* If any slave cpu was already spinning disabled when the MCA occurred
then it cannot service the MCA interrupt. SAL waits ~20 seconds then
sends an unmaskable INIT event to the slave cpus that have not
already rendezvoused.
* Because MCA/INIT can be delivered at any time, including when the cpu
is down in PAL in physical mode, the registers at the time of the
event are _completely_ undefined. In particular the MCA/INIT
handlers cannot rely on the thread pointer, PAL physical mode can
(and does) modify TP. It is allowed to do that as long as it resets
TP on return. However MCA/INIT events expose us to these PAL
internal TP changes. Hence curr_task().
* If an MCA/INIT event occurs while the kernel was running (not user
space) and the kernel has called PAL then the MCA/INIT handler cannot
assume that the kernel stack is in a fit state to be used. Mainly
because PAL may or may not maintain the stack pointer internally.
Because the MCA/INIT handlers cannot trust the kernel stack, they
have to use their own, per-cpu stacks. The MCA/INIT stacks are
preformatted with just enough task state to let the relevant handlers
do their job.
* Unlike most other architectures, the ia64 struct task is embedded in
the kernel stack[1]. So switching to a new kernel stack means that
we switch to a new task as well. Because various bits of the kernel
assume that current points into the struct task, switching to a new
stack also means a new value for current.
* Once all slaves have rendezvoused and are spinning disabled, the
monarch is entered. The monarch now tries to diagnose the problem
and decide if it can recover or not.
* Part of the monarch's job is to look at the state of all the other
tasks. The only way to do that on ia64 is to call the unwinder,
as mandated by Intel.
* The starting point for the unwind depends on whether a task is
running or not. That is, whether it is on a cpu or is blocked. The
monarch has to determine whether or not a task is on a cpu before it
knows how to start unwinding it. The tasks that received an MCA or
INIT event are no longer running, they have been converted to blocked
tasks. But (and it's a big but), the cpus that received the MCA
rendezvous interrupt are still running on their normal kernel stacks!
* To distinguish between these two cases, the monarch must know which
tasks are on a cpu and which are not. Hence each slave cpu that
switches to an MCA/INIT stack, registers its new stack using
set_curr_task(), so the monarch can tell that the _original_ task is
no longer running on that cpu. That gives us a decent chance of
getting a valid backtrace of the _original_ task.
* MCA/INIT can be nested, to a depth of 2 on any cpu. In the case of a
nested error, we want diagnostics on the MCA/INIT handler that
failed, not on the task that was originally running. Again this
requires set_curr_task() so the MCA/INIT handlers can register their
own stack as running on that cpu. Then a recursive error gets a
trace of the failing handler's "task".
[1] My (Keith Owens) original design called for ia64 to separate its
struct task and the kernel stacks. Then the MCA/INIT data would be
chained stacks like i386 interrupt stacks. But that required
radical surgery on the rest of ia64, plus extra hard wired TLB
entries with its associated performance degradation. David
Mosberger vetoed that approach. Which meant that separate kernel
stacks meant separate "tasks" for the MCA/INIT handlers.
---
INIT is less complicated than MCA. Pressing the nmi button or using
the equivalent command on the management console sends INIT to all
cpus. SAL picks one of the cpus as the monarch and the rest are
slaves. All the OS INIT handlers are entered at approximately the same
time. The OS monarch prints the state of all tasks and returns, after
which the slaves return and the system resumes.
At least that is what is supposed to happen. Alas there are broken
versions of SAL out there. Some drive all the cpus as monarchs. Some
drive them all as slaves. Some drive one cpu as monarch, wait for that
cpu to return from the OS then drive the rest as slaves. Some versions
of SAL cannot even cope with returning from the OS, they spin inside
SAL on resume. The OS INIT code has workarounds for some of these
broken SAL symptoms, but some simply cannot be fixed from the OS side.
---
The scheduler hooks used by ia64 (curr_task, set_curr_task) are layer
violations. Unfortunately MCA/INIT start off as massive layer
violations (can occur at _any_ time) and they build from there.
At least ia64 makes an attempt at recovering from hardware errors, but
it is a difficult problem because of the asynchronous nature of these
errors. When processing an unmaskable interrupt we sometimes need
special code to cope with our inability to take any locks.
---
How is ia64 MCA/INIT different from x86 NMI?
* x86 NMI typically gets delivered to one cpu. MCA/INIT gets sent to
all cpus.
* x86 NMI cannot be nested. MCA/INIT can be nested, to a depth of 2
per cpu.
* x86 has a separate struct task which points to one of multiple kernel
stacks. ia64 has the struct task embedded in the single kernel
stack, so switching stack means switching task.
* x86 does not call the BIOS so the NMI handler does not have to worry
about any registers having changed. MCA/INIT can occur while the cpu
is in PAL in physical mode, with undefined registers and an undefined
kernel stack.
* i386 backtrace is not very sensitive to whether a process is running
or not. ia64 unwind is very, very sensitive to whether a process is
running or not.
---
What happens when MCA/INIT is delivered while a cpu is running user
space code?
The user mode registers are stored in the RSE area of the MCA/INIT stack on
entry to the OS and are restored from there on return to SAL, so user
mode registers are preserved across a recoverable MCA/INIT. Since the
OS has no idea what unwind data is available for the user space stack,
MCA/INIT never tries to backtrace user space. Which means that the OS
does not bother making the user space process look like a blocked task,
i.e. the OS does not copy pt_regs and switch_stack to the user space
stack. Also the OS has no idea how big the user space RSE and memory
stacks are, which makes it too risky to copy the saved state to a user
mode stack.
---
How do we get a backtrace on the tasks that were running when MCA/INIT
was delivered?
mca.c:::ia64_mca_modify_original_stack(). That identifies and
verifies the original kernel stack, copies the dirty registers from
the MCA/INIT stack's RSE to the original stack's RSE, copies the
skeleton struct pt_regs and switch_stack to the original stack, fills
in the skeleton structures from the PAL minstate area and updates the
original stack's thread.ksp. That makes the original stack look
exactly like any other blocked task, i.e. it now appears to be
sleeping. To get a backtrace, just start with thread.ksp for the
original task and unwind like any other sleeping task.
---
How do we identify the tasks that were running when MCA/INIT was
delivered?
If the previous task has been verified and converted to a blocked
state, then sos->prev_task on the MCA/INIT stack is updated to point to
the previous task. You can look at that field in dumps or debuggers.
To help distinguish between the handler and the original tasks,
handlers have _TIF_MCA_INIT set in thread_info.flags.
The sos data is always in the MCA/INIT handler stack, at offset
MCA_SOS_OFFSET. You can get that value from mca_asm.h or calculate it
as KERNEL_STACK_SIZE - sizeof(struct pt_regs) - sizeof(struct
ia64_sal_os_state), with 16 byte alignment for all structures.
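For illustration, that calculation could be written out as below. The macro
names are invented and this is only one reading of the formula above, not the
actual mca_asm.h or asm/thread_info.h definitions.
	/* Invented names -- each structure is assumed to be aligned down
	   to a 16 byte boundary, per the text above. */
	#define ALIGN16(x)		((x) & ~15UL)
	#define SOS_OFFSET_SKETCH \
		ALIGN16(ALIGN16(KERNEL_STACK_SIZE - sizeof(struct pt_regs)) \
			- sizeof(struct ia64_sal_os_state))

	/* A handler task can be told apart from the original task by its
	   thread_info flags (see the _TIF_MCA_INIT note above). */
	#define IS_MCA_INIT_HANDLER(t) \
		(((t)->thread_info->flags & _TIF_MCA_INIT) != 0)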
Also the comm field of the MCA/INIT task is modified to include the pid
of the original task, for humans to use. For example, a comm field of
'MCA 12159' means that pid 12159 was running when the MCA was
delivered.
Revised: 2000-Dec-05.
Again: 2002-Jul-06
Again: 2005-Sep-19
NOTE:
......@@ -18,8 +19,8 @@ called USB Request Block, or URB for short.
and deliver the data and status back.
- Execution of an URB is inherently an asynchronous operation, i.e. the
usb_submit_urb(urb) call returns immediately after it has successfully queued
the requested action.
usb_submit_urb(urb) call returns immediately after it has successfully
queued the requested action.
- Transfers for one URB can be canceled with usb_unlink_urb(urb) at any time.
......@@ -94,8 +95,9 @@ To free an URB, use
void usb_free_urb(struct urb *urb)
You may not free an urb that you've submitted, but which hasn't yet been
returned to you in a completion callback.
You may free an urb that you've submitted, but which hasn't yet been
returned to you in a completion callback. It will automatically be
deallocated when it is no longer in use.
1.4. What has to be filled in?
......@@ -145,30 +147,36 @@ to get seamless ISO streaming.
1.6. How to cancel an already running URB?
For an URB which you've submitted, but which hasn't been returned to
your driver by the host controller, call
There are two ways to cancel an URB you've submitted but which hasn't
been returned to your driver yet. For an asynchronous cancel, call
int usb_unlink_urb(struct urb *urb)
It removes the urb from the internal list and frees all allocated
HW descriptors. The status is changed to reflect unlinking. After
usb_unlink_urb() returns with that status code, you can free the URB
with usb_free_urb().
HW descriptors. The status is changed to reflect unlinking. Note
that the URB will not normally have finished when usb_unlink_urb()
returns; you must still wait for the completion handler to be called.
There is also an asynchronous unlink mode. To use this, set the
URB_ASYNC_UNLINK flag in urb->transfer_flags before calling
usb_unlink_urb(). When using async unlinking, the URB will not
normally be unlinked when usb_unlink_urb() returns. Instead, wait
for the completion handler to be called.
To cancel an URB synchronously, call
void usb_kill_urb(struct urb *urb)
It does everything usb_unlink_urb does, and in addition it waits
until after the URB has been returned and the completion handler
has finished. It also marks the URB as temporarily unusable, so
that if the completion handler or anyone else tries to resubmit it
they will get a -EPERM error. Thus you can be sure that when
usb_kill_urb() returns, the URB is totally idle.
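For example, a driver's disconnect path would typically use the synchronous
form; the skel_dev structure and its int_urb field below are invented for
illustration:
	static void skel_disconnect(struct usb_interface *intf)
	{
		struct skel_dev *dev = usb_get_intfdata(intf);

		usb_kill_urb(dev->int_urb);	/* blocks until the URB is idle */
		usb_free_urb(dev->int_urb);	/* now safe to free */
	}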
1.7. What about the completion handler?
The handler is of the following type:
typedef void (*usb_complete_t)(struct urb *);
typedef void (*usb_complete_t)(struct urb *, struct pt_regs *)
i.e. it gets just the URB that caused the completion call.
I.e., it gets the URB that caused the completion call, plus the
register values at the time of the corresponding interrupt (if any).
In the completion handler, you should have a look at urb->status to
detect any USB errors. Since the context parameter is included in the URB,
you can pass information to the completion handler.
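A minimal sketch of such a handler (the skel_dev structure and its fields are
invented):
	static void skel_complete(struct urb *urb, struct pt_regs *regs)
	{
		struct skel_dev *dev = urb->context;	/* set when the URB was filled */

		if (urb->status)
			printk(KERN_DEBUG "skel: urb failed, status %d\n", urb->status);
		else
			dev->bytes_done += urb->actual_length;
		/* called in interrupt context -- never sleep here */
	}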
......@@ -176,17 +184,11 @@ you can pass information to the completion handler.
Note that even when an error (or unlink) is reported, data may have been
transferred. That's because USB transfers are packetized; it might take
sixteen packets to transfer your 1KByte buffer, and ten of them might
have transferred successfully before the completion is called.
have transferred successfully before the completion was called.
NOTE: ***** WARNING *****
Don't use urb->dev field in your completion handler; it's cleared
as part of giving urbs back to drivers. (Addressing an issue with
ownership of periodic URBs, which was otherwise ambiguous.) Instead,
use urb->context to hold all the data your driver needs.
NOTE: ***** WARNING *****
Also, NEVER SLEEP IN A COMPLETION HANDLER. These are normally called
NEVER SLEEP IN A COMPLETION HANDLER. These are normally called
during hardware interrupt processing. If you can, defer substantial
work to a tasklet (bottom half) to keep system latencies low. You'll
probably need to use spinlocks to protect data structures you manipulate
......@@ -229,24 +231,10 @@ ISO data with some other event stream.
Interrupt transfers, like isochronous transfers, are periodic, and happen
in intervals that are powers of two (1, 2, 4 etc) units. Units are frames
for full and low speed devices, and microframes for high speed ones.
Currently, after you submit one interrupt URB, that urb is owned by the
host controller driver until you cancel it with usb_unlink_urb(). You
may unlink interrupt urbs in their completion handlers, if you need to.
After a transfer completion is called, the URB is automagically resubmitted.
THIS BEHAVIOR IS EXPECTED TO BE REMOVED!!
Interrupt transfers may only send (or receive) the "maxpacket" value for
the given interrupt endpoint; if you need more data, you will need to
copy that data out of (or into) another buffer. Similarly, you can't
queue interrupt transfers.
THESE RESTRICTIONS ARE EXPECTED TO BE REMOVED!!
Note that this automagic resubmission model does make it awkward to use
interrupt OUT transfers. The portable solution involves unlinking those
OUT urbs after the data is transferred, and perhaps submitting a final
URB for a short packet.
The usb_submit_urb() call modifies urb->interval to the implemented interval
value that is less than or equal to the requested interval value.
In Linux 2.6, unlike earlier versions, interrupt URBs are not automagically
restarted when they complete. They end when the completion handler is
called, just like other URBs. If you want an interrupt URB to be restarted,
your completion handler must resubmit it.
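As an illustration (names invented, error handling trimmed), such a
resubmitting interrupt completion handler might look like:
	static void skel_int_complete(struct urb *urb, struct pt_regs *regs)
	{
		if (urb->status)	/* unlinked, device gone, ...: stop here */
			return;

		/* consume urb->transfer_buffer, then restart the URB */
		if (usb_submit_urb(urb, GFP_ATOMIC))
			printk(KERN_ERR "skel: interrupt URB resubmit failed\n");
	}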
......@@ -686,6 +686,13 @@ P: Guennadi Liakhovetski
M: g.liakhovetski@gmx.de
S: Maintained
DCCP PROTOCOL
P: Arnaldo Carvalho de Melo
M: acme@mandriva.com
L: dccp@vger.kernel.org
W: http://www.wlug.org.nz/DCCP
S: Maintained
DECnet NETWORK LAYER
P: Patrick Caulfield
M: patrick@tykepenguin.com
......@@ -1056,8 +1063,6 @@ M: wli@holomorphy.com
S: Maintained
I2C SUBSYSTEM
P: Greg Kroah-Hartman
M: greg@kroah.com
P: Jean Delvare
M: khali@linux-fr.org
L: lm-sensors@lm-sensors.org
......@@ -2259,6 +2264,12 @@ M: kristen.c.accardi@intel.com
L: pcihpd-discuss@lists.sourceforge.net
S: Maintained
SKGE, SKY2 10/100/1000 GIGABIT ETHERNET DRIVERS
P: Stephen Hemminger
M: shemminger@osdl.org
L: netdev@vger.kernel.org
S: Maintained
SPARC (sparc32):
P: William L. Irwin
M: wli@holomorphy.com
......@@ -2271,12 +2282,6 @@ M: R.E.Wolff@BitWizard.nl
L: linux-kernel@vger.kernel.org ?
S: Supported
SPX NETWORK LAYER
P: Jay Schulist
M: jschlst@samba.org
L: netdev@vger.kernel.org
S: Supported
SRM (Alpha) environment access
P: Jan-Benedict Glaw
M: jbglaw@lug-owl.de
......
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 14
EXTRAVERSION =-rc1
EXTRAVERSION =-rc2
NAME=Affluent Albatross
# *DOCUMENTATION*
......
......@@ -149,6 +149,9 @@ CONFIGURING the kernel:
"make gconfig" X windows (Gtk) based configuration tool.
"make oldconfig" Default all questions based on the contents of
your existing ./.config file.
"make silentoldconfig"
Like above, but avoids cluttering the screen
with questions already answered.
NOTES on "make config":
- having unnecessary drivers will make the kernel bigger, and can
......@@ -169,9 +172,6 @@ CONFIGURING the kernel:
should probably answer 'n' to the questions for
"development", "experimental", or "debugging" features.
- Check the top Makefile for further site-dependent configuration
(default SVGA mode etc).
COMPILING the kernel:
- Make sure you have gcc 2.95.3 available.
......@@ -199,6 +199,9 @@ COMPILING the kernel:
are installing a new kernel with the same version number as your
working kernel, make a backup of your modules directory before you
do a "make modules_install".
Alternatively, before compiling, edit your Makefile and change the
"EXTRAVERSION" line - its content is appended to the regular kernel
version.
- In order to boot your new kernel, you'll need to copy the kernel
image (e.g. .../linux/arch/i386/boot/bzImage after compilation)
......
......@@ -37,6 +37,7 @@
#include <linux/namei.h>
#include <linux/uio.h>
#include <linux/vfs.h>
#include <linux/rcupdate.h>
#include <asm/fpu.h>
#include <asm/io.h>
......@@ -975,6 +976,7 @@ osf_select(int n, fd_set __user *inp, fd_set __user *outp, fd_set __user *exp,
long timeout;
int ret = -EINVAL;
struct fdtable *fdt;
int max_fdset;
timeout = MAX_SCHEDULE_TIMEOUT;
if (tvp) {
......@@ -996,8 +998,11 @@ osf_select(int n, fd_set __user *inp, fd_set __user *outp, fd_set __user *exp,
}
}
rcu_read_lock();
fdt = files_fdtable(current->files);
if (n < 0 || n > fdt->max_fdset)
max_fdset = fdt->max_fdset;
rcu_read_unlock();
if (n < 0 || n > max_fdset)
goto out_nofds;
/*
......
......@@ -394,6 +394,22 @@ clipper_init_irq(void)
* 10 64 bit PCI option slot 3 (not bus 0)
*/
static int __init
isa_irq_fixup(struct pci_dev *dev, int irq)
{
u8 irq8;
if (irq > 0)
return irq;
/* This interrupt is routed via ISA bridge, so we'll
just have to trust whatever value the console might
have assigned. */
pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq8);
return irq8 & 0xf;
}
static int __init
dp264_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
{
......@@ -407,25 +423,13 @@ dp264_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
{ 16+ 3, 16+ 3, 16+ 2, 16+ 1, 16+ 0} /* IdSel 10 slot 3 */
};
const long min_idsel = 5, max_idsel = 10, irqs_per_slot = 5;
struct pci_controller *hose = dev->sysdata;
int irq = COMMON_TABLE_LOOKUP;
if (irq > 0) {
if (irq > 0)
irq += 16 * hose->index;
} else {
/* ??? The Contaq IDE controller on the ISA bridge uses
"legacy" interrupts 14 and 15. I don't know if anything
can wind up at the same slot+pin on hose1, so we'll
just have to trust whatever value the console might
have assigned. */
u8 irq8;
pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq8);
irq = irq8;
}
return irq;
return isa_irq_fixup(dev, irq);
}
static int __init
......@@ -453,7 +457,8 @@ monet_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
{ 24, 24, 25, 26, 27} /* IdSel 15 slot 5 PCI2*/
};
const long min_idsel = 3, max_idsel = 15, irqs_per_slot = 5;
return COMMON_TABLE_LOOKUP;
return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP);
}
static u8 __init
......@@ -507,7 +512,8 @@ webbrick_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
{ 47, 47, 46, 45, 44}, /* IdSel 17 slot 3 */
};
const long min_idsel = 7, max_idsel = 17, irqs_per_slot = 5;
return COMMON_TABLE_LOOKUP;
return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP);