- Oct 29, 2010
-
-
Al Viro authored
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- Oct 26, 2010
-
-
Hagen Paul Pfeifer authored
Use the new {max,min}3 macros to save some cycles and bytes on the stack. This patch substitutes trivial nested macros with their counterparts.
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Joe Perches <joe@perches.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Hartley Sweeten <hsweeten@visionengravers.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Roland Dreier <rolandd@cisco.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
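A minimal sketch of the substitution this patch performs (variable and function names here are purely illustrative):

#include <linux/kernel.h>

/* min3()/max3() replace the nested forms min(min(a, b), c) / max(max(a, b), c) */
static int range_of_three(int a, int b, int c)
{
        return max3(a, b, c) - min3(a, b, c);   /* span covered by the three values */
}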
-
Jason Gunthorpe authored
Clean up properly if pci_set_consistent_dma_mask() fails.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
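A hedged sketch of the unwind pattern such a fix implies; the helper, labels, and cleanup calls below are hypothetical, not the driver's actual code:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* hypothetical helper: both DMA-mask failures take the same cleanup path */
static int setup_dma_masks(struct pci_dev *pdev)
{
        int ret;

        ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
        if (ret)
                goto bail;
        ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
        if (ret)
                goto bail;              /* previously this failure skipped cleanup */
        return 0;

bail:
        pci_release_regions(pdev);
        pci_disable_device(pdev);
        return ret;
}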
-
Ralph Campbell authored
Some PCIe root complex chip sets don't support advanced error reporting. Allow the driver to load OK if pci_enable_pcie_error_reporting() fails.
Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Ralph Campbell authored
If CONFIG_PCI_MSI is not set, and a QLE7140 is present, the pointer "dd" is uninitialized.
Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Jason Gunthorpe authored
Noticed this odd-looking thing in dmesg:

  ib_qib 0000:02:00.0: <3>ib_qib: Unable to enable pcie error reporting: -5

which is due to a bad use of dev_info.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
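An illustration of the misuse described: "<3>" is the KERN_ERR printk prefix, which dev_info() does not interpret, so it lands verbatim in the message. The exact call and message text below are assumed for the sketch:

/* wrong: the severity prefix ends up inside the logged text */
dev_info(&pdev->dev, KERN_ERR "ib_qib: Unable to enable pcie error reporting: %d\n", ret);

/* better: use the severity-specific helper instead */
dev_err(&pdev->dev, "Unable to enable pcie error reporting: %d\n", ret);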
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- Oct 25, 2010
-
-
Christoph Hellwig authored
Instead of always assigning an increasing inode number in new_inode, move the call to assign it into those callers that actually need it. For now, the set of callers that need it is estimated conservatively: the call is added to all filesystems that do not assign an i_ino by themselves. For a few more filesystems we can avoid assigning any inode number given that they aren't user visible, and for others it could be done lazily when an inode number is actually needed, but that's left for later patches.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
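A minimal sketch of the caller-side pattern this change implies, assuming the get_next_ino() helper that pairs with it in the same series; the filesystem and function names are hypothetical:

#include <linux/fs.h>

/* hypothetical pseudo-filesystem allocation helper */
static struct inode *examplefs_alloc_inode(struct super_block *sb)
{
        struct inode *inode = new_inode(sb);    /* no longer sets a default i_ino */

        if (inode)
                inode->i_ino = get_next_ino();  /* the caller assigns one explicitly */
        return inode;
}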
-
Eli Cohen authored
Since an IB transport port may use either IB or Ethernet as its link layer, add the file /sys/class/infiniband/<device>/ports/<port_num>/link_layer to show the link layer for the port.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Eli Cohen authored
This patch allows IBoE traffic to be encapsulated in 802.1Q-tagged VLAN frames. The VLAN tag is encoded in the GID and derived from it by a simple computation. The netdev notifier callback is modified to catch VLAN device addition/removal, and the port's GID table is updated to reflect the change, so that for each netdevice there is an entry in the GID table. When the port's GID table is exhausted, GID entries will not be added. Only children of the main interfaces can add to the GID table; if a VLAN interface is added on top of another VLAN interface (e.g. "vconfig add eth2.6 8"), then that interface will not add an entry to the GID table.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Eli Cohen authored
Add 802.1q VLAN support to IBoE. The VLAN tag is encoded within the GID derived from a link-local address in the following way: GID[11] and GID[12] contain the VLAN ID when the GID contains a VLAN. The 3-bit user priority field of the packets is identical to the 3 bits of the SL. In the case of rdma_cm apps, the TOS field is used to generate the SL field by a right shift of 5 bits, effectively taking the 3 most significant bits of the TOS field.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
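A rough sketch of the encoding described above. The byte indices follow the text; the byte ordering within the two GID bytes and the helper names are assumptions, not the driver's actual code:

#include <linux/types.h>

/* illustrative only: pack/unpack a VLAN ID at GID bytes 11 and 12 */
static void set_gid_vlan(u8 *gid /* 16 bytes */, u16 vlan_id)
{
        gid[11] = (vlan_id >> 8) & 0xff;
        gid[12] = vlan_id & 0xff;
}

static u16 get_gid_vlan(const u8 *gid)
{
        return (gid[11] << 8) | gid[12];
}

/* rdma_cm case: the SL is the top 3 bits of the TOS byte */
static u8 tos_to_sl(u8 tos)
{
        return tos >> 5;
}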
-
Eli Cohen authored
Add support for IBoE to mlx4_ib. The bulk of the code is handling the new address vector fields; mlx4 needs the MAC address of a remote node to include it in a WQE (for datagrams) or in the QP context (for connected QPs). Address resolution is done by assuming all unicast GIDs are link-local IPv6 addresses. Multicast group attach/detach needs to update the NIC's multicast filters, but since attaching a QP to a multicast group can be done before the QP is bound to a port, for IBoE we need to keep track of all multicast groups that a QP is attached to before it transitions from INIT to RTR (since it does not have a port in the INIT state).
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
[ Many things cleaned up and otherwise monkeyed with; hope I didn't introduce too many bugs. - Roland ]
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Eli Cohen authored
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- Oct 24, 2010
-
-
David Dillow authored
srp_send_tsk_mgmt() was missing the proper DMA sync calls before posting the buffer to the device.
Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Bart Van Assche authored
Use the list_first_entry() macro in ib_srp instead of open-coding the equivalent, which makes the source code slightly more descriptive. The list_first_entry() macro itself was introduced in kernel 2.6.22.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
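A hedged illustration of the substitution; the element type and list name are placeholders, not necessarily ib_srp's own structures:

#include <linux/list.h>

struct request_entry {                  /* placeholder element type */
        struct list_head list;
};

static LIST_HEAD(free_list);

static struct request_entry *first_free(void)
{
        /* before: open-coded access to the first element */
        struct request_entry *req = list_entry(free_list.next, struct request_entry, list);

        /* after: the more descriptive helper, which expands to the same thing */
        req = list_first_entry(&free_list, struct request_entry, list);
        return req;
}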
-
Bart Van Assche authored
As proposed by the SRP (draft) standard, ib_srp reserves one ring element for SRP_TSK_MGMT requests. This patch makes sure that the SCSI mid-layer never tries to queue more than (SRP request limit) - 1 SCSI commands to ib_srp. This improves performance for targets whose request limit is less than or equal to SRP_NORMAL_REQ_SQ_SIZE by reducing the number of BUSY responses reported by ib_srp to the SCSI mid-layer.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
David Dillow authored
Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- Oct 23, 2010
-
-
Jack Morgenstein authored
The Node Description cannot be changed via MADs (it is read-only). Until now, it was changed in the driver via sysfs, and the new Node Description was simply inserted by the driver into MAD responses (replacing the description returned by FW). System startup scripts use the sysfs interface to change the node description at driver startup to show the hostname, etc. However, this has a race condition: the SM could discover the original FW node description rather than the system-specific description if it queried the port before the startup scripts finished running. For mlx4, we fix this with a new FW command (SET_NODE) that allows passing the new node description to FW. When this command is invoked, FW sends a trap 144 to the SM. When it gets this trap, the SM can query the node to obtain the new node description -- thus eliminating the effects of the race. This patch simply calls the SET_NODE command when a new node description is entered via sysfs (thus causing trap 144 to be issued by the FW). We ignore all failures of the SET_NODE command (including those caused by using a device FW that predates the SET_NODE command), since in that case things work just as before.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
matt mooney authored
Signed-off-by: matt mooney <mfm@muteddisk.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Steve Wise authored
For iWARP connections, the connect request is carried in a TCP payload on an already established TCP connection. So if the ucma's backlog is full, the connection request is transmitted and acked at the TCP level by the time the connect request gets dropped in the ucma. The end result is the connection gets rejected by the iWARP provider. Further, a 32 node 256NP OpenMPI job will generate > 128 connect requests on some ranks. This patch increases the default max backlog to 1024, and adds a sysctl variable so the backlog can be adjusted at run time.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
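A rough sketch of how such a tunable is typically exposed via sysctl in this era of the kernel; the variable name, table layout, and registration path are assumptions, not the patch's actual code:

#include <linux/sysctl.h>

static int max_backlog = 1024;          /* new default mentioned above */

/* registered with the sysctl core (e.g. via register_sysctl_table();
 * the exact proc path used by the real patch is not shown here) */
static struct ctl_table ucma_ctl_table[] = {
        {
                .procname       = "max_backlog",
                .data           = &max_backlog,
                .maxlen         = sizeof(max_backlog),
                .mode           = 0644,
                .proc_handler   = proc_dointvec,
        },
        { }
};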
-
Eli Cohen authored
Use the net device's dev_id field to encode the port number of the PCI device. This can be used to associate a net device with the PCI device's port. The encoding is: dev_id = port - 1.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- Oct 22, 2010
-
-
David Dillow authored
This patch adds support for SRP_CRED_REQ to avoid a lockup by targets that use that mechanism to return credits to the initiator. This prevents a lockup observed in the field where we would never add the credits from the SRP_CRED_REQ to our current count, and would therefore never send another command to the target. Minimal support for SRP_AER_REQ is also added, as these messages can also be used to convey additional credits to the initiator. Based upon extensive debugging and code by Bart Van Assche and a bug report by Chris Worley.
Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Bart Van Assche authored
The transmit ring in ib_srp (srp_target.tx_ring) is currently only used for allocating requests sent by the initiator to the target. This patch prepares using that ring for allocation of both requests and responses. Also, this patch differentiates the uses of SRP_SQ_SIZE, increases the size of the IB send completion queue by one element and reserves one transmit ring slot for SRP_TSK_MGMT requests.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Jason Gunthorpe authored
See Table 35 in the IBA spec -- the header order for RDMA_WRITE_ONLY_WITH_IMMEDIATE and SEND_LAST_WITH_IMMEDIATE is different: the RDMA_WRITE_ONLY has a RETH header before the immediate data, so we need a different code path to extract the immediate data. I tested this with a userspace app that does RDMA_WRITE with immediate on a QLE7140.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
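A hedged sketch of the opcode-dependent extraction implied above; the header struct and field names approximate the qib driver's layout but should be treated as assumptions:

/* sketch only: for RDMA_WRITE_ONLY_WITH_IMMEDIATE the RETH sits between the
 * BTH and the immediate data, so the immediate lives at a different offset */
static __be32 extract_imm_data(struct qib_other_headers *ohdr, u32 opcode)
{
        if (opcode == IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE)
                return ohdr->u.rc.imm_data;     /* BTH | RETH | ImmDt | payload */
        return ohdr->u.imm_data;                /* e.g. SEND_LAST_WITH_IMMEDIATE: BTH | ImmDt | payload */
}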
-
Steve Wise authored
The flushing of work requests for user QPs is implemented entirely in the user mode library. The only kernel interaction is to mark the user QP object indicating it is in error when the QP exits RTS. When the user QP operations are called by the application (e.g. post_send, post_recv), the QP-in-error bit is checked and, if set, the library flushes the QP. If, however, the application is not doing IO, but rather just polling the CQ, it will never get flushed work requests. This breaks some classes of applications. This patch adds logic to mark user CQs in error when a QP that is bound to the CQ is marked in error. The library poll code can then notice the CQ is in error and flush all the in-error QPs bound to that CQ.
Design:
 - add 1 extra CQE entry to the CQ memory that will be used to indicate in-error status.
 - return the desired CQ memory size that should be mapped by the library.
 - bump the ABI since the create_cq uverbs response changes.
 - detect older libraries and reduce the mmap size accordingly. (The ABI bump doesn't break old libraries, since they didn't check the ABI field anyway.)
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Steve Wise authored
Remove the local service t4_pktgl_to_skb() and use cxgb4_pktgl_to_skb() exported by cxgb4.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Steve Wise authored
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- Oct 18, 2010
-
-
Justin P. Mattock authored
The patch below updates broken web addresses in the kernel.
Signed-off-by: Justin P. Mattock <justinmattock@gmail.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Finn Thain <fthain@telegraphics.com.au>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Dimitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Mike Frysinger <vapier.adi@gmail.com>
Acked-by: Ben Pfaff <blp@cs.stanford.edu>
Acked-by: Hans J. Koch <hjk@linutronix.de>
Reviewed-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- Oct 16, 2010
-
-
Randy Dunlap authored
Fix kconfig dependency warning to satisfy dependencies:
  warning: (MLX4_EN && NETDEVICES && NETDEV_10000 && PCI && INET || MLX4_INFINIBAND && INFINIBAND) selects MLX4_CORE which has unmet direct dependencies (NETDEVICES && NETDEV_10000 && PCI)
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Oct 15, 2010
-
-
Arnd Bergmann authored
All file_operations should get a .llseek operation so we can make nonseekable_open the default for future file operations without a .llseek pointer. The three cases that we can automatically detect are no_llseek, seq_lseek and default_llseek. For cases where we can we can automatically prove that the file offset is always ignored, we use noop_llseek, which maintains the current behavior of not returning an error from a seek. New drivers should normally not use noop_llseek but instead use no_llseek and call nonseekable_open at open time. Existing drivers can be converted to do the same when the maintainer knows for certain that no user code relies on calling seek on the device file. The generated code is often incorrectly indented and right now contains comments that clarify for each added line why a specific variant was chosen. In the version that gets submitted upstream, the comments will be gone and I will manually fix the indentation, because there does not seem to be a way to do that using coccinelle. Some amount of new code is currently sitting in linux-next that should get the same modifications, which I will do at the end of the merge window. Many thanks to Julia Lawall for helping me learn to write a semantic patch that does all this. ===== begin semantic patch ===== // This adds an llseek= method to all file operations, // as a preparation for making no_llseek the default. // // The rules are // - use no_llseek explicitly if we do nonseekable_open // - use seq_lseek for sequential files // - use default_llseek if we know we access f_pos // - use noop_llseek if we know we don't access f_pos, // but we still want to allow users to call lseek // @ open1 exists @ identifier nested_open; @@ nested_open(...) { <+... nonseekable_open(...) ...+> } @ open exists@ identifier open_f; identifier i, f; identifier open1.nested_open; @@ int open_f(struct inode *i, struct file *f) { <+... ( nonseekable_open(...) | nested_open(...) ) ...+> } @ read disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...) | E = *off ) ...+> } @ read_no_fpos disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { ... when != off } @ write @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...) | E = *off ) ...+> } @ write_no_fpos @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { ... when != off } @ fops0 @ identifier fops; @@ struct file_operations fops = { ... }; @ has_llseek depends on fops0 @ identifier fops0.fops; identifier llseek_f; @@ struct file_operations fops = { ... .llseek = llseek_f, ... }; @ has_read depends on fops0 @ identifier fops0.fops; identifier read_f; @@ struct file_operations fops = { ... .read = read_f, ... }; @ has_write depends on fops0 @ identifier fops0.fops; identifier write_f; @@ struct file_operations fops = { ... .write = write_f, ... }; @ has_open depends on fops0 @ identifier fops0.fops; identifier open_f; @@ struct file_operations fops = { ... .open = open_f, ... 
}; // use no_llseek if we call nonseekable_open //////////////////////////////////////////// @ nonseekable1 depends on !has_llseek && has_open @ identifier fops0.fops; identifier nso ~= "nonseekable_open"; @@ struct file_operations fops = { ... .open = nso, ... +.llseek = no_llseek, /* nonseekable */ }; @ nonseekable2 depends on !has_llseek @ identifier fops0.fops; identifier open.open_f; @@ struct file_operations fops = { ... .open = open_f, ... +.llseek = no_llseek, /* open uses nonseekable */ }; // use seq_lseek for sequential files ///////////////////////////////////// @ seq depends on !has_llseek @ identifier fops0.fops; identifier sr ~= "seq_read"; @@ struct file_operations fops = { ... .read = sr, ... +.llseek = seq_lseek, /* we have seq_read */ }; // use default_llseek if there is a readdir /////////////////////////////////////////// @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier readdir_e; @@ // any other fop is used that changes pos struct file_operations fops = { ... .readdir = readdir_e, ... +.llseek = default_llseek, /* readdir is present */ }; // use default_llseek if at least one of read/write touches f_pos ///////////////////////////////////////////////////////////////// @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read.read_f; @@ // read fops use offset struct file_operations fops = { ... .read = read_f, ... +.llseek = default_llseek, /* read accesses f_pos */ }; @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write.write_f; @@ // write fops use offset struct file_operations fops = { ... .write = write_f, ... + .llseek = default_llseek, /* write accesses f_pos */ }; // Use noop_llseek if neither read nor write accesses f_pos /////////////////////////////////////////////////////////// @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; identifier write_no_fpos.write_f; @@ // write fops use offset struct file_operations fops = { ... .write = write_f, .read = read_f, ... +.llseek = noop_llseek, /* read and write both use no f_pos */ }; @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write_no_fpos.write_f; @@ struct file_operations fops = { ... .write = write_f, ... +.llseek = noop_llseek, /* write uses no f_pos */ }; @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; @@ struct file_operations fops = { ... .read = read_f, ... +.llseek = noop_llseek, /* read uses no f_pos */ }; @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; @@ struct file_operations fops = { ... +.llseek = noop_llseek, /* no read or write fn */ }; ===== End semantic patch ===== Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
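A hedged example of the resulting pattern for a driver whose open path calls nonseekable_open(); the fops and function names here are hypothetical:

static int example_open(struct inode *inode, struct file *file)
{
        return nonseekable_open(inode, file);   /* device offsets are meaningless */
}

static ssize_t example_read(struct file *file, char __user *buf,
                            size_t count, loff_t *ppos)
{
        return 0;                               /* stub; never touches *ppos */
}

static const struct file_operations example_fops = {
        .owner  = THIS_MODULE,
        .open   = example_open,
        .read   = example_read,
        .llseek = no_llseek,    /* explicit, matching the "nonseekable_open" rule above */
};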
-
- Oct 14, 2010
-
-
Eli Cohen authored
Add support for packing IBoE packet headers.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
[ Clean up and fix ib_ud_header_init() a bit. - Roland ]
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- Oct 13, 2010
-
-
Eli Cohen authored
Add support for IBoE device binding and IP --> GID resolution. Path resolving and multicast joining are implemented within cma.c by filling in the responses and running callbacks in the CMA work queue. IP --> GID resolution always yields IPv6 link-local addresses; remote GIDs are derived from the destination MAC address of the remote port. Multicast GIDs are always mapped to multicast MACs as is done in IPv6. (IPv4 multicast is enabled by translating IPv4 multicast addresses to IPv6 multicast as described in <http://www.mail-archive.com/ipng@sunroof.eng.sun.com/msg02134.html>.) Some helper functions are added to ib_addr.h.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
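A minimal sketch of the MAC-to-link-local-GID mapping described above, using the standard modified EUI-64 construction; the helper name is made up for illustration:

#include <linux/types.h>
#include <linux/string.h>

/* illustrative: build a link-local GID (fe80::/64 + modified EUI-64) from a port MAC */
static void mac_to_link_local_gid(const u8 mac[6], u8 gid[16])
{
        memset(gid, 0, 16);
        gid[0]  = 0xfe;
        gid[1]  = 0x80;                 /* fe80::/64 link-local prefix */
        gid[8]  = mac[0] ^ 0x02;        /* flip the universal/local bit */
        gid[9]  = mac[1];
        gid[10] = mac[2];
        gid[11] = 0xff;                 /* EUI-64 ff:fe insertion */
        gid[12] = 0xfe;
        gid[13] = mac[3];
        gid[14] = mac[4];
        gid[15] = mac[5];
}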
-
Eli Cohen authored
Since IBoE uses Ethernet as its link layer, there is no central management entity, so there is no need for QP0. QP1 is still needed since it handles communications between CM agents. This patch will skip QP0 and create only QP1 for IBoE ports.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Eli Cohen authored
IPoIB is IP over the InfiniBand link layer. In the case of IBoE, the link layer is Ethernet and IP can work directly over Ethernet, so disable IPoIB for non-IB_LINK_LAYER_INFINIBAND ports.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
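A hedged sketch of the kind of port filter this implies, using the rdma_port_get_link_layer() helper added earlier in this series; the surrounding loop is illustrative, not IPoIB's actual code:

#include <rdma/ib_verbs.h>

/* illustrative: only bring up IPoIB on ports whose link layer is InfiniBand */
static void add_ipoib_ports(struct ib_device *device, u8 first_port, u8 last_port)
{
        u8 p;

        for (p = first_port; p <= last_port; ++p) {
                if (rdma_port_get_link_layer(device, p) != IB_LINK_LAYER_INFINIBAND)
                        continue;       /* skip Ethernet (IBoE) ports */
                /* ... create the IPoIB interface for port p here ... */
        }
}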
-
- Oct 12, 2010
-
-
Eric Dumazet authored
We tried very hard to remove all possible dev_hold()/dev_put() pairs in the network stack, using RCU conversions. There is still an unavoidable device refcount change for every dst we create/destroy, and this can slow down some workloads (routers or some app servers, mmap af_packet). We can switch to a percpu refcount implementation, now that the dynamic per_cpu infrastructure is mature. On a 64-cpu machine, this consumes 256 bytes per device.
On x86, dev_hold(dev) code:
 before: lock incl 0x280(%ebx)
 after:  movl 0x260(%ebx),%eax
         incl fs:(%eax)
Stress bench (sending 160,000,000 UDP frames, IP route cache disabled, dual E5540 @ 2.53GHz, 32-bit kernel, FIB_TRIE):
 Before: real 1m1.662s, user 0m14.373s, sys 12m55.960s
 After:  real 0m51.179s, user 0m15.329s, sys 10m15.942s
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
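A minimal sketch of the per-cpu refcount idea described above; the pcpu_refcnt field and the wrapper names are assumptions used only to show the shape of the change:

#include <linux/netdevice.h>
#include <linux/percpu.h>

/* illustrative: per-cpu increments avoid a contended locked atomic on the device */
static inline void dev_hold_pcpu(struct net_device *dev)
{
        this_cpu_inc(*dev->pcpu_refcnt);        /* field assumed: int __percpu *pcpu_refcnt */
}

static inline void dev_put_pcpu(struct net_device *dev)
{
        this_cpu_dec(*dev->pcpu_refcnt);
}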
-
- Oct 11, 2010
-
-
Animesh K Trivedi authored
A process can get stuck in an uninterruptible wait in the kernel while destroying a cm_id when iw_cm_connect() fails. For example, when creation of a PD fails but the user continues with an attempt to connect to the server without checking the return value, a NULL qp is found in iw_cm_connect() so the call fails. However, the IWCM_F_CONNECT_WAIT bit is not cleared. destroy_cm_id() then waits forever for IWCM_F_CONNECT_WAIT to be cleared. The same problem exists on the passive side with the accept call. Fix this by clearing the bit and waking up any waiters in the appropriate spots.
Signed-off-by: Animesh Trivedi <atr@zurich.ibm.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Steve Wise authored
We can replace our equivalent open-coded version.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Steve Wise authored
Incorporate BKL removal changes.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Eli Cohen authored
Fix the limit on the size of max fast registration WRs that can be posted to match hardware capabilities.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-