- Sep 06, 2010
-
-
Santiago Leon authored
Use netdev_dbg to standardise the debug output.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
These functions appear before their use, so we can remove the redundant prototypes.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
We were using alloc_skb, which doesn't create any headroom. Change it to use netdev_alloc_skb to match most other drivers.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
We export all the driver-specific statistics via ethtool, so there is no need to duplicate this in procfs.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
This patch enables TCP checksum offload support for IPv6 on ibmveth. This completely eliminates the generation and checking of the checksum for IPv6 packets that are completely virtual and never touch a physical network. A basic TCPIPV6_STREAM netperf run showed a ~30% throughput improvement when an MTU of 64000 was used.

This feature is enabled by default, as is the case for IPv4 checksum offload. When checksum offload is enabled, the driver will negotiate IPv4 and IPv6 offload with the firmware separately and enable what is available. As long as either IPv4 or IPv6 offload is supported and enabled, the device will report that checksum offload is enabled. The device stats, available through ethtool, will display which checksum offload features are supported/enabled by firmware.

Performance testing against a stock kernel shows no regression for IPv4 or IPv6 in terms of throughput or processor utilization, with checksum disabled or enabled.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
Remove code in the device probe function where we set up the checksum offload feature and replace it with a call to an existing function that does the same thing. This is done to clean up the driver in preparation for adding IPv6 checksum offload support.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
On some machines we can improve the bandwidth by ensuring rx buffers are not in the cache. Add a module option, disabled by default, that flushes rx buffers on insertion.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
ibmveth can scatter/gather up to 6 segments. If we go over this then we have no option but to call skb_linearize, like other drivers with similar limitations do.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
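The fragment-count check behind that fallback can be sketched in plain C. This is a hypothetical illustration, not the driver's code; MAX_DESCRIPTORS and needs_linearize are invented names:

```c
#include <stddef.h>

/* Hypothetical sketch of the limit described above: the transmit
 * interface accepts at most 6 scatter/gather descriptors, so a packet
 * whose linear part plus page fragments needs more than that must
 * first be copied into one contiguous buffer, which is what
 * skb_linearize() does for the driver. */

#define MAX_DESCRIPTORS 6

/* nr_frags counts page fragments beyond the linear header, so the
 * total descriptor count is nr_frags + 1. */
static int needs_linearize(size_t nr_frags)
{
    return nr_frags + 1 > MAX_DESCRIPTORS;
}
```

With this check, a packet carrying six page fragments (seven descriptors total) would take the slow linearize path, while anything with five or fewer fragments is sent as-is.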
-
Anton Blanchard authored
We want to order the read in ibmveth_rxq_pending_buffer and the read of ibmveth_rxq_buffer_valid, which are both cacheable memory. smp_rmb() is good enough for this.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
For small packets, create a new skb and copy the packet into it so we avoid tearing down and creating a TCE entry.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
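As a rough standalone sketch of that receive path, with malloc/memcpy standing in for skb allocation and COPYBREAK a made-up threshold:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch, not the driver's code: for packets below a
 * copybreak threshold, copy the data into a freshly allocated buffer
 * so the large DMA-mapped receive buffer can be recycled without
 * tearing down and re-creating its TCE mapping. */

#define COPYBREAK 256  /* hypothetical threshold */

/* Returns the buffer to hand to the stack; *copied reports whether
 * the small-packet copy path was taken. */
static unsigned char *rx_maybe_copy(unsigned char *rx_buf, size_t len,
                                    int *copied)
{
    if (len < COPYBREAK) {
        unsigned char *pkt = malloc(len);
        if (pkt) {
            memcpy(pkt, rx_buf, len);
            *copied = 1;   /* rx_buf stays mapped and is reused */
            return pkt;
        }
    }
    *copied = 0;           /* large packet: hand up rx_buf itself */
    return rx_buf;
}
```

For large packets the original buffer is handed up unchanged, and (in the real driver) its mapping would then have to be torn down and a replacement created.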
-
Santiago Leon authored
Use the existing bounce buffer if we send a buffer under a certain size. This saves the overhead of a TCE map/unmap. I can't see any reason for the wmb() in the bounce buffer case; if we need a barrier it should be before we call h_send_logical_lan, but we have nothing in the common case. Remove it.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
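A standalone sketch of that transmit fast path, with invented names and size threshold (this is not the driver's code):

```c
#include <string.h>

/* Illustrative sketch: a small, permanently DMA-mapped bounce buffer
 * absorbs short transmits, so only large packets pay for a fresh TCE
 * map/unmap on each send. */

#define BOUNCE_SIZE 128  /* hypothetical cutoff */

struct tx_path {
    unsigned char bounce[BOUNCE_SIZE]; /* mapped once at init */
    unsigned long maps;                /* fresh TCE mappings performed */
};

/* Returns nonzero if the bounce-buffer path was used. */
static int xmit(struct tx_path *tx, const unsigned char *data, size_t len)
{
    if (len <= BOUNCE_SIZE) {
        memcpy(tx->bounce, data, len);  /* no map/unmap needed */
        /* ...hand the bounce buffer's existing mapping to firmware... */
        return 1;
    }
    tx->maps++;  /* large packet: map, send, unmap */
    return 0;
}
```

The design trade-off is the same as rx copybreak: below the cutoff, one memcpy is cheaper than setting up and tearing down a mapping.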
-
Santiago Leon authored
The ibmveth adapter needs locking in the transmit routine to protect the bounce_buffer, but it sets LLTX and forgets to add any of its own locking. Just remove the deprecated LLTX option. Remove the stats lock in the process.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santiago Leon authored
At the moment we try to replenish the receive ring on every rx interrupt. We even have a pool->threshold but aren't using it. To limit the maximum latency incurred when refilling, change the threshold from 1/2 to 7/8 and reduce the largest rx pool from 768 buffers to 512, which should be more than enough.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
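The arithmetic behind that policy can be sketched as follows (identifiers are illustrative, not the driver's real ones). Raising the trigger from 1/2 to 7/8 of the pool means replenishment kicks in after only about an eighth of the buffers have been consumed, so no single refill pass has to restock more than roughly size/8 buffers:

```c
/* Hedged sketch of the refill policy described above: rather than
 * topping the pool up on every rx interrupt, only replenish once the
 * available-buffer count drops below a threshold set at 7/8 of the
 * pool size, which bounds the batch of work any one refill pass does. */

struct rx_pool {
    unsigned int size;      /* total buffers in the pool */
    unsigned int threshold; /* refill trigger: 7/8 of size */
};

static void pool_init(struct rx_pool *pool, unsigned int size)
{
    pool->size = size;
    pool->threshold = size * 7 / 8;
}

/* Replenish only when available buffers fall below the threshold. */
static int needs_refill(const struct rx_pool *pool, unsigned int available)
{
    return available < pool->threshold;
}
```

For the 512-buffer pool mentioned above, the threshold works out to 448, so a refill pass restocks at most 64 buffers.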
-
Santiago Leon authored
Replace some modulus operators with an increment and compare to avoid an integer divide.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Santiago Leon <santil@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
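The transformation is generic; a self-contained sketch (RING_SIZE and the function names are invented for illustration):

```c
/* Before: advancing a ring index with a modulus costs an integer
 * divide on every step. After: an increment with a compare-and-reset
 * gives the same wraparound without the divide. */

#define RING_SIZE 512

/* Before: one integer divide per call. */
static unsigned int advance_mod(unsigned int index)
{
    return (index + 1) % RING_SIZE;
}

/* After: a compare and conditional reset, no divide. */
static unsigned int advance_wrap(unsigned int index)
{
    if (++index == RING_SIZE)
        index = 0;
    return index;
}
```

Both forms agree for any in-range index; the second is cheaper on CPUs where integer division is slow, and the pattern only works because the step is exactly 1.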
-
Denis Kirjanov authored
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Denis Kirjanov authored
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Denis Kirjanov authored
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Change several WAN drivers to make strings and other initialize-only parameters const. Compile tested only (with no new warnings).

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Sep 05, 2010
-
-
Eric Dumazet authored
While porting GRO to r8169, I found this driver has a bug in its rx path. All skbs given to the network stack had their ip_summed set to CHECKSUM_NONE, while hardware said they had correct TCP/UDP checksums. The reason is the driver sets skb->ip_summed on the original skb before the copy eventually done by copybreak. The fresh skb gets the ip_summed = CHECKSUM_NONE value, forcing the network stack to recompute the checksum and preventing my GRO patch from working. The fix is to set ip_summed after the skb copy.

Note: rx_copybreak's current value is 16383, so all frames are copied...

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
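The bug class is easy to reproduce in miniature. In this standalone sketch (fake_skb stands in for struct sk_buff; none of these are the driver's identifiers), the copy gets fresh metadata, so a flag set on the original before the copy is silently lost:

```c
#include <stdlib.h>
#include <string.h>

enum { CHECKSUM_NONE = 0, CHECKSUM_UNNECESSARY = 1 };

struct fake_skb {
    int ip_summed;
    unsigned char data[64];
};

/* Models copybreak: the copy's metadata starts fresh (CHECKSUM_NONE).
 * Allocation is assumed to succeed in this sketch. */
static struct fake_skb *copybreak_copy(const struct fake_skb *orig)
{
    struct fake_skb *copy = calloc(1, sizeof *copy); /* ip_summed = 0 */
    memcpy(copy->data, orig->data, sizeof copy->data);
    return copy;
}

/* Buggy order: mark the original, then copy -> the mark is lost. */
static struct fake_skb *rx_buggy(struct fake_skb *orig)
{
    orig->ip_summed = CHECKSUM_UNNECESSARY;
    return copybreak_copy(orig);
}

/* Fixed order: copy first, then mark the skb actually handed up. */
static struct fake_skb *rx_fixed(struct fake_skb *orig)
{
    struct fake_skb *skb = copybreak_copy(orig);
    skb->ip_summed = CHECKSUM_UNNECESSARY;
    return skb;
}
```

The general lesson: set per-packet metadata on whichever descriptor is actually delivered to the stack, after any copy decision has been made.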
-
- Sep 03, 2010
-
-
Casey Leedom authored
Don't call flush_workqueue() on the cxgb3 Work Queue in cxgb_down() when we're being called from the fatal error task ... which is executing on the cxgb3 Work Queue.

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Platform code needs to deal with them now.

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Casey Leedom authored
Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
struct tulip_private is a bit large (an order-1 allocation even on a 32-bit arch); try to shrink it by removing its net_device_stats field.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Sep 02, 2010
-
-
Eric Dumazet authored
Fresh skbs have ip_summed set to CHECKSUM_NONE (0), so we can avoid setting skb->ip_summed to CHECKSUM_NONE again in drivers. Introduce a skb_checksum_none_assert() helper so that we keep this assertion documented in driver sources, and change most occurrences of:

skb->ip_summed = CHECKSUM_NONE;

to:

skb_checksum_none_assert(skb);

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
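A self-contained sketch of the helper's idea follows. In the kernel the check compiles to a no-op unless debugging is enabled; here assert() and a fake_skb struct stand in for the kernel's machinery and struct sk_buff:

```c
#include <assert.h>

/* Instead of redundantly writing CHECKSUM_NONE onto a fresh skb,
 * assert that it already holds that value, keeping the invariant
 * documented at the call site at zero runtime cost in production. */

enum { CHECKSUM_NONE = 0, CHECKSUM_UNNECESSARY = 1 };

struct fake_skb { int ip_summed; };

static void skb_checksum_none_assert(const struct fake_skb *skb)
{
    /* In the kernel this check is compiled out unless debugging
     * is enabled. */
    assert(skb->ip_summed == CHECKSUM_NONE);
}
```

The call replaces a dead store with a documented expectation: any driver path that hands the helper an skb with a different ip_summed value trips the assertion during debugging.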
-
Eric Dumazet authored
The get_stats() method incorrectly clears a global array before folding various stats, which can break SNMP applications. Switch to the 64-bit flavor to work on a user-supplied buffer, and provide 64-bit counters even on 32-bit arches. Also fix a bug in bnad_netdev_hwstats_fill() for rx_fifo_errors, where a fold was missing (only the last counter was taken into account).

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Rasesh Mody <rmody@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Kill last_rx use in l2tp and two net drivers.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Sep 01, 2010
-
-
David S. Miller authored
Add a dma_addr_t 64-bit case for powerpc with 64-bit physical addresses.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huang Weiyi authored
Remove duplicated #include's in drivers/net/pxa168_eth.c.

Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
- napi_gro_flush() is exported from net/core/dev.c, to avoid an irq_save/irq_restore in the packet receive path.
- Use napi_gro_receive() instead of netif_receive_skb().
- Use napi_gro_flush() before calling __napi_complete().
- Turn on NETIF_F_GRO by default.
- Tested on a Marvell 88E8001 Gigabit NIC.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
Add a call to of_node_put in the error handling code following a call to of_find_matching_node.

This patch also moves the existing call to of_node_put after the call to iounmap in the error handling code, to make it possible to jump to of_node_put without doing iounmap. These appear to be disjoint operations, so the ordering doesn't matter.

This patch furthermore changes the -ENODEV result in the error handling code for of_find_matching_node to a return of 0, as found in the error handling code for of_iomap, because the return type of the function is unsigned.

The semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/):

// <smpl>
@r exists@
local idexpression x;
expression E,E1,E2;
statement S;
@@

*x =
(of_find_node_by_path
|of_find_node_by_name
|of_find_node_by_phandle
|of_get_parent
|of_get_next_parent
|of_get_next_child
|of_find_compatible_node
|of_match_node
|of_find_node_by_type
|of_find_node_with_property
|of_find_matching_node
|of_parse_phandle
)(...);
...
if (x == NULL) S
<... when != x = E
*if (...) {
  ... when != of_node_put(x)
      when != if (...) { ... of_node_put(x); ... }
(
  return <+...x...+>;
|
* return ...;
)
}
...>
(
E2 = x;
|
of_node_put(x);
)
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Reviewed-by: Wolfram Sang <w.sang@pengutronix.de>
Acked-by: Wolfgang Grandegger <wg@grandegger.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
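The cleanup ordering can be shown with a standalone mock; get_node/put_node and map_io/unmap_io below are invented stand-ins for the of_* and iomap calls, not the driver's code. Releasing in reverse order of acquisition lets an error path that has no mapping yet jump straight to dropping the node reference, and failures report 0 rather than -ENODEV because (as the commit notes) the function's return type is unsigned:

```c
/* Reference/mapping counters let the test verify that every path
 * releases exactly what it acquired. */
static int node_refs;  /* outstanding node references */
static int mappings;   /* outstanding iomappings */

static int get_node(int ok)  { if (ok) node_refs++; return ok; }
static void put_node(void)   { node_refs--; }
static int map_io(int ok)    { if (ok) mappings++; return ok; }
static void unmap_io(void)   { mappings--; }

/* Returns 1 on success, 0 on any failure (unsigned return type). */
static unsigned int probe(int node_ok, int map_ok)
{
    unsigned int ret = 0;

    if (!get_node(node_ok))
        return 0;        /* lookup failed: nothing to release */
    if (!map_io(map_ok))
        goto put;        /* no mapping yet: skip unmap, drop the ref */
    /* ... use the mapping ... */
    ret = 1;
    unmap_io();          /* release in reverse order of acquisition */
put:
    put_node();
    return ret;
}
```

Placing put_node() at the bottom label means every path that acquired the node reference, with or without a mapping, falls through the same release.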
-
Denis Kirjanov authored
Use a helper routine to disable chip interrupts.

Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Update version to 1.52.53-5.

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Add BCM84823 to the supported PHYs.

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Change the 848xx LED configuration according to the new microcode (boards were shipped only with the new microcode).

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Remove the unneeded setting of XAUI low power for the BCM8727. This was required only in older microcode, which is not in the field.

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Change BCM848xx behavior to fit IEEE, such that setting 10Mb/100Mb will use forced speed, and setting 1Gb/10Gb will use auto-negotiation with the specific speed advertised.

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Reset the link before applying any new link settings, to avoid a potential link issue caused by previous link settings.

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
On BCM8727-based boards, setting the default 10G link speed after the link was set to 1G may lead to a link-down issue. The problem was that the code wrote the right value, but to the wrong registers.

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Fix a potential link issue caused by insufficient delay time during SPIROM load on the BCM8073/BCM8727.

Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-