- 12 May, 2016 2 commits
-
-
Emmanuel Grumbach authored
This allows finding a vendor IE from a specific vendor.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
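For illustration, a sketch of such a lookup via cfg80211_find_vendor_ie(), assuming -1 is accepted as a wildcard OUI type; the OUI value is hypothetical:

	#include <net/cfg80211.h>

	#define EXAMPLE_OUI 0x001122	/* illustrative only */

	/* Find any vendor IE from the given vendor: passing -1 as the
	 * OUI type matches regardless of the vendor-specific type. */
	static const u8 *example_find_any_vendor_ie(const u8 *ies, int ies_len)
	{
		return cfg80211_find_vendor_ie(EXAMPLE_OUI, -1, ies, ies_len);
	}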
-
Sara Sharon authored
Some hardware (iwlwifi, for example) de-aggregates A-MSDUs and copies the IV as-is to the generated MPDUs, so the same PN appears in multiple packets without this being a replay attack. Allow the driver to explicitly indicate that a frame is allowed to have the same PN as the previous frame.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
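mac80211 exposes this as an RX status flag; a sketch of a driver setting it on de-aggregated subframes (function and variable names are illustrative, and RX_FLAG_ALLOW_SAME_PN is assumed to be the flag in question):

	#include <net/mac80211.h>

	/* Mark a de-aggregated A-MSDU subframe so that mac80211's replay
	 * detection accepts a PN equal to the previous frame's. */
	static void example_rx_amsdu_subframe(struct ieee80211_hw *hw,
					      struct sk_buff *skb,
					      bool is_amsdu_subframe)
	{
		struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);

		if (is_amsdu_subframe)
			status->flag |= RX_FLAG_ALLOW_SAME_PN;

		ieee80211_rx(hw, skb);
	}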
-
- 27 Apr, 2016 1 commit
-
-
Johannes Berg authored
Nicolas converted most users, but didn't realize some were generated by macros. Convert those over as well.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
- 26 Apr, 2016 3 commits
-
-
Johannes Berg authored
Nicolas's patch missed this, now generating docbook warnings. Add the missing descriptions to address that.
Fixes: 2dad624e ("wireless: use nla_put_u64_64bit()")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Kanchanapally, Vidyullatha authored
Since cfg80211 maintains separate BSS table entries for APs if the same BSSID, SSID pair is seen on multiple channels, it is possible for it to map current_bss to a BSS entry on the wrong channel. This current_bss will not get flushed unless disconnected, and cfg80211 then reports a wrong channel as the associated channel. Fix this by introducing a new cfg80211_connect_bss() function which is similar to cfg80211_connect_result(), but includes an additional parameter: the bss the STA is connected to. This allows drivers to provide the exact bss entry that matches the BSS to which the connection was completed.
Reviewed-by: Jouni Malinen <jouni@qca.qualcomm.com>
Signed-off-by: Vidyullatha Kanchanapally <vkanchan@qti.qualcomm.com>
Signed-off-by: Sunil Dutt <usdutt@qti.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
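A sketch of a driver completing a connection this way (names are illustrative; the exact cfg80211_connect_bss() parameter list should be taken from its kernel-doc):

	#include <net/cfg80211.h>

	/* Passing the bss entry (rather than just the BSSID) lets cfg80211
	 * map current_bss to the entry on the correct channel. */
	static void example_connect_done(struct net_device *dev,
					 struct cfg80211_bss *bss,
					 const u8 *req_ie, size_t req_ie_len,
					 const u8 *resp_ie, size_t resp_ie_len)
	{
		cfg80211_connect_bss(dev, bss->bssid, bss,
				     req_ie, req_ie_len,
				     resp_ie, resp_ie_len,
				     WLAN_STATUS_SUCCESS, GFP_KERNEL);
	}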
-
Mohammed Shafi Shajakhan authored
Add support for a new station statistics netlink attribute: NL80211_STA_INFO_RX_DURATION. If present, this attribute contains the aggregate PPDU duration (in microseconds) of all the frames from the peer. This is useful to help understand the total time spent transmitting to us by all of the connected peers.
Signed-off-by: Mohammed Shafi Shajakhan <mohammed@qti.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
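On the driver side, reporting might look like this sketch (the airtime accounting value is hypothetical driver-private state; assuming sinfo->filled is the u64 station-info bitmap):

	#include <net/cfg80211.h>

	/* Fill the new counter from a get_station/sta_statistics path. */
	static void example_fill_rx_duration(struct station_info *sinfo,
					     u64 airtime_usec)
	{
		sinfo->rx_duration = airtime_usec;	/* microseconds */
		sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_DURATION);
	}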
-
- 25 Apr, 2016 17 commits
-
-
Michal Kazior authored
This works on the same implementation principle as codel*.h, i.e. there's a generic header with structures and macros, and an implementation header carrying function definitions to be included in a given driver or module. The fairness logic comes from net/sched/sch_fq_codel.c but is generalized so it is more flexible and easier to re-use.
Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michal Kazior authored
It was impossible to include codel.h for the purpose of having access to the codel_params or codel_vars structure definitions and embedding them in other, more complex structures. This split allows codel.h itself to be treated like any other header file, while codel_qdisc.h and codel_impl.h contain function definitions with logic that was previously in codel.h. This copies over copyrights and doesn't involve code changes other than adding a few additional include directives to net/sched/sch*codel.c.
Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
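What the split enables, as a sketch (structure names are illustrative; assuming the post-split helpers codel_params_init()/codel_vars_init() live in codel_impl.h):

	#include <linux/skbuff.h>
	#include <net/codel.h>		/* types/macros: codel_params, codel_vars */
	#include <net/codel_impl.h>	/* function definitions */

	/* Embed codel state in a driver-private queue structure. */
	struct example_txq {
		struct sk_buff_head queue;
		struct codel_params cparams;	/* configuration (target, interval) */
		struct codel_vars cvars;	/* per-queue dropping state */
	};

	static void example_txq_init(struct example_txq *txq)
	{
		__skb_queue_head_init(&txq->queue);
		codel_params_init(&txq->cparams);
		codel_vars_init(&txq->cvars);
	}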
-
Michal Kazior authored
This strips out qdisc-specific bits from the code and makes it slightly more reusable. Codel will be used by wireless/mac80211 in the future.
Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Benc authored
Commit 751a587a ("route: fix breakage after moving lwtunnel state") moved lwtstate to the end of dst_entry for 32bit archs. This makes it share the cacheline with __refcnt, which had an unknown effect on performance. For this reason, the pointer was kept in place for 64bit archs. However, later performance measurements showed this is of no concern. It turns out that every performance-sensitive path that accesses lwtstate also accesses struct rtable or struct rt6_info, which share the same cache line. Thus, to get rid of a few #ifdefs, move the field to the end of the struct also for 64bit.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
There's some inconsistency in the current logic determining whether the link settings of a given interface can be changed; i.e., in all modes other than the so-called `default' mode the interfaces are forbidden from changing the configuration - but even this rule is not applied to all user APIs that may change the configuration. Instead, let the core module [qed] decide whether an interface can change the configuration by supporting a new API function. We also revise the current rule, allowing all interfaces to change their configurations while laying the infrastructure for future modes where an interface would be blocked from making such a change.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
There's a difference in statistics' names starting at qed and propagating to qede, where egress counters indicate ranges while ingress counters indicate high-end. Align all statistics to follow the same convention - name indicates range.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Craig Gallek authored
d894ba18 ("soreuseport: fix ordering for mixed v4/v6 sockets") was merged as a bug fix to the net tree. Two conflicting changes were committed to net-next before the above fix was merged back to net-next:
ca065d0c ("udp: no longer use SLAB_DESTROY_BY_RCU")
3b24d854 ("tcp/dccp: do not touch listener sk_refcnt under synflood")
These changes switched the data structure used for TCP and UDP sockets from hlist_nulls to hlist. This patch applies the necessary parts of the net tree fix to net-next which were not automatic as part of the merge.
Fixes: 1602f49b ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net")
Signed-off-by: Craig Gallek <kraig@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Valdis reported tons of stack dumps caused by the WARN_ON() in sock_owned_by_user(). This test needs to be relaxed if/when lockdep disables itself. Note that other lockdep_sock_is_held() callers are all from rcu_dereference_protected() sections which already are disabled if/when lockdep has been disabled.
Fixes: fafc4e1e ("sock: tigthen lockdep checks for sock_owned_by_user")
Reported-by: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
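In code, the relaxation amounts to gating the warning on lockdep still being alive; a minimal sketch of the idea (illustrative, not the literal patch):

	#include <linux/debug_locks.h>
	#include <net/sock.h>

	/* Only warn while lockdep is still active: debug_locks is
	 * cleared when lockdep disables itself, after which
	 * lockdep_sock_is_held() can no longer be trusted. */
	static inline void example_assert_sock_held(const struct sock *sk)
	{
		WARN_ON_ONCE(debug_locks && !lockdep_sock_is_held(sk));
	}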
-
- 24 Apr, 2016 1 commit
-
-
Eric Dumazet authored
The Linux TCP stack painfully segments all TSO/GSO packets before retransmits. This was fine back in the days when TSO/GSO were emerging, with their bugs, but we believe the dark age is over. Keeping big packets in write queues, and in stack traversal, has a lot of benefits:
- Less memory overhead, because write queues have fewer skbs.
- Less cpu overhead at ACK processing.
- Better SACK processing, as a lot of studies mentioned how awful linux was at this ;)
- Less cpu overhead to send the rtx packets (IP stack traversal, netfilter traversal, drivers...)
- Better latencies in presence of losses.
- Smaller spikes in fq-like packet schedulers, as retransmits are not constrained by TCP Small Queues.
1% packet losses are common today, and at 100Gbit speeds, this translates to ~80,000 losses per second. Losses are often correlated, and we see many retransmit events leading to 1-MSS trains of packets, at the time hosts are already under stress.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 Apr, 2016 7 commits
-
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
With this function, nla_data() is aligned on a 64-bit area.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
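A sketch of the new helper in use (the attribute numbers are hypothetical; the pad attribute is the one each netlink family now reserves for alignment):

	#include <net/netlink.h>

	/* Hypothetical attribute numbers for some netlink family. */
	enum {
		EXAMPLE_ATTR_UNSPEC,
		EXAMPLE_ATTR_STAT,
		EXAMPLE_ATTR_PAD,
	};

	static int example_put_stat(struct sk_buff *skb, u64 value)
	{
		/* A pad attribute is inserted automatically when needed so
		 * that nla_data() of the u64 lands on a 64-bit boundary. */
		return nla_put_u64_64bit(skb, EXAMPLE_ATTR_STAT, value,
					 EXAMPLE_ATTR_PAD);
	}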
-
Nicolas Dichtel authored
nla_data() is now aligned on a 64-bit area.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
nla_data() is now aligned on a 64-bit area. In fact, there is no user of this function.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
nla_data() is now aligned on a 64-bit area. The temporary function nla_put_be64_32bit() is removed in this patch.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
nla_data() is now aligned on a 64-bit area. A temporary version (nla_put_be64_32bit()) is added for nla_put_net64(). This function is removed in the next patch.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
nla_data() is now aligned on a 64-bit area.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 Apr, 2016 9 commits
-
-
Hannes Frederic Sowa authored
Equivalent to "vxlan: break dependency with netdev drivers", don't autoload geneve module in case the driver is loaded. Instead make the coupling weaker by using netdevice notifiers as proxy. Cc: Jesse Gross <jesse@kernel.org> Signed-off-by:
Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
Currently all drivers depend on and autoload the vxlan module because of how vxlan_get_rx_port is linked into them. Remove this dependency: by using a new event type in the netdevice notifier call chain, we proxy the request from the drivers to flush and re-set-up the vxlan ports not directly via function call but through the already existing netdevice notifier call chain. I added a separate new event type, NETDEV_OFFLOAD_PUSH_VXLAN, to do so. We don't need to save those ids, as the event type field is an unsigned long, and using specialized event types for this purpose seemed the more elegant way. This also comes in handy if in the future we want to add offloading knobs for vxlan.
Cc: Jesse Gross <jesse@kernel.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
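A sketch of the receiving side of that proxying, with illustrative names (example_push_ports() stands in for the tunnel module's re-announce logic):

	#include <linux/netdevice.h>

	/* Hypothetical: re-announce offloaded UDP ports to this NIC. */
	static void example_push_ports(struct net_device *dev)
	{
	}

	/* The tunnel module listens on the notifier chain and re-pushes
	 * its ports when a NIC asks, instead of the NIC calling into the
	 * module directly (which is what forced the autoload). */
	static int example_netdev_event(struct notifier_block *nb,
					unsigned long event, void *ptr)
	{
		struct net_device *dev = netdev_notifier_info_to_dev(ptr);

		if (event == NETDEV_OFFLOAD_PUSH_VXLAN)
			example_push_ports(dev);

		return NOTIFY_DONE;
	}

	static struct notifier_block example_notifier = {
		.notifier_call = example_netdev_event,
	};

The tunnel module would register example_notifier with register_netdevice_notifier() at init time.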
-
Tariq Toukan authored
Introduce the feature of multi-packet WQE (RX Work Queue Element), referred to as MPWQE or Striding RQ, in which WQEs are larger and serve multiple packets each. Every WQE consists of many strides of the same size; every received packet is aligned to the beginning of a stride and is written to consecutive strides within a WQE. In the regular approach, each WQE is big enough to serve one received packet of any size up to MTU, or 64K when device LRO is enabled, making it very wasteful when dealing with small packets or when device LRO is enabled. Thanks to its flexibility, MPWQE allows better memory utilization (implying improvements in CPU utilization and packet rate), as packets consume strides according to their size, preserving the rest of the WQE for other packets.

MPWQE default configuration:
Num of WQEs = 16
Strides per WQE = 2048
Stride size = 64 bytes

The default WQE memory footprint went from 1024*mtu (~1.5MB) to 16 * 2048 * 64 = 2MB per ring. However, HW LRO can now be supported at no additional cost in memory footprint, and hence we turn it on by default and get an even better performance.

Performance tested on ConnectX4-Lx 50G. To isolate the feature under test, the numbers below were measured with HW LRO turned off. We verified that the performance just improves when LRO is turned back on.

* Netperf single TCP stream:
- BW raised by 10-15% for representative packet sizes: default, 64B, 1024B, 1478B, 65536B.
* Netperf multi TCP stream:
- No degradation, line rate reached.
* Pktgen: packet rate raised by 2-10% for traffic of different message sizes: 64B, 128B, 256B, 1024B, and 1500B.
* Pktgen: packet loss in bursts of small messages (64 byte), single stream:

| num packets | packet loss before | packet loss after |
| 2K          | ~ 1K               | 0                 |
| 8K          | ~ 6K               | 0                 |
| 16K         | ~13K               | 0                 |
| 32K         | ~28K               | 0                 |
| 64K         | ~57K               | ~24K              |

This is expected, as the driver can now receive as many small packets (<=64B) as the number of total strides in the ring (default = 2048 * 16) vs. 1024 (the default ring size regardless of packet size) before this feature.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
A queue counter can collect several statistics for one or more hardware queues (QPs, RQs, etc.) that the counter is attached to. For Ethernet it will provide an "out of buffer" counter which collects the number of all packets that are dropped due to lack of software buffers. Here we add device commands to alloc/query/dealloc queue counters.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Rana Shahout <ranas@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
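Assuming helpers of roughly the shape this commit describes (the names and signatures below are an assumption, for illustration only), the counter lifecycle would look like:

	#include <linux/mlx5/driver.h>

	/* Sketch of the alloc/query/dealloc lifecycle; signatures are
	 * approximate, see the mlx5 core headers for the real API. */
	static int example_out_of_buffer(struct mlx5_core_dev *dev, u32 *dropped)
	{
		u16 counter_id;
		int err;

		err = mlx5_core_alloc_q_counter(dev, &counter_id);
		if (err)
			return err;

		/* ... attach counter_id to the RQs when creating them ... */

		err = mlx5_core_query_out_of_buffer(dev, counter_id, dropped);

		mlx5_core_dealloc_q_counter(dev, counter_id);
		return err;
	}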
-
Daniel Jurgens authored
Maintain the PCI status and provide wrappers for enabling and disabling the PCI device. Performing either action more than once without its opposite in between results in warning logs. This occurred when EEH hotplugged the device, causing a warning for disabling an already disabled device.
Fixes: 2ba5fbd6 ('net/mlx4_core: Handle AER flow properly')
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
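A sketch of the wrapper pattern described, with hypothetical structure and field names:

	#include <linux/pci.h>

	/* Track whether the device is enabled and make the wrappers
	 * idempotent, so an EEH-driven double disable does not trigger
	 * the PCI core's warning. */
	struct example_dev_state {
		struct pci_dev *pdev;
		bool pci_enabled;
	};

	static int example_pci_enable(struct example_dev_state *st)
	{
		int err = 0;

		if (!st->pci_enabled) {
			err = pci_enable_device(st->pdev);
			if (!err)
				st->pci_enabled = true;
		}
		return err;
	}

	static void example_pci_disable(struct example_dev_state *st)
	{
		if (st->pci_enabled) {
			pci_disable_device(st->pdev);
			st->pci_enabled = false;
		}
	}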
-
Martin KaFai Lau authored
After receiving sacks, tcp_shifted_skb() will collapse skbs if possible. tx_flags and tskey also have to be merged. This patch reuses tcp_skb_collapse_tstamp() to handle them.

BPF Output Before:
~~~~~
<no-output-due-to-missing-tstamp-event>

BPF Output After:
~~~~~
<...>-2024 [007] d.s. 88.644374: : ee_data:14599

Packetdrill Script:
~~~~~
+0 `sysctl -q -w net.ipv4.tcp_min_tso_segs=10`
+0 `sysctl -q -w net.ipv4.tcp_no_metrics_save=1`
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0

0.100 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 7>
0.100 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
0.200 < . 1:1(0) ack 1 win 257
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0

0.200 write(4, ..., 1460) = 1460
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 13140) = 13140

0.200 > P. 1:1461(1460) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:14601(5840) ack 1

0.300 < . 1:1(0) ack 1 win 257 <sack 1461:14601,nop,nop>
0.300 > P. 1:1461(1460) ack 1

0.400 < . 1:1(0) ack 14601 win 257
0.400 close(4) = 0
0.400 > F. 14601:14601(0) ack 1
0.500 < F. 1:1(0) ack 14602 win 257
0.500 > . 14602:14602(0) ack 2

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch folds NETIF_F_ALL_TSO into the bitmask for NETIF_F_GSO_SOFTWARE. The idea is to avoid duplication of defines, since the only difference between the two was the GSO_UDP bit.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
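A sketch of the consolidation (illustrative _OLD/_NEW names; the authoritative defines live in include/linux/netdev_features.h):

	/* Before: the software-GSO mask spelled the TSO bits out. */
	#define NETIF_F_GSO_SOFTWARE_OLD \
		(NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_UFO)

	/* After: reuse NETIF_F_ALL_TSO, adding only the UDP GSO (UFO) bit. */
	#define NETIF_F_GSO_SOFTWARE_NEW (NETIF_F_ALL_TSO | NETIF_F_UFO)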
-