- 08 Jan, 2015 5 commits
-
-
Ying Xue authored
Move the condition checks that verify whether the hash table size exceeds its maximum threshold or has reached its minimum threshold from the resizing functions into the resizing decision functions, avoiding unnecessary wakeups of the worker queue thread.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
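A minimal sketch of the idea (not the verbatim patch; the decision-function signature and the shift/max_shift field names are assumptions based on the rhashtable API of this era): the size bound is checked inside the decision function itself, so the worker is never woken for a resize that would be refused anyway.

    static bool grow_above_75(const struct rhashtable *ht, size_t new_size)
    {
            /* Expand when load factor exceeds 0.75, but only while the
             * table is still below its configured maximum size. */
            return atomic_read(&ht->nelems) > (new_size / 4 * 3) &&
                   (!ht->p.max_shift ||
                    atomic_read(&ht->shift) < ht->p.max_shift);
    }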
-
Ying Xue authored
When removing an object from the hash table, we currently only traverse the old bucket table to check whether the object exists. If it is not found there, we try again. But in the second search loop we still search the old table instead of the future table. As a result, the object may never be removed from the hash table, especially when a resize is in progress and the object has just been saved in the future table.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
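A hedged sketch of the fixed retry logic (variable names and the unlink code are elided; this models the description above rather than quoting the patch):

    tbl = rht_dereference(ht->tbl, ht);
    restart:
            rht_for_each(he, tbl, hash) {
                    if (he != obj)
                            continue;
                    /* unlink the entry and return success */
            }
            /* Not found: if a resize is in flight, the entry may live
             * in the future table, so repeat the search there. */
            if (tbl != rht_dereference(ht->future_tbl, ht)) {
                    tbl = rht_dereference(ht->future_tbl, ht);
                    goto restart;
            }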
-
Ying Xue authored
Introduce a new function called rhashtable_lookup_insert() which makes lookup and insertion atomic under bucket lock protection, helping us avoid introducing an extra lock when we search for and insert an object into the hash table.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
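A hedged usage sketch (the object layout is hypothetical; the bool return on a duplicate key follows from the atomic lookup-plus-insert semantics described above):

    struct my_obj {
            u32                     key;
            struct rhash_head       node;   /* embedded hash linkage */
    };

    /* Atomically fails instead of inserting a duplicate key. */
    if (!rhashtable_lookup_insert(ht, &obj->node)) {
            kfree(obj);             /* an entry with this key exists */
            return -EEXIST;
    }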
-
Ying Xue authored
Introduce a rhashtable_wakeup_worker() helper function to reduce the code duplicated wherever the worker is woken up. Additionally, as long as both the "future_tbl" and "tbl" bucket table pointers point to the same bucket array, we should try to wake up the resizing worker thread; if they point to different arrays, the resize work is not finished yet. However, we currently wake up the worker thread only when the two pointers point to different bucket arrays, which is obviously wrong. That issue is fixed in this patch as well.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
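A sketch of what such a helper can look like, closely modeled on the description above (exact field and callback names are assumptions):

    static void rhashtable_wakeup_worker(struct rhashtable *ht)
    {
            struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
            struct bucket_table *new_tbl =
                    rht_dereference_rcu(ht->future_tbl, ht);
            size_t size = tbl->size;

            /* Only schedule a resize when none is in flight, i.e. both
             * pointers still refer to the same bucket array. */
            if (tbl == new_tbl &&
                ((ht->p.grow_decision && ht->p.grow_decision(ht, size)) ||
                 (ht->p.shrink_decision && ht->p.shrink_decision(ht, size))))
                    schedule_delayed_work(&ht->run_work, 0);
    }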
-
Ying Xue authored
Define an internal compare function and a corresponding compare argument, and then make use of rhashtable_lookup_compare() to look up keys in the hash table, reducing the code duplicated between rhashtable_lookup() and rhashtable_lookup_compare().

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
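A hedged sketch of the pattern (the struct and function names approximate the patch but are not guaranteed to match it):

    struct rhashtable_compare_arg {
            struct rhashtable *ht;
            const void *key;
    };

    static bool rhashtable_compare(void *ptr, void *arg)
    {
            struct rhashtable_compare_arg *x = arg;

            /* Fixed-size key at a configured offset inside the object. */
            return !memcmp(ptr + x->ht->p.key_offset, x->key,
                           x->ht->p.key_len);
    }

    void *rhashtable_lookup(struct rhashtable *ht, const void *key)
    {
            struct rhashtable_compare_arg arg = { .ht = ht, .key = key };

            return rhashtable_lookup_compare(ht, key,
                                             &rhashtable_compare, &arg);
    }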
-
- 03 Jan, 2015 7 commits
-
-
Thomas Graf authored
In order to allow for wider usage of rhashtable, use a special nulls marker to terminate each chain. The reason for not using the existing nulls_list is that the prev pointer usage would not be valid as entries can be linked in two different buckets at the same time.

The 4 nulls base bits can be set through the rhashtable_params structure like this:

    struct rhashtable_params params = {
            [...]
            .nulls_base = (1U << RHT_BASE_SHIFT),
    };

This reduces the hash length from 32 bits to 27 bits.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
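A hedged sketch of what the marker means for a chain walk (the loop body is illustrative only): a chain now ends in an encoded nulls value rather than a plain NULL, so the terminator itself is testable.

    struct rhash_head *he;

    for (he = rht_dereference_rcu(tbl->buckets[hash], ht);
         !rht_is_a_nulls(he);
         he = rht_dereference_rcu(he->next, ht)) {
            /* examine rht_obj(ht, he) */
    }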
-
Thomas Graf authored
Introduces an array of spinlocks to protect bucket mutations. The number of spinlocks per CPU is configurable and selected based on the hash of the bucket. This allows for parallel insertions and removals of entries which do not share a lock.

The patch also defers expansion and shrinking to a worker queue which allows insertion and removal from atomic context. Insertions and deletions may occur in parallel to it and are only held up briefly while the particular bucket is linked or unzipped.

Mutations of the bucket table pointer are protected by a new mutex; read access is RCU protected.

In the event of an expansion or shrinking, the new bucket table allocated is exposed as a so-called future table as soon as the resize process starts. Lookups, deletions, and insertions will briefly use both tables. The future table becomes the main table after an RCU grace period and after initial linking of the old to the new table has been performed. Optimization of the chains to make use of the new number of buckets follows only once the new table is in use.

The side effect of this is that during that RCU grace period, a bucket traversal using any rht_for_each() variant on the main table will not see any insertions performed during the RCU grace period which would at that point land in the future table. The lookup will see them as it searches both tables if needed.

Having multiple insertions and removals occur in parallel requires nelems to become an atomic counter.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
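A sketch of the lock selection described above (field names are assumptions): the per-table spinlock array is indexed by masking the bucket hash, so buckets that map to different locks can be mutated concurrently.

    static spinlock_t *bucket_lock(const struct bucket_table *tbl, u32 hash)
    {
            return &tbl->locks[hash & tbl->locks_mask];
    }

    /* Typical mutation path: */
    spin_lock_bh(bucket_lock(tbl, hash));
    /* ... link or unlink the entry in tbl->buckets[hash] ... */
    spin_unlock_bh(bucket_lock(tbl, hash));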
-
Thomas Graf authored
The removal function of nft_hash currently stores a reference to the previous element during lookup which is used to optimize removal later on. This was possible because a lock is held throughout calling rhashtable_lookup() and rhashtable_remove(). With the introduction of deferred table resizing in parallel to lookups and insertions, the nftables lock will no longer synchronize all table mutations and the stored pprev may become invalid. Removing this optimization makes removal slightly more expensive on average but allows taking the resize cost out of the insert and remove paths.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Cc: netfilter-devel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Subsequent patches will require access to the bucket tail. Access to the tail is relatively cheap as the automatic resizing of the table should keep the number of entries per bucket to no more than 0.75 on average.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
This patch is in preparation to introduce per-bucket spinlocks. It extends all iterator macros to take the bucket table and bucket index. It also introduces a new rht_dereference_bucket() to handle protected accesses to buckets. It introduces a barrier() to the RCU iterators to prevent the compiler from caching the first element. The lockdep verifier is introduced as a stub which always succeeds, and is properly implemented in the next patch when the locks are introduced.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
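A hedged sketch of the extended iterator signature (the object type is a placeholder): the caller now names the bucket table and bucket index explicitly.

    rcu_read_lock();
    rht_for_each_rcu(pos, tbl, hash) {
            /* pos is the embedded rhash_head; translate it back to
             * the enclosing object. */
            struct my_obj *obj = rht_obj(ht, pos);
            /* use obj */
    }
    rcu_read_unlock();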
-
Thomas Graf authored
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Hash the key inside of rhashtable_lookup_compare() like rhashtable_lookup() does. This makes it possible to simplify the hashing functions and keep them private.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Cc: netfilter-devel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
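A sketch of the caller-side change (the compare callback, its argument, and the old hash helper are hypothetical): the caller now passes the key itself rather than a precomputed bucket hash.

    /* Before: the caller had to hash the key itself. */
    he = rhashtable_lookup_compare(ht, my_hashfn(&key), my_compare, &arg);

    /* After: the hashing happens inside the library. */
    he = rhashtable_lookup_compare(ht, &key, my_compare, &arg);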
-
- 10 Dec, 2014 1 commit
-
-
Daniel Borkmann authored
This patch effectively reverts commit 500f8087 ("net: ovs: use CRC32 accelerated flow hash if available"), and other remaining arch_fast_hash() users such as from nfsd via commit 6282cd56 ("NFSD: Don't hand out delegations for 30 seconds after recalling them.") where it has been used as a hash function for bloom filtering.

While we think that these users are actually not much of a concern, it has been requested to remove the arch_fast_hash() library bits that arose from [1] entirely, as per recent discussion [2]. The main argument is that using it as a hash may introduce bias due to its linearity (see avalanche criterion) and thus makes it less clear (though we tried to document that) when this security/performance trade-off is actually acceptable for a general-purpose library function.

Let's therefore avoid any further confusion on this matter and remove it to prevent any future accidental misuse of it. For the time being, this is going to make hashing of flow keys a bit more expensive in the ovs case, but future work could reevaluate a different hashing discipline.

[1] https://patchwork.ozlabs.org/patch/299369/
[2] https://patchwork.ozlabs.org/patch/418756/

Cc: Neil Brown <neilb@suse.de>
Cc: Francesco Fusco <fusco@ntop.org>
Cc: Jesse Gross <jesse@nicira.com>
Cc: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 Nov, 2014 1 commit
-
-
Thomas Graf authored
Verify whether both the lock and RCU protected iterators see all test entries before and after expanding and shrinking have been performed. Also verify whether the number of entries in the hashtable remains stable during expansion and shrinking.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Nov, 2014 4 commits
-
-
Thomas Graf authored
Reallocation is only required for shrinking and expanding, both rely on a mutex for synchronization, and callers of rhashtable_init() are in non-atomic context. Therefore, there is no reason to continue passing allocation hints through the API. Instead, use GFP_KERNEL and add __GFP_NOWARN | __GFP_NORETRY to allow for silent fallback to vzalloc() without the OOM killer jumping in, as pointed out by Eric Dumazet and Eric W. Biederman.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
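A sketch of the resulting allocation strategy, modeled on the description above (the function and field names are assumptions):

    static struct bucket_table *bucket_table_alloc(size_t nbuckets)
    {
            struct bucket_table *tbl;
            size_t size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]);

            /* Try a physically contiguous allocation first, but fail
             * quietly and quickly instead of invoking the OOM killer. */
            tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
            if (!tbl)
                    tbl = vzalloc(size);    /* fall back to vmalloc space */

            return tbl;
    }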
-
Herbert Xu authored
Currently mutex_is_held can only test locks that are global since it takes no arguments. This prevents rhashtable from being used in places where locks are not global, e.g., per-namespace locks. This patch adds a parent field to mutex_is_held and rhashtable_params so that local locks can be used (and tested).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
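A hedged sketch of the pattern this enables (the per-namespace lock name and surrounding structure are hypothetical):

    static int my_hash_lock_is_held(void *parent)
    {
            struct net *net = parent;

            return lockdep_is_held(&net->my_hash_lock); /* hypothetical lock */
    }

    struct rhashtable_params params = {
            [...]
            .mutex_is_held  = my_hash_lock_is_held,
            .parent         = net,
    };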
-
Herbert Xu authored
The rhashtable function mutex_is_held is only used when PROVE_LOCKING is enabled. This patch makes the mutex_is_held field in rhashtable optional depending on PROVE_LOCKING.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
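A sketch of the conditional layout this implies (a paraphrase of the description, not the verbatim header):

    struct rhashtable_params {
            [...]
    #ifdef CONFIG_PROVE_LOCKING
            int     (*mutex_is_held)(void *parent);
            void    *parent;
    #endif
    };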
-
Herbert Xu authored
My editor spewed garbage that looked like memory corruption on my screen. It turns out that a number of occurrences of "fi" got turned into a ligature. This patch replaces these ligatures with the ASCII letters "fi".

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Sep, 2014 1 commit
-
-
Fabian Frederick authored
linux/log2.h was included twice.

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 Sep, 2014 2 commits
-
-
Ying Xue authored
Although the rhashtable library allows the user to specify a fairly large size for the hash table they create, the table may be shrunk to a very small size - HASH_MIN_SIZE (4) - after an object is removed from it for the first time. Subsequently, even if the total number of objects stored in the table stays well below the user's initial setting for a long time, the hash table size is still dynamically adjusted by rhashtable_shrink() or rhashtable_expand() each time an object is inserted into or removed from the table. However, as synchronize_rcu() has to be called whenever the table is shrunk or expanded by those two functions, we should permit the user to set a minimum table size, by configuring the minimum number of shifts according to their specific requirements, thereby avoiding these expensive shrink and expand operations and their synchronize_rcu() calls.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
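A hedged usage sketch (.min_shift is the parameter this patch describes; the other values are illustrative):

    struct rhashtable_params params = {
            [...]
            .nelem_hint     = 1024,
            .min_shift      = 10,   /* never shrink below 2^10 buckets */
    };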
-
Pablo Neira Ayuso authored
No need for rht_dereference() from rhashtable_destroy() since the existing callers don't hold the mutex when invoking this function, i.e.:

1) Netlink: this is called in case of memory allocation errors in the initialization path; no nl_sk_hash_lock is held.
2) Netfilter: this is called from the rcu callback; no nfnl_lock is held either.

I think it's reasonable to assume that the caller has to make sure that no hash resizing may happen before releasing the bucket array. Therefore, the caller should be responsible for releasing this in a safe way; document this to make people aware of it.

This resolves an rcu lockdep splat in nft_hash:

    ===============================
    [ INFO: suspicious RCU usage. ]
    3.16.0+ #178 Not tainted
    -------------------------------
    lib/rhashtable.c:596 suspicious rcu_dereference_protected() usage!

    other info that might help us debug this:

    rcu_scheduler_active = 1, debug_locks = 1
    1 lock held by ksoftirqd/2/18:
     #0: (rcu_callback){......}, at: [<ffffffff810918fd>] rcu_process_callbacks+0x27e/0x4c7

    stack backtrace:
    CPU: 2 PID: 18 Comm: ksoftirqd/2 Not tainted 3.16.0+ #178
    Hardware name: LENOVO 23259H1/23259H1, BIOS G2ET32WW (1.12 ) 05/30/2012
     0000000000000001 ffff88011706bb68 ffffffff8143debc 0000000000000000
     ffff880117062610 ffff88011706bb98 ffffffff81077515 ffff8800ca041a50
     0000000000000004 ffff8800ca386480 ffff8800ca041a00 ffff88011706bbb8
    Call Trace:
     [<ffffffff8143debc>] dump_stack+0x4e/0x68
     [<ffffffff81077515>] lockdep_rcu_suspicious+0xfa/0x103
     [<ffffffff81228b1b>] rhashtable_destroy+0x46/0x52
     [<ffffffffa06f21a7>] nft_hash_destroy+0x73/0x82 [nft_hash]

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Acked-by: Thomas Graf <tgraf@suug.ch>
-
- 26 Aug, 2014 1 commit
-
-
Geert Uytterhoeven authored
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 14 Aug, 2014 2 commits
-
-
Thomas Graf authored
No need to export rht_obj(), all inner to outer object translations occur internally. It was intended to be used with rht_for_each() which now primarily serves as the iterator for rhashtable_remove_pprev() to effectively flush and free the full table.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Properly annotate next pointers as access is RCU protected in the lookup path.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 Aug, 2014 1 commit
-
-
Thomas Graf authored
Generic implementation of a resizable, scalable, concurrent hash table based on [0]. The implementation supports both fixed-size keys, specified via an offset and length, and arbitrary keys via user-supplied hash and compare functions. Lookups are lockless and protected as RCU read-side critical sections. Automatic growing/shrinking based on user-configurable watermarks is available while allowing concurrent lookups to take place.

Objects to be hashed must include a struct rhash_head. The reason for not using the existing struct hlist_head is that during expansion and shrinking two buckets will point to a single entry, which would lead to obscure reverse chaining behaviour.

Code includes a boot selftest if CONFIG_TEST_RHASHTABLE is defined.

[0] https://www.usenix.org/legacy/event/atc11/tech/final_files/Triplett.pdf

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Reviewed-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
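A hedged usage sketch against this initial API (the object layout and parameter values are illustrative; jhash is one plausible hash function a fixed-size-key user would supply):

    #include <linux/jhash.h>
    #include <linux/rhashtable.h>

    struct test_obj {
            u32                     value;  /* the key */
            struct rhash_head       node;   /* embedded hash linkage */
    };

    static struct rhashtable_params params = {
            .nelem_hint     = 64,
            .head_offset    = offsetof(struct test_obj, node),
            .key_offset     = offsetof(struct test_obj, value),
            .key_len        = sizeof(u32),
            .hashfn         = jhash,
    };

    /* After rhashtable_init(&ht, &params), lookups are lockless and
     * run under RCU protection: */
    rcu_read_lock();
    obj = rhashtable_lookup(&ht, &key);
    rcu_read_unlock();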
-