lib: fix data race in rhashtable_rehash_one
rhashtable_rehash_one() uses complex logic to update entry->next field,
after INIT_RHT_NULLS_HEAD and NULLS_MARKER expansion:

entry->next = 1 | ((base + off) << 1)

This can be compiled along the lines of:

entry->next = base + off
entry->next <<= 1
entry->next |= 1

Which will break concurrent readers.

NULLS value recomputation is not needed here, so just remove the
complex logic.

The data race was found with KernelThreadSanitizer (KTSAN).

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Thomas Graf <tgraf@suug.ch>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 7def0f952e
parent 23eedbc243
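The race class is easiest to see outside the kernel. Below is a minimal
user-space C sketch of the pattern the message describes, not the
rhashtable code itself: the hypothetical variable "slot" stands in for
entry->next, and the atomic store stands in for the single-store
guarantee that WRITE_ONCE()/RCU_INIT_POINTER()-style publication gives.

/*
 * Illustrative user-space sketch only (not kernel code): "slot" plays the
 * role of entry->next, which lockless readers may load at any moment.
 */
#include <stdint.h>

uintptr_t slot;         /* shared with concurrent readers */

/*
 * Racy publication: nothing stops the compiler from using the shared
 * location as scratch space, e.g. emitting
 *      slot = base + off;  slot <<= 1;  slot |= 1;
 * so a reader can observe a half-built value.
 */
void publish_racy(uintptr_t base, uintptr_t off)
{
        slot = 1 | ((base + off) << 1);
}

/*
 * Safe publication: compute into a local, then publish with one store
 * that the compiler may not tear or split.
 */
void publish_once(uintptr_t base, uintptr_t off)
{
        uintptr_t v = 1 | ((base + off) << 1);

        __atomic_store_n(&slot, v, __ATOMIC_RELAXED);
}

The fix below goes one step further: rather than recomputing the nulls
value at all, it copies the already-valid head it just read, so a single
plain store is enough.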
@@ -187,10 +187,7 @@ static int rhashtable_rehash_one(struct rhashtable *ht, unsigned int old_hash)
 	head = rht_dereference_bucket(new_tbl->buckets[new_hash],
 				      new_tbl, new_hash);
 
-	if (rht_is_a_nulls(head))
-		INIT_RHT_NULLS_HEAD(entry->next, ht, new_hash);
-	else
-		RCU_INIT_POINTER(entry->next, head);
+	RCU_INIT_POINTER(entry->next, head);
 
 	rcu_assign_pointer(new_tbl->buckets[new_hash], entry);
 	spin_unlock(new_bucket_lock);
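For context, paraphrased versions of the helpers referenced in the hunk,
roughly as they looked in include/linux/rhashtable.h and
include/linux/list_nulls.h of this era. The macro and field names are
real; the bodies here are reconstructed for illustration, not verbatim.

/* Paraphrased, era-approximate definitions (illustrative only). */
#define NULLS_MARKER(value)	(1UL | (((long)(value)) << 1))

/* An empty bucket's head is itself a nulls marker derived from the hash. */
#define INIT_RHT_NULLS_HEAD(ptr, ht, hash) \
	((ptr) = (typeof(ptr)) NULLS_MARKER((ht)->p.nulls_base + (hash)))

/* Low bit set means "nulls marker", not a real rhash_head pointer. */
#define rht_is_a_nulls(ptr)	((unsigned long)(ptr) & 1)

Because the new table's buckets are initialised with this same marker
when the table is allocated, the head just read from
new_tbl->buckets[new_hash] already carries the correct nulls value
whenever the bucket is empty. Copying it with RCU_INIT_POINTER()
therefore preserves the list terminator with a single store, which is
why the removed rht_is_a_nulls() branch was redundant.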