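# ThreadSanitizer suppressions for ArangoDB. To apply them, point the TSAN
# runtime at this file, e.g. (the path is just an example):
#   TSAN_OPTIONS="suppressions=/path/to/tsan_arangodb_suppressions.txt" ./arangod
#
# Reports inside boost::lockfree are assumed to be false positives caused by
# TSAN not fully understanding the lock-free algorithms used there.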
race:boost/lockfree/queue.hpp
race:boost/lockfree/detail/freelist.hpp
race:boost::lockfree::queue
# These operators contain an assertion that checks the refCount.
# That read operation is racy, but since it is just an assertion, we don't care!
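# A minimal sketch of the pattern (hypothetical code, not the actual source):
#   AqlItemBlock& SharedAqlItemBlockPtr::operator*() const noexcept {
#     TRI_ASSERT(_block->getRefCount() > 0);  // unsynchronized read, assert-only
#     return *_block;
#   }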
race:SharedAqlItemBlockPtr::operator*
race:SharedAqlItemBlockPtr::operator->
# logCrashInfo calls LOG_TOPIC, which in turn calls std::string::reserve
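# (std::string::reserve may allocate, which is not async-signal-safe, so TSAN
# flags this call chain inside the signal handler.)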
signal:lib/Basics/CrashHandler.cpp
signal:crashHandlerSignalHandler
# A compiler optimization in DBImpl::ReleaseSnapshot() produces code where a
# register is populated with different addresses based on some condition, and
# this register is later read to populate the variable `oldest_snapshot`.
# However, this generated read is non-atomic and therefore results in
# a false-positive race warning. I have filed a corresponding GitHub issue:
# https://github.com/google/sanitizers/issues/1398
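# Illustrative sketch of the described codegen (not the actual code): something like
#   uint64_t oldest_snapshot = cond ? last_sequence_.load() : plain_value_;
# where the compiler selects one of the two source addresses in a register and
# performs a single plain (non-atomic) load through it, so even the atomic
# branch is read non-atomically.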
race:VersionSet::SetLastSequence
# The following scenario is flagged by TSAN: T1(M0 -> M1), T2(M1 -> M2), T3(M2 -> M0)
# T1 establishes leadership
# - calls `LogLeader::executeAppendEntriesRequests` and acquires M0 (LogLeader::_guardedLeaderData)
# - calls `ReplicatedStateManager::leadershipEstablished` and acquires M1 (ReplicatedStateManager::_guarded)
# T2 creates the shard
# - calls `ReplicatedStateManager::getLeader` and acquires M1 (ReplicatedStateManager::_guarded)
# - calls `LeaderStateManager::getStateMachine` and acquires M2 (LeaderStateManager::_guardedData)
# T3 does recovery
# - calls `LeaderStateManager::recoverEntries` and acquires M2 (LeaderStateManager::_guardedData)
# - calls `LogLeader::triggerAsyncReplication` and acquires M0 (LogLeader::_guardedLeaderData)
# It is a false positive because:
# * T3 (recovery) is spawned by T1 (leadership establishment), and T1 is guaranteed to already hold M0 and M1
#   before T3 is started. T1 will always finish and release its locks, regardless of what other threads are doing.
# * T2 (shard creation) may only do significant work if T3 (recovery) has already finished
# (see `LeaderStateManager<S>::getStateMachine()`). Therefore, if T2 acquires M2 before T3 has started,
#   it will release its locks and try again later, because the leader state is unusable until recovery has completed.
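# (For reference: "deadlock:" matches TSAN lock-order-inversion reports whose
# stack traces contain the named symbols.)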
deadlock:replication2::replicated_log::LogLeader::triggerAsyncReplication
deadlock:replication2::replicated_state::ReplicatedStateManager
# TODO - this should be removed once BTS-685 is fixed
race:AllowImplicitCollectionsSwitcher
race:graph::RefactoredTraverserCache::appendVertex
# TODO Fix known thread leaks
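# ("thread:" matches TSAN thread-leak reports; these background threads are
# apparently not joined on shutdown.)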
thread:ClusterFeature::startHeartbeatThread
thread:CacheManagerFeature::start
thread:DatabaseFeature::start
# TODO Fix lock order inversion
deadlock:consensus::Agent::setPersistedState
# TODO Fix data race in arangodbtests
race:DummyConnection::sendRequest