| author | Gerhard Lazu <gerhard@lazu.co.uk> | 2018-02-27 17:03:06 +0000 |
|---|---|---|
| committer | Michael Klishin <michael@clojurewerkz.org> | 2018-03-28 00:56:35 +0300 |
| commit | 3c5d57f35e81c4c84aeae5b0f06edeb68b2ff50b (patch) | |
| tree | bb0e2315506cc0f0e7981bdb2a536a1104996b12 /check_xref | |
| parent | b7a3fb6923f540cea7947a93014deb5331ad8650 (diff) | |
| download | rabbitmq-server-git-3c5d57f35e81c4c84aeae5b0f06edeb68b2ff50b.tar.gz | |
Group queue deletions on_node_down into 10 operations per transaction
When many queues are being deleted, we believe that it is faster to have
fewer Mnesia transactions, so we group 10 queue deletions into a single
Mnesia transaction. The number 10 is arbitrary; we did not experiment
with other batch sizes. One Mnesia transaction per queue deletion feels
like too many transactions, and a single Mnesia transaction for all
queue deletions feels like too few, so this seemed like a sensible
middle ground.
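The batching described above can be sketched as follows (a Python sketch standing in for the Erlang/Mnesia code; `run_transaction`, `delete_one`, and the function names are illustrative, not the actual RabbitMQ API):

```python
def chunks(items, size=10):
    """Split a list into successive chunks of at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def delete_queues(queue_names, run_transaction, delete_one):
    """Delete queues in batches of 10: one transaction per chunk,
    instead of one transaction per queue or one for all queues.
    `run_transaction` stands in for executing a single Mnesia
    transaction; `delete_one` stands in for deleting one queue.
    Returns the number of transactions issued."""
    transactions = 0
    for batch in chunks(queue_names, 10):
        # Capture the batch so all 10 deletions run inside one transaction.
        run_transaction(lambda batch=batch: [delete_one(q) for q in batch])
        transactions += 1
    return transactions
```

For example, deleting 25 queues issues 3 transactions (10 + 10 + 5) rather than 25.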
We cannot tell whether this is a good change, because
rabbit_core_metrics:queue_deleted/1 dominates the runtime and obscures
all other observations. According to qcachegrind,
rabbit_misc:execute_mnesia_transaction/1 takes 1.8s, while
rabbit_core_metrics:queue_deleted/1 takes 132s, of which 131s is spent
in ets:select/2.
How can we optimise rabbit_core_metrics:queue_deleted/1? We are
thinking that rather than calling ets:select/2 twice for every queue, we
should call it twice for all queues that need to be deleted. We don't
know whether this is possible. Alternatively, we might look into
ets:first/1 & ets:next/2 to iterate over the entire table ONCE, handling
all the deleted queues in that single pass. Thoughts @dcorbacho @michaelklishin?
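The single-pass idea might look roughly like this (a Python sketch, assuming the metrics table rows are keyed by queue name; the list stands in for the ETS table, and a set lookup stands in for the match condition, so the table is traversed once regardless of how many queues were deleted):

```python
def metrics_after_deletion(metrics_table, deleted_queues):
    """One pass over the whole metrics table: keep only rows whose
    queue (row[0]) is NOT in the deleted set. Set membership is O(1),
    so the cost is one table scan, not one scan per deleted queue."""
    deleted = set(deleted_queues)
    return [row for row in metrics_table if row[0] not in deleted]
```

Compared with calling ets:select/2 once (or twice) per queue, which scans the table for each deleted queue, this does a single traversal and should scale with table size rather than with table size times the number of deletions.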
For initial context, see #1513
Partner-in-crime: @essen
Diffstat (limited to 'check_xref')
0 files changed, 0 insertions, 0 deletions
