| Commit message (Collapse) | Author | Age | Files | Lines |
Remove Ra segment_max_entries override
So that it uses Ra's internal default of 4096 instead, which is safer for
larger message sizes.
rabbitmq/handle-deadlocks-in-peer_discovery_classic_config_SUITE
peer_discovery_classic_config_SUITE: Handle deadlocks
... when nodes are waiting for each other to finish Mnesia
initialization.
If the success condition is not met, we reset and restart all nodes
except the first one to trigger peer discovery again, then check the
success condition once more.
The raft.* conf parameters only take effect at the next available
opportunity and won't affect in-flight data.
rabbitmq/work-around-cli-circular-dep-in-feature_flags_SUITE
feature_flags_SUITE: Work around CLI/rabbitmq-server circular dependency
We need to copy `rabbit` into `my_plugin`'s plugins directory, not because
`my_plugin` depends on it, but because the CLI erroneously depends on
the broker.
This can't be fixed easily because it is a circular dependency
(i.e. the broker depends on the CLI). So until a proper solution is
implemented, keep this second copy of the broker for the CLI to find.
rabbitmq/lrb-fix-flaky-peer_discovery_classic_config-test
Ensure randomized_startup_delay_range custom value is used
This ensures the test completes within the 90-second time limit:
https://pivotal.slack.com/archives/C055BSG8E/p1585840790221000
https://github.com/rabbitmq/rabbitmq-server/commit/609501c46d7e18a7ea103bfa0188e73c3c4fc951#commitcomment-38268475
rabbit_fifo: set timer when publish rejected
and no current leader is known, so that we can retry after a timeout
in single active consumer test
10s is not enough for CI.
clustering_management_SUITE: No need to stop node after start_app failure in `erlang_config`
... in `erlang_config`.
Since #2180, a failed `start_app` does not take the node down anymore.
Trying to restart the node just after was failing (because the node is
still there), but this went unnoticed so far because the return value of
`start_node()` is not checked.
However, since
rabbitmq/rabbitmq-ct-helpers@c033d9272afaf3575505533c81f1c0c7cfcb6206,
the Make recipe which starts the node automatically stops it if the
start failed somewhere, so as not to leave an unwanted node around.
This means that after the failing
`rabbit_ct_broker_helpers:start_node()`, the node was effectively
stopped this time, causing the rest of the testcase to fail.
for up to N seconds
Depends on rabbitmq/rabbitmq-ct-helpers@98f1c4a8012c006965257f2875873bf9d08416bc
to make it clear that it is a mock-based unit test
Move rabbit_channel config value to config record
writer_gc_threshold is a static value and should be in the static config
record, not in the main channel record, which should only hold mutable
data fields.
as opposed to eacces
Increase wait timeouts in rabbit_fifo_int
250ms is probably a bit on the low side.
from unit_SUITE and unit_inbroker_parallel_SUITE
Reduce memory usage during startup
https://github.com/rabbitmq/rabbitmq-common/pull/368/commits/36c9fbe59af6d6cce67fc430b333c44f30cc4c40
Using spawn_link in rabbit_msg_store:build_index alters the
supervision tree such that there are unwanted side effects in
rabbit_vhost_msg_store. We monitor the spawned process instead, so that
if there is a failure to enqueue the scan for each file, the vhost
fails to start and reports an error.
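The intent above is that a worker failure must surface in the caller rather than disappear. A minimal sketch of the same pattern, in Python rather than the Erlang implementation (the `build_index`/`scan` names here are illustrative, not RabbitMQ's API): calling `Future.result()` re-raises a worker's exception in the caller, playing the role of the monitor on the spawned process.

```python
from concurrent.futures import ThreadPoolExecutor

def build_index(files, scan):
    """Run one scan job per file, but propagate any worker failure
    to the caller so that startup can fail loudly instead of the
    error being lost in a detached process."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(scan, f) for f in files]
        # .result() re-raises the worker's exception here, in the
        # caller -- analogous to receiving a 'DOWN' message from a
        # monitored process and aborting the vhost start.
        return [fut.result() for fut in futures]
```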
Make use of the new dispatch_sync function in
https://github.com/rabbitmq/rabbitmq-common/pull/368 to block only when
all workers are busy.
In the case of large backlogs of persistent messages (tens of millions
of messages), we previously queued a job for every file with
worker_pool:submit_async. However, if there are 50 million messages,
this corresponds to ~79,000 files and the same number of pending tasks
in the worker pool. The worker_pool mailbox explodes under these
circumstances, using massive amounts of memory.
The following was helpful in zeroing in on the problem:
https://elixirforum.com/t/extremely-high-memory-usage-in-genservers/4035
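The backpressure idea behind dispatch_sync can be sketched outside Erlang: instead of enqueueing one message per job into an unbounded mailbox, the producer blocks whenever all workers are busy and a small bounded backlog is full. This Python sketch (names and sizes are illustrative, not RabbitMQ's code) uses a bounded `queue.Queue`, whose blocking `put` provides the producer-side backpressure.

```python
import queue
import threading

def drain_with_backpressure(jobs, worker_count=4, backlog=8):
    """Run jobs through a small worker pool whose intake queue is
    bounded: the producer blocks instead of building up ~79,000
    pending tasks in a mailbox."""
    q = queue.Queue(maxsize=backlog)   # bounded: put() blocks when full
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            job = q.get()
            if job is None:            # sentinel: shut this worker down
                return
            out = job()
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for job in jobs:
        q.put(job)                      # blocks when all workers are busy
    for _ in threads:
        q.put(None)                     # one sentinel per worker
    for t in threads:
        t.join()
    return results
```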