This ensures the test completes within the 90-second time limit.
https://pivotal.slack.com/archives/C055BSG8E/p1585840790221000
https://github.com/rabbitmq/rabbitmq-server/commit/609501c46d7e18a7ea103bfa0188e73c3c4fc951#commitcomment-38268475
rabbit_fifo: set timer when publish rejected

... and no current leader is known, so that we can retry after a timeout.
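The retry-on-timeout behaviour described above can be sketched as follows. This is a minimal illustration, not rabbit_fifo's actual code: `handle_rejected/2`, the `resend_all` message, the state map, and `try_publish/1` are all hypothetical names.

```erlang
%% Sketch: when a publish is rejected and no leader is known, keep the
%% message and arm a retry timer instead of giving up; when the timer
%% fires, try all pending messages again. Names are illustrative.
-module(retry_sketch).
-export([handle_rejected/2, handle_info/2]).

-define(RETRY_INTERVAL, 1000). %% ms; arbitrary for the sketch

handle_rejected(Msg, State = #{pending := Pending, timer := undefined}) ->
    %% No timer armed yet: queue the message and schedule a retry.
    TRef = erlang:send_after(?RETRY_INTERVAL, self(), resend_all),
    State#{pending := [Msg | Pending], timer := TRef};
handle_rejected(Msg, State = #{pending := Pending}) ->
    %% Timer already armed; just queue the message.
    State#{pending := [Msg | Pending]}.

handle_info(resend_all, State = #{pending := Pending}) ->
    %% Timer fired: retry every pending message in arrival order.
    [try_publish(Msg) || Msg <- lists:reverse(Pending)],
    State#{pending := [], timer := undefined}.

try_publish(_Msg) -> ok. %% placeholder for the real publish call
```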
in single active consumer test
10s is not enough for CI.
clustering_management_SUITE: No need to stop node after start_app failure in `erlang_config`
... in `erlang_config`.
Since #2180, a failed `start_app` does not take the node down anymore.
Trying to restart the node just after was therefore failing (because
the node is still there), but this remained unnoticed so far because
the return value of `start_node()` is not checked.
However, since
rabbitmq/rabbitmq-ct-helpers@c033d9272afaf3575505533c81f1c0c7cfcb6206,
the Make recipe which starts the node automatically stops it if the
start failed somewhere, in order not to leave an unwanted node around.
This means that after the failing
`rabbit_ct_broker_helpers:start_node()`, the node was effectively
stopped this time, causing the rest of the testcase to fail.
for up to N seconds
Depends on rabbitmq/rabbitmq-ct-helpers@98f1c4a8012c006965257f2875873bf9d08416bc
to make it clear that it is a mock-based unit test suite
Move rabbit_channel config value to config record
writer_gc_threshold is a static value and should live in the static
config record, not in the main channel record, which should only hold
mutable data fields.
as opposed to `eacces`
Increase wait timeouts in rabbit_fifo_int
250ms is probably a bit on the low side.
from unit_SUITE and unit_inbroker_parallel_SUITE
Reduce memory usage during startup
https://github.com/rabbitmq/rabbitmq-common/pull/368/commits/36c9fbe59af6d6cce67fc430b333c44f30cc4c40
Using spawn_link in rabbit_msg_store:build_index alters the
supervision tree in a way that has unwanted side effects in
rabbit_vhost_msg_store. We monitor the spawned process instead, so
that if there is a failure to enqueue the scan for each file, the
vhost fails to start and reports an error.
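The monitor-based approach can be sketched like this. It is a simplified illustration under assumed names: `build_index/1`, `scan_file/1`, and the message shapes are hypothetical, not the real msg store internals.

```erlang
%% Sketch: spawn the index-building worker with spawn_monitor/1 rather
%% than spawn_link/1, so a crash surfaces as an explicit error return
%% instead of propagating an exit through the supervision tree.
-module(monitor_sketch).
-export([build_index/1]).

build_index(Files) ->
    Parent = self(),
    {Pid, MRef} = spawn_monitor(fun() ->
        %% Enqueue a scan for each file; results go back to the parent.
        [Parent ! {scanned, F, scan_file(F)} || F <- Files]
    end),
    wait_for(Pid, MRef, length(Files)).

wait_for(_Pid, MRef, 0) ->
    erlang:demonitor(MRef, [flush]),
    ok;
wait_for(Pid, MRef, N) ->
    receive
        {scanned, _File, _Result} ->
            wait_for(Pid, MRef, N - 1);
        {'DOWN', MRef, process, Pid, Reason} ->
            %% Worker died before finishing: report an error so the
            %% vhost fails to start, instead of silently taking the
            %% supervision tree down with it.
            {error, {index_build_failed, Reason}}
    end.

scan_file(_File) -> ok. %% placeholder for the real per-file scan
```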
Make use of the new dispatch_sync function in
https://github.com/rabbitmq/rabbitmq-common/pull/368 to block only when all
workers are busy
In the case of large backlogs of persistent messages (tens of millions
of messages):
Previously we queued a job for every file with worker_pool:submit_async;
however, with 50 million messages this corresponds to ~79,000 files and
the same number of pending tasks in the worker pool. The worker_pool
mailbox explodes under these circumstances, using massive amounts of
memory.
The following was helpful in zeroing in on the problem:
https://elixirforum.com/t/extremely-high-memory-usage-in-genservers/4035
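The contrast between the two submission strategies can be sketched as below. `worker_pool:submit_async/1` and `dispatch_sync` come from the commits above (rabbitmq-common PR #368); the loops themselves and the exact `dispatch_sync` arity are illustrative assumptions.

```erlang
%% Sketch: why asynchronous submission blew up the pool's mailbox,
%% and how synchronous dispatch bounds it. Illustrative only.
submit_all_async(Jobs) ->
    %% Every job becomes a message in the pool process's mailbox
    %% immediately: ~79,000 pending tasks for a 50M-message backlog,
    %% held in memory until workers drain them.
    [worker_pool:submit_async(Job) || Job <- Jobs],
    ok.

submit_all_sync(Jobs) ->
    %% dispatch_sync blocks the producer whenever all workers are
    %% busy, so only a handful of jobs is ever in flight and the
    %% mailbox stays flat.
    [worker_pool:dispatch_sync(Job) || Job <- Jobs],
    ok.
```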
... in case the log file was not fsync'd yet (and thus we do not see
the content yet).
This happens sometimes in Travis CI, for instance.
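A fix along these lines usually amounts to polling the file instead of reading it once. The helper below is a sketch under assumed names (`wait_for_log_content/3`, the 100 ms interval), not the actual test-suite code.

```erlang
%% Sketch: poll the log file until the expected content appears,
%% tolerating the window where the write has not been fsync'd yet.
%% Crashes if the file cannot be read at all, which is fine in a test.
wait_for_log_content(File, Pattern, Retries) when Retries > 0 ->
    {ok, Content} = file:read_file(File),
    case binary:match(Content, Pattern) of
        nomatch ->
            timer:sleep(100), %% arbitrary poll interval
            wait_for_log_content(File, Pattern, Retries - 1);
        _Found ->
            ok
    end;
wait_for_log_content(_File, _Pattern, 0) ->
    {error, content_not_found}.
```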
Wait for commits on test suite
Don't wait for consensus as the publish could be delayed
... while building `my_plugin`.
We clear ALL_DEPS_DIRS to make sure they are not recompiled when the
plugin is built. `rabbit` was previously compiled with -DTEST and if
it is recompiled because of this plugin, it will be recompiled without
-DTEST: the testsuite depends on test code so we can't allow that.
Note that we do not clear the DEPS variable: we need it to be correct
because it is used to generate `my_plugin.app` (and a RabbitMQ plugin
must depend on `rabbit`).
... to explicitly inject its own feature flags, instead of relying on
actual module attributes.
Backends can return duplicates, sometimes for reasons outside of their
control, e.g. implicit or explicit versioning of values by the data
store they are backed by.
rabbitmq/fix-ff-registry-loading+improve-ff-testing
Fix feature flags registry loading + improve feature flags testing
... before deleting it and loading the new code.
In some rare cases, the soft purge didn't work because another process
was still running the old code, so the delete would fail.
Now, we wait for the soft purge to succeed before proceeding.
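The wait-for-purge pattern can be sketched with the standard `code` module. `reload_module/1`, `wait_for_soft_purge/2`, and the retry parameters are illustrative; only `code:soft_purge/1`, `code:delete/1`, and `code:load_file/1` are real OTP calls.

```erlang
%% Sketch: retry code:soft_purge/1 until no process runs the old code,
%% then delete the old version and load the new one.
reload_module(Mod) ->
    ok = wait_for_soft_purge(Mod, 10),
    true = code:delete(Mod),
    {module, Mod} = code:load_file(Mod),
    ok.

wait_for_soft_purge(Mod, Retries) ->
    case code:soft_purge(Mod) of
        true ->
            ok;
        false when Retries > 0 ->
            %% Another process is still running old code; wait a bit
            %% and try again instead of failing the whole reload.
            timer:sleep(50),
            wait_for_soft_purge(Mod, Retries - 1);
        false ->
            {error, {purge_failed, Mod}}
    end.
```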
| |
This should be more robust than relying the caller (through a forced
exception). Way more robust considering that the latter seems to not
work at all :)