Commit message · Author · Age · Files · Lines
* Merge pull request #2300 from rabbitmq/remove-segment-max-entries-defaultMichael Klishin2020-04-071-2/+0
|\ | | | | Remove Ra segment_max_entries override
| * Remove Ra segment_max_entries overridekjnilsson2020-04-031-2/+0
| |   So that it uses Ra's internal default of 4096 instead, which is safer for larger message sizes.
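For context, the override being removed is presumably equivalent to a `rabbitmq.conf` entry along these lines (the key follows the 3.8.x `raft.*` schema; the value shown is illustrative, and only Ra's 4096 default is stated in the commit):

```ini
# rabbitmq.conf — illustrative override; removing it lets Ra fall back
# to its internal default of 4096 entries per segment, which is safer
# for larger message sizes
raft.segment_max_entries = 32768
```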
* | Merge pull request #2303 from rabbitmq/handle-deadlocks-in-peer_discovery_classic_config_SUITEMichael Klishin2020-04-071-7/+20
|\ \   peer_discovery_classic_config_SUITE: Handle dead-locks
| * | Factor out common code, add multiple triesLuke Bakken2020-04-061-32/+13
| | |
| * | peer_discovery_classic_config_SUITE: Handle dead-locksJean-Sébastien Pédron2020-04-061-6/+38
|/ /
| |   ... when nodes are waiting for each other to finish Mnesia initialization. If the success condition is not met, we reset and restart all nodes except the first one to trigger peer discovery again, then check the success condition once more.
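The retry logic this commit describes can be sketched roughly as follows (Python pseudocode; `clustered` and `reset_and_restart` are stand-ins for the suite's Erlang helpers):

```python
def wait_for_cluster(nodes, clustered, reset_and_restart, max_tries=3):
    """Retry cluster formation: if the success condition is not met,
    reset and restart every node except the first one so that peer
    discovery runs again, then re-check the condition."""
    for _ in range(max_tries):
        if clustered(nodes):
            return True
        # keep the seed node; restart the rest to retry peer discovery
        for node in nodes[1:]:
            reset_and_restart(node)
    # one last check after the final round of restarts
    return clustered(nodes)
```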
* | Remove overly cautious comment from sample configkjnilsson2020-04-061-3/+0
| |   The raft.* conf parameters only take effect at the next available opportunity and won't affect in-flight data.
* | Merge pull request #2302 from rabbitmq/work-around-cli-circular-dep-in-feature_flags_SUITEJean-Sébastien Pédron2020-04-062-6/+41
|\ \   feature_flags_SUITE: Work around CLI/rabbitmq-server circular dependency
| * | rabbitmq-env: Fix indentationJean-Sébastien Pédron2020-04-061-2/+2
| | |
| * | feature_flags_SUITE: Work around CLI/rabbitmq-server circular dependencyJean-Sébastien Pédron2020-04-061-4/+39
| | |   We need to copy `rabbit` into `my_plugin`'s plugins directory, not because `my_plugin` depends on it, but because the CLI erroneously depends on the broker. This can't be fixed easily because it is a circular dependency (the broker depends on the CLI). So until a proper solution is implemented, keep this second copy of the broker for the CLI to find.
* | | Merge pull request #2301 from rabbitmq/lrb-fix-flaky-peer_discovery_classic_config-testMichael Klishin2020-04-042-14/+26
|\ \ \   Ensure randomized_startup_delay_range custom value is used
| * | Ensure randomized_startup_delay_range custom value is usedLuke Bakken2020-04-032-14/+26
|/ /
| |   This ensures the test completes within the 90-second time limit.
| |   https://pivotal.slack.com/archives/C055BSG8E/p1585840790221000
| |   https://github.com/rabbitmq/rabbitmq-server/commit/609501c46d7e18a7ea103bfa0188e73c3c4fc951#commitcomment-38268475
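The custom value in question is presumably the classic-config peer discovery delay range in `rabbitmq.conf` (keys as in RabbitMQ 3.8.x; the exact values used by the test are an assumption):

```ini
# rabbitmq.conf — shrink the randomized startup delay so cluster
# formation fits within the test's 90-second time limit
cluster_formation.randomized_startup_delay_range.min = 0
cluster_formation.randomized_startup_delay_range.max = 1
```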
* | Merge pull request #2295 from rabbitmq/rabbit-fifo-fixGerhard Lazu2020-04-021-9/+9
|\ \ | |/ |/| rabbit_fifo: set timer when publish rejected
| * rabbit_fifo: set timer when publish rejectedkjnilsson2020-04-011-9/+9
| | | | | | | | and no current leader is known so that we can re-try after a timeout
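The pattern described here — arming a timer when a publish is rejected and no leader is known, so the publish can be retried later — can be sketched like this (a Python analogy with hypothetical `publish`/`on_rejected` hooks, not the actual rabbit_fifo Erlang code):

```python
import threading

class Resender:
    """Sketch: buffer rejected messages and, when no leader is known,
    arm a timer so the buffered messages are re-published after a
    timeout instead of being dropped or retried in a tight loop."""
    def __init__(self, publish, retry_after=0.05):
        self.publish = publish          # callable used to re-publish
        self.retry_after = retry_after  # seconds before retrying
        self.pending = []

    def on_rejected(self, msg, leader_known):
        self.pending.append(msg)
        if not leader_known:
            # no leader to resend to right now; try again after a delay
            threading.Timer(self.retry_after, self.flush).start()

    def flush(self):
        msgs, self.pending = self.pending, []
        for m in msgs:
            self.publish(m)
```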
* | Bump test timeoutskjnilsson2020-04-021-9/+24
| |   in the single active consumer test
* | Reduce randomized startup delay range for this testMichael Klishin2020-04-011-3/+6
| |
* | Wait for up to 90s in this testMichael Klishin2020-04-011-2/+2
| |
* | Await cluster formation for 40sMichael Klishin2020-04-011-3/+3
| | | | | | | | 10s is not enough for CI.
* | Merge pull request #2294 from rabbitmq/fix-clustering_management/erlang_configJean-Sébastien Pédron2020-04-011-12/+3
|\ \ | |/ |/| clustering_management_SUITE: No need to stop node after start_app failure in `erlang_config`
| * clustering_management_SUITE: No need to stop node after start_app failureJean-Sébastien Pédron2020-04-011-12/+3
|/
|   ... in `erlang_config`. Since #2180, a failed `start_app` no longer takes the node down. Trying to restart the node right after was therefore failing (because the node is still there), but this went unnoticed so far because the return value of `start_node()` is not checked. However, since rabbitmq/rabbitmq-ct-helpers@c033d9272afaf3575505533c81f1c0c7cfcb6206, the Make recipe which starts the node automatically stops it if the start failed somewhere, so as not to leave an unwanted node around. This means that after the failing `rabbit_ct_broker_helpers:start_node()`, the node was effectively stopped this time, causing the rest of the testcase to fail.
* More debug logging around peer discovery lockingMichael Klishin2020-04-011-2/+11
|
* Rename one more test suiteMichael Klishin2020-03-311-1/+1
|
* Rename one more test suiteMichael Klishin2020-03-311-1/+1
|
* Rename one more test suiteMichael Klishin2020-03-311-1/+1
|
* peer_discovery_classic_config_SUITE: re-evaluate cluster formation condition for up to N secondsMichael Klishin2020-03-311-6/+12
|   Depends on rabbitmq/rabbitmq-ct-helpers@98f1c4a8012c006965257f2875873bf9d08416bc
* Rename a test suiteMichael Klishin2020-03-311-1/+1
|   to make it clear that it is a mock-based unit test suite
* Merge pull request #2293 from rabbitmq/fix-rabbit-channel-recordMichael Klishin2020-03-311-13/+13
|\ | | | | Move rabbit_channel config value to config record
| * Move rabbit_channel config value to config recordkjnilsson2020-03-311-13/+13
| |   writer_gc_threshold is a static value and should be in the static config record, not in the main channel record, which should only hold mutable data fields.
* | unit_log_management_SUITE: handle erofs returned on macOSMichael Klishin2020-03-311-1/+7
| | | | | | | | as opposed to eacces
* | Merge pull request #2292 from rabbitmq/rabbit_fifo_int_tweaksMichael Klishin2020-03-311-4/+6
|\ \ | |/ |/| Increase wait timeouts in rabbit_fifo_int
| * Increase wait timeouts in rabbit_fifo_intkjnilsson2020-03-311-4/+6
|/ | | | 250ms is probably a bit on the low side.
* Remove a test suite split artifactMichael Klishin2020-03-311-6/+0
|
* Finish splitting unit_*_SUITE suitesMichael Klishin2020-03-318-932/+845
|
* Rename a suite to group better together with unit_access_controlMichael Klishin2020-03-301-1/+1
|
* Continue splitting unit_*_SUITE suitesMichael Klishin2020-03-3013-1034/+1032
|
* Extract several more focused test suitesMichael Klishin2020-03-309-488/+853
| | | | from unit_SUITE and unit_inbroker_parallel_SUITE
* unit_SUITE: extract tests for memory monitor and pg_localMichael Klishin2020-03-303-13/+117
|
* Move VM memory monitor unit tests into their own suiteMichael Klishin2020-03-302-41/+51
|
* Split the rest of unit_inbroker_non_parallel_SUITEMichael Klishin2020-03-302-58/+115
|
* Move app management unit tests into their own suiteMichael Klishin2020-03-302-60/+117
|
* Move disk space monitor tests into their own suiteMichael Klishin2020-03-302-52/+121
|
* Move file handle cache tests into their own suiteMichael Klishin2020-03-302-206/+288
|
* Move log management tests into their own suiteMichael Klishin2020-03-302-298/+415
|
* Create SECURITY.mdMichael Klishin2020-03-251-0/+24
|
* Merge pull request #2279 from rabbitmq/startup_memory_fixMichael Klishin2020-03-252-232/+70
|\ | | | | Reduce memory usage during startup
| * Compile from scratch (startup_memory_fix)Michael Klishin2020-03-231-2/+2
| |
| * Move worker_pool_SUITE to rabbitmq-commonPhilip Kuryloski2020-03-231-188/+0
| | | | | | | | https://github.com/rabbitmq/rabbitmq-common/pull/368/commits/36c9fbe59af6d6cce67fc430b333c44f30cc4c40
| * Fail vhost startup if index workers are queued unsuccessfullyPhilip Kuryloski2020-03-231-5/+6
| |   Using spawn_link in rabbit_msg_store:build_index alters the supervision tree such that there are unwanted side effects in rabbit_vhost_msg_store. We monitor the spawned process so that if there is a failure to enqueue the scan for each file, the vhost fails to start and reports an error.
| * Improve worker_pool worker utilizationPhilip Kuryloski2020-03-201-1/+1
| | | | | | | | | | | | Make use of the new dispatch_sync function in https://github.com/rabbitmq/rabbitmq-common/pull/368 to block only when all workers are busy
| * Reduce memory usage during startupPhilip Kuryloski2020-03-181-44/+69
| |   In the case of large backlogs of persistent messages (tens of millions), we previously queued a job for every file with worker_pool:submit_async. With 50 million messages this corresponds to ~79,000 files, and the same number of pending tasks in the worker pool. The worker_pool mailbox explodes under these circumstances, using massive amounts of memory. The following was helpful in zeroing in on the problem: https://elixirforum.com/t/extremely-high-memory-usage-in-genservers/4035
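The underlying idea — bounding how many jobs are queued at once instead of enqueuing one message per file up front — can be sketched like this (a Python analogy using a semaphore to cap in-flight submissions; the actual fix uses the Erlang worker_pool and its new dispatch_sync):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def scan_files(files, scan, workers=4):
    """Submit one scan job per file, but block the producer once
    `workers` jobs are in flight, so the pending-task queue (the
    pool's "mailbox") stays small regardless of how many files exist."""
    in_flight = threading.BoundedSemaphore(workers)
    results = []
    lock = threading.Lock()

    def run(f):
        try:
            r = scan(f)
            with lock:
                results.append(r)
        finally:
            in_flight.release()

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for f in files:
            in_flight.acquire()   # blocks when all workers are busy
            pool.submit(run, f)
    return results
```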
* | More logging around peer discovery backend initialisationMichael Klishin2020-03-241-3/+5
| |