| Commit message (Collapse) | Author | Age | Files | Lines |
Reserve file handles for quorum queues
Also fix a compiler warning.
@dcorbacho.
[#169063174]
These should be taken into account in the limits, but always be granted.
Files must be reserved by the queues themselves using `set_reservation/0` or
`set_reservation/1`. This is an absolute reservation that increases or
decreases the number of files reserved to reach the given amount on every
call.
[#169063174]
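The absolute semantics described above can be sketched as follows. This is a minimal illustration: the `reserve_sketch` module and its `{Mine, Total}` state tuple are invented here for clarity and are not the real `file_handle_cache` internals.

```erlang
-module(reserve_sketch).
-export([set_reservation/2]).

%% set_reservation is absolute: each call moves this holder's
%% reservation to exactly `Desired`, adjusting the global total of
%% reserved file handles up or down by the difference.
set_reservation(Desired, {Mine, Total}) ->
    {Desired, Total + (Desired - Mine)}.
```

Calling it twice with the same value is therefore idempotent, unlike a relative "reserve N more" API.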
Make test more resilient to timing changes
* Drain the mailbox of the process until the expected message arrives.
If tests are slower for some reason, we might generate more events, as
they are emitted on an interval by the queue process.
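The drain-until-expected pattern above is a standard Erlang receive loop; the sketch below is illustrative (module and function names are invented, not the actual test helper).

```erlang
-module(drain_sketch).
-export([wait_for/2]).

%% Discard interim events from the mailbox until the exact expected
%% message arrives, or give up after Timeout milliseconds of silence.
wait_for(Expected, Timeout) ->
    receive
        Expected -> ok;                           %% the message we want
        _Other   -> wait_for(Expected, Timeout)   %% drop an interim event
    after Timeout ->
        {error, timeout}
    end.
```

Note the timeout restarts on every discarded message, so it bounds the gap between messages rather than the total wait; that is usually acceptable in tests.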
Optimise QQ memory use
|
| | | |
| | |
| | |
| | | |
When taking a snapshot point
Take fewer release cursor snapshot points as the message backlog grows.
Also introduces a compacted form of the internal message header map:
initially it is only an integer representing the size of the message
body. Later, when additional keys need to be added, it is expanded into
a full map. This avoids creating and holding many individual maps with
just a size element.
[#169064158]
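The integer-or-map header described above can be sketched like this. It is only an illustration of the compaction idea; the real `rabbit_fifo` header functions differ in name and shape.

```erlang
-module(header_sketch).
-export([size_of/1, put_key/3]).

%% The compacted header is just the integer body size...
size_of(Size) when is_integer(Size) -> Size;
size_of(#{size := Size})            -> Size.

%% ...and is expanded into a full map only once another key is needed.
put_key(Key, Value, Size) when is_integer(Size) ->
    #{size => Size, Key => Value};
put_key(Key, Value, Header) when is_map(Header) ->
    Header#{Key => Value}.
```

Since most messages never gain extra header keys, most entries stay as a single immediate integer instead of a heap-allocated map.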
Update core metrics with down state
As the processes responsible for updating `queue_metrics` are the queues
themselves, if they crash nothing else updates their metric state to down,
so crashed queues keep showing as running in the UI. This change uses the
existing gc process to check the state of the local queues and update it
if required, while it is doing the normal gc scanning. It covers all types
of queues. We consider a queue as down if the process/master/leader does
not answer requests or is dead. There could be other situations where a
queue is functionally down, but those are not covered here.
[#163218289]
Handle infinity timeout when awaiting boot finish
Spotted by @lukebakken.
This reverts commit 50fbc5259df3b23ec34af156516203b0b8030c5e, reversing
changes made to 46a0181bc9729f20fad631c779dfbb817c71c206.
There are test failures in two suites.
Update core metrics with down state
Make it possible to bypass queue master locator when declaring a queue
Needed by the sharding plugin so that the shards are created on the
requested node, not wherever an unrelated policy might try to place them.
[#168224238]
In a cluster, if e.g. RabbitMQ 3.7.17 packages are deployed on all
cluster members, but the nodes are not restarted yet, the first node to
restart will fail. This might happen if the timing is unfortunate during
a parallel upgrade of a cluster: the files of nodes A and B were updated,
and node A finishes restarting while node B is still between the files
update and the post-install script.
The reason is that the `rabbit_feature_flags` module is available on all
nodes after the package deployment. However, the module may be loaded on
an already running node that predates feature flags. In this unexpected
context, the module fails to respond properly to the queries of the
remote restarting node.
To fix this, we use an `on_load()` hook to prevent this module from being
loaded by the Erlang code server if the context is unexpected. This will
cause the query to abort with an undefined function call, exactly as if
the module were really missing.
Outside of a running RabbitMQ instance, loading the module is permitted.
This is useful, for instance, when running EUnit tests (even though this
specific module doesn't have any).
The previous patch was an early version to verify the hypothesis only.
Fixes #2132.
[#169086629]
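The `-on_load()` mechanism used here works as follows: the hook runs when the code server loads the module, and any return value other than `ok` makes the load fail, so subsequent calls raise `undef` exactly as if the module were absent. A minimal sketch, where the guard condition (`whereis(rabbit_boot)`) is an invented stand-in for the real pre-feature-flags check:

```erlang
-module(guarded).
-on_load(check_context/0).
-export([hello/0]).

%% Runs at load time. Returning `ok` permits the load; any other
%% return value makes the code server reject the module.
check_context() ->
    case whereis(rabbit_boot) of   %% illustrative condition only
        undefined -> ok;           %% expected context: allow the load
        _Pid      -> {error, unexpected_context}
    end.

hello() -> ok.
```

When the load is rejected, a remote node calling `guarded:hello()` gets an `undef` error, which is precisely the behaviour a pre-feature-flags node would exhibit.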
... instead of `rabbitmqctl stop` in the
`simple_confirm_availability_on_leader_change` testcase.
The problem with the latter is that it doesn't wait for the broker to
actually stop. Therefore we end up with an error when we try to restart
it afterwards, because the previous instance is still running.
References #2133.
|
| | | |
|
| | | |
|
| | |
| |
| |
| |
| |
| |
| | |
`rabbit.feature_flags_file` would not be set in an EUnit test
environment, so proceed with code loading if eunit is loaded.
References #2133.
In a cluster, if e.g. RabbitMQ 3.7.17 packages are deployed on all
cluster members, but the nodes are not restarted yet, the first node to
restart will fail.
The reason is that the `rabbit_feature_flags` module is available on all
nodes after the package deployment. However, the module may be loaded on
an already running node that predates feature flags. In this unexpected
context, the module fails to respond properly to the queries of the
remote restarting node.
To fix this, we use an `on_load()` hook to prevent this module from being
loaded by the Erlang code server if the context is unexpected. This will
cause the query to abort with an undefined function call, exactly as if
the module were missing.
Fixes #2132.
Doc: man pages for certificate commands
[#163597674]
As the processes responsible for updating `queue_metrics` are the queues
themselves, if they crash nothing else updates their metric state to down,
so crashed queues keep showing as running in the UI. This change uses the
existing gc process to check the state of the local queues and update it
if required, while it is doing the normal gc scanning. It covers all types
of queues. We consider a queue as down if the process/master/leader does
not answer requests or is dead. There could be other situations where a
queue is functionally down, but those are not covered here.
[#163218289]
Error handling improvements in rabbit_epmd_monitor:check_epmd/1
To produce less noise and more informative messages.
This also introduces a function that allows for manual
check triggering.
Closes #2130.
Fix health check returning ok in case of partition
The health check assumed that if the result of partitions() is a list,
then everything is OK. This is not the case: the result is always a list.
rabbit_node_monitor:partitions() returns a non-empty list when there is
an Mnesia partition.
That would cause the wrong commands to be resent by the channel.
rabbitmqctl & rabbitmq-diagnostics manpage polishing
In rabbitmqctl manpage.