Commit log
Quorum queue grow and shrink commands
Filter by non-running nodes instead of by connection, as it should not be
possible to add a quorum queue server on an Erlang node that is not
running RabbitMQ.
[#162782801]
To allow operators to grow quorum queue clusters with some degree of
selection.
[#162782801]
Also add a rabbitmq-queue integration test for the shrink command.
[#162782789]
Also change formatting.
[#162782789]
Takes a node, removes all quorum queue members on that node, and
returns a list of results for each queue.
[#162782789]
We use rabbitmqctl(8) inside `rabbitmq-env` to query the settings of the
remote nodes (path to plugins, feature flags file, the database, etc).
However, before this patch, we didn't pass the name of the remote node
as specified by the user with `-n`. Therefore, the default node name was
used (`rabbit@$hostname`) and that node may not exist at all.
This caused the executed script to run with possibly incorrect settings.
In particular, this prevented rabbitmq-plugins(8) from working properly
on a node started with `gmake run-broker`.
This should now be fixed because we extract the remote node name from
the command line arguments and pass it to the child rabbitmqctl(8).
A failure to locate the source directory should be logged as an error
in both the upgrade and regular logs for extra visibility.
See https://groups.google.com/forum/#!topic/rabbitmq-users/toq2dpocm0k
for background.
Avoid synchronous channel request to connection process
synchronous channel requests back to the connection process
Check exclusive owner before durable argument
See #1887 for context. When an exclusive queue is redeclared with
the exclusive property set to `false`, the code considers it to be
an ownership check. This is a leaked implementation detail that's
been around for years, so changing it might do more harm than good.
What we can do is provide a bit more information about when the
check might fail in the message.
Fixes #1887
Recover bindings for all durable queues, including those that failed to recover.
Do not fail on bind/unbind operations if the binding records are inconsistent.
Older versions can still return a binding_not_found error.
If there is a record in the rabbit_durable_route table but no record in
the rabbit_route table, binding operations should still proceed to
create/remove bindings. This allows clients to fix data inconsistencies
that the server did not fix during recovery.
[#163952284]
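The tolerance described above can be sketched generically. This is illustrative Python, not the actual Erlang/Mnesia code: plain sets stand in for the two tables, and the helper names `add_binding`/`remove_binding` are hypothetical.

```python
# rabbit_durable_route analogue: survived recovery.
durable_route = {("x", "q1"), ("x", "q2")}
# rabbit_route analogue: ("x", "q2") was lost, so the tables disagree.
route = {("x", "q1")}

def add_binding(binding):
    # Creating a binding writes to both tables, repairing any
    # half-present record along the way.
    durable_route.add(binding)
    route.add(binding)
    return "ok"

def remove_binding(binding):
    # Only report binding_not_found when the binding is absent from
    # BOTH tables; if it exists in either, proceed and clean up both.
    if binding not in durable_route and binding not in route:
        return "binding_not_found"
    durable_route.discard(binding)
    route.discard(binding)
    return "ok"
```

With this shape, a client can unbind the inconsistent `("x", "q2")` binding successfully instead of hitting an error, which is the behaviour the commit message describes.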
`rabbit_binding:list_for_destination/1`
Follow-up to #1721.
Even though the default exchange bindings are deleted at schema
migration time, this filtering improves backwards compatibility
for mixed version clusters.
Remove inet_dist_listen_* configuration
part of https://github.com/rabbitmq/rabbitmq-server/pull/1881
(cherry picked from commit 260098ecd053ec10e407ec65ac2a17512f4d4455)
The rabbit_prelaunch module can now read os:getenv("RABBITMQ_CONFIG_ARG_FILE") correctly.
(cherry picked from commit 68d52f96bd14d79f9cfd8789637d935182365edc)
If a queue fails to recover, it may still be restarted by the supervisor
and eventually start. After that, some bindings may be in
rabbit_durable_route but not in rabbit_route, which can cause
binding_not_found errors. If bindings are recovered for failed queues,
the behaviour will be the same as for crashed queues (which is currently
broken but needs to be fixed separately).
Addresses #1873
[#163919158]
when publishing to an unavailable quorum queue
If a channel publishes to a quorum queue at a time when the queue is
unavailable and no commands make it to the queue, the channel will
eventually go into flow and never exit it, as reads from the socket will
never take place and the queue will never communicate with the channel.
To avoid this deadlock, the channel sets a longish timer when a quorum
queue reaches its internal command buffer limit and cancels it when
falling below the limit again. When the timer triggers, the quorum queue
client resends all its pending commands to ensure liveness.
[#163975460]
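The resend-on-timeout pattern described above can be sketched generically. This is illustrative Python, not the actual Erlang quorum queue client; the class name, buffer limit, and timeout value are all assumptions made for the sketch.

```python
import threading

class QueueClient:
    """Sketch of a client that arms a long timer when its pending-command
    buffer reaches a limit, cancels it when the buffer drains, and resends
    everything still pending if the timer fires (restoring liveness when
    the queue was unreachable and dropped the original commands)."""

    LIMIT = 3        # internal command buffer limit (illustrative value)
    TIMEOUT = 60.0   # the "longish" timer; value is an assumption

    def __init__(self, send):
        self.send = send      # transport function; may silently drop commands
        self.pending = []     # commands sent but not yet confirmed
        self.timer = None

    def publish(self, cmd):
        self.pending.append(cmd)
        self.send(cmd)
        if len(self.pending) >= self.LIMIT and self.timer is None:
            # Buffer limit reached: arm the liveness timer.
            self.timer = threading.Timer(self.TIMEOUT, self.resend_pending)
            self.timer.start()

    def ack(self, cmd):
        self.pending.remove(cmd)
        if len(self.pending) < self.LIMIT and self.timer is not None:
            # Buffer drained below the limit: the queue is responding,
            # so the timer is no longer needed.
            self.timer.cancel()
            self.timer = None

    def resend_pending(self):
        # Timer fired: no confirms arrived, so resend every pending
        # command to break the deadlock.
        self.timer = None
        for cmd in list(self.pending):
            self.send(cmd)
```

In the real system the timer lives inside the channel/queue-client process; `threading.Timer` merely stands in for that mechanism here.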
It simplifies the test of feature flags support.
While here, update the `queue_parallel` testsuite to use it.
Asserting that a process on a remote node is down at this very
moment is inherently racy and opportunistic.
rabbit_ct_broker_helpers:force_vhost_failure/2 will retry up to 10 times
to make sure that the top vhost supervision tree process did go down.
That is good enough.
Per discussion with @kjnilsson.
Fix more dialyzer warnings
jsoref-spelling