| Commit message | Author | Age | Files | Lines |
This is an aggregated commit of all changes related to the
initial implementation of queue types and, on top of that, the stream
queue type. The various commit messages have simply been included,
mostly unedited, below.
Make rabbit_amqqueue:not_found_or_absent_dirty/1 visible
For use in the stream plugin.
Use bigger retention policy on max-age test
Set coordinator timeout to 30s
Handle coordinator unavailable error
Handle operator policies as maps when checking if is applicable
Add is_policy_applicable/2 to classic queues
Ignore restart commands if the stream has been deleted
It could happen that after termination some of the monitors are still
up and trigger writer/replica restarts
Policy support on stream queues
Remove subscription events on stream coordinator
Ensure old leaders are removed from monitors
Introduce delay when retrying a failed phase
Note that this ensures the monitor is set up; there was a bug where no
monitor was actually started when retrying the same phase
Restart replicas after leader election instead of relying on old monitors
Use timer for stream coordinator retries
Fix stream stats for members/online
Multiple fixes for replica monitoring and restart
Ensure pending commands are appended at the end and re-run
Ensure phase is reset with the state
Remove duplicates from replica list
Restart current phase on state_enter
Remove unused import
Ensure rabbit is running when checking for stream quorum
Restart replicas
Add a close/1 function to queue types
So that we can get a chance of cleaning up resources if needed.
Stream queues close their osiris logs at this point.
fix compiler errors
stream-queue: take retention into account
When calculating ready messages metrics.
Add osiris to the list of rabbit deps
Retry restart of replicas
Do not restart replicas or leaders after receiving a delete cluster command
Add more logging to the stream coordinator
Monitor subscribed processes on the stream coordinator
Memory breakdown for stream queues
Update quorum queue event formatter
rabbit_msg_record fixes
Refactor channel confirms
Remove the old unconfirmed_messages module that was designed to handle
multiple-queue fan-in logic including all HA mirrors etc. Replaced with a
simpler rabbit_confirms module that handles the fan-out and leaves any
queue-specific logic (such as confirms from mirrors) to the queue type
implementation. This module also has a dedicated test module, which is
nice.
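The fan-out logic described above can be pictured with a small sketch (in Python rather than the Erlang of rabbit_confirms, with illustrative names only): a publisher sequence number is registered against the set of queues the message was routed to, and the confirm is only released once every one of those queues has confirmed.

```python
# Sketch of multi-queue confirm fan-out, analogous in spirit to the new
# rabbit_confirms module. All names here are illustrative, not RabbitMQ API.

class Confirms:
    def __init__(self):
        # seq_no -> set of queue refs still expected to confirm
        self.pending = {}

    def insert(self, seq_no, queue_refs):
        """Register a published message awaiting confirms from queue_refs."""
        self.pending[seq_no] = set(queue_refs)

    def confirm(self, seq_no, queue_ref):
        """Record a confirm from one queue; True once fully confirmed."""
        refs = self.pending.get(seq_no)
        if refs is None:
            return False
        refs.discard(queue_ref)
        if not refs:
            del self.pending[seq_no]
            return True  # safe to send basic.ack to the publisher
        return False
```

The point of the structure is that the publisher-facing ack fires exactly once, when the last routed queue confirms.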
Backward compatibility with 3.8.x events
Supports mixed version cluster upgrades
Match specification when stream queue already exists
Max age retention for stream queues
Stop all replicas before starting leader election
stream: disallow global qos
remove IS_CLASSIC|QUORUM macros
Ensure only classic queues are notified on channel down
This also removes the delivering_queues map in the channel state as it
should not be needed for this and just causes additional unnecessary
accounting.
Polish AMQP 1.0/0.9.1 properties conversion
Support byte in application properties, handle 1-bit representation for
booleans.
Use binary in header for long AMQP 1.0 ID
Fix AMQP 1.0 to 0.9.1 conversion
Fix test due to incorrect type
Convert timestamp application properties to/from seconds
AMQP 1.0 uses milliseconds for timestamps and AMQP 0.9.1 uses seconds, so
conversion is needed.
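The unit mismatch is simple but easy to get wrong in both directions; a minimal sketch of the conversion (Python for illustration, not the actual Erlang code):

```python
# AMQP 1.0 timestamps are milliseconds since the Unix epoch; AMQP 0.9.1
# timestamps are seconds. Illustrative conversion helpers.

def amqp10_to_091_timestamp(millis: int) -> int:
    return millis // 1000   # truncates sub-second precision

def amqp091_to_10_timestamp(seconds: int) -> int:
    return seconds * 1000
```

Note the round trip 1.0 -> 0.9.1 -> 1.0 is lossy: sub-second precision is dropped.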
Dialyzer fixes
Handle all message-id types
AMQP 1.0 is more liberal in its allowed types of message-id and
correlation-id. This adds headers to describe the type of the data in
the message_id / correlation_id properties and also handles the case
where the data cannot fit, by again using headers.
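One way to picture the scheme: record the original AMQP 1.0 type in a header so the string-only 0.9.1 property can be converted back without losing the type. A hedged sketch; the header names and encodings here are hypothetical, not RabbitMQ's actual ones:

```python
import uuid

# Sketch of mapping a liberal AMQP 1.0 message-id onto the string-only
# AMQP 0.9.1 message_id property plus a type-describing header.
# Header names/encodings are illustrative, not RabbitMQ's real scheme.

def encode_message_id(value):
    """Return (message_id_string, headers) preserving the original type."""
    if isinstance(value, uuid.UUID):
        return str(value), {"x-message-id-type": "uuid"}
    if isinstance(value, int):
        return str(value), {"x-message-id-type": "ulong"}
    if isinstance(value, bytes):
        return value.hex(), {"x-message-id-type": "binary"}
    return value, {}  # already a string, no annotation needed
```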
Resize stream coordinator cluster when broker configuration changes
convert timestamp to and from seconds
user_id should be a binary
message annotations keys need to be symbols
stream-queue: default exchange and routing key
As these won't be present for data written using the rabbitmq-stream
plugin.
Add exchange, routing key as message annotations
To the AMQP 1.0 formatted data to enable roundtrip.
Add osiris logging module config
And update logging config test suite.
Restart election when start of new leader fails
The node might have just gone down so we need to try another one
Only aux keeps track of phase now, as it might change if the leader election fails
Stream coordinator refactor - all state is kept on the ra machine
Ensure any ra cluster not a qq is not cleaned up
Fixes to recovery and monitoring
Add AMQP 1.0 common to dependencies
Add rabbit_msg_record module
To handle conversions into internal stream storage format.
Use rabbitmq-common stream-queue branch
Use SSH for osiris dependency
Stream coordinator: delete replica
Stream coordinator: add replica
Stream coordinator: leader failover
Stream coordinator: declare and delete
Test consuming from a random offset
Previous offsets should not be delivered to consumers
Consume from stream replicas and multiple test fixes
Use max-length-bytes and add new max-segment-size
Use SSH for osiris dependency
Basic cancel for stream queues
Publish stream queues and settle/reject/requeue refactor
Consume from stream queues
Fix recovery
Publish stream messages
Add/delete stream replicas
Use safe queue names
Set retention policy for stream queues
Required by the ctl command
[#171207092]
Stream queue delete queue
fix missing callback impl
Stream queue declare
Queue type abstraction
And use the implementing module as the value of the amqqueue record
`type` field. This will allow for easy dispatch to the queue type
implementation.
Queue type abstraction
Move queue declare into rabbit_queue_type
Move queue delete into queue type implementation
Queue type: dequeue/basic_get
Move info inside queue type abstraction
Move policy change into queue type interface
Add purge to queue type
Add recovery to the queue type interface
Rename amqqueue quorum_nodes field
To a more generic and extensible opaque queue-type-specific map.
Fix tests and handle classic API response
Fix HA queue confirm bug
All mirrors need to be present as queue names. This introduces context
linking, allowing additional queue refs to be linked to a single "master"
queue ref containing the actual queue context.
Fix issue with events of deleted queues
Also update queue type smoke test to use a cluster by default.
correct default value of amqqueue getter
Move classic queues further inside queue type interface
why
[TrackerId]
Dialyzer fixes
We have seen false positives from 5s timeouts
on heavily loaded nodes, way more than once.
Make 'tracking_execution_timeout' configurable
Add per_user_connection_channel_tracking_SUITE
Follow-up to #2432
Version 2.4.0 includes the LICENSE file
Related to https://github.com/pivotal-cf/norsk-config/pull/1167
Makefile: Upgrade Cuttlefish from 2.2.0 to 2.3.0
This fixes a crash of Cuttlefish when a line is made only of
whitespaces.
Signed-off-by: Pierre Fenoll <pierrefenoll@gmail.com>
Specifically:
- changed default quorum_commands_soft_limit from 256 to 32
- override Ra wal_max_batch_size to 4096
Those are all the testsuites which take more than 5 minutes in
Concourse.
In continuation to #2208.
In our testing we observe no meaningfully different peak
memory (carrier size) limit, 18 to 65% lower tail latencies
with small messages, and tail latencies lower by percentages in the
teens with 50 MB messages.
Throughput is a single-digit percentage higher.
CPU usage was not meaningfully different on a four-core host.
References rabbitmq/rabbitmq-common#343.
A large part of the rabbitmq-server(8) and CLI scripts, both
Bourne-shell and Windows Batch versions, was moved to Erlang code and
the RabbitMQ startup procedure was reorganized to be closer to a regular
Erlang application.
A new application called `rabbitmq_prelaunch` is responsible for:
1. Querying the environment variables to initialize important
variables (using the new `rabbit_env` module in rabbitmq-common).
2. Checking the compatibility with the Erlang/OTP runtime.
3. Configuring Erlang distribution.
4. Writing the PID file.
The application is started early (i.e. it is started before `rabbit`).
The `rabbit` application runs the second half of the prelaunch sequence
at the beginning of the application `start()` function. This second
phase is responsible for the following steps:
1. Preparing the feature flags registry.
2. Reading and validating the configuration.
3. Configuring logging.
4. Running the various cluster checks.
In addition to this prelaunch sequence, the `rabbit` application start
procedure ends with a "postlaunch" sequence which takes care of
starting enabled plugins.
Thanks to this, RabbitMQ can be started with `application:start(rabbit)`
as any other Erlang application. The only caveats are:
* Mnesia must be stopped at the time `rabbit_prelaunch` is started,
and must remain stopped when `rabbit` is started, to allow the
Erlang distribution setup and cluster checks. `rabbit` takes care of
starting Mnesia.
* Likewise for Ra, because it relies on the `ra` application
environment to be configured.
Transitioning from scripts to Erlang code has the following benefits:
* RabbitMQ start behavior should be identical between Unix and
Windows. Also, features should be on par now. For instance, RabbitMQ
now writes a PID file on Windows, like it always did on Unix-based
systems.
* The differences between published packages and a development
environment are greatly reduced. In fact, we removed all the "if
this is a dev working copy, then ..." blocks.
As part of that, the `rabbit` application is now treated like its
plugins: it is packaged as an `.ez` archive and written to the
`plugins` directory (even though it is not technically a plugin).
Also in a development copy, the CLI is copied to the top-level
project. So when testing a plugin for instance, the CLI to use is
`sbin/rabbitmqctl` in the current directory, not the master copy in
`rabbit/scripts`.
* As a consequence of the previous two points, maintaining and testing
on Windows is now made easy. It should even be possible to setup CI
on Windows.
* There are fewer issues with paths containing non-US-ASCII characters,
which can happen on Windows because RabbitMQ stores its data in user
directories by default.
This process brings at least one more benefit: we now have early logging
during this prelaunch phase, which eases diagnostics and debugging.
There are also behavior changes:
* The new format configuration files used to be converted to an
Erlang-term-based file by the Cuttlefish CLI. To do that,
configuration schemas were copied to a temporary directory and the
generated configuration file was written to RabbitMQ data directory.
Now, Cuttlefish is used as a library: everything happens in memory.
No schemas are copied, no generated configuration is written to
disk.
* The PID file is removed when the Erlang VM exits.
* The `rabbit_config` module was trimmed significantly because most of
the configuration handling is done in `rabbit_prelaunch_conf` now.
* The RabbitMQ nodename does not appear on the command line, therefore
it is missing from ps(1) and top(1) output.
* The `rabbit:start()` function will probably behave differently in
some ways because it defers everything to the Erlang application
controller (instead of reimplementing it).
From both the new and old locations. Note that it is unlikely
that both will be defined.
Part of rabbitmq/rabbitmq-management#749.
management.load_definitions is still there but is being superseded
by just
load_definitions = /path/to/definitions/file.json
Part of rabbitmq/rabbitmq-management#749.
Part of rabbitmq/rabbitmq-management#749.
Fixes rabbitmq/rabbitmq-website#502
[#164931055]
Also handle the case where the client does not support consumer cancellation and
rename the queue_cleanup timer to a generic "tick" timer for channels to
perform periodic activities.
[#164212469]
The code already queries `rabbit_pbe`.
In a few places, the migration of the `rabbit_queue` and
`rabbit_durable_queue` Mnesia tables might conflict with accesses to
those tables.
[#159298729]
The goal is that each breaking change is implemented in such a way that a
RabbitMQ node having that change can work in the same cluster as other
nodes which do not know about it.
The feature flags are here to indicate which breaking changes are
"available" (i.e. the node knows about the breaking changes, but it
still runs in a backward-compatible mode) and which ones are currently
activated (i.e. the node uses the new code and older nodes can no longer
be part of the cluster).
Enabling a feature flag (i.e. activating a breaking change) is a manual
operation where the user validates that the cluster is ready to switch
to the new world after the breaking change. Therefore the subsystem
ensures RabbitMQ nodes can talk to each other even if they don't have
the same code as long as they agree on using the common subset, and
prevents RabbitMQ nodes from talking to each other once the new code is
being used and one node does not understand it.
The consequence is that if a breaking change is implemented using this
new subsystem, a cluster can be upgraded one node at a time instead of
shutting down the entire cluster to upgrade.
Of course, the ability to implement a breaking change in such a way
entirely depends on the nature of that change. This new subsystem does
not guarantee that a cluster shutdown will never be required again.
[#159298729]
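The clustering rule described above reduces to a subset check: a node may participate only if it supports every flag the cluster has already enabled. A simplified sketch of that invariant (Python for illustration; function and flag names are hypothetical, not the subsystem's API):

```python
# Simplified model of the feature-flag compatibility check: a node can
# cluster with others only if it knows about ("supports") every flag
# the cluster has already enabled. Illustrative only.

def can_join(cluster_enabled_flags, node_supported_flags):
    """True when the joining node understands all enabled breaking changes."""
    return set(cluster_enabled_flags) <= set(node_supported_flags)
```

This is why enabling a flag is one-way for mixed clusters: once enabled, any node lacking that flag fails the check and can no longer join.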
Add sysmon-handler to RabbitMQ
Fixes #952
Update for rabbitmq-sysmon -> sysmon-handler rename
Use hex.pm package for sysmon_handler, copy schema into rabbit.schema
This is only so we don't reach the log size limit on Travis CI.
This should help reduce the load in Travis CI or when doing quick
testing locally.
Add `max_message_size` configuration to configure the limit
in bytes.
If a message is bigger, a channel exception will be thrown.
The default limit is 128MB.
There is still a hard limit of 512MB.
[#161983593]
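In rabbitmq.conf style the limit is given in bytes; as a sketch, the 128MB default corresponds to:

```
# rabbitmq.conf (sketch): message size limit in bytes
# 128MB = 128 * 1024 * 1024 = 134217728
max_message_size = 134217728
```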
It was mistakenly put back in a merge.
* Test queue.declare method with quorum type
[#154472130]
* Cosmetics
[#154472130]
* Start quorum queue
Includes ra as a rabbit dependency
[#154472152]
* Update info and list operations to use quorum queues
Basic implementation. Might need an update when more functionality
is added to the quorum queues.
[#154472152]
* Stop quorum queue
[#154472158]
* Restart quorum queue
[#154472164]
* Introduce UId in ra config to support newer version of ra
Improved ra stop
[#154472158]
* Put data inside VHost specific subdirs
[#154472164]
* Include ra in rabbit deps to support stop_app/start_app command
[#154472164]
* Stop quorum queues in `rabbit_amqqueue:stop/1`
[#154472158]
* Revert creation of fifo ets table inside rabbit
Now supported by ra
[#154472158]
* Filter quorum queues
[#154472158]
* Test restart node with quorum queues
[#154472164]
* Publish to quorum queues
[#154472174]
* Use `ra:restart_node/1`
[#154472164]
* Wait for stats to be published when querying quorum queues
[#154472174]
* Test publish and queue length after restart
[#154472174]
* Consume messages from quorum queues with basic.get
[#154472211]
* Autoack messages from quorum queues on basic.get
[#154472211]
* Fix no_ack meaning
no_ack = true is equivalent to autoack
[#154472211]
* Use data_dir as provided in the config
If we modify the data_dir, ra is not able to delete the data
when a queue is deleted
[#154472158]
* Remove unused code/variables
[#154472158]
* Subscribe to a quorum queue
Supports auto-ack
[#154472215]
* Ack messages consumed from quorum queues
[#154472221]
* Nack messages consumed from quorum queues
[#154804608]
* Use delivery tag as consumer tag for basic.get in quorum queues
[#154472221]
* Support for publisher confirms in quorum queues
[#154472198]
* Integrate with ra_fifo_client
* Clear queue state on queue.delete
[#154472158]
* Fix quorum nack
[#154804608]
* Test redelivery after nack
[#154804608]
* Nack without requeueing
[#154472225]
* Test multiple acks
[#154804208]
* Test multiple nacks
[#154804314]
* Configure dead letter exchange with queue declare
[#155076661]
* Use a per-vhost process to handle dead-lettering
Needs to hold state for quorum queues
[#155401802]
* Implement dead-lettering on nack'ed messages
[#154804620]
* Use queue name as a resource on message delivery
Fixes a previously introduced bug
[#154804608]
* Handle ra events on dead letter process
[#155401802]
* Pass empty queue states to queue delete
Queue deletion on vhost deletion calls directly into rabbit_amqqueue.
Queue states are not available there, but we can provide an empty map
since during deletion the states are only needed for cleanup.
* Generate quorum queue stats and events
Consumer delete events are still pending, as depend on basic.cancel
(not implemented yet), ra terminating or ra detecting channel down
[#154472241]
* Ensure quorum mapping entries are available before metric emission
[#154472241]
* Configure data_dir, uses new RABBITMQ_QUORUM_BASE env var
[#154472152]
* Use untracked enqueues when sending without a channel
Updated several other calls missed during the quorum implementation
* Revert "Configure data_dir, uses new RABBITMQ_QUORUM_BASE env var"
This reverts commit f2261212410affecb238fcbd1fb451381aee4036.
* Configure data_dir, uses new RABBITMQ_QUORUM_DIR based on mnesia dir
[#154472152]
* Fix get_quorum_state
* Fix calculation of quorum pids
* Move all quorum queues code to its own module
[#154472241]
* Return an error when declaring a quorum queue with an incompatible argument
[#154521696]
* Cleanup of quorum queue state after queue delete
Also fixes some existing problems where the state wasn't properly
stored
[#155458625]
* Revert Revert "Declare a quorum queue using the queue.declare method"
* Remove duplicated state info
[#154472241]
* Start/stop multi-node quorum queue
[#154472231]
[#154472236]
* Restart nodes in a multi-node quorum cluster
[#154472238]
* Test restart and leadership takeover on multiple nodes
[#154472238]
* Wait for leader down after deleting a quorum cluster
It ensures a smooth delete-declare sequence without race
conditions. The test included here detected the situation before
the fix.
[#154472236]
* Populate quorum_mapping from mnesia when not available
Ensures that leader nodes that don't have direct requests can get
the mapping ra name -> queue name
* Cosmetics
* Do not emit core metrics if queue has just been deleted
* Use rabbit_mnesia:is_process_alive
Fixes bug introduced by cac9583e1bb2705be7f06c2ab7f416a75d11c875
[#154472231]
* Only try to report stats if quorum process is alive
* Implement cancel consumer callback
Deletes metrics and sends consumer deleted event
* Remove unnecessary trigger election call
ra:restart_node has already been called during the recovery
* Apply cancellation callback on node hosting the channel
* Cosmetics
* Read new fifo metrics which store directly total, ready and unack
* Implement basic.cancel for quorum queues
* Store leader in amqqueue record, report all in stats
[#154472407]
* Declare quorum queue in mnesia before starting the ra cluster
Record needs to be stored first to update the leader on ra effects
* Revert
* Purge quorum queues
[#154472182]
* Improve use of untracked_enqueue
Choose the persisted leader id instead of just using the id of the
leader at point of creation.
* Store quorum leader in the pid field of amqqueue record
Same as mirrored queues, no real need for an additional field
* Improve recovery
When a ra node has never been started on a rabbit node, ensure it doesn't
fail but instead rebuilds the config and starts the node as a new node.
Also fix an issue when a quorum queue is declared while one of its rabbit
nodes is unavailable.
[#157054606]
* Cleanup core metrics after leader change
[#157054473]
* Return an error on sync_queue on quorum queues
[#154472334]
* Return an error on cancel_sync_queue on quorum queues
[#154472337]
* Fix basic_cancel and basic_consume return values
Ensure the quorum queue state is always returned by these functions.
* Restore arity of amqqeueu delete and purge functions.
This avoids some breaking changes in the cli.
* Fix bug returning consumers.
* remove rogue debug log
* Integrate ingress flow control with quorum queues
[#157000583]
* Configure commands soft limit
[#157000583]
* Support quorum pids on rabbit_mnesia:is_process_alive
* Publish consumers metric for quorum queues
* Whitelist quorum directory in is_virgin_node
Allow the quorum directory to exist without affecting the status of the
Rabbit node.
* Delete queue_metrics on leader change.
Also run the become_leader handler in a separate process to avoid
blocking.
[#157424225]
* Report cluster status in quorum queue infos. New per node status command.
Related to
[#157146500]
* Remove quorum_mapping table
As we can store the full queue name resource as the cluster id of the
ra_fifo_client state we can avoid needing the quorum_mapping table.
* Fix xref issue
* Provide quorum members information in stats
[#157146500]
* fix unused variable
* quorum queue multiple declare handling
Extend rabbit_amqqueue:internal_declare/2 to indicate whether the queue
record was created or already existing. From this we can then provide a
code path that should handle concurrent queue declares of the same quorum
queue.
* Return an error when declaring exclusive/auto-delete quorum queue
[#157472160]
* Restore lost changes
from 79c9bd201e1eac006a42bd162e7c86df96496629
* recover another part of commit
* fixup cherry pick
* Ra io/file metrics handler and stats publishing
[#157193081]
* Revert "Ra io/file metrics handler and stats publishing"
This reverts commit 05d15c786540322583fc655709825db215b70952.
* Do not issue confirms on node down for quorum queues.
Only a ra_event should be used to issue positive confirms for a quorum
queue.
* Ra stats publishing
[#157193081]
* Pick consumer utilisation from ra data
[#155402726]
* Handle error when deleting a quorum queue and all nodes are already down
This is in fact a successful deletion as all raft nodes are already 'stopped'
[#158656366]
* Return an error when declaring non-durable quorum queues
[#158656454]
* Rename dirty_query to committed_query
* Delete stats on leader node
[#158661152]
* Give full list of nodes to fifo client
* Handle timeout in quorum basic_get
* Fix unused variable error
* Handle timeout in basic get
[#158656366]
* Force GC after purge
[#158789389]
* Increase `ra:delete_cluster` timeout to 120s
* Revert "Force GC after purge"
This reverts commit 5c98bf22994eb39004760799d3a2c5041d16e9d4.
* Add quorum member command
[#157481599]
* Delete quorum member command
[#157481599]
* Implement basic.recover for quorum queues
[#157597411]
* Change consumer utilisation
to use the new ra_fifo table and API.
* Set max quorum queue size limit
Defaults to 7, can be configured per queue on queue.declare.
Nodes are selected randomly from the list of nodes, always including the
one that is executing the queue.declare command.
[#159338081]
* remove potentially unrelated changes to rabbit_networking
* Move ra_fifo to rabbit
Copied ra_fifo to rabbit and renamed it rabbit_fifo.
[#159338031]
* rabbit_fifo tidy up
* rabbit_fifo tidy up
* rabbit_fifo: customer -> consumer rename
* Move ra_fifo tests
[#159338031]
* Tweak quorum_queue defaults
* quorum_queue test reliability
* Optimise quorum_queue test suite.
By only starting a rabbit cluster per group rather than test.
[#160612638]
* Renamings in line with ra API changes
* rabbit_fifo fixes
* Update with ra API changes
Ra has consolidated and simplified its API. These changes update to
conform to that.
* Update rabbit_fifo with latest ra changes
* Clean up out of date comment
* Return map of states
* Add test case for basic.get on an empty queue
Before the previous patch, any subsequent basic.get would crash as
the map of states had been replaced by a single state.
* Clarify use of deliver tags on record_sent
* Clean up queues after testcase
* Remove erlang monitor of quorum queues in rabbit_channel
The eol event can be used instead
* Use macros to make clearer distinctions between quorum/classic queues
Cosmetic only
* Erase queue stats on 'eol' event
* Update to follow Ra's cluster_id -> cluster_name rename.
* Rename qourum-cluster-size
To quorum-initial-group-size
* Issue confirms on quorum queue eol
Also avoid creating quorum queue session state on queue operation
methods.
* Only classic queues should be notified on channel down
* Quorum queues do not support global qos
Exit with a protocol error if a basic.consume for a quorum queue is issued
on a channel with global qos enabled.
* unused variable name
* Refactoring
Strictly enforce that channels do not monitor quorum queues.
* Refactor foreach_per_queue in the channel.
To make it call classic and quorum queues the same way.
[#161314899]
* rename function
* Query classic and quorum queues separately
during recovery as they should not be marked as stopped during failed
vhost recovery.
* Remove force_event_refresh function
As the only user of this function, the management API no longer requires
it.
* fix errors
* Remove created_at from amqqueue record
[#161343680]
* rabbit_fifo: support AMQP 1.0 consumer credit
This change implements an alternative consumer credit mechanism similar
to AMQP 1.0 link credit where the credit (prefetch) isn't automatically
topped up as deliveries are settled and instead needs to be manually
increased using a credit command. This is to be integrated with the AMQP
1.0 plugin.
[#161256187]
* Add basic.credit support for quorum queues.
Added support for AMQP 1.0 transfer flow control.
[#161256187]
* Make quorum queue recover idempotent
So that if a vhost crashes and runs the recover steps it doesn't fail
because ra servers are still running.
[#161343651]
* Add tests for vhost deletion
To ensure quorum queues are cleaned up on vhost removal.
Also fix xref issue.
[#161343673]
* remove unused clause
* always return latest value of queue
* Add rabbitmq-queues scripts. Remove ra config from .bat scripts.
* Return error if trying to get quorum status of a classic queue.
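The consumer credit mechanism described in the rabbit_fifo items above differs from an ordinary prefetch in one key way: settling a delivery does not top the credit back up; only an explicit credit command does. A hedged Python sketch of that behaviour (names illustrative, not the rabbit_fifo API):

```python
# Sketch of AMQP 1.0-style link credit as described for rabbit_fifo:
# each delivery consumes one credit, settlement does NOT replenish it,
# and only an explicit credit grant tops it back up. Illustrative only.

class CreditConsumer:
    def __init__(self):
        self.credit = 0

    def grant(self, n):
        """The explicit 'credit' command from the consumer."""
        self.credit += n

    def try_deliver(self):
        """Queue-side check before delivering one message."""
        if self.credit == 0:
            return False
        self.credit -= 1
        return True

    def settle(self):
        """Settling a delivery deliberately does not restore credit."""
        pass
```

Contrast with prefetch semantics, where `settle` would increment the available window again.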
Fix up syslog protocol options to set ip to localhost if unset
Ensure default handler for OTP 21.1+ logger is removed
schlagert/syslog version 3.4.3 adds support for specifying a host name for `dest_host`; this PR supports it as well
See this gist for my test procedure:
git@gist.github.com:7ab6348839c3dc7b18b6662f024e089c.git
Since channel 0 exists on every connection for negotiation and error
communication. 65535 = (1 << 16) - 1, so 2047 = (1 << 11) - 1.
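The arithmetic behind those limits can be checked directly:

```python
# Channel numbers in AMQP 0.9.1 are 16-bit, and channel 0 is reserved
# for connection-level negotiation and errors, so the largest possible
# channel-max is (1 << 16) - 1; the lower default discussed here is
# (1 << 11) - 1.
print((1 << 16) - 1)  # 65535
print((1 << 11) - 1)  # 2047
```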
Closes #1593.
Previously the value was limited to 1 due to a bug in Ranch 1.0:
it didn't depend on the ssl app, which means Ranch/ssl shut down order
was not guaranteed. This led to scary but entirely harmless exception traces
in the log, one per TLS acceptor.
The issue is no longer present in Ranch 1.3.x and 1.4.0.
Per discussion with @essen.
rabbitmq-server#1367
[#151216783]