`erlang:floor/1` is not available in 19.3.
[#156729133]
Introduce rabbit_nodes:await_running_count/2
[#156729133]
clause return value
[#156729133]
It will wait until the cluster has N members, for up to the given number of seconds.
The function returns immediately when N is 1.
Part of rabbitmq/rabbitmq-cli#235.
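The awaiting behaviour described above can be sketched as a poll-until-count-or-timeout loop. This is a hypothetical Python analogue, not the actual Erlang implementation; `get_count` stands in for a running-node counter and is an assumption:

```python
import time

def await_running_count(get_count, n, timeout):
    # Wait until get_count() reports at least n members, for up to
    # `timeout` seconds; return immediately when n is 1 (a single node
    # is trivially "running").
    if n <= 1:
        return True
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_count() >= n:
            return True
        time.sleep(0.05)  # brief pause between polls
    return get_count() >= n
```

The real function presumably re-checks cluster membership rather than a plain counter; the sketch only illustrates the wait-with-deadline shape.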
|
| |\ \
| |/
|/|
| |
| | |
rabbitmq/rabbitmq-server-1596-connection-name-to-connection-closed-event
Add client properties to connection.closed events
Fixes #1596
[#157500358]
Fixes #1596
[#157500358]
This commit adds client properties to connection.closed events. The
original need was to include only the optional user-provided connection
name, to correlate connections between created and closed events, but
all the client properties are ultimately included for the sake of
consistency between the two events.
This commit uses the process dictionary to convey the client properties,
as they're not yet available in the connection state when the call to
send the closed event is made (in the after block, just after the
network connection has been established).
Fixes #1596
[#157500358]
`erlang:ceil/1` is not available in 19.3.
Part of rabbitmq/rabbitmq-management#575.
[#157817330]
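On OTP releases predating `erlang:ceil/1` and `erlang:floor/1` (both added in OTP 20), a truncation-based fallback is the usual workaround. A hedged Python sketch of that arithmetic, mirroring the common Erlang shim; the names `floor_compat`/`ceil_compat` are illustrative, not from the source:

```python
def floor_compat(x):
    # Like the pre-OTP-20 Erlang fallback: truncate toward zero
    # (erlang:trunc/1), then subtract 1 for negative non-integers.
    t = int(x)
    return t - 1 if x < t else t

def ceil_compat(x):
    # Truncate toward zero, then add 1 for positive non-integers.
    t = int(x)
    return t + 1 if x > t else t
```

Both functions agree with `math.floor`/`math.ceil` on floats; the point is that only truncation is needed, which OTP 19.3 already provides.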
Introduce rabbit_vhost:await_running_on_all_nodes/2
Part of rabbitmq/rabbitmq-management#575.
[#157817330]
Syslog integration
The Syslog backend is no longer configured via a lager handler;
the facility and identity options are now in the syslog application
config.
Configure the syslog application directly instead of relying on
the rabbit_lager module.
Follow-up to 5cdee1530d5002b316b80f488a5d87417e1d0db0.
dirty_match_object does not provide much performance improvement
while it breaks auto-delete exchange cleanup.
A transaction with a binding deletion will call auto-delete
exchange removal, which will call a cleanup. During this cleanup
the deleted binding should not be dirty-deleted again.
Follow-up to #1589
[#156352963]
(cherry picked from commit ad5abba68c3c335a856ade2a2a38ba2c1de871fa)
They are no longer used in 3.7.x.
(cherry picked from commit 20de46c1df013874147835ac8bdc1e707ac95030)
Do not set table-wide and partial locks when deleting bindings.
Dirty deletes are faster and idempotent, which means
they can be run inside a transaction as long as the resource is locked
at the beginning of the transaction, which is done in
`lock_resource`.
The speed improvement comes from not setting record locks
for each record, since we already hold a lock on the resource.
Addresses #1566
[#156352963]
Instead of locking the entire table, we can use a custom global lock on
the affected resource (source or destination).
This can improve performance when multiple records are deleted at the
same time, for example when a connection with exclusive queues closes.
The resource lock is also acquired when adding or removing a binding, so
it won't conflict with bulk removal.
Addresses #1566
[#156352963]
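The per-resource locking idea above can be shown in miniature: instead of one table-wide lock, keep one lock per resource key, so operations on different resources run concurrently while operations on the same resource serialize. A hypothetical Python sketch of the pattern, not RabbitMQ's Mnesia code:

```python
import threading
from collections import defaultdict

# One lock per resource (e.g. a binding's source or destination exchange
# or queue) instead of a single table-wide lock.
_locks = defaultdict(threading.Lock)
_registry_lock = threading.Lock()

def with_resource_lock(resource, fn):
    with _registry_lock:   # protect the lock registry itself
        lock = _locks[resource]
    with lock:             # serialize per-resource only
        return fn()
```

In Mnesia the analogous mechanism is a custom global lock keyed on the resource; the sketch only illustrates why bulk removal and single add/remove operations on the same resource cannot interleave.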
|
| |\ \
| | |
| | | |
Hard cap for maximum priorities
Part of #1590.
References #1590.
[#157380396]
This reverts commit f5aa1fbe043395806d9b9ed8780892924431466c.
This feature wasn't available in the original implementation for a reason:
policies are dynamic and can change after a queue's been declared. However,
queue priorities are (at least currently) set in stone from the moment of
queue creation. This was mentioned in the docs but not explicitly enough and got overlooked.
Credit for the [re-]discovery goes to @acogoluegnes :)
References #1590.
[#157380396]
Part of #1590.
Part of #1590.
This is the value we advertise in the docs,
and it should be enforced to avoid a process explosion,
e.g. when an overflow value is provided.
Part of #1590.
[#157380396]
References #1590.
[#157380396]
Change channel_max default to 2047
Channel 0 exists on every connection for protocol negotiation and error
communication, so usable maxima are one less than a power of two:
65535 = (1 << 16) - 1, and likewise 2047 = (1 << 11) - 1.
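The arithmetic in the message can be double-checked directly; the values are from the commit text (a quick Python check, not project code):

```python
# AMQP 0-9-1 channel numbers are 16-bit, and channel 0 is reserved for
# negotiation and error communication, so limits take the form 2^k - 1.
FIELD_MAX = (1 << 16) - 1            # 65535, the protocol maximum
CHANNEL_MAX_DEFAULT = (1 << 11) - 1  # 2047, the new default

assert FIELD_MAX == 65535
assert CHANNEL_MAX_DEFAULT == 2047
```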
Closes #1593.
promoted
There can be a race condition when a master queue is briefly restarted.
If the master rejoins a stopping GM, it is also stopped.
In that case it should be safer to stop the slave and let another one
be promoted (and also stopped). If the master returns, it will either
rejoin the slaves or create a new GM.
One more place where a map definition must be converted to a proplist before validation
validation
Fixes rabbitmq/rabbitmq-management#565.
References rabbitmq/rabbitmq-server#1493, rabbitmq/rabbitmq-federation#70,
rabbitmq/rabbitmq-shovel#38, rabbitmq/rabbitmq-federation#73.
[#157045132]
Policy key to not promote unsynchronised queues.
This new policy controls whether unsynchronised slaves should be promoted
after a master crash. If set to `when-synced`, unsynchronised slaves
will not be promoted, keeping the state of the queue but making it
unavailable until the master node returns.
This change is supposed to make cluster shutdown safer,
because queues can fail or be killed on shutdown.
Queues without a master will be visible in the management UI
and can be deleted and redeclared, but will not automatically lose
messages.
Trying to declare or passively declare such a queue will result in a
timeout error, the same way as when the master was gracefully stopped with
`ha-promote-on-shutdown: when-synced`.
[#156811690]
Handle bump_reduce_memory_use non-true case