Commit messages
In its current form this probably does not fit busier scenarios.
This reverts commit 01c4ca2aa919922bb1da921674a87d1b96ec0528.
instead of relying on a Lager transform-driven return value.
This should avoid erlang/otp#4576 on Erlang 24.
Per discussion with @lhoguin.
This reverts commit fb4f88e7dd5ade4097225c08024b07c14be31643.
See erlang/otp#4576
Upgrade Lager to 3.9 for OTP 24 compatibility
`rabbit_federation_queue_link:go()` and its exchange equivalent are asynchronous (casts).
They happen to be executed after the links are started by the decorators, so most of the
time they find the links up and ready to `go`. However, this is not guaranteed. The retry
introduced in the previous commit, triggered by the link itself once it finds that
federation is down, guarantees that the `go` will be handled by the link process. Thus the
calls to rabbit_federation_queue_link:go() and rabbit_federation_exchange_link:go()
can be removed from the app.
Per discussion with @dcorbacho.
there is still one failing queue federation test.
Async threads are basically not used these days.
Dirty I/O schedulers, on the other hand, are used a lot.
More responsive when the system is overloaded with file calls.
RABBITMQ_NODE_PORT is exported by default and set to 5672. Re-exporting it in that
case actually breaks setups where rabbit is configured with TLS on the default port:
2021-02-28 07:44:10.732 [error] <0.453.0> Failed to start Ranch listener
{acceptor,{172,17,1,93},5672} in ranch_ssl:listen([{cacerts,'...'},{key,'...'},{cert,'...'},{ip,{172,17,1,93}},{port,5672},
inet,{keepalive,true}, {versions,['tlsv1.1','tlsv1.2']},{certfile,"/etc/pki/tls/certs/rabbitmq.crt"},{keyfile,"/etc/pki/tls/private/rabbitmq.key"},
{depth,1},{secure_renegotiate,true},{reuse_sessions,true},{honor_cipher_order,true},{verify,verify_none},{fail_if_no_peer_cert,false}])
for reason eaddrinuse (address already in use)
This is because by explicitly always exporting it, we force rabbit to listen on that
port via TCP, which is a problem when we want to do SSL on that port.
Since 5672 is already the default port, we can simply avoid exporting this variable
when the user does not customize the port.
Tested both in a non-TLS env (A) and in a TLS env (B) successfully:
(A) Non-TLS
[root@messaging-0 /]# grep -ir -e tls -e ssl /etc/rabbitmq
[root@messaging-0 /]#
[root@messaging-0 /]# pcs status |grep rabbitmq
* rabbitmq-bundle-0 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-0
* rabbitmq-bundle-1 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-1
* rabbitmq-bundle-2 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-2
(B) TLS
[root@messaging-0 /]# grep -ir -e tls -e ssl /etc/rabbitmq/ |head -n3
/etc/rabbitmq/rabbitmq.config: {ssl, [{versions, ['tlsv1.1', 'tlsv1.2']}]},
/etc/rabbitmq/rabbitmq.config: {ssl_listeners, [{"172.17.1.48", 5672}]},
/etc/rabbitmq/rabbitmq.config: {ssl_options, [
[root@messaging-0 ~]# pcs status |grep rabbitmq
* rabbitmq-bundle-0 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-0
* rabbitmq-bundle-1 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-1
* rabbitmq-bundle-2 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-2
Note: I don't believe we should export RABBITMQ_NODE_PORT at all, since you can specify all ports
in the rabbit configuration anyways, but prefer to play it safe here as folks might rely on being
able to customize this.
Signed-off-by: Michele Baldessari <michele@acksyn.org>
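The conditional export described above can be sketched roughly as follows. This is a minimal illustration, not the resource agent's actual code; `maybe_export_node_port` and `DEFAULT_NODE_PORT` are hypothetical names:

```shell
#!/bin/sh
# Sketch: only export RABBITMQ_NODE_PORT when the user explicitly
# customizes it; otherwise leave it unexported so rabbitmq.config alone
# decides whether 5672 is a TCP or an SSL listener.
DEFAULT_NODE_PORT=5672

maybe_export_node_port() {
    # $1 is the port requested via the agent's configuration (may be empty)
    requested_port="$1"
    if [ -n "$requested_port" ] && [ "$requested_port" != "$DEFAULT_NODE_PORT" ]; then
        RABBITMQ_NODE_PORT="$requested_port"
        export RABBITMQ_NODE_PORT
        echo "exported RABBITMQ_NODE_PORT=$RABBITMQ_NODE_PORT"
    else
        # Default or unset port: do not export, avoiding the forced TCP
        # listener that collides with an SSL listener on 5672.
        unset RABBITMQ_NODE_PORT
        echo "not exporting RABBITMQ_NODE_PORT"
    fi
}

maybe_export_node_port "$OCF_RESKEY_node_port"
```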
Currently every call to unblock_client_access() is followed by a log line
showing which function requested the unblocking. When the parameter
OCF_RESKEY_avoid_using_iptables=true is passed, it makes no sense to log
the unblocking of iptables, since it is effectively a no-op.
Let's move that logging inside the unblock_client_access() function,
with a parameter recording which function called it.
Tested on a cluster with rabbitmq bundles with avoid_using_iptables=true
and observed no spurious logging any longer:
[root@messaging-0 ~]# journalctl |grep 'unblocked access to RMQ port' |wc -l
0
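The shape of that change can be sketched like this. It is a hypothetical simplification of the agent's logic, not its actual code; the caller names are invented for illustration:

```shell
#!/bin/sh
# Sketch: logging lives inside unblock_client_access(), which takes the
# caller's name and stays silent when iptables is not used at all.
OCF_RESKEY_avoid_using_iptables="${OCF_RESKEY_avoid_using_iptables:-false}"

unblock_client_access() {
    caller="$1"
    if [ "$OCF_RESKEY_avoid_using_iptables" = "true" ]; then
        # iptables blocking is disabled, so unblocking is a no-op:
        # nothing to do and nothing worth logging.
        return 0
    fi
    # (the real agent would remove the iptables rule here)
    echo "unblocked access to RMQ port (requested by ${caller})"
}

unblock_client_access "start_rmq_server_app"
```

With avoid_using_iptables=true the function returns early, so the spurious "unblocked access" lines disappear without every call site having to guard its own logging.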
We introduce the OCF_RESKEY_allowed_cluster_nodes parameter, which can be used to specify
which nodes of the cluster rabbitmq is expected to run on. When this variable is not
set, the resource agent assumes that all nodes of the cluster (the output of crm_node -l)
are eligible to run rabbitmq. The use case here is clusters with a large number of
nodes, where only a specific subset is used for rabbitmq (usually enforced with
constraints).
Tested in a 9-node cluster as follows:
[root@messaging-0 ~]# pcs resource config rabbitmq
Resource: rabbitmq (class=ocf provider=rabbitmq type=rabbitmq-server-ha)
Attributes: allowed_cluster_nodes="messaging-0 messaging-1 messaging-2" avoid_using_iptables=true
Meta Attrs: container-attribute-target=host master-max=3 notify=true ordered=true
Operations: demote interval=0s timeout=30 (rabbitmq-demote-interval-0s)
monitor interval=5 timeout=30 (rabbitmq-monitor-interval-5)
monitor interval=3 role=Master timeout=30 (rabbitmq-monitor-interval-3)
notify interval=0s timeout=20 (rabbitmq-notify-interval-0s)
promote interval=0s timeout=60s (rabbitmq-promote-interval-0s)
start interval=0s timeout=200s (rabbitmq-start-interval-0s)
stop interval=0s timeout=200s (rabbitmq-stop-interval-0s)
[root@messaging-0 ~]# pcs status |grep -e rabbitmq -e messaging
* Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
...
* Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-0
* rabbitmq-bundle-1 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-1
* rabbitmq-bundle-2 (ocf::rabbitmq:rabbitmq-server-ha): Master messaging-2
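The fallback behaviour can be sketched as below. This is a minimal illustration under stated assumptions: `get_allowed_nodes` is a hypothetical name, and `crm_node_list` is a stand-in for shelling out to `crm_node -l`:

```shell
#!/bin/sh
# Sketch: determine which pacemaker nodes may run rabbitmq. If
# OCF_RESKEY_allowed_cluster_nodes is unset or empty, fall back to every
# node known to the cluster (what `crm_node -l` would report).

# Stand-in for `crm_node -l`; the real agent queries pacemaker.
crm_node_list() {
    echo "controller-0 controller-1 messaging-0 messaging-1 messaging-2"
}

get_allowed_nodes() {
    if [ -n "$OCF_RESKEY_allowed_cluster_nodes" ]; then
        # Operator restricted rabbitmq to an explicit subset of nodes.
        echo "$OCF_RESKEY_allowed_cluster_nodes"
    else
        # Default: every cluster node is eligible.
        crm_node_list
    fi
}

OCF_RESKEY_allowed_cluster_nodes="messaging-0 messaging-1 messaging-2"
get_allowed_nodes
```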
Per suggestion from @adamhooper in #2852
since pg2 was removed in OTP 24.
The only decision worth mentioning here is that both plugins share
a pg scope, which is started in the rabbitmq_management_agent supervision
tree and idempotently started in rabbitmq_management without
attaching the scope pid to its tree.
Per discussion with @lhoguin.
This reverts commit b1eaf8c9e20bfe7068cafcccf73f7117fc0e196b.
In the stream protocol.
Changes to `lager_util:expand_path/1` usage are
due to erlang-lager/lager#540
(cherry picked from commit 3a169cc9df14b925a0ebf33741f7e58b79345c72)
pg2 => pg for OTP 24 compatibility
Clean up rabbit_fifo_usage table on queue.delete