Commit messages
multiple queues, eliminates the need for multiple reads, provided the /next/ copy of the message is requested before the previous copy has been acked. This should reduce memory pressure.
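
The gist, as a minimal hypothetical sketch (msg_read_cache, fetch/4 and ReadFun are made-up names, not the real rabbit_disk_queue API): cache the payload from the first disk read and hand out copies until every queue has fetched its copy, then drop the entry.

    -module(msg_read_cache).
    -export([new/0, fetch/4]).

    new() -> dict:new().

    %% Fetch MsgId on behalf of one queue. Copies is how many queues need
    %% this message in total; ReadFun performs the real disk read.
    fetch(MsgId, Copies, ReadFun, Cache) ->
        case dict:find(MsgId, Cache) of
            {ok, {Msg, 1}} ->               %% last outstanding copy: evict
                {Msg, dict:erase(MsgId, Cache)};
            {ok, {Msg, N}} ->               %% cache hit: no disk read
                {Msg, dict:store(MsgId, {Msg, N - 1}, Cache)};
            error when Copies > 1 ->        %% first request: read once, cache
                Msg = ReadFun(MsgId),
                {Msg, dict:store(MsgId, {Msg, Copies - 1}, Cache)};
            error ->                        %% only one queue wants it: no cache
                {ReadFun(MsgId), Cache}
        end.

A real implementation would presumably tie eviction to acks rather than fetch counts, which is what the proviso about the next copy being requested before the previous one is acked suggests.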
means that the mixed_queue avoids unnecessary term_to_binary calls. Tests adjusted and the whole test suite still passes.
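
A hypothetical illustration of that optimisation (the module and record names are invented): serialise a message at most once and reuse the binary thereafter.

    -module(msg_bin).
    -export([to_binary/1]).

    -record(msg, {content, bin = none}).

    %% Serialise on first use and remember the result, so later disk
    %% operations reuse the binary rather than calling term_to_binary/1 again.
    to_binary(Msg = #msg{bin = none, content = Content}) ->
        Bin = term_to_binary(Content),
        {Bin, Msg#msg{bin = Bin}};
    to_binary(Msg = #msg{bin = Bin}) ->
        {Bin, Msg}.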
standard Unix variable, which should be honoured.
functions which I can't work out what to do about... Also cosmetic changes.
further QA is still required.
deliver_from_queue case, we now reduce n calls to mixed_queue:is_empty to 1 call and pass around the remaining count as the accumulator. l33t
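
The shape of that change, as a sketch with stand-in names (rabbit_mixed_queue:len/1 and deliver/1 are assumptions about the API): take the length once up front, then count down rather than calling is_empty on every iteration.

    -module(drain).
    -export([deliver_all/2]).

    %% One length call up front; the remaining count is the accumulator.
    deliver_all(DeliverFun, MQ) ->
        deliver_from_queue(rabbit_mixed_queue:len(MQ), DeliverFun, MQ).

    deliver_from_queue(0, _DeliverFun, MQ) ->
        MQ;                                       %% count exhausted: done
    deliver_from_queue(Remaining, DeliverFun, MQ) ->
        {Msg, MQ1} = rabbit_mixed_queue:deliver(MQ),
        ok = DeliverFun(Msg),
        deliver_from_queue(Remaining - 1, DeliverFun, MQ1).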
useful. The code is thus now a good bit simpler.
to the disk_queue when operating in disk only mode, and seems to have substantially improved performance (in addition to avoiding a sync call, repeatedly taking the length of a queue (erlang stdlib) with a million+ items in it can't have been cheap). It now seems to be very much the case that when coming out of disk only mode, huge backlogs are recovered reliably.
Also, added reduce_memory_footprint and increase_memory_footprint to control. Both can be run twice; whether it is the disk_queue or the individual queues that changes mode depends on which invocation it is.
This involved some substantial changes to the queue internal data
structures - mostly by choice; the new design is cleaner:
- We no longer keep a list of consumers in the channel
records. Now the channel records just contain a consumer count
instead, and that's only there for efficiency, so we can more
easily tell when we need to register/unregister with the limiter.
- We now keep *two* consumer queues - one of active consumers
(that's the one we've always had) and one of blocked consumers.
We round-robin on the first one as before, and move things between the
two queues when blocking/unblocking channels. When doing so the
relative order of a channel's consumers is preserved, so the effects
of any round-robining of the active consumers carry through to the
blocked consumers when they get blocked, and back to the active
consumers when they get unblocked (sketched below).
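
A sketch of the two-queue scheme using the stdlib queue module; it assumes consumers are {ChPid, ConsumerTag} pairs, which is a guess at the representation.

    -module(consumers).
    -export([round_robin/1, block_channel/2, unblock_channel/2]).

    %% Take the next active consumer and rotate it to the back
    %% (assumes the active queue is non-empty).
    round_robin(Active) ->
        {{value, C}, Active1} = queue:out(Active),
        {C, queue:in(C, Active1)}.

    %% Move every consumer on ChPid from Active to Blocked; the
    %% to_list/from_list round-trips preserve the relative order of the
    %% channel's consumers.
    block_channel(ChPid, {Active, Blocked}) ->
        {Mine, Others} = lists:partition(fun({Ch, _Tag}) -> Ch =:= ChPid end,
                                         queue:to_list(Active)),
        {queue:from_list(Others), queue:join(Blocked, queue:from_list(Mine))}.

    %% And back again when the channel unblocks.
    unblock_channel(ChPid, {Active, Blocked}) ->
        {Mine, Others} = lists:partition(fun({Ch, _Tag}) -> Ch =:= ChPid end,
                                         queue:to_list(Blocked)),
        {queue:join(Active, queue:from_list(Mine)), queue:from_list(Others)}.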
We point to the macports files of the default branch from our web site
and they got broken with the merge of bug20333. This hopefully fixes
that, but further QA is required.
OS X su command
UNSENT_MESSAGE_LIMIT made performance better. This then made me wonder whether the unblock and notify_sent messages weren't getting through fast enough, and sure enough, using pcast is much better there. Also, turning on dbg:tpl showed that the common path in mixed_queue was to call publish_delivered (i.e. the message has been delivered to a consumer; we just need to record this fact). Making sure everything in there for the non-persistent, non-durable, disk-only mode is asynchronous also helped performance massively.
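
A sketch of those two asynchronous paths, assuming gen_server2's priority cast pcast/3 and made-up message shapes; the real calls may well differ.

    -module(async_paths).
    -export([notify_sent/2, publish_delivered/2]).

    %% Flow-control notifications as high-priority casts, so they are not
    %% stuck behind a long mailbox backlog.
    notify_sent(QPid, ChPid) ->
        gen_server2:pcast(QPid, 7, {notify_sent, ChPid}).

    %% Record "delivered at publish time" with a fire-and-forget cast
    %% instead of a synchronous call, so the common publish_delivered path
    %% never blocks on the disk queue.
    publish_delivered(Q, Msg) ->
        gen_server2:cast(rabbit_disk_queue, {publish_delivered, Q, Msg}).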
persistent/durable async, and it has improved some issues. But if you switch to disk only mode, allow, say, 10k messages to build up (use MulticastMain), then switch back to ram mode, it won't recover - the receive rate stays very low, and the lengths reported by rabbitmqctl list_queues continue to grow insanely. This is very odd, because querying the disk_queue directly for the queue length shows it drops to 0, yet at least one CPU is maxed out at 100% use, messages continue to arrive, and the delivery rate never goes back up. Mysterious.
Reversed order - i.e. now when swapping out, the first thing is to alter the disk_queue, and the 2nd thing is to alter the queues.
And vice versa.
The reasoning is as follows:
Changing the disk_queue is a BIG operation because it affects every message in there, from all queues. In order to minimise the impact of this operation, we must do it first, not second, because if we do it first, only persistent messages from durable queues will be in there, whereas if we do it second, then all messages from all queues will be in there.
Similarly, when swapping in, altering the individual queues is the first thing to do because it prevents the disk queue from growing further (i.e. only persistent messages to durable queues then make it to the disk queue), and each queue pulls out from the disk queue all the messages in there and so subsequent delivery from the mixed queue becomes very fast (actually, this is a total lie because of the call to rabbit_disk_queue:phantom_deliver in rabbit_mixed_queue:deliver - if I could get rid of this or at least make it async then that would greatly improve matters).
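
The ordering, as a sketch; set_all_queue_modes/1 and the two rabbit_disk_queue mode functions are stand-ins for whatever the real entry points were.

    -module(mode_swap).
    -export([swap_out/0, swap_in/0]).

    %% Swapping out: convert the disk_queue first, while it only holds
    %% persistent messages from durable queues and is therefore small.
    swap_out() ->
        ok = rabbit_disk_queue:to_disk_only_mode(),
        ok = set_all_queue_modes(disk).

    %% Swapping in: move the queues first, so the disk_queue stops growing
    %% and drains; convert it back only once it is small again.
    swap_in() ->
        ok = set_all_queue_modes(mixed),
        ok = rabbit_disk_queue:to_ram_disk_mode().

    %% Hypothetical helper: ask every queue process to change mode.
    set_all_queue_modes(_Mode) -> ok.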
rabbit_queue_mode_manager:change_memory_usage(undef, true).
This will first ask all queues to switch from mixed to disk mode, and on a 2nd call will ask the disk queue to switch to disk only mode.
rabbit_queue_mode_manager:change_memory_usage(undef, false).
moves the other way.
This all works, e.g. set MulticastMain pushing in messages and switch modes, and it's fine.
One immediate problem is that as soon as everything becomes disk only, the performance suffers, so messages build up. This is as expected.
Then, going back to the middle mode (i.e. disk queue in ram_disk mode and queues in disk mode), the switch in the disk queue eats up a lot of memory. I suspect this is the effect of converting the mnesia table from disc_only_copies to disc_copies when there are 40k+ messages in there (one row per message). As a result, this conversion on its own is very dangerous to make. It might be more sensible to use the "weird" mode, where the queues are in mixed mode and the disk queue is in disk_only mode, so as to try and get the queues to drain as fast as possible, reducing the size of the mnesia table so that when it is finally converted back, it's small.
More experimentation is needed.
I'll hook the above commands into rabbitmqctl soon.
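
The full walk through the modes as a shell fragment; the comments follow the description above, and the mapping of the two false calls back is my reading of "moves the other way".

    %% In the Erlang shell on the broker node:
    rabbit_queue_mode_manager:change_memory_usage(undef, true),  %% queues: mixed -> disk
    rabbit_queue_mode_manager:change_memory_usage(undef, true),  %% disk queue: ram_disk -> disk_only
    rabbit_queue_mode_manager:change_memory_usage(undef, false), %% disk queue: disk_only -> ram_disk
    rabbit_queue_mode_manager:change_memory_usage(undef, false). %% queues: disk -> mixed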