| Commit message | Author | Age | Files | Lines |
elegant because it means that the delta generation fun used to seed the msg_store does not generate any deltas for transient msgs, so the msg_store will take care of deleting transient msgs without any further interaction.
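A minimal sketch of the idea above, in Python rather than Erlang, with made-up names and data shapes (this is not RabbitMQ's code): the delta-generating fun that seeds the store only emits reference deltas for persistent msgs, so anything transient simply never gains a reference and the store sweeps it away on its own.

```python
# Hypothetical sketch: seed the msg store with reference deltas from
# queue contents; transient msgs get no deltas, so the store's own
# sweep deletes them with no further interaction from the queues.

def gen_deltas(queue_contents):
    """Yield (msg_id, count) deltas for persistent msgs only."""
    for msg in queue_contents:
        if msg["persistent"]:
            yield (msg["id"], 1)

def seed_and_sweep(store, queue_contents):
    """Seed reference counts, then drop unreferenced msgs from store."""
    refs = {}
    for msg_id, delta in gen_deltas(queue_contents):
        refs[msg_id] = refs.get(msg_id, 0) + delta
    # Transient msgs never appear in refs, so the store removes them.
    for msg_id in list(store):
        if msg_id not in refs:
            del store[msg_id]
    return refs
```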
queue_index because there is no fhc (file handle cache). Also added notes about the deletion of queues on startup.
queues. However, it's still not quite right, because queues can be deleted by other nodes in the cluster between extracting the list of durable queues in msg_store startup and the startup of the queues themselves. This means that we can end up seeding the msg_store with msgs from queues that won't actually start up. You might think that we'd be saved by the fact that _process:terminate deletes the queue, but no: from what I can see, _process isn't trapping exits, meaning that terminate won't be called (and most likely rightly so, so that it doesn't upset mnesia's state) by the exit(Pid, shutdown) in amqqueue:recover. So there still needs to be some sort of fix somewhere, somehow.
queues on startup, which is a bit cyclical: I'd like to not start the msg_store until we know which queues are durable and which aren't, but we also can't start the queues until the msg_store is running. Fun.
is empty, which we already know has been sent out to a consumer, so it's really just a case of writing the message to disk, and writing index pub and deliver entries, iff the message is persistent
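The case above can be sketched as follows, with illustrative names only (the real code is Erlang and structured differently): a msg handed straight to a consumer from an empty queue is never queued in RAM, and the on-disk records, both the msg body and the index's pub and deliver entries, are written only if the msg is persistent.

```python
# Hypothetical sketch: publish-and-deliver-immediately on an empty
# queue. Transient msgs leave no trace; persistent msgs get a body
# write plus pub and deliver index entries.

def publish_delivered(msg, seq_id, msg_store, index):
    if msg["persistent"]:
        msg_store.append((msg["id"], msg["body"]))
        index.append(("pub", seq_id, msg["id"]))
        index.append(("deliver", seq_id))
```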
called externally. Also rework requeue: with the ability to indicate that persistent msgs will already be in the msg_store, we don't need to call msg_store:write for persistent msgs, which means we can also avoid the call to msg_store:remove that would have happened in the call to ack.
persistent msgs have already been sent to disk
purge, we have to walk through the index on disk in order to pull up msgs which have been delivered but not acked
non-acked seqids, thus splitting out this functionality
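The split described above amounts to a simple partition; here is an illustrative helper (names and types are assumptions, not the actual Erlang functions): given a run of seqids and the set of acks seen so far, separate the acked seqids from the non-acked ones.

```python
# Illustrative partition of seqids into acked vs. non-acked, given
# the set of seqids for which acks have been recorded.

def split_acked(seq_ids, acks):
    acked, unacked = [], []
    for s in seq_ids:
        (acked if s in acks else unacked).append(s)
    return acked, unacked
```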
or mins
return the count of the msgs purged. Secondly, it needs to remember to purge msgs and indices on disk that are in q1 or q4.
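A minimal sketch of the two fixes, covering only the q1/q4 part of purge (the state layout, store, and index types here are assumptions, not RabbitMQ's code): purge returns how many msgs it dropped, and for alphas in q1 or q4 it also removes the on-disk copies of persistent msgs, both the body in the store and the index entries.

```python
# Hypothetical sketch: purge the in-RAM alphas (q1 and q4), removing
# on-disk msg bodies and index entries for persistent ones, and
# return the count of msgs purged.

def purge(state, msg_store, index):
    count = 0
    for qname in ("q1", "q4"):
        for msg in state[qname]:
            if msg["persistent"]:
                msg_store.discard(msg["id"])
                index.discard(msg["seq_id"])
            count += 1
        state[qname] = []
    return count
```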
example: in store_alpha, q1 *must* be empty unless one of q2, gamma and q3 is non-empty. As such, to determine whether the alpha goes to q4, we only need to test for emptiness of q2, gamma and q3, not q1 as well. Similar logic holds for store_beta.
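The invariant above can be sketched like so (a toy model, not the Erlang implementation; the state shape is assumed): because q1 can only be non-empty when one of q2, gamma or q3 is non-empty, an incoming alpha belongs in q4 exactly when those three are all empty, with no need to inspect q1 itself.

```python
# Toy model of store_alpha: an alpha goes to q4 iff q2, gamma and q3
# are all empty; otherwise it goes to q1. The invariant guarantees
# q1 is already empty in the q4 case, so q1 need not be checked.

def store_alpha(state, msg):
    if state["q2"] or state["gamma"] or state["q3"]:
        state["q1"].append(msg)
    else:
        state["q4"].append(msg)
```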
That way we don't leave garbage - transactionally published, but
uncommitted messages - in the message store.
Also, we can get rid of the pending_commits state wart in
disk_queue. That is possible because both tx commits and queue
deletions are issued by the queue process and tx commits are
synchronous, so there is never a chance of there being a pending commit
when doing a deletion.
there've been a few associated changes in queue_process.
However, we actually assume that the prefetcher does no work. The only other possibility is to assume that the prefetcher always completes, which can lead to q1 being pointlessly evicted to disk. Also, we stop the prefetcher as soon as we have to reduce our memory usage, and at that point everything should work out. So, when starting the prefetcher, we don't adjust the ram_msg_count, but as we drain or stop the prefetcher, we include the prefetched alphas in the ram_msg_count and then evict q1 to disk as necessary. Whilst this means we will have more msgs in RAM than we claim, the fact that we stop the prefetcher as soon as we have to reduce our memory usage saves us from ruin.
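A hedged sketch of that accounting (names and state shape are assumptions, and the eviction is a stub): the prefetcher's alphas are only counted into ram_msg_count when the prefetcher is drained or stopped, at which point q1 is evicted to disk as necessary to get back under the target.

```python
# Toy model: on draining/stopping the prefetcher, fold the prefetched
# alphas into q4 and ram_msg_count, then evict from q1 to disk until
# back under the target (evict_to_disk is a caller-supplied stub).

def drain_prefetcher(state, prefetched, evict_to_disk):
    state["q4"] = prefetched + state["q4"]
    state["ram_msg_count"] += len(prefetched)
    while state["ram_msg_count"] > state["target_ram_count"] and state["q1"]:
        evict_to_disk(state["q1"].pop())
        state["ram_msg_count"] -= 1
```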
place of lists:min and lists:max where appropriate
moving around. Generally trying to work towards the mq API so that it can be dropped in
take them off disk
from gamma to q3 if possible. Hence a refactoring here.
simply returning the highest unacked seq id, instead of looking for the highest seq id ever encountered. This could have led to reuse of seq ids. We also need to know the total message count in the queue index, which is the number of unacked msgs recorded in the index, and we also need the seq id of the segment boundary of the segment containing the first msg in the queue. This is so that we can form the initial gamma correctly.
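The segment-boundary value mentioned above is simple arithmetic, assuming fixed-size segments (the segment size below is illustrative, not the real constant): the boundary of the segment containing a given seq id is that seq id rounded down to a multiple of the segment size.

```python
# Illustrative arithmetic: seq id at which the segment containing a
# given seq id begins, for fixed-size segments.

SEGMENT_SIZE = 16384  # entries per segment file; illustrative value only

def segment_boundary(seq_id, segment_size=SEGMENT_SIZE):
    return (seq_id // segment_size) * segment_size
```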
removal of a lot of algorithmic bugs. No real new features, but code in much better state.
The API to the msg_store has changed: now instead of asking whether a
sync is needed for a set of msg ids, and subsequently requesting a
sync, we request a sync for a set of msg ids and supply a callback
that is invoked when that sync is done. That way the msg_store can
make its own decisions on when to sync, and less logic is required by
callers.
During queue deletion we must remove *all* queue messages from the
store, including those that are part of committed transactions for
which the disk_queue has not yet received the sync callback. To do
that we keep a record of these messages in a dict in the state. The
dict also ensures that we do not act on a sync callback involving a
queue which has since been deleted and perhaps recreated.
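The callback-style API described above can be modelled in miniature (this is a toy, not the real msg_store; names are made up, and the "sync" here is immediate where the real store batches): callers request a sync for a set of msg ids and supply a callback, and the store fires each callback once all of its msg ids have landed on disk.

```python
# Toy model of the callback-based sync API: sync(msg_ids, callback)
# registers interest; the store fires each callback once every one of
# its msg ids is on disk, deciding for itself when that happens.

class MsgStoreSketch:
    def __init__(self):
        self.on_disk = set()
        self.pending = []           # [(msg_ids, callback)]

    def write(self, msg_id):
        self.on_disk.add(msg_id)    # pretend the write has landed
        self._fire_ready()

    def sync(self, msg_ids, callback):
        self.pending.append((set(msg_ids), callback))
        self._fire_ready()

    def _fire_ready(self):
        still_pending = []
        for msg_ids, cb in self.pending:
            if msg_ids <= self.on_disk:
                cb()
            else:
                still_pending.append((msg_ids, cb))
        self.pending = still_pending
```

This mirrors the stated benefit: the caller no longer polls "is a sync needed?"; it just hands over a callback and the store owns the scheduling.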
marginally more useful variable names, and more API, in particular proper support for the prefetcher. Also, totally untested.