Commit messages

further work!

still honoured as to their own table storage type

Also removed chattiness of mixed_queue on queue mode transitions

patching 21368 and ensuring clustering still works

mainly remove the (now) badly named WasDiskNode var

Also, time for a new optimisation! YAY!

Previously, reading a message off disk meant seeking to the correct position and then reading the data. If the handle is already in the right position, that seek wastes quite a lot of time, as it is an OS call. Now I cache the position of the handle and so avoid seeking when possible.

This has a MASSIVE effect on performance, especially in straight-line cases, e.g. where a single prefetcher can drain a queue off disk in about one third of the time it used to take. Just looking at the code coverage from the test suite, there were just 534 seeks against 8582 cases where we found the handle in the right position already. This is a fairly small amount of code, and provides very useful benefits.
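The idea can be illustrated in miniature. This is a hedged Python sketch, not the actual Erlang implementation; the class and attribute names (`PositionCachedReader`, `read_at`) are hypothetical. It tracks the handle's current offset and only issues the OS-level seek when the requested offset differs:

```python
import tempfile

class PositionCachedReader:
    """Reads at given offsets, but skips the seek when the cached
    position already matches the requested offset."""

    def __init__(self, fileobj):
        self.f = fileobj
        self.pos = None   # cached offset of the handle; None = unknown
        self.seeks = 0    # real seeks performed (OS calls)
        self.hits = 0     # seeks avoided via the cache

    def read_at(self, offset, size):
        if self.pos != offset:
            self.f.seek(offset)   # OS call: only when actually needed
            self.seeks += 1
        else:
            self.hits += 1
        data = self.f.read(size)
        self.pos = offset + len(data)  # keep the cached position current
        return data

# straight-line reads after the first one need no further seeks
with tempfile.TemporaryFile() as f:
    f.write(b"abcdefgh")
    f.flush()
    r = PositionCachedReader(f)
    assert r.read_at(0, 4) == b"abcd"   # one real seek
    assert r.read_at(4, 4) == b"efgh"   # handle already in position
    assert (r.seeks, r.hits) == (1, 1)
```

In the sequential (prefetcher-drain) case almost every read lands on the cached position, which matches the 534-seeks-versus-8582-hits ratio described above.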

internal_read_message, tidying of API.

On Debian, we set the value of LOCK_FILE to the empty string, thus
disabling use of a lock file.
Now tested with rpmlint and lintian.
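The convention described (an empty LOCK_FILE disabling lock-file handling) can be sketched as an init-script fragment. Only the LOCK_FILE variable comes from the text above; the function names and paths are hypothetical illustration:

```shell
#!/bin/sh
# Hypothetical init-script fragment: an empty LOCK_FILE means
# "do not create or remove a lock file at all".
LOCK_FILE=""   # Debian: empty, so lock handling is skipped
# LOCK_FILE=/var/lock/subsys/rabbitmq-server   # RPM-style value

start_locked() {
    if [ -n "$LOCK_FILE" ]; then
        touch "$LOCK_FILE"    # only when a path was configured
    fi
}

stop_locked() {
    if [ -n "$LOCK_FILE" ]; then
        rm -f "$LOCK_FILE"
    fi
}

start_locked
stop_locked
echo "lock file handling: ${LOCK_FILE:-disabled}"
```

With the Debian empty-string value both guards are skipped, so no lock file is ever touched.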

Put the common init file into packaging/common, and modify the RPM and
deb builds to make the one substitution required to this file at build
time.

While the Fedora skeleton init script uses 2, it doesn't seem to be
universal across their init scripts, and the specs aren't clear on
what the value should be. So follow the Debian init script in this
case.

Not all the actions behave according to specs, but this is a general
issue with our init scripts.

The RPM init.d has some lock file support, but it's not really
functional (we never actually check whether it is present). So rip it
out for now. We should put proper lock/pid file support back later.

Neither our Fedora/RH nor our Debian packages actually depend on the
value in this comment - the list of runlevels to install the service
into comes from somewhere else in both cases. But Fedora guidelines
say that "Only services which are really required for a vital system
should define runlevels here". So don't.

rabbitmq-server seems a more precise statement of the facility than
rabbitmq.

chkconfig is a Red Hat thing, so this has no practical impact for
Debian. Some other Debian packages also have init.d scripts with a
chkconfig section, so hopefully the Debian gods will not take offence.

Also changed internal_fetch's result construction, which, whilst not wrong, was at least confusing and contained unexecutable code. Associated changes elsewhere.

passed to the prefetcher.

misinterleaved with a commit for the same transaction, so it's not necessary.

1) create a durable queue
2) send persistent msgs and set the queue to disk_only mode (note: requires branch bug21444)
3) when done, set the queue to mixed mode
4) send more persistent msgs
5) when done, wait for the prefetcher to do its thing
6) restart rabbit
7) observe that the queue length is wrong
Bugs fixed:
o) in the to_disk_only_mode code in mixed_queue, msgs that had come out of the prefetcher weren't being acked. This meant that on a restart, the msgs would be recovered. Given that we have to requeue everything anyway (sometimes) in a mixed -> disk transition, we obviously have to ack these msgs before republishing them. Note that we do this as part of a tx_commit, so it's perfectly safe.
o) in the to_disk_only_mode code in mixed_queue, there was a recursion which swapped an IsDurable param with an IsDelivered param. This caused substantial fail.
o) transaction commit coalescing is dangerous, especially when you're relying on calls to the disk queue happening in order. For example, should you tx_publish, tx_commit and then auto_ack or requeue_next_n, you would expect those last calls to see the msgs tx_published. This is not necessarily the case. A further good example is a tx_commit followed by a queue.delete. So, for such calls in the disk_queue, make sure that we flush properly, and also expose this functionality (it was already exposed, but as a cast; although it's not absolutely necessary for it to be a call, if we're doing a tx_commit anyway then that's a call, so another full round trip isn't a problem).
One final note, there is no way that this bug would have been discovered and so easily replicated and debugged without the pinning code in bug 21444. We will seriously hamper our own ability to debug and aid clients should the new persister get released without 21444.
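The ordering hazard from commit coalescing can be shown with a toy model. This is a hedged Python sketch with hypothetical names (`ToyDiskQueue`, `flush`), not the real Erlang disk_queue: commits only accumulate in a pending batch, so any operation that must observe committed messages has to flush that batch first:

```python
class ToyDiskQueue:
    """Toy model of commit coalescing: tx_commit only moves the
    transaction into a pending batch; flush() applies the batch.
    Operations that rely on ordering (ack, requeue, delete) must
    flush before they look at the queue."""

    def __init__(self):
        self.tx = []       # messages published in the open transaction
        self.pending = []  # coalesced commits, not yet applied
        self.queue = []    # the "on disk" queue contents

    def tx_publish(self, msg):
        self.tx.append(msg)

    def tx_commit(self):
        # coalescing: the commit just joins the pending batch
        self.pending.extend(self.tx)
        self.tx = []

    def flush(self):
        self.queue.extend(self.pending)
        self.pending = []

    def requeue_next_n(self, n):
        self.flush()   # the fix: flush before any ordered operation
        return self.queue[:n]

dq = ToyDiskQueue()
dq.tx_publish("m1")
dq.tx_commit()
# without the flush inside requeue_next_n, "m1" would still be
# sitting in the pending batch and be invisible here
assert dq.requeue_next_n(1) == ["m1"]
```

The same guard would sit in front of a delete: flushing first ensures a tx_commit issued just before queue.delete is not silently dropped.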

not make the latter a wrapper for the former.
Done.

remaining out case I don't think can exist without manually constructing the necessary structure. I don't believe the API permits it.

in disk_queue and also deal with the clean shutdown and delivery bits.
> ** queue_prefetcher
> - s/publish/deliver ?
No, I really don't like that. Publish is about pushing messages to the receiver. Thus it's named correctly.