| Commit message | Author | Age | Files | Lines |

postgres - it really is impossible to make this kind of thing DB-agnostic. Performance is terrible, but it does work, and it has an identical API to the rabbit_disk_queue.

documentation.
Also implemented requeue, and corrected a bug in startup which would have led to crashes if acks had appeared non-contiguously prior to shutdown.
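The non-contiguous-ack hazard above can be sketched as follows. This is a minimal illustration in Python, not the actual rabbit_disk_queue code: the assumption is that a seqid-ordered on-disk queue may only advance its read pointer over a contiguous prefix of acked seqids, so out-of-order acks must be parked until the gap is filled.

```python
# Illustrative sketch only - all names here are hypothetical, not the real API.
class AckTracker:
    def __init__(self):
        self.read_seqid = 0   # everything below this is acked and reclaimable
        self.pending = set()  # acked seqids that are not yet contiguous

    def ack(self, seqid):
        self.pending.add(seqid)
        # Advance only over a contiguous acked prefix; a gap (an unacked
        # message) stops the advance, so out-of-order acks simply wait.
        while self.read_seqid in self.pending:
            self.pending.remove(self.read_seqid)
            self.read_seqid += 1

tracker = AckTracker()
for s in (2, 0, 3):
    tracker.ack(s)
assert tracker.read_seqid == 1  # seqid 1 is still unacked: the gap blocks us
tracker.ack(1)
assert tracker.read_seqid == 4  # gap filled: pointer jumps over 1, 2 and 3
```

A startup routine that assumed acks were contiguous would mishandle the parked set, which is the class of crash the message above describes fixing.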

means that some external thing should keep track of exactly what is in a transaction (this is already the case for publishes; it just needs to be extended for acks) and then present it all for the commit. Also fixed a stupid bug in the stress_gc test, which was previously acking everything at once (albeit in a weird order, as desired); that meant all files got emptied before the GC ran - not quite what was intended.

led to a 22-minute startup time for 100000 messages. However, by dynamically adding a mnesia index during startup, and later removing it, this is reduced to 13.5 seconds. Note, however, that testing this with rabbit_tests:rdq_time_insane_startup() requires editing the disk queue so that it starts up in ram_disk mode, not disk_only mode as the code defaults to.
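The shape of that speed-up can be illustrated without mnesia at all. A hedged sketch, assuming (as the message implies) that startup must map each recovered message back to a position, and that without an index this means one linear scan per message:

```python
# Illustrative only: shows why a temporary index turns O(n^2) startup
# work into O(n). Not the actual mnesia-based implementation.
import random

msgids = [f"msg{i}" for i in range(1000)]
records = [(m, i) for i, m in enumerate(msgids)]
random.shuffle(records)  # recovery order is not insertion order

# Without an index: a full scan per message - O(n^2) overall, which is
# the shape of the 22-minute startup at 100000 messages.
def position_without_index(msgid):
    for m, pos in records:
        if m == msgid:
            return pos

# With a temporary index built in one O(n) pass and discarded once
# startup completes (the analogue of adding, then removing, the index):
index = {m: pos for m, pos in records}
assert all(index[m] == position_without_index(m) for m in msgids[:20])
```

Dropping the index afterwards matters because maintaining it would tax every publish and ack during normal running, when the lookups are no longer needed.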

and disk+ram modes. The disk+ram mode uses disc_copies for mnesia and ets for msg_location. This results in a substantial performance improvement (at minimum 5 times faster), but is RAM-limited by the number of messages. The disk-only mode uses dets and disc_only_copies for mnesia. This is much slower, but should not be so limited.

remove extra mnesia index - this has significantly improved performance!

to stress those code paths more.

only go away after being ack'd; thus we should redeliver messages.
Also reworked the stress gc test so that, as before, it really does ack messages in a non-linear order. This got quite a bit harder now that we can't deliver arbitrary messages and need to build the mapping between the msgid in the delivery and the seqid needed for the acks.

remove_messages.
The only case in which that code would be called would be from acks, and the effect would be to increment the read seqid. But that would require acking a message which hasn't been delivered, which is clearly insane.
Also fixed a bug in the tests.

{msgid, seqid}, but that's irrelevant), and that ack requires this seq_id (tuple) back in.
This avoids extra mnesia work and makes ack much faster. Given that the amqqueue already tracks unacked messages, this seems reasonable. However, if not, back off to the parent of this revision.
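The API shape being described - deliver hands out a seq_id which the caller must return at ack time - can be sketched as follows. This is a hypothetical Python miniature, not the real Erlang module; the point it illustrates is that ack becomes a direct deletion by key, with no reverse msgid-to-seqid lookup.

```python
# Illustrative sketch of the deliver/ack contract; names are hypothetical.
class DiskQueue:
    def __init__(self):
        self.next_seqid = 0
        self.messages = {}    # seqid -> msg body

    def publish(self, body):
        self.messages[self.next_seqid] = body
        self.next_seqid += 1

    def deliver(self):
        seqid = min(self.messages)          # oldest remaining message
        return self.messages[seqid], seqid  # caller must retain the seqid

    def ack(self, seqids):
        # Direct deletion by key: the caller (here, the analogue of the
        # amqqueue) tracked the seqids, so no reverse lookup is needed.
        for s in seqids:
            del self.messages[s]

q = DiskQueue()
q.publish("a"); q.publish("b")
body, seqid = q.deliver()
assert (body, seqid) == ("a", 0)
q.ack([seqid])
assert list(q.messages) == [1]
```

The trade-off the commit names is real: the queue sheds work, but the caller becomes responsible for holding seqids for every unacked message.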

seqids for each msgid in acks, acks are now very slow.
Thus I'm going to alter the API so that deliver returns the seqid and then ack takes [seqid]. This should make things faster.

disc_only has solved that problem...

10 times slower.
This means that the test suite which used to take about 12 mins to run now takes about 2 hours.
Looks like we could now be talking up to 40ms to publish a message. Interestingly, delivery is only twice as slow as with ets; it's publish that has taken the 10+-times hit.
Worryingly, the numbers show that performance per message is not constant, and wasn't in ets either. This must be the effect of buckets in both ets and dets filling up and chaining. The dets man page does say that it organises data as a linear hash list, which is a structure I know well, and I am surprised performance is dropping off in this way - maybe this suggests poor distribution of their hashing algorithm, or of its rebalancing.
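For readers unfamiliar with the structure the dets man page names, here is a toy linear hash table in Python. It is purely illustrative - not how dets is implemented - but it shows the mechanism behind the chaining remark above: buckets are split one at a time as load grows, so a bucket that has not yet been split can accumulate a long chain in the meantime.

```python
# Toy linear hashing: incremental bucket splits, chained buckets.
class LinearHash:
    def __init__(self, n0=2):
        self.n0 = n0           # initial bucket count
        self.level = 0         # number of completed doublings
        self.split = 0         # next bucket due to be split
        self.buckets = [[] for _ in range(n0)]
        self.count = 0

    def _index(self, key):
        i = hash(key) % (self.n0 * 2 ** self.level)
        if i < self.split:     # this bucket was already split: finer hash
            i = hash(key) % (self.n0 * 2 ** (self.level + 1))
        return i

    def insert(self, key, value):
        self.buckets[self._index(key)].append((key, value))
        self.count += 1
        if self.count > 2 * len(self.buckets):  # keep mean chain length <= 2
            self._split_one()

    def _split_one(self):
        old = self.buckets[self.split]
        self.buckets[self.split] = []
        self.buckets.append([])                 # bucket n0*2^level + split
        finer = self.n0 * 2 ** (self.level + 1)
        for k, v in old:
            self.buckets[hash(k) % finer].append((k, v))
        self.split += 1
        if self.split == self.n0 * 2 ** self.level:
            self.level += 1
            self.split = 0

    def lookup(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

t = LinearHash()
for i in range(500):
    t.insert(i, i * i)
assert all(t.lookup(i) == i * i for i in range(500))
assert t.lookup(9999) is None
```

With a well-distributed hash the splits keep chains short; a skewed hash concentrates keys in unsplit buckets, and lookups there degrade to list scans, which would produce exactly the non-constant per-message cost observed.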

ets table.
Even though this is slightly less optimal because of the loss of index lookups in file_detail, it is actually slightly faster due to not having to maintain two tables. Performance:
Msg Count | Msg Size | Queue Count | Startup µs | Publish µs | Pub µs/msg | Pub µs/byte | Deliver µs | Del µs/msg | Del µs/byte
1024| 512| 1| 2644.0| 41061.0| 40.098633| 0.0783176422| 156031.0| 152.374023| 0.2976055145
4096| 512| 1| 74843.0| 328683.0| 80.244873| 0.1567282677| 629441.0| 153.672119| 0.3001408577
16384| 512| 1| 373729.0| 3614155.0| 220.590515| 0.4308408499| 2969499.0| 181.243835| 0.3539918661
1024| 512| 10| 1605989.0| 281004.0| 27.441797| 0.0535972595| 1936168.0| 189.078906| 0.3692947388
4096| 512| 10| 85912.0| 2940291.0| 71.784448| 0.1402040005| 7662259.0| 187.066870| 0.3653649807
16384| 512| 10| 418213.0| 37962842.0| 231.706799| 0.4525523424| 32293492.0| 197.103833| 0.3849684238
1024| 8192| 1| 1347269.0| 144988.0| 141.589844| 0.0172839165| 173906.0| 169.830078| 0.0207312107
4096| 8192| 1| 93070.0| 606369.0| 148.039307| 0.0180712044| 829812.0| 202.590820| 0.0247303247
16384| 8192| 1| 20014.0| 4976009.0| 303.711487| 0.0370741561| 3211632.0| 196.022461| 0.0239285231
1024| 8192| 10| 77291.0| 348677.0| 34.050488| 0.0041565537| 1877374.0| 183.337305| 0.0223800421
4096| 8192| 10| 104842.0| 2722730.0| 66.472900| 0.0081143677| 7787817.0| 190.132251| 0.0232095033
16384| 8192| 10| 21746.0| 44301448.0| 270.394580| 0.0330071509| 32018244.0| 195.423853| 0.0238554507
1024| 32768| 1| 120732.0| 426700.0| 416.699219| 0.0127166510| 210704.0| 205.765625| 0.0062794685
4096| 32768| 1| 9355.0| 1925633.0| 470.125244| 0.0143470839| 824304.0| 201.246094| 0.0061415434
16384| 32768| 1| 14734.0| 10371560.0| 633.029785| 0.0193185359| 3594753.0| 219.406311| 0.0066957492
1024| 32768| 10| 6052.0| 629362.0| 61.461133| 0.0018756449| 2100901.0| 205.166113| 0.0062611729
4096| 32768| 10| 5546.0| 4203683.0| 102.628979| 0.0031319879| 8899536.0| 217.273828| 0.0066306710
16384| 32768| 10| 22657.0| 50306069.0| 307.043878| 0.0093702355| 36433817.0| 222.374371| 0.0067863273
1024| 131072| 1| 7155.0| 1913696.0| 1868.843750| 0.0142581463| 444638.0| 434.216797| 0.0033128113
4096| 131072| 1| 6671.0| 8232640.0| 2009.921875| 0.0153344870| 1907439.0| 465.683350| 0.0035528820
16384| 131072| 1| 1699.0| 33886514.0| 2068.268677| 0.0157796377| 7291762.0| 445.053833| 0.0033954913
1024| 131072| 10| 7506.0| 1991032.0| 194.436719| 0.0014834344| 4564850.0| 445.786133| 0.0034010783
4096| 131072| 10| 7486.0| 9551800.0| 233.198242| 0.0017791614| 18048697.0| 440.642017| 0.0033618318
16384| 131072| 10| 2771.0| 71072559.0| 433.792474| 0.0033095739| 81144745.0| 495.268219| 0.0037785966

This then means that it's much easier to test how performance changes with modifications to the disk queue.

This was committed directly on 'default' in error.

since the benefit is unproven.

which is much more efficient for small numbers of priorities - the common case.
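The small-priority-count optimisation can be sketched as an ordered list of (priority, FIFO) pairs: every operation is O(p) in the number of distinct priorities p, which beats heap bookkeeping when p is tiny. This Python miniature is an assumed illustration of the technique, not the actual Erlang priority queue module.

```python
# Sketch: priority queue as a descending-sorted list of (priority, deque).
from collections import deque

class SmallPQueue:
    def __init__(self):
        self.levels = []  # [(priority, deque)], sorted descending by priority

    def push(self, priority, item):
        for i, (p, q) in enumerate(self.levels):
            if p == priority:
                q.append(item)               # FIFO within a priority level
                return
            if p < priority:                 # new level slots in before this one
                self.levels.insert(i, (priority, deque([item])))
                return
        self.levels.append((priority, deque([item])))

    def pop(self):
        p, q = self.levels[0]                # highest priority, FIFO within it
        item = q.popleft()
        if not q:
            del self.levels[0]               # drop emptied level
        return item

pq = SmallPQueue()
pq.push(0, "low"); pq.push(5, "high-1"); pq.push(5, "high-2")
assert pq.pop() == "high-1"
assert pq.pop() == "high-2"
assert pq.pop() == "low"
```

With one or two priorities in play, push touches at most a couple of list cells, and the common single-priority case degenerates to a plain FIFO.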

and make to_list do something sensible.