| Commit message | Author | Age | Files | Lines |
same message many times
several of the previous state entries
sending a boolean is irrelevant now
to the same clients until they've actually closed all their handles. This ensures that as more requests come in once we're low on fds, we don't send hundreds of 0 ages to the same clients erroneously. It also means that we always target the correct number of *unique* clients to ask to close their fds, which avoids thrashing the same clients and improves performance markedly.
Also, if, on open, we send "close" back to the client, that client *is* blocked (actually, due to having 0 opens), as we know it'll close, send us some closed msgs and then redo the open call. Thus we shouldn't be sending it any set maximum age messages.
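The targeting of *unique*, not-already-blocked clients described above might be sketched like so. This is a hypothetical Python model, not the real code (which is Erlang): `pick_clients`, the dict of per-client open counts, and the `pending` set of already-asked clients are all illustrative assumptions.

```python
def pick_clients(clients, pending, needed):
    """Pick up to `needed` unique clients to ask to close file handles.

    clients: dict of client id -> number of open handles
    pending: set of clients we've already asked (blocked until they comply)
    """
    chosen = []
    for pid, opens in clients.items():
        if len(chosen) >= needed:
            break
        # Skip clients already asked to close (they are blocked and will
        # close, send closed msgs, and redo the open call) and clients
        # with no open handles to give back.
        if pid in pending or opens == 0:
            continue
        chosen.append(pid)
    return chosen
```

With this shape, repeated low-fd requests never re-target the same client, which is the "avoids thrashing" property the message claims.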
close file handles, the clients might respond very quickly. The fhc will then gather these responses (say, just updates, not closes) and then sit there for 2 seconds until the timer goes off. Thus the solution is just to subtract the timer period from the calculated average: i.e. the message should say 'close file handles that haven't been used for N seconds from NOW' rather than the previous 'close file handles that haven't been used for N seconds from NOW minus 2 seconds'. This works very nicely, and whilst the fhc can get quite busy when there are more users of file handles than there are file handles available, that is hardly surprising; the key point is that starvation is prevented and processes are promptly serviced.
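The timer-period subtraction can be illustrated with a small sketch (Python rather than the fhc's Erlang; `maximum_age`, `TIMER_PERIOD` and the list-of-timestamps shape are assumed names, not the real API):

```python
TIMER_PERIOD = 2  # seconds between fhc timer ticks, per the message above

def maximum_age(last_used_times, now):
    """Average unused age of the open handles, with the timer period
    subtracted, so that 'close handles older than N' means N seconds
    from NOW rather than from the previous tick (NOW minus 2 seconds)."""
    ages = [now - t for t in last_used_times]
    avg = sum(ages) / len(ages)
    # Never ask for a negative age.
    return max(avg - TIMER_PERIOD, 0)
```

For example, handles last used at t=90 and t=94 with now=100 have ages 10 and 6, average 8; subtracting the 2-second timer period gives a threshold of 6 seconds from NOW.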
subtle for obtains, which effectively allocates temporarily to the blocked caller (FromPid) whilst monitoring it, and then transfers this allocation to the ForPid when possible. Note the ForPid can die before the obtain is processed, at which point the FromPid must be replied to immediately.
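As a rough model of that hand-over (Python with invented names; the real code is Erlang and process-based, so `process_obtain`, `allocations` and `is_alive` are illustrative only):

```python
def process_obtain(allocations, from_pid, for_pid, is_alive):
    """Transfer the temporary allocation made to the blocked caller
    (from_pid) over to for_pid when the obtain is processed. If for_pid
    has already died, from_pid must be replied to immediately instead."""
    # The obtain blocked, so an allocation was charged to from_pid.
    allocations[from_pid] -= 1
    if is_alive(for_pid):
        allocations[for_pid] = allocations.get(for_pid, 0) + 1
        return ("ok", for_pid)
    return ("reply_now", from_pid)
```

The monitoring of FromPid mentioned in the message is what makes the temporary charge safe: if the blocked caller itself dies, its allocation can be reclaimed rather than leaked.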
queue is going down or not
(transplanted from 4c99ba7eedd4b28a096d0412bbbdacb1fa91daa3)
(transplanted from aaf79aa3cacefee1742eb7f257c6a16ec5720d59)
There were a number of issues with RABBITMQ_PLUGINS_EXPAND_DIR:
- It was undocumented in the context of the generic unix package, and
if unwisely set could do an effective "rm -rf" in an unintended
location.
- It did not take account of the possibility that multiple nodes could
be starting at once, and so doing plugins activation
simultaneously.
Instead, use RABBITMQ_MNESIA_DIR/plugins-scratch. This avoids the
need to extend the generic unix package documentation, the location is
node-specific, and the distinctive plugins-scratch subdirectory
reduces the risk of unintended file deletions.
(transplanted from 064b8797493bb290156fb72a54f9e9276df0faed)
This does not in fact alter the behaviour at all, due to the following:
- if the alarm is active the alarm registration will call the handler
straight away, which will send a {conserve_memory, true} message to the
reader.
- the reply to the alarm registration is sent after that, and from the
same process - the alarm process - so by the time the reader loops
around the mainloop again the {conserve_memory, true} message is
guaranteed to be in the mailbox.
- on looping around, the reader requests a frame_header from the
socket. The reader has already sent connection.open_ok to the client,
and the client may have started sending commands straight away. But
all the reader is going to see of that to start with is an {inet_async,
...} message for a frame_header. That is guaranteed to end up in the
mailbox after the {conserve_memory, true} message.
Thus the reader is guaranteed to process the {conserve_memory, true}
message before handling any more data from the socket.
With this change it is rather more obvious that the memory alarm
status gets taken into account before any more client data is
processed.
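The ordering argument above boils down to FIFO delivery into the reader's mailbox. As a toy model (a plain Python queue standing in for the Erlang process mailbox; all names are illustrative, not the real messages' exact shapes):

```python
from queue import Queue

mailbox = Queue()  # stand-in for the reader process's FIFO mailbox

# 1. the alarm registration calls the handler straight away, which sends
#    the conserve_memory message to the reader.
mailbox.put(("conserve_memory", True))
# 2. the registration reply is sent after that, from the same (alarm)
#    process, so it cannot overtake the handler's message.
mailbox.put("registration_reply")
# 3. only after looping does the reader request a frame_header; the first
#    socket data shows up as an inet_async message, later still.
mailbox.put(("inet_async", "frame_header"))

# The reader therefore processes conserve_memory before any socket data.
assert mailbox.get() == ("conserve_memory", True)
```

The model relies on the same two facts as the message: messages between a fixed pair of processes are delivered in order, and the inet_async message cannot be enqueued until after the reader has looped.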