| Commit message | Author | Age | Files | Lines |
... | |
| | |
| | |
| | |
| | | |
thing as the task deletes itself too late - it's time for a paradigm change: the task should be deleted with its RPoolChannel or explicitly by the user. The test needs to adapt and shouldn't assume anything unless the RPoolChannel is gone
|
| | |
| | |
| | |
| | | |
task class
|
| | |
| | |
| | |
| | | |
the pool which runs it ) - it's not yet stable, but should be solvable.
|
| | |
| | |
| | |
| | | |
the wrong spot. The channel is nothing more than an adapter that allows reading multiple items from a thread-safe queue; the queue itself, though, must be 'closable' for writing, or needs something like a writable flag.
|
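The entry above treats the channel as a thin read adapter over a thread-safe queue that carries a writable flag. A minimal sketch of that idea, using hypothetical names (ClosableQueue, ReadChannel) rather than the library's actual API:

    import queue
    import threading

    class ClosableQueue:
        """Thread-safe queue that can be closed for writing (illustrative only)."""
        def __init__(self):
            self._queue = queue.Queue()
            self._writable = True            # the 'writable flag' from the entry above
            self._lock = threading.Lock()

        def put(self, item):
            with self._lock:
                if not self._writable:
                    raise IOError("queue is closed for writing")
                self._queue.put(item)

        def close(self):
            with self._lock:
                self._writable = False       # no further writes accepted

        def get(self, timeout=None):
            return self._queue.get(timeout=timeout)

        def empty(self):
            return self._queue.empty()

        def closed(self):
            return not self._writable

    class ReadChannel:
        """Adapter that only knows how to read items from the underlying queue."""
        def __init__(self, q):
            self._q = q

        def read(self, count=1):
            """Read `count` items, blocking until they are available."""
            return [self._q.get() for _ in range(count)]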
| | |
| | |
| | |
| | | |
at least with my completely rewritten version of the condition - the previous one seems to have been somewhat more stable. Nonetheless, this is the fastest version so far
|
| |/
| |
| |
| | |
didn't seem necessary - it's a failure, something is wrong - performance is not much better than the original one; it actually depends on the condition's performance, which I can't get any faster
|
| |\ |
|
| | |
| | |
| | |
| | |
| | |
| | | |
its performance considerably.
Channels now use the AsyncQueue, boosting their throughput to about 5k items/s - this is something one can work with, considering that the runtime of each item should be large enough to keep the threads busy. This could be a basis; further testing is needed
|
| | | |
|
| |/
| |
| |
| | |
unnecessary tasks to be scheduled, as we keep track of how many items will be produced for the task at hand. This introduces additional locking, but performs well in multithreaded mode. Performance of the master queue is still a huge issue; it's currently the limiting factor, as bypassing the master queue in serial mode gives 15x performance, which is what I would need
|
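The scheduling change described above (keep track of how many items are still pending so no unnecessary tasks get queued, at the cost of an extra lock) could look roughly like this; the names reserve and produced are hypothetical, not the actual pool API:

    import threading

    class ProductionTracker:
        """Sketch: track how many items are already scheduled for production,
        so a read request does not schedule more work than it still needs."""

        def __init__(self):
            self._lock = threading.Lock()
            self._scheduled = 0     # items currently queued or being produced

        def reserve(self, wanted):
            """Return how many additional items must be scheduled for a
            request of `wanted` items; anything already in flight counts."""
            with self._lock:
                missing = max(wanted - self._scheduled, 0)
                self._scheduled += missing
                return missing

        def produced(self, count=1):
            """Workers call this once `count` items have actually arrived
            in the output channel."""
            with self._lock:
                self._scheduled -= count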
| |
| |
| |
| | |
still inconsistencies that need to be fixed, but it has already improved, especially the 4-thread performance, which is now as fast as the dual-threaded performance
|
| |
| |
| |
| | |
for up to 2 threads, but 4 are killing the queue
|
| |
| |
| |
| | |
havoc - let's call this a safe state
|
| |
| |
| |
| | |
least in tests, and with multiple threads. There is still a sync bug regarding closed channels to be fixed, as the Task.set_done handling is incorrect
|
| |
| |
| |
| |
| |
| | |
changing tasks
Now processing more items to test performance, in dual-threaded mode as well, and it's rather bad; have to figure out the reason for this, probably the GIL, but queues could help
|
| |
| |
| |
| | |
task.min_count, to fix a theoretical possibility of a deadlock in serial mode, and unnecessary blocking in async mode
|
| |
| |
| |
| |
| |
| | |
multiple connected pools
Reduced waiting time in tests to make them complete faster
|
| |
| |
| |
| | |
single task for now, but next up are dependent tasks
|
| |
| |
| |
| | |
related to our channel closed flag, which is the only way not to block forever on read(0) calls against channels that were closed by another thread in the meantime
|
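The closed flag mentioned above is what keeps a read(0) (read everything) from blocking forever when another thread closes the channel mid-read. A sketch of the pattern, reusing the hypothetical ClosableQueue from further up:

    import queue

    def read_all(q, poll_interval=0.05):
        """Read items from a ClosableQueue until it is closed and drained.

        A short timeout is used instead of a fully blocking get() so the
        closed flag can be re-checked; otherwise a reader would wait forever
        on a channel that another thread closed in the meantime."""
        items = []
        while True:
            try:
                items.append(q.get(timeout=poll_interval))
            except queue.Empty:
                if q.closed() and q.empty():
                    break       # closed for writing and nothing left to read
        return items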
| |
| |
| |
| | |
chunking test. Next up, actual async processing
|
| |
| |
| |
| | |
including their own tests; their design was improved to prepare them for some specifics that would be needed for multiprocessing support
|
| |
| |
| |
| |
| |
| |
| | |
is handled by the task system
graph: implemented it, including a test, according to the pool's requirements
pool: implemented set_pool_size
|
| |
| |
| |
| | |
while going. Tests will be written soon for verification; it's still quite theoretical
|
| |
| |
| |
| | |
going on. The default implementation uses threads, which ends up being nothing more than async, as they are all locked down by internal locks and the global interpreter lock
|
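The GIL point above can be demonstrated in a few lines: CPU-bound work gains nothing from extra Python threads, because only one thread executes bytecode at a time. This is an illustration, not part of the library:

    import threading
    import time

    def busy(n=2_000_000):
        total = 0
        for i in range(n):
            total += i
        return total

    def timed(num_threads):
        threads = [threading.Thread(target=busy) for _ in range(num_threads)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.time() - start

    if __name__ == "__main__":
        # two threads do twice the work in roughly twice the wall time,
        # i.e. 'nothing more than async', as the entry above puts it
        print("1 thread :", timed(1))
        print("2 threads:", timed(2))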
| |
| |
| |
| | |
to do with the object db. If that really works the way I want, it will become its own project, called async
|
| |
| |
| |
| | |
inter-dependent tasks
|
| |
| |
| |
| | |
Git-Python. I have the feeling it can do much good here :)
|
|/
|
|
| |
restructured odb tests; they are now in their own module to keep the modules small
|
|
|
|
|
|
| |
will always be compressed if generated by the system (even future memory dbs will compress it)
loose db: implemented direct stream copy, indicated by a sha set in the IStream, including a test. This will be the case once packs are exploded, for instance
|
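A rough sketch of the 'direct stream copy' idea from the loose db entry above. The IStream interface (binsha, type, read) and the helper name are assumptions for illustration, not the real gitdb API; the point is only that a stream whose sha is already known (and which is already compressed by convention) can be copied verbatim instead of being hashed and compressed again:

    import hashlib
    import os
    import shutil
    import zlib

    def store_loose_object(objects_dir, istream):
        """Hypothetical loose-db writer: fast path for streams with a known sha."""
        if getattr(istream, "binsha", None):
            # direct stream copy: data is already compressed, sha already known
            sha_hex = istream.binsha.hex()
            path = os.path.join(objects_dir, sha_hex[:2], sha_hex[2:])
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as fp:
                shutil.copyfileobj(istream, fp)
            return sha_hex

        # slow path: build the object header, hash and compress ourselves
        data = istream.read()
        header = istream.type + b" " + str(len(data)).encode() + b"\0"  # type assumed bytes, e.g. b"blob"
        sha_hex = hashlib.sha1(header + data).hexdigest()
        path = os.path.join(objects_dir, sha_hex[:2], sha_hex[2:])
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as fp:
            fp.write(zlib.compress(header + data))
        return sha_hex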
|
|
|
| |
for streams starts to show up, but it's not yet there
|
|\
| |
| |
| |
| | |
Conflicts:
lib/git/cmd.py
|
| |
| |
| |
| | |
but next there will have to be more thorough testing
|
| |
| |
| |
| | |
multi-threading implementation of all odb functions
|
| |
| |
| |
| | |
everything. Next is to implement pack-file reading, then alternates, which should allow resolving everything
|
| | |
|
| |
| |
| |
| |
| |
| | |
parsing, which truncated newlines although that was illegitimate. It's up to the reader to truncate these; nowhere in the git code could I find anyone adding newlines to commits when they are written
Added performance tests for serialization; it does about 5k commits per second when writing to tmpfs
|
| | |
|
| |
| |
| |
| | |
missing ) and added performance tests which are extremely promising
|
| |
| |
| |
| | |
efficiently considering that it copies string buffers all the time
|
| | |
|
| |
| |
| |
| | |
appears to be working
|
| |
| |
| |
| |
| |
| | |
objects will be written using our utilities, and certain object retrieval functionality moves into the GitObjectDatabase, which is used by the repo instance
Added a performance test for object database access, which shows quite respectable tree parsing performance and okay blob access. Nonetheless, it will be hard to beat the C performance using a pure Python implementation, but it can be nice practice to write it anyway to allow more direct pack manipulations. Some could benefit from the ability to write packs, as these can serve as a local cache if alternates are used
|
| |
| |
| |
| | |
bug, of course, which just hadn't kicked in yet
|
| |
| |
| |
| |
| |
| | |
from their object information directly. This is faster, and resolves issues with the rev-list format and empty commit messages
Adjusted many tests to go with the changes, as they were still mocked. The mocks were removed where necessary and replaced by code that actually executes
|
| |
| |
| |
| | |
tests to work on larger repositories
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
performance is slightly better
git.cmd: added a method to provide access to the content stream directly. This is more efficient when large objects are handled, if it is actually used
test.helpers: removed unnecessary code
|
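The streaming access mentioned for git.cmd above matters mainly for large blobs. As a plain-subprocess illustration (not the actual git.cmd method), content can be pulled from git cat-file in chunks instead of as one big string:

    import subprocess

    def stream_blob(sha, chunk_size=64 * 1024, repo_path="."):
        """Yield a blob's content in chunks straight from `git cat-file`."""
        proc = subprocess.Popen(
            ["git", "cat-file", "blob", sha],
            cwd=repo_path,
            stdout=subprocess.PIPE,
        )
        try:
            while True:
                chunk = proc.stdout.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            proc.stdout.close()
            proc.wait()

    # usage: copy a huge blob to disk without holding it all in memory
    # with open("out.bin", "wb") as out:
    #     for chunk in stream_blob("<some sha>"):
    #         out.write(chunk)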
| |
| |
| |
| | |
objects if it could serialize itself
|
| |
| |
| |
| |
| |
| | |
make a big difference, but perhaps it's smarter about broken pipes.
Adjusted code to selectively strip the terminating newline, but only if it is there. The previous code would effectively duplicate the string and strip whitespace from both ends even though there was no need for it. It's a bit faster now, as the tests show
|
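The newline change above, in miniature: only drop a trailing newline when one is actually present, instead of unconditionally stripping whitespace from both ends (which copies and trims the whole string every time). The function name is made up for illustration:

    def drop_terminating_newline(text):
        # old behaviour, roughly: return text.strip()
        if text.endswith("\n"):
            return text[:-1]
        return text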
|/
|
|
| |
easy-to-change class member variable
|