| Commit message (Collapse) | Author | Age | Files | Lines |
Previously it was located in gitdb, which doesn't have any facilities to use the git command
gitdb as well
is actually more efficient than the previous implementation
Index now locks its file for reading, and properly uses LockedFD when writing
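The lock-file pattern mentioned above (GitPython's LockedFD) can be sketched roughly as follows; this is a minimal illustration of the idea, assuming POSIX semantics — the class name matches the commit, but the method names and details here are simplified, not the library's actual API:

```python
import os

class LockedFD:
    """Sketch: write to 'path.lock' exclusively, then atomically
    move it over the original file on commit."""
    def __init__(self, path):
        self.path = path
        self.lock_path = path + ".lock"
        self.fd = None

    def open(self):
        # O_EXCL makes this fail if another writer already holds the lock file
        self.fd = os.open(self.lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        return self.fd

    def commit(self):
        os.close(self.fd)
        os.replace(self.lock_path, self.path)  # atomic rename on POSIX

    def rollback(self):
        os.close(self.fd)
        os.remove(self.lock_path)
```

A second writer attempting `open()` while the lock file exists fails immediately with `FileExistsError`, which is what makes concurrent index writes safe.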
reduce the file size as much as I would have liked, but certainly is a start for further 'outsourcing'
fast, while staying compatible with serialization which requires it to be sorted
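Keeping entries sorted at insertion time, as described above, means serialization can stream them in order without a final sort. A minimal sketch of that idea using `bisect` (the class and key layout here are hypothetical, not the index's real structure):

```python
from bisect import bisect_left, insort

class SortedEntries:
    """Sketch: keep (path, stage) keys sorted on insert so that
    serialization can emit them in on-disk order directly."""
    def __init__(self):
        self._keys = []

    def add(self, key):
        i = bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return  # already present
        insort(self._keys, key)

    def serialize(self):
        # already in the sorted order the on-disk format requires
        return list(self._keys)
```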
information - this also speeds up later serialization after changes. It's clear though that retrieving actual objects is currently slower, as these are not cached anymore. It's worth thinking about moving these encoding/decoding routines to gitdb
according to a simple test
(presort still needs implementation)
submodule: added stub to allow the tree to return something; it's not implemented though
was added instead
Adjusted all imports to deal with the changed package names
on starting with just the main thread
functionality - previously the callback functionality was bound to channel-based readers/writers
anymore, but are abstract.
Added IteratorReader, implementing the reader interface from an iterator. The implementation moved from the TaskIterator to the channel
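The IteratorReader mentioned above adapts a plain iterator to the reader interface of the channel framework. A sketch of that adapter, assuming the framework's convention that `read(0)` drains everything remaining (the exact signature of the real class may differ):

```python
class IteratorReader:
    """Sketch: pull items from an iterator through a
    channel-reader-style read(count) interface."""
    def __init__(self, iterable):
        self._iter = iter(iterable)

    def read(self, count=0):
        # count == 0 means: read everything that is left
        if count == 0:
            return list(self._iter)
        out = []
        for _ in range(count):
            try:
                out.append(next(self._iter))
            except StopIteration:
                break
        return out
```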
default. It shows that there must be the notion of a producer, which can work if there are no items read
works on py2.4, 2.5 and 2.6
printing thanks to the underlying threading implementation, we can at least make sure that the interpreter doesn't block during shutdown. Now it appears to be running smoothly
lock is not actually released, or they are not actually notified, staying in a beauty sleep. This glitch is probably caused by some detail not treated correctly in Python's thread module, which is something we cannot fix. It works as expected most of the time though - maybe some cleanup is not done correctly, which causes this
well as concurrency issues. Now it works okay, but the thread shutdown is still an issue, as it causes incorrect behaviour that makes the tests fail. That's good, as it hints at additional issues that need to be solved. There is just a little more left on the feature side, but it's nearly there
their duty to close the channel if required. Code is a little less maintainable now, but faster, it appears
closed only when there is no one else writing to it. This ensures that all tasks can continue working and deliver their results accordingly. Shutdown is still not working correctly, but that should be solvable as well. It's still not perfect though ...
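The "closed only when the last writer is done" rule can be sketched as a writer refcount on the channel. This is an assumed reconstruction of the design described above, not the library's actual classes:

```python
import threading
from queue import Queue

class RefCountedChannel:
    """Sketch: the channel only really closes once the last
    registered writer finishes, so sibling tasks cannot cut
    each other off mid-computation."""
    def __init__(self):
        self._queue = Queue()
        self._lock = threading.Lock()
        self._writers = 0
        self.closed = False

    def add_writer(self):
        with self._lock:
            self._writers += 1

    def writer_done(self):
        with self._lock:
            self._writers -= 1
            if self._writers == 0:
                self.closed = True

    def write(self, item):
        if self.closed:
            raise IOError("channel closed")
        self._queue.put(item)
```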
readers and writers; a reader is not connected to its writer anymore. This changes the refcounting of course, which is why the auto-cleanup for the pool is currently broken.
The benefits of this are faster writes to the channel - reading didn't improve - and refcounts should be clearer now
fully deterministic, as tasks still run into the problem that they try to write into a closed channel; it was closed by one of their task-mates, which didn't know someone else was still computing
the same and different pools
allows the pool to work as expected. Many more tests need to be added, and there still is a problem with shutdown as sometimes it won't kill all threads, mainly because the process came up with worker threads started, which cannot be
reference counting mechanism, causing references to the pool to be kept alive via cycles
write channels, possibly some with callbacks installed, etc. Pool.add_task will respect the user's choice now, but provides defaults which are optimized for performance
channel implementation, one of which is used as a base by the Pool read channel, relieving it of the duty to call these itself. The write channel with callback subclass allows the item to be transformed before it is written
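The callback write channel described above amounts to running a transform hook on each item before it is enqueued. A minimal sketch of that idea (class and parameter names here are hypothetical):

```python
from queue import Queue

class CallbackWriter:
    """Sketch: a write channel whose pre-write callback may
    transform the item before it lands in the queue."""
    def __init__(self, transform=None):
        self.queue = Queue()
        self._transform = transform

    def write(self, item):
        if self._transform is not None:
            item = self._transform(item)
        self.queue.put(item)
```

This keeps transformation out of the task body: the task writes raw results and the channel adapts them for the next consumer.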
improve performance, but which now hinders performance, besides being unnecessary ;)
makes it easier to customize
pool: in serial mode, created channels will be serial-only, which brings a 15% performance gain
and it runs faster as well, at about 2/3 of the performance we have when in serial mode
Condition implementation, related to the notify method not being thread-safe. Although I was aware of it, I missed the first check which tests for the size - the result could be incorrect if the whole method wasn't locked.
Testing runs stable now, allowing us to move on!
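The bug class described above — a size/waiter check performed outside the lock, so another thread can change the state between the check and the `notify` — is avoided by doing check, mutation, and notify as one locked unit. A sketch under that assumption (this is illustrative, not the project's actual queue):

```python
import threading
from collections import deque

class NotifyQueue:
    """Sketch: the state check and the notify must happen while
    the lock is held, or the result can be incorrect."""
    def __init__(self):
        self._lock = threading.Lock()
        self._not_empty = threading.Condition(self._lock)
        self._items = deque()

    def put(self, item):
        with self._lock:          # append + notify as one atomic unit
            self._items.append(item)
            self._not_empty.notify()

    def get(self, timeout=None):
        with self._lock:
            while not self._items:
                if not self._not_empty.wait(timeout):
                    raise TimeoutError("get timed out")
            return self._items.popleft()
```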
required due to the non-atomicity of the involved operation. Removed one level of indirection for the lock by refraining from calling my own 'wrapper' methods, which brought it back to the performance it had before the locking was introduced for the n==1 case
one more level of indirection. Clearly this is not good from a design standpoint, as a Condition is no Deque, but it helps speed things up, which is what this is about. Could make it a hidden class to indicate how 'special' it is
queue: Queue now derives from deque directly, which saves one dict lookup as the queue does not need to be accessed through self anymore
pool test improved to better verify threads are started correctly
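Deriving the queue from `deque` directly, as the commit above describes, lets the hot path call `append`/`popleft` on `self` instead of going through a `self._queue` attribute, which saves one attribute lookup per operation. A sketch of that layout (single consumer assumed; the real class has more to it):

```python
import threading
from collections import deque

class DequeQueue(deque):
    """Sketch: the queue IS the deque, so put/get operate on
    self directly rather than on a wrapped attribute."""
    def __init__(self):
        deque.__init__(self)
        self._lock = threading.Lock()

    def put(self, item):
        with self._lock:
            self.append(item)

    def get(self):
        with self._lock:
            return self.popleft()
```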
thread-safe, causing locks to be released multiple times. Now it runs very fast, and apparently very stably.
Now it's about putting previous features back in and studying their results, before more complex task graphs can be examined
events only with its queue, which boosts performance into bright green levels
the time instead of reusing its own one; it was somewhat hard to manage its state over time and could cause bugs. It works okay, but it occasionally hangs; it appears to be an empty queue. Have to gradually put certain things back in, although in the current mode of operation it should never have empty queues from the pool to the user
thing, as the task deletes itself too late - it's time for a paradigm change: the task should be deleted with its RPoolChannel, or explicitly by the user. The test needs to adapt, and shouldn't assume anything unless the RPoolChannel is gone
task class
the pool which runs it) - it's not yet stable, but should be solvable.
the wrong spot. The channel is nothing more than an adapter that allows reading multiple items from a thread-safe queue; the queue itself though must be 'closable' for writing, or needs something like a writable flag.
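A queue that is "closable for writing" via a writable flag, as suggested above, rejects new puts once writing is shut off while still letting readers drain what is already there. A sketch of that assumed design:

```python
import threading
from collections import deque

class ClosableQueue:
    """Sketch: set_writable(False) rejects further puts, but
    readers can still drain the remaining items."""
    def __init__(self):
        self._items = deque()
        self._lock = threading.Lock()
        self._writable = True

    def set_writable(self, state):
        with self._lock:
            self._writable = state

    def put(self, item):
        with self._lock:
            if not self._writable:
                return False
            self._items.append(item)
            return True

    def get(self):
        with self._lock:
            return self._items.popleft() if self._items else None
```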
at least with my totally rewritten version of the condition - the previous one was somewhat more stable, it seems. Nonetheless, this is the fastest version so far
didn't seem necessary - it's a failure, something is wrong - performance is not much better than the original one. It actually depends on the condition performance, which I can't make faster
its performance considerably.
Channels now use the AsyncQueue, boosting their throughput to about 5k items/s - this is something one can work with, considering that the runtime of each item should be large enough to keep the threads busy. This could be a basis; further testing needed
unnecessary tasks to be scheduled, as we keep track of how many items will be produced for the task at hand. This introduces additional locking, but performs well in multithreaded mode. Performance of the master queue is still a huge issue; it's currently the limiting factor, as bypassing the master queue in serial mode gives 15x performance, which is what I would need
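Tracking how many items will be produced, as described above, comes down to a lock-protected counter of unscheduled items: the pool asks for a chunk and is only granted as much as actually remains. A sketch under that assumption (names are hypothetical, not the library's API):

```python
import threading

class TaskScheduler:
    """Sketch: never schedule more work than there are items
    left to produce, at the cost of one lock per reservation."""
    def __init__(self, total_items):
        self._lock = threading.Lock()
        self._remaining = total_items    # items not yet scheduled

    def reserve(self, count):
        """Return how many items may actually be scheduled (<= count)."""
        with self._lock:
            granted = min(count, self._remaining)
            self._remaining -= granted
            return granted
```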