Commit message

...

written according to the encoding of the commit object, and decoded using that information as well. Trees will encode and decode their names with utf8

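A minimal sketch of the decoding behaviour described above, assuming the released GitPython 0.3 API (Repo, head.commit, and the commit attributes message and encoding) and a git repository in the current directory:

    from git import Repo

    repo = Repo(".")                 # assumes the current directory is a git repository
    commit = repo.head.commit
    # The raw message bytes are decoded using the commit's own encoding header,
    # so message is a text object rather than raw bytes.
    print(commit.encoding)           # e.g. 'UTF-8'
    print(type(commit.message))
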
repo.clone: assured backslashes won't reach the remote configuration, as they can cause trouble when re-reading the file later on. Some git commands don't appear to be able to deal properly with backslashes, others do

return value
remote: fixed evil bug that was caused by an inconsistency in Python when __getattr__ and __slots__ are involved - namely it calls getattr before checking for a slot of the same name, in an alternating fashion

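For context on the __getattr__/__slots__ interaction mentioned above: a slot that is declared but not yet assigned fails normal attribute lookup, so Python falls back to __getattr__; if that hook reads the slot again before assigning it, the lookup recurses. A minimal sketch using a hypothetical class, not GitPython's actual code:

    class Lazy(object):
        __slots__ = ("value",)

        def __getattr__(self, name):
            # Reached whenever normal lookup fails - including for the
            # declared but still unassigned slot "value".
            if name == "value":
                # Assign the slot first; reading self.value before assigning
                # it would call __getattr__ again and recurse.
                self.value = 42
                return self.value
            raise AttributeError(name)

    obj = Lazy()
    print(obj.value)   # triggers __getattr__ once, which fills the slot
    print(obj.value)   # slot is set now, __getattr__ is no longer consulted

The safe pattern in such code is to assign the slot (or raise AttributeError for unknown names) before ever reading it back.
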
method now yields good results on all tested platforms

using a / in a hardcoded fashion, leading to absolute paths where the caller expected relative ones

it. It can be assumed though that there are more bugs related to unicode hanging around in the system

a few more reST syntax errors on the way

The first one is faster, although I would have expected the latter one to be faster

memory map. It's totally ridiculous, but fixed

kind of issue doesn't pop up for anyone

python 2.4, as the pure python implementation cannot work without memory maps

different keywords for distutils and setuptools; the latter does not read the former's keywords, unfortunately

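A sketch of the usual workaround for the keyword mismatch mentioned above - pass the tool-specific keywords only when setuptools is available. The setup.py below is hypothetical, not the project's actual one, and the values are illustrative:

    try:
        from setuptools import setup
        extra = {"install_requires": ["gitdb"]}   # setuptools understands this keyword
    except ImportError:
        from distutils.core import setup
        extra = {}                                # distutils would only warn about it

    setup(
        name="example-package",                   # illustrative values
        version="0.3.0",
        packages=["example_package"],
        **extra
    )
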
as well as the previous instance method clone to keep it compatible
Fixed small bug in test code

partial_to_complete_sha_hex is working as expected with different input ( it wasn't, of course ;) )

rev_parse could be adjusted not to return Objects anymore, providing better performance for those who want just a sha. On the other hand, the method is high-level and should be convenient to use as well; it is usually a starting point for more work, hence it is unlikely to be called in tight loops

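A usage sketch of the trade-off discussed above, assuming the released GitPython 0.3 API and a repository in the current directory; the object returned by rev_parse carries both the hex and the binary sha:

    from git import Repo

    repo = Repo(".")
    obj = repo.rev_parse("HEAD~1")   # returns a full object, not just a sha
    print(obj.type)                  # e.g. 'commit'
    print(obj.hexsha)                # 40 character hex sha
    print(len(obj.binsha))           # 20 byte binary sha used internally
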
thanks to new gitdb functionality

utilities - the repo module got rather large

Shas still to be done

to be started up for sha resolution

tests missing

it's still rather slow and many tests are not yet implemented

previously due to import errors and a somewhat inconsistent working tree that occurred when switching branches ...

them. Incremented version to 0.3.0 beta1

the rule of trying not to cache possibly heavy data. The data_stream method should be used instead

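A sketch of the streaming access referred to above, assuming the released GitPython 0.3 API; the path setup.py is only an illustrative example of a file present in the tree:

    from git import Repo

    repo = Repo(".")
    blob = repo.head.commit.tree / "setup.py"   # illustrative path in the tree
    stream = blob.data_stream                   # file-like object, nothing cached on the blob
    print(stream.read(64))                      # read only as much as needed
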
the submodule's naming conventions

replaced them with a real test which actually executes code, and puts everything into the tmp directory

use 20 byte shas internally as it is closer to the GitDB implementation
Switched all remaining files back to tabs
Adjusted all remaining docstrings to suit the sphinx doc convention - it's likely that there are many docstring syntax errors though

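The 20 byte form mentioned above is simply the raw binary counterpart of the 40 character hex sha; converting between the two needs only the standard library (the sha below is just an example value):

    import binascii

    hexsha = "ca82a6dff817ec66f44342007202690a93763949"   # example 40 character hex sha
    binsha = binascii.unhexlify(hexsha)                    # 20 raw bytes, the internal form

    assert len(binsha) == 20
    assert binascii.hexlify(binsha).decode("ascii") == hexsha
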
to using git-read-tree to keep the stat information when merging one tree in. After all, this is what would otherwise have needed to be implemented in python as well

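A sketch of reading a tree into a temporary index from Python, assuming the released GitPython 0.3 API where IndexFile.from_tree drives git-read-tree under the hood; the treeish is illustrative:

    from git import Repo, IndexFile

    repo = Repo(".")
    # Read a single tree into a temporary index representation;
    # the repository's own index file stays untouched.
    index = IndexFile.from_tree(repo, "HEAD")
    print(len(index.entries))
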
The default is to write the physical index, which is the behaviour you would expect

was actually empty. This is a rare case that can happen during stream testing. Theoretically there shouldn't be any empty streams of course, but practically they do exist sometimes ;); fixed stream.seek implementation, which previously used seek on standard output
Improved GitCmd error handling

not implemented causing incorrect merge results. Added test to cover this issue
Diff: added NULL_BIN_SHA constant for completeness

including simple test, it may be simple as the methods it uses are thoroughly tested

in progress

can do much more than we can (and presumably faster), the .new method is used to create new index instances from up to 3 trees.
Implemented multi-tree traversal to facilitate building a stage list more efficiently (although I am not sure whether it could be faster to use a dictionary together with some intensive lookup), including test
Added performance test to learn how fast certain operations are, and whether one should be preferred over another

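A sketch of the .new path mentioned above, assuming the released GitPython 0.3 API in which IndexFile.new merges up to three trees in pure Python; the revisions are illustrative:

    from git import Repo, IndexFile

    repo = Repo(".")
    base = repo.commit("HEAD~1").tree
    ours = repo.commit("HEAD").tree
    # Build a new in-memory index from two trees; a third tree would make it
    # a classic base/ours/theirs three-way merge.
    index = IndexFile.new(repo, base.hexsha, ours.hexsha)
    print(len(index.entries))
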
IO will only be done when required. A possible disadvantage though is that time is spent on compressing the trees, although only the raw data and their shas would theoretically be needed. On the other hand, compressing their data uses less memory. An optimal implementation would just sha the data, check for existence, and compress it to write it to the database right away. This would mean more specialized code though, introducing redundancy. If IStreams knew whether they contain compressed or uncompressed data, and if there was a method to get a sha from data, this would work nicely in the existing framework though

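The IStream round trip discussed above looks roughly like the following with the released gitdb and GitPython APIs; a sketch only, with the blob content purely illustrative:

    import binascii
    from io import BytesIO

    from git import Repo
    from gitdb import IStream

    repo = Repo(".")
    data = b"illustrative blob content\n"
    istream = IStream("blob", len(data), BytesIO(data))
    repo.odb.store(istream)                    # compresses the data, shas it and writes the object
    print(binascii.hexlify(istream.binsha))    # sha filled in by the database after storing
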
correctly, a test to explicitly compare the git version with the python implementation is still missing
Tree and Index internally use 20 byte shas, converting them only as needed to reduce memory footprint and processing time
objects: started own 'fun' module containing the most important tree functions, more are likely to be added soon

faster as it removes one level of indirection, and makes the main file smaller, improving maintainability

information to just the stage ( just to be closer to the git-original )

python version is about as fast, but could support multithreading using async

repo: now has the option to use the pure python git database implementation, which is currently not used though