- to update submodules such as svn-externals
- heads, including test.
  Config: SectionConstraint was updated with additional callable methods; the complete ConfigParser interface should be covered now.
  Remote: the refs method is much more efficient now, as it sets the search path to the directory containing the remote refs - previously it used the remotes/ base directory and pruned the search result.
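  A minimal sketch of both interfaces, assuming a repository in the current directory and a remote named "origin" (both hypothetical):

      from git import Repo

      repo = Repo(".")
      origin = repo.remote("origin")

      # SectionConstraint: a reader bound to this remote's config section,
      # forwarding the ConfigParser interface with the section pre-filled
      print(origin.config_reader.get_value("url"))

      # refs now searches only the directory holding this remote's refs
      for ref in origin.refs:
          print(ref.name)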
- though to allow easy configuration of branch-specific settings
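  The fragment apparently concerns per-branch settings; a sketch assuming it refers to the Head.config_reader/config_writer pair, with the branch name "master" chosen for illustration:

      from git import Repo

      repo = Repo(".")
      master = repo.heads.master

      # write into the [branch "master"] section of .git/config
      writer = master.config_writer()
      writer.set_value("description", "main development line")
      writer.release()

      # read it back through the same section-constrained interface
      print(master.config_reader().get_value("description"))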
- svn-external-like behaviour. Implemented a first version of update, which works for now but probably needs more features.
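  A sketch of the update call this entry seems to describe; the repository layout is assumed:

      from git import Repo

      repo = Repo(".")

      # bring every submodule recorded in .gitmodules up to date,
      # roughly what an svn-externals update would do
      for sm in repo.submodules:
          sm.update(init=True)   # clone if missing, then check out the recorded commit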
- general may be contradictory if a tag is given there as well as a commit sha of the submodule; hence it should really only be a branch
- was mainly copy-paste from with_rw_repo, what a shame
- provided for Remotes, including test
- local cache - previously a procedural approach was used, which was less code, but slower too. Especially in the case of CommitObjects, unrolling the loop manually makes a difference.
  Submodule: Implemented query methods and did a bit of testing. More is to come, but the test works for now. As a special addition, the submodule implementation uses the section name as the submodule ID, even though it appears to be just the path. This makes renames easier.
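  A small illustration of the name/path distinction, assuming the repository contains at least one submodule:

      from git import Repo

      repo = Repo(".")
      sm = repo.submodules[0]

      # the ID comes from the [submodule "<name>"] header in .gitmodules,
      # while the checkout path is stored separately - a rename only needs
      # to touch the header
      print(sm.name)   # section name, used as the submodule ID
      print(sm.path)   # checkout path, usually equal to the name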
- corresponding locks. The Submodule class now operates on parent commits; the configuration is either streamed from the repository or written directly into a blob (or file), depending on whether we have a working-tree checkout that matches our parent_commit.
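  A sketch of the two access paths, under the stated assumption that reading can stream from the parent commit while writing needs a matching working-tree checkout; the URL is hypothetical:

      from git import Repo

      repo = Repo(".")
      sm = repo.submodules[0]   # assumes at least one submodule

      # read-only access can stream .gitmodules from the parent commit's tree
      print(sm.config_reader().get_value("url"))

      # writing goes into the checked-out .gitmodules file
      writer = sm.config_writer()
      writer.set_value("url", "git://example.com/sub.git")
      writer.release()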
- usable. It showed that the ConfigParser needs some work. If the root is set, it also needs to refer to the root_commit instead of the root tree, as it will have to decide whether it works on the working tree's version of the .gitmodules file or the one in the repository.
- dependent on the setup of the surrounding repository; hence the number of ref types found is actually variable, as long as it only grows.
- has better testing for the use of paths during reset. The IndexFile now implements this on its own, which also allows for something equivalent to git-reset --hard -- <paths>, which is not possible in the git command, probably for some very good reason.
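  A sketch of the path-restricted reset, with a hypothetical path:

      from git import Repo

      repo = Repo(".")
      index = repo.index

      # reset only the given paths in the index to their state at HEAD ...
      index.reset(paths=["lib/module.py"])

      # ... or additionally force the working tree along for those paths,
      # the equivalent of "git reset --hard -- <paths>"
      index.reset(paths=["lib/module.py"], working_tree=True)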
- instead of the existing and valid one. The rest of the ConfigParser handling is correct, as it reads all configuration files available to git.
  See http://github.com/Byron/GitPython/issues#issue/1
- head.reset: will now handle resets with paths much better, especially in the --mixed case, see http://github.com/Byron/GitPython/issues#issue/2
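  A sketch of a --mixed reset restricted to paths; the path is hypothetical:

      from git import Repo

      repo = Repo(".")

      # index entries for the given paths move back to HEAD,
      # the working tree is left untouched
      repo.head.reset(commit="HEAD", index=True, working_tree=False,
                      paths=["README"])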
- Fixed a test which used the --force flag on move; apparently only the short version is left.
- previously, although there was absolutely no need for that.
  See http://byronimo.lighthouseapp.com/projects/51787/tickets/41-diff-regex-lib_git_diffpy-cannot-handle-paths-with-spaces
- into the byte stream, as well as decoded from it
- passed in
  test_odb: added more information to the message output
- http://byronimo.lighthouseapp.com/projects/51787/tickets/44-remoteref-fails-when-there-is-character-in-the-name using the supplied patch (which was manually applied).
  Fixed a slightly broken test for remote handling.
- using a / in a hardcoded fashion, leading to absolute paths where the caller expected relative ones
- The first one is faster, although I would have expected the latter to be faster.
- kind of issue doesn't pop up for anyone
- python 2.4, as the pure python implementation cannot work without memory maps
- as well as the previous instance method clone to keep it compatible
  Fixed small bug in test code
- partial_to_complete_sha_hex is working as expected with different input (it wasn't, of course ;))
- rev_parse could be adjusted not to return Objects anymore, providing better performance for those who just want a sha. On the other hand, the method is high-level and should be convenient to use as well; it is usually a starting point for more, hence it is unlikely to be called in tight loops.
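  For illustration, the object-returning behaviour discussed above; the rev-spec is hypothetical:

      from git import Repo

      repo = Repo(".")

      # rev_parse resolves a rev-spec to a full object rather than a bare
      # sha - convenient for follow-up work, at the cost of instantiation
      obj = repo.rev_parse("HEAD~1")
      print(obj.hexsha, obj.type)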
- thanks to new gitdb functionality
- Shas still to be done
- tests missing
- it's still rather slow and many tests are not yet implemented
- the rule of trying not to cache possibly heavy data. The data_stream method should be used instead
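  A sketch of the streamed access pattern, assuming the root tree contains at least one blob:

      from git import Repo

      repo = Repo(".")
      blob = repo.head.commit.tree.blobs[0]

      # data_stream hands out a fresh stream over the object's data,
      # so nothing heavy stays cached on the object itself
      stream = blob.data_stream
      print(stream.read(64))   # read only as much as you need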
- the submodule's naming conventions
- replaced them with a real test which actually executes code and puts everything into the tmp directory
- to using git-read-tree to keep the stat information when merging one tree in. After all, this is what needed to be implemented in Python as well.
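  For reference, the plumbing call via GitPython's command wrapper; the tree-ish is hypothetical:

      from git import Repo

      repo = Repo(".")

      # merge a single tree into the current index while keeping stat
      # information - the behaviour the python port had to reproduce
      repo.git.read_tree("-m", "HEAD^{tree}")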
- The default is to write the physical index, which is the behaviour you would expect
- was actually empty. This is a rare case that can happen during stream testing; theoretically there shouldn't be any empty streams of course, but in practice they do exist sometimes ;). Fixed the stream.seek implementation, which previously used seek on standard output.
  Improved GitCmd error handling
- not implemented, causing incorrect merge results. Added a test to cover this issue.
  Diff: added NULL_BIN_SHA constant for completeness
- including a simple test; it may be simple because the methods it uses are thoroughly tested
- in progress
- can do much more than we can (and presumably faster); the .new method is used to create new index instances from up to 3 trees.
  Implemented multi-tree traversal to facilitate building a stage list more efficiently (although I am not sure whether it could be faster to use a dictionary together with some intensive lookup), including a test.
  Added performance tests to learn how fast certain operations are, and whether one should be preferred over another.
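  A sketch of building an index from three trees, assuming IndexFile.new accepts tree objects and that the chosen revisions exist:

      from git import IndexFile, Repo

      repo = Repo(".")
      base = repo.commit("HEAD~2").tree     # hypothetical revisions
      ours = repo.commit("HEAD").tree
      theirs = repo.commit("HEAD~1").tree

      # with three trees the entries carry merge stages, as in a real merge
      index = IndexFile.new(repo, base, ours, theirs)
      for (path, stage), entry in index.entries.items():
          print(stage, path)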
- IO will only be done when required. A possible disadvantage, though, is that time is spent on compressing the trees, although only the raw data and their shas would theoretically be needed. On the other hand, compressing their data uses less memory. An optimal implementation would just sha the data, check for existence, and compress it only when writing it to the database. That would mean more specialized code though, introducing redundancy. If IStreams knew whether they contain compressed or uncompressed data, and if there were a method to get a sha from data, this would work nicely in the existing framework.
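  A sketch of the IStream round trip described above, with hypothetical blob data; note that the database compresses on store even in cases where only the sha would be needed:

      from io import BytesIO
      from gitdb import IStream
      from git import Repo

      repo = Repo(".")

      data = b"hello gitdb\n"
      istream = IStream("blob", len(data), BytesIO(data))

      # the object database shas, compresses and stores the stream,
      # filling in the binsha on the way
      repo.odb.store(istream)
      print(istream.hexsha)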