sha1dc: update to fix endianness issues on AIX/HP-UX
Update our copy of sha1dc to the upstream commit 855827c (Detect
endianess on HP-UX, 2019-05-09). Changes include fixes to endian
detection on AIX and HP-UX systems as well as a define that
allows us to force aligned access, which we're not using yet.
When we send HTTP credentials but the server rejects them, tear down the
authentication context so that we can start fresh. To maintain this
state, additionally move all of the authentication handling into
`on_auth_required`.
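
A minimal sketch of that teardown, with hypothetical type and field names rather than libgit2's actual internals:

    #include <stddef.h>

    /* Hypothetical types; libgit2's real internals differ. */
    typedef struct auth_context {
        int credentials_sent;                  /* set once creds go out */
        void (*free)(struct auth_context *);   /* mechanism cleanup */
    } auth_context;

    typedef struct {
        auth_context *auth_context;
    } http_stream;

    static int on_auth_required(http_stream *t)
    {
        if (t->auth_context && t->auth_context->credentials_sent) {
            /* the server rejected what we sent: discard the context so
             * the next attempt starts from a clean slate */
            t->auth_context->free(t->auth_context);
            t->auth_context = NULL;
        }

        /* ...select a mechanism and ask the caller for credentials... */
        return 0;
    }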
When we're issuing a CONNECT to a proxy, we expect the connection to the
proxy to be kept alive. However, during authentication negotiations, the
proxy may close the connection. Reconnect if the proxy closes the
connection.
When we have a keep-alive connection to the server, that server may
legally drop the connection for any reason once a successful request and
response have occurred. It's common for servers to drop the connection
after some amount of time has passed or some number of requests has been
served.
We stop the read loop when we have read all the data. We should also
consider the server's feelings.
If the server hangs up on us, we need to stop our read loop. Otherwise,
we'll try to read from the server - and fail - ad infinitum.
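
The shape of such a loop, sketched here with plain POSIX recv() rather than the transport's real read function:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Read until the message is complete OR the server hangs up; treating
     * recv() == 0 (orderly shutdown) as a stop condition is what prevents
     * the endless read-and-fail loop. */
    static int read_response(int fd, char *buf, size_t bufsize, int *complete)
    {
        while (!*complete) {
            ssize_t n = recv(fd, buf, bufsize, 0);

            if (n < 0)
                return -1;  /* hard error */
            if (n == 0)
                break;      /* server closed the connection: stop reading */

            /* ...feed these n bytes to the message parser, which sets
             * *complete once the response is fully parsed... */
        }

        return 0;
    }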
Instead of using `is_complete` to decide whether we have connection or
request affinity for authentication mechanisms, set a boolean on the
mechanism definition itself.
For request-based authentication mechanisms (Basic, Digest) we should
keep the authentication context alive across socket connections, since
the authentication headers must be transmitted with every request.
However, we should continue to remove authentication contexts for
mechanisms with connection affinity (NTLM, Negotiate) since we need to
reauthenticate for every socket connection.
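
A sketch of how a per-mechanism affinity flag (as introduced in the previous commit) might drive this eviction; all names here are invented for illustration:

    #include <stddef.h>

    /* Hypothetical mechanism descriptor carrying the affinity flag. */
    typedef struct {
        const char *name;
        int connection_affinity;  /* 1: NTLM/Negotiate, 0: Basic/Digest */
    } auth_mechanism;

    typedef struct auth_context {
        const auth_mechanism *mechanism;
        void (*free)(struct auth_context *);
    } auth_context;

    /* Called when the underlying socket is closed. */
    static auth_context *on_socket_close(auth_context *ctx)
    {
        if (ctx && ctx->mechanism->connection_affinity) {
            /* NTLM/Negotiate authenticate a connection: a new socket
             * means we must renegotiate from scratch. */
            ctx->free(ctx);
            return NULL;
        }

        /* Basic/Digest authenticate each request: keep the context so
         * the headers can be resent on the new connection. */
        return ctx;
    }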
Hold an individual authentication context instead of trying to maintain
all the contexts; we can select the preferred context during the initial
negotiation.
Subsequent authentication steps will re-use the chosen authentication
mechanism (until such time as it's rejected) instead of trying to manage
multiple contexts when all but one will never be used (since we can only
authenticate with a single mechanism at a time).
Also, when we're given a 401 or 407 in the middle of challenge/response
handling, short-circuit immediately without incrementing the retry
count. The multi-step authentication is expected, not a "retry", and
should not be penalized as such.
This means that we don't need to keep the contexts around, and it
ensures that we do not unnecessarily fail for too many retries when we
have challenge/response auth on a proxy and a server, and potentially
redirects in play as well.
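
The retry accounting this describes, condensed into a sketch with invented names and an illustrative limit:

    enum auth_state { AUTH_NONE, AUTH_IN_PROGRESS };

    #define REPLAY_MAX 7  /* illustrative limit */

    /* Returns 0 to replay the request, -1 to give up. */
    static int on_auth_challenge(enum auth_state *state, int *replay_count)
    {
        if (*state == AUTH_IN_PROGRESS) {
            /* A 401/407 mid-handshake is the expected next step of a
             * challenge/response scheme, not a failure: short-circuit
             * without touching the retry counter. */
            return 0;
        }

        if (++(*replay_count) >= REPLAY_MAX)
            return -1;  /* genuinely out of retries */

        *state = AUTH_IN_PROGRESS;
        return 0;
    }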
A "connection" to a server is transient, and we may reconnect to a
server in the midst of authentication failures (if the remote indicates
that we should, via `Connection: close`) or in a redirect.
Include https://github.com/ethomson/ntlmclient as a dependency.
Ensure that the server supports the particular credential type that
we're specifying. Previously we considered credential types as an
input to an auth mechanism - since the HTTP transport only supported
default credentials (via negotiate) and username/password credentials
(via basic), this worked. However, if we are to add another mechanism
that uses username/password credentials, we'll need to be careful to
identify the types that are accepted.
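
Since credential types are a bitmask in libgit2's public API, the check amounts to intersecting the types a mechanism consumes with the types the caller can provide. A sketch using the GIT_CREDTYPE_* flags of this era (the mechanism struct itself is hypothetical):

    #include <stddef.h>
    #include <git2/transport.h>  /* GIT_CREDTYPE_* bit flags */

    /* Hypothetical mechanism descriptor. */
    typedef struct {
        const char *name;
        unsigned int credtypes;  /* credential types the mechanism consumes */
    } auth_mechanism;

    /* Pick the first mechanism that can use a credential type the caller
     * is actually able to provide. */
    static const auth_mechanism *select_mechanism(
        const auth_mechanism *mechanisms, size_t count,
        unsigned int allowed_credtypes)
    {
        size_t i;

        for (i = 0; i < count; i++)
            if (mechanisms[i].credtypes & allowed_credtypes)
                return &mechanisms[i];

        return NULL;  /* nothing both sides support */
    }

In this scheme, Basic would advertise GIT_CREDTYPE_USERPASS_PLAINTEXT and Negotiate GIT_CREDTYPE_DEFAULT; a new username/password mechanism would simply advertise the former as well.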
We must always consume the full parser body if we're going to keep the
connection alive. So in the authentication failure case, continue
advancing the http message parser until it's complete; then we can retry
the connection.
Not doing so would mean that we have to tear the connection down and
start over. Advancing through the message fully (even though we don't
use the data) ensures that we can retry the connection with keep-alive.
When we get an authentication failure, we must consume the entire body
of the response. If we only read half of the body (on the assumption
that we can ignore the rest) then we will never complete the parsing of
the message. This means that we will never set the complete flag, and
our replay must actually tear down the connection and try again.
This is particularly problematic for stateful authentication mechanisms
(SPNEGO, NTLM) that require that we keep the connection alive.
Note that the prior code is only a problem when the 401 that we are
parsing is too large to be read in a single chunked read from the http
parser.
Now we will continue to invoke the http parser until we've got a
complete message in the authentication-failed scenario. Note that we
need not do anything with the message: when we get an authentication
failure, we stop adding data to our buffer and simply loop in the
parser, letting it advance its internal state.
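
With the vendored http-parser library this draining could look roughly as follows: discard the body in on_body and keep feeding buffered bytes until on_message_complete fires. A simplified sketch, not libgit2's actual code:

    #include "http_parser.h"

    static int on_body(http_parser *p, const char *at, size_t len)
    {
        (void)p; (void)at; (void)len;
        return 0;  /* auth failed: throw the body away, just advance state */
    }

    static int on_message_complete(http_parser *p)
    {
        *(int *)p->data = 1;  /* mark the message as fully parsed */
        return 0;
    }

    /* Feed already-buffered data to the parser until the 401/407 response
     * has been consumed in full, so the connection can be reused. */
    static int drain_response(http_parser *parser, const char *buf, size_t len)
    {
        http_parser_settings settings = {0};
        int complete = 0;
        size_t used = 0;

        settings.on_body = on_body;
        settings.on_message_complete = on_message_complete;
        parser->data = &complete;

        while (!complete && used < len) {
            size_t n = http_parser_execute(parser, &settings,
                                           buf + used, len - used);
            if (parser->http_errno != HPE_OK)
                return -1;
            used += n;
        }

        return complete ? 0 : 1;  /* 1: need more data from the socket */
    }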
Some authentication mechanisms (like HTTP Basic and Digest) have a
one-step mechanism to create credentials, but there are more complex
mechanisms like NTLM and Negotiate that involve a challenge/response
exchange after negotiation, requiring several round-trips. Add an
`is_complete` function to know when a mechanism has round-tripped enough
to constitute a single authentication attempt and should now either have
succeeded or failed to authenticate.
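
One plausible shape for this hook, modeled on the description above (the structure is hypothetical):

    /* Hypothetical mechanism vtable entry. */
    typedef struct auth_context {
        /* Has this context gone through enough round-trips to call the
         * authentication attempt finished (successfully or not)? */
        int (*is_complete)(struct auth_context *);
        int challenges_seen;
    } auth_context;

    /* Basic/Digest: complete after the single credential-bearing request. */
    static int one_step_is_complete(auth_context *ctx)
    {
        (void)ctx;
        return 1;
    }

    /* NTLM: negotiate -> challenge -> authenticate, i.e. complete only
     * after the full three-legged exchange. */
    static int ntlm_is_complete(auth_context *ctx)
    {
        return ctx->challenges_seen >= 2;
    }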
We cannot examine the keep-alive status of the http parser in
`http_connect`; it's too late, and the critical information about
whether keep-alive is supported has been destroyed.
Per the documentation for `http_should_keep_alive`:
> If http_should_keep_alive() in the on_headers_complete or
> on_message_complete callback returns 0, then this should be
> the last message on the connection.
Query the status in those callbacks and save the state there.
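
Concretely, with http-parser that means capturing the answer inside the callback; the state struct here is illustrative:

    #include "http_parser.h"

    typedef struct {
        int keepalive;  /* captured before the parser state is reset */
    } response_state;

    static int on_message_complete(http_parser *parser)
    {
        response_state *s = parser->data;

        /* Per the http-parser docs, this is the last moment at which
         * http_should_keep_alive() gives a meaningful answer. */
        s->keepalive = http_should_keep_alive(parser);
        return 0;
    }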
Increase the permissible replay count; with multiple-step authentication
schemes (NTLM, Negotiate), proxy authentication and redirects, we need
to be mindful of the number of steps it takes to get connected.
Seven seems high, but it can be exhausted quickly with just a single
authentication failure over a redirected multi-step authentication
pipeline.
We did not properly support default credentials for proxies, only for
destination servers. Refactor the credential handling to support sending
either username/password _or_ default credentials to either the proxy or
the destination server.
This actually shares the authentication logic between proxy servers and
destination servers. Due to copy/pasta drift over time, they had
diverged. Now they share a common logic, which is: first, use
credentials specified in the URL (if there were any), treating an empty
username and password (i.e., "http://:@foo.com/") as default
credentials, for compatibility with git. Next, call the credential
callbacks. Finally, fall back to WinHTTP compatibility layers using
built-in authentication like we always have.
Allowing default credentials for proxies requires moving the security
level downgrade into the credential setting routines themselves.
We will update our security level to "high" by default, which means that
we will never send default credentials without prompting. (A lower
setting, like the WinHTTP default of "medium", would allow WinHTTP to
handle credentials for us, despite what a user may have requested with
their structures.) Now we start with "high" and downgrade to "low" only
after a user has explicitly requested default credentials.
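
On the WinHTTP side this maps to the real WINHTTP_OPTION_AUTOLOGON_POLICY request option. A sketch of the downgrade-only-on-request behavior (error handling elided):

    #include <windows.h>
    #include <winhttp.h>

    /* Start at "high": never send default credentials implicitly.
     * Downgrade to "low" only after the caller explicitly supplied
     * default credentials. */
    static BOOL set_autologon_policy(HINTERNET request,
                                     BOOL default_creds_requested)
    {
        DWORD level = default_creds_requested ?
            WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW :
            WINHTTP_AUTOLOGON_SECURITY_LEVEL_HIGH;

        return WinHttpSetOption(request,
            WINHTTP_OPTION_AUTOLOGON_POLICY, &level, sizeof(level));
    }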
"Connection data" is an imprecise and largely incorrect name; these
structures are actually parsed URLs. Provide a parser that takes a URL
string and produces a URL structure (if it is valid).
Separate the HTTP redirect handling logic from URL parsing, keeping a
`gitno_connection_data_handle_redirect` whose only job is redirect
handling logic and does not parse URLs itself.
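
Assuming the internal `git_net_url` API this series introduces (treat the exact names as illustrative), usage looks roughly like:

    #include "net.h"  /* libgit2-internal URL API from this series */

    static int show_parsed_url(void)
    {
        git_net_url url = GIT_NET_URL_INIT;
        int error;

        if ((error = git_net_url_parse(&url,
                "https://user@example.com/repo.git")) < 0)
            return error;

        /* url.scheme, url.host, url.port, url.path and url.username are
         * now populated; redirect handling happens in a separate step. */

        git_net_url_dispose(&url);
        return 0;
    }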
The trace logging callbacks should match the other callback naming
conventions, using the `_cb` suffix instead of a `_callback` suffix.
The credential callbacks should match the other callback naming
conventions, using the `_cb` suffix instead of a `_callback` suffix.
ignore: handle escaped trailing whitespace
The gitignore pattern format specifies that "Trailing spaces
are ignored unless they are quoted with backslash ("\")". We do
not currently honor this and will treat a pattern "foo\ " as if
it were just "foo\", and a pattern "foo\ \ " as "foo\ \".
Fix our code to handle those special cases and add tests to avoid
regressions.
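
The rule can be captured in a small standalone helper (a sketch, not libgit2's actual parser): a trailing space survives only if an odd number of backslashes immediately precedes it.

    #include <string.h>

    /* Trim unescaped trailing spaces from `pattern` in place. */
    static void trim_unescaped_trailing_spaces(char *pattern)
    {
        size_t len = strlen(pattern);

        while (len > 0 && pattern[len - 1] == ' ') {
            /* count the backslashes immediately before this space */
            size_t backslashes = 0;

            while (backslashes + 1 < len &&
                   pattern[len - 2 - backslashes] == '\\')
                backslashes++;

            /* an odd number of backslashes escapes the space: keep it */
            if (backslashes % 2 == 1)
                break;

            len--;
        }

        pattern[len] = '\0';
    }

With this, "foo " trims to "foo", while "foo\ " and "foo\ \ " are left alone for the unescaping step.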
The stripping of trailing spaces currently happens as part of
`git_attr_fnmatch__parse`. As we aren't currently parsing
trailing whitespace correctly when it's escaped, we'll have
to change that code, though. To make the actual behavioural change
easier to review, refactor the code up-front by pulling it out
into its own function that is expected to retain exactly the same
functionality as before. This way, the fix will be trivial to
apply.
Ignore: only treat one leading slash as a root identifier
For compatibility with git, only skip the first leading slash in an
ignore file. That is: `/a.txt` indicates to ignore a file named `a.txt`
at the root. However, `//b.txt` does not indicate that a file named
`b.txt` at the root should be ignored.
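
In parsing terms the fix is to consume at most one anchoring slash; a hypothetical sketch:

    /* Hypothetical flag; libgit2's real parser uses its own fnmatch flags. */
    #define PATTERN_ANCHORED (1 << 0)

    static const char *parse_leading_slash(const char *pattern,
                                           unsigned *flags)
    {
        if (*pattern == '/') {
            *flags |= PATTERN_ANCHORED;  /* "/a.txt": a.txt at the root */
            pattern++;                   /* skip exactly one slash */
        }

        /* a remaining slash stays in the pattern, so "//b.txt" yields
         * "/b.txt", which can never match a workdir entry */
        return pattern;
    }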
When `allow_space` is unset, ensure that leading whitespace is not
skipped.
When evicting cache entries, we first retrieve the object that is
to be evicted, delete the object and then finally delete the key
from the cache. In cases where the cache eviction causes us to
free the cached object, though, its key will point to invalid
memory when we then try to remove it from the cache map. On my
system, this causes us to not properly remove the key from the
map: as its memory has been overwritten already, the key lookup
will fail and we cannot delete it.
Fix this by only decrementing the refcount of the evictee after
we have removed it from our cache map. Add a test that caused a
segfault prior to this change.
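
The essence of the ordering fix, with invented helper names standing in for libgit2's cache and map internals:

    /* Invented helpers: a refcounted cache entry whose key (the oid)
     * is stored inside the entry itself. */
    typedef struct {
        unsigned char oid[20];
        int refcount;
    } cache_entry;

    void map_remove(void *map, const unsigned char *oid);  /* hypothetical */
    void entry_decref(cache_entry *entry);                 /* hypothetical */

    static void evict(void *map, cache_entry *entry)
    {
        /* Wrong order: calling entry_decref() first may free `entry`,
         * leaving the map to hash and compare a dangling key, so the
         * removal can silently fail. */

        /* Right order: unlink while the key memory is still valid,
         * then drop the reference. */
        map_remove(map, entry->oid);
        entry_decref(entry);
    }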
See: https://www.netbsd.org/changes/changes-7.0.html
`error` was never initialized, so a garbage value was returned on success.
The `parse_section_header_ext` name suggests that it is an extended
function for parsing the section header. It is not. Rename it to
`parse_subsection_header` to better reflect its true mission.
When we reach whitespace after a section name, we assume that what
will follow is a quoted subsection name. Pass the current position
of the line being parsed to the subsection parser, so that it can
validate that subsequent characters are additional whitespace or a
single quotation mark.
Previously we would begin parsing after the section name, looking for
the first quotation mark. This allowed invalid characters to embed
themselves between the end of the section name and the first quotation
mark, e.g. `[section foo "subsection"]`, which is illegal.
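
The added validation, sketched as a standalone helper: after the section name, only whitespace may precede the opening quote.

    /* Return the index of the opening quote, or -1 if anything other
     * than whitespace precedes it (e.g. `[section foo "subsection"]`). */
    static int find_subsection_quote(const char *line, int pos)
    {
        while (line[pos] == ' ' || line[pos] == '\t')
            pos++;

        return (line[pos] == '"') ? pos : -1;
    }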
When we don't specify a particular column, don't write it in the error
message (column "0" is unhelpful).
Update the configuration parsing error messages to be lower-cased for
consistency with the rest of the library.
Loosen restriction on wildcard "*" refspecs
When we transform a refspec with a component containing a glob, we
simply copy over the component until the next separator from
the matching ref. E.g. if we have a ref "refs/heads/foo/bar" and
a refspec "refs/heads/*/bar:refs/remotes/origin/*/bar", we:
1. Copy over everything until hitting the glob from the <dst>
part: "refs/remotes/origin/".
2. Strip the common prefix of ref and <src> part until the glob,
which is "refs/heads/". This leaves us with a ref of "foo/bar".
3. Copy from the ref until the next "/" separator, resulting in
"refs/remotes/origin/foo".
4. Copy over the remaining part of the <dst> spec, which is
"bar", resulting in "refs/remotes/origin/foo/bar".
This worked just fine in a world where globs in refspecs were
restricted such that a globbing component could contain only a
single "*". But this restriction has been lifted, so that a
glob may be nested between other characters in a component, causing
the above algorithm to fail. Most notably the third step, where
we copy until hitting the next "/" separator, might result in a
wrong transformation. Given e.g. a ref "refs/gbranchg/head" and a
refspec "refs/g*g/head:refs/remotes/origin/*", we'd also copy
the "g" between "branch" and "/" and end up with the wrong
transformed ref "refs/remotes/origin/branchg".
Instead of copying until the next component separator, we should
copy until we hit the pattern after the "*". So in the above
example, we'd copy until hitting the string "g/head".
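
A standalone sketch of the corrected transformation (not libgit2's actual `git_refspec_transform`): the glob match is whatever lies between the <src> text before the "*" and the <src> pattern after it.

    #include <stdio.h>
    #include <string.h>

    /* Transform `ref`, which matches the glob refspec side `src`, into
     * the corresponding `dst` name. The glob match is found by stripping
     * the <src> prefix before '*' and the <src> pattern after it, rather
     * than scanning for the next '/'. */
    static int transform_refspec(char *out, size_t outlen,
                                 const char *ref,
                                 const char *src, const char *dst)
    {
        const char *src_star = strchr(src, '*');
        const char *dst_star = strchr(dst, '*');
        size_t src_prefix, src_suffix, ref_len, match_len;

        if (!src_star || !dst_star)
            return -1;

        src_prefix = (size_t)(src_star - src);
        src_suffix = strlen(src_star + 1);
        ref_len = strlen(ref);

        /* the ref must start with the text before '*' and end with the
         * pattern after it; whatever lies between is the glob match */
        if (ref_len < src_prefix + src_suffix ||
            strncmp(ref, src, src_prefix) != 0 ||
            strcmp(ref + ref_len - src_suffix, src_star + 1) != 0)
            return -1;

        match_len = ref_len - src_prefix - src_suffix;

        if (snprintf(out, outlen, "%.*s%.*s%s",
                     (int)(dst_star - dst), dst,
                     (int)match_len, ref + src_prefix,
                     dst_star + 1) >= (int)outlen)
            return -1;

        return 0;
    }

Running this on "refs/gbranchg/head" with "refs/g*g/head:refs/remotes/origin/*" yields "refs/remotes/origin/branch", as intended.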
In commit cd377f45c9 (refs: loosen restriction on wildcard "*"
refspecs, 2015-07-22) in git.git, the restrictions on wildcard
"*" refspecs were loosened. While wildcards were previously
only allowed if the component was a single "*", this was changed
to also accept other patterns as part of the component.
We never adapted to that change and still reject any wildcard
patterns that aren't a single "*". Update our tests to
reflect the upstream change and adjust our own code accordingly.
Use PCRE for our fallback regex engine when regcomp_l is unavailable
This avoids any misunderstanding with the REGEX keyword in CMake.
Use PCRE2 and its POSIX compatibility layer if requested by the user.
Although PCRE2 is adequate for our needs, the PCRE2 POSIX layer as
installed on Debian and Ubuntu systems is broken, so we do not opt-in to
it by default to avoid breaking users on those platforms.
Attempt to locate a system-installed version of PCRE and use its POSIX
compatibility layer, if possible.
Users can now select which regex implementation they want to use: one of
the system `regcomp_l`, the system PCRE, the builtin PCRE, or the
system's `regcomp`.
By default the system `regcomp_l` will be used if it exists; otherwise
the system PCRE will be used. If neither of those exists, then the
builtin PCRE implementation will be used.
The system's `regcomp` is not used by default due to problems with
locales.
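
The locale trouble is why `regcomp_l` is preferred when available: it pins the locale explicitly instead of inheriting the user's environment. On platforms that have it (a BSD/macOS extension), usage looks like this sketch:

    #include <locale.h>
    #include <regex.h>
    #include <xlocale.h>  /* BSD/macOS: regcomp_l() and locale_t */

    static locale_t c_locale;

    /* Compile a pattern under an explicit "C" locale, so the user's
     * LC_* settings cannot change how character ranges behave. */
    static int compile_pattern(regex_t *re, const char *pattern)
    {
        if (c_locale == (locale_t)0 &&
            (c_locale = newlocale(LC_ALL_MASK, "C",
                                  (locale_t)0)) == (locale_t)0)
            return -1;

        return regcomp_l(re, pattern, REG_EXTENDED, c_locale) == 0 ? 0 : -1;
    }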
Move some win32 type definitions to a standalone file so that they can
be included before other header files try to use the definitions.
When searching for a configuration key for the diff driver, we construct
the config key by modifying a buffer and then passing it to
`git_config_get_multivar_foreach`. We do not check, though, whether the
modification of the buffer actually succeeded, so we could in theory end
up passing an out-of-memory buffer to the config function.
Fix that by checking return codes. While at it, switch to
`git_buf_PUTS` to avoid repeating the appended string to calculate
its length.
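
In terms of libgit2's internal buffer API, the fix is to check every append and let the `git_buf_PUTS` macro supply the literal's length at compile time. A sketch; the exact key construction is illustrative:

    #include "buffer.h"  /* libgit2-internal git_buf API */

    /* Build "diff.<driver>.xfuncname", checking every append so an
     * out-of-memory buffer is never handed to the config lookup. */
    static int build_driver_key(git_buf *name, const char *driver)
    {
        int error;

        if ((error = git_buf_PUTS(name, "diff.")) < 0 ||
            (error = git_buf_puts(name, driver)) < 0 ||
            (error = git_buf_PUTS(name, ".xfuncname")) < 0)
            return error;

        return 0;
    }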
Use PCRE 8.42 as the builtin regex implementation, using its POSIX
compatibility layer. PCRE uses ASCII by default, and the user's locale
will not influence its behavior, so its `regcomp` implementation is
similar to `regcomp_l` with a C locale.