Commit log

Since we are working with indexes anyway, don't call enumerate() on a
slice of self.tokens, which copies that part of the list. Instead, just
generate the indexes using range() and use normal indexing to access the
desired tokens.

The old behavior resulted in quadratic runtime with respect to the
number of tokens, which significantly impacted performance for
statements with very many tokens. With the new behavior, the runtime is
linear in the number of tokens.
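
A minimal sketch of the two patterns (the helper names and `predicate`
argument are illustrative, not the actual sqlparse code): `tokens[start:]`
copies the tail of the list on every call, so when such scans are repeated
for each token, the copies alone add up to quadratic work.

```python
def find_match_slice(tokens, start, predicate):
    # Copies tokens[start:] into a new list on every call; repeated
    # once per token, the copies alone cost quadratic time.
    for offset, token in enumerate(tokens[start:]):
        if predicate(token):
            return start + offset
    return None


def find_match_range(tokens, start, predicate):
    # Same scan without the copy: generate the indexes with range()
    # and use normal indexing to access the desired tokens.
    for idx in range(start, len(tokens)):
        if predicate(tokens[idx]):
            return idx
    return None
```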
Python 2.7 and 3.4 are end-of-life. They are no longer receiving bug
fixes, including for security issues. Python 2.7 went EOL on 2020-01-01
and 3.4 on 2019-03-18. For additional details on supported Python
versions, see:

Supported: https://devguide.python.org/#status-of-python-branches
EOL: https://devguide.python.org/devcycle/#end-of-life-branches

Removing support for EOL Pythons will reduce testing and maintenance
overhead while allowing the library to move towards modern Python 3.
PyPI download statistics collected with pypinfo show that fewer than 10%
of downloads use Python 2.7:
| python_version | percent | download_count |
| -------------- | ------: | -------------: |
| 3.7 | 45.36% | 3,056,010 |
| 3.6 | 26.46% | 1,782,778 |
| 3.8 | 12.22% | 823,213 |
| 2.7 | 9.97% | 671,459 |
| 3.5 | 5.86% | 394,846 |
| 3.4 | 0.10% | 6,700 |
| 3.9 | 0.03% | 2,346 |
| 2.6 | 0.00% | 57 |
| 3.3 | 0.00% | 21 |
| 3.10 | 0.00% | 6 |
| Total | | 6,737,436 |
Library users who continue to use Python 2.7 will still be able to
install previous versions of sqlparse.
Compatibility shims have been dropped, simplifying the code.
Using pyupgrade <https://github.com/asottile/pyupgrade>, the codebase
has been updated to take advantage of modern syntax.
The wheel is no longer marked as "universal" as it is now Python 3 only.
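
For illustration, these are the kinds of rewrites pyupgrade applies once
Python 2 support is dropped (generic examples, not lines from the
sqlparse diff):

```python
# Before: Python 2 compatible spellings
class Filter(object):
    def __init__(self):
        super(Filter, self).__init__()
        self.name = u'default'


# After pyupgrade: the redundant forms are dropped on Python 3
class Filter:
    def __init__(self):
        super().__init__()
        self.name = 'default'
```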
This change adopts parts of pull request #509 by john-bodley. Thanks!
This patch fixes the formatting of INSERT statements by creating a new
instance of sql.Values to group all the values.
SQL: insert into foo values (1, 'foo'), (2, 'bar'), (3, 'baz')
Before:
insert into foo
values (1,
'foo'), (2,
'bar'), (3,
'baz')
After:
insert into foo
values (1, 'foo'),
(2, 'bar'),
(3, 'baz')
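
The reindented output can be reproduced through sqlparse's public API
(assuming a version that includes this change):

```python
import sqlparse

sql = "insert into foo values (1, 'foo'), (2, 'bar'), (3, 'baz')"
print(sqlparse.format(sql, reindent=True))  # prints the "After" form above
```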
Add a Having class that inherits from TokenList. This will make further
manipulation easier, since a HAVING clause contains multiple conditions,
just like a WHERE clause.
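
A minimal sketch of what such a class could look like, modeled on
sqlparse's existing Where class; the open/close markers below are
assumptions for illustration, not the actual patch:

```python
from sqlparse import tokens as T
from sqlparse.sql import TokenList


class Having(TokenList):
    """Groups the tokens of a HAVING clause, analogous to Where."""
    # Assumed clause delimiters; the real definition may differ.
    M_OPEN = T.Keyword, 'HAVING'
    M_CLOSE = T.Keyword, ('ORDER BY', 'LIMIT', 'UNION')
```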
* fix "WITH name" case (including a follow-up flake8 fix)
Working with non-ASCII text in Python requires an all-unicode approach,
but str literals in Python 2.7 are bytes. The patch makes them unicode.
The u'' syntax is supported in Python 2.7 and 3.3+.
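
A generic illustration of why the prefix matters (not a line from the
sqlparse diff):

```python
# -*- coding: utf-8 -*-
# On Python 2.7 a bare string literal is bytes; the u'' prefix makes it
# unicode. Python 3.3+ accepts the prefix again (PEP 414), where it is
# a no-op, so one spelling works on both.
name = u'sélect'
assert len(name) == 6  # six characters on 2.7 and 3.3+ alike
```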
If the value is Single it's already quoted with apostrophes. Avoid
doubled apostrophes in that case by using double quotes instead.
For example, if the value is 'value' the output is "'value'" instead of
''value''.
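
A minimal sketch of the quoting rule (the helper is illustrative, not
the actual patch):

```python
def quote_value(value, is_single_quoted):
    # Values that already carry apostrophes (ttype String.Single) are
    # wrapped in double quotes to avoid ''value''; everything else
    # keeps the default single quotes.
    if is_single_quoted:
        return '"{}"'.format(value)
    return "'{}'".format(value)


assert quote_value("'value'", True) == "\"'value'\""
assert quote_value("value", False) == "'value'"
```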
Both methods will now return the "next" token, not the token itself,
when passed their own index.
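
From context these are TokenList.token_next and TokenList.token_prev. A
sketch of the corrected behavior, assuming the index-and-token tuple API
of sqlparse 0.2+:

```python
import sqlparse

stmt = sqlparse.parse('select * from foo')[0]
idx, token = stmt.token_next(0)  # pass the index of the first token...
assert idx > 0                   # ...and get a later token back, not
                                 # stmt.tokens[0] itself
```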
Prevent calling token_index.
Prevent some more calls to token_index in group_identifier_list. They
are now all gone.
The method group_aliased was making a lot of calls to token_index. By
specializing token_next_by into token_idx_next_by, the calls to
token_index became superfluous.

Also use token_idx_next_by in group_identifier_list. It was making a lot
of calls, which are now reduced by more than half.
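
The gist of the specialization, as a sketch (only the helper names come
from the commit; the bodies below are illustrative): returning the index
alongside the token means callers no longer need token_index, a linear
search, to recover a token's position.

```python
def token_next_by(tlist, predicate):
    # Object-based: a caller that also needs the position must follow
    # up with tlist.token_index(token), a linear search over the list.
    for token in tlist.tokens:
        if predicate(token):
            return token
    return None


def token_idx_next_by(tlist, predicate, start=0):
    # Index-based: the position comes back together with the token.
    for idx in range(start, len(tlist.tokens)):
        token = tlist.tokens[idx]
        if predicate(token):
            return idx, token
    return None, None
```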
All the matching between open and close tokens used to be done
repeatedly: first find the matching closing token, then group the tokens
in between, then recurse over the newly created list. Instead, it is
more efficient to look for the previous open-token when a closing-token
is found, group those two together, and then continue on.

squashed: Handle token indices in group_tokens_between and find_matching.
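
A sketch of the single-pass idea, with simple is_open/is_close
predicates standing in for the real token checks:

```python
def group_matching(tokens, is_open, is_close, make_group):
    # Remember the indexes of open-tokens on a stack; when a close-token
    # appears, group it with the most recent open-token and continue,
    # instead of re-searching for the matching close of every open.
    opens = []
    idx = 0
    while idx < len(tokens):
        token = tokens[idx]
        if is_open(token):
            opens.append(idx)
        elif is_close(token) and opens:
            start = opens.pop()
            tokens[start:idx + 1] = [make_group(tokens[start:idx + 1])]
            idx = start  # resume just after the new group
        idx += 1
    return tokens
```

With this shape each open-token is pushed and popped at most once, so
nested groups no longer trigger repeated scans.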
When it is guaranteed that the tokens form a contiguous range, it is
possible to get rid of a lot of calls to `Token.tokens.remove(...)`,
which are expensive.
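
A sketch of the difference, assuming the grouped tokens occupy the
contiguous range start..end of the list (function names are
illustrative):

```python
def replace_with_group_slow(tokens, start, end, group):
    # O(k * n): every remove() scans the list and shifts the tail.
    for token in list(tokens[start:end + 1]):
        tokens.remove(token)
    tokens.insert(start, group)


def replace_with_group_fast(tokens, start, end, group):
    # O(n): one slice assignment swaps the whole range for the group.
    tokens[start:end + 1] = [group]
```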
Closes issue #226