path: root/sqlparse/sql.py
Commit log (message, author, date, files changed, lines -/+):
* Fix get_type with comments between WITH keyword (Shikanime Deva, 2023-01-04; 1 file, -13/+14)
* Refactor to reduce redundant code. (Daniel Harding, 2022-08-08; 1 file, -10/+7)
* Don't make slice copies in TokenList._token_matching(). (Daniel Harding, 2022-08-08; 1 file, -1/+4)
  Since we are working with indexes anyway, don't bother calling enumerate() with a slice from self.tokens (which requires copying memory). Instead, just generate the indexes using range() and use normal indexing to access the desired tokens. The old behavior resulted in quadratic runtime with respect to the number of tokens, which significantly impacted performance for statements with very large numbers of tokens. With the new behavior, the runtime is now linear with respect to the number of tokens.
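The slice-versus-range distinction described in the commit body can be sketched on a simplified stand-in class (hypothetical `TokenList` and method names for illustration, not sqlparse's actual implementation):

```python
class TokenList:
    """Simplified stand-in for a token container (illustration only)."""

    def __init__(self, tokens):
        self.tokens = tokens

    def _token_matching_old(self, funcs, start=0):
        # Old approach: slicing self.tokens copies the tail of the list on
        # every call, so repeated scans cost O(n) each -> O(n^2) overall.
        for idx, token in enumerate(self.tokens[start:], start=start):
            for func in funcs:
                if func(token):
                    return idx, token
        return None, None

    def _token_matching(self, funcs, start=0):
        # New approach: generate indexes with range() and index directly;
        # no slice copy is made, so each scan is a plain linear walk.
        for idx in range(start, len(self.tokens)):
            token = self.tokens[idx]
            for func in funcs:
                if func(token):
                    return idx, token
        return None, None


def is_comma(tok):
    return tok == ","

tl = TokenList(["SELECT", "a", ",", "b", "FROM", "t"])
# Both variants find the same token; only the memory behavior differs.
assert tl._token_matching([is_comma]) == (2, ",")
assert tl._token_matching_old([is_comma]) == (2, ",")
```

Both methods return the same `(index, token)` pair; the payoff of the range-based version is purely in avoiding the per-call slice copy.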
* Update copyright notice. (Andi Albrecht, 2020-10-07; 1 file, -1/+1)
* Remove support for end-of-life Pythons (Jon Dufresne, 2020-08-31; 1 file, -20/+14)
  Python 2.7 and 3.4 are end-of-life. They are no longer receiving bug fixes, including for security issues. Python 2.7 went EOL on 2020-01-01 and 3.4 on 2019-03-18. For additional details on supported Python versions, see:
  Supported: https://devguide.python.org/#status-of-python-branches
  EOL: https://devguide.python.org/devcycle/#end-of-life-branches
  Removing support for EOL Pythons will reduce testing and maintenance resources while allowing the library to move towards modern Python 3. Using pypinfo, we can show the PyPI download statistics, where less than 10% of users are using Python 2.7.

  | python_version | percent | download_count |
  | -------------- | ------: | -------------: |
  | 3.7            |  45.36% |      3,056,010 |
  | 3.6            |  26.46% |      1,782,778 |
  | 3.8            |  12.22% |        823,213 |
  | 2.7            |   9.97% |        671,459 |
  | 3.5            |   5.86% |        394,846 |
  | 3.4            |   0.10% |          6,700 |
  | 3.9            |   0.03% |          2,346 |
  | 2.6            |   0.00% |             57 |
  | 3.3            |   0.00% |             21 |
  | 3.10           |   0.00% |              6 |
  | Total          |         |      6,737,436 |

  Library users who continue to use Python 2.7 will still be able to install previous versions of sqlparse. Compatibility shims have been dropped, simplifying the code. Using pyupgrade, the codebase has been updated to take advantage of modern syntax <https://github.com/asottile/pyupgrade>. The wheel is no longer marked as "universal" as it is now Python 3 only.
* [fix] Fixing typed literal regression (John Bodley, 2020-02-02; 1 file, -3/+2)
* Update sql.py (John Bodley, 2020-01-20; 1 file, -1/+2)
* Update sql.py (John Bodley, 2020-01-20; 1 file, -1/+1)
* [sql] Adding TIMESTAMP to typed literal (John Bodley, 2020-01-20; 1 file, -1/+1)
* Code cleanup. (Andreas Albrecht, 2019-10-20; 1 file, -11/+11)
* support typed literals (if that's what they're called) (Dvořák Václav, 2019-10-20; 1 file, -0/+7)
* [sql] Fix TokenList.__init__ when no tokens are provided (John Bodley, 2019-10-09; 1 file, -1/+1)
* Restrict detection of alias names (fixes #455). (Andreas Albrecht, 2019-10-09; 1 file, -15/+27)
  This change adopts some parts of pull request #509 by john-bodley. Thanks!
* Avoid formatting of psql commands (fixes #469). (Andi Albrecht, 2019-03-11; 1 file, -0/+4)
* [tokenizer] Grouping GROUP/ORDER BY (John Bodley, 2019-03-10; 1 file, -2/+2)
* Fix formatting on INSERT (fixes #329) (Fredy Wijaya, 2019-03-10; 1 file, -0/+4)
  This patch fixes the formatting on INSERT by creating a new instance of sql.Values to group all the values.
  SQL: insert into foo values (1, 'foo'), (2, 'bar'), (3, 'baz')
  Before: insert into foo values (1, 'foo'), (2, 'bar'), (3, 'baz')
  After: insert into foo values (1, 'foo'), (2, 'bar'), (3, 'baz')
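The grouping idea can be sketched on plain strings: everything from VALUES onward becomes one child node, which a formatter can then lay out as a unit. This is a hypothetical `group_values` helper over a list of strings, not sqlparse's actual Token-based grouping:

```python
def group_values(tokens):
    """Collect everything from the VALUES keyword onward into a single
    ('Values', [...]) node, mimicking how a grouper wraps the value
    tuples so a formatter can treat them as one unit.
    (Illustrative sketch; sqlparse operates on Token objects.)"""
    for i, tok in enumerate(tokens):
        if tok.upper() == "VALUES":
            return tokens[:i] + [("Values", tokens[i:])]
    return tokens


tokens = ["insert", "into", "foo", "values", "(1, 'foo')", ",", "(2, 'bar')"]
grouped = group_values(tokens)
assert grouped[:3] == ["insert", "into", "foo"]
assert grouped[3][0] == "Values"          # all value tuples now live in one node
```

With the values held in one node, a reindenting formatter can place each tuple on its own line instead of treating commas between tuples like ordinary expression separators.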
* Code cleanup. (Andreas Albrecht, 2019-03-10; 1 file, -4/+6)
* Revamped pprint_tree (MrVallentin, 2019-01-07; 1 file, -4/+8)
* Update copyright header (fixes #372). (Andi Albrecht, 2018-07-31; 1 file, -1/+2)
* Fix Failing Build - Flake8 (Kevin Boyette, 2018-07-28; 1 file, -2/+2)
* Added HAVING class (slickholms, 2018-07-08; 1 file, -0/+6)
  Added a class named Having that inherits from TokenList. This makes further manipulation easier, since a HAVING clause contains multiple conditions, just like a WHERE clause.
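The subclassing pattern the commit describes is tiny; a minimal sketch with a simplified stand-in `TokenList` (the real base class in sqlparse.sql has far more behavior):

```python
class TokenList:
    """Simplified stand-in for sqlparse.sql.TokenList: a node that owns
    a list of child tokens."""

    def __init__(self, tokens=None):
        self.tokens = tokens or []


class Having(TokenList):
    """A HAVING clause grouped as one node. Inheriting from TokenList
    means the clause's conditions become child tokens that can be
    inspected or rewritten as a unit, mirroring how WHERE is modeled."""


having = Having(["HAVING", "count(*)", ">", "1"])
assert isinstance(having, TokenList)
assert having.tokens[0] == "HAVING"
```

The value of such an empty-bodied subclass is in grouping and in `isinstance` checks: once the grouper wraps a clause in `Having`, downstream passes can find and treat the whole clause as a single token.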
* fix "WITH name" case (#418) (andrew deryabin, 2018-07-08; 1 file, -1/+2)
  * fix "WITH name" case
  * fix "WITH name" case (flake8 fix)
* Fix issue with get_real_name returning incorrect name (Fredy Wijaya, 2018-03-21; 1 file, -3/+4)
* Fix typos (Victor Uriarte, 2017-11-29; 1 file, -3/+3)
* Fix parsing of UNION ALL after WHERE (fixes #349). (Andi Albrecht, 2017-07-29; 1 file, -2/+3)
* Fix parsing of INTO keyword in WHERE clauses (fixes #324). (Andi Albrecht, 2017-03-02; 1 file, -1/+1)
* Correct license link (fixes #288). (Andi Albrecht, 2016-09-14; 1 file, -1/+1)
* Convert string literals to unicode for Py27 (Oleg Broytman, 2016-08-31; 1 file, -6/+6)
  Working with non-ascii text in Python requires an all-unicode approach, but str literals in Python 2.7 are bytes. The patch makes them unicode. The u'' syntax is supported in Python 2.7 and 3.3+.
* Unify_naming_schema. Closes #283 (Victor Uriarte, 2016-08-22; 1 file, -19/+12)
* Clean-up quoting (Victor Uriarte, 2016-08-22; 1 file, -8/+4)
* Avoid double apostrophes (Oleg Broytman, 2016-08-06; 1 file, -2/+11)
  If the value is Single, it's already quoted with apostrophes. Avoid double apostrophes in that case by using double-quotes instead. For example, if the value is 'value' the output is "'value'" instead of ''value''.
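The quoting rule from the commit body fits in a few lines; a sketch with a hypothetical `quote_value` helper (not sqlparse's actual function):

```python
def quote_value(value):
    """If the value already carries apostrophes, wrap it in double
    quotes instead of piling up single quotes; otherwise quote it with
    apostrophes as usual. (Illustrative sketch of the rule described
    in the commit, with an assumed helper name.)"""
    if value.startswith("'") and value.endswith("'"):
        return '"%s"' % value
    return "'%s'" % value


assert quote_value("value") == "'value'"        # plain value: apostrophes
assert quote_value("'value'") == "\"'value'\""  # already quoted: double quotes
```

The second case is exactly the example in the commit message: a Single token 'value' renders as "'value'" rather than the ambiguous ''value''.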
* Returning clause ends where clause (Darik Gamble, 2016-06-25; 1 file, -1/+1)
* token_next shouldn't ignore skip_cm (Darik Gamble, 2016-06-20; 1 file, -19/+7)
* Make use of token_index more obvious (Victor Uriarte, 2016-06-15; 1 file, -10/+3)
* Normalize behavior between token_next and token_next_by (Victor Uriarte, 2016-06-15; 1 file, -1/+2)
  Both will now return the "next" token, and not the token itself, when passed their own index.
* Rename token_idx_ funcs to simply token_ funcs (Victor Uriarte, 2016-06-15; 1 file, -23/+23)
* Remove functions no-longer used (Victor Uriarte, 2016-06-15; 1 file, -51/+0)
* Change token_ funcs to token_idx funcs (Victor Uriarte, 2016-06-15; 1 file, -26/+38)
* Change argument order to match order of all other functions (Victor Uriarte, 2016-06-14; 1 file, -2/+2)
* Remove unused code from sql.py and style up some changes (Victor Uriarte, 2016-06-14; 1 file, -38/+7)
* Merge remote-tracking branch 'core/long_live_indexes' into develop (Victor Uriarte, 2016-06-14; 1 file, -1/+105)
  * Use a specialized token_idx_next. (Sjoerd Job Postmus, 2016-06-12; 1 file, -0/+20)
    Prevent calling token_index.
  * Index-based token_idx_prev (Sjoerd Job Postmus, 2016-06-12; 1 file, -6/+22)
    Prevent some more calls to token_index in group_identifier_list. They are now all gone.
  * Use specialized token_idx_next_by in group_aliased. (Sjoerd Job Postmus, 2016-06-12; 1 file, -0/+20)
    The method group_aliased was making a lot of calls to token_index. By specializing token_next_by into token_idx_next_by, the calls to token_index became superfluous. Also use token_idx_next_by in group_identifier_list; its calls are now reduced by more than half.
  * Replace _group_matching with an inward-out grouping algorithm (Sjoerd Job Postmus, 2016-06-12; 1 file, -4/+9)
    Previously, all matching between open and close tokens was done up front: first finding the matching closing token, then grouping the tokens in between, and recursing over the newly created list. Instead, it is more efficient to look for the previous open-token on finding a closing-token, group these two together, and then continue on. Squashed: handle token indices in group_tokens_between and find_matching.
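The inward-out idea generalizes to any balanced-delimiter stream: push on an opener, fold on a closer, one linear pass, with the innermost groups formed first. A generic sketch over plain strings (a hypothetical `group_matching`, not sqlparse's implementation):

```python
def group_matching(tokens, open_tok="(", close_tok=")"):
    """Inward-out grouping: on each closing token, pair it with the most
    recent opener and fold that span into one nested list. Innermost
    groups are built first, so no recursion or re-scanning is needed.
    (Generic sketch of the algorithm described in the commit.)"""
    stack = [[]]  # stack[0] collects the top-level output
    for tok in tokens:
        if tok == open_tok:
            stack.append([tok])          # start a new (possibly nested) group
        elif tok == close_tok:
            group = stack.pop()          # close the most recent open group
            group.append(tok)
            stack[-1].append(group)      # attach it to its parent
        else:
            stack[-1].append(tok)
    return stack[0]


result = group_matching(["f", "(", "a", "(", "b", ")", ")"])
assert result == ["f", ["(", "a", ["(", "b", ")"], ")"]]
```

Compare this with the outward-in approach the commit replaced: scanning forward from each opener to find its close means walking the same nested spans repeatedly, while the stack touches each token exactly once.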
  * Special-case group_tokens(..., tokens_between()) (Sjoerd Job Postmus, 2016-06-12; 1 file, -0/+23)
    When the tokens are guaranteed to form a contiguous range, it is possible to get rid of a lot of expensive calls to `Token.tokens.remove(...)`.
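The contiguous-range special case can be illustrated with a plain list: a single slice assignment replaces many `remove()` calls, each of which is an O(n) scan-and-shift. The helper name here is assumed for illustration:

```python
def group_tokens_range(tokens, start, end, make_group=list):
    """Fold tokens[start:end] into a single group node in place.
    Because the span is contiguous, one slice assignment does the work
    that per-token remove() calls would do far more slowly.
    (Illustrative sketch with an assumed helper name.)"""
    group = make_group(tokens[start:end])
    tokens[start:end] = [group]   # one O(n) splice instead of many removes
    return group


toks = ["SELECT", "(", "1", "+", "2", ")", "FROM", "t"]
grouped = group_tokens_range(toks, 1, 6)
assert toks == ["SELECT", ["(", "1", "+", "2", ")"], "FROM", "t"]
assert grouped == ["(", "1", "+", "2", ")"]
```

This is the guarantee `tokens_between()` provides in the commit subject: the tokens form a range, so the group can be spliced in with one operation.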
* Fix token-parent behavior (Victor Uriarte, 2016-06-12; 1 file, -0/+3)
  Closes issue #226.
* Remove token_first; it's redundant to token_next(idx=0) (Victor Uriarte, 2016-06-12; 1 file, -20/+8)
* Restyle pprint_tree and align up to idx=99 (Victor Uriarte, 2016-06-12; 1 file, -3/+11)
* Add sql.Operation tokenlist (Victor Uriarte, 2016-06-12; 1 file, -0/+4)