path: root/sqlparse/sql.py
Commit message  (Author, Date, Files changed, Lines removed/added)
* Remove unused code from sql.py and style up some changes  (Victor Uriarte, 2016-06-14, 1 file, -38/+7)
|
* Merge remote-tracking branch 'core/long_live_indexes' into develop  (Victor Uriarte, 2016-06-14, 1 file, -1/+105)
|\
| * Use a specialized token_idx_next.  (Sjoerd Job Postmus, 2016-06-12, 1 file, -0/+20)
| |   Prevent calling token_index.
| * Index-based token_idx_prev  (Sjoerd Job Postmus, 2016-06-12, 1 file, -6/+22)
| |   Prevent some more calls to token_index in group_identifier_list. They are now all gone.
| * Use specialized token_idx_next_by in group_aliased.  (Sjoerd Job Postmus, 2016-06-12, 1 file, -0/+20)
| |   The method group_aliased was making a lot of calls to token_index. By specializing token_next_by to token_idx_next_by, the calls to token_index became superfluous. Also use token_idx_next_by in group_identifier_list; it was making a lot of calls, which are now reduced by more than half.
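
A rough standalone sketch of the difference these token_idx_* commits describe (illustrative Python only; TinyTokenList and its methods are hypothetical, not sqlparse's TokenList):

    class TinyTokenList:
        """Hypothetical, simplified token container for illustration."""
        def __init__(self, tokens):
            self.tokens = list(tokens)

        def token_index(self, token):
            # O(n) identity scan -- the call the commits above try to avoid.
            return next(i for i, t in enumerate(self.tokens) if t is token)

        def token_next(self, token):
            # Old pattern: re-derive the index from the token, then step forward.
            idx = self.token_index(token)
            return self.tokens[idx + 1] if idx + 1 < len(self.tokens) else None

        def token_idx_next(self, idx):
            # Index-based variant: the caller already holds idx, so no scan is needed.
            nidx = idx + 1
            return (nidx, self.tokens[nidx]) if nidx < len(self.tokens) else (None, None)
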
| * Replace _group_matching with an inward-out grouping algorithm  (Sjoerd Job Postmus, 2016-06-12, 1 file, -4/+9)
| |   Previously, all matching between open/close tokens was done up front: first finding the matching closing token, then grouping the tokens in between, and recursing over the newly created list. Instead, it is more efficient to look for the previous open-token on finding a closing-token, group these two together, and then continue on. Squashed: handle token indices in group_tokens_between and find_matching.
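
As a rough illustration of the inward-out approach (a generic sketch, not the actual _group_matching code): on seeing a closing token, group back to the most recent unmatched opener, so no separate forward search or recursion over the freshly created sublist is needed.

    def group_matching(tokens, opener='(', closer=')'):
        """Group matching opener/closer pairs inward-out using a stack of indices."""
        tokens = list(tokens)
        opens = []                      # indices of not-yet-matched openers
        i = 0
        while i < len(tokens):
            tok = tokens[i]
            if tok == opener:
                opens.append(i)
            elif tok == closer and opens:
                start = opens.pop()
                # Splice the opener..closer span into a single nested group.
                tokens[start:i + 1] = [tokens[start:i + 1]]
                i = start               # resume right after the new group
            i += 1
        return tokens

    # group_matching(list('a(b(c)d)e'))
    # -> ['a', ['(', 'b', ['(', 'c', ')'], 'd', ')'], 'e']
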
| * Special-case group_tokens(..., tokens_between())  (Sjoerd Job Postmus, 2016-06-12, 1 file, -0/+23)
| |   When the tokens are guaranteed to form a contiguous range, it is possible to get rid of a lot of calls to `Token.tokens.remove(...)`, which are expensive.
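
The gain here comes from the contiguous-range guarantee: the grouped tokens can be spliced out with one slice assignment instead of one list.remove() call (an O(n) scan) per token. A generic sketch under that assumption, not the actual group_tokens code:

    def group_range(tokens, start, end):
        """Replace tokens[start:end] with one nested group in a single splice."""
        grp = tokens[start:end]
        tokens[start:end] = [grp]       # no per-token .remove() calls needed
        return grp

    toks = ['select', '(', 'a', ',', 'b', ')', 'from', 't']
    group_range(toks, 1, 6)
    # toks -> ['select', ['(', 'a', ',', 'b', ')'], 'from', 't']
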
* | Fix token-parent behavior  (Victor Uriarte, 2016-06-12, 1 file, -0/+3)
| |   Closes issue #226.
* | Remove token_first; it's redundant with token_next(idx=0)  (Victor Uriarte, 2016-06-12, 1 file, -20/+8)
| |
* | Restyle pprint_tree and align up to idx=99  (Victor Uriarte, 2016-06-12, 1 file, -3/+11)
| |
* | Add sql.Operation tokenlist  (Victor Uriarte, 2016-06-12, 1 file, -0/+4)
| |
* | Replace remove with list comprehension on sql.py  (Victor Uriarte, 2016-06-11, 1 file, -1/+3)
| |   Help performance for #62, #135.
* | Refactor sql.py insert_after  (Victor Uriarte, 2016-06-11, 1 file, -1/+1)
| |
* | Refactor sql.py group_tokens  (Victor Uriarte, 2016-06-11, 1 file, -19/+11)
| |   First token in group had no parents and almost became Batman.
* | Fix get_token_at_offset behavior at edge  (Victor Uriarte, 2016-06-11, 1 file, -1/+1)
| |   At position 6 (with an index starting at 0) it should have been on the 2nd token for the example sql='select * from dual'.
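
A self-contained sketch of one reasonable boundary convention (illustrative only; the flat string tokens below are a simplification, not sqlparse's tokenization): each token covers the half-open range [start, start + len), so an offset that lands exactly on a boundary belongs to the token that starts there.

    def token_at_offset(tokens, offset):
        """Return the token whose text covers the given character offset."""
        pos = 0
        for token in tokens:
            if pos <= offset < pos + len(token):
                return token
            pos += len(token)
        return None

    tokens = ['select', ' ', '*', ' ', 'from', ' ', 'dual']
    token_at_offset(tokens, 6)          # -> ' ', the 2nd token, not 'select'
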
* | Remove unneeded code from sql.py  (Victor Uriarte, 2016-06-11, 1 file, -12/+2)
| |   Remove HACK code. Code is now properly updated.
* | Fix Case statements  (Adam Greenhall, 2016-06-06, 1 file, -1/+4)
| |
* | Refactor match logic [filters_sql]  (Victor Uriarte, 2016-06-04, 1 file, -18/+9)
| |
* | Simplify index  (Victor Uriarte, 2016-06-04, 1 file, -10/+9)
| |
* | Refactor one-time use functions  (Victor Uriarte, 2016-06-04, 1 file, -24/+3)
| |
* | Allow tokenlists to skip over comments  (Victor Uriarte, 2016-06-04, 1 file, -10/+13)
| |   Rename ignore_cm to skip_cm for consistency.
* | Clean-up code style sql.py  (Victor Uriarte, 2016-06-04, 1 file, -35/+31)
| |   Other items inside slots are already defined in the parent class.
* | Clean Token/TokenList init's  (Victor Uriarte, 2016-06-04, 1 file, -8/+3)
| |
* | Clean-up: rename loop variables to token in sql.py  (Victor Uriarte, 2016-06-04, 1 file, -19/+19)
| |
* | Simplify sql.py naming/alias  (Victor Uriarte, 2016-06-04, 1 file, -19/+6)
| |
* | Change pprint to new str format; can output to file  (Victor Uriarte, 2016-06-04, 1 file, -14/+13)
| |
* | Add unicode-str compatible cls decorator  (Victor Uriarte, 2016-06-04, 1 file, -28/+9)
| |
* | Add or update copyright year to files  (Victor Uriarte, 2016-06-04, 1 file, -0/+5)
| |
* | Fix flake8 styling  (Victor Uriarte, 2016-05-29, 1 file, -1/+1)
|/
* Refactor sql.py functions  (Victor Uriarte, 2016-05-11, 1 file, -82/+30)
|
* Refactor remove quotes  (Victor Uriarte, 2016-05-10, 1 file, -11/+3)
|
* Add group matching M_tokens and refactor group matching  (Victor Uriarte, 2016-05-10, 1 file, -19/+15)
|   Remove slots in subclasses.
* Generalize group_tokens for more use cases  (Victor Uriarte, 2016-05-10, 1 file, -9/+25)
|
* Add powerful _token_matching and imt helper  (Victor Uriarte, 2016-05-10, 1 file, -6/+30)
|
* Update sql  (Victor Uriarte, 2016-05-10, 1 file, -8/+8)
|
* Code cleanup.  (Andi Albrecht, 2016-04-03, 1 file, -2/+4)
|
* Ensure get_type() works for queries that use WITH.  (Andrew Tipton, 2016-03-02, 1 file, -0/+12)
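
A quick usage check of the behavior this commit targets (assuming a sqlparse version that includes the change; the expected value is presumably the DML keyword that follows the CTE):

    import sqlparse

    stmt = sqlparse.parse('WITH t AS (SELECT 1 AS x) SELECT x FROM t')[0]
    print(stmt.get_type())   # with this change the WITH/CTE prefix is skipped,
                             # so the DML keyword ('SELECT') should be reported
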
|
* Fix version check when casting TokenList to string (fixes #212).  (Andi Albrecht, 2015-12-08, 1 file, -1/+1)
|
* Remove sql.Token.to_unicode.  (Andi Albrecht, 2015-10-26, 1 file, -8/+0)
|
* Cleanup module code.  (Andi Albrecht, 2015-10-26, 1 file, -8/+9)
|
* Use compat module for single Python 2/3 code base.  (Andi Albrecht, 2015-10-26, 1 file, -11/+12)
|   This change includes minor fixes and code cleanup too.
* Speed up token_index by providing a starting index.  (Ryan Wooden, 2015-10-21, 1 file, -1/+9)
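
The speed-up is essentially a linear identity scan that starts from a caller-supplied index instead of 0 (generic sketch; the signature and parameter names here are illustrative, not sqlparse's exact API):

    def token_index(tokens, token, start=0):
        """Find a token by identity, scanning from an optional starting index."""
        for i in range(start, len(tokens)):
            if tokens[i] is token:
                return i
        return None   # not found at or after `start`
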
|
* Limit number of tokens checked in group_identifier.  (Ryan Wooden, 2015-10-21, 1 file, -2/+2)
|   This significantly improves performance when grouping a large list of IDs.
* Ignore comments at beginning of statement when calling Statement.get_type (fixes #186).  (Andi Albrecht, 2015-07-26, 1 file, -2/+10)
|
* Improve detection of aliased identifiers (fixes #185).  (Andi Albrecht, 2015-04-19, 1 file, -1/+2)
|
* Group square-brackets into identifiers  (Darik Gamble, 2015-03-04, 1 file, -4/+5)
|   Identifier.get_array_indices() looks for square brackets, and yields lists of bracket-grouped tokens as array indices.
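
A small usage sketch of the API this commit describes (assuming the parser groups col[1] into a single Identifier, which is what this change enables):

    import sqlparse
    from sqlparse.sql import Identifier

    stmt = sqlparse.parse('SELECT col[1] FROM tab')[0]
    ident = next(t for t in stmt.tokens if isinstance(t, Identifier))
    for index_tokens in ident.get_array_indices():
        print([str(t) for t in index_tokens])   # the tokens between '[' and ']'
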
* Parse square brackets as a group just like parens  (Darik Gamble, 2015-03-04, 1 file, -0/+9)
|   - add class sql.SquareBrackets
|   - replace group_parenthesis() with the more generic group_brackets(), which groups square and round brackets, so each can contain groups of the other
* Move get_parent_name() from Identifier to TokenList (so Function can use it)  (Darik Gamble, 2015-02-09, 1 file, -21/+11)
|
* get_name() uses _get_first_name()  (Darik Gamble, 2015-02-09, 1 file, -0/+4)
|
* get_alias() uses _get_first_name(), and searches in reverse for "column expression alias"  (Darik Gamble, 2015-02-09, 1 file, -13/+9)
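
A usage sketch of get_alias() (hedged: grouping details vary between sqlparse versions, so the aliased expression is kept simple here):

    import sqlparse
    from sqlparse.sql import Identifier

    stmt = sqlparse.parse('SELECT foo.bar AS baz FROM tbl')[0]
    ident = next(t for t in stmt.tokens if isinstance(t, Identifier))
    print(ident.get_alias())   # -> 'baz'

Per the commit message, the reverse search matters for column expression aliases such as "a + b AS total", where scanning forward would pick up the first name in the expression rather than the alias.
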