Implemented the "cartesian product warning" for UPDATE and DELETE
statements, i.e. those which include multiple tables that are not
correlated together in some way.

Fixed issue where a :func:`_dml.update` construct that included multiple
tables and no VALUES clause would raise an internal error. Current
behavior for :class:`_dml.Update` with no values is to generate a SQL
UPDATE statement with an empty "set" clause, so this has been made
consistent for this specific sub-case.
Fixes: #9721
Change-Id: I556639811cc930d2e37532965d2ae751882af921
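
As a rough illustration of the kind of statement the new warning targets
(hypothetical tables; the warning itself is emitted when the statement is
compiled for execution with from-linting enabled on the engine)::

    from sqlalchemy import column, table, update

    t1 = table("t1", column("id"), column("x"))
    t2 = table("t2", column("id"))

    # t2 appears in the WHERE clause but is never correlated to t1, so this
    # UPDATE may now trigger the "cartesian product" warning when executed
    stmt = update(t1).values(x=5).where(t2.c.id > 10)
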
Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This continues the change in 2.0.9 which
disabled the SQL Server version of the feature due to a reliance in the
ORM on apparent row ordering that is not guaranteed to take place.

The fix applies new logic to all "insertmanyvalues" operations, which
takes effect when the new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` is passed to the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods; through a combination of alternate SQL forms, direct
correspondence of client-side parameters, and in some cases downgrading to
running row-at-a-time, sorting will be applied to each batch of returned
rows using correspondence to primary key or other unique values in each
row which can be correlated to the input data.

Performance impact is expected to be minimal, as nearly all common primary
key scenarios are suitable for parameter-ordered batching on all backends
other than SQLite, while "row-at-a-time" mode operates with a bare minimum
of Python overhead compared to the very heavyweight approaches used in the
1.x series. For SQLite, there is no difference in performance when
"row-at-a-time" mode is used.

It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later be
more easily generalized to third party backends that include RETURNING
support but not necessarily easy ways to guarantee a correspondence with
parameter order.
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
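
A minimal sketch of how the new parameter is intended to be used
(hypothetical table; assumes SQLAlchemy 2.0.10 or later and a backend with
RETURNING support, e.g. a recent SQLite)::

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    engine = create_engine("sqlite://")
    metadata.create_all(engine)

    with engine.begin() as conn:
        # ask insertmanyvalues to hand back RETURNING rows in the same order
        # as the parameter dictionaries that were passed in
        stmt = insert(users).returning(users.c.id, sort_by_parameter_order=True)
        result = conn.execute(stmt, [{"name": "a"}, {"name": "b"}, {"name": "c"}])
        print(result.scalars().all())
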
Fixed regression related to the implementation for the new
"insertmanyvalues" feature where an internal ``TypeError`` would occur in
arrangements where a :func:`_sql.insert` would be referred to inside of
another :func:`_sql.insert` via a CTE; made additional repairs for this
use case for positional dialects such as asyncpg when using
"insertmanyvalues".

At the core here is a change to positional insertmanyvalues where we now
determine exactly the positions of the "manyvalues" within the larger
parameter list, allowing non-"manyvalues" parameters on the left and right
sides at the same time, without assuming anything about how RETURNING
renders etc., since CTEs are in the mix also.
Fixes: #9173
Change-Id: I5ff071fbef0d92a2d6046b9c4e609bb008438afd
Fixed bug / regression where using :func:`.bindparam()` with the same name
as a column in the :meth:`.Update.values` method of :class:`.Update`, as
well as the :meth:`.Insert.values` method of :class:`.Insert` in 2.0 only,
would in some cases silently fail to honor the SQL expression in which the
parameter was presented, replacing the expression with a new parameter of
the same name and discarding any other elements of the SQL expression,
such as SQL functions, etc. The specific case would be statements that
were constructed against ORM entities rather than plain :class:`.Table`
instances, but would occur whether the statement was invoked with a
:class:`.Session` or a :class:`.Connection`.

The :class:`.Update` part of the issue was present in both 2.0 and 1.4 and
is backported to 1.4.
Fixes: #9075
Change-Id: Ie954bc1f492ec6a566163588182ef4910c7ee452
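
An illustrative sketch of the affected construct, shown here against a
plain Table rather than the ORM-entity form from the report; the wrapping
SQL function is now retained::

    from sqlalchemy import Column, Integer, MetaData, String, Table, bindparam, func, update

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    # the bound parameter shares its name with the target column; the
    # surrounding func.lower() expression is honored rather than discarded
    stmt = (
        update(users)
        .where(users.c.id == bindparam("uid"))
        .values(name=func.lower(bindparam("name")))
    )
    print(stmt)  # UPDATE users SET name=lower(:name) WHERE users.id = :uid
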
Change-Id: I625af65b3fb1815b1af17dc2ef47dd697fdc3fb1
Changed how the positional compilation is performed. It's now rendered by
the compiler the same as the pyformat compilation; the string is then
processed to replace the placeholders with the correct ones and to obtain
the correct order of the parameters. This vastly simplifies the
computation of the parameter order, which in the case of nested CTEs is
very hard to compute correctly.

Reworked how the numeric paramstyle behaves:
- added support for repeated parameters, without duplicating them as
  normal positional dialects do
- implemented insertmany support. This requires that the dialect supports
  out-of-order placeholders, since all parameters that are not part of the
  VALUES clauses are placed at the beginning of the parameter tuple
- support for different identifiers for a numeric parameter. It's for
  example possible to use PostgreSQL-style placeholders $1, $2, etc.

Added two new dialects based on SQLite to test "numeric" fully using both
the :1 style and the $1 style. Includes a workaround for SQLite's
not-really-correct numeric implementation.

Changed the paramstyle of the asyncpg dialect to numeric, rendering with
its native $ identifiers.
Fixes: #8926
Fixes: #8849
Change-Id: I7c640467d49adfe6d795cc84296fc7403dcad4d6
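
A quick way to see the new rendering, assuming the dialect module can be
imported even without the asyncpg driver installed; the exact output
(casts, spacing) may vary::

    from sqlalchemy import column, select, table
    from sqlalchemy.dialects.postgresql import asyncpg

    t = table("t", column("a"), column("b"))
    stmt = select(t).where(t.c.a == 5).where(t.c.b == 10)

    # with the numeric paramstyle, the compiled string uses the native
    # $1, $2 placeholders rather than pyformat-style %(a_1)s names
    print(stmt.compile(dialect=asyncpg.dialect()))
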
Command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>"

pyupgrade will change assert_ to assertTrue. That was reverted since
assertTrue does not exist in sqlalchemy fixtures.
Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
Fixed bug in the new "insertmanyvalues" feature where an INSERT that
included a subquery with :func:`_sql.bindparam` inside of it would fail to
render correctly in "insertmanyvalues" format. This affected psycopg2 most
directly as "insertmanyvalues" is used unconditionally with this driver.
Fixes: #8639
Change-Id: I67903fa86afe208899d4f23f940e0727d1be2ce3
* ORM Insert now includes "bulk" mode that will run essentially the same
  process as session.bulk_insert_mappings; interprets the given list of
  values as ORM attributes for key names
* ORM UPDATE has a similar feature, without RETURNING support, for
  session.bulk_update_mappings
* Added support for upserts to do RETURNING of ORM objects as well
* ORM UPDATE/DELETE with a list of parameters + WHERE criteria is not
  implemented; use a connection
* ORM UPDATE/DELETE defaults to "auto" synchronize_session; use "fetch" if
  RETURNING is present, "evaluate" if not, as "fetch" is much more
  efficient (no expired object SELECT problem) and less error prone if
  RETURNING is available.
  UPDATE: however this is inefficient! please continue to use "evaluate"
  for simple cases; "auto" can move to "fetch" if criteria are not
  evaluable
* "evaluate" criteria will now not preemptively unexpire and SELECT
  attributes that were individually expired. Instead, if evaluation of the
  criteria indicates that the necessary attrs were expired, we expire the
  object completely (delete) or expire the SET attrs unconditionally
  (update). This keeps the object in the same unloaded state where it will
  refresh those attrs on the next pass, for this generally unusual case.
  (originally #5664)
* Core change! update/delete rowcount comes from len(rows) if RETURNING
  was used. SQLite at least otherwise did not support this; adjusted
  test_rowcount accordingly
* ORM DELETE with a list of parameters at all is also not implemented, as
  this would imply "bulk", and there is no bulk_delete_mappings (there
  could be, but we don't have that)
* ORM insert().values() with single or multi-values translates key names
  based on ORM attribute names
* ORM returning() implemented for insert, update, delete; explicit
  returning clauses now interpret rows in an ORM context, with support for
  qualifying loader options as well
* session.bulk_insert_mappings() assigns polymorphic identity if not set.
* explicit RETURNING + synchronize_session='fetch' is now supported with
  UPDATE and DELETE.
* expanded return_defaults() to work with DELETE also.
* added support for composite attributes to be present in the dictionaries
  used by bulk_insert_mappings and bulk_update_mappings, which is also the
  new ORM bulk insert/update feature; these will expand the composite
  values into their individual mapped attributes the way they'd be on a
  mapped instance.
* bulk UPDATE supports "synchronize_session=evaluate", which is the
  default. This does not apply to session.bulk_update_mappings, just the
  new version
* both bulk UPDATE and bulk INSERT, the latter with or without RETURNING,
  support *heterogeneous* parameter sets.
  session.bulk_insert/update_mappings did this, so this feature is
  maintained. Now the cursor result can be both horizontally and
  vertically spliced :)

This is now a long story with a lot of options, which in itself is a
problem to be able to document all of this in some way that makes sense.
Raising exceptions for use cases we haven't supported is pretty important
here too; the tradition of letting unsupported things just not work is
likely not a good idea at this point, though there are still many cases
that aren't easily avoidable.
Fixes: #8360
Fixes: #7864
Fixes: #7865
Change-Id: Idf28379f8705e403a3c6a937f6a798a042ef2540
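
A small self-contained sketch of the new ORM bulk INSERT with RETURNING
described above (mapped class and names are illustrative; assumes a
backend where INSERT..RETURNING with executemany is available, e.g. a
recent SQLite)::

    from sqlalchemy import String, create_engine, insert
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "user_account"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str] = mapped_column(String(50))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        # a list of dicts triggers the "bulk" INSERT path; returning(User)
        # hands back ORM instances for the newly inserted rows
        users = session.scalars(
            insert(User).returning(User),
            [{"name": "spongebob"}, {"name": "patrick"}],
        ).all()
        print([u.name for u in users])
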
The feature is enabled for all built-in backends when RETURNING is used,
except for Oracle which doesn't need it; on psycopg2 and mssql+pyodbc it
is used for all INSERT statements, not just those that use RETURNING.

Third party dialects would need to opt in to the new feature by setting
use_insertmanyvalues to True.

Also adds dialect-level guards against using RETURNING with executemany
where we don't have an implementation to suit it. A single execute() with
RETURNING still defers to the server without us checking.
Fixes: #6047
Fixes: #7907
Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
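
A hedged sketch of what the third-party opt-in might look like; the class
and the flag combination here are illustrative, not a complete dialect::

    from sqlalchemy.engine.default import DefaultDialect

    class MyDialect(DefaultDialect):
        """Hypothetical external dialect opting in to "insertmanyvalues"."""

        # requires that the backend can execute a single INSERT with many
        # VALUES rows and return RETURNING rows for all of them
        use_insertmanyvalues = True
        insert_returning = True
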
As almost every dialect supports RETURNING now, RETURNING is also made
more of a default assumption.

* the default compiler generates a RETURNING clause now when specified;
  CompileError is no longer raised.
* The dialect-level implicit_returning parameter now has no effect. It's
  not fully clear if there are real world cases relying on the
  dialect-level parameter, so we will see once 2.0 is released. ORM-level
  RETURNING can be disabled at the table level, and perhaps "implicit
  returning" should become an ORM-level option at some point as that's
  where it applies.
* Altered ORM update() / delete() to respect table-level implicit
  returning for fetch.
* Since MariaDB doesn't support UPDATE returning, "full_returning" is now
  split into insert_returning, update_returning, delete_returning
* Crazy new thing: dialects that have *both* cursor.lastrowid *and*
  RETURNING, so now we can pick between them for SQLite and MariaDB. We
  are trying to keep it on .lastrowid for simple inserts with an
  autoincrement column; this helps with some edge case test scenarios and
  I bet .lastrowid is faster anyway. For any return_defaults() /
  multiparams etc. we then use RETURNING.
* SQLite decided they don't want to return rows that match in ON CONFLICT.
  This is flat out wrong, but for now we need to work with it.
Fixes: #6195
Fixes: #7011
Closes: #7047
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7047
Pull-request-sha: d25d5ea3abe094f282c53c7dd87f5f53a9e85248
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Change-Id: I9908ce0ff7bdc50bd5b27722081767c31c19a950
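
The split capabilities can be inspected directly on a dialect; a small
sketch (the values printed depend entirely on the backend in use)::

    from sqlalchemy import create_engine

    engine = create_engine("sqlite://")
    d = engine.dialect
    print(d.insert_returning, d.update_returning, d.delete_returning)
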
An informative error is raised for the use case where
:meth:`.Insert.from_select` is being passed a "compound select" object such
as a UNION, yet the INSERT statement needs to append additional columns to
support Python-side or explicit SQL defaults from the table metadata. In
this case a subquery of the compound object should be passed.
Fixes: #8073
Change-Id: Ic4a5dbf84ec49d2451901be05cb9cf6ae93f02b7
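
A sketch of the workaround the error suggests, using hypothetical tables
where the target table carries an extra Python-side default; the UNION is
wrapped in a subquery before being handed to :meth:`.Insert.from_select`::

    from sqlalchemy import Column, Integer, MetaData, String, Table, insert, select, union_all

    metadata = MetaData()
    src = Table("src", metadata, Column("x", Integer), Column("y", String(20)))
    dest = Table(
        "dest", metadata,
        Column("x", Integer),
        Column("y", String(20)),
        Column("z", Integer, default=0),  # extra default the INSERT must render
    )

    u = union_all(select(src.c.x, src.c.y), select(src.c.x, src.c.y))

    # passing ``u`` directly raises the new informative error; a SELECT of
    # its subquery is the supported form
    stmt = insert(dest).from_select(["x", "y"], select(u.subquery()))
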
A whole bunch of errors were apparently blocked by 0.0.4 being installed.
Fixes: #8020
Change-Id: I22a0faeaabe03de501897893391946d677c2df7e
Altered the compilation mechanics of the :class:`.Insert` construct such
that the "autoincrement primary key" column value will be fetched via
``cursor.lastrowid`` or RETURNING even if present in the parameter set or
within the :meth:`.Insert.values` method as a plain bound value, for
single-row INSERT statements on specific backends that are known to
generate autoincrementing values even when explicit NULL is passed. This
restores a behavior that was in the 1.3 series for both the use case of a
separate parameter set as well as :meth:`.Insert.values`. In 1.4, the
parameter set behavior unintentionally changed to no longer do this, but
the :meth:`.Insert.values` method would still fetch autoincrement values
up until 1.4.21, where :ticket:`6770` changed the behavior yet again,
unintentionally, as this use case was never covered.

The behavior is now defined as "working" to suit the case where databases
such as SQLite, MySQL and MariaDB will ignore an explicit NULL primary key
value and nonetheless invoke an autoincrement generator.
Fixes: #7998
Change-Id: I5d4105a14217945f87fbe9a6f2a3c87f6ef20529
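
A short sketch of the restored behavior on one of the backends mentioned
(SQLite here)::

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

    metadata = MetaData()
    t = Table(
        "t", metadata,
        Column("id", Integer, primary_key=True),
        Column("data", String(30)),
    )

    engine = create_engine("sqlite://")
    metadata.create_all(engine)

    with engine.begin() as conn:
        # the explicit None for the autoincrement column is ignored by the
        # database; the generated value is fetched via lastrowid / RETURNING
        result = conn.execute(t.insert(), {"id": None, "data": "x"})
        print(result.inserted_primary_key)
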
To simplify pyproject.toml, change the remaining files that aren't going
to be typed on this first pass (unless of course someone wants to type
some of these) to include # mypy: ignore-errors. For the moment, only a
handful of ORM modules are to have more type checking implemented.

It's important that ignore-errors is used and not "# type: ignore", as in
the latter case, mypy doesn't even read the existing types in the file,
which makes it impossible to type any files that refer to those modules at
all.

To simplify ongoing typing work, use inline mypy config for remaining
files that are "done" for now, indicating the level of type checking they
currently have.
Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
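
For reference, the inline configuration in question is just a comment at
the top of each module, e.g.::

    # mypy: ignore-errors
    #
    # mypy still reads the annotations present in this module, so other
    # typed modules that import from it keep working; only error reporting
    # for this file is suppressed.
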
After some experimentation it seems mypy is more amenable to the generic
types being fully integrated rather than having separate spin-off types,
so key structures like Result, Row, Select become generic. For DML Insert,
Update, Delete, these are spun into type-specific subclasses
ReturningInsert, ReturningUpdate, ReturningDelete, which is fine since the
"row-ness" of these constructs doesn't happen until returning() is called
in any case.

A Tuple-based model is then integrated so that these objects can carry
along information about their return types. Overloads at the .execute()
level carry through the Tuple from the invoked object to the result.

To suit the issue of AliasedClass generating attributes that are dynamic,
experimented with a custom subclass AsAliased, but then just settled on
having aliased() lie to the type checker and return `Type[_O]`,
essentially. Will need some type-related accessors for with_polymorphic()
also.

Additionally, identified an issue in Update when used "mysql style"
against a join(): it basically doesn't work if asked to UPDATE two tables
on the same column name. Added an error message for the specific condition
where it happens; the message is intentionally non-specific, saying only
that we hit a thing we can't do right now and suggesting multi-table
update as a possible cause.
Change-Id: I5eff7eefe1d6166ee74160b2785c5e6a81fa8b95
Change-Id: I42ed77f559e3ee5b8c600d98457ee37803ef0ea6
Full "RETURNING" support is implemented for the cx_Oracle dialect, meaning
multiple RETURNING rows are now recived for DML statements that produce
more than one row for RETURNING.
cx_Oracle 7 is now the minimum version for cx_Oracle.
Getting Oracle to do multirow returning took about 5 minutes. however,
getting Oracle's RETURNING system to integrate with ORM-enabled
insert, update, delete, is a big deal because that architecture wasn't
really working very robustly, including some recent changes in 1.4
for FromStatement were done in a hurry, so this patch also cleans up
the FromStatement situation and begins to establish it more concretely
as the base for all ReturnsRows / TextClause ORM scenarios.
Fixes: #6245
Change-Id: I2b4e6007affa51ce311d2d5baa3917f356ab961f
Hitting DML is causing us to open up the ColumnCollection structure a bit,
as we do put anonymous column expressions with a None key in here.
However, we still want Table / TableClause to have named column
collections that don't return None, so parametrize the "key" in this
collection also.

* rename some "immutable" elements to "readonly". We change the contents
  of immutablecolumncollection underneath, so it's not "immutable"
Change-Id: I2593995a4e5c6eae874bed5bf76117198be8ae97
Non-strict checking for mostly internal or semi-internal code.
Change-Id: Ib91b47f1a8ccc15e666b94bad1ce78c4ab15b0ec
__future__.annotations mode allows us to use non-string
annotations for argument and return types in most cases,
but more importantly it removes a large amount of runtime
overhead that would be spent in evaluating the annotations.
Change-Id: I2f5b6126fe0019713fc50001be3627b664019ede
References: #6810
Change-Id: I49abf2607e0eb0623650efdf0091b1fb3db737ea
### Description
Black's `target-version` was still set to `['py27', 'py36']`. Set it to `[py37]` instead.
Also update Black and other pre-commit hooks and re-format with Black.
Closes: #7536
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7536
Pull-request-sha: b3aedf5570d7e0ba6c354e5989835260d0591b08
Change-Id: I8be85636fd2c9449b07a8626050c8bd35bd119d5
Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
Fixed issue where using ORM column expressions as keys in the list of
dictionaries passed to :meth:`_sql.Insert.values` for "multi-valued
insert" would not be resolved into the correct column expressions.
Fixes: #7060
Change-Id: I1c4c286c33ea6eeaafba617996828f5c88ff0a1c
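
A sketch of the construct that is now handled correctly (hypothetical
mapped class)::

    from sqlalchemy import Column, Integer, String, insert
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    # ORM column expressions used as dictionary keys in a multi-valued
    # insert are resolved to the underlying table columns
    stmt = insert(User).values([{User.name: "sandy"}, {User.name: "patrick"}])
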
Fixed issue related to new ``add_cte()`` feature where pairing two
"INSERT..FROM SELECT" statements simultaneously would lose track of the two
independent SELECT statements, leading to the wrong SQL.
Fixes: #7036
Change-Id: I90fe47eb203bc5c1ea5810db0edba08250b2b7e6
Fixed critical caching issue where the ORM's persistence feature using
INSERT..RETURNING would cache an incorrect query when mixing the "bulk
save" and standard "flush" forms of INSERT.
Fixes: #6793
Change-Id: Ifeb61c1226d3fa6d5e1c2e29b6f5ff77a27d6a2d
Added new method :meth:`_sql.HasCTE.add_cte` to each of the
:func:`_sql.select`, :func:`_sql.insert`, :func:`_sql.update` and
:func:`_sql.delete` constructs. This method will add the given
:class:`_sql.CTE` as an "independent" CTE of the statement, meaning it
renders in the WITH clause above the statement unconditionally even if it
is not otherwise referenced in the primary statement. This is a popular use
case on the PostgreSQL database where a CTE is used for a DML statement
that runs against database rows independently of the primary statement.
Fixes: #6752
Change-Id: Ibf635763e40269cbd10f4c17e208850d8e8d0188
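
A minimal sketch of the DML use case (hypothetical tables); the UPDATE
runs as an independent CTE even though the INSERT never references it::

    from sqlalchemy import Column, Integer, MetaData, String, Table, insert, update

    metadata = MetaData()
    accounts = Table(
        "accounts", metadata,
        Column("id", Integer, primary_key=True),
        Column("balance", Integer),
    )
    log = Table(
        "log", metadata,
        Column("id", Integer, primary_key=True),
        Column("note", String(50)),
    )

    debit = (
        update(accounts)
        .where(accounts.c.id == 1)
        .values(balance=accounts.c.balance - 100)
        .cte("debit")
    )
    stmt = insert(log).values(note="transferred").add_cte(debit)
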
Also replace http://pypi.python.org/pypi with https://pypi.org/project
Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
Fixed issue where usage of an explicit :class:`.Sequence` would produce
inconsistent "inline" behavior for an :class:`.Insert` construct that
includes multiple values phrases; the first sequence would be inline but
subsequent ones would be "pre-execute", leading to inconsistent sequence
ordering. The sequence expressions are now fully inline.
Fixes: #6361
Change-Id: Ie16794ec0e19979a7e6c8d1bef5716a9fc199889
Fixed regression where the newly introduced INSERT syntax "INSERT...
VALUES (DEFAULT)" was not supported on some backends that do however
support "INSERT..DEFAULT VALUES", including SQLite. The two syntaxes are
now each individually supported or non-supported for each dialect; for
example, MySQL supports "VALUES (DEFAULT)" but not "DEFAULT VALUES".
Support for Oracle is still not enabled, as there are unresolved issues in
using RETURNING at the same time.
Fixes: #6254
Change-Id: I47959bc826e3d9d2396ccfa290eb084841b02e77
Adjusted the psycopg2 dialect to emit an explicit PostgreSQL-style cast for
bound parameters that contain ARRAY elements. This allows the full range of
datatypes to function correctly within arrays. The asyncpg dialect already
generated these internal casts in the final statement. This also includes
support for array slice updates as well as the PostgreSQL-specific
:meth:`_postgresql.ARRAY.contains` method.
Fixes: #6023
Change-Id: Ia7519ac4371a635f05ac69a3a4d0f4e6d2f04cad
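
Constructs of the kind affected, sketched against a hypothetical table;
both rely on the parameter casts now emitted by the psycopg2 dialect::

    from sqlalchemy import Column, Integer, MetaData, Table, select, update
    from sqlalchemy.dialects.postgresql import ARRAY

    metadata = MetaData()
    t = Table(
        "t", metadata,
        Column("id", Integer, primary_key=True),
        Column("data", ARRAY(Integer)),
    )

    # ARRAY.contains() comparison and an array-slice UPDATE
    sel = select(t).where(t.c.data.contains([1, 2]))
    upd = update(t).where(t.c.id == 5).values({t.c.data[2:4]: [7, 8]})
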
These were revealed by running `pylint --disable all --enable spelling --spelling-dict en_US` over all sources.
Closes: #5868
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5868
Pull-request-sha: bb249195d92e3b806e81ecf1192d5a1b3cd5db48
Change-Id: I96080ec93a9fbd20ce21e9e16265b3c77f22bb14
Change-Id: Ic5bb19ca8be3cb47c95a0d3315d84cb484bac47c
It's better; the majority of these changes look more readable to me. Also
found some docstrings that had formatting / quoting issues.
Change-Id: I582a45fde3a5648b2f36bab96bad56881321899b
MariaDB should not run a Sequence if it has optional=True. Additionally,
rework the rules in crud.py to accommodate the new combination MariaDB
brings us, which is a dialect that supports cursor.lastrowid and explicit
sequences, *and* has no support for RETURNING.
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Fixes: #5528
Change-Id: I9a8ea69a34983affa95dfd22186e2908fdf0d58c
The fix in 180ae7c1a53385f72b0047496ac001ec5099cc3e didn't do much, as the
code was not preserving parameter order at all, in fact. Reworked
stmt_parameters to be delivered in the correct order up front and
preserved throughout crud.py, which was not being done at all before.
Fixes: #5510
Change-Id: I0795c71df73005a25d1bbf216732d41b41e11a5f
The test for the parameter fix in c0685e5f41 was not working
deterministically on Python 2, so use ordered_values(); however, a second
issue in ordered_values() was that it assumed each element was a column,
so also test for array-assignment expressions with ordered_values().
Change-Id: I944c72a52700ffb4ab5ae1a83ae21f1efc84b505
Fixes related to rendering of complex UPDATE DML
which was not correctly preserving positional parameter
order in conjunction with DML features that are only known
to work on the PostgreSQL database. Both pg8000
and asyncpg use positional parameters which is why these
issues are suddenly apparent.
crud.py now takes on the task of rendering the column
expressions for SET or VALUES so that for the very unusual
case that the column expression is a compound expression
that includes a bound parameter (namely an array index),
the bound parameter order is preserved.
Additionally, crud.py passes through the positional_names
keyword argument into bindparam_string() which is necessary
when CTEs are being rendered, as PG supports complex
CTE / INSERT / UPDATE scenarios.
Change-Id: I7f03920500e19b721636b84594de78a5bfdcbc82
Build on #5401 to allow the ORM to take advantage of executemany INSERT +
RETURNING.

Implemented the feature and updated tests.

To support INSERT DEFAULT VALUES, needed to come up with a new syntax for
the compiler, INSERT INTO table (anycol) VALUES (DEFAULT), which can then
be iterated out for executemany.

Added graceful degrade to plain executemany for PostgreSQL <= 8.2.

Renamed EXECUTEMANY_DEFAULT to EXECUTEMANY_PLAIN.

Fix issue where unicode identifiers or parameter names wouldn't work with
execute_values() under Py2K, because we have to encode the statement and
therefore have to encode the insert_single_values_expr too.

Correct issue from #5401 to support executemany + return_defaults for a PK
that is explicitly pre-generated, meaning we aren't actually getting
RETURNING but need to return it from compiled_parameters.
Fixes: #5263
Change-Id: Id68e5c158c4f9ebc33b61c06a448907921c2a657
The psycopg2 dialect now defaults to using the very performant
``execute_values()`` psycopg2 extension for compiled INSERT statements,
and also implements RETURNING support when this extension is used. This
allows INSERT statements that even include an autoincremented SERIAL or
IDENTITY value to run very fast while still being able to return the newly
generated primary key values. The ORM will then integrate this new feature
in a separate change.

Implements RETURNING for insert with executemany.

Adds support to return_defaults() mode and inserted_primary_key to support
multiple INSERTed rows, via the return_defaults_rows and
inserted_primary_key_rows accessors.

Within the default execution context, new cached compiler getters are used
to fetch primary keys from rows.

inserted_primary_key now returns a plain tuple. This is not yet a row-like
object; however, this can be added.

Adds distinct "values_only" and "batch" modes, as "values" has a lot of
benefits but "batch" breaks cursor.rowcount.

psycopg2 minimum version is now 2.7, so we can remove the large number of
checks for very old versions of psycopg2.

Simplify tests to no longer distinguish between native and non-native
json.
Fixes: #5401
Change-Id: Ic08fd3423d4c5d16ca50994460c0c234868bd61c
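
The mode is selected at engine configuration time; a hedged sketch with a
placeholder URL, using the mode names described above::

    from sqlalchemy import create_engine

    # "values_only" applies execute_values() to compiled INSERT statements;
    # "batch" uses execute_batch() instead but breaks cursor.rowcount
    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        executemany_mode="values_only",
    )
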
This is based off of
I8091919d45421e3f53029b8660427f844fee0228 and includes
all documentation-only changes as a separate merge,
once the parent is merged.
Change-Id: I711adea23df0f9f0b1fe7c76210bd2de6d31842d
Replaces a wide array of Sphinx-relative doc references
with an abbreviated absolute form now supported by
zzzeeksphinx.
Change-Id: I94bffcc3f37885ffdde6238767224296339698a2
Supercedes: If78fbb557c6f2cae637799c3fec2cbc5ac248aaf

Trying to see if, by making the cache key memoized, we can still have the
older "identity" form of caching, which is the cheapest of all, at the
same time as the newer "cache key each time" version that is not nearly as
cheap, but still much cheaper than no caching at all.

Also needed is a per-execution update of _keymap when we invoke from a
cached select, so that Column objects that are anonymous or otherwise
adapted will match up. This is analogous to the adaptation of bound
parameters from the cache key.

Adds test coverage for the keymap / construct_params() changes related to
caching. Also hones performance to a large extent for statement
construction and cache key generation.

Also includes a new memoized attribute approach that vastly simplifies the
previous approach of "group_expirable_memoized_property" and finally
integrates cleanly with _clone(), _generate(), etc. No more hardcoding of
attributes is needed, and most _reset_memoization() calls are no longer
needed as the reset is inherent in a _generate() call; this also has
dramatic performance improvements.
Change-Id: I95c560ffcbfa30b26644999412fb6a385125f663
Targeting select / insert / update / delete, the goal is to minimize the
overhead of construction and generative methods so that only the raw
arguments passed are handled. An interim stage that converts the raw state
into more compiler-ready state is added, which is analogous to the ORM
QueryContext which will also be rolled in to be a similar concept, as is
currently being prototyped in I19e05b3424b07114cce6c439b05198ac47f7ac10.
The ORM update/delete BulkUD concept is also going to be rolled onto this
idea. So while the compiler-ready state object, here called DMLState,
looks a little thin, it's the base of a bigger pattern that will allow for
ORM functionality to embed itself directly into the compiler, execution
context, and result set objects.

This change targets the DML objects, primarily focused on the values()
method which is the most complex process. The work done by values() is
minimized as much as possible while still being able to create a cache
key. Additional computation is then offloaded to a new object ValuesState
that is handled by the compiler.

Architecturally, a big change here is that insert.values() and
update.values() will generate BindParameter objects for the values now,
which are then carefully received by crud.py so that they generate the
expected names. This is so that the values() portion of these constructs
is cacheable. For the "multi-values" version of Insert, this is all
skipped, and the plan right now is that a multi-values insert is not worth
caching (can always be revisited).

Using the coercions system in values() also gets us nicer validation for
free; we can remove the NotAClauseElement thing from schema, and we also
now require that scalar_subquery() is called for an insert/update that
uses a SELECT as a column value, with a 1.x deprecation path added.

The traversal system is then applied to the DML objects, including tests,
so that they have traversal, cloning, and cache key support. Cloning is
not a use case for DML; however, having it present allows better
validation of the structure within the tests.

Special per-dialect DML is explicitly not cacheable at the moment, more as
a proof of concept that third party DML constructs can exist as gracefully
not-cacheable rather than producing an incomplete cache key.

A few selected performance improvements have been added as well,
simplifying the immutabledict.union() method and adding a new SQLCompiler
function that can generate delimiter-separated clauses like WHERE and
ORDER BY without having to build a ClauseList object at all. The use of
ClauseList will be removed from Select in an upcoming commit. Overall,
ClauseList is unnecessary for internal use and only adds overhead to
statement construction, and will likely be removed as much as possible
except for explicit use of conjunctions like and_() and or_().
Change-Id: I408e0b8be91fddd77cf279da97f55020871f75a9
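
A sketch of the scalar_subquery() requirement mentioned above, using
hypothetical tables::

    from sqlalchemy import Column, Integer, MetaData, Table, func, select, update

    metadata = MetaData()
    a = Table("a", metadata, Column("id", Integer, primary_key=True), Column("total", Integer))
    b = Table("b", metadata, Column("a_id", Integer), Column("amount", Integer))

    # a SELECT used as a column value must be wrapped explicitly; the old
    # automatic coercion now emits a deprecation warning
    subq = select(func.sum(b.c.amount)).where(b.c.a_id == a.c.id).scalar_subquery()
    stmt = update(a).values(total=subq)
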
Fixed bug where a CTE of an INSERT/UPDATE/DELETE that also uses RETURNING
could then not be SELECTed from directly, as the internal state of the
compiler would try to treat the outer SELECT as a DELETE statement itself
and access nonexistent state.
Fixes: #5181
Change-Id: Icba76f2148c8344baa1c04bac4ab6c6d24f23072
Change-Id: I08440dc25e40ea1ccea1778f6ee9e28a00808235
Added support for use of the :class:`.Sequence` construct with MariaDB
10.3 and greater, as this is now supported by this database. The construct
integrates with the :class:`.Table` object in the same way that it does
for other databases like PostgreSQL and Oracle; if it is present on the
integer primary key "autoincrement" column, it is used to generate
defaults. For backwards compatibility, to support a :class:`.Table` that
has a :class:`.Sequence` on it to support sequence-only databases like
Oracle, while still not having the sequence fire off for MariaDB, the
optional=True flag should be set, which indicates the sequence should only
be used to generate the primary key if the target database offers no other
option.
Fixes: #4976
Closes: #4996
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/4996
Pull-request-sha: cb2e1426ea0b6bc6c93dbe8f033a11df9d8c4915
Change-Id: I507bc405eee6cae2c5991345d0eac53a37fe7512
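
A sketch of the backwards-compatible form described above::

    from sqlalchemy import Column, Integer, MetaData, Sequence, String, Table

    metadata = MetaData()
    t = Table(
        "t", metadata,
        Column(
            "id", Integer,
            # only fire the sequence when the backend has no other way to
            # generate the value (e.g. Oracle); MariaDB keeps using its
            # native autoincrement
            Sequence("t_id_seq", optional=True),
            primary_key=True,
        ),
        Column("data", String(30)),
    )
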
A major refactoring of all the functions that handle detection of Core
argument types, as well as performing coercions, into a new class
hierarchy based on "roles", each of which identifies a syntactical
location within a SQL statement. In contrast to the ClauseElement
hierarchy that identifies "what" each object is syntactically, the SQLRole
hierarchy identifies the "where does it go" of each object syntactically.
From this we define a consistent type checking and coercion system that
establishes well defined behaviors.

This is a breakout of the patch that is reorganizing select() constructs
to no longer be in the FromClause hierarchy.

Also includes a rename of as_scalar() into scalar_subquery(); deprecates
automatic coercion to scalar_subquery().
Partially-fixes: #4617
Change-Id: I26f1e78898693c6b99ef7ea2f4e7dfd0e8e1a1bd
Reworked the :meth:`.ClauseElement.compare` methods in terms of a new
visitor-based approach, and additionally added test coverage ensuring that
all :class:`.ClauseElement` subclasses can be accurately compared
against each other in terms of structure. Structural comparison
capability is used to a small degree within the ORM currently, however
it also may form the basis for new caching features.
Fixes: #4336
Change-Id: I581b667d8e1642a6c27165cc9f4aded1c66effc6