Removed the previously deprecated ``case_sensitive`` parameter from
:func:`_sa.create_engine`, which affected only the lookup of string column
names in Core-only result set rows; it had no effect on the behavior of the
ORM. The behavior that ``case_sensitive`` controlled remains in place at its
default value of ``True``, meaning that string names looked up in
``row._mapping`` will match case-sensitively, just like any other Python
mapping.
Change-Id: I0dc4be3fac37d30202b1603db26fa10a110b618d
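A minimal sketch of the case-sensitive ``row._mapping`` lookup described
above; the in-memory SQLite URL and column label are illustrative
assumptions::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        row = conn.execute(text("SELECT 1 AS MyCol")).first()
        # string keys against row._mapping match case-sensitively,
        # exactly as the label was rendered
        print(row._mapping["MyCol"])    # -> 1
        # row._mapping["mycol"]         # would raise KeyError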
In order to remove LegacyRow / LegacyResult, we also have to lose
close_with_result, which connectionless execution relies upon.
Also includes a new profiles.txt file that is generated entirely against
py310, as that's what CI runs on now. Some result counts changed by one
function call, which was enough to fail the low-count result tests.
Replaces Connectable as the common interface between Connection and Engine
with EngineEventsTarget. Engine is no longer Connectable; Connection and
MockConnection still are.
References: #7257
Change-Id: Iad5eba0313836d347e65490349a22b061356896a
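A minimal sketch of the explicit-connection pattern that replaces
connectionless execution; the SQLite URL and statement are assumptions::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://", future=True)
    # connectionless execution such as engine.execute(...) relies on
    # close_with_result and goes away; acquire a Connection explicitly
    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())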
Fixes: #7258
Change-Id: I3577f665eca04f2632b69bcb090f0a4ec9271db9
Fixed issue where "expanding IN" would fail to function correctly with
datatypes that use the :meth:`_types.TypeEngine.bind_expression` method,
where the method would need to be applied to each element of the
IN expression rather than the overall IN expression itself.
Fixed issue where IN expressions against a series of array elements, as can
be done with PostgreSQL, would fail to function correctly due to multiple
issues within the "expanding IN" feature of SQLAlchemy Core that was
standardized in version 1.4. The psycopg2 dialect now makes use of the
:meth:`_types.TypeEngine.bind_expression` method with :class:`_types.ARRAY`
to portably apply the correct casts to elements. The asyncpg dialect was
not affected by this issue as it applies bind-level casts at the driver
level rather than at the compiler level.
As part of this commit the "bind translate" feature has been simplified and
is also applied to the names in the POSTCOMPILE tag, in order to accommodate
brackets.
Fixes: #7177
Change-Id: I08c703adb0a9bd6f5aeee5de3ff6f03cccdccdc5
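A hedged sketch of the kind of datatype involved, using a hypothetical
``TypeDecorator`` whose ``bind_expression()`` wraps each bound value; the
CAST is purely illustrative::

    from sqlalchemy import Integer, cast, column
    from sqlalchemy.types import TypeDecorator

    class CastingInt(TypeDecorator):
        impl = Integer
        cache_ok = True

        def bind_expression(self, bindvalue):
            # with this fix, "expanding IN" applies this wrapper to each
            # element of the IN list rather than to the IN as a whole
            return cast(bindvalue, Integer)

    expr = column("x", CastingInt()).in_([1, 2, 3])
    print(expr)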
Improve the interface used by adapted drivers, like the asyncio ones,
to access the actual connection object returned by the driver.
The :class:`_engine._ConnectionRecord` and
:class:`_engine._ConnectionFairy` now have two new attributes:
* ``dbapi_connection`` always represents a DBAPI compatible
object. For pep-249 drivers, this is the DBAPI connection as it always
has been, previously accessed under the ``.connection`` attribute.
For asyncio drivers that SQLAlchemy adapts into a pep-249 interface,
the returned object will normally be a SQLAlchemy adaption object
called :class:`_engine.AdaptedConnection`.
* ``driver_connection`` always represents the actual connection object
maintained by the third party pep-249 DBAPI or async driver in use.
For standard pep-249 DBAPIs, this will always be the same object
as that of the ``dbapi_connection``. For an asyncio driver, it will be
the underlying asyncio-only connection object.
The ``.connection`` attribute remains available and is now a legacy alias
of ``.dbapi_connection``.
Fixes: #6832
Change-Id: Ib72f97deefca96dce4e61e7c38ba430068d6a82e
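A minimal sketch of the two new attributes, using a plain pep-249 driver
where both refer to the same object; the SQLite URL is an assumption::

    from sqlalchemy import create_engine

    engine = create_engine("sqlite://")
    fairy = engine.raw_connection()      # a _ConnectionFairy
    # DBAPI-compatible object; for sqlite3 this is the sqlite3.Connection
    print(fairy.dbapi_connection)
    # actual driver-level connection; identical here, but for an asyncio
    # driver it would be the underlying asyncio-only connection
    print(fairy.driver_connection)
    fairy.close()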
Fixed issue where an engine that had ``implicit_returning`` set to False
would fail to function when PostgreSQL's "fast insertmany" feature was used
in conjunction with a ``Sequence``, as well as if any kind of "executemany"
with "return_defaults()" were used in conjunction with a ``Sequence``. Note
that PostgreSQL "fast insertmany" uses "RETURNING" by definition, when the
SQL statement is passed to the driver; overall, the ``implicit_returning``
flag is legacy, has no real use in modern SQLAlchemy, and will be deprecated
in a separate change.
Fixes: #6963
Change-Id: Id8e3dd50a21b9124f338067b0fdb57b8f608dca8
Also replace http://pypi.python.org/pypi with https://pypi.org/project
Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
Fixed regression involving how the ORM would resolve a given mapped column
to a result row, where under cases such as joined eager loading, a slightly
more expensive "fallback" could take place to set up this resolution due to
some logic that was removed since 1.3. The issue could also cause
deprecation warnings involving column resolution to be emitted when using a
1.4 style query with joined eager loading.
In order to ensure we don't look up columns by string name in the ORM,
we've turned on future_result=True in all cases, which I thought was
already the assumption here, but apparently not. That in turn
led to the issue that Session autocommit relies on close_with_result=True,
which is legacy result only. This was also hard to figure out.
So a new exception is raised if one tries to use future_result=True
along with close_with_result, and the Session now has an explicit path
for "autocommit" that sets these flags to their legacy values.
This does leave the possibility of some of these fallback cases
emitting warnings for users using session in autocommit along with
joined inheritance and column properties, as this patch identifies
that joined inheritance + column properties produce the fallback
logic when looking up in the result via the adapted column, which
in those tests is actually a Label object that doesn't adapt
nicely.
Fixes: #6596
Change-Id: I107a47e873ae05ab50853bb00a9ea0e1a88d5aee
Ensure that the MySQL and MariaDB dialects ignore the
:class:`_sql.Identity` construct while rendering the ``AUTO_INCREMENT``
keyword in a CREATE TABLE.
The Oracle and PostgreSQL compilers were updated to not render
:class:`_sql.Identity` if the database version does not support it
(Oracle < 12 and PostgreSQL < 10). Previously it was rendered regardless of
the database version.
Fixes: #6338
Change-Id: I2ca0902fdd7b4be4fc1a563cf5585504cbea9360
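A minimal sketch of a table using :class:`_sql.Identity`; the table and
column names are assumptions::

    from sqlalchemy import Column, Identity, Integer, MetaData, Table

    metadata = MetaData()
    t = Table(
        "some_table",
        metadata,
        # MySQL/MariaDB render AUTO_INCREMENT and ignore the Identity
        # options; older Oracle / PostgreSQL omit the IDENTITY clause
        Column("id", Integer, Identity(start=1), primary_key=True),
    )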
Fixed regression where the newly introduced INSERT syntax "INSERT...
VALUES (DEFAULT)" was not supported on some backends that do however
support "INSERT... DEFAULT VALUES", including SQLite. The two syntaxes are
now each individually supported or unsupported per dialect; for example,
MySQL supports "VALUES (DEFAULT)" but not "DEFAULT VALUES". Support for
Oracle is still not enabled, as there are unresolved issues with using
RETURNING at the same time.
Fixes: #6254
Change-Id: I47959bc826e3d9d2396ccfa290eb084841b02e77
The :meth:`_engine.Dialect.has_table` method now raises an informative
exception if a non-Connection is passed to it, as this incorrect usage seems
to be common. This method is not intended for external use outside of a
dialect. Please use the :meth:`.Inspector.has_table` method or, for
cross-compatibility with older SQLAlchemy versions, the
:meth:`_engine.Engine.has_table` method.
Fixes: #5780
Fixes: #6062
Fixes: #6260
Change-Id: I9b2439675167019b68d682edee3dcdcfce836987
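A minimal sketch of the public alternative; the engine URL and table name
are assumptions::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("sqlite://")
    # the supported, dialect-agnostic way to check for a table
    print(inspect(engine).has_table("some_table"))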
Add ``supports_schema = True`` to DefaultDialect and modify
requirements.py to use that attribute so third-party dialects
can explicitly indicate that they do *not* support schemas by
specifying ``supports_schema = False`` in their Dialect class.
Change-Id: Idffee82f6668a15ac7148f2a407a17de785d1fb7
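A minimal sketch of how a third-party dialect might opt out; the class name
is an assumption and the attribute spelling follows the text above::

    from sqlalchemy.engine.default import DefaultDialect

    class MyNoSchemaDialect(DefaultDialect):
        name = "mynoschema"
        # signals to the requirements/test-suite machinery that this
        # backend does not support schemas
        supports_schema = False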
Fixed the "stringify" compiler to support a basic stringification
of a "multirow" INSERT statement, i.e. one with multiple tuples
following the VALUES keyword.
Change-Id: I1fe38d204d9965275d3a72157d5a72a53bec4b11
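A minimal sketch of the stringification now supported; the table is an
assumption::

    from sqlalchemy import Column, Integer, MetaData, Table, insert

    t = Table("t", MetaData(), Column("x", Integer))
    stmt = insert(t).values([{"x": 1}, {"x": 2}, {"x": 3}])
    # str() now produces a basic rendering of the multi-tuple VALUES
    print(str(stmt))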
Added a new flag to the :class:`_engine.Dialect` class called
:attr:`_engine.Dialect.supports_statement_cache`. This flag now needs to be present
directly on a dialect class in order for SQLAlchemy's
:ref:`query cache <sql_caching>` to take effect for that dialect. The
rationale is based on discovered issues such as :ticket:`6173` revealing
that dialects which hardcode literal values from the compiled statement,
often the numerical parameters used for LIMIT / OFFSET, will not be
compatible with caching until these dialects are revised to use the
parameters present in the statement only. For third party dialects where
this flag is not applied, the SQL logging will show the message "dialect
does not support caching", indicating that dialect authors should seek to
apply this flag once they have verified that no per-statement literal values
are being rendered within the compilation phase.
Fixes: #6184
Change-Id: I6fd5b5d94200458d4cb0e14f2f556dbc25e27e22
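A minimal sketch of a third-party dialect applying the flag; the dialect
class is an assumption::

    from sqlalchemy.engine.default import DefaultDialect

    class MyThirdPartyDialect(DefaultDialect):
        name = "mydialect"
        # must be present directly on the class itself for the SQL
        # compilation cache to take effect for this dialect
        supports_statement_cache = True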
The _setup_ins_pk_from_empty() method provides a placeholder result for
inserted_primary_key_rows that is not typically going to be invoked. For an
executemany (or even execute) that isn't already stating it wants primary
key values up front, defer this computation until explicitly requested.
Change-Id: I6295eafbccc96a0422b9cac34e79db7924c702ca
References: https://github.com/sqlalchemy/sqlalchemy/discussions/5893#discussioncomment-356155
Fixed bug where the "schema_translate_map" feature failed to be taken into
account for the use case of direct execution of
:class:`_schema.DefaultGenerator` objects such as sequences, which included
the case where they were "pre-executed" in order to generate primary key
values when implicit_returning was disabled.
Fixes: #5929
Change-Id: I3fed1d0af28be5ce9c9bb572524dcc8411633f60
The rule to limit index names to 64 also applies to all
DDL names, such as those coming from naming conventions.
Add another limiting variable for constraint names and
create test cases against all constraint types.
Additionally, codified in the test suite MySQL's lack of support for naming
a FOREIGN KEY constraint: the given name is apparently assigned to an
associated KEY but not to the constraint itself, until MySQL 8 and MariaDB
10.5, which appear to have resolved the behavior. However, it's not clear
how Alembic hasn't had issues reported with this so far.
Fixed long-lived bug in MySQL dialect where the maximum identifier length
of 255 was too long for names of all types of constraints, not just
indexes, all of which have a size limit of 64. As metadata naming
conventions can create too-long names in this area, apply the limit to the
identifier generator within the DDL compiler.
Fixes: #5898
Change-Id: I79549474845dc29922275cf13321c07598dcea08
Change-Id: Ic5bb19ca8be3cb47c95a0d3315d84cb484bac47c
Change-Id: I4940d184a4dc790782fcddfb9873af3cca844398
Fixed regression which occurred due to [ticket:5755], which implemented
isolation level support for Oracle. It has been reported that many Oracle
accounts don't actually have permission to query the ``v$transaction``
view, so this feature has been altered to gracefully fall back when it fails
upon database connect, in which case the dialect assumes "READ COMMITTED" is
the default isolation level, as was the case prior to SQLAlchemy 1.3.21.
However, explicit use of the :meth:`_engine.Connection.get_isolation_level`
method must now necessarily raise an exception, as Oracle databases with
this restriction explicitly disallow the user from reading the current
isolation level.
Fixes: #5784
Change-Id: Iefc82928744f3c944c18ae8000eb3c9e52e523bc
Added the "future" keyword to the list of words that are known by the
:func:`_sa.engine_from_config` function, so that the values "true" and
"false" may be configured as "boolean" values when using a key such
as ``sqlalchemy.future = true`` or ``sqlalchemy.future = false``.
Change-Id: Ib4bba748497cc68e4c913dde54c23a4bb08b4deb
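A minimal sketch of the configuration style involved; the config dict and
URL are assumptions::

    from sqlalchemy import engine_from_config

    config = {
        "sqlalchemy.url": "sqlite://",
        # now parsed as a boolean rather than passed through as a string
        "sqlalchemy.future": "true",
    }
    engine = engine_from_config(config, prefix="sqlalchemy.")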
Improved support for column names that contain percent signs in the string,
including repaired issues involving anonymous labels that also embedded a
column name with a percent sign in it, as well as re-established support
for bound parameter names with percent signs embedded on the psycopg2
dialect, using a late-escaping process similar to that used by the
cx_Oracle dialect.
* Added a new constructor for _anonymous_label() that ensures incoming
string tokens based on column or table names will have percent
signs escaped; abstracts away the format of the label.
* Generalized cx_Oracle's quoted_bind_names facility into the compiler
itself, and leveraged this for the psycopg2 dialect's issue with
percent signs in names as well. The parameter substitution is now
integrated with compiler.construct_parameters() as well as the
recently reworked set_input_sizes(), reducing verbosity in the
cx_Oracle dialect.
Fixes: #5653
Change-Id: Ia2ad13ea68b4b0558d410026e5a33f5cb3fbab2c
Reworked the "setinputsizes()" set of dialect hooks to be correctly
extensible for any arbirary DBAPI, by allowing dialects individual hooks
that may invoke cursor.setinputsizes() in the appropriate style for that
DBAPI. In particular this is intended to support pyodbc's style of usage
which is fundamentally different from that of cx_Oracle. Added support
for pyodbc.
Fixes: #5649
Change-Id: I9f1794f8368bf3663a286932cfe3992dae244a10
The server_side_cursors engine-wide feature relies upon regexp parsing of
statements as well as general guessing as to when the feature should be
used. This is not within the 2.0 way of doing things and should be removed.
Additionally, mariadbconnector defaults to unbuffered cursors; add new
cursor hooks so that mariadbconnector can specify buffered or unbuffered
cursors without too much difficulty. This will also correctly default
mariadbconnector to buffered cursors, which should repair the segfaults
we've been getting.
Try to restore the assert_raises that was removed in 5b6dfc0c38bf1f01da4b8
to see if mariadbconnector segfaults are resolved.
Change-Id: I77f1c972c742e40694972f578140bb0cac8c39eb
Make optional sequences render as identity in mssql
Remove unused dialect option sequence_default_column_type
Change-Id: I821eeffcb442f8d1b69186a9b798b15c3d8d6ff3
This change mainly moves the bracketed (list) use within select() to
positional arguments, and removes keyword arguments from calls to the
select() function. It does not yet fully address other issues such as
keyword arguments passed to table.select().
Additionally, allows False / None to both be considered as "disable" for all
of select.correlate(), select.correlate_except(), and query.correlate(),
which establishes consistency with passing ``False`` for the legacy
select(correlate=False) argument.
Change-Id: Ie6c6e6abfbd3d75d4c8de504c0cf0159e6999108
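A minimal sketch of the argument-style change; the table is an assumption::

    from sqlalchemy import Column, Integer, MetaData, String, Table, select

    user = Table(
        "user", MetaData(),
        Column("id", Integer), Column("name", String),
    )
    # 1.x style passed a bracketed list: select([user.c.id, user.c.name])
    # the new style passes column expressions positionally
    stmt = select(user.c.id, user.c.name)
    print(stmt)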
Added support for PostgreSQL "readonly" and "deferrable" flags for all of
psycopg2, asyncpg and pg8000 dialects. This takes advantage of a newly
generalized version of the "isolation level" API to support other kinds of
session attributes set via execution options that are reliably reset
when connections are returned to the connection pool.
Fixes: #5549
Change-Id: I0ad6d7a095e49d331618274c40ce75c76afdc7dd
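A minimal sketch of the new execution options; the URL is an assumption,
and the attributes are reset when the connection is returned to the pool::

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    with engine.connect().execution_options(
        postgresql_readonly=True,
        postgresql_deferrable=True,
    ) as conn:
        conn.execute(text("SELECT 1"))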
MariaDB should not run a Sequence if it has optional=True.
Additionally, rework the rules in crud.py to accommodate the new combination
MariaDB brings us, which is a dialect that supports cursor.lastrowid,
explicit sequences, *and* no support for RETURNING.
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Fixes: #5528
Change-Id: I9a8ea69a34983affa95dfd22186e2908fdf0d58c
Improved the :func:`_sql.tuple_` construct such that it behaves predictably
when used in a columns-clause context. The SQL tuple is not supported as a
"SELECT" columns clause element on most backends; on those that do
(PostgreSQL, not surprisingly), the Python DBAPI does not have a "nested
type" concept, so there are still challenges in fetching rows for such an
object. Use of :func:`_sql.tuple_` in a :func:`_sql.select` or
:class:`_orm.Query` will now raise a :class:`_exc.CompileError` at the
point at which the :func:`_sql.tuple_` object is seen as presenting itself
for fetching rows (i.e., if the tuple is in the columns clause of a
subquery, no error is raised). For ORM use, the :class:`_orm.Bundle` object
is an explicit directive that a series of columns should be returned as a
sub-tuple per row and is suggested by the error message. Additionally, the
tuple will now render with parentheses in all contexts. Previously, the
parenthesization would not render in a columns context, leading to
undefined behavior.
As part of this change, Tuple receives a dedicated datatype, which appears
to allow us the very desirable change of removing the
bindparam._expanding_in_types attribute as well as ClauseList._tuple_values
(which might already have not been needed due to #4645).
Fixes: #5127
Change-Id: Iecafa0e0aac2f1f37ec8d0e1631d562611c90200
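A minimal sketch of a supported use of :func:`_sql.tuple_`; the table and
values are assumptions::

    from sqlalchemy import Column, Integer, MetaData, Table, select, tuple_

    t = Table("t", MetaData(), Column("a", Integer), Column("b", Integer))
    # tuple_() remains supported in a WHERE clause, e.g. a composite IN;
    # placing it in the columns clause of the outermost SELECT now raises
    # CompileError when rows would be fetched
    stmt = select(t).where(tuple_(t.c.a, t.c.b).in_([(1, 2), (3, 4)]))
    print(stmt)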
Using the approach introduced at
https://gist.github.com/zzzeek/6287e28054d3baddc07fa21a7227904e
We can now create asyncio endpoints that are then handled
in "implicit IO" form within the majority of the Core internals.
Then coroutines are re-exposed at the point at which we call
into asyncpg methods.
Patch includes:
* asyncpg dialect
* asyncio package
* engine, result, ORM session classes
* new test fixtures, tests
* some work with pep-484 and a short plugin for the
pyannotate package, which seems to have so-so results
Change-Id: Idbcc0eff72c4cad572914acdd6f40ddb1aef1a7d
Fixes: #3414
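A minimal sketch of the asyncio facade; the asyncpg URL is an assumption::

    import asyncio

    from sqlalchemy import text
    from sqlalchemy.ext.asyncio import create_async_engine

    async def main():
        engine = create_async_engine(
            "postgresql+asyncpg://scott:tiger@localhost/test"
        )
        # Core internals run in "implicit IO" form; coroutines are
        # re-exposed here at the asyncio-facing API
        async with engine.connect() as conn:
            result = await conn.execute(text("SELECT 1"))
            print(result.scalar())
        await engine.dispose()

    asyncio.run(main())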
Adjusted the dialect initialization process such that the
:meth:`_engine.Dialect.on_connect` is not called a second time on the first
connection. The hook is called first, then the
:meth:`_engine.Dialect.initialize` is called if that connection is the
first for that dialect, then no more events are called. This eliminates
the two calls to the "on_connect" function which can produce very difficult
debugging situations.
Fixes: #5497
Change-Id: Icefc2e884e30ee7b4ac84b99dc54bf992a6085e3
In order to accommodate relationship loaders with lambda caching, a lot
more is needed. This is a full refactor of the lambda system such that it
now has two levels of caching; the first level caches what can be known
from the __code__ element, then the next level of caching is against the
lambda itself and the contents of __closure__. This allows the elements
inside the lambdas, like columns and entities, to change and then be part
of the cache key. Lazy / selectin loads' use of baked queries had to add
distinct cache key elements, which was attempted here, but overall things
needed to be more robust than that.
This commit is broken out from the very long and sprawling commit at
Id6b5c03b1ce9ddb7b280f66792212a0ef0a1c541.
Change-Id: I29a513c98917b1d503abfdd61e6b6e8800851aa8
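A minimal sketch of the lambda SQL construct backed by this caching; the
table and criteria are assumptions::

    from sqlalchemy import Column, Integer, MetaData, Table, lambda_stmt, select

    t = Table("t", MetaData(), Column("id", Integer))

    def build_stmt(id_value):
        # the lambda's __code__ forms the first cache level; closure
        # contents such as id_value participate in the key / parameters
        stmt = lambda_stmt(lambda: select(t))
        stmt += lambda s: s.where(t.c.id == id_value)
        return stmt

    stmt = build_stmt(5)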
In order to support asyncpg as well as pg8000, we need to revise
setinputsizes to work for more cases, as well as adjust NativeForEmulated a
bit to work more completely with the INTERVAL datatype.
- Put most of the setinputsizes work into the compiler, where the
computation can be cached.
- Support per-element setinputsizes for a tuple.
- Adjust TypeDecorator so that _unwrapped_dialect_impl will honor a type
that the dialect links to directly in its adaption mapping. Decouple
_unwrapped_dialect_impl from TypeDecorator._gen_dialect_impl(), which has
a different purpose. This allows setinputsizes to do the right thing with
the INTERVAL datatype.
- Test cases for Oracle with Variant continue to work.
Change-Id: I9e1ea33aeca3b92b365daa4a356d778191070c03
Several weeks of using the future_select() construct has led to the
proposal that there be just one select() construct again, which features
the new join() method and otherwise accepts both the 1.x and 2.x argument
styles. This would make migration simpler and reduce confusion.
However, confusion may be increased by the fact that select().join() is
different. Current thinking is we may be better off with a few hard
behavioral changes to old and relatively unknown APIs rather than trying
to play both sides within two extremely similar but subtly different APIs.
At the moment, the .join() thing seems to be the only behavioral change
that occurs without the user taking any explicit steps. Session.execute()
will still behave the old way, as we are adding a future flag.
This change also adds the "future" flag to Session() and session.execute(),
so that interpretation of the incoming statement, as well as the return of
the new-style result, does not occur for existing applications unless they
add the use of this flag.
The change in general is moving the "removed in 2.0" system further along,
where we want the test suite to fully pass even if the SQLALCHEMY_WARN_20
flag is set.
Get many tests to pass when SQLALCHEMY_WARN_20 is set; this should be
ongoing after this patch merges.
Improve the RemovedIn20 warning; these are all deprecated "since" 1.4, so
ensure that's what the messages read. Make sure the information link is on
all warnings.
Add deprecation warnings for parameters present and add warnings to all
FromClause.select() types of methods.
Fixes: #5379
Fixes: #5284
Change-Id: I765a0b912b3dcd0e995426427d8bb7997cbffd51
References: #5159
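A minimal sketch of the new select().join() behavior noted above; the
tables are assumptions::

    from sqlalchemy import (
        Column, ForeignKey, Integer, MetaData, Table, select,
    )

    m = MetaData()
    a = Table("a", m, Column("id", Integer, primary_key=True))
    b = Table(
        "b", m,
        Column("id", Integer, primary_key=True),
        Column("a_id", ForeignKey("a.id")),
    )
    # the unified select().join() behaves like Query.join(), joining "b"
    # to the statement's existing FROM along the foreign key
    stmt = select(a).join(b)
    print(stmt)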
The coercions system allows us to add in lambdas as arguments to Core and
ORM elements without changing them at all. Allowing the lambda to produce a
deterministic cache key, from which we can also cheat and yank out literal
parameters, means we can move towards having 90% of "baked" functionality
in a clearer way right in Core / ORM.
As a second step, we can have whole statements inside the lambda, and can
then add generation with __add__(), so then we have 100% of "baked"
functionality with full support of ad-hoc literal values.
Adds some more short_selects tests for the moment for comparison.
Other tweaks inside cache key generation as we're trying to approach a
certain level of performance such that we can remove the use of "baked"
from the loader strategies.
While we have not yet closed #4639, the caching feature has been fully
integrated as of b0cfa7379cf8513a821a3dbe3028c4965d9f85bd, so we will also
add complete caching documentation here and close that issue as well.
Closes: #4639
Fixes: #5380
Change-Id: If91f61527236fd4d7ae3cad1f24c38be921c90ba
Build on #5401 to allow the ORM to take advantage of executemany INSERT +
RETURNING.
Implemented the feature and updated tests.
To support INSERT DEFAULT VALUES, needed to come up with a new compiler
syntax, INSERT INTO table (anycol) VALUES (DEFAULT), which can then be
iterated out for executemany.
Added graceful degrade to plain executemany for PostgreSQL <= 8.2.
Renamed EXECUTEMANY_DEFAULT to EXECUTEMANY_PLAIN.
Fix issue where unicode identifiers or parameter names wouldn't work with
execute_values() under Py2K, because we have to encode the statement and
therefore have to encode the insert_single_values_expr too.
Correct issue from #5401 to support executemany + return_defaults for a PK
that is explicitly pre-generated, meaning we aren't actually getting
RETURNING but need to return it from compiled_parameters.
Fixes: #5263
Change-Id: Id68e5c158c4f9ebc33b61c06a448907921c2a657
Note the PR has a few remaining doc linking issues
listed in the comment that must be addressed separately.
Signed-off-by: aplatkouski <5857672+aplatkouski@users.noreply.github.com>
Closes: #5371
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5371
Pull-request-sha: 7e7d233cf3a0c66980c27db0fcdb3c7d93bc2510
Change-Id: I9c36e8d8804483950db4b42c38ee456e384c59e3
The psycopg2 dialect now defaults to using the very performant
``execute_values()`` psycopg2 extension for compiled INSERT statements,
and also implements RETURNING support when this extension is used. This
allows INSERT statements that even include an autoincremented SERIAL
or IDENTITY value to run very fast while still being able to return the
newly generated primary key values. The ORM will then integrate this
new feature in a separate change.
Implements RETURNING for insert with executemany.
Adds support to return_defaults() mode and inserted_primary_key
to support multiple INSERTed rows, via returned_defaults_rows
and inserted_primary_key_rows accessors.
Within the default execution context, new cached compiler getters are used
to fetch primary keys from rows.
inserted_primary_key now returns a plain tuple; this is not yet a row-like
object, however that can be added.
Adds distinct "values_only" and "batch" modes, as "values" has a lot of
benefits but "batch" breaks cursor.rowcount.
psycopg2 minimum version is now 2.7, so we can remove the large number of
checks for very old versions of psycopg2.
Simplify tests to no longer distinguish between native and non-native json.
Fixes: #5401
Change-Id: Ic08fd3423d4c5d16ca50994460c0c234868bd61c
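A minimal sketch of selecting among the modes named above; the URL is an
assumption::

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        # "values_only" uses execute_values() for compiled INSERTs;
        # "batch" uses execute_batch() but breaks cursor.rowcount
        executemany_mode="values_only",
    )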
See https://twitter.com/raymondh/status/1275937373080023040
Change-Id: Iaa0abb0c433ccedfbd88d00e3970120242ba379b
This patch makes several improvements in the area of bulk updates and
deletes as well as the new session mechanics.
RETURNING is now used for an UPDATE or DELETE statement emitted for a
dialect that supports "full returning" in order to satisfy the "fetch"
strategy; this currently includes PostgreSQL and SQL Server. The Oracle
dialect does not support RETURNING for more than one row, so a new dialect
capability "full_returning" is added in addition to the existing
"implicit_returning", indicating this dialect supports RETURNING for zero
or more rows, not just a single identity row.
The "fetch" strategy will gracefully degrade to the previous SELECT
mechanics for dialects that do not support RETURNING.
Additionally, the "fetch" strategy will attempt to use evaluation for the
VALUES that were UPDATEd, rather than just expiring the updated attributes.
Values should be evaluatable in all cases where the value is not a SQL
expression.
The new approach also incurs some changes in the session.execute mechanics,
where do_orm_execute() event handlers can now be chained to each return
results; this is in turn used by the handler to detect on a per-bind basis
if the fetch strategy needs to do a SELECT or if it can do RETURNING. A
test suite is added to test_horizontal_shard that breaks up a single
UPDATE or DELETE operation among multiple backends, where some are SQLite
and don't support RETURNING and others are PostgreSQL and do.
The session event mechanics are corrected in terms of the "orm pre execute"
hook, which now receives a flag "is_reentrant" so that the two ORM
implementations for this can skip their work if they are being called
inside of ORMExecuteState.invoke(), where previously bulk update/delete
were calling its SELECT a second time.
In order for "fetch" to get the correct identity when called as
pre-execute, it also requests the identity_token for each mapped instance,
which is now added as an optional capability of a SELECT for ORM columns.
The identity_token that's placed by horizontal_sharding is now made
available within each result row, so that even when fetching a merged
result of plain rows we can tell which row belongs to which identity token.
The evaluator that takes place within the ORM bulk update and delete for
synchronize_session="evaluate" now supports the IN and NOT IN operators.
Tuple IN is also supported.
Fixes: #1653
Change-Id: I2292b56ae004b997cef0ba4d3fc350ae1dd5efc1
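A minimal sketch of a bulk update exercising the expanded "evaluate"
support; the mapping and data are assumptions::

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([User(name="ed"), User(name="wendy")])
        session.flush()
        # "evaluate" can now apply IN / NOT IN criteria in memory;
        # "fetch" uses RETURNING on dialects with full_returning support
        session.query(User).filter(User.name.in_(["ed", "wendy"])).update(
            {"name": "updated"}, synchronize_session="evaluate"
        )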
A variety of caching issues found by running all tests with statement
caching turned on.
The cache system now has a more conservative approach, where any subclass
of a SQL element will by default invalidate the cache key unless it adds
the flag inherit_cache=True at the class level, or if it implements its own
caching.
Add working caching to a few elements that were omitted previously; fix
some caching implementations to suit lesser-used edge cases such as json
casts and array slices.
Refine the way BaseCursorResult and CursorMetaData interact with caching;
to suit cases like Alembic modifying table structures, don't cache the
cursor metadata if it were created against a cursor.description using
non-positional matching, e.g. "select *". If a table re-ordered its columns
or added/removed some, that data is now obsolete. Additionally we have to
adapt the cursor metadata _keymap regardless of whether we just processed
cursor.description, because if we ran against a cached SQLCompiler we won't
have the right columns in _keymap.
Other refinements to how and when we do this adaption, as some weird cases
were exposed in the PostgreSQL dialect: a text() construct that names just
one column that is not actually in the statement. Fixed that also, as it
looks like a cut-and-paste artifact that doesn't actually affect anything.
Various issues with re-use of compiled result maps and cursor metadata in
conjunction with tables being changed, such as a change in order of
columns.
Mappers can be cleared but the class remains, meaning a mapper has to use
itself as the cache key, not the class.
Lots of bound parameter / literal issues, due to Alembic creating a
straight subclass of bindparam that renders inline directly. While we can
update Alembic to not do this, we have to assume other people might be
doing this, so bindparam() implements the inherit_cache=True logic as well,
which was a bit involved.
Turn on cache stats in logging.
Includes a fix to subqueryloader, which moves all setup to the
create_row_processor() phase and eliminates any storage within the compiled
context. This includes some changes to the create_row_processor() signature
and a revising of the technique used to determine if the loader can
participate in polymorphic queries, which is also applied to selectin
loading.
DML update.values() and ordered_values() now coerce the keys, as we have
tests that pass an arbitrary class here which only includes
__clause_element__(), so the key can't be cached unless it is coerced.
This in turn changed how composite attributes support bulk update, to use
the standard approach of ClauseElement with annotations that are parsed in
the ORM context.
Memory profiling successfully caught that the Session from Query was
getting passed into _statement_20(), so that was a big win for that test
suite.
Apparently Compiler had .execute() and .scalar() methods stuck on it; these
date back to version 0.4 and there was a single test in the PostgreSQL
dialect tests that exercised it for no apparent reason. Removed these
methods as well as the concept of a Compiler holding onto a "bind".
Fixes: #5386
Change-Id: I990b43aab96b42665af1b2187ad6020bee778784
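A minimal sketch of the inherit_cache flag on a custom SQL element; the
subclass is an assumption::

    from sqlalchemy.sql.expression import ColumnClause

    class MyColumnClause(ColumnClause):
        # declare that this subclass adds no new state affecting SQL
        # compilation, so the parent class's cache key logic applies;
        # without it, the construct is not cached
        inherit_cache = True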
Added support for "CREATE SEQUENCE" and full :class:`.Sequence` support for
Microsoft SQL Server. This removes the deprecated feature of using
:class:`.Sequence` objects to manipulate IDENTITY characteristics which
should now be performed using ``mssql_identity_start`` and
``mssql_identity_increment`` as documented at :ref:`mssql_identity`. The
change includes a new parameter :paramref:`.Sequence.data_type` to
accommodate SQL Server's choice of datatype, which for that backend
includes INTEGER and BIGINT. The default starting value for SQL Server's
version of :class:`.Sequence` has been set at 1; this default is now
emitted within the CREATE SEQUENCE DDL for all backends.
Fixes: #4235
Fixes: #4633
Change-Id: I6aa55c441e8146c2f002e2e201a7f645e667b916
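A minimal sketch of the new parameter; the sequence name is an assumption::

    from sqlalchemy import BigInteger, Sequence

    # data_type controls the datatype rendered in CREATE SEQUENCE on
    # SQL Server; the explicit start of 1 is now emitted on all backends
    seq = Sequence("my_seq", start=1, data_type=BigInteger)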
This commit includes that we've removed the "_orm_query" attribute from
compile state as well as query context. The attribute created reference
cycles and also added method call overhead. As part of this change, the
interface for ORMExecuteState changes a bit, as does the interface for the
horizontal sharding extension, which now deprecates the "query_chooser"
callable in favor of "execute_chooser", which receives the contextual
object. This will also work more nicely when we implement the new
execution path for bulk updates and deletes.
Pre-merge execution options for statement, connection, and arguments all up
front in Connection; that way they can be passed to the before_execute /
after_execute events, and the ExecutionContext doesn't have to merge them a
second time. Core execute is pretty close to 1.3 now.
Baked wasn't using the new one()/first()/one_or_none() methods; fixed that.
Convert the non-buffered cursor strategy to be a stateless singleton.
Inline all the paths by which the strategy gets chosen; the Oracle and SQL
Server dialects make use of the already-invoked post_exec() hook to
establish the alternate strategies, and this is actually much nicer than it
was before.
Add caching to the mapper instance processor for getters.
Identified a reference cycle per query that was showing up as a lot of gc
cleanup; fixed that.
After all that, performance is not budging much. Even test_baked_query now
runs with significantly fewer function calls than 1.3, yet is still 40%
slower. Basically something about the new patterns just makes this slower,
and while I've walked a whole bunch of them back, it hardly makes a dent.
That said, the performance issues are relatively small, in the 20-40% time
increase range, and the new caching feature does provide for regular ORM
and Core queries that are cached, and they are faster than non-cached.
Change-Id: I7b0b0d8ca550c05f79e82f75cd8eff0bbfade053
This patch replaces the ORM execution flow with a
single pathway through Session.execute() for all queries,
including Core and ORM.
Currently included is full support for ORM Query,
Query.from_statement(), select(), as well as the
baked query and horizontal shard systems. Initial
changes have also been made to the dogpile caching
example, which like baked query makes use of a
new ORM-specific execution hook that replaces the
use of both QueryEvents.before_compile() as well
as Query._execute_and_instances() as the central
ORM interception hooks.
select() and Query() constructs alike can be passed to
Session.execute() where they will return ORM
results in a Results object. This API is currently
used internally by Query. Full support for
Session.execute()->results to behave in a fully
2.0 fashion will be in later changesets.
bulk update/delete with ORM support will also
be delivered via the update() and delete()
constructs, however these have not yet been adapted
to the new system and may follow in a subsequent
update.
Performance is also beginning to lag as of this commit and some previous
ones. It is hoped that a few central functions such as the coercions
functions can be rewritten in C to regain performance. Additionally, query
caching is now available, and some subsequent patches will attempt to cache
more of the per-execution work from the ORM layer, e.g. column getters and
adapters.
This patch also contains initial "turn on" of the
caching system enginewide via the query_cache_size
parameter to create_engine(). Still defaulting at
zero for "no caching". The caching system still
needs adjustments in order to gain adequate performance.
Change-Id: I047a7ebb26aa85dc01f6789fac2bff561dcd555d
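A minimal sketch of the unified execution path; the mapping and data are
assumptions::

    from sqlalchemy import Column, Integer, String, create_engine, select
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user_account"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(name="ed"))
        session.flush()
        # a select() against an ORM entity passed to Session.execute()
        # returns ORM rows through the same pathway Query uses internally
        result = session.execute(select(User).where(User.name == "ed"))
        print(result.scalars().first().name)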