| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
fix a handful of warnings that were being emitted but not raised,
usually because they were inside an "expect_warnings" block.
modify "expect_warnings" to always use "raise_on_any_unexpected"
behavior; remove this parameter.
Fixed issue in semi-private ``await_only()`` and ``await_fallback()``
concurrency functions where the given awaitable would remain un-awaited if
the function threw a ``GreenletError``, which could cause "was not awaited"
warnings later on if the program continued. In this case, the given
awaitable is now cancelled before the exception is thrown.
Change-Id: I33668c5e8c670454a3d879e559096fb873b57244
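A minimal sketch of the pattern described above, using hypothetical ``await_only_sketch()`` and ``MissingGreenlet`` names rather than SQLAlchemy's internals: the un-awaited coroutine is closed before the error is raised, so Python never reports it as "never awaited".
```
import inspect


class MissingGreenlet(RuntimeError):
    """Stand-in for the error raised outside the required greenlet context."""


def await_only_sketch(awaitable):
    in_greenlet_context = False  # assume we are outside the required context
    if not in_greenlet_context:
        if inspect.iscoroutine(awaitable):
            awaitable.close()  # cancel the coroutine rather than leaving it pending
        raise MissingGreenlet("await_only() called outside of greenlet context")
    # ...otherwise the awaitable would be handed off to the driving greenlet...


async def some_coro():
    return 1


try:
    await_only_sketch(some_coro())
except MissingGreenlet:
    pass  # no "coroutine ... was never awaited" warning is emitted later
```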
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Improved row processing performance for "binary" datatypes by making the
"bytes" handler conditional on a per-driver basis. As a result, the
"bytes" result handler has been disabled for nearly all drivers other than
psycopg2, all of which in modern forms support returning Python "bytes"
directly. Pull request courtesy J. Nick Koston.
Fixes: #9680
Closes: #9681
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9681
Pull-request-sha: 4f2fd88bd9af54c54438a3b72a2f30384b0f8898
Change-Id: I394bdcbebaab272e63b13cc02f60813b7aa76839
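A rough sketch of the idea behind this change, using illustrative names rather than SQLAlchemy's internal API: a per-value bytes-coercion handler is only created for drivers that do not already return Python ``bytes``; drivers that do return ``bytes`` natively get no handler at all, so no per-row function call is made.
```
def bytes_result_processor(driver_returns_bytes_natively):
    """Illustrative only: return a per-value handler, or None if not needed."""
    if driver_returns_bytes_natively:
        return None  # no handler installed; values pass through unchanged

    def process(value):
        return bytes(value) if value is not None else None

    return process


# a driver returning e.g. memoryview gets a coercion handler
assert bytes_result_processor(False)(memoryview(b"abc")) == b"abc"
# a modern driver returning bytes directly gets no handler at all
assert bytes_result_processor(True) is None
```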
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This was a continuation of the change in
2.0.9 which disabled the SQL Server version of the feature due to a
reliance in the ORM on apparent row ordering that is not guaranteed to take
place. The fix applies new logic to all "insertmanyvalues" operations,
which takes effect when the new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` is used with the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods; through a combination of alternate SQL forms, direct
correspondence of client-side parameters, and in some cases downgrading to
running row-at-a-time, sorting will be applied to each batch of returned
rows using correspondence to primary key or other unique values in each row
which can be correlated to the input data.
Performance impact is expected to be minimal, as nearly all common primary
key scenarios allow parameter-ordered batching to be achieved on all
backends other than SQLite, while "row-at-a-time" mode operates with a bare
minimum of Python overhead compared to the very heavyweight approaches used
in the 1.x series. For SQLite, there is no difference in performance when
"row-at-a-time" mode is used.
It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later be
more easily generalized to third party backends that include RETURNING
support but not necessarily easy ways to guarantee a correspondence
with parameter order.
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
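A brief usage sketch of the new parameter (assuming SQLAlchemy 2.0.10 or later; the table and values are illustrative):
```
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

metadata = MetaData()
user = Table(
    "user_account", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

stmt = insert(user).returning(
    user.c.id, user.c.name,
    sort_by_parameter_order=True,  # returned rows follow the input ordering
)
with engine.begin() as conn:
    rows = conn.execute(stmt, [{"name": "spongebob"}, {"name": "sandy"}]).all()
    print(rows)  # ids correspond to the order of the parameter dictionaries
```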
|
|
|
|
|
|
|
| |
Removed versionadded and versionchanged directives for versions prior to 1.2
since they are no longer useful.
Change-Id: I5c53d1188bc5fec3ab4be39ef761650ed8fa6d3e
|
|
|
|
|
|
| |
try to get file naming to be more sane for pysqlite file databases
Change-Id: I68ad8c2f6c6c25930fbffdd79b8d429cd7a7dd9a
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
pymssql seems to be maintained again and appears to be working
completely, so let's try re-enabling it.
Fixed issue in the new :class:`.Uuid` datatype which prevented it from
working with the pymssql driver. As pymssql seems to be maintained again,
restored testing support for pymssql.
Tweaked the pymssql dialect to take better advantage of
RETURNING for INSERT statements in order to retrieve last inserted primary
key values, in the same way as occurs for the mssql+pyodbc dialect right
now.
Identified that the ``sqlite`` and ``mssql+pyodbc`` dialects are now
compatible with the SQLAlchemy ORM's "versioned rows" feature, since
SQLAlchemy now computes rowcount for a RETURNING statement in this specific
case by counting the rows returned, rather than relying upon
``cursor.rowcount``. In particular, the ORM versioned rows use case
(documented at :ref:`mapper_version_counter`) should now be fully
supported with the SQL Server pyodbc dialect.
Change-Id: I38a0666587212327aecf8f98e86031ab25d1f14d
References: #5321
Fixes: #9414
|
|/
|
|
|
|
|
|
| |
Fixed bug that prevented SQLAlchemy from connecting when using a very old
SQLite version (before 3.9) on Python 3.8+.
Fixes: #9379
Change-Id: I10ca347398221c952e1a572dc6ef80e491d1f5cf
|
|
|
|
|
|
|
|
|
|
|
|
| |
Remove ``typing.Self`` workaround, now using :pep:`673` for most methods
that return ``Self``. Pull request courtesy Yurii Karabas.
Fixes: #9254
Closes: #9255
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9255
Pull-request-sha: 2947df8ada79f5c3afe9c838e65993302199c2f7
Change-Id: Ic32015ad52e95a61f3913d43ea436aa9402804df
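For illustration only (not SQLAlchemy code), the :pep:`673` pattern referenced above; ``typing.Self`` requires Python 3.11:
```
from typing import Self


class Query:
    def filter(self, *criteria: object) -> Self:
        # ...apply criteria; subclasses keep their own type in the return value...
        return self

    def limit(self, n: int) -> Self:
        # ...apply the limit...
        return self
```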
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed regression caused by issue :ticket:`9058` which adjusted the MySQL
dialect's ``has_table()`` to again use "DESCRIBE", where the specific error
code raised by MySQL version 8 when using a non-existent schema name was
unexpected and failed to be interpreted as a boolean result.
Fixed the SQLite dialect's ``has_table()`` function to correctly report
False for queries that include a non-None schema name for a schema that
doesn't exist; previously, a database error was raised.
Fixes: #9251
Change-Id: I5ef9cf0887865c3c521d88bca0ba18954a108241
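A small usage sketch of the SQLite side of this fix (the schema name is illustrative); with the change applied, a non-existent schema reports ``False`` rather than raising:
```
from sqlalchemy import create_engine, inspect

engine = create_engine("sqlite://")
insp = inspect(engine)
# previously this raised a database error; now it simply returns False
print(insp.has_table("some_table", schema="nosuchschema"))
```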
|
|
|
|
|
|
| |
change {opensql} to {printsql} in prints, add missing markers
Change-Id: I07b72e6620bb64e329d6b641afa27631e91c4f16
|
|
|
|
| |
Change-Id: I625af65b3fb1815b1af17dc2ef47dd697fdc3fb1
|
|
|
|
|
|
|
|
|
|
| |
Added test support to ensure that all compiler ``visit_xyz()`` methods
across all :class:`.Compiler` implementations in SQLAlchemy accept a
``**kw`` parameter, so that all compilers accept additional keyword
arguments under all circumstances.
Fixes: #8988
Change-Id: I1cefc313e4e64a10ee7dd14400137fbe02ce9523
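For context, a sketch of the convention being verified, using the documented ``@compiles`` extension point (the custom construct shown is illustrative):
```
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause


class MySpecialColumn(ColumnClause):
    inherit_cache = True


@compiles(MySpecialColumn)
def visit_my_special_column(element, compiler, **kw):
    # **kw is always accepted and forwarded, which is what the new tests assert
    return compiler.visit_column(element, **kw)
```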
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed regression caused by new support for reflection of partial indexes on
SQLite added in 1.4.45 for :ticket:`8804`, where the ``index_list`` pragma
command in very old versions of SQLite (possibly prior to 3.8.9) does not
return the current expected number of columns, leading to exceptions raised
when reflecting tables and indexes.
Fixes: #8969
Change-Id: If317cdcfc6782f7e180df329b6ea0ddb48ce2269
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Changed how positional compilation is performed. The statement is rendered by
the compiler the same as for pyformat compilation; the string is then
post-processed to replace the placeholders with the correct ones and to obtain
the correct order of the parameters.
This vastly simplifies the computation of the parameter order, which in the
case of nested CTEs is very hard to compute correctly.
Reworked how the numeric paramstyle behaves:
- added support for repeated parameters, without duplicating them as normal
positional dialects do
- implemented insertmany support. This requires that the dialect supports out
of order placeholders, since all parameters that are not part of the VALUES
clauses are placed at the beginning of the parameter tuple
- support for different identifiers for a numeric parameter. It's for example
possible to use PostgreSQL-style placeholders $1, $2, etc.
Added two new dialects based on SQLite to test "numeric" fully using
both :1 style and $1 style. Includes a workaround for SQLite's
not-really-correct numeric implementation.
Changed the paramstyle of the asyncpg dialect to use numeric, rendering with
its native $ identifiers
Fixes: #8926
Fixes: #8849
Change-Id: I7c640467d49adfe6d795cc84296fc7403dcad4d6
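A rough illustration of the behavior described above (rendered output may vary slightly by version): a repeated bound parameter renders as a single numeric placeholder rather than being duplicated.
```
from sqlalchemy import bindparam, column, select
from sqlalchemy.engine import default

x = column("x")
p = bindparam("val")
stmt = select(x).where(x > p).where(x < p)

compiled = stmt.compile(dialect=default.DefaultDialect(paramstyle="numeric"))
print(compiled.string)  # e.g. SELECT x WHERE x > :1 AND x < :1
```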
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added support for the SQLite backend to reflect the "DEFERRABLE" and
"INITIALLY" keywords which may be present on a foreign key construct. Pull
request courtesy Michael Gorven.
Fixes: #8903
Closes: #8904
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8904
Pull-request-sha: 52aa4cf77482c4051899e21bea75b9830e4c3efa
Change-Id: I713906db1a458d8f1be39625841ca3bbc03ec835
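A short usage sketch; the reflected foreign key's ``options`` dictionary carries the keywords (the exact key names shown are the usual foreign-key option names and are assumed here):
```
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE parent (id INTEGER PRIMARY KEY)"))
    conn.execute(text(
        "CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id INTEGER "
        "REFERENCES parent(id) DEFERRABLE INITIALLY DEFERRED)"
    ))

for fk in inspect(engine).get_foreign_keys("child"):
    print(fk["options"])  # e.g. {'deferrable': True, 'initially': 'DEFERRED'}
```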
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added support for reflection of expression-oriented WHERE criteria included
in indexes on the SQLite dialect, in a manner similar to that of the
PostgreSQL dialect. Pull request courtesy Tobias Pfeiffer.
Fixes: #8804
Closes: #8806
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8806
Pull-request-sha: 539dfcb372360911b69aed2a804698bb1a2220b1
Change-Id: I0e34d47dbe2b9c1da6fce531363084843e5127a3
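Usage sketch; after this change the WHERE criteria appears in the reflected index's dialect options (the ``sqlite_where`` key name mirrors the PostgreSQL convention and is an assumption here):
```
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (id INTEGER PRIMARY KEY, x INTEGER)"))
    conn.execute(text("CREATE INDEX ix_t_x ON t (x) WHERE x > 10"))

for idx in inspect(engine).get_indexes("t"):
    print(idx["name"], idx.get("dialect_options"))  # e.g. {'sqlite_where': ...}
```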
|
|
|
|
|
|
|
|
| |
command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>"
pyupgrade will change assert_ to assertTrue. That was reverted since assertTrue does not
exist in the SQLAlchemy fixtures
Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
|
|
|
|
|
|
| |
using latest 3.9-v7.3.9 and returning does not work at all.
Change-Id: I208c3e1ff10949651ffbebc54beea6ede6af1dd3
|
|
|
|
|
|
|
|
|
|
| |
to do this we have to invent our own isolation level
setter based on their current internals. however,
now we can ensure thread-safe access. we are trying
to resolve an issue where the test suite on CI seems to fail
around the same time in each run.
Change-Id: I79c8fc04b9afef0876fb446ad40a7621a772cd34
|
|\ |
|
| |
| |
| |
| | |
Change-Id: I64e4d4dce8c5f5aced3190f9e3682c630462a61e
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
reviewers: these docs publish periodically at:
https://docs.sqlalchemy.org/en/gerrit/4042/orm/queryguide/index.html
See the "last generated" timestamp near the bottom of the
page to ensure the latest version is up
Change includes some other adjustments:
* small typing fixes for end-user benefit
* removal of a bunch of old examples for patterns that nobody
uses or aren't really what we promote now
* modernization of some examples, including inheritance
Change-Id: I9929daab7797be9515f71c888b28af1209e789ff
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* ORM Insert now includes "bulk" mode that will run
essentially the same process as session.bulk_insert_mappings;
interprets the given list of values as ORM attributes for
key names
* ORM UPDATE has a similar feature, without RETURNING support,
for session.bulk_update_mappings
* Added support for upserts to deliver ORM objects via RETURNING as well
* ORM UPDATE/DELETE with a list of parameters + WHERE criteria
is not implemented; use a connection
* ORM UPDATE/DELETE defaults to "auto" synchronize_session;
use fetch if RETURNING is present, evaluate if not, as
"fetch" is much more efficient (no expired object SELECT problem)
and less error prone if RETURNING is available
UPDATE: however this is inefficient! please continue to
use evaluate for simple cases; auto can move to fetch
if criteria are not evaluable
* "Evaluate" criteria will now not preemptively
unexpire and SELECT attributes that were individually
expired. Instead, if evaluation of the criteria indicates that
the necessary attrs were expired, we expire the object
completely (delete) or expire the SET attrs unconditionally
(update). This keeps the object in the same unloaded state
where it will refresh those attrs on the next pass, for
this generally unusual case. (originally #5664)
* Core change! update/delete rowcount comes from len(rows)
if RETURNING was used. SQLite at least otherwise did not
support this. adjusted test_rowcount accordingly
* ORM DELETE with a list of parameters at all is also not
implemented, as this would imply "bulk", and there is no
bulk_delete_mappings (there could be, but we don't have that)
* ORM insert().values() with single or multi-values translates
key names based on ORM attribute names
* ORM returning() implemented for insert, update, delete;
explicit returning clauses now interpret rows in an ORM
context, with support for qualifying loader options as well
* session.bulk_insert_mappings() assigns polymorphic identity
if not set.
* explicit RETURNING + synchronize_session='fetch' is now
supported with UPDATE and DELETE.
* expanded return_defaults() to work with DELETE also.
* added support for composite attributes to be present
in the dictionaries used by bulk_insert_mappings and
bulk_update_mappings, which is also the new ORM bulk
insert/update feature, that will expand the composite
values into their individual mapped attributes the way they'd
be on a mapped instance.
* bulk UPDATE supports "synchronize_session=evaluate", which is the
default. this does not apply to session.bulk_update_mappings,
just the new version
* both bulk UPDATE and bulk INSERT, the latter with or without
RETURNING, support *heterogeneous* parameter sets.
session.bulk_insert/update_mappings did this, so this feature
is maintained. now the cursor result can be both horizontally
and vertically spliced :)
This is now a long story with a lot of options, which in
itself is a problem to be able to document all of this
in some way that makes sense. raising exceptions for
use cases we haven't supported is pretty important here
too; the tradition of letting unsupported things just not work
is likely not a good idea at this point, though there
are still many cases that aren't easily avoidable
Fixes: #8360
Fixes: #7864
Fixes: #7865
Change-Id: Idf28379f8705e403a3c6a937f6a798a042ef2540
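A compact usage sketch of the ORM bulk INSERT with RETURNING described above (assuming SQLAlchemy 2.0; the mapping is illustrative):
```
from sqlalchemy import create_engine, insert
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "user_account"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # dictionary keys are interpreted as ORM attribute names;
    # returning(User) delivers ORM instances in the result
    users = session.scalars(
        insert(User).returning(User),
        [{"name": "spongebob"}, {"name": "sandy"}],
    ).all()
    print(users)
    session.commit()
```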
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
the feature is enabled for all built-in backends
when RETURNING is used,
except for Oracle, which doesn't need it; on
psycopg2 and mssql+pyodbc it is used for all INSERT statements,
not just those that use RETURNING.
third party dialects would need to opt in to the new feature
by setting use_insertmanyvalues to True.
Also adds dialect-level guards against using RETURNING
with executemany where we don't have an implementation to
suit it. single execute w/ RETURNING still defers to the
server without us checking.
Fixes: #6047
Fixes: #7907
Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
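A hedged sketch of the third-party opt-in mentioned above; the dialect class is illustrative, while ``use_insertmanyvalues`` and ``insert_returning`` are the dialect-level flags referenced in this log:
```
from sqlalchemy.engine import default


class MyThirdPartyDialect(default.DefaultDialect):
    supports_statement_cache = True
    insert_returning = True       # the dialect can render INSERT..RETURNING
    use_insertmanyvalues = True   # opt in to batched "insertmanyvalues"
```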
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The SQLite dialect now supports UPDATE..FROM syntax, for UPDATE statements
that may refer to additional tables within the WHERE criteria of the
statement without the need to use subqueries. This syntax is invoked
automatically when using the :class:`_dml.Update` construct when more than
one table or other entity or selectable is used.
Fixes: #7185
Change-Id: I27e94ace9ff761cc45e652fa1abff8cd1f42fec5
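A quick illustration (tables and criteria are illustrative); referring to a second table in the WHERE clause renders UPDATE..FROM on SQLite:
```
from sqlalchemy import Column, Integer, MetaData, String, Table, update
from sqlalchemy.dialects import sqlite

m = MetaData()
user = Table(
    "user_account", m,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
address = Table(
    "address", m,
    Column("id", Integer, primary_key=True),
    Column("user_id", Integer),
    Column("email", String),
)

stmt = (
    update(user)
    .values(name="updated")
    .where(user.c.id == address.c.user_id)
    .where(address.c.email == "x@example.com")
)
# renders roughly: UPDATE user_account SET name=? FROM address WHERE ...
print(stmt.compile(dialect=sqlite.dialect()))
```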
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
just in my own testing, if I say insert().return_defaults()
and stringify, I should see it, so make sure all the dialects
default to "insert_returning" etc., with a downgrade on
the server version check.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Added a new parameter to SQLite reflection methods called
``sqlite_include_internal=True``; when omitted, local tables that start
with the prefix ``sqlite_``, which per SQLite documentation are noted as
"internal schema" tables such as the ``sqlite_sequence`` table generated to
support "AUTOINCREMENT" columns, will not be included in reflection methods
that return lists of local objects. This prevents issues for example when
using Alembic autogenerate, which previously would consider these
SQLite-generated tables as being removed from the model.
Fixes: #8234
Change-Id: I36ee7a053e04b6c46c912aaa0d7e035a5b88a4f9
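Usage sketch of the new parameter (assuming the 2.0 Inspector methods accept dialect-level keyword arguments; the AUTOINCREMENT table causes ``sqlite_sequence`` to exist):
```
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, x VARCHAR)"
    ))

insp = inspect(engine)
print(insp.get_table_names())  # ['t']; sqlite_sequence filtered out by default
print(insp.get_table_names(sqlite_include_internal=True))  # includes sqlite_sequence
```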
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Adds support for comments on named constraints, including `ForeignKeyConstraint`, `PrimaryKeyConstraint`, `CheckConstraint`, and `UniqueConstraint`, solving [Issue 5667](https://github.com/sqlalchemy/sqlalchemy/issues/5667).
Supports only the PostgreSQL backend.
### Description
Following the example of [Issue 1546](https://github.com/sqlalchemy/sqlalchemy/issues/1546), supports comments on constraints. Specifically, enables comments on _named_ ones; as I understand it, PostgreSQL prohibits comments on unnamed constraints.
Enables setting the comments for named constraints like this:
```
from sqlalchemy import (
    CheckConstraint, Column, Integer, MetaData, PrimaryKeyConstraint,
    String, Table, UniqueConstraint,
)

metadata = MetaData()

Table(
    'example', metadata,
    Column('id', Integer),
    Column('data', String(30)),
    PrimaryKeyConstraint(
        "id", name="id_pk", comment="id_pk comment"
    ),
    CheckConstraint('id < 100', name="cc1", comment="Id value can't exceed 100"),
    UniqueConstraint('data', name="uc1", comment="Must have unique data field"),
)
```
Provides the DDL representation for constraint comments and routines to create and drop them. The `.Inspector` class reflects constraint comments via methods like `get_check_constraints`.
### Checklist
This pull request is:
- [ ] A documentation / typographical error fix
- [ ] A short code fix
- [x] A new feature implementation
- Solves the issue 5667.
- The commit message includes `Fixes: 5667`.
- Includes tests based on comment reflection.
Fixes: #5667
Closes: #7742
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7742
Pull-request-sha: 42a5d3c3e9ccf9a9d5397fd007aeab0854f66130
Change-Id: Ia60f578595afdbd6089541c9a00e37997ef78ad3
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rearchitected the schema reflection API to allow some dialects to make use
of high performing batch queries to reflect the schemas of many tables at
once using much fewer queries. The new performance features are targeted
first at the PostgreSQL and Oracle backends, and may be applied to any
dialect that makes use of SELECT queries against system catalog tables to
reflect tables (currently this omits the MySQL and SQLite dialects, which
instead make use of parsing the "CREATE TABLE" statement; however, these
dialects do not have a pre-existing performance issue with reflection. MS
SQL Server is still a TODO).
The new API is backwards compatible with the previous system, and should
require no changes to third party dialects to retain compatibility;
third party dialects can also opt into the new system by implementing
batched queries for schema reflection.
Along with this change is an updated reflection API that is fully
:pep:`484` typed, features many new methods and some changes.
Fixes: #4379
Change-Id: I897ec09843543aa7012bcdce758792ed3d415d08
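One of the new batched entry points can be sketched as follows; the ``get_multi_columns()`` method name is from the 2.0 :class:`.Inspector` API, and SQLite still resolves it per table internally, so this is purely illustrative:
```
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE a (id INTEGER PRIMARY KEY)"))
    conn.execute(text("CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER)"))

insp = inspect(engine)
# result is keyed by (schema, table name); each value is a list of column dicts
for key, cols in insp.get_multi_columns().items():
    print(key, [col["name"] for col in cols])
```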
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
As almost every dialect supports RETURNING now, RETURNING
is also made more of a default assumption.
* the default compiler generates a RETURNING clause now
when specified; CompileError is no longer raised.
* The dialect-level implicit_returning parameter now has
no effect. It's not fully clear if there are real world
cases relying on the dialect-level parameter, so we will see
once 2.0 is released. ORM-level RETURNING can be disabled
at the table level, and perhaps "implicit returning" should
become an ORM-level option at some point as that's where
it applies.
* Altered ORM update() / delete() to respect table-level
implicit returning for fetch.
* Since MariaDB doesn't support UPDATE..RETURNING, "full_returning"
is now split into insert_returning, update_returning, delete_returning
* Crazy new thing: dialects that have *both* cursor.lastrowid
*and* RETURNING, so now we can pick between them for SQLite
and MariaDB. we are trying to keep it on .lastrowid for
simple inserts with an autoincrement column; this helps with
some edge case test scenarios and i bet .lastrowid is faster
anyway. for any return_defaults() / multiparams etc. we then
use RETURNING
* SQLite decided they don't want to return rows that match in
ON CONFLICT. this is flat out wrong, but for now we need to
work with it.
Fixes: #6195
Fixes: #7011
Closes: #7047
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7047
Pull-request-sha: d25d5ea3abe094f282c53c7dd87f5f53a9e85248
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Change-Id: I9908ce0ff7bdc50bd5b27722081767c31c19a950
|
|/
|
|
|
|
|
|
| |
Fixed an issue where :meth:`_sql.GenerativeSelect.fetch` would be
ignored when executing a statement using the ORM.
Fixes: #8091
Change-Id: I6790c7272a71278e90de2529c8bc8ae89e54e288
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed bug where the PostgreSQL :meth:`_postgresql.Insert.on_conflict`
method and the SQLite :meth:`_sqlite.Insert.on_conflict` method would both
fail to correctly accommodate a column with a separate ".key" when
specifying the column using its key name in the dictionary passed to
``set_``, as well as if the :attr:`_sqlite.Insert.excluded` or
:attr:`_postgresql.Insert.excluded` collection were used as the dictionary
directly.
Fixes: #8014
Change-Id: I67226aeedcb2c683e22405af64720cc1f990f274
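A sketch of the scenario addressed (SQLite shown; the table and the column's separate ``.key`` are illustrative):
```
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import sqlite
from sqlalchemy.dialects.sqlite import insert

m = MetaData()
t = Table(
    "t", m,
    Column("id", Integer, primary_key=True),
    Column("data", String, key="data_key"),  # .key differs from the SQL name
)

stmt = insert(t).values(id=1, data_key="x")
stmt = stmt.on_conflict_do_update(
    index_elements=[t.c.id],
    set_={"data_key": stmt.excluded.data_key},  # referenced by .key
)
print(stmt.compile(dialect=sqlite.dialect()))
```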
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Altered the compilation mechanics of the :class:`.Insert` construct such
that the "autoincrement primary key" column value will be fetched via
``cursor.lastrowid`` or RETURNING even if present in the parameter set or
within the :meth:`.Insert.values` method as a plain bound value, for
single-row INSERT statements on specific backends that are known to
generate autoincrementing values even when explicit NULL is passed. This
restores a behavior that was in the 1.3 series for both the use case of
separate parameter set as well as :meth:`.Insert.values`. In 1.4, the
parameter set behavior unintentionally changed to no longer do this, but
the :meth:`.Insert.values` method would still fetch autoincrement values up
until 1.4.21, where :ticket:`6770` changed the behavior yet again,
unintentionally, as this use case was never covered.
The behavior is now defined as "working" to suit the case where databases
such as SQLite, MySQL and MariaDB will ignore an explicit NULL primary key
value and nonetheless invoke an autoincrement generator.
Fixes: #7998
Change-Id: I5d4105a14217945f87fbe9a6f2a3c87f6ef20529
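A short illustration of the restored behavior on SQLite (values are illustrative):
```
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

m = MetaData()
t = Table("t", m, Column("id", Integer, primary_key=True), Column("x", String))

engine = create_engine("sqlite://")
m.create_all(engine)

with engine.begin() as conn:
    result = conn.execute(t.insert(), {"id": None, "x": "data"})
    # the autoincrement value is fetched even though an explicit None was passed
    print(result.inserted_primary_key)  # (1,)
```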
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
to simplify pyproject.toml change the remaining files
that aren't going to be typed on this first pass
(unless of course someone wants to type some of these)
to include # mypy: ignore-errors. for the moment, only a handful
of ORM modules are to have more type checking implemented.
It's important that ignore-errors is used and
not "# type: ignore", as in the latter case, mypy doesn't even
read the existing types in the file, which makes it impossible to
type any files that refer to those modules at all.
to simplify ongoing typing work use inline mypy config
for remaining files that are "done" for now, indicating the
level of type checking they currently have.
Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
|
|
|
|
|
|
|
|
|
|
|
|
| |
SQLite datetime, date, and time datatypes now use Python standard lib
``fromisoformat()`` methods in order to parse incoming datetime, date, and
time string values. This improves performance vs. the previous regular
expression-based approach, and also automatically accommodates for datetime
and time formats that contain either a six-digit "microseconds" format or a
three-digit "milliseconds" format.
Fixes: #7029
Change-Id: I67aab4fe5ee3055e5996050cf4564981413cc221
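For reference, the standard-library parsing being relied upon (plain stdlib usage, not SQLAlchemy internals):
```
import datetime

# fromisoformat() accepts both six-digit and three-digit fractional seconds
print(datetime.datetime.fromisoformat("2021-03-05 14:30:00.123456"))
print(datetime.datetime.fromisoformat("2021-03-05 14:30:00.123"))
print(datetime.date.fromisoformat("2021-03-05"))
print(datetime.time.fromisoformat("14:30:00"))
```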
|
|
|
|
|
|
|
|
|
| |
Fixed bug where the name of CHECK constraints under SQLite would not be
reflected if the name were created using quotes, as is the case when the
name uses mixed case or special characters.
Fixes: #5463
Change-Id: Ic3b1e0a0385fb9e727b0880e90815ea2814df313
|
|
|
|
|
|
|
| |
strict types type_api.py, including TypeDecorator,
NativeForEmulated, etc.
Change-Id: Ib2eba26de0981324a83733954cb7044a29bbd7db
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
All modules in sqlalchemy.engine are strictly
typed with the exception of cursor, default, and
reflection. cursor and default pass with non-strict
typing, reflection is waiting on the multi-reflection
refactor.
Behavioral changes:
* create_connect_args() methods return a tuple of list,
dict, rather than a list of list, dict
* removed allow_chars parameter from
pyodbc connector ._get_server_version_info()
method
* the parameter list passed to do_executemany is now
a list in all cases. previously, this was being run
through dialect.execute_sequence_format, which
defaults to tuple and was only intended for individual
tuple params.
* broke up dialect.dbapi into dialect.import_dbapi
class method and dialect.dbapi module object. added
a deprecation path for legacy dialects. it's not
really feasible to type a single attr as a classmethod
vs. module type. The "type_compiler" attribute also
has this problem with greater ability to work around,
left that one for now.
* lots of constants changing to be Enum, so that we can
type them. for fixed tuple-position constants in
cursor.py / compiler.py (which are used to avoid the
speed overhead of namedtuple), using Literal[value]
which seems to work well
* some tightening up in Row regarding __getitem__, which
we can do since we are on full 2.0 style result use
* altered the set_connection_execution_options and
set_engine_execution_options event flows so that the
dictionary of options may be mutated within the event
hook, where it will then take effect as the actual
options used. Previously, changing the dict would
be silently ignored which seems counter-intuitive
and not very useful.
* A lot of DefaultDialect/DefaultExecutionContext
methods and attributes, including underscored ones, move
to interfaces. This is not fully ideal as it means
the Dialect/ExecutionContext interfaces aren't publicly
subclassable directly, but their current purpose
is more of documentation for dialect authors who should
(and certainly are) still be subclassing the DefaultXYZ
versions in all cases
Overall, Result was the most difficult class
hierarchy to type here, as this hierarchy passes through
largely amorphous "row" datatypes throughout, which
can in fact be all kinds of different things, like
raw DBAPI rows, or Row objects, or "scalar"/Any; but
at the same time these types have meaning, so I tried to still
maintain some level of semantic markings for them. It
highlights how complex Result is now, as it's trying
to be extremely efficient and inlined while also being
very open-ended and extensible.
Change-Id: I98b75c0c09eab5355fc7a33ba41dd9874274f12a
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added :class:`.Double`, :class:`.DOUBLE`, :class:`.DOUBLE_PRECISION`
datatypes to the base ``sqlalchemy.`` module namespace, for explicit use of
double/double precision as well as generic "double" datatypes. Use
:class:`.Double` for generic support that will resolve to DOUBLE/DOUBLE
PRECISION/FLOAT as needed for different backends.
Implemented DDL and reflection support for ``FLOAT`` datatypes which
include an explicit "binary_precision" value. Using the Oracle-specific
:class:`_oracle.FLOAT` datatype, the new parameter
:paramref:`_oracle.FLOAT.binary_precision` may be specified which will
render Oracle's precision for floating point types directly. This value is
interpreted during reflection. Upon reflecting back a ``FLOAT`` datatype,
the datatype returned is one of :class:`_types.DOUBLE_PRECISION` for a
``FLOAT`` for a precision of 126 (this is also Oracle's default precision
for ``FLOAT``), :class:`_types.REAL` for a precision of 63, and
:class:`_oracle.FLOAT` for a custom precision, as per Oracle documentation.
As part of this change, the generic :paramref:`_sqltypes.Float.precision`
value is explicitly rejected when generating DDL for Oracle, as this
precision cannot be accurately converted to "binary precision"; instead, an
error message encourages the use of
:meth:`_sqltypes.TypeEngine.with_variant` so that Oracle's specific form of
precision may be chosen exactly. This is a backwards-incompatible change in
behavior, as the previous "precision" value was silently ignored for
Oracle.
Fixes: #5465
Closes: #7674
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7674
Pull-request-sha: 5c68419e5aee2e27bf21a8ac9eb5950d196c77e5
Change-Id: I831f4af3ee3b23fde02e8f6393c83e23dd7cd34d
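A small illustration of how the generic :class:`.Double` resolves per backend (output shapes are approximate; the table is illustrative):
```
from sqlalchemy import Column, Double, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table("measurement", m, Column("value", Double))

print(CreateTable(t).compile(dialect=postgresql.dialect()))  # e.g. DOUBLE PRECISION
print(CreateTable(t).compile())  # generic DDL, e.g. DOUBLE
```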
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed issue where SQLite unique constraint reflection would not work
for an inline UNIQUE constraint where the column name had an underscore
in its name.
Added support for reflecting SQLite inline unique constraints where
the column names are formatted with SQLite "escape quotes" ``[]``
or `` ` ``, which are discarded by the database when producing the
column name.
Fixes: #7736
Change-Id: I635003478dc27193995f7d7a6448f9333a498706
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The SQLite dialect now defaults to :class:`_pool.QueuePool` when a file
based database is used. This is set along with setting the
``check_same_thread`` parameter to ``False``. It has been observed that the
previous approach of defaulting to :class:`_pool.NullPool`, which does not
hold onto database connections after they are released, did in fact have a
measurable negative performance impact. As always, the pool class is
customizable via the :paramref:`_sa.create_engine.poolclass` parameter.
Fixes: #7490
Change-Id: I5f6c259def0ef43d401c6163dc99f651e519148d
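Usage note: the prior behavior can still be selected explicitly if desired (file name illustrative):
```
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# file-based SQLite now defaults to QueuePool with check_same_thread=False;
# NullPool remains available via the poolclass parameter
engine = create_engine("sqlite:///app.db", poolclass=NullPool)
```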
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Some `__slots__` were not in order.
Fixes #7527
### Description
I'm fixing two types of slots mistakes:
- [x] remove overlapping slots (i.e. slots already defined on a base class)
- [x] fix broken inheritance (i.e. slots class inheriting from a non-slots class)
- [x] slots added to base class `TransactionalContext`. It seemed to use two attributes, which I've added as slots.
- [x] empty slots removed from `ORMOption`. Its base class explicitly makes use of `__dict__` so empty slots don't add anything.
- [x] empty slots added to `PostLoader`. It doesn't appear to use any slots not already defined on its base classes.
- [x] empty slots added to `IterateMappersMixin`. It doesn't appear to use any slots not already defined on its subclasses.
- [x] empty slots added to `ImmutableContainer`. It doesn't use any fields.
- [x] empty slots added to `OperatorType`. It's a protocol.
- [x] empty slots added to `InternalTraversal`, `_HasTraversalDispatch`. They don't seem to use attributes on their own.
### Checklist
This pull request is:
- [x] A short code fix
Closes: #7589
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7589
Pull-request-sha: 70a9c4d46916b7c6907eb1d3ad4f7033ec964191
Change-Id: I6c6e3e69c3c34d0f3bdda7f0684849834fdd1863
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
start applying foundational annotations to key
elements.
two main elements addressed here:
1. removal of public_factory() and replacement with
explicit functions. this just works much better with
typing.
2. typing support for column expressions and operators.
The biggest part of this involves stubbing out all the
ColumnOperators methods under ColumnElement in a
TYPE_CHECKING section. Took me a while to see this
method vs. much more complicated things I thought
I needed.
Also for this version, implementing #7519: ColumnElement
types against the Python type and not TypeEngine. It is
hoped this leads to easier transference between ORM/Core
as well as eventual support for result set typing.
Not clear yet how well this approach will work and what
new issues it may introduce.
given the current approach we now get full, rich typing for
scenarios like this:
from sqlalchemy import column, Integer, String, Boolean
c1 = column('a', String)
c2 = column('a', Integer)
expr1 = c2.in_([1, 2, 3])
expr2 = c2 / 5
expr3 = -c2
expr4_a = ~(c2 == 5)
expr4_b = ~column('q', Boolean)
expr5 = c1 + 'x'
expr6 = c2 + 10
Fixes: #7519
Fixes: #6810
Change-Id: I078d9f57955549f6f7868314287175f6c61c44cb
|
|
|
|
| |
Change-Id: I49abf2607e0eb0623650efdf0091b1fb3db737ea
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
### Description
Black's `target-version` was still set to `['py27', 'py36']`. Set it to `[py37]` instead.
Also update Black and other pre-commit hooks and re-format with Black.
Closes: #7536
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7536
Pull-request-sha: b3aedf5570d7e0ba6c354e5989835260d0591b08
Change-Id: I8be85636fd2c9449b07a8626050c8bd35bd119d5
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Good news is that pylance likes it and copies over the
signature and everything.
Bad news is that mypy does not support this yet: https://github.com/python/mypy/issues/8645
Other minor bad news is that non_generative is not typed. I've tried using a protocol
like the one in the comment but the signature is not ported over by pylance, so it's
probably best to just live without it to have the correct signature.
notes from mike: these three decorators are at the core of getting
the library to be typed, more good news is that pylance will
do all the things we like re: public_factory, see
https://github.com/microsoft/pyright/issues/2758#issuecomment-1002788656
.
For @_generative, we will likely move to using pep 673 once mypy
supports it which may be soon. but overall having the explicit
"return self" in the methods, while a little inconvenient, makes
the typing more straightforward and locally present in the files
rather than being decided at a distance. having "return self"
present, or not, both have problems, so maybe we will be able
to change it again if things change as far as decorator support.
As it is, I feel like we are barely squeaking by with our decorators,
the typing is already pretty out there.
Change-Id: Ic77e13fc861def76a5925331df85c0aa48d77807
References: #6810
|
|\ |
|
| |
| |
| |
| |
| | |
Change-Id: I7aaeb5bc130271624335b79cf586581d6c6c34c7
References: #4600
|