Fixed an apparently very old issue where the
:paramref:`_postgresql.ENUM.create_type` parameter, when set to its
non-default of ``False``, would not be propagated when the
:class:`_schema.Column` of which it is a part was copied, as is common when
using ORM Declarative mixins.
Fixes: #9773
Change-Id: I79a7c6f052ec39b42400d92bf591c791feca573b
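For illustration (not part of the original commit), a minimal sketch of the Declarative mixin pattern the fix addresses; the table, column, and enum names are made up:
```python
from sqlalchemy.dialects.postgresql import ENUM
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class StatusMixin:
    # create_type=False: SQLAlchemy will not emit CREATE TYPE / DROP TYPE for
    # this enum; with the fix, the flag survives the copy of the column that
    # Declarative makes for each class using the mixin
    status = mapped_column(
        ENUM("draft", "published", name="status_enum", create_type=False)
    )


class Article(StatusMixin, Base):
    __tablename__ = "article"
    id: Mapped[int] = mapped_column(primary_key=True)
```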
Fix a handful of warnings that were being emitted but not raised,
usually because they were inside an "expect_warnings" block.
Modify "expect_warnings" to always use "raise_on_any_unexpected"
behavior and remove that parameter.
Fixed issue in semi-private ``await_only()`` and ``await_fallback()``
concurrency functions where the given awaitable would remain un-awaited if
the function threw a ``GreenletError``, which could cause "was not awaited"
warnings later on if the program continued. In this case, the given
awaitable is now cancelled before the exception is thrown.
Change-Id: I33668c5e8c670454a3d879e559096fb873b57244
Fixed the base class for dialect-specific float/double types; Oracle
:class:`_oracle.BINARY_DOUBLE` now subclasses :class:`_sqltypes.Double`,
and internal types for :class:`_sqltypes.Float` for asyncpg and pg8000 now
correctly subclass :class:`_sqltypes.Float`.
Added suite tests to ensure that floating point types, such as
:class:`_types.Float` and :class:`_types.Double`, are not resolved as
:class:`_types.Numeric` in the dialect, since that may not be compatible in
all cases, such as when casting a value.
Change-Id: I20b814e8e029d57921d9728a55f2570f74c35c87
Implemented the "cartesian product warning" for UPDATE and DELETE
statements, i.e. those which include multiple tables that are not correlated
to each other in some way.
Fixed issue where an :func:`_dml.update` construct that included multiple
tables and no VALUES clause would raise an internal error. Current
behavior for :class:`_dml.Update` with no values is to generate a SQL
UPDATE statement with an empty "set" clause, so this has been made
consistent for this specific sub-case.
Fixes: #9721
Change-Id: I556639811cc930d2e37532965d2ae751882af921
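As an illustrative sketch (names made up, not from the commit), the kind of multi-table UPDATE involved; the uncorrelated form is the one the new warning targets:
```python
from sqlalchemy import Column, Integer, MetaData, String, Table, update

metadata = MetaData()
users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
orders = Table(
    "orders",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("user_id", Integer),
    Column("user_name", String(50)),
)

# correlated: the two tables are tied together in the WHERE clause
stmt = (
    update(orders)
    .values(user_name=users.c.name)
    .where(orders.c.user_id == users.c.id)
)

# uncorrelated: no criteria links orders to users, producing a cartesian
# product; this is the form expected to emit the warning
uncorrelated = update(orders).values(user_name=users.c.name)
```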
Added reflection support in the Oracle dialect for expression-based indexes
and the ordering direction of index expressions.
Fixes: #9597
Change-Id: I40e163496789774e9930f46823d2208c35eab6f8
Fixed another regression due to the "insertmanyvalues" change in 2.0.10 as
part of :ticket:`9618`, in a similar way as regression :ticket:`9701`, where
:class:`.LargeBinary` datatypes also need additional casts when using the
asyncpg driver specifically in order to work with the new bulk INSERT
format.
Fixes: #9739
Change-Id: I57370d269ea757f263c1f3a16c324ceae76fd4e8
Fixed issues regarding reflection of comments for :class:`_schema.Table`
and :class:`_schema.Column` objects, where the comments contained control
characters such as newlines. Additional testing support for these
characters, as well as extended Unicode characters in table and column
comments (the latter of which aren't supported by MySQL/MariaDB), was added
to the test suite.
Fixes: #9722
Change-Id: Id18bf758fdb6231eb705c61eeaf74bb9fa472601
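A small sketch of the reflection round trip being tested, assuming a PostgreSQL connection (the URL and names are illustrative):
```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, inspect

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")  # assumed URL

metadata = MetaData()
notes = Table(
    "notes",
    metadata,
    Column("id", Integer, primary_key=True, comment="primary key\nsecond line"),
    comment="table comment\nwith a newline",
)
metadata.create_all(engine)

insp = inspect(engine)
print(insp.get_table_comment("notes"))          # {'text': 'table comment\nwith a newline'}
print(insp.get_columns("notes")[0]["comment"])  # 'primary key\nsecond line'
```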
Additionally add mssql DOUBLE_PRECISION to mssql.__all__
Change-Id: I93f2db85feeff116278c5c6d0678e20d039a3e1f
Fixed regression caused by the fix for :ticket:`9618` where floating point
values would lose precision when being inserted in bulk, using either the
psycopg2 or psycopg drivers.
Implemented the :class:`_sqltypes.Double` type for SQL Server, having it
resolve to ``REAL``, or :class:`_mssql.REAL`, at DDL rendering time.
Fixed issue in Oracle dialects where ``Decimal`` returning types such as
:class:`_sqltypes.Numeric` would return floating point values, rather than
``Decimal`` objects, when these columns were used in the
:meth:`_dml.Insert.returning` clause to return INSERTed values.
Fixes: #9701
Change-Id: I8865496a6ccac6d44c19d0ca2a642b63c6172da9
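A sketch of the Oracle RETURNING case described above, assuming an Oracle connection; the URL and table are illustrative:
```python
from decimal import Decimal

from sqlalchemy import Column, Integer, MetaData, Numeric, Table, create_engine, insert

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1"  # assumed URL
)

metadata = MetaData()
invoice = Table(
    "invoice",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("amount", Numeric(10, 2)),
)

with engine.begin() as conn:
    metadata.create_all(conn)
    row = conn.execute(
        insert(invoice)
        .values(id=1, amount=Decimal("19.99"))
        .returning(invoice.c.amount)
    ).one()
    # with the fix, the RETURNING value is a Decimal, not a float
    assert isinstance(row.amount, Decimal)
```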
Improved row processing performance for "binary" datatypes by making the
"bytes" handler conditional on a per driver basis. As a result, the
"bytes" result handler has been disabled for nearly all drivers other than
psycopg2, all of which in modern forms support returning Python "bytes"
directly. Pull request courtesy J. Nick Koston.
Fixes: #9680
Closes: #9681
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9681
Pull-request-sha: 4f2fd88bd9af54c54438a3b72a2f30384b0f8898
Change-Id: I394bdcbebaab272e63b13cc02f60813b7aa76839
Added typing information for various parameters of PostgreSQL types (in accordance with the docs).
This pull request is a documentation / typographical error fix; no issue or tests are needed.
Closes: #9594
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9594
Pull-request-sha: c7e39a219108f9e81ad22c008a664b62f09f9d5f
Change-Id: I91b377c246c728885a99df297de7a8933835c540
Change-Id: I6bbef2416f864d1414d56f9bf39026156aed5e67
I faced an issue related to PgBouncer and the prepared statement cache flow in the asyncpg dialect. Following the discussion in https://github.com/sqlalchemy/sqlalchemy/issues/6467, I prepared a PR to support the optional `name` parameter for prepared statements, which `asyncpg` has allowed since version 0.25.0 (https://github.com/MagicStack/asyncpg/pull/846).
**UPD:**
the issue with the proposal: https://github.com/sqlalchemy/sqlalchemy/issues/9608
### Description
Added an optional parameter `name_func` to the `AsyncAdapt_asyncpg_connection` class, which is invoked around `self._connection.prepare()` in order to generate a unique statement name.
So in general, instead of this:
```python
from uuid import uuid4
from asyncpg import Connection

class CConnection(Connection):
    def _get_unique_id(self, prefix: str) -> str:
        return f'__asyncpg_{prefix}_{uuid4()}__'

engine = create_async_engine(
    ...,
    connect_args={
        'connection_class': CConnection,
    },
)
```
it would be enough to write:
```python
from uuid import uuid4

engine = create_async_engine(
    ...,
    connect_args={
        'name_func': lambda: f'__asyncpg_{uuid4()}__',
    },
)
```
This pull request is a new feature implementation.
Fixes: #9608
Closes: #9607
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9607
Pull-request-sha: b4bc8d3e57ab095a26112830ad4bea36083454e3
Change-Id: Icd753366cba166b8a60d1c8566377ec8335cd828
Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This was a continuation of the change in
2.0.9 which disabled the SQL Server version of the feature due to a
reliance in the ORM on apparent row ordering that is not guaranteed to take
place. The fix applies new logic to all "insertmanyvalues" operations,
which takes effect when the new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` is used with the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods; through a combination of alternate SQL forms, direct
correspondence of client-side parameters, and in some cases downgrading to
running row-at-a-time, sorting is applied to each batch of returned rows
using correspondence to primary key or other unique values in each row
which can be correlated to the input data.
Performance impact is expected to be minimal as nearly all common primary
key scenarios are suitable for parameter-ordered batching to be
achieved for all backends other than SQLite, while "row-at-a-time"
mode operates with a bare minimum of Python overhead compared to the very
heavyweight approaches used in the 1.x series. For SQLite, there is no
difference in performance when "row-at-a-time" mode is used.
It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later be
more easily generalized to third party backends that include RETURNING
support but not necessarily easy ways to guarantee a correspondence
with parameter order.
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
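A minimal sketch of the new parameter in use; this assumes a RETURNING-capable backend (SQLite 3.35+ is used here only for brevity), and the table is made up:
```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

engine = create_engine("sqlite://")  # any RETURNING-capable backend; assumed for the sketch
metadata = MetaData()
users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
metadata.create_all(engine)

# sort_by_parameter_order=True requests that returned rows be ordered to
# correspond to the parameter dictionaries passed to execute()
stmt = insert(users).returning(users.c.id, sort_by_parameter_order=True)

with engine.begin() as conn:
    result = conn.execute(
        stmt, [{"name": "spongebob"}, {"name": "sandy"}, {"name": "patrick"}]
    )
    print(result.all())  # ids correspond to spongebob, sandy, patrick, in order
```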
### Description
Fixes: #9509
This pull request is a new feature implementation.
Closes: #9510
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9510
Pull-request-sha: 596648e7989327eef1807057519b2295b48f1adf
Change-Id: I7b527edda09eb78dee6948edd4d49b00ea437011
Removed versionadded and versionchanged directives for versions prior to 1.2,
since they are no longer useful.
Change-Id: I5c53d1188bc5fec3ab4be39ef761650ed8fa6d3e
Fixed issue that prevented reflection of expression-based indexes
with long expressions in PostgreSQL. The expressions were erroneously
truncated to the identifier length (63 bytes by default).
Fixes: #9615
Change-Id: I50727b0699e08fa25f10f3c94dcf8b79534bfb75
Restored the :paramref:`_postgresql.ENUM.name` parameter as optional in the
signature for :class:`_postgresql.ENUM`, as this is chosen automatically
from a given pep-435 ``Enum`` type.
Fixed issue where the comparison for :class:`_postgresql.ENUM` against a
plain string would cast that right-hand side type as VARCHAR, which due to
more explicit casting added to dialects such as asyncpg would produce a
PostgreSQL type mismatch error.
Fixes: #9611
Fixes: #9621
Change-Id: If095544cd1a52016ad2e7cfa2d70c919a94e79c1
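A short sketch of the restored signature, where no ``name`` is needed when a pep-435 ``Enum`` class is given (names illustrative):
```python
import enum

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import ENUM


class Status(enum.Enum):
    draft = "draft"
    published = "published"


metadata = MetaData()
documents = Table(
    "documents",
    metadata,
    Column("id", Integer, primary_key=True),
    # no explicit name= is required; the type name is derived from the
    # pep-435 Enum class
    Column("status", ENUM(Status)),
)
```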
In #9618 we can both look to re-enable insertmanyvalues
for SQL Server, and also likely *disable* its use for the
ORM unit of work specifically, since that's really the only place
where the problem exists, and it will likely exist for all dialects,
not just SQL Server. An approach using sentinel columns will
be rolled out for the unit of work use case.
Change-Id: I3358e30839491769db95b4ac042a661271df3929
References: #9618
References: #9603
We will keep trying to find workarounds; however, this
patch is the "turn it off" patch.
Due to a critical bug identified in SQL Server, the SQLAlchemy
"insertmanyvalues" feature, which allows fast INSERT of many rows while also
supporting RETURNING, unfortunately needs to be disabled for SQL Server. SQL
Server is apparently unable to guarantee that the order of rows inserted
matches the order in which they are sent back by OUTPUT inserted when
table-valued rows are used with INSERT.
We are trying to see if Microsoft is able to confirm this undocumented
behavior; however, there is no known workaround other than avoiding
table-valued expressions with OUTPUT inserted for now.
Fixes: #9603
Change-Id: I4b932fb8774390bbdf4e870a1f6cfe9a78c4b105
Fixes: #9588
References: #9585
Change-Id: Ic6668311ea488339023d7aab1a186f8465131fd8
Changed the bulk INSERT strategy used for SQL Server "executemany" with
pyodbc when ``fast_executemany`` is set to ``True`` by using
``fast_executemany`` / ``cursor.executemany()`` for bulk INSERT that does
not include RETURNING, restoring the same behavior as was used in
SQLAlchemy 1.4 when this parameter is set. For INSERT statements that use
RETURNING, the "insertmanyvalues" strategy continues to be used as it is
the only current strategy that supports RETURNING with bulk INSERT.
Previously, SQLAlchemy 2.0 would use "insertmanyvalues" for all INSERT
statements when ``use_insertmanyvalues`` was left at its default of
``False``, ignoring whether ``fast_executemany`` was set.
New performance details from end users have shown that ``fast_executemany``
is still much faster for very large datasets, as it uses ODBC commands that
can receive all rows in a single round trip, allowing for much larger data
sizes than the batches that can be sent by the current
"insertmanyvalues" strategy.
Fixes: #9586
Change-Id: I85955a10ba77c26cdc0c22e362a827d7aaef2852
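For illustration only, how ``fast_executemany`` is enabled on the pyodbc dialect; the DSN and table are assumptions, not from the commit:
```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine

engine = create_engine(
    "mssql+pyodbc://scott:tiger@myserver/mydb?driver=ODBC+Driver+18+for+SQL+Server",  # assumed DSN
    fast_executemany=True,
)

metadata = MetaData()
measurements = Table("measurements", metadata, Column("x", Integer))
metadata.create_all(engine)

# an executemany-style INSERT without RETURNING now goes through
# cursor.executemany() with fast_executemany, as in SQLAlchemy 1.4
with engine.begin() as conn:
    conn.execute(measurements.insert(), [{"x": 1}, {"x": 2}, {"x": 3}])
```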
Fixed issue where string datatypes such as :class:`.CHAR`,
:class:`.VARCHAR`, :class:`.TEXT`, as well as binary :class:`.BLOB`, could
not be produced with an explicit length of zero, which has special meaning
for MySQL. Pull request courtesy J. Nick Koston.
Fixes: #9544
Closes: #9543
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9543
Pull-request-sha: dc17fc3e93f0ba90881c4efb06016ddf83c7af8b
Change-Id: I96925d45f16887f5dfd68a5d4f9284b3abc46d25
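An illustrative sketch of the zero-length case for MySQL DDL (the table is made up; the printed DDL is approximate):
```python
from sqlalchemy import CHAR, Column, MetaData, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.schema import CreateTable

metadata = MetaData()
# CHAR(0) is legal in MySQL and has a special storage meaning there
t = Table("t", metadata, Column("flag", CHAR(0)))

print(CreateTable(t).compile(dialect=mysql.dialect()))
# roughly: CREATE TABLE t (flag CHAR(0))
```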
Change-Id: I5cd7e9e9ab8a1dae2bd467a1e4299d7f26183301
Fixed critical regression in PostgreSQL dialects such as asyncpg which rely
upon explicit casts in SQL in order for datatypes to be passed to the
driver correctly, where a :class:`.String` datatype would be cast along
with the exact column length being compared, leading to implicit truncation
when comparing a ``VARCHAR`` of a smaller length to a string of greater
length regardless of operator in use (e.g. LIKE, MATCH, etc.). The
PostgreSQL dialect now omits the length from ``VARCHAR`` when rendering
these casts.
Fixes: #9511
Change-Id: If094146d8cfd989a0b780872f38e86fd41ebfec2
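A sketch of the kind of comparison affected (table and values are made up); the casting behavior described lives inside the dialect, so nothing special is needed in user code:
```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select

metadata = MetaData()
users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("code", String(4)),  # rendered as VARCHAR(4)
)

# comparing a VARCHAR(4) column to a longer literal; with the fix, the cast
# applied by dialects such as asyncpg no longer carries the length, so the
# right-hand side is not implicitly truncated to 4 characters
stmt = select(users).where(users.c.code.like("ABCDEFG%"))
```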
### Description
Refactored the lines in `PGDialect.initialize()` that set backslash escapes into their own method, to provide an override hook for [`sqlalchemy-redshift`](https://github.com/sqlalchemy-redshift/sqlalchemy-redshift) to use.
Fixes: #9442
This pull request is a short code fix.
Closes: #9475
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9475
Pull-request-sha: 5565afeac20ea3612c3f427f58efacd8487ac159
Change-Id: I9b652044243ab231c19ab55ebc8ee24534365d61
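A sketch of how a third-party dialect might use such an override hook; the method name `_set_backslash_escapes` is an assumption inferred from the description, not confirmed by this commit:
```python
from sqlalchemy.dialects.postgresql.base import PGDialect


class RedshiftDialectSketch(PGDialect):
    # method name assumed from the description above; a dialect for a
    # PostgreSQL-like database could skip the probe and hard-code the value
    def _set_backslash_escapes(self, connection):
        self._backslash_escapes = False
```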
Added new PostgreSQL type :class:`_postgresql.CITEXT`. Pull request
courtesy Julian David Rath.
Fixes: #9416
Closes: #9417
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9417
Pull-request-sha: 23a83a342ad6d820ee5749ebccda04e54c373f7d
Change-Id: I54699b9457426c20afbdc0acaa41dc57644b0536
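A minimal sketch of the new type in use (table and column names are made up); the ``citext`` extension must exist in the target database:
```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import CITEXT

metadata = MetaData()
accounts = Table(
    "accounts",
    metadata,
    Column("id", Integer, primary_key=True),
    # case-insensitive text; comparisons against this column ignore case
    Column("email", CITEXT),
)
```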
try to get file naming to be more sane for pysqlite file databases
Change-Id: I68ad8c2f6c6c25930fbffdd79b8d429cd7a7dd9a
Fixed reflection bug where Oracle "name normalize" would not work correctly
for reflection of symbols that are in the "PUBLIC" schema, such as
synonyms, meaning the PUBLIC name could not be indicated as lower case on
the Python side for the :paramref:`_schema.Table.schema` argument. Using
uppercase "PUBLIC" would work, but would then lead to awkward SQL queries
including a quoted ``"PUBLIC"`` name as well as indexing the table under
uppercase "PUBLIC", which was inconsistent.
Fixes: #9459
Change-Id: I989bd1e794a5b5ac9aae4f4a8702f14c56cd74c2
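A sketch of the reflection call this enables, assuming an Oracle URL and an existing public synonym (both illustrative):
```python
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1"  # assumed URL
)

metadata = MetaData()
# with the fix, the lower-case "public" schema name is normalized to
# Oracle's PUBLIC schema when reflecting, e.g., a public synonym
some_synonym = Table(
    "some_synonym",
    metadata,
    autoload_with=engine,
    schema="public",
    oracle_resolve_synonyms=True,
)
```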
the verbiage here was ambiguous previously.
Change-Id: I452ae85bd8b5469d4103970e99cfac752b508274
pymssql seems to be maintained again and appears to be working
completely, so let's try re-enabling it.
Fixed issue in the new :class:`.Uuid` datatype which prevented it from
working with the pymssql driver. As pymssql seems to be maintained again,
restored testing support for pymssql.
Tweaked the pymssql dialect to take better advantage of
RETURNING for INSERT statements in order to retrieve last inserted primary
key values, in the same way as occurs for the mssql+pyodbc dialect right
now.
Identified that the ``sqlite`` and ``mssql+pyodbc`` dialects are now
compatible with the SQLAlchemy ORM's "versioned rows" feature, since
SQLAlchemy now computes rowcount for a RETURNING statement in this specific
case by counting the rows returned, rather than relying upon
``cursor.rowcount``. In particular, the ORM versioned rows use case
(documented at :ref:`mapper_version_counter`) should now be fully
supported with the SQL Server pyodbc dialect.
Change-Id: I38a0666587212327aecf8f98e86031ab25d1f14d
References: #5321
Fixes: #9414
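For reference, a minimal sketch of the ORM "versioned rows" mapping mentioned above (see :ref:`mapper_version_counter`); the class and columns are illustrative:
```python
from sqlalchemy import Integer, String
from sqlalchemy.orm import DeclarativeBase, mapped_column


class Base(DeclarativeBase):
    pass


class Widget(Base):
    __tablename__ = "widget"

    id = mapped_column(Integer, primary_key=True)
    version_id = mapped_column(Integer, nullable=False)
    name = mapped_column(String(50))

    # the ORM bumps version_id on each UPDATE and verifies the affected row
    # count; per the note above, the check also works where rowcount is
    # derived from counting RETURNING rows (e.g. mssql+pyodbc, sqlite)
    __mapper_args__ = {"version_id_col": version_id}
```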
The support for pool ping listeners to receive exception events via the
:meth:`.ConnectionEvents.handle_error` event added in 2.0.0b1 for
:ticket:`5648` failed to take into account dialect-specific ping routines
such as those of MySQL and PostgreSQL. The dialect feature has been reworked
so that all dialects participate within event handling. Additionally,
a new boolean element :attr:`.ExceptionContext.is_pre_ping` is added
which identifies if this operation is occurring within the pre-ping
operation.
For this release, third party dialects which implement a custom
:meth:`_engine.Dialect.do_ping` method can opt in to the newly improved
behavior by having their method no longer catch exceptions or check
exceptions for "is_disconnect", instead just propagating all exceptions
outwards. Checking the exception for "is_disconnect" is now done by an
enclosing method on the default dialect, which ensures that the event hook
is invoked for all exception scenarios before testing the exception as a
"disconnect" exception. If an existing ``do_ping()`` method continues to
catch exceptions and check "is_disconnect", it will continue to work as it
did previously, but ``handle_error`` hooks will not have access to the
exception if it isn't propagated outwards.
Fixes: #5648
Change-Id: I6535d5cb389e1a761aad8c37cfeb332c548b876d
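A sketch of a listener that uses the new attribute, assuming an engine with ``pool_pre_ping`` enabled (the URL is illustrative):
```python
from sqlalchemy import create_engine, event

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",  # assumed URL
    pool_pre_ping=True,
)


@event.listens_for(engine, "handle_error")
def receive_handle_error(exception_context):
    # is_pre_ping is True when the error was raised inside the connection
    # pool's pre-ping routine
    if exception_context.is_pre_ping:
        print("error during pre-ping:", exception_context.original_exception)
```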
Fixed bug that prevented SQLAlchemy from connecting when using a very old
SQLite version (before 3.9) on Python 3.8+.
Fixes: #9379
Change-Id: I10ca347398221c952e1a572dc6ef80e491d1f5cf