path: root/lib/sqlalchemy/dialects/postgresql/asyncpg.py
Commit log (each entry: commit message; author, date; files changed, lines -removed/+added)
* Merge "fix test suite warnings" into mainmike bayer2023-05-101-0/+1
|\
| * fix test suite warningsMike Bayer2023-05-091-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | fix a handful of warnings that were emitting but not raising, usually because they were inside an "expect_warnings" block. modify "expect_warnings" to always use "raise_on_any_unexpected" behavior; remove this parameter. Fixed issue in semi-private ``await_only()`` and ``await_fallback()`` concurrency functions where the given awaitable would remain un-awaited if the function threw a ``GreenletError``, which could cause "was not awaited" warnings later on if the program continued. In this case, the given awaitable is now cancelled before the exception is thrown. Change-Id: I33668c5e8c670454a3d879e559096fb873b57244
* | Merge "Ensure float are not implemented as numeric" into mainmike bayer2023-05-091-1/+1
|\ \
| * | Ensure float are not implemented as numericFederico Caselli2023-05-091-1/+1
| |/ | | | | | | | | | | | | | | | | | | | | | | | | | | Fixed the base class for dialect-specific float/double types; Oracle :class:`_oracle.BINARY_DOUBLE` now subclasses :class:`_sqltypes.Double`, and internal types for :class:`_sqltypes.Float` for asyncpg and pg8000 now correctly subclass :class:`_sqltypes.Float`. Added suite tests to ensure that floating point types, such as class:`_types.Float` and :class:`_types.Double` are not resolved as class:`_types.Numeric` in the dialect, since it may not compatible in all cases, such as when casting a value. Change-Id: I20b814e8e029d57921d9728a55f2570f74c35c87
* add bind casts for BYTEA on asyncpg (Mike Bayer, 2023-05-04; 1 file, -0/+6)

    Fixed another regression due to the "insertmanyvalues" change in 2.0.10 made as part of
    :ticket:`9618`, in a similar way as regression :ticket:`9701`, where :class:`.LargeBinary`
    datatypes also need additional casts when using the asyncpg driver specifically, in order
    to work with the new bulk INSERT format.

    Fixes: #9739
    Change-Id: I57370d269ea757f263c1f3a16c324ceae76fd4e8
* changelog fixes; edits (Mike Bayer, 2023-04-21; 1 file, -4/+4)

    Change-Id: I6bbef2416f864d1414d56f9bf39026156aed5e67
* Merge "Add name_func optional attribute for asyncpg adapter" into mainFederico Caselli2023-04-211-3/+67
|\
| * Add name_func optional attribute for asyncpg adapterPavel Sirotkin2023-04-211-3/+67
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | I faced an issue related to pg bouncer and prepared statement cache flow in asyncpg dialect. Regarding this discussion https://github.com/sqlalchemy/sqlalchemy/issues/6467 I prepared PR to support an optional parameter `name` in prepared statement which is allowed, since 0.25.0 version in `asyncpg` https://github.com/MagicStack/asyncpg/pull/846 **UPD:** the issue with proposal: https://github.com/sqlalchemy/sqlalchemy/issues/9608 ### Description Added optional parameter `name_func` to `AsyncAdapt_asyncpg_connection` class which will call on the `self._connection.prepare()` function and populate a unique name. so in general instead this ```python from uuid import uuid4 from asyncpg import Connection class CConnection(Connection): def _get_unique_id(self, prefix: str) -> str: return f'__asyncpg_{prefix}_{uuid4()}__' engine = create_async_engine(..., connect_args={ 'connection_class': CConnection, }, ) ``` would be enough ```python from uuid import uuid4 engine = create_async_engine(..., connect_args={ 'name_func': lambda: f'__asyncpg_{uuid4()}__', }, ) ``` ### Checklist <!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once) --> This pull request is: - [ ] A documentation / typographical error fix - Good to go, no issue or tests are needed - [ ] A short code fix - please include the issue number, and create an issue if none exists, which must include a complete example of the issue. one line code fixes without an issue and demonstration will not be accepted. - Please include: `Fixes: #<issue number>` in the commit message - please include tests. one line code fixes without tests will not be accepted. - [x] A new feature implementation - please include the issue number, and create an issue if none exists, which must include a complete example of how the feature would look. - Please include: `Fixes: #<issue number>` in the commit message - please include tests. **Have a nice day!** Fixes: #9608 Closes: #9607 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9607 Pull-request-sha: b4bc8d3e57ab095a26112830ad4bea36083454e3 Change-Id: Icd753366cba166b8a60d1c8566377ec8335cd828
* add deterministic imv returning ordering using sentinel columns (Mike Bayer, 2023-04-21; 1 file, -0/+6)

    Repaired a major shortcoming which was identified in the :ref:`engine_insertmanyvalues`
    performance optimization feature first introduced in the 2.0 series. This was a
    continuation of the change in 2.0.9 which disabled the SQL Server version of the feature
    due to a reliance in the ORM on apparent row ordering that is not guaranteed to take place.

    The fix applies new logic to all "insertmanyvalues" operations, which takes effect when the
    new parameter :paramref:`_dml.Insert.returning.sort_by_parameter_order` is set on the
    :meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults` methods; through a
    combination of alternate SQL forms, direct correspondence of client side parameters, and in
    some cases downgrading to running row-at-a-time, sorting will be applied to each batch of
    returned rows using correspondence to primary key or other unique values in each row which
    can be correlated to the input data.

    Performance impact is expected to be minimal as nearly all common primary key scenarios are
    suitable for parameter-ordered batching to be achieved for all backends other than SQLite,
    while "row-at-a-time" mode operates with a bare minimum of Python overhead compared to the
    very heavyweight approaches used in the 1.x series. For SQLite, there is no difference in
    performance when "row-at-a-time" mode is used.

    It's anticipated that with an efficient "row-at-a-time" INSERT with RETURNING batching
    capability, the "insertmanyvalues" feature can later be more easily generalized to third
    party backends that include RETURNING support but not necessarily easy ways to guarantee a
    correspondence with parameter order.

    Fixes: #9618
    References: #9603
    Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
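Illustration (not part of the commit): a minimal sketch of the new parameter, assuming SQLAlchemy 2.0.10+; the table, engine, and data are invented for the example, and SQLite 3.35+ is assumed for RETURNING support.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

metadata = MetaData()
user_table = Table(
    "user_account", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

stmt = insert(user_table).returning(
    user_table.c.id, sort_by_parameter_order=True
)
with engine.begin() as conn:
    # Returned primary keys line up with the order of the parameter dicts,
    # even if the backend batches or reorders the INSERT internally.
    result = conn.execute(stmt, [{"name": "a"}, {"name": "b"}, {"name": "c"}])
    print([row.id for row in result])
```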
* ensure event handlers called for all do_ping (Mike Bayer, 2023-03-04; 1 file, -9/+2)

    The support for pool ping listeners to receive exception events via the
    :meth:`.ConnectionEvents.handle_error` event added in 2.0.0b1 for :ticket:`5648` failed to
    take into account dialect-specific ping routines such as those of MySQL and PostgreSQL.
    The dialect feature has been reworked so that all dialects participate within event
    handling. Additionally, a new boolean element :attr:`.ExceptionContext.is_pre_ping` is
    added which identifies if this operation is occurring within the pre-ping operation.

    For this release, third party dialects which implement a custom
    :meth:`_engine.Dialect.do_ping` method can opt in to the newly improved behavior by having
    their method no longer catch exceptions or check exceptions for "is_disconnect", instead
    just propagating all exceptions outwards. Checking the exception for "is_disconnect" is now
    done by an enclosing method on the default dialect, which ensures that the event hook is
    invoked for all exception scenarios before testing the exception as a "disconnect"
    exception. If an existing ``do_ping()`` method continues to catch exceptions and check
    "is_disconnect", it will continue to work as it did previously, but ``handle_error`` hooks
    will not have access to the exception if it isn't propagated outwards.

    Fixes: #5648
    Change-Id: I6535d5cb389e1a761aad8c37cfeb332c548b876d
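Illustration (not part of the commit): a minimal sketch of a ``handle_error`` listener that inspects the new ``is_pre_ping`` flag, assuming SQLAlchemy 2.0.5+; the URL and logger name are invented.

```python
import logging

from sqlalchemy import create_engine, event

log = logging.getLogger("myapp.db")

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test", pool_pre_ping=True
)


@event.listens_for(engine, "handle_error")
def on_handle_error(context):
    # With the reworked do_ping() flow, this hook now fires for errors raised
    # inside dialect-specific ping routines as well.
    if context.is_pre_ping:
        log.warning("connection failed pre-ping check: %s", context.original_exception)
```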
* [asyncpg] Extract rowcount for SELECT statements (Michael Gorven, 2023-01-04; 1 file, -1/+1)

    Added support to the asyncpg dialect to return the ``cursor.rowcount`` value for SELECT
    statements when available. While this is not a typical use for ``cursor.rowcount``, the
    other PostgreSQL dialects generally provide this value. Pull request courtesy Michael
    Gorven.

    Fixes: #9048
    Closes: #9049
    Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9049
    Pull-request-sha: df16160530c6001d99de059995ad5047a75fb7b0
    Change-Id: I095b866779ccea7e4d50bc841fef7605e61c667f
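Illustration (not part of the commit): a minimal sketch of reading ``rowcount`` after a SELECT on the asyncpg dialect; the URL and table name are invented, and a running PostgreSQL server is assumed.

```python
import asyncio

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine


async def main() -> None:
    engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost/test")
    async with engine.connect() as conn:
        result = await conn.execute(text("SELECT * FROM user_account"))
        rows = result.fetchall()
        # With this change, rowcount reflects the number of rows the SELECT
        # returned (when asyncpg makes it available), matching other PG drivers.
        print(result.rowcount, len(rows))
    await engine.dispose()


asyncio.run(main())
```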
* happy new year 2023 (Mike Bayer, 2023-01-03; 1 file, -1/+1)

    Change-Id: I625af65b3fb1815b1af17dc2ef47dd697fdc3fb1
* add explicit REGCONFIG, pg full text functions (Mike Bayer, 2022-12-15; 1 file, -0/+6)

    Added support for explicit use of PG full text functions with asyncpg and psycopg
    (SQLAlchemy 2.0 only), with regards to the ``REGCONFIG`` type cast for the first argument,
    which previously would be incorrectly cast to a VARCHAR, causing failures on these dialects
    that rely upon explicit type casts. This includes support for
    :class:`_postgresql.to_tsvector`, :class:`_postgresql.to_tsquery`,
    :class:`_postgresql.plainto_tsquery`, :class:`_postgresql.phraseto_tsquery`,
    :class:`_postgresql.websearch_to_tsquery`, :class:`_postgresql.ts_headline`, each of which
    will determine, based on the number of arguments passed, whether the first string argument
    should be interpreted as a PostgreSQL "REGCONFIG" value; if so, the argument is typed using
    a newly added type object :class:`_postgresql.REGCONFIG` which is then explicitly cast in
    the SQL expression.

    Fixes: #8977
    Change-Id: Ib36698a984fd4194bd6e0eb663105f790f3db7d3
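Illustration (not part of the commit): a minimal sketch of a full text query where the leading ``'english'`` argument is treated as a REGCONFIG value; the table and column names are invented.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, func, select

metadata = MetaData()
document = Table(
    "document", metadata,
    Column("id", Integer, primary_key=True),
    Column("body", String),
)

# When two arguments are passed, the leading 'english' string is interpreted as a
# REGCONFIG value and cast explicitly, instead of being sent as VARCHAR.
stmt = select(document.c.id).where(
    func.to_tsvector("english", document.c.body).bool_op("@@")(
        func.plainto_tsquery("english", "fat cats")
    )
)
```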
* Rewrite positional handling, test for "numeric" (Federico Caselli, 2022-12-05; 1 file, -14/+3)

    Changed how the positional compilation is performed. It's rendered by the compiler the same
    as the pyformat compilation. The string is then processed to replace the placeholders with
    the correct ones, and to obtain the correct order of the parameters. This vastly simplifies
    the computation of the order of the parameters, which in the case of nested CTEs is very
    hard to compute correctly.

    Reworked how the numeric paramstyle behaves:
    - added support for repeated parameters, without duplicating them like in normal positional
      dialects
    - implemented insertmany support. This requires that the dialect supports out-of-order
      placeholders, since all parameters that are not part of the VALUES clauses are placed at
      the beginning of the parameter tuple
    - support for different identifiers for a numeric parameter. It's for example possible to
      use postgresql style placeholders $1, $2, etc.

    Added two new dialects based on sqlite to test "numeric" fully, using both :1 style and $1
    style. Includes a workaround for SQLite's not-really-correct numeric implementation.

    Changed the paramstyle of the asyncpg dialect to use numeric, rendering with its native $
    identifiers.

    Fixes: #8926
    Fixes: #8849
    Change-Id: I7c640467d49adfe6d795cc84296fc7403dcad4d6
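Illustration (not part of the commit): a minimal sketch showing that compiling a statement with the asyncpg dialect now renders its native numeric placeholders; the table is invented and the output shown is only an approximation.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects.postgresql import asyncpg

metadata = MetaData()
user_account = Table(
    "user_account", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

stmt = select(user_account).where(user_account.c.name == "spongebob")
# Placeholders render as $1, $2, ... (exact bind casts may vary), e.g. roughly:
#   SELECT user_account.id, user_account.name
#   FROM user_account
#   WHERE user_account.name = $1::VARCHAR
print(stmt.compile(dialect=asyncpg.dialect()))
```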
* Try running pyupgrade on the code (Federico Caselli, 2022-11-16; 1 file, -5/+3)

    command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>"

    pyupgrade will change assert_ to assertTrue. That was reverted since assertTrue does not
    exist in sqlalchemy fixtures

    Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
* Only convert Range for sqlalchemy Range object (Mike Bayer, 2022-10-21; 1 file, -8/+6)

    Refined the new approach to range objects described at :ref:`change_7156` to accommodate
    driver-specific range and multirange objects, to better accommodate both legacy code as
    well as when passing results from raw SQL result sets back into new range or multirange
    expressions.

    Fixes: #8690
    Change-Id: I7e62c47067f695c6380ad0fe2fe19deaf33594d1
* implement batched INSERT..VALUES () () for executemany (Mike Bayer, 2022-09-24; 1 file, -2/+0)

    the feature is enabled for all built in backends when RETURNING is used, except for Oracle
    which doesn't need it; on psycopg2 and mssql+pyodbc it is used for all INSERT statements,
    not just those that use RETURNING.

    third party dialects would need to opt in to the new feature by setting
    use_insertmanyvalues to True.

    Also adds dialect-level guards against using returning with executemany where we don't have
    an implementation to suit it. execute single w/ returning still defers to the server
    without us checking.

    Fixes: #6047
    Fixes: #7907
    Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
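Illustration (not part of the commit): a minimal sketch of how a third party dialect might opt in; the class and backend name are hypothetical.

```python
from sqlalchemy.engine.default import DefaultDialect


class MyThirdPartyDialect(DefaultDialect):
    name = "mydb"  # hypothetical third party backend
    supports_statement_cache = True
    # Opt in to the batched INSERT..VALUES executemany feature; RETURNING support
    # on the backend is required for the RETURNING-based path.
    use_insertmanyvalues = True
    insert_returning = True
```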
* auto-cast PG range types (Mike Bayer, 2022-09-20; 1 file, -2/+2)

    Range type handling has been enhanced so that it automatically renders type casts, so that
    in-place round trips for statements that don't provide the database with any context don't
    require the :func:`_sql.cast` construct to be explicit for the database to know the desired
    type.

    Change-Id: Id630b726f8a23059dd2f4cbc410bf5229d89cbfb
    References: #8540
* Use ``;`` instead of ``select 1`` to ping PostgreSQL (Federico Caselli, 2022-09-15; 1 file, -0/+17)

    Fixes: #8491
    Change-Id: I941d2a3cf92e5609e2045a53cec94522340951db
* integrate connection.terminate() for supporting dialects (Mike Bayer, 2022-08-24; 1 file, -0/+7)

    Integrated support for asyncpg's ``terminate()`` method call for cases where the connection
    pool is recycling a possibly timed-out connection, where a connection is being garbage
    collected that wasn't gracefully closed, as well as when the connection has been
    invalidated. This allows asyncpg to abandon the connection without waiting for a response
    that may incur long timeouts.

    Fixes: #8419
    Change-Id: Ia575af779d5733b483a72dff3690b8bbbad2bb05
* JSONPATH type can be used in casts in PostgreSQL (Federico Caselli, 2022-08-17; 1 file, -4/+9)

    Introduced the type :class:`_postgresql.JSONPATH` that can be used in cast expressions.
    This is required by some PostgreSQL dialects when using functions such as
    ``jsonb_path_exists`` or ``jsonb_path_match`` that accept a ``jsonpath`` as input.

    Fixes: #8216
    Change-Id: I3e7337eab91680cab1604e1f3058854a0a19c5be
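Illustration (not part of the commit): a minimal sketch of casting a path string to ``JSONPATH`` so drivers that require explicit casts accept it; the table and column names are invented.

```python
from sqlalchemy import Column, Integer, MetaData, Table, cast, func, select
from sqlalchemy.dialects.postgresql import JSONB, JSONPATH

metadata = MetaData()
event_table = Table(
    "event", metadata,
    Column("id", Integer, primary_key=True),
    Column("payload", JSONB),
)

# Renders roughly: jsonb_path_exists(event.payload, CAST(:param AS JSONPATH))
stmt = select(event_table.c.id).where(
    func.jsonb_path_exists(
        event_table.c.payload, cast("$.items[*] ? (@.qty > 10)", JSONPATH)
    )
)
```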
* implement PG ranges/multiranges agnostically (Mike Bayer, 2022-08-05; 1 file, -0/+95)

    Ranges now work using a new Range object, multiranges as lists of Range objects (this is
    what asyncpg does. not sure why psycopg has a "Multirange" type). psycopg, psycopg2, and
    asyncpg are currently supported.

    It's not clear how to make ranges work with pg8000, likely needs string conversion; this is
    straightforward with the new architecture and can be added later.

    Fixes: #8178
    Change-Id: Iab8d8382873d5c14199adbe3f09fd0dc17e2b9f1
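Illustration (not part of the commit): a minimal sketch of the driver-agnostic ``Range`` object with a range column, assuming the SQLAlchemy 2.0 API; the table name is invented.

```python
from sqlalchemy import Column, Integer, MetaData, Table, insert
from sqlalchemy.dialects.postgresql import INT4RANGE, Range

metadata = MetaData()
booking = Table(
    "booking", metadata,
    Column("id", Integer, primary_key=True),
    Column("room_nights", INT4RANGE),
)

# Range is accepted by psycopg, psycopg2 and asyncpg alike; bounds follows the
# PostgreSQL convention, "[)" meaning lower-inclusive / upper-exclusive.
stmt = insert(booking).values(room_nights=Range(1, 4, bounds="[)"))
```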
* rearchitect reflection for batched performance (Federico Caselli, 2022-06-18; 1 file, -0/+5)

    Rearchitected the schema reflection API to allow some dialects to make use of high
    performing batch queries to reflect the schemas of many tables at once using much fewer
    queries. The new performance features are targeted first at the PostgreSQL and Oracle
    backends, and may be applied to any dialect that makes use of SELECT queries against system
    catalog tables to reflect tables (currently this omits the MySQL and SQLite dialects which
    instead make use of parsing the "CREATE TABLE" statement; however these dialects do not
    have a pre-existing performance issue with reflection. MS SQL Server is still a TODO).

    The new API is backwards compatible with the previous system, and should require no changes
    to third party dialects to retain compatibility; third party dialects can also opt into the
    new system by implementing batched queries for schema reflection.

    Along with this change is an updated reflection API that is fully :pep:`484` typed,
    features many new methods and some changes.

    Fixes: #4379
    Change-Id: I897ec09843543aa7012bcdce758792ed3d415d08
* add backend agnostic UUID datatype (Mike Bayer, 2022-06-01; 1 file, -34/+0)

    Added new backend-agnostic :class:`_types.Uuid` datatype generalized from the PostgreSQL
    dialects to now be a core type, as well as migrated :class:`_types.UUID` from the
    PostgreSQL dialect. Thanks to Trevor Gross for the help on this.

    also includes:

    * corrects some missing behaviors in the suite literal fixtures test where row round trips
      weren't being correctly asserted.
    * fixes some of the ISO literal date rendering added in 952383f9ee0 for #5052 to truncate
      datetime strings for date/time datatypes in the same way that drivers typically do for
      bound parameters; this was not working fully and wasn't caught by the broken test fixture

    Fixes: #7212
    Change-Id: I981ac6d34d278c18281c144430a528764c241b04
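Illustration (not part of the commit): a minimal sketch of the backend-agnostic ``Uuid`` type, which stores a native UUID on PostgreSQL and a string form elsewhere; the table name is invented and SQLAlchemy 2.0 is assumed.

```python
import uuid

from sqlalchemy import Column, MetaData, String, Table, Uuid, create_engine, insert, select

metadata = MetaData()
account = Table(
    "account", metadata,
    Column("id", Uuid, primary_key=True, default=uuid.uuid4),
    Column("name", String(50)),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(insert(account), [{"name": "demo"}])
    # Values come back as uuid.UUID objects regardless of backend storage format.
    print(conn.execute(select(account.c.id)).scalar_one())
```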
* inline mypy config; files ignoring type errors for the moment (Mike Bayer, 2022-04-28; 1 file, -0/+2)

    to simplify pyproject.toml change the remaining files that aren't going to be typed on this
    first pass (unless of course someone wants to type some of these) to include
    # mypy: ignore-errors.

    for the moment, only a handful of ORM modules are to have more type checking implemented.
    It's important that ignore-errors is used and not "# type: ignore", as in the latter case,
    mypy doesn't even read the existing types in the file, which makes it impossible to type
    any files that refer to those modules at all.

    to simplify ongoing typing work use inline mypy config for remaining files that are "done"
    for now, indicating the level of type checking they currently have.

    Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
* pep-484 for engine (Mike Bayer, 2022-03-01; 1 file, -1/+1)

    All modules in sqlalchemy.engine are strictly typed with the exception of cursor, default,
    and reflection. cursor and default pass with non-strict typing, reflection is waiting on
    the multi-reflection refactor.

    Behavioral changes:

    * create_connect_args() methods return a tuple of list, dict, rather than a list of list,
      dict
    * removed allow_chars parameter from pyodbc connector ._get_server_version_info() method
    * the parameter list passed to do_executemany is now a list in all cases. previously, this
      was being run through dialect.execute_sequence_format, which defaults to tuple and was
      only intended for individual tuple params.
    * broke up dialect.dbapi into dialect.import_dbapi class method and dialect.dbapi module
      object. added a deprecation path for legacy dialects. it's not really feasible to type a
      single attr as a classmethod vs. module type. The "type_compiler" attribute also has this
      problem with greater ability to work around, left that one for now.
    * lots of constants changing to be Enum, so that we can type them. for fixed tuple-position
      constants in cursor.py / compiler.py (which are used to avoid the speed overhead of
      namedtuple), using Literal[value] which seems to work well
    * some tightening up in Row regarding __getitem__, which we can do since we are on full 2.0
      style result use
    * altered the set_connection_execution_options and set_engine_execution_options event flows
      so that the dictionary of options may be mutated within the event hook, where it will
      then take effect as the actual options used. Previously, changing the dict would be
      silently ignored which seems counter-intuitive and not very useful.
    * A lot of DefaultDialect/DefaultExecutionContext methods and attributes, including
      underscored ones, move to interfaces. This is not fully ideal as it means the
      Dialect/ExecutionContext interfaces aren't publicly subclassable directly, but their
      current purpose is more of documentation for dialect authors who should (and certainly
      are) still be subclassing the DefaultXYZ versions in all cases

    Overall, Result was the most extremely difficult class hierarchy to type here as this
    hierarchy passes through largely amorphous "row" datatypes throughout, which can in fact be
    all kinds of different things, like raw DBAPI rows, or Row objects, or "scalar"/Any, but at
    the same time these types have meaning so I tried still maintaining some level of semantic
    markings for these. it highlights how complex Result is now, as it's trying to be extremely
    efficient and inlined while also being very open-ended and extensible.

    Change-Id: I98b75c0c09eab5355fc7a33ba41dd9874274f12a
* doc fixes (Mike Bayer, 2022-02-09; 1 file, -0/+19)

    * clarify merge behavior for non-present attributes, references #7687
    * fix AsyncSession in async_scoped_session documentation, name the scoped session
      AsyncScopedSession, fixes: #7671
    * Use non-deprecated execute() style in sqltypes JSON examples, fixes: #7633
    * Add note regarding mitigation for https://github.com/MagicStack/asyncpg/issues/727,
      fixes #7245

    Fixes: #7671
    Fixes: #7633
    Fixes: #7245
    Change-Id: Ic40b4378ca321367a912864f4eddfdd9714fe217
* happy new year 2022 (Mike Bayer, 2022-01-06; 1 file, -1/+1)

    Change-Id: I49abf2607e0eb0623650efdf0091b1fb3db737ea
* Replace c extension with cython versions. [workflow_test_cython] (Federico Caselli, 2021-12-17; 1 file, -1/+1)

    Re-implement the c version of immutabledict / processors / resultproxy / utils with cython.
    Performance is in general on par with or better than the c version.

    Added a collection module that has cython versions of OrderedSet and IdentitySet.

    Added a new test/perf file to compare the implementations. Run
    ``python test/perf/compiled_extensions.py all`` to execute the comparison test. See results
    here:
    https://docs.google.com/document/d/1nOcDGojHRtXEkuy4vNXcW_XOJd9gqKhSeALGG3kYr6A/edit?usp=sharing

    Fixes: #7256
    Change-Id: I2930ef1894b5048210384728118e586e813f6a76
    Signed-off-by: Federico Caselli <cfederico87@gmail.com>
* provide connectionfairy on initialize (Mike Bayer, 2021-11-29; 1 file, -3/+3)

    This is so that dialect methods that are called within init can assume the same argument
    structure as when they are called in other places; we can nail down the type of object as
    well.

    This change seems to mostly impact the isolation level routines in the dialects, as these
    are called during initialize() as well as on established connections. these methods can now
    assume a non-proxied DBAPI connection object in all cases, as it is commonly required that
    attributes like ".autocommit" are set on the object which don't work well in a proxied
    situation.

    Other changes:

    * adds an interface for the "connectionfairy" concept called PoolProxiedConnection.
    * Removes ``Connectable`` superclass of Connection. ``Connectable`` was originally meant to
      provide for the "method which accepts connection or engine" theme. As this pattern is
      greatly reduced in 2.0 and Engine no longer extends from it, the ``Connectable``
      superclass doesn't serve any real purpose.

    Leading from that, I also applied pep 484 annotations to the Dialect base, and then in the
    interests of seeing some of the typing information show up in my IDE did a little bit for
    Engine, Connection and others. I hope that it's feasible that we can add annotations to
    specific classes and attributes ahead of when we actually try to mass-populate the whole
    library. This was the original spirit of pep-484 that we can apply annotations gradually.
    I do of course want to try to do a mass-populate although i think even in that case we will
    end up doing a lot of manual work anyway (in particular for the changes here which are
    distinct from what the stubs have).

    Fixes: #7122
    Change-Id: I5dd7fbff8a7ae520a81c165091af12a6a68826db
* Added support for ``psycopg`` dialect. (Federico Caselli, 2021-11-26; 1 file, -1/+0)

    Both sync and async versions are supported.

    Fixes: #6842
    Change-Id: I57751c5028acebfc6f9c43572562405453a2f2a4
* Merge "propose emulated setinputsizes embedded in the compiler" into mainmike bayer2021-11-251-114/+38
|\
| * propose emulated setinputsizes embedded in the compilerMike Bayer2021-11-231-114/+38
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add a new system so that PostgreSQL and other dialects have a reliable way to add casts to bound parameters in SQL statements, replacing previous use of setinputsizes() for PG dialects. rationale: 1. psycopg3 will be using the same SQLAlchemy-side "setinputsizes" as asyncpg, so we will be seeing a lot more of this 2. the full rendering that SQLAlchemy's compilation is performing is in the engine log as well as error messages. Without this, we introduce three levels of SQL rendering, the compiler, the hidden "setinputsizes" in SQLAlchemy, and then whatever the DBAPI driver does. With this new approach, users reporting bugs etc. will be less confused that there are as many as two separate layers of "hidden rendering"; SQLAlchemy's rendering is again fully transparent 3. calling upon a setinputsizes() method for every statement execution is expensive. this way, the work is done behind the caching layer 4. for "fast insertmany()", I also want there to be a fast approach towards setinputsizes. As it was, we were going to be taking a SQL INSERT with thousands of bound parameter placeholders and running a whole second pass on it to apply typecasts. this way, we will at least be able to build the SQL string once without a huge second pass over the whole string 5. psycopg2 can use this same system for its ARRAY casts 6. the general need for PostgreSQL to have lots of type casts is now mostly in the base PostgreSQL dialect and works independently of a DBAPI being present. dependence on DBAPI symbols that aren't complete / consistent / hashable is removed I was originally going to try to build this into bind_expression(), but it was revealed this worked poorly with custom bind_expression() as well as empty sets. the current impl also doesn't need to run a second expression pass over the POSTCOMPILE sections, which came out better than I originally thought it would. Change-Id: I363e6d593d059add7bcc6d1f6c3f91dd2e683c0c
* Clean up most py3k compat (Federico Caselli, 2021-11-24; 1 file, -2/+3)

    Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
* fully support isolation_level parameter in base dialect (Mike Bayer, 2021-11-18; 1 file, -13/+4)

    Generalized the :paramref:`_sa.create_engine.isolation_level` parameter to the base dialect
    so that it is no longer dependent on individual dialects to be present. This parameter sets
    up the "isolation level" setting to occur for all new database connections as soon as they
    are created by the connection pool, where the value then stays set without being reset on
    every checkin.

    The :paramref:`_sa.create_engine.isolation_level` parameter is essentially equivalent in
    functionality to using the :paramref:`_engine.Engine.execution_options.isolation_level`
    parameter via :meth:`_engine.Engine.execution_options` for an engine-wide setting. The
    difference is that the former setting assigns the isolation level just once when a
    connection is created, while the latter sets and resets the given level on each connection
    checkout.

    Fixes: #6342
    Change-Id: Id81d6b1c1a94371d901ada728a610696e09e9741
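Illustration (not part of the commit): a minimal sketch of the two equivalent ways of pinning an engine-wide isolation level described above; the URL is invented.

```python
from sqlalchemy import create_engine

# Set once per connection, as it is created by the pool:
eng1 = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="REPEATABLE READ",
)

# Set and reset on every connection checkout:
eng2 = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test"
).execution_options(isolation_level="REPEATABLE READ")
```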
* removals: all unicode encoding / decoding (Mike Bayer, 2021-11-10; 1 file, -3/+0)

    Removed here includes:

    * convert_unicode parameters
    * encoding create_engine() parameter
    * description encoding support
    * "non-unicode fallback" modes under Python 2
    * String symbols regarding Python 2 non-unicode fallbacks
    * any concept of DBAPIs that don't accept unicode statements, unicode bound parameters, or
      that return bytes for strings anywhere except an explicit Binary / BLOB type
    * unicode processors in Python / C

    Risk factors:

    * Whether all DBAPIs do in fact return Unicode objects for all entries in
      cursor.description now
    * There was logic for mysql-connector trying to determine description encoding. A quick
      test shows Unicode coming back but it's not clear if there are still edge cases where
      they return bytes. if so, these are bugs in that driver, and at most we would only work
      around it in the mysql-connector DBAPI itself (but we won't do that either).
    * It seems like Oracle 8 was not expecting unicode bound parameters. I'm assuming this was
      all Python 2 stuff and does not apply for modern cx_Oracle under Python 3.
    * third party dialects relying upon built in unicode encoding/decoding, but it's hard to
      imagine any non-SQLAlchemy database driver not dealing exclusively in Python unicode
      strings in Python 3

    Change-Id: I97d762ef6d4dd836487b714d57d8136d0310f28a
    References: #7257
* simplify and publicize the asyncpg JSON(B) codec registration (Mike Bayer, 2021-11-03; 1 file, -29/+55)

    Added overridable methods ``PGDialect_asyncpg.setup_asyncpg_json_codec`` and
    ``PGDialect_asyncpg.setup_asyncpg_jsonb_codec``, which handle the required task of
    registering JSON/JSONB codecs for these datatypes when using asyncpg. The change is that
    the methods are broken out as individual, overridable methods to support third party
    dialects that need to alter or disable how these particular codecs are set up.

    Fixes: #7284
    Change-Id: I3eac258fea61f3975bd03c428747f788813ce45e
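Illustration (not part of the commit): a minimal sketch of a third party dialect disabling JSONB codec registration by overriding the new hook. The exact hook signature shown here (an async method receiving the connection) is an assumption for illustration; check the dialect source for the real one.

```python
from sqlalchemy.dialects.postgresql.asyncpg import PGDialect_asyncpg


class MyPGFlavorDialect(PGDialect_asyncpg):
    """Hypothetical dialect for a PG-compatible database without JSONB support."""

    supports_statement_cache = True

    async def setup_asyncpg_jsonb_codec(self, conn):  # signature assumed
        # Skip JSONB codec registration entirely for this backend.
        pass
```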
* Revert "Gracefully degrade unsupported types with asyncpg"mike bayer2021-11-031-29/+14
| | | | | | | | This reverts commit 96c294da8a50d692b3f0b8e508dbbca5d9c22f1b. I have another approach that is more obvious, easier to override explicitly and also I can test it more easily. Change-Id: I11a3be7700dbc6f25d436e450b6fb8e8f6c4fd16
* formatting updates (Mike Bayer, 2021-11-03; 1 file, -2/+3)

    Change-Id: I7352bed0115b8fcdb4708e012d83e81d1ae494ed
* Merge "Gracefully degrade unsupported types with asyncpg" into mainmike bayer2021-11-031-14/+29
|\
| * Gracefully degrade unsupported types with asyncpgGord Thompson2021-11-021-14/+29
| | | | | | | | | | | | | | | | | | | | | | | | Fixes: #7284 Modify the on_connect() method of PGDialect_asyncpg to gracefully degrade unsupported types instead of throwing a ValueError. Useful for third-party dialects that derive from PGDialect_asyncpg but whose databases do not support all types (e.g., CockroachDB supports JSONB but not JSON). Change-Id: Ibb7cc8c3de632d27b9716a93d83956a590b2a2b0
* map Float to asyncpg.FLOAT, test for infinity (Mike Bayer, 2021-11-02; 1 file, -0/+9)

    Fixes: #7283
    Change-Id: I5402a72617b7f9bc366d64bc5ce8669374839984
* support bind expressions w/ expanding IN; apply to psycopg2 (Mike Bayer, 2021-10-15; 1 file, -1/+0)

    Fixed issue where "expanding IN" would fail to function correctly with datatypes that use
    the :meth:`_types.TypeEngine.bind_expression` method, where the method would need to be
    applied to each element of the IN expression rather than the overall IN expression itself.

    Fixed issue where IN expressions against a series of array elements, as can be done with
    PostgreSQL, would fail to function correctly due to multiple issues within the "expanding
    IN" feature of SQLAlchemy Core that was standardized in version 1.4. The psycopg2 dialect
    now makes use of the :meth:`_types.TypeEngine.bind_expression` method with
    :class:`_types.ARRAY` to portably apply the correct casts to elements. The asyncpg dialect
    was not affected by this issue as it applies bind-level casts at the driver level rather
    than at the compiler level.

    as part of this commit the "bind translate" feature has been simplified and also applies to
    the names in the POSTCOMPILE tag to accommodate for brackets.

    Fixes: #7177
    Change-Id: I08c703adb0a9bd6f5aeee5de3ff6f03cccdccdc5
* Surface driver connection object when using a proxied dialect (Federico Caselli, 2021-09-17; 1 file, -1/+5)

    Improve the interface used by adapted drivers, like the asyncio ones, to access the actual
    connection object returned by the driver.

    The :class:`_engine._ConnectionRecord` and :class:`_engine._ConnectionFairy` now have two
    new attributes:

    * ``dbapi_connection`` always represents a DBAPI compatible object. For pep-249 drivers,
      this is the DBAPI connection as it always has been, previously accessed under the
      ``.connection`` attribute. For asyncio drivers that SQLAlchemy adapts into a pep-249
      interface, the returned object will normally be a SQLAlchemy adaption object called
      :class:`_engine.AdaptedConnection`.
    * ``driver_connection`` always represents the actual connection object maintained by the
      third party pep-249 DBAPI or async driver in use. For standard pep-249 DBAPIs, this will
      always be the same object as that of the ``dbapi_connection``. For an asyncio driver, it
      will be the underlying asyncio-only connection object.

    The ``.connection`` attribute remains available and is now a legacy alias of
    ``.dbapi_connection``.

    Fixes: #6832
    Change-Id: Ib72f97deefca96dce4e61e7c38ba430068d6a82e
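Illustration (not part of the commit): a minimal sketch of reaching the raw asyncpg connection from an async engine via the new attributes; the URL is invented and a running PostgreSQL server is assumed.

```python
import asyncio

from sqlalchemy.ext.asyncio import create_async_engine


async def main() -> None:
    engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost/test")
    async with engine.connect() as conn:
        raw = await conn.get_raw_connection()
        # dbapi_connection: SQLAlchemy's pep-249 adaption layer (AdaptedConnection)
        # driver_connection: the actual asyncpg.Connection underneath
        print(type(raw.dbapi_connection), type(raw.driver_connection))
    await engine.dispose()


asyncio.run(main())
```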
* Replace all http:// links to https:// (Federico Caselli, 2021-07-04; 1 file, -1/+1)

    Also replace http://pypi.python.org/pypi with https://pypi.org/project

    Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
* Add pgcode / sqlstate for asyncpg error message (Mike Bayer, 2021-04-05; 1 file, -0/+3)

    Added accessors ``.sqlstate`` and synonym ``.pgcode`` to the ``.orig`` attribute of the
    SQLAlchemy exception class raised by the asyncpg DBAPI adapter, that is, the intermediary
    exception object that wraps on top of that raised by the asyncpg library itself, but below
    the level of the SQLAlchemy dialect.

    Fixes: #6199
    Change-Id: Ie0f1ffaaff47c7a50dd1fbccdbe588cdc5322b70
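Illustration (not part of the commit): a minimal sketch of reading the SQLSTATE code from an error raised through the asyncpg adapter; the URL and table name are invented and a running PostgreSQL server is assumed.

```python
import asyncio

from sqlalchemy import text
from sqlalchemy.exc import DBAPIError
from sqlalchemy.ext.asyncio import create_async_engine


async def main() -> None:
    engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost/test")
    try:
        async with engine.connect() as conn:
            await conn.execute(text("SELECT * FROM table_that_does_not_exist"))
    except DBAPIError as err:
        # err.orig is the adapter-level exception; .sqlstate / .pgcode carry the
        # PostgreSQL error code, e.g. '42P01' for "undefined_table".
        print(err.orig.sqlstate, err.orig.pgcode)
    await engine.dispose()


asyncio.run(main())
```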
* Default caching to opt-out for 3rd party dialects (Mike Bayer, 2021-04-01; 1 file, -0/+1)

    Added a new flag to the :class:`_engine.Dialect` class called
    :attr:`_engine.Dialect.supports_statement_cache`. This flag now needs to be present
    directly on a dialect class in order for SQLAlchemy's :ref:`query cache <sql_caching>` to
    take effect for that dialect. The rationale is based on discovered issues such as
    :ticket:`6173` revealing that dialects which hardcode literal values from the compiled
    statement, often the numerical parameters used for LIMIT / OFFSET, will not be compatible
    with caching until these dialects are revised to use the parameters present in the
    statement only.

    For third party dialects where this flag is not applied, the SQL logging will show the
    message "dialect does not support caching", indicating the dialect should seek to apply
    this flag once they have verified that no per-statement literal values are being rendered
    within the compilation phase.

    Fixes: #6184
    Change-Id: I6fd5b5d94200458d4cb0e14f2f556dbc25e27e22
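Illustration (not part of the commit): a minimal sketch of a third party dialect opting in to the statement cache once it has verified that no per-statement literals are rendered; the class and backend name are hypothetical.

```python
from sqlalchemy.dialects.postgresql.asyncpg import PGDialect_asyncpg


class MyVendorDialect(PGDialect_asyncpg):
    name = "myvendor"
    # Per the commit message, the flag must be present directly on the dialect
    # class; otherwise SQL logging reports "dialect does not support caching".
    supports_statement_cache = True
```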
* mutex asyncpg / aiomysql connection state changes (Mike Bayer, 2021-02-25; 1 file, -49/+68)

    Added an ``asyncio.Lock()`` within SQLAlchemy's emulated DBAPI cursor, local to the
    connection, for the asyncpg dialect, so that the space between the call to ``prepare()``
    and ``fetch()`` is prevented from allowing concurrent executions on the connection from
    causing interface error exceptions, as well as preventing race conditions when starting a
    new transaction. Other PostgreSQL DBAPIs are threadsafe at the connection level so this
    intends to provide a similar behavior, outside the realm of server side cursors.

    Apply the same idea to the aiomysql dialect which also would otherwise be subject to
    corruption if the connection were used concurrently.

    While this is an issue which can also occur with the threaded connection libraries, we
    anticipate asyncio users are more likely to attempt using the same connection in multiple
    awaitables at a time, even though this won't achieve concurrency for that use case, as the
    asyncio programming style is very encouraging of this. As the failure modes are also more
    complicated under asyncio, we'd rather not have this being reported.

    Fixes: #5967
    Change-Id: I3670ba0c8f0b593c587c5aa7c6c61f9e8c5eb93a
* expand and further generalize bound parameter translate (Mike Bayer, 2021-02-14; 1 file, -14/+14)

    Continued with the improvement made as part of :ticket:`5653` to further support bound
    parameter names, including those generated against column names, for names that include
    colons, parentheses, and question marks, as well as improved test support, so that bound
    parameter names, even if they are auto-derived from column names, should have no problem,
    including for parentheses in psycopg2's "pyformat" style.

    As part of this change, the format used by the asyncpg DBAPI adapter (which is local to
    SQLAlchemy's asyncpg dialect) has been changed from using "qmark" paramstyle to "format",
    as there is a standard and internally supported SQL string escaping style for names that
    use percent signs with "format" style (i.e. to double percent signs), as opposed to names
    that use question marks with "qmark" style (where an escaping system is not defined by
    pep-249 or Python).

    Fixes: #5941
    Change-Id: Id86f5af81903d7063a8e3505e60df56490f85358
* run handle error for commit/rollback fail and cancel transaction (Mike Bayer, 2021-01-15; 1 file, -6/+18)

    Fixed bug in asyncpg dialect where a failure during a "commit", or less likely a
    "rollback", should cancel the entire transaction; it's no longer possible to emit a
    rollback afterwards. Previously the connection would continue to await a rollback that
    could not succeed, as asyncpg would reject it.

    Fixes: #5824
    Change-Id: I5a4916740c269b410f4d1a78ed25191de344b9d0