pep-249 exception names linked to exception classes of an entirely
different name, preventing SQLAlchemy's own exception wrapping from
wrapping the error appropriately.
The SQLAlchemy dialect in use needs to implement a new
accessor :attr:`.DefaultDialect.dbapi_exception_translation_map`
to support this feature; this is implemented now for the py-postgresql
dialect.
fixes #3421
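
As a rough sketch of the idea (the driver exception class names below are
hypothetical, and the direction of the map, DBAPI class name to pep-249
name, is an assumption about how the accessor is consumed), a third-party
dialect might supply::

    from sqlalchemy.engine import default

    class MyDriverDialect(default.DefaultDialect):
        driver = "mydriver"

        @property
        def dbapi_exception_translation_map(self):
            # keys: the driver's actual exception class names
            # values: the pep-249 names SQLAlchemy's wrapping understands
            return {
                "UniqueViolation": "IntegrityError",
                "ConnectionDoesNotExist": "OperationalError",
            }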
rows failed to implement ``__slots__`` correctly such that it still
had a ``__dict__``. This is resolved, but in the extremely
unlikely case someone was assigning values to the returned tuples,
that will no longer work.
fixes #3420
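
The Python behavior underlying this, independent of SQLAlchemy's actual
row implementation: a subclass that omits ``__slots__`` silently regains a
per-instance ``__dict__`` even when its base declares slots::

    class Base(object):
        __slots__ = ("a", "b")

    class LeakyRow(Base):
        pass                # no __slots__ here, so instances get a __dict__

    class TightRow(Base):
        __slots__ = ()      # keeps instances __dict__-free

    r = LeakyRow()
    r.extra = 1             # works, via the inherited __dict__
    t = TightRow()
    # t.extra = 1           # would raise AttributeError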
and the database can no longer be connected to, the checkout
handler failure is caught and the attempt to re-acquire the connection
also raises an exception, but the underlying connection record
was not immediately checked back in before the exception was propagated
outwards. The effect was that the checked-out record would not release
itself until the stack trace it's associated with was garbage collected,
preventing that record from being used for a new checkout until we
leave the scope of the stack trace. This can lead to confusion
specifically when the number of current stack traces in memory
exceeds the number of connections the pool can return, as the pool
will begin to raise errors about no more checkouts being available,
rather than attempting a connection again. The fix applies a checkin
of the record before re-raising.
fixes #3419
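
The failure mode above typically arises with the documented "pessimistic
disconnect" recipe, where a checkout handler pings the connection and
raises :class:`.DisconnectionError` so that the pool retries the checkout;
a minimal sketch of such a handler::

    from sqlalchemy import event, exc
    from sqlalchemy.pool import Pool

    @event.listens_for(Pool, "checkout")
    def ping_connection(dbapi_connection, connection_record, connection_proxy):
        cursor = dbapi_connection.cursor()
        try:
            cursor.execute("SELECT 1")
        except Exception:
            # tell the pool to discard this connection and try another;
            # if the database is truly down, that retry also fails
            raise exc.DisconnectionError()
        finally:
            cursor.close()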
inside of :meth:`.Insert.from_select`. This behavior worked
accidentally up until 0.9.9, at which point it stopped working due to
unrelated changes made as part of :ticket:`3248`. Note that this
is the rendering of the WITH clause after the INSERT and before the
SELECT; the full functionality of CTEs rendered at the top
level of INSERT, UPDATE and DELETE is a new feature targeted for a
later release.
fixes #3418
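
A minimal sketch of the construct in question, using ad-hoc ``table()`` /
``column()`` objects purely for illustration::

    from sqlalchemy.sql import column, select, table

    orders = table("orders", column("region"), column("amount"))
    summary = table("summary", column("region"), column("total"))

    # the CTE is attached to the SELECT that feeds the INSERT; its WITH
    # clause renders after "INSERT INTO summary (...)" and before the SELECT
    regional = (
        select([orders.c.region, orders.c.amount])
        .where(orders.c.amount > 0)
        .cte("regional")
    )
    stmt = summary.insert().from_select(
        ["region", "total"],
        select([regional.c.region, regional.c.amount]),
    )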
release numbers
(cherry picked from commit 6b55842eef3a243d275bdd5630c1fe62d98e8371)
primaryjoin of a relationship involved comparison to an unhashable
type such as an HSTORE, lazy loads would fail due to a hash-oriented
check on the statement parameters, which was modified in 1.0 as a result
of :ticket:`3061` to use hashing, and modified in :ticket:`3368`
to take place in cases more common than "load on pending".
The values are now checked for the ``__hash__`` attribute beforehand.
fixes #3416
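
The kind of guard described, sketched in plain Python rather than
SQLAlchemy's literal implementation; unhashable values such as the dicts
produced for HSTORE columns advertise themselves via a ``__hash__`` of
``None``::

    def _is_hashable(value):
        # dict-based values (e.g. HSTORE contents) set __hash__ to None
        return getattr(value, "__hash__", None) is not None

    _is_hashable({"key": "value"})   # False -> skip the hash-based check
    _is_hashable(("a", "b"))         # True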
The sqlalchemy_exasol dialect needs to support Exasol 4.x, which does not
support the OFFSET feature. Mark the tests with testing.requires.offset
so that they can be skipped in the Exasol-specific test suite.
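
For illustration, marking such a test might look like the following (the
test and class names here are made up)::

    from sqlalchemy import testing
    from sqlalchemy.testing import fixtures

    class RowFetchTest(fixtures.TablesTest):

        @testing.requires.offset
        def test_limit_offset(self):
            # skipped when the backend's requirements report no OFFSET
            # support, e.g. Exasol 4.x
            pass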
to protect against unknown conditions when splicing inner joins
together within joined eager loads with ``innerjoin=True``; if
some of the joins use a "secondary" table, the assertion needs to
unwrap further joins in order to pass.
fixes #3412
:ticket:`3341` where in the unusual case of a constraint that refers
to a mixture of :class:`.Column` objects and string column names
at the same time, the auto-attach-on-column-attach logic will be
skipped; for the constraint to be auto-attached in this case,
all columns must be assembled on the target table up front.
Added a new section to the migration document regarding the
original feature as well as this change.
fixes #3411
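
A sketch of the "mixed" constraint form this refers to (table and column
names are illustrative)::

    from sqlalchemy import Column, Integer, MetaData, Table, UniqueConstraint

    metadata = MetaData()
    a = Column("a", Integer)

    # mixes a Column object with a string column name; the constraint will
    # no longer auto-attach as its columns become associated with a Table
    uq = UniqueConstraint(a, "b")

    # attaching it explicitly remains unambiguous
    t = Table("t", metadata, a, Column("b", Integer), uq)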
regressions; regressions where we didn't know an API even worked
in a particular way, or that anyone was using it in such a way,
and hence had no tests for that case.
function patched onto config. nose/pytest backends now fill
in their exception class here only when loaded
- use a more public-seeming API to get at the py.test Skipped
exception
off again so that test fixtures set up and tear down instrumentation
as expected
- clean up test_extendedattr.py and fix it so that it no longer leaks
outside of itself, by ensuring _reinstall_default_lookups is always called,
part of #3408
- Fixed bug where, when using the extended attribute instrumentation system,
the correct exception would not be raised when :func:`.class_mapper`
was called with an invalid input that also happened to not
be weak-referenceable, such as an integer.
fixes #3408
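
The last bullet describes a call along these lines; the point is that a
descriptive SQLAlchemy error is raised rather than a low-level failure from
the weakref machinery (the broad base exception is caught here to stay
agnostic about the exact subclass)::

    from sqlalchemy import exc
    from sqlalchemy.orm import class_mapper

    try:
        class_mapper(5)      # not a class, and not weak-referenceable
    except exc.SQLAlchemyError as err:
        print("rejected as expected:", err)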
to clean up automatically. references #3407
correctly.
fixes #3406
as failing with the new 'entity' key value added to
:attr:`.Query.column_descriptions`, the logic to discover the "from"
clause is again reworked to accommodate columns from aliased classes,
as well as to report the correct value for the "aliased" flag in these
cases.
fixes #3409
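
For reference, the sort of query this covers, assuming a configured
``session`` and an illustrative mapped class ``User``; each entry of
:attr:`.Query.column_descriptions` is a dictionary with keys such as
``name``, ``type``, ``aliased``, ``expr`` and the new ``entity``::

    from sqlalchemy.orm import aliased

    ua = aliased(User, name="u1")
    q = session.query(ua, ua.name)

    for desc in q.column_descriptions:
        # "entity" points to the aliased class and "aliased" is True
        print(desc["name"], desc["aliased"], desc["entity"])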
where the check for query state on :meth:`.Query.update` or
:meth:`.Query.delete` compared the empty tuple to itself using ``is``,
which on PyPy does not necessarily produce ``True``; this would
erroneously emit a warning in 0.9 and raise an exception in 1.0.
fixes #3405
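
The underlying Python detail: CPython happens to cache the empty tuple, so
an identity check against ``()`` passes there only by accident, and PyPy
makes no such guarantee::

    a = ()
    b = tuple([])

    print(a == b)   # True on any Python implementation
    print(a is b)   # True on CPython as an implementation detail;
                    # may be False on PyPy, hence the fix compares by equality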
using descriptors; ensure that mock.patch() honors descriptor setters
old / new style, direct access, and ad-hoc patching and
unpatching as possible
functionality. Added a new "soft invalidate" feature to the
connection pool at the level of the checked out connection wrapper
as well as the :class:`._ConnectionRecord`. This works similarly
to a modern pool invalidation in that connections aren't actively
closed, but are recycled only on next checkout; this is essentially
a per-connection version of that feature. A new event
:class:`.PoolEvents.soft_invalidate` is added to complement it.
fixes #3379
- Added new flag
:attr:`.ExceptionContext.invalidate_pool_on_disconnect`.
Allows an error handler within :meth:`.ConnectionEvents.handle_error`
to maintain a "disconnect" condition, but to handle calling invalidate
on individual connections in a specific manner within the event.
- Added new event :class:`.DialectEvents.do_connect`, which allows
interception / replacement of when the :meth:`.Dialect.connect`
hook is called to create a DBAPI connection. Also added
dialect plugin hooks :meth:`.Dialect.get_dialect_cls` and
:meth:`.Dialect.engine_created` which allow external plugins to
add events to existing dialects using entry points.
fixes #3355
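
A hedged sketch of how two of these hooks might be used together; the
connection URL and the extra connect parameter are illustrative only::

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    @event.listens_for(engine, "handle_error")
    def handle_error(context):
        if context.is_disconnect:
            # keep the "disconnect" classification, but prevent the
            # pool-wide invalidation; individual connections can be
            # invalidated here instead
            context.invalidate_pool_on_disconnect = False

    @event.listens_for(engine, "do_connect")
    def do_connect(dialect, conn_rec, cargs, cparams):
        # modify the connect arguments in place, or return a DBAPI
        # connection outright to replace the dialect's own connect step
        cparams.setdefault("connect_timeout", 10)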
of ``entity`` to the :attr:`.Query.column_descriptions` accessor
would fail if the target entity was produced from a core selectable
such as a :class:`.Table` or :class:`.CTE` object.
fixes #3403 references #3320
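
That is, something along these lines now works; the table is illustrative,
and the exact values reported for a plain Core selectable may differ from
those of a mapped entity::

    from sqlalchemy import Column, Integer, MetaData, Table

    metadata = MetaData()
    widgets = Table("widgets", metadata, Column("id", Integer, primary_key=True))

    q = session.query(widgets)
    q.column_descriptions    # previously raised once "entity" was introduced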
which is now consumed by Alembic migrations as of 0.7.6. User-defined
types can implement this method to assist in the comparison of
a type against one reflected from the database.
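
Assuming the hook referred to here is the ``compare_against_backend()``
method (the type below and its comparison rule are entirely illustrative),
a user-defined type might participate like this::

    from sqlalchemy import types

    class MyMoney(types.UserDefinedType):
        def get_col_spec(self, **kw):
            return "MONEY"

        def compare_against_backend(self, dialect, conn_type):
            # True/False gives a definitive answer; None falls back to
            # the caller's default comparison (e.g. Alembic autogenerate)
            if isinstance(conn_type, types.Numeric):
                return True
            return None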
set to a SQL expression for an UPDATE, and the SQL expression when
compared to the previous value of the attribute would produce a SQL
comparison other than ``==`` or ``!=``, the exception "Boolean value
of this clause is not defined" would be raised. The fix ensures that
the unit of work will not interpret the SQL expression in this way.
fixes #3402
refers to the table having a primary key. fixes #3398
on a relationship->scalar non-object attribute comparison would fail,
e.g.
``filter(Parent.some_collection_to_attribute.any(Child.attr == 'foo'))``
fixes #3397
a label that overlapped another label that is not truncated; this was
because the length threshold for truncation was greater than
the portion of the label that remains after truncation. These
two values have now been made the same: ``label_length - 6``.
The effect here is that shorter column labels will be "truncated"
where they would not have been truncated before.
fixes #3396
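
The truncation length involved derives from the dialect's ``label_length``,
which can also be set explicitly on :func:`.create_engine`; for example::

    from sqlalchemy import create_engine

    # generated labels longer than (label_length - 6) characters are now
    # truncated to an anonymized form
    engine = create_engine("sqlite://", label_length=30)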
Fix typo in 'Relationships API' docs
exprssed -> expressed
now skip textual label references when copying ORDER BY elements
to the joined-eager-load subquery, as we can't know that these
expressions are compatible with this placement; either because
they are meant for text(), or because they refer to label names
already stated and aren't bound to a table. fixes #3392
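
The situation in question looks roughly like the following (the model names
are illustrative); when the LIMIT forces the joined eager load to wrap the
statement in a subquery, the string-based ORDER BY entry is no longer
copied into that subquery::

    from sqlalchemy.orm import joinedload

    q = (
        session.query(User)
        .options(joinedload(User.addresses))
        .order_by("email_address")   # textual label reference, not a Column
        .limit(10)
    )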
passed as a keyword argument to the :meth:`.DDLEvents.before_create`,
:meth:`.DDLEvents.after_create`, :meth:`.DDLEvents.before_drop`, and
:meth:`.DDLEvents.after_drop` events would no longer be a list
of tables, but instead a list of tuples which contained a second
entry with foreign keys to be added or dropped. As the ``tables``
collection, while documented as not necessarily stable, has come
to be relied upon, this change is considered a regression.
Additionally, in some cases for "drop", this collection would
be an iterator that would cause the operation to fail if
prematurely iterated. The collection is now a list of table
objects in all cases and test coverage for the format of this
collection is now added.
fixes #3391
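
For example, a listener relying on the documented form of the ``tables``
collection::

    from sqlalchemy import MetaData, event

    metadata = MetaData()

    @event.listens_for(metadata, "before_create")
    def receive_before_create(target, connection, **kw):
        # "tables" is again a plain list of Table objects about to be created
        for table in kw.get("tables", []):
            print("creating", table.name)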
event where its invocation was moved to be after the class manager's
instrumentation of the class, which is the opposite of what the
documentation for the event explicitly states. The rationale for the
switch was that Declarative sets up the full "instrumentation manager"
for a class before it is mapped, for the purpose of the new
``@declared_attr`` features described in :ref:`feature_3150`, but the
change was also made against the classical use of :func:`.mapper` for
consistency. However, SQLSoup relies upon the instrumentation event
happening before any instrumentation under classical mapping.
The behavior is reverted in the case of both classical and declarative
mapping, the latter implemented by using a simple memoization
without using the class manager.
fixes #3388
changes made to the :class:`.Query` object's collection of entities
to load within the event would render in the SQL, but would not
be reflected during the loading process.
fixes #3387
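
A sketch of the kind of event at issue, with ``User`` as an illustrative
mapped class; with the fix, the rows actually loaded correspond to the
modified entity list rather than only the rendered SQL::

    from sqlalchemy import event
    from sqlalchemy.orm import Query

    @event.listens_for(Query, "before_compile", retval=True)
    def limit_entities(query):
        # replace the entities to be loaded just before compilation
        return query.with_entities(User.id, User.name)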
(hence becoming two regressions); reports that
SELECT statements would GROUP BY a label name and fail were misconstrued
to mean that certain backends such as SQL Server should not be emitting
ORDER BY or GROUP BY on a simple label name at all, when in fact
we had forgotten that 0.9 was already emitting ORDER BY on a simple
label name for all backends, as described in :ref:`migration_1068`,
since 1.0 had rewritten this logic as part of :ticket:`2992`.
In 1.0.2, the bug is fixed both so that SQL Server, Firebird and others
will again emit ORDER BY on a simple label name when passed a
:class:`.Label` construct that is expressed in the columns clause,
and so that no backend will emit GROUP BY on a simple label name in this
case, as even Postgresql can't reliably do GROUP BY on a simple name
in every case.
fixes #3338, fixes #3385
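
In Core terms the behavior being restored is along these lines (the table
and label are illustrative)::

    from sqlalchemy import Column, Integer, MetaData, Table, func, select

    metadata = MetaData()
    t = Table("t", metadata, Column("x", Integer), Column("y", Integer))

    total = (t.c.x + t.c.y).label("total")
    stmt = (
        select([total, func.count(t.c.x)])
        .group_by(total)
        .order_by(total)
    )
    # ORDER BY may again render the simple label name "total" on backends
    # that support it; GROUP BY renders the full "t.x + t.y" expression on
    # all backends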