 doc/build/changelog/changelog_11.rst | 46
 doc/build/changelog/migration_11.rst | 76
 doc/build/dialects/postgresql.rst    |  4
 3 files changed, 63 insertions(+), 63 deletions(-)
diff --git a/doc/build/changelog/changelog_11.rst b/doc/build/changelog/changelog_11.rst
index 78f26d15e..455156f30 100644
--- a/doc/build/changelog/changelog_11.rst
+++ b/doc/build/changelog/changelog_11.rst
@@ -61,7 +61,7 @@
``validate_string=True`` is passed to the Enum; any other kind of object is
still of course rejected. While the immediate use
is to allow comparisons to enums with LIKE, the fact that this
- use exists indicates there may be more unknown-string-comparsion use
+ use exists indicates there may be more unknown-string-comparison use
cases than we expected, which hints that perhaps there are some
unknown string-INSERT cases too.
@@ -114,8 +114,8 @@
:tags: feature, postgresql
:tickets: 3529
- Added support for Postgresql's INSERT..ON CONFLICT using a new
- Postgresql-specific :class:`.postgresql.dml.Insert` object.
+ Added support for PostgreSQL's INSERT..ON CONFLICT using a new
+ PostgreSQL-specific :class:`.postgresql.dml.Insert` object.
Pull request and extensive efforts here by Robin Thomas.
.. seealso::
@@ -129,7 +129,7 @@
The DDL for DROP INDEX will emit "CONCURRENTLY" if the
``postgresql_concurrently`` flag is set upon the
:class:`.Index` and if the database in use is detected as
- Postgresql version 9.2 or greater. For CREATE INDEX, database
+ PostgreSQL version 9.2 or greater. For CREATE INDEX, database
version detection is also added which will omit the clause if
PG version is less than 8.2. Pull request courtesy Iuri de Silvio.
@@ -139,7 +139,7 @@
Fixed an issue where a many-to-one change of an object from one
parent to another could work inconsistently when combined with
- an un-flushed modication of the foreign key attribute. The attribute
+ an un-flushed modification of the foreign key attribute. The attribute
move now considers the database-committed value of the foreign key
in order to locate the "previous" parent of the object being
moved. This allows events to fire off correctly including
@@ -208,7 +208,7 @@
:tickets: 3720
Added ``postgresql_tablespace`` as an argument to :class:`.Index`
- to allow specification of TABLESPACE for an index in Postgresql.
+ to allow specification of TABLESPACE for an index in PostgreSQL.
Complements the same-named parameter on :class:`.Table`. Pull
request courtesy Benjamin Bertrand.
@@ -241,7 +241,7 @@
:tags: feature, sql
:pullreq: bitbucket:80
- Implemented reflection of CHECK constraints for SQLite and Postgresql.
+ Implemented reflection of CHECK constraints for SQLite and PostgreSQL.
This is available via the new inspector method
:meth:`.Inspector.get_check_constraints` as well as when reflecting
:class:`.Table` objects in the form of :class:`.CheckConstraint`
@@ -256,7 +256,7 @@
:paramref:`.GenerativeSelect.with_for_update.key_share`, which
will render the ``FOR NO KEY UPDATE`` version of ``FOR UPDATE``
and ``FOR KEY SHARE`` instead of ``FOR SHARE``
- on the Postgresql backend. Pull request courtesy Sergey Skopin.
+ on the PostgreSQL backend. Pull request courtesy Sergey Skopin.
.. change::
:tags: feature, postgresql, oracle
@@ -265,7 +265,7 @@
Added new parameter
:paramref:`.GenerativeSelect.with_for_update.skip_locked`, which
will render the ``SKIP LOCKED`` phrase for a ``FOR UPDATE`` or
- ``FOR SHARE`` lock on the Postgresql and Oracle backends. Pull
+ ``FOR SHARE`` lock on the PostgreSQL and Oracle backends. Pull
request courtesy Jack Zhou.
.. change::
@@ -286,7 +286,7 @@
.. change::
:tags: feature, postgresql
- Added a new dialect for the PyGreSQL Postgresql dialect. Thanks
+ Added a new dialect for the PyGreSQL PostgreSQL dialect. Thanks
to Christoph Zwerschke and Kaolin Imago Fire for their efforts.
.. change::
@@ -360,7 +360,7 @@
Added :meth:`.Select.lateral` and related constructs to allow
for the SQL standard LATERAL keyword, currently only supported
- by Postgresql.
+ by PostgreSQL.
.. seealso::
@@ -433,7 +433,7 @@
would still potentially cause persistence conflicts on the next
transaction, because the instance would not be checked that it
was expired. This fix will resolve a large class of cases that
- erronously cause the "New instance with identity X conflicts with
+ erroneously cause the "New instance with identity X conflicts with
persistent instance Y" error.
.. seealso::
@@ -606,7 +606,7 @@
when the construct contains non-standard SQL elements such as
RETURNING, array index operations, or dialect-specific or custom
datatypes. A string is now returned in these cases rendering an
- approximation of the construct (typically the Postgresql-style
+ approximation of the construct (typically the PostgreSQL-style
version of it) rather than raising an error.
.. seealso::
@@ -752,7 +752,7 @@
:tickets: 3587
Added support for reflecting the source of materialized views
- to the Postgresql version of the :meth:`.Inspector.get_view_definition`
+ to the PostgreSQL version of the :meth:`.Inspector.get_view_definition`
method.
.. change::
@@ -949,7 +949,7 @@
or :class:`.Boolean` with regards to ensuring that the per-table
events are propagated from the implementation type to the outer type.
These events are used
- to ensure that the constraints or Postgresql types (e.g. ENUM)
+ to ensure that the constraints or PostgreSQL types (e.g. ENUM)
are correctly created (and possibly dropped) along with the parent
table.
@@ -981,8 +981,8 @@
and supports index / slice operations, as well as
:func:`.postgresql.array_agg`, which returns a :class:`.postgresql.ARRAY`
with additional comparison features. As arrays are only
- supported on Postgresql at the moment, only actually works on
- Postgresql. Also added a new construct
+ supported on PostgreSQL at the moment, only actually works on
+ PostgreSQL. Also added a new construct
:class:`.postgresql.aggregate_order_by` in support of PG's
"ORDER BY" extension.
@@ -1001,8 +1001,8 @@
on other databases that have an "array" concept, such as DB2 or Oracle.
Additionally, new operators :func:`.expression.any_` and
:func:`.expression.all_` have been added. These support not just
- array constructs on Postgresql, but also subqueries that are usable
- on MySQL (but sadly not on Postgresql).
+ array constructs on PostgreSQL, but also subqueries that are usable
+ on MySQL (but sadly not on PostgreSQL).
.. seealso::
@@ -1039,7 +1039,7 @@
:tags: bug, postgresql
:tickets: 3487
- The Postgresql :class:`.postgresql.ARRAY` type now supports multidimensional
+ The PostgreSQL :class:`.postgresql.ARRAY` type now supports multidimensional
indexed access, e.g. expressions such as ``somecol[5][6]`` without
any need for explicit casts or type coercions, provided
that the :paramref:`.postgresql.ARRAY.dimensions` parameter is set to the
@@ -1054,7 +1054,7 @@
:tickets: 3503
The return type for the :class:`.postgresql.JSON` and :class:`.postgresql.JSONB`
- when using indexed access has been fixed to work like Postgresql itself,
+ when using indexed access has been fixed to work like PostgreSQL itself,
and returns an expression that itself is of type :class:`.postgresql.JSON`
or :class:`.postgresql.JSONB`. Previously, the accessor would return
:class:`.NullType` which disallowed subsequent JSON-like operators to be
@@ -1101,9 +1101,9 @@
:tickets: 3514
Additional fixes have been made regarding the value of ``None``
- in conjunction with the Postgresql :class:`.postgresql.JSON` type. When
+ in conjunction with the PostgreSQL :class:`.postgresql.JSON` type. When
the :paramref:`.JSON.none_as_null` flag is left at its default
- value of ``False``, the ORM will now correctly insert the Json
+ value of ``False``, the ORM will now correctly insert the JSON
"'null'" string into the column whenever the value on the ORM
object is set to the value ``None`` or when the value ``None``
is used with :meth:`.Session.bulk_insert_mappings`,
diff --git a/doc/build/changelog/migration_11.rst b/doc/build/changelog/migration_11.rst
index 1eea7eaca..d0f73b726 100644
--- a/doc/build/changelog/migration_11.rst
+++ b/doc/build/changelog/migration_11.rst
@@ -183,7 +183,7 @@ joined eager loading, as well as when joins are used for the purposes
of filtering on additional columns.
This deduplication relies upon the hashability of the elements within
-the row. With the introduction of Postgresql's special types like
+the row. With the introduction of PostgreSQL's special types like
:class:`.postgresql.ARRAY`, :class:`.postgresql.HSTORE` and
:class:`.postgresql.JSON`, the experience of types within rows being
unhashable and encountering problems here is more prevalent than
@@ -192,7 +192,7 @@ it was previously.
In fact, SQLAlchemy has since version 0.8 included a flag on datatypes that
are noted as "unhashable", however this flag was not used consistently
on built in types. As described in :ref:`change_3499_postgresql`, this
-flag is now set consistently for all of Postgresql's "structural" types.
+flag is now set consistently for all of PostgreSQL's "structural" types.
The "unhashable" flag is also set on the :class:`.NullType` type,
as :class:`.NullType` is used to refer to any expression of unknown
@@ -286,7 +286,7 @@ to track this change.
New options allowing explicit persistence of NULL over a default
----------------------------------------------------------------
-Related to the new JSON-NULL support added to Postgresql as part of
+Related to the new JSON-NULL support added to PostgreSQL as part of
:ref:`change_3514`, the base :class:`.TypeEngine` class now supports
a method :meth:`.TypeEngine.evaluates_none` which allows a positive set
of the ``None`` value on an attribute to be persisted as NULL, rather than
@@ -344,7 +344,7 @@ Improved Session state when a SAVEPOINT is cancelled by the database
--------------------------------------------------------------------
A common case with MySQL is that a SAVEPOINT is cancelled when a deadlock
-occurs within the transaction. The :class:`.Session` has been modfied
+occurs within the transaction. The :class:`.Session` has been modified
to deal with this failure mode slightly more gracefully, such that the
outer, non-savepoint transaction still remains usable::
@@ -703,12 +703,12 @@ would have to be compared during the merge.
.. _change_3708:
-Fix involving many-to-one object moves with user-initiated foriegn key manipulations
+Fix involving many-to-one object moves with user-initiated foreign key manipulations
------------------------------------------------------------------------------------
A bug has been fixed involving the mechanics of replacing a many-to-one
reference to an object with another object. During the attribute operation,
-the location of the object tha was previouly referred to now makes use of the
+the location of the object that was previously referred to now makes use of the
database-committed foreign key value, rather than the current foreign key
value. The main effect of the fix is that a backref event towards a collection
will fire off more accurately when a many-to-one change is made, even if the
@@ -751,7 +751,7 @@ improvement will now be apparent.
.. _change_3662:
-Improvements to the Query.correlate method with polymoprhic entities
+Improvements to the Query.correlate method with polymorphic entities
--------------------------------------------------------------------
In recent SQLAlchemy versions, the SQL generated by many forms of
@@ -796,7 +796,7 @@ Using correlated subqueries against polymorphic mappings still has some
unpolished edges. If for example ``Person`` is polymorphically linked
to a so-called "concrete polymorphic union" query, the above subquery
may not correctly refer to this subquery. In all cases, a way to refer
-to the "polyorphic" entity fully is to create an :func:`.aliased` object
+to the "polymorphic" entity fully is to create an :func:`.aliased` object
from it first::
# works with all SQLAlchemy versions and all types of polymorphic
@@ -956,13 +956,13 @@ Where above, the ``user.name`` column is added unnecessarily. The results
would not be affected, as the additional columns are not included in the
result in any case, but the columns are unnecessary.
-Additionally, when the Postgresql DISTINCT ON format is used by passing
+Additionally, when the PostgreSQL DISTINCT ON format is used by passing
expressions to :meth:`.Query.distinct`, the above "column adding" logic
is disabled entirely.
When the query is being bundled into a subquery for the purposes of
joined eager loading, the "augment column list" rules are necessarily
-more aggressive so that the ORDER BY can still be satisifed, so this case
+more aggressive so that the ORDER BY can still be satisfied, so this case
remains unchanged.
:ticket:`3641`
@@ -1110,7 +1110,7 @@ RANGE and ROWS expressions for window functions::
Support for the SQL LATERAL keyword
------------------------------------
-The LATERAL keyword is currently known to only be supported by Postgresql 9.3
+The LATERAL keyword is currently known to only be supported by PostgreSQL 9.3
and greater, however as it is part of the SQL standard support for this keyword
is added to Core. The implementation of :meth:`.Select.lateral` employs
special logic beyond just rendering the LATERAL keyword to allow for
@@ -1174,8 +1174,8 @@ SQLAlchemy has always had the convenience feature of enabling the backend databa
"autoincrement" feature for a single-column integer primary key; by "autoincrement"
we mean that the database column will include whatever DDL directives the
database provides in order to indicate an auto-incrementing integer identifier,
-such as the SERIAL keyword on Postgresql or AUTO_INCREMENT on MySQL, and additionally
-that the dialect will recieve these generated values from the execution
+such as the SERIAL keyword on PostgreSQL or AUTO_INCREMENT on MySQL, and additionally
+that the dialect will receive these generated values from the execution
of a :meth:`.Table.insert` construct using techniques appropriate to that
backend.
@@ -1200,7 +1200,7 @@ disable this, one would have to turn off ``autoincrement`` on all columns::
)
With the new behavior, the composite primary key will not have autoincrement
-semantics unless a column is marked explcitly with ``autoincrement=True``::
+semantics unless a column is marked explicitly with ``autoincrement=True``::
# column 'y' will be SERIAL/AUTO_INCREMENT/ auto-generating
Table(
@@ -1457,7 +1457,7 @@ string values::
Negative integer indexes accommodated by Core result rows
---------------------------------------------------------
-The :class:`.RowProxy` object now accomodates single negative integer indexes
+The :class:`.RowProxy` object now accommodates single negative integer indexes
like a regular Python sequence, both in the pure Python and C-extension
version. Previously, negative values would only work in slices::
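As an illustrative aside (not part of the patch; the diff context truncates the original example): a minimal sketch of negative row indexing, using an in-memory SQLite engine purely for self-containment.

```python
from sqlalchemy import create_engine, text

# in-memory SQLite engine just for demonstration
engine = create_engine("sqlite://")

with engine.connect() as conn:
    row = conn.execute(text("SELECT 1 AS a, 2 AS b, 3 AS c")).first()

# negative integer indexes now work like a regular Python sequence
first, last = row[0], row[-1]
print(first, last)
```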
@@ -1472,9 +1472,9 @@ version. Previously, negative values would only work in slices::
The ``Enum`` type now does in-Python validation of values
---------------------------------------------------------
-To accomodate for Python native enumerated objects, as well as for edge
+To accommodate for Python native enumerated objects, as well as for edge
cases such as that of where a non-native ENUM type is used within an ARRAY
-and a CHECK contraint is infeasible, the :class:`.Enum` datatype now adds
+and a CHECK constraint is infeasible, the :class:`.Enum` datatype now adds
in-Python validation of input values when the :paramref:`.Enum.validate_strings`
flag is used (1.1.0b2)::
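As an illustrative aside (not part of the patch; the table name and enum values here are hypothetical): with ``validate_strings=True``, a string that is not a member of the enumeration is rejected in Python at statement execution time. SQLite is used only to keep the sketch self-contained.

```python
from sqlalchemy import Column, Enum, Integer, MetaData, Table, create_engine

metadata = MetaData()
# hypothetical table for illustration
t = Table(
    "data",
    metadata,
    Column("id", Integer, primary_key=True),
    Column(
        "value",
        Enum("one", "two", "three", name="value_enum", validate_strings=True),
    ),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(t.insert(), {"value": "two"})  # a valid member passes
    try:
        conn.execute(t.insert(), {"value": "four"})  # not a member
        rejected = False
    except Exception:
        # the invalid string is rejected before reaching the database
        rejected = True
```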
@@ -1630,10 +1630,10 @@ UNIONs with parenthesized SELECT statements is much less common than the
JSON support added to Core
--------------------------
-As MySQL now has a JSON datatype in addition to the Postgresql JSON datatype,
+As MySQL now has a JSON datatype in addition to the PostgreSQL JSON datatype,
the core now gains a :class:`sqlalchemy.types.JSON` datatype that is the basis
for both of these. Using this type allows access to the "getitem" operator
-as well as the "getpath" operator in a way that is agnostic across Postgresql
+as well as the "getpath" operator in a way that is agnostic across PostgreSQL
and MySQL.
The new datatype also has a series of improvements to the handling of
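As an illustrative aside (not part of the patch; the table and key names are hypothetical): the same Core ``JSON`` "getitem" expression compiles to each backend's native syntax, a compile-only sketch with no database required.

```python
from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects import mysql, postgresql
from sqlalchemy.types import JSON

# hypothetical table using the backend-agnostic Core JSON type
t = Table("t", MetaData(), Column("doc", JSON))

# one expression, two renderings
expr = t.c.doc["some_key"]
pg_sql = str(expr.compile(dialect=postgresql.dialect()))
my_sql = str(expr.compile(dialect=mysql.dialect()))
print(pg_sql)  # PostgreSQL uses the -> operator
print(my_sql)  # MySQL uses JSON_EXTRACT
```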
@@ -1735,7 +1735,7 @@ and its descendant types.
Array support added to Core; new ANY and ALL operators
------------------------------------------------------
-Along with the enhancements made to the Postgresql :class:`.postgresql.ARRAY`
+Along with the enhancements made to the PostgreSQL :class:`.postgresql.ARRAY`
type described in :ref:`change_3503`, the base class of :class:`.postgresql.ARRAY`
itself has been moved to Core in a new class :class:`.types.ARRAY`.
@@ -1744,7 +1744,7 @@ such as ``array_agg()`` and ``unnest()``. In support of these constructs
for not just PostgreSQL but also potentially for other array-capable backends
in the future such as DB2, the majority of array logic for SQL expressions
is now in Core. The :class:`.types.ARRAY` type still **only works on
-Postgresql**, however it can be used directly, supporting special array
+PostgreSQL**, however it can be used directly, supporting special array
use cases such as indexed access, as well as support for the ANY and ALL::
mytable = Table("mytable", metadata,
@@ -1774,7 +1774,7 @@ type as well.
The :func:`.sql.expression.any_` and :func:`.sql.expression.all_` operators
are open-ended at the Core level, however their interpretation by backend
-databases is limited. On the Postgresql backend, the two operators
+databases is limited. On the PostgreSQL backend, the two operators
**only accept array values**. Whereas on the MySQL backend, they
**only accept subquery values**. On MySQL, one can use an expression
such as::
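As an illustrative aside (not part of the patch; the diff context truncates the MySQL example above, and the table here is hypothetical): the PostgreSQL array form of ``any_`` can be sketched compile-only as follows.

```python
from sqlalchemy import ARRAY, Column, Integer, MetaData, Table, any_
from sqlalchemy.dialects import postgresql

# hypothetical table with an array column
t = Table("t", MetaData(), Column("data", ARRAY(Integer)))

# renders the comparison using the ANY operator against the array column
expr = any_(t.c.data) == 5
sql = str(expr.compile(dialect=postgresql.dialect()))
print(sql)
```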
@@ -1799,7 +1799,7 @@ which is now available using :class:`.array_agg`::
from sqlalchemy import func
stmt = select([func.array_agg(table.c.value)])
-A Postgresql element for an aggregate ORDER BY is also added via
+A PostgreSQL element for an aggregate ORDER BY is also added via
:class:`.postgresql.aggregate_order_by`::
from sqlalchemy.dialects.postgresql import aggregate_order_by
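As an illustrative aside (not part of the patch; the table and column names are hypothetical): a compile-only sketch of ``aggregate_order_by`` producing an ``ORDER BY`` inside the aggregate.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, func
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import aggregate_order_by

# hypothetical table for illustration
t = Table("t", MetaData(), Column("a", String), Column("b", Integer))

# aggregates t.a, ordered by t.b descending, within array_agg()
expr = func.array_agg(aggregate_order_by(t.c.a, t.c.b.desc()))
sql = str(expr.compile(dialect=postgresql.dialect()))
print(sql)
```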
@@ -1849,7 +1849,7 @@ TypeDecorator now works with Enum, Boolean, "schema" types automatically
The :class:`.SchemaType` types include types such as :class:`.Enum`
and :class:`.Boolean` which, in addition to corresponding to a database
-type, also generate either a CHECK constraint or in the case of Postgresql
+type, also generate either a CHECK constraint or in the case of PostgreSQL
ENUM a new CREATE TYPE statement, will now work automatically with
:class:`.TypeDecorator` recipes. Previously, a :class:`.TypeDecorator` for
an :class:`.postgresql.ENUM` had to look like this::
@@ -2004,7 +2004,7 @@ That is, a joinedload of ``.pets`` looks like::
ON pets_1.person_id = CAST(person.id AS INTEGER)
Without the CAST in the ON clause of the join, strongly-typed databases
-such as Postgresql will refuse to implicitly compare the integer and fail.
+such as PostgreSQL will refuse to implicitly compare the integer and fail.
The lazyload case of ``.pets`` relies upon replacing
the ``Person.id`` column at load time with a bound parameter, which receives
@@ -2094,7 +2094,7 @@ necessary to worry about the names themselves in the textual SQL.
:ref:`change_3501`
-Dialect Improvements and Changes - Postgresql
+Dialect Improvements and Changes - PostgreSQL
=============================================
.. _change_3529:
@@ -2102,12 +2102,12 @@ Dialect Improvements and Changes - Postgresql
Support for INSERT..ON CONFLICT (DO UPDATE | DO NOTHING)
--------------------------------------------------------
-The ``ON CONFLICT`` clause of ``INSERT`` added to Postgresql as of
-version 9.5 is now supported using a Postgresql-specific version of the
+The ``ON CONFLICT`` clause of ``INSERT`` added to PostgreSQL as of
+version 9.5 is now supported using a PostgreSQL-specific version of the
:class:`.Insert` object, via :func:`sqlalchemy.dialects.postgresql.dml.insert`.
This :class:`.Insert` subclass adds two new methods :meth:`.Insert.on_conflict_do_update`
and :meth:`.Insert.on_conflict_do_nothing` which implement the full syntax
-supported by Posgresql 9.5 in this area::
+supported by PostgreSQL 9.5 in this area::
from sqlalchemy.dialects.postgresql import insert
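As an illustrative aside (not part of the patch; the ``users`` table is hypothetical): a compile-only sketch of the new construct performing an upsert via ``on_conflict_do_update``.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

# hypothetical table for illustration
users = Table(
    "users",
    MetaData(),
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

stmt = insert(users).values(id=1, name="alice")
# "excluded" refers to the row that was proposed for insertion
upsert = stmt.on_conflict_do_update(
    index_elements=["id"],
    set_={"name": stmt.excluded.name},
)
sql = str(upsert.compile(dialect=postgresql.dialect()))
print(sql)
```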
@@ -2180,13 +2180,13 @@ This includes:
type :class:`.Integer` where we could no longer perform indexed access
for the remaining dimensions, unless we used :func:`.cast` or :func:`.type_coerce`.
-* The :class:`~.postgresql.JSON` and :class:`~.postgresql.JSONB` types now mirror what Postgresql
+* The :class:`~.postgresql.JSON` and :class:`~.postgresql.JSONB` types now mirror what PostgreSQL
itself does for indexed access. This means that all indexed access for
a :class:`~.postgresql.JSON` or :class:`~.postgresql.JSONB` type returns an expression that itself
is *always* :class:`~.postgresql.JSON` or :class:`~.postgresql.JSONB` itself, unless the
:attr:`~.postgresql.JSON.Comparator.astext` modifier is used. This means that whether
the indexed access of the JSON structure ultimately refers to a string,
- list, number, or other JSON structure, Postgresql always considers it
+ list, number, or other JSON structure, PostgreSQL always considers it
to be JSON itself unless it is explicitly cast differently. Like
the :class:`~.postgresql.ARRAY` type, this means that it is now straightforward
to produce JSON expressions with multiple levels of indexed access::
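As an illustrative aside (not part of the patch; the diff context truncates the original example, and the table and key names here are hypothetical): chained indexed access stays JSONB-typed, while appending ``.astext`` renders the final hop with the text-returning operator.

```python
from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects import postgresql

# hypothetical table for illustration
t = Table("t", MetaData(), Column("doc", postgresql.JSONB))

# each indexed access is itself JSONB; .astext switches the result to text
as_json = t.c.doc["k1"]["k2"]
as_text = t.c.doc["k1"]["k2"].astext
sql = str(as_text.compile(dialect=postgresql.dialect()))
print(sql)
```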
@@ -2215,7 +2215,7 @@ The JSON cast() operation now requires ``.astext`` is called explicitly
As part of the changes in :ref:`change_3503`, the workings of the
:meth:`.ColumnElement.cast` operator on :class:`.postgresql.JSON` and
-:class:`.postgresql.JSONB` no longer implictly invoke the
+:class:`.postgresql.JSONB` no longer implicitly invoke the
:attr:`.postgresql.JSON.Comparator.astext` modifier; Postgresql's JSON/JSONB types
support CAST operations to each other without the "astext" aspect.
@@ -2269,7 +2269,7 @@ emits::
Check constraints now reflect
-----------------------------
-The Postgresql dialect now supports reflection of CHECK constraints
+The PostgreSQL dialect now supports reflection of CHECK constraints
both within the method :meth:`.Inspector.get_check_constraints` as well
as within :class:`.Table` reflection within the :attr:`.Table.constraints`
collection.
@@ -2326,7 +2326,7 @@ Support for FOR UPDATE SKIP LOCKED / FOR NO KEY UPDATE / FOR KEY SHARE
The new parameters :paramref:`.GenerativeSelect.with_for_update.skip_locked`
and :paramref:`.GenerativeSelect.with_for_update.key_share`
in both Core and ORM apply a modification to a "SELECT...FOR UPDATE"
-or "SELECT...FOR SHARE" query on the Postgresql backend:
+or "SELECT...FOR SHARE" query on the PostgreSQL backend:
* SELECT FOR NO KEY UPDATE::
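As an illustrative aside (not part of the patch; the diff context truncates the original list, and the ``jobs`` table is hypothetical): a compile-only sketch of both new flags against the PostgreSQL dialect.

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql

# hypothetical table for illustration
jobs = Table("jobs", MetaData(), Column("id", Integer, primary_key=True))

# FOR UPDATE SKIP LOCKED
skip = jobs.select().with_for_update(skip_locked=True)
# key_share=True weakens FOR UPDATE to FOR NO KEY UPDATE
no_key = jobs.select().with_for_update(key_share=True)

skip_sql = str(skip.compile(dialect=postgresql.dialect()))
no_key_sql = str(no_key.compile(dialect=postgresql.dialect()))
print(skip_sql)
print(no_key_sql)
```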
@@ -2352,8 +2352,8 @@ A new type :class:`.mysql.JSON` is added to the MySQL dialect supporting
the JSON type newly added to MySQL 5.7. This type provides both persistence
of JSON as well as rudimentary indexed-access using the ``JSON_EXTRACT``
function internally. An indexable JSON column that works across MySQL
-and Postgresql can be achieved by using the :class:`.types.JSON` datatype
-common to both MySQL and Postgresql.
+and PostgreSQL can be achieved by using the :class:`.types.JSON` datatype
+common to both MySQL and PostgreSQL.
.. seealso::
@@ -2463,7 +2463,7 @@ the version of SQLite noted in that migration note, 3.7.15.2, was the *last*
version of SQLite to actually have this limitation! The next release was
3.7.16 and support for right nested joins was quietly added. In 1.1, the work
to identify the specific SQLite version and source commit where this change
-was made was done (SQlite's changelog refers to it with the cryptic phrase "Enhance
+was made was done (SQLite's changelog refers to it with the cryptic phrase "Enhance
the query optimizer to exploit transitive join constraints" without linking
to any issue number, change number, or further explanation), and the workarounds
present in this change are now lifted for SQLite when the DBAPI reports
@@ -2505,7 +2505,7 @@ Improved Support for Remote Schemas
The SQLite dialect now implements :meth:`.Inspector.get_schema_names`
and additionally has improved support for tables and indexes that are
created and reflected from a remote schema, which in SQLite is a
-dataase that is assigned a name via the ``ATTACH`` statement; previously,
+database that is assigned a name via the ``ATTACH`` statement; previously,
the ``CREATE INDEX`` DDL didn't work correctly for a schema-bound table
and the :meth:`.Inspector.get_foreign_keys` method will now indicate the
given schema in the results. Cross-schema foreign keys aren't supported.
diff --git a/doc/build/dialects/postgresql.rst b/doc/build/dialects/postgresql.rst
index 56b14a8d0..38b2e4741 100644
--- a/doc/build/dialects/postgresql.rst
+++ b/doc/build/dialects/postgresql.rst
@@ -9,7 +9,7 @@ PostgreSQL Data Types
------------------------
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be
-valid with Postgresql are importable from the top level dialect, whether
+valid with PostgreSQL are importable from the top level dialect, whether
they originate from :mod:`sqlalchemy.types` or from the local dialect::
from sqlalchemy.dialects.postgresql import \
@@ -160,7 +160,7 @@ For example:
PostgreSQL Constraint Types
---------------------------
-SQLAlchemy supports Postgresql EXCLUDE constraints via the
+SQLAlchemy supports PostgreSQL EXCLUDE constraints via the
:class:`ExcludeConstraint` class:
.. autoclass:: ExcludeConstraint