Diffstat (limited to 'lib/sqlalchemy/dialects')
-rw-r--r--  lib/sqlalchemy/dialects/firebird/fdb.py         |   2
-rw-r--r--  lib/sqlalchemy/dialects/mssql/base.py           |  68
-rw-r--r--  lib/sqlalchemy/dialects/mysql/base.py           |  53
-rw-r--r--  lib/sqlalchemy/dialects/mysql/dml.py            |  10
-rw-r--r--  lib/sqlalchemy/dialects/mysql/json.py           |   2
-rw-r--r--  lib/sqlalchemy/dialects/oracle/base.py          |  45
-rw-r--r--  lib/sqlalchemy/dialects/oracle/cx_oracle.py     |   8
-rw-r--r--  lib/sqlalchemy/dialects/postgresql/array.py     |  40
-rw-r--r--  lib/sqlalchemy/dialects/postgresql/base.py      | 141
-rw-r--r--  lib/sqlalchemy/dialects/postgresql/dml.py       |  13
-rw-r--r--  lib/sqlalchemy/dialects/postgresql/ext.py       |  20
-rw-r--r--  lib/sqlalchemy/dialects/postgresql/hstore.py    |   2
-rw-r--r--  lib/sqlalchemy/dialects/postgresql/json.py      |  48
-rw-r--r--  lib/sqlalchemy/dialects/postgresql/psycopg2.py  |  35
-rw-r--r--  lib/sqlalchemy/dialects/sqlite/base.py          |  52
-rw-r--r--  lib/sqlalchemy/dialects/sqlite/json.py          |   4
-rw-r--r--  lib/sqlalchemy/dialects/sqlite/pysqlite.py      |   3
17 files changed, 312 insertions(+), 234 deletions(-)
diff --git a/lib/sqlalchemy/dialects/firebird/fdb.py b/lib/sqlalchemy/dialects/firebird/fdb.py
index 46acd0559..7a7b87536 100644
--- a/lib/sqlalchemy/dialects/firebird/fdb.py
+++ b/lib/sqlalchemy/dialects/firebird/fdb.py
@@ -38,7 +38,7 @@ accept every argument that Kinterbasdb does.
of Firebird, and setting this flag to False will also cause the
SQLAlchemy ORM to ignore its usage. The behavior can also be controlled on a
per-execution basis using the ``enable_rowcount`` option with
- :meth:`.Connection.execution_options`::
+ :meth:`_engine.Connection.execution_options`::
conn = engine.connect().execution_options(enable_rowcount=True)
r = conn.execute(stmt)
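The per-execution pattern shown in the docstring above can be sketched without a Firebird server; an in-memory SQLite engine stands in, since ``execution_options`` simply stores the flag on the connection for the dialect to consult (a minimal sketch, assuming SQLAlchemy is installed):

```python
from sqlalchemy import create_engine

# SQLite stands in for a Firebird engine; the ``enable_rowcount``
# option is carried on the connection's execution options.
engine = create_engine("sqlite://")
conn = engine.connect().execution_options(enable_rowcount=True)
opts = conn.get_execution_options()
```

The Firebird dialect would consult this flag when deciding whether to fetch rowcounts after a statement.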
diff --git a/lib/sqlalchemy/dialects/mssql/base.py b/lib/sqlalchemy/dialects/mssql/base.py
index 43f3aeb04..b5c49246f 100644
--- a/lib/sqlalchemy/dialects/mssql/base.py
+++ b/lib/sqlalchemy/dialects/mssql/base.py
@@ -18,8 +18,10 @@ SQL Server provides so-called "auto incrementing" behavior using the
``IDENTITY`` construct, which can be placed on any single integer column in a
table. SQLAlchemy considers ``IDENTITY`` within its default "autoincrement"
behavior for an integer primary key column, described at
-:paramref:`.Column.autoincrement`. This means that by default, the first
-integer primary key column in a :class:`.Table` will be considered to be the
+:paramref:`_schema.Column.autoincrement`. This means that by default,
+the first
+integer primary key column in a :class:`_schema.Table`
+will be considered to be the
identity column and will generate DDL as such::
from sqlalchemy import Table, MetaData, Column, Integer
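The default ``IDENTITY`` generation described above can be observed by compiling the DDL against the MSSQL dialect without a live connection (a sketch, with illustrative table and column names; exact rendering of the identity clause varies slightly between SQLAlchemy versions):

```python
from sqlalchemy import Table, MetaData, Column, Integer
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table(
    "t", m,
    Column("id", Integer, primary_key=True),  # first integer PK column
    Column("x", Integer),
)
# The integer primary key column is rendered with the IDENTITY keyword.
ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
```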
@@ -41,7 +43,7 @@ The above example will generate DDL as:
)
For the case where this default generation of ``IDENTITY`` is not desired,
-specify ``False`` for the :paramref:`.Column.autoincrement` flag,
+specify ``False`` for the :paramref:`_schema.Column.autoincrement` flag,
on the first integer primary key column::
m = MetaData()
@@ -51,8 +53,9 @@ on the first integer primary key column::
m.create_all(engine)
To add the ``IDENTITY`` keyword to a non-primary key column, specify
-``True`` for the :paramref:`.Column.autoincrement` flag on the desired
-:class:`.Column` object, and ensure that :paramref:`.Column.autoincrement`
+``True`` for the :paramref:`_schema.Column.autoincrement` flag on the desired
+:class:`_schema.Column` object, and ensure that
+:paramref:`_schema.Column.autoincrement`
is set to ``False`` on any integer primary key column::
m = MetaData()
@@ -62,7 +65,8 @@ is set to ``False`` on any integer primary key column::
m.create_all(engine)
.. versionchanged:: 1.3 Added ``mssql_identity_start`` and
- ``mssql_identity_increment`` parameters to :class:`.Column`. These replace
+ ``mssql_identity_increment`` parameters to :class:`_schema.Column`.
+ These replace
the use of the :class:`.Sequence` object in order to specify these values.
.. deprecated:: 1.3
@@ -85,7 +89,8 @@ is set to ``False`` on any integer primary key column::
marked with IDENTITY will be rejected by SQL Server. In order for the
value to be accepted, a session-level option "SET IDENTITY_INSERT" must be
enabled. The SQLAlchemy SQL Server dialect will perform this operation
- automatically when using a core :class:`~.sql.expression.Insert` construct; if the
+ automatically when using a core :class:`_expression.Insert`
+ construct; if the
execution specifies a value for the IDENTITY column, the "IDENTITY_INSERT"
option will be enabled for the span of that statement's invocation. However,
this scenario is not high performing and should not be relied upon for
@@ -99,7 +104,7 @@ Controlling "Start" and "Increment"
Specific control over the "start" and "increment" values for
the ``IDENTITY`` generator are provided using the
``mssql_identity_start`` and ``mssql_identity_increment`` parameters
-passed to the :class:`.Column` object::
+passed to the :class:`_schema.Column` object::
from sqlalchemy import Table, Integer, Column
@@ -112,7 +117,7 @@ passed to the :class:`.Column` object::
Column('name', String(20))
)
-The CREATE TABLE for the above :class:`.Table` object would be:
+The CREATE TABLE for the above :class:`_schema.Table` object would be:
.. sourcecode:: sql
@@ -123,7 +128,7 @@ The CREATE TABLE for the above :class:`.Table` object would be:
.. versionchanged:: 1.3 The ``mssql_identity_start`` and
``mssql_identity_increment`` parameters are now used to affect the
- ``IDENTITY`` generator for a :class:`.Column` under SQL Server.
+ ``IDENTITY`` generator for a :class:`_schema.Column` under SQL Server.
Previously, the :class:`.Sequence` object was used. As SQL Server now
supports real sequences as a separate construct, :class:`.Sequence` will be
functional in the normal way in a future SQLAlchemy version.
@@ -171,7 +176,8 @@ The process for fetching this value has several variants:
A table that contains an ``IDENTITY`` column will prohibit an INSERT statement
that refers to the identity column explicitly. The SQLAlchemy dialect will
-detect when an INSERT construct, created using a core :func:`~.sql.expression.insert`
+detect when an INSERT construct, created using a core
+:func:`_expression.insert`
construct (not a plain string SQL), refers to the identity column, and
in this case will emit ``SET IDENTITY_INSERT ON`` prior to the insert
statement proceeding, and ``SET IDENTITY_INSERT OFF`` subsequent to the
@@ -213,7 +219,7 @@ MAX on VARCHAR / NVARCHAR
-------------------------
SQL Server supports the special string "MAX" within the
-:class:`.sqltypes.VARCHAR` and :class:`.sqltypes.NVARCHAR` datatypes,
+:class:`_types.VARCHAR` and :class:`_types.NVARCHAR` datatypes,
to indicate "maximum length possible". The dialect currently handles this as
a length of "None" in the base type, rather than supplying a
dialect-specific version of these types, so that a base type
@@ -238,7 +244,7 @@ specified by the string argument "collation"::
from sqlalchemy import VARCHAR
Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))
-When such a column is associated with a :class:`.Table`, the
+When such a column is associated with a :class:`_schema.Table`, the
CREATE TABLE statement for this column will yield::
login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL
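The COLLATE rendering above can be verified by compiling the CREATE TABLE statement directly (a sketch with an illustrative table name):

```python
from sqlalchemy import Table, MetaData, Column, VARCHAR
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table(
    "accounts", m,
    Column("login", VARCHAR(32, collation="Latin1_General_CI_AS")),
)
# The collation is emitted as a COLLATE clause after the type.
ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
```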
@@ -291,7 +297,8 @@ both via a dialect-specific parameter
accepted by :func:`.create_engine`,
as well as the :paramref:`.Connection.execution_options.isolation_level`
argument as passed to
-:meth:`.Connection.execution_options`. This feature works by issuing the
+:meth:`_engine.Connection.execution_options`.
+This feature works by issuing the
command ``SET TRANSACTION ISOLATION LEVEL <level>`` for
each new connection.
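The engine-level form of the isolation-level parameter can be sketched with SQLite so it runs without a SQL Server instance; the same :paramref:`.create_engine.isolation_level` parameter applies to the MSSQL dialect (names of levels differ per backend):

```python
from sqlalchemy import create_engine

# SQLite stands in here; MSSQL would accept e.g. "READ COMMITTED".
engine = create_engine("sqlite://", isolation_level="SERIALIZABLE")
conn = engine.connect()
level = conn.get_isolation_level()
```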
@@ -358,19 +365,22 @@ Per
`SQL Server 2012/2014 Documentation <http://technet.microsoft.com/en-us/library/ms187993.aspx>`_,
the ``NTEXT``, ``TEXT`` and ``IMAGE`` datatypes are to be removed from SQL
Server in a future release. SQLAlchemy normally relates these types to the
-:class:`.UnicodeText`, :class:`.Text` and :class:`.LargeBinary` datatypes.
+:class:`.UnicodeText`, :class:`_expression.TextClause` and
+:class:`.LargeBinary` datatypes.
In order to accommodate this change, a new flag ``deprecate_large_types``
is added to the dialect, which will be automatically set based on detection
of the server version in use, if not otherwise set by the user. The
behavior of this flag is as follows:
-* When this flag is ``True``, the :class:`.UnicodeText`, :class:`.Text` and
+* When this flag is ``True``, the :class:`.UnicodeText`,
+ :class:`_expression.TextClause` and
:class:`.LargeBinary` datatypes, when used to render DDL, will render the
types ``NVARCHAR(max)``, ``VARCHAR(max)``, and ``VARBINARY(max)``,
respectively. This is a new behavior as of the addition of this flag.
-* When this flag is ``False``, the :class:`.UnicodeText`, :class:`.Text` and
+* When this flag is ``False``, the :class:`.UnicodeText`,
+ :class:`_expression.TextClause` and
:class:`.LargeBinary` datatypes, when used to render DDL, will render the
types ``NTEXT``, ``TEXT``, and ``IMAGE``,
respectively. This is the long-standing behavior of these types.
@@ -391,9 +401,10 @@ behavior of this flag is as follows:
* Complete control over whether the "old" or "new" types are rendered is
available in all SQLAlchemy versions by using the UPPERCASE type objects
- instead: :class:`.types.NVARCHAR`, :class:`.types.VARCHAR`,
- :class:`.types.VARBINARY`, :class:`.types.TEXT`, :class:`.mssql.NTEXT`,
- :class:`.mssql.IMAGE` will always remain fixed and always output exactly that
+ instead: :class:`_types.NVARCHAR`, :class:`_types.VARCHAR`,
+ :class:`_types.VARBINARY`, :class:`_types.TEXT`, :class:`_mssql.NTEXT`,
+ :class:`_mssql.IMAGE`
+ will always remain fixed and always output exactly that
type.
.. versionadded:: 1.0.0
@@ -406,7 +417,8 @@ Multipart Schema Names
SQL Server schemas sometimes require multiple parts to their "schema"
qualifier, that is, including the database name and owner name as separate
tokens, such as ``mydatabase.dbo.some_table``. These multipart names can be set
-at once using the :paramref:`.Table.schema` argument of :class:`.Table`::
+at once using the :paramref:`_schema.Table.schema` argument of
+:class:`_schema.Table`::
Table(
"some_table", metadata,
@@ -609,7 +621,7 @@ generated primary key values via IDENTITY columns or other
server side defaults. MS-SQL does not
allow the usage of OUTPUT INSERTED on tables that have triggers.
To disable the usage of OUTPUT INSERTED on a per-table basis,
-specify ``implicit_returning=False`` for each :class:`.Table`
+specify ``implicit_returning=False`` for each :class:`_schema.Table`
which has triggers::
Table('mytable', metadata,
@@ -645,8 +657,8 @@ verifies that the version identifier matched. When this condition occurs, a
warning will be emitted but the operation will proceed.
The use of OUTPUT INSERTED can be disabled by setting the
-:paramref:`.Table.implicit_returning` flag to ``False`` on a particular
-:class:`.Table`, which in declarative looks like::
+:paramref:`_schema.Table.implicit_returning` flag to ``False`` on a particular
+:class:`_schema.Table`, which in declarative looks like::
class MyTable(Base):
__tablename__ = 'mytable'
@@ -1072,7 +1084,7 @@ class TIMESTAMP(sqltypes._Binary):
.. seealso::
- :class:`.mssql.ROWVERSION`
+ :class:`_mssql.ROWVERSION`
"""
@@ -1117,7 +1129,7 @@ class ROWVERSION(TIMESTAMP):
The ROWVERSION datatype does **not** reflect (e.g. introspect) from the
database as itself; the returned datatype will be
- :class:`.mssql.TIMESTAMP`.
+ :class:`_mssql.TIMESTAMP`.
This is a read-only datatype that does not support INSERT of values.
@@ -1125,7 +1137,7 @@ class ROWVERSION(TIMESTAMP):
.. seealso::
- :class:`.mssql.TIMESTAMP`
+ :class:`_mssql.TIMESTAMP`
"""
@@ -1145,7 +1157,7 @@ class VARBINARY(sqltypes.VARBINARY, sqltypes.LargeBinary):
This type is present to support "deprecate_large_types" mode where
either ``VARBINARY(max)`` or IMAGE is rendered. Otherwise, this type
- object is redundant vs. :class:`.types.VARBINARY`.
+ object is redundant vs. :class:`_types.VARBINARY`.
.. versionadded:: 1.0.0
diff --git a/lib/sqlalchemy/dialects/mysql/base.py b/lib/sqlalchemy/dialects/mysql/base.py
index a075b4d6b..e44dfa829 100644
--- a/lib/sqlalchemy/dialects/mysql/base.py
+++ b/lib/sqlalchemy/dialects/mysql/base.py
@@ -79,7 +79,8 @@ to ``MyISAM`` for this value, although newer versions may be defaulting
to ``InnoDB``. The ``InnoDB`` engine is typically preferred for its support
of transactions and foreign keys.
-A :class:`.Table` that is created in a MySQL database with a storage engine
+A :class:`_schema.Table`
+that is created in a MySQL database with a storage engine
of ``MyISAM`` will be essentially non-transactional, meaning any
INSERT/UPDATE/DELETE statement referring to this table will be invoked as
autocommit. It also will have no support for foreign key constraints; while
@@ -122,7 +123,8 @@ All MySQL dialects support setting of transaction isolation level both via a
dialect-specific parameter :paramref:`.create_engine.isolation_level` accepted
by :func:`.create_engine`, as well as the
:paramref:`.Connection.execution_options.isolation_level` argument as passed to
-:meth:`.Connection.execution_options`. This feature works by issuing the
+:meth:`_engine.Connection.execution_options`.
+This feature works by issuing the
command ``SET SESSION TRANSACTION ISOLATION LEVEL <level>`` for each new
connection. For the special AUTOCOMMIT isolation level, DBAPI-specific
techniques are used.
@@ -174,7 +176,8 @@ foreign key::
)
You can disable this behavior by passing ``False`` to the
-:paramref:`~.Column.autoincrement` argument of :class:`.Column`. This flag
+:paramref:`_schema.Column.autoincrement` argument of :class:`_schema.Column`.
+This flag
can also be used to enable auto-increment on a secondary column in a
multi-column key for some storage engines::
@@ -301,7 +304,8 @@ MySQL features two varieties of identifier "quoting style", one using
backticks and the other using quotes, e.g. ```some_identifier``` vs.
``"some_identifier"``. All MySQL dialects detect which version
is in use by checking the value of ``sql_mode`` when a connection is first
-established with a particular :class:`.Engine`. This quoting style comes
+established with a particular :class:`_engine.Engine`.
+This quoting style comes
into play when rendering table and column names as well as when reflecting
existing database structures. The detection is entirely automatic and
no special configuration is needed to use either quoting style.
@@ -323,7 +327,8 @@ available.
* INSERT..ON DUPLICATE KEY UPDATE: See
:ref:`mysql_insert_on_duplicate_key_update`
-* SELECT pragma, use :meth:`.Select.prefix_with` and :meth:`.Query.prefix_with`::
+* SELECT pragma, use :meth:`_expression.Select.prefix_with` and
+ :meth:`_query.Query.prefix_with`::
select(...).prefix_with(['HIGH_PRIORITY', 'SQL_SMALL_RESULT'])
@@ -331,11 +336,13 @@ available.
update(..., mysql_limit=10)
-* optimizer hints, use :meth:`.Select.prefix_with` and :meth:`.Query.prefix_with`::
+* optimizer hints, use :meth:`_expression.Select.prefix_with` and
+ :meth:`_query.Query.prefix_with`::
select(...).prefix_with("/*+ NO_RANGE_OPTIMIZATION(t4 PRIMARY) */")
-* index hints, use :meth:`.Select.with_hint` and :meth:`.Query.with_hint`::
+* index hints, use :meth:`_expression.Select.with_hint` and
+ :meth:`_query.Query.with_hint`::
select(...).with_hint(some_table, "USE INDEX xyz")
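The ``prefix_with()`` usage listed above can be sketched by compiling against the MySQL dialect (modern 1.4+ ``select()`` calling form; table and column names are illustrative):

```python
from sqlalchemy import table, column, select
from sqlalchemy.dialects import mysql

t = table("t", column("x"))
# The pragma is rendered immediately after the SELECT keyword.
stmt = select(t.c.x).prefix_with("SQL_SMALL_RESULT")
sql = str(stmt.compile(dialect=mysql.dialect()))
```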
@@ -379,7 +386,8 @@ from the proposed insertion. These values are normally specified using
keyword arguments passed to the
:meth:`~.mysql.Insert.on_duplicate_key_update`
given column key values (usually the name of the column, unless it
-specifies :paramref:`.Column.key`) as keys and literal or SQL expressions
+specifies :paramref:`_schema.Column.key`
+) as keys and literal or SQL expressions
as values::
on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
@@ -396,7 +404,8 @@ forms are accepted, including a single dictionary::
as well as a list of 2-tuples, which will automatically provide
a parameter-ordered UPDATE statement in a manner similar to that described
-at :ref:`updates_order_parameters`. Unlike the :class:`.Update` object,
+at :ref:`updates_order_parameters`. Unlike the :class:`_expression.Update`
+object,
no special flag is needed to specify the intent since the argument form used in
this context is unambiguous::
@@ -412,9 +421,10 @@ this context is unambiguous::
.. warning::
- The :meth:`.Insert.on_duplicate_key_update` method does **not** take into
+ The :meth:`_expression.Insert.on_duplicate_key_update`
+ method does **not** take into
 account Python-side default UPDATE values or generation functions,
- e.g. those specified using :paramref:`.Column.onupdate`.
+ e.g. those specified using :paramref:`_schema.Column.onupdate`.
These values will not be exercised for an ON DUPLICATE KEY style of UPDATE,
unless they are manually specified explicitly in the parameters.
@@ -423,7 +433,7 @@ this context is unambiguous::
In order to refer to the proposed insertion row, the special alias
:attr:`~.mysql.Insert.inserted` is available as an attribute on
the :class:`.mysql.Insert` object; this object is a
-:class:`.ColumnCollection` which contains all columns of the target
+:class:`_expression.ColumnCollection` which contains all columns of the target
table::
from sqlalchemy.dialects.mysql import insert
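The ``inserted`` alias described above can be exercised by compiling an INSERT..ON DUPLICATE KEY UPDATE statement without a MySQL server (a sketch; table and column names are illustrative):

```python
from sqlalchemy import Table, MetaData, Column, Integer, String
from sqlalchemy.dialects import mysql
from sqlalchemy.dialects.mysql import insert

m = MetaData()
t = Table(
    "my_table", m,
    Column("id", Integer, primary_key=True),
    Column("data", String(50)),
)

stmt = insert(t).values(id=1, data="inserted value")
# Refer to the proposed insertion row via the ``inserted`` alias;
# it renders as the VALUES() function in the UPDATE clause.
stmt = stmt.on_duplicate_key_update(data=stmt.inserted.data)
sql = str(stmt.compile(dialect=mysql.dialect()))
```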
@@ -586,7 +596,8 @@ Foreign Key Arguments to Avoid
MySQL does not support the foreign key arguments "DEFERRABLE", "INITIALLY",
or "MATCH". Using the ``deferrable`` or ``initially`` keyword argument with
-:class:`.ForeignKeyConstraint` or :class:`.ForeignKey` will have the effect of
+:class:`_schema.ForeignKeyConstraint` or :class:`_schema.ForeignKey`
+will have the effect of
these keywords being rendered in a DDL expression, which will then raise an
error on MySQL. In order to use these keywords on a foreign key while having
them ignored on a MySQL backend, use a custom compile rule::
@@ -601,7 +612,7 @@ them ignored on a MySQL backend, use a custom compile rule::
.. versionchanged:: 0.9.0 - the MySQL backend no longer silently ignores
the ``deferrable`` or ``initially`` keyword arguments of
- :class:`.ForeignKeyConstraint` and :class:`.ForeignKey`.
+ :class:`_schema.ForeignKeyConstraint` and :class:`_schema.ForeignKey`.
The "MATCH" keyword is in fact more insidious, and is explicitly disallowed
by SQLAlchemy in conjunction with the MySQL backend. This argument is
@@ -613,7 +624,7 @@ ForeignKeyConstraint at DDL definition time.
.. versionadded:: 0.9.0 - the MySQL backend will raise a
:class:`.CompileError` when the ``match`` keyword is used with
- :class:`.ForeignKeyConstraint` or :class:`.ForeignKey`.
+ :class:`_schema.ForeignKeyConstraint` or :class:`_schema.ForeignKey`.
Reflection of Foreign Key Constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -645,14 +656,16 @@ these constraints. However, MySQL does not have a unique constraint
construct that is separate from a unique index; that is, the "UNIQUE"
constraint on MySQL is equivalent to creating a "UNIQUE INDEX".
-When reflecting these constructs, the :meth:`.Inspector.get_indexes`
-and the :meth:`.Inspector.get_unique_constraints` methods will **both**
+When reflecting these constructs, the
+:meth:`_reflection.Inspector.get_indexes`
+and the :meth:`_reflection.Inspector.get_unique_constraints`
+methods will **both**
return an entry for a UNIQUE index in MySQL. However, when performing
full table reflection using ``Table(..., autoload=True)``,
the :class:`.UniqueConstraint` construct is
-**not** part of the fully reflected :class:`.Table` construct under any
+**not** part of the fully reflected :class:`_schema.Table` construct under any
circumstances; this construct is always represented by a :class:`.Index`
-with the ``unique=True`` setting present in the :attr:`.Table.indexes`
+with the ``unique=True`` setting present in the :attr:`_schema.Table.indexes`
collection.
@@ -1438,7 +1451,7 @@ class MySQLCompiler(compiler.SQLCompiler):
.. note::
- this usage is deprecated. :meth:`.Select.prefix_with`
+ this usage is deprecated. :meth:`_expression.Select.prefix_with`
should be used for special keywords at the start
of a SELECT.
diff --git a/lib/sqlalchemy/dialects/mysql/dml.py b/lib/sqlalchemy/dialects/mysql/dml.py
index 531b31bc3..c19ed6c0b 100644
--- a/lib/sqlalchemy/dialects/mysql/dml.py
+++ b/lib/sqlalchemy/dialects/mysql/dml.py
@@ -31,12 +31,13 @@ class Insert(StandardInsert):
This attribute provides all columns in this row to be referenceable
such that they will render within a ``VALUES()`` function inside the
ON DUPLICATE KEY UPDATE clause. The attribute is named ``.inserted``
- so as not to conflict with the existing :meth:`.Insert.values` method.
+ so as not to conflict with the existing
+ :meth:`_expression.Insert.values` method.
.. seealso::
:ref:`mysql_insert_on_duplicate_key_update` - example of how
- to use :attr:`.Insert.inserted`
+ to use :attr:`_expression.Insert.inserted`
"""
return self.inserted_alias.columns
@@ -56,7 +57,7 @@ class Insert(StandardInsert):
.. warning:: This dictionary does **not** take into account
Python-specified default UPDATE values or generation functions,
- e.g. those specified using :paramref:`.Column.onupdate`.
+ e.g. those specified using :paramref:`_schema.Column.onupdate`.
These values will not be exercised for an ON DUPLICATE KEY UPDATE
style of UPDATE, unless values are manually specified here.
@@ -71,7 +72,8 @@ class Insert(StandardInsert):
Passing a list of 2-tuples indicates that the parameter assignments
in the UPDATE clause should be ordered as sent, in a manner similar
- to that described for the :class:`.Update` construct overall
+ to that described for the :class:`_expression.Update`
+ construct overall
in :ref:`updates_order_parameters`::
insert().on_duplicate_key_update(
diff --git a/lib/sqlalchemy/dialects/mysql/json.py b/lib/sqlalchemy/dialects/mysql/json.py
index 10354842f..733a4d696 100644
--- a/lib/sqlalchemy/dialects/mysql/json.py
+++ b/lib/sqlalchemy/dialects/mysql/json.py
@@ -17,7 +17,7 @@ class JSON(sqltypes.JSON):
support JSON at the time of this writing.
The :class:`.mysql.JSON` type supports persistence of JSON values
- as well as the core index operations provided by :class:`.types.JSON`
+ as well as the core index operations provided by :class:`_types.JSON`
datatype, by adapting the operations to render the ``JSON_EXTRACT``
function at the database level.
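The ``JSON_EXTRACT`` adaptation mentioned above can be seen by compiling an index operation against the MySQL dialect (a sketch; the key name is illustrative):

```python
from sqlalchemy import Table, MetaData, Column
from sqlalchemy.dialects import mysql

m = MetaData()
t = Table("t", m, Column("data", mysql.JSON))
# Core index access on a JSON column renders JSON_EXTRACT on MySQL.
expr = t.c.data["some_key"]
sql = str(expr.compile(dialect=mysql.dialect()))
```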
diff --git a/lib/sqlalchemy/dialects/oracle/base.py b/lib/sqlalchemy/dialects/oracle/base.py
index ae869b921..bbf65371c 100644
--- a/lib/sqlalchemy/dialects/oracle/base.py
+++ b/lib/sqlalchemy/dialects/oracle/base.py
@@ -61,7 +61,7 @@ of isolation, however the SQLAlchemy Oracle dialect currently only has
explicit support for "READ COMMITTED". It is possible to emit a
"SET TRANSACTION" statement on a connection in order to use SERIALIZABLE
isolation, however the SQLAlchemy dialect will remain unaware of this setting,
-such as if the :meth:`.Connection.get_isolation_level` method is used;
+such as if the :meth:`_engine.Connection.get_isolation_level` method is used;
this method is hardcoded to return "READ COMMITTED" right now.
The AUTOCOMMIT isolation level is also supported by the cx_Oracle dialect.
@@ -304,7 +304,7 @@ Synonym/DBLINK Reflection
When using reflection with Table objects, the dialect can optionally search
for tables indicated by synonyms, either in local or remote schemas or
accessed over DBLINK, by passing the flag ``oracle_resolve_synonyms=True`` as
-a keyword argument to the :class:`.Table` construct::
+a keyword argument to the :class:`_schema.Table` construct::
some_table = Table('some_table', autoload=True,
autoload_with=some_engine,
knows how to locate the table's information using DBLINK syntax (e.g.
``@dblink``).
``oracle_resolve_synonyms`` is accepted wherever reflection arguments are
-accepted, including methods such as :meth:`.MetaData.reflect` and
-:meth:`.Inspector.get_columns`.
+accepted, including methods such as :meth:`_schema.MetaData.reflect` and
+:meth:`_reflection.Inspector.get_columns`.
If synonyms are not in use, this flag should be left disabled.
@@ -332,18 +332,22 @@ The Oracle dialect can return information about foreign key, unique, and
CHECK constraints, as well as indexes on tables.
Raw information regarding these constraints can be acquired using
-:meth:`.Inspector.get_foreign_keys`, :meth:`.Inspector.get_unique_constraints`,
-:meth:`.Inspector.get_check_constraints`, and :meth:`.Inspector.get_indexes`.
+:meth:`_reflection.Inspector.get_foreign_keys`,
+:meth:`_reflection.Inspector.get_unique_constraints`,
+:meth:`_reflection.Inspector.get_check_constraints`, and
+:meth:`_reflection.Inspector.get_indexes`.
.. versionchanged:: 1.2 The Oracle dialect can now reflect UNIQUE and
CHECK constraints.
-When using reflection at the :class:`.Table` level, the :class:`.Table`
+When using reflection at the :class:`_schema.Table` level, the
+:class:`_schema.Table`
will also include these constraints.
Note the following caveats:
-* When using the :meth:`.Inspector.get_check_constraints` method, Oracle
+* When using the :meth:`_reflection.Inspector.get_check_constraints` method,
+ Oracle
builds a special "IS NOT NULL" constraint for columns that specify
"NOT NULL". This constraint is **not** returned by default; to include
the "IS NOT NULL" constraints, pass the flag ``include_all=True``::
@@ -355,11 +359,13 @@ Note the following caveats:
all_check_constraints = inspector.get_check_constraints(
"some_table", include_all=True)
-* in most cases, when reflecting a :class:`.Table`, a UNIQUE constraint will
+* in most cases, when reflecting a :class:`_schema.Table`,
+ a UNIQUE constraint will
**not** be available as a :class:`.UniqueConstraint` object, as Oracle
mirrors unique constraints with a UNIQUE index in most cases (the exception
seems to be when two or more unique constraints represent the same columns);
- the :class:`.Table` will instead represent these using :class:`.Index`
+ the :class:`_schema.Table` will instead represent these using
+ :class:`.Index`
with the ``unique=True`` flag set.
* Oracle creates an implicit index for the primary key of a table; this index
@@ -371,11 +377,12 @@ Note the following caveats:
Table names with SYSTEM/SYSAUX tablespaces
-------------------------------------------
-The :meth:`.Inspector.get_table_names` and
-:meth:`.Inspector.get_temp_table_names`
+The :meth:`_reflection.Inspector.get_table_names` and
+:meth:`_reflection.Inspector.get_temp_table_names`
methods each return a list of table names for the current engine. These methods
are also part of the reflection which occurs within an operation such as
-:meth:`.MetaData.reflect`. By default, these operations exclude the ``SYSTEM``
+:meth:`_schema.MetaData.reflect`. By default,
+these operations exclude the ``SYSTEM``
and ``SYSAUX`` tablespaces from the operation. In order to change this, the
default list of tablespaces excluded can be changed at the engine level using
the ``exclude_tablespaces`` parameter::
@@ -392,15 +399,15 @@ DateTime Compatibility
Oracle has no datatype known as ``DATETIME``, it instead has only ``DATE``,
which can actually store a date and time value. For this reason, the Oracle
-dialect provides a type :class:`.oracle.DATE` which is a subclass of
+dialect provides a type :class:`_oracle.DATE` which is a subclass of
:class:`.DateTime`. This type has no special behavior, and is only
present as a "marker" for this type; additionally, when a database column
is reflected and the type is reported as ``DATE``, the time-supporting
-:class:`.oracle.DATE` type is used.
+:class:`_oracle.DATE` type is used.
-.. versionchanged:: 0.9.4 Added :class:`.oracle.DATE` to subclass
+.. versionchanged:: 0.9.4 Added :class:`_oracle.DATE` to subclass
:class:`.DateTime`. This is a change as previous versions
- would reflect a ``DATE`` column as :class:`.types.DATE`, which subclasses
+ would reflect a ``DATE`` column as :class:`_types.DATE`, which subclasses
:class:`.Date`. The only significance here is for schemes that are
examining the type of column for use in special Python translations or
for migrating schemas to other database backends.
@@ -411,7 +418,7 @@ Oracle Table Options
-------------------------
The CREATE TABLE phrase supports the following options with Oracle
-in conjunction with the :class:`.Table` construct:
+in conjunction with the :class:`_schema.Table` construct:
* ``ON COMMIT``::
@@ -584,7 +591,7 @@ class DATE(sqltypes.DateTime):
"""Provide the oracle DATE type.
This type has no special Python behavior, except that it subclasses
- :class:`.types.DateTime`; this is to suit the fact that the Oracle
+ :class:`_types.DateTime`; this is to suit the fact that the Oracle
``DATE`` type supports a time value.
.. versionadded:: 0.9.4
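The subclassing relationship stated above can be checked directly, with no Oracle connection needed:

```python
from sqlalchemy import DateTime
from sqlalchemy.dialects import oracle

# oracle.DATE subclasses DateTime (not merely Date), reflecting that
# Oracle's DATE type stores a time component.
is_datetime = issubclass(oracle.DATE, DateTime)
```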
diff --git a/lib/sqlalchemy/dialects/oracle/cx_oracle.py b/lib/sqlalchemy/dialects/oracle/cx_oracle.py
index 0f41d7ed8..2cbf5b04a 100644
--- a/lib/sqlalchemy/dialects/oracle/cx_oracle.py
+++ b/lib/sqlalchemy/dialects/oracle/cx_oracle.py
@@ -97,8 +97,8 @@ as that the ``NLS_LANG`` environment variable is set appropriately, so that
the VARCHAR2 and CLOB datatypes can accommodate the data.
In the case that the Oracle database is not configured with a Unicode character
-set, the two options are to use the :class:`.oracle.NCHAR` and
-:class:`.oracle.NCLOB` datatypes explicitly, or to pass the flag
+set, the two options are to use the :class:`_oracle.NCHAR` and
+:class:`_oracle.NCLOB` datatypes explicitly, or to pass the flag
``use_nchar_for_unicode=True`` to :func:`.create_engine`, which will cause the
SQLAlchemy dialect to use NCHAR/NCLOB for the :class:`.Unicode` /
:class:`.UnicodeText` datatypes instead of VARCHAR/CLOB.
@@ -260,12 +260,12 @@ Precision Numerics
SQLAlchemy's numeric types can handle receiving and returning values as Python
``Decimal`` objects or float objects. When a :class:`.Numeric` object, or a
-subclass such as :class:`.Float`, :class:`.oracle.DOUBLE_PRECISION` etc. is in
+subclass such as :class:`.Float`, :class:`_oracle.DOUBLE_PRECISION` etc. is in
use, the :paramref:`.Numeric.asdecimal` flag determines if values should be
coerced to ``Decimal`` upon return, or returned as float objects. To make
matters more complicated under Oracle, Oracle's ``NUMBER`` type can also
represent integer values if the "scale" is zero, so the Oracle-specific
-:class:`.oracle.NUMBER` type takes this into account as well.
+:class:`_oracle.NUMBER` type takes this into account as well.
The cx_Oracle dialect makes extensive use of connection- and cursor-level
"outputtypehandler" callables in order to coerce numeric values as requested.
diff --git a/lib/sqlalchemy/dialects/postgresql/array.py b/lib/sqlalchemy/dialects/postgresql/array.py
index 9f0f676cd..a3537ba60 100644
--- a/lib/sqlalchemy/dialects/postgresql/array.py
+++ b/lib/sqlalchemy/dialects/postgresql/array.py
@@ -25,7 +25,7 @@ def Any(other, arrexpr, operator=operators.eq):
.. seealso::
- :func:`.expression.any_`
+ :func:`_expression.any_`
"""
@@ -39,7 +39,7 @@ def All(other, arrexpr, operator=operators.eq):
.. seealso::
- :func:`.expression.all_`
+ :func:`_expression.all_`
"""
@@ -68,14 +68,16 @@ class array(expression.Tuple):
ARRAY[%(param_3)s, %(param_4)s, %(param_5)s]) AS anon_1
An instance of :class:`.array` will always have the datatype
- :class:`.ARRAY`. The "inner" type of the array is inferred from
+ :class:`_types.ARRAY`. The "inner" type of the array is inferred from
the values present, unless the ``type_`` keyword argument is passed::
array(['foo', 'bar'], type_=CHAR)
Multidimensional arrays are produced by nesting :class:`.array` constructs.
- The dimensionality of the final :class:`.ARRAY` type is calculated by
- recursively adding the dimensions of the inner :class:`.ARRAY` type::
+ The dimensionality of the final :class:`_types.ARRAY`
+ type is calculated by
+ recursively adding the dimensions of the inner :class:`_types.ARRAY`
+ type::
stmt = select([
array([
@@ -93,7 +95,7 @@ class array(expression.Tuple):
.. seealso::
- :class:`.postgresql.ARRAY`
+ :class:`_postgresql.ARRAY`
"""
@@ -150,11 +152,11 @@ class ARRAY(sqltypes.ARRAY):
"""PostgreSQL ARRAY type.
- .. versionchanged:: 1.1 The :class:`.postgresql.ARRAY` type is now
- a subclass of the core :class:`.types.ARRAY` type.
+ .. versionchanged:: 1.1 The :class:`_postgresql.ARRAY` type is now
+ a subclass of the core :class:`_types.ARRAY` type.
- The :class:`.postgresql.ARRAY` type is constructed in the same way
- as the core :class:`.types.ARRAY` type; a member type is required, and a
+ The :class:`_postgresql.ARRAY` type is constructed in the same way
+ as the core :class:`_types.ARRAY` type; a member type is required, and a
number of dimensions is recommended if the type is to be used for more
than one dimension::
@@ -164,11 +166,12 @@ class ARRAY(sqltypes.ARRAY):
Column("data", postgresql.ARRAY(Integer, dimensions=2))
)
- The :class:`.postgresql.ARRAY` type provides all operations defined on the
- core :class:`.types.ARRAY` type, including support for "dimensions",
+ The :class:`_postgresql.ARRAY` type provides all operations defined on the
+ core :class:`_types.ARRAY` type, including support for "dimensions",
indexed access, and simple matching such as
:meth:`.types.ARRAY.Comparator.any` and
- :meth:`.types.ARRAY.Comparator.all`. :class:`.postgresql.ARRAY` class also
+ :meth:`.types.ARRAY.Comparator.all`. :class:`_postgresql.ARRAY`
+ class also
provides PostgreSQL-specific methods for containment operations, including
:meth:`.postgresql.ARRAY.Comparator.contains`,
:meth:`.postgresql.ARRAY.Comparator.contained_by`, and
@@ -176,24 +179,25 @@ class ARRAY(sqltypes.ARRAY):
mytable.c.data.contains([1, 2])
- The :class:`.postgresql.ARRAY` type may not be supported on all
+ The :class:`_postgresql.ARRAY` type may not be supported on all
PostgreSQL DBAPIs; it is currently known to work on psycopg2 only.
- Additionally, the :class:`.postgresql.ARRAY` type does not work directly in
+ Additionally, the :class:`_postgresql.ARRAY`
+ type does not work directly in
conjunction with the :class:`.ENUM` type. For a workaround, see the
special type at :ref:`postgresql_array_of_enum`.
.. seealso::
- :class:`.types.ARRAY` - base array type
+ :class:`_types.ARRAY` - base array type
- :class:`.postgresql.array` - produces a literal array value.
+ :class:`_postgresql.array` - produces a literal array value.
"""
class Comparator(sqltypes.ARRAY.Comparator):
- """Define comparison operations for :class:`.ARRAY`.
+ """Define comparison operations for :class:`_types.ARRAY`.
Note that these operations are in addition to those provided
by the base :class:`.types.ARRAY.Comparator` class, including
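The containment operators described in this docstring can be exercised without a live database by compiling an expression against the PostgreSQL dialect. A minimal sketch, using a hypothetical ``mytable``; the rendered operator should be PostgreSQL's ``@>``:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql

metadata = MetaData()
mytable = Table(
    "mytable",
    metadata,
    Column("data", postgresql.ARRAY(Integer)),
)

# contains() renders the PostgreSQL array containment operator
expr = mytable.c.data.contains([1, 2])
sql = str(expr.compile(dialect=postgresql.dialect()))
```

``contained_by()`` and ``overlap()`` compile the same way, to ``<@`` and ``&&`` respectively.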
diff --git a/lib/sqlalchemy/dialects/postgresql/base.py b/lib/sqlalchemy/dialects/postgresql/base.py
index cb41a8f65..670de4ebf 100644
--- a/lib/sqlalchemy/dialects/postgresql/base.py
+++ b/lib/sqlalchemy/dialects/postgresql/base.py
@@ -86,7 +86,7 @@ All PostgreSQL dialects support setting of transaction isolation level
both via a dialect-specific parameter
:paramref:`.create_engine.isolation_level` accepted by :func:`.create_engine`,
as well as the :paramref:`.Connection.execution_options.isolation_level`
-argument as passed to :meth:`.Connection.execution_options`.
+argument as passed to :meth:`_engine.Connection.execution_options`.
When using a non-psycopg2 dialect, this feature works by issuing the command
``SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level>`` for
each new connection. For the special AUTOCOMMIT isolation level,
@@ -129,11 +129,13 @@ Remote-Schema Table Introspection and PostgreSQL search_path
name schemas **other** than ``public`` explicitly within ``Table`` definitions.
The PostgreSQL dialect can reflect tables from any schema. The
-:paramref:`.Table.schema` argument, or alternatively the
+:paramref:`_schema.Table.schema` argument, or alternatively the
:paramref:`.MetaData.reflect.schema` argument determines which schema will
-be searched for the table or tables. The reflected :class:`.Table` objects
+be searched for the table or tables. The reflected :class:`_schema.Table`
+objects
will in all cases retain this ``.schema`` attribute as was specified.
-However, with regards to tables which these :class:`.Table` objects refer to
+However, with regards to tables which these :class:`_schema.Table`
+objects refer to
via foreign key constraint, a decision must be made as to how the ``.schema``
is represented in those remote tables, in the case where that remote
schema name is also a member of the current
@@ -205,7 +207,8 @@ reflection process as follows::
...
<sqlalchemy.engine.result.ResultProxy object at 0x101612ed0>
-The above process would deliver to the :attr:`.MetaData.tables` collection
+The above process would deliver to the :attr:`_schema.MetaData.tables`
+collection
``referred`` table named **without** the schema::
>>> meta.tables['referred'].schema is None
@@ -214,8 +217,8 @@ The above process would deliver to the :attr:`.MetaData.tables` collection
To alter the behavior of reflection such that the referred schema is
maintained regardless of the ``search_path`` setting, use the
``postgresql_ignore_search_path`` option, which can be specified as a
-dialect-specific argument to both :class:`.Table` as well as
-:meth:`.MetaData.reflect`::
+dialect-specific argument to both :class:`_schema.Table` as well as
+:meth:`_schema.MetaData.reflect`::
>>> with engine.connect() as conn:
... conn.execute(text("SET search_path TO test_schema, public"))
@@ -239,7 +242,7 @@ We will now have ``test_schema.referred`` stored as schema-qualified::
you just stick to the simplest use pattern: leave the ``search_path`` set
to its default of ``public`` only, never refer to the name ``public`` as
an explicit schema name otherwise, and refer to all other schema names
- explicitly when building up a :class:`.Table` object. The options
+ explicitly when building up a :class:`_schema.Table` object. The options
described here are only for those users who can't, or prefer not to, stay
within these guidelines.
@@ -251,8 +254,8 @@ which is in the ``public`` (i.e. default) schema will always have the
``.schema`` attribute set to ``None``.
.. versionadded:: 0.9.2 Added the ``postgresql_ignore_search_path``
- dialect-level option accepted by :class:`.Table` and
- :meth:`.MetaData.reflect`.
+ dialect-level option accepted by :class:`_schema.Table` and
+ :meth:`_schema.MetaData.reflect`.
.. seealso::
@@ -304,7 +307,7 @@ or they may be *inferred* by stating the columns and conditions that comprise
the indexes.
SQLAlchemy provides ``ON CONFLICT`` support via the PostgreSQL-specific
-:func:`.postgresql.insert()` function, which provides
+:func:`_postgresql.insert()` function, which provides
the generative methods :meth:`~.postgresql.Insert.on_conflict_do_update`
and :meth:`~.postgresql.Insert.on_conflict_do_nothing`::
@@ -331,7 +334,7 @@ Both methods supply the "target" of the conflict using either the
named constraint or by column inference:
* The :paramref:`.Insert.on_conflict_do_update.index_elements` argument
- specifies a sequence containing string column names, :class:`.Column`
+ specifies a sequence containing string column names, :class:`_schema.Column`
objects, and/or SQL expression elements, which would identify a unique
index::
@@ -381,8 +384,9 @@ named constraint or by column inference:
constraint is unnamed, then inference will be used, where the expressions
and optional WHERE clause of the constraint will be spelled out in the
construct. This use is especially convenient
- to refer to the named or unnamed primary key of a :class:`.Table` using the
- :attr:`.Table.primary_key` attribute::
+ to refer to the named or unnamed primary key of a :class:`_schema.Table`
+ using the
+ :attr:`_schema.Table.primary_key` attribute::
do_update_stmt = insert_stmt.on_conflict_do_update(
constraint=my_table.primary_key,
@@ -407,17 +411,19 @@ for UPDATE::
.. warning::
- The :meth:`.Insert.on_conflict_do_update` method does **not** take into
+ The :meth:`_expression.Insert.on_conflict_do_update`
+ method does **not** take into
account Python-side default UPDATE values or generation functions, e.g.
- those specified using :paramref:`.Column.onupdate`.
+ those specified using :paramref:`_schema.Column.onupdate`.
These values will not be exercised for an ON CONFLICT style of UPDATE,
unless they are manually specified in the
:paramref:`.Insert.on_conflict_do_update.set_` dictionary.
In order to refer to the proposed insertion row, the special alias
:attr:`~.postgresql.Insert.excluded` is available as an attribute on
-the :class:`.postgresql.Insert` object; this object is a
-:class:`.ColumnCollection` which alias contains all columns of the target
+the :class:`_postgresql.Insert` object; this object is a
+:class:`_expression.ColumnCollection`
+which contains all columns of the target
table::
from sqlalchemy.dialects.postgresql import insert
@@ -432,7 +438,7 @@ table::
)
conn.execute(do_update_stmt)
-The :meth:`.Insert.on_conflict_do_update` method also accepts
+The :meth:`_expression.Insert.on_conflict_do_update` method also accepts
a WHERE clause using the :paramref:`.Insert.on_conflict_do_update.where`
parameter, which will limit those rows which receive an UPDATE::
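As a self-contained illustration of the ``on_conflict_do_update()`` usage described above, the construct can be compiled without any database connection; ``my_table`` and its columns here are hypothetical:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
my_table = Table(
    "my_table",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String),
)

# specify the conflict target via index_elements, and the SET
# actions via the set_ dictionary
stmt = insert(my_table).values(id=1, data="inserted value")
stmt = stmt.on_conflict_do_update(
    index_elements=[my_table.c.id],
    set_=dict(data="updated value"),
)
sql = str(stmt.compile(dialect=postgresql.dialect()))
```

The compiled string should include an ``ON CONFLICT (id) DO UPDATE`` clause appended to the INSERT.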
@@ -484,7 +490,8 @@ Full Text Search
----------------
SQLAlchemy makes available the PostgreSQL ``@@`` operator via the
-:meth:`.ColumnElement.match` method on any textual column expression.
+:meth:`_expression.ColumnElement.match`
+method on any textual column expression.
On a PostgreSQL dialect, an expression like the following::
select([sometable.c.text.match("search string")])
@@ -505,7 +512,7 @@ Emits the equivalent of::
SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
-The :class:`.postgresql.TSVECTOR` type can provide for explicit CAST::
+The :class:`_postgresql.TSVECTOR` type can provide for explicit CAST::
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy import select, cast
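The ``match()`` behavior described above can likewise be verified by compilation alone; ``sometable`` is a hypothetical table, and the exact ``to_tsquery`` form may vary between SQLAlchemy versions, but the ``@@`` operator is what the PostgreSQL dialect emits:

```python
from sqlalchemy import Column, MetaData, Table, Text
from sqlalchemy.dialects import postgresql

metadata = MetaData()
sometable = Table("sometable", metadata, Column("text", Text))

# ColumnElement.match() renders PostgreSQL's @@ text-search operator
expr = sometable.c.text.match("search string")
sql = str(expr.compile(dialect=postgresql.dialect()))
```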
@@ -613,8 +620,9 @@ The :class:`.Index` construct allows these to be specified via the
})
Note that the keys in the ``postgresql_ops`` dictionary are the "key" name of
-the :class:`.Column`, i.e. the name used to access it from the ``.c``
-collection of :class:`.Table`, which can be configured to be different than
+the :class:`_schema.Column`, i.e. the name used to access it from the ``.c``
+collection of :class:`_schema.Table`,
+which can be configured to be different than
the actual name of the column as expressed in the database.
If ``postgresql_ops`` is to be used against a complex SQL expression such
@@ -666,7 +674,7 @@ The tablespace can be specified on :class:`.Index` using the
.. versionadded:: 1.1
-Note that the same option is available on :class:`.Table` as well.
+Note that the same option is available on :class:`_schema.Table` as well.
.. _postgresql_index_concurrently:
@@ -722,25 +730,30 @@ PostgreSQL Index Reflection
The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the
UNIQUE CONSTRAINT construct is used. When inspecting a table using
-:class:`.Inspector`, the :meth:`.Inspector.get_indexes`
-and the :meth:`.Inspector.get_unique_constraints` will report on these
+:class:`_reflection.Inspector`, the :meth:`_reflection.Inspector.get_indexes`
+and the :meth:`_reflection.Inspector.get_unique_constraints`
+will report on these
two constructs distinctly; in the case of the index, the key
``duplicates_constraint`` will be present in the index entry if it is
detected as mirroring a constraint. When performing reflection using
``Table(..., autoload=True)``, the UNIQUE INDEX is **not** returned
-in :attr:`.Table.indexes` when it is detected as mirroring a
-:class:`.UniqueConstraint` in the :attr:`.Table.constraints` collection.
+in :attr:`_schema.Table.indexes` when it is detected as mirroring a
+:class:`.UniqueConstraint` in the
+:attr:`_schema.Table.constraints` collection.
-.. versionchanged:: 1.0.0 - :class:`.Table` reflection now includes
- :class:`.UniqueConstraint` objects present in the :attr:`.Table.constraints`
+.. versionchanged:: 1.0.0 - :class:`_schema.Table` reflection now includes
+ :class:`.UniqueConstraint` objects present in the
+ :attr:`_schema.Table.constraints`
collection; the PostgreSQL backend will no longer include a "mirrored"
- :class:`.Index` construct in :attr:`.Table.indexes` if it is detected
+ :class:`.Index` construct in :attr:`_schema.Table.indexes`
+ if it is detected
as corresponding to a unique constraint.
Special Reflection Options
--------------------------
-The :class:`.Inspector` used for the PostgreSQL backend is an instance
+The :class:`_reflection.Inspector`
+used for the PostgreSQL backend is an instance
of :class:`.PGInspector`, which offers additional methods::
from sqlalchemy import create_engine, inspect
@@ -759,7 +772,7 @@ PostgreSQL Table Options
------------------------
Several options for CREATE TABLE are supported directly by the PostgreSQL
-dialect in conjunction with the :class:`.Table` construct:
+dialect in conjunction with the :class:`_schema.Table` construct:
* ``TABLESPACE``::
@@ -805,13 +818,13 @@ ARRAY Types
The PostgreSQL dialect supports arrays, both as multidimensional column types
as well as array literals:
-* :class:`.postgresql.ARRAY` - ARRAY datatype
+* :class:`_postgresql.ARRAY` - ARRAY datatype
-* :class:`.postgresql.array` - array literal
+* :class:`_postgresql.array` - array literal
-* :func:`.postgresql.array_agg` - ARRAY_AGG SQL function
+* :func:`_postgresql.array_agg` - ARRAY_AGG SQL function
-* :class:`.postgresql.aggregate_order_by` - helper for PG's ORDER BY aggregate
+* :class:`_postgresql.aggregate_order_by` - helper for PG's ORDER BY aggregate
function syntax.
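The last two helpers in the list above combine naturally; a minimal sketch using a hypothetical table ``t``, compiled against the PostgreSQL dialect:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import aggregate_order_by, array_agg

metadata = MetaData()
t = Table("t", metadata, Column("a", Integer), Column("b", Integer))

# array_agg() with PG's ORDER BY-inside-aggregate syntax
expr = array_agg(aggregate_order_by(t.c.a, t.c.b.desc()))
sql = str(expr.compile(dialect=postgresql.dialect()))
```

The rendered function call embeds the ORDER BY clause inside the aggregate's parentheses.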
JSON Types
@@ -821,18 +834,18 @@ The PostgreSQL dialect supports both JSON and JSONB datatypes, including
psycopg2's native support and support for all of PostgreSQL's special
operators:
-* :class:`.postgresql.JSON`
+* :class:`_postgresql.JSON`
-* :class:`.postgresql.JSONB`
+* :class:`_postgresql.JSONB`
HSTORE Type
-----------
The PostgreSQL HSTORE type as well as hstore literals are supported:
-* :class:`.postgresql.HSTORE` - HSTORE datatype
+* :class:`_postgresql.HSTORE` - HSTORE datatype
-* :class:`.postgresql.hstore` - hstore literal
+* :class:`_postgresql.hstore` - hstore literal
ENUM Types
----------
@@ -843,7 +856,7 @@ complexity on the SQLAlchemy side in terms of when this type should be
CREATED and DROPPED. The type object is also an independently reflectable
entity. The following sections should be consulted:
-* :class:`.postgresql.ENUM` - DDL and typing support for ENUM.
+* :class:`_postgresql.ENUM` - DDL and typing support for ENUM.
* :meth:`.PGInspector.get_enums` - retrieve a listing of current ENUM types
@@ -858,7 +871,7 @@ Using ENUM with ARRAY
The combination of ENUM and ARRAY is not directly supported by backend
DBAPIs at this time. In order to send and receive an ARRAY of ENUM,
use the following workaround type, which decorates the
-:class:`.postgresql.ARRAY` datatype.
+:class:`_postgresql.ARRAY` datatype.
.. sourcecode:: python
@@ -1268,7 +1281,7 @@ PGUuid = UUID
class TSVECTOR(sqltypes.TypeEngine):
- """The :class:`.postgresql.TSVECTOR` type implements the PostgreSQL
+ """The :class:`_postgresql.TSVECTOR` type implements the PostgreSQL
text search type TSVECTOR.
It can be used to do full text queries on natural language
@@ -1289,12 +1302,12 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
"""PostgreSQL ENUM type.
- This is a subclass of :class:`.types.Enum` which includes
+ This is a subclass of :class:`_types.Enum` which includes
support for PG's ``CREATE TYPE`` and ``DROP TYPE``.
- When the builtin type :class:`.types.Enum` is used and the
+ When the builtin type :class:`_types.Enum` is used and the
:paramref:`.Enum.native_enum` flag is left at its default of
- True, the PostgreSQL backend will use a :class:`.postgresql.ENUM`
+ True, the PostgreSQL backend will use a :class:`_postgresql.ENUM`
type as the implementation, so the special create/drop rules
will be used.
@@ -1303,9 +1316,10 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
parent table, in that it may be "owned" by just a single table, or
may be shared among many tables.
- When using :class:`.types.Enum` or :class:`.postgresql.ENUM`
+ When using :class:`_types.Enum` or :class:`_postgresql.ENUM`
in an "inline" fashion, the ``CREATE TYPE`` and ``DROP TYPE`` is emitted
- corresponding to when the :meth:`.Table.create` and :meth:`.Table.drop`
+ corresponding to when the :meth:`_schema.Table.create` and
+ :meth:`_schema.Table.drop`
methods are called::
table = Table('sometable', metadata,
@@ -1316,9 +1330,9 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
table.drop(engine) # will emit DROP TABLE and DROP ENUM
To use a common enumerated type between multiple tables, the best
- practice is to declare the :class:`.types.Enum` or
- :class:`.postgresql.ENUM` independently, and associate it with the
- :class:`.MetaData` object itself::
+ practice is to declare the :class:`_types.Enum` or
+ :class:`_postgresql.ENUM` independently, and associate it with the
+ :class:`_schema.MetaData` object itself::
my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)
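The metadata-associated pattern above can be sketched end to end; the tables here are hypothetical, and no CREATE TYPE is emitted until ``metadata.create_all()`` or ``my_enum.create()`` is called against an engine:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import ENUM

metadata = MetaData()

# declared once, owned by the MetaData rather than any single table
my_enum = ENUM("a", "b", "c", name="myenum", metadata=metadata)

# the same type object is shared by both tables; only one
# CREATE TYPE / DROP TYPE is emitted at the metadata level
t1 = Table("t1", metadata, Column("id", Integer, primary_key=True),
           Column("value", my_enum))
t2 = Table("t2", metadata, Column("id", Integer, primary_key=True),
           Column("value", my_enum))
```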
@@ -1353,7 +1367,7 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
my_enum.create(engine)
my_enum.drop(engine)
- .. versionchanged:: 1.0.0 The PostgreSQL :class:`.postgresql.ENUM` type
+ .. versionchanged:: 1.0.0 The PostgreSQL :class:`_postgresql.ENUM` type
now behaves more strictly with regards to CREATE/DROP. A metadata-level
ENUM type will only be created and dropped at the metadata level,
not the table level, with the exception of
@@ -1366,10 +1380,10 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
native_enum = True
def __init__(self, *enums, **kw):
- """Construct an :class:`~.postgresql.ENUM`.
+ """Construct an :class:`_postgresql.ENUM`.
Arguments are the same as that of
- :class:`.types.Enum`, but also including
+ :class:`_types.Enum`, but also including
the following parameters.
:param create_type: Defaults to True.
@@ -1397,7 +1411,7 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
@classmethod
def adapt_emulated_to_native(cls, impl, **kw):
- """Produce a PostgreSQL native :class:`.postgresql.ENUM` from plain
+ """Produce a PostgreSQL native :class:`_postgresql.ENUM` from plain
:class:`.Enum`.
"""
@@ -1412,13 +1426,13 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
def create(self, bind=None, checkfirst=True):
"""Emit ``CREATE TYPE`` for this
- :class:`~.postgresql.ENUM`.
+ :class:`_postgresql.ENUM`.
If the underlying dialect does not support
PostgreSQL CREATE TYPE, no action is taken.
- :param bind: a connectable :class:`.Engine`,
- :class:`.Connection`, or similar object to emit
+ :param bind: a connectable :class:`_engine.Engine`,
+ :class:`_engine.Connection`, or similar object to emit
SQL.
:param checkfirst: if ``True``, a query against
the PG catalog will be first performed to see
@@ -1436,13 +1450,13 @@ class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
def drop(self, bind=None, checkfirst=True):
"""Emit ``DROP TYPE`` for this
- :class:`~.postgresql.ENUM`.
+ :class:`_postgresql.ENUM`.
If the underlying dialect does not support
PostgreSQL DROP TYPE, no action is taken.
- :param bind: a connectable :class:`.Engine`,
- :class:`.Connection`, or similar object to emit
+ :param bind: a connectable :class:`_engine.Engine`,
+ :class:`_engine.Connection`, or similar object to emit
SQL.
:param checkfirst: if ``True``, a query against
the PG catalog will be first performed to see
@@ -2276,7 +2290,8 @@ class PGInspector(reflection.Inspector):
def get_foreign_table_names(self, schema=None):
"""Return a list of FOREIGN TABLE names.
- Behavior is similar to that of :meth:`.Inspector.get_table_names`,
+ Behavior is similar to that of
+ :meth:`_reflection.Inspector.get_table_names`,
except that the list is limited to those tables that report a
``relkind`` value of ``f``.
diff --git a/lib/sqlalchemy/dialects/postgresql/dml.py b/lib/sqlalchemy/dialects/postgresql/dml.py
index 626f81018..70d26a94b 100644
--- a/lib/sqlalchemy/dialects/postgresql/dml.py
+++ b/lib/sqlalchemy/dialects/postgresql/dml.py
@@ -23,7 +23,7 @@ class Insert(StandardInsert):
Adds methods for PG-specific syntaxes such as ON CONFLICT.
- The :class:`.postgresql.Insert` object is created using the
+ The :class:`_postgresql.Insert` object is created using the
:func:`sqlalchemy.dialects.postgresql.insert` function.
.. versionadded:: 1.1
@@ -41,7 +41,7 @@ class Insert(StandardInsert):
.. seealso::
:ref:`postgresql_insert_on_conflict` - example of how
- to use :attr:`.Insert.excluded`
+ to use :attr:`_expression.Insert.excluded`
"""
return alias(self.table, name="excluded").columns
@@ -66,7 +66,7 @@ class Insert(StandardInsert):
or the constraint object itself if it has a .name attribute.
:param index_elements:
- A sequence consisting of string column names, :class:`.Column`
+ A sequence consisting of string column names, :class:`_schema.Column`
objects, or other column expression objects that will be used
to infer a target index.
@@ -78,12 +78,13 @@ class Insert(StandardInsert):
Required argument. A dictionary or other mapping object
with column names as keys and expressions or literals as values,
specifying the ``SET`` actions to take.
- If the target :class:`.Column` specifies a ".key" attribute distinct
+ If the target :class:`_schema.Column` specifies a ".key"
+ attribute distinct
from the column name, that key should be used.
.. warning:: This dictionary does **not** take into account
Python-specified default UPDATE values or generation functions,
- e.g. those specified using :paramref:`.Column.onupdate`.
+ e.g. those specified using :paramref:`_schema.Column.onupdate`.
These values will not be exercised for an ON CONFLICT style of
UPDATE, unless they are manually specified in the
:paramref:`.Insert.on_conflict_do_update.set_` dictionary.
@@ -122,7 +123,7 @@ class Insert(StandardInsert):
or the constraint object itself if it has a .name attribute.
:param index_elements:
- A sequence consisting of string column names, :class:`.Column`
+ A sequence consisting of string column names, :class:`_schema.Column`
objects, or other column expression objects that will be used
to infer a target index.
diff --git a/lib/sqlalchemy/dialects/postgresql/ext.py b/lib/sqlalchemy/dialects/postgresql/ext.py
index f11919b4b..e64920719 100644
--- a/lib/sqlalchemy/dialects/postgresql/ext.py
+++ b/lib/sqlalchemy/dialects/postgresql/ext.py
@@ -46,7 +46,7 @@ class aggregate_order_by(expression.ColumnElement):
.. seealso::
- :class:`.array_agg`
+ :class:`_functions.array_agg`
"""
@@ -113,7 +113,8 @@ class ExcludeConstraint(ColumnCollectionConstraint):
where=(Column('group') != 'some group')
)
- The constraint is normally embedded into the :class:`.Table` construct
+ The constraint is normally embedded into the :class:`_schema.Table`
+ construct
directly, or added later using :meth:`.append_constraint`::
some_table = Table(
@@ -136,11 +137,14 @@ class ExcludeConstraint(ColumnCollectionConstraint):
A sequence of two tuples of the form ``(column, operator)`` where
"column" is a SQL expression element or a raw SQL string, most
- typically a :class:`.Column` object, and "operator" is a string
+ typically a :class:`_schema.Column` object,
+ and "operator" is a string
containing the operator to use. In order to specify a column name
- when a :class:`.Column` object is not available, while ensuring
+ when a :class:`_schema.Column` object is not available,
+ while ensuring
that any necessary quoting rules take effect, an ad-hoc
- :class:`.Column` or :func:`.sql.expression.column` object should be
+ :class:`_schema.Column` or :func:`_expression.column`
+ object should be
used.
:param name:
@@ -230,9 +234,9 @@ class ExcludeConstraint(ColumnCollectionConstraint):
def array_agg(*arg, **kw):
- """PostgreSQL-specific form of :class:`.array_agg`, ensures
- return type is :class:`.postgresql.ARRAY` and not
- the plain :class:`.types.ARRAY`, unless an explicit ``type_``
+ """PostgreSQL-specific form of :class:`_functions.array_agg`, ensures
+ return type is :class:`_postgresql.ARRAY` and not
+ the plain :class:`_types.ARRAY`, unless an explicit ``type_``
is passed.
.. versionadded:: 1.1
diff --git a/lib/sqlalchemy/dialects/postgresql/hstore.py b/lib/sqlalchemy/dialects/postgresql/hstore.py
index 7f90ffa0e..679805183 100644
--- a/lib/sqlalchemy/dialects/postgresql/hstore.py
+++ b/lib/sqlalchemy/dialects/postgresql/hstore.py
@@ -141,7 +141,7 @@ class HSTORE(sqltypes.Indexable, sqltypes.Concatenable, sqltypes.TypeEngine):
"""Construct a new :class:`.HSTORE`.
:param text_type: the type that should be used for indexed values.
- Defaults to :class:`.types.Text`.
+ Defaults to :class:`_types.Text`.
.. versionadded:: 1.1.0
diff --git a/lib/sqlalchemy/dialects/postgresql/json.py b/lib/sqlalchemy/dialects/postgresql/json.py
index 9661634c2..811159953 100644
--- a/lib/sqlalchemy/dialects/postgresql/json.py
+++ b/lib/sqlalchemy/dialects/postgresql/json.py
@@ -102,14 +102,14 @@ colspecs[sqltypes.JSON.JSONPathType] = JSONPathType
class JSON(sqltypes.JSON):
"""Represent the PostgreSQL JSON type.
- This type is a specialization of the Core-level :class:`.types.JSON`
- type. Be sure to read the documentation for :class:`.types.JSON` for
+ This type is a specialization of the Core-level :class:`_types.JSON`
+ type. Be sure to read the documentation for :class:`_types.JSON` for
important tips regarding treatment of NULL values and ORM use.
- .. versionchanged:: 1.1 :class:`.postgresql.JSON` is now a PostgreSQL-
- specific specialization of the new :class:`.types.JSON` type.
+ .. versionchanged:: 1.1 :class:`_postgresql.JSON` is now a PostgreSQL-
+ specific specialization of the new :class:`_types.JSON` type.
- The operators provided by the PostgreSQL version of :class:`.JSON`
+ The operators provided by the PostgreSQL version of :class:`_types.JSON`
include:
* Index operations (the ``->`` operator)::
@@ -142,13 +142,15 @@ class JSON(sqltypes.JSON):
data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == 'some value'
- .. versionchanged:: 1.1 The :meth:`.ColumnElement.cast` operator on
+ .. versionchanged:: 1.1 The :meth:`_expression.ColumnElement.cast`
+ operator on
JSON objects now requires that the :attr:`.JSON.Comparator.astext`
modifier be called explicitly, if the cast works only from a textual
string.
Index operations return an expression object whose type defaults to
- :class:`.JSON` by default, so that further JSON-oriented instructions
+ :class:`_types.JSON` by default,
+ so that further JSON-oriented instructions
may be called upon the result type.
Custom serializers and deserializers are specified at the dialect level,
@@ -166,16 +168,16 @@ class JSON(sqltypes.JSON):
.. seealso::
- :class:`.types.JSON` - Core level JSON type
+ :class:`_types.JSON` - Core level JSON type
- :class:`.JSONB`
+ :class:`_postgresql.JSONB`
""" # noqa
astext_type = sqltypes.Text()
def __init__(self, none_as_null=False, astext_type=None):
- """Construct a :class:`.JSON` type.
+ """Construct a :class:`_types.JSON` type.
:param none_as_null: if True, persist the value ``None`` as a
SQL NULL value, not the JSON encoding of ``null``. Note that
@@ -190,11 +192,11 @@ class JSON(sqltypes.JSON):
.. seealso::
- :attr:`.JSON.NULL`
+ :attr:`_types.JSON.NULL`
:param astext_type: the type to use for the
:attr:`.JSON.Comparator.astext`
- accessor on indexed attributes. Defaults to :class:`.types.Text`.
+ accessor on indexed attributes. Defaults to :class:`_types.Text`.
.. versionadded:: 1.1
@@ -204,7 +206,7 @@ class JSON(sqltypes.JSON):
self.astext_type = astext_type
class Comparator(sqltypes.JSON.Comparator):
- """Define comparison operations for :class:`.JSON`."""
+ """Define comparison operations for :class:`_types.JSON`."""
@property
def astext(self):
@@ -217,7 +219,7 @@ class JSON(sqltypes.JSON):
.. seealso::
- :meth:`.ColumnElement.cast`
+ :meth:`_expression.ColumnElement.cast`
"""
if isinstance(self.expr.right.type, sqltypes.JSON.JSONPathType):
@@ -241,7 +243,8 @@ ischema_names["json"] = JSON
class JSONB(JSON):
"""Represent the PostgreSQL JSONB type.
- The :class:`.JSONB` type stores arbitrary JSONB format data, e.g.::
+ The :class:`_postgresql.JSONB` type stores arbitrary JSONB format
+ data, e.g.::
data_table = Table('data_table', metadata,
Column('id', Integer, primary_key=True),
@@ -254,19 +257,22 @@ class JSONB(JSON):
data = {"key1": "value1", "key2": "value2"}
)
- The :class:`.JSONB` type includes all operations provided by
- :class:`.JSON`, including the same behaviors for indexing operations.
+ The :class:`_postgresql.JSONB` type includes all operations provided by
+ :class:`_types.JSON`, including the same behaviors for indexing
+ operations.
It also adds additional operators specific to JSONB, including
:meth:`.JSONB.Comparator.has_key`, :meth:`.JSONB.Comparator.has_all`,
:meth:`.JSONB.Comparator.has_any`, :meth:`.JSONB.Comparator.contains`,
and :meth:`.JSONB.Comparator.contained_by`.
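The JSONB-specific operators listed above compile to PostgreSQL's native operators; a minimal sketch with a hypothetical ``data_table``, compiled without a connection:

```python
from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects import postgresql

metadata = MetaData()
data_table = Table(
    "data_table",
    metadata,
    Column("data", postgresql.JSONB),
)

dialect = postgresql.dialect()

# has_key() renders the ? key-existence operator
has_key_sql = str(
    data_table.c.data.has_key("key1").compile(dialect=dialect)
)

# contains() renders the @> containment operator
contains_sql = str(
    data_table.c.data.contains({"key1": "value1"}).compile(dialect=dialect)
)
```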
- Like the :class:`.JSON` type, the :class:`.JSONB` type does not detect
+ Like the :class:`_types.JSON` type, the :class:`_postgresql.JSONB`
+ type does not detect
in-place changes when used with the ORM, unless the
:mod:`sqlalchemy.ext.mutable` extension is used.
Custom serializers and deserializers
- are shared with the :class:`.JSON` class, using the ``json_serializer``
+ are shared with the :class:`_types.JSON` class,
+ using the ``json_serializer``
and ``json_deserializer`` keyword arguments. These must be specified
at the dialect level using :func:`.create_engine`. When using
psycopg2, the serializers are associated with the jsonb type using
@@ -278,14 +284,14 @@ class JSONB(JSON):
.. seealso::
- :class:`.JSON`
+ :class:`_types.JSON`
"""
__visit_name__ = "JSONB"
class Comparator(JSON.Comparator):
- """Define comparison operations for :class:`.JSON`."""
+ """Define comparison operations for :class:`_types.JSON`."""
def has_key(self, other):
"""Boolean expression. Test for presence of a key. Note that the
diff --git a/lib/sqlalchemy/dialects/postgresql/psycopg2.py b/lib/sqlalchemy/dialects/postgresql/psycopg2.py
index 89a63fd47..6d2672bbe 100644
--- a/lib/sqlalchemy/dialects/postgresql/psycopg2.py
+++ b/lib/sqlalchemy/dialects/postgresql/psycopg2.py
@@ -119,18 +119,21 @@ Per-Statement/Connection Execution Options
-------------------------------------------
The following DBAPI-specific options are respected when used with
-:meth:`.Connection.execution_options`, :meth:`.Executable.execution_options`,
-:meth:`.Query.execution_options`, in addition to those not specific to DBAPIs:
+:meth:`_engine.Connection.execution_options`,
+:meth:`.Executable.execution_options`,
+:meth:`_query.Query.execution_options`,
+in addition to those not specific to DBAPIs:
* ``isolation_level`` - Set the transaction isolation level for the lifespan
- of a :class:`.Connection` (can only be set on a connection, not a statement
+ of a :class:`_engine.Connection` (can only be set on a connection,
+ not a statement
or query). See :ref:`psycopg2_isolation_level`.
* ``stream_results`` - Enable or disable usage of psycopg2 server side
cursors - this feature makes use of "named" cursors in combination with
special result handling methods so that result rows are not fully buffered.
If ``None`` or not set, the ``server_side_cursors`` option of the
- :class:`.Engine` is used.
+ :class:`_engine.Engine` is used.
* ``max_row_buffer`` - when using ``stream_results``, an integer value that
specifies the maximum number of rows to buffer at a time. This is
@@ -153,7 +156,8 @@ Modern versions of psycopg2 include a feature known as
have been shown in benchmarking to improve psycopg2's executemany()
performance, primarily with INSERT statements, by multiple orders of magnitude.
SQLAlchemy allows this extension to be used for all ``executemany()`` style
-calls invoked by an :class:`.Engine` when used with :ref:`multiple parameter
+calls invoked by an :class:`_engine.Engine`
+when used with :ref:`multiple parameter
sets <execute_multiple>`, which includes the use of this feature both by the
Core as well as by the ORM for inserts of objects with non-autogenerated
primary key values, by adding the ``executemany_mode`` flag to
@@ -180,13 +184,15 @@ Possible options for ``executemany_mode`` include:
semicolon. This is the same behavior as was provided by the
``use_batch_mode=True`` flag.
-* ``'values'``- For Core :func:`~.sql.expression.insert` constructs only (including those
+* ``'values'`` - For Core :func:`_expression.insert`
+ constructs only (including those
emitted by the ORM automatically), the ``psycopg2.extras.execute_values``
extension is used so that multiple parameter sets are grouped into a single
INSERT statement and joined together with multiple VALUES expressions. This
method requires that the string text of the VALUES clause inside the
INSERT statement is manipulated, so is only supported with a compiled
- :func:`~.sql.expression.insert` construct where the format is predictable. For all other
+ :func:`_expression.insert` construct where the format is predictable.
+ For all other
constructs, including plain textual INSERT statements not rendered by the
SQLAlchemy expression language compiler, the
``psycopg2.extras.execute_batch`` method is used. It is therefore important
@@ -213,7 +219,8 @@ more appropriate::
.. seealso::
:ref:`execute_multiple` - General information on using the
- :class:`.Connection` object to execute statements in such a way as to make
+ :class:`_engine.Connection`
+ object to execute statements in such a way as to make
use of the DBAPI ``.executemany()`` method.
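The multi-VALUES grouping described above can be sketched in plain Python. This is a hypothetical illustration of the rewritten statement shape only; the real work is performed internally by ``psycopg2.extras.execute_values``, and the helper name below is invented for the example:

```python
def group_into_values(insert_prefix, param_sets):
    # Hypothetical helper illustrating the rewritten statement shape;
    # not psycopg2's actual implementation.
    groups = ", ".join(
        "(" + ", ".join(repr(v) for v in params) + ")" for params in param_sets
    )
    return insert_prefix + " VALUES " + groups


sql = group_into_values(
    "INSERT INTO table_a (id, data)",
    [(1, "d1"), (2, "d2"), (3, "d3")],
)
# one INSERT statement with three VALUES groups, rather than three executions
```

The performance gain comes from reducing the number of client/server round trips: many parameter sets travel in a single statement.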
.. versionchanged:: 1.3.7 - Added support for
@@ -299,7 +306,8 @@ actually contain percent or parenthesis symbols; as SQLAlchemy in many cases
generates bound parameter names based on the name of a column, the presence
of these characters in a column name can lead to problems.
-There are two solutions to the issue of a :class:`.schema.Column` that contains
+There are two solutions to the issue of a :class:`_schema.Column`
+that contains
one of these characters in its name. One is to specify the
:paramref:`.schema.Column.key` for columns that have such names::
@@ -312,10 +320,12 @@ Above, an INSERT statement such as ``measurement.insert()`` will use
``measurement.c.size_meters > 10`` will derive the bound parameter name
from the ``size_meters`` key as well.
-.. versionchanged:: 1.0.0 - SQL expressions will use :attr:`.Column.key`
+.. versionchanged:: 1.0.0 - SQL expressions will use
+ :attr:`_schema.Column.key`
as the source of naming when anonymous bound parameters are created
in SQL expressions; previously, this behavior only applied to
- :meth:`.Table.insert` and :meth:`.Table.update` parameter names.
+ :meth:`_schema.Table.insert` and :meth:`_schema.Table.update`
+ parameter names.
The other solution is to use a positional format; psycopg2 allows use of the
"format" paramstyle, which can be passed to
@@ -352,7 +362,8 @@ As discussed in :ref:`postgresql_isolation_level`,
all PostgreSQL dialects support setting of transaction isolation level
both via the ``isolation_level`` parameter passed to :func:`.create_engine`,
as well as the ``isolation_level`` argument used by
-:meth:`.Connection.execution_options`. When using the psycopg2 dialect, these
+:meth:`_engine.Connection.execution_options`. When using the psycopg2
+dialect, these
options make use of psycopg2's ``set_isolation_level()`` connection method,
rather than emitting a PostgreSQL directive; this is because psycopg2's
API-level setting is always emitted at the start of each transaction in any
diff --git a/lib/sqlalchemy/dialects/sqlite/base.py b/lib/sqlalchemy/dialects/sqlite/base.py
index d3105f268..1e265a9eb 100644
--- a/lib/sqlalchemy/dialects/sqlite/base.py
+++ b/lib/sqlalchemy/dialects/sqlite/base.py
@@ -19,7 +19,7 @@ not provide out of the box functionality for translating values between Python
`datetime` objects and a SQLite-supported format. SQLAlchemy's own
:class:`~sqlalchemy.types.DateTime` and related types provide date formatting
and parsing functionality when SQLite is used. The implementation classes are
-:class:`~.sqlite.DATETIME`, :class:`~.sqlite.DATE` and :class:`~.sqlite.TIME`.
+:class:`_sqlite.DATETIME`, :class:`_sqlite.DATE` and :class:`_sqlite.TIME`.
These types represent dates and times as ISO formatted strings, which also
nicely support ordering. There's no reliance on typical "libc" internals for
these functions so historical dates are fully supported.
@@ -216,7 +216,7 @@ SAVEPOINT Support
SQLite supports SAVEPOINTs, which only function once a transaction is
begun. SQLAlchemy's SAVEPOINT support is available using the
-:meth:`.Connection.begin_nested` method at the Core level, and
+:meth:`_engine.Connection.begin_nested` method at the Core level, and
:meth:`.Session.begin_nested` at the ORM level. However, SAVEPOINTs
won't work at all with pysqlite unless workarounds are taken.
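The underlying SQLite behavior, where SAVEPOINT only functions inside an explicit transaction, can be seen with the standard-library ``sqlite3`` driver. This is a minimal sketch independent of SQLAlchemy, using the same manual transaction control that the pysqlite workarounds rely on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit mode; we issue BEGIN/COMMIT ourselves

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")              # only valid inside a transaction
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO SAVEPOINT sp1")  # undoes only the second insert
conn.execute("COMMIT")

rows = conn.execute("SELECT x FROM t").fetchall()
# only the first insert survives the partial rollback
```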
@@ -303,11 +303,12 @@ itself depending on the location of the target constraint. To render this
clause within DDL, the extension parameter ``sqlite_on_conflict`` can be
specified with a string conflict resolution algorithm within the
:class:`.PrimaryKeyConstraint`, :class:`.UniqueConstraint`,
-:class:`.CheckConstraint` objects. Within the :class:`.Column` object, there
+:class:`.CheckConstraint` objects. Within the :class:`_schema.Column`
+object, there
are individual parameters ``sqlite_on_conflict_not_null``,
``sqlite_on_conflict_primary_key``, ``sqlite_on_conflict_unique`` which each
correspond to the three types of relevant constraint types that can be
-indicated from a :class:`.Column` object.
+indicated from a :class:`_schema.Column` object.
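The effect of such an ON CONFLICT clause can be observed at the SQLite level with the standard-library ``sqlite3`` driver. The DDL below is a hand-written sketch of the kind of clause ``sqlite_on_conflict_unique="IGNORE"`` would render, shown here independently of SQLAlchemy:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# hand-written DDL sketch: UNIQUE constraint with inline conflict resolution
conn.execute(
    "CREATE TABLE t (id INTEGER PRIMARY KEY, x TEXT UNIQUE ON CONFLICT IGNORE)"
)
conn.execute("INSERT INTO t (x) VALUES ('a')")
conn.execute("INSERT INTO t (x) VALUES ('a')")  # conflicting row silently skipped

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# the duplicate insert is ignored rather than raising an integrity error
```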
.. seealso::
@@ -339,9 +340,10 @@ The above renders CREATE TABLE DDL as::
)
-When using the :paramref:`.Column.unique` flag to add a UNIQUE constraint
+When using the :paramref:`_schema.Column.unique`
+flag to add a UNIQUE constraint
to a single column, the ``sqlite_on_conflict_unique`` parameter can
-be added to the :class:`.Column` as well, which will be added to the
+be added to the :class:`_schema.Column` as well, which will be added to the
UNIQUE constraint in the DDL::
some_table = Table(
@@ -417,30 +419,30 @@ http://www.sqlite.org/datatype3.html section 2.1.
The provided typemap will make direct associations from an exact string
name match for the following types:
-:class:`~.types.BIGINT`, :class:`~.types.BLOB`,
-:class:`~.types.BOOLEAN`, :class:`~.types.BOOLEAN`,
-:class:`~.types.CHAR`, :class:`~.types.DATE`,
-:class:`~.types.DATETIME`, :class:`~.types.FLOAT`,
-:class:`~.types.DECIMAL`, :class:`~.types.FLOAT`,
-:class:`~.types.INTEGER`, :class:`~.types.INTEGER`,
-:class:`~.types.NUMERIC`, :class:`~.types.REAL`,
-:class:`~.types.SMALLINT`, :class:`~.types.TEXT`,
-:class:`~.types.TIME`, :class:`~.types.TIMESTAMP`,
-:class:`~.types.VARCHAR`, :class:`~.types.NVARCHAR`,
-:class:`~.types.NCHAR`
+:class:`_types.BIGINT`, :class:`_types.BLOB`,
+:class:`_types.BOOLEAN`, :class:`_types.BOOLEAN`,
+:class:`_types.CHAR`, :class:`_types.DATE`,
+:class:`_types.DATETIME`, :class:`_types.FLOAT`,
+:class:`_types.DECIMAL`, :class:`_types.FLOAT`,
+:class:`_types.INTEGER`, :class:`_types.INTEGER`,
+:class:`_types.NUMERIC`, :class:`_types.REAL`,
+:class:`_types.SMALLINT`, :class:`_types.TEXT`,
+:class:`_types.TIME`, :class:`_types.TIMESTAMP`,
+:class:`_types.VARCHAR`, :class:`_types.NVARCHAR`,
+:class:`_types.NCHAR`
When a type name does not match one of the above types, the "type affinity"
lookup is used instead:
-* :class:`~.types.INTEGER` is returned if the type name includes the
+* :class:`_types.INTEGER` is returned if the type name includes the
string ``INT``
-* :class:`~.types.TEXT` is returned if the type name includes the
+* :class:`_types.TEXT` is returned if the type name includes the
string ``CHAR``, ``CLOB`` or ``TEXT``
-* :class:`~.types.NullType` is returned if the type name includes the
+* :class:`_types.NullType` is returned if the type name includes the
string ``BLOB``
-* :class:`~.types.REAL` is returned if the type name includes the string
+* :class:`_types.REAL` is returned if the type name includes the string
``REAL``, ``FLOA`` or ``DOUB``.
-* Otherwise, the :class:`~.types.NUMERIC` type is used.
+* Otherwise, the :class:`_types.NUMERIC` type is used.
.. versionadded:: 0.9.3 Support for SQLite type affinity rules when reflecting
columns.
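The affinity fallback above can be sketched as a plain Python function. This is a hypothetical helper for illustration only, not SQLAlchemy's actual reflection code; the return values stand in for the corresponding SQLAlchemy type classes:

```python
def type_affinity(type_name):
    # Hypothetical sketch of the affinity lookup described above:
    # substring checks applied in the documented order.
    name = type_name.upper()
    if "INT" in name:
        return "INTEGER"
    if "CHAR" in name or "CLOB" in name or "TEXT" in name:
        return "TEXT"
    if "BLOB" in name:
        return "NULLTYPE"
    if "REAL" in name or "FLOA" in name or "DOUB" in name:
        return "REAL"
    return "NUMERIC"
```

For example, a column declared as ``MEDIUMINT`` falls through the exact-match typemap but contains the substring ``INT``, so it reflects with INTEGER affinity.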
@@ -560,7 +562,7 @@ the very specific case where an application is forced to use column names that
contain dots, and the functionality of :meth:`.ResultProxy.keys` and
:meth:`.Row.keys()` is required to return these dotted names unmodified,
the ``sqlite_raw_colnames`` execution option may be provided, either on a
-per-:class:`.Connection` basis::
+per-:class:`_engine.Connection` basis::
result = conn.execution_options(sqlite_raw_colnames=True).exec_driver_sql('''
select x.a, x.b from x where a=1
@@ -569,11 +571,11 @@ per-:class:`.Connection` basis::
''')
assert result.keys() == ["x.a", "x.b"]
-or on a per-:class:`.Engine` basis::
+or on a per-:class:`_engine.Engine` basis::
engine = create_engine("sqlite://", execution_options={"sqlite_raw_colnames": True})
-When using the per-:class:`.Engine` execution option, note that
+When using the per-:class:`_engine.Engine` execution option, note that
**Core and ORM queries that use UNION may not function properly**.
""" # noqa
diff --git a/lib/sqlalchemy/dialects/sqlite/json.py b/lib/sqlalchemy/dialects/sqlite/json.py
index db185dd4d..775f557f8 100644
--- a/lib/sqlalchemy/dialects/sqlite/json.py
+++ b/lib/sqlalchemy/dialects/sqlite/json.py
@@ -9,8 +9,8 @@ class JSON(sqltypes.JSON):
`loadable extension <https://www.sqlite.org/loadext.html>`_ and as such
may not be available, or may require run-time loading.
- The :class:`.sqlite.JSON` type supports persistence of JSON values
- as well as the core index operations provided by :class:`.types.JSON`
+ The :class:`_sqlite.JSON` type supports persistence of JSON values
+ as well as the core index operations provided by :class:`_types.JSON`
datatype, by adapting the operations to render the ``JSON_EXTRACT``
function wrapped in the ``JSON_QUOTE`` function at the database level.
Extracted values are quoted in order to ensure that the results are
diff --git a/lib/sqlalchemy/dialects/sqlite/pysqlite.py b/lib/sqlalchemy/dialects/sqlite/pysqlite.py
index 72bbd0177..307114c03 100644
--- a/lib/sqlalchemy/dialects/sqlite/pysqlite.py
+++ b/lib/sqlalchemy/dialects/sqlite/pysqlite.py
@@ -326,7 +326,8 @@ ourselves. This is achieved using two event listeners::
.. warning:: When using the above recipe, it is advised to not use the
:paramref:`.Connection.execution_options.isolation_level` setting on
- :class:`.Connection` and :func:`.create_engine` with the SQLite driver,
+ :class:`_engine.Connection` and :func:`.create_engine`
+ with the SQLite driver,
as this function necessarily will also alter the ".isolation_level" setting.