-rw-r--r--  doc/build/content/sqlexpression.txt | 49
1 file changed, 26 insertions(+), 23 deletions(-)
diff --git a/doc/build/content/sqlexpression.txt b/doc/build/content/sqlexpression.txt
index 9b35b2f23..a0d4c3cd7 100644
--- a/doc/build/content/sqlexpression.txt
+++ b/doc/build/content/sqlexpression.txt
@@ -26,7 +26,7 @@ The `echo` flag is a shortcut to setting up SQLAlchemy logging, which is accompl
The SQL Expression Language constructs its expressions in most cases against table columns. In SQLAlchemy, a column is most often represented by an object called `Column`, and in all cases a `Column` is associated with a `Table`. A collection of `Table` objects and their associated child objects is referred to as **database metadata**. In this tutorial we will explicitly lay out several `Table` objects, but note that SA can also "import" whole sets of `Table` objects automatically from an existing database (this process is called **table reflection**).
-We define our tables all within a catalog called `MetaData`, using the `Table` construct, which resembles regular SQL CREATE TABLE statements.
+We define our tables all within a catalog called `MetaData`, using the `Table` construct, which resembles regular SQL CREATE TABLE statements. We'll make two tables, one of which represents "users" in an application, and another which represents zero or more "email addresses" for each row in the "users" table:
{python}
>>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
@@ -38,9 +38,10 @@ We define our tables all within a catalog called `MetaData`, using the `Table` c
... )
>>> addresses = Table('addresses', metadata,
- ... Column('id', Integer, primary_key=True),
- ... Column('user_id', None, ForeignKey('users.id')),
- ... Column('email_address', String(50), nullable=False))
+ ... Column('id', Integer, primary_key=True),
+ ... Column('user_id', None, ForeignKey('users.id')),
+ ... Column('email_address', String(50), nullable=False)
+ ... )
Everything about how to define `Table` objects, as well as how to create them automatically from an existing database, is described in [metadata](rel:metadata).
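As a quick, hedged sketch of that reflection process (it assumes the tables already exist in the database the engine points to, which is not yet the case at this point in the tutorial), a `Table` can be loaded from the database instead of being spelled out by hand:
    {python}
    >>> meta2 = MetaData(engine)                                # a separate MetaData, bound to our engine
    >>> reflected_users = Table('users', meta2, autoload=True)  # column definitions are read from the database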
@@ -72,7 +73,7 @@ Next, to tell the `MetaData` we'd actually like to create our selection of table
## Insert Expressions
-The first SQL expression we'll use is the `Insert` construct, which represents an INSERT statement. This is typically created relative to its target table:
+The first SQL expression we'll create is the `Insert` construct, which represents an INSERT statement. This is typically created relative to its target table:
{python}
>>> ins = users.insert()
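    >>> # To see (roughly) the SQL this construct represents, we can stringify it --
    >>> # a quick sketch; the bind parameter names are derived from the column names:
    >>> str(ins)
    'INSERT INTO users (id, name, fullname) VALUES (:id, :name, :fullname)'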
@@ -98,7 +99,7 @@ Above, while the `values` keyword limited the VALUES clause to just two columns,
## Executing Inserts {@name=executing}
-The interesting part of an `Insert` is executing it. In this tutorial, we will generally focus on the most explicit method of executing a SQL construct, and later touch upon some "shortcut" ways to do it. The `engine` object we created is a repository for database connections capable of issuing SQL to the database. To acquire one of these we use the `connect()` method:
+The interesting part of an `Insert` is executing it. In this tutorial, we will generally focus on the most explicit method of executing a SQL construct, and later touch upon some "shortcut" ways to do it. The `engine` object we created is a repository for database connections capable of issuing SQL to the database. To acquire a connection, we use the `connect()` method:
{python}
>>> conn = engine.connect()
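    >>> # A minimal sketch of execution, assuming `ins` was constructed with the
    >>> # `values` keyword described earlier so it already carries its parameters:
    >>> result = conn.execute(ins)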
@@ -127,11 +128,11 @@ What about the `result` variable we got when we called `execute()` ? As the SQL
>>> result.last_inserted_ids()
[1]
-The value of `1` was automatically generated by SQLite, but only because we did not specify the `id` column explicitly; otherwise, our explicit version would have been used. In either case, SQLAlchemy always knows how to get at a newly generated primary key value, even though the method of generating them is different across different databases; each databases' `Dialect` knows the specific steps needed to determine the correct value (or values; note that `last_inserted_ids()` returns a list so that it supports composite primary keys).
+The value of `1` was automatically generated by SQLite, but only because we did not specify the `id` column in our `Insert` statement; otherwise, our explicit value would have been used. In either case, SQLAlchemy always knows how to get at a newly generated primary key value, even though the method of generating them is different across different databases; each database's `Dialect` knows the specific steps needed to determine the correct value (or values; note that `last_inserted_ids()` returns a list so that it supports composite primary keys).
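To sketch the "explicit value" case mentioned above (this simply previews the keyword-parameter style of `execute()` used later in the tutorial):
    {python}
    >>> result = conn.execute(users.insert(), id=2, name='wendy', fullname='Wendy Williams')
    >>> result.last_inserted_ids()
    [2]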
## Executing Multiple Inserts {@name=execmany}
-Our insert example above was intentionally a little drawn out to show some various behaviors of expression language constructs. In the usual case, an `Insert` statement is usually compiled against the parameters sent to the `execute()` method on `Connection`, so that theres no need to construct the object against a specific set of parameters. Lets create a generic `Insert` statement again and use it in the "normal" way:
+Our insert example above was intentionally a little drawn out to show various behaviors of expression language constructs. In the usual case, an `Insert` statement is compiled against the parameters sent to the `execute()` method on `Connection`, so that there's no need to use the `values` keyword with `Insert`. Let's create a generic `Insert` statement again and use it in the "normal" way:
{python}
>>> ins = users.insert()
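    >>> # The same construct supports an "executemany" style as well: pass a list of
    >>> # dictionaries to execute() and each one becomes a parameter set (a sketch
    >>> # against the `addresses` table defined earlier):
    >>> result = conn.execute(addresses.insert(), [
    ...    {'user_id': 1, 'email_address': 'jack@yahoo.com'},
    ...    {'user_id': 1, 'email_address': 'jack@msn.com'},
    ... ])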
@@ -219,7 +220,7 @@ If we'd like to more carefully control the columns which are placed in the COLUM
SELECT users.name, users.fullname
FROM users
[]
- >>> result.fetchall() #doctest: +NORMALIZE_WHITESPACE
+ {stop}>>> result.fetchall() #doctest: +NORMALIZE_WHITESPACE
[(u'jack', u'Jack Jones'), (u'wendy', u'Wendy Williams')]
Let's observe something interesting about the FROM clause. Whereas the generated statement contains two distinct sections, a "SELECT columns" part and a "FROM table" part, our `select()` construct only has a list containing columns. How does this work? Let's try putting *two* tables into our `select()` statement:
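A hedged sketch of where that leads (assuming `select` has been imported from `sqlalchemy` as in the full tutorial); both tables simply land in the FROM clause:
    {python}
    >>> print select([users, addresses])
    SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
    FROM users, addresses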
@@ -276,13 +277,13 @@ Since we've stumbled upon SQLAlchemy's operator paradigm, let's go through some
>>> print users.c.id==addresses.c.user_id
users.id = addresses.user_id
-If we put some kind of literal value in there (a literal meaning, not a SQLAlchemy clause object), we get a bind parameter:
+If we use a literal value (a literal meaning, not a SQLAlchemy clause object), we get a bind parameter:
{python}
>>> print users.c.id==7
users.id = :users_id
-The `7` literal is embedded in there; we can use the same trick we did with the `Insert` object to see it:
+The `7` literal is embedded in the resulting `ClauseElement`; we can use the same trick we did with the `Insert` object to see it:
{python}
>>> (users.c.id==7).compile().params
@@ -297,9 +298,10 @@ Most Python operators, as it turns out, produce a SQL expression here, like equa
>>> # None converts to IS NULL
>>> print users.c.name == None
users.name IS NULL
-
- >>> print users.c.name > 'fred'
- users.name > :users_name
+
+ >>> # reverse works too
+ >>> print 'fred' > users.c.name
+ :users_name > users.name
If we add two integer columns together, we get an addition expression:
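A quick sketch of that expression (column-to-column, so no bind parameter is generated):
    {python}
    >>> print users.c.id + addresses.c.id
    users.id + addresses.id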
@@ -402,14 +404,14 @@ To gain a "hybrid" approach, any of SA's SQL constructs can have text freely int
... ),
... from_obj=['users', 'addresses']
... )
- >>> print conn.execute(s, x='%@aol.com', y='%@msn.com').fetchall() #doctest: +NORMALIZE_WHITESPACE
+ {sql}>>> print conn.execute(s, x='%@aol.com', y='%@msn.com').fetchall() #doctest: +NORMALIZE_WHITESPACE
SELECT users.fullname || ', ' || addresses.email_address AS title
FROM users, addresses
WHERE users.id = addresses.user_id AND users.name BETWEEN 'm' AND 'z' AND (addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
['%@aol.com', '%@msn.com']
{stop}[(u'Wendy Williams, wendy@aol.com',)]
-Going from constructed SQL to text, we lose some capabilities. We lose the capability for SQLAlchemy to compile our expression to a specific target database; above, our expression won't work with MySQL since it has no `||` construct. It also becomes more tedious for SQLAlchemy to be made aware of the datatypes in use; for example, if our bind parameters required UTF-8 encoding before going in, or conversion from a Python `datetime` into a string (as is required with SQLite), we would have to add extra information to our `text()` construct. Similarly issues arise (and are worked around through explicit means as well) on the result set side, where SQLAlchemy also performs type-specific data conversion in some cases. Finally, what we really lose from our statement is the ability to manipulate it, transform it, and analyze it. These features are critical when using the ORM, which makes heavy usage of relational transformations. To show off what we mean, we'll first introduce the ALIAS construct and the JOIN construct, just so we have some juicier bits to play with.
+Going from constructed SQL to text, we lose some capabilities. We lose the capability for SQLAlchemy to compile our expression to a specific target database; above, our expression won't work with MySQL since it has no `||` construct. It also becomes more tedious for SQLAlchemy to be made aware of the datatypes in use; for example, if our bind parameters required UTF-8 encoding before going in, or conversion from a Python `datetime` into a string (as is required with SQLite), we would have to add extra information to our `text()` construct. Similar issues arise on the result set side, where SQLAlchemy also performs type-specific data conversion in some cases; still more information can be added to `text()` to work around this. But what we really lose from our statement is the ability to manipulate it, transform it, and analyze it. These features are critical when using the ORM, which makes heavy usage of relational transformations. To show off what we mean, we'll first introduce the ALIAS construct and the JOIN construct, just so we have some juicier bits to play with.
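For reference, the simplest all-text form looks like the sketch below, using the `text()` construct with a named bind parameter and none of the extra type information just discussed:
    {python}
    >>> from sqlalchemy.sql import text
    >>> s = text("SELECT users.fullname FROM users WHERE users.name = :name")
    >>> conn.execute(s, name='jack').fetchall()
    [(u'Jack Jones',)]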
## Using Aliases {@name=aliases}
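As a brief sketch before the fuller examples (the alias name `a1` is chosen here purely for illustration; `alias()` can also generate an anonymous name):
    {python}
    >>> a1 = addresses.alias('a1')
    >>> print select([users], users.c.id==a1.c.user_id)
    SELECT users.id, users.name, users.fullname
    FROM users, addresses AS a1
    WHERE users.id = a1.user_id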
@@ -544,7 +546,7 @@ We also come across that they'd like only users who have an address at MSN. A q
... and_(addresses.c.user_id==users.c.id, addresses.c.email_address.like('%@msn.com'))
... ).correlate(users))
-And finally, the application also wants to see the listing of email addresses at once; so to save queries, we outerjoin the `addresses` table (using an outer join so that users with no addresses come back as well). But also, since the `users` and `addresses` table both have a column named `id`, let's isolate their names from each other in the COLUMNS clause by using labels:
+And finally, the application also wants to see the listing of email addresses at once; so to save queries, we outerjoin the `addresses` table (using an outer join so that users with no addresses come back as well; since we're building this query programmatically, we might not have kept track that we used an EXISTS clause against the `addresses` table too...). Additionally, since the `users` and `addresses` tables both have a column named `id`, let's isolate their names from each other in the COLUMNS clause by using labels:
{python}
>>> query = query.column(addresses).select_from(users.outerjoin(addresses)).apply_labels()
@@ -552,29 +554,30 @@ And finally, the application also wants to see the listing of email addresses at
Let's bake for .0001 seconds and see what rises:
{python}
- {sql}>>> conn.execute(query).fetchall()
+ {opensql}>>> conn.execute(query).fetchall()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses.id AS addresses_id, addresses.user_id AS addresses_user_id, addresses.email_address AS addresses_email_address
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
WHERE users.name = ? AND (EXISTS (SELECT addresses.id
FROM addresses
WHERE addresses.user_id = users.id AND addresses.email_address LIKE ?)) ORDER BY users.fullname DESC
['jack', '%@msn.com']
- [(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com'), (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')]
+ {stop}[(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com'), (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')]
-So we started small, added one little thing at a time, and at the end we have a huge statement..which actually works. Now let's do one more thing; the searching function wants to add another `address` criterion on, however it doesn't want to construct an alias of the `addresses` table; suppose many parts of the application are written to deal specifically with the `addresses` table, and to change all those functions to support receiving an arbitrary alias of the address would be cumbersome. We can actually *convert* the `addresses` table within the *existing* statement to be an alias of itself, using `replace_selectable()`:
+So we started small, added one little thing at a time, and at the end we have a huge statement...which actually works. Now let's do one more thing; the searching function wants to add another `email_address` criterion, but it doesn't want to construct an alias of the `addresses` table; suppose many parts of the application are written to deal specifically with the `addresses` table, and changing all those functions to support receiving an arbitrary alias of `addresses` would be cumbersome. We can actually *convert* the `addresses` table within the *existing* statement to be an alias of itself, using `replace_selectable()`:
{python}
>>> a1 = addresses.alias()
>>> query = query.replace_selectable(addresses, a1)
>>> print query
- SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, addresses_1.email_address AS addresses_1_email_address
+ {opensql}SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, addresses_1.email_address AS addresses_1_email_address
FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
WHERE users.name = :users_name AND (EXISTS (SELECT addresses_1.id
FROM addresses AS addresses_1
WHERE addresses_1.user_id = users.id AND addresses_1.email_address LIKE :addresses_email_address)) ORDER BY users.fullname DESC
One more thing though, with automatic labeling applied as well as anonymous aliasing, how do we retrieve the columns from the rows for this thing? The label for the `email_address` column is now the generated name `addresses_1_email_address`, and in another statement it might be something different! This is where accessing result columns by `Column` object becomes very useful:
-
+
+ {python}
{sql}>>> for row in conn.execute(query):
... print "Name:", row[users.c.name], "Email Address", row[a1.c.email_address]
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, addresses_1.email_address AS addresses_1_email_address
@@ -586,4 +589,4 @@ One more thing though, with automatic labeling applied as well as anonymous alia
{stop}Name: jack Email Address jack@yahoo.com
Name: jack Email Address jack@msn.com
-The above example, by it's end, got significantly more intense than the typical end-user constructed SQL will usually be. However when writing higher-level tools such as ORMs, they become more significant. SQLAlchemy's ORM performs transformations like the above in spades. \ No newline at end of file
+The above example, by its end, got significantly more intense than the typical end-user constructed SQL will usually be. However, when writing higher-level tools such as ORMs, these kinds of transformations become much more significant. SQLAlchemy's ORM performs transformations like the above in spades.