Diffstat (limited to 'doc/source/historical/RepositoryFormat.trac')
-rw-r--r--  doc/source/historical/RepositoryFormat.trac  56
1 files changed, 0 insertions, 56 deletions
diff --git a/doc/source/historical/RepositoryFormat.trac b/doc/source/historical/RepositoryFormat.trac
deleted file mode 100644
index 65b215d..0000000
--- a/doc/source/historical/RepositoryFormat.trac
+++ /dev/null
@@ -1,56 +0,0 @@
-This plan has several problems and has been modified; the new plan is discussed in wiki:RepositoryFormat2
-
-----
-
-One problem with [http://www.rubyonrails.org/ Ruby on Rails'] (very good) schema migration system is the behavior of scripts that depend on outside sources, i.e. the application. If those sources change, there's no guarantee that such scripts will behave as they did before, and you'll get strange results.
-
-For example, suppose one defines a SQLAlchemy table:
-{{{
-from sqlalchemy import Table, Column, Integer, String, MetaData
-
-metadata = MetaData()
-users = Table('users', metadata,
-    Column('user_id', Integer, primary_key=True),
-    Column('user_name', String(16), nullable=False),
-    Column('password', String(20), nullable=False)
-)
-}}}
-and creates it in a change script:
-{{{
-from project import table
-
-def upgrade():
-    table.users.create()
-}}}
-
-Suppose we later add a column to this table. We write an appropriate change script:
-{{{
-from sqlalchemy import String
-from project import table
-
-def upgrade():
-    # This syntax isn't set in stone yet
-    table.users.add_column('email_address', String(60), key='email')
-}}}
-...and change our application's table definition:
-{{{
-users = Table('users', metadata,
-    Column('user_id', Integer, primary_key=True),
-    Column('user_name', String(16), nullable=False),
-    Column('password', String(20), nullable=False),
-    Column('email_address', String(60), key='email')  # new column
-)
-}}}
-
-Modifying the table definition changes how our first script behaves - it will create the table with the new column. This might work if we only apply change scripts to a few databases which are always kept up to date (or very close), but we'll run into errors eventually if our migration scripts' behavior isn't consistent.
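-
-For illustration only (this is not part of the original plan, and it uses the newer SQLAlchemy {{{CreateTable}}} construct), compiling the DDL the first script would now emit makes the inconsistency visible:
-{{{
-# Hypothetical demonstration: after the application's table definition is
-# changed, the *old* change script no longer produces the DDL it produced
-# when it was written.
-from sqlalchemy.schema import CreateTable
-
-# Now includes email_address, so on a fresh database script 1 already adds
-# the column and script 2's add_column fails with a duplicate-column error.
-print(CreateTable(users))
-}}}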
-
-----
-
-One solution is to generate .sql files from a Python change script at the time it's added to a repository. The SQL generated by the script for each database is set in stone at this point; changes to outside files won't affect it.
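-
-As a rough sketch of what "freezing" might look like (this is not the actual repository format; it assumes the current SQLAlchemy dialect/DDL API, a made-up {{{freeze_script()}}} helper, and the {{{users}}} table from the example above):
-{{{
-# Hypothetical sketch: compile a script's DDL once per target database and
-# write the result to versioned .sql files, so later changes to the
-# application can't alter what the script does.
-from sqlalchemy.schema import CreateTable
-from sqlalchemy.dialects import postgresql, sqlite
-
-def freeze_script(version, table, dialects):
-    for name, dialect in dialects.items():
-        sql = str(CreateTable(table).compile(dialect=dialect))
-        with open('%s_%s_upgrade.sql' % (version, name), 'w') as f:
-            f.write(sql)
-
-freeze_script('001', users, {'postgres': postgresql.dialect(),
-                             'sqlite': sqlite.dialect()})
-}}}
-The frozen .sql files, rather than the Python script, would then be what actually runs against each database.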
-
-This limits what change scripts are capable of - we can't write dynamic SQL; i.e., we can't do something like this:
-{{{
-for row in db.execute("select id from table1"):
-    db.execute("insert into table2 (table1_id, value) values (:id, 42)", **row)
-}}}
-But SQL is usually powerful enough that the above is rarely necessary in a migration script:
-{{{
-db.execute("insert into table2 select id,42 from table1")
-}}}
-This is a reasonable solution. The limitations aren't serious (everything possible in a traditional .sql script is still possible), and change scripts are much less prone to error.