Merge lp:~jaypipes/glance/bug730213 into lp:~glance-coresec/glance/cactus-trunk
Status: Merged
Approved by: Jay Pipes
Approved revision: 97
Merged at revision: 96
Proposed branch: lp:~jaypipes/glance/bug730213
Merge into: lp:~glance-coresec/glance/cactus-trunk
Diff against target: 631 lines (+511/-32), 7 files modified
- glance/registry/db/migrate_repo/versions/003_add_disk_format.py (+147/-0)
- glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql (+58/-0)
- glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql (+64/-0)
- glance/registry/db/migration.py (+5/-5)
- glance/server.py (+2/-2)
- tests/unit/test_migrations.conf (+5/-0)
- tests/unit/test_migrations.py (+230/-25)
To merge this branch: bzr merge lp:~jaypipes/glance/bug730213

Reviews:
Jay Pipes (community): Approve
Rick Harris (community): Approve
Ed Leafe (community): Approve
Sandy Walsh (community): Approve
Cory Wright: Pending
Review via email: mp+54528@code.launchpad.net
Commit message
Add migration testing and migration for disk_format/container_format
Description of the change
OK, migrations are finally under control and properly tested.
Don't think I need to stress how painful this was. I reworked
the TestMigrations test case to properly test walking the
upgrade and downgrade paths for multiple databases. The databases
used in test_migrations are now configurable by placing SQL connection
strings in the tests/unit/test_migrations.conf file. The default config
is limited to SQLite-only but has an example of MySQL as well. I'll work
with Monty to set up MySQL and PostgreSQL on the Jenkins box so we can
verify migrate scripts on all major target DB platforms.
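The upgrade/downgrade walk described above can be sketched as follows. This is a minimal, hypothetical stand-in for the real test: `FakeMigrationApi` is invented for illustration, whereas the actual test drives sqlalchemy-migrate against each configured database.

```python
# Hypothetical in-memory stand-in for glance.registry.db.migration;
# the real TestMigrations calls sqlalchemy-migrate instead.
class FakeMigrationApi:
    def __init__(self):
        self._version = 0  # freshly version-controlled schema

    def upgrade(self, version):
        self._version = version

    def downgrade(self, version):
        self._version = version

    def version(self):
        return self._version


def walk_versions(api, latest):
    # Walk up from version 1 to the latest script, checking the
    # reported version after each step...
    for version in range(1, latest + 1):
        api.upgrade(version)
        assert api.version() == version
    # ...then walk back down to 0, exercising every downgrade path.
    for version in reversed(range(0, latest)):
        api.downgrade(version)
        assert api.version() == version
```

The real test repeats this walk once per connection string found in the config file, so the same scripts get exercised against every configured engine.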
There are a number of bugs in SA-Migrate that have workarounds in
this patch. Issue 99 means that MySQL change scripts cannot contain
more than one SQL statement, which essentially means that for MySQL you
*have* to use a Python change script. However, Issue 117 points out that
you *cannot* use a Python change script when your database is SQLite and
you need to ALTER a table by adding or dropping a column and the table
has secondary indexes on it (which, by chance, the Glance registry's
images table did have, on the is_public and deleted columns).
So, for SQLite in these situations, you *must* use a SQL change script,
which is why you see both a 003_add_disk_format.py script (used for
MySQL/PostgreSQL) and the 003_sqlite_upgrade.sql and
003_sqlite_downgrade.sql scripts in the migrate repo.
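Those SQLite SQL scripts rely on the classic table-rebuild pattern: copy the rows into a temporary table, drop and recreate images with the new column set, then copy the rows back and recreate the indexes. A toy sqlite3 sketch of that pattern (hypothetical schema, not the actual Glance one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE images (
    id INTEGER PRIMARY KEY,
    name VARCHAR(255),
    type VARCHAR(30)            -- column we want to drop
);
CREATE INDEX ix_images_name ON images (name);
INSERT INTO images (id, name, type) VALUES (1, 'cirros', 'kernel');

/* The rebuild pattern: back up the surviving columns, drop the
 * table (its indexes go with it), recreate it with the new
 * column set, then restore the rows and the indexes. */
CREATE TEMPORARY TABLE images_backup (id INTEGER PRIMARY KEY, name VARCHAR(255));
INSERT INTO images_backup SELECT id, name FROM images;
DROP TABLE images;
CREATE TABLE images (
    id INTEGER PRIMARY KEY,
    name VARCHAR(255),
    disk_format VARCHAR(20)     -- new column
);
CREATE INDEX ix_images_name ON images (name);
INSERT INTO images (id, name) SELECT id, name FROM images_backup;
DROP TABLE images_backup;
""")
row = cur.execute("SELECT id, name, disk_format FROM images").fetchone()
```

After the rebuild, the surviving row comes back with the new column defaulted to NULL, which is exactly what the 003 SQLite scripts do for the real images table.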
There is a new test case that verifies that data is not lost when moving
from schema v2 to v3, where the type column is removed from the images
table. I place the values that were in the type column into the image_properties
table as records with a key column value of 'type'. The test case verifies
that on upgrade from v2 to v3, the type column values are preserved as
image properties, and on downgrade from v3 to v2, the type properties
are re-inserted into the images.type column.
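The round trip that test exercises can be sketched with plain sqlite3. These are hypothetical toy tables standing in for the real migrate scripts, but the data movement is the same shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE images (id INTEGER PRIMARY KEY, type VARCHAR(30));
CREATE TABLE image_properties (image_id INTEGER, key VARCHAR(255), value TEXT);
INSERT INTO images VALUES (1, 'kernel'), (2, 'ramdisk');
""")

# "Upgrade": move type values into image_properties as key='type'
# records, then clear (in the real migration, drop) images.type.
cur.execute("INSERT INTO image_properties (image_id, key, value) "
            "SELECT id, 'type', type FROM images WHERE type IS NOT NULL")
cur.execute("UPDATE images SET type = NULL")

# "Downgrade": re-insert the 'type' properties into images.type,
# then remove them from image_properties.
cur.execute("UPDATE images SET type = (SELECT value FROM image_properties "
            "WHERE image_id = images.id AND key = 'type')")
cur.execute("DELETE FROM image_properties WHERE key = 'type'")

restored = dict(cur.execute("SELECT id, type FROM images").fetchall())
```

The no-data-loss assertion is simply that `restored` matches the original type values and that no stray 'type' properties are left behind.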
Ed Leafe (ed-leafe) wrote:
lgtm, but then again, I'm no migration expert. Question: are tests like 'test_no_data_loss_2_to_3_to_2' going to be needed for every schema change?
Jay Pipes (jaypipes) wrote:
> lgtm, but then again, I'm no migration expert. Question: are tests like
> 'test_no_data_loss_2_to_3_to_2' going to be needed for every schema
> change?
Any schema change where data loss occurs (which, to be fair, isn't all that often). Examples of this are:
* Schema change that *moves* a column from one table to another
* Schema change that drops one column but moves the data from that column to be records in a different table (the type -> image_properties migration of this particular merge is one such example of this)
Data loss bugs are critical. Any schema change that introduces the possibility of data loss MUST have a test case for it, IMHO.
-jay
Ed Leafe (ed-leafe) wrote:
Jay - thanks for the explanation. Yes, I think we can agree that data loss is a very bad thing. :)
Jay Pipes (jaypipes) wrote:
Just to respond to Sandy and Ed's comments about this patch increasing the effort to change the schema...
*Most* migrate scripts will run perfectly fine and won't require any new tests at all (the test_walk_versions test case automatically exercises every migrate script in the repository, on both the upgrade and downgrade paths). New tests are only needed when:
* The migrate script is complex or induces SA-Migrate bugs that require workarounds like the sqlite SQL scripts in this patch.
* The migrate script *moves* a bunch of data around. This really doesn't happen all that often...
-jay
Rick Harris (rconradharris) wrote:
Awesome work here Jay! Thanks for sticking with it and finding a workable solution, owe you a beer.
OpenStack Infra (hudson-openstack) wrote:
There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.
Jay Pipes (jaypipes) wrote:
thx rick :) there was an extra revision I added to give Monty some flexibility in hudson... approving that.
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~jaypipes/glance/bug730213 into lp:glance failed. Below is the output from the failed tests.
running test
running egg_info
creating glance.egg-info
writing glance.egg-info/PKG-INFO
writing top-level names to glance.egg-info/top_level.txt
writing dependency_links to glance.egg-info/dependency_links.txt
writing manifest file 'glance.egg-info/SOURCES.txt'
reading manifest file 'glance.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'glance.egg-info/SOURCES.txt'
running build_ext
We test the following: ... ok
Test for LP Bug #736295 ... ok
We test the following sequential series of actions: ... ok
A test that logging can be configured properly from the ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
test_bad_
test_bad_
test_delete_image (tests.
test_delete_
Test for HEAD /images/<ID> ... ok
test_show_
test_show_
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and container ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised if attempt to delete non-existing ... ok
Tests that the /images/detail registry API returns ... ok
Tests that the /images registry API returns list of ... ok
Tests that the root registry API returns "index", ... ok
Tests that the /images PUT registry API updates the image ... ok
Tests proper exception is raised if attempt to update non-existing ... ok
Tests that exception raised trying to set a bad container_format ... ok
Tests that exception raised trying to set a bad disk_format ... ok
Tests that exception raised trying to set a bad status ... ok
Tests that exception raised for bad matching disk and container ... ok
Test ClientConnectio
Tests proper exception is raised if image with ID already exists ... ok
Tests that we can add image metadata and returns the new id ... ok
Tests a bad status is set to a proper one by server ... ok
Tests BadRequest raised when supplying bad store name in meta ... ok
Tests can add image by passing image data as file ... ok
Tests can add image by passing image data as string ... ok
Tests add image by passing image data as string w/ no size attr ... ok
Tests that we can add image metadata with properties ... ok
Tests client returns image as queued ... ok
Tests that image metadata is deleted properly ... ok
Tests cannot delete non-existing image ... ok
Test a simple file backend retrieval wo...
Jay Pipes (jaypipes) wrote:
oops, need to merge trunk and resolve something....
Jay Pipes (jaypipes) wrote:
fixed the problem...
Preview Diff
1 | === added file 'glance/registry/db/migrate_repo/versions/003_add_disk_format.py' |
2 | --- glance/registry/db/migrate_repo/versions/003_add_disk_format.py 1970-01-01 00:00:00 +0000 |
3 | +++ glance/registry/db/migrate_repo/versions/003_add_disk_format.py 2011-03-24 23:05:57 +0000 |
4 | @@ -0,0 +1,147 @@ |
5 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
6 | + |
7 | +# Copyright 2011 OpenStack LLC. |
8 | +# All Rights Reserved. |
9 | +# |
10 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
11 | +# not use this file except in compliance with the License. You may obtain |
12 | +# a copy of the License at |
13 | +# |
14 | +# http://www.apache.org/licenses/LICENSE-2.0 |
15 | +# |
16 | +# Unless required by applicable law or agreed to in writing, software |
17 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
18 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
19 | +# License for the specific language governing permissions and limitations |
20 | +# under the License. |
21 | + |
22 | +from migrate.changeset import * |
23 | +from sqlalchemy import * |
24 | +from sqlalchemy.sql import and_, not_ |
25 | + |
26 | +from glance.registry.db.migrate_repo.schema import ( |
27 | + Boolean, DateTime, Integer, String, Text, from_migration_import) |
28 | + |
29 | + |
30 | +def get_images_table(meta): |
31 | + """ |
32 | + Returns the Table object for the images table that |
33 | + corresponds to the images table definition of this version. |
34 | + """ |
35 | + images = Table('images', meta, |
36 | + Column('id', Integer(), primary_key=True, nullable=False), |
37 | + Column('name', String(255)), |
38 | + Column('disk_format', String(20)), |
39 | + Column('container_format', String(20)), |
40 | + Column('size', Integer()), |
41 | + Column('status', String(30), nullable=False), |
42 | + Column('is_public', Boolean(), nullable=False, default=False, |
43 | + index=True), |
44 | + Column('location', Text()), |
45 | + Column('created_at', DateTime(), nullable=False), |
46 | + Column('updated_at', DateTime()), |
47 | + Column('deleted_at', DateTime()), |
48 | + Column('deleted', Boolean(), nullable=False, default=False, |
49 | + index=True), |
50 | + mysql_engine='InnoDB', |
51 | + useexisting=True) |
52 | + |
53 | + return images |
54 | + |
55 | + |
56 | +def get_image_properties_table(meta): |
57 | + """ |
58 | + No changes to the image properties table from 002... |
59 | + """ |
60 | + (define_image_properties_table,) = from_migration_import( |
61 | + '002_add_image_properties_table', ['define_image_properties_table']) |
62 | + |
63 | + image_properties = define_image_properties_table(meta) |
64 | + return image_properties |
65 | + |
66 | + |
67 | +def upgrade(migrate_engine): |
68 | + meta = MetaData() |
69 | + meta.bind = migrate_engine |
70 | + (define_images_table,) = from_migration_import( |
71 | + '001_add_images_table', ['define_images_table']) |
72 | + (define_image_properties_table,) = from_migration_import( |
73 | + '002_add_image_properties_table', ['define_image_properties_table']) |
74 | + |
75 | + conn = migrate_engine.connect() |
76 | + images = define_images_table(meta) |
77 | + image_properties = define_image_properties_table(meta) |
78 | + |
79 | + # Steps to take, in this order: |
80 | + # 1) Move the existing type column from Image into |
81 | + # ImageProperty for all image records that have a non-NULL |
82 | + # type column |
83 | + # 2) Drop the type column in images |
84 | + # 3) Add the new columns to images |
85 | + |
86 | + # The below wackiness correlates to the following ANSI SQL: |
87 | + # SELECT images.* FROM images |
88 | + # LEFT JOIN image_properties |
89 | + # ON images.id = image_properties.image_id |
90 | + # AND image_properties.key = 'type' |
91 | + # WHERE image_properties.image_id IS NULL |
92 | + # AND images.type IS NOT NULL |
93 | + # |
94 | + # which returns all the images that have a type set |
95 | + # but that DO NOT yet have an image_property record |
96 | + # with key of type. |
97 | + sel = select([images], from_obj=[ |
98 | + images.outerjoin(image_properties, |
99 | + and_(images.c.id == image_properties.c.image_id, |
100 | + image_properties.c.key == 'type'))]).where( |
101 | + and_(image_properties.c.image_id == None, |
102 | + images.c.type != None)) |
103 | + image_records = conn.execute(sel).fetchall() |
104 | + property_insert = image_properties.insert() |
105 | + for record in image_records: |
106 | + conn.execute(property_insert, |
107 | + image_id=record.id, |
108 | + key='type', |
109 | + created_at=record.created_at, |
110 | + deleted=False, |
111 | + value=record.type) |
112 | + |
113 | + disk_format = Column('disk_format', String(20)) |
114 | + disk_format.create(images) |
115 | + container_format = Column('container_format', String(20)) |
116 | + container_format.create(images) |
117 | + |
118 | + images.columns['type'].drop() |
119 | + conn.close() |
120 | + |
121 | + |
122 | +def downgrade(migrate_engine): |
123 | + meta = MetaData() |
124 | + meta.bind = migrate_engine |
125 | + |
126 | + # Steps to take, in this order: |
127 | + # 1) Add type column back to Image |
128 | + # 2) Move the existing type properties from ImageProperty into |
129 | + # Image.type |
130 | + # 3) Drop the disk_format and container_format columns in Image |
131 | + |
132 | + conn = migrate_engine.connect() |
133 | + images = get_images_table(meta) |
134 | + image_properties = get_image_properties_table(meta) |
135 | + |
136 | + type_col = Column('type', String(30)) |
137 | + type_col.create(images) |
138 | + |
139 | + sel = select([image_properties]).where(image_properties.c.key == 'type') |
140 | + type_property_records = conn.execute(sel).fetchall() |
141 | + for record in type_property_records: |
142 | + upd = images.update().where( |
143 | + images.c.id == record.image_id).values(type=record.value) |
144 | + conn.execute(upd) |
145 | + dlt = image_properties.delete().where( |
146 | + image_properties.c.image_id == record.image_id) |
147 | + conn.execute(dlt) |
148 | + conn.close() |
149 | + |
150 | + images.columns['disk_format'].drop() |
151 | + images.columns['container_format'].drop() |
152 | |
153 | === added file 'glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql' |
154 | --- glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql 1970-01-01 00:00:00 +0000 |
155 | +++ glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql 2011-03-24 23:05:57 +0000 |
156 | @@ -0,0 +1,58 @@ |
157 | +BEGIN; |
158 | + |
159 | +/* Make changes to the base images table */ |
160 | +CREATE TEMPORARY TABLE images_backup ( |
161 | + id INTEGER NOT NULL, |
162 | + name VARCHAR(255), |
163 | + size INTEGER, |
164 | + status VARCHAR(30) NOT NULL, |
165 | + is_public BOOLEAN NOT NULL, |
166 | + location TEXT, |
167 | + created_at DATETIME NOT NULL, |
168 | + updated_at DATETIME, |
169 | + deleted_at DATETIME, |
170 | + deleted BOOLEAN NOT NULL, |
171 | + PRIMARY KEY (id) |
172 | +); |
173 | + |
174 | +INSERT INTO images_backup |
175 | +SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted |
176 | +FROM images; |
177 | + |
178 | +DROP TABLE images; |
179 | + |
180 | +CREATE TABLE images ( |
181 | + id INTEGER NOT NULL, |
182 | + name VARCHAR(255), |
183 | + size INTEGER, |
184 | + type VARCHAR(30), |
185 | + status VARCHAR(30) NOT NULL, |
186 | + is_public BOOLEAN NOT NULL, |
187 | + location TEXT, |
188 | + created_at DATETIME NOT NULL, |
189 | + updated_at DATETIME, |
190 | + deleted_at DATETIME, |
191 | + deleted BOOLEAN NOT NULL, |
192 | + PRIMARY KEY (id), |
193 | + CHECK (is_public IN (0, 1)), |
194 | + CHECK (deleted IN (0, 1)) |
195 | +); |
196 | +CREATE INDEX ix_images_deleted ON images (deleted); |
197 | +CREATE INDEX ix_images_is_public ON images (is_public); |
198 | + |
199 | +INSERT INTO images (id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted) |
200 | +SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted |
201 | +FROM images_backup; |
202 | + |
203 | +DROP TABLE images_backup; |
204 | + |
205 | +/* Re-insert the type values from the temp table */ |
206 | +UPDATE images |
207 | +SET type = (SELECT value FROM image_properties WHERE image_id = images.id AND key = 'type') |
208 | +WHERE EXISTS (SELECT * FROM image_properties WHERE image_id = images.id AND key = 'type'); |
209 | + |
210 | +/* Remove the type properties from the image_properties table */ |
211 | +DELETE FROM image_properties |
212 | +WHERE key = 'type'; |
213 | + |
214 | +COMMIT; |
215 | |
216 | === added file 'glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql' |
217 | --- glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql 1970-01-01 00:00:00 +0000 |
218 | +++ glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql 2011-03-24 23:05:57 +0000 |
219 | @@ -0,0 +1,64 @@ |
220 | +BEGIN TRANSACTION; |
221 | + |
222 | +/* Move type column from base images table |
223 | + * to be records in image_properties table */ |
224 | +CREATE TEMPORARY TABLE tmp_type_records (id INTEGER NOT NULL, type VARCHAR(30) NOT NULL); |
225 | +INSERT INTO tmp_type_records |
226 | +SELECT id, type |
227 | +FROM images |
228 | +WHERE type IS NOT NULL; |
229 | + |
230 | +REPLACE INTO image_properties |
231 | +(image_id, key, value, created_at, deleted) |
232 | +SELECT id, 'type', type, date('now'), 0 |
233 | +FROM tmp_type_records; |
234 | + |
235 | +DROP TABLE tmp_type_records; |
236 | + |
237 | +/* Make changes to the base images table */ |
238 | +CREATE TEMPORARY TABLE images_backup ( |
239 | + id INTEGER NOT NULL, |
240 | + name VARCHAR(255), |
241 | + size INTEGER, |
242 | + status VARCHAR(30) NOT NULL, |
243 | + is_public BOOLEAN NOT NULL, |
244 | + location TEXT, |
245 | + created_at DATETIME NOT NULL, |
246 | + updated_at DATETIME, |
247 | + deleted_at DATETIME, |
248 | + deleted BOOLEAN NOT NULL, |
249 | + PRIMARY KEY (id) |
250 | +); |
251 | + |
252 | +INSERT INTO images_backup |
253 | +SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted |
254 | +FROM images; |
255 | + |
256 | +DROP TABLE images; |
257 | + |
258 | +CREATE TABLE images ( |
259 | + id INTEGER NOT NULL, |
260 | + name VARCHAR(255), |
261 | + size INTEGER, |
262 | + status VARCHAR(30) NOT NULL, |
263 | + is_public BOOLEAN NOT NULL, |
264 | + location TEXT, |
265 | + created_at DATETIME NOT NULL, |
266 | + updated_at DATETIME, |
267 | + deleted_at DATETIME, |
268 | + deleted BOOLEAN NOT NULL, |
269 | + disk_format VARCHAR(20), |
270 | + container_format VARCHAR(20), |
271 | + PRIMARY KEY (id), |
272 | + CHECK (is_public IN (0, 1)), |
273 | + CHECK (deleted IN (0, 1)) |
274 | +); |
275 | +CREATE INDEX ix_images_deleted ON images (deleted); |
276 | +CREATE INDEX ix_images_is_public ON images (is_public); |
277 | + |
278 | +INSERT INTO images (id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted) |
279 | +SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted |
280 | +FROM images_backup; |
281 | + |
282 | +DROP TABLE images_backup; |
283 | +COMMIT; |
284 | |
285 | === modified file 'glance/registry/db/migration.py' |
286 | --- glance/registry/db/migration.py 2011-03-08 19:51:25 +0000 |
287 | +++ glance/registry/db/migration.py 2011-03-24 23:05:57 +0000 |
288 | @@ -38,7 +38,7 @@ |
289 | :param options: options dict |
290 | :retval version number |
291 | """ |
292 | - repo_path = _find_migrate_repo() |
293 | + repo_path = get_migrate_repo_path() |
294 | sql_connection = options['sql_connection'] |
295 | try: |
296 | return versioning_api.db_version(sql_connection, repo_path) |
297 | @@ -56,7 +56,7 @@ |
298 | :retval version number |
299 | """ |
300 | db_version(options) # Ensure db is under migration control |
301 | - repo_path = _find_migrate_repo() |
302 | + repo_path = get_migrate_repo_path() |
303 | sql_connection = options['sql_connection'] |
304 | version_str = version or 'latest' |
305 | logger.info("Upgrading %(sql_connection)s to version %(version_str)s" % |
306 | @@ -72,7 +72,7 @@ |
307 | :retval version number |
308 | """ |
309 | db_version(options) # Ensure db is under migration control |
310 | - repo_path = _find_migrate_repo() |
311 | + repo_path = get_migrate_repo_path() |
312 | sql_connection = options['sql_connection'] |
313 | logger.info("Downgrading %(sql_connection)s to version %(version)s" % |
314 | locals()) |
315 | @@ -98,7 +98,7 @@ |
316 | |
317 | :param options: options dict |
318 | """ |
319 | - repo_path = _find_migrate_repo() |
320 | + repo_path = get_migrate_repo_path() |
321 | sql_connection = options['sql_connection'] |
322 | return versioning_api.version_control(sql_connection, repo_path) |
323 | |
324 | @@ -117,7 +117,7 @@ |
325 | upgrade(options, version=version) |
326 | |
327 | |
328 | -def _find_migrate_repo(): |
329 | +def get_migrate_repo_path(): |
330 | """Get the path for the migrate repository.""" |
331 | path = os.path.join(os.path.abspath(os.path.dirname(__file__)), |
332 | 'migrate_repo') |
333 | |
334 | === modified file 'glance/server.py' |
335 | --- glance/server.py 2011-03-17 21:09:06 +0000 |
336 | +++ glance/server.py 2011-03-24 23:05:57 +0000 |
337 | @@ -320,8 +320,8 @@ |
338 | # This is why we can't use a raise with no arguments here: our |
339 | # exception context was destroyed by Eventlet. To work around |
340 | # this, we need to 'memorize' the exception context, and then |
341 | - # re-raise using 3-arg form after Eventlet has run |
342 | - raise exc_type, exc_value, exc_traceback |
343 | + # re-raise here. |
344 | + raise exc_type(exc_traceback) |
345 | |
346 | def create(self, req): |
347 | """ |
348 | |
349 | === added file 'tests/unit/test_migrations.conf' |
350 | --- tests/unit/test_migrations.conf 1970-01-01 00:00:00 +0000 |
351 | +++ tests/unit/test_migrations.conf 2011-03-24 23:05:57 +0000 |
352 | @@ -0,0 +1,5 @@ |
353 | +[DEFAULT] |
354 | +# Set up any number of migration data stores you want, one |
355 | +# The "name" used in the test is the config variable key. |
356 | +sqlite=sqlite:///test_migrations.db |
357 | +#mysql=mysql://root:@localhost/test_migrations |
358 | |
359 | === modified file 'tests/unit/test_migrations.py' |
360 | --- tests/unit/test_migrations.py 2011-03-16 16:11:56 +0000 |
361 | +++ tests/unit/test_migrations.py 2011-03-24 23:05:57 +0000 |
362 | @@ -15,39 +15,244 @@ |
363 | # License for the specific language governing permissions and limitations |
364 | # under the License. |
365 | |
366 | +""" |
367 | +Tests for database migrations. This test case reads the configuration |
368 | +file /tests/unit/test_migrations.conf for database connection settings |
369 | +to use in the tests. For each connection found in the config file, |
370 | +the test case runs a series of test cases to ensure that migrations work |
371 | +properly both upgrading and downgrading, and that no data loss occurs |
372 | +if possible. |
373 | +""" |
374 | + |
375 | +import ConfigParser |
376 | +import datetime |
377 | import os |
378 | import unittest |
379 | - |
380 | +import urlparse |
381 | + |
382 | +from migrate.versioning.repository import Repository |
383 | +from sqlalchemy import * |
384 | +from sqlalchemy.pool import NullPool |
385 | + |
386 | +from glance.common import exception |
387 | import glance.registry.db.migration as migration_api |
388 | -import glance.registry.db.api as api |
389 | -import glance.common.config as config |
390 | +from tests.utils import execute |
391 | |
392 | |
393 | class TestMigrations(unittest.TestCase): |
394 | + |
395 | """Test sqlalchemy-migrate migrations""" |
396 | |
397 | + TEST_DATABASES = {} |
398 | + # Test machines can set the GLANCE_TEST_MIGRATIONS_CONF variable |
399 | + # to override the location of the config file for migration testing |
400 | + CONFIG_FILE_PATH = os.environ.get('GLANCE_TEST_MIGRATIONS_CONF', |
401 | + os.path.join('tests', 'unit', |
402 | + 'test_migrations.conf')) |
403 | + REPOSITORY_PATH = os.path.join('glance', 'registry', 'db', 'migrate_repo') |
404 | + REPOSITORY = Repository(REPOSITORY_PATH) |
405 | + |
406 | + def __init__(self, *args, **kwargs): |
407 | + super(TestMigrations, self).__init__(*args, **kwargs) |
408 | + |
409 | def setUp(self): |
410 | - self.db_path = "glance_test_migration.sqlite" |
411 | - sql_connection = os.environ.get('GLANCE_SQL_CONNECTION', |
412 | - "sqlite:///%s" % self.db_path) |
413 | - |
414 | - self.options = dict(sql_connection=sql_connection, |
415 | - verbose=False) |
416 | - config.setup_logging(self.options, {}) |
417 | + # Load test databases from the config file. Only do this |
418 | + # once. No need to re-run this on each test... |
419 | + if not TestMigrations.TEST_DATABASES: |
420 | + if os.path.exists(TestMigrations.CONFIG_FILE_PATH): |
421 | + cp = ConfigParser.RawConfigParser() |
422 | + try: |
423 | + cp.read(TestMigrations.CONFIG_FILE_PATH) |
424 | + defaults = cp.defaults() |
425 | + for key, value in defaults.items(): |
426 | + TestMigrations.TEST_DATABASES[key] = value |
427 | + except ConfigParser.ParsingError, e: |
428 | + print ("Failed to read test_migrations.conf config file. " |
429 | + "Got error: %s" % e) |
430 | + |
431 | + self.engines = {} |
432 | + for key, value in TestMigrations.TEST_DATABASES.items(): |
433 | + self.engines[key] = create_engine(value, poolclass=NullPool) |
434 | + |
435 | + # We start each test case with a completely blank slate. |
436 | + self._reset_databases() |
437 | |
438 | def tearDown(self): |
439 | - api.configure_db(self.options) |
440 | - api.unregister_models() |
441 | - |
442 | - def test_db_sync_downgrade_then_upgrade(self): |
443 | - migration_api.db_sync(self.options) |
444 | - |
445 | - latest = migration_api.db_version(self.options) |
446 | - |
447 | - migration_api.downgrade(self.options, latest - 1) |
448 | - cur_version = migration_api.db_version(self.options) |
449 | - self.assertEqual(cur_version, latest - 1) |
450 | - |
451 | - migration_api.upgrade(self.options, cur_version + 1) |
452 | - cur_version = migration_api.db_version(self.options) |
453 | - self.assertEqual(cur_version, latest) |
454 | + # We destroy the test data store between each test case, |
455 | + # and recreate it, which ensures that we have no side-effects |
456 | + # from the tests |
457 | + self._reset_databases() |
458 | + |
459 | + def _reset_databases(self): |
460 | + for key, engine in self.engines.items(): |
461 | + conn_string = TestMigrations.TEST_DATABASES[key] |
462 | + conn_pieces = urlparse.urlparse(conn_string) |
463 | + if conn_string.startswith('sqlite'): |
464 | + # We can just delete the SQLite database, which is |
465 | + # the easiest and cleanest solution |
466 | + db_path = conn_pieces.path.strip('/') |
467 | + if os.path.exists(db_path): |
468 | + os.unlink(db_path) |
469 | + # No need to recreate the SQLite DB. SQLite will |
470 | + # create it for us if it's not there... |
471 | + elif conn_string.startswith('mysql'): |
472 | + # We can execute the MySQL client to destroy and re-create |
473 | + # the MYSQL database, which is easier and less error-prone |
474 | + # than using SQLAlchemy to do this via MetaData...trust me. |
475 | + database = conn_pieces.path.strip('/') |
476 | + loc_pieces = conn_pieces.netloc.split('@') |
477 | + host = loc_pieces[1] |
478 | + auth_pieces = loc_pieces[0].split(':') |
479 | + user = auth_pieces[0] |
480 | + password = "" |
481 | + if len(auth_pieces) > 1: |
482 | + if auth_pieces[1].strip(): |
483 | + password = "-p%s" % auth_pieces[1] |
484 | + sql = ("drop database if exists %(database)s; " |
485 | + "create database %(database)s;") % locals() |
486 | + cmd = ("mysql -u%(user)s %(password)s -h%(host)s " |
487 | + "-e\"%(sql)s\"") % locals() |
488 | + exitcode, out, err = execute(cmd) |
489 | + self.assertEqual(0, exitcode) |
490 | + |
491 | + def test_walk_versions(self): |
492 | + """ |
493 | + Walks all version scripts for each tested database, ensuring |
494 | + that there are no errors in the version scripts for each engine |
495 | + """ |
496 | + for key, engine in self.engines.items(): |
497 | + options = {'sql_connection': TestMigrations.TEST_DATABASES[key]} |
498 | + self._walk_versions(options) |
499 | + |
500 | + def _walk_versions(self, options): |
501 | + # Determine latest version script from the repo, then |
502 | + # upgrade from 1 through to the latest, with no data |
503 | + # in the databases. This just checks that the schema itself |
504 | + # upgrades successfully. |
505 | + |
506 | + # Assert we are not under version control... |
507 | + self.assertRaises(exception.DatabaseMigrationError, |
508 | + migration_api.db_version, |
509 | + options) |
510 | + # Place the database under version control |
511 | + migration_api.version_control(options) |
512 | + |
513 | + cur_version = migration_api.db_version(options) |
514 | + self.assertEqual(0, cur_version) |
515 | + |
516 | + for version in xrange(1, TestMigrations.REPOSITORY.latest + 1): |
517 | + migration_api.upgrade(options, version) |
518 | + cur_version = migration_api.db_version(options) |
519 | + self.assertEqual(cur_version, version) |
520 | + |
521 | + # Now walk it back down to 0 from the latest, testing |
522 | + # the downgrade paths. |
523 | + for version in reversed( |
524 | + xrange(0, TestMigrations.REPOSITORY.latest)): |
525 | + migration_api.downgrade(options, version) |
526 | + cur_version = migration_api.db_version(options) |
527 | + self.assertEqual(cur_version, version) |
528 | + |
529 | + def test_no_data_loss_2_to_3_to_2(self): |
530 | + """ |
531 | + Here, we test that in the case when we moved a column "type" from the |
532 | + base images table to be records in the image_properties table, that |
533 | + we don't lose any data during the migration. Similarly, we test that |
534 | + on downgrade, we don't lose any data, as the records are moved from |
535 | + the image_properties table back into the base image table. |
536 | + """ |
537 | + for key, engine in self.engines.items(): |
538 | + options = {'sql_connection': TestMigrations.TEST_DATABASES[key]} |
539 | + self._no_data_loss_2_to_3_to_2(engine, options) |
540 | + |
541 | + def _no_data_loss_2_to_3_to_2(self, engine, options): |
542 | + migration_api.version_control(options) |
543 | + migration_api.upgrade(options, 2) |
544 | + |
545 | + cur_version = migration_api.db_version(options) |
546 | + self.assertEquals(2, cur_version) |
547 | + |
548 | + # We are now on version 2. Check that the images table does |
549 | + # not contain the type column... |
550 | + |
551 | + images_table = Table('images', MetaData(), autoload=True, |
552 | + autoload_with=engine) |
553 | + |
554 | + image_properties_table = Table('image_properties', MetaData(), |
555 | + autoload=True, |
556 | + autoload_with=engine) |
557 | + |
558 | + self.assertTrue('type' in images_table.c, |
559 | + "'type' column found in images table columns! " |
560 | + "images table columns: %s" |
561 | + % images_table.c.keys()) |
562 | + |
563 | + conn = engine.connect() |
564 | + sel = select([func.count("*")], from_obj=[images_table]) |
565 | + orig_num_images = conn.execute(sel).scalar() |
566 | + sel = select([func.count("*")], from_obj=[image_properties_table]) |
567 | + orig_num_image_properties = conn.execute(sel).scalar() |
568 | + |
569 | + now = datetime.datetime.now() |
570 | + inserter = images_table.insert() |
571 | + conn.execute(inserter, [ |
572 | + {'deleted': False, 'created_at': now, |
573 | + 'updated_at': now, 'type': 'kernel', |
574 | + 'status': 'active', 'is_public': True}, |
575 | + {'deleted': False, 'created_at': now, |
576 | + 'updated_at': now, 'type': 'ramdisk', |
577 | + 'status': 'active', 'is_public': True}]) |
578 | + |
579 | + sel = select([func.count("*")], from_obj=[images_table]) |
580 | + num_images = conn.execute(sel).scalar() |
581 | + self.assertEqual(orig_num_images + 2, num_images) |
582 | + conn.close() |
583 | + |
584 | + # Now let's upgrade to 3. This should move the type column |
585 | + # to the image_properties table as type properties. |
586 | + |
587 | + migration_api.upgrade(options, 3) |
588 | + |
589 | + cur_version = migration_api.db_version(options) |
590 | + self.assertEquals(3, cur_version) |
591 | + |
592 | + images_table = Table('images', MetaData(), autoload=True, |
593 | + autoload_with=engine) |
594 | + |
595 | + self.assertTrue('type' not in images_table.c, |
596 | + "'type' column not found in images table columns! " |
597 | + "images table columns reported by metadata: %s\n" |
598 | + % images_table.c.keys()) |
599 | + |
600 | + image_properties_table = Table('image_properties', MetaData(), |
601 | + autoload=True, |
602 | + autoload_with=engine) |
603 | + |
604 | + conn = engine.connect() |
605 | + sel = select([func.count("*")], from_obj=[image_properties_table]) |
606 | + num_image_properties = conn.execute(sel).scalar() |
607 | + self.assertEqual(orig_num_image_properties + 2, num_image_properties) |
608 | + conn.close() |
609 | + |
610 | + # Downgrade to 2 and check that the type properties were moved |
611 | + # to the main image table |
612 | + |
613 | + migration_api.downgrade(options, 2) |
614 | + |
615 | + images_table = Table('images', MetaData(), autoload=True, |
616 | + autoload_with=engine) |
617 | + |
618 | + self.assertTrue('type' in images_table.c, |
619 | + "'type' column found in images table columns! " |
620 | + "images table columns: %s" |
621 | + % images_table.c.keys()) |
622 | + |
623 | + image_properties_table = Table('image_properties', MetaData(), |
624 | + autoload=True, |
625 | + autoload_with=engine) |
626 | + |
627 | + conn = engine.connect() |
628 | + sel = select([func.count("*")], from_obj=[image_properties_table]) |
629 | + last_num_image_properties = conn.execute(sel).scalar() |
630 | + |
631 | + self.assertEqual(num_image_properties - 2, last_num_image_properties) |
Good god Jay, that's a crazy bug fix.
I see the value of the migration tests (especially in this situation), but boy, is it ever going to increase the effort involved to change the schema :)
Nothing in here I can really complain about without really digging in deep.