Merge lp:~jaypipes/glance/checksum into lp:~glance-coresec/glance/cactus-trunk

Proposed by Jay Pipes
Status: Superseded
Proposed branch: lp:~jaypipes/glance/checksum
Merge into: lp:~glance-coresec/glance/cactus-trunk
Diff against target: 1588 lines (+855/-158)
20 files modified
doc/source/glanceapi.rst (+20/-0)
glance/client.py (+4/-1)
glance/registry/db/api.py (+1/-1)
glance/registry/db/migrate_repo/versions/003_add_disk_format.py (+147/-0)
glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql (+58/-0)
glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql (+64/-0)
glance/registry/db/migrate_repo/versions/004_add_checksum.py (+80/-0)
glance/registry/db/migration.py (+5/-5)
glance/registry/db/models.py (+1/-0)
glance/registry/server.py (+21/-8)
glance/server.py (+72/-42)
glance/store/__init__.py (+0/-37)
glance/store/filesystem.py (+12/-21)
glance/store/swift.py (+1/-1)
tests/stubs.py (+6/-1)
tests/unit/test_api.py (+103/-5)
tests/unit/test_filesystem_store.py (+11/-5)
tests/unit/test_migrations.conf (+5/-0)
tests/unit/test_migrations.py (+227/-25)
tests/unit/test_swift_store.py (+17/-6)
To merge this branch: bzr merge lp:~jaypipes/glance/checksum
Reviewer Review Type Date Requested Status
Sandy Walsh (community) Approve
Cory Wright (community) Needs Fixing
Rick Harris (community) Approve
Chuck Thier (community) Approve
Review via email: mp+52569@code.launchpad.net

This proposal has been superseded by a proposal from 2011-03-23.

Commit message

Add checksumming capabilities to Glance

Description of the change

Adds checksumming to Glance.

When adding an image (or uploading image data during a PUT operation),
you may now supply an optional X-Image-Meta-Checksum header. When
storing the uploaded image, the backend image stores are now required
to return a checksum of the data they just stored. The optional
X-Image-Meta-Checksum header is compared against this generated checksum,
and a 400 Bad Request is returned if there is a mismatch.

The ETag header is now properly set to the image's checksum
for all GET /images/<ID>, HEAD /images/<ID>, POST /images and
PUT /images/<ID> operations.

Adds unit tests verifying the checksumming behaviour in the API, and
in the Swift and Filesystem backend stores.

NOTE: This does not include the DB migration script. Separate bug will be filed for that.
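
For illustration, computing the header value on the client side could look like this (a sketch only; the `checksum_headers` helper and the literal image bytes are made up for this example, not part of the branch):

```python
import hashlib

def checksum_headers(image_data):
    # Glance compares this MD5 hex digest against the checksum
    # reported by the backend store after the upload completes.
    return {'X-Image-Meta-Checksum': hashlib.md5(image_data).hexdigest()}

headers = checksum_headers(b'fake image contents')
```

For a real image you would feed the file to `md5.update()` in chunks rather than reading it all into memory before sending the POST or PUT.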

Revision history for this message
Chuck Thier (cthier) wrote :

The swift parts look reasonable to me. The only other thing that I might add is to clarify in the docs that the checksum is an md5 checksum, otherwise people may try to use CRC.

review: Approve
Revision history for this message
Jay Pipes (jaypipes) wrote :

thx Chuck. Made a note in the docs about the checksum being an MD5 checksum.

lp:~jaypipes/glance/checksum updated
81. By Jay Pipes

Make it clear that the checksum is an MD5 checksum in docs.

Revision history for this message
Rick Harris (rconradharris) wrote :

Nice patch.

Unfortunately with the state of Nova <=> Glance trunks at the moment, I wasn't able to test functionally. That said, this seems pretty safe to go ahead and merge, and if we have issues, we can fix them along with all the other fixes that are in progress at the moment :)

> 110 + checksum = Column(String(32))

I assume the answer is YAGNI, but I'll ask it anyway: Will we ever allow SHA1 or SHA256?

Would it make sense to make this a String(64) or, heck, a String(255) for flexibility?

review: Approve
Revision history for this message
Jay Pipes (jaypipes) wrote :

> > 110 + checksum = Column(String(32))
>
> I assume the answer is YAGNI, but I'll ask it anyway: Will we ever allow SHA1
> or SHA256?
>
> Would it make sense to make this a String(64) or, heck, a String(255) for
> flexibility?

Hmm, good question. Not sure, really. I suppose we could expand it in the future if we decide it's a needed feature.
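
For reference, the hex digest widths behind the column-size question, sketched with Python's hashlib (none of this code is in the branch):

```python
import hashlib

data = b'example image bytes'

# Hex digest length per algorithm: MD5 fits String(32);
# SHA-1 would need String(40) and SHA-256 String(64).
digest_lengths = {
    'md5': len(hashlib.md5(data).hexdigest()),
    'sha1': len(hashlib.sha1(data).hexdigest()),
    'sha256': len(hashlib.sha256(data).hexdigest()),
}
print(digest_lengths)
```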

Revision history for this message
Cory Wright (corywright) wrote :

I noticed that there is no migration script to add the checksum column. Trunk is already missing a migration for disk_format/container_format, but we probably shouldn't get in the habit of merging to trunk without migrations, since that effectively leaves trunk broken. However, the tests seem to run fine without the migration.

I tried merging trunk into this branch to test these changes with the glance cli tool, but there were conflicts so you may want to merge trunk once more.

Otherwise, this looks good to me. I'm going to mark it as Needs Fixing because of the missing migration. If you think it's ok to merge this to trunk without the migration then let me know and I'll mark this approved.

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

> I noticed that there is no migration script to add the checksum column. Trunk
> is already missing a migration for disk_format/container_format, but we
> probably shouldn't get in the habit of merging to trunk without migrations,
> since that effectively leaves trunk broken. However, the tests seem to run
> fine without the migration.
>
> I tried merging trunk into this branch to test these changes with the glance
> cli tool, but there were conflicts so you may want to merge trunk once more.
>
> Otherwise, this looks good to me. I'm going to mark it as Needs Fixing
> because of the missing migration. If you think it's ok to merge this to trunk
> without the migration then let me know and I'll mark this approved.

Agreed, though I did note this in the merge proposal commit message:

"NOTE: This does not include the DB migration script. Separate bug will be filed for that."

But, I'll work on a migrate script after merging trunk again. Thanks, Cory!

Revision history for this message
Cory Wright (corywright) wrote :

You sure did mention it, sorry about that. I read the commit message yesterday when I first looked at this and forgot.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Hmm, I got bitten by this migration bug yesterday ... can't this be put in this branch to keep trunk happy?

Revision history for this message
Jay Pipes (jaypipes) wrote :

> Hmm, I got bitten by this migration bug yesterday ... can't this be put in
> this branch to keep trunk happy?

? This hasn't been merged yet, so I'm unclear how this could have bitten you :)

FYI, I've now figured out the migrations stuff. Full steam ahead today on getting the image format migration bug done and getting this branch merged with a migration file.

lp:~jaypipes/glance/checksum updated
82. By Jay Pipes

Merge trunk and resolve conflicts

83. By Jay Pipes

Merge bug730213 (Migration fixes and tests)

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

sorry, it was a nova issue. _get_kernel_ramdisk_from_image checks 'disk_format' != 'ami'

I guess that just stresses the importance of having the migrations in place with the branch :)

Minor tweak ... ping me and I'll approve immediately after

278 missing _() ?

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

On Wed, Mar 23, 2011 at 10:26 AM, Sandy Walsh <email address hidden> wrote:
> Review: Needs Fixing
> sorry, it was a nova issue. _get_kernel_ramdisk_from_image checks 'disk_format' != 'ami'
>
> I guess that just stresses the importance of having the migrations in place with the branch :)

Ya, I'm currently working on the migration script. should be done shortly...

> Minor tweak ... ping me and I'll approve immediately after
>
> 278 missing _() ?

Glance doesn't have i18n. Yet :)

lp:~jaypipes/glance/checksum updated
84. By Jay Pipes

Merge fix from bug730213

85. By Jay Pipes

Add migration script for checksum column

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

ah! haha

review: Approve
lp:~jaypipes/glance/checksum updated
86. By Jay Pipes

Merge trunk

87. By Jay Pipes

Fix up test case after merging in bug fixes from trunk... expected results were incorrect in curl test

88. By Jay Pipes

Uhhhm, stop_servers() should stop servers, not start them! Thanks to Cory for uncovering this copy/paste fail.


Preview Diff

=== modified file 'doc/source/glanceapi.rst'
--- doc/source/glanceapi.rst 2011-03-05 17:04:43 +0000
+++ doc/source/glanceapi.rst 2011-03-23 15:06:49 +0000
@@ -67,6 +67,7 @@
     'disk_format': 'vhd',
     'container_format': 'ovf',
     'size': '5368709120',
+    'checksum': 'c2e5db72bd7fd153f53ede5da5a06de3',
     'location': 'swift://account:key/container/image.tar.gz.0',
     'created_at': '2010-02-03 09:34:01',
     'updated_at': '2010-02-03 09:34:01',
@@ -89,6 +90,8 @@
 The `properties` field is a mapping of free-form key/value pairs that
 have been saved with the image metadata
 
+The `checksum` field is an MD5 checksum of the image file data
+
 
 Requesting Detailed Metadata on a Specific Image
 ------------------------------------------------
@@ -116,6 +119,7 @@
 x-image-meta-disk-format      vhd
 x-image-meta-container-format ovf
 x-image-meta-size             5368709120
+x-image-meta-checksum         c2e5db72bd7fd153f53ede5da5a06de3
 x-image-meta-location         swift://account:key/container/image.tar.gz.0
 x-image-meta-created_at       2010-02-03 09:34:01
 x-image-meta-updated_at       2010-02-03 09:34:01
@@ -137,6 +141,9 @@
 that have been saved with the image metadata. The key is the string
 after `x-image-meta-property-` and the value is the value of the header
 
+The response's `ETag` header will always be equal to the
+`x-image-meta-checksum` value
+
 
 Retrieving a Virtual Machine Image
 ----------------------------------
@@ -166,6 +173,7 @@
 x-image-meta-disk-format      vhd
 x-image-meta-container-format ovf
 x-image-meta-size             5368709120
+x-image-meta-checksum         c2e5db72bd7fd153f53ede5da5a06de3
 x-image-meta-location         swift://account:key/container/image.tar.gz.0
 x-image-meta-created_at       2010-02-03 09:34:01
 x-image-meta-updated_at       2010-02-03 09:34:01
@@ -190,6 +198,9 @@
 The response's `Content-Length` header shall be equal to the value of
 the `x-image-meta-size` header
 
+The response's `ETag` header will always be equal to the
+`x-image-meta-checksum` value
+
 The image data itself will be the body of the HTTP response returned
 from the request, which will have content-type of
 `application/octet-stream`.
@@ -284,6 +295,15 @@
 When not present, Glance will calculate the image's size based on the size
 of the request body.
 
+* ``x-image-meta-checksum``
+
+  This header is optional. When present it shall be the expected **MD5**
+  checksum of the image file data.
+
+  When present, Glance will verify the checksum generated from the backend
+  store when storing your image against this value and return a
+  **400 Bad Request** if the values do not match.
+
 * ``x-image-meta-is-public``
 
   This header is optional.
 
=== modified file 'glance/client.py'
--- glance/client.py 2011-03-06 17:02:44 +0000
+++ glance/client.py 2011-03-23 15:06:49 +0000
@@ -142,7 +142,10 @@
         c.request(method, action, body, headers)
         res = c.getresponse()
         status_code = self.get_status_code(res)
-        if status_code == httplib.OK:
+        if status_code in (httplib.OK,
+                           httplib.CREATED,
+                           httplib.ACCEPTED,
+                           httplib.NO_CONTENT):
             return res
         elif status_code == httplib.UNAUTHORIZED:
             raise exception.NotAuthorized
 
=== modified file 'glance/registry/db/api.py'
--- glance/registry/db/api.py 2011-03-14 19:10:24 +0000
+++ glance/registry/db/api.py 2011-03-23 15:06:49 +0000
@@ -43,7 +43,7 @@
4343
44IMAGE_ATTRS = BASE_MODEL_ATTRS | set(['name', 'status', 'size',44IMAGE_ATTRS = BASE_MODEL_ATTRS | set(['name', 'status', 'size',
45 'disk_format', 'container_format',45 'disk_format', 'container_format',
46 'is_public', 'location'])46 'is_public', 'location', 'checksum'])
4747
48CONTAINER_FORMATS = ['ami', 'ari', 'aki', 'bare', 'ovf']48CONTAINER_FORMATS = ['ami', 'ari', 'aki', 'bare', 'ovf']
49DISK_FORMATS = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2', 'vdi']49DISK_FORMATS = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2', 'vdi']
5050
=== added file 'glance/registry/db/migrate_repo/versions/003_add_disk_format.py'
--- glance/registry/db/migrate_repo/versions/003_add_disk_format.py 1970-01-01 00:00:00 +0000
+++ glance/registry/db/migrate_repo/versions/003_add_disk_format.py 2011-03-23 15:06:49 +0000
@@ -0,0 +1,147 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2011 OpenStack LLC.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from migrate.changeset import *
+from sqlalchemy import *
+from sqlalchemy.sql import and_, not_
+
+from glance.registry.db.migrate_repo.schema import (
+    Boolean, DateTime, Integer, String, Text, from_migration_import)
+
+
+def get_images_table(meta):
+    """
+    Returns the Table object for the images table that
+    corresponds to the images table definition of this version.
+    """
+    images = Table('images', meta,
+        Column('id', Integer(), primary_key=True, nullable=False),
+        Column('name', String(255)),
+        Column('disk_format', String(20)),
+        Column('container_format', String(20)),
+        Column('size', Integer()),
+        Column('status', String(30), nullable=False),
+        Column('is_public', Boolean(), nullable=False, default=False,
+               index=True),
+        Column('location', Text()),
+        Column('created_at', DateTime(), nullable=False),
+        Column('updated_at', DateTime()),
+        Column('deleted_at', DateTime()),
+        Column('deleted', Boolean(), nullable=False, default=False,
+               index=True),
+        mysql_engine='InnoDB',
+        useexisting=True)
+
+    return images
+
+
+def get_image_properties_table(meta):
+    """
+    No changes to the image properties table from 002...
+    """
+    (define_image_properties_table,) = from_migration_import(
+        '002_add_image_properties_table', ['define_image_properties_table'])
+
+    image_properties = define_image_properties_table(meta)
+    return image_properties
+
+
+def upgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+    (define_images_table,) = from_migration_import(
+        '001_add_images_table', ['define_images_table'])
+    (define_image_properties_table,) = from_migration_import(
+        '002_add_image_properties_table', ['define_image_properties_table'])
+
+    conn = migrate_engine.connect()
+    images = define_images_table(meta)
+    image_properties = define_image_properties_table(meta)
+
+    # Steps to take, in this order:
+    # 1) Move the existing type column from Image into
+    #    ImageProperty for all image records that have a non-NULL
+    #    type column
+    # 2) Drop the type column in images
+    # 3) Add the new columns to images
+
+    # The below wackiness correlates to the following ANSI SQL:
+    #   SELECT images.* FROM images
+    #   LEFT JOIN image_properties
+    #   ON images.id = image_properties.image_id
+    #   AND image_properties.key = 'type'
+    #   WHERE image_properties.image_id IS NULL
+    #   AND images.type IS NOT NULL
+    #
+    # which returns all the images that have a type set
+    # but that DO NOT yet have an image_property record
+    # with key of type.
+    sel = select([images], from_obj=[
+        images.outerjoin(image_properties,
+            and_(images.c.id == image_properties.c.image_id,
+                 image_properties.c.key == 'type'))]).where(
+        and_(image_properties.c.image_id == None,
+             images.c.type != None))
+    image_records = conn.execute(sel).fetchall()
+    property_insert = image_properties.insert()
+    for record in image_records:
+        conn.execute(property_insert,
+                     image_id=record.id,
+                     key='type',
+                     created_at=record.created_at,
+                     deleted=False,
+                     value=record.type)
+
+    disk_format = Column('disk_format', String(20))
+    disk_format.create(images)
+    container_format = Column('container_format', String(20))
+    container_format.create(images)
+
+    images.columns['type'].drop()
+    conn.close()
+
+
+def downgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    # Steps to take, in this order:
+    # 1) Add type column back to Image
+    # 2) Move the existing type properties from ImageProperty into
+    #    Image.type
+    # 3) Drop the disk_format and container_format columns in Image
+
+    conn = migrate_engine.connect()
+    images = get_images_table(meta)
+    image_properties = get_image_properties_table(meta)
+
+    type_col = Column('type', String(30))
+    type_col.create(images)
+
+    sel = select([image_properties]).where(image_properties.c.key == 'type')
+    type_property_records = conn.execute(sel).fetchall()
+    for record in type_property_records:
+        upd = images.update().where(
+            images.c.id == record.image_id).values(type=record.value)
+        conn.execute(upd)
+        dlt = image_properties.delete().where(
+            image_properties.c.image_id == record.image_id)
+        conn.execute(dlt)
+    conn.close()
+
+    images.columns['disk_format'].drop()
+    images.columns['container_format'].drop()
=== added file 'glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql'
--- glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql 1970-01-01 00:00:00 +0000
+++ glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql 2011-03-23 15:06:49 +0000
@@ -0,0 +1,58 @@
+BEGIN;
+
+/* Make changes to the base images table */
+CREATE TEMPORARY TABLE images_backup (
+    id INTEGER NOT NULL,
+    name VARCHAR(255),
+    size INTEGER,
+    status VARCHAR(30) NOT NULL,
+    is_public BOOLEAN NOT NULL,
+    location TEXT,
+    created_at DATETIME NOT NULL,
+    updated_at DATETIME,
+    deleted_at DATETIME,
+    deleted BOOLEAN NOT NULL,
+    PRIMARY KEY (id)
+);
+
+INSERT INTO images_backup
+SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted
+FROM images;
+
+DROP TABLE images;
+
+CREATE TABLE images (
+    id INTEGER NOT NULL,
+    name VARCHAR(255),
+    size INTEGER,
+    type VARCHAR(30),
+    status VARCHAR(30) NOT NULL,
+    is_public BOOLEAN NOT NULL,
+    location TEXT,
+    created_at DATETIME NOT NULL,
+    updated_at DATETIME,
+    deleted_at DATETIME,
+    deleted BOOLEAN NOT NULL,
+    PRIMARY KEY (id),
+    CHECK (is_public IN (0, 1)),
+    CHECK (deleted IN (0, 1))
+);
+CREATE INDEX ix_images_deleted ON images (deleted);
+CREATE INDEX ix_images_is_public ON images (is_public);
+
+INSERT INTO images (id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted)
+SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted
+FROM images_backup;
+
+DROP TABLE images_backup;
+
+/* Re-insert the type values from the temp table */
+UPDATE images
+SET type = (SELECT value FROM image_properties WHERE image_id = images.id AND key = 'type')
+WHERE EXISTS (SELECT * FROM image_properties WHERE image_id = images.id AND key = 'type');
+
+/* Remove the type properties from the image_properties table */
+DELETE FROM image_properties
+WHERE key = 'type';
+
+COMMIT;
=== added file 'glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql'
--- glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql 1970-01-01 00:00:00 +0000
+++ glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql 2011-03-23 15:06:49 +0000
@@ -0,0 +1,64 @@
+BEGIN TRANSACTION;
+
+/* Move type column from base images table
+ * to be records in image_properties table */
+CREATE TEMPORARY TABLE tmp_type_records (id INTEGER NOT NULL, type VARCHAR(30) NOT NULL);
+INSERT INTO tmp_type_records
+SELECT id, type
+FROM images
+WHERE type IS NOT NULL;
+
+REPLACE INTO image_properties
+(image_id, key, value, created_at, deleted)
+SELECT id, 'type', type, date('now'), 0
+FROM tmp_type_records;
+
+DROP TABLE tmp_type_records;
+
+/* Make changes to the base images table */
+CREATE TEMPORARY TABLE images_backup (
+    id INTEGER NOT NULL,
+    name VARCHAR(255),
+    size INTEGER,
+    status VARCHAR(30) NOT NULL,
+    is_public BOOLEAN NOT NULL,
+    location TEXT,
+    created_at DATETIME NOT NULL,
+    updated_at DATETIME,
+    deleted_at DATETIME,
+    deleted BOOLEAN NOT NULL,
+    PRIMARY KEY (id)
+);
+
+INSERT INTO images_backup
+SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted
+FROM images;
+
+DROP TABLE images;
+
+CREATE TABLE images (
+    id INTEGER NOT NULL,
+    name VARCHAR(255),
+    size INTEGER,
+    status VARCHAR(30) NOT NULL,
+    is_public BOOLEAN NOT NULL,
+    location TEXT,
+    created_at DATETIME NOT NULL,
+    updated_at DATETIME,
+    deleted_at DATETIME,
+    deleted BOOLEAN NOT NULL,
+    disk_format VARCHAR(20),
+    container_format VARCHAR(20),
+    PRIMARY KEY (id),
+    CHECK (is_public IN (0, 1)),
+    CHECK (deleted IN (0, 1))
+);
+CREATE INDEX ix_images_deleted ON images (deleted);
+CREATE INDEX ix_images_is_public ON images (is_public);
+
+INSERT INTO images (id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted)
+SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted
+FROM images_backup;
+
+DROP TABLE images_backup;
+COMMIT;
=== added file 'glance/registry/db/migrate_repo/versions/004_add_checksum.py'
--- glance/registry/db/migrate_repo/versions/004_add_checksum.py 1970-01-01 00:00:00 +0000
+++ glance/registry/db/migrate_repo/versions/004_add_checksum.py 2011-03-23 15:06:49 +0000
@@ -0,0 +1,80 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2011 OpenStack LLC.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from migrate.changeset import *
+from sqlalchemy import *
+from sqlalchemy.sql import and_, not_
+
+from glance.registry.db.migrate_repo.schema import (
+    Boolean, DateTime, Integer, String, Text, from_migration_import)
+
+
+def get_images_table(meta):
+    """
+    Returns the Table object for the images table that
+    corresponds to the images table definition of this version.
+    """
+    images = Table('images', meta,
+        Column('id', Integer(), primary_key=True, nullable=False),
+        Column('name', String(255)),
+        Column('disk_format', String(20)),
+        Column('container_format', String(20)),
+        Column('size', Integer()),
+        Column('status', String(30), nullable=False),
+        Column('is_public', Boolean(), nullable=False, default=False,
+               index=True),
+        Column('location', Text()),
+        Column('created_at', DateTime(), nullable=False),
+        Column('updated_at', DateTime()),
+        Column('deleted_at', DateTime()),
+        Column('deleted', Boolean(), nullable=False, default=False,
+               index=True),
+        Column('checksum', String(32)),
+        mysql_engine='InnoDB',
+        useexisting=True)
+
+    return images
+
+
+def get_image_properties_table(meta):
+    """
+    No changes to the image properties table from 002...
+    """
+    (define_image_properties_table,) = from_migration_import(
+        '002_add_image_properties_table', ['define_image_properties_table'])
+
+    image_properties = define_image_properties_table(meta)
+    return image_properties
+
+
+def upgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    images = get_images_table(meta)
+
+    checksum = Column('checksum', String(32))
+    checksum.create(images)
+
+
+def downgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    images = get_images_table(meta)
+
+    images.columns['checksum'].drop()
=== modified file 'glance/registry/db/migration.py'
--- glance/registry/db/migration.py 2011-03-08 19:51:25 +0000
+++ glance/registry/db/migration.py 2011-03-23 15:06:49 +0000
@@ -38,7 +38,7 @@
     :param options: options dict
     :retval version number
     """
-    repo_path = _find_migrate_repo()
+    repo_path = get_migrate_repo_path()
     sql_connection = options['sql_connection']
     try:
         return versioning_api.db_version(sql_connection, repo_path)
@@ -56,7 +56,7 @@
     :retval version number
     """
     db_version(options)  # Ensure db is under migration control
-    repo_path = _find_migrate_repo()
+    repo_path = get_migrate_repo_path()
     sql_connection = options['sql_connection']
     version_str = version or 'latest'
     logger.info("Upgrading %(sql_connection)s to version %(version_str)s" %
@@ -72,7 +72,7 @@
     :retval version number
     """
     db_version(options)  # Ensure db is under migration control
-    repo_path = _find_migrate_repo()
+    repo_path = get_migrate_repo_path()
     sql_connection = options['sql_connection']
     logger.info("Downgrading %(sql_connection)s to version %(version)s" %
                 locals())
@@ -98,7 +98,7 @@
 
     :param options: options dict
     """
-    repo_path = _find_migrate_repo()
+    repo_path = get_migrate_repo_path()
     sql_connection = options['sql_connection']
     return versioning_api.version_control(sql_connection, repo_path)
 
@@ -117,7 +117,7 @@
     upgrade(options, version=version)
 
 
-def _find_migrate_repo():
+def get_migrate_repo_path():
     """Get the path for the migrate repository."""
     path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                         'migrate_repo')
 
=== modified file 'glance/registry/db/models.py'
--- glance/registry/db/models.py 2011-03-05 17:04:43 +0000
+++ glance/registry/db/models.py 2011-03-23 15:06:49 +0000
@@ -104,6 +104,7 @@
     status = Column(String(30), nullable=False)
     is_public = Column(Boolean, nullable=False, default=False)
     location = Column(Text)
+    checksum = Column(String(32))
 
 
 class ImageProperty(BASE, ModelBase):
 
=== modified file 'glance/registry/server.py'
--- glance/registry/server.py 2011-02-25 16:52:12 +0000
+++ glance/registry/server.py 2011-03-23 15:06:49 +0000
@@ -14,8 +14,9 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
+
 """
-Parllax Image controller
+Reference implementation registry server WSGI controller
 """
 
 import json
@@ -31,6 +32,10 @@
 
 logger = logging.getLogger('glance.registry.server')
 
+DISPLAY_FIELDS_IN_INDEX = ['id', 'name', 'size',
+                           'disk_format', 'container_format',
+                           'checksum']
+
 
 class Controller(wsgi.Controller):
     """Controller for the reference implementation registry server"""
@@ -49,16 +54,24 @@
 
         Where image_list is a sequence of mappings::
 
-            {'id': image_id, 'name': image_name}
+            {
+            'id': <ID>,
+            'name': <NAME>,
+            'size': <SIZE>,
+            'disk_format': <DISK_FORMAT>,
+            'container_format': <CONTAINER_FORMAT>,
+            'checksum': <CHECKSUM>
+            }
 
         """
         images = db_api.image_get_all_public(None)
-        image_dicts = [dict(id=i['id'],
-                            name=i['name'],
-                            disk_format=i['disk_format'],
-                            container_format=i['container_format'],
-                            size=i['size']) for i in images]
-        return dict(images=image_dicts)
+        results = []
+        for image in images:
+            result = {}
+            for field in DISPLAY_FIELDS_IN_INDEX:
+                result[field] = image[field]
+            results.append(result)
+        return dict(images=results)
 
     def detail(self, req):
         """Return detailed information for all public, non-deleted images
 
=== modified file 'glance/server.py'
--- glance/server.py 2011-03-09 00:07:44 +0000
+++ glance/server.py 2011-03-23 15:06:49 +0000
@@ -29,6 +29,7 @@
2929
30"""30"""
3131
32import httplib
32import json33import json
33import logging34import logging
34import sys35import sys
@@ -68,8 +69,8 @@
68 GET /images/<ID> -- Return image data for image with id <ID>69 GET /images/<ID> -- Return image data for image with id <ID>
69 POST /images -- Store image data and return metadata about the70 POST /images -- Store image data and return metadata about the
70 newly-stored image71 newly-stored image
71 PUT /images/<ID> -- Update image metadata (not image data, since72 PUT /images/<ID> -- Update image metadata and/or upload image
72 image data is immutable once stored)73 data for a previously-reserved image
73 DELETE /images/<ID> -- Delete the image with id <ID>74 DELETE /images/<ID> -- Delete the image with id <ID>
74 """75 """
7576
@@ -82,6 +83,9 @@
8283
83 * id -- The opaque image identifier84 * id -- The opaque image identifier
84 * name -- The name of the image85 * name -- The name of the image
86 * disk_format -- The disk image format
87 * container_format -- The "container" format of the image
88 * checksum -- MD5 checksum of the image data
85 * size -- Size of image data in bytes89 * size -- Size of image data in bytes
8690
87 :param request: The WSGI/Webob Request object91 :param request: The WSGI/Webob Request object
@@ -92,6 +96,7 @@
92 'name': <NAME>,96 'name': <NAME>,
93 'disk_format': <DISK_FORMAT>,97 'disk_format': <DISK_FORMAT>,
94 'container_format': <CONTAINER_FORMAT>,98 'container_format': <CONTAINER_FORMAT>,
99 'checksum': <CHECKSUM>,
95 'size': <SIZE>}, ...100 'size': <SIZE>}, ...
96 ]}101 ]}
97 """102 """
@@ -111,6 +116,7 @@
111 'size': <SIZE>,116 'size': <SIZE>,
112 'disk_format': <DISK_FORMAT>,117 'disk_format': <DISK_FORMAT>,
113 'container_format': <CONTAINER_FORMAT>,118 'container_format': <CONTAINER_FORMAT>,
119 'checksum': <CHECKSUM>,
114 'store': <STORE>,120 'store': <STORE>,
115 'status': <STATUS>,121 'status': <STATUS>,
116 'created_at': <TIMESTAMP>,122 'created_at': <TIMESTAMP>,
@@ -136,6 +142,8 @@
136142
137 res = Response(request=req)143 res = Response(request=req)
138 utils.inject_image_meta_into_headers(res, image)144 utils.inject_image_meta_into_headers(res, image)
145 res.headers.add('Location', "/images/%s" % id)
146 res.headers.add('ETag', image['checksum'])
139147
140 return req.get_response(res)148 return req.get_response(res)
141149
@@ -165,6 +173,8 @@
165 res = Response(app_iter=image_iterator(),173 res = Response(app_iter=image_iterator(),
166 content_type="text/plain")174 content_type="text/plain")
167 utils.inject_image_meta_into_headers(res, image)175 utils.inject_image_meta_into_headers(res, image)
176 res.headers.add('Location', "/images/%s" % id)
177 res.headers.add('ETag', image['checksum'])
168 return req.get_response(res)178 return req.get_response(res)
169179
170 def _reserve(self, req):180 def _reserve(self, req):
@@ -225,36 +235,45 @@
225235
226 store = self.get_store_or_400(req, store_name)236 store = self.get_store_or_400(req, store_name)
227237
228 image_meta['status'] = 'saving'
229 image_id = image_meta['id']238 image_id = image_meta['id']
230 logger.debug("Updating image metadata for image %s"239 logger.debug("Setting image %s to status 'saving'"
231 % image_id)240 % image_id)
232 registry.update_image_metadata(self.options,241 registry.update_image_metadata(self.options, image_id,
233 image_meta['id'],242 {'status': 'saving'})
234 image_meta)
235
236 try:243 try:
237 logger.debug("Uploading image data for image %(image_id)s "244 logger.debug("Uploading image data for image %(image_id)s "
238 "to %(store_name)s store" % locals())245 "to %(store_name)s store" % locals())
239 location, size = store.add(image_meta['id'],246 location, size, checksum = store.add(image_meta['id'],
240 req.body_file,247 req.body_file,
241 self.options)248 self.options)
242 # If size returned from store is different from size249
243 # already stored in registry, update the registry with250 # Verify any supplied checksum value matches checksum
244 # the new size of the image251 # returned from store when adding image
245 if image_meta.get('size', 0) != size:252 supplied_checksum = image_meta.get('checksum')
246 image_meta['size'] = size253 if supplied_checksum and supplied_checksum != checksum:
247 logger.debug("Updating image metadata for image %s"254 msg = ("Supplied checksum (%(supplied_checksum)s) and "
248 % image_id)255 "checksum generated from uploaded image "
249 registry.update_image_metadata(self.options,256 "(%(checksum)s) did not match. Setting image "
250 image_meta['id'],257 "status to 'killed'.") % locals()
251 image_meta)258 self._safe_kill(req, image_id)
259 raise HTTPBadRequest(msg, content_type="text/plain",
260 request=req)
261
262 # Update the database with the checksum returned
263 # from the backend store
264 logger.debug("Updating image %(image_id)s data. "
265 "Checksum set to %(checksum)s, size set "
266 "to %(size)d" % locals())
267 registry.update_image_metadata(self.options, image_id,
268 {'checksum': checksum,
269 'size': size})
270
252 return location271 return location
253 except exception.Duplicate, e:272 except exception.Duplicate, e:
254 logger.error("Error adding image to store: %s", str(e))273 logger.error("Error adding image to store: %s", str(e))
255 raise HTTPConflict(str(e), request=req)274 raise HTTPConflict(str(e), request=req)
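The verification step added to `_upload()` boils down to: compute the digest while storing, then compare it against any client-supplied `X-Image-Meta-Checksum`. A standalone sketch of that comparison (`verify_checksum` is a hypothetical helper name; the branch itself raises webob's `HTTPBadRequest` and kills the image rather than raising `ValueError`):

```python
import hashlib


def verify_checksum(data, supplied_checksum=None):
    """Return the MD5 hex digest of data, raising ValueError when an
    optional client-supplied checksum does not match it."""
    checksum = hashlib.md5(data).hexdigest()
    if supplied_checksum and supplied_checksum != checksum:
        # In the branch, this is where the image status is set to
        # 'killed' and an HTTP 400 is returned to the client.
        raise ValueError("Supplied checksum (%s) and checksum generated "
                         "from uploaded image (%s) did not match"
                         % (supplied_checksum, checksum))
    return checksum
```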
256275
257 def _activate(self, req, image_meta, location):276 def _activate(self, req, image_id, location):
258 """277 """
259 Sets the image status to `active` and the image's location278 Sets the image status to `active` and the image's location
260 attribute.279 attribute.
@@ -263,25 +282,25 @@
263 :param image_meta: Mapping of metadata about image282 :param image_meta: Mapping of metadata about image
264 :param location: Location of where Glance stored this image283 :param location: Location of where Glance stored this image
265 """284 """
285 image_meta = {}
266 image_meta['location'] = location286 image_meta['location'] = location
267 image_meta['status'] = 'active'287 image_meta['status'] = 'active'
268 registry.update_image_metadata(self.options,288 return registry.update_image_metadata(self.options,
269 image_meta['id'],289 image_id,
270 image_meta)290 image_meta)
271291
272 def _kill(self, req, image_meta):292 def _kill(self, req, image_id):
273 """293 """
274 Marks the image status to `killed`294 Marks the image status to `killed`
275295
276 :param request: The WSGI/Webob Request object296 :param request: The WSGI/Webob Request object
277 :param image_meta: Mapping of metadata about image297 :param image_id: Opaque image identifier
278 """298 """
279 image_meta['status'] = 'killed'
280 registry.update_image_metadata(self.options,299 registry.update_image_metadata(self.options,
281 image_meta['id'],300 image_id,
282 image_meta)301 {'status': 'killed'})
283302
284 def _safe_kill(self, req, image_meta):303 def _safe_kill(self, req, image_id):
285 """304 """
286 Mark image killed without raising exceptions if it fails.305 Mark image killed without raising exceptions if it fails.
287306
@@ -289,12 +308,13 @@
289 not raise itself, rather it should just log its error.308 not raise itself, rather it should just log its error.
290309
291 :param request: The WSGI/Webob Request object310 :param request: The WSGI/Webob Request object
311 :param image_id: Opaque image identifier
292 """312 """
293 try:313 try:
294 self._kill(req, image_meta)314 self._kill(req, image_id)
295 except Exception, e:315 except Exception, e:
296 logger.error("Unable to kill image %s: %s",316 logger.error("Unable to kill image %s: %s",
297 image_meta['id'], repr(e))317 image_id, repr(e))
298318
299 def _upload_and_activate(self, req, image_meta):319 def _upload_and_activate(self, req, image_meta):
300 """320 """
@@ -304,13 +324,16 @@
304324
305 :param request: The WSGI/Webob Request object325 :param request: The WSGI/Webob Request object
306 :param image_meta: Mapping of metadata about image326 :param image_meta: Mapping of metadata about image
327
328 :retval Mapping of updated image data
307 """329 """
308 try:330 try:
331 image_id = image_meta['id']
309 location = self._upload(req, image_meta)332 location = self._upload(req, image_meta)
310 self._activate(req, image_meta, location)333 return self._activate(req, image_id, location)
311 except: # unqualified b/c we're re-raising it334 except: # unqualified b/c we're re-raising it
312 exc_type, exc_value, exc_traceback = sys.exc_info()335 exc_type, exc_value, exc_traceback = sys.exc_info()
313 self._safe_kill(req, image_meta)336 self._safe_kill(req, image_id)
314 # NOTE(sirp): _safe_kill uses httplib which, in turn, uses337 # NOTE(sirp): _safe_kill uses httplib which, in turn, uses
315 # Eventlet's GreenSocket. Eventlet subsequently clears exceptions338 # Eventlet's GreenSocket. Eventlet subsequently clears exceptions
316 # by calling `sys.exc_clear()`.339 # by calling `sys.exc_clear()`.
@@ -318,8 +341,8 @@
318 # This is why we can't use a raise with no arguments here: our341 # This is why we can't use a raise with no arguments here: our
319 # exception context was destroyed by Eventlet. To work around342 # exception context was destroyed by Eventlet. To work around
320 # this, we need to 'memorize' the exception context, and then343 # this, we need to 'memorize' the exception context, and then
321 # re-raise using 3-arg form after Eventlet has run344 # re-raise here.
322 raise exc_type, exc_value, exc_traceback345 raise exc_type(exc_traceback)
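The NOTE(sirp) comment explains why `sys.exc_info()` is memorized before `_safe_kill` runs: Eventlet's cleanup clears the active exception context. A Python 3 analogue of that memorize-and-re-raise pattern, as a sketch (`memorize_and_reraise` is an illustrative name; `with_traceback` replaces the Python 2 three-argument raise used in the branch):

```python
import sys


def memorize_and_reraise(cleanup):
    """Capture the active exception, run a best-effort cleanup, then
    re-raise the original exception with its traceback intact."""
    # Memorize the context first, in case cleanup() clears it.
    exc_type, exc_value, exc_traceback = sys.exc_info()
    cleanup()
    raise exc_value.with_traceback(exc_traceback)
```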
323346
324 def create(self, req):347 def create(self, req):
325 """348 """
@@ -354,19 +377,21 @@
354 image data.377 image data.
355 """378 """
356 image_meta = self._reserve(req)379 image_meta = self._reserve(req)
380 image_id = image_meta['id']
357381
358 if utils.has_body(req):382 if utils.has_body(req):
359 self._upload_and_activate(req, image_meta)383 image_meta = self._upload_and_activate(req, image_meta)
360 else:384 else:
361 if 'x-image-meta-location' in req.headers:385 if 'x-image-meta-location' in req.headers:
362 location = req.headers['x-image-meta-location']386 location = req.headers['x-image-meta-location']
363 self._activate(req, image_meta, location)387 image_meta = self._activate(req, image_id, location)
364388
365 # APP states we should return a Location: header with the edit389 # APP states we should return a Location: header with the edit
366 # URI of the resource newly-created.390 # URI of the resource newly-created.
367 res = Response(request=req, body=json.dumps(dict(image=image_meta)),391 res = Response(request=req, body=json.dumps(dict(image=image_meta)),
368 content_type="text/plain")392 status=httplib.CREATED, content_type="text/plain")
369 res.headers.add('Location', "/images/%s" % image_meta['id'])393 res.headers.add('Location', "/images/%s" % image_id)
394 res.headers.add('ETag', image_meta['checksum'])
370395
371 return req.get_response(res)396 return req.get_response(res)
372397
@@ -393,9 +418,14 @@
393 id,418 id,
394 new_image_meta)419 new_image_meta)
395 if has_body:420 if has_body:
396 self._upload_and_activate(req, image_meta)421 image_meta = self._upload_and_activate(req, image_meta)
397422
398 return dict(image=image_meta)423 res = Response(request=req,
424 body=json.dumps(dict(image=image_meta)),
425 content_type="text/plain")
426 res.headers.add('Location', "/images/%s" % id)
427 res.headers.add('ETag', image_meta['checksum'])
428 return res
399 except exception.Invalid, e:429 except exception.Invalid, e:
400 msg = ("Failed to update image metadata. Got error: %(e)s"430 msg = ("Failed to update image metadata. Got error: %(e)s"
401 % locals())431 % locals())
402432
=== modified file 'glance/store/__init__.py'
--- glance/store/__init__.py 2011-02-24 02:50:24 +0000
+++ glance/store/__init__.py 2011-03-23 15:06:49 +0000
@@ -141,40 +141,3 @@
141 authurl = "https://%s" % '/'.join(path_parts)141 authurl = "https://%s" % '/'.join(path_parts)
142142
143 return user, key, authurl, container, obj143 return user, key, authurl, container, obj
144
145
146def add_options(parser):
147 """
148 Adds any configuration options that each store might
149 have.
150
151 :param parser: An optparse.OptionParser object
152 :retval None
153 """
154 # TODO(jaypipes): Remove these imports...
155 from glance.store.http import HTTPBackend
156 from glance.store.s3 import S3Backend
157 from glance.store.swift import SwiftBackend
158 from glance.store.filesystem import FilesystemBackend
159
160 help_text = "The following configuration options are specific to the "\
161 "Glance image store."
162
163 DEFAULT_STORE_CHOICES = ['file', 'swift', 's3']
164 group = optparse.OptionGroup(parser, "Image Store Options", help_text)
165 group.add_option('--default-store', metavar="STORE",
166 default="file",
167 choices=DEFAULT_STORE_CHOICES,
168 help="The backend store that Glance will use to store "
169 "virtual machine images to. Choices: ('%s') "
170 "Default: %%default" % "','".join(DEFAULT_STORE_CHOICES))
171
172 backend_classes = [FilesystemBackend,
173 HTTPBackend,
174 SwiftBackend,
175 S3Backend]
176 for backend_class in backend_classes:
177 if hasattr(backend_class, 'add_options'):
178 backend_class.add_options(group)
179
180 parser.add_option_group(group)
181144
=== modified file 'glance/store/filesystem.py'
--- glance/store/filesystem.py 2011-02-27 20:54:29 +0000
+++ glance/store/filesystem.py 2011-03-23 15:06:49 +0000
@@ -19,6 +19,7 @@
19A simple filesystem-backed store19A simple filesystem-backed store
20"""20"""
2121
22import hashlib
22import logging23import logging
23import os24import os
24import urlparse25import urlparse
@@ -110,9 +111,10 @@
110 :param data: The image data to write, as a file-like object111 :param data: The image data to write, as a file-like object
111 :param options: Conf mapping112 :param options: Conf mapping
112113
113 :retval Tuple with (location, size)114 :retval Tuple with (location, size, checksum)
114 The location that was written, with file:// scheme prepended115 The location that was written, with file:// scheme prepended,
115 and the size in bytes of the data written116 the size in bytes of the data written, and the checksum of
117 the image added.
116 """118 """
117 datadir = options['filesystem_store_datadir']119 datadir = options['filesystem_store_datadir']
118120
@@ -127,6 +129,7 @@
127 raise exception.Duplicate("Image file %s already exists!"129 raise exception.Duplicate("Image file %s already exists!"
128 % filepath)130 % filepath)
129131
132 checksum = hashlib.md5()
130 bytes_written = 0133 bytes_written = 0
131 with open(filepath, 'wb') as f:134 with open(filepath, 'wb') as f:
132 while True:135 while True:
@@ -134,23 +137,11 @@
134 if not buf:137 if not buf:
135 break138 break
136 bytes_written += len(buf)139 bytes_written += len(buf)
140 checksum.update(buf)
137 f.write(buf)141 f.write(buf)
138142
139 logger.debug("Wrote %(bytes_written)d bytes to %(filepath)s"143 checksum_hex = checksum.hexdigest()
140 % locals())144
141 return ('file://%s' % filepath, bytes_written)145 logger.debug("Wrote %(bytes_written)d bytes to %(filepath)s with "
142146 "checksum %(checksum_hex)s" % locals())
143 @classmethod147 return ('file://%s' % filepath, bytes_written, checksum_hex)
144 def add_options(cls, parser):
145 """
146 Adds specific configuration options for this store
147
148 :param parser: An optparse.OptionParser object
149 :retval None
150 """
151
152 parser.add_option('--filesystem-store-datadir', metavar="DIR",
153 default="/var/lib/glance/images/",
154 help="Location to write image data. This directory "
155 "should be writeable by the user that runs the "
156 "glance-api program. Default: %default")
157148
=== modified file 'glance/store/swift.py'
--- glance/store/swift.py 2011-03-16 16:45:01 +0000
+++ glance/store/swift.py 2011-03-23 15:06:49 +0000
@@ -161,7 +161,7 @@
161 # header keys are lowercased by Swift161 # header keys are lowercased by Swift
162 if 'content-length' in resp_headers:162 if 'content-length' in resp_headers:
163 size = int(resp_headers['content-length'])163 size = int(resp_headers['content-length'])
164 return (location, size)164 return (location, size, obj_etag)
165 except swift_client.ClientException, e:165 except swift_client.ClientException, e:
166 if e.http_status == httplib.CONFLICT:166 if e.http_status == httplib.CONFLICT:
167 raise exception.Duplicate("Swift already has an image at "167 raise exception.Duplicate("Swift already has an image at "
168168
=== modified file 'tests/stubs.py'
--- tests/stubs.py 2011-03-14 19:10:24 +0000
+++ tests/stubs.py 2011-03-23 15:06:49 +0000
@@ -282,6 +282,7 @@
282 'updated_at': datetime.datetime.utcnow(),282 'updated_at': datetime.datetime.utcnow(),
283 'deleted_at': None,283 'deleted_at': None,
284 'deleted': False,284 'deleted': False,
285 'checksum': None,
285 'size': 13,286 'size': 13,
286 'location': "swift://user:passwd@acct/container/obj.tar.0",287 'location': "swift://user:passwd@acct/container/obj.tar.0",
287 'properties': [{'key': 'type',288 'properties': [{'key': 'type',
@@ -297,6 +298,7 @@
297 'updated_at': datetime.datetime.utcnow(),298 'updated_at': datetime.datetime.utcnow(),
298 'deleted_at': None,299 'deleted_at': None,
299 'deleted': False,300 'deleted': False,
301 'checksum': None,
300 'size': 19,302 'size': 19,
301 'location': "file:///tmp/glance-tests/2",303 'location': "file:///tmp/glance-tests/2",
302 'properties': []}]304 'properties': []}]
@@ -316,6 +318,7 @@
316 glance.registry.db.api.validate_image(values)318 glance.registry.db.api.validate_image(values)
317319
318 values['size'] = values.get('size', 0)320 values['size'] = values.get('size', 0)
321 values['checksum'] = values.get('checksum')
319 values['deleted'] = False322 values['deleted'] = False
320 values['properties'] = values.get('properties', {})323 values['properties'] = values.get('properties', {})
321 values['created_at'] = datetime.datetime.utcnow()324 values['created_at'] = datetime.datetime.utcnow()
@@ -348,6 +351,7 @@
348 copy_image.update(values)351 copy_image.update(values)
349 glance.registry.db.api.validate_image(copy_image)352 glance.registry.db.api.validate_image(copy_image)
350 props = []353 props = []
354 orig_properties = image['properties']
351355
352 if 'properties' in values.keys():356 if 'properties' in values.keys():
353 for k, v in values['properties'].items():357 for k, v in values['properties'].items():
@@ -360,7 +364,8 @@
360 p['deleted_at'] = None364 p['deleted_at'] = None
361 props.append(p)365 props.append(p)
362366
363 values['properties'] = props367 orig_properties = orig_properties + props
368 values['properties'] = orig_properties
364369
365 image.update(values)370 image.update(values)
366 return image371 return image
367372
=== modified file 'tests/unit/test_api.py'
--- tests/unit/test_api.py 2011-03-14 19:10:24 +0000
+++ tests/unit/test_api.py 2011-03-23 15:06:49 +0000
@@ -15,8 +15,9 @@
15# License for the specific language governing permissions and limitations15# License for the specific language governing permissions and limitations
16# under the License.16# under the License.
1717
18import hashlib
19import httplib
18import os20import os
19import httplib
20import json21import json
21import unittest22import unittest
2223
@@ -52,7 +53,9 @@
5253
53 """54 """
54 fixture = {'id': 2,55 fixture = {'id': 2,
55 'name': 'fake image #2'}56 'name': 'fake image #2',
57 'size': 19,
58 'checksum': None}
56 req = webob.Request.blank('/')59 req = webob.Request.blank('/')
57 res = req.get_response(self.api)60 res = req.get_response(self.api)
58 res_dict = json.loads(res.body)61 res_dict = json.loads(res.body)
@@ -70,7 +73,9 @@
7073
71 """74 """
72 fixture = {'id': 2,75 fixture = {'id': 2,
73 'name': 'fake image #2'}76 'name': 'fake image #2',
77 'size': 19,
78 'checksum': None}
74 req = webob.Request.blank('/images')79 req = webob.Request.blank('/images')
75 res = req.get_response(self.api)80 res = req.get_response(self.api)
76 res_dict = json.loads(res.body)81 res_dict = json.loads(res.body)
@@ -90,6 +95,8 @@
90 fixture = {'id': 2,95 fixture = {'id': 2,
91 'name': 'fake image #2',96 'name': 'fake image #2',
92 'is_public': True,97 'is_public': True,
98 'size': 19,
99 'checksum': None,
93 'disk_format': 'vhd',100 'disk_format': 'vhd',
94 'container_format': 'ovf',101 'container_format': 'ovf',
95 'status': 'active'}102 'status': 'active'}
@@ -396,7 +403,7 @@
396 for k, v in fixture_headers.iteritems():403 for k, v in fixture_headers.iteritems():
397 req.headers[k] = v404 req.headers[k] = v
398 res = req.get_response(self.api)405 res = req.get_response(self.api)
399 self.assertEquals(res.status_int, httplib.OK)406 self.assertEquals(res.status_int, httplib.CREATED)
400407
401 res_body = json.loads(res.body)['image']408 res_body = json.loads(res.body)['image']
402 self.assertEquals('queued', res_body['status'])409 self.assertEquals('queued', res_body['status'])
@@ -431,7 +438,7 @@
431 req.headers['Content-Type'] = 'application/octet-stream'438 req.headers['Content-Type'] = 'application/octet-stream'
432 req.body = "chunk00000remainder"439 req.body = "chunk00000remainder"
433 res = req.get_response(self.api)440 res = req.get_response(self.api)
434 self.assertEquals(res.status_int, 200)441 self.assertEquals(res.status_int, httplib.CREATED)
435442
436 res_body = json.loads(res.body)['image']443 res_body = json.loads(res.body)['image']
437 self.assertEquals(res_body['location'],444 self.assertEquals(res_body['location'],
@@ -445,6 +452,97 @@
445 "res.headerlist = %r" % res.headerlist)452 "res.headerlist = %r" % res.headerlist)
446 self.assertTrue('/images/3' in res.headers['location'])453 self.assertTrue('/images/3' in res.headers['location'])
447454
455 def test_image_is_checksummed(self):
456 """Test that the image contents are checksummed properly"""
457 fixture_headers = {'x-image-meta-store': 'file',
458 'x-image-meta-disk-format': 'vhd',
459 'x-image-meta-container-format': 'ovf',
460 'x-image-meta-name': 'fake image #3'}
461 image_contents = "chunk00000remainder"
462 image_checksum = hashlib.md5(image_contents).hexdigest()
463
464 req = webob.Request.blank("/images")
465 req.method = 'POST'
466 for k, v in fixture_headers.iteritems():
467 req.headers[k] = v
468
469 req.headers['Content-Type'] = 'application/octet-stream'
470 req.body = image_contents
471 res = req.get_response(self.api)
472 self.assertEquals(res.status_int, httplib.CREATED)
473
474 res_body = json.loads(res.body)['image']
475 self.assertEquals(res_body['location'],
476 'file:///tmp/glance-tests/3')
477 self.assertEquals(image_checksum, res_body['checksum'],
478 "Mismatched checksum. Expected %s, got %s" %
479 (image_checksum, res_body['checksum']))
480
481 def test_etag_equals_checksum_header(self):
482 """Test that the ETag header matches the x-image-meta-checksum"""
483 fixture_headers = {'x-image-meta-store': 'file',
484 'x-image-meta-disk-format': 'vhd',
485 'x-image-meta-container-format': 'ovf',
486 'x-image-meta-name': 'fake image #3'}
487 image_contents = "chunk00000remainder"
488 image_checksum = hashlib.md5(image_contents).hexdigest()
489
490 req = webob.Request.blank("/images")
491 req.method = 'POST'
492 for k, v in fixture_headers.iteritems():
493 req.headers[k] = v
494
495 req.headers['Content-Type'] = 'application/octet-stream'
496 req.body = image_contents
497 res = req.get_response(self.api)
498 self.assertEquals(res.status_int, httplib.CREATED)
499
500 # HEAD the image and check the ETag equals the checksum header...
501 expected_headers = {'x-image-meta-checksum': image_checksum,
502 'etag': image_checksum}
503 req = webob.Request.blank("/images/3")
504 req.method = 'HEAD'
505 res = req.get_response(self.api)
506 self.assertEquals(res.status_int, 200)
507
508 for key in expected_headers.keys():
509 self.assertTrue(key in res.headers,
510 "required header '%s' missing from "
511 "returned headers" % key)
512 for key, value in expected_headers.iteritems():
513 self.assertEquals(value, res.headers[key])
514
515 def test_bad_checksum_kills_image(self):
516 """Test that a checksum mismatch sets the image status to 'killed'"""
517 image_contents = "chunk00000remainder"
518 bad_checksum = hashlib.md5("invalid").hexdigest()
519 fixture_headers = {'x-image-meta-store': 'file',
520 'x-image-meta-disk-format': 'vhd',
521 'x-image-meta-container-format': 'ovf',
522 'x-image-meta-name': 'fake image #3',
523 'x-image-meta-checksum': bad_checksum}
524
525 req = webob.Request.blank("/images")
526 req.method = 'POST'
527 for k, v in fixture_headers.iteritems():
528 req.headers[k] = v
529
530 req.headers['Content-Type'] = 'application/octet-stream'
531 req.body = image_contents
532 res = req.get_response(self.api)
533 self.assertEquals(res.status_int, webob.exc.HTTPBadRequest.code)
534
535 # Test the image was killed...
536 expected_headers = {'x-image-meta-id': '3',
537 'x-image-meta-status': 'killed'}
538 req = webob.Request.blank("/images/3")
539 req.method = 'HEAD'
540 res = req.get_response(self.api)
541 self.assertEquals(res.status_int, 200)
542
543 for key, value in expected_headers.iteritems():
544 self.assertEquals(value, res.headers[key])
545
448 def test_image_meta(self):546 def test_image_meta(self):
449 """Test for HEAD /images/<ID>"""547 """Test for HEAD /images/<ID>"""
450 expected_headers = {'x-image-meta-id': '2',548 expected_headers = {'x-image-meta-id': '2',
451549
=== modified file 'tests/unit/test_filesystem_store.py'
--- tests/unit/test_filesystem_store.py 2011-02-27 20:54:29 +0000
+++ tests/unit/test_filesystem_store.py 2011-03-23 15:06:49 +0000
@@ -18,6 +18,7 @@
18"""Tests the filesystem backend store"""18"""Tests the filesystem backend store"""
1919
20import StringIO20import StringIO
21import hashlib
21import unittest22import unittest
22import urlparse23import urlparse
2324
@@ -27,6 +28,11 @@
27from glance.store.filesystem import FilesystemBackend, ChunkedFile28from glance.store.filesystem import FilesystemBackend, ChunkedFile
28from tests import stubs29from tests import stubs
2930
31FILESYSTEM_OPTIONS = {
32 'verbose': True,
33 'debug': True,
34 'filesystem_store_datadir': stubs.FAKE_FILESYSTEM_ROOTDIR}
35
3036
31class TestFilesystemBackend(unittest.TestCase):37class TestFilesystemBackend(unittest.TestCase):
3238
@@ -75,17 +81,17 @@
75 expected_image_id = 4281 expected_image_id = 42
76 expected_file_size = 1024 * 5 # 5K82 expected_file_size = 1024 * 5 # 5K
77 expected_file_contents = "*" * expected_file_size83 expected_file_contents = "*" * expected_file_size
84 expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
78 expected_location = "file://%s/%s" % (stubs.FAKE_FILESYSTEM_ROOTDIR,85 expected_location = "file://%s/%s" % (stubs.FAKE_FILESYSTEM_ROOTDIR,
79 expected_image_id)86 expected_image_id)
80 image_file = StringIO.StringIO(expected_file_contents)87 image_file = StringIO.StringIO(expected_file_contents)
81 options = {'verbose': True,
82 'debug': True,
83 'filesystem_store_datadir': stubs.FAKE_FILESYSTEM_ROOTDIR}
8488
85 location, size = FilesystemBackend.add(42, image_file, options)89 location, size, checksum = FilesystemBackend.add(42, image_file,
90 FILESYSTEM_OPTIONS)
8691
87 self.assertEquals(expected_location, location)92 self.assertEquals(expected_location, location)
88 self.assertEquals(expected_file_size, size)93 self.assertEquals(expected_file_size, size)
94 self.assertEquals(expected_checksum, checksum)
8995
90 url_pieces = urlparse.urlparse("file:///tmp/glance-tests/42")96 url_pieces = urlparse.urlparse("file:///tmp/glance-tests/42")
91 new_image_file = FilesystemBackend.get(url_pieces)97 new_image_file = FilesystemBackend.get(url_pieces)
@@ -110,7 +116,7 @@
110 'filesystem_store_datadir': stubs.FAKE_FILESYSTEM_ROOTDIR}116 'filesystem_store_datadir': stubs.FAKE_FILESYSTEM_ROOTDIR}
111 self.assertRaises(exception.Duplicate,117 self.assertRaises(exception.Duplicate,
112 FilesystemBackend.add,118 FilesystemBackend.add,
113 2, image_file, options)119 2, image_file, FILESYSTEM_OPTIONS)
114120
115 def test_delete(self):121 def test_delete(self):
116 """122 """
117123
=== added file 'tests/unit/test_migrations.conf'
--- tests/unit/test_migrations.conf 1970-01-01 00:00:00 +0000
+++ tests/unit/test_migrations.conf 2011-03-23 15:06:49 +0000
@@ -0,0 +1,5 @@
1[DEFAULT]
2# Set up any number of migration data stores you want, one per line.
3# The "name" used in the test is the config variable key.
4sqlite=sqlite:///test_migrations.db
5mysql=mysql://root:@localhost/test_migrations
06
=== modified file 'tests/unit/test_migrations.py'
--- tests/unit/test_migrations.py 2011-03-16 16:11:56 +0000
+++ tests/unit/test_migrations.py 2011-03-23 15:06:49 +0000
@@ -15,39 +15,241 @@
15# License for the specific language governing permissions and limitations15# License for the specific language governing permissions and limitations
16# under the License.16# under the License.
1717
18"""
19Tests for database migrations. This test case reads the configuration
20file /tests/unit/test_migrations.conf for database connection settings
21to use in the tests. For each connection found in the config file,
22the test case runs a series of test cases to ensure that migrations work
23properly both upgrading and downgrading, and that no data loss occurs
24if possible.
25"""
26
27import ConfigParser
28import datetime
18import os29import os
19import unittest30import unittest
2031import urlparse
32
33from migrate.versioning.repository import Repository
34from sqlalchemy import *
35from sqlalchemy.pool import NullPool
36
37from glance.common import exception
21import glance.registry.db.migration as migration_api38import glance.registry.db.migration as migration_api
22import glance.registry.db.api as api39from tests.unit.test_misc import execute
23import glance.common.config as config
2440
2541
26class TestMigrations(unittest.TestCase):42class TestMigrations(unittest.TestCase):
43
27 """Test sqlalchemy-migrate migrations"""44 """Test sqlalchemy-migrate migrations"""
2845
46 TEST_DATABASES = {}
47 CONFIG_FILE_PATH = os.path.join('tests', 'unit',
48 'test_migrations.conf')
49 REPOSITORY_PATH = os.path.join('glance', 'registry', 'db', 'migrate_repo')
50 REPOSITORY = Repository(REPOSITORY_PATH)
51
52 def __init__(self, *args, **kwargs):
53 super(TestMigrations, self).__init__(*args, **kwargs)
54
29 def setUp(self):55 def setUp(self):
-        self.db_path = "glance_test_migration.sqlite"
-        sql_connection = os.environ.get('GLANCE_SQL_CONNECTION',
-                                        "sqlite:///%s" % self.db_path)
-
-        self.options = dict(sql_connection=sql_connection,
-                            verbose=False)
-        config.setup_logging(self.options, {})
-
-    def tearDown(self):
-        api.configure_db(self.options)
-        api.unregister_models()
-
-    def test_db_sync_downgrade_then_upgrade(self):
-        migration_api.db_sync(self.options)
-
-        latest = migration_api.db_version(self.options)
-
-        migration_api.downgrade(self.options, latest - 1)
-        cur_version = migration_api.db_version(self.options)
-        self.assertEqual(cur_version, latest - 1)
-
-        migration_api.upgrade(self.options, cur_version + 1)
-        cur_version = migration_api.db_version(self.options)
-        self.assertEqual(cur_version, latest)
+        # Load test databases from the config file. Only do this
+        # once. No need to re-run this on each test...
+        if not TestMigrations.TEST_DATABASES:
+            if os.path.exists(TestMigrations.CONFIG_FILE_PATH):
+                cp = ConfigParser.RawConfigParser()
+                try:
+                    cp.read(TestMigrations.CONFIG_FILE_PATH)
+                    defaults = cp.defaults()
+                    for key, value in defaults.items():
+                        TestMigrations.TEST_DATABASES[key] = value
+                except ConfigParser.ParsingError, e:
+                    print ("Failed to read test_migrations.conf config file. "
+                           "Got error: %s" % e)
+
+        self.engines = {}
+        for key, value in TestMigrations.TEST_DATABASES.items():
+            self.engines[key] = create_engine(value, poolclass=NullPool)
+
+        # We start each test case with a completely blank slate.
+        self._reset_databases()
+
+    def tearDown(self):
+        # We destroy the test data store between each test case,
+        # and recreate it, which ensures that we have no side-effects
+        # from the tests
+        self._reset_databases()
+
+    def _reset_databases(self):
+        for key, engine in self.engines.items():
+            conn_string = TestMigrations.TEST_DATABASES[key]
+            conn_pieces = urlparse.urlparse(conn_string)
+            if conn_string.startswith('sqlite'):
+                # We can just delete the SQLite database, which is
+                # the easiest and cleanest solution
+                db_path = conn_pieces.path.strip('/')
+                if os.path.exists(db_path):
+                    os.unlink(db_path)
+                # No need to recreate the SQLite DB. SQLite will
+                # create it for us if it's not there...
+            elif conn_string.startswith('mysql'):
+                # We can execute the MySQL client to destroy and re-create
+                # the MySQL database, which is easier and less error-prone
+                # than using SQLAlchemy to do this via MetaData...trust me.
+                database = conn_pieces.path.strip('/')
+                loc_pieces = conn_pieces.netloc.split('@')
+                host = loc_pieces[1]
+                auth_pieces = loc_pieces[0].split(':')
+                user = auth_pieces[0]
+                password = ""
+                if len(auth_pieces) > 1:
+                    if auth_pieces[1].strip():
+                        password = "-p%s" % auth_pieces[1]
+                sql = ("drop database if exists %(database)s; "
+                       "create database %(database)s;") % locals()
+                cmd = ("mysql -u%(user)s %(password)s -h%(host)s "
+                       "-e\"%(sql)s\"") % locals()
+                exitcode, out, err = execute(cmd)
+                self.assertEqual(0, exitcode)
+
+    def test_walk_versions(self):
+        """
+        Walks all version scripts for each tested database, ensuring
+        that there are no errors in the version scripts for each engine
+        """
+        for key, engine in self.engines.items():
+            options = {'sql_connection': TestMigrations.TEST_DATABASES[key]}
+            self._walk_versions(options)
+
+    def _walk_versions(self, options):
+        # Determine latest version script from the repo, then
+        # upgrade from 1 through to the latest, with no data
+        # in the databases. This just checks that the schema itself
+        # upgrades successfully.
+
+        # Assert we are not under version control...
+        self.assertRaises(exception.DatabaseMigrationError,
+                          migration_api.db_version,
+                          options)
+        # Place the database under version control
+        migration_api.version_control(options)
+
+        cur_version = migration_api.db_version(options)
+        self.assertEqual(0, cur_version)
+
+        for version in xrange(1, TestMigrations.REPOSITORY.latest + 1):
+            migration_api.upgrade(options, version)
+            cur_version = migration_api.db_version(options)
+            self.assertEqual(cur_version, version)
+
+        # Now walk it back down to 0 from the latest, testing
+        # the downgrade paths.
+        for version in reversed(
+                xrange(0, TestMigrations.REPOSITORY.latest)):
+            migration_api.downgrade(options, version)
+            cur_version = migration_api.db_version(options)
+            self.assertEqual(cur_version, version)
+
+    def test_no_data_loss_2_to_3_to_2(self):
+        """
+        Here, we test that in the case when we moved a column "type" from the
+        base images table to be records in the image_properties table, that
+        we don't lose any data during the migration. Similarly, we test that
+        on downgrade, we don't lose any data, as the records are moved from
+        the image_properties table back into the base images table.
+        """
+        for key, engine in self.engines.items():
+            options = {'sql_connection': TestMigrations.TEST_DATABASES[key]}
+            self._no_data_loss_2_to_3_to_2(engine, options)
+
+    def _no_data_loss_2_to_3_to_2(self, engine, options):
+        migration_api.version_control(options)
+        migration_api.upgrade(options, 2)
+
+        cur_version = migration_api.db_version(options)
+        self.assertEquals(2, cur_version)
+
+        # We are now on version 2. Check that the images table
+        # still contains the type column...
+
+        images_table = Table('images', MetaData(), autoload=True,
+                             autoload_with=engine)
+
+        image_properties_table = Table('image_properties', MetaData(),
+                                       autoload=True,
+                                       autoload_with=engine)
+
+        self.assertTrue('type' in images_table.c,
+                        "'type' column not found in images table columns! "
+                        "images table columns: %s"
+                        % images_table.c.keys())
+
+        conn = engine.connect()
+        sel = select([func.count("*")], from_obj=[images_table])
+        orig_num_images = conn.execute(sel).scalar()
+        sel = select([func.count("*")], from_obj=[image_properties_table])
+        orig_num_image_properties = conn.execute(sel).scalar()
+
+        now = datetime.datetime.now()
+        inserter = images_table.insert()
+        conn.execute(inserter, [
+            {'deleted': False, 'created_at': now,
+             'updated_at': now, 'type': 'kernel',
+             'status': 'active', 'is_public': True},
+            {'deleted': False, 'created_at': now,
+             'updated_at': now, 'type': 'ramdisk',
+             'status': 'active', 'is_public': True}])
+
+        sel = select([func.count("*")], from_obj=[images_table])
+        num_images = conn.execute(sel).scalar()
+        self.assertEqual(orig_num_images + 2, num_images)
+        conn.close()
+
+        # Now let's upgrade to 3. This should move the type column
+        # to the image_properties table as type properties.
+
+        migration_api.upgrade(options, 3)
+
+        cur_version = migration_api.db_version(options)
+        self.assertEquals(3, cur_version)
+
+        images_table = Table('images', MetaData(), autoload=True,
+                             autoload_with=engine)
+
+        self.assertTrue('type' not in images_table.c,
+                        "'type' column still found in images table columns! "
+                        "images table columns reported by metadata: %s\n"
+                        % images_table.c.keys())
+
+        image_properties_table = Table('image_properties', MetaData(),
+                                       autoload=True,
+                                       autoload_with=engine)
+
+        conn = engine.connect()
+        sel = select([func.count("*")], from_obj=[image_properties_table])
+        num_image_properties = conn.execute(sel).scalar()
+        self.assertEqual(orig_num_image_properties + 2, num_image_properties)
+        conn.close()
+
+        # Downgrade to 2 and check that the type properties were moved
+        # back to the main images table
+
+        migration_api.downgrade(options, 2)
+
+        images_table = Table('images', MetaData(), autoload=True,
+                             autoload_with=engine)
+
+        self.assertTrue('type' in images_table.c,
+                        "'type' column not found in images table columns! "
+                        "images table columns: %s"
+                        % images_table.c.keys())
+
+        image_properties_table = Table('image_properties', MetaData(),
+                                       autoload=True,
+                                       autoload_with=engine)
+
+        conn = engine.connect()
+        sel = select([func.count("*")], from_obj=[image_properties_table])
+        last_num_image_properties = conn.execute(sel).scalar()
+
+        self.assertEqual(num_image_properties - 2, last_num_image_properties)
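The `_reset_databases` helper in the new test suite splits a MySQL connection URL apart by hand with `urlparse` before shelling out to the `mysql` client. A minimal standalone sketch of that parsing, using a hypothetical connection string (shown with Python 3's `urllib.parse`; the branch itself uses the Python 2 `urlparse` module):

```python
# Sketch only: mirrors the user/password/host/database extraction done in
# _reset_databases. The URL below is a made-up example, not a real credential.
from urllib.parse import urlparse


def split_mysql_url(conn_string):
    """Return (user, password, host, database) from a SQLAlchemy-style URL."""
    pieces = urlparse(conn_string)
    database = pieces.path.strip('/')
    auth, host = pieces.netloc.split('@')
    auth_pieces = auth.split(':')
    user = auth_pieces[0]
    password = auth_pieces[1] if len(auth_pieces) > 1 else ""
    return user, password, host, database


print(split_mysql_url("mysql://glance:secret@localhost/glance_test"))
# -> ('glance', 'secret', 'localhost', 'glance_test')
```

The real helper additionally turns a non-empty password into a `-p<password>` flag and interpolates the pieces into a `drop database`/`create database` command line.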
=== modified file 'tests/unit/test_swift_store.py'
--- tests/unit/test_swift_store.py 2011-03-16 16:45:01 +0000
+++ tests/unit/test_swift_store.py 2011-03-23 15:06:49 +0000
@@ -70,15 +70,20 @@
         if hasattr(contents, 'read'):
             fixture_object = StringIO.StringIO()
             chunk = contents.read(SwiftBackend.CHUNKSIZE)
+            checksum = hashlib.md5()
             while chunk:
                 fixture_object.write(chunk)
+                checksum.update(chunk)
                 chunk = contents.read(SwiftBackend.CHUNKSIZE)
+            etag = checksum.hexdigest()
         else:
             fixture_object = StringIO.StringIO(contents)
+            etag = hashlib.md5(fixture_object.getvalue()).hexdigest()
+        read_len = fixture_object.len
         fixture_objects[fixture_key] = fixture_object
         fixture_headers[fixture_key] = {
-            'content-length': fixture_object.len,
-            'etag': hashlib.md5(fixture_object.read()).hexdigest()}
+            'content-length': read_len,
+            'etag': etag}
         return fixture_headers[fixture_key]['etag']
     else:
         msg = ("Object PUT failed - Object with key %s already exists"
@@ -226,8 +231,9 @@
     def test_add(self):
         """Test that we can add an image via the swift backend"""
         expected_image_id = 42
-        expected_swift_size = 1024 * 5  # 5K
+        expected_swift_size = FIVE_KB
         expected_swift_contents = "*" * expected_swift_size
+        expected_checksum = hashlib.md5(expected_swift_contents).hexdigest()
         expected_location = format_swift_location(
             SWIFT_OPTIONS['swift_store_user'],
             SWIFT_OPTIONS['swift_store_key'],
@@ -236,10 +242,12 @@
             expected_image_id)
         image_swift = StringIO.StringIO(expected_swift_contents)

-        location, size = SwiftBackend.add(42, image_swift, SWIFT_OPTIONS)
+        location, size, checksum = SwiftBackend.add(42, image_swift,
+                                                    SWIFT_OPTIONS)

         self.assertEquals(expected_location, location)
         self.assertEquals(expected_swift_size, size)
+        self.assertEquals(expected_checksum, checksum)

         url_pieces = urlparse.urlparse(expected_location)
         new_image_swift = SwiftBackend.get(url_pieces)
@@ -280,8 +288,9 @@
         options['swift_store_create_container_on_put'] = 'True'
         options['swift_store_container'] = 'noexist'
         expected_image_id = 42
-        expected_swift_size = 1024 * 5  # 5K
+        expected_swift_size = FIVE_KB
         expected_swift_contents = "*" * expected_swift_size
+        expected_checksum = hashlib.md5(expected_swift_contents).hexdigest()
         expected_location = format_swift_location(
             options['swift_store_user'],
             options['swift_store_key'],
@@ -290,10 +299,12 @@
             expected_image_id)
         image_swift = StringIO.StringIO(expected_swift_contents)

-        location, size = SwiftBackend.add(42, image_swift, options)
+        location, size, checksum = SwiftBackend.add(42, image_swift,
+                                                    options)

         self.assertEquals(expected_location, location)
         self.assertEquals(expected_swift_size, size)
+        self.assertEquals(expected_checksum, checksum)

         url_pieces = urlparse.urlparse(expected_location)
         new_image_swift = SwiftBackend.get(url_pieces)
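The stubbed Swift PUT above feeds each `CHUNKSIZE` read through an incremental MD5, the same pattern the backend stores use to produce a checksum without buffering the whole image in memory. A minimal standalone sketch of that loop (chunk size and payload are arbitrary; written for Python 3 with `BytesIO`, where the branch's tests use Python 2 `StringIO`):

```python
import hashlib
from io import BytesIO

CHUNKSIZE = 64 * 1024  # arbitrary for this sketch


def checksum_stream(data_file, chunk_size=CHUNKSIZE):
    """Read a file-like object in chunks; return (md5 hexdigest, byte count)."""
    checksum = hashlib.md5()
    length = 0
    chunk = data_file.read(chunk_size)
    while chunk:
        checksum.update(chunk)  # fold each chunk into the running digest
        length += len(chunk)
        chunk = data_file.read(chunk_size)
    return checksum.hexdigest(), length


# Mirrors the 5K "*" fixture used in the swift tests
contents = b"*" * (1024 * 5)
etag, size = checksum_stream(BytesIO(contents))
assert etag == hashlib.md5(contents).hexdigest()
assert size == 5120
```

The hexdigest produced this way is what the API then compares against an optional `X-Image-Meta-Checksum` header and echoes back in the `ETag`.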
