Merge lp:~jaypipes/glance/checksum into lp:~glance-coresec/glance/cactus-trunk
Status: Superseded
Proposed branch: lp:~jaypipes/glance/checksum
Merge into: lp:~glance-coresec/glance/cactus-trunk
Diff against target: 1588 lines (+855/-158), 20 files modified
- doc/source/glanceapi.rst (+20/-0)
- glance/client.py (+4/-1)
- glance/registry/db/api.py (+1/-1)
- glance/registry/db/migrate_repo/versions/003_add_disk_format.py (+147/-0)
- glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql (+58/-0)
- glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql (+64/-0)
- glance/registry/db/migrate_repo/versions/004_add_checksum.py (+80/-0)
- glance/registry/db/migration.py (+5/-5)
- glance/registry/db/models.py (+1/-0)
- glance/registry/server.py (+21/-8)
- glance/server.py (+72/-42)
- glance/store/__init__.py (+0/-37)
- glance/store/filesystem.py (+12/-21)
- glance/store/swift.py (+1/-1)
- tests/stubs.py (+6/-1)
- tests/unit/test_api.py (+103/-5)
- tests/unit/test_filesystem_store.py (+11/-5)
- tests/unit/test_migrations.conf (+5/-0)
- tests/unit/test_migrations.py (+227/-25)
- tests/unit/test_swift_store.py (+17/-6)
To merge this branch: bzr merge lp:~jaypipes/glance/checksum
Related bugs: (none)
Related blueprints: (none)
Reviewer | Status
---|---
Sandy Walsh (community) | Approve
Cory Wright (community) | Needs Fixing
Rick Harris (community) | Approve
Chuck Thier (community) | Approve
Review via email: mp+52569@code.launchpad.net |
This proposal has been superseded by a proposal from 2011-03-23.
Commit message
Add checksumming capabilities to Glance
Description of the change
Adds checksumming to Glance.
When adding an image (or uploading image data during a PUT operation),
you may now supply an optional X-Image-Meta-Checksum header. After
storing the uploaded image, the backend image stores are now required
to return a checksum of the data they just stored. The optional
X-Image-Meta-Checksum header, when present, is compared against that
returned checksum, and Glance returns a 400 Bad Request if there is a
mismatch.
The ETag header is now properly set to the image's checksum
for all GET /images/<ID>, HEAD /images/<ID>, POST /images and
PUT /images/<ID> operations.
Adds unit tests verifying the checksumming behaviour in the API, and
in the Swift and Filesystem backend stores.
NOTE: This does not include the DB migration script. Separate bug will be filed for that.
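The store-side verification flow described above can be sketched as a small helper (an illustrative sketch only; `checksum_stream` and `verify_checksum` are hypothetical names, not Glance's actual API):

```python
import hashlib

def checksum_stream(chunks):
    """Compute the MD5 hex digest of image data read in chunks,
    the way a backend store would while writing the image out."""
    md5 = hashlib.md5()
    for chunk in chunks:
        md5.update(chunk)
    return md5.hexdigest()

def verify_checksum(chunks, expected=None):
    """Checksum the image data, optionally verifying it.

    ``expected`` plays the role of the optional client-supplied
    X-Image-Meta-Checksum header; a mismatch is the case where
    Glance would return 400 Bad Request.
    """
    actual = checksum_stream(chunks)
    if expected is not None and actual != expected:
        raise ValueError("supplied checksum does not match stored data")
    # On success, this value is what the ETag header would echo back.
    return actual
```

For example, `verify_checksum([b"abc"], hashlib.md5(b"abc").hexdigest())` succeeds and returns the digest, while a wrong `expected` value raises.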
Jay Pipes (jaypipes) wrote:
thx Chuck. Made a note in the docs about the checksum being an MD5 checksum.
Rick Harris (rconradharris) wrote:
Nice patch.
Unfortunately with the state of Nova <=> Glance trunks at the moment, I wasn't able to test functionally. That said, this seems pretty safe to go ahead and merge, and if we have issues, we can fix them along with all the other fixes that are in progress at the moment :)
> 110 + checksum = Column(String(32))
I assume the answer is YAGNI, but I'll ask it anyway: Will we ever allow SHA1 or SHA256?
Would it make sense to make this a String(64) or, heck, a String(255) for flexibility?
Jay Pipes (jaypipes) wrote:
> > 110 + checksum = Column(String(32))
>
> I assume the answer is YAGNI, but I'll ask it anyway: Will we ever allow SHA1
> or SHA256?
>
> Would it make sense to make this a String(64) or, heck, a String(255) for
> flexibility?
Hmm, good question. Not sure, really. I suppose we could expand it in the future if we decide it's a needed feature.
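For reference, the hex-digest lengths behind this column-sizing question (plain Python, nothing Glance-specific):

```python
import hashlib

# Hex-digest lengths for the algorithms discussed above:
# MD5 fits the current String(32); SHA-1 and SHA-256 would not.
md5_len = len(hashlib.md5(b"").hexdigest())        # 32
sha1_len = len(hashlib.sha1(b"").hexdigest())      # 40
sha256_len = len(hashlib.sha256(b"").hexdigest())  # 64
print(md5_len, sha1_len, sha256_len)
```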
Cory Wright (corywright) wrote:
I noticed that there is no migration script to add the checksum column. Trunk is already missing a migration for disk_format/container_format. We probably shouldn't get in the habit of merging to trunk without migrations, since that effectively leaves trunk broken. However, the tests seem to run fine without the migration.
I tried merging trunk into this branch to test these changes with the glance cli tool, but there were conflicts so you may want to merge trunk once more.
Otherwise, this looks good to me. I'm going to mark it as Needs Fixing because of the missing migration. If you think it's ok to merge this to trunk without the migration then let me know and I'll mark this approved.
Jay Pipes (jaypipes) wrote:
> I noticed that there is no migration script to add the checksum column. Trunk
> is already missing a migration for disk_format/container_format. We
> probably shouldn't get in the habit of merging to trunk without migrations,
> since that effectively leaves trunk broken. However, the tests seem to run
> fine without the migration.
>
> I tried merging trunk into this branch to test these changes with the glance
> cli tool, but there were conflicts so you may want to merge trunk once more.
>
> Otherwise, this looks good to me. I'm going to mark it as Needs Fixing
> because of the missing migration. If you think it's ok to merge this to trunk
> without the migration then let me know and I'll mark this approved.
Agreed, though I did note this in the merge proposal commit message:
"NOTE: This does not include the DB migration script. Separate bug will be filed for that."
But, I'll work on a migrate script after merging trunk again. Thanks, Cory!
Cory Wright (corywright) wrote:
You sure did mention it, sorry about that. I read the commit message yesterday when I first looked at this and forgot.
Sandy Walsh (sandy-walsh) wrote:
Hmm, I got bitten by this migration bug yesterday ... can't this be put in this branch to keep trunk happy?
Jay Pipes (jaypipes) wrote:
> Hmm, I got bitten by this migration bug yesterday ... can't this be put in
> this branch to keep trunk happy?
? This hasn't been merged yet, so I'm unclear how this could have bitten you :)
FYI, I've now figured out the migrations stuff. Full steam ahead today on getting the image format migration bug done and getting this branch merged with a migration file.
Sandy Walsh (sandy-walsh) wrote:
sorry, it was a nova issue. _get_kernel_
I guess that just stresses the importance of having the migrations in place with the branch :)
Minor tweak ... ping me and I'll approve immediately after
278 missing _() ?
Jay Pipes (jaypipes) wrote:
On Wed, Mar 23, 2011 at 10:26 AM, Sandy Walsh <email address hidden> wrote:
> Review: Needs Fixing
> sorry, it was a nova issue. _get_kernel_
>
> I guess that just stresses the importance of having the migrations in place with the branch :)
Ya, I'm currently working on the migration script. Should be done shortly...
> Minor tweak ... ping me and I'll approve immediately after
>
> 278 missing _() ?
Glance doesn't have i18n. Yet :)
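For context, the `_()` Sandy points out is the conventional gettext translation marker. A minimal sketch of what adopting it would look like (hypothetical, since as noted Glance had no i18n at this point; with no translation catalogs installed, `_()` just returns its argument):

```python
import gettext

# Install gettext's _() into builtins; absent a compiled catalog,
# the NullTranslations fallback makes _() an identity function.
gettext.install("glance")

msg = _("Checksum supplied in headers does not match checksum of stored image")
print(msg)
```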
Unmerged revisions
Preview Diff
1 | === modified file 'doc/source/glanceapi.rst' | |||
2 | --- doc/source/glanceapi.rst 2011-03-05 17:04:43 +0000 | |||
3 | +++ doc/source/glanceapi.rst 2011-03-23 15:06:49 +0000 | |||
4 | @@ -67,6 +67,7 @@ | |||
5 | 67 | 'disk_format': 'vhd', | 67 | 'disk_format': 'vhd', |
6 | 68 | 'container_format': 'ovf', | 68 | 'container_format': 'ovf', |
7 | 69 | 'size': '5368709120', | 69 | 'size': '5368709120', |
8 | 70 | 'checksum': 'c2e5db72bd7fd153f53ede5da5a06de3', | ||
9 | 70 | 'location': 'swift://account:key/container/image.tar.gz.0', | 71 | 'location': 'swift://account:key/container/image.tar.gz.0', |
10 | 71 | 'created_at': '2010-02-03 09:34:01', | 72 | 'created_at': '2010-02-03 09:34:01', |
11 | 72 | 'updated_at': '2010-02-03 09:34:01', | 73 | 'updated_at': '2010-02-03 09:34:01', |
12 | @@ -89,6 +90,8 @@ | |||
13 | 89 | The `properties` field is a mapping of free-form key/value pairs that | 90 | The `properties` field is a mapping of free-form key/value pairs that |
14 | 90 | have been saved with the image metadata | 91 | have been saved with the image metadata |
15 | 91 | 92 | ||
16 | 93 | The `checksum` field is an MD5 checksum of the image file data | ||
17 | 94 | |||
18 | 92 | 95 | ||
19 | 93 | Requesting Detailed Metadata on a Specific Image | 96 | Requesting Detailed Metadata on a Specific Image |
20 | 94 | ------------------------------------------------ | 97 | ------------------------------------------------ |
21 | @@ -116,6 +119,7 @@ | |||
22 | 116 | x-image-meta-disk-format vhd | 119 | x-image-meta-disk-format vhd |
23 | 117 | x-image-meta-container-format ovf | 120 | x-image-meta-container-format ovf |
24 | 118 | x-image-meta-size 5368709120 | 121 | x-image-meta-size 5368709120 |
25 | 122 | x-image-meta-checksum c2e5db72bd7fd153f53ede5da5a06de3 | ||
26 | 119 | x-image-meta-location swift://account:key/container/image.tar.gz.0 | 123 | x-image-meta-location swift://account:key/container/image.tar.gz.0 |
27 | 120 | x-image-meta-created_at 2010-02-03 09:34:01 | 124 | x-image-meta-created_at 2010-02-03 09:34:01 |
28 | 121 | x-image-meta-updated_at 2010-02-03 09:34:01 | 125 | x-image-meta-updated_at 2010-02-03 09:34:01 |
29 | @@ -137,6 +141,9 @@ | |||
30 | 137 | that have been saved with the image metadata. The key is the string | 141 | that have been saved with the image metadata. The key is the string |
31 | 138 | after `x-image-meta-property-` and the value is the value of the header | 142 | after `x-image-meta-property-` and the value is the value of the header |
32 | 139 | 143 | ||
33 | 144 | The response's `ETag` header will always be equal to the | ||
34 | 145 | `x-image-meta-checksum` value | ||
35 | 146 | |||
36 | 140 | 147 | ||
37 | 141 | Retrieving a Virtual Machine Image | 148 | Retrieving a Virtual Machine Image |
38 | 142 | ---------------------------------- | 149 | ---------------------------------- |
39 | @@ -166,6 +173,7 @@ | |||
40 | 166 | x-image-meta-disk-format vhd | 173 | x-image-meta-disk-format vhd |
41 | 167 | x-image-meta-container-format ovf | 174 | x-image-meta-container-format ovf |
42 | 168 | x-image-meta-size 5368709120 | 175 | x-image-meta-size 5368709120 |
43 | 176 | x-image-meta-checksum c2e5db72bd7fd153f53ede5da5a06de3 | ||
44 | 169 | x-image-meta-location swift://account:key/container/image.tar.gz.0 | 177 | x-image-meta-location swift://account:key/container/image.tar.gz.0 |
45 | 170 | x-image-meta-created_at 2010-02-03 09:34:01 | 178 | x-image-meta-created_at 2010-02-03 09:34:01 |
46 | 171 | x-image-meta-updated_at 2010-02-03 09:34:01 | 179 | x-image-meta-updated_at 2010-02-03 09:34:01 |
47 | @@ -190,6 +198,9 @@ | |||
48 | 190 | The response's `Content-Length` header shall be equal to the value of | 198 | The response's `Content-Length` header shall be equal to the value of |
49 | 191 | the `x-image-meta-size` header | 199 | the `x-image-meta-size` header |
50 | 192 | 200 | ||
51 | 201 | The response's `ETag` header will always be equal to the | ||
52 | 202 | `x-image-meta-checksum` value | ||
53 | 203 | |||
54 | 193 | The image data itself will be the body of the HTTP response returned | 204 | The image data itself will be the body of the HTTP response returned |
55 | 194 | from the request, which will have content-type of | 205 | from the request, which will have content-type of |
56 | 195 | `application/octet-stream`. | 206 | `application/octet-stream`. |
57 | @@ -284,6 +295,15 @@ | |||
58 | 284 | When not present, Glance will calculate the image's size based on the size | 295 | When not present, Glance will calculate the image's size based on the size |
59 | 285 | of the request body. | 296 | of the request body. |
60 | 286 | 297 | ||
61 | 298 | * ``x-image-meta-checksum`` | ||
62 | 299 | |||
63 | 300 | This header is optional. When present it shall be the expected **MD5** | ||
64 | 301 | checksum of the image file data. | ||
65 | 302 | |||
66 | 303 | When present, Glance will verify the checksum generated from the backend | ||
67 | 304 | store when storing your image against this value and return a | ||
68 | 305 | **400 Bad Request** if the values do not match. | ||
69 | 306 | |||
70 | 287 | * ``x-image-meta-is-public`` | 307 | * ``x-image-meta-is-public`` |
71 | 288 | 308 | ||
72 | 289 | This header is optional. | 309 | This header is optional. |
73 | 290 | 310 | ||
74 | === modified file 'glance/client.py' | |||
75 | --- glance/client.py 2011-03-06 17:02:44 +0000 | |||
76 | +++ glance/client.py 2011-03-23 15:06:49 +0000 | |||
77 | @@ -142,7 +142,10 @@ | |||
78 | 142 | c.request(method, action, body, headers) | 142 | c.request(method, action, body, headers) |
79 | 143 | res = c.getresponse() | 143 | res = c.getresponse() |
80 | 144 | status_code = self.get_status_code(res) | 144 | status_code = self.get_status_code(res) |
82 | 145 | if status_code == httplib.OK: | 145 | if status_code in (httplib.OK, |
83 | 146 | httplib.CREATED, | ||
84 | 147 | httplib.ACCEPTED, | ||
85 | 148 | httplib.NO_CONTENT): | ||
86 | 146 | return res | 149 | return res |
87 | 147 | elif status_code == httplib.UNAUTHORIZED: | 150 | elif status_code == httplib.UNAUTHORIZED: |
88 | 148 | raise exception.NotAuthorized | 151 | raise exception.NotAuthorized |
89 | 149 | 152 | ||
90 | === modified file 'glance/registry/db/api.py' | |||
91 | --- glance/registry/db/api.py 2011-03-14 19:10:24 +0000 | |||
92 | +++ glance/registry/db/api.py 2011-03-23 15:06:49 +0000 | |||
93 | @@ -43,7 +43,7 @@ | |||
94 | 43 | 43 | ||
95 | 44 | IMAGE_ATTRS = BASE_MODEL_ATTRS | set(['name', 'status', 'size', | 44 | IMAGE_ATTRS = BASE_MODEL_ATTRS | set(['name', 'status', 'size', |
96 | 45 | 'disk_format', 'container_format', | 45 | 'disk_format', 'container_format', |
98 | 46 | 'is_public', 'location']) | 46 | 'is_public', 'location', 'checksum']) |
99 | 47 | 47 | ||
100 | 48 | CONTAINER_FORMATS = ['ami', 'ari', 'aki', 'bare', 'ovf'] | 48 | CONTAINER_FORMATS = ['ami', 'ari', 'aki', 'bare', 'ovf'] |
101 | 49 | DISK_FORMATS = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2', 'vdi'] | 49 | DISK_FORMATS = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2', 'vdi'] |
102 | 50 | 50 | ||
103 | === added file 'glance/registry/db/migrate_repo/versions/003_add_disk_format.py' | |||
104 | --- glance/registry/db/migrate_repo/versions/003_add_disk_format.py 1970-01-01 00:00:00 +0000 | |||
105 | +++ glance/registry/db/migrate_repo/versions/003_add_disk_format.py 2011-03-23 15:06:49 +0000 | |||
106 | @@ -0,0 +1,147 @@ | |||
107 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
108 | 2 | |||
109 | 3 | # Copyright 2011 OpenStack LLC. | ||
110 | 4 | # All Rights Reserved. | ||
111 | 5 | # | ||
112 | 6 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
113 | 7 | # not use this file except in compliance with the License. You may obtain | ||
114 | 8 | # a copy of the License at | ||
115 | 9 | # | ||
116 | 10 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
117 | 11 | # | ||
118 | 12 | # Unless required by applicable law or agreed to in writing, software | ||
119 | 13 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
120 | 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
121 | 15 | # License for the specific language governing permissions and limitations | ||
122 | 16 | # under the License. | ||
123 | 17 | |||
124 | 18 | from migrate.changeset import * | ||
125 | 19 | from sqlalchemy import * | ||
126 | 20 | from sqlalchemy.sql import and_, not_ | ||
127 | 21 | |||
128 | 22 | from glance.registry.db.migrate_repo.schema import ( | ||
129 | 23 | Boolean, DateTime, Integer, String, Text, from_migration_import) | ||
130 | 24 | |||
131 | 25 | |||
132 | 26 | def get_images_table(meta): | ||
133 | 27 | """ | ||
134 | 28 | Returns the Table object for the images table that | ||
135 | 29 | corresponds to the images table definition of this version. | ||
136 | 30 | """ | ||
137 | 31 | images = Table('images', meta, | ||
138 | 32 | Column('id', Integer(), primary_key=True, nullable=False), | ||
139 | 33 | Column('name', String(255)), | ||
140 | 34 | Column('disk_format', String(20)), | ||
141 | 35 | Column('container_format', String(20)), | ||
142 | 36 | Column('size', Integer()), | ||
143 | 37 | Column('status', String(30), nullable=False), | ||
144 | 38 | Column('is_public', Boolean(), nullable=False, default=False, | ||
145 | 39 | index=True), | ||
146 | 40 | Column('location', Text()), | ||
147 | 41 | Column('created_at', DateTime(), nullable=False), | ||
148 | 42 | Column('updated_at', DateTime()), | ||
149 | 43 | Column('deleted_at', DateTime()), | ||
150 | 44 | Column('deleted', Boolean(), nullable=False, default=False, | ||
151 | 45 | index=True), | ||
152 | 46 | mysql_engine='InnoDB', | ||
153 | 47 | useexisting=True) | ||
154 | 48 | |||
155 | 49 | return images | ||
156 | 50 | |||
157 | 51 | |||
158 | 52 | def get_image_properties_table(meta): | ||
159 | 53 | """ | ||
160 | 54 | No changes to the image properties table from 002... | ||
161 | 55 | """ | ||
162 | 56 | (define_image_properties_table,) = from_migration_import( | ||
163 | 57 | '002_add_image_properties_table', ['define_image_properties_table']) | ||
164 | 58 | |||
165 | 59 | image_properties = define_image_properties_table(meta) | ||
166 | 60 | return image_properties | ||
167 | 61 | |||
168 | 62 | |||
169 | 63 | def upgrade(migrate_engine): | ||
170 | 64 | meta = MetaData() | ||
171 | 65 | meta.bind = migrate_engine | ||
172 | 66 | (define_images_table,) = from_migration_import( | ||
173 | 67 | '001_add_images_table', ['define_images_table']) | ||
174 | 68 | (define_image_properties_table,) = from_migration_import( | ||
175 | 69 | '002_add_image_properties_table', ['define_image_properties_table']) | ||
176 | 70 | |||
177 | 71 | conn = migrate_engine.connect() | ||
178 | 72 | images = define_images_table(meta) | ||
179 | 73 | image_properties = define_image_properties_table(meta) | ||
180 | 74 | |||
181 | 75 | # Steps to take, in this order: | ||
182 | 76 | # 1) Move the existing type column from Image into | ||
183 | 77 | # ImageProperty for all image records that have a non-NULL | ||
184 | 78 | # type column | ||
185 | 79 | # 2) Drop the type column in images | ||
186 | 80 | # 3) Add the new columns to images | ||
187 | 81 | |||
188 | 82 | # The below wackiness correlates to the following ANSI SQL: | ||
189 | 83 | # SELECT images.* FROM images | ||
190 | 84 | # LEFT JOIN image_properties | ||
191 | 85 | # ON images.id = image_properties.image_id | ||
192 | 86 | # AND image_properties.key = 'type' | ||
193 | 87 | # WHERE image_properties.image_id IS NULL | ||
194 | 88 | # AND images.type IS NOT NULL | ||
195 | 89 | # | ||
196 | 90 | # which returns all the images that have a type set | ||
197 | 91 | # but that DO NOT yet have an image_property record | ||
198 | 92 | # with key of type. | ||
199 | 93 | sel = select([images], from_obj=[ | ||
200 | 94 | images.outerjoin(image_properties, | ||
201 | 95 | and_(images.c.id == image_properties.c.image_id, | ||
202 | 96 | image_properties.c.key == 'type'))]).where( | ||
203 | 97 | and_(image_properties.c.image_id == None, | ||
204 | 98 | images.c.type != None)) | ||
205 | 99 | image_records = conn.execute(sel).fetchall() | ||
206 | 100 | property_insert = image_properties.insert() | ||
207 | 101 | for record in image_records: | ||
208 | 102 | conn.execute(property_insert, | ||
209 | 103 | image_id=record.id, | ||
210 | 104 | key='type', | ||
211 | 105 | created_at=record.created_at, | ||
212 | 106 | deleted=False, | ||
213 | 107 | value=record.type) | ||
214 | 108 | |||
215 | 109 | disk_format = Column('disk_format', String(20)) | ||
216 | 110 | disk_format.create(images) | ||
217 | 111 | container_format = Column('container_format', String(20)) | ||
218 | 112 | container_format.create(images) | ||
219 | 113 | |||
220 | 114 | images.columns['type'].drop() | ||
221 | 115 | conn.close() | ||
222 | 116 | |||
223 | 117 | |||
224 | 118 | def downgrade(migrate_engine): | ||
225 | 119 | meta = MetaData() | ||
226 | 120 | meta.bind = migrate_engine | ||
227 | 121 | |||
228 | 122 | # Steps to take, in this order: | ||
229 | 123 | # 1) Add type column back to Image | ||
230 | 124 | # 2) Move the existing type properties from ImageProperty into | ||
231 | 125 | # Image.type | ||
232 | 126 | # 3) Drop the disk_format and container_format columns in Image | ||
233 | 127 | |||
234 | 128 | conn = migrate_engine.connect() | ||
235 | 129 | images = get_images_table(meta) | ||
236 | 130 | image_properties = get_image_properties_table(meta) | ||
237 | 131 | |||
238 | 132 | type_col = Column('type', String(30)) | ||
239 | 133 | type_col.create(images) | ||
240 | 134 | |||
241 | 135 | sel = select([image_properties]).where(image_properties.c.key == 'type') | ||
242 | 136 | type_property_records = conn.execute(sel).fetchall() | ||
243 | 137 | for record in type_property_records: | ||
244 | 138 | upd = images.update().where( | ||
245 | 139 | images.c.id == record.image_id).values(type=record.value) | ||
246 | 140 | conn.execute(upd) | ||
247 | 141 | dlt = image_properties.delete().where( | ||
248 | 142 | image_properties.c.image_id == record.image_id) | ||
249 | 143 | conn.execute(dlt) | ||
250 | 144 | conn.close() | ||
251 | 145 | |||
252 | 146 | images.columns['disk_format'].drop() | ||
253 | 147 | images.columns['container_format'].drop() | ||
254 | 0 | 148 | ||
255 | === added file 'glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql' | |||
256 | --- glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql 1970-01-01 00:00:00 +0000 | |||
257 | +++ glance/registry/db/migrate_repo/versions/003_sqlite_downgrade.sql 2011-03-23 15:06:49 +0000 | |||
258 | @@ -0,0 +1,58 @@ | |||
259 | 1 | BEGIN; | ||
260 | 2 | |||
261 | 3 | /* Make changes to the base images table */ | ||
262 | 4 | CREATE TEMPORARY TABLE images_backup ( | ||
263 | 5 | id INTEGER NOT NULL, | ||
264 | 6 | name VARCHAR(255), | ||
265 | 7 | size INTEGER, | ||
266 | 8 | status VARCHAR(30) NOT NULL, | ||
267 | 9 | is_public BOOLEAN NOT NULL, | ||
268 | 10 | location TEXT, | ||
269 | 11 | created_at DATETIME NOT NULL, | ||
270 | 12 | updated_at DATETIME, | ||
271 | 13 | deleted_at DATETIME, | ||
272 | 14 | deleted BOOLEAN NOT NULL, | ||
273 | 15 | PRIMARY KEY (id) | ||
274 | 16 | ); | ||
275 | 17 | |||
276 | 18 | INSERT INTO images_backup | ||
277 | 19 | SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted | ||
278 | 20 | FROM images; | ||
279 | 21 | |||
280 | 22 | DROP TABLE images; | ||
281 | 23 | |||
282 | 24 | CREATE TABLE images ( | ||
283 | 25 | id INTEGER NOT NULL, | ||
284 | 26 | name VARCHAR(255), | ||
285 | 27 | size INTEGER, | ||
286 | 28 | type VARCHAR(30), | ||
287 | 29 | status VARCHAR(30) NOT NULL, | ||
288 | 30 | is_public BOOLEAN NOT NULL, | ||
289 | 31 | location TEXT, | ||
290 | 32 | created_at DATETIME NOT NULL, | ||
291 | 33 | updated_at DATETIME, | ||
292 | 34 | deleted_at DATETIME, | ||
293 | 35 | deleted BOOLEAN NOT NULL, | ||
294 | 36 | PRIMARY KEY (id), | ||
295 | 37 | CHECK (is_public IN (0, 1)), | ||
296 | 38 | CHECK (deleted IN (0, 1)) | ||
297 | 39 | ); | ||
298 | 40 | CREATE INDEX ix_images_deleted ON images (deleted); | ||
299 | 41 | CREATE INDEX ix_images_is_public ON images (is_public); | ||
300 | 42 | |||
301 | 43 | INSERT INTO images (id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted) | ||
302 | 44 | SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted | ||
303 | 45 | FROM images_backup; | ||
304 | 46 | |||
305 | 47 | DROP TABLE images_backup; | ||
306 | 48 | |||
307 | 49 | /* Re-insert the type values from the temp table */ | ||
308 | 50 | UPDATE images | ||
309 | 51 | SET type = (SELECT value FROM image_properties WHERE image_id = images.id AND key = 'type') | ||
310 | 52 | WHERE EXISTS (SELECT * FROM image_properties WHERE image_id = images.id AND key = 'type'); | ||
311 | 53 | |||
312 | 54 | /* Remove the type properties from the image_properties table */ | ||
313 | 55 | DELETE FROM image_properties | ||
314 | 56 | WHERE key = 'type'; | ||
315 | 57 | |||
316 | 58 | COMMIT; | ||
317 | 0 | 59 | ||
318 | === added file 'glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql' | |||
319 | --- glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql 1970-01-01 00:00:00 +0000 | |||
320 | +++ glance/registry/db/migrate_repo/versions/003_sqlite_upgrade.sql 2011-03-23 15:06:49 +0000 | |||
321 | @@ -0,0 +1,64 @@ | |||
322 | 1 | BEGIN TRANSACTION; | ||
323 | 2 | |||
324 | 3 | /* Move type column from base images table | ||
325 | 4 | * to be records in image_properties table */ | ||
326 | 5 | CREATE TEMPORARY TABLE tmp_type_records (id INTEGER NOT NULL, type VARCHAR(30) NOT NULL); | ||
327 | 6 | INSERT INTO tmp_type_records | ||
328 | 7 | SELECT id, type | ||
329 | 8 | FROM images | ||
330 | 9 | WHERE type IS NOT NULL; | ||
331 | 10 | |||
332 | 11 | REPLACE INTO image_properties | ||
333 | 12 | (image_id, key, value, created_at, deleted) | ||
334 | 13 | SELECT id, 'type', type, date('now'), 0 | ||
335 | 14 | FROM tmp_type_records; | ||
336 | 15 | |||
337 | 16 | DROP TABLE tmp_type_records; | ||
338 | 17 | |||
339 | 18 | /* Make changes to the base images table */ | ||
340 | 19 | CREATE TEMPORARY TABLE images_backup ( | ||
341 | 20 | id INTEGER NOT NULL, | ||
342 | 21 | name VARCHAR(255), | ||
343 | 22 | size INTEGER, | ||
344 | 23 | status VARCHAR(30) NOT NULL, | ||
345 | 24 | is_public BOOLEAN NOT NULL, | ||
346 | 25 | location TEXT, | ||
347 | 26 | created_at DATETIME NOT NULL, | ||
348 | 27 | updated_at DATETIME, | ||
349 | 28 | deleted_at DATETIME, | ||
350 | 29 | deleted BOOLEAN NOT NULL, | ||
351 | 30 | PRIMARY KEY (id) | ||
352 | 31 | ); | ||
353 | 32 | |||
354 | 33 | INSERT INTO images_backup | ||
355 | 34 | SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted | ||
356 | 35 | FROM images; | ||
357 | 36 | |||
358 | 37 | DROP TABLE images; | ||
359 | 38 | |||
360 | 39 | CREATE TABLE images ( | ||
361 | 40 | id INTEGER NOT NULL, | ||
362 | 41 | name VARCHAR(255), | ||
363 | 42 | size INTEGER, | ||
364 | 43 | status VARCHAR(30) NOT NULL, | ||
365 | 44 | is_public BOOLEAN NOT NULL, | ||
366 | 45 | location TEXT, | ||
367 | 46 | created_at DATETIME NOT NULL, | ||
368 | 47 | updated_at DATETIME, | ||
369 | 48 | deleted_at DATETIME, | ||
370 | 49 | deleted BOOLEAN NOT NULL, | ||
371 | 50 | disk_format VARCHAR(20), | ||
372 | 51 | container_format VARCHAR(20), | ||
373 | 52 | PRIMARY KEY (id), | ||
374 | 53 | CHECK (is_public IN (0, 1)), | ||
375 | 54 | CHECK (deleted IN (0, 1)) | ||
376 | 55 | ); | ||
377 | 56 | CREATE INDEX ix_images_deleted ON images (deleted); | ||
378 | 57 | CREATE INDEX ix_images_is_public ON images (is_public); | ||
379 | 58 | |||
380 | 59 | INSERT INTO images (id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted) | ||
381 | 60 | SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted | ||
382 | 61 | FROM images_backup; | ||
383 | 62 | |||
384 | 63 | DROP TABLE images_backup; | ||
385 | 64 | COMMIT; | ||
386 | 0 | 65 | ||
387 | === added file 'glance/registry/db/migrate_repo/versions/004_add_checksum.py' | |||
388 | --- glance/registry/db/migrate_repo/versions/004_add_checksum.py 1970-01-01 00:00:00 +0000 | |||
389 | +++ glance/registry/db/migrate_repo/versions/004_add_checksum.py 2011-03-23 15:06:49 +0000 | |||
390 | @@ -0,0 +1,80 @@ | |||
391 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
392 | 2 | |||
393 | 3 | # Copyright 2011 OpenStack LLC. | ||
394 | 4 | # All Rights Reserved. | ||
395 | 5 | # | ||
396 | 6 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
397 | 7 | # not use this file except in compliance with the License. You may obtain | ||
398 | 8 | # a copy of the License at | ||
399 | 9 | # | ||
400 | 10 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
401 | 11 | # | ||
402 | 12 | # Unless required by applicable law or agreed to in writing, software | ||
403 | 13 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
404 | 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
405 | 15 | # License for the specific language governing permissions and limitations | ||
406 | 16 | # under the License. | ||
407 | 17 | |||
408 | 18 | from migrate.changeset import * | ||
409 | 19 | from sqlalchemy import * | ||
410 | 20 | from sqlalchemy.sql import and_, not_ | ||
411 | 21 | |||
412 | 22 | from glance.registry.db.migrate_repo.schema import ( | ||
413 | 23 | Boolean, DateTime, Integer, String, Text, from_migration_import) | ||
414 | 24 | |||
415 | 25 | |||
416 | 26 | def get_images_table(meta): | ||
417 | 27 | """ | ||
418 | 28 | Returns the Table object for the images table that | ||
419 | 29 | corresponds to the images table definition of this version. | ||
420 | 30 | """ | ||
421 | 31 | images = Table('images', meta, | ||
422 | 32 | Column('id', Integer(), primary_key=True, nullable=False), | ||
423 | 33 | Column('name', String(255)), | ||
424 | 34 | Column('disk_format', String(20)), | ||
425 | 35 | Column('container_format', String(20)), | ||
426 | 36 | Column('size', Integer()), | ||
427 | 37 | Column('status', String(30), nullable=False), | ||
428 | 38 | Column('is_public', Boolean(), nullable=False, default=False, | ||
429 | 39 | index=True), | ||
430 | 40 | Column('location', Text()), | ||
431 | 41 | Column('created_at', DateTime(), nullable=False), | ||
432 | 42 | Column('updated_at', DateTime()), | ||
433 | 43 | Column('deleted_at', DateTime()), | ||
434 | 44 | Column('deleted', Boolean(), nullable=False, default=False, | ||
435 | 45 | index=True), | ||
436 | 46 | Column('checksum', String(32)), | ||
437 | 47 | mysql_engine='InnoDB', | ||
438 | 48 | useexisting=True) | ||
439 | 49 | |||
440 | 50 | return images | ||
441 | 51 | |||
442 | 52 | |||
443 | 53 | def get_image_properties_table(meta): | ||
444 | 54 | """ | ||
445 | 55 | No changes to the image properties table from 002... | ||
446 | 56 | """ | ||
447 | 57 | (define_image_properties_table,) = from_migration_import( | ||
448 | 58 | '002_add_image_properties_table', ['define_image_properties_table']) | ||
449 | 59 | |||
450 | 60 | image_properties = define_image_properties_table(meta) | ||
451 | 61 | return image_properties | ||
452 | 62 | |||
453 | 63 | |||
454 | 64 | def upgrade(migrate_engine): | ||
455 | 65 | meta = MetaData() | ||
456 | 66 | meta.bind = migrate_engine | ||
457 | 67 | |||
458 | 68 | images = get_images_table(meta) | ||
459 | 69 | |||
460 | 70 | checksum = Column('checksum', String(32)) | ||
461 | 71 | checksum.create(images) | ||
462 | 72 | |||
463 | 73 | |||
464 | 74 | def downgrade(migrate_engine): | ||
465 | 75 | meta = MetaData() | ||
466 | 76 | meta.bind = migrate_engine | ||
467 | 77 | |||
468 | 78 | images = get_images_table(meta) | ||
469 | 79 | |||
470 | 80 | images.columns['checksum'].drop() | ||
471 | 0 | 81 | ||
472 | === modified file 'glance/registry/db/migration.py' | |||
473 | --- glance/registry/db/migration.py 2011-03-08 19:51:25 +0000 | |||
474 | +++ glance/registry/db/migration.py 2011-03-23 15:06:49 +0000 | |||
475 | @@ -38,7 +38,7 @@ | |||
476 | 38 | :param options: options dict | 38 | :param options: options dict |
477 | 39 | :retval version number | 39 | :retval version number |
478 | 40 | """ | 40 | """ |
480 | 41 | repo_path = _find_migrate_repo() | 41 | repo_path = get_migrate_repo_path() |
481 | 42 | sql_connection = options['sql_connection'] | 42 | sql_connection = options['sql_connection'] |
482 | 43 | try: | 43 | try: |
483 | 44 | return versioning_api.db_version(sql_connection, repo_path) | 44 | return versioning_api.db_version(sql_connection, repo_path) |
484 | @@ -56,7 +56,7 @@ | |||
485 | 56 | :retval version number | 56 | :retval version number |
486 | 57 | """ | 57 | """ |
487 | 58 | db_version(options) # Ensure db is under migration control | 58 | db_version(options) # Ensure db is under migration control |
489 | 59 | repo_path = _find_migrate_repo() | 59 | repo_path = get_migrate_repo_path() |
490 | 60 | sql_connection = options['sql_connection'] | 60 | sql_connection = options['sql_connection'] |
491 | 61 | version_str = version or 'latest' | 61 | version_str = version or 'latest' |
492 | 62 | logger.info("Upgrading %(sql_connection)s to version %(version_str)s" % | 62 | logger.info("Upgrading %(sql_connection)s to version %(version_str)s" % |
493 | @@ -72,7 +72,7 @@ | |||
494 | 72 | :retval version number | 72 | :retval version number |
495 | 73 | """ | 73 | """ |
496 | 74 | db_version(options) # Ensure db is under migration control | 74 | db_version(options) # Ensure db is under migration control |
498 | 75 | repo_path = _find_migrate_repo() | 75 | repo_path = get_migrate_repo_path() |
499 | 76 | sql_connection = options['sql_connection'] | 76 | sql_connection = options['sql_connection'] |
500 | 77 | logger.info("Downgrading %(sql_connection)s to version %(version)s" % | 77 | logger.info("Downgrading %(sql_connection)s to version %(version)s" % |
501 | 78 | locals()) | 78 | locals()) |
502 | @@ -98,7 +98,7 @@ | |||
503 | 98 | 98 | ||
504 | 99 | :param options: options dict | 99 | :param options: options dict |
505 | 100 | """ | 100 | """ |
507 | 101 | repo_path = _find_migrate_repo() | 101 | repo_path = get_migrate_repo_path() |
508 | 102 | sql_connection = options['sql_connection'] | 102 | sql_connection = options['sql_connection'] |
509 | 103 | return versioning_api.version_control(sql_connection, repo_path) | 103 | return versioning_api.version_control(sql_connection, repo_path) |
510 | 104 | 104 | ||
511 | @@ -117,7 +117,7 @@ | |||
512 | 117 | upgrade(options, version=version) | 117 | upgrade(options, version=version) |
513 | 118 | 118 | ||
514 | 119 | 119 | ||
516 | 120 | def _find_migrate_repo(): | 120 | def get_migrate_repo_path(): |
517 | 121 | """Get the path for the migrate repository.""" | 121 | """Get the path for the migrate repository.""" |
518 | 122 | path = os.path.join(os.path.abspath(os.path.dirname(__file__)), | 122 | path = os.path.join(os.path.abspath(os.path.dirname(__file__)), |
519 | 123 | 'migrate_repo') | 123 | 'migrate_repo') |
520 | 124 | 124 | ||
521 | === modified file 'glance/registry/db/models.py' | |||
522 | --- glance/registry/db/models.py 2011-03-05 17:04:43 +0000 | |||
523 | +++ glance/registry/db/models.py 2011-03-23 15:06:49 +0000 | |||
524 | @@ -104,6 +104,7 @@ | |||
525 | 104 | status = Column(String(30), nullable=False) | 104 | status = Column(String(30), nullable=False) |
526 | 105 | is_public = Column(Boolean, nullable=False, default=False) | 105 | is_public = Column(Boolean, nullable=False, default=False) |
527 | 106 | location = Column(Text) | 106 | location = Column(Text) |
528 | 107 | checksum = Column(String(32)) | ||
529 | 107 | 108 | ||
530 | 108 | 109 | ||
531 | 109 | class ImageProperty(BASE, ModelBase): | 110 | class ImageProperty(BASE, ModelBase): |
532 | 110 | 111 | ||
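The new model column is sized `String(32)` because an MD5 hex digest is always exactly 32 hexadecimal characters, regardless of input size:

```python
import hashlib

# digest length is fixed at 32 hex chars for any input
digest = hashlib.md5(b"chunk00000remainder").hexdigest()
print(len(digest))  # 32
```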
533 | === modified file 'glance/registry/server.py' | |||
534 | --- glance/registry/server.py 2011-02-25 16:52:12 +0000 | |||
535 | +++ glance/registry/server.py 2011-03-23 15:06:49 +0000 | |||
536 | @@ -14,8 +14,9 @@ | |||
537 | 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
538 | 15 | # License for the specific language governing permissions and limitations | 15 | # License for the specific language governing permissions and limitations |
539 | 16 | # under the License. | 16 | # under the License. |
540 | 17 | |||
541 | 17 | """ | 18 | """ |
543 | 18 | Parllax Image controller | 19 | Reference implementation registry server WSGI controller |
544 | 19 | """ | 20 | """ |
545 | 20 | 21 | ||
546 | 21 | import json | 22 | import json |
547 | @@ -31,6 +32,10 @@ | |||
548 | 31 | 32 | ||
549 | 32 | logger = logging.getLogger('glance.registry.server') | 33 | logger = logging.getLogger('glance.registry.server') |
550 | 33 | 34 | ||
551 | 35 | DISPLAY_FIELDS_IN_INDEX = ['id', 'name', 'size', | ||
552 | 36 | 'disk_format', 'container_format', | ||
553 | 37 | 'checksum'] | ||
554 | 38 | |||
555 | 34 | 39 | ||
556 | 35 | class Controller(wsgi.Controller): | 40 | class Controller(wsgi.Controller): |
557 | 36 | """Controller for the reference implementation registry server""" | 41 | """Controller for the reference implementation registry server""" |
558 | @@ -49,16 +54,24 @@ | |||
559 | 49 | 54 | ||
560 | 50 | Where image_list is a sequence of mappings:: | 55 | Where image_list is a sequence of mappings:: |
561 | 51 | 56 | ||
563 | 52 | {'id': image_id, 'name': image_name} | 57 | { |
564 | 58 | 'id': <ID>, | ||
565 | 59 | 'name': <NAME>, | ||
566 | 60 | 'size': <SIZE>, | ||
567 | 61 | 'disk_format': <DISK_FORMAT>, | ||
568 | 62 | 'container_format': <CONTAINER_FORMAT>, | ||
569 | 63 | 'checksum': <CHECKSUM> | ||
570 | 64 | } | ||
571 | 53 | 65 | ||
572 | 54 | """ | 66 | """ |
573 | 55 | images = db_api.image_get_all_public(None) | 67 | images = db_api.image_get_all_public(None) |
580 | 56 | image_dicts = [dict(id=i['id'], | 68 | results = [] |
581 | 57 | name=i['name'], | 69 | for image in images: |
582 | 58 | disk_format=i['disk_format'], | 70 | result = {} |
583 | 59 | container_format=i['container_format'], | 71 | for field in DISPLAY_FIELDS_IN_INDEX: |
584 | 60 | size=i['size']) for i in images] | 72 | result[field] = image[field] |
585 | 61 | return dict(images=image_dicts) | 73 | results.append(result) |
586 | 74 | return dict(images=results) | ||
587 | 62 | 75 | ||
588 | 63 | def detail(self, req): | 76 | def detail(self, req): |
589 | 64 | """Return detailed information for all public, non-deleted images | 77 | """Return detailed information for all public, non-deleted images |
590 | 65 | 78 | ||
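The rewritten `index` loop above replaces a hard-coded dict literal with the `DISPLAY_FIELDS_IN_INDEX` whitelist, so exposing another field in the index listing becomes a one-line change. A self-contained sketch with fake image rows (hypothetical data):

```python
DISPLAY_FIELDS_IN_INDEX = ['id', 'name', 'size',
                           'disk_format', 'container_format',
                           'checksum']

def index(images):
    """Project each image mapping onto the whitelisted display fields."""
    results = []
    for image in images:
        results.append({field: image[field]
                        for field in DISPLAY_FIELDS_IN_INDEX})
    return {'images': results}

images = [{'id': 1, 'name': 'fake image #1', 'size': 19,
           'disk_format': 'vhd', 'container_format': 'ovf',
           'checksum': None, 'location': 'file:///tmp/glance-tests/2'}]
out = index(images)
print(out)
```

Fields not in the whitelist, such as `location`, never leak into the index response.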
591 | === modified file 'glance/server.py' | |||
592 | --- glance/server.py 2011-03-09 00:07:44 +0000 | |||
593 | +++ glance/server.py 2011-03-23 15:06:49 +0000 | |||
594 | @@ -29,6 +29,7 @@ | |||
595 | 29 | 29 | ||
596 | 30 | """ | 30 | """ |
597 | 31 | 31 | ||
598 | 32 | import httplib | ||
599 | 32 | import json | 33 | import json |
600 | 33 | import logging | 34 | import logging |
601 | 34 | import sys | 35 | import sys |
602 | @@ -68,8 +69,8 @@ | |||
603 | 68 | GET /images/<ID> -- Return image data for image with id <ID> | 69 | GET /images/<ID> -- Return image data for image with id <ID> |
604 | 69 | POST /images -- Store image data and return metadata about the | 70 | POST /images -- Store image data and return metadata about the |
605 | 70 | newly-stored image | 71 | newly-stored image |
608 | 71 | PUT /images/<ID> -- Update image metadata (not image data, since | 72 | PUT /images/<ID> -- Update image metadata and/or upload image |
609 | 72 | image data is immutable once stored) | 73 | data for a previously-reserved image |
610 | 73 | DELETE /images/<ID> -- Delete the image with id <ID> | 74 | DELETE /images/<ID> -- Delete the image with id <ID> |
611 | 74 | """ | 75 | """ |
612 | 75 | 76 | ||
613 | @@ -82,6 +83,9 @@ | |||
614 | 82 | 83 | ||
615 | 83 | * id -- The opaque image identifier | 84 | * id -- The opaque image identifier |
616 | 84 | * name -- The name of the image | 85 | * name -- The name of the image |
617 | 86 | * disk_format -- The disk image format | ||
618 | 87 | * container_format -- The "container" format of the image | ||
619 | 88 | * checksum -- MD5 checksum of the image data | ||
620 | 85 | * size -- Size of image data in bytes | 89 | * size -- Size of image data in bytes |
621 | 86 | 90 | ||
622 | 87 | :param request: The WSGI/Webob Request object | 91 | :param request: The WSGI/Webob Request object |
623 | @@ -92,6 +96,7 @@ | |||
624 | 92 | 'name': <NAME>, | 96 | 'name': <NAME>, |
625 | 93 | 'disk_format': <DISK_FORMAT>, | 97 | 'disk_format': <DISK_FORMAT>, |
626 | 94 | 'container_format': <DISK_FORMAT>, | 98 | 'container_format': <DISK_FORMAT>, |
627 | 99 | 'checksum': <CHECKSUM> | ||
628 | 95 | 'size': <SIZE>}, ... | 100 | 'size': <SIZE>}, ... |
629 | 96 | ]} | 101 | ]} |
630 | 97 | """ | 102 | """ |
631 | @@ -111,6 +116,7 @@ | |||
632 | 111 | 'size': <SIZE>, | 116 | 'size': <SIZE>, |
633 | 112 | 'disk_format': <DISK_FORMAT>, | 117 | 'disk_format': <DISK_FORMAT>, |
634 | 113 | 'container_format': <CONTAINER_FORMAT>, | 118 | 'container_format': <CONTAINER_FORMAT>, |
635 | 119 | 'checksum': <CHECKSUM>, | ||
636 | 114 | 'store': <STORE>, | 120 | 'store': <STORE>, |
637 | 115 | 'status': <STATUS>, | 121 | 'status': <STATUS>, |
638 | 116 | 'created_at': <TIMESTAMP>, | 122 | 'created_at': <TIMESTAMP>, |
639 | @@ -136,6 +142,8 @@ | |||
640 | 136 | 142 | ||
641 | 137 | res = Response(request=req) | 143 | res = Response(request=req) |
642 | 138 | utils.inject_image_meta_into_headers(res, image) | 144 | utils.inject_image_meta_into_headers(res, image) |
643 | 145 | res.headers.add('Location', "/images/%s" % id) | ||
644 | 146 | res.headers.add('ETag', image['checksum']) | ||
645 | 139 | 147 | ||
646 | 140 | return req.get_response(res) | 148 | return req.get_response(res) |
647 | 141 | 149 | ||
648 | @@ -165,6 +173,8 @@ | |||
649 | 165 | res = Response(app_iter=image_iterator(), | 173 | res = Response(app_iter=image_iterator(), |
650 | 166 | content_type="text/plain") | 174 | content_type="text/plain") |
651 | 167 | utils.inject_image_meta_into_headers(res, image) | 175 | utils.inject_image_meta_into_headers(res, image) |
652 | 176 | res.headers.add('Location', "/images/%s" % id) | ||
653 | 177 | res.headers.add('ETag', image['checksum']) | ||
654 | 168 | return req.get_response(res) | 178 | return req.get_response(res) |
655 | 169 | 179 | ||
656 | 170 | def _reserve(self, req): | 180 | def _reserve(self, req): |
657 | @@ -225,36 +235,45 @@ | |||
658 | 225 | 235 | ||
659 | 226 | store = self.get_store_or_400(req, store_name) | 236 | store = self.get_store_or_400(req, store_name) |
660 | 227 | 237 | ||
661 | 228 | image_meta['status'] = 'saving' | ||
662 | 229 | image_id = image_meta['id'] | 238 | image_id = image_meta['id'] |
664 | 230 | logger.debug("Updating image metadata for image %s" | 239 | logger.debug("Setting image %s to status 'saving'" |
665 | 231 | % image_id) | 240 | % image_id) |
670 | 232 | registry.update_image_metadata(self.options, | 241 | registry.update_image_metadata(self.options, image_id, |
671 | 233 | image_meta['id'], | 242 | {'status': 'saving'}) |
668 | 234 | image_meta) | ||
669 | 235 | |||
672 | 236 | try: | 243 | try: |
673 | 237 | logger.debug("Uploading image data for image %(image_id)s " | 244 | logger.debug("Uploading image data for image %(image_id)s " |
674 | 238 | "to %(store_name)s store" % locals()) | 245 | "to %(store_name)s store" % locals()) |
688 | 239 | location, size = store.add(image_meta['id'], | 246 | location, size, checksum = store.add(image_meta['id'], |
689 | 240 | req.body_file, | 247 | req.body_file, |
690 | 241 | self.options) | 248 | self.options) |
691 | 242 | # If size returned from store is different from size | 249 | |
692 | 243 | # already stored in registry, update the registry with | 250 | # Verify any supplied checksum value matches checksum |
693 | 244 | # the new size of the image | 251 | # returned from store when adding image |
694 | 245 | if image_meta.get('size', 0) != size: | 252 | supplied_checksum = image_meta.get('checksum') |
695 | 246 | image_meta['size'] = size | 253 | if supplied_checksum and supplied_checksum != checksum: |
696 | 247 | logger.debug("Updating image metadata for image %s" | 254 | msg = ("Supplied checksum (%(supplied_checksum)s) and " |
697 | 248 | % image_id) | 255 | "checksum generated from uploaded image " |
698 | 249 | registry.update_image_metadata(self.options, | 256 | "(%(checksum)s) did not match. Setting image " |
699 | 250 | image_meta['id'], | 257 | "status to 'killed'.") % locals() |
700 | 251 | image_meta) | 258 | self._safe_kill(req, image_meta) |
701 | 259 | raise HTTPBadRequest(msg, content_type="text/plain", | ||
702 | 260 | request=req) | ||
703 | 261 | |||
704 | 262 | # Update the database with the checksum returned | ||
705 | 263 | # from the backend store | ||
706 | 264 | logger.debug("Updating image %(image_id)s data. " | ||
707 | 265 | "Checksum set to %(checksum)s, size set " | ||
708 | 266 | "to %(size)d" % locals()) | ||
709 | 267 | registry.update_image_metadata(self.options, image_id, | ||
710 | 268 | {'checksum': checksum, | ||
711 | 269 | 'size': size}) | ||
712 | 270 | |||
713 | 252 | return location | 271 | return location |
714 | 253 | except exception.Duplicate, e: | 272 | except exception.Duplicate, e: |
715 | 254 | logger.error("Error adding image to store: %s", str(e)) | 273 | logger.error("Error adding image to store: %s", str(e)) |
716 | 255 | raise HTTPConflict(str(e), request=req) | 274 | raise HTTPConflict(str(e), request=req) |
717 | 256 | 275 | ||
719 | 257 | def _activate(self, req, image_meta, location): | 276 | def _activate(self, req, image_id, location): |
720 | 258 | """ | 277 | """ |
721 | 259 | Sets the image status to `active` and the image's location | 278 | Sets the image status to `active` and the image's location |
722 | 260 | attribute. | 279 | attribute. |
723 | @@ -263,25 +282,25 @@ | |||
724 | 263 | :param image_meta: Mapping of metadata about image | 282 | :param image_meta: Mapping of metadata about image |
725 | 264 | :param location: Location of where Glance stored this image | 283 | :param location: Location of where Glance stored this image |
726 | 265 | """ | 284 | """ |
727 | 285 | image_meta = {} | ||
728 | 266 | image_meta['location'] = location | 286 | image_meta['location'] = location |
729 | 267 | image_meta['status'] = 'active' | 287 | image_meta['status'] = 'active' |
732 | 268 | registry.update_image_metadata(self.options, | 288 | return registry.update_image_metadata(self.options, |
733 | 269 | image_meta['id'], | 289 | image_id, |
734 | 270 | image_meta) | 290 | image_meta) |
735 | 271 | 291 | ||
737 | 272 | def _kill(self, req, image_meta): | 292 | def _kill(self, req, image_id): |
738 | 273 | """ | 293 | """ |
739 | 274 | Marks the image status to `killed` | 294 | Marks the image status to `killed` |
740 | 275 | 295 | ||
741 | 276 | :param request: The WSGI/Webob Request object | 296 | :param request: The WSGI/Webob Request object |
743 | 277 | :param image_meta: Mapping of metadata about image | 297 | :param image_id: Opaque image identifier |
744 | 278 | """ | 298 | """ |
745 | 279 | image_meta['status'] = 'killed' | ||
746 | 280 | registry.update_image_metadata(self.options, | 299 | registry.update_image_metadata(self.options, |
749 | 281 | image_meta['id'], | 300 | image_id, |
750 | 282 | image_meta) | 301 | {'status': 'killed'}) |
751 | 283 | 302 | ||
753 | 284 | def _safe_kill(self, req, image_meta): | 303 | def _safe_kill(self, req, image_id): |
754 | 285 | """ | 304 | """ |
755 | 286 | Mark image killed without raising exceptions if it fails. | 305 | Mark image killed without raising exceptions if it fails. |
756 | 287 | 306 | ||
757 | @@ -289,12 +308,13 @@ | |||
758 | 289 | not raise itself, rather it should just log its error. | 308 | not raise itself, rather it should just log its error. |
759 | 290 | 309 | ||
760 | 291 | :param request: The WSGI/Webob Request object | 310 | :param request: The WSGI/Webob Request object |
761 | 311 | :param image_id: Opaque image identifier | ||
762 | 292 | """ | 312 | """ |
763 | 293 | try: | 313 | try: |
765 | 294 | self._kill(req, image_meta) | 314 | self._kill(req, image_id) |
766 | 295 | except Exception, e: | 315 | except Exception, e: |
767 | 296 | logger.error("Unable to kill image %s: %s", | 316 | logger.error("Unable to kill image %s: %s", |
769 | 297 | image_meta['id'], repr(e)) | 317 | image_id, repr(e)) |
770 | 298 | 318 | ||
771 | 299 | def _upload_and_activate(self, req, image_meta): | 319 | def _upload_and_activate(self, req, image_meta): |
772 | 300 | """ | 320 | """ |
773 | @@ -304,13 +324,16 @@ | |||
774 | 304 | 324 | ||
775 | 305 | :param request: The WSGI/Webob Request object | 325 | :param request: The WSGI/Webob Request object |
776 | 306 | :param image_meta: Mapping of metadata about image | 326 | :param image_meta: Mapping of metadata about image |
777 | 327 | |||
778 | 328 | :retval Mapping of updated image data | ||
779 | 307 | """ | 329 | """ |
780 | 308 | try: | 330 | try: |
781 | 331 | image_id = image_meta['id'] | ||
782 | 309 | location = self._upload(req, image_meta) | 332 | location = self._upload(req, image_meta) |
784 | 310 | self._activate(req, image_meta, location) | 333 | return self._activate(req, image_id, location) |
785 | 311 | except: # unqualified b/c we're re-raising it | 334 | except: # unqualified b/c we're re-raising it |
786 | 312 | exc_type, exc_value, exc_traceback = sys.exc_info() | 335 | exc_type, exc_value, exc_traceback = sys.exc_info() |
788 | 313 | self._safe_kill(req, image_meta) | 336 | self._safe_kill(req, image_id) |
789 | 314 | # NOTE(sirp): _safe_kill uses httplib which, in turn, uses | 337 | # NOTE(sirp): _safe_kill uses httplib which, in turn, uses |
790 | 315 | # Eventlet's GreenSocket. Eventlet subsequently clears exceptions | 338 | # Eventlet's GreenSocket. Eventlet subsequently clears exceptions |
791 | 316 | # by calling `sys.exc_clear()`. | 339 | # by calling `sys.exc_clear()`. |
792 | @@ -318,8 +341,8 @@ | |||
793 | 318 | # This is why we can't use a raise with no arguments here: our | 341 | # This is why we can't use a raise with no arguments here: our |
794 | 319 | # exception context was destroyed by Eventlet. To work around | 342 | # exception context was destroyed by Eventlet. To work around |
795 | 320 | # this, we need to 'memorize' the exception context, and then | 343 | # this, we need to 'memorize' the exception context, and then |
798 | 321 | # re-raise using 3-arg form after Eventlet has run | 344 | # re-raise here. |
799 | 322 | raise exc_type, exc_value, exc_traceback | 345 | raise exc_type(exc_traceback) |
800 | 323 | 346 | ||
801 | 324 | def create(self, req): | 347 | def create(self, req): |
802 | 325 | """ | 348 | """ |
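The NOTE(sirp) hunk above memorizes `sys.exc_info()` before calling `_safe_kill` because Eventlet can clear the active exception context. The Python 2 three-argument `raise` used for this has no direct Python 3 spelling; a rough modern equivalent of the same save-then-re-raise pattern (plain-Python sketch, no Eventlet involved) looks like:

```python
import sys

def risky():
    raise ValueError("upload failed")

def cleanup():
    # stand-in for _safe_kill(); may clobber the active exception context
    pass

try:
    try:
        risky()
    except Exception:
        # 'memorize' the exception context before any cleanup runs
        exc_type, exc_value, exc_traceback = sys.exc_info()
        cleanup()
        # Python 2 equivalent: raise exc_type, exc_value, exc_traceback
        raise exc_value.with_traceback(exc_traceback)
except ValueError as e:
    caught = e

print(type(caught).__name__, caught)  # ValueError upload failed
```

Re-raising the saved value with its saved traceback keeps the original failure location visible even when cleanup code has run in between.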
803 | @@ -354,19 +377,21 @@ | |||
804 | 354 | image data. | 377 | image data. |
805 | 355 | """ | 378 | """ |
806 | 356 | image_meta = self._reserve(req) | 379 | image_meta = self._reserve(req) |
807 | 380 | image_id = image_meta['id'] | ||
808 | 357 | 381 | ||
809 | 358 | if utils.has_body(req): | 382 | if utils.has_body(req): |
811 | 359 | self._upload_and_activate(req, image_meta) | 383 | image_meta = self._upload_and_activate(req, image_meta) |
812 | 360 | else: | 384 | else: |
813 | 361 | if 'x-image-meta-location' in req.headers: | 385 | if 'x-image-meta-location' in req.headers: |
814 | 362 | location = req.headers['x-image-meta-location'] | 386 | location = req.headers['x-image-meta-location'] |
816 | 363 | self._activate(req, image_meta, location) | 387 | image_meta = self._activate(req, image_id, location) |
817 | 364 | 388 | ||
818 | 365 | # APP states we should return a Location: header with the edit | 389 | # APP states we should return a Location: header with the edit |
819 | 366 | # URI of the resource newly-created. | 390 | # URI of the resource newly-created. |
820 | 367 | res = Response(request=req, body=json.dumps(dict(image=image_meta)), | 391 | res = Response(request=req, body=json.dumps(dict(image=image_meta)), |
823 | 368 | content_type="text/plain") | 392 | status=httplib.CREATED, content_type="text/plain") |
824 | 369 | res.headers.add('Location', "/images/%s" % image_meta['id']) | 393 | res.headers.add('Location', "/images/%s" % image_id) |
825 | 394 | res.headers.add('ETag', image_meta['checksum']) | ||
826 | 370 | 395 | ||
827 | 371 | return req.get_response(res) | 396 | return req.get_response(res) |
828 | 372 | 397 | ||
829 | @@ -393,9 +418,14 @@ | |||
830 | 393 | id, | 418 | id, |
831 | 394 | new_image_meta) | 419 | new_image_meta) |
832 | 395 | if has_body: | 420 | if has_body: |
834 | 396 | self._upload_and_activate(req, image_meta) | 421 | image_meta = self._upload_and_activate(req, image_meta) |
835 | 397 | 422 | ||
837 | 398 | return dict(image=image_meta) | 423 | res = Response(request=req, |
838 | 424 | body=json.dumps(dict(image=image_meta)), | ||
839 | 425 | content_type="text/plain") | ||
840 | 426 | res.headers.add('Location', "/images/%s" % id) | ||
841 | 427 | res.headers.add('ETag', image_meta['checksum']) | ||
842 | 428 | return res | ||
843 | 399 | except exception.Invalid, e: | 429 | except exception.Invalid, e: |
844 | 400 | msg = ("Failed to update image metadata. Got error: %(e)s" | 430 | msg = ("Failed to update image metadata. Got error: %(e)s" |
845 | 401 | % locals()) | 431 | % locals()) |
846 | 402 | 432 | ||
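The `_upload` changes above compare an optionally supplied `X-Image-Meta-Checksum` value against the digest the backend store computed while writing the data. The core comparison can be sketched in isolation (hypothetical helper; hashlib only, no registry or store involved):

```python
import hashlib

def verify_checksum(image_data, supplied_checksum=None):
    """Return the MD5 hex digest; raise if a supplied value mismatches."""
    checksum = hashlib.md5(image_data).hexdigest()
    if supplied_checksum and supplied_checksum != checksum:
        raise ValueError("Supplied checksum (%s) and checksum generated "
                         "from uploaded image (%s) did not match"
                         % (supplied_checksum, checksum))
    return checksum

data = b"chunk00000remainder"
good = verify_checksum(data, hashlib.md5(data).hexdigest())
print(good)
```

When no checksum header is supplied the upload always succeeds and the computed digest is simply recorded; only an explicit mismatch is rejected.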
847 | === modified file 'glance/store/__init__.py' | |||
848 | --- glance/store/__init__.py 2011-02-24 02:50:24 +0000 | |||
849 | +++ glance/store/__init__.py 2011-03-23 15:06:49 +0000 | |||
850 | @@ -141,40 +141,3 @@ | |||
851 | 141 | authurl = "https://%s" % '/'.join(path_parts) | 141 | authurl = "https://%s" % '/'.join(path_parts) |
852 | 142 | 142 | ||
853 | 143 | return user, key, authurl, container, obj | 143 | return user, key, authurl, container, obj |
854 | 144 | |||
855 | 145 | |||
856 | 146 | def add_options(parser): | ||
857 | 147 | """ | ||
858 | 148 | Adds any configuration options that each store might | ||
859 | 149 | have. | ||
860 | 150 | |||
861 | 151 | :param parser: An optparse.OptionParser object | ||
862 | 152 | :retval None | ||
863 | 153 | """ | ||
864 | 154 | # TODO(jaypipes): Remove these imports... | ||
865 | 155 | from glance.store.http import HTTPBackend | ||
866 | 156 | from glance.store.s3 import S3Backend | ||
867 | 157 | from glance.store.swift import SwiftBackend | ||
868 | 158 | from glance.store.filesystem import FilesystemBackend | ||
869 | 159 | |||
870 | 160 | help_text = "The following configuration options are specific to the "\ | ||
871 | 161 | "Glance image store." | ||
872 | 162 | |||
873 | 163 | DEFAULT_STORE_CHOICES = ['file', 'swift', 's3'] | ||
874 | 164 | group = optparse.OptionGroup(parser, "Image Store Options", help_text) | ||
875 | 165 | group.add_option('--default-store', metavar="STORE", | ||
876 | 166 | default="file", | ||
877 | 167 | choices=DEFAULT_STORE_CHOICES, | ||
878 | 168 | help="The backend store that Glance will use to store " | ||
879 | 169 | "virtual machine images to. Choices: ('%s') " | ||
880 | 170 | "Default: %%default" % "','".join(DEFAULT_STORE_CHOICES)) | ||
881 | 171 | |||
882 | 172 | backend_classes = [FilesystemBackend, | ||
883 | 173 | HTTPBackend, | ||
884 | 174 | SwiftBackend, | ||
885 | 175 | S3Backend] | ||
886 | 176 | for backend_class in backend_classes: | ||
887 | 177 | if hasattr(backend_class, 'add_options'): | ||
888 | 178 | backend_class.add_options(group) | ||
889 | 179 | |||
890 | 180 | parser.add_option_group(group) | ||
891 | 181 | 144 | ||
892 | === modified file 'glance/store/filesystem.py' | |||
893 | --- glance/store/filesystem.py 2011-02-27 20:54:29 +0000 | |||
894 | +++ glance/store/filesystem.py 2011-03-23 15:06:49 +0000 | |||
895 | @@ -19,6 +19,7 @@ | |||
896 | 19 | A simple filesystem-backed store | 19 | A simple filesystem-backed store |
897 | 20 | """ | 20 | """ |
898 | 21 | 21 | ||
899 | 22 | import hashlib | ||
900 | 22 | import logging | 23 | import logging |
901 | 23 | import os | 24 | import os |
902 | 24 | import urlparse | 25 | import urlparse |
903 | @@ -110,9 +111,10 @@ | |||
904 | 110 | :param data: The image data to write, as a file-like object | 111 | :param data: The image data to write, as a file-like object |
905 | 111 | :param options: Conf mapping | 112 | :param options: Conf mapping |
906 | 112 | 113 | ||
910 | 113 | :retval Tuple with (location, size) | 114 | :retval Tuple with (location, size, checksum) |
911 | 114 | The location that was written, with file:// scheme prepended | 115 | The location that was written, with file:// scheme prepended, |
912 | 115 | and the size in bytes of the data written | 116 | the size in bytes of the data written, and the checksum of |
913 | 117 | the image added. | ||
914 | 116 | """ | 118 | """ |
915 | 117 | datadir = options['filesystem_store_datadir'] | 119 | datadir = options['filesystem_store_datadir'] |
916 | 118 | 120 | ||
917 | @@ -127,6 +129,7 @@ | |||
918 | 127 | raise exception.Duplicate("Image file %s already exists!" | 129 | raise exception.Duplicate("Image file %s already exists!" |
919 | 128 | % filepath) | 130 | % filepath) |
920 | 129 | 131 | ||
921 | 132 | checksum = hashlib.md5() | ||
922 | 130 | bytes_written = 0 | 133 | bytes_written = 0 |
923 | 131 | with open(filepath, 'wb') as f: | 134 | with open(filepath, 'wb') as f: |
924 | 132 | while True: | 135 | while True: |
925 | @@ -134,23 +137,11 @@ | |||
926 | 134 | if not buf: | 137 | if not buf: |
927 | 135 | break | 138 | break |
928 | 136 | bytes_written += len(buf) | 139 | bytes_written += len(buf) |
929 | 140 | checksum.update(buf) | ||
930 | 137 | f.write(buf) | 141 | f.write(buf) |
931 | 138 | 142 | ||
950 | 139 | logger.debug("Wrote %(bytes_written)d bytes to %(filepath)s" | 143 | checksum_hex = checksum.hexdigest() |
951 | 140 | % locals()) | 144 | |
952 | 141 | return ('file://%s' % filepath, bytes_written) | 145 | logger.debug("Wrote %(bytes_written)d bytes to %(filepath)s with " |
953 | 142 | 146 | "checksum %(checksum_hex)s" % locals()) | |
954 | 143 | @classmethod | 147 | return ('file://%s' % filepath, bytes_written, checksum_hex) |
937 | 144 | def add_options(cls, parser): | ||
938 | 145 | """ | ||
939 | 146 | Adds specific configuration options for this store | ||
940 | 147 | |||
941 | 148 | :param parser: An optparse.OptionParser object | ||
942 | 149 | :retval None | ||
943 | 150 | """ | ||
944 | 151 | |||
945 | 152 | parser.add_option('--filesystem-store-datadir', metavar="DIR", | ||
946 | 153 | default="/var/lib/glance/images/", | ||
947 | 154 | help="Location to write image data. This directory " | ||
948 | 155 | "should be writeable by the user that runs the " | ||
949 | 156 | "glance-api program. Default: %default") | ||
955 | 157 | 148 | ||
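The filesystem store now folds an `md5.update()` into its existing chunked write loop, so computing the digest costs no extra pass over the image data. A standalone sketch of the same loop (stdlib only; temp file instead of the store's configured datadir, chunk size hypothetical):

```python
import hashlib
import io
import os
import tempfile

CHUNKSIZE = 64 * 1024  # hypothetical; the store reads fixed-size chunks

def add(data, filepath):
    """Write a file-like object to filepath.

    Returns (location, size, checksum), mirroring the store's new retval.
    """
    checksum = hashlib.md5()
    bytes_written = 0
    with open(filepath, "wb") as f:
        while True:
            buf = data.read(CHUNKSIZE)
            if not buf:
                break
            bytes_written += len(buf)
            checksum.update(buf)  # digest advances chunk-by-chunk
            f.write(buf)
    return "file://%s" % filepath, bytes_written, checksum.hexdigest()

path = os.path.join(tempfile.mkdtemp(), "1")
location, size, digest = add(io.BytesIO(b"chunk00000remainder"), path)
print(location, size, digest)
```

Because the digest is updated incrementally, images never need to fit in memory to be checksummed.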
956 | === modified file 'glance/store/swift.py' | |||
957 | --- glance/store/swift.py 2011-03-16 16:45:01 +0000 | |||
958 | +++ glance/store/swift.py 2011-03-23 15:06:49 +0000 | |||
959 | @@ -161,7 +161,7 @@ | |||
960 | 161 | # header keys are lowercased by Swift | 161 | # header keys are lowercased by Swift |
961 | 162 | if 'content-length' in resp_headers: | 162 | if 'content-length' in resp_headers: |
962 | 163 | size = int(resp_headers['content-length']) | 163 | size = int(resp_headers['content-length']) |
964 | 164 | return (location, size) | 164 | return (location, size, obj_etag) |
965 | 165 | except swift_client.ClientException, e: | 165 | except swift_client.ClientException, e: |
966 | 166 | if e.http_status == httplib.CONFLICT: | 166 | if e.http_status == httplib.CONFLICT: |
967 | 167 | raise exception.Duplicate("Swift already has an image at " | 167 | raise exception.Duplicate("Swift already has an image at " |
968 | 168 | 168 | ||
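The Swift store gets its checksum almost for free: for a simple (non-segmented) object PUT, Swift computes the ETag server-side as the MD5 of the object body, so the store can hand `obj_etag` straight back as the third tuple element. The invariant being relied on, verifiable locally:

```python
import hashlib

body = b"chunk00000remainder"
# For a plain object PUT, Swift's returned ETag is the MD5 hex digest
# of the body; a client can recompute it to verify the upload.
expected_etag = hashlib.md5(body).hexdigest()
print(expected_etag)
```

Note this holds only for non-manifest objects; segmented uploads carry a different ETag semantics.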
969 | === modified file 'tests/stubs.py' | |||
970 | --- tests/stubs.py 2011-03-14 19:10:24 +0000 | |||
971 | +++ tests/stubs.py 2011-03-23 15:06:49 +0000 | |||
972 | @@ -282,6 +282,7 @@ | |||
973 | 282 | 'updated_at': datetime.datetime.utcnow(), | 282 | 'updated_at': datetime.datetime.utcnow(), |
974 | 283 | 'deleted_at': None, | 283 | 'deleted_at': None, |
975 | 284 | 'deleted': False, | 284 | 'deleted': False, |
976 | 285 | 'checksum': None, | ||
977 | 285 | 'size': 13, | 286 | 'size': 13, |
978 | 286 | 'location': "swift://user:passwd@acct/container/obj.tar.0", | 287 | 'location': "swift://user:passwd@acct/container/obj.tar.0", |
979 | 287 | 'properties': [{'key': 'type', | 288 | 'properties': [{'key': 'type', |
980 | @@ -297,6 +298,7 @@ | |||
981 | 297 | 'updated_at': datetime.datetime.utcnow(), | 298 | 'updated_at': datetime.datetime.utcnow(), |
982 | 298 | 'deleted_at': None, | 299 | 'deleted_at': None, |
983 | 299 | 'deleted': False, | 300 | 'deleted': False, |
984 | 301 | 'checksum': None, | ||
985 | 300 | 'size': 19, | 302 | 'size': 19, |
986 | 301 | 'location': "file:///tmp/glance-tests/2", | 303 | 'location': "file:///tmp/glance-tests/2", |
987 | 302 | 'properties': []}] | 304 | 'properties': []}] |
988 | @@ -316,6 +318,7 @@ | |||
989 | 316 | glance.registry.db.api.validate_image(values) | 318 | glance.registry.db.api.validate_image(values) |
990 | 317 | 319 | ||
991 | 318 | values['size'] = values.get('size', 0) | 320 | values['size'] = values.get('size', 0) |
992 | 321 | values['checksum'] = values.get('checksum') | ||
993 | 319 | values['deleted'] = False | 322 | values['deleted'] = False |
994 | 320 | values['properties'] = values.get('properties', {}) | 323 | values['properties'] = values.get('properties', {}) |
995 | 321 | values['created_at'] = datetime.datetime.utcnow() | 324 | values['created_at'] = datetime.datetime.utcnow() |
996 | @@ -348,6 +351,7 @@ | |||
997 | 348 | copy_image.update(values) | 351 | copy_image.update(values) |
998 | 349 | glance.registry.db.api.validate_image(copy_image) | 352 | glance.registry.db.api.validate_image(copy_image) |
999 | 350 | props = [] | 353 | props = [] |
1000 | 354 | orig_properties = image['properties'] | ||
1001 | 351 | 355 | ||
1002 | 352 | if 'properties' in values.keys(): | 356 | if 'properties' in values.keys(): |
1003 | 353 | for k, v in values['properties'].items(): | 357 | for k, v in values['properties'].items(): |
1004 | @@ -360,7 +364,8 @@ | |||
1005 | 360 | p['deleted_at'] = None | 364 | p['deleted_at'] = None |
1006 | 361 | props.append(p) | 365 | props.append(p) |
1007 | 362 | 366 | ||
1009 | 363 | values['properties'] = props | 367 | orig_properties = orig_properties + props |
1010 | 368 | values['properties'] = orig_properties | ||
1011 | 364 | 369 | ||
1012 | 365 | image.update(values) | 370 | image.update(values) |
1013 | 366 | return image | 371 | return image |
1014 | 367 | 372 | ||
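The stub change above also fixes update behavior so that new image properties are appended to the existing list rather than silently replacing it. The before/after difference in miniature (hypothetical property records):

```python
orig_properties = [{'key': 'type', 'value': 'kernel'}]
new_props = [{'key': 'arch', 'value': 'x86_64'}]

# before: values['properties'] = new_props dropped the existing 'type' entry
replaced = new_props

# after: existing properties survive the update
merged = orig_properties + new_props
print([p['key'] for p in merged])  # ['type', 'arch']
```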
1015 | === modified file 'tests/unit/test_api.py' | |||
1016 | --- tests/unit/test_api.py 2011-03-14 19:10:24 +0000 | |||
1017 | +++ tests/unit/test_api.py 2011-03-23 15:06:49 +0000 | |||
1018 | @@ -15,8 +15,9 @@ | |||
1019 | 15 | # License for the specific language governing permissions and limitations | 15 | # License for the specific language governing permissions and limitations |
1020 | 16 | # under the License. | 16 | # under the License. |
1021 | 17 | 17 | ||
+import hashlib
+import httplib
 import os
-import httplib
 import json
 import unittest

@@ -52,7 +53,9 @@

         """
         fixture = {'id': 2,
-                   'name': 'fake image #2'}
+                   'name': 'fake image #2',
+                   'size': 19,
+                   'checksum': None}
         req = webob.Request.blank('/')
         res = req.get_response(self.api)
         res_dict = json.loads(res.body)

@@ -70,7 +73,9 @@

         """
         fixture = {'id': 2,
-                   'name': 'fake image #2'}
+                   'name': 'fake image #2',
+                   'size': 19,
+                   'checksum': None}
         req = webob.Request.blank('/images')
         res = req.get_response(self.api)
         res_dict = json.loads(res.body)

@@ -90,6 +95,8 @@
         fixture = {'id': 2,
                    'name': 'fake image #2',
                    'is_public': True,
+                   'size': 19,
+                   'checksum': None,
                    'disk_format': 'vhd',
                    'container_format': 'ovf',
                    'status': 'active'}

@@ -396,7 +403,7 @@
         for k, v in fixture_headers.iteritems():
             req.headers[k] = v
         res = req.get_response(self.api)
-        self.assertEquals(res.status_int, httplib.OK)
+        self.assertEquals(res.status_int, httplib.CREATED)

         res_body = json.loads(res.body)['image']
         self.assertEquals('queued', res_body['status'])

@@ -431,7 +438,7 @@
         req.headers['Content-Type'] = 'application/octet-stream'
         req.body = "chunk00000remainder"
         res = req.get_response(self.api)
-        self.assertEquals(res.status_int, 200)
+        self.assertEquals(res.status_int, httplib.CREATED)

         res_body = json.loads(res.body)['image']
         self.assertEquals(res_body['location'],

@@ -445,6 +452,97 @@
             "res.headerlist = %r" % res.headerlist)
         self.assertTrue('/images/3' in res.headers['location'])

+    def test_image_is_checksummed(self):
+        """Test that the image contents are checksummed properly"""
+        fixture_headers = {'x-image-meta-store': 'file',
+                           'x-image-meta-disk-format': 'vhd',
+                           'x-image-meta-container-format': 'ovf',
+                           'x-image-meta-name': 'fake image #3'}
+        image_contents = "chunk00000remainder"
+        image_checksum = hashlib.md5(image_contents).hexdigest()
+
+        req = webob.Request.blank("/images")
+        req.method = 'POST'
+        for k, v in fixture_headers.iteritems():
+            req.headers[k] = v
+
+        req.headers['Content-Type'] = 'application/octet-stream'
+        req.body = image_contents
+        res = req.get_response(self.api)
+        self.assertEquals(res.status_int, httplib.CREATED)
+
+        res_body = json.loads(res.body)['image']
+        self.assertEquals(res_body['location'],
+                          'file:///tmp/glance-tests/3')
+        self.assertEquals(image_checksum, res_body['checksum'],
+                          "Mismatched checksum. Expected %s, got %s" %
+                          (image_checksum, res_body['checksum']))
+
+    def test_etag_equals_checksum_header(self):
+        """Test that the ETag header matches the x-image-meta-checksum"""
+        fixture_headers = {'x-image-meta-store': 'file',
+                           'x-image-meta-disk-format': 'vhd',
+                           'x-image-meta-container-format': 'ovf',
+                           'x-image-meta-name': 'fake image #3'}
+        image_contents = "chunk00000remainder"
+        image_checksum = hashlib.md5(image_contents).hexdigest()
+
+        req = webob.Request.blank("/images")
+        req.method = 'POST'
+        for k, v in fixture_headers.iteritems():
+            req.headers[k] = v
+
+        req.headers['Content-Type'] = 'application/octet-stream'
+        req.body = image_contents
+        res = req.get_response(self.api)
+        self.assertEquals(res.status_int, httplib.CREATED)
+
+        # HEAD the image and check the ETag equals the checksum header...
+        expected_headers = {'x-image-meta-checksum': image_checksum,
+                            'etag': image_checksum}
+        req = webob.Request.blank("/images/3")
+        req.method = 'HEAD'
+        res = req.get_response(self.api)
+        self.assertEquals(res.status_int, 200)
+
+        for key in expected_headers.keys():
+            self.assertTrue(key in res.headers,
+                            "required header '%s' missing from "
+                            "returned headers" % key)
+        for key, value in expected_headers.iteritems():
+            self.assertEquals(value, res.headers[key])
+
+    def test_bad_checksum_kills_image(self):
+        """Test that a mismatched x-image-meta-checksum kills the image"""
+        image_contents = "chunk00000remainder"
+        bad_checksum = hashlib.md5("invalid").hexdigest()
+        fixture_headers = {'x-image-meta-store': 'file',
+                           'x-image-meta-disk-format': 'vhd',
+                           'x-image-meta-container-format': 'ovf',
+                           'x-image-meta-name': 'fake image #3',
+                           'x-image-meta-checksum': bad_checksum}
+
+        req = webob.Request.blank("/images")
+        req.method = 'POST'
+        for k, v in fixture_headers.iteritems():
+            req.headers[k] = v
+
+        req.headers['Content-Type'] = 'application/octet-stream'
+        req.body = image_contents
+        res = req.get_response(self.api)
+        self.assertEquals(res.status_int, webob.exc.HTTPBadRequest.code)
+
+        # Test the image was killed...
+        expected_headers = {'x-image-meta-id': '3',
+                            'x-image-meta-status': 'killed'}
+        req = webob.Request.blank("/images/3")
+        req.method = 'HEAD'
+        res = req.get_response(self.api)
+        self.assertEquals(res.status_int, 200)
+
+        for key, value in expected_headers.iteritems():
+            self.assertEquals(value, res.headers[key])
+
     def test_image_meta(self):
         """Test for HEAD /images/<ID>"""
         expected_headers = {'x-image-meta-id': '2',
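The new API tests all derive the expected `X-Image-Meta-Checksum` value the same way: the MD5 hex digest of the raw image bytes. As a minimal standalone sketch of the client-side computation (Python 3 syntax here, unlike the Python 2 test code; the header dict is illustrative only, following the x-image-meta-* convention used in the tests):

```python
import hashlib

# The checksum Glance compares against X-Image-Meta-Checksum is the
# MD5 hex digest of the raw image bytes -- the same value the tests
# compute with hashlib.md5(...).hexdigest().
image_contents = b"chunk00000remainder"
checksum = hashlib.md5(image_contents).hexdigest()

# Illustrative headers for an image POST (not a complete request).
headers = {
    'x-image-meta-name': 'fake image #3',
    'x-image-meta-checksum': checksum,
}
```

A mismatch between this header and the digest the backend store computes while writing the image is what drives the image into the `killed` state in `test_bad_checksum_kills_image` above.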
=== modified file 'tests/unit/test_filesystem_store.py'
--- tests/unit/test_filesystem_store.py	2011-02-27 20:54:29 +0000
+++ tests/unit/test_filesystem_store.py	2011-03-23 15:06:49 +0000
@@ -18,6 +18,7 @@
 """Tests the filesystem backend store"""

 import StringIO
+import hashlib
 import unittest
 import urlparse

@@ -27,6 +28,11 @@
 from glance.store.filesystem import FilesystemBackend, ChunkedFile
 from tests import stubs

+FILESYSTEM_OPTIONS = {
+    'verbose': True,
+    'debug': True,
+    'filesystem_store_datadir': stubs.FAKE_FILESYSTEM_ROOTDIR}
+

 class TestFilesystemBackend(unittest.TestCase):

@@ -75,17 +81,17 @@
         expected_image_id = 42
         expected_file_size = 1024 * 5  # 5K
         expected_file_contents = "*" * expected_file_size
+        expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
         expected_location = "file://%s/%s" % (stubs.FAKE_FILESYSTEM_ROOTDIR,
                                               expected_image_id)
         image_file = StringIO.StringIO(expected_file_contents)
-        options = {'verbose': True,
-                   'debug': True,
-                   'filesystem_store_datadir': stubs.FAKE_FILESYSTEM_ROOTDIR}

-        location, size = FilesystemBackend.add(42, image_file, options)
+        location, size, checksum = FilesystemBackend.add(42, image_file,
+                                                         FILESYSTEM_OPTIONS)

         self.assertEquals(expected_location, location)
         self.assertEquals(expected_file_size, size)
+        self.assertEquals(expected_checksum, checksum)

         url_pieces = urlparse.urlparse("file:///tmp/glance-tests/42")
         new_image_file = FilesystemBackend.get(url_pieces)

@@ -110,7 +116,7 @@
             'filesystem_store_datadir': stubs.FAKE_FILESYSTEM_ROOTDIR}
         self.assertRaises(exception.Duplicate,
                           FilesystemBackend.add,
-                          2, image_file, options)
+                          2, image_file, FILESYSTEM_OPTIONS)

     def test_delete(self):
         """
=== added file 'tests/unit/test_migrations.conf'
--- tests/unit/test_migrations.conf	1970-01-01 00:00:00 +0000
+++ tests/unit/test_migrations.conf	2011-03-23 15:06:49 +0000
@@ -0,0 +1,5 @@
+[DEFAULT]
+# Set up any number of migration data stores you want, one per line.
+# The "name" used in the test is the config variable key.
+sqlite=sqlite:///test_migrations.db
+mysql=mysql://root:@localhost/test_migrations
=== modified file 'tests/unit/test_migrations.py'
--- tests/unit/test_migrations.py	2011-03-16 16:11:56 +0000
+++ tests/unit/test_migrations.py	2011-03-23 15:06:49 +0000
@@ -15,39 +15,241 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+"""
+Tests for database migrations. This test case reads the configuration
+file /tests/unit/test_migrations.conf for database connection settings
+to use in the tests. For each connection found in the config file,
+the test case runs a series of test cases to ensure that migrations work
+properly both upgrading and downgrading, and that no data loss occurs
+if possible.
+"""
+
+import ConfigParser
+import datetime
 import os
 import unittest
-
+import urlparse
+
+from migrate.versioning.repository import Repository
+from sqlalchemy import *
+from sqlalchemy.pool import NullPool
+
+from glance.common import exception
 import glance.registry.db.migration as migration_api
-import glance.registry.db.api as api
-import glance.common.config as config
+from tests.unit.test_misc import execute


 class TestMigrations(unittest.TestCase):
+
     """Test sqlalchemy-migrate migrations"""

+    TEST_DATABASES = {}
+    CONFIG_FILE_PATH = os.path.join('tests', 'unit',
+                                    'test_migrations.conf')
+    REPOSITORY_PATH = os.path.join('glance', 'registry', 'db', 'migrate_repo')
+    REPOSITORY = Repository(REPOSITORY_PATH)
+
+    def __init__(self, *args, **kwargs):
+        super(TestMigrations, self).__init__(*args, **kwargs)
+
     def setUp(self):
-        self.db_path = "glance_test_migration.sqlite"
-        sql_connection = os.environ.get('GLANCE_SQL_CONNECTION',
-                                        "sqlite:///%s" % self.db_path)
-
-        self.options = dict(sql_connection=sql_connection,
-                            verbose=False)
-        config.setup_logging(self.options, {})
+        # Load test databases from the config file. Only do this
+        # once. No need to re-run this on each test...
+        if not TestMigrations.TEST_DATABASES:
+            if os.path.exists(TestMigrations.CONFIG_FILE_PATH):
+                cp = ConfigParser.RawConfigParser()
+                try:
+                    cp.read(TestMigrations.CONFIG_FILE_PATH)
+                    defaults = cp.defaults()
+                    for key, value in defaults.items():
+                        TestMigrations.TEST_DATABASES[key] = value
+                except ConfigParser.ParsingError, e:
+                    print ("Failed to read test_migrations.conf config file. "
+                           "Got error: %s" % e)
+
+        self.engines = {}
+        for key, value in TestMigrations.TEST_DATABASES.items():
+            self.engines[key] = create_engine(value, poolclass=NullPool)
+
+        # We start each test case with a completely blank slate.
+        self._reset_databases()

     def tearDown(self):
-        api.configure_db(self.options)
-        api.unregister_models()
-
-    def test_db_sync_downgrade_then_upgrade(self):
-        migration_api.db_sync(self.options)
-
-        latest = migration_api.db_version(self.options)
-
-        migration_api.downgrade(self.options, latest - 1)
-        cur_version = migration_api.db_version(self.options)
-        self.assertEqual(cur_version, latest - 1)
-
-        migration_api.upgrade(self.options, cur_version + 1)
-        cur_version = migration_api.db_version(self.options)
-        self.assertEqual(cur_version, latest)
+        # We destroy the test data store between each test case,
+        # and recreate it, which ensures that we have no side-effects
+        # from the tests
+        self._reset_databases()
+
+    def _reset_databases(self):
+        for key, engine in self.engines.items():
+            conn_string = TestMigrations.TEST_DATABASES[key]
+            conn_pieces = urlparse.urlparse(conn_string)
+            if conn_string.startswith('sqlite'):
+                # We can just delete the SQLite database, which is
+                # the easiest and cleanest solution
+                db_path = conn_pieces.path.strip('/')
+                if os.path.exists(db_path):
+                    os.unlink(db_path)
+                # No need to recreate the SQLite DB. SQLite will
+                # create it for us if it's not there...
+            elif conn_string.startswith('mysql'):
+                # We can execute the MySQL client to destroy and re-create
+                # the MYSQL database, which is easier and less error-prone
+                # than using SQLAlchemy to do this via MetaData...trust me.
+                database = conn_pieces.path.strip('/')
+                loc_pieces = conn_pieces.netloc.split('@')
+                host = loc_pieces[1]
+                auth_pieces = loc_pieces[0].split(':')
+                user = auth_pieces[0]
+                password = ""
+                if len(auth_pieces) > 1:
+                    if auth_pieces[1].strip():
+                        password = "-p%s" % auth_pieces[1]
+                sql = ("drop database if exists %(database)s; "
+                       "create database %(database)s;") % locals()
+                cmd = ("mysql -u%(user)s %(password)s -h%(host)s "
+                       "-e\"%(sql)s\"") % locals()
+                exitcode, out, err = execute(cmd)
+                self.assertEqual(0, exitcode)
+
+    def test_walk_versions(self):
+        """
+        Walks all version scripts for each tested database, ensuring
+        that there are no errors in the version scripts for each engine
+        """
+        for key, engine in self.engines.items():
+            options = {'sql_connection': TestMigrations.TEST_DATABASES[key]}
+            self._walk_versions(options)
+
+    def _walk_versions(self, options):
+        # Determine latest version script from the repo, then
+        # upgrade from 1 through to the latest, with no data
+        # in the databases. This just checks that the schema itself
+        # upgrades successfully.
+
+        # Assert we are not under version control...
+        self.assertRaises(exception.DatabaseMigrationError,
+                          migration_api.db_version,
+                          options)
+        # Place the database under version control
+        migration_api.version_control(options)
+
+        cur_version = migration_api.db_version(options)
+        self.assertEqual(0, cur_version)
+
+        for version in xrange(1, TestMigrations.REPOSITORY.latest + 1):
+            migration_api.upgrade(options, version)
+            cur_version = migration_api.db_version(options)
+            self.assertEqual(cur_version, version)
+
+        # Now walk it back down to 0 from the latest, testing
+        # the downgrade paths.
+        for version in reversed(
+            xrange(0, TestMigrations.REPOSITORY.latest)):
+            migration_api.downgrade(options, version)
+            cur_version = migration_api.db_version(options)
+            self.assertEqual(cur_version, version)
+
+    def test_no_data_loss_2_to_3_to_2(self):
+        """
+        Here, we test that in the case when we moved a column "type" from the
+        base images table to be records in the image_properties table, that
+        we don't lose any data during the migration. Similarly, we test that
+        on downgrade, we don't lose any data, as the records are moved from
+        the image_properties table back into the base image table.
+        """
+        for key, engine in self.engines.items():
+            options = {'sql_connection': TestMigrations.TEST_DATABASES[key]}
+            self._no_data_loss_2_to_3_to_2(engine, options)
+
+    def _no_data_loss_2_to_3_to_2(self, engine, options):
+        migration_api.version_control(options)
+        migration_api.upgrade(options, 2)
+
+        cur_version = migration_api.db_version(options)
+        self.assertEquals(2, cur_version)
+
+        # We are now on version 2. Check that the images table still
+        # contains the type column...
+
+        images_table = Table('images', MetaData(), autoload=True,
+                             autoload_with=engine)
+
+        image_properties_table = Table('image_properties', MetaData(),
+                                       autoload=True,
+                                       autoload_with=engine)
+
+        self.assertTrue('type' in images_table.c,
+                        "'type' column not found in images table columns! "
+                        "images table columns: %s"
+                        % images_table.c.keys())
+
+        conn = engine.connect()
+        sel = select([func.count("*")], from_obj=[images_table])
+        orig_num_images = conn.execute(sel).scalar()
+        sel = select([func.count("*")], from_obj=[image_properties_table])
+        orig_num_image_properties = conn.execute(sel).scalar()
+
+        now = datetime.datetime.now()
+        inserter = images_table.insert()
+        conn.execute(inserter, [
+            {'deleted': False, 'created_at': now,
+             'updated_at': now, 'type': 'kernel',
+             'status': 'active', 'is_public': True},
+            {'deleted': False, 'created_at': now,
+             'updated_at': now, 'type': 'ramdisk',
+             'status': 'active', 'is_public': True}])
+
+        sel = select([func.count("*")], from_obj=[images_table])
+        num_images = conn.execute(sel).scalar()
+        self.assertEqual(orig_num_images + 2, num_images)
+        conn.close()
+
+        # Now let's upgrade to 3. This should move the type column
+        # to the image_properties table as type properties.
+
+        migration_api.upgrade(options, 3)
+
+        cur_version = migration_api.db_version(options)
+        self.assertEquals(3, cur_version)
+
+        images_table = Table('images', MetaData(), autoload=True,
+                             autoload_with=engine)
+
+        self.assertTrue('type' not in images_table.c,
+                        "'type' column still present in images table "
+                        "columns! images table columns reported by "
+                        "metadata: %s\n" % images_table.c.keys())
+
+        image_properties_table = Table('image_properties', MetaData(),
+                                       autoload=True,
+                                       autoload_with=engine)
+
+        conn = engine.connect()
+        sel = select([func.count("*")], from_obj=[image_properties_table])
+        num_image_properties = conn.execute(sel).scalar()
+        self.assertEqual(orig_num_image_properties + 2, num_image_properties)
+        conn.close()
+
+        # Downgrade to 2 and check that the type properties were moved
+        # to the main image table
+
+        migration_api.downgrade(options, 2)
+
+        images_table = Table('images', MetaData(), autoload=True,
+                             autoload_with=engine)
+
+        self.assertTrue('type' in images_table.c,
+                        "'type' column not found in images table columns! "
+                        "images table columns: %s"
+                        % images_table.c.keys())
+
+        image_properties_table = Table('image_properties', MetaData(),
+                                       autoload=True,
+                                       autoload_with=engine)
+
+        conn = engine.connect()
+        sel = select([func.count("*")], from_obj=[image_properties_table])
+        last_num_image_properties = conn.execute(sel).scalar()
+
+        self.assertEqual(num_image_properties - 2, last_num_image_properties)
=== modified file 'tests/unit/test_swift_store.py'
--- tests/unit/test_swift_store.py	2011-03-16 16:45:01 +0000
+++ tests/unit/test_swift_store.py	2011-03-23 15:06:49 +0000
@@ -70,15 +70,20 @@
         if hasattr(contents, 'read'):
             fixture_object = StringIO.StringIO()
             chunk = contents.read(SwiftBackend.CHUNKSIZE)
+            checksum = hashlib.md5()
             while chunk:
                 fixture_object.write(chunk)
+                checksum.update(chunk)
                 chunk = contents.read(SwiftBackend.CHUNKSIZE)
+            etag = checksum.hexdigest()
         else:
             fixture_object = StringIO.StringIO(contents)
+            etag = hashlib.md5(fixture_object.getvalue()).hexdigest()
+        read_len = fixture_object.len
         fixture_objects[fixture_key] = fixture_object
         fixture_headers[fixture_key] = {
-            'content-length': fixture_object.len,
-            'etag': hashlib.md5(fixture_object.read()).hexdigest()}
+            'content-length': read_len,
+            'etag': etag}
         return fixture_headers[fixture_key]['etag']
     else:
         msg = ("Object PUT failed - Object with key %s already exists"

@@ -226,8 +231,9 @@
     def test_add(self):
         """Test that we can add an image via the swift backend"""
         expected_image_id = 42
-        expected_swift_size = 1024 * 5  # 5K
+        expected_swift_size = FIVE_KB
         expected_swift_contents = "*" * expected_swift_size
+        expected_checksum = hashlib.md5(expected_swift_contents).hexdigest()
         expected_location = format_swift_location(
             SWIFT_OPTIONS['swift_store_user'],
             SWIFT_OPTIONS['swift_store_key'],

@@ -236,10 +242,12 @@
             expected_image_id)
         image_swift = StringIO.StringIO(expected_swift_contents)

-        location, size = SwiftBackend.add(42, image_swift, SWIFT_OPTIONS)
+        location, size, checksum = SwiftBackend.add(42, image_swift,
+                                                    SWIFT_OPTIONS)

         self.assertEquals(expected_location, location)
         self.assertEquals(expected_swift_size, size)
+        self.assertEquals(expected_checksum, checksum)

         url_pieces = urlparse.urlparse(expected_location)
         new_image_swift = SwiftBackend.get(url_pieces)

@@ -280,8 +288,9 @@
         options['swift_store_create_container_on_put'] = 'True'
         options['swift_store_container'] = 'noexist'
         expected_image_id = 42
-        expected_swift_size = 1024 * 5  # 5K
+        expected_swift_size = FIVE_KB
         expected_swift_contents = "*" * expected_swift_size
+        expected_checksum = hashlib.md5(expected_swift_contents).hexdigest()
         expected_location = format_swift_location(
             options['swift_store_user'],
             options['swift_store_key'],

@@ -290,10 +299,12 @@
             expected_image_id)
         image_swift = StringIO.StringIO(expected_swift_contents)

-        location, size = SwiftBackend.add(42, image_swift, options)
+        location, size, checksum = SwiftBackend.add(42, image_swift,
+                                                    options)

         self.assertEquals(expected_location, location)
         self.assertEquals(expected_swift_size, size)
+        self.assertEquals(expected_checksum, checksum)

         url_pieces = urlparse.urlparse(expected_location)
         new_image_swift = SwiftBackend.get(url_pieces)
The swift parts look reasonable to me. The only other thing I would add is to clarify in the docs that the checksum is an MD5 checksum; otherwise people may try to use a CRC.
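To underline that review point: the checksum is MD5, not a CRC, and as the swift stub in the diff shows, it can be computed incrementally over chunks. A minimal sketch of that chunked computation (Python 3 syntax; `CHUNKSIZE` here is an arbitrary illustrative value, not the real `SwiftBackend.CHUNKSIZE`):

```python
import hashlib
import io

CHUNKSIZE = 64 * 1024  # illustrative only

# 5K of data, matching the fixtures used in the store tests
contents = b"*" * (1024 * 5)

# Feed the image to MD5 chunk by chunk, as the swift stub does, so a
# large image never has to be held in memory all at once.
checksum = hashlib.md5()
image_file = io.BytesIO(contents)
chunk = image_file.read(CHUNKSIZE)
while chunk:
    checksum.update(chunk)
    chunk = image_file.read(CHUNKSIZE)

# The incremental digest equals the one-shot digest of the whole body.
assert checksum.hexdigest() == hashlib.md5(contents).hexdigest()
```

This is also why the ETag returned by Swift can be reused directly as the image checksum: Swift's ETag for a simple object PUT is the MD5 hex digest of the object body.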