Merge lp:~ec0/charms/trusty/glance-sync-slave/layer-metadata-fixes into lp:~canonical-bootstack/charms/trusty/glance-sync-slave/layer
- Trusty Tahr (14.04)
- layer-metadata-fixes
- Merge into layer
Proposed by: James Hebden
Status: Needs review
Proposed branch: lp:~ec0/charms/trusty/glance-sync-slave/layer-metadata-fixes
Merge into: lp:~canonical-bootstack/charms/trusty/glance-sync-slave/layer
Diff against target: 942 lines (+587/-241), 6 files modified
- config.yaml (+16/-0)
- files/glance_sync_slave.py (+542/-239)
- reactive/glance-sync-slave.py (+21/-0)
- templates/glance_sync_slave_cron.j2 (+1/-1)
- templates/glancesync.novarc.j2 (+5/-0)
- templates/novarc.j2 (+2/-1)

To merge this branch: bzr merge lp:~ec0/charms/trusty/glance-sync-slave/layer-metadata-fixes
Related bugs:

Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Jill Rouleau | | | Pending |

Review via email: mp+328495@code.launchpad.net
Commit message
Description of the change
Exclude Glance-reserved properties, additional custom metadata, and read-only attributes from the image sync.
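The core of the change is the set arithmetic the branch introduces at the top of `ImageSyncSlave`: read-only attributes and custom extra properties must be stripped from image metadata before it is sent to the slave's Glance API, or the create/update call is rejected. A minimal sketch of that filtering, using the property names from the diff (the `clean_for_sync` helper name is illustrative, not from the branch):

```python
# Property sets as introduced in files/glance_sync_slave.py (see the diff).
READONLY_PROPERTIES = {'file', 'size', 'status', 'self', 'direct_url',
                       'schema', 'updated_at', 'locations', 'virtual_size',
                       'checksum', 'created_at'}
# Custom metadata added by the sync tooling itself, not valid Glance input.
EXTRA_PROPERTIES = {'bs_owner'}


def clean_for_sync(metadata):
    """Return a copy of image metadata safe to pass to images.create()."""
    excluded = READONLY_PROPERTIES | EXTRA_PROPERTIES
    return {k: v for k, v in metadata.items() if k not in excluded}


meta = {'id': 'abc123', 'name': 'demo', 'checksum': 'd41d8cd9',
        'bs_owner': 'MASTER-CENTRAL', 'disk_format': 'qcow2'}
print(sorted(clean_for_sync(meta)))  # ['disk_format', 'id', 'name']
```

Note that `checksum` is still consulted separately for integrity checks; it is only excluded from the payload sent to Glance.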
Unmerged revisions
- 52. By James Hebden: [jhebden,r=jillr] Added sajoupa's changes to ignore syncing of glance and extra properties in metadata
- 51. By Alvaro Uria: added extra params to config.yaml and reactive logic (config-changed not calling @when)
- 50. By Alvaro Uria: fixes on project mapping and errors capture. First real tests ok
- 49. By Alvaro Uria: minor fixes: hardcoded glance master endpoint, temp_image format
- 48. By Alvaro Uria: glance-sync-slave redesign
Preview Diff
1 | === modified file 'config.yaml' |
2 | --- config.yaml 2017-01-11 03:55:07 +0000 |
3 | +++ config.yaml 2017-08-03 07:40:40 +0000 |
4 | @@ -24,6 +24,22 @@ |
5 | default: /srv/glance_sync_slave/logs |
6 | description: directory to store sync logfiles |
7 | type: string |
8 | + master_password: |
9 | + default: '' |
10 | + description: auth password against glance master service |
11 | + type: string |
12 | + master_auth_url: |
13 | + default: '' |
14 | + description: master keystone endpoint |
15 | + type: string |
16 | + master_glance_url: |
17 | + default: '' |
18 | + description: master glance service endpoint |
19 | + type: string |
20 | + master_region: |
21 | + default: 'RegionOne' |
22 | + description: name of the master region |
23 | + type: string |
24 | nagios_context: |
25 | default: "juju" |
26 | type: string |
27 | |
28 | === modified file 'files/glance_sync_slave.py' |
29 | --- files/glance_sync_slave.py 2016-05-17 19:43:27 +0000 |
30 | +++ files/glance_sync_slave.py 2017-08-03 07:40:40 +0000 |
31 | @@ -7,189 +7,28 @@ |
32 | import argparse |
33 | import json |
34 | import datetime |
35 | +import dateutil.parser |
36 | import atexit |
37 | -import re |
38 | +#import re |
39 | import shlex |
40 | from glanceclient import Client |
41 | -from glanceclient.exc import HTTPConflict |
42 | +from glanceclient.exc import ( |
43 | + HTTPConflict, |
44 | + HTTPForbidden, |
45 | +) |
46 | from keystoneclient.v2_0 import client |
47 | import subprocess |
48 | |
49 | +class OSProjectNotFound(Exception): |
50 | + """This indicates no sync is possible |
51 | + (not defaulting to admin project) |
52 | + """ |
53 | + pass |
54 | + |
55 | |
56 | class ImageSyncSlave: |
57 | - |
58 | - def __init__(self): |
59 | - self.glance = self.glance_connect() |
60 | - |
61 | - # remove readonly properties (from glanceclient/v2/image_schema.py) |
62 | - readonly_properties = ['file', 'size', 'self', 'direct_url', 'schema', |
63 | - 'status', 'updated_at', 'virtual_size', |
64 | - 'checksum', |
65 | - 'created_at'] |
66 | - |
67 | - reserved_properties = ['owner', 'locations', 'deleted', |
68 | - 'deleted_at', 'direct_url', 'self', 'file', |
69 | - 'schema'] |
70 | - self.glance_properties = readonly_properties + reserved_properties |
71 | - |
72 | - # rsync image and metadata files to data_dir |
73 | - def download_data(self, source, data_dir): |
74 | - |
75 | - if not data_dir.endswith('/'): |
76 | - data_dir = data_dir + '/' |
77 | - |
78 | - if not source.endswith('/'): |
79 | - source = source + '/' |
80 | - |
81 | - command = "/usr/bin/rsync -az --delete -e " \ |
82 | - "'ssh -o StrictHostKeyChecking=no' {} {}".format(source, |
83 | - data_dir) |
84 | - proc = subprocess.Popen(shlex.split(command), |
85 | - stdout=subprocess.PIPE, |
86 | - stderr=subprocess.PIPE) |
87 | - (stdout, stderr) = proc.communicate() |
88 | - if proc.returncode: |
89 | - print("{} ERROR: problem while getting data from " |
90 | - "master ({}).".format(self.timestamp_now(), command)) |
91 | - print(stderr) |
92 | - self.release_lock() |
93 | - sys.exit(2) |
94 | - else: |
95 | - return True |
96 | - |
97 | - def glance_upload(self, data_dir): |
98 | - |
99 | - meta_files = [file for file in self.list_files(data_dir) |
100 | - if file.endswith('.json')] |
101 | - |
102 | - glance_ids = [image.id for image in self.glance.images.list()] |
103 | - |
104 | - for file in meta_files: |
105 | - metadata = self.read_metadata(os.path.join(data_dir, file)) |
106 | - |
107 | - # compare image checksums with metadata checksums |
108 | - if self.check_md5(metadata['checksum'], |
109 | - os.path.join(data_dir, metadata['id'])): |
110 | - # import image into glance |
111 | - |
112 | - # clean_meta contains the image id from the master metadata file |
113 | - clean_meta = {key: value for key, value in metadata.items() if |
114 | - key not in self.glance_properties} |
115 | - |
116 | - if clean_meta['id'] not in glance_ids: |
117 | - # create glance image |
118 | - print("{} image-id {}: creating image".format( |
119 | - self.timestamp_now(), metadata['id'])) |
120 | - try: |
121 | - self.glance.images.create(**clean_meta) |
122 | - # catch conflicts with existing image ids (should already be |
123 | - # caught by glance_ids) |
124 | - except HTTPConflict as e: |
125 | - print("{} image-id {}: ERROR: Image already exists in " |
126 | - "glance.".format(self.timestamp_now(), |
127 | - clean_meta['id'])) |
128 | - print(e) |
129 | - try: |
130 | - self.update_metadata(clean_meta) |
131 | - continue |
132 | - except: |
133 | - print("{} image-id {}: ERROR: both image create " |
134 | - "and image metadata update failed. This can " |
135 | - "happen if the image was deleted through the " |
136 | - "API but still exists in the glance " |
137 | - "database.".format( |
138 | - self.timestamp_now(), clean_meta['id'])) |
139 | - continue |
140 | - try: |
141 | - meta_file = os.path.join(data_dir, clean_meta['id']) |
142 | - self.glance.images.upload(clean_meta['id'], |
143 | - open(meta_file)) |
144 | - |
145 | - except Exception as e: |
146 | - print(e) |
147 | - print("{} image-id {}: Error: problem uploading data " |
148 | - "to glance.".format(self.timestamp_now(), |
149 | - clean_meta['id'])) |
150 | - continue |
151 | - else: |
152 | - # only update metadata |
153 | - self.update_metadata(clean_meta) |
154 | - else: |
155 | - print("{} image-id {}: checksum mismatch, not " |
156 | - "importing.".format(self.timestamp_now(), metadata['id'])) |
157 | - |
158 | - def glance_removal(self, data_dir): |
159 | - meta_files = [file for file in self.list_files(data_dir) |
160 | - if file.endswith('.json')] |
161 | - |
162 | - local_image_files = [] |
163 | - for meta_file in meta_files: |
164 | - metadata = self.read_metadata(os.path.join(data_dir, meta_file)) |
165 | - local_image_files.append(metadata['id']) |
166 | - |
167 | - deleted_images = [image.id for image in self.glance.images.list() |
168 | - if image.id not in local_image_files] |
169 | - for image_id in deleted_images: |
170 | - |
171 | - # remove image from glance |
172 | - print("{} image-id {}: removing image".format(self.timestamp_now(), |
173 | - image_id)) |
174 | - try: |
175 | - self.glance.images.delete(image_id) |
176 | - except HTTPConflict as e: |
177 | - print("{} image-id {}: ERROR: {}".format(self.timestamp_now(), |
178 | - image_id, |
179 | - e)) |
180 | - |
181 | - # generate checksum on disk |
182 | - def check_md5(self, control_md5, image_file): |
183 | - return self.get_md5(image_file) == control_md5 |
184 | - |
185 | - def get_md5(self, image_file): |
186 | - hash_md5 = hashlib.md5() |
187 | - with open(image_file, "rb") as f: |
188 | - for chunk in iter(lambda: f.read(4096), b""): |
189 | - hash_md5.update(chunk) |
190 | - return hash_md5.hexdigest() |
191 | - |
192 | - def read_metadata(self, metadata_file): |
193 | - with open(metadata_file) as meta_file: |
194 | - data = json.load(meta_file) |
195 | - return data |
196 | - |
197 | - def glance_connect(self): |
198 | - try: |
199 | - username = os.environ['OS_USERNAME'] |
200 | - password = os.environ['OS_PASSWORD'] |
201 | - tenant_name = os.environ['OS_TENANT_NAME'] |
202 | - auth_url = os.environ['OS_AUTH_URL'] |
203 | - except Exception as e: |
204 | - print("ERROR: unable to load environment variables, please source " |
205 | - "novarc") |
206 | - print(e) |
207 | - self.release_lock() |
208 | - sys.exit(2) |
209 | - keystone = client.Client(username=username, password=password, |
210 | - tenant_name=tenant_name, auth_url=auth_url) |
211 | - |
212 | - token = keystone.auth_token |
213 | - |
214 | - service = keystone.services.find(name='glance') |
215 | - endpoint = keystone.endpoints.find(service_id=service.id) |
216 | - glance_url = endpoint.internalurl |
217 | - |
218 | - return Client('2', endpoint=glance_url, token=token) |
219 | - |
220 | - def timestamp_now(self): |
221 | - return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") |
222 | - |
223 | - def update_metadata(self, metadata): |
224 | - print("{} image-id {}: updating metadata".format(self.timestamp_now(), |
225 | - metadata['id'])) |
226 | - |
227 | - # list of glance reserved properties |
228 | - # (from glanceclient/v2/image_schema.py) |
229 | - glance_properties = ["architecture", |
230 | + extra_properties = set(['bs_owner']) |
231 | + glance_properties = set(["architecture", |
232 | "checksum", |
233 | "container_format", |
234 | "created_at", |
235 | @@ -217,92 +56,557 @@ |
236 | "tags", |
237 | "updated_at", |
238 | "virtual_size", |
239 | - "visibility"] |
240 | - |
241 | - image = self.glance.images.get(metadata['id']) |
242 | - removed_props = [key for key in image.keys() if |
243 | - key not in metadata.keys()+glance_properties] |
244 | - |
245 | - try: |
246 | - self.glance.images.update(metadata['id'], |
247 | - remove_props=removed_props, |
248 | - **metadata) |
249 | - except Exception as e: |
250 | - print("{} image-id {}: ERROR: Problem updating metadata".format( |
251 | - self.timestamp_now(), metadata['id'])) |
252 | - print(e) |
253 | + "visibility"]) |
254 | + # egrep -B2 readOnly glanceclient/v2/image_schema.py | \ |
255 | + # awk '/\{/ {print $1}' | tr -d \": |
256 | + readonly_properties = set(['file', |
257 | + 'size', |
258 | + 'status', |
259 | + 'self', |
260 | + 'direct_url', |
261 | + 'schema', |
262 | + 'updated_at', |
263 | + 'locations', |
264 | + 'virtual_size', |
265 | + 'checksum', |
266 | + 'created_at']) |
267 | + |
268 | + def __init__(self, data_dir, source): |
269 | + self.projects_slave = {} |
270 | + self.DATA_DIR = data_dir |
271 | + self.SOURCE = source |
272 | + self.valid_properties = self.glance_properties.difference( |
273 | + self.readonly_properties.union( |
274 | + self.extra_properties)) |
275 | + self.set_filelock() |
276 | + self.glance_connect_slave() |
277 | + #self.glance_connect_master() |
278 | + |
279 | + def download_metadata_from_master(self): |
280 | + """rsync metadata files from source to data_dir""" |
281 | + |
282 | + if not self.SOURCE.endswith('/'): |
283 | + self.SOURCE += '/' |
284 | + if not self.DATA_DIR.endswith('/'): |
285 | + self.DATA_DIR += '/' |
286 | + |
287 | + command = '/usr/bin/rsync -az --delete -e ' \ |
288 | + "'ssh -o StrictHostKeyChecking=no' " \ |
289 | + '{0} {1}'.format(self.SOURCE, self.DATA_DIR) |
290 | + proc = subprocess.Popen(shlex.split(command), |
291 | + stdout=subprocess.PIPE, |
292 | + stderr=subprocess.PIPE) |
293 | + (stdout, stderr) = proc.communicate() |
294 | + if proc.returncode: |
295 | + self.log('ERROR: problem while getting data from master ' |
296 | + '({0})'.format(command)) |
297 | + self.log('ERROR: {0}'.format(stderr)) |
298 | + self.release_lock() |
299 | + sys.exit(2) |
300 | + else: |
301 | + return True |
302 | + |
303 | + def parse_glance_slave_images(self): |
304 | + """ |
305 | + image.id => [0-9a-z]* |
306 | + len(image.id) < 2 : DATA_DIR/<image-id>.json |
307 | + len(image.id) >= 2: DATA_DIR/XX/XXZZZ.json |
308 | + |
309 | + @returns processed (aka. parsed) images |
310 | + """ |
311 | + self.log('getting image list from slave') |
312 | + processed_images_ids = set() |
313 | + to_delete_images_ids = set() |
314 | + for image in self.glance_slave.images.list(): |
315 | + if len(image.id) < 2: |
316 | + basename = '{0}.json'.format(image.id) |
317 | + else: |
318 | + basename = '{0}/{1}.json'.format(str(image.id)[:2], |
319 | + image.id) |
320 | + filename = os.path.join(self.DATA_DIR, basename) |
321 | + if not os.path.isfile(filename): |
322 | + to_delete_images_ids.add(image.id) |
323 | + continue |
324 | + |
325 | + metadata_local = self.read_metadata(filename) |
326 | + if not metadata_local: |
327 | + self.log('ERROR: read_metadata did not retrieve anything ' |
328 | + '({0})'.format(filename)) |
329 | + continue |
330 | + |
331 | + if metadata_local['checksum'] == image.checksum: |
332 | + if not self.is_latest_metadata(metadata_local['id'], |
333 | + metadata_local['updated_at'], |
334 | + image.updated_at): |
335 | + # checksum ok, metadata outdated |
336 | + self.update_metadata(metadata_local, image) |
337 | + processed_images_ids.add(image.id) |
338 | + # checksum mismatch, re-upload |
339 | + elif self.download_from_master(metadata_local): |
340 | + self.upload_to_slave(metadata_local) |
341 | + processed_images_ids.add(image.id) |
342 | + |
343 | + self.log('DEBUG: images pending to be deleted: ' |
344 | + '{0}'.format(to_delete_images_ids)) |
345 | + self.delete_images_from_slave(to_delete_images_ids) |
346 | + self.log('DEBUG: processed images (to skip while parsing metadata ' |
347 | + 'files): {0}'.format(processed_images_ids)) |
348 | + return processed_images_ids |
349 | + |
350 | + def is_latest_metadata(self, image_id, master_updated_at, slave_updated_at): |
351 | + """Compares filename content (JSON metadata) and glance service info |
352 | + @return |
353 | + True: no need to update |
354 | + False: need to update metadata on local copy |
355 | + """ |
356 | + master_dup = dateutil.parser.parse(master_updated_at) |
357 | + slave_dup = dateutil.parser.parse(slave_updated_at) |
358 | + |
359 | + if master_dup <= slave_dup: |
360 | + self.log('INFO: is_latest_metadata :: {0} up to ' |
361 | + 'date'.format(image_id)) |
362 | + return True |
363 | + else: |
364 | + self.log('INFO: is_latest_metadata :: {0} outdated. Needs ' |
365 | + 'update_metadata.'.format(image_id)) |
366 | + return False |
367 | + |
368 | + def upload_to_slave(self, metadata_local): |
369 | + """upload image to glance slave service |
370 | + """ |
371 | + tmp_image_basename = '{0}.img'.format(metadata_local['id']) |
372 | + tmp_image = os.path.join(self.DATA_DIR, tmp_image_basename) |
373 | + try: |
374 | + clean_metadata, removed_props = self.mangle_metadata(metadata_local) |
375 | + except OSProjectNotFound as e: |
376 | + self.log('EXCEPTION: upload_to_slave :: image-id {0} :: ' |
377 | + 'problem uploading data to glance ' |
378 | + 'slave (image could not be removed) :: ' |
379 | + '{1}'.format(metadata_local['id'], e)) |
380 | + return False |
381 | + |
382 | + for k in removed_props: |
383 | + del clean_metadata[k] |
384 | + |
385 | + self.log('INFO: creating image {0}'.format(clean_metadata['id'])) |
386 | + try: |
387 | + # FIXME |
388 | + self.glance_slave.images.create(**clean_metadata) |
389 | + self.log('DEBUG: create image: {0}'.format(clean_metadata)) |
390 | + except HTTPConflict as e: |
391 | + self.log('EXCEPTION: upload_to_slave :: {0}'.format(e)) |
392 | + try: |
393 | + # update metadata |
394 | + # FIXME |
395 | + self.glance_slave.images.update(clean_metadata['id'], |
396 | + remove_props=removed_props, |
397 | + **clean_metadata) |
398 | + self.log('DEBUG: update_to_slave :: update metadata ' |
399 | + '{0}'.format(clean_metadata)) |
400 | + except Exception as e: |
401 | + self.log('ERROR: update_to_slave (both image create/update ' |
402 | + 'failed :: {0} - this can happen if the image was ' |
403 | + 'deleted through the API but still exists in the glance ' |
404 | + 'database :: {1}'.format(clean_metadata['id'], e)) |
405 | + return False |
406 | + try: |
407 | + # upload |
408 | + # FIXME |
409 | + self.glance_slave.images.upload(clean_metadata['id'], |
410 | + open(tmp_image)) |
411 | + os.remove(tmp_image) |
412 | + self.log('DEBUG: update_to_slave :: upload {0}'.format(tmp_image)) |
413 | + except Exception as e: |
414 | + self.log('ERROR: upload_to_slave :: image-id {0} :: ' |
415 | + 'problem uploading data to glance ' |
416 | + 'slave (image could not be removed) :: ' |
417 | + '{1}'.format(clean_metadata['id'], e)) |
418 | + return False |
419 | + |
420 | + def download_from_master(self, metadata_local): |
421 | + """downloads images from glance master service |
422 | + |
423 | + @return True: downloaded or already on local storage |
424 | + @return False: error |
425 | + """ |
426 | + tmp_image_basename = '{0}.img'.format(metadata_local['id']) |
427 | + tmp_image = os.path.join(self.DATA_DIR, tmp_image_basename) |
428 | + if os.path.isfile(tmp_image): |
429 | + if self.check_md5(metadata_local['checksum'], tmp_image): |
430 | + return True |
431 | + else: |
432 | + try: |
433 | + os.remove(tmp_image) |
434 | + except Exception as e: |
435 | + self.log('ERROR: download_from_master :: {0}'.format(e)) |
436 | + return False |
437 | + self.glance_connect_master() |
438 | + downloaded = False |
439 | + retries = 3 |
440 | + for i in xrange(0, retries): |
441 | + try: |
442 | + bin_image = self.glance_master.images.data( |
443 | + image_id=metadata_local['id']) |
444 | + |
445 | + hash_md5 = hashlib.md5() |
446 | + with open(tmp_image, 'wb') as fd: |
447 | + for chunk in bin_image: |
448 | + fd.write(chunk) |
449 | + hash_md5.update(chunk) |
450 | + bin_image_checksum = hash_md5.hexdigest() |
451 | + if metadata_local['checksum'] == bin_image_checksum: |
452 | + downloaded = True |
453 | + self.log('INFO: download_from_master ({0} - {1}):: ' |
454 | + 'checksum OK'.format(metadata_local['id'], |
455 | + metadata_local['checksum'])) |
456 | + break |
457 | + elif os.path.exists(tmp_image): |
458 | + self.log('INFO: download_from_master ({0}/{1}; {2}):: ' |
459 | + 'invalid checksum ' |
460 | + '{3}'.format(metadata_local['id'], i, retries, |
461 | + bin_image_checksum)) |
462 | + os.remove(tmp_image) |
463 | + except Exception as e: |
464 | + self.log('EXCEPTION: download_from_master ({0}/{1}; {2}):: ' |
465 | + '{3}'.format(i, retries, metadata_local['id'], e)) |
466 | + if os.path.exists(tmp_image): |
467 | + os.remove(tmp_image) |
468 | + self.glance_connect_master() |
469 | + |
470 | + return downloaded |
471 | + |
472 | + def delete_images_from_slave(self, to_delete_images_ids): |
473 | + """walks through local metadata files |
474 | + deletes images not found in local storage |
475 | + """ |
476 | + if not to_delete_images_ids: |
477 | + self.log('WARNING: precautionary halt. No glance images found ' |
478 | + 'to be deleted. noop.') |
479 | + return |
480 | + |
481 | + for image_id in to_delete_images_ids: |
482 | + self.log('INFO: removing image {0}'.format(image_id)) |
483 | + try: |
484 | + # FIXME |
485 | + self.glance_slave.images.delete(image_id) |
486 | + self.log('DEBUG: image {0} removed'.format(image_id)) |
487 | + except (HTTPConflict, HTTPForbidden) as e: |
488 | + self.log('ERROR: could not delete {0} :: ' |
489 | + '{1}'.format(image_id, e)) |
490 | + |
491 | + def create_missing_slave_images(self, processed_images_ids): |
492 | + |
493 | + for dirpath, dirnames, filenames in os.walk(self.DATA_DIR): |
494 | + if dirpath != self.DATA_DIR and \ |
495 | + len(dirnames) == 0 and \ |
496 | + len(filenames) == 0: |
497 | + os.rmdir(dirpath) |
498 | + continue |
499 | + |
500 | + for filename in filenames: |
501 | + full_path = os.path.join(dirpath, filename) |
502 | + if filename.endswith('.json'): |
503 | + image_id = filename[:-5] |
504 | + if image_id in processed_images_ids: |
505 | + continue |
506 | + |
507 | + metadata_local = self.read_metadata(full_path) |
508 | + if not metadata_local: |
509 | + self.log('ERROR: read_metadata did not ' |
510 | + 'retrieve anything ' |
511 | + '({0})'.format(full_path)) |
512 | + continue |
513 | + |
514 | + slave_project_id = self.project_mapping(metadata_local) |
515 | + if not slave_project_id: |
516 | + self.log('DEBUG: could not map image into any ' |
517 | + 'slave project :: {0}'.format(metadata_local)) |
518 | + continue |
519 | + |
520 | + metadata_local['owner'] = slave_project_id |
521 | + if self.download_from_master(metadata_local): |
522 | + self.upload_to_slave(metadata_local) |
523 | + else: |
524 | + self.log('ERROR: image {0} could not be downloaded ' |
525 | + 'from master'.format(metadata_local['id'])) |
526 | + |
527 | + def project_mapping(self, metadata_local): |
528 | + """can master/slave projects be mapped, no matter project_ids are not |
529 | + the same? |
530 | + """ |
531 | + # master/slave match can't be done (no project_id) |
532 | + if 'owner' not in metadata_local: |
533 | + return False |
534 | + |
535 | + # master/slave project_ids match |
536 | + if metadata_local['owner'] in self.projects_slave: |
537 | + return metadata_local['owner'] |
538 | + |
539 | + # no extra project_name passed -- can't check match |
540 | + if 'bs_owner' not in metadata_local: |
541 | + return False |
542 | + |
543 | + master_project_name = metadata_local['bs_owner'] |
544 | + |
545 | + # XXX(aluria): no image on slave service |
546 | + # XXX(aluria): look for similar project on slave |
547 | + for slave_project_id, slave_project_name in \ |
548 | + self.projects_slave.items(): |
549 | + slave_to_master = slave_project_name.replace(self.REGION_SLAVE, |
550 | + self.REGION_MASTER) |
551 | + # XXX(aluria): pitfall, if on master service there are |
552 | + # XXX(aluria): 2 projects: |
553 | + # XXX(aluria): REGION_SLAVE-restofprojectname |
554 | + # XXX(aluria): REGION_MASTER-restofprojectname |
555 | + # XXX(aluria): first found gets image assigned |
556 | + if master_project_name in (slave_project_name, |
557 | + slave_to_master): |
558 | + return slave_project_id |
559 | + return False |
560 | + |
561 | + # generate checksum on disk |
562 | + def check_md5(self, control_md5, image_file): |
563 | + return self.get_md5(image_file) == control_md5 |
564 | + |
565 | + def get_md5(self, image_file): |
566 | + hash_md5 = hashlib.md5() |
567 | + with open(image_file, "rb") as f: |
568 | + for chunk in iter(lambda: f.read(4096), b""): |
569 | + hash_md5.update(chunk) |
570 | + return hash_md5.hexdigest() |
571 | + |
572 | + def read_metadata(self, metadata_file): |
573 | + """copy full json file into memory |
574 | + """ |
575 | + if os.path.isfile(metadata_file): |
576 | + with open(metadata_file) as meta_file: |
577 | + try: |
578 | + data = json.load(meta_file) |
579 | + return data |
580 | + except Exception as e: |
581 | + self.log('EXCEPTION: {0}'.format(e)) |
582 | + return False |
583 | + else: |
584 | + self.log('INFO: {0} not found.'.format(metadata_file)) |
585 | + return False |
586 | + |
587 | + def glance_connect_slave(self): |
588 | + try: |
589 | + username = os.environ['OS_USERNAME'] |
590 | + password = os.environ['OS_PASSWORD'] |
591 | + tenant_name = os.environ['OS_TENANT_NAME'] |
592 | + auth_url = os.environ['OS_AUTH_URL'] |
593 | + self.REGION_MASTER = os.environ['OS_REGION_NAME_MASTER'].upper() |
594 | + except Exception as e: |
595 | + self.log('EXCEPTION: {0}'.format(e)) |
596 | + self.log('ERROR: unable to load environment variables, please ' |
597 | + 'source novarc') |
598 | + self.release_lock() |
599 | + sys.exit(2) |
600 | + self.keystone = client.Client(username=username, password=password, |
601 | + tenant_name=tenant_name, |
602 | + auth_url=auth_url) |
603 | + |
604 | + if not self.projects_slave: |
605 | + self.projects_slave = dict([(tenant.id, tenant.name) for tenant in |
606 | + self.keystone.tenants.list() if |
607 | + tenant.enabled]) |
608 | + token = self.keystone.auth_token |
609 | + service = self.keystone.services.find(name='glance') |
610 | + endpoint = self.keystone.endpoints.find(service_id=service.id) |
611 | + glance_url = endpoint.internalurl |
612 | + self.REGION_SLAVE = endpoint.region.upper() |
613 | + self.glance_slave = Client('2', endpoint=glance_url, token=token) |
614 | + return self.glance_slave |
615 | + |
616 | + def glance_connect_master(self): |
617 | + try: |
618 | + username = 'glancesync' |
619 | + password = os.environ['OS_PASSWORD_MASTER'] |
620 | + tenant_name = os.environ['OS_TENANT_NAME'] |
621 | + auth_url = os.environ['OS_AUTH_URL_MASTER'] |
622 | + glance_url_master = os.environ['OS_GLANCE_URL_MASTER'] |
623 | + except Exception as e: |
624 | + self.log('EXCEPTION: glance_connect_master :: {0}'.format(e)) |
625 | + self.log('ERROR: unable to load environment variables, please ' |
626 | + 'source novarc') |
627 | + self.release_lock() |
628 | + sys.exit(2) |
629 | + self.keystone_master = client.Client(username=username, |
630 | + password=password, |
631 | + tenant_name=tenant_name, |
632 | + auth_url=auth_url) |
633 | + token = self.keystone_master.auth_token |
634 | + # XXX: uses adminURL, which is OS-MGMT |
635 | + # XXX: publicURL should be used |
636 | + #service = self.keystone_master.services.find(name='glance') |
637 | + #endpoint = self.keystone_master.endpoints.find(service_id=service.id) |
638 | + #glance_url = endpoint.publicurl |
639 | + #self.REGION_MASTER = endpoint.region.upper() |
640 | + self.glance_master = Client('2', endpoint=glance_url_master, |
641 | + token=token) |
642 | + return self.glance_master |
643 | + |
644 | + def timestamp_now(self): |
645 | + return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") |
646 | + |
647 | + def log(self, msg): |
648 | + print('{0} {1}'.format(self.timestamp_now(), msg)) |
649 | + |
650 | + def mangle_metadata(self, metadata_local, metadata_slave=None): |
651 | + """Maps projects in MASTER region with projects in SLAVE region |
652 | + metadata_local: metadata from master images |
653 | + metatada_slave: metadata from slave images |
654 | + |
655 | + (from glanceclient/v2/image_schema.py) |
656 | + [ |
657 | + "image_state": "available", |
658 | + "container_format": "bare", |
659 | + "min_ram": 0, |
660 | + "ramdisk_id": null, |
661 | + "updated_at": "2017-02-06T12:01:31Z", |
662 | + "file": "/v2/images/1019f307-0c1a-4b23-bf9f-725150ab80ab/file", |
663 | + "owner": "ae75c492635143f79fa9161d73e0cb73", |
664 | + "id": "1019f307-0c1a-4b23-bf9f-725150ab80ab", |
665 | + "size": 34965880832, |
666 | + "user_id": "7ac9feb8b7b04761a780f6aea1ee8a02", |
667 | + "image_type": "snapshot", |
668 | + "disk_format": "qcow2", |
669 | + "base_image_ref": "6c738559-fb83-4fb4-98dd-6c28b100e417", |
670 | + "schema": "/v2/schemas/image", |
671 | + "status": "active", |
672 | + "image_location": "snapshot", |
673 | + "tags": [], |
674 | + "kernel_id": null, |
675 | + "visibility": "private", |
676 | + "locations": [ |
677 | + { |
678 | + "url": "rbd://75f1cd1b-1ef4-4a90-9033-5c026959f679/ |
679 | + glance/1019f307-0c1a-4b23-bf9f-725150ab80ab/snap", |
680 | + "metadata": {} |
681 | + } |
682 | + ], |
683 | + "min_disk": 40, |
684 | + "virtual_size": null, |
685 | + "instance_uuid": "db494977-27be-4875-939b-d2af0f7035a4", |
686 | + "name": "ams-samdb-snap", |
687 | + "checksum": "f82eb5d18273b28f74175c2ce3c8ff18", |
688 | + "created_at": "2017-02-06T11:54:09Z", |
689 | + "protected": false, |
690 | + "bs_owner": "AMS1-VEPC-MGMT-CENTRAL", |
691 | + "owner_id": "ae75c492635143f79fa9161d73e0cb73" |
692 | + ] |
693 | + > pprint.pprint([key for key in d.keys() if key not in |
694 | + > glance_properties+extra_properties]) |
695 | + ['user_id', |
696 | + 'image_type', |
697 | + 'image_state', |
698 | + 'image_location', |
699 | + 'base_image_ref', |
700 | + 'owner_id'] |
701 | + """ |
702 | + if 'owner' not in metadata_local: |
703 | + raise OSProjectNotFound('no owner :: {0}'.format(metadata_local)) |
704 | + |
705 | + if metadata_local['owner'] not in self.projects_slave: |
706 | + if 'bs_owner' in metadata_local: |
707 | + master_project_name = metadata_local['bs_owner'] |
708 | + else: |
709 | + raise OSProjectNotFound('no bs_owner :: ' |
710 | + '{0}'.format(metadata_local)) |
711 | + |
712 | + # XXX(aluria): image does not exist on slave service |
713 | + if not metadata_slave: |
714 | + slave_project_id = self.project_mapping(metadata_local) |
715 | + if not slave_project_id: |
716 | + raise OSProjectNotFound('no project_mapping :: ' |
717 | + '{0}'.format(metadata_local)) |
718 | + elif metadata_local['owner'] != slave_project_id: |
719 | + metadata_local['owner'] = slave_project_id |
720 | + # XXX(aluria): image exists on slave service |
721 | + # XXX(aluria): keep all metadata and mangle project_id (owner) |
722 | + else: |
723 | + # ie. admin, services, SLAVE-CENTRAL |
724 | + slave_project_name = \ |
725 | + self.projects_slave[metadata_slave['owner']] |
726 | + # ie. admin, services, MASTER-CENTRAL |
727 | + slave_to_master = slave_project_name.replace(self.REGION_SLAVE, |
728 | + self.REGION_MASTER) |
729 | + # ie. admin, services, MASTER-CENTRAL |
730 | + if master_project_name in (slave_project_name, slave_to_master): |
731 | + metadata_local['owner'] = metadata_slave['owner'] |
732 | + else: |
733 | + raise OSProjectNotFound('project not found: ' |
734 | + '{0}'.format(metadata_local)) |
735 | + |
736 | + # do not remove properties defined in extra_properties and |
737 | + # glance_properties |
738 | + removed_props = [k for k in (metadata_slave or {}).keys() if k not in |
739 | + metadata_local.keys() and k not in |
740 | + self.extra_properties.union( |
741 | + self.glance_properties)] |
742 | + return (metadata_local, removed_props) |
743 | + |
744 | + |
745 | + def update_metadata(self, metadata_local, metadata_slave): |
746 | + self.log('INFO: image-id {0}: updating ' |
747 | + 'metadata'.format(metadata_local['id'])) |
748 | + |
749 | + metadata, removed_props = self.mangle_metadata(metadata_local, |
750 | + metadata_slave) |
751 | + # do not update read only and extra properties |
752 | + for k in metadata_local.keys(): |
753 | + if k in self.extra_properties.union(self.readonly_properties): |
754 | + del metadata_local[k] |
755 | + try: |
756 | + self.glance_slave.images.update(metadata['id'], |
757 | + remove_props=removed_props, |
758 | + **metadata) |
759 | + except Exception as e: |
760 | + self.log('EXCEPTION: update_metadata :: {0} - ' |
761 | + '{1}'.format(metadata['id'], e)) |
762 | raise e |
763 | |
764 | - # lists filenames relative to path, not full path to match with glance names |
765 | - # so with a directory/subdirectories like the following |
766 | - # /tmp/foo |
767 | - # /tmp/bar/baz |
768 | - # |
769 | - # list_files('/tmp') would return: |
770 | - # ['foo','bar/baz'] |
771 | - # |
772 | - def list_files(self, path): |
773 | - file_list = [] |
774 | - for root, dirs, files in os.walk(path): |
775 | - for file in files: |
776 | - file_list.append(os.path.join(root, file)) |
777 | - relative = [re.sub(path+'/*', '', file) for file in file_list] |
778 | - return relative |
779 | - |
780 | def create_lock(self, lockfile): |
781 | try: |
782 | with open(lockfile, 'w') as lock: |
783 | lock.write(str(os.getpid())) |
784 | except OSError: |
785 | - print("{} ERROR: could not create lockfile {}".format( |
786 | - self.timestamp_now(), lockfile)) |
787 | + self.log('ERROR: could not create lockfile {0}'.format(lockfile)) |
788 | |
789 | - def file_locked(self, lockfile): |
790 | + def file_locked(self, lockfile='/tmp/glance_sync_slave.lock'): |
791 | if os.path.isfile(lockfile): |
792 | return True |
793 | else: |
794 | return False |
795 | |
796 | - def release_lock(self): |
797 | - lockfile = '/tmp/glance_sync_slave.lock' |
798 | + def release_lock(self, lockfile='/tmp/glance_sync_slave.lock'): |
799 | if os.path.isfile(lockfile): |
800 | try: |
801 | os.remove(lockfile) |
802 | except OSError as e: |
803 | - print(e) |
804 | - os.system("rm -f {}".format(lockfile)) |
805 | - |
806 | - def main(self, data_dir, source, lockfile): |
807 | - |
808 | + self.log(e) |
809 | + |
810 | + def set_filelock(self, lockfile='/tmp/glance_sync_slave.lock'): |
811 | if self.file_locked(lockfile): |
812 | - message = "{} WARNING: sync already in progress, exiting".format( |
813 | - self.timestamp_now()) |
814 | - print(message) |
815 | - sys.exit(1) |
816 | + self.log('WARNING: sync already in progress, exiting') |
817 | + sys.exit(2) |
818 | |
819 | self.create_lock(lockfile) |
820 | atexit.register(self.release_lock) |
821 | |
822 | - print("{} starting glance image sync run".format(self.timestamp_now())) |
823 | - |
824 | - # get images and metadata from master source |
825 | - print("{} getting images and metadata from master".format( |
826 | - self.timestamp_now())) |
827 | - self.download_data(source, data_dir) |
828 | - |
829 | - print("{} starting import into glance".format(self.timestamp_now())) |
830 | - self.glance_upload(data_dir) |
831 | - |
832 | - print("{} cleaning up files that were removed from master".format( |
833 | - self.timestamp_now())) |
834 | - self.glance_removal(data_dir) |
835 | - print("{} ending glance image sync slave run".format( |
836 | - self.timestamp_now())) |
837 | + def main(self): |
838 | + self.set_filelock() |
838 | + self.log('starting glance sync') |
839 | + self.log('getting metadata from master') |
840 | + self.download_metadata_from_master() |
841 | + processed_images_ids = self.parse_glance_slave_images() |
842 | + self.create_missing_slave_images(processed_images_ids) |
843 | + self.log('ending glance image sync slave run') |
844 | self.release_lock() |
845 | |
846 | if __name__ == '__main__': |
847 | - parser = argparse.ArgumentParser(description='Synchronize remote images to ' |
848 | - 'disk and import into glance ') |
849 | + parser = argparse.ArgumentParser(description='Synchronize remote images ' |
850 | + 'metadata to disk and import into glance') |
851 | parser.add_argument("-d", "--datadir", help="directory to write images to") |
852 | parser.add_argument("-s", "--source", help="full path to master rsync " |
853 | "source. Format: " |
854 | @@ -310,7 +614,6 @@ |
855 | "<directory>") |
856 | args = parser.parse_args() |
857 | |
858 | - data_dir = "" |
859 | if args.datadir: |
860 | data_dir = args.datadir |
861 | else: |
862 | @@ -323,5 +626,5 @@ |
863 | parser.print_help() |
864 | sys.exit('ERROR: please specify an image source to sync from') |
865 | |
866 | - slave = ImageSyncSlave() |
867 | - slave.main(data_dir, source, lockfile='/tmp/glance_sync_slave.lock') |
868 | + slave = ImageSyncSlave(data_dir, source) |
869 | + slave.main() |
870 | |
871 | === modified file 'reactive/glance-sync-slave.py' |
872 | --- reactive/glance-sync-slave.py 2016-05-19 17:30:30 +0000 |
873 | +++ reactive/glance-sync-slave.py 2017-08-03 07:40:40 +0000 |
874 | @@ -17,6 +17,7 @@ |
875 | fetch.apt_update() |
876 | fetch.apt_install('python-glanceclient') |
877 | fetch.apt_install('python-nose') |
878 | + fetch.apt_install('python-dateutil') |
879 | |
880 | # configure the variables that have a default value |
881 | configure_config_dir() |
882 | @@ -151,6 +152,26 @@ |
883 | def configure_custom_novarc(): |
884 | configure_novarc() |
885 | |
+@when_any('config.changed.master_password', 'config.changed.master_auth_url', 'config.changed.master_glance_url', 'config.changed.master_region') |
887 | +def configure_master_novarc(): |
888 | + context = {} |
889 | + for i in ('master_password', 'master_auth_url', |
890 | + 'master_glance_url', 'master_region'): |
891 | + value = hookenv.config(i) |
892 | + if not value: |
893 | + hookenv.status_set('blocked', 'master service credentials ' |
894 | + 'missing') |
895 | + return |
896 | + context[i] = value |
897 | + |
898 | + glancesync_novarc_file = os.path.join(config_dir, 'glancesync.novarc') |
899 | + |
900 | + templating.render(source='glancesync.novarc.j2', |
901 | + target=glancesync_novarc_file, |
902 | + owner='ubuntu', |
903 | + perms=0o600, |
+ context=context) |
905 | + hookenv.status_set('active', 'glancesync.novarc configured') |
906 | |
907 | @hook('{requires:keystone-admin}-relation-{joined,changed}') |
908 | def configure_relation_novarc(relation=None): |
909 | |
910 | === modified file 'templates/glance_sync_slave_cron.j2' |
911 | --- templates/glance_sync_slave_cron.j2 2017-01-11 03:55:07 +0000 |
912 | +++ templates/glance_sync_slave_cron.j2 2017-08-03 07:40:40 +0000 |
913 | @@ -6,7 +6,7 @@ |
914 | MAILTO={{ admin_email }} |
915 | SHELL=/bin/bash |
916 | |
917 | -{{ cron_frequency }} ubuntu bash -c 'source {{ config_dir}}/novarc && python {{ script_dir }}/glance_sync_slave.py -d {{ data_dir }} -s {{ sync_source }} >> {{ log_dir }}/glance_sync_slave.log 2>&1' |
+{{ cron_frequency }} ubuntu bash -c 'source {{ config_dir }}/glancesync.novarc && source {{ config_dir }}/novarc && python {{ script_dir }}/glance_sync_slave.py -d {{ data_dir }} -s {{ sync_source }} >> {{ log_dir }}/glance_sync_slave.log 2>&1' |
919 | |
920 | {% endif %} |
921 | |
922 | |
923 | === added file 'templates/glancesync.novarc.j2' |
924 | --- templates/glancesync.novarc.j2 1970-01-01 00:00:00 +0000 |
925 | +++ templates/glancesync.novarc.j2 2017-08-03 07:40:40 +0000 |
926 | @@ -0,0 +1,5 @@ |
927 | +export OS_PASSWORD_MASTER={{ master_password }} |
928 | +export OS_AUTH_URL_MASTER={{ master_auth_url }} |
929 | +export OS_GLANCE_URL_MASTER={{ master_glance_url }} |
930 | +export OS_REGION_NAME_MASTER={{ master_region }} |
931 | + |
932 | |
933 | === modified file 'templates/novarc.j2' |
934 | --- templates/novarc.j2 2016-04-26 17:54:16 +0000 |
935 | +++ templates/novarc.j2 2017-08-03 07:40:40 +0000 |
936 | @@ -1,4 +1,5 @@ |
937 | -export OS_AUTH_URL=http://{{ service_hostname }}:{{ service_port }}/v2.0 |
938 | +#export OS_AUTH_URL=http://{{ service_hostname }}:{{ service_port }}/v2.0 |
939 | +export OS_AUTH_URL=https://{{ service_hostname }}:{{ service_port }}/v2.0 |
940 | export OS_USERNAME={{ service_username }} |
941 | export OS_PASSWORD={{ service_password }} |
942 | export OS_TENANT_NAME={{ service_tenant_name }} |
1) My layer/branch (lp:~aluria/canonical-bootstack/bootstack-tele2-glance-sync-slave-layer) had issues when setting a few params via "juju set" (see RT#100292). That is why it was pushed under ~aluria rather than the ~canonical-bootstack user.
2) The glance-sync-slave Python script on that layer (in ~aluria) has some modifications intended for tele2 (i.e. tenant mapping by replacing REGION_MASTER with REGION_SLAVE). I think we should keep a fork of the main "glance-sync-slave/layer".
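Regarding the tenant-mapping note in (2), the substitution described there can be sketched outside the charm. This is a minimal illustration of the idea only; the function and tenant names are hypothetical, not the actual code in the ~aluria branch:

```python
def map_tenant_name(tenant, master_region, slave_region):
    """Derive the slave-side tenant name by substituting the master
    region token with the slave one (the mapping described above)."""
    return tenant.replace(master_region, slave_region)

# e.g. a per-region service tenant on the master maps to its slave twin
print(map_tenant_name('services_REGION_MASTER', 'REGION_MASTER', 'REGION_SLAVE'))
# -> services_REGION_SLAVE
```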
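For reviewers, the gist of the metadata handling this MP adds can be sketched standalone: glance-managed, read-only, and operator-defined extra properties are kept out of both the update payload and the list of properties removed on the slave. The property sets below are illustrative placeholders (the charm builds them from config and glanceclient), and for brevity one combined protected set is used for both steps, whereas the charm uses slightly different sets for each:

```python
# Hypothetical property sets; the charm derives these at runtime.
GLANCE_PROPERTIES = {'checksum', 'size', 'status'}
READONLY_PROPERTIES = {'created_at', 'updated_at'}
EXTRA_PROPERTIES = {'sync_ignore_me'}

PROTECTED = GLANCE_PROPERTIES | READONLY_PROPERTIES | EXTRA_PROPERTIES

def mangle_metadata(master, slave):
    """Return (payload, removed_props): properties to push to the
    slave, and slave-only properties that are safe to remove there."""
    # never push protected properties in the update payload
    payload = {k: v for k, v in master.items() if k not in PROTECTED}
    # only remove slave properties that vanished on the master
    # and are not protected
    removed = [k for k in slave if k not in master and k not in PROTECTED]
    return payload, removed

payload, removed = mangle_metadata(
    {'id': 'img-1', 'os_version': '16.04', 'checksum': 'abc'},
    {'id': 'img-1', 'old_tag': 'x', 'sync_ignore_me': 'keep'})
print(payload)   # -> {'id': 'img-1', 'os_version': '16.04'}
print(removed)   # -> ['old_tag']
```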