Merge lp:~darkmuggle-deactivatedaccount/cloud-utils/syncimgs into lp:cloud-utils
- syncimgs
- Merge into trunk
Status: | Merged |
---|---|
Merged at revision: | 185 |
Proposed branch: | lp:~darkmuggle-deactivatedaccount/cloud-utils/syncimgs |
Merge into: | lp:cloud-utils |
Diff against target: |
1113 lines (+266/-414) 10 files modified
bin/cloudimg-sync (+23/-4) bin/ubuntu-cloudimg-query2 (+90/-55) debian/control (+1/-0) syncimgs/build_json.py (+95/-28) syncimgs/cloudimg.py (+16/-13) syncimgs/easyrep.py (+3/-0) syncimgs/execute.py (+3/-3) syncimgs/syncmessage.py (+0/-206) syncimgs/syncmsg-reader.py (+0/-42) syncimgs/yaml_config.py (+35/-63) |
To merge this branch: | bzr merge lp:~darkmuggle-deactivatedaccount/cloud-utils/syncimgs |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Scott Moser (community) | Needs Fixing | ||
Review via email: mp+114538@code.launchpad.net |
Commit message
Description of the change
Further code cleanup and improvement per the previous MP.
syncimgs/syncmessage.py and syncimgs/syncmsg-reader.py:
- Removed as unneeded
bin/cloudimg-sync:
- Added example help to argparse commands
- Fixed bug where --config was required, preventing the --how and
--test-config CLI arguments from working
- URL for file downloads is now coming from build_json.py
- Minor whitespace cleanup
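The --config fix above can be sketched as follows. This is a minimal illustration of the pattern (dropping required=True and adding a manual check after parsing), not the branch's actual code; option names follow the description.

```python
import argparse

# --config is no longer required=True, so informational flags like
# --how and --test-config can run on their own; a post-parse check
# still rejects a plain invocation with no rule file.
parser = argparse.ArgumentParser(prog='cloudimg-sync')
parser.add_argument('--config', action='store', help='Configuration file')
parser.add_argument('--how', action='store_true', help='Show rule-file help')
parser.add_argument('--test-config', dest='test_config', action='store_true',
                    help='Show an example configuration file')

def exit_code(argv):
    """Return the exit status the tool would use for this argv."""
    opts = parser.parse_args(argv)
    if opts.how or opts.test_config:
        return 0  # informational modes need no --config
    if not opts.config:
        return 1  # nothing to do without a rule file
    return 0
```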
bin/ubuntu-cloudimg-query2:
- Added support for version number queries
- Removed hard-coded version names; replaced with python-distro-info
- Fixed minor bug where all results for --build were being displayed
- Minor white space changes for readability
- Added more examples for invocation
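The version-number query support works by mapping release numbers onto code names, as the diff does with python-distro-info. A minimal sketch of that mapping, using a hard-coded sample list in place of distro_info.UbuntuDistroInfo() so it stands alone:

```python
# Sample data standing in for what distro_info.UbuntuDistroInfo()
# returns via supported() and supported(result='release'); the real
# code queries python-distro-info instead of hard-coding these.
code_names = ['lucid', 'oneiric', 'precise', 'quantal']
releases = ['10.04 LTS', '11.10', '12.04 LTS', '12.10']

# Strip the " LTS" suffix so "12.04" matches what users type,
# mirroring the map(lambda ...) in the diff.
ver_codes = [ver.replace(' LTS', '') for ver in releases]

def resolve(token):
    """Map a version number (e.g. '12.04') to its code name;
    pass code names through unchanged."""
    if token in ver_codes:
        return code_names[ver_codes.index(token)]
    return token
```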
syncimgs/build_json.py:
- Added support for Query1 string formatting
- Added attributes to BuildFile and Registration objects when
iterating to better support the ubuntu-cloudimg-query2 and
cloudimg-sync programs
- Selection of host for download is now done by random selection
in build_json.py
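The random host selection amounts to picking a mirror with random.choice() each time a download URL is built, rather than always taking the first entry. A small sketch (the mirror list and path are illustrative, not the project's real configuration):

```python
import random

# Illustrative mirror list; build_json.py draws from
# self.mirrors_transfer instead.
mirrors = [
    'http://mirror-a.example.com',
    'http://mirror-b.example.com',
]
path = 'server/releases/precise/release/cloudimg-amd64.tar.gz'

# Spread download load by choosing a mirror at random per URL,
# instead of always using mirrors[0].
url = '%s/%s' % (random.choice(mirrors), path)
```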
syncimgs/execute.py:
- Fixed bug where the stdout and stderr tuple was being improperly
split.
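The tuple bug comes from Popen.communicate() returning (stdout, stderr); the old code stored the pair as one value and then indexed it inconsistently. The fix, as in the diff, is to unpack it at the call site. A self-contained sketch:

```python
from subprocess import PIPE, Popen

# communicate() returns a (stdout, stderr) tuple; unpack it so the
# two streams are never mixed up downstream.
run_cmd = Popen(['sh', '-c', 'echo out; echo err >&2'],
                stdout=PIPE, stderr=PIPE)
(output, error) = run_cmd.communicate()
```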
syncimgs/easyrep.py:
- Added set function; used by build_json.py to set query2 object
attributes
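The added set function is a thin wrapper over setattr(), mirroring the existing get() wrapper, so callers like build_json.py can attach arbitrary attributes to query2 objects by name. A standalone sketch of the idea (class trimmed to just these two methods):

```python
class EasyRep(object):
    """Minimal stand-in for syncimgs/easyrep.py's EasyRep:
    named attribute access by string key."""

    def get(self, name):
        return getattr(self, name)

    def set(self, name, value):
        # The new method: lets build_json.py do reg.set('sha1', ...)
        # instead of touching attributes directly.
        setattr(self, name, value)

obj = EasyRep()
obj.set('build_serial', '20120712')
```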
ubuntu-
- Replaced corrupted version with a valid keyring
syncimgs/yaml_config.py:
- Better documented how configuration rules work for
cloudimg-sync
Ben Howard (darkmuggle-deactivatedaccount) wrote : | # |
Scott Moser (smoser) wrote : | # |
- ubuntu-
the file you have here is zero length. Also, I've tested that the
version in trunk, when installed, successfully queries (no complaints
about a bad keyring).
- please add a debian/control dependency on python-distro-info, and remove the dependency on 'distro-info'
- confusing in build_json:
+ xarch = self.arch
+ if 'amd64' in xarch:
+ xarch = 'x86_64'
is there a reason you used 'in' there? why not ==?
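The reviewer's point is that 'in' on strings is a substring test, so it matches more than intended, while '==' checks for the exact value. A quick illustration (the second string is made up to show the pitfall):

```python
arch = 'amd64'

# 'in' is a substring test: it would also accept any string that
# merely contains 'amd64', e.g. a hypothetical vendor-tagged value.
substring_hit = 'amd64' in 'xen-amd64-hvm'

# '==' only accepts the exact architecture string.
exact_hit = 'xen-amd64-hvm' == arch
```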
- * The command can use the following as parametes:
^ spelling
- for list_cmd, I'm just curious what value today's date could bring:
- %(date)s: todays date in YYYY-MM-DD format
- for list_cmd, you say it must return '<tag> <build_serial> <arch>
Ben Howard (darkmuggle-deactivatedaccount) wrote : | # |
- ubuntu-
Fixed...it looks like my pull was corrupted.
- please add debian/control dependency on python-distro-info, remove dependency on
done
- confusing in build_json:
replaced "in" with "=="
- * The command can use the following as parametes:
Fixed
- for list_cmd, I'm just curious what value today's date could bring:
there isn't much value, other than that it is a valid replacement elsewhere
- for list_cmd, you say it must return '<tag> <build_serial> <arch>
clarified its usage
- 185. By Scott Moser
Preview Diff
1 | === modified file 'bin/cloudimg-sync' |
2 | --- bin/cloudimg-sync 2012-07-11 16:52:42 +0000 |
3 | +++ bin/cloudimg-sync 2012-07-12 16:31:27 +0000 |
4 | @@ -23,9 +23,24 @@ |
5 | logger.setLevel(logging.DEBUG) |
6 | |
7 | if __name__ == '__main__': |
8 | - parser = argparse.ArgumentParser() |
9 | - |
10 | - parser.add_argument('--config', action='store', required=True, |
11 | + parser = argparse.ArgumentParser(prog='Ubuntu Cloud Image Sync Tool', |
12 | + description=""" |
13 | +Rule based program for syncronizing and customizing Ubuntu Cloud Images |
14 | +""", |
15 | + epilog=""" |
16 | +Example usage: |
17 | + cloudimg-sync --config <configuration> |
18 | + Use <configuration> as the rule file |
19 | + |
20 | + cloudimg-sync --how |
21 | + Show how to write rule file |
22 | + |
23 | + cloudimg-sync --test-config |
24 | + Show an example configuration file |
25 | +""", |
26 | + formatter_class=argparse.RawDescriptionHelpFormatter) |
27 | + |
28 | + parser.add_argument('--config', action='store', |
29 | help='Configuration file') |
30 | parser.add_argument('--force', action='store_true', default=False, |
31 | help='Force option') |
32 | @@ -56,12 +71,16 @@ |
33 | logger.info('Displaying configuration help information') |
34 | yaml_config.config_help() |
35 | sys.exit(0) |
36 | + |
37 | elif opts.test_config: |
38 | - |
39 | logger.info('Displaying a test configuration') |
40 | print yaml_config.get_default_config() |
41 | sys.exit(0) |
42 | |
43 | + if not opts.config: |
44 | + logger.info('Must define --config, --how or --test_config') |
45 | + sys.exit(1) |
46 | + |
47 | logger.info('Setting up') |
48 | |
49 | configuration = yaml_config.read_yaml_file(opts.config) |
50 | |
51 | === modified file 'bin/ubuntu-cloudimg-query2' |
52 | --- bin/ubuntu-cloudimg-query2 2012-07-11 16:52:42 +0000 |
53 | +++ bin/ubuntu-cloudimg-query2 2012-07-12 16:31:27 +0000 |
54 | @@ -12,6 +12,7 @@ |
55 | from datetime import datetime, time, date |
56 | import syncimgs.build_json |
57 | import argparse |
58 | +import distro_info |
59 | import logging |
60 | import sys |
61 | import json |
62 | @@ -22,47 +23,7 @@ |
63 | logger.setLevel(logging.CRITICAL) |
64 | |
65 | |
66 | -def formatting_help(): |
67 | - print """ |
68 | -The following are valid format values for displaying Query2 data for clouds |
69 | - |
70 | - %(region)s - region name |
71 | - %(id)s - the cloud vendor ID |
72 | - %(instance_type)s - instance type |
73 | - %(cloud)s - cloud vendor name, i.e "ec2" |
74 | - %(ramdisk_id)s - ramdisk ID, largely obsolete |
75 | - %(registered_name)s - registered name in cloud system |
76 | - %(build_serial)s - the build serial ID |
77 | - %(path)s - Download URL for corresponding instance |
78 | - %(sha1)s - SHA1 of the file |
79 | - %(arch)s - architecture of the image |
80 | - |
81 | -The following are valid format values when running with "--builds": |
82 | - |
83 | - %(path)s - the path of the file |
84 | - %(sha1)s - the SHA1 checksum of the file |
85 | - %(sha512)s - the SHA512 checksum of the file |
86 | - %(description) - description of the file |
87 | - |
88 | -Example: |
89 | - ubuntu-cloudimg-query2 --format "%(build_serials) %(id)s" us-west-2 ec2 |
90 | - Show the build serial and the ids of the images for us-west-2 |
91 | - region of EC2 |
92 | - |
93 | - ubuntu-cloudimg-query2 --builds --format "%(path)s" precise armhf |
94 | - Show the files for Precise |
95 | - |
96 | -""" |
97 | - |
98 | - |
99 | -if __name__ == '__main__': |
100 | - |
101 | - parser = argparse.ArgumentParser(prog='Ubuntu Cloud Image Query2', |
102 | - description=""" |
103 | -This program uses the Query2 format for describing the Ubuntu Cloud Images. |
104 | -""", |
105 | - epilog=""" |
106 | - |
107 | +epilog_text = """ |
108 | Arguments: |
109 | All arguments after the arguments flags are parsed as parameters. After |
110 | arguments, the position does not matter. |
111 | @@ -80,13 +41,16 @@ |
112 | File Types: root.tar.gz, tar.gz, qcow2, manifest, vmdk, kernel, ovf |
113 | Release Tags: release, daily |
114 | Cloud: ec2 (other public clouds may be supported later) |
115 | - Ubuntu Releases: hardy, lucid, maverick, natty, oneiric, precise, quantal |
116 | + Ubuntu Release Code Names: %s |
117 | + Ubuntu Release Version Numbers: %s |
118 | |
119 | Defaults: |
120 | EC2 is the default cloud. |
121 | us-east-1 is the default region. |
122 | Ubuntu Server 12.04 (Precise) is the default image for query. |
123 | +""" |
124 | |
125 | +epilog_footer=""" |
126 | Example Usage: |
127 | ubuntu-cloudimg-query2 --include qcow2 --builds armhf |
128 | Display the URL for the qcow2 armhf images |
129 | @@ -98,7 +62,72 @@ |
130 | ubuntu-cloudimg-query2 quantal amd64 hvm |
131 | Display the AMI ID of the AMD64 HVM image for quantal in us-east-1 |
132 | |
133 | + ubuntu-cloudimg-query hvm eu-west-1 12.10 --format "%(ami)s %(serial)s" |
134 | + Display the AMI ID and the build serial for 12.10 |
135 | +""" |
136 | + |
137 | +def formatting_help(): |
138 | + print """ |
139 | +The following are common format values for both cloud and build queries |
140 | + |
141 | + %(path)s - Download URL for corresponding instance |
142 | + %(sha1)s - SHA1 of the file |
143 | + %(arch)s - architecture of the image |
144 | + %(url)s - download url of image |
145 | + %(host)s - host URL |
146 | + %(release)s - release tag, (i.e. release, daily) |
147 | + %(release_tag)s - release tag, (i.e. release, daily) |
148 | + %(stream)s - build type (i.e. server, desktop) |
149 | + %(bname)s - build type (i.e. server, desktop) |
150 | + %(xarch)s - machine architecture |
151 | + %(build_serial)s - the build serial ID |
152 | + %(serial)s - build serial number |
153 | + %(suggested name)s - name suggested for downloaded files |
154 | + |
155 | +The following are specific format values for querying Cloud data: |
156 | + |
157 | + %(region)s - region name |
158 | + %(id)s - the cloud vendor ID |
159 | + %(instance_type)s - instance type |
160 | + %(cloud)s - cloud vendor name, i.e "ec2" |
161 | + %(ramdisk_id)s - ramdisk ID, largely obsolete |
162 | + %(registered_name)s - registered name in cloud system |
163 | + %(build_serial)s - the build serial ID |
164 | + %(summary)s - summary of the image |
165 | + %(pname)s - virtualization type |
166 | + %(dlpath)s - download path |
167 | + |
168 | +The following are specific format values to "--builds": |
169 | + |
170 | + %(description) - description of the file |
171 | + %(file_type) - file type |
172 | + |
173 | +Example: |
174 | + ubuntu-cloudimg-query2 --format "%(build_serials) %(id)s" us-west-2 ec2 |
175 | + Show the build serial and the ids of the images for us-west-2 |
176 | + region of EC2 |
177 | + |
178 | + ubuntu-cloudimg-query2 --builds --format "%(path)s" precise armhf |
179 | + Show the files for Precise |
180 | + |
181 | +""" |
182 | + |
183 | + |
184 | +if __name__ == '__main__': |
185 | + |
186 | + udistro = distro_info.UbuntuDistroInfo() |
187 | + code_names = udistro.supported() |
188 | + ver_codes = map(lambda ver: ver.replace(" LTS",''), |
189 | + udistro.supported(result='release')) |
190 | + |
191 | + epilog = epilog_text % (", ".join(code_names), ", ".join(ver_codes)) + \ |
192 | + epilog_footer |
193 | + |
194 | + parser = argparse.ArgumentParser(prog='Ubuntu Cloud Image Query2', |
195 | + description=""" |
196 | +This program uses the Query2 format for describing the Ubuntu Cloud Images. |
197 | """, |
198 | + epilog=epilog, |
199 | formatter_class=argparse.RawDescriptionHelpFormatter) |
200 | |
201 | parser.add_argument('--url', action='store', |
202 | @@ -148,7 +177,10 @@ |
203 | if not opts.excluded: |
204 | opts.excluded = [] |
205 | |
206 | - if not opts.format: |
207 | + if not opts.format and opts.builds: |
208 | + opts.format = '%(serial)s %(url)s' |
209 | + |
210 | + elif not opts.format: |
211 | opts.format = '%(id)s' |
212 | |
213 | if opts.show_format: |
214 | @@ -166,9 +198,8 @@ |
215 | daily_re = re.compile('(daily)') |
216 | cloud_re = re.compile('(ec2)|(azure)') |
217 | base_re = re.compile('(server)|(desktop)') |
218 | - distro_re = \ |
219 | - re.compile('(hardy)|(lucid)|(maverick)|(natty)|(oneiric)|(precise)|(quantal)' |
220 | - ) |
221 | + distro_re = re.compile('(' + ")|(".join(code_names) + ')') |
222 | + ver_re = re.compile('(' + ")|(".join(ver_codes) + ')') |
223 | |
224 | for val in opts.args: |
225 | val = val.lower() |
226 | @@ -177,30 +208,33 @@ |
227 | if region_re.match(val): |
228 | key = 'region' |
229 | elif type_re.match(val): |
230 | - |
231 | key = 'instance_type' |
232 | + |
233 | elif daily_re.match(val): |
234 | - |
235 | key = 'release_tag' |
236 | val = 'daily' |
237 | + |
238 | elif release_re.match(val): |
239 | - |
240 | key = 'release_tag' |
241 | val = 'release' |
242 | + |
243 | elif base_re.match(val): |
244 | - |
245 | key = 'stream' |
246 | + |
247 | elif cloud_re.match(val): |
248 | - |
249 | key = 'cloud' |
250 | + |
251 | elif arch_re.match(val): |
252 | - |
253 | key = 'arch' |
254 | + |
255 | elif distro_re.match(val): |
256 | - |
257 | - key = 'distro' |
258 | + key = 'distro' |
259 | + |
260 | + elif ver_re.match(val): |
261 | + key = 'distro' |
262 | + val = code_names[ver_codes.index(val)] |
263 | + |
264 | elif file_re.match(val): |
265 | - |
266 | key = 'file_type' |
267 | |
268 | if key not in v.keys(): |
269 | @@ -295,6 +329,7 @@ |
270 | opts.included, |
271 | opts.excluded, |
272 | serial=opts.serial, |
273 | + output=opts.format, |
274 | all=opts.all, |
275 | ) |
276 | |
277 | |
278 | === modified file 'debian/control' |
279 | --- debian/control 2012-07-11 15:51:37 +0000 |
280 | +++ debian/control 2012-07-12 16:31:27 +0000 |
281 | @@ -15,6 +15,7 @@ |
282 | python, |
283 | python-paramiko, |
284 | python-yaml, |
285 | + python-distro-info, |
286 | util-linux (>= 2.17.2), |
287 | ${misc:Depends} |
288 | Description: cloud image management utilities |
289 | |
290 | === modified file 'syncimgs/build_json.py' |
291 | --- syncimgs/build_json.py 2012-07-11 16:52:42 +0000 |
292 | +++ syncimgs/build_json.py 2012-07-12 16:31:27 +0000 |
293 | @@ -66,10 +66,12 @@ |
294 | __name__ = 'Regsitration' |
295 | |
296 | def dump(self, |
297 | - output='%(build_serial)s %(id)s %(arch)s %(instance_type)s %(region)s ' |
298 | - ): |
299 | + output='%(build_serial)s %(id)s %(arch)s %(instance_type)s %(region)s', |
300 | + url_base=None, |
301 | + stream=None, |
302 | + ): |
303 | |
304 | - supported_keys = [ |
305 | + required_keys = [ |
306 | 'region', |
307 | 'id', |
308 | 'ramdisk_id', |
309 | @@ -80,23 +82,41 @@ |
310 | 'cloud', |
311 | 'instance_type', |
312 | 'arch', |
313 | + 'virt_type', |
314 | ] |
315 | |
316 | - for key in supported_keys: |
317 | + for key in required_keys: |
318 | if not hasattr(self, key): |
319 | setattr(self, key, None) |
320 | |
321 | + xarch = self.arch |
322 | + if 'amd64' == xarch: |
323 | + xarch = 'x86_64' |
324 | + |
325 | output = output % { |
326 | 'region': self.region_name, |
327 | 'id': self.published_id, |
328 | + 'ami': self.published_id, |
329 | 'instance_type': self.instance_type, |
330 | 'cloud': self.cloud, |
331 | 'ramdisk_id': self.ramdisk_id, |
332 | + 'summary': self.registered_name, |
333 | + 'bname': stream, |
334 | + 'pname': self.virt_type, |
335 | 'registered_name': self.registered_name, |
336 | 'build_serial': self.build_serial, |
337 | + 'dlpath': self.path, |
338 | + 'url': self.url, |
339 | + 'host': url_base, |
340 | 'path': self.path, |
341 | 'sha1': self.sha1, |
342 | 'arch': self.arch, |
343 | + 'xarch': xarch, |
344 | + 'release': self.release_tag, |
345 | + 'release_tag': self.release_tag, |
346 | + 'serial': self.build_serial, |
347 | + 'stream': self.stream, |
348 | + 'suggested name': self.suggested_name, |
349 | } |
350 | |
351 | return output |
352 | @@ -222,13 +242,14 @@ |
353 | |
354 | keep = latest.lower() |
355 | if keep == 'latest': |
356 | - index = sorted(b_list)[-1] |
357 | - yield Build(unpack=b_list[index]) |
358 | - skip = True |
359 | + |
360 | + if len(b_list) > 0: |
361 | + index = sorted(b_list)[-1] |
362 | + yield Build(unpack=b_list[index]) |
363 | + skip = True |
364 | + |
365 | elif not re.search(r'\w-\d', keep): |
366 | - |
367 | start_index = total_builds - int(keep.replace('n-', '')) |
368 | - |
369 | if start_index < 0: |
370 | start_index = 0 |
371 | |
372 | @@ -312,13 +333,29 @@ |
373 | if url_base: |
374 | url = "%s/%s" % (url_base, self.path) |
375 | |
376 | + xarch = self.arch |
377 | + if 'amd64' == xarch: |
378 | + xarch = 'x86_64' |
379 | + |
380 | output = output % { |
381 | 'description': self.description, |
382 | 'sha1': self.sha1, |
383 | 'sha512': self.sha512, |
384 | 'buildid': self.buildid, |
385 | - 'path': url, |
386 | + 'path': self.path, |
387 | + 'url': url, |
388 | + 'host': url_base, |
389 | 'file_type': self.file_type, |
390 | + 'bname': self.stream, |
391 | + 'distro': self.distro, |
392 | + 'buildid': self.buildid, |
393 | + 'serial': self.serial, |
394 | + 'arch': self.arch, |
395 | + 'xarch': xarch, |
396 | + 'release': self.release, |
397 | + 'release_tag': self.release, |
398 | + 'stream': self.stream, |
399 | + 'suggested name': self.suggested_name, |
400 | } |
401 | |
402 | return output |
403 | @@ -363,8 +400,8 @@ |
404 | self.distros[i['distro_code_name']] = [] |
405 | |
406 | if bt \ |
407 | - not in self.distros[i['distro_code_name' |
408 | - ]]: |
409 | + not in self.distros[i[ |
410 | + 'distro_code_name']]: |
411 | self.distros[i['distro_code_name' |
412 | ]].append(bt) |
413 | |
414 | @@ -493,8 +530,9 @@ |
415 | base = self.__get__('%s_%s' % (distro, stream)) |
416 | |
417 | for sub in base: |
418 | - if sub['release_tag'] == release_tag or release_tag \ |
419 | - == 'all': |
420 | + if sub['release_tag'] == release_tag or \ |
421 | + release_tag == 'all': |
422 | + |
423 | build_serial = sub['build_serial'] |
424 | ret[build_serial] = {} |
425 | |
426 | @@ -503,8 +541,19 @@ |
427 | ret[build_serial][arch] = [] |
428 | |
429 | for fl in sub['arches'][arch]['file_list']: |
430 | - anon = BuildFiles(buildid=sub['arches' |
431 | - ][arch]['build_id'], unpack=fl) |
432 | + anon = BuildFiles( |
433 | + buildid=sub['arches'][arch]['build_id'], |
434 | + unpack=fl, |
435 | + ) |
436 | + |
437 | + anon.set('suggested_name', anon.path.split('/')[-1]) |
438 | + anon.set('serial', build_serial) |
439 | + anon.set('distro', distro) |
440 | + anon.set('stream', stream) |
441 | + anon.set('arch', arch) |
442 | + anon.set('build_id', sub['arches'][arch]['build_id']) |
443 | + anon.set('release', sub['release_tag']) |
444 | + |
445 | ret[build_serial][arch].append(anon) |
446 | |
447 | return ret |
448 | @@ -550,14 +599,19 @@ |
449 | included, |
450 | excluded, |
451 | serial, |
452 | - all, |
453 | + all=all, |
454 | ): |
455 | |
456 | url_base = random.choice(self.mirrors_transfer) |
457 | + d = f.dump(output=output, |
458 | + url_base=url_base, |
459 | + ) |
460 | |
461 | - d = f.dump(output=output, url_base=url_base) |
462 | if d: |
463 | - data += "%s\n" % d |
464 | + if data: |
465 | + data += "\n%s" % d |
466 | + else: |
467 | + data = d |
468 | |
469 | return data |
470 | |
471 | @@ -594,11 +648,12 @@ |
472 | |
473 | for bf in results: |
474 | |
475 | - if serial and bf != serial: |
476 | - continue |
477 | + if serial: |
478 | + if bf != serial: |
479 | + continue |
480 | |
481 | elif not all and bf != serials[-1]: |
482 | - next |
483 | + continue |
484 | |
485 | for b_arch in results[bf]: |
486 | |
487 | @@ -640,7 +695,9 @@ |
488 | |
489 | builds = self.distro_builds(distro, stream, |
490 | release_tag=release_tag) |
491 | + |
492 | for bf in builds.iter_builds(latest=latest): |
493 | + |
494 | if release_tag != 'all' and release_tag != bf.release_tag: |
495 | print '%s %s' % (release_tag, bf.release_tag) |
496 | continue |
497 | @@ -660,15 +717,20 @@ |
498 | continue |
499 | |
500 | for reg in itypes.iter_registrations(): |
501 | - reg.set('path', '%s/%s' |
502 | - % (str(self.mirrors_transfer[0]), |
503 | - path)) |
504 | + reg.set('url', '%s/%s' |
505 | + % (random.choice(self.mirrors_transfer), |
506 | + path) |
507 | + ) |
508 | + reg.set('path', path) |
509 | + reg.set('suggested_name', path.split('/')[-1]) |
510 | reg.set('sha1', sha1) |
511 | reg.set('build_serial', bf.build_serial) |
512 | reg.set('instance_type', itypes.name) |
513 | reg.set('cloud', _cloud) |
514 | reg.set('arch', _arch) |
515 | reg.set('release_tag', bf.release_tag) |
516 | + reg.set('stream', stream) |
517 | + reg.set('virt_type', itypes.name) |
518 | yield reg |
519 | |
520 | def get_reg( |
521 | @@ -695,11 +757,13 @@ |
522 | cloud=cloud, |
523 | ): |
524 | |
525 | + url_base = random.choice(self.mirrors_transfer) |
526 | + |
527 | if all_regions: |
528 | - reg_data += "%s\n" % r.dump(output=output) |
529 | + reg_data += "%s\n" % r.dump(output=output, url_base=url_base) |
530 | |
531 | elif r.region_name == region: |
532 | - return r.dump(output=output) |
533 | + return r.dump(output=output, url_base=url_base) |
534 | |
535 | return reg_data |
536 | |
537 | @@ -840,15 +904,18 @@ |
538 | request = urllib2.Request(url) |
539 | request.add_header('Accept-encoding', 'gzip') |
540 | response = urllib2.urlopen(request) |
541 | + |
542 | fetched_json = None |
543 | |
544 | if response.info().get('Content-Encoding') == 'gzip': |
545 | buf = StringIO(response.read()) |
546 | f = gzip.GzipFile(fileobj=buf) |
547 | fetched_json = f.read() |
548 | + |
549 | elif response.info().get('Content-Type') \ |
550 | == 'application/x-bzip2': |
551 | fetched_json = bz2.decompress(response.read()) |
552 | + |
553 | else: |
554 | fetched_json = response.read() |
555 | |
556 | @@ -877,6 +944,7 @@ |
557 | |
558 | setattr(self, 'build_catalog', |
559 | BuildCatalog(json=json.loads(fetched_json))) |
560 | + |
561 | except IOError as e: |
562 | |
563 | raise Exception('Unable to fetch JSON from remote source.\n%s' |
564 | @@ -884,7 +952,6 @@ |
565 | |
566 | return False |
567 | |
568 | - |
569 | # Example of how to parse out some details...still needs work |
570 | #logger.setLevel(logging.DEBUG) |
571 | #cloudjson = CloudJSON(gpg_verify=True, url="http://cloud-images.ubuntu.com/query2/server/quantal/ec2.json.bz2") |
572 | |
573 | === modified file 'syncimgs/cloudimg.py' |
574 | --- syncimgs/cloudimg.py 2012-07-11 16:52:42 +0000 |
575 | +++ syncimgs/cloudimg.py 2012-07-12 16:31:27 +0000 |
576 | @@ -178,9 +178,10 @@ |
577 | stream, |
578 | ): |
579 | """Basic replacements of: |
580 | - - %(distro)s: code name of the distro |
581 | + - %(distro)s: code name of the distro |
582 | - %(version)s: suite version name, i.e 12.04 |
583 | - %(date)s: todays date in YYYY-MM-DD format |
584 | + - %(stream)s: stream of build, i.e. server, desktop |
585 | """ |
586 | |
587 | return string % { |
588 | @@ -778,20 +779,20 @@ |
589 | logger.warn(' Already registered, skipping processing' |
590 | ) |
591 | continue |
592 | + |
593 | elif reg and not rec: |
594 | - |
595 | logger.warn(' Image was registered _outside_ of this program!' |
596 | ) |
597 | continue |
598 | + |
599 | elif not reg and rec: |
600 | - |
601 | logger.warn(' Image is recorded in logs as registered, but is missing.' |
602 | ) |
603 | logger.warn(' This is likely due to a problem with your registration script!' |
604 | ) |
605 | logger.warn(' Re-processing image!') |
606 | + |
607 | else: |
608 | - |
609 | logger.info(' Arch %s is cleared for processing' |
610 | % arch) |
611 | |
612 | @@ -819,8 +820,6 @@ |
613 | continue |
614 | |
615 | # Do the work |
616 | - |
617 | - url = '%s/%s' % (d_conf.transfer, fd.path) |
618 | local_file = '%s/%s' % (d_conf.sync_dir, |
619 | fd.path) |
620 | pristine_file = '%s/%s' % (d_conf.pristine, |
621 | @@ -833,7 +832,7 @@ |
622 | % fd.file_type) |
623 | logger.info(' Expected SHA1: %s' |
624 | % fd.sha1) |
625 | - logger.info(' Fetch URL: %s' % url) |
626 | + logger.info(' Fetch URL: %s' % fd.url) |
627 | logger.info(' Local Path: %s' |
628 | % local_file) |
629 | logger.info(' Log Path: %s' |
630 | @@ -843,7 +842,7 @@ |
631 | pristine_file = None |
632 | |
633 | furl = URLFetcher( |
634 | - url, |
635 | + fd.url, |
636 | fd.sha1, |
637 | local_file, |
638 | conf.download_log, |
639 | @@ -852,10 +851,11 @@ |
640 | pristine=pristine_file, |
641 | spaces=' ', |
642 | ) |
643 | + |
644 | if not furl.get(): |
645 | print furl |
646 | raise Exception('Failed to download %s' |
647 | - % url) |
648 | + % fd.url) |
649 | |
650 | replacements = self.__replacements__( |
651 | fd, |
652 | @@ -909,13 +909,16 @@ |
653 | for line in out.splitlines(): |
654 | if emitted_re.match(line): |
655 | local_file = \ |
656 | - line.replace('::EMITTED-FILE::', '') |
657 | + line.replace('::EMITTED-FILE::', '') |
658 | + |
659 | replacements['local'] = \ |
660 | - local_file |
661 | + local_file |
662 | + |
663 | logger.info('%sCustom command emitted: %s' |
664 | - % (spaces, local_file)) |
665 | + % (spaces, local_file)) |
666 | + |
667 | logger.info('%sUsing emitted file for publishing' |
668 | - % spaces) |
669 | + % spaces) |
670 | else: |
671 | |
672 | logger.critical(' FAILED TO CUSTOMIZE. Processing of this build\n IS ABORTED! Other builds will be processed' |
673 | |
674 | === modified file 'syncimgs/easyrep.py' |
675 | --- syncimgs/easyrep.py 2012-07-11 16:52:42 +0000 |
676 | +++ syncimgs/easyrep.py 2012-07-12 16:31:27 +0000 |
677 | @@ -77,5 +77,8 @@ |
678 | def get(self, name): |
679 | return self.__get__(name) |
680 | |
681 | + def set(self, name, value): |
682 | + setattr(self, name, value) |
683 | + |
684 | |
685 | # vi: ts=4 expandtab |
686 | |
687 | === modified file 'syncimgs/execute.py' |
688 | --- syncimgs/execute.py 2012-07-11 16:52:42 +0000 |
689 | +++ syncimgs/execute.py 2012-07-12 16:31:27 +0000 |
690 | @@ -57,18 +57,18 @@ |
691 | '\n'.join(cmds))) |
692 | |
693 | run_cmd = Popen([temp_f], shell=True, stdout=PIPE, stderr=PIPE) |
694 | - self.output = run_cmd.communicate() |
695 | + (self.output, self.error) = run_cmd.communicate() |
696 | |
697 | if 'output' in kargs: |
698 | if kargs['output'] == 'screen': |
699 | print self.output |
700 | elif kargs['output'] == 'log': |
701 | |
702 | - for line in self.output[0].splitlines(): |
703 | + for line in self.output.splitlines(): |
704 | logger.info('%sCMD stdout: %s ' % (self.spaces, |
705 | line)) |
706 | |
707 | - for line in self.output[1].splitlines(): |
708 | + for line in self.error.splitlines(): |
709 | logger.info('%sCMD stderr: %s ' % (self.spaces, |
710 | line)) |
711 | |
712 | |
713 | === removed file 'syncimgs/syncmessage.py' |
714 | --- syncimgs/syncmessage.py 2012-07-11 16:52:42 +0000 |
715 | +++ syncimgs/syncmessage.py 1970-01-01 00:00:00 +0000 |
716 | @@ -1,206 +0,0 @@ |
717 | -#!/usr/bin/python |
718 | -# -*- coding: utf-8 -*- |
719 | - |
720 | -## Copyright (C) 2011 Ben Howard <ben.howard@canonical.com> |
721 | -## Date: 25 February 2012 |
722 | -## |
723 | -## This comes with ABSOLUTELY NO WARRANTY; for details see COPYING. |
724 | -## This is free software, and you are welcome to redistribute it |
725 | -## under certain conditions; see copying for details. |
726 | - |
727 | -# Simple messaging class |
728 | - |
729 | -from boto.sqs.connection import SQSConnection |
730 | -from boto.sqs.message import Message |
731 | -import argparse |
732 | -import json |
733 | -import re |
734 | -import uuid |
735 | -import pickle |
736 | -import platform |
737 | -import sys |
738 | - |
739 | - |
740 | -class SyncMessage: |
741 | - |
742 | - __name__ = 'SyncMessage' |
743 | - |
744 | - def __init__(self, **kargs): |
745 | - self.uuid = str(uuid.uuid1()) |
746 | - self.__python_version = platform.python_version() |
747 | - ( |
748 | - self.__system, |
749 | - self.__node, |
750 | - self.__release, |
751 | - self.__version, |
752 | - self.__machine, |
753 | - self.__process, |
754 | - ) = platform.uname() |
755 | - self.json_key = '%s_' % self.uuid |
756 | - |
757 | - for key in kargs: |
758 | - if key == 'json': |
759 | - try: |
760 | - j = None |
761 | - with open(kargs[key], 'rb') as f: |
762 | - j = json.load(f) |
763 | - f.close() |
764 | - |
765 | - if j: |
766 | - for item in j: |
767 | - setattr(self, '%s_%s' % (self.json_key, |
768 | - item), j[item]) |
769 | - except IOError, e: |
770 | - |
771 | - print e |
772 | - sys.exit(1) |
773 | - else: |
774 | - setattr(self, key, kargs[key]) |
775 | - |
776 | - def iter_unpack(self, key=None): |
777 | - if not key: |
778 | - key = self.json_key |
779 | - |
780 | - key_re = re.compile('%s.*' % key) |
781 | - for val in self.__dict__.keys(): |
782 | - if key_re.match(val): |
783 | - nkey = val.replace('%s_' % key, '') |
784 | - yield (nkey, getattr(self, val)) |
785 | - |
786 | - def unpack(self, key=None, pretty=False): |
787 | - pack = {} |
788 | - for (key, val) in self.iter_unpack(key=key): |
789 | - pack[key] = val |
790 | - |
791 | - if pretty: |
792 | - return json.dumps(pack) |
793 | - |
794 | - return pack |
795 | - |
796 | - def passed(self): |
797 | - try: |
798 | - return self.success |
799 | - except KeyError: |
800 | - return None |
801 | - |
802 | - def failed(self): |
803 | - try: |
804 | - return self.success |
805 | - except KeyError: |
806 | - return None |
807 | - |
808 | - def get(self, key): |
809 | - try: |
810 | - return getattr(self, key) |
811 | - except KeyError: |
812 | - return None |
813 | - |
814 | - def set(self, key, value): |
815 | - setattr(self, key, value) |
816 | - |
817 | - def ack(self, uuid): |
818 | - try: |
819 | - if getattr(self, 'ack') == uuid: |
820 | - return True |
821 | - else: |
822 | - return False |
823 | - except KeyError: |
824 | - return None |
825 | - |
826 | - def __repr__(self): |
827 | - ret_string = '' |
828 | - for i in self.__dict__.keys(): |
829 | - if not ret_string: |
830 | - ret_string = '%s=%s' % (i, getattr(self, i)) |
831 | - else: |
832 | - |
833 | - ret_string = ret_string + ', %s=%s' % (i, getattr(self, |
834 | - i)) |
835 | - |
836 | - ret_string = str('%s(%s)' % (self.__class__.__name__, |
837 | - ret_string)) |
838 | - return ret_string |
839 | - |
840 | - |
841 | -class SQSConn: |
842 | - |
843 | - def __init__( |
844 | - self, |
845 | - access_key, |
846 | - secret_key, |
847 | - queue_name, |
848 | - ): |
849 | - self.conn = SQSConnection(access_key, secret_key) |
850 | - self.queue = self.conn.create_queue(queue_name) |
851 | - |
852 | - def connected(self): |
853 | - c = False |
854 | - q = False |
855 | - if self.conn: |
856 | - c = True |
857 | - if self.queue: |
858 | - q = True |
859 | - |
860 | - return (c, q) |
861 | - |
862 | - def send(self, syncmessage): |
863 | - |
864 | - if not isinstance(syncmessage, SyncMessage): |
865 | - raise Exception('Invalid message type') |
866 | - |
867 | - if self.queue: |
868 | - m = Message() |
869 | - m.set_body(pickle.dumps(syncmessage)) |
870 | - self.queue.write(m) |
871 | - else: |
872 | - raise Exception('No Queue Connection!') |
873 | - |
874 | - def get(self, delete=False): |
875 | - if self.queue: |
876 | - while self.queue.count > 0: |
877 | - try: |
878 | - m = self.queue.read() |
879 | - if m: |
880 | - message = pickle.loads(m.get_body()) |
881 | - yield message |
882 | - |
883 | - if delete: |
884 | - m.delete() |
885 | - else: |
886 | - break |
887 | - except pickle.UnpicklingError as e: |
888 | - raise 'Error de-pickling!\n%s' % e |
889 | - |
890 | - |
891 | -if __name__ == '__main__': |
892 | - |
893 | - parser = argparse.ArgumentParser() |
894 | - parser.add_argument('--secret_key', action='store', required=True, |
895 | - help='AWS Secret Key') |
896 | - parser.add_argument('--access_key', action='store', required=True, |
897 | - help='AWS Access Key/ID') |
898 | - parser.add_argument('--queue', action='store', required=True, |
899 | - help='Queue to use for storing message') |
900 | - parser.add_argument('--success', dest='success', action='store_true' |
901 | - , help='Record command as successfull') |
902 | - parser.add_argument('--fail', dest='success', action='store_false', |
903 | - help='Record command as a failure') |
904 | - parser.add_argument('--msg', action='store', required=True, |
905 | - help='Message component') |
906 | - parser.add_argument('--json', action='store', |
907 | - help='Name of JSON file containing futher key/value pairs' |
908 | - ) |
909 | - |
910 | - opts = parser.parse_args() |
911 | - |
912 | - # Prepare the message |
913 | - |
914 | - message = SyncMessage(msg=opts.msg, json=opts.json, |
915 | - success=opts.success) |
916 | - |
917 | - # Write the message |
918 | - |
919 | - sqsconn = SQSConn(opts.access_key, opts.secret_key, opts.queue) |
920 | - sqsconn.send(message) |
921 | - |
922 | -# vi: ts=4 expandtab |
923 | |
924 | === removed file 'syncimgs/syncmsg-reader.py' |
925 | --- syncimgs/syncmsg-reader.py 2012-07-11 16:52:42 +0000 |
926 | +++ syncimgs/syncmsg-reader.py 1970-01-01 00:00:00 +0000 |
927 | @@ -1,42 +0,0 @@ |
928 | -#!/usr/bin/python |
929 | -# -*- coding: utf-8 -*- |
930 | - |
931 | -## Copyright (C) 2011 Ben Howard <ben.howard@canonical.com> |
932 | -## Date: 25 February 2012 |
933 | -## |
934 | -## This comes with ABSOLUTELY NO WARRANTY; for details see COPYING. |
935 | -## This is free software, and you are welcome to redistribute it |
936 | -## under certain conditions; see copying for details. |
937 | - |
938 | -# Simple Utility for reading SQS Message Queue |
939 | - |
940 | -import argparse |
941 | -from syncimgs.syncmessage import SQSConn |
942 | - |
943 | -if __name__ == '__main__': |
944 | - |
945 | - parser = argparse.ArgumentParser() |
946 | - parser.add_argument('--secret_key', action='store', required=True, |
947 | - help='AWS Secret Key') |
948 | - parser.add_argument('--access_key', action='store', required=True, |
949 | - help='AWS Access Key/ID') |
950 | - parser.add_argument('--queue', action='store', required=True, |
951 | - help='Queue to use for storing message') |
952 | - parser.add_argument('--all', action='store_true', |
953 | - help='Read all queue items') |
954 | - parser.add_argument('--max', action='store', type=int, |
955 | - help='Max number of elements to read') |
956 | - |
957 | - opts = parser.parse_args() |
958 | - sqsconn = SQSConn(opts.access_key, opts.secret_key, opts.queue) |
959 | - |
960 | - counted = 0 |
961 | - for item in sqsconn.get(): |
962 | - print item.unpack(pretty=True) |
963 | - print '\n' |
964 | - |
965 | - if opts.max and not opts.all: |
966 | - if opts.max >= counted: |
967 | - break |
968 | - |
969 | -# vi: ts=4 expandtab |
970 | |
971 | === modified file 'syncimgs/yaml_config.py' |
972 | --- syncimgs/yaml_config.py 2012-07-11 16:52:42 +0000 |
973 | +++ syncimgs/yaml_config.py 2012-07-12 16:31:27 +0000 |
974 | @@ -139,59 +139,14 @@ |
975 | |
976 | def config_help(): |
977 | print """ |
978 | -The following rules apply: |
979 | - 1. There must be a single section "--- !Control" defined; that is the control |
980 | - section that defines the basis for downloading and mirroring the images. |
981 | - 2. There must be a single section "--- !GeneralRules" defined; that section |
982 | - defines the default behavior of the mirroring. |
983 | - 3. There may be any number of "--- !SuiteRuleOverride" defined, which allows |
984 | - you to override general rules. |
985 | - 4. Each SuiteRuleOverride will be treated a separate and distinct task on a per |
986 | - suite basis. |
987 | - |
988 | -#cloudimg-sync-config: |
989 | -sync-dir: /srv/cloudimgs/data |
990 | -pristine: /srv/cloudimgs/pristine |
991 | -history_log: /srv/cloudimgs/history.log |
992 | -download_log: /srv/cloudimgs/download.log |
993 | -host_url: http://cloud-images.ubuntu.com/query2/server/release/build.json |
994 | -gpg_validate: True |
995 | -suites: { available, hardy, lucid, maverick, natty, oneiric, precise } |
996 | -stream: [ desktop, server ] |
997 | -publish: [ qcow2 ] |
998 | -mirror: [ tar.gz, manifest ] |
999 | -name_convention: %(name)s-%(build_date)s |
1000 | -copy_pristine: {True, False} |
1001 | -max_dailies: {all,#,latest} |
1002 | -max_milestones: {all,#,latest} |
1003 | -keep_pre_release: {True, False} |
1004 | -arches: [ i386, amd64 ] |
1005 | -mirror_arches: [ armel, armhf ] |
1006 | -check_types: [ manifest ] |
1007 | -customize_types: [ qcow2 ] |
1008 | -custom_cmd: | |
1009 | - /bin/true |
1010 | -list_cmd: | |
1011 | - /bin/true |
1012 | -check_cmd: | |
1013 | - /bin/false |
1014 | -publish_cmd: | |
1015 | - /bin/true |
1016 | -unpublish_cmd: | |
1017 | - /bin/true |
1018 | -overrides: |
1019 | - -precise: |
1020 | - publish: [ qcow2 ] |
1021 | - mirror: [ tar.gz ] |
1022 | - copy_pristine: False |
1023 | - -oneiric: |
1024 | - publish: [ tar.gz ] |
1025 | - mirror: [ qcow2 ] |
1026 | - mirror_arches: [ armel ] |
1027 | - check_cmd: | |
1028 | - /bin/false |
1029 | - |
1030 | - |
1031 | +cloudimg-sync is a tool for mirroring, customizing, and registering the |
1032 | +Ubuntu Cloud Images on arbitrary clouds using a rule engine. |
1033 | + |
1034 | +Example configuration: |
1035 | +--------------------------- |
1036 | +""" |
1037 | + get_default_config() |
1038 | + print """ |
1039 | --------------------------- |
1040 | |
1041 | Overrides: |
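Editor's note: the inline sample configuration removed above is now produced by `get_default_config()`. For readers of this proposal, a minimal sketch of the three YAML document types the rule engine expects, using the section names and keys from the removed help text (values are illustrative only), might look like:

```yaml
# Sketch only -- section names come from the removed help text;
# the values shown here are illustrative, not defaults.
--- !Control
sync-dir: /srv/cloudimgs/data
pristine: /srv/cloudimgs/pristine
host_url: http://cloud-images.ubuntu.com/query2/server/release/build.json
gpg_validate: True
--- !GeneralRules
suites: [ precise ]
stream: [ server ]
publish: [ qcow2 ]
mirror: [ tar.gz, manifest ]
arches: [ i386, amd64 ]
--- !SuiteRuleOverride
# Per the rules above, each override is a separate, per-suite task.
suite: precise
copy_pristine: False
```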
1042 | @@ -269,7 +224,7 @@ |
1043 | |
1044 | Custom Commands: |
1045 | |
1046 | - Custom commands are indented lists of arbitrary CLI commands or language commands. |
1047 | + Custom commands are lists of arbitrary CLI commands or language commands. |
1048 | If your command(s) requires an interpreter, use "#!/..." syntax. For |
1049 | example, to run a Perl command, you would use: |
1050 | |
1051 | @@ -285,8 +240,7 @@ |
1052 | |
1053 | |
1054 | If using multiple "#!/..." files, it is recommended to use files instead of storing |
1055 | - these commands in the configuration file. Each "#!/.." encountered is interpreted to |
1056 | - be a different set of commands. |
1057 | + these commands in the configuration file. |
1058 | |
1059 | To pass arguments to custom commands, you can use any of the string replacements |
1060 | explained above. The example above passes the value JSON value of the file's SHA512 |
1061 | @@ -303,27 +257,45 @@ |
1062 | This command is called with substitutions. For example: |
1063 | /srv/cloudimgs/bin/publish-ec2 %(directory)s %(file)s %(distro)s %(arch)s %(build_serial)s |
1064 | |
1065 | + This command is run on _each_ file that is listed in the check file list. |
1066 | + |
1067 | + If the command exits successfully, then image processing for that specific build serial |
1068 | + continues normally. |
1069 | + |
1070 | custom_cmd: Commands for customizing the image, such as installing keys or packages |
1071 | These commands _are not_ run in a chroot. |
1072 | |
1073 | The command is called with the directory of the downloaded files and the file |
1074 | to be customized. |
1075 | |
1076 | - If the command exits succesfully, then the last line should be the name of the |
1077 | + If the command exits successfully, then the last line should be the name of the |
1078 | file to be published. If there is no output from the command, but it |
1079 | successfully exits, then the filename is assumed to be unchanged. |
1080 | |
1081 | - list_cmd: Indented list of command(s) to run a query to get a listing of names of |
1082 | + list_cmd: Command(s) to run a query to get a listing of names of |
1083 | published and/or mirrored images. |
1084 | |
1085 | - The command _MUST_ return registratrations with the following rules: |
1086 | - * One registration per line |
1087 | - * Each line formated as: <tag> <build_serial> <arch> |
1088 | - * The command _must_ accept "%(distro)s" or "%(version)s" as the only parameter |
1089 | + The command _MUST_ return registrations with the following rules: |
1090 | + * One registration per line, formatted as: |
1091 | + <tag> <build_serial> <arch> |
1092 | + * The command can use the following as parameters: |
1093 | + - %(distro)s: code name of the distro |
1094 | + - %(version)s: suite version number, e.g. 12.04 |
1095 | + - %(date)s: today's date in YYYY-MM-DD format |
1096 | + - %(stream)s: stream of the build, e.g. server, desktop |
1097 | + |
1098 | + This option should be used to _check_ if an image should be processed or not. |
1099 | + |
1100 | + A non-zero exit status is treated as a failure. In the event of a failure, the program |
1101 | + will skip the build serial. |
1102 | + |
1103 | + State tracking is done both locally and by cloud registration if list_cmd is defined. |
1104 | + The output of list_cmd is authoritative, while the local state tracking is used |
1105 | + purely for logging and to reduce bandwidth usage. |
1106 | |
1107 | publish_cmd: This command publishes an image. |
1108 | |
1109 | - The command will be passed subsitutions defined above. For example: |
1110 | + The command will be passed substitutions defined above. For example: |
1111 | /srv/cloudimgs/bin/publish-ec2 %(directory)s %(file)s %(distro)s %(arch)s %(build_serial)s |
1112 | |
1113 | * You must define the inputs. |
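The `%(...)s` substitutions documented above are plain Python mapping-based string formatting. A minimal sketch of how a command template like the `publish_cmd` example expands (the paths and values below are illustrative, not taken from the tool):

```python
# Illustrative only: expands a publish_cmd-style template using
# Python %-style mapping substitution, as the help text describes.
template = ("/srv/cloudimgs/bin/publish-ec2 "
            "%(directory)s %(file)s %(distro)s %(arch)s %(build_serial)s")

# Hypothetical substitution values for one build.
subs = {
    'directory': '/srv/cloudimgs/data/precise',
    'file': 'precise-server-cloudimg-amd64.tar.gz',
    'distro': 'precise',
    'arch': 'amd64',
    'build_serial': '20120712',
}

cmd = template % subs
print(cmd)
```

A missing key in the mapping raises `KeyError`, which is why the help text stresses that you must define the inputs your command template references.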
I'm going to work tomorrow on getting execute.py to not shell out. I'll submit another MP for that.