Merge lp:~cjwatson/ubuntu-system-image/cdimage-custom into lp:~registry/ubuntu-system-image/client

Proposed by Colin Watson
Status: Superseded
Proposed branch: lp:~cjwatson/ubuntu-system-image/cdimage-custom
Merge into: lp:~registry/ubuntu-system-image/client
Diff against target: 7377 lines (+7236/-0) (has conflicts)
25 files modified
.bzrignore (+9/-0)
README (+12/-0)
bin/copy-image (+308/-0)
bin/generate-keyrings (+87/-0)
bin/generate-keys (+61/-0)
bin/import-images (+305/-0)
bin/set-phased-percentage (+90/-0)
bin/si-shell (+79/-0)
etc/config.example (+48/-0)
lib/systemimage/config.py (+206/-0)
lib/systemimage/diff.py (+242/-0)
lib/systemimage/generators.py (+1173/-0)
lib/systemimage/gpg.py (+239/-0)
lib/systemimage/tools.py (+367/-0)
lib/systemimage/tree.py (+999/-0)
tests/generate-keys (+52/-0)
tests/run (+60/-0)
tests/test_config.py (+281/-0)
tests/test_diff.py (+265/-0)
tests/test_generators.py (+1039/-0)
tests/test_gpg.py (+163/-0)
tests/test_static.py (+78/-0)
tests/test_tools.py (+297/-0)
tests/test_tree.py (+679/-0)
utils/check-latest (+97/-0)
Conflict adding file .bzrignore.  Moved existing file to .bzrignore.moved.
To merge this branch: bzr merge lp:~cjwatson/ubuntu-system-image/cdimage-custom
Reviewer: Registry Administrators (status: Pending)
Review via email: mp+237941@code.launchpad.net

This proposal has been superseded by a proposal from 2014-10-10.

Commit message

Add a new cdimage-custom generator.

Description of the change

Add a new cdimage-custom generator.

This is basically just a clone-and-hack of cdimage-ubuntu, simplified somewhat. It goes with recent changes to ubuntu-cdimage, all with the aim of fixing bug 1367332 (moving some click packages to /custom) in a single step for the community Ubuntu images.
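
For illustration, a channel could wire the new generator in through the usual files mechanism in etc/config, alongside cdimage-ubuntu and cdimage-device. This is a sketch only: the channel name and series are hypothetical, and it assumes cdimage-custom takes the same cdimage path and series arguments as cdimage-ubuntu:

    [channel_trusty-touch]
    type = auto
    versionbase = 1
    files = ubuntu, device, custom, version
    file_ubuntu = cdimage-ubuntu;daily-preinstalled;trusty,import=any
    file_device = cdimage-device;daily-preinstalled;trusty,import=any
    file_custom = cdimage-custom;daily-preinstalled;trusty,import=any
    file_version = version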


Unmerged revisions

246. By Colin Watson

Add a new cdimage-custom generator.

245. By Stéphane Graber

Drop system/android/cache/recovery from core image.

244. By Stéphane Graber

Hash http filepaths by default using a combination of the URL and version string, update code to pass current pep-8 test.

243. By Stéphane Graber

Fix variable name conflicts.

242. By Stéphane Graber

Use a comma as the separator to avoid ini parsing errors.

241. By Stéphane Graber

Add support for device overrides.

240. By Stéphane Graber

Add device name to version tarball.

239. By Stéphane Graber

Fix incorrect path for download cache in core image

238. By Stéphane Graber

Add /android/cache/recovery to core builds for now.

237. By Stéphane Graber

Skip android bits for non-touch

Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2014-10-10 11:11:17 +0000
4@@ -0,0 +1,9 @@
5+etc/config
6+lib/phablet/__pycache__
7+secret/gpg/keyrings/*
8+secret/gpg/keys/*
9+secret/ssh/*
10+tests/coverage
11+tests/keys/*
12+www/*
13+state/*
14
15=== renamed file '.bzrignore' => '.bzrignore.moved'
16=== added file 'README'
17--- README 1970-01-01 00:00:00 +0000
18+++ README 2014-10-10 11:11:17 +0000
19@@ -0,0 +1,12 @@
20+Runtime dependencies:
21+ - pxz | xz-utils
22+ - python3, python3-gpgme | python, python-gpgme
23+ - e2fsprogs
24+ - android-tools-fsutils
25+ - abootimg
26+
27+Test dependencies:
28+ - python-mock, python3-mock
29+ - python-coverage, python3-coverage
30+ - pep8
31+ - pyflakes3, pyflakes
32
33=== added directory 'bin'
34=== added file 'bin/copy-image'
35--- bin/copy-image 1970-01-01 00:00:00 +0000
36+++ bin/copy-image 2014-10-10 11:11:17 +0000
37@@ -0,0 +1,308 @@
38+#!/usr/bin/python
39+# -*- coding: utf-8 -*-
40+
41+# Copyright (C) 2013 Canonical Ltd.
42+# Author: Stéphane Graber <stgraber@ubuntu.com>
43+
44+# This program is free software: you can redistribute it and/or modify
45+# it under the terms of the GNU General Public License as published by
46+# the Free Software Foundation; version 3 of the License.
47+#
48+# This program is distributed in the hope that it will be useful,
49+# but WITHOUT ANY WARRANTY; without even the implied warranty of
50+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
51+# GNU General Public License for more details.
52+#
53+# You should have received a copy of the GNU General Public License
54+# along with this program. If not, see <http://www.gnu.org/licenses/>.
55+
56+import json
57+import os
58+import sys
59+sys.path.insert(0, os.path.join(sys.path[0], os.pardir, "lib"))
60+
61+from systemimage import config, generators, tools, tree
62+
63+import argparse
64+import fcntl
65+import logging
66+
67+if __name__ == '__main__':
68+ parser = argparse.ArgumentParser(description="image copier")
69+ parser.add_argument("source_channel", metavar="SOURCE-CHANNEL")
70+ parser.add_argument("destination_channel", metavar="DESTINATION-CHANNEL")
71+ parser.add_argument("device", metavar="DEVICE")
72+ parser.add_argument("version", metavar="VERSION", type=int)
73+ parser.add_argument("-k", "--keep-version", action="store_true",
74+ help="Keep the original verison number")
75+ parser.add_argument("--verbose", "-v", action="count", default=0)
76+
77+ args = parser.parse_args()
78+
79+ # Setup logging
80+ formatter = logging.Formatter(
81+ "%(asctime)s %(levelname)s %(message)s")
82+
83+ levels = {1: logging.ERROR,
84+ 2: logging.WARNING,
85+ 3: logging.INFO,
86+ 4: logging.DEBUG}
87+
88+ if args.verbose > 0:
89+ stdoutlogger = logging.StreamHandler(sys.stdout)
90+ stdoutlogger.setFormatter(formatter)
91+ logging.root.setLevel(levels[min(4, args.verbose)])
92+ logging.root.addHandler(stdoutlogger)
93+ else:
94+ logging.root.addHandler(logging.NullHandler())
95+
96+ # Load the configuration
97+ conf = config.Config()
98+
99+ # Try to acquire a global lock
100+ lock_file = os.path.join(conf.state_path, "global.lock")
101+ lock_fd = open(lock_file, 'w')
102+
103+ try:
104+ fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
105+ except IOError:
106+ print("Something else holds the global lock. exiting.")
107+ sys.exit(0)
108+
109+ # Load the tree
110+ pub = tree.Tree(conf)
111+
112+ # Do some checks
113+ if args.source_channel not in pub.list_channels():
114+ parser.error("Invalid source channel: %s" % args.source_channel)
115+
116+ if args.destination_channel not in pub.list_channels():
117+ parser.error("Invalid destination channel: %s" %
118+ args.destination_channel)
119+
120+ if args.device not in pub.list_channels()[args.source_channel]['devices']:
121+ parser.error("Invalid device for source channel: %s" %
122+ args.device)
123+
124+ if args.device not in \
125+ pub.list_channels()[args.destination_channel]['devices']:
126+ parser.error("Invalid device for destination channel: %s" %
127+ args.device)
128+
129+ if "alias" in pub.list_channels()[args.source_channel] and \
130+ pub.list_channels()[args.source_channel]['alias'] \
131+ != args.source_channel:
132+ parser.error("Source channel is an alias.")
133+
134+ if "alias" in pub.list_channels()[args.destination_channel] and \
135+ pub.list_channels()[args.destination_channel]['alias'] \
136+ != args.destination_channel:
137+ parser.error("Destination channel is an alias.")
138+
139+ if "redirect" in pub.list_channels()[args.source_channel]:
140+ parser.error("Source channel is a redirect.")
141+
142+ if "redirect" in pub.list_channels()[args.destination_channel]:
143+ parser.error("Destination channel is a redirect.")
144+
145+ source_device = pub.get_device(args.source_channel, args.device)
146+ destination_device = pub.get_device(args.destination_channel, args.device)
147+
148+ if args.keep_version:
149+ images = [image for image in destination_device.list_images()
150+ if image['version'] == args.version]
151+ if images:
152+ parser.error("Version number is already used: %s" % args.version)
153+
154+ # Assign a new version number
155+ new_version = args.version
156+ if not args.keep_version:
157+ # Find the next available version
158+ new_version = 1
159+ for image in destination_device.list_images():
160+ if image['version'] >= new_version:
161+ new_version = image['version'] + 1
162+ logging.debug("Version for next image: %s" % new_version)
163+
164+ # Extract the build we want to copy
165+ images = [image for image in source_device.list_images()
166+ if image['type'] == "full" and image['version'] == args.version]
167+ if not images:
168+ parser.error("Can't find version: %s" % args.version)
169+ source_image = images[0]
170+
171+ # Extract the list of existing full images
172+ full_images = {image['version']: image
173+ for image in destination_device.list_images()
174+ if image['type'] == "full"}
175+
176+ # Check that the last full and the new image aren't one and the same
177+ source_files = [entry['path'].split("/")[-1]
178+ for entry in source_image['files']
179+ if not entry['path'].split("/")[-1].startswith("version-")]
180+ destination_files = []
181+ if full_images:
182+ latest_full = sorted(full_images.values(),
183+ key=lambda image: image['version'])[-1]
184+ destination_files = [entry['path'].split("/")[-1]
185+ for entry in latest_full['files']
186+ if not entry['path'].split(
187+ "/")[-1].startswith("version-")]
188+ if source_files == destination_files:
189+ parser.error("Source image is already latest full in "
190+ "destination channel.")
191+
192+ # Generate a list of required deltas
193+ delta_base = []
194+
195+ if args.destination_channel in conf.channels:
196+ for base_channel in conf.channels[args.destination_channel].deltabase:
197+ # Skip missing channels
198+ if base_channel not in pub.list_channels():
199+ continue
200+
201+ # Skip missing devices
202+ if args.device not in (pub.list_channels()
203+ [base_channel]['devices']):
204+ continue
205+
206+ # Extract the latest full image
207+ base_device = pub.get_device(base_channel, args.device)
208+ base_images = sorted([image
209+ for image in base_device.list_images()
210+ if image['type'] == "full"],
211+ key=lambda image: image['version'])
212+
213+ # Check if the version is valid and add it
214+ if base_images and base_images[-1]['version'] in full_images:
215+ if (full_images[base_images[-1]['version']]
216+ not in delta_base):
217+ delta_base.append(full_images
218+ [base_images[-1]['version']])
219+ logging.debug("Source version for delta: %s" %
220+ base_images[-1]['version'])
221+
222+ # Create new empty entries
223+ new_images = {'full': {'files': []}}
224+ for delta in delta_base:
225+ new_images["delta_%s" % delta['version']] = {'files': []}
226+
227+ # Extract current version_detail and files
228+ version_detail = ""
229+ for entry in source_image['files']:
230+ path = os.path.realpath("%s/%s" % (conf.publish_path, entry['path']))
231+
232+ filename = path.split("/")[-1]
233+
234+ # Look for version-X.tar.xz
235+ if filename == "version-%s.tar.xz" % args.version:
236+ # Extract the metadata
237+ if os.path.exists(path.replace(".tar.xz", ".json")):
238+ with open(path.replace(".tar.xz", ".json"), "r") as fd:
239+ metadata = json.loads(fd.read())
240+ if "channel.ini" in metadata:
241+ version_detail = metadata['channel.ini'].get(
242+ "version_detail", None)
243+ else:
244+ new_images['full']['files'].append(path)
245+ logging.debug("Source version_detail is: %s" % version_detail)
246+
247+ # Generate new version tarball
248+ environment = {}
249+ environment['channel_name'] = args.destination_channel
250+ environment['device'] = destination_device
251+ environment['device_name'] = args.device
252+ environment['version'] = new_version
253+ environment['version_detail'] = [entry
254+ for entry in version_detail.split(",")
255+ if not entry.startswith("version=")]
256+ environment['new_files'] = new_images['full']['files']
257+
258+ logging.info("Generating new version tarball for '%s' (%s)"
259+ % (new_version, ",".join(environment['version_detail'])))
260+ version_path = generators.generate_file(conf, "version", [], environment)
261+ if version_path:
262+ new_images['full']['files'].append(version_path)
263+
264+ # Generate deltas
265+ for abspath in new_images['full']['files']:
266+ prefix = abspath.split("/")[-1].rsplit("-", 1)[0]
267+ for delta in delta_base:
268+ # Extract the source
269+ src_path = None
270+ for file_dict in delta['files']:
271+ if (file_dict['path'].split("/")[-1]
272+ .startswith(prefix)):
273+ src_path = "%s/%s" % (conf.publish_path,
274+ file_dict['path'])
275+ break
276+
277+ # Check that it's not the current file
278+ if src_path:
279+ src_path = os.path.realpath(src_path)
280+
281+ # FIXME: the keyring- prefix check is a big hack...
282+ if src_path == abspath and "keyring-" not in src_path:
283+ continue
284+
285+ # Generators are allowed to return None when no delta
286+ # exists at all.
287+ logging.info("Generating delta from '%s' for '%s'" %
288+ (delta['version'],
289+ prefix))
290+ delta_path = generators.generate_delta(conf, src_path,
291+ abspath)
292+ else:
293+ delta_path = abspath
294+
295+ if not delta_path:
296+ continue
297+
298+ # Get the full and relative paths
299+ delta_abspath, delta_relpath = tools.expand_path(
300+ delta_path, conf.publish_path)
301+
302+ new_images['delta_%s' % delta['version']]['files'] \
303+ .append(delta_abspath)
304+
305+ # Add full image
306+ logging.info("Publishing new image '%s' (%s) with %s files."
307+ % (new_version, ",".join(environment['version_detail']),
308+ len(new_images['full']['files'])))
309+ destination_device.create_image("full", new_version,
310+ ",".join(environment['version_detail']),
311+ new_images['full']['files'])
312+
313+ # Add delta images
314+ for delta in delta_base:
315+ files = new_images["delta_%s" % delta['version']]['files']
316+ logging.info("Publishing new delta from '%s' (%s)"
317+ " to '%s' (%s) with %s files" %
318+ (delta['version'], delta.get("description", ""),
319+ new_version, ",".join(environment['version_detail']),
320+ len(files)))
321+
322+ destination_device.create_image(
323+ "delta", new_version,
324+ ",".join(environment['version_detail']),
325+ files,
326+ base=delta['version'])
327+
328+ # Expire images
329+ if args.destination_channel in conf.channels:
330+ if conf.channels[args.destination_channel].fullcount > 0:
331+ logging.info("Expiring old images")
332+ destination_device.expire_images(
333+ conf.channels[args.destination_channel].fullcount)
334+
335+ # Sync all channel aliases
336+ logging.info("Syncing any existing alias")
337+ pub.sync_aliases(args.destination_channel)
338+
339+ # Remove any orphaned file
340+ logging.info("Removing orphaned files from the pool")
341+ pub.cleanup_tree()
342+
343+ # Sync the mirrors
344+ logging.info("Triggering a mirror sync")
345+ tools.sync_mirrors(conf)
346
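
A typical invocation of the script above, copying full image 42 for device mako from trusty-proposed to trusty with verbose logging (channel, device and version values are illustrative):

    ./bin/copy-image -vv trusty-proposed trusty mako 42

Passing -k/--keep-version publishes under the original version number instead of allocating the next free one.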
347=== added file 'bin/generate-keyrings'
348--- bin/generate-keyrings 1970-01-01 00:00:00 +0000
349+++ bin/generate-keyrings 2014-10-10 11:11:17 +0000
350@@ -0,0 +1,87 @@
351+#!/usr/bin/python
352+# -*- coding: utf-8 -*-
353+
354+# Copyright (C) 2013 Canonical Ltd.
355+# Author: Stéphane Graber <stgraber@ubuntu.com>
356+
357+# This program is free software: you can redistribute it and/or modify
358+# it under the terms of the GNU General Public License as published by
359+# the Free Software Foundation; version 3 of the License.
360+#
361+# This program is distributed in the hope that it will be useful,
362+# but WITHOUT ANY WARRANTY; without even the implied warranty of
363+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
364+# GNU General Public License for more details.
365+#
366+# You should have received a copy of the GNU General Public License
367+# along with this program. If not, see <http://www.gnu.org/licenses/>.
368+
369+import os
370+import sys
371+import time
372+sys.path.insert(0, 'lib')
373+
374+from systemimage import config
375+from systemimage import gpg
376+from systemimage import tools
377+
378+conf = config.Config()
379+
380+# archive-master keyring
381+if os.path.exists(os.path.join(conf.gpg_key_path, "archive-master")):
382+ archive_master = gpg.Keyring(conf, "archive-master")
383+ archive_master.set_metadata("archive-master")
384+ archive_master.import_keys(os.path.join(conf.gpg_key_path,
385+ "archive-master"))
386+ path = archive_master.generate_tarball()
387+ tools.xz_compress(path)
388+ os.remove(path)
389+ gpg.sign_file(conf, "archive-master", "%s.xz" % path)
390+
391+# image-master keyring
392+if os.path.exists(os.path.join(conf.gpg_key_path, "image-master")) and \
393+ os.path.exists(os.path.join(conf.gpg_key_path, "archive-master")):
394+ image_master = gpg.Keyring(conf, "image-master")
395+ image_master.set_metadata("image-master")
396+ image_master.import_keys(os.path.join(conf.gpg_key_path, "image-master"))
397+ path = image_master.generate_tarball()
398+ tools.xz_compress(path)
399+ os.remove(path)
400+ gpg.sign_file(conf, "archive-master", "%s.xz" % path)
401+
402+# image-signing keyring
403+if os.path.exists(os.path.join(conf.gpg_key_path, "image-signing")) and \
404+ os.path.exists(os.path.join(conf.gpg_key_path, "image-master")):
405+ image_signing = gpg.Keyring(conf, "image-signing")
406+ image_signing.set_metadata("image-signing",
407+ int(time.strftime("%s",
408+ time.localtime())) + 63072000)
409+ image_signing.import_keys(os.path.join(conf.gpg_key_path, "image-signing"))
410+ path = image_signing.generate_tarball()
411+ tools.xz_compress(path)
412+ os.remove(path)
413+ gpg.sign_file(conf, "image-master", "%s.xz" % path)
414+
415+# device-signing keyring
416+if os.path.exists(os.path.join(conf.gpg_key_path, "device-signing")) and \
417+ os.path.exists(os.path.join(conf.gpg_key_path, "image-signing")):
418+ device_signing = gpg.Keyring(conf, "device-signing")
419+ device_signing.set_metadata("device-signing",
420+ int(time.strftime("%s",
421+ time.localtime())) + 2678400)
422+ device_signing.import_keys(os.path.join(conf.gpg_key_path,
423+ "device-signing"))
424+ path = device_signing.generate_tarball()
425+ tools.xz_compress(path)
426+ os.remove(path)
427+ gpg.sign_file(conf, "image-signing", "%s.xz" % path)
428+
429+# blacklist keyring
430+if os.path.exists(os.path.join(conf.gpg_key_path, "blacklist")) and \
431+ os.path.exists(os.path.join(conf.gpg_key_path, "image-master")):
432+ blacklist = gpg.Keyring(conf, "blacklist")
433+ blacklist.set_metadata("blacklist")
434+ path = blacklist.generate_tarball()
435+ tools.xz_compress(path)
436+ os.remove(path)
437+ gpg.sign_file(conf, "image-master", "%s.xz" % path)
438
439=== added file 'bin/generate-keys'
440--- bin/generate-keys 1970-01-01 00:00:00 +0000
441+++ bin/generate-keys 2014-10-10 11:11:17 +0000
442@@ -0,0 +1,61 @@
443+#!/usr/bin/python
444+# -*- coding: utf-8 -*-
445+#
446+# Copyright (C) 2014 Canonical Ltd.
447+# Author: Timothy Chavez <timothy.chavez@canonical.com>
448+#
449+# This program is free software: you can redistribute it and/or modify
450+# it under the terms of the GNU General Public License as published by
451+# the Free Software Foundation; version 3 of the License.
452+#
453+# This program is distributed in the hope that it will be useful,
454+# but WITHOUT ANY WARRANTY; without even the implied warranty of
455+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
456+# GNU General Public License for more details.
457+#
458+# You should have received a copy of the GNU General Public License
459+# along with this program. If not, see <http://www.gnu.org/licenses/>.
460+
461+import argparse
462+import os
463+import sys
464+
465+sys.path.insert(0, 'lib')
466+from systemimage import config
467+from systemimage.gpg import generate_signing_key
468+
469+
470+KEYS = {
471+ "archive-master": ("{0} Archive Master key", 0),
472+ "image-master": ("{0} Image Master key", 0),
473+ "device-signing": ("{0} Device Signing key", "2y"),
474+ "image-signing": ("{0} Image Signing key", "2y")
475+}
476+
477+
478+def main():
479+ parser = argparse.ArgumentParser(description='Generate signing keys.')
480+ parser.add_argument("--email", dest="email", required=True,
481+ help="An email address to associate with the keys")
482+ parser.add_argument("--prefix", dest="prefix", required=True,
483+ help="A prefix to include in the key name")
484+ args = parser.parse_args()
485+
486+ conf = config.Config()
487+
488+ print("I: Generating signing keys...")
489+
490+ for key_id, (key_name, key_expiry) in KEYS.iteritems():
491+ key_path = os.path.join(conf.gpg_key_path, key_id)
492+ if os.path.exists(key_path):
493+ print("W: The key \"{0}\" already exists".format(key_id))
494+ continue
495+ os.makedirs(key_path)
496+ generate_signing_key(
497+ key_path, key_name.format(args.prefix), args.email, key_expiry)
498+
499+ print("I: Done")
500+
501+
502+if __name__ == "__main__":
503+ main()
504
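
A sketch of bootstrapping a key set with the script above (the email address and prefix are placeholders):

    ./bin/generate-keys --email system-image@example.net --prefix Example

Per the KEYS table, the archive-master and image-master keys are generated without expiry, while the image-signing and device-signing keys expire after two years.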
505=== added file 'bin/import-images'
506--- bin/import-images 1970-01-01 00:00:00 +0000
507+++ bin/import-images 2014-10-10 11:11:17 +0000
508@@ -0,0 +1,305 @@
509+#!/usr/bin/python
510+# -*- coding: utf-8 -*-
511+
512+# Copyright (C) 2013 Canonical Ltd.
513+# Author: Stéphane Graber <stgraber@ubuntu.com>
514+
515+# This program is free software: you can redistribute it and/or modify
516+# it under the terms of the GNU General Public License as published by
517+# the Free Software Foundation; version 3 of the License.
518+#
519+# This program is distributed in the hope that it will be useful,
520+# but WITHOUT ANY WARRANTY; without even the implied warranty of
521+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
522+# GNU General Public License for more details.
523+#
524+# You should have received a copy of the GNU General Public License
525+# along with this program. If not, see <http://www.gnu.org/licenses/>.
526+
527+import os
528+import sys
529+sys.path.insert(0, os.path.join(sys.path[0], os.pardir, "lib"))
530+
531+from systemimage import config, generators, tools, tree
532+
533+import argparse
534+import fcntl
535+import logging
536+
537+if __name__ == '__main__':
538+ parser = argparse.ArgumentParser(description="image importer")
539+ parser.add_argument("--verbose", "-v", action="count", default=0)
540+ args = parser.parse_args()
541+
542+ # Setup logging
543+ formatter = logging.Formatter(
544+ "%(asctime)s %(levelname)s %(message)s")
545+
546+ levels = {1: logging.ERROR,
547+ 2: logging.WARNING,
548+ 3: logging.INFO,
549+ 4: logging.DEBUG}
550+
551+ if args.verbose > 0:
552+ stdoutlogger = logging.StreamHandler(sys.stdout)
553+ stdoutlogger.setFormatter(formatter)
554+ logging.root.setLevel(levels[min(4, args.verbose)])
555+ logging.root.addHandler(stdoutlogger)
556+ else:
557+ logging.root.addHandler(logging.NullHandler())
558+
559+ # Load the configuration
560+ conf = config.Config()
561+
562+ # Try to acquire a global lock
563+ lock_file = os.path.join(conf.state_path, "global.lock")
564+ lock_fd = open(lock_file, 'w')
565+
566+ try:
567+ fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
568+ except IOError:
569+ logging.info("Something else holds the global lock. exiting.")
570+ sys.exit(0)
571+
572+ # Load the tree
573+ pub = tree.Tree(conf)
574+
575+ # Iterate through the channels
576+ for channel_name, channel in conf.channels.items():
577+ # We're only interested in automated channels
578+ if channel.type != "auto":
579+ logging.debug("Skipping non-auto channel: %s" % channel_name)
580+ continue
581+
582+ logging.info("Processing channel: %s" % channel_name)
583+
584+ # Check the channel exists
585+ if channel_name not in pub.list_channels():
586+ logging.error("Invalid channel name: %s" % channel_name)
587+ continue
588+
589+ # Iterate through the devices
590+ for device_name in pub.list_channels()[channel_name]['devices']:
591+ logging.info("Processing device: %s" % device_name)
592+
593+ device = pub.get_device(channel_name, device_name)
594+
595+ # Extract last full version
596+ full_images = {image['version']: image
597+ for image in device.list_images()
598+ if image['type'] == "full"}
599+
600+ last_full = None
601+ if full_images:
602+ last_full = sorted(full_images.values(),
603+ key=lambda image: image['version'])[-1]
604+ logging.debug("Last full image: %s" % last_full['version'])
605+ else:
606+ logging.debug("This is the first full image.")
607+
608+ # Extract all delta base versions
609+ delta_base = []
610+
611+ for base_channel in channel.deltabase:
612+ # Skip missing channels
613+ if base_channel not in pub.list_channels():
614+ logging.warn("Invalid base channel: %s" % base_channel)
615+ continue
616+
617+ # Skip missing devices
618+ if device_name not in (pub.list_channels()
619+ [base_channel]['devices']):
620+ logging.warn("Missing device in base channel: %s in %s" %
621+ (device_name, base_channel))
622+ continue
623+
624+ # Extract the latest full image
625+ base_device = pub.get_device(base_channel, device_name)
626+ base_images = sorted([image
627+ for image in base_device.list_images()
628+ if image['type'] == "full"],
629+ key=lambda image: image['version'])
630+
631+ # Check if the version is valid and add it
632+ if base_images and base_images[-1]['version'] in full_images:
633+ if (full_images[base_images[-1]['version']]
634+ not in delta_base):
635+ delta_base.append(full_images
636+ [base_images[-1]['version']])
637+ logging.debug("Source version for delta: %s" %
638+ base_images[-1]['version'])
639+
640+ # Allocate new version number
641+ new_version = channel.versionbase
642+ if last_full:
643+ new_version = last_full['version'] + 1
644+ logging.debug("Version for next image: %s" % new_version)
645+
646+ # And the list used to generate version_detail
647+ version_detail = []
648+
649+ # And a list of new files
650+ new_files = []
651+
652+ # Keep track of what files we've processed
653+ processed_files = []
654+
655+ # Create new empty entries
656+ new_images = {}
657+ new_images['full'] = {'files': []}
658+ for delta in delta_base:
659+ new_images["delta_%s" % delta['version']] = {'files': []}
660+
661+ # Iterate through the files
662+ for file_entry in channel.files:
663+ # Deal with device specific overrides
664+ if "," in file_entry['name']:
665+ file_name, file_device = file_entry['name'].split(',', 1)
666+ if file_device != device_name:
667+ logging.debug("Skipping '%s' because the device name"
668+ "doesn't match" % file_entry['name'])
669+ continue
670+ else:
671+ file_name = file_entry['name']
672+
673+ if file_name in processed_files:
674+ logging.debug("Skipping '%s' because a more specific"
675+ "generator was already called."
676+ % file_entry['name'])
677+ continue
678+
679+ processed_files.append(file_name)
680+
681+ # Generate the environment
682+ environment = {}
683+ environment['channel_name'] = channel_name
684+ environment['device'] = device
685+ environment['device_name'] = device_name
686+ environment['version'] = new_version
687+ environment['version_detail'] = version_detail
688+ environment['new_files'] = new_files
689+
690+ # Call file generator
691+ logging.info("Calling '%s' generator for a new file"
692+ % file_entry['generator'])
693+ path = generators.generate_file(conf,
694+ file_entry['generator'],
695+ file_entry['arguments'],
696+ environment)
697+
698+ # Generators are allowed to return None when no build
699+ # exists at all. This cancels the whole image.
700+ if not path:
701+ new_files = []
702+ logging.info("No image will be produced because the "
703+ "'%s' generator returned None" %
704+ file_entry['generator'])
705+ break
706+
707+ # Get the full and relative paths
708+ abspath, relpath = tools.expand_path(path, conf.publish_path)
709+ urlpath = "/%s" % "/".join(relpath.split(os.sep))
710+
711+ # FIXME: Extract the prefix, used later for matching between
712+ # full images. This forces a specific filename format.
713+ prefix = abspath.split("/")[-1].rsplit("-", 1)[0]
714+
715+ # Add the file to the full image
716+ new_images['full']['files'].append(abspath)
717+
718+ # Check if same as current
719+ new_file = True
720+ if last_full:
721+ for file_dict in last_full['files']:
722+ if file_dict['path'] == urlpath:
723+ new_file = False
724+ break
725+
726+ if new_file:
727+ logging.info("New file from '%s': %s" %
728+ (file_entry['generator'], relpath))
729+ new_files.append(abspath)
730+ else:
731+ logging.info("File from '%s' is already current" %
732+ (file_entry['generator']))
733+
734+ # Generate deltas
735+ for delta in delta_base:
736+ # Extract the source
737+ src_path = None
738+ for file_dict in delta['files']:
739+ if (file_dict['path'].split("/")[-1]
740+ .startswith(prefix)):
741+ src_path = "%s/%s" % (conf.publish_path,
742+ file_dict['path'])
743+ break
744+
745+ # Check that it's not the current file
746+ if src_path:
747+ src_path = os.path.realpath(src_path)
748+
749+ # FIXME: the keyring- prefix check is a big hack...
750+ if src_path == abspath and "keyring-" not in src_path:
751+ continue
752+
753+ # Generators are allowed to return None when no delta
754+ # exists at all.
755+ logging.info("Generating delta from '%s' for '%s'" %
756+ (delta['version'],
757+ file_entry['generator']))
758+ delta_path = generators.generate_delta(conf, src_path,
759+ abspath)
760+ else:
761+ delta_path = abspath
762+
763+ if not delta_path:
764+ continue
765+
766+ # Get the full and relative paths
767+ delta_abspath, delta_relpath = tools.expand_path(
768+ delta_path, conf.publish_path)
769+
770+ new_images['delta_%s' % delta['version']]['files'] \
771+ .append(delta_abspath)
772+
773+ # Check if we've got a new image
774+ if len(new_files):
775+ # Publish full image
776+ logging.info("Publishing new image '%s' (%s) with %s files."
777+ % (new_version,
778+ ",".join(environment['version_detail']),
779+ len(new_images['full']['files'])))
780+ device.create_image("full", new_version,
781+ ",".join(environment['version_detail']),
782+ new_images['full']['files'])
783+ # Publish deltas
784+ for delta in delta_base:
785+ files = new_images["delta_%s" % delta['version']]['files']
786+ logging.info("Publishing new delta from '%s' (%s)"
787+ " to '%s' (%s) with %s files" %
788+ (delta['version'],
789+ delta.get("description", ""),
790+ new_version,
791+ ",".join(environment['version_detail']),
792+ len(files)))
793+ device.create_image(
794+ "delta", new_version,
795+ ",".join(environment['version_detail']), files,
796+ base=delta['version'])
797+
798+ # Expire images
799+ if channel.fullcount > 0:
800+ logging.info("Expiring old images")
801+ device.expire_images(channel.fullcount)
802+
803+ # Sync all channel aliases
804+ logging.info("Syncing any existing alias")
805+ pub.sync_aliases(channel_name)
806+
807+ # Remove any orphaned file
808+ logging.info("Removing orphaned files from the pool")
809+ pub.cleanup_tree()
810+
811+ # Sync the mirrors
812+ logging.info("Triggering a mirror sync")
813+ tools.sync_mirrors(conf)
814
815=== added file 'bin/set-phased-percentage'
816--- bin/set-phased-percentage 1970-01-01 00:00:00 +0000
817+++ bin/set-phased-percentage 2014-10-10 11:11:17 +0000
818@@ -0,0 +1,90 @@
819+#!/usr/bin/python
820+# -*- coding: utf-8 -*-
821+
822+# Copyright (C) 2013 Canonical Ltd.
823+# Author: Stéphane Graber <stgraber@ubuntu.com>
824+
825+# This program is free software: you can redistribute it and/or modify
826+# it under the terms of the GNU General Public License as published by
827+# the Free Software Foundation; version 3 of the License.
828+#
829+# This program is distributed in the hope that it will be useful,
830+# but WITHOUT ANY WARRANTY; without even the implied warranty of
831+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
832+# GNU General Public License for more details.
833+#
834+# You should have received a copy of the GNU General Public License
835+# along with this program. If not, see <http://www.gnu.org/licenses/>.
836+
837+import os
838+import sys
839+sys.path.insert(0, os.path.join(sys.path[0], os.pardir, "lib"))
840+
841+from systemimage import config, tools, tree
842+
843+import argparse
844+import logging
845+
846+if __name__ == '__main__':
847+ parser = argparse.ArgumentParser(description="set phased percentage")
848+ parser.add_argument("channel", metavar="CHANNEL")
849+ parser.add_argument("device", metavar="DEVICE")
850+ parser.add_argument("version", metavar="VERSION", type=int)
851+ parser.add_argument("percentage", metavar="PERCENTAGE", type=int)
852+ parser.add_argument("--verbose", "-v", action="count")
853+
854+ args = parser.parse_args()
855+
856+ # Setup logging
857+ formatter = logging.Formatter(
858+ "%(asctime)s %(levelname)s %(message)s")
859+
860+ levels = {1: logging.ERROR,
861+ 2: logging.WARNING,
862+ 3: logging.INFO,
863+ 4: logging.DEBUG}
864+
865+ if args.verbose > 0:
866+ stdoutlogger = logging.StreamHandler(sys.stdout)
867+ stdoutlogger.setFormatter(formatter)
868+ logging.root.setLevel(levels[min(4, args.verbose)])
869+ logging.root.addHandler(stdoutlogger)
870+ else:
871+ logging.root.addHandler(logging.NullHandler())
872+
873+ # Load the configuration
874+ conf = config.Config()
875+
876+ # Load the tree
877+ pub = tree.Tree(conf)
878+
879+ # Do some checks
880+ if args.channel not in pub.list_channels():
881+ parser.error("Invalid channel: %s" % args.channel)
882+
883+ if args.device not in pub.list_channels()[args.channel]['devices']:
884+ parser.error("Invalid device for source channel: %s" %
885+ args.device)
886+
887+ if args.percentage < 0 or args.percentage > 100:
888+ parser.error("Invalid value: %s" % args.percentage)
889+
890+ if "alias" in pub.list_channels()[args.channel] and \
891+ pub.list_channels()[args.channel]['alias'] != args.channel:
892+ parser.error("Channel is an alias.")
893+
894+ if "redirect" in pub.list_channels()[args.channel]:
895+ parser.error("Channel is a redirect.")
896+
897+ dev = pub.get_device(args.channel, args.device)
898+ logging.info("Setting phased-percentage of '%s' to %s%%" %
899+ (args.version, args.percentage))
900+ dev.set_phased_percentage(args.version, args.percentage)
901+
902+ # Sync all channel aliases
903+ logging.info("Syncing any existing alias")
904+ pub.sync_aliases(args.channel)
905+
906+ # Sync the mirrors
907+ logging.info("Triggering a mirror sync")
908+ tools.sync_mirrors(conf)
909
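
For example, to expose version 42 on the trusty channel for device mako to 25% of devices (all values illustrative):

    ./bin/set-phased-percentage -vv trusty mako 42 25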
910=== added file 'bin/si-shell'
911--- bin/si-shell 1970-01-01 00:00:00 +0000
912+++ bin/si-shell 2014-10-10 11:11:17 +0000
913@@ -0,0 +1,79 @@
914+#!/usr/bin/python
915+# -*- coding: utf-8 -*-
916+
917+# Copyright (C) 2013 Canonical Ltd.
918+# Author: Stéphane Graber <stgraber@ubuntu.com>
919+
920+# This program is free software: you can redistribute it and/or modify
921+# it under the terms of the GNU General Public License as published by
922+# the Free Software Foundation; version 3 of the License.
923+#
924+# This program is distributed in the hope that it will be useful,
925+# but WITHOUT ANY WARRANTY; without even the implied warranty of
926+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
927+# GNU General Public License for more details.
928+#
929+# You should have received a copy of the GNU General Public License
930+# along with this program. If not, see <http://www.gnu.org/licenses/>.
931+
932+import code
933+import logging
934+import os
935+import sys
936+sys.path.insert(0, os.path.join(sys.path[0], os.pardir, "lib"))
937+
938+from systemimage import config, tree
939+
940+import argparse
941+
942+if __name__ == '__main__':
943+ parser = argparse.ArgumentParser(description="system-image shell")
944+ parser.add_argument("--verbose", "-v", action="count", default=0)
945+
946+ args = parser.parse_args()
947+
948+ # Setup logging
949+ formatter = logging.Formatter(
950+ "%(asctime)s %(levelname)s %(message)s")
951+
952+ levels = {1: logging.ERROR,
953+ 2: logging.WARNING,
954+ 3: logging.INFO,
955+ 4: logging.DEBUG}
956+
957+ if args.verbose > 0:
958+ stdoutlogger = logging.StreamHandler(sys.stdout)
959+ stdoutlogger.setFormatter(formatter)
960+ stdoutlogger.setLevel(levels[min(4, args.verbose)])
961+ logging.root.addHandler(stdoutlogger)
962+ else:
963+ logging.root.addHandler(logging.NullHandler())
964+
965+ # Load the configuration
966+ conf = config.Config()
967+
968+ # Load the tree
969+ pub = tree.Tree(conf)
970+
971+ # Start the shell
972+ banner = """Welcome to the system-image shell.
973+The configuration is available as: conf
974+The system-image tree is available as: pub
975+"""
976+
977+ class CompleterConsole(code.InteractiveConsole):
978+ def __init__(self):
979+ local = {'conf': conf,
980+ 'pub': pub}
981+ code.InteractiveConsole.__init__(self, locals=local)
982+ try:
983+ import readline
984+ except ImportError:
985+ print('I: readline module not available.')
986+ else:
987+ import rlcompleter
988+ rlcompleter # Silence pyflakes
989+ readline.parse_and_bind("tab: complete")
990+
991+ console = CompleterConsole()
992+ console.interact(banner)
993
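
Once inside the shell, the pre-loaded conf and pub objects can be used to poke at the tree interactively; for instance (a sketch with abbreviated output, channel and device names illustrative):

    $ ./bin/si-shell
    Welcome to the system-image shell.
    >>> sorted(pub.list_channels().keys())
    ['trusty', 'trusty-proposed']
    >>> device = pub.get_device("trusty", "mako")
    >>> len(device.list_images())
    10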
994=== added directory 'etc'
995=== added file 'etc/config.example'
996--- etc/config.example 1970-01-01 00:00:00 +0000
997+++ etc/config.example 2014-10-10 11:11:17 +0000
998@@ -0,0 +1,48 @@
999+[global]
1000+base_path = /some/fs/path
1001+channels = trusty, trusty-proposed, trusty-customized
1002+gpg_key_path = secret/gpg/keys/
1003+gpg_keyring_path = secret/gpg/keyrings/
1004+publish_path = www/
1005+state_path = state/
1006+mirrors = a, b
1007+public_fqdn = system-image.example.net
1008+public_http_port = 80
1009+public_https_port = 443
1010+
1011+[channel_trusty]
1012+type = manual
1013+versionbase = 1
1014+fullcount = 10
1015+
1016+[channel_trusty-proposed]
1017+type = auto
1018+versionbase = 1
1019+fullcount = 20
1020+deltabase = trusty, trusty-proposed
1021+files = ubuntu, device, version
1022+file_ubuntu = cdimage-ubuntu;daily-preinstalled;trusty,import=any
1023+file_device = cdimage-device;daily-preinstalled;trusty,import=any
1024+file_version = version
1025+
1026+[channel_trusty-customized]
1027+type = auto
1028+versionbase = 1
1029+fullcount = 15
1030+files = ubuntu, device, custom, version
1031+file_ubuntu = system-image;trusty;file=ubuntu
1032+file_device = system-image;trusty;file=device
1033+file_custom = http;http://www.example.net/custom/custom.tar.xz;name=custom,monitor=http://www.example.net/custom/build_number
1034+file_version = version
1035+
1036+[mirror_default]
1037+ssh_user = mirror
1038+ssh_key = secret/ssh/mirror
1039+ssh_port = 22
1040+ssh_command = sync-mirror
1041+
1042+[mirror_a]
1043+ssh_host = a.example.com
1044+
1045+[mirror_b]
1046+ssh_host = b.example.com
1047
1048=== added directory 'lib'
1049=== added directory 'lib/systemimage'
1050=== added file 'lib/systemimage/__init__.py'
1051=== added file 'lib/systemimage/config.py'
1052--- lib/systemimage/config.py 1970-01-01 00:00:00 +0000
1053+++ lib/systemimage/config.py 2014-10-10 11:11:17 +0000
1054@@ -0,0 +1,206 @@
1055+# -*- coding: utf-8 -*-
1056+
1057+# Copyright (C) 2013 Canonical Ltd.
1058+# Author: Stéphane Graber <stgraber@ubuntu.com>
1059+
1060+# This program is free software: you can redistribute it and/or modify
1061+# it under the terms of the GNU General Public License as published by
1062+# the Free Software Foundation; version 3 of the License.
1063+#
1064+# This program is distributed in the hope that it will be useful,
1065+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1066+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1067+# GNU General Public License for more details.
1068+#
1069+# You should have received a copy of the GNU General Public License
1070+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1071+
1072+import os
1073+
1074+try:
1075+ from configparser import ConfigParser
1076+except ImportError: # pragma: no cover
1077+ from ConfigParser import ConfigParser
1078+
1079+
1080+def parse_config(path):
1081+ config = {}
1082+
1083+ configp = ConfigParser()
1084+ try:
1085+ configp.read(path)
1086+ except:
1087+ return config
1088+
1089+ for section in configp.sections():
1090+ config_section = {}
1091+ for option in configp.options(section):
1092+ value = configp.get(section, option)
1093+ if ", " in value:
1094+ value = [entry.strip('"').strip()
1095+ for entry in value.split(", ")]
1096+ else:
1097+ value = value.strip('"').strip()
1098+ config_section[option] = value
1099+ config[section] = config_section
1100+
1101+ return config
1102+
1103+
1104+class Config:
1105+ def __init__(self, path=None):
1106+ if not path:
1107+ path = "%s/etc/config" % os.environ.get("SYSTEM_IMAGE_ROOT",
1108+ os.getcwd())
1109+ if not os.path.exists(path):
1110+ path = os.path.realpath(os.path.join(os.path.dirname(__file__),
1111+ "../../etc/config"))
1112+
1113+ self.load_config(path)
1114+
1115+ def load_config(self, path):
1116+ if not os.path.exists(path):
1117+ raise Exception("Configuration file doesn't exist: %s" % path)
1118+
1119+ # Read the config
1120+ config = parse_config(path)
1121+
1122+ if 'global' not in config:
1123+ config['global'] = {}
1124+
1125+ # Set defaults
1126+ self.base_path = config['global'].get(
1127+ "base_path", os.environ.get("SYSTEM_IMAGE_ROOT", os.getcwd()))
1128+
1129+ self.gpg_key_path = config['global'].get(
1130+ "gpg_key_path", os.path.join(self.base_path,
1131+ "secret", "gpg", "keys"))
1132+ if not self.gpg_key_path.startswith("/"):
1133+ self.gpg_key_path = os.path.join(self.base_path, self.gpg_key_path)
1134+
1135+ self.gpg_keyring_path = config['global'].get(
1136+ "gpg_keyring_path", os.path.join(self.base_path,
1137+ "secret", "gpg", "keyrings"))
1138+ if not self.gpg_keyring_path.startswith("/"):
1139+ self.gpg_keyring_path = os.path.join(self.base_path,
1140+ self.gpg_keyring_path)
1141+
1142+ self.publish_path = config['global'].get(
1143+ "publish_path", os.path.join(self.base_path, "www"))
1144+ if not self.publish_path.startswith("/"):
1145+ self.publish_path = os.path.join(self.base_path, self.publish_path)
1146+
1147+ self.state_path = config['global'].get(
1148+ "state_path", os.path.join(self.base_path, "state"))
1149+ if not self.state_path.startswith("/"):
1150+ self.state_path = os.path.join(self.base_path, self.state_path)
1151+
1152+ # Export some more keys as-is
1153+ for key in ("public_fqdn", "public_http_port", "public_https_port"):
1154+ if key not in config['global']:
1155+ continue
1156+
1157+ setattr(self, key, config['global'][key])
1158+
1159+ # Parse the mirror configuration
1160+ self.mirrors = {}
1161+ if "mirrors" in config['global']:
1162+ if not isinstance(config['global']['mirrors'], list):
1163+ config['global']['mirrors'] = [config['global']['mirrors']]
1164+
1165+ if len(config['global']['mirrors']) != 0:
1166+ if "mirror_default" not in config:
1167+ raise KeyError("Missing mirror_default section.")
1168+
1169+ for key in ("ssh_user", "ssh_key", "ssh_port", "ssh_command"):
1170+ if key not in config['mirror_default']:
1171+ raise KeyError("Missing key in mirror_default: %s" %
1172+ key)
1173+
1174+ for entry in config['global']['mirrors']:
1175+ dict_entry = "mirror_%s" % entry
1176+ if dict_entry not in config:
1177+ raise KeyError("Missing mirror section: %s" %
1178+ dict_entry)
1179+
1180+ mirror = type("Mirror", (object,), {})
1181+
1182+ if "ssh_host" not in config[dict_entry]:
1183+ raise KeyError("Missing key in %s: ssh_host" %
1184+ dict_entry)
1185+ else:
1186+ mirror.ssh_host = config[dict_entry]['ssh_host']
1187+
1188+ mirror.ssh_user = config[dict_entry].get(
1189+ "ssh_user", config['mirror_default']['ssh_user'])
1190+ mirror.ssh_key = config[dict_entry].get(
1191+ "ssh_key", config['mirror_default']['ssh_key'])
1192+ if not mirror.ssh_key.startswith("/"):
1193+ mirror.ssh_key = os.path.join(self.base_path,
1194+ mirror.ssh_key)
1195+ mirror.ssh_port = int(config[dict_entry].get(
1196+ "ssh_port", config['mirror_default']['ssh_port']))
1197+ mirror.ssh_command = config[dict_entry].get(
1198+ "ssh_command", config['mirror_default']['ssh_command'])
1199+
1200+ self.mirrors[entry] = mirror
1201+
1202+ # Parse the channel configuration
1203+ self.channels = {}
1204+ if "channels" in config['global']:
1205+ if not isinstance(config['global']['channels'], list):
1206+ config['global']['channels'] = \
1207+ [config['global']['channels']]
1208+
1209+ if len(config['global']['channels']) != 0:
1210+ for entry in config['global']['channels']:
1211+ dict_entry = "channel_%s" % entry
1212+ if dict_entry not in config:
1213+ raise KeyError("Missing channel section: %s" %
1214+ dict_entry)
1215+
1216+ channel = type("Channel", (object,), {})
1217+
1218+ channel.versionbase = int(config[dict_entry].get(
1219+ 'versionbase', 1))
1220+
1221+ channel.type = config[dict_entry].get(
1222+ "type", "manual")
1223+
1224+ channel.fullcount = int(config[dict_entry].get(
1225+ "fullcount", 0))
1226+
1227+ channel.deltabase = [entry]
1228+ if "deltabase" in config[dict_entry]:
1229+ if isinstance(config[dict_entry]["deltabase"],
1230+ list):
1231+ channel.deltabase = \
1232+ config[dict_entry]["deltabase"]
1233+ else:
1234+ channel.deltabase = \
1235+ [config[dict_entry]["deltabase"]]
1236+
1237+ # Parse the file list
1238+ files = config[dict_entry].get("files", [])
1239+ if isinstance(files, str):
1240+ files = [files]
1241+
1242+ channel.files = []
1243+ for file_entry in files:
1244+ if "file_%s" % file_entry not in config[dict_entry]:
1245+ raise KeyError("Missing file entry: %s" %
1246+ "file_%s" % file_entry)
1247+
1248+ fields = (config[dict_entry]
1249+ ["file_%s" % file_entry].split(";"))
1250+
1251+ file_dict = {}
1252+ file_dict['name'] = file_entry
1253+ file_dict['generator'] = fields[0]
1254+ file_dict['arguments'] = []
1255+ if len(fields) > 1:
1256+ file_dict['arguments'] = fields[1:]
1257+
1258+ channel.files.append(file_dict)
1259+
1260+ self.channels[entry] = channel
1261
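
As a quick sketch of parse_config's behaviour: single values come back as strings, while any value containing ", " is split into a list. A config file holding

    [global]
    channels = trusty, trusty-proposed
    publish_path = www/

would therefore parse to {'global': {'channels': ['trusty', 'trusty-proposed'], 'publish_path': 'www/'}}.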
1262=== added file 'lib/systemimage/diff.py'
1263--- lib/systemimage/diff.py 1970-01-01 00:00:00 +0000
1264+++ lib/systemimage/diff.py 2014-10-10 11:11:17 +0000
1265@@ -0,0 +1,242 @@
1266+# -*- coding: utf-8 -*-
1267+
1268+# Copyright (C) 2013 Canonical Ltd.
1269+# Author: Stéphane Graber <stgraber@ubuntu.com>
1270+
1271+# This program is free software: you can redistribute it and/or modify
1272+# it under the terms of the GNU General Public License as published by
1273+# the Free Software Foundation; version 3 of the License.
1274+#
1275+# This program is distributed in the hope that it will be useful,
1276+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1277+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1278+# GNU General Public License for more details.
1279+#
1280+# You should have received a copy of the GNU General Public License
1281+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1282+
1283+import os
1284+import tarfile
1285+import time
1286+
1287+from io import BytesIO
1288+
1289+
1290+def compare_files(fd_source, fd_target):
1291+ """
1292+ Compare two files.
1293+
1294+ Returns True if their content matches.
1295+ Returns False if they don't match.
1296+ Returns None if the files can't be compared.
1297+ """
1298+
1299+ if fd_source == fd_target:
1300+ return True
1301+
1302+ if not fd_source or not fd_target:
1303+ return False
1304+
1305+ return fd_source.read() == fd_target.read()
1306+
1307+
1308+def list_tarfile(tarfile):
1309+ """
1310+ Walk through a tarfile and generate a list of the content.
1311+
1312+ Returns a tuple containing a set and a dict.
1313+ The set is typically used for simple diffs between tarballs.
1314+ The dict is used to easily grab the details of a specific entry.
1315+ """
1316+
1317+ set_content = set()
1318+ dict_content = {}
1319+
1320+ for entry in tarfile:
1321+ if entry.isdir():
1322+ set_content.add((entry.path, 'dir', None))
1323+ dict_content[entry.path] = ('dir', None)
1324+ else:
1325+ fhash = ("%s" % entry.mode,
1326+ "%s" % entry.devmajor,
1327+ "%s" % entry.devminor,
1328+ "%s" % entry.type.decode('utf-8'),
1329+ "%s" % entry.uid,
1330+ "%s" % entry.gid,
1331+ "%s" % entry.size,
1332+ "%s" % entry.mtime)
1333+
1334+ set_content.add((entry.path, 'file', fhash))
1335+ dict_content[entry.path] = ('file', fhash)
1336+
1337+ return (set_content, dict_content)
1338+
1339+
1340+class ImageDiff:
1341+ source_content = None
1342+ target_content = None
1343+ diff = None
1344+
1345+ def __init__(self, source, target):
1346+ self.source_file = tarfile.open(source, 'r:')
1347+ self.target_file = tarfile.open(target, 'r:')
1348+
1349+ def scan_content(self, image):
1350+ """
1351+ Scan the content of an image and return the image tuple.
1352+ This also caches the content for further use.
1353+ """
1354+
1355+ if image not in ("source", "target"):
1356+ raise KeyError("Invalid image '%s'." % image)
1357+
1358+ image_file = getattr(self, "%s_file" % image)
1359+
1360+ content = list_tarfile(image_file)
1361+
1362+ setattr(self, "%s_content" % image, content)
1363+ return content
1364+
1365+ def compare_images(self):
1366+ """
1367+ Compare the file listing of two images and return a set.
1368+ This also caches the diff for further use.
1369+
1370+ The set contains tuples of (path, changetype).
1371+ """
1372+ if not self.source_content:
1373+ self.scan_content("source")
1374+
1375+ if not self.target_content:
1376+ self.scan_content("target")
1377+
1378+ # Find the changes in the two trees
1379+ changes = set()
1380+ for change in self.source_content[0] \
1381+ .symmetric_difference(self.target_content[0]):
1382+ if change[0] not in self.source_content[1]:
1383+ changetype = "add"
1384+ elif change[0] not in self.target_content[1]:
1385+ changetype = "del"
1386+ else:
1387+ changetype = "mod"
1388+ changes.add((change[0], changetype))
1389+
1390+ # Ignore files that only vary in mtime
1391+ # (separate loop to run after de-dupe)
1392+ for change in sorted(changes):
1393+ if change[1] == "mod":
1394+ fstat_source = self.source_content[1][change[0]][1]
1395+ fstat_target = self.target_content[1][change[0]][1]
1396+
1397+ # Skip differences between directories and files
1398+ if not fstat_source or not fstat_target: # pragma: no cover
1399+ continue
1400+
1401+ # Deal with switched hardlinks
1402+ if (fstat_source[0:2] == fstat_target[0:2] and
1403+ fstat_source[3] != fstat_target[3] and
1404+ (fstat_source[3] == "1" or fstat_target[3] == "1") and
1405+ fstat_source[4:5] == fstat_target[4:5] and
1406+ fstat_source[7] == fstat_target[7]):
1407+ source_file = self.source_file.getmember(change[0])
1408+ target_file = self.target_file.getmember(change[0])
1409+ if compare_files(
1410+ self.source_file.extractfile(change[0]),
1411+ self.target_file.extractfile(change[0])):
1412+ changes.remove(change)
1413+ continue
1414+
1415+ # Deal with regular files
1416+ if fstat_source[0:7] == fstat_target[0:7]:
1417+ source_file = self.source_file.getmember(change[0])
1418+ target_file = self.target_file.getmember(change[0])
1419+
1420+ if (source_file.linkpath
1421+ and source_file.linkpath == target_file.linkpath):
1422+ changes.remove(change)
1423+ continue
1424+
1425+ if (source_file.isfile() and target_file.isfile()
1426+ and compare_files(
1427+ self.source_file.extractfile(change[0]),
1428+ self.target_file.extractfile(change[0]))):
1429+ changes.remove(change)
1430+ continue
1431+
1432+ self.diff = changes
1433+ return changes
1434+
1435+ def print_changes(self):
1436+ """
1437+ Simply print the list of changes.
1438+ """
1439+
1440+ if not self.diff:
1441+ self.compare_images()
1442+
1443+ for change in sorted(self.diff):
1444+ print(" - %s (%s)" % (change[0], change[1]))
1445+
1446+ def generate_diff_tarball(self, path):
1447+ """
1448+ Generate a tarball containing all files that are
1449+ different between the source and target image as well
1450+ as a file listing all removals.
1451+ """
1452+
1453+ if not self.diff:
1454+ self.compare_images()
1455+
1456+ output = tarfile.open(path, 'w:')
1457+
1458+ # List both deleted files and modified files in the removal list
1459+ # that's needed to allow file type change (e.g. directory to symlink)
1460+ removed_files_list = [entry[0] for entry in self.diff
1461+ if entry[1] in ("del", "mod")]
1462+
1463+ removed_files = "\n".join(removed_files_list)
1464+ removed_files = "%s\n" % removed_files.encode('utf-8')
1465+
1466+ removals = tarfile.TarInfo()
1467+ removals.name = "removed"
1468+ removals.size = len(removed_files)
1469+ removals.mtime = int(time.strftime("%s", time.localtime()))
1470+ removals.uname = "root"
1471+ removals.gname = "root"
1472+
1473+ output.addfile(removals, BytesIO(removed_files.encode('utf-8')))
1474+
1475+ # Copy all the added and modified
1476+ added = []
1477+ for name, action in sorted(self.diff):
1478+ if action == 'del':
1479+ continue
1480+
1481+ if name in added:
1482+ continue
1483+
1484+ newfile = self.target_file.getmember(name)
1485+ if newfile.islnk():
1486+ if newfile.linkname.startswith("system/"):
1487+ targetfile_path = newfile.linkname
1488+ else:
1489+ targetfile_path = os.path.normpath(os.path.join(
1490+ os.path.dirname(newfile.name), newfile.linkname))
1491+
1492+ targetfile = self.target_file.getmember(targetfile_path)
1493+
1494+ if ((targetfile_path, 'add') in self.diff or
1495+ (targetfile_path, 'mod') in self.diff) and \
1496+ targetfile_path not in added:
1497+ fileptr = self.target_file.extractfile(targetfile)
1498+ output.addfile(targetfile, fileptr)
1499+ added.append(targetfile.name)
1500+
1501+ fileptr = None
1502+ if newfile.isfile():
1503+ fileptr = self.target_file.extractfile(name)
1504+ output.addfile(newfile, fileobj=fileptr)
1505+ added.append(newfile.name)
1506+
1507+ output.close()
1508
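
A minimal usage sketch for ImageDiff, diffing two uncompressed tarballs (the paths are hypothetical):

    from systemimage.diff import ImageDiff

    imagediff = ImageDiff("source.tar", "target.tar")
    imagediff.print_changes()                      # print (path, changetype) pairs
    imagediff.generate_diff_tarball("output.tar")  # write changed files + 'removed' list

This mirrors how generators.generate_delta() below drives the class, after xz-uncompressing the source and target images into a temporary directory.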
1509=== added file 'lib/systemimage/generators.py'
1510--- lib/systemimage/generators.py 1970-01-01 00:00:00 +0000
1511+++ lib/systemimage/generators.py 2014-10-10 11:11:17 +0000
1512@@ -0,0 +1,1173 @@
1513+# -*- coding: utf-8 -*-
1514+
1515+# Copyright (C) 2013 Canonical Ltd.
1516+# Author: Stéphane Graber <stgraber@ubuntu.com>
1517+
1518+# This program is free software: you can redistribute it and/or modify
1519+# it under the terms of the GNU General Public License as published by
1520+# the Free Software Foundation; version 3 of the License.
1521+#
1522+# This program is distributed in the hope that it will be useful,
1523+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1524+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1525+# GNU General Public License for more details.
1526+#
1527+# You should have received a copy of the GNU General Public License
1528+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1529+
1530+from hashlib import sha256
1531+from systemimage import diff, gpg, tree, tools
1532+import json
1533+import os
1534+import socket
1535+import shutil
1536+import subprocess
1537+import tarfile
1538+import tempfile
1539+import time
1540+
1541+try:
1542+ from urllib.request import urlopen, urlretrieve
1543+except ImportError: # pragma: no cover
1544+ from urllib import urlopen, urlretrieve
1545+
1546+# Global
1547+CACHE = {}
1548+
1549+
1550+def root_ownership(tarinfo):
1551+ tarinfo.mode = 0o644
1552+ tarinfo.mtime = int(time.strftime("%s", time.localtime()))
1553+ tarinfo.uname = "root"
1554+ tarinfo.gname = "root"
1555+ return tarinfo
1556+
1557+
1558+def unpack_arguments(arguments):
1559+ """
1560+ Takes a string representing comma-separated key=value options and
1561+ returns a dict.
1562+ """
1563+ arg_dict = {}
1564+
1565+ for option in arguments.split(","):
1566+ fields = option.split("=")
1567+ if len(fields) != 2:
1568+ continue
1569+
1570+ arg_dict[fields[0]] = fields[1]
1571+
1572+ return arg_dict
1573+
1574+
1575+def generate_delta(conf, source_path, target_path):
1576+ """
1577+ Take two .tar.xz file and generate a third file, stored in the pool.
1578+ The path to the pool file is then returned and <path>.asc is also
1579+ generated using the default signing key.
1580+ """
1581+ source_filename = source_path.split("/")[-1].replace(".tar.xz", "")
1582+ target_filename = target_path.split("/")[-1].replace(".tar.xz", "")
1583+
1584+ # FIXME: This is a bit of a hack; it'd be better not to have to hardcode
1585+ # that kind of stuff...
1586+ if (source_filename.startswith("version-")
1587+ and target_filename.startswith("version-")):
1588+ return target_path
1589+
1590+ if (source_filename.startswith("keyring-")
1591+ and target_filename.startswith("keyring-")):
1592+ return target_path
1593+
1594+ # Now for everything else
1595+ path = os.path.realpath(os.path.join(conf.publish_path, "pool",
1596+ "%s.delta-%s.tar.xz" %
1597+ (target_filename, source_filename)))
1598+
1599+ # Return pre-existing entries
1600+ if os.path.exists(path):
1601+ return path
1602+
1603+ # Create the pool if it doesn't exist
1604+ if not os.path.exists(os.path.join(conf.publish_path, "pool")):
1605+ os.makedirs(os.path.join(conf.publish_path, "pool"))
1606+
1607+ # Generate the diff
1608+ tempdir = tempfile.mkdtemp()
1609+ tools.xz_uncompress(source_path, os.path.join(tempdir, "source.tar"))
1610+ tools.xz_uncompress(target_path, os.path.join(tempdir, "target.tar"))
1611+
1612+ imagediff = diff.ImageDiff(os.path.join(tempdir, "source.tar"),
1613+ os.path.join(tempdir, "target.tar"))
1614+
1615+ imagediff.generate_diff_tarball(os.path.join(tempdir, "output.tar"))
1616+ tools.xz_compress(os.path.join(tempdir, "output.tar"), path)
1617+ shutil.rmtree(tempdir)
1618+
1619+ # Sign the result
1620+ gpg.sign_file(conf, "image-signing", path)
1621+
1622+ # Generate the metadata file
1623+ metadata = {}
1624+ metadata['generator'] = "delta"
1625+ metadata['source'] = {}
1626+ metadata['target'] = {}
1627+
1628+ if os.path.exists(source_path.replace(".tar.xz", ".json")):
1629+ with open(source_path.replace(".tar.xz", ".json"), "r") as fd:
1630+ metadata['source'] = json.loads(fd.read())
1631+
1632+ if os.path.exists(target_path.replace(".tar.xz", ".json")):
1633+ with open(target_path.replace(".tar.xz", ".json"), "r") as fd:
1634+ metadata['target'] = json.loads(fd.read())
1635+
1636+ with open(path.replace(".tar.xz", ".json"), "w+") as fd:
1637+ fd.write("%s\n" % json.dumps(metadata, sort_keys=True,
1638+ indent=4, separators=(',', ': ')))
1639+ gpg.sign_file(conf, "image-signing", path.replace(".tar.xz", ".json"))
1640+
1641+ return path
1642+
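As a sketch of the naming scheme, a delta between two hypothetical pool
entries (paths made up, assuming conf.publish_path is /srv/www) lands at:

    source_path = "/srv/www/pool/ubuntu-aaa.tar.xz"
    target_path = "/srv/www/pool/ubuntu-bbb.tar.xz"
    path = generate_delta(conf, source_path, target_path)
    # path == "/srv/www/pool/ubuntu-bbb.delta-ubuntu-aaa.tar.xz",
    # with the .asc signature and the .json metadata file alongside.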
1643+
1644+def generate_file(conf, generator, arguments, environment):
1645+ """
1646+ Dispatcher for the various generators and importers.
1647+ It calls the right generator and signs the generated file
1648+ before returning the path.
1649+ """
1650+
1651+ if generator == "version":
1652+ path = generate_file_version(conf, arguments, environment)
1653+ elif generator == "cdimage-device":
1654+ path = generate_file_cdimage_device(conf, arguments, environment)
1655+ elif generator == "cdimage-ubuntu":
1656+ path = generate_file_cdimage_ubuntu(conf, arguments, environment)
1657+ elif generator == "cdimage-custom":
1658+ path = generate_file_cdimage_custom(conf, arguments, environment)
1659+ elif generator == "http":
1660+ path = generate_file_http(conf, arguments, environment)
1661+ elif generator == "keyring":
1662+ path = generate_file_keyring(conf, arguments, environment)
1663+ elif generator == "system-image":
1664+ path = generate_file_system_image(conf, arguments, environment)
1665+ elif generator == "remote-system-image":
1666+ path = generate_file_remote_system_image(conf, arguments, environment)
1667+ else:
1668+ raise Exception("Invalid generator: %s" % generator)
1669+
1670+ return path
1671+
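A hedged sketch of calling the dispatcher directly; the environment keys
shown are the ones the generators below rely on, the values are made up:

    environment = {"device_name": "mako", "version": 42,
                   "version_detail": [], "new_files": []}
    path = generate_file(conf, "keyring", ["image-signing"], environment)
    # Returns None here: the keyring generator refuses to run when
    # environment["new_files"] is empty.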
1672+
1673+def generate_file_cdimage_device(conf, arguments, environment):
1674+ """
1675+ Scan a cdimage tree for new device files.
1676+ """
1677+
1678+ # We need at least a path and a series
1679+ if len(arguments) < 2:
1680+ return None
1681+
1682+ # Read the arguments
1683+ cdimage_path = arguments[0]
1684+ series = arguments[1]
1685+
1686+ options = {}
1687+ if len(arguments) > 2:
1688+ options = unpack_arguments(arguments[2])
1689+
1690+ boot_arch = "armhf"
1691+ recovery_arch = "armel"
1692+ system_arch = "armel"
1693+ if environment['device_name'] in ("generic_x86", "generic_i386"):
1694+ boot_arch = "i386"
1695+ recovery_arch = "i386"
1696+ system_arch = "i386"
1697+ elif environment['device_name'] in ("generic_amd64",):
1698+ boot_arch = "amd64"
1699+ recovery_arch = "amd64"
1700+ system_arch = "amd64"
1701+
1702+ # Check that the directory exists
1703+ if not os.path.exists(cdimage_path):
1704+ return None
1705+
1706+ versions = sorted([version for version in os.listdir(cdimage_path)
1707+ if version not in ("pending", "current")],
1708+ reverse=True)
1709+
1710+ for version in versions:
1711+ # Skip directories without checksums
1712+ if not os.path.exists(os.path.join(cdimage_path, version,
1713+ "SHA256SUMS")):
1714+ continue
1715+
1716+ # Check for all the needed files
1717+ boot_path = os.path.join(cdimage_path, version,
1718+ "%s-preinstalled-boot-%s+%s.img" %
1719+ (series, boot_arch,
1720+ environment['device_name']))
1721+ if not os.path.exists(boot_path):
1722+ continue
1723+
1724+ recovery_path = os.path.join(cdimage_path, version,
1725+ "%s-preinstalled-recovery-%s+%s.img" %
1726+ (series, recovery_arch,
1727+ environment['device_name']))
1728+ if not os.path.exists(recovery_path):
1729+ continue
1730+
1731+ system_path = os.path.join(cdimage_path, version,
1732+ "%s-preinstalled-system-%s+%s.img" %
1733+ (series, system_arch,
1734+ environment['device_name']))
1735+ if not os.path.exists(system_path):
1736+ continue
1737+
1738+ # Check if we should only import tested images
1739+ if options.get("import", "any") == "good":
1740+ if not os.path.exists(os.path.join(cdimage_path, version,
1741+ ".marked_good")):
1742+ continue
1743+
1744+ # Set the version_detail string
1745+ version_detail = "device=%s" % version
1746+
1747+ # Extract the hashes
1748+ boot_hash = None
1749+ recovery_hash = None
1750+ system_hash = None
1751+ with open(os.path.join(cdimage_path, version,
1752+ "SHA256SUMS"), "r") as fd:
1753+ for line in fd:
1754+ line = line.strip()
1755+ if line.endswith(boot_path.split("/")[-1]):
1756+ boot_hash = line.split()[0]
1757+ elif line.endswith(recovery_path.split("/")[-1]):
1758+ recovery_hash = line.split()[0]
1759+ elif line.endswith(system_path.split("/")[-1]):
1760+ system_hash = line.split()[0]
1761+
1762+ if boot_hash and recovery_hash and system_hash:
1763+ break
1764+
1765+ if not boot_hash or not recovery_hash or not system_hash:
1766+ continue
1767+
1768+ hash_string = "%s/%s/%s" % (boot_hash, recovery_hash, system_hash)
1769+ global_hash = sha256(hash_string.encode('utf-8')).hexdigest()
1770+
1771+ # Generate the path
1772+ path = os.path.join(conf.publish_path, "pool",
1773+ "device-%s.tar.xz" % global_hash)
1774+
1775+ # Return pre-existing entries
1776+ if os.path.exists(path):
1777+ # Get the real version number (in case it got copied)
1778+ if os.path.exists(path.replace(".tar.xz", ".json")):
1779+ with open(path.replace(".tar.xz", ".json"), "r") as fd:
1780+ metadata = json.loads(fd.read())
1781+
1782+ if "version_detail" in metadata:
1783+ version_detail = metadata['version_detail']
1784+
1785+ environment['version_detail'].append(version_detail)
1786+ return path
1787+
1788+ temp_dir = tempfile.mkdtemp()
1789+
1790+ # Generate a new tarball
1791+ target_tarball = tarfile.open(os.path.join(temp_dir, "target.tar"),
1792+ "w:")
1793+
1794+ # system image
1795+ # # convert to raw image
1796+ system_img = os.path.join(temp_dir, "system.img")
1797+ with open(os.path.devnull, "w") as devnull:
1798+ subprocess.call(["simg2img", system_path, system_img],
1799+ stdout=devnull)
1800+
1801+ # # shrink to minimal size
1802+ with open(os.path.devnull, "w") as devnull:
1803+ subprocess.call(["resize2fs", "-M", system_img],
1804+ stdout=devnull, stderr=devnull)
1805+
1806+ # # include in tarball
1807+ target_tarball.add(system_img,
1808+ arcname="system/var/lib/lxc/android/system.img",
1809+ filter=root_ownership)
1810+
1811+ # boot image
1812+ target_tarball.add(boot_path, arcname="partitions/boot.img",
1813+ filter=root_ownership)
1814+
1815+ # recovery image
1816+ target_tarball.add(recovery_path,
1817+ arcname="partitions/recovery.img",
1818+ filter=root_ownership)
1819+
1820+ target_tarball.close()
1821+
1822+ # Create the pool if it doesn't exist
1823+ if not os.path.exists(os.path.join(conf.publish_path, "pool")):
1824+ os.makedirs(os.path.join(conf.publish_path, "pool"))
1825+
1826+ # Compress the target tarball and sign it
1827+ tools.xz_compress(os.path.join(temp_dir, "target.tar"), path)
1828+ gpg.sign_file(conf, "image-signing", path)
1829+
1830+ # Generate the metadata file
1831+ metadata = {}
1832+ metadata['generator'] = "cdimage-device"
1833+ metadata['version'] = version
1834+ metadata['version_detail'] = version_detail
1835+ metadata['series'] = series
1836+ metadata['device'] = environment['device_name']
1837+ metadata['boot_path'] = boot_path
1838+ metadata['boot_checksum'] = boot_hash
1839+ metadata['recovery_path'] = recovery_path
1840+ metadata['recovery_checksum'] = recovery_hash
1841+ metadata['system_path'] = system_path
1842+ metadata['system_checksum'] = system_hash
1843+
1844+ with open(path.replace(".tar.xz", ".json"), "w+") as fd:
1845+ fd.write("%s\n" % json.dumps(metadata, sort_keys=True,
1846+ indent=4, separators=(',', ': ')))
1847+ gpg.sign_file(conf, "image-signing", path.replace(".tar.xz", ".json"))
1848+
1849+ # Cleanup
1850+ shutil.rmtree(temp_dir)
1851+
1852+ environment['version_detail'].append(version_detail)
1853+ return path
1854+
1855+ return None
1856+
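For a hypothetical series and device, the generator above expects each dated
cdimage directory to contain the following (armhf boot, armel recovery and
system, per the non-x86 defaults):

    # series = "utopic", environment["device_name"] = "mako"
    expected = [
        "SHA256SUMS",
        "utopic-preinstalled-boot-armhf+mako.img",
        "utopic-preinstalled-recovery-armel+mako.img",
        "utopic-preinstalled-system-armel+mako.img",
    ]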
1857+
1858+def generate_file_cdimage_ubuntu(conf, arguments, environment):
1859+ """
1860+ Scan a cdimage tree for new ubuntu files.
1861+ """
1862+
1863+ # We need at least a path and a series
1864+ if len(arguments) < 2:
1865+ return None
1866+
1867+ # Read the arguments
1868+ cdimage_path = arguments[0]
1869+ series = arguments[1]
1870+
1871+ options = {}
1872+ if len(arguments) > 2:
1873+ options = unpack_arguments(arguments[2])
1874+
1875+ arch = "armhf"
1876+ if environment['device_name'] in ("generic_x86", "generic_i386"):
1877+ arch = "i386"
1878+ elif environment['device_name'] in ("generic_amd64",):
1879+ arch = "amd64"
1880+
1881+ # Check that the directory exists
1882+ if not os.path.exists(cdimage_path):
1883+ return None
1884+
1885+ versions = sorted([version for version in os.listdir(cdimage_path)
1886+ if version not in ("pending", "current")],
1887+ reverse=True)
1888+
1889+ for version in versions:
1890+ # Skip directories without checksums
1891+ if not os.path.exists(os.path.join(cdimage_path, version,
1892+ "SHA256SUMS")):
1893+ continue
1894+
1895+ # Check for the rootfs
1896+ rootfs_path = os.path.join(cdimage_path, version,
1897+ "%s-preinstalled-%s-%s.tar.gz" %
1898+ (series, options.get("product", "touch"),
1899+ arch))
1900+ if not os.path.exists(rootfs_path):
1901+ continue
1902+
1903+ # Check if we should only import tested images
1904+ if options.get("import", "any") == "good":
1905+ if not os.path.exists(os.path.join(cdimage_path, version,
1906+ ".marked_good")):
1907+ continue
1908+
1909+ # Set the version_detail string
1910+ version_detail = "ubuntu=%s" % version
1911+
1912+ # Extract the hash
1913+ rootfs_hash = None
1914+ with open(os.path.join(cdimage_path, version,
1915+ "SHA256SUMS"), "r") as fd:
1916+ for line in fd:
1917+ line = line.strip()
1918+ if line.endswith(rootfs_path.split("/")[-1]):
1919+ rootfs_hash = line.split()[0]
1920+ break
1921+
1922+ if not rootfs_hash:
1923+ continue
1924+
1925+ # Generate the path
1926+ path = os.path.join(conf.publish_path, "pool",
1927+ "ubuntu-%s.tar.xz" % rootfs_hash)
1928+
1929+ # Return pre-existing entries
1930+ if os.path.exists(path):
1931+ # Get the real version number (in case it got copied)
1932+ if os.path.exists(path.replace(".tar.xz", ".json")):
1933+ with open(path.replace(".tar.xz", ".json"), "r") as fd:
1934+ metadata = json.loads(fd.read())
1935+
1936+ if "version_detail" in metadata:
1937+ version_detail = metadata['version_detail']
1938+
1939+ environment['version_detail'].append(version_detail)
1940+ return path
1941+
1942+ temp_dir = tempfile.mkdtemp()
1943+
1944+ # Unpack the source tarball
1945+ tools.gzip_uncompress(rootfs_path, os.path.join(temp_dir,
1946+ "source.tar"))
1947+
1948+ # Generate a new shifted tarball
1949+ source_tarball = tarfile.open(os.path.join(temp_dir, "source.tar"),
1950+ "r:")
1951+ target_tarball = tarfile.open(os.path.join(temp_dir, "target.tar"),
1952+ "w:")
1953+
1954+ added = []
1955+ for entry in source_tarball:
1956+ # FIXME: Will need to be done on the real rootfs
1957+ # Skip some files
1958+ if entry.name in ("SWAP.swap", "etc/mtab"):
1959+ continue
1960+
1961+ fileptr = None
1962+ if entry.isfile():
1963+ try:
1964+ fileptr = source_tarball.extractfile(entry.name)
1965+ except KeyError: # pragma: no cover
1966+ pass
1967+
1968+ # Update hardlinks to point to the right target
1969+ if entry.islnk():
1970+ entry.linkname = "system/%s" % entry.linkname
1971+
1972+ entry.name = "system/%s" % entry.name
1973+ target_tarball.addfile(entry, fileobj=fileptr)
1974+ added.append(entry.name)
1975+
1976+ if options.get("product", "touch") == "touch":
1977+ # FIXME: Will need to be done on the real rootfs
1978+ # Add some symlinks and directories
1979+ # # /android
1980+ new_file = tarfile.TarInfo()
1981+ new_file.type = tarfile.DIRTYPE
1982+ new_file.name = "system/android"
1983+ new_file.mode = 0o755
1984+ new_file.mtime = int(time.strftime("%s", time.localtime()))
1985+ new_file.uname = "root"
1986+ new_file.gname = "root"
1987+ target_tarball.addfile(new_file)
1988+
1989+ # # Android partitions
1990+ for android_path in ("cache", "data", "factory", "firmware",
1991+ "persist", "system"):
1992+ new_file = tarfile.TarInfo()
1993+ new_file.type = tarfile.SYMTYPE
1994+ new_file.name = "system/%s" % android_path
1995+ new_file.linkname = "/android/%s" % android_path
1996+ new_file.mode = 0o755
1997+ new_file.mtime = int(time.strftime("%s", time.localtime()))
1998+ new_file.uname = "root"
1999+ new_file.gname = "root"
2000+ target_tarball.addfile(new_file)
2001+
2002+ # # /vendor
2003+ new_file = tarfile.TarInfo()
2004+ new_file.type = tarfile.SYMTYPE
2005+ new_file.name = "system/vendor"
2006+ new_file.linkname = "/android/system/vendor"
2007+ new_file.mode = 0o755
2008+ new_file.mtime = int(time.strftime("%s", time.localtime()))
2009+ new_file.uname = "root"
2010+ new_file.gname = "root"
2011+ target_tarball.addfile(new_file)
2012+
2013+ # # /userdata
2014+ new_file = tarfile.TarInfo()
2015+ new_file.type = tarfile.DIRTYPE
2016+ new_file.name = "system/userdata"
2017+ new_file.mode = 0o755
2018+ new_file.mtime = int(time.strftime("%s", time.localtime()))
2019+ new_file.uname = "root"
2020+ new_file.gname = "root"
2021+ target_tarball.addfile(new_file)
2022+
2023+ # # /etc/mtab
2024+ new_file = tarfile.TarInfo()
2025+ new_file.type = tarfile.SYMTYPE
2026+ new_file.name = "system/etc/mtab"
2027+ new_file.linkname = "/proc/mounts"
2028+ new_file.mode = 0o444
2029+ new_file.mtime = int(time.strftime("%s", time.localtime()))
2030+ new_file.uname = "root"
2031+ new_file.gname = "root"
2032+ target_tarball.addfile(new_file)
2033+
2034+ # # /lib/modules
2035+ new_file = tarfile.TarInfo()
2036+ new_file.type = tarfile.DIRTYPE
2037+ new_file.name = "system/lib/modules"
2038+ new_file.mode = 0o755
2039+ new_file.mtime = int(time.strftime("%s", time.localtime()))
2040+ new_file.uname = "root"
2041+ new_file.gname = "root"
2042+ target_tarball.addfile(new_file)
2043+
2044+ source_tarball.close()
2045+ target_tarball.close()
2046+
2047+ # Create the pool if it doesn't exist
2048+ if not os.path.exists(os.path.join(conf.publish_path, "pool")):
2049+ os.makedirs(os.path.join(conf.publish_path, "pool"))
2050+
2051+ # Compress the target tarball and sign it
2052+ tools.xz_compress(os.path.join(temp_dir, "target.tar"), path)
2053+ gpg.sign_file(conf, "image-signing", path)
2054+
2055+ # Generate the metadata file
2056+ metadata = {}
2057+ metadata['generator'] = "cdimage-ubuntu"
2058+ metadata['version'] = version
2059+ metadata['version_detail'] = version_detail
2060+ metadata['series'] = series
2061+ metadata['rootfs_path'] = rootfs_path
2062+ metadata['rootfs_checksum'] = rootfs_hash
2063+
2064+ with open(path.replace(".tar.xz", ".json"), "w+") as fd:
2065+ fd.write("%s\n" % json.dumps(metadata, sort_keys=True,
2066+ indent=4, separators=(',', ': ')))
2067+ gpg.sign_file(conf, "image-signing", path.replace(".tar.xz", ".json"))
2068+
2069+ # Cleanup
2070+ shutil.rmtree(temp_dir)
2071+
2072+ environment['version_detail'].append(version_detail)
2073+ return path
2074+
2075+ return None
2076+
2077+
2078+def generate_file_cdimage_custom(conf, arguments, environment):
2079+ """
2080+ Scan a cdimage tree for new custom files.
2081+ """
2082+
2083+ # We need at least a path and a series
2084+ if len(arguments) < 2:
2085+ return None
2086+
2087+ # Read the arguments
2088+ cdimage_path = arguments[0]
2089+ series = arguments[1]
2090+
2091+ options = {}
2092+ if len(arguments) > 2:
2093+ options = unpack_arguments(arguments[2])
2094+
2095+ arch = "armhf"
2096+ if environment['device_name'] in ("generic_x86", "generic_i386"):
2097+ arch = "i386"
2098+ elif environment['device_name'] in ("generic_amd64",):
2099+ arch = "amd64"
2100+
2101+ # Check that the directory exists
2102+ if not os.path.exists(cdimage_path):
2103+ return None
2104+
2105+ versions = sorted([version for version in os.listdir(cdimage_path)
2106+ if version not in ("pending", "current")],
2107+ reverse=True)
2108+
2109+ for version in versions:
2110+ # Skip directories without checksums
2111+ if not os.path.exists(os.path.join(cdimage_path, version,
2112+ "SHA256SUMS")):
2113+ continue
2114+
2115+ # Check for the custom tarball
2116+ custom_path = os.path.join(cdimage_path, version,
2117+ "%s-preinstalled-%s-%s.custom.tar.gz" %
2118+ (series, options.get("product", "touch"),
2119+ arch))
2120+ if not os.path.exists(custom_path):
2121+ continue
2122+
2123+ # Check if we should only import tested images
2124+ if options.get("import", "any") == "good":
2125+ if not os.path.exists(os.path.join(cdimage_path, version,
2126+ ".marked_good")):
2127+ continue
2128+
2129+ # Set the version_detail string
2130+ version_detail = "custom=%s" % version
2131+
2132+ # Extract the hash
2133+ custom_hash = None
2134+ with open(os.path.join(cdimage_path, version,
2135+ "SHA256SUMS"), "r") as fd:
2136+ for line in fd:
2137+ line = line.strip()
2138+ if line.endswith(custom_path.split("/")[-1]):
2139+ custom_hash = line.split()[0]
2140+ break
2141+
2142+ if not custom_hash:
2143+ continue
2144+
2145+ # Generate the path
2146+ path = os.path.join(conf.publish_path, "pool",
2147+ "custom-%s.tar.xz" % custom_hash)
2148+
2149+ # Return pre-existing entries
2150+ if os.path.exists(path):
2151+ # Get the real version number (in case it got copied)
2152+ if os.path.exists(path.replace(".tar.xz", ".json")):
2153+ with open(path.replace(".tar.xz", ".json"), "r") as fd:
2154+ metadata = json.loads(fd.read())
2155+
2156+ if "version_detail" in metadata:
2157+ version_detail = metadata['version_detail']
2158+
2159+ environment['version_detail'].append(version_detail)
2160+ return path
2161+
2162+ temp_dir = tempfile.mkdtemp()
2163+
2164+ # Unpack the source tarball
2165+ tools.gzip_uncompress(custom_path, os.path.join(temp_dir,
2166+ "source.tar"))
2167+
2168+ # Create the pool if it doesn't exist
2169+ if not os.path.exists(os.path.join(conf.publish_path, "pool")):
2170+ os.makedirs(os.path.join(conf.publish_path, "pool"))
2171+
2172+ # Compress the target tarball and sign it
2173+ tools.xz_compress(os.path.join(temp_dir, "source.tar"), path)
2174+ gpg.sign_file(conf, "image-signing", path)
2175+
2176+ # Generate the metadata file
2177+ metadata = {}
2178+ metadata['generator'] = "cdimage-custom"
2179+ metadata['version'] = version
2180+ metadata['version_detail'] = version_detail
2181+ metadata['series'] = series
2182+ metadata['custom_path'] = custom_path
2183+ metadata['custom_checksum'] = custom_hash
2184+
2185+ with open(path.replace(".tar.xz", ".json"), "w+") as fd:
2186+ fd.write("%s\n" % json.dumps(metadata, sort_keys=True,
2187+ indent=4, separators=(',', ': ')))
2188+ gpg.sign_file(conf, "image-signing", path.replace(".tar.xz", ".json"))
2189+
2190+ # Cleanup
2191+ shutil.rmtree(temp_dir)
2192+
2193+ environment['version_detail'].append(version_detail)
2194+ return path
2195+
2196+ return None
2197+
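The new custom generator follows the same pattern; with the same
hypothetical series and an armhf device it looks for a single tarball per
dated directory, alongside the checksums file:

    # series = "utopic", product defaulting to "touch", arch "armhf"
    custom_name = "utopic-preinstalled-touch-armhf.custom.tar.gz"
    # Published as pool/custom-<sha256 of the source tarball>.tar.xz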
2198+
2199+def generate_file_http(conf, arguments, environment):
2200+ """
2201+ Grab, cache and return a file using http/https.
2202+ """
2203+
2204+ # We need at least a URL
2205+ if len(arguments) == 0:
2206+ return None
2207+
2208+ # Read the arguments
2209+ url = arguments[0]
2210+
2211+ options = {}
2212+ if len(arguments) > 1:
2213+ options = unpack_arguments(arguments[1])
2214+
2215+ path = None
2216+ version = None
2217+
2218+ if "http_%s" % url in CACHE:
2219+ version = CACHE['http_%s' % url]
2220+
2221+ # Get the version/build number
2222+ if "monitor" in options or version:
2223+ if not version:
2224+ # Grab the current version number
2225+ old_timeout = socket.getdefaulttimeout()
2226+ socket.setdefaulttimeout(5)
2227+ try:
2228+ version = urlopen(options['monitor']).read().decode().strip()
2229+ except socket.timeout:
2230+ return None
2231+ except IOError:
2232+ return None
2233+ socket.setdefaulttimeout(old_timeout)
2234+
2235+ # Validate the version number
2236+ if not version or len(version.split("\n")) > 1:
2237+ return None
2238+
2239+ # Push the result in the cache
2240+ CACHE['http_%s' % url] = version
2241+
2242+ # Set version_detail
2243+ version_detail = "%s=%s" % (options.get("name", "http"), version)
2244+
2245+ # FIXME: can be dropped once all the non-hashed tarballs are gone
2246+ old_path = os.path.realpath(os.path.join(conf.publish_path, "pool",
2247+ "%s-%s.tar.xz" %
2248+ (options.get("name", "http"),
2249+ version)))
2250+ if os.path.exists(old_path):
2251+ # Get the real version number (in case it got copied)
2252+ if os.path.exists(old_path.replace(".tar.xz", ".json")):
2253+ with open(old_path.replace(".tar.xz", ".json"), "r") as fd:
2254+ metadata = json.loads(fd.read())
2255+
2256+ if "version_detail" in metadata:
2257+ version_detail = metadata['version_detail']
2258+
2259+ environment['version_detail'].append(version_detail)
2260+ return old_path
2261+
2262+ # Build the path, hashing together the URL and version
2263+ hash_string = "%s:%s" % (url, version)
2264+ global_hash = sha256(hash_string.encode('utf-8')).hexdigest()
2265+ path = os.path.realpath(os.path.join(conf.publish_path, "pool",
2266+ "%s-%s.tar.xz" %
2267+ (options.get("name", "http"),
2268+ global_hash)))
2269+
2270+ # Return pre-existing entries
2271+ if os.path.exists(path):
2272+ # Get the real version number (in case it got copied)
2273+ if os.path.exists(path.replace(".tar.xz", ".json")):
2274+ with open(path.replace(".tar.xz", ".json"), "r") as fd:
2275+ metadata = json.loads(fd.read())
2276+
2277+ if "version_detail" in metadata:
2278+ version_detail = metadata['version_detail']
2279+
2280+ environment['version_detail'].append(version_detail)
2281+ return path
2282+
2283+ # Grab the real thing
2284+ tempdir = tempfile.mkdtemp()
2285+ old_timeout = socket.getdefaulttimeout()
2286+ socket.setdefaulttimeout(5)
2287+ try:
2288+ urlretrieve(url, os.path.join(tempdir, "download"))
2289+ except socket.timeout:
2290+ shutil.rmtree(tempdir)
2291+ return None
2292+ except IOError:
2293+ shutil.rmtree(tempdir)
2294+ return None
2295+ socket.setdefaulttimeout(old_timeout)
2296+
2297+ # Hash it if we don't have a version number
2298+ if not version:
2299+ # Hash the file
2300+ with open(os.path.join(tempdir, "download"), "rb") as fd:
2301+ version = sha256(fd.read()).hexdigest()
2302+
2303+ # Set version_detail
2304+ version_detail = "%s=%s" % (options.get("name", "http"), version)
2305+
2306+ # Push the result in the cache
2307+ CACHE['http_%s' % url] = version
2308+
2309+ # Build the path
2310+ path = os.path.realpath(os.path.join(conf.publish_path, "pool",
2311+ "%s-%s.tar.xz" %
2312+ (options.get("name", "http"),
2313+ version)))
2314+ # Return pre-existing entries
2315+ if os.path.exists(path):
2316+ # Get the real version number (in case it got copied)
2317+ if os.path.exists(path.replace(".tar.xz", ".json")):
2318+ with open(path.replace(".tar.xz", ".json"), "r") as fd:
2319+ metadata = json.loads(fd.read())
2320+
2321+ if "version_detail" in metadata:
2322+ version_detail = metadata['version_detail']
2323+
2324+ environment['version_detail'].append(version_detail)
2325+ shutil.rmtree(tempdir)
2326+ return path
2327+
2328+ # Create the pool if it doesn't exist
2329+ if not os.path.exists(os.path.join(conf.publish_path, "pool")):
2330+ os.makedirs(os.path.join(conf.publish_path, "pool"))
2331+
2332+ # Move the file to the pool and sign it
2333+ shutil.move(os.path.join(tempdir, "download"), path)
2334+ gpg.sign_file(conf, "image-signing", path)
2335+
2336+ # Generate the metadata file
2337+ metadata = {}
2338+ metadata['generator'] = "http"
2339+ metadata['version'] = version
2340+ metadata['version_detail'] = version_detail
2341+ metadata['url'] = url
2342+
2343+ with open(path.replace(".tar.xz", ".json"), "w+") as fd:
2344+ fd.write("%s\n" % json.dumps(metadata, sort_keys=True,
2345+ indent=4, separators=(',', ': ')))
2346+ gpg.sign_file(conf, "image-signing", path.replace(".tar.xz", ".json"))
2347+
2348+ # Cleanup
2349+ shutil.rmtree(tempdir)
2350+
2351+ environment['version_detail'].append(version_detail)
2352+ return path
2353+
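A sketch of the arguments this generator takes; the URL and monitor file
are made up:

    arguments = [
        "http://example.com/device/device.tar.xz",
        "monitor=http://example.com/device/build_number,name=device",
    ]
    path = generate_file_http(conf, arguments, environment)
    # With a monitor, the remote build number is cached and hashed together
    # with the URL to name the pool file; without one, the downloaded file
    # itself is hashed instead.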
2354+
2355+def generate_file_keyring(conf, arguments, environment):
2356+ """
2357+ Generate a keyring tarball or return a pre-existing one.
2358+ """
2359+
2360+ # Don't generate keyring tarballs when nothing changed
2361+ if len(environment['new_files']) == 0:
2362+ return None
2363+
2364+ # We need a keyring name
2365+ if len(arguments) == 0:
2366+ return None
2367+
2368+ # Read the arguments
2369+ keyring_name = arguments[0]
2370+ keyring_path = os.path.join(conf.gpg_keyring_path, keyring_name)
2371+
2372+ # Fail on missing keyring
2373+ if not os.path.exists("%s.tar.xz" % keyring_path) or \
2374+ not os.path.exists("%s.tar.xz.asc" % keyring_path):
2375+ return None
2376+
2377+ with open("%s.tar.xz" % keyring_path, "rb") as fd:
2378+ hash_tarball = sha256(fd.read()).hexdigest()
2379+
2380+ with open("%s.tar.xz.asc" % keyring_path, "rb") as fd:
2381+ hash_signature = sha256(fd.read()).hexdigest()
2382+
2383+ hash_string = "%s/%s" % (hash_tarball, hash_signature)
2384+ global_hash = sha256(hash_string.encode('utf-8')).hexdigest()
2385+
2386+ # Build the path
2387+ path = os.path.realpath(os.path.join(conf.publish_path, "pool",
2388+ "keyring-%s.tar.xz" %
2389+ global_hash))
2390+
2391+ # Set the version_detail string
2392+ environment['version_detail'].append("keyring=%s" % keyring_name)
2393+
2394+ # Don't bother re-generating a file if it already exists
2395+ if os.path.exists(path):
2396+ return path
2397+
2398+ # Create temporary directory
2399+ tempdir = tempfile.mkdtemp()
2400+
2401+ # Generate the tarball
2402+ tarball = tarfile.open(os.path.join(tempdir, "output.tar"), "w:")
2403+ tarball.add("%s.tar.xz" % keyring_path,
2404+ arcname="/system/etc/system-image/archive-master.tar.xz",
2405+ filter=root_ownership)
2406+ tarball.add("%s.tar.xz.asc" % keyring_path,
2407+ arcname="/system/etc/system-image/archive-master.tar.xz.asc",
2408+ filter=root_ownership)
2409+ tarball.close()
2410+
2411+ # Create the pool if it doesn't exist
2412+ if not os.path.exists(os.path.join(conf.publish_path, "pool")):
2413+ os.makedirs(os.path.join(conf.publish_path, "pool"))
2414+
2415+ # Compress and sign it
2416+ tools.xz_compress(os.path.join(tempdir, "output.tar"), path)
2417+ gpg.sign_file(conf, "image-signing", path)
2418+
2419+ # Generate the metadata file
2420+ metadata = {}
2421+ metadata['generator'] = "keyring"
2422+ metadata['version'] = global_hash
2423+ metadata['version_detail'] = "keyring=%s" % keyring_name
2424+ metadata['path'] = keyring_path
2425+
2426+ with open(path.replace(".tar.xz", ".json"), "w+") as fd:
2427+ fd.write("%s\n" % json.dumps(metadata, sort_keys=True,
2428+ indent=4, separators=(',', ': ')))
2429+ gpg.sign_file(conf, "image-signing", path.replace(".tar.xz", ".json"))
2430+
2431+ # Cleanup
2432+ shutil.rmtree(tempdir)
2433+
2434+ return path
2435+
2436+
2437+def generate_file_remote_system_image(conf, arguments, environment):
2438+ """
2439+ Import files from a remote system-image server.
2440+ """
2441+
2442+ # We need at least a base URL, a channel name and a file prefix
2443+ if len(arguments) < 3:
2444+ return None
2445+
2446+ # Read the arguments
2447+ base_url = arguments[0]
2448+ channel_name = arguments[1]
2449+ prefix = arguments[2]
2450+
2451+ options = {}
2452+ if len(arguments) > 3:
2453+ options = unpack_arguments(arguments[3])
2454+
2455+ device_name = environment['device_name']
2456+ if 'device' in options:
2457+ device_name = options['device']
2458+
2459+ # Fetch and validate the remote channels.json
2460+ old_timeout = socket.getdefaulttimeout()
2461+ socket.setdefaulttimeout(5)
2462+ try:
2463+ channel_json = json.loads(urlopen("%s/channels.json" %
2464+ base_url).read().decode().strip())
2465+ except socket.timeout:
2466+ return None
2467+ except IOError:
2468+ return None
2469+ socket.setdefaulttimeout(old_timeout)
2470+
2471+ if channel_name not in channel_json:
2472+ return None
2473+
2474+ if "devices" not in channel_json[channel_name]:
2475+ return None
2476+
2477+ if device_name not in channel_json[channel_name]['devices']:
2478+ return None
2479+
2480+ if "index" not in (channel_json[channel_name]['devices']
2481+ [device_name]):
2482+ return None
2483+
2484+ index_url = "%s/%s" % (base_url, channel_json[channel_name]['devices']
2485+ [device_name]['index'])
2486+
2487+ # Fetch and validate the remote index.json
2488+ old_timeout = socket.getdefaulttimeout()
2489+ socket.setdefaulttimeout(5)
2490+ try:
2491+ index_json = json.loads(urlopen(index_url).read().decode())
2492+ except socket.timeout:
2493+ return None
2494+ except IOError:
2495+ return None
2496+ socket.setdefaulttimeout(old_timeout)
2497+
2498+ # Grab the list of full images
2499+ full_images = sorted([image for image in index_json['images']
2500+ if image['type'] == "full"],
2501+ key=lambda image: image['version'])
2502+
2503+ # No images
2504+ if not full_images:
2505+ return None
2506+
2507+ # Found an image, so let's try to find a match
2508+ for file_entry in full_images[-1]['files']:
2509+ file_name = file_entry['path'].split("/")[-1]
2510+ file_prefix = file_name.rsplit("-", 1)[0]
2511+ if file_prefix == prefix:
2512+ path = os.path.realpath("%s/%s" % (conf.publish_path,
2513+ file_entry['path']))
2514+ if os.path.exists(path):
2515+ return path
2516+
2517+ # Create the target if needed
2518+ if not os.path.exists(os.path.dirname(path)):
2519+ os.makedirs(os.path.dirname(path))
2520+
2521+ # Grab the file
2522+ file_url = "%s/%s" % (base_url, file_entry['path'])
2523+ socket.setdefaulttimeout(5)
2524+ try:
2525+ urlretrieve(file_url, path)
2526+ except socket.timeout:
2527+ if os.path.exists(path):
2528+ os.remove(path)
2529+ return None
2530+ except IOError:
2531+ if os.path.exists(path):
2532+ os.remove(path)
2533+ return None
2534+ socket.setdefaulttimeout(old_timeout)
2535+
2536+ if "keyring" in options:
2537+ if not tools.repack_recovery_keyring(conf, path,
2538+ options['keyring']):
2539+ if os.path.exists(path):
2540+ os.remove(path)
2541+ return None
2542+
2543+ gpg.sign_file(conf, "image-signing", path)
2544+
2545+ # Attempt to grab an associated json
2546+ socket.setdefaulttimeout(5)
2547+ json_path = path.replace(".tar.xz", ".json")
2548+ json_url = file_url.replace(".tar.xz", ".json")
2549+ try:
2550+ urlretrieve(json_url, json_path)
2551+ except socket.timeout:
2552+ if os.path.exists(json_path):
2553+ os.remove(json_path)
2554+ except IOError:
2555+ if os.path.exists(json_path):
2556+ os.remove(json_path)
2557+ socket.setdefaulttimeout(old_timeout)
2558+
2559+ if os.path.exists(json_path):
2560+ gpg.sign_file(conf, "image-signing", json_path)
2561+ with open(json_path, "r") as fd:
2562+ metadata = json.loads(fd.read())
2563+
2564+ if "version_detail" in metadata:
2565+ environment['version_detail'].append(
2566+ metadata['version_detail'])
2567+
2568+ return path
2569+
2570+ return None
2571+
2572+
2573+def generate_file_system_image(conf, arguments, environment):
2574+ """
2575+ Copy a file from another channel.
2576+ """
2577+
2578+ # We need at least a channel name and a file prefix
2579+ if len(arguments) < 2:
2580+ return None
2581+
2582+ # Read the arguments
2583+ channel_name = arguments[0]
2584+ prefix = arguments[1]
2585+
2586+ # Run some checks
2587+ pub = tree.Tree(conf)
2588+ if channel_name not in pub.list_channels():
2589+ return None
2590+
2591+ if (environment['device_name'] not in
2592+ pub.list_channels()[channel_name]['devices']):
2593+ return None
2594+
2595+ # Try to find the file
2596+ device = pub.get_device(channel_name, environment['device_name'])
2597+
2598+ full_images = sorted([image for image in device.list_images()
2599+ if image['type'] == "full"],
2600+ key=lambda image: image['version'])
2601+
2602+ # No images
2603+ if not full_images:
2604+ return None
2605+
2606+ # Found an image, so let's try to find a match
2607+ for file_entry in full_images[-1]['files']:
2608+ file_name = file_entry['path'].split("/")[-1]
2609+ file_prefix = file_name.rsplit("-", 1)[0]
2610+ if file_prefix == prefix:
2611+ path = os.path.realpath("%s/%s" % (conf.publish_path,
2612+ file_entry['path']))
2613+
2614+ if os.path.exists(path.replace(".tar.xz", ".json")):
2615+ with open(path.replace(".tar.xz", ".json"), "r") as fd:
2616+ metadata = json.loads(fd.read())
2617+
2618+ if "version_detail" in metadata:
2619+ environment['version_detail'].append(
2620+ metadata['version_detail'])
2621+
2622+ return path
2623+
2624+ return None
2625+
2626+
2627+def generate_file_version(conf, arguments, environment):
2628+ """
2629+ Generate a version tarball or return a pre-existing one.
2630+ """
2631+
2632+ # Don't generate version tarballs when nothing changed
2633+ if len(environment['new_files']) == 0:
2634+ return None
2635+
2636+ path = os.path.realpath(os.path.join(environment['device'].path,
2637+ "version-%s.tar.xz" % environment['version']))
2638+
2639+ # Set the version_detail string
2640+ environment['version_detail'].append("version=%s" % environment['version'])
2641+
2642+ # Don't bother re-generating a file if it already exists
2643+ if os.path.exists(path):
2644+ return path
2645+
2646+ # Generate version_detail
2647+ version_detail = ",".join(environment['version_detail'])
2648+
2649+ # Create temporary directory
2650+ tempdir = tempfile.mkdtemp()
2651+
2652+ # Generate the tarball
2653+ tools.generate_version_tarball(
2654+ conf, environment['channel_name'], environment['device_name'],
2655+ str(environment['version']),
2656+ os.path.join(tempdir, "version"), version_detail=version_detail)
2657+
2658+ # Create the device directory if it doesn't exist
2659+ if not os.path.exists(environment['device'].path):
2660+ os.makedirs(environment['device'].path)
2661+
2662+ # Compress and sign it
2663+ tools.xz_compress(os.path.join(tempdir, "version"), path)
2664+ gpg.sign_file(conf, "image-signing", path)
2665+
2666+ # Generate the metadata file
2667+ metadata = {}
2668+ metadata['generator'] = "version"
2669+ metadata['version'] = environment['version']
2670+ metadata['version_detail'] = "version=%s" % environment['version']
2671+ metadata['channel.ini'] = {}
2672+ metadata['channel.ini']['channel'] = environment['channel_name']
2673+ metadata['channel.ini']['device'] = environment['device_name']
2674+ metadata['channel.ini']['version'] = str(environment['version'])
2675+ metadata['channel.ini']['version_detail'] = version_detail
2676+
2677+ with open(path.replace(".tar.xz", ".json"), "w+") as fd:
2678+ fd.write("%s\n" % json.dumps(metadata, sort_keys=True,
2679+ indent=4, separators=(',', ': ')))
2680+ gpg.sign_file(conf, "image-signing", path.replace(".tar.xz", ".json"))
2681+
2682+ # Cleanup
2683+ shutil.rmtree(tempdir)
2684+
2685+ return path
2686
2687=== added file 'lib/systemimage/gpg.py'
2688--- lib/systemimage/gpg.py 1970-01-01 00:00:00 +0000
2689+++ lib/systemimage/gpg.py 2014-10-10 11:11:17 +0000
2690@@ -0,0 +1,239 @@
2691+# -*- coding: utf-8 -*-
2692+
2693+# Copyright (C) 2013 Canonical Ltd.
2694+# Author: Stéphane Graber <stgraber@ubuntu.com>
2695+
2696+# This program is free software: you can redistribute it and/or modify
2697+# it under the terms of the GNU General Public License as published by
2698+# the Free Software Foundation; version 3 of the License.
2699+#
2700+# This program is distributed in the hope that it will be useful,
2701+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2702+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2703+# GNU General Public License for more details.
2704+#
2705+# You should have received a copy of the GNU General Public License
2706+# along with this program. If not, see <http://www.gnu.org/licenses/>.
2707+
2708+import json
2709+import gpgme
2710+import os
2711+import tarfile
2712+
2713+from io import BytesIO
2714+
2715+
2716+def generate_signing_key(keyring_path, key_name, key_email, key_expiry):
2717+ """
2718+ Generate a new 2048bit RSA signing key.
2719+ Generate a new 2048-bit RSA signing key.
2720+
2721+ if not os.path.isdir(keyring_path):
2722+ raise Exception("Keyring path doesn't exist: %s" % keyring_path)
2723+
2724+ key_params = """<GnupgKeyParms format="internal">
2725+Key-Type: RSA
2726+Key-Length: 2048
2727+Key-Usage: sign
2728+Name-Real: %s
2729+Name-Email: %s
2730+Expire-Date: %s
2731+</GnupgKeyParms>
2732+""" % (key_name, key_email, key_expiry)
2733+
2734+ os.environ['GNUPGHOME'] = keyring_path
2735+
2736+ ctx = gpgme.Context()
2737+ result = ctx.genkey(key_params)
2738+ key = ctx.get_key(result.fpr, True)
2739+ [uid] = key.uids
2740+
2741+ return uid
2742+
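A minimal usage sketch, assuming a pre-created keyring directory; the
path, identity and expiry below are examples only:

    uid = generate_signing_key("/srv/secret/gpg/keys/image-signing",
                               "Example signing key",
                               "image-signing@example.com", "1y")
    # uid is the gpgme uid of the new key (uid.name, uid.email, ...)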
2743+
2744+def sign_file(config, key, path, destination=None, detach=True, armor=True):
2745+ """
2746+ Sign a file and publish the signature.
2747+ The key parameter must be a valid key under config.gpg_key_path.
2748+ The path must be that of a valid file.
2749+ The destination defaults to <path>.gpg (non-armored) or
2750+ <path>.asc (armored).
2751+ The detach and armor parameters respectively control the use of
2752+ detached signatures and base64 armoring.
2753+ """
2754+
2755+ key_path = "%s/%s" % (config.gpg_key_path, key)
2756+
2757+ if not os.path.isdir(key_path):
2758+ raise IndexError("Invalid GPG key name '%s'." % key)
2759+
2760+ if not os.path.isfile(path):
2761+ raise Exception("Invalid path '%s'." % path)
2762+
2763+ if not destination:
2764+ if armor:
2765+ destination = "%s.asc" % path
2766+ elif detach:
2767+ destination = "%s.sig" % path
2768+ else:
2769+ destination = "%s.gpg" % path
2770+
2771+ if os.path.exists(destination):
2772+ raise Exception("destination already exists.")
2773+
2774+ os.environ['GNUPGHOME'] = key_path
2775+
2776+ # Create a GPG context, assuming no passphrase
2777+ ctx = gpgme.Context()
2778+ ctx.armor = armor
2779+ [key] = ctx.keylist()
2780+ ctx.signers = [key]
2781+
2782+ with open(path, "rb") as fd_in, open(destination, "wb+") as fd_out:
2783+ if detach:
2784+ retval = ctx.sign(fd_in, fd_out, gpgme.SIG_MODE_DETACH)
2785+ else:
2786+ retval = ctx.sign(fd_in, fd_out, gpgme.SIG_MODE_NORMAL)
2787+
2788+ return retval
2789+
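For illustration, signing a published tarball with the defaults produces a
detached, ASCII-armored signature next to it (path made up):

    sign_file(conf, "image-signing", "/srv/www/pool/ubuntu-aaa.tar.xz")
    # Writes /srv/www/pool/ubuntu-aaa.tar.xz.asc and raises if that
    # signature already exists.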
2790+
2791+class Keyring:
2792+ """
2793+ Represents a keyring; lets you list/add/remove keys and change
2794+ some of the keyring's properties (type, expiration, target hardware).
2795+ """
2796+
2797+ keyring_name = None
2798+ keyring_type = None
2799+ keyring_expiry = None
2800+ keyring_model = None
2801+ keyring_path = None
2802+
2803+ def __init__(self, config, keyring_name):
2804+ keyring_path = "%s/%s" % (config.gpg_keyring_path, keyring_name)
2805+
2806+ if not os.path.isdir(keyring_path):
2807+ os.makedirs(keyring_path)
2808+
2809+ self.keyring_name = keyring_name
2810+ self.keyring_path = keyring_path
2811+
2812+ if os.path.exists("%s/keyring.json" % keyring_path):
2813+ with open("%s/keyring.json" % keyring_path, "r") as fd:
2814+ keyring_json = json.loads(fd.read())
2815+
2816+ self.keyring_type = keyring_json.get('type', None)
2817+ self.keyring_expiry = keyring_json.get('expiry', None)
2818+ self.keyring_model = keyring_json.get('model', None)
2819+ else:
2820+ open("%s/pubring.gpg" % keyring_path, "w+").close()
2821+
2822+ def generate_tarball(self, destination=None):
2823+ """
2824+ Generate a tarball of the keyring and its json metadata.
2825+ Returns the path to the tarball.
2826+ """
2827+
2828+ if not destination:
2829+ destination = "%s.tar" % self.keyring_path
2830+
2831+ if os.path.isfile(destination):
2832+ os.remove(destination)
2833+
2834+ tarball = tarfile.open(destination, "w:")
2835+ tarball.add("%s/keyring.json" % self.keyring_path,
2836+ arcname="keyring.json")
2837+ tarball.add("%s/pubring.gpg" % self.keyring_path,
2838+ arcname="keyring.gpg")
2839+ tarball.close()
2840+
2841+ return destination
2842+
2843+ def set_metadata(self, keyring_type, keyring_expiry=None,
2844+ keyring_model=None):
2845+ """
2846+ Generate a new keyring.json file.
2847+ """
2848+
2849+ keyring_json = {}
2850+ if keyring_type:
2851+ self.keyring_type = keyring_type
2852+ keyring_json['type'] = keyring_type
2853+
2854+ if keyring_expiry:
2855+ self.keyring_expiry = keyring_expiry
2856+ keyring_json['expiry'] = keyring_expiry
2857+
2858+ if keyring_model:
2859+ self.keyring_model = keyring_model
2860+ keyring_json['model'] = keyring_model
2861+
2862+ with open("%s/keyring.json" % self.keyring_path, "w+") as fd:
2863+ fd.write("%s\n" % json.dumps(keyring_json, sort_keys=True,
2864+ indent=4, separators=(',', ': ')))
2865+
2866+ def list_keys(self):
2867+ os.environ['GNUPGHOME'] = self.keyring_path
2868+
2869+ keys = []
2870+
2871+ ctx = gpgme.Context()
2872+ for key in ctx.keylist():
2873+ keys.append((key.subkeys[0].keyid, key.subkeys[0].length,
2874+ [uid.uid for uid in key.uids]))
2875+
2876+ return keys
2877+
2878+ def export_key(self, path, key, armor=True):
2879+ os.environ['GNUPGHOME'] = self.keyring_path
2880+
2881+ ctx = gpgme.Context()
2882+ ctx.armor = armor
2883+
2884+ gpg_key = ctx.get_key(key)
2885+
2886+ with open(path, "wb+") as fd:
2887+ for subkey in gpg_key.subkeys:
2888+ ctx.export(str(subkey.keyid), fd)
2889+
2890+ def import_key(self, path, armor=True):
2891+ os.environ['GNUPGHOME'] = self.keyring_path
2892+
2893+ ctx = gpgme.Context()
2894+ ctx.armor = armor
2895+
2896+ with open(path, "rb") as fd:
2897+ ctx.import_(fd)
2898+
2899+ def import_keys(self, path):
2900+ """
2901+ Import all the keys from the specified keyring.
2902+ """
2903+
2904+ os.environ['GNUPGHOME'] = path
2905+
2906+ ctx = gpgme.Context()
2907+
2908+ keys = []
2909+ for key in list(ctx.keylist()):
2910+ for subkey in key.subkeys:
2911+ content = BytesIO()
2912+ ctx.export(str(subkey.keyid), content)
2913+ keys.append(content)
2914+
2915+ os.environ['GNUPGHOME'] = self.keyring_path
2916+ ctx = gpgme.Context()
2917+
2918+ for key in keys:
2919+ key.seek(0)
2920+ ctx.import_(key)
2921+
2922+ def del_key(self, key):
2923+ os.environ['GNUPGHOME'] = self.keyring_path
2924+
2925+ ctx = gpgme.Context()
2926+
2927+ gpg_key = ctx.get_key(key)
2928+
2929+ ctx.delete(gpg_key)
2930
2931=== added file 'lib/systemimage/tools.py'
2932--- lib/systemimage/tools.py 1970-01-01 00:00:00 +0000
2933+++ lib/systemimage/tools.py 2014-10-10 11:11:17 +0000
2934@@ -0,0 +1,367 @@
2935+# -*- coding: utf-8 -*-
2936+
2937+# Copyright (C) 2013 Canonical Ltd.
2938+# Author: Stéphane Graber <stgraber@ubuntu.com>
2939+
2940+# This program is free software: you can redistribute it and/or modify
2941+# it under the terms of the GNU General Public License as published by
2942+# the Free Software Foundation; version 3 of the License.
2943+#
2944+# This program is distributed in the hope that it will be useful,
2945+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2946+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2947+# GNU General Public License for more details.
2948+#
2949+# You should have received a copy of the GNU General Public License
2950+# along with this program. If not, see <http://www.gnu.org/licenses/>.
2951+
2952+from io import BytesIO
2953+
2954+import gzip
2955+import os
2956+import re
2957+import shutil
2958+import subprocess
2959+import tarfile
2960+import tempfile
2961+import time
2962+
2963+
2964+def expand_path(path, base="/"):
2965+ """
2966+ Takes a path and returns a tuple containing the absolute path
2967+ and a relative path (relative to base).
2968+ """
2969+
2970+ if path.startswith(base):
2971+ path = re.sub('^%s' % re.escape(base), "", path)
2972+
2973+ if path.startswith(os.sep):
2974+ relpath = path[1:]
2975+ else:
2976+ relpath = path
2977+
2978+ abspath = os.path.realpath(os.path.join(base, relpath))
2979+
2980+ return abspath, relpath
2981+
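A quick example of the tuple it returns, assuming no symlinks are involved
(paths made up):

    abspath, relpath = expand_path("/srv/www/pool/device.tar.xz",
                                   base="/srv/www")
    # abspath == "/srv/www/pool/device.tar.xz"
    # relpath == "pool/device.tar.xz"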
2982+
2983+# Imported from cdimage.osextras
2984+def find_on_path(command):
2985+ """Is command on the executable search path?"""
2986+
2987+ if 'PATH' not in os.environ:
2988+ return False
2989+ path = os.environ['PATH']
2990+ for element in path.split(os.pathsep):
2991+ if not element:
2992+ continue
2993+ filename = os.path.join(element, command)
2994+ if os.path.isfile(filename) and os.access(filename, os.X_OK):
2995+ return True
2996+ return False
2997+
2998+
2999+def generate_version_tarball(config, channel, device, version, path,
3000+ build_path="system/etc/ubuntu-build",
3001+ channel_path="system/etc/system-image/"
3002+ "channel.ini",
3003+ version_detail=None,
3004+ channel_target=None):
3005+ """
3006+ Generates a tarball which contains two files
3007+ (build_path and channel_path).
3008+ The first contains the build id, the second a .ini config file.
3009+ The resulting tarball is written at the provided location (path).
3010+ """
3011+
3012+ tarball = tarfile.open(path, 'w:')
3013+
3014+ version_file = tarfile.TarInfo()
3015+ version_file.size = len(version) + 1
3016+ version_file.mtime = int(time.strftime("%s", time.localtime()))
3017+ version_file.name = build_path
3018+
3019+ # Append a line break
3020+ version += "\n"
3021+
3022+ tarball.addfile(version_file, BytesIO(version.encode('utf-8')))
3023+
3024+ http_port = config.public_http_port
3025+ https_port = config.public_https_port
3026+
3027+ if http_port == 0:
3028+ http_port = "disabled"
3029+
3030+ if https_port == 0:
3031+ https_port = "disabled"
3032+
3033+ channel = """[service]
3034+base: %s
3035+http_port: %s
3036+https_port: %s
3037+channel: %s
3038+device: %s
3039+build_number: %s
3040+""" % (config.public_fqdn, http_port, https_port,
3041+ channel, device, version.strip())
3042+
3043+ if channel_target:
3044+ channel += "channel_target: %s\n" % channel_target
3045+
3046+ if version_detail:
3047+ channel += "version_detail: %s\n" % version_detail
3048+
3049+ channel_file = tarfile.TarInfo()
3050+ channel_file.size = len(channel)
3051+ channel_file.mtime = int(time.strftime("%s", time.localtime()))
3052+ channel_file.name = channel_path
3053+
3054+ tarball.addfile(channel_file, BytesIO(channel.encode('utf-8')))
3055+
3056+ tarball.close()
3057+
3058+
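To make the output concrete, a hypothetical configuration would produce a
channel.ini along these lines (every value below is an example only):

    [service]
    base: system-image.example.com
    http_port: 80
    https_port: 443
    channel: ubuntu-touch/devel
    device: mako
    build_number: 42
    version_detail: ubuntu=20141010,device=20141009,version=42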
3059+def gzip_compress(path, destination=None, level=9):
3060+ """
3061+ Compress a file (path) using gzip.
3062+ By default, creates a .gz version of the file in the same directory.
3063+ An alternate destination path may be provided.
3064+ The compression level is 9 by default but can be overridden.
3065+ """
3066+
3067+ if not destination:
3068+ destination = "%s.gz" % path
3069+
3070+ if os.path.exists(destination):
3071+ raise Exception("destination already exists.")
3072+
3073+ uncompressed = open(path, "rb")
3074+ compressed = gzip.open(destination, "wb+", level)
3075+ compressed.writelines(uncompressed)
3076+ compressed.close()
3077+ uncompressed.close()
3078+
3079+ return destination
3080+
3081+
3082+def gzip_uncompress(path, destination=None):
3083+ """
3084+ Uncompress a file (path) using gzip.
3085+ By default, uses the source path without the .gz suffix as the target.
3086+ An alternate destination path may be provided.
3087+ """
3088+
3089+ if not destination and path[-3:] != ".gz":
3090+ raise Exception("unspecified destination and path doesn't end"
3091+ " with .gz")
3092+
3093+ if not destination:
3094+ destination = path[:-3]
3095+
3096+ if os.path.exists(destination):
3097+ raise Exception("destination already exists.")
3098+
3099+ compressed = gzip.open(path, "rb")
3100+ uncompressed = open(destination, "wb+")
3101+ uncompressed.writelines(compressed)
3102+ uncompressed.close()
3103+ compressed.close()
3104+
3105+ return destination
3106+
3107+
3108+def xz_compress(path, destination=None, level=9):
3109+ """
3110+ Compress a file (path) using xz.
3111+ By default, creates a .xz version of the file in the same directory.
3112+ An alternate destination path may be provided.
3113+ The compression level is 9 by default but can be overridden.
3114+ """
3115+
3116+ # NOTE: Once we can drop support for < 3.3, the new lzma module can be used
3117+
3118+ if not destination:
3119+ destination = "%s.xz" % path
3120+
3121+ if os.path.exists(destination):
3122+ raise Exception("destination already exists.")
3123+
3124+ if find_on_path("pxz"):
3125+ xz_command = "pxz"
3126+ else:
3127+ xz_command = "xz"
3128+
3129+ with open(destination, "wb+") as fd:
3130+ retval = subprocess.call([xz_command, '-z', '-%s' % level, '-c', path],
3131+ stdout=fd)
3132+ return retval
3133+
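Usage sketch; the helper shells out to pxz when available (falling back to
xz), refuses to overwrite an existing destination, and returns the
compressor's exit code (file names made up):

    retval = xz_compress("/tmp/target.tar")  # writes /tmp/target.tar.xz
    assert retval == 0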
3134+
3135+def xz_uncompress(path, destination=None):
3136+ """
3137+ Uncompress a file (path) using xz.
3138+ By default, uses the source path without the .xz suffix as the target.
3139+ An alternate destination path may be provided.
3140+ """
3141+
3142+ # NOTE: Once we can drop support for < 3.3, the new lzma module can be used
3143+
3144+ if not destination and path[-3:] != ".xz":
3145+ raise Exception("unspecified destination and path doesn't end"
3146+ " with .xz")
3147+
3148+ if not destination:
3149+ destination = path[:-3]
3150+
3151+ if os.path.exists(destination):
3152+ raise Exception("destination already exists.")
3153+
3154+ with open(destination, "wb+") as fd:
3155+ retval = subprocess.call(['xz', '-d', '-c', path],
3156+ stdout=fd)
3157+
3158+ return retval
3159+
3160+
3161+def trigger_mirror(host, port, username, key, command):
3162+ return subprocess.call(['ssh',
3163+ '-i', key,
3164+ '-l', username,
3165+ '-p', str(port),
3166+ host,
3167+ command])
3168+
3169+
3170+def sync_mirrors(config):
3171+ for mirror in sorted(config.mirrors.values(),
3172+ key=lambda mirror: mirror.ssh_host):
3173+ trigger_mirror(mirror.ssh_host, mirror.ssh_port, mirror.ssh_user,
3174+ mirror.ssh_key, mirror.ssh_command)
3175+
3176+
3177+def repack_recovery_keyring(conf, path, keyring_name):
3178+ tempdir = tempfile.mkdtemp()
3179+
3180+ xz_uncompress(path, os.path.join(tempdir, "input.tar"))
3181+
3182+ input_tarball = tarfile.open(os.path.join(tempdir, "input.tar"), "r:")
3183+
3184+ # Make sure the partition is in there
3185+ if "partitions/recovery.img" not in input_tarball.getnames():
3186+ shutil.rmtree(tempdir)
3187+ return False
3188+
3189+ input_tarball.extract("partitions/recovery.img", tempdir)
3190+
3191+ # Extract the content of the .img
3192+ os.mkdir(os.path.join(tempdir, "img"))
3193+ old_pwd = os.getcwd()
3194+ os.chdir(os.path.join(tempdir, "img"))
3195+ cmd = ["abootimg",
3196+ "-x", os.path.join(tempdir, "partitions", "recovery.img")]
3197+
3198+ with open(os.path.devnull, "w") as devnull:
3199+ subprocess.call(cmd, stdout=devnull, stderr=devnull)
3200+
3201+ os.chdir(old_pwd)
3202+
3203+ # Extract the content of the initrd
3204+ os.mkdir(os.path.join(tempdir, "initrd"))
3205+ state_path = os.path.join(tempdir, "fakeroot_state")
3206+ old_pwd = os.getcwd()
3207+ os.chdir(os.path.join(tempdir, "initrd"))
3208+
3209+ gzip_uncompress(os.path.join(tempdir, "img", "initrd.img"),
3210+ os.path.join(tempdir, "img", "initrd"))
3211+
3212+ with open(os.path.join(tempdir, "img", "initrd"), "rb") as fd:
3213+ with open(os.path.devnull, "w") as devnull:
3214+ subprocess.call(['fakeroot', '-s', state_path, 'cpio', '-i'],
3215+ stdin=fd, stdout=devnull, stderr=devnull)
3216+
3217+ os.chdir(old_pwd)
3218+
3219+ # Swap the files
3220+ keyring_path = os.path.join(conf.gpg_keyring_path, "archive-master")
3221+
3222+ shutil.copy("%s.tar.xz" % keyring_path,
3223+ os.path.join(tempdir, "initrd", "etc", "system-image",
3224+ "archive-master.tar.xz"))
3225+
3226+ shutil.copy("%s.tar.xz.asc" % keyring_path,
3227+ os.path.join(tempdir, "initrd", "etc", "system-image",
3228+ "archive-master.tar.xz.asc"))
3229+
3230+ # Re-generate the initrd
3231+ old_pwd = os.getcwd()
3232+ os.chdir(os.path.join(tempdir, "initrd"))
3233+
3234+ find = subprocess.Popen(["find", "."], stdout=subprocess.PIPE)
3235+ with open(os.path.join(tempdir, "img", "initrd"), "w+") as fd:
3236+ with open(os.path.devnull, "w") as devnull:
3237+ subprocess.call(['fakeroot', '-i', state_path, 'cpio',
3238+ '-o', '--format=newc'],
3239+ stdin=find.stdout,
3240+ stdout=fd,
3241+ stderr=devnull)
3242+
3243+ os.chdir(old_pwd)
3244+
3245+ os.rename(os.path.join(tempdir, "img", "initrd.img"),
3246+ os.path.join(tempdir, "img", "initrd.img.bak"))
3247+ gzip_compress(os.path.join(tempdir, "img", "initrd"),
3248+ os.path.join(tempdir, "img", "initrd.img"))
3249+
3250+ # Rewrite bootimg.cfg
3251+ content = ""
3252+ with open(os.path.join(tempdir, "img", "bootimg.cfg"), "r") as source:
3253+ for line in source:
3254+ if line.startswith("bootsize"):
3255+ line = "bootsize=0x900000\n"
3256+ content += line
3257+
3258+ with open(os.path.join(tempdir, "img", "bootimg.cfg"), "w+") as dest:
3259+ dest.write(content)
3260+
3261+ # Update the partition image with the modified bootimg.cfg
3262+ with open(os.path.devnull, "w") as devnull:
3263+ subprocess.call(['abootimg', '-u',
3264+ os.path.join(tempdir, "partitions", "recovery.img"),
3265+ "-f", os.path.join(tempdir, "img", "bootimg.cfg")],
3266+ stdout=devnull, stderr=devnull)
3267+
3268+ # Update the partition image with the repacked initrd
3269+ with open(os.path.devnull, "w") as devnull:
3270+ subprocess.call(['abootimg', '-u',
3271+ os.path.join(tempdir, "partitions", "recovery.img"),
3272+ "-r", os.path.join(tempdir, "img", "initrd.img")],
3273+ stdout=devnull, stderr=devnull)
3274+
3275+ # Generate a new tarball
3276+ output_tarball = tarfile.open(os.path.join(tempdir, "output.tar"), "w:")
3277+ for entry in input_tarball:
3278+ fileptr = None
3279+ if entry.isfile():
3280+ try:
3281+ if entry.name == "partitions/recovery.img":
3282+ with open(os.path.join(tempdir, "partitions",
3283+ "recovery.img"), "rb") as fd:
3284+ fileptr = BytesIO(fd.read())
3285+ entry.size = os.stat(
3286+ os.path.join(tempdir, "partitions",
3287+ "recovery.img")).st_size
3288+ else:
3289+ fileptr = input_tarball.extractfile(entry.name)
3290+ except KeyError: # pragma: no cover
3291+ pass
3292+
3293+ output_tarball.addfile(entry, fileobj=fileptr)
3294+ output_tarball.close()
3295+
3296+ os.remove(path)
3297+ xz_compress(os.path.join(tempdir, "output.tar"), path)
3298+
3299+ shutil.rmtree(tempdir)
3300+
3301+ return True
3302
3303=== added file 'lib/systemimage/tree.py'
3304--- lib/systemimage/tree.py 1970-01-01 00:00:00 +0000
3305+++ lib/systemimage/tree.py 2014-10-10 11:11:17 +0000
3306@@ -0,0 +1,999 @@
3307+# -*- coding: utf-8 -*-
3308+
3309+# Copyright (C) 2013 Canonical Ltd.
3310+# Author: Stéphane Graber <stgraber@ubuntu.com>
3311+
3312+# This program is free software: you can redistribute it and/or modify
3313+# it under the terms of the GNU General Public License as published by
3314+# the Free Software Foundation; version 3 of the License.
3315+#
3316+# This program is distributed in the hope that it will be useful,
3317+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3318+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3319+# GNU General Public License for more details.
3320+#
3321+# You should have received a copy of the GNU General Public License
3322+# along with this program. If not, see <http://www.gnu.org/licenses/>.
3323+
3324+import copy
3325+import json
3326+import os
3327+import shutil
3328+import time
3329+
3330+from contextlib import contextmanager
3331+from hashlib import sha256
3332+from systemimage import gpg, tools
3333+
3334+
3335+# Context managers
3336+@contextmanager
3337+def channels_json(config, path, commit=False):
3338+ """
3339+ Context function (to be used with "with") that will open a
3340+ channels.json file, parse it, validate it and return the
3341+ decoded version.
3342+
3343+ If commit is True, the file will then be updated (or created) on
3344+ exit.
3345+ """
3346+
3347+ # If the file doesn't exist, just yield an empty dict
3348+ json_content = {}
3349+ if os.path.exists(path):
3350+ with open(path, "r") as fd:
3351+ content = fd.read()
3352+ if content:
3353+ json_content = json.loads(content)
3354+
3355+ # Validation
3356+ if not isinstance(json_content, dict):
3357+ raise TypeError("Invalid channels.json, not a dict.")
3358+
3359+ if commit:
3360+ orig_json_content = copy.deepcopy(json_content)
3361+
3362+ # Yield the decoded value and save on exit
3363+ try:
3364+ yield json_content
3365+ finally:
3366+ if commit and (orig_json_content != json_content or
3367+ not os.path.exists(path)):
3368+ new_path = "%s.new" % path
3369+ with open(new_path, "w+") as fd:
3370+ fd.write("%s\n" % json.dumps(json_content, sort_keys=True,
3371+ indent=4, separators=(',', ': ')))
3372+
3373+ # Move the signature
3374+ gpg.sign_file(config, "image-signing", new_path)
3375+ if os.path.exists("%s.asc" % path):
3376+ os.remove("%s.asc" % path)
3377+ os.rename("%s.asc" % new_path, "%s.asc" % path)
3378+
3379+            # Move the new channels.json into place
3380+ if os.path.exists(path):
3381+ os.remove(path)
3382+ os.rename(new_path, path)
3383+
3384+
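# A minimal usage sketch for the context manager above (the config path and
# the "devel" channel are illustrative, not from this branch). With
# commit=True the dict is re-serialized on exit, signed with the
# image-signing key and atomically swapped into place with its .asc:

import os

from systemimage.config import Config
from systemimage.tree import channels_json

conf = Config("etc/config")
with channels_json(conf, os.path.join(conf.publish_path, "channels.json"),
                   commit=True) as channels:
    channels.setdefault("devel", {"devices": {}})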
3385+@contextmanager
3386+def index_json(config, path, commit=False):
3387+ """
3388+    Context manager (to be used with "with") that opens an
3389+    index.json file, parses and validates it, and yields the
3390+    decoded content.
3391+
3392+ If commit is True, the file will then be updated (or created) on
3393+ exit.
3394+ """
3395+
3396+    # If the file doesn't exist, just yield an empty index skeleton
3397+ json_content = {}
3398+ json_content['global'] = {}
3399+ json_content['images'] = []
3400+
3401+ if os.path.exists(path):
3402+ with open(path, "r") as fd:
3403+ content = fd.read()
3404+ if content:
3405+ json_content = json.loads(content)
3406+
3407+ # Validation
3408+ if not isinstance(json_content, dict):
3409+ raise TypeError("Invalid index.json, not a dict.")
3410+
3411+ if commit:
3412+ orig_json_content = copy.deepcopy(json_content)
3413+
3414+ # Yield the decoded value and save on exit
3415+ try:
3416+ yield json_content
3417+ finally:
3418+        # Drop attributes that are only valid on the latest image
3419+ versions = sorted({image['version']
3420+ for image in json_content['images']})
3421+ if versions:
3422+ last_version = versions[-1]
3423+
3424+ # Remove phased-percentage from any old image
3425+ for image in json_content['images']:
3426+ if image['version'] != last_version and \
3427+ "phased-percentage" in image:
3428+ image.pop("phased-percentage")
3429+
3430+ # Save to disk
3431+ if commit and (orig_json_content != json_content or
3432+ not os.path.exists(path)):
3433+ json_content['global']['generated_at'] = time.strftime(
3434+ "%a %b %d %H:%M:%S UTC %Y", time.gmtime())
3435+
3436+ new_path = "%s.new" % path
3437+ with open(new_path, "w+") as fd:
3438+ fd.write("%s\n" % json.dumps(json_content, sort_keys=True,
3439+ indent=4, separators=(',', ': ')))
3440+
3441+ # Move the signature
3442+ gpg.sign_file(config, "image-signing", new_path)
3443+ if os.path.exists("%s.asc" % path):
3444+ os.remove("%s.asc" % path)
3445+ os.rename("%s.asc" % new_path, "%s.asc" % path)
3446+
3447+ # Move the index
3448+ if os.path.exists(path):
3449+ os.remove(path)
3450+ os.rename(new_path, path)
3451+
3452+
3453+class Tree:
3454+ def __init__(self, config, path=None):
3455+ if not path:
3456+ path = config.publish_path
3457+
3458+ if not os.path.isdir(path):
3459+ raise Exception("Invalid path: %s" % path)
3460+
3461+ self.config = config
3462+ self.path = path
3463+ self.indexpath = os.path.join(path, "channels.json")
3464+
3465+ def __list_existing(self):
3466+ """
3467+ Returns a set of all files present in the tree and a set of
3468+ empty directories that can be removed.
3469+ """
3470+
3471+ existing_files = set()
3472+ empty_dirs = set()
3473+
3474+ for dirpath, dirnames, filenames in os.walk(self.path):
3475+ if dirpath == os.path.join(self.path, "gpg"):
3476+ continue
3477+
3478+ if not filenames and not dirnames:
3479+ empty_dirs.add(dirpath)
3480+
3481+ for entry in filenames:
3482+ existing_files.add(os.path.join(dirpath, entry))
3483+
3484+ return (existing_files, empty_dirs)
3485+
3486+ def __list_referenced(self):
3487+ """
3488+ Returns a set of all files that are referenced by the
3489+ various indexes and should be present in the tree.
3490+ """
3491+
3492+ listed_files = set()
3493+ listed_files.add(os.path.join(self.path, "channels.json"))
3494+ listed_files.add(os.path.join(self.path, "channels.json.asc"))
3495+
3496+ for channel, metadata in self.list_channels().items():
3497+ devices = metadata['devices']
3498+ for device in devices:
3499+ if 'keyring' in devices[device]:
3500+ listed_files.add(os.path.join(
3501+ self.path, devices[device]['keyring']['path'][1:]))
3502+ listed_files.add(os.path.join(
3503+ self.path,
3504+ devices[device]['keyring']['signature'][1:]))
3505+
3506+ device_entry = self.get_device(channel, device)
3507+
3508+ listed_files.add(os.path.join(device_entry.path, "index.json"))
3509+ listed_files.add(os.path.join(device_entry.path,
3510+ "index.json.asc"))
3511+
3512+ for image in device_entry.list_images():
3513+ for entry in image['files']:
3514+ listed_files.add(os.path.join(self.path,
3515+ entry['path'][1:]))
3516+ listed_files.add(os.path.join(self.path,
3517+ entry['signature'][1:]))
3518+
3519+ return listed_files
3520+
3521+ def change_channel_alias(self, channel_name, target_name):
3522+ """
3523+ Change the target of an alias.
3524+ """
3525+
3526+ with channels_json(self.config, self.indexpath) as channels:
3527+ if channel_name not in channels:
3528+ raise KeyError("Couldn't find channel: %s" % channel_name)
3529+
3530+ if "alias" not in channels[channel_name] or \
3531+ channels[channel_name]['alias'] == channel_name:
3532+ raise KeyError("Channel isn't an alias: %s" % channel_name)
3533+
3534+ if target_name not in channels:
3535+ raise KeyError("Couldn't find target channel: %s" %
3536+ target_name)
3537+
3538+ self.remove_channel(channel_name)
3539+ self.create_channel_alias(channel_name, target_name)
3540+
3541+ return True
3542+
3543+ def cleanup_tree(self):
3544+ """
3545+ Remove any orphaned file from the tree.
3546+ """
3547+
3548+ for entry in self.list_orphaned_files():
3549+ if os.path.isdir(entry):
3550+ os.rmdir(entry)
3551+ else:
3552+ os.remove(entry)
3553+
3554+ return True
3555+
3556+ def create_channel(self, channel_name):
3557+ """
3558+ Creates a new channel entry in the tree.
3559+ """
3560+
3561+ with channels_json(self.config, self.indexpath, True) as channels:
3562+ if channel_name in channels:
3563+ raise KeyError("Channel already exists: %s" % channel_name)
3564+
3565+ channels[channel_name] = {'devices': {}}
3566+
3567+ return True
3568+
3569+ def create_channel_alias(self, channel_name, target_name):
3570+ """
3571+ Creates a new channel as an alias for an existing one.
3572+ """
3573+
3574+ with channels_json(self.config, self.indexpath, True) as channels:
3575+ if channel_name in channels:
3576+ raise KeyError("Channel already exists: %s" % channel_name)
3577+
3578+ if target_name not in channels:
3579+ raise KeyError("Couldn't find target channel: %s" %
3580+ target_name)
3581+
3582+ channels[channel_name] = {'devices': {},
3583+ 'alias': target_name}
3584+
3585+ return self.sync_alias(channel_name)
3586+
3587+ def create_channel_redirect(self, channel_name, target_name):
3588+ """
3589+ Creates a new channel redirect.
3590+ """
3591+
3592+ with channels_json(self.config, self.indexpath, True) as channels:
3593+ if channel_name in channels:
3594+ raise KeyError("Channel already exists: %s" % channel_name)
3595+
3596+ if target_name not in channels:
3597+ raise KeyError("Couldn't find target channel: %s" %
3598+ target_name)
3599+
3600+ channels[channel_name] = dict(channels[target_name])
3601+ channels[channel_name]['redirect'] = target_name
3602+
3603+ self.hide_channel(channel_name)
3604+
3605+ return True
3606+
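# A short sketch (channel names are illustrative) of how the three creation
# methods above differ, assuming a Tree built from a loaded Config:

from systemimage.config import Config
from systemimage.tree import Tree

tree = Tree(Config("etc/config"))
tree.create_channel("ubuntu-touch/devel")
# An alias gets real copies of the target's images, kept current through
# sync_alias()/sync_aliases():
tree.create_channel_alias("devel", "ubuntu-touch/devel")
# A redirect shares the target's device entries and is hidden by default:
tree.create_channel_redirect("devel-proposed", "ubuntu-touch/devel")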
3607+ def create_device(self, channel_name, device_name, keyring_path=None):
3608+ """
3609+ Creates a new device entry in the tree.
3610+ """
3611+
3612+ with channels_json(self.config, self.indexpath, True) as channels:
3613+ if channel_name not in channels:
3614+ raise KeyError("Couldn't find channel: %s" % channel_name)
3615+
3616+ if device_name in channels[channel_name]['devices']:
3617+ raise KeyError("Device already exists: %s" % device_name)
3618+
3619+ device_path = os.path.join(self.path, channel_name, device_name)
3620+ if not os.path.exists(device_path):
3621+ os.makedirs(device_path)
3622+
3623+            # Create an empty index if it doesn't exist; if it does,
3624+            # just validate it
3625+ with index_json(self.config, os.path.join(device_path,
3626+ "index.json"), True):
3627+ pass
3628+
3629+ device = {}
3630+ device['index'] = "/%s/%s/index.json" % (channel_name, device_name)
3631+
3632+ channels[channel_name]['devices'][device_name] = device
3633+
3634+ if keyring_path:
3635+ self.set_device_keyring(channel_name, device_name, keyring_path)
3636+
3637+ self.sync_aliases(channel_name)
3638+ self.sync_redirects(channel_name)
3639+
3640+ return True
3641+
3642+ def generate_index(self, magic=False):
3643+ """
3644+ Re-generate the channels.json file based on the current content of
3645+ the tree.
3646+
3647+ This function is only present for emergency purposes and will
3648+        This function is only present for emergency purposes and will
3649+        completely rebuild the tree based on what's on the filesystem,
3650+        looking into some well-known locations to guess things like device
3651+        keyring paths.
3652+
3653+        Call this function with magic="I know what I'm doing" to actually
3654+        trigger it.
3655+
3656+ if magic != "I know what I'm doing":
3657+ raise Exception("Invalid magic value, please read the help.")
3658+
3659+ if os.path.exists(self.indexpath):
3660+ os.remove(self.indexpath)
3661+
3662+ for channel_name in [entry for entry in os.listdir(self.path)
3663+ if os.path.isdir(os.path.join(self.path,
3664+ entry))
3665+ and entry not in ('gpg',)]:
3666+ self.create_channel(channel_name)
3667+
3668+ for device_name in os.listdir(os.path.join(self.path,
3669+ channel_name)):
3670+
3671+ path = os.path.join(self.path, channel_name, device_name)
3672+ if not os.path.exists(os.path.join(path, "index.json")):
3673+ continue
3674+
3675+ keyring_path = os.path.join(path, "device.tar.xz")
3676+ if (os.path.exists(keyring_path)
3677+ and os.path.exists("%s.asc" % keyring_path)):
3678+ self.create_device(channel_name, device_name, keyring_path)
3679+ else:
3680+ self.create_device(channel_name, device_name)
3681+
3682+ return True
3683+
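# Sketch of the safety valve above, reusing `tree` from the earlier
# example: calling generate_index() without the exact magic string raises,
# so a full rebuild has to be spelled out deliberately.

tree.generate_index("I know what I'm doing")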
3684+ def get_device(self, channel_name, device_name):
3685+ """
3686+ Returns a Device instance.
3687+ """
3688+
3689+ with channels_json(self.config, self.indexpath) as channels:
3690+ if channel_name not in channels:
3691+ raise KeyError("Couldn't find channel: %s" % channel_name)
3692+
3693+ if device_name not in channels[channel_name]['devices']:
3694+ raise KeyError("Couldn't find device: %s" % device_name)
3695+
3696+ device_path = os.path.dirname(channels[channel_name]['devices']
3697+ [device_name]['index'])
3698+
3699+ return Device(self.config, os.path.normpath("%s/%s" % (self.path,
3700+ device_path)))
3701+
3702+ def hide_channel(self, channel_name):
3703+ """
3704+ Hide a channel from the client's list.
3705+ """
3706+
3707+ with channels_json(self.config, self.indexpath, True) as channels:
3708+ if channel_name not in channels:
3709+ raise KeyError("Couldn't find channel: %s" % channel_name)
3710+
3711+ channels[channel_name]['hidden'] = True
3712+
3713+ return True
3714+
3715+ def list_channels(self):
3716+ """
3717+        Returns a dict of all existing channels and of the devices within
3718+        each of them.
3719+        This is simply a decoded version of channels.json.
3720+ """
3721+
3722+ with channels_json(self.config, self.indexpath) as channels:
3723+ return channels
3724+
3725+ def list_missing_files(self):
3726+ """
3727+ Returns a list of absolute paths that should exist but aren't
3728+ present on the filesystem.
3729+ """
3730+
3731+ all_files, empty_dirs = self.__list_existing()
3732+ referenced_files = self.__list_referenced()
3733+
3734+ return sorted(referenced_files - all_files)
3735+
3736+ def list_orphaned_files(self):
3737+ """
3738+ Returns a list of absolute paths to files that are present in the
3739+ tree but aren't referenced anywhere.
3740+ """
3741+
3742+ orphaned_files = set()
3743+
3744+ all_files, empty_dirs = self.__list_existing()
3745+ referenced_files = self.__list_referenced()
3746+
3747+ orphaned_files.update(all_files - referenced_files)
3748+ orphaned_files.update(empty_dirs)
3749+
3750+ for entry in list(orphaned_files):
3751+ if entry.endswith(".json"):
3752+ tarname = entry.replace(".json", ".tar.xz")
3753+ if tarname in referenced_files:
3754+ orphaned_files.remove(entry)
3755+
3756+ if entry.endswith(".json.asc"):
3757+ tarname = entry.replace(".json.asc", ".tar.xz")
3758+ if tarname in referenced_files:
3759+ orphaned_files.remove(entry)
3760+
3761+ return sorted(orphaned_files)
3762+
3763+ def publish_keyring(self, keyring_name):
3764+ """
3765+ Publish the keyring under gpg/
3766+ """
3767+
3768+ gpg_path = os.path.join(self.config.publish_path, "gpg")
3769+
3770+ if not os.path.exists(gpg_path):
3771+ os.mkdir(gpg_path)
3772+
3773+ keyring_path = os.path.join(self.config.gpg_keyring_path, keyring_name)
3774+
3775+ if not os.path.exists("%s.tar.xz" % keyring_path):
3776+ raise Exception("Missing keyring: %s.tar.xz" % keyring_path)
3777+
3778+ if not os.path.exists("%s.tar.xz.asc" % keyring_path):
3779+ raise Exception("Missing keyring signature: %s.tar.xz.asc" %
3780+ keyring_path)
3781+
3782+ shutil.copy("%s.tar.xz" % keyring_path, gpg_path)
3783+ shutil.copy("%s.tar.xz.asc" % keyring_path, gpg_path)
3784+
3785+ return True
3786+
3787+ def remove_channel(self, channel_name):
3788+ """
3789+ Remove a channel and everything it contains.
3790+ """
3791+
3792+ with channels_json(self.config, self.indexpath, True) as channels:
3793+ if channel_name not in channels:
3794+ raise KeyError("Couldn't find channel: %s" % channel_name)
3795+
3796+ channel_path = os.path.join(self.path, channel_name)
3797+ if os.path.exists(channel_path) and \
3798+ "alias" not in channels[channel_name] and \
3799+ "redirect" not in channels[channel_name]:
3800+ shutil.rmtree(channel_path)
3801+ channels.pop(channel_name)
3802+
3803+ return True
3804+
3805+ def remove_device(self, channel_name, device_name):
3806+ """
3807+ Remove a device and everything it contains.
3808+ """
3809+
3810+ with channels_json(self.config, self.indexpath, True) as channels:
3811+ if channel_name not in channels:
3812+ raise KeyError("Couldn't find channel: %s" % channel_name)
3813+
3814+ if device_name not in channels[channel_name]['devices']:
3815+ raise KeyError("Couldn't find device: %s" % device_name)
3816+
3817+ device_path = os.path.join(self.path, channel_name, device_name)
3818+ if os.path.exists(device_path):
3819+ shutil.rmtree(device_path)
3820+ channels[channel_name]['devices'].pop(device_name)
3821+
3822+ self.sync_aliases(channel_name)
3823+ self.sync_redirects(channel_name)
3824+
3825+ return True
3826+
3827+ def rename_channel(self, old_name, new_name):
3828+ """
3829+ Rename a channel.
3830+ """
3831+
3832+ with channels_json(self.config, self.indexpath, True) as channels:
3833+ if old_name not in channels:
3834+ raise KeyError("Couldn't find channel: %s" % old_name)
3835+
3836+ if new_name in channels:
3837+ raise KeyError("Channel already exists: %s" % new_name)
3838+
3839+ old_channel_path = os.path.join(self.path, old_name)
3840+ new_channel_path = os.path.join(self.path, new_name)
3841+ if "redirect" not in channels[old_name]:
3842+ if os.path.exists(new_channel_path):
3843+ raise Exception("Channel path already exists: %s" %
3844+ new_channel_path)
3845+
3846+ if not os.path.exists(os.path.dirname(new_channel_path)):
3847+ os.makedirs(os.path.dirname(new_channel_path))
3848+ if os.path.exists(old_channel_path):
3849+ os.rename(old_channel_path, new_channel_path)
3850+
3851+ channels[new_name] = dict(channels[old_name])
3852+
3853+ if "redirect" not in channels[new_name]:
3854+ for device_name in channels[new_name]['devices']:
3855+ index_path = "/%s/%s/index.json" % (new_name, device_name)
3856+ channels[new_name]['devices'][device_name]['index'] = \
3857+ index_path
3858+
3859+ with index_json(self.config, "%s/%s" %
3860+ (self.path, index_path), True) as index:
3861+ for image in index['images']:
3862+ for entry in image['files']:
3863+ entry['path'] = entry['path'] \
3864+ .replace("/%s/" % old_name,
3865+ "/%s/" % new_name)
3866+ entry['signature'] = entry['signature'] \
3867+ .replace("/%s/" % old_name,
3868+ "/%s/" % new_name)
3869+
3870+ channels.pop(old_name)
3871+
3872+ return True
3873+
3874+ def show_channel(self, channel_name):
3875+ """
3876+        Unhide a channel so that it appears in the client's list again.
3877+ """
3878+
3879+ with channels_json(self.config, self.indexpath, True) as channels:
3880+ if channel_name not in channels:
3881+ raise KeyError("Couldn't find channel: %s" % channel_name)
3882+
3883+ if "hidden" in channels[channel_name]:
3884+ channels[channel_name].pop("hidden")
3885+
3886+ return True
3887+
3888+ def set_device_keyring(self, channel_name, device_name, path):
3889+ """
3890+ Update the keyring entry for the given channel and device.
3891+ Passing None as the path will unset any existing value.
3892+ """
3893+
3894+ with channels_json(self.config, self.indexpath, True) as channels:
3895+ if channel_name not in channels:
3896+ raise KeyError("Couldn't find channel: %s" % channel_name)
3897+
3898+ if device_name not in channels[channel_name]['devices']:
3899+ raise KeyError("Couldn't find device: %s" % device_name)
3900+
3901+ abspath, relpath = tools.expand_path(path, self.path)
3902+
3903+ if not os.path.exists(abspath):
3904+            raise Exception("Specified GPG keyring doesn't exist: %s" %
3905+ abspath)
3906+
3907+ if not os.path.exists("%s.asc" % abspath):
3908+            raise Exception("The GPG keyring signature doesn't exist: "
3909+ "%s.asc" % abspath)
3910+
3911+ keyring = {}
3912+ keyring['path'] = "/%s" % "/".join(relpath.split(os.sep))
3913+ keyring['signature'] = "/%s.asc" % "/".join(relpath.split(os.sep))
3914+
3915+ channels[channel_name]['devices'][device_name]['keyring'] = keyring
3916+
3917+ return True
3918+
3919+ def sync_alias(self, channel_name):
3920+ """
3921+        Update an alias channel with data from its target.
3922+ """
3923+
3924+ with channels_json(self.config, self.indexpath) as channels:
3925+ if channel_name not in channels:
3926+ raise KeyError("Couldn't find channel: %s" % channel_name)
3927+
3928+ if "alias" not in channels[channel_name] or \
3929+ channels[channel_name]['alias'] == channel_name:
3930+ raise TypeError("Not a channel alias")
3931+
3932+ target_name = channels[channel_name]['alias']
3933+
3934+ if target_name not in channels:
3935+ raise KeyError("Couldn't find target channel: %s" %
3936+ target_name)
3937+
3938+ # Start by looking for added/removed devices
3939+ devices = set(channels[channel_name]['devices'].keys())
3940+ target_devices = set(channels[target_name]['devices'].keys())
3941+
3942+ # # Remove any removed device
3943+ for device in devices - target_devices:
3944+ self.remove_device(channel_name, device)
3945+
3946+ # # Add any missing device
3947+ for device in target_devices - devices:
3948+ self.create_device(channel_name, device)
3949+
3950+ # Iterate through all the devices to import builds
3951+ for device_name in target_devices:
3952+ device = self.get_device(channel_name, device_name)
3953+ target_device = self.get_device(target_name, device_name)
3954+
3955+ # Extract all the current builds
3956+ device_images = {(image['version'], image.get('base', None),
3957+ image['type'])
3958+ for image in device.list_images()}
3959+
3960+ target_images = {(image['version'], image.get('base', None),
3961+ image['type'])
3962+ for image in target_device.list_images()}
3963+
3964+ # Remove any removed image
3965+ for image in device_images - target_images:
3966+ device.remove_image(image[2], image[0], base=image[1])
3967+
3968+ # Create the path if it doesn't exist
3969+ if not os.path.exists(device.path):
3970+ os.makedirs(device.path)
3971+
3972+ # Add any missing image
3973+ with index_json(self.config, device.indexpath, True) as index:
3974+ for image in sorted(target_images - device_images):
3975+ orig = [entry for entry in target_device.list_images()
3976+ if entry['type'] == image[2] and
3977+ entry['version'] == image[0] and
3978+ entry.get('base', None) == image[1]]
3979+
3980+ entry = copy.deepcopy(orig[0])
3981+
3982+ # Remove the current version tarball
3983+ version_detail = None
3984+ version_index = len(entry['files'])
3985+ for fentry in entry['files']:
3986+ if fentry['path'].endswith("version-%s.tar.xz" %
3987+ entry['version']):
3988+
3989+ version_path = "%s/%s" % (
3990+ self.config.publish_path, fentry['path'])
3991+
3992+ if os.path.exists(
3993+ version_path.replace(".tar.xz",
3994+ ".json")):
3995+ with open(
3996+ version_path.replace(
3997+ ".tar.xz", ".json")) as fd:
3998+ metadata = json.loads(fd.read())
3999+ if "channel.ini" in metadata:
4000+ version_detail = \
4001+ metadata['channel.ini'].get(
4002+ "version_detail", None)
4003+
4004+ version_index = fentry['order']
4005+ entry['files'].remove(fentry)
4006+ break
4007+
4008+ # Generate a new one
4009+ path = os.path.join(device.path,
4010+ "version-%s.tar.xz" %
4011+ entry['version'])
4012+ abspath, relpath = tools.expand_path(path,
4013+ device.pub_path)
4014+ if not os.path.exists(abspath):
4015+ tools.generate_version_tarball(
4016+ self.config, channel_name, device_name,
4017+ str(entry['version']),
4018+ abspath.replace(".xz", ""),
4019+ version_detail=version_detail,
4020+ channel_target=target_name)
4021+ tools.xz_compress(abspath.replace(".xz", ""))
4022+ os.remove(abspath.replace(".xz", ""))
4023+ gpg.sign_file(self.config, "image-signing",
4024+ abspath)
4025+
4026+ with open(abspath, "rb") as fd:
4027+ checksum = sha256(fd.read()).hexdigest()
4028+
4029+ # Generate the new file entry
4030+ version = {}
4031+ version['order'] = version_index
4032+ version['path'] = "/%s" % "/".join(
4033+ relpath.split(os.sep))
4034+ version['signature'] = "/%s.asc" % "/".join(
4035+ relpath.split(os.sep))
4036+ version['checksum'] = checksum
4037+ version['size'] = int(os.stat(abspath).st_size)
4038+
4039+ # And add it
4040+ entry['files'].append(version)
4041+ index['images'].append(entry)
4042+
4043+ # Sync phased-percentage
4044+ versions = sorted({entry[0] for entry in target_images})
4045+ if versions:
4046+ device.set_phased_percentage(
4047+ versions[-1],
4048+ target_device.get_phased_percentage(versions[-1]))
4049+
4050+ return True
4051+
4052+ def sync_aliases(self, channel_name):
4053+ """
4054+ Update any channel that's an alias of the current one.
4055+ """
4056+
4057+ with channels_json(self.config, self.indexpath) as channels:
4058+ if channel_name not in channels:
4059+ raise KeyError("Couldn't find channel: %s" % channel_name)
4060+
4061+ alias_channels = [name
4062+ for name, channel
4063+ in self.list_channels().items()
4064+ if channel.get("alias", None) == channel_name
4065+ and name != channel_name]
4066+
4067+ for alias_name in alias_channels:
4068+ self.sync_alias(alias_name)
4069+
4070+ return True
4071+
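# Sketch, with the channel names from the earlier examples: once new images
# land in a target channel, a single call brings every alias back in line,
# copying image entries and regenerating each alias's version tarball.

tree.sync_aliases("ubuntu-touch/devel")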
4072+ def sync_redirects(self, channel_name):
4073+ """
4074+        Update any channel that's a redirect of the current one.
4075+ """
4076+
4077+ with channels_json(self.config, self.indexpath) as channels:
4078+ if channel_name not in channels:
4079+ raise KeyError("Couldn't find channel: %s" % channel_name)
4080+
4081+ redirect_channels = [name
4082+ for name, channel
4083+ in self.list_channels().items()
4084+ if channel.get("redirect", None) == channel_name]
4085+
4086+ for redirect_name in redirect_channels:
4087+ self.remove_channel(redirect_name)
4088+ self.create_channel_redirect(redirect_name, channel_name)
4089+
4090+ return True
4091+
4092+
4093+class Device:
4094+ def __init__(self, config, path):
4095+ self.config = config
4096+ self.pub_path = self.config.publish_path
4097+ self.path = path
4098+ self.indexpath = os.path.join(path, "index.json")
4099+
4100+ def create_image(self, entry_type, version, description, paths,
4101+ base=None, bootme=False, minversion=None):
4102+ """
4103+ Add a new image to the index.
4104+ """
4105+
4106+ if len(paths) == 0:
4107+ raise Exception("No file passed for this image.")
4108+
4109+ files = []
4110+ count = 0
4111+
4112+ with index_json(self.config, self.indexpath, True) as index:
4113+ for path in paths:
4114+ abspath, relpath = tools.expand_path(path, self.pub_path)
4115+
4116+ if not os.path.exists(abspath):
4117+                    raise Exception("Specified file doesn't exist: %s"
4118+ % abspath)
4119+
4120+ if not os.path.exists("%s.asc" % abspath):
4121+                    raise Exception("The GPG file signature doesn't exist: "
4122+ "%s.asc" % abspath)
4123+
4124+ with open(abspath, "rb") as fd:
4125+ checksum = sha256(fd.read()).hexdigest()
4126+
4127+ files.append({'order': count,
4128+ 'path': "/%s" % "/".join(relpath.split(os.sep)),
4129+ 'checksum': checksum,
4130+ 'signature': "/%s.asc" % "/".join(
4131+ relpath.split(os.sep)),
4132+ 'size': int(os.stat(abspath).st_size)})
4133+
4134+ count += 1
4135+
4136+ image = {}
4137+
4138+ if entry_type == "delta":
4139+ if not base:
4140+ raise KeyError("Missing base version for delta image.")
4141+ image['base'] = int(base)
4142+ elif base:
4143+ raise KeyError("Base version set for full image.")
4144+
4145+ if bootme:
4146+ image['bootme'] = bootme
4147+
4148+ if minversion:
4149+ if entry_type == "delta":
4150+ raise KeyError("Minimum version set for delta image.")
4151+ image['minversion'] = minversion
4152+
4153+ image['description'] = description
4154+ image['files'] = files
4155+ image['type'] = entry_type
4156+ image['version'] = version
4157+ index['images'].append(image)
4158+
4159+ return True
4160+
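# Sketch of publishing through create_image() (device name, paths and
# version numbers are hypothetical; every listed file must already have a
# matching .asc signature next to it):

device = tree.get_device("ubuntu-touch/devel", "mako")
device.create_image("full", 256, "Full image",
                    ["devel/mako/full-256.tar.xz",
                     "devel/mako/version-256.tar.xz"])
device.create_image("delta", 256, "Delta from 255",
                    ["devel/mako/delta-255-256.tar.xz"], base=255)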
4161+ def expire_images(self, max_images):
4162+ """
4163+ Expire images keeping the last <max_images> full images and
4164+ their deltas. Also remove any delta that has an expired image
4165+ as its base.
4166+ """
4167+
4168+ full_images = sorted([image for image in self.list_images()
4169+ if image['type'] == "full"],
4170+ key=lambda image: image['version'])
4171+
4172+ to_remove = len(full_images) - max_images
4173+ if to_remove <= 0:
4174+ return True
4175+
4176+ full_remove = full_images[:to_remove]
4177+ remove_version = [image['version'] for image in full_remove]
4178+
4179+ for image in self.list_images():
4180+ if image['type'] == "full":
4181+ if image['version'] in remove_version:
4182+ self.remove_image(image['type'], image['version'])
4183+ else:
4184+ if (image['version'] in remove_version
4185+ or image['base'] in remove_version):
4186+ self.remove_image(image['type'], image['version'],
4187+ image['base'])
4188+
4189+ return True
4190+
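# Worked example of the expiry rule above, reusing `device`: with full
# images 254, 255 and 256 plus deltas based on each, expire_images(2)
# keeps 255 and 256 (and their deltas) and removes full image 254 along
# with every delta whose base is 254.

device.expire_images(2)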
4191+ def get_image(self, entry_type, version, base=None):
4192+ """
4193+ Look for an image and return a dict representation of it.
4194+ """
4195+
4196+ if entry_type not in ("full", "delta"):
4197+ raise ValueError("Invalid image type: %s" % entry_type)
4198+
4199+ if entry_type == "delta" and not base:
4200+ raise ValueError("Missing base version for delta image.")
4201+
4202+ with index_json(self.config, self.indexpath) as index:
4203+ match = []
4204+ for image in index['images']:
4205+ if (image['type'] == entry_type and image['version'] == version
4206+ and (image['type'] == "full" or
4207+ image['base'] == base)):
4208+ match.append(image)
4209+
4210+ if len(match) != 1:
4211+ raise IndexError("Couldn't find a match.")
4212+
4213+ return match[0]
4214+
4215+ def get_phased_percentage(self, version):
4216+ """
4217+ Returns the phasing percentage for a given version.
4218+ """
4219+
4220+ for entry in self.list_images():
4221+ if entry['version'] == version:
4222+ if "phased-percentage" in entry:
4223+ return entry['phased-percentage']
4224+ else:
4225+ return 100
4226+ else:
4227+ raise IndexError("Invalid version number: %s" % version)
4228+
4229+ def list_images(self):
4230+ """
4231+        Returns a list of all existing images; each image is a dict.
4232+        This is simply a decoded version of the images array in index.json.
4233+ """
4234+
4235+ with index_json(self.config, self.indexpath) as index:
4236+ return index['images']
4237+
4238+ def remove_image(self, entry_type, version, base=None):
4239+ """
4240+ Remove an image.
4241+ """
4242+
4243+ image = self.get_image(entry_type, version, base)
4244+ with index_json(self.config, self.indexpath, True) as index:
4245+ index['images'].remove(image)
4246+
4247+ return True
4248+
4249+ def set_description(self, entry_type, version, description,
4250+ translations={}, base=None):
4251+ """
4252+ Set or update an image description.
4253+ """
4254+
4255+ if translations and not isinstance(translations, dict):
4256+ raise TypeError("translations must be a dict.")
4257+
4258+ image = self.get_image(entry_type, version, base)
4259+
4260+ with index_json(self.config, self.indexpath, True) as index:
4261+ for entry in index['images']:
4262+ if entry != image:
4263+ continue
4264+
4265+ entry['description'] = description
4266+ for langid, value in translations.items():
4267+ entry['description_%s' % langid] = value
4268+
4269+ break
4270+
4271+ return True
4272+
4273+ def set_phased_percentage(self, version, percentage):
4274+ """
4275+ Set the phasing percentage on an image version.
4276+ """
4277+
4278+ if not isinstance(percentage, int):
4279+ raise TypeError("percentage must be an integer.")
4280+
4281+ if percentage < 0 or percentage > 100:
4282+ raise ValueError("percentage must be >= 0 and <= 100.")
4283+
4284+ with index_json(self.config, self.indexpath, True) as index:
4285+ versions = sorted({entry['version'] for entry in index['images']})
4286+
4287+ last_version = None
4288+ if versions:
4289+ last_version = versions[-1]
4290+
4291+ if version not in versions:
4292+ raise IndexError("Version doesn't exist: %s" % version)
4293+
4294+ if version != last_version:
4295+ raise Exception("Phased percentage can only be set on the "
4296+ "latest image")
4297+
4298+ for entry in index['images']:
4299+ if entry['version'] == version:
4300+ if percentage == 100 and "phased-percentage" in entry:
4301+ entry.pop("phased-percentage")
4302+ elif percentage != 100:
4303+ entry['phased-percentage'] = percentage
4304+
4305+ return True
4306
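# Phased-rollout sketch for the Device API above (version numbers are
# illustrative, reusing `device` from the earlier examples). The percentage
# can only be set on the newest image, and 100 simply removes the marker:

device.set_phased_percentage(256, 20)            # expose to 20% of clients
assert device.get_phased_percentage(256) == 20
device.set_phased_percentage(256, 100)           # back to full rollout
assert device.get_phased_percentage(256) == 100  # marker removed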
4307=== added directory 'secret'
4308=== added directory 'secret/gpg'
4309=== added directory 'secret/gpg/keyrings'
4310=== added directory 'secret/gpg/keys'
4311=== added directory 'secret/ssh'
4312=== added directory 'state'
4313=== added directory 'tests'
4314=== added file 'tests/generate-keys'
4315--- tests/generate-keys 1970-01-01 00:00:00 +0000
4316+++ tests/generate-keys 2014-10-10 11:11:17 +0000
4317@@ -0,0 +1,52 @@
4318+#!/usr/bin/python
4319+# -*- coding: utf-8 -*-
4320+
4321+# Copyright (C) 2013 Canonical Ltd.
4322+# Author: Stéphane Graber <stgraber@ubuntu.com>
4323+
4324+# This program is free software: you can redistribute it and/or modify
4325+# it under the terms of the GNU General Public License as published by
4326+# the Free Software Foundation; version 3 of the License.
4327+#
4328+# This program is distributed in the hope that it will be useful,
4329+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4330+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4331+# GNU General Public License for more details.
4332+#
4333+# You should have received a copy of the GNU General Public License
4334+# along with this program. If not, see <http://www.gnu.org/licenses/>.
4335+
4336+import os
4337+import shutil
4338+
4339+import sys
4340+sys.path.insert(0, 'lib')
4341+
4342+from systemimage import gpg
4343+
4344+target_dir = "tests/keys/"
4345+if not os.path.exists(target_dir):
4346+ raise Exception("Missing tests/keys directory")
4347+
4348+keys = (("archive-master", "[TESTING] Ubuntu Archive Master Signing Key",
4349+ "ftpmaster@ubuntu.com", 0),
4350+ ("image-master", "[TESTING] Ubuntu System Image Master Signing Key",
4351+ "system-image@ubuntu.com", 0),
4352+ ("image-signing", "[TESTING] Ubuntu System Image Signing Key (YYYY)",
4353+ "system-image@ubuntu.com", "2y"),
4354+ ("device-signing", "[TESTING] Random OEM Signing Key (YYYY)",
4355+ "system-image@ubuntu.com", "2y"))
4356+
4357+for key_name, key_description, key_email, key_expiry in keys:
4358+ key_dir = "%s/%s/" % (target_dir, key_name)
4359+ if os.path.exists(key_dir):
4360+ shutil.rmtree(key_dir)
4361+ os.makedirs(key_dir)
4362+
4363+ uid = gpg.generate_signing_key(key_dir, key_description, key_email,
4364+ key_expiry)
4365+
4366+ print("%s <%s>" % (uid.name, uid.email))
4367+
4368+# All done, mark the test keys as generated
4369+open("tests/keys/generated", "w+").close()
4370
4371=== added directory 'tests/keys'
4372=== added file 'tests/run'
4373--- tests/run 1970-01-01 00:00:00 +0000
4374+++ tests/run 2014-10-10 11:11:17 +0000
4375@@ -0,0 +1,60 @@
4376+#!/usr/bin/python
4377+# -*- coding: utf-8 -*-
4378+
4379+# Copyright (C) 2013 Canonical Ltd.
4380+# Author: Stéphane Graber <stgraber@ubuntu.com>
4381+
4382+# This program is free software: you can redistribute it and/or modify
4383+# it under the terms of the GNU General Public License as published by
4384+# the Free Software Foundation; version 3 of the License.
4385+#
4386+# This program is distributed in the hope that it will be useful,
4387+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4388+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4389+# GNU General Public License for more details.
4390+#
4391+# You should have received a copy of the GNU General Public License
4392+# along with this program. If not, see <http://www.gnu.org/licenses/>.
4393+
4394+# Dependencies:
4395+# - python2 (>= 2.7): python-gpgme, python-coverage
4396+# - python3 (>= 3.2): python3-gpgme
4397+
4398+import glob
4399+import os
4400+import re
4401+import shutil
4402+import sys
4403+import unittest
4404+
4405+coverage = True
4406+try:
4407+ from coverage import coverage
4408+ cov = coverage()
4409+ cov.start()
4410+except ImportError:
4411+ print("No coverage report, make sure python-coverage is installed")
4412+ coverage = False
4413+
4414+sys.path.insert(0, 'lib')
4415+
4416+if len(sys.argv) > 1:
4417+ test_filter = sys.argv[1]
4418+else:
4419+ test_filter = ''
4420+
4421+tests = [t[:-3] for t in os.listdir('tests')
4422+ if t.startswith('test_') and t.endswith('.py') and
4423+ re.search(test_filter, t)]
4424+tests.sort()
4425+suite = unittest.TestLoader().loadTestsFromNames(tests)
4426+res = unittest.TextTestRunner(verbosity=2).run(suite)
4427+
4428+if coverage:
4429+ if os.path.exists('tests/coverage'):
4430+ shutil.rmtree('tests/coverage')
4431+ cov.stop()
4432+ cov.html_report(include=glob.glob("lib/systemimage/*.py"),
4433+ directory='tests/coverage')
4434+ print("")
4435+ cov.report(include=glob.glob("lib/systemimage/*.py"))
4436
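# Usage sketch for the test runner above: the optional first argument is a
# regular expression matched against the test file names, so from the
# branch root:
#   ./tests/run                -> run the whole suite
#   ./tests/run 'tree|tools'   -> run only tests/test_tree.py and
#                                 tests/test_tools.py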
4437=== added file 'tests/test_config.py'
4438--- tests/test_config.py 1970-01-01 00:00:00 +0000
4439+++ tests/test_config.py 2014-10-10 11:11:17 +0000
4440@@ -0,0 +1,281 @@
4441+# -*- coding: utf-8 -*-
4442+
4443+# Copyright (C) 2013 Canonical Ltd.
4444+# Author: Stéphane Graber <stgraber@ubuntu.com>
4445+
4446+# This program is free software: you can redistribute it and/or modify
4447+# it under the terms of the GNU General Public License as published by
4448+# the Free Software Foundation; version 3 of the License.
4449+#
4450+# This program is distributed in the hope that it will be useful,
4451+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4452+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4453+# GNU General Public License for more details.
4454+#
4455+# You should have received a copy of the GNU General Public License
4456+# along with this program. If not, see <http://www.gnu.org/licenses/>.
4457+
4458+import os
4459+import shutil
4460+import tempfile
4461+import unittest
4462+
4463+from systemimage import config
4464+from systemimage import tools
4465+
4466+try:
4467+ from unittest import mock
4468+except ImportError:
4469+ import mock
4470+
4471+
4472+class ConfigTests(unittest.TestCase):
4473+ def setUp(self):
4474+ temp_directory = tempfile.mkdtemp()
4475+ self.temp_directory = temp_directory
4476+
4477+ def tearDown(self):
4478+ shutil.rmtree(self.temp_directory)
4479+
4480+ @mock.patch("subprocess.call")
4481+ def test_config(self, mock_call):
4482+ # Good complete config
4483+ config_path = os.path.join(self.temp_directory, "config")
4484+ key_path = os.path.join(self.temp_directory, "key")
4485+
4486+ with open(config_path, "w+") as fd:
4487+ fd.write("""[global]
4488+base_path = %s
4489+mirrors = a, b
4490+
4491+[mirror_default]
4492+ssh_user = user
4493+ssh_key = key
4494+ssh_port = 22
4495+ssh_command = command
4496+
4497+[mirror_a]
4498+ssh_host = hosta
4499+
4500+[mirror_b]
4501+ssh_host = hostb
4502+""" % self.temp_directory)
4503+
4504+ conf = config.Config(config_path)
4505+
4506+ # Test ssh sync
4507+ tools.sync_mirrors(conf)
4508+ expected_calls = [((['ssh', '-i', key_path, '-l', 'user',
4509+ '-p', '22', 'hosta', 'command'],), {}),
4510+ ((['ssh', '-i', key_path, '-l', 'user',
4511+ '-p', '22', 'hostb', 'command'],), {})]
4512+ self.assertEquals(mock_call.call_args_list, expected_calls)
4513+
4514+ # Invalid config
4515+ invalid_config_path = os.path.join(self.temp_directory,
4516+ "invalid_config")
4517+ with open(invalid_config_path, "w+") as fd:
4518+ fd.write("""invalid""")
4519+
4520+ self.assertEquals(config.parse_config(invalid_config_path), {})
4521+
4522+ self.assertRaises(
4523+ Exception, config.Config, os.path.join(self.temp_directory,
4524+ "invalid"))
4525+
4526+ # Test loading config from default location
4527+ config_file = os.path.join(os.path.dirname(config.__file__),
4528+ "../../etc/config")
4529+
4530+ old_pwd = os.getcwd()
4531+ os.chdir(self.temp_directory)
4532+ if not os.path.exists(config_file):
4533+ self.assertRaises(Exception, config.Config)
4534+ else:
4535+ self.assertTrue(config.Config())
4536+ os.chdir(old_pwd)
4537+
4538+ # Empty config
4539+ empty_config_path = os.path.join(self.temp_directory,
4540+ "empty_config")
4541+ with open(empty_config_path, "w+") as fd:
4542+ fd.write("")
4543+
4544+ conf = config.Config(empty_config_path)
4545+ self.assertEquals(conf.base_path, os.getcwd())
4546+
4547+ # Single mirror config
4548+ single_mirror_config_path = os.path.join(self.temp_directory,
4549+ "single_mirror_config")
4550+ with open(single_mirror_config_path, "w+") as fd:
4551+ fd.write("""[global]
4552+mirrors = a
4553+
4554+[mirror_default]
4555+ssh_user = user
4556+ssh_key = key
4557+ssh_port = 22
4558+ssh_command = command
4559+
4560+[mirror_a]
4561+ssh_host = host
4562+""")
4563+
4564+ conf = config.Config(single_mirror_config_path)
4565+ self.assertEquals(conf.mirrors['a'].ssh_command, "command")
4566+
4567+ # Missing mirror_default
4568+ missing_default_config_path = os.path.join(self.temp_directory,
4569+ "missing_default_config")
4570+ with open(missing_default_config_path, "w+") as fd:
4571+ fd.write("""[global]
4572+mirrors = a
4573+
4574+[mirror_a]
4575+ssh_host = host
4576+""")
4577+
4578+ self.assertRaises(KeyError, config.Config, missing_default_config_path)
4579+
4580+ # Missing mirror key
4581+ missing_key_config_path = os.path.join(self.temp_directory,
4582+ "missing_key_config")
4583+ with open(missing_key_config_path, "w+") as fd:
4584+ fd.write("""[global]
4585+mirrors = a
4586+
4587+[mirror_default]
4588+ssh_user = user
4589+ssh_port = 22
4590+ssh_command = command
4591+
4592+[mirror_a]
4593+ssh_host = host
4594+""")
4595+
4596+ self.assertRaises(KeyError, config.Config, missing_key_config_path)
4597+
4598+ # Missing mirror
4599+ missing_mirror_config_path = os.path.join(self.temp_directory,
4600+ "missing_mirror_config")
4601+ with open(missing_mirror_config_path, "w+") as fd:
4602+ fd.write("""[global]
4603+mirrors = a
4604+
4605+[mirror_default]
4606+ssh_user = user
4607+ssh_port = 22
4608+ssh_command = command
4609+ssh_key = key
4610+""")
4611+
4612+ self.assertRaises(KeyError, config.Config, missing_mirror_config_path)
4613+
4614+ # Missing ssh_host
4615+ missing_host_config_path = os.path.join(self.temp_directory,
4616+ "missing_host_config")
4617+ with open(missing_host_config_path, "w+") as fd:
4618+ fd.write("""[global]
4619+mirrors = a
4620+
4621+[mirror_default]
4622+ssh_user = user
4623+ssh_port = 22
4624+ssh_command = command
4625+ssh_key = key
4626+
4627+[mirror_a]
4628+ssh_user = other-user
4629+""")
4630+
4631+ self.assertRaises(KeyError, config.Config, missing_host_config_path)
4632+
4633+ # Test with env path
4634+ test_path = os.path.join(self.temp_directory, "a", "b")
4635+ os.makedirs(os.path.join(test_path, "etc"))
4636+ with open(os.path.join(test_path, "etc", "config"), "w+") as fd:
4637+ fd.write("[global]\nbase_path = a/b/c")
4638+ os.environ['SYSTEM_IMAGE_ROOT'] = test_path
4639+ test_config = config.Config()
4640+ self.assertEquals(test_config.base_path, "a/b/c")
4641+
4642+ # Test the channels config
4643+ # # Multiple channels
4644+ channel_config_path = os.path.join(self.temp_directory,
4645+ "channel_config")
4646+ with open(channel_config_path, "w+") as fd:
4647+ fd.write("""[global]
4648+channels = a, b
4649+
4650+[channel_a]
4651+type = manual
4652+fullcount = 10
4653+
4654+[channel_b]
4655+type = auto
4656+versionbase = 5
4657+deltabase = a, b
4658+files = a, b
4659+file_a = test;arg1;arg2
4660+file_b = test;arg3;arg4
4661+""")
4662+
4663+ conf = config.Config(channel_config_path)
4664+ self.assertEquals(
4665+ conf.channels['b'].files,
4666+ [{'name': 'a', 'generator': 'test',
4667+ 'arguments': ['arg1', 'arg2']},
4668+ {'name': 'b', 'generator': 'test',
4669+ 'arguments': ['arg3', 'arg4']}])
4670+
4671+ self.assertEquals(conf.channels['a'].fullcount, 10)
4672+ self.assertEquals(conf.channels['a'].versionbase, 1)
4673+ self.assertEquals(conf.channels['a'].deltabase, ['a'])
4674+
4675+ self.assertEquals(conf.channels['b'].fullcount, 0)
4676+ self.assertEquals(conf.channels['b'].versionbase, 5)
4677+ self.assertEquals(conf.channels['b'].deltabase, ["a", "b"])
4678+
4679+ # # Single channel
4680+ single_channel_config_path = os.path.join(self.temp_directory,
4681+ "single_channel_config")
4682+ with open(single_channel_config_path, "w+") as fd:
4683+ fd.write("""[global]
4684+channels = a
4685+
4686+[channel_a]
4687+deltabase = a
4688+versionbase = 1
4689+files = a
4690+file_a = test;arg1;arg2
4691+""")
4692+
4693+ conf = config.Config(single_channel_config_path)
4694+ self.assertEquals(
4695+ conf.channels['a'].files,
4696+ [{'name': 'a', 'generator': 'test',
4697+ 'arguments': ['arg1', 'arg2']}])
4698+
4699+ # # Invalid channel
4700+ invalid_channel_config_path = os.path.join(self.temp_directory,
4701+ "invalid_channel_config")
4702+ with open(invalid_channel_config_path, "w+") as fd:
4703+ fd.write("""[global]
4704+channels = a
4705+""")
4706+
4707+ self.assertRaises(KeyError, config.Config, invalid_channel_config_path)
4708+
4709+ # # Invalid file
4710+ invalid_file_channel_config_path = os.path.join(
4711+ self.temp_directory, "invalid_file_channel_config")
4712+ with open(invalid_file_channel_config_path, "w+") as fd:
4713+ fd.write("""[global]
4714+channels = a
4715+
4716+[channel_a]
4717+files = a
4718+""")
4719+
4720+ self.assertRaises(KeyError, config.Config,
4721+ invalid_file_channel_config_path)
4722
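# A minimal channel configuration sketch distilled from the cases above
# (all values are illustrative; etc/config.example in the branch is the
# real template). Each file_<name> entry is "generator;argument;...":
#
#   [global]
#   base_path = /srv/system-image
#   channels = a
#
#   [channel_a]
#   versionbase = 1
#   deltabase = a
#   files = a
#   file_a = test;arg1;arg2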
4723=== added file 'tests/test_diff.py'
4724--- tests/test_diff.py 1970-01-01 00:00:00 +0000
4725+++ tests/test_diff.py 2014-10-10 11:11:17 +0000
4726@@ -0,0 +1,265 @@
4727+# -*- coding: utf-8 -*-
4728+
4729+# Copyright (C) 2013 Canonical Ltd.
4730+# Author: Stéphane Graber <stgraber@ubuntu.com>
4731+
4732+# This program is free software: you can redistribute it and/or modify
4733+# it under the terms of the GNU General Public License as published by
4734+# the Free Software Foundation; version 3 of the License.
4735+#
4736+# This program is distributed in the hope that it will be useful,
4737+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4738+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4739+# GNU General Public License for more details.
4740+#
4741+# You should have received a copy of the GNU General Public License
4742+# along with this program. If not, see <http://www.gnu.org/licenses/>.
4743+
4744+import shutil
4745+import sys
4746+import tarfile
4747+import tempfile
4748+import unittest
4749+
4750+from io import BytesIO, StringIO
4751+from systemimage.diff import ImageDiff, compare_files
4752+
4753+
4754+class DiffTests(unittest.TestCase):
4755+ def setUp(self):
4756+ temp_directory = tempfile.mkdtemp()
4757+
4758+ source_tarball_path = "%s/source.tar" % temp_directory
4759+ target_tarball_path = "%s/target.tar" % temp_directory
4760+
4761+ source_tarball = tarfile.open(source_tarball_path, "w")
4762+ target_tarball = tarfile.open(target_tarball_path, "w")
4763+
4764+ # Standard file
4765+ a = tarfile.TarInfo()
4766+ a.name = "a"
4767+ a.size = 4
4768+
4769+ # Standard file
4770+ b = tarfile.TarInfo()
4771+ b.name = "b"
4772+ b.size = 4
4773+
4774+ # Standard directory
4775+ c_dir = tarfile.TarInfo()
4776+ c_dir.name = "c"
4777+ c_dir.type = tarfile.DIRTYPE
4778+ c_dir.mode = 0o755
4779+
4780+ # Standard file
4781+ c = tarfile.TarInfo()
4782+ c.name = "c/c"
4783+ c.size = 4
4784+
4785+ # Standard file
4786+ d_source = tarfile.TarInfo()
4787+ d_source.name = "c/d"
4788+ d_source.size = 8
4789+ d_source.mtime = 1000
4790+
4791+ # Standard file
4792+ d_target = tarfile.TarInfo()
4793+ d_target.name = "c/d"
4794+ d_target.size = 8
4795+ d_target.mtime = 1234
4796+
4797+ # Symlink
4798+ e = tarfile.TarInfo()
4799+ e.name = "e"
4800+ e.type = tarfile.SYMTYPE
4801+ e.linkname = "a"
4802+
4803+ # Hard link
4804+ f = tarfile.TarInfo()
4805+ f.name = "f"
4806+ f.type = tarfile.LNKTYPE
4807+ f.linkname = "a"
4808+
4809+ # Standard file
4810+ g_source = tarfile.TarInfo()
4811+ g_source.name = "c/g"
4812+ g_source.size = 4
4813+ g_source.mtime = 1000
4814+
4815+ # Standard file
4816+ g_target = tarfile.TarInfo()
4817+ g_target.name = "c/g"
4818+ g_target.size = 4
4819+ g_target.mtime = 1001
4820+
4821+ # Hard link
4822+ h_source = tarfile.TarInfo()
4823+ h_source.name = "c/h"
4824+ h_source.type = tarfile.LNKTYPE
4825+ h_source.linkname = "d"
4826+ h_source.mtime = 1000
4827+
4828+ # Hard link
4829+ h_target = tarfile.TarInfo()
4830+ h_target.name = "c/h"
4831+ h_target.type = tarfile.LNKTYPE
4832+ h_target.linkname = "d"
4833+ h_target.mtime = 1001
4834+
4835+ # Hard link
4836+ i = tarfile.TarInfo()
4837+ i.name = "c/a_i"
4838+ i.type = tarfile.LNKTYPE
4839+ i.linkname = "c"
4840+
4841+ # Dangling symlink
4842+ j = tarfile.TarInfo()
4843+ j.name = "c/j"
4844+ j.type = tarfile.SYMTYPE
4845+ j.linkname = "j_non-existent"
4846+
4847+ # Standard directory
4848+ k_dir = tarfile.TarInfo()
4849+ k_dir.name = "dir"
4850+ k_dir.type = tarfile.DIRTYPE
4851+ k_dir.mode = 0o755
4852+
4853+ # Dangling symlink
4854+ l = tarfile.TarInfo()
4855+ l.name = "dir"
4856+ l.type = tarfile.SYMTYPE
4857+ l.linkname = "l_non-existent"
4858+
4859+ # Standard file
4860+ m_source = tarfile.TarInfo()
4861+ m_source.name = "m"
4862+ m_source.size = 4
4863+
4864+ # Hard link
4865+ m_target = tarfile.TarInfo()
4866+ m_target.name = "m"
4867+ m_target.type = tarfile.LNKTYPE
4868+ m_target.linkname = "n"
4869+
4870+ # Hard link
4871+ n_source = tarfile.TarInfo()
4872+ n_source.name = "n"
4873+ n_source.type = tarfile.LNKTYPE
4874+ n_source.linkname = "m"
4875+
4876+ # Standard file
4877+ n_target = tarfile.TarInfo()
4878+ n_target.name = "n"
4879+ n_target.size = 4
4880+
4881+ # Hard link
4882+ o_source = tarfile.TarInfo()
4883+ o_source.name = "system/o.1"
4884+ o_source.type = tarfile.LNKTYPE
4885+ o_source.linkname = "system/o"
4886+
4887+ # Standard file
4888+ o_target = tarfile.TarInfo()
4889+ o_target.name = "system/o"
4890+ o_target.size = 4
4891+
4892+ source_tarball.addfile(a, BytesIO(b"test"))
4893+ source_tarball.addfile(a, BytesIO(b"test"))
4894+ source_tarball.addfile(a, BytesIO(b"test"))
4895+ source_tarball.addfile(b, BytesIO(b"test"))
4896+ source_tarball.addfile(c_dir)
4897+ source_tarball.addfile(d_source, BytesIO(b"test-abc"))
4898+ source_tarball.addfile(g_source, BytesIO(b"test"))
4899+ source_tarball.addfile(h_source, BytesIO(b"test"))
4900+ source_tarball.addfile(k_dir)
4901+ source_tarball.addfile(m_source, BytesIO(b"test"))
4902+ source_tarball.addfile(n_source)
4903+
4904+ target_tarball.addfile(a, BytesIO(b"test"))
4905+ target_tarball.addfile(c_dir)
4906+ target_tarball.addfile(c, BytesIO(b"test"))
4907+ target_tarball.addfile(d_target, BytesIO(b"test-def"))
4908+ target_tarball.addfile(e)
4909+ target_tarball.addfile(f)
4910+ target_tarball.addfile(g_target, BytesIO(b"test"))
4911+ target_tarball.addfile(h_target, BytesIO(b"test"))
4912+ target_tarball.addfile(i)
4913+ target_tarball.addfile(j)
4914+ target_tarball.addfile(l)
4915+ target_tarball.addfile(n_target, BytesIO(b"test"))
4916+ target_tarball.addfile(m_target)
4917+ target_tarball.addfile(o_source)
4918+ target_tarball.addfile(o_target)
4919+
4920+ source_tarball.close()
4921+ target_tarball.close()
4922+
4923+ self.imagediff = ImageDiff(source_tarball_path, target_tarball_path)
4924+ self.source_tarball_path = source_tarball_path
4925+ self.target_tarball_path = target_tarball_path
4926+ self.temp_directory = temp_directory
4927+
4928+ def tearDown(self):
4929+ shutil.rmtree(self.temp_directory)
4930+
4931+ def test_content(self):
4932+ content_set, content_dict = self.imagediff.scan_content("source")
4933+ self.assertEquals(sorted(content_dict.keys()),
4934+ ['a', 'b', 'c', 'c/d', 'c/g', 'c/h', 'dir', 'm',
4935+ 'n'])
4936+
4937+ content_set, content_dict = self.imagediff.scan_content("target")
4938+ self.assertEquals(sorted(content_dict.keys()),
4939+ ['a', 'c', 'c/a_i', 'c/c', 'c/d', 'c/g', 'c/h',
4940+ 'c/j', 'dir', 'e', 'f', 'm', 'n', 'system/o',
4941+ 'system/o.1'])
4942+
4943+ def test_content_invalid_image(self):
4944+ self.assertRaises(KeyError, self.imagediff.scan_content, "invalid")
4945+
4946+ def test_compare_files(self):
4947+ self.assertEquals(compare_files(None, None), True)
4948+ self.assertEquals(compare_files(None, BytesIO(b"abc")), False)
4949+
4950+ def test_compare_image(self):
4951+ diff_set = self.imagediff.compare_images()
4952+ self.assertTrue(("c/a_i", "add") in diff_set)
4953+
4954+ def test_print_changes(self):
4955+ # Redirect stdout
4956+ old_stdout = sys.stdout
4957+
4958+        # FIXME: Would be best to have something that works with both versions
4959+ if sys.version[0] == "3":
4960+ sys.stdout = StringIO()
4961+ else:
4962+ sys.stdout = BytesIO()
4963+
4964+ self.imagediff.print_changes()
4965+
4966+ # Unredirect stdout
4967+ output = sys.stdout.getvalue()
4968+ sys.stdout = old_stdout
4969+
4970+ self.assertEquals(output, """ - b (del)
4971+ - c/a_i (add)
4972+ - c/c (add)
4973+ - c/d (mod)
4974+ - c/j (add)
4975+ - dir (mod)
4976+ - e (add)
4977+ - f (add)
4978+ - system/o (add)
4979+ - system/o.1 (add)
4980+""")
4981+
4982+ def test_generate_tarball(self):
4983+ output_tarball = "%s/output.tar" % self.temp_directory
4984+
4985+ self.imagediff.generate_diff_tarball(output_tarball)
4986+ tarball = tarfile.open(output_tarball, "r")
4987+
4988+ files_list = [entry.name for entry in tarball]
4989+ self.assertEquals(files_list, ['removed', 'c/c', 'c/a_i', 'c/d', 'c/j',
4990+ 'dir', 'e', 'f', 'system/o',
4991+ 'system/o.1'])
4992
4993=== added file 'tests/test_generators.py'
4994--- tests/test_generators.py 1970-01-01 00:00:00 +0000
4995+++ tests/test_generators.py 2014-10-10 11:11:17 +0000
4996@@ -0,0 +1,1039 @@
4997+# -*- coding: utf-8 -*-
4998+
4999+# Copyright (C) 2013 Canonical Ltd.
5000+# Author: Stéphane Graber <stgraber@ubuntu.com>
The diff has been truncated for viewing.

Subscribers

People subscribed via source and target branches