Merge lp:~cjwatson/ubuntu-archive-publishing/add-germinate-scripts into lp:ubuntu-archive-publishing

Proposed by Colin Watson
Status: Merged
Approved by: Colin Watson
Approved revision: 44
Merged at revision: 42
Proposed branch: lp:~cjwatson/ubuntu-archive-publishing/add-germinate-scripts
Merge into: lp:ubuntu-archive-publishing
Diff against target: 2458 lines (+2381/-1)
12 files modified
finalize.d/20-germinate (+1/-1)
lib/scripts/generate_extra_overrides.py (+518/-0)
scripts/cron.germinate (+62/-0)
scripts/generate-extra-overrides.py (+18/-0)
scripts/maintenance-check.py (+529/-0)
tests/germinate-test-data/mock-bin/lockfile (+8/-0)
tests/germinate-test-data/mock-lp-root/scripts/ftpmaster-tools/lp-query-distro.py (+27/-0)
tests/germinate-test-data/mock-lp-root/scripts/generate-extra-overrides.py (+5/-0)
tests/run (+8/-0)
tests/test_cron_germinate.py (+226/-0)
tests/test_generate_extra_overrides.py (+805/-0)
tests/testing.py (+174/-0)
To merge this branch: bzr merge lp:~cjwatson/ubuntu-archive-publishing/add-germinate-scripts
Reviewer Review Type Date Requested Status
William Grant code Approve
Ubuntu Package Archive Administrators Pending
Review via email: mp+222477@code.launchpad.net

Commit message

Add cron.germinate and its associated scripts and tests, split out from Launchpad.

Description of the change

As part of the work to instantiate a derived archive for the phone RTM project, William Grant suggested splitting cron.germinate out of Launchpad proper and moving it into ubuntu-archive-publishing; this also removes a large chunk of hardcoded Ubuntu-specific policy from Launchpad.

This is a fairly straight port of the relevant Launchpad code to run standalone. The main changes required were providing some minimal LaunchpadScript-like functionality, using launchpadlib rather than direct database access, and extensive surgery on generate-extra-overrides' test suite so that it creates mock launchpadlib objects rather than using the test database. A number of infelicities remain, especially in maintenance-check, which has been on my refactoring list for quite a while, but they are essentially the same infelicities currently found in Launchpad.
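The mock-launchpadlib approach can be sketched roughly as follows. This is an illustration only, not the actual code in tests/test_generate_extra_overrides.py; the class names are invented, but the pattern is the same: the script reads only a handful of attributes from launchpadlib objects (series, status, name, architectures, architecture_tag), so tests can substitute plain Python objects exposing just those attributes.

```python
# Hypothetical stand-ins for launchpadlib webservice objects.  They
# carry only the attributes the ported script actually reads.


class FakeDistroArchSeries:
    def __init__(self, architecture_tag):
        # The webservice spells it architecture_tag, not architecturetag.
        self.architecture_tag = architecture_tag


class FakeDistroSeries:
    def __init__(self, name, status, architectures):
        self.name = name
        self.status = status
        self.architectures = architectures


class FakeDistribution:
    def __init__(self, series):
        self.series = series


def find_operable_series(distribution):
    # Same filter the ported script applies: only series in active
    # development or pre-release freeze may be modified.
    return [
        series for series in distribution.series
        if series.status in ("Active Development", "Pre-release Freeze")]


trusty = FakeDistroSeries(
    "trusty", "Current Stable Release", [FakeDistroArchSeries("amd64")])
utopic = FakeDistroSeries(
    "utopic", "Active Development",
    [FakeDistroArchSeries("amd64"), FakeDistroArchSeries("i386")])
ubuntu = FakeDistribution([trusty, utopic])

print([series.name for series in find_operable_series(ubuntu)])  # ['utopic']
```

No webservice calls, no test database: the filter logic is exercised against in-memory objects.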

This can only be deployed once https://rt.admin.canonical.com/Ticket/Display.html?id=72323 ("Please install python-launchpadlib on pepo and bilimbi") is complete and pepo has been given a "Read Non-Private Data" OAuth permission for ~ubuntu-archive.

43. By Colin Watson

Remove comment reference to GlobalLock.

44. By Colin Watson

The webservice spells it architecture_tag, not architecturetag.

Revision history for this message
William Grant (wgrant) :
review: Approve (code)
Revision history for this message
Colin Watson (cjwatson) wrote :

William accurately points out that we're using login_anonymously, so the OAuth permission for ~ubuntu-archive is unnecessary.

Preview Diff

1=== modified file 'finalize.d/20-germinate'
2--- finalize.d/20-germinate 2014-04-15 15:55:56 +0000
3+++ finalize.d/20-germinate 2014-06-09 13:57:00 +0000
4@@ -2,5 +2,5 @@
5
6 if [ "$SECURITY_UPLOAD_ONLY" != "yes" ]
7 then
8- cron.germinate || /bin/true
9+ "$(dirname "$0")/../scripts/cron.germinate" || /bin/true
10 fi
11
12=== added directory 'lib'
13=== added directory 'lib/scripts'
14=== added file 'lib/scripts/__init__.py'
15=== added file 'lib/scripts/generate_extra_overrides.py'
16--- lib/scripts/generate_extra_overrides.py 1970-01-01 00:00:00 +0000
17+++ lib/scripts/generate_extra_overrides.py 2014-06-09 13:57:00 +0000
18@@ -0,0 +1,518 @@
19+#! /usr/bin/python
20+#
21+# Copyright 2011-2014 Canonical Ltd. This software is licensed under the
22+# GNU Affero General Public License version 3 (see the file LICENSE).
23+
24+"""Generate extra overrides using Germinate."""
25+
26+__metaclass__ = type
27+__all__ = [
28+ 'GenerateExtraOverrides',
29+ ]
30+
31+import errno
32+import fcntl
33+from functools import partial
34+import glob
35+import logging
36+from optparse import (
37+ OptionParser,
38+ OptionValueError,
39+ )
40+import os
41+import re
42+import sys
43+import time
44+
45+from germinate.archive import TagFile
46+from germinate.germinator import Germinator
47+from germinate.log import GerminateFormatter
48+from germinate.seeds import (
49+ SeedError,
50+ SeedStructure,
51+ )
52+from launchpadlib.launchpad import Launchpad
53+
54+
55+LOCK_PATH = "/var/lock/"
56+
57+
58+def file_exists(filename):
59+ """Does `filename` exist?"""
60+ return os.access(filename, os.F_OK)
61+
62+
63+class ScriptFormatter(logging.Formatter):
64+ """logging.Formatter encoding our preferred output format."""
65+
66+ def __init__(self, fmt=None, datefmt="%Y-%m-%d %H:%M:%S"):
67+ if fmt is None:
68+ fmt = '%(asctime)s %(levelname)-7s %(message)s'
69+ logging.Formatter.__init__(self, fmt, datefmt)
70+ # Output should be UTC.
71+ self.converter = time.gmtime
72+
73+
74+def log_unhandled_exception_and_exit(func):
75+ """Decorator that logs unhandled exceptions via the logging module.
76+
77+ Exceptions are reraised except at the top level, i.e. exceptions are
78+ only propagated to the outermost decorated method. At the top level,
79+ an exception causes the script to terminate.
80+ """
81+
82+ def log_unhandled_exceptions_func(self, *args, **kw):
83+ try:
84+ self._log_unhandled_exceptions_level += 1
85+ return func(self, *args, **kw)
86+ except Exception:
87+ if self._log_unhandled_exceptions_level == 1:
88+ # self.logger is set up in GenerateExtraOverrides.__init__()
89+ # so we can use it.
90+ self.logger.exception("Unhandled exception")
91+ sys.exit(1)
92+ else:
93+ raise
94+ finally:
95+ self._log_unhandled_exceptions_level -= 1
96+ return log_unhandled_exceptions_func
97+
98+
99+class ScriptFailure(Exception):
100+ """Something bad happened and the script is going away."""
101+
102+
103+class AtomicFile:
104+ """Facilitate atomic writing of files."""
105+
106+ def __init__(self, filename):
107+ self.filename = filename
108+ self.fd = open("%s.new" % self.filename, "w")
109+
110+ def __enter__(self):
111+ return self.fd
112+
113+ def __exit__(self, exc_type, exc_value, exc_tb):
114+ self.fd.close()
115+ if exc_type is None:
116+ os.rename("%s.new" % self.filename, self.filename)
117+
118+
119+def find_operable_series(distribution):
120+ """Find all the series we can operate on in this distribution.
121+
122+ We are allowed to modify DEVELOPMENT or FROZEN series, but should leave
123+ series with any other status alone.
124+ """
125+ return [
126+ series for series in distribution.series
127+ if series.status in ("Active Development", "Pre-release Freeze")]
128+
129+
130+class GenerateExtraOverrides:
131+ """Main class for generating extra overrides."""
132+
133+ # State for the log_unhandled_exceptions_and_exit decorator.
134+ _log_unhandled_exceptions_level = 0
135+
136+ def __init__(self, name=None, test_args=None):
137+ if name is None:
138+ self._name = self.__class__.__name__.lower()
139+ else:
140+ self._name = name
141+ self.parser = OptionParser()
142+ self.add_my_options()
143+ self.options, self.args = self.parser.parse_args(args=test_args)
144+
145+ self.logger = logging.getLogger()
146+ hdlr = logging.StreamHandler()
147+ self.logger.setLevel(0)
148+ hdlr.setLevel(logging.DEBUG)
149+ hdlr.setFormatter(ScriptFormatter())
150+ self.logger.addHandler(hdlr)
151+
152+ self.germinate_logger = None
153+
154+ def add_my_options(self):
155+ """Add a 'distribution' context option."""
156+ self.parser.add_option(
157+ "-l", "--launchpad", dest="launchpad_instance",
158+ default="production")
159+ self.parser.add_option(
160+ "-d", "--distribution", dest="distribution",
161+ help="Context distribution name.")
162+
163+ @property
164+ def name(self):
165+ # Include distribution name. Clearer to admins, but also
166+ # puts runs for different distributions under separate
167+ # locks so that they can run simultaneously.
168+ return "%s-%s" % (self._name, self.options.distribution)
169+
170+ @property
171+ def lockfilename(self):
172+ return "launchpad-%s.lock" % self.name
173+
174+ @property
175+ def lockfilepath(self):
176+ return os.path.join(LOCK_PATH, self.lockfilename)
177+
178+ def setup_lock(self):
179+ """Create lockfile.
180+
181+ Note that this will create a lockfile even if you don't actually
182+ use it.
183+ """
184+ if os.path.exists(self.lockfilepath):
185+ self.previous_lockfile_present = True
186+ else:
187+ self.previous_lockfile_present = False
188+ self.flock = open(self.lockfilepath, "w")
189+ self.fdlock = self.flock.fileno()
190+
191+ @log_unhandled_exception_and_exit
192+ def lock_or_die(self):
193+ """Attempt to lock, and sys.exit(1) if the lock's already taken."""
194+ self.setup_lock()
195+ self.logger.info("Creating lockfile: %s", self.lockfilepath)
196+ try:
197+ fcntl.flock(self.fdlock, fcntl.LOCK_EX | fcntl.LOCK_NB)
198+ except IOError as message:
199+ match = re.search("^\[Errno ([0-9]+)\]", str(message))
200+ if match:
201+ if int(match.group(1)) == errno.EWOULDBLOCK:
202+ self.logger.info('Lockfile %s in use' % self.lockfilepath)
203+ sys.exit(1)
204+ else:
205+ raise Exception(
206+ 'Cannot acquire lock on file "%s": %s' %
207+ (self.lockfilepath, message))
208+ else:
209+ raise Exception('Malformed error message "%s"' % message)
210+ if self.previous_lockfile_present:
211+ self.logger.warn("Stale lockfile detected and claimed.")
212+ self.is_locked = True
213+
214+ @log_unhandled_exception_and_exit
215+ def unlock(self):
216+ """Release the lock. Do this before going home."""
217+ if not self.is_locked:
218+ return
219+ self.logger.debug('Removing lock file: %s', self.lockfilepath)
220+ os.unlink(self.lockfilepath)
221+ try:
222+ fcntl.flock(self.fdlock, fcntl.LOCK_UN)
223+ except IOError:
224+ raise Exception('Unlock of file "%s" failed' % self.lockfilepath)
225+ self.is_locked = False
226+
227+ @log_unhandled_exception_and_exit
228+ def run(self):
229+ """Actually run the script."""
230+ try:
231+ self.main()
232+ except ScriptFailure as e:
233+ self.logger.error(str(e))
234+ sys.exit(1)
235+
236+ @log_unhandled_exception_and_exit
237+ def lock_and_run(self):
238+ """Call lock_or_die(), and then run() the script.
239+
240+ Will die with sys.exit(1) if the locking call fails.
241+ """
242+ self.lock_or_die()
243+ try:
244+ self.run()
245+ finally:
246+ self.unlock()
247+
248+ def login(self):
249+ return Launchpad.login_anonymously(
250+ self.name, self.options.launchpad_instance)
251+
252+ def processOptions(self):
253+ """Handle command-line options."""
254+ self.options.launchpad = self.login()
255+
256+ if self.options.distribution is None:
257+ raise OptionValueError("Specify a distribution.")
258+
259+ try:
260+ self.distribution = self.options.launchpad.distributions[
261+ self.options.distribution]
262+ except KeyError:
263+ raise OptionValueError(
264+ "Distribution '%s' not found." % self.options.distribution)
265+
266+ self.series = find_operable_series(self.distribution)
267+ if not self.series:
268+ raise ScriptFailure(
269+ "There is no DEVELOPMENT or FROZEN distroseries for %s." %
270+ self.options.distribution)
271+
272+ # The finalize.d protocol passes the list of root directories for
273+ # the distribution's archives in the ARCHIVEROOTS environment
274+ # variable, joined by spaces. We only care about the first archive.
275+ self.archiveroot = os.environ["ARCHIVEROOTS"].split()[0]
276+ self.germinateroot = self.archiveroot + "-germinate"
277+ self.miscroot = self.archiveroot + "-misc"
278+
279+ def setUpDirs(self):
280+ """Create output directories if they did not already exist."""
281+ if not file_exists(self.germinateroot):
282+ self.logger.debug(
283+ "Creating germinate root %s.", self.germinateroot)
284+ os.makedirs(self.germinateroot)
285+ if not file_exists(self.miscroot):
286+ self.logger.debug("Creating misc root %s.", self.miscroot)
287+ os.makedirs(self.miscroot)
288+
289+ def addLogHandler(self):
290+ """Send germinate's log output to a separate file."""
291+ if self.germinate_logger is not None:
292+ return
293+
294+ self.germinate_logger = logging.getLogger("germinate")
295+ self.germinate_logger.setLevel(logging.INFO)
296+ self.log_file = os.path.join(self.germinateroot, "germinate.output")
297+ handler = logging.FileHandler(self.log_file, mode="w")
298+ handler.setFormatter(GerminateFormatter())
299+ self.germinate_logger.addHandler(handler)
300+ self.germinate_logger.propagate = False
301+
302+ def setUp(self):
303+ """Process options, and set up internal state."""
304+ self.processOptions()
305+ self.setUpDirs()
306+ self.addLogHandler()
307+
308+ def getComponents(self, series):
309+ """Get the list of components to process for a given distroseries.
310+
311+ Even if DistroSeries.component_names starts including partner,
312+ we don't want it; this applies to the primary archive only.
313+ """
314+ return [component
315+ for component in series.component_names
316+ if component != "partner"]
317+
318+ def makeSeedStructures(self, series_name, flavours, seed_bases=None):
319+ structures = {}
320+ for flavour in flavours:
321+ try:
322+ structure = SeedStructure(
323+ "%s.%s" % (flavour, series_name), seed_bases=seed_bases)
324+ if len(structure):
325+ structures[flavour] = structure
326+ else:
327+ self.logger.warning(
328+ "Skipping empty seed structure for %s.%s",
329+ flavour, series_name)
330+ except SeedError as e:
331+ self.logger.warning(
332+ "Failed to fetch seeds for %s.%s: %s",
333+ flavour, series_name, e)
334+ return structures
335+
336+ def logGerminateProgress(self, *args):
337+ """Log a "progress" entry to the germinate log file.
338+
339+ Germinate logs quite a bit of detailed information. To make it
340+ easier to see the structure of its operation, GerminateFormatter
341+ allows tagging some log entries as "progress" entries, which are
342+ printed without a prefix.
343+ """
344+ self.germinate_logger.info(*args, extra={"progress": True})
345+
346+ def composeOutputPath(self, flavour, series_name, arch, base):
347+ return os.path.join(
348+ self.germinateroot,
349+ "%s_%s_%s_%s" % (base, flavour, series_name, arch))
350+
351+ def recordOutput(self, path, seed_outputs):
352+ if seed_outputs is not None:
353+ seed_outputs.add(os.path.basename(path))
354+
355+ def writeGerminateOutput(self, germinator, structure, flavour,
356+ series_name, arch, seed_outputs=None):
357+ """Write dependency-expanded output files.
358+
359+ These files are a reduced subset of those written by the germinate
360+ command-line program.
361+ """
362+ path = partial(self.composeOutputPath, flavour, series_name, arch)
363+
364+ # The structure file makes it possible to figure out how the other
365+ # output files relate to each other.
366+ structure.write(path("structure"))
367+ self.recordOutput(path("structure"), seed_outputs)
368+
369+ # "all" and "all.sources" list the full set of binary and source
370+ # packages respectively for a given flavour/suite/architecture
371+ # combination.
372+ germinator.write_all_list(structure, path("all"))
373+ self.recordOutput(path("all"), seed_outputs)
374+ germinator.write_all_source_list(structure, path("all.sources"))
375+ self.recordOutput(path("all.sources"), seed_outputs)
376+
377+ # Write the dependency-expanded output for each seed. Several of
378+ # these are used by archive administration tools, and others are
379+ # useful for debugging, so it's best to just write them all.
380+ for seedname in structure.names:
381+ germinator.write_full_list(structure, path(seedname), seedname)
382+ self.recordOutput(path(seedname), seed_outputs)
383+
384+ def parseTaskHeaders(self, seedtext):
385+ """Parse a seed for Task headers.
386+
387+ seedtext is a file-like object. Return a dictionary of Task headers,
388+ with keys canonicalised to lower-case.
389+ """
390+ task_headers = {}
391+ task_header_regex = re.compile(
392+ r"task-(.*?):(.*)", flags=re.IGNORECASE)
393+ for line in seedtext:
394+ match = task_header_regex.match(line)
395+ if match is not None:
396+ key, value = match.groups()
397+ task_headers[key.lower()] = value.strip()
398+ return task_headers
399+
400+ def getTaskName(self, task_headers, flavour, seedname, primary_flavour):
401+ """Work out the name of the Task to be generated from this seed.
402+
403+ If there is a Task-Name header, it wins; otherwise, seeds with a
404+ Task-Per-Derivative header are honoured for all flavours and put in
405+ an appropriate namespace, while other seeds are only honoured for
406+ the first flavour and have archive-global names.
407+ """
408+ if "name" in task_headers:
409+ return task_headers["name"]
410+ elif "per-derivative" in task_headers:
411+ return "%s-%s" % (flavour, seedname)
412+ elif primary_flavour:
413+ return seedname
414+ else:
415+ return None
416+
417+ def getTaskSeeds(self, task_headers, seedname):
418+ """Return the list of seeds used to generate a task from this seed.
419+
420+ The list of packages in this task comes from this seed plus any
421+ other seeds listed in a Task-Seeds header.
422+ """
423+ scan_seeds = set([seedname])
424+ if "seeds" in task_headers:
425+ scan_seeds.update(task_headers["seeds"].split())
426+ return sorted(scan_seeds)
427+
428+ def writeOverrides(self, override_file, germinator, structure, arch,
429+ seedname, key, value):
430+ packages = germinator.get_full(structure, seedname)
431+ for package in sorted(packages):
432+ print >>override_file, "%s/%s %s %s" % (
433+ package, arch, key, value)
434+
435+ def germinateArchFlavour(self, override_file, germinator, series_name,
436+ arch, flavour, structure, primary_flavour,
437+ seed_outputs=None):
438+ """Germinate seeds on a single flavour for a single architecture."""
439+ # Expand dependencies.
440+ germinator.plant_seeds(structure)
441+ germinator.grow(structure)
442+ germinator.add_extras(structure)
443+
444+ self.writeGerminateOutput(
445+ germinator, structure, flavour, series_name, arch,
446+ seed_outputs=seed_outputs)
447+
448+ write_overrides = partial(
449+ self.writeOverrides, override_file, germinator, structure, arch)
450+
451+ # Generate apt-ftparchive "extra overrides" for Task fields.
452+ seednames = [name for name in structure.names if name != "extra"]
453+ for seedname in seednames:
454+ with structure[seedname] as seedtext:
455+ task_headers = self.parseTaskHeaders(seedtext)
456+ if task_headers:
457+ task = self.getTaskName(
458+ task_headers, flavour, seedname, primary_flavour)
459+ if task is not None:
460+ scan_seeds = self.getTaskSeeds(task_headers, seedname)
461+ for scan_seed in scan_seeds:
462+ write_overrides(scan_seed, "Task", task)
463+
464+ # Generate apt-ftparchive "extra overrides" for Build-Essential
465+ # fields.
466+ if "build-essential" in structure.names and primary_flavour:
467+ write_overrides("build-essential", "Build-Essential", "yes")
468+
469+ def germinateArch(self, override_file, series_name, components, arch,
470+ flavours, structures, seed_outputs=None):
471+ """Germinate seeds on all flavours for a single architecture."""
472+ germinator = Germinator(arch)
473+
474+ # Read archive metadata.
475+ archive = TagFile(
476+ series_name, components, arch, "file://%s" % self.archiveroot,
477+ cleanup=True)
478+ germinator.parse_archive(archive)
479+
480+ for flavour in flavours:
481+ self.logger.info(
482+ "Germinating for %s/%s/%s", flavour, series_name, arch)
483+ # Add this to the germinate log as well so that it can be
484+ # debugged more easily. Log a separator line first.
485+ self.logGerminateProgress("")
486+ self.logGerminateProgress(
487+ "Germinating for %s/%s/%s", flavour, series_name, arch)
488+
489+ self.germinateArchFlavour(
490+ override_file, germinator, series_name, arch, flavour,
491+ structures[flavour], flavour == flavours[0],
492+ seed_outputs=seed_outputs)
493+
494+ def removeStaleOutputs(self, series_name, seed_outputs):
495+ """Remove stale outputs for a series.
496+
497+ Any per-seed outputs not in seed_outputs are considered stale.
498+ """
499+ all_outputs = glob.glob(
500+ os.path.join(self.germinateroot, "*_*_%s_*" % series_name))
501+ for output in all_outputs:
502+ if os.path.basename(output) not in seed_outputs:
503+ os.remove(output)
504+
505+ def generateExtraOverrides(self, series_name, components, architectures,
506+ flavours, seed_bases=None):
507+ structures = self.makeSeedStructures(
508+ series_name, flavours, seed_bases=seed_bases)
509+
510+ if structures:
511+ seed_outputs = set()
512+ override_path = os.path.join(
513+ self.miscroot, "more-extra.override.%s.main" % series_name)
514+ with AtomicFile(override_path) as override_file:
515+ for arch in architectures:
516+ self.germinateArch(
517+ override_file, series_name, components, arch,
518+ flavours, structures, seed_outputs=seed_outputs)
519+ self.removeStaleOutputs(series_name, seed_outputs)
520+
521+ def process(self, seed_bases=None):
522+ """Do the bulk of the work."""
523+ self.setUp()
524+
525+ for series in self.series:
526+ series_name = series.name
527+ components = self.getComponents(series)
528+ architectures = sorted(
529+ arch.architecture_tag for arch in series.architectures)
530+
531+ self.generateExtraOverrides(
532+ series_name, components, architectures, self.args,
533+ seed_bases=seed_bases)
534+
535+ def main(self):
536+ self.process()
537
538=== added directory 'scripts'
539=== added file 'scripts/cron.germinate'
540--- scripts/cron.germinate 1970-01-01 00:00:00 +0000
541+++ scripts/cron.germinate 2014-06-09 13:57:00 +0000
542@@ -0,0 +1,62 @@
543+#! /bin/sh
544+#
545+# Copyright 2009-2014 Canonical Ltd. This software is licensed under the
546+# GNU Affero General Public License version 3 (see the file LICENSE).
547+
548+set -e
549+set -u
550+
551+ARCHIVEROOT=${TEST_ARCHIVEROOT:-/srv/launchpad.net/ubuntu-archive/ubuntu}
552+MISCROOT=$ARCHIVEROOT/../ubuntu-misc
553+LOCKROOT=$ARCHIVEROOT/..
554+GERMINATEROOT=$ARCHIVEROOT/../ubuntu-germinate
555+
556+LAUNCHPADROOT=${TEST_LAUNCHPADROOT:-/srv/launchpad.net/production/launchpad}
557+SCRIPTSROOT=${TEST_SCRIPTSROOT:-`dirname $0`/..}
558+GENERATE=$SCRIPTSROOT/scripts/generate-extra-overrides.py
559+MAINTENANCE_CHECK=$SCRIPTSROOT/scripts/maintenance-check.py
560+
561+FLAVOURS="ubuntu kubuntu kubuntu-active edubuntu xubuntu mythbuntu lubuntu"
562+FLAVOURS="$FLAVOURS ubuntustudio ubuntu-gnome ubuntu-touch"
563+
564+## Check to see if another germinate run is in progress
565+
566+LOCKFILE=$LOCKROOT/cron.germinate.lock
567+if lockfile -! -l 43200 -r 0 "${LOCKFILE}"; then
568+ echo Another cron.germinate appears to be running
569+ exit 1
570+fi
571+
572+cleanup () {
573+ rm -f "$LOCKFILE"
574+}
575+
576+trap cleanup EXIT
577+
578+cd $GERMINATEROOT
579+
580+$GENERATE -d ubuntu $FLAVOURS
581+
582+# Now generate the Supported extra overrides for all supported distros.
583+SUITES=`$LAUNCHPADROOT/scripts/ftpmaster-tools/lp-query-distro.py supported`
584+for supported_suite in $SUITES; do
585+ echo -n "Running maintenance-check for $supported_suite... "
586+ # The support timeframe information is stored here
587+ SUPPORTED="$MISCROOT/more-extra.override.$supported_suite.main.supported"
588+ # This is the target override file that contains germinate plus
589+ # support info.
590+ TARGET="$MISCROOT/more-extra.override.$supported_suite.main"
591+ # Debug/Log information
592+ LOG="_maintenance-check.$supported_suite.stderr"
593+ if $MAINTENANCE_CHECK $supported_suite > $SUPPORTED 2> $LOG; then
594+ # The target file may be missing on the server and the script should
595+ # not fail if that is the case.
596+ touch $TARGET
597+ # Remove old "Supported" info from extra-overrides as it may be
598+ # stale now and we replace it with fresh content below.
599+ sed /"^.* Supported"/d $TARGET > ${TARGET}.new
600+ cat $SUPPORTED >> ${TARGET}.new
601+ mv ${TARGET}.new $TARGET
602+ fi
603+ echo " done"
604+done
605
606=== added file 'scripts/generate-extra-overrides.py'
607--- scripts/generate-extra-overrides.py 1970-01-01 00:00:00 +0000
608+++ scripts/generate-extra-overrides.py 2014-06-09 13:57:00 +0000
609@@ -0,0 +1,18 @@
610+#!/usr/bin/python
611+#
612+# Copyright 2011-2014 Canonical Ltd. This software is licensed under the
613+# GNU Affero General Public License version 3 (see the file LICENSE).
614+
615+"""Generate extra overrides using Germinate."""
616+
617+import os
618+import sys
619+
620+sys.path.insert(0, os.path.join(sys.path[0], os.pardir, "lib"))
621+
622+from scripts.generate_extra_overrides import GenerateExtraOverrides
623+
624+
625+if __name__ == '__main__':
626+ script = GenerateExtraOverrides("generate-extra-overrides")
627+ script.lock_and_run()
628
629=== added file 'scripts/maintenance-check.py'
630--- scripts/maintenance-check.py 1970-01-01 00:00:00 +0000
631+++ scripts/maintenance-check.py 2014-06-09 13:57:00 +0000
632@@ -0,0 +1,529 @@
633+#!/usr/bin/python
634+#
635+# Python port of the nice maintenance-check script by Nick Barcet
636+#
637+
638+import logging
639+from optparse import OptionParser
640+import os
641+import sys
642+import urllib2
643+import urlparse
644+
645+import apt
646+import apt_pkg
647+
648+
649+class UbuntuMaintenance(object):
650+ """ Represents the support timeframe for a regular ubuntu release """
651+
652+ # architectures that are fully supported (including LTS time)
653+ PRIMARY_ARCHES = [
654+ "i386",
655+ "amd64",
656+ ]
657+
658+ # architectures we support (but not for LTS time)
659+ SUPPORTED_ARCHES = PRIMARY_ARCHES + [
660+ "armel",
661+ "armhf",
662+ "arm64",
663+ "ppc64el",
664+ ]
665+
666+ # what defines the seeds is documented in wiki.ubuntu.com/SeedManagement
667+ SERVER_SEEDS = [
668+ "server-ship",
669+ "supported-server",
670+ ]
671+ DESKTOP_SEEDS = [
672+ "ship",
673+ "supported-desktop",
674+ "supported-desktop-extra",
675+ ]
676+ SUPPORTED_SEEDS = ["all"]
677+
678+ # normal support timeframe
679+ # time, seeds
680+ SUPPORT_TIMEFRAME = [
681+ ("9m", SUPPORTED_SEEDS),
682+ ]
683+
684+ # shorter than normal support timeframe
685+ # time, seed
686+ SUPPORT_TIMEFRAME_SHORT = [
687+ ]
688+
689+ # distro names that we check the seeds for
690+ DISTRO_NAMES = [
691+ "ubuntu",
692+ ]
693+
694+ # distro names for shorter than default support cycles
695+ DISTRO_NAMES_SHORT = [
696+ ]
697+
698+
699+# This is fun! We have a bunch of cases for 10.04 LTS
700+#
701+# - distro "ubuntu" follows SUPPORT_TIMEFRAME_LTS but only for
702+# amd64/i386
703+# - distros "kubuntu", "edubuntu" and "netbook" need to be
704+# considered *but* only follow SUPPORT_TIMEFRAME
705+# - anything that is in armel follows SUPPORT_TIMEFRAME
706+#
707+class LucidUbuntuMaintenance(UbuntuMaintenance):
708+ """ Represents the support timeframe for a 10.04 (lucid) LTS release,
709+ the exact rules differ from LTS release to LTS release
710+ """
711+
712+ # lts support timeframe, order is important, least supported must be last
713+ # time, seeds
714+ SUPPORT_TIMEFRAME = [
715+ ("5y", UbuntuMaintenance.SERVER_SEEDS),
716+ ("3y", UbuntuMaintenance.DESKTOP_SEEDS),
717+ ("18m", UbuntuMaintenance.SUPPORTED_SEEDS),
718+ ]
719+
720+ # on a LTS this is significant, it defines what names get LTS support
721+ DISTRO_NAMES = [
722+ "ubuntu",
723+ "kubuntu",
724+ ]
725+
726+
727+class PreciseUbuntuMaintenance(UbuntuMaintenance):
728+ """ The support timeframe for the 12.04 (precise) LTS release.
729+ This changes the timeframe for desktop packages from 3y to 5y
730+ """
731+
732+ # lts support timeframe, order is important, least supported must be last
733+ # time, seeds
734+ SUPPORT_TIMEFRAME = [
735+ ("5y", UbuntuMaintenance.SERVER_SEEDS),
736+ ("5y", UbuntuMaintenance.DESKTOP_SEEDS),
737+ ("18m", UbuntuMaintenance.SUPPORTED_SEEDS),
738+ ]
739+
740+ # on a LTS this is significant, it defines what names get LTS support
741+ DISTRO_NAMES = [
742+ "ubuntu",
743+ "kubuntu",
744+ ]
745+
746+
747+class QuantalUbuntuMaintenance(UbuntuMaintenance):
748+
749+ SUPPORT_TIMEFRAME = [
750+ ("18m", UbuntuMaintenance.SUPPORTED_SEEDS),
751+ ]
752+
753+
754+OneiricUbuntuMaintenance = QuantalUbuntuMaintenance
755+
756+
757+class TrustyUbuntuMaintenance(UbuntuMaintenance):
758+ """ The support timeframe for the 14.04 (trusty) LTS release.
759+ This changes the timeframe for desktop packages from 3y to 5y
760+ """
761+
762+ # lts support timeframe, order is important, least supported must be last
763+ # time, seeds
764+ SUPPORT_TIMEFRAME = [
765+ ("5y", UbuntuMaintenance.SERVER_SEEDS),
766+ ("5y", UbuntuMaintenance.DESKTOP_SEEDS),
767+ ("9m", UbuntuMaintenance.SUPPORTED_SEEDS),
768+ ]
769+
770+ SUPPORT_TIMEFRAME_SHORT = [
771+ ("3y", UbuntuMaintenance.SERVER_SEEDS),
772+ ("3y", UbuntuMaintenance.DESKTOP_SEEDS),
773+ ("9m", UbuntuMaintenance.SUPPORTED_SEEDS),
774+ ]
775+
776+ # on a LTS this is significant, it defines what names get LTS support
777+ #
778+ # Kylin is not in this list, because it's not seed-managed =/,
779+ # it is LTS however as per 2014-03-17 Ubuntu TechBoard meeting
780+ DISTRO_NAMES = [
781+ "ubuntu",
782+ "kubuntu",
783+ "edubuntu",
784+ ]
785+
786+ DISTRO_NAMES_SHORT = [
787+ "ubuntu-gnome",
788+ "xubuntu",
789+ "mythbuntu",
790+ "ubuntustudio",
791+ "lubuntu",
792+ ]
793+
794+
795+# Names of the distribution releases that are not supported by this
796+# tool. All later versions are supported.
797+UNSUPPORTED_DISTRO_RELEASED = [
798+ "dapper",
799+ "edgy",
800+ "feisty",
801+ "gutsy",
802+ "hardy",
803+ "intrepid",
804+ "jaunty",
805+ "karmic",
806+ ]
807+
808+
809+# germinate output base directory
810+BASE_URL = os.environ.get(
811+ "MAINTENANCE_CHECK_BASE_URL",
812+ "http://people.canonical.com/~ubuntu-archive/germinate-output/")
813+
814+# hints dir url, hints file is "$distro.hints" by default
815+# (e.g. lucid.hints)
816+HINTS_DIR_URL = os.environ.get(
817+ "MAINTENANCE_CHECK_HINTS_DIR_URL",
818+ "http://people.canonical.com/"
819+ "~ubuntu-archive/seeds/platform.%s/SUPPORTED_HINTS")
820+
821+# we need the archive root to parse the Sources file to support
822+# by-source hints
823+ARCHIVE_ROOT = os.environ.get(
824+ "MAINTENANCE_CHECK_ARCHIVE_ROOT", "http://archive.ubuntu.com/ubuntu")
825+
826+# support timeframe tag used in the Packages file
827+SUPPORT_TAG = "Supported"
828+
829+
830+def get_binaries_for_source_pkg(srcname):
831+ """ Return all binary package names for the given source package name.
832+
833+ :param srcname: The source package name.
834+ :return: A list of binary package names.
835+ """
836+ pkgnames = set()
837+ recs = apt_pkg.SourceRecords()
838+ while recs.lookup(srcname):
839+ for binary in recs.binaries:
840+ pkgnames.add(binary)
841+ return pkgnames
842+
843+
844+def expand_src_pkgname(pkgname):
845+ """ Expand a package name if it is prefixed with src.
846+
847+ If the package name is prefixed with src it will be expanded
848+ to a list of binary package names. Otherwise the original
849+ package name will be returned.
850+
851+ :param pkgname: The package name (that may include src:prefix).
852+ :return: A list of binary package names (the list may be one element
853+ long).
854+ """
855+ if not pkgname.startswith("src:"):
856+ return [pkgname]
857+ return get_binaries_for_source_pkg(pkgname.split("src:")[1])
858+
859+
860+def create_and_update_deb_src_source_list(distroseries):
861+ """ Create sources.list and update cache.
862+
863+ This creates a sources.list file with deb-src entries for a given
864+ distroseries and apt.Cache.update() to make sure the data is up-to-date.
865+ :param distro: The code name of the distribution series (e.g. lucid).
866+ :return: None
867+ :raises: IOError: When cache update fails.
868+ """
869+ # apt root dir
870+ rootdir = "./aptroot.%s" % distroseries
871+ sources_list_dir = os.path.join(rootdir, "etc", "apt")
872+ if not os.path.exists(sources_list_dir):
873+ os.makedirs(sources_list_dir)
874+ sources_list = open(os.path.join(sources_list_dir, "sources.list"), "w")
875+ for pocket in [
876+ "%s" % distroseries,
877+ "%s-updates" % distroseries,
878+ "%s-security" % distroseries]:
879+ sources_list.write(
880+ "deb-src %s %s main restricted\n" % (
881+ ARCHIVE_ROOT, pocket))
882+ sources_list.write(
883+ "deb %s %s main restricted\n" % (
884+ ARCHIVE_ROOT, pocket))
885+ sources_list.close()
886+    # Create the required dirs/files for apt.Cache(rootdir) to work on
887+    # older versions of python-apt; this can be removed once lucid is used.
888+ for d in ["var/lib/dpkg",
889+ "var/cache/apt/archives/partial",
890+ "var/lib/apt/lists/partial"]:
891+ if not os.path.exists(os.path.join(rootdir, d)):
892+ os.makedirs(os.path.join(rootdir, d))
893+ if not os.path.exists(os.path.join(rootdir, "var/lib/dpkg/status")):
894+ open(os.path.join(rootdir, "var/lib/dpkg/status"), "w")
895+ # open cache with our just prepared rootdir
896+ cache = apt.Cache(rootdir=rootdir)
897+ try:
898+ cache.update()
899+ except SystemError:
900+ logging.exception("cache.update() failed")
901+
902+
903+def get_structure(distroname, version):
904+    """ Get structure file content for the named distro and distro version.
905+
906+    :param distroname: Name of the distribution (e.g. kubuntu, ubuntu, xubuntu).
907+    :param version: Code name of the distribution version (e.g. lucid).
908+    :return: A list of strings with the structure file content.
909+ """
910+ f = urllib2.urlopen("%s/%s.%s/structure" % (
911+ BASE_URL, distroname, version))
912+ structure = f.readlines()
913+ f.close()
914+ return structure
915+
916+
917+def expand_seeds(structure, seedname):
918+    """Expand a seed into its dependencies using the structure file.
919+
920+    :param structure: The content of the STRUCTURE file as a string list.
921+    :param seedname: The name of the seed to expand.
922+    :return: A set() of the seed's dependencies (excluding the original
923+    seedname).
924+ """
925+ seeds = []
926+ for line in structure:
927+ if line.startswith("%s:" % seedname):
928+ seeds += line.split(":")[1].split()
929+ for seed in seeds:
930+ seeds += expand_seeds(structure, seed)
931+ return set(seeds)
932+
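For reference, the recursion in expand_seeds() above bottoms out when a seed has no entries after its colon. A minimal standalone sketch with a toy STRUCTURE file (seed names here are illustrative, not from any real seed branch):

```python
def expand_seeds(structure, seedname):
    """Recursively collect the seeds that the named seed depends on,
    reading lines of the form "seed: dep1 dep2" from a STRUCTURE file."""
    seeds = []
    for line in structure:
        if line.startswith("%s:" % seedname):
            seeds += line.split(":")[1].split()
    for seed in seeds:
        seeds += expand_seeds(structure, seed)
    return set(seeds)

# A toy structure: desktop pulls in standard, which pulls in minimal.
structure = [
    "minimal:\n",
    "standard: minimal\n",
    "desktop: standard\n",
]
```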
933+
934+def get_packages_for_seeds(name, distro, seeds):
935+ """
936+    Get the packages for the given name (e.g. ubuntu) and distro release
937+    (e.g. lucid) that are in the given list of seeds.
938+    Returns a dict mapping each seed name to a set() of package names.
939+ """
940+ pkgs_in_seeds = {}
941+ for seed in seeds:
942+ pkgs_in_seeds[seed] = set()
943+ seedurl = "%s/%s.%s/%s" % (BASE_URL, name, distro, seed)
944+ logging.debug("looking for '%s'", seedurl)
945+ try:
946+ f = urllib2.urlopen(seedurl)
947+ for line in f:
948+ # Ignore lines that are not package names (headers etc).
949+ if line[0] < 'a' or line[0] > 'z':
950+ continue
951+ # Each line contains these fields:
952+ # (package, source, why, maintainer, size, inst-size)
953+ if options.source_packages:
954+ pkgname = line.split("|")[1]
955+ else:
956+ pkgname = line.split("|")[0]
957+ pkgs_in_seeds[seed].add(pkgname.strip())
958+ f.close()
959+ except Exception as e:
960+ logging.error("seed %s failed (%s)" % (seedurl, e))
961+ return pkgs_in_seeds
962+
963+
964+def what_seeds(pkgname, seeds):
965+ in_seeds = set()
966+ for s in seeds:
967+ if pkgname in seeds[s]:
968+ in_seeds.add(s)
969+ return in_seeds
970+
971+
972+def compare_support_level(x, y):
973+    """
974+    Compare two support level strings of the form 18m, 3y etc.
975+    :param x: the first support level
976+    :param y: the second support level
977+    :return: negative if x < y, zero if x == y, positive if x > y
978+ """
979+
980+ def support_to_int(support_time):
981+ """
982+        Helper that takes a support time string and converts it to
983+        an integer number of months for cmp().
984+ """
985+        # Allow strings like "5y (kubuntu-common)".
986+ x = support_time.split()[0]
987+ if x.endswith("y"):
988+ return 12 * int(x[0:-1])
989+ elif x.endswith("m"):
990+ return int(x[0:-1])
991+ else:
992+ raise ValueError("support time '%s' has to end with y or m" % x)
993+ return cmp(support_to_int(x), support_to_int(y))
994+
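The comparison above reduces every support level to a number of months. A Python 3 sketch of the same conversion (dropping the Python 2 cmp(), which the script uses), handy for seeing why "5y" outranks "18m":

```python
def support_to_months(support_time):
    """Convert a support time string like "18m" or "3y" to months.
    Strings such as "5y (kubuntu-common)" keep only the first token."""
    value = support_time.split()[0]
    if value.endswith("y"):
        return 12 * int(value[:-1])
    elif value.endswith("m"):
        return int(value[:-1])
    raise ValueError("support time %r has to end with y or m" % value)
```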
995+
996+def get_packages_support_time(structure, name, pkg_support_time,
997+ support_timeframe_list):
998+ """
999+    Take a structure file and a list of (timeframe, seedlist) pairs and
1000+    return a dict mapping package names to support timeframe strings.
1001+ """
1002+ for (timeframe, seedlist) in support_timeframe_list:
1003+ expanded = set()
1004+ for s in seedlist:
1005+ expanded.add(s)
1006+ expanded |= expand_seeds(structure, s)
1007+ pkgs_in_seeds = get_packages_for_seeds(name, distro, expanded)
1008+ for seed in pkgs_in_seeds:
1009+ for pkg in pkgs_in_seeds[seed]:
1010+                if pkg not in pkg_support_time:
1011+ pkg_support_time[pkg] = timeframe
1012+ else:
1013+ old_timeframe = pkg_support_time[pkg]
1014+ if compare_support_level(old_timeframe, timeframe) < 0:
1015+ logging.debug("overwriting %s from %s to %s" % (
1016+ pkg, old_timeframe, timeframe))
1017+ pkg_support_time[pkg] = timeframe
1018+ if options.with_seeds:
1019+ pkg_support_time[pkg] += " (%s)" % ", ".join(
1020+ what_seeds(pkg, pkgs_in_seeds))
1021+
1022+ return pkg_support_time
1023+
1024+
1025+if __name__ == "__main__":
1026+ parser = OptionParser()
1027+ parser.add_option("--with-seeds", "", default=False,
1028+ action="store_true",
1029+                      help="also show the seed(s) of each package that "
1030+                           "are responsible for its maintenance timeframe")
1031+ parser.add_option("--source-packages", "", default=False,
1032+ action="store_true",
1033+                      help="show as source packages")
1034+ parser.add_option("--hints-file", "", default=None,
1035+                      help="use a different hints file location")
1036+ (options, args) = parser.parse_args()
1037+
1038+ # init
1039+ if len(args) > 0:
1040+ distro = args[0]
1041+ if distro in UNSUPPORTED_DISTRO_RELEASED:
1042+ logging.error("only lucid or later is supported")
1043+ sys.exit(1)
1044+ else:
1045+ distro = "lucid"
1046+
1047+ # maintenance class to use
1048+ klass = globals().get("%sUbuntuMaintenance" % distro.capitalize())
1049+ if klass is None:
1050+ klass = UbuntuMaintenance
1051+ ubuntu_maintenance = klass()
1052+
1053+ # make sure our deb-src information is up-to-date
1054+ create_and_update_deb_src_source_list(distro)
1055+
1056+ if options.hints_file:
1057+ hints_file = options.hints_file
1058+ (schema, netloc, path, query, fragment) = urlparse.urlsplit(
1059+ hints_file)
1060+ if not schema:
1061+ hints_file = "file:%s" % path
1062+ else:
1063+ hints_file = HINTS_DIR_URL % distro
1064+
1065+ # go over the distros we need to check
1066+ pkg_support_time = {}
1067+ for names, support_timeframe in (
1068+ (ubuntu_maintenance.DISTRO_NAMES_SHORT, ubuntu_maintenance.SUPPORT_TIMEFRAME_SHORT),
1069+ (ubuntu_maintenance.DISTRO_NAMES, ubuntu_maintenance.SUPPORT_TIMEFRAME),
1070+ ):
1071+ for name in names:
1072+ # get basic structure file
1073+ try:
1074+ structure = get_structure(name, distro)
1075+ except urllib2.HTTPError:
1076+                logging.error("Cannot get structure for '%s'." % name)
1077+ continue
1078+
1079+ # get dicts of pkgname -> support timeframe string
1080+ get_packages_support_time(
1081+ structure, name, pkg_support_time, support_timeframe)
1082+
1083+ # now go over the bits in main that we have not seen (because
1084+ # they are not in any seed and got added manually into "main"
1085+ for arch in ubuntu_maintenance.PRIMARY_ARCHES:
1086+ rootdir = "./aptroot.%s" % distro
1087+ apt_pkg.config.set("APT::Architecture", arch)
1088+ cache = apt.Cache(rootdir=rootdir)
1089+ try:
1090+ cache.update()
1091+ except SystemError:
1092+ logging.exception("cache.update() failed")
1093+ cache.open()
1094+ for pkg in cache:
1095+ # ignore multiarch package names
1096+ if ":" in pkg.name:
1097+ continue
1098+            if pkg.name not in pkg_support_time:
1099+ pkg_support_time[pkg.name] = support_timeframe[-1][0]
1100+                logging.warn(
1101+                    "adding package %s (in main but not in any seed) "
1102+                    "with %s" % (pkg.name, pkg_support_time[pkg.name]))
1103+
1104+ # now check the hints file that is used to overwrite
1105+ # the default seeds
1106+ try:
1107+ for line in urllib2.urlopen(hints_file):
1108+ line = line.strip()
1109+ if not line or line.startswith("#"):
1110+ continue
1111+ try:
1112+ (raw_pkgname, support_time) = line.split()
1113+ for pkgname in expand_src_pkgname(raw_pkgname):
1114+ if support_time == 'unsupported':
1115+ try:
1116+ del pkg_support_time[pkgname]
1117+ sys.stderr.write("hints-file: marking %s "
1118+ "unsupported\n" % pkgname)
1119+ except KeyError:
1120+ pass
1121+ else:
1122+ if pkg_support_time.get(pkgname) != support_time:
1123+ sys.stderr.write(
1124+ "hints-file: changing %s from %s to %s\n" % (
1125+ pkgname, pkg_support_time.get(pkgname),
1126+ support_time))
1127+ pkg_support_time[pkgname] = support_time
1128+            except Exception:
1129+                logging.exception("cannot parse line '%s'" % line)
1130+ except urllib2.HTTPError as e:
1131+ if e.code != 404:
1132+ raise
1133+ sys.stderr.write("hints-file: %s gave 404 error\n" % hints_file)
1134+
1135+ # output suitable for the extra-override file
1136+ for pkgname in sorted(pkg_support_time.keys()):
1137+ # special case, the hints file may contain overrides that
1138+ # are arch-specific (like zsh-doc/armel)
1139+ if "/" in pkgname:
1140+ print "%s %s %s" % (
1141+ pkgname, SUPPORT_TAG, pkg_support_time[pkgname])
1142+ else:
1143+ # go over the supported arches, they are divided in
1144+ # first-class (PRIMARY) and second-class with different
1145+ # support levels
1146+ for arch in ubuntu_maintenance.SUPPORTED_ARCHES:
1147+ # ensure we do not overwrite arch-specific overwrites
1148+ pkgname_and_arch = "%s/%s" % (pkgname, arch)
1149+ if pkgname_and_arch in pkg_support_time:
1150+ break
1151+ if arch in ubuntu_maintenance.PRIMARY_ARCHES:
1152+ # arch with full LTS support
1153+ print "%s %s %s" % (
1154+ pkgname_and_arch, SUPPORT_TAG,
1155+ pkg_support_time[pkgname])
1156+ else:
1157+ # not a LTS supported architecture, gets only regular
1158+ # support_timeframe
1159+ print "%s %s %s" % (
1160+ pkgname_and_arch, SUPPORT_TAG,
1161+ ubuntu_maintenance.SUPPORT_TIMEFRAME[-1][0])
1162
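The hints file consumed above is line-oriented: comments start with "#", and each remaining line is a package name (optionally "src:"-prefixed, or arch-qualified like "zsh-doc/armel") followed by a support time or "unsupported". A minimal single-line parser sketch (a hypothetical helper, mirroring the loop above):

```python
def parse_hint_line(line):
    """Parse one hints-file line into a (pkgname, support_time) tuple,
    or None for blank lines and comments."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    pkgname, support_time = line.split()
    return pkgname, support_time
```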
1163=== added directory 'tests'
1164=== added file 'tests/__init__.py'
1165=== added directory 'tests/germinate-test-data'
1166=== added directory 'tests/germinate-test-data/mock-bin'
1167=== added file 'tests/germinate-test-data/mock-bin/lockfile'
1168--- tests/germinate-test-data/mock-bin/lockfile 1970-01-01 00:00:00 +0000
1169+++ tests/germinate-test-data/mock-bin/lockfile 2014-06-09 13:57:00 +0000
1170@@ -0,0 +1,8 @@
1171+#!/bin/sh
1172+
1173+# This is a mock "lockfile" command that is used during the tests.
1174+# There is no need to have the real lockfile installed (it's part
1175+# of procmail) because no concurrent germinate instances can be
1176+# running while we run the tests.
1177+
1178+exit 1
1179
1180=== added directory 'tests/germinate-test-data/mock-lp-root'
1181=== added directory 'tests/germinate-test-data/mock-lp-root/scripts'
1182=== added directory 'tests/germinate-test-data/mock-lp-root/scripts/ftpmaster-tools'
1183=== added file 'tests/germinate-test-data/mock-lp-root/scripts/ftpmaster-tools/lp-query-distro.py'
1184--- tests/germinate-test-data/mock-lp-root/scripts/ftpmaster-tools/lp-query-distro.py 1970-01-01 00:00:00 +0000
1185+++ tests/germinate-test-data/mock-lp-root/scripts/ftpmaster-tools/lp-query-distro.py 2014-06-09 13:57:00 +0000
1186@@ -0,0 +1,27 @@
1187+#!/usr/bin/python
1188+#
1189+# This is a mock version of the lp-query-distro.py script that
1190+# returns static data so that the cron.germinate shell script
1191+# can be tested.
1192+
1193+import sys
1194+
1195+
1196+def error_and_exit():
1197+ sys.stderr.write("ERROR: I'm a mock, I only support 'supported' as "
1198+ "argument\n")
1199+ sys.exit(1)
1200+
1201+
1202+def main(args):
1203+    # We only support a very limited subset of arguments; check them
1204+    # and error out if anything looks wrong.
1205+ if len(args) == 2:
1206+ distro = args[1]
1207+ if distro == "supported":
1208+ return "hardy jaunty karmic lucid maverick"
1209+ error_and_exit()
1210+
1211+
1212+if __name__ == "__main__":
1213+ print main(sys.argv)
1214
1215=== added file 'tests/germinate-test-data/mock-lp-root/scripts/generate-extra-overrides.py'
1216--- tests/germinate-test-data/mock-lp-root/scripts/generate-extra-overrides.py 1970-01-01 00:00:00 +0000
1217+++ tests/germinate-test-data/mock-lp-root/scripts/generate-extra-overrides.py 2014-06-09 13:57:00 +0000
1218@@ -0,0 +1,5 @@
1219+#! /usr/bin/python
1220+#
1221+# We don't need to run generate-extra-overrides.py when testing
1222+# maintenance-check.py. We could, but it would slow down the tests and its
1223+# output is not interesting here.
1224
1225=== added symlink 'tests/germinate-test-data/mock-lp-root/scripts/maintenance-check.py'
1226=== target is u'../../../../scripts/maintenance-check.py'
1227=== added file 'tests/run'
1228--- tests/run 1970-01-01 00:00:00 +0000
1229+++ tests/run 2014-06-09 13:57:00 +0000
1230@@ -0,0 +1,8 @@
1231+#! /bin/sh
1232+set -e
1233+
1234+if [ -z "$*" ]; then
1235+ set -- discover
1236+fi
1237+
1238+PYTHONPATH="$(readlink -f "$(dirname "$0")/../lib")" python -m unittest "$@"
1239
1240=== added file 'tests/test_cron_germinate.py'
1241--- tests/test_cron_germinate.py 1970-01-01 00:00:00 +0000
1242+++ tests/test_cron_germinate.py 2014-06-09 13:57:00 +0000
1243@@ -0,0 +1,226 @@
1244+# Copyright 2010-2014 Canonical Ltd. This software is licensed under the
1245+# GNU Affero General Public License version 3 (see the file LICENSE).
1246+
1247+"""This is a test for the cron.germinate script."""
1248+
1249+__metaclass__ = type
1250+
1251+import copy
1252+import gzip
1253+import os
1254+import subprocess
1255+
1256+from tests.testing import TestCase
1257+
1258+
1259+class TestCronGerminate(TestCase):
1260+
1261+ DISTRO_NAMES = ["platform", "ubuntu", "kubuntu", "netbook"]
1262+ DISTS = ["hardy", "lucid", "maverick"]
1263+ DEVELOPMENT_DIST = "natty"
1264+ COMPONENTS = ["main", "restricted", "universe", "multiverse"]
1265+ ARCHES = ["i386", "amd64", "armel", "powerpc"]
1266+ BASEPATH = os.path.abspath(os.path.dirname(__file__))
1267+ source_root = os.path.normpath(os.path.join(BASEPATH, ".."))
1268+
1269+ def setUp(self):
1270+ super(TestCronGerminate, self).setUp()
1271+
1272+ # Setup a temp archive directory and populate it with the right
1273+ # sub-directories.
1274+ self.tmpdir = self.makeTemporaryDirectory()
1275+ self.archive_dir = self.setup_mock_archive_environment()
1276+ self.ubuntu_misc_dir = os.path.join(self.archive_dir, "ubuntu-misc")
1277+ self.ubuntu_germinate_dir = os.path.join(
1278+ self.archive_dir, "ubuntu-germinate")
1279+ # Create a mock archive environment for all the distros we support and
1280+ # also include "updates" and "security".
1281+ for dist in self.DISTS + [self.DEVELOPMENT_DIST]:
1282+ self.populate_mock_archive_environment(
1283+ self.archive_dir, self.COMPONENTS, self.ARCHES, dist)
1284+ for component in ["security", "updates"]:
1285+ self.populate_mock_archive_environment(
1286+ self.archive_dir, self.COMPONENTS, self.ARCHES,
1287+ "%s-%s" % (dist, component))
1288+        # Generate test dummies for maintenance-check.py; if this is set
1289+        # to None instead, the tests will use the network to run against
1290+        # the real data.
1291+ self.germinate_output_dir = self.setup_mock_germinate_output()
1292+
1293+ def create_directory_if_missing(self, directory):
1294+ """Create the given directory if it does not exist."""
1295+ if not os.path.exists(directory):
1296+ os.makedirs(directory)
1297+
1298+ def create_directory_list_if_missing(self, directory_list):
1299+ """Create the given directories from the list if they don't exist."""
1300+ for directory in directory_list:
1301+ self.create_directory_if_missing(directory)
1302+
1303+ def create_gzip_file(self, filepath, content=""):
1304+        """Create a gzipped file at the given path with the given content.
1305+
1306+        If no content is given an empty file is created.
1307+ """
1308+ gz = gzip.GzipFile(filepath, "w")
1309+ gz.write(content)
1310+ gz.close()
1311+
1312+ def create_file(self, filepath, content=""):
1313+        """Create a file at the given path with the given content.
1314+
1315+        If no content is given an empty file is created.
1316+ """
1317+ f = open(filepath, "w")
1318+ f.write(content)
1319+ f.close()
1320+
1321+ def setup_mock_germinate_output(self):
1322+ # empty structure files
1323+ germinate_output_dir = os.path.join(
1324+ self.tmpdir, "germinate-test-data", "germinate-output")
1325+ dirs = []
1326+ for distro_name in self.DISTRO_NAMES:
1327+ for distro_series in self.DISTS:
1328+ dirs.append(
1329+ os.path.join(
1330+ germinate_output_dir,
1331+ "%s.%s" % (distro_name, distro_series)))
1332+ self.create_directory_list_if_missing(dirs)
1333+ for dir in dirs:
1334+ self.create_file(os.path.join(dir, "structure"))
1335+ return germinate_output_dir
1336+
1337+ def setup_mock_archive_environment(self):
1338+ """
1339+        Create a mock archive environment and populate it with the
1340+ subdirectories that germinate will expect.
1341+ """
1342+ archive_dir = os.path.join(
1343+ self.tmpdir, "germinate-test-data", "ubuntu-archive")
1344+ ubuntu_misc_dir = os.path.join(archive_dir, "ubuntu-misc")
1345+ ubuntu_germinate_dir = os.path.join(archive_dir, "ubuntu-germinate")
1346+ ubuntu_dists_dir = os.path.join(archive_dir, "ubuntu", "dists")
1347+ self.create_directory_list_if_missing([
1348+ archive_dir,
1349+ ubuntu_misc_dir,
1350+ ubuntu_germinate_dir,
1351+ ubuntu_dists_dir])
1352+ return archive_dir
1353+
1354+ def populate_mock_archive_environment(self, archive_dir, components_list,
1355+ arches_list, current_devel_distro):
1356+ """
1357+        Populate a mock archive environment with empty source packages and
1358+ empty binary packages.
1359+ """
1360+ for component in components_list:
1361+ # Create the environment for the source packages.
1362+ targetdir = os.path.join(
1363+ archive_dir,
1364+ "ubuntu/dists/%s/%s/source" % (
1365+ current_devel_distro, component))
1366+ self.create_directory_if_missing(targetdir)
1367+ self.create_gzip_file(os.path.join(targetdir, "Sources.gz"))
1368+
1369+ # Create the environment for the binary packages.
1370+ for arch in arches_list:
1371+ for subpath in ["", "debian-installer"]:
1372+ targetdir = os.path.join(
1373+ self.archive_dir,
1374+ "ubuntu/dists/%s/%s/%s/binary-%s" % (
1375+ current_devel_distro, component, subpath, arch))
1376+ self.create_directory_if_missing(targetdir)
1377+ self.create_gzip_file(os.path.join(
1378+ targetdir, "Packages.gz"))
1379+
1380+ def create_fake_environment(self, basepath, archive_dir,
1381+ germinate_output_dir):
1382+ """
1383+ Create a fake process environment based on os.environ that sets
1384+ TEST_ARCHIVEROOT, TEST_SCRIPTSROOT, and TEST_LAUNCHPADROOT, and
1385+ modifies PATH to point to the mock lp-bin directory.
1386+ """
1387+ fake_environ = copy.copy(os.environ)
1388+ fake_environ["TEST_ARCHIVEROOT"] = os.path.abspath(
1389+ os.path.join(archive_dir, "ubuntu"))
1390+ fake_environ["TEST_SCRIPTSROOT"] = os.path.abspath(
1391+ os.path.join(basepath, "germinate-test-data/mock-lp-root"))
1392+ fake_environ["TEST_LAUNCHPADROOT"] = os.path.abspath(
1393+ os.path.join(basepath, "germinate-test-data/mock-lp-root"))
1394+ # Set the PATH in the fake environment so that our mock lockfile is
1395+ # used.
1396+ fake_environ["PATH"] = "%s:%s" % (
1397+ os.path.abspath(os.path.join(
1398+ basepath, "germinate-test-data/mock-bin")),
1399+ os.environ["PATH"])
1400+        # Test dummies for maintenance-check.py; they need to be
1401+        # in URI format.
1402+ if germinate_output_dir:
1403+ # redirect base url to the mock environment
1404+ fake_environ["MAINTENANCE_CHECK_BASE_URL"] = "file://%s" % \
1405+ germinate_output_dir
1406+ # point to mock archive root
1407+ archive_root_url = "file://%s" % os.path.abspath(
1408+ os.path.join(archive_dir, "ubuntu"))
1409+ fake_environ["MAINTENANCE_CHECK_ARCHIVE_ROOT"] = archive_root_url
1410+ # maintenance-check.py expects a format string
1411+ hints_file_url = (
1412+ germinate_output_dir + "/platform.%s/SUPPORTED_HINTS")
1413+ for distro in self.DISTS:
1414+ open(hints_file_url % distro, "w")
1415+ fake_environ["MAINTENANCE_CHECK_HINTS_DIR_URL"] = "file://%s" % \
1416+ os.path.abspath(hints_file_url)
1417+ # add hints override to test that feature
1418+            f = open(hints_file_url % "lucid", "a")
1419+ f.write("linux-image-2.6.32-25-server 5y\n")
1420+ f.close()
1421+ return fake_environ
1422+
1423+ def test_maintenance_update(self):
1424+ """
1425+        Test the maintenance-check.py portion of the soyuz cron.germinate
1426+        shell script by running it inside a fake environment, ensuring that
1427+        it updates the "Support" override information for apt-ftparchive
1428+ without destroying/modifying the information that the "germinate"
1429+ script added to it earlier.
1430+ """
1431+        # Write a canary into more-extra.override to ensure it is still
1432+        # alive after we have mucked around.
1433+ canary = "abrowser Task mock\n"
1434+ # Build fake environment based on the real one.
1435+ fake_environ = self.create_fake_environment(
1436+ self.BASEPATH, self.archive_dir, self.germinate_output_dir)
1437+ # Create mock override data files that include the canary string
1438+ # so that we can test later if it is still there.
1439+ for dist in self.DISTS:
1440+ self.create_file(
1441+ os.path.join(self.ubuntu_misc_dir,
1442+ "more-extra.override.%s.main" % dist),
1443+ canary)
1444+
1445+ # Run cron.germinate in the fake environment.
1446+ cron_germinate_path = os.path.join(
1447+ self.source_root, "scripts", "cron.germinate")
1448+ subprocess.call(
1449+ [cron_germinate_path], env=fake_environ, cwd=self.BASEPATH)
1450+
1451+ # And check the output it generated for correctness.
1452+ for dist in self.DISTS:
1453+ supported_override_file = os.path.join(
1454+ self.ubuntu_misc_dir,
1455+ "more-extra.override.%s.main.supported" % dist)
1456+ self.assertTrue(os.path.exists(supported_override_file),
1457+ "no override file created for '%s'" % dist)
1458+ main_override_file = os.path.join(
1459+ self.ubuntu_misc_dir,
1460+ "more-extra.override.%s.main" % dist)
1461+ self.assertIn(canary, open(main_override_file).read())
1462+
1463+ # Check here if we got the data from maintenance-check.py that
1464+ # we expected. This is a kernel name from lucid-updates and it
1465+ # will be valid for 5 years.
1466+ needle = "linux-image-2.6.32-25-server/i386 Supported 5y"
1467+ lucid_supported_override_file = os.path.join(
1468+ self.ubuntu_misc_dir, "more-extra.override.lucid.main")
1469+ self.assertIn(needle, open(lucid_supported_override_file).read())
1470
1471=== added file 'tests/test_generate_extra_overrides.py'
1472--- tests/test_generate_extra_overrides.py 1970-01-01 00:00:00 +0000
1473+++ tests/test_generate_extra_overrides.py 2014-06-09 13:57:00 +0000
1474@@ -0,0 +1,805 @@
1475+# Copyright 2011-2014 Canonical Ltd. This software is licensed under the
1476+# GNU Affero General Public License version 3 (see the file LICENSE).
1477+
1478+"""Test for the `generate-extra-overrides` script."""
1479+
1480+__metaclass__ = type
1481+
1482+from collections import defaultdict
1483+from functools import partial
1484+import gzip
1485+import logging
1486+from optparse import OptionValueError
1487+import os
1488+import tempfile
1489+try:
1490+ from unittest import mock
1491+except ImportError:
1492+ import mock
1493+
1494+from germinate import (
1495+ archive,
1496+ germinator,
1497+ seeds,
1498+ )
1499+
1500+from scripts.generate_extra_overrides import (
1501+ AtomicFile,
1502+ file_exists,
1503+ GenerateExtraOverrides,
1504+ ScriptFailure,
1505+ )
1506+from tests.testing import (
1507+ ensure_directory_exists,
1508+ factory,
1509+ file_contents,
1510+ open_for_writing,
1511+ run_script,
1512+ TestCase,
1513+ write_file,
1514+ )
1515+
1516+
1517+class TestAtomicFile(TestCase):
1518+ """Tests for the AtomicFile helper class."""
1519+
1520+ def test_atomic_file_creates_file(self):
1521+ # AtomicFile creates the named file with the requested contents.
1522+ self.useTempDir()
1523+ filename = factory.getUniqueString()
1524+ text = factory.getUniqueString()
1525+ with AtomicFile(filename) as test:
1526+ test.write(text)
1527+ self.assertEqual(text, file_contents(filename))
1528+
1529+ def test_atomic_file_removes_dot_new(self):
1530+ # AtomicFile does not leave .new files lying around.
1531+ self.useTempDir()
1532+ filename = factory.getUniqueString()
1533+ with AtomicFile(filename):
1534+ pass
1535+ self.assertFalse(file_exists("%s.new" % filename))
1536+
1537+
1538+class IndexStanzaFields:
1539+ """Store and format ordered Index Stanza fields."""
1540+
1541+ def __init__(self):
1542+ self._names_lower = set()
1543+ self.fields = []
1544+
1545+ def append(self, name, value):
1546+        """Append a (name, value) tuple to the internal list.
1547+
1548+ Then we can use the FIFO-like behaviour in makeOutput().
1549+ """
1550+ if name.lower() in self._names_lower:
1551+ return
1552+ self._names_lower.add(name.lower())
1553+ self.fields.append((name, value))
1554+
1555+ def extend(self, entries):
1556+ """Extend the internal list with the key-value pairs in entries.
1557+ """
1558+ for name, value in entries:
1559+ self.append(name, value)
1560+
1561+ def makeOutput(self):
1562+ """Return a line-by-line aggregation of appended fields.
1563+
1564+        Empty field values will cause the exclusion of the field.
1565+ The output order will preserve the insertion order, FIFO.
1566+ """
1567+ output_lines = []
1568+ for name, value in self.fields:
1569+ if not value:
1570+ continue
1571+
1572+ # Do not add separation space for the special file list fields.
1573+ if name not in ('Files', 'Checksums-Sha1', 'Checksums-Sha256'):
1574+ value = ' %s' % value
1575+
1576+ output_lines.append('%s:%s' % (name, value))
1577+
1578+ return '\n'.join(output_lines)
1579+
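The de-duplication and empty-value handling in IndexStanzaFields can be exercised standalone. A trimmed sketch of the same behaviour (simplified rendering, without the special-casing of the file list fields):

```python
class IndexStanzaFields:
    """Ordered (name, value) pairs, de-duplicated case-insensitively,
    rendered as a Sources/Packages index stanza."""

    def __init__(self):
        self._names_lower = set()
        self.fields = []

    def append(self, name, value):
        # First writer wins: later duplicates (in any case) are ignored.
        if name.lower() in self._names_lower:
            return
        self._names_lower.add(name.lower())
        self.fields.append((name, value))

    def makeOutput(self):
        # Fields with empty values are dropped from the output.
        return "\n".join(
            "%s: %s" % (name, value)
            for name, value in self.fields if value)

fields = IndexStanzaFields()
fields.append("Package", "hello")
fields.append("package", "ignored")  # case-insensitive duplicate, dropped
fields.append("Depends", None)       # empty value, omitted
fields.append("Version", "2.10")
```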
1580+
1581+class MockArchivePublisherBase(mock.MagicMock):
1582+
1583+ def getIndexStanza(self):
1584+ fields = self.buildIndexStanzaFields()
1585+ return fields.makeOutput()
1586+
1587+
1588+class MockSourcePackagePublishingHistory(MockArchivePublisherBase):
1589+ """A mock source package publishing history.
1590+
1591+ This is a crude amalgam of SourcePackageRelease and
1592+ SourcePackagePublishingHistory, but it'll do for us.
1593+ """
1594+
1595+ def __init__(self, sourcepackagename=None):
1596+ super(MockSourcePackagePublishingHistory, self).__init__()
1597+ if sourcepackagename is None:
1598+ sourcepackagename = factory.getUniqueString()
1599+ self.sourcepackagename = sourcepackagename
1600+ self.binaries = []
1601+ self.version = unicode(factory.getUniqueInteger()) + "version"
1602+ self.section = factory.getUniqueString(prefix="section")
1603+ self.dsc_maintainer_rfc822 = "%s <%s>" % (
1604+ factory.getUniqueString(), factory.getUniqueEmailAddress())
1605+ self.architecturehintlist = "all"
1606+ self.dsc_format = "1.0"
1607+
1608+ @property
1609+ def dsc_binaries(self):
1610+ return ", ".join(self.binaries)
1611+
1612+ def buildIndexStanzaFields(self):
1613+ fields = IndexStanzaFields()
1614+ fields.append("Package", self.sourcepackagename)
1615+ fields.append("Binary", self.dsc_binaries)
1616+ fields.append("Version", self.version)
1617+ fields.append("Section", self.section)
1618+ fields.append("Maintainer", self.dsc_maintainer_rfc822)
1619+ fields.append("Architecture", self.architecturehintlist)
1620+ fields.append("Format", self.dsc_format)
1621+ return fields
1622+
1623+
1624+class MockBinaryPackagePublishingHistory(MockArchivePublisherBase):
1625+ """A mock binary package publishing history.
1626+
1627+ This is a crude amalgam of BinaryPackageRelease and
1628+ BinaryPackagePublishingHistory, but it'll do for us.
1629+ """
1630+
1631+ def __init__(self, spph, das, binarypackagename=None, depends=None):
1632+ super(MockBinaryPackagePublishingHistory, self).__init__()
1633+ if binarypackagename is None:
1634+ binarypackagename = factory.getUniqueString()
1635+ self.spph = spph
1636+ self.das = das
1637+ self.binarypackagename = binarypackagename
1638+ self.priority = "optional"
1639+ self.depends = depends
1640+
1641+ def buildIndexStanzaFields(self):
1642+ fields = IndexStanzaFields()
1643+ fields.append("Package", self.binarypackagename)
1644+ fields.append("Source", self.spph.sourcepackagename)
1645+ fields.append("Priority", self.priority)
1646+ fields.append("Section", self.spph.section)
1647+ fields.append("Maintainer", self.spph.dsc_maintainer_rfc822)
1648+ fields.append("Architecture", self.das.architecture_tag)
1649+ fields.append("Version", self.spph.version)
1650+ fields.append("Depends", self.depends)
1651+ return fields
1652+
1653+
1654+class MockDistroArchSeries(mock.MagicMock):
1655+ def __init__(self, distroseries, architecture_tag=None):
1656+ super(MockDistroArchSeries, self).__init__()
1657+ if architecture_tag is None:
1658+ architecture_tag = factory.getUniqueString(prefix="arch")
1659+ self.distroseries = distroseries
1660+ self.architecture_tag = architecture_tag
1661+ self.test_binaries = defaultdict(list)
1662+
1663+
1664+class MockDistroSeries(mock.MagicMock):
1665+ def __init__(self, distribution, status="Active Development"):
1666+ super(MockDistroSeries, self).__init__()
1667+ self.name = factory.getUniqueString(prefix="distroseries")
1668+ self.distribution = distribution
1669+ self.component_names = []
1670+ self.status = status
1671+ self.architectures = []
1672+ self.test_sources = defaultdict(list)
1673+
1674+ def makeDistroArchSeries(self, enabled=True, *args, **kwargs):
1675+ das = MockDistroArchSeries(distroseries=self, *args, **kwargs)
1676+ if enabled:
1677+ self.architectures.append(das)
1678+ return das
1679+
1680+
1681+class MockDistribution(mock.MagicMock):
1682+ def __init__(self, name):
1683+ super(MockDistribution, self).__init__()
1684+ self.name = name
1685+ self.series = []
1686+
1687+ def makeDistroSeries(self, *args, **kwargs):
1688+ series = MockDistroSeries(distribution=self, *args, **kwargs)
1689+ self.series.append(series)
1690+ return series
1691+
1692+
1693+class MockLaunchpad(mock.MagicMock):
1694+ def __init__(self):
1695+ super(MockLaunchpad, self).__init__(name="Launchpad")
1696+ self.distributions = {}
1697+
1698+ def makeDistribution(self, name=None):
1699+ if name is None:
1700+ name = factory.getUniqueString()
1701+ distribution = MockDistribution(name)
1702+ self.distributions[name] = distribution
1703+ return distribution
1704+
1705+
1706+class TestGenerateExtraOverrides(TestCase):
1707+ """Tests for the actual `GenerateExtraOverrides` script."""
1708+
1709+ def setUp(self):
1710+ super(TestGenerateExtraOverrides, self).setUp()
1711+ self.seeddir = self.makeTemporaryDirectory()
1712+ self.launchpad = MockLaunchpad()
1713+ # XXX cjwatson 2011-12-06 bug=694140: Make sure germinate doesn't
1714+ # lose its loggers between tests, due to Launchpad's messing with
1715+ # global log state.
1716+ archive._logger = logging.getLogger("germinate.archive")
1717+ germinator._logger = logging.getLogger("germinate.germinator")
1718+ seeds._logger = logging.getLogger("germinate.seeds")
1719+
1720+ def assertFilesEqual(self, expected_path, observed_path):
1721+ self.assertEqual(
1722+ file_contents(expected_path), file_contents(observed_path))
1723+
1724+ def makeDistro(self):
1725+ """Create a distribution for testing.
1726+
1727+ The distribution will have a root directory set up, which will
1728+ be cleaned up after the test. It will have an attached archive.
1729+ """
1730+ publish_root_dir = self.makeTemporaryDirectory()
1731+ os.environ["ARCHIVEROOTS"] = publish_root_dir
1732+ return self.launchpad.makeDistribution()
1733+
1734+ def makeScript(self, distribution, run_setup=True, extra_args=None):
1735+ """Create a script for testing."""
1736+ test_args = []
1737+ if distribution is not None:
1738+ test_args.extend(["-d", distribution.name])
1739+ if extra_args is not None:
1740+ test_args.extend(extra_args)
1741+ script = GenerateExtraOverrides(test_args=test_args)
1742+ for handler in list(script.logger.handlers):
1743+ script.logger.removeHandler(handler)
1744+ script.logger.addHandler(logging.NullHandler())
1745+ script.login = lambda: self.launchpad
1746+ if distribution is not None and run_setup:
1747+ script.setUp()
1748+ else:
1749+ script.distribution = distribution
1750+ return script
1751+
1752+ def setUpDistroAndScript(self, series_statuses=["Active Development"],
1753+ **kwargs):
1754+ """Helper wrapping distro and script setup."""
1755+ self.distro = self.makeDistro()
1756+ self.distroseries = [
1757+ self.distro.makeDistroSeries(status=status)
1758+ for status in series_statuses]
1759+ self.script = self.makeScript(self.distro, **kwargs)
1760+
1761+ def setUpComponent(self, component=None):
1762+ """Create a component and attach it to all distroseries."""
1763+ if component is None:
1764+ component = factory.getUniqueString()
1765+ for distroseries in self.distroseries:
1766+ distroseries.component_names.append(component)
1767+ return component
1768+
1769+ def makePackage(self, component, dases, **kwargs):
1770+ """Create a published source and binary package for testing."""
1771+ package_name = factory.getUniqueString()
1772+ spph = MockSourcePackagePublishingHistory(
1773+ sourcepackagename=package_name)
1774+ dases[0].distroseries.test_sources[component].append(spph)
1775+ for das in dases:
1776+ bpph = MockBinaryPackagePublishingHistory(
1777+ spph, das, binarypackagename=package_name, **kwargs)
1778+ das.test_binaries[component].append(bpph)
1779+ return package_name
1780+
1781+ def makeIndexFiles(self, script, distroseries):
1782+ """Create a limited subset of index files for testing."""
1783+ for component in distroseries.component_names:
1784+ sources_path = os.path.join(
1785+ script.archiveroot, "dists", distroseries.name, component,
1786+ "source", "Sources.gz")
1787+ ensure_directory_exists(os.path.dirname(sources_path))
1788+ sources_index = gzip.GzipFile(sources_path, "wb")
1789+ for spp in distroseries.test_sources[component]:
1790+ stanza = spp.getIndexStanza().encode("utf-8") + "\n\n"
1791+ sources_index.write(stanza)
1792+ sources_index.close()
1793+
1794+ for arch in distroseries.architectures:
1795+ packages_path = os.path.join(
1796+ script.archiveroot, "dists", distroseries.name, component,
1797+ "binary-%s" % arch.architecture_tag, "Packages.gz")
1798+ ensure_directory_exists(os.path.dirname(packages_path))
1799+ packages_index = gzip.GzipFile(packages_path, "wb")
1800+ for bpp in arch.test_binaries[component]:
1801+ stanza = bpp.getIndexStanza().encode("utf-8") + "\n\n"
1802+ packages_index.write(stanza)
1803+ packages_index.close()
1804+
1805+ def composeSeedPath(self, flavour, series_name, seed_name):
1806+ return os.path.join(
1807+ self.seeddir, "%s.%s" % (flavour, series_name), seed_name)
1808+
1809+ def makeSeedStructure(self, flavour, series_name, seed_names,
1810+ seed_inherit={}):
1811+ """Create a simple seed structure file."""
1812+ structure_path = self.composeSeedPath(
1813+ flavour, series_name, "STRUCTURE")
1814+ with open_for_writing(structure_path, "w") as structure:
1815+ for seed_name in seed_names:
1816+ inherit = seed_inherit.get(seed_name, [])
1817+ line = "%s: %s" % (seed_name, " ".join(inherit))
1818+ print >>structure, line.strip()
1819+
1820+ def makeSeed(self, flavour, series_name, seed_name, entries,
1821+ headers=None):
1822+ """Create a simple seed file."""
1823+ seed_path = self.composeSeedPath(flavour, series_name, seed_name)
1824+ with open_for_writing(seed_path, "w") as seed:
1825+ if headers is not None:
1826+ for header in headers:
1827+ print >>seed, header
1828+ print >>seed
1829+ for entry in entries:
1830+ print >>seed, " * %s" % entry
1831+
1832+ def getTaskNameFromSeed(self, script, flavour, series_name, seed,
1833+ primary_flavour):
1834+ """Use script to parse a seed and return its task name."""
1835+ seed_path = self.composeSeedPath(flavour, series_name, seed)
1836+ with open(seed_path) as seed_text:
1837+ task_headers = script.parseTaskHeaders(seed_text)
1838+ return script.getTaskName(
1839+ task_headers, flavour, seed, primary_flavour)
1840+
1841+ def getTaskSeedsFromSeed(self, script, flavour, series_name, seed):
1842+ """Use script to parse a seed and return its task seed list."""
1843+ seed_path = self.composeSeedPath(flavour, series_name, seed)
1844+ with open(seed_path) as seed_text:
1845+ task_headers = script.parseTaskHeaders(seed_text)
1846+ return script.getTaskSeeds(task_headers, seed)
1847+
1848+ def test_name_is_consistent(self):
1849+ # Script instances for the same distro get the same name.
1850+ distro = self.launchpad.makeDistribution()
1851+ self.assertEqual(
1852+ GenerateExtraOverrides(test_args=["-d", distro.name]).name,
1853+ GenerateExtraOverrides(test_args=["-d", distro.name]).name)
1854+
1855+ def test_name_is_unique_for_each_distro(self):
1856+ # Script instances for different distros get different names.
1857+ self.assertNotEqual(
1858+ GenerateExtraOverrides(
1859+ test_args=["-d", self.launchpad.makeDistribution().name]).name,
1860+ GenerateExtraOverrides(
1861+ test_args=["-d", self.launchpad.makeDistribution().name]).name)
1862+
1863+ def test_requires_distro(self):
1864+ # The --distribution or -d argument is mandatory.
1865+ script = self.makeScript(None)
1866+ self.assertRaises(OptionValueError, script.processOptions)
1867+
1868+ def test_requires_real_distro(self):
1869+ # An incorrect distribution name is flagged as an invalid option
1870+ # value.
1871+ script = self.makeScript(
1872+ None, extra_args=["-d", factory.getUniqueString()])
1873+ self.assertRaises(OptionValueError, script.processOptions)
1874+
1875+ def test_looks_up_distro(self):
1876+ # The script looks up and keeps the distribution named on the
1877+ # command line.
1878+ self.setUpDistroAndScript()
1879+ self.assertEqual(self.distro, self.script.distribution)
1880+
1881+ def test_prefers_development_distro_series(self):
1882+ # The script prefers a DEVELOPMENT series for the named
1883+ # distribution over CURRENT and SUPPORTED series.
1884+ self.setUpDistroAndScript(
1885+ ["Supported", "Current Stable Release", "Active Development"])
1886+ self.assertEqual([self.distroseries[2]], self.script.series)
1887+
1888+ def test_permits_frozen_distro_series(self):
1889+ # If there is no DEVELOPMENT series, a FROZEN one will do.
1890+ self.setUpDistroAndScript(
1891+ ["Supported", "Current Stable Release", "Pre-release Freeze"])
1892+ self.assertEqual([self.distroseries[2]], self.script.series)
1893+
1894+ def test_requires_development_frozen_distro_series(self):
1895+ # If there is no DEVELOPMENT or FROZEN series, the script fails.
1896+ self.setUpDistroAndScript(
1897+ ["Supported", "Current Stable Release"], run_setup=False)
1898+ self.assertRaises(ScriptFailure, self.script.processOptions)
1899+
1900+ def test_multiple_development_frozen_distro_series(self):
1901+ # If there are multiple DEVELOPMENT or FROZEN series, they are all
1902+ # used.
1903+ self.setUpDistroAndScript([
1904+ "Active Development", "Active Development",
1905+ "Pre-release Freeze", "Pre-release Freeze",
1906+ ])
1907+ self.assertCountEqual(self.distroseries, self.script.series)
1908+
1909+ def test_components_exclude_partner(self):
1910+ # If a 'partner' component exists, it is excluded.
1911+ self.setUpDistroAndScript()
1912+ self.setUpComponent(component="main")
1913+ self.setUpComponent(component="partner")
1914+ self.assertEqual(1, len(self.script.series))
1915+ self.assertEqual(
1916+ ["main"], self.script.getComponents(self.script.series[0]))
1917+
1918+ def test_compose_output_path_in_germinateroot(self):
1919+ # Output files are written to the correct locations under
1920+ # germinateroot.
1921+ self.setUpDistroAndScript()
1922+ flavour = factory.getUniqueString()
1923+ arch = factory.getUniqueString()
1924+ base = factory.getUniqueString()
1925+ output = self.script.composeOutputPath(
1926+ flavour, self.distroseries[0].name, arch, base)
1927+ self.assertEqual(
1928+ "%s/%s_%s_%s_%s" % (
1929+ self.script.germinateroot, base, flavour,
1930+ self.distroseries[0].name, arch),
1931+ output)
1932+
1933+ def test_make_seed_structures_missing_seeds(self):
1934+ # makeSeedStructures ignores missing seeds.
1935+ self.setUpDistroAndScript()
1936+ series_name = self.distroseries[0].name
1937+ flavour = factory.getUniqueString()
1938+
1939+ structures = self.script.makeSeedStructures(
1940+ series_name, [flavour], seed_bases=["file://%s" % self.seeddir])
1941+ self.assertEqual({}, structures)
1942+
1943+ def test_make_seed_structures_empty_seed_structure(self):
1944+ # makeSeedStructures ignores an empty seed structure.
1945+ self.setUpDistroAndScript()
1946+ series_name = self.distroseries[0].name
1947+ flavour = factory.getUniqueString()
1948+ self.makeSeedStructure(flavour, series_name, [])
1949+
1950+ structures = self.script.makeSeedStructures(
1951+ series_name, [flavour], seed_bases=["file://%s" % self.seeddir])
1952+ self.assertEqual({}, structures)
1953+
1954+ def test_make_seed_structures_valid_seeds(self):
1955+ # makeSeedStructures reads valid seeds successfully.
1956+ self.setUpDistroAndScript()
1957+ series_name = self.distroseries[0].name
1958+ flavour = factory.getUniqueString()
1959+ seed = factory.getUniqueString()
1960+ self.makeSeedStructure(flavour, series_name, [seed])
1961+ self.makeSeed(flavour, series_name, seed, [])
1962+
1963+ structures = self.script.makeSeedStructures(
1964+ series_name, [flavour], seed_bases=["file://%s" % self.seeddir])
1965+ self.assertIn(flavour, structures)
1966+
1967+ def fetchGerminatedOverrides(self, script, distroseries, arch, flavours):
1968+ """Helper to call script.germinateArch and return overrides."""
1969+ structures = script.makeSeedStructures(
1970+ distroseries.name, flavours,
1971+ seed_bases=["file://%s" % self.seeddir])
1972+
1973+ override_fd, override_path = tempfile.mkstemp()
1974+ with os.fdopen(override_fd, "w") as override_file:
1975+ script.germinateArch(
1976+ override_file, distroseries.name,
1977+ script.getComponents(distroseries), arch, flavours,
1978+ structures)
1979+ return file_contents(override_path).splitlines()
1980+
1981+ def test_germinate_output(self):
1982+ # A single call to germinateArch produces output for all flavours on
1983+ # one architecture.
1984+ self.setUpDistroAndScript()
1985+ series_name = self.distroseries[0].name
1986+ component = self.setUpComponent()
1987+ das = self.distroseries[0].makeDistroArchSeries()
1988+ arch = das.architecture_tag
1989+ one = self.makePackage(component, [das])
1990+ two = self.makePackage(component, [das])
1991+ self.makeIndexFiles(self.script, self.distroseries[0])
1992+
1993+ flavour_one = factory.getUniqueString()
1994+ flavour_two = factory.getUniqueString()
1995+ seed = factory.getUniqueString()
1996+ self.makeSeedStructure(flavour_one, series_name, [seed])
1997+ self.makeSeed(flavour_one, series_name, seed, [one])
1998+ self.makeSeedStructure(flavour_two, series_name, [seed])
1999+ self.makeSeed(flavour_two, series_name, seed, [two])
2000+
2001+ overrides = self.fetchGerminatedOverrides(
2002+ self.script, self.distroseries[0], arch,
2003+ [flavour_one, flavour_two])
2004+ self.assertEqual([], overrides)
2005+
2006+ seed_dir_one = os.path.join(
2007+ self.seeddir, "%s.%s" % (flavour_one, series_name))
2008+ self.assertFilesEqual(
2009+ os.path.join(seed_dir_one, "STRUCTURE"),
2010+ self.script.composeOutputPath(
2011+ flavour_one, series_name, arch, "structure"))
2012+ self.assertTrue(file_exists(self.script.composeOutputPath(
2013+ flavour_one, series_name, arch, "all")))
2014+ self.assertTrue(file_exists(self.script.composeOutputPath(
2015+ flavour_one, series_name, arch, "all.sources")))
2016+ self.assertTrue(file_exists(self.script.composeOutputPath(
2017+ flavour_one, series_name, arch, seed)))
2018+
2019+ seed_dir_two = os.path.join(
2020+ self.seeddir, "%s.%s" % (flavour_two, series_name))
2021+ self.assertFilesEqual(
2022+ os.path.join(seed_dir_two, "STRUCTURE"),
2023+ self.script.composeOutputPath(
2024+ flavour_two, series_name, arch, "structure"))
2025+ self.assertTrue(file_exists(self.script.composeOutputPath(
2026+ flavour_two, series_name, arch, "all")))
2027+ self.assertTrue(file_exists(self.script.composeOutputPath(
2028+ flavour_two, series_name, arch, "all.sources")))
2029+ self.assertTrue(file_exists(self.script.composeOutputPath(
2030+ flavour_two, series_name, arch, seed)))
2031+
2032+ def test_germinate_output_task(self):
2033+ # germinateArch produces Task extra overrides.
2034+ self.setUpDistroAndScript()
2035+ series_name = self.distroseries[0].name
2036+ component = self.setUpComponent()
2037+ das = self.distroseries[0].makeDistroArchSeries()
2038+ arch = das.architecture_tag
2039+ one = self.makePackage(component, [das])
2040+ two = self.makePackage(component, [das], depends=one)
2041+ three = self.makePackage(component, [das])
2042+ self.makePackage(component, [das])
2043+ self.makeIndexFiles(self.script, self.distroseries[0])
2044+
2045+ flavour = factory.getUniqueString()
2046+ seed_one = factory.getUniqueString()
2047+ seed_two = factory.getUniqueString()
2048+ self.makeSeedStructure(flavour, series_name, [seed_one, seed_two])
2049+ self.makeSeed(
2050+ flavour, series_name, seed_one, [two],
2051+ headers=["Task-Description: one"])
2052+ self.makeSeed(
2053+ flavour, series_name, seed_two, [three],
2054+ headers=["Task-Description: two"])
2055+
2056+ overrides = self.fetchGerminatedOverrides(
2057+ self.script, self.distroseries[0], arch, [flavour])
2058+ expected_overrides = [
2059+ "%s/%s Task %s" % (one, arch, seed_one),
2060+ "%s/%s Task %s" % (two, arch, seed_one),
2061+ "%s/%s Task %s" % (three, arch, seed_two),
2062+ ]
2063+ self.assertCountEqual(expected_overrides, overrides)
2064+
2065+ def test_task_name(self):
2066+ # The Task-Name field is honoured.
2067+ series_name = factory.getUniqueString()
2068+ package = factory.getUniqueString()
2069+ script = self.makeScript(None)
2070+
2071+ flavour = factory.getUniqueString()
2072+ seed = factory.getUniqueString()
2073+ task = factory.getUniqueString()
2074+ self.makeSeed(
2075+ flavour, series_name, seed, [package],
2076+ headers=["Task-Name: %s" % task])
2077+
2078+ observed_task = self.getTaskNameFromSeed(
2079+ script, flavour, series_name, seed, True)
2080+ self.assertEqual(task, observed_task)
2081+
2082+ def test_task_per_derivative(self):
2083+ # The Task-Per-Derivative field is honoured.
2084+ series_name = factory.getUniqueString()
2085+ package = factory.getUniqueString()
2086+ script = self.makeScript(None)
2087+
2088+ flavour_one = factory.getUniqueString()
2089+ flavour_two = factory.getUniqueString()
2090+ seed_one = factory.getUniqueString()
2091+ seed_two = factory.getUniqueString()
2092+ self.makeSeed(
2093+ flavour_one, series_name, seed_one, [package],
2094+ headers=["Task-Description: one"])
2095+ self.makeSeed(
2096+ flavour_one, series_name, seed_two, [package],
2097+ headers=["Task-Per-Derivative: 1"])
2098+ self.makeSeed(
2099+ flavour_two, series_name, seed_one, [package],
2100+ headers=["Task-Description: one"])
2101+ self.makeSeed(
2102+ flavour_two, series_name, seed_two, [package],
2103+ headers=["Task-Per-Derivative: 1"])
2104+
2105+ observed_task_one_one = self.getTaskNameFromSeed(
2106+ script, flavour_one, series_name, seed_one, True)
2107+ observed_task_one_two = self.getTaskNameFromSeed(
2108+ script, flavour_one, series_name, seed_two, True)
2109+ observed_task_two_one = self.getTaskNameFromSeed(
2110+ script, flavour_two, series_name, seed_one, False)
2111+ observed_task_two_two = self.getTaskNameFromSeed(
2112+ script, flavour_two, series_name, seed_two, False)
2113+
2114+ # seed_one is not per-derivative, so it is honoured only for
2115+ # flavour_one and has a global name.
2116+ self.assertEqual(seed_one, observed_task_one_one)
2117+ self.assertIsNone(observed_task_two_one)
2118+
2119+ # seed_two is per-derivative, so it is honoured for both flavours
2120+ # and has the flavour name prefixed.
2121+ self.assertEqual(
2122+ "%s-%s" % (flavour_one, seed_two), observed_task_one_two)
2123+ self.assertEqual(
2124+ "%s-%s" % (flavour_two, seed_two), observed_task_two_two)
2125+
2126+ def test_task_seeds(self):
2127+ # The Task-Seeds field is honoured.
2128+ series_name = factory.getUniqueString()
2129+ one = factory.getUniqueString()
2130+ two = factory.getUniqueString()
2131+ script = self.makeScript(None)
2132+
2133+ flavour = factory.getUniqueString()
2134+ seed_one = factory.getUniqueString()
2135+ seed_two = factory.getUniqueString()
2136+ self.makeSeed(flavour, series_name, seed_one, [one])
2137+ self.makeSeed(
2138+ flavour, series_name, seed_two, [two],
2139+ headers=["Task-Seeds: %s" % seed_one])
2140+
2141+ task_seeds = self.getTaskSeedsFromSeed(
2142+ script, flavour, series_name, seed_two)
2143+ self.assertCountEqual([seed_one, seed_two], task_seeds)
2144+
2145+ def test_germinate_output_build_essential(self):
2146+ # germinateArch produces Build-Essential extra overrides.
2147+ self.setUpDistroAndScript()
2148+ series_name = self.distroseries[0].name
2149+ component = self.setUpComponent()
2150+ das = self.distroseries[0].makeDistroArchSeries()
2151+ arch = das.architecture_tag
2152+ package = self.makePackage(component, [das])
2153+ self.makeIndexFiles(self.script, self.distroseries[0])
2154+
2155+ flavour = factory.getUniqueString()
2156+ seed = "build-essential"
2157+ self.makeSeedStructure(flavour, series_name, [seed])
2158+ self.makeSeed(flavour, series_name, seed, [package])
2159+
2160+ overrides = self.fetchGerminatedOverrides(
2161+ self.script, self.distroseries[0], arch, [flavour])
2162+ self.assertCountEqual(
2163+ ["%s/%s Build-Essential yes" % (package, arch)], overrides)
2164+
2165+ def test_removes_only_stale_files(self):
2166+ # removeStaleOutputs removes only stale germinate output files.
2167+ self.setUpDistroAndScript()
2168+ series_name = self.distroseries[0].name
2169+ seed_old_file = "old_flavour_%s_i386" % series_name
2170+ seed_new_file = "new_flavour_%s_i386" % series_name
2171+ other_file = "other-file"
2172+ output = partial(os.path.join, self.script.germinateroot)
2173+ for base in (seed_old_file, seed_new_file, other_file):
2174+ write_file(output(base), "")
2175+ self.script.removeStaleOutputs(series_name, set([seed_new_file]))
2176+ self.assertFalse(os.path.exists(output(seed_old_file)))
2177+ self.assertTrue(os.path.exists(output(seed_new_file)))
2178+ self.assertTrue(os.path.exists(output(other_file)))
2179+
2180+ def test_process_missing_seeds(self):
2181+ # The script ignores series with no seed structures.
2182+ flavour = factory.getUniqueString()
2183+ self.setUpDistroAndScript(
2184+ ["Active Development", "Active Development"], extra_args=[flavour])
2185+ self.setUpComponent()
2186+ self.distroseries[0].makeDistroArchSeries()
2187+ self.distroseries[1].makeDistroArchSeries()
2188+ self.makeIndexFiles(self.script, self.distroseries[1])
2189+ seed = factory.getUniqueString()
2190+ self.makeSeedStructure(flavour, self.distroseries[1].name, [seed])
2191+ self.makeSeed(flavour, self.distroseries[1].name, seed, [])
2192+
2193+ self.script.process(seed_bases=["file://%s" % self.seeddir])
2194+ self.assertFalse(os.path.exists(os.path.join(
2195+ self.script.miscroot,
2196+ "more-extra.override.%s.main" % self.distroseries[0].name)))
2197+ self.assertTrue(os.path.exists(os.path.join(
2198+ self.script.miscroot,
2199+ "more-extra.override.%s.main" % self.distroseries[1].name)))
2200+
2201+ def test_process_removes_only_stale_files(self):
2202+ # The script removes only stale germinate output files.
2203+ flavour = factory.getUniqueString()
2204+ self.setUpDistroAndScript(extra_args=[flavour])
2205+ series_name = self.distroseries[0].name
2206+ self.setUpComponent()
2207+ das = self.distroseries[0].makeDistroArchSeries()
2208+ arch = das.architecture_tag
2209+ self.makeIndexFiles(self.script, self.distroseries[0])
2210+
2211+ seed_old = factory.getUniqueString()
2212+ seed_new = factory.getUniqueString()
2213+ self.makeSeedStructure(flavour, series_name, [seed_old])
2214+ self.makeSeed(flavour, series_name, seed_old, [])
2215+ self.script.process(seed_bases=["file://%s" % self.seeddir])
2216+ output = partial(
2217+ self.script.composeOutputPath, flavour, series_name, arch)
2218+ self.assertTrue(os.path.exists(output(seed_old)))
2219+ self.makeSeedStructure(flavour, series_name, [seed_new])
2220+ self.makeSeed(flavour, series_name, seed_new, [])
2221+ self.script.process(seed_bases=["file://%s" % self.seeddir])
2222+ self.assertTrue(os.path.exists(self.script.log_file))
2223+ self.assertTrue(os.path.exists(output("structure")))
2224+ self.assertTrue(os.path.exists(output("all")))
2225+ self.assertTrue(os.path.exists(output("all.sources")))
2226+ self.assertTrue(os.path.exists(output(seed_new)))
2227+ self.assertFalse(os.path.exists(output(seed_old)))
2228+
2229+ def test_process_skips_disabled_distroarchseries(self):
2230+ # The script does not generate overrides for disabled DistroArchSeries.
2231+ flavour = factory.getUniqueString()
2232+ self.setUpDistroAndScript(extra_args=[flavour])
2233+ das = self.distroseries[0].makeDistroArchSeries()
2234+ self.distroseries[0].makeDistroArchSeries(enabled=False)
2235+ self.script.generateExtraOverrides = mock.MagicMock()
2236+ self.script.process()
2237+ self.assertEqual(1, self.script.generateExtraOverrides.call_count)
2238+ self.assertEqual(
2239+ [das.architecture_tag],
2240+ self.script.generateExtraOverrides.call_args[0][2])
2241+
2242+ def test_main(self):
2243+ # If run end-to-end, the script generates override files containing
2244+ # output for all architectures, and sends germinate's log output to
2245+ # a file.
2246+ flavour = factory.getUniqueString()
2247+ self.setUpDistroAndScript(extra_args=[flavour])
2248+ series_name = self.distroseries[0].name
2249+ component = self.setUpComponent()
2250+ das_one = self.distroseries[0].makeDistroArchSeries()
2251+ das_two = self.distroseries[0].makeDistroArchSeries()
2252+ package = self.makePackage(component, [das_one, das_two])
2253+ self.makeIndexFiles(self.script, self.distroseries[0])
2254+
2255+ seed = factory.getUniqueString()
2256+ self.makeSeedStructure(flavour, series_name, [seed])
2257+ self.makeSeed(
2258+ flavour, series_name, seed, [package],
2259+ headers=["Task-Description: task"])
2260+
2261+ self.script.process(seed_bases=["file://%s" % self.seeddir])
2262+ override_path = os.path.join(
2263+ self.script.miscroot, "more-extra.override.%s.main" % series_name)
2264+ expected_overrides = [
2265+ "%s/%s Task %s" % (package, das_one.architecture_tag, seed),
2266+ "%s/%s Task %s" % (package, das_two.architecture_tag, seed),
2267+ ]
2268+ self.assertCountEqual(
2269+ expected_overrides, file_contents(override_path).splitlines())
2270+
2271+ log_file = os.path.join(self.script.germinateroot, "germinate.output")
2272+ self.assertIn("Downloading file://", file_contents(log_file))
2273+
2274+ def test_run_script_help(self):
2275+ # The script will run stand-alone and generate help output.
2276+ retval, out, err = run_script(
2277+ "scripts/generate-extra-overrides.py", ["-h"])
2278+ self.assertEqual(0, retval)
2279+ self.assertEqual("", err)
2280
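The makeSeedStructure and makeSeed helpers above write germinate's input files. As a minimal sketch of the on-disk format they produce (the flavour, series, seed, and package names here are illustrative, not taken from the tests, which generate unique names via the factory):

```python
import os
import tempfile

# Seeds for a flavour/series pair live in a "<flavour>.<series>" directory.
flavour, series = "ubuntu", "trusty"
seeddir = tempfile.mkdtemp()
seed_dir = os.path.join(seeddir, "%s.%s" % (flavour, series))
os.makedirs(seed_dir)

# STRUCTURE maps each seed to the seeds it inherits from, matching the
# "name: inherited..." lines that makeSeedStructure writes.
with open(os.path.join(seed_dir, "STRUCTURE"), "w") as f:
    f.write("minimal:\n")
    f.write("desktop: minimal\n")

# A seed file holds optional headers, a blank separator line, then
# " * <package>" entries, matching what makeSeed writes.
with open(os.path.join(seed_dir, "desktop"), "w") as f:
    f.write("Task-Description: desktop\n")
    f.write("\n")
    f.write(" * some-package\n")
```

fetchGerminatedOverrides then points germinate at this layout via a `file://` seed base URL.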
2281=== added file 'tests/testing.py'
2282--- tests/testing.py 1970-01-01 00:00:00 +0000
2283+++ tests/testing.py 2014-06-09 13:57:00 +0000
2284@@ -0,0 +1,174 @@
2285+# Copyright 2011-2014 Canonical Ltd. This software is licensed under the
2286+# GNU Affero General Public License version 3 (see the file LICENSE).
2287+
2288+"""Test utilities.
2289+
2290+Many of these are taken more or less directly from Launchpad.
2291+"""
2292+
2293+__metaclass__ = type
2294+
2295+__all__ = [
2296+ "ensure_directory_exists",
2297+ "factory",
2298+ "file_contents",
2299+ "open_for_writing",
2300+ "run_script",
2301+ "TestCase",
2302+ "write_file",
2303+ ]
2304+
2305+import errno
2306+from itertools import count
2307+import logging
2308+import os
2309+import shutil
2310+import subprocess
2311+import sys
2312+import tempfile
2313+import unittest
2314+
2315+
2316+def ensure_directory_exists(directory, mode=0777):
2317+ """Create 'directory' if it doesn't exist.
2318+
2319+ :return: True if the directory had to be created, False otherwise.
2320+ """
2321+ try:
2322+ os.makedirs(directory, mode=mode)
2323+ except OSError as e:
2324+ if e.errno == errno.EEXIST:
2325+ return False
2326+ raise
2327+ return True
2328+
2329+
2330+def open_for_writing(filename, mode, dirmode=0777):
2331+ """Open 'filename' for writing, creating directories if necessary.
2332+
2333+ :param filename: The path of the file to open.
2334+ :param mode: The mode to open the filename with. Should be 'w', 'a' or
2335+ something similar. See ``open`` for more details. If you pass in
2336+ a read-only mode (e.g. 'r'), then we'll just accept that and return
2337+ a read-only file-like object.
2338+ :param dirmode: The mode to use to create directories, if necessary.
2339+ :return: A file-like object that can be used to write to 'filename'.
2340+ """
2341+ try:
2342+ return open(filename, mode)
2343+ except IOError as e:
2344+ if e.errno == errno.ENOENT:
2345+ os.makedirs(os.path.dirname(filename), mode=dirmode)
2346+ return open(filename, mode)
2347+
2348+
2349+def write_file(path, content):
2350+ with open_for_writing(path, 'w') as f:
2351+ f.write(content)
2352+
2353+
2354+def run_script(script_relpath, args, expect_returncode=0):
2355+ """Run a script for testing purposes.
2356+
2357+ :param script_relpath: The relative path to the script, from the tree
2358+ root.
2359+ :param args: Arguments to provide to the script.
2360+ :param expect_returncode: The return code expected. If a different value
2361+ is returned, an exception will be raised.
2362+ """
2363+ script = os.path.join(sys.path[0], script_relpath)
2364+ args = [script] + args
2365+ process = subprocess.Popen(
2366+ args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
2367+ stdout, stderr = process.communicate()
2368+ if process.returncode != expect_returncode:
2369+ raise AssertionError('Failed:\n%s\n%s' % (stdout, stderr))
2370+ return (process.returncode, stdout, stderr)
2371+
2372+
2373+def file_contents(path):
2374+ """Return the contents of the file at path."""
2375+ with open(path) as handle:
2376+ return handle.read()
2377+
2378+
2379+class ObjectFactory:
2380+
2381+ _unique_int_counter = count(100000)
2382+
2383+ def getUniqueEmailAddress(self):
2384+ return "%s@example.com" % self.getUniqueString("email")
2385+
2386+ def getUniqueInteger(self):
2387+ return next(self._unique_int_counter)
2388+
2389+ def getUniqueString(self, prefix=None):
2390+ if prefix is None:
2391+ prefix = "unique"
2392+ return "%s-%s" % (prefix, self.getUniqueInteger())
2393+
2394+
2395+factory = ObjectFactory()
2396+
2397+
2398+class TestCase(unittest.TestCase):
2399+
2400+ def setUp(self):
2401+ super(TestCase, self).setUp()
2402+ self.save_env = dict(os.environ)
2403+
2404+ def makeTemporaryDirectory(self):
2405+ """Create a temporary directory, and return its path."""
2406+ temp_dir = tempfile.mkdtemp()
2407+ self.addCleanup(shutil.rmtree, temp_dir)
2408+ return temp_dir
2409+
2410+ def useTempDir(self):
2411+ """Use a temporary directory for this test."""
2412+ tempdir = self.makeTemporaryDirectory()
2413+ cwd = os.getcwd()
2414+ os.chdir(tempdir)
2415+ self.addCleanup(os.chdir, cwd)
2416+ return tempdir
2417+
2418+ def resetEnvironment(self):
2419+ for key in set(os.environ.keys()) - set(self.save_env.keys()):
2420+ del os.environ[key]
2421+ for key, value in os.environ.items():
2422+ if value != self.save_env[key]:
2423+ os.environ[key] = self.save_env[key]
2424+ for key in set(self.save_env.keys()) - set(os.environ.keys()):
2425+ os.environ[key] = self.save_env[key]
2426+
2427+ def resetLogging(self):
2428+ # Remove all handlers from non-root loggers, and remove the loggers
2429+ # too.
2430+ loggerDict = logging.Logger.manager.loggerDict
2431+ for name, logger in list(loggerDict.items()):
2432+ if not isinstance(logger, logging.PlaceHolder):
2433+ for handler in list(logger.handlers):
2434+ logger.removeHandler(handler)
2435+ del loggerDict[name]
2436+
2437+ # Remove all handlers from the root logger
2438+ root = logging.getLogger('')
2439+ for handler in root.handlers:
2440+ root.removeHandler(handler)
2441+
2442+ # Set the root logger's log level back to the default level:
2443+ # WARNING.
2444+ root.setLevel(logging.WARNING)
2445+
2446+ # Clean out the guts of the logging module. We don't want handlers
2447+ # that have already been closed hanging around for the atexit
2448+ # handler to barf on, for example.
2449+ del logging._handlerList[:]
2450+ logging._handlers.clear()
2451+
2452+ def tearDown(self):
2453+ self.resetEnvironment()
2454+ self.resetLogging()
2455+
2456+ # Monkey-patch for Python 2/3 compatibility.
2457+ if not hasattr(unittest.TestCase, 'assertCountEqual'):
2458+ assertCountEqual = unittest.TestCase.assertItemsEqual
