Merge lp:~jtv/maas/import-to-tftp into lp:~maas-committers/maas/trunk
Status: Merged
Approved by: Jeroen T. Vermeulen
Approved revision: no longer in the source branch
Merged at revision: 649
Proposed branch: lp:~jtv/maas/import-to-tftp
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 426 lines (+396/-3), 3 files modified:
HACKING.txt (+3/-3), scripts/maas-import-pxe-files (+191/-0), src/maas/tests/test_maas_import_pxe_files.py (+202/-0)
To merge this branch: bzr merge lp:~jtv/maas/import-to-tftp
Related bugs: none
Reviewer: Raphaël Badin (community), Approve
Review via email: mp+110451@code.launchpad.net
Commit message
Script to import kernels/initrds for install netboots.
Description of the change
See the “PXE boot requirements” document for the reasoning and conventions behind this branch. The branch adds a script that downloads the files that MAAS needs to provide over TFTP for a node to boot into the installer.
(Only the kernel and initrd are needed for this use-case; the minimal netbooted system will download the installer from an Ubuntu archive and start executing it.)
In a Cobblerless world this script replaces most, but not quite all, of what the existing maas-import-isos script does. There's still the preseeds and kernel parameters to be dealt with, but I think those will need to be done with knowledge from the database. My guess right now is that it can be done separately from the downloads.
There is still one problem with this script: a race condition. Updates are skipped when there are no changes, and the files are replaced as near-atomically as possible, but that doesn't help downloads that are ongoing at the time. Even if tftpd keeps file handles open for ongoing downloads (not too likely for a stateless UDP-based protocol!) so that each file will produce a consistent byte stream, the download is still 2 files, so a node could still see a mismatch between the kernel and the initrd. We agreed to tolerate this for now, but I'll file a bug.
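The near-atomic replacement described above comes down to staging the new files next to the destination and then swapping with two same-filesystem renames; a minimal sketch (the paths and file names here are made up for illustration, not the script's real arguments):

```shell
# Sketch of the near-atomic swap: stage the new tree next to the old one,
# then exchange them with two quick renames.  "dest" and "name" are
# illustrative stand-ins.
dest=$(mktemp -d)
name=install
staging=$(mktemp -d)
cd "$staging"

# Simulate a previous run's files and a fresh download.
mkdir -p "$dest/$name"
echo old > "$dest/$name/linux"
mkdir "$name"
echo new > "$name/linux"

# Clear temporary names left over from any interrupted earlier run.
rm -rf -- "$dest/$name.old" "$dest/$name.new"
# Any cross-partition copying happens here, outside the critical window.
mv -- "$name" "$dest/$name.new"
# Critical window: two renames on the same filesystem.
mv -- "$dest/$name" "$dest/$name.old"
mv -- "$dest/$name.new" "$dest/$name"
# Cleanup, again outside the window.
rm -rf -- "$dest/$name.old"
```

Between the two renames the directory briefly does not exist at all, and a node that fetched its kernel before the swap can still pick up the new initrd afterwards; that mismatch is the race being tolerated.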
For testing purposes I had to introduce a test dependency on curl. Amazingly, wget does not support the “file://” URLs I needed for my test. So for testing (and testing only) I substitute curl. The script is written such that it should work with either.
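Concretely, the tests only override the script's DOWNLOAD setting; a quick illustration of the two commands being interchangeable for this purpose (the sample file here is made up):

```shell
# wget rejects file:// URLs, so the tests substitute curl.  All the
# script assumes about $DOWNLOAD is that it fetches a URL into the
# current directory.
src=$(mktemp -d)
printf 'kernel bits\n' > "$src/linux"

DOWNLOAD="curl -O --silent"    # the default is: wget --no-verbose
workdir=$(mktemp -d)
cd "$workdir"
$DOWNLOAD "file://$src/linux"
```

This is also why curl becomes a test dependency: it has to be installed for the suite to run.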
Jeroen
Robie Basak (racb) wrote:
Raphaël Badin (rvb) wrote:
Looks good. A few remarks:
[0]
99 + if [ -f $dest/pxelinux.0 ] && cmp pxelinux.0 $dest/pxelinux.0
I think you want to add "--quiet" to "cmp pxelinux.0 $dest/pxelinux.0" since you only care about the return code.
[1]
124 + rm -rf -- $dest/$name.old $dest/$name.new
How about using '-f' when moving files instead of cleaning up the files?
[2]
125 + # Put new downloads next to the old. If any file copying is needed
You've written the doc for this method without mentioning 'downloads'. I realize in practice this is used to move downloaded files but still, for consistency, I think you should s/downloads/
[3]
192 + cd -- $DOWNLOAD_DIR
Please consider using pushd/popd to go back to the original directory just before the script exits.
[4]
200 + # Replace the "base" with sub-architecture once we start supporting
201 + # those.
202 + arch_dir=
203 +
204 + mkdir -p -- $arch_dir
[…]
209 + rel_dir="$arch_dir/$release"
210 + mkdir -p -- $rel_dir
211 + update_install_files $rel_dir $arch $release
There is an additional white space on lines 202, 209 and 210.
[5]
259 + with open(os.path.join(path, name)) as infile:
Please consider using "open(os.path.join(path, name), 'rb')" for compatibility.
[6]
196 + umask a+r
That looks like an important feature of the script. Don't you think it's worth a test?
[7]
66 +TFTPROOT=${TFTPROOT:-/var/lib/tftpboot/}
and
202 + arch_dir=
arch_dir will be /var/lib/tftpboot//maas/... It's not really a problem but maybe you'll want to remove that '//'.
[8]
Same "problem" with the download path: it downloads http://
[9]
54 +# Ubuntu releases that are to be downloaded.
55 +SUPPORTED_RELEASES=$(distro-info --supported)
Just a question here, is MAAS really supporting pre-precise releases?
[10]
70 +DOWNLOAD=${DOWNLOAD:-wget --no-verbose}
You've used --no-verbose and not --quiet here. This means that the output of this script looks like this: http://
[11]
I've noticed you've left the existing 'maas-import-isos' untouched. I understand we probably want to keep it around until the Cobbler replacement is done. But don't you think we should file bugs to make sure we don't forget to cleanup things when the replacement will be completed? I'd like your opinion on this because I've got the exact same dilemma about preseed templates/snippets.
Jeroen T. Vermeulen (jtv) wrote:
Thanks for the unexpected additional review. I've made the changes you asked for:
> Can we please use "generic" instead of "base"? "base" sounds like it is used
> as the foundation for something else which I think is a bit misleading. The
> armhf subarchitectures correspond to kernel flavours and "generic" is the
> standard i386/amd64 flavour that we are moving to so I think this makes sense.
Just goes to show that no matter how many people you sanity-check with, there's always something you don't know about. I updated the code, and the PXE requirements document as well.
> ARCHIVE=${ARCHIVE:-http://archive.ubuntu.com/ubuntu/dists/} should skip the
> dists at the end - the standard way of specifying an archive is one level
> higher than this, without the dists. Then compose_installer_download_url would
> insert "dists" into the URL instead.
Done. Gratifying to see how few changes that took.
You also mentioned some changes you might want to undertake yourself:
> The list of supported arches will need to grow from amd64 and i386 to
> armhf/highbank, armhf/armadaxp etc. At this point, it might be cleanest to
> specify ARCHES as "amd64/generic i386/generic armhf/highbank" etc.
Yes, something like that. Right now we can't quite do that because these settings are shared with the old, Cobbler-based scripts.
> The archive needs to be selected based on the architecture in use, in order to
> switch from archive.u.c/ubuntu to ports.u.c/ubuntu-ports. Can this be made
> into a function, rather than directly a variable?
I suppose. But it's a worry for later... Other solutions may become apparent in the meantime.
> compose_installer_download_url will need to use a lookup table as the exact
> URL differs based on arch and release. The needed URLs don't necessarily end
> in pxelinux.0, linux and initrd.gz though, so I think this needs to be a bit
> more specialised and return the actual URL of the kernel image, initrd image
> and pxelinux.0 instead. For example: the armadaxp kernel image is called
> uImage because it is in a different format.
A bit disheartening, but I guess we'll just have to deal with it when the time comes.
Jeroen
Jeroen T. Vermeulen (jtv) wrote:
And on to Raphaël's review!
> 99 + if [ -f $dest/pxelinux.0 ] && cmp pxelinux.0 $dest/pxelinux.0
>
> I think you want to add "--quiet" to "cmp pxelinux.0 $dest/pxelinux.0" since
> you only care about the return code.
Oh yes, absolutely. That also means I no longer need to check for the file's existence. I only did that to suppress the output — the documentation for --quiet suggested that the error message for a missing file would still be there, as you'd normally expect anyway, so I didn't think to use it.
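For the record, cmp signals everything through its exit status, so the separate existence check is indeed redundant (the sample files below are made up):

```shell
# cmp --quiet prints no comparison output; it exits 0 for identical
# files, 1 for different files, and >1 for trouble such as a missing
# file (whose error message still goes to stderr).
tmp=$(mktemp -d)
printf 'same\n' > "$tmp/a"
printf 'same\n' > "$tmp/b"
printf 'other\n' > "$tmp/c"

if cmp --quiet "$tmp/a" "$tmp/b"; then echo "identical"; fi
if ! cmp --quiet "$tmp/a" "$tmp/c"; then echo "different"; fi
if ! cmp --quiet "$tmp/a" "$tmp/missing" 2>/dev/null; then echo "missing handled"; fi
```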
> [1]
>
> 124 + rm -rf -- $dest/$name.old $dest/$name.new
>
> How about using '-f' when moving files instead of cleaning up the files?
Won't work for (non-empty) directories. There's even an option to make “mv” act as if the destination were a file, but I couldn't find a way to make that do what I wanted either.
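The failure mode is easy to see in action; mv will not clobber a non-empty destination directory, with or without -f (directory names below are made up):

```shell
# "mv -f" only suppresses prompting; it cannot replace a non-empty
# destination directory, which is exactly the case install_dir handles.
tmp=$(mktemp -d)
mkdir -p "$tmp/dest/install" "$tmp/new/install"
echo old > "$tmp/dest/install/linux"
echo new > "$tmp/new/install/linux"

if ! mv -f -- "$tmp/new/install" "$tmp/dest/" 2>/dev/null
then
    echo "mv refused: directory not empty"
fi
```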
> [2]
>
> 125 + # Put new downloads next to the old. If any file copying is
> needed
>
> You've written the doc for this method without mentioning 'downloads'. I
> realize in practice this is used to move downloaded files but still, for
> consistency, I think you should s/downloads/
Oops. Older incarnation of the code, when this wasn't extracted yet. Fixed.
> [3]
>
> 192 + cd -- $DOWNLOAD_DIR
>
> Please consider using pushd/popd to go back to the original directory
> just before the script exits.
I did consider it at the time, but bear in mind that “cd” will only affect the process of the shell that was forked for the purpose of running the script. The caller isn't affected.
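The point can be demonstrated in a couple of lines; the cd happens in the child shell and evaporates with it:

```shell
# The import script runs as its own process, so its "cd" cannot change
# the working directory of whoever invoked it.  The sh -c call below
# stands in for running the script.
before=$(pwd)
sh -c 'cd /tmp'
after=$(pwd)
[ "$before" = "$after" ] && echo "caller's directory unchanged"
```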
> [4]
>
> 200 + # Replace the "base" with sub-architecture once we start
> supporting
> 201 + # those.
> 202 + arch_dir=
> 203 +
> 204 + mkdir -p -- $arch_dir
> […]
> 209 + rel_dir="$arch_dir/$release"
> 210 + mkdir -p -- $rel_dir
> 211 + update_install_files $rel_dir $arch $release
>
> There is an additional white space on lines 202, 209 and 210.
Took me a while to figure it out, since I didn't find any extra space in my editor: I got tabs I never asked for! All fixed now.
> [5]
>
> 259 + with open(os.path.join(path, name)) as infile:
>
> Please consider using "open(os.path.join(path, name), 'rb')" for
> compatibility.
Compatibility with what? Windows? Python 3? Unusual encodings? Given that this is for ASCII text files written from the same tests, what's the benefit of treating them as binary files?
> [6]
>
> 196 + umask a+r
>
> That looks like an important feature of the script. Don't you think it's
> worth a test?
Not particularly, no. It just means that the script will write world-readable files even if your umask doesn't normally do that. It doesn't affect its environment in any other way.
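Its effect is easy to inspect in isolation; starting from a restrictive umask, a+r clears only the read bits of the mask (the file name below is made up):

```shell
# "umask a+r" guarantees that newly created files are readable by
# everyone (e.g. the TFTP user), whatever umask the script inherited.
tmp=$(mktemp -d)
(
    umask 077      # an inherited umask that would make files private
    umask a+r      # clear the read bits from the mask (now 0033)
    touch "$tmp/pxelinux.0"
)
ls -l "$tmp/pxelinux.0"    # mode comes out as rw-r--r--
```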
> [7]
>
> 66 +TFTPROOT=${TFTPROOT:-/var/lib/tftpboot/}
> and
> 202 + arch_dir=
>
> arch_dir will be /var/lib/tftpboot//maas/... It's not really a
> problem but maybe you'll want to remove that '//'.
I felt the trailing slash on $TFTPROOT was clearer, but if you have a problem...
MAAS Lander (maas-lander) wrote:
There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.
Raphaël Badin (rvb) wrote:
> Compatibility with what? Windows? Python 3? Unusual encodings? Given that this is for ASCII text files written from
> the same tests, what's the benefit of treating them as binary files?
Yes, Windows. I guess it's a little bit stupid in this instance as you mention but nonetheless, it's probably a good habit.
> Remember: cleanup is a noun, clean up is a verb!
Rarg!
> I don't like to worry too much about these small things in advance: if cleanup fits in naturally with the other work,
> we'll just do it and not worry about it. If we discover that it doesn't, then that is a good time to file a bug
> or a card so we don't forget. Until then, piling up detailed bugs or cards will just cloud our view of the work
> ahead. And as always: no matter how clear-cut a remote problem seems now, expect our view of it to change by the
> time it becomes relevant.
Fair enough.
Preview Diff
1 | === modified file 'HACKING.txt' |
2 | --- HACKING.txt 2012-06-08 12:27:18 +0000 |
3 | +++ HACKING.txt 2012-06-15 13:33:19 +0000 |
4 | @@ -44,16 +44,16 @@ |
5 | python-formencode python-oauth python-oops python-oops-datedir-repo \ |
6 | python-twisted python-oops-wsgi python-oops-twisted \ |
7 | python-psycopg2 python-yaml python-convoy python-django-south \ |
8 | - python-avahi python-dbus python-celery python-tempita |
9 | + python-avahi python-dbus python-celery python-tempita distro-info |
10 | |
11 | Additionally, you need to install the following python libraries |
12 | for development convenience:: |
13 | |
14 | $ sudo apt-get install python-sphinx python-lxml python-pocket-lint |
15 | |
16 | -If you intend to run the test suite, you also need xvfb and firefox:: |
17 | +If you intend to run the test suite, you also need a few other things: |
18 | |
19 | - $ sudo apt-get install xvfb firefox avahi-utils |
20 | + $ sudo apt-get install xvfb firefox avahi-utils curl |
21 | |
22 | All other development dependencies are pulled automatically from `PyPI`_ |
23 | when buildout runs. |
24 | |
25 | === added file 'scripts/maas-import-pxe-files' |
26 | --- scripts/maas-import-pxe-files 1970-01-01 00:00:00 +0000 |
27 | +++ scripts/maas-import-pxe-files 2012-06-15 13:33:19 +0000 |
28 | @@ -0,0 +1,191 @@ |
29 | +#!/usr/bin/env bash |
30 | +# Copyright 2012 Canonical Ltd. This software is licensed under the |
31 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
32 | +# |
33 | +# Download static files needed for net-booting nodes through TFTP: |
34 | +# pre-boot loader, kernels, and initrd images. |
35 | +# |
36 | +# This script downloads the required files into the TFTP home directory |
37 | +# (by default, /var/lib/tftpboot). Run it with the necessary privileges |
38 | +# to write them there. |
39 | + |
40 | +# Exit immediately if a command exits with a non-zero status. |
41 | +set -o errexit |
42 | +# Treat unset variables as an error when substituting. |
43 | +set -o nounset |
44 | + |
45 | +# Load settings if available. |
46 | +settings="/etc/maas/import_isos" |
47 | +[ -r $settings ] && . $settings |
48 | +local_settings="$(pwd)/$settings" |
49 | +[ -r $local_settings ] && . $local_settings |
50 | + |
51 | +# Download location for Ubuntu releases. |
52 | +ARCHIVE=${ARCHIVE:-http://archive.ubuntu.com/ubuntu/} |
53 | + |
54 | +# Ubuntu releases that are to be downloaded. |
55 | +SUPPORTED_RELEASES=$(distro-info --supported) |
56 | +RELEASES=${RELEASES:-$SUPPORTED_RELEASES} |
57 | + |
58 | +# The current Ubuntu release. |
59 | +STABLE_RELEASE=$(distro-info --stable) |
60 | +CURRENT_RELEASE=${CURRENT_RELEASE:-$STABLE_RELEASE} |
61 | + |
62 | +# Supported architectures. |
63 | +ARCHES=${ARCHES:-amd64 i386} |
64 | + |
65 | +# TFTP root directory. (Don't let the "root" vs. "boot" confuse you.) |
66 | +TFTPROOT=${TFTPROOT:-/var/lib/tftpboot} |
67 | + |
68 | +# Command line to download a resource at a given URL into the current |
69 | +# directory. A wget command line will work here, but curl will do as well. |
70 | +DOWNLOAD=${DOWNLOAD:-wget --no-verbose} |
71 | + |
72 | + |
73 | +# Put together a full URL for where the installer files for architecture $1 |
74 | +# and release $2 can be downloaded. |
75 | +compose_installer_download_url() { |
76 | + local arch=$1 release=$2 |
77 | + local installer_url="$ARCHIVE/dists/$release/main/installer-$arch" |
78 | + echo "$installer_url/current/images/netboot/ubuntu-installer/$arch/" |
79 | +} |
80 | + |
81 | + |
82 | +# Download the pre-boot loader, pxelinux.0, for architecture $2 if it exists, |
83 | +# and if so, install it into directory $1. (Not all architectures need this |
84 | +# file, and there's rarely an urgent need for the very latest file, so if |
85 | +# the download fails this function just skips it.) |
86 | +update_pre_boot_loader() { |
87 | + local dest=$1 arch=$2 |
88 | + local url=$(compose_installer_download_url $arch $CURRENT_RELEASE) |
89 | + if ! $DOWNLOAD $url/pxelinux.0 |
90 | + then |
91 | + echo "No pre-boot loader for $arch; skipping." |
92 | + return |
93 | + fi |
94 | + |
95 | + # If the file has changed, move it into place (replacing any previous |
96 | + # version). Otherwise, leave the filesystem alone. |
97 | + if [ -f pxelinux.0 ] |
98 | + then |
99 | + if cmp --quiet pxelinux.0 $dest/pxelinux.0 |
100 | + then |
101 | + rm -f -- pxelinux.0 |
102 | + else |
103 | + echo "Updating pre-boot loader for $arch." |
104 | + mv -- pxelinux.0 $dest/ |
105 | + fi |
106 | + fi |
107 | +} |
108 | + |
109 | + |
110 | +# Move local directory $1 into directory $2, so that it becomes $2/$1. |
111 | +# If a directory of the same name already existed in $2, remove it along |
112 | +# with all its contents. |
113 | +# This will use "$2/$1.new" and "$2/$1.old" as temporary locations, which |
114 | +# will also be deleted. |
115 | +install_dir() { |
116 | + local name=$1 dest=$2 |
117 | + # Use the "old"/"new" directories to maximize speed and minimize |
118 | + # inconvenience: ensure that the new directory contents are on the |
119 | + # partition they will ultimately need to be on, then move the old |
120 | + # ones out of the way and immediately replace them with the new, and |
121 | + # finally remove the old ones outside of the critical path. |
122 | + # This will not resolve races with ongoing downloads by booting nodes. |
123 | + # It merely minimizes this script's side of the race window. |
124 | + rm -rf -- $dest/$name.old $dest/$name.new |
125 | + # Put new downloads next to the old. If any file copying is needed |
126 | + # because directories are on different partitions, it happens here. |
127 | + mv -- $name $dest/$name.new |
128 | + |
129 | + # Start of the critical window. |
130 | + |
131 | + # Move existing directory (if any) out of the way. |
132 | + [ -d $dest/$name ] && mv -- $dest/$name $dest/$name.old |
133 | + |
134 | + # Move new directory into place. |
135 | + mv -- $dest/$name.new $dest/$name |
136 | + |
137 | + # End of the critical window. |
138 | + |
139 | + # Clean up the old directory (if any). |
140 | + rm -rf -- $dest/$name.old |
141 | +} |
142 | + |
143 | + |
144 | +# Compare given file names between directories $1 and $2. Print "yes" if |
145 | +# they are all identical, or "no" if any of them are different or missing. |
146 | +identical_files() { |
147 | + local lhs=$1 rhs=$2 |
148 | + local file |
149 | + shift 2 |
150 | + for file in $* |
151 | + do |
152 | + if ! cmp --quiet $lhs/$file $rhs/$file |
153 | + then |
154 | + echo "no" |
155 | + return |
156 | + fi |
157 | + done |
158 | + echo "yes" |
159 | +} |
160 | + |
161 | + |
162 | +# Download kernel/initrd for installing Ubuntu release $3 for |
163 | +# architecture $2 into directory $1/install. |
164 | +update_install_files() { |
165 | + local dest=$1 arch=$2 release=$3 |
166 | + local files="initrd.gz linux" |
167 | + local url=$(compose_installer_download_url $arch $release) |
168 | + local file |
169 | + |
170 | + mkdir "install" |
171 | + pushd "install" >/dev/null |
172 | + for file in $files |
173 | + do |
174 | + $DOWNLOAD $url/$file |
175 | + done |
176 | + popd >/dev/null |
177 | + |
178 | + # TODO: Prevent this change from upsetting any currently booting nodes. |
179 | + if [ $(identical_files "install" $dest/"install" $files) != 'yes' ] |
180 | + then |
181 | + echo "Updating files for $release-$arch." |
182 | + install_dir "install" $dest |
183 | + fi |
184 | +} |
185 | + |
186 | + |
187 | +main() { |
188 | + local arch release arch_dir rel_dir |
189 | + |
190 | + DOWNLOAD_DIR=$(mktemp -d) |
191 | + echo "Downloading to temporary location $DOWNLOAD_DIR." |
192 | + cd -- $DOWNLOAD_DIR |
193 | + |
194 | + # All files we create here are public. The TFTP user will need to be |
195 | + # able to read them. |
196 | + umask a+r |
197 | + |
198 | + for arch in $ARCHES |
199 | + do |
200 | + # Replace the "generic" with sub-architecture once we start |
201 | + # supporting those. |
202 | + arch_dir="$TFTPROOT/maas/$arch/generic" |
203 | + |
204 | + mkdir -p -- $arch_dir |
205 | + update_pre_boot_loader $arch_dir $arch |
206 | + |
207 | + for release in $RELEASES |
208 | + do |
209 | + rel_dir="$arch_dir/$release" |
210 | + mkdir -p -- $rel_dir |
211 | + update_install_files $rel_dir $arch $release |
212 | + done |
213 | + done |
214 | + |
215 | + rm -rf -- $DOWNLOAD_DIR |
216 | +} |
217 | + |
218 | + |
219 | +main |
220 | |
221 | === added file 'src/maas/tests/test_maas_import_pxe_files.py' |
222 | --- src/maas/tests/test_maas_import_pxe_files.py 1970-01-01 00:00:00 +0000 |
223 | +++ src/maas/tests/test_maas_import_pxe_files.py 2012-06-15 13:33:19 +0000 |
224 | @@ -0,0 +1,202 @@ |
225 | +# Copyright 2012 Canonical Ltd. This software is licensed under the |
226 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
227 | + |
228 | +"""Tests for the maas-import-pxe-files script.""" |
229 | + |
230 | +from __future__ import ( |
231 | + absolute_import, |
232 | + print_function, |
233 | + unicode_literals, |
234 | + ) |
235 | + |
236 | +__metaclass__ = type |
237 | +__all__ = [] |
238 | + |
239 | +import os |
240 | +from stat import ST_MTIME |
241 | +from subprocess import check_call |
242 | + |
243 | +from maastesting.factory import factory |
244 | +from maastesting.testcase import TestCase |
245 | +from testtools.matchers import ( |
246 | + FileContains, |
247 | + FileExists, |
248 | + Not, |
249 | + ) |
250 | + |
251 | + |
252 | +def make_name(prefix, separator='-'): |
253 | + """Return an arbitrary name, with the given prefix.""" |
254 | + return separator.join([prefix, factory.getRandomString(4)]) |
255 | + |
256 | + |
257 | +def read_file(path, name): |
258 | + """Return the contents of the file at `path/name`.""" |
259 | + with open(os.path.join(path, name)) as infile: |
260 | + return infile.read() |
261 | + |
262 | + |
263 | +def get_write_time(path): |
264 | + """Return last modification time of file at `path`.""" |
265 | + return os.stat(path)[ST_MTIME] |
266 | + |
267 | + |
268 | +def backdate(path): |
269 | + """Set the last modification time for the file at `path` to the past.""" |
270 | + os.utime(path, (99999, 99999)) |
271 | + |
272 | + |
273 | +def compose_download_dir(archive, arch, release): |
274 | + """Locate a bootloader, initrd, and kernel in an archive. |
275 | + |
276 | + :param archive: Archive directory (corresponding to the script's ARCHIVE |
277 | + setting, except here it's a filesystem path not a URL). |
278 | + :param arch: Architecture. |
279 | + :param release: Ubuntu release name. |
280 | + :return: Full absolute path to the directory holding the requisite files |
281 | + for this archive, arch, and release. |
282 | + """ |
283 | + return os.path.join( |
284 | + archive, 'dists', release, 'main', 'installer-%s' % arch, 'current', |
285 | + 'images', 'netboot', 'ubuntu-installer', arch) |
286 | + |
287 | + |
288 | +def compose_tftp_path(tftproot, arch, *path): |
289 | + """Compose path for MAAS TFTP files for given architecture. |
290 | + |
291 | + After the TFTP root directory and the architecture, just append any path |
292 | + components you want to drill deeper, e.g. the release name to get to the |
293 | + files for a specific release. |
294 | + """ |
295 | + return os.path.join(tftproot, 'maas', arch, 'generic', *path) |
296 | + |
297 | + |
298 | +class TestImportPXEFiles(TestCase): |
299 | + |
300 | + def make_downloads(self, release=None, arch=None): |
301 | + """Set up a directory with an image for "download" by the script. |
302 | + |
303 | + Returns an "ARCHIVE" URL for the download. |
304 | + """ |
305 | + if release is None: |
306 | + release = make_name('release') |
307 | + if arch is None: |
308 | + arch = make_name('arch') |
309 | + archive = self.make_dir() |
310 | + download = compose_download_dir(archive, arch, release) |
311 | + os.makedirs(download) |
312 | + for filename in ['initrd.gz', 'linux', 'pxelinux.0']: |
313 | + factory.make_file(download, filename) |
314 | + return archive |
315 | + |
316 | + def call_script(self, archive_dir, tftproot, arch=None, release=None): |
317 | + """Call the maas-download-pxe-files script with given settings. |
318 | + |
319 | + The ARCHIVE URL and TFTPROOT path must be set, or the script will try |
320 | + to download from the Ubuntu server and store into the system's real |
321 | + TFTP root directory, respectively. Both bad ideas. |
322 | + """ |
323 | + script = './scripts/maas-import-pxe-files' |
324 | + env = { |
325 | + 'ARCHIVE': 'file://%s' % archive_dir, |
326 | + # Substitute curl for wget; it accepts file:// URLs. |
327 | + 'DOWNLOAD': 'curl -O --silent', |
328 | + 'TFTPROOT': tftproot, |
329 | + } |
330 | + if arch is not None: |
331 | + env['ARCHES'] = arch |
332 | + if release is not None: |
333 | + env['RELEASES'] = release |
334 | + env['CURRENT_RELEASE'] = release |
335 | + with open('/dev/null', 'w') as dev_null: |
336 | + check_call(script, env=env, stdout=dev_null) |
337 | + |
338 | + def test_downloads_pre_boot_loader(self): |
339 | + arch = make_name('arch') |
340 | + release = 'precise' |
341 | + archive = self.make_downloads(arch=arch, release=release) |
342 | + tftproot = self.make_dir() |
343 | + self.call_script(archive, tftproot, arch=arch, release=release) |
344 | + tftp_path = compose_tftp_path(tftproot, arch, 'pxelinux.0') |
345 | + download_path = compose_download_dir(archive, arch, release) |
346 | + expected_contents = read_file(download_path, 'pxelinux.0') |
347 | + self.assertThat(tftp_path, FileContains(expected_contents)) |
348 | + |
349 | + def test_ignores_missing_pre_boot_loader(self): |
350 | + arch = make_name('arch') |
351 | + release = 'precise' |
352 | + archive = self.make_downloads(arch=arch, release=release) |
353 | + download_path = compose_download_dir(archive, arch, release) |
354 | + os.remove(os.path.join(download_path, 'pxelinux.0')) |
355 | + tftproot = self.make_dir() |
356 | + self.call_script(archive, tftproot, arch=arch, release=release) |
357 | + tftp_path = compose_tftp_path(tftproot, arch, 'pxelinux.0') |
358 | + self.assertThat(tftp_path, Not(FileExists())) |
359 | + |
360 | + def test_updates_pre_boot_loader(self): |
361 | + arch = make_name('arch') |
362 | + release = 'precise' |
363 | + tftproot = self.make_dir() |
364 | + tftp_path = compose_tftp_path(tftproot, arch, 'pxelinux.0') |
365 | + os.makedirs(os.path.dirname(tftp_path)) |
366 | + with open(tftp_path, 'w') as existing_file: |
367 | + existing_file.write(factory.getRandomString()) |
368 | + archive = self.make_downloads(arch=arch, release=release) |
369 | + self.call_script(archive, tftproot, arch=arch, release=release) |
370 | + download_path = compose_download_dir(archive, arch, release) |
371 | + expected_contents = read_file(download_path, 'pxelinux.0') |
372 | + self.assertThat(tftp_path, FileContains(expected_contents)) |
373 | + |
374 | + def test_leaves_pre_boot_loader_untouched_if_unchanged(self): |
375 | + # If pxelinux.0 has not changed between script runs, the old |
376 | + # copy stays in place. |
377 | + arch = make_name('arch') |
378 | + release = 'precise' |
379 | + archive = self.make_downloads(arch=arch, release=release) |
380 | + tftproot = self.make_dir() |
381 | + self.call_script(archive, tftproot, arch=arch, release=release) |
382 | + tftp_path = compose_tftp_path(tftproot, arch, 'pxelinux.0') |
383 | + backdate(tftp_path) |
384 | + original_timestamp = get_write_time(tftp_path) |
385 | + self.call_script(archive, tftproot, arch=arch, release=release) |
386 | + self.assertEqual(original_timestamp, get_write_time(tftp_path)) |
387 | + |
388 | + def test_downloads_install_image(self): |
389 | + arch = make_name('arch') |
390 | + release = 'precise' |
391 | + archive = self.make_downloads(arch=arch, release=release) |
392 | + tftproot = self.make_dir() |
393 | + self.call_script(archive, tftproot, arch=arch, release=release) |
394 | + tftp_path = compose_tftp_path( |
395 | + tftproot, arch, release, 'install', 'linux') |
396 | + download_path = compose_download_dir(archive, arch, release) |
397 | + expected_contents = read_file(download_path, 'linux') |
398 | + self.assertThat(tftp_path, FileContains(expected_contents)) |
399 | + |
400 | + def test_updates_install_image(self): |
401 | + arch = make_name('arch') |
402 | + release = 'precise' |
403 | + tftproot = self.make_dir() |
404 | + tftp_path = compose_tftp_path( |
405 | + tftproot, arch, release, 'install', 'linux') |
406 | + os.makedirs(os.path.dirname(tftp_path)) |
407 | + with open(tftp_path, 'w') as existing_file: |
408 | + existing_file.write(factory.getRandomString()) |
409 | + archive = self.make_downloads(arch=arch, release=release) |
410 | + self.call_script(archive, tftproot, arch=arch, release=release) |
411 | + download_path = compose_download_dir(archive, arch, release) |
412 | + expected_contents = read_file(download_path, 'linux') |
413 | + self.assertThat(tftp_path, FileContains(expected_contents)) |
414 | + |
415 | + def test_leaves_install_image_untouched_if_unchanged(self): |
416 | + arch = make_name('arch') |
417 | + release = 'precise' |
418 | + archive = self.make_downloads(arch=arch, release=release) |
419 | + tftproot = self.make_dir() |
420 | + self.call_script(archive, tftproot, arch=arch, release=release) |
421 | + tftp_path = compose_tftp_path( |
422 | + tftproot, arch, release, 'install', 'linux') |
423 | + backdate(tftp_path) |
424 | + original_timestamp = get_write_time(tftp_path) |
425 | + self.call_script(archive, tftproot, arch=arch, release=release) |
426 | + self.assertEqual(original_timestamp, get_write_time(tftp_path)) |
This is really readable and shouldn't be too difficult to add ARM support to cleanly - thanks!
Can we please use "generic" instead of "base"? "base" sounds like it is used as the foundation for something else which I think is a bit misleading. The armhf subarchitectures correspond to kernel flavours and "generic" is the standard i386/amd64 flavour that we are moving to so I think this makes sense.
ARCHIVE=${ARCHIVE:-http://archive.ubuntu.com/ubuntu/dists/} should skip the dists at the end - the standard way of specifying an archive is one level higher than this, without the dists. Then compose_installer_download_url would insert "dists" into the URL instead.
ARM support:
I'm happy to work on these as a separate merge request. Here are the changes I think we'll need:
The list of supported arches will need to grow from amd64 and i386 to armhf/highbank, armhf/armadaxp etc. At this point, it might be cleanest to specify ARCHES as "amd64/generic i386/generic armhf/highbank" etc.
The archive needs to be selected based on the architecture in use, in order to switch from archive.u.c/ubuntu to ports.u.c/ubuntu-ports. Can this be made into a function, rather than directly a variable?
compose_installer_download_url will need to use a lookup table as the exact URL differs based on arch and release. The needed URLs don't necessarily end in pxelinux.0, linux and initrd.gz though, so I think this needs to be a bit more specialised and return the actual URL of the kernel image, initrd image and pxelinux.0 instead. For example: the armadaxp kernel image is called uImage because it is in a different format.
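None of this is in the branch yet, but the archive selection described above could look something like the following sketch (the function name and the arch patterns are made up; ports.ubuntu.com/ubuntu-ports is the usual ARM archive):

```shell
# Hypothetical sketch only: choose the archive by architecture, as
# suggested in the review above.  Not part of the merged branch.
archive_for_arch() {
    case "$1" in
        amd64*|i386*)
            echo "http://archive.ubuntu.com/ubuntu"
            ;;
        armhf*|armel*)
            echo "http://ports.ubuntu.com/ubuntu-ports"
            ;;
        *)
            echo "unknown architecture: $1" >&2
            return 1
            ;;
    esac
}

archive_for_arch amd64/generic     # main archive
archive_for_arch armhf/highbank    # ports archive
```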