Merge lp:~pwlars/qa-iso-static-validation-test/static-nosetests into lp:qa-iso-static-validation-test
Status: Merged
Merged at revision: 18
Proposed branch: lp:~pwlars/qa-iso-static-validation-test/static-nosetests
Merge into: lp:qa-iso-static-validation-test
Diff against target: 330 lines (+77/-85), 1 file modified: iso_static_validation.py (+77/-85)
To merge this branch: bzr merge lp:~pwlars/qa-iso-static-validation-test/static-nosetests
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Para Siva | | | Approve |
Joe Talbott (community) | | | Approve |
Max Brustkern | | | Pending |
Gema Gomez | | | Pending |
Review via email:
Commit message
Description of the change
Consider this more of a proof of concept. It *should* work, but there's
a lot of cleanup I'd like to do. Still, we can go ahead and merge it
and start getting the benefit of xunit output right away unless someone
disagrees. I opportunistically fixed a few things along the way, and got
tired of dealing with all the logging output stuff. Really, *all* of
these need to be properly converted so that diagnostics are passed as
unit test failure messages rather than logging messages anyway. I did some
of them, but got hand cramps along the way. :)
Gema Gomez (gema) wrote : | # |
Can you guys add nuclearbob to the review, we need to make sure the changes are utah friendly!
Paul Larson (pwlars) wrote : | # |
> Looks good to me. I think I'd make the environment variables (esp. NAME) a
> bit more specific to avoid any ambiguity.
Yeah, good point. I was just preserving the original names as I hacked through it to prove it would work. I'll change them. There's quite a lot of cleanup to be done here still, but I may save some of that for later since this diff is already getting a bit large. Probably the main thing to note here is the logic inversion for the skip cases, since I couldn't use the decorator, but they should all be correct. I tested it locally and had the same number of skips before and after.
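[Editor's note: a minimal sketch of the inversion Paul describes, with a hypothetical hard-coded flag in place of the environment variable. `unittest.skipUnless` is evaluated at class-definition time, before `setUpClass` has set any attributes, so the condition has to move inside the test body with its sense flipped:]

```python
import io
import unittest


class SkipExample(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # In the real script this is derived from the ISONAME
        # environment variable; a hard-coded flag stands in for it here.
        cls.ubiquity_image = False

    # Not possible here: @unittest.skipUnless(cls.ubiquity_image, ...)
    # -- cls does not exist yet when the decorator would be evaluated.
    def test_manifest(self):
        if not self.ubiquity_image:
            self.skipTest("manifest only for ubiquity based images")
        self.assertTrue(True)  # the real manifest checks would go here


def count_skips():
    # A runtime skipTest is recorded exactly like a decorator skip,
    # so the skip counts before and after the conversion should match.
    suite = unittest.TestLoader().loadTestsFromTestCase(SkipExample)
    result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
    return len(result.skipped)
```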
Para Siva (psivaa) wrote : | # |
I tried the change briefly with one of the tests today, and skipping seemed to work fine.
http://
I still haven't tried nosetests --with-xunit, though.
19. By Paul Larson

Change environment var NAME to ISONAME
Paul Larson (pwlars) wrote : | # |
changed NAME to ISONAME
Para Siva (psivaa) : | # |
Max Brustkern (nuclearbob) wrote : | # |
For the most part, I like these changes. The only thing I might take issue with is the replacement of logging statements with print. If we're going to be using this internally in utah, it's better to use the logging facility so the output integrates. Right now we're not using it internally, so print makes more sense since it's just running standalone.
That brings me to something that's not an issue with this, but something I think we should consider. We now have an ISO class in UTAH. I looked at making these tests methods of the class, but I didn't want to remove the ability to run them separately. As I've thought about it, to me, it makes sense to make each test a method of the class, i.e., iso.testa, iso.testb, and then have a nose/unittest compatible class which just instantiates an ISO and has each test case run iso.testa, iso.testb, etc. I'd like to hear whether that approach makes sense to other people. That way, we can easily do validation as part of provisioning, and we still have a script to run it separately and produce the output. Alternately, the ISO class could contain a unittest.TestCase object, and the script could just call that. That would require no synchronization to be maintained between the class and the test script, but it would make the test script a little bare as well.
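[Editor's note: a rough sketch of the structure described above. The `ISO` class here is a hypothetical stand-in, not the real UTAH class or its API: validation logic lives on plain methods that provisioning code can call directly, and a thin unittest shim delegates to them:]

```python
import unittest


class ISO(object):
    """Hypothetical stand-in for the UTAH ISO class; the real class
    lives in utah and has a different interface."""

    def __init__(self, path):
        self.path = path

    def check_name(self):
        # Each validation is a plain method, so provisioning code can
        # call it directly without unittest in the loop.
        return self.path.endswith('.iso')


class TestISO(unittest.TestCase):
    """Thin nose/unittest-compatible shim: instantiate an ISO once and
    delegate each test case to the corresponding method."""

    @classmethod
    def setUpClass(cls):
        cls.iso = ISO('quantal-desktop-amd64.iso')

    def test_check_name(self):
        self.assertTrue(self.iso.check_name())
```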
Finally, is there a strict need for the iso we're testing to be named correctly? Could we extract its relevant information (series, arch, type) from the iso itself (using the iso class in utah or otherwise) and then generate the URL from that? It's a little more work, but it's also a little more flexible.
Paul Larson (pwlars) wrote : | # |
> For the most part, I like these changes. The only thing I might take issue
> with is the replacement of logging statements with print. If we're going to
> be using this internally in utah, it's better to use the logging facility to
> integrate. Right now, we're not using it internally, so print makes more
> sense since it's just running standalone.
As mentioned, this was a quick hack, mostly to prove it would work. However, I don't think I actually replaced any logging statements with print. I do know of one instance where I replaced a unittest.fail (which I'm pretty sure wouldn't have actually worked anyway) with a print; that's not ideal, but at least it won't choke if we get to that point. What I did do is replace some of the logging messages (more to do here) with a test result message string, so that the person reviewing the logs has a chance at understanding what went wrong without reading the source of the test.
>
> That brings me to something that's not an issue with this, but something I
> think we should consider. We now have an ISO class in UTAH. I looked at
> making these tests methods of the class, but I didn't want to remove the
> ability to run them separately. As I've thought about it, to me, it makes
> sense to make each test a method of the class, I.E., iso.testa, iso.testb, and
> then have a nose/unittest compatible class which just instantiates an ISO and
> each test case runs ISO.testa, iso.testb, etc. I'd like to hear whether that
> approach makes sense to other people. That way, we can easily do validation
> as part of provisioning, and we still have a script to run it separately and
> produce the output. Alternately, the ISO class could contain a
> unittest.TestCase object, and the script could just call that. That would
> require no synchronization to be maintained between the class and the test
> script, but it would make the test script a little bare as well.
In this case we don't actually want to provision anything. But otherwise, this sounds interesting. I'd like to hear more about it, but maybe as an email rather than on a code review?
>
> Finally, is there a strict need for the iso we're testing to be named
> correctly? Could we extract its relevant information (series, arch, type)
> from the iso itself (using the iso class in utah or otherwise) and then
> generate the URL from that? It's a little more work, but it's also a little
> more flexible.
That's a possibility; you'd want to take a look at .disk/info in the iso image and parse the relevant fields out of it. Here's an example from an old iso I had lying around:
Ubuntu-Server 12.10 "Quantal Quetzal" - Alpha amd64+mac (20120903.2)
So as long as they don't ever change *that* format, we should be able to get everything we need from it :)
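[Editor's note: a best-effort parser for that line, assuming the `Product VERSION "Codename" - Milestone ARCH (DATE)` shape from the example above holds, which, as Paul notes, is not guaranteed; daily images may omit the milestone, which this sketch does not handle:]

```python
import re


def parse_disk_info(line):
    """Parse a .disk/info line such as:
    Ubuntu-Server 12.10 "Quantal Quetzal" - Alpha amd64+mac (20120903.2)
    Returns a dict of fields, or None if the line doesn't match."""
    match = re.match(
        r'(?P<product>\S+) (?P<version>\S+) "(?P<codename>[^"]+)"'
        r' - (?P<milestone>.+?) (?P<arch>\S+) \((?P<date>[^)]+)\)', line)
    return match.groupdict() if match else None
```

With fields like `arch` and `version` recovered this way, the download URL could be generated without relying on the iso's filename.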
Since there don't seem to be any major objections, I'll go ahead and merge this, start fixing up the 12 or so static test jobs in jenkins to pull the xunit results, and we can look at some of these other cleanups as we go.
Preview Diff
1 | === modified file 'iso_static_validation.py' |
2 | --- iso_static_validation.py 2012-09-18 16:16:16 +0000 |
3 | +++ iso_static_validation.py 2012-09-21 16:09:19 +0000 |
4 | @@ -21,7 +21,6 @@ |
5 | # |
6 | |
7 | import os |
8 | -import argparse |
9 | import sys |
10 | import logging |
11 | import urllib2 |
12 | @@ -32,63 +31,55 @@ |
13 | import tempfile |
14 | import shutil |
15 | |
16 | -# Defaults |
17 | -DEFAULT_URL = 'http://cdimage.ubuntu.com/' |
18 | -ARCHIVE_TEST_URL = 'http://people.canonical.com/~ubuntu-archive/testing/' |
19 | - |
20 | -#constants |
21 | -ONE_MB_BLOCK = 2 ** 20 |
22 | -PRECISE_SIZE_LIMIT_MB = 737148928 |
23 | -QUANTAL_SIZE_LIMIT_MB = 838860800 |
24 | - |
25 | -#Read the iso name and initialise global variables |
26 | -parser = argparse.ArgumentParser() |
27 | -parser.add_argument('--isoroot') |
28 | -parser.add_argument('--name') |
29 | -args = parser.parse_args() |
30 | - |
31 | -logging.basicConfig(level=logging.INFO) |
32 | - |
33 | -if args.name == None or '.iso' not in args.name or args.isoroot == None: |
34 | - print('The usage:', sys.argv[0], |
35 | - '--name release-variant-arch.iso --isoroot ~/iso_location') |
36 | - sys.exit(1) |
37 | -else: |
38 | - st_name = args.name.rstrip('.iso') |
39 | - try: |
40 | - (st_release, st_variant, st_arch) = st_name.rsplit('-') |
41 | - except ValueError: |
42 | - sys.exit("Image name be in the form of release-variant-arch.iso") |
43 | - logging.debug("The file being tested is: '%s'", args.name) |
44 | - |
45 | -#Set a flag for ubiquity based images |
46 | -if st_variant == 'desktop' or st_variant == 'dvd': |
47 | - ubiquity_image = True |
48 | -else: |
49 | - ubiquity_image = False |
50 | |
51 | |
52 | class TestValidateISO(unittest.TestCase): |
53 | @classmethod |
54 | def setUpClass(cls): |
55 | + # Defaults |
56 | + cls.DEFAULT_URL = 'http://cdimage.ubuntu.com/' |
57 | + cls.ARCHIVE_TEST_URL = 'http://people.canonical.com/~ubuntu-archive/testing/' |
58 | + |
59 | + #constants |
60 | + cls.ONE_MB_BLOCK = 2 ** 20 |
61 | + cls.PRECISE_SIZE_LIMIT_MB = 737148928 |
62 | + cls.QUANTAL_SIZE_LIMIT_MB = 838860800 |
63 | + |
64 | + #Read the iso name and initialise global variables |
65 | + cls.isoroot=os.getenv('ISOROOT') |
66 | + cls.name=os.getenv('ISONAME') |
67 | + |
68 | + |
69 | + if cls.name == None or '.iso' not in cls.name or cls.isoroot == None: |
70 | + print("Please see the README") |
71 | + sys.exit(1) |
72 | + else: |
73 | + cls.st_name = cls.name.rstrip('.iso') |
74 | + try: |
75 | + (cls.st_release, cls.st_variant, cls.st_arch) = cls.st_name.rsplit('-') |
76 | + except ValueError: |
77 | + sys.exit("Image name must be in the form of release-variant-arch.iso") |
78 | + |
79 | + #Set a flag for ubiquity based images |
80 | + if cls.st_variant == 'desktop' or cls.st_variant == 'dvd': |
81 | + cls.ubiquity_image = True |
82 | + else: |
83 | + cls.ubiquity_image = False |
84 | try: |
85 | cls.temp_dir = tempfile.mkdtemp() |
86 | except OSError: |
87 | - logging.error("Creating temp dirrectory failed") |
88 | - unittest.fail("Setup error") |
89 | - |
90 | + print("Creating temp directory failed") |
91 | + raise |
92 | + |
93 | def setUp(self): |
94 | - self.iso_name = args.name |
95 | - self.block_size = ONE_MB_BLOCK |
96 | - self.iso_location = os.path.join(args.isoroot, args.name) |
97 | - self.st_release = st_release |
98 | - self.st_variant = st_variant |
99 | - self.st_arch = st_arch |
100 | + self.iso_name = self.name |
101 | + self.block_size = self.ONE_MB_BLOCK |
102 | + self.iso_location = os.path.join(self.isoroot, self.name) |
103 | |
104 | if self.st_release != 'precise': |
105 | - self.url = DEFAULT_URL |
106 | + self.url = self.DEFAULT_URL |
107 | else: |
108 | - self.url = os.path.join(DEFAULT_URL, 'precise') |
109 | + self.url = os.path.join(self.DEFAULT_URL, 'precise') |
110 | |
111 | if self.st_variant == 'desktop': |
112 | self.url = os.path.join(self.url, 'daily-live') |
113 | @@ -98,14 +89,14 @@ |
114 | self.url = os.path.join(self.url, 'daily') |
115 | elif self.st_variant == 'server': |
116 | if self.st_release == 'precise': |
117 | - self.url = os.path.join(DEFAULT_URL, 'ubuntu-server', |
118 | + self.url = os.path.join(self.DEFAULT_URL, 'ubuntu-server', |
119 | 'precise', 'daily') |
120 | else: # current dev release |
121 | - self.url = os.path.join(DEFAULT_URL, 'ubuntu-server', 'daily') |
122 | + self.url = os.path.join(self.DEFAULT_URL, 'ubuntu-server', 'daily') |
123 | else: |
124 | self.fail("Image name must be in the form of release-variant-arch.iso") |
125 | |
126 | - self.probs_url = os.path.join(ARCHIVE_TEST_URL, self.st_release + |
127 | + self.probs_url = os.path.join(self.ARCHIVE_TEST_URL, self.st_release + |
128 | '_probs.html') |
129 | |
130 | # Test if the sha check sum of the iso matches that is given in the server |
131 | @@ -115,9 +106,9 @@ |
132 | try: |
133 | fh = urllib2.urlopen(sha256sum_url) |
134 | except urllib2.HTTPError, e: |
135 | - logging.error("Failed to fetch URL '%s': %s. Aborting!", |
136 | - sha256sum_url, e) |
137 | - self.fail("Failed to fetch sha checksum from the server") |
138 | + self.fail( |
139 | + "Failed to fetch URL '{0}': {1}. Aborting!".format( |
140 | + sha256sum_url, e)) |
141 | |
142 | # Process the file (contains a number of entries containing iso |
143 | # names and the corresponding checksums) |
144 | @@ -129,11 +120,10 @@ |
145 | try: |
146 | iso_sha256 = current_iso_digest[self.iso_name] |
147 | except KeyError: |
148 | - logging.error("No digest found for image '%s'", self.iso_name) |
149 | - self.fail("The iso does not have a matching digest") |
150 | + self.fail("No digest found for image '{0}'".format( |
151 | + self.iso_name)) |
152 | |
153 | # Validate checksum of file fname against sha256 hash |
154 | - logging.debug("Calculating hash for file '%s'", self.iso_name) |
155 | sha256 = hashlib.sha256() |
156 | with open(self.iso_location) as f: |
157 | while True: |
158 | @@ -142,9 +132,9 @@ |
159 | break |
160 | sha256.update(data) |
161 | sha256 = sha256.hexdigest() |
162 | - logging.debug("Expected Checksum: '%s'", iso_sha256) |
163 | - logging.debug("Local File Checksum: '%s'", sha256) |
164 | - self.assertEqual(iso_sha256, sha256) |
165 | + self.assertEqual(iso_sha256, sha256, |
166 | + "\nExpected checksum {0}\nLocal file checksum {1}".format( |
167 | + iso_sha256, sha256)) |
168 | |
169 | # Test if the list file in the repository matches the content of the iso |
170 | def test_files_list(self): |
171 | @@ -161,26 +151,25 @@ |
172 | try: |
173 | list_repository = urllib2.urlopen(list_url) |
174 | except urllib2.HTTPError, e: |
175 | - logging.error("Failed to fetch URL '%s': %s . Aborting!", |
176 | - list_url, e) |
177 | - self.fail("Failed to fetch files list from the server") |
178 | + self.fail("Failed to fetch URL '{0}': {1} . Aborting!".format( |
179 | + list_url, e)) |
180 | |
181 | - logging.debug('Extracting list file info from ISO') |
182 | cmd = ["bsdtar", "-tf", self.iso_location] |
183 | output = subprocess.Popen(cmd, stdout=subprocess.PIPE, |
184 | stderr=subprocess.PIPE) |
185 | (stdout, stderr) = output.communicate() |
186 | - logging.debug('Checking for error in extracting the file list') |
187 | - self.assertEqual(stderr, '') |
188 | + self.assertEqual(stderr, '', "Error extracting the file list") |
189 | |
190 | for list_entry in list_repository: |
191 | - logging.debug('Checking that each list entry is in the iso') |
192 | - self.assertIn(list_entry[1:].rstrip(), stdout) |
193 | + self.assertIn(list_entry[1:].rstrip(), stdout, |
194 | + "{0} missing from iso".format(list_entry[1:])) |
195 | + #FIXME: This should probably generate a complete list and save |
196 | + #it as an artifact. |
197 | |
198 | # Test if the manifest is the same as that is given in the server |
199 | - @unittest.skipUnless(ubiquity_image, |
200 | - "manifest only for ubiquity based images") |
201 | def test_manifest(self): |
202 | + if(self.ubiquity_image==False): |
203 | + self.skipTest("manifest only for ubiquity based images") |
204 | manifest_url = os.path.join(self.url, 'current', |
205 | self.st_release + '-' + |
206 | self.st_variant + '-' + |
207 | @@ -190,16 +179,17 @@ |
208 | except urllib2.HTTPError, e: |
209 | logging.error("Failed to fetch URL '%s': %s . Aborting!", |
210 | manifest_url, e) |
211 | - self.fail("Failed to fetch manifest file from the server") |
212 | + self.fail( |
213 | + "Failed to fetch manifest file from the server:\n{0}".format(e)) |
214 | |
215 | - logging.debug('Extracting manifest from the iso') |
216 | cmd = ["bsdtar", "-xf", self.iso_location, "-C", self.temp_dir, |
217 | "casper/filesystem.manifest"] |
218 | output = subprocess.Popen(cmd, stdout=subprocess.PIPE, |
219 | stderr=subprocess.PIPE) |
220 | (stdout, stderr) = output.communicate() |
221 | logging.debug('Check for error in extracting manifest info from iso') |
222 | - self.assertEqual(stderr, '') |
223 | + self.assertEqual(stderr, '', |
224 | + "Error extracting manifest from iso:\n{0}".format(stderr)) |
225 | |
226 | manifest_path = os.path.join(self.temp_dir, |
227 | 'casper/filesystem.manifest') |
228 | @@ -243,9 +233,9 @@ |
229 | self.assertIn(id_server, build_info_line) |
230 | |
231 | # Test if wubi is present for desktop or dvd and it is a valid PE |
232 | - @unittest.skipUnless(st_variant == 'desktop' and |
233 | - st_arch != 'powerpc', "Skipping non ubiquity images") |
234 | def test_wubi(self): |
235 | + if(self.st_variant != 'desktop' or self.st_arch == 'powerpc'): |
236 | + self.skipTest("Skipping non ubiquity images") |
237 | logging.debug('Extracting wubi from the desktop images') |
238 | cmd = ["bsdtar", "-xf", self.iso_location, "-C", |
239 | self.temp_dir, "./wubi.exe"] |
240 | @@ -266,9 +256,9 @@ |
241 | self.assertIn(" PE32 executable (GUI)", stdout) |
242 | |
243 | # Test if the relevant files are present in the iso for desktop iso |
244 | - @unittest.skipUnless(st_variant == 'desktop' and |
245 | - st_arch != 'powerpc', "Skipping for non desktop iso") |
246 | def test_files_ubiquity(self): |
247 | + if(self.st_variant != 'desktop' or self.st_arch == 'powerpc'): |
248 | + self.skipTest("Skipping for non desktop iso") |
249 | cmd = ["bsdtar", "-tf", self.iso_location] |
250 | output = subprocess.Popen(cmd, stdout=subprocess.PIPE, |
251 | stderr=subprocess.PIPE) |
252 | @@ -281,9 +271,9 @@ |
253 | self.assertIn(list_server[2:].rstrip(), stdout) |
254 | |
255 | # Test if the relevant important files are present in the iso for dvd iso |
256 | - @unittest.skipUnless(st_variant == 'dvd', |
257 | - "This file list test is only specific for dvd") |
258 | def test_files_ubiquity_dvd(self): |
259 | + if(self.st_variant != 'dvd'): |
260 | + self.skipTest("This file list test is only specific for dvd") |
261 | cmd = ["bsdtar", "-tf", self.iso_location] |
262 | output = subprocess.Popen(cmd, stdout=subprocess.PIPE, |
263 | stderr=subprocess.PIPE) |
264 | @@ -300,8 +290,9 @@ |
265 | # file_list_di for i386 and amd64 |
266 | # file_list_di_powerpc for powerpc and amd64+mac |
267 | # Test is skipped for ubiquity based images |
268 | - @unittest.skipIf(ubiquity_image, "Skipping for ubiquity images") |
269 | def test_files_di(self): |
270 | + if(self.ubiquity_image): |
271 | + self.skipTest("Skipping for ubiquity images") |
272 | cmd = ["bsdtar", "-tf", self.iso_location] |
273 | output = subprocess.Popen(cmd, stdout=subprocess.PIPE, |
274 | stderr=subprocess.PIPE) |
275 | @@ -319,9 +310,9 @@ |
276 | self.assertIn(list_server[2:].rstrip(), stdout) |
277 | |
278 | # Test if vmlinuz is a valid executable |
279 | - @unittest.skipUnless(ubiquity_image and st_arch != 'powerpc', |
280 | - "vmlinuz is present only for ubiquity based images") |
281 | def test_vmlinuz(self): |
282 | + if(self.ubiquity_image == False or self.st_arch == 'powerpc'): |
283 | + self.skipTest("vmlinuz is present only for ubiquity based images") |
284 | cmd = ["bsdtar", "-xf", self.iso_location, "-C", |
285 | self.temp_dir, "casper/vmlinuz"] |
286 | output = subprocess.Popen(cmd, stdout=subprocess.PIPE, |
287 | @@ -342,8 +333,9 @@ |
288 | |
289 | # Test if filesystem.squashfs is present and |
290 | # if that is a squash fs for ubiquity based images |
291 | - @unittest.skipUnless(ubiquity_image, "Skipping for non ubiquity images") |
292 | def test_filesystem_squashfs(self): |
293 | + if(self.ubiquity_image == False): |
294 | + self.skipTest("Skipping for non ubiquity images") |
295 | logging.debug('Extracting the filesystem.squashfs') |
296 | cmd = ["bsdtar", "-xf", self.iso_location, "-C", |
297 | self.temp_dir, "casper/filesystem.squashfs"] |
298 | @@ -375,8 +367,9 @@ |
299 | |
300 | # Test if the report.html exists for the d-i based images |
301 | # and that it does not have any conflicting packages |
302 | - @unittest.skipIf(ubiquity_image, "Skipping for ubiquity based images") |
303 | def test_report_html(self): |
304 | + if(self.ubiquity_image == True): |
305 | + self.skipTest("Skipping for ubiquity based images") |
306 | report_url = os.path.join(self.url, 'current', 'report.html') |
307 | try: |
308 | fh_report_page = urllib2.urlopen(report_url).read() |
309 | @@ -404,11 +397,11 @@ |
310 | |
311 | if(self.st_release == 'precise'): |
312 | # check if the size of the image fits a 703MB cd for precise |
313 | - self.assertTrue(statinfo.st_size < PRECISE_SIZE_LIMIT_MB, |
314 | + self.assertTrue(statinfo.st_size < self.PRECISE_SIZE_LIMIT_MB, |
315 | "Size: %s" % (statinfo.st_size)) |
316 | else: |
317 | # check if the size of the image fits a 800MB cd for quantal |
318 | - self.assertTrue(statinfo.st_size < QUANTAL_SIZE_LIMIT_MB, |
319 | + self.assertTrue(statinfo.st_size < self.QUANTAL_SIZE_LIMIT_MB, |
320 | "Size: %s" % (statinfo.st_size)) |
321 | |
322 | # clean up |
323 | @@ -418,7 +411,6 @@ |
324 | shutil.rmtree(cls.temp_dir, ignore_errors=False) |
325 | except OSError: |
326 | logging.error("Removing the temp directory failed") |
327 | - unittest.fail("Cleanup error") |
328 | |
329 | if __name__ == '__main__': |
330 | from test import test_support |