Merge lp:~wesley-wiedenmeier/curtin/partial-testing into lp:~curtin-dev/curtin/trunk
Status: Work in progress
Proposed branch: lp:~wesley-wiedenmeier/curtin/partial-testing
Merge into: lp:~curtin-dev/curtin/trunk
Diff against target: 4195 lines (+3486/-164), 55 files modified
Makefile (+12/-0) curtin/block/__init__.py (+18/-6) curtin/commands/block_meta.py (+2/-1) curtin/commands/curthooks.py (+21/-8) curtin/commands/install.py (+15/-0) curtin/reporter/events.py (+4/-9) curtin/util.py (+12/-6) doc/devel/README-storagetests.txt (+191/-0) examples/storagetests/allindata.yaml (+222/-0) examples/storagetests/basicdos.yaml (+63/-0) examples/storagetests/bcache_basic.yaml (+52/-0) examples/storagetests/bcache_double.yaml (+75/-0) examples/storagetests/bcache_shared_cache.yaml (+71/-0) examples/storagetests/crypt_basic.yaml (+43/-0) examples/storagetests/diskonlydos.yaml (+8/-0) examples/storagetests/diskonlygpt.yaml (+8/-0) examples/storagetests/formats_on_lvm.yaml (+67/-0) examples/storagetests/gpt_boot.yaml (+58/-0) examples/storagetests/gpt_simple.yaml (+54/-0) examples/storagetests/logical.yaml (+84/-0) examples/storagetests/lvm.yaml (+51/-0) examples/storagetests/lvm_mult_lvols_on_pvol.yaml (+74/-0) examples/storagetests/lvm_multiple_vg.yaml (+64/-0) examples/storagetests/lvm_with_dash.yaml (+50/-0) examples/storagetests/mdadm.yaml (+59/-0) examples/storagetests/mdadm_bcache.yaml (+135/-0) examples/storagetests/mdadm_lvm.yaml (+112/-0) examples/storagetests/whole_disk_btrfs_xfs.yaml (+19/-0) examples/storagetests/whole_disk_ext.yaml (+27/-0) examples/storagetests/whole_disk_fat.yaml (+27/-0) examples/storagetests/whole_disk_swap.yaml (+11/-0) tests/storagetest_runner/__init__.py (+471/-0) tests/storagetest_runner/test_advanced_format.py (+43/-0) tests/storagetest_runner/test_basic.py (+38/-0) tests/storagetest_runner/test_nvme.py (+35/-0) tests/storagetest_runner/test_scsi.py (+36/-0) tests/storagetests/__init__.py (+261/-0) tests/storagetests/test_bcache.py (+17/-0) tests/storagetests/test_clear_holders.py (+105/-0) tests/storagetests/test_complex.py (+17/-0) tests/storagetests/test_disk_partitions.py (+21/-0) tests/storagetests/test_format.py (+19/-0) tests/storagetests/test_layers_on_mdadm.py (+21/-0) tests/storagetests/test_lvm.py 
(+16/-0) tests/storagetests/test_raid.py (+19/-0) tests/storagetests/verifiers.py (+222/-0) tests/unittests/test_reporter.py (+4/-8) tests/vmtests/__init__.py (+87/-22) tests/vmtests/image_sync.py (+1/-1) tools/curtin-log-print (+152/-0) tools/launch (+1/-1) tools/report-webhook-logger (+0/-100) tools/report_webhook_logger.py (+174/-0) tools/run-pep8 (+6/-1) tools/run-pyflakes (+11/-1) |
To merge this branch: bzr merge lp:~wesley-wiedenmeier/curtin/partial-testing
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
curtin developers | | | Pending
Review via email:
Commit message
Description of the change
Add storagetest test suite to isolate storage configuration
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
I think the storagetests and the storagetest_runner are ready for review now. This adds a lot of verification for storage configuration and makes it easy to add quite a bit more. The test runner works pretty well, and I have managed to run all of the tests in about 2 hours with 4 processes. I think quite a bit of that time was spent waiting for apt-get on a bad internet connection; I haven't tried with a local repo mirror yet, though.
There are still some outstanding issues running some of the actual test cases, but these will require bugfixes in curtin to correct. The test cases that cannot be run right now are disabled in the test suite, and can be enabled once the bugs they trigger are fixed.
There are some other branches pending merge that this branch depends on; I will go through and make a list of those tomorrow.
I also added a doc to doc/devel/ about storagetests and storagetest_runner that was based on the doc for vmtests.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:470
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Christian Ehrhardt (paelzer) wrote:
Hi Wesley,
I just ran your branch to take a look and found some minor things I'd ask you if you consider them useful to add.
The json files - while being json and not intended for humans - would still be much more readable if they had some line breaks. Like what "python -m json.tool /tmp/foo/
Then in the log a few outputs seem not to go through LOG.
I have a few questions about some of them:
1. Parse storage tests reporting data and ensure that all tests passed ... ok
Should get a "2016-07-12 07:15:09,184 - vmtests - INFO -" prefix like the others IMHO.
2. 10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /storagetest-
10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /storagetest-
10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /curtin.tar.xz HTTP/1.1" 200 -
What are those actually doing? It is always the same text - could it be something like:
2016-07-12 07:15:09,184 - vmtests - INFO - fetching json reporting
2016-07-12 07:15:09,184 - vmtests - INFO - fetching json disk config
2016-07-12 07:15:09,184 - vmtests - INFO - fetching foo tarball
3. just after "10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /curtin.tar.xz HTTP/1.1" 200 -" is the longest "wait" duration in the tests. There should be some sort of "doing the test now" message, or anything else so that somebody looking at it knows what is taking the time. This doesn't have to be step 1,2,3,4,../20 - just a short "running test XY now" after the "Booting target image".
Christian Ehrhardt (paelzer) wrote:
Some comments when reading through the code
Christian Ehrhardt (paelzer) wrote:
For a speed check I tried to see if it has issues going concurrently.
I think I didn't find the right way to call it - I tried:
rm -rf output/; CURTIN_
I found that with this call the following section tries to write an infinitely huge file:
Building tarball of curtin: /mnt/nvme/
Wanted to let you know just in case that infinite write is a bug.
Later on I found this in the doc:
nosetests3 --processes=-1 tests/storagete
That gave me a stuck system - maybe too many cpus (6x2 threads) and therefore too much output?
In any case the output indenting was totally broken - I had to reset my console to scroll again.
That then failed with errors; not sure if that is bad or just a wrong call - here is the log: http://
I realized that since this first "hanging" --processes=-1 run, all tests were failing this way.
All on:
Traceback (most recent call last):
File "/mnt/nvme/
self.
AssertionError: False is not true
Debugging gave me: "(qemu) qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory"
That likely also was my first hanging - but that could be fixed by freeing some up :-)
But I wonder if we need some sort of "is enough mem avail" prior to call qemu?
In the following retry it again left me with a good return code, but with plenty of qemu processes still running.
That really needs some hardening.
Then I wanted to step down and did only:
nosetests3 --processes=2 tests/storagete
To check if it works at all.
I got the same console misformatting after a while, ending with
Ran 0 tests in 112.494s
Since all failures keep the logs around, here is the log file: http://
TL;DR: concurrent execution needs some fixes and probably a bit hardening against shooting itself :-)
Christian Ehrhardt (paelzer) wrote:
I ran checkers and found several of the following (no need to mark them all inline):
tests/storagete
=> if really unused a _ would be even better.
tests/storagete
=> since log would format for you ...
I know pylint is noisy and sometimes even disagrees with other checkers, but most of these are only a search-and-replace away, so probably worth fixing.
I'm not so sure about
tests/storagete
tests/storagete
I already complained about short names before, this is another example (there are more like mp, fp, e, ...)
tests/storagete
A totally different one is:
tests/storagete
That never is an issue so far, as all inheriting classes of e.g. BasicTests also inherit from class BaseStorageTest
One that is probably worth fixing to avoid later issues:
tests/storagete
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
Hey, thanks for looking through and reviewing. I'm reading through the diff comments at the moment; I'll reply inline.
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
> For a speed check I tried to see if it has issues going concurrently.
> I think I didn't find the right way to call it - I tried:
> rm -rf output/; CURTIN_
> --nologcapture --processes=10 --process-
>
> I found that with this call the section following section tries to write an
> infinitely huge file:
> Building tarball of curtin: /mnt/nvme/
>
> Wanted to let you know just in case that infinite write is a bug.
Thanks for pointing that out; that's definitely a bug in how I generate the curtin tarball. I hadn't run it in the jenkins runner before, because sparse files don't work right in my /home partition and I didn't want the tests writing huge files there. But yeah, the jenkins runner puts the curtin tmp dir inside the curtin dir being tested, so it causes tar to try to include the tarball it is generating in itself. I need to make the tests aware of what environment they're running in so they know to omit that directory.
> Later on I found this in the doc:
> nosetests3 --processes=-1 tests/storagete
>
> That gave me a stuck system - maybe too much cpus (6x2threads) and by that too
> much output?
> In any case the output indenting was totally broken - I had to reset my
> console to scroll again.
Yeah, I've seen the output get messed up too, but I think what is happening is just that the processes are racing to write to stdout and the data they're writing is getting corrupted somehow. I haven't tried with more than 4 processes at once, so there may be some bugs that occur in that case. I'll look through the log and try to see if I can figure out what was going wrong there.
> That failed me then with errors, not sure if that is bad or just a wrong call
> - here is the log: http://
> I realized that since this first "hanging" processes -1 run all tests were
> failing this way now.
> All on:
> Traceback (most recent call last):
> File "/mnt/nvme/
> testing/
> test_reported_
> self.assertTrue
> AssertionError: False is not true
>
> Debugging gave me: "(qemu) qemu-system-x86_64: cannot set up guest memory
> 'pc.ram': Cannot allocate memory"
> That likely also was my first hanging - but that could be fixed by freeing
> some up :-)
> But I wonder if we need some sort of "is enough mem avail" prior to call qemu?
Yeah, the test runner eats up memory, though not quite as much as the vmtests do while the target system tarball is being extracted in the vm. A check would definitely be good; I'll look into how to write that.
Watching a vm trying to run from swap isn't fun :)
> In the following retry it left me again with a good return code, but plenty of
> running qemu processes up.
> That really needs some hardening.
>
> Then I wanted to step down and did only:
> nosetests3 --processes=2 tests/storagete
> To check if it works at all.
> I got the same console that gets misformatted after a while ending with
> Ran 0 tests in 112.494s
>
> Since all fails keep th...
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
> Hi Wesley,
>
> I just ran your branch to take a look and found some minor things I'd ask you
> if you consider them useful to add.
>
> The json files - while being json and not intended for humans - would still be
> much more readable it they had some line breaks. Like what "python -m
> json.tool /tmp/foo/
Yeah, that makes sense. I'll switch it to use something like 'json.dump(... indent=4)'
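For reference, the difference is one keyword argument; a minimal sketch (the config dict here is a made-up example, not curtin's actual storage schema):

```python
import json

# Hypothetical storage config; curtin's real schema is richer.
config = {"storage": {"version": 1,
                      "config": [{"id": "sda", "type": "disk"}]}}

compact = json.dumps(config)           # one long line, hard to eyeball
pretty = json.dumps(config, indent=4)  # line breaks plus 4-space indent

print(pretty)
```

`python -m json.tool` applies the same kind of indentation after the fact, so either approach yields readable files.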
> Then in the log a few outputs seem not to go through LOG.
> I have a few questions about some of them:
>
> 1. Parse storage tests reporting data and ensure that all tests passed ... ok
> Should get a "2016-07-12 07:15:09,184 - vmtests - INFO -" prefix like the
> others IMHO.
Oh, that's the docstring for the test function; I think that's just added to nosetests's output when running with '-vv'. It may be good to emit a message through LOG though and remove the docstring, because the timestamps could be useful for debugging.
> 2. 10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /storagetest-
> HTTP/1.1" 200 -
> 10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /storagetest-
> HTTP/1.1" 200 -
> 10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /curtin.tar.xz HTTP/1.1" 200 -
>
> What are thos actually doing, it is always the same text - Could be something
> like:
> 2016-07-12 07:15:09,184 - vmtests - INFO - fetching json reporting
> 2016-07-12 07:15:09,184 - vmtests - INFO - fetching json disk config
> 2016-07-12 07:15:09,184 - vmtests - INFO - fetching foo tarball
I'm not sure if I can actually change that or not; those messages come from http.server, and I don't know if there is an interface in it to handle custom messages. I'll check in the source for it though.
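There is such an interface: `BaseHTTPRequestHandler.log_message()` produces those per-request lines, and overriding it in a subclass redirects (or silences) them. A sketch, assuming the thread's `vmtests` logger name; the handler class name is illustrative:

```python
import http.server
import logging

LOG = logging.getLogger("vmtests")


class LoggingHTTPHandler(http.server.SimpleHTTPRequestHandler):
    """Serve files as usual, but route request logging through LOG."""

    def log_message(self, format, *args):
        # The default implementation writes '<client> - - [date] "GET ..."'
        # straight to stderr; going through LOG gives the lines the usual
        # timestamped prefix (or drop the call entirely to suppress them).
        LOG.info("%s - %s", self.address_string(), format % args)
```

The handler class is passed to `http.server.HTTPServer` in place of `SimpleHTTPRequestHandler`; everything else stays the same.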
> 3. just after "10.245.168.14 - - [12/Jul/2016 07:15:22] "GET /curtin.tar.xz
> HTTP/1.1" 200 -" is the longest "wait" duration in the tests. There should be
> some sort of "doing the test now" or anything else that somebody looking at it
> knows what is taking the time. This doesn't have to be step 1,2,3,4,../20 -
> just after the "Booting target image" a short "running test XY now".
Yeah, it would definitely be nice to have a bit of debug there. I can add in optional logging to tools.report_
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
> I ran checkers and found several of the following (no need to mark them all
> inline):
>
> tests/storagete
> Unused variable '_err' [python/pylint]
>
> => if really unused a _ would be even better.
Yeah, that makes sense. I just got in the habit of writing _err from block meta, but I should probably clean up some of the variable names there too. For the most part the err part of the util.subp return is never used.
> tests/storagete
> interpolation] Use % formatting in logging functions and pass the % parameters
> as arguments
>
> => since log would format for you ...
>
> I know pylint it is noisy and sometimes even disagrees with other checkers,
> but most of these are also only a search and replace away so probably worth to
> fix.
Yeah, I can use the old style format specifiers there, I'll switch that over.
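The form pylint asks for defers string formatting to the logging framework; a small illustration (logger name and message are made up):

```python
import logging

LOG = logging.getLogger("storagetests")

device = "/dev/vdb1"

# Eager: the message string is built even when DEBUG is disabled.
LOG.debug("clearing holders for {}".format(device))

# Lazy: logging interpolates the %-style args only if the record is
# actually emitted, which is what pylint's
# logging-format-interpolation check wants.
LOG.debug("clearing holders for %s", device)
```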
> I'm not so sure about
> tests/storagete
> Using type() instead of isinstance() for a typecheck. [python/pylint]
Right yeah, I'll switch that, isinstance reads nicer.
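Beyond readability, the difference matters for subclasses; a toy example (not code from the branch):

```python
def ensure_list(value):
    """Wrap a non-list value in a list, passing lists through."""
    # type(value) == list would reject list subclasses;
    # isinstance accepts them and reads more naturally.
    if isinstance(value, list):
        return value
    return [value]


class NamedList(list):
    """A list subclass, to show the type() vs. isinstance() gap."""
```

`ensure_list(NamedList(["sda"]))` passes straight through under `isinstance`, while a `type() == list` check would have wrapped it again.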
> tests/storagete
> Redefining built-in 'type' [python/pylint]
Oh, yeah, that was for error type in __exit__, I'll rename that to 'etype'
> I already complained about short names before, this is another example (there
> are more like mp, fp, e, ...)
> tests/storagete
> variable name "d" [python/pylint]
Yeah, I think some of these are just trying to keep the line from going over 80, but some were probably in the list comprehensions, where the scope of the var is so limited it should be okay. I'll switch everything outside of comprehensions over to long names real quick, thanks for bringing that up.
>
> A totally different one is:
> tests/storagete
> 'BasicTests' has no 'assertIsNotNone' member [python/pylint]
> That never is an issue so far, as all inheriting classes of e.g. BasicTests
> also inherit from class BaseStorageTest
> the methods are in scope would it hurt letting the verifiers also derive from
> unittest.TestCase?
I guess the verifiers could inherit from TestCase, but I think it may be cleaner for them not to, partly just because that would require '__test__ = False' in all of them to prevent nosetests3 from trying to run them as individual tests. Also, because the BaseStorageTest class overrides the TestCase.run() method, if the verifier classes were to inherit from TestCase and were listed as parents of the individual test classes after the BaseStorageTest, I think that the call to super() in BaseStorageTest
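A sketch of the mixin arrangement being described (class and method names here are illustrative, not the branch's actual ones): the verifier stays a plain class, so test discovery never collects it on its own and no `__test__ = False` is needed, while its `assert*` calls still resolve at runtime on the concrete test class that also derives from `unittest.TestCase`:

```python
import unittest


class FormatVerifier:
    """Plain mixin: not a TestCase, so nose/unittest won't collect it."""

    def check_fstype(self, observed, expected):
        # assertEqual is found on the final class via its
        # unittest.TestCase base, not on this mixin itself.
        self.assertEqual(observed, expected)


class BaseStorageTest(unittest.TestCase):
    """Stand-in for the branch's shared storage-test base class."""


class BasicTests(BaseStorageTest, FormatVerifier):
    def test_root_fs(self):
        self.check_fstype("ext4", "ext4")
```

Because `BasicTests` lists `BaseStorageTest` first, its overridden `run()` (in the real branch) stays first in the MRO regardless of how many verifier mixins follow it.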
> One that is probably worth for sure to avoid later issues:
> tests/storagete
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
Replied to some of the inline comments
- 471. By Wesley Wiedenmeier
  Fix typo in README-storagetests.txt
- 472. By Wesley Wiedenmeier
  Merge in more recent revision of lp:~wesley-wiedenmeier/curtin/trusty-preserve
  for cleaner handling of blkid failure in disk_handler preserve code on trusty
- 473. By Wesley Wiedenmeier
  Use util.json_dumps instead of json.dumps for json formatting for storage
  tests as it sets indentation levels nicely
- 474. By Wesley Wiedenmeier
  Replaced docstring in storagetest_runner.test_reporting_data with a LOG.info
  message to get a timestamp on its output in nosetests
- 475. By Wesley Wiedenmeier
  Cleanups based on pylint
- 476. By Wesley Wiedenmeier
  In storagetest_runner, when building the curtin tarball, exclude the 'output'
  dir so that tar does not try to create a tarball recursively when using the
  jenkins runner
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
I just pushed some cleanup for the comments posted. Not everything has been addressed yet, but I'll get to the rest tomorrow.
Just so it doesn't get lost, the diff comments are on revision 470.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:476
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 477. By Wesley Wiedenmeier
  Suppress server.SimpleHTTPRequestHandler logging messages
- 478. By Wesley Wiedenmeier
  Some cleanup in block based on diff comments
- 479. By Wesley Wiedenmeier
  Re-merge from lp:~wesley-wiedenmeier/curtin/curtin-fix-sysfs-partition-data
  to pull in cleaner variable handling
- 480. By Wesley Wiedenmeier
  Better handling of test_py_ver in storagetest_runner.gen_user_data
- 481. By Wesley Wiedenmeier
  sp
- 482. By Wesley Wiedenmeier
  Updated storagetest_runner documentation with complete list of environment
  variables used in vmtests that are still applicable to storagetest_runner
- 483. By Wesley Wiedenmeier
  Wait until enough memory is available before starting tests
  - tools/launch: in Usage() state that the --mem arg is in Mb not Kb, as this
    is what qemu takes as input
  - vmtests/__init__.py:
    - add the function stall_if_not_enough_memory(): this function gets free
      memory from /proc/meminfo and, if less than what is required, delays
      then tries again up to a configurable maximum amount of time until
      enough memory is available to start the tests. If the maximum amount of
      delay time is reached and there still is not enough memory available,
      it raises an error that stops the vmtest
    - add environment configuration variables CURTIN_VMTEST_INSTANCE_MEMORY
      and CURTIN_VMTEST_MEMORY_MAX_STALL to control how much memory to
      allocate to the test vm and how long to wait until there is enough
      memory available
  - for both the tools/launch and tools/xkvm commands, specify how much memory
    qemu should use based on the value of CURTIN_VMTEST_INSTANCE_MEMORY
  - storagetest_runner/__init__.py: use the stall_if_not_enough_memory()
    function from vmtests and specify how much memory to use based on the
    vmtest environment variable
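As a rough sketch of what such a guard can look like (the function name follows the commit message, but the body and its parameters are assumptions, not the branch's code):

```python
import time


def free_mb(meminfo_text):
    """Parse /proc/meminfo-style text and return available memory in MB."""
    fields = {}
    for line in meminfo_text.splitlines():
        name, _, rest = line.partition(":")
        values = rest.split()
        if values:
            fields[name] = int(values[0])  # values are in kB
    return fields.get("MemAvailable", fields.get("MemFree", 0)) // 1024


def stall_if_not_enough_memory(required_mb, max_stall=600, delay=10,
                               read_meminfo=None, sleep=time.sleep):
    """Block until required_mb of memory is free, raising after
    max_stall seconds so a doomed qemu launch fails early instead of
    dying on 'cannot set up guest memory'."""
    if read_meminfo is None:
        read_meminfo = lambda: open("/proc/meminfo").read()
    waited = 0
    while free_mb(read_meminfo()) < required_mb:
        if waited >= max_stall:
            raise RuntimeError("still not enough memory after %ds" % waited)
        sleep(delay)
        waited += delay
```

The injectable `read_meminfo`/`sleep` hooks are just for testability; a real helper would read `/proc/meminfo` (or shell out to `free -m`, as a later revision does) directly.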
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:483
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:483
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 484. By Wesley Wiedenmeier
  Add trusty test interactive
- 485. By Wesley Wiedenmeier
  Enable dos logical/extended partitioning test
- 486. By Wesley Wiedenmeier
  Enable trusty storagetest_runner instances, as lp: #1596384 only occurs
  under really heavy loads on a system without enough resources to run the
  tests properly, and should not affect the test server
- 487. By Wesley Wiedenmeier
  Instead of parsing /proc/meminfo for vmtests.stall_if_not_enough_memory,
  use 'free -m'
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:487
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:487
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote:
I really like the goal here.
some comments inline.
- 488. By Wesley Wiedenmeier
  Remove -w flag from free cmd in stall_if_not_enough_memory
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:488
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:488
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
Thanks for reviewing. I went through and replied to the diff comments; I'll get most of them handled soon, though one or two may take a bit longer. Just so they aren't lost because of future commits, the diff comments are at r487.
- 489. By Wesley Wiedenmeier
  Merge from lp:~wesley-wiedenmeier/curtin/trusty-preserve to get better
  formatting of log messages in disk_handler without using \
- 490. By Wesley Wiedenmeier
  Use encode=False rather than .decode() in image sync output
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:490
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:490
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 491. By Wesley Wiedenmeier
  Renamed some of the storagetests configs that had non-descriptive names
- 492. By Wesley Wiedenmeier
  In storagetest_runner, don't use make to start tests, use nosetests
  directly, to avoid having to install make
- 493. By Wesley Wiedenmeier
  Remove encoding from util.json_dumps as it isn't actually needed anywhere
  util.json_dumps is used
- 494. By Wesley Wiedenmeier
  Fixed call to util.json_dumps
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:494
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:494
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 495. By Wesley Wiedenmeier
  Removed vmtests.stall_if_not_enough_memory and associated variables and
  documentation, as it has been moved into another branch to merge separately
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:495
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:495
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:496
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:496
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 497. By Wesley Wiedenmeier
  Merge from trunk to resolve conflict
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:497
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:497
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Wesley Wiedenmeier (wesley-wiedenmeier) wrote:
There are some branches that have already been merged into
this branch because they were necessary for getting
storagetests working at all. Most of them are bug fixes,
except trunk.add-
report_
reporting from a test environment.
In addition, several bugs have already been found using the
current test configs for storagetests. Branches exist to
handle most of these. Those branches have not been merged
into this one, but once they have merged into trunk I can
enable the storagetests that reproduce those bugs.
Already Merged Branches:
lp:~wesley-wiedenmeier/curtin/trunk.add-web-reporter-to-vmtests
merge into: lp:~raharper/curtin/trunk.add-web-reporter-to-vmtests
reason: improvements to original logic for
'ip address' to get local lan ip in vmtests
lp:~raharper/curtin/trunk.add-web-reporter-to-vmtests
merge into: lp:curtin
reason: during vmtests, capture reporting events from
curtin and verify that the events are received
and properly formatted
lp:~wesley-wiedenmeier/curtin/curtin-fix-sysfs-partition-data
merge info: lp:curtin
reason: the block.sysfs_
useful, and is used in storagetests, but the
old implementation was only able to operate on
the path to whole disks, not to give
lp:~wesley-wiedenmeier/curtin/trusty-preserve
merge into: lp:curtin
reason: fix handling of disk preservation on
special disks
lp:~wesley-wiedenmeier/curtin/1598310
merge into: lp:curtin
reason: (LP: 1598310) The current implementation of
running on a path for which lsblk may give
This branch fixes this and adds unittests
lp:~wesley-wiedenmeier/curtin/1597522
merge into: lp:curtin
reason: (LP: 1597522) A fix that went into trunk a
while ago as a work around for a bug in
like mkfs.ext4 silently ignored it, but it
caused other tools, such as mkfs.xfs to fail
and mkfs.btrfs to try to create a filesystem
This branch fixes that, and adds it to vmtests
Bugs found by storagetests fixed so far:
- (LP: 1597522) Curtin passes -s flag to all mkfs cmds,
lp:~wesley-wiedenmeier/curtin/1597522
Already merged into partial-testing, but not
in trunk
- (LP: 1592962) U...
- 498. By Wesley Wiedenmeier
  Remove test_cciss from storagetest_runner, as it isn't properly reproducing
  the cciss issue
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:498
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:498
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
- 499. By Wesley Wiedenmeier
  Merge from trunk for tox env fix
- 500. By Wesley Wiedenmeier
  Fix incorrectly named conf file
- 501. By Wesley Wiedenmeier
  Added more challenging test of handling weird lvm names to test_clear_holders
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:501
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 502. By Wesley Wiedenmeier
  Merge from trunk
- 503. By Wesley Wiedenmeier
  Fix config file lvm_mult_lvols_on_pvol
- 504. By Wesley Wiedenmeier
  Fix disk id for lvm_mult_lvols_on_pvol.yaml
- 505. By Wesley Wiedenmeier
  Fixed disk id in expected_holders for mult_lvols_on_pvol.yaml
- 506. By Wesley Wiedenmeier
  Merge from trunk
- 507. By Wesley Wiedenmeier
  Merge from trunk
- 508. By Wesley Wiedenmeier
  Removed ntfs test conf as creating an ntfs volume writes ~10G of data and
  slows tests down too much
- 509. By Wesley Wiedenmeier
  Removed reference to ntfs test conf file in test_format
- 510. By Wesley Wiedenmeier
  In test_clear_holders, use block.sys_block_path instead of the no longer
  extant block_meta.block_find_sysfs_path
- 511. By Wesley Wiedenmeier
  Fixed up whole_disk_fat conf file (10G disk was too large to make a fat16
  filesystem, so use vfat instead) and enabled the fat conf file, as the bug
  is now fixed in trunk
- 512. By Wesley Wiedenmeier
  Add lots of 'wipe: superblock'
- 513. By Wesley Wiedenmeier
  Merge from trunk
- 514. By Wesley Wiedenmeier
  Get list of test deps from curtin.deps in storagetest_runner
- 515. By Wesley Wiedenmeier
  Disable logical.yaml as it causes a parted bug on advanced format disks
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:515
https:/
Executed test runs:
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
- 516. By Wesley Wiedenmeier
  Merge from trunk to bring in test fix
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:516
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 517. By Wesley Wiedenmeier
  Remove tests on Wily as it is no longer supported
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:517
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 518. By Wesley Wiedenmeier
  Remove all test references to reiserfs as it is not supported
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:518
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 519. By Wesley Wiedenmeier
  For lvm_mult_lvols_on_pvol.yaml, replace dos extended/logical partitioning
  with gpt, as the dos logical partitioning sometimes causes problems on
  advanced format disks. This allows lvm handling to be tested even though
  dos extended/logical has to be disabled
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:519
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 520. By Wesley Wiedenmeier
-
Merge from trunk
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:520
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 521. By Wesley Wiedenmeier
-
Remove accidental duplicate wipe statement in mdadm.yaml
- 522. By Wesley Wiedenmeier
-
Remove unneeded partitions in mdadm.yaml
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:522
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 523. By Wesley Wiedenmeier
-
Sleep for a second after running bcache verification to avoid a kernel panic
when unregistering the bcache device during the start of the next test
- 524. By Wesley Wiedenmeier
-
Merge from trunk to pull in unittest fixes
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:524
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 525. By Wesley Wiedenmeier
-
Added a dmcrypt clear_holders test, but disabled
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:525
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 526. By Wesley Wiedenmeier
-
Fix crypt_basic.yaml
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:526
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 527. By Wesley Wiedenmeier
-
Add allindata.yaml as a clear_holders test
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:527
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 528. By Wesley Wiedenmeier
-
Fix expected holders for allindata.yaml
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:528
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 529. By Wesley Wiedenmeier
-
remove expected_holders for backing to volgroup1 in allindata.yaml as it is not
always possible to predict which logical partition is on each md device
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:529
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 530. By Wesley Wiedenmeier
-
Merge from trunk to disable tests for wily as it is EOL
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:530
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 531. By Wesley Wiedenmeier
-
Merge from trunk
- 532. By Wesley Wiedenmeier
-
Merge in updates to storagetests.test_clear_holders from
lp:~wesley-wiedenmeier/curtin/clear-holders-storagetests
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:532
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
- 533. By Wesley Wiedenmeier
-
Merge from trunk to pull optionally ignoring errors in mdadm.mdadm_assemble
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:533
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Scott Moser (smoser) wrote:
I've marked this work in progress.
Ryan and I discussed this some, it is valuable, and we'd like to run this, but at this point the branch probably needs rework to merge from trunk...
so just categorizing as work-in-progress.
Unmerged revisions
- 533. By Wesley Wiedenmeier
-
Merge from trunk to pull optionally ignoring errors in mdadm.mdadm_assemble
- 532. By Wesley Wiedenmeier
-
Merge in updates to storagetests.test_clear_holders from
lp:~wesley-wiedenmeier/curtin/clear-holders-storagetests
- 531. By Wesley Wiedenmeier
-
Merge from trunk
- 530. By Wesley Wiedenmeier
-
Merge from trunk to disable tests for wily as it is EOL
- 529. By Wesley Wiedenmeier
-
remove expected_holders for backing to volgroup1 in allindata.yaml as it is not
always possible to predict which logical partition is on each md device
- 528. By Wesley Wiedenmeier
-
Fix expected holders for allindata.yaml
- 527. By Wesley Wiedenmeier
-
Add allindata.yaml as a clear_holders test
- 526. By Wesley Wiedenmeier
-
Fix crypt_basic.yaml
- 525. By Wesley Wiedenmeier
-
Added a dmcrypt clear_holders test, but disabled
- 524. By Wesley Wiedenmeier
-
Merge from trunk to pull in unittest fixes
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2016-08-18 16:02:27 +0000 |
3 | +++ Makefile 2016-09-15 18:06:47 +0000 |
4 | @@ -39,6 +39,18 @@ |
5 | echo " apt-get install -qy python3-sphinx"; exit 1; } 1>&2 |
6 | make -C doc html |
7 | |
8 | +storagetests: deps |
9 | + sudo nosetests $(noseopts) tests/storagetests |
10 | + |
11 | +storagetests3: deps |
12 | + sudo nosetests3 $(noseopts) tests/storagetests |
13 | + |
14 | +run_storagetests: vmtest-deps |
15 | + nosetests3 $(noseopts) tests/storagetest_runner |
16 | + |
17 | +deps: |
18 | + sudo bin/curtin -v --install-deps |
19 | + |
20 | # By default don't sync images when running all tests. |
21 | vmtest: |
22 | nosetests3 $(noseopts) tests/vmtests |
23 | |
24 | === modified file 'curtin/block/__init__.py' |
25 | --- curtin/block/__init__.py 2016-08-30 19:21:06 +0000 |
26 | +++ curtin/block/__init__.py 2016-09-15 18:06:47 +0000 |
27 | @@ -385,13 +385,21 @@ |
28 | return |
29 | |
30 | |
31 | -def blkid(devs=None, cache=True): |
32 | +def dev_blkid(path, cache=True): |
33 | + """return blkid information for a single device""" |
34 | + blkid_data = blkid([path], cache=cache) |
35 | + if len(blkid_data) == 0: |
36 | + raise ValueError("Did not find blkid info for '%s':" % path) |
37 | + if len(blkid_data) != 1: |
38 | + raise ValueError("blkid '%s' returned multiple results: '%s'" % |
39 | + (path, blkid_data)) |
40 | + return next(d for d in blkid_data.values()) |
41 | + |
42 | + |
43 | +def blkid(devices=[], cache=True): |
44 | """ |
45 | get data about block devices from blkid and convert to dict |
46 | """ |
47 | - if devs is None: |
48 | - devs = [] |
49 | - |
50 | # 14.04 blkid reads undocumented /dev/.blkid.tab |
51 | # man pages mention /run/blkid.tab and /etc/blkid.tab |
52 | if not cache: |
53 | @@ -401,6 +409,8 @@ |
54 | os.unlink(cachefile) |
55 | |
56 | cmd = ['blkid', '-o', 'full'] |
57 | + cmd.extend(devices) |
58 | + |
59 | # blkid output is <device_path>: KEY=VALUE |
60 | # where KEY is TYPE, UUID, PARTUUID, LABEL |
61 | out, err = util.subp(cmd, capture=True) |
62 | @@ -626,8 +636,10 @@ |
63 | |
64 | |
65 | def sysfs_partition_data(blockdev=None, sysfs_path=None): |
66 | - # given block device or sysfs_path, return a list of tuples |
67 | - # of (kernel_name, number, offset, size) |
68 | + """ |
69 | + given block device or sysfs_path, return a list of tuples of |
70 | + (kernel_name, number, offset, size) |
71 | + """ |
72 | if blockdev: |
73 | blockdev = os.path.normpath(blockdev) |
74 | sysfs_path = sys_block_path(blockdev) |
75 | |
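The dev_blkid/blkid change above wraps `blkid -o full`, whose output is one `<device>: KEY="VALUE" ...` line per device. A minimal sketch of that parsing and of the single-device lookup the patch adds; the helper names here (`parse_blkid`, `single_dev_info`) are illustrative, not curtin's API:

```python
import shlex

def parse_blkid(output):
    """Parse '<device>: KEY="VALUE" ...' lines into {device: {KEY: VALUE}}."""
    data = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        dev, _, fields = line.partition(":")
        # shlex.split strips the quoting around each VALUE
        data[dev] = dict(kv.split("=", 1) for kv in shlex.split(fields))
    return data

def single_dev_info(output, path):
    """Like the proposed dev_blkid: expect exactly one entry for path."""
    data = parse_blkid(output)
    if len(data) == 0:
        raise ValueError("Did not find blkid info for '%s'" % path)
    if len(data) != 1:
        raise ValueError("blkid '%s' returned multiple results: '%s'"
                         % (path, data))
    return next(iter(data.values()))

sample = '/dev/vdb1: UUID="1234-abcd" TYPE="ext4" LABEL="root"\n'
info = single_dev_info(sample, "/dev/vdb1")
print(info["TYPE"])  # ext4
```

Raising when zero or multiple devices come back (as the patch does) keeps callers from silently acting on the wrong device's metadata.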
76 | === modified file 'curtin/commands/block_meta.py' |
77 | --- curtin/commands/block_meta.py 2016-08-11 17:21:40 +0000 |
78 | +++ curtin/commands/block_meta.py 2016-09-15 18:06:47 +0000 |
79 | @@ -589,7 +589,8 @@ |
80 | # Figure out what point should be |
81 | while len(path) > 0 and path[0] == "/": |
82 | path = path[1:] |
83 | - mount_point = os.path.join(state['target'], path) |
84 | + mount_point = os.path.sep.join([state['target'], path]) |
85 | + mount_point = os.path.normpath(mount_point) |
86 | |
87 | # Create mount point if does not exist |
88 | util.ensure_dir(mount_point) |
89 | |
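The block_meta mount-point change above joins on the path separator and normalizes rather than using `os.path.join`, which silently discards the target prefix when a later component is absolute. A quick illustration (the paths are made up; block_meta also strips leading slashes first, so this shows the general hazard the sep-join pattern avoids):

```python
import os

target = "/tmp/target"

# os.path.join throws away everything before an absolute component:
print(os.path.join(target, "/home"))  # /home -- escapes the target

# Joining on the separator and normalizing, as the block_meta change does,
# keeps the result under the target and collapses doubled separators:
mount_point = os.path.normpath(os.path.sep.join([target, "/home"]))
print(mount_point)  # /tmp/target/home
```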
90 | === modified file 'curtin/commands/curthooks.py' |
91 | --- curtin/commands/curthooks.py 2016-08-22 16:20:23 +0000 |
92 | +++ curtin/commands/curthooks.py 2016-09-15 18:06:47 +0000 |
93 | @@ -648,7 +648,8 @@ |
94 | stack_prefix = state.get('report_stack_prefix', '') |
95 | |
96 | with events.ReportEventStack( |
97 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
98 | + name=stack_prefix + '/writing-config', |
99 | + reporting_enabled=True, level="INFO", |
100 | description="writing config files and configuring apt"): |
101 | write_files(cfg, target) |
102 | do_apt_config(cfg, target) |
103 | @@ -669,7 +670,8 @@ |
104 | data=None, target=target) |
105 | |
106 | with events.ReportEventStack( |
107 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
108 | + name=stack_prefix + '/installing-kernel', |
109 | + reporting_enabled=True, level="INFO", |
110 | description="installing kernel"): |
111 | setup_zipl(cfg, target) |
112 | install_kernel(cfg, target) |
113 | @@ -678,27 +680,38 @@ |
114 | restore_dist_interfaces(cfg, target) |
115 | |
116 | with events.ReportEventStack( |
117 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
118 | + name=stack_prefix + '/setting-up-swap', |
119 | + reporting_enabled=True, level="INFO", |
120 | description="setting up swap"): |
121 | add_swap(cfg, target, state.get('fstab')) |
122 | |
123 | with events.ReportEventStack( |
124 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
125 | - description="apply networking"): |
126 | + name=stack_prefix + '/apply-networking-config', |
127 | + reporting_enabled=True, level="INFO", |
128 | + description="apply networking config"): |
129 | apply_networking(target, state) |
130 | |
131 | with events.ReportEventStack( |
132 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
133 | + name=stack_prefix + '/writing-etc-fstab', |
134 | + reporting_enabled=True, level="INFO", |
135 | description="writing etc/fstab"): |
136 | copy_fstab(state.get('fstab'), target) |
137 | |
138 | with events.ReportEventStack( |
139 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
140 | + name=stack_prefix + '/configuring-multipath', |
141 | + reporting_enabled=True, level="INFO", |
142 | description="configuring multipath"): |
143 | detect_and_handle_multipath(cfg, target) |
144 | |
145 | with events.ReportEventStack( |
146 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
147 | + name=stack_prefix + '/installing-missing-packages', |
148 | + reporting_enabled=True, level="INFO", |
149 | + description="installing missing packages"): |
150 | + install_missing_packages(cfg, target) |
151 | + |
152 | + with events.ReportEventStack( |
153 | + name=stack_prefix + '/system-upgrade', |
154 | + reporting_enabled=True, level="INFO", |
155 | description="updating packages on target system"): |
156 | system_upgrade(cfg, target) |
157 | |
158 | |
159 | === modified file 'curtin/commands/install.py' |
160 | --- curtin/commands/install.py 2016-07-13 07:57:01 +0000 |
161 | +++ curtin/commands/install.py 2016-09-15 18:06:47 +0000 |
162 | @@ -72,6 +72,17 @@ |
163 | pass |
164 | |
165 | |
166 | +def copy_install_log(logfile, target, log_target_path): |
167 | + """Copy curtin install log file to target system""" |
168 | + if not logfile: |
169 | + LOG.warn('Cannot copy curtin install log to target, no log exists') |
170 | + return |
171 | + |
172 | + LOG.debug('Copying curtin install log to target') |
173 | + target = os.path.sep.join([target, log_target_path]) |
174 | + shutil.copy(logfile, os.path.normpath(target)) |
175 | + |
176 | + |
177 | def writeline(fname, output): |
178 | """Write a line to a file.""" |
179 | if not output.endswith('\n'): |
180 | @@ -421,6 +432,10 @@ |
181 | legacy_reporter.report_failure(exp_msg) |
182 | raise e |
183 | finally: |
184 | + log_target_path = instcfg.get('save_install_log', |
185 | + '/root/curtin-install.log') |
186 | + if log_target_path: |
187 | + copy_install_log(logfile, workingd.target, log_target_path) |
188 | for d in ('sys', 'dev', 'proc'): |
189 | util.do_umount(os.path.join(workingd.target, d)) |
190 | mounted = block.get_mountpoints() |
191 | |

192 | === modified file 'curtin/reporter/events.py' |
193 | --- curtin/reporter/events.py 2016-05-06 17:21:33 +0000 |
194 | +++ curtin/reporter/events.py 2016-09-15 18:06:47 +0000 |
195 | @@ -84,6 +84,10 @@ |
196 | self.post_files = post_files |
197 | if result not in status: |
198 | raise ValueError("Invalid result: %s" % result) |
199 | + if self.result == status.WARN: |
200 | + self.level = "WARN" |
201 | + elif self.result == status.FAIL: |
202 | + self.level = "ERROR" |
203 | |
204 | def as_string(self): |
205 | return '{0}: {1}: {2}: {3}'.format( |
206 | @@ -95,10 +99,6 @@ |
207 | data['result'] = self.result |
208 | if self.post_files: |
209 | data['files'] = _collect_file_info(self.post_files) |
210 | - if self.result == status.WARN: |
211 | - data['level'] = "WARN" |
212 | - elif self.result == status.FAIL: |
213 | - data['level'] = "ERROR" |
214 | return data |
215 | |
216 | |
217 | @@ -122,10 +122,6 @@ |
218 | |
219 | See :py:func:`.report_event` for parameter details. |
220 | """ |
221 | - if result == status.SUCCESS: |
222 | - event_description = "finished: " + event_description |
223 | - else: |
224 | - event_description = "failed: " + event_description |
225 | event = FinishReportingEvent(event_name, event_description, result, |
226 | post_files=post_files, level=level) |
227 | return report_event(event) |
228 | @@ -141,7 +137,6 @@ |
229 | :param event_description: |
230 | A human-readable description of the event that has occurred. |
231 | """ |
232 | - event_description = "started: " + event_description |
233 | event = ReportingEvent(START_EVENT_TYPE, event_name, event_description, |
234 | level=level) |
235 | return report_event(event) |
236 | |
237 | === modified file 'curtin/util.py' |
238 | --- curtin/util.py 2016-08-29 17:08:26 +0000 |
239 | +++ curtin/util.py 2016-09-15 18:06:47 +0000 |
240 | @@ -195,8 +195,9 @@ |
241 | 'Command: %(cmd)s\n' |
242 | 'Exit code: %(exit_code)s\n' |
243 | 'Reason: %(reason)s\n' |
244 | - 'Stdout: %(stdout)r\n' |
245 | - 'Stderr: %(stderr)r') |
246 | + 'Stdout: %(stdout)s\n' |
247 | + 'Stderr: %(stderr)s') |
248 | + stdout_indent_level = 8 |
249 | |
250 | def __init__(self, stdout=None, stderr=None, |
251 | exit_code=None, cmd=None, |
252 | @@ -217,14 +218,14 @@ |
253 | self.exit_code = exit_code |
254 | |
255 | if not stderr: |
256 | - self.stderr = '' |
257 | + self.stderr = "''" |
258 | else: |
259 | - self.stderr = stderr |
260 | + self.stderr = self._indent_text(stderr) |
261 | |
262 | if not stdout: |
263 | - self.stdout = '' |
264 | + self.stdout = "''" |
265 | else: |
266 | - self.stdout = stdout |
267 | + self.stdout = self._indent_text(stdout) |
268 | |
269 | if reason: |
270 | self.reason = reason |
271 | @@ -241,6 +242,11 @@ |
272 | } |
273 | IOError.__init__(self, message) |
274 | |
275 | + def _indent_text(self, text): |
276 | + if type(text) == bytes: |
277 | + text = text.decode() |
278 | + return text.replace('\n', '\n' + ' ' * self.stdout_indent_level) |
279 | + |
280 | |
281 | class LogTimer(object): |
282 | def __init__(self, logfunc, msg): |
283 | |
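The ProcessExecutionError change above indents continuation lines of stdout/stderr so multi-line command output stays readable inside the error message. A standalone sketch of that `_indent_text` behavior (the logic mirrors the patch; the free-function form is illustrative):

```python
def indent_text(text, indent_level=8):
    """Decode bytes if needed and indent every continuation line."""
    if isinstance(text, bytes):
        text = text.decode()
    return text.replace("\n", "\n" + " " * indent_level)

stderr = "mkfs.ext4: device busy\nretry failed"
# Continuation lines line up under the 'Stderr: ' prefix:
print("Stderr: %s" % indent_text(stderr))
```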
284 | === added file 'doc/devel/README-storagetests.txt' |
285 | --- doc/devel/README-storagetests.txt 1970-01-01 00:00:00 +0000 |
286 | +++ doc/devel/README-storagetests.txt 2016-09-15 18:06:47 +0000 |
287 | @@ -0,0 +1,191 @@ |
288 | +== Background == |
289 | +Since there are many possible configurations for curtin disk partitioning, and |
290 | +some aspects of disk partitioning such as data preservation and shutting down |
291 | +previously existing storage devices cannot be tested with vmtests, a second |
292 | +test suite, 'storagetests' is provided to verify these features. The |
293 | +storagetests run only the disk partitioning stage of curtin installation, and |
294 | +validate that storage configuration was done properly in more detail than the |
295 | +vmtests. In order to do this, the storage tests do their validation inside of |
296 | +the vm instance they are testing on, and report their results back to the |
297 | +storagetest_runner using curtin's reporting module. |
298 | + |
299 | +== Storage Tests == |
300 | +The storage tests call curtin's block_meta.meta_custom directly, rather than |
301 | +using curtin's cli. Since curtin keeps track of its internal state using |
302 | +system environment variables, modifications to the system environment are made |
303 | +before the tests start. Curtin configuration and calls to block meta functions |
304 | +are wrapped by the storagetests.CurtinConfig object. All storage test classes |
305 | +inherit from storagetests.BaseStorageTest, which configures test logging and |
306 | +reporting. The storage validation code is in classes in storagetests.verifiers |
307 | +which are used as mix-ins to the test runner classes. |
308 | + |
309 | +Overview of a storage test running: |
310 | + 1. The storage test class being run inherits from BaseStorageTest as well as |
311 | + individual storage verifiers relevant to the test class. |
312 | + 2. Before any testing starts, a setUpClass defined in BaseStorageTest runs in |
313 | + order to configure logging and reporting. This method looks for the |
314 | + existence of conf file at: storagetests.STORAGE_TEST_REPORTER_CONF_FILE. If |
315 | + this file is present, then it will be loaded and used as configuration for |
316 | + curtin.reporting. If the file is not present, no error will be thrown, |
317 | + allowing the test suite to be used manually outside of storagetest_runner. |
318 | + Logging is set to verbose and a separate log file is created for each test |
319 | + class, starting at the base dir in storagetests.STORAGE_TEST_LOG_DIR. |
320 | + 3. The run() method defined in unittests.TestCase is overridden in |
321 | + BaseStorageTest in order to report success/fail for the test back to |
322 | + storagetest_runner even in the event of an exception while running a test. |
323 | + The actual tests are started by making a call with super() to the real |
324 | + TestCase.run() method. |
325 | + 4. The test runner class has an associated list of test names and |
326 | + configuration files. A base test function in the class is used to run a |
327 | + test on every storage configuration in the list. This is done using |
328 | + nose_parameterized, so that each test instance has a separate test function |
329 | + name. |
330 | + 5. The test configuration files do not have any identifier for the disk to run |
331 | + on present by default, and look for a list of disk id -> path mappings in a |
332 | + json file located at storagetests.STORAGE_TEST_DISK_CONF_FILE. By default |
333 | + the tests use the mapping disk1 -> vdb, disk2 -> vdc... |
334 | + 6. The test runner class calls _config_tester, located in BaseStorageTest, |
335 | + which handles test running. This first checks if the config file for the |
336 | + test has been listed as a disabled test for the test class and skips it if |
337 | + it has. It then runs the configuration once, and if data preservation is to |
338 | + be tested, writes test files into the mountpoints for the configuration. |
339 | + Then, it verifies that storage devices have been configured correctly, |
340 | + running all verification functions present in the test class. Verification |
341 | + functions must have names starting with '_test_'. They start with a leading |
342 | + underscore to ensure that the nosetest suite does not run them on their |
343 | + own. If data preservation is being tested, the config file is then run |
344 | + again, under a new environment, then the mountpoints are checked to ensure |
345 | + that the expected test files are intact. |
346 | + 7. Once the test verification is complete, the BaseStorageTest.run() function |
347 | + dumps the number of errors and failures for the test case into a json file, |
348 | + and then reports back to the storagetest_runner if reporting has been |
349 | + configured. In the reported event, the result is set to FAIL if the test |
350 | + did not pass completely, and the log file from test running is posted. |
351 | + |
352 | +The tests for block_meta.clear_holders() have the same overall structure, but |
353 | +instead of testing whether the storage configs were handled correctly, they |
354 | +configure storage, verify that the expected devices show up in the |
355 | +/sys/block/<name>/holders/ dir for each of the storage config entries, |
356 | +then call clear_holders() on each disk that has been configured and ensure that |
357 | +the devices have been cleared properly. |
358 | + |
359 | +== Storage Test Runner == |
360 | +The storage test runner is based on curtin vmtests, and allows all of the |
361 | +storage config tests to be run on all supported ubuntu releases and with a |
362 | +variety of different disk configurations. |
363 | + |
364 | +Overview of storagetest_runner: |
365 | + 1. The storagetest_runner uses the same image store as vmtests to retrieve |
366 | + images and query the latest image to use. However, it does not have a |
367 | + mechanism for calling for images to be synced, as this is handled by |
368 | + vmtests already |
369 | + 2. In order to get the test code into the target vm, a tarball is generated of |
370 | + the base directory for the copy of curtin the tests are running off of. |
371 | + A webserver is then set up and run in a separate thread from the main test |
372 | + using the python3 http.server, and a context manager to change cwd to the |
373 | + tmpdir that the curtin tarball has been written into is entered once the vm |
374 | + is started so that the webserver will be able to serve the tarball. |
375 | + 3. In addition to the curtin tarball, the test runner needs to pass reporting |
376 | + configuration and a list of paths to use for test disks into the vm. These |
377 | + test configuration files are written into the directory under the vmtest's |
378 | + tmpdir that is being used as the base for the webserver so that they are |
379 | + available to the vm as well. |
380 | + 4. The test runner then generates cloud-init configuration to install the |
381 | + deps needed to run the storagetests and download the needed files from the |
382 | + httpd. Since these tests need to be able to run on older systems, the |
383 | + cc scripts can be generated either using python3 or python2. |
384 | + 5. While the tests are running, tools.report_webhook_logger.CaptureReporting |
385 | + is used to log all curtin reporting events from the storagetests. Since the |
386 | + tests run block_meta in the same python instance as the test code, the |
387 | + reporting configuration applied by the test runner will apply to block meta |
388 | + as well, so the reporting events received by the test runner need to be |
389 | + filtered for tests sent by the storagetest suite rather than by block_meta. |
390 | + Once the reporting events have been filtered to just the storagetest |
391 | + start/finish events, the finish event results are verified, and the curtin |
392 | + log generated during the test is decoded and written to the log dir of the |
393 | + test runner, allowing inspection if there was a failure. |
394 | + 6. The storagetest_runner only reports success if all of the storage tests run |
395 | + in the vm were either skipped or succeeded, and uses the same rules to |
396 | + decide whether or not to delete the tmpdir it created depending on test |
397 | + results as vmtests. |
398 | + |
399 | +== Debugging == |
400 | +For the storagetest runner, the same general debugging steps can be used as for |
401 | +vmtests, so refer to README-vmtest.txt. For debugging specific storage test |
402 | +failures, interactive tests have been provided but disabled for each of the |
403 | +storagetest_runner test files. They can be run manually using: |
404 | + 'nosetests3 storagetest_runner/test_<name>.py:<Rel><Type><Test>Interactive' |
405 | +The interactive tests will boot the same system as would be used for non |
406 | +interactive tests, using curses mode for the graphics so that it will work over |
407 | +ssh, and do all of the pre configuration needed to start the tests without |
408 | +actually starting them. The tests can then be run manually and the output can |
409 | +be inspected to determine the cause of failure. |
410 | + |
411 | +== Running == |
412 | +To run the storagetest_runner on all tests, use: |
413 | + 'make run_storagetests' |
414 | + |
415 | +A single test can be run using: |
416 | + 'nosetests3 storagetest_runner/test_<name>.py' |
417 | + |
418 | +The storagetest_runner should be run as a regular user as it does not sync |
419 | +images, which was the only part of vmtests that required root. The |
420 | +storagetest_runner test classes can be handled in parallel by nosetests using |
421 | + 'nosetests3 --processes=-1 tests/storagetest_runner' |
422 | +Note that '-1' can be replaced with how many concurrent processes are desired, |
423 | +but will use 1 process per cpu core if left at -1. |
424 | + |
425 | +Since each vm will have to install several packages before starting, having a |
426 | +local repo mirror may help test speed. |
427 | + |
428 | +To run the actual storagetests, run: |
429 | + 'make storagetests' or 'make storagetests3' |
430 | +These must be run as root, and will destroy data on the system they run on, so |
431 | +they should only be run in a vm or on a test system with care taken to ensure |
432 | +that the disks specified for use by storagetests are not important |
433 | + |
434 | +== Environment Variables == |
435 | +Many of the environment variables for vmtests also work on storagetest_runner, |
436 | +but not all. Below are the environment variables which work for |
437 | +storagetest_runner. |
438 | + |
439 | + * CURTIN_VMTEST_KEEP_DATA_PASS CURTIN_VMTEST_KEEP_DATA_FAIL: |
440 | + default: |
441 | + CURTIN_VMTEST_KEEP_DATA_PASS=none |
442 | + CURTIN_VMTEST_KEEP_DATA_FAIL=all |
443 | + These 2 variables determine what portions of the temporary |
444 | + test data are kept. |
445 | + |
446 | + The variables contain a comma ',' delimited list of directories |
447 | + that should be kept in the case of a pass or fail. Additionally, |
448 | + the values 'all' and 'none' are accepted. |
449 | + |
450 | + Each storagetest_runner instance that runs has its own sub-directory |
451 | + under the top level CURTIN_VMTEST_TOPDIR. In that directory are |
452 | + directories: |
453 | + boot: files to be served by storagetest_runner internal webserver to |
454 | + configure storagetests |
455 | + install: cloud-init user-data for target image seed disk |
456 | + disks: target disks to be used for tests |
457 | + logs: boot log and storage test logs |
458 | + collect: unused in storagetests |
459 | + |
460 | + * CURTIN_VMTEST_TOPDIR: default $TMPDIR/vmtest-<timestamp> |
461 | + vmtest and storagetest_runner put all test data under this value. |
462 | + By default, it creates a directory in TMPDIR (/tmp) named as |
463 | + "vmtest-<timestamp>" |
464 | + |
465 | + If you set this value, you must ensure that the directory is either |
466 | + non-existent or clean. |
467 | + |
468 | + * CURTIN_VMTEST_LOG: default $TMPDIR/vmtest-<timestamp>.log |
469 | + vmtest and storagetest_runner writes extended log information to this file. |
470 | + The default puts the log along side the TOPDIR. |
471 | + |
472 | + * IMAGE_DIR: default /srv/images |
473 | + vmtest image sync (used by storagetest_runner) keeps a mirror of maas |
474 | + ephemeral images in this directory |
475 | + |
476 | +Environment 'boolean' values: |
477 | + For boolean environment variables the value is considered True |
478 | + if it is any value other than case insensitive 'false', '' or "0" |
479 | |
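The boolean-environment rule stated above (true unless the value is a case-insensitive 'false', empty, or '0') can be sketched as a small helper; `env_bool` and the example variable name are illustrative, not part of curtin:

```python
import os

def env_bool(name, default=False):
    """True for any set value except case-insensitive 'false', '' or '0'."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val.lower() not in ("false", "", "0")

os.environ["EXAMPLE_FLAG"] = "False"
print(env_bool("EXAMPLE_FLAG"))  # False
os.environ["EXAMPLE_FLAG"] = "yes"
print(env_bool("EXAMPLE_FLAG"))  # True
```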
480 | === added directory 'examples/storagetests' |
481 | === added file 'examples/storagetests/allindata.yaml' |
482 | --- examples/storagetests/allindata.yaml 1970-01-01 00:00:00 +0000 |
483 | +++ examples/storagetests/allindata.yaml 2016-09-15 18:06:47 +0000 |
484 | @@ -0,0 +1,222 @@ |
485 | +showtrace: true |
486 | +storage: |
487 | + version: 1 |
488 | + config: |
489 | + - id: disk1 |
490 | + type: disk |
491 | + ptable: gpt |
492 | + model: QEMU HARDDISK |
493 | + name: main_disk |
494 | + grub_device: 1 |
495 | + wipe: superblock |
496 | + - id: bios_boot_partition |
497 | + type: partition |
498 | + size: 1MB |
499 | + device: disk1 |
500 | + flag: bios_grub |
501 | + number: 1 |
502 | + wipe: superblock |
503 | + - id: sda1 |
504 | + type: partition |
505 | + size: 1GB |
506 | + device: disk1 |
507 | + number: 2 # XXX: we really need to stop using id with DiskPartnum |
508 | + wipe: superblock |
509 | + - id: sda2 |
510 | + type: partition |
511 | + size: 1GB |
512 | + device: disk1 |
513 | + number: 3 # XXX: we really need to stop using id with DiskPartnum |
514 | + wipe: superblock |
515 | + - id: sda3 |
516 | + type: partition |
517 | + size: 1GB |
518 | + device: disk1 |
519 | + number: 4 # XXX: we really need to stop using id with DiskPartnum |
520 | + wipe: superblock |
521 | + - id: sda4 |
522 | + type: partition |
523 | + size: 1GB |
524 | + device: disk1 |
525 | + number: 5 # XXX: we really need to stop using id with DiskPartnum |
526 | + wipe: superblock |
527 | + - id: sda5 |
528 | + type: partition |
529 | + size: 3GB |
530 | + device: disk1 |
531 | + number: 6 # XXX: we really need to stop using id with DiskPartnum |
532 | + wipe: superblock |
533 | + - id: disk2 |
534 | + type: disk |
535 | + ptable: gpt |
536 | + model: QEMU HARDDISK |
537 | + name: second_disk |
538 | + wipe: superblock |
539 | + - id: sdb1 |
540 | + type: partition |
541 | + size: 1GB |
542 | + device: disk2 |
543 | + wipe: superblock |
544 | + - id: sdb2 |
545 | + type: partition |
546 | + size: 1GB |
547 | + device: disk2 |
548 | + wipe: superblock |
549 | + - id: sdb3 |
550 | + type: partition |
551 | + size: 1GB |
552 | + device: disk2 |
553 | + wipe: superblock |
554 | + - id: sdb4 |
555 | + type: partition |
556 | + size: 1GB |
557 | + device: disk2 |
558 | + wipe: superblock |
559 | + - id: disk3 |
560 | + type: disk |
561 | + ptable: gpt |
562 | + model: QEMU HARDDISK |
563 | + name: third_disk |
564 | + wipe: superblock |
565 | + - id: sdc1 |
566 | + type: partition |
567 | + size: 1GB |
568 | + device: disk3 |
569 | + wipe: superblock |
570 | + - id: sdc2 |
571 | + type: partition |
572 | + size: 1GB |
573 | + device: disk3 |
574 | + wipe: superblock |
575 | + - id: sdc3 |
576 | + type: partition |
577 | + size: 1GB |
578 | + device: disk3 |
579 | + wipe: superblock |
580 | + - id: sdc4 |
581 | + type: partition |
582 | + size: 1GB |
583 | + device: disk3 |
584 | + wipe: superblock |
585 | + - id: disk4 |
586 | + type: disk |
587 | + ptable: gpt |
588 | + model: QEMU HARDDISK |
589 | + name: fourth_disk |
590 | + wipe: superblock |
591 | + - id: sdd1 |
592 | + type: partition |
593 | + size: 1GB |
594 | + device: disk4 |
595 | + wipe: superblock |
596 | + - id: sdd2 |
597 | + type: partition |
598 | + size: 1GB |
599 | + device: disk4 |
600 | + wipe: superblock |
601 | + - id: sdd3 |
602 | + type: partition |
603 | + size: 1GB |
604 | + device: disk4 |
605 | + wipe: superblock |
606 | + - id: sdd4 |
607 | + type: partition |
608 | + size: 1GB |
609 | + device: disk4 |
610 | + wipe: superblock |
611 | + - id: mddevice0 |
612 | + name: md0 |
613 | + type: raid |
614 | + raidlevel: 5 |
615 | + devices: |
616 | + - sda1 |
617 | + - sdb1 |
618 | + - sdc1 |
619 | + spare_devices: |
620 | + - sdd1 |
621 | + - id: mddevice1 |
622 | + name: md1 |
623 | + type: raid |
624 | + raidlevel: raid6 |
625 | + devices: |
626 | + - sda2 |
627 | + - sdb2 |
628 | + - sdc2 |
629 | + - sdd2 |
630 | + spare_devices: |
631 | + - sda3 |
632 | + - id: mddevice2 |
633 | + name: md2 |
634 | + type: raid |
635 | + raidlevel: 1 |
636 | + devices: |
637 | + - sda4 |
638 | + - sdb3 |
639 | + spare_devices: |
640 | + - sdc3 |
641 | + - sdb4 |
642 | + - id: mddevice3 |
643 | + name: md3 |
644 | + type: raid |
645 | + raidlevel: raid0 |
646 | + devices: |
647 | + - sdc4 |
648 | + - sdd3 |
649 | + - id: volgroup1 |
650 | + name: vg1 |
651 | + type: lvm_volgroup |
652 | + devices: |
653 | + - mddevice0 |
654 | + - mddevice1 |
655 | + - mddevice2 |
656 | + - mddevice3 |
657 | + - id: lvmpart1 |
658 | + name: lv1 |
659 | + size: 1G |
660 | + type: lvm_partition |
661 | + volgroup: volgroup1 |
662 | + - id: lvmpart2 |
663 | + name: lv2 |
664 | + size: 1G |
665 | + type: lvm_partition |
666 | + volgroup: volgroup1 |
667 | + - id: lvmpart3 |
668 | + name: lv3 |
669 | + type: lvm_partition |
670 | + volgroup: volgroup1 |
671 | + - id: dmcrypt0 |
672 | + type: dm_crypt |
673 | + volume: lvmpart3 |
674 | + key: testkey |
675 | + dm_name: dmcrypt0 |
676 | + - id: lv1_fs |
677 | + name: storage |
678 | + type: format |
679 | + fstype: ext3 |
680 | + volume: lvmpart1 |
681 | + - id: lv2_fs |
682 | + name: storage |
683 | + type: format |
684 | + fstype: ext4 |
685 | + volume: lvmpart2 |
686 | + - id: dmcrypt_fs |
687 | + name: storage |
688 | + type: format |
689 | + fstype: xfs |
690 | + volume: dmcrypt0 |
691 | + - id: sda5_root |
692 | + type: format |
693 | + fstype: ext4 |
694 | + volume: sda5 |
695 | + - id: sda5_mount |
696 | + type: mount |
697 | + path: / |
698 | + device: sda5_root |
699 | + - id: lv1_mount |
700 | + type: mount |
701 | + path: /srv/data |
702 | + device: lv1_fs |
703 | + - id: lv2_mount |
704 | + type: mount |
705 | + path: /srv/backup |
706 | + device: lv2_fs |
707 | |
708 | === added file 'examples/storagetests/basicdos.yaml' |
709 | --- examples/storagetests/basicdos.yaml 1970-01-01 00:00:00 +0000 |
710 | +++ examples/storagetests/basicdos.yaml 2016-09-15 18:06:47 +0000 |
711 | @@ -0,0 +1,63 @@ |
712 | +storage: |
713 | + version: 1 |
714 | + config: |
715 | + - id: disk1 |
716 | + type: disk |
717 | + ptable: msdos |
718 | + name: main_disk |
719 | + wipe: superblock |
720 | + - id: disk1p1 |
721 | + type: partition |
722 | + number: 1 |
723 | + size: 3GB |
724 | + device: disk1 |
725 | + flag: boot |
726 | + wipe: superblock |
727 | + - id: disk1p2 |
728 | + type: partition |
729 | + number: 2 |
730 | + size: 1GB |
731 | + device: disk1 |
732 | + wipe: superblock |
733 | + - id: disk1p3 |
734 | + type: partition |
735 | + size: 1GB |
736 | + device: disk1 |
737 | + wipe: superblock |
738 | + - id: disk1p4 |
739 | + type: partition |
740 | + size: 1GB |
741 | + device: disk1 |
742 | + wipe: superblock |
743 | + - id: disk1p1_root |
744 | + type: format |
745 | + fstype: ext4 |
746 | + volume: disk1p1 |
747 | + - id: disk1p2_home |
748 | + type: format |
749 | + fstype: ext4 |
750 | + volume: disk1p2 |
751 | + - id: disk1p1_mount |
752 | + type: mount |
753 | + path: / |
754 | + device: disk1p1_root |
755 | + - id: disk1p2_mount |
756 | + type: mount |
757 | + path: /home |
758 | + device: disk1p2_home |
759 | + - id: disk2 |
760 | + type: disk |
761 | + name: sparedisk |
762 | + wipe: superblock |
763 | + - id: disk3 |
764 | + type: disk |
765 | + name: btrfs_volume |
766 | + wipe: superblock |
767 | + - id: btrfs_disk_fmt_id |
768 | + type: format |
769 | + fstype: btrfs |
770 | + volume: disk3 |
771 | + - id: btrfs_disk_mnt_id |
772 | + type: mount |
773 | + path: /btrfs |
774 | + device: btrfs_disk_fmt_id |
775 | |
776 | === added file 'examples/storagetests/bcache_basic.yaml' |
777 | --- examples/storagetests/bcache_basic.yaml 1970-01-01 00:00:00 +0000 |
778 | +++ examples/storagetests/bcache_basic.yaml 2016-09-15 18:06:47 +0000 |
779 | @@ -0,0 +1,52 @@ |
780 | +storage: |
781 | + version: 1 |
782 | + config: |
783 | + - id: disk1 |
784 | + type: disk |
785 | + ptable: gpt |
786 | + name: main_disk |
787 | + wipe: superblock |
788 | + grub_device: true |
789 | + - id: disk2 |
790 | + type: disk |
791 | + name: cache_disk |
792 | + wipe: superblock |
793 | + ptable: gpt |
794 | + - id: disk1p1 |
795 | + type: partition |
796 | + size: 3GB |
797 | + device: disk1 |
798 | + flag: boot |
799 | + wipe: superblock |
800 | + - id: disk1p2 |
801 | + type: partition |
802 | + size: 4GB |
803 | + device: disk1 |
804 | + wipe: superblock |
805 | + - id: disk2p1 |
806 | + type: partition |
807 | + size: 2GB |
808 | + device: disk2 |
809 | + wipe: superblock |
810 | + - id: bcache0 |
811 | + type: bcache |
812 | + name: cache_one |
813 | + cache_device: disk2p1 |
814 | + backing_device: disk1p2 |
815 | + cache_mode: writethrough |
816 | + - id: disk1p1_root |
817 | + type: format |
818 | + fstype: ext4 |
819 | + volume: disk1p1 |
820 | + - id: cached_home |
821 | + type: format |
822 | + fstype: ext4 |
823 | + volume: bcache0 |
824 | + - id: disk1p1_mount |
825 | + type: mount |
826 | + path: / |
827 | + device: disk1p1_root |
828 | + - id: home_mount |
829 | + type: mount |
830 | + path: /home |
831 | + device: cached_home |
832 | |
833 | === added file 'examples/storagetests/bcache_double.yaml' |
834 | --- examples/storagetests/bcache_double.yaml 1970-01-01 00:00:00 +0000 |
835 | +++ examples/storagetests/bcache_double.yaml 2016-09-15 18:06:47 +0000 |
836 | @@ -0,0 +1,75 @@ |
837 | +storage: |
838 | + version: 1 |
839 | + config: |
840 | + - id: disk1 |
841 | + type: disk |
842 | + ptable: gpt |
843 | + name: main_disk |
844 | + wipe: superblock |
845 | + grub_device: true |
846 | + - id: disk2 |
847 | + type: disk |
848 | + name: cache_disk |
849 | + wipe: superblock |
850 | + ptable: gpt |
851 | + - id: disk3 |
852 | + type: disk |
853 | + wipe: superblock |
854 | + name: second_cache_disk |
855 | + - id: disk1p1 |
856 | + type: partition |
857 | + size: 3GB |
858 | + device: disk1 |
859 | + flag: boot |
860 | + wipe: superblock |
861 | + - id: disk1p2 |
862 | + type: partition |
863 | + size: 2GB |
864 | + device: disk1 |
865 | + wipe: superblock |
866 | + - id: disk1p3 |
867 | + type: partition |
868 | + size: 2GB |
869 | + device: disk1 |
870 | + wipe: superblock |
871 | + - id: disk2p1 |
872 | + type: partition |
873 | + size: 2GB |
874 | + device: disk2 |
875 | + wipe: superblock |
876 | + - id: bcache0 |
877 | + type: bcache |
878 | + name: cache_one |
879 | + cache_device: disk2p1 |
880 | + backing_device: disk1p2 |
881 | + cache_mode: writethrough |
882 | + - id: second_bcache |
883 | + type: bcache |
884 | + name: cache_two |
885 | + cache_device: disk3 |
886 | + backing_device: disk1p3 |
887 | + cache_mode: writeback |
888 | + - id: disk1p1_root |
889 | + type: format |
890 | + fstype: ext4 |
891 | + volume: disk1p1 |
892 | + - id: cached_home |
893 | + type: format |
894 | + fstype: ext4 |
895 | + volume: bcache0 |
896 | + - id: cached_srv |
897 | + type: format |
898 | + fstype: btrfs |
899 | + volume: second_bcache |
900 | + - id: disk1p1_mount |
901 | + type: mount |
902 | + path: / |
903 | + device: disk1p1_root |
904 | + - id: home_mount |
905 | + type: mount |
906 | + path: /home |
907 | + device: cached_home |
908 | + - id: srv_mount |
909 | + type: mount |
910 | + path: /srv |
911 | + device: cached_srv |
912 | |
913 | === added file 'examples/storagetests/bcache_shared_cache.yaml' |
914 | --- examples/storagetests/bcache_shared_cache.yaml 1970-01-01 00:00:00 +0000 |
915 | +++ examples/storagetests/bcache_shared_cache.yaml 2016-09-15 18:06:47 +0000 |
916 | @@ -0,0 +1,71 @@ |
917 | +storage: |
918 | + version: 1 |
919 | + config: |
920 | + - id: disk1 |
921 | + type: disk |
922 | + ptable: gpt |
923 | + name: main_disk |
924 | + wipe: superblock |
925 | + grub_device: true |
926 | + - id: disk2 |
927 | + type: disk |
928 | + name: cache_disk |
929 | + wipe: superblock |
930 | + ptable: gpt |
931 | + - id: disk1p1 |
932 | + type: partition |
933 | + size: 3GB |
934 | + device: disk1 |
935 | + flag: boot |
936 | + wipe: superblock |
937 | + - id: disk1p2 |
938 | + type: partition |
939 | + size: 4GB |
940 | + device: disk1 |
941 | + wipe: superblock |
942 | + - id: disk1p3 |
943 | + type: partition |
944 | + size: 2GB |
945 | + device: disk1 |
946 | + wipe: superblock |
947 | + - id: disk2p1 |
948 | + type: partition |
949 | + size: 2GB |
950 | + device: disk2 |
951 | + wipe: superblock |
952 | + - id: bcache0 |
953 | + type: bcache |
954 | + name: cache_one |
955 | + cache_device: disk2p1 |
956 | + backing_device: disk1p2 |
957 | + cache_mode: writethrough |
958 | + - id: bcache1 |
959 | + type: bcache |
960 | + name: cache_two |
961 | + cache_device: disk2p1 |
962 | + backing_device: disk1p3 |
963 | + cache_mode: writeback |
964 | + - id: disk1p1_root |
965 | + type: format |
966 | + fstype: ext4 |
967 | + volume: disk1p1 |
968 | + - id: cached_home |
969 | + type: format |
970 | + fstype: ext4 |
971 | + volume: bcache0 |
972 | + - id: cached_srv |
973 | + type: format |
974 | + fstype: xfs |
975 | + volume: bcache1 |
976 | + - id: disk1p1_mount |
977 | + type: mount |
978 | + path: / |
979 | + device: disk1p1_root |
980 | + - id: home_mount |
981 | + type: mount |
982 | + path: /home |
983 | + device: cached_home |
984 | + - id: srv_mount |
985 | + type: mount |
986 | + path: /srv |
987 | + device: cached_srv |
988 | |
989 | === added file 'examples/storagetests/crypt_basic.yaml' |
990 | --- examples/storagetests/crypt_basic.yaml 1970-01-01 00:00:00 +0000 |
991 | +++ examples/storagetests/crypt_basic.yaml 2016-09-15 18:06:47 +0000 |
992 | @@ -0,0 +1,43 @@ |
993 | +storage: |
994 | + version: 1 |
995 | + config: |
996 | + - id: disk1 |
997 | + type: disk |
998 | + ptable: gpt |
999 | + name: main_disk |
1000 | + wipe: superblock |
1001 | + grub_device: true |
1002 | + - id: disk1p1 |
1003 | + type: partition |
1004 | + number: 1 |
1005 | + size: 3GB |
1006 | + device: disk1 |
1007 | + flag: boot |
1008 | + wipe: superblock |
1009 | + - id: disk1p2 |
1010 | + type: partition |
1011 | + number: 2 |
1012 | + size: 1GB |
1013 | + device: disk1 |
1014 | + wipe: superblock |
1015 | + - id: disk1p1_root |
1016 | + type: format |
1017 | + fstype: ext4 |
1018 | + volume: disk1p1 |
1019 | + - id: crypt_id |
1020 | + type: dm_crypt |
1021 | + dm_name: crypt0 |
1022 | + volume: disk1p2 |
1023 | + key: test_key_123 |
1024 | + - id: crypt_fmt |
1025 | + type: format |
1026 | + volume: crypt_id |
1027 | + fstype: ext4 |
1028 | + - id: disk1p1_mount |
1029 | + type: mount |
1030 | + path: / |
1031 | + device: disk1p1_root |
1032 | + - id: disk1p2_mount |
1033 | + type: mount |
1034 | + path: /home |
1035 | + device: crypt_fmt |
1036 | |
1037 | === added file 'examples/storagetests/diskonlydos.yaml' |
1038 | --- examples/storagetests/diskonlydos.yaml 1970-01-01 00:00:00 +0000 |
1039 | +++ examples/storagetests/diskonlydos.yaml 2016-09-15 18:06:47 +0000 |
1040 | @@ -0,0 +1,8 @@ |
1041 | +storage: |
1042 | + version: 1 |
1043 | + config: |
1044 | + - id: disk1 |
1045 | + type: disk |
1046 | + ptable: msdos |
1047 | + name: second_disk |
1048 | + wipe: superblock |
1049 | |
1050 | === added file 'examples/storagetests/diskonlygpt.yaml' |
1051 | --- examples/storagetests/diskonlygpt.yaml 1970-01-01 00:00:00 +0000 |
1052 | +++ examples/storagetests/diskonlygpt.yaml 2016-09-15 18:06:47 +0000 |
1053 | @@ -0,0 +1,8 @@ |
1054 | +storage: |
1055 | + version: 1 |
1056 | + config: |
1057 | + - id: disk1 |
1058 | + type: disk |
1059 | + ptable: gpt |
1060 | + name: main_disk |
1061 | + wipe: superblock |
1062 | |
1063 | === added file 'examples/storagetests/formats_on_lvm.yaml' |
1064 | --- examples/storagetests/formats_on_lvm.yaml 1970-01-01 00:00:00 +0000 |
1065 | +++ examples/storagetests/formats_on_lvm.yaml 2016-09-15 18:06:47 +0000 |
1066 | @@ -0,0 +1,67 @@ |
1067 | +storage: |
1068 | + version: 1 |
1069 | + config: |
1070 | + - id: disk1 |
1071 | + type: disk |
1072 | + name: main_disk |
1073 | + wipe: superblock |
1074 | + - id: disk2 |
1075 | + type: disk |
1076 | + name: second_disk |
1077 | + wipe: superblock |
1078 | + - id: disk3 |
1079 | + type: disk |
1080 | + name: third_disk |
1081 | + wipe: superblock |
1082 | + - id: volgroup1 |
1083 | + name: vg1 |
1084 | + type: lvm_volgroup |
1085 | + devices: |
1086 | + - disk1 |
1087 | + - disk2 |
1088 | + - disk3 |
1089 | + - id: lvol1 |
1090 | + type: lvm_partition |
1091 | + name: lv1 |
1092 | + size: 1G |
1093 | + volgroup: volgroup1 |
1094 | + - id: lvol2 |
1095 | + type: lvm_partition |
1096 | + name: lv2 |
1097 | + size: 1G |
1098 | + volgroup: volgroup1 |
1099 | + - id: lvol3 |
1100 | + type: lvm_partition |
1101 | + name: lv3 |
1102 | + size: 1G |
1103 | + volgroup: volgroup1 |
1104 | + - id: lvol4 |
1105 | + type: lvm_partition |
1106 | + name: lv4 |
1107 | + size: 1G |
1108 | + volgroup: volgroup1 |
1109 | + - id: lvol5 |
1110 | + type: lvm_partition |
1111 | + name: lv5 |
1112 | + size: 1G |
1113 | + volgroup: volgroup1 |
1114 | + - id: vfat_on_lvm |
1115 | + volume: lvol1 |
1116 | + type: format |
1117 | + fstype: vfat |
1118 | + - id: ext_on_lvm |
1119 | + volume: lvol2 |
1120 | + type: format |
1121 | + fstype: ext4 |
1122 | + - id: btrfs_on_lvm |
1123 | + volume: lvol3 |
1124 | + type: format |
1125 | + fstype: btrfs |
1126 | + - id: xfs_on_lvm |
1127 | + volume: lvol4 |
1128 | + type: format |
1129 | + fstype: xfs |
1130 | + - id: swap_on_lvm |
1131 | + volume: lvol5 |
1132 | + type: format |
1133 | + fstype: swap |
1134 | |
1135 | === added file 'examples/storagetests/gpt_boot.yaml' |
1136 | --- examples/storagetests/gpt_boot.yaml 1970-01-01 00:00:00 +0000 |
1137 | +++ examples/storagetests/gpt_boot.yaml 2016-09-15 18:06:47 +0000 |
1138 | @@ -0,0 +1,58 @@ |
1139 | +storage: |
1140 | + version: 1 |
1141 | + config: |
1142 | + - id: disk1 |
1143 | + type: disk |
1144 | + ptable: gpt |
1145 | + name: main_disk |
1146 | + wipe: superblock |
1147 | + grub_device: true |
1148 | + - id: disk1_bios_grub |
1149 | + type: partition |
1150 | + size: 1MB |
1151 | + device: disk1 |
1152 | + flag: bios_grub |
1153 | + wipe: superblock |
1154 | + - id: disk1p1 |
1155 | + type: partition |
1156 | + size: 3GB |
1157 | + device: disk1 |
1158 | + flag: boot |
1159 | + wipe: superblock |
1160 | + - id: disk1p2 |
1161 | + type: partition |
1162 | + size: 1GB |
1163 | + device: disk1 |
1164 | + wipe: superblock |
1165 | + - id: disk1p1_root |
1166 | + type: format |
1167 | + fstype: ext4 |
1168 | + volume: disk1p1 |
1169 | + - id: disk1p2_home |
1170 | + type: format |
1171 | + fstype: ext4 |
1172 | + volume: disk1p2 |
1173 | + - id: disk1p1_mount |
1174 | + type: mount |
1175 | + path: / |
1176 | + device: disk1p1_root |
1177 | + - id: disk1p2_mount |
1178 | + type: mount |
1179 | + path: /home |
1180 | + device: disk1p2_home |
1181 | + - id: disk2 |
1182 | + type: disk |
1183 | + name: sparedisk |
1184 | + wipe: superblock |
1185 | + - id: disk3 |
1186 | + type: disk |
1187 | + name: btrfs_volume |
1188 | + wipe: superblock |
1189 | + - id: btrfs_disk_fmt_id |
1190 | + type: format |
1191 | + fstype: btrfs |
1192 | + volume: disk3 |
1193 | + - id: btrfs_disk_mnt_id |
1194 | + type: mount |
1195 | + path: /btrfs |
1196 | + device: btrfs_disk_fmt_id |
1197 | |
1198 | === added file 'examples/storagetests/gpt_simple.yaml' |
1199 | --- examples/storagetests/gpt_simple.yaml 1970-01-01 00:00:00 +0000 |
1200 | +++ examples/storagetests/gpt_simple.yaml 2016-09-15 18:06:47 +0000 |
1201 | @@ -0,0 +1,54 @@ |
1202 | +storage: |
1203 | + version: 1 |
1204 | + config: |
1205 | + - id: disk1 |
1206 | + type: disk |
1207 | + ptable: gpt |
1208 | + name: main_disk |
1209 | + wipe: superblock |
1210 | + grub_device: true |
1211 | + - id: disk1p1 |
1212 | + type: partition |
1213 | + number: 1 |
1214 | + size: 3GB |
1215 | + device: disk1 |
1216 | + flag: boot |
1217 | + wipe: superblock |
1218 | + - id: disk1p2 |
1219 | + type: partition |
1220 | + number: 2 |
1221 | + size: 1GB |
1222 | + device: disk1 |
1223 | + wipe: superblock |
1224 | + - id: disk1p1_root |
1225 | + type: format |
1226 | + fstype: ext4 |
1227 | + volume: disk1p1 |
1228 | + - id: disk1p2_home |
1229 | + type: format |
1230 | + fstype: ext4 |
1231 | + volume: disk1p2 |
1232 | + - id: disk1p1_mount |
1233 | + type: mount |
1234 | + path: / |
1235 | + device: disk1p1_root |
1236 | + - id: disk1p2_mount |
1237 | + type: mount |
1238 | + path: /home |
1239 | + device: disk1p2_home |
1240 | + - id: disk2 |
1241 | + type: disk |
1242 | + name: sparedisk |
1243 | + wipe: superblock |
1244 | + - id: disk3 |
1245 | + type: disk |
1246 | + name: btrfs_volume |
1247 | + wipe: superblock |
1248 | + - id: btrfs_disk_fmt_id |
1249 | + type: format |
1250 | + fstype: btrfs |
1251 | + volume: disk3 |
1252 | + - id: btrfs_disk_mnt_id |
1253 | + type: mount |
1254 | + path: /btrfs |
1255 | + device: btrfs_disk_fmt_id |
1256 | |
1257 | === added file 'examples/storagetests/logical.yaml' |
1258 | --- examples/storagetests/logical.yaml 1970-01-01 00:00:00 +0000 |
1259 | +++ examples/storagetests/logical.yaml 2016-09-15 18:06:47 +0000 |
1260 | @@ -0,0 +1,84 @@ |
1261 | +storage: |
1262 | + version: 1 |
1263 | + config: |
1264 | + - id: disk1 |
1265 | + type: disk |
1266 | + ptable: msdos |
1267 | + name: main_disk |
1268 | + wipe: superblock |
1269 | + - id: disk1primary1 |
1270 | + type: partition |
1271 | + size: 3GB |
1272 | + device: disk1 |
1273 | + flag: boot |
1274 | + wipe: superblock |
1275 | + - id: disk1primary2 |
1276 | + type: partition |
1277 | + size: 2GB |
1278 | + device: disk1 |
1279 | + flag: boot |
1280 | + wipe: superblock |
1281 | + - id: disk1extended |
1282 | + type: partition |
1283 | + size: 4GB |
1284 | + device: disk1 |
1285 | + flag: extended |
1286 | + wipe: superblock |
1287 | + - id: disk1logical1 |
1288 | + type: partition |
1289 | + size: 2GB |
1290 | + device: disk1 |
1291 | + flag: logical |
1292 | + wipe: superblock |
1293 | + - id: disk1logical2 |
1294 | + type: partition |
1295 | + size: 1GB |
1296 | + device: disk1 |
1297 | + flag: logical |
1298 | + wipe: superblock |
1299 | + - id: disk1logical3 |
1300 | + type: partition |
1301 | + size: 1GB |
1302 | + device: disk1 |
1303 | + flag: logical |
1304 | + wipe: superblock |
1305 | + - id: disk1p1_root |
1306 | + type: format |
1307 | + fstype: ext4 |
1308 | + volume: disk1primary1 |
1309 | + - id: disk1p1_mount |
1310 | + type: mount |
1311 | + path: / |
1312 | + device: disk1p1_root |
1313 | + - id: disk1p2_home |
1314 | + type: format |
1315 | + fstype: ext4 |
1316 | + volume: disk1primary2 |
1317 | + - id: disk1p2_mount |
1318 | + type: mount |
1319 | + path: /home |
1320 | + device: disk1p2_home |
1321 | + - id: disk1l1_fmt |
1322 | + type: format |
1323 | + fstype: ext4 |
1324 | + volume: disk1logical1 |
1325 | + - id: disk1l1_mnt |
1326 | + type: mount |
1327 | + path: /media/l1 |
1328 | + device: disk1l1_fmt |
1329 | + - id: disk1l2_fmt |
1330 | + type: format |
1331 | + fstype: ext4 |
1332 | + volume: disk1logical2 |
1333 | + - id: disk1l2_mnt |
1334 | + type: mount |
1335 | + path: /media/l2 |
1336 | + device: disk1l2_fmt |
1337 | + - id: disk1l3_fmt |
1338 | + type: format |
1339 | + fstype: ext4 |
1340 | + volume: disk1logical3 |
1341 | + - id: disk1l3_mnt |
1342 | + type: mount |
1343 | + path: /media/l3 |
1344 | + device: disk1l3_fmt |
1345 | |
1346 | === added file 'examples/storagetests/lvm.yaml' |
1347 | --- examples/storagetests/lvm.yaml 1970-01-01 00:00:00 +0000 |
1348 | +++ examples/storagetests/lvm.yaml 2016-09-15 18:06:47 +0000 |
1349 | @@ -0,0 +1,51 @@ |
1350 | +storage: |
1351 | + version: 1 |
1352 | + config: |
1353 | + - id: disk1 |
1354 | + type: disk |
1355 | + ptable: gpt |
1356 | + name: main_disk |
1357 | + wipe: superblock |
1358 | + - id: disk1p1 |
1359 | + type: partition |
1360 | + size: 3GB |
1361 | + device: disk1 |
1362 | + wipe: superblock |
1363 | + flag: boot |
1364 | + - id: disk1p2 |
1365 | + type: partition |
1366 | + size: 2G |
1367 | + wipe: superblock |
1368 | + device: disk1 |
1369 | + - id: disk1p3 |
1370 | + type: partition |
1371 | + wipe: superblock |
1372 | + size: 3G |
1373 | + device: disk1 |
1374 | + - id: volgroup1 |
1375 | + name: vg1 |
1376 | + type: lvm_volgroup |
1377 | + devices: |
1378 | + - disk1p2 |
1379 | + - disk1p3 |
1380 | + - id: lvmpart1 |
1381 | + name: lv1 |
1382 | + type: lvm_partition |
1383 | + volgroup: volgroup1 |
1384 | + - id: disk1p1_root |
1385 | + type: format |
1386 | + fstype: ext4 |
1387 | + volume: disk1p1 |
1388 | + - id: lv1_fs |
1389 | + name: storage |
1390 | + type: format |
1391 | + fstype: fat32 |
1392 | + volume: lvmpart1 |
1393 | + - id: disk1p1_mount |
1394 | + type: mount |
1395 | + path: / |
1396 | + device: disk1p1_root |
1397 | + - id: lv1_mount |
1398 | + type: mount |
1399 | + path: /srv/data |
1400 | + device: lv1_fs |
1401 | |
1402 | === added file 'examples/storagetests/lvm_mult_lvols_on_pvol.yaml' |
1403 | --- examples/storagetests/lvm_mult_lvols_on_pvol.yaml 1970-01-01 00:00:00 +0000 |
1404 | +++ examples/storagetests/lvm_mult_lvols_on_pvol.yaml 2016-09-15 18:06:47 +0000 |
1405 | @@ -0,0 +1,74 @@ |
1406 | +storage: |
1407 | + version: 1 |
1408 | + config: |
1409 | + - id: disk1 |
1410 | + type: disk |
1411 | + ptable: gpt |
1412 | + wipe: superblock |
1413 | + model: QEMU HARDDISK |
1414 | + name: main_disk |
1415 | + - id: disk1_bios_grub |
1416 | + type: partition |
1417 | + size: 1MB |
1418 | + device: disk1 |
1419 | + flag: bios_grub |
1420 | + - id: disk1p1 |
1421 | + type: partition |
1422 | + size: 3GB |
1423 | + device: disk1 |
1424 | + wipe: superblock |
1425 | + - id: disk1p2 |
1426 | + type: partition |
1427 | + wipe: superblock |
1428 | + size: 2G |
1429 | + device: disk1 |
1430 | + - id: disk1p3 |
1431 | + type: partition |
1432 | + wipe: superblock |
1433 | + size: 3G |
1434 | + device: disk1 |
1435 | + - id: disk2 |
1436 | + type: disk |
1437 | + wipe: superblock |
1438 | + - id: volgroup1 |
1439 | + name: vg-with-dash |
1440 | + type: lvm_volgroup |
1441 | + devices: |
1442 | + - disk1p2 |
1443 | + - disk1p3 |
1444 | + - disk2 |
1445 | + - id: lvmpart1 |
1446 | + name: lv-name-one |
1447 | + size: 1G |
1448 | + type: lvm_partition |
1449 | + volgroup: volgroup1 |
1450 | + - id: lvmpart2 |
1451 | + name: lv2 |
1452 | + type: lvm_partition |
1453 | + volgroup: volgroup1 |
1454 | + - id: disk1_root |
1455 | + type: format |
1456 | + fstype: ext4 |
1457 | + volume: disk1p1 |
1458 | + - id: lv1_fs |
1459 | + name: storage |
1460 | + type: format |
1461 | + fstype: fat32 |
1462 | + volume: lvmpart1 |
1463 | + - id: lv2_fs |
1464 | + name: storage |
1465 | + type: format |
1466 | + fstype: ext3 |
1467 | + volume: lvmpart2 |
1468 | + - id: disk1_mount |
1469 | + type: mount |
1470 | + path: / |
1471 | + device: disk1_root |
1472 | + - id: lv1_mount |
1473 | + type: mount |
1474 | + path: /srv/data |
1475 | + device: lv1_fs |
1476 | + - id: lv2_mount |
1477 | + type: mount |
1478 | + path: /srv/backup |
1479 | + device: lv2_fs |
1480 | |
1481 | === added file 'examples/storagetests/lvm_multiple_vg.yaml' |
1482 | --- examples/storagetests/lvm_multiple_vg.yaml 1970-01-01 00:00:00 +0000 |
1483 | +++ examples/storagetests/lvm_multiple_vg.yaml 2016-09-15 18:06:47 +0000 |
1484 | @@ -0,0 +1,64 @@ |
1485 | +storage: |
1486 | + version: 1 |
1487 | + config: |
1488 | + - id: disk1 |
1489 | + type: disk |
1490 | + name: main_disk |
1491 | + wipe: superblock |
1492 | + ptable: gpt |
1493 | + - id: disk1p1 |
1494 | + type: partition |
1495 | + device: disk1 |
1496 | + number: 1 |
1497 | + wipe: superblock |
1498 | + size: 2G |
1499 | + - id: disk1p2 |
1500 | + type: partition |
1501 | + device: disk1 |
1502 | + number: 2 |
1503 | + wipe: superblock |
1504 | + size: 2G |
1505 | + - id: disk2 |
1506 | + type: disk |
1507 | + name: second_disk |
1508 | + wipe: superblock |
1509 | + - id: disk3 |
1510 | + type: disk |
1511 | + name: third_disk |
1512 | + wipe: superblock |
1513 | + - id: volgroup1 |
1514 | + name: vg1 |
1515 | + type: lvm_volgroup |
1516 | + devices: |
1517 | + - disk1p1 |
1518 | + - disk1p2 |
1519 | + - id: volgroup2 |
1520 | + name: vg2 |
1521 | + type: lvm_volgroup |
1522 | + devices: |
1523 | + - disk2 |
1524 | + - disk3 |
1525 | + - id: vg1-lvol1 |
1526 | + type: lvm_partition |
1527 | + name: lv1 |
1528 | + volgroup: volgroup1 |
1529 | + - id: vg2-lvol1 |
1530 | + type: lvm_partition |
1531 | + name: lv1 |
1532 | + volgroup: volgroup2 |
1533 | + - id: vfat_on_lvm |
1534 | + volume: vg1-lvol1 |
1535 | + type: format |
1536 | + fstype: vfat |
1537 | + - id: ext_on_lvm |
1538 | + volume: vg2-lvol1 |
1539 | + type: format |
1540 | + fstype: ext4 |
1541 | + - id: mount1 |
1542 | + type: mount |
1543 | + device: vfat_on_lvm |
1544 | + path: /srv/test |
1545 | + - id: mount2 |
1546 | + type: mount |
1547 | + device: ext_on_lvm |
1548 | + path: /srv/test2 |
1549 | |
1550 | === added file 'examples/storagetests/lvm_with_dash.yaml' |
1551 | --- examples/storagetests/lvm_with_dash.yaml 1970-01-01 00:00:00 +0000 |
1552 | +++ examples/storagetests/lvm_with_dash.yaml 2016-09-15 18:06:47 +0000 |
1553 | @@ -0,0 +1,50 @@ |
1554 | +storage: |
1555 | + version: 1 |
1556 | + config: |
1557 | + - id: disk1 |
1558 | + type: disk |
1559 | + ptable: gpt |
1560 | + name: main_disk |
1561 | + wipe: superblock |
1562 | + - id: disk1p1 |
1563 | + type: partition |
1564 | + size: 3GB |
1565 | + device: disk1 |
1566 | + wipe: superblock |
1567 | + flag: boot |
1568 | + - id: disk1p2 |
1569 | + type: partition |
1570 | + size: 2G |
1571 | + wipe: superblock |
1572 | + device: disk1 |
1573 | + - id: volgroup1 |
1574 | + name: volgroup-with-dash |
1575 | + type: lvm_volgroup |
1576 | + devices: |
1577 | + - disk1p2 |
1578 | + - id: lvmpart1 |
1579 | + name: lvol-with-dash |
1580 | + type: lvm_partition |
1581 | + size: 1G |
1582 | + volgroup: volgroup1 |
1583 | + - id: lvmpart2 |
1584 | + name: lvol---many----dashes_underscore+.abc.def |
1585 | + type: lvm_partition |
1586 | + volgroup: volgroup1 |
1587 | + - id: disk1p1_root |
1588 | + type: format |
1589 | + fstype: ext4 |
1590 | + volume: disk1p1 |
1591 | + - id: lv1_fs |
1592 | + name: storage |
1593 | + type: format |
1594 | + fstype: fat32 |
1595 | + volume: lvmpart1 |
1596 | + - id: disk1p1_mount |
1597 | + type: mount |
1598 | + path: / |
1599 | + device: disk1p1_root |
1600 | + - id: lv1_mount |
1601 | + type: mount |
1602 | + path: /srv/data |
1603 | + device: lv1_fs |
1604 | |
1605 | === added file 'examples/storagetests/mdadm.yaml' |
1606 | --- examples/storagetests/mdadm.yaml 1970-01-01 00:00:00 +0000 |
1607 | +++ examples/storagetests/mdadm.yaml 2016-09-15 18:06:47 +0000 |
1608 | @@ -0,0 +1,59 @@ |
1609 | +storage: |
1610 | + version: 1 |
1611 | + config: |
1612 | + - id: disk1 |
1613 | + type: disk |
1614 | + ptable: gpt |
1615 | + name: main_disk |
1616 | + wipe: superblock |
1617 | + - id: bios_boot_partition |
1618 | + type: partition |
1619 | + size: 1MB |
1620 | + device: disk1 |
1621 | + flag: bios_grub |
1622 | + wipe: superblock |
1623 | + - id: disk1p1 |
1624 | + type: partition |
1625 | + size: 3GB |
1626 | + device: disk1 |
1627 | + wipe: superblock |
1628 | + - id: disk1p2 |
1629 | + type: partition |
1630 | + size: 1GB |
1631 | + wipe: superblock |
1632 | + device: disk1 |
1633 | + - id: disk1p3 |
1634 | + type: partition |
1635 | + wipe: superblock |
1636 | + size: 1GB |
1637 | + device: disk1 |
1638 | + - id: disk1p4 |
1639 | + type: partition |
1640 | + wipe: superblock |
1641 | + size: 1GB |
1642 | + device: disk1 |
1643 | + - id: mddevice |
1644 | + name: md0 |
1645 | + type: raid |
1646 | + raidlevel: 1 |
1647 | + devices: |
1648 | + - disk1p2 |
1649 | + - disk1p3 |
1650 | + spare_devices: |
1651 | + - disk1p4 |
1652 | + - id: disk1p1_root |
1653 | + type: format |
1654 | + fstype: ext4 |
1655 | + volume: disk1p1 |
1656 | + - id: raid_storage |
1657 | + type: format |
1658 | + fstype: ext4 |
1659 | + volume: mddevice |
1660 | + - id: disk1p1_mount |
1661 | + type: mount |
1662 | + path: / |
1663 | + device: disk1p1_root |
1664 | + - id: raid_mount |
1665 | + type: mount |
1666 | + path: /media/data |
1667 | + device: raid_storage |
1668 | |
1669 | === added file 'examples/storagetests/mdadm_bcache.yaml' |
1670 | --- examples/storagetests/mdadm_bcache.yaml 1970-01-01 00:00:00 +0000 |
1671 | +++ examples/storagetests/mdadm_bcache.yaml 2016-09-15 18:06:47 +0000 |
1672 | @@ -0,0 +1,135 @@ |
1673 | +storage: |
1674 | + version: 1 |
1675 | + config: |
1676 | + - grub_device: true |
1677 | + id: disk1 |
1678 | + wipe: superblock |
1679 | + type: disk |
1680 | + ptable: gpt |
1681 | + name: main_disk |
1682 | + - id: bios_boot_partition |
1683 | + type: partition |
1684 | + size: 1MB |
1685 | + wipe: superblock |
1686 | + device: disk1 |
1687 | + flag: bios_grub |
1688 | + number: 1 |
1689 | + - id: disk1p1 |
1690 | + type: partition |
1691 | + size: 3GB |
1692 | + wipe: superblock |
1693 | + device: disk1 |
1694 | + number: 2 # XXX: we really need to stop using id with DiskPartnum |
1695 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa1 |
1696 | + - id: disk1p2 |
1697 | + type: partition |
1698 | + size: 1GB |
1699 | + wipe: superblock |
1700 | + device: disk1 |
1701 | + number: 3 # XXX: we really need to stop using id with DiskPartnum |
1702 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa2 |
1703 | + - id: disk1p3 |
1704 | + type: partition |
1705 | + wipe: superblock |
1706 | + size: 1GB |
1707 | + device: disk1 |
1708 | + number: 4 # XXX: we really need to stop using id with DiskPartnum |
1709 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa3 |
1710 | + - id: disk1p4 |
1711 | + type: partition |
1712 | + wipe: superblock |
1713 | + size: 1GB |
1714 | + device: disk1 |
1715 | + number: 5 # XXX: we really need to stop using id with DiskPartnum |
1716 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa4 |
1717 | + - id: disk1p5 |
1718 | + wipe: superblock |
1719 | + type: partition |
1720 | + size: 1GB |
1721 | + device: disk1 |
1722 | + number: 6 # XXX: we really need to stop using id with DiskPartnum |
1723 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa5 |
1724 | + - id: disk1p6 |
1725 | + type: partition |
1726 | + size: 1GB |
1727 | + wipe: superblock |
1728 | + device: disk1 |
1729 | + number: 7 # XXX: we really need to stop using id with DiskPartnum |
1730 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa6 |
1731 | + - id: disk2 |
1732 | + type: disk |
1733 | + name: second_disk |
1734 | + wipe: superblock |
1735 | + - id: disk3 |
1736 | + wipe: superblock |
1737 | + type: disk |
1738 | + ptable: gpt |
1739 | + name: third_disk |
1740 | + - id: disk3p1 |
1741 | + type: partition |
1742 | + size: 3GB |
1743 | + device: disk3 |
1744 | + wipe: superblock |
1745 | + uuid: deadbeef-dead-beef-dead-deadbeefaac1 |
1746 | + - id: mddevice |
1747 | + name: md0 |
1748 | + type: raid |
1749 | + raidlevel: 1 |
1750 | + devices: |
1751 | + - disk1p2 |
1752 | + - disk1p3 |
1753 | + spare_devices: |
1754 | + - disk1p4 |
1755 | + - id: bcache1_raid |
1756 | + type: bcache |
1757 | + name: cached_array |
1758 | + backing_device: mddevice |
1759 | + cache_device: disk1p5 |
1760 | + cache_mode: writeback |
1761 | + - id: bcache_normal |
1762 | + type: bcache |
1763 | + name: cached_array_2 |
1764 | + backing_device: disk1p6 |
1765 | + cache_device: disk1p5 |
1766 | + cache_mode: writethrough |
1767 | + - id: bcachefoo |
1768 | + type: bcache |
1769 | + name: cached_array_3 |
1770 | + backing_device: disk3p1 |
1771 | + cache_device: disk2 |
1772 | + cache_mode: writearound |
1773 | + - id: disk1p1_fs |
1774 | + type: format |
1775 | + fstype: ext4 |
1776 | + volume: disk1p1 |
1777 | + uuid: deadbeef-dead-beef-dead-deadbeeffff1 |
1778 | + - id: bcache_raid_storage |
1779 | + type: format |
1780 | + fstype: ext4 |
1781 | + volume: bcache1_raid |
1782 | + uuid: deadbeef-dead-beef-dead-deadbeefcac1 |
1783 | + - id: bcache_normal_storage |
1784 | + type: format |
1785 | + fstype: ext4 |
1786 | + volume: bcache_normal |
1787 | + uuid: deadbeef-dead-beef-dead-deadbeefcac2 |
1788 | + - id: bcachefoo_fulldiskascache_storage |
1789 | + type: format |
1790 | + fstype: ext4 |
1791 | + volume: bcachefoo |
1792 | + - id: disk1p1_mount |
1793 | + type: mount |
1794 | + path: / |
1795 | + device: disk1p1_fs |
1796 | + - id: bcache1_raid_mount |
1797 | + type: mount |
1798 | + path: /media/data |
1799 | + device: bcache_raid_storage |
1800 | + - id: bcache0_mount |
1801 | + type: mount |
1802 | + path: /media/bcache_normal |
1803 | + device: bcache_normal_storage |
1804 | + - id: disk1p1_non_root_mount |
1805 | + type: mount |
1806 | + path: /media/bcachefoo_fulldiskascache_storage |
1807 | + device: bcachefoo_fulldiskascache_storage |
1808 | |
1809 | === added file 'examples/storagetests/mdadm_lvm.yaml' |
1810 | --- examples/storagetests/mdadm_lvm.yaml 1970-01-01 00:00:00 +0000 |
1811 | +++ examples/storagetests/mdadm_lvm.yaml 2016-09-15 18:06:47 +0000 |
1812 | @@ -0,0 +1,112 @@ |
1813 | +storage: |
1814 | + version: 1 |
1815 | + config: |
1816 | + - grub_device: true |
1817 | + id: disk1 |
1818 | + type: disk |
1819 | + ptable: gpt |
1820 | + name: main_disk |
1821 | + wipe: superblock |
1822 | + - id: bios_boot_partition |
1823 | + type: partition |
1824 | + size: 1MB |
1825 | + wipe: superblock |
1826 | + device: disk1 |
1827 | + flag: bios_grub |
1828 | + number: 1 |
1829 | + - id: disk1p1 |
1830 | + type: partition |
1831 | + wipe: superblock |
1832 | + size: 3GB |
1833 | + device: disk1 |
1834 | + number: 2 # XXX: we really need to stop using id with DiskPartnum |
1835 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa1 |
1836 | + - id: disk1p2 |
1837 | + type: partition |
1838 | + size: 1GB |
1839 | + wipe: superblock |
1840 | + device: disk1 |
1841 | + number: 3 # XXX: we really need to stop using id with DiskPartnum |
1842 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa2 |
1843 | + - id: disk1p3 |
1844 | + type: partition |
1845 | + size: 1GB |
1846 | + wipe: superblock |
1847 | + device: disk1 |
1848 | + number: 4 # XXX: we really need to stop using id with DiskPartnum |
1849 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa3 |
1850 | + - id: disk1p4 |
1851 | + type: partition |
1852 | + size: 1GB |
1853 | + wipe: superblock |
1854 | + device: disk1 |
1855 | + number: 5 # XXX: we really need to stop using id with DiskPartnum |
1856 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa4 |
1857 | + - id: disk1p5 |
1858 | + type: partition |
1859 | + size: 1GB |
1860 | + device: disk1 |
1861 | + wipe: superblock |
1862 | + number: 6 # XXX: we really need to stop using id with DiskPartnum |
1863 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa5 |
1864 | + - id: disk1p6 |
1865 | + type: partition |
1866 | + size: 1GB |
1867 | + device: disk1 |
1868 | + wipe: superblock |
1869 | + number: 7 # XXX: we really need to stop using id with DiskPartnum |
1870 | + uuid: deadbeef-dead-beef-dead-deadbeefaaa6 |
1871 | + - id: disk2 |
1872 | + type: disk |
1873 | + name: second_disk |
1874 | + wipe: superblock |
1875 | + - id: disk3 |
1876 | + type: disk |
1877 | + wipe: superblock |
1878 | + ptable: gpt |
1879 | + name: third_disk |
1880 | + - id: disk3p1 |
1881 | + type: partition |
1882 | + size: 3GB |
1883 | + device: disk3 |
1884 | + wipe: superblock |
1885 | + uuid: deadbeef-dead-beef-dead-deadbeefaac1 |
1886 | + - id: mddevice |
1887 | + name: md0 |
1888 | + type: raid |
1889 | + raidlevel: 1 |
1890 | + devices: |
1891 | + - disk1p2 |
1892 | + - disk1p3 |
1893 | + spare_devices: |
1894 | + - disk1p4 |
1895 | + - id: volgroup1 |
1896 | + name: raid_vg_1 |
1897 | + devices: |
1898 | + - mddevice |
1899 | + - disk1p6 |
1900 | + - disk3p1 |
1901 | + - disk2 |
1902 | + type: lvm_volgroup |
1903 | + - id: lvol1 |
1904 | + name: storage_dev |
1905 | + type: lvm_partition |
1906 | + volgroup: volgroup1 |
1907 | + - id: disk1p1_fs |
1908 | + type: format |
1909 | + fstype: ext4 |
1910 | + volume: disk1p1 |
1911 | + uuid: deadbeef-dead-beef-dead-deadbeeffff1 |
1912 | + - id: lvol1_fmt |
1913 | + type: format |
1914 | + fstype: ext4 |
1915 | + volume: lvol1 |
1916 | + uuid: deadbeef-dead-beef-dead-deadbeefcac2 |
1917 | + - id: disk1p1_mount |
1918 | + type: mount |
1919 | + path: / |
1920 | + device: disk1p1_fs |
1921 | + - id: lvol1_mount |
1922 | + type: mount |
1923 | + path: /media/storage |
1924 | + device: lvol1_fmt |
1925 | |
1926 | === added file 'examples/storagetests/whole_disk_btrfs_xfs.yaml' |
1927 | --- examples/storagetests/whole_disk_btrfs_xfs.yaml 1970-01-01 00:00:00 +0000 |
1928 | +++ examples/storagetests/whole_disk_btrfs_xfs.yaml 2016-09-15 18:06:47 +0000 |
1929 | @@ -0,0 +1,19 @@ |
1930 | +storage: |
1931 | + version: 1 |
1932 | + config: |
1933 | + - id: disk1 |
1934 | + type: disk |
1935 | + name: main_disk |
1936 | + wipe: superblock |
1937 | + - id: disk2 |
1938 | + type: disk |
1939 | + name: second_disk |
1940 | + wipe: superblock |
1941 | + - id: disk1_fmt |
1942 | + type: format |
1943 | + fstype: btrfs |
1944 | + volume: disk1 |
1945 | + - id: disk2_fmt |
1946 | + type: format |
1947 | + fstype: xfs |
1948 | + volume: disk2 |
1949 | |
1950 | === added file 'examples/storagetests/whole_disk_ext.yaml' |
1951 | --- examples/storagetests/whole_disk_ext.yaml 1970-01-01 00:00:00 +0000 |
1952 | +++ examples/storagetests/whole_disk_ext.yaml 2016-09-15 18:06:47 +0000 |
1953 | @@ -0,0 +1,27 @@ |
1954 | +storage: |
1955 | + version: 1 |
1956 | + config: |
1957 | + - id: disk1 |
1958 | + type: disk |
1959 | + name: main_disk |
1960 | + wipe: superblock |
1961 | + - id: disk2 |
1962 | + type: disk |
1963 | + name: second_disk |
1964 | + wipe: superblock |
1965 | + - id: disk3 |
1966 | + type: disk |
1967 | + name: third_disk |
1968 | + wipe: superblock |
1969 | + - id: disk1_fmt |
1970 | + type: format |
1971 | + fstype: ext2 |
1972 | + volume: disk1 |
1973 | + - id: disk2_fmt |
1974 | + type: format |
1975 | + fstype: ext3 |
1976 | + volume: disk2 |
1977 | + - id: disk3_fmt |
1978 | + type: format |
1979 | + fstype: ext4 |
1980 | + volume: disk3 |
1981 | |
1982 | === added file 'examples/storagetests/whole_disk_fat.yaml' |
1983 | --- examples/storagetests/whole_disk_fat.yaml 1970-01-01 00:00:00 +0000 |
1984 | +++ examples/storagetests/whole_disk_fat.yaml 2016-09-15 18:06:47 +0000 |
1985 | @@ -0,0 +1,27 @@ |
1986 | +storage: |
1987 | + version: 1 |
1988 | + config: |
1989 | + - id: disk1 |
1990 | + type: disk |
1991 | + name: main_disk |
1992 | + wipe: superblock |
1993 | + - id: disk2 |
1994 | + type: disk |
1995 | + name: second_disk |
1996 | + wipe: superblock |
1997 | + - id: disk3 |
1998 | + type: disk |
1999 | + name: third_disk |
2000 | + wipe: superblock |
2001 | + - id: disk1_fmt |
2002 | + type: format |
2003 | + fstype: fat |
2004 | + volume: disk1 |
2005 | + - id: disk2_fmt |
2006 | + type: format |
2007 | + fstype: vfat |
2008 | + volume: disk2 |
2009 | + - id: disk3_fmt |
2010 | + type: format |
2011 | + fstype: fat32 |
2012 | + volume: disk3 |
2013 | |
2014 | === added file 'examples/storagetests/whole_disk_swap.yaml' |
2015 | --- examples/storagetests/whole_disk_swap.yaml 1970-01-01 00:00:00 +0000 |
2016 | +++ examples/storagetests/whole_disk_swap.yaml 2016-09-15 18:06:47 +0000 |
2017 | @@ -0,0 +1,11 @@ |
2018 | +storage: |
2019 | + version: 1 |
2020 | + config: |
2021 | + - id: disk1 |
2022 | + type: disk |
2023 | + name: main_disk |
2024 | + wipe: superblock |
2025 | + - id: disk1_fmt |
2026 | + type: format |
2027 | + fstype: swap |
2028 | + volume: disk1 |
2029 | |
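The whole-disk example configs above all share one shape: each `disk` entry is referenced by a `format` entry via `volume`. A minimal standalone sketch of how such a config's cross-references could be sanity-checked (`check_refs` is a hypothetical helper, not part of this branch; the config is `whole_disk_swap.yaml` as parsed Python):

```python
def check_refs(entries):
    """Verify ids are unique and every volume/device reference resolves."""
    ids = [e['id'] for e in entries]
    assert len(ids) == len(set(ids)), 'duplicate ids'
    for e in entries:
        for key in ('volume', 'device'):
            target = e.get(key)
            if isinstance(target, str):
                assert target in ids, 'dangling reference: ' + target
    return True

# the whole_disk_swap.yaml storage config above, already parsed
swap_cfg = [
    {'id': 'disk1', 'type': 'disk', 'name': 'main_disk',
     'wipe': 'superblock'},
    {'id': 'disk1_fmt', 'type': 'format', 'fstype': 'swap',
     'volume': 'disk1'},
]
print(check_refs(swap_cfg))  # -> True
```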
2030 | === added directory 'tests/storagetest_runner' |
2031 | === added file 'tests/storagetest_runner/__init__.py' |
2032 | --- tests/storagetest_runner/__init__.py 1970-01-01 00:00:00 +0000 |
2033 | +++ tests/storagetest_runner/__init__.py 2016-09-15 18:06:47 +0000 |
2034 | @@ -0,0 +1,471 @@ |
2035 | +import base64 |
2036 | +import json |
2037 | +import os |
2038 | +import subprocess |
2039 | +import textwrap |
2040 | +import threading |
2041 | +import time |
2042 | +import unittest |
2043 | +from http import server |
2044 | +from curtin import (util, deps) |
2045 | +import yaml |
2046 | + |
2047 | +import vmtests |
2048 | +from vmtests import helpers |
2049 | +from curtin.reporter import events |
2050 | +from storagetests import (STORAGE_TEST_REPORTER_CONF_FILE, |
2051 | + STORAGE_TEST_REPORT_STACK_PREFIX, |
2052 | + STORAGE_TEST_DISK_CONF_FILE) |
2053 | + |
2054 | +from tools.report_webhook_logger import CaptureReporting |
2055 | + |
2056 | +LOG = vmtests.logger |
2057 | + |
2058 | +# location for curtin on target system |
2059 | +CURTIN_TAR_PATH = '/tmp/curtin.tar.xz' |
2060 | +CURTIN_EXTRACT_PATH = '/curtin-storagetests' |
2061 | + |
2062 | +TEST_DEPS = [package for (executable, package) in deps.REQUIRED_EXECUTABLES] |
2063 | + |
2064 | + |
2065 | +class ServeInBackground(object): |
2066 | + """context manager that runs a webserver in a separate thread""" |
2067 | + |
2068 | + def __init__(self): |
2069 | + # when given an addr of 0, the first available unused port is used |
2070 | + addr = ('', 0) |
2071 | + self.httpd_request_handler = server.SimpleHTTPRequestHandler |
2072 | + self.httpd_request_handler.log_request = self.dont_log_request |
2073 | + self.httpd = server.HTTPServer(addr, self.httpd_request_handler) |
2074 | + self.httpd.server_activate() |
2075 | + self.port = self.httpd.server_port |
2076 | + self.worker = threading.Thread(target=self.httpd.serve_forever) |
2077 | + |
2078 | + def dont_log_request(self, code=None, size=None): |
2079 | + """ |
2080 | + replaces SimpleHTTPRequestHandler.log_request to prevent |
2081 | + httpd requests from being logged to the console |
2082 | + """ |
2083 | + return |
2084 | + |
2085 | + def __enter__(self): |
2086 | + self.worker.start() |
2087 | + return self |
2088 | + |
2089 | + def __exit__(self, etype, value, trace): |
2090 | + self.httpd.shutdown() |
2091 | + self.httpd.server_close() |
2092 | + |
2093 | + |
2094 | +class InDirectory(object): |
2095 | + """context manager to switch directory while tests are running""" |
2096 | + |
2097 | + def __init__(self, target): |
2098 | + self.target = target |
2099 | + self.old = os.getcwd() |
2100 | + |
2101 | + def __enter__(self): |
2102 | + os.chdir(self.target) |
2103 | + |
2104 | + def __exit__(self, etype, value, trace): |
2105 | + os.chdir(self.old) |
2106 | + |
2107 | + |
2108 | +def gen_user_data(local_ip, httpd_port, files_to_retrieve, extra_scripts, |
2109 | + test_py_ver, shutdown, run_tests): |
2110 | + """ |
2111 | + generate user data to download curtin tarball, extract it, run storagetests |
2112 | + and report back |
2113 | + |
2114 | + @param local_ip: local ip of test system to be used when accessing local |
2115 | + http servers |
2116 | + |
2117 | + @param httpd_port: port of local httpd for serving config files |
2118 | + |
2119 | + @param files_to_retrieve: a list of files to retrieve and write before |
2120 | + starting tests. each entry is a tuple, item 0 is |
2121 | + the path to the file from the httpd document |
2122 | + root, item 1 is the path to write the file to in |
2123 | + the target |
2124 | + |
2125 | + @param extra_scripts: extra scripts to run before tests |
2126 | + |
2127 | + @param test_py_ver: integer major python version number (either 2 or 3) |
2128 | + |
2129 | + @param shutdown: boolean value to specify if system should be shut down |
2130 | + after tests have been run |
2131 | + |
2132 | + @param run_tests: whether to actually run the tests |
2133 | + """ |
2134 | + |
2135 | + if test_py_ver == 2: |
2136 | + python_packages = ['python', 'python-pip', 'python-nose'] |
2137 | + pip_name = 'pip' |
2138 | + nosetests = 'nosetests' |
2139 | + elif test_py_ver == 3: |
2140 | + python_packages = ['python3', 'python3-pip', 'python3-nose'] |
2141 | + pip_name = 'pip3' |
2142 | + nosetests = 'nosetests3' |
2143 | + else: |
2144 | + raise ValueError("not a valid python major version number: %s" |
2145 | + % test_py_ver) |
2146 | + |
2147 | + base_cloudconfig = { |
2148 | + 'password': 'passw0rd', |
2149 | + 'chpasswd': {'expire': False}, |
2150 | + 'power_state': {'mode': 'poweroff' if shutdown else 'none'}, |
2151 | + 'network': {'config': 'disabled'}, |
2152 | + } |
2153 | + (ssh_keys, _) = util.subp(['tools/ssh-keys-list', 'cloud-config'], |
2154 | + capture=True) |
2155 | + |
2156 | + # precises' cloud-init version has limited support for |
2157 | + # cloud-config-archive and expects cloud-config pieces to be appendable |
2158 | + # to a single file and yaml.load()'able. Resolve this by using |
2159 | + # yaml.dump() when generating a list of parts |
2160 | + # |
2161 | + # precise also does not seem to follow write-files or packages config |
2162 | + # entries, so these are handled instead by scripts below |
2163 | + parts = [{'type': 'text/cloud-config', 'content': yaml.dump(p, indent=1)} |
2164 | + for p in [base_cloudconfig, ssh_keys]] |
2165 | + |
2166 | + # install packages using --no-install-recommends, as mdadm causes postfix |
2167 | + # to be installed and that takes quite a bit of extra time |
2168 | + install_packages = textwrap.dedent( |
2169 | + """ |
2170 | + apt-get update |
2171 | + apt-get install --yes --no-install-recommends {} |
2172 | + """).format(' '.join(TEST_DEPS + python_packages)) |
2173 | + |
2174 | + # failsafe poweroff runs on precise only, where power_state does not exist |
2175 | + precise_poweroff = textwrap.dedent( |
2176 | + """ |
2177 | + [ "$(lsb_release -sc)" = "precise" ] || exit 0; |
2178 | + shutdown -P now "Shutting down on precise" |
2179 | + """) |
2180 | + |
2181 | + # download files_to_retrieve elements and write |
2182 | + download_scripts = [] |
2183 | + for (httpd_path, target_path) in files_to_retrieve: |
2184 | + file_url = ('http://' + local_ip + ':{port:d}/{path}' |
2185 | + .format(port=httpd_port, path=httpd_path)) |
2186 | + download_scripts.append(textwrap.dedent( |
2187 | + """ |
2188 | + wget -q -O "{target}" "{url}" |
2189 | + """).format(target=target_path, url=file_url)) |
2190 | + |
2191 | + # extract curtin tarball |
2192 | + extract_curtin_tar = textwrap.dedent( |
2193 | + """ |
2194 | + mkdir -p "{extract_loc}" |
2195 | + tar xf "{tar_loc}" -C "{extract_loc}" |
2196 | + """).format(tar_loc=CURTIN_TAR_PATH, extract_loc=CURTIN_EXTRACT_PATH) |
2197 | + |
2198 | + # install python dependencies via pip |
2199 | + python_deps = ['pyyaml', 'nose-parameterized'] |
2200 | + get_python_deps = "{pip} install {pydeps}".format( |
2201 | + pip=pip_name, pydeps=' '.join(python_deps)) |
2202 | + |
2203 | + # launch storagetests |
2204 | + launch_storagetests = textwrap.dedent( |
2205 | + """ |
2206 | + {target_runner} -w "{extract_loc}" tests/storagetests |
2207 | + """).format(extract_loc=CURTIN_EXTRACT_PATH, |
2208 | + target_runner=nosetests) |
2209 | + |
2210 | + scripts = download_scripts + [install_packages, extract_curtin_tar, |
2211 | + get_python_deps] + extra_scripts |
2212 | + if run_tests: |
2213 | + scripts.append(launch_storagetests) |
2214 | + |
2215 | + if shutdown: |
2216 | + scripts.append(precise_poweroff) |
2217 | + |
2218 | + for part in scripts: |
2219 | + if not part.startswith("#!"): |
2220 | + part = "#!/bin/sh -x\n" + part |
2221 | + LOG.debug('Cloud config archive content (pre-json): %s', part) |
2222 | + parts.append({'content': part, 'type': 'text/x-shellscript'}) |
2223 | + |
2224 | + return '#cloud-config-archive\n' + json.dumps(parts, indent=1) |
2225 | + |
2226 | + |
2227 | +class TestStorageBase(unittest.TestCase): |
2228 | + """ |
2229 | + Base for storagetest_runner test classes. |
2230 | + Handles booting target vm and verifying results |
2231 | + """ |
2232 | + __test__ = False |
2233 | + interactive = False |
2234 | + |
2235 | + # appended to scripts to run before starting tests |
2236 | + cc_extra_scripts = [] |
2237 | + |
2238 | + # major release only |
2239 | + test_py_ver = 3 |
2240 | + |
2241 | + # should be overridden by relbase mixin |
2242 | + arch = None |
2243 | + release = None |
2244 | + krel = None |
2245 | + |
2246 | + # these probably don't need to be modified |
2247 | + boot_timeout = 2400 |
2248 | + disk_block_size = 512 |
2249 | + |
2250 | + # this can be overridden to change disk preferences |
2251 | + extra_disks = [('10G', None), ('10G', None), ('10G', None), ('10G', None)] |
2252 | + |
2253 | + # this tells the storage tests which disks in the target to test on; it |
2254 | + # is merged into the storage config, so disks may be specified by any |
2255 | + # means (path/serial) |
2255 | + storage_test_disks = {'disk1': {'path': '/dev/vdb'}, |
2256 | + 'disk2': {'path': '/dev/vdc'}, |
2257 | + 'disk3': {'path': '/dev/vdd'}, |
2258 | + 'disk4': {'path': '/dev/vde'}} |
2259 | + |
2260 | + @classmethod |
2261 | + def setUpClass(cls): |
2262 | + """setUpClass that does only a single boot with xkvm""" |
2263 | + |
2264 | + local_ip = vmtests.get_lan_ip() |
2265 | + setup_start = time.time() |
2266 | + LOG.info('Starting setup for testclass: %s', cls.__name__) |
2267 | + |
2268 | + # set up image store with sync disabled, as it is done by vmtests |
2269 | + img_store = vmtests.ImageStore(vmtests.IMAGE_SRC_URL, |
2270 | + vmtests.IMAGE_DIR) |
2271 | + img_store.sync = False |
2272 | + (img_verstr, (boot_img, boot_kernel, boot_initrd, _)) = \ |
2273 | + img_store.get_image(cls.release, cls.arch, cls.krel) |
2274 | + LOG.debug('Image %s\n boot=%s\n kernel=%s\n initrd=%s\n', |
2275 | + img_verstr, boot_img, boot_kernel, boot_initrd) |
2276 | + |
2277 | + # set up tmpdir, no userdata is passed in yet |
2278 | + cls.td = vmtests.TempDir(cls.__name__, "") |
2279 | + LOG.info('Using tempdir: %s', cls.td.tmpdir) |
2280 | + cls.boot_log = os.path.join(cls.td.logs, 'boot-serial.log') |
2281 | + cls.storage_log = os.path.join(cls.td.logs, 'storage-events.json') |
2282 | + xout_path = os.path.join(cls.td.logs, 'boot-xkvm.out') |
2283 | + LOG.info('Boot console log: %s', cls.boot_log) |
2284 | + LOG.info('Storage Test Events: %s', cls.storage_log) |
2285 | + httpd_base_dir = cls.td.boot |
2286 | + |
2287 | + # prepare backgrounded webservers |
2288 | + storage_events_logger = CaptureReporting(cls.storage_log) |
2289 | + publish_srv = ServeInBackground() |
2290 | + in_dir = InDirectory(httpd_base_dir) |
2291 | + |
2292 | + # get userdata |
2293 | + reporting_conf_path = 'storagetest-reporting.json' |
2294 | + test_disk_conf_path = 'storagetest-disk-conf.json' |
2295 | + curtin_tar_path = 'curtin.tar.xz' |
2296 | + files_to_retrieve = [ |
2297 | + (reporting_conf_path, STORAGE_TEST_REPORTER_CONF_FILE), |
2298 | + (test_disk_conf_path, STORAGE_TEST_DISK_CONF_FILE), |
2299 | + (curtin_tar_path, CURTIN_TAR_PATH), |
2300 | + ] |
2301 | + user_data = gen_user_data(local_ip, publish_srv.port, |
2302 | + files_to_retrieve, cls.cc_extra_scripts, |
2303 | + cls.test_py_ver, (not cls.interactive), |
2304 | + (not cls.interactive)) |
2305 | + cls.td.write_userdata(user_data) |
2306 | + |
2307 | + # prepare to serve storage test config and curtin tarball |
2308 | + reporting_url = ('http://' + local_ip + |
2309 | + ':{:d}/'.format(storage_events_logger.port)) |
2310 | + reporting_conf = util.json_dumps({ |
2311 | + 'reporting': { |
2312 | + 'tests': { |
2313 | + 'level': 'DEBUG', |
2314 | + 'type': 'webhook', |
2315 | + 'endpoint': reporting_url, |
2316 | + }}}) |
2317 | + util.write_file(os.path.join(httpd_base_dir, reporting_conf_path), |
2318 | + reporting_conf) |
2319 | + |
2320 | + test_disk_conf = util.json_dumps(cls.storage_test_disks) |
2321 | + util.write_file(os.path.join(httpd_base_dir, test_disk_conf_path), |
2322 | + test_disk_conf) |
2323 | + |
2324 | + current_curtin_path = os.getcwd() |
2325 | + LOG.info('Building tarball of curtin: %s', current_curtin_path) |
2326 | + # tarball is built excluding 'output' as this is where the jenkins test |
2327 | + # runner puts the base for the curtin tmp dir and including it causes |
2328 | + # tar to try to include the output tarball in itself, making an |
2329 | + # infinitely large file |
2330 | + subprocess.check_call( |
2331 | + ['tar', 'cf', os.path.join(httpd_base_dir, curtin_tar_path), |
2332 | + '-C', current_curtin_path, '--exclude=output', '.'], |
2333 | + stdout=vmtests.DEVNULL, stderr=subprocess.STDOUT) |
2334 | + |
2335 | + # create boot disk and extra disks |
2336 | + block_size_args = ','.join([ |
2337 | + 'logical_block_size={}'.format(cls.disk_block_size), |
2338 | + 'physical_block_size={}'.format(cls.disk_block_size), |
2339 | + 'min_io_size={}'.format(cls.disk_block_size), ]) |
2340 | + |
2341 | + def format_disk_args(number, src, driver, |
2342 | + fmt=vmtests.TARGET_IMAGE_FORMAT): |
2343 | + if driver is None or len(driver) == 0: |
2344 | + driver = 'virtio-blk' |
2345 | + drive_arg = ','.join(['file={}'.format(src), 'if=none', |
2346 | + 'cache=unsafe', 'format={}'.format(fmt), |
2347 | + 'id=drv{:d}'.format(number), |
2348 | + 'index={:d}'.format(number), ]) |
2349 | + device_arg = ','.join([driver, 'drive=drv{:d}'.format(number), |
2350 | + 'serial=dev{:d}'.format(number), |
2351 | + block_size_args, ]) |
2352 | + return ['-drive', drive_arg, '-device', device_arg] |
2353 | + |
2354 | + disks = [] |
2355 | + cls.td.boot_disk = os.path.join(cls.td.disks, 'boot_disk.img') |
2356 | + subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2', |
2357 | + '-b', boot_img, cls.td.boot_disk, '4G'], |
2358 | + stdout=vmtests.DEVNULL, stderr=subprocess.STDOUT) |
2359 | + disks.extend(format_disk_args(0, cls.td.boot_disk, None, fmt='qcow2')) |
2360 | + |
2361 | + for (disk_no, (disk_sz, disk_driver)) in enumerate(cls.extra_disks): |
2362 | + dpath = os.path.join( |
2363 | + cls.td.disks, 'extra_disk_{:d}.img'.format(disk_no)) |
2364 | + subprocess.check_call( |
2365 | + ['qemu-img', 'create', '-f', vmtests.TARGET_IMAGE_FORMAT, |
2366 | + dpath, disk_sz], |
2367 | + stdout=vmtests.DEVNULL, |
2368 | + stderr=subprocess.STDOUT) |
2369 | + disks.extend(format_disk_args(disk_no + 1, dpath, disk_driver)) |
2370 | + |
2371 | + # create xkvm command |
2372 | + xkvm = os.path.join(current_curtin_path, 'tools/xkvm') |
2373 | + cmd = ([xkvm, '-v', '--no-dowait' if cls.interactive else '--dowait', |
2374 | + '--netdev={}'.format(vmtests.DEFAULT_BRIDGE), '--'] + disks + |
2375 | + ['-m', '1024', |
2376 | + '-curses' if cls.interactive else '-nographic', |
2377 | + '-serial', 'file:{}'.format(cls.boot_log)] + |
2378 | + ['-drive', |
2379 | + 'file={},if=virtio,media=cdrom'.format(cls.td.seed_disk)] + |
2380 | + ['-kernel', boot_kernel, '-initrd', boot_initrd, |
2381 | + '-append', 'root=/dev/vda console=ttyS0 ds=nocloud']) |
2382 | + |
2383 | + # boot storage test runner system |
2384 | + try: |
2385 | + LOG.info('Booting target image + running tests: %s', cls.boot_log) |
2386 | + LOG.debug('%s', ' '.join(cmd)) |
2387 | + with open(xout_path, 'wb') as fpout: |
2388 | + with in_dir, publish_srv, storage_events_logger: |
2389 | + cls.boot_system(cmd, console_log=cls.boot_log, |
2390 | + proc_out=fpout, timeout=cls.boot_timeout, |
2391 | + purpose='boot', |
2392 | + interactive=cls.interactive) |
2393 | + except Exception as error: |
2394 | + LOG.error('Booting storage test system failed: %s', error) |
2395 | + cls.tearDownClass() |
2396 | + raise error |
2397 | + finally: |
2398 | + if os.path.exists(cls.boot_log): |
2399 | + content = (util.load_file(cls.boot_log, mode='rb') |
2400 | + .decode('utf-8', errors='replace')) |
2401 | + LOG.debug('boot serial console output:\n%s', content) |
2402 | + else: |
2403 | + LOG.warning('booting test system did not produce console log') |
2404 | + |
2405 | + LOG.info( |
2406 | + '{} setUpClass finished. took {:.2f} seconds. Running testcases.' |
2407 | + .format(cls.__name__, time.time() - setup_start)) |
2408 | + |
2409 | + @classmethod |
2410 | + def tearDownClass(cls): |
2411 | + """determine tests passed and clear tmpdir if needed""" |
2412 | + |
2413 | + success = False |
2414 | + sfile = os.path.exists(cls.td.success_file) |
2415 | + efile = os.path.exists(cls.td.errors_file) |
2416 | + if not (sfile or efile): |
2417 | + LOG.warning('class %s had no status. possibly no tests run', |
2418 | + cls.__name__) |
2419 | + elif sfile and efile: |
2420 | + LOG.warning('class %s had success and fail', cls.__name__) |
2421 | + elif sfile: |
2422 | + success = True |
2423 | + |
2424 | + vmtests.clean_working_dir(cls.td.tmpdir, success, |
2425 | + keep_pass=vmtests.KEEP_DATA['pass'], |
2426 | + keep_fail=vmtests.KEEP_DATA['fail']) |
2427 | + |
2428 | + @classmethod |
2429 | + def boot_system(cls, cmd, console_log, proc_out, timeout, purpose, |
2430 | + interactive=False): |
2431 | + """boot target, wrapping boot cmd in timout check""" |
2432 | + |
2433 | + def boot_interactive(): |
2434 | + helpers.check_call(cmd, timeout=timeout) |
2435 | + return True |
2436 | + |
2437 | + def myboot(): |
2438 | + helpers.check_call(cmd, timeout=timeout, stdout=proc_out, |
2439 | + stderr=subprocess.STDOUT) |
2440 | + return True |
2441 | + |
2442 | + return vmtests.boot_log_wrap( |
2443 | + cls.__name__, boot_interactive if interactive else myboot, |
2444 | + cmd, console_log, timeout, purpose) |
2445 | + |
2446 | + def test_reported_results(self): |
2447 | + """parsing storage test reporting log to verify test results""" |
2448 | + LOG.info('Parsing storage test reporting log to verify test results') |
2449 | + |
2450 | + # load data |
2451 | + self.assertTrue(os.path.isfile(self.storage_log)) |
2452 | + data = json.loads(util.load_file(self.storage_log)) |
2453 | + |
2454 | + # filter out events that come from curtin block_meta |
2455 | + # since storagetests.make_command_environment sets the report stack |
2456 | + # prefix to a known value, all reporting events that come from within |
2457 | + # block meta will start with this prefix, while events from |
2458 | + # BaseStorageTest.run() will not |
2459 | + data = [e for e in data if 'name' in e and not |
2460 | + e.get('name').startswith(STORAGE_TEST_REPORT_STACK_PREFIX)] |
2461 | + |
2462 | + # data must be even length because everything needs to have start and |
2463 | + # stop |
2464 | + self.assertEqual( |
2465 | + len(data) % 2, 0, |
2466 | + 'some reporting data missing, maybe not all tests ran?') |
2467 | + |
2468 | + while len(data) > 0: |
2469 | + (start, finish) = data[:2] |
2470 | + data = data[2:] |
2471 | + self.assertEqual(start.get('event_type'), |
2472 | + events.START_EVENT_TYPE) |
2473 | + self.assertEqual(finish.get('event_type'), |
2474 | + events.FINISH_EVENT_TYPE) |
2475 | + |
2476 | + # decode returned curtin log |
2477 | + self.assertEqual(len(finish.get('files')), 2) |
2478 | + for (enc_log, log_dir_prefix) in zip( |
2479 | + finish.get('files'), ('curtin_logs', 'test_stats')): |
2480 | + log_file_basename = os.path.basename(enc_log.get('path')) |
2481 | + log_target_path = os.path.join(self.td.logs, log_dir_prefix, |
2482 | + log_file_basename) |
2483 | + decoded = base64.b64decode(enc_log.get('content')) |
2484 | + util.write_file(log_target_path, decoded, omode='wb') |
2485 | + |
2486 | + # log result |
2487 | + test_result = finish.get('result') == events.status.SUCCESS |
2488 | + record_file = (self.td.success_file if test_result |
2489 | + else self.td.errors_file) |
2490 | + |
2491 | + record_listing = [] |
2492 | + if os.path.exists(record_file): |
2493 | + contents = util.load_file(record_file) |
2494 | + if len(contents) != 0: |
2495 | + record_listing = json.loads(contents) |
2496 | + record_entry = {'runner_class': type(self).__name__, |
2497 | + 'test_name': start.get('name'), |
2498 | + 'result': finish.get('result')} |
2499 | + LOG.debug('test result: %s', record_entry) |
2500 | + record_listing.append(record_entry) |
2501 | + util.write_file( |
2502 | + record_file, util.json_dumps(record_listing)) |
2503 | + |
2504 | + self.assertTrue( |
2505 | + test_result, 'failed test: {}'.format(start.get('name'))) |
2506 | |
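The `test_reported_results` method above relies on one invariant: after filtering out block_meta events, the webhook log must be an even-length list alternating strictly between start and finish events. A standalone sketch of that pairing rule (event dicts are simplified here; the real events also carry `files`, `result`, etc., and the event-type strings are assumed to match curtin's reporter constants):

```python
def pair_events(data):
    """Split a flat event list into (start, finish) tuples, checking order."""
    if len(data) % 2:
        raise ValueError('odd number of events; a test may not have finished')
    pairs = []
    for start, finish in zip(data[::2], data[1::2]):
        if start['event_type'] != 'start' or finish['event_type'] != 'finish':
            raise ValueError('events out of order')
        pairs.append((start, finish))
    return pairs

# a two-entry log as the webhook logger would capture for one test
log = [
    {'name': 'test_lvm', 'event_type': 'start'},
    {'name': 'test_lvm', 'event_type': 'finish', 'result': 'SUCCESS'},
]
print([f['result'] for _, f in pair_events(log)])  # -> ['SUCCESS']
```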
2507 | === added file 'tests/storagetest_runner/test_advanced_format.py' |
2508 | --- tests/storagetest_runner/test_advanced_format.py 1970-01-01 00:00:00 +0000 |
2509 | +++ tests/storagetest_runner/test_advanced_format.py 2016-09-15 18:06:47 +0000 |
2510 | @@ -0,0 +1,43 @@ |
2511 | +from . import TestStorageBase |
2512 | +from vmtests.releases import base_vm_classes as relbase |
2513 | + |
2514 | + |
2515 | +class AdvancedFormatTestBase(TestStorageBase): |
2516 | + __test__ = False |
2517 | + disk_block_size = 4096 |
2518 | + |
2519 | + |
2520 | +class PreciseAdvancedFormatTestStorage(relbase.precise_hwe_t, |
2521 | + AdvancedFormatTestBase): |
2522 | + __test__ = True |
2523 | + test_py_ver = 2 |
2524 | + |
2525 | + |
2526 | +class TrustyAdvancedFormatTestStorage(relbase.trusty, |
2527 | + AdvancedFormatTestBase): |
2528 | + __test__ = True |
2529 | + test_py_ver = 2 |
2530 | + |
2531 | + |
2532 | +class XenialAdvancedFormatTestStorage(relbase.xenial, |
2533 | + AdvancedFormatTestBase): |
2534 | + __test__ = True |
2535 | + |
2536 | + |
2537 | +class YakketyAdvancedFormatTestStorage(relbase.yakkety, |
2538 | + AdvancedFormatTestBase): |
2539 | + __test__ = True |
2540 | + |
2541 | + |
2542 | +class TrustyAdvancedFormatTestInteractive(relbase.trusty, |
2543 | + AdvancedFormatTestBase): |
2544 | + # this is just for manually running storagetests, should be turned off |
2545 | + __test__ = False |
2546 | + interactive = True |
2547 | + test_py_ver = 2 |
2548 | + |
2549 | + |
2550 | +class XenialAdvancedFormatTestInteractive(relbase.xenial, |
2551 | + AdvancedFormatTestBase): |
2552 | + __test__ = False |
2553 | + interactive = True |
2554 | |
2555 | === added file 'tests/storagetest_runner/test_basic.py' |
2556 | --- tests/storagetest_runner/test_basic.py 1970-01-01 00:00:00 +0000 |
2557 | +++ tests/storagetest_runner/test_basic.py 2016-09-15 18:06:47 +0000 |
2558 | @@ -0,0 +1,38 @@ |
2559 | +from . import TestStorageBase |
2560 | +from vmtests.releases import base_vm_classes as relbase |
2561 | + |
2562 | + |
2563 | +class PreciseTestStorage(relbase.precise_hwe_t, TestStorageBase): |
2564 | + __test__ = True |
2565 | + test_py_ver = 2 |
2566 | + |
2567 | + |
2568 | +class TrustyTestStorage(relbase.trusty, TestStorageBase): |
2569 | + __test__ = True |
2570 | + test_py_ver = 2 |
2571 | + |
2572 | + |
2573 | +class XenialTestStorage(relbase.xenial, TestStorageBase): |
2574 | + __test__ = True |
2575 | + |
2576 | + |
2577 | +class YakketyTestStorage(relbase.yakkety, TestStorageBase): |
2578 | + __test__ = True |
2579 | + |
2580 | + |
2581 | +class PreciseTestInteractive(relbase.precise_hwe_t, TestStorageBase): |
2582 | + # this is just for manual testing and should be left disabled |
2583 | + __test__ = False |
2584 | + interactive = True |
2585 | + test_py_ver = 2 |
2586 | + |
2587 | + |
2588 | +class TrustyTestInteractive(relbase.trusty, TestStorageBase): |
2589 | + __test__ = False |
2590 | + interactive = True |
2591 | + test_py_ver = 2 |
2592 | + |
2593 | + |
2594 | +class YakketyTestInteractive(relbase.yakkety, TestStorageBase): |
2595 | + __test__ = False |
2596 | + interactive = True |
2597 | |
2598 | === added file 'tests/storagetest_runner/test_nvme.py' |
2599 | --- tests/storagetest_runner/test_nvme.py 1970-01-01 00:00:00 +0000 |
2600 | +++ tests/storagetest_runner/test_nvme.py 2016-09-15 18:06:47 +0000 |
2601 | @@ -0,0 +1,35 @@ |
2602 | +from . import TestStorageBase |
2603 | +from vmtests.releases import base_vm_classes as relbase |
2604 | + |
2605 | + |
2606 | +class NVMETestBase(TestStorageBase): |
2607 | + __test__ = False |
2608 | + extra_disks = [('10G', 'nvme'), ('10G', 'nvme'), ('10G', 'nvme')] |
2609 | + storage_test_disks = {'disk1': {'path': '/dev/nvme0n1'}, |
2610 | + 'disk2': {'path': '/dev/nvme1n1'}, |
2611 | + 'disk3': {'path': '/dev/nvme2n1'}, |
2612 | + 'disk4': {'path': '/dev/nvme3n1'}} |
2613 | + |
2614 | + |
2615 | +class PreciseNVMETestStorage(relbase.precise_hwe_t, NVMETestBase): |
2616 | + __test__ = True |
2617 | + test_py_ver = 2 |
2618 | + |
2619 | + |
2620 | +class TrustyNVMETestStorage(relbase.trusty, NVMETestBase): |
2621 | + __test__ = True |
2622 | + test_py_ver = 2 |
2623 | + |
2624 | + |
2625 | +class XenialNVMETestStorage(relbase.xenial, NVMETestBase): |
2626 | + __test__ = True |
2627 | + |
2628 | + |
2629 | +class YakketyNVMETestStorage(relbase.yakkety, NVMETestBase): |
2630 | + __test__ = True |
2631 | + |
2632 | + |
2633 | +class YakketyNVMETestInteractive(relbase.yakkety, NVMETestBase): |
2634 | + # this is just for manual testing, should be left disabled |
2635 | + __test__ = False |
2636 | + interactive = True |
2637 | |
2638 | === added file 'tests/storagetest_runner/test_scsi.py' |
2639 | --- tests/storagetest_runner/test_scsi.py 1970-01-01 00:00:00 +0000 |
2640 | +++ tests/storagetest_runner/test_scsi.py 2016-09-15 18:06:47 +0000 |
2641 | @@ -0,0 +1,36 @@ |
2642 | +from . import TestStorageBase |
2643 | +from vmtests.releases import base_vm_classes as relbase |
2644 | + |
2645 | + |
2646 | +class SCSITestBase(TestStorageBase): |
2647 | + __test__ = False |
2648 | + extra_disks = [('10G', 'scsi-hd'), ('10G', 'scsi-hd'), ('10G', 'scsi-hd'), |
2649 | + ('10G', 'scsi-hd')] |
2650 | + storage_test_disks = {'disk1': {'path': '/dev/sda'}, |
2651 | + 'disk2': {'path': '/dev/sdb'}, |
2652 | + 'disk3': {'path': '/dev/sdc'}, |
2653 | + 'disk4': {'path': '/dev/sdd'}} |
2654 | + |
2655 | + |
2656 | +class PreciseSCSITestStorage(relbase.precise_hwe_t, SCSITestBase): |
2657 | + __test__ = True |
2658 | + test_py_ver = 2 |
2659 | + |
2660 | + |
2661 | +class TrustySCSITestStorage(relbase.trusty, SCSITestBase): |
2662 | + __test__ = True |
2663 | + test_py_ver = 2 |
2664 | + |
2665 | + |
2666 | +class XenialSCSITestStorage(relbase.xenial, SCSITestBase): |
2667 | + __test__ = True |
2668 | + |
2669 | + |
2670 | +class YakketySCSITestStorage(relbase.yakkety, SCSITestBase): |
2671 | + __test__ = True |
2672 | + |
2673 | + |
2674 | +class YakketySCSITestInteractive(relbase.yakkety, SCSITestBase): |
2675 | + # this is just for manual testing, should be left disabled |
2676 | + __test__ = False |
2677 | + interactive = True |
2678 | |
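The test_nvme.py and test_scsi.py modules above both follow the same pattern: a hardware base class carries the disk layout while a release mixin carries the OS image, so each concrete test class is just a two-line combination of the two. A minimal stdlib sketch of that mixin matrix (all class names here are illustrative, not part of curtin or vmtests):

```python
class StorageBase(object):
    """Hardware-agnostic base; subclasses fill in the disk layout."""
    storage_test_disks = {}


class SCSIBase(StorageBase):
    # hardware mixin: where the test disks appear in the guest
    storage_test_disks = {'disk1': {'path': '/dev/sda'}}


class Xenial(object):
    # release mixin: which OS image the runner boots
    release = 'xenial'


class XenialSCSI(Xenial, SCSIBase):
    # concrete test class: release x hardware, nothing else to write
    pass


case = XenialSCSI()
assert case.release == 'xenial'
assert case.storage_test_disks['disk1']['path'] == '/dev/sda'
```

Adding a new release or a new disk bus then means one new mixin plus one combination class per cell of the matrix, which is exactly the shape of the classes in the diff above.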
2679 | === added directory 'tests/storagetests' |
2680 | === added file 'tests/storagetests/__init__.py' |
2681 | --- tests/storagetests/__init__.py 1970-01-01 00:00:00 +0000 |
2682 | +++ tests/storagetests/__init__.py 2016-09-15 18:06:47 +0000 |
2683 | @@ -0,0 +1,261 @@ |
2684 | +import json |
2685 | +import logging |
2686 | +import os |
2687 | +import shutil |
2688 | +import tempfile |
2689 | +import unittest |
2690 | + |
2691 | +from collections import namedtuple |
2692 | +from collections import OrderedDict |
2693 | + |
2694 | +from curtin import (block, util, config, reporter) |
2695 | +from curtin.commands import block_meta |
2696 | +from curtin.reporter import events |
2697 | + |
2698 | +STORAGE_TEST_REPORTER_CONF_FILE = '/storagetests-reporting.json' |
2699 | +STORAGE_TEST_REPORT_STACK_PREFIX = 'curtin/storagetests' |
2700 | +STORAGE_TEST_LOG_DIR = '/tmp/storagetest_logs' |
2701 | +STORAGE_TEST_DISK_CONF_FILE = '/storagetests-disks.json' |
2702 | + |
2703 | +VERBOSE_LOG_LEVEL = 2 |
2704 | + |
2705 | + |
2706 | +def make_command_environment(tdir): |
2707 | + """Configure os.environ to match setup during real curtin install""" |
2708 | + # copied from util.load_command_environment |
2709 | + mapping = {'scratch': 'WORKING_DIR', 'fstab': 'OUTPUT_FSTAB', |
2710 | + 'interfaces': 'OUTPUT_INTERFACES', 'config': 'CONFIG', |
2711 | + 'target': 'TARGET_MOUNT_POINT', |
2712 | + 'network_state': 'OUTPUT_NETWORK_STATE', |
2713 | + 'network_config': 'OUTPUT_NETWORK_CONFIG', |
2714 | + 'report_stack_prefix': 'CURTIN_REPORTSTACK'} |
2715 | + |
2716 | + directories = ('WORKING_DIR', 'TARGET_MOUNT_POINT') |
2717 | + data = {env_name: os.path.join(tdir, name) |
2718 | + for name, env_name in mapping.items()} |
2719 | + for directory in directories: |
2720 | + util.ensure_dir(data[directory]) |
2721 | + data['CURTIN_REPORTSTACK'] = STORAGE_TEST_REPORT_STACK_PREFIX |
2722 | + |
2723 | + # set verbosity to high |
2724 | + data['CURTIN_VERBOSITY'] = str(VERBOSE_LOG_LEVEL) |
2725 | + |
2726 | + os.environ.update(data) |
2727 | + |
2728 | + |
2729 | +class BaseStorageTest(unittest.TestCase): |
2730 | + """ |
2731 | + Base class for storage test runner classes. |
2732 | + Configures logging, reporting, and provides test runner function |
2733 | + """ |
2734 | + __test__ = False |
2735 | + disabled_tests = [] |
2736 | + preserve_test_files = [('test1.txt', 'partition preserve test one'), |
2737 | + ('test2', 'test #2'), |
2738 | + ('test3.txt', 'last test')] |
2739 | + |
2740 | + @classmethod |
2741 | + def setUpClass(cls): |
2742 | + """set up reporter if reporter configuration available""" |
2743 | + cls.report = os.path.exists(STORAGE_TEST_REPORTER_CONF_FILE) |
2744 | + cls.report_prefix = cls.__name__ |
2745 | + |
2746 | + # set up local log |
2747 | + if not os.path.exists(STORAGE_TEST_LOG_DIR): |
2748 | + util.ensure_dir(STORAGE_TEST_LOG_DIR) |
2749 | + log_file_name = '{}.log'.format(cls.report_prefix) |
2750 | + report_dump_name = '{}_results.json'.format(cls.report_prefix) |
2751 | + cls.log_file = os.path.join(STORAGE_TEST_LOG_DIR, log_file_name) |
2752 | + cls.report_dump = os.path.join(STORAGE_TEST_LOG_DIR, report_dump_name) |
2753 | + cls.log = logging.getLogger() |
2754 | + handler = logging.FileHandler(filename=cls.log_file) |
2755 | + handler.setLevel(VERBOSE_LOG_LEVEL) |
2756 | + cls.log.addHandler(handler) |
2757 | + cls.log.setLevel(VERBOSE_LOG_LEVEL) |
2758 | + |
2759 | + # init reporter |
2760 | + if cls.report: |
2761 | + conf = json.loads(util.load_file(STORAGE_TEST_REPORTER_CONF_FILE)) |
2762 | + reporter.update_configuration(conf.get('reporting', conf)) |
2763 | + |
2764 | + def run(self, result=None): |
2765 | + """run actual test (overridden from TestCase.run)""" |
2766 | + test_reporter = events.ReportEventStack( |
2767 | + name=self.id(), description='storage test', |
2768 | + reporting_enabled=True, level='INFO', |
2769 | + post_files=[self.log_file, self.report_dump]) |
2770 | + with test_reporter: |
2771 | + super(BaseStorageTest, self).run(result) |
2772 | + json_result = util.json_dumps({ |
2773 | + 'errors': [str(e[0]) for e in result.errors], |
2774 | + 'failures': [str(e[0]) for e in result.failures], |
2775 | + }) |
2776 | + util.write_file(self.report_dump, json_result) |
2777 | + if result.test.passed not in (True, None): |
2778 | + test_reporter.result = 'FAIL' |
2779 | + |
2780 | + def _get_config(self, conf_file): |
2781 | + """ |
2782 | + check if current config file has been disabled and skip if it has. |
2783 | + otherwise load CurtinConfig object with config file and set paths |
2784 | + """ |
2785 | + if conf_file in self.disabled_tests: |
2786 | + self.skipTest('testing conf file disabled: {}'.format(conf_file)) |
2787 | + conf = CurtinConfig(conf_file) |
2788 | + if not conf.set_test_disk_paths(): |
2789 | + self.skipTest('not enough disks for config: {}'.format(conf_file)) |
2790 | + return conf |
2791 | + |
2792 | + def _get_test_functions(self): |
2793 | + """find all functions in current test class staring with _test_""" |
2794 | + return [getattr(self, f) for f in dir(self) if f.startswith('_test_')] |
2795 | + |
2796 | + def _config_tester(self, conf_file, with_preserve=False): |
2797 | + """run block meta with specified config file and examine results""" |
2798 | + conf = self._get_config(conf_file) |
2799 | + with CurtinEnvironment() as env: |
2800 | + conf.run() |
2801 | + for test_function in self._get_test_functions(): |
2802 | + test_function(conf, env) |
2803 | + |
2804 | + # before leaving test env and removing tmpdir create files for |
2805 | + # preserve test |
2806 | + if with_preserve: |
2807 | + conf.set_preserves() |
2808 | + for mountpoint in conf.preserved_mountpoints(): |
2809 | + path = env.join_mount_path(mountpoint) |
2810 | + for test_file in self.preserve_test_files: |
2811 | + util.write_file(os.path.join(path, test_file[0]), |
2812 | + test_file[1]) |
2813 | + |
2814 | + # check that preserve works |
2815 | + if with_preserve: |
2816 | + with CurtinEnvironment() as env: |
2817 | + conf.run() |
2818 | + for mountpoint in conf.preserved_mountpoints(): |
2819 | + path = env.join_mount_path(mountpoint) |
2820 | + for test_file in self.preserve_test_files: |
2821 | + test_file_path = os.path.join(path, test_file[0]) |
2822 | + self.assertTrue(os.path.isfile(test_file_path)) |
2823 | + self.assertEqual(util.load_file(test_file_path), |
2824 | + test_file[1]) |
2825 | + |
2826 | + |
2827 | +class CurtinConfig(object): |
2828 | + """ |
2829 | + class to handle configuration file loading/parsing |
2830 | + also wraps calls to block_meta |
2831 | + """ |
2832 | + |
2833 | + conf_base = "examples/storagetests" |
2834 | + |
2835 | + def __init__(self, conf_file): |
2836 | + # read config |
2837 | + conf_path = os.path.join(self.conf_base, conf_file) |
2838 | + if not os.path.exists(conf_path): |
2839 | + raise ValueError("could not find conf file %s" % conf_path) |
2840 | + self.config = config.load_config(conf_path) |
2841 | + |
2842 | + @property |
2843 | + def actual_config(self): |
2844 | + """the 'real' storage configuration (list of config entries)""" |
2845 | + return self.config['storage']['config'] |
2846 | + |
2847 | + def set_test_disk_paths(self): |
2848 | + """set paths for disks in config, return True iff all paths valid""" |
2849 | + default = {'disk1': {'path': '/dev/vdb'}, |
2850 | + 'disk2': {'path': '/dev/vdc'}, |
2851 | + 'disk3': {'path': '/dev/vdd'}} |
2852 | + paths = (json.loads(util.load_file(STORAGE_TEST_DISK_CONF_FILE)) |
2853 | + if os.path.exists(STORAGE_TEST_DISK_CONF_FILE) else default) |
2854 | + for disk in self.find_matching('type', 'disk'): |
2855 | + disk.update(paths.get(disk['id'], {})) |
2856 | + |
2857 | + # only return true if curtin can find a path to every disk |
2858 | + return all(p is not None and block.is_valid_device(p) |
2859 | + for p in [self.path_to(d) for d in paths]) |
2860 | + |
2861 | + def set_preserves(self): |
2862 | + """Set preserve flags everywhere in config that supports them""" |
2863 | + preservable_types = ["disk", "partition", "format", "lvm_volgroup", |
2864 | + "lvm_partition"] |
2865 | + for preservable in preservable_types: |
2866 | + entries = self.find_matching('type', preservable) |
2867 | + for entry in entries: |
2868 | + entry['preserve'] = True |
2869 | + |
2870 | + def preserved_mountpoints(self): |
2871 | + """get list of mountpoints on filesystems marked 'preserve'""" |
2872 | + mps = self.find_matching('type', 'mount') |
2873 | + return list(mp['path'] for mp in mps |
2874 | + if self.with_id(mp['device']).get('preserve')) |
2875 | + |
2876 | + def find_matching(self, attr, val): |
2877 | + """search through config for matching entry""" |
2878 | + return list(i for i in self.actual_config if i.get(attr) == val) |
2879 | + |
2880 | + def with_id(self, target): |
2881 | + """find config entry with given id""" |
2882 | + entry = self.find_matching('id', target) |
2883 | + if entry is None or len(entry) != 1: |
2884 | + return None |
2885 | + return entry[0] |
2886 | + |
2887 | + def path_to(self, target): |
2888 | + """ |
2889 | + wrapper for block_meta.get_path_to_storage_volume that handles |
2890 | + failure more cleanly |
2891 | + """ |
2892 | + try: |
2893 | + if isinstance(target, dict): |
2894 | + target = target.get('id') |
2895 | + path = block_meta.get_path_to_storage_volume( |
2896 | + target, self.as_ordered_dict()) |
2897 | + except (ValueError, NotImplementedError): |
2898 | + return None |
2899 | + return path |
2900 | + |
2901 | + def kname_of(self, target): |
2902 | + """get kname for target""" |
2903 | + path = self.path_to(target) |
2904 | + return os.path.split(path)[-1] |
2905 | + |
2906 | + def as_ordered_dict(self): |
2907 | + """ |
2908 | + get dict that matches block_meta.extract_storage_config_odict |
2909 | + """ |
2910 | + return OrderedDict((d['id'], d) |
2911 | + for d in self.actual_config) |
2912 | + |
2913 | + def run(self): |
2914 | + """start block_meta.meta_custom on loaded config""" |
2915 | + arg_holder = namedtuple('args', 'config') |
2916 | + args = arg_holder(config=self.config) |
2917 | + block_meta.meta_custom(args) |
2918 | + |
2919 | + |
2920 | +class CurtinEnvironment(object): |
2921 | + """context manager that approximates curtin internal state env vars""" |
2922 | + |
2923 | + def __init__(self): |
2924 | + self.tmpdir = tempfile.mkdtemp() |
2925 | + |
2926 | + def __enter__(self): |
2927 | + make_command_environment(self.tmpdir) |
2928 | + return self |
2929 | + |
2930 | + def join_mount_path(self, abs_path): |
2931 | + """get path to mountpoint in tmpdir from target mount path""" |
2932 | + while len(abs_path) > 0 and abs_path[0] == "/": |
2933 | + abs_path = abs_path[1:] |
2934 | + return os.path.join(self.tmpdir, 'target', abs_path) |
2935 | + |
2936 | + def __exit__(self, etype, value, traceback): |
2937 | + # Note, this is unmount technique at end of |
2938 | + # curtin.commands.install.cmd_install |
2939 | + mounted = block.get_mountpoints() |
2940 | + mounted.sort(key=lambda x: -1 * x.count("/")) |
2941 | + for directory in [d for d in mounted if self.tmpdir in d]: |
2942 | + util.do_umount(directory) |
2943 | + util.do_umount(os.path.join(self.tmpdir, 'target')) |
2944 | + shutil.rmtree(self.tmpdir) |
2945 | |
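The CurtinEnvironment class above pairs tempfile.mkdtemp() on entry with cleanup in __exit__ so each config run gets a throwaway target tree. A stripped-down sketch of the same context-manager shape, without the umount logic (TempWorkspace and its methods are illustrative names, not curtin API):

```python
import os
import shutil
import tempfile


class TempWorkspace(object):
    """Create a scratch directory on entry, remove it on exit."""

    def __enter__(self):
        self.tmpdir = tempfile.mkdtemp()
        return self

    def join(self, abs_path):
        # strip leading '/' so os.path.join does not discard tmpdir,
        # mirroring CurtinEnvironment.join_mount_path above
        return os.path.join(self.tmpdir, 'target', abs_path.lstrip('/'))

    def __exit__(self, etype, value, traceback):
        shutil.rmtree(self.tmpdir)


with TempWorkspace() as ws:
    assert ws.join('/home') == os.path.join(ws.tmpdir, 'target', 'home')
    saved = ws.tmpdir
assert not os.path.exists(saved)  # workspace removed on exit
```

The real class must additionally unmount everything below the tmpdir (deepest mountpoints first) before rmtree, since a populated target tree has live mounts that rmtree cannot cross.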
2946 | === added file 'tests/storagetests/test_bcache.py' |
2947 | --- tests/storagetests/test_bcache.py 1970-01-01 00:00:00 +0000 |
2948 | +++ tests/storagetests/test_bcache.py 2016-09-15 18:06:47 +0000 |
2949 | @@ -0,0 +1,17 @@ |
2950 | +from . import BaseStorageTest |
2951 | +from .verifiers import (BasicTests, BcacheTests) |
2952 | +from nose_parameterized import (parameterized, param) |
2953 | + |
2954 | +BCACHE_TESTS = [ |
2955 | + param('simple bcache', 'bcache_basic.yaml'), |
2956 | + param('bcache with shared cache dev', 'bcache_shared_cache.yaml'), |
2957 | + param('two separate caches', 'bcache_double.yaml'), |
2958 | +] |
2959 | + |
2960 | + |
2961 | +class TestBcache(BaseStorageTest, BasicTests, BcacheTests): |
2962 | + __test__ = True |
2963 | + |
2964 | + @parameterized.expand(BCACHE_TESTS) |
2965 | + def test_bcache(self, _, conf_file, with_preserve=False): |
2966 | + self._config_tester(conf_file, with_preserve=with_preserve) |
2967 | |
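test_bcache.py drives one test body across several yaml configs via nose_parameterized.expand. Where that dependency is unavailable, the stdlib subTest (Python 3.4+) gives roughly the same per-config reporting; a sketch under that assumption (CASES and TestPattern are illustrative stand-ins):

```python
import unittest

CASES = [
    ('simple bcache', 'bcache_basic.yaml'),
    ('shared cache dev', 'bcache_shared_cache.yaml'),
    ('two separate caches', 'bcache_double.yaml'),
]


class TestPattern(unittest.TestCase):
    # one logical check per yaml file, reported separately via subTest
    def test_bcache_configs(self):
        for name, conf_file in CASES:
            with self.subTest(name=name, conf=conf_file):
                # stand-in for self._config_tester(conf_file)
                self.assertTrue(conf_file.endswith('.yaml'))


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPattern)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Unlike parameterized.expand, subTest keeps a single test method, so a failing config does not stop the remaining configs from running within the same method.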
2968 | === added file 'tests/storagetests/test_clear_holders.py' |
2969 | --- tests/storagetests/test_clear_holders.py 1970-01-01 00:00:00 +0000 |
2970 | +++ tests/storagetests/test_clear_holders.py 2016-09-15 18:06:47 +0000 |
2971 | @@ -0,0 +1,105 @@ |
2972 | +from . import (BaseStorageTest, CurtinEnvironment) |
2973 | +from curtin.block import clear_holders |
2974 | +from nose_parameterized import (parameterized, param) |
2975 | + |
2976 | +CLEAR_HOLDERS_TESTS = [ |
2977 | + param('basic gpt', 'gpt_simple.yaml'), |
2978 | + param('dos logical', 'logical.yaml'), |
2979 | + param('simple raid', 'mdadm.yaml', |
2980 | + expected_holders={'disk1p2': {'md0'}, 'disk1p3': {'md0'}}), |
2981 | + param('simple lvm', 'lvm.yaml', |
2982 | + expected_holders={'disk1p2': {'dm-0'}, 'disk1p3': {'dm-0'}}), |
2983 | + param('lvm multiple volgroups', 'lvm_multiple_vg.yaml', |
2984 | + expected_holders={'disk1p1': {'dm-0'}, |
2985 | + 'disk1p2': {'dm-0'}, |
2986 | + 'disk2': {'dm-1'}, |
2987 | + 'disk3': {'dm-1'}}), |
2988 | + param('lvm bug 1592962 dash in name', 'lvm_with_dash.yaml', |
2989 | + expected_holders={'disk1p2': {'dm-0', 'dm-1'}}), |
2990 | + param('lvm bug 1588875 mult lvols on pvol', 'lvm_mult_lvols_on_pvol.yaml', |
2991 | + expected_holders={'disk1p2': {'dm-0', 'dm-1'}, |
2992 | + 'disk1p3': {'dm-1'}, |
2993 | + 'disk2': {'dm-1'}}), |
2994 | + param('simple bcache', 'bcache_basic.yaml', |
2995 | + expected_holders={'disk1p2': {'bcache0'}, 'disk2p1': {'bcache0'}}), |
2996 | + param('two separate bcaches', 'bcache_double.yaml', |
2997 | + expected_holders={'disk1p2': {'bcache0'}, |
2998 | + 'disk1p3': {'bcache1'}, |
2999 | + 'disk2p1': {'bcache0'}, |
3000 | + 'disk3': {'bcache1'}}), |
3001 | + param('shared cache device bcache', 'bcache_shared_cache.yaml', |
3002 | + expected_holders={'disk1p2': {'bcache0'}, |
3003 | + 'disk1p3': {'bcache1'}, |
3004 | + 'disk2p1': {'bcache0', 'bcache1'}}), |
3005 | + param('bcache on mdadm', 'mdadm_bcache.yaml', |
3006 | + expected_holders={'disk1p2': {'md0'}, |
3007 | + 'disk1p3': {'md0'}, |
3008 | + 'disk1p4': {'md0'}, |
3009 | + 'disk1p5': {'bcache0', 'bcache1'}, |
3010 | + 'disk1p6': {'bcache1'}, |
3011 | + 'disk2': {'bcache2'}, |
3012 | + 'disk3p1': {'bcache2'}, |
3013 | + 'md0': {'bcache0'}}), |
3014 | + param('simple cryptsetup', 'crypt_basic.yaml', |
3015 | + expected_holders={'disk1p2': {'dm-0'}}), |
3016 | + param('all in data', 'allindata.yaml', |
3017 | + expected_holders={'sda1': {'md0'}, 'sdb1': {'md0'}, 'sdc1': {'md0'}, |
3018 | + 'sdd1': {'md0'}, 'sda2': {'md1'}, 'sdb2': {'md1'}, |
3019 | + 'sdc2': {'md1'}, 'sdd2': {'md1'}, 'sda3': {'md1'}, |
3020 | + 'sda4': {'md2'}, 'sdb3': {'md2'}, 'sdc3': {'md2'}, |
3021 | + 'sdb4': {'md2'}, 'sdc4': {'md3'}, 'sdd3': {'md3'}, |
3022 | + 'lvmpart3': {'dm-3'}}), |
3023 | +] |
3024 | + |
3025 | + |
3026 | +class TestClearHolders(BaseStorageTest): |
3027 | + __test__ = True |
3028 | + # there is a higher chance of the runner vm encountering a kernel panic |
3029 | + # when running mdadm_bcache.yaml than any of the other bcache tests, so it |
3030 | + # cannot be enabled here |
3031 | + |
3032 | + # mdadm.yaml cannot be enabled because when running many tests in parallel |
3033 | + # it sometimes takes way too long for the raid array to assemble |
3034 | + |
3035 | + # logical.yaml is disabled because of #1610628 |
3036 | + |
3037 | + # allindata.yaml and crypt_basic.yaml are disabled because of #1611452, |
3038 | + # which likely will not be fixed. however, clear_holders does work with |
3039 | + # these files enabled in all situations where curtin is able to create |
3040 | + # crypt devices successfully |
3041 | + disabled_tests = ['mdadm_bcache.yaml', 'mdadm.yaml', 'logical.yaml', |
3042 | + 'allindata.yaml', 'crypt_basic.yaml'] |
3043 | + |
3044 | + @parameterized.expand(CLEAR_HOLDERS_TESTS) |
3045 | + def test_clear(self, _, conf_file, expected_holders={}): |
3046 | + conf = self._get_config(conf_file) |
3047 | + self.log.info('Testing clear_holders on: {}'.format(conf_file)) |
3048 | + |
3049 | + # set up test config |
3050 | + with CurtinEnvironment(): |
3051 | + conf.run() |
3052 | + |
3053 | + # check that holders are as expected |
3054 | + for (dev_id, holders) in expected_holders.items(): |
3055 | + self.log.info('verifying dev with id: %s has holders: %s', |
3056 | + dev_id, holders) |
3057 | + path = conf.path_to(dev_id) |
3058 | + self.assertIsNotNone(path) |
3059 | + found = set(clear_holders.get_holders(path)) |
3060 | + self.assertEqual(found, holders, "Incorrect holders on '{}': {}" |
3061 | + .format(dev_id, found)) |
3062 | + |
3063 | + # run clear_holders on all paths |
3064 | + paths = [conf.path_to(e) for e in conf.find_matching('type', 'disk')] |
3065 | + clear_holders.start_clear_holders_deps() |
3066 | + clear_holders.clear_holders(paths) |
3067 | + |
3068 | + # make sure everything is clear manually |
3069 | + for path in paths: |
3070 | + remaining_holders = clear_holders.get_holders(path) |
3071 | + self.assertEqual(len(remaining_holders), 0, |
3072 | + "Clear holders failed, '{}' still held by: {}" |
3073 | + .format(path, remaining_holders)) |
3074 | + |
3075 | + # make sure that assert_clear works |
3076 | + clear_holders.assert_clear(paths) |
3077 | |
3078 | === added file 'tests/storagetests/test_complex.py' |
3079 | --- tests/storagetests/test_complex.py 1970-01-01 00:00:00 +0000 |
3080 | +++ tests/storagetests/test_complex.py 2016-09-15 18:06:47 +0000 |
3081 | @@ -0,0 +1,17 @@ |
3082 | +from . import BaseStorageTest |
3083 | +from .verifiers import (BasicTests, BcacheTests, LvmTests, RaidTests) |
3084 | +from nose_parameterized import (parameterized, param) |
3085 | + |
3086 | +COMPLEX_TESTS = [ |
3087 | + param('all in data', 'allindata.yaml') |
3088 | +] |
3089 | + |
3090 | + |
3091 | +class TestComplexStorage(BaseStorageTest, BasicTests, BcacheTests, |
3092 | + LvmTests, RaidTests): |
3093 | + __test__ = True |
3094 | + disabled_tests = ['allindata.yaml'] |
3095 | + |
3096 | + @parameterized.expand(COMPLEX_TESTS) |
3097 | + def test_complex(self, _, conf_file, with_preserve=False): |
3098 | + self._config_tester(conf_file, with_preserve=with_preserve) |
3099 | |
3100 | === added file 'tests/storagetests/test_disk_partitions.py' |
3101 | --- tests/storagetests/test_disk_partitions.py 1970-01-01 00:00:00 +0000 |
3102 | +++ tests/storagetests/test_disk_partitions.py 2016-09-15 18:06:47 +0000 |
3103 | @@ -0,0 +1,21 @@ |
3104 | +from . import BaseStorageTest |
3105 | +from .verifiers import (BasicTests) |
3106 | +from nose_parameterized import (parameterized, param) |
3107 | + |
3108 | +BASIC_PARTITIONING_TESTS = [ |
3109 | + param('dos disk only', 'diskonlydos.yaml'), |
3110 | + param('gpt disk only', 'diskonlygpt.yaml'), |
3111 | + param('basic gpt', 'gpt_simple.yaml', with_preserve=True), |
3112 | + param('gpt with bios_grub', 'gpt_boot.yaml', with_preserve=True), |
3113 | + param('basic dos', 'basicdos.yaml', with_preserve=True), |
3114 | + param('dos logical', 'logical.yaml', with_preserve=True), |
3115 | +] |
3116 | + |
3117 | + |
3118 | +class TestBasic(BaseStorageTest, BasicTests): |
3119 | + __test__ = True |
3120 | + disabled_tests = ['logical.yaml'] |
3121 | + |
3122 | + @parameterized.expand(BASIC_PARTITIONING_TESTS) |
3123 | + def test_basic_config(self, _, conf_file, with_preserve=False): |
3124 | + self._config_tester(conf_file, with_preserve=with_preserve) |
3125 | |
3126 | === added file 'tests/storagetests/test_format.py' |
3127 | --- tests/storagetests/test_format.py 1970-01-01 00:00:00 +0000 |
3128 | +++ tests/storagetests/test_format.py 2016-09-15 18:06:47 +0000 |
3129 | @@ -0,0 +1,19 @@ |
3130 | +from . import BaseStorageTest |
3131 | +from .verifiers import BasicTests |
3132 | +from nose_parameterized import (parameterized, param) |
3133 | + |
3134 | +FORMAT_TESTS = [ |
3135 | + param('whole disk ext fs', 'whole_disk_ext.yaml'), |
3136 | + param('whole disk btrfs xfs', 'whole_disk_btrfs_xfs.yaml'), |
3137 | + param('whole disk dos formats', 'whole_disk_fat.yaml'), |
3138 | + param('whole disk swap', 'whole_disk_swap.yaml'), |
3139 | + param('formats on lvm', 'formats_on_lvm.yaml'), |
3140 | +] |
3141 | + |
3142 | + |
3143 | +class TestWholeDiskFormat(BaseStorageTest, BasicTests): |
3144 | + __test__ = True |
3145 | + |
3146 | + @parameterized.expand(FORMAT_TESTS) |
3147 | + def test_whole_disk_formats(self, _, conf_file): |
3148 | + self._config_tester(conf_file) |
3149 | |
3150 | === added file 'tests/storagetests/test_layers_on_mdadm.py' |
3151 | --- tests/storagetests/test_layers_on_mdadm.py 1970-01-01 00:00:00 +0000 |
3152 | +++ tests/storagetests/test_layers_on_mdadm.py 2016-09-15 18:06:47 +0000 |
3153 | @@ -0,0 +1,21 @@ |
3154 | +from . import BaseStorageTest |
3155 | +from .verifiers import (BasicTests, RaidTests, BcacheTests, LvmTests) |
3156 | +from nose_parameterized import (parameterized, param) |
3157 | + |
3158 | +LAYERS_ON_MDADM_TESTS = [ |
3159 | + param('bcache on mdadm', 'mdadm_bcache.yaml'), |
3160 | + param('lvm on mdadm', 'mdadm_lvm.yaml'), |
3161 | +] |
3162 | + |
3163 | + |
3164 | +class TestLayersOnMdadm(BaseStorageTest, BasicTests, RaidTests, |
3165 | + BcacheTests, LvmTests): |
3166 | + __test__ = True |
3167 | + # about 50% of the time running md_check on these devices fails with: |
3168 | + # md_check failed with exception Array syncing, not idle state: /dev/md0 |
3169 | + # (I think when the next storage layer is created it causes a resync) |
3170 | + disabled_tests = ['mdadm_bcache.yaml', 'mdadm_lvm.yaml'] |
3171 | + |
3172 | + @parameterized.expand(LAYERS_ON_MDADM_TESTS) |
3173 | + def test_layers_on_mdadm(self, _, conf_file, with_preserve=False): |
3174 | + self._config_tester(conf_file, with_preserve=with_preserve) |
3175 | |
3176 | === added file 'tests/storagetests/test_lvm.py' |
3177 | --- tests/storagetests/test_lvm.py 1970-01-01 00:00:00 +0000 |
3178 | +++ tests/storagetests/test_lvm.py 2016-09-15 18:06:47 +0000 |
3179 | @@ -0,0 +1,16 @@ |
3180 | +from . import BaseStorageTest |
3181 | +from .verifiers import (BasicTests, LvmTests) |
3182 | +from nose_parameterized import (parameterized, param) |
3183 | + |
3184 | +LVM_TESTS = [ |
3185 | + param('simple lvm', 'lvm.yaml', with_preserve=True), |
3186 | + param('multiple volgroups', 'lvm_multiple_vg.yaml', with_preserve=True), |
3187 | +] |
3188 | + |
3189 | + |
3190 | +class TestLVM(BaseStorageTest, BasicTests, LvmTests): |
3191 | + __test__ = True |
3192 | + |
3193 | + @parameterized.expand(LVM_TESTS) |
3194 | + def test_lvm(self, _, conf_file, with_preserve=False): |
3195 | + self._config_tester(conf_file, with_preserve=with_preserve) |
3196 | |
3197 | === added file 'tests/storagetests/test_raid.py' |
3198 | --- tests/storagetests/test_raid.py 1970-01-01 00:00:00 +0000 |
3199 | +++ tests/storagetests/test_raid.py 2016-09-15 18:06:47 +0000 |
3200 | @@ -0,0 +1,19 @@ |
3201 | +from . import BaseStorageTest |
3202 | +from .verifiers import (BasicTests, RaidTests) |
3203 | +from nose_parameterized import (parameterized, param) |
3204 | + |
3205 | +MDADM_TESTS = [ |
3206 | + param('simple mdadm', 'mdadm.yaml', with_preserve=True), |
3207 | +] |
3208 | + |
3209 | + |
3210 | +class TestRaid(BaseStorageTest, BasicTests, RaidTests): |
3211 | + __test__ = True |
3212 | + # there is a race waiting for the mdadm device to start up and shut down |
3213 | + # when the host system is under stress from tests running in parallel |
3214 | + # that makes it impossible to enable this file here |
3215 | + disabled_tests = ['mdadm.yaml'] |
3216 | + |
3217 | + @parameterized.expand(MDADM_TESTS) |
3218 | + def test_mdadm(self, _, conf_file, with_preserve=False): |
3219 | + self._config_tester(conf_file, with_preserve=with_preserve) |
3220 | |
3221 | === added file 'tests/storagetests/verifiers.py' |
3222 | --- tests/storagetests/verifiers.py 1970-01-01 00:00:00 +0000 |
3223 | +++ tests/storagetests/verifiers.py 2016-09-15 18:06:47 +0000 |
3224 | @@ -0,0 +1,222 @@ |
3225 | +import os |
3226 | +import time |
3227 | +from curtin import (block, util) |
3228 | + |
3229 | + |
3230 | +class BasicTests(object): |
3230 | + """Verifier for disk partitions, formats and mounts""" |
3232 | + __test__ = False |
3233 | + |
3234 | + # test functions shared between most configs: |
3235 | + def _test_ptable(self, conf, env): |
3236 | + """test that the right partition table has been set up""" |
3237 | + for conf_entry in conf.find_matching('type', 'disk'): |
3238 | + needed_ptable = conf_entry.get('ptable') |
3239 | + dev_path = conf.path_to(conf_entry) |
3240 | + self.assertIsNotNone(dev_path) |
3241 | + self.assertTrue(block.is_valid_device(dev_path)) |
3242 | + try: |
3243 | + current_ptable = block.dev_blkid( |
3244 | + dev_path, cache=False)['PTTYPE'] |
3245 | + except (util.ProcessExecutionError, KeyError): |
3246 | + # blkid did not get info or failed to run, will be fixed when |
3247 | + # lp:~wesley-wiedenmeier/curtin/blkid lands |
3248 | + continue |
3249 | + |
3250 | + # if ptable not in config then don't need one to be created |
3251 | + if needed_ptable is None: |
3252 | + continue |
3253 | + |
3254 | + if needed_ptable == 'gpt': |
3255 | + self.assertEqual(current_ptable, 'gpt') |
3256 | + elif needed_ptable in ['dos', 'msdos']: |
3257 | + self.assertIn(current_ptable, ['dos', 'msdos']) |
3258 | + |
3259 | + def _test_partitions(self, conf, env): |
3260 | + """make sure all required partitions exist and are the right size""" |
3261 | + for conf_entry in conf.find_matching('type', 'partition'): |
3262 | + # Do not try to verify size of extended partitions, as releases |
3263 | + # after trusty will report 0 in /sys/class/block/*/size for the |
3264 | + # extended partition |
3265 | + if conf_entry.get('flag') == 'extended': |
3266 | + continue |
3267 | + |
3268 | + # check it was created |
3269 | + dev_path = conf.path_to(conf_entry) |
3270 | + self.assertIsNotNone(dev_path) |
3271 | + self.assertTrue(block.is_valid_device(dev_path)) |
3272 | + |
3273 | + # check size |
3274 | + # if none specified, dont try to check |
3275 | + if conf_entry.get('size') is None: |
3276 | + continue |
3277 | + |
3278 | + sysfs_size_path = os.path.join( |
3279 | + '/sys/class/block', conf.kname_of(conf_entry), 'size') |
3280 | + part_size_blocks = int(util.load_file(sysfs_size_path)) |
3281 | + # sysfs size is always reported in 512 byte blocks even if the |
3282 | + # underlying disk is advanced format, so do not load |
3283 | + # queue/logical_blocks_size to get multiplier |
3284 | + part_size = 512 * part_size_blocks |
3285 | + req_size = util.human2bytes(conf_entry.get('size')) |
3286 | + self.assertEqual(req_size, part_size) |
3287 | + |
3288 | + def _test_format(self, conf, env): |
3289 | + """make sure formats created properly""" |
3290 | + for conf_entry in conf.find_matching('type', 'format'): |
3291 | + req_fs = conf_entry.get('fstype') |
3292 | + volume_conf = conf.with_id(conf_entry.get('volume')) |
3293 | + |
3294 | + # get current fs |
3295 | + lsblk_info = block._lsblock().get(conf.kname_of(volume_conf)) |
3296 | + if lsblk_info is None: |
3297 | + # skip for now until better detection written |
3298 | + continue |
3299 | + current_fs = lsblk_info.get('FSTYPE') |
3300 | + if current_fs is None or len(current_fs) == 0: |
3301 | + # skip for now until better detection written |
3302 | + continue |
3303 | + |
3304 | + # Blkid calls fat formats vfat and lsblk doesn't differentiate |
3305 | + # between fatsize, so if needed format is fat, only verify that the |
3306 | + # actual format is some kind of fat |
3307 | + if req_fs.startswith('fat'): |
3308 | + self.assertIn('fat', current_fs) |
3309 | + else: |
3310 | + self.assertEqual(req_fs, current_fs) |
3311 | + |
3312 | + def _test_mount(self, conf, env): |
3313 | + """make sure fstab is okay and mounts are in right place""" |
3314 | + mount_configs = conf.find_matching('type', 'mount') |
3315 | + # do not try to verify that fstab exists if no mount points |
3316 | + if len(mount_configs) == 0: |
3317 | + return |
3318 | + |
3319 | + # verify fstab exists and load data |
3320 | + fstab_path = os.path.join(env.tmpdir, 'fstab') |
3321 | + self.assertTrue(os.path.exists(env.tmpdir)) |
3322 | + self.assertTrue(os.path.exists(fstab_path)) |
3323 | + fstab = util.load_file(fstab_path).splitlines() |
3324 | + |
3325 | + # check mountpoints |
3326 | + for conf_entry in mount_configs: |
3327 | + format_conf = conf.with_id(conf_entry.get('device')) |
3328 | + dev_path = conf.path_to(format_conf.get('volume')) |
3329 | + self.assertTrue(os.path.exists(dev_path)) |
3330 | + |
3331 | + mountpoint = conf_entry.get('path') |
3332 | + if format_conf.get('fstype') == 'swap': |
3333 | + mountpoint = 'none' |
3334 | + |
3335 | + ident = None |
3336 | + if (conf.with_id(format_conf.get('volume')).get('type') in |
3337 | + ['raid', 'bcache', 'disk', 'lvm_partition']): |
3338 | + ident = dev_path |
3339 | + else: |
3340 | + # anything else we mount by uuid |
3341 | + dev_uuid = block.get_volume_uuid(dev_path) |
3342 | + self.assertIsNotNone(dev_uuid) |
3343 | + ident = 'UUID=' + dev_uuid |
3344 | + |
3345 | + try: |
3346 | + fstab_entry = next(i.split() for i in fstab if ident in i) |
3347 | + except StopIteration: |
3348 | + self.fail('fstab missing {}'.format(conf_entry['id'])) |
3349 | + self.assertEqual(fstab_entry[1], mountpoint, |
3350 | + 'Fstab entry for {} incorrect' |
3351 | + .format(conf_entry['id'])) |
3352 | + |
3353 | + |
3354 | +class LvmTests(object): |
3355 | + """Verifier for lvm volume groups and lvm logical partitions""" |
3356 | + __test__ = False |
3357 | + |
3358 | + def _test_lvm_vg(self, conf, env): |
3359 | + """ensure that all volgroups have required physical volumes""" |
3360 | + for volgroup in conf.find_matching('type', 'lvm_volgroup'): |
3361 | + req_paths = set(conf.path_to(dev) for dev in |
3362 | + volgroup.get('devices')) |
3363 | + (out, _) = util.subp(['pvdisplay', '-C', '--separator', '=', '-o', |
3364 | + 'vg_name,pv_name', '--noheadings'], |
3365 | + capture=True) |
3366 | + vg_name = volgroup.get('name') |
3367 | + self.assertIsNotNone(vg_name) |
3368 | + pv_paths = set(l.split('=')[-1] for l in out.strip().splitlines() |
3369 | + if vg_name in l) |
3370 | + self.assertEqual(pv_paths, req_paths, |
3371 | + 'lvm volgroup {} requires devs: {}, only has {}' |
3372 | + .format(vg_name, req_paths, pv_paths)) |
3373 | + |
3374 | + def _test_lvm_lv(self, conf, env): |
3375 | + """ensure that all needed logical volumes are present""" |
3376 | + for lvm_part in conf.find_matching('type', 'lvm_partition'): |
3377 | + lv_name = lvm_part.get('name') |
3378 | + vg_name = conf.with_id(lvm_part.get('volgroup')).get('name') |
3379 | + self.assertIsNotNone(lv_name) |
3380 | + self.assertIsNotNone(vg_name) |
3381 | + (out, _) = util.subp(['lvdisplay', '-C', '--separator', '=', '-o', |
3382 | + 'lv_name,vg_name', '--noheadings'], |
3383 | + capture=True) |
3384 | + out = out.strip() |
3385 | + |
3386 | + for line in out.splitlines(): |
3387 | + line = line.strip() |
3388 | + (line_lv, line_vg) = line.split('=') |
3389 | + if line_lv == lv_name and line_vg == vg_name: |
3390 | + break |
3391 | + else: |
3392 | + raise AssertionError("lvm lv '{}' missing in volgroup '{}'" |
3393 | + .format(lv_name, vg_name)) |
3394 | + |
3395 | + |
3396 | +class RaidTests(object): |
3397 | + """Verifier for mdadm configuration""" |
3398 | + __test__ = False |
3399 | + |
3400 | + def _test_mdadm(self, conf, env): |
3401 | + """ensure all mdadm devices present and using correct devices""" |
3402 | + for conf_entry in conf.find_matching('type', 'raid'): |
3403 | + req_devs = list(conf.path_to(i) for i in conf_entry.get('devices')) |
3404 | + req_spare = list(conf.path_to(i) for i in |
3405 | + conf_entry.get('spare_devices')) |
3406 | + raidlevel = conf_entry.get('raidlevel') |
3407 | + md_path = conf.path_to(conf_entry) |
3408 | + # FIXME: this is a bit of a hack, should later be switched to use |
3409 | + # mdadm.md_block_until_sync() |
3410 | + error = None |
3411 | + for _ in range(20): |
3412 | + time.sleep(1) |
3413 | + try: |
3414 | + block.mdadm.md_check(md_path, raidlevel, req_devs, |
3415 | + req_spare) |
3416 | + break |
3417 | + except ValueError as _error: |
3418 | + error = _error |
3419 | + continue |
3420 | + else: |
3421 | + # md_check failed after 20 seconds, fail test |
3422 | + self.fail('md_check failed with exception {}'.format(error)) |
3423 | + |
3424 | + |
3425 | +class BcacheTests(object): |
3426 | + """Verifier for bcache configuration""" |
3427 | + __test__ = False |
3428 | + |
3429 | + def _test_bcache(self, conf, env): |
3430 | + """ensure that correct cache and backing devices are used""" |
3431 | + for conf_entry in conf.find_matching('type', 'bcache'): |
3432 | + cache_kname = conf.kname_of(conf_entry['cache_device']) |
3433 | + backing_kname = conf.kname_of(conf_entry['backing_device']) |
3434 | + bcache_path = conf.path_to(conf_entry) |
3435 | + bcache_kname = conf.kname_of(conf_entry) |
3436 | + self.assertTrue(block.is_valid_device(bcache_path)) |
3437 | + cache_slaves = os.listdir( |
3438 | + os.path.join('/sys/block', bcache_kname, 'slaves')) |
3439 | + self.assertEqual(set((cache_kname, backing_kname)), |
3440 | + set(cache_slaves)) |
3441 | + |
3442 | + # since registering and then unregistering a bcache device too |
3443 | + # quickly can cause a kernel panic, sleep for 1 second to make it |
3444 | + # less likely that a kernel panic will occur when clear_holders |
3445 | + # runs on the next storage test |
3446 | + time.sleep(1) |
3447 | |
3448 | === modified file 'tests/unittests/test_reporter.py' |
3449 | --- tests/unittests/test_reporter.py 2016-06-14 13:54:46 +0000 |
3450 | +++ tests/unittests/test_reporter.py 2016-09-15 18:06:47 +0000 |
3451 | @@ -147,8 +147,7 @@ |
3452 | event_dict = self._get_reported_event(mock_report_event).as_dict() |
3453 | self.assertEqual(event_dict.get('name'), self.ev_name) |
3454 | self.assertEqual(event_dict.get('level'), 'INFO') |
3455 | - self.assertEqual(event_dict.get('description'), |
3456 | - 'started: ' + self.ev_desc) |
3457 | + self.assertEqual(event_dict.get('description'), self.ev_desc) |
3458 | self.assertEqual(event_dict.get('event_type'), events.START_EVENT_TYPE) |
3459 | |
3460 | @patch('curtin.reporter.events.report_event') |
3461 | @@ -157,8 +156,7 @@ |
3462 | event = self._get_reported_event(mock_report_event) |
3463 | self.assertIsInstance(event, events.FinishReportingEvent) |
3464 | event_dict = event.as_dict() |
3465 | - self.assertEqual(event_dict.get('description'), |
3466 | - 'finished: ' + self.ev_desc) |
3467 | + self.assertEqual(event_dict.get('description'), self.ev_desc) |
3468 | |
3469 | @patch('curtin.reporter.events.report_event') |
3470 | def test_report_finished_event_levelset(self, mock_report_event): |
3471 | @@ -166,15 +164,13 @@ |
3472 | result=events.status.FAIL) |
3473 | event_dict = self._get_reported_event(mock_report_event).as_dict() |
3474 | self.assertEqual(event_dict.get('level'), 'ERROR') |
3475 | - self.assertEqual(event_dict.get('description'), |
3476 | - 'failed: ' + self.ev_desc) |
3477 | + self.assertEqual(event_dict.get('description'), self.ev_desc) |
3478 | |
3479 | events.report_finish_event(self.ev_name, self.ev_desc, |
3480 | result=events.status.WARN) |
3481 | event_dict = self._get_reported_event(mock_report_event).as_dict() |
3482 | self.assertEqual(event_dict.get('level'), 'WARN') |
3483 | - self.assertEqual(event_dict.get('description'), |
3484 | - 'failed: ' + self.ev_desc) |
3485 | + self.assertEqual(event_dict.get('description'), self.ev_desc) |
3486 | |
3487 | @patch('curtin.reporter.events.report_event') |
3488 | def test_report_finished_post_files(self, mock_report_event): |
3489 | |
3490 | === modified file 'tests/vmtests/__init__.py' |
3491 | --- tests/vmtests/__init__.py 2016-08-05 12:22:55 +0000 |
3492 | +++ tests/vmtests/__init__.py 2016-09-15 18:06:47 +0000 |
3493 | @@ -15,6 +15,7 @@ |
3494 | import curtin.net as curtin_net |
3495 | import curtin.util as util |
3496 | |
3497 | +from tools.report_webhook_logger import CaptureReporting |
3498 | from curtin.commands.install import INSTALL_PASS_MSG |
3499 | |
3500 | from .image_sync import query as imagesync_query |
3501 | @@ -306,6 +307,26 @@ |
3502 | self.success_file = os.path.join(self.logs, "success") |
3503 | self.errors_file = os.path.join(self.logs, "errors.json") |
3504 | |
3505 | + # write userdata |
3506 | + self.write_userdata(user_data) |
3507 | + |
3508 | + # create target disk |
3509 | + logger.debug('Creating target disk') |
3510 | + self.target_disk = os.path.join(self.disks, "install_disk.img") |
3511 | + subprocess.check_call(["qemu-img", "create", "-f", TARGET_IMAGE_FORMAT, |
3512 | + self.target_disk, "10G"], |
3513 | + stdout=DEVNULL, stderr=subprocess.STDOUT) |
3514 | + |
3515 | + # create output disk, mount ro |
3516 | + logger.debug('Creating output disk') |
3517 | + self.output_disk = os.path.join(self.boot, OUTPUT_DISK_NAME) |
3518 | + subprocess.check_call(["qemu-img", "create", "-f", TARGET_IMAGE_FORMAT, |
3519 | + self.output_disk, "10M"], |
3520 | + stdout=DEVNULL, stderr=subprocess.STDOUT) |
3521 | + subprocess.check_call(["mkfs.ext2", "-F", self.output_disk], |
3522 | + stdout=DEVNULL, stderr=subprocess.STDOUT) |
3523 | + |
3524 | + def write_userdata(self, user_data): |
3525 | # write cloud-init for installed system |
3526 | meta_data_file = os.path.join(self.install, "meta-data") |
3527 | with open(meta_data_file, "w") as fp: |
3528 | @@ -314,13 +335,6 @@ |
3529 | with open(user_data_file, "w") as fp: |
3530 | fp.write(user_data) |
3531 | |
3532 | - # create target disk |
3533 | - logger.debug('Creating target disk') |
3534 | - self.target_disk = os.path.join(self.disks, "install_disk.img") |
3535 | - subprocess.check_call(["qemu-img", "create", "-f", TARGET_IMAGE_FORMAT, |
3536 | - self.target_disk, "10G"], |
3537 | - stdout=DEVNULL, stderr=subprocess.STDOUT) |
3538 | - |
3539 | # create seed.img for installed system's cloud init |
3540 | logger.debug('Creating seed disk') |
3541 | self.seed_disk = os.path.join(self.boot, "seed.img") |
3542 | @@ -328,15 +342,6 @@ |
3543 | user_data_file, meta_data_file], |
3544 | stdout=DEVNULL, stderr=subprocess.STDOUT) |
3545 | |
3546 | - # create output disk, mount ro |
3547 | - logger.debug('Creating output disk') |
3548 | - self.output_disk = os.path.join(self.boot, OUTPUT_DISK_NAME) |
3549 | - subprocess.check_call(["qemu-img", "create", "-f", TARGET_IMAGE_FORMAT, |
3550 | - self.output_disk, "10M"], |
3551 | - stdout=DEVNULL, stderr=subprocess.STDOUT) |
3552 | - subprocess.check_call(["mkfs.ext2", "-F", self.output_disk], |
3553 | - stdout=DEVNULL, stderr=subprocess.STDOUT) |
3554 | - |
3555 | def collect_output(self): |
3556 | logger.debug('extracting output disk') |
3557 | subprocess.check_call(['tar', '-C', self.collect, '-xf', |
3558 | @@ -412,7 +417,8 @@ |
3559 | dowait = "--dowait" |
3560 | |
3561 | # create launch cmd |
3562 | - cmd = ["tools/launch", "--arch=" + cls.arch, "-v", dowait] |
3563 | + cmd = ["tools/launch", "--arch=" + cls.arch, "-v", dowait, |
3564 | + "--mem=1024"] |
3565 | if not cls.interactive: |
3566 | cmd.extend(["--silent", "--power=off"]) |
3567 | |
3568 | @@ -516,6 +522,30 @@ |
3569 | fp.write(json.dumps({'grub': {'update_nvram': True}})) |
3570 | configs.append(grub_config) |
3571 | |
3572 | + # set reporting logger |
3573 | + cls.reporting_log = os.path.join(cls.td.logs, 'webhooks-events.json') |
3574 | + reporting_logger = CaptureReporting(cls.reporting_log) |
3575 | + |
3576 | + # write reporting config |
3577 | + reporting_config = os.path.join(cls.td.install, 'reporting.cfg') |
3578 | + localhost_url = ('http://' + get_lan_ip() + |
3579 | + ':{:d}/'.format(reporting_logger.port)) |
3580 | + with open(reporting_config, 'w') as fp: |
3581 | + fp.write(json.dumps({ |
3582 | + 'install': { |
3583 | + 'log_file': '/tmp/install.log', |
3584 | + 'post_files': ['/tmp/install.log'], |
3585 | + }, |
3586 | + 'reporting': { |
3587 | + 'maas': { |
3588 | + 'level': 'DEBUG', |
3589 | + 'type': 'webhook', |
3590 | + 'endpoint': localhost_url, |
3591 | + }, |
3592 | + }, |
3593 | + })) |
3594 | + configs.append(reporting_config) |
3595 | + |
3596 | if cls.multipath: |
3597 | disks = disks * cls.multipath_num_paths |
3598 | |
3599 | @@ -530,9 +560,10 @@ |
3600 | logger.info('Running curtin installer: {}'.format(cls.install_log)) |
3601 | try: |
3602 | with open(lout_path, "wb") as fpout: |
3603 | - cls.boot_system(cmd, timeout=cls.install_timeout, |
3604 | - console_log=cls.install_log, proc_out=fpout, |
3605 | - purpose="install") |
3606 | + with reporting_logger: |
3607 | + cls.boot_system(cmd, timeout=cls.install_timeout, |
3608 | + console_log=cls.install_log, |
3609 | + proc_out=fpout, purpose="install") |
3610 | except TimeoutExpired: |
3611 | logger.error('Curtin installer failed with timeout') |
3612 | cls.tearDownClass() |
3613 | @@ -800,6 +831,25 @@ |
3614 | self.assertIn(link, contents) |
3615 | self.assertIn(diskname, contents) |
3616 | |
3617 | + def test_reporting_data(self): |
3618 | + with open(self.reporting_log, 'r') as fp: |
3619 | + data = json.load(fp) |
3620 | + self.assertTrue(len(data) > 0) |
3621 | + first_event = data[0] |
3622 | + self.assertEqual(first_event['event_type'], 'start') |
3623 | + next_event = data[1] |
3624 | + # regression check: consecutive events must not share a timestamp |
3625 | + self.assertNotEqual(first_event['timestamp'], next_event['timestamp']) |
3626 | + final_event = data[-1] |
3627 | + self.assertEqual(final_event['event_type'], 'finish') |
3628 | + self.assertEqual(final_event['name'], 'cmd-install') |
3629 | + # check for install log |
3630 | + [events_with_files] = [ev for ev in data if 'files' in ev] |
3631 | + self.assertIn('files', events_with_files) |
3632 | + [files] = events_with_files.get('files', []) |
3633 | + self.assertIn('path', files) |
3634 | + self.assertEqual('/tmp/install.log', files.get('path', '')) |
3635 | + |
3636 | def test_interfacesd_eth0_removed(self): |
3637 | """ Check that curtin has removed /etc/network/interfaces.d/eth0.cfg |
3638 | by examining the output of a find /etc/network > find_interfaces.d |
3639 | @@ -914,6 +964,9 @@ |
3640 | def test_interfacesd_eth0_removed(self): |
3641 | pass |
3642 | |
3643 | + def test_reporting_data(self): |
3644 | + pass |
3645 | + |
3646 | def _maybe_raise(self, exc): |
3647 | if self.allow_test_fails: |
3648 | raise exc |
3649 | @@ -1011,6 +1064,9 @@ |
3650 | collect_post = textwrap.dedent( |
3651 | 'tar -C "%s" -cf "%s" .' % (output_dir, output_device)) |
3652 | |
3653 | + # copy /root for curtin config and install.log |
3654 | + copy_rootdir = textwrap.dedent("cp -a /root " + output_dir) |
3655 | + |
3656 | # failsafe poweroff runs on precise only, where power_state does |
3657 | # not exist. |
3658 | precise_poweroff = textwrap.dedent("""#!/bin/sh -x |
3659 | @@ -1018,8 +1074,8 @@ |
3660 | shutdown -P now "Shutting down on precise" |
3661 | """) |
3662 | |
3663 | - scripts = ([collect_prep] + collect_scripts + [collect_post] + |
3664 | - [precise_poweroff]) |
3665 | + scripts = ([collect_prep] + [copy_rootdir] + collect_scripts + |
3666 | + [collect_post] + [precise_poweroff]) |
3667 | |
3668 | for part in scripts: |
3669 | if not part.startswith("#!"): |
3670 | @@ -1093,5 +1149,14 @@ |
3671 | return ret |
3672 | |
3673 | |
3674 | +def get_lan_ip(): |
3675 | + out = subprocess.check_output(['ip', 'addr']) |
3676 | + out = out.decode() |
3677 | + line = next(l for l in out.splitlines() if 'inet' in l and 'global' in l) |
3678 | + addr = line.split()[1] |
3679 | + if '/' in addr: |
3680 | + addr = addr[:addr.index('/')] |
3681 | + return addr |
3682 | + |
3683 | apply_keep_settings() |
3684 | logger = _initialize_logging() |
3685 | |
3686 | === modified file 'tests/vmtests/image_sync.py' |
3687 | --- tests/vmtests/image_sync.py 2016-06-20 17:10:03 +0000 |
3688 | +++ tests/vmtests/image_sync.py 2016-09-15 18:06:47 +0000 |
3689 | @@ -400,7 +400,7 @@ |
3690 | verbosity=vlevel) |
3691 | try: |
3692 | if args.output_format == FORMAT_JSON: |
3693 | - print(util.json_dumps(results).decode()) |
3694 | + print(util.json_dumps(results)) |
3695 | else: |
3696 | output = [] |
3697 | for item in results: |
3698 | |
3699 | === added file 'tools/__init__.py' |
3700 | === added file 'tools/curtin-log-print' |
3701 | --- tools/curtin-log-print 1970-01-01 00:00:00 +0000 |
3702 | +++ tools/curtin-log-print 2016-09-15 18:06:47 +0000 |
3703 | @@ -0,0 +1,152 @@ |
3704 | +#!/usr/bin/env python3 |
3705 | +# Copyright (C) 2016 Canonical Ltd. |
3706 | +# |
3707 | +# Author: Ryan Harper <ryan.harper@canonical.com> |
3708 | +# |
3709 | +# Curtin is free software: you can redistribute it and/or modify it under |
3710 | +# the terms of the GNU Affero General Public License as published by the |
3711 | +# Free Software Foundation, either version 3 of the License, or (at your |
3712 | +# option) any later version. |
3713 | +# |
3714 | +# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY |
3715 | +# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS |
3716 | +# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for |
3717 | +# more details. |
3718 | +# |
3719 | +# You should have received a copy of the GNU Affero General Public License |
3720 | +# along with Curtin. If not, see <http://www.gnu.org/licenses/>. |
3721 | +import argparse |
3722 | +import datetime |
3723 | +import json |
3724 | +import sys |
3725 | +import base64 |
3726 | + |
3727 | + |
3728 | +# An event: |
3729 | +''' |
3730 | +{ |
3731 | + "description": "executing late commands", |
3732 | + "event_type": "start", |
3733 | + "level": "INFO", |
3734 | + "name": "cmd-install/stage-late", |
3735 | + "origin": "curtin", |
3736 | + "timestamp": 1461164249.1590767, |
3737 | +}, |
3738 | + |
3739 | + { |
3740 | + "description": "executing late commands", |
3741 | + "event_type": "finish", |
3742 | + "level": "INFO", |
3743 | + "name": "cmd-install/stage-late", |
3744 | + "origin": "curtin", |
3745 | + "result": "SUCCESS", |
3746 | + "timestamp": 1461164249.1590767 |
3747 | + } |
3748 | + |
3749 | +''' |
3750 | +format_key = { |
3751 | + '%d': 'delta', |
3752 | + '%D': 'description', |
3753 | + '%e': 'event_type', |
3754 | + '%l': 'level', |
3755 | + '%n': 'name', |
3756 | + '%o': 'origin', |
3757 | + '%r': 'result', |
3758 | + '%t': 'timestamp', |
3759 | +} |
3760 | + |
3761 | +formatting_help = " ".join(["{}: {}".format(k.replace('%', '%%'), v) |
3762 | + for k, v in format_key.items()]) |
3763 | + |
3764 | + |
3765 | +def format_record(msg, event): |
3766 | + for i, j in format_key.items(): |
3767 | + if i in msg: |
3768 | + msg = msg.replace(i, "{%s}" % j) |
3769 | + return msg.format(**event) |
3770 | + |
3771 | + |
3772 | +def dump_event_files(event): |
3773 | + content = {k: v for k, v in event.items() if k not in ['content']} |
3774 | + files = content['files'] |
3775 | + for f in files: |
3776 | + fname = f['path'] |
3777 | + fcontent = base64.b64decode(f['content']).decode('ascii') |
3778 | + print("%s:\n%s" % (fname, fcontent)) |
3779 | + |
3780 | + |
3781 | +def generate_records(j, blame_sort=False, print_format="%d seconds in %D", |
3782 | + dump_files=False): |
3783 | + records = [] |
3784 | + timestamps = {} |
3785 | + total_time = 0 |
3786 | + for event in j: |
3787 | + name = event.get('name') |
3788 | + if 'files' in event: |
3789 | + print('Event with files: %s %s @ %s' % (event['event_type'], |
3790 | + event['name'], |
3791 | + event['timestamp'])) |
3792 | + if dump_files: |
3793 | + dump_event_files(event) |
3794 | + |
3795 | + if event['event_type'] == 'start': |
3796 | + timestamps[name] = {'start': event['timestamp']} |
3797 | + else: |
3798 | + timestamps[name].update({'finish': event['timestamp']}) |
3799 | + start = datetime.datetime.utcfromtimestamp( |
3800 | + timestamps[name]['start']) |
3801 | + end = datetime.datetime.utcfromtimestamp( |
3802 | + timestamps[name]['finish']) |
3803 | + delta = end - start |
3804 | + total_time += delta.total_seconds() |
3805 | + event['delta'] = "{:08.5f}".format(delta.total_seconds()) |
3806 | + records.append(format_record(print_format, event)) |
3807 | + |
3808 | + records.append(' ---\n%3.5f seconds total time' % total_time) |
3809 | + return records |
3810 | + |
3811 | + |
3812 | +def main(): |
3813 | + parser = argparse.ArgumentParser( |
3814 | + description='curtin-print-log - pretty print and sort curtin logs', |
3815 | + prog='curtin-print-log') |
3816 | + parser.add_argument('--blame', action='store_true', |
3817 | + default=False, |
3818 | + dest='blame_sort', |
3819 | + help='sort events by total time.') |
3820 | + parser.add_argument('--dumpfiles', action='store_true', |
3821 | + default=False, |
3822 | + dest='dump_files', |
3823 | + help='dump content of any posted files') |
3824 | + parser.add_argument('--format', action='store', |
3825 | + dest='print_format', |
3826 | + default='%d seconds in %D', |
3827 | + help='specify formatting of output. ' + |
3828 | + formatting_help) |
3829 | + parser.add_argument('infile', nargs='?', type=argparse.FileType('r'), |
3830 | + help='Path to log to parse. Use - for stdin') |
3831 | + |
3832 | + opts = parser.parse_args(sys.argv[1:]) |
3833 | + if not opts.infile: |
3834 | + parser.print_help() |
3835 | + sys.exit(1) |
3836 | + |
3837 | + try: |
3838 | + j = json.load(opts.infile) |
3839 | + except json.JSONDecodeError: |
3840 | + print("Input must be valid JSON") |
3841 | + sys.exit(1) |
3842 | + |
3843 | + records = generate_records(j, blame_sort=opts.blame_sort, |
3844 | + print_format=opts.print_format, |
3845 | + dump_files=opts.dump_files) |
3846 | + summary = [] |
3847 | + if opts.blame_sort is True: |
3848 | + summary = records[-1:] |
3849 | + records = sorted(records[:-1], reverse=True) |
3850 | + |
3851 | + print("\n".join(records + summary)) |
3852 | + |
3853 | + |
3854 | +if __name__ == '__main__': |
3855 | + main() |
3856 | |
3857 | === modified file 'tools/launch' |
3858 | --- tools/launch 2016-07-15 16:14:53 +0000 |
3859 | +++ tools/launch 2016-09-15 18:06:47 +0000 |
3860 | @@ -37,7 +37,7 @@ |
3861 | -h | --help show this message |
3862 | -i | --initrd F use initramfs F |
3863 | -k | --kernel F use kernel K |
3864 | - --mem K memory in Kb |
3865 | + --mem M memory in Mb |
3866 | -n | --netdev netdev can be 'user' or a bridge |
3867 | -p | --publish F make file 'F' available in web server |
3868 | --silent use -nographic |
3869 | |
3870 | === removed file 'tools/report-webhook-logger' |
3871 | --- tools/report-webhook-logger 2015-10-09 14:47:09 +0000 |
3872 | +++ tools/report-webhook-logger 1970-01-01 00:00:00 +0000 |
3873 | @@ -1,100 +0,0 @@ |
3874 | -#!/usr/bin/python3 |
3875 | -try: |
3876 | - # python2 |
3877 | - import SimpleHTTPServer as http_server |
3878 | - import SocketServer as socketserver |
3879 | -except ImportError: |
3880 | - import http.server as http_server |
3881 | - import socketserver |
3882 | - |
3883 | -import json |
3884 | -import sys |
3885 | - |
3886 | -EXAMPLE_CONFIG = """\ |
3887 | -# example config |
3888 | -reporting: |
3889 | - mypost: |
3890 | - type: webhook |
3891 | - endpoint: %(endpoint)s |
3892 | -install: |
3893 | - log_file: /tmp/foo |
3894 | - post_files: [/tmp/foo] |
3895 | - |
3896 | -# example python: |
3897 | -from curtin.reporter import events, update_configuration |
3898 | -cfg = {'mypost': {'type': 'webhook', 'endpoint': '%(endpoint)s'}} |
3899 | -update_configuration(cfg) |
3900 | -with events.ReportEventStack(name="myname", description="mydesc", |
3901 | - reporting_enabled=True): |
3902 | - print("do something") |
3903 | -""" |
3904 | - |
3905 | -if len(sys.argv) > 2: |
3906 | - PORT = int(sys.argv[2]) |
3907 | - addr = sys.argv[1] |
3908 | -elif len(sys.argv) > 1: |
3909 | - PORT = int(sys.argv[1]) |
3910 | - addr = "" |
3911 | -else: |
3912 | - PORT = 8000 |
3913 | - addr = "" |
3914 | - |
3915 | - |
3916 | -def render_event_string(event_str): |
3917 | - return json.dumps(json.loads(event_str), indent=1) |
3918 | - |
3919 | - |
3920 | -class ServerHandler(http_server.SimpleHTTPRequestHandler): |
3921 | - |
3922 | - def log_request(self, code, size=None): |
3923 | - lines = [ |
3924 | - "== %s %s ==" % (self.command, self.path), |
3925 | - str(self.headers).replace('\r', '')] |
3926 | - if self._message: |
3927 | - lines.append(self._message) |
3928 | - sys.stdout.write('\n'.join(lines) + '\n') |
3929 | - sys.stdout.flush() |
3930 | - |
3931 | - def do_GET(self): |
3932 | - self._message = None |
3933 | - self.send_response(200) |
3934 | - self.end_headers() |
3935 | - self.wfile.write("content of %s\n" % self.path) |
3936 | - |
3937 | - def do_POST(self): |
3938 | - length = int(self.headers['Content-Length']) |
3939 | - post_data = self.rfile.read(length).decode('utf-8') |
3940 | - try: |
3941 | - self._message = render_event_string(post_data) |
3942 | - except Exception as e: |
3943 | - self._message = '\n'.join( |
3944 | - ["failed printing event: %s" % e, post_data]) |
3945 | - |
3946 | - msg = "received post to %s" % self.path |
3947 | - self.send_response(200) |
3948 | - self.send_header("Content-type", "text/plain") |
3949 | - self.end_headers() |
3950 | - self.wfile.write(msg.encode('utf-8')) |
3951 | - |
3952 | -# avoid 'Address already in use' after ctrl-c |
3953 | -socketserver.TCPServer.allow_reuse_address = True |
3954 | - |
3955 | -Handler = ServerHandler |
3956 | -httpd = socketserver.TCPServer(("", PORT), Handler) |
3957 | -httpd.allow_reuse_address = True |
3958 | - |
3959 | -info = { |
3960 | - 'interface': addr or "localhost", |
3961 | - 'port': PORT, |
3962 | - 'endpoint': "http://" + (addr or "localhost") + ":%s" % PORT |
3963 | -} |
3964 | -print("Serving at: %(endpoint)s" % info) |
3965 | -print("Post to this with:\n%s\n" % (EXAMPLE_CONFIG % info)) |
3966 | - |
3967 | -try: |
3968 | - httpd.serve_forever() |
3969 | -except KeyboardInterrupt: |
3970 | - sys.stdout.flush() |
3971 | - pass |
3972 | -httpd.server_close() |
3973 | -sys.exit(0) |
3974 | |
3975 | === added file 'tools/report_webhook_logger.py' |
3976 | --- tools/report_webhook_logger.py 1970-01-01 00:00:00 +0000 |
3977 | +++ tools/report_webhook_logger.py 2016-09-15 18:06:47 +0000 |
3978 | @@ -0,0 +1,174 @@ |
3979 | +#!/usr/bin/python3 |
3980 | +try: |
3981 | + # python2 |
3982 | + import SimpleHTTPServer as http_server |
3983 | + import SocketServer as socketserver |
3984 | +except ImportError: |
3985 | + import http.server as http_server |
3986 | + import socketserver |
3987 | + |
3988 | +import json |
3989 | +import os |
3990 | +import sys |
3991 | +import threading |
3992 | + |
3993 | +EXAMPLE_CONFIG = """\ |
3994 | +# example config |
3995 | +reporting: |
3996 | + mypost: |
3997 | + type: webhook |
3998 | + endpoint: %(endpoint)s |
3999 | +install: |
4000 | + log_file: /tmp/foo |
4001 | + post_files: [/tmp/foo] |
4002 | + |
4003 | +# example python: |
4004 | +from curtin.reporter import events, update_configuration |
4005 | +cfg = {'mypost': {'type': 'webhook', 'endpoint': '%(endpoint)s'}} |
4006 | +update_configuration(cfg) |
4007 | +with events.ReportEventStack(name="myname", description="mydesc", |
4008 | + reporting_enabled=True): |
4009 | + print("do something") |
4010 | +""" |
4011 | + |
4012 | +CURTIN_EVENTS = [] |
4013 | +DEFAULT_PORT = 8000 |
4014 | +addr = "" |
4015 | + |
4016 | + |
4017 | +def render_event_string(event_str): |
4018 | + return json.dumps(json.loads(event_str), indent=1) |
4019 | + |
4020 | + |
4021 | +def write_event_string(target, event_str): |
4022 | + try: |
4023 | + with open(target, 'r') as fp: |
4024 | + data = json.load(fp) |
4025 | + except: |
4026 | + data = [] |
4027 | + data.append(json.loads(event_str)) |
4028 | + with open(target, 'w') as fp: |
4029 | + json.dump(data, fp) |
4030 | + |
4031 | + |
4032 | +class ServerHandler(http_server.SimpleHTTPRequestHandler): |
4033 | + result_log_file = None |
4034 | + |
4035 | + def log_request(self, code, size=None): |
4036 | + if self.result_log_file: |
4037 | + return |
4038 | + lines = [ |
4039 | + "== %s %s ==" % (self.command, self.path), |
4040 | + str(self.headers).replace('\r', '')] |
4041 | + if self._message: |
4042 | + lines.append(self._message) |
4043 | + sys.stdout.write('\n'.join(lines) + '\n') |
4044 | + sys.stdout.flush() |
4045 | + |
4046 | + def do_GET(self): |
4047 | + self._message = None |
4048 | + self.send_response(200) |
4049 | + self.end_headers() |
4050 | + self.wfile.write("content of %s\n" % self.path) |
4051 | + |
4052 | + def do_POST(self): |
4053 | + length = int(self.headers['Content-Length']) |
4054 | + post_data = self.rfile.read(length).decode('utf-8') |
4055 | + try: |
4056 | + if self.result_log_file: |
4057 | + write_event_string(self.result_log_file, post_data) |
4058 | + self._message = render_event_string(post_data) |
4059 | + except Exception as e: |
4060 | + self._message = '\n'.join( |
4061 | + ["failed printing event: %s" % e, post_data]) |
4062 | + |
4063 | + msg = "received post to %s" % self.path |
4064 | + self.send_response(200) |
4065 | + self.send_header("Content-type", "text/plain") |
4066 | + self.end_headers() |
4067 | + self.wfile.write(msg.encode('utf-8')) |
4068 | + |
4069 | + |
4070 | +def GenServerHandlerWithResultFile(file_path): |
4071 | + class ExtendedServerHandler(ServerHandler): |
4072 | + result_log_file = file_path |
4073 | + return ExtendedServerHandler |
4074 | + |
4075 | + |
4076 | +def get_httpd(port=None, result_file=None): |
4077 | + # avoid 'Address already in use' after ctrl-c |
4078 | + socketserver.TCPServer.allow_reuse_address = True |
4079 | + |
4080 | + # get first available port if none specified |
4081 | + if port is None: |
4082 | + port = 0 |
4083 | + |
4084 | + if result_file: |
4085 | + Handler = GenServerHandlerWithResultFile(result_file) |
4086 | + else: |
4087 | + Handler = ServerHandler |
4088 | + httpd = socketserver.TCPServer(("", port), Handler) |
4089 | + httpd.allow_reuse_address = True |
4090 | + |
4091 | + return httpd |
4092 | + |
4093 | + |
4094 | +def run_server(port=DEFAULT_PORT, log_data=True): |
4095 | + """Run the server and capture output, redirecting output to /dev/null if |
4096 | + log_data = False""" |
4097 | + httpd = get_httpd(port=port) |
4098 | + |
4099 | + _stdout = sys.stdout |
4100 | + with open(os.devnull, 'w') as fp: |
4101 | + try: |
4102 | + if not log_data: |
4103 | + sys.stdout = fp |
4104 | + httpd.serve_forever() |
4105 | + except KeyboardInterrupt: |
4106 | + sys.stdout.flush() |
4107 | + pass |
4108 | + finally: |
4109 | + sys.stdout = _stdout |
4110 | + httpd.server_close() |
4111 | + |
4112 | + return CURTIN_EVENTS |
4113 | + |
4114 | + |
4115 | +class CaptureReporting: |
4116 | + |
4117 | + def __init__(self, result_file): |
4118 | + self.result_file = result_file |
4119 | + self.httpd = get_httpd(result_file=self.result_file, |
4120 | + port=None) |
4121 | + self.httpd.server_activate() |
4122 | + (self.bind_addr, self.port) = self.httpd.server_address |
4123 | + |
4124 | + def __enter__(self): |
4125 | + if os.path.exists(self.result_file): |
4126 | + os.remove(self.result_file) |
4127 | + self.worker = threading.Thread(target=self.httpd.serve_forever) |
4128 | + self.worker.start() |
4129 | + return self |
4130 | + |
4131 | + def __exit__(self, etype, value, trace): |
4132 | + self.httpd.shutdown() |
4133 | + |
4134 | + |
4135 | +if __name__ == "__main__": |
4136 | + if len(sys.argv) > 2: |
4137 | + port = int(sys.argv[2]) |
4138 | + addr = sys.argv[1] |
4139 | + elif len(sys.argv) > 1: |
4140 | + port = int(sys.argv[1]) |
4141 | + addr = "" |
4142 | + else: |
4143 | + port = DEFAULT_PORT |
4144 | + info = { |
4145 | + 'interface': addr or "localhost", |
4146 | + 'port': port, |
4147 | + 'endpoint': "http://" + (addr or "localhost") + ":%s" % port |
4148 | + } |
4149 | + print("Serving at: %(endpoint)s" % info) |
4150 | + print("Post to this with:\n%s\n" % (EXAMPLE_CONFIG % info)) |
4151 | + run_server(port=port, log_data=True) |
4152 | + sys.exit(0) |
4153 | |
4154 | === modified file 'tools/run-pep8' |
4155 | --- tools/run-pep8 2015-03-09 16:32:36 +0000 |
4156 | +++ tools/run-pep8 2016-09-15 18:06:47 +0000 |
4157 | @@ -1,6 +1,11 @@ |
4158 | #!/bin/bash |
4159 | |
4160 | -pycheck_dirs=( "curtin/" "tests/" ) |
4161 | +pycheck_dirs=( |
4162 | + "curtin/" |
4163 | + "tests/" |
4164 | + "tools/curtin-log-print" |
4165 | + "tools/report_webhook_logger.py" |
4166 | +) |
4167 | bin_files=( ) |
4168 | CR=" |
4169 | " |
4170 | |
4171 | === modified file 'tools/run-pyflakes' |
4172 | --- tools/run-pyflakes 2016-02-12 16:43:18 +0000 |
4173 | +++ tools/run-pyflakes 2016-09-15 18:06:47 +0000 |
4174 | @@ -4,10 +4,20 @@ |
4175 | CR=" |
4176 | " |
4177 | vmtests="" |
4178 | +storagetest_runner="" |
4179 | if [ "$PYTHON_VERSION" = "3" ]; then |
4180 | vmtests="tests/vmtests/" |
4181 | + storagetest_runner="tests/storagetest_runner/" |
4182 | fi |
4183 | -pycheck_dirs=( "curtin/" "tests/unittests/" $vmtests ) |
4184 | +pycheck_dirs=( |
4185 | + "curtin/" |
4186 | + "tests/unittests/" |
4187 | + $vmtests |
4188 | + $storagetest_runner |
4189 | + "tests/storagetests/" |
4190 | + "tools/curtin-log-print" |
4191 | + "tools/report_webhook_logger.py" |
4192 | +) |
4193 | |
4194 | set -f |
4195 | if [ $# -eq 0 ]; then |
The storagetests are now at a point where they can be fully run, with the results collected, just by using 'make run_storagetests'. The storagetest_runner uses the same general structure as vmtests, so it shouldn't require any modifications to the test system other than adding it to the list of things to run.
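The event log collected by the runner (and pretty-printed by tools/curtin-log-print in the diff above) is just a JSON list of start/finish events. As a minimal illustration of the per-stage duration computation that tool performs (the sample events below are made up):

```python
import json

# Made-up sample of the webhook event list written during an install
events_json = json.dumps([
    {"name": "cmd-install/stage-late", "event_type": "start",
     "timestamp": 1461164249.0},
    {"name": "cmd-install/stage-late", "event_type": "finish",
     "timestamp": 1461164251.5, "result": "SUCCESS"},
])

# Pair each 'finish' event with its 'start' and record the elapsed time
starts = {}
durations = {}
for ev in json.loads(events_json):
    if ev["event_type"] == "start":
        starts[ev["name"]] = ev["timestamp"]
    elif ev["event_type"] == "finish":
        durations[ev["name"]] = ev["timestamp"] - starts[ev["name"]]

print(durations)  # {'cmd-install/stage-late': 2.5}
```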
The storagetest_runner generates a tarball of curtin and issues cloud-config scripts that have the target image retrieve and extract that tarball as it boots. The storagetests are then run; they use curtin.reporter to report success and failure back to the test runner, along with the log files generated by block_meta while the tests were running.
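On the runner side, the webhook capture follows the CaptureReporting pattern from tools/report_webhook_logger.py in the diff above: a threaded HTTP server that records posted JSON events. A self-contained, simplified sketch of that pattern (in-memory only, not the actual class):

```python
import http.server
import json
import socketserver
import threading
import urllib.request

events = []  # posted reporter events accumulate here


class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # read the posted event body and store it
        length = int(self.headers['Content-Length'])
        events.append(json.loads(self.rfile.read(length).decode('utf-8')))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the server quiet


# port 0 asks the OS for a free port, as get_httpd() does above
httpd = socketserver.TCPServer(("127.0.0.1", 0), Handler)
port = httpd.server_address[1]
worker = threading.Thread(target=httpd.serve_forever)
worker.start()

# simulate one webhook post from the installer
req = urllib.request.Request(
    "http://127.0.0.1:%d/" % port,
    data=json.dumps({"event_type": "start"}).encode('utf-8'))
urllib.request.urlopen(req)

httpd.shutdown()
worker.join()
httpd.server_close()
print(events)
```

The real CaptureReporting additionally persists events to a result file so vmtests can assert on them after the VM exits.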