Merge ~raharper/curtin:feature/enable-storage-vmtest-on-centos into curtin:master
- Git
- lp:~raharper/curtin
- feature/enable-storage-vmtest-on-centos
- Merge into master
Status: Merged
Approved by: Ryan Harper
Approved revision: e0e98376b2e7ff3a09f3a8b339c1d029a3274b83
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~raharper/curtin:feature/enable-storage-vmtest-on-centos
Merge into: curtin:master
Diff against target: 7050 lines (+2535/-1590), 82 files modified
curtin/__init__.py (+2/-0) curtin/block/__init__.py (+0/-72) curtin/block/deps.py (+103/-0) curtin/block/iscsi.py (+25/-9) curtin/block/lvm.py (+2/-1) curtin/block/mdadm.py (+2/-1) curtin/block/mkfs.py (+3/-2) curtin/block/zfs.py (+2/-1) curtin/commands/apply_net.py (+4/-3) curtin/commands/apt_config.py (+13/-13) curtin/commands/block_meta.py (+5/-4) curtin/commands/curthooks.py (+391/-207) curtin/commands/in_target.py (+2/-2) curtin/commands/install.py (+4/-2) curtin/commands/system_install.py (+2/-1) curtin/commands/system_upgrade.py (+3/-2) curtin/deps/__init__.py (+3/-3) curtin/distro.py (+512/-0) curtin/futil.py (+2/-1) curtin/net/__init__.py (+0/-59) curtin/net/deps.py (+72/-0) curtin/paths.py (+34/-0) curtin/util.py (+20/-318) dev/null (+0/-96) doc/topics/config.rst (+40/-0) doc/topics/curthooks.rst (+18/-2) examples/tests/filesystem_battery.yaml (+2/-2) helpers/common (+156/-35) tests/unittests/test_apt_custom_sources_list.py (+10/-8) tests/unittests/test_apt_source.py (+8/-7) tests/unittests/test_block_iscsi.py (+7/-0) tests/unittests/test_block_lvm.py (+3/-2) tests/unittests/test_block_mdadm.py (+18/-11) tests/unittests/test_block_mkfs.py (+3/-2) tests/unittests/test_block_zfs.py (+15/-9) tests/unittests/test_commands_apply_net.py (+7/-7) tests/unittests/test_commands_block_meta.py (+4/-3) tests/unittests/test_curthooks.py (+103/-78) tests/unittests/test_distro.py (+302/-0) tests/unittests/test_feature.py (+3/-0) tests/unittests/test_pack.py (+2/-0) tests/unittests/test_util.py (+19/-122) tests/vmtests/__init__.py (+80/-13) tests/vmtests/helpers.py (+28/-1) tests/vmtests/image_sync.py (+3/-1) tests/vmtests/releases.py (+2/-2) tests/vmtests/report_webhook_logger.py (+11/-6) tests/vmtests/test_apt_config_cmd.py (+2/-4) tests/vmtests/test_apt_source.py (+2/-4) tests/vmtests/test_basic.py (+126/-152) tests/vmtests/test_bcache_basic.py (+3/-6) tests/vmtests/test_fs_battery.py (+25/-11) tests/vmtests/test_install_umount.py (+1/-18) tests/vmtests/test_iscsi.py 
(+10/-6) tests/vmtests/test_journald_reporter.py (+2/-5) tests/vmtests/test_lvm.py (+7/-8) tests/vmtests/test_lvm_iscsi.py (+9/-4) tests/vmtests/test_lvm_root.py (+40/-9) tests/vmtests/test_mdadm_bcache.py (+41/-18) tests/vmtests/test_mdadm_iscsi.py (+9/-3) tests/vmtests/test_multipath.py (+8/-16) tests/vmtests/test_network.py (+4/-19) tests/vmtests/test_network_alias.py (+3/-3) tests/vmtests/test_network_bonding.py (+3/-3) tests/vmtests/test_network_bridging.py (+4/-4) tests/vmtests/test_network_ipv6.py (+4/-4) tests/vmtests/test_network_ipv6_static.py (+2/-2) tests/vmtests/test_network_ipv6_vlan.py (+2/-2) tests/vmtests/test_network_mtu.py (+5/-4) tests/vmtests/test_network_static.py (+2/-11) tests/vmtests/test_network_static_routes.py (+2/-2) tests/vmtests/test_network_vlan.py (+3/-11) tests/vmtests/test_nvme.py (+29/-56) tests/vmtests/test_old_apt_features.py (+2/-4) tests/vmtests/test_pollinate_useragent.py (+2/-2) tests/vmtests/test_raid5_bcache.py (+6/-11) tests/vmtests/test_simple.py (+5/-18) tests/vmtests/test_ubuntu_core.py (+3/-8) tests/vmtests/test_uefi_basic.py (+27/-28) tests/vmtests/test_zfsroot.py (+5/-21) tools/jenkins-runner (+30/-5) tools/vmtest-filter (+57/-0) |
Related bugs: none

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
Lee Trager (community) | | | Approve
Scott Moser (community) | | | Approve
Chad Smith | | | Pending

Review via email: mp+349075@code.launchpad.net
Commit message
Enable custom storage configuration for centos images.
Add support for the majority of storage configurations including
partitioning, lvm, raid, iscsi and combinations of these. Some
storage configs are unsupported at this time.
Unsupported storage config options on Centos:
- bcache (no kernel support)
- zfs (no kernel support)
- jfs, ntfs, reiserfs (no kernel, userspace support)
Curtin's built-in curthooks now support Centos in addition
to Ubuntu. The built-in curthooks are now callable by
in-image curthooks. This feature is announced by the
presence of the feature flag 'CENTOS_CURTHOOK_SUPPORT'.
Other notable features added:
- tools/jenkins-runner gains a filtering
ability which enables generating the list of tests to
run by specifying attributes of the classes. For example,
to run all centos70 tests append:
--
- curtin/distro.py includes distro specific methods, such as
package install and distro version detection
- util.target_path has now moved to curtin.paths module
Description of the change
Server Team CI bot (server-team-bot) wrote : | # |
Scott Moser (smoser) wrote : | # |
I didn't get through the whole thing yet. Only as far as my comments stop.
will review more later.
Feels like we need a 'distro' module.
def distro.
"""Find the distro in target. return just distro name for now."""
also there would be
CENTOS='centos'
DEBIAN='debian'.
then you can avoid copying 'debian' string everywhere.
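A minimal sketch of what Scott is suggesting might look like the following (names here are illustrative, not the final API; the branch ultimately landed a richer curtin/distro.py with an enumerated DISTROS mapping):

```python
import os

# Hypothetical sketch of the suggested distro module constants;
# curtin's final distro.py uses a namedtuple-based DISTROS enumeration.
CENTOS = 'centos'
DEBIAN = 'debian'
UBUNTU = 'ubuntu'


def get_distro(target='/'):
    """Find the distro in target. Return just the distro name for now."""
    os_release = os.path.join(target, 'etc', 'os-release')
    with open(os_release) as fh:
        for line in fh:
            if line.startswith('ID='):
                # ID values may be quoted, e.g. ID="centos"
                return line.split('=', 1)[1].strip().strip('"')
    return None
```

With the constants in one place, callers can compare against distro.CENTOS instead of scattering 'centos' string literals.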
Ryan Harper (raharper) wrote : | # |
Yes, I generally wanted some sort of distro value cache. Which is why in curthooks I end up grabbing it first and passing it around.
We could avoid passing, at the cost of a function call. Alternatively, we could import it into the curthooks module and call a "setter" to find the right value, and it would be a global to the module.
Thoughts?
Chad Smith (chad.smith) wrote : | # |
only a brief glance at the content, will look more tomorrow.
- 99517f4... by Ryan Harper
-
Simplify set construction for get_iscsi_ports_from_config
- 0ca7ee3... by Ryan Harper
-
Restore param order to copy_iscsi_conf
- cca12fc... by Ryan Harper
-
Fix whitespace damage, update comment to have LP: #
Ryan Harper (raharper) wrote : | # |
Thanks for the comments so far. Pulling in some suggested changes. Some responses in-line.
Scott Moser (smoser) wrote : | # |
I got through the rest of it.
I like the test filter functionality.
comments inline.
- 1d890ad... by Ryan Harper
-
Drop if not target check, use target_path instead
- 3719756... by Ryan Harper
-
setup_grub, helpers/common: pass os-family to install_grub, fix shell nits
- Add --os-family to install_grub cli, have setup_grub() pass in the flag
- Address shell comments
- Fix unittests to work with --os-family
- 9555181... by Ryan Harper
-
Fix use of cls.target_distro, it always has a value now, drop test_type=core
- 2a5e3d2... by Ryan Harper
-
Use instead of ; use error/fail
Ryan Harper (raharper) wrote : | # |
Pulling in most of the comments. Replied to a few questions inline. I'll reverify that we're still passing on centos tests and then push the fixes here for a second round.
Thanks for the review!
- dac8fe0... by Ryan Harper
-
Flake8 fixes for vmtests-filter
Chad Smith (chad.smith) : | # |
- 290b898... by Ryan Harper
-
helpers/common: install_grub: Fix getopt, os-family takes a parameter
- 4bf91d8... by Ryan Harper
-
Drop default_collect_scripts class attr, it's not needed
- ac47a91... by Ryan Harper
-
helpers/install_grub: fix i386 grub_name/grub_target; catch silent missing package exit
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:d76fb0f9123
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- bfe3021... by Ryan Harper
-
Refactor distro/osfamily into enumerated class
Introduce curtin/distro.py which provides distro variant and
osfamily mapping methods. Inside we enumerate all of the known
distro names, build a distro family to variant mapping, and provide
a reverse mapping for translating from one to the other. With this
in place, add a singleton-based method to utils,
get_target_distroinfo, which queries /etc/os-release inside
the target path, extracts the ID= value, looks that up
in the list of distros and osfamilies, creating a named tuple
that is globally cached. Added accessor methods for getting
the variant or osfamily and then used these to update
curthooks to query once, and then compare the value found versus
the enumerated distro objects. Where target is available, methods
will now use get_target_osfamily(target=target) to obtain a value
if one is not provided. In some methods that are distro specific
we default the osfamily to the correct value.
- 5cbf688... by Ryan Harper
-
Drop use of singleton, in-use for ephemeral and target, move DistroInfo to distro.py
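The family-to-variant mapping and its reverse, as described in the commit message above, can be sketched roughly like this (the structure and some variant names are assumed for illustration, not the final distro.py API):

```python
# Illustrative sketch: enumerate variants per family, then derive the
# reverse (variant -> family) mapping used for osfamily lookups.
OS_FAMILIES = {
    'debian': ['debian', 'ubuntu'],
    'redhat': ['centos', 'fedora', 'redhat', 'rhel'],
}

# variant name -> family name, built once from the table above
DISTRO_TO_OSFAMILY = {
    variant: family
    for family, variants in OS_FAMILIES.items()
    for variant in variants
}
```

A lookup such as DISTRO_TO_OSFAMILY['centos'] then answers the osfamily question in one place, rather than via string comparisons spread across curthooks.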
Ryan Harper (raharper) wrote : | # |
OK, I've given the curtin/distro.py a go. I think it works quite nicely. I'm happy to bikeshed on the attributes (distro vs variant vs osfamily, etc).
That's easy enough to switch around.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:e39be2278d5
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Ryan Harper (raharper) wrote : | # |
team review comments inline
- 90451c1... by Ryan Harper
-
Drop _target from get_distro, get_osfamily helpers
- 3d329af... by Ryan Harper
-
Refactor iscsi.get_iscsi_disks_from_config for modular use
- Introduce get_iscsi_volumes_from_config which returns a list of
iscsi RFC uris which can be used to construct IscsiDiskObjects.
- Refactor get_iscsi_disks_from_config to use get_iscsi_volumes_from_config
- Add docstrings to all get_iscsi_* methods
- Migrate block,net detect_required_packages_mapping into module/deps.py
respectively to avoid dependency loop between import of curtin.block
and curtin.commands.block_meta
- Fix up curthooks to import block and net deps module
- 78e38d2... by Ryan Harper
-
Refactor osfamily parameter, default to DISTROS.debian
Drop any if osfamily is None checks since we now default to
DISTROS.debian for osfamily. Add some checks if osfamily is
not the expected values and raise ValueErrors
- 926fd23... by Ryan Harper
-
Move targets_node_dir into function signature with default value
- 8411be2... by Scott Moser
-
Refactor util, distro and add paths.py
Rearrange package/distro related functions out of util.py into
distro.py. Move target_path into paths.py. Adjust callers
where necessary.
- aa2c622... by Ryan Harper
-
Drop iscsi initiator name hack, not needed
- 652b1c7... by Ryan Harper
-
Fix typo in initramfs string, make pollinate generic, check for binary in target
- 7c2e84a... by Ryan Harper
-
Add unittest for pollinate missing, drop yum_install
Add unittest for when pollinate binary is missing.
Drop distro.yum_install, folding settings and retries into run_apt_command
and run_yum_command.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:a5fdb635f04
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
MP to get rid of users of util.target_path
http://
some smaller things in line.
- 9276416... by Scott Moser
-
remove util.target_path users.
- 494d1f3... by Ryan Harper
-
Replace launchpad link with LP: #NNNN
- d90b938... by Ryan Harper
-
Drop apt,yum retries for all commands, handle yum install in two parts
- 05ff544... by Ryan Harper
-
distro: add unittest and ensure osfamily variant is part of itself
- a8d08f6... by Ryan Harper
-
helpers/common: map variant to os_family and update os_family switch statements
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:47bcf8fe3c1
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 09a05cd... by Ryan Harper
-
curthooks: refactor builtin curthooks into a callable method
- 0b23cbd... by Ryan Harper
-
iscsi_get_volumes_from_config: handle curtin config and storage config
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3972610e2e4
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
Ran a centos70 run on diglett:
% rm -rf ./output/; CURTIN_
...
-------
Ran 372 tests in 1898.647s
OK (SKIP=112)
Tue, 07 Aug 2018 16:05:46 -0500: vmtest end [0] in 1901s
The set of tests that run are:
% ./tools/
2018-08-07 16:20:08,785 - tests.vmtests - INFO - Logfile: /tmp/vmtest-
tests/vmtests/
- 35b44dd... by Ryan Harper
-
doc: update curthooks docs
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:b4e8d6897e3
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 00a0658... by Ryan Harper
-
Pass in the real curtin config to builtin_curthooks
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:dd8581ba936
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Ryan Harper (raharper) : | # |
- 068ce08... by Ryan Harper
-
grub: require both os variant and family; pass variant to grub-install, it's fickle.
- 78d89b5... by Ryan Harper
-
Allow yum install, update, upgrade to use the two-step download,install method
- 979d53b... by Ryan Harper
-
Drop target is None checks in distro.py
- adda1ab... by Ryan Harper
-
Add comments on use of ChrootableTarget for rpm/yum operations
Chad Smith (chad.smith) wrote : | # |
Thanks Ryan!
Couple nits inline plus a significant question about detect_
I added a pastebin to add --features argument to CLI, which I can do in a separate branch if you think it is a good idea.
Chad Smith (chad.smith) : | # |
Ryan Harper (raharper) wrote : | # |
Thanks for the review. I've replied inline.
Chad Smith (chad.smith) : | # |
- 3062605... by Ryan Harper
-
block.deps: Add iscsi mapping to open-iscsi for debian family
Scott Moser (smoser) wrote : | # |
2 questions
a.)
'yum update' versus 'yum upgrade'
this feels like we want 'upgrade' as it is more similar to 'dist-upgrade' which is what we do in apt.
b.) I think really we still want the 2 phase for upgrade.
retry this: yum --downloadonly --setopt=
then run this: yum upgrade --cacheonly --downloadonly --setopt=
It looks like we can mostly re-use the existing 'yum_install' but just manage to set ['install'] to be ['upgrade']
We are getting there...
I know that this 'upgrade' path isn't a huge thing, but if we have it there i'd like for it to work reliably.
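The two-phase flow Scott describes, retry only the network-bound download, then apply from the local cache, might look like this (a hypothetical helper; curtin's actual run_yum_command and its exact flags differ):

```python
import subprocess
import time


def yum_two_phase(mode, packages, retries=(1, 2, 4)):
    """Run 'yum install' or 'yum upgrade' in two phases: retry the
    download step (which touches the network), then apply from cache."""
    base = ['yum', '--assumeyes', '--quiet']
    # phase 1: download only; retried because it depends on the network
    download = base + [mode, '--downloadonly'] + list(packages)
    for wait in retries:
        try:
            subprocess.check_call(download)
            break
        except subprocess.CalledProcessError:
            time.sleep(wait)  # back off and retry the download
    else:
        raise RuntimeError('yum download failed for: %s' % (packages,))
    # phase 2: offline; operate purely from the downloaded cache
    subprocess.check_call(base + [mode, '--cacheonly'] + list(packages))
```

Passing mode='upgrade' reuses the same machinery as 'install', which matches Scott's observation that the existing yum_install logic mostly carries over.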
- 4d3550b... by Ryan Harper
-
Reformat exception to not dangle text on the new line
Scott Moser (smoser) wrote : | # |
I think i'm pretty much fine with this at this point.
mega-branch, but we can take and address any issues one by one.
Assuming the following are happy, I approve:
a.) rharper
b.) vmtest
c.) c-i bot
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:9d2c7fda267
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
On Thu, Aug 9, 2018 at 2:30 PM Scott Moser <email address hidden> wrote:
>
> Review: Approve
>
> I think i'm pretty much fine with this at this point.
> mega-branch, but we can take and address any issues one by one.
>
> Assuming the following are happy, I approve:
> a.) rharper
+1
> b.) vmtest
I'll kick off a full run on diglett; this allows me to "hack" in an
updated curtin-hooks.py for the centos images
However, we shouldn't land this until we get the MAAS image branch
approved and landed.
> c.) c-i bot
>
> --
> https:/
> You are the owner of ~raharper/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:9847b57cb9e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 5f9e785... by Ryan Harper
-
Add support for redhat distros without /etc/os-release; fix centos6 grub install
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3f1f7265284
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 76bc6dd... by Ryan Harper
-
curthooks: don't update initramfs unless we have storage config
The dracut config wasn't updated, but we still proceeded to regenerate
wasting time when it wasn't needed. Move rpm_command into distro.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:10686127093
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- e172702... by Ryan Harper
-
Drop extra case ;; and fix Uefi installs
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:956067e8289
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 59b9b94... by Ryan Harper
-
Drop centos_basic vmtest, handled in test_basic and test_network now
Ryan Harper (raharper) wrote : | # |
This passed on diglett full vmtest run (with the new maas-image curtin-hooks injected into centos7 images).
% rm -rf output/; CURTIN_
Quering synced ephemeral images/kernels in /srv/images
=======
Release Codename ImageDate Arch /SubArch Path
-------
12.04 precise 20170424 amd64/hwe-t precise/
12.04 precise 20170424 amd64/hwe-t precise/
12.04 precise 20170424.1 amd64/hwe-p precise/
14.04 trusty 20180806 amd64/hwe-t trusty/
14.04 trusty 20180806 amd64/hwe-x trusty/
14.04 trusty 20180806 i386 /hwe-t trusty/
14.04 trusty 20180806 i386 /hwe-x trusty/
16.04 xenial 20180814 amd64/ga-16.04 xenial/
16.04 xenial 20180814 amd64/hwe-16.04 xenial/
16.04 xenial 20180814 amd64/hwe-
16.04 xenial 20180814 i386 /ga-16.04 xenial/
16.04 xenial 20180814 i386 /hwe-16.04 xenial/
16.04 xenial 20180814 i386 /hwe-16.04-edge xenial/
17.04 zesty 20171219 amd64/ga-17.04 zesty/amd64/
17.10 artful 20180718 amd64/ga-17.10 artful/
17.10 artful 20180718 i386 /ga-17.10 artful/
18.04 bionic 20180814 amd64/ga-18.04 bionic/
18.04 bionic 20180814 i386 /ga-18.04 bionic/
18.10 cosmic 20180813 amd64/ga-18.10 cosmic/
18.10 cosmic 20180813 i386 /ga-18.10 cosmic/
-------
6.6 centos66 20180501_01 amd64/generic centos66/
7.0 centos70 20180501_01 amd64/generic centos70/
=======
Wed, 15 Aug 2018 14:58:02 -0500: vmtest start: nosetests3 --process-
...
-------
Ran 3336 tests in 23577.156s
Wed, 15 Aug 2018 21:31:00 -0500: vmtest end [0] in 23580s
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
ABORTED: https:/
ABORTED: https:/
ABORTED: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Lee Trager (ltrager) wrote : | # |
I have been testing this branch using the MAAS CI. Nodes in the MAAS CI have no direct access to the Internet. This is causing UEFI CentOS 7 installs to fail when running
yum --assumeyes --quiet install --downloadonly --setopt=
I made sure the image I built has grub2-efi-x64 [1]. While I think it's a good feature that Curtin will automatically install missing dependencies, if those dependencies are already on the system Curtin should not try to access the Internet.
I would suggest querying RPM directly to see if a package is available before trying to use yum
[root@autopkgtest /]# rpm -q grub2-efi-x64
grub2-efi-
[root@autopkgtest /]# echo $?
0
[root@autopkgtest /]# rpm -q missing-package
package missing-package is not installed
[root@autopkgtest /]# echo $?
1
[1] https:/
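Lee's rpm-first check could be wrapped along these lines (a hypothetical helper for illustration; the branch ultimately addressed this via distro.has_pkg_available and an install_missing_packages step):

```python
import subprocess


def rpm_package_installed(package, target=None):
    """Return True if package is installed, judged by rpm's exit code
    (0 = installed, 1 = not installed); needs no network access."""
    cmd = ['rpm']
    if target:
        # query the chroot's rpm database rather than the host's
        cmd += ['--root', target]
    cmd += ['-q', package]
    return subprocess.call(cmd, stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0
```

Curtin would then only fall back to yum (and the network) for packages this check reports as missing.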
- 4e1ad40... by Ryan Harper
-
Move grub package install to install_missing_packages
- dc92fe4... by Ryan Harper
-
Make distro.has_pkg_available multi-distro
- 2545c5f... by Ryan Harper
-
Build list of uefi packages and then update needed set checking if installed
- e1b9d38... by Ryan Harper
-
vmtests: Add environ variable IMAGE_SRC_KEYRING to specify gpg key path for testing unofficial images
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:1e1c7aa8d61
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 479275f... by Ryan Harper
-
Fix package name: grub2-efi-modules
- 18ea647... by Ryan Harper
-
Fix package name once more, grub2-efi-x64-modules
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:76e4baa3a7e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Lee Trager (ltrager) wrote : | # |
Latest changes allow CentOS to deploy with custom storage in the MAAS CI! Approving as custom storage works, but we still need to solve LP:1788088.
- a66566a... by Ryan Harper
-
centos: UEFI only depends on grub2-efi-x64-modules
- 07972da... by Ryan Harper
-
helpers/common: make efibootmgr dump verbosely
- f9c5916... by Ryan Harper
-
vmtest: collect /boot contents; collect efibootmgr output on UEFI
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3c1fa5feaef
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
After some back and forth about what grub2 packages needed; we pulled out shim and grub2-efi-x64 for now and pushed secure boot to a separate feature.
A full vmtest run with this branch against current published images has passed. I've also run all centos7 tests against the proposed images from ltrager and that has passed as well.
- d1e92f6... by Ryan Harper
-
Allow os_variant=rhel in grub install
When RHEL is installed, the os_variant value is 'rhel'. Allow this
value to match the centos|redhat case statement for grub install.
LP: #1790756
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:f764e28d234
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- a04727e... by Ryan Harper
-
builtin-hooks call handle_cloudconfig on centos to config maas datasource
In-image curthooks in centos images called curthooks.handle_cloudconfig.
We need to do the same in the built-in hooks if we're on centos.
LP: #1791140
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:e5b7b578e56
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 7198fbc... by Ryan Harper
-
jenkins-runner: restore missing -p|--parallel cli case statement
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:57db65feaa7
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 0f426eb... by Ryan Harper
-
jenkins-runner: better quoting and add --filter foo=bar support
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:1545caaa232
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
Overall I'm happy with this at this point.
If Ryan is happy and c-i is happy then I'm good.
I think that we have to rebase though. There are several '<<<<'.
You'll have to rebase.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:0f426eb681e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- fafa454... by Ryan Harper
-
Only install multipath packages if needed
- 211e2ad... by Ryan Harper
-
jenkins-runner: always append tests to nosetest
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:211e2ad86f7
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
- 11ad4ba... by Ryan Harper
-
jenkins-runner: handle nosetest args passed with filters and test paths
- e0e9837... by Ryan Harper
-
Drop debug and fix empty check to size of array
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:e0e98376b2e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:e0e98376b2e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:e0e98376b2e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
We've updated the centos images to have the required packages for offline install. The Jenkins node didn't have access to the yum repos, so the vmtests were failing for multipath and iscsi. With that resolved, I've re-run those tests and have positive results from here:
https:/
Ran 40 tests in 309.102s
OK (SKIP=12)
Fri, 21 Sep 2018 06:40:34 +0000: vmtest end [0] in 316s
Preview Diff
1 | diff --git a/curtin/__init__.py b/curtin/__init__.py |
2 | index 002454b..ee35ca3 100644 |
3 | --- a/curtin/__init__.py |
4 | +++ b/curtin/__init__.py |
5 | @@ -10,6 +10,8 @@ KERNEL_CMDLINE_COPY_TO_INSTALL_SEP = "---" |
6 | FEATURES = [ |
7 | # curtin can apply centos networking via centos_apply_network_config |
8 | 'CENTOS_APPLY_NETWORK_CONFIG', |
9 | + # curtin can configure centos storage devices and boot devices |
10 | + 'CENTOS_CURTHOOK_SUPPORT', |
11 | # install supports the 'network' config version 1 |
12 | 'NETWORK_CONFIG_V1', |
13 | # reporter supports 'webhook' type |
14 | diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py |
15 | index b771629..490c268 100644 |
16 | --- a/curtin/block/__init__.py |
17 | +++ b/curtin/block/__init__.py |
18 | @@ -1003,78 +1003,6 @@ def wipe_volume(path, mode="superblock", exclusive=True): |
19 | raise ValueError("wipe mode %s not supported" % mode) |
20 | |
21 | |
22 | -def storage_config_required_packages(storage_config, mapping): |
23 | - """Read storage configuration dictionary and determine |
24 | - which packages are required for the supplied configuration |
25 | - to function. Return a list of packaged to install. |
26 | - """ |
27 | - |
28 | - if not storage_config or not isinstance(storage_config, dict): |
29 | - raise ValueError('Invalid storage configuration. ' |
30 | - 'Must be a dict:\n %s' % storage_config) |
31 | - |
32 | - if not mapping or not isinstance(mapping, dict): |
33 | - raise ValueError('Invalid storage mapping. Must be a dict') |
34 | - |
35 | - if 'storage' in storage_config: |
36 | - storage_config = storage_config.get('storage') |
37 | - |
38 | - needed_packages = [] |
39 | - |
40 | - # get reqs by device operation type |
41 | - dev_configs = set(operation['type'] |
42 | - for operation in storage_config['config']) |
43 | - |
44 | - for dev_type in dev_configs: |
45 | - if dev_type in mapping: |
46 | - needed_packages.extend(mapping[dev_type]) |
47 | - |
48 | - # for any format operations, check the fstype and |
49 | - # determine if we need any mkfs tools as well. |
50 | - format_configs = set([operation['fstype'] |
51 | - for operation in storage_config['config'] |
52 | - if operation['type'] == 'format']) |
53 | - for format_type in format_configs: |
54 | - if format_type in mapping: |
55 | - needed_packages.extend(mapping[format_type]) |
56 | - |
57 | - return needed_packages |
58 | - |
59 | - |
60 | -def detect_required_packages_mapping(): |
61 | - """Return a dictionary providing a versioned configuration which maps |
62 | - storage configuration elements to the packages which are required |
63 | - for functionality. |
64 | - |
65 | - The mapping key is either a config type value, or an fstype value. |
66 | - |
67 | - """ |
68 | - version = 1 |
69 | - mapping = { |
70 | - version: { |
71 | - 'handler': storage_config_required_packages, |
72 | - 'mapping': { |
73 | - 'bcache': ['bcache-tools'], |
74 | - 'btrfs': ['btrfs-tools'], |
75 | - 'ext2': ['e2fsprogs'], |
76 | - 'ext3': ['e2fsprogs'], |
77 | - 'ext4': ['e2fsprogs'], |
78 | - 'jfs': ['jfsutils'], |
79 | - 'lvm_partition': ['lvm2'], |
80 | - 'lvm_volgroup': ['lvm2'], |
81 | - 'ntfs': ['ntfs-3g'], |
82 | - 'raid': ['mdadm'], |
83 | - 'reiserfs': ['reiserfsprogs'], |
84 | - 'xfs': ['xfsprogs'], |
85 | - 'zfsroot': ['zfsutils-linux', 'zfs-initramfs'], |
86 | - 'zfs': ['zfsutils-linux', 'zfs-initramfs'], |
87 | - 'zpool': ['zfsutils-linux', 'zfs-initramfs'], |
88 | - }, |
89 | - }, |
90 | - } |
91 | - return mapping |
92 | - |
93 | - |
94 | def get_supported_filesystems(): |
95 | """ Return a list of filesystems that the kernel currently supports |
96 | as read from /proc/filesystems. |
97 | diff --git a/curtin/block/deps.py b/curtin/block/deps.py |
98 | new file mode 100644 |
99 | index 0000000..930f764 |
100 | --- /dev/null |
101 | +++ b/curtin/block/deps.py |
102 | @@ -0,0 +1,103 @@ |
103 | +# This file is part of curtin. See LICENSE file for copyright and license info. |
104 | + |
105 | +from curtin.distro import DISTROS |
106 | +from curtin.block import iscsi |
107 | + |
108 | + |
109 | +def storage_config_required_packages(storage_config, mapping): |
110 | + """Read storage configuration dictionary and determine |
111 | + which packages are required for the supplied configuration |
112 | + to function. Return a list of packaged to install. |
113 | + """ |
114 | + |
115 | + if not storage_config or not isinstance(storage_config, dict): |
116 | + raise ValueError('Invalid storage configuration. ' |
117 | + 'Must be a dict:\n %s' % storage_config) |
118 | + |
119 | + if not mapping or not isinstance(mapping, dict): |
120 | + raise ValueError('Invalid storage mapping. Must be a dict') |
121 | + |
122 | + if 'storage' in storage_config: |
123 | + storage_config = storage_config.get('storage') |
124 | + |
125 | + needed_packages = [] |
126 | + |
127 | + # get reqs by device operation type |
128 | + dev_configs = set(operation['type'] |
129 | + for operation in storage_config['config']) |
130 | + |
131 | + for dev_type in dev_configs: |
132 | + if dev_type in mapping: |
133 | + needed_packages.extend(mapping[dev_type]) |
134 | + |
135 | + # for disks with path: iscsi: we need iscsi tools |
136 | + iscsi_vols = iscsi.get_iscsi_volumes_from_config(storage_config) |
137 | + if len(iscsi_vols) > 0: |
138 | + needed_packages.extend(mapping['iscsi']) |
139 | + |
140 | + # for any format operations, check the fstype and |
141 | + # determine if we need any mkfs tools as well. |
142 | + format_configs = set([operation['fstype'] |
143 | + for operation in storage_config['config'] |
144 | + if operation['type'] == 'format']) |
145 | + for format_type in format_configs: |
146 | + if format_type in mapping: |
147 | + needed_packages.extend(mapping[format_type]) |
148 | + |
149 | + return needed_packages |
150 | + |
151 | + |
152 | +def detect_required_packages_mapping(osfamily=DISTROS.debian): |
153 | + """Return a dictionary providing a versioned configuration which maps |
154 | + storage configuration elements to the packages which are required |
155 | + for functionality. |
156 | + |
157 | + The mapping key is either a config type value, or an fstype value. |
158 | + |
159 | + """ |
160 | + distro_mapping = { |
161 | + DISTROS.debian: { |
162 | + 'bcache': ['bcache-tools'], |
163 | + 'btrfs': ['btrfs-tools'], |
164 | + 'ext2': ['e2fsprogs'], |
165 | + 'ext3': ['e2fsprogs'], |
166 | + 'ext4': ['e2fsprogs'], |
167 | + 'jfs': ['jfsutils'], |
168 | + 'iscsi': ['open-iscsi'], |
169 | + 'lvm_partition': ['lvm2'], |
170 | + 'lvm_volgroup': ['lvm2'], |
171 | + 'ntfs': ['ntfs-3g'], |
172 | + 'raid': ['mdadm'], |
173 | + 'reiserfs': ['reiserfsprogs'], |
174 | + 'xfs': ['xfsprogs'], |
175 | + 'zfsroot': ['zfsutils-linux', 'zfs-initramfs'], |
176 | + 'zfs': ['zfsutils-linux', 'zfs-initramfs'], |
177 | + 'zpool': ['zfsutils-linux', 'zfs-initramfs'], |
178 | + }, |
179 | + DISTROS.redhat: { |
180 | + 'bcache': [], |
181 | + 'btrfs': ['btrfs-progs'], |
182 | + 'ext2': ['e2fsprogs'], |
183 | + 'ext3': ['e2fsprogs'], |
184 | + 'ext4': ['e2fsprogs'], |
185 | + 'jfs': [], |
186 | + 'iscsi': ['iscsi-initiator-utils'], |
187 | + 'lvm_partition': ['lvm2'], |
188 | + 'lvm_volgroup': ['lvm2'], |
189 | + 'ntfs': [], |
190 | + 'raid': ['mdadm'], |
191 | + 'reiserfs': [], |
192 | + 'xfs': ['xfsprogs'], |
193 | + 'zfsroot': [], |
194 | + 'zfs': [], |
195 | + 'zpool': [], |
196 | + }, |
197 | + } |
198 | + if osfamily not in distro_mapping: |
199 | + raise ValueError('No block package mapping for distro: %s' % osfamily) |
200 | + |
201 | + return {1: {'handler': storage_config_required_packages, |
202 | + 'mapping': distro_mapping.get(osfamily)}} |
203 | + |
204 | + |
205 | +# vi: ts=4 expandtab syntax=python |
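The new `curtin/block/deps.py` keys package requirements off both device operation types and `format` fstypes, then dispatches through a versioned `{1: {'handler': ..., 'mapping': ...}}` structure. A minimal standalone sketch of that collection logic (the `required_packages` helper, `DEBIAN_MAPPING` subset, and sample config below are illustrative stand-ins, not curtin's actual API):

```python
# Stand-in for curtin.block.deps.storage_config_required_packages:
# walk a storage config and collect packages from an osfamily-specific map.

DEBIAN_MAPPING = {
    'raid': ['mdadm'],
    'lvm_volgroup': ['lvm2'],
    'xfs': ['xfsprogs'],
}


def required_packages(storage_config, mapping):
    """Collect packages needed by device op types and format fstypes."""
    config = storage_config['config']
    needed = []
    # one lookup per distinct device operation type
    for op_type in set(op['type'] for op in config):
        needed.extend(mapping.get(op_type, []))
    # format operations additionally pull in mkfs tooling by fstype
    for fstype in set(op['fstype'] for op in config
                      if op['type'] == 'format'):
        needed.extend(mapping.get(fstype, []))
    return needed


sample = {'config': [
    {'type': 'raid', 'id': 'md0'},
    {'type': 'format', 'fstype': 'xfs', 'id': 'fs0'},
]}
print(sorted(required_packages(sample, DEBIAN_MAPPING)))
```

Swapping in the `DISTROS.redhat` mapping is what lets the same config resolve to `btrfs-progs` or `iscsi-initiator-utils` on CentOS.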
206 | diff --git a/curtin/block/iscsi.py b/curtin/block/iscsi.py |
207 | index 0c666b6..3c46500 100644 |
208 | --- a/curtin/block/iscsi.py |
209 | +++ b/curtin/block/iscsi.py |
210 | @@ -9,7 +9,7 @@ import os |
211 | import re |
212 | import shutil |
213 | |
214 | -from curtin import (util, udev) |
215 | +from curtin import (paths, util, udev) |
216 | from curtin.block import (get_device_slave_knames, |
217 | path_to_kname) |
218 | |
219 | @@ -230,29 +230,45 @@ def connected_disks(): |
220 | return _ISCSI_DISKS |
221 | |
222 | |
223 | -def get_iscsi_disks_from_config(cfg): |
224 | +def get_iscsi_volumes_from_config(cfg): |
225 | """Parse a curtin storage config and return a list |
226 | - of iscsi disk objects for each configuration present |
227 | + of iscsi disk rfc4173 uris for each configuration present. |
228 | """ |
229 | if not cfg: |
230 | cfg = {} |
231 | |
232 | - sconfig = cfg.get('storage', {}).get('config', {}) |
233 | - if not sconfig: |
234 | + if 'storage' in cfg: |
235 | + sconfig = cfg.get('storage', {}).get('config', []) |
236 | + else: |
237 | + sconfig = cfg.get('config', []) |
238 | + if not sconfig or not isinstance(sconfig, list): |
239 | LOG.warning('Configuration dictionary did not contain' |
240 | ' a storage configuration') |
241 | return [] |
242 | |
243 | + return [disk['path'] for disk in sconfig |
244 | + if disk['type'] == 'disk' and |
245 | + disk.get('path', "").startswith('iscsi:')] |
246 | + |
247 | + |
248 | +def get_iscsi_disks_from_config(cfg): |
249 | + """Return a list of IscsiDisk objects for each iscsi volume present.""" |
250 | # Construct IscsiDisk objects for each iscsi volume present |
251 | - iscsi_disks = [IscsiDisk(disk['path']) for disk in sconfig |
252 | - if disk['type'] == 'disk' and |
253 | - disk.get('path', "").startswith('iscsi:')] |
254 | + iscsi_disks = [IscsiDisk(volume) for volume in |
255 | + get_iscsi_volumes_from_config(cfg)] |
256 | LOG.debug('Found %s iscsi disks in storage config', len(iscsi_disks)) |
257 | return iscsi_disks |
258 | |
259 | |
260 | +def get_iscsi_ports_from_config(cfg): |
261 | + """Return a set of ports that may be used when connecting to volumes.""" |
262 | + ports = set([d.port for d in get_iscsi_disks_from_config(cfg)]) |
263 | + LOG.debug('Found iscsi ports in use: %s', ports) |
264 | + return ports |
265 | + |
266 | + |
267 | def disconnect_target_disks(target_root_path=None): |
268 | - target_nodes_path = util.target_path(target_root_path, '/etc/iscsi/nodes') |
269 | + target_nodes_path = paths.target_path(target_root_path, '/etc/iscsi/nodes') |
270 | fails = [] |
271 | if os.path.isdir(target_nodes_path): |
272 | for target in os.listdir(target_nodes_path): |
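The refactored `get_iscsi_volumes_from_config()` now accepts either a top-level config (with a `storage` key) or a bare storage config, and returns the RFC 4173 `iscsi:` URIs rather than constructed `IscsiDisk` objects. A rough approximation of that filter, with a made-up sample config:

```python
# Sketch of the filtering in get_iscsi_volumes_from_config(): keep disk
# entries whose path is an RFC 4173 iscsi: URI. Sample data is invented.

def iscsi_volumes(cfg):
    # accept both {'storage': {'config': [...]}} and {'config': [...]}
    sconfig = cfg.get('storage', cfg).get('config', [])
    if not isinstance(sconfig, list):
        return []
    return [disk['path'] for disk in sconfig
            if disk['type'] == 'disk' and
            disk.get('path', '').startswith('iscsi:')]


cfg = {'storage': {'config': [
    {'type': 'disk', 'id': 'sda', 'path': '/dev/sda'},
    {'type': 'disk', 'id': 'iscsi1',
     'path': 'iscsi:192.168.1.12::3260:1:iqn.2017-04.com.example:target1'},
]}}
print(iscsi_volumes(cfg))
```

Returning plain URIs lets `storage_config_required_packages()` decide whether iSCSI tooling is needed without touching any device state.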
273 | diff --git a/curtin/block/lvm.py b/curtin/block/lvm.py |
274 | index eca64f6..b3f8bcb 100644 |
275 | --- a/curtin/block/lvm.py |
276 | +++ b/curtin/block/lvm.py |
277 | @@ -4,6 +4,7 @@ |
278 | This module provides some helper functions for manipulating lvm devices |
279 | """ |
280 | |
281 | +from curtin import distro |
282 | from curtin import util |
283 | from curtin.log import LOG |
284 | import os |
285 | @@ -88,7 +89,7 @@ def lvm_scan(activate=True): |
286 | # before appending the cache flag though, check if lvmetad is running. this |
287 | # ensures that we do the right thing even if lvmetad is supported but is |
288 | # not running |
289 | - release = util.lsb_release().get('codename') |
290 | + release = distro.lsb_release().get('codename') |
291 | if release in [None, 'UNAVAILABLE']: |
292 | LOG.warning('unable to find release number, assuming xenial or later') |
293 | release = 'xenial' |
294 | diff --git a/curtin/block/mdadm.py b/curtin/block/mdadm.py |
295 | index 8eff7fb..4ad6aa7 100644 |
296 | --- a/curtin/block/mdadm.py |
297 | +++ b/curtin/block/mdadm.py |
298 | @@ -13,6 +13,7 @@ import time |
299 | |
300 | from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path) |
301 | from curtin.block import get_holders |
302 | +from curtin.distro import lsb_release |
303 | from curtin import (util, udev) |
304 | from curtin.log import LOG |
305 | |
306 | @@ -95,7 +96,7 @@ VALID_RAID_ARRAY_STATES = ( |
307 | checks the mdadm version and will return True if we can use --export |
308 | for key=value list with enough info, false if version is less than |
309 | ''' |
310 | -MDADM_USE_EXPORT = util.lsb_release()['codename'] not in ['precise', 'trusty'] |
311 | +MDADM_USE_EXPORT = lsb_release()['codename'] not in ['precise', 'trusty'] |
312 | |
313 | # |
314 | # mdadm executors |
315 | diff --git a/curtin/block/mkfs.py b/curtin/block/mkfs.py |
316 | index f39017c..4a1e1f9 100644 |
317 | --- a/curtin/block/mkfs.py |
318 | +++ b/curtin/block/mkfs.py |
319 | @@ -3,8 +3,9 @@ |
320 | # This module wraps calls to mkfs.<fstype> and determines the appropriate flags |
321 | # for each filesystem type |
322 | |
323 | -from curtin import util |
324 | from curtin import block |
325 | +from curtin import distro |
326 | +from curtin import util |
327 | |
328 | import string |
329 | import os |
330 | @@ -102,7 +103,7 @@ def valid_fstypes(): |
331 | |
332 | def get_flag_mapping(flag_name, fs_family, param=None, strict=False): |
333 | ret = [] |
334 | - release = util.lsb_release()['codename'] |
335 | + release = distro.lsb_release()['codename'] |
336 | overrides = release_flag_mapping_overrides.get(release, {}) |
337 | if flag_name in overrides and fs_family in overrides[flag_name]: |
338 | flag_sym = overrides[flag_name][fs_family] |
339 | diff --git a/curtin/block/zfs.py b/curtin/block/zfs.py |
340 | index e279ab6..5615144 100644 |
341 | --- a/curtin/block/zfs.py |
342 | +++ b/curtin/block/zfs.py |
343 | @@ -7,6 +7,7 @@ and volumes.""" |
344 | import os |
345 | |
346 | from curtin.config import merge_config |
347 | +from curtin import distro |
348 | from curtin import util |
349 | from . import blkid, get_supported_filesystems |
350 | |
351 | @@ -90,7 +91,7 @@ def zfs_assert_supported(): |
352 | if arch in ZFS_UNSUPPORTED_ARCHES: |
353 | raise RuntimeError("zfs is not supported on architecture: %s" % arch) |
354 | |
355 | - release = util.lsb_release()['codename'] |
356 | + release = distro.lsb_release()['codename'] |
357 | if release in ZFS_UNSUPPORTED_RELEASES: |
358 | raise RuntimeError("zfs is not supported on release: %s" % release) |
359 | |
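The hunks above in `lvm.py`, `mdadm.py`, `mkfs.py`, and `zfs.py` all follow the same pattern: `util.lsb_release()` becomes `distro.lsb_release()`, and callers gate behavior on the returned codename. A compressed sketch of that gating (the `lsb_release` stub and the flag choices are illustrative, loosely mirroring `lvm_scan`'s "assume xenial" fallback and `lvm_partition_handler`'s `--wipesignatures` check):

```python
# Illustration of the codename-gating pattern; lsb_release here is a
# stub, not curtin.distro.lsb_release.

def lsb_release():
    # stand-in: the real helper shells out / reads os-release, and can
    # return 'UNAVAILABLE' when no release info is found
    return {'codename': 'UNAVAILABLE'}


def lvcreate_flags():
    release = lsb_release().get('codename')
    if release in (None, 'UNAVAILABLE'):
        # curtin logs a warning and assumes xenial or later
        release = 'xenial'
    flags = ['--zero=y']
    # older releases lack lvcreate --wipesignatures
    if release not in ('precise', 'trusty'):
        flags.append('--wipesignatures=y')
    return release, flags


print(lvcreate_flags())
```

Centralizing this in `curtin.distro` is what later allows a non-Debian osfamily to provide its own release detection.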
360 | diff --git a/curtin/commands/apply_net.py b/curtin/commands/apply_net.py |
361 | index ffd474e..ddc5056 100644 |
362 | --- a/curtin/commands/apply_net.py |
363 | +++ b/curtin/commands/apply_net.py |
364 | @@ -7,6 +7,7 @@ from .. import log |
365 | import curtin.net as net |
366 | import curtin.util as util |
367 | from curtin import config |
368 | +from curtin import paths |
369 | from . import populate_one_subcmd |
370 | |
371 | |
372 | @@ -123,7 +124,7 @@ def _patch_ifupdown_ipv6_mtu_hook(target, |
373 | |
374 | for hook in ['prehook', 'posthook']: |
375 | fn = hookfn[hook] |
376 | - cfg = util.target_path(target, path=fn) |
377 | + cfg = paths.target_path(target, path=fn) |
378 | LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg) |
379 | util.write_file(cfg, contents[hook], mode=0o755) |
380 | |
381 | @@ -136,7 +137,7 @@ def _disable_ipv6_privacy_extensions(target, |
382 | Resolve this by allowing the cloud-image setting to win. """ |
383 | |
384 | LOG.debug('Attempting to remove ipv6 privacy extensions') |
385 | - cfg = util.target_path(target, path=path) |
386 | + cfg = paths.target_path(target, path=path) |
387 | if not os.path.exists(cfg): |
388 | LOG.warn('Failed to find ipv6 privacy conf file %s', cfg) |
389 | return |
390 | @@ -182,7 +183,7 @@ def _maybe_remove_legacy_eth0(target, |
391 | - with unknown content, leave it and warn |
392 | """ |
393 | |
394 | - cfg = util.target_path(target, path=path) |
395 | + cfg = paths.target_path(target, path=path) |
396 | if not os.path.exists(cfg): |
397 | LOG.warn('Failed to find legacy network conf file %s', cfg) |
398 | return |
399 | diff --git a/curtin/commands/apt_config.py b/curtin/commands/apt_config.py |
400 | index 41c329e..9ce25b3 100644 |
401 | --- a/curtin/commands/apt_config.py |
402 | +++ b/curtin/commands/apt_config.py |
403 | @@ -13,7 +13,7 @@ import sys |
404 | import yaml |
405 | |
406 | from curtin.log import LOG |
407 | -from curtin import (config, util, gpg) |
408 | +from curtin import (config, distro, gpg, paths, util) |
409 | |
410 | from . import populate_one_subcmd |
411 | |
412 | @@ -61,7 +61,7 @@ def handle_apt(cfg, target=None): |
413 | curthooks if a global apt config was provided or via the "apt" |
414 | standalone command. |
415 | """ |
416 | - release = util.lsb_release(target=target)['codename'] |
417 | + release = distro.lsb_release(target=target)['codename'] |
418 | arch = util.get_architecture(target) |
419 | mirrors = find_apt_mirror_info(cfg, arch) |
420 | LOG.debug("Apt Mirror info: %s", mirrors) |
421 | @@ -148,7 +148,7 @@ def apply_debconf_selections(cfg, target=None): |
422 | pkg = re.sub(r"[:\s].*", "", line) |
423 | pkgs_cfgd.add(pkg) |
424 | |
425 | - pkgs_installed = util.get_installed_packages(target) |
426 | + pkgs_installed = distro.get_installed_packages(target) |
427 | |
428 | LOG.debug("pkgs_cfgd: %s", pkgs_cfgd) |
429 | LOG.debug("pkgs_installed: %s", pkgs_installed) |
430 | @@ -164,7 +164,7 @@ def apply_debconf_selections(cfg, target=None): |
431 | def clean_cloud_init(target): |
432 | """clean out any local cloud-init config""" |
433 | flist = glob.glob( |
434 | - util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*")) |
435 | + paths.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*")) |
436 | |
437 | LOG.debug("cleaning cloud-init config from: %s", flist) |
438 | for dpkg_cfg in flist: |
439 | @@ -194,7 +194,7 @@ def rename_apt_lists(new_mirrors, target=None): |
440 | """rename_apt_lists - rename apt lists to preserve old cache data""" |
441 | default_mirrors = get_default_mirrors(util.get_architecture(target)) |
442 | |
443 | - pre = util.target_path(target, APT_LISTS) |
444 | + pre = paths.target_path(target, APT_LISTS) |
445 | for (name, omirror) in default_mirrors.items(): |
446 | nmirror = new_mirrors.get(name) |
447 | if not nmirror: |
448 | @@ -299,7 +299,7 @@ def generate_sources_list(cfg, release, mirrors, target=None): |
449 | if tmpl is None: |
450 | LOG.info("No custom template provided, fall back to modify" |
451 | "mirrors in %s on the target system", aptsrc) |
452 | - tmpl = util.load_file(util.target_path(target, aptsrc)) |
453 | + tmpl = util.load_file(paths.target_path(target, aptsrc)) |
454 | # Strategy if no custom template was provided: |
455 | # - Only replacing mirrors |
456 | # - no reason to replace "release" as it is from target anyway |
457 | @@ -310,24 +310,24 @@ def generate_sources_list(cfg, release, mirrors, target=None): |
458 | tmpl = mirror_to_placeholder(tmpl, default_mirrors['SECURITY'], |
459 | "$SECURITY") |
460 | |
461 | - orig = util.target_path(target, aptsrc) |
462 | + orig = paths.target_path(target, aptsrc) |
463 | if os.path.exists(orig): |
464 | os.rename(orig, orig + ".curtin.old") |
465 | |
466 | rendered = util.render_string(tmpl, params) |
467 | disabled = disable_suites(cfg.get('disable_suites'), rendered, release) |
468 | - util.write_file(util.target_path(target, aptsrc), disabled, mode=0o644) |
469 | + util.write_file(paths.target_path(target, aptsrc), disabled, mode=0o644) |
470 | |
471 | # protect the just generated sources.list from cloud-init |
472 | cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg" |
473 | # this has to work with older cloud-init as well, so use old key |
474 | cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1) |
475 | try: |
476 | - util.write_file(util.target_path(target, cloudfile), |
477 | + util.write_file(paths.target_path(target, cloudfile), |
478 | cloudconf, mode=0o644) |
479 | except IOError: |
480 | LOG.exception("Failed to protect source.list from cloud-init in (%s)", |
481 | - util.target_path(target, cloudfile)) |
482 | + paths.target_path(target, cloudfile)) |
483 | raise |
484 | |
485 | |
486 | @@ -409,7 +409,7 @@ def add_apt_sources(srcdict, target=None, template_params=None, |
487 | raise |
488 | continue |
489 | |
490 | - sourcefn = util.target_path(target, ent['filename']) |
491 | + sourcefn = paths.target_path(target, ent['filename']) |
492 | try: |
493 | contents = "%s\n" % (source) |
494 | util.write_file(sourcefn, contents, omode="a") |
495 | @@ -417,8 +417,8 @@ def add_apt_sources(srcdict, target=None, template_params=None, |
496 | LOG.exception("failed write to file %s: %s", sourcefn, detail) |
497 | raise |
498 | |
499 | - util.apt_update(target=target, force=True, |
500 | - comment="apt-source changed config") |
501 | + distro.apt_update(target=target, force=True, |
502 | + comment="apt-source changed config") |
503 | |
504 | return |
505 | |
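The many `util.target_path` to `paths.target_path` substitutions in `apply_net.py` and `apt_config.py` rely on one small behavior: join a path onto the mounted target root, falling back to the live system root when no target is given. An approximation of that helper (a sketch of the expected semantics, not a copy of `curtin/paths.py`):

```python
# Approximation of curtin.paths.target_path(): anchor a path under a
# target root; an empty/None target means the live system root.
import os


def target_path(target, path=None):
    if target in (None, ''):
        target = '/'
    if not path:
        return target
    # strip leading slashes so os.path.join stays under target
    return os.path.join(target, path.lstrip('/'))


print(target_path('/tmp/mnt', '/etc/cloud/cloud.cfg.d'))
print(target_path(None, '/etc/iscsi/nodes'))
```

This is why every config file write in the hunks above goes through it: the same code path works during install (target mounted at e.g. `/tmp/mnt`) and when run against the running system.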
506 | diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py |
507 | index 6bd430d..197c1fd 100644 |
508 | --- a/curtin/commands/block_meta.py |
509 | +++ b/curtin/commands/block_meta.py |
510 | @@ -1,8 +1,9 @@ |
511 | # This file is part of curtin. See LICENSE file for copyright and license info. |
512 | |
513 | from collections import OrderedDict, namedtuple |
514 | -from curtin import (block, config, util) |
515 | +from curtin import (block, config, paths, util) |
516 | from curtin.block import (bcache, mdadm, mkfs, clear_holders, lvm, iscsi, zfs) |
517 | +from curtin import distro |
518 | from curtin.log import LOG, logged_time |
519 | from curtin.reporter import events |
520 | |
521 | @@ -730,12 +731,12 @@ def mount_fstab_data(fdata, target=None): |
522 | |
523 | :param fdata: a FstabData type |
524 | :return None.""" |
525 | - mp = util.target_path(target, fdata.path) |
526 | + mp = paths.target_path(target, fdata.path) |
527 | if fdata.device: |
528 | device = fdata.device |
529 | else: |
530 | if fdata.spec.startswith("/") and not fdata.spec.startswith("/dev/"): |
531 | - device = util.target_path(target, fdata.spec) |
532 | + device = paths.target_path(target, fdata.spec) |
533 | else: |
534 | device = fdata.spec |
535 | |
536 | @@ -856,7 +857,7 @@ def lvm_partition_handler(info, storage_config): |
537 | # Use 'wipesignatures' (if available) and 'zero' to clear target lv |
538 | # of any fs metadata |
539 | cmd = ["lvcreate", volgroup, "--name", name, "--zero=y"] |
540 | - release = util.lsb_release()['codename'] |
541 | + release = distro.lsb_release()['codename'] |
542 | if release not in ['precise', 'trusty']: |
543 | cmd.extend(["--wipesignatures=y"]) |
544 | |
545 | diff --git a/curtin/commands/curthooks.py b/curtin/commands/curthooks.py |
546 | index f9a5a66..480eca4 100644 |
547 | --- a/curtin/commands/curthooks.py |
548 | +++ b/curtin/commands/curthooks.py |
549 | @@ -11,12 +11,18 @@ import textwrap |
550 | |
551 | from curtin import config |
552 | from curtin import block |
553 | +from curtin import distro |
554 | +from curtin.block import iscsi |
555 | from curtin import net |
556 | from curtin import futil |
557 | from curtin.log import LOG |
558 | +from curtin import paths |
559 | from curtin import swap |
560 | from curtin import util |
561 | from curtin import version as curtin_version |
562 | +from curtin.block import deps as bdeps |
563 | +from curtin.distro import DISTROS |
564 | +from curtin.net import deps as ndeps |
565 | from curtin.reporter import events |
566 | from curtin.commands import apply_net, apt_config |
567 | from curtin.url_helper import get_maas_version |
568 | @@ -173,10 +179,10 @@ def install_kernel(cfg, target): |
569 | # target only has required packages installed. See LP:1640519 |
570 | fk_packages = get_flash_kernel_pkgs() |
571 | if fk_packages: |
572 | - util.install_packages(fk_packages.split(), target=target) |
573 | + distro.install_packages(fk_packages.split(), target=target) |
574 | |
575 | if kernel_package: |
576 | - util.install_packages([kernel_package], target=target) |
577 | + distro.install_packages([kernel_package], target=target) |
578 | return |
579 | |
580 | # uname[2] is kernel name (ie: 3.16.0-7-generic) |
581 | @@ -193,24 +199,24 @@ def install_kernel(cfg, target): |
582 | LOG.warn("Couldn't detect kernel package to install for %s." |
583 | % kernel) |
584 | if kernel_fallback is not None: |
585 | - util.install_packages([kernel_fallback], target=target) |
586 | + distro.install_packages([kernel_fallback], target=target) |
587 | return |
588 | |
589 | package = "linux-{flavor}{map_suffix}".format( |
590 | flavor=flavor, map_suffix=map_suffix) |
591 | |
592 | - if util.has_pkg_available(package, target): |
593 | - if util.has_pkg_installed(package, target): |
594 | + if distro.has_pkg_available(package, target): |
595 | + if distro.has_pkg_installed(package, target): |
596 | LOG.debug("Kernel package '%s' already installed", package) |
597 | else: |
598 | LOG.debug("installing kernel package '%s'", package) |
599 | - util.install_packages([package], target=target) |
600 | + distro.install_packages([package], target=target) |
601 | else: |
602 | if kernel_fallback is not None: |
603 | LOG.info("Kernel package '%s' not available. " |
604 | "Installing fallback package '%s'.", |
605 | package, kernel_fallback) |
606 | - util.install_packages([kernel_fallback], target=target) |
607 | + distro.install_packages([kernel_fallback], target=target) |
608 | else: |
609 | LOG.warn("Kernel package '%s' not available and no fallback." |
610 | " System may not boot.", package) |
611 | @@ -273,7 +279,7 @@ def uefi_reorder_loaders(grubcfg, target): |
612 | LOG.debug("Currently booted UEFI loader might no longer boot.") |
613 | |
614 | |
615 | -def setup_grub(cfg, target): |
616 | +def setup_grub(cfg, target, osfamily=DISTROS.debian): |
617 | # target is the path to the mounted filesystem |
618 | |
619 | # FIXME: these methods need moving to curtin.block |
620 | @@ -353,24 +359,6 @@ def setup_grub(cfg, target): |
621 | else: |
622 | instdevs = list(blockdevs) |
623 | |
624 | - # UEFI requires grub-efi-{arch}. If a signed version of that package |
625 | - # exists then it will be installed. |
626 | - if util.is_uefi_bootable(): |
627 | - arch = util.get_architecture() |
628 | - pkgs = ['grub-efi-%s' % arch] |
629 | - |
630 | - # Architecture might support a signed UEFI loader |
631 | - uefi_pkg_signed = 'grub-efi-%s-signed' % arch |
632 | - if util.has_pkg_available(uefi_pkg_signed): |
633 | - pkgs.append(uefi_pkg_signed) |
634 | - |
635 | - # AMD64 has shim-signed for SecureBoot support |
636 | - if arch == "amd64": |
637 | - pkgs.append("shim-signed") |
638 | - |
639 | - # Install the UEFI packages needed for the architecture |
640 | - util.install_packages(pkgs, target=target) |
641 | - |
642 | env = os.environ.copy() |
643 | |
644 | replace_default = grubcfg.get('replace_linux_default', True) |
645 | @@ -399,6 +387,7 @@ def setup_grub(cfg, target): |
646 | else: |
647 | LOG.debug("NOT enabling UEFI nvram updates") |
648 | LOG.debug("Target system may not boot") |
649 | + args.append('--os-family=%s' % osfamily) |
650 | args.append(target) |
651 | |
652 | # capture stdout and stderr joined. |
653 | @@ -435,14 +424,21 @@ def copy_crypttab(crypttab, target): |
654 | shutil.copy(crypttab, os.path.sep.join([target, 'etc/crypttab'])) |
655 | |
656 | |
657 | -def copy_iscsi_conf(nodes_dir, target): |
658 | +def copy_iscsi_conf(nodes_dir, target, target_nodes_dir='etc/iscsi/nodes'): |
659 | if not nodes_dir: |
660 | LOG.warn("nodes directory must be specified, not copying") |
661 | return |
662 | |
663 | LOG.info("copying iscsi nodes database into target") |
664 | - shutil.copytree(nodes_dir, os.path.sep.join([target, |
665 | - 'etc/iscsi/nodes'])) |
666 | + tdir = os.path.sep.join([target, target_nodes_dir]) |
667 | + if not os.path.exists(tdir): |
668 | + shutil.copytree(nodes_dir, tdir) |
669 | + else: |
670 | + # if /etc/iscsi/nodes exists, copy dirs underneath |
671 | + for ndir in os.listdir(nodes_dir): |
672 | + source_dir = os.path.join(nodes_dir, ndir) |
673 | + target_dir = os.path.join(tdir, ndir) |
674 | + shutil.copytree(source_dir, target_dir) |
675 | |
676 | |
677 | def copy_mdadm_conf(mdadm_conf, target): |
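The `copy_iscsi_conf` change above handles targets where `/etc/iscsi/nodes` already exists (as on RHEL-family images): instead of letting `shutil.copytree` fail on an existing destination, it merges by copying each node subtree individually. A self-contained sketch of that merge, exercised against temporary directories (paths and names are illustrative):

```python
# Sketch of the merge-copy added to copy_iscsi_conf(): copytree when the
# destination is absent, per-subtree copies when it already exists.
import os
import shutil
import tempfile


def merge_copytree(source_root, dest_root):
    if not os.path.exists(dest_root):
        shutil.copytree(source_root, dest_root)
        return
    # destination exists: copy each child tree under it
    for name in os.listdir(source_root):
        shutil.copytree(os.path.join(source_root, name),
                        os.path.join(dest_root, name))


# demo: dest already exists, so the per-subtree branch is taken
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, 'iqn.example:target1'))
dst = tempfile.mkdtemp()  # pre-existing, like /etc/iscsi/nodes
merge_copytree(src, dst)
copied = os.path.isdir(os.path.join(dst, 'iqn.example:target1'))
print(copied)
```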
678 | @@ -486,7 +482,7 @@ def copy_dname_rules(rules_d, target): |
679 | if not rules_d: |
680 | LOG.warn("no udev rules directory to copy") |
681 | return |
682 | - target_rules_dir = util.target_path(target, "etc/udev/rules.d") |
683 | + target_rules_dir = paths.target_path(target, "etc/udev/rules.d") |
684 | for rule in os.listdir(rules_d): |
685 | target_file = os.path.join(target_rules_dir, rule) |
686 | shutil.copy(os.path.join(rules_d, rule), target_file) |
687 | @@ -532,11 +528,19 @@ def add_swap(cfg, target, fstab): |
688 | maxsize=maxsize) |
689 | |
690 | |
691 | -def detect_and_handle_multipath(cfg, target): |
692 | - DEFAULT_MULTIPATH_PACKAGES = ['multipath-tools-boot'] |
693 | +def detect_and_handle_multipath(cfg, target, osfamily=DISTROS.debian): |
694 | + DEFAULT_MULTIPATH_PACKAGES = { |
695 | + DISTROS.debian: ['multipath-tools-boot'], |
696 | + DISTROS.redhat: ['device-mapper-multipath'], |
697 | + } |
698 | + if osfamily not in DEFAULT_MULTIPATH_PACKAGES: |
699 | + raise ValueError( |
700 | + 'No multipath package mapping for distro: %s' % osfamily) |
701 | + |
702 | mpcfg = cfg.get('multipath', {}) |
703 | mpmode = mpcfg.get('mode', 'auto') |
704 | - mppkgs = mpcfg.get('packages', DEFAULT_MULTIPATH_PACKAGES) |
705 | + mppkgs = mpcfg.get('packages', |
706 | + DEFAULT_MULTIPATH_PACKAGES.get(osfamily)) |
707 | mpbindings = mpcfg.get('overwrite_bindings', True) |
708 | |
709 | if isinstance(mppkgs, str): |
710 | @@ -549,23 +553,28 @@ def detect_and_handle_multipath(cfg, target): |
711 | return |
712 | |
713 | LOG.info("Detected multipath devices. Installing support via %s", mppkgs) |
714 | + needed = [pkg for pkg in mppkgs if pkg |
715 | + not in distro.get_installed_packages(target)] |
716 | + if needed: |
717 | + distro.install_packages(needed, target=target, osfamily=osfamily) |
718 | |
719 | - util.install_packages(mppkgs, target=target) |
720 | replace_spaces = True |
721 | - try: |
722 | - # check in-target version |
723 | - pkg_ver = util.get_package_version('multipath-tools', target=target) |
724 | - LOG.debug("get_package_version:\n%s", pkg_ver) |
725 | - LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)", |
726 | - pkg_ver['semantic_version'], pkg_ver['major'], |
727 | - pkg_ver['minor'], pkg_ver['micro']) |
728 | - # multipath-tools versions < 0.5.0 do _NOT_ want whitespace replaced |
729 | - # i.e. 0.4.X in Trusty. |
730 | - if pkg_ver['semantic_version'] < 500: |
731 | - replace_spaces = False |
732 | - except Exception as e: |
733 | - LOG.warn("failed reading multipath-tools version, " |
734 | - "assuming it wants no spaces in wwids: %s", e) |
735 | + if osfamily == DISTROS.debian: |
736 | + try: |
737 | + # check in-target version |
738 | + pkg_ver = distro.get_package_version('multipath-tools', |
739 | + target=target) |
740 | + LOG.debug("get_package_version:\n%s", pkg_ver) |
741 | + LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)", |
742 | + pkg_ver['semantic_version'], pkg_ver['major'], |
743 | + pkg_ver['minor'], pkg_ver['micro']) |
744 | + # multipath-tools versions < 0.5.0 do _NOT_ |
745 | + # want whitespace replaced i.e. 0.4.X in Trusty. |
746 | + if pkg_ver['semantic_version'] < 500: |
747 | + replace_spaces = False |
748 | + except Exception as e: |
749 | + LOG.warn("failed reading multipath-tools version, " |
750 | + "assuming it wants no spaces in wwids: %s", e) |
751 | |
752 | multipath_cfg_path = os.path.sep.join([target, '/etc/multipath.conf']) |
753 | multipath_bind_path = os.path.sep.join([target, '/etc/multipath/bindings']) |
754 | @@ -574,7 +583,7 @@ def detect_and_handle_multipath(cfg, target): |
755 | if not os.path.isfile(multipath_cfg_path): |
756 | # Without user_friendly_names option enabled system fails to boot |
757 | # if any of the disks has spaces in its name. Package multipath-tools |
758 | - # has bug opened for this issue (LP: 1432062) but it was not fixed yet. |
759 | + # has bug opened for this issue LP: #1432062 but it was not fixed yet. |
760 | multipath_cfg_content = '\n'.join( |
761 | ['# This file was created by curtin while installing the system.', |
762 | 'defaults {', |
763 | @@ -593,7 +602,13 @@ def detect_and_handle_multipath(cfg, target): |
764 | mpname = "mpath0" |
765 | grub_dev = "/dev/mapper/" + mpname |
766 | if partno is not None: |
767 | - grub_dev += "-part%s" % partno |
768 | + if osfamily == DISTROS.debian: |
769 | + grub_dev += "-part%s" % partno |
770 | + elif osfamily == DISTROS.redhat: |
771 | + grub_dev += "p%s" % partno |
772 | + else: |
773 | + raise ValueError( |
774 | + 'Unknown grub_dev mapping for distro: %s' % osfamily) |
775 | |
776 | LOG.debug("configuring multipath install for root=%s wwid=%s", |
777 | grub_dev, wwid) |
778 | @@ -606,31 +621,54 @@ def detect_and_handle_multipath(cfg, target): |
779 | '']) |
780 | util.write_file(multipath_bind_path, content=multipath_bind_content) |
781 | |
782 | - grub_cfg = os.path.sep.join( |
783 | - [target, '/etc/default/grub.d/50-curtin-multipath.cfg']) |
784 | + if osfamily == DISTROS.debian: |
785 | + grub_cfg = os.path.sep.join( |
786 | + [target, '/etc/default/grub.d/50-curtin-multipath.cfg']) |
787 | + omode = 'w' |
788 | + elif osfamily == DISTROS.redhat: |
789 | + grub_cfg = os.path.sep.join([target, '/etc/default/grub']) |
790 | + omode = 'a' |
791 | + else: |
792 | + raise ValueError( |
793 | + 'Unknown grub_cfg mapping for distro: %s' % osfamily) |
794 | + |
795 | msg = '\n'.join([ |
796 | - '# Written by curtin for multipath device wwid "%s"' % wwid, |
797 | + '# Written by curtin for multipath device %s %s' % (mpname, wwid), |
798 | 'GRUB_DEVICE=%s' % grub_dev, |
799 | 'GRUB_DISABLE_LINUX_UUID=true', |
800 | '']) |
801 | - util.write_file(grub_cfg, content=msg) |
802 | - |
803 | + util.write_file(grub_cfg, omode=omode, content=msg) |
804 | else: |
805 | LOG.warn("Not sure how this will boot") |
806 | |
807 | - # Initrams needs to be updated to include /etc/multipath.cfg |
808 | - # and /etc/multipath/bindings files. |
809 | - update_initramfs(target, all_kernels=True) |
810 | + if osfamily == DISTROS.debian: |
811 | + # Initramfs needs to be updated to include /etc/multipath.cfg |
812 | + # and /etc/multipath/bindings files. |
813 | + update_initramfs(target, all_kernels=True) |
814 | + elif osfamily == DISTROS.redhat: |
815 | + # Write out initramfs/dracut config for multipath |
816 | + dracut_conf_multipath = os.path.sep.join( |
817 | + [target, '/etc/dracut.conf.d/10-curtin-multipath.conf']) |
818 | + msg = '\n'.join([ |
819 | + '# Written by curtin for multipath device wwid "%s"' % wwid, |
820 | + 'force_drivers+=" dm-multipath "', |
821 | + 'add_dracutmodules+="multipath"', |
822 | + 'install_items+="/etc/multipath.conf /etc/multipath/bindings"', |
823 | + '']) |
824 | + util.write_file(dracut_conf_multipath, content=msg) |
825 | + else: |
826 | + raise ValueError( |
827 | + 'Unknown initramfs mapping for distro: %s' % osfamily) |
828 | |
829 | |
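The multipath rework above branches on `osfamily` in three places (partition naming, grub config location, initramfs regeneration). The partition-suffix difference is easy to miss: Debian's device-mapper multipath partitions are named `<name>-partN`, while the RHEL family uses `<name>pN`. A reduction of that branch (the `DEBIAN`/`REDHAT` constants stand in for `curtin.distro.DISTROS`):

```python
# Illustrative reduction of the grub_dev osfamily branch above.
DEBIAN, REDHAT = 'debian', 'redhat'


def multipath_grub_dev(mpname, partno, osfamily):
    dev = '/dev/mapper/' + mpname
    if partno is None:
        return dev
    if osfamily == DEBIAN:
        return dev + '-part%s' % partno    # e.g. mpath0-part1
    if osfamily == REDHAT:
        return dev + 'p%s' % partno        # e.g. mpath0p1
    raise ValueError('Unknown grub_dev mapping for distro: %s' % osfamily)


print(multipath_grub_dev('mpath0', 1, DEBIAN))
print(multipath_grub_dev('mpath0', 1, REDHAT))
```

Raising on an unknown family, rather than silently defaulting, matches the style of the other `osfamily` checks in this hunk.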
830 | -def detect_required_packages(cfg): |
831 | +def detect_required_packages(cfg, osfamily=DISTROS.debian): |
832 | """ |
833 | detect packages that will be required in-target by custom config items |
834 | """ |
835 | |
836 | mapping = { |
837 | - 'storage': block.detect_required_packages_mapping(), |
838 | - 'network': net.detect_required_packages_mapping(), |
839 | + 'storage': bdeps.detect_required_packages_mapping(osfamily=osfamily), |
840 | + 'network': ndeps.detect_required_packages_mapping(osfamily=osfamily), |
841 | } |
842 | |
843 | needed_packages = [] |
844 | @@ -657,16 +695,16 @@ def detect_required_packages(cfg): |
845 | return needed_packages |
846 | |
847 | |
848 | -def install_missing_packages(cfg, target): |
849 | +def install_missing_packages(cfg, target, osfamily=DISTROS.debian): |
850 | ''' describe which operation types will require specific packages |
851 | |
852 | 'custom_config_key': { |
853 | 'pkg1': ['op_name_1', 'op_name_2', ...] |
854 | } |
855 | ''' |
856 | - |
857 | - installed_packages = util.get_installed_packages(target) |
858 | - needed_packages = set([pkg for pkg in detect_required_packages(cfg) |
859 | + installed_packages = distro.get_installed_packages(target) |
860 | + needed_packages = set([pkg for pkg in |
861 | + detect_required_packages(cfg, osfamily=osfamily) |
862 | if pkg not in installed_packages]) |
863 | |
864 | arch_packages = { |
865 | @@ -678,6 +716,31 @@ def install_missing_packages(cfg, target): |
866 | if pkg not in needed_packages: |
867 | needed_packages.add(pkg) |
868 | |
869 | + # UEFI requires grub-efi-{arch}. If a signed version of that package |
870 | + # exists then it will be installed. |
871 | + if util.is_uefi_bootable(): |
872 | + uefi_pkgs = [] |
873 | + if osfamily == DISTROS.redhat: |
874 | + # centos/redhat doesn't support 32-bit? |
875 | + uefi_pkgs.extend(['grub2-efi-x64-modules']) |
876 | + elif osfamily == DISTROS.debian: |
877 | + arch = util.get_architecture() |
878 | + uefi_pkgs.append('grub-efi-%s' % arch) |
879 | + |
880 | + # Architecture might support a signed UEFI loader |
881 | + uefi_pkg_signed = 'grub-efi-%s-signed' % arch |
882 | + if distro.has_pkg_available(uefi_pkg_signed): |
883 | + uefi_pkgs.append(uefi_pkg_signed) |
884 | + |
885 | + # AMD64 has shim-signed for SecureBoot support |
886 | + if arch == "amd64": |
887 | + uefi_pkgs.append("shim-signed") |
888 | + else: |
889 | + raise ValueError('Unknown grub2 package list for distro: %s' % |
890 | + osfamily) |
891 | + needed_packages.update([pkg for pkg in uefi_pkgs |
892 | + if pkg not in installed_packages]) |
893 | + |
894 | # Filter out ifupdown network packages on netplan enabled systems. |
895 | has_netplan = ('nplan' in installed_packages or |
896 | 'netplan.io' in installed_packages) |
897 | @@ -696,10 +759,10 @@ def install_missing_packages(cfg, target): |
898 | reporting_enabled=True, level="INFO", |
899 | description="Installing packages on target system: " + |
900 | str(to_add)): |
901 | - util.install_packages(to_add, target=target) |
902 | + distro.install_packages(to_add, target=target, osfamily=osfamily) |
903 | |
904 | |
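An illustrative sketch of the per-family UEFI package selection added above (not part of the diff): `signed_available` stands in for `distro.has_pkg_available()`, and the family/package names follow the diff.

```python
def uefi_packages(osfamily, arch, signed_available=False):
    """Select UEFI grub packages per os family (sketch of the hook logic;
    signed_available stands in for distro.has_pkg_available())."""
    if osfamily == 'redhat':
        # only 64-bit EFI grub modules are packaged for centos/redhat
        return ['grub2-efi-x64-modules']
    if osfamily == 'debian':
        pkgs = ['grub-efi-%s' % arch]
        if signed_available:
            # architecture may have a signed UEFI loader available
            pkgs.append('grub-efi-%s-signed' % arch)
        if arch == 'amd64':
            # amd64 has shim-signed for SecureBoot support
            pkgs.append('shim-signed')
        return pkgs
    raise ValueError('Unknown grub2 package list for distro: %s' % osfamily)
```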
905 | -def system_upgrade(cfg, target): |
906 | +def system_upgrade(cfg, target, osfamily=DISTROS.debian): |
907 | """run system-upgrade (apt-get dist-upgrade) or other in target. |
908 | |
909 | config: |
910 | @@ -718,7 +781,7 @@ def system_upgrade(cfg, target): |
911 | LOG.debug("system_upgrade disabled by config.") |
912 | return |
913 | |
914 | - util.system_upgrade(target=target) |
915 | + distro.system_upgrade(target=target, osfamily=osfamily) |
916 | |
917 | |
918 | def inject_pollinate_user_agent_config(ua_cfg, target): |
919 | @@ -728,7 +791,7 @@ def inject_pollinate_user_agent_config(ua_cfg, target): |
920 | if not isinstance(ua_cfg, dict): |
921 | raise ValueError('ua_cfg is not a dictionary: %s', ua_cfg) |
922 | |
923 | - pollinate_cfg = util.target_path(target, '/etc/pollinate/add-user-agent') |
924 | + pollinate_cfg = paths.target_path(target, '/etc/pollinate/add-user-agent') |
925 | comment = "# written by curtin" |
926 | content = "\n".join(["%s/%s %s" % (ua_key, ua_val, comment) |
927 | for ua_key, ua_val in ua_cfg.items()]) + "\n" |
928 | @@ -751,6 +814,8 @@ def handle_pollinate_user_agent(cfg, target): |
929 | curtin version |
930 | maas version (via endpoint URL, if present) |
931 | """ |
932 | + if not util.which('pollinate', target=target): |
933 | + return |
934 | |
935 | pcfg = cfg.get('pollinate') |
936 | if not isinstance(pcfg, dict): |
937 | @@ -776,6 +841,63 @@ def handle_pollinate_user_agent(cfg, target): |
938 | inject_pollinate_user_agent_config(uacfg, target) |
939 | |
940 | |
941 | +def configure_iscsi(cfg, state_etcd, target, osfamily=DISTROS.debian): |
942 | + # If a /etc/iscsi/nodes/... file was created by block_meta then it |
943 | + # needs to be copied onto the target system |
944 | + nodes = os.path.join(state_etcd, "nodes") |
945 | + if not os.path.exists(nodes): |
946 | + return |
947 | + |
948 | + LOG.info('Iscsi configuration found, enabling service') |
949 | + if osfamily == DISTROS.redhat: |
950 | + # copy iscsi node config to target image |
951 | + LOG.debug('Copying iscsi node config to target') |
952 | + copy_iscsi_conf(nodes, target, target_nodes_dir='var/lib/iscsi/nodes') |
953 | + |
954 | + # update in-target config |
955 | + with util.ChrootableTarget(target) as in_chroot: |
956 | + # enable iscsid service |
957 | + LOG.debug('Enabling iscsi daemon') |
958 | + in_chroot.subp(['chkconfig', 'iscsid', 'on']) |
959 | + |
960 | + # update selinux config for iscsi ports required |
961 | + for port in [str(port) for port in |
962 | + iscsi.get_iscsi_ports_from_config(cfg)]: |
963 | + LOG.debug('Adding iscsi port %s to selinux iscsi_port_t list', |
964 | + port) |
965 | + in_chroot.subp(['semanage', 'port', '-a', '-t', |
966 | + 'iscsi_port_t', '-p', 'tcp', port]) |
967 | + |
968 | + elif osfamily == DISTROS.debian: |
969 | + copy_iscsi_conf(nodes, target) |
970 | + else: |
971 | + raise ValueError( |
972 | + 'Unknown iscsi requirements for distro: %s' % osfamily) |
973 | + |
974 | + |
975 | +def configure_mdadm(cfg, state_etcd, target, osfamily=DISTROS.debian): |
976 | + # If a mdadm.conf file was created by block_meta then it needs |
977 | + # to be copied onto the target system |
978 | + mdadm_location = os.path.join(state_etcd, "mdadm.conf") |
979 | + if not os.path.exists(mdadm_location): |
980 | + return |
981 | + |
982 | + conf_map = { |
983 | + DISTROS.debian: 'etc/mdadm/mdadm.conf', |
984 | + DISTROS.redhat: 'etc/mdadm.conf', |
985 | + } |
986 | + if osfamily not in conf_map: |
987 | + raise ValueError( |
988 | + 'Unknown mdadm conf mapping for distro: %s' % osfamily) |
989 | + LOG.info('Mdadm configuration found, enabling service') |
990 | + shutil.copy(mdadm_location, paths.target_path(target, |
991 | + conf_map[osfamily])) |
992 | + if osfamily == DISTROS.debian: |
993 | + # as per LP: #964052 reconfigure mdadm |
994 | + util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'], |
995 | + data=None, target=target) |
996 | + |
997 | + |
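For illustration only (not part of the diff), the `conf_map` lookup in `configure_mdadm` resolves a per-family in-target path roughly like this:

```python
import os

# per-family mdadm.conf locations, as in conf_map above
CONF_MAP = {
    'debian': 'etc/mdadm/mdadm.conf',
    'redhat': 'etc/mdadm.conf',
}


def mdadm_conf_path(target, osfamily):
    """Resolve the in-target mdadm.conf path for a given os family."""
    if osfamily not in CONF_MAP:
        raise ValueError('Unknown mdadm conf mapping for distro: %s' % osfamily)
    return os.path.join(target, CONF_MAP[osfamily])
```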
998 | def handle_cloudconfig(cfg, base_dir=None): |
999 | """write cloud-init configuration files into base_dir. |
1000 | |
1001 | @@ -845,21 +967,11 @@ def ubuntu_core_curthooks(cfg, target=None): |
1002 | content=config.dump_config({'network': netconfig})) |
1003 | |
1004 | |
1005 | -def rpm_get_dist_id(target): |
1006 | - """Use rpm command to extract the '%rhel' distro macro which returns |
1007 | - the major os version id (6, 7, 8). This works for centos or rhel |
1008 | - """ |
1009 | - with util.ChrootableTarget(target) as in_chroot: |
1010 | - dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True) |
1011 | - return dist.rstrip() |
1012 | - |
1013 | - |
1014 | -def centos_apply_network_config(netcfg, target=None): |
1015 | +def redhat_upgrade_cloud_init(netcfg, target=None, osfamily=DISTROS.redhat): |
1016 | """ CentOS images execute built-in curthooks which only supports |
1017 | simple networking configuration. This hook enables advanced |
1018 | network configuration via config passthrough to the target. |
1019 | """ |
1020 | - |
1021 | def cloud_init_repo(version): |
1022 | if not version: |
1023 | raise ValueError('Missing required version parameter') |
1024 | @@ -868,9 +980,9 @@ def centos_apply_network_config(netcfg, target=None): |
1025 | |
1026 | if netcfg: |
1027 | LOG.info('Removing embedded network configuration (if present)') |
1028 | - ifcfgs = glob.glob(util.target_path(target, |
1029 | - 'etc/sysconfig/network-scripts') + |
1030 | - '/ifcfg-*') |
1031 | + ifcfgs = glob.glob( |
1032 | + paths.target_path(target, 'etc/sysconfig/network-scripts') + |
1033 | + '/ifcfg-*') |
1034 | # remove ifcfg-* (except ifcfg-lo) |
1035 | for ifcfg in ifcfgs: |
1036 | if os.path.basename(ifcfg) != "ifcfg-lo": |
1037 | @@ -884,29 +996,27 @@ def centos_apply_network_config(netcfg, target=None): |
1038 | # if in-target cloud-init is not updated, upgrade via cloud-init repo |
1039 | if not passthrough: |
1040 | cloud_init_yum_repo = ( |
1041 | - util.target_path(target, |
1042 | - 'etc/yum.repos.d/curtin-cloud-init.repo')) |
1043 | + paths.target_path(target, |
1044 | + 'etc/yum.repos.d/curtin-cloud-init.repo')) |
1045 | # Inject cloud-init daily yum repo |
1046 | util.write_file(cloud_init_yum_repo, |
1047 | - content=cloud_init_repo(rpm_get_dist_id(target))) |
1048 | + content=cloud_init_repo( |
1049 | + distro.rpm_get_dist_id(target))) |
1050 | |
1051 | # we separate the installation of repository packages (epel, |
1052 | # cloud-init-el-release) as we need a new invocation of yum |
1053 | # to read the newly installed repo files. |
1054 | - YUM_CMD = ['yum', '-y', '--noplugins', 'install'] |
1055 | - retries = [1] * 30 |
1056 | - with util.ChrootableTarget(target) as in_chroot: |
1057 | - # ensure up-to-date ca-certificates to handle https mirror |
1058 | - # connections |
1059 | - in_chroot.subp(YUM_CMD + ['ca-certificates'], capture=True, |
1060 | - log_captured=True, retries=retries) |
1061 | - in_chroot.subp(YUM_CMD + ['epel-release'], capture=True, |
1062 | - log_captured=True, retries=retries) |
1063 | - in_chroot.subp(YUM_CMD + ['cloud-init-el-release'], |
1064 | - log_captured=True, capture=True, |
1065 | - retries=retries) |
1066 | - in_chroot.subp(YUM_CMD + ['cloud-init'], capture=True, |
1067 | - log_captured=True, retries=retries) |
1068 | + |
1069 | + # ensure up-to-date ca-certificates to handle https mirror |
1070 | + # connections |
1071 | + distro.install_packages(['ca-certificates'], target=target, |
1072 | + osfamily=osfamily) |
1073 | + distro.install_packages(['epel-release'], target=target, |
1074 | + osfamily=osfamily) |
1075 | + distro.install_packages(['cloud-init-el-release'], target=target, |
1076 | + osfamily=osfamily) |
1077 | + distro.install_packages(['cloud-init'], target=target, |
1078 | + osfamily=osfamily) |
1079 | |
1080 | # remove cloud-init el-stable bootstrap repo config as the |
1081 | # cloud-init-el-release package points to the correct repo |
1082 | @@ -919,127 +1029,136 @@ def centos_apply_network_config(netcfg, target=None): |
1083 | capture=False, rcs=[0]) |
1084 | except util.ProcessExecutionError: |
1085 | LOG.debug('Image missing bridge-utils package, installing') |
1086 | - in_chroot.subp(YUM_CMD + ['bridge-utils'], capture=True, |
1087 | - log_captured=True, retries=retries) |
1088 | + distro.install_packages(['bridge-utils'], target=target, |
1089 | + osfamily=osfamily) |
1090 | |
1091 | LOG.info('Passing network configuration through to target') |
1092 | net.render_netconfig_passthrough(target, netconfig={'network': netcfg}) |
1093 | |
1094 | |
1095 | -def target_is_ubuntu_core(target): |
1096 | - """Check if Ubuntu-Core specific directory is present at target""" |
1097 | - if target: |
1098 | - return os.path.exists(util.target_path(target, |
1099 | - 'system-data/var/lib/snapd')) |
1100 | - return False |
1101 | - |
1102 | - |
1103 | -def target_is_centos(target): |
1104 | - """Check if CentOS specific file is present at target""" |
1105 | - if target: |
1106 | - return os.path.exists(util.target_path(target, 'etc/centos-release')) |
1107 | +# Public API, maas may call this from internal curthooks |
1108 | +centos_apply_network_config = redhat_upgrade_cloud_init |
1109 | |
1110 | - return False |
1111 | |
1112 | +def redhat_apply_selinux_autorelabel(target): |
1113 | + """Creates file /.autorelabel. |
1114 | |
1115 | -def target_is_rhel(target): |
1116 | - """Check if RHEL specific file is present at target""" |
1117 | - if target: |
1118 | - return os.path.exists(util.target_path(target, 'etc/redhat-release')) |
1119 | + This is used by SELinux to relabel all of the |
1120 | + files on the filesystem to have the correct |
1121 | + security context. Without this SSH login will |
1122 | + fail. |
1123 | + """ |
1124 | + LOG.debug('enabling selinux autorelabel') |
1125 | + open(paths.target_path(target, '.autorelabel'), 'a').close() |
1126 | |
1127 | - return False |
1128 | |
1129 | +def redhat_update_dracut_config(target, cfg): |
1130 | + initramfs_mapping = { |
1131 | + 'lvm': {'conf': 'lvmconf', 'modules': 'lvm'}, |
1132 | + 'raid': {'conf': 'mdadmconf', 'modules': 'mdraid'}, |
1133 | + } |
1134 | |
1135 | -def curthooks(args): |
1136 | - state = util.load_command_environment() |
1137 | + # no need to update initramfs if no custom storage |
1138 | + if 'storage' not in cfg: |
1139 | + return False |
1140 | |
1141 | - if args.target is not None: |
1142 | - target = args.target |
1143 | - else: |
1144 | - target = state['target'] |
1145 | + storage_config = cfg.get('storage', {}).get('config') |
1146 | + if not storage_config: |
1147 | + raise ValueError('Invalid storage config') |
1148 | + |
1149 | + add_conf = set() |
1150 | + add_modules = set() |
1151 | + for scfg in storage_config: |
1152 | + if scfg['type'] == 'raid': |
1153 | + add_conf.add(initramfs_mapping['raid']['conf']) |
1154 | + add_modules.add(initramfs_mapping['raid']['modules']) |
1155 | + elif scfg['type'] in ['lvm_volgroup', 'lvm_partition']: |
1156 | + add_conf.add(initramfs_mapping['lvm']['conf']) |
1157 | + add_modules.add(initramfs_mapping['lvm']['modules']) |
1158 | + |
1159 | + dconfig = ['# Written by curtin for custom storage config'] |
1160 | + dconfig.append('add_dracutmodules+="%s"' % (" ".join(add_modules))) |
1161 | + for conf in add_conf: |
1162 | + dconfig.append('%s="yes"' % conf) |
1163 | + |
1164 | + # Write out initramfs/dracut config for storage config |
1165 | + dracut_conf_storage = os.path.sep.join( |
1166 | + [target, '/etc/dracut.conf.d/50-curtin-storage.conf']) |
1167 | + msg = '\n'.join(dconfig + ['']) |
1168 | + LOG.debug('Updating redhat dracut config') |
1169 | + util.write_file(dracut_conf_storage, content=msg) |
1170 | + return True |
1171 | + |
1172 | + |
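A worked sketch of what `redhat_update_dracut_config` writes to `50-curtin-storage.conf` (illustrative, not part of the diff; this version sorts the sets for deterministic output, whereas the real code joins an unordered set):

```python
def dracut_storage_conf(storage_config):
    """Render the dracut conf snippet for raid/lvm storage items."""
    mapping = {
        'raid': ('mdadmconf', 'mdraid'),
        'lvm_volgroup': ('lvmconf', 'lvm'),
        'lvm_partition': ('lvmconf', 'lvm'),
    }
    confs, modules = set(), set()
    for item in storage_config:
        if item.get('type') in mapping:
            conf, module = mapping[item['type']]
            confs.add(conf)
            modules.add(module)
    lines = ['# Written by curtin for custom storage config']
    lines.append('add_dracutmodules+="%s"' % ' '.join(sorted(modules)))
    lines.extend('%s="yes"' % conf for conf in sorted(confs))
    return '\n'.join(lines) + '\n'
```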
1173 | +def redhat_update_initramfs(target, cfg): |
1174 | + if not redhat_update_dracut_config(target, cfg): |
1175 | + LOG.debug('Skipping redhat initramfs update, no custom storage config') |
1176 | + return |
1177 | + kver_cmd = ['rpm', '-q', '--queryformat', |
1178 | + '%{VERSION}-%{RELEASE}.%{ARCH}', 'kernel'] |
1179 | + with util.ChrootableTarget(target) as in_chroot: |
1180 | + LOG.debug('Finding redhat kernel version: %s', kver_cmd) |
1181 | + kver, _err = in_chroot.subp(kver_cmd, capture=True) |
1182 | + LOG.debug('Found kver=%s', kver) |
1183 | + initramfs = '/boot/initramfs-%s.img' % kver |
1184 | + dracut_cmd = ['dracut', '-f', initramfs, kver] |
1185 | + LOG.debug('Rebuilding initramfs with: %s', dracut_cmd) |
1186 | + in_chroot.subp(dracut_cmd, capture=True) |
1187 | |
1188 | - if target is None: |
1189 | - sys.stderr.write("Unable to find target. " |
1190 | - "Use --target or set TARGET_MOUNT_POINT\n") |
1191 | - sys.exit(2) |
1192 | |
1193 | - cfg = config.load_command_config(args, state) |
1194 | +def builtin_curthooks(cfg, target, state): |
1195 | + LOG.info('Running curtin builtin curthooks') |
1196 | stack_prefix = state.get('report_stack_prefix', '') |
1197 | - |
1198 | - # if curtin-hooks hook exists in target we can defer to the in-target hooks |
1199 | - if util.run_hook_if_exists(target, 'curtin-hooks'): |
1200 | - # For vmtests to force execute centos_apply_network_config, uncomment |
1201 | - # the value in examples/tests/centos_defaults.yaml |
1202 | - if cfg.get('_ammend_centos_curthooks'): |
1203 | - if cfg.get('cloudconfig'): |
1204 | - handle_cloudconfig( |
1205 | - cfg['cloudconfig'], |
1206 | - base_dir=util.target_path(target, 'etc/cloud/cloud.cfg.d')) |
1207 | - |
1208 | - if target_is_centos(target) or target_is_rhel(target): |
1209 | - LOG.info('Detected RHEL/CentOS image, running extra hooks') |
1210 | - with events.ReportEventStack( |
1211 | - name=stack_prefix, reporting_enabled=True, |
1212 | - level="INFO", |
1213 | - description="Configuring CentOS for first boot"): |
1214 | - centos_apply_network_config(cfg.get('network', {}), target) |
1215 | - sys.exit(0) |
1216 | - |
1217 | - if target_is_ubuntu_core(target): |
1218 | - LOG.info('Detected Ubuntu-Core image, running hooks') |
1219 | + state_etcd = os.path.split(state['fstab'])[0] |
1220 | + |
1221 | + distro_info = distro.get_distroinfo(target=target) |
1222 | + if not distro_info: |
1223 | + raise RuntimeError('Failed to determine target distro') |
1224 | + osfamily = distro_info.family |
1225 | + LOG.info('Configuring target system for distro: %s osfamily: %s', |
1226 | + distro_info.variant, osfamily) |
1227 | + if osfamily == DISTROS.debian: |
1228 | with events.ReportEventStack( |
1229 | - name=stack_prefix, reporting_enabled=True, level="INFO", |
1230 | - description="Configuring Ubuntu-Core for first boot"): |
1231 | - ubuntu_core_curthooks(cfg, target) |
1232 | - sys.exit(0) |
1233 | - |
1234 | - with events.ReportEventStack( |
1235 | - name=stack_prefix + '/writing-config', |
1236 | - reporting_enabled=True, level="INFO", |
1237 | - description="configuring apt configuring apt"): |
1238 | - do_apt_config(cfg, target) |
1239 | - disable_overlayroot(cfg, target) |
1240 | + name=stack_prefix + '/writing-apt-config', |
1241 | + reporting_enabled=True, level="INFO", |
1242 | + description="configuring apt"): |
1243 | + do_apt_config(cfg, target) |
1244 | + disable_overlayroot(cfg, target) |
1245 | |
1246 | - # LP: #1742560 prevent zfs-dkms from being installed (Xenial) |
1247 | - if util.lsb_release(target=target)['codename'] == 'xenial': |
1248 | - util.apt_update(target=target) |
1249 | - with util.ChrootableTarget(target) as in_chroot: |
1250 | - in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms']) |
1251 | + # LP: #1742560 prevent zfs-dkms from being installed (Xenial) |
1252 | + if distro.lsb_release(target=target)['codename'] == 'xenial': |
1253 | + distro.apt_update(target=target) |
1254 | + with util.ChrootableTarget(target) as in_chroot: |
1255 | + in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms']) |
1256 | |
1257 | # packages may be needed prior to installing kernel |
1258 | with events.ReportEventStack( |
1259 | name=stack_prefix + '/installing-missing-packages', |
1260 | reporting_enabled=True, level="INFO", |
1261 | description="installing missing packages"): |
1262 | - install_missing_packages(cfg, target) |
1263 | + install_missing_packages(cfg, target, osfamily=osfamily) |
1264 | |
1265 | - # If a /etc/iscsi/nodes/... file was created by block_meta then it |
1266 | - # needs to be copied onto the target system |
1267 | - nodes_location = os.path.join(os.path.split(state['fstab'])[0], |
1268 | - "nodes") |
1269 | - if os.path.exists(nodes_location): |
1270 | - copy_iscsi_conf(nodes_location, target) |
1271 | - # do we need to reconfigure open-iscsi? |
1272 | - |
1273 | - # If a mdadm.conf file was created by block_meta than it needs to be copied |
1274 | - # onto the target system |
1275 | - mdadm_location = os.path.join(os.path.split(state['fstab'])[0], |
1276 | - "mdadm.conf") |
1277 | - if os.path.exists(mdadm_location): |
1278 | - copy_mdadm_conf(mdadm_location, target) |
1279 | - # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052 |
1280 | - # reconfigure mdadm |
1281 | - util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'], |
1282 | - data=None, target=target) |
1283 | + with events.ReportEventStack( |
1284 | + name=stack_prefix + '/configuring-iscsi-service', |
1285 | + reporting_enabled=True, level="INFO", |
1286 | + description="configuring iscsi service"): |
1287 | + configure_iscsi(cfg, state_etcd, target, osfamily=osfamily) |
1288 | |
1289 | with events.ReportEventStack( |
1290 | - name=stack_prefix + '/installing-kernel', |
1291 | + name=stack_prefix + '/configuring-mdadm-service', |
1292 | reporting_enabled=True, level="INFO", |
1293 | - description="installing kernel"): |
1294 | - setup_zipl(cfg, target) |
1295 | - install_kernel(cfg, target) |
1296 | - run_zipl(cfg, target) |
1297 | - restore_dist_interfaces(cfg, target) |
1298 | + description="configuring raid (mdadm) service"): |
1299 | + configure_mdadm(cfg, state_etcd, target, osfamily=osfamily) |
1300 | + |
1301 | + if osfamily == DISTROS.debian: |
1302 | + with events.ReportEventStack( |
1303 | + name=stack_prefix + '/installing-kernel', |
1304 | + reporting_enabled=True, level="INFO", |
1305 | + description="installing kernel"): |
1306 | + setup_zipl(cfg, target) |
1307 | + install_kernel(cfg, target) |
1308 | + run_zipl(cfg, target) |
1309 | + restore_dist_interfaces(cfg, target) |
1310 | |
1311 | with events.ReportEventStack( |
1312 | name=stack_prefix + '/setting-up-swap', |
1313 | @@ -1047,6 +1166,23 @@ def curthooks(args): |
1314 | description="setting up swap"): |
1315 | add_swap(cfg, target, state.get('fstab')) |
1316 | |
1317 | + if osfamily == DISTROS.redhat: |
1318 | + # set cloud-init maas datasource for centos images |
1319 | + if cfg.get('cloudconfig'): |
1320 | + handle_cloudconfig( |
1321 | + cfg['cloudconfig'], |
1322 | + base_dir=paths.target_path(target, |
1323 | + 'etc/cloud/cloud.cfg.d')) |
1324 | + |
1325 | + # To force vmtests to execute redhat_upgrade_cloud_init, uncomment |
1326 | + # the value in examples/tests/centos_defaults.yaml |
1327 | + if cfg.get('_ammend_centos_curthooks'): |
1328 | + with events.ReportEventStack( |
1329 | + name=stack_prefix + '/upgrading cloud-init', |
1330 | + reporting_enabled=True, level="INFO", |
1331 | + description="Upgrading cloud-init in target"): |
1332 | + redhat_upgrade_cloud_init(cfg.get('network', {}), target) |
1333 | + |
1334 | with events.ReportEventStack( |
1335 | name=stack_prefix + '/apply-networking-config', |
1336 | reporting_enabled=True, level="INFO", |
1337 | @@ -1063,29 +1199,44 @@ def curthooks(args): |
1338 | name=stack_prefix + '/configuring-multipath', |
1339 | reporting_enabled=True, level="INFO", |
1340 | description="configuring multipath"): |
1341 | - detect_and_handle_multipath(cfg, target) |
1342 | + detect_and_handle_multipath(cfg, target, osfamily=osfamily) |
1343 | |
1344 | with events.ReportEventStack( |
1345 | name=stack_prefix + '/system-upgrade', |
1346 | reporting_enabled=True, level="INFO", |
1347 | description="updating packages on target system"): |
1348 | - system_upgrade(cfg, target) |
1349 | + system_upgrade(cfg, target, osfamily=osfamily) |
1350 | + |
1351 | + if osfamily == DISTROS.redhat: |
1352 | + with events.ReportEventStack( |
1353 | + name=stack_prefix + '/enabling-selinux-autorelabel', |
1354 | + reporting_enabled=True, level="INFO", |
1355 | + description="enabling selinux autorelabel mode"): |
1356 | + redhat_apply_selinux_autorelabel(target) |
1357 | + |
1358 | + with events.ReportEventStack( |
1359 | + name=stack_prefix + '/updating-initramfs-configuration', |
1360 | + reporting_enabled=True, level="INFO", |
1361 | + description="updating initramfs configuration"): |
1362 | + redhat_update_initramfs(target, cfg) |
1363 | |
1364 | with events.ReportEventStack( |
1365 | name=stack_prefix + '/pollinate-user-agent', |
1366 | reporting_enabled=True, level="INFO", |
1367 | - description="configuring pollinate user-agent on target system"): |
1368 | + description="configuring pollinate user-agent on target"): |
1369 | handle_pollinate_user_agent(cfg, target) |
1370 | |
1371 | - # If a crypttab file was created by block_meta than it needs to be copied |
1372 | - # onto the target system, and update_initramfs() needs to be run, so that |
1373 | - # the cryptsetup hooks are properly configured on the installed system and |
1374 | - # it will be able to open encrypted volumes at boot. |
1375 | - crypttab_location = os.path.join(os.path.split(state['fstab'])[0], |
1376 | - "crypttab") |
1377 | - if os.path.exists(crypttab_location): |
1378 | - copy_crypttab(crypttab_location, target) |
1379 | - update_initramfs(target) |
1380 | + if osfamily == DISTROS.debian: |
1381 | + # If a crypttab file was created by block_meta then it needs to be |
1382 | + # copied onto the target system, and update_initramfs() needs to be |
1383 | + # run, so that the cryptsetup hooks are properly configured on the |
1384 | + # installed system and it will be able to open encrypted volumes |
1385 | + # at boot. |
1386 | + crypttab_location = os.path.join(os.path.split(state['fstab'])[0], |
1387 | + "crypttab") |
1388 | + if os.path.exists(crypttab_location): |
1389 | + copy_crypttab(crypttab_location, target) |
1390 | + update_initramfs(target) |
1391 | |
1392 | # If udev dname rules were created, copy them to target |
1393 | udev_rules_d = os.path.join(state['scratch'], "rules.d") |
1394 | @@ -1102,8 +1253,41 @@ def curthooks(args): |
1395 | machine.startswith('aarch64') and not util.is_uefi_bootable()): |
1396 | update_initramfs(target) |
1397 | else: |
1398 | - setup_grub(cfg, target) |
1399 | + setup_grub(cfg, target, osfamily=osfamily) |
1400 | + |
1401 | + |
1402 | +def curthooks(args): |
1403 | + state = util.load_command_environment() |
1404 | + |
1405 | + if args.target is not None: |
1406 | + target = args.target |
1407 | + else: |
1408 | + target = state['target'] |
1409 | + |
1410 | + if target is None: |
1411 | + sys.stderr.write("Unable to find target. " |
1412 | + "Use --target or set TARGET_MOUNT_POINT\n") |
1413 | + sys.exit(2) |
1414 | + |
1415 | + cfg = config.load_command_config(args, state) |
1416 | + stack_prefix = state.get('report_stack_prefix', '') |
1417 | + curthooks_mode = cfg.get('curthooks', {}).get('mode', 'auto') |
1418 | + |
1419 | + # UC is special, handle it first. |
1420 | + if distro.is_ubuntu_core(target): |
1421 | + LOG.info('Detected Ubuntu-Core image, running hooks') |
1422 | + with events.ReportEventStack( |
1423 | + name=stack_prefix, reporting_enabled=True, level="INFO", |
1424 | + description="Configuring Ubuntu-Core for first boot"): |
1425 | + ubuntu_core_curthooks(cfg, target) |
1426 | + sys.exit(0) |
1427 | + |
1428 | + # user asked for target, or auto mode |
1429 | + if curthooks_mode in ['auto', 'target']: |
1430 | + if util.run_hook_if_exists(target, 'curtin-hooks'): |
1431 | + sys.exit(0) |
1432 | |
1433 | + builtin_curthooks(cfg, target, state) |
1434 | sys.exit(0) |
1435 | |
1436 | |
1437 | diff --git a/curtin/commands/in_target.py b/curtin/commands/in_target.py |
1438 | index 8e839c0..c6f7abd 100644 |
1439 | --- a/curtin/commands/in_target.py |
1440 | +++ b/curtin/commands/in_target.py |
1441 | @@ -4,7 +4,7 @@ import os |
1442 | import pty |
1443 | import sys |
1444 | |
1445 | -from curtin import util |
1446 | +from curtin import paths, util |
1447 | |
1448 | from . import populate_one_subcmd |
1449 | |
1450 | @@ -41,7 +41,7 @@ def in_target_main(args): |
1451 | sys.exit(2) |
1452 | |
1453 | daemons = args.allow_daemons |
1454 | - if util.target_path(args.target) == "/": |
1455 | + if paths.target_path(args.target) == "/": |
1456 | sys.stderr.write("WARN: Target is /, daemons are allowed.\n") |
1457 | daemons = True |
1458 | cmd = args.command_args |
1459 | diff --git a/curtin/commands/install.py b/curtin/commands/install.py |
1460 | index 4d2a13f..244683c 100644 |
1461 | --- a/curtin/commands/install.py |
1462 | +++ b/curtin/commands/install.py |
1463 | @@ -13,7 +13,9 @@ import tempfile |
1464 | |
1465 | from curtin.block import iscsi |
1466 | from curtin import config |
1467 | +from curtin import distro |
1468 | from curtin import util |
1469 | +from curtin import paths |
1470 | from curtin import version |
1471 | from curtin.log import LOG, logged_time |
1472 | from curtin.reporter.legacy import load_reporter |
1473 | @@ -80,7 +82,7 @@ def copy_install_log(logfile, target, log_target_path): |
1474 | LOG.debug('Copying curtin install log from %s to target/%s', |
1475 | logfile, log_target_path) |
1476 | util.write_file( |
1477 | - filename=util.target_path(target, log_target_path), |
1478 | + filename=paths.target_path(target, log_target_path), |
1479 | content=util.load_file(logfile, decode=False), |
1480 | mode=0o400, omode="wb") |
1481 | |
1482 | @@ -319,7 +321,7 @@ def apply_kexec(kexec, target): |
1483 | raise TypeError("kexec is not a dict.") |
1484 | |
1485 | if not util.which('kexec'): |
1486 | - util.install_packages('kexec-tools') |
1487 | + distro.install_packages('kexec-tools') |
1488 | |
1489 | if not os.path.isfile(target_grubcfg): |
1490 | raise ValueError("%s does not exist in target" % grubcfg) |
1491 | diff --git a/curtin/commands/system_install.py b/curtin/commands/system_install.py |
1492 | index 05d70af..6d7b736 100644 |
1493 | --- a/curtin/commands/system_install.py |
1494 | +++ b/curtin/commands/system_install.py |
1495 | @@ -7,6 +7,7 @@ import curtin.util as util |
1496 | |
1497 | from . import populate_one_subcmd |
1498 | from curtin.log import LOG |
1499 | +from curtin import distro |
1500 | |
1501 | |
1502 | def system_install_pkgs_main(args): |
1503 | @@ -16,7 +17,7 @@ def system_install_pkgs_main(args): |
1504 | |
1505 | exit_code = 0 |
1506 | try: |
1507 | - util.install_packages( |
1508 | + distro.install_packages( |
1509 | pkglist=args.packages, target=args.target, |
1510 | allow_daemons=args.allow_daemons) |
1511 | except util.ProcessExecutionError as e: |
1512 | diff --git a/curtin/commands/system_upgrade.py b/curtin/commands/system_upgrade.py |
1513 | index fe10fac..d4f6735 100644 |
1514 | --- a/curtin/commands/system_upgrade.py |
1515 | +++ b/curtin/commands/system_upgrade.py |
1516 | @@ -7,6 +7,7 @@ import curtin.util as util |
1517 | |
1518 | from . import populate_one_subcmd |
1519 | from curtin.log import LOG |
1520 | +from curtin import distro |
1521 | |
1522 | |
1523 | def system_upgrade_main(args): |
1524 | @@ -16,8 +17,8 @@ def system_upgrade_main(args): |
1525 | |
1526 | exit_code = 0 |
1527 | try: |
1528 | - util.system_upgrade(target=args.target, |
1529 | - allow_daemons=args.allow_daemons) |
1530 | + distro.system_upgrade(target=args.target, |
1531 | + allow_daemons=args.allow_daemons) |
1532 | except util.ProcessExecutionError as e: |
1533 | LOG.warn("system upgrade failed: %s" % e) |
1534 | exit_code = e.exit_code |
1535 | diff --git a/curtin/deps/__init__.py b/curtin/deps/__init__.py |
1536 | index 7014895..96df4f6 100644 |
1537 | --- a/curtin/deps/__init__.py |
1538 | +++ b/curtin/deps/__init__.py |
1539 | @@ -6,13 +6,13 @@ import sys |
1540 | from curtin.util import ( |
1541 | ProcessExecutionError, |
1542 | get_architecture, |
1543 | - install_packages, |
1544 | is_uefi_bootable, |
1545 | - lsb_release, |
1546 | subp, |
1547 | which, |
1548 | ) |
1549 | |
1550 | +from curtin.distro import install_packages, lsb_release |
1551 | + |
1552 | REQUIRED_IMPORTS = [ |
1553 | # import string to execute, python2 package, python3 package |
1554 | ('import yaml', 'python-yaml', 'python3-yaml'), |
1555 | @@ -177,7 +177,7 @@ def install_deps(verbosity=False, dry_run=False, allow_daemons=True): |
1556 | ret = 0 |
1557 | try: |
1558 | install_packages(missing_pkgs, allow_daemons=allow_daemons, |
1559 | - aptopts=["--no-install-recommends"]) |
1560 | + opts=["--no-install-recommends"]) |
1561 | except ProcessExecutionError as e: |
1562 | sys.stderr.write("%s\n" % e) |
1563 | ret = e.exit_code |
1564 | diff --git a/curtin/distro.py b/curtin/distro.py |
1565 | new file mode 100644 |
1566 | index 0000000..f2a78ed |
1567 | --- /dev/null |
1568 | +++ b/curtin/distro.py |
1569 | @@ -0,0 +1,512 @@ |
1570 | +# This file is part of curtin. See LICENSE file for copyright and license info. |
1571 | +import glob |
1572 | +from collections import namedtuple |
1573 | +import os |
1574 | +import re |
1575 | +import shutil |
1576 | +import tempfile |
1577 | + |
1578 | +from .paths import target_path |
1579 | +from .util import ( |
1580 | + ChrootableTarget, |
1581 | + find_newer, |
1582 | + load_file, |
1583 | + load_shell_content, |
1584 | + ProcessExecutionError, |
1585 | + set_unexecutable, |
1586 | + string_types, |
1587 | + subp, |
1588 | + which |
1589 | +) |
1590 | +from .log import LOG |
1591 | + |
1592 | +DistroInfo = namedtuple('DistroInfo', ('variant', 'family')) |
1593 | +DISTRO_NAMES = ['arch', 'centos', 'debian', 'fedora', 'freebsd', 'gentoo', |
1594 | + 'opensuse', 'redhat', 'rhel', 'sles', 'suse', 'ubuntu'] |
1595 | + |
1596 | + |
1597 | +# python2.7 lacks PEP 435, so we must make use an alternative for py2.7/3.x |
1598 | +# https://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python |
1599 | +def distro_enum(*distros): |
1600 | + return namedtuple('Distros', distros)(*distros) |
1601 | + |
1602 | + |
1603 | +DISTROS = distro_enum(*DISTRO_NAMES) |
1604 | + |
1605 | +OS_FAMILIES = { |
1606 | + DISTROS.debian: [DISTROS.debian, DISTROS.ubuntu], |
1607 | + DISTROS.redhat: [DISTROS.centos, DISTROS.fedora, DISTROS.redhat, |
1608 | + DISTROS.rhel], |
1609 | + DISTROS.gentoo: [DISTROS.gentoo], |
1610 | + DISTROS.freebsd: [DISTROS.freebsd], |
1611 | + DISTROS.suse: [DISTROS.opensuse, DISTROS.sles, DISTROS.suse], |
1612 | + DISTROS.arch: [DISTROS.arch], |
1613 | +} |
1614 | + |
1615 | +# invert the mapping for faster lookup of variants |
1616 | +DISTRO_TO_OSFAMILY = ( |
1617 | + {variant: family for family, variants in OS_FAMILIES.items() |
1618 | + for variant in variants}) |
1619 | + |
1620 | +_LSB_RELEASE = {} |
1621 | + |
1622 | + |
1623 | +def name_to_distro(distname): |
1624 | + try: |
1625 | + return DISTROS[DISTROS.index(distname)] |
1626 | + except (IndexError, AttributeError): |
1627 | + LOG.error('Unknown distro name: %s', distname) |
1628 | + |
1629 | + |
1630 | +def lsb_release(target=None): |
1631 | + if target_path(target) != "/": |
1632 | + # do not use or update cache if target is provided |
1633 | + return _lsb_release(target) |
1634 | + |
1635 | + global _LSB_RELEASE |
1636 | + if not _LSB_RELEASE: |
1637 | + data = _lsb_release() |
1638 | + _LSB_RELEASE.update(data) |
1639 | + return _LSB_RELEASE |
1640 | + |
1641 | + |
1642 | +def os_release(target=None): |
1643 | + data = {} |
1644 | + os_release = target_path(target, 'etc/os-release') |
1645 | + if os.path.exists(os_release): |
1646 | + data = load_shell_content(load_file(os_release), |
1647 | + add_empty=False, empty_val=None) |
1648 | + if not data: |
1649 | + for relfile in [target_path(target, rel) for rel in |
1650 | + ['etc/centos-release', 'etc/redhat-release']]: |
1651 | + data = _parse_redhat_release(release_file=relfile, target=target) |
1652 | + if data: |
1653 | + break |
1654 | + |
1655 | + return data |
1656 | + |
1657 | + |
1658 | +def _parse_redhat_release(release_file=None, target=None): |
1659 | + """Return a dictionary of distro info fields from /etc/redhat-release. |
1660 | + |
1661 | + Dict keys will align with /etc/os-release keys: |
1662 | + ID, VERSION_ID, VERSION_CODENAME |
1663 | + """ |
1664 | + |
1665 | + if not release_file: |
1666 | + release_file = target_path('etc/redhat-release') |
1667 | + if not os.path.exists(release_file): |
1668 | + return {} |
1669 | + redhat_release = load_file(release_file) |
1670 | + redhat_regex = ( |
1671 | + r'(?P<name>.+) release (?P<version>[\d\.]+) ' |
1672 | + r'\((?P<codename>[^)]+)\)') |
1673 | + match = re.match(redhat_regex, redhat_release) |
1674 | + if match: |
1675 | + group = match.groupdict() |
1676 | + group['name'] = group['name'].lower().partition(' linux')[0] |
1677 | + if group['name'] == 'red hat enterprise': |
1678 | + group['name'] = 'redhat' |
1679 | + return {'ID': group['name'], 'VERSION_ID': group['version'], |
1680 | + 'VERSION_CODENAME': group['codename']} |
1681 | + return {} |
1682 | + |
1683 | + |
1684 | +def get_distroinfo(target=None): |
1685 | + variant_name = os_release(target=target)['ID'] |
1686 | + variant = name_to_distro(variant_name) |
1687 | + family = DISTRO_TO_OSFAMILY.get(variant) |
1688 | + return DistroInfo(variant, family) |
1689 | + |
1690 | + |
1691 | +def get_distro(target=None): |
1692 | + distinfo = get_distroinfo(target=target) |
1693 | + return distinfo.variant |
1694 | + |
1695 | + |
1696 | +def get_osfamily(target=None): |
1697 | + distinfo = get_distroinfo(target=target) |
1698 | + return distinfo.family |
1699 | + |
1700 | + |
1701 | +def is_ubuntu_core(target=None): |
1702 | + """Check if Ubuntu-Core specific directory is present at target""" |
1703 | + return os.path.exists(target_path(target, 'system-data/var/lib/snapd')) |
1704 | + |
1705 | + |
1706 | +def is_centos(target=None): |
1707 | + """Check if CentOS specific file is present at target""" |
1708 | + return os.path.exists(target_path(target, 'etc/centos-release')) |
1709 | + |
1710 | + |
1711 | +def is_rhel(target=None): |
1712 | + """Check if RHEL specific file is present at target""" |
1713 | + return os.path.exists(target_path(target, 'etc/redhat-release')) |
1714 | + |
1715 | + |
1716 | +def _lsb_release(target=None): |
1717 | + fmap = {'Codename': 'codename', 'Description': 'description', |
1718 | + 'Distributor ID': 'id', 'Release': 'release'} |
1719 | + |
1720 | + data = {} |
1721 | + try: |
1722 | + out, _ = subp(['lsb_release', '--all'], capture=True, target=target) |
1723 | + for line in out.splitlines(): |
1724 | + fname, _, val = line.partition(":") |
1725 | + if fname in fmap: |
1726 | + data[fmap[fname]] = val.strip() |
1727 | + missing = [k for k in fmap.values() if k not in data] |
1728 | + if len(missing): |
1729 | + LOG.warn("Missing fields in lsb_release --all output: %s", |
1730 | + ','.join(missing)) |
1731 | + |
1732 | + except ProcessExecutionError as err: |
1733 | + LOG.warn("Unable to get lsb_release --all: %s", err) |
1734 | + data = {v: "UNAVAILABLE" for v in fmap.values()} |
1735 | + |
1736 | + return data |
1737 | + |
1738 | + |
1739 | +def apt_update(target=None, env=None, force=False, comment=None, |
1740 | + retries=None): |
1741 | + |
1742 | + marker = "tmp/curtin.aptupdate" |
1743 | + |
1744 | + if env is None: |
1745 | + env = os.environ.copy() |
1746 | + |
1747 | + if retries is None: |
1748 | + # by default run apt-update up to 3 times to allow |
1749 | + # for transient failures |
1750 | + retries = (1, 2, 3) |
1751 | + |
1752 | + if comment is None: |
1753 | + comment = "no comment provided" |
1754 | + |
1755 | + if comment.endswith("\n"): |
1756 | + comment = comment[:-1] |
1757 | + |
1758 | + marker = target_path(target, marker) |
1759 | + # if marker exists, check if there are files that would make it obsolete |
1760 | + listfiles = [target_path(target, "/etc/apt/sources.list")] |
1761 | + listfiles += glob.glob( |
1762 | + target_path(target, "etc/apt/sources.list.d/*.list")) |
1763 | + |
1764 | + if os.path.exists(marker) and not force: |
1765 | + if len(find_newer(marker, listfiles)) == 0: |
1766 | + return |
1767 | + |
1768 | + restore_perms = [] |
1769 | + |
1770 | + abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp")) |
1771 | + try: |
1772 | + abs_slist = abs_tmpdir + "/sources.list" |
1773 | + abs_slistd = abs_tmpdir + "/sources.list.d" |
1774 | + ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir) |
1775 | + ch_slist = ch_tmpdir + "/sources.list" |
1776 | + ch_slistd = ch_tmpdir + "/sources.list.d" |
1777 | + |
1778 | + # this file gets executed on apt-get update sometimes. (LP: #1527710) |
1779 | + motd_update = target_path( |
1780 | + target, "/usr/lib/update-notifier/update-motd-updates-available") |
1781 | + pmode = set_unexecutable(motd_update) |
1782 | + if pmode is not None: |
1783 | + restore_perms.append((motd_update, pmode),) |
1784 | + |
1785 | + # create tmpdir/sources.list with all lines other than deb-src |
1786 | + # avoid apt complaining by using existing and empty dir for sourceparts |
1787 | + os.mkdir(abs_slistd) |
1788 | + with open(abs_slist, "w") as sfp: |
1789 | + for sfile in listfiles: |
1790 | + with open(sfile, "r") as fp: |
1791 | + contents = fp.read() |
1792 | + for line in contents.splitlines(): |
1793 | + line = line.lstrip() |
1794 | + if not line.startswith("deb-src"): |
1795 | + sfp.write(line + "\n") |
1796 | + |
1797 | + update_cmd = [ |
1798 | + 'apt-get', '--quiet', |
1799 | + '--option=Acquire::Languages=none', |
1800 | + '--option=Dir::Etc::sourcelist=%s' % ch_slist, |
1801 | + '--option=Dir::Etc::sourceparts=%s' % ch_slistd, |
1802 | + 'update'] |
1803 | + |
1804 | + # not using 'run_apt_command' so we can pass 'retries' to subp |
1805 | + with ChrootableTarget(target, allow_daemons=True) as inchroot: |
1806 | + inchroot.subp(update_cmd, env=env, retries=retries) |
1807 | + finally: |
1808 | + for fname, perms in restore_perms: |
1809 | + os.chmod(fname, perms) |
1810 | + if abs_tmpdir: |
1811 | + shutil.rmtree(abs_tmpdir) |
1812 | + |
1813 | + with open(marker, "w") as fp: |
1814 | + fp.write(comment + "\n") |
1815 | + |
1816 | + |
1817 | +def run_apt_command(mode, args=None, opts=None, env=None, target=None, |
1818 | + execute=True, allow_daemons=False): |
1819 | + defopts = ['--quiet', '--assume-yes', |
1820 | + '--option=Dpkg::options::=--force-unsafe-io', |
1821 | + '--option=Dpkg::Options::=--force-confold'] |
1822 | + if args is None: |
1823 | + args = [] |
1824 | + |
1825 | + if opts is None: |
1826 | + opts = [] |
1827 | + |
1828 | + if env is None: |
1829 | + env = os.environ.copy() |
1830 | + env['DEBIAN_FRONTEND'] = 'noninteractive' |
1831 | + |
1832 | + if which('eatmydata', target=target): |
1833 | + emd = ['eatmydata'] |
1834 | + else: |
1835 | + emd = [] |
1836 | + |
1837 | + cmd = emd + ['apt-get'] + defopts + opts + [mode] + args |
1838 | + if not execute: |
1839 | + return env, cmd |
1840 | + |
1841 | + apt_update(target, env=env, comment=' '.join(cmd)) |
1842 | + with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: |
1843 | + return inchroot.subp(cmd, env=env) |
1844 | + |
1845 | + |
1846 | +def run_yum_command(mode, args=None, opts=None, env=None, target=None, |
1847 | + execute=True, allow_daemons=False): |
1848 | + defopts = ['--assumeyes', '--quiet'] |
1849 | + |
1850 | + if args is None: |
1851 | + args = [] |
1852 | + |
1853 | + if opts is None: |
1854 | + opts = [] |
1855 | + |
1856 | + cmd = ['yum'] + defopts + opts + [mode] + args |
1857 | + if not execute: |
1858 | + return env, cmd |
1859 | + |
1860 | + if mode in ["install", "update", "upgrade"]: |
1861 | + return yum_install(mode, args, opts=opts, env=env, target=target, |
1862 | + allow_daemons=allow_daemons) |
1863 | + |
1864 | + with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: |
1865 | + return inchroot.subp(cmd, env=env) |
1866 | + |
1867 | + |
1868 | +def yum_install(mode, packages=None, opts=None, env=None, target=None, |
1869 | + allow_daemons=False): |
1870 | + |
1871 | + defopts = ['--assumeyes', '--quiet'] |
1872 | + |
1873 | + if packages is None: |
1874 | + packages = [] |
1875 | + |
1876 | + if opts is None: |
1877 | + opts = [] |
1878 | + |
1879 | + if mode not in ['install', 'update', 'upgrade']: |
1880 | + raise ValueError( |
1881 | + 'Unsupported mode "%s" for yum package install/upgrade' % mode) |
1882 | + |
1883 | + # download first, then install/upgrade from cache |
1884 | + cmd = ['yum'] + defopts + opts + [mode] |
1885 | + dl_opts = ['--downloadonly', '--setopt=keepcache=1'] |
1886 | + inst_opts = ['--cacheonly'] |
1887 | + |
1888 | + # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget |
1889 | + with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: |
1890 | + inchroot.subp(cmd + dl_opts + packages, |
1891 | + env=env, retries=[1] * 10) |
1892 | + return inchroot.subp(cmd + inst_opts + packages, env=env) |
1893 | + |
1894 | + |
1895 | +def rpm_get_dist_id(target=None): |
1896 | + """Use rpm command to extract the '%rhel' distro macro which returns |
1897 | + the major os version id (6, 7, 8). This works for centos or rhel |
1898 | + """ |
1899 | + with ChrootableTarget(target) as in_chroot: |
1900 | + dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True) |
1901 | + return dist.rstrip() |
1902 | + |
1903 | + |
1904 | +def system_upgrade(opts=None, target=None, env=None, allow_daemons=False, |
1905 | + osfamily=None): |
1906 | + LOG.debug("Upgrading system in %s", target) |
1907 | + |
1908 | + distro_cfg = { |
1909 | + DISTROS.debian: {'function': run_apt_command, |
1910 | + 'subcommands': ('dist-upgrade', 'autoremove')}, |
1911 | + DISTROS.redhat: {'function': run_yum_command, |
1912 | + 'subcommands': ('upgrade',)}, |
1913 | + } |
1914 | + if osfamily not in distro_cfg: |
1915 | + raise ValueError('Distro "%s" does not have system_upgrade support' |
1916 | + % osfamily) |
1917 | + |
1918 | + for mode in distro_cfg[osfamily]['subcommands']: |
1919 | + ret = distro_cfg[osfamily]['function']( |
1920 | + mode, opts=opts, target=target, |
1921 | + env=env, allow_daemons=allow_daemons) |
1922 | + return ret |
1923 | + |
1924 | + |
1925 | +def install_packages(pkglist, osfamily=None, opts=None, target=None, env=None, |
1926 | + allow_daemons=False): |
1927 | + if isinstance(pkglist, str): |
1928 | + pkglist = [pkglist] |
1929 | + |
1930 | + if not osfamily: |
1931 | + osfamily = get_osfamily(target=target) |
1932 | + |
1933 | + installer_map = { |
1934 | + DISTROS.debian: run_apt_command, |
1935 | + DISTROS.redhat: run_yum_command, |
1936 | + } |
1937 | + |
1938 | + install_cmd = installer_map.get(osfamily) |
1939 | + if not install_cmd: |
1940 | + raise ValueError('No package install command for distro: %s' % |
1941 | + osfamily) |
1942 | + |
1943 | + return install_cmd('install', args=pkglist, opts=opts, target=target, |
1944 | + env=env, allow_daemons=allow_daemons) |
1945 | + |
1946 | + |
1947 | +def has_pkg_available(pkg, target=None, osfamily=None): |
1948 | + if not osfamily: |
1949 | + osfamily = get_osfamily(target=target) |
1950 | + |
1951 | + if osfamily not in [DISTROS.debian, DISTROS.redhat]: |
1952 | + raise ValueError('has_pkg_available: unsupported distro family: %s' |
1953 | + % osfamily) |
1954 | + |
1955 | + if osfamily == DISTROS.debian: |
1956 | + out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target) |
1957 | + for item in out.splitlines(): |
1958 | + if pkg == item.strip(): |
1959 | + return True |
1960 | + return False |
1961 | + |
1962 | + if osfamily == DISTROS.redhat: |
1963 | + out, _ = run_yum_command('list', opts=['--cacheonly']) |
1964 | + for item in out.splitlines(): |
1965 | + if item.lower().startswith(pkg.lower()): |
1966 | + return True |
1967 | + return False |
1968 | + |
1969 | + |
1970 | +def get_installed_packages(target=None): |
1971 | + if which('dpkg-query', target=target): |
1972 | + (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True) |
1973 | + elif which('rpm', target=target): |
1974 | + # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget |
1975 | + with ChrootableTarget(target) as in_chroot: |
1976 | + (out, _) = in_chroot.subp(['rpm', '-qa', '--queryformat', |
1977 | + 'ii %{NAME} %{VERSION}-%{RELEASE}\n'], |
1978 | + target=target, capture=True) |
1979 | + if not out: |
1980 | + raise ValueError('No package query tool') |
1981 | + |
1982 | + pkgs_inst = set() |
1983 | + for line in out.splitlines(): |
1984 | + try: |
1985 | + (state, pkg, other) = line.split(None, 2) |
1986 | + except ValueError: |
1987 | + continue |
1988 | + if state.startswith("hi") or state.startswith("ii"): |
1989 | + pkgs_inst.add(re.sub(":.*", "", pkg)) |
1990 | + |
1991 | + return pkgs_inst |
1992 | + |
1993 | + |
1994 | +def has_pkg_installed(pkg, target=None): |
1995 | + try: |
1996 | + out, _ = subp(['dpkg-query', '--show', '--showformat', |
1997 | + '${db:Status-Abbrev}', pkg], |
1998 | + capture=True, target=target) |
1999 | + return out.rstrip() == "ii" |
2000 | + except ProcessExecutionError: |
2001 | + return False |
2002 | + |
2003 | + |
2004 | +def parse_dpkg_version(raw, name=None, semx=None): |
2005 | + """Parse a dpkg version string into various parts and calculate a |
2006 | + numerical value of the version for use in comparing package versions |
2007 | + |
2008 | + Native packages (without a '-'), will have the package version treated |
2009 | + as the upstream version. |
2010 | + |
2011 | + returns a dictionary with fields: |
2012 | + 'major' (int), 'minor' (int), 'micro' (int), |
2013 | + 'semantic_version' (int), |
2014 | + 'extra' (string), 'raw' (string), 'upstream' (string), |
2015 | + 'name' (present only if name is not None) |
2016 | + """ |
2017 | + if not isinstance(raw, string_types): |
2018 | + raise TypeError( |
2019 | + "Invalid type %s for parse_dpkg_version" % raw.__class__) |
2020 | + |
2021 | + if semx is None: |
2022 | + semx = (10000, 100, 1) |
2023 | + |
2024 | + if "-" in raw: |
2025 | + upstream = raw.rsplit('-', 1)[0] |
2026 | + else: |
2027 | + # this is a native package, package version treated as upstream. |
2028 | + upstream = raw |
2029 | + |
2030 | + match = re.search(r'[^0-9.]', upstream) |
2031 | + if match: |
2032 | + extra = upstream[match.start():] |
2033 | + upstream_base = upstream[:match.start()] |
2034 | + else: |
2035 | + upstream_base = upstream |
2036 | + extra = None |
2037 | + |
2038 | + toks = upstream_base.split(".", 2) |
2039 | + if len(toks) == 3: |
2040 | + major, minor, micro = toks |
2041 | + elif len(toks) == 2: |
2042 | + major, minor, micro = (toks[0], toks[1], 0) |
2043 | + elif len(toks) == 1: |
2044 | + major, minor, micro = (toks[0], 0, 0) |
2045 | + |
2046 | + version = { |
2047 | + 'major': int(major), |
2048 | + 'minor': int(minor), |
2049 | + 'micro': int(micro), |
2050 | + 'extra': extra, |
2051 | + 'raw': raw, |
2052 | + 'upstream': upstream, |
2053 | + } |
2054 | + if name: |
2055 | + version['name'] = name |
2056 | + |
2057 | + if semx: |
2058 | + try: |
2059 | + version['semantic_version'] = int( |
2060 | + int(major) * semx[0] + int(minor) * semx[1] + |
2061 | + int(micro) * semx[2]) |
2062 | + except (ValueError, IndexError): |
2063 | + version['semantic_version'] = None |
2064 | + |
2065 | + return version |
2066 | + |
2067 | + |
2068 | +def get_package_version(pkg, target=None, semx=None): |
2069 | + """Use dpkg-query to extract package pkg's version string |
2070 | + and parse the version string into a dictionary |
2071 | + """ |
2072 | + try: |
2073 | + out, _ = subp(['dpkg-query', '--show', '--showformat', |
2074 | + '${Version}', pkg], capture=True, target=target) |
2075 | + raw = out.rstrip() |
2076 | + return parse_dpkg_version(raw, name=pkg, semx=semx) |
2077 | + except ProcessExecutionError: |
2078 | + return None |
2079 | + |
2080 | + |
2081 | +# vi: ts=4 expandtab syntax=python |
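The version arithmetic in `parse_dpkg_version` above (split off the Debian revision, peel any non-numeric suffix, then weight major/minor/micro) can be exercised standalone. The sketch below is a simplified, hypothetical re-implementation for illustration, not curtin's API; it skips the type checks and `name` handling of the real function:

```python
import re


def parse_version(raw, semx=(10000, 100, 1)):
    # split off the debian revision; native packages have no '-'
    upstream = raw.rsplit('-', 1)[0] if '-' in raw else raw
    # anything from the first non-numeric, non-dot character on is 'extra'
    m = re.search(r'[^0-9.]', upstream)
    base, extra = ((upstream[:m.start()], upstream[m.start():]) if m
                   else (upstream, None))
    # pad to exactly three numeric components
    parts = (base.split('.', 2) + ['0', '0'])[:3]
    major, minor, micro = (int(p) for p in parts)
    return {'major': major, 'minor': minor, 'micro': micro, 'extra': extra,
            'upstream': upstream, 'raw': raw,
            'semantic_version': (major * semx[0] + minor * semx[1] +
                                 micro * semx[2])}
```

With the default weights, `'4.15.0-34.37'` yields a `semantic_version` of 41500, so kernel-style versions compare correctly as plain integers.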
2082 | diff --git a/curtin/futil.py b/curtin/futil.py |
2083 | index 506964e..e603f88 100644 |
2084 | --- a/curtin/futil.py |
2085 | +++ b/curtin/futil.py |
2086 | @@ -5,7 +5,8 @@ import pwd |
2087 | import os |
2088 | import warnings |
2089 | |
2090 | -from .util import write_file, target_path |
2091 | +from .util import write_file |
2092 | +from .paths import target_path |
2093 | from .log import LOG |
2094 | |
2095 | |
2096 | diff --git a/curtin/net/__init__.py b/curtin/net/__init__.py |
2097 | index b4c9b59..ef2ba26 100644 |
2098 | --- a/curtin/net/__init__.py |
2099 | +++ b/curtin/net/__init__.py |
2100 | @@ -572,63 +572,4 @@ def get_interface_mac(ifname): |
2101 | return read_sys_net(ifname, "address", enoent=False) |
2102 | |
2103 | |
2104 | -def network_config_required_packages(network_config, mapping=None): |
2105 | - |
2106 | - if network_config is None: |
2107 | - network_config = {} |
2108 | - |
2109 | - if not isinstance(network_config, dict): |
2110 | - raise ValueError('Invalid network configuration. Must be a dict') |
2111 | - |
2112 | - if mapping is None: |
2113 | - mapping = {} |
2114 | - |
2115 | - if not isinstance(mapping, dict): |
2116 | - raise ValueError('Invalid network mapping. Must be a dict') |
2117 | - |
2118 | - # allow top-level 'network' key |
2119 | - if 'network' in network_config: |
2120 | - network_config = network_config.get('network') |
2121 | - |
2122 | - # v1 has 'config' key and uses type: devtype elements |
2123 | - if 'config' in network_config: |
2124 | - dev_configs = set(device['type'] |
2125 | - for device in network_config['config']) |
2126 | - else: |
2127 | - # v2 has no config key |
2128 | - dev_configs = set(cfgtype for (cfgtype, cfg) in |
2129 | - network_config.items() if cfgtype not in ['version']) |
2130 | - |
2131 | - needed_packages = [] |
2132 | - for dev_type in dev_configs: |
2133 | - if dev_type in mapping: |
2134 | - needed_packages.extend(mapping[dev_type]) |
2135 | - |
2136 | - return needed_packages |
2137 | - |
2138 | - |
2139 | -def detect_required_packages_mapping(): |
2140 | - """Return a dictionary providing a versioned configuration which maps |
2141 | - network configuration elements to the packages which are required |
2142 | - for functionality. |
2143 | - """ |
2144 | - mapping = { |
2145 | - 1: { |
2146 | - 'handler': network_config_required_packages, |
2147 | - 'mapping': { |
2148 | - 'bond': ['ifenslave'], |
2149 | - 'bridge': ['bridge-utils'], |
2150 | - 'vlan': ['vlan']}, |
2151 | - }, |
2152 | - 2: { |
2153 | - 'handler': network_config_required_packages, |
2154 | - 'mapping': { |
2155 | - 'bonds': ['ifenslave'], |
2156 | - 'bridges': ['bridge-utils'], |
2157 | - 'vlans': ['vlan']} |
2158 | - }, |
2159 | - } |
2160 | - |
2161 | - return mapping |
2162 | - |
2163 | # vi: ts=4 expandtab syntax=python |
2164 | diff --git a/curtin/net/deps.py b/curtin/net/deps.py |
2165 | new file mode 100644 |
2166 | index 0000000..b98961d |
2167 | --- /dev/null |
2168 | +++ b/curtin/net/deps.py |
2169 | @@ -0,0 +1,72 @@ |
2170 | +# This file is part of curtin. See LICENSE file for copyright and license info. |
2171 | + |
2172 | +from curtin.distro import DISTROS |
2173 | + |
2174 | + |
2175 | +def network_config_required_packages(network_config, mapping=None): |
2176 | + |
2177 | + if network_config is None: |
2178 | + network_config = {} |
2179 | + |
2180 | + if not isinstance(network_config, dict): |
2181 | + raise ValueError('Invalid network configuration. Must be a dict') |
2182 | + |
2183 | + if mapping is None: |
2184 | + mapping = {} |
2185 | + |
2186 | + if not isinstance(mapping, dict): |
2187 | + raise ValueError('Invalid network mapping. Must be a dict') |
2188 | + |
2189 | + # allow top-level 'network' key |
2190 | + if 'network' in network_config: |
2191 | + network_config = network_config.get('network') |
2192 | + |
2193 | + # v1 has 'config' key and uses type: devtype elements |
2194 | + if 'config' in network_config: |
2195 | + dev_configs = set(device['type'] |
2196 | + for device in network_config['config']) |
2197 | + else: |
2198 | + # v2 has no config key |
2199 | + dev_configs = set(cfgtype for (cfgtype, cfg) in |
2200 | + network_config.items() if cfgtype not in ['version']) |
2201 | + |
2202 | + needed_packages = [] |
2203 | + for dev_type in dev_configs: |
2204 | + if dev_type in mapping: |
2205 | + needed_packages.extend(mapping[dev_type]) |
2206 | + |
2207 | + return needed_packages |
2208 | + |
2209 | + |
2210 | +def detect_required_packages_mapping(osfamily=DISTROS.debian): |
2211 | + """Return a dictionary providing a versioned configuration which maps |
2212 | + network configuration elements to the packages which are required |
2213 | + for functionality. |
2214 | + """ |
2215 | + # keys ending with 's' are v2 values |
2216 | + distro_mapping = { |
2217 | + DISTROS.debian: { |
2218 | + 'bond': ['ifenslave'], |
2219 | + 'bonds': [], |
2220 | + 'bridge': ['bridge-utils'], |
2221 | + 'bridges': [], |
2222 | + 'vlan': ['vlan'], |
2223 | + 'vlans': []}, |
2224 | + DISTROS.redhat: { |
2225 | + 'bond': [], |
2226 | + 'bonds': [], |
2227 | + 'bridge': [], |
2228 | + 'bridges': [], |
2229 | + 'vlan': [], |
2230 | + 'vlans': []}, |
2231 | + } |
2232 | + if osfamily not in distro_mapping: |
2233 | + raise ValueError('No net package mapping for distro: %s' % osfamily) |
2234 | + |
2235 | + return {1: {'handler': network_config_required_packages, |
2236 | + 'mapping': distro_mapping.get(osfamily)}, |
2237 | + 2: {'handler': network_config_required_packages, |
2238 | + 'mapping': distro_mapping.get(osfamily)}} |
2239 | + |
2240 | + |
2241 | +# vi: ts=4 expandtab syntax=python |
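The v1/v2 dispatch in `network_config_required_packages` above boils down to: collect the device types present in the config, then look each one up in the per-distro mapping. A condensed stand-alone sketch (the `required_packages` name and the sample configs are illustrative, not curtin's):

```python
def required_packages(network_config, mapping):
    # accept an optional top-level 'network' wrapper
    net = network_config.get('network', network_config)
    if 'config' in net:
        # v1: a list of typed device entries
        dev_types = {dev['type'] for dev in net['config']}
    else:
        # v2: top-level keys name the device classes
        dev_types = {key for key in net if key != 'version'}
    pkgs = []
    for dev_type in dev_types:
        pkgs.extend(mapping.get(dev_type, []))
    return pkgs


# a v1 config with a bridge: only 'bridge' hits the debian-style mapping
v1_cfg = {'network': {'version': 1, 'config': [
    {'type': 'bridge', 'name': 'br0'},
    {'type': 'physical', 'name': 'eth0'}]}}
debian_map = {'bond': ['ifenslave'], 'bridge': ['bridge-utils'],
              'vlan': ['vlan']}
```

This is why the redhat mapping in the diff is all empty lists: the same handler runs, but no extra packages are needed there.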
2242 | diff --git a/curtin/paths.py b/curtin/paths.py |
2243 | new file mode 100644 |
2244 | index 0000000..064b060 |
2245 | --- /dev/null |
2246 | +++ b/curtin/paths.py |
2247 | @@ -0,0 +1,34 @@ |
2248 | +# This file is part of curtin. See LICENSE file for copyright and license info. |
2249 | +import os |
2250 | + |
2251 | +try: |
2252 | + string_types = (basestring,) |
2253 | +except NameError: |
2254 | + string_types = (str,) |
2255 | + |
2256 | + |
2257 | +def target_path(target, path=None): |
2258 | + # return 'path' inside target, accepting target as None |
2259 | + if target in (None, ""): |
2260 | + target = "/" |
2261 | + elif not isinstance(target, string_types): |
2262 | + raise ValueError("Unexpected input for target: %s" % target) |
2263 | + else: |
2264 | + target = os.path.abspath(target) |
2265 | + # abspath("//") returns "//" specifically for 2 slashes. |
2266 | + if target.startswith("//"): |
2267 | + target = target[1:] |
2268 | + |
2269 | + if not path: |
2270 | + return target |
2271 | + |
2272 | + if not isinstance(path, string_types): |
2273 | + raise ValueError("Unexpected input for path: %s" % path) |
2274 | + |
2275 | + # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /. |
2276 | + while len(path) and path[0] == "/": |
2277 | + path = path[1:] |
2278 | + |
2279 | + return os.path.join(target, path) |
2280 | + |
2281 | +# vi: ts=4 expandtab syntax=python |
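The two corner cases `target_path` guards against are worth seeing concretely: POSIX `abspath("//")` preserves the double slash, and `os.path.join` discards the first argument when the second is absolute. A minimal sketch of the same behavior (illustrative; it omits the real function's type checks):

```python
import os


def target_path(target, path=None):
    # None/"" mean the live system root
    target = os.path.abspath(target) if target else "/"
    # abspath("//") returns "//" specifically for two slashes (POSIX)
    if target.startswith("//"):
        target = target[1:]
    if not path:
        return target
    # strip leading slashes so os.path.join cannot escape the target
    return os.path.join(target, path.lstrip("/"))
```

Without the `lstrip`, `os.path.join("/tmp/t", "/etc/fstab")` would return `/etc/fstab` and operations meant for the install target would hit the host filesystem.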
2282 | diff --git a/curtin/util.py b/curtin/util.py |
2283 | index 29bf06e..238d7c5 100644 |
2284 | --- a/curtin/util.py |
2285 | +++ b/curtin/util.py |
2286 | @@ -4,7 +4,6 @@ import argparse |
2287 | import collections |
2288 | from contextlib import contextmanager |
2289 | import errno |
2290 | -import glob |
2291 | import json |
2292 | import os |
2293 | import platform |
2294 | @@ -38,15 +37,16 @@ except NameError: |
2295 | # python3 does not have a long type. |
2296 | numeric_types = (int, float) |
2297 | |
2298 | +from . import paths |
2299 | from .log import LOG, log_call |
2300 | |
2301 | _INSTALLED_HELPERS_PATH = 'usr/lib/curtin/helpers' |
2302 | _INSTALLED_MAIN = 'usr/bin/curtin' |
2303 | |
2304 | -_LSB_RELEASE = {} |
2305 | _USES_SYSTEMD = None |
2306 | _HAS_UNSHARE_PID = None |
2307 | |
2308 | + |
2309 | _DNS_REDIRECT_IP = None |
2310 | |
2311 | # matcher used in template rendering functions |
2312 | @@ -61,7 +61,7 @@ def _subp(args, data=None, rcs=None, env=None, capture=False, |
2313 | rcs = [0] |
2314 | devnull_fp = None |
2315 | |
2316 | - tpath = target_path(target) |
2317 | + tpath = paths.target_path(target) |
2318 | chroot_args = [] if tpath == "/" else ['chroot', target] |
2319 | sh_args = ['sh', '-c'] if shell else [] |
2320 | if isinstance(args, string_types): |
2321 | @@ -165,7 +165,7 @@ def _get_unshare_pid_args(unshare_pid=None, target=None, euid=None): |
2322 | if euid is None: |
2323 | euid = os.geteuid() |
2324 | |
2325 | - tpath = target_path(target) |
2326 | + tpath = paths.target_path(target) |
2327 | |
2328 | unshare_pid_in = unshare_pid |
2329 | if unshare_pid is None: |
2330 | @@ -595,7 +595,7 @@ def disable_daemons_in_root(target): |
2331 | 'done', |
2332 | '']) |
2333 | |
2334 | - fpath = target_path(target, "/usr/sbin/policy-rc.d") |
2335 | + fpath = paths.target_path(target, "/usr/sbin/policy-rc.d") |
2336 | |
2337 | if os.path.isfile(fpath): |
2338 | return False |
2339 | @@ -606,7 +606,7 @@ def disable_daemons_in_root(target): |
2340 | |
2341 | def undisable_daemons_in_root(target): |
2342 | try: |
2343 | - os.unlink(target_path(target, "/usr/sbin/policy-rc.d")) |
2344 | + os.unlink(paths.target_path(target, "/usr/sbin/policy-rc.d")) |
2345 | except OSError as e: |
2346 | if e.errno != errno.ENOENT: |
2347 | raise |
2348 | @@ -618,7 +618,7 @@ class ChrootableTarget(object): |
2349 | def __init__(self, target, allow_daemons=False, sys_resolvconf=True): |
2350 | if target is None: |
2351 | target = "/" |
2352 | - self.target = target_path(target) |
2353 | + self.target = paths.target_path(target) |
2354 | self.mounts = ["/dev", "/proc", "/sys"] |
2355 | self.umounts = [] |
2356 | self.disabled_daemons = False |
2357 | @@ -628,14 +628,14 @@ class ChrootableTarget(object): |
2358 | |
2359 | def __enter__(self): |
2360 | for p in self.mounts: |
2361 | - tpath = target_path(self.target, p) |
2362 | + tpath = paths.target_path(self.target, p) |
2363 | if do_mount(p, tpath, opts='--bind'): |
2364 | self.umounts.append(tpath) |
2365 | |
2366 | if not self.allow_daemons: |
2367 | self.disabled_daemons = disable_daemons_in_root(self.target) |
2368 | |
2369 | - rconf = target_path(self.target, "/etc/resolv.conf") |
2370 | + rconf = paths.target_path(self.target, "/etc/resolv.conf") |
2371 | target_etc = os.path.dirname(rconf) |
2372 | if self.target != "/" and os.path.isdir(target_etc): |
2373 | # never muck with resolv.conf on / |
2374 | @@ -660,13 +660,13 @@ class ChrootableTarget(object): |
2375 | undisable_daemons_in_root(self.target) |
2376 | |
2377 | # if /dev is to be unmounted, udevadm settle (LP: #1462139) |
2378 | - if target_path(self.target, "/dev") in self.umounts: |
2379 | + if paths.target_path(self.target, "/dev") in self.umounts: |
2380 | log_call(subp, ['udevadm', 'settle']) |
2381 | |
2382 | for p in reversed(self.umounts): |
2383 | do_umount(p) |
2384 | |
2385 | - rconf = target_path(self.target, "/etc/resolv.conf") |
2386 | + rconf = paths.target_path(self.target, "/etc/resolv.conf") |
2387 | if self.sys_resolvconf and self.rconf_d: |
2388 | os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf) |
2389 | shutil.rmtree(self.rconf_d) |
2390 | @@ -676,7 +676,7 @@ class ChrootableTarget(object): |
2391 | return subp(*args, **kwargs) |
2392 | |
2393 | def path(self, path): |
2394 | - return target_path(self.target, path) |
2395 | + return paths.target_path(self.target, path) |
2396 | |
2397 | |
2398 | def is_exe(fpath): |
2399 | @@ -685,29 +685,29 @@ def is_exe(fpath): |
2400 | |
2401 | |
2402 | def which(program, search=None, target=None): |
2403 | - target = target_path(target) |
2404 | + target = paths.target_path(target) |
2405 | |
2406 | if os.path.sep in program: |
2407 | # if program had a '/' in it, then do not search PATH |
2408 | # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls |
2409 | # so effectively we set cwd to / (or target) |
2410 | - if is_exe(target_path(target, program)): |
2411 | + if is_exe(paths.target_path(target, program)): |
2412 | return program |
2413 | |
2414 | if search is None: |
2415 | - paths = [p.strip('"') for p in |
2416 | - os.environ.get("PATH", "").split(os.pathsep)] |
2417 | + candpaths = [p.strip('"') for p in |
2418 | + os.environ.get("PATH", "").split(os.pathsep)] |
2419 | if target == "/": |
2420 | - search = paths |
2421 | + search = candpaths |
2422 | else: |
2423 | - search = [p for p in paths if p.startswith("/")] |
2424 | + search = [p for p in candpaths if p.startswith("/")] |
2425 | |
2426 | # normalize path input |
2427 | search = [os.path.abspath(p) for p in search] |
2428 | |
2429 | for path in search: |
2430 | ppath = os.path.sep.join((path, program)) |
2431 | - if is_exe(target_path(target, ppath)): |
2432 | + if is_exe(paths.target_path(target, ppath)): |
2433 | return ppath |
2434 | |
2435 | return None |
2436 | @@ -773,116 +773,6 @@ def get_architecture(target=None): |
2437 | return out.strip() |
2438 | |
2439 | |
2440 | -def has_pkg_available(pkg, target=None): |
2441 | - out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target) |
2442 | - for item in out.splitlines(): |
2443 | - if pkg == item.strip(): |
2444 | - return True |
2445 | - return False |
2446 | - |
2447 | - |
2448 | -def get_installed_packages(target=None): |
2449 | - (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True) |
2450 | - |
2451 | - pkgs_inst = set() |
2452 | - for line in out.splitlines(): |
2453 | - try: |
2454 | - (state, pkg, other) = line.split(None, 2) |
2455 | - except ValueError: |
2456 | - continue |
2457 | - if state.startswith("hi") or state.startswith("ii"): |
2458 | - pkgs_inst.add(re.sub(":.*", "", pkg)) |
2459 | - |
2460 | - return pkgs_inst |
2461 | - |
2462 | - |
2463 | -def has_pkg_installed(pkg, target=None): |
2464 | - try: |
2465 | - out, _ = subp(['dpkg-query', '--show', '--showformat', |
2466 | - '${db:Status-Abbrev}', pkg], |
2467 | - capture=True, target=target) |
2468 | - return out.rstrip() == "ii" |
2469 | - except ProcessExecutionError: |
2470 | - return False |
2471 | - |
2472 | - |
2473 | -def parse_dpkg_version(raw, name=None, semx=None): |
2474 | - """Parse a dpkg version string into various parts and calcualate a |
2475 | - numerical value of the version for use in comparing package versions |
2476 | - |
2477 | - Native packages (without a '-'), will have the package version treated |
2478 | - as the upstream version. |
2479 | - |
2480 | - returns a dictionary with fields: |
2481 | - 'major' (int), 'minor' (int), 'micro' (int), |
2482 | - 'semantic_version' (int), |
2483 | - 'extra' (string), 'raw' (string), 'upstream' (string), |
2484 | - 'name' (present only if name is not None) |
2485 | - """ |
2486 | - if not isinstance(raw, string_types): |
2487 | - raise TypeError( |
2488 | - "Invalid type %s for parse_dpkg_version" % raw.__class__) |
2489 | - |
2490 | - if semx is None: |
2491 | - semx = (10000, 100, 1) |
2492 | - |
2493 | - if "-" in raw: |
2494 | - upstream = raw.rsplit('-', 1)[0] |
2495 | - else: |
2496 | - # this is a native package, package version treated as upstream. |
2497 | - upstream = raw |
2498 | - |
2499 | - match = re.search(r'[^0-9.]', upstream) |
2500 | - if match: |
2501 | - extra = upstream[match.start():] |
2502 | - upstream_base = upstream[:match.start()] |
2503 | - else: |
2504 | - upstream_base = upstream |
2505 | - extra = None |
2506 | - |
2507 | - toks = upstream_base.split(".", 2) |
2508 | - if len(toks) == 3: |
2509 | - major, minor, micro = toks |
2510 | - elif len(toks) == 2: |
2511 | - major, minor, micro = (toks[0], toks[1], 0) |
2512 | - elif len(toks) == 1: |
2513 | - major, minor, micro = (toks[0], 0, 0) |
2514 | - |
2515 | - version = { |
2516 | - 'major': int(major), |
2517 | - 'minor': int(minor), |
2518 | - 'micro': int(micro), |
2519 | - 'extra': extra, |
2520 | - 'raw': raw, |
2521 | - 'upstream': upstream, |
2522 | - } |
2523 | - if name: |
2524 | - version['name'] = name |
2525 | - |
2526 | - if semx: |
2527 | - try: |
2528 | - version['semantic_version'] = int( |
2529 | - int(major) * semx[0] + int(minor) * semx[1] + |
2530 | - int(micro) * semx[2]) |
2531 | - except (ValueError, IndexError): |
2532 | - version['semantic_version'] = None |
2533 | - |
2534 | - return version |
2535 | - |
2536 | - |
2537 | -def get_package_version(pkg, target=None, semx=None): |
2538 | - """Use dpkg-query to extract package pkg's version string |
2539 | - and parse the version string into a dictionary |
2540 | - """ |
2541 | - try: |
2542 | - out, _ = subp(['dpkg-query', '--show', '--showformat', |
2543 | - '${Version}', pkg], capture=True, target=target) |
2544 | - raw = out.rstrip() |
2545 | - return parse_dpkg_version(raw, name=pkg, semx=semx) |
2546 | - except ProcessExecutionError: |
2547 | - return None |
2548 | - |
2549 | - |
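The dpkg version helpers removed here move into the new ``curtin/distro.py`` module. Their core parsing logic can be sketched standalone as below; this is a simplified illustrative reimplementation, not the exact relocated code (the original also records ``name`` and guards against non-string input):

```python
import re

def parse_dpkg_version(raw, semx=(10000, 100, 1)):
    # The Debian revision follows the last '-'; native packages have none,
    # so the whole string is treated as the upstream version.
    upstream = raw.rsplit('-', 1)[0] if '-' in raw else raw
    # Everything from the first non-numeric, non-dot character is 'extra'.
    m = re.search(r'[^0-9.]', upstream)
    base = upstream[:m.start()] if m else upstream
    extra = upstream[m.start():] if m else None
    # Pad out to major.minor.micro.
    toks = (base.split('.', 2) + ['0', '0'])[:3]
    major, minor, micro = (int(t) for t in toks)
    return {'major': major, 'minor': minor, 'micro': micro,
            'extra': extra, 'upstream': upstream, 'raw': raw,
            'semantic_version': (major * semx[0] + minor * semx[1] +
                                 micro * semx[2])}

print(parse_dpkg_version('2.02~beta2-36ubuntu3')['semantic_version'])  # 20200
```

The default weights (10000, 100, 1) collapse the triple into a single comparable integer, which is what the vmtest code uses for version gating.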
2550 | def find_newer(src, files): |
2551 | mtime = os.stat(src).st_mtime |
2552 | return [f for f in files if |
2553 | @@ -907,134 +797,6 @@ def set_unexecutable(fname, strict=False): |
2554 | return cur |
2555 | |
2556 | |
2557 | -def apt_update(target=None, env=None, force=False, comment=None, |
2558 | - retries=None): |
2559 | - |
2560 | - marker = "tmp/curtin.aptupdate" |
2561 | - if target is None: |
2562 | - target = "/" |
2563 | - |
2564 | - if env is None: |
2565 | - env = os.environ.copy() |
2566 | - |
2567 | - if retries is None: |
2568 | - # by default run apt-update up to 3 times to allow |
2569 | - # for transient failures |
2570 | - retries = (1, 2, 3) |
2571 | - |
2572 | - if comment is None: |
2573 | - comment = "no comment provided" |
2574 | - |
2575 | - if comment.endswith("\n"): |
2576 | - comment = comment[:-1] |
2577 | - |
2578 | - marker = target_path(target, marker) |
2579 | - # if marker exists, check if there are files that would make it obsolete |
2580 | - listfiles = [target_path(target, "/etc/apt/sources.list")] |
2581 | - listfiles += glob.glob( |
2582 | - target_path(target, "etc/apt/sources.list.d/*.list")) |
2583 | - |
2584 | - if os.path.exists(marker) and not force: |
2585 | - if len(find_newer(marker, listfiles)) == 0: |
2586 | - return |
2587 | - |
2588 | - restore_perms = [] |
2589 | - |
2590 | - abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp")) |
2591 | - try: |
2592 | - abs_slist = abs_tmpdir + "/sources.list" |
2593 | - abs_slistd = abs_tmpdir + "/sources.list.d" |
2594 | - ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir) |
2595 | - ch_slist = ch_tmpdir + "/sources.list" |
2596 | - ch_slistd = ch_tmpdir + "/sources.list.d" |
2597 | - |
2598 | - # this file gets executed on apt-get update sometimes. (LP: #1527710) |
2599 | - motd_update = target_path( |
2600 | - target, "/usr/lib/update-notifier/update-motd-updates-available") |
2601 | - pmode = set_unexecutable(motd_update) |
2602 | - if pmode is not None: |
2603 | - restore_perms.append((motd_update, pmode),) |
2604 | - |
2605 | - # create tmpdir/sources.list with all lines other than deb-src |
2606 | - # avoid apt complaining by using existing and empty dir for sourceparts |
2607 | - os.mkdir(abs_slistd) |
2608 | - with open(abs_slist, "w") as sfp: |
2609 | - for sfile in listfiles: |
2610 | - with open(sfile, "r") as fp: |
2611 | - contents = fp.read() |
2612 | - for line in contents.splitlines(): |
2613 | - line = line.lstrip() |
2614 | - if not line.startswith("deb-src"): |
2615 | - sfp.write(line + "\n") |
2616 | - |
2617 | - update_cmd = [ |
2618 | - 'apt-get', '--quiet', |
2619 | - '--option=Acquire::Languages=none', |
2620 | - '--option=Dir::Etc::sourcelist=%s' % ch_slist, |
2621 | - '--option=Dir::Etc::sourceparts=%s' % ch_slistd, |
2622 | - 'update'] |
2623 | - |
2624 | - # do not use 'run_apt_command' so we can pass 'retries' to subp |
2625 | - with ChrootableTarget(target, allow_daemons=True) as inchroot: |
2626 | - inchroot.subp(update_cmd, env=env, retries=retries) |
2627 | - finally: |
2628 | - for fname, perms in restore_perms: |
2629 | - os.chmod(fname, perms) |
2630 | - if abs_tmpdir: |
2631 | - shutil.rmtree(abs_tmpdir) |
2632 | - |
2633 | - with open(marker, "w") as fp: |
2634 | - fp.write(comment + "\n") |
2635 | - |
2636 | - |
2637 | -def run_apt_command(mode, args=None, aptopts=None, env=None, target=None, |
2638 | - execute=True, allow_daemons=False): |
2639 | - opts = ['--quiet', '--assume-yes', |
2640 | - '--option=Dpkg::options::=--force-unsafe-io', |
2641 | - '--option=Dpkg::Options::=--force-confold'] |
2642 | - |
2643 | - if args is None: |
2644 | - args = [] |
2645 | - |
2646 | - if aptopts is None: |
2647 | - aptopts = [] |
2648 | - |
2649 | - if env is None: |
2650 | - env = os.environ.copy() |
2651 | - env['DEBIAN_FRONTEND'] = 'noninteractive' |
2652 | - |
2653 | - if which('eatmydata', target=target): |
2654 | - emd = ['eatmydata'] |
2655 | - else: |
2656 | - emd = [] |
2657 | - |
2658 | - cmd = emd + ['apt-get'] + opts + aptopts + [mode] + args |
2659 | - if not execute: |
2660 | - return env, cmd |
2661 | - |
2662 | - apt_update(target, env=env, comment=' '.join(cmd)) |
2663 | - with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: |
2664 | - return inchroot.subp(cmd, env=env) |
2665 | - |
2666 | - |
2667 | -def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False): |
2668 | - LOG.debug("Upgrading system in %s", target) |
2669 | - for mode in ('dist-upgrade', 'autoremove'): |
2670 | - ret = run_apt_command( |
2671 | - mode, aptopts=aptopts, target=target, |
2672 | - env=env, allow_daemons=allow_daemons) |
2673 | - return ret |
2674 | - |
2675 | - |
2676 | -def install_packages(pkglist, aptopts=None, target=None, env=None, |
2677 | - allow_daemons=False): |
2678 | - if isinstance(pkglist, str): |
2679 | - pkglist = [pkglist] |
2680 | - return run_apt_command( |
2681 | - 'install', args=pkglist, |
2682 | - aptopts=aptopts, target=target, env=env, allow_daemons=allow_daemons) |
2683 | - |
2684 | - |
2685 | def is_uefi_bootable(): |
2686 | return os.path.exists('/sys/firmware/efi') is True |
2687 | |
2688 | @@ -1106,7 +868,7 @@ def run_hook_if_exists(target, hook): |
2689 | """ |
2690 | Look for "hook" in "target" and run it |
2691 | """ |
2692 | - target_hook = target_path(target, '/curtin/' + hook) |
2693 | + target_hook = paths.target_path(target, '/curtin/' + hook) |
2694 | if os.path.isfile(target_hook): |
2695 | LOG.debug("running %s" % target_hook) |
2696 | subp([target_hook]) |
2697 | @@ -1261,41 +1023,6 @@ def is_file_not_found_exc(exc): |
2698 | exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO)) |
2699 | |
2700 | |
2701 | -def _lsb_release(target=None): |
2702 | - fmap = {'Codename': 'codename', 'Description': 'description', |
2703 | - 'Distributor ID': 'id', 'Release': 'release'} |
2704 | - |
2705 | - data = {} |
2706 | - try: |
2707 | - out, _ = subp(['lsb_release', '--all'], capture=True, target=target) |
2708 | - for line in out.splitlines(): |
2709 | - fname, _, val = line.partition(":") |
2710 | - if fname in fmap: |
2711 | - data[fmap[fname]] = val.strip() |
2712 | - missing = [k for k in fmap.values() if k not in data] |
2713 | - if len(missing): |
2714 | - LOG.warn("Missing fields in lsb_release --all output: %s", |
2715 | - ','.join(missing)) |
2716 | - |
2717 | - except ProcessExecutionError as err: |
2718 | - LOG.warn("Unable to get lsb_release --all: %s", err) |
2719 | - data = {v: "UNAVAILABLE" for v in fmap.values()} |
2720 | - |
2721 | - return data |
2722 | - |
2723 | - |
2724 | -def lsb_release(target=None): |
2725 | - if target_path(target) != "/": |
2726 | - # do not use or update cache if target is provided |
2727 | - return _lsb_release(target) |
2728 | - |
2729 | - global _LSB_RELEASE |
2730 | - if not _LSB_RELEASE: |
2731 | - data = _lsb_release() |
2732 | - _LSB_RELEASE.update(data) |
2733 | - return _LSB_RELEASE |
2734 | - |
2735 | - |
2736 | class MergedCmdAppend(argparse.Action): |
2737 | """This appends to a list in order of appearance both the option string |
2738 | and the value""" |
2739 | @@ -1430,31 +1157,6 @@ def is_resolvable_url(url): |
2740 | return is_resolvable(urlparse(url).hostname) |
2741 | |
2742 | |
2743 | -def target_path(target, path=None): |
2744 | - # return 'path' inside target, accepting target as None |
2745 | - if target in (None, ""): |
2746 | - target = "/" |
2747 | - elif not isinstance(target, string_types): |
2748 | - raise ValueError("Unexpected input for target: %s" % target) |
2749 | - else: |
2750 | - target = os.path.abspath(target) |
2751 | - # abspath("//") returns "//" specifically for 2 slashes. |
2752 | - if target.startswith("//"): |
2753 | - target = target[1:] |
2754 | - |
2755 | - if not path: |
2756 | - return target |
2757 | - |
2758 | - if not isinstance(path, string_types): |
2759 | - raise ValueError("Unexpected input for path: %s" % path) |
2760 | - |
2761 | - # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /. |
2762 | - while len(path) and path[0] == "/": |
2763 | - path = path[1:] |
2764 | - |
2765 | - return os.path.join(target, path) |
2766 | - |
2767 | - |
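``target_path`` moves wholesale into the new ``curtin/paths.py`` module. Its behavior, minus the explicit type checks on ``target`` and ``path``, can be sketched as:

```python
import os

def target_path(target, path=None):
    # None or "" means operate on the live system root.
    if target in (None, ""):
        target = "/"
    else:
        target = os.path.abspath(target)
        # POSIX abspath preserves a leading double slash; drop one.
        if target.startswith("//"):
            target = target[1:]
    if not path:
        return target
    # os.path.join("/mnt", "/etc") would return "/etc"; strip leading "/"
    # so the joined path always stays inside the target.
    return os.path.join(target, path.lstrip("/"))

print(target_path("/tmp/target", "/etc/fstab"))  # /tmp/target/etc/fstab
```

This is why callers throughout the diff switch from ``util.target_path`` to ``paths.target_path`` with no behavioral change.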
2768 | class RunInChroot(ChrootableTarget): |
2769 | """Backwards compatibility for RunInChroot (LP: #1617375). |
2770 | It needs to work like: |
2771 | diff --git a/doc/topics/config.rst b/doc/topics/config.rst |
2772 | index 76e520d..218bc17 100644 |
2773 | --- a/doc/topics/config.rst |
2774 | +++ b/doc/topics/config.rst |
2775 | @@ -14,6 +14,7 @@ Curtin's top level config keys are as follows: |
2776 | - apt_mirrors (``apt_mirrors``) |
2777 | - apt_proxy (``apt_proxy``) |
2778 | - block-meta (``block``) |
2779 | +- curthooks (``curthooks``) |
2780 | - debconf_selections (``debconf_selections``) |
2781 | - disable_overlayroot (``disable_overlayroot``) |
2782 | - grub (``grub``) |
2783 | @@ -110,6 +111,45 @@ Specify the filesystem label on the boot partition. |
2784 | label: my-boot-partition |
2785 | |
2786 | |
2787 | +curthooks |
2788 | +~~~~~~~~~ |
2789 | +Configure how Curtin determines what :ref:`curthooks` to run during the installation |
2790 | +process. |
2791 | + |
2792 | +**mode**: *<['auto', 'builtin', 'target']>* |
2793 | + |
2794 | +The default mode is ``auto``. |
2795 | + |
2796 | +In ``auto`` mode, curtin will execute curthooks within the image if present. |
2797 | +For images without curthooks inside, curtin will execute its built-in hooks. |
2798 | + |
2799 | +Currently the built-in curthooks support the following OS families: |
2800 | + |
2801 | +- Ubuntu |
2802 | +- CentOS |
2803 | + |
2804 | +When specifying ``builtin``, curtin will only run the curthooks present in |
2805 | +Curtin, ignoring any curthooks that may be present in the target operating |
2806 | +system. |
2807 | + |
2808 | +When specifying ``target``, curtin will attempt to run the curthooks in the target |
2809 | +operating system. If the target does NOT contain any curthooks, then the |
2810 | +built-in curthooks will be run instead. |
2811 | + |
2812 | +Any errors during execution of curthooks (built-in or target) will fail the |
2813 | +installation. |
2814 | + |
2815 | +**Example**:: |
2816 | + |
2817 | + # ignore any target curthooks |
2818 | + curthooks: |
2819 | + mode: builtin |
2820 | + |
2821 | + # Only run target curthooks, fall back to built-in |
2822 | + curthooks: |
2823 | + mode: target |
2824 | + |
2825 | + |
2826 | debconf_selections |
2827 | ~~~~~~~~~~~~~~~~~~ |
2828 | Curtin will update the target with debconf set-selection values. Users will |
2829 | diff --git a/doc/topics/curthooks.rst b/doc/topics/curthooks.rst |
2830 | index e5f341b..c59aeaf 100644 |
2831 | --- a/doc/topics/curthooks.rst |
2832 | +++ b/doc/topics/curthooks.rst |
2833 | @@ -1,7 +1,13 @@ |
2834 | +.. _curthooks: |
2835 | + |
2836 | ======================================== |
2837 | -Curthooks / New OS Support |
2838 | +Curthooks / New OS Support |
2839 | ======================================== |
2840 | -Curtin has built-in support for installation of Ubuntu. |
2841 | +Curtin has built-in support for installation of: |
2842 | + |
2843 | + - Ubuntu |
2844 | + - CentOS |
2845 | + |
2846 | Other operating systems are supported through a mechanism called |
2847 | 'curthooks' or 'curtin-hooks'. |
2848 | |
2849 | @@ -47,11 +53,21 @@ details. Specifically interesting to this stage are: |
2850 | - ``CONFIG``: This is a path to the curtin config file. It is provided so |
2851 | that additional configuration could be provided through to the OS |
2852 | customization. |
2853 | + - ``WORKING_DIR``: This is a path to a temporary directory where curtin |
2854 | + stores state and configuration files. |
2855 | |
2856 | .. **TODO**: We should add 'PYTHON' or 'CURTIN_PYTHON' to this environment |
2857 | so that the hook can easily run a python program with the same python |
2858 | that curtin ran with (ie, python2 or python3). |
2859 | |
2860 | +Running built-in hooks |
2861 | +---------------------- |
2862 | + |
2863 | +Curthooks may opt to run the built-in curthooks that are already provided in |
2864 | +curtin itself. To do so, an in-image curthook can import the ``curthooks`` |
2865 | +module and invoke the ``builtin_curthooks`` function passing in the required |
2866 | +parameters: config, target, and state. |
2867 | + |
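An in-image hook along these lines might look like the sketch below. The import path ``curtin.commands.curthooks`` is assumed from this branch's layout, and the ``builtin`` parameter is injected purely so the sketch runs without curtin installed; a real hook would let the import happen:

```python
def run_hook(config, target, state, builtin=None):
    """Delegate an in-image curtin hook to curtin's built-in curthooks.

    `builtin` is injectable only to make this sketch testable; a real
    hook would rely on the import below.
    """
    if builtin is None:
        from curtin.commands.curthooks import builtin_curthooks as builtin
    return builtin(config, target, state)

# Exercise the sketch with a stub standing in for the real function:
calls = []
run_hook({'curthooks': {'mode': 'auto'}}, '/target', {'config': '/tmp/cfg'},
         builtin=lambda cfg, tgt, st: calls.append((cfg, tgt, st)))
print(len(calls))  # 1
```

This lets an image ship a thin curtin-hooks shim that customizes only a few steps while reusing curtin's tested defaults for the rest.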
2868 | |
2869 | Networking configuration |
2870 | ------------------------ |
2871 | diff --git a/examples/tests/filesystem_battery.yaml b/examples/tests/filesystem_battery.yaml |
2872 | index 3b1edbf..4eae5b6 100644 |
2873 | --- a/examples/tests/filesystem_battery.yaml |
2874 | +++ b/examples/tests/filesystem_battery.yaml |
2875 | @@ -113,8 +113,8 @@ storage: |
2876 | - id: bind1 |
2877 | fstype: "none" |
2878 | options: "bind" |
2879 | - path: "/var/lib" |
2880 | - spec: "/my/bind-over-var-lib" |
2881 | + path: "/var/cache" |
2882 | + spec: "/my/bind-over-var-cache" |
2883 | type: mount |
2884 | - id: bind2 |
2885 | fstype: "none" |
2886 | diff --git a/helpers/common b/helpers/common |
2887 | index ac2d0f3..f9217b7 100644 |
2888 | --- a/helpers/common |
2889 | +++ b/helpers/common |
2890 | @@ -541,18 +541,18 @@ get_carryover_params() { |
2891 | } |
2892 | |
2893 | install_grub() { |
2894 | - local long_opts="uefi,update-nvram" |
2895 | + local long_opts="uefi,update-nvram,os-family:" |
2896 | local getopt_out="" mp_efi="" |
2897 | getopt_out=$(getopt --name "${0##*/}" \ |
2898 | --options "" --long "${long_opts}" -- "$@") && |
2899 | eval set -- "${getopt_out}" |
2900 | |
2901 | - local uefi=0 |
2902 | - local update_nvram=0 |
2903 | + local uefi=0 update_nvram=0 os_family="" |
2904 | |
2905 | while [ $# -ne 0 ]; do |
2906 | cur="$1"; next="$2"; |
2907 | case "$cur" in |
2908 | + --os-family) os_family=${next};; |
2909 | --uefi) uefi=$((${uefi}+1));; |
2910 | --update-nvram) update_nvram=$((${update_nvram}+1));; |
2911 | --) shift; break;; |
2912 | @@ -595,29 +595,88 @@ install_grub() { |
2913 | error "$mp_dev ($fstype) is not a block device!"; return 1; |
2914 | fi |
2915 | |
2916 | - # get dpkg arch |
2917 | - local dpkg_arch="" |
2918 | - dpkg_arch=$(chroot "$mp" dpkg --print-architecture) |
2919 | - r=$? |
2920 | + local os_variant="" |
2921 | + if [ -e "${mp}/etc/os-release" ]; then |
2922 | + os_variant=$(chroot "$mp" \ |
2923 | + /bin/sh -c 'echo $(. /etc/os-release; echo $ID)') |
2924 | + else |
2925 | + # Centos6 doesn't have os-release, so check for centos/redhat release |
2926 | + # looks like: CentOS release 6.9 (Final) |
2927 | + for rel in $(ls ${mp}/etc/*-release); do |
2928 | + os_variant=$(awk '{print tolower($1)}' $rel) |
2929 | + [ -n "$os_variant" ] && break |
2930 | + done |
2931 | + fi |
2932 | + [ $? != 0 ] && |
2933 | + { error "Failed to read ID from $mp/etc/os-release"; return 1; } |
2934 | + |
2935 | + local rhel_ver="" |
2936 | + case $os_variant in |
2937 | + debian|ubuntu) os_family="debian";; |
2938 | + centos|rhel) |
2939 | + os_family="redhat" |
2940 | + rhel_ver=$(chroot "$mp" rpm -E '%rhel') |
2941 | + ;; |
2942 | + esac |
2943 | + |
2944 | + # ensure we have both settings, family and variant are needed |
2945 | + [ -n "${os_variant}" -a -n "${os_family}" ] || |
2946 | + { error "Failed to determine os variant and family"; return 1; } |
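The detection above can be exercised outside ``install_grub`` with the sketch below; unlike the helper, it inspects paths under ``$MP`` directly rather than chrooting into a mounted target, and ``MP`` defaulting to the live root is an assumption for illustration:

```shell
#!/bin/sh
# Sketch of the os_variant/os_family detection, run against $MP
# (defaults to the live root) instead of a chroot'ed target.
MP=${MP:-}
if [ -e "$MP/etc/os-release" ]; then
    os_variant=$(. "$MP/etc/os-release" && echo "$ID")
else
    # CentOS 6 has no os-release; fall back to /etc/*-release files,
    # whose first word names the distro (e.g. "CentOS release 6.9").
    for rel in "$MP"/etc/*-release; do
        [ -e "$rel" ] || continue
        os_variant=$(awk '{print tolower($1)}' "$rel")
        [ -n "$os_variant" ] && break
    done
fi
case $os_variant in
    debian|ubuntu) os_family="debian";;
    centos|rhel)   os_family="redhat";;
    *)             os_family="";;
esac
echo "variant=$os_variant family=$os_family"
```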
2947 | + |
2948 | + # get target arch |
2949 | + local target_arch="" r="1" |
2950 | + case $os_family in |
2951 | + debian) |
2952 | + target_arch=$(chroot "$mp" dpkg --print-architecture) |
2953 | + r=$? |
2954 | + ;; |
2955 | + redhat) |
2956 | + target_arch=$(chroot "$mp" rpm -E '%_arch') |
2957 | + r=$? |
2958 | + ;; |
2959 | + esac |
2960 | [ $r -eq 0 ] || { |
2961 | - error "failed to get dpkg architecture [$r]" |
2962 | + error "failed to get target architecture [$r]" |
2963 | return 1; |
2964 | } |
2965 | |
2966 | # grub is not the bootloader you are looking for |
2967 | - if [ "${dpkg_arch}" = "s390x" ]; then |
2968 | - return 0; |
2969 | + if [ "${target_arch}" = "s390x" ]; then |
2970 | + return 0; |
2971 | fi |
2972 | |
2973 | # set correct grub package |
2974 | - local grub_name="grub-pc" |
2975 | - local grub_target="i386-pc" |
2976 | - if [ "${dpkg_arch#ppc64}" != "${dpkg_arch}" ]; then |
2977 | + local grub_name="" |
2978 | + local grub_target="" |
2979 | + case "$target_arch" in |
2980 | + i386|amd64) |
2981 | + # debian |
2982 | + grub_name="grub-pc" |
2983 | + grub_target="i386-pc" |
2984 | + ;; |
2985 | + x86_64) |
2986 | + case $rhel_ver in |
2987 | + 6) grub_name="grub";; |
2988 | + 7) grub_name="grub2-pc";; |
2989 | + *) |
2990 | + error "Unknown rhel_ver [$rhel_ver]"; |
2991 | + return 1; |
2992 | + ;; |
2993 | + esac |
2994 | + grub_target="i386-pc" |
2995 | + ;; |
2996 | + esac |
2997 | + if [ "${target_arch#ppc64}" != "${target_arch}" ]; then |
2998 | grub_name="grub-ieee1275" |
2999 | grub_target="powerpc-ieee1275" |
3000 | elif [ "$uefi" -ge 1 ]; then |
3001 | - grub_name="grub-efi-$dpkg_arch" |
3002 | - case "$dpkg_arch" in |
3003 | + grub_name="grub-efi-$target_arch" |
3004 | + case "$target_arch" in |
3005 | + x86_64) |
3006 | + # centos 7+, no centos6 support |
3007 | + grub_name="grub2-efi-x64-modules" |
3008 | + grub_target="x86_64-efi" |
3009 | + ;; |
3010 | amd64) |
3011 | grub_target="x86_64-efi";; |
3012 | arm64) |
3013 | @@ -626,9 +685,19 @@ install_grub() { |
3014 | fi |
3015 | |
3016 | # check that the grub package is installed |
3017 | - tmp=$(chroot "$mp" dpkg-query --show \ |
3018 | - --showformat='${Status}\n' $grub_name) |
3019 | - r=$? |
3020 | + local r=$? |
3021 | + case $os_family in |
3022 | + debian) |
3023 | + tmp=$(chroot "$mp" dpkg-query --show \ |
3024 | + --showformat='${Status}\n' $grub_name) |
3025 | + r=$? |
3026 | + ;; |
3027 | + redhat) |
3028 | + tmp=$(chroot "$mp" rpm -q \ |
3029 | + --queryformat='install ok installed\n' $grub_name) |
3030 | + r=$? |
3031 | + ;; |
3032 | + esac |
3033 | if [ $r -ne 0 -a $r -ne 1 ]; then |
3034 | error "failed to check if $grub_name installed"; |
3035 | return 1; |
3036 | @@ -636,11 +705,16 @@ install_grub() { |
3037 | case "$tmp" in |
3038 | install\ ok\ installed) :;; |
3039 | *) debug 1 "$grub_name not installed, not doing anything"; |
3040 | - return 0;; |
3041 | + return 1;; |
3042 | esac |
3043 | |
3044 | local grub_d="etc/default/grub.d" |
3045 | local mygrub_cfg="$grub_d/50-curtin-settings.cfg" |
3046 | + case $os_family in |
3047 | + redhat) |
3048 | + grub_d="etc/default" |
3049 | + mygrub_cfg="etc/default/grub";; |
3050 | + esac |
3051 | [ -d "$mp/$grub_d" ] || mkdir -p "$mp/$grub_d" || |
3052 | { error "Failed to create $grub_d"; return 1; } |
3053 | |
3054 | @@ -659,14 +733,23 @@ install_grub() { |
3056 | error "Failed to get carryover parameters from cmdline"; |
3056 | return 1; |
3057 | } |
3058 | + # always append rd.auto=1 for centos |
3059 | + case $os_family in |
3060 | + redhat) |
3061 | + newargs="$newargs rd.auto=1";; |
3062 | + esac |
3063 | debug 1 "carryover command line params: $newargs" |
3064 | |
3065 | - : > "$mp/$mygrub_cfg" || |
3066 | - { error "Failed to write '$mygrub_cfg'"; return 1; } |
3067 | + case $os_family in |
3068 | + debian) |
3069 | + : > "$mp/$mygrub_cfg" || |
3070 | + { error "Failed to write '$mygrub_cfg'"; return 1; } |
3071 | + ;; |
3072 | + esac |
3073 | { |
3074 | [ "${REPLACE_GRUB_LINUX_DEFAULT:-1}" = "0" ] || |
3075 | echo "GRUB_CMDLINE_LINUX_DEFAULT=\"$newargs\"" |
3076 | - echo "# disable grub os prober that might find other OS installs." |
3077 | + echo "# Curtin disables the grub os-prober that might find other OS installs." |
3078 | echo "GRUB_DISABLE_OS_PROBER=true" |
3079 | echo "GRUB_TERMINAL=console" |
3080 | } >> "$mp/$mygrub_cfg" |
3081 | @@ -692,30 +775,46 @@ install_grub() { |
3082 | nvram="--no-nvram" |
3083 | if [ "$update_nvram" -ge 1 ]; then |
3084 | nvram="" |
3085 | - fi |
3086 | + fi |
3087 | debug 1 "curtin uefi: installing ${grub_name} to: /boot/efi" |
3088 | chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc ' |
3089 | echo "before grub-install efiboot settings" |
3090 | - efibootmgr || echo "WARN: efibootmgr exited $?" |
3091 | - dpkg-reconfigure "$1" |
3092 | - update-grub |
3093 | + efibootmgr -v || echo "WARN: efibootmgr exited $?" |
3094 | + bootid="$4" |
3095 | + grubpost="" |
3096 | + case $bootid in |
3097 | + debian|ubuntu) |
3098 | + grubcmd="grub-install" |
3099 | + dpkg-reconfigure "$1" |
3100 | + update-grub |
3101 | + ;; |
3102 | + centos|redhat|rhel) |
3103 | + grubcmd="grub2-install" |
3104 | + grubpost="grub2-mkconfig -o /boot/grub2/grub.cfg" |
3105 | + ;; |
3106 | + *) |
3107 | + echo "Unsupported OS: $bootid" 1>&2 |
3108 | + exit 1 |
3109 | + ;; |
3110 | + esac |
3111 | # grub-install in 12.04 does not contain --no-nvram, --target, |
3112 | # or --efi-directory |
3113 | target="--target=$2" |
3114 | no_nvram="$3" |
3115 | efi_dir="--efi-directory=/boot/efi" |
3116 | - gi_out=$(grub-install --help 2>&1) |
3117 | + gi_out=$($grubcmd --help 2>&1) |
3118 | echo "$gi_out" | grep -q -- "$no_nvram" || no_nvram="" |
3119 | echo "$gi_out" | grep -q -- "--target" || target="" |
3120 | echo "$gi_out" | grep -q -- "--efi-directory" || efi_dir="" |
3121 | - grub-install $target $efi_dir \ |
3122 | - --bootloader-id=ubuntu --recheck $no_nvram' -- \ |
3123 | - "${grub_name}" "${grub_target}" "$nvram" </dev/null || |
3124 | + $grubcmd $target $efi_dir \ |
3125 | + --bootloader-id=$bootid --recheck $no_nvram |
3126 | + [ -z "$grubpost" ] || $grubpost;' \ |
3127 | + -- "${grub_name}" "${grub_target}" "$nvram" "$os_variant" </dev/null || |
3128 | { error "failed to install grub!"; return 1; } |
3129 | |
3130 | chroot "$mp" sh -exc ' |
3131 | echo "after grub-install efiboot settings" |
3132 | - efibootmgr || echo "WARN: efibootmgr exited $?" |
3133 | + efibootmgr -v || echo "WARN: efibootmgr exited $?" |
3134 | ' -- </dev/null || |
3135 | { error "failed to list efi boot entries!"; return 1; } |
3136 | else |
3137 | @@ -728,10 +827,32 @@ install_grub() { |
3138 | debug 1 "curtin non-uefi: installing ${grub_name} to: ${grubdevs[*]}" |
3139 | chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc ' |
3140 | pkg=$1; shift; |
3141 | - dpkg-reconfigure "$pkg" |
3142 | - update-grub |
3143 | - for d in "$@"; do grub-install "$d" || exit; done' \ |
3144 | - -- "${grub_name}" "${grubdevs[@]}" </dev/null || |
3145 | + bootid=$1; shift; |
3146 | + bootver=$1; shift; |
3147 | + grubpost="" |
3148 | + case $bootid in |
3149 | + debian|ubuntu) |
3150 | + grubcmd="grub-install" |
3151 | + dpkg-reconfigure "$pkg" |
3152 | + update-grub |
3153 | + ;; |
3154 | + centos|redhat|rhel) |
3155 | + case $bootver in |
3156 | + 6) grubcmd="grub-install";; |
3157 | + 7) grubcmd="grub2-install" |
3158 | + grubpost="grub2-mkconfig -o /boot/grub2/grub.cfg";; |
3159 | + esac |
3160 | + ;; |
3161 | + *) |
3162 | + echo "Unsupported OS: $bootid" 1>&2 |
3163 | + exit 1 |
3164 | + ;; |
3165 | + esac |
3166 | + for d in "$@"; do |
3167 | + echo $grubcmd "$d"; |
3168 | + $grubcmd "$d" || exit; done |
3169 | + [ -z "$grubpost" ] || $grubpost;' \ |
3170 | + -- "${grub_name}" "${os_variant}" "${rhel_ver}" "${grubdevs[@]}" </dev/null || |
3171 | { error "failed to install grub!"; return 1; } |
3172 | fi |
3173 | |
3174 | diff --git a/tests/unittests/test_apt_custom_sources_list.py b/tests/unittests/test_apt_custom_sources_list.py |
3175 | index 5567dd5..a427ae9 100644 |
3176 | --- a/tests/unittests/test_apt_custom_sources_list.py |
3177 | +++ b/tests/unittests/test_apt_custom_sources_list.py |
3178 | @@ -11,6 +11,8 @@ from mock import call |
3179 | import textwrap |
3180 | import yaml |
3181 | |
3182 | +from curtin import distro |
3183 | +from curtin import paths |
3184 | from curtin import util |
3185 | from curtin.commands import apt_config |
3186 | from .helpers import CiTestCase |
3187 | @@ -106,7 +108,7 @@ class TestAptSourceConfigSourceList(CiTestCase): |
3188 | # make test independent to executing system |
3189 | with mock.patch.object(util, 'load_file', |
3190 | return_value=MOCKED_APT_SRC_LIST): |
3191 | - with mock.patch.object(util, 'lsb_release', |
3192 | + with mock.patch.object(distro, 'lsb_release', |
3193 | return_value={'codename': |
3194 | 'fakerel'}): |
3195 | apt_config.handle_apt(cfg, TARGET) |
3196 | @@ -115,10 +117,10 @@ class TestAptSourceConfigSourceList(CiTestCase): |
3197 | |
3198 | cloudfile = '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg' |
3199 | cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1) |
3200 | - calls = [call(util.target_path(TARGET, '/etc/apt/sources.list'), |
3201 | + calls = [call(paths.target_path(TARGET, '/etc/apt/sources.list'), |
3202 | expected, |
3203 | mode=0o644), |
3204 | - call(util.target_path(TARGET, cloudfile), |
3205 | + call(paths.target_path(TARGET, cloudfile), |
3206 | cloudconf, |
3207 | mode=0o644)] |
3208 | mockwrite.assert_has_calls(calls) |
3209 | @@ -147,19 +149,19 @@ class TestAptSourceConfigSourceList(CiTestCase): |
3210 | arch = util.get_architecture() |
3211 | # would fail inside the unittest context |
3212 | with mock.patch.object(util, 'get_architecture', return_value=arch): |
3213 | - with mock.patch.object(util, 'lsb_release', |
3214 | + with mock.patch.object(distro, 'lsb_release', |
3215 | return_value={'codename': 'fakerel'}): |
3216 | apt_config.handle_apt(cfg, target) |
3217 | |
3218 | self.assertEqual( |
3219 | EXPECTED_CONVERTED_CONTENT, |
3220 | - util.load_file(util.target_path(target, "/etc/apt/sources.list"))) |
3221 | - cloudfile = util.target_path( |
3222 | + util.load_file(paths.target_path(target, "/etc/apt/sources.list"))) |
3223 | + cloudfile = paths.target_path( |
3224 | target, '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg') |
3225 | self.assertEqual({'apt_preserve_sources_list': True}, |
3226 | yaml.load(util.load_file(cloudfile))) |
3227 | |
3228 | - @mock.patch("curtin.util.lsb_release") |
3229 | + @mock.patch("curtin.distro.lsb_release") |
3230 | @mock.patch("curtin.util.get_architecture", return_value="amd64") |
3231 | def test_trusty_source_lists(self, m_get_arch, m_lsb_release): |
3232 | """Support mirror equivalency with and without trailing /. |
3233 | @@ -199,7 +201,7 @@ class TestAptSourceConfigSourceList(CiTestCase): |
3234 | |
3235 | release = 'trusty' |
3236 | comps = 'main universe multiverse restricted' |
3237 | - easl = util.target_path(target, 'etc/apt/sources.list') |
3238 | + easl = paths.target_path(target, 'etc/apt/sources.list') |
3239 | |
3240 | orig_content = tmpl.format( |
3241 | mirror=orig_primary, security=orig_security, |
3242 | diff --git a/tests/unittests/test_apt_source.py b/tests/unittests/test_apt_source.py |
3243 | index 2ede986..353cdf8 100644 |
3244 | --- a/tests/unittests/test_apt_source.py |
3245 | +++ b/tests/unittests/test_apt_source.py |
3246 | @@ -12,8 +12,9 @@ import socket |
3247 | import mock |
3248 | from mock import call |
3249 | |
3250 | -from curtin import util |
3251 | +from curtin import distro |
3252 | from curtin import gpg |
3253 | +from curtin import util |
3254 | from curtin.commands import apt_config |
3255 | from .helpers import CiTestCase |
3256 | |
3257 | @@ -77,7 +78,7 @@ class TestAptSourceConfig(CiTestCase): |
3258 | |
3259 | @staticmethod |
3260 | def _add_apt_sources(*args, **kwargs): |
3261 | - with mock.patch.object(util, 'apt_update'): |
3262 | + with mock.patch.object(distro, 'apt_update'): |
3263 | apt_config.add_apt_sources(*args, **kwargs) |
3264 | |
3265 | @staticmethod |
3266 | @@ -86,7 +87,7 @@ class TestAptSourceConfig(CiTestCase): |
3267 | Get the most basic default mirror and release info to be used in tests |
3268 | """ |
3269 | params = {} |
3270 | - params['RELEASE'] = util.lsb_release()['codename'] |
3271 | + params['RELEASE'] = distro.lsb_release()['codename'] |
3272 | arch = util.get_architecture() |
3273 | params['MIRROR'] = apt_config.get_default_mirrors(arch)["PRIMARY"] |
3274 | return params |
3275 | @@ -472,7 +473,7 @@ class TestAptSourceConfig(CiTestCase): |
3276 | 'uri': |
3277 | 'http://testsec.ubuntu.com/%s/' % component}]} |
3278 | post = ("%s_dists_%s-updates_InRelease" % |
3279 | - (component, util.lsb_release()['codename'])) |
3280 | + (component, distro.lsb_release()['codename'])) |
3281 | fromfn = ("%s/%s_%s" % (pre, archive, post)) |
3282 | tofn = ("%s/test.ubuntu.com_%s" % (pre, post)) |
3283 | |
3284 | @@ -937,7 +938,7 @@ class TestDebconfSelections(CiTestCase): |
3285 | m_set_sel.assert_not_called() |
3286 | |
3287 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") |
3288 | - @mock.patch("curtin.commands.apt_config.util.get_installed_packages") |
3289 | + @mock.patch("curtin.commands.apt_config.distro.get_installed_packages") |
3290 | def test_set_sel_call_has_expected_input(self, m_get_inst, m_set_sel): |
3291 | data = { |
3292 | 'set1': 'pkga pkga/q1 mybool false', |
3293 | @@ -960,7 +961,7 @@ class TestDebconfSelections(CiTestCase): |
3294 | |
3295 | @mock.patch("curtin.commands.apt_config.dpkg_reconfigure") |
3296 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") |
3297 | - @mock.patch("curtin.commands.apt_config.util.get_installed_packages") |
3298 | + @mock.patch("curtin.commands.apt_config.distro.get_installed_packages") |
3299 | def test_reconfigure_if_intersection(self, m_get_inst, m_set_sel, |
3300 | m_dpkg_r): |
3301 | data = { |
3302 | @@ -985,7 +986,7 @@ class TestDebconfSelections(CiTestCase): |
3303 | |
3304 | @mock.patch("curtin.commands.apt_config.dpkg_reconfigure") |
3305 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") |
3306 | - @mock.patch("curtin.commands.apt_config.util.get_installed_packages") |
3307 | + @mock.patch("curtin.commands.apt_config.distro.get_installed_packages") |
3308 | def test_reconfigure_if_no_intersection(self, m_get_inst, m_set_sel, |
3309 | m_dpkg_r): |
3310 | data = {'set1': 'pkga pkga/q1 mybool false'} |
3311 | diff --git a/tests/unittests/test_block_iscsi.py b/tests/unittests/test_block_iscsi.py |
3312 | index afaf1f6..f8ef5d8 100644 |
3313 | --- a/tests/unittests/test_block_iscsi.py |
3314 | +++ b/tests/unittests/test_block_iscsi.py |
3315 | @@ -588,6 +588,13 @@ class TestBlockIscsiDiskFromConfig(CiTestCase): |
3316 | # utilize IscsiDisk str method for equality check |
3317 | self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk)) |
3318 | |
3319 | + # test with cfg.get('storage') since caller may already have |
3320 | + # grabbed the 'storage' value from the curtin config |
3321 | + iscsi_disk = iscsi.get_iscsi_disks_from_config( |
3322 | + cfg.get('storage')).pop() |
3323 | + # utilize IscsiDisk str method for equality check |
3324 | + self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk)) |
3325 | + |
3326 | def test_parse_iscsi_disk_from_config_no_iscsi(self): |
3327 | """Test parsing storage config with no iscsi disks included""" |
3328 | cfg = { |
3329 | diff --git a/tests/unittests/test_block_lvm.py b/tests/unittests/test_block_lvm.py |
3330 | index 22fb064..c92c1ec 100644 |
3331 | --- a/tests/unittests/test_block_lvm.py |
3332 | +++ b/tests/unittests/test_block_lvm.py |
3333 | @@ -73,7 +73,8 @@ class TestBlockLvm(CiTestCase): |
3334 | |
3335 | @mock.patch('curtin.block.lvm.lvmetad_running') |
3336 | @mock.patch('curtin.block.lvm.util') |
3337 | - def test_lvm_scan(self, mock_util, mock_lvmetad): |
3338 | + @mock.patch('curtin.block.lvm.distro') |
3339 | + def test_lvm_scan(self, mock_distro, mock_util, mock_lvmetad): |
3340 | """check that lvm_scan formats commands correctly for each release""" |
3341 | cmds = [['pvscan'], ['vgscan', '--mknodes']] |
3342 | for (count, (codename, lvmetad_status, use_cache)) in enumerate( |
3343 | @@ -81,7 +82,7 @@ class TestBlockLvm(CiTestCase): |
3344 | ('trusty', False, False), |
3345 | ('xenial', False, False), ('xenial', True, True), |
3346 | (None, True, True), (None, False, False)]): |
3347 | - mock_util.lsb_release.return_value = {'codename': codename} |
3348 | + mock_distro.lsb_release.return_value = {'codename': codename} |
3349 | mock_lvmetad.return_value = lvmetad_status |
3350 | lvm.lvm_scan() |
3351 | expected = [cmd for cmd in cmds] |
3352 | diff --git a/tests/unittests/test_block_mdadm.py b/tests/unittests/test_block_mdadm.py |
3353 | index 341e49d..d017930 100644 |
3354 | --- a/tests/unittests/test_block_mdadm.py |
3355 | +++ b/tests/unittests/test_block_mdadm.py |
3356 | @@ -15,12 +15,13 @@ class TestBlockMdadmAssemble(CiTestCase): |
3357 | def setUp(self): |
3358 | super(TestBlockMdadmAssemble, self).setUp() |
3359 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3360 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3361 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3362 | self.add_patch('curtin.block.mdadm.udev', 'mock_udev') |
3363 | |
3364 | # Common mock settings |
3365 | self.mock_valid.return_value = True |
3366 | - self.mock_util.lsb_release.return_value = {'codename': 'precise'} |
3367 | + self.mock_lsb_release.return_value = {'codename': 'precise'} |
3368 | self.mock_util.subp.return_value = ('', '') |
3369 | |
3370 | def test_mdadm_assemble_scan(self): |
3371 | @@ -88,6 +89,7 @@ class TestBlockMdadmCreate(CiTestCase): |
3372 | def setUp(self): |
3373 | super(TestBlockMdadmCreate, self).setUp() |
3374 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3375 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3376 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3377 | self.add_patch('curtin.block.mdadm.get_holders', 'mock_holders') |
3378 | self.add_patch('curtin.block.mdadm.udev.udevadm_settle', |
3379 | @@ -95,7 +97,7 @@ class TestBlockMdadmCreate(CiTestCase): |
3380 | |
3381 | # Common mock settings |
3382 | self.mock_valid.return_value = True |
3383 | - self.mock_util.lsb_release.return_value = {'codename': 'precise'} |
3384 | + self.mock_lsb_release.return_value = {'codename': 'precise'} |
3385 | self.mock_holders.return_value = [] |
3386 | |
3387 | def prepare_mock(self, md_devname, raidlevel, devices, spares): |
3388 | @@ -236,14 +238,15 @@ class TestBlockMdadmExamine(CiTestCase): |
3389 | def setUp(self): |
3390 | super(TestBlockMdadmExamine, self).setUp() |
3391 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3392 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3393 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3394 | |
3395 | # Common mock settings |
3396 | self.mock_valid.return_value = True |
3397 | - self.mock_util.lsb_release.return_value = {'codename': 'precise'} |
3398 | + self.mock_lsb_release.return_value = {'codename': 'precise'} |
3399 | |
3400 | def test_mdadm_examine_export(self): |
3401 | - self.mock_util.lsb_release.return_value = {'codename': 'xenial'} |
3402 | + self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3403 | self.mock_util.subp.return_value = ( |
3404 | """ |
3405 | MD_LEVEL=raid0 |
3406 | @@ -320,7 +323,7 @@ class TestBlockMdadmExamine(CiTestCase): |
3407 | class TestBlockMdadmStop(CiTestCase): |
3408 | def setUp(self): |
3409 | super(TestBlockMdadmStop, self).setUp() |
3410 | - self.add_patch('curtin.block.mdadm.util.lsb_release', 'mock_util_lsb') |
3411 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3412 | self.add_patch('curtin.block.mdadm.util.subp', 'mock_util_subp') |
3413 | self.add_patch('curtin.block.mdadm.util.write_file', |
3414 | 'mock_util_write_file') |
3415 | @@ -333,7 +336,7 @@ class TestBlockMdadmStop(CiTestCase): |
3416 | |
3417 | # Common mock settings |
3418 | self.mock_valid.return_value = True |
3419 | - self.mock_util_lsb.return_value = {'codename': 'xenial'} |
3420 | + self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3421 | self.mock_util_subp.side_effect = iter([ |
3422 | ("", ""), # mdadm stop device |
3423 | ]) |
3424 | @@ -488,11 +491,12 @@ class TestBlockMdadmRemove(CiTestCase): |
3425 | def setUp(self): |
3426 | super(TestBlockMdadmRemove, self).setUp() |
3427 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3428 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3429 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3430 | |
3431 | # Common mock settings |
3432 | self.mock_valid.return_value = True |
3433 | - self.mock_util.lsb_release.return_value = {'codename': 'xenial'} |
3434 | + self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3435 | self.mock_util.subp.side_effect = [ |
3436 | ("", ""), # mdadm remove device |
3437 | ] |
3438 | @@ -514,14 +518,15 @@ class TestBlockMdadmQueryDetail(CiTestCase): |
3439 | def setUp(self): |
3440 | super(TestBlockMdadmQueryDetail, self).setUp() |
3441 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3442 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3443 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3444 | |
3445 | # Common mock settings |
3446 | self.mock_valid.return_value = True |
3447 | - self.mock_util.lsb_release.return_value = {'codename': 'precise'} |
3448 | + self.mock_lsb_release.return_value = {'codename': 'precise'} |
3449 | |
3450 | def test_mdadm_query_detail_export(self): |
3451 | - self.mock_util.lsb_release.return_value = {'codename': 'xenial'} |
3452 | + self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3453 | self.mock_util.subp.return_value = ( |
3454 | """ |
3455 | MD_LEVEL=raid1 |
3456 | @@ -592,13 +597,14 @@ class TestBlockMdadmDetailScan(CiTestCase): |
3457 | def setUp(self): |
3458 | super(TestBlockMdadmDetailScan, self).setUp() |
3459 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3460 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3461 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3462 | |
3463 | # Common mock settings |
3464 | self.scan_output = ("ARRAY /dev/md0 metadata=1.2 spares=2 name=0 " + |
3465 | "UUID=b1eae2ff:69b6b02e:1d63bb53:ddfa6e4a") |
3466 | self.mock_valid.return_value = True |
3467 | - self.mock_util.lsb_release.return_value = {'codename': 'xenial'} |
3468 | + self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3469 | self.mock_util.subp.side_effect = [ |
3470 | (self.scan_output, ""), # mdadm --detail --scan |
3471 | ] |
3472 | @@ -627,10 +633,11 @@ class TestBlockMdadmMdHelpers(CiTestCase): |
3473 | def setUp(self): |
3474 | super(TestBlockMdadmMdHelpers, self).setUp() |
3475 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3476 | + self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3477 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3478 | |
3479 | self.mock_valid.return_value = True |
3480 | - self.mock_util.lsb_release.return_value = {'codename': 'xenial'} |
3481 | + self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3482 | |
3483 | def test_valid_mdname(self): |
3484 | mdname = "/dev/md0" |
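The mdadm hunks above switch the patch target from `curtin.block.mdadm.util.lsb_release` to `curtin.block.mdadm.lsb_release`, which implies the module now binds the function directly (e.g. via `from curtin.distro import lsb_release`). `mock.patch` must target the name where it is looked up, not where it is defined. A standalone sketch with toy module names (illustrative only, not curtin's real modules):

```python
import sys
import types
from unittest import mock

# Toy "distro" module defining lsb_release.
distro = types.ModuleType("toy_distro")
distro.lsb_release = lambda: {"codename": "real"}
sys.modules["toy_distro"] = distro

# Toy consumer that did "from toy_distro import lsb_release":
# the function is bound directly in the consumer's namespace.
consumer = types.ModuleType("toy_mdadm")
consumer.lsb_release = distro.lsb_release
sys.modules["toy_mdadm"] = consumer

# Patching the consumer's own binding is what the consumer sees...
with mock.patch("toy_mdadm.lsb_release",
                return_value={"codename": "precise"}):
    print(consumer.lsb_release()["codename"])  # precise

# ...while patching only the defining module leaves the consumer's
# direct reference untouched.
with mock.patch("toy_distro.lsb_release",
                return_value={"codename": "precise"}):
    print(consumer.lsb_release()["codename"])  # real
```

This is the standard "where to patch" rule from `unittest.mock`, and it explains why every mdadm test in the diff now patches `curtin.block.mdadm.lsb_release`.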
3485 | diff --git a/tests/unittests/test_block_mkfs.py b/tests/unittests/test_block_mkfs.py |
3486 | index c756281..679f85b 100644 |
3487 | --- a/tests/unittests/test_block_mkfs.py |
3488 | +++ b/tests/unittests/test_block_mkfs.py |
3489 | @@ -37,11 +37,12 @@ class TestBlockMkfs(CiTestCase): |
3490 | @mock.patch("curtin.block.mkfs.block") |
3491 | @mock.patch("curtin.block.mkfs.os") |
3492 | @mock.patch("curtin.block.mkfs.util") |
3493 | + @mock.patch("curtin.block.mkfs.distro.lsb_release") |
3494 | def _run_mkfs_with_config(self, config, expected_cmd, expected_flags, |
3495 | - mock_util, mock_os, mock_block, |
3496 | + mock_lsb_release, mock_util, mock_os, mock_block, |
3497 | release="wily", strict=False): |
3498 | # Pretend we are on wily as there are no known edge cases for it |
3499 | - mock_util.lsb_release.return_value = {"codename": release} |
3500 | + mock_lsb_release.return_value = {"codename": release} |
3501 | mock_os.path.exists.return_value = True |
3502 | mock_block.get_blockdev_sector_size.return_value = (512, 512) |
3503 | |
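The mkfs hunk above appends `@mock.patch("curtin.block.mkfs.distro.lsb_release")` below the existing decorators and inserts `mock_lsb_release` as the first new mock parameter. Stacked `@mock.patch` decorators are applied bottom-up: the decorator closest to the function supplies the earliest mock argument. A minimal sketch with a toy module (names are illustrative only):

```python
import sys
import types
from unittest import mock

# Toy module with three attributes to patch.
mod = types.ModuleType("toy_mod")
mod.os = object()
mod.util = object()
mod.distro = object()
sys.modules["toy_mod"] = mod

@mock.patch("toy_mod.os")      # top decorator -> last mock parameter
@mock.patch("toy_mod.util")
@mock.patch("toy_mod.distro")  # bottom decorator -> first mock parameter
def check(mock_distro, mock_util, mock_os):
    # Each injected mock is the very object patch installed on the module.
    return mock_distro is mod.distro and mock_os is mod.os

print(check())  # True
```

Keeping the parameter order matched to the decorator stack is exactly what the `mock_lsb_release, mock_util, mock_os, mock_block` reordering in the hunk preserves.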
3504 | diff --git a/tests/unittests/test_block_zfs.py b/tests/unittests/test_block_zfs.py |
3505 | index c18f6a3..9781946 100644 |
3506 | --- a/tests/unittests/test_block_zfs.py |
3507 | +++ b/tests/unittests/test_block_zfs.py |
3508 | @@ -384,7 +384,7 @@ class TestBlockZfsAssertZfsSupported(CiTestCase): |
3509 | super(TestBlockZfsAssertZfsSupported, self).setUp() |
3510 | self.add_patch('curtin.block.zfs.util.subp', 'mock_subp') |
3511 | self.add_patch('curtin.block.zfs.util.get_platform_arch', 'mock_arch') |
3512 | - self.add_patch('curtin.block.zfs.util.lsb_release', 'mock_release') |
3513 | + self.add_patch('curtin.block.zfs.distro.lsb_release', 'mock_release') |
3514 | self.add_patch('curtin.block.zfs.util.which', 'mock_which') |
3515 | self.add_patch('curtin.block.zfs.get_supported_filesystems', |
3516 | 'mock_supfs') |
3517 | @@ -426,46 +426,52 @@ class TestAssertZfsSupported(CiTestCase): |
3518 | super(TestAssertZfsSupported, self).setUp() |
3519 | |
3520 | @mock.patch('curtin.block.zfs.get_supported_filesystems') |
3521 | + @mock.patch('curtin.block.zfs.distro') |
3522 | @mock.patch('curtin.block.zfs.util') |
3523 | - def test_zfs_assert_supported_returns_true(self, mock_util, mock_supfs): |
3524 | + def test_zfs_assert_supported_returns_true(self, mock_util, mock_distro, |
3525 | + mock_supfs): |
3526 | """zfs_assert_supported returns True on supported platforms""" |
3527 | mock_util.get_platform_arch.return_value = 'amd64' |
3528 | - mock_util.lsb_release.return_value = {'codename': 'bionic'} |
3529 | + mock_distro.lsb_release.return_value = {'codename': 'bionic'} |
3530 | mock_util.subp.return_value = ("", "") |
3531 | mock_supfs.return_value = ['zfs'] |
3532 | mock_util.which.side_effect = iter(['/wark/zpool', '/wark/zfs']) |
3533 | |
3534 | self.assertNotIn(mock_util.get_platform_arch.return_value, |
3535 | zfs.ZFS_UNSUPPORTED_ARCHES) |
3536 | - self.assertNotIn(mock_util.lsb_release.return_value['codename'], |
3537 | + self.assertNotIn(mock_distro.lsb_release.return_value['codename'], |
3538 | zfs.ZFS_UNSUPPORTED_RELEASES) |
3539 | self.assertTrue(zfs.zfs_supported()) |
3540 | |
3541 | + @mock.patch('curtin.block.zfs.distro') |
3542 | @mock.patch('curtin.block.zfs.util') |
3543 | def test_zfs_assert_supported_raises_exception_on_bad_arch(self, |
3544 | - mock_util): |
3545 | + mock_util, |
3546 | + mock_distro): |
3547 | """zfs_assert_supported raises RuntimeError on unspported arches""" |
3548 | - mock_util.lsb_release.return_value = {'codename': 'bionic'} |
3549 | + mock_distro.lsb_release.return_value = {'codename': 'bionic'} |
3550 | mock_util.subp.return_value = ("", "") |
3551 | for arch in zfs.ZFS_UNSUPPORTED_ARCHES: |
3552 | mock_util.get_platform_arch.return_value = arch |
3553 | with self.assertRaises(RuntimeError): |
3554 | zfs.zfs_assert_supported() |
3555 | |
3556 | + @mock.patch('curtin.block.zfs.distro') |
3557 | @mock.patch('curtin.block.zfs.util') |
3558 | - def test_zfs_assert_supported_raises_exc_on_bad_releases(self, mock_util): |
3559 | + def test_zfs_assert_supported_raises_exc_on_bad_releases(self, mock_util, |
3560 | + mock_distro): |
3561 | """zfs_assert_supported raises RuntimeError on unspported releases""" |
3562 | mock_util.get_platform_arch.return_value = 'amd64' |
3563 | mock_util.subp.return_value = ("", "") |
3564 | for release in zfs.ZFS_UNSUPPORTED_RELEASES: |
3565 | - mock_util.lsb_release.return_value = {'codename': release} |
3566 | + mock_distro.lsb_release.return_value = {'codename': release} |
3567 | with self.assertRaises(RuntimeError): |
3568 | zfs.zfs_assert_supported() |
3569 | |
3570 | @mock.patch('curtin.block.zfs.util.subprocess.Popen') |
3571 | @mock.patch('curtin.block.zfs.util.is_kmod_loaded') |
3572 | @mock.patch('curtin.block.zfs.get_supported_filesystems') |
3573 | - @mock.patch('curtin.block.zfs.util.lsb_release') |
3574 | + @mock.patch('curtin.block.zfs.distro.lsb_release') |
3575 | @mock.patch('curtin.block.zfs.util.get_platform_arch') |
3576 | def test_zfs_assert_supported_raises_exc_on_missing_module(self, |
3577 | m_arch, |
3578 | diff --git a/tests/unittests/test_commands_apply_net.py b/tests/unittests/test_commands_apply_net.py |
3579 | index a55ab17..04b7f2e 100644 |
3580 | --- a/tests/unittests/test_commands_apply_net.py |
3581 | +++ b/tests/unittests/test_commands_apply_net.py |
3582 | @@ -5,7 +5,7 @@ import copy |
3583 | import os |
3584 | |
3585 | from curtin.commands import apply_net |
3586 | -from curtin import util |
3587 | +from curtin import paths |
3588 | from .helpers import CiTestCase |
3589 | |
3590 | |
3591 | @@ -153,8 +153,8 @@ class TestApplyNetPatchIfupdown(CiTestCase): |
3592 | prehookfn=prehookfn, |
3593 | posthookfn=posthookfn) |
3594 | |
3595 | - precfg = util.target_path(target, path=prehookfn) |
3596 | - postcfg = util.target_path(target, path=posthookfn) |
3597 | + precfg = paths.target_path(target, path=prehookfn) |
3598 | + postcfg = paths.target_path(target, path=posthookfn) |
3599 | precontents = apply_net.IFUPDOWN_IPV6_MTU_PRE_HOOK |
3600 | postcontents = apply_net.IFUPDOWN_IPV6_MTU_POST_HOOK |
3601 | |
3602 | @@ -231,7 +231,7 @@ class TestApplyNetPatchIpv6Priv(CiTestCase): |
3603 | |
3604 | apply_net._disable_ipv6_privacy_extensions(target) |
3605 | |
3606 | - cfg = util.target_path(target, path=path) |
3607 | + cfg = paths.target_path(target, path=path) |
3608 | mock_write.assert_called_with(cfg, expected_ipv6_priv_contents) |
3609 | |
3610 | @patch('curtin.util.load_file') |
3611 | @@ -259,7 +259,7 @@ class TestApplyNetPatchIpv6Priv(CiTestCase): |
3612 | apply_net._disable_ipv6_privacy_extensions(target, path=path) |
3613 | |
3614 | # source file not found |
3615 | - cfg = util.target_path(target, path) |
3616 | + cfg = paths.target_path(target, path) |
3617 | mock_ospath.exists.assert_called_with(cfg) |
3618 | self.assertEqual(0, mock_load.call_count) |
3619 | |
3620 | @@ -272,7 +272,7 @@ class TestApplyNetRemoveLegacyEth0(CiTestCase): |
3621 | def test_remove_legacy_eth0(self, mock_ospath, mock_load, mock_del): |
3622 | target = 'mytarget' |
3623 | path = 'eth0.cfg' |
3624 | - cfg = util.target_path(target, path) |
3625 | + cfg = paths.target_path(target, path) |
3626 | legacy_eth0_contents = ( |
3627 | 'auto eth0\n' |
3628 | 'iface eth0 inet dhcp') |
3629 | @@ -330,7 +330,7 @@ class TestApplyNetRemoveLegacyEth0(CiTestCase): |
3630 | apply_net._maybe_remove_legacy_eth0(target, path) |
3631 | |
3632 | # source file not found |
3633 | - cfg = util.target_path(target, path) |
3634 | + cfg = paths.target_path(target, path) |
3635 | mock_ospath.exists.assert_called_with(cfg) |
3636 | self.assertEqual(0, mock_load.call_count) |
3637 | self.assertEqual(0, mock_del.call_count) |
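The apply_net hunks above track the move of `target_path` from `curtin.util` to the new `curtin.paths` module. A hedged sketch of what such a helper does, assuming the common "root a path under the install target" semantics (the real `paths.target_path` handles more edge cases):

```python
import os

def target_path(target, path=None):
    """Return 'path' rooted under 'target' (the install mountpoint)."""
    if not target:
        target = "/"
    if not path:
        return target
    # Strip the leading '/' so os.path.join does not discard the target.
    return os.path.join(target, path.lstrip("/"))

# e.g. target_path('mytarget', '/etc/network') -> 'mytarget/etc/network'
```

Centralizing this in `curtin.paths` lets both `curtin.util` consumers and the new `curtin.distro` code share one implementation.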
3638 | diff --git a/tests/unittests/test_commands_block_meta.py b/tests/unittests/test_commands_block_meta.py |
3639 | index a6a0b13..e70d6ed 100644 |
3640 | --- a/tests/unittests/test_commands_block_meta.py |
3641 | +++ b/tests/unittests/test_commands_block_meta.py |
3642 | @@ -7,7 +7,7 @@ from mock import patch, call |
3643 | import os |
3644 | |
3645 | from curtin.commands import block_meta |
3646 | -from curtin import util |
3647 | +from curtin import paths, util |
3648 | from .helpers import CiTestCase |
3649 | |
3650 | |
3651 | @@ -688,8 +688,9 @@ class TestFstabData(CiTestCase): |
3652 | if target is None: |
3653 | target = self.tmp_dir() |
3654 | |
3655 | - expected = [a if a != "_T_MP" else util.target_path(target, fdata.path) |
3656 | - for a in expected] |
3657 | + expected = [ |
3658 | + a if a != "_T_MP" else paths.target_path(target, fdata.path) |
3659 | + for a in expected] |
3660 | with patch("curtin.util.subp") as m_subp: |
3661 | block_meta.mount_fstab_data(fdata, target=target) |
3662 | |
3663 | diff --git a/tests/unittests/test_curthooks.py b/tests/unittests/test_curthooks.py |
3664 | index a8275c7..8fd7933 100644 |
3665 | --- a/tests/unittests/test_curthooks.py |
3666 | +++ b/tests/unittests/test_curthooks.py |
3667 | @@ -4,6 +4,7 @@ import os |
3668 | from mock import call, patch, MagicMock |
3669 | |
3670 | from curtin.commands import curthooks |
3671 | +from curtin import distro |
3672 | from curtin import util |
3673 | from curtin import config |
3674 | from curtin.reporter import events |
3675 | @@ -47,8 +48,8 @@ class TestGetFlashKernelPkgs(CiTestCase): |
3676 | class TestCurthooksInstallKernel(CiTestCase): |
3677 | def setUp(self): |
3678 | super(TestCurthooksInstallKernel, self).setUp() |
3679 | - self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') |
3680 | - self.add_patch('curtin.util.install_packages', 'mock_instpkg') |
3681 | + self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3682 | + self.add_patch('curtin.distro.install_packages', 'mock_instpkg') |
3683 | self.add_patch( |
3684 | 'curtin.commands.curthooks.get_flash_kernel_pkgs', |
3685 | 'mock_get_flash_kernel_pkgs') |
3686 | @@ -122,12 +123,21 @@ class TestInstallMissingPkgs(CiTestCase): |
3687 | def setUp(self): |
3688 | super(TestInstallMissingPkgs, self).setUp() |
3689 | self.add_patch('platform.machine', 'mock_machine') |
3690 | - self.add_patch('curtin.util.get_installed_packages', |
3691 | + self.add_patch('curtin.util.get_architecture', 'mock_arch') |
3692 | + self.add_patch('curtin.distro.get_installed_packages', |
3693 | 'mock_get_installed_packages') |
3694 | self.add_patch('curtin.util.load_command_environment', |
3695 | 'mock_load_cmd_evn') |
3696 | self.add_patch('curtin.util.which', 'mock_which') |
3697 | - self.add_patch('curtin.util.install_packages', 'mock_install_packages') |
3698 | + self.add_patch('curtin.util.is_uefi_bootable', 'mock_uefi') |
3699 | + self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3700 | + self.add_patch('curtin.distro.install_packages', |
3701 | + 'mock_install_packages') |
3702 | + self.add_patch('curtin.distro.get_osfamily', 'mock_osfamily') |
3703 | + self.distro_family = distro.DISTROS.debian |
3704 | + self.mock_osfamily.return_value = self.distro_family |
3705 | + self.mock_uefi.return_value = False |
3706 | + self.mock_haspkg.return_value = False |
3707 | |
3708 | @patch.object(events, 'ReportEventStack') |
3709 | def test_install_packages_s390x(self, mock_events): |
3710 | @@ -137,8 +147,8 @@ class TestInstallMissingPkgs(CiTestCase): |
3711 | target = "not-a-real-target" |
3712 | cfg = {} |
3713 | curthooks.install_missing_packages(cfg, target=target) |
3714 | - self.mock_install_packages.assert_called_with(['s390-tools'], |
3715 | - target=target) |
3716 | + self.mock_install_packages.assert_called_with( |
3717 | + ['s390-tools'], target=target, osfamily=self.distro_family) |
3718 | |
3719 | @patch.object(events, 'ReportEventStack') |
3720 | def test_install_packages_s390x_has_zipl(self, mock_events): |
3721 | @@ -159,6 +169,50 @@ class TestInstallMissingPkgs(CiTestCase): |
3722 | curthooks.install_missing_packages(cfg, target=target) |
3723 | self.assertEqual([], self.mock_install_packages.call_args_list) |
3724 | |
3725 | + @patch.object(events, 'ReportEventStack') |
3726 | + def test_install_packages_on_uefi_amd64_shim_signed(self, mock_events): |
3727 | + arch = 'amd64' |
3728 | + self.mock_arch.return_value = arch |
3729 | + self.mock_machine.return_value = 'x86_64' |
3730 | + expected_pkgs = ['grub-efi-%s' % arch, |
3731 | + 'grub-efi-%s-signed' % arch, |
3732 | + 'shim-signed'] |
3733 | + self.mock_machine.return_value = 'x86_64' |
3734 | + self.mock_uefi.return_value = True |
3735 | + self.mock_haspkg.return_value = True |
3736 | + target = "not-a-real-target" |
3737 | + cfg = {} |
3738 | + curthooks.install_missing_packages(cfg, target=target) |
3739 | + self.mock_install_packages.assert_called_with( |
3740 | + expected_pkgs, target=target, osfamily=self.distro_family) |
3741 | + |
3742 | + @patch.object(events, 'ReportEventStack') |
3743 | + def test_install_packages_on_uefi_i386_noshim_nosigned(self, mock_events): |
3744 | + arch = 'i386' |
3745 | + self.mock_arch.return_value = arch |
3746 | + self.mock_machine.return_value = 'i386' |
3747 | + expected_pkgs = ['grub-efi-%s' % arch] |
3748 | + self.mock_machine.return_value = 'i686' |
3749 | + self.mock_uefi.return_value = True |
3750 | + target = "not-a-real-target" |
3751 | + cfg = {} |
3752 | + curthooks.install_missing_packages(cfg, target=target) |
3753 | + self.mock_install_packages.assert_called_with( |
3754 | + expected_pkgs, target=target, osfamily=self.distro_family) |
3755 | + |
3756 | + @patch.object(events, 'ReportEventStack') |
3757 | + def test_install_packages_on_uefi_arm64_nosign_noshim(self, mock_events): |
3758 | + arch = 'arm64' |
3759 | + self.mock_arch.return_value = arch |
3760 | + self.mock_machine.return_value = 'aarch64' |
3761 | + expected_pkgs = ['grub-efi-%s' % arch] |
3762 | + self.mock_uefi.return_value = True |
3763 | + target = "not-a-real-target" |
3764 | + cfg = {} |
3765 | + curthooks.install_missing_packages(cfg, target=target) |
3766 | + self.mock_install_packages.assert_called_with( |
3767 | + expected_pkgs, target=target, osfamily=self.distro_family) |
3768 | + |
3769 | |
3770 | class TestSetupZipl(CiTestCase): |
3771 | |
3772 | @@ -192,7 +246,8 @@ class TestSetupGrub(CiTestCase): |
3773 | def setUp(self): |
3774 | super(TestSetupGrub, self).setUp() |
3775 | self.target = self.tmp_dir() |
3776 | - self.add_patch('curtin.util.lsb_release', 'mock_lsb_release') |
3777 | + self.distro_family = distro.DISTROS.debian |
3778 | + self.add_patch('curtin.distro.lsb_release', 'mock_lsb_release') |
3779 | self.mock_lsb_release.return_value = { |
3780 | 'codename': 'xenial', |
3781 | } |
3782 | @@ -219,11 +274,12 @@ class TestSetupGrub(CiTestCase): |
3783 | 'grub_install_devices': ['/dev/vdb'] |
3784 | } |
3785 | self.subp_output.append(('', '')) |
3786 | - curthooks.setup_grub(cfg, self.target) |
3787 | + curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3788 | self.assertEquals( |
3789 | ([ |
3790 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3791 | - 'install-grub', self.target, '/dev/vdb'],), |
3792 | + 'install-grub', '--os-family=%s' % self.distro_family, |
3793 | + self.target, '/dev/vdb'],), |
3794 | self.mock_subp.call_args_list[0][0]) |
3795 | |
3796 | def test_uses_install_devices_in_grubcfg(self): |
3797 | @@ -233,11 +289,12 @@ class TestSetupGrub(CiTestCase): |
3798 | }, |
3799 | } |
3800 | self.subp_output.append(('', '')) |
3801 | - curthooks.setup_grub(cfg, self.target) |
3802 | + curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3803 | self.assertEquals( |
3804 | ([ |
3805 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3806 | - 'install-grub', self.target, '/dev/vdb'],), |
3807 | + 'install-grub', '--os-family=%s' % self.distro_family, |
3808 | + self.target, '/dev/vdb'],), |
3809 | self.mock_subp.call_args_list[0][0]) |
3810 | |
3811 | def test_uses_grub_install_on_storage_config(self): |
3812 | @@ -255,11 +312,12 @@ class TestSetupGrub(CiTestCase): |
3813 | }, |
3814 | } |
3815 | self.subp_output.append(('', '')) |
3816 | - curthooks.setup_grub(cfg, self.target) |
3817 | + curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3818 | self.assertEquals( |
3819 | ([ |
3820 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3821 | - 'install-grub', self.target, '/dev/vdb'],), |
3822 | + 'install-grub', '--os-family=%s' % self.distro_family, |
3823 | + self.target, '/dev/vdb'],), |
3824 | self.mock_subp.call_args_list[0][0]) |
3825 | |
3826 | def test_grub_install_installs_to_none_if_install_devices_None(self): |
3827 | @@ -269,62 +327,17 @@ class TestSetupGrub(CiTestCase): |
3828 | }, |
3829 | } |
3830 | self.subp_output.append(('', '')) |
3831 | - curthooks.setup_grub(cfg, self.target) |
3832 | - self.assertEquals( |
3833 | - ([ |
3834 | - 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3835 | - 'install-grub', self.target, 'none'],), |
3836 | - self.mock_subp.call_args_list[0][0]) |
3837 | - |
3838 | - def test_grub_install_uefi_installs_signed_packages_for_amd64(self): |
3839 | - self.add_patch('curtin.util.install_packages', 'mock_install') |
3840 | - self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') |
3841 | - self.mock_is_uefi_bootable.return_value = True |
3842 | - cfg = { |
3843 | - 'grub': { |
3844 | - 'install_devices': ['/dev/vdb'], |
3845 | - 'update_nvram': False, |
3846 | - }, |
3847 | - } |
3848 | - self.subp_output.append(('', '')) |
3849 | - self.mock_arch.return_value = 'amd64' |
3850 | - self.mock_haspkg.return_value = True |
3851 | - curthooks.setup_grub(cfg, self.target) |
3852 | - self.assertEquals( |
3853 | - (['grub-efi-amd64', 'grub-efi-amd64-signed', 'shim-signed'],), |
3854 | - self.mock_install.call_args_list[0][0]) |
3855 | + curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3856 | self.assertEquals( |
3857 | ([ |
3858 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3859 | - 'install-grub', '--uefi', self.target, '/dev/vdb'],), |
3860 | - self.mock_subp.call_args_list[0][0]) |
3861 | - |
3862 | - def test_grub_install_uefi_installs_packages_for_arm64(self): |
3863 | - self.add_patch('curtin.util.install_packages', 'mock_install') |
3864 | - self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') |
3865 | - self.mock_is_uefi_bootable.return_value = True |
3866 | - cfg = { |
3867 | - 'grub': { |
3868 | - 'install_devices': ['/dev/vdb'], |
3869 | - 'update_nvram': False, |
3870 | - }, |
3871 | - } |
3872 | - self.subp_output.append(('', '')) |
3873 | - self.mock_arch.return_value = 'arm64' |
3874 | - self.mock_haspkg.return_value = False |
3875 | - curthooks.setup_grub(cfg, self.target) |
3876 | - self.assertEquals( |
3877 | - (['grub-efi-arm64'],), |
3878 | - self.mock_install.call_args_list[0][0]) |
3879 | - self.assertEquals( |
3880 | - ([ |
3881 | - 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3882 | - 'install-grub', '--uefi', self.target, '/dev/vdb'],), |
3883 | + 'install-grub', '--os-family=%s' % self.distro_family, |
3884 | + self.target, 'none'],), |
3885 | self.mock_subp.call_args_list[0][0]) |
3886 | |
3887 | def test_grub_install_uefi_updates_nvram_skips_remove_and_reorder(self): |
3888 | - self.add_patch('curtin.util.install_packages', 'mock_install') |
3889 | - self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') |
3890 | + self.add_patch('curtin.distro.install_packages', 'mock_install') |
3891 | + self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3892 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') |
3893 | self.mock_is_uefi_bootable.return_value = True |
3894 | cfg = { |
3895 | @@ -347,17 +360,18 @@ class TestSetupGrub(CiTestCase): |
3896 | } |
3897 | } |
3898 | } |
3899 | - curthooks.setup_grub(cfg, self.target) |
3900 | + curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3901 | self.assertEquals( |
3902 | ([ |
3903 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3904 | 'install-grub', '--uefi', '--update-nvram', |
3905 | + '--os-family=%s' % self.distro_family, |
3906 | self.target, '/dev/vdb'],), |
3907 | self.mock_subp.call_args_list[0][0]) |
3908 | |
3909 | def test_grub_install_uefi_updates_nvram_removes_old_loaders(self): |
3910 | - self.add_patch('curtin.util.install_packages', 'mock_install') |
3911 | - self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') |
3912 | + self.add_patch('curtin.distro.install_packages', 'mock_install') |
3913 | + self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3914 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') |
3915 | self.mock_is_uefi_bootable.return_value = True |
3916 | cfg = { |
3917 | @@ -392,7 +406,7 @@ class TestSetupGrub(CiTestCase): |
3918 | self.in_chroot_subp_output.append(('', '')) |
3919 | self.in_chroot_subp_output.append(('', '')) |
3920 | self.mock_haspkg.return_value = False |
3921 | - curthooks.setup_grub(cfg, self.target) |
3922 | + curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3923 | self.assertEquals( |
3924 | ['efibootmgr', '-B', '-b'], |
3925 | self.mock_in_chroot_subp.call_args_list[0][0][0][:3]) |
3926 | @@ -406,8 +420,8 @@ class TestSetupGrub(CiTestCase): |
3927 | self.mock_in_chroot_subp.call_args_list[1][0][0][3]])) |
3928 | |
3929 | def test_grub_install_uefi_updates_nvram_reorders_loaders(self): |
3930 | - self.add_patch('curtin.util.install_packages', 'mock_install') |
3931 | - self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') |
3932 | + self.add_patch('curtin.distro.install_packages', 'mock_install') |
3933 | + self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3934 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') |
3935 | self.mock_is_uefi_bootable.return_value = True |
3936 | cfg = { |
3937 | @@ -436,7 +450,7 @@ class TestSetupGrub(CiTestCase): |
3938 | } |
3939 | self.in_chroot_subp_output.append(('', '')) |
3940 | self.mock_haspkg.return_value = False |
3941 | - curthooks.setup_grub(cfg, self.target) |
3942 | + curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3943 | self.assertEquals( |
3944 | (['efibootmgr', '-o', '0001,0000'],), |
3945 | self.mock_in_chroot_subp.call_args_list[0][0]) |
3946 | @@ -453,11 +467,11 @@ class TestUbuntuCoreHooks(CiTestCase): |
3947 | 'var/lib/snapd') |
3948 | util.ensure_dir(ubuntu_core_path) |
3949 | self.assertTrue(os.path.isdir(ubuntu_core_path)) |
3950 | - is_core = curthooks.target_is_ubuntu_core(self.target) |
3951 | + is_core = distro.is_ubuntu_core(self.target) |
3952 | self.assertTrue(is_core) |
3953 | |
3954 | def test_target_is_ubuntu_core_no_target(self): |
3955 | - is_core = curthooks.target_is_ubuntu_core(self.target) |
3956 | + is_core = distro.is_ubuntu_core(self.target) |
3957 | self.assertFalse(is_core) |
3958 | |
3959 | def test_target_is_ubuntu_core_noncore_target(self): |
3960 | @@ -465,7 +479,7 @@ class TestUbuntuCoreHooks(CiTestCase): |
3961 | non_core_path = os.path.join(self.target, 'curtin') |
3962 | util.ensure_dir(non_core_path) |
3963 | self.assertTrue(os.path.isdir(non_core_path)) |
3964 | - is_core = curthooks.target_is_ubuntu_core(self.target) |
3965 | + is_core = distro.is_ubuntu_core(self.target) |
3966 | self.assertFalse(is_core) |
3967 | |
3968 | @patch('curtin.util.write_file') |
3969 | @@ -736,15 +750,15 @@ class TestDetectRequiredPackages(CiTestCase): |
3970 | ({'network': { |
3971 | 'version': 2, |
3972 | 'items': ('bridge',)}}, |
3973 | - ('bridge-utils',)), |
3974 | + ()), |
3975 | ({'network': { |
3976 | 'version': 2, |
3977 | 'items': ('vlan',)}}, |
3978 | - ('vlan',)), |
3979 | + ()), |
3980 | ({'network': { |
3981 | 'version': 2, |
3982 | 'items': ('vlan', 'bridge')}}, |
3983 | - ('vlan', 'bridge-utils')), |
3984 | + ()), |
3985 | )) |
3986 | |
3987 | def test_mixed_storage_v1_network_v2_detect(self): |
3988 | @@ -755,7 +769,7 @@ class TestDetectRequiredPackages(CiTestCase): |
3989 | 'storage': { |
3990 | 'version': 1, |
3991 | 'items': ('raid', 'bcache', 'ext4')}}, |
3992 | - ('vlan', 'bridge-utils', 'mdadm', 'bcache-tools', 'e2fsprogs')), |
3993 | + ('mdadm', 'bcache-tools', 'e2fsprogs')), |
3994 | )) |
3995 | |
3996 | def test_invalid_version_in_config(self): |
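The expected-package changes in the hunks above (network v2 items now map to no packages, storage v1 items still map to `mdadm`, `bcache-tools`, `e2fsprogs`) can be sketched as a per-item dependency table. This is an illustrative reconstruction, not curtin's actual code: the names `STORAGE_DEPS`, `NETWORK_V2_DEPS`, and `detect_required_packages` are hypothetical.

```python
# Hypothetical per-item package mapping matching the test expectations above.
STORAGE_DEPS = {
    'raid': ['mdadm'],
    'bcache': ['bcache-tools'],
    'ext4': ['e2fsprogs'],
}
# network config version 2 is rendered by netplan, which needs no extra
# packages -- hence the expected tuples above became empty.
NETWORK_V2_DEPS = {
    'vlan': [],
    'bridge': [],
}


def detect_required_packages(storage_items=(), network_v2_items=()):
    """Collect the packages needed for the given config items, in order."""
    pkgs = []
    for item in storage_items:
        pkgs.extend(STORAGE_DEPS.get(item, []))
    for item in network_v2_items:
        pkgs.extend(NETWORK_V2_DEPS.get(item, []))
    return pkgs
```

With this table, the mixed v1 storage / v2 network case in the test above yields only the storage packages.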
3997 | @@ -782,7 +796,7 @@ class TestCurthooksWriteFiles(CiTestCase): |
3998 | dict((cfg[i]['path'], cfg[i]['content']) for i in cfg.keys()), |
3999 | dir2dict(tmpd, prefix=tmpd)) |
4000 | |
4001 | - @patch('curtin.commands.curthooks.futil.target_path') |
4002 | + @patch('curtin.commands.curthooks.paths.target_path') |
4003 | @patch('curtin.commands.curthooks.futil.write_finfo') |
4004 | def test_handle_write_files_finfo(self, mock_write_finfo, mock_tp): |
4005 | """ Validate that futils.write_files handles target_path correctly """ |
4006 | @@ -816,6 +830,8 @@ class TestCurthooksPollinate(CiTestCase): |
4007 | self.add_patch('curtin.util.write_file', 'mock_write') |
4008 | self.add_patch('curtin.commands.curthooks.get_maas_version', |
4009 | 'mock_maas_version') |
4010 | + self.add_patch('curtin.util.which', 'mock_which') |
4011 | + self.mock_which.return_value = '/usr/bin/pollinate' |
4012 | self.target = self.tmp_dir() |
4013 | |
4014 | def test_handle_pollinate_user_agent_disable(self): |
4015 | @@ -826,6 +842,15 @@ class TestCurthooksPollinate(CiTestCase): |
4016 | self.assertEqual(0, self.mock_maas_version.call_count) |
4017 | self.assertEqual(0, self.mock_write.call_count) |
4018 | |
4019 | + def test_handle_pollinate_returns_if_no_pollinate_binary(self): |
4020 | + """ handle_pollinate_user_agent does nothing if no pollinate binary""" |
4021 | + self.mock_which.return_value = None |
4022 | + cfg = {'reporting': {'maas': {'endpoint': 'http://127.0.0.1/foo'}}} |
4023 | + curthooks.handle_pollinate_user_agent(cfg, self.target) |
4024 | + self.assertEqual(0, self.mock_curtin_version.call_count) |
4025 | + self.assertEqual(0, self.mock_maas_version.call_count) |
4026 | + self.assertEqual(0, self.mock_write.call_count) |
4027 | + |
4028 | def test_handle_pollinate_user_agent_default(self): |
4029 | """ handle_pollinate_user_agent checks curtin/maas version by default |
4030 | """ |
4031 | diff --git a/tests/unittests/test_distro.py b/tests/unittests/test_distro.py |
4032 | new file mode 100644 |
4033 | index 0000000..d4e5a1e |
4034 | --- /dev/null |
4035 | +++ b/tests/unittests/test_distro.py |
4036 | @@ -0,0 +1,302 @@ |
4037 | +# This file is part of curtin. See LICENSE file for copyright and license info. |
4038 | + |
4039 | +from unittest import skipIf |
4040 | +import mock |
4041 | +import sys |
4042 | + |
4043 | +from curtin import distro |
4044 | +from curtin import paths |
4045 | +from curtin import util |
4046 | +from .helpers import CiTestCase |
4047 | + |
4048 | + |
4049 | +class TestLsbRelease(CiTestCase): |
4050 | + |
4051 | + def setUp(self): |
4052 | + super(TestLsbRelease, self).setUp() |
4053 | + self._reset_cache() |
4054 | + |
4055 | + def _reset_cache(self): |
4056 | + keys = [k for k in distro._LSB_RELEASE.keys()] |
4057 | + for d in keys: |
4058 | + del distro._LSB_RELEASE[d] |
4059 | + |
4060 | + @mock.patch("curtin.distro.subp") |
4061 | + def test_lsb_release_functional(self, mock_subp): |
4062 | + output = '\n'.join([ |
4063 | + "Distributor ID: Ubuntu", |
4064 | + "Description: Ubuntu 14.04.2 LTS", |
4065 | + "Release: 14.04", |
4066 | + "Codename: trusty", |
4067 | + ]) |
4068 | + rdata = {'id': 'Ubuntu', 'description': 'Ubuntu 14.04.2 LTS', |
4069 | + 'codename': 'trusty', 'release': '14.04'} |
4070 | + |
4071 | + def fake_subp(cmd, capture=False, target=None): |
4072 | + return output, 'No LSB modules are available.' |
4073 | + |
4074 | + mock_subp.side_effect = fake_subp |
4075 | + found = distro.lsb_release() |
4076 | + mock_subp.assert_called_with( |
4077 | + ['lsb_release', '--all'], capture=True, target=None) |
4078 | + self.assertEqual(found, rdata) |
4079 | + |
4080 | + @mock.patch("curtin.distro.subp") |
4081 | + def test_lsb_release_unavailable(self, mock_subp): |
4082 | + def doraise(*args, **kwargs): |
4083 | + raise util.ProcessExecutionError("foo") |
4084 | + mock_subp.side_effect = doraise |
4085 | + |
4086 | + expected = {k: "UNAVAILABLE" for k in |
4087 | + ('id', 'description', 'codename', 'release')} |
4088 | + self.assertEqual(distro.lsb_release(), expected) |
4089 | + |
4090 | + |
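The parsing these `lsb_release` tests assert can be sketched as below, reconstructed from the test data alone. The function name `parse_lsb_release` is illustrative; curtin's real `distro.lsb_release()` additionally shells out via `subp()` and caches the result in `distro._LSB_RELEASE`.

```python
# Map lsb_release's field labels to the keys the tests expect.
_FIELD_MAP = {'Distributor ID': 'id', 'Description': 'description',
              'Release': 'release', 'Codename': 'codename'}


def parse_lsb_release(output):
    """Parse `lsb_release --all` stdout into a {id, description,
    release, codename} dict, falling back to UNAVAILABLE values."""
    data = {}
    for line in output.splitlines():
        field, _, value = line.partition(":")
        if field in _FIELD_MAP:
            data[_FIELD_MAP[field]] = value.strip()
    if set(data) != set(_FIELD_MAP.values()):
        # mirror the fallback when lsb_release fails or output is short
        data = {k: "UNAVAILABLE" for k in _FIELD_MAP.values()}
    return data
```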
4091 | +class TestParseDpkgVersion(CiTestCase): |
4092 | + """test parse_dpkg_version.""" |
4093 | + |
4094 | + def test_none_raises_type_error(self): |
4095 | + self.assertRaises(TypeError, distro.parse_dpkg_version, None) |
4096 | + |
4097 | + @skipIf(sys.version_info.major < 3, "python 2 bytes are strings.") |
4098 | + def test_bytes_raises_type_error(self): |
4099 | + self.assertRaises(TypeError, distro.parse_dpkg_version, b'1.2.3-0') |
4100 | + |
4101 | + def test_simple_native_package_version(self): |
4102 | + """dpkg versions must have a -. If not present expect value error.""" |
4103 | + self.assertEqual( |
4104 | + {'major': 2, 'minor': 28, 'micro': 0, 'extra': None, |
4105 | + 'raw': '2.28', 'upstream': '2.28', 'name': 'germinate', |
4106 | + 'semantic_version': 22800}, |
4107 | + distro.parse_dpkg_version('2.28', name='germinate')) |
4108 | + |
4109 | + def test_complex_native_package_version(self): |
4110 | + dver = '1.0.106ubuntu2+really1.0.97ubuntu1' |
4111 | + self.assertEqual( |
4112 | + {'major': 1, 'minor': 0, 'micro': 106, |
4113 | + 'extra': 'ubuntu2+really1.0.97ubuntu1', |
4114 | + 'raw': dver, 'upstream': dver, 'name': 'debootstrap', |
4115 | + 'semantic_version': 100106}, |
4116 | + distro.parse_dpkg_version(dver, name='debootstrap', |
4117 | + semx=(100000, 1000, 1))) |
4118 | + |
4119 | + def test_simple_valid(self): |
4120 | + self.assertEqual( |
4121 | + {'major': 1, 'minor': 2, 'micro': 3, 'extra': None, |
4122 | + 'raw': '1.2.3-0', 'upstream': '1.2.3', 'name': 'foo', |
4123 | + 'semantic_version': 10203}, |
4124 | + distro.parse_dpkg_version('1.2.3-0', name='foo')) |
4125 | + |
4126 | + def test_simple_valid_with_semx(self): |
4127 | + self.assertEqual( |
4128 | + {'major': 1, 'minor': 2, 'micro': 3, 'extra': None, |
4129 | + 'raw': '1.2.3-0', 'upstream': '1.2.3', |
4130 | + 'semantic_version': 123}, |
4131 | + distro.parse_dpkg_version('1.2.3-0', semx=(100, 10, 1))) |
4132 | + |
4133 | + def test_upstream_with_hyphen(self): |
4134 | + """upstream versions may have a hyphen.""" |
4135 | + cver = '18.2-14-g6d48d265-0ubuntu1' |
4136 | + self.assertEqual( |
4137 | + {'major': 18, 'minor': 2, 'micro': 0, 'extra': '-14-g6d48d265', |
4138 | + 'raw': cver, 'upstream': '18.2-14-g6d48d265', |
4139 | + 'name': 'cloud-init', 'semantic_version': 180200}, |
4140 | + distro.parse_dpkg_version(cver, name='cloud-init')) |
4141 | + |
4142 | + def test_upstream_with_plus(self): |
4143 | + """multipath tools has a + in it.""" |
4144 | + mver = '0.5.0+git1.656f8865-5ubuntu2.5' |
4145 | + self.assertEqual( |
4146 | + {'major': 0, 'minor': 5, 'micro': 0, 'extra': '+git1.656f8865', |
4147 | + 'raw': mver, 'upstream': '0.5.0+git1.656f8865', |
4148 | + 'semantic_version': 500}, |
4149 | + distro.parse_dpkg_version(mver)) |
4150 | + |
4151 | + |
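The behaviour pinned down by `TestParseDpkgVersion` above can be reconstructed as the following sketch (the real implementation is `curtin.distro.parse_dpkg_version`; this version is derived only from the test expectations): split off the debian revision after the last hyphen, take the leading dotted-digit run as `major.minor.micro`, and keep the remainder as `extra`.

```python
import re


def parse_dpkg_version(raw, name=None, semx=None):
    """Parse a dpkg version string into its components; sketch
    reconstructed from the unit tests above."""
    if not isinstance(raw, str):
        raise TypeError("version input must be a string: %r" % raw)
    if semx is None:
        semx = (10000, 100, 1)
    # the debian revision follows the last hyphen, if any
    offset = raw.rfind('-')
    upstream = raw if offset == -1 else raw[:offset]
    # the leading run of digits and dots is major.minor.micro
    match = re.search(r'[^0-9.]', upstream)
    if match:
        base, extra = upstream[:match.start()], upstream[match.start():]
    else:
        base, extra = upstream, None
    # pad missing minor/micro with zeros
    toks = (base.split(".", 2) + ['0', '0'])[:3]
    major, minor, micro = (int(t) for t in toks)
    version = {'major': major, 'minor': minor, 'micro': micro,
               'extra': extra, 'raw': raw, 'upstream': upstream}
    if name:
        version['name'] = name
    version['semantic_version'] = (
        major * semx[0] + minor * semx[1] + micro * semx[2])
    return version
```

Note the `semantic_version` weighting: the default `semx=(10000, 100, 1)` turns `1.2.3` into `10203`, which makes versions cheaply comparable as integers.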
4152 | +class TestDistros(CiTestCase): |
4153 | + |
4154 | + def test_distro_names(self): |
4155 | + all_distros = list(distro.DISTROS) |
4156 | + for distro_name in distro.DISTRO_NAMES: |
4157 | + distro_enum = getattr(distro.DISTROS, distro_name) |
4158 | + self.assertIn(distro_enum, all_distros) |
4159 | + |
4160 | + def test_distro_names_unknown(self): |
4161 | + distro_name = "ImNotADistro" |
4162 | + self.assertNotIn(distro_name, distro.DISTRO_NAMES) |
4163 | + with self.assertRaises(AttributeError): |
4164 | + getattr(distro.DISTROS, distro_name) |
4165 | + |
4166 | + def test_distro_osfamily(self): |
4167 | + for variant, family in distro.OS_FAMILIES.items(): |
4168 | + self.assertNotEqual(variant, family) |
4169 | + self.assertIn(variant, distro.DISTROS) |
4170 | + for dname in family: |
4171 | + self.assertIn(dname, distro.DISTROS) |
4172 | + |
4173 | + def test_distro_osfmaily_identity(self): |
4174 | + for family, variants in distro.OS_FAMILIES.items(): |
4175 | + self.assertIn(family, variants) |
4176 | + |
4177 | + def test_name_to_distro(self): |
4178 | + for distro_name in distro.DISTRO_NAMES: |
4179 | + dobj = distro.name_to_distro(distro_name) |
4180 | + self.assertEqual(dobj, getattr(distro.DISTROS, distro_name)) |
4181 | + |
4182 | + def test_name_to_distro_unknown_value(self): |
4183 | + with self.assertRaises(ValueError): |
4184 | + distro.name_to_distro(None) |
4185 | + |
4186 | + def test_name_to_distro_unknown_attr(self): |
4187 | + with self.assertRaises(ValueError): |
4188 | + distro.name_to_distro('NotADistro') |
4189 | + |
4190 | + def test_distros_unknown_attr(self): |
4191 | + with self.assertRaises(AttributeError): |
4192 | + distro.DISTROS.notadistro |
4193 | + |
4194 | + def test_distros_unknown_index(self): |
4195 | + with self.assertRaises(IndexError): |
4196 | + distro.DISTROS[len(distro.DISTROS)+1] |
4197 | + |
4198 | + |
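The `DISTROS` "pseudo-enum" these tests exercise behaves like a namedtuple instance indexed by position. A minimal sketch, assuming a reduced `DISTRO_NAMES` list (curtin's real list is longer), shows why unknown attributes raise `AttributeError`, out-of-range indices raise `IndexError`, and `name_to_distro` raises `ValueError`:

```python
from collections import namedtuple

# Illustrative subset; curtin defines the full list of supported names.
DISTRO_NAMES = ['debian', 'ubuntu', 'centos', 'rhel', 'fedora']

# A namedtuple of ints: attribute access per name, tuple semantics per index.
DISTROS = namedtuple('Distros', DISTRO_NAMES)(*range(len(DISTRO_NAMES)))


def name_to_distro(distro_name):
    """Map a distro name string to its DISTROS member, else ValueError."""
    try:
        return DISTROS[DISTROS._fields.index(distro_name)]
    except (IndexError, AttributeError, ValueError):
        raise ValueError("Unknown distro name: %s" % distro_name)
```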
4199 | +class TestDistroInfo(CiTestCase): |
4200 | + |
4201 | + def setUp(self): |
4202 | + super(TestDistroInfo, self).setUp() |
4203 | + self.add_patch('curtin.distro.os_release', 'mock_os_release') |
4204 | + |
4205 | + def test_get_distroinfo(self): |
4206 | + for distro_name in distro.DISTRO_NAMES: |
4207 | + self.mock_os_release.return_value = {'ID': distro_name} |
4208 | + variant = distro.name_to_distro(distro_name) |
4209 | + family = distro.DISTRO_TO_OSFAMILY[variant] |
4210 | + distro_info = distro.get_distroinfo() |
4211 | + self.assertEqual(variant, distro_info.variant) |
4212 | + self.assertEqual(family, distro_info.family) |
4213 | + |
4214 | + def test_get_distro(self): |
4215 | + for distro_name in distro.DISTRO_NAMES: |
4216 | + self.mock_os_release.return_value = {'ID': distro_name} |
4217 | + variant = distro.name_to_distro(distro_name) |
4218 | + distro_obj = distro.get_distro() |
4219 | + self.assertEqual(variant, distro_obj) |
4220 | + |
4221 | + def test_get_osfamily(self): |
4222 | + for distro_name in distro.DISTRO_NAMES: |
4223 | + self.mock_os_release.return_value = {'ID': distro_name} |
4224 | + variant = distro.name_to_distro(distro_name) |
4225 | + family = distro.DISTRO_TO_OSFAMILY[variant] |
4226 | + distro_obj = distro.get_osfamily() |
4227 | + self.assertEqual(family, distro_obj) |
4228 | + |
4229 | + |
4230 | +class TestDistroIdentity(CiTestCase): |
4231 | + |
4232 | + def setUp(self): |
4233 | + super(TestDistroIdentity, self).setUp() |
4234 | + self.add_patch('curtin.distro.os.path.exists', 'mock_os_path') |
4235 | + |
4236 | + def test_is_ubuntu_core(self): |
4237 | + for exists in [True, False]: |
4238 | + self.mock_os_path.return_value = exists |
4239 | + self.assertEqual(exists, distro.is_ubuntu_core()) |
4240 | + self.mock_os_path.assert_called_with('/system-data/var/lib/snapd') |
4241 | + |
4242 | + def test_is_centos(self): |
4243 | + for exists in [True, False]: |
4244 | + self.mock_os_path.return_value = exists |
4245 | + self.assertEqual(exists, distro.is_centos()) |
4246 | + self.mock_os_path.assert_called_with('/etc/centos-release') |
4247 | + |
4248 | + def test_is_rhel(self): |
4249 | + for exists in [True, False]: |
4250 | + self.mock_os_path.return_value = exists |
4251 | + self.assertEqual(exists, distro.is_rhel()) |
4252 | + self.mock_os_path.assert_called_with('/etc/redhat-release') |
4253 | + |
4254 | + |
4255 | +class TestYumInstall(CiTestCase): |
4256 | + |
4257 | + @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) |
4258 | + @mock.patch('curtin.util.subp') |
4259 | + def test_yum_install(self, m_subp): |
4260 | + pkglist = ['foobar', 'wark'] |
4261 | + target = 'mytarget' |
4262 | + mode = 'install' |
4263 | + expected_calls = [ |
4264 | + mock.call(['yum', '--assumeyes', '--quiet', 'install', |
4265 | + '--downloadonly', '--setopt=keepcache=1'] + pkglist, |
4266 | + env=None, retries=[1] * 10, |
4267 | + target=paths.target_path(target)), |
4268 | + mock.call(['yum', '--assumeyes', '--quiet', 'install', |
4269 | + '--cacheonly'] + pkglist, env=None, |
4270 | + target=paths.target_path(target)) |
4271 | + ] |
4272 | + |
4273 | + # call yum_install directly |
4274 | + distro.yum_install(mode, pkglist, target=target) |
4275 | + m_subp.assert_has_calls(expected_calls) |
4276 | + |
4277 | + # call yum_install through run_yum_command |
4278 | + m_subp.reset() |
4279 | + distro.run_yum_command('install', pkglist, target=target) |
4280 | + m_subp.assert_has_calls(expected_calls) |
4281 | + |
4282 | + # call yum_install through install_packages |
4283 | + m_subp.reset() |
4284 | + osfamily = distro.DISTROS.redhat |
4285 | + distro.install_packages(pkglist, osfamily=osfamily, target=target) |
4286 | + m_subp.assert_has_calls(expected_calls) |
4287 | + |
4288 | + |
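The `expected_calls` in `TestYumInstall` encode a two-phase flow: first download the packages into yum's cache (the retried, network-dependent step), then install from cache only. A small sketch of just the command construction; the helper name `yum_install_cmds` is illustrative, not curtin's:

```python
def yum_install_cmds(pkglist):
    """Build the download-then-install command pair asserted above."""
    base = ['yum', '--assumeyes', '--quiet', 'install']
    # phase 1: fetch into the cache, retried on transient network failure
    download = base + ['--downloadonly', '--setopt=keepcache=1'] + pkglist
    # phase 2: install strictly from the populated cache
    install = base + ['--cacheonly'] + pkglist
    return [download, install]
```

Splitting download from install means a flaky mirror only fails the retryable first phase, never a half-completed install.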
4289 | +class TestHasPkgAvailable(CiTestCase): |
4290 | + |
4291 | + def setUp(self): |
4292 | + super(TestHasPkgAvailable, self).setUp() |
4293 | + self.package = 'foobar' |
4294 | + self.target = paths.target_path('mytarget') |
4295 | + |
4296 | + @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) |
4297 | + @mock.patch('curtin.distro.subp') |
4298 | + def test_has_pkg_available_debian(self, m_subp): |
4299 | + osfamily = distro.DISTROS.debian |
4300 | + m_subp.return_value = (self.package, '') |
4301 | + result = distro.has_pkg_available(self.package, self.target, osfamily) |
4302 | + self.assertTrue(result) |
4303 | + m_subp.assert_has_calls([mock.call(['apt-cache', 'pkgnames'], |
4304 | + capture=True, |
4305 | + target=self.target)]) |
4306 | + |
4307 | + @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) |
4308 | + @mock.patch('curtin.distro.subp') |
4309 | + def test_has_pkg_available_debian_returns_false_not_avail(self, m_subp): |
4310 | + pkg = 'wark' |
4311 | + osfamily = distro.DISTROS.debian |
4312 | + m_subp.return_value = (pkg, '') |
4313 | + result = distro.has_pkg_available(self.package, self.target, osfamily) |
4314 | + self.assertEqual(pkg == self.package, result) |
4315 | + m_subp.assert_has_calls([mock.call(['apt-cache', 'pkgnames'], |
4316 | + capture=True, |
4317 | + target=self.target)]) |
4318 | + |
4319 | + @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) |
4320 | + @mock.patch('curtin.distro.run_yum_command') |
4321 | + def test_has_pkg_available_redhat(self, m_subp): |
4322 | + osfamily = distro.DISTROS.redhat |
4323 | + m_subp.return_value = (self.package, '') |
4324 | + result = distro.has_pkg_available(self.package, self.target, osfamily) |
4325 | + self.assertTrue(result) |
4326 | + m_subp.assert_has_calls([mock.call('list', opts=['--cacheonly'])]) |
4327 | + |
4328 | + @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) |
4329 | + @mock.patch('curtin.distro.run_yum_command') |
4330 | + def test_has_pkg_available_redhat_returns_false_not_avail(self, m_subp): |
4331 | + pkg = 'wark' |
4332 | + osfamily = distro.DISTROS.redhat |
4333 | + m_subp.return_value = (pkg, '') |
4334 | + result = distro.has_pkg_available(self.package, self.target, osfamily) |
4335 | + self.assertEqual(pkg == self.package, result) |
4336 | + m_subp.assert_has_calls([mock.call('list', opts=['--cacheonly'])]) |
4337 | + |
4338 | +# vi: ts=4 expandtab syntax=python |
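The `TestHasPkgAvailable` cases above boil down to a per-family dispatch: the debian family asks `apt-cache pkgnames` for all known package names, the redhat family asks yum's cached package list, and availability is simple membership. A hedged sketch, with the query callables standing in for curtin's `subp()`/`run_yum_command()` calls:

```python
def has_pkg_available(pkg, osfamily, apt_pkgnames, yum_list):
    """Return True if pkg appears in the family's package listing.

    apt_pkgnames / yum_list are injected callables returning the raw
    stdout of 'apt-cache pkgnames' and 'yum list --cacheonly'.
    """
    if osfamily == 'debian':
        # apt-cache pkgnames emits one package name per line
        return pkg in apt_pkgnames().splitlines()
    # redhat family: consult yum's cached package list
    return pkg in yum_list().splitlines()
```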
4339 | diff --git a/tests/unittests/test_feature.py b/tests/unittests/test_feature.py |
4340 | index c62e0cd..7c55882 100644 |
4341 | --- a/tests/unittests/test_feature.py |
4342 | +++ b/tests/unittests/test_feature.py |
4343 | @@ -21,4 +21,7 @@ class TestExportsFeatures(CiTestCase): |
4344 | def test_has_centos_apply_network_config(self): |
4345 | self.assertIn('CENTOS_APPLY_NETWORK_CONFIG', curtin.FEATURES) |
4346 | |
4347 | + def test_has_centos_curthook_support(self): |
4348 | + self.assertIn('CENTOS_CURTHOOK_SUPPORT', curtin.FEATURES) |
4349 | + |
4350 | # vi: ts=4 expandtab syntax=python |
4351 | diff --git a/tests/unittests/test_pack.py b/tests/unittests/test_pack.py |
4352 | index 1aae456..cb0b135 100644 |
4353 | --- a/tests/unittests/test_pack.py |
4354 | +++ b/tests/unittests/test_pack.py |
4355 | @@ -97,6 +97,8 @@ class TestPack(TestCase): |
4356 | }} |
4357 | |
4358 | out, err, rc, log_contents = self.run_install(cfg) |
4359 | + print("out=%s" % out) |
4360 | + print("err=%s" % err) |
4361 | |
4362 | # the version string and users command output should be in output |
4363 | self.assertIn(version.version_string(), out) |
4364 | diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py |
4365 | index 7fb332d..a64be16 100644 |
4366 | --- a/tests/unittests/test_util.py |
4367 | +++ b/tests/unittests/test_util.py |
4368 | @@ -4,10 +4,10 @@ from unittest import skipIf |
4369 | import mock |
4370 | import os |
4371 | import stat |
4372 | -import sys |
4373 | from textwrap import dedent |
4374 | |
4375 | from curtin import util |
4376 | +from curtin import paths |
4377 | from .helpers import CiTestCase, simple_mocked_open |
4378 | |
4379 | |
4380 | @@ -104,48 +104,6 @@ class TestWhich(CiTestCase): |
4381 | self.assertEqual(found, "/usr/bin2/fuzz") |
4382 | |
4383 | |
4384 | -class TestLsbRelease(CiTestCase): |
4385 | - |
4386 | - def setUp(self): |
4387 | - super(TestLsbRelease, self).setUp() |
4388 | - self._reset_cache() |
4389 | - |
4390 | - def _reset_cache(self): |
4391 | - keys = [k for k in util._LSB_RELEASE.keys()] |
4392 | - for d in keys: |
4393 | - del util._LSB_RELEASE[d] |
4394 | - |
4395 | - @mock.patch("curtin.util.subp") |
4396 | - def test_lsb_release_functional(self, mock_subp): |
4397 | - output = '\n'.join([ |
4398 | - "Distributor ID: Ubuntu", |
4399 | - "Description: Ubuntu 14.04.2 LTS", |
4400 | - "Release: 14.04", |
4401 | - "Codename: trusty", |
4402 | - ]) |
4403 | - rdata = {'id': 'Ubuntu', 'description': 'Ubuntu 14.04.2 LTS', |
4404 | - 'codename': 'trusty', 'release': '14.04'} |
4405 | - |
4406 | - def fake_subp(cmd, capture=False, target=None): |
4407 | - return output, 'No LSB modules are available.' |
4408 | - |
4409 | - mock_subp.side_effect = fake_subp |
4410 | - found = util.lsb_release() |
4411 | - mock_subp.assert_called_with( |
4412 | - ['lsb_release', '--all'], capture=True, target=None) |
4413 | - self.assertEqual(found, rdata) |
4414 | - |
4415 | - @mock.patch("curtin.util.subp") |
4416 | - def test_lsb_release_unavailable(self, mock_subp): |
4417 | - def doraise(*args, **kwargs): |
4418 | - raise util.ProcessExecutionError("foo") |
4419 | - mock_subp.side_effect = doraise |
4420 | - |
4421 | - expected = {k: "UNAVAILABLE" for k in |
4422 | - ('id', 'description', 'codename', 'release')} |
4423 | - self.assertEqual(util.lsb_release(), expected) |
4424 | - |
4425 | - |
4426 | class TestSubp(CiTestCase): |
4427 | |
4428 | stdin2err = ['bash', '-c', 'cat >&2'] |
4429 | @@ -312,7 +270,7 @@ class TestSubp(CiTestCase): |
4430 | # if target is not provided or is /, chroot should not be used |
4431 | calls = m_popen.call_args_list |
4432 | popen_args, popen_kwargs = calls[-1] |
4433 | - target = util.target_path(kwargs.get('target', None)) |
4434 | + target = paths.target_path(kwargs.get('target', None)) |
4435 | unshcmd = self.mock_get_unshare_pid_args.return_value |
4436 | if target == "/": |
4437 | self.assertEqual(unshcmd + list(cmd), popen_args[0]) |
4438 | @@ -554,44 +512,44 @@ class TestSetUnExecutable(CiTestCase): |
4439 | |
4440 | class TestTargetPath(CiTestCase): |
4441 | def test_target_empty_string(self): |
4442 | - self.assertEqual("/etc/passwd", util.target_path("", "/etc/passwd")) |
4443 | + self.assertEqual("/etc/passwd", paths.target_path("", "/etc/passwd")) |
4444 | |
4445 | def test_target_non_string_raises(self): |
4446 | - self.assertRaises(ValueError, util.target_path, False) |
4447 | - self.assertRaises(ValueError, util.target_path, 9) |
4448 | - self.assertRaises(ValueError, util.target_path, True) |
4449 | + self.assertRaises(ValueError, paths.target_path, False) |
4450 | + self.assertRaises(ValueError, paths.target_path, 9) |
4451 | + self.assertRaises(ValueError, paths.target_path, True) |
4452 | |
4453 | def test_lots_of_slashes_is_slash(self): |
4454 | - self.assertEqual("/", util.target_path("/")) |
4455 | - self.assertEqual("/", util.target_path("//")) |
4456 | - self.assertEqual("/", util.target_path("///")) |
4457 | - self.assertEqual("/", util.target_path("////")) |
4458 | + self.assertEqual("/", paths.target_path("/")) |
4459 | + self.assertEqual("/", paths.target_path("//")) |
4460 | + self.assertEqual("/", paths.target_path("///")) |
4461 | + self.assertEqual("/", paths.target_path("////")) |
4462 | |
4463 | def test_empty_string_is_slash(self): |
4464 | - self.assertEqual("/", util.target_path("")) |
4465 | + self.assertEqual("/", paths.target_path("")) |
4466 | |
4467 | def test_recognizes_relative(self): |
4468 | - self.assertEqual("/", util.target_path("/foo/../")) |
4469 | - self.assertEqual("/", util.target_path("/foo//bar/../../")) |
4470 | + self.assertEqual("/", paths.target_path("/foo/../")) |
4471 | + self.assertEqual("/", paths.target_path("/foo//bar/../../")) |
4472 | |
4473 | def test_no_path(self): |
4474 | - self.assertEqual("/my/target", util.target_path("/my/target")) |
4475 | + self.assertEqual("/my/target", paths.target_path("/my/target")) |
4476 | |
4477 | def test_no_target_no_path(self): |
4478 | - self.assertEqual("/", util.target_path(None)) |
4479 | + self.assertEqual("/", paths.target_path(None)) |
4480 | |
4481 | def test_no_target_with_path(self): |
4482 | - self.assertEqual("/my/path", util.target_path(None, "/my/path")) |
4483 | + self.assertEqual("/my/path", paths.target_path(None, "/my/path")) |
4484 | |
4485 | def test_trailing_slash(self): |
4486 | self.assertEqual("/my/target/my/path", |
4487 | - util.target_path("/my/target/", "/my/path")) |
4488 | + paths.target_path("/my/target/", "/my/path")) |
4489 | |
4490 | def test_bunch_of_slashes_in_path(self): |
4491 | self.assertEqual("/target/my/path/", |
4492 | - util.target_path("/target/", "//my/path/")) |
4493 | + paths.target_path("/target/", "//my/path/")) |
4494 | self.assertEqual("/target/my/path/", |
4495 | - util.target_path("/target/", "///my/path/")) |
4496 | + paths.target_path("/target/", "///my/path/")) |
4497 | |
4498 | |
4499 | class TestRunInChroot(CiTestCase): |
4500 | @@ -1036,65 +994,4 @@ class TestLoadKernelModule(CiTestCase): |
4501 | self.assertEqual(0, self.m_subp.call_count) |
4502 | |
4503 | |
4504 | -class TestParseDpkgVersion(CiTestCase): |
4505 | - """test parse_dpkg_version.""" |
4506 | - |
4507 | - def test_none_raises_type_error(self): |
4508 | - self.assertRaises(TypeError, util.parse_dpkg_version, None) |
4509 | - |
4510 | - @skipIf(sys.version_info.major < 3, "python 2 bytes are strings.") |
4511 | - def test_bytes_raises_type_error(self): |
4512 | - self.assertRaises(TypeError, util.parse_dpkg_version, b'1.2.3-0') |
4513 | - |
4514 | - def test_simple_native_package_version(self): |
4515 | - """dpkg versions must have a -. If not present expect value error.""" |
4516 | - self.assertEqual( |
4517 | - {'major': 2, 'minor': 28, 'micro': 0, 'extra': None, |
4518 | - 'raw': '2.28', 'upstream': '2.28', 'name': 'germinate', |
4519 | - 'semantic_version': 22800}, |
4520 | - util.parse_dpkg_version('2.28', name='germinate')) |
4521 | - |
4522 | - def test_complex_native_package_version(self): |
4523 | - dver = '1.0.106ubuntu2+really1.0.97ubuntu1' |
4524 | - self.assertEqual( |
4525 | - {'major': 1, 'minor': 0, 'micro': 106, |
4526 | - 'extra': 'ubuntu2+really1.0.97ubuntu1', |
4527 | - 'raw': dver, 'upstream': dver, 'name': 'debootstrap', |
4528 | - 'semantic_version': 100106}, |
4529 | - util.parse_dpkg_version(dver, name='debootstrap', |
4530 | - semx=(100000, 1000, 1))) |
4531 | - |
4532 | - def test_simple_valid(self): |
4533 | - self.assertEqual( |
4534 | - {'major': 1, 'minor': 2, 'micro': 3, 'extra': None, |
4535 | - 'raw': '1.2.3-0', 'upstream': '1.2.3', 'name': 'foo', |
4536 | - 'semantic_version': 10203}, |
4537 | - util.parse_dpkg_version('1.2.3-0', name='foo')) |
4538 | - |
4539 | - def test_simple_valid_with_semx(self): |
4540 | - self.assertEqual( |
4541 | - {'major': 1, 'minor': 2, 'micro': 3, 'extra': None, |
4542 | - 'raw': '1.2.3-0', 'upstream': '1.2.3', |
4543 | - 'semantic_version': 123}, |
4544 | - util.parse_dpkg_version('1.2.3-0', semx=(100, 10, 1))) |
4545 | - |
4546 | - def test_upstream_with_hyphen(self): |
4547 | - """upstream versions may have a hyphen.""" |
4548 | - cver = '18.2-14-g6d48d265-0ubuntu1' |
4549 | - self.assertEqual( |
4550 | - {'major': 18, 'minor': 2, 'micro': 0, 'extra': '-14-g6d48d265', |
4551 | - 'raw': cver, 'upstream': '18.2-14-g6d48d265', |
4552 | - 'name': 'cloud-init', 'semantic_version': 180200}, |
4553 | - util.parse_dpkg_version(cver, name='cloud-init')) |
4554 | - |
4555 | - def test_upstream_with_plus(self): |
4556 | - """multipath tools has a + in it.""" |
4557 | - mver = '0.5.0+git1.656f8865-5ubuntu2.5' |
4558 | - self.assertEqual( |
4559 | - {'major': 0, 'minor': 5, 'micro': 0, 'extra': '+git1.656f8865', |
4560 | - 'raw': mver, 'upstream': '0.5.0+git1.656f8865', |
4561 | - 'semantic_version': 500}, |
4562 | - util.parse_dpkg_version(mver)) |
4563 | - |
4564 | - |
4565 | # vi: ts=4 expandtab syntax=python |
4566 | diff --git a/tests/vmtests/__init__.py b/tests/vmtests/__init__.py |
4567 | index bd159c4..7e31491 100644 |
4568 | --- a/tests/vmtests/__init__.py |
4569 | +++ b/tests/vmtests/__init__.py |
4570 | @@ -493,18 +493,67 @@ def skip_by_date(bugnum, fixby, removeby=None, skips=None, install=True): |
4571 | return decorator |
4572 | |
4573 | |
4574 | +DEFAULT_COLLECT_SCRIPTS = { |
4575 | + 'common': [textwrap.dedent(""" |
4576 | + cd OUTPUT_COLLECT_D |
4577 | + cp /etc/fstab ./fstab |
4578 | + cp -a /etc/udev/rules.d ./udev_rules.d |
4579 | + ifconfig -a | cat >ifconfig_a |
4580 | + ip a | cat >ip_a |
4581 | + cp -a /var/log/messages . |
4582 | + cp -a /var/log/syslog . |
4583 | + cp -a /var/log/cloud-init* . |
4584 | + cp -a /var/lib/cloud ./var_lib_cloud |
4585 | + cp -a /run/cloud-init ./run_cloud-init |
4586 | + cp -a /proc/cmdline ./proc_cmdline |
4587 | + cp -a /proc/mounts ./proc_mounts |
4588 | + cp -a /proc/partitions ./proc_partitions |
4589 | + cp -a /proc/swaps ./proc-swaps |
4590 | + # ls -al /dev/disk/* |
4591 | + mkdir -p /dev/disk/by-dname |
4592 | + ls /dev/disk/by-dname/ | cat >ls_dname |
4593 | + ls -al /dev/disk/by-dname/ | cat >ls_al_bydname |
4594 | + ls -al /dev/disk/by-id/ | cat >ls_al_byid |
4595 | + ls -al /dev/disk/by-uuid/ | cat >ls_al_byuuid |
4596 | + blkid -o export | cat >blkid.out |
4597 | + find /boot | cat > find_boot.out |
4598 | + [ -e /sys/firmware/efi ] && { |
4599 | + efibootmgr -v | cat >efibootmgr.out; |
4600 | + } |
4601 | + """)], |
4602 | + 'centos': [textwrap.dedent(""" |
4603 | + # XXX: command | cat >output is required for Centos under SELinux |
4604 | + # http://danwalsh.livejournal.com/22860.html |
4605 | + cd OUTPUT_COLLECT_D |
4606 | + rpm -qa | cat >rpm_qa |
4607 | + cp -a /etc/sysconfig/network-scripts . |
4608 | + rpm -q --queryformat '%{VERSION}\n' cloud-init |tee rpm_ci_version |
4609 | + rpm -E '%rhel' > rpm_dist_version_major |
4610 | + cp -a /etc/centos-release . |
4611 | + """)], |
4612 | + 'ubuntu': [textwrap.dedent(""" |
4613 | + cd OUTPUT_COLLECT_D |
4614 | + dpkg-query --show \ |
4615 | + --showformat='${db:Status-Abbrev}\t${Package}\t${Version}\n' \ |
4616 | + > debian-packages.txt 2> debian-packages.txt.err |
4617 | + cp -av /etc/network/interfaces . |
4618 | + cp -av /etc/network/interfaces.d . |
4619 | + find /etc/network/interfaces.d > find_interfacesd |
4620 | + v="" |
4621 | + out=$(apt-config shell v Acquire::HTTP::Proxy) |
4622 | + eval "$out" |
4623 | + echo "$v" > apt-proxy |
4624 | + """)] |
4625 | +} |
4626 | + |
4627 | + |
4628 | class VMBaseClass(TestCase): |
4629 | __test__ = False |
4630 | expected_failure = False |
4631 | arch_skip = [] |
4632 | boot_timeout = BOOT_TIMEOUT |
4633 | - collect_scripts = [textwrap.dedent(""" |
4634 | - cd OUTPUT_COLLECT_D |
4635 | - dpkg-query --show \ |
4636 | - --showformat='${db:Status-Abbrev}\t${Package}\t${Version}\n' \ |
4637 | - > debian-packages.txt 2> debian-packages.txt.err |
4638 | - cat /proc/swaps > proc-swaps |
4639 | - """)] |
4640 | + collect_scripts = [] |
4641 | + extra_collect_scripts = [] |
4642 | conf_file = "examples/tests/basic.yaml" |
4643 | nr_cpus = None |
4644 | dirty_disks = False |
4645 | @@ -528,6 +577,10 @@ class VMBaseClass(TestCase): |
4646 | conf_replace = {} |
4647 | uefi = False |
4648 | proxy = None |
4649 | + url_map = { |
4650 | + '/MAAS/api/version/': '2.0', |
4651 | + '/MAAS/api/2.0/version/': |
4652 | + json.dumps({'version': '2.5.0+curtin-vmtest'})} |
4653 | |
4654 | # these get set from base_vm_classes |
4655 | release = None |
4656 | @@ -773,6 +826,16 @@ class VMBaseClass(TestCase): |
4657 | cls.arch) |
4658 | raise SkipTest(reason) |
4659 | |
4660 | + # assign default collect scripts |
4661 | + if not cls.collect_scripts: |
4662 | + cls.collect_scripts = ( |
4663 | + DEFAULT_COLLECT_SCRIPTS['common'] + |
4664 | + DEFAULT_COLLECT_SCRIPTS[cls.target_distro]) |
4665 | + |
4666 | + # append extra from subclass |
4667 | + if cls.extra_collect_scripts: |
4668 | + cls.collect_scripts.extend(cls.extra_collect_scripts) |
4669 | + |
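The default-assignment block above composes each test class's collect scripts from three layers: the common scripts, the target distro's scripts, then any subclass extras. A sketch of that merge order, using short placeholder strings in place of the real shell snippets (the helper name `assemble_collect_scripts` is illustrative):

```python
# Placeholder strings stand in for the textwrap.dedent() shell scripts.
DEFAULT_COLLECT_SCRIPTS = {
    'common': ['common-script'],
    'centos': ['centos-script'],
    'ubuntu': ['ubuntu-script'],
}


def assemble_collect_scripts(target_distro, extra_collect_scripts=()):
    """Merge common + per-distro defaults, then subclass extras."""
    # list concatenation produces a fresh list, so extending it does
    # not mutate the shared defaults
    scripts = (DEFAULT_COLLECT_SCRIPTS['common'] +
               DEFAULT_COLLECT_SCRIPTS[target_distro])
    scripts.extend(extra_collect_scripts)
    return scripts
```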
4670 | setup_start = time.time() |
4671 | logger.info( |
4672 | ('Starting setup for testclass: {__name__} ' |
4673 | @@ -994,7 +1057,8 @@ class VMBaseClass(TestCase): |
4674 | |
4675 | # set reporting logger |
4676 | cls.reporting_log = os.path.join(cls.td.logs, 'webhooks-events.json') |
4677 | - reporting_logger = CaptureReporting(cls.reporting_log) |
4678 | + reporting_logger = CaptureReporting(cls.reporting_log, |
4679 | + url_mapping=cls.url_map) |
4680 | |
4681 | # write reporting config |
4682 | reporting_config = os.path.join(cls.td.install, 'reporting.cfg') |
4683 | @@ -1442,6 +1506,8 @@ class VMBaseClass(TestCase): |
4684 | if self.target_release == "trusty": |
4685 | raise SkipTest( |
4686 | "(LP: #1523037): dname does not work on trusty kernels") |
4687 | + if self.target_distro != "ubuntu": |
4688 | + raise SkipTest("dname not present in non-ubuntu releases") |
4689 | |
4690 | if not disk_to_check: |
4691 | disk_to_check = self.disk_to_check |
4692 | @@ -1449,11 +1515,9 @@ class VMBaseClass(TestCase): |
4693 | logger.debug('test_dname: no disks to check') |
4694 | return |
4695 | logger.debug('test_dname: checking disks: %s', disk_to_check) |
4696 | - path = self.collect_path("ls_dname") |
4697 | - if not os.path.exists(path): |
4698 | - logger.debug('test_dname: no "ls_dname" file: %s', path) |
4699 | - return |
4700 | - contents = util.load_file(path) |
4701 | + self.output_files_exist(["ls_dname"]) |
4702 | + |
4703 | + contents = self.load_collect_file("ls_dname") |
4704 | for diskname, part in self.disk_to_check: |
4705 | if part is not 0: |
4706 | link = diskname + "-part" + str(part) |
4707 | @@ -1485,6 +1549,9 @@ class VMBaseClass(TestCase): |
4708 | """ Check that curtin has removed /etc/network/interfaces.d/eth0.cfg |
4709 | by examining the output of a find /etc/network > find_interfaces.d |
4710 | """ |
4711 | + # target_distro is set for non-ubuntu targets |
4712 | + if self.target_distro != 'ubuntu': |
4713 | + raise SkipTest("eni/ifupdown not present in non-ubuntu releases") |
4714 | interfacesd = self.load_collect_file("find_interfacesd") |
4715 | self.assertNotIn("/etc/network/interfaces.d/eth0.cfg", |
4716 | interfacesd.split("\n")) |
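The collect-script assembly added to VMBaseClass above (distro-keyed defaults plus per-subclass extras) can be sketched outside the test harness. This is a simplified stand-in: the DEFAULT_COLLECT_SCRIPTS contents and class names here are illustrative, not the real vmtest definitions.

```python
# Sketch of the per-distro collect-script assembly done in setUpClass.
# Script bodies are placeholder strings mirroring the diff above.
DEFAULT_COLLECT_SCRIPTS = {
    'common': ['cat /etc/fstab > fstab'],
    'ubuntu': ['dpkg-query -l > dpkg_l'],
    'centos': ['rpm -qa > rpm_qa'],
}


class VMBase:
    collect_scripts = []        # populated once per test class
    extra_collect_scripts = []  # subclasses append their own scripts
    target_distro = 'ubuntu'

    @classmethod
    def setup_collect_scripts(cls):
        # assign defaults keyed by distro, then append subclass extras
        if not cls.collect_scripts:
            cls.collect_scripts = (
                DEFAULT_COLLECT_SCRIPTS['common'] +
                DEFAULT_COLLECT_SCRIPTS[cls.target_distro])
        if cls.extra_collect_scripts:
            cls.collect_scripts.extend(cls.extra_collect_scripts)


class CentosTest(VMBase):
    collect_scripts = []
    target_distro = 'centos'
    extra_collect_scripts = ['blkid -o export /dev/vda > blkid_vda']


CentosTest.setup_collect_scripts()
```

After setup, CentosTest.collect_scripts holds the common script, the centos default, and the subclass extra, in that order — which is why subclasses in this branch only declare `extra_collect_scripts` instead of re-listing the defaults.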
4717 | diff --git a/tests/vmtests/helpers.py b/tests/vmtests/helpers.py |
4718 | index 10e20b3..6dddcc6 100644 |
4719 | --- a/tests/vmtests/helpers.py |
4720 | +++ b/tests/vmtests/helpers.py |
4721 | @@ -2,6 +2,7 @@ |
4722 | # This file is part of curtin. See LICENSE file for copyright and license info. |
4723 | |
4724 | import os |
4725 | +import re |
4726 | import subprocess |
4727 | import signal |
4728 | import threading |
4729 | @@ -86,7 +87,26 @@ def check_call(cmd, signal=signal.SIGTERM, **kwargs): |
4730 | return Command(cmd, signal).run(**kwargs) |
4731 | |
4732 | |
4733 | -def find_testcases(): |
4734 | +def find_testcases_by_attr(**kwargs): |
4735 | + class_match = set() |
4736 | + for test_case in find_testcases(**kwargs): |
4737 | + tc_name = str(test_case.__class__) |
4738 | + full_path = tc_name.split("'")[1].split(".") |
4739 | + class_name = full_path[-1] |
4740 | + if class_name in class_match: |
4741 | + continue |
4742 | + class_match.add(class_name) |
4743 | + filename = "/".join(full_path[0:-1]) + ".py" |
4744 | + yield "%s:%s" % (filename, class_name) |
4745 | + |
4746 | + |
4747 | +def _attr_match(pattern, value): |
4748 | + if not value: |
4749 | + return False |
4750 | + return re.match(pattern, str(value)) |
4751 | + |
4752 | + |
4753 | +def find_testcases(**kwargs): |
4754 | # Use the TestLoader to load all test cases defined within tests/vmtests/ |
4755 | # and figure out what distros and releases they are testing. Any tests |
4756 | # which are disabled will be excluded. |
4757 | @@ -97,12 +117,19 @@ def find_testcases(): |
4758 | root_dir = os.path.split(os.path.split(tests_dir)[0])[0] |
4759 | # Find all test modules defined in curtin/tests/vmtests/ |
4760 | module_test_suites = loader.discover(tests_dir, top_level_dir=root_dir) |
4761 | + filter_attrs = [attr for attr, value in kwargs.items() if value] |
4762 | for mts in module_test_suites: |
4763 | for class_test_suite in mts: |
4764 | for test_case in class_test_suite: |
4765 | # skip disabled tests |
4766 | if not getattr(test_case, '__test__', False): |
4767 | continue |
4768 | + # compare each filter attr with the specified value |
4769 | + tcmatch = [not _attr_match(kwargs[attr], |
4770 | + getattr(test_case, attr, False)) |
4771 | + for attr in filter_attrs] |
4772 | + if any(tcmatch): |
4773 | + continue |
4774 | yield test_case |
4775 | |
4776 | |
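The filtering added to `find_testcases()` treats each keyword argument as a regex that must match the corresponding test-case class attribute; a case is skipped if any requested attribute fails to match. A standalone sketch of that logic, with trivial stand-in classes instead of real vmtest TestCase instances:

```python
import re


def _attr_match(pattern, value):
    # a missing/falsy attribute never matches a requested pattern
    if not value:
        return False
    return re.match(pattern, str(value))


def filter_cases(cases, **kwargs):
    # only attrs with a non-empty pattern participate in filtering
    filter_attrs = [attr for attr, value in kwargs.items() if value]
    for case in cases:
        # drop the case if any requested attribute fails to match
        misses = [not _attr_match(kwargs[attr], getattr(case, attr, False))
                  for attr in filter_attrs]
        if any(misses):
            continue
        yield case


class XenialStorage:
    release, test_type = 'xenial', 'storage'


class Centos70Config:
    release, test_type = 'centos70', 'config'


kept = list(filter_cases([XenialStorage(), Centos70Config()],
                         test_type='storage'))
```

This is what lets the harness select, say, only `test_type = 'storage'` cases per distro; `find_testcases_by_attr()` then dedupes by class name and emits `filename:ClassName` strings for the runner.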
4777 | diff --git a/tests/vmtests/image_sync.py b/tests/vmtests/image_sync.py |
4778 | index e2cedc1..69c19ef 100644 |
4779 | --- a/tests/vmtests/image_sync.py |
4780 | +++ b/tests/vmtests/image_sync.py |
4781 | @@ -30,7 +30,9 @@ IMAGE_SRC_URL = os.environ.get( |
4782 | "http://maas.ubuntu.com/images/ephemeral-v3/daily/streams/v1/index.sjson") |
4783 | IMAGE_DIR = os.environ.get("IMAGE_DIR", "/srv/images") |
4784 | |
4785 | -KEYRING = '/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg' |
4786 | +KEYRING = os.environ.get( |
4787 | + 'IMAGE_SRC_KEYRING', |
4788 | + '/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg') |
4789 | ITEM_NAME_FILTERS = \ |
4790 | ['ftype~(boot-initrd|boot-kernel|root-tgz|squashfs)'] |
4791 | FORMAT_JSON = 'JSON' |
4792 | diff --git a/tests/vmtests/releases.py b/tests/vmtests/releases.py |
4793 | index 02cbfe5..7be8feb 100644 |
4794 | --- a/tests/vmtests/releases.py |
4795 | +++ b/tests/vmtests/releases.py |
4796 | @@ -131,8 +131,8 @@ class _Releases(object): |
4797 | |
4798 | |
4799 | class _CentosReleases(object): |
4800 | - centos70fromxenial = _Centos70FromXenialBase |
4801 | - centos66fromxenial = _Centos66FromXenialBase |
4802 | + centos70_xenial = _Centos70FromXenialBase |
4803 | + centos66_xenial = _Centos66FromXenialBase |
4804 | |
4805 | |
4806 | class _UbuntuCoreReleases(object): |
4807 | diff --git a/tests/vmtests/report_webhook_logger.py b/tests/vmtests/report_webhook_logger.py |
4808 | index e95397c..5e7d63b 100755 |
4809 | --- a/tests/vmtests/report_webhook_logger.py |
4810 | +++ b/tests/vmtests/report_webhook_logger.py |
4811 | @@ -76,7 +76,10 @@ class ServerHandler(http_server.SimpleHTTPRequestHandler): |
4812 | self._message = None |
4813 | self.send_response(200) |
4814 | self.end_headers() |
4815 | - self.wfile.write(("content of %s\n" % self.path).encode('utf-8')) |
4816 | + if self.url_mapping and self.path in self.url_mapping: |
4817 | + self.wfile.write(self.url_mapping[self.path].encode('utf-8')) |
4818 | + else: |
4819 | + self.wfile.write(("content of %s\n" % self.path).encode('utf-8')) |
4820 | |
4821 | def do_POST(self): |
4822 | length = int(self.headers['Content-Length']) |
4823 | @@ -96,13 +99,14 @@ class ServerHandler(http_server.SimpleHTTPRequestHandler): |
4824 | self.wfile.write(msg.encode('utf-8')) |
4825 | |
4826 | |
4827 | -def GenServerHandlerWithResultFile(file_path): |
4828 | +def GenServerHandlerWithResultFile(file_path, url_map): |
4829 | class ExtendedServerHandler(ServerHandler): |
4830 | result_log_file = file_path |
4831 | + url_mapping = url_map |
4832 | return ExtendedServerHandler |
4833 | |
4834 | |
4835 | -def get_httpd(port=None, result_file=None): |
4836 | +def get_httpd(port=None, result_file=None, url_mapping=None): |
4837 | # avoid 'Address already in use' after ctrl-c |
4838 | socketserver.TCPServer.allow_reuse_address = True |
4839 | |
4840 | @@ -111,7 +115,7 @@ def get_httpd(port=None, result_file=None): |
4841 | port = 0 |
4842 | |
4843 | if result_file: |
4844 | - Handler = GenServerHandlerWithResultFile(result_file) |
4845 | + Handler = GenServerHandlerWithResultFile(result_file, url_mapping) |
4846 | else: |
4847 | Handler = ServerHandler |
4848 | httpd = HTTPServerV6(("::", port), Handler) |
4849 | @@ -143,10 +147,11 @@ def run_server(port=DEFAULT_PORT, log_data=True): |
4850 | |
4851 | class CaptureReporting: |
4852 | |
4853 | - def __init__(self, result_file): |
4854 | + def __init__(self, result_file, url_mapping=None): |
4855 | + self.url_mapping = url_mapping |
4856 | self.result_file = result_file |
4857 | self.httpd = get_httpd(result_file=self.result_file, |
4858 | - port=None) |
4859 | + port=None, url_mapping=self.url_mapping) |
4860 | self.httpd.server_activate() |
4861 | # socket.AF_INET6 returns |
4862 | # (host, port, flowinfo, scopeid) |
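The `url_mapping` plumbing above uses a handler-factory pattern: `get_httpd()` needs a handler *class*, not an instance, so per-run state (the result file and now the URL map) is baked in as class attributes of a generated subclass. A minimal sketch of the pattern, with a plain class standing in for `http_server.SimpleHTTPRequestHandler` and a made-up example URL and payload:

```python
class BaseHandler:
    result_log_file = None
    url_mapping = None

    def do_GET(self, path):
        # serve canned content for mapped URLs, echo a default otherwise
        if self.url_mapping and path in self.url_mapping:
            return self.url_mapping[path]
        return "content of %s\n" % path


def gen_handler(file_path, url_map):
    # bake per-run state into a fresh subclass, as
    # GenServerHandlerWithResultFile does in the diff above
    class ExtendedHandler(BaseHandler):
        result_log_file = file_path
        url_mapping = url_map
    return ExtendedHandler


Handler = gen_handler('/tmp/results.json', {'/status': 'ok\n'})
h = Handler()
```

Generating a subclass per run avoids mutating shared class state on the base handler, which matters when several CaptureReporting servers run in one test process.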
4863 | diff --git a/tests/vmtests/test_apt_config_cmd.py b/tests/vmtests/test_apt_config_cmd.py |
4864 | index efd04f3..f9b6a09 100644 |
4865 | --- a/tests/vmtests/test_apt_config_cmd.py |
4866 | +++ b/tests/vmtests/test_apt_config_cmd.py |
4867 | @@ -12,16 +12,14 @@ from .releases import base_vm_classes as relbase |
4868 | |
4869 | class TestAptConfigCMD(VMBaseClass): |
4870 | """TestAptConfigCMD - test standalone command""" |
4871 | + test_type = 'config' |
4872 | conf_file = "examples/tests/apt_config_command.yaml" |
4873 | interactive = False |
4874 | extra_disks = [] |
4875 | fstab_expected = {} |
4876 | disk_to_check = [] |
4877 | - collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent(""" |
4878 | + extra_collect_scripts = [textwrap.dedent(""" |
4879 | cd OUTPUT_COLLECT_D |
4880 | - cat /etc/fstab > fstab |
4881 | - ls /dev/disk/by-dname > ls_dname |
4882 | - find /etc/network/interfaces.d > find_interfacesd |
4883 | cp /etc/apt/sources.list.d/curtin-dev-ubuntu-test-archive-*.list . |
4884 | cp /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg . |
4885 | apt-cache policy | grep proposed > proposed-enabled |
4886 | diff --git a/tests/vmtests/test_apt_source.py b/tests/vmtests/test_apt_source.py |
4887 | index f34913a..bb502b2 100644 |
4888 | --- a/tests/vmtests/test_apt_source.py |
4889 | +++ b/tests/vmtests/test_apt_source.py |
4890 | @@ -14,15 +14,13 @@ from curtin import util |
4891 | |
4892 | class TestAptSrcAbs(VMBaseClass): |
4893 | """TestAptSrcAbs - Basic tests for apt features of curtin""" |
4894 | + test_type = 'config' |
4895 | interactive = False |
4896 | extra_disks = [] |
4897 | fstab_expected = {} |
4898 | disk_to_check = [] |
4899 | - collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent(""" |
4900 | + extra_collect_scripts = [textwrap.dedent(""" |
4901 | cd OUTPUT_COLLECT_D |
4902 | - cat /etc/fstab > fstab |
4903 | - ls /dev/disk/by-dname > ls_dname |
4904 | - find /etc/network/interfaces.d > find_interfacesd |
4905 | apt-key list "F430BBA5" > keyid-F430BBA5 |
4906 | apt-key list "0165013E" > keyppa-0165013E |
4907 | apt-key list "F470A0AC" > keylongid-F470A0AC |
4908 | diff --git a/tests/vmtests/test_basic.py b/tests/vmtests/test_basic.py |
4909 | index 01ffc89..54e3df8 100644 |
4910 | --- a/tests/vmtests/test_basic.py |
4911 | +++ b/tests/vmtests/test_basic.py |
4912 | @@ -4,12 +4,14 @@ from . import ( |
4913 | VMBaseClass, |
4914 | get_apt_proxy) |
4915 | from .releases import base_vm_classes as relbase |
4916 | +from .releases import centos_base_vm_classes as centos_relbase |
4917 | |
4918 | import textwrap |
4919 | from unittest import SkipTest |
4920 | |
4921 | |
4922 | class TestBasicAbs(VMBaseClass): |
4923 | + test_type = 'storage' |
4924 | interactive = False |
4925 | nr_cpus = 2 |
4926 | dirty_disks = True |
4927 | @@ -18,29 +20,18 @@ class TestBasicAbs(VMBaseClass): |
4928 | nvme_disks = ['4G'] |
4929 | disk_to_check = [('main_disk_with_in---valid--dname', 1), |
4930 | ('main_disk_with_in---valid--dname', 2)] |
4931 | - collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent(""" |
4932 | + extra_collect_scripts = [textwrap.dedent(""" |
4933 | cd OUTPUT_COLLECT_D |
4934 | - blkid -o export /dev/vda > blkid_output_vda |
4935 | - blkid -o export /dev/vda1 > blkid_output_vda1 |
4936 | - blkid -o export /dev/vda2 > blkid_output_vda2 |
4937 | + blkid -o export /dev/vda | cat >blkid_output_vda |
4938 | + blkid -o export /dev/vda1 | cat >blkid_output_vda1 |
4939 | + blkid -o export /dev/vda2 | cat >blkid_output_vda2 |
4940 | dev="/dev/vdd"; f="btrfs_uuid_${dev#/dev/*}"; |
4941 | if command -v btrfs-debug-tree >/dev/null; then |
4942 | btrfs-debug-tree -r $dev | awk '/^uuid/ {print $2}' | grep "-" |
4943 | else |
4944 | btrfs inspect-internal dump-super $dev | |
4945 | awk '/^dev_item.fsid/ {print $2}' |
4946 | - fi > $f |
4947 | - cat /proc/partitions > proc_partitions |
4948 | - ls -al /dev/disk/by-uuid/ > ls_uuid |
4949 | - cat /etc/fstab > fstab |
4950 | - mkdir -p /dev/disk/by-dname |
4951 | - ls /dev/disk/by-dname/ > ls_dname |
4952 | - find /etc/network/interfaces.d > find_interfacesd |
4953 | - |
4954 | - v="" |
4955 | - out=$(apt-config shell v Acquire::HTTP::Proxy) |
4956 | - eval "$out" |
4957 | - echo "$v" > apt-proxy |
4958 | + fi | cat >$f |
4959 | """)] |
4960 | |
4961 | def _kname_to_uuid(self, kname): |
4962 | @@ -48,7 +39,7 @@ class TestBasicAbs(VMBaseClass): |
4963 | # parsing ls -al output on /dev/disk/by-uuid: |
4964 | # lrwxrwxrwx 1 root root 9 Dec 4 20:02 |
4965 | # d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg |
4966 | - ls_uuid = self.load_collect_file("ls_uuid") |
4967 | + ls_uuid = self.load_collect_file("ls_al_byuuid") |
4968 | uuid = [line.split()[8] for line in ls_uuid.split('\n') |
4969 | if ("../../" + kname) in line.split()] |
4970 | self.assertEqual(len(uuid), 1) |
4971 | @@ -57,81 +48,99 @@ class TestBasicAbs(VMBaseClass): |
4972 | self.assertEqual(len(uuid), 36) |
4973 | return uuid |
4974 | |
4975 | - def test_output_files_exist(self): |
4976 | - self.output_files_exist( |
4977 | - ["blkid_output_vda", "blkid_output_vda1", "blkid_output_vda2", |
4978 | - "btrfs_uuid_vdd", "fstab", "ls_dname", "ls_uuid", |
4979 | - "proc_partitions", |
4980 | - "root/curtin-install.log", "root/curtin-install-cfg.yaml"]) |
4981 | - |
4982 | - def test_ptable(self, disk_to_check=None): |
4983 | + def _test_ptable(self, blkid_output, expected): |
4984 | if self.target_release == "trusty": |
4985 | raise SkipTest("No PTTYPE blkid output on trusty") |
4986 | |
4987 | - blkid_info = self.get_blkid_data("blkid_output_vda") |
4988 | - self.assertEquals(blkid_info["PTTYPE"], "dos") |
4989 | + if not blkid_output: |
4990 | + raise RuntimeError('_test_ptable requires blkid output file') |
4991 | |
4992 | - def test_partition_numbers(self): |
4993 | - # vde should have partitions 1 and 10 |
4994 | - disk = "vde" |
4995 | + if not expected: |
4996 | + raise RuntimeError('_test_ptable requires expected value') |
4997 | + |
4998 | + self.output_files_exist([blkid_output]) |
4999 | + blkid_info = self.get_blkid_data(blkid_output) |
5000 | + self.assertEquals(expected, blkid_info["PTTYPE"]) |
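The `_kname_to_uuid()` helper above now reads the renamed `ls_al_byuuid` collect file, which holds `ls -al /dev/disk/by-uuid/` output, and recovers the uuid from the `uuid -> ../../vdg` symlink lines. A runnable sketch of that parsing, using illustrative sample output:

```python
# Sample `ls -al /dev/disk/by-uuid/` output; uuids/devices are made up.
LS_AL_BYUUID = """\
total 0
lrwxrwxrwx 1 root root 9 Dec 4 20:02 d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg
lrwxrwxrwx 1 root root 10 Dec 4 20:02 8c1f7c14-5d0f-4f7e-8a6c-9a3a3b2f1e11 -> ../../vda1
"""


def kname_to_uuid(ls_uuid, kname):
    # field 8 of each `ls -al` line is the uuid; the last field is the
    # symlink target, which must be exactly ../../<kname>
    uuids = [line.split()[8] for line in ls_uuid.splitlines()
             if ("../../" + kname) in line.split()]
    assert len(uuids) == 1, 'want exactly one uuid for %s' % kname
    return uuids[0]


uuid = kname_to_uuid(LS_AL_BYUUID, 'vdg')
```

Matching on the whole split field (rather than a substring) keeps `vda` from also matching `vda1`, which is the same guard the test class relies on.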
PASSED: Continuous integration, rev:59442cdefbb6fd3c325c56266fca7840593ea3b6
https://jenkins.ubuntu.com/server/job/curtin-ci/1002/
Executed test runs:
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-arm64/1002
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-ppc64el/1002
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-s390x/1002
SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=torkoal/1002
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/curtin-ci/1002/rebuild