Merge ~raharper/curtin:feature/enable-storage-vmtest-on-centos into curtin:master
- Git
- lp:~raharper/curtin
- feature/enable-storage-vmtest-on-centos
- Merge into master
Status: Merged
Approved by: Ryan Harper
Approved revision: e0e98376b2e7ff3a09f3a8b339c1d029a3274b83
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~raharper/curtin:feature/enable-storage-vmtest-on-centos
Merge into: curtin:master
Diff against target:
7050 lines (+2535/-1590) 82 files modified
curtin/__init__.py (+2/-0) curtin/block/__init__.py (+0/-72) curtin/block/deps.py (+103/-0) curtin/block/iscsi.py (+25/-9) curtin/block/lvm.py (+2/-1) curtin/block/mdadm.py (+2/-1) curtin/block/mkfs.py (+3/-2) curtin/block/zfs.py (+2/-1) curtin/commands/apply_net.py (+4/-3) curtin/commands/apt_config.py (+13/-13) curtin/commands/block_meta.py (+5/-4) curtin/commands/curthooks.py (+391/-207) curtin/commands/in_target.py (+2/-2) curtin/commands/install.py (+4/-2) curtin/commands/system_install.py (+2/-1) curtin/commands/system_upgrade.py (+3/-2) curtin/deps/__init__.py (+3/-3) curtin/distro.py (+512/-0) curtin/futil.py (+2/-1) curtin/net/__init__.py (+0/-59) curtin/net/deps.py (+72/-0) curtin/paths.py (+34/-0) curtin/util.py (+20/-318) dev/null (+0/-96) doc/topics/config.rst (+40/-0) doc/topics/curthooks.rst (+18/-2) examples/tests/filesystem_battery.yaml (+2/-2) helpers/common (+156/-35) tests/unittests/test_apt_custom_sources_list.py (+10/-8) tests/unittests/test_apt_source.py (+8/-7) tests/unittests/test_block_iscsi.py (+7/-0) tests/unittests/test_block_lvm.py (+3/-2) tests/unittests/test_block_mdadm.py (+18/-11) tests/unittests/test_block_mkfs.py (+3/-2) tests/unittests/test_block_zfs.py (+15/-9) tests/unittests/test_commands_apply_net.py (+7/-7) tests/unittests/test_commands_block_meta.py (+4/-3) tests/unittests/test_curthooks.py (+103/-78) tests/unittests/test_distro.py (+302/-0) tests/unittests/test_feature.py (+3/-0) tests/unittests/test_pack.py (+2/-0) tests/unittests/test_util.py (+19/-122) tests/vmtests/__init__.py (+80/-13) tests/vmtests/helpers.py (+28/-1) tests/vmtests/image_sync.py (+3/-1) tests/vmtests/releases.py (+2/-2) tests/vmtests/report_webhook_logger.py (+11/-6) tests/vmtests/test_apt_config_cmd.py (+2/-4) tests/vmtests/test_apt_source.py (+2/-4) tests/vmtests/test_basic.py (+126/-152) tests/vmtests/test_bcache_basic.py (+3/-6) tests/vmtests/test_fs_battery.py (+25/-11) tests/vmtests/test_install_umount.py (+1/-18) tests/vmtests/test_iscsi.py 
(+10/-6) tests/vmtests/test_journald_reporter.py (+2/-5) tests/vmtests/test_lvm.py (+7/-8) tests/vmtests/test_lvm_iscsi.py (+9/-4) tests/vmtests/test_lvm_root.py (+40/-9) tests/vmtests/test_mdadm_bcache.py (+41/-18) tests/vmtests/test_mdadm_iscsi.py (+9/-3) tests/vmtests/test_multipath.py (+8/-16) tests/vmtests/test_network.py (+4/-19) tests/vmtests/test_network_alias.py (+3/-3) tests/vmtests/test_network_bonding.py (+3/-3) tests/vmtests/test_network_bridging.py (+4/-4) tests/vmtests/test_network_ipv6.py (+4/-4) tests/vmtests/test_network_ipv6_static.py (+2/-2) tests/vmtests/test_network_ipv6_vlan.py (+2/-2) tests/vmtests/test_network_mtu.py (+5/-4) tests/vmtests/test_network_static.py (+2/-11) tests/vmtests/test_network_static_routes.py (+2/-2) tests/vmtests/test_network_vlan.py (+3/-11) tests/vmtests/test_nvme.py (+29/-56) tests/vmtests/test_old_apt_features.py (+2/-4) tests/vmtests/test_pollinate_useragent.py (+2/-2) tests/vmtests/test_raid5_bcache.py (+6/-11) tests/vmtests/test_simple.py (+5/-18) tests/vmtests/test_ubuntu_core.py (+3/-8) tests/vmtests/test_uefi_basic.py (+27/-28) tests/vmtests/test_zfsroot.py (+5/-21) tools/jenkins-runner (+30/-5) tools/vmtest-filter (+57/-0) |
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
Lee Trager (community) | | | Approve
Scott Moser (community) | | | Approve
Chad Smith | | | Pending
Review via email: mp+349075@code.launchpad.net
Commit message
Enable custom storage configuration for centos images.
Add support for the majority of storage configurations including
partitioning, lvm, raid, iscsi and combinations of these. Some
storage configs are unsupported at this time.
Unsupported storage config options on Centos:
- bcache (no kernel support)
- zfs (no kernel support)
- jfs, ntfs, reiserfs (no kernel, userspace support)
Curtin's built-in curthooks now support Centos in addition
to Ubuntu. The built-in curthooks are now callable by
in-image curthooks. This feature is announced by the
presence of the feature flag 'CENTOS_CURTHOOK_SUPPORT'.
Other notable features added:
- tools/jenkins-runner gains a test filtering
ability which enables generating the list of tests to
run by specifying attributes of the classes. For example,
to run all centos70 tests, append a --filter expression
matching the centos70 class attributes.
- curtin/distro.py includes distro specific methods, such as
package install and distro version detection
- util.target_path has now moved to curtin.paths module
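The attribute-based test filtering described above can be sketched as follows; the class names and attributes here are hypothetical stand-ins, not curtin's real vmtest classes:

```python
def matches(cls, filters):
    """Return True if cls carries every filtered attribute value."""
    return all(str(getattr(cls, key, None)) == value
               for key, value in filters.items())


def filter_tests(classes, filters):
    """Keep only test classes whose attributes match every filter."""
    return [cls for cls in classes if matches(cls, filters)]


# Illustrative test classes standing in for vmtest test cases.
class Centos70TestBasic:
    distro = 'centos70'
    arch = 'amd64'


class XenialTestBasic:
    distro = 'xenial'
    arch = 'amd64'
```

With these stand-ins, `filter_tests([Centos70TestBasic, XenialTestBasic], {'distro': 'centos70'})` selects only the centos70 class.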
Description of the change
Server Team CI bot (server-team-bot) wrote : | # |
Scott Moser (smoser) wrote : | # |
I didn't get through the whole thing yet. Only as far as my comments stop.
Will review more later.
Feels like we need a 'distro' module.
def distro.
"""Find the distro in target. return just distro name for now."""
also there would be
CENTOS='centos'
DEBIAN='debian'.
then you can avoid copying 'debian' string everywhere.
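Scott's suggestion can be sketched as a pair of module constants plus a minimal `get_distro()` that reads the ID= field from os-release content. This is an illustrative sketch; the merged curtin/distro.py uses an enumerated DISTROS collection rather than bare constants:

```python
# Constants in the spirit of the suggestion, so callers never copy
# raw distro-name strings around.
CENTOS = 'centos'
DEBIAN = 'debian'


def get_distro(os_release_text):
    """Find the distro in os-release content; return just the distro
    name for now (e.g. 'centos', 'ubuntu')."""
    for line in os_release_text.splitlines():
        if line.startswith('ID='):
            # strip surrounding quotes: ID="centos" -> centos
            return line.split('=', 1)[1].strip().strip('"')
    return None
```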
Ryan Harper (raharper) wrote : | # |
Yes, I generally wanted some sort of distro value cache. Which is why in curthooks I end up grabbing it first and passing it around.
We could avoid passing, at the cost of a function call. Alternatively, we could import it into the curthooks module and call a "setter" to find the right value, and it would be a global to the module.
Thoughts?
Chad Smith (chad.smith) wrote : | # |
only a brief glance at the content, will look more tomorrow.
- 99517f4... by Ryan Harper
  Simplify set construction for get_iscsi_ports_from_config
- 0ca7ee3... by Ryan Harper
  Restore param order to copy_iscsi_conf
- cca12fc... by Ryan Harper
  Fix whitespace damage, update comment to have LP: #
Ryan Harper (raharper) wrote : | # |
Thanks for the comments so far. Pulling in some suggested changes. Some responses in-line.
Scott Moser (smoser) wrote : | # |
I got through the rest of it.
I like the test filter functionality.
comments inline.
- 1d890ad... by Ryan Harper
  Drop if not target check, use target_path instead
- 3719756... by Ryan Harper
  setup_grub, helpers/common: pass os-family to install_grub, fix shell nits
  - Add --os-family to install_grub cli, have setup_grub() pass in the flag
  - Address shell comments
  - Fix unittests to work with --os-family
- 9555181... by Ryan Harper
  Fix use of cls.target_distro, it always has a value now, drop test_type=core
- 2a5e3d2... by Ryan Harper
  Use instead of ; use error/fail
Ryan Harper (raharper) wrote : | # |
Pulling in most of the comments. Replied to a few questions inline. I'll reverify that we're still passing on centos tests and then push the fixes here for a second round.
Thanks for the review!
- dac8fe0... by Ryan Harper
  Flake8 fixes for vmtests-filter
Chad Smith (chad.smith) : | # |
- 290b898... by Ryan Harper
  helpers/common: install_grub: Fix getopt, os-family takes a parameter
- 4bf91d8... by Ryan Harper
  Drop default_collect_scripts class attr, it's not needed
- ac47a91... by Ryan Harper
  helpers/install_grub: fix i386 grub_name/grub_target; catch silent missing package exit
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:d76fb0f9123
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- bfe3021... by Ryan Harper
  Refactor distro/osfamily into enumerated class
  Introduce curtin/distro.py which provides distro variant and
  osfamily mapping methods. Inside we enumerate all of the known
  distro names, build a distro family to variant mapping and provide
  a reverse mapping for translating from one to the other. With this
  in place, add singleton-based methods to utils: get_target_distroinfo,
  which queries /etc/os-release inside the target path, extracts the
  ID= value, and looks that up in the list of distros and osfamilies,
  creating a named tuple that is globally cached. Added accessor
  methods for getting the variant or osfamily, and then used these to
  update curthooks to query once and compare the value found versus
  the enumerated distro objects. Where target is available, methods
  will now use get_target_osfamily(target=target) to obtain a value
  if one is not provided. In some methods that are distro specific
  we default the osfamily to the correct value.
- 5cbf688... by Ryan Harper
  Drop use of singleton, in use for ephemeral and target; move DistroInfo to distro.py
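The variant-to-osfamily mapping described above can be sketched with a named tuple and a reverse lookup. This is a hedged sketch: the family table below is a small illustrative subset, not the full enumeration in curtin/distro.py:

```python
from collections import namedtuple

# Illustrative subset of the osfamily -> variants mapping; the real
# curtin/distro.py enumerates many more distro names.
OS_FAMILIES = {
    'debian': ['debian', 'ubuntu'],
    'redhat': ['centos', 'fedora', 'redhat', 'rhel'],
}

DistroInfo = namedtuple('DistroInfo', ('variant', 'family'))


def distro_info(variant):
    """Reverse-map a distro variant (the os-release ID= value) to a
    DistroInfo carrying both the variant and its osfamily."""
    for family, variants in OS_FAMILIES.items():
        if variant in variants:
            return DistroInfo(variant, family)
    raise ValueError('Unknown distro variant: %s' % variant)
```

Callers can then compare `distro_info(variant).family` against the enumerated families instead of copying distro-name strings around.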
Ryan Harper (raharper) wrote : | # |
OK, I've given the curtin/distro.py a go. I think it works quite nicely. I'm happy to bikeshed on the attributes (distro vs variant vs osfamily, etc).
That's easy enough to switch around.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:e39be2278d5
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Ryan Harper (raharper) wrote : | # |
team review comments inline
- 90451c1... by Ryan Harper
  Drop _target from get_distro, get_osfamily helpers
- 3d329af... by Ryan Harper
  Refactor iscsi.get_iscsi_disks_from_config for modular use
  - Introduce get_iscsi_volumes_from_config which returns a list of
    iscsi RFC uris which can be used to construct IscsiDiskObjects.
  - Refactor get_iscsi_disks_from_config to use get_iscsi_volumes_from_config
  - Add docstrings to all get_iscsi_* methods
  - Migrate block, net detect_required_packages_mapping into each module's
    deps.py to avoid a dependency loop between the import of curtin.block
    and curtin.commands.block_meta
  - Fix up curthooks to import the block and net deps modules
- 78e38d2... by Ryan Harper
  Refactor osfamily parameter, default to DISTROS.debian
  Drop any "if osfamily is None" checks since we now default to
  DISTROS.debian for osfamily. Add some checks if osfamily is
  not the expected value and raise ValueErrors.
- 926fd23... by Ryan Harper
  Move targets_node_dir into function signature with default value
- 8411be2... by Scott Moser
  Refactor util, distro and add paths.py
  Rearrange package/distro related functions out of util.py into
  distro.py. Move target_path into paths.py. Adjust callers
  where necessary.
- aa2c622... by Ryan Harper
  Drop iscsi initiator name hack, not needed
- 652b1c7... by Ryan Harper
  Fix typo in initramfs string, make pollinate generic, check for binary in target
- 7c2e84a... by Ryan Harper
  Add unittest for pollinate missing, drop yum_install
  Add unittest for when the pollinate binary is missing.
  Drop distro.yum_install, folding settings and retries into run_apt_command
  and run_yum_command.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:a5fdb635f04
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
MP to get rid of users of util.target_path
http://
some smaller things in line.
- 9276416... by Scott Moser
  remove util.target_path users.
- 494d1f3... by Ryan Harper
  Replace launchpad link with LP: #NNNN
- d90b938... by Ryan Harper
  Drop apt,yum retries for all commands, handle yum install in two parts
- 05ff544... by Ryan Harper
  distro: add unittest and ensure osfamily variant is part of itself
- a8d08f6... by Ryan Harper
  helpers/common: map variant to os_family and update os_family switch statements
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:47bcf8fe3c1
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 09a05cd... by Ryan Harper
  curthooks: refactor builtin curthooks into a callable method
- 0b23cbd... by Ryan Harper
  iscsi_get_volumes_from_config: handle curtin config and storage config
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3972610e2e4
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
Ran a centos70 run on diglett:
% rm -rf ./output/; CURTIN_
...
-------
Ran 372 tests in 1898.647s
OK (SKIP=112)
Tue, 07 Aug 2018 16:05:46 -0500: vmtest end [0] in 1901s
The set of tests that run are:
% ./tools/
2018-08-07 16:20:08,785 - tests.vmtests - INFO - Logfile: /tmp/vmtest-
tests/vmtests/
- 35b44dd... by Ryan Harper
  doc: update curthooks docs
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:b4e8d6897e3
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 00a0658... by Ryan Harper
  Pass in the real curtin config to builtin_curthooks
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:dd8581ba936
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Ryan Harper (raharper) : | # |
- 068ce08... by Ryan Harper
  grub: require both os variant and family; pass variant to grub-install, it's fickle.
- 78d89b5... by Ryan Harper
  Allow yum install, update, upgrade to use the two-step download,install method
- 979d53b... by Ryan Harper
  Drop target is None checks in distro.py
- adda1ab... by Ryan Harper
  Add comments on use of ChrootableTarget for rpm/yum operations
Chad Smith (chad.smith) wrote : | # |
Thanks Ryan!
Couple nits inline plus a significant question about detect_
I added a pastebin to add --features argument to CLI, which I can do in a separate branch if you think it is a good idea.
Chad Smith (chad.smith) : | # |
Ryan Harper (raharper) wrote : | # |
Thanks for the review. I've replied inline.
Chad Smith (chad.smith) : | # |
- 3062605... by Ryan Harper
  block.deps: Add iscsi mapping to open-iscsi for debian family
Scott Moser (smoser) wrote : | # |
2 questions
a.)
'yum update' versus 'yum upgrade'
this feels like we want 'upgrade' as it is more similar to 'dist-upgrade' which is what we do in apt.
b.) I think really we still want the 2 phase for upgrade.
retry this: yum --downloadonly --setopt=
then run this: yum upgrade --cacheonly --downloadonly --setopt=
It looks like we can mostly re-use the existing 'yum_install' but just manage to set ['install'] to be ['upgrade']
We are getting there...
I know that this 'upgrade' path isn't a huge thing, but if we have it there i'd like for it to work reliably.
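The two-phase flow Scott describes can be sketched as building two command lines: a network-bound download-only step (the part worth retrying) and a cache-only apply step. This is a hedged sketch; flag spellings mirror the quoted commands, and the elided --setopt values are omitted:

```python
def yum_commands(mode, packages):
    """Return (download_cmd, apply_cmd) for a two-phase yum run:
    first download with network access (safe to retry), then apply
    from the local cache without touching the network again."""
    if mode not in ('install', 'upgrade'):
        raise ValueError('mode must be install or upgrade')
    base = ['yum', '--assumeyes', '--quiet', mode]
    # phase 1: fetch packages only, retryable on network failure
    download_cmd = base + ['--downloadonly'] + list(packages)
    # phase 2: install/upgrade strictly from the already-fetched cache
    apply_cmd = base + ['--cacheonly'] + list(packages)
    return download_cmd, apply_cmd
```

This also shows why reusing the existing install path for upgrade is cheap: only the mode token changes between the two command lists.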
- 4d3550b... by Ryan Harper
  Reformat exception to not dangle text on the new line
Scott Moser (smoser) wrote : | # |
I think i'm pretty much fine with this at this point.
mega-branch, but we can take and address any issues one by one.
Assuming the following are happy, I approve:
a.) rharper
b.) vmtest
c.) c-i bot
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:9d2c7fda267
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
On Thu, Aug 9, 2018 at 2:30 PM Scott Moser <email address hidden> wrote:
>
> Review: Approve
>
> I think i'm pretty much fine with this at this point.
> mega-branch, but we can take and address any issues one by one.
>
> Assuming the following are happy, I approve:
> a.) rharper
+1
> b.) vmtest
I'll kick off a full run on diglett; this allows me to "hack" in an
updated curtin-hooks.py for the centos images
However, we shouldn't land this until we get the MAAS image branch
approved and landed.
> c.) c-i bot
>
> --
> https:/
> You are the owner of ~raharper/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:9847b57cb9e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 5f9e785... by Ryan Harper
  Add support for redhat distros without /etc/os-release; fix centos6 grub install
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3f1f7265284
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 76bc6dd... by Ryan Harper
  curthooks: don't update initramfs unless we have storage config
  The dracut config wasn't updated, but we still proceeded to regenerate,
  wasting time when it wasn't needed. Move rpm_command into distro.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:10686127093
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- e172702... by Ryan Harper
  Drop extra case ;; and fix UEFI installs
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:956067e8289
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 59b9b94... by Ryan Harper
  Drop centos_basic vmtest, handled in test_basic and test_network now
Ryan Harper (raharper) wrote : | # |
This passed on diglett full vmtest run (with the new maas-image curtin-hooks injected into centos7 images).
% rm -rf output/; CURTIN_
Querying synced ephemeral images/kernels in /srv/images
=======
Release Codename ImageDate Arch /SubArch Path
-------
12.04 precise 20170424 amd64/hwe-t precise/
12.04 precise 20170424 amd64/hwe-t precise/
12.04 precise 20170424.1 amd64/hwe-p precise/
14.04 trusty 20180806 amd64/hwe-t trusty/
14.04 trusty 20180806 amd64/hwe-x trusty/
14.04 trusty 20180806 i386 /hwe-t trusty/
14.04 trusty 20180806 i386 /hwe-x trusty/
16.04 xenial 20180814 amd64/ga-16.04 xenial/
16.04 xenial 20180814 amd64/hwe-16.04 xenial/
16.04 xenial 20180814 amd64/hwe-
16.04 xenial 20180814 i386 /ga-16.04 xenial/
16.04 xenial 20180814 i386 /hwe-16.04 xenial/
16.04 xenial 20180814 i386 /hwe-16.04-edge xenial/
17.04 zesty 20171219 amd64/ga-17.04 zesty/amd64/
17.10 artful 20180718 amd64/ga-17.10 artful/
17.10 artful 20180718 i386 /ga-17.10 artful/
18.04 bionic 20180814 amd64/ga-18.04 bionic/
18.04 bionic 20180814 i386 /ga-18.04 bionic/
18.10 cosmic 20180813 amd64/ga-18.10 cosmic/
18.10 cosmic 20180813 i386 /ga-18.10 cosmic/
-------
6.6 centos66 20180501_01 amd64/generic centos66/
7.0 centos70 20180501_01 amd64/generic centos70/
=======
Wed, 15 Aug 2018 14:58:02 -0500: vmtest start: nosetests3 --process-
...
-------
Ran 3336 tests in 23577.156s
Wed, 15 Aug 2018 21:31:00 -0500: vmtest end [0] in 23580s
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
ABORTED: https:/
ABORTED: https:/
ABORTED: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:961058e0638
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Lee Trager (ltrager) wrote : | # |
I have been testing this branch using the MAAS CI. Nodes in the MAAS CI have no direct access to the Internet. This is causing UEFI CentOS 7 installs to fail when running
yum --assumeyes --quiet install --downloadonly --setopt=
I made sure the image I built has grub2-efi-x64 [1]. While I think it's a good feature that Curtin will automatically install missing dependencies, if those dependencies are already on the system Curtin should not try to access the Internet.
I would suggest querying RPM directly to see if a package is available before trying to use yum:
[root@autopkgtest /]# rpm -q grub2-efi-x64
grub2-efi-
[root@autopkgtest /]# echo $?
0
[root@autopkgtest /]# rpm -q missing-package
package missing-package is not installed
[root@autopkgtest /]# echo $?
1
[1] https:/
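Lee's suggestion can be sketched as an rpm presence check that relies only on the `rpm -q` exit codes shown above (0 = installed, non-zero = not installed), falling back to yum only when the check fails. The `chroot` handling is an illustrative assumption, not curtin's actual ChrootableTarget plumbing:

```python
import subprocess


def rpm_package_installed(package, chroot=None):
    """Return True if `rpm -q <package>` reports the package as
    installed (exit status 0), per the transcript above."""
    cmd = ['rpm', '-q', package]
    if chroot:
        # hypothetical: run the query against the target root
        cmd = ['chroot', chroot] + cmd
    try:
        return subprocess.call(
            cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
    except FileNotFoundError:
        # no rpm binary on this host; treat as not installed
        return False
```

Only if this returns False for a needed package would the installer go on to a network-bound yum install.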
- 4e1ad40... by Ryan Harper
  Move grub package install to install_missing_packages
- dc92fe4... by Ryan Harper
  Make distro.has_pkg_available multi-distro
- 2545c5f... by Ryan Harper
  Build list of uefi packages and then update needed set checking if installed
- e1b9d38... by Ryan Harper
  vmtests: Add environ variable IMAGE_SRC_KEYRING to specify gpg key path for testing unofficial images
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:1e1c7aa8d61
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 479275f... by Ryan Harper
  Fix package name: grub2-efi-modules
- 18ea647... by Ryan Harper
  Fix package name once more, grub2-efi-x64-modules
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:76e4baa3a7e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Lee Trager (ltrager) wrote : | # |
Latest changes allow CentOS to deploy with custom storage in the MAAS CI! Approving, as custom storage works, but we still need to solve LP: #1788088.
- a66566a... by Ryan Harper
  centos: UEFI only depends on grub2-efi-x64-modules
- 07972da... by Ryan Harper
  helpers/common: make efibootmgr dump verbosely
- f9c5916... by Ryan Harper
  vmtest: collect /boot contents; collect efibootmgr output on UEFI
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3c1fa5feaef
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
After some back and forth about which grub2 packages are needed, we pulled out shim and grub2-efi-x64 for now and pushed secure boot to a separate feature.
A full vmtest run with this branch against current published images has passed. I've also run all centos7 tests against the proposed images from ltrager and that has passed as well.
- d1e92f6... by Ryan Harper
  Allow os_variant=rhel in grub install
  When RHEL is installed, the os_variant value is 'rhel'. Allow this
  value to match the centos|redhat case statement for grub install.
  LP: #1790756
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:f764e28d234
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- a04727e... by Ryan Harper
  builtin-hooks call handle_cloudconfig on centos to config maas datasource
  In-image curthooks in centos images called curthooks.handle_cloudconfig.
  We need to do the same in the built-in hooks if we're on centos.
  LP: #1791140
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:e5b7b578e56
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 7198fbc... by Ryan Harper
  jenkins-runner: restore missing -p|--parallel cli case statement
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:57db65feaa7
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- 0f426eb... by Ryan Harper
  jenkins-runner: better quoting and add --filter foo=bar support
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:1545caaa232
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
Overall I'm happy with this at this point.
If Ryan is happy and c-i is happy then I'm good.
I think we have to rebase though; there are several '<<<<' conflict markers.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:0f426eb681e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
- fafa454... by Ryan Harper
  Only install multipath packages if needed
- 211e2ad... by Ryan Harper
  jenkins-runner: always append tests to nosetest
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:211e2ad86f7
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
- 11ad4ba... by Ryan Harper
  jenkins-runner: handle nosetest args passed with filters and test paths
- e0e9837... by Ryan Harper
  Drop debug and fix empty check to size of array
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:e0e98376b2e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:e0e98376b2e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
FAILURE: https:/
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:e0e98376b2e
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
We've updated the centos images to have the required packages for offline install. The Jenkins node didn't have access to the yum repos, so the vmtests were failing for multipath and iscsi. With that resolved, I've re-run those tests and have positive results from here:
https:/
Ran 40 tests in 309.102s
OK (SKIP=12)
Fri, 21 Sep 2018 06:40:34 +0000: vmtest end [0] in 316s
Preview Diff
1 | diff --git a/curtin/__init__.py b/curtin/__init__.py | |||
2 | index 002454b..ee35ca3 100644 | |||
3 | --- a/curtin/__init__.py | |||
4 | +++ b/curtin/__init__.py | |||
5 | @@ -10,6 +10,8 @@ KERNEL_CMDLINE_COPY_TO_INSTALL_SEP = "---" | |||
6 | 10 | FEATURES = [ | 10 | FEATURES = [ |
7 | 11 | # curtin can apply centos networking via centos_apply_network_config | 11 | # curtin can apply centos networking via centos_apply_network_config |
8 | 12 | 'CENTOS_APPLY_NETWORK_CONFIG', | 12 | 'CENTOS_APPLY_NETWORK_CONFIG', |
9 | 13 | # curtin can configure centos storage devices and boot devices | ||
10 | 14 | 'CENTOS_CURTHOOK_SUPPORT', | ||
11 | 13 | # install supports the 'network' config version 1 | 15 | # install supports the 'network' config version 1 |
12 | 14 | 'NETWORK_CONFIG_V1', | 16 | 'NETWORK_CONFIG_V1', |
13 | 15 | # reporter supports 'webhook' type | 17 | # reporter supports 'webhook' type |
14 | diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py | |||
15 | index b771629..490c268 100644 | |||
16 | --- a/curtin/block/__init__.py | |||
17 | +++ b/curtin/block/__init__.py | |||
18 | @@ -1003,78 +1003,6 @@ def wipe_volume(path, mode="superblock", exclusive=True): | |||
19 | 1003 | raise ValueError("wipe mode %s not supported" % mode) | 1003 | raise ValueError("wipe mode %s not supported" % mode) |
20 | 1004 | 1004 | ||
21 | 1005 | 1005 | ||
22 | 1006 | def storage_config_required_packages(storage_config, mapping): | ||
23 | 1007 | """Read storage configuration dictionary and determine | ||
24 | 1008 | which packages are required for the supplied configuration | ||
25 | 1009 | to function. Return a list of packaged to install. | ||
26 | 1010 | """ | ||
27 | 1011 | |||
28 | 1012 | if not storage_config or not isinstance(storage_config, dict): | ||
29 | 1013 | raise ValueError('Invalid storage configuration. ' | ||
30 | 1014 | 'Must be a dict:\n %s' % storage_config) | ||
31 | 1015 | |||
32 | 1016 | if not mapping or not isinstance(mapping, dict): | ||
33 | 1017 | raise ValueError('Invalid storage mapping. Must be a dict') | ||
34 | 1018 | |||
35 | 1019 | if 'storage' in storage_config: | ||
36 | 1020 | storage_config = storage_config.get('storage') | ||
37 | 1021 | |||
38 | 1022 | needed_packages = [] | ||
39 | 1023 | |||
40 | 1024 | # get reqs by device operation type | ||
41 | 1025 | dev_configs = set(operation['type'] | ||
42 | 1026 | for operation in storage_config['config']) | ||
43 | 1027 | |||
44 | 1028 | for dev_type in dev_configs: | ||
45 | 1029 | if dev_type in mapping: | ||
46 | 1030 | needed_packages.extend(mapping[dev_type]) | ||
47 | 1031 | |||
48 | 1032 | # for any format operations, check the fstype and | ||
49 | 1033 | # determine if we need any mkfs tools as well. | ||
50 | 1034 | format_configs = set([operation['fstype'] | ||
51 | 1035 | for operation in storage_config['config'] | ||
52 | 1036 | if operation['type'] == 'format']) | ||
53 | 1037 | for format_type in format_configs: | ||
54 | 1038 | if format_type in mapping: | ||
55 | 1039 | needed_packages.extend(mapping[format_type]) | ||
56 | 1040 | |||
57 | 1041 | return needed_packages | ||
58 | 1042 | |||
59 | 1043 | |||
60 | 1044 | def detect_required_packages_mapping(): | ||
61 | 1045 | """Return a dictionary providing a versioned configuration which maps | ||
62 | 1046 | storage configuration elements to the packages which are required | ||
63 | 1047 | for functionality. | ||
64 | 1048 | |||
65 | 1049 | The mapping key is either a config type value, or an fstype value. | ||
66 | 1050 | |||
67 | 1051 | """ | ||
68 | 1052 | version = 1 | ||
69 | 1053 | mapping = { | ||
70 | 1054 | version: { | ||
71 | 1055 | 'handler': storage_config_required_packages, | ||
72 | 1056 | 'mapping': { | ||
73 | 1057 | 'bcache': ['bcache-tools'], | ||
74 | 1058 | 'btrfs': ['btrfs-tools'], | ||
75 | 1059 | 'ext2': ['e2fsprogs'], | ||
76 | 1060 | 'ext3': ['e2fsprogs'], | ||
77 | 1061 | 'ext4': ['e2fsprogs'], | ||
78 | 1062 | 'jfs': ['jfsutils'], | ||
79 | 1063 | 'lvm_partition': ['lvm2'], | ||
80 | 1064 | 'lvm_volgroup': ['lvm2'], | ||
81 | 1065 | 'ntfs': ['ntfs-3g'], | ||
82 | 1066 | 'raid': ['mdadm'], | ||
83 | 1067 | 'reiserfs': ['reiserfsprogs'], | ||
84 | 1068 | 'xfs': ['xfsprogs'], | ||
85 | 1069 | 'zfsroot': ['zfsutils-linux', 'zfs-initramfs'], | ||
86 | 1070 | 'zfs': ['zfsutils-linux', 'zfs-initramfs'], | ||
87 | 1071 | 'zpool': ['zfsutils-linux', 'zfs-initramfs'], | ||
88 | 1072 | }, | ||
89 | 1073 | }, | ||
90 | 1074 | } | ||
91 | 1075 | return mapping | ||
92 | 1076 | |||
93 | 1077 | |||
94 | 1078 | def get_supported_filesystems(): | 1006 | def get_supported_filesystems(): |
95 | 1079 | """ Return a list of filesystems that the kernel currently supports | 1007 | """ Return a list of filesystems that the kernel currently supports |
96 | 1080 | as read from /proc/filesystems. | 1008 | as read from /proc/filesystems. |
diff --git a/curtin/block/deps.py b/curtin/block/deps.py
new file mode 100644
index 0000000..930f764
--- /dev/null
+++ b/curtin/block/deps.py
@@ -0,0 +1,103 @@
+# This file is part of curtin. See LICENSE file for copyright and license info.
+
+from curtin.distro import DISTROS
+from curtin.block import iscsi
+
+
+def storage_config_required_packages(storage_config, mapping):
+    """Read storage configuration dictionary and determine
+       which packages are required for the supplied configuration
+       to function. Return a list of packages to install.
+    """
+
+    if not storage_config or not isinstance(storage_config, dict):
+        raise ValueError('Invalid storage configuration. '
+                         'Must be a dict:\n %s' % storage_config)
+
+    if not mapping or not isinstance(mapping, dict):
+        raise ValueError('Invalid storage mapping. Must be a dict')
+
+    if 'storage' in storage_config:
+        storage_config = storage_config.get('storage')
+
+    needed_packages = []
+
+    # get reqs by device operation type
+    dev_configs = set(operation['type']
+                      for operation in storage_config['config'])
+
+    for dev_type in dev_configs:
+        if dev_type in mapping:
+            needed_packages.extend(mapping[dev_type])
+
+    # for disks with path: iscsi: we need iscsi tools
+    iscsi_vols = iscsi.get_iscsi_volumes_from_config(storage_config)
+    if len(iscsi_vols) > 0:
+        needed_packages.extend(mapping['iscsi'])
+
+    # for any format operations, check the fstype and
+    # determine if we need any mkfs tools as well.
+    format_configs = set([operation['fstype']
+                          for operation in storage_config['config']
+                          if operation['type'] == 'format'])
+    for format_type in format_configs:
+        if format_type in mapping:
+            needed_packages.extend(mapping[format_type])
+
+    return needed_packages
+
+
+def detect_required_packages_mapping(osfamily=DISTROS.debian):
+    """Return a dictionary providing a versioned configuration which maps
+       storage configuration elements to the packages which are required
+       for functionality.
+
+       The mapping key is either a config type value, or an fstype value.
+
+    """
+    distro_mapping = {
+        DISTROS.debian: {
+            'bcache': ['bcache-tools'],
+            'btrfs': ['btrfs-tools'],
+            'ext2': ['e2fsprogs'],
+            'ext3': ['e2fsprogs'],
+            'ext4': ['e2fsprogs'],
+            'jfs': ['jfsutils'],
+            'iscsi': ['open-iscsi'],
+            'lvm_partition': ['lvm2'],
+            'lvm_volgroup': ['lvm2'],
+            'ntfs': ['ntfs-3g'],
+            'raid': ['mdadm'],
+            'reiserfs': ['reiserfsprogs'],
+            'xfs': ['xfsprogs'],
+            'zfsroot': ['zfsutils-linux', 'zfs-initramfs'],
+            'zfs': ['zfsutils-linux', 'zfs-initramfs'],
+            'zpool': ['zfsutils-linux', 'zfs-initramfs'],
+        },
+        DISTROS.redhat: {
+            'bcache': [],
+            'btrfs': ['btrfs-progs'],
+            'ext2': ['e2fsprogs'],
+            'ext3': ['e2fsprogs'],
+            'ext4': ['e2fsprogs'],
+            'jfs': [],
+            'iscsi': ['iscsi-initiator-utils'],
+            'lvm_partition': ['lvm2'],
+            'lvm_volgroup': ['lvm2'],
+            'ntfs': [],
+            'raid': ['mdadm'],
+            'reiserfs': [],
+            'xfs': ['xfsprogs'],
+            'zfsroot': [],
+            'zfs': [],
+            'zpool': [],
+        },
+    }
+    if osfamily not in distro_mapping:
+        raise ValueError('No block package mapping for distro: %s' % osfamily)
+
+    return {1: {'handler': storage_config_required_packages,
+                'mapping': distro_mapping.get(osfamily)}}
+
+
+# vi: ts=4 expandtab syntax=python
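The new `curtin/block/deps.py` keys its package requirements on a versioned mapping: device operation types and `format` fstypes are both looked up in the per-osfamily dict. The consumption pattern can be sketched in a few lines. This is a standalone re-implementation for illustration only (the real entry points are `detect_required_packages_mapping()` and `storage_config_required_packages()` above); the mapping here is trimmed to two entries, and the result is deduplicated and sorted for readability, which the real function does not do.

```python
def required_packages(storage_config, mapping):
    """Return the packages needed to realize the given storage config."""
    ops = storage_config['storage']['config']
    needed = []
    # device-level operation types (raid, lvm_volgroup, bcache, ...)
    for dev_type in set(op['type'] for op in ops):
        needed.extend(mapping.get(dev_type, []))
    # format operations additionally pull in mkfs tooling by fstype
    for fstype in set(op['fstype'] for op in ops if op['type'] == 'format'):
        needed.extend(mapping.get(fstype, []))
    return sorted(set(needed))

# trimmed debian-family mapping, illustrative only
debian_mapping = {'raid': ['mdadm'], 'ext4': ['e2fsprogs']}
cfg = {'storage': {'config': [
    {'type': 'raid', 'id': 'md0'},
    {'type': 'format', 'fstype': 'ext4', 'id': 'fs0'},
]}}
print(required_packages(cfg, debian_mapping))  # ['e2fsprogs', 'mdadm']
```

The versioning layer (`{1: {'handler': ..., 'mapping': ...}}`) exists so curthooks can pick a handler compatible with the config version before doing the lookup above.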
diff --git a/curtin/block/iscsi.py b/curtin/block/iscsi.py
index 0c666b6..3c46500 100644
--- a/curtin/block/iscsi.py
+++ b/curtin/block/iscsi.py
@@ -9,7 +9,7 @@ import os
 import re
 import shutil
 
-from curtin import (util, udev)
+from curtin import (paths, util, udev)
 from curtin.block import (get_device_slave_knames,
                           path_to_kname)
 
@@ -230,29 +230,45 @@ def connected_disks():
     return _ISCSI_DISKS
 
 
-def get_iscsi_disks_from_config(cfg):
+def get_iscsi_volumes_from_config(cfg):
     """Parse a curtin storage config and return a list
-       of iscsi disk objects for each configuration present
+       of iscsi disk rfc4173 uris for each configuration present.
     """
     if not cfg:
         cfg = {}
 
-    sconfig = cfg.get('storage', {}).get('config', {})
-    if not sconfig:
+    if 'storage' in cfg:
+        sconfig = cfg.get('storage', {}).get('config', [])
+    else:
+        sconfig = cfg.get('config', [])
+    if not sconfig or not isinstance(sconfig, list):
         LOG.warning('Configuration dictionary did not contain'
                     ' a storage configuration')
         return []
 
+    return [disk['path'] for disk in sconfig
+            if disk['type'] == 'disk' and
+            disk.get('path', "").startswith('iscsi:')]
+
+
+def get_iscsi_disks_from_config(cfg):
+    """Return a list of IscsiDisk objects for each iscsi volume present."""
     # Construct IscsiDisk objects for each iscsi volume present
-    iscsi_disks = [IscsiDisk(disk['path']) for disk in sconfig
-                   if disk['type'] == 'disk' and
-                   disk.get('path', "").startswith('iscsi:')]
+    iscsi_disks = [IscsiDisk(volume) for volume in
+                   get_iscsi_volumes_from_config(cfg)]
     LOG.debug('Found %s iscsi disks in storage config', len(iscsi_disks))
     return iscsi_disks
 
 
+def get_iscsi_ports_from_config(cfg):
+    """Return a set of ports that may be used when connecting to volumes."""
+    ports = set([d.port for d in get_iscsi_disks_from_config(cfg)])
+    LOG.debug('Found iscsi ports in use: %s', ports)
+    return ports
+
+
 def disconnect_target_disks(target_root_path=None):
-    target_nodes_path = util.target_path(target_root_path, '/etc/iscsi/nodes')
+    target_nodes_path = paths.target_path(target_root_path, '/etc/iscsi/nodes')
     fails = []
     if os.path.isdir(target_nodes_path):
         for target in os.listdir(target_nodes_path):
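The refactored `get_iscsi_volumes_from_config` now accepts either a full curtin config (with a top-level `storage` key) or a bare storage config, and returns the rfc4173 `iscsi:` URIs rather than constructed `IscsiDisk` objects. A standalone sketch of that filtering logic (illustrative only; the sample URI and config ids below are made up, and the real function also logs a warning when no storage config is found):

```python
def iscsi_volumes(cfg):
    """Return rfc4173 iscsi URIs for 'disk' entries whose path is iscsi:."""
    if not cfg:
        cfg = {}
    # accept a full curtin config or a bare storage config
    if 'storage' in cfg:
        sconfig = cfg.get('storage', {}).get('config', [])
    else:
        sconfig = cfg.get('config', [])
    if not sconfig or not isinstance(sconfig, list):
        return []
    return [d['path'] for d in sconfig
            if d.get('type') == 'disk' and
            d.get('path', '').startswith('iscsi:')]

# illustrative config: one local disk, one iscsi-backed disk
sample = {'storage': {'config': [
    {'type': 'disk', 'id': 'sda', 'path': '/dev/sda'},
    {'type': 'disk', 'id': 'isda',
     'path': 'iscsi:10.0.0.1::3260:1:iqn.2004-10.com.example:tgt'},
]}}
print(iscsi_volumes(sample))
```

Splitting the URI extraction out of `get_iscsi_disks_from_config` is what lets `curtin/block/deps.py` decide whether iscsi tooling packages are needed without instantiating `IscsiDisk` objects.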
diff --git a/curtin/block/lvm.py b/curtin/block/lvm.py
index eca64f6..b3f8bcb 100644
--- a/curtin/block/lvm.py
+++ b/curtin/block/lvm.py
@@ -4,6 +4,7 @@
 This module provides some helper functions for manipulating lvm devices
 """
 
+from curtin import distro
 from curtin import util
 from curtin.log import LOG
 import os
@@ -88,7 +89,7 @@ def lvm_scan(activate=True):
     # before appending the cache flag though, check if lvmetad is running. this
     # ensures that we do the right thing even if lvmetad is supported but is
     # not running
-    release = util.lsb_release().get('codename')
+    release = distro.lsb_release().get('codename')
     if release in [None, 'UNAVAILABLE']:
         LOG.warning('unable to find release number, assuming xenial or later')
         release = 'xenial'
diff --git a/curtin/block/mdadm.py b/curtin/block/mdadm.py
index 8eff7fb..4ad6aa7 100644
--- a/curtin/block/mdadm.py
+++ b/curtin/block/mdadm.py
@@ -13,6 +13,7 @@ import time
 
 from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path)
 from curtin.block import get_holders
+from curtin.distro import lsb_release
 from curtin import (util, udev)
 from curtin.log import LOG
 
@@ -95,7 +96,7 @@ VALID_RAID_ARRAY_STATES = (
 checks the mdadm version and will return True if we can use --export
 for key=value list with enough info, false if version is less than
 '''
-MDADM_USE_EXPORT = util.lsb_release()['codename'] not in ['precise', 'trusty']
+MDADM_USE_EXPORT = lsb_release()['codename'] not in ['precise', 'trusty']
 
 #
 # mdadm executors
diff --git a/curtin/block/mkfs.py b/curtin/block/mkfs.py
index f39017c..4a1e1f9 100644
--- a/curtin/block/mkfs.py
+++ b/curtin/block/mkfs.py
@@ -3,8 +3,9 @@
 # This module wraps calls to mkfs.<fstype> and determines the appropriate flags
 # for each filesystem type
 
-from curtin import util
 from curtin import block
+from curtin import distro
+from curtin import util
 
 import string
 import os
@@ -102,7 +103,7 @@ def valid_fstypes():
 
 def get_flag_mapping(flag_name, fs_family, param=None, strict=False):
     ret = []
-    release = util.lsb_release()['codename']
+    release = distro.lsb_release()['codename']
     overrides = release_flag_mapping_overrides.get(release, {})
     if flag_name in overrides and fs_family in overrides[flag_name]:
         flag_sym = overrides[flag_name][fs_family]
diff --git a/curtin/block/zfs.py b/curtin/block/zfs.py
index e279ab6..5615144 100644
--- a/curtin/block/zfs.py
+++ b/curtin/block/zfs.py
@@ -7,6 +7,7 @@ and volumes."""
 import os
 
 from curtin.config import merge_config
+from curtin import distro
 from curtin import util
 from . import blkid, get_supported_filesystems
 
@@ -90,7 +91,7 @@ def zfs_assert_supported():
     if arch in ZFS_UNSUPPORTED_ARCHES:
         raise RuntimeError("zfs is not supported on architecture: %s" % arch)
 
-    release = util.lsb_release()['codename']
+    release = distro.lsb_release()['codename']
     if release in ZFS_UNSUPPORTED_RELEASES:
         raise RuntimeError("zfs is not supported on release: %s" % release)
 
diff --git a/curtin/commands/apply_net.py b/curtin/commands/apply_net.py
index ffd474e..ddc5056 100644
--- a/curtin/commands/apply_net.py
+++ b/curtin/commands/apply_net.py
@@ -7,6 +7,7 @@ from .. import log
 import curtin.net as net
 import curtin.util as util
 from curtin import config
+from curtin import paths
 from . import populate_one_subcmd
 
 
@@ -123,7 +124,7 @@ def _patch_ifupdown_ipv6_mtu_hook(target,
 
     for hook in ['prehook', 'posthook']:
         fn = hookfn[hook]
-        cfg = util.target_path(target, path=fn)
+        cfg = paths.target_path(target, path=fn)
         LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg)
         util.write_file(cfg, contents[hook], mode=0o755)
 
@@ -136,7 +137,7 @@ def _disable_ipv6_privacy_extensions(target,
     Resolve this by allowing the cloud-image setting to win. """
 
     LOG.debug('Attempting to remove ipv6 privacy extensions')
-    cfg = util.target_path(target, path=path)
+    cfg = paths.target_path(target, path=path)
     if not os.path.exists(cfg):
         LOG.warn('Failed to find ipv6 privacy conf file %s', cfg)
         return
@@ -182,7 +183,7 @@ def _maybe_remove_legacy_eth0(target,
     - with unknown content, leave it and warn
     """
 
-    cfg = util.target_path(target, path=path)
+    cfg = paths.target_path(target, path=path)
     if not os.path.exists(cfg):
         LOG.warn('Failed to find legacy network conf file %s', cfg)
         return
diff --git a/curtin/commands/apt_config.py b/curtin/commands/apt_config.py
index 41c329e..9ce25b3 100644
--- a/curtin/commands/apt_config.py
+++ b/curtin/commands/apt_config.py
@@ -13,7 +13,7 @@ import sys
 import yaml
 
 from curtin.log import LOG
-from curtin import (config, util, gpg)
+from curtin import (config, distro, gpg, paths, util)
 
 from . import populate_one_subcmd
 
@@ -61,7 +61,7 @@ def handle_apt(cfg, target=None):
     curthooks if a global apt config was provided or via the "apt"
     standalone command.
    """
-    release = util.lsb_release(target=target)['codename']
+    release = distro.lsb_release(target=target)['codename']
     arch = util.get_architecture(target)
     mirrors = find_apt_mirror_info(cfg, arch)
     LOG.debug("Apt Mirror info: %s", mirrors)
@@ -148,7 +148,7 @@ def apply_debconf_selections(cfg, target=None):
             pkg = re.sub(r"[:\s].*", "", line)
             pkgs_cfgd.add(pkg)
 
-    pkgs_installed = util.get_installed_packages(target)
+    pkgs_installed = distro.get_installed_packages(target)
 
     LOG.debug("pkgs_cfgd: %s", pkgs_cfgd)
    LOG.debug("pkgs_installed: %s", pkgs_installed)
@@ -164,7 +164,7 @@ def apply_debconf_selections(cfg, target=None):
 def clean_cloud_init(target):
     """clean out any local cloud-init config"""
     flist = glob.glob(
-        util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*"))
+        paths.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*"))
 
     LOG.debug("cleaning cloud-init config from: %s", flist)
     for dpkg_cfg in flist:
@@ -194,7 +194,7 @@ def rename_apt_lists(new_mirrors, target=None):
     """rename_apt_lists - rename apt lists to preserve old cache data"""
     default_mirrors = get_default_mirrors(util.get_architecture(target))
 
-    pre = util.target_path(target, APT_LISTS)
+    pre = paths.target_path(target, APT_LISTS)
     for (name, omirror) in default_mirrors.items():
         nmirror = new_mirrors.get(name)
         if not nmirror:
@@ -299,7 +299,7 @@ def generate_sources_list(cfg, release, mirrors, target=None):
     if tmpl is None:
         LOG.info("No custom template provided, fall back to modify"
                  "mirrors in %s on the target system", aptsrc)
-        tmpl = util.load_file(util.target_path(target, aptsrc))
+        tmpl = util.load_file(paths.target_path(target, aptsrc))
         # Strategy if no custom template was provided:
         # - Only replacing mirrors
         # - no reason to replace "release" as it is from target anyway
@@ -310,24 +310,24 @@ def generate_sources_list(cfg, release, mirrors, target=None):
     tmpl = mirror_to_placeholder(tmpl, default_mirrors['SECURITY'],
                                  "$SECURITY")
 
-    orig = util.target_path(target, aptsrc)
+    orig = paths.target_path(target, aptsrc)
     if os.path.exists(orig):
         os.rename(orig, orig + ".curtin.old")
 
     rendered = util.render_string(tmpl, params)
     disabled = disable_suites(cfg.get('disable_suites'), rendered, release)
-    util.write_file(util.target_path(target, aptsrc), disabled, mode=0o644)
+    util.write_file(paths.target_path(target, aptsrc), disabled, mode=0o644)
 
     # protect the just generated sources.list from cloud-init
     cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg"
     # this has to work with older cloud-init as well, so use old key
     cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1)
     try:
-        util.write_file(util.target_path(target, cloudfile),
+        util.write_file(paths.target_path(target, cloudfile),
                         cloudconf, mode=0o644)
     except IOError:
         LOG.exception("Failed to protect source.list from cloud-init in (%s)",
-                      util.target_path(target, cloudfile))
+                      paths.target_path(target, cloudfile))
         raise
 
 
@@ -409,7 +409,7 @@ def add_apt_sources(srcdict, target=None, template_params=None,
                 raise
             continue
 
-        sourcefn = util.target_path(target, ent['filename'])
+        sourcefn = paths.target_path(target, ent['filename'])
         try:
             contents = "%s\n" % (source)
             util.write_file(sourcefn, contents, omode="a")
@@ -417,8 +417,8 @@ def add_apt_sources(srcdict, target=None, template_params=None,
             LOG.exception("failed write to file %s: %s", sourcefn, detail)
             raise
 
-    util.apt_update(target=target, force=True,
-                    comment="apt-source changed config")
+    distro.apt_update(target=target, force=True,
+                      comment="apt-source changed config")
 
     return
 
diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
index 6bd430d..197c1fd 100644
--- a/curtin/commands/block_meta.py
+++ b/curtin/commands/block_meta.py
@@ -1,8 +1,9 @@
 # This file is part of curtin. See LICENSE file for copyright and license info.
 
 from collections import OrderedDict, namedtuple
-from curtin import (block, config, util)
+from curtin import (block, config, paths, util)
 from curtin.block import (bcache, mdadm, mkfs, clear_holders, lvm, iscsi, zfs)
+from curtin import distro
 from curtin.log import LOG, logged_time
 from curtin.reporter import events
 
@@ -730,12 +731,12 @@ def mount_fstab_data(fdata, target=None):
 
     :param fdata: a FstabData type
     :return None."""
-    mp = util.target_path(target, fdata.path)
+    mp = paths.target_path(target, fdata.path)
     if fdata.device:
         device = fdata.device
     else:
         if fdata.spec.startswith("/") and not fdata.spec.startswith("/dev/"):
-            device = util.target_path(target, fdata.spec)
+            device = paths.target_path(target, fdata.spec)
         else:
             device = fdata.spec
 
@@ -856,7 +857,7 @@ def lvm_partition_handler(info, storage_config):
     # Use 'wipesignatures' (if available) and 'zero' to clear target lv
     # of any fs metadata
     cmd = ["lvcreate", volgroup, "--name", name, "--zero=y"]
-    release = util.lsb_release()['codename']
+    release = distro.lsb_release()['codename']
     if release not in ['precise', 'trusty']:
         cmd.extend(["--wipesignatures=y"])
 
545 | diff --git a/curtin/commands/curthooks.py b/curtin/commands/curthooks.py | |||
546 | index f9a5a66..480eca4 100644 | |||
547 | --- a/curtin/commands/curthooks.py | |||
548 | +++ b/curtin/commands/curthooks.py | |||
549 | @@ -11,12 +11,18 @@ import textwrap | |||
550 | 11 | 11 | ||
551 | 12 | from curtin import config | 12 | from curtin import config |
552 | 13 | from curtin import block | 13 | from curtin import block |
553 | 14 | from curtin import distro | ||
554 | 15 | from curtin.block import iscsi | ||
555 | 14 | from curtin import net | 16 | from curtin import net |
556 | 15 | from curtin import futil | 17 | from curtin import futil |
557 | 16 | from curtin.log import LOG | 18 | from curtin.log import LOG |
558 | 19 | from curtin import paths | ||
559 | 17 | from curtin import swap | 20 | from curtin import swap |
560 | 18 | from curtin import util | 21 | from curtin import util |
561 | 19 | from curtin import version as curtin_version | 22 | from curtin import version as curtin_version |
562 | 23 | from curtin.block import deps as bdeps | ||
563 | 24 | from curtin.distro import DISTROS | ||
564 | 25 | from curtin.net import deps as ndeps | ||
565 | 20 | from curtin.reporter import events | 26 | from curtin.reporter import events |
566 | 21 | from curtin.commands import apply_net, apt_config | 27 | from curtin.commands import apply_net, apt_config |
567 | 22 | from curtin.url_helper import get_maas_version | 28 | from curtin.url_helper import get_maas_version |
568 | @@ -173,10 +179,10 @@ def install_kernel(cfg, target): | |||
569 | 173 | # target only has required packages installed. See LP:1640519 | 179 | # target only has required packages installed. See LP:1640519 |
570 | 174 | fk_packages = get_flash_kernel_pkgs() | 180 | fk_packages = get_flash_kernel_pkgs() |
571 | 175 | if fk_packages: | 181 | if fk_packages: |
573 | 176 | util.install_packages(fk_packages.split(), target=target) | 182 | distro.install_packages(fk_packages.split(), target=target) |
574 | 177 | 183 | ||
575 | 178 | if kernel_package: | 184 | if kernel_package: |
577 | 179 | util.install_packages([kernel_package], target=target) | 185 | distro.install_packages([kernel_package], target=target) |
578 | 180 | return | 186 | return |
579 | 181 | 187 | ||
580 | 182 | # uname[2] is kernel name (ie: 3.16.0-7-generic) | 188 | # uname[2] is kernel name (ie: 3.16.0-7-generic) |
581 | @@ -193,24 +199,24 @@ def install_kernel(cfg, target): | |||
         LOG.warn("Couldn't detect kernel package to install for %s."
                  % kernel)
         if kernel_fallback is not None:
-            util.install_packages([kernel_fallback], target=target)
+            distro.install_packages([kernel_fallback], target=target)
         return

     package = "linux-{flavor}{map_suffix}".format(
         flavor=flavor, map_suffix=map_suffix)

-    if util.has_pkg_available(package, target):
-        if util.has_pkg_installed(package, target):
+    if distro.has_pkg_available(package, target):
+        if distro.has_pkg_installed(package, target):
             LOG.debug("Kernel package '%s' already installed", package)
         else:
             LOG.debug("installing kernel package '%s'", package)
-            util.install_packages([package], target=target)
+            distro.install_packages([package], target=target)
     else:
         if kernel_fallback is not None:
             LOG.info("Kernel package '%s' not available. "
                      "Installing fallback package '%s'.",
                      package, kernel_fallback)
-            util.install_packages([kernel_fallback], target=target)
+            distro.install_packages([kernel_fallback], target=target)
         else:
             LOG.warn("Kernel package '%s' not available and no fallback."
                      " System may not boot.", package)
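The kernel-selection flow in the hunk above (build `linux-{flavor}{map_suffix}`, prefer it when available, otherwise fall back) can be sketched standalone. This is an editor's illustration, not curtin code: `choose_kernel_package` and its `available` set stand in for `distro.has_pkg_available`.

```python
def choose_kernel_package(flavor, map_suffix, available, fallback=None):
    """Mirror the selection logic above.

    'available' is a set standing in for distro.has_pkg_available();
    returns the package to install, or None (system may not boot).
    """
    package = "linux-{flavor}{map_suffix}".format(
        flavor=flavor, map_suffix=map_suffix)
    if package in available:
        return package
    # no flavored package published for this release/suffix
    return fallback

# the flavored HWE package is missing, so the fallback wins:
pkg = choose_kernel_package("generic", "-hwe-16.04",
                            {"linux-generic"}, fallback="linux-generic")
```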
@@ -273,7 +279,7 @@ def uefi_reorder_loaders(grubcfg, target):
         LOG.debug("Currently booted UEFI loader might no longer boot.")


-def setup_grub(cfg, target):
+def setup_grub(cfg, target, osfamily=DISTROS.debian):
     # target is the path to the mounted filesystem

     # FIXME: these methods need moving to curtin.block
@@ -353,24 +359,6 @@ def setup_grub(cfg, target):
     else:
         instdevs = list(blockdevs)

-    # UEFI requires grub-efi-{arch}. If a signed version of that package
-    # exists then it will be installed.
-    if util.is_uefi_bootable():
-        arch = util.get_architecture()
-        pkgs = ['grub-efi-%s' % arch]
-
-        # Architecture might support a signed UEFI loader
-        uefi_pkg_signed = 'grub-efi-%s-signed' % arch
-        if util.has_pkg_available(uefi_pkg_signed):
-            pkgs.append(uefi_pkg_signed)
-
-        # AMD64 has shim-signed for SecureBoot support
-        if arch == "amd64":
-            pkgs.append("shim-signed")
-
-        # Install the UEFI packages needed for the architecture
-        util.install_packages(pkgs, target=target)
-
     env = os.environ.copy()

     replace_default = grubcfg.get('replace_linux_default', True)
@@ -399,6 +387,7 @@ def setup_grub(cfg, target):
     else:
         LOG.debug("NOT enabling UEFI nvram updates")
         LOG.debug("Target system may not boot")
+    args.append('--os-family=%s' % osfamily)
     args.append(target)

     # capture stdout and stderr joined.
@@ -435,14 +424,21 @@ def copy_crypttab(crypttab, target):
     shutil.copy(crypttab, os.path.sep.join([target, 'etc/crypttab']))


-def copy_iscsi_conf(nodes_dir, target):
+def copy_iscsi_conf(nodes_dir, target, target_nodes_dir='etc/iscsi/nodes'):
     if not nodes_dir:
         LOG.warn("nodes directory must be specified, not copying")
         return

     LOG.info("copying iscsi nodes database into target")
-    shutil.copytree(nodes_dir, os.path.sep.join([target,
-                                                 'etc/iscsi/nodes']))
+    tdir = os.path.sep.join([target, target_nodes_dir])
+    if not os.path.exists(tdir):
+        shutil.copytree(nodes_dir, tdir)
+    else:
+        # if /etc/iscsi/nodes exists, copy dirs underneath
+        for ndir in os.listdir(nodes_dir):
+            source_dir = os.path.join(nodes_dir, ndir)
+            target_dir = os.path.join(tdir, ndir)
+            shutil.copytree(source_dir, target_dir)


 def copy_mdadm_conf(mdadm_conf, target):
@@ -486,7 +482,7 @@ def copy_dname_rules(rules_d, target):
     if not rules_d:
         LOG.warn("no udev rules directory to copy")
         return
-    target_rules_dir = util.target_path(target, "etc/udev/rules.d")
+    target_rules_dir = paths.target_path(target, "etc/udev/rules.d")
     for rule in os.listdir(rules_d):
         target_file = os.path.join(target_rules_dir, rule)
         shutil.copy(os.path.join(rules_d, rule), target_file)
@@ -532,11 +528,19 @@ def add_swap(cfg, target, fstab):
                  maxsize=maxsize)


-def detect_and_handle_multipath(cfg, target):
-    DEFAULT_MULTIPATH_PACKAGES = ['multipath-tools-boot']
+def detect_and_handle_multipath(cfg, target, osfamily=DISTROS.debian):
+    DEFAULT_MULTIPATH_PACKAGES = {
+        DISTROS.debian: ['multipath-tools-boot'],
+        DISTROS.redhat: ['device-mapper-multipath'],
+    }
+    if osfamily not in DEFAULT_MULTIPATH_PACKAGES:
+        raise ValueError(
+            'No multipath package mapping for distro: %s' % osfamily)
+
     mpcfg = cfg.get('multipath', {})
     mpmode = mpcfg.get('mode', 'auto')
-    mppkgs = mpcfg.get('packages', DEFAULT_MULTIPATH_PACKAGES)
+    mppkgs = mpcfg.get('packages',
+                       DEFAULT_MULTIPATH_PACKAGES.get(osfamily))
    mpbindings = mpcfg.get('overwrite_bindings', True)

     if isinstance(mppkgs, str):
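The change above replaces a single package list with an osfamily-keyed mapping, with a config override taking precedence. A minimal sketch of that lookup, using plain strings in place of curtin's `DISTROS` constants:

```python
DEFAULT_MULTIPATH_PACKAGES = {
    'debian': ['multipath-tools-boot'],
    'redhat': ['device-mapper-multipath'],
}

def multipath_packages(cfg, osfamily='debian'):
    """Return multipath packages: config override wins, else the
    per-family default; unknown families fail loudly."""
    if osfamily not in DEFAULT_MULTIPATH_PACKAGES:
        raise ValueError(
            'No multipath package mapping for distro: %s' % osfamily)
    return cfg.get('multipath', {}).get(
        'packages', DEFAULT_MULTIPATH_PACKAGES[osfamily])
```

Failing early on an unknown family (rather than returning `None` from `.get()`) keeps a typo'd osfamily from silently installing nothing.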
@@ -549,23 +553,28 @@ def detect_and_handle_multipath(cfg, target):
         return

     LOG.info("Detected multipath devices. Installing support via %s", mppkgs)
+    needed = [pkg for pkg in mppkgs if pkg
+              not in distro.get_installed_packages(target)]
+    if needed:
+        distro.install_packages(needed, target=target, osfamily=osfamily)

-    util.install_packages(mppkgs, target=target)
     replace_spaces = True
-    try:
-        # check in-target version
-        pkg_ver = util.get_package_version('multipath-tools', target=target)
-        LOG.debug("get_package_version:\n%s", pkg_ver)
-        LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)",
-                  pkg_ver['semantic_version'], pkg_ver['major'],
-                  pkg_ver['minor'], pkg_ver['micro'])
-        # multipath-tools versions < 0.5.0 do _NOT_ want whitespace replaced
-        # i.e. 0.4.X in Trusty.
-        if pkg_ver['semantic_version'] < 500:
-            replace_spaces = False
-    except Exception as e:
-        LOG.warn("failed reading multipath-tools version, "
-                 "assuming it wants no spaces in wwids: %s", e)
+    if osfamily == DISTROS.debian:
+        try:
+            # check in-target version
+            pkg_ver = distro.get_package_version('multipath-tools',
+                                                 target=target)
+            LOG.debug("get_package_version:\n%s", pkg_ver)
+            LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)",
+                      pkg_ver['semantic_version'], pkg_ver['major'],
+                      pkg_ver['minor'], pkg_ver['micro'])
+            # multipath-tools versions < 0.5.0 do _NOT_
+            # want whitespace replaced i.e. 0.4.X in Trusty.
+            if pkg_ver['semantic_version'] < 500:
+                replace_spaces = False
+        except Exception as e:
+            LOG.warn("failed reading multipath-tools version, "
+                     "assuming it wants no spaces in wwids: %s", e)

     multipath_cfg_path = os.path.sep.join([target, '/etc/multipath.conf'])
     multipath_bind_path = os.path.sep.join([target, '/etc/multipath/bindings'])
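The `semantic_version < 500` guard above encodes "multipath-tools older than 0.5.0". A sketch of an encoding consistent with that threshold; the exact scheme used by curtin's `get_package_version` is an assumption here (`major*10000 + minor*100 + micro` makes 0.5.0 map to 500 and any 0.4.X fall below it):

```python
def semantic_version(major, minor, micro):
    # hypothetical encoding, chosen only to be consistent with the
    # "< 500 means older than 0.5.0" comparison in the hunk above
    return major * 10000 + minor * 100 + micro

def wants_space_replacement(ver):
    # multipath-tools versions < 0.5.0 (e.g. 0.4.X in Trusty) do NOT
    # want whitespace replaced in wwids
    return ver >= semantic_version(0, 5, 0)
```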
@@ -574,7 +583,7 @@ def detect_and_handle_multipath(cfg, target):
     if not os.path.isfile(multipath_cfg_path):
         # Without user_friendly_names option enabled system fails to boot
         # if any of the disks has spaces in its name. Package multipath-tools
-        # has bug opened for this issue (LP: 1432062) but it was not fixed yet.
+        # has bug opened for this issue LP: #1432062 but it was not fixed yet.
         multipath_cfg_content = '\n'.join(
             ['# This file was created by curtin while installing the system.',
              'defaults {',
@@ -593,7 +602,13 @@ def detect_and_handle_multipath(cfg, target):
         mpname = "mpath0"
         grub_dev = "/dev/mapper/" + mpname
         if partno is not None:
-            grub_dev += "-part%s" % partno
+            if osfamily == DISTROS.debian:
+                grub_dev += "-part%s" % partno
+            elif osfamily == DISTROS.redhat:
+                grub_dev += "p%s" % partno
+            else:
+                raise ValueError(
+                    'Unknown grub_dev mapping for distro: %s' % osfamily)

         LOG.debug("configuring multipath install for root=%s wwid=%s",
                   grub_dev, wwid)
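The hunk above accounts for the different device-mapper partition naming between the families: Debian-style `-partN` versus RHEL-style `pN`. A standalone sketch (function name is illustrative, not curtin API):

```python
def multipath_grub_device(mpname, partno=None, osfamily='debian'):
    """Build the /dev/mapper path used for GRUB_DEVICE.

    Debian appends '-partN' for a partition on a multipath map;
    RHEL/CentOS appends 'pN'.
    """
    dev = "/dev/mapper/" + mpname
    if partno is not None:
        if osfamily == 'debian':
            dev += "-part%s" % partno
        elif osfamily == 'redhat':
            dev += "p%s" % partno
        else:
            raise ValueError(
                'Unknown grub_dev mapping for distro: %s' % osfamily)
    return dev
```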
@@ -606,31 +621,54 @@ def detect_and_handle_multipath(cfg, target):
              ''])
         util.write_file(multipath_bind_path, content=multipath_bind_content)

-        grub_cfg = os.path.sep.join(
-            [target, '/etc/default/grub.d/50-curtin-multipath.cfg'])
+        if osfamily == DISTROS.debian:
+            grub_cfg = os.path.sep.join(
+                [target, '/etc/default/grub.d/50-curtin-multipath.cfg'])
+            omode = 'w'
+        elif osfamily == DISTROS.redhat:
+            grub_cfg = os.path.sep.join([target, '/etc/default/grub'])
+            omode = 'a'
+        else:
+            raise ValueError(
+                'Unknown grub_cfg mapping for distro: %s' % osfamily)
+
         msg = '\n'.join([
-            '# Written by curtin for multipath device wwid "%s"' % wwid,
+            '# Written by curtin for multipath device %s %s' % (mpname, wwid),
             'GRUB_DEVICE=%s' % grub_dev,
             'GRUB_DISABLE_LINUX_UUID=true',
             ''])
-        util.write_file(grub_cfg, content=msg)
-
+        util.write_file(grub_cfg, omode=omode, content=msg)
     else:
         LOG.warn("Not sure how this will boot")

-    # Initrams needs to be updated to include /etc/multipath.cfg
-    # and /etc/multipath/bindings files.
-    update_initramfs(target, all_kernels=True)
+    if osfamily == DISTROS.debian:
+        # Initrams needs to be updated to include /etc/multipath.cfg
+        # and /etc/multipath/bindings files.
+        update_initramfs(target, all_kernels=True)
+    elif osfamily == DISTROS.redhat:
+        # Write out initramfs/dracut config for multipath
+        dracut_conf_multipath = os.path.sep.join(
+            [target, '/etc/dracut.conf.d/10-curtin-multipath.conf'])
+        msg = '\n'.join([
+            '# Written by curtin for multipath device wwid "%s"' % wwid,
+            'force_drivers+=" dm-multipath "',
+            'add_dracutmodules+="multipath"',
+            'install_items+="/etc/multipath.conf /etc/multipath/bindings"',
+            ''])
+        util.write_file(dracut_conf_multipath, content=msg)
+    else:
+        raise ValueError(
+            'Unknown initramfs mapping for distro: %s' % osfamily)


-def detect_required_packages(cfg):
+def detect_required_packages(cfg, osfamily=DISTROS.debian):
     """
     detect packages that will be required in-target by custom config items
     """

     mapping = {
-        'storage': block.detect_required_packages_mapping(),
-        'network': net.detect_required_packages_mapping(),
+        'storage': bdeps.detect_required_packages_mapping(osfamily=osfamily),
+        'network': ndeps.detect_required_packages_mapping(osfamily=osfamily),
     }

     needed_packages = []
@@ -657,16 +695,16 @@ def detect_required_packages(cfg):
     return needed_packages


-def install_missing_packages(cfg, target):
+def install_missing_packages(cfg, target, osfamily=DISTROS.debian):
     ''' describe which operation types will require specific packages

        'custom_config_key': {
            'pkg1': ['op_name_1', 'op_name_2', ...]
        }
    '''
-
-    installed_packages = util.get_installed_packages(target)
-    needed_packages = set([pkg for pkg in detect_required_packages(cfg)
+    installed_packages = distro.get_installed_packages(target)
+    needed_packages = set([pkg for pkg in
+                           detect_required_packages(cfg, osfamily=osfamily)
                            if pkg not in installed_packages])

     arch_packages = {
@@ -678,6 +716,31 @@ def install_missing_packages(cfg, target):
         if pkg not in needed_packages:
             needed_packages.add(pkg)

+    # UEFI requires grub-efi-{arch}. If a signed version of that package
+    # exists then it will be installed.
+    if util.is_uefi_bootable():
+        uefi_pkgs = []
+        if osfamily == DISTROS.redhat:
+            # centos/redhat doesn't support 32-bit?
+            uefi_pkgs.extend(['grub2-efi-x64-modules'])
+        elif osfamily == DISTROS.debian:
+            arch = util.get_architecture()
+            uefi_pkgs.append('grub-efi-%s' % arch)
+
+            # Architecture might support a signed UEFI loader
+            uefi_pkg_signed = 'grub-efi-%s-signed' % arch
+            if distro.has_pkg_available(uefi_pkg_signed):
+                uefi_pkgs.append(uefi_pkg_signed)
+
+            # AMD64 has shim-signed for SecureBoot support
+            if arch == "amd64":
+                uefi_pkgs.append("shim-signed")
+        else:
+            raise ValueError('Unknown grub2 package list for distro: %s' %
+                             osfamily)
+        needed_packages.update([pkg for pkg in uefi_pkgs
+                                if pkg not in installed_packages])
+
     # Filter out ifupdown network packages on netplan enabled systems.
     has_netplan = ('nplan' in installed_packages or
                    'netplan.io' in installed_packages)
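The UEFI package selection moved here from `setup_grub` now branches per family: RHEL/CentOS gets `grub2-efi-x64-modules`, Debian gets `grub-efi-{arch}` plus the signed loader and shim when applicable. A pure-function sketch of that selection (signed-package availability is passed in rather than probed):

```python
def uefi_packages(osfamily, arch, signed_available=False):
    """Return the grub/UEFI packages to install for a family+arch."""
    if osfamily == 'redhat':
        # centos/redhat: only the x64 EFI modules package
        return ['grub2-efi-x64-modules']
    if osfamily == 'debian':
        pkgs = ['grub-efi-%s' % arch]
        # architecture might support a signed UEFI loader
        if signed_available:
            pkgs.append('grub-efi-%s-signed' % arch)
        # AMD64 has shim-signed for SecureBoot support
        if arch == 'amd64':
            pkgs.append('shim-signed')
        return pkgs
    raise ValueError('Unknown grub2 package list for distro: %s' % osfamily)
```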
@@ -696,10 +759,10 @@ def install_missing_packages(cfg, target):
             reporting_enabled=True, level="INFO",
             description="Installing packages on target system: " +
                         str(to_add)):
-        util.install_packages(to_add, target=target)
+        distro.install_packages(to_add, target=target, osfamily=osfamily)


-def system_upgrade(cfg, target):
+def system_upgrade(cfg, target, osfamily=DISTROS.debian):
     """run system-upgrade (apt-get dist-upgrade) or other in target.

     config:
@@ -718,7 +781,7 @@ def system_upgrade(cfg, target):
         LOG.debug("system_upgrade disabled by config.")
         return

-    util.system_upgrade(target=target)
+    distro.system_upgrade(target=target, osfamily=osfamily)


 def inject_pollinate_user_agent_config(ua_cfg, target):
@@ -728,7 +791,7 @@ def inject_pollinate_user_agent_config(ua_cfg, target):
     if not isinstance(ua_cfg, dict):
         raise ValueError('ua_cfg is not a dictionary: %s', ua_cfg)

-    pollinate_cfg = util.target_path(target, '/etc/pollinate/add-user-agent')
+    pollinate_cfg = paths.target_path(target, '/etc/pollinate/add-user-agent')
     comment = "# written by curtin"
     content = "\n".join(["%s/%s %s" % (ua_key, ua_val, comment)
                          for ua_key, ua_val in ua_cfg.items()]) + "\n"
@@ -751,6 +814,8 @@ def handle_pollinate_user_agent(cfg, target):
       curtin version
       maas version (via endpoint URL, if present)
     """
+    if not util.which('pollinate', target=target):
+        return

     pcfg = cfg.get('pollinate')
     if not isinstance(pcfg, dict):
@@ -776,6 +841,63 @@ def handle_pollinate_user_agent(cfg, target):
     inject_pollinate_user_agent_config(uacfg, target)


+def configure_iscsi(cfg, state_etcd, target, osfamily=DISTROS.debian):
+    # If a /etc/iscsi/nodes/... file was created by block_meta then it
+    # needs to be copied onto the target system
+    nodes = os.path.join(state_etcd, "nodes")
+    if not os.path.exists(nodes):
+        return
+
+    LOG.info('Iscsi configuration found, enabling service')
+    if osfamily == DISTROS.redhat:
+        # copy iscsi node config to target image
+        LOG.debug('Copying iscsi node config to target')
+        copy_iscsi_conf(nodes, target, target_nodes_dir='var/lib/iscsi/nodes')
+
+        # update in-target config
+        with util.ChrootableTarget(target) as in_chroot:
+            # enable iscsid service
+            LOG.debug('Enabling iscsi daemon')
+            in_chroot.subp(['chkconfig', 'iscsid', 'on'])
+
+            # update selinux config for iscsi ports required
+            for port in [str(port) for port in
+                         iscsi.get_iscsi_ports_from_config(cfg)]:
+                LOG.debug('Adding iscsi port %s to selinux iscsi_port_t list',
+                          port)
+                in_chroot.subp(['semanage', 'port', '-a', '-t',
+                                'iscsi_port_t', '-p', 'tcp', port])
+
+    elif osfamily == DISTROS.debian:
+        copy_iscsi_conf(nodes, target)
+    else:
+        raise ValueError(
+            'Unknown iscsi requirements for distro: %s' % osfamily)
+
+
+def configure_mdadm(cfg, state_etcd, target, osfamily=DISTROS.debian):
+    # If a mdadm.conf file was created by block_meta than it needs
+    # to be copied onto the target system
+    mdadm_location = os.path.join(state_etcd, "mdadm.conf")
+    if not os.path.exists(mdadm_location):
+        return
+
+    conf_map = {
+        DISTROS.debian: 'etc/mdadm/mdadm.conf',
+        DISTROS.redhat: 'etc/mdadm.conf',
+    }
+    if osfamily not in conf_map:
+        raise ValueError(
+            'Unknown mdadm conf mapping for distro: %s' % osfamily)
+    LOG.info('Mdadm configuration found, enabling service')
+    shutil.copy(mdadm_location, paths.target_path(target,
+                                                  conf_map[osfamily]))
+    if osfamily == DISTROS.debian:
+        # as per LP: #964052 reconfigure mdadm
+        util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'],
+                  data=None, target=target)
+
+
 def handle_cloudconfig(cfg, base_dir=None):
     """write cloud-init configuration files into base_dir.
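`configure_mdadm` above keys the in-target config path on the family: Debian keeps its conf under `etc/mdadm/`, while RHEL/CentOS uses `etc/mdadm.conf` at the top of `/etc`. A tiny sketch of that lookup with the same fail-loud behavior:

```python
def mdadm_conf_path(osfamily):
    """Target-relative mdadm.conf location per os family."""
    conf_map = {
        'debian': 'etc/mdadm/mdadm.conf',
        'redhat': 'etc/mdadm.conf',
    }
    if osfamily not in conf_map:
        raise ValueError(
            'Unknown mdadm conf mapping for distro: %s' % osfamily)
    return conf_map[osfamily]
```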
@@ -845,21 +967,11 @@ def ubuntu_core_curthooks(cfg, target=None):
                         content=config.dump_config({'network': netconfig}))


-def rpm_get_dist_id(target):
-    """Use rpm command to extract the '%rhel' distro macro which returns
-    the major os version id (6, 7, 8). This works for centos or rhel
-    """
-    with util.ChrootableTarget(target) as in_chroot:
-        dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True)
-        return dist.rstrip()
-
-
-def centos_apply_network_config(netcfg, target=None):
+def redhat_upgrade_cloud_init(netcfg, target=None, osfamily=DISTROS.redhat):
     """ CentOS images execute built-in curthooks which only supports
     simple networking configuration. This hook enables advanced
     network configuration via config passthrough to the target.
     """
-
     def cloud_init_repo(version):
         if not version:
             raise ValueError('Missing required version parameter')
@@ -868,9 +980,9 @@ def centos_apply_network_config(netcfg, target=None):

     if netcfg:
         LOG.info('Removing embedded network configuration (if present)')
-        ifcfgs = glob.glob(util.target_path(target,
-                                            'etc/sysconfig/network-scripts') +
-                           '/ifcfg-*')
+        ifcfgs = glob.glob(
+            paths.target_path(target, 'etc/sysconfig/network-scripts') +
+            '/ifcfg-*')
         # remove ifcfg-* (except ifcfg-lo)
         for ifcfg in ifcfgs:
             if os.path.basename(ifcfg) != "ifcfg-lo":
@@ -884,29 +996,27 @@ def centos_apply_network_config(netcfg, target=None):
     # if in-target cloud-init is not updated, upgrade via cloud-init repo
     if not passthrough:
         cloud_init_yum_repo = (
-            util.target_path(target,
-                             'etc/yum.repos.d/curtin-cloud-init.repo'))
+            paths.target_path(target,
+                              'etc/yum.repos.d/curtin-cloud-init.repo'))
         # Inject cloud-init daily yum repo
         util.write_file(cloud_init_yum_repo,
-                        content=cloud_init_repo(rpm_get_dist_id(target)))
+                        content=cloud_init_repo(
+                            distro.rpm_get_dist_id(target)))

         # we separate the installation of repository packages (epel,
         # cloud-init-el-release) as we need a new invocation of yum
         # to read the newly installed repo files.
-        YUM_CMD = ['yum', '-y', '--noplugins', 'install']
-        retries = [1] * 30
-        with util.ChrootableTarget(target) as in_chroot:
-            # ensure up-to-date ca-certificates to handle https mirror
-            # connections
-            in_chroot.subp(YUM_CMD + ['ca-certificates'], capture=True,
-                           log_captured=True, retries=retries)
-            in_chroot.subp(YUM_CMD + ['epel-release'], capture=True,
-                           log_captured=True, retries=retries)
-            in_chroot.subp(YUM_CMD + ['cloud-init-el-release'],
-                           log_captured=True, capture=True,
-                           retries=retries)
-            in_chroot.subp(YUM_CMD + ['cloud-init'], capture=True,
-                           log_captured=True, retries=retries)
+
+        # ensure up-to-date ca-certificates to handle https mirror
+        # connections
+        distro.install_packages(['ca-certificates'], target=target,
+                                osfamily=osfamily)
+        distro.install_packages(['epel-release'], target=target,
+                                osfamily=osfamily)
+        distro.install_packages(['cloud-init-el-release'], target=target,
+                                osfamily=osfamily)
+        distro.install_packages(['cloud-init'], target=target,
+                                osfamily=osfamily)

         # remove cloud-init el-stable bootstrap repo config as the
         # cloud-init-el-release package points to the correct repo
@@ -919,127 +1029,136 @@ def centos_apply_network_config(netcfg, target=None):
                            capture=False, rcs=[0])
         except util.ProcessExecutionError:
             LOG.debug('Image missing bridge-utils package, installing')
-            in_chroot.subp(YUM_CMD + ['bridge-utils'], capture=True,
-                           log_captured=True, retries=retries)
+            distro.install_packages(['bridge-utils'], target=target,
+                                    osfamily=osfamily)

     LOG.info('Passing network configuration through to target')
     net.render_netconfig_passthrough(target, netconfig={'network': netcfg})


-def target_is_ubuntu_core(target):
-    """Check if Ubuntu-Core specific directory is present at target"""
-    if target:
-        return os.path.exists(util.target_path(target,
-                                               'system-data/var/lib/snapd'))
-    return False
-
-
-def target_is_centos(target):
-    """Check if CentOS specific file is present at target"""
-    if target:
-        return os.path.exists(util.target_path(target, 'etc/centos-release'))
-
-    return False
-
-
-def target_is_rhel(target):
-    """Check if RHEL specific file is present at target"""
-    if target:
-        return os.path.exists(util.target_path(target, 'etc/redhat-release'))
-
-    return False
-
-
-def curthooks(args):
-    state = util.load_command_environment()
-
-    if args.target is not None:
-        target = args.target
-    else:
-        target = state['target']
+# Public API, maas may call this from internal curthooks
+centos_apply_network_config = redhat_upgrade_cloud_init
+
+
+def redhat_apply_selinux_autorelabel(target):
+    """Creates file /.autorelabel.
+
+    This is used by SELinux to relabel all of the
+    files on the filesystem to have the correct
+    security context. Without this SSH login will
+    fail.
+    """
+    LOG.debug('enabling selinux autorelabel')
+    open(paths.target_path(target, '.autorelabel'), 'a').close()
+
+
+def redhat_update_dracut_config(target, cfg):
+    initramfs_mapping = {
+        'lvm': {'conf': 'lvmconf', 'modules': 'lvm'},
+        'raid': {'conf': 'mdadmconf', 'modules': 'mdraid'},
+    }
+
+    # no need to update initramfs if no custom storage
+    if 'storage' not in cfg:
+        return False
+
+    storage_config = cfg.get('storage', {}).get('config')
+    if not storage_config:
+        raise ValueError('Invalid storage config')
+
+    add_conf = set()
+    add_modules = set()
+    for scfg in storage_config:
+        if scfg['type'] == 'raid':
+            add_conf.add(initramfs_mapping['raid']['conf'])
+            add_modules.add(initramfs_mapping['raid']['modules'])
+        elif scfg['type'] in ['lvm_volgroup', 'lvm_partition']:
+            add_conf.add(initramfs_mapping['lvm']['conf'])
+            add_modules.add(initramfs_mapping['lvm']['modules'])
+
+    dconfig = ['# Written by curtin for custom storage config']
+    dconfig.append('add_dracutmodules+="%s"' % (" ".join(add_modules)))
+    for conf in add_conf:
+        dconfig.append('%s="yes"' % conf)
+
+    # Write out initramfs/dracut config for storage config
+    dracut_conf_storage = os.path.sep.join(
+        [target, '/etc/dracut.conf.d/50-curtin-storage.conf'])
+    msg = '\n'.join(dconfig + [''])
+    LOG.debug('Updating redhat dracut config')
+    util.write_file(dracut_conf_storage, content=msg)
+    return True
+
+
+def redhat_update_initramfs(target, cfg):
+    if not redhat_update_dracut_config(target, cfg):
+        LOG.debug('Skipping redhat initramfs update, no custom storage config')
+        return
+    kver_cmd = ['rpm', '-q', '--queryformat',
+                '%{VERSION}-%{RELEASE}.%{ARCH}', 'kernel']
+    with util.ChrootableTarget(target) as in_chroot:
+        LOG.debug('Finding redhat kernel version: %s', kver_cmd)
+        kver, _err = in_chroot.subp(kver_cmd, capture=True)
+        LOG.debug('Found kver=%s' % kver)
+        initramfs = '/boot/initramfs-%s.img' % kver
1184 | 1104 | dracut_cmd = ['dracut', '-f', initramfs, kver] | ||
1185 | 1105 | LOG.debug('Rebuilding initramfs with: %s', dracut_cmd) | ||
1186 | 1106 | in_chroot.subp(dracut_cmd, capture=True) | ||
1187 | 960 | 1107 | ||
1188 | 961 | if target is None: | ||
1189 | 962 | sys.stderr.write("Unable to find target. " | ||
1190 | 963 | "Use --target or set TARGET_MOUNT_POINT\n") | ||
1191 | 964 | sys.exit(2) | ||
1192 | 965 | 1108 | ||
1194 | 966 | cfg = config.load_command_config(args, state) | 1109 | def builtin_curthooks(cfg, target, state): |
1195 | 1110 | LOG.info('Running curtin builtin curthooks') | ||
1196 | 967 | stack_prefix = state.get('report_stack_prefix', '') | 1111 | stack_prefix = state.get('report_stack_prefix', '') |
1219 | 968 | 1112 | state_etcd = os.path.split(state['fstab'])[0] | |
1220 | 969 | # if curtin-hooks hook exists in target we can defer to the in-target hooks | 1113 | |
1221 | 970 | if util.run_hook_if_exists(target, 'curtin-hooks'): | 1114 | distro_info = distro.get_distroinfo(target=target) |
1222 | 971 | # For vmtests to force execute centos_apply_network_config, uncomment | 1115 | if not distro_info: |
1223 | 972 | # the value in examples/tests/centos_defaults.yaml | 1116 | raise RuntimeError('Failed to determine target distro') |
1224 | 973 | if cfg.get('_ammend_centos_curthooks'): | 1117 | osfamily = distro_info.family |
1225 | 974 | if cfg.get('cloudconfig'): | 1118 | LOG.info('Configuring target system for distro: %s osfamily: %s', |
1226 | 975 | handle_cloudconfig( | 1119 | distro_info.variant, osfamily) |
1227 | 976 | cfg['cloudconfig'], | 1120 | if osfamily == DISTROS.debian: |
1206 | 977 | base_dir=util.target_path(target, 'etc/cloud/cloud.cfg.d')) | ||
1207 | 978 | |||
1208 | 979 | if target_is_centos(target) or target_is_rhel(target): | ||
1209 | 980 | LOG.info('Detected RHEL/CentOS image, running extra hooks') | ||
1210 | 981 | with events.ReportEventStack( | ||
1211 | 982 | name=stack_prefix, reporting_enabled=True, | ||
1212 | 983 | level="INFO", | ||
1213 | 984 | description="Configuring CentOS for first boot"): | ||
1214 | 985 | centos_apply_network_config(cfg.get('network', {}), target) | ||
1215 | 986 | sys.exit(0) | ||
1216 | 987 | |||
1217 | 988 | if target_is_ubuntu_core(target): | ||
1218 | 989 | LOG.info('Detected Ubuntu-Core image, running hooks') | ||
1228 | 990 | with events.ReportEventStack( | 1121 | with events.ReportEventStack( |
1240 | 991 | name=stack_prefix, reporting_enabled=True, level="INFO", | 1122 | name=stack_prefix + '/writing-apt-config', |
1241 | 992 | description="Configuring Ubuntu-Core for first boot"): | 1123 | reporting_enabled=True, level="INFO", |
1242 | 993 | ubuntu_core_curthooks(cfg, target) | 1124 | description="configuring apt configuring apt"): |
1243 | 994 | sys.exit(0) | 1125 | do_apt_config(cfg, target) |
1244 | 995 | 1126 | disable_overlayroot(cfg, target) | |
1234 | 996 | with events.ReportEventStack( | ||
1235 | 997 | name=stack_prefix + '/writing-config', | ||
1236 | 998 | reporting_enabled=True, level="INFO", | ||
1237 | 999 | description="configuring apt configuring apt"): | ||
1238 | 1000 | do_apt_config(cfg, target) | ||
1239 | 1001 | disable_overlayroot(cfg, target) | ||
1245 | 1002 | 1127 | ||
1251 | 1003 | # LP: #1742560 prevent zfs-dkms from being installed (Xenial) | 1128 | # LP: #1742560 prevent zfs-dkms from being installed (Xenial) |
1252 | 1004 | if util.lsb_release(target=target)['codename'] == 'xenial': | 1129 | if distro.lsb_release(target=target)['codename'] == 'xenial': |
1253 | 1005 | util.apt_update(target=target) | 1130 | distro.apt_update(target=target) |
1254 | 1006 | with util.ChrootableTarget(target) as in_chroot: | 1131 | with util.ChrootableTarget(target) as in_chroot: |
1255 | 1007 | in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms']) | 1132 | in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms']) |
1256 | 1008 | 1133 | ||
1257 | 1009 | # packages may be needed prior to installing kernel | 1134 | # packages may be needed prior to installing kernel |
1258 | 1010 | with events.ReportEventStack( | 1135 | with events.ReportEventStack( |
1259 | 1011 | name=stack_prefix + '/installing-missing-packages', | 1136 | name=stack_prefix + '/installing-missing-packages', |
1260 | 1012 | reporting_enabled=True, level="INFO", | 1137 | reporting_enabled=True, level="INFO", |
1261 | 1013 | description="installing missing packages"): | 1138 | description="installing missing packages"): |
1263 | 1014 | install_missing_packages(cfg, target) | 1139 | install_missing_packages(cfg, target, osfamily=osfamily) |
1264 | 1015 | 1140 | ||
1283 | 1016 | # If a /etc/iscsi/nodes/... file was created by block_meta then it | 1141 | with events.ReportEventStack( |
1284 | 1017 | # needs to be copied onto the target system | 1142 | name=stack_prefix + '/configuring-iscsi-service', |
1285 | 1018 | nodes_location = os.path.join(os.path.split(state['fstab'])[0], | 1143 | reporting_enabled=True, level="INFO", |
1286 | 1019 | "nodes") | 1144 | description="configuring iscsi service"): |
1287 | 1020 | if os.path.exists(nodes_location): | 1145 | configure_iscsi(cfg, state_etcd, target, osfamily=osfamily) |
1270 | 1021 | copy_iscsi_conf(nodes_location, target) | ||
1271 | 1022 | # do we need to reconfigure open-iscsi? | ||
1272 | 1023 | |||
1273 | 1024 | # If a mdadm.conf file was created by block_meta than it needs to be copied | ||
1274 | 1025 | # onto the target system | ||
1275 | 1026 | mdadm_location = os.path.join(os.path.split(state['fstab'])[0], | ||
1276 | 1027 | "mdadm.conf") | ||
1277 | 1028 | if os.path.exists(mdadm_location): | ||
1278 | 1029 | copy_mdadm_conf(mdadm_location, target) | ||
1279 | 1030 | # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052 | ||
1280 | 1031 | # reconfigure mdadm | ||
1281 | 1032 | util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'], | ||
1282 | 1033 | data=None, target=target) | ||
1288 | 1034 | 1146 | ||
1289 | 1035 | with events.ReportEventStack( | 1147 | with events.ReportEventStack( |
1291 | 1036 | name=stack_prefix + '/installing-kernel', | 1148 | name=stack_prefix + '/configuring-mdadm-service', |
1292 | 1037 | reporting_enabled=True, level="INFO", | 1149 | reporting_enabled=True, level="INFO", |
1298 | 1038 | description="installing kernel"): | 1150 | description="configuring raid (mdadm) service"): |
1299 | 1039 | setup_zipl(cfg, target) | 1151 | configure_mdadm(cfg, state_etcd, target, osfamily=osfamily) |
1300 | 1040 | install_kernel(cfg, target) | 1152 | |
1301 | 1041 | run_zipl(cfg, target) | 1153 | if osfamily == DISTROS.debian: |
1302 | 1042 | restore_dist_interfaces(cfg, target) | 1154 | with events.ReportEventStack( |
1303 | 1155 | name=stack_prefix + '/installing-kernel', | ||
1304 | 1156 | reporting_enabled=True, level="INFO", | ||
1305 | 1157 | description="installing kernel"): | ||
1306 | 1158 | setup_zipl(cfg, target) | ||
1307 | 1159 | install_kernel(cfg, target) | ||
1308 | 1160 | run_zipl(cfg, target) | ||
1309 | 1161 | restore_dist_interfaces(cfg, target) | ||
1310 | 1043 | 1162 | ||
1311 | 1044 | with events.ReportEventStack( | 1163 | with events.ReportEventStack( |
1312 | 1045 | name=stack_prefix + '/setting-up-swap', | 1164 | name=stack_prefix + '/setting-up-swap', |
1313 | @@ -1047,6 +1166,23 @@ def curthooks(args): | |||
1314 | 1047 | description="setting up swap"): | 1166 | description="setting up swap"): |
1315 | 1048 | add_swap(cfg, target, state.get('fstab')) | 1167 | add_swap(cfg, target, state.get('fstab')) |
1316 | 1049 | 1168 | ||
1317 | 1169 | if osfamily == DISTROS.redhat: | ||
1318 | 1170 | # set cloud-init maas datasource for centos images | ||
1319 | 1171 | if cfg.get('cloudconfig'): | ||
1320 | 1172 | handle_cloudconfig( | ||
1321 | 1173 | cfg['cloudconfig'], | ||
1322 | 1174 | base_dir=paths.target_path(target, | ||
1323 | 1175 | 'etc/cloud/cloud.cfg.d')) | ||
1324 | 1176 | |||
1325 | 1177 | # For vmtests to force execute redhat_upgrade_cloud_init, uncomment | ||
1326 | 1178 | # the value in examples/tests/centos_defaults.yaml | ||
1327 | 1179 | if cfg.get('_ammend_centos_curthooks'): | ||
1328 | 1180 | with events.ReportEventStack( | ||
1329 | 1181 | name=stack_prefix + '/upgrading cloud-init', | ||
1330 | 1182 | reporting_enabled=True, level="INFO", | ||
1331 | 1183 | description="Upgrading cloud-init in target"): | ||
1332 | 1184 | redhat_upgrade_cloud_init(cfg.get('network', {}), target) | ||
1333 | 1185 | |||
1334 | 1050 | with events.ReportEventStack( | 1186 | with events.ReportEventStack( |
1335 | 1051 | name=stack_prefix + '/apply-networking-config', | 1187 | name=stack_prefix + '/apply-networking-config', |
1336 | 1052 | reporting_enabled=True, level="INFO", | 1188 | reporting_enabled=True, level="INFO", |
1337 | @@ -1063,29 +1199,44 @@ def curthooks(args): | |||
1338 | 1063 | name=stack_prefix + '/configuring-multipath', | 1199 | name=stack_prefix + '/configuring-multipath', |
1339 | 1064 | reporting_enabled=True, level="INFO", | 1200 | reporting_enabled=True, level="INFO", |
1340 | 1065 | description="configuring multipath"): | 1201 | description="configuring multipath"): |
1342 | 1066 | detect_and_handle_multipath(cfg, target) | 1202 | detect_and_handle_multipath(cfg, target, osfamily=osfamily) |
1343 | 1067 | 1203 | ||
1344 | 1068 | with events.ReportEventStack( | 1204 | with events.ReportEventStack( |
1345 | 1069 | name=stack_prefix + '/system-upgrade', | 1205 | name=stack_prefix + '/system-upgrade', |
1346 | 1070 | reporting_enabled=True, level="INFO", | 1206 | reporting_enabled=True, level="INFO", |
1347 | 1071 | description="updating packages on target system"): | 1207 | description="updating packages on target system"): |
1349 | 1072 | system_upgrade(cfg, target) | 1208 | system_upgrade(cfg, target, osfamily=osfamily) |
1350 | 1209 | |||
1351 | 1210 | if osfamily == DISTROS.redhat: | ||
1352 | 1211 | with events.ReportEventStack( | ||
1353 | 1212 | name=stack_prefix + '/enabling-selinux-autorelabel', | ||
1354 | 1213 | reporting_enabled=True, level="INFO", | ||
1355 | 1214 | description="enabling selinux autorelabel mode"): | ||
1356 | 1215 | redhat_apply_selinux_autorelabel(target) | ||
1357 | 1216 | |||
1358 | 1217 | with events.ReportEventStack( | ||
1359 | 1218 | name=stack_prefix + '/updating-initramfs-configuration', | ||
1360 | 1219 | reporting_enabled=True, level="INFO", | ||
1361 | 1220 | description="updating initramfs configuration"): | ||
1362 | 1221 | redhat_update_initramfs(target, cfg) | ||
1363 | 1073 | 1222 | ||
1364 | 1074 | with events.ReportEventStack( | 1223 | with events.ReportEventStack( |
1365 | 1075 | name=stack_prefix + '/pollinate-user-agent', | 1224 | name=stack_prefix + '/pollinate-user-agent', |
1366 | 1076 | reporting_enabled=True, level="INFO", | 1225 | reporting_enabled=True, level="INFO", |
1368 | 1077 | description="configuring pollinate user-agent on target system"): | 1226 | description="configuring pollinate user-agent on target"): |
1369 | 1078 | handle_pollinate_user_agent(cfg, target) | 1227 | handle_pollinate_user_agent(cfg, target) |
1370 | 1079 | 1228 | ||
1380 | 1080 | # If a crypttab file was created by block_meta than it needs to be copied | 1229 | if osfamily == DISTROS.debian: |
1381 | 1081 | # onto the target system, and update_initramfs() needs to be run, so that | 1230 | # If a crypttab file was created by block_meta than it needs to be |
1382 | 1082 | # the cryptsetup hooks are properly configured on the installed system and | 1231 | # copied onto the target system, and update_initramfs() needs to be |
1383 | 1083 | # it will be able to open encrypted volumes at boot. | 1232 | # run, so that the cryptsetup hooks are properly configured on the |
1384 | 1084 | crypttab_location = os.path.join(os.path.split(state['fstab'])[0], | 1233 | # installed system and it will be able to open encrypted volumes |
1385 | 1085 | "crypttab") | 1234 | # at boot. |
1386 | 1086 | if os.path.exists(crypttab_location): | 1235 | crypttab_location = os.path.join(os.path.split(state['fstab'])[0], |
1387 | 1087 | copy_crypttab(crypttab_location, target) | 1236 | "crypttab") |
1388 | 1088 | update_initramfs(target) | 1237 | if os.path.exists(crypttab_location): |
1389 | 1238 | copy_crypttab(crypttab_location, target) | ||
1390 | 1239 | update_initramfs(target) | ||
1391 | 1089 | 1240 | ||
1392 | 1090 | # If udev dname rules were created, copy them to target | 1241 | # If udev dname rules were created, copy them to target |
1393 | 1091 | udev_rules_d = os.path.join(state['scratch'], "rules.d") | 1242 | udev_rules_d = os.path.join(state['scratch'], "rules.d") |
1394 | @@ -1102,8 +1253,41 @@ def curthooks(args): | |||
1395 | 1102 | machine.startswith('aarch64') and not util.is_uefi_bootable()): | 1253 | machine.startswith('aarch64') and not util.is_uefi_bootable()): |
1396 | 1103 | update_initramfs(target) | 1254 | update_initramfs(target) |
1397 | 1104 | else: | 1255 | else: |
1399 | 1105 | setup_grub(cfg, target) | 1256 | setup_grub(cfg, target, osfamily=osfamily) |
1400 | 1257 | |||
1401 | 1258 | |||
1402 | 1259 | def curthooks(args): | ||
1403 | 1260 | state = util.load_command_environment() | ||
1404 | 1261 | |||
1405 | 1262 | if args.target is not None: | ||
1406 | 1263 | target = args.target | ||
1407 | 1264 | else: | ||
1408 | 1265 | target = state['target'] | ||
1409 | 1266 | |||
1410 | 1267 | if target is None: | ||
1411 | 1268 | sys.stderr.write("Unable to find target. " | ||
1412 | 1269 | "Use --target or set TARGET_MOUNT_POINT\n") | ||
1413 | 1270 | sys.exit(2) | ||
1414 | 1271 | |||
1415 | 1272 | cfg = config.load_command_config(args, state) | ||
1416 | 1273 | stack_prefix = state.get('report_stack_prefix', '') | ||
1417 | 1274 | curthooks_mode = cfg.get('curthooks', {}).get('mode', 'auto') | ||
1418 | 1275 | |||
1419 | 1276 | # UC is special, handle it first. | ||
1420 | 1277 | if distro.is_ubuntu_core(target): | ||
1421 | 1278 | LOG.info('Detected Ubuntu-Core image, running hooks') | ||
1422 | 1279 | with events.ReportEventStack( | ||
1423 | 1280 | name=stack_prefix, reporting_enabled=True, level="INFO", | ||
1424 | 1281 | description="Configuring Ubuntu-Core for first boot"): | ||
1425 | 1282 | ubuntu_core_curthooks(cfg, target) | ||
1426 | 1283 | sys.exit(0) | ||
1427 | 1284 | |||
1428 | 1285 | # user asked for target, or auto mode | ||
1429 | 1286 | if curthooks_mode in ['auto', 'target']: | ||
1430 | 1287 | if util.run_hook_if_exists(target, 'curtin-hooks'): | ||
1431 | 1288 | sys.exit(0) | ||
1432 | 1106 | 1289 | ||
1433 | 1290 | builtin_curthooks(cfg, target, state) | ||
1434 | 1107 | sys.exit(0) | 1291 | sys.exit(0) |
1435 | 1108 | 1292 | ||
1436 | 1109 | 1293 | ||
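The `redhat_update_dracut_config` hook in the curthooks.py diff above derives a dracut config fragment from curtin's storage config. A standalone sketch of that derivation follows; `render_dracut_conf` is an illustrative helper, not part of curtin, and sorting is added here only to make the output deterministic:

```python
# Sketch of the dracut-config derivation used by redhat_update_dracut_config.
# The type->conf/module mapping mirrors the diff above.
INITRAMFS_MAPPING = {
    'lvm': {'conf': 'lvmconf', 'modules': 'lvm'},
    'raid': {'conf': 'mdadmconf', 'modules': 'mdraid'},
}


def render_dracut_conf(storage_config):
    """Render dracut.conf.d content for raid/lvm entries in storage_config."""
    add_conf = set()
    add_modules = set()
    for scfg in storage_config:
        if scfg['type'] == 'raid':
            add_conf.add(INITRAMFS_MAPPING['raid']['conf'])
            add_modules.add(INITRAMFS_MAPPING['raid']['modules'])
        elif scfg['type'] in ['lvm_volgroup', 'lvm_partition']:
            add_conf.add(INITRAMFS_MAPPING['lvm']['conf'])
            add_modules.add(INITRAMFS_MAPPING['lvm']['modules'])
    lines = ['# Written by curtin for custom storage config']
    # sorted() is added here for deterministic output; the diff joins the
    # set directly
    lines.append('add_dracutmodules+="%s"' % " ".join(sorted(add_modules)))
    for conf in sorted(add_conf):
        lines.append('%s="yes"' % conf)
    return '\n'.join(lines + [''])
```

For a config containing a raid and an lvm_volgroup entry this yields an `add_dracutmodules+="lvm mdraid"` line plus `lvmconf="yes"` and `mdadmconf="yes"`, which is what curtin writes to `etc/dracut.conf.d/50-curtin-storage.conf` before rebuilding the initramfs with `dracut -f`.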
diff --git a/curtin/commands/in_target.py b/curtin/commands/in_target.py
index 8e839c0..c6f7abd 100644
--- a/curtin/commands/in_target.py
+++ b/curtin/commands/in_target.py
@@ -4,7 +4,7 @@ import os
 import pty
 import sys
 
-from curtin import util
+from curtin import paths, util
 
 from . import populate_one_subcmd
 
@@ -41,7 +41,7 @@ def in_target_main(args):
         sys.exit(2)
 
     daemons = args.allow_daemons
-    if util.target_path(args.target) == "/":
+    if paths.target_path(args.target) == "/":
         sys.stderr.write("WARN: Target is /, daemons are allowed.\n")
         daemons = True
     cmd = args.command_args
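The in_target change above relies on `target_path()` normalizing a missing target to the live system root, which is what makes the daemons-allowed check against `"/"` work. A behavior sketch under that assumption (this is not the actual `curtin/paths.py` implementation, just the semantics the check depends on):

```python
import os


def target_path(target, path=None):
    """Behavior sketch: resolve path under target; no target means '/'.

    Assumed semantics, not curtin's real implementation: absolute paths
    are stripped of their leading '/' so they cannot escape the target.
    """
    if target in (None, ''):
        target = '/'
    if not path:
        return target
    # join below the target root, so an absolute path stays inside it
    return os.path.join(target, path.lstrip('/'))
```

Under these semantics `target_path(None)` returns `"/"`, so running `curtin in-target` without a target argument trips the warning and leaves daemons enabled, while `target_path('/tmp/target', '/etc/fstab')` resolves inside the mounted target.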
diff --git a/curtin/commands/install.py b/curtin/commands/install.py
index 4d2a13f..244683c 100644
--- a/curtin/commands/install.py
+++ b/curtin/commands/install.py
@@ -13,7 +13,9 @@ import tempfile
 
 from curtin.block import iscsi
 from curtin import config
+from curtin import distro
 from curtin import util
+from curtin import paths
 from curtin import version
 from curtin.log import LOG, logged_time
 from curtin.reporter.legacy import load_reporter
@@ -80,7 +82,7 @@ def copy_install_log(logfile, target, log_target_path):
     LOG.debug('Copying curtin install log from %s to target/%s',
               logfile, log_target_path)
     util.write_file(
-        filename=util.target_path(target, log_target_path),
+        filename=paths.target_path(target, log_target_path),
         content=util.load_file(logfile, decode=False),
         mode=0o400, omode="wb")
 
@@ -319,7 +321,7 @@ def apply_kexec(kexec, target):
         raise TypeError("kexec is not a dict.")
 
     if not util.which('kexec'):
-        util.install_packages('kexec-tools')
+        distro.install_packages('kexec-tools')
 
     if not os.path.isfile(target_grubcfg):
         raise ValueError("%s does not exist in target" % grubcfg)
diff --git a/curtin/commands/system_install.py b/curtin/commands/system_install.py
index 05d70af..6d7b736 100644
--- a/curtin/commands/system_install.py
+++ b/curtin/commands/system_install.py
@@ -7,6 +7,7 @@ import curtin.util as util
 
 from . import populate_one_subcmd
 from curtin.log import LOG
+from curtin import distro
 
 
 def system_install_pkgs_main(args):
@@ -16,7 +17,7 @@ def system_install_pkgs_main(args):
 
     exit_code = 0
     try:
-        util.install_packages(
+        distro.install_packages(
             pkglist=args.packages, target=args.target,
             allow_daemons=args.allow_daemons)
     except util.ProcessExecutionError as e:
diff --git a/curtin/commands/system_upgrade.py b/curtin/commands/system_upgrade.py
index fe10fac..d4f6735 100644
--- a/curtin/commands/system_upgrade.py
+++ b/curtin/commands/system_upgrade.py
@@ -7,6 +7,7 @@ import curtin.util as util
 
 from . import populate_one_subcmd
 from curtin.log import LOG
+from curtin import distro
 
 
 def system_upgrade_main(args):
@@ -16,8 +17,8 @@ def system_upgrade_main(args):
 
     exit_code = 0
    try:
-        util.system_upgrade(target=args.target,
-                            allow_daemons=args.allow_daemons)
+        distro.system_upgrade(target=args.target,
+                              allow_daemons=args.allow_daemons)
     except util.ProcessExecutionError as e:
         LOG.warn("system upgrade failed: %s" % e)
         exit_code = e.exit_code
diff --git a/curtin/deps/__init__.py b/curtin/deps/__init__.py
index 7014895..96df4f6 100644
--- a/curtin/deps/__init__.py
+++ b/curtin/deps/__init__.py
@@ -6,13 +6,13 @@ import sys
 from curtin.util import (
     ProcessExecutionError,
     get_architecture,
-    install_packages,
     is_uefi_bootable,
-    lsb_release,
     subp,
     which,
 )
 
+from curtin.distro import install_packages, lsb_release
+
 REQUIRED_IMPORTS = [
     # import string to execute, python2 package, python3 package
     ('import yaml', 'python-yaml', 'python3-yaml'),
@@ -177,7 +177,7 @@ def install_deps(verbosity=False, dry_run=False, allow_daemons=True):
     ret = 0
     try:
         install_packages(missing_pkgs, allow_daemons=allow_daemons,
-                         aptopts=["--no-install-recommends"])
+                         opts=["--no-install-recommends"])
     except ProcessExecutionError as e:
         sys.stderr.write("%s\n" % e)
         ret = e.exit_code
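The deps module now imports `install_packages` and `lsb_release` from the new `curtin/distro.py`, whose diff follows. That module builds its OS-family lookup by inverting a family-to-variants map over a namedtuple-based enum (py2.7 lacks PEP 435 enums). A trimmed sketch of the same pattern, with a reduced distro list for brevity:

```python
from collections import namedtuple


# A namedtuple whose field names equal their values stands in for an enum,
# matching the distro_enum() helper in the distro.py diff below.
def distro_enum(*distros):
    return namedtuple('Distros', distros)(*distros)


DISTROS = distro_enum('centos', 'debian', 'fedora', 'redhat', 'rhel',
                      'ubuntu')

# family -> list of variants (trimmed to two families for illustration)
OS_FAMILIES = {
    DISTROS.debian: [DISTROS.debian, DISTROS.ubuntu],
    DISTROS.redhat: [DISTROS.centos, DISTROS.fedora, DISTROS.redhat,
                     DISTROS.rhel],
}

# invert the mapping so each variant resolves to its family in one lookup
DISTRO_TO_OSFAMILY = {variant: family
                      for family, variants in OS_FAMILIES.items()
                      for variant in variants}
```

This is what lets `get_distroinfo()` classify a CentOS target as `osfamily == DISTROS.redhat`, the key the curthooks refactor branches on.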
1564 | diff --git a/curtin/distro.py b/curtin/distro.py | |||
1565 | 184 | new file mode 100644 | 184 | new file mode 100644 |
1566 | index 0000000..f2a78ed | |||
1567 | --- /dev/null | |||
1568 | +++ b/curtin/distro.py | |||
1569 | @@ -0,0 +1,512 @@ | |||
1570 | 1 | # This file is part of curtin. See LICENSE file for copyright and license info. | ||
1571 | 2 | import glob | ||
1572 | 3 | from collections import namedtuple | ||
1573 | 4 | import os | ||
1574 | 5 | import re | ||
1575 | 6 | import shutil | ||
1576 | 7 | import tempfile | ||
1577 | 8 | |||
1578 | 9 | from .paths import target_path | ||
1579 | 10 | from .util import ( | ||
1580 | 11 | ChrootableTarget, | ||
1581 | 12 | find_newer, | ||
1582 | 13 | load_file, | ||
1583 | 14 | load_shell_content, | ||
1584 | 15 | ProcessExecutionError, | ||
1585 | 16 | set_unexecutable, | ||
1586 | 17 | string_types, | ||
1587 | 18 | subp, | ||
1588 | 19 | which | ||
1589 | 20 | ) | ||
1590 | 21 | from .log import LOG | ||
1591 | 22 | |||
1592 | 23 | DistroInfo = namedtuple('DistroInfo', ('variant', 'family')) | ||
1593 | 24 | DISTRO_NAMES = ['arch', 'centos', 'debian', 'fedora', 'freebsd', 'gentoo', | ||
1594 | 25 | 'opensuse', 'redhat', 'rhel', 'sles', 'suse', 'ubuntu'] | ||
1595 | 26 | |||
1596 | 27 | |||
1597 | 28 | # python2.7 lacks PEP 435, so we must make use an alternative for py2.7/3.x | ||
1598 | 29 | # https://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python | ||
1599 | 30 | def distro_enum(*distros): | ||
1600 | 31 | return namedtuple('Distros', distros)(*distros) | ||
1601 | 32 | |||
1602 | 33 | |||
1603 | 34 | DISTROS = distro_enum(*DISTRO_NAMES) | ||
1604 | 35 | |||
1605 | 36 | OS_FAMILIES = { | ||
1606 | 37 | DISTROS.debian: [DISTROS.debian, DISTROS.ubuntu], | ||
1607 | 38 | DISTROS.redhat: [DISTROS.centos, DISTROS.fedora, DISTROS.redhat, | ||
1608 | 39 | DISTROS.rhel], | ||
1609 | 40 | DISTROS.gentoo: [DISTROS.gentoo], | ||
1610 | 41 | DISTROS.freebsd: [DISTROS.freebsd], | ||
1611 | 42 | DISTROS.suse: [DISTROS.opensuse, DISTROS.sles, DISTROS.suse], | ||
1612 | 43 | DISTROS.arch: [DISTROS.arch], | ||
1613 | 44 | } | ||
1614 | 45 | |||
1615 | 46 | # invert the mapping for faster lookup of variants | ||
1616 | 47 | DISTRO_TO_OSFAMILY = ( | ||
1617 | 48 | {variant: family for family, variants in OS_FAMILIES.items() | ||
1618 | 49 | for variant in variants}) | ||
1619 | 50 | |||
1620 | 51 | _LSB_RELEASE = {} | ||
1621 | 52 | |||
1622 | 53 | |||
1623 | 54 | def name_to_distro(distname): | ||
1624 | 55 | try: | ||
1625 | 56 | return DISTROS[DISTROS.index(distname)] | ||
1626 | 57 | except (IndexError, AttributeError): | ||
1627 | 58 | LOG.error('Unknown distro name: %s', distname) | ||
1628 | 59 | |||
1629 | 60 | |||
1630 | 61 | def lsb_release(target=None): | ||
1631 | 62 | if target_path(target) != "/": | ||
1632 | 63 | # do not use or update cache if target is provided | ||
1633 | 64 | return _lsb_release(target) | ||
1634 | 65 | |||
1635 | 66 | global _LSB_RELEASE | ||
1636 | 67 | if not _LSB_RELEASE: | ||
1637 | 68 | data = _lsb_release() | ||
1638 | 69 | _LSB_RELEASE.update(data) | ||
1639 | 70 | return _LSB_RELEASE | ||
1640 | 71 | |||
1641 | 72 | |||
1642 | 73 | def os_release(target=None): | ||
1643 | 74 | data = {} | ||
1644 | 75 | os_release = target_path(target, 'etc/os-release') | ||
1645 | 76 | if os.path.exists(os_release): | ||
1646 | 77 | data = load_shell_content(load_file(os_release), | ||
1647 | 78 | add_empty=False, empty_val=None) | ||
1648 | 79 | if not data: | ||
1649 | 80 | for relfile in [target_path(target, rel) for rel in | ||
1650 | 81 | ['etc/centos-release', 'etc/redhat-release']]: | ||
1651 | 82 | data = _parse_redhat_release(release_file=relfile, target=target) | ||
1652 | 83 | if data: | ||
1653 | 84 | break | ||
1654 | 85 | |||
1655 | 86 | return data | ||
1656 | 87 | |||
1657 | 88 | |||
1658 | 89 | def _parse_redhat_release(release_file=None, target=None): | ||
1659 | 90 | """Return a dictionary of distro info fields from /etc/redhat-release. | ||
1660 | 91 | |||
1661 | 92 | Dict keys will align with /etc/os-release keys: | ||
1662 | 93 | ID, VERSION_ID, VERSION_CODENAME | ||
1663 | 94 | """ | ||
1664 | 95 | |||
1665 | 96 | if not release_file: | ||
1666 | 97 | release_file = target_path('etc/redhat-release') | ||
1667 | 98 | if not os.path.exists(release_file): | ||
1668 | 99 | return {} | ||
1669 | 100 | redhat_release = load_file(release_file) | ||
1670 | 101 | redhat_regex = ( | ||
1671 | 102 | r'(?P<name>.+) release (?P<version>[\d\.]+) ' | ||
1672 | 103 | r'\((?P<codename>[^)]+)\)') | ||
1673 | 104 | match = re.match(redhat_regex, redhat_release) | ||
1674 | 105 | if match: | ||
1675 | 106 | group = match.groupdict() | ||
1676 | 107 | group['name'] = group['name'].lower().partition(' linux')[0] | ||
1677 | 108 | if group['name'] == 'red hat enterprise': | ||
1678 | 109 | group['name'] = 'redhat' | ||
1679 | 110 | return {'ID': group['name'], 'VERSION_ID': group['version'], | ||
1680 | 111 | 'VERSION_CODENAME': group['codename']} | ||
1681 | 112 | return {} | ||
1682 | 113 | |||
1683 | 114 | |||
1684 | 115 | def get_distroinfo(target=None): | ||
1685 | 116 | variant_name = os_release(target=target)['ID'] | ||
1686 | 117 | variant = name_to_distro(variant_name) | ||
1687 | 118 | family = DISTRO_TO_OSFAMILY.get(variant) | ||
1688 | 119 | return DistroInfo(variant, family) | ||
1689 | 120 | |||
1690 | 121 | |||
1691 | 122 | def get_distro(target=None): | ||
1692 | 123 | distinfo = get_distroinfo(target=target) | ||
1693 | 124 | return distinfo.variant | ||
1694 | 125 | |||
1695 | 126 | |||
1696 | 127 | def get_osfamily(target=None): | ||
1697 | 128 | distinfo = get_distroinfo(target=target) | ||
1698 | 129 | return distinfo.family | ||
1699 | 130 | |||
1700 | 131 | |||
1701 | 132 | def is_ubuntu_core(target=None): | ||
1702 | 133 | """Check if Ubuntu-Core specific directory is present at target""" | ||
1703 | 134 | return os.path.exists(target_path(target, 'system-data/var/lib/snapd')) | ||
1704 | 135 | |||
1705 | 136 | |||
1706 | 137 | def is_centos(target=None): | ||
1707 | 138 | """Check if CentOS specific file is present at target""" | ||
1708 | 139 | return os.path.exists(target_path(target, 'etc/centos-release')) | ||
1709 | 140 | |||
1710 | 141 | |||
1711 | 142 | def is_rhel(target=None): | ||
1712 | 143 | """Check if RHEL specific file is present at target""" | ||
1713 | 144 | return os.path.exists(target_path(target, 'etc/redhat-release')) | ||
1714 | 145 | |||
1715 | 146 | |||
1716 | 147 | def _lsb_release(target=None): | ||
1717 | 148 | fmap = {'Codename': 'codename', 'Description': 'description', | ||
1718 | 149 | 'Distributor ID': 'id', 'Release': 'release'} | ||
1719 | 150 | |||
1720 | 151 | data = {} | ||
1721 | 152 | try: | ||
1722 | 153 | out, _ = subp(['lsb_release', '--all'], capture=True, target=target) | ||
1723 | 154 | for line in out.splitlines(): | ||
1724 | 155 | fname, _, val = line.partition(":") | ||
1725 | 156 | if fname in fmap: | ||
1726 | 157 | data[fmap[fname]] = val.strip() | ||
1727 | 158 | missing = [k for k in fmap.values() if k not in data] | ||
1728 | 159 | if len(missing): | ||
1729 | 160 | LOG.warn("Missing fields in lsb_release --all output: %s", | ||
1730 | 161 | ','.join(missing)) | ||
1731 | 162 | |||
1732 | 163 | except ProcessExecutionError as err: | ||
1733 | 164 | LOG.warn("Unable to get lsb_release --all: %s", err) | ||
1734 | 165 | data = {v: "UNAVAILABLE" for v in fmap.values()} | ||
1735 | 166 | |||
1736 | 167 | return data | ||
1737 | 168 | |||

def apt_update(target=None, env=None, force=False, comment=None,
               retries=None):

    marker = "tmp/curtin.aptupdate"

    if env is None:
        env = os.environ.copy()

    if retries is None:
        # by default run apt-get update up to 3 times to allow
        # for transient failures
        retries = (1, 2, 3)

    if comment is None:
        comment = "no comment provided"

    if comment.endswith("\n"):
        comment = comment[:-1]

    marker = target_path(target, marker)
    # if marker exists, check if there are files that would make it obsolete
    listfiles = [target_path(target, "/etc/apt/sources.list")]
    listfiles += glob.glob(
        target_path(target, "etc/apt/sources.list.d/*.list"))

    if os.path.exists(marker) and not force:
        if len(find_newer(marker, listfiles)) == 0:
            return

    restore_perms = []

    abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp"))
    try:
        abs_slist = abs_tmpdir + "/sources.list"
        abs_slistd = abs_tmpdir + "/sources.list.d"
        ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir)
        ch_slist = ch_tmpdir + "/sources.list"
        ch_slistd = ch_tmpdir + "/sources.list.d"

        # this file gets executed on apt-get update sometimes. (LP: #1527710)
        motd_update = target_path(
            target, "/usr/lib/update-notifier/update-motd-updates-available")
        pmode = set_unexecutable(motd_update)
        if pmode is not None:
            restore_perms.append((motd_update, pmode),)

        # create tmpdir/sources.list with all lines other than deb-src
        # avoid apt complaining by using existing and empty dir for sourceparts
        os.mkdir(abs_slistd)
        with open(abs_slist, "w") as sfp:
            for sfile in listfiles:
                with open(sfile, "r") as fp:
                    contents = fp.read()
                for line in contents.splitlines():
                    line = line.lstrip()
                    if not line.startswith("deb-src"):
                        sfp.write(line + "\n")

        update_cmd = [
            'apt-get', '--quiet',
            '--option=Acquire::Languages=none',
            '--option=Dir::Etc::sourcelist=%s' % ch_slist,
            '--option=Dir::Etc::sourceparts=%s' % ch_slistd,
            'update']

        # not using 'run_apt_command' so we can pass 'retries' to subp
        with ChrootableTarget(target, allow_daemons=True) as inchroot:
            inchroot.subp(update_cmd, env=env, retries=retries)
    finally:
        for fname, perms in restore_perms:
            os.chmod(fname, perms)
        if abs_tmpdir:
            shutil.rmtree(abs_tmpdir)

    with open(marker, "w") as fp:
        fp.write(comment + "\n")

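The deb-src filtering that `apt_update` applies when building its temporary sources.list is easy to demonstrate standalone (the sources.list content below is illustrative):

```python
def strip_deb_src(contents):
    # Keep every line except 'deb-src' entries, exactly as apt_update does
    # when writing its temporary sources.list.
    kept = []
    for line in contents.splitlines():
        line = line.lstrip()
        if not line.startswith("deb-src"):
            kept.append(line)
    return "\n".join(kept) + "\n"


sources = ("deb http://archive.ubuntu.com/ubuntu bionic main\n"
           "deb-src http://archive.ubuntu.com/ubuntu bionic main\n")
print(strip_deb_src(sources))
```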

def run_apt_command(mode, args=None, opts=None, env=None, target=None,
                    execute=True, allow_daemons=False):
    defopts = ['--quiet', '--assume-yes',
               '--option=Dpkg::options::=--force-unsafe-io',
               '--option=Dpkg::Options::=--force-confold']
    if args is None:
        args = []

    if opts is None:
        opts = []

    if env is None:
        env = os.environ.copy()
        env['DEBIAN_FRONTEND'] = 'noninteractive'

    if which('eatmydata', target=target):
        emd = ['eatmydata']
    else:
        emd = []

    cmd = emd + ['apt-get'] + defopts + opts + [mode] + args
    if not execute:
        return env, cmd

    apt_update(target, env=env, comment=' '.join(cmd))
    with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
        return inchroot.subp(cmd, env=env)

def run_yum_command(mode, args=None, opts=None, env=None, target=None,
                    execute=True, allow_daemons=False):
    defopts = ['--assumeyes', '--quiet']

    if args is None:
        args = []

    if opts is None:
        opts = []

    cmd = ['yum'] + defopts + opts + [mode] + args
    if not execute:
        return env, cmd

    if mode in ["install", "update", "upgrade"]:
        return yum_install(mode, args, opts=opts, env=env, target=target,
                           allow_daemons=allow_daemons)

    with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
        return inchroot.subp(cmd, env=env)

def yum_install(mode, packages=None, opts=None, env=None, target=None,
                allow_daemons=False):

    defopts = ['--assumeyes', '--quiet']

    if packages is None:
        packages = []

    if opts is None:
        opts = []

    if mode not in ['install', 'update', 'upgrade']:
        raise ValueError(
            'Unsupported mode "%s" for yum package install/upgrade' % mode)

    # download first, then install/upgrade from cache
    cmd = ['yum'] + defopts + opts + [mode]
    dl_opts = ['--downloadonly', '--setopt=keepcache=1']
    inst_opts = ['--cacheonly']

    # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget
    with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
        inchroot.subp(cmd + dl_opts + packages,
                      env=env, retries=[1] * 10)
        return inchroot.subp(cmd + inst_opts + packages, env=env)

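The two-pass command construction in `yum_install` (download with retries first, then install strictly from the local cache) amounts to the following; the package name is illustrative:

```python
# Mirrors how yum_install splits one logical install into a download pass
# and a cache-only install pass.
defopts = ['--assumeyes', '--quiet']
mode = 'install'
packages = ['cloud-init']  # illustrative package list

base = ['yum'] + defopts + [mode]
dl_cmd = base + ['--downloadonly', '--setopt=keepcache=1'] + packages
inst_cmd = base + ['--cacheonly'] + packages

print(dl_cmd)
print(inst_cmd)
```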

def rpm_get_dist_id(target=None):
    """Use rpm command to extract the '%rhel' distro macro which returns
    the major os version id (6, 7, 8). This works for centos or rhel
    """
    with ChrootableTarget(target) as in_chroot:
        dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True)
    return dist.rstrip()

def system_upgrade(opts=None, target=None, env=None, allow_daemons=False,
                   osfamily=None):
    LOG.debug("Upgrading system in %s", target)

    distro_cfg = {
        DISTROS.debian: {'function': run_apt_command,
                         'subcommands': ('dist-upgrade', 'autoremove')},
        DISTROS.redhat: {'function': run_yum_command,
                         'subcommands': ('upgrade',)},
    }
    if osfamily not in distro_cfg:
        raise ValueError('Distro "%s" does not have system_upgrade support' %
                         osfamily)

    for mode in distro_cfg[osfamily]['subcommands']:
        ret = distro_cfg[osfamily]['function'](
            mode, opts=opts, target=target,
            env=env, allow_daemons=allow_daemons)
    return ret

def install_packages(pkglist, osfamily=None, opts=None, target=None, env=None,
                     allow_daemons=False):
    if isinstance(pkglist, str):
        pkglist = [pkglist]

    if not osfamily:
        osfamily = get_osfamily(target=target)

    installer_map = {
        DISTROS.debian: run_apt_command,
        DISTROS.redhat: run_yum_command,
    }

    install_cmd = installer_map.get(osfamily)
    if not install_cmd:
        raise ValueError('No package install command for distro: %s' %
                         osfamily)

    return install_cmd('install', args=pkglist, opts=opts, target=target,
                       env=env, allow_daemons=allow_daemons)

def has_pkg_available(pkg, target=None, osfamily=None):
    if not osfamily:
        osfamily = get_osfamily(target=target)

    if osfamily not in [DISTROS.debian, DISTROS.redhat]:
        raise ValueError('has_pkg_available: unsupported distro family: %s' %
                         osfamily)

    if osfamily == DISTROS.debian:
        out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target)
        for item in out.splitlines():
            if pkg == item.strip():
                return True
        return False

    if osfamily == DISTROS.redhat:
        out, _ = run_yum_command('list', opts=['--cacheonly'])
        for item in out.splitlines():
            if item.lower().startswith(pkg.lower()):
                return True
        return False

def get_installed_packages(target=None):
    if which('dpkg-query', target=target):
        (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True)
    elif which('rpm', target=target):
        # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget
        with ChrootableTarget(target) as in_chroot:
            (out, _) = in_chroot.subp(['rpm', '-qa', '--queryformat',
                                       'ii %{NAME} %{VERSION}-%{RELEASE}\n'],
                                      target=target, capture=True)
    if not out:
        raise ValueError('No package query tool')

    pkgs_inst = set()
    for line in out.splitlines():
        try:
            (state, pkg, other) = line.split(None, 2)
        except ValueError:
            continue
        if state.startswith("hi") or state.startswith("ii"):
            pkgs_inst.add(re.sub(":.*", "", pkg))

    return pkgs_inst

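The state/name filtering in `get_installed_packages` can be exercised against canned `dpkg-query --list` output (the sample rows below are illustrative):

```python
import re


def parse_dpkg_list(out):
    # Same filtering as get_installed_packages: keep 'ii'/'hi' rows and
    # drop any ':arch' suffix from the package name.
    pkgs = set()
    for line in out.splitlines():
        try:
            state, pkg, _other = line.split(None, 2)
        except ValueError:
            continue  # header or short line
        if state.startswith(("hi", "ii")):
            pkgs.add(re.sub(":.*", "", pkg))
    return pkgs


sample = ("Desired=Unknown/Install/Remove/Purge/Hold\n"
          "ii  bash  4.4.18-2ubuntu1  amd64  GNU Bourne Again SHell\n"
          "ii  libc6:amd64  2.27-3ubuntu1  amd64  GNU C Library\n"
          "rc  old-pkg  1.0  amd64  removed package\n")
print(parse_dpkg_list(sample))
```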

def has_pkg_installed(pkg, target=None):
    try:
        out, _ = subp(['dpkg-query', '--show', '--showformat',
                       '${db:Status-Abbrev}', pkg],
                      capture=True, target=target)
        return out.rstrip() == "ii"
    except ProcessExecutionError:
        return False

def parse_dpkg_version(raw, name=None, semx=None):
    """Parse a dpkg version string into various parts and calculate a
    numerical value of the version for use in comparing package versions

    Native packages (without a '-') will have the package version treated
    as the upstream version.

    returns a dictionary with fields:
        'major' (int), 'minor' (int), 'micro' (int),
        'semantic_version' (int),
        'extra' (string), 'raw' (string), 'upstream' (string),
        'name' (present only if name is not None)
    """
    if not isinstance(raw, string_types):
        raise TypeError(
            "Invalid type %s for parse_dpkg_version" % raw.__class__)

    if semx is None:
        semx = (10000, 100, 1)

    if "-" in raw:
        upstream = raw.rsplit('-', 1)[0]
    else:
        # this is a native package, package version treated as upstream.
        upstream = raw

    match = re.search(r'[^0-9.]', upstream)
    if match:
        extra = upstream[match.start():]
        upstream_base = upstream[:match.start()]
    else:
        upstream_base = upstream
        extra = None

    toks = upstream_base.split(".", 2)
    if len(toks) == 3:
        major, minor, micro = toks
    elif len(toks) == 2:
        major, minor, micro = (toks[0], toks[1], 0)
    elif len(toks) == 1:
        major, minor, micro = (toks[0], 0, 0)

    version = {
        'major': int(major),
        'minor': int(minor),
        'micro': int(micro),
        'extra': extra,
        'raw': raw,
        'upstream': upstream,
    }
    if name:
        version['name'] = name

    if semx:
        try:
            version['semantic_version'] = int(
                int(major) * semx[0] + int(minor) * semx[1] +
                int(micro) * semx[2])
        except (ValueError, IndexError):
            version['semantic_version'] = None

    return version

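As a standalone check of the version arithmetic, a simplified copy of `parse_dpkg_version` (type checks and the `name` field omitted; semantics otherwise as above):

```python
import re


def parse_dpkg_version(raw, semx=(10000, 100, 1)):
    # Simplified standalone copy of the parser above.
    # Debian revision (after the last '-') is dropped to get upstream.
    upstream = raw.rsplit('-', 1)[0] if '-' in raw else raw
    match = re.search(r'[^0-9.]', upstream)
    extra = upstream[match.start():] if match else None
    upstream_base = upstream[:match.start()] if match else upstream
    # Pad to three numeric components: major, minor, micro.
    toks = upstream_base.split(".", 2) + ['0', '0']
    major, minor, micro = (int(t) for t in toks[:3])
    return {
        'major': major, 'minor': minor, 'micro': micro,
        'extra': extra, 'raw': raw, 'upstream': upstream,
        'semantic_version': (major * semx[0] + minor * semx[1] +
                             micro * semx[2]),
    }


print(parse_dpkg_version('1.2.3-0ubuntu1')['semantic_version'])  # 10203
```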

def get_package_version(pkg, target=None, semx=None):
    """Use dpkg-query to extract package pkg's version string
    and parse the version string into a dictionary
    """
    try:
        out, _ = subp(['dpkg-query', '--show', '--showformat',
                       '${Version}', pkg], capture=True, target=target)
        raw = out.rstrip()
        return parse_dpkg_version(raw, name=pkg, semx=semx)
    except ProcessExecutionError:
        return None

# vi: ts=4 expandtab syntax=python
diff --git a/curtin/futil.py b/curtin/futil.py
index 506964e..e603f88 100644
--- a/curtin/futil.py
+++ b/curtin/futil.py
@@ -5,7 +5,8 @@ import pwd
 import os
 import warnings
 
-from .util import write_file, target_path
+from .util import write_file
+from .paths import target_path
 from .log import LOG
 
 
diff --git a/curtin/net/__init__.py b/curtin/net/__init__.py
index b4c9b59..ef2ba26 100644
--- a/curtin/net/__init__.py
+++ b/curtin/net/__init__.py
@@ -572,63 +572,4 @@ def get_interface_mac(ifname):
     return read_sys_net(ifname, "address", enoent=False)
 
 
-def network_config_required_packages(network_config, mapping=None):
-
-    if network_config is None:
-        network_config = {}
-
-    if not isinstance(network_config, dict):
-        raise ValueError('Invalid network configuration. Must be a dict')
-
-    if mapping is None:
-        mapping = {}
-
-    if not isinstance(mapping, dict):
-        raise ValueError('Invalid network mapping. Must be a dict')
-
-    # allow top-level 'network' key
-    if 'network' in network_config:
-        network_config = network_config.get('network')
-
-    # v1 has 'config' key and uses type: devtype elements
-    if 'config' in network_config:
-        dev_configs = set(device['type']
-                          for device in network_config['config'])
-    else:
-        # v2 has no config key
-        dev_configs = set(cfgtype for (cfgtype, cfg) in
-                          network_config.items() if cfgtype not in ['version'])
-
-    needed_packages = []
-    for dev_type in dev_configs:
-        if dev_type in mapping:
-            needed_packages.extend(mapping[dev_type])
-
-    return needed_packages
-
-
-def detect_required_packages_mapping():
-    """Return a dictionary providing a versioned configuration which maps
-    network configuration elements to the packages which are required
-    for functionality.
-    """
-    mapping = {
-        1: {
-            'handler': network_config_required_packages,
-            'mapping': {
-                'bond': ['ifenslave'],
-                'bridge': ['bridge-utils'],
-                'vlan': ['vlan']},
-        },
-        2: {
-            'handler': network_config_required_packages,
-            'mapping': {
-                'bonds': ['ifenslave'],
-                'bridges': ['bridge-utils'],
-                'vlans': ['vlan']}
-        },
-    }
-
-    return mapping
-
 # vi: ts=4 expandtab syntax=python
diff --git a/curtin/net/deps.py b/curtin/net/deps.py
new file mode 100644
index 0000000..b98961d
--- /dev/null
+++ b/curtin/net/deps.py
@@ -0,0 +1,72 @@
+# This file is part of curtin. See LICENSE file for copyright and license info.
+
+from curtin.distro import DISTROS
+
+
+def network_config_required_packages(network_config, mapping=None):
+
+    if network_config is None:
+        network_config = {}
+
+    if not isinstance(network_config, dict):
+        raise ValueError('Invalid network configuration. Must be a dict')
+
+    if mapping is None:
+        mapping = {}
+
+    if not isinstance(mapping, dict):
+        raise ValueError('Invalid network mapping. Must be a dict')
+
+    # allow top-level 'network' key
+    if 'network' in network_config:
+        network_config = network_config.get('network')
+
+    # v1 has 'config' key and uses type: devtype elements
+    if 'config' in network_config:
+        dev_configs = set(device['type']
+                          for device in network_config['config'])
+    else:
+        # v2 has no config key
+        dev_configs = set(cfgtype for (cfgtype, cfg) in
+                          network_config.items() if cfgtype not in ['version'])
+
+    needed_packages = []
+    for dev_type in dev_configs:
+        if dev_type in mapping:
+            needed_packages.extend(mapping[dev_type])
+
+    return needed_packages
+
+
+def detect_required_packages_mapping(osfamily=DISTROS.debian):
+    """Return a dictionary providing a versioned configuration which maps
+    network configuration elements to the packages which are required
+    for functionality.
+    """
+    # keys ending with 's' are v2 values
+    distro_mapping = {
+        DISTROS.debian: {
+            'bond': ['ifenslave'],
+            'bonds': [],
+            'bridge': ['bridge-utils'],
+            'bridges': [],
+            'vlan': ['vlan'],
+            'vlans': []},
+        DISTROS.redhat: {
+            'bond': [],
+            'bonds': [],
+            'bridge': [],
+            'bridges': [],
+            'vlan': [],
+            'vlans': []},
+    }
+    if osfamily not in distro_mapping:
+        raise ValueError('No net package mapping for distro: %s' % osfamily)
+
+    return {1: {'handler': network_config_required_packages,
+                'mapping': distro_mapping.get(osfamily)},
+            2: {'handler': network_config_required_packages,
+                'mapping': distro_mapping.get(osfamily)}}
+
+
+# vi: ts=4 expandtab syntax=python
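The handler/mapping pair returned by `detect_required_packages_mapping` can be exercised without curtin installed; below is a trimmed, standalone copy of the handler together with the Debian v1 mapping from the file above (the example network config is invented):

```python
def network_config_required_packages(network_config, mapping):
    # Trimmed copy of the handler in curtin/net/deps.py: collect the
    # device types used by the config, then look each up in the mapping.
    if 'network' in network_config:
        network_config = network_config['network']
    if 'config' in network_config:
        dev_configs = set(d['type'] for d in network_config['config'])
    else:
        dev_configs = set(k for k in network_config if k != 'version')
    needed = []
    for dev_type in dev_configs:
        needed.extend(mapping.get(dev_type, []))
    return needed


debian_mapping = {'bond': ['ifenslave'], 'bridge': ['bridge-utils'],
                  'vlan': ['vlan']}
v1_cfg = {'network': {'version': 1, 'config': [
    {'type': 'physical', 'name': 'eth0'},
    {'type': 'bridge', 'name': 'br0'}]}}
print(network_config_required_packages(v1_cfg, debian_mapping))
```

On a Red Hat family target every mapping entry is empty, so the same config yields no extra packages, which is why the vmtests can now run on CentOS without pulling in Debian-only tools.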
diff --git a/curtin/paths.py b/curtin/paths.py
new file mode 100644
index 0000000..064b060
--- /dev/null
+++ b/curtin/paths.py
@@ -0,0 +1,34 @@
+# This file is part of curtin. See LICENSE file for copyright and license info.
+import os
+
+try:
+    string_types = (basestring,)
+except NameError:
+    string_types = (str,)
+
+
+def target_path(target, path=None):
+    # return 'path' inside target, accepting target as None
+    if target in (None, ""):
+        target = "/"
+    elif not isinstance(target, string_types):
+        raise ValueError("Unexpected input for target: %s" % target)
+    else:
+        target = os.path.abspath(target)
+        # abspath("//") returns "//" specifically for 2 slashes.
+        if target.startswith("//"):
+            target = target[1:]
+
+    if not path:
+        return target
+
+    if not isinstance(path, string_types):
+        raise ValueError("Unexpected input for path: %s" % path)
+
+    # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /.
+    while len(path) and path[0] == "/":
+        path = path[1:]
+
+    return os.path.join(target, path)
+
+# vi: ts=4 expandtab syntax=python
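The new `target_path` helper is self-contained enough to run directly; here is a Python 3-only copy (the `basestring` shim reduces to `str`) with its edge cases:

```python
import os


def target_path(target, path=None):
    # Copy of the helper in curtin/paths.py, Python 3 only.
    if target in (None, ""):
        target = "/"
    elif not isinstance(target, str):
        raise ValueError("Unexpected input for target: %s" % target)
    else:
        target = os.path.abspath(target)
        # abspath("//") returns "//" specifically for 2 slashes.
        if target.startswith("//"):
            target = target[1:]

    if not path:
        return target

    # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /.
    while len(path) and path[0] == "/":
        path = path[1:]

    return os.path.join(target, path)


print(target_path("/tmp/target", "/etc/fstab"))
```

The leading-slash chomp is the important detail: a naive `os.path.join(target, "/etc/fstab")` would discard the target prefix entirely.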
diff --git a/curtin/util.py b/curtin/util.py
index 29bf06e..238d7c5 100644
--- a/curtin/util.py
+++ b/curtin/util.py
@@ -4,7 +4,6 @@ import argparse
 import collections
 from contextlib import contextmanager
 import errno
-import glob
 import json
 import os
 import platform
@@ -38,15 +37,16 @@ except NameError:
 # python3 does not have a long type.
 numeric_types = (int, float)
 
+from . import paths
 from .log import LOG, log_call
 
 _INSTALLED_HELPERS_PATH = 'usr/lib/curtin/helpers'
 _INSTALLED_MAIN = 'usr/bin/curtin'
 
-_LSB_RELEASE = {}
 _USES_SYSTEMD = None
 _HAS_UNSHARE_PID = None
 
+
 _DNS_REDIRECT_IP = None
 
 # matcher used in template rendering functions
@@ -61,7 +61,7 @@ def _subp(args, data=None, rcs=None, env=None, capture=False,
         rcs = [0]
     devnull_fp = None
 
-    tpath = target_path(target)
+    tpath = paths.target_path(target)
     chroot_args = [] if tpath == "/" else ['chroot', target]
     sh_args = ['sh', '-c'] if shell else []
     if isinstance(args, string_types):
@@ -165,7 +165,7 @@ def _get_unshare_pid_args(unshare_pid=None, target=None, euid=None):
     if euid is None:
         euid = os.geteuid()
 
-    tpath = target_path(target)
+    tpath = paths.target_path(target)
 
     unshare_pid_in = unshare_pid
     if unshare_pid is None:
@@ -595,7 +595,7 @@ def disable_daemons_in_root(target):
         'done',
         ''])
 
-    fpath = target_path(target, "/usr/sbin/policy-rc.d")
+    fpath = paths.target_path(target, "/usr/sbin/policy-rc.d")
 
     if os.path.isfile(fpath):
         return False
@@ -606,7 +606,7 @@ def disable_daemons_in_root(target):
 
 def undisable_daemons_in_root(target):
     try:
-        os.unlink(target_path(target, "/usr/sbin/policy-rc.d"))
+        os.unlink(paths.target_path(target, "/usr/sbin/policy-rc.d"))
     except OSError as e:
         if e.errno != errno.ENOENT:
             raise
@@ -618,7 +618,7 @@ class ChrootableTarget(object):
     def __init__(self, target, allow_daemons=False, sys_resolvconf=True):
         if target is None:
             target = "/"
-        self.target = target_path(target)
+        self.target = paths.target_path(target)
         self.mounts = ["/dev", "/proc", "/sys"]
         self.umounts = []
         self.disabled_daemons = False
2357 | @@ -628,14 +628,14 @@ class ChrootableTarget(object): | |||
2358 | 628 | 628 | ||
2359 | 629 | def __enter__(self): | 629 | def __enter__(self): |
2360 | 630 | for p in self.mounts: | 630 | for p in self.mounts: |
2362 | 631 | tpath = target_path(self.target, p) | 631 | tpath = paths.target_path(self.target, p) |
2363 | 632 | if do_mount(p, tpath, opts='--bind'): | 632 | if do_mount(p, tpath, opts='--bind'): |
2364 | 633 | self.umounts.append(tpath) | 633 | self.umounts.append(tpath) |
2365 | 634 | 634 | ||
2366 | 635 | if not self.allow_daemons: | 635 | if not self.allow_daemons: |
2367 | 636 | self.disabled_daemons = disable_daemons_in_root(self.target) | 636 | self.disabled_daemons = disable_daemons_in_root(self.target) |
2368 | 637 | 637 | ||
2370 | 638 | rconf = target_path(self.target, "/etc/resolv.conf") | 638 | rconf = paths.target_path(self.target, "/etc/resolv.conf") |
2371 | 639 | target_etc = os.path.dirname(rconf) | 639 | target_etc = os.path.dirname(rconf) |
2372 | 640 | if self.target != "/" and os.path.isdir(target_etc): | 640 | if self.target != "/" and os.path.isdir(target_etc): |
2373 | 641 | # never muck with resolv.conf on / | 641 | # never muck with resolv.conf on / |
2374 | @@ -660,13 +660,13 @@ class ChrootableTarget(object): | |||
2375 | 660 | undisable_daemons_in_root(self.target) | 660 | undisable_daemons_in_root(self.target) |
2376 | 661 | 661 | ||
2377 | 662 | # if /dev is to be unmounted, udevadm settle (LP: #1462139) | 662 | # if /dev is to be unmounted, udevadm settle (LP: #1462139) |
2379 | 663 | if target_path(self.target, "/dev") in self.umounts: | 663 | if paths.target_path(self.target, "/dev") in self.umounts: |
2380 | 664 | log_call(subp, ['udevadm', 'settle']) | 664 | log_call(subp, ['udevadm', 'settle']) |
2381 | 665 | 665 | ||
2382 | 666 | for p in reversed(self.umounts): | 666 | for p in reversed(self.umounts): |
2383 | 667 | do_umount(p) | 667 | do_umount(p) |
2384 | 668 | 668 | ||
2386 | 669 | rconf = target_path(self.target, "/etc/resolv.conf") | 669 | rconf = paths.target_path(self.target, "/etc/resolv.conf") |
         if self.sys_resolvconf and self.rconf_d:
             os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf)
             shutil.rmtree(self.rconf_d)
@@ -676,7 +676,7 @@ class ChrootableTarget(object):
         return subp(*args, **kwargs)
 
     def path(self, path):
-        return target_path(self.target, path)
+        return paths.target_path(self.target, path)
 
 
 def is_exe(fpath):
@@ -685,29 +685,29 @@ def is_exe(fpath):
 
 
 def which(program, search=None, target=None):
-    target = target_path(target)
+    target = paths.target_path(target)
 
     if os.path.sep in program:
         # if program had a '/' in it, then do not search PATH
         # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls
         # so effectively we set cwd to / (or target)
-        if is_exe(target_path(target, program)):
+        if is_exe(paths.target_path(target, program)):
             return program
 
     if search is None:
-        paths = [p.strip('"') for p in
-                 os.environ.get("PATH", "").split(os.pathsep)]
+        candpaths = [p.strip('"') for p in
+                     os.environ.get("PATH", "").split(os.pathsep)]
         if target == "/":
-            search = paths
+            search = candpaths
         else:
-            search = [p for p in paths if p.startswith("/")]
+            search = [p for p in candpaths if p.startswith("/")]
 
     # normalize path input
     search = [os.path.abspath(p) for p in search]
 
     for path in search:
         ppath = os.path.sep.join((path, program))
-        if is_exe(target_path(target, ppath)):
+        if is_exe(paths.target_path(target, ppath)):
             return ppath
 
     return None
@@ -773,116 +773,6 @@ def get_architecture(target=None):
     return out.strip()
 
 
-def has_pkg_available(pkg, target=None):
-    out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target)
-    for item in out.splitlines():
-        if pkg == item.strip():
-            return True
-    return False
-
-
-def get_installed_packages(target=None):
-    (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True)
-
-    pkgs_inst = set()
-    for line in out.splitlines():
-        try:
-            (state, pkg, other) = line.split(None, 2)
-        except ValueError:
-            continue
-        if state.startswith("hi") or state.startswith("ii"):
-            pkgs_inst.add(re.sub(":.*", "", pkg))
-
-    return pkgs_inst
-
-
-def has_pkg_installed(pkg, target=None):
-    try:
-        out, _ = subp(['dpkg-query', '--show', '--showformat',
-                       '${db:Status-Abbrev}', pkg],
-                      capture=True, target=target)
-        return out.rstrip() == "ii"
-    except ProcessExecutionError:
-        return False
-
-
-def parse_dpkg_version(raw, name=None, semx=None):
-    """Parse a dpkg version string into various parts and calcualate a
-    numerical value of the version for use in comparing package versions
-
-    Native packages (without a '-'), will have the package version treated
-    as the upstream version.
-
-    returns a dictionary with fields:
-        'major' (int), 'minor' (int), 'micro' (int),
-        'semantic_version' (int),
-        'extra' (string), 'raw' (string), 'upstream' (string),
-        'name' (present only if name is not None)
-    """
-    if not isinstance(raw, string_types):
-        raise TypeError(
-            "Invalid type %s for parse_dpkg_version" % raw.__class__)
-
-    if semx is None:
-        semx = (10000, 100, 1)
-
-    if "-" in raw:
-        upstream = raw.rsplit('-', 1)[0]
-    else:
-        # this is a native package, package version treated as upstream.
-        upstream = raw
-
-    match = re.search(r'[^0-9.]', upstream)
-    if match:
-        extra = upstream[match.start():]
-        upstream_base = upstream[:match.start()]
-    else:
-        upstream_base = upstream
-        extra = None
-
-    toks = upstream_base.split(".", 2)
-    if len(toks) == 3:
-        major, minor, micro = toks
-    elif len(toks) == 2:
-        major, minor, micro = (toks[0], toks[1], 0)
-    elif len(toks) == 1:
-        major, minor, micro = (toks[0], 0, 0)
-
-    version = {
-        'major': int(major),
-        'minor': int(minor),
-        'micro': int(micro),
-        'extra': extra,
-        'raw': raw,
-        'upstream': upstream,
-    }
-    if name:
-        version['name'] = name
-
-    if semx:
-        try:
-            version['semantic_version'] = int(
-                int(major) * semx[0] + int(minor) * semx[1] +
-                int(micro) * semx[2])
-        except (ValueError, IndexError):
-            version['semantic_version'] = None
-
-    return version
-
-
-def get_package_version(pkg, target=None, semx=None):
-    """Use dpkg-query to extract package pkg's version string
-    and parse the version string into a dictionary
-    """
-    try:
-        out, _ = subp(['dpkg-query', '--show', '--showformat',
-                       '${Version}', pkg], capture=True, target=target)
-        raw = out.rstrip()
-        return parse_dpkg_version(raw, name=pkg, semx=semx)
-    except ProcessExecutionError:
-        return None
-
-
 def find_newer(src, files):
     mtime = os.stat(src).st_mtime
     return [f for f in files if
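The `parse_dpkg_version` helper removed above moves (with the rest of the package helpers) into `curtin/distro.py`. Its core is splitting a dpkg version into upstream/extra parts and deriving a weighted `semantic_version` for comparisons. A condensed, dependency-free sketch of that parsing (the type check and `name` field are dropped here):

```python
import re


def parse_dpkg_version(raw, semx=(10000, 100, 1)):
    # a '-' separates the debian revision; native packages have none,
    # so the whole string is treated as the upstream version
    upstream = raw.rsplit('-', 1)[0] if '-' in raw else raw

    # everything from the first non-numeric/non-dot char on is 'extra'
    match = re.search(r'[^0-9.]', upstream)
    if match:
        extra, base = upstream[match.start():], upstream[:match.start()]
    else:
        extra, base = None, upstream

    # pad missing minor/micro components with zeros
    major, minor, micro = (
        [int(t) for t in base.split(".", 2)] + [0, 0])[:3]

    return {
        'major': major, 'minor': minor, 'micro': micro,
        'extra': extra, 'raw': raw, 'upstream': upstream,
        # weighted value so versions compare as single integers
        'semantic_version': (major * semx[0] + minor * semx[1] +
                             micro * semx[2]),
    }
```

With the default weights, "2.02~beta2-36ubuntu3" yields major 2, minor 2, micro 0 and a semantic version of 20200, so any 2.3.x build sorts above any 2.2.x build.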
@@ -907,134 +797,6 @@ def set_unexecutable(fname, strict=False):
     return cur
 
 
-def apt_update(target=None, env=None, force=False, comment=None,
-               retries=None):
-
-    marker = "tmp/curtin.aptupdate"
-    if target is None:
-        target = "/"
-
-    if env is None:
-        env = os.environ.copy()
-
-    if retries is None:
-        # by default run apt-update up to 3 times to allow
-        # for transient failures
-        retries = (1, 2, 3)
-
-    if comment is None:
-        comment = "no comment provided"
-
-    if comment.endswith("\n"):
-        comment = comment[:-1]
-
-    marker = target_path(target, marker)
-    # if marker exists, check if there are files that would make it obsolete
-    listfiles = [target_path(target, "/etc/apt/sources.list")]
-    listfiles += glob.glob(
-        target_path(target, "etc/apt/sources.list.d/*.list"))
-
-    if os.path.exists(marker) and not force:
-        if len(find_newer(marker, listfiles)) == 0:
-            return
-
-    restore_perms = []
-
-    abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp"))
-    try:
-        abs_slist = abs_tmpdir + "/sources.list"
-        abs_slistd = abs_tmpdir + "/sources.list.d"
-        ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir)
-        ch_slist = ch_tmpdir + "/sources.list"
-        ch_slistd = ch_tmpdir + "/sources.list.d"
-
-        # this file gets executed on apt-get update sometimes. (LP: #1527710)
-        motd_update = target_path(
-            target, "/usr/lib/update-notifier/update-motd-updates-available")
-        pmode = set_unexecutable(motd_update)
-        if pmode is not None:
-            restore_perms.append((motd_update, pmode),)
-
-        # create tmpdir/sources.list with all lines other than deb-src
-        # avoid apt complaining by using existing and empty dir for sourceparts
-        os.mkdir(abs_slistd)
-        with open(abs_slist, "w") as sfp:
-            for sfile in listfiles:
-                with open(sfile, "r") as fp:
-                    contents = fp.read()
-                for line in contents.splitlines():
-                    line = line.lstrip()
-                    if not line.startswith("deb-src"):
-                        sfp.write(line + "\n")
-
-        update_cmd = [
-            'apt-get', '--quiet',
-            '--option=Acquire::Languages=none',
-            '--option=Dir::Etc::sourcelist=%s' % ch_slist,
-            '--option=Dir::Etc::sourceparts=%s' % ch_slistd,
-            'update']
-
-        # do not using 'run_apt_command' so we can use 'retries' to subp
-        with ChrootableTarget(target, allow_daemons=True) as inchroot:
-            inchroot.subp(update_cmd, env=env, retries=retries)
-    finally:
-        for fname, perms in restore_perms:
-            os.chmod(fname, perms)
-        if abs_tmpdir:
-            shutil.rmtree(abs_tmpdir)
-
-    with open(marker, "w") as fp:
-        fp.write(comment + "\n")
-
-
-def run_apt_command(mode, args=None, aptopts=None, env=None, target=None,
-                    execute=True, allow_daemons=False):
-    opts = ['--quiet', '--assume-yes',
-            '--option=Dpkg::options::=--force-unsafe-io',
-            '--option=Dpkg::Options::=--force-confold']
-
-    if args is None:
-        args = []
-
-    if aptopts is None:
-        aptopts = []
-
-    if env is None:
-        env = os.environ.copy()
-        env['DEBIAN_FRONTEND'] = 'noninteractive'
-
-    if which('eatmydata', target=target):
-        emd = ['eatmydata']
-    else:
-        emd = []
-
-    cmd = emd + ['apt-get'] + opts + aptopts + [mode] + args
-    if not execute:
-        return env, cmd
-
-    apt_update(target, env=env, comment=' '.join(cmd))
-    with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
-        return inchroot.subp(cmd, env=env)
-
-
-def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False):
-    LOG.debug("Upgrading system in %s", target)
-    for mode in ('dist-upgrade', 'autoremove'):
-        ret = run_apt_command(
-            mode, aptopts=aptopts, target=target,
-            env=env, allow_daemons=allow_daemons)
-    return ret
-
-
-def install_packages(pkglist, aptopts=None, target=None, env=None,
-                     allow_daemons=False):
-    if isinstance(pkglist, str):
-        pkglist = [pkglist]
-    return run_apt_command(
-        'install', args=pkglist,
-        aptopts=aptopts, target=target, env=env, allow_daemons=allow_daemons)
-
-
 def is_uefi_bootable():
     return os.path.exists('/sys/firmware/efi') is True
 
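The marker logic at the top of the removed `apt_update` is what lets curtin skip redundant `apt-get update` runs: the update is re-run only when some sources list is newer than the marker file written by the previous run. A minimal, self-contained demonstration of that `find_newer` freshness check (file names here are illustrative, not curtin's real paths):

```python
import os
import tempfile
import time


def find_newer(src, files):
    # files modified strictly more recently than src (as in curtin.util)
    mtime = os.stat(src).st_mtime
    return [f for f in files if
            os.path.exists(f) and os.stat(f).st_mtime > mtime]


def apt_update_needed(marker, listfiles, force=False):
    # mirrors the short-circuit at the top of apt_update()
    if os.path.exists(marker) and not force:
        return len(find_newer(marker, listfiles)) != 0
    return True


with tempfile.TemporaryDirectory() as tmp:
    marker = os.path.join(tmp, "curtin.aptupdate")
    slist = os.path.join(tmp, "sources.list")
    now = time.time()
    for fname in (marker, slist):
        open(fname, "w").close()
        os.utime(fname, (now, now))
    # marker is as new as the sources list: the update can be skipped
    fresh = apt_update_needed(marker, [slist])
    # touching sources.list afterwards makes the marker stale
    os.utime(slist, (now + 60, now + 60))
    stale = apt_update_needed(marker, [slist])
```

Passing `force=True` (as curtin's callers can) bypasses the marker check entirely and always re-runs the update.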
@@ -1106,7 +868,7 @@ def run_hook_if_exists(target, hook):
     """
     Look for "hook" in "target" and run it
     """
-    target_hook = target_path(target, '/curtin/' + hook)
+    target_hook = paths.target_path(target, '/curtin/' + hook)
     if os.path.isfile(target_hook):
         LOG.debug("running %s" % target_hook)
         subp([target_hook])
@@ -1261,41 +1023,6 @@ def is_file_not_found_exc(exc):
             exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO))
 
 
-def _lsb_release(target=None):
-    fmap = {'Codename': 'codename', 'Description': 'description',
-            'Distributor ID': 'id', 'Release': 'release'}
-
-    data = {}
-    try:
-        out, _ = subp(['lsb_release', '--all'], capture=True, target=target)
-        for line in out.splitlines():
-            fname, _, val = line.partition(":")
-            if fname in fmap:
-                data[fmap[fname]] = val.strip()
-        missing = [k for k in fmap.values() if k not in data]
-        if len(missing):
-            LOG.warn("Missing fields in lsb_release --all output: %s",
-                     ','.join(missing))
-
-    except ProcessExecutionError as err:
-        LOG.warn("Unable to get lsb_release --all: %s", err)
-        data = {v: "UNAVAILABLE" for v in fmap.values()}
-
-    return data
-
-
-def lsb_release(target=None):
-    if target_path(target) != "/":
-        # do not use or update cache if target is provided
-        return _lsb_release(target)
-
-    global _LSB_RELEASE
-    if not _LSB_RELEASE:
-        data = _lsb_release()
-        _LSB_RELEASE.update(data)
-    return _LSB_RELEASE
-
-
 class MergedCmdAppend(argparse.Action):
     """This appends to a list in order of appearence both the option string
     and the value"""
@@ -1430,31 +1157,6 @@ def is_resolvable_url(url):
     return is_resolvable(urlparse(url).hostname)
 
 
-def target_path(target, path=None):
-    # return 'path' inside target, accepting target as None
-    if target in (None, ""):
-        target = "/"
-    elif not isinstance(target, string_types):
-        raise ValueError("Unexpected input for target: %s" % target)
-    else:
-        target = os.path.abspath(target)
-        # abspath("//") returns "//" specifically for 2 slashes.
-        if target.startswith("//"):
-            target = target[1:]
-
-    if not path:
-        return target
-
-    if not isinstance(path, string_types):
-        raise ValueError("Unexpected input for path: %s" % path)
-
-    # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /.
-    while len(path) and path[0] == "/":
-        path = path[1:]
-
-    return os.path.join(target, path)
-
-
 class RunInChroot(ChrootableTarget):
     """Backwards compatibility for RunInChroot (LP: #1617375).
     It needs to work like:
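The `target_path` helper removed above moves verbatim into the new `curtin/paths.py`. Its one subtlety is that absolute paths are re-rooted *under* the target rather than escaping it. A stdlib-only sketch of the same behavior (the `string_types` type checks are omitted here):

```python
import os


def target_path(target, path=None):
    # None or "" means the host root itself
    if target in (None, ""):
        target = "/"
    else:
        target = os.path.abspath(target)
        # abspath("//") keeps both slashes; collapse to one
        if target.startswith("//"):
            target = target[1:]

    if not path:
        return target

    # os.path.join("/etc", "/foo") would return "/foo": strip leading
    # slashes so absolute paths stay rooted under the target
    return os.path.join(target, path.lstrip("/"))
```

This is why `ChrootableTarget.path("/etc/fstab")` with a target of `/tmp/mnt` resolves to `/tmp/mnt/etc/fstab` instead of the host's `/etc/fstab`.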
diff --git a/doc/topics/config.rst b/doc/topics/config.rst
index 76e520d..218bc17 100644
--- a/doc/topics/config.rst
+++ b/doc/topics/config.rst
@@ -14,6 +14,7 @@ Curtin's top level config keys are as follows:
 - apt_mirrors (``apt_mirrors``)
 - apt_proxy (``apt_proxy``)
 - block-meta (``block``)
+- curthooks (``curthooks``)
 - debconf_selections (``debconf_selections``)
 - disable_overlayroot (``disable_overlayroot``)
 - grub (``grub``)
@@ -110,6 +111,45 @@ Specify the filesystem label on the boot partition.
   label: my-boot-partition
 
 
+curthooks
+~~~~~~~~~
+Configure how Curtin determines what :ref:`curthooks` to run during the
+installation process.
+
+**mode**: *<['auto', 'builtin', 'target']>*
+
+The default mode is ``auto``.
+
+In ``auto`` mode, curtin will execute curthooks within the image if present.
+For images without curthooks inside, curtin will execute its built-in hooks.
+
+Currently the built-in curthooks support the following OS families:
+
+- Ubuntu
+- Centos
+
+When specifying ``builtin``, curtin will only run the curthooks present in
+Curtin, ignoring any curthooks that may be present in the target operating
+system.
+
+When specifying ``target``, curtin will attempt to run the curthooks in the
+target operating system. If the target does NOT contain any curthooks, then
+the built-in curthooks will be run instead.
+
+Any errors during execution of curthooks (built-in or target) will fail the
+installation.
+
+**Example**::
+
+  # ignore any target curthooks
+  curthooks:
+    mode: builtin
+
+  # Only run target curthooks, fall back to built-in
+  curthooks:
+    mode: target
+
+
 debconf_selections
 ~~~~~~~~~~~~~~~~~~
 Curtin will update the target with debconf set-selection values. Users will
diff --git a/doc/topics/curthooks.rst b/doc/topics/curthooks.rst
index e5f341b..c59aeaf 100644
--- a/doc/topics/curthooks.rst
+++ b/doc/topics/curthooks.rst
@@ -1,7 +1,13 @@
+.. _curthooks:
+
 ========================================
 Curthooks / New OS Support
 ========================================
-Curtin has built-in support for installation of Ubuntu.
+Curtin has built-in support for installation of:
+
+- Ubuntu
+- Centos
+
 Other operating systems are supported through a mechanism called
 'curthooks' or 'curtin-hooks'.
 
@@ -47,11 +53,21 @@ details. Specifically interesting to this stage are:
 - ``CONFIG``: This is a path to the curtin config file. It is provided so
   that additional configuration could be provided through to the OS
   customization.
+- ``WORKING_DIR``: This is a path to a temporary directory where curtin
+  stores state and configuration files.
 
 .. **TODO**: We should add 'PYTHON' or 'CURTIN_PYTHON' to this environment
    so that the hook can easily run a python program with the same python
    that curtin ran with (ie, python2 or python3).
 
+Running built-in hooks
+----------------------
+
+Curthooks may opt to run the built-in curthooks that are already provided in
+curtin itself. To do so, an in-image curthook can import the ``curthooks``
+module and invoke the ``builtin_curthooks`` function passing in the required
+parameters: config, target, and state.
+
 
 Networking configuration
 ------------------------
diff --git a/examples/tests/filesystem_battery.yaml b/examples/tests/filesystem_battery.yaml
index 3b1edbf..4eae5b6 100644
--- a/examples/tests/filesystem_battery.yaml
+++ b/examples/tests/filesystem_battery.yaml
@@ -113,8 +113,8 @@ storage:
   - id: bind1
     fstype: "none"
     options: "bind"
-    path: "/var/lib"
-    spec: "/my/bind-over-var-lib"
+    path: "/var/cache"
+    spec: "/my/bind-over-var-cache"
     type: mount
   - id: bind2
     fstype: "none"
diff --git a/helpers/common b/helpers/common
index ac2d0f3..f9217b7 100644
--- a/helpers/common
+++ b/helpers/common
@@ -541,18 +541,18 @@ get_carryover_params() {
 }
 
 install_grub() {
-    local long_opts="uefi,update-nvram"
+    local long_opts="uefi,update-nvram,os-family:"
     local getopt_out="" mp_efi=""
     getopt_out=$(getopt --name "${0##*/}" \
         --options "" --long "${long_opts}" -- "$@") &&
         eval set -- "${getopt_out}"
 
-    local uefi=0
-    local update_nvram=0
+    local uefi=0 update_nvram=0 os_family=""
 
     while [ $# -ne 0 ]; do
         cur="$1"; next="$2";
         case "$cur" in
+            --os-family) os_family=${next};;
            --uefi) uefi=$((${uefi}+1));;
            --update-nvram) update_nvram=$((${update_nvram}+1));;
            --) shift; break;;
@@ -595,29 +595,88 @@ install_grub() {
         error "$mp_dev ($fstype) is not a block device!"; return 1;
     fi
 
-    # get dpkg arch
-    local dpkg_arch=""
-    dpkg_arch=$(chroot "$mp" dpkg --print-architecture)
-    r=$?
+    local os_variant=""
+    if [ -e "${mp}/etc/os-release" ]; then
+        os_variant=$(chroot "$mp" \
+            /bin/sh -c 'echo $(. /etc/os-release; echo $ID)')
+    else
+        # Centos6 doesn't have os-release, so check for centos/redhat release
+        # looks like: CentOS release 6.9 (Final)
+        for rel in $(ls ${mp}/etc/*-release); do
+            os_variant=$(awk '{print tolower($1)}' $rel)
+            [ -n "$os_variant" ] && break
+        done
+    fi
+    [ $? != 0 ] &&
+        { error "Failed to read ID from $mp/etc/os-release"; return 1; }
+
+    local rhel_ver=""
+    case $os_variant in
+        debian|ubuntu) os_family="debian";;
+        centos|rhel)
+            os_family="redhat"
+            rhel_ver=$(chroot "$mp" rpm -E '%rhel')
+            ;;
+    esac
+
+    # ensure we have both settings, family and variant are needed
+    [ -n "${os_variant}" -a -n "${os_family}" ] ||
+        { error "Failed to determine os variant and family"; return 1; }
+
+    # get target arch
+    local target_arch="" r="1"
+    case $os_family in
+        debian)
+            target_arch=$(chroot "$mp" dpkg --print-architecture)
+            r=$?
+            ;;
+        redhat)
+            target_arch=$(chroot "$mp" rpm -E '%_arch')
+            r=$?
+            ;;
+    esac
     [ $r -eq 0 ] || {
-        error "failed to get dpkg architecture [$r]"
+        error "failed to get target architecture [$r]"
         return 1;
     }
 
     # grub is not the bootloader you are looking for
-    if [ "${dpkg_arch}" = "s390x" ]; then
+    if [ "${target_arch}" = "s390x" ]; then
         return 0;
     fi
 
     # set correct grub package
-    local grub_name="grub-pc"
-    local grub_target="i386-pc"
-    if [ "${dpkg_arch#ppc64}" != "${dpkg_arch}" ]; then
+    local grub_name=""
+    local grub_target=""
+    case "$target_arch" in
+        i386|amd64)
+            # debian
+            grub_name="grub-pc"
+            grub_target="i386-pc"
+            ;;
+        x86_64)
+            case $rhel_ver in
+                6) grub_name="grub";;
+                7) grub_name="grub2-pc";;
+                *)
+                    error "Unknown rhel_ver [$rhel_ver]";
+                    return 1;
+                    ;;
+            esac
+            grub_target="i386-pc"
+            ;;
+    esac
+    if [ "${target_arch#ppc64}" != "${target_arch}" ]; then
         grub_name="grub-ieee1275"
         grub_target="powerpc-ieee1275"
     elif [ "$uefi" -ge 1 ]; then
-        grub_name="grub-efi-$dpkg_arch"
-        case "$dpkg_arch" in
+        grub_name="grub-efi-$target_arch"
+        case "$target_arch" in
+            x86_64)
+                # centos 7+, no centos6 support
+                grub_name="grub2-efi-x64-modules"
+                grub_target="x86_64-efi"
+                ;;
             amd64)
                 grub_target="x86_64-efi";;
             arm64)
@@ -626,9 +685,19 @@ install_grub() {
     fi
 
     # check that the grub package is installed
-    tmp=$(chroot "$mp" dpkg-query --show \
-        --showformat='${Status}\n' $grub_name)
-    r=$?
+    local r=$?
+    case $os_family in
+        debian)
+            tmp=$(chroot "$mp" dpkg-query --show \
+                --showformat='${Status}\n' $grub_name)
+            r=$?
+            ;;
+        redhat)
+            tmp=$(chroot "$mp" rpm -q \
+                --queryformat='install ok installed\n' $grub_name)
+            r=$?
+            ;;
+    esac
     if [ $r -ne 0 -a $r -ne 1 ]; then
         error "failed to check if $grub_name installed";
         return 1;
@@ -636,11 +705,16 @@ install_grub() {
     case "$tmp" in
         install\ ok\ installed) :;;
         *) debug 1 "$grub_name not installed, not doing anything";
-            return 0;;
+            return 1;;
     esac
 
     local grub_d="etc/default/grub.d"
     local mygrub_cfg="$grub_d/50-curtin-settings.cfg"
+    case $os_family in
+        redhat)
+            grub_d="etc/default"
+            mygrub_cfg="etc/default/grub";;
+    esac
     [ -d "$mp/$grub_d" ] || mkdir -p "$mp/$grub_d" ||
         { error "Failed to create $grub_d"; return 1; }
 
@@ -659,14 +733,23 @@ install_grub() {
         error "Failed to get carryover parrameters from cmdline";
         return 1;
     }
+    # always append rd.auto=1 for centos
+    case $os_family in
+        redhat)
+            newargs="$newargs rd.auto=1";;
+    esac
     debug 1 "carryover command line params: $newargs"
 
-    : > "$mp/$mygrub_cfg" ||
-        { error "Failed to write '$mygrub_cfg'"; return 1; }
+    case $os_family in
+        debian)
+            : > "$mp/$mygrub_cfg" ||
+                { error "Failed to write '$mygrub_cfg'"; return 1; }
+            ;;
+    esac
     {
         [ "${REPLACE_GRUB_LINUX_DEFAULT:-1}" = "0" ] ||
             echo "GRUB_CMDLINE_LINUX_DEFAULT=\"$newargs\""
-        echo "# disable grub os prober that might find other OS installs."
+        echo "# Curtin disable grub os prober that might find other OS installs."
         echo "GRUB_DISABLE_OS_PROBER=true"
         echo "GRUB_TERMINAL=console"
     } >> "$mp/$mygrub_cfg"
@@ -692,30 +775,46 @@ install_grub() {
         nvram="--no-nvram"
         if [ "$update_nvram" -ge 1 ]; then
             nvram=""
         fi
         debug 1 "curtin uefi: installing ${grub_name} to: /boot/efi"
         chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc '
            echo "before grub-install efiboot settings"
-           efibootmgr || echo "WARN: efibootmgr exited $?"
-           dpkg-reconfigure "$1"
-           update-grub
+           efibootmgr -v || echo "WARN: efibootmgr exited $?"
+           bootid="$4"
+           grubpost=""
+           case $bootid in
+               debian|ubuntu)
+                   grubcmd="grub-install"
+                   dpkg-reconfigure "$1"
+                   update-grub
+                   ;;
+               centos|redhat|rhel)
+                   grubcmd="grub2-install"
+                   grubpost="grub2-mkconfig -o /boot/grub2/grub.cfg"
+                   ;;
+               *)
+                   echo "Unsupported OS: $bootid" 1>&2
+                   exit 1
+                   ;;
+           esac
            # grub-install in 12.04 does not contain --no-nvram, --target,
            # or --efi-directory
3113 | 704 | target="--target=$2" | 802 | target="--target=$2" |
3114 | 705 | no_nvram="$3" | 803 | no_nvram="$3" |
3115 | 706 | efi_dir="--efi-directory=/boot/efi" | 804 | efi_dir="--efi-directory=/boot/efi" |
3117 | 707 | gi_out=$(grub-install --help 2>&1) | 805 | gi_out=$($grubcmd --help 2>&1) |
3118 | 708 | echo "$gi_out" | grep -q -- "$no_nvram" || no_nvram="" | 806 | echo "$gi_out" | grep -q -- "$no_nvram" || no_nvram="" |
3119 | 709 | echo "$gi_out" | grep -q -- "--target" || target="" | 807 | echo "$gi_out" | grep -q -- "--target" || target="" |
3120 | 710 | echo "$gi_out" | grep -q -- "--efi-directory" || efi_dir="" | 808 | echo "$gi_out" | grep -q -- "--efi-directory" || efi_dir="" |
3124 | 711 | grub-install $target $efi_dir \ | 809 | $grubcmd $target $efi_dir \ |
3125 | 712 | --bootloader-id=ubuntu --recheck $no_nvram' -- \ | 810 | --bootloader-id=$bootid --recheck $no_nvram |
3126 | 713 | "${grub_name}" "${grub_target}" "$nvram" </dev/null || | 811 | [ -z "$grubpost" ] || $grubpost;' \ |
3127 | 812 | -- "${grub_name}" "${grub_target}" "$nvram" "$os_variant" </dev/null || | ||
3128 | 714 | { error "failed to install grub!"; return 1; } | 813 | { error "failed to install grub!"; return 1; } |
3129 | 715 | 814 | ||
3130 | 716 | chroot "$mp" sh -exc ' | 815 | chroot "$mp" sh -exc ' |
3131 | 717 | echo "after grub-install efiboot settings" | 816 | echo "after grub-install efiboot settings" |
3133 | 718 | efibootmgr || echo "WARN: efibootmgr exited $?" | 817 | efibootmgr -v || echo "WARN: efibootmgr exited $?" |
3134 | 719 | ' -- </dev/null || | 818 | ' -- </dev/null || |
3135 | 720 | { error "failed to list efi boot entries!"; return 1; } | 819 | { error "failed to list efi boot entries!"; return 1; } |
3136 | 721 | else | 820 | else |
3137 | @@ -728,10 +827,32 @@ install_grub() { | |||
3138 | 728 | debug 1 "curtin non-uefi: installing ${grub_name} to: ${grubdevs[*]}" | 827 | debug 1 "curtin non-uefi: installing ${grub_name} to: ${grubdevs[*]}" |
3139 | 729 | chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc ' | 828 | chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc ' |
3140 | 730 | pkg=$1; shift; | 829 | pkg=$1; shift; |
3145 | 731 | dpkg-reconfigure "$pkg" | 830 | bootid=$1; shift; |
3146 | 732 | update-grub | 831 | bootver=$1; shift; |
3147 | 733 | for d in "$@"; do grub-install "$d" || exit; done' \ | 832 | grubpost="" |
3148 | 734 | -- "${grub_name}" "${grubdevs[@]}" </dev/null || | 833 | case $bootid in |
3149 | 834 | debian|ubuntu) | ||
3150 | 835 | grubcmd="grub-install" | ||
3151 | 836 | dpkg-reconfigure "$pkg" | ||
3152 | 837 | update-grub | ||
3153 | 838 | ;; | ||
3154 | 839 | centos|redhat|rhel) | ||
3155 | 840 | case $bootver in | ||
3156 | 841 | 6) grubcmd="grub-install";; | ||
3157 | 842 | 7) grubcmd="grub2-install" | ||
3158 | 843 | grubpost="grub2-mkconfig -o /boot/grub2/grub.cfg";; | ||
3159 | 844 | esac | ||
3160 | 845 | ;; | ||
3161 | 846 | *) | ||
3162 | 847 | echo "Unsupported OS: $bootid" 1>&2 | ||
3163 | 848 | exit 1 | ||
3164 | 849 | ;; | ||
3165 | 850 | esac | ||
3166 | 851 | for d in "$@"; do | ||
3167 | 852 | echo $grubcmd "$d"; | ||
3168 | 853 | $grubcmd "$d" || exit; done | ||
3169 | 854 | [ -z "$grubpost" ] || $grubpost;' \ | ||
3170 | 855 | -- "${grub_name}" "${os_variant}" "${rhel_ver}" "${grubdevs[@]}" </dev/null || | ||
3171 | 735 | { error "failed to install grub!"; return 1; } | 856 | { error "failed to install grub!"; return 1; } |
3172 | 736 | fi | 857 | fi |
3173 | 737 | 858 | ||
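The `helpers/common` hunks above select grub tooling by OS family: Debian-family targets keep `grub-install` plus `update-grub`, RHEL-family targets use `grub2-install` followed by `grub2-mkconfig -o /boot/grub2/grub.cfg`, and CentOS/RHEL 6 stays on the legacy `grub-install`. A minimal Python sketch of that decision table (the function name and the `(install, post)` tuple shape are illustrative, not curtin's actual interface):

```python
def grub_commands(bootid, bootver=None):
    """Pick grub tooling per OS, mirroring the shell branches above.

    A sketch only: names and return shape are hypothetical.
    """
    if bootid in ('debian', 'ubuntu'):
        # Debian family: grub-install, config regenerated via update-grub.
        return ('grub-install', None)
    if bootid in ('centos', 'redhat', 'rhel'):
        if bootver == '6':
            # CentOS/RHEL 6 still ships legacy grub-install.
            return ('grub-install', None)
        # RHEL 7 family: grub2 tools, plus an explicit mkconfig step.
        return ('grub2-install', 'grub2-mkconfig -o /boot/grub2/grub.cfg')
    raise ValueError('Unsupported OS: %s' % bootid)
```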
3174 | diff --git a/tests/unittests/test_apt_custom_sources_list.py b/tests/unittests/test_apt_custom_sources_list.py | |||
3175 | index 5567dd5..a427ae9 100644 | |||
3176 | --- a/tests/unittests/test_apt_custom_sources_list.py | |||
3177 | +++ b/tests/unittests/test_apt_custom_sources_list.py | |||
3178 | @@ -11,6 +11,8 @@ from mock import call | |||
3179 | 11 | import textwrap | 11 | import textwrap |
3180 | 12 | import yaml | 12 | import yaml |
3181 | 13 | 13 | ||
3182 | 14 | from curtin import distro | ||
3183 | 15 | from curtin import paths | ||
3184 | 14 | from curtin import util | 16 | from curtin import util |
3185 | 15 | from curtin.commands import apt_config | 17 | from curtin.commands import apt_config |
3186 | 16 | from .helpers import CiTestCase | 18 | from .helpers import CiTestCase |
3187 | @@ -106,7 +108,7 @@ class TestAptSourceConfigSourceList(CiTestCase): | |||
3188 | 106 | # make test independent to executing system | 108 | # make test independent to executing system |
3189 | 107 | with mock.patch.object(util, 'load_file', | 109 | with mock.patch.object(util, 'load_file', |
3190 | 108 | return_value=MOCKED_APT_SRC_LIST): | 110 | return_value=MOCKED_APT_SRC_LIST): |
3192 | 109 | with mock.patch.object(util, 'lsb_release', | 111 | with mock.patch.object(distro, 'lsb_release', |
3193 | 110 | return_value={'codename': | 112 | return_value={'codename': |
3194 | 111 | 'fakerel'}): | 113 | 'fakerel'}): |
3195 | 112 | apt_config.handle_apt(cfg, TARGET) | 114 | apt_config.handle_apt(cfg, TARGET) |
3196 | @@ -115,10 +117,10 @@ class TestAptSourceConfigSourceList(CiTestCase): | |||
3197 | 115 | 117 | ||
3198 | 116 | cloudfile = '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg' | 118 | cloudfile = '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg' |
3199 | 117 | cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1) | 119 | cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1) |
3201 | 118 | calls = [call(util.target_path(TARGET, '/etc/apt/sources.list'), | 120 | calls = [call(paths.target_path(TARGET, '/etc/apt/sources.list'), |
3202 | 119 | expected, | 121 | expected, |
3203 | 120 | mode=0o644), | 122 | mode=0o644), |
3205 | 121 | call(util.target_path(TARGET, cloudfile), | 123 | call(paths.target_path(TARGET, cloudfile), |
3206 | 122 | cloudconf, | 124 | cloudconf, |
3207 | 123 | mode=0o644)] | 125 | mode=0o644)] |
3208 | 124 | mockwrite.assert_has_calls(calls) | 126 | mockwrite.assert_has_calls(calls) |
3209 | @@ -147,19 +149,19 @@ class TestAptSourceConfigSourceList(CiTestCase): | |||
3210 | 147 | arch = util.get_architecture() | 149 | arch = util.get_architecture() |
3211 | 148 | # would fail inside the unittest context | 150 | # would fail inside the unittest context |
3212 | 149 | with mock.patch.object(util, 'get_architecture', return_value=arch): | 151 | with mock.patch.object(util, 'get_architecture', return_value=arch): |
3214 | 150 | with mock.patch.object(util, 'lsb_release', | 152 | with mock.patch.object(distro, 'lsb_release', |
3215 | 151 | return_value={'codename': 'fakerel'}): | 153 | return_value={'codename': 'fakerel'}): |
3216 | 152 | apt_config.handle_apt(cfg, target) | 154 | apt_config.handle_apt(cfg, target) |
3217 | 153 | 155 | ||
3218 | 154 | self.assertEqual( | 156 | self.assertEqual( |
3219 | 155 | EXPECTED_CONVERTED_CONTENT, | 157 | EXPECTED_CONVERTED_CONTENT, |
3222 | 156 | util.load_file(util.target_path(target, "/etc/apt/sources.list"))) | 158 | util.load_file(paths.target_path(target, "/etc/apt/sources.list"))) |
3223 | 157 | cloudfile = util.target_path( | 159 | cloudfile = paths.target_path( |
3224 | 158 | target, '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg') | 160 | target, '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg') |
3225 | 159 | self.assertEqual({'apt_preserve_sources_list': True}, | 161 | self.assertEqual({'apt_preserve_sources_list': True}, |
3226 | 160 | yaml.load(util.load_file(cloudfile))) | 162 | yaml.load(util.load_file(cloudfile))) |
3227 | 161 | 163 | ||
3229 | 162 | @mock.patch("curtin.util.lsb_release") | 164 | @mock.patch("curtin.distro.lsb_release") |
3230 | 163 | @mock.patch("curtin.util.get_architecture", return_value="amd64") | 165 | @mock.patch("curtin.util.get_architecture", return_value="amd64") |
3231 | 164 | def test_trusty_source_lists(self, m_get_arch, m_lsb_release): | 166 | def test_trusty_source_lists(self, m_get_arch, m_lsb_release): |
3232 | 165 | """Support mirror equivalency with and without trailing /. | 167 | """Support mirror equivalency with and without trailing /. |
3233 | @@ -199,7 +201,7 @@ class TestAptSourceConfigSourceList(CiTestCase): | |||
3234 | 199 | 201 | ||
3235 | 200 | release = 'trusty' | 202 | release = 'trusty' |
3236 | 201 | comps = 'main universe multiverse restricted' | 203 | comps = 'main universe multiverse restricted' |
3238 | 202 | easl = util.target_path(target, 'etc/apt/sources.list') | 204 | easl = paths.target_path(target, 'etc/apt/sources.list') |
3239 | 203 | 205 | ||
3240 | 204 | orig_content = tmpl.format( | 206 | orig_content = tmpl.format( |
3241 | 205 | mirror=orig_primary, security=orig_security, | 207 | mirror=orig_primary, security=orig_security, |
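The test updates above replace `util.target_path` with `paths.target_path` when composing file paths under the install target. A rough sketch of what such a helper does, inferred only from the usage in these tests (curtin's real implementation in `curtin/paths.py` handles more edge cases):

```python
import os

def target_path(target, path=None):
    # Sketch of a chroot-relative path join, inferred from the test
    # usage above; not curtin's actual implementation.
    if not target:
        target = '/'
    if not path:
        return target
    # Strip the leading '/' so the path lands inside the target root.
    return os.path.join(target, path.lstrip('/'))
```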
3242 | diff --git a/tests/unittests/test_apt_source.py b/tests/unittests/test_apt_source.py | |||
3243 | index 2ede986..353cdf8 100644 | |||
3244 | --- a/tests/unittests/test_apt_source.py | |||
3245 | +++ b/tests/unittests/test_apt_source.py | |||
3246 | @@ -12,8 +12,9 @@ import socket | |||
3247 | 12 | import mock | 12 | import mock |
3248 | 13 | from mock import call | 13 | from mock import call |
3249 | 14 | 14 | ||
3251 | 15 | from curtin import util | 15 | from curtin import distro |
3252 | 16 | from curtin import gpg | 16 | from curtin import gpg |
3253 | 17 | from curtin import util | ||
3254 | 17 | from curtin.commands import apt_config | 18 | from curtin.commands import apt_config |
3255 | 18 | from .helpers import CiTestCase | 19 | from .helpers import CiTestCase |
3256 | 19 | 20 | ||
3257 | @@ -77,7 +78,7 @@ class TestAptSourceConfig(CiTestCase): | |||
3258 | 77 | 78 | ||
3259 | 78 | @staticmethod | 79 | @staticmethod |
3260 | 79 | def _add_apt_sources(*args, **kwargs): | 80 | def _add_apt_sources(*args, **kwargs): |
3262 | 80 | with mock.patch.object(util, 'apt_update'): | 81 | with mock.patch.object(distro, 'apt_update'): |
3263 | 81 | apt_config.add_apt_sources(*args, **kwargs) | 82 | apt_config.add_apt_sources(*args, **kwargs) |
3264 | 82 | 83 | ||
3265 | 83 | @staticmethod | 84 | @staticmethod |
3266 | @@ -86,7 +87,7 @@ class TestAptSourceConfig(CiTestCase): | |||
3267 | 86 | Get the most basic default mirror and release info to be used in tests | 87 | 88 |
3268 | 87 | """ | 88 | """ |
3269 | 88 | params = {} | 89 | params = {} |
3271 | 89 | params['RELEASE'] = util.lsb_release()['codename'] | 90 | params['RELEASE'] = distro.lsb_release()['codename'] |
3272 | 90 | arch = util.get_architecture() | 91 | arch = util.get_architecture() |
3273 | 91 | params['MIRROR'] = apt_config.get_default_mirrors(arch)["PRIMARY"] | 92 | params['MIRROR'] = apt_config.get_default_mirrors(arch)["PRIMARY"] |
3274 | 92 | return params | 93 | return params |
3275 | @@ -472,7 +473,7 @@ class TestAptSourceConfig(CiTestCase): | |||
3276 | 472 | 'uri': | 473 | 'uri': |
3277 | 473 | 'http://testsec.ubuntu.com/%s/' % component}]} | 474 | 'http://testsec.ubuntu.com/%s/' % component}]} |
3278 | 474 | post = ("%s_dists_%s-updates_InRelease" % | 475 | post = ("%s_dists_%s-updates_InRelease" % |
3280 | 475 | (component, util.lsb_release()['codename'])) | 476 | (component, distro.lsb_release()['codename'])) |
3281 | 476 | fromfn = ("%s/%s_%s" % (pre, archive, post)) | 477 | fromfn = ("%s/%s_%s" % (pre, archive, post)) |
3282 | 477 | tofn = ("%s/test.ubuntu.com_%s" % (pre, post)) | 478 | tofn = ("%s/test.ubuntu.com_%s" % (pre, post)) |
3283 | 478 | 479 | ||
3284 | @@ -937,7 +938,7 @@ class TestDebconfSelections(CiTestCase): | |||
3285 | 937 | m_set_sel.assert_not_called() | 938 | m_set_sel.assert_not_called() |
3286 | 938 | 939 | ||
3287 | 939 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") | 940 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") |
3289 | 940 | @mock.patch("curtin.commands.apt_config.util.get_installed_packages") | 941 | @mock.patch("curtin.commands.apt_config.distro.get_installed_packages") |
3290 | 941 | def test_set_sel_call_has_expected_input(self, m_get_inst, m_set_sel): | 942 | def test_set_sel_call_has_expected_input(self, m_get_inst, m_set_sel): |
3291 | 942 | data = { | 943 | data = { |
3292 | 943 | 'set1': 'pkga pkga/q1 mybool false', | 944 | 'set1': 'pkga pkga/q1 mybool false', |
3293 | @@ -960,7 +961,7 @@ class TestDebconfSelections(CiTestCase): | |||
3294 | 960 | 961 | ||
3295 | 961 | @mock.patch("curtin.commands.apt_config.dpkg_reconfigure") | 962 | @mock.patch("curtin.commands.apt_config.dpkg_reconfigure") |
3296 | 962 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") | 963 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") |
3298 | 963 | @mock.patch("curtin.commands.apt_config.util.get_installed_packages") | 964 | @mock.patch("curtin.commands.apt_config.distro.get_installed_packages") |
3299 | 964 | def test_reconfigure_if_intersection(self, m_get_inst, m_set_sel, | 965 | def test_reconfigure_if_intersection(self, m_get_inst, m_set_sel, |
3300 | 965 | m_dpkg_r): | 966 | m_dpkg_r): |
3301 | 966 | data = { | 967 | data = { |
3302 | @@ -985,7 +986,7 @@ class TestDebconfSelections(CiTestCase): | |||
3303 | 985 | 986 | ||
3304 | 986 | @mock.patch("curtin.commands.apt_config.dpkg_reconfigure") | 987 | @mock.patch("curtin.commands.apt_config.dpkg_reconfigure") |
3305 | 987 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") | 988 | @mock.patch("curtin.commands.apt_config.debconf_set_selections") |
3307 | 988 | @mock.patch("curtin.commands.apt_config.util.get_installed_packages") | 989 | @mock.patch("curtin.commands.apt_config.distro.get_installed_packages") |
3308 | 989 | def test_reconfigure_if_no_intersection(self, m_get_inst, m_set_sel, | 990 | def test_reconfigure_if_no_intersection(self, m_get_inst, m_set_sel, |
3309 | 990 | m_dpkg_r): | 991 | m_dpkg_r): |
3310 | 991 | data = {'set1': 'pkga pkga/q1 mybool false'} | 992 | data = {'set1': 'pkga pkga/q1 mybool false'} |
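The `_add_apt_sources` helper above now stubs `distro.apt_update` instead of `util.apt_update`, so the tests never run a real `apt-get update`. The same stub-the-slow-call pattern looks like this (all names here are hypothetical stand-ins, not curtin's):

```python
from unittest import mock

class FakeDistro:
    """Stand-in for a distro module in this sketch; not the real one."""
    @staticmethod
    def apt_update(target=None):
        raise RuntimeError('would really run apt-get update')

def add_apt_sources(distro_mod):
    # Hypothetical caller mirroring _add_apt_sources above: the
    # privileged apt_update call is the part the tests stub out.
    distro_mod.apt_update()
    return 'sources written'

with mock.patch.object(FakeDistro, 'apt_update') as m_update:
    result = add_apt_sources(FakeDistro)

# The stub absorbed the call; the rest of the logic ran normally.
m_update.assert_called_once_with()
```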
3311 | diff --git a/tests/unittests/test_block_iscsi.py b/tests/unittests/test_block_iscsi.py | |||
3312 | index afaf1f6..f8ef5d8 100644 | |||
3313 | --- a/tests/unittests/test_block_iscsi.py | |||
3314 | +++ b/tests/unittests/test_block_iscsi.py | |||
3315 | @@ -588,6 +588,13 @@ class TestBlockIscsiDiskFromConfig(CiTestCase): | |||
3316 | 588 | # utilize IscsiDisk str method for equality check | 588 | # utilize IscsiDisk str method for equality check |
3317 | 589 | self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk)) | 589 | self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk)) |
3318 | 590 | 590 | ||
3319 | 591 | # test with cfg.get('storage') since caller may already have | ||
3320 | 592 | # grabbed the 'storage' value from the curtin config | ||
3321 | 593 | iscsi_disk = iscsi.get_iscsi_disks_from_config( | ||
3322 | 594 | cfg.get('storage')).pop() | ||
3323 | 595 | # utilize IscsiDisk str method for equality check | ||
3324 | 596 | self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk)) | ||
3325 | 597 | |||
3326 | 591 | def test_parse_iscsi_disk_from_config_no_iscsi(self): | 598 | def test_parse_iscsi_disk_from_config_no_iscsi(self): |
3327 | 592 | """Test parsing storage config with no iscsi disks included""" | 599 | """Test parsing storage config with no iscsi disks included""" |
3328 | 593 | cfg = { | 600 | cfg = { |
3329 | diff --git a/tests/unittests/test_block_lvm.py b/tests/unittests/test_block_lvm.py | |||
3330 | index 22fb064..c92c1ec 100644 | |||
3331 | --- a/tests/unittests/test_block_lvm.py | |||
3332 | +++ b/tests/unittests/test_block_lvm.py | |||
3333 | @@ -73,7 +73,8 @@ class TestBlockLvm(CiTestCase): | |||
3334 | 73 | 73 | ||
3335 | 74 | @mock.patch('curtin.block.lvm.lvmetad_running') | 74 | @mock.patch('curtin.block.lvm.lvmetad_running') |
3336 | 75 | @mock.patch('curtin.block.lvm.util') | 75 | @mock.patch('curtin.block.lvm.util') |
3338 | 76 | def test_lvm_scan(self, mock_util, mock_lvmetad): | 76 | @mock.patch('curtin.block.lvm.distro') |
3339 | 77 | def test_lvm_scan(self, mock_distro, mock_util, mock_lvmetad): | ||
3340 | 77 | """check that lvm_scan formats commands correctly for each release""" | 78 | """check that lvm_scan formats commands correctly for each release""" |
3341 | 78 | cmds = [['pvscan'], ['vgscan', '--mknodes']] | 79 | cmds = [['pvscan'], ['vgscan', '--mknodes']] |
3342 | 79 | for (count, (codename, lvmetad_status, use_cache)) in enumerate( | 80 | for (count, (codename, lvmetad_status, use_cache)) in enumerate( |
3343 | @@ -81,7 +82,7 @@ class TestBlockLvm(CiTestCase): | |||
3344 | 81 | ('trusty', False, False), | 82 | ('trusty', False, False), |
3345 | 82 | ('xenial', False, False), ('xenial', True, True), | 83 | ('xenial', False, False), ('xenial', True, True), |
3346 | 83 | (None, True, True), (None, False, False)]): | 84 | (None, True, True), (None, False, False)]): |
3348 | 84 | mock_util.lsb_release.return_value = {'codename': codename} | 85 | mock_distro.lsb_release.return_value = {'codename': codename} |
3349 | 85 | mock_lvmetad.return_value = lvmetad_status | 86 | mock_lvmetad.return_value = lvmetad_status |
3350 | 86 | lvm.lvm_scan() | 87 | lvm.lvm_scan() |
3351 | 87 | expected = [cmd for cmd in cmds] | 88 | expected = [cmd for cmd in cmds] |
3352 | diff --git a/tests/unittests/test_block_mdadm.py b/tests/unittests/test_block_mdadm.py | |||
3353 | index 341e49d..d017930 100644 | |||
3354 | --- a/tests/unittests/test_block_mdadm.py | |||
3355 | +++ b/tests/unittests/test_block_mdadm.py | |||
3356 | @@ -15,12 +15,13 @@ class TestBlockMdadmAssemble(CiTestCase): | |||
3357 | 15 | def setUp(self): | 15 | def setUp(self): |
3358 | 16 | super(TestBlockMdadmAssemble, self).setUp() | 16 | super(TestBlockMdadmAssemble, self).setUp() |
3359 | 17 | self.add_patch('curtin.block.mdadm.util', 'mock_util') | 17 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3360 | 18 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') | ||
3361 | 18 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') | 19 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3362 | 19 | self.add_patch('curtin.block.mdadm.udev', 'mock_udev') | 20 | self.add_patch('curtin.block.mdadm.udev', 'mock_udev') |
3363 | 20 | 21 | ||
3364 | 21 | # Common mock settings | 22 | # Common mock settings |
3365 | 22 | self.mock_valid.return_value = True | 23 | self.mock_valid.return_value = True |
3367 | 23 | self.mock_util.lsb_release.return_value = {'codename': 'precise'} | 24 | self.mock_lsb_release.return_value = {'codename': 'precise'} |
3368 | 24 | self.mock_util.subp.return_value = ('', '') | 25 | self.mock_util.subp.return_value = ('', '') |
3369 | 25 | 26 | ||
3370 | 26 | def test_mdadm_assemble_scan(self): | 27 | def test_mdadm_assemble_scan(self): |
3371 | @@ -88,6 +89,7 @@ class TestBlockMdadmCreate(CiTestCase): | |||
3372 | 88 | def setUp(self): | 89 | def setUp(self): |
3373 | 89 | super(TestBlockMdadmCreate, self).setUp() | 90 | super(TestBlockMdadmCreate, self).setUp() |
3374 | 90 | self.add_patch('curtin.block.mdadm.util', 'mock_util') | 91 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3375 | 92 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') | ||
3376 | 91 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') | 93 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3377 | 92 | self.add_patch('curtin.block.mdadm.get_holders', 'mock_holders') | 94 | self.add_patch('curtin.block.mdadm.get_holders', 'mock_holders') |
3378 | 93 | self.add_patch('curtin.block.mdadm.udev.udevadm_settle', | 95 | self.add_patch('curtin.block.mdadm.udev.udevadm_settle', |
3379 | @@ -95,7 +97,7 @@ class TestBlockMdadmCreate(CiTestCase): | |||
3380 | 95 | 97 | ||
3381 | 96 | # Common mock settings | 98 | # Common mock settings |
3382 | 97 | self.mock_valid.return_value = True | 99 | self.mock_valid.return_value = True |
3384 | 98 | self.mock_util.lsb_release.return_value = {'codename': 'precise'} | 100 | self.mock_lsb_release.return_value = {'codename': 'precise'} |
3385 | 99 | self.mock_holders.return_value = [] | 101 | self.mock_holders.return_value = [] |
3386 | 100 | 102 | ||
3387 | 101 | def prepare_mock(self, md_devname, raidlevel, devices, spares): | 103 | def prepare_mock(self, md_devname, raidlevel, devices, spares): |
3388 | @@ -236,14 +238,15 @@ class TestBlockMdadmExamine(CiTestCase): | |||
3389 | 236 | def setUp(self): | 238 | def setUp(self): |
3390 | 237 | super(TestBlockMdadmExamine, self).setUp() | 239 | super(TestBlockMdadmExamine, self).setUp() |
3391 | 238 | self.add_patch('curtin.block.mdadm.util', 'mock_util') | 240 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3392 | 241 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') | ||
3393 | 239 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') | 242 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3394 | 240 | 243 | ||
3395 | 241 | # Common mock settings | 244 | # Common mock settings |
3396 | 242 | self.mock_valid.return_value = True | 245 | self.mock_valid.return_value = True |
3398 | 243 | self.mock_util.lsb_release.return_value = {'codename': 'precise'} | 246 | self.mock_lsb_release.return_value = {'codename': 'precise'} |
3399 | 244 | 247 | ||
3400 | 245 | def test_mdadm_examine_export(self): | 248 | def test_mdadm_examine_export(self): |
3402 | 246 | self.mock_util.lsb_release.return_value = {'codename': 'xenial'} | 249 | self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3403 | 247 | self.mock_util.subp.return_value = ( | 250 | self.mock_util.subp.return_value = ( |
3404 | 248 | """ | 251 | """ |
3405 | 249 | MD_LEVEL=raid0 | 252 | MD_LEVEL=raid0 |
3406 | @@ -320,7 +323,7 @@ class TestBlockMdadmExamine(CiTestCase): | |||
3407 | 320 | class TestBlockMdadmStop(CiTestCase): | 323 | class TestBlockMdadmStop(CiTestCase): |
3408 | 321 | def setUp(self): | 324 | def setUp(self): |
3409 | 322 | super(TestBlockMdadmStop, self).setUp() | 325 | super(TestBlockMdadmStop, self).setUp() |
3411 | 323 | self.add_patch('curtin.block.mdadm.util.lsb_release', 'mock_util_lsb') | 326 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') |
3412 | 324 | self.add_patch('curtin.block.mdadm.util.subp', 'mock_util_subp') | 327 | self.add_patch('curtin.block.mdadm.util.subp', 'mock_util_subp') |
3413 | 325 | self.add_patch('curtin.block.mdadm.util.write_file', | 328 | self.add_patch('curtin.block.mdadm.util.write_file', |
3414 | 326 | 'mock_util_write_file') | 329 | 'mock_util_write_file') |
3415 | @@ -333,7 +336,7 @@ class TestBlockMdadmStop(CiTestCase): | |||
3416 | 333 | 336 | ||
3417 | 334 | # Common mock settings | 337 | # Common mock settings |
3418 | 335 | self.mock_valid.return_value = True | 338 | self.mock_valid.return_value = True |
3420 | 336 | self.mock_util_lsb.return_value = {'codename': 'xenial'} | 339 | self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3421 | 337 | self.mock_util_subp.side_effect = iter([ | 340 | self.mock_util_subp.side_effect = iter([ |
3422 | 338 | ("", ""), # mdadm stop device | 341 | ("", ""), # mdadm stop device |
3423 | 339 | ]) | 342 | ]) |
3424 | @@ -488,11 +491,12 @@ class TestBlockMdadmRemove(CiTestCase): | |||
3425 | 488 | def setUp(self): | 491 | def setUp(self): |
3426 | 489 | super(TestBlockMdadmRemove, self).setUp() | 492 | super(TestBlockMdadmRemove, self).setUp() |
3427 | 490 | self.add_patch('curtin.block.mdadm.util', 'mock_util') | 493 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3428 | 494 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') | ||
3429 | 491 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') | 495 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3430 | 492 | 496 | ||
3431 | 493 | # Common mock settings | 497 | # Common mock settings |
3432 | 494 | self.mock_valid.return_value = True | 498 | self.mock_valid.return_value = True |
3434 | 495 | self.mock_util.lsb_release.return_value = {'codename': 'xenial'} | 499 | self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3435 | 496 | self.mock_util.subp.side_effect = [ | 500 | self.mock_util.subp.side_effect = [ |
3436 | 497 | ("", ""), # mdadm remove device | 501 | ("", ""), # mdadm remove device |
3437 | 498 | ] | 502 | ] |
3438 | @@ -514,14 +518,15 @@ class TestBlockMdadmQueryDetail(CiTestCase): | |||
3439 | 514 | def setUp(self): | 518 | def setUp(self): |
3440 | 515 | super(TestBlockMdadmQueryDetail, self).setUp() | 519 | super(TestBlockMdadmQueryDetail, self).setUp() |
3441 | 516 | self.add_patch('curtin.block.mdadm.util', 'mock_util') | 520 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3442 | 521 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') | ||
3443 | 517 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') | 522 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3444 | 518 | 523 | ||
3445 | 519 | # Common mock settings | 524 | # Common mock settings |
3446 | 520 | self.mock_valid.return_value = True | 525 | self.mock_valid.return_value = True |
3448 | 521 | self.mock_util.lsb_release.return_value = {'codename': 'precise'} | 526 | self.mock_lsb_release.return_value = {'codename': 'precise'} |
3449 | 522 | 527 | ||
3450 | 523 | def test_mdadm_query_detail_export(self): | 528 | def test_mdadm_query_detail_export(self): |
3452 | 524 | self.mock_util.lsb_release.return_value = {'codename': 'xenial'} | 529 | self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3453 | 525 | self.mock_util.subp.return_value = ( | 530 | self.mock_util.subp.return_value = ( |
3454 | 526 | """ | 531 | """ |
3455 | 527 | MD_LEVEL=raid1 | 532 | MD_LEVEL=raid1 |
3456 | @@ -592,13 +597,14 @@ class TestBlockMdadmDetailScan(CiTestCase): | |||
3457 | 592 | def setUp(self): | 597 | def setUp(self): |
3458 | 593 | super(TestBlockMdadmDetailScan, self).setUp() | 598 | super(TestBlockMdadmDetailScan, self).setUp() |
3459 | 594 | self.add_patch('curtin.block.mdadm.util', 'mock_util') | 599 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3460 | 600 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') | ||
3461 | 595 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') | 601 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3462 | 596 | 602 | ||
3463 | 597 | # Common mock settings | 603 | # Common mock settings |
3464 | 598 | self.scan_output = ("ARRAY /dev/md0 metadata=1.2 spares=2 name=0 " + | 604 | self.scan_output = ("ARRAY /dev/md0 metadata=1.2 spares=2 name=0 " + |
3465 | 599 | "UUID=b1eae2ff:69b6b02e:1d63bb53:ddfa6e4a") | 605 | "UUID=b1eae2ff:69b6b02e:1d63bb53:ddfa6e4a") |
3466 | 600 | self.mock_valid.return_value = True | 606 | self.mock_valid.return_value = True |
3468 | 601 | self.mock_util.lsb_release.return_value = {'codename': 'xenial'} | 607 | self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3469 | 602 | self.mock_util.subp.side_effect = [ | 608 | self.mock_util.subp.side_effect = [ |
3470 | 603 | (self.scan_output, ""), # mdadm --detail --scan | 609 | (self.scan_output, ""), # mdadm --detail --scan |
3471 | 604 | ] | 610 | ] |
3472 | @@ -627,10 +633,11 @@ class TestBlockMdadmMdHelpers(CiTestCase): | |||
3473 | 627 | def setUp(self): | 633 | def setUp(self): |
3474 | 628 | super(TestBlockMdadmMdHelpers, self).setUp() | 634 | super(TestBlockMdadmMdHelpers, self).setUp() |
3475 | 629 | self.add_patch('curtin.block.mdadm.util', 'mock_util') | 635 | self.add_patch('curtin.block.mdadm.util', 'mock_util') |
3476 | 636 | self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release') | ||
3477 | 630 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') | 637 | self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid') |
3478 | 631 | 638 | ||
3479 | 632 | self.mock_valid.return_value = True | 639 | self.mock_valid.return_value = True |
3481 | 633 | self.mock_util.lsb_release.return_value = {'codename': 'xenial'} | 640 | self.mock_lsb_release.return_value = {'codename': 'xenial'} |
3482 | 634 | 641 | ||
3483 | 635 | def test_valid_mdname(self): | 642 | def test_valid_mdname(self): |
3484 | 636 | mdname = "/dev/md0" | 643 | mdname = "/dev/md0" |
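The mdadm tests switch from `self.mock_util.lsb_release` to patching `curtin.block.mdadm.lsb_release` directly: once a module does `from curtin.distro import lsb_release`, the name must be patched where it is looked up, in the importing module's namespace. A self-contained illustration (the fake module and the flag logic are hypothetical, loosely modeled on mdadm's release checks):

```python
import types
from unittest import mock

# Stand-in for a module that did `from curtin.distro import lsb_release`
# at import time; patching curtin.distro would not affect this binding.
fake_mdadm = types.ModuleType('fake_mdadm')
fake_mdadm.lsb_release = lambda: {'codename': 'xenial'}

def mdadm_flags(mod):
    # Release-gated flag choice: older releases lack --export support.
    if mod.lsb_release()['codename'] == 'precise':
        return []
    return ['--export']

# Patch the copy bound in the importing module:
with mock.patch.object(fake_mdadm, 'lsb_release',
                       return_value={'codename': 'precise'}):
    assert mdadm_flags(fake_mdadm) == []
```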
3485 | diff --git a/tests/unittests/test_block_mkfs.py b/tests/unittests/test_block_mkfs.py | |||
3486 | index c756281..679f85b 100644 | |||
3487 | --- a/tests/unittests/test_block_mkfs.py | |||
3488 | +++ b/tests/unittests/test_block_mkfs.py | |||
3489 | @@ -37,11 +37,12 @@ class TestBlockMkfs(CiTestCase): | |||
3490 | 37 | @mock.patch("curtin.block.mkfs.block") | 37 | @mock.patch("curtin.block.mkfs.block") |
3491 | 38 | @mock.patch("curtin.block.mkfs.os") | 38 | @mock.patch("curtin.block.mkfs.os") |
3492 | 39 | @mock.patch("curtin.block.mkfs.util") | 39 | @mock.patch("curtin.block.mkfs.util") |
3493 | 40 | @mock.patch("curtin.block.mkfs.distro.lsb_release") | ||
3494 | 40 | def _run_mkfs_with_config(self, config, expected_cmd, expected_flags, | 41 | def _run_mkfs_with_config(self, config, expected_cmd, expected_flags, |
3496 | 41 | mock_util, mock_os, mock_block, | 42 | mock_lsb_release, mock_util, mock_os, mock_block, |
3497 | 42 | release="wily", strict=False): | 43 | release="wily", strict=False): |
3498 | 43 | # Pretend we are on wily as there are no known edge cases for it | 44 | # Pretend we are on wily as there are no known edge cases for it |
3500 | 44 | mock_util.lsb_release.return_value = {"codename": release} | 45 | mock_lsb_release.return_value = {"codename": release} |
3501 | 45 | mock_os.path.exists.return_value = True | 46 | mock_os.path.exists.return_value = True |
3502 | 46 | mock_block.get_blockdev_sector_size.return_value = (512, 512) | 47 | mock_block.get_blockdev_sector_size.return_value = (512, 512) |
3503 | 47 | 48 | ||
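In the mkfs test above, the new `@mock.patch("curtin.block.mkfs.distro.lsb_release")` sits closest to the function, so `mock_lsb_release` arrives as the first of the injected mock arguments: stacked `mock.patch` decorators apply bottom-up. A quick standalone demonstration:

```python
from unittest import mock
import os.path

@mock.patch('os.path.exists')   # outermost patch: last injected mock
@mock.patch('os.path.isdir')
@mock.patch('os.path.isfile')   # innermost patch: first injected mock
def which_mock_is_which(m_isfile, m_isdir, m_exists):
    # The decorator closest to the function supplies the first argument.
    assert os.path.isfile is m_isfile
    assert os.path.isdir is m_isdir
    assert os.path.exists is m_exists
    return 'order ok'
```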
3504 | diff --git a/tests/unittests/test_block_zfs.py b/tests/unittests/test_block_zfs.py | |||
3505 | index c18f6a3..9781946 100644 | |||
3506 | --- a/tests/unittests/test_block_zfs.py | |||
3507 | +++ b/tests/unittests/test_block_zfs.py | |||
3508 | @@ -384,7 +384,7 @@ class TestBlockZfsAssertZfsSupported(CiTestCase): | |||
3509 | 384 | super(TestBlockZfsAssertZfsSupported, self).setUp() | 384 | super(TestBlockZfsAssertZfsSupported, self).setUp() |
3510 | 385 | self.add_patch('curtin.block.zfs.util.subp', 'mock_subp') | 385 | self.add_patch('curtin.block.zfs.util.subp', 'mock_subp') |
3511 | 386 | self.add_patch('curtin.block.zfs.util.get_platform_arch', 'mock_arch') | 386 | self.add_patch('curtin.block.zfs.util.get_platform_arch', 'mock_arch') |
3513 | 387 | self.add_patch('curtin.block.zfs.util.lsb_release', 'mock_release') | 387 | self.add_patch('curtin.block.zfs.distro.lsb_release', 'mock_release') |
3514 | 388 | self.add_patch('curtin.block.zfs.util.which', 'mock_which') | 388 | self.add_patch('curtin.block.zfs.util.which', 'mock_which') |
3515 | 389 | self.add_patch('curtin.block.zfs.get_supported_filesystems', | 389 | self.add_patch('curtin.block.zfs.get_supported_filesystems', |
3516 | 390 | 'mock_supfs') | 390 | 'mock_supfs') |
3517 | @@ -426,46 +426,52 @@ class TestAssertZfsSupported(CiTestCase): | |||
3518 | 426 | super(TestAssertZfsSupported, self).setUp() | 426 | super(TestAssertZfsSupported, self).setUp() |
3519 | 427 | 427 | ||
3520 | 428 | @mock.patch('curtin.block.zfs.get_supported_filesystems') | 428 | @mock.patch('curtin.block.zfs.get_supported_filesystems') |
3521 | 429 | @mock.patch('curtin.block.zfs.distro') | ||
3522 | 429 | @mock.patch('curtin.block.zfs.util') | 430 | @mock.patch('curtin.block.zfs.util') |
3524 | 430 | def test_zfs_assert_supported_returns_true(self, mock_util, mock_supfs): | 431 | def test_zfs_assert_supported_returns_true(self, mock_util, mock_distro, |
3525 | 432 | mock_supfs): | ||
3526 | 431 | """zfs_assert_supported returns True on supported platforms""" | 433 | """zfs_assert_supported returns True on supported platforms""" |
3527 | 432 | mock_util.get_platform_arch.return_value = 'amd64' | 434 | mock_util.get_platform_arch.return_value = 'amd64' |
3529 | 433 | mock_util.lsb_release.return_value = {'codename': 'bionic'} | 435 | mock_distro.lsb_release.return_value = {'codename': 'bionic'} |
3530 | 434 | mock_util.subp.return_value = ("", "") | 436 | mock_util.subp.return_value = ("", "") |
3531 | 435 | mock_supfs.return_value = ['zfs'] | 437 | mock_supfs.return_value = ['zfs'] |
3532 | 436 | mock_util.which.side_effect = iter(['/wark/zpool', '/wark/zfs']) | 438 | mock_util.which.side_effect = iter(['/wark/zpool', '/wark/zfs']) |
3533 | 437 | 439 | ||
3534 | 438 | self.assertNotIn(mock_util.get_platform_arch.return_value, | 440 | self.assertNotIn(mock_util.get_platform_arch.return_value, |
3535 | 439 | zfs.ZFS_UNSUPPORTED_ARCHES) | 441 | zfs.ZFS_UNSUPPORTED_ARCHES) |
3537 | 440 | self.assertNotIn(mock_util.lsb_release.return_value['codename'], | 442 | self.assertNotIn(mock_distro.lsb_release.return_value['codename'], |
3538 | 441 | zfs.ZFS_UNSUPPORTED_RELEASES) | 443 | zfs.ZFS_UNSUPPORTED_RELEASES) |
3539 | 442 | self.assertTrue(zfs.zfs_supported()) | 444 | self.assertTrue(zfs.zfs_supported()) |
3540 | 443 | 445 | ||
3541 | 446 | @mock.patch('curtin.block.zfs.distro') | ||
3542 | 444 | @mock.patch('curtin.block.zfs.util') | 447 | @mock.patch('curtin.block.zfs.util') |
3543 | 445 | def test_zfs_assert_supported_raises_exception_on_bad_arch(self, | 448 | def test_zfs_assert_supported_raises_exception_on_bad_arch(self, |
3545 | 446 | mock_util): | 449 | mock_util, |
3546 | 450 | mock_distro): | ||
3547 | 447 | """zfs_assert_supported raises RuntimeError on unsupported arches""" | 451 | """zfs_assert_supported raises RuntimeError on unsupported arches""" |
3549 | 448 | mock_util.lsb_release.return_value = {'codename': 'bionic'} | 452 | mock_distro.lsb_release.return_value = {'codename': 'bionic'} |
3550 | 449 | mock_util.subp.return_value = ("", "") | 453 | mock_util.subp.return_value = ("", "") |
3551 | 450 | for arch in zfs.ZFS_UNSUPPORTED_ARCHES: | 454 | for arch in zfs.ZFS_UNSUPPORTED_ARCHES: |
3552 | 451 | mock_util.get_platform_arch.return_value = arch | 455 | mock_util.get_platform_arch.return_value = arch |
3553 | 452 | with self.assertRaises(RuntimeError): | 456 | with self.assertRaises(RuntimeError): |
3554 | 453 | zfs.zfs_assert_supported() | 457 | zfs.zfs_assert_supported() |
3555 | 454 | 458 | ||
3556 | 459 | @mock.patch('curtin.block.zfs.distro') | ||
3557 | 455 | @mock.patch('curtin.block.zfs.util') | 460 | @mock.patch('curtin.block.zfs.util') |
3559 | 456 | def test_zfs_assert_supported_raises_exc_on_bad_releases(self, mock_util): | 461 | def test_zfs_assert_supported_raises_exc_on_bad_releases(self, mock_util, |
3560 | 462 | mock_distro): | ||
3561 | 457 | """zfs_assert_supported raises RuntimeError on unsupported releases""" | 463 | """zfs_assert_supported raises RuntimeError on unsupported releases""" |
3562 | 458 | mock_util.get_platform_arch.return_value = 'amd64' | 464 | mock_util.get_platform_arch.return_value = 'amd64' |
3563 | 459 | mock_util.subp.return_value = ("", "") | 465 | mock_util.subp.return_value = ("", "") |
3564 | 460 | for release in zfs.ZFS_UNSUPPORTED_RELEASES: | 466 | for release in zfs.ZFS_UNSUPPORTED_RELEASES: |
3566 | 461 | mock_util.lsb_release.return_value = {'codename': release} | 467 | mock_distro.lsb_release.return_value = {'codename': release} |
3567 | 462 | with self.assertRaises(RuntimeError): | 468 | with self.assertRaises(RuntimeError): |
3568 | 463 | zfs.zfs_assert_supported() | 469 | zfs.zfs_assert_supported() |
3569 | 464 | 470 | ||
3570 | 465 | @mock.patch('curtin.block.zfs.util.subprocess.Popen') | 471 | @mock.patch('curtin.block.zfs.util.subprocess.Popen') |
3571 | 466 | @mock.patch('curtin.block.zfs.util.is_kmod_loaded') | 472 | @mock.patch('curtin.block.zfs.util.is_kmod_loaded') |
3572 | 467 | @mock.patch('curtin.block.zfs.get_supported_filesystems') | 473 | @mock.patch('curtin.block.zfs.get_supported_filesystems') |
3574 | 468 | @mock.patch('curtin.block.zfs.util.lsb_release') | 474 | @mock.patch('curtin.block.zfs.distro.lsb_release') |
3575 | 469 | @mock.patch('curtin.block.zfs.util.get_platform_arch') | 475 | @mock.patch('curtin.block.zfs.util.get_platform_arch') |
3576 | 470 | def test_zfs_assert_supported_raises_exc_on_missing_module(self, | 476 | def test_zfs_assert_supported_raises_exc_on_missing_module(self, |
3577 | 471 | m_arch, | 477 | m_arch, |
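The recurring pattern in the `test_block_zfs.py` hunks is that the tests patch `curtin.block.zfs.distro` (and `curtin.block.zfs.distro.lsb_release`), not `curtin.distro`: a name must be patched where it is *looked up*, because the importing module holds its own reference. A self-contained sketch of that rule, with modules simulated by classes (names are illustrative):

```python
from unittest import mock

class distro_module:
    """Stand-in for curtin.distro (the defining module)."""
    @staticmethod
    def lsb_release():
        return {"codename": "bionic"}

class zfs_module:
    """Stand-in for curtin.block.zfs, which imported `distro` at load time."""
    distro = distro_module

    @classmethod
    def release_codename(cls):
        return cls.distro.lsb_release()["codename"]

# Patch the reference the caller actually uses (zfs_module.distro);
# patching distro_module itself would not affect zfs_module, which
# keeps its own binding.
with mock.patch.object(zfs_module, "distro") as m_distro:
    m_distro.lsb_release.return_value = {"codename": "trusty"}
    codename = zfs_module.release_codename()

print(codename)  # -> trusty
```

Outside the `with` block the original binding is restored, so `zfs_module.release_codename()` returns `"bionic"` again.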
3578 | diff --git a/tests/unittests/test_commands_apply_net.py b/tests/unittests/test_commands_apply_net.py | |||
3579 | index a55ab17..04b7f2e 100644 | |||
3580 | --- a/tests/unittests/test_commands_apply_net.py | |||
3581 | +++ b/tests/unittests/test_commands_apply_net.py | |||
3582 | @@ -5,7 +5,7 @@ import copy | |||
3583 | 5 | import os | 5 | import os |
3584 | 6 | 6 | ||
3585 | 7 | from curtin.commands import apply_net | 7 | from curtin.commands import apply_net |
3587 | 8 | from curtin import util | 8 | from curtin import paths |
3588 | 9 | from .helpers import CiTestCase | 9 | from .helpers import CiTestCase |
3589 | 10 | 10 | ||
3590 | 11 | 11 | ||
3591 | @@ -153,8 +153,8 @@ class TestApplyNetPatchIfupdown(CiTestCase): | |||
3592 | 153 | prehookfn=prehookfn, | 153 | prehookfn=prehookfn, |
3593 | 154 | posthookfn=posthookfn) | 154 | posthookfn=posthookfn) |
3594 | 155 | 155 | ||
3597 | 156 | precfg = util.target_path(target, path=prehookfn) | 156 | precfg = paths.target_path(target, path=prehookfn) |
3598 | 157 | postcfg = util.target_path(target, path=posthookfn) | 157 | postcfg = paths.target_path(target, path=posthookfn) |
3599 | 158 | precontents = apply_net.IFUPDOWN_IPV6_MTU_PRE_HOOK | 158 | precontents = apply_net.IFUPDOWN_IPV6_MTU_PRE_HOOK |
3600 | 159 | postcontents = apply_net.IFUPDOWN_IPV6_MTU_POST_HOOK | 159 | postcontents = apply_net.IFUPDOWN_IPV6_MTU_POST_HOOK |
3601 | 160 | 160 | ||
3602 | @@ -231,7 +231,7 @@ class TestApplyNetPatchIpv6Priv(CiTestCase): | |||
3603 | 231 | 231 | ||
3604 | 232 | apply_net._disable_ipv6_privacy_extensions(target) | 232 | apply_net._disable_ipv6_privacy_extensions(target) |
3605 | 233 | 233 | ||
3607 | 234 | cfg = util.target_path(target, path=path) | 234 | cfg = paths.target_path(target, path=path) |
3608 | 235 | mock_write.assert_called_with(cfg, expected_ipv6_priv_contents) | 235 | mock_write.assert_called_with(cfg, expected_ipv6_priv_contents) |
3609 | 236 | 236 | ||
3610 | 237 | @patch('curtin.util.load_file') | 237 | @patch('curtin.util.load_file') |
3611 | @@ -259,7 +259,7 @@ class TestApplyNetPatchIpv6Priv(CiTestCase): | |||
3612 | 259 | apply_net._disable_ipv6_privacy_extensions(target, path=path) | 259 | apply_net._disable_ipv6_privacy_extensions(target, path=path) |
3613 | 260 | 260 | ||
3614 | 261 | # source file not found | 261 | # source file not found |
3616 | 262 | cfg = util.target_path(target, path) | 262 | cfg = paths.target_path(target, path) |
3617 | 263 | mock_ospath.exists.assert_called_with(cfg) | 263 | mock_ospath.exists.assert_called_with(cfg) |
3618 | 264 | self.assertEqual(0, mock_load.call_count) | 264 | self.assertEqual(0, mock_load.call_count) |
3619 | 265 | 265 | ||
3620 | @@ -272,7 +272,7 @@ class TestApplyNetRemoveLegacyEth0(CiTestCase): | |||
3621 | 272 | def test_remove_legacy_eth0(self, mock_ospath, mock_load, mock_del): | 272 | def test_remove_legacy_eth0(self, mock_ospath, mock_load, mock_del): |
3622 | 273 | target = 'mytarget' | 273 | target = 'mytarget' |
3623 | 274 | path = 'eth0.cfg' | 274 | path = 'eth0.cfg' |
3625 | 275 | cfg = util.target_path(target, path) | 275 | cfg = paths.target_path(target, path) |
3626 | 276 | legacy_eth0_contents = ( | 276 | legacy_eth0_contents = ( |
3627 | 277 | 'auto eth0\n' | 277 | 'auto eth0\n' |
3628 | 278 | 'iface eth0 inet dhcp') | 278 | 'iface eth0 inet dhcp') |
3629 | @@ -330,7 +330,7 @@ class TestApplyNetRemoveLegacyEth0(CiTestCase): | |||
3630 | 330 | apply_net._maybe_remove_legacy_eth0(target, path) | 330 | apply_net._maybe_remove_legacy_eth0(target, path) |
3631 | 331 | 331 | ||
3632 | 332 | # source file not found | 332 | # source file not found |
3634 | 333 | cfg = util.target_path(target, path) | 333 | cfg = paths.target_path(target, path) |
3635 | 334 | mock_ospath.exists.assert_called_with(cfg) | 334 | mock_ospath.exists.assert_called_with(cfg) |
3636 | 335 | self.assertEqual(0, mock_load.call_count) | 335 | self.assertEqual(0, mock_load.call_count) |
3637 | 336 | self.assertEqual(0, mock_del.call_count) | 336 | self.assertEqual(0, mock_del.call_count) |
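The `test_commands_apply_net.py` changes are mechanical: `util.target_path` becomes `paths.target_path` after the helper's move into the new `curtin/paths.py`. For orientation, here is a hypothetical re-implementation of what a `target_path`-style helper does — join a chroot-style target directory with a path inside it, treating an absolute path as relative to the target root. This is an illustration of the idea, not curtin's actual code:

```python
import os

def target_path(target, path=None):
    # Hypothetical sketch: resolve `path` inside the `target` root.
    # No target means operate on the live system, so return the path
    # itself; an absolute path is re-rooted under the target.
    if not target:
        return path or "/"
    if not path:
        return target
    return os.path.join(target, path.lstrip("/"))

print(target_path("mytarget", "etc/network/interfaces"))
print(target_path("mytarget", "/etc/sysctl.d/10-ipv6-privacy.conf"))
```

Under this sketch both calls resolve inside `mytarget/`, which matches how the tests build `precfg`/`postcfg` expectations from a target plus hook filename.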
3638 | diff --git a/tests/unittests/test_commands_block_meta.py b/tests/unittests/test_commands_block_meta.py | |||
3639 | index a6a0b13..e70d6ed 100644 | |||
3640 | --- a/tests/unittests/test_commands_block_meta.py | |||
3641 | +++ b/tests/unittests/test_commands_block_meta.py | |||
3642 | @@ -7,7 +7,7 @@ from mock import patch, call | |||
3643 | 7 | import os | 7 | import os |
3644 | 8 | 8 | ||
3645 | 9 | from curtin.commands import block_meta | 9 | from curtin.commands import block_meta |
3647 | 10 | from curtin import util | 10 | from curtin import paths, util |
3648 | 11 | from .helpers import CiTestCase | 11 | from .helpers import CiTestCase |
3649 | 12 | 12 | ||
3650 | 13 | 13 | ||
3651 | @@ -688,8 +688,9 @@ class TestFstabData(CiTestCase): | |||
3652 | 688 | if target is None: | 688 | if target is None: |
3653 | 689 | target = self.tmp_dir() | 689 | target = self.tmp_dir() |
3654 | 690 | 690 | ||
3657 | 691 | expected = [a if a != "_T_MP" else util.target_path(target, fdata.path) | 691 | expected = [ |
3658 | 692 | for a in expected] | 692 | a if a != "_T_MP" else paths.target_path(target, fdata.path) |
3659 | 693 | for a in expected] | ||
3660 | 693 | with patch("curtin.util.subp") as m_subp: | 694 | with patch("curtin.util.subp") as m_subp: |
3661 | 694 | block_meta.mount_fstab_data(fdata, target=target) | 695 | block_meta.mount_fstab_data(fdata, target=target) |
3662 | 695 | 696 | ||
3663 | diff --git a/tests/unittests/test_curthooks.py b/tests/unittests/test_curthooks.py | |||
3664 | index a8275c7..8fd7933 100644 | |||
3665 | --- a/tests/unittests/test_curthooks.py | |||
3666 | +++ b/tests/unittests/test_curthooks.py | |||
3667 | @@ -4,6 +4,7 @@ import os | |||
3668 | 4 | from mock import call, patch, MagicMock | 4 | from mock import call, patch, MagicMock |
3669 | 5 | 5 | ||
3670 | 6 | from curtin.commands import curthooks | 6 | from curtin.commands import curthooks |
3671 | 7 | from curtin import distro | ||
3672 | 7 | from curtin import util | 8 | from curtin import util |
3673 | 8 | from curtin import config | 9 | from curtin import config |
3674 | 9 | from curtin.reporter import events | 10 | from curtin.reporter import events |
3675 | @@ -47,8 +48,8 @@ class TestGetFlashKernelPkgs(CiTestCase): | |||
3676 | 47 | class TestCurthooksInstallKernel(CiTestCase): | 48 | class TestCurthooksInstallKernel(CiTestCase): |
3677 | 48 | def setUp(self): | 49 | def setUp(self): |
3678 | 49 | super(TestCurthooksInstallKernel, self).setUp() | 50 | super(TestCurthooksInstallKernel, self).setUp() |
3681 | 50 | self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') | 51 | self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3682 | 51 | self.add_patch('curtin.util.install_packages', 'mock_instpkg') | 52 | self.add_patch('curtin.distro.install_packages', 'mock_instpkg') |
3683 | 52 | self.add_patch( | 53 | self.add_patch( |
3684 | 53 | 'curtin.commands.curthooks.get_flash_kernel_pkgs', | 54 | 'curtin.commands.curthooks.get_flash_kernel_pkgs', |
3685 | 54 | 'mock_get_flash_kernel_pkgs') | 55 | 'mock_get_flash_kernel_pkgs') |
3686 | @@ -122,12 +123,21 @@ class TestInstallMissingPkgs(CiTestCase): | |||
3687 | 122 | def setUp(self): | 123 | def setUp(self): |
3688 | 123 | super(TestInstallMissingPkgs, self).setUp() | 124 | super(TestInstallMissingPkgs, self).setUp() |
3689 | 124 | self.add_patch('platform.machine', 'mock_machine') | 125 | self.add_patch('platform.machine', 'mock_machine') |
3691 | 125 | self.add_patch('curtin.util.get_installed_packages', | 126 | self.add_patch('curtin.util.get_architecture', 'mock_arch') |
3692 | 127 | self.add_patch('curtin.distro.get_installed_packages', | ||
3693 | 126 | 'mock_get_installed_packages') | 128 | 'mock_get_installed_packages') |
3694 | 127 | self.add_patch('curtin.util.load_command_environment', | 129 | self.add_patch('curtin.util.load_command_environment', |
3695 | 128 | 'mock_load_cmd_evn') | 130 | 'mock_load_cmd_evn') |
3696 | 129 | self.add_patch('curtin.util.which', 'mock_which') | 131 | self.add_patch('curtin.util.which', 'mock_which') |
3698 | 130 | self.add_patch('curtin.util.install_packages', 'mock_install_packages') | 132 | self.add_patch('curtin.util.is_uefi_bootable', 'mock_uefi') |
3699 | 133 | self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') | ||
3700 | 134 | self.add_patch('curtin.distro.install_packages', | ||
3701 | 135 | 'mock_install_packages') | ||
3702 | 136 | self.add_patch('curtin.distro.get_osfamily', 'mock_osfamily') | ||
3703 | 137 | self.distro_family = distro.DISTROS.debian | ||
3704 | 138 | self.mock_osfamily.return_value = self.distro_family | ||
3705 | 139 | self.mock_uefi.return_value = False | ||
3706 | 140 | self.mock_haspkg.return_value = False | ||
3707 | 131 | 141 | ||
3708 | 132 | @patch.object(events, 'ReportEventStack') | 142 | @patch.object(events, 'ReportEventStack') |
3709 | 133 | def test_install_packages_s390x(self, mock_events): | 143 | def test_install_packages_s390x(self, mock_events): |
3710 | @@ -137,8 +147,8 @@ class TestInstallMissingPkgs(CiTestCase): | |||
3711 | 137 | target = "not-a-real-target" | 147 | target = "not-a-real-target" |
3712 | 138 | cfg = {} | 148 | cfg = {} |
3713 | 139 | curthooks.install_missing_packages(cfg, target=target) | 149 | curthooks.install_missing_packages(cfg, target=target) |
3716 | 140 | self.mock_install_packages.assert_called_with(['s390-tools'], | 150 | self.mock_install_packages.assert_called_with( |
3717 | 141 | target=target) | 151 | ['s390-tools'], target=target, osfamily=self.distro_family) |
3718 | 142 | 152 | ||
3719 | 143 | @patch.object(events, 'ReportEventStack') | 153 | @patch.object(events, 'ReportEventStack') |
3720 | 144 | def test_install_packages_s390x_has_zipl(self, mock_events): | 154 | def test_install_packages_s390x_has_zipl(self, mock_events): |
3721 | @@ -159,6 +169,50 @@ class TestInstallMissingPkgs(CiTestCase): | |||
3722 | 159 | curthooks.install_missing_packages(cfg, target=target) | 169 | curthooks.install_missing_packages(cfg, target=target) |
3723 | 160 | self.assertEqual([], self.mock_install_packages.call_args_list) | 170 | self.assertEqual([], self.mock_install_packages.call_args_list) |
3724 | 161 | 171 | ||
3725 | 172 | @patch.object(events, 'ReportEventStack') | ||
3726 | 173 | def test_install_packages_on_uefi_amd64_shim_signed(self, mock_events): | ||
3727 | 174 | arch = 'amd64' | ||
3728 | 175 | self.mock_arch.return_value = arch | ||
3729 | 176 | self.mock_machine.return_value = 'x86_64' | ||
3730 | 177 | expected_pkgs = ['grub-efi-%s' % arch, | ||
3731 | 178 | 'grub-efi-%s-signed' % arch, | ||
3732 | 179 | 'shim-signed'] | ||
3733 | 180 | self.mock_machine.return_value = 'x86_64' | ||
3734 | 181 | self.mock_uefi.return_value = True | ||
3735 | 182 | self.mock_haspkg.return_value = True | ||
3736 | 183 | target = "not-a-real-target" | ||
3737 | 184 | cfg = {} | ||
3738 | 185 | curthooks.install_missing_packages(cfg, target=target) | ||
3739 | 186 | self.mock_install_packages.assert_called_with( | ||
3740 | 187 | expected_pkgs, target=target, osfamily=self.distro_family) | ||
3741 | 188 | |||
3742 | 189 | @patch.object(events, 'ReportEventStack') | ||
3743 | 190 | def test_install_packages_on_uefi_i386_noshim_nosigned(self, mock_events): | ||
3744 | 191 | arch = 'i386' | ||
3745 | 192 | self.mock_arch.return_value = arch | ||
3746 | 193 | self.mock_machine.return_value = 'i386' | ||
3747 | 194 | expected_pkgs = ['grub-efi-%s' % arch] | ||
3748 | 195 | self.mock_machine.return_value = 'i686' | ||
3749 | 196 | self.mock_uefi.return_value = True | ||
3750 | 197 | target = "not-a-real-target" | ||
3751 | 198 | cfg = {} | ||
3752 | 199 | curthooks.install_missing_packages(cfg, target=target) | ||
3753 | 200 | self.mock_install_packages.assert_called_with( | ||
3754 | 201 | expected_pkgs, target=target, osfamily=self.distro_family) | ||
3755 | 202 | |||
3756 | 203 | @patch.object(events, 'ReportEventStack') | ||
3757 | 204 | def test_install_packages_on_uefi_arm64_nosign_noshim(self, mock_events): | ||
3758 | 205 | arch = 'arm64' | ||
3759 | 206 | self.mock_arch.return_value = arch | ||
3760 | 207 | self.mock_machine.return_value = 'aarch64' | ||
3761 | 208 | expected_pkgs = ['grub-efi-%s' % arch] | ||
3762 | 209 | self.mock_uefi.return_value = True | ||
3763 | 210 | target = "not-a-real-target" | ||
3764 | 211 | cfg = {} | ||
3765 | 212 | curthooks.install_missing_packages(cfg, target=target) | ||
3766 | 213 | self.mock_install_packages.assert_called_with( | ||
3767 | 214 | expected_pkgs, target=target, osfamily=self.distro_family) | ||
3768 | 215 | |||
3769 | 162 | 216 | ||
3770 | 163 | class TestSetupZipl(CiTestCase): | 217 | class TestSetupZipl(CiTestCase): |
3771 | 164 | 218 | ||
3772 | @@ -192,7 +246,8 @@ class TestSetupGrub(CiTestCase): | |||
3773 | 192 | def setUp(self): | 246 | def setUp(self): |
3774 | 193 | super(TestSetupGrub, self).setUp() | 247 | super(TestSetupGrub, self).setUp() |
3775 | 194 | self.target = self.tmp_dir() | 248 | self.target = self.tmp_dir() |
3777 | 195 | self.add_patch('curtin.util.lsb_release', 'mock_lsb_release') | 249 | self.distro_family = distro.DISTROS.debian |
3778 | 250 | self.add_patch('curtin.distro.lsb_release', 'mock_lsb_release') | ||
3779 | 196 | self.mock_lsb_release.return_value = { | 251 | self.mock_lsb_release.return_value = { |
3780 | 197 | 'codename': 'xenial', | 252 | 'codename': 'xenial', |
3781 | 198 | } | 253 | } |
3782 | @@ -219,11 +274,12 @@ class TestSetupGrub(CiTestCase): | |||
3783 | 219 | 'grub_install_devices': ['/dev/vdb'] | 274 | 'grub_install_devices': ['/dev/vdb'] |
3784 | 220 | } | 275 | } |
3785 | 221 | self.subp_output.append(('', '')) | 276 | self.subp_output.append(('', '')) |
3787 | 222 | curthooks.setup_grub(cfg, self.target) | 277 | curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3788 | 223 | self.assertEquals( | 278 | self.assertEquals( |
3789 | 224 | ([ | 279 | ([ |
3790 | 225 | 'sh', '-c', 'exec "$0" "$@" 2>&1', | 280 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3792 | 226 | 'install-grub', self.target, '/dev/vdb'],), | 281 | 'install-grub', '--os-family=%s' % self.distro_family, |
3793 | 282 | self.target, '/dev/vdb'],), | ||
3794 | 227 | self.mock_subp.call_args_list[0][0]) | 283 | self.mock_subp.call_args_list[0][0]) |
3795 | 228 | 284 | ||
3796 | 229 | def test_uses_install_devices_in_grubcfg(self): | 285 | def test_uses_install_devices_in_grubcfg(self): |
3797 | @@ -233,11 +289,12 @@ class TestSetupGrub(CiTestCase): | |||
3798 | 233 | }, | 289 | }, |
3799 | 234 | } | 290 | } |
3800 | 235 | self.subp_output.append(('', '')) | 291 | self.subp_output.append(('', '')) |
3802 | 236 | curthooks.setup_grub(cfg, self.target) | 292 | curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3803 | 237 | self.assertEquals( | 293 | self.assertEquals( |
3804 | 238 | ([ | 294 | ([ |
3805 | 239 | 'sh', '-c', 'exec "$0" "$@" 2>&1', | 295 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3807 | 240 | 'install-grub', self.target, '/dev/vdb'],), | 296 | 'install-grub', '--os-family=%s' % self.distro_family, |
3808 | 297 | self.target, '/dev/vdb'],), | ||
3809 | 241 | self.mock_subp.call_args_list[0][0]) | 298 | self.mock_subp.call_args_list[0][0]) |
3810 | 242 | 299 | ||
3811 | 243 | def test_uses_grub_install_on_storage_config(self): | 300 | def test_uses_grub_install_on_storage_config(self): |
3812 | @@ -255,11 +312,12 @@ class TestSetupGrub(CiTestCase): | |||
3813 | 255 | }, | 312 | }, |
3814 | 256 | } | 313 | } |
3815 | 257 | self.subp_output.append(('', '')) | 314 | self.subp_output.append(('', '')) |
3817 | 258 | curthooks.setup_grub(cfg, self.target) | 315 | curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3818 | 259 | self.assertEquals( | 316 | self.assertEquals( |
3819 | 260 | ([ | 317 | ([ |
3820 | 261 | 'sh', '-c', 'exec "$0" "$@" 2>&1', | 318 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3822 | 262 | 'install-grub', self.target, '/dev/vdb'],), | 319 | 'install-grub', '--os-family=%s' % self.distro_family, |
3823 | 320 | self.target, '/dev/vdb'],), | ||
3824 | 263 | self.mock_subp.call_args_list[0][0]) | 321 | self.mock_subp.call_args_list[0][0]) |
3825 | 264 | 322 | ||
3826 | 265 | def test_grub_install_installs_to_none_if_install_devices_None(self): | 323 | def test_grub_install_installs_to_none_if_install_devices_None(self): |
3827 | @@ -269,62 +327,17 @@ class TestSetupGrub(CiTestCase): | |||
3828 | 269 | }, | 327 | }, |
3829 | 270 | } | 328 | } |
3830 | 271 | self.subp_output.append(('', '')) | 329 | self.subp_output.append(('', '')) |
3855 | 272 | curthooks.setup_grub(cfg, self.target) | 330 | curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3832 | 273 | self.assertEquals( | ||
3833 | 274 | ([ | ||
3834 | 275 | 'sh', '-c', 'exec "$0" "$@" 2>&1', | ||
3835 | 276 | 'install-grub', self.target, 'none'],), | ||
3836 | 277 | self.mock_subp.call_args_list[0][0]) | ||
3837 | 278 | |||
3838 | 279 | def test_grub_install_uefi_installs_signed_packages_for_amd64(self): | ||
3839 | 280 | self.add_patch('curtin.util.install_packages', 'mock_install') | ||
3840 | 281 | self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') | ||
3841 | 282 | self.mock_is_uefi_bootable.return_value = True | ||
3842 | 283 | cfg = { | ||
3843 | 284 | 'grub': { | ||
3844 | 285 | 'install_devices': ['/dev/vdb'], | ||
3845 | 286 | 'update_nvram': False, | ||
3846 | 287 | }, | ||
3847 | 288 | } | ||
3848 | 289 | self.subp_output.append(('', '')) | ||
3849 | 290 | self.mock_arch.return_value = 'amd64' | ||
3850 | 291 | self.mock_haspkg.return_value = True | ||
3851 | 292 | curthooks.setup_grub(cfg, self.target) | ||
3852 | 293 | self.assertEquals( | ||
3853 | 294 | (['grub-efi-amd64', 'grub-efi-amd64-signed', 'shim-signed'],), | ||
3854 | 295 | self.mock_install.call_args_list[0][0]) | ||
3856 | 296 | self.assertEquals( | 331 | self.assertEquals( |
3857 | 297 | ([ | 332 | ([ |
3858 | 298 | 'sh', '-c', 'exec "$0" "$@" 2>&1', | 333 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3883 | 299 | 'install-grub', '--uefi', self.target, '/dev/vdb'],), | 334 | 'install-grub', '--os-family=%s' % self.distro_family, |
3884 | 300 | self.mock_subp.call_args_list[0][0]) | 335 | self.target, 'none'],), |
3861 | 301 | |||
3862 | 302 | def test_grub_install_uefi_installs_packages_for_arm64(self): | ||
3863 | 303 | self.add_patch('curtin.util.install_packages', 'mock_install') | ||
3864 | 304 | self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') | ||
3865 | 305 | self.mock_is_uefi_bootable.return_value = True | ||
3866 | 306 | cfg = { | ||
3867 | 307 | 'grub': { | ||
3868 | 308 | 'install_devices': ['/dev/vdb'], | ||
3869 | 309 | 'update_nvram': False, | ||
3870 | 310 | }, | ||
3871 | 311 | } | ||
3872 | 312 | self.subp_output.append(('', '')) | ||
3873 | 313 | self.mock_arch.return_value = 'arm64' | ||
3874 | 314 | self.mock_haspkg.return_value = False | ||
3875 | 315 | curthooks.setup_grub(cfg, self.target) | ||
3876 | 316 | self.assertEquals( | ||
3877 | 317 | (['grub-efi-arm64'],), | ||
3878 | 318 | self.mock_install.call_args_list[0][0]) | ||
3879 | 319 | self.assertEquals( | ||
3880 | 320 | ([ | ||
3881 | 321 | 'sh', '-c', 'exec "$0" "$@" 2>&1', | ||
3882 | 322 | 'install-grub', '--uefi', self.target, '/dev/vdb'],), | ||
3885 | 323 | self.mock_subp.call_args_list[0][0]) | 336 | self.mock_subp.call_args_list[0][0]) |
3886 | 324 | 337 | ||
3887 | 325 | def test_grub_install_uefi_updates_nvram_skips_remove_and_reorder(self): | 338 | def test_grub_install_uefi_updates_nvram_skips_remove_and_reorder(self): |
3890 | 326 | self.add_patch('curtin.util.install_packages', 'mock_install') | 339 | self.add_patch('curtin.distro.install_packages', 'mock_install') |
3891 | 327 | self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') | 340 | self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3892 | 328 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') | 341 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') |
3893 | 329 | self.mock_is_uefi_bootable.return_value = True | 342 | self.mock_is_uefi_bootable.return_value = True |
3894 | 330 | cfg = { | 343 | cfg = { |
3895 | @@ -347,17 +360,18 @@ class TestSetupGrub(CiTestCase): | |||
3896 | 347 | } | 360 | } |
3897 | 348 | } | 361 | } |
3898 | 349 | } | 362 | } |
3900 | 350 | curthooks.setup_grub(cfg, self.target) | 363 | curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3901 | 351 | self.assertEquals( | 364 | self.assertEquals( |
3902 | 352 | ([ | 365 | ([ |
3903 | 353 | 'sh', '-c', 'exec "$0" "$@" 2>&1', | 366 | 'sh', '-c', 'exec "$0" "$@" 2>&1', |
3904 | 354 | 'install-grub', '--uefi', '--update-nvram', | 367 | 'install-grub', '--uefi', '--update-nvram', |
3905 | 368 | '--os-family=%s' % self.distro_family, | ||
3906 | 355 | self.target, '/dev/vdb'],), | 369 | self.target, '/dev/vdb'],), |
3907 | 356 | self.mock_subp.call_args_list[0][0]) | 370 | self.mock_subp.call_args_list[0][0]) |
3908 | 357 | 371 | ||
3909 | 358 | def test_grub_install_uefi_updates_nvram_removes_old_loaders(self): | 372 | def test_grub_install_uefi_updates_nvram_removes_old_loaders(self): |
3912 | 359 | self.add_patch('curtin.util.install_packages', 'mock_install') | 373 | self.add_patch('curtin.distro.install_packages', 'mock_install') |
3913 | 360 | self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') | 374 | self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3914 | 361 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') | 375 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') |
3915 | 362 | self.mock_is_uefi_bootable.return_value = True | 376 | self.mock_is_uefi_bootable.return_value = True |
3916 | 363 | cfg = { | 377 | cfg = { |
3917 | @@ -392,7 +406,7 @@ class TestSetupGrub(CiTestCase): | |||
3918 | 392 | self.in_chroot_subp_output.append(('', '')) | 406 | self.in_chroot_subp_output.append(('', '')) |
3919 | 393 | self.in_chroot_subp_output.append(('', '')) | 407 | self.in_chroot_subp_output.append(('', '')) |
3920 | 394 | self.mock_haspkg.return_value = False | 408 | self.mock_haspkg.return_value = False |
3922 | 395 | curthooks.setup_grub(cfg, self.target) | 409 | curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3923 | 396 | self.assertEquals( | 410 | self.assertEquals( |
3924 | 397 | ['efibootmgr', '-B', '-b'], | 411 | ['efibootmgr', '-B', '-b'], |
3925 | 398 | self.mock_in_chroot_subp.call_args_list[0][0][0][:3]) | 412 | self.mock_in_chroot_subp.call_args_list[0][0][0][:3]) |
3926 | @@ -406,8 +420,8 @@ class TestSetupGrub(CiTestCase): | |||
3927 | 406 | self.mock_in_chroot_subp.call_args_list[1][0][0][3]])) | 420 | self.mock_in_chroot_subp.call_args_list[1][0][0][3]])) |
3928 | 407 | 421 | ||
3929 | 408 | def test_grub_install_uefi_updates_nvram_reorders_loaders(self): | 422 | def test_grub_install_uefi_updates_nvram_reorders_loaders(self): |
3932 | 409 | self.add_patch('curtin.util.install_packages', 'mock_install') | 423 | self.add_patch('curtin.distro.install_packages', 'mock_install') |
3933 | 410 | self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg') | 424 | self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg') |
3934 | 411 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') | 425 | self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr') |
3935 | 412 | self.mock_is_uefi_bootable.return_value = True | 426 | self.mock_is_uefi_bootable.return_value = True |
3936 | 413 | cfg = { | 427 | cfg = { |
3937 | @@ -436,7 +450,7 @@ class TestSetupGrub(CiTestCase): | |||
3938 | 436 | } | 450 | } |
3939 | 437 | self.in_chroot_subp_output.append(('', '')) | 451 | self.in_chroot_subp_output.append(('', '')) |
3940 | 438 | self.mock_haspkg.return_value = False | 452 | self.mock_haspkg.return_value = False |
3942 | 439 | curthooks.setup_grub(cfg, self.target) | 453 | curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family) |
3943 | 440 | self.assertEquals( | 454 | self.assertEquals( |
3944 | 441 | (['efibootmgr', '-o', '0001,0000'],), | 455 | (['efibootmgr', '-o', '0001,0000'],), |
3945 | 442 | self.mock_in_chroot_subp.call_args_list[0][0]) | 456 | self.mock_in_chroot_subp.call_args_list[0][0]) |
3946 | @@ -453,11 +467,11 @@ class TestUbuntuCoreHooks(CiTestCase): | |||
3947 | 453 | 'var/lib/snapd') | 467 | 'var/lib/snapd') |
3948 | 454 | util.ensure_dir(ubuntu_core_path) | 468 | util.ensure_dir(ubuntu_core_path) |
3949 | 455 | self.assertTrue(os.path.isdir(ubuntu_core_path)) | 469 | self.assertTrue(os.path.isdir(ubuntu_core_path)) |
3951 | 456 | is_core = curthooks.target_is_ubuntu_core(self.target) | 470 | is_core = distro.is_ubuntu_core(self.target) |
3952 | 457 | self.assertTrue(is_core) | 471 | self.assertTrue(is_core) |
3953 | 458 | 472 | ||
3954 | 459 | def test_target_is_ubuntu_core_no_target(self): | 473 | def test_target_is_ubuntu_core_no_target(self): |
3956 | 460 | is_core = curthooks.target_is_ubuntu_core(self.target) | 474 | is_core = distro.is_ubuntu_core(self.target) |
3957 | 461 | self.assertFalse(is_core) | 475 | self.assertFalse(is_core) |
3958 | 462 | 476 | ||
3959 | 463 | def test_target_is_ubuntu_core_noncore_target(self): | 477 | def test_target_is_ubuntu_core_noncore_target(self): |
3960 | @@ -465,7 +479,7 @@ class TestUbuntuCoreHooks(CiTestCase): | |||
3961 | 465 | non_core_path = os.path.join(self.target, 'curtin') | 479 | non_core_path = os.path.join(self.target, 'curtin') |
3962 | 466 | util.ensure_dir(non_core_path) | 480 | util.ensure_dir(non_core_path) |
3963 | 467 | self.assertTrue(os.path.isdir(non_core_path)) | 481 | self.assertTrue(os.path.isdir(non_core_path)) |
3965 | 468 | is_core = curthooks.target_is_ubuntu_core(self.target) | 482 | is_core = distro.is_ubuntu_core(self.target) |
3966 | 469 | self.assertFalse(is_core) | 483 | self.assertFalse(is_core) |
3967 | 470 | 484 | ||
3968 | 471 | @patch('curtin.util.write_file') | 485 | @patch('curtin.util.write_file') |
3969 | @@ -736,15 +750,15 @@ class TestDetectRequiredPackages(CiTestCase): | |||
3970 | 736 | ({'network': { | 750 | ({'network': { |
3971 | 737 | 'version': 2, | 751 | 'version': 2, |
3972 | 738 | 'items': ('bridge',)}}, | 752 | 'items': ('bridge',)}}, |
3974 | 739 | ('bridge-utils',)), | 753 | ()), |
3975 | 740 | ({'network': { | 754 | ({'network': { |
3976 | 741 | 'version': 2, | 755 | 'version': 2, |
3977 | 742 | 'items': ('vlan',)}}, | 756 | 'items': ('vlan',)}}, |
3979 | 743 | ('vlan',)), | 757 | ()), |
3980 | 744 | ({'network': { | 758 | ({'network': { |
3981 | 745 | 'version': 2, | 759 | 'version': 2, |
3982 | 746 | 'items': ('vlan', 'bridge')}}, | 760 | 'items': ('vlan', 'bridge')}}, |
3984 | 747 | ('vlan', 'bridge-utils')), | 761 | ()), |
3985 | 748 | )) | 762 | )) |
3986 | 749 | 763 | ||
3987 | 750 | def test_mixed_storage_v1_network_v2_detect(self): | 764 | def test_mixed_storage_v1_network_v2_detect(self): |
3988 | @@ -755,7 +769,7 @@ class TestDetectRequiredPackages(CiTestCase): | |||
3989 | 755 | 'storage': { | 769 | 'storage': { |
3990 | 756 | 'version': 1, | 770 | 'version': 1, |
3991 | 757 | 'items': ('raid', 'bcache', 'ext4')}}, | 771 | 'items': ('raid', 'bcache', 'ext4')}}, |
3993 | 758 | ('vlan', 'bridge-utils', 'mdadm', 'bcache-tools', 'e2fsprogs')), | 772 | ('mdadm', 'bcache-tools', 'e2fsprogs')), |
3994 | 759 | )) | 773 | )) |
3995 | 760 | 774 | ||
3996 | 761 | def test_invalid_version_in_config(self): | 775 | def test_invalid_version_in_config(self): |
3997 | @@ -782,7 +796,7 @@ class TestCurthooksWriteFiles(CiTestCase): | |||
3998 | 782 | dict((cfg[i]['path'], cfg[i]['content']) for i in cfg.keys()), | 796 | dict((cfg[i]['path'], cfg[i]['content']) for i in cfg.keys()), |
3999 | 783 | dir2dict(tmpd, prefix=tmpd)) | 797 | dir2dict(tmpd, prefix=tmpd)) |
4000 | 784 | 798 | ||
4002 | 785 | @patch('curtin.commands.curthooks.futil.target_path') | 799 | @patch('curtin.commands.curthooks.paths.target_path') |
4003 | 786 | @patch('curtin.commands.curthooks.futil.write_finfo') | 800 | @patch('curtin.commands.curthooks.futil.write_finfo') |
4004 | 787 | def test_handle_write_files_finfo(self, mock_write_finfo, mock_tp): | 801 | def test_handle_write_files_finfo(self, mock_write_finfo, mock_tp): |
4005 | 788 | """ Validate that futils.write_files handles target_path correctly """ | 802 | """ Validate that futils.write_files handles target_path correctly """ |
4006 | @@ -816,6 +830,8 @@ class TestCurthooksPollinate(CiTestCase): | |||
4007 | 816 | self.add_patch('curtin.util.write_file', 'mock_write') | 830 | self.add_patch('curtin.util.write_file', 'mock_write') |
4008 | 817 | self.add_patch('curtin.commands.curthooks.get_maas_version', | 831 | self.add_patch('curtin.commands.curthooks.get_maas_version', |
4009 | 818 | 'mock_maas_version') | 832 | 'mock_maas_version') |
4010 | 833 | self.add_patch('curtin.util.which', 'mock_which') | ||
4011 | 834 | self.mock_which.return_value = '/usr/bin/pollinate' | ||
4012 | 819 | self.target = self.tmp_dir() | 835 | self.target = self.tmp_dir() |
4013 | 820 | 836 | ||
4014 | 821 | def test_handle_pollinate_user_agent_disable(self): | 837 | def test_handle_pollinate_user_agent_disable(self): |
4015 | @@ -826,6 +842,15 @@ class TestCurthooksPollinate(CiTestCase): | |||
4016 | 826 | self.assertEqual(0, self.mock_maas_version.call_count) | 842 | self.assertEqual(0, self.mock_maas_version.call_count) |
4017 | 827 | self.assertEqual(0, self.mock_write.call_count) | 843 | self.assertEqual(0, self.mock_write.call_count) |
4018 | 828 | 844 | ||
4019 | 845 | def test_handle_pollinate_returns_if_no_pollinate_binary(self): | ||
4020 | 846 | """ handle_pollinate_user_agent does nothing if no pollinate binary""" | ||
4021 | 847 | self.mock_which.return_value = None | ||
4022 | 848 | cfg = {'reporting': {'maas': {'endpoint': 'http://127.0.0.1/foo'}}} | ||
4023 | 849 | curthooks.handle_pollinate_user_agent(cfg, self.target) | ||
4024 | 850 | self.assertEqual(0, self.mock_curtin_version.call_count) | ||
4025 | 851 | self.assertEqual(0, self.mock_maas_version.call_count) | ||
4026 | 852 | self.assertEqual(0, self.mock_write.call_count) | ||
4027 | 853 | |||
4028 | 829 | def test_handle_pollinate_user_agent_default(self): | 854 | def test_handle_pollinate_user_agent_default(self): |
4029 | 830 | """ handle_pollinate_user_agent checks curtin/maas version by default | 855 | """ handle_pollinate_user_agent checks curtin/maas version by default |
4030 | 831 | """ | 856 | """ |
4031 | diff --git a/tests/unittests/test_distro.py b/tests/unittests/test_distro.py | |||
4032 | 832 | new file mode 100644 | 857 | new file mode 100644 |
4033 | index 0000000..d4e5a1e | |||
4034 | --- /dev/null | |||
4035 | +++ b/tests/unittests/test_distro.py | |||
4036 | @@ -0,0 +1,302 @@ | |||
4037 | 1 | # This file is part of curtin. See LICENSE file for copyright and license info. | ||
4038 | 2 | |||
4039 | 3 | from unittest import skipIf | ||
4040 | 4 | import mock | ||
4041 | 5 | import sys | ||
4042 | 6 | |||
4043 | 7 | from curtin import distro | ||
4044 | 8 | from curtin import paths | ||
4045 | 9 | from curtin import util | ||
4046 | 10 | from .helpers import CiTestCase | ||
4047 | 11 | |||
4048 | 12 | |||
4049 | 13 | class TestLsbRelease(CiTestCase): | ||
4050 | 14 | |||
4051 | 15 | def setUp(self): | ||
4052 | 16 | super(TestLsbRelease, self).setUp() | ||
4053 | 17 | self._reset_cache() | ||
4054 | 18 | |||
4055 | 19 | def _reset_cache(self): | ||
4056 | 20 | keys = [k for k in distro._LSB_RELEASE.keys()] | ||
4057 | 21 | for d in keys: | ||
4058 | 22 | del distro._LSB_RELEASE[d] | ||
4059 | 23 | |||
4060 | 24 | @mock.patch("curtin.distro.subp") | ||
4061 | 25 | def test_lsb_release_functional(self, mock_subp): | ||
4062 | 26 | output = '\n'.join([ | ||
4063 | 27 | "Distributor ID: Ubuntu", | ||
4064 | 28 | "Description: Ubuntu 14.04.2 LTS", | ||
4065 | 29 | "Release: 14.04", | ||
4066 | 30 | "Codename: trusty", | ||
4067 | 31 | ]) | ||
4068 | 32 | rdata = {'id': 'Ubuntu', 'description': 'Ubuntu 14.04.2 LTS', | ||
4069 | 33 | 'codename': 'trusty', 'release': '14.04'} | ||
4070 | 34 | |||
4071 | 35 | def fake_subp(cmd, capture=False, target=None): | ||
4072 | 36 | return output, 'No LSB modules are available.' | ||
4073 | 37 | |||
4074 | 38 | mock_subp.side_effect = fake_subp | ||
4075 | 39 | found = distro.lsb_release() | ||
4076 | 40 | mock_subp.assert_called_with( | ||
4077 | 41 | ['lsb_release', '--all'], capture=True, target=None) | ||
4078 | 42 | self.assertEqual(found, rdata) | ||
4079 | 43 | |||
4080 | 44 | @mock.patch("curtin.distro.subp") | ||
4081 | 45 | def test_lsb_release_unavailable(self, mock_subp): | ||
4082 | 46 | def doraise(*args, **kwargs): | ||
4083 | 47 | raise util.ProcessExecutionError("foo") | ||
4084 | 48 | mock_subp.side_effect = doraise | ||
4085 | 49 | |||
4086 | 50 | expected = {k: "UNAVAILABLE" for k in | ||
4087 | 51 | ('id', 'description', 'codename', 'release')} | ||
4088 | 52 | self.assertEqual(distro.lsb_release(), expected) | ||
4089 | 53 | |||
4090 | 54 | |||
4091 | 55 | class TestParseDpkgVersion(CiTestCase): | ||
4092 | 56 | """test parse_dpkg_version.""" | ||
4093 | 57 | |||
4094 | 58 | def test_none_raises_type_error(self): | ||
4095 | 59 | self.assertRaises(TypeError, distro.parse_dpkg_version, None) | ||
4096 | 60 | |||
4097 | 61 | @skipIf(sys.version_info.major < 3, "python 2 bytes are strings.") | ||
4098 | 62 | def test_bytes_raises_type_error(self): | ||
4099 | 63 | self.assertRaises(TypeError, distro.parse_dpkg_version, b'1.2.3-0') | ||
4100 | 64 | |||
4101 | 65 | def test_simple_native_package_version(self): | ||
4102 | 66 | """dpkg versions must have a -. If not present expect value error.""" | ||
4103 | 67 | self.assertEqual( | ||
4104 | 68 | {'major': 2, 'minor': 28, 'micro': 0, 'extra': None, | ||
4105 | 69 | 'raw': '2.28', 'upstream': '2.28', 'name': 'germinate', | ||
4106 | 70 | 'semantic_version': 22800}, | ||
4107 | 71 | distro.parse_dpkg_version('2.28', name='germinate')) | ||
4108 | 72 | |||
4109 | 73 | def test_complex_native_package_version(self): | ||
4110 | 74 | dver = '1.0.106ubuntu2+really1.0.97ubuntu1' | ||
4111 | 75 | self.assertEqual( | ||
4112 | 76 | {'major': 1, 'minor': 0, 'micro': 106, | ||
4113 | 77 | 'extra': 'ubuntu2+really1.0.97ubuntu1', | ||
4114 | 78 | 'raw': dver, 'upstream': dver, 'name': 'debootstrap', | ||
4115 | 79 | 'semantic_version': 100106}, | ||
4116 | 80 | distro.parse_dpkg_version(dver, name='debootstrap', | ||
4117 | 81 | semx=(100000, 1000, 1))) | ||
4118 | 82 | |||
4119 | 83 | def test_simple_valid(self): | ||
4120 | 84 | self.assertEqual( | ||
4121 | 85 | {'major': 1, 'minor': 2, 'micro': 3, 'extra': None, | ||
4122 | 86 | 'raw': '1.2.3-0', 'upstream': '1.2.3', 'name': 'foo', | ||
4123 | 87 | 'semantic_version': 10203}, | ||
4124 | 88 | distro.parse_dpkg_version('1.2.3-0', name='foo')) | ||
4125 | 89 | |||
4126 | 90 | def test_simple_valid_with_semx(self): | ||
4127 | 91 | self.assertEqual( | ||
4128 | 92 | {'major': 1, 'minor': 2, 'micro': 3, 'extra': None, | ||
4129 | 93 | 'raw': '1.2.3-0', 'upstream': '1.2.3', | ||
4130 | 94 | 'semantic_version': 123}, | ||
4131 | 95 | distro.parse_dpkg_version('1.2.3-0', semx=(100, 10, 1))) | ||
4132 | 96 | |||
4133 | 97 | def test_upstream_with_hyphen(self): | ||
4134 | 98 | """upstream versions may have a hyphen.""" | ||
4135 | 99 | cver = '18.2-14-g6d48d265-0ubuntu1' | ||
4136 | 100 | self.assertEqual( | ||
4137 | 101 | {'major': 18, 'minor': 2, 'micro': 0, 'extra': '-14-g6d48d265', | ||
4138 | 102 | 'raw': cver, 'upstream': '18.2-14-g6d48d265', | ||
4139 | 103 | 'name': 'cloud-init', 'semantic_version': 180200}, | ||
4140 | 104 | distro.parse_dpkg_version(cver, name='cloud-init')) | ||
4141 | 105 | |||
4142 | 106 | def test_upstream_with_plus(self): | ||
4143 | 107 | """multipath tools has a + in it.""" | ||
4144 | 108 | mver = '0.5.0+git1.656f8865-5ubuntu2.5' | ||
4145 | 109 | self.assertEqual( | ||
4146 | 110 | {'major': 0, 'minor': 5, 'micro': 0, 'extra': '+git1.656f8865', | ||
4147 | 111 | 'raw': mver, 'upstream': '0.5.0+git1.656f8865', | ||
4148 | 112 | 'semantic_version': 500}, | ||
4149 | 113 | distro.parse_dpkg_version(mver)) | ||
4150 | 114 | |||
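The `TestParseDpkgVersion` cases fully specify the expected output shape: the Debian revision is everything after the last `-`, the upstream part splits into `major.minor.micro` plus an `extra` suffix, and `semantic_version` is a weighted sum controlled by `semx`. A sketch reproducing exactly the behavior those assertions describe (the real `curtin.distro.parse_dpkg_version` may differ in detail):

```python
import re

def parse_dpkg_version(raw, name=None, semx=(10000, 100, 1)):
    # Tests require a TypeError for None and (py3) bytes input.
    if not isinstance(raw, str):
        raise TypeError("version input must be a string")
    # Debian revision follows the last '-'; native versions (no '-')
    # use the whole string as upstream, e.g. '2.28'.
    upstream = raw.rsplit('-', 1)[0] if '-' in raw else raw
    # Leading dotted digits are major[.minor[.micro]]; anything left
    # over ('ubuntu2+...', '-14-g6d48d265', '+git1...') is 'extra'.
    m = re.match(r'^(\d+)(?:\.(\d+))?(?:\.(\d+))?(.*)$', upstream)
    major, minor, micro = (int(g) if g else 0 for g in m.groups()[:3])
    info = {
        'major': major, 'minor': minor, 'micro': micro,
        'extra': m.group(4) or None,
        'raw': raw, 'upstream': upstream,
        'semantic_version': (major * semx[0] + minor * semx[1] +
                             micro * semx[2]),
    }
    if name:
        info['name'] = name
    return info
```

With the default `semx=(10000, 100, 1)`, `'2.28'` yields `22800` and `'1.2.3-0'` yields `10203`, matching the tests above.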
4151 | 115 | |||
4152 | 116 | class TestDistros(CiTestCase): | ||
4153 | 117 | |||
4154 | 118 | def test_distro_names(self): | ||
4155 | 119 | all_distros = list(distro.DISTROS) | ||
4156 | 120 | for distro_name in distro.DISTRO_NAMES: | ||
4157 | 121 | distro_enum = getattr(distro.DISTROS, distro_name) | ||
4158 | 122 | self.assertIn(distro_enum, all_distros) | ||
4159 | 123 | |||
4160 | 124 | def test_distro_names_unknown(self): | ||
4161 | 125 | distro_name = "ImNotADistro" | ||
4162 | 126 | self.assertNotIn(distro_name, distro.DISTRO_NAMES) | ||
4163 | 127 | with self.assertRaises(AttributeError): | ||
4164 | 128 | getattr(distro.DISTROS, distro_name) | ||
4165 | 129 | |||
4166 | 130 | def test_distro_osfamily(self): | ||
4167 | 131 | for variant, family in distro.OS_FAMILIES.items(): | ||
4168 | 132 | self.assertNotEqual(variant, family) | ||
4169 | 133 | self.assertIn(variant, distro.DISTROS) | ||
4170 | 134 | for dname in family: | ||
4171 | 135 | self.assertIn(dname, distro.DISTROS) | ||
4172 | 136 | |||
4173 | 137 | def test_distro_osfamily_identity(self): | ||
4174 | 138 | for family, variants in distro.OS_FAMILIES.items(): | ||
4175 | 139 | self.assertIn(family, variants) | ||
4176 | 140 | |||
4177 | 141 | def test_name_to_distro(self): | ||
4178 | 142 | for distro_name in distro.DISTRO_NAMES: | ||
4179 | 143 | dobj = distro.name_to_distro(distro_name) | ||
4180 | 144 | self.assertEqual(dobj, getattr(distro.DISTROS, distro_name)) | ||
4181 | 145 | |||
4182 | 146 | def test_name_to_distro_unknown_value(self): | ||
4183 | 147 | with self.assertRaises(ValueError): | ||
4184 | 148 | distro.name_to_distro(None) | ||
4185 | 149 | |||
4186 | 150 | def test_name_to_distro_unknown_attr(self): | ||
4187 | 151 | with self.assertRaises(ValueError): | ||
4188 | 152 | distro.name_to_distro('NotADistro') | ||
4189 | 153 | |||
4190 | 154 | def test_distros_unknown_attr(self): | ||
4191 | 155 | with self.assertRaises(AttributeError): | ||
4192 | 156 | distro.DISTROS.notadistro | ||
4193 | 157 | |||
4194 | 158 | def test_distros_unknown_index(self): | ||
4195 | 159 | with self.assertRaises(IndexError): | ||
4196 | 160 | distro.DISTROS[len(distro.DISTROS)+1] | ||
4197 | 161 | |||
4198 | 162 | |||
4199 | 163 | class TestDistroInfo(CiTestCase): | ||
4200 | 164 | |||
4201 | 165 | def setUp(self): | ||
4202 | 166 | super(TestDistroInfo, self).setUp() | ||
4203 | 167 | self.add_patch('curtin.distro.os_release', 'mock_os_release') | ||
4204 | 168 | |||
4205 | 169 | def test_get_distroinfo(self): | ||
4206 | 170 | for distro_name in distro.DISTRO_NAMES: | ||
4207 | 171 | self.mock_os_release.return_value = {'ID': distro_name} | ||
4208 | 172 | variant = distro.name_to_distro(distro_name) | ||
4209 | 173 | family = distro.DISTRO_TO_OSFAMILY[variant] | ||
4210 | 174 | distro_info = distro.get_distroinfo() | ||
4211 | 175 | self.assertEqual(variant, distro_info.variant) | ||
4212 | 176 | self.assertEqual(family, distro_info.family) | ||
4213 | 177 | |||
4214 | 178 | def test_get_distro(self): | ||
4215 | 179 | for distro_name in distro.DISTRO_NAMES: | ||
4216 | 180 | self.mock_os_release.return_value = {'ID': distro_name} | ||
4217 | 181 | variant = distro.name_to_distro(distro_name) | ||
4218 | 182 | distro_obj = distro.get_distro() | ||
4219 | 183 | self.assertEqual(variant, distro_obj) | ||
4220 | 184 | |||
4221 | 185 | def test_get_osfamily(self): | ||
4222 | 186 | for distro_name in distro.DISTRO_NAMES: | ||
4223 | 187 | self.mock_os_release.return_value = {'ID': distro_name} | ||
4224 | 188 | variant = distro.name_to_distro(distro_name) | ||
4225 | 189 | family = distro.DISTRO_TO_OSFAMILY[variant] | ||
4226 | 190 | distro_obj = distro.get_osfamily() | ||
4227 | 191 | self.assertEqual(family, distro_obj) | ||
4228 | 192 | |||
4229 | 193 | |||
4230 | 194 | class TestDistroIdentity(CiTestCase): | ||
4231 | 195 | |||
4232 | 196 | def setUp(self): | ||
4233 | 197 | super(TestDistroIdentity, self).setUp() | ||
4234 | 198 | self.add_patch('curtin.distro.os.path.exists', 'mock_os_path') | ||
4235 | 199 | |||
4236 | 200 | def test_is_ubuntu_core(self): | ||
4237 | 201 | for exists in [True, False]: | ||
4238 | 202 | self.mock_os_path.return_value = exists | ||
4239 | 203 | self.assertEqual(exists, distro.is_ubuntu_core()) | ||
4240 | 204 | self.mock_os_path.assert_called_with('/system-data/var/lib/snapd') | ||
4241 | 205 | |||
4242 | 206 | def test_is_centos(self): | ||
4243 | 207 | for exists in [True, False]: | ||
4244 | 208 | self.mock_os_path.return_value = exists | ||
4245 | 209 | self.assertEqual(exists, distro.is_centos()) | ||
4246 | 210 | self.mock_os_path.assert_called_with('/etc/centos-release') | ||
4247 | 211 | |||
4248 | 212 | def test_is_rhel(self): | ||
4249 | 213 | for exists in [True, False]: | ||
4250 | 214 | self.mock_os_path.return_value = exists | ||
4251 | 215 | self.assertEqual(exists, distro.is_rhel()) | ||
4252 | 216 | self.mock_os_path.assert_called_with('/etc/redhat-release') | ||
4253 | 217 | |||
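The `TestDistroIdentity` assertions show each identity helper is just a probe for a distro marker path. A simplified sketch using the marker files from the `assert_called_with` calls above (the real helpers route through `paths.target_path` for target handling; plain string prefixing here is an assumption):

```python
import os

def is_ubuntu_core(target=""):
    # Ubuntu Core images carry snapd state under system-data.
    return os.path.exists(target + '/system-data/var/lib/snapd')

def is_centos(target=""):
    return os.path.exists(target + '/etc/centos-release')

def is_rhel(target=""):
    return os.path.exists(target + '/etc/redhat-release')
```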
4254 | 218 | |||
4255 | 219 | class TestYumInstall(CiTestCase): | ||
4256 | 220 | |||
4257 | 221 | @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) | ||
4258 | 222 | @mock.patch('curtin.util.subp') | ||
4259 | 223 | def test_yum_install(self, m_subp): | ||
4260 | 224 | pkglist = ['foobar', 'wark'] | ||
4261 | 225 | target = 'mytarget' | ||
4262 | 226 | mode = 'install' | ||
4263 | 227 | expected_calls = [ | ||
4264 | 228 | mock.call(['yum', '--assumeyes', '--quiet', 'install', | ||
4265 | 229 | '--downloadonly', '--setopt=keepcache=1'] + pkglist, | ||
4266 | 230 | env=None, retries=[1] * 10, | ||
4267 | 231 | target=paths.target_path(target)), | ||
4268 | 232 | mock.call(['yum', '--assumeyes', '--quiet', 'install', | ||
4269 | 233 | '--cacheonly'] + pkglist, env=None, | ||
4270 | 234 | target=paths.target_path(target)) | ||
4271 | 235 | ] | ||
4272 | 236 | |||
4273 | 237 | # call yum_install directly | ||
4274 | 238 | distro.yum_install(mode, pkglist, target=target) | ||
4275 | 239 | m_subp.assert_has_calls(expected_calls) | ||
4276 | 240 | |||
4277 | 241 | # call yum_install through run_yum_command | ||
4278 | 242 | m_subp.reset() | ||
4279 | 243 | distro.run_yum_command('install', pkglist, target=target) | ||
4280 | 244 | m_subp.assert_has_calls(expected_calls) | ||
4281 | 245 | |||
4282 | 246 | # call yum_install through install_packages | ||
4283 | 247 | m_subp.reset() | ||
4284 | 248 | osfamily = distro.DISTROS.redhat | ||
4285 | 249 | distro.install_packages(pkglist, osfamily=osfamily, target=target) | ||
4286 | 250 | m_subp.assert_has_calls(expected_calls) | ||
4287 | 251 | |||
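The `expected_calls` in `TestYumInstall` pin down a two-phase transaction: first download with retries while keeping the cache, then install offline from that cache. A sketch of just the command construction those calls assert (the `yum_install_commands` name is hypothetical; executing the commands via a `subp()`-style runner with `retries` is left out):

```python
def yum_install_commands(mode, pkglist):
    base = ['yum', '--assumeyes', '--quiet', mode]
    # Phase 1: fetch packages only, retried on transient repo errors,
    # and keep them cached for phase 2.
    download = base + ['--downloadonly', '--setopt=keepcache=1'] + pkglist
    # Phase 2: install strictly from the local cache, so a network
    # blip between the phases cannot leave a half-installed set.
    install = base + ['--cacheonly'] + pkglist
    return [download, install]
```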
4288 | 252 | |||
4289 | 253 | class TestHasPkgAvailable(CiTestCase): | ||
4290 | 254 | |||
4291 | 255 | def setUp(self): | ||
4292 | 256 | super(TestHasPkgAvailable, self).setUp() | ||
4293 | 257 | self.package = 'foobar' | ||
4294 | 258 | self.target = paths.target_path('mytarget') | ||
4295 | 259 | |||
4296 | 260 | @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) | ||
4297 | 261 | @mock.patch('curtin.distro.subp') | ||
4298 | 262 | def test_has_pkg_available_debian(self, m_subp): | ||
4299 | 263 | osfamily = distro.DISTROS.debian | ||
4300 | 264 | m_subp.return_value = (self.package, '') | ||
4301 | 265 | result = distro.has_pkg_available(self.package, self.target, osfamily) | ||
4302 | 266 | self.assertTrue(result) | ||
4303 | 267 | m_subp.assert_has_calls([mock.call(['apt-cache', 'pkgnames'], | ||
4304 | 268 | capture=True, | ||
4305 | 269 | target=self.target)]) | ||
4306 | 270 | |||
4307 | 271 | @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) | ||
4308 | 272 | @mock.patch('curtin.distro.subp') | ||
4309 | 273 | def test_has_pkg_available_debian_returns_false_not_avail(self, m_subp): | ||
4310 | 274 | pkg = 'wark' | ||
4311 | 275 | osfamily = distro.DISTROS.debian | ||
4312 | 276 | m_subp.return_value = (pkg, '') | ||
4313 | 277 | result = distro.has_pkg_available(self.package, self.target, osfamily) | ||
4314 | 278 | self.assertEqual(pkg == self.package, result) | ||
4315 | 279 | m_subp.assert_has_calls([mock.call(['apt-cache', 'pkgnames'], | ||
4316 | 280 | capture=True, | ||
4317 | 281 | target=self.target)]) | ||
4318 | 282 | |||
4319 | 283 | @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) | ||
4320 | 284 | @mock.patch('curtin.distro.run_yum_command') | ||
4321 | 285 | def test_has_pkg_available_redhat(self, m_subp): | ||
4322 | 286 | osfamily = distro.DISTROS.redhat | ||
4323 | 287 | m_subp.return_value = (self.package, '') | ||
4324 | 288 | result = distro.has_pkg_available(self.package, self.target, osfamily) | ||
4325 | 289 | self.assertTrue(result) | ||
4326 | 290 | m_subp.assert_has_calls([mock.call('list', opts=['--cacheonly'])]) | ||
4327 | 291 | |||
4328 | 292 | @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a) | ||
4329 | 293 | @mock.patch('curtin.distro.run_yum_command') | ||
4330 | 294 | def test_has_pkg_available_redhat_returns_false_not_avail(self, m_subp): | ||
4331 | 295 | pkg = 'wark' | ||
4332 | 296 | osfamily = distro.DISTROS.redhat | ||
4333 | 297 | m_subp.return_value = (pkg, '') | ||
4334 | 298 | result = distro.has_pkg_available(self.package, self.target, osfamily) | ||
4335 | 299 | self.assertEqual(pkg == self.package, result) | ||
4336 | 300 | m_subp.assert_has_calls([mock.call('list', opts=['--cacheonly'])]) | ||
4337 | 301 | |||
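For the Debian branch, `TestHasPkgAvailable` shows availability is checked by listing every known package name with `apt-cache pkgnames` and testing membership of the mocked stdout. A sketch of that branch with the runner injected (the function name and the injected `subp` parameter are illustrative; curtin calls its own `distro.subp` inside a `ChrootableTarget`):

```python
def has_pkg_available_debian(pkg, target, subp):
    # subp is assumed to return a (stdout, stderr) pair, like the
    # mocked curtin.distro.subp in the tests above.
    out, _err = subp(['apt-cache', 'pkgnames'], capture=True, target=target)
    return pkg in out.splitlines()
```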
4338 | 302 | # vi: ts=4 expandtab syntax=python | ||
4339 | diff --git a/tests/unittests/test_feature.py b/tests/unittests/test_feature.py | |||
4340 | index c62e0cd..7c55882 100644 | |||
4341 | --- a/tests/unittests/test_feature.py | |||
4342 | +++ b/tests/unittests/test_feature.py | |||
4343 | @@ -21,4 +21,7 @@ class TestExportsFeatures(CiTestCase): | |||
4344 | 21 | def test_has_centos_apply_network_config(self): | 21 | def test_has_centos_apply_network_config(self): |
4345 | 22 | self.assertIn('CENTOS_APPLY_NETWORK_CONFIG', curtin.FEATURES) | 22 | self.assertIn('CENTOS_APPLY_NETWORK_CONFIG', curtin.FEATURES) |
4346 | 23 | 23 | ||
4347 | 24 | def test_has_centos_curthook_support(self): | ||
4348 | 25 | self.assertIn('CENTOS_CURTHOOK_SUPPORT', curtin.FEATURES) | ||
4349 | 26 | |||
4350 | 24 | # vi: ts=4 expandtab syntax=python | 27 | # vi: ts=4 expandtab syntax=python |
4351 | diff --git a/tests/unittests/test_pack.py b/tests/unittests/test_pack.py | |||
4352 | index 1aae456..cb0b135 100644 | |||
4353 | --- a/tests/unittests/test_pack.py | |||
4354 | +++ b/tests/unittests/test_pack.py | |||
4355 | @@ -97,6 +97,8 @@ class TestPack(TestCase): | |||
4356 | 97 | }} | 97 | }} |
4357 | 98 | 98 | ||
4358 | 99 | out, err, rc, log_contents = self.run_install(cfg) | 99 | out, err, rc, log_contents = self.run_install(cfg) |
4359 | 100 | print("out=%s" % out) | ||
4360 | 101 | print("err=%s" % err) | ||
4361 | 100 | 102 | ||
4362 | 101 | # the version string and users command output should be in output | 103 | # the version string and users command output should be in output |
4363 | 102 | self.assertIn(version.version_string(), out) | 104 | self.assertIn(version.version_string(), out) |
4364 | diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py | |||
4365 | index 7fb332d..a64be16 100644 | |||
4366 | --- a/tests/unittests/test_util.py | |||
4367 | +++ b/tests/unittests/test_util.py | |||
4368 | @@ -4,10 +4,10 @@ from unittest import skipIf | |||
4369 | 4 | import mock | 4 | import mock |
4370 | 5 | import os | 5 | import os |
4371 | 6 | import stat | 6 | import stat |
4372 | 7 | import sys | ||
4373 | 8 | from textwrap import dedent | 7 | from textwrap import dedent |
4374 | 9 | 8 | ||
4375 | 10 | from curtin import util | 9 | from curtin import util |
4376 | 10 | from curtin import paths | ||
4377 | 11 | from .helpers import CiTestCase, simple_mocked_open | 11 | from .helpers import CiTestCase, simple_mocked_open |
4378 | 12 | 12 | ||
4379 | 13 | 13 | ||
4380 | @@ -104,48 +104,6 @@ class TestWhich(CiTestCase): | |||
4381 | 104 | self.assertEqual(found, "/usr/bin2/fuzz") | 104 | self.assertEqual(found, "/usr/bin2/fuzz") |
4382 | 105 | 105 | ||
4383 | 106 | 106 | ||
4384 | 107 | class TestLsbRelease(CiTestCase): | ||
4385 | 108 | |||
4386 | 109 | def setUp(self): | ||
4387 | 110 | super(TestLsbRelease, self).setUp() | ||
4388 | 111 | self._reset_cache() | ||
4389 | 112 | |||
4390 | 113 | def _reset_cache(self): | ||
4391 | 114 | keys = [k for k in util._LSB_RELEASE.keys()] | ||
4392 | 115 | for d in keys: | ||
4393 | 116 | del util._LSB_RELEASE[d] | ||
4394 | 117 | |||
4395 | 118 | @mock.patch("curtin.util.subp") | ||
4396 | 119 | def test_lsb_release_functional(self, mock_subp): | ||
4397 | 120 | output = '\n'.join([ | ||
4398 | 121 | "Distributor ID: Ubuntu", | ||
4399 | 122 | "Description: Ubuntu 14.04.2 LTS", | ||
4400 | 123 | "Release: 14.04", | ||
4401 | 124 | "Codename: trusty", | ||
4402 | 125 | ]) | ||
4403 | 126 | rdata = {'id': 'Ubuntu', 'description': 'Ubuntu 14.04.2 LTS', | ||
4404 | 127 | 'codename': 'trusty', 'release': '14.04'} | ||
4405 | 128 | |||
4406 | 129 | def fake_subp(cmd, capture=False, target=None): | ||
4407 | 130 | return output, 'No LSB modules are available.' | ||
4408 | 131 | |||
4409 | 132 | mock_subp.side_effect = fake_subp | ||
4410 | 133 | found = util.lsb_release() | ||
4411 | 134 | mock_subp.assert_called_with( | ||
4412 | 135 | ['lsb_release', '--all'], capture=True, target=None) | ||
4413 | 136 | self.assertEqual(found, rdata) | ||
4414 | 137 | |||
4415 | 138 | @mock.patch("curtin.util.subp") | ||
4416 | 139 | def test_lsb_release_unavailable(self, mock_subp): | ||
4417 | 140 | def doraise(*args, **kwargs): | ||
4418 | 141 | raise util.ProcessExecutionError("foo") | ||
4419 | 142 | mock_subp.side_effect = doraise | ||
4420 | 143 | |||
4421 | 144 | expected = {k: "UNAVAILABLE" for k in | ||
4422 | 145 | ('id', 'description', 'codename', 'release')} | ||
4423 | 146 | self.assertEqual(util.lsb_release(), expected) | ||
4424 | 147 | |||
4425 | 148 | |||
4426 | 149 | class TestSubp(CiTestCase): | 107 | class TestSubp(CiTestCase): |
4427 | 150 | 108 | ||
4428 | 151 | stdin2err = ['bash', '-c', 'cat >&2'] | 109 | stdin2err = ['bash', '-c', 'cat >&2'] |
4429 | @@ -312,7 +270,7 @@ class TestSubp(CiTestCase): | |||
4430 | 312 | # if target is not provided or is /, chroot should not be used | 270 | # if target is not provided or is /, chroot should not be used |
4431 | 313 | calls = m_popen.call_args_list | 271 | calls = m_popen.call_args_list |
4432 | 314 | popen_args, popen_kwargs = calls[-1] | 272 | popen_args, popen_kwargs = calls[-1] |
4434 | 315 | target = util.target_path(kwargs.get('target', None)) | 273 | target = paths.target_path(kwargs.get('target', None)) |
4435 | 316 | unshcmd = self.mock_get_unshare_pid_args.return_value | 274 | unshcmd = self.mock_get_unshare_pid_args.return_value |
4436 | 317 | if target == "/": | 275 | if target == "/": |
4437 | 318 | self.assertEqual(unshcmd + list(cmd), popen_args[0]) | 276 | self.assertEqual(unshcmd + list(cmd), popen_args[0]) |
4438 | @@ -554,44 +512,44 @@ class TestSetUnExecutable(CiTestCase): | |||
4439 | 554 | 512 | ||
4440 | 555 | class TestTargetPath(CiTestCase): | 513 | class TestTargetPath(CiTestCase): |
4441 | 556 | def test_target_empty_string(self): | 514 | def test_target_empty_string(self): |
4443 | 557 | self.assertEqual("/etc/passwd", util.target_path("", "/etc/passwd")) | 515 | self.assertEqual("/etc/passwd", paths.target_path("", "/etc/passwd")) |
4444 | 558 | 516 | ||
4445 | 559 | def test_target_non_string_raises(self): | 517 | def test_target_non_string_raises(self): |
4449 | 560 | self.assertRaises(ValueError, util.target_path, False) | 518 | self.assertRaises(ValueError, paths.target_path, False) |
4450 | 561 | self.assertRaises(ValueError, util.target_path, 9) | 519 | self.assertRaises(ValueError, paths.target_path, 9) |
4451 | 562 | self.assertRaises(ValueError, util.target_path, True) | 520 | self.assertRaises(ValueError, paths.target_path, True) |
4452 | 563 | 521 | ||
4453 | 564 | def test_lots_of_slashes_is_slash(self): | 522 | def test_lots_of_slashes_is_slash(self): |
4458 | 565 | self.assertEqual("/", util.target_path("/")) | 523 | self.assertEqual("/", paths.target_path("/")) |
4459 | 566 | self.assertEqual("/", util.target_path("//")) | 524 | self.assertEqual("/", paths.target_path("//")) |
4460 | 567 | self.assertEqual("/", util.target_path("///")) | 525 | self.assertEqual("/", paths.target_path("///")) |
4461 | 568 | self.assertEqual("/", util.target_path("////")) | 526 | self.assertEqual("/", paths.target_path("////")) |
4462 | 569 | 527 | ||
4463 | 570 | def test_empty_string_is_slash(self): | 528 | def test_empty_string_is_slash(self): |
4465 | 571 | self.assertEqual("/", util.target_path("")) | 529 | self.assertEqual("/", paths.target_path("")) |
4466 | 572 | 530 | ||
4467 | 573 | def test_recognizes_relative(self): | 531 | def test_recognizes_relative(self): |
4470 | 574 | self.assertEqual("/", util.target_path("/foo/../")) | 532 | self.assertEqual("/", paths.target_path("/foo/../")) |
4471 | 575 | self.assertEqual("/", util.target_path("/foo//bar/../../")) | 533 | self.assertEqual("/", paths.target_path("/foo//bar/../../")) |
4472 | 576 | 534 | ||
4473 | 577 | def test_no_path(self): | 535 | def test_no_path(self): |
4475 | 578 | self.assertEqual("/my/target", util.target_path("/my/target")) | 536 | self.assertEqual("/my/target", paths.target_path("/my/target")) |
4476 | 579 | 537 | ||
4477 | 580 | def test_no_target_no_path(self): | 538 | def test_no_target_no_path(self): |
4479 | 581 | self.assertEqual("/", util.target_path(None)) | 539 | self.assertEqual("/", paths.target_path(None)) |
4480 | 582 | 540 | ||
4481 | 583 | def test_no_target_with_path(self): | 541 | def test_no_target_with_path(self): |
4483 | 584 | self.assertEqual("/my/path", util.target_path(None, "/my/path")) | 542 | self.assertEqual("/my/path", paths.target_path(None, "/my/path")) |
4484 | 585 | 543 | ||
4485 | 586 | def test_trailing_slash(self): | 544 | def test_trailing_slash(self): |
4486 | 587 | self.assertEqual("/my/target/my/path", | 545 | self.assertEqual("/my/target/my/path", |
4488 | 588 | util.target_path("/my/target/", "/my/path")) | 546 | paths.target_path("/my/target/", "/my/path")) |
4489 | 589 | 547 | ||
4490 | 590 | def test_bunch_of_slashes_in_path(self): | 548 | def test_bunch_of_slashes_in_path(self): |
4491 | 591 | self.assertEqual("/target/my/path/", | 549 | self.assertEqual("/target/my/path/", |
4493 | 592 | util.target_path("/target/", "//my/path/")) | 550 | paths.target_path("/target/", "//my/path/")) |
4494 | 593 | self.assertEqual("/target/my/path/", | 551 | self.assertEqual("/target/my/path/", |
4496 | 594 | util.target_path("/target/", "///my/path/")) | 552 | paths.target_path("/target/", "///my/path/")) |
4497 | 595 | 553 | ||
4498 | 596 | 554 | ||
4499 | 597 | class TestRunInChroot(CiTestCase): | 555 | class TestRunInChroot(CiTestCase): |
4500 | @@ -1036,65 +994,4 @@ class TestLoadKernelModule(CiTestCase): | |||
4501 | 1036 | self.assertEqual(0, self.m_subp.call_count) | 994 | self.assertEqual(0, self.m_subp.call_count) |
4502 | 1037 | 995 | ||
4503 | 1038 | 996 | ||
4504 | 1039 | class TestParseDpkgVersion(CiTestCase): | ||
-    """test parse_dpkg_version."""
-
-    def test_none_raises_type_error(self):
-        self.assertRaises(TypeError, util.parse_dpkg_version, None)
-
-    @skipIf(sys.version_info.major < 3, "python 2 bytes are strings.")
-    def test_bytes_raises_type_error(self):
-        self.assertRaises(TypeError, util.parse_dpkg_version, b'1.2.3-0')
-
-    def test_simple_native_package_version(self):
-        """dpkg versions must have a -. If not present expect value error."""
-        self.assertEqual(
-            {'major': 2, 'minor': 28, 'micro': 0, 'extra': None,
-             'raw': '2.28', 'upstream': '2.28', 'name': 'germinate',
-             'semantic_version': 22800},
-            util.parse_dpkg_version('2.28', name='germinate'))
-
-    def test_complex_native_package_version(self):
-        dver = '1.0.106ubuntu2+really1.0.97ubuntu1'
-        self.assertEqual(
-            {'major': 1, 'minor': 0, 'micro': 106,
-             'extra': 'ubuntu2+really1.0.97ubuntu1',
-             'raw': dver, 'upstream': dver, 'name': 'debootstrap',
-             'semantic_version': 100106},
-            util.parse_dpkg_version(dver, name='debootstrap',
-                                    semx=(100000, 1000, 1)))
-
-    def test_simple_valid(self):
-        self.assertEqual(
-            {'major': 1, 'minor': 2, 'micro': 3, 'extra': None,
-             'raw': '1.2.3-0', 'upstream': '1.2.3', 'name': 'foo',
-             'semantic_version': 10203},
-            util.parse_dpkg_version('1.2.3-0', name='foo'))
-
-    def test_simple_valid_with_semx(self):
-        self.assertEqual(
-            {'major': 1, 'minor': 2, 'micro': 3, 'extra': None,
-             'raw': '1.2.3-0', 'upstream': '1.2.3',
-             'semantic_version': 123},
-            util.parse_dpkg_version('1.2.3-0', semx=(100, 10, 1)))
-
-    def test_upstream_with_hyphen(self):
-        """upstream versions may have a hyphen."""
-        cver = '18.2-14-g6d48d265-0ubuntu1'
-        self.assertEqual(
-            {'major': 18, 'minor': 2, 'micro': 0, 'extra': '-14-g6d48d265',
-             'raw': cver, 'upstream': '18.2-14-g6d48d265',
-             'name': 'cloud-init', 'semantic_version': 180200},
-            util.parse_dpkg_version(cver, name='cloud-init'))
-
-    def test_upstream_with_plus(self):
-        """multipath tools has a + in it."""
-        mver = '0.5.0+git1.656f8865-5ubuntu2.5'
-        self.assertEqual(
-            {'major': 0, 'minor': 5, 'micro': 0, 'extra': '+git1.656f8865',
-             'raw': mver, 'upstream': '0.5.0+git1.656f8865',
-             'semantic_version': 500},
-            util.parse_dpkg_version(mver))
-
-
 # vi: ts=4 expandtab syntax=python
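The removed tests above (the branch moves them to tests/unittests/test_distro.py, per the diffstat) fully pin down the behavior of parse_dpkg_version. As a reference, here is a stand-alone sketch that satisfies them; this is an illustration, not curtin's actual implementation:

```python
import re


def parse_dpkg_version(raw, name=None, semx=None):
    # Illustrative reimplementation matching the test expectations above;
    # curtin's real parse_dpkg_version may differ in detail.
    if not isinstance(raw, str):
        raise TypeError("version input must be a string")
    if semx is None:
        semx = (10000, 100, 1)
    # the debian revision follows the last hyphen, if any
    upstream = raw.rsplit('-', 1)[0] if '-' in raw else raw
    match = re.search(r'^(\d+)\.(\d+)(?:\.(\d+))?', upstream)
    major, minor, micro = (int(match.group(1)), int(match.group(2)),
                           int(match.group(3) or 0))
    extra = upstream[match.end():] or None
    version = {'major': major, 'minor': minor, 'micro': micro,
               'extra': extra, 'raw': raw, 'upstream': upstream,
               'semantic_version': (major * semx[0] + minor * semx[1] +
                                    micro * semx[2])}
    if name:
        version['name'] = name
    return version
```

Note how the semantic_version weights default to (10000, 100, 1), which is what makes '2.28' map to 22800 and '1.2.3' to 10203 in the tests.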
diff --git a/tests/vmtests/__init__.py b/tests/vmtests/__init__.py
index bd159c4..7e31491 100644
--- a/tests/vmtests/__init__.py
+++ b/tests/vmtests/__init__.py
@@ -493,18 +493,67 @@ def skip_by_date(bugnum, fixby, removeby=None, skips=None, install=True):
     return decorator
 
 
+DEFAULT_COLLECT_SCRIPTS = {
+    'common': [textwrap.dedent("""
+        cd OUTPUT_COLLECT_D
+        cp /etc/fstab ./fstab
+        cp -a /etc/udev/rules.d ./udev_rules.d
+        ifconfig -a | cat >ifconfig_a
+        ip a | cat >ip_a
+        cp -a /var/log/messages .
+        cp -a /var/log/syslog .
+        cp -a /var/log/cloud-init* .
+        cp -a /var/lib/cloud ./var_lib_cloud
+        cp -a /run/cloud-init ./run_cloud-init
+        cp -a /proc/cmdline ./proc_cmdline
+        cp -a /proc/mounts ./proc_mounts
+        cp -a /proc/partitions ./proc_partitions
+        cp -a /proc/swaps ./proc-swaps
+        # ls -al /dev/disk/*
+        mkdir -p /dev/disk/by-dname
+        ls /dev/disk/by-dname/ | cat >ls_dname
+        ls -al /dev/disk/by-dname/ | cat >ls_al_bydname
+        ls -al /dev/disk/by-id/ | cat >ls_al_byid
+        ls -al /dev/disk/by-uuid/ | cat >ls_al_byuuid
+        blkid -o export | cat >blkid.out
+        find /boot | cat > find_boot.out
+        [ -e /sys/firmware/efi ] && {
+            efibootmgr -v | cat >efibootmgr.out;
+        }
+        """)],
+    'centos': [textwrap.dedent("""
+        # XXX: command | cat >output is required for Centos under SELinux
+        # http://danwalsh.livejournal.com/22860.html
+        cd OUTPUT_COLLECT_D
+        rpm -qa | cat >rpm_qa
+        cp -a /etc/sysconfig/network-scripts .
+        rpm -q --queryformat '%{VERSION}\n' cloud-init |tee rpm_ci_version
+        rpm -E '%rhel' > rpm_dist_version_major
+        cp -a /etc/centos-release .
+        """)],
+    'ubuntu': [textwrap.dedent("""
+        cd OUTPUT_COLLECT_D
+        dpkg-query --show \
+          --showformat='${db:Status-Abbrev}\t${Package}\t${Version}\n' \
+          > debian-packages.txt 2> debian-packages.txt.err
+        cp -av /etc/network/interfaces .
+        cp -av /etc/network/interfaces.d .
+        find /etc/network/interfaces.d > find_interfacesd
+        v=""
+        out=$(apt-config shell v Acquire::HTTP::Proxy)
+        eval "$out"
+        echo "$v" > apt-proxy
+        """)]
+}
+
+
 class VMBaseClass(TestCase):
     __test__ = False
     expected_failure = False
     arch_skip = []
     boot_timeout = BOOT_TIMEOUT
-    collect_scripts = [textwrap.dedent("""
-        cd OUTPUT_COLLECT_D
-        dpkg-query --show \
-          --showformat='${db:Status-Abbrev}\t${Package}\t${Version}\n' \
-          > debian-packages.txt 2> debian-packages.txt.err
-        cat /proc/swaps > proc-swaps
-        """)]
+    collect_scripts = []
+    extra_collect_scripts = []
     conf_file = "examples/tests/basic.yaml"
     nr_cpus = None
     dirty_disks = False
@@ -528,6 +577,10 @@ class VMBaseClass(TestCase):
     conf_replace = {}
     uefi = False
     proxy = None
+    url_map = {
+        '/MAAS/api/version/': '2.0',
+        '/MAAS/api/2.0/version/':
+        json.dumps({'version': '2.5.0+curtin-vmtest'})}
 
     # these get set from base_vm_classes
     release = None
@@ -773,6 +826,16 @@ class VMBaseClass(TestCase):
                 cls.arch)
             raise SkipTest(reason)
 
+        # assign default collect scripts
+        if not cls.collect_scripts:
+            cls.collect_scripts = (
+                DEFAULT_COLLECT_SCRIPTS['common'] +
+                DEFAULT_COLLECT_SCRIPTS[cls.target_distro])
+
+        # append extra from subclass
+        if cls.extra_collect_scripts:
+            cls.collect_scripts.extend(cls.extra_collect_scripts)
+
         setup_start = time.time()
         logger.info(
             ('Starting setup for testclass: {__name__} '
@@ -994,7 +1057,8 @@ class VMBaseClass(TestCase):
 
         # set reporting logger
         cls.reporting_log = os.path.join(cls.td.logs, 'webhooks-events.json')
-        reporting_logger = CaptureReporting(cls.reporting_log)
+        reporting_logger = CaptureReporting(cls.reporting_log,
+                                            url_mapping=cls.url_map)
 
         # write reporting config
         reporting_config = os.path.join(cls.td.install, 'reporting.cfg')
@@ -1442,6 +1506,8 @@ class VMBaseClass(TestCase):
         if self.target_release == "trusty":
             raise SkipTest(
                 "(LP: #1523037): dname does not work on trusty kernels")
+        if self.target_distro != "ubuntu":
+            raise SkipTest("dname not present in non-ubuntu releases")
 
         if not disk_to_check:
             disk_to_check = self.disk_to_check
@@ -1449,11 +1515,9 @@ class VMBaseClass(TestCase):
             logger.debug('test_dname: no disks to check')
             return
         logger.debug('test_dname: checking disks: %s', disk_to_check)
-        path = self.collect_path("ls_dname")
-        if not os.path.exists(path):
-            logger.debug('test_dname: no "ls_dname" file: %s', path)
-            return
-        contents = util.load_file(path)
+        self.output_files_exist(["ls_dname"])
+
+        contents = self.load_collect_file("ls_dname")
         for diskname, part in self.disk_to_check:
             if part is not 0:
                 link = diskname + "-part" + str(part)
@@ -1485,6 +1549,9 @@ class VMBaseClass(TestCase):
         """ Check that curtin has removed /etc/network/interfaces.d/eth0.cfg
        by examining the output of a find /etc/network > find_interfaces.d
        """
+        # target_distro is set for non-ubuntu targets
+        if self.target_distro != 'ubuntu':
+            raise SkipTest("eni/ifupdown not present in non-ubuntu releases")
         interfacesd = self.load_collect_file("find_interfacesd")
         self.assertNotIn("/etc/network/interfaces.d/eth0.cfg",
                          interfacesd.split("\n"))
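The new setUpClass logic above can be exercised outside the harness: common scripts plus the per-distro set are used unless a subclass still sets collect_scripts, and extra_collect_scripts are always appended. A minimal sketch; the helper name `assemble_collect_scripts` and the abbreviated script bodies are illustrative, only the DEFAULT_COLLECT_SCRIPTS keys and the combining rules come from the diff:

```python
# Placeholder script bodies; the real ones are in the diff above.
DEFAULT_COLLECT_SCRIPTS = {
    'common': ['cd OUTPUT_COLLECT_D\ncp /etc/fstab ./fstab'],
    'centos': ['rpm -qa | cat >rpm_qa'],
    'ubuntu': ['dpkg-query --show > debian-packages.txt'],
}


def assemble_collect_scripts(target_distro, collect_scripts=None,
                             extra_collect_scripts=None):
    # a subclass that still sets collect_scripts keeps full control;
    # otherwise combine the common scripts with the per-distro set
    if not collect_scripts:
        collect_scripts = (DEFAULT_COLLECT_SCRIPTS['common'] +
                           DEFAULT_COLLECT_SCRIPTS[target_distro])
    # extra scripts from a subclass are appended either way
    if extra_collect_scripts:
        collect_scripts = collect_scripts + list(extra_collect_scripts)
    return collect_scripts
```

This is the mechanism that lets the test classes below drop their copied boilerplate (fstab, ls_dname, find_interfacesd collection) and declare only extra_collect_scripts.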
diff --git a/tests/vmtests/helpers.py b/tests/vmtests/helpers.py
index 10e20b3..6dddcc6 100644
--- a/tests/vmtests/helpers.py
+++ b/tests/vmtests/helpers.py
@@ -2,6 +2,7 @@
 # This file is part of curtin. See LICENSE file for copyright and license info.
 
 import os
+import re
 import subprocess
 import signal
 import threading
@@ -86,7 +87,26 @@ def check_call(cmd, signal=signal.SIGTERM, **kwargs):
     return Command(cmd, signal).run(**kwargs)
 
 
-def find_testcases():
+def find_testcases_by_attr(**kwargs):
+    class_match = set()
+    for test_case in find_testcases(**kwargs):
+        tc_name = str(test_case.__class__)
+        full_path = tc_name.split("'")[1].split(".")
+        class_name = full_path[-1]
+        if class_name in class_match:
+            continue
+        class_match.add(class_name)
+        filename = "/".join(full_path[0:-1]) + ".py"
+        yield "%s:%s" % (filename, class_name)
+
+
+def _attr_match(pattern, value):
+    if not value:
+        return False
+    return re.match(pattern, str(value))
+
+
+def find_testcases(**kwargs):
     # Use the TestLoder to load all test cases defined within tests/vmtests/
     # and figure out what distros and releases they are testing. Any tests
     # which are disabled will be excluded.
@@ -97,12 +117,19 @@ def find_testcases():
     root_dir = os.path.split(os.path.split(tests_dir)[0])[0]
     # Find all test modules defined in curtin/tests/vmtests/
     module_test_suites = loader.discover(tests_dir, top_level_dir=root_dir)
+    filter_attrs = [attr for attr, value in kwargs.items() if value]
     for mts in module_test_suites:
         for class_test_suite in mts:
             for test_case in class_test_suite:
                 # skip disabled tests
                 if not getattr(test_case, '__test__', False):
                     continue
+                # compare each filter attr with the specified value
+                tcmatch = [not _attr_match(kwargs[attr],
+                                           getattr(test_case, attr, False))
+                           for attr in filter_attrs]
+                if any(tcmatch):
+                    continue
                 yield test_case
 
 
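The filter added to find_testcases() treats each keyword argument as a regular expression that must match the test class attribute of the same name; a case is yielded only when every supplied pattern matches. A stand-alone sketch of that rule, reusing `_attr_match` from the diff; `matches_filters` and `FakeTestCase` are illustrative names, not part of the branch:

```python
import re


def _attr_match(pattern, value):
    # false-y attribute values never match a filter pattern
    if not value:
        return False
    return re.match(pattern, str(value))


def matches_filters(obj, **kwargs):
    # keep an object only when every non-empty filter pattern matches
    # the corresponding attribute (same logic as find_testcases above)
    filter_attrs = [attr for attr, value in kwargs.items() if value]
    tcmatch = [not _attr_match(kwargs[attr], getattr(obj, attr, False))
               for attr in filter_attrs]
    return not any(tcmatch)


class FakeTestCase(object):
    release = 'xenial'
    target_distro = 'centos'
```

Because `getattr(obj, attr, False)` defaults to False, filtering on an attribute a class does not define excludes that class rather than raising.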
diff --git a/tests/vmtests/image_sync.py b/tests/vmtests/image_sync.py
index e2cedc1..69c19ef 100644
--- a/tests/vmtests/image_sync.py
+++ b/tests/vmtests/image_sync.py
@@ -30,7 +30,9 @@ IMAGE_SRC_URL = os.environ.get(
     "http://maas.ubuntu.com/images/ephemeral-v3/daily/streams/v1/index.sjson")
 IMAGE_DIR = os.environ.get("IMAGE_DIR", "/srv/images")
 
-KEYRING = '/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg'
+KEYRING = os.environ.get(
+    'IMAGE_SRC_KEYRING',
+    '/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg')
 ITEM_NAME_FILTERS = \
     ['ftype~(boot-initrd|boot-kernel|root-tgz|squashfs)']
 FORMAT_JSON = 'JSON'
diff --git a/tests/vmtests/releases.py b/tests/vmtests/releases.py
index 02cbfe5..7be8feb 100644
--- a/tests/vmtests/releases.py
+++ b/tests/vmtests/releases.py
@@ -131,8 +131,8 @@ class _Releases(object):
 
 
 class _CentosReleases(object):
-    centos70fromxenial = _Centos70FromXenialBase
-    centos66fromxenial = _Centos66FromXenialBase
+    centos70_xenial = _Centos70FromXenialBase
+    centos66_xenial = _Centos66FromXenialBase
 
 
 class _UbuntuCoreReleases(object):
diff --git a/tests/vmtests/report_webhook_logger.py b/tests/vmtests/report_webhook_logger.py
index e95397c..5e7d63b 100755
--- a/tests/vmtests/report_webhook_logger.py
+++ b/tests/vmtests/report_webhook_logger.py
@@ -76,7 +76,10 @@ class ServerHandler(http_server.SimpleHTTPRequestHandler):
         self._message = None
         self.send_response(200)
         self.end_headers()
-        self.wfile.write(("content of %s\n" % self.path).encode('utf-8'))
+        if self.url_mapping and self.path in self.url_mapping:
+            self.wfile.write(self.url_mapping[self.path].encode('utf-8'))
+        else:
+            self.wfile.write(("content of %s\n" % self.path).encode('utf-8'))
 
     def do_POST(self):
         length = int(self.headers['Content-Length'])
@@ -96,13 +99,14 @@ class ServerHandler(http_server.SimpleHTTPRequestHandler):
         self.wfile.write(msg.encode('utf-8'))
 
 
-def GenServerHandlerWithResultFile(file_path):
+def GenServerHandlerWithResultFile(file_path, url_map):
     class ExtendedServerHandler(ServerHandler):
         result_log_file = file_path
+        url_mapping = url_map
     return ExtendedServerHandler
 
 
-def get_httpd(port=None, result_file=None):
+def get_httpd(port=None, result_file=None, url_mapping=None):
     # avoid 'Address already in use' after ctrl-c
     socketserver.TCPServer.allow_reuse_address = True
 
@@ -111,7 +115,7 @@ def get_httpd(port=None, result_file=None):
         port = 0
 
     if result_file:
-        Handler = GenServerHandlerWithResultFile(result_file)
+        Handler = GenServerHandlerWithResultFile(result_file, url_mapping)
     else:
         Handler = ServerHandler
     httpd = HTTPServerV6(("::", port), Handler)
@@ -143,10 +147,11 @@ def run_server(port=DEFAULT_PORT, log_data=True):
 
 class CaptureReporting:
 
-    def __init__(self, result_file):
+    def __init__(self, result_file, url_mapping=None):
+        self.url_mapping = url_mapping
         self.result_file = result_file
         self.httpd = get_httpd(result_file=self.result_file,
-                               port=None)
+                               port=None, url_mapping=self.url_mapping)
         self.httpd.server_activate()
         # socket.AF_INET6 returns
         # (host, port, flowinfo, scopeid)
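With url_mapping threaded through to the handler, GET requests for mapped paths (the MAAS version endpoints in url_map) now return canned bodies, while everything else keeps the old echo behavior. A sketch of just that dispatch; `response_body` is an illustrative stand-in for the do_GET method, not a function in the branch:

```python
import json

# Canned responses as configured on VMBaseClass.url_map in the diff above.
URL_MAP = {
    '/MAAS/api/version/': '2.0',
    '/MAAS/api/2.0/version/':
        json.dumps({'version': '2.5.0+curtin-vmtest'}),
}


def response_body(path, url_mapping=None):
    # mapped paths get a canned body; anything else gets the generic echo
    if url_mapping and path in url_mapping:
        return url_mapping[path].encode('utf-8')
    return ("content of %s\n" % path).encode('utf-8')
```

This lets the in-test webhook server double as a fake MAAS endpoint, which the CentOS curthooks code queries during install.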
diff --git a/tests/vmtests/test_apt_config_cmd.py b/tests/vmtests/test_apt_config_cmd.py
index efd04f3..f9b6a09 100644
--- a/tests/vmtests/test_apt_config_cmd.py
+++ b/tests/vmtests/test_apt_config_cmd.py
@@ -12,16 +12,14 @@ from .releases import base_vm_classes as relbase
 
 class TestAptConfigCMD(VMBaseClass):
     """TestAptConfigCMD - test standalone command"""
+    test_type = 'config'
     conf_file = "examples/tests/apt_config_command.yaml"
     interactive = False
     extra_disks = []
     fstab_expected = {}
     disk_to_check = []
-    collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent("""
+    extra_collect_scripts = [textwrap.dedent("""
         cd OUTPUT_COLLECT_D
-        cat /etc/fstab > fstab
-        ls /dev/disk/by-dname > ls_dname
-        find /etc/network/interfaces.d > find_interfacesd
         cp /etc/apt/sources.list.d/curtin-dev-ubuntu-test-archive-*.list .
         cp /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg .
         apt-cache policy | grep proposed > proposed-enabled
diff --git a/tests/vmtests/test_apt_source.py b/tests/vmtests/test_apt_source.py
index f34913a..bb502b2 100644
--- a/tests/vmtests/test_apt_source.py
+++ b/tests/vmtests/test_apt_source.py
@@ -14,15 +14,13 @@ from curtin import util
 
 class TestAptSrcAbs(VMBaseClass):
     """TestAptSrcAbs - Basic tests for apt features of curtin"""
+    test_type = 'config'
     interactive = False
     extra_disks = []
     fstab_expected = {}
     disk_to_check = []
-    collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent("""
+    extra_collect_scripts = [textwrap.dedent("""
         cd OUTPUT_COLLECT_D
-        cat /etc/fstab > fstab
-        ls /dev/disk/by-dname > ls_dname
-        find /etc/network/interfaces.d > find_interfacesd
         apt-key list "F430BBA5" > keyid-F430BBA5
         apt-key list "0165013E" > keyppa-0165013E
         apt-key list "F470A0AC" > keylongid-F470A0AC
diff --git a/tests/vmtests/test_basic.py b/tests/vmtests/test_basic.py
index 01ffc89..54e3df8 100644
--- a/tests/vmtests/test_basic.py
+++ b/tests/vmtests/test_basic.py
@@ -4,12 +4,14 @@ from . import (
     VMBaseClass,
     get_apt_proxy)
 from .releases import base_vm_classes as relbase
+from .releases import centos_base_vm_classes as centos_relbase
 
 import textwrap
 from unittest import SkipTest
 
 
 class TestBasicAbs(VMBaseClass):
+    test_type = 'storage'
     interactive = False
     nr_cpus = 2
     dirty_disks = True
@@ -18,29 +20,18 @@ class TestBasicAbs(VMBaseClass):
     nvme_disks = ['4G']
     disk_to_check = [('main_disk_with_in---valid--dname', 1),
                      ('main_disk_with_in---valid--dname', 2)]
-    collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent("""
+    extra_collect_scripts = [textwrap.dedent("""
         cd OUTPUT_COLLECT_D
-        blkid -o export /dev/vda > blkid_output_vda
-        blkid -o export /dev/vda1 > blkid_output_vda1
-        blkid -o export /dev/vda2 > blkid_output_vda2
+        blkid -o export /dev/vda | cat >blkid_output_vda
+        blkid -o export /dev/vda1 | cat >blkid_output_vda1
+        blkid -o export /dev/vda2 | cat >blkid_output_vda2
         dev="/dev/vdd"; f="btrfs_uuid_${dev#/dev/*}";
         if command -v btrfs-debug-tree >/dev/null; then
             btrfs-debug-tree -r $dev | awk '/^uuid/ {print $2}' | grep "-"
         else
             btrfs inspect-internal dump-super $dev |
                 awk '/^dev_item.fsid/ {print $2}'
-        fi > $f
-        cat /proc/partitions > proc_partitions
-        ls -al /dev/disk/by-uuid/ > ls_uuid
-        cat /etc/fstab > fstab
-        mkdir -p /dev/disk/by-dname
-        ls /dev/disk/by-dname/ > ls_dname
-        find /etc/network/interfaces.d > find_interfacesd
-
-        v=""
-        out=$(apt-config shell v Acquire::HTTP::Proxy)
-        eval "$out"
-        echo "$v" > apt-proxy
+        fi | cat >$f
         """)]
 
     def _kname_to_uuid(self, kname):
@@ -48,7 +39,7 @@ class TestBasicAbs(VMBaseClass):
         # parsing ls -al output on /dev/disk/by-uuid:
         # lrwxrwxrwx 1 root root 9 Dec  4 20:02
         #  d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg
-        ls_uuid = self.load_collect_file("ls_uuid")
+        ls_uuid = self.load_collect_file("ls_al_byuuid")
         uuid = [line.split()[8] for line in ls_uuid.split('\n')
                 if ("../../" + kname) in line.split()]
         self.assertEqual(len(uuid), 1)
@@ -57,81 +48,99 @@ class TestBasicAbs(VMBaseClass):
         self.assertEqual(len(uuid), 36)
         return uuid
 
-    def test_output_files_exist(self):
-        self.output_files_exist(
-            ["blkid_output_vda", "blkid_output_vda1", "blkid_output_vda2",
-             "btrfs_uuid_vdd", "fstab", "ls_dname", "ls_uuid",
-             "proc_partitions",
-             "root/curtin-install.log", "root/curtin-install-cfg.yaml"])
-
-    def test_ptable(self, disk_to_check=None):
+    def _test_ptable(self, blkid_output, expected):
         if self.target_release == "trusty":
             raise SkipTest("No PTTYPE blkid output on trusty")
 
-        blkid_info = self.get_blkid_data("blkid_output_vda")
-        self.assertEquals(blkid_info["PTTYPE"], "dos")
-
-    def test_partition_numbers(self):
-        # vde should have partitions 1 and 10
-        disk = "vde"
+        if not blkid_output:
+            raise RuntimeError('_test_ptable requires blkid output file')
+
+        if not expected:
+            raise RuntimeError('_test_ptable requires expected value')
+
+        self.output_files_exist([blkid_output])
+        blkid_info = self.get_blkid_data(blkid_output)
+        self.assertEquals(expected, blkid_info["PTTYPE"])
PASSED: Continuous integration, rev:59442cdefbb6fd3c325c56266fca7840593ea3b6
https://jenkins.ubuntu.com/server/job/curtin-ci/1002/
Executed test runs:
    SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-arm64/1002
    SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-ppc64el/1002
    SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=metal-s390x/1002
    SUCCESS: https://jenkins.ubuntu.com/server/job/curtin-ci/nodes=torkoal/1002

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/curtin-ci/1002/rebuild