Merge ~raharper/curtin:feature/enable-storage-vmtest-on-centos into curtin:master

Proposed by Ryan Harper
Status: Merged
Approved by: Ryan Harper
Approved revision: e0e98376b2e7ff3a09f3a8b339c1d029a3274b83
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~raharper/curtin:feature/enable-storage-vmtest-on-centos
Merge into: curtin:master
Diff against target: 7050 lines (+2535/-1590)
82 files modified
curtin/__init__.py (+2/-0)
curtin/block/__init__.py (+0/-72)
curtin/block/deps.py (+103/-0)
curtin/block/iscsi.py (+25/-9)
curtin/block/lvm.py (+2/-1)
curtin/block/mdadm.py (+2/-1)
curtin/block/mkfs.py (+3/-2)
curtin/block/zfs.py (+2/-1)
curtin/commands/apply_net.py (+4/-3)
curtin/commands/apt_config.py (+13/-13)
curtin/commands/block_meta.py (+5/-4)
curtin/commands/curthooks.py (+391/-207)
curtin/commands/in_target.py (+2/-2)
curtin/commands/install.py (+4/-2)
curtin/commands/system_install.py (+2/-1)
curtin/commands/system_upgrade.py (+3/-2)
curtin/deps/__init__.py (+3/-3)
curtin/distro.py (+512/-0)
curtin/futil.py (+2/-1)
curtin/net/__init__.py (+0/-59)
curtin/net/deps.py (+72/-0)
curtin/paths.py (+34/-0)
curtin/util.py (+20/-318)
dev/null (+0/-96)
doc/topics/config.rst (+40/-0)
doc/topics/curthooks.rst (+18/-2)
examples/tests/filesystem_battery.yaml (+2/-2)
helpers/common (+156/-35)
tests/unittests/test_apt_custom_sources_list.py (+10/-8)
tests/unittests/test_apt_source.py (+8/-7)
tests/unittests/test_block_iscsi.py (+7/-0)
tests/unittests/test_block_lvm.py (+3/-2)
tests/unittests/test_block_mdadm.py (+18/-11)
tests/unittests/test_block_mkfs.py (+3/-2)
tests/unittests/test_block_zfs.py (+15/-9)
tests/unittests/test_commands_apply_net.py (+7/-7)
tests/unittests/test_commands_block_meta.py (+4/-3)
tests/unittests/test_curthooks.py (+103/-78)
tests/unittests/test_distro.py (+302/-0)
tests/unittests/test_feature.py (+3/-0)
tests/unittests/test_pack.py (+2/-0)
tests/unittests/test_util.py (+19/-122)
tests/vmtests/__init__.py (+80/-13)
tests/vmtests/helpers.py (+28/-1)
tests/vmtests/image_sync.py (+3/-1)
tests/vmtests/releases.py (+2/-2)
tests/vmtests/report_webhook_logger.py (+11/-6)
tests/vmtests/test_apt_config_cmd.py (+2/-4)
tests/vmtests/test_apt_source.py (+2/-4)
tests/vmtests/test_basic.py (+126/-152)
tests/vmtests/test_bcache_basic.py (+3/-6)
tests/vmtests/test_fs_battery.py (+25/-11)
tests/vmtests/test_install_umount.py (+1/-18)
tests/vmtests/test_iscsi.py (+10/-6)
tests/vmtests/test_journald_reporter.py (+2/-5)
tests/vmtests/test_lvm.py (+7/-8)
tests/vmtests/test_lvm_iscsi.py (+9/-4)
tests/vmtests/test_lvm_root.py (+40/-9)
tests/vmtests/test_mdadm_bcache.py (+41/-18)
tests/vmtests/test_mdadm_iscsi.py (+9/-3)
tests/vmtests/test_multipath.py (+8/-16)
tests/vmtests/test_network.py (+4/-19)
tests/vmtests/test_network_alias.py (+3/-3)
tests/vmtests/test_network_bonding.py (+3/-3)
tests/vmtests/test_network_bridging.py (+4/-4)
tests/vmtests/test_network_ipv6.py (+4/-4)
tests/vmtests/test_network_ipv6_static.py (+2/-2)
tests/vmtests/test_network_ipv6_vlan.py (+2/-2)
tests/vmtests/test_network_mtu.py (+5/-4)
tests/vmtests/test_network_static.py (+2/-11)
tests/vmtests/test_network_static_routes.py (+2/-2)
tests/vmtests/test_network_vlan.py (+3/-11)
tests/vmtests/test_nvme.py (+29/-56)
tests/vmtests/test_old_apt_features.py (+2/-4)
tests/vmtests/test_pollinate_useragent.py (+2/-2)
tests/vmtests/test_raid5_bcache.py (+6/-11)
tests/vmtests/test_simple.py (+5/-18)
tests/vmtests/test_ubuntu_core.py (+3/-8)
tests/vmtests/test_uefi_basic.py (+27/-28)
tests/vmtests/test_zfsroot.py (+5/-21)
tools/jenkins-runner (+30/-5)
tools/vmtest-filter (+57/-0)
Reviewer                  Review Type              Date Requested  Status
Server Team CI bot        continuous-integration                   Approve
Lee Trager (community)                                             Approve
Scott Moser (community)                                            Approve
Chad Smith                                                         Pending
Review via email: mp+349075@code.launchpad.net

Commit message

Enable custom storage configuration for centos images.

Add support for the majority of storage configurations including
partitioning, lvm, raid, iscsi and combinations of these. Some
storage configs are unsupported at this time.
Unsupported storage config options on Centos:
  - bcache (no kernel support)
  - zfs (no kernel support)
  - jfs, ntfs, reiserfs (no kernel or userspace support)

Curtin's built-in curthooks now support Centos in addition
to Ubuntu. The built-in curthooks are now callable by
in-image curthooks. This feature is announced by the
presence of the feature flag 'CENTOS_CURTHOOK_SUPPORT'.

Other notable features added:
 - tools/jenkins-runner now includes a test-filtering
   ability which generates the list of tests to run from
   attributes of the test classes. For example, to run
   all centos70 tests append:
     --filter=target_release=centos70
 - curtin/distro.py includes distro-specific methods, such as
   package install and distro version detection
 - util.target_path has moved to the curtin.paths module
   (a brief usage sketch follows below)

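A minimal usage sketch of the relocated helpers, based only on the call
sites visible in the preview diff further down (the target path and the
package name here are placeholders):

from curtin import distro, paths

# resolve a path inside the installation target
os_release = paths.target_path('/tmp/target', '/etc/os-release')

# package and release helpers now live in curtin.distro rather than curtin.util
distro.install_packages(['lvm2'], target='/tmp/target')
codename = distro.lsb_release(target='/tmp/target')['codename']
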
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

I didn't get through the whole thing yet. Only as far as my comments stop.
Will review more later.

Feels like we need a 'distro' module.

def distro.read_distro(target=None):
   """Find the distro in target. return just distro name for now."""

also there would be

CENTOS = 'centos'
DEBIAN = 'debian'

then you can avoid copying the 'debian' string everywhere.
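
Roughly, such a helper could look like this sketch (the /etc/os-release
parsing below is illustrative only, not a finished implementation):

def read_distro(target=None):
    """Find the distro in target. Return just the distro name for now."""
    os_release = util.target_path(target, '/etc/os-release')
    for line in util.load_file(os_release).splitlines():
        if line.startswith('ID='):
            return line.split('=', 1)[1].strip().strip('"')
    return None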

Revision history for this message
Ryan Harper (raharper) wrote :

Yes, I generally wanted some sort of distro value cache, which is why in curthooks I end up grabbing it first and passing it around.

We could avoid passing it, at the cost of a function call. Alternatively, we could import it into the curthooks module and call a "setter" to find the right value, and it would be a global to the module.

Thoughts?

Revision history for this message
Chad Smith (chad.smith) wrote :

Only a brief glance at the content; will look more tomorrow.

99517f4... by Ryan Harper

Simplify set construction for get_iscsi_ports_from_config

0ca7ee3... by Ryan Harper

Restore param order to copy_iscsi_conf

cca12fc... by Ryan Harper

Fix whitespace damage, update comment to have LP: #

Revision history for this message
Ryan Harper (raharper) wrote :

Thanks for the comments so far. Pulling in some suggested changes. Some responses in-line.

Revision history for this message
Scott Moser (smoser) wrote :

I got through the rest of it.
I like the test filter functionality.
comments inline.

1d890ad... by Ryan Harper

Drop if not target check, use target_path instead

3719756... by Ryan Harper

setup_grub, helpers/common: pass os-family to install_grub, fix shell nits

- Add --os-family to install_grub cli, have setup_grub() pass in the flag
- Address shell comments
- Fix unittests to work with --os-family

9555181... by Ryan Harper

Fix use of cls.target_distro, it always has a value now, drop test_type=core

2a5e3d2... by Ryan Harper

Use instead of ; use error/fail

Revision history for this message
Ryan Harper (raharper) wrote :

Pulling in most of the comments. Replied to a few questions inline. I'll reverify that we're still passing on centos tests and then push the fixes here for a second round.

Thanks for the review!

dac8fe0... by Ryan Harper

Flake8 fixes for vmtests-filter

Revision history for this message
Chad Smith (chad.smith) :
290b898... by Ryan Harper

helpers/common:install_grub: Fix getopt, os-family takes a parameter

4bf91d8... by Ryan Harper

Drop default_collect_scripts class attr, it's not needed

ac47a91... by Ryan Harper

helpers/install_grub: fix i386 grub_name/grub_target; catch silent missing package exit

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
bfe3021... by Ryan Harper

Refactor distro/osfamily into enumerated class

Introduce curtin/distro.py which provides distro variant and
osfamily mapping methods. Inside we enumerate all of the known
distro names, build a distro family to variant mapping and provide
a reverse mapping for translating from one to the other.

With this in place, add a singleton-based method to utils,
get_target_distroinfo, which queries /etc/os-release inside
the target path, extracts the ID= value, and looks that up
in the list of distros and osfamilies, creating a named tuple
that is globally cached. Added accessor methods for getting
the variant or osfamily, and then used these to update
curthooks to query once and compare the value found against
the enumerated distro objects. Where target is available, methods
will now use get_target_osfamily(target=target) to obtain a value
if one is not provided. In some methods that are distro-specific
we default the osfamily to the correct value.
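
Conceptually the mapping works something like this sketch (the real
distro.py enumerates many more names; only the debian and redhat families
appear in the diff below, and everything else here is illustrative):

# osfamily -> known variants; family_of() provides the reverse lookup
OS_FAMILIES = {
    'debian': ['debian', 'ubuntu'],
    'redhat': ['centos', 'fedora', 'redhat', 'rhel'],
}

def family_of(variant):
    for family, variants in OS_FAMILIES.items():
        if variant in variants:
            return family
    raise ValueError('Unknown distro variant: %s' % variant)

print(family_of('centos'))   # -> 'redhat'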

5cbf688... by Ryan Harper

Drop use of singleton, in-use for ephemeral and target, move DistroInfo to distro.py

Revision history for this message
Ryan Harper (raharper) wrote :

OK, I've given curtin/distro.py a go. I think it works quite nicely. I'm happy to bikeshed on the attribute names (distro vs variant vs osfamily, etc.).

That's easy enough to switch around.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) :
Revision history for this message
Ryan Harper (raharper) wrote :

team review comments inline

90451c1... by Ryan Harper

Drop _target from get_distro, get_osfamily helpers

3d329af... by Ryan Harper

Refactor iscsi.get_iscsi_disks_from_config for modular use

- Introduce get_iscsi_volumes_from_config which returns a list of
  iscsi RFC 4173 URIs which can be used to construct IscsiDisk objects
  (see the sketch after this list).
- Refactor get_iscsi_disks_from_config to use get_iscsi_volumes_from_config
- Add docstrings to all get_iscsi_* methods
- Migrate the block and net detect_required_packages_mapping into their
  respective deps.py modules to avoid a dependency loop between the import
  of curtin.block and curtin.commands.block_meta
- Fix up curthooks to import the block and net deps modules
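
A short usage sketch of the refactored helpers (the storage config and the
iscsi URI are made-up examples; the function names match the diff below):

from curtin.block import iscsi

cfg = {'storage': {'config': [
    {'id': 'disk0', 'type': 'disk',
     'path': 'iscsi:192.168.1.12::3260:1:iqn.2018-08.com.example:target0'}]}}

# returns the raw rfc4173 URIs without opening any iscsi sessions
volumes = iscsi.get_iscsi_volumes_from_config(cfg)

# get_iscsi_disks_from_config() then builds IscsiDisk objects from those
# URIs, and get_iscsi_ports_from_config() collects the ports they use.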

78e38d2... by Ryan Harper

Refactor osfamily parameter, default to DISTROS.debian

Drop any 'if osfamily is None' checks since we now default to
DISTROS.debian for osfamily. Add checks that raise ValueError if
osfamily is not one of the expected values.

926fd23... by Ryan Harper

Move target_nodes_dir into function signature with default value

8411be2... by Scott Moser

Refactor util, distro and add paths.py

Rearrange package/distro-related functions out of util.py into
distro.py. Move target_path into paths.py. Adjust callers
where necessary.

aa2c622... by Ryan Harper

Drop iscsi initiator name hack, not needed

652b1c7... by Ryan Harper

Fix typo in initramfs string, make pollinate generic, check for binary in target

7c2e84a... by Ryan Harper

Add unittest for pollinate missing, drop yum_install

Add a unittest for when the pollinate binary is missing.
Drop distro.yum_install, folding settings and retries into run_apt_command
and run_yum_command.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

MP to get rid of users of util.target_path
  http://paste.ubuntu.com/p/YGBHC5tjrh/

some smaller things in line.

9276416... by Scott Moser

remove util.target_path users.

494d1f3... by Ryan Harper

Replace launchpad link with LP: #NNNN

d90b938... by Ryan Harper

Drop apt,yum retries for all commands, handle yum install in two parts

05ff544... by Ryan Harper

distro: add unittest and ensure osfamily variant is part of itself

a8d08f6... by Ryan Harper

helpers/common: map variant to os_family and update os_family switch statements

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
09a05cd... by Ryan Harper

curthooks: refactor builtin curthooks into a callable method

0b23cbd... by Ryan Harper

get_iscsi_volumes_from_config: handle curtin config and storage config

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

Ran a centos70 run on diglett:

% rm -rf ./output/; CURTIN_VMTEST_PARALLEL=4 CURTIN_VMTEST_BOOT_TIMEOUT=300 CURTIN_VMTEST_KEEP_DATA_FAIL=all ./tools/jenkins-runner --filter=target_release=centos70 -- -vv --nologcapture
...
----------------------------------------------------------------------
Ran 372 tests in 1898.647s

OK (SKIP=112)
Tue, 07 Aug 2018 16:05:46 -0500: vmtest end [0] in 1901s

The set of tests that run is:
% ./tools/vmtest-filter target_release=centos70
2018-08-07 16:20:08,785 - tests.vmtests - INFO - Logfile: /tmp/vmtest-2018-08-07T162008.784968.log . Working dir: /tmp/vmtest-2018-08-07T162008.784968
tests/vmtests/test_basic.py:Centos70XenialTestBasic
tests/vmtests/test_basic.py:Centos70XenialTestScsiBasic
tests/vmtests/test_centos_basic.py:Centos70BasicNetworkFromXenialTestBasic
tests/vmtests/test_fs_battery.py:Centos70XenialTestFsBattery
tests/vmtests/test_iscsi.py:Centos70XenialTestIscsiBasic
tests/vmtests/test_lvm.py:Centos70XenialTestLvm
tests/vmtests/test_lvm_iscsi.py:Centos70XenialTestLvmIscsi
tests/vmtests/test_lvm_root.py:Centos70XenialTestLvmRootExt4
tests/vmtests/test_lvm_root.py:Centos70XenialTestUefiLvmRootExt4
tests/vmtests/test_lvm_root.py:Centos70XenialTestUefiLvmRootXfs
tests/vmtests/test_mdadm_bcache.py:Centos70TestMirrorboot
tests/vmtests/test_mdadm_bcache.py:Centos70TestMirrorbootPartitions
tests/vmtests/test_mdadm_bcache.py:Centos70TestMirrorbootPartitionsUEFI
tests/vmtests/test_mdadm_bcache.py:Centos70TestRaid10boot
tests/vmtests/test_mdadm_bcache.py:Centos70TestRaid5boot
tests/vmtests/test_mdadm_bcache.py:Centos70TestRaid6boot
tests/vmtests/test_mdadm_iscsi.py:Centos70TestIscsiMdadm
tests/vmtests/test_multipath.py:Centos70TestMultipathBasic
tests/vmtests/test_network.py:Centos70TestNetworkBasic
tests/vmtests/test_network_alias.py:Centos70TestNetworkAlias
tests/vmtests/test_network_bonding.py:Centos70TestNetworkBonding
tests/vmtests/test_network_bridging.py:Centos70TestBridgeNetwork
tests/vmtests/test_network_ipv6.py:Centos70TestNetworkIPV6
tests/vmtests/test_network_ipv6_static.py:Centos70TestNetworkIPV6Static
tests/vmtests/test_network_ipv6_vlan.py:Centos70TestNetworkIPV6Vlan
tests/vmtests/test_network_mtu.py:Centos70TestNetworkMtu
tests/vmtests/test_network_static.py:Centos70TestNetworkStatic
tests/vmtests/test_network_vlan.py:Centos70TestNetworkVlan
tests/vmtests/test_nvme.py:Centos70TestNvme
tests/vmtests/test_simple.py:Centos70TestSimple
tests/vmtests/test_uefi_basic.py:Centos70UefiTestBasic

35b44dd... by Ryan Harper

doc: update curthooks docs

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
00a0658... by Ryan Harper

Pass in the real curtin config to builtin_curthooks

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) :
Revision history for this message
Ryan Harper (raharper) :
068ce08... by Ryan Harper

grub: require both os variant and family; pass variant to grub-install, it's fickle.

78d89b5... by Ryan Harper

Allow yum install, update, upgrade to use the two-step download/install method

979d53b... by Ryan Harper

Drop target is None checks in distro.py

adda1ab... by Ryan Harper

Add comments on use of ChrootableTarget for rpm/yum operations

Revision history for this message
Chad Smith (chad.smith) wrote :

Thanks Ryan!

A couple of nits inline, plus a significant question about detect_required_packages_mapping dropping v2 deps.

I added a pastebin to add a --features argument to the CLI, which I can do in a separate branch if you think it is a good idea.

Revision history for this message
Chad Smith (chad.smith) :
Revision history for this message
Ryan Harper (raharper) wrote :

Thanks for the review. I've replied inline.

Revision history for this message
Chad Smith (chad.smith) :
3062605... by Ryan Harper

block.deps: Add iscsi mapping to open-iscsi for debian family

Revision history for this message
Scott Moser (smoser) wrote :

2 questions

a.)
 'yum update' versus 'yum upgrade'

https://unix.stackexchange.com/questions/55777/in-centos-what-is-the-difference-between-yum-update-and-yum-upgrade

This feels like we want 'upgrade' as it is more similar to 'dist-upgrade', which is what we do in apt.

b.) I think we really still want the two-phase approach for upgrade.
 retry this: yum --downloadonly --setopt=keepcache=1 upgrade
 then run this: yum upgrade --cacheonly --downloadonly --setopt=keepcache=1

It looks like we can mostly re-use the existing 'yum_install' but just manage to set ['install'] to be ['upgrade']

We are getting there...
I know that this 'upgrade' path isn't a huge thing, but if we have it there i'd like for it to work reliably.
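
In code, the two-phase idea is roughly the following sketch (the function
name and the chroot handling are illustrative; the real branch folds this
into its yum command helpers and runs them inside the target chroot):

import subprocess

def yum_two_phase(action, packages=(), chroot_cmd=()):
    # action is 'install' or 'upgrade'; chroot_cmd stands in for whatever
    # in-target/chroot wrapper the caller uses
    base = list(chroot_cmd) + ['yum', '--assumeyes', '--quiet', action]
    base += list(packages)
    # phase 1: only download, keeping the packages in yum's cache
    subprocess.check_call(base + ['--downloadonly', '--setopt=keepcache=1'])
    # phase 2: complete the transaction purely from the local cache
    subprocess.check_call(base + ['--cacheonly'])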

4d3550b... by Ryan Harper

Reformat exception to not dangle text on the new line

Revision history for this message
Scott Moser (smoser) wrote :

I think I'm pretty much fine with this at this point.
Mega-branch, but we can take and address any issues one by one.

Assuming the following are happy, I approve:
 a.) rharper
 b.) vmtest
 c.) c-i bot

review: Approve
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

On Thu, Aug 9, 2018 at 2:30 PM Scott Moser <email address hidden> wrote:
>
> Review: Approve
>
> I think i'm pretty much fine with this at this point.
> mega-branch, but we can take and address any issues one by one.
>
> Assuming the following are happy, I approve:
> a.) rharper

+1

> b.) vmtest

I'll kick off a full run on diglett; this allows me to "hack" in an
updated curtin-hooks.py for the centos images.
However, we shouldn't land this until we get the MAAS image branch
approved and landed.

> c.) c-i bot
>
> --
> https://code.launchpad.net/~raharper/curtin/+git/curtin/+merge/349075
> You are the owner of ~raharper/curtin:feature/enable-storage-vmtest-on-centos.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
5f9e785... by Ryan Harper

Add support for redhat distros without /etc/os-release; fix centos6 grub install

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
76bc6dd... by Ryan Harper

curthooks: don't update initramfs unless we have storage config

The dracut config wasn't updated, but we still proceeded to regenerate,
wasting time when it wasn't needed. Move rpm_ command into distro.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
e172702... by Ryan Harper

Drop extra case ;; and fix Uefi installs

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
59b9b94... by Ryan Harper

Drop centos_basic vmtest, handled in test_basic and test_network now

Revision history for this message
Ryan Harper (raharper) wrote :

This passed a full vmtest run on diglett (with the new maas-image curtin-hooks injected into centos7 images).

% rm -rf output/; CURTIN_VMTEST_PARALLEL=3 CURTIN_VMTEST_BOOT_TIMEOUT=300 ./tools/jenkins-runner -vv --nologcapture tests/vmtests
Quering synced ephemeral images/kernels in /srv/images
======================================================================================
 Release Codename ImageDate Arch /SubArch Path
--------------------------------------------------------------------------------------
   12.04 precise 20170424 amd64/hwe-t precise/amd64/20170424/root-image.gz
   12.04 precise 20170424 amd64/hwe-t precise/amd64/20170424/vmtest.root-tgz
   12.04 precise 20170424.1 amd64/hwe-p precise/amd64/20170424/squashfs
   14.04 trusty 20180806 amd64/hwe-t trusty/amd64/20180806/squashfs
   14.04 trusty 20180806 amd64/hwe-x trusty/amd64/20180806/squashfs
   14.04 trusty 20180806 i386 /hwe-t trusty/i386/20180806/squashfs
   14.04 trusty 20180806 i386 /hwe-x trusty/i386/20180806/squashfs
   16.04 xenial 20180814 amd64/ga-16.04 xenial/amd64/20180814/squashfs
   16.04 xenial 20180814 amd64/hwe-16.04 xenial/amd64/20180814/squashfs
   16.04 xenial 20180814 amd64/hwe-16.04-edge xenial/amd64/20180814/squashfs
   16.04 xenial 20180814 i386 /ga-16.04 xenial/i386/20180814/squashfs
   16.04 xenial 20180814 i386 /hwe-16.04 xenial/i386/20180814/squashfs
   16.04 xenial 20180814 i386 /hwe-16.04-edge xenial/i386/20180814/squashfs
   17.04 zesty 20171219 amd64/ga-17.04 zesty/amd64/20171219/squashfs
   17.10 artful 20180718 amd64/ga-17.10 artful/amd64/20180718/squashfs
   17.10 artful 20180718 i386 /ga-17.10 artful/i386/20180718/squashfs
   18.04 bionic 20180814 amd64/ga-18.04 bionic/amd64/20180814/squashfs
   18.04 bionic 20180814 i386 /ga-18.04 bionic/i386/20180814/squashfs
   18.10 cosmic 20180813 amd64/ga-18.10 cosmic/amd64/20180813/squashfs
   18.10 cosmic 20180813 i386 /ga-18.10 cosmic/i386/20180813/squashfs
--------------------------------------------------------------------------------------
     6.6 centos66 20180501_01 amd64/generic centos66/amd64/20180501_01/root-tgz
     7.0 centos70 20180501_01 amd64/generic centos70/amd64/20180501_01/root-tgz
======================================================================================

Wed, 15 Aug 2018 14:58:02 -0500: vmtest start: nosetests3 --process-timeout=86400 --processes=3 -vv --nologcapture tests/vmtests

...

----------------------------------------------------------------------
Ran 3336 tests in 23577.156s
Wed, 15 Aug 2018 21:31:00 -0500: vmtest end [0] in 23580s

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Lee Trager (ltrager) wrote :

I have been testing this branch using the MAAS CI. Nodes in the MAAS CI have no direct access to the Internet. This is causing UEFI CentOS 7 installs to fail when running

yum --assumeyes --quiet install --downloadonly --setopt=keepcache=1 grub2-efi-x64

I made sure the image I built has grub2-efi-x64[1]. While I think it's a good feature that Curtin will automatically install missing dependencies, if those dependencies are already on the system Curtin should not try to access the Internet.

I would suggest querying RPM directly to see if a package is already installed before trying to use yum.

[root@autopkgtest /]# rpm -q grub2-efi-x64
grub2-efi-x64-2.02-0.65.el7.centos.2.x86_64
[root@autopkgtest /]# echo $?
0
[root@autopkgtest /]# rpm -q missing-package
package missing-package is not installed
[root@autopkgtest /]# echo $?
1

[1] https://code.launchpad.net/~ltrager/maas-images/centos_storage_curthooks/+merge/353351
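
A sketch of that check (the helper name is made up; later commits in this
branch implement the idea via distro.has_pkg_available and
install_missing_packages):

import subprocess

def rpm_package_installed(package, chroot_cmd=()):
    # 'rpm -q' exits 0 when the package is already present in the image,
    # so yum (and the network) only gets involved when it is not
    result = subprocess.run(list(chroot_cmd) + ['rpm', '-q', package],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

if not rpm_package_installed('grub2-efi-x64'):
    print('package missing, fall back to yum install')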

review: Needs Fixing
4e1ad40... by Ryan Harper

Move grub package install to install_missing_packages

dc92fe4... by Ryan Harper

Make distro.has_pkg_available multi-distro

2545c5f... by Ryan Harper

Build list of uefi packages and then update needed set checking if installed

e1b9d38... by Ryan Harper

vmtests: Add environ variable IMAGE_SRC_KEYRING to specify gpg key path for testing unofficial images

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
479275f... by Ryan Harper

Fix package name: grub2-efi-modules

18ea647... by Ryan Harper

Fix package name once more, grub2-efi-x64-modules

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Lee Trager (ltrager) wrote :

Latest changes allow CentOS to deploy with custom storage in the MAAS CI! Approving as custom storage works, but we still need to solve LP:1788088.

review: Approve
a66566a... by Ryan Harper

centos: UEFI only depends on grub2-efi-x64-modules

07972da... by Ryan Harper

helpers/common: make efibootmgr dump verbosely

f9c5916... by Ryan Harper

vmtest: collect /boot contents; collect efibootmgr output on UEFI

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

After some back and forth about which grub2 packages are needed, we pulled out shim and grub2-efi-x64 for now and pushed secure boot to a separate feature.

A full vmtest run with this branch against current published images has passed. I've also run all centos7 tests against the proposed images from ltrager and that has passed as well.

d1e92f6... by Ryan Harper

Allow os_variant=rhel in grub install

When RHEL is installed, the os_variant value is 'rhel'. Allow this
value to match the centos|redhat case statement for grub install.

LP: #1790756

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
a04727e... by Ryan Harper

builtin-hooks call handle_cloudconfig on centos to config maas datasource

In-image curthooks in centos images called curthooks.handle_cloudconfig.
We need to do the same in the built-in hooks if we're on centos.

LP: #1791140

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
7198fbc... by Ryan Harper

jenkins-runner: restore missing -p|--parallel cli case statement

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
0f426eb... by Ryan Harper

jenkins-runner: better quoting and add --filter foo=bar support

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

Overall I'm happy with this at this point.
If Ryan is happy and c-i is happy then I'm good.

I think that we have to rebase though. There are several '<<<<'.

You'll have to rebase.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
fafa454... by Ryan Harper

Only install multipath packages if needed

211e2ad... by Ryan Harper

jenkins-runner: always append tests to nosetest

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
11ad4ba... by Ryan Harper

jenkins-runner: handle nosetest args passed with filters and test paths

e0e9837... by Ryan Harper

Drop debug and fix empty check to size of array

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

We've updated the centos images to have the required packages for offline install. The Jenkins node didn't have access to the yum repos, so the vmtests were failing for multipath and iscsi. With that resolved, I've re-run those tests and have positive results from here:

https://jenkins.ubuntu.com/server/job/curtin-vmtest-devel-debug/103/console

Ran 40 tests in 309.102s

OK (SKIP=12)
Fri, 21 Sep 2018 06:40:34 +0000: vmtest end [0] in 316s

Preview Diff

diff --git a/curtin/__init__.py b/curtin/__init__.py
index 002454b..ee35ca3 100644
--- a/curtin/__init__.py
+++ b/curtin/__init__.py
@@ -10,6 +10,8 @@ KERNEL_CMDLINE_COPY_TO_INSTALL_SEP = "---"
 FEATURES = [
     # curtin can apply centos networking via centos_apply_network_config
     'CENTOS_APPLY_NETWORK_CONFIG',
+    # curtin can configure centos storage devices and boot devices
+    'CENTOS_CURTHOOK_SUPPORT',
     # install supports the 'network' config version 1
     'NETWORK_CONFIG_V1',
     # reporter supports 'webhook' type
diff --git a/curtin/block/__init__.py b/curtin/block/__init__.py
index b771629..490c268 100644
--- a/curtin/block/__init__.py
+++ b/curtin/block/__init__.py
@@ -1003,78 +1003,6 @@ def wipe_volume(path, mode="superblock", exclusive=True):
         raise ValueError("wipe mode %s not supported" % mode)
 
 
-def storage_config_required_packages(storage_config, mapping):
-    """Read storage configuration dictionary and determine
-    which packages are required for the supplied configuration
-    to function. Return a list of packaged to install.
-    """
-
-    if not storage_config or not isinstance(storage_config, dict):
-        raise ValueError('Invalid storage configuration. '
-                         'Must be a dict:\n %s' % storage_config)
-
-    if not mapping or not isinstance(mapping, dict):
-        raise ValueError('Invalid storage mapping. Must be a dict')
-
-    if 'storage' in storage_config:
-        storage_config = storage_config.get('storage')
-
-    needed_packages = []
-
-    # get reqs by device operation type
-    dev_configs = set(operation['type']
-                      for operation in storage_config['config'])
-
-    for dev_type in dev_configs:
-        if dev_type in mapping:
-            needed_packages.extend(mapping[dev_type])
-
-    # for any format operations, check the fstype and
-    # determine if we need any mkfs tools as well.
-    format_configs = set([operation['fstype']
-                          for operation in storage_config['config']
-                          if operation['type'] == 'format'])
-    for format_type in format_configs:
-        if format_type in mapping:
-            needed_packages.extend(mapping[format_type])
-
-    return needed_packages
-
-
-def detect_required_packages_mapping():
-    """Return a dictionary providing a versioned configuration which maps
-    storage configuration elements to the packages which are required
-    for functionality.
-
-    The mapping key is either a config type value, or an fstype value.
-
-    """
-    version = 1
-    mapping = {
-        version: {
-            'handler': storage_config_required_packages,
-            'mapping': {
-                'bcache': ['bcache-tools'],
-                'btrfs': ['btrfs-tools'],
-                'ext2': ['e2fsprogs'],
-                'ext3': ['e2fsprogs'],
-                'ext4': ['e2fsprogs'],
-                'jfs': ['jfsutils'],
-                'lvm_partition': ['lvm2'],
-                'lvm_volgroup': ['lvm2'],
-                'ntfs': ['ntfs-3g'],
-                'raid': ['mdadm'],
-                'reiserfs': ['reiserfsprogs'],
-                'xfs': ['xfsprogs'],
-                'zfsroot': ['zfsutils-linux', 'zfs-initramfs'],
-                'zfs': ['zfsutils-linux', 'zfs-initramfs'],
-                'zpool': ['zfsutils-linux', 'zfs-initramfs'],
-            },
-        },
-    }
-    return mapping
-
-
 def get_supported_filesystems():
     """ Return a list of filesystems that the kernel currently supports
     as read from /proc/filesystems.
diff --git a/curtin/block/deps.py b/curtin/block/deps.py
new file mode 100644
index 0000000..930f764
--- /dev/null
+++ b/curtin/block/deps.py
@@ -0,0 +1,103 @@
+# This file is part of curtin. See LICENSE file for copyright and license info.
+
+from curtin.distro import DISTROS
+from curtin.block import iscsi
+
+
+def storage_config_required_packages(storage_config, mapping):
+    """Read storage configuration dictionary and determine
+    which packages are required for the supplied configuration
+    to function. Return a list of packaged to install.
+    """
+
+    if not storage_config or not isinstance(storage_config, dict):
+        raise ValueError('Invalid storage configuration. '
+                         'Must be a dict:\n %s' % storage_config)
+
+    if not mapping or not isinstance(mapping, dict):
+        raise ValueError('Invalid storage mapping. Must be a dict')
+
+    if 'storage' in storage_config:
+        storage_config = storage_config.get('storage')
+
+    needed_packages = []
+
+    # get reqs by device operation type
+    dev_configs = set(operation['type']
+                      for operation in storage_config['config'])
+
+    for dev_type in dev_configs:
+        if dev_type in mapping:
+            needed_packages.extend(mapping[dev_type])
+
+    # for disks with path: iscsi: we need iscsi tools
+    iscsi_vols = iscsi.get_iscsi_volumes_from_config(storage_config)
+    if len(iscsi_vols) > 0:
+        needed_packages.extend(mapping['iscsi'])
+
+    # for any format operations, check the fstype and
+    # determine if we need any mkfs tools as well.
+    format_configs = set([operation['fstype']
+                          for operation in storage_config['config']
+                          if operation['type'] == 'format'])
+    for format_type in format_configs:
+        if format_type in mapping:
+            needed_packages.extend(mapping[format_type])
+
+    return needed_packages
+
+
+def detect_required_packages_mapping(osfamily=DISTROS.debian):
+    """Return a dictionary providing a versioned configuration which maps
+    storage configuration elements to the packages which are required
+    for functionality.
+
+    The mapping key is either a config type value, or an fstype value.
+
+    """
+    distro_mapping = {
+        DISTROS.debian: {
+            'bcache': ['bcache-tools'],
+            'btrfs': ['btrfs-tools'],
+            'ext2': ['e2fsprogs'],
+            'ext3': ['e2fsprogs'],
+            'ext4': ['e2fsprogs'],
+            'jfs': ['jfsutils'],
+            'iscsi': ['open-iscsi'],
+            'lvm_partition': ['lvm2'],
+            'lvm_volgroup': ['lvm2'],
+            'ntfs': ['ntfs-3g'],
+            'raid': ['mdadm'],
+            'reiserfs': ['reiserfsprogs'],
+            'xfs': ['xfsprogs'],
+            'zfsroot': ['zfsutils-linux', 'zfs-initramfs'],
+            'zfs': ['zfsutils-linux', 'zfs-initramfs'],
+            'zpool': ['zfsutils-linux', 'zfs-initramfs'],
+        },
+        DISTROS.redhat: {
+            'bcache': [],
+            'btrfs': ['btrfs-progs'],
+            'ext2': ['e2fsprogs'],
+            'ext3': ['e2fsprogs'],
+            'ext4': ['e2fsprogs'],
+            'jfs': [],
+            'iscsi': ['iscsi-initiator-utils'],
+            'lvm_partition': ['lvm2'],
+            'lvm_volgroup': ['lvm2'],
+            'ntfs': [],
+            'raid': ['mdadm'],
+            'reiserfs': [],
+            'xfs': ['xfsprogs'],
+            'zfsroot': [],
+            'zfs': [],
+            'zpool': [],
+        },
+    }
+    if osfamily not in distro_mapping:
+        raise ValueError('No block package mapping for distro: %s' % osfamily)
+
+    return {1: {'handler': storage_config_required_packages,
+                'mapping': distro_mapping.get(osfamily)}}
+
+
+# vi: ts=4 expandtab syntax=python
diff --git a/curtin/block/iscsi.py b/curtin/block/iscsi.py
index 0c666b6..3c46500 100644
--- a/curtin/block/iscsi.py
+++ b/curtin/block/iscsi.py
@@ -9,7 +9,7 @@ import os
 import re
 import shutil
 
-from curtin import (util, udev)
+from curtin import (paths, util, udev)
 from curtin.block import (get_device_slave_knames,
                           path_to_kname)
 
@@ -230,29 +230,45 @@ def connected_disks():
     return _ISCSI_DISKS
 
 
-def get_iscsi_disks_from_config(cfg):
+def get_iscsi_volumes_from_config(cfg):
     """Parse a curtin storage config and return a list
-    of iscsi disk objects for each configuration present
+    of iscsi disk rfc4173 uris for each configuration present.
     """
     if not cfg:
         cfg = {}
 
-    sconfig = cfg.get('storage', {}).get('config', {})
-    if not sconfig:
+    if 'storage' in cfg:
+        sconfig = cfg.get('storage', {}).get('config', [])
+    else:
+        sconfig = cfg.get('config', [])
+    if not sconfig or not isinstance(sconfig, list):
         LOG.warning('Configuration dictionary did not contain'
                     ' a storage configuration')
        return []
 
+    return [disk['path'] for disk in sconfig
+            if disk['type'] == 'disk' and
+            disk.get('path', "").startswith('iscsi:')]
+
+
+def get_iscsi_disks_from_config(cfg):
+    """Return a list of IscsiDisk objects for each iscsi volume present."""
     # Construct IscsiDisk objects for each iscsi volume present
-    iscsi_disks = [IscsiDisk(disk['path']) for disk in sconfig
-                   if disk['type'] == 'disk' and
-                   disk.get('path', "").startswith('iscsi:')]
+    iscsi_disks = [IscsiDisk(volume) for volume in
+                   get_iscsi_volumes_from_config(cfg)]
     LOG.debug('Found %s iscsi disks in storage config', len(iscsi_disks))
     return iscsi_disks
 
 
+def get_iscsi_ports_from_config(cfg):
+    """Return a set of ports that may be used when connecting to volumes."""
+    ports = set([d.port for d in get_iscsi_disks_from_config(cfg)])
+    LOG.debug('Found iscsi ports in use: %s', ports)
+    return ports
+
+
 def disconnect_target_disks(target_root_path=None):
-    target_nodes_path = util.target_path(target_root_path, '/etc/iscsi/nodes')
+    target_nodes_path = paths.target_path(target_root_path, '/etc/iscsi/nodes')
     fails = []
     if os.path.isdir(target_nodes_path):
         for target in os.listdir(target_nodes_path):
diff --git a/curtin/block/lvm.py b/curtin/block/lvm.py
index eca64f6..b3f8bcb 100644
--- a/curtin/block/lvm.py
+++ b/curtin/block/lvm.py
@@ -4,6 +4,7 @@
 This module provides some helper functions for manipulating lvm devices
 """
 
+from curtin import distro
 from curtin import util
 from curtin.log import LOG
 import os
@@ -88,7 +89,7 @@ def lvm_scan(activate=True):
     # before appending the cache flag though, check if lvmetad is running. this
     # ensures that we do the right thing even if lvmetad is supported but is
     # not running
-    release = util.lsb_release().get('codename')
+    release = distro.lsb_release().get('codename')
     if release in [None, 'UNAVAILABLE']:
         LOG.warning('unable to find release number, assuming xenial or later')
         release = 'xenial'
diff --git a/curtin/block/mdadm.py b/curtin/block/mdadm.py
index 8eff7fb..4ad6aa7 100644
--- a/curtin/block/mdadm.py
+++ b/curtin/block/mdadm.py
@@ -13,6 +13,7 @@ import time
 
 from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path)
 from curtin.block import get_holders
+from curtin.distro import lsb_release
 from curtin import (util, udev)
 from curtin.log import LOG
 
@@ -95,7 +96,7 @@ VALID_RAID_ARRAY_STATES = (
 checks the mdadm version and will return True if we can use --export
 for key=value list with enough info, false if version is less than
 '''
-MDADM_USE_EXPORT = util.lsb_release()['codename'] not in ['precise', 'trusty']
+MDADM_USE_EXPORT = lsb_release()['codename'] not in ['precise', 'trusty']
 
 #
 # mdadm executors
diff --git a/curtin/block/mkfs.py b/curtin/block/mkfs.py
index f39017c..4a1e1f9 100644
--- a/curtin/block/mkfs.py
+++ b/curtin/block/mkfs.py
@@ -3,8 +3,9 @@
 # This module wraps calls to mkfs.<fstype> and determines the appropriate flags
 # for each filesystem type
 
-from curtin import util
 from curtin import block
+from curtin import distro
+from curtin import util
 
 import string
 import os
@@ -102,7 +103,7 @@ def valid_fstypes():
 
 def get_flag_mapping(flag_name, fs_family, param=None, strict=False):
     ret = []
-    release = util.lsb_release()['codename']
+    release = distro.lsb_release()['codename']
     overrides = release_flag_mapping_overrides.get(release, {})
     if flag_name in overrides and fs_family in overrides[flag_name]:
         flag_sym = overrides[flag_name][fs_family]
diff --git a/curtin/block/zfs.py b/curtin/block/zfs.py
index e279ab6..5615144 100644
--- a/curtin/block/zfs.py
+++ b/curtin/block/zfs.py
@@ -7,6 +7,7 @@ and volumes."""
 import os
 
 from curtin.config import merge_config
+from curtin import distro
 from curtin import util
 from . import blkid, get_supported_filesystems
 
@@ -90,7 +91,7 @@ def zfs_assert_supported():
     if arch in ZFS_UNSUPPORTED_ARCHES:
         raise RuntimeError("zfs is not supported on architecture: %s" % arch)
 
-    release = util.lsb_release()['codename']
+    release = distro.lsb_release()['codename']
     if release in ZFS_UNSUPPORTED_RELEASES:
         raise RuntimeError("zfs is not supported on release: %s" % release)
 
diff --git a/curtin/commands/apply_net.py b/curtin/commands/apply_net.py
index ffd474e..ddc5056 100644
--- a/curtin/commands/apply_net.py
+++ b/curtin/commands/apply_net.py
@@ -7,6 +7,7 @@ from .. import log
 import curtin.net as net
 import curtin.util as util
 from curtin import config
+from curtin import paths
 from . import populate_one_subcmd
 
 
@@ -123,7 +124,7 @@ def _patch_ifupdown_ipv6_mtu_hook(target,
 
     for hook in ['prehook', 'posthook']:
         fn = hookfn[hook]
-        cfg = util.target_path(target, path=fn)
+        cfg = paths.target_path(target, path=fn)
         LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg)
         util.write_file(cfg, contents[hook], mode=0o755)
 
@@ -136,7 +137,7 @@ def _disable_ipv6_privacy_extensions(target,
     Resolve this by allowing the cloud-image setting to win. """
 
     LOG.debug('Attempting to remove ipv6 privacy extensions')
-    cfg = util.target_path(target, path=path)
+    cfg = paths.target_path(target, path=path)
     if not os.path.exists(cfg):
         LOG.warn('Failed to find ipv6 privacy conf file %s', cfg)
         return
@@ -182,7 +183,7 @@ def _maybe_remove_legacy_eth0(target,
     - with unknown content, leave it and warn
     """
 
-    cfg = util.target_path(target, path=path)
+    cfg = paths.target_path(target, path=path)
     if not os.path.exists(cfg):
         LOG.warn('Failed to find legacy network conf file %s', cfg)
         return
diff --git a/curtin/commands/apt_config.py b/curtin/commands/apt_config.py
index 41c329e..9ce25b3 100644
--- a/curtin/commands/apt_config.py
+++ b/curtin/commands/apt_config.py
@@ -13,7 +13,7 @@ import sys
 import yaml
 
 from curtin.log import LOG
-from curtin import (config, util, gpg)
+from curtin import (config, distro, gpg, paths, util)
 
 from . import populate_one_subcmd
 
@@ -61,7 +61,7 @@ def handle_apt(cfg, target=None):
     curthooks if a global apt config was provided or via the "apt"
     standalone command.
     """
-    release = util.lsb_release(target=target)['codename']
+    release = distro.lsb_release(target=target)['codename']
     arch = util.get_architecture(target)
     mirrors = find_apt_mirror_info(cfg, arch)
     LOG.debug("Apt Mirror info: %s", mirrors)
@@ -148,7 +148,7 @@ def apply_debconf_selections(cfg, target=None):
         pkg = re.sub(r"[:\s].*", "", line)
         pkgs_cfgd.add(pkg)
 
-    pkgs_installed = util.get_installed_packages(target)
+    pkgs_installed = distro.get_installed_packages(target)
 
     LOG.debug("pkgs_cfgd: %s", pkgs_cfgd)
     LOG.debug("pkgs_installed: %s", pkgs_installed)
@@ -164,7 +164,7 @@ def apply_debconf_selections(cfg, target=None):
 def clean_cloud_init(target):
     """clean out any local cloud-init config"""
     flist = glob.glob(
-        util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*"))
+        paths.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*"))
 
     LOG.debug("cleaning cloud-init config from: %s", flist)
     for dpkg_cfg in flist:
@@ -194,7 +194,7 @@ def rename_apt_lists(new_mirrors, target=None):
     """rename_apt_lists - rename apt lists to preserve old cache data"""
     default_mirrors = get_default_mirrors(util.get_architecture(target))
 
-    pre = util.target_path(target, APT_LISTS)
+    pre = paths.target_path(target, APT_LISTS)
     for (name, omirror) in default_mirrors.items():
         nmirror = new_mirrors.get(name)
         if not nmirror:
@@ -299,7 +299,7 @@ def generate_sources_list(cfg, release, mirrors, target=None):
     if tmpl is None:
         LOG.info("No custom template provided, fall back to modify"
                  "mirrors in %s on the target system", aptsrc)
-        tmpl = util.load_file(util.target_path(target, aptsrc))
+        tmpl = util.load_file(paths.target_path(target, aptsrc))
         # Strategy if no custom template was provided:
         # - Only replacing mirrors
         # - no reason to replace "release" as it is from target anyway
@@ -310,24 +310,24 @@ def generate_sources_list(cfg, release, mirrors, target=None):
         tmpl = mirror_to_placeholder(tmpl, default_mirrors['SECURITY'],
                                      "$SECURITY")
 
-    orig = util.target_path(target, aptsrc)
+    orig = paths.target_path(target, aptsrc)
     if os.path.exists(orig):
         os.rename(orig, orig + ".curtin.old")
 
     rendered = util.render_string(tmpl, params)
     disabled = disable_suites(cfg.get('disable_suites'), rendered, release)
-    util.write_file(util.target_path(target, aptsrc), disabled, mode=0o644)
+    util.write_file(paths.target_path(target, aptsrc), disabled, mode=0o644)
 
     # protect the just generated sources.list from cloud-init
     cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg"
     # this has to work with older cloud-init as well, so use old key
     cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1)
     try:
-        util.write_file(util.target_path(target, cloudfile),
+        util.write_file(paths.target_path(target, cloudfile),
                         cloudconf, mode=0o644)
     except IOError:
         LOG.exception("Failed to protect source.list from cloud-init in (%s)",
-                      util.target_path(target, cloudfile))
+                      paths.target_path(target, cloudfile))
         raise
 
 
@@ -409,7 +409,7 @@ def add_apt_sources(srcdict, target=None, template_params=None,
                     raise
             continue
 
-        sourcefn = util.target_path(target, ent['filename'])
+        sourcefn = paths.target_path(target, ent['filename'])
         try:
             contents = "%s\n" % (source)
             util.write_file(sourcefn, contents, omode="a")
@@ -417,8 +417,8 @@ def add_apt_sources(srcdict, target=None, template_params=None,
             LOG.exception("failed write to file %s: %s", sourcefn, detail)
             raise
 
-        util.apt_update(target=target, force=True,
-                        comment="apt-source changed config")
+        distro.apt_update(target=target, force=True,
+                          comment="apt-source changed config")
 
     return
 
diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
index 6bd430d..197c1fd 100644
--- a/curtin/commands/block_meta.py
+++ b/curtin/commands/block_meta.py
@@ -1,8 +1,9 @@
 # This file is part of curtin. See LICENSE file for copyright and license info.
 
 from collections import OrderedDict, namedtuple
-from curtin import (block, config, util)
+from curtin import (block, config, paths, util)
 from curtin.block import (bcache, mdadm, mkfs, clear_holders, lvm, iscsi, zfs)
+from curtin import distro
 from curtin.log import LOG, logged_time
 from curtin.reporter import events
 
@@ -730,12 +731,12 @@ def mount_fstab_data(fdata, target=None):
 
     :param fdata: a FstabData type
     :return None."""
-    mp = util.target_path(target, fdata.path)
+    mp = paths.target_path(target, fdata.path)
     if fdata.device:
         device = fdata.device
     else:
         if fdata.spec.startswith("/") and not fdata.spec.startswith("/dev/"):
-            device = util.target_path(target, fdata.spec)
+            device = paths.target_path(target, fdata.spec)
         else:
             device = fdata.spec
 
@@ -856,7 +857,7 @@ def lvm_partition_handler(info, storage_config):
     # Use 'wipesignatures' (if available) and 'zero' to clear target lv
     # of any fs metadata
     cmd = ["lvcreate", volgroup, "--name", name, "--zero=y"]
-    release = util.lsb_release()['codename']
+    release = distro.lsb_release()['codename']
     if release not in ['precise', 'trusty']:
         cmd.extend(["--wipesignatures=y"])
 
diff --git a/curtin/commands/curthooks.py b/curtin/commands/curthooks.py
index f9a5a66..480eca4 100644
--- a/curtin/commands/curthooks.py
+++ b/curtin/commands/curthooks.py
@@ -11,12 +11,18 @@ import textwrap
 
 from curtin import config
 from curtin import block
+from curtin import distro
+from curtin.block import iscsi
 from curtin import net
 from curtin import futil
 from curtin.log import LOG
+from curtin import paths
 from curtin import swap
 from curtin import util
 from curtin import version as curtin_version
+from curtin.block import deps as bdeps
+from curtin.distro import DISTROS
+from curtin.net import deps as ndeps
 from curtin.reporter import events
 from curtin.commands import apply_net, apt_config
 from curtin.url_helper import get_maas_version
@@ -173,10 +179,10 @@ def install_kernel(cfg, target):
     # target only has required packages installed. See LP:1640519
     fk_packages = get_flash_kernel_pkgs()
     if fk_packages:
-        util.install_packages(fk_packages.split(), target=target)
+        distro.install_packages(fk_packages.split(), target=target)
 
     if kernel_package:
-        util.install_packages([kernel_package], target=target)
+        distro.install_packages([kernel_package], target=target)
         return
 
     # uname[2] is kernel name (ie: 3.16.0-7-generic)
@@ -193,24 +199,24 @@ def install_kernel(cfg, target):
         LOG.warn("Couldn't detect kernel package to install for %s."
                  % kernel)
         if kernel_fallback is not None:
-            util.install_packages([kernel_fallback], target=target)
+            distro.install_packages([kernel_fallback], target=target)
         return
 
     package = "linux-{flavor}{map_suffix}".format(
         flavor=flavor, map_suffix=map_suffix)
 
-    if util.has_pkg_available(package, target):
-        if util.has_pkg_installed(package, target):
+    if distro.has_pkg_available(package, target):
+        if distro.has_pkg_installed(package, target):
             LOG.debug("Kernel package '%s' already installed", package)
         else:
             LOG.debug("installing kernel package '%s'", package)
-            util.install_packages([package], target=target)
+            distro.install_packages([package], target=target)
     else:
         if kernel_fallback is not None:
             LOG.info("Kernel package '%s' not available. "
                      "Installing fallback package '%s'.",
                      package, kernel_fallback)
-            util.install_packages([kernel_fallback], target=target)
+            distro.install_packages([kernel_fallback], target=target)
         else:
             LOG.warn("Kernel package '%s' not available and no fallback."
                      " System may not boot.", package)
@@ -273,7 +279,7 @@ def uefi_reorder_loaders(grubcfg, target):
         LOG.debug("Currently booted UEFI loader might no longer boot.")
 
 
-def setup_grub(cfg, target):
+def setup_grub(cfg, target, osfamily=DISTROS.debian):
     # target is the path to the mounted filesystem
 
     # FIXME: these methods need moving to curtin.block
@@ -353,24 +359,6 @@ def setup_grub(cfg, target):
     else:
         instdevs = list(blockdevs)
 
-    # UEFI requires grub-efi-{arch}. If a signed version of that package
-    # exists then it will be installed.
-    if util.is_uefi_bootable():
-        arch = util.get_architecture()
-        pkgs = ['grub-efi-%s' % arch]
-
-        # Architecture might support a signed UEFI loader
-        uefi_pkg_signed = 'grub-efi-%s-signed' % arch
-        if util.has_pkg_available(uefi_pkg_signed):
-            pkgs.append(uefi_pkg_signed)
-
-        # AMD64 has shim-signed for SecureBoot support
-        if arch == "amd64":
-            pkgs.append("shim-signed")
-
-        # Install the UEFI packages needed for the architecture
-        util.install_packages(pkgs, target=target)
-
     env = os.environ.copy()
 
     replace_default = grubcfg.get('replace_linux_default', True)
@@ -399,6 +387,7 @@ def setup_grub(cfg, target):
     else:
         LOG.debug("NOT enabling UEFI nvram updates")
         LOG.debug("Target system may not boot")
+    args.append('--os-family=%s' % osfamily)
     args.append(target)
 
     # capture stdout and stderr joined.
@@ -435,14 +424,21 @@ def copy_crypttab(crypttab, target):
     shutil.copy(crypttab, os.path.sep.join([target, 'etc/crypttab']))
 
 
-def copy_iscsi_conf(nodes_dir, target):
+def copy_iscsi_conf(nodes_dir, target, target_nodes_dir='etc/iscsi/nodes'):
     if not nodes_dir:
         LOG.warn("nodes directory must be specified, not copying")
         return
 
     LOG.info("copying iscsi nodes database into target")
-    shutil.copytree(nodes_dir, os.path.sep.join([target,
-                                                 'etc/iscsi/nodes']))
+    tdir = os.path.sep.join([target, target_nodes_dir])
+    if not os.path.exists(tdir):
+        shutil.copytree(nodes_dir, tdir)
+    else:
+        # if /etc/iscsi/nodes exists, copy dirs underneath
+        for ndir in os.listdir(nodes_dir):
+            source_dir = os.path.join(nodes_dir, ndir)
+            target_dir = os.path.join(tdir, ndir)
+            shutil.copytree(source_dir, target_dir)
 
 
 def copy_mdadm_conf(mdadm_conf, target):
@@ -486,7 +482,7 @@ def copy_dname_rules(rules_d, target):
     if not rules_d:
         LOG.warn("no udev rules directory to copy")
488 return484 return
489 target_rules_dir = util.target_path(target, "etc/udev/rules.d")485 target_rules_dir = paths.target_path(target, "etc/udev/rules.d")
490 for rule in os.listdir(rules_d):486 for rule in os.listdir(rules_d):
491 target_file = os.path.join(target_rules_dir, rule)487 target_file = os.path.join(target_rules_dir, rule)
492 shutil.copy(os.path.join(rules_d, rule), target_file)488 shutil.copy(os.path.join(rules_d, rule), target_file)
@@ -532,11 +528,19 @@ def add_swap(cfg, target, fstab):
532 maxsize=maxsize)528 maxsize=maxsize)
533529
534530
535def detect_and_handle_multipath(cfg, target):531def detect_and_handle_multipath(cfg, target, osfamily=DISTROS.debian):
536 DEFAULT_MULTIPATH_PACKAGES = ['multipath-tools-boot']532 DEFAULT_MULTIPATH_PACKAGES = {
533 DISTROS.debian: ['multipath-tools-boot'],
534 DISTROS.redhat: ['device-mapper-multipath'],
535 }
536 if osfamily not in DEFAULT_MULTIPATH_PACKAGES:
537 raise ValueError(
538 'No multipath package mapping for distro: %s' % osfamily)
539
537 mpcfg = cfg.get('multipath', {})540 mpcfg = cfg.get('multipath', {})
538 mpmode = mpcfg.get('mode', 'auto')541 mpmode = mpcfg.get('mode', 'auto')
539 mppkgs = mpcfg.get('packages', DEFAULT_MULTIPATH_PACKAGES)542 mppkgs = mpcfg.get('packages',
543 DEFAULT_MULTIPATH_PACKAGES.get(osfamily))
540 mpbindings = mpcfg.get('overwrite_bindings', True)544 mpbindings = mpcfg.get('overwrite_bindings', True)
541545
542 if isinstance(mppkgs, str):546 if isinstance(mppkgs, str):
@@ -549,23 +553,28 @@ def detect_and_handle_multipath(cfg, target):
549 return553 return
550554
551 LOG.info("Detected multipath devices. Installing support via %s", mppkgs)555 LOG.info("Detected multipath devices. Installing support via %s", mppkgs)
556 needed = [pkg for pkg in mppkgs if pkg
557 not in distro.get_installed_packages(target)]
558 if needed:
559 distro.install_packages(needed, target=target, osfamily=osfamily)
552560
553 util.install_packages(mppkgs, target=target)
554 replace_spaces = True561 replace_spaces = True
555 try:562 if osfamily == DISTROS.debian:
556 # check in-target version563 try:
557 pkg_ver = util.get_package_version('multipath-tools', target=target)564 # check in-target version
558 LOG.debug("get_package_version:\n%s", pkg_ver)565 pkg_ver = distro.get_package_version('multipath-tools',
559 LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)",566 target=target)
560 pkg_ver['semantic_version'], pkg_ver['major'],567 LOG.debug("get_package_version:\n%s", pkg_ver)
561 pkg_ver['minor'], pkg_ver['micro'])568 LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)",
562 # multipath-tools versions < 0.5.0 do _NOT_ want whitespace replaced569 pkg_ver['semantic_version'], pkg_ver['major'],
563 # i.e. 0.4.X in Trusty.570 pkg_ver['minor'], pkg_ver['micro'])
564 if pkg_ver['semantic_version'] < 500:571 # multipath-tools versions < 0.5.0 do _NOT_
565 replace_spaces = False572 # want whitespace replaced i.e. 0.4.X in Trusty.
566 except Exception as e:573 if pkg_ver['semantic_version'] < 500:
567 LOG.warn("failed reading multipath-tools version, "574 replace_spaces = False
568 "assuming it wants no spaces in wwids: %s", e)575 except Exception as e:
576 LOG.warn("failed reading multipath-tools version, "
577 "assuming it wants no spaces in wwids: %s", e)
569578
570 multipath_cfg_path = os.path.sep.join([target, '/etc/multipath.conf'])579 multipath_cfg_path = os.path.sep.join([target, '/etc/multipath.conf'])
571 multipath_bind_path = os.path.sep.join([target, '/etc/multipath/bindings'])580 multipath_bind_path = os.path.sep.join([target, '/etc/multipath/bindings'])
@@ -574,7 +583,7 @@ def detect_and_handle_multipath(cfg, target):
574 if not os.path.isfile(multipath_cfg_path):583 if not os.path.isfile(multipath_cfg_path):
575 # Without user_friendly_names option enabled system fails to boot584 # Without user_friendly_names option enabled system fails to boot
576 # if any of the disks has spaces in its name. Package multipath-tools585 # if any of the disks has spaces in its name. Package multipath-tools
577 # has bug opened for this issue (LP: 1432062) but it was not fixed yet.586 # has bug opened for this issue LP: #1432062 but it was not fixed yet.
578 multipath_cfg_content = '\n'.join(587 multipath_cfg_content = '\n'.join(
579 ['# This file was created by curtin while installing the system.',588 ['# This file was created by curtin while installing the system.',
580 'defaults {',589 'defaults {',
@@ -593,7 +602,13 @@ def detect_and_handle_multipath(cfg, target):
593 mpname = "mpath0"602 mpname = "mpath0"
594 grub_dev = "/dev/mapper/" + mpname603 grub_dev = "/dev/mapper/" + mpname
595 if partno is not None:604 if partno is not None:
596 grub_dev += "-part%s" % partno605 if osfamily == DISTROS.debian:
606 grub_dev += "-part%s" % partno
607 elif osfamily == DISTROS.redhat:
608 grub_dev += "p%s" % partno
609 else:
610 raise ValueError(
611 'Unknown grub_dev mapping for distro: %s' % osfamily)
597612
598 LOG.debug("configuring multipath install for root=%s wwid=%s",613 LOG.debug("configuring multipath install for root=%s wwid=%s",
599 grub_dev, wwid)614 grub_dev, wwid)
@@ -606,31 +621,54 @@ def detect_and_handle_multipath(cfg, target):
606 ''])621 ''])
607 util.write_file(multipath_bind_path, content=multipath_bind_content)622 util.write_file(multipath_bind_path, content=multipath_bind_content)
608623
609 grub_cfg = os.path.sep.join(624 if osfamily == DISTROS.debian:
610 [target, '/etc/default/grub.d/50-curtin-multipath.cfg'])625 grub_cfg = os.path.sep.join(
626 [target, '/etc/default/grub.d/50-curtin-multipath.cfg'])
627 omode = 'w'
628 elif osfamily == DISTROS.redhat:
629 grub_cfg = os.path.sep.join([target, '/etc/default/grub'])
630 omode = 'a'
631 else:
632 raise ValueError(
633 'Unknown grub_cfg mapping for distro: %s' % osfamily)
634
611 msg = '\n'.join([635 msg = '\n'.join([
612 '# Written by curtin for multipath device wwid "%s"' % wwid,636 '# Written by curtin for multipath device %s %s' % (mpname, wwid),
613 'GRUB_DEVICE=%s' % grub_dev,637 'GRUB_DEVICE=%s' % grub_dev,
614 'GRUB_DISABLE_LINUX_UUID=true',638 'GRUB_DISABLE_LINUX_UUID=true',
615 ''])639 ''])
616 util.write_file(grub_cfg, content=msg)640 util.write_file(grub_cfg, omode=omode, content=msg)
617
618 else:641 else:
619 LOG.warn("Not sure how this will boot")642 LOG.warn("Not sure how this will boot")
620643
621 # Initramfs needs to be updated to include /etc/multipath.conf644 if osfamily == DISTROS.debian:
622 # and /etc/multipath/bindings files.645 # Initramfs needs to be updated to include /etc/multipath.conf
623 update_initramfs(target, all_kernels=True)646 # and /etc/multipath/bindings files.
647 update_initramfs(target, all_kernels=True)
648 elif osfamily == DISTROS.redhat:
649 # Write out initramfs/dracut config for multipath
650 dracut_conf_multipath = os.path.sep.join(
651 [target, '/etc/dracut.conf.d/10-curtin-multipath.conf'])
652 msg = '\n'.join([
653 '# Written by curtin for multipath device wwid "%s"' % wwid,
654 'force_drivers+=" dm-multipath "',
655 'add_dracutmodules+="multipath"',
656 'install_items+="/etc/multipath.conf /etc/multipath/bindings"',
657 ''])
658 util.write_file(dracut_conf_multipath, content=msg)
659 else:
660 raise ValueError(
661 'Unknown initramfs mapping for distro: %s' % osfamily)
624662
625663
626def detect_required_packages(cfg):664def detect_required_packages(cfg, osfamily=DISTROS.debian):
627 """665 """
628 detect packages that will be required in-target by custom config items666 detect packages that will be required in-target by custom config items
629 """667 """
630668
631 mapping = {669 mapping = {
632 'storage': block.detect_required_packages_mapping(),670 'storage': bdeps.detect_required_packages_mapping(osfamily=osfamily),
633 'network': net.detect_required_packages_mapping(),671 'network': ndeps.detect_required_packages_mapping(osfamily=osfamily),
634 }672 }
635673
636 needed_packages = []674 needed_packages = []
@@ -657,16 +695,16 @@ def detect_required_packages(cfg):
657 return needed_packages695 return needed_packages
658696
659697
660def install_missing_packages(cfg, target):698def install_missing_packages(cfg, target, osfamily=DISTROS.debian):
661 ''' describe which operation types will require specific packages699 ''' describe which operation types will require specific packages
662700
663 'custom_config_key': {701 'custom_config_key': {
664 'pkg1': ['op_name_1', 'op_name_2', ...]702 'pkg1': ['op_name_1', 'op_name_2', ...]
665 }703 }
666 '''704 '''
667705 installed_packages = distro.get_installed_packages(target)
668 installed_packages = util.get_installed_packages(target)706 needed_packages = set([pkg for pkg in
669 needed_packages = set([pkg for pkg in detect_required_packages(cfg)707 detect_required_packages(cfg, osfamily=osfamily)
670 if pkg not in installed_packages])708 if pkg not in installed_packages])
671709
672 arch_packages = {710 arch_packages = {
@@ -678,6 +716,31 @@ def install_missing_packages(cfg, target):
678 if pkg not in needed_packages:716 if pkg not in needed_packages:
679 needed_packages.add(pkg)717 needed_packages.add(pkg)
680718
719 # UEFI requires grub-efi-{arch}. If a signed version of that package
720 # exists then it will be installed.
721 if util.is_uefi_bootable():
722 uefi_pkgs = []
723 if osfamily == DISTROS.redhat:
724 # centos/redhat doesn't support 32-bit?
725 uefi_pkgs.extend(['grub2-efi-x64-modules'])
726 elif osfamily == DISTROS.debian:
727 arch = util.get_architecture()
728 uefi_pkgs.append('grub-efi-%s' % arch)
729
730 # Architecture might support a signed UEFI loader
731 uefi_pkg_signed = 'grub-efi-%s-signed' % arch
732 if distro.has_pkg_available(uefi_pkg_signed):
733 uefi_pkgs.append(uefi_pkg_signed)
734
735 # AMD64 has shim-signed for SecureBoot support
736 if arch == "amd64":
737 uefi_pkgs.append("shim-signed")
738 else:
739 raise ValueError('Unknown grub2 package list for distro: %s' %
740 osfamily)
741 needed_packages.update([pkg for pkg in uefi_pkgs
742 if pkg not in installed_packages])
743
681 # Filter out ifupdown network packages on netplan enabled systems.744 # Filter out ifupdown network packages on netplan enabled systems.
682 has_netplan = ('nplan' in installed_packages or745 has_netplan = ('nplan' in installed_packages or
683 'netplan.io' in installed_packages)746 'netplan.io' in installed_packages)
@@ -696,10 +759,10 @@ def install_missing_packages(cfg, target):
696 reporting_enabled=True, level="INFO",759 reporting_enabled=True, level="INFO",
697 description="Installing packages on target system: " +760 description="Installing packages on target system: " +
698 str(to_add)):761 str(to_add)):
699 util.install_packages(to_add, target=target)762 distro.install_packages(to_add, target=target, osfamily=osfamily)
700763
701764
702def system_upgrade(cfg, target):765def system_upgrade(cfg, target, osfamily=DISTROS.debian):
703 """run system-upgrade (apt-get dist-upgrade) or other in target.766 """run system-upgrade (apt-get dist-upgrade) or other in target.
704767
705 config:768 config:
@@ -718,7 +781,7 @@ def system_upgrade(cfg, target):
718 LOG.debug("system_upgrade disabled by config.")781 LOG.debug("system_upgrade disabled by config.")
719 return782 return
720783
721 util.system_upgrade(target=target)784 distro.system_upgrade(target=target, osfamily=osfamily)
722785
723786
724def inject_pollinate_user_agent_config(ua_cfg, target):787def inject_pollinate_user_agent_config(ua_cfg, target):
@@ -728,7 +791,7 @@ def inject_pollinate_user_agent_config(ua_cfg, target):
728 if not isinstance(ua_cfg, dict):791 if not isinstance(ua_cfg, dict):
729 raise ValueError('ua_cfg is not a dictionary: %s', ua_cfg)792 raise ValueError('ua_cfg is not a dictionary: %s', ua_cfg)
730793
731 pollinate_cfg = util.target_path(target, '/etc/pollinate/add-user-agent')794 pollinate_cfg = paths.target_path(target, '/etc/pollinate/add-user-agent')
732 comment = "# written by curtin"795 comment = "# written by curtin"
733 content = "\n".join(["%s/%s %s" % (ua_key, ua_val, comment)796 content = "\n".join(["%s/%s %s" % (ua_key, ua_val, comment)
734 for ua_key, ua_val in ua_cfg.items()]) + "\n"797 for ua_key, ua_val in ua_cfg.items()]) + "\n"
@@ -751,6 +814,8 @@ def handle_pollinate_user_agent(cfg, target):
751 curtin version814 curtin version
752 maas version (via endpoint URL, if present)815 maas version (via endpoint URL, if present)
753 """816 """
817 if not util.which('pollinate', target=target):
818 return
754819
755 pcfg = cfg.get('pollinate')820 pcfg = cfg.get('pollinate')
756 if not isinstance(pcfg, dict):821 if not isinstance(pcfg, dict):
@@ -776,6 +841,63 @@ def handle_pollinate_user_agent(cfg, target):
776 inject_pollinate_user_agent_config(uacfg, target)841 inject_pollinate_user_agent_config(uacfg, target)
777842
778843
844def configure_iscsi(cfg, state_etcd, target, osfamily=DISTROS.debian):
845 # If a /etc/iscsi/nodes/... file was created by block_meta then it
846 # needs to be copied onto the target system
847 nodes = os.path.join(state_etcd, "nodes")
848 if not os.path.exists(nodes):
849 return
850
851 LOG.info('Iscsi configuration found, enabling service')
852 if osfamily == DISTROS.redhat:
853 # copy iscsi node config to target image
854 LOG.debug('Copying iscsi node config to target')
855 copy_iscsi_conf(nodes, target, target_nodes_dir='var/lib/iscsi/nodes')
856
857 # update in-target config
858 with util.ChrootableTarget(target) as in_chroot:
859 # enable iscsid service
860 LOG.debug('Enabling iscsi daemon')
861 in_chroot.subp(['chkconfig', 'iscsid', 'on'])
862
863 # update selinux config for iscsi ports required
864 for port in [str(port) for port in
865 iscsi.get_iscsi_ports_from_config(cfg)]:
866 LOG.debug('Adding iscsi port %s to selinux iscsi_port_t list',
867 port)
868 in_chroot.subp(['semanage', 'port', '-a', '-t',
869 'iscsi_port_t', '-p', 'tcp', port])
870
871 elif osfamily == DISTROS.debian:
872 copy_iscsi_conf(nodes, target)
873 else:
874 raise ValueError(
875 'Unknown iscsi requirements for distro: %s' % osfamily)
876
877
878def configure_mdadm(cfg, state_etcd, target, osfamily=DISTROS.debian):
879 # If a mdadm.conf file was created by block_meta than it needs
880 # to be copied onto the target system
881 mdadm_location = os.path.join(state_etcd, "mdadm.conf")
882 if not os.path.exists(mdadm_location):
883 return
884
885 conf_map = {
886 DISTROS.debian: 'etc/mdadm/mdadm.conf',
887 DISTROS.redhat: 'etc/mdadm.conf',
888 }
889 if osfamily not in conf_map:
890 raise ValueError(
891 'Unknown mdadm conf mapping for distro: %s' % osfamily)
892 LOG.info('Mdadm configuration found, enabling service')
893 shutil.copy(mdadm_location, paths.target_path(target,
894 conf_map[osfamily]))
895 if osfamily == DISTROS.debian:
896 # as per LP: #964052 reconfigure mdadm
897 util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'],
898 data=None, target=target)
899
900
779def handle_cloudconfig(cfg, base_dir=None):901def handle_cloudconfig(cfg, base_dir=None):
780 """write cloud-init configuration files into base_dir.902 """write cloud-init configuration files into base_dir.
781903
@@ -845,21 +967,11 @@ def ubuntu_core_curthooks(cfg, target=None):
845 content=config.dump_config({'network': netconfig}))967 content=config.dump_config({'network': netconfig}))
846968
847969
848def rpm_get_dist_id(target):970def redhat_upgrade_cloud_init(netcfg, target=None, osfamily=DISTROS.redhat):
849 """Use rpm command to extract the '%rhel' distro macro which returns
850 the major os version id (6, 7, 8). This works for centos or rhel
851 """
852 with util.ChrootableTarget(target) as in_chroot:
853 dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True)
854 return dist.rstrip()
855
856
857def centos_apply_network_config(netcfg, target=None):
858 """ CentOS images execute built-in curthooks which only supports971 """ CentOS images execute built-in curthooks which only supports
859 simple networking configuration. This hook enables advanced972 simple networking configuration. This hook enables advanced
860 network configuration via config passthrough to the target.973 network configuration via config passthrough to the target.
861 """974 """
862
863 def cloud_init_repo(version):975 def cloud_init_repo(version):
864 if not version:976 if not version:
865 raise ValueError('Missing required version parameter')977 raise ValueError('Missing required version parameter')
@@ -868,9 +980,9 @@ def centos_apply_network_config(netcfg, target=None):
868980
869 if netcfg:981 if netcfg:
870 LOG.info('Removing embedded network configuration (if present)')982 LOG.info('Removing embedded network configuration (if present)')
871 ifcfgs = glob.glob(util.target_path(target,983 ifcfgs = glob.glob(
872 'etc/sysconfig/network-scripts') +984 paths.target_path(target, 'etc/sysconfig/network-scripts') +
873 '/ifcfg-*')985 '/ifcfg-*')
874 # remove ifcfg-* (except ifcfg-lo)986 # remove ifcfg-* (except ifcfg-lo)
875 for ifcfg in ifcfgs:987 for ifcfg in ifcfgs:
876 if os.path.basename(ifcfg) != "ifcfg-lo":988 if os.path.basename(ifcfg) != "ifcfg-lo":
@@ -884,29 +996,27 @@ def centos_apply_network_config(netcfg, target=None):
884 # if in-target cloud-init is not updated, upgrade via cloud-init repo996 # if in-target cloud-init is not updated, upgrade via cloud-init repo
885 if not passthrough:997 if not passthrough:
886 cloud_init_yum_repo = (998 cloud_init_yum_repo = (
887 util.target_path(target,999 paths.target_path(target,
888 'etc/yum.repos.d/curtin-cloud-init.repo'))1000 'etc/yum.repos.d/curtin-cloud-init.repo'))
889 # Inject cloud-init daily yum repo1001 # Inject cloud-init daily yum repo
890 util.write_file(cloud_init_yum_repo,1002 util.write_file(cloud_init_yum_repo,
891 content=cloud_init_repo(rpm_get_dist_id(target)))1003 content=cloud_init_repo(
1004 distro.rpm_get_dist_id(target)))
8921005
893 # we separate the installation of repository packages (epel,1006 # we separate the installation of repository packages (epel,
894 # cloud-init-el-release) as we need a new invocation of yum1007 # cloud-init-el-release) as we need a new invocation of yum
895 # to read the newly installed repo files.1008 # to read the newly installed repo files.
896 YUM_CMD = ['yum', '-y', '--noplugins', 'install']1009
897 retries = [1] * 301010 # ensure up-to-date ca-certificates to handle https mirror
898 with util.ChrootableTarget(target) as in_chroot:1011 # connections
899 # ensure up-to-date ca-certificates to handle https mirror1012 distro.install_packages(['ca-certificates'], target=target,
900 # connections1013 osfamily=osfamily)
901 in_chroot.subp(YUM_CMD + ['ca-certificates'], capture=True,1014 distro.install_packages(['epel-release'], target=target,
902 log_captured=True, retries=retries)1015 osfamily=osfamily)
903 in_chroot.subp(YUM_CMD + ['epel-release'], capture=True,1016 distro.install_packages(['cloud-init-el-release'], target=target,
904 log_captured=True, retries=retries)1017 osfamily=osfamily)
905 in_chroot.subp(YUM_CMD + ['cloud-init-el-release'],1018 distro.install_packages(['cloud-init'], target=target,
906 log_captured=True, capture=True,1019 osfamily=osfamily)
907 retries=retries)
908 in_chroot.subp(YUM_CMD + ['cloud-init'], capture=True,
909 log_captured=True, retries=retries)
9101020
911 # remove cloud-init el-stable bootstrap repo config as the1021 # remove cloud-init el-stable bootstrap repo config as the
912 # cloud-init-el-release package points to the correct repo1022 # cloud-init-el-release package points to the correct repo
@@ -919,127 +1029,136 @@ def centos_apply_network_config(netcfg, target=None):
919 capture=False, rcs=[0])1029 capture=False, rcs=[0])
920 except util.ProcessExecutionError:1030 except util.ProcessExecutionError:
921 LOG.debug('Image missing bridge-utils package, installing')1031 LOG.debug('Image missing bridge-utils package, installing')
922 in_chroot.subp(YUM_CMD + ['bridge-utils'], capture=True,1032 distro.install_packages(['bridge-utils'], target=target,
923 log_captured=True, retries=retries)1033 osfamily=osfamily)
9241034
925 LOG.info('Passing network configuration through to target')1035 LOG.info('Passing network configuration through to target')
926 net.render_netconfig_passthrough(target, netconfig={'network': netcfg})1036 net.render_netconfig_passthrough(target, netconfig={'network': netcfg})
9271037
9281038
929def target_is_ubuntu_core(target):1039# Public API, maas may call this from internal curthooks
930 """Check if Ubuntu-Core specific directory is present at target"""1040centos_apply_network_config = redhat_upgrade_cloud_init
931 if target:
932 return os.path.exists(util.target_path(target,
933 'system-data/var/lib/snapd'))
934 return False
935
936
937def target_is_centos(target):
938 """Check if CentOS specific file is present at target"""
939 if target:
940 return os.path.exists(util.target_path(target, 'etc/centos-release'))
9411041
942 return False
9431042
1043def redhat_apply_selinux_autorelabel(target):
1044 """Creates file /.autorelabel.
9441045
945def target_is_rhel(target):1046 This is used by SELinux to relabel all of the
946 """Check if RHEL specific file is present at target"""1047 files on the filesystem to have the correct
947 if target:1048 security context. Without this SSH login will
948 return os.path.exists(util.target_path(target, 'etc/redhat-release'))1049 fail.
1050 """
1051 LOG.debug('enabling selinux autorelabel')
1052 open(paths.target_path(target, '.autorelabel'), 'a').close()
9491053
950 return False
9511054
1055def redhat_update_dracut_config(target, cfg):
1056 initramfs_mapping = {
1057 'lvm': {'conf': 'lvmconf', 'modules': 'lvm'},
1058 'raid': {'conf': 'mdadmconf', 'modules': 'mdraid'},
1059 }
9521060
953def curthooks(args):1061 # no need to update initramfs if no custom storage
954 state = util.load_command_environment()1062 if 'storage' not in cfg:
1063 return False
9551064
956 if args.target is not None:1065 storage_config = cfg.get('storage', {}).get('config')
957 target = args.target1066 if not storage_config:
958 else:1067 raise ValueError('Invalid storage config')
959 target = state['target']1068
1069 add_conf = set()
1070 add_modules = set()
1071 for scfg in storage_config:
1072 if scfg['type'] == 'raid':
1073 add_conf.add(initramfs_mapping['raid']['conf'])
1074 add_modules.add(initramfs_mapping['raid']['modules'])
1075 elif scfg['type'] in ['lvm_volgroup', 'lvm_partition']:
1076 add_conf.add(initramfs_mapping['lvm']['conf'])
1077 add_modules.add(initramfs_mapping['lvm']['modules'])
1078
1079 dconfig = ['# Written by curtin for custom storage config']
1080 dconfig.append('add_dracutmodules+="%s"' % (" ".join(add_modules)))
1081 for conf in add_conf:
1082 dconfig.append('%s="yes"' % conf)
1083
1084 # Write out initramfs/dracut config for storage config
1085 dracut_conf_storage = os.path.sep.join(
1086 [target, '/etc/dracut.conf.d/50-curtin-storage.conf'])
1087 msg = '\n'.join(dconfig + [''])
1088 LOG.debug('Updating redhat dracut config')
1089 util.write_file(dracut_conf_storage, content=msg)
1090 return True
1091
1092
1093def redhat_update_initramfs(target, cfg):
1094 if not redhat_update_dracut_config(target, cfg):
1095 LOG.debug('Skipping redhat initramfs update, no custom storage config')
1096 return
1097 kver_cmd = ['rpm', '-q', '--queryformat',
1098 '%{VERSION}-%{RELEASE}.%{ARCH}', 'kernel']
1099 with util.ChrootableTarget(target) as in_chroot:
1100 LOG.debug('Finding redhat kernel version: %s', kver_cmd)
1101 kver, _err = in_chroot.subp(kver_cmd, capture=True)
1102 LOG.debug('Found kver=%s' % kver)
1103 initramfs = '/boot/initramfs-%s.img' % kver
1104 dracut_cmd = ['dracut', '-f', initramfs, kver]
1105 LOG.debug('Rebuilding initramfs with: %s', dracut_cmd)
1106 in_chroot.subp(dracut_cmd, capture=True)
9601107
961 if target is None:
962 sys.stderr.write("Unable to find target. "
963 "Use --target or set TARGET_MOUNT_POINT\n")
964 sys.exit(2)
9651108
966 cfg = config.load_command_config(args, state)1109def builtin_curthooks(cfg, target, state):
1110 LOG.info('Running curtin builtin curthooks')
967 stack_prefix = state.get('report_stack_prefix', '')1111 stack_prefix = state.get('report_stack_prefix', '')
9681112 state_etcd = os.path.split(state['fstab'])[0]
969 # if curtin-hooks hook exists in target we can defer to the in-target hooks1113
970 if util.run_hook_if_exists(target, 'curtin-hooks'):1114 distro_info = distro.get_distroinfo(target=target)
971 # For vmtests to force execute centos_apply_network_config, uncomment1115 if not distro_info:
972 # the value in examples/tests/centos_defaults.yaml1116 raise RuntimeError('Failed to determine target distro')
973 if cfg.get('_ammend_centos_curthooks'):1117 osfamily = distro_info.family
974 if cfg.get('cloudconfig'):1118 LOG.info('Configuring target system for distro: %s osfamily: %s',
975 handle_cloudconfig(1119 distro_info.variant, osfamily)
976 cfg['cloudconfig'],1120 if osfamily == DISTROS.debian:
977 base_dir=util.target_path(target, 'etc/cloud/cloud.cfg.d'))
978
979 if target_is_centos(target) or target_is_rhel(target):
980 LOG.info('Detected RHEL/CentOS image, running extra hooks')
981 with events.ReportEventStack(
982 name=stack_prefix, reporting_enabled=True,
983 level="INFO",
984 description="Configuring CentOS for first boot"):
985 centos_apply_network_config(cfg.get('network', {}), target)
986 sys.exit(0)
987
988 if target_is_ubuntu_core(target):
989 LOG.info('Detected Ubuntu-Core image, running hooks')
990 with events.ReportEventStack(1121 with events.ReportEventStack(
991 name=stack_prefix, reporting_enabled=True, level="INFO",1122 name=stack_prefix + '/writing-apt-config',
992 description="Configuring Ubuntu-Core for first boot"):1123 reporting_enabled=True, level="INFO",
993 ubuntu_core_curthooks(cfg, target)1124 description="configuring apt configuring apt"):
994 sys.exit(0)1125 do_apt_config(cfg, target)
9951126 disable_overlayroot(cfg, target)
996 with events.ReportEventStack(
997 name=stack_prefix + '/writing-config',
998 reporting_enabled=True, level="INFO",
999 description="configuring apt configuring apt"):
1000 do_apt_config(cfg, target)
1001 disable_overlayroot(cfg, target)
10021127
1003 # LP: #1742560 prevent zfs-dkms from being installed (Xenial)1128 # LP: #1742560 prevent zfs-dkms from being installed (Xenial)
1004 if util.lsb_release(target=target)['codename'] == 'xenial':1129 if distro.lsb_release(target=target)['codename'] == 'xenial':
1005 util.apt_update(target=target)1130 distro.apt_update(target=target)
1006 with util.ChrootableTarget(target) as in_chroot:1131 with util.ChrootableTarget(target) as in_chroot:
1007 in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms'])1132 in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms'])
10081133
1009 # packages may be needed prior to installing kernel1134 # packages may be needed prior to installing kernel
1010 with events.ReportEventStack(1135 with events.ReportEventStack(
1011 name=stack_prefix + '/installing-missing-packages',1136 name=stack_prefix + '/installing-missing-packages',
1012 reporting_enabled=True, level="INFO",1137 reporting_enabled=True, level="INFO",
1013 description="installing missing packages"):1138 description="installing missing packages"):
1014 install_missing_packages(cfg, target)1139 install_missing_packages(cfg, target, osfamily=osfamily)
10151140
1016 # If a /etc/iscsi/nodes/... file was created by block_meta then it1141 with events.ReportEventStack(
1017 # needs to be copied onto the target system1142 name=stack_prefix + '/configuring-iscsi-service',
1018 nodes_location = os.path.join(os.path.split(state['fstab'])[0],1143 reporting_enabled=True, level="INFO",
1019 "nodes")1144 description="configuring iscsi service"):
1020 if os.path.exists(nodes_location):1145 configure_iscsi(cfg, state_etcd, target, osfamily=osfamily)
1021 copy_iscsi_conf(nodes_location, target)
1022 # do we need to reconfigure open-iscsi?
1023
1024 # If a mdadm.conf file was created by block_meta than it needs to be copied
1025 # onto the target system
1026 mdadm_location = os.path.join(os.path.split(state['fstab'])[0],
1027 "mdadm.conf")
1028 if os.path.exists(mdadm_location):
1029 copy_mdadm_conf(mdadm_location, target)
1030 # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
1031 # reconfigure mdadm
1032 util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'],
1033 data=None, target=target)
10341146
1035 with events.ReportEventStack(1147 with events.ReportEventStack(
1036 name=stack_prefix + '/installing-kernel',1148 name=stack_prefix + '/configuring-mdadm-service',
1037 reporting_enabled=True, level="INFO",1149 reporting_enabled=True, level="INFO",
1038 description="installing kernel"):1150 description="configuring raid (mdadm) service"):
1039 setup_zipl(cfg, target)1151 configure_mdadm(cfg, state_etcd, target, osfamily=osfamily)
1040 install_kernel(cfg, target)1152
1041 run_zipl(cfg, target)1153 if osfamily == DISTROS.debian:
1042 restore_dist_interfaces(cfg, target)1154 with events.ReportEventStack(
1155 name=stack_prefix + '/installing-kernel',
1156 reporting_enabled=True, level="INFO",
1157 description="installing kernel"):
1158 setup_zipl(cfg, target)
1159 install_kernel(cfg, target)
1160 run_zipl(cfg, target)
1161 restore_dist_interfaces(cfg, target)
10431162
1044 with events.ReportEventStack(1163 with events.ReportEventStack(
1045 name=stack_prefix + '/setting-up-swap',1164 name=stack_prefix + '/setting-up-swap',
@@ -1047,6 +1166,23 @@ def curthooks(args):
1047 description="setting up swap"):1166 description="setting up swap"):
1048 add_swap(cfg, target, state.get('fstab'))1167 add_swap(cfg, target, state.get('fstab'))
10491168
1169 if osfamily == DISTROS.redhat:
1170 # set cloud-init maas datasource for centos images
1171 if cfg.get('cloudconfig'):
1172 handle_cloudconfig(
1173 cfg['cloudconfig'],
1174 base_dir=paths.target_path(target,
1175 'etc/cloud/cloud.cfg.d'))
1176
1177 # For vmtests to force execute redhat_upgrade_cloud_init, uncomment
1178 # the value in examples/tests/centos_defaults.yaml
1179 if cfg.get('_ammend_centos_curthooks'):
1180 with events.ReportEventStack(
1181 name=stack_prefix + '/upgrading cloud-init',
1182 reporting_enabled=True, level="INFO",
1183 description="Upgrading cloud-init in target"):
1184 redhat_upgrade_cloud_init(cfg.get('network', {}), target)
1185
1050 with events.ReportEventStack(1186 with events.ReportEventStack(
1051 name=stack_prefix + '/apply-networking-config',1187 name=stack_prefix + '/apply-networking-config',
1052 reporting_enabled=True, level="INFO",1188 reporting_enabled=True, level="INFO",
@@ -1063,29 +1199,44 @@ def curthooks(args):
1063 name=stack_prefix + '/configuring-multipath',1199 name=stack_prefix + '/configuring-multipath',
1064 reporting_enabled=True, level="INFO",1200 reporting_enabled=True, level="INFO",
1065 description="configuring multipath"):1201 description="configuring multipath"):
1066 detect_and_handle_multipath(cfg, target)1202 detect_and_handle_multipath(cfg, target, osfamily=osfamily)
10671203
1068 with events.ReportEventStack(1204 with events.ReportEventStack(
1069 name=stack_prefix + '/system-upgrade',1205 name=stack_prefix + '/system-upgrade',
1070 reporting_enabled=True, level="INFO",1206 reporting_enabled=True, level="INFO",
1071 description="updating packages on target system"):1207 description="updating packages on target system"):
1072 system_upgrade(cfg, target)1208 system_upgrade(cfg, target, osfamily=osfamily)
1209
1210 if osfamily == DISTROS.redhat:
1211 with events.ReportEventStack(
1212 name=stack_prefix + '/enabling-selinux-autorelabel',
1213 reporting_enabled=True, level="INFO",
1214 description="enabling selinux autorelabel mode"):
1215 redhat_apply_selinux_autorelabel(target)
1216
1217 with events.ReportEventStack(
1218 name=stack_prefix + '/updating-initramfs-configuration',
1219 reporting_enabled=True, level="INFO",
1220 description="updating initramfs configuration"):
1221 redhat_update_initramfs(target, cfg)
10731222
1074 with events.ReportEventStack(1223 with events.ReportEventStack(
1075 name=stack_prefix + '/pollinate-user-agent',1224 name=stack_prefix + '/pollinate-user-agent',
1076 reporting_enabled=True, level="INFO",1225 reporting_enabled=True, level="INFO",
1077 description="configuring pollinate user-agent on target system"):1226 description="configuring pollinate user-agent on target"):
1078 handle_pollinate_user_agent(cfg, target)1227 handle_pollinate_user_agent(cfg, target)
10791228
1080 # If a crypttab file was created by block_meta than it needs to be copied1229 if osfamily == DISTROS.debian:
1081 # onto the target system, and update_initramfs() needs to be run, so that1230 # If a crypttab file was created by block_meta then it needs to be
1082 # the cryptsetup hooks are properly configured on the installed system and1231 # copied onto the target system, and update_initramfs() needs to be
1083 # it will be able to open encrypted volumes at boot.1232 # run, so that the cryptsetup hooks are properly configured on the
1084 crypttab_location = os.path.join(os.path.split(state['fstab'])[0],1233 # installed system and it will be able to open encrypted volumes
1085 "crypttab")1234 # at boot.
1086 if os.path.exists(crypttab_location):1235 crypttab_location = os.path.join(os.path.split(state['fstab'])[0],
1087 copy_crypttab(crypttab_location, target)1236 "crypttab")
1088 update_initramfs(target)1237 if os.path.exists(crypttab_location):
1238 copy_crypttab(crypttab_location, target)
1239 update_initramfs(target)
10891240
1090 # If udev dname rules were created, copy them to target1241 # If udev dname rules were created, copy them to target
1091 udev_rules_d = os.path.join(state['scratch'], "rules.d")1242 udev_rules_d = os.path.join(state['scratch'], "rules.d")
@@ -1102,8 +1253,41 @@ def curthooks(args):
1102 machine.startswith('aarch64') and not util.is_uefi_bootable()):1253 machine.startswith('aarch64') and not util.is_uefi_bootable()):
1103 update_initramfs(target)1254 update_initramfs(target)
1104 else:1255 else:
1105 setup_grub(cfg, target)1256 setup_grub(cfg, target, osfamily=osfamily)
1257
1258
1259def curthooks(args):
1260 state = util.load_command_environment()
1261
1262 if args.target is not None:
1263 target = args.target
1264 else:
1265 target = state['target']
1266
1267 if target is None:
1268 sys.stderr.write("Unable to find target. "
1269 "Use --target or set TARGET_MOUNT_POINT\n")
1270 sys.exit(2)
1271
1272 cfg = config.load_command_config(args, state)
1273 stack_prefix = state.get('report_stack_prefix', '')
1274 curthooks_mode = cfg.get('curthooks', {}).get('mode', 'auto')
1275
1276 # UC is special, handle it first.
1277 if distro.is_ubuntu_core(target):
1278 LOG.info('Detected Ubuntu-Core image, running hooks')
1279 with events.ReportEventStack(
1280 name=stack_prefix, reporting_enabled=True, level="INFO",
1281 description="Configuring Ubuntu-Core for first boot"):
1282 ubuntu_core_curthooks(cfg, target)
1283 sys.exit(0)
1284
1285 # user asked for target, or auto mode
1286 if curthooks_mode in ['auto', 'target']:
1287 if util.run_hook_if_exists(target, 'curtin-hooks'):
1288 sys.exit(0)
11061289
1290 builtin_curthooks(cfg, target, state)
1107 sys.exit(0)1291 sys.exit(0)
11081292
11091293
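
The hunks above replace the old detect-and-exit flow (target_is_centos/target_is_rhel followed by sys.exit) with a single builtin_curthooks() that resolves the target's distro family once and gates each distro-specific step on it. A minimal sketch of that dispatch pattern, using only the curtin.distro helpers introduced in this branch; the hook body and step names are illustrative, not curtin code:

    from curtin import distro
    from curtin.distro import DISTROS

    def example_hook(cfg, target, state):
        # resolve variant ('ubuntu', 'centos', ...) and family once
        info = distro.get_distroinfo(target=target)
        if not info:
            raise RuntimeError('Failed to determine target distro')
        osfamily = info.family
        if osfamily == DISTROS.debian:
            pass  # apt/initramfs-tools/grub-efi specific steps
        elif osfamily == DISTROS.redhat:
            pass  # yum/dracut/selinux specific steps
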
diff --git a/curtin/commands/in_target.py b/curtin/commands/in_target.py
index 8e839c0..c6f7abd 100644
--- a/curtin/commands/in_target.py
+++ b/curtin/commands/in_target.py
@@ -4,7 +4,7 @@ import os
4import pty4import pty
5import sys5import sys
66
7from curtin import util7from curtin import paths, util
88
9from . import populate_one_subcmd9from . import populate_one_subcmd
1010
@@ -41,7 +41,7 @@ def in_target_main(args):
41 sys.exit(2)41 sys.exit(2)
4242
43 daemons = args.allow_daemons43 daemons = args.allow_daemons
44 if util.target_path(args.target) == "/":44 if paths.target_path(args.target) == "/":
45 sys.stderr.write("WARN: Target is /, daemons are allowed.\n")45 sys.stderr.write("WARN: Target is /, daemons are allowed.\n")
46 daemons = True46 daemons = True
47 cmd = args.command_args47 cmd = args.command_args
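
This hunk is part of the mechanical util.target_path -> paths.target_path move that runs through the branch; the helper's behaviour is unchanged, only its home module changes. A short usage sketch of the relocated call (the mount point shown is hypothetical):

    from curtin import paths

    # join a path beneath the target mount point
    rules_dir = paths.target_path('/tmp/target', 'etc/udev/rules.d')
    # with no path argument it normalizes the target itself, which is how
    # in_target_main() above detects "target is /"
    is_live = paths.target_path(None) == '/'
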
diff --git a/curtin/commands/install.py b/curtin/commands/install.py
index 4d2a13f..244683c 100644
--- a/curtin/commands/install.py
+++ b/curtin/commands/install.py
@@ -13,7 +13,9 @@ import tempfile
1313
14from curtin.block import iscsi14from curtin.block import iscsi
15from curtin import config15from curtin import config
16from curtin import distro
16from curtin import util17from curtin import util
18from curtin import paths
17from curtin import version19from curtin import version
18from curtin.log import LOG, logged_time20from curtin.log import LOG, logged_time
19from curtin.reporter.legacy import load_reporter21from curtin.reporter.legacy import load_reporter
@@ -80,7 +82,7 @@ def copy_install_log(logfile, target, log_target_path):
80 LOG.debug('Copying curtin install log from %s to target/%s',82 LOG.debug('Copying curtin install log from %s to target/%s',
81 logfile, log_target_path)83 logfile, log_target_path)
82 util.write_file(84 util.write_file(
83 filename=util.target_path(target, log_target_path),85 filename=paths.target_path(target, log_target_path),
84 content=util.load_file(logfile, decode=False),86 content=util.load_file(logfile, decode=False),
85 mode=0o400, omode="wb")87 mode=0o400, omode="wb")
8688
@@ -319,7 +321,7 @@ def apply_kexec(kexec, target):
319 raise TypeError("kexec is not a dict.")321 raise TypeError("kexec is not a dict.")
320322
321 if not util.which('kexec'):323 if not util.which('kexec'):
322 util.install_packages('kexec-tools')324 distro.install_packages('kexec-tools')
323325
324 if not os.path.isfile(target_grubcfg):326 if not os.path.isfile(target_grubcfg):
325 raise ValueError("%s does not exist in target" % grubcfg)327 raise ValueError("%s does not exist in target" % grubcfg)
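
apply_kexec() now installs kexec-tools through curtin.distro, whose install_packages accepts a single package name as well as a list and auto-detects the target's family when none is given. A brief sketch of that call against the host (target=None):

    from curtin.distro import install_packages

    # a single string is wrapped into a list; osfamily is detected from
    # /etc/os-release when not supplied
    install_packages('kexec-tools')
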
diff --git a/curtin/commands/system_install.py b/curtin/commands/system_install.py
index 05d70af..6d7b736 100644
--- a/curtin/commands/system_install.py
+++ b/curtin/commands/system_install.py
@@ -7,6 +7,7 @@ import curtin.util as util
77
8from . import populate_one_subcmd8from . import populate_one_subcmd
9from curtin.log import LOG9from curtin.log import LOG
10from curtin import distro
1011
1112
12def system_install_pkgs_main(args):13def system_install_pkgs_main(args):
@@ -16,7 +17,7 @@ def system_install_pkgs_main(args):
1617
17 exit_code = 018 exit_code = 0
18 try:19 try:
19 util.install_packages(20 distro.install_packages(
20 pkglist=args.packages, target=args.target,21 pkglist=args.packages, target=args.target,
21 allow_daemons=args.allow_daemons)22 allow_daemons=args.allow_daemons)
22 except util.ProcessExecutionError as e:23 except util.ProcessExecutionError as e:
diff --git a/curtin/commands/system_upgrade.py b/curtin/commands/system_upgrade.py
index fe10fac..d4f6735 100644
--- a/curtin/commands/system_upgrade.py
+++ b/curtin/commands/system_upgrade.py
@@ -7,6 +7,7 @@ import curtin.util as util
77
8from . import populate_one_subcmd8from . import populate_one_subcmd
9from curtin.log import LOG9from curtin.log import LOG
10from curtin import distro
1011
1112
12def system_upgrade_main(args):13def system_upgrade_main(args):
@@ -16,8 +17,8 @@ def system_upgrade_main(args):
1617
17 exit_code = 018 exit_code = 0
18 try:19 try:
19 util.system_upgrade(target=args.target,20 distro.system_upgrade(target=args.target,
20 allow_daemons=args.allow_daemons)21 allow_daemons=args.allow_daemons)
21 except util.ProcessExecutionError as e:22 except util.ProcessExecutionError as e:
22 LOG.warn("system upgrade failed: %s" % e)23 LOG.warn("system upgrade failed: %s" % e)
23 exit_code = e.exit_code24 exit_code = e.exit_code
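
curtin system-upgrade now routes through distro.system_upgrade(), which maps the family to the right backend: 'dist-upgrade' plus 'autoremove' via apt-get on Debian/Ubuntu targets, 'upgrade' via yum on CentOS/RHEL targets. A hedged usage sketch (the target path is hypothetical):

    from curtin import distro
    from curtin.distro import DISTROS

    # runs 'yum --assumeyes --quiet upgrade' inside the chrooted target
    distro.system_upgrade(target='/tmp/target', osfamily=DISTROS.redhat)
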
diff --git a/curtin/deps/__init__.py b/curtin/deps/__init__.py
index 7014895..96df4f6 100644
--- a/curtin/deps/__init__.py
+++ b/curtin/deps/__init__.py
@@ -6,13 +6,13 @@ import sys
6from curtin.util import (6from curtin.util import (
7 ProcessExecutionError,7 ProcessExecutionError,
8 get_architecture,8 get_architecture,
9 install_packages,
10 is_uefi_bootable,9 is_uefi_bootable,
11 lsb_release,
12 subp,10 subp,
13 which,11 which,
14)12)
1513
14from curtin.distro import install_packages, lsb_release
15
16REQUIRED_IMPORTS = [16REQUIRED_IMPORTS = [
17 # import string to execute, python2 package, python3 package17 # import string to execute, python2 package, python3 package
18 ('import yaml', 'python-yaml', 'python3-yaml'),18 ('import yaml', 'python-yaml', 'python3-yaml'),
@@ -177,7 +177,7 @@ def install_deps(verbosity=False, dry_run=False, allow_daemons=True):
177 ret = 0177 ret = 0
178 try:178 try:
179 install_packages(missing_pkgs, allow_daemons=allow_daemons,179 install_packages(missing_pkgs, allow_daemons=allow_daemons,
180 aptopts=["--no-install-recommends"])180 opts=["--no-install-recommends"])
181 except ProcessExecutionError as e:181 except ProcessExecutionError as e:
182 sys.stderr.write("%s\n" % e)182 sys.stderr.write("%s\n" % e)
183 ret = e.exit_code183 ret = e.exit_code
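
The aptopts keyword becomes the backend-neutral opts now that install_packages fronts both apt and yum; the extra options are passed through to whichever package manager the detected family selects. A small sketch, assuming a Debian-family host:

    from curtin.distro import install_packages

    # expands to 'apt-get ... --no-install-recommends install python3-yaml'
    install_packages(['python3-yaml'], opts=['--no-install-recommends'])
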
diff --git a/curtin/distro.py b/curtin/distro.py
new file mode 100644
index 0000000..f2a78ed
--- /dev/null
+++ b/curtin/distro.py
@@ -0,0 +1,512 @@
1# This file is part of curtin. See LICENSE file for copyright and license info.
2import glob
3from collections import namedtuple
4import os
5import re
6import shutil
7import tempfile
8
9from .paths import target_path
10from .util import (
11 ChrootableTarget,
12 find_newer,
13 load_file,
14 load_shell_content,
15 ProcessExecutionError,
16 set_unexecutable,
17 string_types,
18 subp,
19 which
20)
21from .log import LOG
22
23DistroInfo = namedtuple('DistroInfo', ('variant', 'family'))
24DISTRO_NAMES = ['arch', 'centos', 'debian', 'fedora', 'freebsd', 'gentoo',
25 'opensuse', 'redhat', 'rhel', 'sles', 'suse', 'ubuntu']
26
27
28# python2.7 lacks PEP 435, so we must use an alternative for py2.7/3.x
29# https://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python
30def distro_enum(*distros):
31 return namedtuple('Distros', distros)(*distros)
32
33
34DISTROS = distro_enum(*DISTRO_NAMES)
35
36OS_FAMILIES = {
37 DISTROS.debian: [DISTROS.debian, DISTROS.ubuntu],
38 DISTROS.redhat: [DISTROS.centos, DISTROS.fedora, DISTROS.redhat,
39 DISTROS.rhel],
40 DISTROS.gentoo: [DISTROS.gentoo],
41 DISTROS.freebsd: [DISTROS.freebsd],
42 DISTROS.suse: [DISTROS.opensuse, DISTROS.sles, DISTROS.suse],
43 DISTROS.arch: [DISTROS.arch],
44}
45
46# invert the mapping for faster lookup of variants
47DISTRO_TO_OSFAMILY = (
48 {variant: family for family, variants in OS_FAMILIES.items()
49 for variant in variants})
50
51_LSB_RELEASE = {}
52
53
54def name_to_distro(distname):
55 try:
56 return DISTROS[DISTROS.index(distname)]
57 except (IndexError, AttributeError):
58 LOG.error('Unknown distro name: %s', distname)
59
60
61def lsb_release(target=None):
62 if target_path(target) != "/":
63 # do not use or update cache if target is provided
64 return _lsb_release(target)
65
66 global _LSB_RELEASE
67 if not _LSB_RELEASE:
68 data = _lsb_release()
69 _LSB_RELEASE.update(data)
70 return _LSB_RELEASE
71
72
73def os_release(target=None):
74 data = {}
75 os_release = target_path(target, 'etc/os-release')
76 if os.path.exists(os_release):
77 data = load_shell_content(load_file(os_release),
78 add_empty=False, empty_val=None)
79 if not data:
80 for relfile in [target_path(target, rel) for rel in
81 ['etc/centos-release', 'etc/redhat-release']]:
82 data = _parse_redhat_release(release_file=relfile, target=target)
83 if data:
84 break
85
86 return data
87
88
89def _parse_redhat_release(release_file=None, target=None):
90 """Return a dictionary of distro info fields from /etc/redhat-release.
91
92 Dict keys will align with /etc/os-release keys:
93 ID, VERSION_ID, VERSION_CODENAME
94 """
95
96 if not release_file:
97 release_file = target_path('etc/redhat-release')
98 if not os.path.exists(release_file):
99 return {}
100 redhat_release = load_file(release_file)
101 redhat_regex = (
102 r'(?P<name>.+) release (?P<version>[\d\.]+) '
103 r'\((?P<codename>[^)]+)\)')
104 match = re.match(redhat_regex, redhat_release)
105 if match:
106 group = match.groupdict()
107 group['name'] = group['name'].lower().partition(' linux')[0]
108 if group['name'] == 'red hat enterprise':
109 group['name'] = 'redhat'
110 return {'ID': group['name'], 'VERSION_ID': group['version'],
111 'VERSION_CODENAME': group['codename']}
112 return {}
113
114
115def get_distroinfo(target=None):
116 variant_name = os_release(target=target)['ID']
117 variant = name_to_distro(variant_name)
118 family = DISTRO_TO_OSFAMILY.get(variant)
119 return DistroInfo(variant, family)
120
121
122def get_distro(target=None):
123 distinfo = get_distroinfo(target=target)
124 return distinfo.variant
125
126
127def get_osfamily(target=None):
128 distinfo = get_distroinfo(target=target)
129 return distinfo.family
130
131
132def is_ubuntu_core(target=None):
133 """Check if Ubuntu-Core specific directory is present at target"""
134 return os.path.exists(target_path(target, 'system-data/var/lib/snapd'))
135
136
137def is_centos(target=None):
138 """Check if CentOS specific file is present at target"""
139 return os.path.exists(target_path(target, 'etc/centos-release'))
140
141
142def is_rhel(target=None):
143 """Check if RHEL specific file is present at target"""
144 return os.path.exists(target_path(target, 'etc/redhat-release'))
145
146
147def _lsb_release(target=None):
148 fmap = {'Codename': 'codename', 'Description': 'description',
149 'Distributor ID': 'id', 'Release': 'release'}
150
151 data = {}
152 try:
153 out, _ = subp(['lsb_release', '--all'], capture=True, target=target)
154 for line in out.splitlines():
155 fname, _, val = line.partition(":")
156 if fname in fmap:
157 data[fmap[fname]] = val.strip()
158 missing = [k for k in fmap.values() if k not in data]
159 if len(missing):
160 LOG.warn("Missing fields in lsb_release --all output: %s",
161 ','.join(missing))
162
163 except ProcessExecutionError as err:
164 LOG.warn("Unable to get lsb_release --all: %s", err)
165 data = {v: "UNAVAILABLE" for v in fmap.values()}
166
167 return data
168
169
170def apt_update(target=None, env=None, force=False, comment=None,
171 retries=None):
172
173 marker = "tmp/curtin.aptupdate"
174
175 if env is None:
176 env = os.environ.copy()
177
178 if retries is None:
179 # by default run apt-update up to 3 times to allow
180 # for transient failures
181 retries = (1, 2, 3)
182
183 if comment is None:
184 comment = "no comment provided"
185
186 if comment.endswith("\n"):
187 comment = comment[:-1]
188
189 marker = target_path(target, marker)
190 # if marker exists, check if there are files that would make it obsolete
191 listfiles = [target_path(target, "/etc/apt/sources.list")]
192 listfiles += glob.glob(
193 target_path(target, "etc/apt/sources.list.d/*.list"))
194
195 if os.path.exists(marker) and not force:
196 if len(find_newer(marker, listfiles)) == 0:
197 return
198
199 restore_perms = []
200
201 abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp"))
202 try:
203 abs_slist = abs_tmpdir + "/sources.list"
204 abs_slistd = abs_tmpdir + "/sources.list.d"
205 ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir)
206 ch_slist = ch_tmpdir + "/sources.list"
207 ch_slistd = ch_tmpdir + "/sources.list.d"
208
209 # this file gets executed on apt-get update sometimes. (LP: #1527710)
210 motd_update = target_path(
211 target, "/usr/lib/update-notifier/update-motd-updates-available")
212 pmode = set_unexecutable(motd_update)
213 if pmode is not None:
214 restore_perms.append((motd_update, pmode),)
215
216 # create tmpdir/sources.list with all lines other than deb-src
217 # avoid apt complaining by using existing and empty dir for sourceparts
218 os.mkdir(abs_slistd)
219 with open(abs_slist, "w") as sfp:
220 for sfile in listfiles:
221 with open(sfile, "r") as fp:
222 contents = fp.read()
223 for line in contents.splitlines():
224 line = line.lstrip()
225 if not line.startswith("deb-src"):
226 sfp.write(line + "\n")
227
228 update_cmd = [
229 'apt-get', '--quiet',
230 '--option=Acquire::Languages=none',
231 '--option=Dir::Etc::sourcelist=%s' % ch_slist,
232 '--option=Dir::Etc::sourceparts=%s' % ch_slistd,
233 'update']
234
235 # not using 'run_apt_command' here so we can pass 'retries' to subp
236 with ChrootableTarget(target, allow_daemons=True) as inchroot:
237 inchroot.subp(update_cmd, env=env, retries=retries)
238 finally:
239 for fname, perms in restore_perms:
240 os.chmod(fname, perms)
241 if abs_tmpdir:
242 shutil.rmtree(abs_tmpdir)
243
244 with open(marker, "w") as fp:
245 fp.write(comment + "\n")
246
247
248def run_apt_command(mode, args=None, opts=None, env=None, target=None,
249 execute=True, allow_daemons=False):
250 defopts = ['--quiet', '--assume-yes',
251 '--option=Dpkg::options::=--force-unsafe-io',
252 '--option=Dpkg::Options::=--force-confold']
253 if args is None:
254 args = []
255
256 if opts is None:
257 opts = []
258
259 if env is None:
260 env = os.environ.copy()
261 env['DEBIAN_FRONTEND'] = 'noninteractive'
262
263 if which('eatmydata', target=target):
264 emd = ['eatmydata']
265 else:
266 emd = []
267
268 cmd = emd + ['apt-get'] + defopts + opts + [mode] + args
269 if not execute:
270 return env, cmd
271
272 apt_update(target, env=env, comment=' '.join(cmd))
273 with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
274 return inchroot.subp(cmd, env=env)
275
276
277def run_yum_command(mode, args=None, opts=None, env=None, target=None,
278 execute=True, allow_daemons=False):
279 defopts = ['--assumeyes', '--quiet']
280
281 if args is None:
282 args = []
283
284 if opts is None:
285 opts = []
286
287 cmd = ['yum'] + defopts + opts + [mode] + args
288 if not execute:
289 return env, cmd
290
291 if mode in ["install", "update", "upgrade"]:
292 return yum_install(mode, args, opts=opts, env=env, target=target,
293 allow_daemons=allow_daemons)
294
295 with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
296 return inchroot.subp(cmd, env=env)
297
298
299def yum_install(mode, packages=None, opts=None, env=None, target=None,
300 allow_daemons=False):
301
302 defopts = ['--assumeyes', '--quiet']
303
304 if packages is None:
305 packages = []
306
307 if opts is None:
308 opts = []
309
310 if mode not in ['install', 'update', 'upgrade']:
311 raise ValueError(
312 'Unsupported mode "%s" for yum package install/upgrade' % mode)
313
314 # download first, then install/upgrade from cache
315 cmd = ['yum'] + defopts + opts + [mode]
316 dl_opts = ['--downloadonly', '--setopt=keepcache=1']
317 inst_opts = ['--cacheonly']
318
319 # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget
320 with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
321 inchroot.subp(cmd + dl_opts + packages,
322 env=env, retries=[1] * 10)
323 return inchroot.subp(cmd + inst_opts + packages, env=env)
324
325
326def rpm_get_dist_id(target=None):
327 """Use rpm command to extract the '%rhel' distro macro which returns
328 the major os version id (6, 7, 8). This works for centos or rhel
329 """
330 with ChrootableTarget(target) as in_chroot:
331 dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True)
332 return dist.rstrip()
333
334
335def system_upgrade(opts=None, target=None, env=None, allow_daemons=False,
336 osfamily=None):
337 LOG.debug("Upgrading system in %s", target)
338
339 distro_cfg = {
340 DISTROS.debian: {'function': run_apt_command,
341 'subcommands': ('dist-upgrade', 'autoremove')},
342 DISTROS.redhat: {'function': run_yum_command,
343 'subcommands': ('upgrade',)},
344 }
345 if osfamily not in distro_cfg:
346 raise ValueError('Distro "%s" does not have system_upgrade support',
347 osfamily)
348
349 for mode in distro_cfg[osfamily]['subcommands']:
350 ret = distro_cfg[osfamily]['function'](
351 mode, opts=opts, target=target,
352 env=env, allow_daemons=allow_daemons)
353 return ret
354
355
356def install_packages(pkglist, osfamily=None, opts=None, target=None, env=None,
357 allow_daemons=False):
358 if isinstance(pkglist, str):
359 pkglist = [pkglist]
360
361 if not osfamily:
362 osfamily = get_osfamily(target=target)
363
364 installer_map = {
365 DISTROS.debian: run_apt_command,
366 DISTROS.redhat: run_yum_command,
367 }
368
369 install_cmd = installer_map.get(osfamily)
370 if not install_cmd:
371 raise ValueError('No package install command for distro: %s' %
372 osfamily)
373
374 return install_cmd('install', args=pkglist, opts=opts, target=target,
375 env=env, allow_daemons=allow_daemons)
376
377
378def has_pkg_available(pkg, target=None, osfamily=None):
379 if not osfamily:
380 osfamily = get_osfamily(target=target)
381
382 if osfamily not in [DISTROS.debian, DISTROS.redhat]:
383 raise ValueError('has_pkg_available: unsupported distro family: %s',
384 osfamily)
385
386 if osfamily == DISTROS.debian:
387 out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target)
388 for item in out.splitlines():
389 if pkg == item.strip():
390 return True
391 return False
392
393 if osfamily == DISTROS.redhat:
394 out, _ = run_yum_command('list', opts=['--cacheonly'])
395 for item in out.splitlines():
396 if item.lower().startswith(pkg.lower()):
397 return True
398 return False
399
400
401def get_installed_packages(target=None):
402 if which('dpkg-query', target=target):
403 (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True)
404 elif which('rpm', target=target):
405 # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget
406 with ChrootableTarget(target) as in_chroot:
407 (out, _) = in_chroot.subp(['rpm', '-qa', '--queryformat',
408 'ii %{NAME} %{VERSION}-%{RELEASE}\n'],
409 target=target, capture=True)
410 if not out:
411 raise ValueError('No package query tool')
412
413 pkgs_inst = set()
414 for line in out.splitlines():
415 try:
416 (state, pkg, other) = line.split(None, 2)
417 except ValueError:
418 continue
419 if state.startswith("hi") or state.startswith("ii"):
420 pkgs_inst.add(re.sub(":.*", "", pkg))
421
422 return pkgs_inst
423
424
425def has_pkg_installed(pkg, target=None):
426 try:
427 out, _ = subp(['dpkg-query', '--show', '--showformat',
428 '${db:Status-Abbrev}', pkg],
429 capture=True, target=target)
430 return out.rstrip() == "ii"
431 except ProcessExecutionError:
432 return False
433
434
435def parse_dpkg_version(raw, name=None, semx=None):
436    """Parse a dpkg version string into various parts and calculate a
437    numerical value of the version for use in comparing package versions
438
439    Native packages (without a '-') will have the package version treated
440 as the upstream version.
441
442 returns a dictionary with fields:
443 'major' (int), 'minor' (int), 'micro' (int),
444 'semantic_version' (int),
445 'extra' (string), 'raw' (string), 'upstream' (string),
446 'name' (present only if name is not None)
447 """
448 if not isinstance(raw, string_types):
449 raise TypeError(
450 "Invalid type %s for parse_dpkg_version" % raw.__class__)
451
452 if semx is None:
453 semx = (10000, 100, 1)
454
455 if "-" in raw:
456 upstream = raw.rsplit('-', 1)[0]
457 else:
458 # this is a native package, package version treated as upstream.
459 upstream = raw
460
461 match = re.search(r'[^0-9.]', upstream)
462 if match:
463 extra = upstream[match.start():]
464 upstream_base = upstream[:match.start()]
465 else:
466 upstream_base = upstream
467 extra = None
468
469 toks = upstream_base.split(".", 2)
470 if len(toks) == 3:
471 major, minor, micro = toks
472 elif len(toks) == 2:
473 major, minor, micro = (toks[0], toks[1], 0)
474 elif len(toks) == 1:
475 major, minor, micro = (toks[0], 0, 0)
476
477 version = {
478 'major': int(major),
479 'minor': int(minor),
480 'micro': int(micro),
481 'extra': extra,
482 'raw': raw,
483 'upstream': upstream,
484 }
485 if name:
486 version['name'] = name
487
488 if semx:
489 try:
490 version['semantic_version'] = int(
491 int(major) * semx[0] + int(minor) * semx[1] +
492 int(micro) * semx[2])
493 except (ValueError, IndexError):
494 version['semantic_version'] = None
495
496 return version
497
498
499def get_package_version(pkg, target=None, semx=None):
500 """Use dpkg-query to extract package pkg's version string
501 and parse the version string into a dictionary
502 """
503 try:
504 out, _ = subp(['dpkg-query', '--show', '--showformat',
505 '${Version}', pkg], capture=True, target=target)
506 raw = out.rstrip()
507 return parse_dpkg_version(raw, name=pkg, semx=semx)
508 except ProcessExecutionError:
509 return None
510
511
512# vi: ts=4 expandtab syntax=python
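A quick illustrative sketch of the version helpers added above (the package name and version string here are invented, and it assumes the new curtin.distro module is importable):

    from curtin import distro

    # Parse a Debian-style version: upstream is everything before the last '-',
    # the leading dotted digits become major/minor/micro, and the remainder
    # lands in 'extra'.
    ver = distro.parse_dpkg_version('2.02~beta2-36ubuntu3', name='grub-pc')
    assert ver['upstream'] == '2.02~beta2'
    assert ver['extra'] == '~beta2'
    assert (ver['major'], ver['minor'], ver['micro']) == (2, 2, 0)
    # semantic_version = 2*10000 + 2*100 + 0*1
    assert ver['semantic_version'] == 20200

    # get_package_version wraps dpkg-query and returns None for missing packages.
    installed = distro.get_package_version('grub-pc', target='/tmp/target')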
diff --git a/curtin/futil.py b/curtin/futil.py
index 506964e..e603f88 100644
--- a/curtin/futil.py
+++ b/curtin/futil.py
@@ -5,7 +5,8 @@ import pwd
5import os5import os
6import warnings6import warnings
77
8from .util import write_file, target_path8from .util import write_file
9from .paths import target_path
9from .log import LOG10from .log import LOG
1011
1112
diff --git a/curtin/net/__init__.py b/curtin/net/__init__.py
index b4c9b59..ef2ba26 100644
--- a/curtin/net/__init__.py
+++ b/curtin/net/__init__.py
@@ -572,63 +572,4 @@ def get_interface_mac(ifname):
572 return read_sys_net(ifname, "address", enoent=False)572 return read_sys_net(ifname, "address", enoent=False)
573573
574574
575def network_config_required_packages(network_config, mapping=None):
576
577 if network_config is None:
578 network_config = {}
579
580 if not isinstance(network_config, dict):
581 raise ValueError('Invalid network configuration. Must be a dict')
582
583 if mapping is None:
584 mapping = {}
585
586 if not isinstance(mapping, dict):
587 raise ValueError('Invalid network mapping. Must be a dict')
588
589 # allow top-level 'network' key
590 if 'network' in network_config:
591 network_config = network_config.get('network')
592
593 # v1 has 'config' key and uses type: devtype elements
594 if 'config' in network_config:
595 dev_configs = set(device['type']
596 for device in network_config['config'])
597 else:
598 # v2 has no config key
599 dev_configs = set(cfgtype for (cfgtype, cfg) in
600 network_config.items() if cfgtype not in ['version'])
601
602 needed_packages = []
603 for dev_type in dev_configs:
604 if dev_type in mapping:
605 needed_packages.extend(mapping[dev_type])
606
607 return needed_packages
608
609
610def detect_required_packages_mapping():
611 """Return a dictionary providing a versioned configuration which maps
612 network configuration elements to the packages which are required
613 for functionality.
614 """
615 mapping = {
616 1: {
617 'handler': network_config_required_packages,
618 'mapping': {
619 'bond': ['ifenslave'],
620 'bridge': ['bridge-utils'],
621 'vlan': ['vlan']},
622 },
623 2: {
624 'handler': network_config_required_packages,
625 'mapping': {
626 'bonds': ['ifenslave'],
627 'bridges': ['bridge-utils'],
628 'vlans': ['vlan']}
629 },
630 }
631
632 return mapping
633
634# vi: ts=4 expandtab syntax=python575# vi: ts=4 expandtab syntax=python
diff --git a/curtin/net/deps.py b/curtin/net/deps.py
635new file mode 100644576new file mode 100644
index 0000000..b98961d
--- /dev/null
+++ b/curtin/net/deps.py
@@ -0,0 +1,72 @@
1# This file is part of curtin. See LICENSE file for copyright and license info.
2
3from curtin.distro import DISTROS
4
5
6def network_config_required_packages(network_config, mapping=None):
7
8 if network_config is None:
9 network_config = {}
10
11 if not isinstance(network_config, dict):
12 raise ValueError('Invalid network configuration. Must be a dict')
13
14 if mapping is None:
15 mapping = {}
16
17 if not isinstance(mapping, dict):
18 raise ValueError('Invalid network mapping. Must be a dict')
19
20 # allow top-level 'network' key
21 if 'network' in network_config:
22 network_config = network_config.get('network')
23
24 # v1 has 'config' key and uses type: devtype elements
25 if 'config' in network_config:
26 dev_configs = set(device['type']
27 for device in network_config['config'])
28 else:
29 # v2 has no config key
30 dev_configs = set(cfgtype for (cfgtype, cfg) in
31 network_config.items() if cfgtype not in ['version'])
32
33 needed_packages = []
34 for dev_type in dev_configs:
35 if dev_type in mapping:
36 needed_packages.extend(mapping[dev_type])
37
38 return needed_packages
39
40
41def detect_required_packages_mapping(osfamily=DISTROS.debian):
42 """Return a dictionary providing a versioned configuration which maps
43 network configuration elements to the packages which are required
44 for functionality.
45 """
46 # keys ending with 's' are v2 values
47 distro_mapping = {
48 DISTROS.debian: {
49 'bond': ['ifenslave'],
50 'bonds': [],
51 'bridge': ['bridge-utils'],
52 'bridges': [],
53 'vlan': ['vlan'],
54 'vlans': []},
55 DISTROS.redhat: {
56 'bond': [],
57 'bonds': [],
58 'bridge': [],
59 'bridges': [],
60 'vlan': [],
61 'vlans': []},
62 }
63 if osfamily not in distro_mapping:
64 raise ValueError('No net package mapping for distro: %s' % osfamily)
65
66 return {1: {'handler': network_config_required_packages,
67 'mapping': distro_mapping.get(osfamily)},
68 2: {'handler': network_config_required_packages,
69 'mapping': distro_mapping.get(osfamily)}}
70
71
72# vi: ts=4 expandtab syntax=python
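A rough usage sketch for the new per-distro mapping (the example network config is invented):

    from curtin.distro import DISTROS
    from curtin.net.deps import detect_required_packages_mapping

    net_cfg = {'network': {'version': 1, 'config': [
        {'type': 'bond', 'name': 'bond0'},
        {'type': 'vlan', 'name': 'bond0.10'}]}}

    mapping = detect_required_packages_mapping(osfamily=DISTROS.debian)
    handler = mapping[1]['handler']
    # On a Debian-family target this yields ['ifenslave', 'vlan'] (order depends
    # on set iteration); for the redhat family all mapping entries are empty.
    pkgs = handler(net_cfg, mapping=mapping[1]['mapping'])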
diff --git a/curtin/paths.py b/curtin/paths.py
0new file mode 10064473new file mode 100644
index 0000000..064b060
--- /dev/null
+++ b/curtin/paths.py
@@ -0,0 +1,34 @@
1# This file is part of curtin. See LICENSE file for copyright and license info.
2import os
3
4try:
5 string_types = (basestring,)
6except NameError:
7 string_types = (str,)
8
9
10def target_path(target, path=None):
11 # return 'path' inside target, accepting target as None
12 if target in (None, ""):
13 target = "/"
14 elif not isinstance(target, string_types):
15 raise ValueError("Unexpected input for target: %s" % target)
16 else:
17 target = os.path.abspath(target)
18 # abspath("//") returns "//" specifically for 2 slashes.
19 if target.startswith("//"):
20 target = target[1:]
21
22 if not path:
23 return target
24
25 if not isinstance(path, string_types):
26 raise ValueError("Unexpected input for path: %s" % path)
27
28 # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /.
29 while len(path) and path[0] == "/":
30 path = path[1:]
31
32 return os.path.join(target, path)
33
34# vi: ts=4 expandtab syntax=python
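The behaviour described in the comments above, shown in a few illustrative calls (a sketch; the target paths are invented):

    from curtin.paths import target_path

    target_path(None)                           # '/'
    target_path('/tmp/target', '/etc/fstab')    # '/tmp/target/etc/fstab'
    target_path('/tmp/target//', 'etc/fstab')   # '/tmp/target/etc/fstab'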
diff --git a/curtin/util.py b/curtin/util.py
index 29bf06e..238d7c5 100644
--- a/curtin/util.py
+++ b/curtin/util.py
@@ -4,7 +4,6 @@ import argparse
4import collections4import collections
5from contextlib import contextmanager5from contextlib import contextmanager
6import errno6import errno
7import glob
8import json7import json
9import os8import os
10import platform9import platform
@@ -38,15 +37,16 @@ except NameError:
38 # python3 does not have a long type.37 # python3 does not have a long type.
39 numeric_types = (int, float)38 numeric_types = (int, float)
4039
40from . import paths
41from .log import LOG, log_call41from .log import LOG, log_call
4242
43_INSTALLED_HELPERS_PATH = 'usr/lib/curtin/helpers'43_INSTALLED_HELPERS_PATH = 'usr/lib/curtin/helpers'
44_INSTALLED_MAIN = 'usr/bin/curtin'44_INSTALLED_MAIN = 'usr/bin/curtin'
4545
46_LSB_RELEASE = {}
47_USES_SYSTEMD = None46_USES_SYSTEMD = None
48_HAS_UNSHARE_PID = None47_HAS_UNSHARE_PID = None
4948
49
50_DNS_REDIRECT_IP = None50_DNS_REDIRECT_IP = None
5151
52# matcher used in template rendering functions52# matcher used in template rendering functions
@@ -61,7 +61,7 @@ def _subp(args, data=None, rcs=None, env=None, capture=False,
61 rcs = [0]61 rcs = [0]
62 devnull_fp = None62 devnull_fp = None
6363
64 tpath = target_path(target)64 tpath = paths.target_path(target)
65 chroot_args = [] if tpath == "/" else ['chroot', target]65 chroot_args = [] if tpath == "/" else ['chroot', target]
66 sh_args = ['sh', '-c'] if shell else []66 sh_args = ['sh', '-c'] if shell else []
67 if isinstance(args, string_types):67 if isinstance(args, string_types):
@@ -165,7 +165,7 @@ def _get_unshare_pid_args(unshare_pid=None, target=None, euid=None):
165 if euid is None:165 if euid is None:
166 euid = os.geteuid()166 euid = os.geteuid()
167167
168 tpath = target_path(target)168 tpath = paths.target_path(target)
169169
170 unshare_pid_in = unshare_pid170 unshare_pid_in = unshare_pid
171 if unshare_pid is None:171 if unshare_pid is None:
@@ -595,7 +595,7 @@ def disable_daemons_in_root(target):
595 'done',595 'done',
596 ''])596 ''])
597597
598 fpath = target_path(target, "/usr/sbin/policy-rc.d")598 fpath = paths.target_path(target, "/usr/sbin/policy-rc.d")
599599
600 if os.path.isfile(fpath):600 if os.path.isfile(fpath):
601 return False601 return False
@@ -606,7 +606,7 @@ def disable_daemons_in_root(target):
606606
607def undisable_daemons_in_root(target):607def undisable_daemons_in_root(target):
608 try:608 try:
609 os.unlink(target_path(target, "/usr/sbin/policy-rc.d"))609 os.unlink(paths.target_path(target, "/usr/sbin/policy-rc.d"))
610 except OSError as e:610 except OSError as e:
611 if e.errno != errno.ENOENT:611 if e.errno != errno.ENOENT:
612 raise612 raise
@@ -618,7 +618,7 @@ class ChrootableTarget(object):
618 def __init__(self, target, allow_daemons=False, sys_resolvconf=True):618 def __init__(self, target, allow_daemons=False, sys_resolvconf=True):
619 if target is None:619 if target is None:
620 target = "/"620 target = "/"
621 self.target = target_path(target)621 self.target = paths.target_path(target)
622 self.mounts = ["/dev", "/proc", "/sys"]622 self.mounts = ["/dev", "/proc", "/sys"]
623 self.umounts = []623 self.umounts = []
624 self.disabled_daemons = False624 self.disabled_daemons = False
@@ -628,14 +628,14 @@ class ChrootableTarget(object):
628628
629 def __enter__(self):629 def __enter__(self):
630 for p in self.mounts:630 for p in self.mounts:
631 tpath = target_path(self.target, p)631 tpath = paths.target_path(self.target, p)
632 if do_mount(p, tpath, opts='--bind'):632 if do_mount(p, tpath, opts='--bind'):
633 self.umounts.append(tpath)633 self.umounts.append(tpath)
634634
635 if not self.allow_daemons:635 if not self.allow_daemons:
636 self.disabled_daemons = disable_daemons_in_root(self.target)636 self.disabled_daemons = disable_daemons_in_root(self.target)
637637
638 rconf = target_path(self.target, "/etc/resolv.conf")638 rconf = paths.target_path(self.target, "/etc/resolv.conf")
639 target_etc = os.path.dirname(rconf)639 target_etc = os.path.dirname(rconf)
640 if self.target != "/" and os.path.isdir(target_etc):640 if self.target != "/" and os.path.isdir(target_etc):
641 # never muck with resolv.conf on /641 # never muck with resolv.conf on /
@@ -660,13 +660,13 @@ class ChrootableTarget(object):
660 undisable_daemons_in_root(self.target)660 undisable_daemons_in_root(self.target)
661661
662 # if /dev is to be unmounted, udevadm settle (LP: #1462139)662 # if /dev is to be unmounted, udevadm settle (LP: #1462139)
663 if target_path(self.target, "/dev") in self.umounts:663 if paths.target_path(self.target, "/dev") in self.umounts:
664 log_call(subp, ['udevadm', 'settle'])664 log_call(subp, ['udevadm', 'settle'])
665665
666 for p in reversed(self.umounts):666 for p in reversed(self.umounts):
667 do_umount(p)667 do_umount(p)
668668
669 rconf = target_path(self.target, "/etc/resolv.conf")669 rconf = paths.target_path(self.target, "/etc/resolv.conf")
670 if self.sys_resolvconf and self.rconf_d:670 if self.sys_resolvconf and self.rconf_d:
671 os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf)671 os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf)
672 shutil.rmtree(self.rconf_d)672 shutil.rmtree(self.rconf_d)
@@ -676,7 +676,7 @@ class ChrootableTarget(object):
676 return subp(*args, **kwargs)676 return subp(*args, **kwargs)
677677
678 def path(self, path):678 def path(self, path):
679 return target_path(self.target, path)679 return paths.target_path(self.target, path)
680680
681681
682def is_exe(fpath):682def is_exe(fpath):
@@ -685,29 +685,29 @@ def is_exe(fpath):
685685
686686
687def which(program, search=None, target=None):687def which(program, search=None, target=None):
688 target = target_path(target)688 target = paths.target_path(target)
689689
690 if os.path.sep in program:690 if os.path.sep in program:
691 # if program had a '/' in it, then do not search PATH691 # if program had a '/' in it, then do not search PATH
692 # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls692 # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls
693 # so effectively we set cwd to / (or target)693 # so effectively we set cwd to / (or target)
694 if is_exe(target_path(target, program)):694 if is_exe(paths.target_path(target, program)):
695 return program695 return program
696696
697 if search is None:697 if search is None:
698 paths = [p.strip('"') for p in698 candpaths = [p.strip('"') for p in
699 os.environ.get("PATH", "").split(os.pathsep)]699 os.environ.get("PATH", "").split(os.pathsep)]
700 if target == "/":700 if target == "/":
701 search = paths701 search = candpaths
702 else:702 else:
703 search = [p for p in paths if p.startswith("/")]703 search = [p for p in candpaths if p.startswith("/")]
704704
705 # normalize path input705 # normalize path input
706 search = [os.path.abspath(p) for p in search]706 search = [os.path.abspath(p) for p in search]
707707
708 for path in search:708 for path in search:
709 ppath = os.path.sep.join((path, program))709 ppath = os.path.sep.join((path, program))
710 if is_exe(target_path(target, ppath)):710 if is_exe(paths.target_path(target, ppath)):
711 return ppath711 return ppath
712712
713 return None713 return None
@@ -773,116 +773,6 @@ def get_architecture(target=None):
773 return out.strip()773 return out.strip()
774774
775775
776def has_pkg_available(pkg, target=None):
777 out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target)
778 for item in out.splitlines():
779 if pkg == item.strip():
780 return True
781 return False
782
783
784def get_installed_packages(target=None):
785 (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True)
786
787 pkgs_inst = set()
788 for line in out.splitlines():
789 try:
790 (state, pkg, other) = line.split(None, 2)
791 except ValueError:
792 continue
793 if state.startswith("hi") or state.startswith("ii"):
794 pkgs_inst.add(re.sub(":.*", "", pkg))
795
796 return pkgs_inst
797
798
799def has_pkg_installed(pkg, target=None):
800 try:
801 out, _ = subp(['dpkg-query', '--show', '--showformat',
802 '${db:Status-Abbrev}', pkg],
803 capture=True, target=target)
804 return out.rstrip() == "ii"
805 except ProcessExecutionError:
806 return False
807
808
809def parse_dpkg_version(raw, name=None, semx=None):
810 """Parse a dpkg version string into various parts and calcualate a
811 numerical value of the version for use in comparing package versions
812
813 Native packages (without a '-'), will have the package version treated
814 as the upstream version.
815
816 returns a dictionary with fields:
817 'major' (int), 'minor' (int), 'micro' (int),
818 'semantic_version' (int),
819 'extra' (string), 'raw' (string), 'upstream' (string),
820 'name' (present only if name is not None)
821 """
822 if not isinstance(raw, string_types):
823 raise TypeError(
824 "Invalid type %s for parse_dpkg_version" % raw.__class__)
825
826 if semx is None:
827 semx = (10000, 100, 1)
828
829 if "-" in raw:
830 upstream = raw.rsplit('-', 1)[0]
831 else:
832 # this is a native package, package version treated as upstream.
833 upstream = raw
834
835 match = re.search(r'[^0-9.]', upstream)
836 if match:
837 extra = upstream[match.start():]
838 upstream_base = upstream[:match.start()]
839 else:
840 upstream_base = upstream
841 extra = None
842
843 toks = upstream_base.split(".", 2)
844 if len(toks) == 3:
845 major, minor, micro = toks
846 elif len(toks) == 2:
847 major, minor, micro = (toks[0], toks[1], 0)
848 elif len(toks) == 1:
849 major, minor, micro = (toks[0], 0, 0)
850
851 version = {
852 'major': int(major),
853 'minor': int(minor),
854 'micro': int(micro),
855 'extra': extra,
856 'raw': raw,
857 'upstream': upstream,
858 }
859 if name:
860 version['name'] = name
861
862 if semx:
863 try:
864 version['semantic_version'] = int(
865 int(major) * semx[0] + int(minor) * semx[1] +
866 int(micro) * semx[2])
867 except (ValueError, IndexError):
868 version['semantic_version'] = None
869
870 return version
871
872
873def get_package_version(pkg, target=None, semx=None):
874 """Use dpkg-query to extract package pkg's version string
875 and parse the version string into a dictionary
876 """
877 try:
878 out, _ = subp(['dpkg-query', '--show', '--showformat',
879 '${Version}', pkg], capture=True, target=target)
880 raw = out.rstrip()
881 return parse_dpkg_version(raw, name=pkg, semx=semx)
882 except ProcessExecutionError:
883 return None
884
885
886def find_newer(src, files):776def find_newer(src, files):
887 mtime = os.stat(src).st_mtime777 mtime = os.stat(src).st_mtime
888 return [f for f in files if778 return [f for f in files if
@@ -907,134 +797,6 @@ def set_unexecutable(fname, strict=False):
907 return cur797 return cur
908798
909799
910def apt_update(target=None, env=None, force=False, comment=None,
911 retries=None):
912
913 marker = "tmp/curtin.aptupdate"
914 if target is None:
915 target = "/"
916
917 if env is None:
918 env = os.environ.copy()
919
920 if retries is None:
921 # by default run apt-update up to 3 times to allow
922 # for transient failures
923 retries = (1, 2, 3)
924
925 if comment is None:
926 comment = "no comment provided"
927
928 if comment.endswith("\n"):
929 comment = comment[:-1]
930
931 marker = target_path(target, marker)
932 # if marker exists, check if there are files that would make it obsolete
933 listfiles = [target_path(target, "/etc/apt/sources.list")]
934 listfiles += glob.glob(
935 target_path(target, "etc/apt/sources.list.d/*.list"))
936
937 if os.path.exists(marker) and not force:
938 if len(find_newer(marker, listfiles)) == 0:
939 return
940
941 restore_perms = []
942
943 abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp"))
944 try:
945 abs_slist = abs_tmpdir + "/sources.list"
946 abs_slistd = abs_tmpdir + "/sources.list.d"
947 ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir)
948 ch_slist = ch_tmpdir + "/sources.list"
949 ch_slistd = ch_tmpdir + "/sources.list.d"
950
951 # this file gets executed on apt-get update sometimes. (LP: #1527710)
952 motd_update = target_path(
953 target, "/usr/lib/update-notifier/update-motd-updates-available")
954 pmode = set_unexecutable(motd_update)
955 if pmode is not None:
956 restore_perms.append((motd_update, pmode),)
957
958 # create tmpdir/sources.list with all lines other than deb-src
959 # avoid apt complaining by using existing and empty dir for sourceparts
960 os.mkdir(abs_slistd)
961 with open(abs_slist, "w") as sfp:
962 for sfile in listfiles:
963 with open(sfile, "r") as fp:
964 contents = fp.read()
965 for line in contents.splitlines():
966 line = line.lstrip()
967 if not line.startswith("deb-src"):
968 sfp.write(line + "\n")
969
970 update_cmd = [
971 'apt-get', '--quiet',
972 '--option=Acquire::Languages=none',
973 '--option=Dir::Etc::sourcelist=%s' % ch_slist,
974 '--option=Dir::Etc::sourceparts=%s' % ch_slistd,
975 'update']
976
977 # do not using 'run_apt_command' so we can use 'retries' to subp
978 with ChrootableTarget(target, allow_daemons=True) as inchroot:
979 inchroot.subp(update_cmd, env=env, retries=retries)
980 finally:
981 for fname, perms in restore_perms:
982 os.chmod(fname, perms)
983 if abs_tmpdir:
984 shutil.rmtree(abs_tmpdir)
985
986 with open(marker, "w") as fp:
987 fp.write(comment + "\n")
988
989
990def run_apt_command(mode, args=None, aptopts=None, env=None, target=None,
991 execute=True, allow_daemons=False):
992 opts = ['--quiet', '--assume-yes',
993 '--option=Dpkg::options::=--force-unsafe-io',
994 '--option=Dpkg::Options::=--force-confold']
995
996 if args is None:
997 args = []
998
999 if aptopts is None:
1000 aptopts = []
1001
1002 if env is None:
1003 env = os.environ.copy()
1004 env['DEBIAN_FRONTEND'] = 'noninteractive'
1005
1006 if which('eatmydata', target=target):
1007 emd = ['eatmydata']
1008 else:
1009 emd = []
1010
1011 cmd = emd + ['apt-get'] + opts + aptopts + [mode] + args
1012 if not execute:
1013 return env, cmd
1014
1015 apt_update(target, env=env, comment=' '.join(cmd))
1016 with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
1017 return inchroot.subp(cmd, env=env)
1018
1019
1020def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False):
1021 LOG.debug("Upgrading system in %s", target)
1022 for mode in ('dist-upgrade', 'autoremove'):
1023 ret = run_apt_command(
1024 mode, aptopts=aptopts, target=target,
1025 env=env, allow_daemons=allow_daemons)
1026 return ret
1027
1028
1029def install_packages(pkglist, aptopts=None, target=None, env=None,
1030 allow_daemons=False):
1031 if isinstance(pkglist, str):
1032 pkglist = [pkglist]
1033 return run_apt_command(
1034 'install', args=pkglist,
1035 aptopts=aptopts, target=target, env=env, allow_daemons=allow_daemons)
1036
1037
1038def is_uefi_bootable():800def is_uefi_bootable():
1039 return os.path.exists('/sys/firmware/efi') is True801 return os.path.exists('/sys/firmware/efi') is True
1040802
@@ -1106,7 +868,7 @@ def run_hook_if_exists(target, hook):
1106 """868 """
1107 Look for "hook" in "target" and run it869 Look for "hook" in "target" and run it
1108 """870 """
1109 target_hook = target_path(target, '/curtin/' + hook)871 target_hook = paths.target_path(target, '/curtin/' + hook)
1110 if os.path.isfile(target_hook):872 if os.path.isfile(target_hook):
1111 LOG.debug("running %s" % target_hook)873 LOG.debug("running %s" % target_hook)
1112 subp([target_hook])874 subp([target_hook])
@@ -1261,41 +1023,6 @@ def is_file_not_found_exc(exc):
1261 exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO))1023 exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO))
12621024
12631025
1264def _lsb_release(target=None):
1265 fmap = {'Codename': 'codename', 'Description': 'description',
1266 'Distributor ID': 'id', 'Release': 'release'}
1267
1268 data = {}
1269 try:
1270 out, _ = subp(['lsb_release', '--all'], capture=True, target=target)
1271 for line in out.splitlines():
1272 fname, _, val = line.partition(":")
1273 if fname in fmap:
1274 data[fmap[fname]] = val.strip()
1275 missing = [k for k in fmap.values() if k not in data]
1276 if len(missing):
1277 LOG.warn("Missing fields in lsb_release --all output: %s",
1278 ','.join(missing))
1279
1280 except ProcessExecutionError as err:
1281 LOG.warn("Unable to get lsb_release --all: %s", err)
1282 data = {v: "UNAVAILABLE" for v in fmap.values()}
1283
1284 return data
1285
1286
1287def lsb_release(target=None):
1288 if target_path(target) != "/":
1289 # do not use or update cache if target is provided
1290 return _lsb_release(target)
1291
1292 global _LSB_RELEASE
1293 if not _LSB_RELEASE:
1294 data = _lsb_release()
1295 _LSB_RELEASE.update(data)
1296 return _LSB_RELEASE
1297
1298
1299class MergedCmdAppend(argparse.Action):1026class MergedCmdAppend(argparse.Action):
1300 """This appends to a list in order of appearence both the option string1027 """This appends to a list in order of appearence both the option string
1301 and the value"""1028 and the value"""
@@ -1430,31 +1157,6 @@ def is_resolvable_url(url):
1430 return is_resolvable(urlparse(url).hostname)1157 return is_resolvable(urlparse(url).hostname)
14311158
14321159
1433def target_path(target, path=None):
1434 # return 'path' inside target, accepting target as None
1435 if target in (None, ""):
1436 target = "/"
1437 elif not isinstance(target, string_types):
1438 raise ValueError("Unexpected input for target: %s" % target)
1439 else:
1440 target = os.path.abspath(target)
1441 # abspath("//") returns "//" specifically for 2 slashes.
1442 if target.startswith("//"):
1443 target = target[1:]
1444
1445 if not path:
1446 return target
1447
1448 if not isinstance(path, string_types):
1449 raise ValueError("Unexpected input for path: %s" % path)
1450
1451 # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /.
1452 while len(path) and path[0] == "/":
1453 path = path[1:]
1454
1455 return os.path.join(target, path)
1456
1457
1458class RunInChroot(ChrootableTarget):1160class RunInChroot(ChrootableTarget):
1459 """Backwards compatibility for RunInChroot (LP: #1617375).1161 """Backwards compatibility for RunInChroot (LP: #1617375).
1460 It needs to work like:1162 It needs to work like:
diff --git a/doc/topics/config.rst b/doc/topics/config.rst
index 76e520d..218bc17 100644
--- a/doc/topics/config.rst
+++ b/doc/topics/config.rst
@@ -14,6 +14,7 @@ Curtin's top level config keys are as follows:
14- apt_mirrors (``apt_mirrors``)14- apt_mirrors (``apt_mirrors``)
15- apt_proxy (``apt_proxy``)15- apt_proxy (``apt_proxy``)
16- block-meta (``block``)16- block-meta (``block``)
17- curthooks (``curthooks``)
17- debconf_selections (``debconf_selections``)18- debconf_selections (``debconf_selections``)
18- disable_overlayroot (``disable_overlayroot``)19- disable_overlayroot (``disable_overlayroot``)
19- grub (``grub``)20- grub (``grub``)
@@ -110,6 +111,45 @@ Specify the filesystem label on the boot partition.
110 label: my-boot-partition111 label: my-boot-partition
111112
112113
114curthooks
115~~~~~~~~~
116Configure how Curtin determines which :ref:`curthooks` to run during the
117installation process.
118
119**mode**: *<['auto', 'builtin', 'target']>*
120
121The default mode is ``auto``.
122
123In ``auto`` mode, curtin will execute the curthooks found within the image, if
124present. For images without curthooks, curtin will execute its built-in hooks.
125
126Currently the built-in curthooks support the following OS families:
127
128- Ubuntu
129- CentOS
130
131When specifying ``builtin``, curtin will only run the curthooks present in
132Curtin itself, ignoring any curthooks that may be present in the target
133operating system.
134
135When specifying ``target``, curtin will attempt to run the curthooks in the
136target operating system. If the target does NOT contain any curthooks, then
137the built-in curthooks will be run instead.
138
139Any errors during execution of curthooks (built-in or target) will fail the
140installation.
141
142**Example**::
143
144 # ignore any target curthooks
145 curthooks:
146 mode: builtin
147
148 # Only run target curthooks, fall back to built-in
149 # run target curthooks if present, falling back to built-in
150 mode: target
151
152
113debconf_selections153debconf_selections
114~~~~~~~~~~~~~~~~~~154~~~~~~~~~~~~~~~~~~
115Curtin will update the target with debconf set-selection values. Users will155Curtin will update the target with debconf set-selection values. Users will
diff --git a/doc/topics/curthooks.rst b/doc/topics/curthooks.rst
index e5f341b..c59aeaf 100644
--- a/doc/topics/curthooks.rst
+++ b/doc/topics/curthooks.rst
@@ -1,7 +1,13 @@
1.. _curthooks:
2
1========================================3========================================
2Curthooks / New OS Support 4Curthooks / New OS Support
3========================================5========================================
4Curtin has built-in support for installation of Ubuntu.6Curtin has built-in support for installation of:
7
8 - Ubuntu
9 - CentOS
10
5Other operating systems are supported through a mechanism called11Other operating systems are supported through a mechanism called
6'curthooks' or 'curtin-hooks'.12'curthooks' or 'curtin-hooks'.
713
@@ -47,11 +53,21 @@ details. Specifically interesting to this stage are:
47 - ``CONFIG``: This is a path to the curtin config file. It is provided so53 - ``CONFIG``: This is a path to the curtin config file. It is provided so
48 that additional configuration could be provided through to the OS54 that additional configuration could be provided through to the OS
49 customization.55 customization.
56 - ``WORKING_DIR``: This is a path to a temporary directory where curtin
57 stores state and configuration files.
5058
51.. **TODO**: We should add 'PYTHON' or 'CURTIN_PYTHON' to this environment59.. **TODO**: We should add 'PYTHON' or 'CURTIN_PYTHON' to this environment
52 so that the hook can easily run a python program with the same python60 so that the hook can easily run a python program with the same python
53 that curtin ran with (ie, python2 or python3).61 that curtin ran with (ie, python2 or python3).
5462
63Running built-in hooks
64----------------------
65
66Curthooks may opt to run the built-in curthooks that are already provided in
67curtin itself. To do so, an in-image curthook can import the ``curthooks``
68module and invoke the ``builtin_curthooks`` function, passing in the required
69parameters: config, target, and state, as sketched below.
70
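A minimal in-image ``curtin-hooks`` script along these lines could defer to the
built-in hooks. This is only a sketch: the ``TARGET_MOUNT_POINT`` variable and
the exact keys of the state dictionary are assumptions, and real curthooks
receive a richer state than shown here::

    #!/usr/bin/env python3
    # Sketch: defer to curtin's built-in curthooks from an in-image hook.
    import os

    from curtin import config
    from curtin.commands.curthooks import builtin_curthooks

    cfg = config.load_config(os.environ['CONFIG'])       # curtin config path
    target = os.environ.get('TARGET_MOUNT_POINT', '/')   # assumed env var
    # Minimal state dict; assumed shape, real hooks see additional keys.
    state = {'config': os.environ['CONFIG'], 'target': target,
             'scratch': os.environ.get('WORKING_DIR')}

    builtin_curthooks(cfg, target, state)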
5571
56Networking configuration72Networking configuration
57------------------------73------------------------
diff --git a/examples/tests/filesystem_battery.yaml b/examples/tests/filesystem_battery.yaml
index 3b1edbf..4eae5b6 100644
--- a/examples/tests/filesystem_battery.yaml
+++ b/examples/tests/filesystem_battery.yaml
@@ -113,8 +113,8 @@ storage:
113 - id: bind1113 - id: bind1
114 fstype: "none"114 fstype: "none"
115 options: "bind"115 options: "bind"
116 path: "/var/lib"116 path: "/var/cache"
117 spec: "/my/bind-over-var-lib"117 spec: "/my/bind-over-var-cache"
118 type: mount118 type: mount
119 - id: bind2119 - id: bind2
120 fstype: "none"120 fstype: "none"
diff --git a/helpers/common b/helpers/common
index ac2d0f3..f9217b7 100644
--- a/helpers/common
+++ b/helpers/common
@@ -541,18 +541,18 @@ get_carryover_params() {
541}541}
542542
543install_grub() {543install_grub() {
544 local long_opts="uefi,update-nvram"544 local long_opts="uefi,update-nvram,os-family:"
545 local getopt_out="" mp_efi=""545 local getopt_out="" mp_efi=""
546 getopt_out=$(getopt --name "${0##*/}" \546 getopt_out=$(getopt --name "${0##*/}" \
547 --options "" --long "${long_opts}" -- "$@") &&547 --options "" --long "${long_opts}" -- "$@") &&
548 eval set -- "${getopt_out}"548 eval set -- "${getopt_out}"
549549
550 local uefi=0550 local uefi=0 update_nvram=0 os_family=""
551 local update_nvram=0
552551
553 while [ $# -ne 0 ]; do552 while [ $# -ne 0 ]; do
554 cur="$1"; next="$2";553 cur="$1"; next="$2";
555 case "$cur" in554 case "$cur" in
555 --os-family) os_family=${next};;
556 --uefi) uefi=$((${uefi}+1));;556 --uefi) uefi=$((${uefi}+1));;
557 --update-nvram) update_nvram=$((${update_nvram}+1));;557 --update-nvram) update_nvram=$((${update_nvram}+1));;
558 --) shift; break;;558 --) shift; break;;
@@ -595,29 +595,88 @@ install_grub() {
595 error "$mp_dev ($fstype) is not a block device!"; return 1;595 error "$mp_dev ($fstype) is not a block device!"; return 1;
596 fi596 fi
597597
598 # get dpkg arch598 local os_variant=""
599 local dpkg_arch=""599 if [ -e "${mp}/etc/os-release" ]; then
600 dpkg_arch=$(chroot "$mp" dpkg --print-architecture)600 os_variant=$(chroot "$mp" \
601 r=$?601 /bin/sh -c 'echo $(. /etc/os-release; echo $ID)')
602 else
603 # Centos6 doesn't have os-release, so check for centos/redhat release
604 # looks like: CentOS release 6.9 (Final)
605 for rel in $(ls ${mp}/etc/*-release); do
606 os_variant=$(awk '{print tolower($1)}' $rel)
607 [ -n "$os_variant" ] && break
608 done
609 fi
610 [ $? != 0 ] &&
611 { error "Failed to read ID from $mp/etc/os-release"; return 1; }
612
613 local rhel_ver=""
614 case $os_variant in
615 debian|ubuntu) os_family="debian";;
616 centos|rhel)
617 os_family="redhat"
618 rhel_ver=$(chroot "$mp" rpm -E '%rhel')
619 ;;
620 esac
621
622 # ensure we have both settings, family and variant are needed
623 [ -n "${os_variant}" -a -n "${os_family}" ] ||
624 { error "Failed to determine os variant and family"; return 1; }
625
626 # get target arch
627 local target_arch="" r="1"
628 case $os_family in
629 debian)
630 target_arch=$(chroot "$mp" dpkg --print-architecture)
631 r=$?
632 ;;
633 redhat)
634 target_arch=$(chroot "$mp" rpm -E '%_arch')
635 r=$?
636 ;;
637 esac
602 [ $r -eq 0 ] || {638 [ $r -eq 0 ] || {
603 error "failed to get dpkg architecture [$r]"639 error "failed to get target architecture [$r]"
604 return 1;640 return 1;
605 }641 }
606642
607 # grub is not the bootloader you are looking for643 # grub is not the bootloader you are looking for
608 if [ "${dpkg_arch}" = "s390x" ]; then644 if [ "${target_arch}" = "s390x" ]; then
609 return 0;645 return 0;
610 fi646 fi
611647
612 # set correct grub package648 # set correct grub package
613 local grub_name="grub-pc"649 local grub_name=""
614 local grub_target="i386-pc"650 local grub_target=""
615 if [ "${dpkg_arch#ppc64}" != "${dpkg_arch}" ]; then651 case "$target_arch" in
652 i386|amd64)
653 # debian
654 grub_name="grub-pc"
655 grub_target="i386-pc"
656 ;;
657 x86_64)
658 case $rhel_ver in
659 6) grub_name="grub";;
660 7) grub_name="grub2-pc";;
661 *)
662 error "Unknown rhel_ver [$rhel_ver]";
663 return 1;
664 ;;
665 esac
666 grub_target="i386-pc"
667 ;;
668 esac
669 if [ "${target_arch#ppc64}" != "${target_arch}" ]; then
616 grub_name="grub-ieee1275"670 grub_name="grub-ieee1275"
617 grub_target="powerpc-ieee1275"671 grub_target="powerpc-ieee1275"
618 elif [ "$uefi" -ge 1 ]; then672 elif [ "$uefi" -ge 1 ]; then
619 grub_name="grub-efi-$dpkg_arch"673 grub_name="grub-efi-$target_arch"
620 case "$dpkg_arch" in674 case "$target_arch" in
675 x86_64)
676 # centos 7+, no centos6 support
677 grub_name="grub2-efi-x64-modules"
678 grub_target="x86_64-efi"
679 ;;
621 amd64)680 amd64)
622 grub_target="x86_64-efi";;681 grub_target="x86_64-efi";;
623 arm64)682 arm64)
@@ -626,9 +685,19 @@ install_grub() {
626 fi685 fi
627686
628 # check that the grub package is installed687 # check that the grub package is installed
629 tmp=$(chroot "$mp" dpkg-query --show \688 local r=$?
630 --showformat='${Status}\n' $grub_name)689 case $os_family in
631 r=$?690 debian)
691 tmp=$(chroot "$mp" dpkg-query --show \
692 --showformat='${Status}\n' $grub_name)
693 r=$?
694 ;;
695 redhat)
696 tmp=$(chroot "$mp" rpm -q \
697 --queryformat='install ok installed\n' $grub_name)
698 r=$?
699 ;;
700 esac
632 if [ $r -ne 0 -a $r -ne 1 ]; then701 if [ $r -ne 0 -a $r -ne 1 ]; then
633 error "failed to check if $grub_name installed";702 error "failed to check if $grub_name installed";
634 return 1;703 return 1;
@@ -636,11 +705,16 @@ install_grub() {
636 case "$tmp" in705 case "$tmp" in
637 install\ ok\ installed) :;;706 install\ ok\ installed) :;;
638 *) debug 1 "$grub_name not installed, not doing anything";707 *) debug 1 "$grub_name not installed, not doing anything";
639 return 0;;708 return 1;;
640 esac709 esac
641710
642 local grub_d="etc/default/grub.d"711 local grub_d="etc/default/grub.d"
643 local mygrub_cfg="$grub_d/50-curtin-settings.cfg"712 local mygrub_cfg="$grub_d/50-curtin-settings.cfg"
713 case $os_family in
714 redhat)
715 grub_d="etc/default"
716 mygrub_cfg="etc/default/grub";;
717 esac
644 [ -d "$mp/$grub_d" ] || mkdir -p "$mp/$grub_d" ||718 [ -d "$mp/$grub_d" ] || mkdir -p "$mp/$grub_d" ||
645 { error "Failed to create $grub_d"; return 1; }719 { error "Failed to create $grub_d"; return 1; }
646720
@@ -659,14 +733,23 @@ install_grub() {
659 error "Failed to get carryover parrameters from cmdline"; 733 error "Failed to get carryover parrameters from cmdline";
660 return 1;734 return 1;
661 }735 }
736 # always append rd.auto=1 for centos
737 case $os_family in
738 redhat)
739 newargs="$newargs rd.auto=1";;
740 esac
662 debug 1 "carryover command line params: $newargs"741 debug 1 "carryover command line params: $newargs"
663742
664 : > "$mp/$mygrub_cfg" ||743 case $os_family in
665 { error "Failed to write '$mygrub_cfg'"; return 1; }744 debian)
745 : > "$mp/$mygrub_cfg" ||
746 { error "Failed to write '$mygrub_cfg'"; return 1; }
747 ;;
748 esac
666 {749 {
667 [ "${REPLACE_GRUB_LINUX_DEFAULT:-1}" = "0" ] ||750 [ "${REPLACE_GRUB_LINUX_DEFAULT:-1}" = "0" ] ||
668 echo "GRUB_CMDLINE_LINUX_DEFAULT=\"$newargs\""751 echo "GRUB_CMDLINE_LINUX_DEFAULT=\"$newargs\""
669 echo "# disable grub os prober that might find other OS installs."752 echo "# Curtin disable grub os prober that might find other OS installs."
670 echo "GRUB_DISABLE_OS_PROBER=true"753 echo "GRUB_DISABLE_OS_PROBER=true"
671 echo "GRUB_TERMINAL=console"754 echo "GRUB_TERMINAL=console"
672 } >> "$mp/$mygrub_cfg"755 } >> "$mp/$mygrub_cfg"
@@ -692,30 +775,46 @@ install_grub() {
692 nvram="--no-nvram"775 nvram="--no-nvram"
693 if [ "$update_nvram" -ge 1 ]; then776 if [ "$update_nvram" -ge 1 ]; then
694 nvram=""777 nvram=""
695 fi 778 fi
696 debug 1 "curtin uefi: installing ${grub_name} to: /boot/efi"779 debug 1 "curtin uefi: installing ${grub_name} to: /boot/efi"
697 chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc '780 chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc '
698 echo "before grub-install efiboot settings"781 echo "before grub-install efiboot settings"
699 efibootmgr || echo "WARN: efibootmgr exited $?"782 efibootmgr -v || echo "WARN: efibootmgr exited $?"
700 dpkg-reconfigure "$1"783 bootid="$4"
701 update-grub784 grubpost=""
785 case $bootid in
786 debian|ubuntu)
787 grubcmd="grub-install"
788 dpkg-reconfigure "$1"
789 update-grub
790 ;;
791 centos|redhat|rhel)
792 grubcmd="grub2-install"
793 grubpost="grub2-mkconfig -o /boot/grub2/grub.cfg"
794 ;;
795 *)
796 echo "Unsupported OS: $bootid" 1>&2
797 exit 1
798 ;;
799 esac
702 # grub-install in 12.04 does not contain --no-nvram, --target,800 # grub-install in 12.04 does not contain --no-nvram, --target,
703 # or --efi-directory801 # or --efi-directory
704 target="--target=$2"802 target="--target=$2"
705 no_nvram="$3"803 no_nvram="$3"
706 efi_dir="--efi-directory=/boot/efi"804 efi_dir="--efi-directory=/boot/efi"
707 gi_out=$(grub-install --help 2>&1)805 gi_out=$($grubcmd --help 2>&1)
708 echo "$gi_out" | grep -q -- "$no_nvram" || no_nvram=""806 echo "$gi_out" | grep -q -- "$no_nvram" || no_nvram=""
709 echo "$gi_out" | grep -q -- "--target" || target=""807 echo "$gi_out" | grep -q -- "--target" || target=""
710 echo "$gi_out" | grep -q -- "--efi-directory" || efi_dir=""808 echo "$gi_out" | grep -q -- "--efi-directory" || efi_dir=""
711 grub-install $target $efi_dir \809 $grubcmd $target $efi_dir \
712 --bootloader-id=ubuntu --recheck $no_nvram' -- \810 --bootloader-id=$bootid --recheck $no_nvram
713 "${grub_name}" "${grub_target}" "$nvram" </dev/null ||811 [ -z "$grubpost" ] || $grubpost;' \
812 -- "${grub_name}" "${grub_target}" "$nvram" "$os_variant" </dev/null ||
714 { error "failed to install grub!"; return 1; }813 { error "failed to install grub!"; return 1; }
715814
716 chroot "$mp" sh -exc '815 chroot "$mp" sh -exc '
717 echo "after grub-install efiboot settings"816 echo "after grub-install efiboot settings"
718 efibootmgr || echo "WARN: efibootmgr exited $?"817 efibootmgr -v || echo "WARN: efibootmgr exited $?"
719 ' -- </dev/null ||818 ' -- </dev/null ||
720 { error "failed to list efi boot entries!"; return 1; }819 { error "failed to list efi boot entries!"; return 1; }
721 else820 else
@@ -728,10 +827,32 @@ install_grub() {
728 debug 1 "curtin non-uefi: installing ${grub_name} to: ${grubdevs[*]}"827 debug 1 "curtin non-uefi: installing ${grub_name} to: ${grubdevs[*]}"
729 chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc '828 chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -exc '
730 pkg=$1; shift;829 pkg=$1; shift;
731 dpkg-reconfigure "$pkg"830 bootid=$1; shift;
732 update-grub831 bootver=$1; shift;
733 for d in "$@"; do grub-install "$d" || exit; done' \832 grubpost=""
734 -- "${grub_name}" "${grubdevs[@]}" </dev/null ||833 case $bootid in
834 debian|ubuntu)
835 grubcmd="grub-install"
836 dpkg-reconfigure "$pkg"
837 update-grub
838 ;;
839 centos|redhat|rhel)
840 case $bootver in
841 6) grubcmd="grub-install";;
842 7) grubcmd="grub2-install"
843 grubpost="grub2-mkconfig -o /boot/grub2/grub.cfg";;
844 esac
845 ;;
846 *)
847 echo "Unsupported OS: $bootid" 1>&2
848 exit 1
849 ;;
850 esac
851 for d in "$@"; do
852 echo $grubcmd "$d";
853 $grubcmd "$d" || exit; done
854 [ -z "$grubpost" ] || $grubpost;' \
855 -- "${grub_name}" "${os_variant}" "${rhel_ver}" "${grubdevs[@]}" </dev/null ||
735 { error "failed to install grub!"; return 1; }856 { error "failed to install grub!"; return 1; }
736 fi857 fi
737858
diff --git a/tests/unittests/test_apt_custom_sources_list.py b/tests/unittests/test_apt_custom_sources_list.py
index 5567dd5..a427ae9 100644
--- a/tests/unittests/test_apt_custom_sources_list.py
+++ b/tests/unittests/test_apt_custom_sources_list.py
@@ -11,6 +11,8 @@ from mock import call
11import textwrap11import textwrap
12import yaml12import yaml
1313
14from curtin import distro
15from curtin import paths
14from curtin import util16from curtin import util
15from curtin.commands import apt_config17from curtin.commands import apt_config
16from .helpers import CiTestCase18from .helpers import CiTestCase
@@ -106,7 +108,7 @@ class TestAptSourceConfigSourceList(CiTestCase):
106 # make test independent to executing system108 # make test independent to executing system
107 with mock.patch.object(util, 'load_file',109 with mock.patch.object(util, 'load_file',
108 return_value=MOCKED_APT_SRC_LIST):110 return_value=MOCKED_APT_SRC_LIST):
109 with mock.patch.object(util, 'lsb_release',111 with mock.patch.object(distro, 'lsb_release',
110 return_value={'codename':112 return_value={'codename':
111 'fakerel'}):113 'fakerel'}):
112 apt_config.handle_apt(cfg, TARGET)114 apt_config.handle_apt(cfg, TARGET)
@@ -115,10 +117,10 @@ class TestAptSourceConfigSourceList(CiTestCase):
115117
116 cloudfile = '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg'118 cloudfile = '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg'
117 cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1)119 cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1)
118 calls = [call(util.target_path(TARGET, '/etc/apt/sources.list'),120 calls = [call(paths.target_path(TARGET, '/etc/apt/sources.list'),
119 expected,121 expected,
120 mode=0o644),122 mode=0o644),
121 call(util.target_path(TARGET, cloudfile),123 call(paths.target_path(TARGET, cloudfile),
122 cloudconf,124 cloudconf,
123 mode=0o644)]125 mode=0o644)]
124 mockwrite.assert_has_calls(calls)126 mockwrite.assert_has_calls(calls)
@@ -147,19 +149,19 @@ class TestAptSourceConfigSourceList(CiTestCase):
147 arch = util.get_architecture()149 arch = util.get_architecture()
148 # would fail inside the unittest context150 # would fail inside the unittest context
149 with mock.patch.object(util, 'get_architecture', return_value=arch):151 with mock.patch.object(util, 'get_architecture', return_value=arch):
150 with mock.patch.object(util, 'lsb_release',152 with mock.patch.object(distro, 'lsb_release',
151 return_value={'codename': 'fakerel'}):153 return_value={'codename': 'fakerel'}):
152 apt_config.handle_apt(cfg, target)154 apt_config.handle_apt(cfg, target)
153155
154 self.assertEqual(156 self.assertEqual(
155 EXPECTED_CONVERTED_CONTENT,157 EXPECTED_CONVERTED_CONTENT,
156 util.load_file(util.target_path(target, "/etc/apt/sources.list")))158 util.load_file(paths.target_path(target, "/etc/apt/sources.list")))
157 cloudfile = util.target_path(159 cloudfile = paths.target_path(
158 target, '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg')160 target, '/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg')
159 self.assertEqual({'apt_preserve_sources_list': True},161 self.assertEqual({'apt_preserve_sources_list': True},
160 yaml.load(util.load_file(cloudfile)))162 yaml.load(util.load_file(cloudfile)))
161163
162 @mock.patch("curtin.util.lsb_release")164 @mock.patch("curtin.distro.lsb_release")
163 @mock.patch("curtin.util.get_architecture", return_value="amd64")165 @mock.patch("curtin.util.get_architecture", return_value="amd64")
164 def test_trusty_source_lists(self, m_get_arch, m_lsb_release):166 def test_trusty_source_lists(self, m_get_arch, m_lsb_release):
165 """Support mirror equivalency with and without trailing /.167 """Support mirror equivalency with and without trailing /.
@@ -199,7 +201,7 @@ class TestAptSourceConfigSourceList(CiTestCase):
199201
200 release = 'trusty'202 release = 'trusty'
201 comps = 'main universe multiverse restricted'203 comps = 'main universe multiverse restricted'
202 easl = util.target_path(target, 'etc/apt/sources.list')204 easl = paths.target_path(target, 'etc/apt/sources.list')
203205
204 orig_content = tmpl.format(206 orig_content = tmpl.format(
205 mirror=orig_primary, security=orig_security,207 mirror=orig_primary, security=orig_security,
diff --git a/tests/unittests/test_apt_source.py b/tests/unittests/test_apt_source.py
index 2ede986..353cdf8 100644
--- a/tests/unittests/test_apt_source.py
+++ b/tests/unittests/test_apt_source.py
@@ -12,8 +12,9 @@ import socket
12import mock12import mock
13from mock import call13from mock import call
1414
15from curtin import util15from curtin import distro
16from curtin import gpg16from curtin import gpg
17from curtin import util
17from curtin.commands import apt_config18from curtin.commands import apt_config
18from .helpers import CiTestCase19from .helpers import CiTestCase
1920
@@ -77,7 +78,7 @@ class TestAptSourceConfig(CiTestCase):
7778
78 @staticmethod79 @staticmethod
79 def _add_apt_sources(*args, **kwargs):80 def _add_apt_sources(*args, **kwargs):
80 with mock.patch.object(util, 'apt_update'):81 with mock.patch.object(distro, 'apt_update'):
81 apt_config.add_apt_sources(*args, **kwargs)82 apt_config.add_apt_sources(*args, **kwargs)
8283
83 @staticmethod84 @staticmethod
@@ -86,7 +87,7 @@ class TestAptSourceConfig(CiTestCase):
86 Get the most basic default mrror and release info to be used in tests87 Get the most basic default mrror and release info to be used in tests
87 """88 """
88 params = {}89 params = {}
89 params['RELEASE'] = util.lsb_release()['codename']90 params['RELEASE'] = distro.lsb_release()['codename']
90 arch = util.get_architecture()91 arch = util.get_architecture()
91 params['MIRROR'] = apt_config.get_default_mirrors(arch)["PRIMARY"]92 params['MIRROR'] = apt_config.get_default_mirrors(arch)["PRIMARY"]
92 return params93 return params
@@ -472,7 +473,7 @@ class TestAptSourceConfig(CiTestCase):
472 'uri':473 'uri':
473 'http://testsec.ubuntu.com/%s/' % component}]}474 'http://testsec.ubuntu.com/%s/' % component}]}
474 post = ("%s_dists_%s-updates_InRelease" %475 post = ("%s_dists_%s-updates_InRelease" %
475 (component, util.lsb_release()['codename']))476 (component, distro.lsb_release()['codename']))
476 fromfn = ("%s/%s_%s" % (pre, archive, post))477 fromfn = ("%s/%s_%s" % (pre, archive, post))
477 tofn = ("%s/test.ubuntu.com_%s" % (pre, post))478 tofn = ("%s/test.ubuntu.com_%s" % (pre, post))
478479
@@ -937,7 +938,7 @@ class TestDebconfSelections(CiTestCase):
937 m_set_sel.assert_not_called()938 m_set_sel.assert_not_called()
938939
939 @mock.patch("curtin.commands.apt_config.debconf_set_selections")940 @mock.patch("curtin.commands.apt_config.debconf_set_selections")
940 @mock.patch("curtin.commands.apt_config.util.get_installed_packages")941 @mock.patch("curtin.commands.apt_config.distro.get_installed_packages")
941 def test_set_sel_call_has_expected_input(self, m_get_inst, m_set_sel):942 def test_set_sel_call_has_expected_input(self, m_get_inst, m_set_sel):
942 data = {943 data = {
943 'set1': 'pkga pkga/q1 mybool false',944 'set1': 'pkga pkga/q1 mybool false',
@@ -960,7 +961,7 @@ class TestDebconfSelections(CiTestCase):
960961
961 @mock.patch("curtin.commands.apt_config.dpkg_reconfigure")962 @mock.patch("curtin.commands.apt_config.dpkg_reconfigure")
962 @mock.patch("curtin.commands.apt_config.debconf_set_selections")963 @mock.patch("curtin.commands.apt_config.debconf_set_selections")
963 @mock.patch("curtin.commands.apt_config.util.get_installed_packages")964 @mock.patch("curtin.commands.apt_config.distro.get_installed_packages")
964 def test_reconfigure_if_intersection(self, m_get_inst, m_set_sel,965 def test_reconfigure_if_intersection(self, m_get_inst, m_set_sel,
965 m_dpkg_r):966 m_dpkg_r):
966 data = {967 data = {
@@ -985,7 +986,7 @@ class TestDebconfSelections(CiTestCase):
985986
986 @mock.patch("curtin.commands.apt_config.dpkg_reconfigure")987 @mock.patch("curtin.commands.apt_config.dpkg_reconfigure")
987 @mock.patch("curtin.commands.apt_config.debconf_set_selections")988 @mock.patch("curtin.commands.apt_config.debconf_set_selections")
988 @mock.patch("curtin.commands.apt_config.util.get_installed_packages")989 @mock.patch("curtin.commands.apt_config.distro.get_installed_packages")
989 def test_reconfigure_if_no_intersection(self, m_get_inst, m_set_sel,990 def test_reconfigure_if_no_intersection(self, m_get_inst, m_set_sel,
990 m_dpkg_r):991 m_dpkg_r):
991 data = {'set1': 'pkga pkga/q1 mybool false'}992 data = {'set1': 'pkga pkga/q1 mybool false'}
diff --git a/tests/unittests/test_block_iscsi.py b/tests/unittests/test_block_iscsi.py
index afaf1f6..f8ef5d8 100644
--- a/tests/unittests/test_block_iscsi.py
+++ b/tests/unittests/test_block_iscsi.py
@@ -588,6 +588,13 @@ class TestBlockIscsiDiskFromConfig(CiTestCase):
588 # utilize IscsiDisk str method for equality check588 # utilize IscsiDisk str method for equality check
589 self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk))589 self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk))
590590
591 # test with cfg.get('storage') since caller may already have
592 # grabbed the 'storage' value from the curtin config
593 iscsi_disk = iscsi.get_iscsi_disks_from_config(
594 cfg.get('storage')).pop()
595 # utilize IscsiDisk str method for equality check
596 self.assertEqual(str(expected_iscsi_disk), str(iscsi_disk))
597
591 def test_parse_iscsi_disk_from_config_no_iscsi(self):598 def test_parse_iscsi_disk_from_config_no_iscsi(self):
592 """Test parsing storage config with no iscsi disks included"""599 """Test parsing storage config with no iscsi disks included"""
593 cfg = {600 cfg = {
diff --git a/tests/unittests/test_block_lvm.py b/tests/unittests/test_block_lvm.py
index 22fb064..c92c1ec 100644
--- a/tests/unittests/test_block_lvm.py
+++ b/tests/unittests/test_block_lvm.py
@@ -73,7 +73,8 @@ class TestBlockLvm(CiTestCase):
7373
74 @mock.patch('curtin.block.lvm.lvmetad_running')74 @mock.patch('curtin.block.lvm.lvmetad_running')
75 @mock.patch('curtin.block.lvm.util')75 @mock.patch('curtin.block.lvm.util')
76 def test_lvm_scan(self, mock_util, mock_lvmetad):76 @mock.patch('curtin.block.lvm.distro')
77 def test_lvm_scan(self, mock_distro, mock_util, mock_lvmetad):
77 """check that lvm_scan formats commands correctly for each release"""78 """check that lvm_scan formats commands correctly for each release"""
78 cmds = [['pvscan'], ['vgscan', '--mknodes']]79 cmds = [['pvscan'], ['vgscan', '--mknodes']]
79 for (count, (codename, lvmetad_status, use_cache)) in enumerate(80 for (count, (codename, lvmetad_status, use_cache)) in enumerate(
@@ -81,7 +82,7 @@ class TestBlockLvm(CiTestCase):
81 ('trusty', False, False),82 ('trusty', False, False),
82 ('xenial', False, False), ('xenial', True, True),83 ('xenial', False, False), ('xenial', True, True),
83 (None, True, True), (None, False, False)]):84 (None, True, True), (None, False, False)]):
84 mock_util.lsb_release.return_value = {'codename': codename}85 mock_distro.lsb_release.return_value = {'codename': codename}
85 mock_lvmetad.return_value = lvmetad_status86 mock_lvmetad.return_value = lvmetad_status
86 lvm.lvm_scan()87 lvm.lvm_scan()
87 expected = [cmd for cmd in cmds]88 expected = [cmd for cmd in cmds]
diff --git a/tests/unittests/test_block_mdadm.py b/tests/unittests/test_block_mdadm.py
index 341e49d..d017930 100644
--- a/tests/unittests/test_block_mdadm.py
+++ b/tests/unittests/test_block_mdadm.py
@@ -15,12 +15,13 @@ class TestBlockMdadmAssemble(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmAssemble, self).setUp()
         self.add_patch('curtin.block.mdadm.util', 'mock_util')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
         self.add_patch('curtin.block.mdadm.udev', 'mock_udev')

         # Common mock settings
         self.mock_valid.return_value = True
-        self.mock_util.lsb_release.return_value = {'codename': 'precise'}
+        self.mock_lsb_release.return_value = {'codename': 'precise'}
         self.mock_util.subp.return_value = ('', '')

     def test_mdadm_assemble_scan(self):
@@ -88,6 +89,7 @@ class TestBlockMdadmCreate(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmCreate, self).setUp()
         self.add_patch('curtin.block.mdadm.util', 'mock_util')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')
         self.add_patch('curtin.block.mdadm.get_holders', 'mock_holders')
         self.add_patch('curtin.block.mdadm.udev.udevadm_settle',
@@ -95,7 +97,7 @@ class TestBlockMdadmCreate(CiTestCase):

         # Common mock settings
         self.mock_valid.return_value = True
-        self.mock_util.lsb_release.return_value = {'codename': 'precise'}
+        self.mock_lsb_release.return_value = {'codename': 'precise'}
         self.mock_holders.return_value = []

     def prepare_mock(self, md_devname, raidlevel, devices, spares):
@@ -236,14 +238,15 @@ class TestBlockMdadmExamine(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmExamine, self).setUp()
         self.add_patch('curtin.block.mdadm.util', 'mock_util')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')

         # Common mock settings
         self.mock_valid.return_value = True
-        self.mock_util.lsb_release.return_value = {'codename': 'precise'}
+        self.mock_lsb_release.return_value = {'codename': 'precise'}

     def test_mdadm_examine_export(self):
-        self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
+        self.mock_lsb_release.return_value = {'codename': 'xenial'}
         self.mock_util.subp.return_value = (
             """
             MD_LEVEL=raid0
@@ -320,7 +323,7 @@ class TestBlockMdadmExamine(CiTestCase):
 class TestBlockMdadmStop(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmStop, self).setUp()
-        self.add_patch('curtin.block.mdadm.util.lsb_release', 'mock_util_lsb')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.util.subp', 'mock_util_subp')
         self.add_patch('curtin.block.mdadm.util.write_file',
                        'mock_util_write_file')
@@ -333,7 +336,7 @@ class TestBlockMdadmStop(CiTestCase):

         # Common mock settings
         self.mock_valid.return_value = True
-        self.mock_util_lsb.return_value = {'codename': 'xenial'}
+        self.mock_lsb_release.return_value = {'codename': 'xenial'}
         self.mock_util_subp.side_effect = iter([
             ("", ""),  # mdadm stop device
         ])
@@ -488,11 +491,12 @@ class TestBlockMdadmRemove(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmRemove, self).setUp()
         self.add_patch('curtin.block.mdadm.util', 'mock_util')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')

         # Common mock settings
         self.mock_valid.return_value = True
-        self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
+        self.mock_lsb_release.return_value = {'codename': 'xenial'}
         self.mock_util.subp.side_effect = [
             ("", ""),  # mdadm remove device
         ]
@@ -514,14 +518,15 @@ class TestBlockMdadmQueryDetail(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmQueryDetail, self).setUp()
         self.add_patch('curtin.block.mdadm.util', 'mock_util')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')

         # Common mock settings
         self.mock_valid.return_value = True
-        self.mock_util.lsb_release.return_value = {'codename': 'precise'}
+        self.mock_lsb_release.return_value = {'codename': 'precise'}

     def test_mdadm_query_detail_export(self):
-        self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
+        self.mock_lsb_release.return_value = {'codename': 'xenial'}
         self.mock_util.subp.return_value = (
             """
             MD_LEVEL=raid1
@@ -592,13 +597,14 @@ class TestBlockMdadmDetailScan(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmDetailScan, self).setUp()
         self.add_patch('curtin.block.mdadm.util', 'mock_util')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')

         # Common mock settings
         self.scan_output = ("ARRAY /dev/md0 metadata=1.2 spares=2 name=0 " +
                             "UUID=b1eae2ff:69b6b02e:1d63bb53:ddfa6e4a")
         self.mock_valid.return_value = True
-        self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
+        self.mock_lsb_release.return_value = {'codename': 'xenial'}
         self.mock_util.subp.side_effect = [
             (self.scan_output, ""),  # mdadm --detail --scan
         ]
@@ -627,10 +633,11 @@ class TestBlockMdadmMdHelpers(CiTestCase):
     def setUp(self):
         super(TestBlockMdadmMdHelpers, self).setUp()
         self.add_patch('curtin.block.mdadm.util', 'mock_util')
+        self.add_patch('curtin.block.mdadm.lsb_release', 'mock_lsb_release')
         self.add_patch('curtin.block.mdadm.is_valid_device', 'mock_valid')

         self.mock_valid.return_value = True
-        self.mock_util.lsb_release.return_value = {'codename': 'xenial'}
+        self.mock_lsb_release.return_value = {'codename': 'xenial'}

     def test_valid_mdname(self):
         mdname = "/dev/md0"
diff --git a/tests/unittests/test_block_mkfs.py b/tests/unittests/test_block_mkfs.py
index c756281..679f85b 100644
--- a/tests/unittests/test_block_mkfs.py
+++ b/tests/unittests/test_block_mkfs.py
@@ -37,11 +37,12 @@ class TestBlockMkfs(CiTestCase):
37 @mock.patch("curtin.block.mkfs.block")37 @mock.patch("curtin.block.mkfs.block")
38 @mock.patch("curtin.block.mkfs.os")38 @mock.patch("curtin.block.mkfs.os")
39 @mock.patch("curtin.block.mkfs.util")39 @mock.patch("curtin.block.mkfs.util")
40 @mock.patch("curtin.block.mkfs.distro.lsb_release")
40 def _run_mkfs_with_config(self, config, expected_cmd, expected_flags,41 def _run_mkfs_with_config(self, config, expected_cmd, expected_flags,
41 mock_util, mock_os, mock_block,42 mock_lsb_release, mock_util, mock_os, mock_block,
42 release="wily", strict=False):43 release="wily", strict=False):
43 # Pretend we are on wily as there are no known edge cases for it44 # Pretend we are on wily as there are no known edge cases for it
44 mock_util.lsb_release.return_value = {"codename": release}45 mock_lsb_release.return_value = {"codename": release}
45 mock_os.path.exists.return_value = True46 mock_os.path.exists.return_value = True
46 mock_block.get_blockdev_sector_size.return_value = (512, 512)47 mock_block.get_blockdev_sector_size.return_value = (512, 512)
4748
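Note: the new @mock.patch decorator above is added closest to the function definition, which is why mock_lsb_release becomes the first injected argument. Stacked mock.patch decorators are applied bottom-up, so the innermost decorator supplies the left-most mock parameter. A self-contained illustration of that ordering (module/attribute names here are arbitrary standard-library ones, not curtin's):

    import mock  # the standalone 'mock' package used throughout these tests

    @mock.patch('os.getppid')   # outermost decorator -> last mock argument
    @mock.patch('os.getpid')
    @mock.patch('os.getcwd')    # innermost decorator -> first mock argument
    def show_order(mock_getcwd, mock_getpid, mock_getppid):
        # Each mock replaces the attribute named in its decorator for the
        # duration of the call.
        return mock_getcwd, mock_getpid, mock_getppid

    print(show_order())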
diff --git a/tests/unittests/test_block_zfs.py b/tests/unittests/test_block_zfs.py
index c18f6a3..9781946 100644
--- a/tests/unittests/test_block_zfs.py
+++ b/tests/unittests/test_block_zfs.py
@@ -384,7 +384,7 @@ class TestBlockZfsAssertZfsSupported(CiTestCase):
         super(TestBlockZfsAssertZfsSupported, self).setUp()
         self.add_patch('curtin.block.zfs.util.subp', 'mock_subp')
         self.add_patch('curtin.block.zfs.util.get_platform_arch', 'mock_arch')
-        self.add_patch('curtin.block.zfs.util.lsb_release', 'mock_release')
+        self.add_patch('curtin.block.zfs.distro.lsb_release', 'mock_release')
         self.add_patch('curtin.block.zfs.util.which', 'mock_which')
         self.add_patch('curtin.block.zfs.get_supported_filesystems',
                        'mock_supfs')
@@ -426,46 +426,52 @@ class TestAssertZfsSupported(CiTestCase):
         super(TestAssertZfsSupported, self).setUp()

     @mock.patch('curtin.block.zfs.get_supported_filesystems')
+    @mock.patch('curtin.block.zfs.distro')
     @mock.patch('curtin.block.zfs.util')
-    def test_zfs_assert_supported_returns_true(self, mock_util, mock_supfs):
+    def test_zfs_assert_supported_returns_true(self, mock_util, mock_distro,
+                                                mock_supfs):
         """zfs_assert_supported returns True on supported platforms"""
         mock_util.get_platform_arch.return_value = 'amd64'
-        mock_util.lsb_release.return_value = {'codename': 'bionic'}
+        mock_distro.lsb_release.return_value = {'codename': 'bionic'}
         mock_util.subp.return_value = ("", "")
         mock_supfs.return_value = ['zfs']
         mock_util.which.side_effect = iter(['/wark/zpool', '/wark/zfs'])

         self.assertNotIn(mock_util.get_platform_arch.return_value,
                          zfs.ZFS_UNSUPPORTED_ARCHES)
-        self.assertNotIn(mock_util.lsb_release.return_value['codename'],
+        self.assertNotIn(mock_distro.lsb_release.return_value['codename'],
                          zfs.ZFS_UNSUPPORTED_RELEASES)
         self.assertTrue(zfs.zfs_supported())

+    @mock.patch('curtin.block.zfs.distro')
     @mock.patch('curtin.block.zfs.util')
     def test_zfs_assert_supported_raises_exception_on_bad_arch(self,
-                                                               mock_util):
+                                                               mock_util,
+                                                               mock_distro):
         """zfs_assert_supported raises RuntimeError on unspported arches"""
-        mock_util.lsb_release.return_value = {'codename': 'bionic'}
+        mock_distro.lsb_release.return_value = {'codename': 'bionic'}
         mock_util.subp.return_value = ("", "")
         for arch in zfs.ZFS_UNSUPPORTED_ARCHES:
             mock_util.get_platform_arch.return_value = arch
             with self.assertRaises(RuntimeError):
                 zfs.zfs_assert_supported()

+    @mock.patch('curtin.block.zfs.distro')
     @mock.patch('curtin.block.zfs.util')
-    def test_zfs_assert_supported_raises_exc_on_bad_releases(self, mock_util):
+    def test_zfs_assert_supported_raises_exc_on_bad_releases(self, mock_util,
+                                                              mock_distro):
         """zfs_assert_supported raises RuntimeError on unspported releases"""
         mock_util.get_platform_arch.return_value = 'amd64'
         mock_util.subp.return_value = ("", "")
         for release in zfs.ZFS_UNSUPPORTED_RELEASES:
-            mock_util.lsb_release.return_value = {'codename': release}
+            mock_distro.lsb_release.return_value = {'codename': release}
             with self.assertRaises(RuntimeError):
                 zfs.zfs_assert_supported()

     @mock.patch('curtin.block.zfs.util.subprocess.Popen')
     @mock.patch('curtin.block.zfs.util.is_kmod_loaded')
     @mock.patch('curtin.block.zfs.get_supported_filesystems')
-    @mock.patch('curtin.block.zfs.util.lsb_release')
+    @mock.patch('curtin.block.zfs.distro.lsb_release')
     @mock.patch('curtin.block.zfs.util.get_platform_arch')
     def test_zfs_assert_supported_raises_exc_on_missing_module(self,
                                                                 m_arch,
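Note: the zfs tests above assert a support gate built from the platform architecture and the release codename, with the codename now sourced from curtin.distro.lsb_release(). A sketch of that gate, under the assumption that the curtin tree is importable; only the constant and function names come from the diff, the body is not curtin's actual implementation:

    from curtin import distro, util
    from curtin.block import zfs

    def zfs_looks_supported():
        # Mirrors the assertions in TestAssertZfsSupported: unsupported
        # arches and unsupported release codenames both disqualify ZFS.
        if util.get_platform_arch() in zfs.ZFS_UNSUPPORTED_ARCHES:
            return False
        if distro.lsb_release().get('codename') in zfs.ZFS_UNSUPPORTED_RELEASES:
            return False
        return True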
diff --git a/tests/unittests/test_commands_apply_net.py b/tests/unittests/test_commands_apply_net.py
index a55ab17..04b7f2e 100644
--- a/tests/unittests/test_commands_apply_net.py
+++ b/tests/unittests/test_commands_apply_net.py
@@ -5,7 +5,7 @@ import copy
 import os

 from curtin.commands import apply_net
-from curtin import util
+from curtin import paths
 from .helpers import CiTestCase


@@ -153,8 +153,8 @@ class TestApplyNetPatchIfupdown(CiTestCase):
                                    prehookfn=prehookfn,
                                    posthookfn=posthookfn)

-        precfg = util.target_path(target, path=prehookfn)
-        postcfg = util.target_path(target, path=posthookfn)
+        precfg = paths.target_path(target, path=prehookfn)
+        postcfg = paths.target_path(target, path=posthookfn)
         precontents = apply_net.IFUPDOWN_IPV6_MTU_PRE_HOOK
         postcontents = apply_net.IFUPDOWN_IPV6_MTU_POST_HOOK

@@ -231,7 +231,7 @@ class TestApplyNetPatchIpv6Priv(CiTestCase):

         apply_net._disable_ipv6_privacy_extensions(target)

-        cfg = util.target_path(target, path=path)
+        cfg = paths.target_path(target, path=path)
         mock_write.assert_called_with(cfg, expected_ipv6_priv_contents)

     @patch('curtin.util.load_file')
@@ -259,7 +259,7 @@ class TestApplyNetPatchIpv6Priv(CiTestCase):
         apply_net._disable_ipv6_privacy_extensions(target, path=path)

         # source file not found
-        cfg = util.target_path(target, path)
+        cfg = paths.target_path(target, path)
         mock_ospath.exists.assert_called_with(cfg)
         self.assertEqual(0, mock_load.call_count)

@@ -272,7 +272,7 @@ class TestApplyNetRemoveLegacyEth0(CiTestCase):
     def test_remove_legacy_eth0(self, mock_ospath, mock_load, mock_del):
         target = 'mytarget'
         path = 'eth0.cfg'
-        cfg = util.target_path(target, path)
+        cfg = paths.target_path(target, path)
         legacy_eth0_contents = (
             'auto eth0\n'
             'iface eth0 inet dhcp')
@@ -330,7 +330,7 @@ class TestApplyNetRemoveLegacyEth0(CiTestCase):
         apply_net._maybe_remove_legacy_eth0(target, path)

         # source file not found
-        cfg = util.target_path(target, path)
+        cfg = paths.target_path(target, path)
         mock_ospath.exists.assert_called_with(cfg)
         self.assertEqual(0, mock_load.call_count)
         self.assertEqual(0, mock_del.call_count)
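Note: target_path() has moved from curtin.util to the new curtin.paths module, which is all the apply_net test changes above amount to. Its behaviour is unchanged and is pinned by the TestTargetPath cases further down in this diff; a short usage reminder:

    from curtin import paths

    # Expected results, as asserted by TestTargetPath in test_util.py below:
    paths.target_path("/my/target/", "/my/path")   # -> "/my/target/my/path"
    paths.target_path(None, "/etc/passwd")         # -> "/etc/passwd"
    paths.target_path("/")                         # -> "/"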
diff --git a/tests/unittests/test_commands_block_meta.py b/tests/unittests/test_commands_block_meta.py
index a6a0b13..e70d6ed 100644
--- a/tests/unittests/test_commands_block_meta.py
+++ b/tests/unittests/test_commands_block_meta.py
@@ -7,7 +7,7 @@ from mock import patch, call
 import os

 from curtin.commands import block_meta
-from curtin import util
+from curtin import paths, util
 from .helpers import CiTestCase


@@ -688,8 +688,9 @@ class TestFstabData(CiTestCase):
         if target is None:
             target = self.tmp_dir()

-        expected = [a if a != "_T_MP" else util.target_path(target, fdata.path)
-                    for a in expected]
+        expected = [
+            a if a != "_T_MP" else paths.target_path(target, fdata.path)
+            for a in expected]
         with patch("curtin.util.subp") as m_subp:
             block_meta.mount_fstab_data(fdata, target=target)

diff --git a/tests/unittests/test_curthooks.py b/tests/unittests/test_curthooks.py
index a8275c7..8fd7933 100644
--- a/tests/unittests/test_curthooks.py
+++ b/tests/unittests/test_curthooks.py
@@ -4,6 +4,7 @@ import os
 from mock import call, patch, MagicMock

 from curtin.commands import curthooks
+from curtin import distro
 from curtin import util
 from curtin import config
 from curtin.reporter import events
@@ -47,8 +48,8 @@ class TestGetFlashKernelPkgs(CiTestCase):
 class TestCurthooksInstallKernel(CiTestCase):
     def setUp(self):
         super(TestCurthooksInstallKernel, self).setUp()
-        self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg')
-        self.add_patch('curtin.util.install_packages', 'mock_instpkg')
+        self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg')
+        self.add_patch('curtin.distro.install_packages', 'mock_instpkg')
         self.add_patch(
             'curtin.commands.curthooks.get_flash_kernel_pkgs',
             'mock_get_flash_kernel_pkgs')
@@ -122,12 +123,21 @@ class TestInstallMissingPkgs(CiTestCase):
     def setUp(self):
         super(TestInstallMissingPkgs, self).setUp()
         self.add_patch('platform.machine', 'mock_machine')
-        self.add_patch('curtin.util.get_installed_packages',
+        self.add_patch('curtin.util.get_architecture', 'mock_arch')
+        self.add_patch('curtin.distro.get_installed_packages',
                        'mock_get_installed_packages')
         self.add_patch('curtin.util.load_command_environment',
                        'mock_load_cmd_evn')
         self.add_patch('curtin.util.which', 'mock_which')
-        self.add_patch('curtin.util.install_packages', 'mock_install_packages')
+        self.add_patch('curtin.util.is_uefi_bootable', 'mock_uefi')
+        self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg')
+        self.add_patch('curtin.distro.install_packages',
+                       'mock_install_packages')
+        self.add_patch('curtin.distro.get_osfamily', 'mock_osfamily')
+        self.distro_family = distro.DISTROS.debian
+        self.mock_osfamily.return_value = self.distro_family
+        self.mock_uefi.return_value = False
+        self.mock_haspkg.return_value = False

     @patch.object(events, 'ReportEventStack')
     def test_install_packages_s390x(self, mock_events):
@@ -137,8 +147,8 @@ class TestInstallMissingPkgs(CiTestCase):
         target = "not-a-real-target"
         cfg = {}
         curthooks.install_missing_packages(cfg, target=target)
-        self.mock_install_packages.assert_called_with(['s390-tools'],
-                                                      target=target)
+        self.mock_install_packages.assert_called_with(
+            ['s390-tools'], target=target, osfamily=self.distro_family)

     @patch.object(events, 'ReportEventStack')
     def test_install_packages_s390x_has_zipl(self, mock_events):
@@ -159,6 +169,50 @@ class TestInstallMissingPkgs(CiTestCase):
         curthooks.install_missing_packages(cfg, target=target)
         self.assertEqual([], self.mock_install_packages.call_args_list)

+    @patch.object(events, 'ReportEventStack')
+    def test_install_packages_on_uefi_amd64_shim_signed(self, mock_events):
+        arch = 'amd64'
+        self.mock_arch.return_value = arch
+        self.mock_machine.return_value = 'x86_64'
+        expected_pkgs = ['grub-efi-%s' % arch,
+                         'grub-efi-%s-signed' % arch,
+                         'shim-signed']
+        self.mock_machine.return_value = 'x86_64'
+        self.mock_uefi.return_value = True
+        self.mock_haspkg.return_value = True
+        target = "not-a-real-target"
+        cfg = {}
+        curthooks.install_missing_packages(cfg, target=target)
+        self.mock_install_packages.assert_called_with(
+            expected_pkgs, target=target, osfamily=self.distro_family)
+
+    @patch.object(events, 'ReportEventStack')
+    def test_install_packages_on_uefi_i386_noshim_nosigned(self, mock_events):
+        arch = 'i386'
+        self.mock_arch.return_value = arch
+        self.mock_machine.return_value = 'i386'
+        expected_pkgs = ['grub-efi-%s' % arch]
+        self.mock_machine.return_value = 'i686'
+        self.mock_uefi.return_value = True
+        target = "not-a-real-target"
+        cfg = {}
+        curthooks.install_missing_packages(cfg, target=target)
+        self.mock_install_packages.assert_called_with(
+            expected_pkgs, target=target, osfamily=self.distro_family)
+
+    @patch.object(events, 'ReportEventStack')
+    def test_install_packages_on_uefi_arm64_nosign_noshim(self, mock_events):
+        arch = 'arm64'
+        self.mock_arch.return_value = arch
+        self.mock_machine.return_value = 'aarch64'
+        expected_pkgs = ['grub-efi-%s' % arch]
+        self.mock_uefi.return_value = True
+        target = "not-a-real-target"
+        cfg = {}
+        curthooks.install_missing_packages(cfg, target=target)
+        self.mock_install_packages.assert_called_with(
+            expected_pkgs, target=target, osfamily=self.distro_family)
+

 class TestSetupZipl(CiTestCase):

@@ -192,7 +246,8 @@ class TestSetupGrub(CiTestCase):
     def setUp(self):
         super(TestSetupGrub, self).setUp()
         self.target = self.tmp_dir()
-        self.add_patch('curtin.util.lsb_release', 'mock_lsb_release')
+        self.distro_family = distro.DISTROS.debian
+        self.add_patch('curtin.distro.lsb_release', 'mock_lsb_release')
         self.mock_lsb_release.return_value = {
             'codename': 'xenial',
         }
@@ -219,11 +274,12 @@ class TestSetupGrub(CiTestCase):
             'grub_install_devices': ['/dev/vdb']
         }
         self.subp_output.append(('', ''))
-        curthooks.setup_grub(cfg, self.target)
+        curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family)
         self.assertEquals(
             ([
                 'sh', '-c', 'exec "$0" "$@" 2>&1',
-                'install-grub', self.target, '/dev/vdb'],),
+                'install-grub', '--os-family=%s' % self.distro_family,
+                self.target, '/dev/vdb'],),
             self.mock_subp.call_args_list[0][0])

     def test_uses_install_devices_in_grubcfg(self):
@@ -233,11 +289,12 @@ class TestSetupGrub(CiTestCase):
             },
         }
         self.subp_output.append(('', ''))
-        curthooks.setup_grub(cfg, self.target)
+        curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family)
         self.assertEquals(
             ([
                 'sh', '-c', 'exec "$0" "$@" 2>&1',
-                'install-grub', self.target, '/dev/vdb'],),
+                'install-grub', '--os-family=%s' % self.distro_family,
+                self.target, '/dev/vdb'],),
             self.mock_subp.call_args_list[0][0])

     def test_uses_grub_install_on_storage_config(self):
@@ -255,11 +312,12 @@ class TestSetupGrub(CiTestCase):
             },
         }
         self.subp_output.append(('', ''))
-        curthooks.setup_grub(cfg, self.target)
+        curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family)
         self.assertEquals(
             ([
                 'sh', '-c', 'exec "$0" "$@" 2>&1',
-                'install-grub', self.target, '/dev/vdb'],),
+                'install-grub', '--os-family=%s' % self.distro_family,
+                self.target, '/dev/vdb'],),
             self.mock_subp.call_args_list[0][0])

     def test_grub_install_installs_to_none_if_install_devices_None(self):
@@ -269,62 +327,17 @@ class TestSetupGrub(CiTestCase):
             },
         }
         self.subp_output.append(('', ''))
-        curthooks.setup_grub(cfg, self.target)
+        curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family)
-        self.assertEquals(
-            ([
-                'sh', '-c', 'exec "$0" "$@" 2>&1',
-                'install-grub', self.target, 'none'],),
-            self.mock_subp.call_args_list[0][0])
-
-    def test_grub_install_uefi_installs_signed_packages_for_amd64(self):
-        self.add_patch('curtin.util.install_packages', 'mock_install')
-        self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg')
-        self.mock_is_uefi_bootable.return_value = True
-        cfg = {
-            'grub': {
-                'install_devices': ['/dev/vdb'],
-                'update_nvram': False,
-            },
-        }
-        self.subp_output.append(('', ''))
-        self.mock_arch.return_value = 'amd64'
-        self.mock_haspkg.return_value = True
-        curthooks.setup_grub(cfg, self.target)
-        self.assertEquals(
-            (['grub-efi-amd64', 'grub-efi-amd64-signed', 'shim-signed'],),
-            self.mock_install.call_args_list[0][0])
         self.assertEquals(
             ([
                 'sh', '-c', 'exec "$0" "$@" 2>&1',
-                'install-grub', '--uefi', self.target, '/dev/vdb'],),
-            self.mock_subp.call_args_list[0][0])
-
-    def test_grub_install_uefi_installs_packages_for_arm64(self):
-        self.add_patch('curtin.util.install_packages', 'mock_install')
-        self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg')
-        self.mock_is_uefi_bootable.return_value = True
-        cfg = {
-            'grub': {
-                'install_devices': ['/dev/vdb'],
-                'update_nvram': False,
-            },
-        }
-        self.subp_output.append(('', ''))
-        self.mock_arch.return_value = 'arm64'
-        self.mock_haspkg.return_value = False
-        curthooks.setup_grub(cfg, self.target)
-        self.assertEquals(
-            (['grub-efi-arm64'],),
-            self.mock_install.call_args_list[0][0])
-        self.assertEquals(
-            ([
-                'sh', '-c', 'exec "$0" "$@" 2>&1',
-                'install-grub', '--uefi', self.target, '/dev/vdb'],),
+                'install-grub', '--os-family=%s' % self.distro_family,
+                self.target, 'none'],),
             self.mock_subp.call_args_list[0][0])

     def test_grub_install_uefi_updates_nvram_skips_remove_and_reorder(self):
-        self.add_patch('curtin.util.install_packages', 'mock_install')
-        self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg')
+        self.add_patch('curtin.distro.install_packages', 'mock_install')
+        self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg')
         self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr')
         self.mock_is_uefi_bootable.return_value = True
         cfg = {
@@ -347,17 +360,18 @@ class TestSetupGrub(CiTestCase):
                 }
             }
         }
-        curthooks.setup_grub(cfg, self.target)
+        curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family)
         self.assertEquals(
             ([
                 'sh', '-c', 'exec "$0" "$@" 2>&1',
                 'install-grub', '--uefi', '--update-nvram',
+                '--os-family=%s' % self.distro_family,
                 self.target, '/dev/vdb'],),
             self.mock_subp.call_args_list[0][0])

     def test_grub_install_uefi_updates_nvram_removes_old_loaders(self):
-        self.add_patch('curtin.util.install_packages', 'mock_install')
-        self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg')
+        self.add_patch('curtin.distro.install_packages', 'mock_install')
+        self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg')
         self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr')
         self.mock_is_uefi_bootable.return_value = True
         cfg = {
@@ -392,7 +406,7 @@ class TestSetupGrub(CiTestCase):
         self.in_chroot_subp_output.append(('', ''))
         self.in_chroot_subp_output.append(('', ''))
         self.mock_haspkg.return_value = False
-        curthooks.setup_grub(cfg, self.target)
+        curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family)
         self.assertEquals(
             ['efibootmgr', '-B', '-b'],
             self.mock_in_chroot_subp.call_args_list[0][0][0][:3])
@@ -406,8 +420,8 @@ class TestSetupGrub(CiTestCase):
                 self.mock_in_chroot_subp.call_args_list[1][0][0][3]]))

     def test_grub_install_uefi_updates_nvram_reorders_loaders(self):
-        self.add_patch('curtin.util.install_packages', 'mock_install')
-        self.add_patch('curtin.util.has_pkg_available', 'mock_haspkg')
+        self.add_patch('curtin.distro.install_packages', 'mock_install')
+        self.add_patch('curtin.distro.has_pkg_available', 'mock_haspkg')
         self.add_patch('curtin.util.get_efibootmgr', 'mock_efibootmgr')
         self.mock_is_uefi_bootable.return_value = True
         cfg = {
@@ -436,7 +450,7 @@ class TestSetupGrub(CiTestCase):
         }
         self.in_chroot_subp_output.append(('', ''))
         self.mock_haspkg.return_value = False
-        curthooks.setup_grub(cfg, self.target)
+        curthooks.setup_grub(cfg, self.target, osfamily=self.distro_family)
         self.assertEquals(
             (['efibootmgr', '-o', '0001,0000'],),
             self.mock_in_chroot_subp.call_args_list[0][0])
@@ -453,11 +467,11 @@ class TestUbuntuCoreHooks(CiTestCase):
                                         'var/lib/snapd')
         util.ensure_dir(ubuntu_core_path)
         self.assertTrue(os.path.isdir(ubuntu_core_path))
-        is_core = curthooks.target_is_ubuntu_core(self.target)
+        is_core = distro.is_ubuntu_core(self.target)
         self.assertTrue(is_core)

     def test_target_is_ubuntu_core_no_target(self):
-        is_core = curthooks.target_is_ubuntu_core(self.target)
+        is_core = distro.is_ubuntu_core(self.target)
         self.assertFalse(is_core)

     def test_target_is_ubuntu_core_noncore_target(self):
@@ -465,7 +479,7 @@ class TestUbuntuCoreHooks(CiTestCase):
         non_core_path = os.path.join(self.target, 'curtin')
         util.ensure_dir(non_core_path)
         self.assertTrue(os.path.isdir(non_core_path))
-        is_core = curthooks.target_is_ubuntu_core(self.target)
+        is_core = distro.is_ubuntu_core(self.target)
         self.assertFalse(is_core)

     @patch('curtin.util.write_file')
@@ -736,15 +750,15 @@ class TestDetectRequiredPackages(CiTestCase):
             ({'network': {
                 'version': 2,
                 'items': ('bridge',)}},
-             ('bridge-utils',)),
+             ()),
             ({'network': {
                 'version': 2,
                 'items': ('vlan',)}},
-             ('vlan',)),
+             ()),
             ({'network': {
                 'version': 2,
                 'items': ('vlan', 'bridge')}},
-             ('vlan', 'bridge-utils')),
+             ()),
         ))

     def test_mixed_storage_v1_network_v2_detect(self):
@@ -755,7 +769,7 @@ class TestDetectRequiredPackages(CiTestCase):
             'storage': {
                 'version': 1,
                 'items': ('raid', 'bcache', 'ext4')}},
-             ('vlan', 'bridge-utils', 'mdadm', 'bcache-tools', 'e2fsprogs')),
+             ('mdadm', 'bcache-tools', 'e2fsprogs')),
         ))

     def test_invalid_version_in_config(self):
@@ -782,7 +796,7 @@ class TestCurthooksWriteFiles(CiTestCase):
             dict((cfg[i]['path'], cfg[i]['content']) for i in cfg.keys()),
             dir2dict(tmpd, prefix=tmpd))

-    @patch('curtin.commands.curthooks.futil.target_path')
+    @patch('curtin.commands.curthooks.paths.target_path')
     @patch('curtin.commands.curthooks.futil.write_finfo')
     def test_handle_write_files_finfo(self, mock_write_finfo, mock_tp):
         """ Validate that futils.write_files handles target_path correctly """
@@ -816,6 +830,8 @@ class TestCurthooksPollinate(CiTestCase):
         self.add_patch('curtin.util.write_file', 'mock_write')
         self.add_patch('curtin.commands.curthooks.get_maas_version',
                        'mock_maas_version')
+        self.add_patch('curtin.util.which', 'mock_which')
+        self.mock_which.return_value = '/usr/bin/pollinate'
         self.target = self.tmp_dir()

     def test_handle_pollinate_user_agent_disable(self):
@@ -826,6 +842,15 @@ class TestCurthooksPollinate(CiTestCase):
         self.assertEqual(0, self.mock_maas_version.call_count)
         self.assertEqual(0, self.mock_write.call_count)

+    def test_handle_pollinate_returns_if_no_pollinate_binary(self):
+        """ handle_pollinate_user_agent does nothing if no pollinate binary"""
+        self.mock_which.return_value = None
+        cfg = {'reporting': {'maas': {'endpoint': 'http://127.0.0.1/foo'}}}
+        curthooks.handle_pollinate_user_agent(cfg, self.target)
+        self.assertEqual(0, self.mock_curtin_version.call_count)
+        self.assertEqual(0, self.mock_maas_version.call_count)
+        self.assertEqual(0, self.mock_write.call_count)
+
     def test_handle_pollinate_user_agent_default(self):
         """ handle_pollinate_user_agent checks curtin/maas version by default
         """
diff --git a/tests/unittests/test_distro.py b/tests/unittests/test_distro.py
new file mode 100644
index 0000000..d4e5a1e
--- /dev/null
+++ b/tests/unittests/test_distro.py
@@ -0,0 +1,302 @@
1# This file is part of curtin. See LICENSE file for copyright and license info.
2
3from unittest import skipIf
4import mock
5import sys
6
7from curtin import distro
8from curtin import paths
9from curtin import util
10from .helpers import CiTestCase
11
12
13class TestLsbRelease(CiTestCase):
14
15 def setUp(self):
16 super(TestLsbRelease, self).setUp()
17 self._reset_cache()
18
19 def _reset_cache(self):
20 keys = [k for k in distro._LSB_RELEASE.keys()]
21 for d in keys:
22 del distro._LSB_RELEASE[d]
23
24 @mock.patch("curtin.distro.subp")
25 def test_lsb_release_functional(self, mock_subp):
26 output = '\n'.join([
27 "Distributor ID: Ubuntu",
28 "Description: Ubuntu 14.04.2 LTS",
29 "Release: 14.04",
30 "Codename: trusty",
31 ])
32 rdata = {'id': 'Ubuntu', 'description': 'Ubuntu 14.04.2 LTS',
33 'codename': 'trusty', 'release': '14.04'}
34
35 def fake_subp(cmd, capture=False, target=None):
36 return output, 'No LSB modules are available.'
37
38 mock_subp.side_effect = fake_subp
39 found = distro.lsb_release()
40 mock_subp.assert_called_with(
41 ['lsb_release', '--all'], capture=True, target=None)
42 self.assertEqual(found, rdata)
43
44 @mock.patch("curtin.distro.subp")
45 def test_lsb_release_unavailable(self, mock_subp):
46 def doraise(*args, **kwargs):
47 raise util.ProcessExecutionError("foo")
48 mock_subp.side_effect = doraise
49
50 expected = {k: "UNAVAILABLE" for k in
51 ('id', 'description', 'codename', 'release')}
52 self.assertEqual(distro.lsb_release(), expected)
53
54
55class TestParseDpkgVersion(CiTestCase):
56 """test parse_dpkg_version."""
57
58 def test_none_raises_type_error(self):
59 self.assertRaises(TypeError, distro.parse_dpkg_version, None)
60
61 @skipIf(sys.version_info.major < 3, "python 2 bytes are strings.")
62 def test_bytes_raises_type_error(self):
63 self.assertRaises(TypeError, distro.parse_dpkg_version, b'1.2.3-0')
64
65 def test_simple_native_package_version(self):
66 """dpkg versions must have a -. If not present expect value error."""
67 self.assertEqual(
68 {'major': 2, 'minor': 28, 'micro': 0, 'extra': None,
69 'raw': '2.28', 'upstream': '2.28', 'name': 'germinate',
70 'semantic_version': 22800},
71 distro.parse_dpkg_version('2.28', name='germinate'))
72
73 def test_complex_native_package_version(self):
74 dver = '1.0.106ubuntu2+really1.0.97ubuntu1'
75 self.assertEqual(
76 {'major': 1, 'minor': 0, 'micro': 106,
77 'extra': 'ubuntu2+really1.0.97ubuntu1',
78 'raw': dver, 'upstream': dver, 'name': 'debootstrap',
79 'semantic_version': 100106},
80 distro.parse_dpkg_version(dver, name='debootstrap',
81 semx=(100000, 1000, 1)))
82
83 def test_simple_valid(self):
84 self.assertEqual(
85 {'major': 1, 'minor': 2, 'micro': 3, 'extra': None,
86 'raw': '1.2.3-0', 'upstream': '1.2.3', 'name': 'foo',
87 'semantic_version': 10203},
88 distro.parse_dpkg_version('1.2.3-0', name='foo'))
89
90 def test_simple_valid_with_semx(self):
91 self.assertEqual(
92 {'major': 1, 'minor': 2, 'micro': 3, 'extra': None,
93 'raw': '1.2.3-0', 'upstream': '1.2.3',
94 'semantic_version': 123},
95 distro.parse_dpkg_version('1.2.3-0', semx=(100, 10, 1)))
96
97 def test_upstream_with_hyphen(self):
98 """upstream versions may have a hyphen."""
99 cver = '18.2-14-g6d48d265-0ubuntu1'
100 self.assertEqual(
101 {'major': 18, 'minor': 2, 'micro': 0, 'extra': '-14-g6d48d265',
102 'raw': cver, 'upstream': '18.2-14-g6d48d265',
103 'name': 'cloud-init', 'semantic_version': 180200},
104 distro.parse_dpkg_version(cver, name='cloud-init'))
105
106 def test_upstream_with_plus(self):
107 """multipath tools has a + in it."""
108 mver = '0.5.0+git1.656f8865-5ubuntu2.5'
109 self.assertEqual(
110 {'major': 0, 'minor': 5, 'micro': 0, 'extra': '+git1.656f8865',
111 'raw': mver, 'upstream': '0.5.0+git1.656f8865',
112 'semantic_version': 500},
113 distro.parse_dpkg_version(mver))
114
115
116class TestDistros(CiTestCase):
117
118 def test_distro_names(self):
119 all_distros = list(distro.DISTROS)
120 for distro_name in distro.DISTRO_NAMES:
121 distro_enum = getattr(distro.DISTROS, distro_name)
122 self.assertIn(distro_enum, all_distros)
123
124 def test_distro_names_unknown(self):
125 distro_name = "ImNotADistro"
126 self.assertNotIn(distro_name, distro.DISTRO_NAMES)
127 with self.assertRaises(AttributeError):
128 getattr(distro.DISTROS, distro_name)
129
130 def test_distro_osfamily(self):
131 for variant, family in distro.OS_FAMILIES.items():
132 self.assertNotEqual(variant, family)
133 self.assertIn(variant, distro.DISTROS)
134 for dname in family:
135 self.assertIn(dname, distro.DISTROS)
136
137 def test_distro_osfmaily_identity(self):
138 for family, variants in distro.OS_FAMILIES.items():
139 self.assertIn(family, variants)
140
141 def test_name_to_distro(self):
142 for distro_name in distro.DISTRO_NAMES:
143 dobj = distro.name_to_distro(distro_name)
144 self.assertEqual(dobj, getattr(distro.DISTROS, distro_name))
145
146 def test_name_to_distro_unknown_value(self):
147 with self.assertRaises(ValueError):
148 distro.name_to_distro(None)
149
150 def test_name_to_distro_unknown_attr(self):
151 with self.assertRaises(ValueError):
152 distro.name_to_distro('NotADistro')
153
154 def test_distros_unknown_attr(self):
155 with self.assertRaises(AttributeError):
156 distro.DISTROS.notadistro
157
158 def test_distros_unknown_index(self):
159 with self.assertRaises(IndexError):
160 distro.DISTROS[len(distro.DISTROS)+1]
161
162
163class TestDistroInfo(CiTestCase):
164
165 def setUp(self):
166 super(TestDistroInfo, self).setUp()
167 self.add_patch('curtin.distro.os_release', 'mock_os_release')
168
169 def test_get_distroinfo(self):
170 for distro_name in distro.DISTRO_NAMES:
171 self.mock_os_release.return_value = {'ID': distro_name}
172 variant = distro.name_to_distro(distro_name)
173 family = distro.DISTRO_TO_OSFAMILY[variant]
174 distro_info = distro.get_distroinfo()
175 self.assertEqual(variant, distro_info.variant)
176 self.assertEqual(family, distro_info.family)
177
178 def test_get_distro(self):
179 for distro_name in distro.DISTRO_NAMES:
180 self.mock_os_release.return_value = {'ID': distro_name}
181 variant = distro.name_to_distro(distro_name)
182 distro_obj = distro.get_distro()
183 self.assertEqual(variant, distro_obj)
184
185 def test_get_osfamily(self):
186 for distro_name in distro.DISTRO_NAMES:
187 self.mock_os_release.return_value = {'ID': distro_name}
188 variant = distro.name_to_distro(distro_name)
189 family = distro.DISTRO_TO_OSFAMILY[variant]
190 distro_obj = distro.get_osfamily()
191 self.assertEqual(family, distro_obj)
192
193
194class TestDistroIdentity(CiTestCase):
195
196 def setUp(self):
197 super(TestDistroIdentity, self).setUp()
198 self.add_patch('curtin.distro.os.path.exists', 'mock_os_path')
199
200 def test_is_ubuntu_core(self):
201 for exists in [True, False]:
202 self.mock_os_path.return_value = exists
203 self.assertEqual(exists, distro.is_ubuntu_core())
204 self.mock_os_path.assert_called_with('/system-data/var/lib/snapd')
205
206 def test_is_centos(self):
207 for exists in [True, False]:
208 self.mock_os_path.return_value = exists
209 self.assertEqual(exists, distro.is_centos())
210 self.mock_os_path.assert_called_with('/etc/centos-release')
211
212 def test_is_rhel(self):
213 for exists in [True, False]:
214 self.mock_os_path.return_value = exists
215 self.assertEqual(exists, distro.is_rhel())
216 self.mock_os_path.assert_called_with('/etc/redhat-release')
217
218
219class TestYumInstall(CiTestCase):
220
221 @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a)
222 @mock.patch('curtin.util.subp')
223 def test_yum_install(self, m_subp):
224 pkglist = ['foobar', 'wark']
225 target = 'mytarget'
226 mode = 'install'
227 expected_calls = [
228 mock.call(['yum', '--assumeyes', '--quiet', 'install',
229 '--downloadonly', '--setopt=keepcache=1'] + pkglist,
230 env=None, retries=[1] * 10,
231 target=paths.target_path(target)),
232 mock.call(['yum', '--assumeyes', '--quiet', 'install',
233 '--cacheonly'] + pkglist, env=None,
234 target=paths.target_path(target))
235 ]
236
237 # call yum_install directly
238 distro.yum_install(mode, pkglist, target=target)
239 m_subp.assert_has_calls(expected_calls)
240
241 # call yum_install through run_yum_command
242 m_subp.reset()
243 distro.run_yum_command('install', pkglist, target=target)
244 m_subp.assert_has_calls(expected_calls)
245
246 # call yum_install through install_packages
247 m_subp.reset()
248 osfamily = distro.DISTROS.redhat
249 distro.install_packages(pkglist, osfamily=osfamily, target=target)
250 m_subp.assert_has_calls(expected_calls)
251
252
253class TestHasPkgAvailable(CiTestCase):
254
255 def setUp(self):
256 super(TestHasPkgAvailable, self).setUp()
257 self.package = 'foobar'
258 self.target = paths.target_path('mytarget')
259
260 @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a)
261 @mock.patch('curtin.distro.subp')
262 def test_has_pkg_available_debian(self, m_subp):
263 osfamily = distro.DISTROS.debian
264 m_subp.return_value = (self.package, '')
265 result = distro.has_pkg_available(self.package, self.target, osfamily)
266 self.assertTrue(result)
267 m_subp.assert_has_calls([mock.call(['apt-cache', 'pkgnames'],
268 capture=True,
269 target=self.target)])
270
271 @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a)
272 @mock.patch('curtin.distro.subp')
273 def test_has_pkg_available_debian_returns_false_not_avail(self, m_subp):
274 pkg = 'wark'
275 osfamily = distro.DISTROS.debian
276 m_subp.return_value = (pkg, '')
277 result = distro.has_pkg_available(self.package, self.target, osfamily)
278 self.assertEqual(pkg == self.package, result)
279 m_subp.assert_has_calls([mock.call(['apt-cache', 'pkgnames'],
280 capture=True,
281 target=self.target)])
282
283 @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a)
284 @mock.patch('curtin.distro.run_yum_command')
285 def test_has_pkg_available_redhat(self, m_subp):
286 osfamily = distro.DISTROS.redhat
287 m_subp.return_value = (self.package, '')
288 result = distro.has_pkg_available(self.package, self.target, osfamily)
289 self.assertTrue(result)
290 m_subp.assert_has_calls([mock.call('list', opts=['--cacheonly'])])
291
292 @mock.patch.object(util.ChrootableTarget, "__enter__", new=lambda a: a)
293 @mock.patch('curtin.distro.run_yum_command')
294 def test_has_pkg_available_redhat_returns_false_not_avail(self, m_subp):
295 pkg = 'wark'
296 osfamily = distro.DISTROS.redhat
297 m_subp.return_value = (pkg, '')
298 result = distro.has_pkg_available(self.package, self.target, osfamily)
299 self.assertEqual(pkg == self.package, result)
300 m_subp.assert_has_calls([mock.call('list', opts=['--cacheonly'])])
301
302# vi: ts=4 expandtab syntax=python
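Note: the new test_distro.py above is the main coverage for the curtin.distro module that the rest of this diff migrates to. A short, hedged usage sketch limited to the helpers the tests exercise (get_distroinfo/get_osfamily derive their answer from os-release data; 'ubuntu' is assumed here to be one of distro.DISTRO_NAMES):

    from curtin import distro

    info = distro.get_distroinfo()          # has .variant and .family
    osfamily = distro.get_osfamily()        # the .family value used by curthooks

    variant = distro.name_to_distro('ubuntu')
    family = distro.DISTRO_TO_OSFAMILY[variant]
    # On an Ubuntu host this is expected to resolve to the debian family,
    # i.e. distro.DISTROS.debian, per the OS_FAMILIES mapping tested above.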
diff --git a/tests/unittests/test_feature.py b/tests/unittests/test_feature.py
index c62e0cd..7c55882 100644
--- a/tests/unittests/test_feature.py
+++ b/tests/unittests/test_feature.py
@@ -21,4 +21,7 @@ class TestExportsFeatures(CiTestCase):
     def test_has_centos_apply_network_config(self):
         self.assertIn('CENTOS_APPLY_NETWORK_CONFIG', curtin.FEATURES)

+    def test_has_centos_curthook_support(self):
+        self.assertIn('CENTOS_CURTHOOK_SUPPORT', curtin.FEATURES)
+
 # vi: ts=4 expandtab syntax=python
diff --git a/tests/unittests/test_pack.py b/tests/unittests/test_pack.py
index 1aae456..cb0b135 100644
--- a/tests/unittests/test_pack.py
+++ b/tests/unittests/test_pack.py
@@ -97,6 +97,8 @@ class TestPack(TestCase):
             }}

         out, err, rc, log_contents = self.run_install(cfg)
+        print("out=%s" % out)
+        print("err=%s" % err)

         # the version string and users command output should be in output
         self.assertIn(version.version_string(), out)
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 7fb332d..a64be16 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -4,10 +4,10 @@ from unittest import skipIf
 import mock
 import os
 import stat
-import sys
 from textwrap import dedent

 from curtin import util
+from curtin import paths
 from .helpers import CiTestCase, simple_mocked_open


@@ -104,48 +104,6 @@ class TestWhich(CiTestCase):
         self.assertEqual(found, "/usr/bin2/fuzz")


107class TestLsbRelease(CiTestCase):
108
109 def setUp(self):
110 super(TestLsbRelease, self).setUp()
111 self._reset_cache()
112
113 def _reset_cache(self):
114 keys = [k for k in util._LSB_RELEASE.keys()]
115 for d in keys:
116 del util._LSB_RELEASE[d]
117
118 @mock.patch("curtin.util.subp")
119 def test_lsb_release_functional(self, mock_subp):
120 output = '\n'.join([
121 "Distributor ID: Ubuntu",
122 "Description: Ubuntu 14.04.2 LTS",
123 "Release: 14.04",
124 "Codename: trusty",
125 ])
126 rdata = {'id': 'Ubuntu', 'description': 'Ubuntu 14.04.2 LTS',
127 'codename': 'trusty', 'release': '14.04'}
128
129 def fake_subp(cmd, capture=False, target=None):
130 return output, 'No LSB modules are available.'
131
132 mock_subp.side_effect = fake_subp
133 found = util.lsb_release()
134 mock_subp.assert_called_with(
135 ['lsb_release', '--all'], capture=True, target=None)
136 self.assertEqual(found, rdata)
137
138 @mock.patch("curtin.util.subp")
139 def test_lsb_release_unavailable(self, mock_subp):
140 def doraise(*args, **kwargs):
141 raise util.ProcessExecutionError("foo")
142 mock_subp.side_effect = doraise
143
144 expected = {k: "UNAVAILABLE" for k in
145 ('id', 'description', 'codename', 'release')}
146 self.assertEqual(util.lsb_release(), expected)
147
148
 class TestSubp(CiTestCase):

     stdin2err = ['bash', '-c', 'cat >&2']
@@ -312,7 +270,7 @@ class TestSubp(CiTestCase):
         # if target is not provided or is /, chroot should not be used
         calls = m_popen.call_args_list
         popen_args, popen_kwargs = calls[-1]
-        target = util.target_path(kwargs.get('target', None))
+        target = paths.target_path(kwargs.get('target', None))
         unshcmd = self.mock_get_unshare_pid_args.return_value
         if target == "/":
             self.assertEqual(unshcmd + list(cmd), popen_args[0])
@@ -554,44 +512,44 @@ class TestSetUnExecutable(CiTestCase):

 class TestTargetPath(CiTestCase):
     def test_target_empty_string(self):
-        self.assertEqual("/etc/passwd", util.target_path("", "/etc/passwd"))
+        self.assertEqual("/etc/passwd", paths.target_path("", "/etc/passwd"))

     def test_target_non_string_raises(self):
-        self.assertRaises(ValueError, util.target_path, False)
-        self.assertRaises(ValueError, util.target_path, 9)
-        self.assertRaises(ValueError, util.target_path, True)
+        self.assertRaises(ValueError, paths.target_path, False)
+        self.assertRaises(ValueError, paths.target_path, 9)
+        self.assertRaises(ValueError, paths.target_path, True)

     def test_lots_of_slashes_is_slash(self):
-        self.assertEqual("/", util.target_path("/"))
-        self.assertEqual("/", util.target_path("//"))
-        self.assertEqual("/", util.target_path("///"))
-        self.assertEqual("/", util.target_path("////"))
+        self.assertEqual("/", paths.target_path("/"))
+        self.assertEqual("/", paths.target_path("//"))
+        self.assertEqual("/", paths.target_path("///"))
+        self.assertEqual("/", paths.target_path("////"))

     def test_empty_string_is_slash(self):
-        self.assertEqual("/", util.target_path(""))
+        self.assertEqual("/", paths.target_path(""))

     def test_recognizes_relative(self):
-        self.assertEqual("/", util.target_path("/foo/../"))
-        self.assertEqual("/", util.target_path("/foo//bar/../../"))
+        self.assertEqual("/", paths.target_path("/foo/../"))
+        self.assertEqual("/", paths.target_path("/foo//bar/../../"))

     def test_no_path(self):
-        self.assertEqual("/my/target", util.target_path("/my/target"))
+        self.assertEqual("/my/target", paths.target_path("/my/target"))

     def test_no_target_no_path(self):
-        self.assertEqual("/", util.target_path(None))
+        self.assertEqual("/", paths.target_path(None))

     def test_no_target_with_path(self):
-        self.assertEqual("/my/path", util.target_path(None, "/my/path"))
+        self.assertEqual("/my/path", paths.target_path(None, "/my/path"))

     def test_trailing_slash(self):
         self.assertEqual("/my/target/my/path",
-                         util.target_path("/my/target/", "/my/path"))
+                         paths.target_path("/my/target/", "/my/path"))

     def test_bunch_of_slashes_in_path(self):
         self.assertEqual("/target/my/path/",
-                         util.target_path("/target/", "//my/path/"))
+                         paths.target_path("/target/", "//my/path/"))
         self.assertEqual("/target/my/path/",
-                         util.target_path("/target/", "///my/path/"))
+                         paths.target_path("/target/", "///my/path/"))


 class TestRunInChroot(CiTestCase):
@@ -1036,65 +994,4 @@ class TestLoadKernelModule(CiTestCase):
         self.assertEqual(0, self.m_subp.call_count)


1039class TestParseDpkgVersion(CiTestCase):
1040 """test parse_dpkg_version."""
1041
1042 def test_none_raises_type_error(self):
1043 self.assertRaises(TypeError, util.parse_dpkg_version, None)
1044
1045 @skipIf(sys.version_info.major < 3, "python 2 bytes are strings.")
1046 def test_bytes_raises_type_error(self):
1047 self.assertRaises(TypeError, util.parse_dpkg_version, b'1.2.3-0')
1048
1049 def test_simple_native_package_version(self):
1050 """dpkg versions must have a -. If not present expect value error."""
1051 self.assertEqual(
1052 {'major': 2, 'minor': 28, 'micro': 0, 'extra': None,
1053 'raw': '2.28', 'upstream': '2.28', 'name': 'germinate',
1054 'semantic_version': 22800},
1055 util.parse_dpkg_version('2.28', name='germinate'))
1056
1057 def test_complex_native_package_version(self):
1058 dver = '1.0.106ubuntu2+really1.0.97ubuntu1'
1059 self.assertEqual(
1060 {'major': 1, 'minor': 0, 'micro': 106,
1061 'extra': 'ubuntu2+really1.0.97ubuntu1',
1062 'raw': dver, 'upstream': dver, 'name': 'debootstrap',
1063 'semantic_version': 100106},
1064 util.parse_dpkg_version(dver, name='debootstrap',
1065 semx=(100000, 1000, 1)))
1066
1067 def test_simple_valid(self):
1068 self.assertEqual(
1069 {'major': 1, 'minor': 2, 'micro': 3, 'extra': None,
1070 'raw': '1.2.3-0', 'upstream': '1.2.3', 'name': 'foo',
1071 'semantic_version': 10203},
1072 util.parse_dpkg_version('1.2.3-0', name='foo'))
1073
1074 def test_simple_valid_with_semx(self):
1075 self.assertEqual(
1076 {'major': 1, 'minor': 2, 'micro': 3, 'extra': None,
1077 'raw': '1.2.3-0', 'upstream': '1.2.3',
1078 'semantic_version': 123},
1079 util.parse_dpkg_version('1.2.3-0', semx=(100, 10, 1)))
1080
1081 def test_upstream_with_hyphen(self):
1082 """upstream versions may have a hyphen."""
1083 cver = '18.2-14-g6d48d265-0ubuntu1'
1084 self.assertEqual(
1085 {'major': 18, 'minor': 2, 'micro': 0, 'extra': '-14-g6d48d265',
1086 'raw': cver, 'upstream': '18.2-14-g6d48d265',
1087 'name': 'cloud-init', 'semantic_version': 180200},
1088 util.parse_dpkg_version(cver, name='cloud-init'))
1089
1090 def test_upstream_with_plus(self):
1091 """multipath tools has a + in it."""
1092 mver = '0.5.0+git1.656f8865-5ubuntu2.5'
1093 self.assertEqual(
1094 {'major': 0, 'minor': 5, 'micro': 0, 'extra': '+git1.656f8865',
1095 'raw': mver, 'upstream': '0.5.0+git1.656f8865',
1096 'semantic_version': 500},
1097 util.parse_dpkg_version(mver))
1098
1099
1100# vi: ts=4 expandtab syntax=python997# vi: ts=4 expandtab syntax=python
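
The TestParseDpkgVersion cases removed above track parse_dpkg_version as it moves out of curtin.util; the expectations they encode are useful to keep in mind while reading the rest of the refactor. In particular, the 'semantic_version' field is just a weighted sum of the parsed major/minor/micro components. A minimal sketch of that arithmetic, derived only from the assertions above (this is not curtin's implementation):

    # Sketch of the semantic_version arithmetic implied by the removed tests.
    # Default weights are inferred from '1.2.3-0' -> 10203 and '2.28' -> 22800.
    def semantic_version(major, minor, micro, semx=(10000, 100, 1)):
        return major * semx[0] + minor * semx[1] + micro * semx[2]

    assert semantic_version(1, 2, 3) == 10203                    # test_simple_valid
    assert semantic_version(2, 28, 0) == 22800                   # native package version
    assert semantic_version(1, 2, 3, semx=(100, 10, 1)) == 123   # test_simple_valid_with_semx
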
diff --git a/tests/vmtests/__init__.py b/tests/vmtests/__init__.py
index bd159c4..7e31491 100644
--- a/tests/vmtests/__init__.py
+++ b/tests/vmtests/__init__.py
@@ -493,18 +493,67 @@ def skip_by_date(bugnum, fixby, removeby=None, skips=None, install=True):
493 return decorator493 return decorator
494494
495495
496DEFAULT_COLLECT_SCRIPTS = {
497 'common': [textwrap.dedent("""
498 cd OUTPUT_COLLECT_D
499 cp /etc/fstab ./fstab
500 cp -a /etc/udev/rules.d ./udev_rules.d
501 ifconfig -a | cat >ifconfig_a
502 ip a | cat >ip_a
503 cp -a /var/log/messages .
504 cp -a /var/log/syslog .
505 cp -a /var/log/cloud-init* .
506 cp -a /var/lib/cloud ./var_lib_cloud
507 cp -a /run/cloud-init ./run_cloud-init
508 cp -a /proc/cmdline ./proc_cmdline
509 cp -a /proc/mounts ./proc_mounts
510 cp -a /proc/partitions ./proc_partitions
511 cp -a /proc/swaps ./proc-swaps
512 # ls -al /dev/disk/*
513 mkdir -p /dev/disk/by-dname
514 ls /dev/disk/by-dname/ | cat >ls_dname
515 ls -al /dev/disk/by-dname/ | cat >ls_al_bydname
516 ls -al /dev/disk/by-id/ | cat >ls_al_byid
517 ls -al /dev/disk/by-uuid/ | cat >ls_al_byuuid
518 blkid -o export | cat >blkid.out
519 find /boot | cat > find_boot.out
520 [ -e /sys/firmware/efi ] && {
521 efibootmgr -v | cat >efibootmgr.out;
522 }
523 """)],
524 'centos': [textwrap.dedent("""
525 # XXX: command | cat >output is required for Centos under SELinux
526 # http://danwalsh.livejournal.com/22860.html
527 cd OUTPUT_COLLECT_D
528 rpm -qa | cat >rpm_qa
529 cp -a /etc/sysconfig/network-scripts .
530 rpm -q --queryformat '%{VERSION}\n' cloud-init |tee rpm_ci_version
531 rpm -E '%rhel' > rpm_dist_version_major
532 cp -a /etc/centos-release .
533 """)],
534 'ubuntu': [textwrap.dedent("""
535 cd OUTPUT_COLLECT_D
536 dpkg-query --show \
537 --showformat='${db:Status-Abbrev}\t${Package}\t${Version}\n' \
538 > debian-packages.txt 2> debian-packages.txt.err
539 cp -av /etc/network/interfaces .
540 cp -av /etc/network/interfaces.d .
541 find /etc/network/interfaces.d > find_interfacesd
542 v=""
543 out=$(apt-config shell v Acquire::HTTP::Proxy)
544 eval "$out"
545 echo "$v" > apt-proxy
546 """)]
547}
548
549
496class VMBaseClass(TestCase):550class VMBaseClass(TestCase):
497 __test__ = False551 __test__ = False
498 expected_failure = False552 expected_failure = False
499 arch_skip = []553 arch_skip = []
500 boot_timeout = BOOT_TIMEOUT554 boot_timeout = BOOT_TIMEOUT
501 collect_scripts = [textwrap.dedent("""555 collect_scripts = []
502 cd OUTPUT_COLLECT_D556 extra_collect_scripts = []
503 dpkg-query --show \
504 --showformat='${db:Status-Abbrev}\t${Package}\t${Version}\n' \
505 > debian-packages.txt 2> debian-packages.txt.err
506 cat /proc/swaps > proc-swaps
507 """)]
508 conf_file = "examples/tests/basic.yaml"557 conf_file = "examples/tests/basic.yaml"
509 nr_cpus = None558 nr_cpus = None
510 dirty_disks = False559 dirty_disks = False
@@ -528,6 +577,10 @@ class VMBaseClass(TestCase):
528 conf_replace = {}577 conf_replace = {}
529 uefi = False578 uefi = False
530 proxy = None579 proxy = None
580 url_map = {
581 '/MAAS/api/version/': '2.0',
582 '/MAAS/api/2.0/version/':
583 json.dumps({'version': '2.5.0+curtin-vmtest'})}
531584
532 # these get set from base_vm_classes585 # these get set from base_vm_classes
533 release = None586 release = None
@@ -773,6 +826,16 @@ class VMBaseClass(TestCase):
773 cls.arch)826 cls.arch)
774 raise SkipTest(reason)827 raise SkipTest(reason)
775828
829 # assign default collect scripts
830 if not cls.collect_scripts:
831 cls.collect_scripts = (
832 DEFAULT_COLLECT_SCRIPTS['common'] +
833 DEFAULT_COLLECT_SCRIPTS[cls.target_distro])
834
835 # append extra from subclass
836 if cls.extra_collect_scripts:
837 cls.collect_scripts.extend(cls.extra_collect_scripts)
838
776 setup_start = time.time()839 setup_start = time.time()
777 logger.info(840 logger.info(
778 ('Starting setup for testclass: {__name__} '841 ('Starting setup for testclass: {__name__} '
@@ -994,7 +1057,8 @@ class VMBaseClass(TestCase):
9941057
995 # set reporting logger1058 # set reporting logger
996 cls.reporting_log = os.path.join(cls.td.logs, 'webhooks-events.json')1059 cls.reporting_log = os.path.join(cls.td.logs, 'webhooks-events.json')
997 reporting_logger = CaptureReporting(cls.reporting_log)1060 reporting_logger = CaptureReporting(cls.reporting_log,
1061 url_mapping=cls.url_map)
9981062
999 # write reporting config1063 # write reporting config
1000 reporting_config = os.path.join(cls.td.install, 'reporting.cfg')1064 reporting_config = os.path.join(cls.td.install, 'reporting.cfg')
@@ -1442,6 +1506,8 @@ class VMBaseClass(TestCase):
1442 if self.target_release == "trusty":1506 if self.target_release == "trusty":
1443 raise SkipTest(1507 raise SkipTest(
1444 "(LP: #1523037): dname does not work on trusty kernels")1508 "(LP: #1523037): dname does not work on trusty kernels")
1509 if self.target_distro != "ubuntu":
1510 raise SkipTest("dname not present in non-ubuntu releases")
14451511
1446 if not disk_to_check:1512 if not disk_to_check:
1447 disk_to_check = self.disk_to_check1513 disk_to_check = self.disk_to_check
@@ -1449,11 +1515,9 @@ class VMBaseClass(TestCase):
1449 logger.debug('test_dname: no disks to check')1515 logger.debug('test_dname: no disks to check')
1450 return1516 return
1451 logger.debug('test_dname: checking disks: %s', disk_to_check)1517 logger.debug('test_dname: checking disks: %s', disk_to_check)
1452 path = self.collect_path("ls_dname")1518 self.output_files_exist(["ls_dname"])
1453 if not os.path.exists(path):1519
1454 logger.debug('test_dname: no "ls_dname" file: %s', path)1520 contents = self.load_collect_file("ls_dname")
1455 return
1456 contents = util.load_file(path)
1457 for diskname, part in self.disk_to_check:1521 for diskname, part in self.disk_to_check:
1458 if part is not 0:1522 if part is not 0:
1459 link = diskname + "-part" + str(part)1523 link = diskname + "-part" + str(part)
@@ -1485,6 +1549,9 @@ class VMBaseClass(TestCase):
1485 """ Check that curtin has removed /etc/network/interfaces.d/eth0.cfg1549 """ Check that curtin has removed /etc/network/interfaces.d/eth0.cfg
1486 by examining the output of a find /etc/network > find_interfaces.d1550 by examining the output of a find /etc/network > find_interfaces.d
1487 """1551 """
1552 # target_distro is set for non-ubuntu targets
1553 if self.target_distro != 'ubuntu':
1554 raise SkipTest("eni/ifupdown not present in non-ubuntu releases")
1488 interfacesd = self.load_collect_file("find_interfacesd")1555 interfacesd = self.load_collect_file("find_interfacesd")
1489 self.assertNotIn("/etc/network/interfaces.d/eth0.cfg",1556 self.assertNotIn("/etc/network/interfaces.d/eth0.cfg",
1490 interfacesd.split("\n"))1557 interfacesd.split("\n"))
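
With DEFAULT_COLLECT_SCRIPTS in place, test classes no longer copy the base collection script: setUpClass assembles collect_scripts from the 'common' scripts plus the per-distro entry for cls.target_distro, then appends anything in extra_collect_scripts. A sketch of what a subclass looks like under the new scheme (the class name, YAML path and extra commands are illustrative only):

    import textwrap

    from tests.vmtests import VMBaseClass

    class ExampleStorageTest(VMBaseClass):
        conf_file = "examples/tests/basic.yaml"
        # collect_scripts is left empty so setUpClass fills in
        # DEFAULT_COLLECT_SCRIPTS['common'] + DEFAULT_COLLECT_SCRIPTS[target_distro].
        extra_collect_scripts = [textwrap.dedent("""
            cd OUTPUT_COLLECT_D
            lsblk | cat > lsblk.out
            """)]

In practice the release and target_distro attributes come from mixing in one of the release base classes, as the vmtest modules further down do.
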
diff --git a/tests/vmtests/helpers.py b/tests/vmtests/helpers.py
index 10e20b3..6dddcc6 100644
--- a/tests/vmtests/helpers.py
+++ b/tests/vmtests/helpers.py
@@ -2,6 +2,7 @@
2# This file is part of curtin. See LICENSE file for copyright and license info.2# This file is part of curtin. See LICENSE file for copyright and license info.
33
4import os4import os
5import re
5import subprocess6import subprocess
6import signal7import signal
7import threading8import threading
@@ -86,7 +87,26 @@ def check_call(cmd, signal=signal.SIGTERM, **kwargs):
86 return Command(cmd, signal).run(**kwargs)87 return Command(cmd, signal).run(**kwargs)
8788
8889
89def find_testcases():90def find_testcases_by_attr(**kwargs):
91 class_match = set()
92 for test_case in find_testcases(**kwargs):
93 tc_name = str(test_case.__class__)
94 full_path = tc_name.split("'")[1].split(".")
95 class_name = full_path[-1]
96 if class_name in class_match:
97 continue
98 class_match.add(class_name)
99 filename = "/".join(full_path[0:-1]) + ".py"
100 yield "%s:%s" % (filename, class_name)
101
102
103def _attr_match(pattern, value):
104 if not value:
105 return False
106 return re.match(pattern, str(value))
107
108
109def find_testcases(**kwargs):
90 # Use the TestLoder to load all test cases defined within tests/vmtests/110 # Use the TestLoder to load all test cases defined within tests/vmtests/
91 # and figure out what distros and releases they are testing. Any tests111 # and figure out what distros and releases they are testing. Any tests
92 # which are disabled will be excluded.112 # which are disabled will be excluded.
@@ -97,12 +117,19 @@ def find_testcases():
97 root_dir = os.path.split(os.path.split(tests_dir)[0])[0]117 root_dir = os.path.split(os.path.split(tests_dir)[0])[0]
98 # Find all test modules defined in curtin/tests/vmtests/118 # Find all test modules defined in curtin/tests/vmtests/
99 module_test_suites = loader.discover(tests_dir, top_level_dir=root_dir)119 module_test_suites = loader.discover(tests_dir, top_level_dir=root_dir)
120 filter_attrs = [attr for attr, value in kwargs.items() if value]
100 for mts in module_test_suites:121 for mts in module_test_suites:
101 for class_test_suite in mts:122 for class_test_suite in mts:
102 for test_case in class_test_suite:123 for test_case in class_test_suite:
103 # skip disabled tests124 # skip disabled tests
104 if not getattr(test_case, '__test__', False):125 if not getattr(test_case, '__test__', False):
105 continue126 continue
127 # compare each filter attr with the specified value
128 tcmatch = [not _attr_match(kwargs[attr],
129 getattr(test_case, attr, False))
130 for attr in filter_attrs]
131 if any(tcmatch):
132 continue
106 yield test_case133 yield test_case
107134
108135
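
find_testcases_by_attr builds on the extended find_testcases: each keyword argument names a test-class attribute, the value is treated as a regular expression, and only classes whose attributes match every filter are yielded, as 'path/to/module.py:ClassName' strings with duplicates suppressed. A usage sketch (the attribute values are illustrative; any attribute set on the test classes, such as release, target_distro or test_type, can be used):

    from tests.vmtests.helpers import find_testcases_by_attr

    # List the vmtest classes that exercise storage on CentOS targets.
    for spec in find_testcases_by_attr(target_distro='centos', test_type='storage'):
        print(spec)  # e.g. tests/vmtests/test_basic.py:SomeCentosTestBasic

Note the filters are positive matches: passing a falsy value for an attribute disables that filter rather than selecting classes where the attribute is unset.
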
diff --git a/tests/vmtests/image_sync.py b/tests/vmtests/image_sync.py
index e2cedc1..69c19ef 100644
--- a/tests/vmtests/image_sync.py
+++ b/tests/vmtests/image_sync.py
@@ -30,7 +30,9 @@ IMAGE_SRC_URL = os.environ.get(
30 "http://maas.ubuntu.com/images/ephemeral-v3/daily/streams/v1/index.sjson")30 "http://maas.ubuntu.com/images/ephemeral-v3/daily/streams/v1/index.sjson")
31IMAGE_DIR = os.environ.get("IMAGE_DIR", "/srv/images")31IMAGE_DIR = os.environ.get("IMAGE_DIR", "/srv/images")
3232
33KEYRING = '/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg'33KEYRING = os.environ.get(
34 'IMAGE_SRC_KEYRING',
35 '/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg')
34ITEM_NAME_FILTERS = \36ITEM_NAME_FILTERS = \
35 ['ftype~(boot-initrd|boot-kernel|root-tgz|squashfs)']37 ['ftype~(boot-initrd|boot-kernel|root-tgz|squashfs)']
36FORMAT_JSON = 'JSON'38FORMAT_JSON = 'JSON'
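
The keyring used to validate the simplestreams data can now be overridden through the IMAGE_SRC_KEYRING environment variable, mirroring how IMAGE_SRC_URL and IMAGE_DIR are already handled. Because KEYRING is resolved at import time, the variable has to be set before the module is loaded; a short sketch (the path is illustrative):

    import os

    # Any keyring trusted for the configured IMAGE_SRC_URL will do here.
    os.environ['IMAGE_SRC_KEYRING'] = '/usr/share/keyrings/custom-simplestreams.gpg'

    from tests.vmtests import image_sync
    print(image_sync.KEYRING)   # -> /usr/share/keyrings/custom-simplestreams.gpg
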
diff --git a/tests/vmtests/releases.py b/tests/vmtests/releases.py
index 02cbfe5..7be8feb 100644
--- a/tests/vmtests/releases.py
+++ b/tests/vmtests/releases.py
@@ -131,8 +131,8 @@ class _Releases(object):
131131
132132
133class _CentosReleases(object):133class _CentosReleases(object):
134 centos70fromxenial = _Centos70FromXenialBase134 centos70_xenial = _Centos70FromXenialBase
135 centos66fromxenial = _Centos66FromXenialBase135 centos66_xenial = _Centos66FromXenialBase
136136
137137
138class _UbuntuCoreReleases(object):138class _UbuntuCoreReleases(object):
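
The CentOS release attributes drop the 'fromxenial' spelling in favour of the shorter '<release>_<backing release>' form, which is how the vmtest modules refer to them after this branch. For example (access path as used in test_basic.py below):

    from tests.vmtests.releases import centos_base_vm_classes as centos_relbase

    base = centos_relbase.centos70_xenial   # previously centos70fromxenial
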
diff --git a/tests/vmtests/report_webhook_logger.py b/tests/vmtests/report_webhook_logger.py
index e95397c..5e7d63b 100755
--- a/tests/vmtests/report_webhook_logger.py
+++ b/tests/vmtests/report_webhook_logger.py
@@ -76,7 +76,10 @@ class ServerHandler(http_server.SimpleHTTPRequestHandler):
76 self._message = None76 self._message = None
77 self.send_response(200)77 self.send_response(200)
78 self.end_headers()78 self.end_headers()
79 self.wfile.write(("content of %s\n" % self.path).encode('utf-8'))79 if self.url_mapping and self.path in self.url_mapping:
80 self.wfile.write(self.url_mapping[self.path].encode('utf-8'))
81 else:
82 self.wfile.write(("content of %s\n" % self.path).encode('utf-8'))
8083
81 def do_POST(self):84 def do_POST(self):
82 length = int(self.headers['Content-Length'])85 length = int(self.headers['Content-Length'])
@@ -96,13 +99,14 @@ class ServerHandler(http_server.SimpleHTTPRequestHandler):
96 self.wfile.write(msg.encode('utf-8'))99 self.wfile.write(msg.encode('utf-8'))
97100
98101
99def GenServerHandlerWithResultFile(file_path):102def GenServerHandlerWithResultFile(file_path, url_map):
100 class ExtendedServerHandler(ServerHandler):103 class ExtendedServerHandler(ServerHandler):
101 result_log_file = file_path104 result_log_file = file_path
105 url_mapping = url_map
102 return ExtendedServerHandler106 return ExtendedServerHandler
103107
104108
105def get_httpd(port=None, result_file=None):109def get_httpd(port=None, result_file=None, url_mapping=None):
106 # avoid 'Address already in use' after ctrl-c110 # avoid 'Address already in use' after ctrl-c
107 socketserver.TCPServer.allow_reuse_address = True111 socketserver.TCPServer.allow_reuse_address = True
108112
@@ -111,7 +115,7 @@ def get_httpd(port=None, result_file=None):
111 port = 0115 port = 0
112116
113 if result_file:117 if result_file:
114 Handler = GenServerHandlerWithResultFile(result_file)118 Handler = GenServerHandlerWithResultFile(result_file, url_mapping)
115 else:119 else:
116 Handler = ServerHandler120 Handler = ServerHandler
117 httpd = HTTPServerV6(("::", port), Handler)121 httpd = HTTPServerV6(("::", port), Handler)
@@ -143,10 +147,11 @@ def run_server(port=DEFAULT_PORT, log_data=True):
143147
144class CaptureReporting:148class CaptureReporting:
145149
146 def __init__(self, result_file):150 def __init__(self, result_file, url_mapping=None):
151 self.url_mapping = url_mapping
147 self.result_file = result_file152 self.result_file = result_file
148 self.httpd = get_httpd(result_file=self.result_file,153 self.httpd = get_httpd(result_file=self.result_file,
149 port=None)154 port=None, url_mapping=self.url_mapping)
150 self.httpd.server_activate()155 self.httpd.server_activate()
151 # socket.AF_INET6 returns156 # socket.AF_INET6 returns
152 # (host, port, flowinfo, scopeid)157 # (host, port, flowinfo, scopeid)
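
CaptureReporting (and the handler classes beneath it) gain an optional url_mapping: GET requests whose path is a key in the mapping get the mapped string back verbatim, and everything else falls through to the old "content of <path>" response. This is what lets VMBaseClass.url_map stand in for the MAAS version endpoints during vmtest runs. A construction sketch (the result-file path is illustrative):

    import json

    from tests.vmtests.report_webhook_logger import CaptureReporting

    url_map = {
        '/MAAS/api/version/': '2.0',
        '/MAAS/api/2.0/version/': json.dumps({'version': '2.5.0+curtin-vmtest'}),
    }
    reporter = CaptureReporting('/tmp/webhook-events.json', url_mapping=url_map)
    # POSTed reporting events still land in the result file; GETs for the two
    # mapped paths now return the canned MAAS version payloads.
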
diff --git a/tests/vmtests/test_apt_config_cmd.py b/tests/vmtests/test_apt_config_cmd.py
index efd04f3..f9b6a09 100644
--- a/tests/vmtests/test_apt_config_cmd.py
+++ b/tests/vmtests/test_apt_config_cmd.py
@@ -12,16 +12,14 @@ from .releases import base_vm_classes as relbase
1212
13class TestAptConfigCMD(VMBaseClass):13class TestAptConfigCMD(VMBaseClass):
14 """TestAptConfigCMD - test standalone command"""14 """TestAptConfigCMD - test standalone command"""
15 test_type = 'config'
15 conf_file = "examples/tests/apt_config_command.yaml"16 conf_file = "examples/tests/apt_config_command.yaml"
16 interactive = False17 interactive = False
17 extra_disks = []18 extra_disks = []
18 fstab_expected = {}19 fstab_expected = {}
19 disk_to_check = []20 disk_to_check = []
20 collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent("""21 extra_collect_scripts = [textwrap.dedent("""
21 cd OUTPUT_COLLECT_D22 cd OUTPUT_COLLECT_D
22 cat /etc/fstab > fstab
23 ls /dev/disk/by-dname > ls_dname
24 find /etc/network/interfaces.d > find_interfacesd
25 cp /etc/apt/sources.list.d/curtin-dev-ubuntu-test-archive-*.list .23 cp /etc/apt/sources.list.d/curtin-dev-ubuntu-test-archive-*.list .
26 cp /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg .24 cp /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg .
27 apt-cache policy | grep proposed > proposed-enabled25 apt-cache policy | grep proposed > proposed-enabled
diff --git a/tests/vmtests/test_apt_source.py b/tests/vmtests/test_apt_source.py
index f34913a..bb502b2 100644
--- a/tests/vmtests/test_apt_source.py
+++ b/tests/vmtests/test_apt_source.py
@@ -14,15 +14,13 @@ from curtin import util
1414
15class TestAptSrcAbs(VMBaseClass):15class TestAptSrcAbs(VMBaseClass):
16 """TestAptSrcAbs - Basic tests for apt features of curtin"""16 """TestAptSrcAbs - Basic tests for apt features of curtin"""
17 test_type = 'config'
17 interactive = False18 interactive = False
18 extra_disks = []19 extra_disks = []
19 fstab_expected = {}20 fstab_expected = {}
20 disk_to_check = []21 disk_to_check = []
21 collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent("""22 extra_collect_scripts = [textwrap.dedent("""
22 cd OUTPUT_COLLECT_D23 cd OUTPUT_COLLECT_D
23 cat /etc/fstab > fstab
24 ls /dev/disk/by-dname > ls_dname
25 find /etc/network/interfaces.d > find_interfacesd
26 apt-key list "F430BBA5" > keyid-F430BBA524 apt-key list "F430BBA5" > keyid-F430BBA5
27 apt-key list "0165013E" > keyppa-0165013E25 apt-key list "0165013E" > keyppa-0165013E
28 apt-key list "F470A0AC" > keylongid-F470A0AC26 apt-key list "F470A0AC" > keylongid-F470A0AC
diff --git a/tests/vmtests/test_basic.py b/tests/vmtests/test_basic.py
index 01ffc89..54e3df8 100644
--- a/tests/vmtests/test_basic.py
+++ b/tests/vmtests/test_basic.py
@@ -4,12 +4,14 @@ from . import (
4 VMBaseClass,4 VMBaseClass,
5 get_apt_proxy)5 get_apt_proxy)
6from .releases import base_vm_classes as relbase6from .releases import base_vm_classes as relbase
7from .releases import centos_base_vm_classes as centos_relbase
78
8import textwrap9import textwrap
9from unittest import SkipTest10from unittest import SkipTest
1011
1112
12class TestBasicAbs(VMBaseClass):13class TestBasicAbs(VMBaseClass):
14 test_type = 'storage'
13 interactive = False15 interactive = False
14 nr_cpus = 216 nr_cpus = 2
15 dirty_disks = True17 dirty_disks = True
@@ -18,29 +20,18 @@ class TestBasicAbs(VMBaseClass):
18 nvme_disks = ['4G']20 nvme_disks = ['4G']
19 disk_to_check = [('main_disk_with_in---valid--dname', 1),21 disk_to_check = [('main_disk_with_in---valid--dname', 1),
20 ('main_disk_with_in---valid--dname', 2)]22 ('main_disk_with_in---valid--dname', 2)]
21 collect_scripts = VMBaseClass.collect_scripts + [textwrap.dedent("""23 extra_collect_scripts = [textwrap.dedent("""
22 cd OUTPUT_COLLECT_D24 cd OUTPUT_COLLECT_D
23 blkid -o export /dev/vda > blkid_output_vda25 blkid -o export /dev/vda | cat >blkid_output_vda
24 blkid -o export /dev/vda1 > blkid_output_vda126 blkid -o export /dev/vda1 | cat >blkid_output_vda1
25 blkid -o export /dev/vda2 > blkid_output_vda227 blkid -o export /dev/vda2 | cat >blkid_output_vda2
26 dev="/dev/vdd"; f="btrfs_uuid_${dev#/dev/*}";28 dev="/dev/vdd"; f="btrfs_uuid_${dev#/dev/*}";
27 if command -v btrfs-debug-tree >/dev/null; then29 if command -v btrfs-debug-tree >/dev/null; then
28 btrfs-debug-tree -r $dev | awk '/^uuid/ {print $2}' | grep "-"30 btrfs-debug-tree -r $dev | awk '/^uuid/ {print $2}' | grep "-"
29 else31 else
30 btrfs inspect-internal dump-super $dev |32 btrfs inspect-internal dump-super $dev |
31 awk '/^dev_item.fsid/ {print $2}'33 awk '/^dev_item.fsid/ {print $2}'
32 fi > $f34 fi | cat >$f
33 cat /proc/partitions > proc_partitions
34 ls -al /dev/disk/by-uuid/ > ls_uuid
35 cat /etc/fstab > fstab
36 mkdir -p /dev/disk/by-dname
37 ls /dev/disk/by-dname/ > ls_dname
38 find /etc/network/interfaces.d > find_interfacesd
39
40 v=""
41 out=$(apt-config shell v Acquire::HTTP::Proxy)
42 eval "$out"
43 echo "$v" > apt-proxy
44 """)]35 """)]
4536
46 def _kname_to_uuid(self, kname):37 def _kname_to_uuid(self, kname):
@@ -48,7 +39,7 @@ class TestBasicAbs(VMBaseClass):
48 # parsing ls -al output on /dev/disk/by-uuid:39 # parsing ls -al output on /dev/disk/by-uuid:
49 # lrwxrwxrwx 1 root root 9 Dec 4 20:0240 # lrwxrwxrwx 1 root root 9 Dec 4 20:02
50 # d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg41 # d591e9e9-825a-4f0a-b280-3bfaf470b83c -> ../../vdg
51 ls_uuid = self.load_collect_file("ls_uuid")42 ls_uuid = self.load_collect_file("ls_al_byuuid")
52 uuid = [line.split()[8] for line in ls_uuid.split('\n')43 uuid = [line.split()[8] for line in ls_uuid.split('\n')
53 if ("../../" + kname) in line.split()]44 if ("../../" + kname) in line.split()]
54 self.assertEqual(len(uuid), 1)45 self.assertEqual(len(uuid), 1)
@@ -57,81 +48,99 @@ class TestBasicAbs(VMBaseClass):
57 self.assertEqual(len(uuid), 36)48 self.assertEqual(len(uuid), 36)
58 return uuid49 return uuid
5950
60 def test_output_files_exist(self):51 def _test_ptable(self, blkid_output, expected):
61 self.output_files_exist(
62 ["blkid_output_vda", "blkid_output_vda1", "blkid_output_vda2",
63 "btrfs_uuid_vdd", "fstab", "ls_dname", "ls_uuid",
64 "proc_partitions",
65 "root/curtin-install.log", "root/curtin-install-cfg.yaml"])
66
67 def test_ptable(self, disk_to_check=None):
68 if self.target_release == "trusty":52 if self.target_release == "trusty":
69 raise SkipTest("No PTTYPE blkid output on trusty")53 raise SkipTest("No PTTYPE blkid output on trusty")
7054
71 blkid_info = self.get_blkid_data("blkid_output_vda")55 if not blkid_output:
72 self.assertEquals(blkid_info["PTTYPE"], "dos")56 raise RuntimeError('_test_ptable requires blkid output file')
7357
74 def test_partition_numbers(self):58 if not expected:
75 # vde should have partitions 1 and 1059 raise RuntimeError('_test_ptable requires expected value')
76 disk = "vde"60
61 self.output_files_exist([blkid_output])
62 blkid_info = self.get_blkid_data(blkid_output)
63 self.assertEquals(expected, blkid_info["PTTYPE"])
The diff has been truncated for viewing.
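
The (truncated) test_basic.py hunk above folds the old per-class ptable assertions into a shared _test_ptable(blkid_output, expected) helper, which first checks that the collected blkid output file exists and then compares its PTTYPE against the expected label. A sketch of how a concrete class would call it; the subclass shown here is illustrative, only _test_ptable itself appears in the hunk:

    from tests.vmtests.test_basic import TestBasicAbs

    class ExampleBasicMBR(TestBasicAbs):
        __test__ = False  # illustration only; real classes mix in a release base

        def test_ptable(self):
            # "dos" matches the expectation the old test_ptable hard-coded.
            self._test_ptable("blkid_output_vda", "dos")
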
